“Safe Cracker” Final Project Complete! Michael Frankfort & Jill Sauer

Our idea has evolved from where we initially started, but our final project is now complete! As you can see from our previous blog posts, we’ve been working on the hardware and software along the way, and with some final additions and polish, our Safe Cracker game is finished!

Our final hardware consists of the Arduino Uno board, a tilt sensor, and a buzzer mounted inside the plastic case, with three buttons, a red LED, and a green LED mounted on the outside. Our homemade “safe cracking” controller hooks into a computer to play the game.

On the software side, we are using Processing to control the mechanics of the game, including the input from our hardware and the visuals that appear on-screen. We are using Pure Data to control the sounds of the game, based on the hardware input and on information passed from Processing to Pure Data. Finally, there is the Arduino firmware itself, which interfaces with the hardware we’ve assembled.
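
For the curious, the plumbing between these pieces is fairly simple. Here’s a minimal sketch of how the Processing side can read the controller and forward events to Pure Data, assuming the Arduino sends one byte per event over serial and Pd listens for OSC messages (the port numbers, address, and byte codes here are illustrative, not our exact values):

import processing.serial.*;
import oscP5.*;
import netP5.*;

Serial arduino;   // USB serial link to the controller
OscP5 osc;        // OSC client for talking to Pure Data
NetAddress pd;    // where Pd is listening

void setup() {
  arduino = new Serial(this, Serial.list()[0], 9600);
  osc = new OscP5(this, 12000);
  pd = new NetAddress("127.0.0.1", 9000);
}

void draw() {
  while (arduino.available() > 0) {
    int code = arduino.read();  // e.g. 1-3 = buttons, 4 = tilt (made-up codes)
    OscMessage m = new OscMessage("/safecracker/input");
    m.add(code);
    osc.send(m, pd);            // forward the event to Pure Data
  }
}

On the Pd side, an OSC-receiving object chain (for example, [udpreceive] and [unpackOSC] from the mrpeach externals) can pick these messages up.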

We really wanted to add sound to our project as another dimension of interactivity, another sense to add to the mix. Now that it has been implemented, we definitely think it improves the game and makes it more fun to play! When the game starts, the initial background music is the smooth jazz of the “Pink Panther” theme, reminiscent of mystery and spy movies, which goes well with our “safe cracking” game. However, as the timer counts down, the music fades from the “Pink Panther” to a more tense, fast-paced soundtrack to reflect that there isn’t much time left! Having these background sounds adds to the overall game experience.

Additionally, we have a few triggered sounds over the course of the gameplay. When the player presses one of the physical hardware buttons, not only does the image change on-screen to reflect which button has been pressed, but a button “click” sound now plays as well. Also, when a player successfully completes a level, a sound effect plays to indicate that they’ve completed the puzzle correctly. Last but not least, applause plays when the player has completed the final puzzle, indicating that they have won the game!
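
The timer-driven crossfade is simple at heart. One way to drive it from Processing, as a rough sketch (the timing values, message names, and the sendToPd() helper here are all hypothetical; the actual volume fading happens in Pure Data):

float timeLeft = 60;    // seconds left to crack the safe
float fadeStart = 20;   // when the tense track starts taking over

void updateMusic() {
  // 0.0 while there's plenty of time, ramping up to 1.0 as time runs out
  float tension = constrain(map(timeLeft, fadeStart, 0, 0, 1), 0, 1);
  sendToPd("/music/pinkpanther", 1 - tension);  // hypothetical helper that
  sendToPd("/music/tense", tension);            // sends a volume level to Pd
}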

As another part of the polish, we switched out the placeholder art for art that we made ourselves. The screen shows a safe, along with “safe cracking” equipment and the paper “cheat sheet” that tells the player how to solve the puzzle and crack the safe. The hand icon moves depending on which button the player has pressed, and the wrench moves when the player tilts the controller one way or the other. The black buttons also light up red if the player does something incorrectly, and there is now an official win screen to go along with the applause sound effect.

The video below shows the final implementation of the Safe Cracker game that we’ve created, with the multi-sensory experience, the hardware controller, and a pretty good demonstration of playing the game!


To see more of our process, check out some of our previous blog posts:

http://www.joshuarosenstock.com/teaching/IMGD3x00_B12/2012/12/02/final-project-idea-michael-frankfort/

http://www.joshuarosenstock.com/teaching/IMGD3x00_B12/2012/12/02/final-project-concepts-ideas/

http://www.joshuarosenstock.com/teaching/IMGD3x00_B12/2012/12/08/safe-cracker-part-1-final-project-michael-frankfort-jill-sauer/

http://www.joshuarosenstock.com/teaching/IMGD3x00_B12/2012/12/08/safe-cracker-part-2-final-project-michael-frankfort-jill-sauer/

http://www.joshuarosenstock.com/teaching/IMGD3x00_B12/2012/12/08/safe-cracker-part-3-the-code-final-project-michael-frankfort-jill-sauer/

http://www.joshuarosenstock.com/teaching/IMGD3x00_B12/2012/12/12/safe-cracker-part-4-hardware-polish-final-project-michael-frankfort-jill-sauer/

Final Project Concepts & Ideas

For our final project, Mike and I will be working together on an interactive game-like installation using Arduino and Processing. Our idea is to create a game experience where the controller itself is the game, like the “Simon” or “Bop-It” toys that we grew up with and that are still popular today. Most often in digital games, the “controller” – whether a handheld unit or a mouse-and-keyboard setup – is really just an interface between the player and the game, and how the player communicates and interacts with the game world. We would like to create an interactive experience where the controller – the interface – IS the game. The line between interface and game is already blurred in dance and rhythm games, of which there are many popular and diverse versions, and we aim to make something along those lines. It would be interesting to play with this idea as part of our metaphor: that we can blur the line between controller and game by engaging multiple senses between the player and the game machine.

Our core design ideas are that our project will have real-time interactivity and will encompass both physical components – the Arduino and sensors that the players can touch and feel – and digital components that work together and receive input from different methods of interaction. We would also like to include audio and visual components, engaging multiple senses of the user in addition to the tactile sense of handling the game/controller itself. The components we have to work with are an Arduino board, a tilt sensor, a buzzer, colored LEDs, and push buttons.

Both of us have worked with Arduino on previous projects, but we should research Arduino in relation to exactly what we would like to use it for. We also need to research how Arduino and Processing can work with each other, and whether there are any limitations to what we can do when using them together. We should also find out the physical and digital limitations of our components, if there are any, so that we are working within the scope of what our parts can accomplish and can design with these things in mind.

To bring all of this together, we will create a handheld device that will be the “controller,” with the Arduino board, push buttons, LEDs, and tilt sensor incorporated. This will be connected via USB to a computer running the Processing part of the game, which will provide the audio and visual interaction. We will have to research how to do sound with Processing, and see if it is possible to do stereo sound in Processing, which would be another level of interactivity and feedback we could play with. The Processing component will keep track of all of the sensory input from the Arduino handheld controller, and we will have to program the designated outputs for the interactivity of the controller. Once we have the hardware functionally working and have started on the Processing side, we can work on getting the two to work together and polish it until it is the game experience we are aiming for.
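
On the stereo question specifically: Processing’s Minim library can pan a sound between the left and right channels, so stereo feedback should be possible. A minimal sketch of the idea, assuming a placeholder sound file:

import ddf.minim.*;

Minim minim;
AudioPlayer player;

void setup() {
  size(400, 400);
  minim = new Minim(this);
  player = minim.loadFile("click.wav");  // placeholder file name
  player.loop();
}

void draw() {
  // pan from full left (-1) to full right (+1) based on the mouse
  player.setPan(map(mouseX, 0, width, -1, 1));
}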

Project 2: Cat Meme Drum Machine

For my second project, I created a video- and sound-remixing drum machine using Pure Data, built around the “meme of memes” on the internet: cat videos. I chose five viral cat videos, some of the most popular on YouTube, and used Pure Data to create a drum machine that remixes them together with accompanying sound.

I primarily used the ‘switch’ function in Pure Data to change between the five different videos, and I made the drum machine interactive in different ways to produce different remix effects. Pressing the space bar starts the video remix, which begins by simply switching between two videos with a cat-meow drum beat. The user can then use the bracket keys [ ] to increase or decrease the tempo, which also speeds up or slows down the video switching. The s, d, and f keys are linked to three other videos; when pressed, each activates a different cat-meow sound and causes its video to be remixed into the visuals. Finally, pressing the a key starts a random function that randomly adds the three s, d, and f videos into the remix.
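
The patch itself is a visual Pure Data program, but the key-to-action routing behaves roughly like this Processing-style sketch (the function names are placeholders standing in for the corresponding Pd subpatches):

void keyPressed() {
  if (key == ' ')      startRemix();       // two-video loop + meow beat
  else if (key == '[') changeTempo(-10);   // slow the beat and the switching
  else if (key == ']') changeTempo(+10);   // speed them up
  else if (key == 's') triggerVideo(3);    // new meow sound + third video
  else if (key == 'd') triggerVideo(4);
  else if (key == 'f') triggerVideo(5);
  else if (key == 'a') randomizeVideos();  // randomly mix videos 3-5 in
}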

The result is a remix of cat meme videos, with accompanying drum-machine like cat meow sounds to provide a beat. It’s somewhere between amusing and obnoxious, which is somewhat like the overabundance of cat videos themselves!


In the first part of the video, you can see the activation of the drum machine with the space bar, and the individual uses of the s, d and f keys. Towards the end of the video you can see the a key being used to activate the random cat drum beat function for a somewhat more ‘musical’ result.

Otomata: Generative Audio

As an IMGD major, I am very interested in the convergence of art and technology; it is what drew me to games in the first place, and one of the reasons I love what I do as a digital artist. Aside from video games, one of my other passions is music, and I similarly love new, creative combinations of music and technology.

Audio sampling and drum machines can provide a means to mix audio into new songs, but they do not always create flowing pieces of music. I was very impressed the first time I found Otomata, an online generative music sequencer with an easy-to-use interface that allows the user to quickly and intuitively create music without needing to fuss with too many buttons.

http://www.earslap.com/projectslab/otomata/?q=10_0_150_650160620220032831601732

Otomata is a generative sequencer that uses cellular automaton logic: a grid of cells, each in one of four states corresponding to the four arrow directions, up, down, left, and right. A pitch is triggered when an arrow cell hits a wall, with the pitch itself determined by the location of the collision, after which the arrow reverses direction. Pretty cool! Just by playing with Otomata and watching it sequence, it is not hard to understand the logic behind it. Still, it creates beautiful music for a relatively simple sequencer. The music evolves over time as the sequencer continues to cycle and play, and really does create flowing music out of what seems to be just logic (in terms of coding) or chaos (depending on how many arrows you decide to add to it!). Users can add, remove, and change arrows in real time, as well as adjust the tempo of the sequencer and the scale used, which changes the overall tone and mood of the music generated. Additionally, you can record and download the music you create with Otomata!
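
To make the rule concrete, here is a minimal Processing re-creation of the wall-bounce logic as I understand it (Otomata also rotates arrows when they collide with each other, which I’ve left out, and playNote() is a placeholder):

int n = 9;  // grid size, like Otomata's 9x9
ArrayList<int[]> arrows = new ArrayList<int[]>();  // each is {x, y, dx, dy}

void step() {
  for (int[] a : arrows) {
    a[0] += a[2];  // move one cell in the arrow's direction
    a[1] += a[3];
    if (a[0] < 0 || a[0] >= n) {  // hit the left or right wall:
      a[2] = -a[2];               // reverse horizontal direction...
      a[0] = constrain(a[0], 0, n - 1);
      playNote(a[1]);             // ...and trigger a pitch by row
    }
    if (a[1] < 0 || a[1] >= n) {  // hit the top or bottom wall
      a[3] = -a[3];
      a[1] = constrain(a[1], 0, n - 1);
      playNote(a[0]);             // pitch by column
    }
  }
}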

Otomata is very approachable, with an easy-to-understand interface, and it isn’t hard to grasp how it works if you play with it for even a couple of minutes. This is definitely part of what makes it so successful; it is so easy to use that anyone can find it online and play with it, and no two sequences will sound alike as you interact with it. Even with such a simple interface, the results are limitless depending on how you play with it, and it is satisfying to see and hear Otomata react to your input in real time.

Generative music is fascinating, and it would be fun to extend this idea of approachable, real-time generative audio. Adding generative visual elements, on top of the interface that lets you create and manipulate the generative audio, would be a natural next step. A similar set of controls that could create generative art based on the generative audio the user creates would be a neat addition to Otomata!

Nature vs Machine, Night & Light 3: Nebulae

The third sketch in “Night & Light” is Nebulae, which creates a different starry sky with each mouse or key press. The color is randomized as well as the star field, so you get a different sky each time!
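
The core idea is just a full redraw with fresh random values on every press; a minimal sketch of that pattern (the star count and color ranges here are illustrative, not my exact values):

void setup() {
  size(400, 400);
  drawSky();
}

void mousePressed() { drawSky(); }
void keyPressed()   { drawSky(); }

void drawSky() {
  background(random(40), random(30), random(80));  // random night-sky tint
  for (int i = 0; i < 300; i++) {
    stroke(255, random(100, 255));                 // stars of varying brightness
    point(random(width), random(height));
  }
}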

Nature vs Machine, Night & Light 2: Will o’ the Wisps

The second of my “Night & Light” sketches is based on will o’ the wisps, the ghostly lights of English folklore that travelers supposedly saw at night, leading them along safe paths over swamps and marshes.

The two will o’ the wisps dance around each other naturally, but will also dance in time as you move your mouse around the canvas. Try it! You’ll need to hover your mouse over it to get them started.
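
One common way to get that mix of wandering and following, sketched in miniature for a single wisp (the easing factor and orbit radius are illustrative, not my exact values):

float x, y;  // one wisp's position

void setup() {
  size(400, 400);
}

void draw() {
  background(0);
  // a target that circles the mouse, so the wisp orbits while it follows
  float tx = mouseX + 40 * sin(frameCount * 0.05);
  float ty = mouseY + 40 * cos(frameCount * 0.05);
  x += (tx - x) * 0.05;  // ease toward the target for a floaty feel
  y += (ty - y) * 0.05;
  noStroke();
  fill(150, 255, 180, 150);  // ghostly green glow
  ellipse(x, y, 12, 12);
}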


Nature vs Machine, Night & Light 1: Fireflies

My series of sketches for the Nature vs Machine project is called “Night & Light,” as each sketch plays with the idea of nature and light at night. This first sketch is “Fireflies,” in which I aimed to recreate the beautiful light trails left by fireflies at night.

This sketch is both animated and interactive; click the mouse to add more fireflies to the jar, and use the S key to save the frame!


“Nature” Sketches & Iterations

I’ve been working on the “Nature vs Machine” project due next week, and one of the natural phenomena I really love is fireflies, or lightning bugs, and the effect of photographing them with slow shutter speeds. It can be very beautiful! So, for one of the sketches, I knew I wanted to play with the idea of fireflies and the light “paths” left by their movement.

I’ll be updating this post with the iterations I make. Because of the way I’ve set up the “jar” these fireflies are in, I want to make sure it looks correct when posted on the blog!


In the first iteration, the fireflies are more scattered when the ellipses are drawn (due to the use of the random function), so they look more sparkly. In the second iteration, I played with the variables and lowered them so that the paths left by the fireflies are more linear, but still randomized. I also added noise to their movements to make them more realistic.
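
The heart of the effect, as a minimal sketch of one firefly: a low-alpha rectangle each frame makes old light fade slowly into trails, and noise() keeps the movement wandering instead of jittery (the specific numbers are illustrative):

float t = 0;  // noise "time" offset

void setup() {
  size(400, 400);
  background(0);
}

void draw() {
  noStroke();
  fill(0, 10);                      // translucent black = slowly fading trails
  rect(0, 0, width, height);
  float x = noise(t) * width;       // Perlin noise gives smooth wandering
  float y = noise(t + 100) * height;
  fill(255, 255, 150);              // firefly glow
  ellipse(x + random(-2, 2), y + random(-2, 2), 4, 4);  // a little sparkle
  t += 0.01;
}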

Processing Sketches

I’ve never used Processing before, but I’ve found I like it so far, between the first assignment and the things we’ve begun to do in class. Learning how to create the sketches and then making one based on a drawing was a little fussy and tedious, but I’m happy with the results!

I sketched out a cute little robot on paper for my design before moving to Processing.

He came out pretty well! I primarily used CENTER mode for the ellipses and rectangles that I drew, first getting each shape and color correct and then moving it into position, which worked out nicely!
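
For anyone trying this themselves, CENTER mode just means shapes are positioned by their midpoint, which makes lining up a symmetrical robot much easier (the sizes below are made up, not my actual robot’s):

void setup() {
  size(400, 400);
  rectMode(CENTER);  // rects are positioned by their center (default is CORNER)
  // ellipseMode(CENTER) is already the default for ellipses
}

void draw() {
  background(200);
  rect(width/2, height/2 + 30, 120, 160);   // body, centered on the canvas
  ellipse(width/2, height/2 - 80, 80, 60);  // head, centered above it
}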

I’ll continue to update this blog post with the iterations I make on this for the next assignment. I had a lot of fun in class today playing with variables and colors, so I’ll try to work in the various “challenges” in some interesting ways!

Quick update, here’s my first new iteration on the robot – now with dynamic colors and moving eyes! Try clicking on him, too!

Edit 2: Here’s another iteration, working on completing the challenges all within this sketch.

In addition to the dynamic eyes and colors based on mouse location, and the eyebrow movement based on mouse press, you can use the keys R, O, Y, G, B, I, and V to change the color of the robot’s square front panel!

Here’s what I’ve done so far for each of the challenges:

Challenge 1: I’ve been using variables instead of hard-coding values into the sketch. This came in handy when centering the design, where I was able to use ‘width/2’ and ‘height/2’ instead of the numerical pixel values of the locations I wanted. Additionally, for the movement of the eyes, I used if statements based on the location of the mouse in thirds: if the mouse is in the left third of the screen, the eyes look left; in the center third, the eyes are centered; and in the right third, the eyes are drawn looking right! The up and down movements of the eyes are coded the same way.
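
The thirds check looks roughly like this (the offset variable and amounts are illustrative, not my exact values):

int eyeOffsetX;  // added to the eyes' x position when they're drawn

void updateEyes() {
  if (mouseX < width/3) {
    eyeOffsetX = -5;               // left third: eyes look left
  } else if (mouseX < 2*width/3) {
    eyeOffsetX = 0;                // center third: eyes look straight ahead
  } else {
    eyeOffsetX = 5;                // right third: eyes look right
  }
}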

Challenge 2: I declared my own variables to create the key-press-activated color changing for the robot’s square panel. I declared rColor, gColor, and bColor at the beginning of the sketch with initial values of 255 (when you start the sketch, you can see the panel is white). Then I used if statements to figure out which key was being pressed, and assigned new values to rColor, gColor, and bColor depending on which letter was pressed, so that when the rectangle is drawn it changes color and stays that color, rather than only changing while the specific key is held down.
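
In sketch form, that pattern looks something like this (only three of the seven keys shown, and the exact color values are illustrative):

int rColor = 255, gColor = 255, bColor = 255;  // panel starts out white

void keyPressed() {
  if (key == 'r' || key == 'R') { rColor = 255; gColor = 0; bColor = 0; }
  else if (key == 'g' || key == 'G') { rColor = 0; gColor = 255; bColor = 0; }
  else if (key == 'b' || key == 'B') { rColor = 0; gColor = 0; bColor = 255; }
  // ...and similar assignments for O, Y, I, and V
}

void draw() {
  fill(rColor, gColor, bColor);          // the variables persist between frames,
  rect(width/2, height/2 + 40, 60, 60);  // so the panel stays its last color
}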

Challenge 3: The eyes and colors of the robot are dependent on mouseX and mouseY, the eyebrows are dynamic with mouse presses, and the square panel changes depending on specific key presses.

Challenge 4: All of the above use conditional logic, including IF, ELSE, and ELSE IF statements, along with AND and OR operators.

I’ll continue playing with the sketch and working in more logic to get more familiar with the different conditional concepts soon!