Let the week of rendering begin! Watch fullscreen.
ForScare is a scavenger hunt/assassins-style game where 2 teams seek out real-life NPCs in a fixed public space. The targets are identified by having checked in through any social media platform at the location of the game.
Set up teams and email them by group so team members can communicate easily
Pull the 5 most recent check-ins at game location (e.g. Washington Square Park)
Make Screen Captures of the 5 targets
Email both teams with the same 5 targets
Email suggestions on how to identify targets
Prepare 2nd round targets, time permitting
Teams meet at the same point in the park, receive the rules
Teams split up and assign roles (Command Center, Shooters, Spotters, Runners)
There must be at least ONE (1) player in each role.
When the teams receive their Targets, they sort into their roles and begin locating targets
Each Target spotted must be photographed from any distance with a CLEAR image of the target’s face
Each photo must be taken with the team’s Canon 5D
At some point during the 30-minute game, teams must download images from the CF card and then upload them to the team Twitter account, tagging each target with their @xxxxxxxx username
The team with the most successfully posted and identified Targets at the end of 30 mins wins
Command Center – stays at a fixed point on the map, cross-references targets on the internet for optimal identification. Also receives shot CF cards, downloads shot files and uploads image selections to Twitter, posting all relevant information
Shooters – Each team has only one team camera (provided) with which they may shoot photos of their targets. The Shooters’ job is to find targets on the map and shoot acceptable images (clear picture of face and elements from their post).
Runners – Deliver the shot CF cards back to the Command Center and return downloaded cards to the Shooter
Spotters – Take known information about Targets and locate them. Bring Shooters to target location
There are a variety of ways to learn more about targets and visually confirm them. Here are a few:
Look up Target’s username on Instagram. Scroll through the picture archive if allowed. Scan for selfies or pictures of the Target with friends
Cross-check Target’s username with the same username on other social media apps (a small lookup sketch follows this list)
Google the username (especially if it indicates Target’s real name)
Look for identifying details in Target’s posted photo. Identifiable clothing, companions, park landmarks (e.g. Fountain, Arch, Cherry Blossom Trees, Port-o-Potties, Dog Park)
Compare angle of sun and shadow to something in real space to help with perspective
Go up to a Target and ask for verification (be prepared to talk convincingly about the purpose of the game)
Triangulate with street performers when possible
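For the username cross-check, a quick script can do the first pass. Here’s a minimal sketch in Python; the profile URL patterns and the simple 200-status check are assumptions, since each platform handles missing users differently:

```python
# Minimal sketch of the username cross-check: probe a few common profile-URL
# patterns and report which ones resolve. URL patterns and status-code logic
# are assumptions and will vary by platform.
import requests

PROFILE_URLS = {
    "Instagram": "https://instagram.com/{u}",
    "Twitter": "https://twitter.com/{u}",
    "Tumblr": "https://{u}.tumblr.com",
}

def crosscheck(username):
    for platform, pattern in PROFILE_URLS.items():
        url = pattern.format(u=username)
        try:
            r = requests.get(url, timeout=5, allow_redirects=True)
            found = r.status_code == 200
        except requests.RequestException:
            found = False
        print(f"{platform}: {'possible match' if found else 'no page'} ({url})")

crosscheck("example_target")  # placeholder username
```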
Persistence of Perception is a joystick-controlled, 360-degree panoramic timelapse in which you can control both the speed of panning and the speed of the timelapse with the joystick. I’m capturing timelapse footage from 8 Canon cameras running a uBasic script that acts as an intervalometer.
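The playback side really comes down to mapping two joystick axes onto two speeds. Here’s a rough sketch of that mapping in Python with pygame, assuming the frames have already been stitched into panoramic images (paths, axis numbers, and speed scaling are placeholders, not the final implementation):

```python
# Sketch of the joystick-to-playback mapping, assuming pre-stitched panoramic
# frames in ./frames/ named frame_0000.jpg, frame_0001.jpg, ...
# One joystick axis drives pan speed, the other drives timelapse speed.
import glob
import pygame

pygame.init()
pygame.joystick.init()
stick = pygame.joystick.Joystick(0)
stick.init()

frames = sorted(glob.glob("frames/frame_*.jpg"))  # placeholder path
screen = pygame.display.set_mode((1280, 720))
clock = pygame.time.Clock()

pan_x = 0.0        # horizontal offset into the panorama, in pixels
frame_pos = 0.0    # fractional index into the timelapse sequence

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False

    pan_speed = stick.get_axis(0) * 20.0   # axis 0 -> pan speed (px per tick)
    time_speed = stick.get_axis(1) * 0.5   # axis 1 -> timelapse speed (frames per tick)

    frame_pos = (frame_pos + time_speed) % len(frames)
    img = pygame.image.load(frames[int(frame_pos)])

    pan_x = (pan_x + pan_speed) % img.get_width()
    # Draw the image twice so the 360-degree panorama wraps seamlessly.
    screen.blit(img, (-pan_x, 0))
    screen.blit(img, (img.get_width() - pan_x, 0))
    pygame.display.flip()
    clock.tick(30)

pygame.quit()
```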
Here’s what the setup looks like
I did a trial run last week just so I could see what the images would look like. Of course it rained for an hour which knocked one camera out from the very beginning. Only two cameras actually made it the whole day, but even they glitched out at points. I’m going to take more time setting up the cameras and replace the ones that didn’t work with fresh ones
I’ve been working with a program called CrazyBump to produce normal maps for shading in Unity this semester. Normal maps are just one way to make simple geometry look complex and textured without bogging down the CPU. When you see a wooden crate in a video game, chances are it’s just a simple six-sided box with normal, diffuse, specular, transparency maps, etc., which give the illusion of depth. These maps are combined by shaders, which run on the GPU; that relieves stress on the CPU so it can maintain an acceptable frame rate and give the illusion of continuous motion in video and video games.
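To make that concrete, this is roughly what a tool like CrazyBump is doing when it turns a grayscale image into a normal map: treat brightness as height, take the slope at each pixel, and pack the resulting surface normal into RGB. A minimal sketch in Python (filenames and the strength factor are placeholders):

```python
# Rough sketch of deriving a normal map from a grayscale height image:
# take x/y gradients of the heights, build a per-pixel surface normal,
# and remap it into an RGB image.
import numpy as np
from PIL import Image

height = np.asarray(Image.open("height.png").convert("L"), dtype=np.float32) / 255.0

strength = 2.0                   # how strongly slopes tilt the normals (placeholder)
dy, dx = np.gradient(height)     # per-pixel slope along y and x

# Normal = normalize(-dx, -dy, 1/strength), then remap from [-1, 1] to [0, 255].
nz = np.ones_like(height) / strength
n = np.stack([-dx, -dy, nz], axis=-1)
n /= np.linalg.norm(n, axis=-1, keepdims=True)
rgb = ((n * 0.5 + 0.5) * 255).astype(np.uint8)

Image.fromarray(rgb, "RGB").save("normal.png")
```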
Many people explore using shaders as a way of adding detail and generative design to simple objects. My question was: can we take these shader effects, commit them to actual geometry, and then bring that geometry into the real world?
Cubist paintings lend themselves nicely to this kind of manipulation. I picked a good piece with simple geometry and clear black and white levels, which would provide the deepest valleys and highest peaks of the final 3D work. This would provide a textural canvas that we can then project the original work back onto. From there we can get into interaction and movement for the projected video.
While normal maps are the norm for video games, Cinema 4D is more responsive to displacement maps. We chose The Guitar Player by Picasso.
Using the Displacer object I was able to make an actual displaced mesh on a 3D plane.
Mesh render with soft shading and higher-polygon hard shading side by side
Deformed 3D plane with original color map projected back on.
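Outside of Cinema 4D, the displacement step itself is simple enough to sketch: sample the grayscale image as heights on a grid and push the plane’s vertices up accordingly. Here’s a rough Python version that writes the displaced plane out as an OBJ (image name, grid resolution, and height scale are placeholders, not what the Displacer actually uses):

```python
# Sketch of the displacement idea: sample a grayscale image as heights on a
# grid and write the displaced plane out as an OBJ mesh.
import numpy as np
from PIL import Image

img = Image.open("guitar_player_bw.png").convert("L").resize((128, 128))
heights = np.asarray(img, dtype=np.float32) / 255.0
rows, cols = heights.shape
height_scale = 10.0  # maximum displacement in scene units (placeholder)

with open("displaced_plane.obj", "w") as f:
    # One vertex per pixel; brightness drives the vertical (y) displacement.
    for r in range(rows):
        for c in range(cols):
            f.write(f"v {c} {heights[r, c] * height_scale} {r}\n")
    # One quad face per cell of the grid (OBJ indices are 1-based).
    for r in range(rows - 1):
        for c in range(cols - 1):
            i = r * cols + c + 1
            f.write(f"f {i} {i + 1} {i + cols + 1} {i + cols}\n")
```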
I then took this file into Rhino and converted it to a single object that is 2″ thick at its thickest and 1″ thick at the deepest valleys. From here it will be ported to Mastercam for production on the CNC router, to be cut out of a 24″ x 48″ piece of blue styrofoam. I’ve reached an impasse at this point. I sat with most of the students who are advanced Mastercam users, and although they assure me it can be done, I have not yet been able to get a good simulation cut with my current CAD file. I still have a few methods to try out.
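For reference, the 1″ to 2″ thickness conversion itself is just a linear remap of the displacement values; a minimal sketch, assuming the heights are already normalized so 0 is the deepest valley and 1 the highest peak:

```python
# Hedged sketch of the thickness remap for the CNC file: rescale normalized
# displacement values into the 1"-2" material range. The input file is a
# placeholder for however the heights are exported.
import numpy as np

heights = np.load("heights.npy")                          # placeholder: raw height values
h = (heights - heights.min()) / (heights.max() - heights.min())
thickness_in = 1.0 + h * (2.0 - 1.0)                      # 1" at valleys, 2" at peaks
print(thickness_in.min(), thickness_in.max())             # -> 1.0 2.0
```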
Assuming we get the 3D canvas cut, the next step is to project the image back onto it. Our idea is to have the piece generate a random guitar tone in C major upon each distinct touch, and to have the image mapping undulate as if it were being manipulated by a physical sound wave expanding in a circle out from the contact point, much like a stone rippling the water when it is tossed into a pond.
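The tone side is straightforward: on each touch, pick a random note from the C major scale and synthesize it. A minimal Python sketch of just that piece (the ripple mapping isn’t shown, and the duration and output path are placeholders):

```python
# Sketch of the touch-response tone: pick a random note from the C major
# scale and write a short sine tone to a WAV file.
import random
import wave
import numpy as np

C_MAJOR_MIDI = [60, 62, 64, 65, 67, 69, 71, 72]  # C4 D4 E4 F4 G4 A4 B4 C5

def midi_to_hz(note):
    return 440.0 * 2 ** ((note - 69) / 12)

def random_c_major_tone(duration=0.5, rate=44100):
    freq = midi_to_hz(random.choice(C_MAJOR_MIDI))
    t = np.linspace(0, duration, int(rate * duration), endpoint=False)
    samples = (0.3 * np.sin(2 * np.pi * freq * t) * 32767).astype(np.int16)
    return samples, rate

samples, rate = random_c_major_tone()
with wave.open("touch_tone.wav", "w") as w:
    w.setnchannels(1)
    w.setsampwidth(2)       # 16-bit samples
    w.setframerate(rate)
    w.writeframes(samples.tobytes())
```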
The Emerging Processes in Video Art class is currently studying Delaunay triangulation, which is visually similar to the geometry we are dealing with and just might be the key to a compelling visualization, but getting the depth image with a Kinect and calibrating its threshold with Max/MSP and Freenect may be difficult.
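For reference, the triangulation itself is easy to experiment with in SciPy; here’s a quick sketch on random points standing in for features pulled from a Kinect depth image:

```python
# Delaunay triangulation of a 2D point set with SciPy; the random points are
# stand-ins for features sampled from a depth image.
import numpy as np
from scipy.spatial import Delaunay
import matplotlib.pyplot as plt

points = np.random.rand(200, 2)   # placeholder for depth-derived points
tri = Delaunay(points)

plt.triplot(points[:, 0], points[:, 1], tri.simplices, lw=0.5)
plt.plot(points[:, 0], points[:, 1], ".", markersize=2)
plt.gca().set_aspect("equal")
plt.show()
```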
Since we already have the 3D object, we can import it into Unity and project from there, which will facilitate mapping onto the physical object.
This week in Rest of You we took a look at the rest of us and built a machine to expose our intuitive prejudices and induce empathy with people who have different intuitions. I decided to build upon my PComp final – the Wall Hanging Mood Ring.
This is a simple CNC-routed mirror frame with 3 layers of diffused acrylic and a one-way mirror on the outside. There are 6 meters of LED strip coiled behind it, all attached to a TMP36 temperature sensor. As body heat increases and decreases, the colors of the LED strip move through the color spectrum, changing the color of the person’s reflection.
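The color logic is a small mapping: a TMP36 reads out 10 mV per degree with a 500 mV offset, and the temperature is then mapped onto the hue wheel. A sketch of that math in Python (the temperature range and hue span are assumptions; the actual piece runs this on a microcontroller):

```python
# Sketch of the sensor-to-color mapping: TMP36 millivolts -> degrees C -> hue.
import colorsys

def tmp36_to_celsius(millivolts):
    return (millivolts - 500.0) / 10.0        # TMP36: 500 mV offset, 10 mV per deg C

def celsius_to_rgb(temp_c, lo=24.0, hi=36.0):
    # Clamp the temperature into an assumed range, then sweep blue -> red.
    t = min(max((temp_c - lo) / (hi - lo), 0.0), 1.0)
    hue = (1.0 - t) * 0.66                    # cold = blue (0.66), warm = red (0.0)
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
    return int(r * 255), int(g * 255), int(b * 255)

print(celsius_to_rgb(tmp36_to_celsius(800)))  # 800 mV = 30 C -> mid-spectrum color
```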
As we’ve been studying different ways to monitor and display autonomic bodily processes, I’ve been interested in ways of integrating those into the mood ring so that the color displays the entire emotional state. This led me to the work of Robert Plutchik and his theory of emotion.
Plutchik’s psychoevolutionary theory of basic emotions has ten postulates.
From this he created a color coded wheel of emotion in both 2D and 3D form.
I’m proposing an implant or wearable that color-codes the wearer’s emotional state. This will give everyone complete transparency and provide ways for people to employ mood management theory and navigate the social world to find the experiences that would be most beneficial for them. It will also help them avoid those who wish to harm them or who are in a negative or different emotional state. It has other benefits as well, such as helping those who have difficulty reading emotional states, such as children or those with autism.
Once you have experienced a color on your own you will have achieved empathy with others who have obtained the same color level regardless of whether or not you have experienced the same specific event. This will help bridge the gap between sympathy and true empathy.
We spent this week looking at our digital footprint. I began by tracking my keystrokes with BackTrack. I was also able to export my Gmail from 2007 to 2009 using a Python script. Finally, I was able to export single folders from my Gmail to Google Drive, where they could be downloaded as a text file. For this final example I chose the folder with all the Rhizome newsletters in order to detect art-speak.
BackTrack picked out a lot of background processes which would take a while to filter through. In the end I used my computer in the past week for very little except reading articles, browsing github, and testing Unity Games, so it wasn’t very coherent or telling beyond just those activities.
Unfortunately I couldn’t get Processing or Excel to read the .eml filetype generated by the Python script. The script was very powerful, even if it did stop after two years, which was 2 gigs of emails. It can be set to run between certain dates, so if one were to pursue it I would suggest doing year-long increments to keep it from stalling or crashing. According to a classmate, I can make a change to the visualization Processing sketch to accommodate the .eml file type.
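As an alternative, Python’s standard email module can read the .eml files directly, so the text can be dumped or word-counted without Processing or Excel. A minimal sketch (the folder path and word filtering are placeholders):

```python
# Parse exported .eml files with the standard library and count the most
# frequent words in their plain-text bodies.
import glob
from collections import Counter
from email import policy
from email.parser import BytesParser

counts = Counter()
for path in glob.glob("exported_mail/*.eml"):      # placeholder folder
    with open(path, "rb") as f:
        msg = BytesParser(policy=policy.default).parse(f)
    body = msg.get_body(preferencelist=("plain",))
    if body is None:
        continue
    text = body.get_content()
    counts.update(w.lower().strip(".,!?\"'()") for w in text.split() if len(w) > 3)

for word, n in counts.most_common(25):
    print(n, word)
```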
The Rhizome email export proved to be more interesting in a predictable way. My favorite words – hacktivism, byob, humlab, topophobia, posthuman, gilgamesh, googlegeist, queercore, burpology, grunwald, collegium, & autopoiesis.