I’ve been working on my personal website for the last few weeks, and it’s finally ready!
Check it out here. :)
You can also find new blog posts here.
A thousand thank-yous (you rock!) to Guillermo Arrioja for helping me out!
Stardust was selected for the Bridge Design Collaboration Exhibition at DogA, the Norwegian Center for Design and Architecture. At this exhibition, selected projects from the Oslo National Academy of the Arts (KHiO) and the Oslo School of Architecture and Design (AHO) were showcased throughout the summer. I was happy to participate among fellow designers.
The exhibition was featured on AHO’s website here.
Here is a review of the exhibition by Norsk Form.
The Bridge Design Collaboration Facebook event can be found here.
Finally: the end-of-year show! It was a great summer day, with friends and family (my parents flew in from Mexico) and with the project finally out there for people to experience.
Here is a description of my project from KHiO’s publication.
The Stardust installation and exhibition at the end-of-year show.
After completing the screen tests, I had an idea of what type of visual language to use. Before continuing to develop the software, I tested the starting images again, but without a screen this time: I projected them directly onto the participant’s body.
When I did this, I noticed something very important for the whole experience. The idea was to have the stardust on your body, with no screens anywhere. But as a participant, you do want to see how it looks: I needed a way to capture how it looked and show it back to the people experiencing the installation. I used real mirrors to do so.
In this way, the main system would still work through embodiment: you had to experience the stardust on your own body. Yet the project would also include the perception of your body image, not through a screen, but through a real mirror.
Test projecting a stardust demo on the participant’s body.
Projecting stardust. The experience changes once the participant can see their own body at full scale with the stardust on.
With a mirror to look at yourself, the experience was enhanced for the participants. After these tests, I sketched out ideas for how to use the mirrors in the final setup. It was tricky to work with the feed from the Kinect, align it to the projector (I wanted the projected stars to land on the person’s silhouette), and figure out where the mirror would be placed.
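For context, the alignment problem can be sketched in code. This is a hypothetical illustration, not the installation’s actual software: given a few calibration correspondences (points picked in the Kinect image and their matching positions in projector space), a least-squares affine fit maps silhouette pixels into projector coordinates so the stars land on the body.

```python
import numpy as np

def fit_affine(kinect_pts, projector_pts):
    """Least-squares affine transform from Kinect image coordinates
    to projector coordinates, fitted from calibration point pairs."""
    src = np.asarray(kinect_pts, dtype=float)
    dst = np.asarray(projector_pts, dtype=float)
    # Design matrix [x, y, 1]; solve A @ coeffs = dst for the 3x2 transform.
    A = np.hstack([src, np.ones((len(src), 1))])
    coeffs, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return coeffs

def to_projector(point, coeffs):
    """Map a single Kinect-space point into projector space."""
    x, y = point
    return tuple(float(v) for v in np.array([x, y, 1.0]) @ coeffs)

# Made-up calibration: the projector view is shifted and 2x scaled
# relative to the 640x480 Kinect frame.
kinect = [(0, 0), (640, 0), (640, 480), (0, 480)]
proj   = [(100, 50), (1380, 50), (1380, 1010), (100, 1010)]
M = fit_affine(kinect, proj)
print(to_projector((320, 240), M))  # centre of the Kinect frame, roughly (740, 530)
```

In practice a homography (perspective transform) handles an angled projector better than an affine fit, but the calibrate-then-map idea is the same.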
I came up with three options for the final setup.
1. The mirror would be a one-way mirror (like the ones used in focus groups or police interrogations). With this material, you could look at your own reflection, yet (I thought) the light coming from the projector could pass through the mirror and onto the body; the light source and the documentation device (the mirror) would be perfectly aligned.
I remembered the one-way mirror from an old psychology class, where the teacher explained “la cámara Gesell” (the Gesell dome), used to study children without them being aware they were being watched.
…a prominent hot spot in the center of the one-way mirror (for this test, I used a model as a prototype and tinted sunglasses as the one-way mirror). Not only the light but also the heat from the projector contributed to the hot spot. Time to try the next option.
2. A paper-foil, mirror-like material. This way, the Kinect could hide behind it and register the participant, while the projector would sit behind the people, at an angle, so the stardust would be reflected by the paper-like mirror. The reason for using paper is that it can be paper-thin, and thus easier for the Kinect to sense through.
3. Use a real mirror, and figure out a way to incorporate it into the final installation.
It worked! It reflected the stardust back onto the model. Once the final installation was set up, the trick would be to see whether the Kinect picks up the user in front of the mirror and, if not, how to integrate it into the installation in a practical way.
The next step, then, was to develop the visual language of the stardust. I used the image selected by the participants as a starting point (the yellow-golden particle reference image).
What if the animation you create could be seen not only on your smartphone or tablet, but jump in real time to the screen next to you? What if the animation could inhabit all the screens in a room at once, jumping between them? Seb Lee-Delisle (a creative coder, speaker, and teacher) did just this. He specializes in bringing people together through large-scale installations. His project, Pixels for the People, turns each phone in the audience into pixels of a larger display. It runs in the phones’ browsers, so there is no need to download or install an app. (Brilliant.)
It was built with openFrameworks (C++). Seb made a game in which a cat animation runs from screen to screen, and whoever catches it fastest wins!
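To illustrate the underlying idea (my own toy simplification in Python, not Seb’s actual openFrameworks code): each phone registers the rectangle it occupies in a shared room coordinate space, and whichever rectangle contains the sprite’s current position is the screen that draws it. That is how an animation can appear to “jump” from phone to phone.

```python
from dataclasses import dataclass

@dataclass
class Screen:
    phone_id: str
    x: float  # left edge in shared room coordinates
    y: float  # top edge
    w: float  # width
    h: float  # height

    def contains(self, px, py):
        return self.x <= px < self.x + self.w and self.y <= py < self.y + self.h

def screen_under_sprite(screens, px, py):
    """Return the phone that should draw the sprite at (px, py),
    or None while the sprite crosses a gap between phones."""
    for s in screens:
        if s.contains(px, py):
            return s.phone_id
    return None

# Three phones held side by side, with small gaps between them.
room = [
    Screen("alice", 0, 0, 7, 13),
    Screen("bob", 8, 0, 7, 13),
    Screen("carol", 16, 0, 7, 13),
]

# As the sprite moves left to right, it hops from screen to screen.
for x in (3, 7.5, 10, 20):
    print(x, screen_under_sprite(room, x, 5))
# prints: 3 alice / 7.5 None / 10 bob / 20 carol
```

In the real installation the server would broadcast the sprite’s position to all phones (over WebSockets, say) and each phone would decide locally whether to draw it; the hit-test above is the core of that decision.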
Engagement. Synchronicity. Unexpectedness. Surprise. Unity. Openness. Awareness. Fun. All of these qualities run through this wonderful project, which brings people together, shows possibilities for using our gadgets collaboratively, lets content really flow from display to display, and offers an alternative to the isolation of looking only at your own phone screen.
Find the whole article here → PixelPhones