Research Topic
Microemotion is a study of interactivity. Users influence the installation by inputting their emotions. The installation provides users with emotional statistics and trends based on the frequency, location, and weather information collected, and it is dynamically updated to reflect the dominant emotional mood. Users interact with the current emotional state through sound, text, and colour: they move and catch emotion words using their own silhouettes and other objects, and they make loud noises that trigger the installation to emit reactive sounds and display new words. Together these form a sort of conversation between the user and the installation space.
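As a very rough illustration, here is a minimal sketch (in Python) of how the dominant mood could be derived from the collected inputs. The data model and the purely frequency-based weighting are our own assumptions for now; how location and weather actually feed into the statistics is still open.

    from collections import Counter

    # Minimal sketch: derive the "dominant emotional mood" from a log of inputs.
    # The entry structure and frequency-only weighting are assumptions.
    entries = [
        {"emotion": "joy", "location": "north wall", "weather": "sunny"},
        {"emotion": "sadness", "location": "south wall", "weather": "rain"},
        {"emotion": "joy", "location": "centre", "weather": "sunny"},
    ]

    def dominant_mood(log):
        """Return the most frequently reported emotion in the log."""
        counts = Counter(entry["emotion"] for entry in log)
        return counts.most_common(1)[0][0]

    print(dominant_mood(entries))  # -> "joy"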
Problem Identification
From what we can tell and what we have discussed, one of the biggest technical challenges we think we will face will be keeping track of where each user is within the space. Assigning emotions to a database is fairly straightforward, as is projecting emotion rings around a person from above.
Some ideas for possible solutions: colour tracking or some other video-based tracking. We would likely require more cameras to view from the side, which might affect how we choose to handle lighting within the installation. We may also have to give users something to help us track them, for example an object with a specific colour or sound.
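Here is a rough sketch of what the colour-tracking approach might look like using OpenCV (the opencv-python package, 4.x assumed). The HSV colour range, camera index, and the idea of tracking a single coloured object per user are all placeholders we would need to tune and extend.

    import cv2
    import numpy as np

    # Sketch: track an object of a known colour and report its centroid,
    # which could then drive the projected emotion ring. HSV bounds are
    # arbitrary placeholders (roughly blue) and depend on our lighting.
    LOWER = np.array([100, 150, 50])
    UPPER = np.array([130, 255, 255])

    cap = cv2.VideoCapture(0)  # assumed side-mounted camera at index 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, LOWER, UPPER)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            largest = max(contours, key=cv2.contourArea)
            m = cv2.moments(largest)
            if m["m00"] > 0:
                x, y = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
                print("tracked object at", x, y)  # position to feed the projection
        cv2.imshow("mask", mask)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()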
Aim and Objectives
· To reflect the dominant emotional mood
· To emit reactive sounds and display new words
Motivation
The more predominant the emotion, the larger the font and the more leeway the word has to travel outside of the user-created shadow.
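A small sketch of this scaling rule, assuming a simple linear mapping from an emotion's share of the total inputs to font size and drift distance; the constants are placeholders.

    # Sketch: more predominant emotion -> larger font and more drift
    # outside the user's shadow. Ranges below are assumed values.
    MIN_FONT, MAX_FONT = 12, 96   # assumed font-size range in points
    MAX_DRIFT = 200               # assumed maximum drift in pixels

    def word_style(emotion_count, total_count):
        share = emotion_count / total_count if total_count else 0.0
        font_size = MIN_FONT + share * (MAX_FONT - MIN_FONT)
        drift = share * MAX_DRIFT
        return font_size, drift

    print(word_style(30, 100))  # -> (37.2, 60.0)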
Ideation and Concept
The idea is about compatibility. The user walks into the room and a halo surrounds them with different emotions, or perhaps words describing their profile. When one user approaches another, there could be an attraction or rejection effect: if two emotions attract, then the halos of the two users become joined, and the more users come together, the stronger the attraction or rejection. This is just an interesting take on compatibility between users. Of course we are still using text as our visual element.

We think the idea of the rings around the user on the ground is awesome, and probably not too tough either from what we have been looking at. We think we could definitely create a simple installation around this, or work it into something more complex, although we even like the idea of just simple white panels, with all of the focus being on the floor.

The biggest unresolved issue is how we assign an emotion to each person. Is it based on their movement? Something they touch, type in, or say, or how they position their body, which could be difficult but super cool if it worked? Alternatively, we could define poses for each emotion and flash them or display them in some other way so that people would know to "do a pose" in order to acquire an emotion ring. It could also be based on where they walk initially when they enter, or the colour of their clothes, or some combination of these, or something else entirely. We will focus on touch, light, or sound sensors.
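To make the attraction/rejection idea a bit more concrete, here is a minimal sketch assuming a hand-made compatibility table between emotions; the pairings below are invented placeholders and would need to be designed properly.

    # Sketch: decide whether two users' halos join or repel based on an
    # assumed compatibility table of emotion pairs.
    ATTRACT = {("joy", "surprise"), ("sadness", "sadness"), ("joy", "joy")}

    def halos_interact(emotion_a, emotion_b):
        """Return 'join' if the two halos attract, otherwise 'repel'."""
        pair = tuple(sorted((emotion_a, emotion_b)))
        return "join" if pair in ATTRACT else "repel"

    print(halos_interact("surprise", "joy"))  # -> "join"
    print(halos_interact("anger", "joy"))     # -> "repel"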
Review, Analysis and Precedence Studies
There are a few interesting articles dealing with translating the emotional and gestural aspects of dance and movement into related sounds and instruments. The articles expand on the ideas proposed above.
- http://dafx04.na.infn.it/WebProc/Proc/P_005.pdf
- http://www.edu.gov.on.ca/eng/curriculum/secondary/arts910curr.pdf
We have been doing some reading on emotions and facial expressions. Here are some PDFs we found relating to Ekman's research. We have not had time to go through all the documents in detail, but they seem to contain useful information.
- http://www.csun.edu/~gk45683/Ekman%20(1992).pdf
- http://www.paulekman.com/wp-content/uploads/2009/02/Basic-Emotions.pdf
- http://www.paulekman.com/wp-content/uploads/2009/02/Facial-Expressions-Of-Emotion.pdf
- http://www.english.upenn.edu/~cavitch/pdf-library/Gladwell_NakedFace.pdf
The last thing that we thought would be necessary to include is the actual “Gesture Dictionary” that they propose using to determine the emotional aspects of movement.
- http://recherche.ircam.fr/equipes/analyse-synthese/wanderle/Gestes/Externe/index.html
Design Approach
We think we need a really solid idea of how we want to tie everything together. Just because it is feasible to have input through large over-sized items does not necessarily mean that the installation will make sense. Ideally, it might be more enjoyable if we could come up with ways that users can interact with and influence the installation without necessarily moving around objects with sensors in them. For example, if we could use video and sound detection to sense where in the room people are gravitating and then capture input from them, perhaps strictly text, we could translate this into less direct input.

Another idea we had was that instead of mixing all of the emotion words together throughout the installation, it could be divided into the six themes, or however many we decide on, for example one wall per basic emotion. We could then detect emotions from users simply based on their location within the installation (see the sketch below). To make this more interesting, we could trace the shadow of the users on each wall, perhaps using text or some other cool light graffiti in the shape of the person, and give them the characteristics of that particular emotion. For example, a user comes into the installation and goes over to the sadness wall or area, which is displaying all kinds of sad words, maybe using shades of sad colours and moving the words around in a sad way. A camera tracks the user's movement and projects their shadow on the wall amongst the text. So now it is almost as though the room is influencing the user, not just the other way around.

We could have users hold or wear some kind of object as they come in, or leave different objects around the installation, even something simple like cubes or spheres of different colours that users can touch. There are also different input forms that we can look at, like temperature, touch, or motion sensors, and we can have the standard text input through a keyboard, computer, or phone. We could even explore things like connecting different users by their location, the mood words nearest them, the type of words they input, the colours they are wearing, and the amount they are moving. Or we might also track user paths through the room to create a sort of inverted light or shadow graffiti.
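Here is a minimal sketch of the "one wall per basic emotion" idea, assuming a tracked (x, y) position per user and a simple nearest-zone rule; the room dimensions, zone layout, and choice of six emotions are placeholders.

    # Sketch: map a user's tracked position to the closest emotion zone.
    # Room size, zone anchors, and emotion set are assumed for illustration.
    ROOM_W, ROOM_H = 10.0, 10.0  # assumed room size in metres

    ZONES = {
        "anger":    (0.0, ROOM_H / 2),     # left wall centre (x, y)
        "joy":      (ROOM_W, ROOM_H / 2),  # right wall
        "sadness":  (ROOM_W / 2, 0.0),     # front wall
        "fear":     (ROOM_W / 2, ROOM_H),  # back wall
        "disgust":  (0.0, 0.0),            # front-left corner
        "surprise": (ROOM_W, ROOM_H),      # back-right corner
    }

    def emotion_for_position(x, y):
        """Assign the emotion whose zone anchor is closest to the user."""
        return min(ZONES, key=lambda e: (ZONES[e][0] - x) ** 2 + (ZONES[e][1] - y) ** 2)

    print(emotion_for_position(9.0, 5.0))  # -> "joy"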
Sketches