Unknown Side
Unknown Side is an interactive installation in which people watch their shadows interact with one another based on where each person is standing and the distance between them.
In Jungian psychology, the “shadow” may refer to an unconscious aspect of the personality or the entirety of the unconscious. In short, the shadow is the unknown side. Through this piece, the shadow is used to open up a topic about the unrevealed sides of people and their nonverbal communication.
TIME: Nov. 2019 | 3 Weeks

TEAM: Jiwon Shin, Ellie Lin

ROLE: Concept Development, Research, Shader Programming

TOOL: OpenFrameworks, Kinect Azure
Interaction Demonstration
Research – Proxemics and Nonverbal Communication
Proxemics, a term coined by the cultural anthropologist Edward T. Hall, is a theory of nonverbal communication that explains how people perceive and use space to achieve communication goals.
Level One: 1.5 ft (0.45 m), intimate space
Social behaviors: affection – kisses, pats, linking, and cuddling

Level Two: 4 ft (1.2 m), personal space
Social behaviors: inclusion – tactile greetings like handshakes or hugs

Level Three: 12 ft (3.6 m), social space
Social behaviors: involvement – standing position: body orientation, leaning inward or toward the other individual

Level Four: 25 ft (7.6 m), public space
Social behaviors: no tangible interactions
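Hall's four zones map naturally onto simple distance thresholds. As a minimal sketch (the function name and return labels are ours, not from the project code), classifying an inter-person distance in metres could look like:

```cpp
#include <string>

// Hypothetical helper mapping the distance between two people (in metres)
// to one of Hall's four proxemic zones, using the thresholds listed above.
std::string proxemicZone(float distanceMeters) {
    if (distanceMeters <= 0.45f) return "intimate";  // Level One
    if (distanceMeters <= 1.2f)  return "personal";  // Level Two
    if (distanceMeters <= 3.6f)  return "social";    // Level Three
    return "public";                                 // Level Four
}
```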
Process of Development
To capture the joint positions for the selected actions, we used the Azure Kinect with the ofxAzureKinect openFrameworks addon, saving the x, y, z coordinates of the 32 joints to text files via an ofBuffer object (link to code). Within each joint, coordinates are separated by commas (,); joints are separated by semicolons (;); and each frame ends with a line break. The saved files look like the screen capture above.
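The serialization format described above can be sketched in plain C++ (the struct and function names are illustrative; in the project the resulting string is appended to an ofBuffer and written to disk):

```cpp
#include <sstream>
#include <string>
#include <vector>

struct Joint { float x, y, z; };  // minimal stand-in for the Kinect joint type

// Serialize one frame of joints as "x,y,z;x,y,z;...;x,y,z\n":
// commas within a joint, semicolons between joints, a line break per frame.
std::string serializeFrame(const std::vector<Joint>& joints) {
    std::ostringstream out;
    for (size_t i = 0; i < joints.size(); ++i) {
        if (i > 0) out << ';';
        out << joints[i].x << ',' << joints[i].y << ',' << joints[i].z;
    }
    out << '\n';
    return out.str();
}
```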
Next, we took the saved text files and rendered them as positions for ellipses in openFrameworks. Parsing the saved data was relatively easy because we had planned the separator format ahead of time. We animate the data by stepping through one line of the text file per frame and splitting each line on the semicolon (;) character to get the position of each joint.
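The inverse of the save step is a two-level split, as described above: first on ';' to separate joints, then on ',' to recover each x, y, z. A hedged sketch (names are ours, not the project's actual identifiers):

```cpp
#include <sstream>
#include <string>
#include <vector>

struct JointPos { float x, y, z; };

// Parse one line of the saved file back into joint positions:
// split on ';' for joints, then on ',' for the coordinates of each joint.
std::vector<JointPos> parseFrame(const std::string& line) {
    std::vector<JointPos> joints;
    std::istringstream lineStream(line);
    std::string token;
    while (std::getline(lineStream, token, ';')) {
        std::istringstream jointStream(token);
        std::string x, y, z;
        std::getline(jointStream, x, ',');
        std::getline(jointStream, y, ',');
        std::getline(jointStream, z, ',');
        joints.push_back({std::stof(x), std::stof(y), std::stof(z)});
    }
    return joints;
}
```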
Using ofxGui, we built a simulator to test triggering different animations based on the distance between the red and blue ellipses shown above. To prevent animations from triggering constantly, we pick a new animation sequence based on distance only if no animation is currently playing AND both ellipses are stationary. There is also a 60-frame delay before an animation starts, to make sure the positions of the ellipses are fairly constant before it triggers.
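The triggering rule above amounts to a small per-frame state machine. A minimal sketch under our own naming (not the project's code): a new animation may start only when nothing is playing and both bodies have held still for 60 consecutive frames.

```cpp
// Sketch of the trigger logic described above. The caller chooses which
// animation sequence to play (based on distance) whenever update() fires.
struct AnimationTrigger {
    bool playing = false;
    int stillFrames = 0;
    static const int kDelayFrames = 60;  // frames of stillness required

    // Call once per frame; returns true when a new animation should start.
    bool update(bool bothStill) {
        if (playing) { stillFrames = 0; return false; }  // already animating
        stillFrames = bothStill ? stillFrames + 1 : 0;   // reset on movement
        if (stillFrames >= kDelayFrames) {
            playing = true;   // stays true until the animation finishes
            stillFrames = 0;
            return true;
        }
        return false;
    }
};
```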
For the shadow-like visualization, we wanted to recreate the metaballs Processing sketch from Dan Shiffman’s Coding Train in openFrameworks, but ran into difficulties. The same code structure that ran fine in Processing was very slow in openFrameworks (between 1 and 5 fps). With Elias’ help, we figured out how to draw the 2D metaballs in a shader, which improved the processing speed drastically.
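The classic metaball technique sums a falloff contribution from every ball at each pixel and thresholds the result; a fragment shader evaluates that sum for all pixels in parallel, which is why moving it to the GPU was so much faster than a per-pixel CPU loop. A CPU sketch of the per-pixel math (not the project's actual shader):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Ball { float x, y, r; };

// Classic metaball field: each ball contributes radius / distance at a
// pixel. Nearby balls' fields add up, so separate blobs merge smoothly.
float metaballField(float px, float py, const std::vector<Ball>& balls) {
    float sum = 0.0f;
    for (const Ball& b : balls) {
        float d = std::sqrt((px - b.x) * (px - b.x) + (py - b.y) * (py - b.y));
        sum += b.r / std::max(d, 1e-6f);  // guard against division by zero
    }
    return sum;
}

// Thresholding the field decides whether a pixel lies inside the blob.
bool insideBlob(float px, float py, const std::vector<Ball>& balls,
                float threshold = 1.0f) {
    return metaballField(px, py, balls) >= threshold;
}
```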
We continued to improve the shadow visualization. We built switch cases for different joints to render a more shadow-like silhouette: each joint covers a differently sized area of the human body, and only by specifying an area per joint were we able to generate the shadow animation above.

When we tested representing human bodies in front of the Kinect in real time, we realized this switch-case approach made the arms look very strange. Because the elbows in most of our saved animations were not angled, the visualization looked realistic, especially when the bodies were sideways rather than facing front; but when someone stood in front of the Kinect in a T-pose, distinct ellipses were clearly visible at the arm joints. We compensated for this stark separation of ellipses around the arm joints by adding opacity and blur to the shader as well as to the background draw of the application (Link to code).
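The per-joint sizing could look like the sketch below: a switch over joint indices returning a radius per body part. The indices follow the Azure Kinect body-tracking joint order only loosely, and the radii are placeholders, not the project's actual values.

```cpp
// Illustrative per-joint radius lookup, as described above. Joint indices
// and radii are assumptions for the sketch, not the project's real values.
float jointRadius(int jointIndex) {
    switch (jointIndex) {
        case 0:  return 60.0f;          // pelvis: widest part of the torso
        case 1:  return 55.0f;          // spine
        case 26: return 40.0f;          // head
        case 6:  case 13: return 18.0f; // elbows: small, to limit blobbing
        case 7:  case 14: return 15.0f; // wrists
        default: return 25.0f;          // remaining joints: medium size
    }
}
```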
If we had more time to work on this project, we would focus on two main areas for further development: 1) smoothing the triggering of animations as the distance between the two bodies in front of the Kinect changes, and 2) refining the visualization of the shadows.

It would also be interesting to build for interactions among more than two bodies. What if we triggered animations between the two closest bodies?

We would also like to try saving depth data of the areas around each joint, rather than just the joint positions, to see if this helps us improve the visualization of the shadows.