Space Oddity – Animation 3

I have long been interested in astronomy and science fiction. So for the Unity project, I wanted to make a scene where a character is lost in the magnificent universe. I wanted to convey a sense of loneliness and of the insignificance of human beings in space.

I created the character with Fuse and Mixamo.

I found an ambient space sound in the Unity Asset Store before animating the scene, so that the animation would work with the sound effects.


I used three animations from Mixamo (dancing, floating, and flying) to animate my character. I used a blend tree to blend these three actions together: I made the fly action's weight grow gradually in the blend tree, so the character starts by space-walking and then slowly floats away.
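Unity blend trees are set up in the editor and driven from C#, but the weighting idea can be sketched in plain JavaScript. Everything here (function names, the 50/50 split, the ramp duration) is illustrative, not taken from my actual project:

```javascript
// Sketch of the blend-tree idea: three normalized animation weights,
// with the "fly" weight ramping up over time so the character drifts away.
function blendWeights(t, flyRampSeconds) {
  // fly grows linearly from 0 to 1 over flyRampSeconds, then stays at 1
  const fly = Math.min(t / flyRampSeconds, 1);
  // the remaining weight is split evenly between dancing and floating
  const rest = 1 - fly;
  const dance = rest * 0.5;
  const floating = rest * 0.5;
  return { dance, floating, fly }; // the three weights always sum to 1
}
```

At t = 0 the character is half dancing, half floating; as t approaches the ramp duration, the fly weight takes over completely.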

I then imported a universe skybox from the Asset Store. Now the character could fly around in the sky. I wanted to make the space scene more dynamic, so I created some rocks/meteorites and added scripts to make them rotate.


I also added a spaceship and a space station from the Asset Store. I wanted to add scripts to make them move in an orbit, but I couldn't get it to work.

I ended up grouping the rocks and the spaceship and applying the spinning script to the parent group, so they rotate as if in an orbit.
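This trick works because rotating a parent makes each child circle the parent's pivot. The underlying math is just a rotation about a point, shown here as a language-agnostic 2D sketch (not my actual Unity script, which only sets the group's rotation each frame):

```javascript
// Rotate a point around a pivot by `angle` radians.
// A child at some offset from the parent's pivot traces a circle
// (an orbit) as the parent's rotation angle increases.
function orbit(point, pivot, angle) {
  const dx = point.x - pivot.x;
  const dy = point.y - pivot.y;
  return {
    x: pivot.x + dx * Math.cos(angle) - dy * Math.sin(angle),
    y: pivot.y + dx * Math.sin(angle) + dy * Math.cos(angle),
  };
}
```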

Finally, I added camera panning to the entire scene using keyframe recording.







Time–the Process

Images taken by Nicolas Peña-Escarpentier – thanks for the awesome photos!

What is time? Is time a logical thing? The project "Time" started from asking simple questions about time and its standards, eventually evolving into a combination of a physical piece and a digital visualization about different time zones. The physical piece consists of two parts: an inner cylinder that represents the UTC time zone map, and a rotatable outer sphere that contains a light source. As the user rotates the outer sphere, the light moves. While the light moves, the inner cylinder, which has 24 light sensors, one for each longitude, reacts to the changing brightness.

The physical piece will be installed on a transparent, round table that I personally own, approximately 40" in diameter and 30" in height. People should be able to walk around the table and see the cylinder time zone map inside the sphere. The digital visualization will be projected from underneath the table; thanks to its transparency, the table can directly show the visualization on its top surface. In this way, people can view both the physical and the digital piece without being distracted.

When the light shifts from one time zone to another, the digital visualization reflects the movement by changing its gallery of skylines. The skylines come from a survey of the ITP community about which cities they came from, and are organized by longitude.

All the coding parts can be found in my GitHub repository:






The skyline photos are collected in a single folder and organized per city, with the same 24 steps for each. The archive is accessed via a .json file.
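A minimal sketch of how such an archive might be organized and queried. The structure, city names, and file paths here are hypothetical, not our exact .json:

```javascript
// Hypothetical archive: each city has 24 skyline frames, one per step (hour).
const archive = {
  "seoul":    { longitude: 127, frames: Array.from({ length: 24 }, (_, i) => `seoul/${i}.jpg`) },
  "new-york": { longitude: -74, frames: Array.from({ length: 24 }, (_, i) => `new-york/${i}.jpg`) },
};

// Look up the skyline frame for a city at a given step,
// wrapping around so step 24 maps back to frame 0.
function skylineFrame(city, step) {
  return archive[city].frames[step % 24];
}
```

In the real sketch the archive would be loaded with p5.js's `loadJSON()` in `preload()`, and the longitude field would drive how cities are ordered on screen.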

The 24 light sensors send an array of numbers. During this process, we had to add black tubes around the sensors to block ambient light. When the light source is not close enough, readings are usually under 10–15; when the light source is directly in front of a sensor, it gives values between 50 and 100. Using that, if there is enough difference between the max and min readings, the sensor with the max reading is marked as 12:00 PM.


What happens if the difference between max and min is too small? For instance, when the light is off because I put a switch on the light source without considering that everyone would press it. It basically puts the whole sketch to "sleep." This won't and shouldn't happen during the show, but since our work is not meant to literally show scientific information, we found it makes a good visual effect that wraps up the whole idea.
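The max/min logic described above could look roughly like this. The function and parameter names are made up, and the spread threshold just follows the readings mentioned earlier (around 10–15 ambient vs. 50–100 lit):

```javascript
// Given the 24 sensor readings, find which sensor faces the light.
// If the spread between max and min is too small (light off or too far),
// return -1 so the sketch can go into "sleep" mode.
function noonSensor(readings, minSpread = 30) {
  const max = Math.max(...readings);
  const min = Math.min(...readings);
  if (max - min < minSpread) return -1; // sleep mode
  return readings.indexOf(max);         // this sensor's zone is 12:00 PM
}
```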


Second User Testing:

In order to give a general idea of how it functions, I brought the partially completed physical piece and a version of the sketch that works with slider input. Although we were in the finishing stage at this point, the user testing was extremely helpful for making some small changes. The most common feedback I got was that it is hard to tell where the light is located in the sketch.


After the user testing, we realized the need for a light indicator and initially built a white line that runs across the sketch and points to the max input, which is the version I presented in the last ICM class. Later it became a line with an orange gradient, to make the light change more noticeable and dramatic. Furthermore, the text color for the max input zone will turn yellow as well.



This is the second short video, containing some of the process and imagery of how it works. There will be more documentation and modification before the show.


Animation – Nevermind

Final work

Roland and I are going to work together to animate album covers, starting from Nirvana's Nevermind and ending with David Bowie's Blackstar. The baby from the Nirvana cover will travel through 26 albums, one for each year from 1991 to 2016, and interact with the characters/scenes in each album. The baby will age over the journey and be an adult by the end. The soundtrack will also evolve along with the animated album covers.


Here is our detailed storyboard:

And here is a rough sketch of our animation.



Becoming a Machine

I have always been interested in cyborgs; I am interested in the idea that machines have fundamentally changed the way we perceive and experience the world, and have even become an extension of our bodies. For example, we start to look at the world through a camera lens; we use social media so much that it has become part of our identity.


Following this idea, I want to make a project that serves as a metaphor for how machines are becoming part of our bodies, and that also encourages human contact. That is, I want to transform myself into a machine that can only be activated by human contact. The project has two parts.


ICM part

I want to feed my social media data into machine learning tools and generate auto-responses based on it. Whenever someone talks to me, "I" will respond with an auto-response generated by machine learning (or not machine learning, but simply pulled from my social media content).
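One simple non-machine-learning way to do this is a Markov chain over one's own posts. This is a toy sketch with placeholder text, not my real data or the tool I will actually use:

```javascript
// Toy sketch: a bigram Markov chain "trained" on sample social media text.
// buildChain maps each word to the list of words that follow it.
function buildChain(corpus) {
  const chain = {};
  const words = corpus.split(/\s+/);
  for (let i = 0; i < words.length - 1; i++) {
    (chain[words[i]] = chain[words[i]] || []).push(words[i + 1]);
  }
  return chain;
}

// Walk the chain from a start word, picking random successors,
// stopping early if a word has no recorded successor.
function generate(chain, start, maxWords) {
  let word = start;
  const out = [word];
  for (let i = 0; i < maxWords - 1; i++) {
    const next = chain[word];
    if (!next) break;
    word = next[Math.floor(Math.random() * next.length)];
    out.push(word);
  }
  return out.join(" ");
}
```

Fed with enough real posts, the output starts to sound vaguely like its author, which is exactly the uncanny effect I am after.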

PCOM part

I want to transform my eyes into camera lenses, with the camera function only triggered by human contact. A screen on my helmet will also display the photos my "eyes" capture.



ICM/PCOM Final Idea

Coming from my own experience of living in different parts of the world, I am interested in globalization and in the day/night shift created by the earth's spinning. For my final project, I want to collaborate with Alice to create an experience where people can travel through space and time through the lens of global webcams.

We decided to combine and expand our midterm designs. Here is a sketch of the installation.


When someone approaches the installation (a globe), he/she will be instructed to pick up a flashlight. He/she can then walk around the installation with the flashlight, like a sun. Wherever the light hits, a live webcam video of that place will be shown in projections on the ceiling. If several people interact with the installation at the same time, multiple live streams will be shown. And if a considerable number of people are lighting up the globe, a random abstract painting will be generated from these webcams.
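The multi-flashlight behavior can be sketched as: read all the sensors, and every sensor above a threshold gets a live stream. The names and threshold here are placeholders, not our final code:

```javascript
// Map light-sensor readings on the globe to the set of "lit" zones.
// Each lit zone shows its city's live webcam stream; multiple flashlights
// simply light several sensors at once.
function activeZones(readings, threshold = 50) {
  const zones = [];
  readings.forEach((value, zone) => {
    if (value >= threshold) zones.push(zone);
  });
  return zones;
}
```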


We can't necessarily cover every single place on earth, so we decided to narrow it down and make the installation relevant to the ITP community. We collected where ITP people come from, and we will choose cities based on that.



This is documentation of my PCOM midterm project, in which I made a globe reactive to light (the sun). I put light sensors on the globe so that whenever light hits a sensor, it triggers a video to play. The same mechanism will be used for this project.



ICM Part 

We made a demo, just to test getting webcam data from

full screen view:

Question 1: How do we show multiple images on the screen and make their size/position responsive to the number of images/videos?

Question 2: Some cities have a lot of webcam resources, while for others no webcam resource is available online. How do we choose, and what do we show for cities that do not have webcam videos?
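For the first question, one possible starting point is to compute a near-square grid from the number of videos and derive each cell's size and position. This is a hypothetical helper, not our final solution:

```javascript
// Lay out n videos in a near-square grid inside a canvasW × canvasH area.
// Returns an {x, y, w, h} rectangle for each video, left to right, top to bottom.
function gridLayout(n, canvasW, canvasH) {
  const cols = Math.ceil(Math.sqrt(n));
  const rows = Math.ceil(n / cols);
  const cellW = canvasW / cols;
  const cellH = canvasH / rows;
  const cells = [];
  for (let i = 0; i < n; i++) {
    cells.push({
      x: (i % cols) * cellW,
      y: Math.floor(i / cols) * cellH,
      w: cellW,
      h: cellH,
    });
  }
  return cells;
}
```

Recomputing the layout whenever a stream is added or removed keeps the positions responsive to the number of videos.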



Some references.


Alice’s past project.

Our globe installation will be a combination of the above reference and Alice's previous project. We will use cardboard to fabricate the globe shape, but we will also keep the protruding contours that represent the divisions between time zones (as shown in Alice's project).



  1. Size: How big should the installation be? We want it big enough to allow multi-person interaction, but we also need to consider our ability to fabricate it.
  2. Projectors: How do we mount the projector? And how do we deal with airflow if we put the projector inside the installation?

Animation – The Secret Life of LED

This is so much harder and time-consuming than I expected.

Concept Development 

Our group met for the first time on Friday afternoon to nail down the medium and concept. We had several ideas, but eventually we agreed to animate an everyday object and tell a story about its "secret life." We decided to use an Arduino, because that might be the object most relevant to all ITP students. We came up with a rough story line: the Arduino parts have a dance party inside the box while their owner is away.



Over the weekend, I drew a storyboard based on what we discussed. I came up with a more concrete story line, with an LED as the protagonist, trying to wake up the other parts.




Prepare for Shooting 

We fine-tuned our storyboard on Monday morning and prepared the materials and equipment for shooting. We debated whether to shoot from the top or from the side. We wanted the more dynamic motion of shooting from the side, but it would be harder to fix our objects in place. We came up with the idea of mounting our LEDs on a foam board.


It took so long to set up!

Here are some set photos.






- It was difficult to animate walking; it took us almost 3 hours.

- The LED's leg broke!




