Unreal Engine Exercise: Zombie Dance

Character

I created this creepy bald zombie in Adobe Fuse. Since I wanted the character to be as absurd as possible, I gave it regular arms and legs but a muscular, robust body to create a sense of conflict. On Mixamo, I played with different built-in animations to see which was the most ridiculous and funniest. Eventually, I chose Samba Dancing.

Scene and camera route

In the scene, set up in Unreal Engine, I made seven zombies at different scales, from 1.0x to 2.2x, aligned them along the same X coordinate, and had them all perform the same movement – Samba Dancing. For the camera, I started from a bird’s-eye view and gradually moved toward the zombie parade while panning the camera toward the zombies’ faces. I found 3D modeling and animation to be a fascinating topic once I got used to manipulating the parameters of actors in a 3D environment. I look forward to diving more deeply into this field to make proper animations and games.

Session 6: Mounting Motors

Structure

Materials

  1. Standoffs
  2. 6V motors * 2
  3. Robot wheels (DC motor compatible)
  4. Batteries
  5. Potentiometer
  6. Switch
  7. Acrylic (White * 1, Black * 1, Transparent * 1)
Process

i. Design the graphic of the face and break it down into separate components in the laser-cutting files

I used white acrylic for the face and black acrylic for the mouth, eyes, and eyebrows.

ii. Laser cut the acrylic and glue the components to their corresponding positions

iii. Test the motors

iv. Mount the switch and motor

I tried to mount the motors by running screws through the holes on top of the motors. However, it didn’t work very well: the spacing of the laser-cut holes was not accurate, and the screws didn’t fit the holes on the motors. I then realized that those holes were probably not designed for mounting. I couldn’t come up with a better way to attach the motors because I had already laser cut all the materials and it was too late to redesign and recut. As a result, I ended up doing the guilty thing – gluing them to the panel. Although this approach is not elegant, it worked pretty well.

v. Solder wires to connect the motors to the switch and the battery, then install them in their corresponding positions.

vi. Do the same thing again with the second motor set

Initially, I intended to use the potentiometer to adjust the rotation speed of the motors. However, it didn’t work the way I imagined. When the resistance of the potentiometer passes a threshold, the motor stops rotating; when the resistance drops just below that threshold, the motor starts to rotate very fast. (In hindsight, a potentiometer in series is a poor speed control for a DC motor: above a certain resistance the motor stalls, and just below it the motor gets nearly full voltage.) Hence, I ended up using the potentiometer as another switch to control the other eye.

vii. Glue the eyes part with the robot wheel and connect them to the motors


viii. Connect different layers with standoffs

End Product

Final Project: Worry Capsule Tree


Project Overview

Worries surround us every day. However, as time passes, we often realize how trivial those problems were in comparison. Worry Capsule Tree is an interactive installation that stores people’s current worries and sends them back in the future, allowing its users to retrieve their previous worries and revisit their past selves.

Collaborators and Roles

Xiran Yang – Voice Recognition/Database Server/Physical Computing/Fabrication

Hau Yuan – Visual Programming/Physical Computing/Fabrication

Technical Features

Coding: Voice Recognition (p5.speech.js) / Database Server (M-Lab) / Interface (After Effects/p5.js) / Serial Communication

Physical Computing: Arduino MEGA/USB microphone/Jumbo RGB LEDs/Neopixel RGB Lights

Fabrication: Soldering/Laser Cutting


How does it work?

1 – Start

At the beginning, users see both a physical and a digital animation.

2 – Say “Hello” to trigger next step

In this step, users have to say “Hello” to move to the next step. The purpose is to let users know that the microphone in front of them is part of the interaction, so they know where and how to use voice input during the recording process later.

2-2 – Say “Hello” to trigger next step

After users say “Hello,” the screen replies “I am listening.” In addition, the tree installation changes its color in response.

3 – Input user’s name

To ask users for information without making them feel that their privacy is being invaded, as in an online survey, I designed this step as a conversation that asks for the user’s name.

“Pardon me for not asking your name! How may I address you?”

Beyond this purpose, I call the user’s name twice in the conversation during the following steps to increase the intimacy and engagement of the experience.

4 – Read the introduction to the tree

The first, second, and third steps are an opening that lets users settle into the experience. The fourth step gives the whole context and idea of the tree. The introduction reads as follows:

Would you like to travel back in time?

In times of introspection, do you ever realize how trivial those problems are in comparison?

Are you under a lot of stress? What’s worrying you now?

Please tell me in one sentence and allow me to carry your anxieties in safekeeping.

Perhaps in the future, you may want access to them.

Once you are ready, please click start to record.

5 – Start to record

At the end of the introduction, the “start to record” button pops up. When users feel ready to talk, they can click the button to start.

6 & 7- Recording and Stop Recording

When users finish recording, they can click this button to stop.

8 – See the sentence they said in text format

Users will see what they said, along with the date and time of that moment.

9 – Upload to the tree

When users click “upload to the tree,” the worries are “conceptually” uploaded to the tree. The screen shows the animation first, and then the LED strips light up gradually from the bottom of the tree to the top, which conveys the image of uploading.
Technically, the program won’t save users’ data to the database until the last step.

10 – Input user’s email

After users enter their email and click finish, their information and the worries they spoke are packaged and stored in the database.


How did we decide to make it?

Idea development

The original idea for Worry Capsule Tree comes from one of the creators: Yuan wanted to release daily stress by keeping his anxiety in some kind of device. After Yuan talked to me, this idea of telling and storing worries reminded me of a time capsule, whose function is to help people store memories that become accessible a long time later.

Both of us liked the concept of storing current anxiety and retrieving it in the future. On one hand, it helps release the stress we have now; on the other hand, it also gives us a chance to revisit our past selves and see how much we have overcome and grown.

Since we decided to build something that deals with emotions and memories, the questions of “how would users best communicate with this capsule” and “how do they emotionally engage with the device” became very important.

These questions prompted us to first think about the appearance of our device. Our first thought was the image of a tree. Trees have the ability to absorb substances from the environment and convert them into nutritious fruit. We think it’s a metaphor for transforming people’s anxiety into assets for their future. Therefore, we decided to build a worry capsule tree that can respond to users and store their worries.

In addition, based on our personal experience, we identified two ways users could interact with this tree: writing their worries down for the tree, or telling their worries to the tree.

To find out which way makes people feel more comfortable interacting with the tree, we did user testing and research.

Our very first prototype

Based on our research, about half of the testers preferred speaking to the tree while the other half preferred writing their worries down. Since both of us prefer speaking to the tree over writing, we finally decided to create an interactive capsule tree that can listen to its users and store their worries.

We also thought about the design of the tree and initially proposed two types of designs.

Xiran’s proposal:

 

Yuan’s proposal:

We decided to follow my proposal, the more sculptural design, because we thought it looked more interesting and could create more intimacy between the tree and its users.

Schedule

After finalizing the idea, we started to think about how to realize it. We figured out that our tree would need four functions:

  • Receive users’ voice input
  • Visualize the voice-input progress
  • Store the voice input
  • Send back the voice input to the user in the future

We decided to use the p5.speech library to receive voice input and transcribe the audio into text, use the M-Lab service to store all the received information in a database, visualize the progress of storing and uploading worries by changing the tree’s behavior, and send the worries back to the users through an email postcard.

Therefore, our project breaks down into three major parts: Physical Computing, Coding, and Fabrication.

Coding: (p5.js/HTML/CSS)

  1. Use p5.speech.js library to do voice recognition;
  2. Use M-Lab service to store voice input and users’ information;
  3. User Interface design/P5-Arduino Serial Communication

Physical Computing

  1. Design different light patterns;
  2. Test different materials and light behaviors using switch and sensors;
  3. Connect the behaviors of lights to the user interface by doing serial communication;

Fabrication

  1. Design the tree’s appearance;
  2. Decide what materials to use;
  3. Fabricate the tree;

How did we make it?

Fabrication

Size

Components & Materials

There are three main parts in this installation:

  1. Load-bearing structure:
    The base of the structure consists of a wood panel and three legs. The space between the wood panel and the ground allows us to easily manage and manipulate the circuits and microcontroller beneath it. We chose copper pipe as the spine of the tree. Apart from giving the tree rigidity, the hollow space inside the pipe hides and gathers all the wires running from the top of the tree to the bottom.
  2. Light source:
    There are three RGB LEDs, six white LEDs, and three NeoPixel strips (each with 30 pixels) on this tree. Their positions are shown as follows:

    Bird’s eye view:

Coding: (p5.js/HTML/CSS)

Voice recognition

The voice recognition is done with the p5.speech library, which can translate audio input into text.
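A minimal sketch of how the recognition piece can look with p5.speech.js (the variable and callback names here are illustrative placeholders, not taken from our project code):

```javascript
// Minimal p5.speech.js sketch: continuous recognition, keeping the latest phrase as text.
// Assumes p5.js and p5.speech.js are loaded via script tags; names are placeholders.
let speechRec;
let lastWorry = '';

function setup() {
  noCanvas();
  // 'en-US' recognition; gotSpeech fires whenever a phrase is recognized
  speechRec = new p5.SpeechRec('en-US', gotSpeech);
  speechRec.continuous = true;      // keep listening after each result
  speechRec.interimResults = false; // only report finalized phrases
  speechRec.start();
}

function gotSpeech() {
  if (speechRec.resultValue) {
    lastWorry = speechRec.resultString; // the transcribed text of what the user said
    console.log(lastWorry);
  }
}
```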

Store information in a database

I used M-Lab’s database service. Basically, you register an account at M-Lab, create a database, and then a collection. Each account has an API key, which you can use to access your database, upload data, and view all the documents you have sent to that database.

Easy Tutorial on how to use M-lab to create database and upload data

M-lab’s tutorial (Also easy to follow)

These two websites provide some very good tutorials on how to set up a database at M-Lab and store data into it.

While M-Lab’s own sample code is written in jQuery, I used p5.js to do the same thing. I mainly used the httpPost() function to push information (the voice text array) into my database.
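As a rough illustration, a call along these lines pushes one document into an M-Lab collection. The database name, collection name, API key, and document fields below are placeholders, and the endpoint string follows the format described in M-Lab’s Data API docs:

```javascript
// Hedged sketch of saving one worry with p5's httpPost().
// 'worrytree', 'worries', and the API key are placeholders for your own database settings.
const MLAB_API_KEY = 'YOUR_API_KEY';
const MLAB_URL =
  'https://api.mlab.com/api/1/databases/worrytree/collections/worries?apiKey=' + MLAB_API_KEY;

function saveWorry(name, email, worryText) {
  const doc = {
    name: name,
    email: email,
    worry: worryText,
    createdAt: new Date().toISOString()
  };
  // httpPost(path, datatype, data, successCallback, errorCallback)
  httpPost(MLAB_URL, 'json', doc,
    (result) => console.log('saved:', result),
    (err) => console.error('upload failed:', err));
}
```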

Once you have created your database in M-Lab and sent some data to it, the database will look something like this:

User Interface Design

All animations were made in After Effects and are embedded and organized into the interface using the p5 DOM library.
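For example, an exported animation can be dropped into the page roughly like this (the file name, size, and position are placeholders, not our actual assets):

```javascript
// Hedged sketch of embedding a pre-rendered After Effects clip with p5's DOM functions.
let introAnim;

function setup() {
  noCanvas();
  introAnim = createVideo('assets/intro.mp4'); // clip exported from After Effects
  introAnim.size(1280, 720);
  introAnim.position(0, 0);
  introAnim.hide(); // keep it hidden until its step in the interface is reached
}

function showIntro() {
  introAnim.show();
  introAnim.play();
}
```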

Serial Communication:

We used the p5.serialport.js library for serial communication. Basically, every time we want the computer side to trigger the tree to change its behavior, we send one byte (a number) from the computer to the Arduino over p5-to-Arduino serial communication.
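The p5 side of that handshake can look roughly like this. It assumes the p5.serialcontrol app (or another serial server) is running; the port name and the meaning of the state numbers are placeholders:

```javascript
// Hedged sketch of sending one-byte state changes to the Arduino with p5.serialport.js.
let serial;

function setup() {
  serial = new p5.SerialPort();
  serial.open('/dev/tty.usbmodem14101'); // replace with your Arduino's port name
}

// Called whenever the interface reaches a new stage; the Arduino sketch
// reads this byte and switches the tree's light pattern accordingly.
function sendTreeState(state) {
  serial.write(state); // e.g. 1 = "hello" acknowledged, 2 = uploading animation
}
```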

*You can view our code here (we used the p5.speech.js and p5.serialport.js libraries, but animations and all libraries are excluded from the code below):

https://gist.github.com/xiranisabear/c71a31c7ef521a17790bdd18df384e1c

Physical Computing

We used Jumbo RGB LEDs, Neopixel RGB LED strips as well as some white Jumbo LEDs.

We first tested all the light behaviors on the Arduino alone, using some switches and a sound sensor.

After we made sure that everything worked well on the Arduino, we used serial communication to let the computer control the behavior of all the lights.

Our final code is shared below:

https://gist.github.com/xiranisabear/d37f9358aab43c246481d87e5e4485f9

Session 5: Materials and fasteners

After I saw the mind-blowing Perch Light in class, I decided to use paper and metal sheet as my materials this week, and I found an interesting origami tutorial on YouTube.

In the tutorial, the video maker used two kinds of colored paper to make an origami Mandala. My goal was to replace one of them with a copper sheet. The Mandala consists of eight identical shapes that connect to their neighbors; after joining, they form a circle.

Materials
Copper Sheet 
Construction Paper
Step 1

Cut the paper to the right size (7.5 cm x 7.5 cm)

Step 2

Cut the copper sheet to the right size (7.5 cm x 7.5 cm)

Step 3

Fold the paper into the shape of the Mandala petal

Step 4

Fold the copper sheet into the shape of the Mandala petal

Step 5

Connect the pieces together

The main problem I ran into was the thickness and foldability of the copper sheet. I underestimated its thickness: after folding, it became much thicker than the paper piece. It was also tough to unfold, which affected the final step – connecting the elements. As a result, the piece doesn’t look harmonious. I think if I want to take advantage of the metal sheet, I shouldn’t fold it that much.

After Effects Final Project: Angel Devil

To maintain visual consistency, we decided to have one person design the visual assets first and then let everyone animate them. I was in charge of creating the visual assets. For the visual style, I used strokes with a crayon texture to create a childlike drawing. However, we kept the fill on the main character – Angeldevil – because we wanted it to stand out from the scene. Since we chose a simple, childlike visual style, the assets look quite similar to our storyboard. I made them in Adobe Illustrator. All the assets are shown below:

Assets

When designing the assets, I had to consider all the details of what would happen after we imported them into After Effects. Hence, I was already animating and staging them in my mind as I drew. I had to pay attention to which parts would be animated, which parts should be static, and which elements should be parented together, and I organized them into Illustrator layers accordingly. Naming all these layers was another challenge, because the names had to be understandable and intuitive enough for my teammates to quickly catch up and work with them.

layers

To make sure our style stayed consistent, we met over the weekend during the Thanksgiving break and worked together until we finished. In the beginning, it took us hours to animate just one thing, but once we got familiar with all the tricks, it got faster and faster. All in all, apart from learning After Effects skills, I learned a lot about project and time management since I was in charge of designing the visual assets. Even though our animation is straightforward and easy to understand, it still involved tons of details to consider.

Session 10: Final Project – Iteration 1

The result of the playtesting

Interestingly, the playtesting showed that 50% of the testers chose writing as their way to convey their worries and the other 50% chose the microphone. The main reason people chose writing was privacy: they said they didn’t want others to hear their thoughts in public. On the other hand, the microphone group said it is easier and more intuitive to just say the worries out loud. One tester added that when he writes something, he has to care about grammar and think about the structure of the writing. In other words, recording the voice feels more genuine than writing the words down.

Eventually, we decided to use the microphone as our input because we think it is more engaging in terms of interactivity; we want to create the illusion that users are actually talking to the tree. We also decided to use the p5.js speech library for two reasons. First, uploading an audio file is technically harder than directly recording sound on the computer. Second, we want to use email as the format for sending the content back to users. Hence, p5.js voice recognition is a perfect fit for our needs, since it converts users’ voices into text strings.

The following are the graphs of the user flow:

Session 4: Enclosures

This week, I combined the assignment with the prototype of the Physical Computing final project, which is an installation of a tree. We tried two different directions: the first is a 3D tree sculpture, and the second is a 2D tree. I was in charge of creating the latter. Eventually, I used standoffs and laser-cut panels to make this prototype. The following are the materials I used:

Materials
Acrylic (Thickness: 1/16 “, Color: Clear)
Standoffs
White LED
Cardboard
Steps

i. Create laser-cut files in Adobe Illustrator. I used the first file to etch the tree shape onto cardboard; it also has twelve circles on top of the canopy, which become the holes for installing the LEDs. The second file is for laser cutting the acrylic panels. There are two panels with different purposes: the first protects the cardboard and the second holds all the electronics.

ii. Laser Cutting

iii. Test whether the standoffs fit into the holes in the acrylic and cardboard.

iv. Install the electronics and LEDs

The LEDs fit perfectly into each hole.
These are 3V LEDs powered by a 9V battery, so I grouped three LEDs as one set (three 3V LEDs roughly match the 9V supply).
Install power source and switch
Install the bottom panel
End Product

 

 

Session 9: Final Project Proposal

Inspirations

My final project inspiration came from my p-comp midterm project and my personal experiences. My midterm project’s idea was to use sound to evaluate a person’s stress. After the midterm, I made up my mind that I wanted to do something related to emotion for my final project.

As a consequence, I started to dig deep into myself. I am a very rational person, and I rarely lose control of things in my life. However, when I feel desperately stressed and upset because of a sudden emergency or crazy schedules and deadlines, my mind shuts down and I can barely do anything, like a paralyzed soldier. Under those circumstances, I stop what I am doing and write down my thoughts, my worries, and the reasons why I feel so negative. After I talk to my notebook or a random piece of paper, I feel relieved and can start to organize my mind again.

Half a year later, when I saw those worries and thoughts again, the emotion had turned from negative to positive. It’s interesting and worthwhile to ruminate on the negative thoughts generated by your past self. The scenario is something like: Why did this situation trap me so badly a year ago? Could I cope with the same emotion if I faced the same situation again?

As a result, I want to transform this interaction with myself into a physical interactive device. I want to create a device or installation that is very approachable: you can tell it all your secrets, and it will “listen” and help you store your worries and thoughts. The device will upload your thoughts to the computer, and you can set a date on which you want to hear them again. In a nutshell, this is a “time capsule” for your emotions.

Workflow of the idea