Session 1: Physical Pixelation

My work was inspired by the artist Zenyk Palagniuk, who used 13,000 nails and thread to create a portrait. In my view, this technique perfectly embodies the idea of pixelation. What makes it even more interesting is that it combines two materials, unlike most pixelation works. Since there is no step-by-step tutorial, I simply emulated his method.


Step 1: I sketched the outline of an eye.

Step 2: I drove nails into the wood. In the darker parts, such as the eyebrow and the pupil, I placed more nails; in the lighter parts, I used fewer.

Step 3: I wound the thread around the nails. As you can see, the thread is denser in the darker areas.

The eye is recognizable but not very sharp. I didn’t handle the gradient effect for the shadows on the face very well. I think I need more nails to make the contrast more obvious.


Session 2: Lock Screen

(Still in progress…)

Interactions and feedback

Users have to swipe the correct sequence to unlock the screen. If they input a wrong sequence, the screen turns red and tells them to try again. If they input the right sequence, the screen turns green and displays “unlock”.
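The core of this interaction is just a sequence comparison. Here is a minimal JavaScript sketch of that idea, not the actual Swift code from my project; the pattern, the node indices, and the attempt limit are all illustrative assumptions.

```javascript
// Minimal sketch of the unlock-sequence check.
// The correct pattern and max attempts are hypothetical values.
const CORRECT_SEQUENCE = [0, 4, 8, 5];
const MAX_ATTEMPTS = 3;

function checkSwipe(attemptedSequence, attemptsUsed) {
  if (attemptsUsed >= MAX_ATTEMPTS) {
    return { status: "locked", message: "Please restart the app" };
  }
  const matches =
    attemptedSequence.length === CORRECT_SEQUENCE.length &&
    attemptedSequence.every((node, i) => node === CORRECT_SEQUENCE[i]);
  return matches
    ? { status: "unlocked", message: "unlock" }                   // screen turns green
    : { status: "retry", message: "Wrong pattern, try again" };   // screen turns red
}
```

The “3 attempts before reset” requirement then reduces to tracking `attemptsUsed` and refusing further input once the limit is hit.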

The problems I met

I ran into some issues while doing the assignment. First, I chose the swipe method and followed the sample code from the class repository. I created my own project, followed the code step by step, and found that my buttons weren’t responsive to user interaction. I couldn’t figure out why until I realized I hadn’t checked “User Interaction Enabled” in the element’s attributes panel. Another problem is the layout: it appears correctly in the simulator but looks very odd on my iPhone, as in the screenshot below. I know this comes from the constraints, but I tried different arrangements and still got the same issue. Besides, I haven’t finished the “only 3 attempts possible before the app needs to be reset/restarted” part. I’ll keep updating this blog post until I complete it.

Session 1: One button hookup


The idea is super straightforward yet creepy. I wanted to humanize my iPhone and do something very mean to it. That is why I created this little purple monster and “torture” it through a one-button interaction.

When you tap on the eye of the little monster, it shouts and its eye becomes bloodshot. As you keep tapping, the eye grows scarier and more angry marks appear on its face. Eventually, the monster closes its eye.
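This escalation can be modeled as a simple function of the cumulative tap count. The sketch below is in JavaScript for illustration only; the thresholds and state names are hypothetical, not the values from my actual project.

```javascript
// Map the cumulative tap count to the monster's state.
// Thresholds are illustrative assumptions, not the real project values.
function monsterState(tapCount) {
  if (tapCount === 0) return "calm";
  if (tapCount < 5) return "shouting";   // bloodshot eye, shout sound plays
  if (tapCount < 10) return "angry";     // scarier eye, angry marks appear
  return "eye-closed";                   // monster finally closes its eye
}
```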

User Interfaces

I used Adobe Illustrator to design all the graphics and imported them into Xcode. The sound effect is from

GitHub Repository

Session 2: Ideas for the project

I want to create something that brings the joy of dining and cooking back to our lives. I have a workaholic friend who currently works as a software developer. He never cooks, and sometimes he doesn’t even have enough time to enjoy a meal. Soylent, a beverage containing the protein, carbohydrates, lipids, and micronutrients our bodies need for a whole day, has become his best solution short of skipping meals or eating microwave food. I agree it’s a better option than fast food, since Soylent doesn’t just provide calories; it gives you every nutrient you need.

However, in my view, treating Soylent as the primary food source of daily life is a palliative way to squeeze more time out of our overloaded lives. In the long term, I don’t think it’s beneficial. It keeps you alive and costs you only a few minutes, but it also makes you lose the chance to enjoy meals once you rely on it. Eating is one of the best things in life. It’s sad to treat food as mere nutrition instead of a beautiful experience.

Hence, I aim to create a fun experience that mitigates this phenomenon and brings the happiness of eating and cooking back to our lives. There are existing services like Blue Apron that deliver packages of ingredients along with suggested step-by-step recipes. I want to create something beyond that: what I’m thinking of is allowing people to grow their own ingredients.

In a nutshell, in this era of industrialized farming, I want to let people enjoy growing the ingredients they need, cooking them, and then eating them.



Session 1: My Vision in BioDesign Class

Before I came to ITP, I studied agriculture in Taiwan. However, I didn’t like the conventional way my college taught, so I started learning visual design in my junior year. After graduating, I worked as a UX/UI designer at a tech start-up. This class not only lets me revisit the field I used to study; its experimental approaches to bio-design are what really excite me.

We don’t thoroughly understand the food we eat: where did it come from, and what nutrition does it provide? We can quickly meet our daily calorie needs, yet we remain unfamiliar with the ingredients. Agrochemical companies have used biotechnology to develop crops that do not produce viable offspring, aiming to monopolize the seed market. What’s worse, they have systematized the commercial structure linking crops, pesticides, and herbicides. These approaches all target profit, specifically maximizing crop yields, and have contributed to the “Great Nutrition Collapse” of the food we eat.

Besides food and nutrition issues, I am also interested in urban farming, vertical farms, and other unconventional farming methods that aim to mitigate our environmental problems. All in all, I look forward to using creative approaches to tackle these severe issues.

Unreal Engine Exercise: Zombie Dance


I created this creepy bald zombie in Adobe Fuse. Since I wanted the character to be as absurd as possible, I used regular arms and legs but gave it a muscular, robust body to create a sense of conflict. On Mixamo, I played with different built-in animations to see which was the most ridiculous and funniest. Eventually, I chose Samba Dancing.

Scene and camera route

In the scene set in Unreal Engine, I made 7 zombies of different sizes, from 1.0X to 2.2X, aligned them along the same X coordinate, and had them all do the same movement: Samba Dancing. For the camera, I started from a bird’s-eye view and gradually moved toward the zombie parade, panning so the camera would catch the zombies’ faces. I found 3D modeling and animation a fascinating topic once I got used to manipulating the parameters of actors in a 3D environment. I look forward to diving deeper into this field to make legit animations and games.

Session 6: Mounting Motors



  1. Standoffs
  2. 6V motors * 2
  3. Robot wheels (DC motor compatible)
  4. Batteries
  5. Potentiometer
  6. Switch
  7. Acrylic (White * 1, Black * 1, Transparent * 1)

i. Design the graphic of the face and break it down into different components in the laser-cutting files

I used white acrylic for the face and black acrylic for the mouth, eyes, and eyebrows.

ii. Laser-cut the acrylic and glue the components to their corresponding positions

iii. Test the motors

iv. Mount the switch and motors

I tried to mount the motors by driving screws into the holes on top of the motors. However, it didn’t work well: the spacing of the laser-cut holes wasn’t accurate, and the screws didn’t fit the motor holes. I then realized those holes might not be designed for mounting at all. I couldn’t figure out a better approach, because I had already laser-cut all the materials and it was too late to redo the design. As a result, I ended up doing the guilty thing: gluing the motors to the panel. It’s not elegant, but it worked pretty well.

v. Solder wires to connect the motors to the switch and the battery. Install them in their corresponding positions.

vi. Do the same thing again with the second motor set

Initially, I intended to use the potentiometer to adjust the rotation speed of the motors. However, it didn’t work the way I imagined: when the resistance of the potentiometer passes a threshold, the motor stops rotating, and when the resistance drops just below the threshold, the motor spins very fast. Hence, I ended up using the potentiometer as a second switch to control the other eye.

vii. Glue the eye parts to the robot wheels and connect them to the motors

viii. Connect different layers with standoffs

End Product

Final Project: Worry Capsule Tree


Project Overview

Worries surround us every day, yet as time passes, we often realize how trivial those problems were in comparison. Worry Capsule Tree is an interactive installation that stores people’s current worries and sends them back in the future, allowing users to retrieve their previous worries and revisit their past selves.

Collaborators and Roles

Xiran Yang – Voice Recognition/Database Server/Physical Computing/Fabrication

Hau Yuan – Visual Programming/Physical Computing/Fabrication

Technical Features

Coding: Voice Recognition (p5.speech.js) / Database Server (M-Lab) / Interface (After Effects/p5.js) / Serial Communication

Physical Computing: Arduino MEGA/USB microphone/Jumbo RGB LEDs/Neopixel RGB Lights

Fabrication: Soldering/Laser Cutting

How it works

1 – Start

At the beginning, users see both physical and digital animations.

2 – Say “Hello” to trigger next step

In this step, users have to say “Hello” to proceed. The purpose is to let users know that the microphone in front of them is part of the interaction, so they will know where and how to use voice input during the recording process later.

2-2 – Say “Hello” to trigger next step

After users say “Hello,” the screen replies “I am listening.” In addition, the tree installation changes its color in response.

3 – Input user’s name

To ask users for information without making them feel their privacy is being invaded, as an online survey might, I designed this step as a conversation asking for the user’s name.

“Pardon me for not asking your name! How may I address you?”

Beyond this purpose, I call the user’s name twice in the conversation during the following steps to increase the intimacy and engagement of the experience.

4 – Read the introduction to the tree

The first three steps are an opening that lets users dissolve into the experience. The fourth stage gives the whole context and idea of the tree. Here is the introduction:

Would you like to travel back in time?

In times of introspection, do you ever realize how trivial those problems are in comparison?

Are you under a lot of stress? What’s worrying you now?

Please tell me in one sentence and allow me to carry your anxieties in safekeeping.

Perhaps in the future, you may want access to them.

Once you are ready, please click start to record.

5 – Start to record

At the end of the introduction, a “start to record” button pops up. When users feel ready to talk, they can click it to begin.

6 & 7- Recording and Stop Recording

When users finish recording, they can click this button to stop.

8 – See the sentence they said in text format

Users will see what they said, along with the date and time of that moment.

9 – Upload to the tree

When users click “upload to the tree,” their worries are “conceptually” uploaded to the tree. The screen shows the animation first, and then the LED strips light up gradually from the bottom of the tree to the top to convey the image of an upload.
Technically, the program doesn’t save users’ data to the database until the last step.

10 – Input user’s email

After users enter their email and click finish, their information and the worries they spoke are packaged and stored in the database.

How did we decide to make it?

Idea development

The original idea for Worry Capsule Tree comes from one of the creators, Yuan, and his desire to release daily stress by keeping his anxiety in some kind of device. When Yuan talked to me, this idea of telling and storing worries reminded me of a time capsule, whose function is to help people store memories that become accessible long afterward.

Both of us liked the concept of storing current anxiety and retrieving it in the future. On one hand, it helps release the stress we have now; on the other, it gives us a chance to revisit our past selves and see how much we have overcome and grown.

Since we had decided to build something that deals with emotions and memories, the questions of “how would users best communicate with this capsule” and “how do they emotionally engage with the device” became very important.

These questions led us first to think about the appearance of our device. Our first thought was the image of a tree. Trees absorb substances from the environment and convert them into nutritious fruits; we saw this as a metaphor for transforming people’s anxiety into future assets. Therefore, we decided to build a worry capsule tree that responds to users and stores their worries.

In addition, based on our personal experience, we identified two ways users could interact with the tree: writing their worries down, or telling their worries to the tree.

To learn which way makes people feel more comfortable interacting with the tree, we did user testing and research.

Our very first prototype

Based on our research, about half of the testers preferred speaking to the tree while the other half preferred writing their worries down. Since both of us preferred speaking over writing, we finally decided to create an interactive capsule tree that listens to its users and stores their worries.

We also thought about the design of the tree, and first proposed two types of designs.

Xiran’s proposal:


Yuan’s proposal:

We decided to follow my proposal, a more sculptural design, because we thought it looked more interesting and could create more intimacy between the tree and its users.


After finalizing the idea, we started to think about how to realize it. We figured out that our tree would need four functions:

  • Receive users’ voice input
  • Visualize that voice inputting progress
  • Store the voice input
  • Send back the voice input to the user in the future

We decided to use the p5.speech library to receive voice input and transcribe it into text, use the M-Lab service to store all the received information in a database, visualize the progress of storing and uploading worries by changing the tree’s behavior, and send the worries back to users through an email postcard.

Therefore, our project breaks down into three major parts: Physical Computing, Coding, and Fabrication.

Coding: (p5.js/HTML/CSS)

  1. Use p5.speech.js library to do voice recognition;
  2. Use M-Lab service to store voice input and users’ information;
  3. User Interface design/P5-Arduino Serial Communication

Physical Computing

  1. Design different light patterns;
  2. Test different materials and light behaviors using switch and sensors;
  3. Connect the behaviors of lights to the user interface by doing serial communication;

Fabrication

  1. Design the tree’s appearance;
  2. Decide what materials to use;
  3. Fabricate the tree;

How we made it



Components & Materials

There are three main parts in this installation:

  1. Load-bearing structure:
    The base of the structure consists of a wood panel and three legs. The space between the wood panel and the ground lets us easily manage and manipulate the circuits and microcontroller beneath it. We chose copper pipe as the spine of the tree: apart from giving the tree rigidity, the hollow space inside the pipe hides and gathers all the wires running from the top of the tree to the bottom.
  2. Light source:
    There are three RGB LEDs, six white LEDs, and three Neopixel strips (each with 30 pixels) on this tree. Their positions are shown below:

    Bird’s eye view:

Coding: (p5.js/HTML/CSS)

Voice recognition

Voice recognition is done with the p5.speech library, which translates audio input into text.
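The flow around p5.speech can be sketched as below. This is a simplified illustration, not our actual code: `detectTrigger` and `setupRecognition` are hypothetical names, and only `p5.SpeechRec` comes from the p5.speech library itself.

```javascript
// Pure helper: check whether the transcribed text contains the trigger word
// that advances the experience (e.g. the "Hello" step).
function detectTrigger(resultString, trigger = "hello") {
  return resultString.trim().toLowerCase().includes(trigger);
}

// Browser-side wiring (not executed here) — p5.SpeechRec is provided by
// p5.speech.js, which must be loaded in the page.
function setupRecognition() {
  const rec = new p5.SpeechRec("en-US", () => {
    if (detectTrigger(rec.resultString)) {
      // advance to the "I am listening" screen and change the tree's color
    }
  });
  rec.continuous = true; // keep listening across utterances
  rec.start();
}
```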

Store information in a database

I used M-Lab’s database service. Basically, you register an account at M-Lab, create a database, then a collection. Each account has an API key, which you can use to access your database, upload data, and view all the documents you have sent to it.

Easy Tutorial on how to use M-lab to create database and upload data

M-lab’s tutorial (Also easy to follow)

These two websites provide some very good tutorials on how to set up a database at M-Lab and store data into it.

While M-Lab’s own sample code is written in jQuery, I used p5.js to do the same thing. I mainly used the httpPost() function to push information (the voice text array) into my database.
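The upload can be sketched roughly as follows. The URL shape follows M-Lab's REST API (database, collection, API key as a query parameter), but the database name, collection name, and document fields below are illustrative assumptions, and `MY_API_KEY` is a placeholder.

```javascript
// Pure helper: build the request URL and document for an mLab-style REST API.
// All names here (worrytree, worries, field names) are hypothetical.
function buildRequest(db, collection, apiKey, worry) {
  return {
    url: `https://api.mlab.com/api/1/databases/${db}/collections/${collection}?apiKey=${apiKey}`,
    doc: {
      name: worry.name,
      email: worry.email,
      text: worry.text,           // the transcribed worry sentence
      createdAt: worry.createdAt, // date/time shown to the user in step 8
    },
  };
}

// In the p5 sketch (not executed here), the upload is one httpPost call:
function uploadWorry(worry) {
  const { url, doc } = buildRequest("worrytree", "worries", "MY_API_KEY", worry);
  httpPost(url, "json", doc, (response) => console.log("stored:", response));
}
```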

Once you have created your database in M-Lab and sent some data into it, the database will look something like this:

User Interface Design

All animations were made in After Effects and are embedded and organized into the interface using the p5 DOM library.

Serial Communication:

We used the p5.serialport.js library for serial communication. Basically, every time we want the computer side to trigger the tree to change its behavior, we send one byte (a number) from the computer to the Arduino over serial.
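A one-byte protocol like this reduces to a mapping from interface events to command numbers. The sketch below illustrates the idea; the event names and byte values are assumptions for illustration, not our actual protocol.

```javascript
// Hypothetical mapping from interface events to one-byte commands
// understood by the Arduino on the other end of the serial link.
const TREE_COMMANDS = {
  idle: 0,      // ambient animation
  listening: 1, // user said "Hello" — tree changes color
  uploading: 2, // LED strips sweep from bottom to top
  done: 3,      // worry stored — return to ambient
};

function commandFor(event) {
  return TREE_COMMANDS[event] ?? TREE_COMMANDS.idle; // unknown events fall back to idle
}

// In the p5 sketch (not executed here), using a p5.serialport connection:
function sendToTree(serial, event) {
  serial.write(commandFor(event)); // one byte per state change
}
```

Keeping the protocol to single bytes makes the Arduino side a simple `switch` on `Serial.read()`.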

*You can view our code here. (We used the p5.speech.js and p5.serialport.js libraries; animations and all libraries are excluded from the code below.)

Physical Computing

We used Jumbo RGB LEDs, Neopixel RGB LED strips as well as some white Jumbo LEDs.

We first tested all the light behaviors with the Arduino alone, using some switches and a sound sensor.

After we made sure everything worked on the Arduino, we used serial communication to let the computer control the behaviors of all the lights.

Our final code is shared below:

Session 5: Materials and fasteners

After I saw the mind-blowing Perch Light in class, I decided to use paper and sheet metal as my materials this week, and I found this interesting origami tutorial on YouTube.

In the tutorial, the video maker uses two colors of paper to make an origami Mandala. My goal was to replace one of them with a copper sheet. The Mandala consists of 8 identical shapes, each connecting to its neighbors; once joined, they form a circle.

Copper Sheet 
Construction Paper
Step 1

Cut the paper to the right size (7.5 cm × 7.5 cm)

Step 2

Cut the copper to the right size (7.5 cm × 7.5 cm)

Step 3

Fold the paper into the shape of a Mandala petal

Step 4

Fold the copper sheet into the shape of a Mandala petal

Step 5

Connect the pieces together

The main problem I met was the thickness and foldability of the copper sheet. I underestimated its thickness: after folding, it was much thicker than the paper version. It also became tough to unfold, which affected the final step of connecting the elements. As a result, the piece doesn’t look harmonious. I think if I want to take advantage of sheet metal, I shouldn’t fold it that much.