Final Project: Worry Capsule Tree


Project Overview

Worries surround us every day. However, as time passes, we often realize how trivial those problems were in comparison. Worry Capsule Tree is an interactive installation that stores people's current worries and sends them back in the future, allowing its users to retrieve their previous worries and revisit their past selves.

Collaborators and Roles

Xiran Yang – Voice Recognition/Database Server/Physical Computing/Fabrication

Hau Yuan – Visual Programming/Physical Computing/Fabrication

Technical Features

Coding: Voice Recognition (p5.speech.js)/Database Server (M-Lab)/Interface (After Effects/p5.js)/Serial Communication

Physical Computing: Arduino MEGA/USB microphone/Jumbo RGB LEDs/Neopixel RGB Lights

Fabrication: Soldering/Laser Cutting


How does it work?

1 – Start

At the beginning, users see both a physical and a digital animation.

2 – Say “Hello” to trigger next step

In this step, users have to say "Hello" to go to the next step. The purpose of this step is to let users know that the microphone in front of them is part of the interaction. By doing so, users learn where and how to use the voice input in the recording process later.

2-2 – Say “Hello” to trigger next step

After users say "Hello," the screen replies "I am listening." In addition, the tree installation changes its color in response.

3 – Input user’s name

To ask users for information without making them feel that their privacy is being invaded, as an online survey might, I incorporated this step as a conversation that asks for the user's name.

“Pardon me for not asking your name! How may I address you?”

Apart from this purpose, I call the user's name twice in the conversation during the following steps to increase the intimacy and engagement of the experience.

4 – Read the introduction to the tree

The first, second, and third steps are an opening that aims to let users dissolve into the experience. The fourth step gives the whole context and idea of the tree. The following is the introduction:

Would you like to travel back in time?

In times of introspection, do you ever realize how trivial those problems are in comparison?

Are you under a lot of stress? What’s worrying you now?

Please tell me in one sentence and allow me to carry your anxieties in safekeeping.

Perhaps in the future, you may want access to them.

Once you are ready, please click start to record.

5 – Start to record

At the end of the introduction, the "start to record" button pops up. When users feel ready to talk, they can click the button to start.

6 & 7 – Recording and Stop Recording

When users finish recording, they can click this button to stop.

8 – See the sentence they said in text format

Users will see what they said, along with the date and time of this moment.

9 – Upload to the tree

When users click "upload to the tree," the worries are "conceptually" uploaded to the tree. The screen shows the animation first, and then the LED strips light up gradually from the bottom of the tree to the top, which aims to convey the image of uploading.
Technically, the program won't save users' data to the database until the last step.

10 – Input user’s email

After users enter their email and click finish, their information and the worries they told the tree are packaged and stored in the database.
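This flow can be illustrated with a minimal sketch that holds the worry in memory until the final click; the names pendingWorry, onUploadToTree(), onFinish(), and saveWorry() are hypothetical, not our actual code.

// Illustrative only: defer the database write until the final step.
let pendingWorry = null;

function onUploadToTree(worryText) {
  // Step 9: play the upload animation and light the tree,
  // but keep the data in memory for now.
  pendingWorry = { worry: worryText, date: new Date().toISOString() };
}

function onFinish(email) {
  // Step 10: package everything and store it in the database.
  pendingWorry.email = email;
  saveWorry(pendingWorry);
}

function saveWorry(doc) {
  // In the real project this is an httpPost() call to the database
  // (see "Store information in a database" below).
  console.log('saving', doc);
}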


How did we decide to make it?

Idea development

The original idea of Worry Capsule Tree comes from one of the creators, Yuan, and his desire to release daily stress by storing his anxiety in a device. After Yuan talked to me, this idea of telling and storing worries reminded me of a time capsule, whose function is to help people store memories that become accessible a long time later.

Both of us liked the concept of storing current anxiety and retrieving it in the future. On one hand, it helps release the stress we have now; on the other hand, it provides us with a chance to revisit our past selves and see how much we have overcome and grown.

Since we decided to build something that deals with emotions and memories, the questions of "how would users best communicate with this capsule" and "how do they emotionally engage with the device" became very important.

These questions triggered us to first think about the appearance of our device. Our first thought was the image of a tree. Trees have the ability to absorb substances from the environment and convert them into nutritious fruits. We think this is a metaphor for transforming people's anxiety into assets for their future. Therefore, we decided to build a worry capsule tree that can respond to users and store their worries.

In addition, based on our personal experience, we figured out two ways the users could interact with this tree: writing their worries down for the tree, or telling their worries to the tree.

In order to know which way makes people feel more comfortable interacting with the tree, we did user testing and research.

Our very first prototype

Based on our research, about half of the testers preferred speaking to the tree, while the other half preferred writing their worries down. Since both of us preferred speaking to the tree over writing our worries down, we finally decided to create an interactive capsule tree that can listen to its users and store their worries.

We also thought about the design of the tree. We first proposed two types of designs.

Xiran’s proposal:

 

Yuan’s proposal:

We decided to follow my proposal, the more sculptural design, because we thought it looked more interesting and could create more intimacy between our tree and the users.

Schedule

After finalizing the idea, we started to think about how to realize it. We figured out that our tree would need to have four functions:

  • Receive users’ voice input
  • Visualize the voice-input progress
  • Store the voice input
  • Send back the voice input to the user in the future

We decided to use the p5.speech library to receive voice input and convert the audio into text, use the M-Lab service to store all the received information in a database, visualize the progress of storing and uploading worries by changing the tree's behavior, and send the worries back to the users through an email postcard.

Therefore, our project breaks down into three major parts: Physical Computing, Coding, and Fabrication.

Coding: (p5.js/HTML/CSS)

  1. Use p5.speech.js library to do voice recognition;
  2. Use M-Lab service to store voice input and users’ information;
  3. User Interface design/P5-Arduino Serial Communication

Physical Computing

  1. Design different light patterns;
  2. Test different materials and light behaviors using switch and sensors;
  3. Connect the behaviors of lights to the user interface by doing serial communication;

Fabrication

  1. Design the tree’s appearance;
  2. Decide what materials to use;
  3. Fabricate the tree;

How did we make it?

Fabrication

Size

Components & Materials

There are three main parts in this installation:

  1. Load-bearing structure:
    The base of the structure consists of a wood panel and three legs. The space between the wood panel and the ground allows us to easily manage and manipulate the circuits and microcontroller beneath it. We chose copper pipe as the spine of the tree structure. Apart from giving the tree rigidity, the hollow space inside the pipe hides and gathers all the wires running from the top to the bottom of the tree.
  2. Light source:
    There are three RGB LEDs, six white LEDs, and three Neo-Pixels (30 pixels each) on this tree. The positions are shown as follows:

    Bird’s eye view:

Coding: (p5.js/HTML/CSS)

Voice recognition

The voice recognition is done with the p5.speech library, which can translate audio input into text.
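As a rough sketch of how the library can be used (the variable names here are illustrative, not our production code), continuous recognition with a result callback looks roughly like this:

// A minimal sketch of continuous voice recognition with p5.speech.
// Variable names (speechRec, latestWorry) are illustrative.
let speechRec;
let latestWorry = '';

function setup() {
  noCanvas();
  // Create a recognizer for English with a callback for finished phrases.
  speechRec = new p5.SpeechRec('en-US', gotSpeech);
  speechRec.continuous = true;      // keep listening between phrases
  speechRec.interimResults = false; // only report completed results
  speechRec.start();
}

function gotSpeech() {
  if (speechRec.resultValue) {
    latestWorry = speechRec.resultString; // the recognized sentence as text
    console.log(latestWorry);
  }
}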

Store information in a database

I used M-Lab's database service. Basically, you register an account at M-Lab, create a database, and then create a collection. Each account has an API key, and you can use that API key to access your database, upload data, and view all the documents you have sent to that database.

Easy Tutorial on how to use M-lab to create database and upload data

M-lab’s tutorial (Also easy to follow)

These two websites provide some very good tutorials on how to set up a database at M-Lab and store data into it.

While M-Lab's own sample code is written in jQuery, I used p5.js to do the same thing. I mainly used the httpPost() function to push information (the voice text array) into my database.
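For reference, a stripped-down sketch of such a call might look like the following. The database name, collection name, and API key are placeholders, and the URL follows mLab's Data API pattern, so adjust it to your own account:

// Sketch: pushing one worry document to an mLab collection with p5's httpPost().
// 'worrytree', 'worries', and the API key are placeholders.
const API_KEY = 'YOUR-MLAB-API-KEY';
const DB_URL = 'https://api.mlab.com/api/1/databases/worrytree/collections/worries?apiKey=' + API_KEY;

function saveWorry(name, email, worryText) {
  const doc = {
    name: name,
    email: email,
    worry: worryText,
    date: new Date().toISOString()
  };
  // POST the document as JSON; the stored document is echoed back in the callback.
  httpPost(DB_URL, 'json', doc, function(response) {
    console.log('saved:', response);
  }, function(error) {
    console.error(error);
  });
}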

Once you have created your database in M-Lab and sent some data to it, the database will look something like this:

User Interface Design

All animations were made in After Effects and are embedded and organized in the interface using the p5 DOM library.
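As a simplified sketch of this approach (the file names and the showListening() function are illustrative), each pre-rendered clip is loaded as a DOM element and shown or hidden as the interaction advances:

// Sketch: organizing pre-rendered animations with the p5 DOM library.
let idleAnim;
let listeningAnim;

function setup() {
  noCanvas();
  idleAnim = createVideo('idle.mp4');      // placeholder file names
  listeningAnim = createVideo('listening.mp4');
  idleAnim.loop();       // opening animation plays on a loop
  listeningAnim.hide();  // later steps stay hidden until needed
}

function showListening() {
  // Swap which clip is visible when the interaction moves to the next step.
  idleAnim.hide();
  listeningAnim.show();
  listeningAnim.loop();
}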

Serial Communication:

We used the p5.serialport.js library for serial communication. Basically, every time we want the computer side to trigger the tree to change its behavior, we send a single byte (a number) from the computer to the Arduino over p5-to-Arduino serial communication.
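A minimal sketch of that trigger, assuming the p5.serialcontrol app is running and with a placeholder port name, looks like this:

// Sketch: sending a one-byte trigger from p5.js to the Arduino.
// The port name is a placeholder; replace it with your Arduino's port.
let serial;

function setup() {
  noCanvas();
  serial = new p5.SerialPort();
  serial.open('/dev/cu.usbmodem1411');
}

// For example, sendState(2) could be called when the upload animation starts,
// so the Arduino switches the LEDs to the corresponding pattern.
function sendState(state) {
  serial.write(state); // a number is written out as a single byte
}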

*You can view our code here (we used the p5.speech.js and p5.serialport.js libraries, but animations and all libraries are excluded from the code below)

https://gist.github.com/xiranisabear/c71a31c7ef521a17790bdd18df384e1c

Physical Computing

We used Jumbo RGB LEDs and Neopixel RGB LED strips, as well as some white Jumbo LEDs.

We first tested all the light behaviors on the Arduino alone, using some switches and a sound sensor.

After we made sure that everything worked well on the Arduino, we used serial communication to let the computer control the behaviors of all the lights.

Our final code is shared below:

https://gist.github.com/xiranisabear/d37f9358aab43c246481d87e5e4485f9

Session 10: Final Project – Iteration 1

The result of the playtesting

Interestingly, the playtesting showed that 50% of the testers chose writing as their way to convey their worries, and the other 50% chose the microphone. The main reason people chose writing was privacy: they said they didn't want others to hear their thoughts in public. On the other hand, the microphone group said it is easier and more intuitive to simply say the worries out loud. In addition, one of the testers said that when he writes something, he has to care about grammar and think about the structure of the writing. In other words, recording the voice is more realistic than writing down the words.

Eventually, we decided to use the microphone as our input because we think it is more engaging in terms of interactivity: we want to create the illusion that the users are actually talking to the tree. We also decided to use the p5.js speech library as our input for two reasons. The first is that uploading an audio file is technically harder than recording the sound directly on the computer. The second is that we want to use email as the format for sending the content back to users. Hence, p5.js voice recognition is a perfect fit for our needs, since it converts users' voices into text strings.

The following are the graphs of the user flow:

Session 9: Prototype for playtesting

This week, I made an important decision: I will collaborate with Xiran Yang for the final project. I believe collaboration will foster each other's creative thinking, avoid a lot of bias, and make the plan more comprehensive and explicit. After I shared my idea for the final project with Xiran, she resonated with it and gave a lot of valuable feedback on the concept of an "emotion time capsule." The most valuable feedback I got from her was the question of what the proper way to communicate with the "time capsule" is. As I mentioned last week, I wrote down my worries and thoughts when I felt stressed. However, my first idea for the communication was talking to the device.

As a result, for the playtesting, we want to test which way is more appropriate for storing your thoughts, and why users think writing or talking is more suitable for them. This conclusion is crucial because it will affect the direction of the interaction.

Besides, we started to think about the appearance of the "time capsule." Our first thought was the image of a tree. Trees have the ability to absorb substances from the environment and convert them into nutritious fruits. We think this is a metaphor for transforming your thoughts into assets for your future. Consequently, in the first playtesting, we will use a tree shape to represent the image of the time capsule. We also look forward to seeing users' reactions when they see that their thoughts have been conceptually stored in the time capsule tree.

Session 8: Final Project Ideas and Inspirations

My final project inspiration came from my midterm project and personal experiences. My midterm project's idea was to use sound to evaluate one's pressure. After the midterm, I made up my mind that I wanted to do something related to emotion for my final project.

As a consequence, I started to dig deeply into myself. I am a very rational person, and I rarely lose control of anything in my life. However, when I feel desperately stressed and upset because of urgent accidents or crazy schedules and deadlines, my mind shuts down, and I can barely do anything, like a paralyzed soldier. Under those circumstances, I would stop doing anything and write down my thoughts, my worries, and the reasons why I felt such extremely negative feelings. After I had talked to my notebook or a random piece of paper, I felt relieved, and I could start to organize my mind again.

Half a year later, when I saw my worries and thoughts again, the emotion had turned from negative to positive. It is interesting and worthwhile to ruminate on the negative thoughts generated by your past self. The scenario is like: why did this circumstance trap me so badly a year ago? Could I cope with the emotion again when dealing with the same situation?

As a result, I want to transform this interaction with myself into a physical interactive device. I want to create a device or installation that is very approachable. You can tell all your secrets to this invention, and it will "listen" and help you store your worries and thoughts. The device will upload your thoughts to the computer, and you can set the date on which you want to hear them again. In a nutshell, this is a "time capsule" for your emotions.

Session 7: Mid-term Project

Inspiration

My inspiration came from the struggle and pressure of midterm projects. All of a sudden, I came across an illustration I drew three years ago.

I created this illustration just for fun. The idea is very straightforward: the left side shows a healthy condition, and as it goes to the right, it shows how stressed the character becomes, drawn in an increasingly exaggerated way as the pressure goes up. An idea popped up right away after I saw my drawing: I wanted to use this illustration as the interface for indicating someone's pressure.

Afterwards, I started to think about how I could map and indicate someone's pressure. What do stressed individuals do to relieve it? Yelling! Eventually, I used sound as an indicator of stress.

How it works

On the physical interface's side, I used an Electret Microphone Amplifier (MAX9812) as the sound-level detector and p5.serialcontrol to transmit the data from the Arduino to the laptop.

On the digital interface's side, I set five volume bars from low to high. There are five different states that tell users how loud they shout. When the volume passes a threshold, the corresponding illustration and animation are triggered (a minimal sketch of this mapping follows the state images below). The following are the five different states:

state 0
state 1
state 2
state 3
state 4
state 5
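Here is the minimal sketch of that mapping; the port name and the threshold values are placeholders, not the numbers used in the project:

// Sketch: reading the Arduino's sound level over serial and mapping it to a state.
// The port name and thresholds are placeholders.
let serial;
let fromSerial = 0;

function setup() {
  createCanvas(400, 200);
  serial = new p5.SerialPort();
  serial.open('/dev/cu.usbmodem1411');
  serial.on('data', serialEvent);
}

function serialEvent() {
  const line = serial.readLine(); // the Arduino prints one level per line
  if (line.length > 0) {
    fromSerial = Number(line);
  }
}

function volumeToState(volume) {
  const thresholds = [100, 200, 300, 400, 500]; // one threshold per volume bar
  let state = 0;
  for (let i = 0; i < thresholds.length; i++) {
    if (volume > thresholds[i]) {
      state = i + 1; // a louder shout selects a more stressed illustration
    }
  }
  return state;
}

function draw() {
  background(255);
  text('state ' + volumeToState(fromSerial), 20, 20);
}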

Source code

Session 6: Serial Communication with p5.js

p5.js source code
This week, I used my assignment from ICM class for this project. Initially, I used a simple for loop and trigonometry functions to create an animated DNA shape. For this task, I fed the analog input value from the potentiometer into the sin() function. I made the wavelength and amplitude changeable by putting the fromSerial value both inside the sin() function's parentheses and by multiplying the sin() by it.

// This loop runs inside draw(); fromSerial is the potentiometer value read over serial.
// It scales both the amplitude and the wavelength of the sine wave.
for (let i = 0; i < CircleNum; i++) {
  let g = map(mouseX, 0, 1280, 100, 255);
  let b = map(mouseX, 0, 1280, 255, 0);
  let x = i * width / CircleNum;
  fill(random(0, 255), g, b, 200);
  noStroke();
  ellipse(x, height / 2 + (fromSerial / 10) * sin((fromSerial / 100) * i + frameCount / 100), Diameter, Diameter);
}

Session 5: Photocell and Servo Motor

This week, I am experimenting with the photocell sensor and servo motor. Here is what I did:

As you can see, when I used the flashlight to illuminate the installation, the sun rose; when I removed it, the sun went back to its original position. The logic behind this is very simple. When the brightness is higher than the threshold, the servo gradually moves to a specific position and stays there. When the brightness is below the threshold, the servo goes back to its original position and stays. Only crossing the threshold triggers the servo's movement. In this exercise, I used smoothing to create a stable analog input from the photocell. I also created a boolean flag that keeps the servo static once it reaches a specific position. The following is the code I wrote:

#include <Servo.h>
Servo myServo;

// Smoothing: keep a running average of the last numReadings photocell values.
const int numReadings = 10;
int readings[numReadings];
int readIndex = 0;
int total = 0;
int average = 0;

int inputPin = A0;      // photocell voltage divider
boolean lowPos = true;  // true while the servo is at its low (original) position

void setup() {
  Serial.begin(9600);
  myServo.attach(9);
  for (int thisReading = 0; thisReading < numReadings; thisReading++) {
    readings[thisReading] = 0;
  }
}

void loop() {
  // Update the running average with the newest photocell reading.
  total = total - readings[readIndex];
  readings[readIndex] = analogRead(inputPin);
  total = total + readings[readIndex];
  readIndex = readIndex + 1;
  if (readIndex >= numReadings) {
    readIndex = 0;
  }
  average = total / numReadings;

  Serial.println(average);
  delay(1);

  // Bright enough and still low: sweep the servo up gradually, then mark it as raised.
  if (lowPos == true && average > 100) {
    for (int i = 0; i < 20; i++) {
      myServo.write(90 + 3 * i);
      delay(600);
    }
    lowPos = false;
  }
  // Already raised and still bright: hold the high position.
  if (lowPos == false && average > 100) {
    myServo.write(150);
  }
  // Raised but now dark: sweep back down, then mark it as low again.
  if (lowPos == false && average < 100) {
    for (int i = 0; i < 20; i++) {
      myServo.write(150 - 3 * i);
      delay(600);
    }
    myServo.write(90);
    lowPos = true;
  }
  // Low and dark: hold the original position.
  if (lowPos == true && average < 100) {
    myServo.write(90);
  }
}

Session 4: Analog Output

My goal for this assignment was to create an Arduino piano with digital input and analog output. I used seven different switches as the digital inputs and a speaker as the analog output. Each switch corresponds to a pitch (C, D, E, F, G, A, B). When a switch is pressed, the speaker plays the corresponding pitch.

However, It didn’t work very well. When I pressed different switches, the pitches sound exactly the same. I’ve checked every circuit of the switch and every line of the code. I still can’t find what’s the reason lead to this situation so far. I am still debugging it. Hope I can find out in the class.

The following is the circuit:

The following is the code I used:

#include "pitches.h"

void setup() {
 Serial.begin(9600);
 pinMode(2, INPUT);
 pinMode(3, INPUT);
 pinMode(4, INPUT);
 pinMode(5, INPUT);
 pinMode(6, INPUT);
 pinMode(7, INPUT);
 pinMode(9, INPUT);
}
void loop() {

int buttonStateK1 = digitalRead(2);
 int buttonStateK2 = digitalRead(3);
 int buttonStateK3 = digitalRead(4);
 int buttonStateK4 = digitalRead(5);
 int buttonStateK5 = digitalRead(6);
 int buttonStateK6 = digitalRead(7);




if (buttonStateK1 == 1) {
 
 tone(9, NOTE_C4, 300);

} else {
 noTone(9);
 }

if (buttonStateK2 == 1) {
 
 tone(9, NOTE_D4, 300);

} else {
 noTone(9);
 }

if (buttonStateK3 == 1) {
 
 tone(9, NOTE_E4, 300);

} else {
 noTone(9);
 }

if (buttonStateK4 == 1) {
 
 tone(9, NOTE_F4, 300);

} else {
 noTone(9);
 }

if (buttonStateK5 == 1) {
 
 tone(9, NOTE_G4, 300);

} else {
 noTone(9);
 }

if (buttonStateK6 == 1) {
 
 tone(9, NOTE_A4, 300);

} else {
 noTone(9);
 }


}