Smart IxD Lab


Thoughts, observations and experimentation on interaction by: Smart Design


Most of us have heard from health experts that we’re supposed to consume at least eight 8-ounce glasses of water a day, but how can we know if we’re hitting this target? Smart Intern and Lab collaborator Simone Capano set out to explore a solution to this problem with Hydramate, a simple system that passively reminds us when it’s time to drink more water.

[Image: hydra3]

The project explores the notion of time and how it can be represented in an ambient way through a 3-dimensional interface. “I’ve always been very interested in finding a way to manage time in a physical way. Time is for sure an abstract element, and nowadays a lot of applications allow us to manage time effectively, but what if time were controlled through a tangible object, acting as a reminder, speaking to the user with a specific language/behavior?” asks Simone.

[Image: hydra4]

Hydramate is an electronic coaster that measures a one-hour cycle, divided into four quarters, each representing 15 minutes. As time passes, each quarter begins to gently glow, giving the user a visual cue of how long it has been since they last raised their glass to drink. Once a whole hour has passed since the last sip, the device begins to blink, signaling that it is time to get hydrated. The blinking becomes stronger while the user is drinking, and once they set the glass back down it resets to the gentle glow of the first quarter.

[Image: hydra5]

Simone created a fully functioning prototype with an Arduino microcontroller. The shell is made of spray-painted polycarbonate, and the technology inside is very simple:

- A photocell senses when a glass has been placed on it

- An Arduino Pro Mini powered by a 3.3V lithium battery receives the input from the photocell and controls the LEDs accordingly
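
For readers who want to tinker with something similar, here is a minimal Arduino sketch of the coaster logic as we understand it. The pin assignments, photocell threshold and blink rate are our own assumptions for illustration, not the exact values in Simone's prototype.

// Hydramate-style coaster logic: four LEDs mark 15-minute quarters, and the
// whole ring blinks once a full hour has passed without a drink. Pin numbers,
// the photocell threshold and the blink rate are assumptions for illustration.

const int PHOTOCELL_PIN = A0;          // photocell + resistor voltage divider
const int LED_PINS[4] = {3, 5, 6, 9};  // one PWM pin per 15-minute quarter
const int GLASS_THRESHOLD = 300;       // below this reading, a glass covers the sensor
const unsigned long QUARTER_MS = 15UL * 60UL * 1000UL;

unsigned long lastDrink = 0;           // when the glass was last set back down
bool glassWasPresent = false;

void setup() {
  for (int i = 0; i < 4; i++) pinMode(LED_PINS[i], OUTPUT);
}

void loop() {
  bool glassPresent = analogRead(PHOTOCELL_PIN) < GLASS_THRESHOLD;

  // Setting the glass back down counts as a drink and resets the cycle.
  if (glassPresent && !glassWasPresent) lastDrink = millis();
  glassWasPresent = glassPresent;

  unsigned long elapsed = millis() - lastDrink;
  int quartersPassed = min(4, (int)(elapsed / QUARTER_MS));

  if (quartersPassed < 4) {
    // Gentle glow: dimly light one LED per elapsed quarter.
    for (int i = 0; i < 4; i++) analogWrite(LED_PINS[i], (i <= quartersPassed) ? 40 : 0);
  } else {
    // A full hour without a drink: blink all four LEDs at full brightness.
    int level = ((millis() / 500) % 2) ? 255 : 0;
    for (int i = 0; i < 4; i++) analogWrite(LED_PINS[i], level);
  }
}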

We look forward to hearing about how this project has developed since the prototype was constructed, and how well test users feel the device helps them to manage their hydration.

 

[Image: Screen Shot 2013-12-21 at 12.33.53 PM]

One of our current projects in the lab is the StressBot, a friendly console that reads heart activity through the Pulse Sensor to determine whether or not a person is in a physiological state of stress, and then offers coaching and feedback to reduce that stress through breathing exercises. We've been continuously testing our setup with research participants to try to create an interface that's clear and accessible to anyone in our studio who might approach the device.

Since the last time we posted about this project, we have learned much more about correlating stress to heart rate. Joel Murphy, one of the creators of the Pulse Sensor, has helped us understand IBI (Interbeat Interval, the time that passes between each heartbeat) and released some code that helped us grasp ways to map heart activity to stress. We have been using IBI measurements and the amplitude function Joel created to assign a specific value for stress, allowing us to measure whether it is relatively high or low.
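
To give a concrete flavor of the idea (this is a simplified sketch, not Joel's code or our exact StressBot mapping), the Arduino-style snippet below keeps a rolling window of IBI values, computes RMSSD, a common time-domain HRV measure, and treats lower variability as relatively higher stress. The window size, scaling and the way beats are fed in are our own assumptions.

// Rough sketch: derive a relative "stress" value from a rolling window of
// inter-beat intervals (IBI, in milliseconds). Lower heart-rate variability
// (here measured as RMSSD) is treated as a sign of higher stress.
// Window size and scaling are illustrative assumptions.

#include <math.h>

const int WINDOW = 30;            // number of recent beats to consider
unsigned int ibiWindow[WINDOW];
int ibiCount = 0;

// Call this once per detected heartbeat with the latest IBI in ms
// (e.g. whenever your Pulse Sensor code reports a new beat).
void recordBeat(unsigned int ibiMs) {
  if (ibiCount < WINDOW) {
    ibiWindow[ibiCount++] = ibiMs;
  } else {
    // Shift the window and append the newest interval.
    for (int i = 1; i < WINDOW; i++) ibiWindow[i - 1] = ibiWindow[i];
    ibiWindow[WINDOW - 1] = ibiMs;
  }
}

// RMSSD: root mean square of successive IBI differences, in ms.
float rmssd() {
  if (ibiCount < 2) return 0.0;
  float sumSq = 0.0;
  for (int i = 1; i < ibiCount; i++) {
    float diff = (float)ibiWindow[i] - (float)ibiWindow[i - 1];
    sumSq += diff * diff;
  }
  return sqrt(sumSq / (ibiCount - 1));
}

// Map variability to a 0..100 "relative stress" score: the less the IBI
// varies, the higher the score. 100 ms of RMSSD is treated as fully relaxed.
int stressScore() {
  float v = rmssd();
  if (v > 100.0) v = 100.0;
  return (int)(100.0 - v);
}

In practice a score like this needs smoothing over time before it is shown to a person, since individual beats are noisy.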

[Image: HRV-1267]

Most of our previous prototypes focused on trying to match a visual pattern with the heart rate. This proved to be very complicated and, worst of all, stressful. We also found that having one's eyes closed is often the best way to achieve a state of relaxation. After a few iterations, we discovered that audio is the best way to provide helpful feedback to a person who is trying to relax: it allows the person to close his or her eyes and focus on finding a constant tone rather than something visual. The image above shows the first trial, which mapped the amplitude of the heart signal to the amplitude of a sine wave and the IBI to the frequency of the sound. The uppermost waves show the sound and the lowermost wave shows the heart rate signature.
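
The synthesis itself happens elsewhere in our prototype, but the mapping is straightforward. The snippet below sketches one plausible linear mapping from IBI and pulse amplitude to tone frequency and gain; the input and output ranges are illustrative guesses rather than our tuned values.

// Illustrative mapping from heart measurements to sound parameters.
// ibiMs: time between beats in ms; pulseAmplitude: raw sensor amplitude.
// The input and output ranges below are assumptions for illustration.

float mapRange(float x, float inLo, float inHi, float outLo, float outHi) {
  if (x < inLo) x = inLo;
  if (x > inHi) x = inHi;
  return outLo + (x - inLo) * (outHi - outLo) / (inHi - inLo);
}

// Shorter IBI (faster heart rate) -> higher pitch.
float toneFrequencyHz(float ibiMs) {
  return mapRange(ibiMs, 500.0, 1200.0, 440.0, 220.0);
}

// Stronger pulse amplitude -> louder tone (0.0 to 1.0 gain).
float toneGain(float pulseAmplitude) {
  return mapRange(pulseAmplitude, 0.0, 1023.0, 0.1, 1.0);
}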

Below you can see the various explorations of mapping the same sound wave that is being altered by the user’s heart rate to another visual cue. The concentric circles show a rippling effect based on the IBI change, and the droplets are another way of visualizing the same sonic effect. The user in this case is trying to reach uniformity in pattern, either through the distance of the concentric circles or the distance and shape of the drops.

[Image: HRV-0018]

[Image: Screen Shot 2013-12-18 at 6.29.32 PM]

[Images: HRV-0678, HRV-0343]

Below you can find the latest iteration of the interface. Using the screen and physical enclosure, the device acts as a coach character, helping people understand how to approach it and use the interface. It engages users with their biosignals, while the bot provides a sonification of the IBI state and a visual cue to reassure them that they are on the right track. Although the project is not complete, we are getting close! Our next steps involve experimenting with parameters and doing more user testing.

[Images: HRV-0485, HRV-0667, HRV-2270]

 

 

At Smart Interaction Lab we sometimes create our own circuit boards for physical models in order to have internal parts with exactly the components we need in an efficient form factor. We love using off-the-shelf boards from sources such as Sparkfun and Adafruit for fast solutions, but making great prototypes sometimes requires going one step further.

Since we know many of our readers face the same situation with their own projects, we decided to share our process here. The most recent board we created was made to work with one of our favorite new toys, the NinjaBlocks, a quick and easy kit consisting of a WiFi hub and sensors for exploring the Internet of Things in the studio. This post lays out the process we went through to create our own custom RF Arduino Shield for NinjaBlocks.

 

Our Inspiration

When the folks from NinjaBlocks came to visit us in the San Francisco studio this past summer, we were excited to start playing with their product, as it offers an easy, quick way to experiment with design ideas for Internet-connected objects. The NinjaBlocks kit ships with a few basic sensors: temperature and humidity, motion, window/door contact and a simple push button. These are great for wiring up a space to monitor it online, or for triggering an email, tweet or website update. While playing with the setup, we quickly realized that we would want to go beyond those basic sensors so that we could use it with anything we needed, like a force sensor, moisture detector or light sensor.

Since the NinjaBlocks hub communicates with its sensors over radio frequency, it’s something that we could easily tap into, and there are helpful guides on the NinjaBlocks website which outline how to use it with your prototyping platform of choice (we used an Arduino).
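
To give a sense of what that looks like on the Arduino side, here is a minimal transmit sketch using the widely used rc-switch library with a 433 MHz transmitter module. The data pin and the 24-bit code are placeholders; the NinjaBlocks guides describe the exact codes the hub expects, and your library choice may differ.

// Minimal 433 MHz transmit example using the rc-switch Arduino library.
// The data pin and the 24-bit code below are placeholders; the code you
// send must match what your NinjaBlocks hub is listening for.

#include <RCSwitch.h>

RCSwitch rfTransmitter = RCSwitch();

const int RF_TX_PIN = 4;                     // data pin of the 433 MHz transmitter
const unsigned long BUTTON_CODE = 0x155157;  // placeholder device code

void setup() {
  rfTransmitter.enableTransmit(RF_TX_PIN);
}

void loop() {
  // Broadcast the code every 5 seconds, as a stand-in for a real sensor event.
  rfTransmitter.send(BUTTON_CODE, 24);
  delay(5000);
}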

The guides give instructions on hooking up the Arduino and RF components using jumper wires from a breadboard, and though this is a great way to get started right away, we've found that things get messy quickly. Once you begin adding more components like sensors, it may end up looking more like a Medusa head of wires than anything else. Hence what this post is really about: making an Arduino shield!

 

Why a Shield?

An Arduino shield is essentially another circuit board that sits on top of the Arduino and adds capabilities the Arduino does not have. Instead of connections happening through a bundle of loose wires, everything is neatly integrated within the layers of the circuit board and the components on it. Since it plugs easily into and out of an Arduino, it also allows you to swap out shields as needed for different projects. The board can be designed using software and then fabricated by a printed circuit board (PCB) facility.

 

Getting Started

Since it’s always good to prototype what you’re doing first, we started with a breadboard, and then later used an all-purpose protoboard (available at hobby stores like Radio Shack), cut it down to dimensions, soldered connections to match the Arduino and hooked up the RF components.

 

Designing the Shield

Once our protoboard was working (hurrah!), we knew exactly what we needed, and we could move on to the design of the PCB. Though there are a growing number of home-brewed DIY solutions for making PCBs (such as using laser printers, printing on transparent sheets, exposing and etching away parts of copper-coated circuit boards, etc.), we wanted something very reliable and repeatable, without having to deal with corrosive chemicals. Thus we chose to outsource the manufacturing to one of the few companies that can do small batches of PCBs fairly cost-effectively.

The shield was drawn up schematically and then laid out in a program called CadSoft EAGLE. It's the same tool used by professional engineers, and a limited version is available for free for hobbyists to make their own boards. Incidentally, many DIY electronics companies, such as SparkFun, Adafruit and Arduino, offer EAGLE files of their components, which makes building on what they've done much easier than having to reverse engineer and recreate a part from scratch. Our shield was made by modifying an EAGLE file of an existing shield to make sure all the necessary holes and pins would be in the right place.

EAGLE can be daunting at first, and drawing out a shield can take quite some time, but with the help of online reading and YouTube tutorials even a beginner can get started. There are also many other circuit board layout programs, among them the open-source alternative KiCad and the online editor Circuits.io, which we've mentioned in a previous post; these may offer an easier way into creating circuit boards when you're starting out.

 

Ordering the PCB

After going through a few design iterations in EAGLE, the finished version was shipped off to OSH Park to be made. While the turnaround time for smaller batches of PCBs can be quite lengthy (we waited about three weeks), they're still cheap enough to make it worthwhile. It's a good idea to prototype as far as possible before committing the design to an actual PCB, to make sure the circuit works and everything is laid out properly. It's better to spend an extra day laying everything out than to get your board back after a few weeks only to find that it doesn't work.

 

PCB Assembly

In the end, after soldering the RF modules and header pins to the shield, it was working beautifully. (And there's an added bonus to creating an RF Arduino sender-receiver setup: we can use it to communicate back and forth between individual Arduinos as well as with the NinjaBlocks.) Since the RF modules only take up two digital pins on the Arduino, you're still free to hook up sensors and actuators to trigger whatever your imagination can think of. How would you use it?

 

Source Files

If you’d like to try making the shield yourself, you can download an updated version of the EAGLE file, along with documentation and links to Arduino libraries. These files will let you order your own board, or modify the design to better fit your needs. Enjoy:

Smart Interaction Lab Arduino Ninja Shield on GitHub

 Special thanks to Smartie Daniel Jansson for sharing this process.

How can interactive objects encourage inspiration and dialog during brainstorming sessions?

This summer, the Barcelona branch of the Smart Interaction Lab set out to answer that question through electronic hardware experiments. TOTEM is a series of tangible interaction experiments, presented at Barcelona’s first ever Maker Faire, a place where people show what they are making, and share what they are learning.

Ideation sessions are part of everyday life at Smart Design, informing all the work we do. When reflecting upon these sessions, we developed the concept behind our Maker Faire project. We worked together as a team of multidisciplinary researchers and designers to explore how we can improve people’s experiences of the ideation process through tangible interaction.  Our solution was TOTEM—a family of three unique objects that help people get inspired and stay engaged in creative conversations and debates in order to generate new ideas. It is composed of a stack of three separate but complementary objects: Batón, Echo and Alterego.

1. Batón is an unconventional talking stick that is passed around during ideation sessions, allowing only the person holding the Batón to speak. After a certain amount of time, the Batón begins to vibrate to indicate that the speaker must pass it to someone else in the group. This tool allows everyone present in a discussion to be heard: it forces the most dominant speakers to be more concise, while encouraging those who may be more shy to speak up. (A rough sketch of this timing behavior follows the list.)

 

2. Echo is a recording/playback bell-like device that can fit easily and comfortably into any setting—from creative spaces, to the living room of your home, to cafes. Echo works in two ways: it records the background noise from wherever it is placed, and as soon as a user picks it up and puts it to their ear, Echo plays back what it previously recorded. By shaking it, Echo will randomly play back other sound bites from other parts of the day. The main aim of Echo is to help users get inspired by adjacent creative conversations, keywords, and quirky background noises that we would not be able to pick up on without the help of technology.

3. Alterego is designed to be a new version of the Six Thinking Hats mind tool by Edward de Bono. Alterego is a wearable device with three bracelet-like components. As soon as a user picks up the objects, they light up with their own unique color, signaling that they are on. The three colors each have a purpose and code, to help guide the user to adopt different mindsets for evaluating ideas: green = optimistic, red = pessimistic, blue = moderator. The thinking behind this method is that it forces users to think outside their comfort zone and evaluate ideas in a balanced, unbiased manner—an essential skill for productive ideation.
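
As referenced above, here is a rough Arduino sketch of the Batón's timing behavior. We're assuming, purely for illustration, a pushbutton that is pressed whenever the stick changes hands and a small vibration motor on a digital pin; the actual TOTEM build details differ.

// Batón-style talking stick: vibrate once the current speaker has held
// the stick for too long. The pin choices, the pass-detection button and
// the time limit are assumptions for illustration.

const int PASS_BUTTON_PIN = 2;    // pressed whenever the stick changes hands
const int MOTOR_PIN = 5;          // small vibration motor driven via a transistor
const unsigned long SPEAK_LIMIT_MS = 60UL * 1000UL;  // one minute per speaker

unsigned long turnStarted = 0;

void setup() {
  pinMode(PASS_BUTTON_PIN, INPUT_PULLUP);
  pinMode(MOTOR_PIN, OUTPUT);
  turnStarted = millis();
}

void loop() {
  // A button press marks the start of a new speaker's turn.
  if (digitalRead(PASS_BUTTON_PIN) == LOW) {
    turnStarted = millis();
    digitalWrite(MOTOR_PIN, LOW);
    delay(200);                   // crude debounce
  }

  // Once the limit is exceeded, pulse the motor until the stick is passed.
  if (millis() - turnStarted > SPEAK_LIMIT_MS) {
    digitalWrite(MOTOR_PIN, ((millis() / 400) % 2) ? HIGH : LOW);
  }
}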

A schematic of the object components is below:

 

Smart Design and its Smart Interaction Lab are supporters of Maker culture, a movement of invention, creativity and resourcefulness. Our project was well received by Faire visitors and we heard lots of great feedback. We look forward to considering those comments in our continued experiments. If you’d like to learn more about TOTEM, feel free to contact the Smart Interaction Lab.

Special thanks to the TOTEM team: Junior Castro, Marc Morros, Erika Rossi, Ivan Exposito, Valeria Sanguin and Floriane Rousse-Marquet.

Recently, the Barcelona branch of the Smart Interaction Lab explored a project called Smart TV, a system that automatically recognizes the viewer, then aggregates content from multiple sources based on their preferences, and creates a unique set of curated content channels for them. You can read more about it on the Smart Design website.

We took a look behind the scenes by talking to Junior, code/design ninja and IxD Lab Chief in the Barcelona office. Here’s a transcript of our fascinating chat about Python scripting libraries, group therapy sessions, and people who lie about liking Borat.

[10:49:34 AM] Carla: In your own words, what is the Smart TV project?

[10:51:59 AM] Junior: This project was originally an exploration of current technologies to enhance the TV watching experience, it was meant to be a “what if” scenario

[10:52:49 AM] Carla: Cool, can you tell me more? What if…?

[10:54:29 AM] Junior: Well, 2 years ago as part of my internship here I was asked to do a personal project, something that could show my interests while improving people’s life and that would be aligned with Smart’s philosophy.

[10:56:18 AM] Junior: I was really interested in diving into recommendation engines and face recognition, so I came up with the idea of exploring the question, “What if a ‘Smart’ TV could be more than a ‘connected’ TV? What if the TV could actually know who was watching it and then adapt based on that, changing the UI in both the remote control and the content to be displayed?”

[10:56:53 AM] Carla: Why was it important to know who was watching? Was this something that you noticed was a pain point?

[10:58:30 AM] Junior: I felt that watching TV should be a relaxing activity, and with the amount of content that we have available, the effort required to browse through content to find what you like was really painful and less enjoyable than it should be.

[10:58:53 AM] Carla: Ah yes, that makes sense.

[10:58:56 AM] Junior: If the system knows who is watching, it can be a more pleasant experience by offering choices that are more tailored to that person.

[10:59:28 AM] Junior: Also, I wanted to help people that are not especially tech savvy.

[10:59:33 AM] Carla: Can you tell me more about your work with face recognition in this context?

[11:00:20 AM] Junior: I liked the idea of using face recognition because it’s a very natural way of interacting. After all, as humans, we use it all the time without even thinking about it, and I think we are at a point in history where technology can do it very accurately.

[11:00:45 AM] Carla: How does the face recognition work?

Above: PyVision image source: http://sourceforge.net/apps/mediawiki/pyvision/

[11:01:38 AM] Junior: Face recognition consists of 3 steps:

[11:02:13 AM] Junior: 1. Enrollment: when the system “learns” the face and associates it with a profile
2. Tracking: when the system analyzes the images and detects “faces”
and 3. Recognition: when the system distinguishes that face from all the faces in the image and identifies it as a specific profile

[11:04:30 AM] Carla: That’s fascinating. For the geeks in our audience, can you tell us what software you’re using?

[11:06:43 AM] Junior: Since I wanted it to be a stand-alone system, I looked into different solutions and finally I opted to use Python as the language and an image processing library called PyVision.

[11:07:48 AM] Carla: Can you tell me a little bit more about this?

[11:08:39 AM] Junior: Python is a very portable language and it can be used on a lot of different platforms, both server-based and embedded. It’s a scripting language, but a very high-performance one, so it’s really easy to reuse and port the code to different platforms.

[11:10:31 AM] Junior: My intention was to create a “black box” to contain all the required software and just plug it in to a TV.

[11:10:47 AM] Carla: Cool!

[11:11:24 AM] Carla: Can you talk about some of the experiments you did to get up to speed on it?

[11:12:12 AM] Junior: Sure. I divided the project in 3 parts that were developed separately and then I connected them.

[11:13:05 AM] Junior: First was the face recognition module, which was basically identifying who was watching, I tried several options and algorithms in order to find the one that could be usable and responsive.

[11:13:19 AM] Junior: I did around 20 -25 different scripts.

[11:13:49 AM] Carla: Wow, how did it go with those first scripts?

[11:14:41 AM] Junior: Well… some were good at tracking faces, but in order to recognize, you basically need to create an average of a lot of photos of that face. So, the first scripts were good at tracking but really bad at recognizing. They would be really unresponsive.

[11:15:08 AM] Carla: Ah yeah, that makes sense.

[11:16:05 AM] Carla: And then after the face recognition module?

[11:16:26 AM] Junior: Finally I found a really cool library to implement machine learning in Python.

[11:17:06 AM] Carla: Nice! What’s that called and how did you find it?

[11:17:47 AM] Junior: Mmmm… I read a lot of articles about face recognition, and the guys who developed PyVision use machine learning for face recognition.

[11:17:55 AM] Carla: Gotcha.

[11:18:15 AM] Carla: So after the face recognition module, where did you go with your experiments?

[11:19:02 AM] Junior: After that I did the iPhone app, and used it as remote control.

[11:19:32 AM] Junior: I felt strongly that the UI should not be on the TV screen itself because watching TV is a social activity– you don’t want to interrupt everyone who’s watching when you want to browse or get more information.

[11:19:59 AM] Carla: And what kind of coding environment did you use for the app? There are so many options right now, and a lot of people are confused where to start.

[11:21:07 AM] Junior: I used a framework called Phonegap, it’s really cool, you create the UI using web technologies (HTML5, CSS, JS) and this framework encapsulates the UI into a native app.

[11:21:56 AM] Junior: It’s really simple and the best way to do a prototype.

[11:21:58 AM] Carla: Oh yeah, I know a lot of people love Phonegap for prototyping, nice to know you can create a native app with it.

[11:22:53 AM] Carla: What were the biggest challenges in developing the Smart TV system, particularly with making it really intuitive?

[11:24:31 AM] Junior: I think the biggest challenge was thinking about how the system will aggregate the content when 2 or more people are watching together

[11:25:03 AM] Junior: I feel that watching TV used to be very simple and social (as in people in the same place watching the same TV)

[11:25:07 AM] Carla: Interesting. I can see how that would be tricky to know whose content belongs to whom.

[11:25:42 AM] Junior: Exactly, and I think our approach was more about forgetting about “your” or “my” content and thinking about “our” content.

[11:26:06 AM] Junior: Let other people enrich your experience just by being there in front of the TV.

[11:27:18 AM] Carla: Hm. So does that mean that “we” become(s) another user? Or do you just pick the person whose content it’s more likely to be? I can see how this could get really complex really fast!

[11:29:04 AM] Junior: “We” become something different, we are a group that aggregates all the individuals.

[11:29:37 AM] Junior: Think about the wisdom of crowds applied to TV.

[11:29:58 AM] Carla: So is it kind of like this: person 1, person 2, person 3 and then a fourth profile for all three people combined?

[11:31:29 AM] Junior: Sort of. It’s combined but it’s not exactly the sum of everyone.

[11:31:41 AM] Junior: When you think of a family, for example, if you separate each member, they each have a personality,

[11:32:13 AM] Junior: but when they are together they have a “group” personality.

[11:34:35 AM] Carla: Ok, I get it. Cool. I think there are a lot of interesting social dynamics to explore there, like who is the most dominant. Super interesting. Could be a project for group therapy.  ;)

[11:35:27 AM] Junior: Exactly. One of the reasons I used face recognition was the possibility of using facial emotional feedback from everyone.

[11:36:15 AM] Carla: What’s next for this? Are you using the face recognition for anything else?

[11:36:26 AM] Junior: Not at the moment, but I’ve been paying attention to people using face recognition as a rating system.

[11:37:25 AM] Junior: In a regular recommendation system, it’s all about “like” or “dislike”, but the truth is that we have two “selves”: the one we aim to be and the one we really are.

[11:38:19 AM] Carla: That’s super fascinating about the self we aim to be. There’s so much psychology in all of this. Are you saying that the face recognition gives us a better truth than the rating that we indicate in the interface in another way?

[11:39:06 AM] Junior: Yes, exactly. For example, in order to create a profile in a recommendation engine you have to select content that you like, but most of the time you select things that you think are cool, but not always that you like.

[11:39:25 AM] Carla: So would the system you propose collect recommendation data in a passive way? Like in the middle of the movie I’m watching, rather than a question that’s asked at some other time?

[11:40:23 PM] Carla: Is it passive, accumulated while I’m watching?

[11:40:29 PM] Junior: Ideally it should be tracking your facial feedback at all times.

[11:41:20 AM] Junior: You could choose “Gone With the Wind” or “Citizen Kane”, but in reality your facial feedback says that you like “the Mask” and “Spice World” better.

[11:41:24 PM] Carla: Ha Ha Ha, yes, and “Borat” instead of La Jetée.

[11:41:54 AM] Junior: Hehehe exactly ;) And facial emotional feedback is universal, independent of culture or geographic location.

[11:42:11 PM] Carla: Yeah, that makes sense.

[11:42:22 PM] Junior: then you could be more accurate about what you like and when.

[11:42:31 PM] Carla: Right.

[11:43:45 PM] Carla: Junior, this has been great! And I learned a lot.

[11:44:11 PM] Junior: Thanks Carla, it was really fun, please let me know if you need anything else.

[11:44:23 PM] Junior: I have tons of examples and references that I can share with the Smart Interaction Lab readers.

We’re always in the process of playing with sensors and tracking, and our newest toy, the Pulse Sensor, has led us to a new lab project, affectionately named “Stressbot”.

In our interview with one of the Pulse Sensor’s creators, Joel Murphy, we learned a lot about the link between Heart Rate Variability (HRV) and the physiological state of stress that can be so dangerous when sustained for long periods. Since there is a Smart-wide initiative around stress awareness and healthy lifestyles, we decided to put our learning to good use and designed the Stressbot as a way for Smarties to learn how to detect and manage stress. The Stressbot lets you measure your HRV for a few minutes and displays real-time results on the screen. It then coaches you on breathing activities that can work to even out the HRV and effectively reduce stress.

We’re still in the process of learning more about the science behind it and modifying the code that the sensor’s creators have developed, but wanted to share a glimpse into our sketches and initial inspiration.

Stay tuned for updates on the Stressbot project over the next few weeks.

Above: Photo of the Nanoblocks Sagrada Família model, with the 123D Catch 3D model built from a scan with the DIY iPhone Dolly.

Smart’s Master MacGyver Ron Ondrey loves his photo toys, and when he shopped around for movable tripods and dollies he decided he could do a better job if he made one himself. He built the rig, shown below, in order to pan, truck and take 360° bullet-time style spinning videos around people and objects using a standard camera mount or by mounting an iPhone.

We had a lot of fun in the lab taking experimental videos of our product designs, but decided to bring the rig’s coolness factor up a notch by using it in conjunction with Autodesk’s 123D Catch App, software that essentially turns your iPhone into a 3D scanner, and then lets you spin and tilt the virtual 3D object on the screen.

The video above shows our test shots with the Nanoblocks model of Barcelona’s Sagrada Família.

Here’s Ron with the rig.


And our test model.

Here’s our test with the rig in 360 spin mode.

This weekend’s New York Times Sunday Review cover featured the article, “Our Talking, Walking Objects” by Lab Founder Carla Diana. The piece highlights the influence of robotics on the design of everyday objects, and how dynamic behaviors like sound, light and motion can be harnessed to express product personality.

The article has led to a lively debate in readers’ online comments (81 at last count) regarding the value of having an emotional connection with our everyday things.

Check out the piece and weigh in here:

http://www.nytimes.com/2013/01/27/opinion/sunday/our-talking-walking-objects.html

Food is a big part of Smart’s studio culture, but sharing meals can be tricky when we all have different schedules. As part of our ongoing exploration of how products can harness the Internet of Things to keep people connected, we focused on lunchtime in the New York studio.

The Apron Alert project is a concept that emerged when we combined our experiments in wireless devices with our thoughts around improving our communal kitchen experience. Wireless XBee radios attached to LilyPad Arduinos were used to build a “smart” apron that can sense when the cook has put it on to start the meal, and when he or she has removed it to serve the meal. In response, the apron triggers a series of tweets or text messages to let people know when a meal is being prepped and when it’s time to come to the table.
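
For the curious, the apron-side logic can be as simple as the sketch below. We assume a snap or tilt switch that closes when the apron is worn and an XBee wired to the LilyPad's serial pins; the message strings are placeholders, and a computer listening on the paired XBee turns the events into tweets or texts.

// Apron-side sketch: report "apron on" / "apron off" events over an XBee
// wired to the LilyPad's hardware serial pins. The switch pin and the
// message strings are placeholders; a computer on the receiving XBee
// turns these events into tweets or text messages.

const int APRON_SWITCH_PIN = 2;   // closes when the apron is being worn
bool apronWasOn = false;

void setup() {
  pinMode(APRON_SWITCH_PIN, INPUT_PULLUP);
  Serial.begin(9600);             // XBee default baud rate
}

void loop() {
  bool apronIsOn = (digitalRead(APRON_SWITCH_PIN) == LOW);

  if (apronIsOn && !apronWasOn) {
    Serial.println("APRON_ON");   // cooking has started
  } else if (!apronIsOn && apronWasOn) {
    Serial.println("APRON_OFF");  // time to come to the table
  }

  apronWasOn = apronIsOn;
  delay(250);                     // simple debounce / rate limit
}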

Above is a short video showing how we used our Apron Alert system to coordinate an office lunch this week.

And a diagram of how the whole system is set up:

Special thanks to Mark Breneman for his work on this project.  Thanks also to Evi Hui, Nicholas Lim and Edouard Urcadez.

Last Friday we had a visit from a group of students from SVA’s Summer Intensive program in Interaction Design, and it was a great opportunity to share a bit of what we’ve been up to in the Smart Interaction Lab.

Lab researcher and summer Smartie Mark Breneman demonstrated a bit of what we’ve been cooking up with our recent Internet of Things experiments. We’re using off-the-shelf components to prototype scenarios around mealtime to let family members communicate with one another to coordinate dinner schedules. Here, Mark demonstrates how a wireless XBee-enabled device can be used to send a tweet or text message when a certain gesture is made.

Thanks, SVA summer intensive students, for stopping by our lab to exchange ideas.