Smart IxD Lab


Thoughts, observations and experimentation on interaction by: Smart Design


egg o matic

Last week Smart Interaction Lab was on the road in London at the UX London 2014 conference. In addition to giving a talk on the Internet of Things, we ran a 3-hour workshop on E-Sketching for Interaction Prototyping. In the workshop, we introduced participants to the basics of Arduino, and then quickly moved into demonstrations of a range of sensors. We used the scenario of an eldercare patient whose loved ones would like to be informed of changes in behavior, such as when medications haven’t been taken in time, much like the Lively system or the Vitality GlowCaps that we use in our Internet of Things examples. With some quick foam core and cardboard mockups, we showed how tilt switches, light sensors, accelerometers and temperature sensors can be used to help with medication compliance.

Since UX designers value being able to visualize data, we linked our Arduino demos to Processing, a programming language that allows people to quickly create dynamic on-screen graphics. Once the demonstrations were done, participants worked as teams and it was their turn to get creative with foam core, sensors and code.
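
For readers who want to try this kind of linkage at home, here is a minimal sketch of the Arduino side, assuming nothing more than a photocell (or any analog sensor) wired to analog pin 0; a Processing sketch then reads these values with its Serial library and draws with them. The pin choice and baud rate are placeholders, not the exact code from the workshop.

    // Minimal Arduino-to-Processing link: stream an analog sensor reading
    // over serial so a Processing sketch can visualize it.
    void setup() {
      Serial.begin(9600);              // match this baud rate in Processing
    }

    void loop() {
      int light = analogRead(A0);      // photocell (or any sensor) on analog pin 0
      Serial.println(light);           // one reading per line, easy to parse
      delay(50);                       // roughly 20 readings per second
    }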

The teams worked remarkably well together, and the energy in the room was awesome. Picking from a range of suggested subject areas, teams created working demos of:

- Temperature monitoring for accurate egg boiling
- An anti-snoring system
- Tracking seat usage on public transportation
- A warning system to let you know when you’ve left the refrigerator door open
- …and an applause-o-meter to visualize how much participants enjoyed the workshop


Special thanks to Junior Castro from the Interaction Lab for joining us in London.

MAD museum residency

This spring, Smart Interaction Lab’s NYC branch went uptown to the Museum of Arts and Design (MAD) for one week to take part in an intensive designer residency exploring the future of desktop 3D printing. The program, sponsored by the online 3D print-on-demand service Shapeways, featured a different designer every week and was part of a larger exhibition entitled “Out of Hand”, which explores extreme examples of art and design created using digital fabrication techniques. Out of Hand is on display until June 1.

 

Sharing our research

During our week at the museum, lab founder Carla Diana was on hand to chat with museum visitors and share stories and research from her experience in developing LEO the Maker Prince, the first 3D printing book for kids. The book comes with a URL where readers can download and print the objects that appear throughout the story, and printed models were available at the museum for people to touch and inspect.


Playing with New Toys: Printers and Scanners

As a perk, we were invited to experiment with two fun technologies: a Formlabs Form 1 printer and a full-body Kinect scanning station.

The Formlabs printer was interesting since it uses stereolithography (SLA) as opposed to the fused deposition modeling (FDM) process that’s used in popular desktop 3D printers such as the MakerBot. In other words, the Form 1 works by having a laser project light onto a bath of liquid resin, hardening the spots that the laser hits, layer by layer. The more common FDM process relies on a spool of plastic filament fed through a heated nozzle, which deposits the molten plastic onto a platform where it hardens as the layers build up. (At Smart, we have professional-grade printers that employ both technologies, but it was intriguing to see a desktop version of the much messier SLA machine.)

In terms of results, the Formlabs prints capture a great deal more detail at a relatively high resolution. And because the SLA parts don’t require as bulky a support structure, they are also better at building interlocking and articulated parts than the FDM machines. We spent a good deal of time exploring this by building 3D models of chain structures and then printing them on the Formlabs printer.


We also took an old pair of eyeglasses and scanned the lenses in order to design and build a new pair of frames, exploiting the detail of the print.


Carla’s new frames built on the FormLabs printer

The scanning station was also quite fun to play with. It consisted of a Kinect camera connected to Skanect software and positioned in front of a motor-driven turntable that a person could stand on. As the turntable rotated, a Shapeways representative moved the Kinect camera up and down to capture the 3D data of the person’s body. We had hoped to play with the scanner a bit more, but it was outrageously popular with museum visitors, who waited in long lines to make scans they could use to order small statuettes of themselves. The number of people who come through the museum is astounding, and visitors have included Woody Allen and David Byrne.


David Byrne statuette, created from a scan at the MAD Shapeways exhibition 

Capturing the public’s imagination

Throughout the six days, most of our time wasn’t spent with the tools, but rather talking to people. It was fascinating to hear what questions people have about 3D printing and what’s capturing their imagination. While the technology is quite commonplace to professional designers, about 90 percent of the people who came through the residency exhibition said the same thing: “We’ve heard about 3D printers, but had no idea what they are.” People are reading about them in the news, but that’s the extent of their exposure, so they found it fascinating to hold and touch a 3D print and to see the process as it’s happening. Even the folks who did have some understanding of the printing techniques were hazy on how a 3D model is actually crafted on a computer, so we enjoyed giving them a glimpse of the solid modeling techniques that we typically use, as well as sharing tips on how to get started with friendlier platforms such as TinkerCAD and Autodesk’s 123D suite.


10-year-old Asher Weintraub, inventor of the Menurkey.

Our favorite visitor to the residency exhibit was 10-year-old Asher Weintraub. We noticed a young boy engrossed in the book and reading intently. When we spoke to his parents, they explained that Asher was the designer of the famous “menurkey,” a sculptural centerpiece created to celebrate both Hanukkah and Thanksgiving simultaneously and developed using 3D printing. Upwards of 7,000 menurkeys have been sold, and the young designer was invited to meet President Obama at the White House to share his story of innovation.

We’re thrilled to know that Asher is a fan of LEO.

 

Maker Week in the Bay Area

This week we’ll be sharing our 3D printing fun on the west coast with a LEO the Maker Prince reading and activity day in the San Francisco Smart Studio. We’ll also be at the MakerCon event on May 13-14 and will have several Lab folks floating around the Maker Faire on May 17-18, so if you are in the Bay Area, come find us!

Displays are a big part of how we build expression into the design exploration work we do at the Smart Interaction Lab, and LED matrices are particularly compelling as they allow us to have lights that glow beneath a surface and disappear when not in use. Currently, we are working with a 32 x 32 RGB LED matrix that’s about 5″ square, and in this post we share our experience with displaying type on it.

For starters, Adafruit offers a library that provides drawing functions and text display on the matrix. The library comes preloaded with a generic font; if you want custom typography, or even iconography mixed in with the type, you’ll need to create your own font. We ran into this challenge and decided to reverse engineer the library files.

In order to create our own font we modified the glcdfont.c file within the Adafruit-GFX library folder. This file stores the font in program memory, with the code for each character of the ASCII table. The first thing to notice is that the font is 5 x 7 pixels. When you open glcdfont.c, you can see that the font is a series of hex values separated by commas inside a font[] array; the PROGMEM keyword places that array, with each character’s values, in program memory. In the screenshot below we have commented in green the ASCII value for each series of hex values that makes up a character. Make sure that when you add notes like this you use comment syntax: ”// comment” or “/* comment text */”.

[Screenshot: glcdfont.c, with the ASCII value for each character’s series of hex values commented in green]

In the image above you can see that the capital letter “A” has a value of 65. Since all ASCII symbols are represented in this file, the characters appear in numerical order, so “A” is line number 65. Each character is defined by hex values, which are simply a compact way of writing binary. Each glyph is translated into five hex bytes; each byte represents a line of seven 1s or 0s, with each bit switching a pixel of the glyph on or off. Note that the lines are flipped 90° to the right, so each byte ends up describing a column of the character as it appears on screen.
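
To make this concrete, here is what the capital “A” entry looks like as five column bytes. The first two values are the ones worked out below; the last three are our assumed completion of the glyph, so treat this as illustrative rather than a copy of the actual library file.

    // Capital "A" (ASCII 65) in glcdfont.c: five bytes, one per column,
    // least significant bit at the top of the glyph.
    0x7E, // 0b1111110 - left vertical stroke
    0x09, // 0b0001001 - top of the letter plus the crossbar
    0x09, // 0b0001001
    0x09, // 0b0001001
    0x7E, // 0b1111110 - right vertical stroke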

We used Excel to redraw each of the glyphs and extract the hex value of each line using its handy conversion function: “=BIN2HEX(1111110)” returns “7E”. Add “0x” for the correct formatting of “0x7E”, and you have the first byte of the capital letter “A”. The second line is ”=BIN2HEX(1001)”, which returns “9”; this time we add a leading zero to keep the format uniform, giving “0x09”. Each of these hex values is then separated by a comma, and each glyph is five such columns wide, as defined earlier. The size of each character is 5 x 7 pixels and can be enlarged proportionally via other functions in the Arduino code, such as matrix.setTextSize().

[Screenshot: the Excel sheet used to redraw each glyph and convert its binary lines to hex]

 

If you are interested in creating your own fonts, here is the link for the Smart IxD modified file glcdfont.c: glcdfont NeatoFont

Here is the Excel file that shows how to arrive at the hex code: NeatoFont
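
Once the modified glcdfont.c is in place, drawing text works exactly as it does with the stock font. Below is a minimal sketch for a 32 x 32 panel using Adafruit’s RGBmatrixPanel driver alongside the GFX library; the pin assignments are placeholders for one common Arduino Uno wiring, so adjust them to match your own setup.

    #include <Adafruit_GFX.h>     // drawing and text functions (uses glcdfont.c)
    #include <RGBmatrixPanel.h>   // driver for the 32 x 32 RGB LED matrix

    // Placeholder pin assignments; match these to your wiring
    #define CLK 11
    #define OE   9
    #define LAT 10
    #define A   A0
    #define B   A1
    #define C   A2
    #define D   A3

    RGBmatrixPanel matrix(A, B, C, D, CLK, LAT, OE, false);

    void setup() {
      matrix.begin();
      matrix.setTextSize(1);                          // native 5 x 7 glyphs
      matrix.setTextColor(matrix.Color333(7, 7, 7));  // white
      matrix.setCursor(1, 1);
      matrix.print("A");                              // rendered with the custom font
    }

    void loop() {
    }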

Most of us have heard from health experts that we’re supposed to consume at least eight 8-ounce glasses of water a day, but how can we know if we’re hitting this target? Smart Intern and Lab collaborator Simone Capano set out to explore a solution to this problem with Hydramate, a simple system that passively reminds us when it’s time to drink more water.


The project explores the notion of time and how it can be represented in an ambient way through a 3-dimensional interface. “I’ve always been very interested in finding a way to manage time in a physical way. Time is for sure an abstract element, and nowadays a lot of applications allow us to manage time effectively, but what if time were controlled through a tangible object, acting as a reminder, speaking to the user with a specific language/behavior?” asks Simone.


Hydramate is an electronic coaster that measures a one-hour cycle, divided into four quarters, each representing 15 minutes. As time passes, each quarter begins to gently glow, giving the user a visual cue of how long it has been since they last raised their glass to drink. Once a whole hour has passed since the last sip, the device begins to blink, signaling that it is time to get hydrated. The blinking becomes stronger while the user is drinking, and once they set the glass back down it resets to the gentle glow of the first quarter.


Simone created a fully functioning prototype with an Arduino microcontroller. The shell is made of spray-painted polycarbonate, and the technology inside it is very simple (a rough sketch of the logic follows the list):

- A photocell senses when a glass has been placed on it
- An Arduino Pro Mini powered by a 3.3V lithium battery receives the input from the photocell and controls the LEDs accordingly
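
As mentioned above, here is a rough sketch of how that logic could fit together. The pin numbers, light threshold and blink rate are our own placeholders, not Simone’s actual values, and the real prototype may well handle the details differently.

    // Sketch of the Hydramate logic: a photocell on A0 detects the glass,
    // four LEDs show elapsed quarters of an hour, and everything blinks
    // once a full hour has passed without a sip.
    const int photocellPin = A0;
    const int quarterLeds[4] = {3, 5, 6, 9};
    const int glassThreshold = 300;                  // below this reading, a glass covers the sensor
    const unsigned long QUARTER_MS = 15UL * 60UL * 1000UL;

    unsigned long cycleStart = 0;
    bool wasPresent = true;

    void setup() {
      for (int i = 0; i < 4; i++) pinMode(quarterLeds[i], OUTPUT);
      cycleStart = millis();
    }

    void loop() {
      bool glassPresent = analogRead(photocellPin) < glassThreshold;
      if (glassPresent && !wasPresent) {
        cycleStart = millis();                       // glass set back down: restart the hour cycle
      }
      wasPresent = glassPresent;

      unsigned long elapsed = millis() - cycleStart;
      bool hourUp = elapsed >= 4UL * QUARTER_MS;
      int litQuarters = hourUp ? 4 : (int)(elapsed / QUARTER_MS) + 1;

      for (int i = 0; i < 4; i++) {
        if (hourUp) {
          digitalWrite(quarterLeds[i], (millis() / 500) % 2);   // blink: time to drink
        } else {
          digitalWrite(quarterLeds[i], i < litQuarters ? HIGH : LOW);
        }
      }
    }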

We look forward to hearing about how this project has developed since the prototype was constructed, and how well test users feel the device helps them to manage their hydration.

 


One of our current projects in the lab is the StressBot, a friendly console that reads heart activity through the Pulse Sensor to understand whether or not a person is in the physiological state of stress, and then offers coaching and feedback to reduce stress through breathing exercises. We’ve been continuously testing our setup with research participants to try to create an interface that’s clear and accessible to anyone in our studio who might approach the device.

Since the last time we posted about this project, we have learned much more about correlating stress to heart rate.  Joel Murphy, the creator of the Pulse Sensor, has helped us understand IBI (Interbeat Interval, or the time that passes between each heartbeat) and released some code that helped us grasp ways to map heart activity to stress. We have been using IBI measurements and the amplitude function Joel created to assign a specific value for stress, allowing us to measure whether it is relatively high or low.

[Image: first trial mapping the heart rate signal to a sine wave]

Most of our previous prototypes focused on trying to match a visual pattern with the heart rate. This proved to be very complicated and, worst of all, stressful. We also found that having one’s eyes closed is often the best way to achieve a state of relaxation. After a few iterations, we discovered that audio feedback is the best way to provide helpful guidance to a person who is trying to relax. It allows the person to close his or her eyes and focus on finding a constant tone rather than something visual. The image above shows the first trial, which involved mapping the amplitude of the heart rate to the amplitude of a sine wave, and the IBI to the frequency of the sound. The uppermost waves show the sound and the lowermost wave displays the heart rate signature.
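
As a rough illustration of the idea (not the actual StressBot code, which synthesizes a proper sine wave on the computer), here is a sketch that maps the interbeat interval to the pitch of a tone on a piezo speaker. The latestIBI variable is a hypothetical stand-in for the value that the Pulse Sensor interrupt code keeps updated.

    // Map the interbeat interval (IBI) to an audible pitch: a faster heart
    // (shorter IBI) produces a higher tone, a slower heart a lower one.
    const int speakerPin = 8;        // piezo speaker (placeholder pin)
    volatile int latestIBI = 800;    // hypothetical stand-in: interbeat interval in ms

    void setup() {
      pinMode(speakerPin, OUTPUT);
    }

    void loop() {
      // Typical resting IBI is roughly 600-1000 ms; map it onto an audible range.
      int freq = map(constrain(latestIBI, 600, 1000), 600, 1000, 880, 220);
      tone(speakerPin, freq);
      delay(20);
    }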

Below you can see the various explorations of mapping the same sound wave that is being altered by the user’s heart rate to another visual cue. The concentric circles show a rippling effect based on the IBI change, and the droplets are another way of visualizing the same sonic effect. The user in this case is trying to reach uniformity in pattern, either through the distance of the concentric circles or the distance and shape of the drops.

[Images: concentric-circle and droplet visualizations driven by the same heart rate signal]

Below you can find the latest iteration of the interface. Using the screen and physical enclosure, the device acts as a coach character to help people know how to approach it and use the interface. It engages users with their biosignals, while the bot provides an indication of their IBI state and a visual cue to assure them that they are on the right track. Although the project is not complete, we are getting close! Our next steps involve experimenting with parameters and doing more user testing.

[Photos: the latest iteration of the StressBot interface]

 

 

At Smart Interaction Lab we sometimes create our own circuit boards for physical models in order to have internal parts with exactly the components we need in an efficient form factor. We love using off-the-shelf boards from sources such as Sparkfun and Adafruit for fast solutions, but making great prototypes sometimes requires going one step further.

Since we know many of our readers face the same situation with their own projects, we decided to share our process here. The most recent board we created was made to work with one of our favorite new toys, the Ninja Blocks, a quick and easy kit consisting of a WiFi hub and sensors for exploring Internet of Things in the studio. This post lays out the process we went through to create our own custom RF Arduino Shield for NinjaBlocks.

 

Our Inspiration

When the folks from NinjaBlocks came to visit us in the San Francisco studio this past summer, we were excited to start playing with their product, as it offers an easy, quick way to experiment with design ideas for Internet-connected objects. The NinjaBlocks kit ships with a few basic sensors: temperature and humidity, motion, window/door contact and a simple push button. These are great for wiring up a space to monitor it online, or setting it up to trigger an email, tweet or update to a website. While playing with the setup we quickly realized that we would want to go beyond those basic sensors so that we could use it with anything we needed, like a force sensor, moisture detector or light sensor.

Since the NinjaBlocks hub communicates with its sensors over radio frequency, it’s something that we could easily tap into, and there are helpful guides on the NinjaBlocks website which outline how to use it with your prototyping platform of choice (we used an Arduino).

The guides give instructions on hooking up the Arduino and RF components using jumper wires from a breadboard, and though this is a great way to get started right away, we’ve found that things get messy quickly. Once you begin adding more components like sensors, it may end up looking more like a Medusa head of wires than anything else. Hence what this post is really about: making an Arduino shield!

 

Why a Shield?

An Arduino shield is essentially another circuit board that sits on top of the Arduino and adds capabilities the Arduino does not have on its own. Instead of connections happening through a bundle of loose wires, everything is neatly integrated within the layers of the circuit board and the components on it. Since a shield plugs easily into and out of an Arduino, it also allows you to swap shields as needed for different projects. The board can be designed in software and then fabricated by a printed circuit board (PCB) facility.

 

Getting Started

Since it’s always good to prototype what you’re doing first, we started with a breadboard, and then later used an all-purpose protoboard (available at hobby stores like Radio Shack), cut it down to the right dimensions, soldered connections to match the Arduino and hooked up the RF components.

 

Designing the Shield

Once our protoboard was working (hurrah!), we knew exactly what we needed, and we could move on to the design of the PCB. Though there are a growing number of home-brewed DIY solutions for making PCBs (such as using laser printers, printing on transparent sheets, exposing and etching away parts of copper-coated circuit boards, etc.), we wanted something very reliable and repeatable, without having to deal with corrosive chemicals. Thus we chose to outsource the manufacturing process to one of the few companies that can do small batches of PCBs fairly cost-effectively.

The shield was drawn up schematically and then laid out in a program called CadSoft EAGLE. It’s the same tool used by professional engineers, and they offer a limited version for free for hobbyists to make their own boards. Incidentally, many DIY electronics companies, such as SparkFun, Adafruit and Arduino, offer EAGLE files of their components, which makes building on what they’ve done much easier than having to reverse engineer and recreate a part from scratch. Our shield was made by modifying an EAGLE file of a shield to make sure all the necessary holes and pins would be in the right place.

EAGLE can be daunting at first, and drawing out a shield can take quite some time, but with the help of online reading and YouTube tutorial videos even a beginner can get started. There are also many other circuit board layout programs, among them the open source alternative KiCad and the online editor Circuits.io, which we’ve mentioned in a previous post; these may offer an easier way into creating circuit boards when you’re starting out.

 

Ordering the PCB

After going through a few iterations of the design in EAGLE, the finished version was finally shipped off to OSH Park to have it made. While the turnaround time for smaller batches of PCBs can be quite lengthy (we waited about three weeks), they’re still cheap enough to make it worthwhile. It is also a good idea to prototype as far as possible before committing the design to an actual PCB, to make sure the circuit works and everything is laid out properly. It’s better to spend an extra day laying everything out than to get your board back after a few weeks and find that it doesn’t work.

 

PCB Assembly

In the end, after soldering the RF modules and header pins onto the shield, it worked beautifully. (And there’s an added bonus to creating an RF Arduino sender-receiver setup: we can use it to communicate back and forth between individual Arduinos as well as with the NinjaBlocks.) Since the RF modules only take up two digital pins on the Arduino, you’re still very free to hook up sensors and actuators to trigger whatever your imagination can come up with. How would you use it?
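
To give a sense of how little code the RF side needs, here is a sketch using the widely available rc-switch library to send a fixed code whenever a sensor crosses a threshold. This is our own illustrative example rather than the shield’s actual firmware (that lives in the GitHub repo linked below), and the pin, threshold and code value are all placeholders.

    #include <RCSwitch.h>

    RCSwitch rf = RCSwitch();

    void setup() {
      rf.enableTransmit(4);          // RF transmitter data pin (placeholder)
    }

    void loop() {
      int reading = analogRead(A0);  // any analog sensor: force, moisture, light...
      if (reading > 500) {
        rf.send(123456, 24);         // a fixed 24-bit code the Ninja Blocks hub can learn
        delay(1000);                 // avoid flooding the airwaves
      }
    }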

 

Source Files

If you’d like to try making the shield yourself, you can download an updated version of the EAGLE file, along with documentation and links to Arduino libraries. These files will enable you to order your own board, or modify it to better fit your needs. Enjoy:

Smart Interaction Lab Arduino Ninja Shield on GitHub

 Special thanks to Smartie Daniel Jansson for sharing this process.

How can interactive objects encourage inspiration and dialog during brainstorming sessions?

This summer, the Barcelona branch of the Smart Interaction Lab set out to answer that question through electronic hardware experiments. TOTEM is a series of tangible interaction experiments, presented at Barcelona’s first ever Maker Faire, a place where people show what they are making, and share what they are learning.

Ideation sessions are part of everyday life at Smart Design, informing all the work we do. When reflecting upon these sessions, we developed the concept behind our Maker Faire project. We worked together as a team of multidisciplinary researchers and designers to explore how we can improve people’s experiences of the ideation process through tangible interaction.  Our solution was TOTEM—a family of three unique objects that help people get inspired and stay engaged in creative conversations and debates in order to generate new ideas. It is composed of a stack of three separate but complementary objects: Batón, Echo and Alterego.

1. Batón is an unconventional talking stick that is passed around during ideation sessions, allowing only the person holding the Batón to speak. After a certain amount of time the Batón begins to vibrate, indicating that the speaker must pass it to someone else in the group. This tool allows everyone present in a discussion to be heard: it forces the most dominant speakers to be more concise and encourages those who may be more shy to speak up. (A rough sketch of the timing logic follows this list.)

 

2. Echo is a recording/playback bell-like device that can fit easily and comfortably into any setting—from creative spaces, to the living room of your home, to cafes. Echo works in two ways: it records the background noise from wherever it is placed, and as soon as a user picks it up and puts it to their ear, Echo will play back what it previously recorded. By shaking it, Echo will randomly play back other sound bytes from other parts of the day. The main aim of Echo is to help users get inspired by other adjacent creative conversations, keywords, and quirky background noises that we would not be able to pick up on without the help of technology.

3. Alterego is designed to be a new version of the Six Thinking Hats mind tool by Edward de Bono. Alterego is a wearable device with three bracelet-like components. As soon as a user picks up the objects they light up with their own unique color, signaling that they are on. The three colors each have a purpose and code, to help guide the user to adopt different mindsets for evaluating ideas: green = optimistic, red = pessimistic, blue = moderator. The thinking behind this method is that it forces users to think outside their comfort zone and evaluate ideas in a balanced, unbiased manner—an essential skill for productive ideation.
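
As referenced above, here is a rough sketch of the Batón’s timing behavior. The speaking window, the vibration pattern and the idea of a “pass” button are all our own placeholders; the real object presumably detects a hand-off in a more graceful way.

    // Baton timing sketch: after a fixed speaking window, pulse the vibration
    // motor until the stick is passed to the next person.
    const int motorPin = 5;                          // vibration motor driver (placeholder)
    const int passButtonPin = 2;                     // hypothetical "passed on" switch
    const unsigned long SPEAK_MS = 60UL * 1000UL;    // placeholder talking window

    unsigned long turnStart = 0;

    void setup() {
      pinMode(motorPin, OUTPUT);
      pinMode(passButtonPin, INPUT_PULLUP);
      turnStart = millis();
    }

    void loop() {
      if (digitalRead(passButtonPin) == LOW) {       // new speaker takes the Baton
        turnStart = millis();
        digitalWrite(motorPin, LOW);
      }
      if (millis() - turnStart > SPEAK_MS) {
        digitalWrite(motorPin, (millis() / 300) % 2);  // pulse until it is passed on
      }
    }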

A schematic of the object components is below:

 

Smart Design and their Smart Interaction Lab are supporters of the Maker culture, a movement of invention, creativity and resourcefulness. Our project was well received by Faire visitors and we heard lots of great feedback. We look forward to considering those comments in our continued experiments. If you’d like to learn more about TOTEM, feel free to contact the Smart Interaction Lab.

Special thanks to the TOTEM team: Junior Castro, Marc Morros, Erika Rossi, Ivan Exposito, Valeria Sanguin and Floriane Rousse-Marquet.

Recently, the Barcelona branch of the Smart Interaction Lab explored a project called Smart TV, a system that automatically recognizes the viewer, then aggregates content from multiple sources based on their preferences, and creates a unique set of curated content channels for them. You can read more about it on the Smart Design website.

We took a look behind the scenes by talking to Junior, code/design ninja and IxD Lab Chief in the Barcelona office. Here’s a transcript of our fascinating chat about Python scripting libraries, group therapy sessions, and people who lie about liking Borat.

[10:49:34 AM] Carla: In your own words, what is the Smart TV project?

[10:51:59 AM] Junior: This project was originally an exploration of current technologies to enhance the TV watching experience, it was meant to be a “what if” scenario

[10:52:49 AM] Carla: Cool, can you tell me more? What if…?

[10:54:29 AM] Junior: Well, 2 years ago as part of my internship here I was asked to do a personal project, something that could show my interests while improving people’s lives and that would be aligned with Smart’s philosophy.

[10:56:18 AM] Junior: I was really interested in diving into recommendation engines and face recognition, so I came up with the idea of exploring the question, “What if a ‘Smart’ TV could be more than a ‘connected’ TV? What if the TV could actually know who was watching it and then adapt based on that, changing the UI in both the remote control and the content to be displayed?”

[10:56:53 AM] Carla: Why was it important to know who was watching? Was this something that you noticed was a pain point?

[10:58:30 AM] Junior: I felt that watching TV should be a relaxing activity, and with the amount of content that we have available, the effort required to browse through content to find what you like was really painful and less enjoyable than it should be.

[10:58:53 AM] Carla: Ah yes, that makes sense.

[10:58:56 AM] Junior: If the system knows who is watching, it can be a more pleasant experience by offering choices that are more tailored to that person.

[10:59:28 AM] Junior: Also, I wanted to help people that are not especially tech savvy.

[10:59:33 AM] Carla: Can you tell me more about your work with face recognition in this context?

[11:00:20 AM] Junior: I liked the idea of using face recognition because it’s a very natural way of interacting. After all, as humans, we use it all the time without even thinking about it, and I think we are at a point in history where technology can do it very accurately.

[11:00:45 AM] Carla: How does the face recognition work?

Above: PyVision image source: http://sourceforge.net/apps/mediawiki/pyvision/

[11:01:38 AM] Junior: Face recognition consists of 3 steps:

[11:02:13 AM] Junior: 1. Enrollment: when the system “learns” the face and associates it with a profile
2. Tracking: when the system analyzes the images and detects “faces”
and 3. Recognition: when the system distinguishes that face from all the faces in the image and identifies it as a specific profile

[11:04:30 AM] Carla: That’s fascinating. For the geeks in our audience, can you tell us what software you’re using?

[11:06:43 AM] Junior: Since I wanted it to be a stand-alone system, I looked into different solutions and finally I opted to use Python as the language, with an image processing library called PyVision.

[11:07:48 AM] Carla: Can you tell me a little bit more about this?

[11:08:39 AM] Junior: Python is a very portable language and it can be used on a lot of different platforms, both server-based and embedded. It’s a scripting language, but a high-performance one, so it’s really easy to reuse and port the code to different platforms.

[11:10:31 AM] Junior: My intention was to create a “black box” to contain all the required software and just plug it in to a TV.

[11:10:47 AM] Carla: Cool!

[11:11:24 AM] Carla: Can you talk about some of the experiments you did to get up to speed on it?

[11:12:12 AM] Junior: Sure. I divided the project in 3 parts that were developed separately and then I connected them.

[11:13:05 AM] Junior: First was the face recognition module, which was basically identifying who was watching. I tried several options and algorithms in order to find one that would be usable and responsive.

[11:13:19 AM] Junior: I did around 20-25 different scripts.

[11:13:49 AM] Carla: Wow, how did it go with those first scripts?

[11:14:41 AM] Junior: Well… some were good at tracking faces, but in order to recognize, you basically need to create an average of a lot of photos of that face. So, the first scripts were good at tracking but really bad at recognizing. They would be really unresponsive.

[11:15:08 AM] Carla: Ah yeah, that makes sense.

[11:16:05 AM] Carla: And then after the face recognition module?

[11:16:26 AM] Junior: Finally I found a really cool library to implement machine learning in Python.

[11:17:06 AM] Carla: Nice! What’s that called and how did you find it?

[11:17:47 AM] Junior: Mmmm… I read a lot of articles about face recognition, and the guys who developed PyVision use machine learning for face recognition.

[11:17:55 AM] Carla: Gotcha.

[11:18:15 AM] Carla: So after the face recognition module, where did you go with your experiments?

[11:19:02 AM] Junior: After that I did the iPhone app, and used it as remote control.

[11:19:32 AM] Junior: I felt strongly that the UI should not be on the TV screen itself because watching TV is a social activity: you don’t want to interrupt everyone who’s watching when you want to browse or get more information.

[11:19:59 AM] Carla: And what kind of coding environment did you use for the app? There are so many options right now, and a lot of people are confused where to start.

[11:21:07 AM] Junior: I used a framework called PhoneGap. It’s really cool: you create the UI using web technologies (HTML5, CSS, JS) and the framework encapsulates the UI into a native app.

[11:21:56 AM] Junior: It’s really simple and the best way to do a prototype.

[11:21:58 AM] Carla: Oh yeah, I know a lot of people love Phonegap for prototyping, nice to know you can create a native app with it.

[11:22:53 AM] Carla: What were the biggest challenges in developing the Smart TV system, particularly with making it really intuitive?

[11:24:31 AM] Junior: I think the biggest challenge was thinking about how the system will aggregate the content when 2 or more people are watching together

[11:25:03 AM] Junior: I feel that watching TV used to be very simple and social (as in people in the same place watching the same TV)

[11:25:07 AM] Carla: Interesting. I can see how that would be tricky to know whose content belongs to whom.

[11:25:42 AM] Junior: Exactly, and I think our approach was more about forgetting about “your” or “my” content and thinking about “our” content.

[11:26:06 AM] Junior: Let other people enrich your experience just by being there in front of the TV.

[11:27:18 AM] Carla: Hm. So does that mean that “we” become(s) another user? Or do you just pick the person whose content it’s more likely to be? I can see how this could get really complex really fast!

[11:29:04 AM] Junior: “We” become something different, we are a group that aggregates all the individuals.

[11:29:37 AM] Junior: Think about the wisdom of crowds applied to TV.

[11:29:58 AM] Carla: So is it kind of like this: person 1, person 2, person 3 and then a fourth profile for all three people combined?

[11:31:29 AM] Junior: Sort of. It’s combined but it’s not exactly the sum of everyone.

[11:31:41 AM] Junior: When you think of a family, for example: if you separate each member, they each have a personality,

[11:32:13 AM] Junior: but when they are together they have a “group” personality.

[11:34:35 AM] Carla: Ok, I get it. Cool. I think there are a lot of interesting social dynamics to explore there, like who is the most dominant. Super interesting. Could be a project for group therapy.  ;)

[11:35:27 AM] Junior: Exactly. One of the reasons I used face recognition was the possibility of using facial emotional feedback from everyone.

[11:36:15 AM] Carla: What’s next for this? Are you using the face recognition for anything else?

[11:36:26 AM] Junior: Not at the moment, but I’ve been paying attention to people using face recognition as a rating system.

[11:37:25 AM] Junior: In a regular recommendation system, it’s all about “like” or “dislike”, but the truth is that we have two “selves”: the one we aim to be and the one we really are.

[11:38:19 AM] Carla: That’s super fascinating about the self we aim to be. There’s so much psychology in all of this. Are you saying that the face recognition gives us a better truth than the rating that we indicate in the interface in another way?

[11:39:06 AM] Junior: Yes, exactly. For example, in order to create a profile in a recommendation engine you have to select content that you like, but most of the time you select things that you think are cool, not necessarily things that you actually like.

[11:39:25 AM] Carla: So would the system you propose collect recommendation data in a passive way? Like in the middle of the movie I’m watching, rather than a question that’s asked at some other time?

[11:40:23 AM] Carla: Is it passive, accumulated while I’m watching?

[11:40:29 AM] Junior: Ideally it should be tracking your facial feedback at all times.

[11:41:20 AM] Junior: You could choose “Gone With the Wind” or “Citizen Kane”, but in reality your facial feedback says that you like “The Mask” and “Spice World” better.

[11:41:24 AM] Carla: Ha Ha Ha, yes, and “Borat” instead of La Jetée.

[11:41:54 AM] Junior: Hehehe exactly ;) And facial emotional feedback is universal, regardless of culture or geographic location.

[11:42:11 AM] Carla: Yeah, that makes sense.

[11:42:22 AM] Junior: Then you could be more accurate about what you like and when.

[11:42:31 AM] Carla: Right.

[11:43:45 AM] Carla: Junior, this has been great! And I learned a lot.

[11:44:11 AM] Junior: Thanks Carla, it was really fun, please let me know if you need anything else.

[11:44:23 AM] Junior: I have tons of examples and references that I can share with the Smart Interaction Lab readers.

We’re always in the process of playing with sensors and tracking, and our newest toy, the Pulse Sensor, has led us to a new lab project, affectionately named “Stressbot”.

In our interview with one of the Pulse Sensor’s creators, Joel Murphy, we learned a lot about the link between Heart Rate Variability (HRV) and the physiological state of stress, which can be so dangerous when sustained for long periods. Since there is a Smart-wide initiative around stress awareness and healthy lifestyles, we decided to put our learning to good use and designed the Stressbot as a way for Smarties to learn how to detect and manage stress. The Stressbot lets you measure your HRV for a few minutes and displays real-time results on the screen. It then coaches you on breathing activities that can work to even out the HRV and effectively reduce stress.

We’re still in the process of learning more about the science behind it and modifying the code that the sensor’s creators have developed, but wanted to share a glimpse into our sketches and initial inspiration.

Stay tuned for updates on the Stressbot project over the next few weeks.

Above: Photo of the Nanoblock Sagrada Família model, with the 123D Catch 3D model built from a scan with the DIY iPhone Dolly.

Smart’s Master MacGyver Ron Ondrey loves his photo toys, and when he shopped around for movable tripods and dollies he decided he could do a better job if he made one himself. He built the rig, shown below, in order to pan, truck and take 360° bullet-time-style spinning videos around people and objects, using a standard camera mount or by mounting an iPhone.

We had a lot of fun in the lab taking experimental videos of our product designs, but decided to bring the rig’s coolness factor up a notch by using it in conjunction with Autodesk’s 123D Catch App, software that essentially turns your iPhone into a 3D scanner, and then lets you spin and tilt the virtual 3D object on the screen.

The video above shows our test shots with the Nanoblock model of Barcelona’s Sagrada Família.

Here’s Ron with the rig.


And our test model.

Here’s our test with the rig in 360 spin mode.