Smart IxD Lab


Thoughts, observations and experimentation on interaction by: Smart Design


The robots are here! Well, one of them. This week, Cynthia Breazeal, MIT’s pioneer researcher in social robotics, announced a project that we’ve been waiting for: Jibo, your family’s friendly robot companion. It’s essentially a voice-enabled expressive countertop appliance with a colorful screen for a face and a jolly, rotating torso that allows it to turn towards a person with socially appropriate gestures. In other words, Jibo can pay attention to you and carry on a conversation like it really cares.

The video above does a great job of telling the story of how Jibo might fit into your life, serving as an internet-enabled coach, personal assistant, video chatting system, entertainment hub, and all-around connected companion.

We’ve been following the field of social robotics for some time, and have noted women leaders such as Dr. Breazeal in this Fast Company article, “How Women Are Leading The Effort To Make Robots More Humane”. While most of the groundbreaking work so far has been confined to university laboratories, Jibo represents a bold move into the consumer market. Currently in a crowdfunding effort on Indiegogo, the product is scheduled to ship in December 2015.

As product designers we know the challenges that go into making a product’s interactions truly socially compelling, as outlined in this piece about Jibo in MIT Technology Review. We also know that it’s tricky to build a general-purpose connected device that can adequately fulfill a user’s needs better than all the other competing internet-connected appliances in the home. Nonetheless, we’re very excited about Jibo and look forward to following its progress as it develops.

See the New York Times piece on Jibo and social robotics here.


At the Lab we’re always on the lookout for new tools to help us quickly and easily prototype connected devices for “Internet of Things” experiences. One little board that’s caught our eye is the Spark Core. Here’s how the creators describe it:

“A tiny Wi-Fi development board that makes it easy to create internet-connected hardware. The Core is all you need to get started; power it over USB and in minutes you’ll be controlling LEDs, switches and motors and collecting data from sensors over the internet!”

We got our hands on one recently and were impressed by how easy it was to power up the board and control some LEDs via a web interface, as well as set up a sensor (we used a light-sensitive photocell) that can broadcast its status to the web. Using their cleanly designed Tinker app, we could turn functionality on or off with a tap on a touchscreen.

Perhaps most interesting is Spark Core’s strategy of offering three different entry points for programming the hardware. A complete beginner who wants an easy introduction can use the visual Tinker app, whereas a more advanced user might prefer the browser-based Spark IDE to write custom code. And since it’s an open source project, an experienced hardware tinkerer can explore any aspect of the board and software.
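To give a sense of what the firmware side looks like, here is a minimal sketch along the lines of what we tried; the pin choices and the “led”/“light” names are our own, and the exact calls may differ between firmware versions.

// Expose an LED and a photocell reading to the Spark Cloud.
int ledPin = D0;        // LED with a current-limiting resistor to ground
int photocellPin = A0;  // photocell wired as a voltage divider
int lightLevel = 0;     // latest reading, readable over the web

int setLed(String command);   // cloud-callable function, defined below

void setup() {
    pinMode(ledPin, OUTPUT);

    // Register a cloud function ("led") and a cloud variable ("light"),
    // which can then be called or read through the Spark Cloud REST API.
    Spark.function("led", setLed);
    Spark.variable("light", &lightLevel, INT);
}

void loop() {
    lightLevel = analogRead(photocellPin);   // 0-4095 on the Core's 12-bit ADC
    delay(100);
}

// Called over the web with an argument of "on" or "off"
int setLed(String command) {
    if (command == "on") {
        digitalWrite(ledPin, HIGH);
        return 1;
    }
    digitalWrite(ledPin, LOW);
    return 0;
}

With something like that flashed, a couple of HTTP calls to the Spark Cloud are enough to toggle the LED or read the light level from anywhere.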


We’re looking forward to doing some more tinkering of our own.

Check out Spark Core here: http://www.spark.io/


Like us, Google sees a future where technology plays an increasingly important role in everyone’s lives. In order for new products, services and software to be inclusive, we need women to be active participants in the creation of this new world. At the Lab and in Smart Design’s Femme Den, we’ve been encouraged by seeing more female leaders in new fields such as social robotics, but the facts unfortunately still point to women being incredibly underrepresented in technology fields in general.

When Google realized that only 17% of its tech employees are women, it set out to create initiatives to try to increase that number. One such effort is the recently launched Made W/ Code project, a combination of community resources and online projects. This inspirational video paints a picture of the larger vision that Google has invested $50 million in:

To build the community, Google has created video portraits highlighting mentors such as Limor Fried, creator of the Adafruit online catalog and learning resource, and Miral Kotb, founder of the iLuminate dance troupe. There are similar portraits of “makers”, such as young Tesca Fitzgerald at Georgia Tech, who uses code to experiment with social robots.

The projects are interactive and encourage kids to code using a browser-based tool for crafting many kinds of experiences, from animations and avatars to 3D printed objects and musical compositions. Though these projects seem to be in their early phases, they showcase a solid foundation of how a visual coding language (theirs is called “Blockly”) can be used as the framework for crafting algorithms and establishing behavioral patterns. Below is an image of the 3D printed project created in collaboration with the Shapeways service. Using only what you see in the browser, you can adjust the parameters of a bracelet and then have it 3D printed and shipped to you.

The Made W/ Code bracelet project, designed in the browser and printed via Shapeways.

We’re excited to follow Made W/ Code and help spread the inspiration to girls everywhere.

To check out Made W/ Code visit http://www.madewithcode.com

 


Last week Smart Interaction Lab was on the road in London at the UX London 2014 conference. In addition to giving a talk on the Internet of Things, we ran a 3-hour workshop on E-Sketching for Interaction Prototyping. In the workshop, we introduced participants to the basics of Arduino, and then quickly moved into demonstrations of a range of sensors. We used the scenario of an eldercare patient whose loved ones would like to be informed of changes in behavior, such as when medications haven’t been taken on time, much like the Lively system or the Vitality GlowCaps that we use in our Internet of Things examples. With some quick foam core and cardboard mockups, we showed how tilt switches, light sensors, accelerometers and temperature sensors can be used to help monitor medication compliance.
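To give a flavor of how simple these e-sketches are, here is a stripped-down version of the kind of demo we showed: a tilt switch taped to a pill bottle reports over serial when the bottle has been tipped, and an LED lights up when a dose is overdue. The pins, timing and messages here are illustrative rather than the exact workshop code.

// E-sketch: detect that a pill bottle has been tipped (a dose taken)
// using a tilt switch, and flag when too much time has passed.
const int tiltPin = 2;    // tilt switch between pin 2 and ground
const int ledPin = 13;    // on-board LED as a quick visual reminder

unsigned long lastDoseMillis = 0;
const unsigned long doseInterval = 8UL * 60UL * 60UL * 1000UL;   // 8 hours

void setup() {
  pinMode(tiltPin, INPUT_PULLUP);   // closed switch pulls the pin LOW
  pinMode(ledPin, OUTPUT);
  Serial.begin(9600);
}

void loop() {
  if (digitalRead(tiltPin) == LOW) {    // bottle tipped
    lastDoseMillis = millis();
    Serial.println("DOSE_TAKEN");
    delay(500);                         // crude debounce
  }

  // Light the LED if too long has passed since the last dose
  bool overdue = (millis() - lastDoseMillis) > doseInterval;
  digitalWrite(ledPin, overdue ? HIGH : LOW);
}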

Since UX designers value being able to visualize data, we linked our Arduino demos to Processing, a programming language that allows people to quickly create dynamic on-screen graphics. Once the demonstrations were done, participants formed teams and it was their turn to get creative with foam core, sensors and code.
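On the Arduino side, the link to Processing is nothing more than a stream of readings over the serial port, which a Processing sketch reads and graphs. A minimal sender looks something like this (the sensor pin and sample rate are placeholders):

// Stream an analog sensor value over serial, one value per line,
// so a Processing sketch can read and plot it.
const int sensorPin = A0;   // e.g. a photocell or temperature sensor

void setup() {
  Serial.begin(9600);
}

void loop() {
  int reading = analogRead(sensorPin);   // 0-1023 on a standard Arduino
  Serial.println(reading);
  delay(50);                             // roughly 20 samples per second
}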

The teams worked remarkably well together, and the energy in the room was awesome. Picking from a range of suggested subject areas, teams created working demos of:

-Temperature monitoring for accurate egg boiling
-An anti-snoring system
-Tracking seat usage on public transportation
-A warning system to let you know when you’ve left the refrigerator door open
…and an applause-o-meter to visualize how much participants enjoyed the workshop


Special thanks to Junior Castro from the Interaction Lab for joining us in London.


As always, excitement and speculation were running high around Apple’s annual developers conference, WWDC. It was no different here at Smart, where we all gathered around screens to watch what was being announced. Here are some of our initial thoughts:

Apple is fighting the perception that they’re losing the Android war, and this keynote, especially the beginning, was a response to that.

Nearly every major new OS X or iOS feature seemed to address integration of some sort in an exciting way, particularly the iCloud enhancements, HealthKit, HomeKit, the new Spotlight, and the family sharing features. We can’t wait to see if it all lives up to the promise. It’s exciting because this kind of integration fits better into our lives, potentially helps with the disjointed systems that have evolved, and connects beyond the personal user.

It was, however, strange to not mention the Beats acquisition at all, except for a phone call to Dr. Dre. Punishment for the early leak?

Yosemite
Apple is really good at iterating on its own stuff, and Yosemite is just another level of refinement.

Easily the best set of features in Yosemite and iOS 8 is Continuity. So many tasks are now cross-platform that enabling this makes perfect sense. As Tim Cook said, this is the kind of thing only Apple can really pull off, because it owns the whole ecosystem. AirDrop between devices will get rid of all the ridiculous hacks (emailing or texting yourself) to move files between devices. Yes, it’s amazing this is a promoted feature in 2014.

iCloud Drive
iCloud feels like it’s still playing catch-up to Dropbox and Google Drive. The price is good, but it felt like they needed to take a great leap here to move people away from existing storage solutions. For example, just back up everything, everywhere. All of your devices are backed up, period. Then charge for additional storage. Cloud storage is cheap; it’s not hard to have 100GB of media that could be in the cloud.

iOS 8
Interactive notifications are something that Android has had forever. Now Apple’s got its version and it’s a great addition. We’re curious how it’ll work on the lock screen when you’re not logged in.

QuickType
How will this work with swearing? Or sexting? We don’t want any of the guys who were onstage to be predicting sexting.

What’s interesting is that it’ll take into account who you’re talking to and adjust. That’s really interesting predictive technology. It’s a way for your phone to say, “Hey, I get you.” It’ll make the phone feel smarter, although it could be tricky with multiple users of the same device.

Since all the data is being stored on the device, what happens when the device gets stolen? Do you have to start over?

HealthKit
Passbook for health apps. Is your insurance going to integrate with it? Do you want to give an integrated package of your health data to Apple (or anyone outside of your healthcare provider)? It has some promise, but the infrastructure, particularly on the hospital/healthcare side, just isn’t there yet. If it works as well as Passbook (which is to say, not very), it won’t be very useful. Even though it’s ambitious and potentially noble, another concern is that this will simply be too much data and could lead to false diagnoses.

We’re speculating that this is the software precursor to any sort of wearable device from Apple.

Extensibility
This was the biggest change that Apple announced, and potentially the most game-changing.

Apple almost never introduces something we’ve never seen before. They only fine-tune existing things. Extensibility is an example of this, as it’s been in Android for years. App sandboxing has been both a huge selling point and a huge pain point. It’s way more secure against viruses, but it also prevents data and functionality from moving between apps.

It’ll be tricky for developers, and the downside is that it could get messy and make iOS feel like Android. We haven’t fully comprehended what this will mean yet.

Siri
Why wasn’t Siri incorporated into Spotlight? Will it be part of Extensibility? The other big (unsaid) revelation is that Siri is always on, always listening. Will it be cued into your particular voice? Or if everyone’s iPhones are on the table, by saying, “Hey Siri,” will you be able to turn them all on at once?

HomeKit
The piece we were the most excited about had some of the scantiest details. The “industry partners” announced so far were also underwhelming. Without a big-name appliance partner like GE or Whirlpool, the service feels limited. “Scenes” is an interesting metaphor for a programmed cluster of behaviors, though.

Swift
It makes sense that Apple made its own programming language. It’s so Apple. It was a little unclear how it really integrates with Objective-C/C, though.

Swift’s visual Playground reminded us of Bret Victor’s work. It did fall a little short when, during the demo, the presenter couldn’t drag the visuals and change the code.

So while there was no hardware announcement, there were still some meaty additions to the Apple canon…and some accompanying unanswered questions. We can’t wait to get our hands on the new software and try it out.

MAD museum residency

This spring, Smart Interaction Lab’s NYC branch went uptown to the Museum of Arts and Design (MAD) for one week to take part in an intensive designer residency exploring the future of desktop 3D printing. The program, sponsored by the online 3D print-on-demand service Shapeways, featured a different designer every week and was part of a larger exhibition entitled “Out of Hand”, which explores extreme examples of art and design created using digital fabrication techniques. Out of Hand is on display until June 1.

 

Sharing our research

During our week at the museum, lab founder Carla Diana was on hand to chat with museum visitors and share stories and research from her experience developing LEO the Maker Prince, the first 3D printing book for kids. The book comes with a URL where readers can download and print the objects that appear throughout the story, and printed models were available at the museum for people to touch and inspect.


Playing with New Toys: Printers and Scanners

As a perk, we were invited to experiment with two fun technologies: a Formlabs Form 1 printer and a full-body Kinect scanning station.

The Formlabs printer was interesting since it uses a stereolithography (SLA) process, as opposed to the fused deposition modeling (FDM) process used in popular desktop 3D printers such as the MakerBot. In other words, the Form 1 works by having a laser project light onto a bath of liquid resin, hardening the spots that the laser hits, layer by layer. The more common FDM process relies on a spool of plastic filament that is fed through a heated nozzle positioned to allow the molten plastic to fall onto a platform and harden as it builds up layers. (At Smart, we have professional-grade printers that employ both technologies, but it was intriguing to see a desktop version of the much messier SLA machine.)

In terms of results, the Formlabs prints capture a great deal more detail at a relatively high resolution. And because the SLA parts don’t require as bulky a support structure, they are also better at building interlocking and articulated parts than the FDM machines. We spent a good deal of time exploring this by building 3D models of chain structures and then printing them on the Formlabs printer.


We also took an old pair of eyeglasses and scanned the lenses in order to design and build a new pair of frames, exploiting the detail of the print.


Carla’s new frames, built on the Formlabs printer

The scanning station was also quite fun to play with. It consisted of a Kinect camera connected to Skanect software and positioned in front of a motor-driven turntable that a person could stand on. As it rotated, a Shapeways representative moved the Kinect camera up and down to capture the 3D data of the person’s body. We had hoped to play with the scanner a bit more, but it was outrageously popular with museum visitors, who waited in long lines to make scans they could use to order small statuettes of themselves. The number of people who come through the museum is astounding, and visitors have included Woody Allen and David Byrne.


David Byrne statuette, created from a scan at the MAD Shapeways exhibition 

Capturing the public’s imagination

Throughout the six days, most of our time wasn’t spent with the tools but rather talking to people. It was fascinating to hear what questions people have about 3D printing and what’s capturing their imagination. While the technology is quite commonplace to professional designers, about 90 percent of the people who came through the residency exhibition said the same thing: “We’ve heard about 3D printers, but had no idea what they are.” People are reading about them in the news, but that’s the extent of their exposure, so they found it fascinating to be able to hold and touch a 3D print and see the process as it’s happening. Even the folks who did have some understanding of the printing techniques were hazy on how a 3D model is actually crafted on a computer, so we enjoyed giving them a glimpse of the solid modeling techniques we typically use, as well as sharing tips on how to get started with friendlier platforms such as Tinkercad and Autodesk’s 123D suite.


10-year-old Asher Weintraub, inventor of the Menurkey.

Our favorite visitor to the residency exhibit was 10-year-old Asher Weintraub. We noticed a young boy engrossed in the book and reading intently. When we spoke to his parents, they explained that Asher was the designer of the famous “Menurkey,” a sculptural centerpiece created to celebrate both Hanukkah and Thanksgiving simultaneously and developed using 3D printing. Upwards of 7,000 Menurkeys have been sold, and the young designer was invited to the White House to meet President Obama and share his story of innovation.

We’re thrilled to know that Asher is a fan of LEO.

 

Maker Week in the Bay Area

This week we’ll be sharing our 3D printing fun on the west coast with a LEO the Maker Prince reading and activity day in the San Francisco Smart Studio. We’ll also be at the MakerCon event on May 13-14 and will have several Lab folks floating around the Maker Faire on May 17-18, so if you are in the Bay Area, come find us!

Displays are a big part of how we build expression into the design exploration work we do at the Smart Interaction Lab, and LED matrices are particularly compelling because they allow us to have lights that glow beneath a surface and disappear when not in use. Currently we are working with a 32 x 32 RGB LED matrix that’s about 5″ square, and in this post we share our experience with displaying type on it.

For starters, Adafruit offers a library that provides drawing functions and text display on the matrix. The library is preloaded with a generic font; if you want custom typography, or even iconography mixed in with the type, you’ll need to create your own font. We ran into this challenge and decided to reverse engineer the library files.
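For context, drawing text with the stock font takes only a few lines using Adafruit’s RGBmatrixPanel library, which sits on top of Adafruit-GFX. The sketch below is a minimal example; the pin assignments follow Adafruit’s 32 x 32 wiring example and will depend on how your panel is hooked up.

#include <Adafruit_GFX.h>
#include <RGBmatrixPanel.h>

// Control pins for the 32x32 panel, per Adafruit's example wiring (adjust for your setup)
#define CLK 8
#define OE  9
#define LAT 10
#define A   A0
#define B   A1
#define C   A2
#define D   A3

RGBmatrixPanel matrix(A, B, C, D, CLK, LAT, OE, false);

void setup() {
  matrix.begin();
  matrix.setTextSize(1);                           // the 5 x 7 glyphs defined in glcdfont.c
  matrix.setTextColor(matrix.Color333(7, 7, 7));   // white
  matrix.setCursor(1, 1);
  matrix.print("HI");
}

void loop() {
}

Every glyph that matrix.print() draws comes out of the glcdfont.c table described below, which is why editing that file changes what appears on the panel.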

In order to create our own font, we modified the glcdfont.c file within the Adafruit-GFX library folder. This file holds, in program memory, the bitmap data for each character of the ASCII table. The first thing to notice is that the font is 5 x 7 pixels. When you open glcdfont.c, you can see that the font is a series of hex values, separated by commas, inside a font[] array; the PROGMEM keyword is what places that array in program memory. We commented the ASCII value (shown in green in the screenshot below) next to the series of hex values that makes up each character. If you do this yourself, make sure to use proper comment syntax: “// comment” or “/* comment text */”.

1. The glcdfont.c font[] array, with the ASCII value for each glyph added as a comment.

In the image above you can see that the capital letter “A” has an ASCII value of 65. Since all ASCII symbols are represented in this file in numerical order, “A” sits on line 65. Each character is defined by hex values, which are simply a shorthand for binary values. Each glyph is translated into five hex bytes, and each byte represents a line of seven 1s or 0s, with each bit marking a pixel in the glyph as either “on” or “off”; the lines are flipped 90° to the right, so each byte is effectively one column of the glyph.

We used Excel to redraw each of the glyphs and extract the hex value of each line using its handy conversion function: “=BIN2HEX(1111110)” returns “7E”. Add “0x” for the correct formatting, “0x7E”, and you have the first byte of the capital letter “A”. The second line is “=BIN2HEX(1001)”, which returns “9”; this time we add a leading zero, giving “0x09”, to keep the format uniform. Each of these hex values is then separated by a comma, and each character or glyph is five pixels wide, as defined earlier. The size of each character is 5 x 7 pixels and can be enlarged proportionally via other functions in the Arduino code, such as “matrix.setTextSize(1)”.
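Putting it all together, each entry in glcdfont.c is just those five column bytes in a row. Here is how the capital “A” from the example above looks once the remaining columns are worked out the same way, along with a made-up smiley glyph of our own to show how an icon can slot into the same table (bit 0 is the top pixel of each column):

// Capital "A" (ASCII 65), five bytes, one per column:
// 0x7E = B1111110 (vertical stroke), 0x09 = B0001001 (apex plus crossbar)
0x7E, 0x09, 0x09, 0x09, 0x7E,   // 'A'

// A custom 5 x 7 icon is built exactly the same way. This made-up smiley
// puts eyes at bits 1-2 of columns 1 and 3, and a mouth curving through
// bit 4 (outer columns) and bit 5 (middle columns):
0x10, 0x26, 0x20, 0x26, 0x10,   // smiley

Swap bytes like these into an unused slot of the font[] array, re-upload your sketch, and the new glyph appears whenever you print the corresponding ASCII character.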


If you are interested in creating your own fonts, here is the link for the Smart IxD modified file glcdfont.c: glcdfont NeatoFont

Here is the Excel file that shows how to arrive at the hex code: NeatoFont

Most of us have heard from health experts that we’re supposed to consume at least eight 8-ounce glasses of water a day, but how can we know if we’re hitting this target? Smart Intern and Lab collaborator Simone Capano set out to explore a solution to this problem with Hydramate, a simple system that passively reminds us when it’s time to drink more water.


The project explores the notion of time and how it can be represented in an ambient way through a 3-dimensional interface. “I’ve always been very interested in finding a way to manage time in a physical way. Time is for sure an abstract element, and nowadays a lot of applications allow us to manage time effectively, but what if time were controlled through a tangible object, acting as a reminder, speaking to the user with a specific language/behavior?” asks Simone.


Hydramate is an electronic coaster that measures a one-hour cycle divided into four quarters, each representing 15 minutes. As time passes, each quarter begins to gently glow, giving the user a visual cue of how much time has passed since they last raised the glass to drink. Once a whole hour has passed since the last sip, the device begins to blink, signaling that it is time to get hydrated. The blinking becomes stronger while the user is drinking, and once they set the glass back down it resets to the gentle glow of the first quarter.


Simone created a fully functioning prototype with an Arduino microcontroller. The shell is made of spray-painted polycarbonate, and the technology inside of it is very simple:

- A photocell senses when a glass has been placed on it

- An Arduino Pro Mini powered by a 3.3V lithium battery receives the input from the photocell and controls the LEDs accordingly
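Based on the behavior described above, a rough sketch of the coaster’s logic might look like the following; the pins, threshold and blink rate are illustrative guesses rather than Simone’s actual code.

// Hydramate-style timing logic: a photocell under the glass detects
// presence, and four LEDs fill in, one per quarter hour since the last sip.
const int photocellPin = A0;
const int ledPins[4] = {3, 5, 6, 9};              // one LED per 15-minute quarter
const int glassThreshold = 300;                   // below this = glass present (wiring dependent)
const unsigned long quarterMillis = 15UL * 60UL * 1000UL;

unsigned long lastSipMillis = 0;
bool glassWasDown = true;

void setup() {
  for (int i = 0; i < 4; i++) {
    pinMode(ledPins[i], OUTPUT);
  }
}

void loop() {
  bool glassDown = analogRead(photocellPin) < glassThreshold;

  // Lifting the glass and setting it back down counts as a sip and resets the hour
  if (glassDown && !glassWasDown) {
    lastSipMillis = millis();
  }
  glassWasDown = glassDown;

  unsigned long elapsed = millis() - lastSipMillis;
  int quarters = (int)(elapsed / quarterMillis);

  if (quarters < 4) {
    // Glow one more quarter for each 15 minutes since the last sip
    for (int i = 0; i < 4; i++) {
      digitalWrite(ledPins[i], (i <= quarters) ? HIGH : LOW);
    }
  } else {
    // A full hour without a sip: blink all four quarters as a reminder
    int blinkState = ((millis() / 500) % 2) ? HIGH : LOW;
    for (int i = 0; i < 4; i++) {
      digitalWrite(ledPins[i], blinkState);
    }
  }
}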

We look forward to hearing about how this project has developed since the prototype was constructed, and how well test users feel the device helps them to manage their hydration.

 


With Christmastime right around the corner, it’s getting down to the wire to find that last-minute gift for everyone on your Christmas list. While this task is something we’re all familiar with (and stress over), for parents of kids with physical disabilities it can be even more stressful. For many such children, playing with off-the-shelf toys is not possible, which is where Hacking for the Holidays comes in.

Last Saturday I took part in DIYAbility and the Adaptive Design Association’s Hacking for the Holidays, which invites makers, hackers, and occupational, music and recreational therapists to come together and hack toys to make them switch accessible for children with physical disabilities. The term “switch accessible” refers to making a toy usable via simple switches that connect through a mono jack in the toy. If a child can move their head, feet, arm, mouth or any other part of their body, it is possible to use a switch plugged into a mono jack to play with the toy. For example, rather than using a joystick to control an RC car, tactile or momentary switches can be substituted. Adding these switch jacks to a toy does not affect the original toy; the existing buttons operate as normal, and kids who use accessibility switches will now be able to operate the toy, so it works for all users.

As part of the workshop, each participant brought a toy to hack, and together as a group we worked toward integrating simple mono jacks into the toys.


For my toy, I chose the Fast Lane Lightning Striker. Opening up the remote control revealed four simple spring-clip switches (forward, backward, left, right), which would serve as the soldering points for the mono jacks.


After isolating the PCB from the housing, I soldered up four mono jacks and wired them to the spring clips. A few holes in the housing to feed the mono jacks through, and the toy was back up and running, ready for accessibility switches to be plugged in.


Overall, the Hacking for the Holidays event was a great time to geek out while doing something meaningful. I’m looking forward to doing it again next year.


One of our current projects in the lab is the StressBot, a friendly console that reads heart activity through the Pulse Sensor to understand whether or not a person is in a physiological state of stress, and then offers coaching and feedback to help reduce that stress through breathing exercises. We’ve been continuously testing our setup with research participants to try to create an interface that’s clear and accessible to anyone in our studio who might approach the device.

Since the last time we posted about this project, we have learned much more about correlating stress to heart rate. Joel Murphy, the creator of the Pulse Sensor, has helped us understand IBI (interbeat interval, the time that passes between heartbeats) and released some code that helped us grasp ways to map heart activity to stress. We have been using IBI measurements and the amplitude function Joel created to assign a specific value for stress, allowing us to measure whether it is relatively high or low.
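To make the terminology concrete, here is a small sketch of what working with IBI can look like. This is our own simplification for illustration, not the code Joel released.

// Track the interbeat interval and how much it changes from beat to beat.
// Low beat-to-beat variation tends to accompany stress; higher variation
// tends to accompany a more relaxed state.
unsigned long lastBeatMillis = 0;
int lastIBI = 0;          // previous interbeat interval, in ms
float variability = 0;    // smoothed beat-to-beat change in IBI, in ms

// Call this whenever the pulse sensor reports a beat
void onBeatDetected() {
  unsigned long now = millis();
  int ibi = (int)(now - lastBeatMillis);   // interbeat interval in ms
  lastBeatMillis = now;

  if (lastIBI > 0) {
    int delta = abs(ibi - lastIBI);
    variability = 0.9 * variability + 0.1 * delta;   // running average of the change
  }
  lastIBI = ibi;
}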

First audio trial: the heart rate signal driving the frequency and amplitude of a sine wave.

Most of our previous prototypes focused on trying to match a visual pattern with the heart rate. This proved to be very complicated and, worst of all, stressful. We also found that having one’s eyes closed is often the best way to achieve a state of relaxation. After a few iterations, we discovered that audio feedback is the best way to provide helpful feedback to a person who is trying to relax. It allows the person to close his or her eyes and focus on finding a constant tone rather than something visual. The image above shows the first trial, which mapped the amplitude of the heart rate to the amplitude of a sine wave, and the IBI to the frequency of the sound. The uppermost waves show the sound, and the lowermost wave shows the heart rate signature.
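As a rough sketch of the frequency half of that mapping (our own simplification; the actual prototype also mapped pulse amplitude to the loudness of the sine wave, which a bare tone() call can’t reproduce):

const int speakerPin = 8;   // piezo or small speaker

// Map the latest interbeat interval to a pitch: shorter intervals
// (a faster heart rate) give a higher tone, longer intervals a lower one,
// so a slowing, steadying heartbeat settles into a low, constant sound.
void updateTone(int ibiMillis) {
  int clamped = constrain(ibiMillis, 500, 1200);   // plausible IBI range, in ms
  int freq = map(clamped, 500, 1200, 880, 220);    // milliseconds mapped to hertz
  tone(speakerPin, freq);
}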

Below you can see the various explorations of mapping the same sound wave that is being altered by the user’s heart rate to another visual cue. The concentric circles show a rippling effect based on the IBI change, and the droplets are another way of visualizing the same sonic effect. The user in this case is trying to reach uniformity in pattern, either through the distance of the concentric circles or the distance and shape of the drops.

Explorations in visualizing the heart-driven sound wave: concentric ripples and droplet patterns based on the changing IBI.

Below you can find the latest iteration of the interface. Using the screen and physical enclosure, the device acts as a coach character, helping people understand how to approach it and use the interface. It engages users with their biosignals, reflecting their IBI state and giving them a visual cue to assure them that they are on the right track. Although the project is not complete, we are getting close! Our next steps involve experimenting with parameters and doing more user testing.

The latest iteration of the StressBot interface prototype.