Smart IxD Lab

Thoughts, observations, and experimentation on interaction by Smart Design

WWDC 2014

As always, excitement and speculation were running high around Apple’s annual developers conference, WWDC. It was no different here at Smart, where we all gathered around screens to watch the announcements. Here are some of our initial thoughts:

Apple is fighting the perception that it’s losing the Android war, and this keynote, especially the beginning, was a response to that.

Nearly every major new OS X or iOS feature seemed to address integration of some sort in an exciting way, particularly the iCloud enhancements, HealthKit, HomeKit, the new Spotlight, and the family sharing features. We can’t wait to see if it lives up to the promise. This integration is exciting because it fits better into our lives, potentially mends the disjointed systems that have evolved around us, and connects beyond the individual user.

It was strange, however, that the Beats acquisition wasn’t mentioned at all, except for a phone call to Dr. Dre. Punishment for the early leak?

Yosemite
Apple is really good at iterating on its own stuff, and Yosemite is just another level of refinement.

Easily the best set of features in Yosemite and iOS 8 is Continuity. So many tasks are now cross-platform that enabling them makes perfect sense. As Tim Cook said, this is the kind of thing only Apple can really pull off, because it owns the whole ecosystem. AirDrop between devices will get rid of all the ridiculous hacks (emailing or texting yourself) for moving files between devices. Yes, it’s amazing this is a promoted feature in 2014.

iCloud Drive
iCloud still feels like it’s playing catch-up to Dropbox and Google Drive. The price is good, but it felt like Apple needed to take a great leap here to move people away from existing storage solutions. For example: just back up everything, everywhere. All of your devices are backed up, period. Then charge for additional storage. Cloud storage is cheap, and it’s not hard to have 100 GB of media that could live in the cloud.

iOS 8
Interactive notifications are something that Android has had forever. Now Apple’s got its version and it’s a great addition. We’re curious how it’ll work on the lock screen when you’re not logged in.

QuickType
How will this work with swearing? Or sexting? We don’t want any of the guys who were onstage to be predicting sexting.

What’s interesting is that it’ll take into account who you’re talking to and adjust accordingly. That’s really interesting predictive technology, a way for your phone to say, “Hey, I get you.” It’ll make the phone feel smarter, although it could be tricky with multiple users of the same device.

Since all the data is being stored on the device, what happens when the device gets stolen? Do you have to start over?

HealthKit
Passbook for health apps. Will your insurance integrate with it? Do you want to hand an integrated package of your health data to Apple (or anyone outside of your healthcare provider)? It has some promise, but the infrastructure, particularly on the hospital and healthcare side, just isn’t there yet. If it works as well as Passbook (which is to say, not very), it won’t be very useful. And even though it’s ambitious and potentially noble, another concern is that it’ll produce too much data, which could lead to false diagnoses.

We’re speculating that this is the software precursor to any sort of wearable device from Apple.

Extensibility
This was the biggest change that Apple announced, and potentially the most game-changing.

Apple almost never introduces something we’ve never seen before; it fine-tunes existing things. Extensibility is an example of this, as it’s been in Android for years. App sandboxing has been both a huge selling point and a huge pain point: it’s far more secure against viruses, but it also prevents data and functionality from moving between apps.

It’ll be tricky for developers, and the downside is that it could get chaotic and make iOS feel like Android. We haven’t fully comprehended what this will mean yet.

Siri
Why wasn’t Siri incorporated into Spotlight? Will it be part of Extensibility? The other big (unsaid) revelation is that Siri is always on, always listening. Will it be cued to your particular voice? Or if everyone’s iPhones are on the table and someone says “Hey Siri,” will they all turn on at once?

HomeKit
The piece we were most excited about had some of the scantiest details. The “industry partners” here were also underwhelming thus far: without a big-name appliance partner like GE or Whirlpool, the service feels limited. “Scenes” is an interesting metaphor for a programmed cluster of behaviors, though.

Swift
It makes sense that Apple made its own programming language. It’s so Apple. It was a little unclear how it really integrates with Objective-C and C, though.

Swift’s visual Playground reminded us of Bret Victor’s work. It did fall a little short during the demo, when the presenter couldn’t drag the visuals to change the code.

So while there was no hardware announcement, there were still some meaty additions to the Apple canon, and some accompanying unanswered questions. We can’t wait to get our hands on the new software and try it out.

MAD museum residency

This spring, Smart Interaction Lab’s NYC branch went uptown to the Museum of Arts and Design (MAD) for one week to take part in an intensive designer residency exploring the future of desktop 3D printing. The program, sponsored by the online 3D print-on-demand service Shapeways, featured a different designer every week and was part of a larger exhibition entitled “Out of Hand,” which explores extreme examples of art and design created using digital fabrication techniques. Out of Hand is on display until June 1.

Sharing our research

During our week at the museum, Lab founder Carla Diana was on hand to chat with museum visitors and share stories and research from her experience developing LEO the Maker Prince, the first 3D printing book for kids. The book comes with a URL where readers can download and print the objects that appear throughout the story, and printed models were available at the museum for people to touch and inspect.

Playing with New Toys: Printers and Scanners

As a perk, we were invited to experiment with two fun technologies: a Formlabs Form 1 printer and a full-body Kinect scanning station.

The Formlabs printer was interesting since it uses a stereolithography (SLA) process, as opposed to the fused deposition modeling (FDM) process used in popular desktop 3D printers such as the MakerBot. In other words, the Form 1 works by projecting a laser onto a bath of liquid resin, hardening the places the laser hits, layer by layer. The more common FDM process relies on a spool of plastic filament fed through a heated nozzle, which deposits molten plastic onto a platform, where it hardens as the layers build up. (At Smart, we have professional-grade printers that employ both technologies, but it was intriguing to see a desktop version of the much messier SLA machine.)

In terms of results, the Formlabs prints capture a great deal more detail at a relatively high resolution. And because SLA parts don’t require as bulky a support structure, the machine is also better at building interlocking and articulated parts than FDM machines are. We spent a good deal of time exploring this by building 3D models of chain structures and then printing them on the Form 1.

We also took an old pair of eyeglasses and scanned the lenses in order to design and build a new pair of frames, exploiting the detail of the print.

Carla’s new frames, built on the Formlabs printer

The scanning station was also quite fun to play with. It consisted of a Kinect camera connected to Skanect software and positioned in front of a motor-driven turntable that a person could stand on. As the turntable rotated, a Shapeways representative moved the Kinect up and down to capture 3D data of the person’s body. We had hoped to play with the scanner a bit more, but it was outrageously popular with museum visitors, who waited in long lines to make scans they could use to order small statuettes of themselves. The number of people who came through the museum was astounding, and visitors have included Woody Allen and David Byrne.

David Byrne statuette, created from a scan at the MAD Shapeways exhibition

Capturing the public’s imagination

Throughout the six days, most of our time wasn’t spent with the tools but talking to people. It was fascinating to hear what questions people have about 3D printing and what’s capturing their imagination. While the technology is quite commonplace to professional designers, about 90 percent of the people who came through the residency exhibition said the same thing: “We’ve heard about 3D printers, but had no idea what they are.” People are reading about them in the news, but that’s the extent of their exposure, so they found it fascinating to hold and touch a 3D print and to see the process as it happens. Even the folks who did have some understanding of the printing techniques were cloudy on how a 3D model is crafted on a computer, so we enjoyed giving them a glimpse of the solid modeling techniques we typically use, as well as sharing tips on getting started with friendlier platforms such as Tinkercad and Autodesk’s 123D suite.

10-year-old Asher Weintraub, inventor of the Menurkey

Our favorite visitor to the residency exhibit was 10-year-old Asher Weintraub. We noticed a young boy engrossed in the book, reading intently. When we spoke to his parents, they explained that Asher was the designer of the famous “Menurkey,” a sculptural centerpiece created to celebrate both Hanukkah and Thanksgiving simultaneously, developed using 3D printing. Upwards of 7,000 Menurkeys have been sold, and the young designer was invited to meet President Obama at the White House to share his story of innovation.

We’re thrilled to know that Asher is a fan of LEO.

Maker Week in the Bay Area

This week we’ll be sharing our 3D printing fun on the West Coast with a LEO the Maker Prince reading and activity day at the San Francisco Smart studio. We’ll also be at the MakerCon event on May 13-14 and will have several Lab folks floating around Maker Faire on May 17-18, so if you are in the Bay Area, come find us!

Designing type for LED matrix displays

Displays are a big part of how we build expression into the design exploration work we do at the Smart Interaction Lab, and LED matrices are particularly compelling because they allow us to have lights that glow beneath a surface and disappear when not in use. We’re currently working with a 32 x 32 RGB LED matrix that’s about 5″ square, and in this post we share our experience with displaying type on it.

For starters, Adafruit offers a library that provides drawing functions and text display on the matrix. The library comes preloaded with a generic font; if you want custom typography, or even iconography mixed in with the type, you’ll need to create your own font. We came across this challenge and decided to reverse-engineer the library files.
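
To ground the discussion, here is a minimal sketch that puts text on a 32 x 32 panel using Adafruit’s GFX and RGBmatrixPanel libraries. The pin assignments follow Adafruit’s common Arduino Uno wiring and are assumptions; adjust them to your own hookup.

```cpp
// Minimal text demo for a 32x32 RGB LED matrix using Adafruit_GFX +
// RGBmatrixPanel. Pin choices below follow Adafruit's common Uno
// wiring and are assumptions -- change them to match your setup.
#include <Adafruit_GFX.h>    // core graphics library (owns glcdfont.c)
#include <RGBmatrixPanel.h>  // hardware-specific panel driver

#define CLK  8   // matrix clock
#define OE   9   // output enable
#define LAT 10   // latch
#define A   A0   // row address lines A-D (32-row panels need all four)
#define B   A1
#define C   A2
#define D   A3

RGBmatrixPanel matrix(A, B, C, D, CLK, LAT, OE, false);

void setup() {
  matrix.begin();
  matrix.setTextSize(1);                         // 1 = native 5x7 glyphs
  matrix.setTextColor(matrix.Color333(7, 7, 7)); // white
  matrix.setCursor(1, 1);
  matrix.print("SMART");                         // drawn from glcdfont.c
}

void loop() {}
```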

In order to create our own font, we modified the glcdfont.c file inside the Adafruit-GFX library folder. This file stores the font table in program memory, with a bitmap for each character of the ASCII table. The first thing to notice is that the font is 5 x 7 pixels. When you open glcdfont.c, you can see that the font is a series of hex values, separated by commas, inside a font[] array; the PROGMEM qualifier is what places that array in flash memory. We commented in the ASCII value alongside the series of hex values that makes up each character. If you do the same, make sure you use comment syntax: // comment or /* comment text */.
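
For orientation, the structure of the table looks roughly like the excerpt below. The glyph bytes shown are the ones we derive in the worked example that follows, not necessarily the stock Adafruit values.

```cpp
// Abbreviated sketch of the font table structure in glcdfont.c.
// Each character is five bytes, one byte per column of the 5x7 glyph.
static const unsigned char font[] PROGMEM = {
  // ... glyphs for ASCII 0-64 come before this ...
  0x7E, 0x09, 0x09, 0x09, 0x7E, // 65 'A'
  0x7F, 0x49, 0x49, 0x49, 0x36, // 66 'B'
  // ... remaining glyphs ...
};
```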

The capital letter “A” has an ASCII value of 65, and since all ASCII symbols are represented in this file in numerical order, “A” is entry number 65 in the table. Each character is defined by hex values, which are a shortened version of binary values: each of the five bytes encodes a column of seven 1s or 0s, each bit representing an “on” or “off” pixel in the glyph. Note that the columns read as if the glyph were flipped 90° to the right.

We used Excel to redraw each of the glyphs and extract the hex value of each line using its handy conversion function =BIN2HEX(1111110), which returns “7E”. Add “0x” for the correct formatting, “0x7E”, and you have the first byte of the capital letter “A”. The second line is =BIN2HEX(1001), which returns “9”; this time we add another zero in front, giving “0x09”, which keeps our format uniform. Each of these hex values is then separated by a comma, and each glyph is five bytes (five pixel columns) wide, as previously defined. The rendered size of each character is 5 x 7 pixels, and it can be enlarged proportionally via other functions in the Arduino code, such as matrix.setTextSize(2), which doubles it.
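
If you’d rather check your math in code than in Excel, this small stand-alone C++ program (our own illustration, not part of the Adafruit library) renders a five-byte glyph as ASCII art so you can confirm the column orientation:

```cpp
// Stand-alone check: render a 5x7 glyph (five column bytes) as ASCII
// art. Bit 0 of each byte is the top pixel of its column, bit 6 the
// bottom -- the 90-degree flip described above.
#include <cstdio>

void renderGlyph(const unsigned char glyph[5]) {
  for (int row = 0; row < 7; ++row) {      // top to bottom
    for (int col = 0; col < 5; ++col) {    // left to right
      putchar((glyph[col] >> row) & 1 ? 'X' : '.');
    }
    putchar('\n');
  }
}

int main() {
  // Capital "A" as derived above: 0x7E = 0b1111110, 0x09 = 0b0001001.
  const unsigned char capitalA[5] = {0x7E, 0x09, 0x09, 0x09, 0x7E};
  renderGlyph(capitalA);  // prints a 5x7 letter A
  return 0;
}
```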

If you are interested in creating your own fonts, here is the link to the Smart IxD modified glcdfont.c file: glcdfont NeatoFont

And here is the Excel file that shows how to arrive at the hex code: NeatoFont

Hydramate

Most of us have heard from health experts that we’re supposed to consume at least eight 8-ounce glasses of water a day, but how can we know if we’re hitting this target? Smart intern and Lab collaborator Simone Capano set out to explore a solution to this problem with Hydramate, a simple system that passively reminds us when it’s time to drink more water.

The project explores the notion of time and how it can be represented in an ambient way through a 3-dimensional interface. “I’ve always been very interested in finding a way to manage time in a physical way. Time is for sure an abstract element, and nowadays a lot of applications allow us to manage time effectively, but what if time were controlled through a tangible object, acting as a reminder, speaking to the user with a specific language/behavior?” asks Simone.

Hydramate is an electronic coaster that measures a one-hour cycle, divided into four quarters of 15 minutes each. As time passes, each quarter begins to gently glow, giving the user a visual cue of how long it has been since they last raised their glass to drink. Once a whole hour has passed since the last sip, the device begins to blink, signaling that it is time to get hydrated. The blinking becomes stronger while the user is drinking, and once they set the glass back down, it resets to the gentle glow of the first quarter.

Simone created a fully functioning prototype with an Arduino microcontroller. The shell is made of spray-painted polycarbonate, and the technology inside of it is very simple (a sketch of the basic logic follows the list):

- A photocell senses when a glass has been placed on it

- An Arduino Pro Mini powered by a 3.3V lithium battery receives the input from the photocell and controls the LEDs accordingly
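
The sketch below is our own hedged reconstruction of that logic, not Simone’s actual firmware; the pin numbers and photocell threshold are assumptions.

```cpp
// Hypothetical reconstruction of Hydramate-style logic -- not the
// project's actual firmware. Pins and thresholds are assumptions.
const int PHOTOCELL = A0;                  // divider: glass shades the cell
const int QUARTER_LEDS[4] = {3, 5, 6, 9};  // one PWM LED per 15-min quarter
const unsigned long QUARTER_MS = 15UL * 60UL * 1000UL;

unsigned long lastSip = 0;  // when the glass was last lifted

bool glassPresent() {
  return analogRead(PHOTOCELL) < 300;  // tune threshold to your photocell
}

void setup() {
  for (int i = 0; i < 4; i++) pinMode(QUARTER_LEDS[i], OUTPUT);
  lastSip = millis();
}

void loop() {
  if (!glassPresent()) {
    lastSip = millis();  // glass lifted: count it as a sip, reset cycle
  }
  unsigned long elapsed = millis() - lastSip;

  if (elapsed >= 4 * QUARTER_MS) {
    // A full hour without a sip: blink all four quarters.
    int level = ((millis() / 500) % 2) ? 255 : 0;
    for (int i = 0; i < 4; i++) analogWrite(QUARTER_LEDS[i], level);
  } else {
    // Gently glow one additional quarter per 15 minutes elapsed.
    int quarters = (int)(elapsed / QUARTER_MS) + 1;
    for (int i = 0; i < 4; i++)
      analogWrite(QUARTER_LEDS[i], i < quarters ? 60 : 0);
  }
}
```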

We look forward to hearing how this project develops beyond the prototype, and how well test users feel the device helps them manage their hydration.

Hacking for the Holidays

With Christmas right around the corner, it’s getting down to the wire to find that last-minute gift for everyone on your list. While this task is something we’re all familiar with (and stress over), for parents of kids with physical disabilities it can be even more stressful. For many such children, playing with off-the-shelf toys is not possible, which is where Hacking for the Holidays comes in.

Last Saturday I took part in DIYAbility and the Adaptive Design Association’s Hacking for the Holidays, which invites makers, hackers, and occupational, music, and recreational therapists to come together and hack toys to make them switch accessible for children with physical disabilities. The term “switch accessible” refers to making a toy usable via simple switches that connect through a mono jack on the toy. If a child can move their head, feet, arm, mouth, or any other part of their body, it is possible to use a switch plugged into a mono jack to play with the toy. For example, rather than using a joystick to control an RC car, tactile or other momentary switches can be substituted. Adding these switch jacks does not affect the original toy: the existing buttons operate as normal, and kids who use accessibility switches can now operate the toy as well, so it works for all users.

As part of the workshop, each participant brought a toy to hack, and together as a group we worked toward integrating simple mono jacks into the toys.

For my toy, I chose the Fast Lane Lightning Striker. Opening up the remote control revealed four simple spring-clip switches (forward, backward, left, right), which would serve as the soldering points for the mono jacks.

After isolating the PCB from the housing, I soldered up four mono jacks and wired them to the spring clips. A few holes in the housing to feed the jacks through, and the toy was back up and running, now ready for accessibility switches to be plugged in.

Overall, the Hacking for the Holidays event was a great time to geek out while doing something meaningful. I’m looking forward to doing it again next year.

StressBot

One of our current projects in the Lab is StressBot, a friendly console that reads heart activity through the Pulse Sensor to understand whether or not a person is in a physiological state of stress, and then offers coaching exercises and feedback to reduce stress through breathing. We’ve been continuously testing our setup with research participants to try to create an interface that’s clear and accessible to anyone in our studio who might approach the device.

Since the last time we posted about this project, we have learned much more about correlating stress to heart rate. Joel Murphy, the creator of the Pulse Sensor, has helped us understand IBI (interbeat interval, the time that passes between each heartbeat) and released some code that helped us grasp ways to map heart activity to stress. We have been using IBI measurements and the amplitude function Joel created to assign a specific value to stress, allowing us to measure whether it is relatively high or low.
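
As a rough illustration of the idea (our own sketch, not Joel’s code), a sliding window of IBI values can be reduced to a single stress number. Low beat-to-beat variability generally correlates with stress, and the heuristic below leans on that simplification; the window size and the 60 ms scaling constant are assumptions.

```cpp
// Illustrative only -- not the Pulse Sensor library's actual code.
// Reduce a sliding window of IBI values (ms) to a rough stress score:
// lower beat-to-beat variability is read as higher stress.
#include <math.h>

const int WINDOW = 16;  // number of recent beats to consider
float ibiWindow[WINDOW];
int beats = 0;

// Call once per detected heartbeat with the latest interbeat interval.
// Returns 0.0 (relaxed) .. 1.0 (stressed), or -1 until the window fills.
float stressFromIBI(float ibiMs) {
  ibiWindow[beats % WINDOW] = ibiMs;
  beats++;
  if (beats < WINDOW) return -1.0;

  // RMSSD: root mean square of successive IBI differences.
  float sumSq = 0;
  for (int i = 1; i < WINDOW; i++) {
    float d = ibiWindow[(beats - i) % WINDOW] -
              ibiWindow[(beats - i - 1) % WINDOW];
    sumSq += d * d;
  }
  float rmssd = sqrt(sumSq / (WINDOW - 1));

  // Map variability to stress: ~60 ms RMSSD or more reads as relaxed.
  float stress = 1.0 - rmssd / 60.0;
  return stress < 0 ? 0 : (stress > 1 ? 1 : stress);
}
```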

Most of our previous prototypes focused on trying to match a visual pattern to the heart rate. This proved to be very complicated, and worst of all, stressful. We also found that having one’s eyes closed is often the best way to achieve a state of relaxation. After a few iterations, we discovered that audio feedback is the best way to help a person who is trying to relax: it allows them to close their eyes and focus on finding a constant tone rather than something visual. Our first trial mapped the amplitude of the heart rate to the amplitude of a sine wave, and the IBI to the frequency of the sound; in the resulting plots, the upper waves show the sound and the lower wave displays the heart rate signature.
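
A minimal Arduino-style version of that sonification might look like the snippet below. This is our own hedged illustration, not the actual StressBot code; Arduino’s tone() has no amplitude control, so only the IBI-to-frequency half of the mapping is shown, and the pin and ranges are assumptions.

```cpp
// Hedged illustration of the IBI-to-frequency mapping -- not the
// actual StressBot code. Drives a piezo/speaker on an assumed pin.
const int SPEAKER_PIN = 8;

// Call once per heartbeat with the latest interbeat interval in ms.
void sonifyIBI(int ibiMs) {
  // Typical resting IBI spans roughly 600-1200 ms (100-50 bpm).
  // Slower, steadier beats map to lower, calmer tones.
  int freq = map(constrain(ibiMs, 600, 1200), 600, 1200, 880, 220);
  tone(SPEAKER_PIN, freq);
}
```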

We also explored mapping the same sound wave, as altered by the user’s heart rate, to other visual cues: concentric circles showing a rippling effect based on the IBI change, and droplets as another way of visualizing the same sonic effect. In both cases, the user tries to reach uniformity in the pattern, either through the spacing of the concentric circles or through the distance and shape of the drops.

In the latest iteration of the interface, the screen and physical enclosure make the device act as a coach character that helps people understand how to approach it and use the interface. It engages users with their bio-signals, providing an indication of their IBI state and a visual cue to assure them that they are on the right track. Although the project is not complete, we are getting close! Our next steps involve experimenting with parameters and doing more user testing.

Balancing analog and digital

Earlier this year, Smart Intern Floriane Rousse-Marquet looked into ways to blend analog and digital in order to create impactful and meaningful experiences. She shares her discoveries in the post below:

We love designing around the physical world because it gives us more sensory experiences to play with, relate to, and personalize in order to trigger strong emotional responses. The digital world, on the other hand, offers the benefit of being efficient. It allows scalability and the possibility to share what’s been created and experienced. It also offers the potential to learn and update in the background. So how can we find balance between the analog and the digital to maximize the emotional value of a product?

To answer this question, I started by analyzing media consumption (text, sound, image), looking for the strengths and weaknesses of each category from an emotional point of view. For sound, for example, physical albums don’t let you transport your music the way an MP3 does, but they allow for a richer gestural ritual. Albums are easily scratched, but because they are fragile, you develop a deeper attachment to the object: you want to protect it. Digital music, on the other hand, allows for sharing, portability, and easy navigation among songs and albums.

I tried to identify patterns and similarities that lead to an anticipated emotional response and a cherished product by leveraging the meaningful parts of the analog while harnessing the power of the digital. These are summarized in the six principles below.

1. Create rituals through the senses. Use different emotional triggers from the experience such as materials and shape, and enable a step-by-step discovery. Gesture matters. Letting people connect positively to an activity in a physical way builds memorable interactions. 

2. Be conscious of the value of tangible things. Give objects character through unique forms and materials, and implement a sense of lifetime and age. Make the link between the medium and the content visible to create a feeling of responsibility and care over it.  

3. Craft personality through the look and feel and the details to help build product identity. 

In Emotional Design, Don Norman writes:

“I value my teapots not only for their function for brewing tea, but because they are a sculptural artwork... Beyond the design of an object, there is a personal component as well... We take pride in them... because of the meaning they bring to our lives... Can I tell a story about it? Does it appeal to my self-image, to my pride?”

4. Offer the opportunity to create a personal mark. Take advantage of the tangible manifestation of a physical action on the content, and highlight the data through display. The interface should be able to learn from you and help you empathize with the object.

5. Bring focus to the core experience. Define a limit to information coming in and the type of information given in order to offer a more curated experience.

6. Enable people to pass experiences on to one another by making the missing item visible when shared, giving a second life to the data and allowing people to share their experiences.

One key insight we gleaned is that the best digital experiences are enhanced by the platform they are in. Historically, so many digital experiences have been delivered through a single platform. The Internet of Things has the potential to release the tension between digital and analog by making digital more tangible. We believe that designers should break through the barriers of the screen. An object should be able to live on its own, not only through a virtual platform, and bring new forms of interaction between the user and her environment.

One diagram from the research summarized the key moments of the physical book experience; a second mapped the key moments of the e-reader experience.

OpenBCI

OpenBCI stands for open Brain-Computer Interface, and it’s the first commercially available EEG platform that gives users access to raw brainwave data, without proprietary algorithms or signal tampering. The board is a programmable EEG (electroencephalography) system that is completely open source. It uses the ADS1299 chip by Texas Instruments, an 8-channel, low-noise, 24-bit analog-to-digital converter designed specifically for measuring teeny-tiny electrical brain signals. It can read 8 channels simultaneously, with a daisy-chain option that gives users 16 EEG channels. Considering that the MindFlex has only one channel and the Emotiv five, OpenBCI is the best option for brainwave data available to creative technologists, hobbyists, and research institutions alike. Versions I and II were Arduino shields, but the new boards have an onboard ATmega328p that is programmable over Bluetooth LE.

The possibilities for this technology are endless. Taking EEG out of the lab might help neurologists better understand the brain while a person is walking down the street or eating an ice cream, things that were previously impossible to measure in a neuroscience lab. Interaction design is facing a new challenge: creating systems that can adapt to our cognitive behavior. Brain plasticity and AI may take over from our buttons and digits; it’s already happening in the field of prosthetics, and it may be only a matter of R&D before we see parts of our lives controlled by merely thinking.

Looking at a capture of my own brainwaves, you can see on the rightmost graph that there is a slight dip in channels 1 and 2 just past the 4-second mark: that undulation is a blink. Throughout the rest of the channels, a pattern emerges in the waves. My eyes were completely closed, and this pattern is activity in the occipital lobe, where the visual cortex lies. On the left graph, you can see a peak at the 10 Hz mark: these are the alpha waves, which sit between 8 and 13 Hz. They are most prominent when your eyes are closed, and they inhibit the areas of the cortex that are not in use while playing an important part in communication and coordination.
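
To make that concrete, here is a small self-contained C++ program (our own illustration, not OpenBCI’s code) that estimates the power of a single frequency in a block of samples using the Goertzel algorithm. A strong result around 10 Hz in an eyes-closed recording is exactly the alpha signature described above; the 250 Hz sample rate is an assumption.

```cpp
// Illustrative only -- not OpenBCI's code. Estimate the power of one
// frequency in a block of EEG samples with the Goertzel algorithm.
#include <cmath>
#include <cstdio>
#include <vector>

double goertzelPower(const std::vector<double>& samples,
                     double targetHz, double sampleRateHz) {
  const double k = 2.0 * cos(2.0 * M_PI * targetHz / sampleRateHz);
  double s1 = 0, s2 = 0;  // two-sample filter state
  for (double x : samples) {
    double s0 = x + k * s1 - s2;
    s2 = s1;
    s1 = s0;
  }
  // Squared magnitude of the target frequency bin.
  return s1 * s1 + s2 * s2 - k * s1 * s2;
}

int main() {
  const double fs = 250.0;       // assumed EEG sample rate (Hz)
  std::vector<double> eeg(500);  // two seconds of synthetic "alpha"
  for (size_t i = 0; i < eeg.size(); ++i)
    eeg[i] = sin(2.0 * M_PI * 10.0 * i / fs);  // pure 10 Hz wave

  printf("10 Hz power: %.1f\n", goertzelPower(eeg, 10.0, fs));
  printf("20 Hz power: %.1f\n", goertzelPower(eeg, 20.0, fs));
  return 0;
}
```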

The minds behind this project are Joel Murphy, the creator of the Pulse Sensor, and Conor Russomanno, a recent graduate of the MFA Design and Technology program at Parsons. Together they have spearheaded the project alongside Chip Audette, who obtained DARPA funding to create an open source EEG. For more information about this project, check out their Kickstarter page:

http://www.kickstarter.com/projects/openbci/openbci-an-open-source-brain-computer-interface-fo

Circuit Stickers from Jie Qi and Bunny Huang on Vimeo.

Stickers were the official currency of any kid who grew up in the ’80s and ’90s. These generations learned the concept of value one sticky trade at a time. Everyone had a carefully curated book, with pages organized by sticker value: some were for trading during recess, others to keep forever. Fuzzy, puffy, glittery, holographic, smelly, or lenticular, stickers were the object of everyone’s desire. Today, interest in stickers seems to have dwindled a bit, perhaps because more kids are captivated by digital content.

Jie Qi and Bunny Huang are introducing a new kind of computational sticker system called Chibitronics. These component stickers can be placed on conductive lines to create a functional circuit, making them a basic circuit-design learning tool. These stickers are the glue between digital kids and the long-standing tradition of sticker trading.

Chibitronics stickers are designed with a flexible polyimide substrate with anisotropic conductive tape, or “Z-tape,” laminated on the back. This tape conducts electricity vertically, through its thickness, but not laterally across the tape, so neighboring traces don’t short together. Up close, you can see the metallic particles embedded in this conductive tape. For more technical information about 3M’s anisotropic tape, here is the data sheet. For more information about the project and to find out how to get stickers of your own: http://www.crowdsupply.com/chibitronics/circuit-stickers

I Mirabilia at TEDxSalzburg

Smart Interaction Lab’s Erika Rossi recently gave a talk at the TEDxSalzburg event, where she presented her Master’s thesis project I Mirabilia (“The Wonders”), a set of interactive dolls for hospitalized children.

Here, Erika tells us more about the event and her presentation:

TEDxSalzburg was a great experience and an amazing opportunity to share my project while making new connections with inspiring and smart people. The conference theme, “I CAN HEAR YOU–YOU CAN HEAR ME,” focused on people’s need to be heard, and the fact that while they’re struggling to gain attention, they tend to forget to listen.

In the words of the conference organizers, “Listening and being heard is a critical success factor if you want to give answers to the questions really asked. You don’t have to be a trendscout to come up with relevant (and innovative) products and solutions: you just have to watch and listen.”

The event was divided into four sections, I-CAN-HEAR-YOU, and I was assigned to the third one, “HEAR,” because my project is about letting hospitalized children be heard by providing them with additional tools to express their fears and emotions. Below is a description:

I Mirabilia (“The Wonders”) are a family of three interactive dolls for children who spend a long period of time in the hospital, due to terminal illnesses or periodic therapies. Drawing on interviews and observations in a children’s hospital, the three dolls were designed to help overcome specific emotional difficulties faced by children in this situation. The different interactions and behaviors triggered by the dolls enable the children to improve their relationships and make new connections with the people within the hospital, such as doctors, psychologists, and other hospitalized children.

The other talks covered a wide range of topics, from a new interactive platform for learning music by Albert Frantz (http://www.key-notes.com/) to how the latest sensor technology could be used in healthcare applications to improve patient monitoring, by Fritz Höllerer (http://www.aeskulapp.com/).

Overall, it was a wonderful experience!

 - Erika Rossi