Smart IxD Lab

Thoughts, observations and experimentation on interaction by: Smart Design

At this year’s South By Southwest festival, Google showed off a groovy prototype of a pair of web-enabled shoes with speakers embedded in the tongue. By tracking your movement and location as you wear them, the shoes can offer advice and coaching, and can broadcast your activity online.

While it’s not clear that the chatty sneakers would be desirable to wear every day, we love that Google is putting experiments like these out there to ask questions and try out new experiences at the intersection of the physical and the digital.

The shoes were commissioned by Google and created by NYC-based collective YesYesNo, founded by Smart Interaction Lab friend Zach Lieberman and Studio5050.

A great review of the project can be seen on Art Copy & Code.

A first look under a microscope reveals all kinds of activities and entities that we never knew existed. What if we could suddenly see other things that are happening right in front of us, like subtle changes in skin tone as blood flows through our bodies or deformations in buildings caused by wind stresses?

Researchers at MIT CSAIL have been working on methods for analyzing and manipulating video footage in an effort to “amplify” moments that would otherwise be invisible to the human eye. Called “motion magnification”, the technique involves locating pixels that are changing in a scene and then exaggerating those changes and adding them back in a new set of video footage. The result is breathtaking, allowing scientists to “see” a person’s heart rate, or locate discrepancies in blood flow throughout the body. It essentially becomes a “microscope for small motions”.
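To make the idea concrete, here is a minimal sketch of the general approach, not the CSAIL team’s actual code: assuming NumPy and SciPy are available, it band-pass filters each pixel’s intensity over time around a chosen frequency (here a heart-rate-like 1.2 Hz), multiplies that tiny signal by a large gain, and adds it back into the frames.

```python
# Minimal sketch of Eulerian-style motion/color magnification (illustrative only).
# Assumes NumPy + SciPy; "frames" is a (T, H, W) float array of grayscale video
# frames sampled at "fps" frames per second.
import numpy as np
from scipy.signal import butter, filtfilt

def magnify(frames, fps, low_hz=0.8, high_hz=1.5, gain=50.0):
    """Amplify subtle temporal changes between low_hz and high_hz."""
    nyquist = fps / 2.0
    # 2nd-order Butterworth band-pass applied along the time axis of every pixel
    b, a = butter(2, [low_hz / nyquist, high_hz / nyquist], btype="band")
    subtle = filtfilt(b, a, frames, axis=0)  # the otherwise-invisible signal
    return np.clip(frames + gain * subtle, 0.0, 1.0)

# Toy example: a flat gray "scene" with a faint 1.2 Hz pulse, like skin
# brightening slightly with each heartbeat.
fps = 30
t = np.arange(300) / fps
pulse = 0.001 * np.sin(2 * np.pi * 1.2 * t)[:, None, None]
frames = 0.5 + pulse * np.ones((1, 32, 32))
amplified = magnify(frames, fps)
print("original swing:", np.ptp(frames), "amplified swing:", np.ptp(amplified))
```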

This New York Times video shows how the technique works: http://www.nytimes.com/video/2013/02/27/science/100000002087758/finding-the-visible-in-the-invisible.html

The technique is similar to what was used for the heart rate mirror that we saw during a visit to the MIT Media Lab last year (and was represented in cartoon form in the NY Times piece, “A Day in the Near Future”).

The code is open source and more detail about the technique and the theories behind it can be found in a SIGGRAPH paper: http://people.csail.mit.edu/celiu/motionmag/motionmag.html

Last night was the opening event for the show “Gimme More” at Eyebeam Art+Technology Center, featuring a collection of experimental projects that made use of augmented reality and digital layering techniques, along with a panel discussion organized by Laetitia Wolff.

We captured some short videos of some of our favorite pieces: “Tatooar”, “Last Year” and “Beatvox”.

“Tatooar” (shown in the video above) displays an animated tattoo that drips and crawls over a participant’s skin when viewed in a mirror.

In “Last Year”, by Liron Kroll, memorabilia such as post cards and personal notes come alive when viewed through the window of an iPad screen.

“Beatvox” by Yuri Suzuki is composed of a drumset with robotic drumsticks attached to each piece. A microphone is used as the main input, and allows viewers to control the drums using only their voices as they make beatbox-type sounds.
 

Smart Interaction Lab was invited to be part of the panel discussion where the question “Is augmented reality the next medium?” was asked. Here’s what we riffed on:

Our view of augmented reality’s future is different from the one we usually hear about, which involves an all-purpose prosthetic like a pair of glasses or contact lenses. What we think about as product designers is how our experience of an environment can be enhanced by augmented reality. Consider a surface that turns into a computer workspace that you manipulate by gesture, like our vision for PSFK’s Future of Work report. A keyboard layer lets you type text; a Photoshop layer lets you manipulate images with your fingers or a paintbrush. Take this one step further and we can consider dynamic interfaces on other surfaces that morph and change depending on the context. A camera that recognizes a cutting board can project information about a recipe or the nutritional content of the food. Microsoft recently published a vision piece about a project they are calling IllumiRoom, where projected images fill an entire space, so your TV is no longer within the frame of a rectangle but on every surface. And the design firm Berg London recently compiled a fascinating blog post on their experiments for Google Creative Lab, where they used Kinect to create interactive projection mapping on physical desktop objects.

Another aspect of this is what we might call “remote augmented reality”. The technology guru Kevin Kelly talks about a “planetary membrane” comparable to the complexity of the human brain. He talks about how there are cameras everywhere (“three billion artificial eyes”) that see our world and can provide data to an augmented projected layer. In addition to these static cameras, we can think about moving camera-vision entities, essentially robots, that we can tap into to give us an augmented view of a space we’re not even in. At the moment, the most direct application of this is security surveillance, but it’s fun to think of less fear-driven applications of remote augmented reality as well: telepresence for remote conversations, for example, or spectator situations such as being virtually present at a performance. The MIT Media Lab’s Opera of the Future group is experimenting to see if “you can take a live experience, whether it’s a concert or a theater show or hanging out with people you care about, and experience that somewhere else” — not only observe it, but feel as if you’re participating in it as well.

 

Moti.ph is a new platform currently under development. It was created by Nicholas Stedman, the brilliant mind behind freaky, creature-like robots such as this one, in an effort to help anyone and everyone build freaky robots.

Nicholas Stedman’s After Deep Blue

Here’s how he describes it on the moti website:

“It consists of smart motors that are accessible from a web browser. There, you can control and program them using simple sliders, timelines and other graphical elements.

Moti takes the struggle out of getting things to move so you can develop your project a lot faster. Kids can use it to build toys and robots, and designers can use it to build animatronics for film, or store displays.

Moti is also flexible in how it can be used. You can customize your controls, quickly add sensors to your application, and make your project available to users over the web, or by mobile device. Moti even has a powerful API that allows engineers and developers to integrate motion into larger systems.”

We’re looking forward to checking it out and making moving mechanical things that we control via a web browser.
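We haven’t seen Moti’s API yet, so the sketch below is purely hypothetical: the host, endpoint and JSON fields are made up to illustrate the kind of call a web-controllable motor platform might expose, and are not Moti’s actual interface.

```python
# Hypothetical sketch only: the address, endpoint and JSON fields below are
# invented to illustrate driving a networked "smart motor" over HTTP; they are
# NOT Moti's real API. Assumes the "requests" library is installed.
import time
import requests

MOTOR_URL = "http://moti.local/api/motors/1"  # made-up address for illustration

def set_angle(degrees, speed=50):
    """Ask the (hypothetical) motor endpoint to move to an absolute angle."""
    response = requests.post(MOTOR_URL, json={"angle": degrees, "speed": speed}, timeout=2)
    response.raise_for_status()

# Sweep back and forth, the kind of motion a store display might loop all day.
if __name__ == "__main__":
    for angle in [0, 90, 180, 90]:
        set_angle(angle)
        time.sleep(1)
```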

 

The Smarcos Project is a collective effort among 17 partners throughout Europe who are focused on the responsible development of the Internet of Things. Their main objective is to attain “interusability via interconnected embedded systems”, or in other words, making sure that when our devices start talking to the cloud, they can all communicate with one another in a way that spares the user the hassles of bugs, errors and incompatibilities. It’s a tall order, but we’re happy to see some heavyweights working together to tackle this complex challenge.

We love how their video uses clear motion graphics to describe our connected future.

Image by Paul Sahre for the New York Times, “The Internet Gets Physical”, which appeared on December 17, 2011.
 

Below is a list of some good primers and readers relating to the current dialog around the Internet of Things:

Articles and online readings

http://www.mckinseyquarterly.com/The_Internet_of_Things_2538

http://www.nytimes.com/2011/12/18/sunday-review/the-internet-gets-physical.html?pagewanted=all

http://www.cityinnovationgroup.com/1/post/2011/3/essential-reading-11-articles-about-the-future-the-internet-of-things.html

http://www.ge.com/mindsandmachines/

 

Books

http://www.amazon.com/Everyware-Dawning-Age-Ubiquitous-Computing/dp/0321384016

http://www.amazon.com/SmartStuff-introduction-Internet-Things-ebook/dp/B008DDW2U2

http://www.amazon.com/Smart-Things-Ubiquitous-Computing-Experience/dp/0123748992

 

We’d love to hear about great readings you’ve found on this topic.

For those of you who couldn’t make it to our talk “Making Meaning with an Internet of Things” at the IxD13 Conference in Toronto on 1/30/2013, below is a list of technical resources for prototyping IoT.

The books require some understanding of coding and prototyping with electronics, and we recommend starting with the Building Internet of Things book with an Arduino plus a Wi-Fi or Ethernet shield (for the web side of such a prototype, see the sketch just after the hardware list below).

Books on getting started:

Building Internet of Things with the Arduino by Charalampos Doukas
http://www.buildinginternetofthings.com/

Getting Started with the Internet of Things by Cuno Pfister
http://www.gsiot.info/ 

Hardware:

Arduino Wi-Fi shield: https://www.sparkfun.com/products/11287

Arduino Ethernet shield: https://www.sparkfun.com/products/9026

Nanode board: http://www.nanode.eu/

Netduino: https://www.sparkfun.com/products/10107

Gadgeteer: http://research.microsoft.com/en-us/projects/gadgeteer/

Electric Imp: https://www.sparkfun.com/products/11395
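
Most of these boards speak plain HTTP once they’re on a network, so a handy companion while prototyping is a tiny endpoint on your laptop that the board can POST sensor readings to. Below is a minimal sketch using only Python’s standard library; the /readings path and the JSON payload shape are placeholder choices of our own, not something defined by the books or boards above.

```python
# Minimal sketch of a local endpoint an Arduino + Ethernet/Wi-Fi shield could
# POST sensor readings to while you prototype. Standard library only; the
# "/readings" path and payload format are placeholder choices of our own.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class ReadingHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/readings":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        try:
            reading = json.loads(body)  # e.g. {"sensor": "temp", "value": 21.5}
        except ValueError:
            self.send_error(400, "expected JSON")
            return
        print("got reading:", reading)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

if __name__ == "__main__":
    # The board would POST to http://<your-laptop-ip>:8080/readings
    HTTPServer(("0.0.0.0", 8080), ReadingHandler).serve_forever()
```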

There are even more tools and resources out there, and we’d love to hear from you regarding what you’ve discovered.

When we talk about the Internet of Things, we often think about data-heavy interfaces and complex tracking feedback, but this is one project that shows how our connected devices can be meaningful in a subtle and poetic way. The Good Night Lamp is a series of connected lamps. Each person who buys the system gets one large lamp (representing themselves) and a series of smaller lamps (representing the other people in a given network). When the big lamp is turned on, the associated little lamps in other people’s networks light up. In other words, turning on your lamp sends a signal to others in your network, and you can glance at your own collection of little lamps to see who is in or out (or whatever message you’ve agreed upon for the light). Some sample scenarios would be a child turning the lamp out to let parents know they’ve gone to bed, or two colleagues turning lights on to let one another know when they are ready for a call or video chat.
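
To make the interaction model concrete, here is a toy simulation of that one-to-many relationship, our own illustration rather than the Good Night Lamp’s actual software: one big lamp broadcasts its state to every little lamp paired with it.

```python
# Toy simulation of the Good Night Lamp's one-to-many relationship; our own
# illustration, not the product's actual software or protocol.
class BigLamp:
    def __init__(self, owner):
        self.owner = owner
        self.little_lamps = []  # the copies living in friends' and family's homes

    def pair(self, little_lamp):
        self.little_lamps.append(little_lamp)

    def switch(self, on):
        # Turning the big lamp on or off mirrors that state to every little lamp.
        for lamp in self.little_lamps:
            lamp.set_state(on)

class LittleLamp:
    def __init__(self, location):
        self.location = location

    def set_state(self, on):
        print(f"Little lamp in {self.location}: {'lit' if on else 'dark'}")

# A child's big lamp mirrored to two parents' homes.
child = BigLamp("Alex")
child.pair(LittleLamp("Mom's study"))
child.pair(LittleLamp("Dad's kitchen"))
child.switch(False)  # lights out: both little lamps go dark
```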

We’ve been following this project since its early conception years ago and are excited to see its creator working on making the system a reality.

Researchers at Disney have combined the magic of light pipes (clear tubes that allow light from a source to travel to a different physical location) with real-time sensing and display to create interactive objects that have no embedded electronics. When one of these composite objects is placed on an illuminated interactive surface, it can become central to the user’s interactive experience, allowing people to fully engage with a physical object rather than a flat screen. The ultimate vision involves using multi-material 3D printing techniques, so that the objects can be designed digitally and then emerge from the printer with the light-pipe capabilities already embedded. This is an exciting innovation in interaction, one that would enable the creation of robust, inexpensive objects with a high degree of interactive sophistication.

We’re impatient to play with this technology and are already scheming ways to make some of our own light-pipe enabled parts.

The video below does a good job of explaining exactly how it will work and showing some of the possibilities for interaction:

 

This holiday there’s a new army of updated Furbies on the loose. The original Furby was a toy launched in 1998 that reached cult status within a couple of years. It would react to human language when spoken to, answering with words in the cryptic “Furbish” language along with physical gestures via movable eyelids, ears and mouth. Over time, the toy would learn an increasing number of English words.

The new Furby has a few updated physical features. Glowing LCD screens replace the old toy’s static eyes, so it can display a greater range of expression through animation. It also has more responsive capacitive sensors instead of physical buttons, and updated AI gives it a range of personalities, selected based on how you interact with it. Additionally, an iOS app lets you feed the Furby selected virtual foods and gives you access to a dictionary and a translator to help with English-Furbish exchanges.

This winter we’ll be spending time with a Furby that’s been adopted by the SVA MFA IxD program to be featured as a class pet in the upcoming Smart Objects course. Above is a video of one of our first meetings with the new Furby. We’ll let you know how it goes as we get to know each other better.