Smart IxD Lab

Thoughts, observations and experimentation on interaction by: Smart Design

Above: Photo of the Nanoblocks Sagrada Família model, with the 123D Catch 3D model built from a scan with the DIY iPhone Dolly.

Smart’s Master MacGyver Ron Ondrey loves his photo toys, and when he shopped around for movable tripods and dollies he decided he could do a better job by making one himself. He built the rig, shown below, to pan, truck and take 360° bullet-time-style spinning videos around people and objects, using a standard camera mount or a mounted iPhone.

We had a lot of fun in the lab taking experimental videos of our product designs, but decided to bring the rig’s coolness factor up a notch by using it in conjunction with Autodesk’s 123D Catch App, software that essentially turns your iPhone into a 3D scanner, and then lets you spin and tilt the virtual 3D object on the screen.

The video above shows our test shots with the Nanoblocks model of Barcelona’s Sagrada Família.

Here’s Ron with the rig.


And our test model.

Here’s our test with the rig in 360 spin mode.

A first look under a microscope reveals all kinds of activities and entities that we never knew existed. What if we could suddenly see other things that are happening right in front of us, like subtle changes in skin tone as blood flows through our bodies or deformations in buildings caused by wind stresses?

Researchers at MIT CSAIL have been working on methods for analyzing and manipulating video footage in an effort to “amplify” moments that would otherwise be invisible to the human eye. Called “motion magnification”, the technique involves locating pixels that are changing in a scene, exaggerating those changes, and adding them back into a new set of video footage. The result is breathtaking, allowing scientists to “see” a person’s heart rate or locate discrepancies in blood flow throughout the body. It essentially becomes a “microscope for small motions”.
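The released system works on a spatial pyramid with carefully designed temporal filters, but the core idea of “exaggerate the change and add it back” can be sketched in a few lines. Below is a simplified illustration, assuming OpenCV is installed; the filter constants and gain are placeholder values of our own, not the paper’s, and this is not MIT’s released code.

// Minimal sketch of Eulerian-style change amplification: temporal band-pass
// per pixel (difference of two running averages), scaled and added back.
#include <opencv2/opencv.hpp>

int main(int argc, char** argv) {
    if (argc < 3) return 1;                              // usage: amplify in.avi out.avi
    cv::VideoCapture cap(argv[1]);
    if (!cap.isOpened()) return 1;

    double fps = cap.get(cv::CAP_PROP_FPS);
    cv::Size size((int)cap.get(cv::CAP_PROP_FRAME_WIDTH),
                  (int)cap.get(cv::CAP_PROP_FRAME_HEIGHT));
    cv::VideoWriter out(argv[2], cv::VideoWriter::fourcc('M','J','P','G'), fps, size);

    const double alphaFast = 0.4, alphaSlow = 0.05;      // two IIR low-pass cutoffs
    const double gain = 20.0;                            // amplification factor (placeholder)
    cv::Mat frame, f, fast, slow;

    while (cap.read(frame)) {
        frame.convertTo(f, CV_32FC3, 1.0 / 255.0);
        if (fast.empty()) { fast = f.clone(); slow = f.clone(); }
        // Temporal band-pass: difference of two exponential moving averages.
        fast = alphaFast * f + (1.0 - alphaFast) * fast;
        slow = alphaSlow * f + (1.0 - alphaSlow) * slow;
        cv::Mat band = fast - slow;                      // the otherwise invisible changes
        cv::Mat amplified = f + gain * band;             // exaggerate and add back in
        cv::Mat out8;
        amplified.convertTo(out8, CV_8U, 255.0);         // saturating convert back to 8-bit
        out.write(out8);
    }
    return 0;
}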

This New York Times video shows how the technique works: http://www.nytimes.com/video/2013/02/27/science/100000002087758/finding-the-visible-in-the-invisible.html

The technique is similar to the one used for the heart rate mirror that we saw during a visit to the MIT Media Lab last year (and that was represented in cartoon form in the NY Times piece, “A Day in the Near Future”).

The code is open source and more detail about the technique and the theories behind it can be found in a SIGGRAPH paper: http://people.csail.mit.edu/celiu/motionmag/motionmag.html

Last night was the opening event for the show “Gimme More” at Eyebeam Art+Technology Center, featuring a collection of experimental projects that made use of augmented-reality digital layering techniques, as well as a panel discussion organized by Laetitia Wolff.

We captured short videos of some of our favorite pieces: “Tatooar”, “Last Year” and “Beatvox”.

“Tatooar” (shown in the video above) displays an animated tattoo that drips and crawls over a participant’s skin when viewed in a mirror.

In “Last Year”, by Liron Kroll, memorabilia such as post cards and personal notes come alive when viewed through the window of an iPad screen.

“Beatvox” by Yuri Suzuki is composed of a drumset with robotic drumsticks attached to each piece. A microphone is used as the main input, and allows viewers to control the drums using only their voices as they make beatbox-type sounds.
 

Smart Interaction Lab was invited to be part of the panel discussion where the question “Is augmented reality the next medium?” was asked. Here’s what we riffed on:

Our view of augmented reality’s future is different from the one we usually hear about, which involves an all-purpose prosthetic like a pair of glasses or contact lenses. What we think about as product designers is how our experience of an environment can be enhanced by augmented reality. Consider a surface that turns into a computer workspace that you manipulate by gesture, like our vision for PSFK’s Future of Work report: a keyboard layer lets you enter text, a Photoshop layer lets you manipulate images with fingers or a paintbrush. Take this one step further and we can consider dynamic interfaces on other surfaces that morph and change depending on the context. A camera that recognizes a cutting board can project information about a recipe or the nutritional content of the food. Microsoft recently published a vision piece about a project they are calling IllumiRoom, where projected images fill an entire space, so your TV is no longer within the frame of a rectangle but on every surface. And the design firm Berg London recently compiled a fascinating blog post on their experiments for Google Creative Lab, where they used Kinect to create interactive projection mapping on physical desktop objects.

Another aspect of this is what we might call “remote augmented reality”. The technology guru Kevin Kelly talks about a “planetary membrane” comparable in complexity to the human brain. He talks about how there are cameras everywhere (“three billion artificial eyes”) that see our world and can provide data to an augmented projected layer. In addition to these static cameras, we can think about moving camera-vision entities, essentially robots, that we can tap into to give us an augmented view of a space that we’re not even in. At the moment, the most direct application of this is security surveillance, but it’s fun to think of less fear-driven applications of remote augmented reality as well, such as telepresence for remote conversations, or spectator situations like being virtually present at a performance. The MIT Media Lab’s Opera of the Future group is experimenting to see if “you can take a live experience, whether it’s a concert or a theater show or hanging out with people you care about, and experience that somewhere else” — not only observe it, but feel as if you’re participating in it as well.

 

Moti.ph is a new platform currently under development. It was created by Nicholas Stedman, the brilliant mind behind freaky, creature-like robots such as this one, in an effort to help anyone and everyone build freaky robots of their own.

Nicholas Stedman’s After Deep Blue

Here’s how he describes it on the Moti website:

“It consists of smart motors that are accessible from a web browser. There, you can control and program them using simple sliders, timelines and other graphical elements.

Moti takes the struggle out of getting things to move so you can develop your project a lot faster. Kids can use it to build toys and robots, and designers can use it to build animatronics for film, or store displays.

Moti is also flexible in how it can be used. You can customize your controls, quickly add sensors to your application, and make your project available to users over the web, or by mobile device. Moti even has a powerful API that allows engineers and developers to integrate motion into larger systems.”

We’re looking forward to checking it out and making moving mechanical things that we control via a web browser.
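Moti’s actual API isn’t documented in this post, so purely as a thought experiment, here is what driving a web-addressable smart motor over HTTP might look like. The host, endpoint and parameter names below are invented for illustration only; the sketch assumes libcurl is available.

// Hypothetical sketch: send a target angle to one smart motor over HTTP.
#include <curl/curl.h>
#include <string>

bool setMotorAngle(const std::string& motorId, int degrees) {
    CURL* curl = curl_easy_init();
    if (!curl) return false;

    std::string url  = "http://moti.local/api/motors/" + motorId;   // made-up URL
    std::string body = "angle=" + std::to_string(degrees);          // made-up field name

    curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body.c_str());       // POST the command
    CURLcode res = curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    return res == CURLE_OK;
}

int main() {
    // Sweep a motor back and forth, the kind of thing a browser slider might drive.
    for (int angle = 0; angle <= 180; angle += 45) {
        setMotorAngle("1", angle);
    }
}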

 

The Smarcos Project is a collective effort among 17 partners throughout Europe who are focused on the responsible development of the Internet of Things. Their main objective is to attain “interusability via interconnected embedded systems”, or, in other words, to make sure that when our devices start talking to the cloud, they can all communicate with one another in a way that spares the user the hassles of bugs, errors and incompatibilities. It’s a tall order, but we’re happy to see some heavyweights working together to tackle this complex challenge.

We love how their video uses clear motion graphics to describe our connected future.

Image by Paul Sahre for the New York Times, “The Internet Gets Physical”, which appeared on December 17, 2011.
 

Below is a list of some good primers and readers relating to the current dialog around the Internet of Things:

Articles and online readings

http://www.mckinseyquarterly.com/The_Internet_of_Things_2538

http://www.nytimes.com/2011/12/18/sunday-review/the-internet-gets-physical.html?pagewanted=all

http://www.cityinnovationgroup.com/1/post/2011/3/essential-reading-11-articles-about-the-future-the-internet-of-things.html

http://www.ge.com/mindsandmachines/?gclid=CIvZ3_608rQCFa5QOgodfn8AKg

 

Books

http://www.amazon.com/Everyware-Dawning-Age-Ubiquitous-Computing/dp/0321384016

http://www.amazon.com/SmartStuff-introduction-Internet-Things-ebook/dp/B008DDW2U2/ref=sr_1_6

http://www.amazon.com/Smart-Things-Ubiquitous-Computing-Experience/dp/0123748992

 

We’d love to hear about great readings you’ve found on this topic.

For those of you who couldn’t make it to our talk “Making Meaning with an Internet of Things” at the IxD13 Conference in Toronto on 1/30/2013, below is a list of technical resources for prototyping IoT.

The books require some understanding of coding and prototyping with electronics, and we recommend starting with the Building Internet of Things book with an Arduino plus a Wi-Fi or Ethernet shield (a minimal example of that setup appears after the hardware list below).

Books on getting started:

Building Internet of Things with the Arduino by Charalampos Doukas
http://www.buildinginternetofthings.com/

Getting Started with the Internet of Things by Cuno Pfister
http://www.gsiot.info/ 

Hardware:

Arduino Wi-Fi shield: https://www.sparkfun.com/products/11287

Arduino Ethernet shield: https://www.sparkfun.com/products/9026

Nanode board: http://www.nanode.eu/

Netduino: https://www.sparkfun.com/products/10107

Gadgeteer: http://research.microsoft.com/en-us/projects/gadgeteer/

Electric Imp: https://www.sparkfun.com/products/11395
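To give a feel for that first step, here is a minimal sketch of an Arduino with the official Ethernet shield posting a sensor reading to a web server. The server name, request path and query parameter are placeholders; your own project would substitute its own service (the Wi-Fi shield version looks very similar, using the WiFi library instead).

// Minimal Arduino + Ethernet shield sketch: report an analog reading over HTTP.
#include <SPI.h>
#include <Ethernet.h>

byte mac[] = { 0xDE, 0xAD, 0xBE, 0xEF, 0xFE, 0xED };   // the shield's MAC address
const char server[] = "example.com";                    // placeholder host
EthernetClient client;

void setup() {
  Serial.begin(9600);
  if (Ethernet.begin(mac) == 0) {                       // get an address via DHCP
    Serial.println("DHCP failed");
    while (true) { }
  }
}

void loop() {
  int reading = analogRead(A0);                         // any sensor wired to pin A0
  if (client.connect(server, 80)) {
    client.print("GET /update?value=");                 // placeholder path and field
    client.print(reading);
    client.println(" HTTP/1.1");
    client.println("Host: example.com");
    client.println("Connection: close");
    client.println();
    client.stop();
  }
  delay(60000);                                         // report once a minute
}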

There are even more tools and resources out there, and we’d love to hear from you regarding what you’ve discovered.

This weekend’s New York Times Sunday Review cover featured the article, “Our Talking, Walking Objects” by Lab Founder Carla Diana. The piece highlights the influence of robotics on the design of everyday objects, and how dynamic behaviors like sound, light and motion can be harnessed to express product personality.

The article has led to a lively debate in the online comments (81 at last count) regarding the value of having an emotional connection with our everyday things.

Check out the piece and weigh in here:

http://www.nytimes.com/2013/01/27/opinion/sunday/our-talking-walking-objects.html

When we talk about the Internet of Things, we often think about data-heavy interfaces and complex tracking feedback, but this is one project that shows how our connected devices can be meaningful in a subtle and poetic way. The Good Night Lamp is a series of connected lamps. Each person who buys the system gets one large lamp (representing themselves) and a series of smaller lamps (representing the other people in a given network). When the big lamp is turned on, the associated little lamps in other people’s networks light up. In other words, someone turning on their lamp sends a signal to others in the network, and that same person can glance at her own collection of little lamps to see who is in or out (or whatever message they’ve agreed upon for the light). Some sample scenarios would be a child turning the lamp off to let parents know they’ve gone to bed, or two colleagues turning lights on to let one another know when they are ready for a call or video chat.
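The fan-out behavior the lamps imply can be modeled in a few lines, purely as a thought experiment; the class and member names below are ours, invented for illustration, and say nothing about how the real product is built.

// Toy model of the Good Night Lamp fan-out: toggling a big lamp updates every
// little lamp that represents that person in other people's networks.
#include <iostream>
#include <string>
#include <vector>

struct LittleLamp {
    std::string owner;                      // whose shelf this little lamp sits on
    bool lit = false;
};

struct BigLamp {
    std::string owner;
    std::vector<LittleLamp*> subscribers;   // little lamps representing this person

    void setOn(bool on) {
        for (auto* lamp : subscribers) {
            lamp->lit = on;                 // the state fans out to every network
            std::cout << lamp->owner << "'s little lamp for " << owner
                      << (on ? " turns on\n" : " turns off\n");
        }
    }
};

int main() {
    LittleLamp parents{"Mom and Dad"}, colleague{"Alex"};
    BigLamp kid{"Sam", {&parents, &colleague}};
    kid.setOn(false);                       // e.g. Sam switches the big lamp off at bedtime
}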

We’ve been following this project since its early conception years ago and are excited to see its creator working on making the system a reality.

Researchers at Disney have combined the magic of light pipes (clear tubes that allow a light source to travel to a different physical location) with real-time sensing and display to create interactive objects that have no embedded electronics. By placing one of these composite objects on an illuminated interactive surface, it becomes central to the user’s interactive experience, allowing people to fully engage with a physical object rather than a flat screen. The ultimate vision involves using multi-material 3D printing techniques, so that the objects can be designed digitally and then emerge from the printer with the light-pipe capabilities already embedded. This is an exciting innovation in interaction, one that would enable the creation of really robust and inexpensive objects that also have a high degree of interactive sophistication.

We’re impatient to play with this technology and are already scheming ways to make some of our own light-pipe enabled parts.

The video below does a good job of explaining exactly how it will work and showing some of the possibilities for interaction: