Smart IxD Lab

Thoughts, observations and experimentation on interaction by: Smart Design

Being in the business of making things makes us a little immune to the news, but with the popular upswing of making comes recognition from industry that we’re not all just consumers.

This video by @Intel_Jim shows him rallying the troops at Ignite Intel 2011. I’ve used AVRs and PIC chips for years now, and never Intel. But when I was a kid, my dad built an Intel 8080-based computer from a kit (an IMSAI) in the late ’70s. That computer was hacked for almost 10 years to do all kinds of cool things we’d take for granted today. I’m really glad to see things come back around to the idea that the next wave of innovation can still be had in someone’s basement.

Here at Smart, we believe it’s important to make people the focus of our design research before we begin creating new concepts for products. We spend a lot of time observing kids and adults alike in order to learn what solutions make the most sense.

In addition to our observations, we look to experts to guide our thinking. Jean Piaget, the famed Swiss psychologist and philosopher, is one such resource. In 1929 he developed his theory of cognitive development, which describes five stages of cognitive growth. He used this theory to explain how humans acquire, construct, and use knowledge… good things to consider when building a research approach! The stages below are discussed in terms of the five steps of the question-answer process a research participant goes through: understanding the question, retrieving information from memory, forming a judgment, evaluating the answer, and communicating the final answer.

1. Sensory-motor intelligence (0-2 years old)
In the sensory-motor stage, researchers have limited options for investigating usability issues. Language and thought processes are very limited, and coordination between vision and grasping is still developing. None of the five steps of the question-answer process can be fulfilled at this stage, so the only way to research this age group is by observing children or by interviewing their parents.

2. Preconceptual thought (2-4 years old)
During this stage, children learn how to use and represent objects through images, words, and drawings. They also learn to form concepts and perform mental reasoning, and toddlers begin to speak and interact with others. For this age group, qualitative interviews that include ‘playing’ tasks can be carried out, and small focus groups can be held. However, all five steps of the question-answer process are still difficult at this age, and both questions and answers must be evaluated carefully.

3. Intuitive thought (4-7 years old)
Language skills improve, but comprehension and verbal memory are still limited. Both of these skills are important for step one (understanding the question) and step two (retrieving information from memory) of the question-answer process. Questions should be very simple, and the words used should match the child’s language. Children in this age group are also very literal and suggestible, have short attention spans, and do not yet understand depersonalized or indirect questions. Methods that work for the intuitive thought stage are small focus groups and short qualitative interviews.

4. Concrete operations (8-11 years old)
Language develops further and reading skills are acquired. However, depersonalized or indirect questions are still problematic at this age, and careful research design remains important for steps 1 and 2 of the question-answer process. Keep it simple and be aware of satisficing: instead of working through the whole question, children may rely on a single heuristic to decide on an answer. Motivation and concentration are also critical issues, so for children in this age group it is very important to keep things simple, visual, and most of all fun! Methods you can use are surveys, semi-structured or structured interviews, and focus groups.

5. Formal thought (11-15 years old)
By this age, children’s cognitive functions (formal thinking, negation, and logic) as well as their social skills are well developed. However, kids are very context-sensitive at this age: they might, for example, behave completely differently at school than they do at home. They are also easily influenced by their classmates, parents, or siblings. Social desirability plays an important role, and it especially influences step 4 (evaluating the answer) and step 5 (communicating the final answer) of the question-answer process. For this age group, all common research methods can be adapted, but be careful with comprehension problems, ambiguity, flippancy, and boredom. Again, keep it simple, and keep it fun.

From age 16, cognitive skills are adult-like and age becomes a negligible factor in choosing a research method.

Read the full post here: http://johnnyholland.org/2011/07/04/usability-testing-with-children-a-lesson-from-piaget/

Official description: “The Artvertiser is a software platform for replacing billboard advertisements with art in real-time. It works by teaching computers to ‘recognise’ individual advertisements so they can be easily replaced with alternative content, like images and video.”

http://selectparks.net/~julian/theartvertiser/

Following Daniel’s post on the Eyebeam event, here is yet another take on augmented reality, this time removing advertising or subverting brands. While it’s a big-ass set of binoculars for now, one can imagine it being turned into an app.

Items of interest from last Thursday’s Demo Day by the Creators Project at Eyebeam:

For a recent project we had to come up with different animations for a dot matrix display. Although we could have done all of this in Flash, After Effects, or Processing, we thought it would be fun to build an animation tool specifically for dot matrices.

After a couple of weeks head-deep in jQuery and HTML, we came up with our very own Dot Matrix Animator. Give it a try at dotmatrixanimator.com. We built this as an internal tool (hence the bare-bones interface), but thought it would be a fun tool to share. If you’re a code ninja, feel free to download the source from Smart Interaction Lab’s GitHub repository and extend, fork, and hack away!

Basic Instructions:

  • Use the mouse to toggle pixels on and off
  • Hold Shift to turn on pixels continuously
  • Hold Ctrl to turn off pixels continuously
  • Press the + button to add a frame to the animation sequence
  • Press the Animate button when you have a few frames and watch your animation run!

The screencast below goes over the ins and outs in greater detail.
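For the curious, the data model behind a tool like this is tiny: each frame is just a grid of on/off pixels, and playback cycles through the frames at a fixed rate. Here is a minimal TypeScript sketch of that idea; the names (Frame, togglePixel, play) are ours for illustration, not the actual source, which lives in the GitHub repository.

```typescript
// A frame is a grid of on/off pixels; an animation is a list of frames.
type Frame = boolean[][]; // true = pixel on

function blankFrame(rows: number, cols: number): Frame {
  return Array.from({ length: rows }, () =>
    Array.from({ length: cols }, () => false)
  );
}

// Clicking a pixel toggles it on or off.
function togglePixel(frame: Frame, row: number, col: number): void {
  frame[row][col] = !frame[row][col];
}

// Cycle through the frames at a fixed rate, handing each one to `render`.
// Returns a function that stops playback.
function play(frames: Frame[], fps: number, render: (f: Frame) => void): () => void {
  let i = 0;
  const id = setInterval(() => {
    render(frames[i]);
    i = (i + 1) % frames.length; // loop forever
  }, 1000 / fps);
  return () => clearInterval(id);
}
```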

Every year before Earth Day, we at Smart take a moment to consider our everyday actions and come up with ways we can do our part to keep our planet sustainable. We share these with each other in the form of Earth Day pledges that can be posted in the studio and revisited every year. Pledges include things like “I will always carry my bag to the grocery store” and “Stop drinking bottled water.”

In 2010, before a Smart Salon evening event with Jeffrey Hollender (sustainability expert, author, and co-founder of the ecologically minded brand Seventh Generation), we decided to come up with a unique way to display our pledges. Since we use a lot of LED matrices in our product prototyping work, we thought it would be great to display them in bright lights, but we knew that LED displays consume energy, which would be counter-productive to the larger sustainability effort. Our solution? Human power!

We hacked an emergency radio, tweaked our LED display, and programmed our Arduino boards. The end result was a two-foot-wide sign that scrolls through our pledges when a crank is turned. It’s a fun way to see the messages, and it offers an element of surprise: you have to wait until you’ve generated enough power to light the sign before you can see what it says.
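The scrolling itself is a simple trick, whatever the platform: render the message as one long bitmap, then slide a display-sized window across it, one column at a time. The sign’s actual firmware ran on Arduino; the sketch below just shows that same windowing logic in TypeScript, with illustrative names.

```typescript
// Scroll logic behind a message sign: render the message as one wide
// bitmap, then slide a display-sized window across it column by column.
type Bitmap = boolean[][]; // [row][column], true = LED on

// Yields one display frame per scroll step.
function* scroll(message: Bitmap, displayWidth: number): Generator<Bitmap> {
  const cols = message[0].length; // assumes a non-empty bitmap
  for (let offset = 0; offset + displayWidth <= cols; offset++) {
    yield message.map(row => row.slice(offset, offset + displayWidth));
  }
}

// Usage sketch: advance one step each time the crank has generated
// enough power (pledgeBitmap and drawOnMatrix are hypothetical).
// for (const frame of scroll(pledgeBitmap, 16)) { drawOnMatrix(frame); }
```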

Check out the video to see the Human Powered Earth Day Pledge Sign in use. Pictures of the experiment in progress are below.

At Smart, we spend a lot of time in the kitchen, both for design projects and because we love to eat. Next to great food, we also love great digital products, and like a lot of people, we’ve been using our smartphones and tablets for as many things as we can think to try. However, bringing our two great loves together has proven deeply dissatisfying. With such ground-breaking technology as the iPad, why hasn’t the design of recipes been reinvented?

After trying a bunch of different recipe apps and becoming increasingly dissatisfied, we decided to design our own. It’s clear that most recipe apps are designed for reading, not cooking: the type is small, the ingredients are separated from the instructions, and you have to navigate constantly to get through a recipe. They’re designed for the content-viewing experience, not the cooking experience. None of this makes sense when you’re deep in the kitchen, pots boiling on a hot stove and your hands covered in the makings of your tasty meal. In this moment, the moment of real cooking, you want your iPad far from the flaming stove and spattering grease. You want something you can ‘tap’ with your elbow, not your spice-encrusted fingers. You don’t want something you read; you want something you can use.

In short, you want a recipe app that’s designed for cooking in the kitchen, not for reading on the couch. You want a design that helps you through the recipe while you’re cooking, not one you study in advance and then try to remember while cooking. You want a design that actually helps you engage with the cooking experience. That’s what we made. Take a look and let us know what you think!

Introduce the recipe – this is what you read on the couch, but then swipe to start cooking!

Chunking ingredients by the main activity helps you through each step

Tap each ‘card’ for instructions, tips, etc.
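Put together, the screens above imply a simple content model: a recipe opens with a couch-readable introduction, then breaks into activities, each chunking its own ingredients with a tappable card of instructions and tips. A hypothetical TypeScript sketch of that structure (the type names are ours, not the app’s):

```typescript
// Hypothetical content model for a cooking-first recipe app.
interface Card {
  instructions: string[];
  tips?: string[]; // extra detail revealed on tap
}

interface Activity {
  name: string;          // e.g. "Make the sauce"
  ingredients: string[]; // only the ingredients this step needs
  card: Card;
}

interface Recipe {
  title: string;
  introduction: string;   // the part you read on the couch
  activities: Activity[]; // swipe through these while cooking
}

const guacamole: Recipe = {
  title: "Guacamole",
  introduction: "A five-minute dip for a crowd.",
  activities: [
    {
      name: "Mash the avocados",
      ingredients: ["2 ripe avocados", "1/2 tsp salt"],
      card: { instructions: ["Halve and pit the avocados.", "Mash with the salt."] },
    },
  ],
};
```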

Photos are a big part of studio life here at Smart Design, especially when they’re pictures of food or people having fun. While emailing our snaps around the office was a fast way to share them, they were clogging our inboxes with large file attachments, and we knew there was a better way.

After a bit of brainstorming, we decided to create a custom application that would show pictures from all three Smart locations (NYC, San Francisco, and Barcelona) in one coherent view, and be publicly displayed and accessible in all our studios. Using Google Maps in conjunction with Flickr, wrapped in a Flash interface, we built the application to alternate between a map view and a snapshot view. Anyone in the office can post by emailing a photo to Flickr. Through geo-tagging on their smartphone, photos are automatically placed on the map at the location where the image was shot. Adding a tag for New York, San Francisco, or Barcelona places a photo in the album for the corresponding studio. While waiting to be used, the app runs a slideshow of the most recently added photos.
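Our build was Flash, but the same pipeline is easy to sketch against Flickr’s public REST API: search for geotagged photos by tag, read each photo’s coordinates, and hand them to whatever map you are rendering. A rough TypeScript sketch, where FLICKR_API_KEY and placeMarker are placeholders:

```typescript
// Fetch geotagged photos from Flickr and drop each one on a map.
const FLICKR_API_KEY = "your-key-here"; // placeholder

interface FlickrPhoto {
  id: string;
  title: string;
  latitude: string | number;  // returned when extras=geo is requested
  longitude: string | number;
}

async function fetchGeotaggedPhotos(tag: string): Promise<FlickrPhoto[]> {
  const params = new URLSearchParams({
    method: "flickr.photos.search",
    api_key: FLICKR_API_KEY,
    tags: tag,          // e.g. "newyork", "sanfrancisco", "barcelona"
    has_geo: "1",       // only photos that carry a geotag
    extras: "geo",      // include latitude/longitude in the response
    format: "json",
    nojsoncallback: "1",
  });
  const res = await fetch(`https://api.flickr.com/services/rest/?${params}`);
  const data = await res.json();
  return data.photos.photo;
}

// placeMarker is whatever your map library provides (hypothetical here).
async function populateMap(
  tag: string,
  placeMarker: (lat: number, lng: number, title: string) => void
): Promise<void> {
  for (const p of await fetchGeotaggedPhotos(tag)) {
    placeMarker(Number(p.latitude), Number(p.longitude), p.title);
  }
}
```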

The photo sharing app has saved our servers from the nightmare of large email files floating around, but more importantly has given us a fun way to peek into what’s happening with Smarties all over the planet. There’s usually a large cluster of images on the map right around the three studios, but it’s also fun to see all the other places in the world where pictures pop up.

New Yorkers are gearing up for the opening of a new show called “Talk to Me” this summer at the Museum of Modern Art. The show explores “the communication between people and objects,” with a strong emphasis on tangible interaction design. It opens to the public on July 24, 2011 and runs until November 7, 2011.

Paola Antonelli, the Museum’s design curator, has taken a unique, open-source approach to her curatorial efforts by allowing people to see her review process online and post suggestions through a WordPress blog: http://wp.moma.org/talk_to_me/

Precious, a small German design consultancy behind beautifully detailed Native Instruments interfaces, has published a brief document entitled “Patterns for Multiscreen Strategies” (see below). It’s fairly basic, but it begins to put much-needed language around multiscreen experiences and provides good examples as well.