Smart IxD Lab


Thoughts, observations and experimentation on interaction by: Smart Design


We’re super excited about two new robots named “Bo” and “Yana,” part of a programmable system for kids called Play-i. We’ve been following their development for the last few weeks and are delighted to see the amazing support building up around the project.

The main concept behind the system is the combination of a very simple, graphical Bluetooth-based remote control app with a pair of moving robot toys that lets kids control the robots and set up programs for arm, eye, gesture, and wheel commands. The system is designed to encourage learning by introducing coding in the context of storytelling, and it’s aimed at a variety of age groups so that even preschoolers can begin coding. (Different interfaces are geared at different age groups.)

Below is a screenshot of the interface, and you can learn more about the project at the Play-i website or in this video.

Steve Faletti by Carla Diana

This summer Pratt Institute instructor and SVA IxD MFA candidate Steve Faletti worked closely with Smart Designers and the Lab on a range of prototypes and experiments as part of his internship. A big part of his process involved using microcontroller platforms such as Arduino, though lately he has become fond of some alternative boards. In this blog post, Steve shares the pros and cons of a variety of the tools and methods he’s been using, along with a summary of his typical workflow, from coding to compiling.



I feel obligated to begin this post by saying how much I love Arduino. It’s an amazing project that has put physical computing tools and understanding in the hands of many artists, designers, students, and hobbyists. It has changed the world and has become synonymous with microcontroller development and low-level computing. It was my first foray into electronics, providing not only a relatively painless path into playing with microcontrollers, but an immeasurable amount of information and learning along the way. I really do love Arduino and am infinitely thankful to the wonderful people who conceived of and develop it, yet lately I rarely use it in prototyping or development.

That’s not entirely true since I still use the language, libraries, and compiler chain extensively. Software capability and efficiency were the Achilles heel of the project in its early days. In those days—before the board even had a USB port—analogRead and PWM were convenient compared to setting up timers and bit-flipping ports, but used more than a few extra clock cycles to provide that. Now, nearly a decade later, those core libraries have been trimmed down and make much better use of the AVR resources. They’re fantastic.

The core language is great, and keeps growing and evolving for both functionality and speed, but the greatest value of the Arduino project is the thousands of available libraries. When I first started using Arduino, I needed to set up a servomotor for a project. The Servo library either didn’t exist, was unstable, or I just didn’t know about it, and it took over a week to figure out how to write my own—very buggy—code to control one. Now, while I could roll and debug my own servo code inside of an hour, it only takes about 15 seconds to grab the library and implement an object. Or two if I need it. The same goes for debouncing buttons, accessing EEPROM, communicating via SPI, or countless other tasks. This is the real power of the Arduino project: the huge community of developers that has created and refined simple-to-use, accessible code. (I like to think of Arduino more as an AVR framework than its own microcontroller, and I somewhat lament the fact that the two names have become synonymous.) So, with some regret, here are the reasons why I frequently avoid using the rest of the Arduino framework.
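Debouncing is a good example of why those libraries matter: the idea is simple (only accept a state change once the reading has been stable for some minimum interval), but the edge cases are fiddly. Here’s that core idea sketched in Python rather than Arduino C++, purely as an illustration of what a library like Bounce handles for you:

```python
class Debouncer:
    """Only accept a new switch state once the raw reading has been
    stable for `interval_ms` milliseconds."""

    def __init__(self, interval_ms=50, initial=False):
        self.interval_ms = interval_ms
        self.stable_state = initial   # last accepted (debounced) state
        self._candidate = initial     # most recent raw reading
        self._since = 0               # time the candidate first appeared

    def update(self, raw_state, now_ms):
        """Feed one raw reading with its timestamp; return the debounced state."""
        if raw_state != self._candidate:
            # The reading changed: restart the stability timer.
            self._candidate = raw_state
            self._since = now_ms
        elif raw_state != self.stable_state and now_ms - self._since >= self.interval_ms:
            # Steady long enough: accept the new state.
            self.stable_state = raw_state
        return self.stable_state


# A noisy button press: the contact chatters before settling high.
readings = [(0, True), (5, False), (8, True), (12, True), (70, True)]
db = Debouncer(interval_ms=50)
states = [db.update(state, t) for t, state in readings]
# states → [False, False, False, False, True]: only the settled press registers.
```

The real libraries deal with plenty more (pull-up polarity, rollover-safe timers), which is exactly why grabbing one beats rolling your own.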

Arduino Uno sketch by Carla Diana


The hardware is too big, too expensive, and too limited for my needs these days. Originally built to be something of a standalone development tool, and based on components sourced in 2005, the standard Arduino footprint is massive. There is also the weird legacy error in the pin layout that will forever lock the Arduino to its unique footprint. I like to keep my circuits entirely on the breadboard if possible, and the standard Arduino doesn’t allow that. They’re also $30 a pop at the time of this writing. I use a lot of microcontrollers and tend to leave them in projects. Most of the time the cost and size just don’t make an Uno a viable option.

**Note that I develop on OSX, and the instructions here have only been tested on that OS. Some of the software I use, like Cornflake Terminal, is only available for the Mac. There are many Windows and Linux alternatives and equivalents.

Arduino Pro Mini sketch by Carla Diana


For simple projects I tend to either burn the Arduino bootloader straight onto a bare ATMega, or buy one preburned, and then essentially build an Arduino around it (it’s not hard, and a great learning experience—do it at least once). I’m also a huge fan of the ‘Pro’ line of Arduinos from Sparkfun, especially the minis. They’re just cheap enough that I don’t care if I lose or fry one, yet save me 5 minutes of hooking up wires. Note that to program either of these stripped-down options you will either need some kind of FTDI converter or cable, or you can use a standard-size Arduino.

Teensy tracing by Carla Diana


I find that many of my projects require some kind of interface with a computer, and here the ATmega 32u4 is my new favorite chip. It’s slightly different from the 328 on the Uno: it offers more I/O connections, more analog pins, and, most importantly, built-in USB capability. This means that not only do you not need another chip to translate between it and your computer, it can also easily emulate keyboards, mice, and joysticks. Arduino has offered this chip for a while in the Leonardo package, and more recently as the Micro, but I greatly prefer the Teensy 2.0 from PJRC. It’s ridiculously small, fits on a breadboard, and only costs $16 if you’re willing to source your own header pins. There are usually a few rows of these lying around my studio, and soldering them on takes a couple of minutes. While this is the same chip as in the Arduino options, I find that the bootloader and USB profiles (not open source) are a bit more reliable.

Teensy’s developer, Paul Stoffregen, is a big fan of Arduino and maintains regular communication with the community. As such, he’s ported the loading protocol for Teensy into the Arduino chain by way of a convenient plugin, called Teensyduino. With this installed, there is almost no difference between working with Arduino or Teensy, and the same code can be uploaded to either platform over 95% of the time. Teensyduino also offers the option to add just about every working library upon install, which means I don’t need to go hunting for one later. Paul has also built an ARM-based micro board, called the Teensy 3.0. It’s also cheap, compatible with Teensyduino, easy to work with, and powerful. Paul recommends pairing it with a Raspberry Pi to get a nice, inexpensive, and powerful hardware setup that can handle advanced sound, video, and connectivity. I don’t have any experience with the ARM-based Arduino Due, and looking at the specs, I don’t know that I would try to compare them directly. The Due appears to have more features and pins while the Teensy 3.0 is less than half the price.



I want to wrap this up by talking about my choice of IDE. In addition to physical computing projects, I do a fair amount of screen and web work. My favorite editor right now is Sublime Text 2. It has some great features, and with the huge collection of packages available, it is very powerful. Anybody who writes more than a few lines of code a week will quickly grow to hate the Arduino IDE, which is based on Processing. There is nothing wrong with it per se; it’s just a very bare-bones editor, offering little more than cleanup and highlighting. Thankfully, the Stino project exists to essentially plug the Arduino IDE into Sublime Text 2. I’d suggest installing it with the excellent Package Control plugin for Sublime, though if you’re resourceful you can do it manually. This gives you full Arduino functionality inside the Sublime editor: you can edit, choose a target board, and upload sketches. It has a serial monitor built in, though I usually use Cornflake. It also brings in Teensyduino, provided it’s already been installed. I use the ST2-Arduino snippet set for completion. This unfortunately needs to be installed manually, but it’s not that hard to do. I just git-cloned the repository into my ‘Packages’ folder. I honestly don’t know if this is the preferred method, but it seems to work just fine. This article may help.

So, Arduino is great. Really, really great. But I’ve found that as my development skills grow and I look for more flexibility and convenience, some of the tools offered by the project no longer fit into my workflow. That’s fine. The intent of Arduino is to help people learn about electronics and physical computing. The fact that parts of it are amazing for rapid prototyping and development is a bonus.

- Steve Faletti, October 10, 2013


Have any tips of your own to share? Is there a new development board that you’re enjoying using? Let us know!

This week Smart Interaction Lab took the stage to talk about the Internet of Things and design at the Make Hardware Innovation Workshop. In our talk, we discussed the importance of meeting real human needs, and we laid out some of the key challenges to tackle in designing for IoT moving forward.

During the “Design Advantage” session we were joined on the stage by two young innovative companies: Canary, creators of a connected home monitoring system, and Strong Arm Technologies, developers of a vest to improve posture during heavy lifting. Both discussed the importance of speaking to real people during design research phases and incorporating changes through an iterative design process.

Canary’s product description video.

The other sessions of the day provided the audience with a wide array of inspiration and resources for building ideas into prototypes and eventually scaled manufacturing. Microcontroller creators Massimo Banzi (Arduino) and Jason Kridner (BeagleBone) had a lively discussion about the role of microcontrollers in the innovation process.

Perhaps most inspirational are the new services that help innovators move their projects beyond the prototype phase and into production. Presentations by Dragon Innovation, Highway1, Elihuu and several others suggested that great education and amazing resources are out there for anyone who is serious about taking a product to market.

Hardware Innovation Workshop events have taken place in both the Bay Area and New York over the past few years. For more information and a full agenda from the 2013 New York event, visit the Make website: Hardware Innovation Workshop.

At Smart Interaction Lab, we’re already obsessed with putting sensors on our pets in the name of Internet of Things research. In addition to hacking existing bands like FitBit and UP, we’ve been experimenting with more home-grown sensor combinations to see if we can tease out meaningful data that will help dogs and their owners feel more connected.

That’s why we love the startup Whistle, the connected dog collar. Here’s how the company describes their product:

The Whistle Activity Monitor is an on-collar device that measures your dog’s activities including walks, play, and rest, giving you a new perspective on day-to-day behavior and long-term health trends. Check-in from your phone, share memorable moments with family and friends, and send detailed reports to your veterinarian.

The collar basically contains an accelerometer to track the amount of movement that has taken place, and then couples that information with timers so that the movement can be translated into the length of walks, naps, runs, etc. It has a database for comparison with dogs of a similar breed and age, so users can put the activity into context and know what’s expected of a healthy dog. The Whistle syncs with the app using WiFi or Bluetooth 4.
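Whistle hasn’t published its algorithm, but the accelerometer-plus-timers idea can be sketched with a toy example: threshold the movement level in each time window, then total up active versus resting minutes. This is purely our illustrative guess at the approach, in Python:

```python
def summarize_activity(per_minute_magnitudes, threshold=1.2):
    """Translate per-minute average accelerometer magnitudes (in g) into
    active vs. resting minutes with a simple threshold. An illustrative
    guess at the approach, not Whistle's actual algorithm."""
    active = sum(1 for m in per_minute_magnitudes if m > threshold)
    rest = len(per_minute_magnitudes) - active
    return active, rest


# Six minutes of readings: a brisk walk followed by a nap.
day = [1.8, 1.6, 1.9, 0.3, 0.2, 0.1]
active_minutes, rest_minutes = summarize_activity(day)
# → (3, 3): three active minutes, three resting minutes.
```

With labels like these in hand, comparing a dog’s totals against a breed-and-age database is just a lookup.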

We love the idea that dog owners can know what’s happening with their best friends whether they are together or not, and that they can track trends over time to get a sense of when Fido is losing steam or showing a change in behavior. The app also allows them to compile a report to share with a veterinarian, so that all the information can be accessed easily by the vet or during an office visit.

And here’s their super cute video that will make you say, “awww…”

At Smart Interaction Lab we sometimes create our own circuit boards for physical models in order to have internal parts with exactly the components we need in an efficient form factor. We love using off-the-shelf boards from sources such as Sparkfun and Adafruit for fast solutions, but making great prototypes sometimes requires going one step further.

Since we know many of our readers face the same situation with their own projects, we decided to share our process here. The most recent board we created was made to work with one of our favorite new toys, the Ninja Blocks, a quick and easy kit consisting of a WiFi hub and sensors for exploring Internet of Things in the studio. This post lays out the process we went through to create our own custom RF Arduino Shield for NinjaBlocks.


Our Inspiration

When the folks from NinjaBlocks came to visit us in the San Francisco studio this past summer, we were excited to start playing with their product, as it offers an easy, quick way to experiment with design ideas for Internet-connected objects. The NinjaBlocks kit ships with a few basic sensors: temperature and humidity, motion, window/door contact and simple push button. These are great for wiring up a space to monitor it online, or setting it up to trigger an email, tweet or update to a website. While playing with the setup we quickly realized that we would want to go beyond those basic sensors so that we could try to use it with anything we needed like a force sensor, moisture detector, light sensor, etc.

Since the NinjaBlocks hub communicates with its sensors over radio frequency, it’s something that we could easily tap into, and there are helpful guides on the NinjaBlocks website which outline how to use it with your prototyping platform of choice (we used an Arduino).

The guides give instructions on hooking up the Arduino and RF components using jumper wires from a breadboard, and though this is a great way to get started right away, we’ve found that things get messy quickly. Once you begin adding more components like sensors, it may end up looking more like a Medusa head of wires than anything else. Hence what this post is really about: making an Arduino shield!


Why a Shield?

An Arduino shield is essentially another circuit board that sits on top of the Arduino and adds capabilities the Arduino does not have on its own. Instead of connections happening through a bundle of loose wires, everything is neatly integrated within the layers of the circuit board and the components on it. Since a shield easily plugs into and out of an Arduino, it also allows you to swap out shields as needed for different projects. The board can be designed in software and then fabricated at a printed circuit board (PCB) facility.


Getting Started

Since it’s always good to prototype what you’re doing first, we started with a breadboard, and then later used an all-purpose protoboard (available at hobby stores like Radio Shack), cut it down to dimensions, soldered connections to match the Arduino and hooked up the RF components.


Designing the Shield

Once our protoboard was working (hurrah!), we knew exactly what we needed, and we could move on to the design of the PCB. Though there are a growing number of home-brewed DIY solutions for making PCBs (such as using laser printers, printing on transparent sheets, exposing and etching away parts of copper-coated circuit boards, etc.), we wanted something very reliable and repeatable, without having to deal with corrosive chemicals. Thus we chose to outsource the manufacturing to one of the few companies that can do small batches of PCBs fairly cost-effectively.

The shield was drawn up schematically and then laid out in a program called CadSoft EAGLE. It’s the same tool used by professional engineers, and they offer a limited version for free for hobbyists to make their own boards. Incidentally, many DIY electronics companies, such as SparkFun, Adafruit and Arduino, offer EAGLE files of their components, which makes building on what they’ve done much easier than having to reverse engineer and recreate a part from scratch. Our shield was made by modifying an EAGLE file of a shield to make sure all the necessary holes and pins would be in the right place.

EAGLE can be daunting at first, and drawing out a shield can take quite some time, but with the help of online reading and YouTube tutorial videos, even a beginner can get started. There are also many other circuit board layout programs, among them the open source alternative KiCad and the online editor we’ve mentioned in a previous post, which may offer an easier way into creating circuit boards when you’re starting out.


Ordering the PCB

After going through a few iterations of the design in EAGLE, the finished version was finally shipped off to OSH Park to be made. While the turnaround time for smaller batches of PCBs can be quite lengthy (we waited about three weeks), they’re still cheap enough to make it worthwhile. It is also a good idea to prototype as far as possible before committing the design to an actual PCB, to make sure the circuit works and everything is laid out properly. It’s better to spend an extra day laying everything out than to get your board back after a few weeks and find that it doesn’t work.


PCB Assembly

In the end, after soldering the RF modules and header pins onto the shield, it was working beautifully. (And there’s an added bonus to creating an RF Arduino sender-receiver setup, which is that we can use it to communicate back and forth between individual Arduinos as well as with the NinjaBlocks.) Since the RF modules only take up two digital pins on the Arduino, you’re still very free to hook up sensors and actuators to trigger whatever your imagination can think of. How would you use it?


Source Files

If you’d like to try making the shield yourself, you can download an updated version of the Eagle file, along with documentation and links to Arduino libraries. These files will enable you to order your own, or modify it to fit your own needs better. Enjoy:

Smart Interaction Lab Arduino Ninja Shield on GitHub

 Special thanks to Smartie Daniel Jansson for sharing this process.

How can interactive objects encourage inspiration and dialog during brainstorming sessions?

This summer, the Barcelona branch of the Smart Interaction Lab set out to answer that question through electronic hardware experiments. TOTEM is a series of tangible interaction experiments, presented at Barcelona’s first ever Maker Faire, a place where people show what they are making, and share what they are learning.

Ideation sessions are part of everyday life at Smart Design, informing all the work we do. When reflecting upon these sessions, we developed the concept behind our Maker Faire project. We worked together as a team of multidisciplinary researchers and designers to explore how we can improve people’s experiences of the ideation process through tangible interaction.  Our solution was TOTEM—a family of three unique objects that help people get inspired and stay engaged in creative conversations and debates in order to generate new ideas. It is composed of a stack of three separate but complementary objects: Batón, Echo and Alterego.

1. Batón is an unconventional talking stick that is passed around during ideation sessions, allowing only the person holding the Batón to speak. After a certain amount of time the Batón begins to vibrate to indicate that the speaker must pass it to someone else in the group. This tool allows everyone present in a discussion to be heard: it forces the most dominant speakers to be more concise, and it encourages those who may be more shy to speak up.


2. Echo is a recording/playback bell-like device that can fit easily and comfortably into any setting—from creative spaces, to the living room of your home, to cafes. Echo works in two ways: it records the background noise from wherever it is placed, and as soon as a user picks it up and puts it to their ear, Echo will play back what it previously recorded. By shaking it, Echo will randomly play back other sound bites from other parts of the day. The main aim of Echo is to help users get inspired by other adjacent creative conversations, keywords, and quirky background noises that we would not be able to pick up on without the help of technology.

3. Alterego is designed to be a new version of the Six Thinking Hats mind tool by Edward de Bono. Alterego is a wearable device with three components, similar to bracelets. As soon as a user picks up one of the objects, it lights up with its own unique color, signaling that it is on. The three colors each have a purpose and code, to help guide the user to adopt different mindsets for evaluating ideas: green = optimistic, red = pessimistic, blue = moderator. The thinking behind this method is that it forces users to think outside their comfort zone and evaluate ideas in a balanced, unbiased manner—an essential skill for productive ideation.
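Batón’s behavior is essentially a timer plus a rotation. Here’s a minimal Python sketch of that logic (our own illustration for this post, not the actual firmware): the stick “vibrates” and passes to the next speaker once the time limit is reached.

```python
from collections import deque


class Baton:
    """Toy model of Batón: whoever holds it may speak; after `limit_s`
    seconds the stick 'vibrates' and is passed to the next person."""

    def __init__(self, participants, limit_s=60):
        self.queue = deque(participants)  # current speaker is queue[0]
        self.limit_s = limit_s
        self.held_since = 0.0

    @property
    def speaker(self):
        return self.queue[0]

    def tick(self, now_s):
        """Returns True (a 'vibration') when the speaker's time is up,
        and rotates the baton to the next participant."""
        if now_s - self.held_since >= self.limit_s:
            self.queue.rotate(-1)     # pass to the next person in the circle
            self.held_since = now_s
            return True
        return False


baton = Baton(["Ana", "Ben", "Chen"], limit_s=60)
# (time, vibrated?, who holds it afterwards)
events = [(t, baton.tick(t), baton.speaker) for t in (10, 59, 60, 130)]
# → [(10, False, "Ana"), (59, False, "Ana"), (60, True, "Ben"), (130, True, "Chen")]
```

The real object adds the physical layer (vibration motor, grip sensing), but the turn-taking rule is this simple.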

A schematic of the object components is below:


Smart Design and their Smart Interaction Lab are supporters of the Maker culture, a movement of invention, creativity and resourcefulness. Our project was well received by Faire visitors and we heard lots of great feedback. We look forward to considering those comments in our continued experiments. If you’d like to learn more about TOTEM, feel free to contact the Smart Interaction Lab.

Special thanks to the TOTEM team: Junior Castro, Marc Morros, Erika Rossi, Ivan Exposito, Valeria Sanguin and Floriane Rousse-Marquet.

It’s long been a sci-fi fantasy to have doors that automatically recognize you and selectively choose whether or not to grant access to others who approach. With Internet of Things systems we are one step closer to making that fantasy a reality. Since smartphones can be used to communicate with a lock, and the lock itself can be connected to an online database, new products can offer selective keyless entry based on conditions set up by the lock’s owner. A cloud-based service can be used to organize and control who has access to our home and when.

Here are a few products in this realm that we’ve had our eye on:



Lockitron, shown above, was the first on the scene with their Kickstarter campaign in early 2011. Originally set up as an entire lock, they are now offering a product that’s an add-on to existing locks. Through a combination of Bluetooth 4.0 and WiFi, almost any smartphone can be used to control the lock from nearby or from a remote location.


August (shown at the top of the post) is the slickest of the bunch, with a pretty glowing face and robust look. It communicates with a smartphone over Bluetooth, and then the app handles interactions with the server. The product description also boasts the ability to distinguish whether a person is inside or outside. The company’s video does a nice job of explaining why you would want one of these.


Goji, above, is a relative newcomer, and is similar to August and Lockitron, but with a few additional features such as a camera that can use WiFi to send you a picture of who is at the door, and key fobs that you can give to kids or others who might not have smartphones. It also boasts the ability to know whether you are inside or outside.

While all this satisfies our sci-fi fantasies, the realists among us are wondering just how safe it can be. If I have one of these locks in place, can my home become prey to burglars via hacking? The answer seems to be yes and no, and Gizmodo’s Peter Ha does a good job of laying out the pros and cons, and scrutinizing Bluetooth 4.0/LE/SMART along with WiFi in terms of overall security.

It’s clear that the Internet of Things is upon us, and products such as these will be entering people’s homes regardless of security risks. We hope that as they become more sophisticated, their ability to thwart would-be hackers keeps getting better.

We’re fascinated by the potential of LightUp, a system for teaching electronics based on a combination of real, physical electronic components, paired with an augmented reality app to help guide the way.

Based on a series of blocks that snap together magnetically (much like LittleBits), kids (or anyone, really) can experiment with common electronics in a fun and intuitive way by moving pieces around. The blocks are marked so that the app can read exactly what’s going on, and then offer a kind of engineering X-Ray vision of what the outcome of the circuit will be. Animations indicate where LEDs will light up, and what direction the current is flowing. The pieces also have common electronics schematics so kids are exposed to the symbols throughout the process.


We’re excited not only about the potential for this project, but how other augmented reality systems such as these can serve as rich tutoring systems for all kinds of abstract topics.

As of this post, LightUp is in the final days of its Kickstarter phase. For more info and to follow the project, visit

Recently, the Barcelona branch of the Smart Interaction Lab explored a project called Smart TV, a system that automatically recognizes the viewer, then aggregates content from multiple sources based on their preferences, and creates a unique set of curated content channels for them. You can read more about it on the Smart Design website.

We took a look behind the scenes by talking to Junior, code/design ninja and IxD Lab Chief in the Barcelona office. Here’s a transcript of our fascinating chat about Python scripting libraries, group therapy sessions, and people who lie about liking Borat.

[10:49:34 AM] Carla: In your own words, what is the Smart TV project?

[10:51:59 AM] Junior: This project was originally an exploration of current technologies to enhance the TV watching experience, it was meant to be a “what if” scenario

[10:52:49 AM] Carla: Cool, can you tell me more? What if…?

[10:54:29 AM] Junior: Well, 2 years ago as part of my internship here I was asked to do a personal project, something that could show my interests while improving people’s life and that would be aligned with Smart’s philosophy.

[10:56:18 AM] Junior: I was really interested in diving into recommendation engines and face recognition, so I came up with the idea of exploring the question, “What if a ‘Smart’ TV could be more than a ‘connected’ TV? What if the TV could actually know who was watching it and then adapt based on that, changing the UI in both the remote control and the content to be displayed?”

[10:56:53 AM] Carla: Why was it important to know who was watching? Was this something that you noticed was a pain point?

[10:58:30 AM] Junior: I felt that watching TV should be a relaxing activity, and with the amount of content that we have available, the effort required to browse through content to find what you like was really painful and less enjoyable than it should be.

[10:58:53 AM] Carla: Ah yes, that makes sense.

[10:58:56 AM] Junior: If the system knows who is watching, it can be a more pleasant experience by offering choices that are more tailored to that person.

[10:59:28 AM] Junior: Also, I wanted to help people that are not especially tech savvy.

[10:59:33 AM] Carla: Can you tell me more about your work with face recognition in this context?

[11:00:20 AM] Junior: I liked the idea of using face recognition because it’s a very natural way of interacting. After all, as humans, we use it all the time without even thinking about it, and I think we are at a point in history where technology can do it very accurately.

[11:00:45 AM] Carla: How does the face recognition work?

Above: PyVision image source:

[11:01:38 AM] Junior: Face recognition consists of 3 steps:

[11:02:13 AM] Junior: 1. Enrollment: when the system “learns” the face and associates it with a profile; 2. Tracking: when the system analyzes the images and detects “faces”; and 3. Recognition: when the system distinguishes that face from all the faces in the image and identifies it as a specific profile
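(For the curious, the three steps Junior lists can be sketched as a tiny nearest-mean classifier. This is plain Python over made-up feature vectors, standing in for what PyVision does with real image features: enrollment averages samples into a profile, and recognition picks the closest profile mean.)

```python
def mean(vectors):
    """Element-wise average of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]


def enroll(profiles, name, face_vectors):
    """Step 1 (enrollment): 'learn' a face by storing the average of many samples."""
    profiles[name] = mean(face_vectors)


def recognize(profiles, face_vector):
    """Step 3 (recognition): identify the profile whose mean is nearest."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(profiles, key=lambda name: dist(profiles[name], face_vector))


# Hypothetical 2-D 'face features' for two viewers.
profiles = {}
enroll(profiles, "carla", [[0.9, 0.1], [1.1, 0.3]])   # mean ≈ [1.0, 0.2]
enroll(profiles, "junior", [[0.1, 0.8], [0.3, 1.0]])  # mean ≈ [0.2, 0.9]
who = recognize(profiles, [0.95, 0.25])
# → "carla"
```

Step 2 (tracking) is the part that finds face regions in the camera image in the first place; that is where the heavy lifting of a library like PyVision comes in.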

[11:04:30 AM] Carla: That’s fascinating. For the geeks in our audience, can you tell us what software you’re using?

[11:06:43 AM] Junior: Since I wanted it to be a stand-alone system, I looked into different solutions and finally opted to use Python as the language, along with an image processing library called PyVision.

[11:07:48 AM] Carla: Can you tell me a little bit more about this?

[11:08:39 AM] Junior: Python is a very portable language and can be used on a lot of different platforms, both server-based and embedded. It’s a scripting language, but a high-performance one, so it’s really easy to reuse and port the code to different platforms.

[11:10:31 AM] Junior: My intention was to create a “black box” to contain all the required software and just plug it in to a TV.

[11:10:47 AM] Carla: Cool!

[11:11:24 AM] Carla: Can you talk about some of the experiments you did to get up to speed on it?

[11:12:12 AM] Junior: Sure. I divided the project in 3 parts that were developed separately and then I connected them.

[11:13:05 AM] Junior: First was the face recognition module, which was basically identifying who was watching. I tried several options and algorithms in order to find one that would be usable and responsive.

[11:13:19 AM] Junior: I did around 20 -25 different scripts.

[11:13:49 AM] Carla: Wow, how did it go with those first scripts?

[11:14:41 AM] Junior: Well… some were good at tracking faces, but in order to recognize, you basically need to create an average of a lot of photos of that face. So, the first scripts were good at tracking but really bad at recognizing. They would be really unresponsive.

[11:15:08 AM] Carla: Ah yeah, that makes sense.

[11:16:05 AM] Carla: And then after the face recognition module?

[11:16:26 AM] Junior: Finally I found a really cool library to implement machine learning in Python.

[11:17:06 AM] Carla: Nice! What’s that called and how did you find it?

[11:17:47 AM] Junior: Mmmm… I read a lot of articles about face recognition, and the guys who developed PyVision use machine learning for face recognition.

[11:17:55 AM] Carla: Gotcha.

[11:18:15 AM] Carla: So after the face recognition module, where did you go with your experiments?

[11:19:02 AM] Junior: After that I did the iPhone app, and used it as remote control.

[11:19:32 AM] Junior: I felt strongly that the UI should not be on the TV screen itself because watching TV is a social activity– you don’t want to interrupt everyone who’s watching when you want to browse or get more information.

[11:19:59 AM] Carla: And what kind of coding environment did you use for the app? There are so many options right now, and a lot of people are confused where to start.

[11:21:07 AM] Junior: I used a framework called PhoneGap. It’s really cool: you create the UI using web technologies (HTML5, CSS, JS) and the framework encapsulates the UI into a native app.

[11:21:56 AM] Junior: It’s really simple and the best way to do a prototype.

[11:21:58 AM] Carla: Oh yeah, I know a lot of people love PhoneGap for prototyping, nice to know you can create a native app with it.

[11:22:53 AM] Carla: What were the biggest challenges in developing the Smart TV system, particularly with making it really intuitive?

[11:24:31 AM] Junior: I think the biggest challenge was thinking about how the system should aggregate the content when two or more people are watching together.

[11:25:03 AM] Junior: I feel that watching TV used to be very simple and social (as in, people in the same place watching the same TV).

[11:25:07 AM] Carla: Interesting. I can see how that would be tricky to know whose content belongs to whom.

[11:25:42 AM] Junior: Exactly, and I think our approach was more about forgetting “your” or “my” content and thinking about “our” content.

[11:26:06 AM] Junior: Let other people enrich your experience just by being there in front of the TV.

[11:27:18 AM] Carla: Hm. So does that mean that “we” become(s) another user? Or do you just pick the person whose content it’s more likely to be? I can see how this could get really complex really fast!

[11:29:04 AM] Junior: “We” become something different, we are a group that aggregates all the individuals.

[11:29:37 AM] Junior: Think about the wisdom of crowds applied to TV.

[11:29:58 AM] Carla: So is it kind of like this: person 1, person 2, person 3 and then a fourth profile for all three people combined?

[11:31:29 AM] Junior: Sort of. It’s combined but it’s not exactly the sum of everyone.

[11:31:41 AM] Junior: When you think of a family, for example: if you separate each member, they each have a personality,

[11:32:13 AM] Junior: but when they are together they have a “group” personality.
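
The group-profile idea Junior describes has a direct analogue in group recommendation research, where individual preference profiles are blended with strategies such as plain averaging or “least misery.” A hypothetical sketch (the genre-score profile format is our invention, not from Junior's system):

```python
def group_profile(profiles, strategy="average"):
    """Blend individual genre-preference dicts into one group profile.

    profiles: list of dicts mapping genre -> score in [0, 1].
    'average' takes the mean score per genre; 'least_misery' takes the
    minimum, so no member is stuck with something they strongly dislike.
    """
    genres = set().union(*profiles)
    if strategy == "average":
        return {g: sum(p.get(g, 0.0) for p in profiles) / len(profiles) for g in genres}
    if strategy == "least_misery":
        return {g: min(p.get(g, 0.0) for p in profiles) for g in genres}
    raise ValueError(f"unknown strategy: {strategy}")
```

Notice that neither blend is “exactly the sum of everyone,” which matches Junior's point: the group ends up with a personality of its own.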

[11:34:35 AM] Carla: Ok, I get it. Cool. I think there are a lot of interesting social dynamics to explore there, like who is the most dominant. Super interesting. Could be a project for group therapy.  ;)

[11:35:27 AM] Junior: Exactly. One of the reasons I used face recognition was the possibility of using facial emotional feedback from everyone.

[11:36:15 AM] Carla: What’s next for this? Are you using the face recognition for anything else?

[11:36:26 AM] Junior: Not at the moment, but I’ve been paying attention to people using face recognition as a rating system.

[11:37:25 AM] Junior: In a regular recommendation system, it’s all about “like” or “dislike”, but the truth is that we have two “selves”: the one we aim to be and the one we really are.

[11:38:19 AM] Carla: That’s super fascinating about the self we aim to be. There’s so much psychology in all of this. Are you saying that the face recognition gives us a better truth than the rating that we indicate in the interface in another way?

[11:39:06 AM] Junior: Yes, exactly. For example, in order to create a profile in a recommendation engine you have to select content that you like, but most of the time you select things that you think are cool, not necessarily things that you actually like.

[11:39:25 AM] Carla: So would the system you propose collect recommendation data in a passive way? Like in the middle of the movie I’m watching, rather than a question that’s asked at some other time?

[11:40:23 AM] Carla: Is it passive, accumulated while I’m watching?

[11:40:29 AM] Junior: Ideally it should be tracking your facial feedback at all times.

[11:41:20 AM] Junior: You could choose “Gone With the Wind” or “Citizen Kane”, but in reality your facial feedback says that you like “The Mask” and “Spice World” better.
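
One way to picture the passive rating Junior proposes: sample the viewer's face periodically, score each frame for emotion, and collapse the scores into an implicit rating afterward. A hypothetical sketch (the emotion labels and the 0–5 scale are our assumptions, not from Junior's system):

```python
def implicit_rating(frame_scores, positive=("happy", "surprised")):
    """Collapse per-frame emotion scores into an implicit 0-5 rating.

    frame_scores: list of dicts mapping emotion label -> confidence in [0, 1],
    e.g. one dict per sampled video frame while someone watches.
    """
    if not frame_scores:
        return 0.0
    per_frame = [sum(f.get(e, 0.0) for e in positive) / len(positive)
                 for f in frame_scores]
    return round(5 * sum(per_frame) / len(per_frame), 2)
```

Because the signal accumulates over the whole viewing session, it reflects how you actually reacted rather than how you chose to present yourself afterward.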

[11:41:24 AM] Carla: Ha ha ha, yes, and “Borat” instead of “La Jetée.”

[11:41:54 AM] Junior: Hehehe, exactly ;) And facial emotional feedback is universal, independent of culture or geographic location.

[11:42:11 AM] Carla: Yeah, that makes sense.

[11:42:22 AM] Junior: Then you could be more accurate about what you like and when.

[11:42:31 AM] Carla: Right.

[11:43:45 AM] Carla: Junior, this has been great! And I learned a lot.

[11:44:11 AM] Junior: Thanks Carla, it was really fun, please let me know if you need anything else.

[11:44:23 AM] Junior: I have tons of examples and references that I can share with the Smart Interaction Lab readers.

In our research around the emerging Internet Of Things product category, one area for data collection that’s been blossoming is in environmental sensing. Qualities like temperature, humidity, barometric pressure, and VOCs can tell us a good deal about our immediate surroundings and let us diagnose air quality, both indoors and outdoors. Ideally the devices will help us understand how changes in those values might correlate to health or mood.

In the past, we’ve featured Lapka blocks, super-pretty white and wooden blocks that detect radiation, EMF, humidity and nitrates in foods. We also took a look at the yet-to-be-released Cube Sensors, which measure what we called the “indoor invisibles”: temperature, humidity, noise, air quality and barometric pressure. A new one we’ve found on Indiegogo is called Motes, a collection of small, wireless sensors for iOS, Android or Linux that measure values such as ambient temperature, humidity, light, soil moisture, soil temperature, object temperature, and human presence and movement. According to the website, they last about a year on a single battery, and because they communicate with the cloud via an existing device (smartphone, tablet or computer), they don’t require a wifi connection.

While these products, aimed at individual households or offices, can help us understand what’s happening in our immediate environments, such devices become really powerful when they give us tools to build a “macro view” of our surroundings based on collective data measured by many individuals over a large area. This larger view makes it possible to crowdsource meaningful aggregated data from many geographic locations at once, validating environmental concerns and providing evidence for citizen journalism.

The Smart Citizen Kit is the latest environment-sensing IoT project that’s caught our eye, and its entire aim is to provide a tool for crowdsourcing data and enabling what might best be described as standardized, citizen-driven smart cities, building an “interactive, worldwide database”. Created as a joint venture between our friends at the Barcelona Fab Lab, IAAC, and Acrobotic Industries of Pasadena, the Smart Citizen Kit consists of a small sensor array and an open-source online platform that lets people easily visualize and share their data with the world. The sensors measure air composition (CO and NO2), temperature, light intensity, sound levels and humidity. A live (beta) version of the site is here:
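
To give a feel for the “macro view” such a platform enables, here is a hypothetical sketch of aggregating crowd-sourced readings into per-metric averages. The payload shape is invented for illustration and is not the actual Smart Citizen API:

```python
from statistics import mean

def summarize_readings(readings):
    """Aggregate crowd-sourced sensor readings into per-metric averages.

    readings: list of dicts like
    {"device": "kit-12", "metric": "no2_ppm", "value": 0.021}
    (a made-up shape for illustration, not the real Smart Citizen schema).
    """
    by_metric = {}
    for r in readings:
        by_metric.setdefault(r["metric"], []).append(r["value"])
    return {m: round(mean(vs), 4) for m, vs in by_metric.items()}
```

Run over readings from hundreds of kits across a city, a summary like this is what turns scattered individual measurements into evidence a citizen journalist can actually point to.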

We’re excited about the potential for this to empower citizen journalists. As of this posting, the project was just a few hundred dollars shy of its $50,000 goal, so we wish them luck!

And check out Bruce Sterling’s Spime Watch review of the Smart Citizen vision: