Smart IxD Lab


Thoughts, observations and experimentation on interaction by: Smart Design


We’re intrigued by Amazon’s new entry into the hardware market: the Echo. It’s essentially a cloud-connected smart audio system that can perform web searches in response to voice commands. With the widespread use of voice-based assistants such as Apple’s Siri, we wonder whether it will be useful to have yet another in our midst. But since the Echo sits in the center of a room and can be accessed by anyone nearby, it lends itself to social, communal use, as opposed to being a private, single-user device. The official video (above) plays out a few scenarios for its use, including a family making music selections, a boy double-checking homework spelling, and a woman asking for recipe help while cooking (hands free!).

The lack of a screen is an interesting design decision. Since the Echo can communicate with nearby tablets and phones, it can essentially “borrow” those screens as its own interface when visual output is needed, but as an audio-only device it lacks some of the expressive features that would make it compelling and satisfying to use. We’re thinking about how different it might be from more physical, robotic systems such as the Jibo Family Robot.

Units will sell for $199, but a $99 special for Prime users is available by invitation. For more information, see the official Echo website.

We also love that it’s inspired a series of playful parodies such as this one.

 


At the Smart Interaction Lab, we’ve been experimenting with many electronics boards and sensors for prototyping connected devices. In the San Francisco branch of the lab, we decided to use some of our experiments to see how they can give us greater insight into our own behavior in the studio. Since water conservation in California is so critical, we decided to explore ways that devices can help us understand water usage better.

The result is the Water Watcher, a device for monitoring and reporting water consumption as it happens: at our sinks. The glowing device provides an on-the-spot, ambient display of how much water is actually being used. It sits near the sink, where LED feedback subtly reminds users to conserve, and the accompanying app provides a more complete report of consumption.

To read more about the project, check out Fast Company Co.Exist’s coverage and a more detailed explanation and video in the Smart Thinking section of Smart’s website.

And stay tuned for a behind-the-scenes report from the folks at the Lab who worked on the Water Watcher’s development!

Update! We are excited to announce that Water Watcher has been shortlisted as a finalist in the 2015 Interaction Awards in the “Engaging” category!


Joshua Walton and James Tichenor are a creative technology duo currently working at Microsoft. They are well known for their pioneering work creating immersive interactive architecture at the LAB at the Rockwell Group, where they developed the open-source tool Spacebrew, a switchboard for choreographing physical spaces and objects through digital behaviors. We sat down with Josh and James to talk about the role of prototyping and the importance of thinking through making.

This is the second in our series of interviews from the Sketching in Hardware 2013 conference held at Xerox PARC. The first interview with Bill Verplank took place during the same conference.

 

You’re known for being at the bleeding edge of digital design. How did you guys get started?

Josh: We have different backgrounds. My undergrad degree was in interaction design, a very early program in IxD, and then I went to graduate school at Cranbrook Academy of Art for design. One of the things I really valued at Cranbrook was learning through making and the importance of exploring materials. Most of those materials were digital, and I’ve always been interested in the relationship between software and design. One thing James and I have in common is that we like to try things out firsthand in order to gain intuition for them. It’s really different to try something, reflect on what was working and what wasn’t, and build up a vocabulary.

James: The thing about building up a vocabulary is so key to what drives learning about a new technology for me. I’m really interested in things that don’t exist yet. If a thing doesn’t exist, but you care about how it comes into the world, the right descriptive languages often don’t exist to describe ways to do it other than to craft it. And those variables that are going to make it good or bad, if you really care about them then you have to figure out a way to work in those media, and that’s what we did at the Rockwell Group Lab. We made so many prototypes, because if you just left it up to the A/V contractor, you wouldn’t be able to really specify the details of what brings an interactive space to life. Or you need to build enough prototypes for that to become the spec. You need to work in it to figure out what is going to make or break that thing for you even if you’re not doing all the production yourself. That’s what really drives it.

I did my undergraduate schooling in Architecture and then studied design computation at MIT. I only went there because I felt like I could only imagine crazy scenarios involving technology–but who cared what I thought. I had to start building enough to have stronger vocabularies and tools, more than I had to be the guy doing all of it.

Josh: Part of that is working on the real problems, too. It’s very easy in design to solve all the problems that aren’t really the ones inhibiting those things from being in the world. That’s why you see all the companies work on legislation because they realize that you can’t make huge shifts without shifts in the government. I guess at a smaller scale you’ll realize that the manufacturing isn’t what you thought it would be, and you realize that it’s going to be totally different and you solved the wrong problems, or problems that aren’t the ones that are really stopping you from creating something.

James: In any large project, or project with a lot of unknowns and crazy variables, like an interactive space or a building that’s not done yet, a lot of the process is trying to figure out where the points are that you will have the most leverage, because you’re not going to be able to do all of it. And what can you do at those inflection points to specify that it lands where you want it to land. Where are those levers? And that’s a lot of what you’re trying to do, and making and building things helps make that a lot more crisp and well defined. We always talk about that great paper, “What Do Prototypes Prototype” that talks about how prototypes can’t do everything. Your time is limited and your time working on a project is limited, so you want to make sure that what you make and control is the part that’s going to transform the experience of it the most.

The word design means so many things to so many people, and I’m really interested in this idea of interaction design being about a user’s real experience. That you’re designing something almost in their mind, you know what I mean?

 

I always think about design as decision-making and the prototypes are a way of asking very specific questions. That’s what I’m always trying to work with young people to help them define the questions they are trying to ask through their prototypes.

Josh: I really love that because you see many people prototype and not really have a point of view on what they are trying to learn from it. It’s valuable as a sketch, but prototypes do this other thing which is help you gain comfort around different areas of a project.

James: Sometimes you sketch as communication with other people, sometimes with yourself, and sometimes it’s something in between. You want a prototype to surprise you, as opposed to a sketch, which is a communication tool. Sometimes you make prototypes that are purely for presentation, but you should make those only when you need them, and figure out when you’re doing which.

 

I’d love to talk about Spacebrew. What was the idea behind the development of Spacebrew?

Josh: We built a lot of different systems in the past to run installations and there were many common themes in those. One of the things that we noticed is that they require a lot of technical expertise. We started off making something that would be good for workshops, so it had to be really fast. We think of it first as a prototyping tool, but then we’ve seen people use it in more permanent installations. That’s a similar trajectory to Processing. The people who developed Processing never thought it was going to be used in a project that went somewhere [beyond a user’s experiments], but when it wound up being used in more permanent, larger scale projects, they thought, “Well, that’s cool”.

James: I think there was a lot of inspiration from Arduino in that. When Arduino first came out, many people who worked with microcontrollers didn’t quite understand it because they thought, “I have a PIC which is cheaper, more robust, more precise, and better in terms of all technical specifications. Why use this?” But this other thing, the Arduino, just made a few things easier and faster when you are first thinking about a project. But then, surprisingly, people built all these tools to make the Arduino work faster, work better, and actually meet those technical needs. People were able to work around the Arduino. It is easier somehow to make something easy and simple fit other people rather than make something huge and complicated boiled down to a single group. It’s easier to build the bottom of the pyramid than the top.

Josh: We’ve always had this idea about routing being important, but in past systems, you never spent your time routing because it’s too hard. You’d have to write these configuration files and all this stuff, so the core part of Spacebrew is: Let’s make routing from one project to another project as simple as we could make it. One thing that surprised me is that people use it for prototyping even just for the components of their own projects in a single device; they’ll use it to separate the inputs and outputs so that they can split the work up differently.
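The real Spacebrew is a WebSocket server speaking a small JSON protocol, but the routing idea Josh describes can be sketched in a few lines. This toy (in Python, with names of our own invention — it is not Spacebrew’s actual API) simply maps publisher names to subscriber callbacks, the way Spacebrew’s admin interface patches one project’s output into another’s input:

```python
# Toy switchboard: all names here are ours, purely to illustrate the
# routing idea; this is NOT Spacebrew's actual API or protocol.
class Switchboard:
    def __init__(self):
        # publisher name -> list of subscriber callbacks
        self.routes = {}

    def route(self, publisher, subscriber):
        """Patch a publisher's output into a subscriber, like a cable."""
        self.routes.setdefault(publisher, []).append(subscriber)

    def publish(self, publisher, value):
        """Deliver a published value to every subscriber routed to it."""
        for deliver in self.routes.get(publisher, []):
            deliver(value)

# Wire a "button" in one sketch to a "light" in another.
board = Switchboard()
light_log = []
board.route("button", light_log.append)
board.publish("button", True)   # the light's callback receives True
```

The point of the pattern is that neither side knows about the other: the button only publishes, the light only subscribes, and the routing lives in one central, easily rewired place.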

 

Some of my students were using it that way, and I wondered if it was overkill, because the beauty of the switchboard is that it connects multiple, discrete devices to one another. On the other hand, if they need a project with, say, a visual output in one place and physical inputs in another, it’s nice to be able to point them to something that I know will work.

James: We talked about how to build more ecosystems around that, or build more sharing tools around that. Because it gets really nice when you get these little choreographies made of components of hardware and software that speak to Spacebrew as a publisher or a subscriber [Spacebrew’s structure for inputs and outputs]. You can say, “I know that works, over there, and I know that other thing works over here.” Then you just focus on putting them together or you focus on making a new piece to introduce into the system.

Josh: For Spacebrew, we’ve always held this fundamental idea of “Play nice with others,” because a lot of systems like this try to do too many things. For example, they might try to copy files from computer to computer, which is good, except that there are already a lot of great tools for that. Instead of Spacebrew trying to play that role, it just plays well with those tools. So that’s why people can use it in installations.

 

I can see the value with a work in progress of having just one thing that’s experimental and having everything else be standard things. If I think about Spacebrew, I see that it’s helpful to use a tried and true example, like your Javascript data visualization script, as an established, working component. Then experiment with inputs until you have something working and feel like you’re ready to build something new on both sides of it.

Josh: Earlier in my career I built tools all the time, and I got skeptical and upset with myself over doing that if I didn’t use them. I found that if I built a tool and didn’t use it, then I probably shouldn’t have built it.

James: What’s nice is that the JavaScript examples of, say, a graph are the kinds of displays you might use as a debug tool, but because they are not so tightly integrated into the system, they’ve been the kind of thing people have modified and improved upon. There will be a cycle where you see that someone else made a better one, and you think to yourself, “Oh, well, I’ll just use this one now.” It feels like a better debugging tool for the system, but it’s not part of it. It only feels part of it in that the website consumes it and it’s in the right spot.

Josh: The other big thing is that you wouldn’t just make a graph. A lot of times you’ll see someone using Arduino with Processing who would like to graph three data points, but they just don’t have time. If it’s already there, you’ll just use it.

 

There are starting to be a lot of other systems for Internet of Things-style prototyping. Does it worry you that there are so many tools out there now?

Josh: We try to play nice with those tools, but the biggest thing is that Spacebrew is kind of a service, but it’s also a methodology, so we are really okay with people completely replicating the methodology. For instance, I know of one effort where someone is rewriting the server in another language and that’s okay. I actually think that it’s going to be a while before people should be concerned about those issues because it’s in its infancy. I always make the analogy of the Altair computer, the one with just the switches in the front… it’s like trying to so fiercely protect personal computing based on that computer. That wasn’t it. No one thought that the Altair would be the end of personal computing. It took quite a few iterations before anyone had any real intuition or sense of what that should be. I feel like we are in the same phase, but people are pretending that they have it more figured out than they do.

James: I think it’s also because they saw personal computers get adopted, but forgot about the parts where they didn’t make sense and they were early. So maybe some things don’t need to scale to everyone. I think that methodology is so important. Right now we are trying to figure out what parts you need to know to make things talk to each other. We’re trying to find out a good model that works for that.

Josh: I think it’s actually back to a pretty basic design thing about prototyping as a tool for gaining intuition about the technology. I don’t think people have really good instincts for the Internet of Things right now, so the whole idea is let’s say we use this to develop a good sense around what kind of things work and what things don’t. It doesn’t mean that Spacebrew is the tool to use forever. It means that we can take the intuition we glean from the Spacebrew experiments into the next thing we would do.

 

What’s the most exciting thing that’s been done with Spacebrew (either your projects or other people’s)?

Josh: My favorite things are the jam sessions themselves–live demos and workshops where groups of people use Spacebrew together and come up with ideas and experiments to try out on the spot–because they don’t always work, and there is always difficulty in any kind of live event like that. It happened after the talk today at the Sketching in Hardware Summit, when people here started downloading Spacebrew. Just as we were presenting, there were a bunch of people out in the audience with working examples. That’s the most exciting thing: seeing how quickly people can get up to speed with it. One of my favorite moments was when the whole Python library was written in one jam session. I think it was someone from NYC Resistor. He just came in, understood the methodology right away, and just made it. Then afterwards he kept updating it, too.

James: I think there are moments where you feel Spacebrew can capture an improvisational spirit, where people ask themselves, “I wonder what it would be like if this connected to that?”

 

Those are my favorite moments, too. I find that I’m connecting a light to my heart rate, but then wondering what it would be like to have a robotic arm responding to it.

Josh: I remember being at hardware hacking events where someone would want to connect their project, or a part of their project, to someone else’s, and we would sit around trying to figure out how to do it, and it was just prohibitive in a short period of time when networking wasn’t the focus. One other piece of intuition we are getting already is that, as designers, we’re going to want to change the way we make objects so that they are created with connected behavior in mind. I already feel this way from the little bit of work we’ve done with Spacebrew. If you take the simple example of the light: what kind of sensing and actuation would that light have, and how can it ride that really fine line of being abstract enough to be reusable and specific enough to be useful?

James: One other thing we are able to do in Spacebrew is make things not so precious as a product–it’s more of a prototyping focus. Maybe all you want in a light switch is a switch that lets you query its state. You don’t need to know all these other things: did someone walk by, how long did someone touch it, and so on. Maybe you don’t need all of that information, because it doesn’t help. A lot of the challenge with Internet of Things objects is that they carry this weight of having to be a real product and hit a bunch of markets all at once. We love that Spacebrew can help people explore the value of Internet of Things behaviors by building them and trying them out in real time.

 



The Sketching in Hardware Summit is taking place this July in Berlin, Germany. Though we’re not able to attend this year, we wanted to share some conversations from past Sketching in Hardware events through a series of interviews conducted last summer.

The first is with the legendary Bill Verplank. Bill is well known in interaction design circles for his pioneering work on the design of the original graphical user interface and mouse, research that has influenced the devices we all use today. Below is a transcript of our amazing conversation, which included a description of his groundbreaking work on the Xerox Star project, the importance of real physical interactions, why forest-fire-fighting robots are bad, and how to build Italian hill towns in outer space.

  ______________________________________________________________

You’ve had a rich career that’s inspired many students and professional designers around the globe. Was there a favorite phase, stage, or project in your career?

I think the project that changed my life was Xerox Star. It was already going when I joined, so I don’t have ownership of it, except that I was engaged in helping to decide whether there should be one, two, or three buttons on the mouse, what the layout of the keyboard should be, and what the refresh and type of the phosphor should be so that you get a more precise display. So for three years we did a series of important tests that engaged me in debates around whether modelessness is more important than consistency. Is visibility more important than modelessness? If you can’t see that you’re in a mode, it’s a hidden mode. That’s the worst kind, you see. It’s a double negative, but I think we agreed that visibility was more important than modelessness. Larry Tesler argued, before he left to go to Apple, that modelessness was so important. That’s why there is only one button on the Apple mouse, because Tesler decided that. He and I are contemporaries from Stanford. We were undergraduates together.

I think that was very formative because it was my first real commercial job. I had been an academic until then, and I realized that I knew some things that were useful to industry. We were breaking new ground and doing testing that was based on some mathematical models. There was a history of doing mathematical models at Xerox PARC, where Stu Card was proving that the mouse was better than the joystick, for example, and he had a mathematical model, Fitts’ law. So he would say this compares to human performance: look at the mouse, it’s a Fitts’ law device. If you use a pencil you can get Fitts’ law performance, and you don’t get quite as fast a performance with a mouse as you do with a pencil, but the trade-off is the same; it’s a logarithmic relationship between difficulty and time, a nice straight line. So there was this academic side that Xerox PARC established, and all the engineers were scientists who really wanted proof that their design was the best, and we had big arguments, because we did the tests and some of them turned their backs on what we showed. We said, “Watch this woman,” who was a secretary trying to explain how you select things. She said, “Well, I click down here and then I move the mouse with the button down and I get to the end of the line and then I go back to the beginning of the next line…” You see, her conceptual model was a highlighter, so she was thinking about highlighting the text; we even used the word “highlight.” She knew what a highlighter was, but had never seen anything highlight on the screen, and if you carefully draw a line across a bunch of text then it looks like you’re drawing a yellow highlight. In fact, it wasn’t yellow, it was inverted; all we had were black and white pixels. And then she’d come down to the next line and she’d go back to the beginning of the line and she’d undo what she’d done before, but the cursor would jump down, and she was going from line to line. She didn’t know that you could go straight down without having to go from line to line. It was clear what her conceptual model was just by watching what she was doing. And I remember one of the designers (we had designers who would come up with these ideas) kind of turned his back on the video and wanted to engage me in conversation, and I said, “Look, you’ll see how she thinks it works. Just listen. Watch what she does.” So our job was to make them pay attention to the people we tried things out on.
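The “logarithmic relationship between difficulty and time” Verplank mentions is the standard Fitts’ law formulation. As a quick illustration (the constants a and b below are made up for the example, not Xerox’s measured values):

```python
import math

def fitts_mt(a, b, distance, width):
    """Movement time per Fitts' law: MT = a + b * log2(2D / W).
    a and b are empirically fitted constants for a given device;
    the log term is the "index of difficulty" in bits."""
    return a + b * math.log2(2 * distance / width)

# With a = 0.1 s and b = 0.2 s/bit, a target 8 units away and 2 wide:
# log2(2*8/2) = 3 bits, so MT = 0.1 + 0.2*3 = 0.7 s.
print(fitts_mt(0.1, 0.2, 8, 2))  # 0.7
```

This is why the relationship plots as “a nice straight line”: doubling the distance (or halving the width) adds one bit of difficulty, costing exactly b more seconds, regardless of the absolute scale.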

We were in L.A. and they were in Palo Alto. They thought they were smarter than we were. They had the ideas, but we had the proof from the user advocates (we called them users). We did three years of really thorough tests, and we divided responsibilities: Terry Roberts did the selection-scheme tests. We had a psychologist running the group. He was a PhD, and she was a PhD from Stanford at PARC. And I was a PhD from the MIT Man-Machine Systems Lab and had all this experience in information theory models and decision theory models and control theory models of human performance. And we sort of made it up on the fly as to what kinds of tests we would do. I did a whole series of tests on what the icons should look like. I had four designers. It was not a scientific test; we were trying to discover principles, but we wanted to compare them in some rigorous ways, so we had the four designers complete tests. It was one of the first things we published, in ’83, two years after the Xerox Star came out. It was the foundations of the ACM SIGCHI conference. So that was my career at Xerox PARC for 8 years (’78–’86), including 3 years of testing. For the last 4 years I continued to refine the design. I was much more of a designer, and we were looking at tablets and all other kinds of devices.

Bill Moggridge approached me. He had taken on the contract to do industrial design for the follow-up to the Xerox Star, so he and Robin Chu, his principal and the best designer he had, did a beautiful redesign of this really conventional beige box. It was a dramatic color, not black and white, not red and yellow, but an architectural rendering of a column of an office building, if you looked at it that way. And then he designed the keyboard and a monitor display (I think they were separate), but he then sold Xerox on a strategic vision-of-the-future exercise, and that’s when he hired me. He started that before he hired me, and I remember watching it go on and realizing that they didn’t know what they were talking about. And it was run by the head of industrial design for Xerox, who was famous for these future studies; every year he would produce these visions of the next five years of Xerox products in mockup. So they could go into that room and say, this is the big printer that’s going to be in every copy center, and this is the personal copier that’s going to be in offices. I think they hadn’t quite yet projected copiers that could fit on tabletops, but printers at that time were big, huge boxes.

Anyway, that was my first exposure to industry, my first exposure to doing human factors testing, and in a sense defining a new field which was centered around human factors studies of human computer interfaces.

When I went to work for Moggridge, I knew we weren’t just going to design computers, but all kinds of things that had interfaces, so we called it “Interaction Design.” I think that’s a good term. Unfortunately, most of the people doing that kind of work are graphic designers, and it’s mostly graphic design that dominates interfaces now: how do you design web pages?

 

  ______________________________________________________________

 

So much of business today is focused on screen-based interfaces, and people like myself and the other folks at the Sketching in Hardware conference are passionate about the intersection of the physical and the digital. Do you think the business of physical interaction design is keeping up with screen-based design?

There are a lot of people making careers of doing interaction design. In a talk I did a couple of years ago for IxDA, I started with a film that my mother took when I was eight, and in it I’m trying to saw. And my dad comes along behind me and he says, “Don’t do it like this,” and he took my hand, and over the back of my shoulder he said, “Do it this way”: a beautiful straight stroke. Which is hard. It’s easier to do it the way your arm wants to do it. And then he said, well, before you saw, you need to mark it, and he pulls out a big metal square and marks the cut, and puts the pencil in his pocket after marking the cut, just like I do. It’s actually how I work with my saw to take my disk drives apart. I even mark it with a straightedge and a pencil. It was a wooden saw, a small one he had for me; I still have that one, too. And I still had the bench until about last year, when it fell apart. It was 60 years old! We used it all the time. I have new blades on the hacksaw. We didn’t use the little saw enough. And I learned to sharpen saw blades. I learned a lot from my dad, and I think that interest in the craft of doing things with your hands was very important to me, so what I talked about was craft. What I said to the graphic designers was that you not only bring art and design, you bring craft. And I had a nice quote from a calligrapher named Hella Basu; it was from a greeting card made in Asheville, NC. In all object making there are three things that are important. One is art, which is about the meaning of something. Not whether or not it’s artistic, but what does it mean. And then there’s design, which is about the purpose of the thing: to what human purpose is it built, why did you make it. So art and design go together. There’s the meaning and the purpose, for example to impress someone, or to hold in your hand. And finally there’s the third thing, which is craft, and that’s about the quality of the production of it. Whether you made it by hand or by machine, there’s craft in every kind of object-making. Art and design are part of it, but then there’s the doing of it, and I think that’s what the people in the audience responded to: that I’m a mechanical craftsman and I appreciate the mechanical quality of things. That’s why I was attracted to tangible user interfaces and still am.

So that was important to me based on the way I was raised, as the son of a craftsman… who went to Stanford and became a manager. He was a carpenter in his dad’s business. They were building homes back in the ’40s and ’50s.

  ______________________________________________________________

 

I really enjoyed the Plank project that you showed. [Plank is an ongoing research project and workshop activity around haptic feedback.] It’s an interesting example of exploring the nuances of physical interaction design beyond the off-the-shelf buttons and dials that have dominated design by necessity up until recently. Those nuances however can be hard to communicate. What you’ve built might look like an ordinary joystick in a video. Do you have any advice for today’s designers regarding how they can explore those nuances?

I think you need dramatic demonstrations. You need the before and after: turn the force off and show that you can’t control it the way you can with the force feedback on. If there’s something you can hear or see, some kind of line you can draw with it, some kind of start-stop. You need these dramatic demonstrations to show that there’s a rhythm you can get because you’re hitting something. It must be really frustrating for a conductor. He has feedback because he can hear the music, but he has to plan way ahead. He’ll say, okay, we’re not ready to start yet, but just watch me. There’s a lot of anticipation, and he moves his hands in just the right way, and then, boom. At least a quarter second later you begin to hear the music coming out of the performers. And if they’re good, they know what the conductor meant by his gesture, but then he’s kind of following the orchestra rather than leading it, once he’s given them the downbeat… I’m very impressed by conductors, because they have a horrible instrument to play, with such terrible feedback! And there are a lot of tricks to what they do. If you ever watch Seiji Ozawa, he’s a real dancer, dancing around the orchestra, conveying with his whole body the story he’s telling. Others are technicians, getting the exact tempo and amplitude. So I’m really awed by conductors. And I wonder why it is that when you get up to about ten musicians, there are very few times in a jazz orchestra when someone needs to stand up to say go. Or you just point over there, and they know that when they get to the twelfth bar it’s time for the horn solo, so the horn soloist might stand up, and that leading goes from one instrument to another. I’m quite interested in how jazz orchestras work and how they listen to each other and riff off one another. My son is a jazz musician.

There’s a group here called the Stanford Laptop Orchestra, SLOrk. Princeton was the first place with a laptop orchestra, called PLOrk, and Ge Wang, whose office is right here, came from Princeton, where he studied with Perry Cook, who came from Stanford, so it goes back and forth. And he’s now famous not just for the laptop orchestra, which he brought from Princeton, but for ChucK, a language he wrote for his PhD. He went from the laptop orchestra to the iPhone orchestra and now the iPad orchestra, so there’s been a nice succession. And he’s devised various ways to make speakers so you can hear the sound come from each of the musicians, with a dome speaker. It’s quite a development. He has started a company called Smule. With his Ocarina instrument, you blow into the microphone of the iPhone and it converts that into loudness. For the last three years he’s been hiring the best students and teaching them here. He’s built quite a business. Perry Cook retired from Princeton, is on the board at Smule, and lives in Oregon.

  ______________________________________________________________

 

So my last question: You’ve been very involved in visions of the future throughout your career. In thinking about those visions, have things panned out the way you envisioned?

In 1975 I was finishing up my first four years teaching at Stanford. I hadn’t quite finished my PhD, so I was going back to MIT, but that summer I ran a study for the design division at Stanford and NASA. For five years they had done a summer study for the American Society of Engineering Education, and it was about how to teach systems design. The way we taught design here at Stanford was by doing design, so the professors would come and they would all participate in a group design exercise. Three years before I did my study, they had done a study about how to detect extra-terrestrial intelligence, and they designed an antenna array called Cyclops, and that array was eventually built. It’s out in central California where they don’t have too much smog. It’s run by the SETI project. That was a success, and I ran two summer studies. The first one was on how to fight forest fires in California, and that ended up as a fight between the roboticists, who wanted to build big tele-operated robots that would fight the fires and put them down with big cannons of water and flying robots–they were very technological–and the systems analysts who said hey, we don’t need big machines to fight forest fires, because we can prevent forest fires… we just need to let them burn, we need to do forest management. They did a whole cost-benefit analysis and determined that it wasn’t worthwhile to spend money on firefighting, that instead we should spend money on fire prevention. That doesn’t mean putting them out. You’ll hear stories this year about fires getting out of hand because for 15 years they haven’t been cutting down the undergrowth, and the forests were 20 years old since they’d had the last fire, and, boom!, the right conditions set them off. So that was very interesting because I had set up a whole speaker series and got all kinds of people in. Even the students who were here were involved.
We set it up so that two or three people were on call, so that if there was a big forest fire happening, we’d get in a car and drive with the US Forest Service or the CDF (California Division of Forestry), the principal firefighters here in California. That was a really exciting project because I got to know about something that I didn’t know before, and I got experience running a summer study. But we ended with this debate that I couldn’t solve, so we issued two volumes: Volume 2 was about fighting fires, and Volume 1 was about how to prevent them, and a cost-benefit analysis. And they didn’t see eye-to-eye, but managing that was part of the learning for me.

The big summer study was space colonies. It’s a long story! At the end of the summer we proposed what became known as the Stanford Torus. It came out of the 1975 Space Colony Study at NASA Ames Research Center. It was a beautiful big book, because NASA put a lot of money into producing beautiful illustrations, and we had two of the professors editing the final book, so their names are on it and my name is on it in the back. There was a Princeton researcher who was publishing popular science articles saying that in the future we were going to live in great cylinders that were 16 miles long and two miles in diameter, so they would have about a six-mile circumference. That’s a big cylinder, if you can imagine it. So up there in the sky are clouds, and atmosphere, and way off in the distance you could see people walking upside down on the far wall. They were amazing illustrations. And one of the questions was how fast it would rotate. What rotation rate at a one-mile radius do you need to get one gravity? And it’s really slow, on the order of one rotation per minute, but it’s enough to stick you to the ground. And he had these visions of how wonderful it would be to live that way, and he would arrange the cylinders in pairs, because he wanted them to be aiming at the sun so that they would collect sunlight, so they would have to be rotating around the sun. And cylinders want to continue rotating in the same direction they started, and you need a big torque to get them rotating in the opposite direction; but if you have two cylinders counter-rotating and you connect them together, they will exert very little force on one another and have zero net rotation, so you could set the pair rotating at one revolution per year around the sun. Big mathematics!
But we decided that our first colonies would be for 10,000 people, and we calculated how many square feet we needed, and he agreed that the density of an Italian hill town would be enough, so we calculated around that, and we designed an Italian hill town that didn’t rotate people at more than 1 RPM. It’s published in a report and they’ve continued to study it. Every summer they run a Stanford torus design competition.
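For readers who want to check the arithmetic, the 1 RPM comfort limit is what sets the torus’s size: spin gravity comes from centripetal acceleration, a = ω²r, so a full g at 1 RPM needs a radius of roughly 900 meters. A quick sketch:

```python
import math

g = 9.81                 # target pseudo-gravity, m/s^2
rpm = 1.0                # the 1 RPM comfort limit mentioned above
omega = rpm * 2 * math.pi / 60.0   # angular velocity in rad/s

# a = omega^2 * r  =>  r = g / omega^2
radius = g / omega**2
print(round(radius))     # ~895 m, close to the Stanford Torus's actual size
```

The same formula run the other way shows why the two-mile-diameter cylinder spins even more slowly: with a larger radius, a lower rotation rate produces the same acceleration.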

  ______________________________________________________________

 

Bill Verplank is currently a visiting scholar at Stanford’s CCRMA and is involved with Stanford’s d.school. You can learn more about his thinking in his interview in Bill Moggridge’s Designing Interactions project online.

 

The robots are here! Well, one of them. This week, Cynthia Breazeal, MIT’s pioneer researcher in social robotics, announced a project that we’ve been waiting for: Jibo, your family’s friendly robot companion. It’s essentially a voice-enabled expressive countertop appliance with a colorful screen for a face and a jolly, rotating torso that allows it to turn towards a person with socially appropriate gestures. In other words, Jibo can pay attention to you and carry on a conversation like it really cares.

The video above does a great job of telling the story of how Jibo might fit into your life, serving as an internet-enabled coach, personal assistant, video chatting system, entertainment hub, and all-around connected companion.

We’ve been following the field of social robotics for some time, and have noted women leaders such as Dr. Breazeal in this Fast Company article, “How Women Are Leading The Effort To Make Robots More Humane”. While most of the groundbreaking work so far has been confined to university laboratories, Jibo represents a bold move into the consumer market. Currently in a crowdfunding effort on IndieGoGo, the product is scheduled to ship in December 2015.

As product designers we know the challenges that go into making a product’s interactions truly socially compelling, as outlined in this piece about Jibo in MIT Technology Review. We also know that it’s tricky to build a general-purpose connected device that can adequately fulfill a user’s needs better than all the other competing internet-connected appliances in the home. Nonetheless we’re very excited about Jibo and look forward to following its progress as it develops.

See the New York Times piece on Jibo and social robotics here.

spark

At the Lab we’re always on the lookout for new tools to help us quickly and easily prototype connected devices for “Internet of Things” experiences. One little board that’s caught our eye is the Spark Core board. Here’s how the creators describe it:

“A tiny Wi-Fi development board that makes it easy to create internet-connected hardware. The Core is all you need to get started; power it over USB and in minutes you’ll be controlling LEDs, switches and motors and collecting data from sensors over the internet!”

We got our hands on one recently and were impressed by how easy it was to power up the board and control some LEDs via a web interface, as well as set up a sensor (we used a light-sensitive photocell) that can broadcast its status to the web. Using their cleanly designed Tinker app, we could turn functionality on or off with a tap on a touchscreen.

Perhaps most interesting is Spark Core’s strategy of allowing three different entry points for programming the hardware. A complete beginner who wants an easy introduction can use the visual Tinker app, whereas a more advanced user might prefer the browser-based Spark IDE to write custom code. And since it’s an open-source project, an experienced hardware tinkerer can explore any aspect of the board and its software.
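Under the hood, the Tinker app is just calling Spark’s Cloud REST API, so the same pins can be driven from a few lines of Python. Here’s a sketch: the device ID and access token are placeholders you’d substitute from your own Spark account, the endpoint and `params` argument follow Spark’s 2014-era Cloud API docs, and the request is built but deliberately not sent.

```python
import urllib.parse
import urllib.request

# Placeholder credentials -- substitute the values from your Spark account.
DEVICE_ID = "0123456789abcdef"
ACCESS_TOKEN = "secret"

def tinker_request(function, argument):
    """Build (but don't send) a POST to a Tinker cloud function,
    e.g. tinker_request('digitalwrite', 'D7,HIGH') to light the
    Core's onboard LED on pin D7."""
    url = "https://api.spark.io/v1/devices/%s/%s" % (DEVICE_ID, function)
    body = urllib.parse.urlencode(
        {"access_token": ACCESS_TOKEN, "params": argument}
    ).encode()
    # Passing data makes urllib issue a POST, as the API expects.
    return urllib.request.Request(url, data=body)

req = tinker_request("digitalwrite", "D7,HIGH")
print(req.full_url)
```

Sending the request with `urllib.request.urlopen(req)` (on a real device ID and token) is what toggles the pin; the Tinker app is essentially a touch-friendly front end for these same calls.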

tinker

Screen Shot 2014-07-03 at 6.07.47 PM

We’re looking forward to doing some more tinkering of our own.

Check out Spark Core here: http://www.spark.io/

Screen Shot 2014-06-25 at 3.31.26 PM

Like us, Google sees a future where technology plays an increasingly important role in everyone’s lives. In order for new products, services and software to be inclusive, we need women to be active participants in the creation of this new world. At the Lab and in Smart Design’s Femme Den, we’ve been encouraged by seeing more female leaders in new fields such as social robotics, but the facts unfortunately still point to women being incredibly underrepresented in technology fields in general.

When Google realized that only 17% of its tech employees are women, it set out to create initiatives to try to increase that number. One such effort is the recently launched Made W/ Code project, a combination of community resources and online projects. This inspirational video paints a picture of the larger vision that Google has invested $50 million into:

To build the community, Google has video portraits highlighting mentors, such as Limor Fried, creator of the Adafruit online catalog and learning resource, and Miral Kotb, founder of the iLuminate dance troupe. There are similar portraits of “makers”, such as young Tesca Fitzgerald at Georgia Tech, who uses code to experiment with social robots.

The projects are interactive and encourage kids to code using a browser-based tool for crafting many kinds of experiences, from animations and avatars to 3D printed objects and musical compositions. Though these projects seem to be in their early phases, they showcase a solid foundation of how a visual coding language (theirs is called “Blockly”) can be used as the framework for crafting algorithms and establishing behavioral patterns. Below is an image of the 3D printed project created in collaboration with the Shapeways service. Using only what you see in the browser, you can adjust the parameters of a bracelet and then have it 3D printed and shipped to you.
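The bracelet is a nice, concrete example of parametric design: a few sliders feed an algorithm that regenerates the geometry every time a value changes. A rough Python sketch of the idea (the parameter names and values here are our own invention, not Google’s):

```python
import math

def bracelet_outline(radius=30.0, amplitude=3.0, waves=8, segments=200):
    """Outline of a wavy bracelet band: a circle whose radius is
    modulated by a sine wave. Dimensions are in millimeters."""
    points = []
    for i in range(segments):
        theta = 2 * math.pi * i / segments
        r = radius + amplitude * math.sin(waves * theta)
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

# Moving a "slider" is just calling the function with a new value:
pts = bracelet_outline(amplitude=3.0)
```

In a tool like Made W/ Code the same loop sits behind the blocks, and the resulting outline gets extruded into a solid for printing; the point is that the user edits parameters, not geometry.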

Screen Shot 2014-06-25 at 4.05.02 PM

 

 

We’re excited to follow Made W/ Code and help spread the inspiration to girls everywhere.

To check out Made W/ Code visit http://www.madewithcode.com

 

egg o matic

Last week Smart Interaction Lab was on the road in London at the UX London 2014 conference. In addition to giving a talk on the Internet of Things, we ran a three-hour workshop on E-Sketching for Interaction Prototyping. In the workshop, we introduced participants to the basics of Arduino, and then quickly moved into demonstrations of a range of sensors. We used the scenario of an eldercare patient whose loved ones would like to be informed of changes in behavior, such as when medications haven’t been taken on time–much like the Lively system or the Vitality GlowCaps that we use in our Internet of Things examples. With some quick foam core and cardboard mockups, we showed how tilt switches, light sensors, accelerometers and temperature sensors can be used to help monitor medication compliance.

Since UX designers value being able to visualize data, we linked our Arduino demos to Processing, a programming language that allows people to quickly create dynamic on-screen graphics. Once the demonstrations were done, participants worked as teams and it was their turn to get creative with foam core, sensors and code.
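The sensor demos mostly boiled down to simple threshold logic. Here’s a sketch of the medication-compliance decision, written in Python rather than Arduino C so it’s easy to run anywhere; the threshold value is invented for illustration (in the workshop it came from live analog reads of a photocell inside a cardboard pill-bottle mockup).

```python
# ADC reading above which we assume light reached the sensor,
# i.e. the bottle cap came off. Purely illustrative.
LIGHT_THRESHOLD = 600

def cap_opened(light_readings, threshold=LIGHT_THRESHOLD):
    """True if any sample suggests the cap came off the bottle."""
    return any(reading > threshold for reading in light_readings)

def needs_reminder(light_readings):
    """Alert a loved one when the bottle never appears to have been opened."""
    return not cap_opened(light_readings)

print(needs_reminder([120, 130, 125]))   # True: the bottle stayed shut
print(needs_reminder([120, 880, 125]))   # False: the cap came off at some point
```

On the Arduino side the same logic runs in `loop()` against `analogRead()`, with Processing subscribing over serial to plot the readings and flash the alert.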

The teams worked remarkably well together, and the energy in the room was awesome. Picking from a range of suggested subject areas, teams created working demos of:

- Temperature monitoring for accurate egg boiling
- An anti-snoring system
- Tracking seat usage on public transportation
- A warning system to let you know when you’ve left the refrigerator door open
- …and an applause-o-meter to visualize how much participants enjoyed the workshop

photo 1

 

photo 2

photo 1 photo 2 photo 5

Special thanks to Junior Castro from the Interaction Lab for joining us in London.

wwdc14-home-branding

As always, excitement and speculation were running high around Apple’s annual developers conference, WWDC. It was no different here at Smart, where we all gathered around screens to watch what was being announced. Here were some of our initial thoughts:

Apple is fighting the perception that they’re losing the Android war, and this keynote, especially the beginning, was a response to that.

Nearly every major new OS X or iOS feature seemed to address integration of some sort in an exciting way: particularly the iCloud enhancements, HealthKit, HomeKit, the new Spotlight, and the family sharing features. We can’t wait to see if it lives up to the promise. It’s exciting because this integration fits better into our lives, potentially helps with the disjointed systems that have evolved, and connects beyond the personal user.

It was, however, strange to not mention the Beats acquisition at all, except for a phone call to Dr. Dre. Punishment for the early leak?

Yosemite
Apple is really good at iterating on its own stuff, and Yosemite is just another level of refinement.

Easily the best set of features on Yosemite and iOS 8 is Continuity. So many tasks are now cross-platform that enabling that makes perfect sense. Like Tim Cook said, this is the kind of thing only Apple can really pull off, because it owns the whole ecosystem. AirDrop between devices will get rid of all the ridiculous hacks (emailing or texting yourself) to move files between devices. Yes, it’s amazing this is a promoted feature in 2014.

iCloud Drive
iCloud feels like it’s still playing catch-up to Dropbox and Google Drive. The price is good, but it felt like they needed to take a great leap here to move people away from existing storage solutions. For example, just back up everything, everywhere. All of your devices are backed up, period. Then charge for additional storage. Cloud storage is cheap. It’s not hard to have 100 GB of media that could be in the cloud.

iOS 8
Interactive notifications are something that Android has had forever. Now Apple’s got its version and it’s a great addition. We’re curious how it’ll work on the lock screen when you’re not logged in.

Quicktype
How will this work with swearing? Or sexting? We don’t want any of the guys who were onstage to be predicting sexting.

What’s interesting is that it’ll take into account who you’re talking to and adjust. That’s really interesting predictive technology. This is a way for your phone to say, “Hey, I get you.” It’ll make the phone feel smarter, although it could be tricky with multiple users of the same device.

Since all the data is being stored on the device, what happens when the device gets stolen? Do you have to start over?

HealthKit
Passbook for health apps. Is your insurance company going to integrate with it? Do you want to give an integrated package of your health data to Apple (or anyone outside of your healthcare provider)? It has some promise, but the infrastructure, particularly on the hospital and healthcare side, just isn’t there yet. If it works as well as Passbook (which is to say, not very well), it won’t be very useful. Even though it’s ambitious and potentially noble, another concern is that this’ll be too much data, and that it’ll lead to false diagnoses.

We’re speculating that this is the software precursor to any sort of wearable device from Apple.

Extensibility
This was the biggest change that Apple announced, and potentially the most game-changing.

Apple almost never introduces something we’ve never seen before. They only fine-tune existing things. Extensibility is an example of this, as it’s been in Android for years. App sandboxing has been both a huge selling point and a huge pain point. It’s way more secure against viruses, but it also prevents data and functionality from moving between apps.

It’ll be tricky for developers, and the downside is that it could get chaotic and make iOS feel like Android. We haven’t fully comprehended what this will mean yet.

Siri
Why wasn’t Siri incorporated into Spotlight? Will it be part of Extensibility? The other big (unsaid) revelation is that Siri is always on, always listening. Will it be cued into your particular voice? Or if everyone’s iPhones are on the table, by saying, “Hey Siri,” will you be able to turn them all on at once?

HomeKit
The piece we were the most excited about had some of the scantiest details. The “industry partners” here were also underwhelming thus far. Without a big name appliance partner like GE or Whirlpool, the service feels limited. “Scenes” is an interesting metaphor for a programmed cluster of behaviors though.

Swift
It makes sense that Apple made its own programming language. It’s so Apple. It was a little unclear how it really integrates with Objective-C and C, though.

Swift’s visual Playground reminded us of Bret Victor’s work. It fell a little short during the demo, though, when the presenter couldn’t drag the visuals to change the code.
So while there was no hardware announcement, there were still some meaty additions to the Apple canon… and some accompanying unanswered questions. We can’t wait to get our hands on the new software and try it out.

MAD museum residency

This spring, Smart Interaction Lab’s NYC branch went uptown to the Museum of Arts and Design (MAD) for one week to take part in an intensive designer residency exploring the future of desktop 3D printing. The program, sponsored by the online 3D print-on-demand service Shapeways, featured a different designer every week and was part of a larger exhibition entitled “Out of Hand”, which explores extreme examples of art and design created using digital fabrication techniques. Out of Hand is on display until June 1.

 

Sharing our research

During our week at the museum, lab founder Carla Diana was on hand to chat with museum visitors and share stories and research from her experience developing LEO the Maker Prince, the first 3D printing book for kids. The book comes with a URL where readers can download and print the objects that appear throughout the story, and printed models were available at the museum for people to touch and inspect.

569cdad8b28f11e39605126ba9d895ab_8

 

Playing with New Toys: Printers and Scanners

As a perk, we were invited to experiment with two fun technologies: a Form Labs FORM 1 printer and a full-body Kinect scanning station.

The Formlabs printer was interesting since it uses a stereolithography process (SLA), as opposed to the fused deposition modeling (FDM) process used in popular desktop 3D printers such as the MakerBot. In other words, the FORM 1 works by projecting a laser onto a bath of liquid resin, hardening the spots the laser hits, layer by layer. The more common FDM process relies on a spool of plastic filament fed through a heated nozzle, which deposits molten plastic onto a platform where it hardens as the layers build up. (At Smart, we have professional-grade printers that employ both technologies, but it was intriguing to see a desktop version of the much messier SLA machine.)

In terms of results, the Formlabs prints capture a great deal more detail at a relatively high resolution. And because SLA parts don’t require as bulky a support structure, the machine is also better at building interlocking and articulated parts than FDM machines are. We spent a good deal of time exploring this by building 3D models of chain structures and then printing them on the Formlabs printer.

5785b988ac6311e387de122034e6e329_8

We also took an old pair of eyeglasses and scanned the lenses in order to design and build a new pair of frames, exploiting the detail of the print.

Carla's new frames at MAD

Carla’s new frames built on the FormLabs printer

The scanning station was also quite fun to play with. It consisted of a Kinect camera connected to Skanect software and positioned in front of a motor-driven turntable that a person could stand on. As the turntable rotated, a Shapeways representative moved the Kinect camera up and down to capture the 3D data of the person’s body. We had hoped to play with the scanner a bit more, but it was outrageously popular with museum visitors, who waited on long lines to make scans they could use to order small statuettes of themselves. The number of people who come through the museum is astounding, and visitors have included Woody Allen and David Byrne.

12ddb1aead5911e39da90a8f5c5dae8d_8

 

David Byrne statuette, created from a scan at the MAD Shapeways exhibition 

Capturing the public’s imagination

Throughout the six days, most of our time wasn’t spent with the tools, but rather talking to people. It was fascinating to hear what questions people have about 3D printing and what’s capturing their imagination. While the technology is quite commonplace to professional designers, about 90 percent of the people who came through the residency exhibition said the same thing: “We’ve heard about 3D printers, but had no idea what they are.” People are reading about them in the news, but that’s the extent of their exposure, so they found it fascinating to hold and touch a 3D print and to see the process as it happens. Even the folks who did have some understanding of the printing techniques were very cloudy on how a 3D model is crafted on a computer, so we enjoyed giving them a glimpse of the solid modeling techniques we typically use, as well as sharing tips on getting started with friendlier platforms such as Tinkercad and Autodesk’s 123D suite.

30033690ad5011e3be2f126b6b44507b_8

10-year-old Asher Weintraub, inventor of the Menurkey.

Our favorite visitor to the residency exhibit was 10-year-old Asher Weintraub. We noticed a young boy engrossed in the book and reading intently. When we spoke to his parents, they explained that Asher was the designer of the famous “Menurkey,” a sculptural centerpiece created to celebrate both Hanukkah and Thanksgiving simultaneously and developed using 3D printing. Upwards of 7,000 Menurkeys have been sold, and the young designer was invited to meet President Obama at the White House to share his story of innovation.

We’re thrilled to know that Asher is a fan of LEO.

 

Maker Week in the Bay Area

This week we’ll be sharing our 3D printing fun on the west coast with a LEO the Maker Prince reading and activity day in the San Francisco Smart Studio. We’ll also be at the MakerCon event on May 13-14 and will have several Lab folks floating around the Maker Faire on May 17-18, so if you are in the Bay Area, come find us!