George Roussos
Wednesday, 31 January 2007
He talks about motes, trails, microservers, feral robots, Snout, ZigBee, and other aspects of sensor networks, as they relate to learning.

George Roussos
A new project is Snout, with Proboscis, which picks up from our Robotic Feral Public Authoring project. We'll be putting sensors on carnival costumes, to sense some aspects of the environment, and LEDs to give some feedback.

At Birkbeck I teach e-commerce, online databases, and mobile and ubiquitous computing. My e-commerce course is again overflowing with students. Last year I had 12, and this year it exceeded 35. A sign of the times, I guess. The ubiquitous course covers routing for sensor networks, sensor processing, RFID.

A group of us at Birkbeck have been working on navigation in smart spaces, such as museums, for a while. We began testing this with Bluetooth nodes. And we have run some tests in the London Zoo using mobile phones and GPS receivers to record trails of visitors.

Unfortunately some of our servers crashed and burned over the summer. We had three servers running all the different programs. Students' projects were wiped out. The last server came back online just three weeks ago.

The zoo project is going forward, slowly. One of the servers that went down was for that project. But things have improved considerably since then. And all these things are really by-products of what we really want to do, which is algorithm stuff. We have enough data to start on that now. These types of sensor networks generate huge data sets.

With the Bluetooth boxes, what they do is just scan for any Bluetooth devices passing by, collecting the names and whatever other information is available. There are a lot of people who do Bluetooth scanning; a conference I went to last November was all about that. Apparently now more than 10 percent of people keep Bluetooth turned on, on their phone or laptop. You can count individual devices, how many people walk by, how long they stay, and so on.
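
As an illustration of the kind of analysis this scanning enables, here is a minimal Python sketch - not the group's actual code; the log format and the 300-second visit gap are assumptions - that turns raw (device, timestamp) sightings into per-device dwell times:

```python
from collections import defaultdict

def dwell_times(sightings, gap=300):
    """Group (device_id, timestamp) sightings into visits.

    A new visit starts when a device is unseen for more than `gap`
    seconds.  Returns {device_id: [visit_duration_seconds, ...]}.
    """
    by_device = defaultdict(list)
    for device_id, ts in sorted(sightings, key=lambda s: s[1]):
        by_device[device_id].append(ts)

    visits = defaultdict(list)
    for device_id, times in by_device.items():
        start = prev = times[0]
        for ts in times[1:]:
            if ts - prev > gap:                    # long silence: visit ended
                visits[device_id].append(prev - start)
                start = ts                         # a new visit begins
            prev = ts
        visits[device_id].append(prev - start)     # close the final visit
    return dict(visits)

# Invented example log: phone-a lingers, leaves, and returns much later.
log = [("phone-a", 0), ("phone-a", 60), ("phone-b", 30),
       ("phone-a", 120), ("phone-a", 1000)]
result = dwell_times(log)
```

From such per-visit durations you can read off how long people stay and whether they return, which is exactly the kind of question the scanning data supports.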

Navigation is quite a big thing for us right now; not the data itself. What we are looking for is two things. One is the metric that could identify significant trails. The other thing is not to look at individuals but groups; it's much more interesting to see how groups of people vote with their feet. For that you can measure how much time people spend at a location, how many people, whether they return, their orientation. It's interesting to find out whether you can detect groups of people moving together. So we are modifying our algorithms to define groups that are moving together, from the trails data we have collected - multiple IDs that have appeared at the same location, and have reappeared further down the trail.
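
The group-detection idea - IDs appearing at the same location and reappearing together further down the trail - can be sketched as a co-occurrence count over trail points. This is a toy version with invented IDs and a simple (location, time-slot) discretisation, not the algorithm the group actually runs:

```python
from itertools import combinations

def co_moving_pairs(trails, min_shared=2):
    """Find pairs of device IDs seen together at at least `min_shared`
    distinct (location, time_slot) points.

    `trails` maps device_id -> set of (location, time_slot) tuples.
    """
    together = {}
    for a, b in combinations(sorted(trails), 2):
        shared = trails[a] & trails[b]       # points where both appeared
        if len(shared) >= min_shared:        # repeated co-occurrence
            together[(a, b)] = len(shared)
    return together

# Invented example: id1 and id2 move together, id3 crosses their path once.
trails = {
    "id1": {("gate", 0), ("lions", 1), ("cafe", 2)},
    "id2": {("gate", 0), ("lions", 1)},
    "id3": {("cafe", 2)},
}
pairs = co_moving_pairs(trails)
```

Requiring at least two shared points is what separates a group walking together from two strangers who happen to pass the same spot once.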

What we have built is a query engine - analogous to what Google is for page rankings. What Google can do is answer whatever question very, very quickly, and we do the same for this type of sensor data. Not only keywords, but also impressions and interactions within a space. You can associate whatever metadata you want with certain devices. It can be an identifier within a classification system, a keyword, or whatever. So you can query for specific keywords, and it will pick up all the related data.
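
A toy version of such a query engine - purely illustrative, with an invented class name and data - can be built as an inverted lookup from keywords to devices to sightings:

```python
class TrailIndex:
    """Toy index: attach keyword metadata to device IDs, then retrieve
    every sighting whose device matches a queried keyword."""

    def __init__(self):
        self.keywords = {}   # device_id -> set of keywords
        self.sightings = []  # (device_id, location, timestamp)

    def tag(self, device_id, *words):
        self.keywords.setdefault(device_id, set()).update(words)

    def record(self, device_id, location, timestamp):
        self.sightings.append((device_id, location, timestamp))

    def query(self, word):
        matching = {d for d, ws in self.keywords.items() if word in ws}
        return [s for s in self.sightings if s[0] in matching]

# Invented example: tag devices, record sightings, query by keyword.
idx = TrailIndex()
idx.tag("dev1", "school-group", "year-9")
idx.tag("dev2", "family")
idx.record("dev1", "penguins", 100)
idx.record("dev2", "penguins", 110)
idx.record("dev1", "aquarium", 400)
hits = idx.query("school-group")
```

The metadata is free-form, mirroring the point in the text: the tag can be a classification identifier, a keyword, or whatever, and the query pulls back all related sensor data.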

Now we have a live tool for reconstructing trails. You can view descriptions of any space, and you can upload records, which triggers an animation, like Media Player - you can play, step forward, replay your experience. And at every step it can trigger related information, say from a web site. You can also edit your own images, content, and so on. We tested it at the zoo, but we have yet to use it in a real space.
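
The replay mechanism can be sketched as a generator that steps through records in time order and triggers related information at each step; the `info_for` lookup here is a hypothetical stand-in for fetching content from a web site:

```python
def replay(records, info_for=None):
    """Step through trail records in time order; at each step yield the
    record plus any related information for that location."""
    for location, timestamp in sorted(records, key=lambda r: r[1]):
        extra = info_for(location) if info_for else None
        yield location, timestamp, extra

# Invented example trail and related-content lookup.
trail = [("lions", 120), ("gate", 0), ("cafe", 300)]
notes = {"gate": "Opening hours", "lions": "Feeding at noon"}
steps = list(replay(trail, info_for=notes.get))
```

Because it is a generator, a front end can play it continuously, step forward one record at a time, or restart it - the play/step/replay behaviour described above.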

This type of system can help you remember your experience, to see the specific path you've taken. For us as researchers, we can sit somebody down and replay their experience, and they can talk to us about what was happening.

One thing we are looking at in my ubicomp class is appropriate models for sensor networks. One thing that has been envisioned is millions of microscopic sensors that you could just spray around a place. Well, that's not going to happen any time soon - if for no other reason than that it's hard to coordinate sensor systems like that; getting any sort of data out is next to impossible. And since the sensors are so tiny, they are quite vulnerable. So what you need are three things - a kind of micro-server, external devices, and mobility. You can work with any kind of robots or sensors, but what really makes them work is the algorithm, and that's what we concentrate on.

This links back to trails, because one source of mobility can be people. Think about all these people walking around with mobile phones - they can provide a very strong component of these sensor networks. So predicting how people move around a city is very important, and the work we do with trails is to analyze how people move around, so that we can come up with algorithms that can take advantage of these patterns.

In sensor networks, one of the things we are using is moteiv motes with a daughterboard that is completely free of the need for batteries. You can actually now build a power-scavenging device. It can get energy from light - even artificial light, if it's good enough. And it's a very tiny device - it does very limited things; everything is built in. It just wakes up and checks the sensor, and if it's over a specific threshold, then it will notify you; otherwise it will go back to sleep. For that, you consume very, very little energy, and all of it can be provided by a tiny photovoltaic panel. It has a capacitor in it to store energy. If this daughterboard detects something that is important, it can wake up the mote. You don't need synchronous operation of motes; they can be asleep like 99.999 percent of the time. You can run them for years like that.
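
The wake-check-sleep cycle can be illustrated with a small simulation. The energy figures here are invented for illustration only - they are not measurements of any real mote or daughterboard:

```python
def duty_cycle(readings, threshold, awake_cost=1.0, sleep_cost=0.001):
    """Simulate a threshold-triggered mote: sleep by default, sample,
    and notify only when the reading crosses `threshold`.

    Returns (notifications, energy_used), with energy in arbitrary
    illustrative units.
    """
    notifications = []
    energy = 0.0
    for i, value in enumerate(readings):
        energy += sleep_cost            # baseline cost of a sleep period
        if value > threshold:           # daughterboard wakes the mote
            energy += awake_cost        # radio/CPU cost of notifying
            notifications.append((i, value))
    return notifications, energy

# Invented readings: two exceed the threshold, the rest cost almost nothing.
events, energy = duty_cycle([18, 19, 31, 17, 35], threshold=30)
```

The point the simulation makes is the one in the text: with the expensive path taken only rarely, the mote is asleep almost all the time, and the tiny photovoltaic-plus-capacitor budget can cover it for years.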

Right now we are looking at how to wake them all up for a synchronous mode of operation. The next step would be either to wake everybody up, or to use a microserver at the next tier up. It might be powered by the mains, so power wouldn't be an issue. And it could wake the rest of the system.

The other option would be to make a collective decision within the network. At the bottom layer, you have pure observation, where at its simplest, a sensor could just send you one or zero depending on a threshold. But that's not enough to make a decision, because what you might be interested in is the average temperature across the whole distribution of readings. So the higher-level description is that if the overall temperature is within this range, then I want an action at the system level. You have to translate that lower-level context of observations to the system level, and that's an interesting thing to be able to do.
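
A minimal sketch of that translation step - assuming, for illustration, that nodes report raw temperatures and the system-level rule is a band on the network average:

```python
def system_decision(observations, low, high):
    """Translate node-level temperature observations into a single
    system-level decision: act only if the average across the network
    falls outside the band [low, high]."""
    avg = sum(observations) / len(observations)
    action = "act" if not (low <= avg <= high) else "idle"
    return action, avg
```

A single node sending one or zero cannot express this rule; the decision only exists at the level of the aggregate, which is the translation problem described above.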

There are lots of interesting problems in that. You have to be able to do all the data 'smoothing' at the system level. There are other tiers like the business one, with questions like 'I don't care what comes in and what goes out, I only want to be notified if I don't have enough product to ship.' This is the decision-making level. So you have a coding of different tiers of hierarchy and context.
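
The business-tier rule quoted above might look like this in toy form - the event stream and reorder point are invented for illustration:

```python
def stock_monitor(events, reorder_point):
    """Business-tier rule: ignore individual in/out events, raise a
    notification only when on-hand stock drops below the reorder point.

    Each event is a signed quantity: +received, -shipped.
    """
    stock, alerts = 0, []
    for i, delta in enumerate(events):
        stock += delta
        if stock < reorder_point:
            alerts.append((i, stock))   # notify: not enough to ship
    return stock, alerts
```

The lower tiers see every movement; this tier only surfaces the one fact the business question asks for, which is the hierarchy of context the text describes.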

Another thing we're doing, which is kind of associated, is how to build software radios. Rather than have fixed devices that can only talk one language like Bluetooth, it is now possible to use very low power electronics that can actually change their behaviour with software. You have an antenna, another component that does the analog-to-digital conversion, and then you may have some logic to do the signal processing. The ADC can actually have a very big range - in practical terms, from UHF to the gigahertz range within the same circuitry, with a single spread-spectrum antenna.

Mobile phones generally have Bluetooth on them. But if you use Bluetooth in a sensor network, you're going to run out of battery straight away. So what you need to do is talk ZigBee, which is much lower power, so you can talk for longer. So the sensors, between themselves, can talk ZigBee, but when they see a mobile phone, they can talk Bluetooth. One way to do that is to have two chips on the same mote, and we have actually done that - took a mote and stuck a Bluetooth modem on it. But it means two antennas, more power, more software. So it's nice if you can do all of that with software.
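
The switching policy can be sketched as a simple per-neighbour choice. The power numbers here are rough orders of magnitude chosen for illustration, not measured figures for any particular radio:

```python
ZIGBEE_MW, BLUETOOTH_MW = 30, 100   # illustrative active-power figures

def plan_links(neighbours):
    """Choose a radio per neighbour, following the trade-off in the
    text: ZigBee between motes (lower power), Bluetooth only toward
    phones that cannot speak ZigBee.  Returns the plan and an
    illustrative power budget in mW."""
    plan, budget = [], 0
    for kind in neighbours:
        radio = "bluetooth" if kind == "phone" else "zigbee"
        budget += BLUETOOTH_MW if radio == "bluetooth" else ZIGBEE_MW
        plan.append((kind, radio))
    return plan, budget

# Invented neighbourhood: two motes and one passing phone.
plan, budget = plan_links(["mote", "mote", "phone"])
```

With a software radio, this choice becomes a per-link decision made in code, rather than a question of which chips and antennas are soldered onto the board.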

Right now we're concentrating on getting it to work, then the next step would be figuring out how and when to switch. To fit all the software you need to do the modulation and signal processing, you really have to cut down the code. But it is becoming feasible. If you get it right, you can talk GSM, GPRS, RFID. The circuitry that talks GSM is perfectly capable of talking RFID, so you don't actually need more hardware.

It will take maybe a year, year and a half to see if it works. And assuming it does, the next step will be to look at more practical problems, like what it takes to switch between different protocols.

In terms of learning, you have to look at types of representations, from a semiotic point of view. We have been working with Carey Jewitt and Sara Price on this. You have data loggers traditionally in schools, for chemistry and biology. How you visualise that data is important. Some data loggers can capture very rich data sets - a lot of data that you can overlay on physical locations.

Taking it one step further is having an instrumented environment, to have situated representations. This idea we have with Snout includes LEDs to give you feedback straight away. What kind of representations are useful for experimental scientific data? In actual fact, you can see some 'ambient displays' - like in some of the tube stations, where they've replaced static posters with flat screen displays. As time goes by, they're coming up with more experimental ways of using them - which is just the tip of the iceberg for what these kinds of technologies really enable you to do.

My degree was in mathematics. I was a flight sergeant/programmer analyst for three years, as part of my national service in the Greek military. For a while I was [e-mail address withheld] - a kind of geek badge of honour. All the military had an internal Internet exchange, so that they could communicate with each other without having to go onto the public Internet. I was in charge of that, and those servers.

I got interested in learning from my wife Theano. I was reading all her stuff from early on. We are expecting our first child in May - a girl. It's funny how the doctors talk about the 'estimated time of arrival,' which is the same thing computers tell you when you're downloading something! The progress bar is reaching the end....