Truly massive distributed computing, 2032 style

Charles Stross, SF author, posted a talk he gave at TNG's Big Tech Day in June of this year. In it he postulated what one aspect of computing would look like that far in the future, assuming Moore's and Koomey's laws continue for some time. He suggested that a device with the power of a modern Tegra 3-driven tablet would cost pennies to make. That's cheap enough that you could wallpaper a city with them.

He had a lot of good ideas in there, and not just the panopticon-society stuff you might expect. City planners would be keen on them, since they'd allow far better instrumentation of things the rest of us take for granted: road expansion (a.k.a. pot-hole risk), sewer/water-main/storm-drain flow, and truly accurate traffic-flow measurements (including bikes). If you're a Public Works manager in a city that occasionally gets hit with 3cm-in-15-minutes downpours, you're really interested in how well your storm drains are working.
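As a rough sketch of what that Public Works view might boil down to (the drain IDs, rated capacities, and readings here are all invented for illustration), you'd be comparing live flow readings against each drain's rated capacity while the downpour is happening:

```python
# Hypothetical sketch: flag storm drains running near or over rated capacity
# during a downpour. Drain IDs, capacities, and readings are invented.

RATED_CAPACITY_LPS = {        # litres/second each drain is rated for
    "drain-014": 120.0,
    "drain-015": 120.0,
    "drain-201": 300.0,
}

def flag_overloaded(readings_lps, threshold=0.9):
    """Return drains whose measured flow exceeds `threshold` of rated capacity."""
    flagged = {}
    for drain_id, flow in readings_lps.items():
        capacity = RATED_CAPACITY_LPS.get(drain_id)
        if capacity and flow >= threshold * capacity:
            flagged[drain_id] = flow / capacity
    return flagged

# Readings pushed in from the embedded sensors during a 3cm/15min event.
print(flag_overloaded({"drain-014": 118.0, "drain-015": 64.0, "drain-201": 310.0}))
# {'drain-014': 0.983..., 'drain-201': 1.033...}
```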

So in true systems-engineer style my brain focused on the engineering details of managing such a network.

Gathering the data output of a billion sensors slathered liberally across a large city is not a small problem. It all has to get to a central point where decisions can be made, and that means a lot of aggregation somewhere along the way. Mesh networks are quite doable, especially at low data-rates, but that's still a lot of discrete inputs that have to be managed.
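One way the aggregation could work, sketched very loosely (the tier names and summary fields here are my assumptions, not anything Stross proposed), is that each hop forwards only a summary of its patch rather than the raw samples, so the central point only ever sees rolled-up records:

```python
# Hypothetical sketch of tiered aggregation: rather than shipping a billion
# raw samples to the central decision point, each hop forwards a summary.
from statistics import mean

def summarize(samples):
    """Collapse a batch of raw sensor readings into a small summary record."""
    return {"count": len(samples), "mean": mean(samples),
            "min": min(samples), "max": max(samples)}

def merge(summaries):
    """Combine per-block summaries into one district-level summary."""
    total = sum(s["count"] for s in summaries)
    return {
        "count": total,
        "mean": sum(s["mean"] * s["count"] for s in summaries) / total,
        "min": min(s["min"] for s in summaries),
        "max": max(s["max"] for s in summaries),
    }

# One block's worth of readings each, then a district rolling up three blocks.
block_a = summarize([20.1, 19.8, 20.5, 21.0])
block_b = summarize([22.3, 22.1])
block_c = summarize([18.9, 19.2, 19.0])
print(merge([block_a, block_b, block_c]))
```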

If processing nodes are that cheap, and you have a lot of them, then you don't really care when one or two die. Or ten. Or a hundred. You have enough for a lot of overlapping coverage, and it's only when coverage gets light enough in one area that artifacts show up in the decision-making aggregation tools that you need to pay attention. Lightning strikes a street-lamp? It probably blew out all the embedded sensors within a certain radius of the strike, and you may see the City grind and overlay the street near there to replace the sensors, even if the street didn't actually need it yet. Some kid dismantles a microwave oven and manages to get the magnetron both working and pointed in the right direction to fry electronics? Who cares about the alley; that may not get fixed for years.
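The "pay attention" check itself could be pretty dumb. A minimal sketch, assuming you bin the city into grid cells and know roughly how many live nodes a healthy cell should report (cell names, counts, and the threshold below are all invented):

```python
# Hypothetical sketch: with massive overlap, individual node deaths are noise,
# but a whole grid cell dropping below a live-node floor is worth flagging.
EXPECTED_NODES_PER_CELL = 500     # what a healthy cell reports
ATTENTION_FLOOR = 0.20            # below 20% of expected, artifacts show up

def cells_needing_attention(live_counts):
    """Return grid cells whose live-node count has fallen below the floor."""
    return [cell for cell, live in live_counts.items()
            if live < EXPECTED_NODES_PER_CELL * ATTENTION_FLOOR]

# A lightning strike takes out most of one cell; the rest shrug off attrition.
print(cells_needing_attention({"5th-and-main": 62, "elm-block-3": 487,
                               "alley-9": 35}))
# ['5th-and-main', 'alley-9']
```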

There is one thing that such a network would NOT have: emergent AI.

Yes, it can have a beeeeeeelion nodes.

Yes, that is a lot of incomprehensible (to some) complexity.

Yes, the networking protocols likely involve dynamic relinking with neighbors, which sounds an awful lot like a neural network.

But it still isn't enough to bootstrap an AI we'd recognize as an AI. As a smart person whose name I no longer remember said, "AI is just automation we can't comprehend." But that's for another post.

Implementation issues aside, Stross did ask for opinions on what could be done with such massively distributed computing. And this isn't distributed in the sense of what I do in my day-job; this is processing sensor inputs and filtering them inward to the big iron.

A computer in every doorknob.
We already have that in the form of the Hotel Doorknob, but in 2032 it'll be fancier. Imagine your cell-phone being your room-key, and all you have to do is walk up to the door and it unlocks the moment you touch the knob. The hallway network tracks the phone so it knows which side of the door you're on. When you walk out the door it remembers what RFID tags you're carrying (which could be a lot) to improve the "is that you" detection.
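A minimal sketch of what that "is that you" decision might look like, assuming the hallway network can already say which side of the door the phone is on (the signal names and overlap threshold are my inventions):

```python
# Hypothetical sketch of the doorknob's "is that you" check: the hallway
# network reports which side of the door the registered phone is on, and the
# overlap with the RFID tags you left with backs that up.

def should_unlock(phone_outside, registered_phone, seen_tags, remembered_tags,
                  min_tag_overlap=0.5):
    """Unlock if the registered phone is outside and enough familiar tags agree."""
    if not phone_outside or not registered_phone:
        return False
    if not remembered_tags:          # first entry: the phone alone is enough
        return True
    overlap = len(seen_tags & remembered_tags) / len(remembered_tags)
    return overlap >= min_tag_overlap

print(should_unlock(phone_outside=True, registered_phone=True,
                    seen_tags={"wallet", "watch", "shoe-L"},
                    remembered_tags={"wallet", "watch", "shoe-L", "badge"}))
# True
```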

The same thing could very easily be used in Corporate Headquarters buildings to track your every movement. Such systems already exist, but they'd be even cheaper to deploy and wouldn't require the employee to carry anything they didn't walk in the door with.

A computer in every aerodynamic bolt.
The use in airframe instrumentation is very interesting. Strain gauges across the entire airframe could record to the flight recorders, which would yield very valuable information for crash investigators after an incident. Such systems could feed into the avionics just as easily.
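The data path is simple enough to sketch: every instrumented bolt logs its strain to the recorder, and anything outside its envelope also gets pushed toward the avionics. The bolt IDs and limits below are made up for illustration:

```python
# Hypothetical sketch of the airframe data path: every instrumented bolt's
# strain reading goes to the flight recorder, and out-of-envelope readings
# are also forwarded to avionics. Bolt IDs and limits are invented.

STRAIN_LIMIT_MICROSTRAIN = {"wing-root-b17": 1500, "tail-spar-b02": 900}

def record_and_alert(readings, recorder, avionics):
    """Append all readings to the recorder; forward out-of-limit ones to avionics."""
    for bolt_id, microstrain in readings.items():
        recorder.append((bolt_id, microstrain))
        limit = STRAIN_LIMIT_MICROSTRAIN.get(bolt_id)
        if limit is not None and abs(microstrain) > limit:
            avionics.append((bolt_id, microstrain, limit))

flight_recorder, avionics_feed = [], []
record_and_alert({"wing-root-b17": 1620, "tail-spar-b02": 410},
                 flight_recorder, avionics_feed)
print(avionics_feed)   # [('wing-root-b17', 1620, 1500)]
```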

Reliable ambient voice-to-text.
We can do VTT with current technology, but it's buggy. Dragon NaturallySpeaking. Siri. However, what if you had 15 devices all listening in to the same conversation? You could take the outputs of all that VTT processing and statistically compare them to derive a very good guess of what was actually said.
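A minimal sketch of that statistical comparison, assuming the transcripts are already roughly word-aligned (real alignment across devices is the hard part, and is hand-waved here): a per-position majority vote across the candidate transcripts.

```python
# Hypothetical sketch of combining transcripts from many listening devices:
# assuming the transcripts are roughly word-aligned, a per-position majority
# vote recovers the likely utterance.
from collections import Counter
from itertools import zip_longest

def consensus(transcripts):
    """Majority-vote each word position across the candidate transcripts."""
    words = []
    for position in zip_longest(*(t.split() for t in transcripts)):
        candidates = Counter(w for w in position if w is not None)
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(consensus([
    "meet me at the storm drain",
    "meat me at the store drain",
    "meet me at the storm drain",
]))
# meet me at the storm drain
```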
