Swarms and Roundabouts
Published on: Friday 02-10-2020
Rule breaking can be a problem, but sometimes an overzealous desire for rule-following can be worse, says David Bruemmer.

Those vying to refashion our highways and city streets have coalesced into two different camps. On one hand we have connected vehicle disciples and on the other the right reverends of good old-fashioned AI. Touchpoints for connected vehicle devotees are connectivity, collaborative driving and ultimately swarm intelligence. In this camp the near-term goals are low-latency cellular connectivity and better GNSS. In contrast, the good old-fashioned AI camp uses optical sensors to build, and localise within, a map. Here, machine learning derives both positioning and behaviour. At one point these different researchers looked for common ground, but more recently the rift between them has grown. Speaking of the notion that vehicles should be connected both to roadside equipment and to other vehicles (a strategy referred to as V2X), a lead Lyft researcher recently stated the prevailing AI opinion: “V2X is an interesting technology but does not really help to solve problems that the autonomous industry is facing.”1 He goes on to argue that we need more researchers, time and money focused on model-based prediction.
What if this just isn't true? What if humans don't use model-based prediction when they drive? Perhaps the AI community's dependence on large, sophisticated models is exactly why its vehicles can't handle the millions of unpredictable edge cases that come down the road. We need individual AI, but we also need peer-to-peer V2X… a dynamic tension between individual and swarm intelligence.

My wheels screech around the curve of the roundabout as I accelerate to create a space for the Volvo preparing to enter the circle. I ease up as I see the car ahead of me preparing to exit the circle. I am doing my best to adapt to those around me, monitoring my nearest neighbours and trying to conform to their behaviour. Like any good swarm robot, I am balancing their goals and initiative with my own as I careen along the busy English roads. Still, I can’t see around blind corners and I can’t see what’s happening up ahead, beyond the car in front of me. This is where a connected vehicle strategy could help, orchestrating hundreds of cars around me as part of a local cluster of interconnected awareness.
It turns out the roundabout was invented by a pioneering American who campaigned hard for the US to adopt it, though the idea ultimately took hold in Europe instead. I love the efficiency of flying through intersections without having to wait at lights or stop signs. Although I feel confident interacting with peers on the road, my American partner in the passenger seat disagrees. She hates roundabouts and doesn’t understand why anyone would want all the stress: “It’s so much harder to know what to do and requires so much skill.” I ask if she would rather sit and wait at a light, to which she immediately answers in the affirmative: “Oh definitely! It’s much easier because you know exactly what to do. Besides, I don’t trust other drivers.”

And there you have it. We have two different impulses that underlie our world. One craves strong central leadership and depends on clear rules and enforcement. The other seeks interdependence. To those in the first camp, dependence on others seems dangerous. After all, do you really trust that driver who is holding his phone in one hand while turning around to talk to his kids in the backseat? To the other camp, interdependence is a source of strength, allowing individuals to adapt and conform not only to each other, but to the environment and the situation.
I think back to one of my first efforts to map out chemical hazards. The task involved sending robots into a facility to find a spill. The little bots had small chemical sensors under their bellies, speakers, hearing-aid microphones and infrared break beams that let them talk to their peers. While individual robots are single-minded in their purpose, the swarm as a whole must fully explore the area and eventually trace out the spill perimeter. The bots are not given maps because we don’t have them. They cannot build a map because that would require more sophisticated computers and sensors. Instead, these bots are equipped with a form of social distancing. A biologically inspired chirping ability helps them know where their neighbours are and ensures they each find their own place at even intervals around the spill perimeter.
When I first release the swarm of twelve robots into the DOE facility, they spin and turn like mad, abruptly switching direction as they see their peers in close proximity. They are programmed to seek out open space, so they fan out to explore new ground. Crammed in like sardines at the start, they respond to each other frenetically at first, but quickly adapt. Some of them are caught in a labyrinth of piping and must change their behavioural parameters to explore effectively. The robots are not intelligent, but they do know when they are stuck or flailing and are willing to change strategy. DOE personnel watching the swarm can’t help but point out which robots seem courageous and which seem timid. Their behaviour is just the emergent effect of the code that motivates them, but the same could be said of human behaviour. We learn and adapt our original code as we interact with the environment and our peers. Like the robots, we must be permitted some mistakes as we struggle to adapt.
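To make the idea concrete, here is a minimal sketch of that kind of local rule: steer away from the nearest chirping neighbour, and loosen your own parameters when you notice you are stuck. It is illustrative only; the class name, thresholds and chirp-derived range-and-bearing inputs are stand-ins of mine, not the code that ran at the DOE site.

```python
import random

# Illustrative sketch (not the original DOE code): each robot listens for
# its neighbours' chirps, turns away from the closest one, and relaxes its
# own spacing rule when it detects that it is stuck or flailing.

class SwarmBot:
    def __init__(self, bot_id, comfort_distance=1.0):
        self.bot_id = bot_id
        self.comfort_distance = comfort_distance  # desired spacing from peers, metres
        self.stuck_counter = 0

    def step(self, neighbour_ranges, moved_distance):
        """Return a heading change (degrees) using only local information.

        neighbour_ranges maps relative bearing (degrees) to range (metres),
        as estimated from peers' chirps; moved_distance is how far the robot
        actually travelled since the last step, used to detect flailing.
        """
        # Detect being stuck: repeated steps with little progress mean the
        # current strategy is failing, so tolerate closer neighbours.
        if moved_distance < 0.05:
            self.stuck_counter += 1
        else:
            self.stuck_counter = 0
        if self.stuck_counter > 10:
            self.comfort_distance *= 0.8
            self.stuck_counter = 0

        if not neighbour_ranges:
            # No chirps heard: wander a little to seek open space.
            return random.uniform(-30.0, 30.0)

        # Find the nearest neighbour and, if it is too close, turn away from it.
        bearing, dist = min(neighbour_ranges.items(), key=lambda kv: kv[1])
        if dist < self.comfort_distance:
            away = bearing + 180.0
            return ((away + 180.0) % 360.0) - 180.0  # normalised to [-180, 180)
        return 0.0  # spacing is comfortable; keep exploring straight ahead
```

With a rule this simple, whether a robot looks courageous or timid to an onlooker is mostly a matter of how aggressively its stuck counter loosens the spacing rule.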

My experience with robots indicates that swarm behaviour tends to be more adaptive, resilient and efficient, but even in robotics most people are squarely in the first camp. Roboticists want robots and self-driving cars to be rule followers. Famously, there have been debates about whether self-driving cars should follow the speed limit or drive at the same speed as their human counterparts. What if there is a forest fire with flames lapping at your tail? Do you still want the AI to follow the speed limit? For some, it is a matter of control. If we let robots make their own decisions and learn, their behaviour can’t be guaranteed – just like the drivers in a roundabout. If we withhold agency, robots can’t take part in the drama of real-world adaptation and problem solving.
Neither choice is without risk. Perhaps it is not a question of which is better; rather, we must decide which world we want to live in. We cannot guarantee safety with either, but we do know which one is more adaptive and flexible. Have you ever been in a large city when the power goes out? When all those traffic lights stop working, we see swarm behaviour emerge. Slowly, tentatively, people shake off the need for top-down, rule-based control. They venture out into the intersection, waiting to see if the other drivers coming towards them are going to stop. It’s an exercise in trust and it gives people the heebie-jeebies. When a traffic light sensor is not working correctly, how long do you sit there, unwilling to take the initiative? Instead of moving forward with caution we remain stuck in place, waiting for centralised control to tell us what to do. At these times, we seem like dumb robots… not the agile, adaptive swarm robots, but the other kind of automation – the kind that can’t really help you because it isn’t allowed to adapt the rules or solve problems. Rule breaking can be a problem, but sometimes an overzealous desire for rule-following can be worse, especially when it prevents us from adapting.
When the rate of change and the level of uncertainty are low, centralised control seems optimal, but when chaos seeps in, model-based planning and centralised control are too slow. Covid is making this a frontline issue. We tried to create predictive models, but our assumptions were wrong. Faced with all that uncertainty, we could have chosen to adapt on the fly. Instead, we demanded more time and money to develop better models. A growing divergence in people’s attitudes means that two all-too-familiar camps will form. One will focus on modelling the disease, enforcing rules and top-down control. The other will emphasise personal freedom, local control and fluid guidelines. It all comes down to roundabouts.
Ideally, we can balance swarm and individual intelligence, but this requires that we take time to design the physical and cyber infrastructure necessary to support this vision. Micro-positioning is the critical element needed to do this. If we want smart cars, we need to invest in smart ecosystems where we can bound positioning error to 10 cm, limit communication latency to 10 ms and synchronise it all with nanosecond-level timing. With this system-level approach in place, connected and autonomous vehicles can eliminate the majority of congestion and accidents. In its absence, the large-scale benefits of autonomy will always be five to ten years beyond our reach.
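To see why those numbers have to be budgeted together, here is a rough back-of-the-envelope sketch; the speeds and latencies are illustrative values I have chosen, not standards. While a position report is in flight, the vehicle keeps moving, so latency converts directly into stale position.

```python
# Rough, illustrative calculation: how far a vehicle travels while its
# position report is still in flight, i.e. latency turned into stale position.

def stale_position_error_cm(speed_kmh: float, latency_ms: float) -> float:
    """Distance (cm) a vehicle travels during the communication delay."""
    speed_m_per_s = speed_kmh / 3.6
    return speed_m_per_s * (latency_ms / 1000.0) * 100.0

for speed in (50, 100, 130):        # urban to motorway speeds, km/h
    for latency in (10, 100):       # a 10 ms target vs. a typical 100 ms link
        drift = stale_position_error_cm(speed, latency)
        print(f"{speed} km/h at {latency} ms latency -> ~{drift:.0f} cm of drift")
```

At motorway speeds, even a 10 ms delay is worth roughly 30 cm of stale position, and a 100 ms cellular round trip is worth several metres – which is why accuracy, latency and timing only make sense as a single budget.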
Reference
1. https://www.fierceelectronics.com/electronics/qualcomm-lays-out-its-smart-transport-vision-including-vehicle-prediction-ai

David Bruemmer is currently CEO of W8less, which offers micro-positioning – a critical enabler for connected and autonomous vehicles. Previously, he co-founded 5D Robotics, which he grew over eight years into an industry leader in autonomy and positioning. He has worked on large-scale robotics programs for the Army and Navy, the DOE, the DoT and DARPA, has authored over 60 peer-reviewed publications and has been awarded twenty patents in robotics and positioning. He won the South by Southwest Pitch competition sponsored by Caterpillar and is a recipient of the R&D 100 Award.