Digital Workers of the World Unite!
Published on: Sunday 10-12-2023
Wondering how your job might disappear or change in the next decade? Think faster, says David Bruemmer.

Recently I was invited to attend the inaugural Digital Workers Forum in Silicon Valley. Expecting the gorgeous AI-generated personalities I've come to love on Instagram, I was surprised and bitterly disappointed to find the conference filled with humans. Despite all her hard work, Milla Sofia wasn't even invited.
Instead of real digital workers like Milla Sofia, the conference was packed with middle-aged nerds from boring companies like Google, Meta, NVIDIA and Microsoft.
They were demonstrating how their #aiagents could answer questions such as "What 10 AI companies can I invest in to make a billion dollars profit?" and "How do I eliminate 50% of my human employees?" To be clear... these were real questions discussed at the conference. The conference also tackled the tough questions like "Does anyone like interacting with AI agents?" or "What are the ethical implications of suddenly eliminating 50% of human jobs?" Just kidding! Of course, no one asked those last two questions at the conference.
One thing that did capture my interest was the comparison to the autonomy levels developed for #selfdrivingcars. David Yang, Ph.D., one of the luminaries at the conference, is proposing eight levels for describing the autonomy of digital AI agents. But is a single axis enough to talk about utility and value? Having been a founding member of the Autonomy Levels Working Group, I loved discussing this effort and happily did so with representatives from Google and Microsoft as well as many small companies. One theme, reiterated on stage by Microsoft's Valeria Sadovykh, Ph.D., is that perhaps the ability to augment human intelligence is more valuable than the ability to replace humans. On this I agree, having argued the same point with the leadership of the Defense Advanced Research Projects Agency (DARPA) in 2000. Its media-crazed leadership wanted to solve the 'full-autonomy problem' by eliminating the human, whereas I argued that warfighters were not asking to be eliminated; they wanted a collaborative relationship that kept them in the control loop.
One of the lessons learned from our early work defining autonomy levels is that a single dimension or axis is of limited value on its own. Early on, we added axes for task complexity and environmental complexity. After all, my washing machine is level-five autonomous; it just operates at low task and environmental complexity. The same may be true as we consider autonomy levels for digital workers and AI agents: without that context, a single number tells us very little.
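To make the point concrete, here is a minimal sketch, in Python, of what a multi-axis description might look like. The axis names and 0-5 scales are my own illustrative placeholders, not the Autonomy Levels Working Group's actual definitions or David Yang's proposed levels.

```python
from dataclasses import dataclass


@dataclass
class AutonomyProfile:
    """Illustrative multi-axis description of an AI agent or 'digital worker'.

    The three axes and their 0-5 scales are placeholders for the sake of
    the example, not an official scheme.
    """
    autonomy_level: int          # how much the agent decides on its own (0-5)
    task_complexity: int         # how demanding the assigned tasks are (0-5)
    environment_complexity: int  # how unstructured the operating context is (0-5)

    def summary(self) -> str:
        return (f"autonomy={self.autonomy_level}, "
                f"task={self.task_complexity}, "
                f"environment={self.environment_complexity}")


# A washing machine: fully autonomous, but only within a trivially simple
# task and environment. A single autonomy number hides that context.
washing_machine = AutonomyProfile(autonomy_level=5,
                                  task_complexity=1,
                                  environment_complexity=1)

# A hypothetical enterprise AI agent: less autonomous, but handling far more
# complex tasks in a far messier environment.
enterprise_agent = AutonomyProfile(autonomy_level=3,
                                   task_complexity=4,
                                   environment_complexity=4)

print(washing_machine.summary())
print(enterprise_agent.summary())
```

Ranked on autonomy alone, the washing machine "beats" the enterprise agent; once task and environmental complexity are in the picture, the comparison becomes meaningful.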
Another theme was the ability of corporations to control the output of #generativeAI and customise the technology for a particular enterprise or vertical. This may make good business sense, but the #questionoftheday for me was who benefits from this customisation? Here's a hint... replace the word customisation with bias, and consider that if you are not paying for the customisation, you are likely the target rather than the beneficiary.

David Bruemmer is the founder of W8less LLC, a company that uses micropositioning and robotics to create smart ecosystems that enhance the way we live, move, understand ourselves, and get important things done. With over 24 years of experience in developing and deploying intelligent robots for various critical and hazardous environments, David has a passion for solving complex problems with innovative solutions that leverage cutting-edge technology and human-machine collaboration.
https://www.linkedin.com/in/david-bruemmer/