Robin’s Rules of Order for AI

Discussions about AI have erupted into the public conversation, reaching from the tech community, to living rooms, to the Vatican. To cut through the clamor, AI Institute Director Robin Feldman proposes Robin’s Rules of Order as we bring modern AI into tech and our daily lives responsibly and intentionally. She offers these thoughts as both a techno-optimist and a techno-realist.

Rule #1: Distinguish Real-time Dangers from Distant Dangers

So, what keeps me up at night . . . and what doesn’t?

“I’m less worried about rogue robots taking over the world in the future than rogue humans (or rogue nations) wreaking havoc with AI tools today. A new type of Cold War is under way, and the nation with the most-advanced AI will dominate. In the spirit of the football playoffs, it’s all about offense and defense.

On the military front, our conventional weaponry, as well as our ability to protect civil infrastructure, must keep pace with the breakneck speed of AI development. As Eric Schmidt pointed out, our military procurement system is not well-suited for AI innovation.

Here’s how I would describe the mismatch between military procurement and AI innovation: imagine pulling a plow with a Lamborghini. It’s a waste of the Lamborghini, the plow won’t work very well, and the whole thing will get stuck in the mud.

On the non-military fronts, we face attacks on the basic foundations of democratic society. These aren’t limited to attacks on freedom of speech; they are an assault on the deliberative process itself, as well as on democratic principles and values.”

How Social Media + AI → Experimentation on Humans by our Adversaries

“Would we ever allow a foreign nation to use our citizens as unwitting guinea pigs in scientific experimentation? That’s what we face today. In combination, AI and social media can allow our global adversaries to engage in real-time experimentation on humans, specifically on you and me. Our young people, in particular, are endless guinea pigs, being fed slightly varying diets as observers determine the exact amount of potion that will drive and sustain our reactions.

Press reports explain, for example, that social media magnifies hopelessness in teens and young adults by feeding them an endless torrent of information precisely tailored to mirror their deepest feelings. But why should we imagine the only aim is to magnify our own feelings? The technique also can be used to drive and direct our feelings.

Imagine endless deep-fake material created by state-sponsored bots designed to rivet the attention of young people, shift their views, and assure them that others think the same way. And then imagine the effects amplified through a campaign of poisoning training data by artificially elevating information in the data stream. This form of sustained, sophisticated campaign could have a widespread impact on public views, sowing disorder and discontent.

All of this can be done through messages that shape societal views on the democratic process and attempt to destabilize the nation. The challenge, at the end of the day, is to sustain a deliberative process when the “deliberative” messages we receive can be carefully curated and fed to us by global adversaries.

None of this is to suggest that we should halt or ban AI. Innovation moves forward, and we cannot behave like the original “saboteurs,” throwing their sabots (wooden shoes) into the machinery to stop industrialization. Instead, we should focus on maintaining our lead in the international race for AI technology. Like any other form of cold war, this one requires us to guard against societal harms and destabilization that go beyond bullets or barricades.”

What the Y2K Panic Demonstrates About Distant Dangers

“Although ancient history for anyone under 25, the year 2000 was anticipated with quiet dread. It was not because of a doomsday end-of-the-world prediction, but because of a well-founded computer-science concern that computer systems around the world would suddenly stop working.

Why did we fear Y2K? Since the start of the digital revolution, programmers had been saving precious storage space by writing years with only two digits. It’s like talking about the “60s” instead of saying the 1960s.

Unfortunately, programmers didn’t focus on what would happen when the 1900s became the 2000s.

As the year 2000 loomed, experts predicted that when the clock struck 12:00 AM on New Year’s Day, computers would go haywire—befuddled by a date that literally did not compute.
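The shortcut, and why it broke at the rollover, can be sketched in a few lines. (This is a hypothetical illustration of the general pattern, not actual legacy code; the function name is invented for the example.)

```python
# Many legacy systems stored only the last two digits of a year
# to save space, then did arithmetic directly on those two digits.

def years_elapsed(start_yy, end_yy):
    """Naive elapsed-time calculation on two-digit years."""
    return end_yy - start_yy

# Through 1999, the arithmetic works: an account opened in '85,
# checked in '99, shows 14 years.
print(years_elapsed(85, 99))  # 14

# At the rollover, the year 2000 is stored as 00, and the same
# arithmetic produces a negative, nonsensical result.
print(years_elapsed(85, 0))   # -85
```

The fix, carried out in the massive remediation effort, was essentially to widen the stored year to four digits or to pick a cutoff “window” for interpreting two-digit values.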

Elevators would stop running. Flights would be grounded. Data would disappear. ATMs wouldn’t work, potentially leading to a run on banks. Security and communications would fail, and general mayhem would ensue. But after an extraordinary scramble of reprogramming, involving partnerships between the public and private sectors, the dawn of the millennium passed with barely a hiccup.

Today, we are again faced with a transformative technical advancement. And we can find much wisdom in the lessons of rounding the millennial corner.

First, today is the time to think about where technology will lead tomorrow. As we are seeing from the downsides of social media, tomorrow is a little late.

Second, envisioning a disaster doesn’t necessarily mean it will happen. To the extent there are concerns about future AI disasters, we can face them now and determine how to avoid them.

Finally, avoiding disasters generally requires enormous cooperation between the public and private sectors.

In the end, there is much we can learn from Y2K about distant dangers.”