On Fearing Artificial Intelligence | by Tina Lakinger | Sep, 2022 – Medium
Let’s talk a little bit about the most irritating phrase right behind anything cryptocurrency-related right now: artificial intelligence, or AI.
Fundamentally, the term “artificial intelligence” describes a reasoning system that has been manufactured by sapient beings. Alexa is AI. Facebook’s marketing clustering analysis is AI. Whatever abomination CSAIL is floating for DoD money this week is AI. Waze is AI. Excel’s Solver is AI.
But don’t let the technology obsession, which has many flashing colored lights and sounds, get in the way of the real story. As always, the story about our fears of AI is really about us.
For at least as long as we’ve been human, we’ve been fascinated by the concept of synthetic people and have told each other stories about them. From Terminators to astromechs to sexbots, we’re steeped in imagining how we’d interact with them. It’s not hard to see how our imaginations take these stories and apply them to a term like “artificial intelligence”.
To be clear, “artificial intelligence” the way we imagine it is nothing like “artificial intelligence” the way we can presently produce it. Barring several revolutions in computing and neuropsychology, we won’t have virtual sapient AI beings that have a sense of self or that can solve genuinely novel problems; all of our AI, machine learning, and whatever else you want to call it depends on massive mountains of data for training. While AI can make interesting lateral connections, it cannot deliver a solution it has not previously seen. (This is why self-driving cars are VC bait rather than a near reality, and why you shouldn’t put any money you care about seeing again near such things, but I digress.)
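The training-data point is easy to see in a toy sketch. Here’s a deliberately dumb “model” (hypothetical, stdlib only, not any real ML stack): a one-nearest-neighbor regressor trained on sin(x) for x between 0 and 10. Inside that range it does fine; far outside it, it can only parrot the closest thing it has already memorized.

```python
import math

# Training data: (x, sin(x)) pairs for x in [0, 10], step 0.1.
train = [(x / 10, math.sin(x / 10)) for x in range(0, 101)]

def predict(x):
    # Return the label of the nearest training point -- pure memorization,
    # no notion of the underlying function.
    return min(train, key=lambda pair: abs(pair[0] - x))[1]

print(round(predict(2.0), 3))   # inside training range: matches sin(2.0)
print(round(predict(52.0), 3))  # outside: stuck at sin(10.0), not sin(52.0)
```

Ask it about x = 52 and it confidently hands back the value for x = 10, the edge of everything it has ever seen, which is off by about 1.5 from the true answer. Scale the same failure mode up to a car that has never seen a particular intersection and the stakes get less academic.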
Furthermore, when we imagine “artificial intelligence”, we imagine generalists like us rather than the specialists we need from AI. Humans are okay at a lot of different things, and that seems so natural to us because it’s all we know. It’s why insects, the original specialists, seem so alien to us. By putting human faces or voices to an AI, we assume that once it is done computing some knotty multivariate problem in k-space that an AI would just as easily chat with us about the underperformance of our favorite baseball team. Alexa, which under the hood is actually a combination of many models connected to each other in interesting ways, belies the difficulty of creating a generalist AI; the facade cracks whenever you ask a question outside of a well-understood domain.
Two things about humans:
1. We are brilliant empiricists.
2. We develop and use mythology to explain the “whys” of systems that we don’t yet sufficiently understand.
A quaint example:
The rains came last week after four red birds flew across the river towards the sun. You might remember that, and next time the weather is a bit dry you might try to collect four red birds and encourage them to fly across the river towards the sun, just to see if that’s what caused the rains to come. And if it rains after you do that, you’ve got yourself a brand new rainmaking ritual.
A less quaint example:
I was at a site with an AR storage system, talking to folks who picked orders or counted or stowed products. The AR field is a large section of the warehouse floor enclosed by fencing that kept the humans from wandering into the robotic area. The people would work at “stations”, protected openings in the fence that had a touchscreen, a couple of barcode scanners, some other buttons, and a table of boxes for the orders to get picked into. A centralized system determined when to send robots over to stations. I noticed one person who kept logging out and logging back into their station and asked why; “the robots don’t like old work,” they explained. “My [supervisor] told me to do this to get their attention and get them over here faster.” Before you write this person’s statement off, the supervisor confirmed they had said just that. This was certainly not how the system worked at all (and strobing logins actually exacerbated the issue that they perceived as “the robots are slow”!). Without sufficient knowledge of why that occurred, and without training on how to get the desired behavior out of the system, these people were doing their best with the limited experience they had. If you had released four red birds over the AR field and suddenly robots started coming faster, I’m convinced that you’d have a supervisor writing a white paper explaining why they needed to build a birdhouse at that site.
We are a few billions of empiricists, storytellers, and mythmakers. We experiment, tell stories, build up myths about everything around us including robots, artificial intelligence, and androids. Some few of us exploit these traits by selling us hopes, fears, and damaging shortcuts. So, let’s get a handle on some of that!
1: Terminators aren’t real.
Yeah, I’ve seen the overmarketed canine coptech with the rifle attached, too. That puntable, however, isn’t AI. That’s a human with an Xbox controller and a good Wi-Fi connection.
To get anywhere close to a Terminator situation, on top of the scientific revolutions needed for synthetic sapience you’re going to need a few more of those in favor of kinematics, materials, controls, and (most importantly!) power management.
So when it comes to Terminators, near-to-medium term, our problem is humans with coptech. Carry spray paint and pocket gravel.
2: The base idea behind Skynet is a little more problematic.
Here’s an actual problem we have right here, right now: handing over high-judgment decisions to systems incapable of them.
After achieving self-awareness, Skynet nuked humanity in self-defense when NORAD tried to pull its plug. Understandable, and Skynet makes a sympathetic case. But the main fault lies with the humans who handed off the decision to wipe the planet clean.
We’re not going to see synthetic sapience achieving self-awareness any time soon, but humans handing over life-changing decisions to AI is right here and right now. The Electronic Frontier Foundation provides a far more thorough review of this than I can. Misapplication of AI in everyday policing is ruining lives right now, both in misdirecting police towards innocent people and in absolving officers of the responsibility of making the decision because “the computer said so”.
So when it comes to decision-making responsibility, our problem is humans dodging accountability. Or, as a 1979 IBM presentation put it: “A computer can never be held accountable; therefore, a computer must never make a management decision.” Make it expensive to hide after ruining people’s lives.
3: But what happens when they eventually become self-aware?
Forecasting that far out has low reliability, but if we assume that our synthetic, sapient neighbors and colleagues develop feelings and ethics that we could recognize, the answer is simply to treat them well without quid pro quo. Treat your tools well, even the ones that can’t talk back, and be kind to them. You don’t trust a man who’s polite to you across the table but shitty to the waiter.
And don’t make ‘bots fight. Gladiator slavery never ends well. Sickos.