In the recent podcast, “What Does It Mean to Be Human in an Age of Artificial Intelligence?”, Walter Bradley Center director Robert J. Marks discussed with veteran podcaster Gretchen Huizinga what artificial intelligence can and can’t do, along with its ethical implications. In this segment, they talk about the hope, the hype, and the likely realities.
The entire interview was originally published by the Christian think tank the Beatrice Institute (March 3, 2022) and is repeated here with their kind permission:
Here’s a partial transcript of this segment, with notes and links. This portion begins at 18:55 min. Show Notes and Additional Resources follow.
Gretchen Huizinga: Computational intelligence is one of your areas of expertise. In fact, you co-wrote a book on it, Computational Intelligence: Imitating Life. Tell us a bit more about what computational intelligence is and how it’s different from other flavors of artificial intelligence.
Robert Marks: It has to be placed in a historical context. There was artificial intelligence in the 1960s. It was championed by people like Marvin Minsky and Seymour Papert at MIT. And they were trying to write AI based on expert systems. Now, what’s an expert system? You would go to an expert and try to tease out of them the rules they used to accomplish whatever they accomplished. So you might go, for example, to a person who traded in the stock market. And they would say, “Well, if the S&P goes up three points and the Dow goes down a point and the Nasdaq goes up three points, I would buy Apple.” Or something like that. And so they would try to copy down all of these rules from these experts and reduce them to code.
Now, along came the connectionists, people like Bernard Widrow at Stanford and Frank Rosenblatt at Cornell. And they said, “We could probably model this with neural networks, all of these nodes which are connected together.” And then, as often happens in academia, they came into conflict. It ended up with Minsky and Papert writing a book called Perceptrons, which totally derailed neural network research.
And they got hit by their own ricochet because it also ended their work in artificial intelligence. So when the term computational intelligence was created, neural networks carried a stigma, and there needed to be a separation from artificial intelligence, which was always recognized as a rule-based system.
Robert Marks: I was in a leadership position in the IEEE, the world’s largest professional society of electrical and electronics engineers. It has over 400,000 members. I was in the arm dealing with neural networks. And we wanted to come up with an idea or a name that differentiated us from artificial intelligence, which at the time was associated with Minsky and Papert. And so, in a back-and-forth email exchange, we came up with the idea of computational intelligence … there has been an evolution in the definition of the name. So today you hear terms thrown around like “artificial intelligence,” “computational intelligence,” “machine intelligence.” Artificial intelligence dominates in that, but all of them basically mean the same thing today, at least in the media.
Part of the area of computational intelligence is an area a lot of people aren’t knowledgeable about, which is called fuzzy logic. Fuzzy logic comes from the way that humans communicate. If I’m telling you to back up the car, I will say back, back, faster, fast, slow, slow, slow down, slow down, stop. I’m communicating no numbers to you; I’m communicating to you in vague, fuzzy terms. And what fuzzy logic allows you to do is to take those terms and translate them directly into computer code. This was pioneered by a guy at Berkeley named Lotfi Zadeh. He passed away just a few years ago, but made an incredible impact. It was a way that expert systems could work because it took the biological communication of fuzzy linguistics and allowed computer code to be written around that fuzzy communication.
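Note: Here is a minimal sketch, not from the interview, of how fuzzy terms like “close” and “far” can be translated into code. The membership ranges and braking outputs below are invented for illustration; real fuzzy controllers tune them carefully:

```python
# A minimal fuzzy-logic sketch: map a vague notion of distance
# ("close", "medium", "far") to a numeric braking level.
# All numeric ranges here are illustrative assumptions.

def triangular(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def braking_level(distance_m):
    """Fuzzy rules (assumed for illustration):
       IF distance is CLOSE  THEN brake hard   (1.0)
       IF distance is MEDIUM THEN brake softly (0.5)
       IF distance is FAR    THEN do not brake (0.0)
    """
    close = triangular(distance_m, -1.0, 0.0, 2.0)
    medium = triangular(distance_m, 1.0, 3.0, 5.0)
    far = triangular(distance_m, 4.0, 8.0, 12.0)
    weights = [close, medium, far]
    outputs = [1.0, 0.5, 0.0]
    total = sum(weights)
    if total == 0:
        return 0.0  # beyond all rules: no braking
    # Defuzzify: weighted average of the rule outputs
    return sum(w * o for w, o in zip(weights, outputs)) / total

print(braking_level(0.5))   # very close to the wall -> 1.0 (brake hard)
print(braking_level(10.0))  # far away -> 0.0 (no braking)
```

The point of the sketch is that the driver’s vague words become overlapping membership functions, and the controller blends the matching rules into a single smooth output rather than flipping between hard thresholds.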
Gretchen Huizinga: That is a hard problem as far as I’m concerned. How do you make concrete the very abstract, which humans seem to have no problem with but machines have more of a problem with?
Robert Marks: One of the things I talked about was the inability of artificial intelligence to have common sense thus far. And there are things that we can do in terms of common sense that computers have a rough problem doing. There’s something called the Winograd schema, which AI has failed to conquer over many years. A Winograd schema is, for example: “I can’t cut down that tree with this axe because it’s too small.” Now, the question is: is the axe too small or is the tree too small? We know immediately that it’s the axe that’s too small. But a computer would have a difficult time figuring that out.
Gretchen Huizinga: This is the problem with misplaced modifiers…
Robert Marks: Yeah. I call them vague pronouns, at least in the Winograd schema. But there are other ones that don’t require pronouns. And these are called flubbed headlines.
For example, “Milk drinkers return to powder.” If you think about that, you know that there’s a funny interpretation of that. You know what the writer of that headline meant and computers would have a difficult time doing that. They don’t have that common sense.
Note: Here are some of Dr. Marks’s collection of flubbed headlines:
“New Housing for Elderly Not Yet Dead”
“Shouting Match Ends Teacher’s Hearing”
“Dr. Gonzalez Gives Talk on Moon”
“Man Seeking Help for Dog Charged with DUI”
Many more at the link. We think the intended interpretation is obvious, but computers have difficulty with such sentences because they could mean two different things and the computer has no basis for a decision.
Gretchen Huizinga: So, Bob, one question I keep asking people who work on AI, is what’s hope and what’s hype? And in many cases, what was hype is now hope. But there’s still, as we’ve talked about, many grand challenges, maybe some insurmountable challenges. But what’s your take on hope and hype in AI… ?
Robert Marks: Through social media — and even so-called news media — we are seduced into clicking on web pages. And how do they get us to click on these web pages? They use what I refer to as seductive semantics. They want something there that’s sexy, that makes you want to click on that. And of course, if you click on that, you go to the webpage where there’s lots of ads, and whoever is writing the article gets paid according to how many times those ads are visited. So they’re trying to make money. So one of the sources of this [hype] is clickbait.
Another one, I maintain, is uninformed journalists. I think a lot of journalists think they know what AI is, but when they write about AI, they have no clue. The other one is promotion. Everybody wants to give their product or whatever they’re doing a brand.
And even professors are guilty of doing that. We are supposed to go out there and drum up support and get money from the government and industry to support what we do. So those are the motivations for some of the hype I think in general.
Now, what is seductive semantics? Especially with AI, we use terms all the time that aren’t defined. There will be a headline that says new AI is “self-aware.” And the word self-aware is used without definition.
Now, my car, when I back it up, is aware of its environment, because if I get too close to a wall, it starts beeping at me. Does that make my car self-aware? … It’s always important to define terms before you use them in a hyped-up story. That’s seductive semantics.
Gretchen Huizinga: What about optics? Seductive optics?
Robert Marks: Oh, seductive optics is when, for example, AI is put in a package that enhances and amplifies the impression that it is human-like. An example of that is the robot Sophia. She has been on late-night talk shows and other places. And she was even granted full citizenship in the country of Saudi Arabia. Believe it or not. So the interesting thing is that they have this lady, she can make facial expressions, her lips are synced with her words, but really, what she is doing has little to do with the AI. She is simply a package for the AI, and she gives the impression that the AI is more human than it is. That would be an example of seductive optics.
Gretchen Huizinga: Well, Bob, the last time we talked, you told me about claquing. It’s a French word and it refers to a paid cheering section for French operas. I dug around a little bit on that word and found out that not only did they get paid to cheer, they eventually figured out how much power they had and said, “You also have to pay us not to boo.”
Note: A claque was an ancient form of publicity campaign: an “organized body of persons who, either for hire or from other motives, band together to applaud or deride a performance and thereby attempt to influence the audience. As an institution, the claque dates from performances at the theatre of Dionysus in ancient Athens.” – Britannica. It was revived by the French opera, hence the modern name (from the French claquer, “to applaud”).
Gretchen Huizinga: Do you feel like there’s an AI cheering section equivalent to the claquers of the French opera here?
Robert Marks: Well, I think certainly we have individuals that are claquers. These are people that promote AI inappropriately…
Gretchen Huizinga: In my previous podcast, I interviewed a lot of people about the latest developments in computer science and AI. And I started each interview with the question, “What gets you up in the morning?” And a bit later I asked the bookend question, “What keeps you up at night?” And I want to ask you that question, Bob, because you wrote a book called The Case for Killer Robots. I figured something must be keeping you up at night in order to write a book with that title.
So make the case for our listeners. Why do we need killer robots — also known as autonomous weapons — when so many people are saying that’s one area AI should not go?
Next: Robert J. Marks: Straight talk about killer robots
You may also wish to read the transcript and notes to the first portion:
Robert J. Marks: Zeroing in on what AI can and can’t do. Walter Bradley Center director Marks discusses what’s hot and what’s not in AI with fellow computer maven Gretchen Huizinga. One of Marks’s contributions to AI was helping develop the concept of “active information,” that is, the detectable information added by an intelligent agent.
- 01:32 | Introducing Dr. Robert J. Marks
- 02:38 | The Difference Between Artificial and Natural Intelligence
- 06:31 | The Goldilocks Position
- 07:40 | The Challenges and Limitations to AI
- 14:42 | The Legacy of Walter Bradley
- 18:55 | The Difference Between Computational and Artificial Intelligence
- 24:22 | What is Hope and What is Hype?
- 28:44 | What Keeps Dr. Robert J. Marks Up at Night?
- 34:26 | AI and Faith
- 36:56 | Is Flourishing Bad and Friction Good?
- 40:45 | The Personal Mission of Dr. Robert J. Marks
Podcast Transcript Download