Artificial Intelligence and the Metaverse: future terror or present fears?

Image credit: geralt, Pixabay

Recent events, amplified by the mainstream media, make us think we are at the dawn of the first truly ‘conscious’ Artificial Intelligence, commonly referred to by the acronym AGI: Artificial General Intelligence.

We know that such a reality is still far from being achieved, but in this mixture of longing and dread we risk overlooking the risks that are already present and concrete in the vast field of Artificial Intelligence research.

A very recent study, published by Thomas Hellström and Suna Bensch and entitled ‘Apocalypse now: no need for artificial general intelligence’, tells us precisely this: “The worst case scenario is that AGI becomes self-aware and prioritises its own existence over people, who are seen as a threat because they may decide to ‘pull the plug’ and thus ‘kill’ AGI. The AGI could then decide to exterminate or enslave all people. The good news is that the road to AGI is probably a long one, although experts disagree strongly on when, or indeed whether, AGI will ever become a reality. Some researchers argue that we should not put too much effort into these fanciful and improbable scenarios, as they distract attention from the dangers of AI at today’s level, which are quite serious and more acute.”

Dangers exist at the current level too, and according to the authors three conditions must be met for an AI system to pose them:

1. that it already affects the world in one way or another,

2. that it has an ability to discover and use causal relationships to achieve its goal, and

3. that it has access to a relevant model of the world in which to conduct causal discovery.

Today’s AI systems range from industrial robots and self-driving vehicles to high-performance algorithmic systems such as GPT-3, and even ‘help’ systems that guide our purchasing choices or opinions in social environments.

Therefore, at least the first of the above conditions has already been largely met.

Causal reasoning is a characteristic of both humans and AI, although the latter still lacks the human ability to distinguish causal relations from mere correlations. AI needs data and experimentation, which makes it clear why the Metaverse represents a remarkably useful field of experimentation: the influx of data and ‘events’ would provide a powerful guide to so-called causal discovery, contributing significantly to increasing and improving algorithmic performance and facilitating the business of operators and companies.
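To make the distinction concrete, here is a deliberately simplified Python sketch of my own (not code from the paper, with invented variables and numbers): passive observation shows that two quantities move together, but only an intervention, the kind of controlled ‘experiment’ a world model such as the Metaverse would make cheap to run, reveals which one causes the other.

```python
import random

def sample(do_a=None, do_b=None, n=10_000):
    """Toy world in which a causes b; do_a/do_b clamp a variable (an intervention)."""
    rows = []
    for _ in range(n):
        a = random.gauss(0, 1) if do_a is None else do_a
        b = (2 * a + random.gauss(0, 0.1)) if do_b is None else do_b
        rows.append((a, b))
    return rows

def mean(xs):
    return sum(xs) / len(xs)

# Observation alone: a and b move together, but correlation is symmetric,
# so it cannot say which variable drives the other.
obs = sample()
print("observed mean of b:", round(mean([b for _, b in obs]), 2))

# Intervening on a shifts b ...
print("mean of b under do(a=3):", round(mean([b for _, b in sample(do_a=3.0)]), 2))

# ... while intervening on b leaves a untouched: the asymmetry exposes the
# causal direction, something no amount of passive observation can reveal.
print("mean of a under do(b=3):", round(mean([a for a, _ in sample(do_b=3.0)]), 2))
```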

A system that possesses all three requirements, and is therefore used to maximise a company’s profits, could use the available data to advise and channel tastes and ideas; but then, by drawing on global models and causal relationships, it could create what the authors call ‘useful idiots’, i.e. virtual influencers and consumer lobbies that convey the desired message and influence the market.

But what would happen if the AI, drawing on a generalised model characterised by cheating and deception, proposed this archetype as its base model? What Metaverse would be created? In what society, albeit virtual, would we find ourselves operating?

Hence the need for pre-programmed ethical rules, to avoid drifts that do not require AGI to cause irreparable damage: “Through interventions in a world model such as the Metaverse, causal rules linking actions to consequences can be automatically generated and subsequently used for planning sequences of actions in the physical world. Once this mechanism is put into operation, the AI can, on its own, generate increasingly efficient action plans that lead to its pre-programmed goal. This Artificial Intelligence system need not be close to an AGI, but its power still grows exponentially.”
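The loop the authors describe, intervene, record the consequence, reuse the learned rule to plan, can be sketched in a few lines of Python. This is a purely hypothetical toy of my own (a single numeric ‘engagement’ score and three made-up actions), not the system discussed in the paper:

```python
# Toy world model: the state is a single "engagement" score, and each action
# shifts it by a fixed amount that the agent does not know in advance.
WORLD_EFFECTS = {"post_ad": 2, "nudge_users": 5, "stay_silent": 0}  # hidden from the agent

def world_step(state, action):
    """Return the new state after applying an action inside the world model."""
    return state + WORLD_EFFECTS[action]

# 1) Causal discovery by intervention: try each action in isolation, starting
#    from a baseline state of 0, and record the consequence it produces.
learned_rules = {action: world_step(0, action) for action in WORLD_EFFECTS}

# 2) Planning: chain the learned rules to reach a pre-programmed goal,
#    greedily picking the most effective action at every step.
def plan(start, goal):
    state, steps = start, []
    while state < goal:
        best = max(learned_rules, key=learned_rules.get)
        if learned_rules[best] <= 0:
            break  # no action moves the state forward; give up
        state += learned_rules[best]
        steps.append(best)
    return steps

print(plan(start=0, goal=12))  # -> ['nudge_users', 'nudge_users', 'nudge_users']
```

The point of the toy is only the shape of the loop: nothing in it comes close to AGI, yet the more interventions the world model allows, the more effective the resulting plans become.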

Even more significant, and disturbing to say the least, is the next paragraph: “As described above, the Metaverse plays an important role, providing an experimental platform where AI can conduct experiments to discover causal relationships that can subsequently be used to plan sequences of actions in the physical world. However, the Metaverse’s role may be even more central than that. As the intention of companies is to move more and more human activities into this virtual world, we could end up in a situation where the Metaverse, to some extent, is the world. Those who govern this Metaverse, be it companies or an artificial intelligence system, have full control over all the actions performed by the avatars, as well as how the (virtual) world responds to those actions.”

The conclusion of this fine piece of work leads to key questions that, sooner or later, we will have to be able to answer: how should we, as human beings, react as the future leads us increasingly towards controlled metaverses? How can we curb negative drifts in AI systems while still leaving room for positive, technology-driven advances?

I gladly cite this fine work, which is available under a Creative Commons Attribution 4.0 International License at https://doi.org/10.1007/s00146-022-01526-8

It is just the first grain of sand in my personal journey into the world of Artificial Intelligence and the Metaverse. Stay tuned.

All Rights Reserved

Raffaella Aghemo, Lawyer
