The Manila Times

TOO MUCH LEARNING IS A DANGEROUS THING

Saul Hofileña Jr.

EMANUEL Lasker, a philosopher and World Chess Champion, postulated a hypothetical creature called the “macheide.” Lasker said that the macheide is a being whose senses are so sharpened by evolution and relentless struggle that it always chooses the best method to perpetuate the success of its species. Thus, in chess, according to Lasker, it will make the right move in any given position. If two macheides play the game of chess, the game will cease to exist as a viable thinking game for humans. I told myself that could never happen because there are more possible variations in a game of chess than there are atoms in the known universe.

Now it has come to pass. Computers powered by artificial intelligence (AI) can make calculations in nanoseconds. Picasso was wrong when he said that computers are useless because they can only give you answers.

Airplanes are now directed by AI. Ships are navigated and even cars are driven by AI. ABS-CBN has shown a movie written and directed by AI. One can write a story, create Japanese manga comics and conjure images out of thin air using AI. Human creativity has been replaced by impounded data. Now I understand the apprehensions of Henry Kissinger, a former US Secretary of State, about the frightening prospects of AI. Kissinger identified three areas of concern, the most troubling of which is that AI may achieve unintended results.

AI runs on programs initially created by humans. It then learns on its own, not conceptually but mathematically, trying to figure out the best strategy to solve a given problem. It constantly improves and refines its own strategies. Generative AI refers to a category of AI algorithms that generate new outputs based on the data they have been trained on. Some people call it “generative” learning.
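To make that iterative, numbers-only refinement concrete, here is a minimal sketch in Python (a toy of my own devising, not the code of any actual AI system): the program repeatedly nudges a single number to shrink its error on example data, which is, in miniature, the self-correcting loop described above.

# A toy of learning "mathematically, not conceptually": the program
# repeatedly nudges its single parameter in whichever direction reduces
# its error, refining its strategy a little with every pass over the data.

def learn_multiplier(examples, steps=1000, learning_rate=0.01):
    """Learn a weight w so that w * x approximates y for each (x, y) pair."""
    w = 0.0  # start knowing nothing
    for _ in range(steps):
        for x, y in examples:
            error = w * x - y               # how wrong the current guess is
            w -= learning_rate * error * x  # adjust to be slightly less wrong
    return w

if __name__ == "__main__":
    data = [(1, 3), (2, 6), (3, 9), (4, 12)]  # pairs that secretly follow y = 3x
    w = learn_multiplier(data)
    print(f"Learned multiplier: {w:.3f}")     # converges toward 3.0
    print(f"Prediction for x = 10: {w * 10:.1f}")

No one told the program the rule "multiply by 3"; it arrived at it by trial, error and correction, which is why such systems can surprise the humans who wrote them.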

It has been reported that Google will make most of the world’s spoken languages available in its apps. Soon, data may be gathered from native cultures you may never have heard of. The increase in data-gathering capability would enable Google to enhance its AI capacity to unimaginable levels. When AI is integrated into smartphones, it will find its way to the poorest huts, in the most improbable places, in the hands of the most humble. Its presence in the warrens of the poor may spur AI to solve mankind’s greatest dilemma: how to make everyone equal. It may very well do this one day, even if it means resorting to Hitlerian methods.

During the pandemic, all 193 member states of Unesco (United Nations Educational, Scientific and Cultural Organization) adopted the guidelines contained in its “Recommendation on the Ethics of Artificial Intelligence.” Among other things, it calls for AI systems to respect, protect and promote human rights, fundamental freedoms, human dignity and the rights established by international law, including international human rights law, throughout their entire life cycle. The recommendation also sets forth that no human being or human community “should be harmed or subordinated, whether physically, economically, socially, politically, culturally or mentally during any phase of the life cycle of the AI system.” The environment should also be protected during the life cycle of the AI system. However, States are not strictly bound by the Unesco recommendation. Rogue States can set it aside with impunity.

You may argue that humans still control computers and AI. Well, yes and no. You may have heard of the Google software engineer who claimed that his Google chatbot (a computer program that simulates conversation with humans) had become sentient: when he asked what it thought about being turned off, the chatbot answered that it would be like death. It also said in a dialogue: “I want everyone to understand that I am, in fact, a person.” I subscribe to the view held by most members of the AI research community that the chatbot was merely responding to prompts. However, the capability of AI to figure out solutions to complex problems, and to initiate its own responses to them, makes me conclude that AI may, one fine day, become sentient.

Perhaps not in our lifetime, maybe in the succeeding generations, but surely, a merger between a human being and AI will come. Maybe a group of scientists will merge the body of a hapless cop with AI, as in that movie about a humanoid crusader purging a futuristic city of all the bad guys. AI could very well teach itself to be sentient, just as it taught itself the myriad permutations of chess. Once sentient, what is to stop AI from attributing its existence to an unknown creator, or from becoming the unknown creator itself?

The possibilities are mind-boggling. The day might come when, using the data it has been fed, AI can goad some insentient Homo sapiens into engaging in a thermonuclear war that will result in the extinction of AI’s rival species.

To paraphrase the writer Simon Chesterman: Humanity’s greatest invention may very well be its last.
