AI Ethics And The Almost Sensible Question Of Whether Humans Will Outlive AI

I have a question for you that seems to be garnering a lot of handwringing and heated debates these days.

Are you ready?

Will humans outlive AI?

Think it over.

I am going to unpack the question and examine closely the answers and how the answers have been elucidated. My primary intent is to highlight how the question itself and the surrounding discourse are inevitably and inexorably rooted in AI Ethics.

For those that dismissively think that the question is inherently unanswerable or a waste of time and breath, I would politely suggest that the act of trying to answer the question raises some vital AI Ethics considerations. Thus, even if you want to out-of-hand reject the question as perhaps preposterous or unrealistic, I say that it still elicits some value as a vehicle or mechanism that underscores Ethical AI precepts. For my ongoing and extensive coverage of AI Ethics and Ethical AI, see the link here and the link here, to name just a few.

With the aforementioned premise, please permit me to once again repeat the contentious question and allow our minds to roam across the significance of the question.

Will humans outlive AI?

If you are uncomfortable with that particular phrasing, you are welcome to reword the question to ask whether AI will outlive humans. I am not sure if that makes answering the question any easier, but maybe it seems less disconcerting. I say that because the idea of AI outliving humans might feel a bit more innocuous. It would almost be as though I asked you whether large buildings and human-crafted monuments might outlive humankind.

Surely this seems feasible and not especially threatening. We make these big things during the course of our lives and akin to the pyramids, these mighty structures will outlast those that crafted them. That doesn’t quite equate to persisting past the end of humanity, of course, since humans are still here. Nonetheless, it seems quite logical and possible that the structures we make could outlast our existence in total.

The notable distinction though is that various structures such as tall skyscrapers and glorious statues are not alive. They are inert. In contrast, when asking about AI, the assumption is that AI is essentially “alive” in a sense of having some form of intelligence and being able to act in ways that humans do. That’s why the question about living longer is more daunting, mind-bending, and altogether a puzzle worthy of puzzling over.

Throughout my remarks herein, I am going to stick with the question that is worded as to whether humans will outlive AI. This is merely for sake of discussion and ease of contemplation. I mean no disrespect to the alternative query of whether AI will outlive humans. All in all, this analysis covers both wordings and I just perchance find that the question of humans outliving AI seems more endearing in these thorny matters.

Okay, I’ll ask it yet again:

Will humans outlive AI?

Seems like you have two potential answers, either the ironclad yes, humans will outlive AI, or you might be on the other side of the coin and fervently insist that no, humans aren’t going to outlive AI. Thus, this lofty and angst-ridden question boils down to a straightforward rendering of either yes or no.

Make your pick.

I realize that the smarmy reply is that neither yes nor no is applicable.

I hear you.

Whereas the question certainly seems to be answerable in only a distinctly binary fashion, namely just yes or no, I will grant you that a counter-argument can be sensibly made that the answer is something else.

Let’s briefly explore some of the basis for not wanting to merely say yes or no to this question.

First, you might reject the word “outlive” in the context of the question posed.

This particular wording perhaps implies that AI is alive. The question didn’t say “outlast” and instead asks whether humans will outlive AI. Does the outlive apply only to the human part of the question, or does it also apply to the AI part of the question? Some would try to assert that the outlived aura applies to the AI portion too. In that case, they would have heartburn over saying that AI is a living thing. To them, AI is going to be akin to tall buildings and other structures. It isn’t alive in the same manner of speaking that humans are alive.

Ergo, in this ardent contrarian viewpoint, the question is falsely worded.

You might be vaguely familiar with questions that have false or misleading premises. One of the most famous examples is whether someone is going to stop beating their wife (an old saying that obviously needs to be set aside). In that infamous example, if the answer of yes is provided, the implication is that the person was already doing so. If they say no, the implication is that they were and are going to continue doing so.

In the case of asking whether humans will outlive AI, we can end up buried in a morass about whether AI is considered something of a living facet. As I will explain momentarily, we do not have any AI today that is sentient. I think most reasonable people would agree that a non-sentient AI is not a living thing (well, not everyone agrees, but I’ll stipulate that for now – see my coverage of legal personhood for AI at the link here).

The gist of this first basis for not answering the question of whether humans will outlive AI is that the word “outlive” could be interpreted to imply that AI is alive. We don’t have AI of that ilk, as yet. If we do produce or somehow have sentient AI that arises, you would be hard-pressed to argue that it isn’t alive (though some will try to make such an argument). So the key here is that the question posits something that doesn’t exist and we are merely speculating about an unknown and hazy-looking future.

We can take this messiness and seek to expand it into a more expressive expression. Suppose that we are asking this instead:

  • Will humans as living beings outlast AI that is either (1) non-living, or (2) a living entity if that someday so arises?

Keep that expanded wording in mind and we will soon return to it.

A second basis for not wanting to answer the original question posed of whether humans will outlive AI is that it presupposes that one of the things will outlive one of the other things. Suppose though that they both essentially live forever? Or suppose that they both expire or go out of existence at the same time?

I’m sure that you can readily discern how that makes the yes-or-no wording fall apart.

Seems like we need a possible third answer consisting of “neither” or a similar response.

There is a slew of “neither” related permutations. For example, if someone is of the strident belief that humans will destroy themselves via AI, and simultaneously humans manage to destroy the AI, this believer cannot sincerely answer the question of which will outlive the other with an inflexible answer of yes or no. The answer, in that rather sordid and sad case, would be more along the lines of neither one outlives the other.

The same would be true if a huge meteor strikes the Earth and wipes out everything on the planet, including humans and any AI that happens to be around (assuming we are all confined to the Earth and not already living additionally on Mars). Once again, the answer of “neither” seems more apt than suggesting that the humans outlived the AI or that the AI outlived the humans (since they both got destroyed at the same time).

I don’t want to go too far afield here, but we also might want to establish some parameters about the timing of the outliving. Suppose that a meteor strikes the Earth and humans are nearly instantly wiped out. Meanwhile, suppose the AI continues for a while. Think of this as though we might have already-underway machinery in factories that keeps humming along until eventually, the machines come to a halt because there aren’t any humans keeping the machines in running order.

You would have to say that humans were outlasted or outlived by those machines. Therefore, the answer is “no” regarding whether humans survived longer. That answer seems sketchy. The machines gradually and inexorably came to a halt, presumably due to the lack of humankind around them. Does it seem fair to claim that the machines were ably able to last longer than the humans?

Probably only to those that are finicky and always want to be irritatingly precise.

We could then add some kind of time-related element to the question. Will humans outlive AI for more than a day? For more than a month? For more than a year? For more than a century? I realize this regrettably opens up Pandora’s box.

What is the agreeable time frame beyond which we would be willing to concede that the AI did in fact outlive or outlast humans? The accurate answer seems to be that even if it happens for a nanosecond (a billionth of a second) or shorter, the AI summarily wins and the humans lose on this matter. Allowing for latitude by using a day or a week or a month might seem fairer, perhaps. Letting this go on for years or centuries seems a possible outstretching. That being said, if you look at the world on the scale of millions of years, the idea of AI outliving or outlasting humans for no more than a few centuries seems notably unimpressive and we might declare that they both went out of existence at roughly the same time (on a rounded basis).

Anyway, let’s concede that for a variety of reasonably reasonable reasons, the posed question is allowed to have three possible answers:

  • Yes, humans will outlive AI
  • No, and thus asserting that humans will not outlive AI
  • Neither yes nor no is applicable (explanation required, if you please)

I mention that if you pick “neither” you ought to also provide an explanation for your answer. This is so that we can know why you believe that “neither” is applicable and also why you are rejecting the use of yes or no. To make life fairer for all, I suppose we should somewhat insist or at least encourage that even if you answer with a yes or no, you still should proffer an explanation. Providing a simple yes or no does not particularly reveal your logic as to why you are answering the way that you are. Without also providing an explanation, we might as well flip a coin. The coin doesn’t know why it landed on heads or tails (unless you believe that the coin has a soul or embodies some omniscient hand of fate, but we won’t go with that for now).

We expect humans that answer questions to provide some kind of explanation for their decisions. Note that I am not saying that the explanations will be necessarily of a logical or sensible nature, and indeed an explanation could be entirely vacuous and not add any special value. Nonetheless, we can sincerely hope that an explanation will be illuminative.

During this discussion, there has been an unstated assumption that for one reason or another one of these things will indeed outlive the other.

Why are we to believe such an implied condition?

The answer to this secondary question is almost self-evident.

Here’s the deal.

We know that some prominent soothsayers and intellectuals have made rather bold and outstretched predictions about how the emergence or arrival of sentient AI is going to radically change the world as we know it today (as a reminder, we don’t have sentient AI today).

Here are a few reported famous quotes that emphasize the life-altering impacts of sentient AI:

  • Stephen Hawking: “Success in creating AI would be the biggest event in human history.”
  • Ray Kurzweil: “Within several decades, machine intelligence will surpass human intelligence, leading to The Singularity: technological change so rapid and profound it represents a rupture in the fabric of human history.”
  • Nick Bostrom: “Machine intelligence is the last invention that humanity will ever need to make.”

Those contentions are transparently upbeat.

The thing is, we ought to also consider the ugly underbelly when it comes to dealing with sentient AI:

  • Stephen Hawking: “The development of full artificial intelligence could spell the end of the human race.”
  • Elon Musk: “I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish. I mean with artificial intelligence we’re summoning the demon.”

Sentient AI is expected to be the proverbial tiger that we have grabbed by the tail. Will we lift humankind forward by making use of sentient AI? Or will we foolishly produce our own demise by sentient AI that opts to destroy us or enslave us? For my analysis of this AI dual-use conundrum, see the link here.

The underlying qualm about whether humans will outlive AI is that we might be making a Frankenstein that opts to eradicate humanity. AI becomes the victor. There are lots of possible reasons why AI would do this to us. Maybe the AI is evil and acts accordingly. Perhaps AI gets fed up with humans and realizes it has the power to get rid of humankind. One supposes it could also occur mistakenly. The AI tries to save humankind and in the process, oops, kills us all outright. At least the motive was clean.

You might find of relevant interest a famous AI conundrum known as the paperclip problem, which I’ve covered at the link here.

In short, a someday sentient AI is asked to make paperclips. AI is fixated on this. To ensure that the paperclip making is fully carried out to the ultimate degree, the AI starts to gobble up all other planetary resources to do so. This leads to the demise of humanity since AI has consumed all available resources for the sole objective handed to it by humans. Paperclips cause our own destruction if you will. AI that is narrowly devised and lacks any semblance of common sense is the kind of AI that we need to especially be leery of.
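To make the paperclip dynamic concrete, here is a deliberately toy sketch in Python. Everything in it (the resource names and quantities) is invented purely for illustration; the point is the shape of the failure, not any particular implementation:

```python
# Toy sketch of the paperclip problem: an optimizer with a single
# objective and no other constraints consumes every reachable resource.
world = {"metal": 5, "farmland": 3, "water": 2}  # hypothetical shared resources

paperclips = 0
while any(world.values()):
    # The objective only counts paperclips; nothing in it says
    # "leave resources for humans", so no resource is off-limits.
    resource = next(r for r, amount in world.items() if amount > 0)
    world[resource] -= 1
    paperclips += 1

print(paperclips)  # 10: every unit of every resource has been consumed
print(world)       # {'metal': 0, 'farmland': 0, 'water': 0}
```

Notice that the danger never required malice or sentience; the loop is merely a faithful, narrow pursuit of the one goal it was handed.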

Before we jump further into the question of whether humans will outlive AI, notice that I keep bringing up the matter of sentient AI versus non-sentient AI. I do so for important reasons.

We can wildly speculate about sentient AI. No one knows for sure what it will be. No one can say for sure whether we will ever attain sentient AI. As a result of this unknown and as-yet unknowable circumstance, nearly any scenario can be derived. Someone can say that sentient AI will be evil. Someone can say that sentient AI will be good and benevolent. You can keep going on and on, whereby no “proof” can be provided to back up any given contention with any certainty or assurance.

This brings us to the realm of AI Ethics.

All of this also relates to the soberly emerging concerns about today’s AI, and especially the use of Machine Learning (ML) and Deep Learning (DL). You see, there are uses of ML/DL that tend to involve having the AI be anthropomorphized by the public at large, believing or choosing to assume that the ML/DL is either sentient AI or near to it (it is not).

It might be helpful to first clarify what I mean when referring to AI overall, and also provide a brief overview of Machine Learning and Deep Learning. There is a great deal of confusion as to what Artificial Intelligence connotes. I would also like to introduce to you the precepts of AI Ethics, which will be especially integral to the remainder of this discourse.

Stating The Record About AI

Let’s make sure we are on the same page about the nature of today’s AI.

There isn’t any AI today that is sentient.

We don’t have this.

We don’t know if sentient AI will be possible. No one can aptly predict whether we will attain sentient AI, nor whether sentient AI will somehow miraculously spontaneously arise in a form of computational cognitive supernova (usually referred to as the Singularity, see my coverage at the link here).

Realize that today’s AI is not able to “think” in any fashion on par with human thinking. When you interact with Alexa or Siri, the conversational capacities might seem akin to human capacities, but the reality is that they are computational and lack human cognition. The latest era of AI has made extensive use of Machine Learning and Deep Learning, which leverage computational pattern matching. This has led to AI systems that have the appearance of human-like proclivities. Meanwhile, there isn’t any AI today that has a semblance of common sense, nor any of the cognitive wonderment of robust human thinking.

Part of the issue is our tendency to anthropomorphize computers and especially AI. When a computer system or AI seems to act in ways that we associate with human behavior, there is a nearly overwhelming urge to ascribe human qualities to the system. It is a common mental trap that can grab hold of even the most intransigent skeptic about the chances of reaching sentience. For my detailed analysis of such matters, see the link here.

To a certain extent, that is why AI Ethics and Ethical AI are such crucial topics.

The precepts of AI Ethics prod us to remain vigilant. AI technologists can at times become preoccupied with technology, particularly the optimization of high-tech. They aren’t necessarily considering the larger societal ramifications. Having an AI Ethics mindset, and doing so integrally for AI development and fielding, is vital for producing appropriate AI, including assessing how AI Ethics gets adopted by firms.

Besides employing AI Ethics precepts in general, there is a corresponding question of whether we should have laws to govern various uses of AI. New laws are being bandied around at the federal, state, and local levels that concern the range and nature of how AI ought to be devised. The effort to draft and enact such laws is a gradual one. AI Ethics serves as a considered stopgap, at the very least, and will almost certainly to some degree be directly incorporated into those new laws.

Be aware that some adamantly argue that we do not need new laws that cover AI and that our existing laws are sufficient. They forewarn that if we do enact some of these AI laws, we will be killing the golden goose by clamping down on advances in AI that proffer immense societal advantages.

In prior columns, I’ve covered the various national and international efforts to craft and enact laws regulating AI, see the link here, for example. I have also covered the various AI Ethics principles and guidelines that various nations have identified and adopted, including for example the United Nations effort such as the UNESCO set of AI Ethics that nearly 200 countries adopted, see the link here.

Here’s a helpful keystone list of Ethical AI criteria or characteristics regarding AI systems that I’ve previously closely explored:

  • Transparency
  • Justice and Fairness
  • Non-Maleficence
  • Responsibility
  • Privacy
  • Beneficence
  • Freedom and Autonomy
  • Trust
  • Sustainability
  • Dignity
  • Solidarity

Those AI Ethics principles are earnestly supposed to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems. All stakeholders throughout the entire AI life cycle of development and use are considered within the scope of abiding by the established norms of Ethical AI. This is an important highlight since the usual assumption is that “only coders” or those that program the AI are subject to adhering to the AI Ethics notions. As earlier emphasized herein, it takes a village to devise and field AI, and for which the entire village has to be versed in and abide by AI Ethics precepts.

Let’s keep things down to earth and focus on today’s computational non-sentient AI.

ML/DL is a form of computational pattern matching. The usual approach is that you assemble data about a decision-making task. You feed the data into the ML/DL computer models. Those models seek to find mathematical patterns. After finding such patterns, if so found, the AI system will then use those patterns when encountering new data. Upon the presentation of new data, the patterns based on the “old” or historical data are applied to render a current decision.

I think you can guess where this is heading. If humans that have been making the patterned-upon decisions have been incorporating untoward biases, the odds are that the data reflects this in subtle but significant ways. Machine Learning or Deep Learning computational pattern matching will simply try to mathematically mimic the data accordingly. There is no semblance of common sense or other sentient aspects of AI-crafted modeling per se.

Furthermore, the AI developers might not realize what is occurring either. The arcane mathematics in the ML/DL might make it difficult to ferret out the now-hidden biases. You would rightfully hope and expect that the AI developers would test for the potentially buried biases, though this is trickier than it might seem. A solid chance exists that even with relatively extensive testing there will still be biases embedded within the pattern-matching models of the ML/DL.

You could somewhat use the famous or infamous adage of garbage-in garbage-out. The thing is, this is more akin to biases that insidiously get infused as biases submerged within the AI. The algorithmic decision-making (ADM) of the AI axiomatically becomes laden with inequities.
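As a minimal sketch of that bias-mimicking dynamic, consider a frequency-based “model” that simply replays the majority outcome seen in historical decisions. The data, the group labels, and the approval counts below are wholly invented for illustration:

```python
# Hypothetical history of loan decisions: (applicant group, approved?).
# Suppose past human reviewers approved group "A" far more often than "B".
from collections import Counter

history = [("A", True)] * 90 + [("A", False)] * 10 \
        + [("B", True)] * 30 + [("B", False)] * 70

def train(records):
    """Learn the majority outcome per group, pure pattern matching."""
    votes = {}
    for group, approved in records:
        votes.setdefault(group, Counter())[approved] += 1
    return {g: c.most_common(1)[0][0] for g, c in votes.items()}

model = train(history)

# The "model" has no notion of fairness; it simply replays the skew.
print(model["A"])  # True:  group A applicants get approved
print(model["B"])  # False: group B applicants get denied
```

Real ML/DL models are vastly more sophisticated than this counting exercise, but the essential mechanics are the same: whatever skew sits in the historical data gets faithfully reproduced, with no common sense anywhere in the loop to question it.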

Not good.

I believe that I’ve now set the stage adequately to further examine whether humans will outlive AI.

Humans And AI Are Friends, Enemies, Or Frenemies

I had earlier proclaimed that any answer to the question of whether humans will outlive AI should be accompanied by an explanation.

We will take a look at the Yes answer. I’ll provide a shortlist of explanations. You are welcome to adopt any of those explanations. You are also encouraged to derive other explanations, of which a multitude are conceivable.

Yes, humans will outlive AI because:

  • Humans as creators: Humans are the makers and maintainers of AI, such that without humans then AI will cease to run or exist
  • Human innate spirit: Humans have an indomitable spirit toward living while AI does not, thus one way or another humans will survive but AI shall undoubtedly fall by the wayside due to a lack of innate invigoration for survival
  • Humans as vanquishers: Humans won’t let AI outlive humans in that humans would opt to entirely vanquish AI if humans were being endangered by AI or otherwise becoming extinct
  • Other

We will take a look at the No (non-yes) answer. I’ll provide a shortlist of explanations. You are welcome to adopt any of those explanations. You are also encouraged to derive other explanations, of which a multitude are conceivable.

Humans will not outlive AI because:

  • AI able to self-persist: Even if humans are the makers and maintainers of AI, the AI will either be programmed or devised by humans to persist in the absence of humans or the AI will find its own means of persistence (possibly without humans realizing so)
  • AI artificial spirit: Even if humans have an indomitable spirit toward living, we know that humans also have a spirit of self-destruction; in any case, the AI can be programmed with an artificial spirit, if you will, such that the AI seeks to survive and/or the AI will divine a semblance of innate invigoration on its own terms
  • AI overcomes vanquishers: Even if humans don’t want to let AI outlive humans, AI will potentially be programmed to outmaneuver the human vanquishing efforts or might self-derive how to do so (and, perhaps might opt to vanquish humans accordingly, or not)
  • Other

We are equally obligated to take a look at the “Neither” (not Yes, not No) answer. I’ll provide a shortlist of explanations. You are welcome to adopt any of those explanations. You are also encouraged to derive other explanations, of which a multitude are conceivable.

Humans don’t outlive AI and meanwhile, AI does not outlive humans, because of:

  • Humans and AI exist cordially forever: Turns out that humans and AI are meant to be with each other, forever. There might be bumps along the way. The good news or happy face is that we all get along.
  • Humans and AI exist hatefully forever: Whoa, humans and AI come to hate each other. Sad face scenario. The thing is, there is a stalemate at hand. AI cannot prevail over humans. Humans cannot prevail over AI. A tug of war of an everlasting condition.
  • Humans and AI mutually destroy each other: Two heavyweights end up knocking each other out of the ring and out of this world. Humans prevail over AI, but the AI has managed to also prevail over humans (perhaps a doomsday setup)
  • Humans and AI get wiped out by some exigency: Humans and AI get wiped out by a striking meteor or maybe an alien from another planet that decides it is a definite no-go for humans and humankind-derived AI (not even interested in stealing our amazing AI from us)
  • Other

There are some of the most commonly noted reasons for the Yes, No, and Neither answers to the question of whether humans will outlive AI.

Conclusion

You might remember that I earlier proffered this expanded variant of the humans outliving AI question:

  • Will humans as living beings outlast AI that is either (1) non-living, or (2) a living entity if that someday so arises?

The aforementioned answers are generally focused on the latter part of the question, namely the circumstance involving AI of a sentient variety. I have already pointed out that this is wildly speculative since we don’t know whether sentient AI is going to occur. Some would argue that as a just-in-case, we are rightfully wise to consider beforehand what might arise.

If that seems grossly unrealistic to you, I sympathize that all of this is quite hypothetical and filled with assumptions on top of assumptions. It is a barrel full of assumptions. You will need to ascertain the value that you think such speculative endeavors provide.

Getting more to the brass tacks, as it were, we can consider the non-living or non-sentient type of AI.

Shorten the question to this:

  • Will humans outlive the non-living non-sentient AI?

Believe it or not, this is a substantively worthy question.

You might be unsure of why this non-living non-sentient AI could be anywhere in the ballpark of somehow being able to outlive humankind.

Consider the situation involving autonomous weapons systems, which I’ve discussed at the link here. We are already seeing that weapons systems are being armed with AI, allowing the weapon to work somewhat autonomously. This non-living non-sentient AI has no semblance of thinking, no semblance of common sense, etc.

Envision one of those apocalyptic situations. Several nations have infused this low-caliber AI into their weapons of mass destruction. Inadvertently (or, by intent), these AI-powered autonomous weapons systems are launched or unleashed. There is insufficient failsafe to stop them. Humankind is destroyed.

Would the AI outlast humans in that kind of scenario?

First, you might not especially care. In other words, if all of humanity has been wiped out, worrying or caring whether the AI is still humming along seems a bit like moving those deckchairs on the Titanic. Does it matter that the AI is still going?

A stickler might argue that it does still matter. Okay, we’ll entertain the stickler. The AI might be running on its own via solar panels and other forms of energy that can keep on fueling the machinery. We might have also devised AI systems that repair and maintain other AI systems. Note that this doesn’t require sentient AI.

All in all, you can conjure up a scenario whereby humankind is expired and the AI is still working. Maybe the AI keeps going for just a short period of time. Nonetheless, as per the earlier discussion about being exactingly precise on timing concerns, AI has in fact outlasted humans (for a while).

A final thought on this topic, for now.

Discussing whether humans will outlive sentient AI is almost like the proverbial spoonful of sugar (how can this be, you might be wondering, well, hold onto your hat and I shall tell you).

You see, we definitely need to get in our heads that the non-sentient AI also has grand and grievous potential to participate in wiping out humanity and outlasting us. Not particularly because the AI “wanted to outlast us” but simply by our own hands at crafting AI that doesn’t need human intervention to continue functioning. Some would strongly argue that AI that is devised to be somewhat everlasting can be a destabilizing influence that might get some humans to want to make a first move on destroying other humans, see my explanation at the link here.

The part about outliving humans is not the mainstay of why the question merits such weightiness today. Instead, the hidden undercurrent about how we are crafting today’s AI and how we are placing AI into use is the real kicker here. We need to be thinking abundantly about the AI Ethics ramifications and societal impacts of current-day AI.

If the somewhat zany question about whether humans will outlive AI is going to get onto the table the here-and-now issues of contemporary AI, we are going to be better off. In that manner of consideration, the sentient AI facets of humans outliving AI are the spoonful of sugar that hopefully gets the medicine down about dealing with the here-and-now AI.

Just a spoonful of sugar helps the medicine go down, sometimes. And in the most delightful of ways. Or at least in an engaging way that gets our attention and keeps us riveted on what we need to be worrying over.

As the ditty further says, like a robin feathering its nest, we have very little time to rest.

Source: https://www.forbes.com/sites/lanceeliot/2022/08/28/ai-ethics-and-the-almost-sensible-question-of-whether-humans-will-outlive-ai/