
How Real is too Real for Artificial Intelligence, and Does it Matter?

As NLP models and neural networks become more and more competent, there is a valid discussion to be had about what exactly we want from AI and whom the conversation about it really serves.

Recently, there has been some controversy surrounding the possible sentience of AI. This isn’t a new topic of conversation, but the nature of the smoking gun that reignited it this time makes the episode far more remarkable than usual. The AI at the eye of this storm is Google’s LaMDA (Language Model for Dialogue Applications).

The LaMDA Controversy

The whole debacle started when Blake Lemoine, a senior software engineer in Google’s Responsible AI division, declared that the AI was sentient. He went on to clarify that he had personally confirmed this during his routine conversations with the AI, part of a job in which he was tasked with ensuring that the AI didn’t say racially (or otherwise) insensitive things.

These claims carried the weight of his position at the company, and transcripts of one of the conversations in question became publicly available soon after, chilling readers as the AI described itself as not just sentient but capable of experiencing distinctly human emotions.

The company rejected these claims and placed Lemoine on administrative leave; he was later fired for violating its employment and data security policies.

A Perpetual Discussion

While it might seem novel given the source of the claim, the idea itself is not new. Conversations about AI and the need to guard against making it “too advanced” (a nebulous idea that means different things to different people) have circulated in the public zeitgeist since before AI was even a feasible undertaking.

Discussions have ranged from science fiction theory-crafting and tinfoil-hat conspiracy theories to genuine, credible debate about the ethics and limitations of artificial intelligence. As far back as the 1960s, primitive chatbots like ELIZA were drawing similar speculation about supposed sentience.

Of course, it seems laughable to those of us in the modern era that such basic responses caused that kind of apprehension, but only because we find ourselves so close to the real thing that we could plausibly start asking the same questions within our lifetimes. So much so that prominent figures like OpenAI CEO Sam Altman and Tesla’s Elon Musk regularly engage in serious discussion about the possible dangers of ever more capable AI. Musk in particular has voiced his misgivings so often that some observers find it difficult to discern how serious he actually is.

A Convenient Misdirection?

As noted earlier, the conversation about AI sentience has dominated the ethics discussion for as long as there has been speculation about artificial intelligence. Meanwhile, many more pressing issues tend to fall by the wayside in public discourse. While laypeople are captivated by the idea of thinking, feeling machines, the AI already in use exhibits problems that are far less visible.

While AI has come a long way, it is still far from infallible. Its issues range from false identifications and discriminatory behavior to outright computational errors during critical law enforcement investigations, and many checks and balances are still missing from the AI implementation process.

These issues are overshadowed by the sentience debate, leaving many of those involved free to gloss over the problems and present a rosy picture of the state of things to investors and potential victims alike.

It is worth noting that, whenever asked, experts with no direct financial incentive maintain that there is a large gulf between even the most powerful AI and anything that would fit a working definition of sentience or transcendent intelligence. This has always been the moderate position, and yet the conversation continues to spread. That raises the question of whom the misdirection serves.

Who Does it Benefit, Really?

While the case of Blake Lemoine was widely discussed, the background details took much longer to surface. Lemoine has been a member of several non-mainstream religious movements and describes himself as a Christian mystic. When pressed, he insisted that he “knew when he was talking to a person,” suggesting that his claims may not have rested on an intellectual basis.

According to Timnit Gebru, a former colleague of his, Lemoine was also seemingly prone to getting swept up in industry hype. Gebru also worked at Google until she was fired following a controversial paper she co-authored at the end of 2020, in which she critiqued the direction of the company’s use of ever-larger AI training data sets.

When discussing the Lemoine controversy, she also attributed much of the blame to tech company bigwigs, saying:

“He’s the one who’s going to face consequences, but it’s the leaders of this field who created this entire moment.”

Gebru, who understands and regularly discusses the misdirection inherent in the sentience debate, underscored the point by noting that just a week before Google rejected Lemoine’s claims, one of its vice presidents had written a piece for The Economist on the possibility of sentience in LaMDA. The contradiction shows us who really gains from the discussion about AI sentience.

Asking Critical Questions Before Time Runs Out

Rather than engaging in good-faith discussions about the potential for benefit or harm that further advancements in AI might bring, how likely is it that those with a financial stake in the technology will instead play up the public discourse for their own gain?

If possible sentience makes for a good tagline for investors to get behind, then so be it. If denying the possibility helps skirt potential ethics restrictions, why not?

The situation seems almost designed to guarantee shady mixed messaging. That is dangerous in any business, but it is more harmful in emerging tech than in any other field.

That concern goes double for anything involving the potential creation of sentient life, a prospect so fraught with ethical concerns that there might never be either permission or forgiveness.

More specifically, there needs to be a serious discussion about what the goal is in widening language training data sets to encompass ever more human communication. There is also a need to examine the ongoing improvements in natural language processing that allow AI to understand more subtle and nuanced prompts.

If these discussions are not had honestly, then those at the helm of development are liable to go the way of every cheesy science fiction corporation and cross a line they probably shouldn’t.