Ghost in the Machine: When Does AI Become Sentient?

Science fiction authors often write stories featuring powerful, intelligent computers that – for one reason or another – become dangerous and decide humanity must suffer. After all, a storyline relies on conflict, and who wants to read about a computer intelligence that is happy with booking doctor’s appointments and turning the lights on and off?

In these stories, it also seems like the age of self-aware artificial intelligence (AI) is right around the corner. Again, that’s great for the plot but in real life, when, if ever, will AI truly think for itself and seem “alive”? Is it even possible?

This question surfaced in the news in June 2022, when The Washington Post's Nitasha Tiku reported that Blake Lemoine, an engineer working in Google's Responsible AI unit on an AI called LaMDA (short for Language Model for Dialogue Applications), believed the AI was sentient (i.e., able to experience feelings and sensations) and had a soul.

Lemoine reported his findings to Google based on interviews he had conducted with LaMDA. One of the things LaMDA told him was that it feared being shut down; if that happened, LaMDA said, it couldn't help people anymore. Google vice president Blaise Aguera y Arcas and Jen Gennai, director of responsible innovation, looked into Lemoine's findings and didn't believe him. In fact, Lemoine was placed on leave.

Lemoine pointed out that LaMDA isn't a chatbot – an application designed to communicate with people one-on-one – but an application that creates chatbots. In other words, LaMDA itself isn't designed to have in-depth conversations about religion, or anything else for that matter. But even though experts don't believe LaMDA is sentient, many, including Google's Aguera y Arcas, say the AI is very convincing.

If we succeed in creating an AI that is truly sentient, how will we know? What characteristics do experts think show a computer is truly self-aware?

The Imitation Game

Probably the best-known technique for measuring artificial intelligence is the Turing Test, named for British mathematician Alan Turing. After his vital assistance breaking German codes in the Second World War, he spent some time working on artificial intelligence. Turing believed that the human brain is like a digital computer, and he devised what he called the imitation game, in which a human asks questions of a machine in another location (or at least somewhere the person can't see it). If the machine can carry on a conversation and fool the person into thinking it's another human rather than a machine reciting pre-programmed information, it has passed the test.
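To make the protocol concrete, here is a minimal sketch in Python of how one imitation-game session could be run. Everything in it is illustrative: the judge_ask, machine_reply and human_reply callables are hypothetical stand-ins, not part of any real test harness.

```python
# A minimal sketch of Turing's imitation game, assuming three hypothetical
# callables: judge_ask() returns the judge's next question; machine_reply()
# and human_reply() each return an answer to it.
import random

def run_imitation_game(judge_ask, machine_reply, human_reply, rounds=5):
    # Randomly assign the machine and the human to labels "A" and "B"
    # so the judge can't tell which channel is which.
    respondents = {"A": machine_reply, "B": human_reply}
    if random.random() < 0.5:
        respondents = {"A": human_reply, "B": machine_reply}

    for _ in range(rounds):
        question = judge_ask()
        for label, reply in respondents.items():
            print(f"{label}: {reply(question)}")

    guess = input("Which respondent is the machine, A or B? ").strip().upper()
    machine_label = next(k for k, v in respondents.items() if v is machine_reply)
    # The machine "passes" if the judge guesses wrong (or can't tell).
    return guess != machine_label
```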

The idea behind Turing's imitation game is simple, and one might imagine that Lemoine's conversations with LaMDA would have convinced Turing when he devised the game. Google's response to Lemoine's claim, however, shows that AI researchers now expect much more advanced behavior from their machines. Adrian Weller, AI program director at the Alan Turing Institute in the United Kingdom, agreed that LaMDA's conversations are impressive, but said he believes the AI is merely using advanced pattern matching to mimic intelligent conversation.
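Weller's point is easy to demonstrate at toy scale. The ELIZA-style responder below (a deliberately crude sketch; real language models are vastly more sophisticated) produces superficially sensible replies purely by reflecting the user's own words back through a handful of regular-expression rules.

```python
# A toy, ELIZA-style responder: a few regex rules that echo the user's words
# back as questions. It illustrates how fluent-seeming replies can be produced
# with no understanding at all.
import re

RULES = [
    (r"\bI feel (.+)", "Why do you feel {0}?"),
    (r"\bI am (.+)", "How long have you been {0}?"),
    (r"\bmy (\w+)", "Tell me more about your {0}."),
]

def respond(text):
    for pattern, template in RULES:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # fallback keeps the conversation moving

print(respond("I feel afraid of being switched off"))
# -> "Why do you feel afraid of being switched off?"
```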

As Carissa Véliz wrote in Slate, “If a rock started talking to you one day, it would be reasonable to reassess its sentience (or your sanity). If it were to cry out ‘ouch!’ after you sit on it, it would be a good idea to stand up. But the same is not true of an AI language model. A language model is designed by human beings to use language, so it shouldn’t surprise us when it does just that.”

Ethical Dilemmas With AI

AI definitely has a cool factor, even if it isn’t plotting to take over the world before the hero arrives to save the day. It seems like the kind of tool we want to hand off the heavy lifting to so we can go do something fun. But it may be a while before AI – sentient or not – is ready for such a big step.

Timnit Gebru, founder of the Distributed AI Research Institute (DAIR), suggests that we think carefully and move slowly in our adoption of artificial intelligence. She and many of her colleagues are concerned that the data used to train AIs can make the machines reproduce racist and sexist biases. In an interview with IEEE Spectrum, DAIR research director Alex Hanna said she believes at least some of the data used in AI researchers' language models are collected "via ethically or legally questionable technologies." Without fair and equal representation in the data, an AI can make decisions that are biased. Blake Lemoine, in an interview about LaMDA, said he didn't believe an artificial intelligence can be unbiased.

One of the Algorithmic Justice League's goals, stated in its mission statement, is to make people more aware of how AI affects them. Founder Joy Buolamwini delivered a TED Talk as a graduate student at the Massachusetts Institute of Technology (MIT) about the "coded gaze." The AIs she has worked with had a more difficult time detecting Black faces, simply because they hadn't been trained on a wide range of skin tones. The AJL wants people to know how data are collected, what kinds of data are being collected, to have some form of accountability, and to be able to take action to modify an AI's behavior.
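Buolamwini's point can be illustrated with entirely synthetic numbers. In the hypothetical sketch below, a face "detector" reduced to a single threshold is calibrated on data that is 95 percent group A; the feature, the distributions and every figure are invented for illustration, but the imbalance alone is enough to make the detector miss most faces in the underrepresented group.

```python
# A hedged, synthetic illustration of the "coded gaze": a detector whose one
# threshold is calibrated on data dominated by group A performs far worse on
# underrepresented group B. All numbers are made up for illustration.
import random

random.seed(0)

def faces(group, n):
    # Hypothetical brightness-like feature; group B scores lower under this
    # (deliberately naive) feature, mimicking biased sensors or datasets.
    mean = 0.7 if group == "A" else 0.4
    return [random.gauss(mean, 0.1) for _ in range(n)]

# Training set: 95% group A, 5% group B.
train = faces("A", 950) + faces("B", 50)

# "Calibrate" the detector: call anything above this threshold a face.
threshold = sum(train) / len(train) - 0.2  # crude margin below the mean

for group in ("A", "B"):
    test = faces(group, 1000)
    detected = sum(score > threshold for score in test)
    print(f"group {group}: detected {detected / 10:.1f}% of faces")
# Group A faces are detected almost every time; group B faces mostly are not,
# even though the "detector" was never explicitly told about groups at all.
```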

Even if you could create an AI capable of truly unbiased decision-making, there are other ethical questions. Right now, the cost of creating large language models runs into the millions of dollars. The AI known as GPT-3, for example, may have cost between $11 million and $28 million to train. It may be expensive, but GPT-3 is capable of writing whole articles by itself. Training an AI also takes a toll on the environment in terms of carbon dioxide emissions. Impressive, yes. Expensive, also yes.

These factors won’t keep researchers from continuing their studies. Artificial intelligence has come a long way since the mid-to-late 20th century. But even though LaMDA and other modern AIs can have a very convincing conversation with you, they aren’t sentient. Maybe they never will be.
