The key to detecting deepfakes may lie within deep space

A series of deepfake eyes showing inconsistent reflections in each eye.
(Image credit: Adejumoke Owolabi)

The eyes, the old saying goes, are the window to the soul — but when it comes to deepfake images, they might be a window into unreality.

That’s according to new research conducted at the University of Hull in the U.K., which applied techniques typically used in observing distant galaxies to determine whether images of human faces were real or not. The idea was sparked when Kevin Pimbblet, a professor of astrophysics at the University, was studying facial imagery created by artificial intelligence (AI) art generators Midjourney and Stable Diffusion. He wondered whether he could use physics to determine which images were fake and which were real. “It dawned on me that the reflections in the eyes were the obvious thing to look at,” he told Space.com. 

Deepfakes are fake images or videos of people, created by training AI on mountains of data. When generating a human face, the AI uses that training to build an unreal face pixel by pixel; these faces can be invented from scratch or based on actual people, and in the latter case they are often used for malicious purposes. Because real photos contain reflections, the AI adds them in, but there are often subtle differences between the two eyes.

Keen to follow his instinct, Pimbblet recruited Adejumoke Owolabi, a master's student at the university, to help develop software that could quickly scan the eyes of subjects in various images to see whether those reflections checked out. The pair built a program to assess the differences between the left and right eyeballs in photos of people, real and unreal. The real faces came from a diverse dataset of 70,000 faces on Flickr, while the deepfakes were created by the AI underpinning This Person Does Not Exist, a website that generates realistic images of people who you would think exist, but do not.

Related: Apollo 11 ‘disaster’ video project highlights growing danger of deepfake tech

The effect is obvious once you know to look for it: I refreshed This Person Does Not Exist five times and studied the reflections in the eyes. The faces were impressive; at a glance, nothing stood out to suggest they were fake.

Five separate photos of different people's faces are laid in a row. They feature a boy with a bowl cut, a middle-aged white man with brown hair and an orange collar, a man with dark hair and black glasses, a man with long, thin sideburns, and a little girl who is smiling.

Five images of people from ThisPersonDoesNotExist. (Image credit: Adejumoke Owolabi)

Closer inspection revealed some near-imperceptible differences in the lighting of the two eyeballs. They didn't quite match. In one case, the AI generated a man wearing glasses, and the reflection in his lenses also seemed a little off.


What my eye couldn’t quantify, however, was how different the reflections were. To make such an assessment, you’d need a tool that can identify violations of the precise rules of optics. This is where the software Pimbblet and Owolabi developed comes in. They used two techniques from the astronomy playbook: “CAS parameters” and the “Gini coefficient.”

A photo of a white woman with dark red-hued hair, next to a black man with an earring on his left ear. Below them, enlarged views of their irises.

In this image, the person on the left (Scarlett Johansson) is real, while the person on the right is AI-generated. Their eyeballs are depicted underneath their faces. The reflections in the eyeballs are consistent for the real person, but incorrect (from a physics point of view) for the fake person. (Image credit: Adejumoke Owolabi)

In astronomy, CAS parameters can determine the structure of a galaxy by examining the Concentration, Asymmetry and Smoothness (or “clumpiness”) of a light profile. For instance, an elliptical galaxy will have a high C value and low A and S values — its light is concentrated within its center, but it has a more diffuse shell, which makes it both smoother and more symmetrical. However, the pair found CAS wasn’t as useful for detecting deepfakes. Concentration works best with a single point of light, but reflections often appear as patches of light scattered across an eyeball. Asymmetry suffers from a similar problem — those patches make the reflection asymmetrical and Pimbblet said it was hard to get this measure “right”. 
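For a sense of how one of these morphology measures translates into pixels, here is a rough Python sketch of the asymmetry term as it is commonly defined in galaxy studies: compare the light distribution against itself rotated by 180 degrees. The function below is an illustrative assumption, not the pair's pipeline, and it ignores refinements such as re-centering and background correction.

```python
import numpy as np

def asymmetry(image: np.ndarray) -> float:
    """Rough CAS-style asymmetry: the fraction of light that changes when the
    image is rotated 180 degrees about its center. Zero means perfectly
    rotationally symmetric; scattered reflection patches push it higher."""
    img = image.astype(np.float64)
    rotated = np.rot90(img, k=2)  # 180-degree rotation
    total = np.abs(img).sum()
    if total == 0:
        return 0.0
    # One common normalization divides by twice the total flux
    return float(np.abs(img - rotated).sum() / (2.0 * total))
```

On a smooth, centrally concentrated galaxy this value stays low; on an eyeball whose reflection is a scatter of bright patches it climbs, which is exactly why the team found the measure hard to pin down for this task.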

Using the Gini coefficient worked a lot better. This is a way to measure inequality across a spectrum of values. It can be used to calculate a range of results related to inequality, such as the distribution of wealth, life expectancy or, perhaps most commonly, income. In this case, Gini was applied to pixel inequality.

“Gini takes the whole pixel distribution, is able to see if the pixel values are similarly distributed between left and right, and is a robust non-parametric approach to take here,” Pimbblet said.
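To make the idea concrete, here is a minimal Python sketch of a Gini calculation applied to pixel values. This is not the researchers' code; the `gini` function and the synthetic eye crops are assumptions for illustration, though the formula itself is the standard one used for any distribution of values.

```python
import numpy as np

def gini(pixels: np.ndarray) -> float:
    """Gini coefficient of a pixel distribution: 0 = perfectly even, 1 = maximally unequal."""
    x = np.sort(pixels.astype(np.float64).ravel())
    n = x.size
    total = x.sum()
    if n == 0 or total == 0:
        return 0.0
    ranks = np.arange(1, n + 1)
    # Standard sorted-rank form of the Gini coefficient
    return float((2.0 * np.sum(ranks * x)) / (n * total) - (n + 1.0) / n)

# Hypothetical usage: left_eye and right_eye would be grayscale crops of the two
# eyeballs from the same face, produced by whatever eye detector you prefer.
rng = np.random.default_rng(0)
left_eye = rng.integers(0, 256, size=(32, 32))   # stand-in data for illustration only
right_eye = rng.integers(0, 256, size=(32, 32))
print(f"Gini difference between eyes: {abs(gini(left_eye) - gini(right_eye)):.4f}")
```

A real face should give similar Gini values for both eyes, so a large gap between them is a flag worth investigating; where exactly to draw that threshold is something the researchers would have tuned against their labeled images.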

The work was presented at the Royal Astronomical Society meeting at the University of Hull on July 15, but is yet to be peer-reviewed and published. The pair are working to turn the study into a publication. 

Pimbblet says the software is merely a proof of concept at this stage. It still flags false positives and false negatives, with an error rate of about three in 10, and it has only been tested on a single AI model so far. “We have not tested against other models, but this would be an obvious next step,” Pimbblet says.

Dan Miller, a psychologist at James Cook University in Australia, said the findings from the study offer useful information, but cautioned it may not be especially relevant to improving human detection of deepfakes — at least not yet, because the method requires sophisticated mathematical modeling of light. However, he noted “the findings could inform the development of deepfake detection software.”

And software does appear necessary, given how sophisticated the fakes are becoming. In a 2023 study, Miller assessed how well participants could spot a deepfake video, providing one group with a list of visual artifacts, like shadows or lighting, that they should look for. The intervention didn't work at all: subjects spotted the fakes no better than a control group who hadn't been given the tips (which suggests my personal mini-experiment above could be an outlier).

The entire field of AI feels like it has been moving at lightspeed since ChatGPT dropped in late 2022. Pimbblet suggests the pair’s approach would work with other AI image generators, but notes it’s also likely newer models will be able to “solve the physics lighting problem.”

This research also raises an interesting question: If AI can generate reflections that can be assessed with astronomy-based methods… could AI also be used to generate entire galaxies? 

Pimbblet says there have been forays into that realm. He points to a study from 2017 that assessed how well “generative adversarial networks,” or GANs (the technology underpinning face generators like This Person Does Not Exist), could recapitulate galaxies from degraded data. Telescopes observing from Earth and from space are limited by noise and background light, which cause blurring and loss of quality (even stunning James Webb Space Telescope images require some cleaning up).

In the 2017 study, researchers trained a large AI model on images of galaxies, then used the model to try to recover degraded imagery. It wasn't always perfect, but it was certainly possible to recover features of the galaxies from the low-quality images.

A preprint study from 2019 similarly used GANs to simulate entire galaxies.

The researchers suggest the work would be useful as huge amounts of data pour in from missions observing the universe. There's no way for humans to look through all of it, so we may need to turn to AI. Galaxies generated by AI could then, in turn, be used to train AI to hunt for specific kinds of actual galaxies in huge datasets. It all sounds a bit dystopian, but, then again, so does detecting unreal faces by subtle inconsistencies in the reflections in their eyeballs.


Jackson Ryan is a science journalist hailing from Adelaide, Australia, with a focus on longform and narrative non-fiction work. He currently serves as the President of the Science Journalists Association of Australia. Between 2018 and 2023, he was the science editor at CNET. In 2022, he won the Eureka Prize for Science Journalism, which Aussies dub the “Science Oscars.” Before all that, he got his doctorate in molecular biology and once hosted a kids TV show on the Disney Channel, called “GameFest.” (Good luck finding it.) He lives with a collection of more than 70 Christmas sweaters and zero pets, the latter of which he hopes to rectify one day.
