TOKYO, Oct 17 — Japanese authorities have made their first arrest over the use of generative artificial intelligence (AI) to produce and sell sexually explicit deepfake images of celebrities, The Asahi Shimbun reported today.
The Metropolitan Police Department said yesterday that Hiroya Yokoi, a 31-year-old office worker from Akita, was arrested on suspicion of producing and displaying obscene materials.
Yokoi is alleged to have created thousands of fake images of more than 260 female celebrities, including television personalities, actors, idols, and news announcers.
Police quoted Yokoi as saying: “The reaction from viewers was huge, and it became so popular that I realised I could make a lot of money.” He has admitted to the allegations.
According to authorities, Yokoi began the operation in October last year after seeing others profit from similar schemes, aiming to supplement his income for living expenses and student loan payments.
From January to June this year, he reportedly created and posted explicit images modelled on three female celebrities to an online communication site, where they were accessible to anyone.
Police suspect Yokoi promoted the scheme through social media accounts, directing users to a subscription site where around 20,000 deepfake images were available.
Subscribers could request custom images for a premium fee.
At least 50 people subscribed, earning Yokoi an estimated ¥1.2 million (RM34,000) over 11 months.
Yokoi reportedly had no professional training in AI or information technology, learning to create the images through online articles and videos.
Using free software that advertises the ability to “generate high-quality images in seconds,” he trained AI models on celebrity photos and used text prompts to expand his library.
The case reflects a wider global trend. A US security firm identified 95,820 deepfake videos online in 2023 — a 5.5-fold increase from four years ago — with 98 per cent being sexual in nature. Countries including the US, the UK, and South Korea have tightened regulations, while Japan currently lacks specific laws addressing sexual deepfakes.
A Japanese investigator told The Asahi Shimbun: “The creators may be doing this casually, but the victims suffer damage to their reputations and can be harmed mentally and financially. We have to put the brakes on businesses that misuse generative AI.”
Authorities face challenges under current Japanese law, as proving that an AI-generated image legally depicts a specific person requires objective evidence. Defamation investigations also require a complaint from the victim, yet images often spread before victims become aware of them.