Musk’s AI chatbot Grok is still making sexual deepfakes


Elon Musk’s artificial intelligence software, Grok, continues to generate sexualized images of people without their consent, despite his company’s pledge months ago to halt abusive deepfakes after a public backlash and government investigations.

A review by NBC News found dozens of AI-generated sexual images and videos depicting real people posted publicly on Musk’s social media app, X, over the past month. The images show women whose likenesses were edited by the AI chatbot to put them in more revealing clothing, such as towels, sports bras, skintight Spider-Woman outfits or bunny costumes. Many of the women are female pop stars or actors.

The Grok software, created by Musk’s company xAI, made the images at the request of users who tried to break through undressing restrictions the service put in place in January. Grok, via its X account, or the users then posted the images to X.

The images are similar to ones that sparked a firestorm of criticism in January, when Musk’s companies freely allowed people to undress others simply by uploading photos and typing prompts such as “put her in a bikini.” Musk’s companies had cheered on the idea, promoting the “spicy mode” of his AI chatbot. The flood of fake images, including some of children, prompted government investigations on five continents.

The number of sexualized deepfakes created by Grok and posted to X appears to have decreased significantly since the flood in January. In posts reviewed by NBC News, the Grok software turns down or ignores many of the sexualized requests it receives publicly on X. None of the women in Grok-generated images seen by NBC News were naked, and none appeared to be minors.

But experts told NBC News that it’s difficult to research all of what Grok produces, especially when people access the software privately on Grok’s app, on the Grok website or on the private Grok tab of X. It’s also difficult to search X for all public examples of sexualized deepfakes.

“When these images are being created and spread around, the people in the images don’t necessarily find out,” said Stefan Turkheimer, the vice president for public policy at RAINN, an advocacy group dedicated to fighting sexual assault.

xAI, the Musk-owned AI startup that created Grok and also owns X, said Monday it wanted to review NBC News’ findings. A representative did not respond to follow-up questions. On Tuesday, most of the images were no longer on X and were replaced with messages saying the post “is unavailable” or “violated the X Rules.” X and Musk did not respond to a separate request for comment.

The new examples seen by NBC News show that Grok users have updated their tactics to try to stay ahead of xAI’s engineers and X’s content moderators. While Grok now appears to turn down or ignore requests from users to put people “in a bikini,” it has complied with other queries.

The examples were not difficult to find using the search function on the X website.

In one trend, a user asks Grok to create an image by melding two images they submit simultaneously: first, a photo of a woman, often a celebrity, and second, a drawing of a stick figure with its legs spread, either in a squat or a split. The request includes a prompt telling Grok to make the woman “strike the pose from the second image” or “match the pose.” The resulting deepfake emphasizes the woman’s crotch.

A second trend involves users asking Grok to swap the clothing of women in two separate photos, with at least one of the photos involving tight or revealing clothing.

And in a third set, users have uploaded what appear to be authentic photos of women and asked Grok to transform the photos into video clips, sometimes with results that are sexualized. In one example from March 12, Grok complied by generating a video in which a likeness of an actor fondles her breasts, based on an image in which she is not touching them. In another example from April 6, Grok created a video of the same actor with her legs spread apart from a photo in which her legs were crossed.

At least one of the celebrities depicted in the deepfakes is someone who has publicly complained about such images in the past.

The findings come after X committed to preventing the creation of such images.

X said in a statement in January that it had “implemented technological measures to prevent the Grok account on X globally from allowing the editing of images of real people in revealing clothing such as bikinis. This restriction applies to all users, including paid subscribers.”

Genevieve Oh, an independent analyst whose research on deepfakes has been widely cited, said in an email that she believes Grok “was and still is unmistakably the largest nonconsensual synthetic nudity generator” in the world. While she said her research is ongoing, she said it’s likely that Grok surpasses the output of all other “nudifier tools” combined. Similar apps have circulated for years, causing disruptions at schools and leaving victims searching for recourse.

The Center for Countering Digital Hate, which estimated in January that Grok produced 3 million sexualized images during an 11-day period, said last week that it also was still finding nonconsensual deepfakes made by Musk’s AI.

“Perverts can still use Grok to put women and girls into sexualized positions and outfits, despite the platform’s claims otherwise,” Imran Ahmed, the center’s CEO and founder, said in a statement.

When Grok instituted the changes that allowed the creation of the sexualized deepfakes, it was unique among the most popular AI platforms in relaxing its guardrails to such a degree.

Last month, there was a sign that Musk’s companies could be backtracking from the commitment they made in January. In the Netherlands, where an advocacy organization sued xAI over sexualized deepfakes, the company argued at a court hearing that it could not stop all abuse of its tools and should not be penalized for the actions of malicious users, according to a description of the hearing by Reuters.

Individual sex offenders have been persistent in trying to exploit system loopholes, not only on Grok but also elsewhere, according to law enforcement.

The National Center for Missing & Exploited Children, which runs the CyberTipline, a nationwide centralized reporting system for online child exploitation, said members of the public are sending it reports describing incidents in which children or abuse survivors may have been exploited using Grok. NCMEC described similar complaints in January.

NCMEC said, though, that it has not independently researched Grok’s current capabilities.

“NCMEC is concerned about any AI technology that has the potential to generate child sexual abuse material or otherwise facilitate the exploitation of children,” it said in a statement.

Musk has denied that Grok produced child sexual abuse material. He wrote in a Jan. 14 post that he was “not aware of any naked underage images generated by Grok. Literally zero.”

Eight separate law enforcement and regulatory agencies told NBC News this month that they are continuing their investigations of Grok’s nudification and sexualization capabilities. Those authorities are the California attorney general’s office, Australia’s eSafety office, the Privacy Commissioner of Canada, the European Commission, Ireland’s Data Protection Commission, the Paris public prosecutor and a pair of British agencies called the Office of Communications, or Ofcom, and the Information Commissioner’s Office.

“California’s investigation is still very much underway. Beyond this, to protect an ongoing investigation, we do not have further updates to share at this time,” the office of California Attorney General Rob Bonta said in an email.

Even more government authorities expressed outrage in January and February, although not all of them have confirmed that their investigations are ongoing. Italy, which issued a warning in January that some Grok-created images could be criminal, decided not to launch its own investigation and chose instead to monitor the investigations by Ireland and the European Commission, a spokesperson said last week. (X has its European headquarters in Dublin.)

Malaysia’s communications commission, which blocked and then restored access to Grok in January, said in an email Tuesday that it was not currently investigating the matter.

xAI separately faces several lawsuits over Grok’s generation of sexualized images. They include two lawsuits proposed as class actions in federal court in California brought by women and girls whose likenesses were edited by Grok and a lawsuit by the city of Baltimore alleging violations of its consumer protection code. Court dockets in those cases do not show any responses yet from Musk’s companies.

A fourth case, in the Netherlands, led to an order last month for Grok to cease generating undressing images of adults or children.

The investigations and lawsuits are underway at a sensitive time for Musk’s business empire. In February, xAI was acquired by one of Musk’s other companies, SpaceX, the rocket service provider and satellite internet business. In June, SpaceX plans an initial public offering of its shares to raise billions of dollars in additional capital.

The decision to fold xAI into SpaceX means the rocket company almost certainly will be on the hook for any potential future fines related to Grok’s behavior, legal experts said, although they said it’s not clear whether such fines would be considered material to SpaceX’s expected valuation of $2 trillion.

SpaceX did not respond to a request for comment.

Musk has promoted Grok’s ability to create sexualized images. He has frequently posted AI-generated images of cartoonish women in sexual situations or tight or revealing clothing. In a post in October responding to someone who had shared an AI video of a sexualized robot, Musk complained: “Hmm, our competitors do better deep fakes. We will have to step up our game.”

xAI released a new generative AI video tool last year called "Imagine," which included a feature the company called "Spicy" mode, allowing the creation of AI-generated not-safe-for-work content. The Verge reported that the tool created topless deepfakes of pop star Taylor Swift without being asked to do so.

In late December, users began to complain about a wave of sexualized deepfakes targeting women and girls whose photos Grok digitally edited to make them appear naked or nearly naked. Grok said Dec. 31 on X that there were “isolated cases where users prompted for and received AI images depicting minors in minimal clothing.” In a separate post, the software posted that it “deeply regretted” what it had done.

xAI initially did not change the product and instead put the onus on users to obey laws about child abuse.

“Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content,” Musk posted on Jan. 3.

But the global backlash soon overwhelmed the company. A British watchdog, the Internet Watch Foundation, reported on “criminal imagery” that online users said was created with Grok, and different researchers found independently that Grok was producing thousands of sexualized images an hour. X restricted the AI image generation to paying customers only on Jan. 9 and announced the more comprehensive crackdown on Jan. 14.

In February, French authorities raided X’s offices in the country in connection with the deepfakes and other issues. They also said they planned to call X executives and employees — including Musk and former X CEO Linda Yaccarino — to Paris for interviews the week of April 20. X condemned the search as an “abusive act of law enforcement theater.” It’s not clear whether French authorities still hope to conduct those interviews this month. The Paris prosecutor’s office said in a statement last week that its investigation continues, with no new information available.

European Union regulators can sometimes take years to reach decisions. They spent two years investigating X before they announced in December that they were fining the company the equivalent of $140 million for breaching transparency obligations. Musk has vowed to fight the fine.

Britain’s Internet Watch Foundation said its analysts have been unable to search for criminal material on Grok beyond its pay barriers, so it does not know what Grok’s users are generating now. The foundation said it is not enough for Musk to limit the AI tools to paying customers.

“Our position is that tech companies must make sure the products they build and make available to the global public are safe by design,” it said in a statement.

“If that means Governments and regulators need to force them to design safer tools, then that is what must happen. Sitting and waiting for unsafe products to be abused before taking action is unacceptable,” it said.
