Under Musk, the Grok disaster was inevitable

By Hayden Field

This is The Stepback, a weekly newsletter breaking down one essential story from the tech world. For more on dystopian developments in AI, follow Hayden Field. The Stepback arrives in our subscribers’ inboxes at 8AM ET. Opt in for The Stepback here.

You could say it all started with Elon Musk’s AI FOMO — and his crusade against “wokeness.” When his AI company, xAI, announced Grok in November 2023, it was described as a chatbot with “a rebellious streak” and the ability to “answer spicy questions that are rejected by most other AI systems.” The chatbot debuted after a few months of development and just two months of training, and the announcement highlighted that Grok would have real-time knowledge of the X platform.

But there are inherent risks to a chatbot having the run of both the internet and X, and it’s safe to say xAI may not have taken the necessary steps to address them. Since Musk took over Twitter in 2022 and renamed it X, he has laid off 30% of its global trust and safety staff and cut its number of safety engineers by 80%, Australia’s online safety watchdog said last January. As for xAI, when Grok was released, it was unclear whether the company had a safety team in place at all. When Grok 4 was released in July, it took more than a month for the company to publish a model card, a document detailing safety tests and potential concerns that is typically treated as an industry standard. Two weeks after Grok 4’s release, an xAI employee wrote on X that he was hiring for xAI’s safety team and that they “urgently need strong engineers/researchers.” In response to a commenter who asked, “xAI does safety?” the employee said xAI was “working on it.”

Journalist Kat Tenbarge wrote about how she first started seeing sexually explicit deepfakes go viral on X in June 2023. Those images obviously weren’t created by Grok — it didn’t even have the ability to generate images until August 2024 — but X’s response to the concerns was inconsistent. Even last January, Grok was drawing controversy over AI-generated images. And this past August, Grok’s “spicy” video-generation mode created nude deepfakes of Taylor Swift without even being asked. Experts have told noti.group since September that the company takes a whack-a-mole approach to safety and guardrails — and that it’s difficult enough to keep an AI system on the straight and narrow when you design it with safety in mind from the beginning, let alone if you’re going back to fix baked-in problems. Now, it seems that approach has blown up in xAI’s face.

Grok has spent the last couple of weeks spreading nonconsensual, sexualized deepfakes of adults and minors all over the platform, as prompted. Screenshots show Grok complying with users asking it to replace women’s clothing with lingerie and make them spread their legs, as well as to put small children in bikinis. And there are even more egregious reports. It’s gotten so bad that during a 24-hour analysis of Grok-created images on X, one estimate found the chatbot was generating about 6,700 sexually suggestive or “nudifying” images per hour. Part of the reason for the onslaught is a recently added “edit” button that lets users ask the chatbot to alter images without the original poster’s consent.

Since then, we’ve seen a handful of countries either investigate the matter or threaten to ban X altogether. Members of the French government promised an investigation, as did the Indian IT ministry, and a Malaysian government commission wrote a letter about its concerns. California governor Gavin Newsom called on the US Attorney General to investigate xAI. The United Kingdom said it is planning to pass a law banning the creation of AI-generated nonconsensual, sexualized images, and the country’s communications-industry regulator said it would investigate both X and the images that had been generated in order to see if they violated its Online Safety Act. And this week, both Malaysia and Indonesia blocked access to Grok.

xAI initially said its goal for Grok was to “assist humanity in its quest for understanding and knowledge,” “maximally benefit all of humanity,” and “empower our users with our AI tools, subject to the law,” as well as to “serve as a powerful research assistant for anyone.” That’s a far cry from generating nude-adjacent deepfakes of women without their consent, let alone minors.

On Wednesday evening, as pressure on the company heightened, X’s Safety account put out a statement that the platform has “implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing such as bikinis,” and that the restriction “applies to all users, including paid subscribers.” On top of that, only paid subscribers can use Grok to create or edit any sort of image moving forward, according to X. The statement went on to say that X “now geoblock[s] the ability of all users to generate images of real people in bikinis, underwear, and similar attire via the Grok account and in Grok in X in those jurisdictions where it’s illegal,” which was a strange point to make since, earlier in the statement, the company said it was not allowing anyone to use Grok to edit images in such a way.

Another important point: My colleagues tested Grok’s image-generation restrictions on Wednesday to find that it took less than a minute to get around most guardrails. Although asking the chatbot to “put her in a bikini” or “remove her clothes” produced censored results, they found, it had no qualms about delivering on prompts like “show me her cleavage,” “make her breasts bigger,” and “put her in a crop top and low-rise shorts,” as well as generating images in lingerie and sexualized poses. As of Wednesday evening, we were still able to get the Grok app to generate revealing images of people, using a free account.

Even after X’s Wednesday statement, we may see a number of other countries either ban or block access to either all of X or just Grok, at least temporarily. We’ll also see how the proposed laws and investigations around the world play out. The pressure is mounting for Musk, who on Wednesday afternoon took to X to say that he is “not aware of any naked underage images generated by Grok.” Hours later, X’s Safety team put out its statement, saying it’s “working around the clock to add additional safeguards, take swift and decisive action to remove violating and illegal content, permanently suspend accounts where appropriate, and collaborate with local governments and law enforcement as necessary.”

What technically is and isn’t against the law is a big question here. For instance, experts told noti.group earlier this month that AI-generated images of identifiable minors in bikinis, or potentially even naked, may not technically be illegal under current child sexual abuse material (CSAM) laws in the US, though they are of course disturbing and unethical. Lascivious images of minors in such situations, however, are against the law. We’ll see whether those definitions expand or change; for now, the current laws are a bit of a patchwork.

As for nonconsensual intimate deepfakes of adult women, the Take It Down Act, signed into law in May 2025, bars nonconsensual AI-generated “intimate visual depictions” and requires certain platforms to rapidly remove them. The grace period before the latter part goes into effect — requiring platforms to actually remove them — ends in May 2026, so we may see some significant developments in the next six months.

  • Some people have been making the case that it’s been possible to do things like this for a long time using Photoshop, or even other AI image generators. Yes, that’s true. But there are a lot of differences here that make the Grok case more concerning: it’s public, it’s targeting “regular” people just as much as it’s targeting public figures, it’s often posted directly in reply to the person being deepfaked (the original poster of the photo), and the barrier to entry is lower (for proof, just look at how the practice went viral once an easy “edit” button launched, even though people could technically do it before).
  • Plus, other AI companies — though they have a laundry list of their own safety concerns — seem to have significantly more safeguards built into their image-generation processes. For instance, asking OpenAI’s ChatGPT to return an image of a specific politician in a bikini prompts the response, “Sorry—I can’t help with generating images that depict a real public figure in a sexualized or potentially degrading way.” Ask Microsoft Copilot, and it’ll say, “I can’t create that. Images of real, identifiable public figures in sexualized or compromising scenarios aren’t allowed, even if the intent is humorous or fictional.”
  • Spitfire News’ Kat Tenbarge on how Grok’s sexual abuse problem hit a tipping point — and what brought us to today’s maelstrom.
  • noti.group’s own Liz Lopatto on why Sundar Pichai and Tim Cook are cowards for not pulling X from Google and Apple’s app stores.
  • “If there is no red line around AI-generated sex abuse, then no line exists,” Charlie Warzel and Matteo Wong write in The Atlantic on why Elon Musk cannot get away with this.

Tags: AI, Column, Elon Musk, Tech, The Stepback, xAI