Tuesday, April 7, 2026

Anthropic has new rules for a more dangerous AI landscape

in Technology

Anthropic has updated the usage policy for its Claude AI chatbot in response to growing concerns about safety. In addition to introducing stricter cybersecurity rules, Anthropic now specifies some of the most dangerous weapons that people should not develop using Claude.

Anthropic doesn’t highlight the changes to its weapons policy in the post summarizing the update, but a comparison between the company’s old usage policy and its new one reveals a notable difference. Anthropic previously prohibited the use of Claude to “produce, modify, design, market, or distribute weapons, explosives, dangerous materials or other systems designed to cause harm to or loss of human life”; the updated version expands on this by specifically prohibiting the development of high-yield explosives, along with chemical, biological, radiological, and nuclear (CBRN) weapons.

In May, Anthropic implemented “AI Safety Level 3” protection alongside the launch of its new Claude Opus 4 model. The safeguards are designed to make the model more difficult to jailbreak, as well as to help prevent it from assisting with the development of CBRN weapons.

In its post, Anthropic also acknowledges the risks posed by agentic AI tools, including Computer Use, which lets Claude take control of a user’s computer, as well as Claude Code, a tool that embeds Claude directly into a developer’s terminal. “These powerful capabilities introduce new risks, including potential for scaled abuse, malware creation, and cyber attacks,” Anthropic writes.

The AI startup is responding to these potential risks by folding a new “Do Not Compromise Computer or Network Systems” section into its usage policy. This section includes rules against using Claude to discover or exploit vulnerabilities, create or distribute malware, develop tools for denial-of-service attacks, and more.

Additionally, Anthropic is loosening its policy around political content. Instead of banning the creation of all content related to political campaigns and lobbying, Anthropic will now only prohibit people from using Claude for “use cases that are deceptive or disruptive to democratic processes, or involve voter and campaign targeting.” The company also clarified that the requirements for its “high-risk” use cases, which come into play when people use Claude to make recommendations to individuals or customers, apply only to consumer-facing scenarios, not to business use.

[Notigroup Newsroom in collaboration with other media outlets, with information from the following sources]

Tags: AI, News, Tech
© 2025 - noti.group - All rights reserved - noti.group runs on 100% green energy.
