A major artificial intelligence company, Anthropic, has introduced a powerful new AI model named “Claude Mythos Preview.” This model is designed to work on cybersecurity tasks, such as finding weaknesses in software and testing systems against possible cyberattacks.
However, the company has made it clear that the model is not ready for public release. The reason is simple: the same capabilities that make it useful for defense could also make it dangerous in the hands of hackers or cybercriminals.
Because of this, access to the model is restricted. Only a select group of large technology and cybersecurity companies is allowed to use it, including Amazon, Apple, Microsoft, Google, and Cisco.
The main goal is to use the model to strengthen defenses before any misuse can happen.
Why Experts Are Concerned About Speed and Scale
The biggest concern around this AI model is how fast it can work. It can scan large amounts of code quickly and identify security flaws that humans may miss. It can also test different hacking techniques automatically.
Experts say that one AI system like this could perform tasks that would normally require many human researchers. It can work continuously without stopping and test multiple attack methods at the same time.
This creates a serious risk. If attackers gained access to such technology, they could discover and exploit vulnerabilities much faster than before. Vulnerabilities are flaws in software that can allow unauthorized access.
The company has stated that the model has already identified thousands of previously unknown vulnerabilities. These are highly valuable targets for cybercriminals and even spy agencies.
The ability to find and act on these weaknesses quickly shows how cybersecurity is changing. It is no longer just about human effort. AI is now becoming a major factor in both defending and attacking systems.
Limited Access to Major Companies for Defensive Use
To reduce the risks, Anthropic is sharing the model only with trusted organizations. Along with major tech companies, cybersecurity firms like CrowdStrike and Palo Alto Networks are also included.
Chipmakers such as Nvidia and organizations like the Linux Foundation have also been given access. These groups play a key role in building and maintaining critical digital systems used worldwide.
The model is being used to test software, identify bugs, and check whether systems can resist known hacking techniques. This allows companies to fix issues before they become serious threats.
By limiting access, the company aims to prevent misuse while still allowing improvements in cybersecurity.
Government Briefings and Ongoing Testing
Anthropic has also taken steps to involve government authorities. The company has briefed officials across the United States government about the model’s capabilities. This includes both its defensive uses and its potential risks.
The company has also offered support for testing and evaluating the technology. This helps authorities understand how such AI systems could impact national security and digital infrastructure.
At the same time, cybersecurity experts have pointed out a key asymmetry. Attackers and defenders may both have access to advanced AI tools, but defenders must protect entire systems, while attackers only need to find one weak point.
Some experts believe that concerns about misuse may be slightly overstated. They note that AI has already been used in cybersecurity for years. Tools have been developed to detect threats and even suggest fixes for software issues.
Still, there is general agreement that this new model represents a significant step forward. Its capabilities highlight the need for careful handling and strong safeguards.
The decision to release it in a controlled manner reflects the seriousness of the situation.
