By Darryl K. Taft
Anthropic’s launch of automated security reviews for Claude Code has drawn widespread attention from industry experts, who see the move as both a critical step toward “AI-native development” and a potential game-changer for traditional security tooling vendors.
The features, which include a terminal-based /security-review command and automated GitHub pull request scanning, represent what Abhishek Sisodia, director of engineering at Scotiabank, calls “a big moment in the shift toward AI-native development.”
For Sisodia, the significance lies in making security proactive rather than reactive.
“Running security checks at the PR level (not just during pen testing or quarterly audits) means vulnerabilities are caught early, when they’re cheapest to fix,” he told The New Stack. “That’s huge for both velocity and quality.”
The approach transforms security from a separate workflow into part of everyday coding.
“Developers don’t have to be security experts to ship secure code,” Sisodia explained. “With Claude flagging issues like SQLi, XSS and auth flaws inline, and even suggesting fixes, this becomes part of everyday coding.”
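To illustrate the class of bug such inline review targets, here is a minimal sketch of a SQL injection flaw and the parameterized-query fix a reviewer would typically suggest. The code is hypothetical and not taken from Anthropic's tooling; the table and function names are made up for the example.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # VULNERABLE: user input is interpolated directly into the SQL string,
    # so an input like "x' OR '1'='1" rewrites the query's logic (SQLi).
    query = f"SELECT id, name FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # FIX: a parameterized query treats the input strictly as data,
    # never as SQL syntax.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()
```

With the malicious input above, the unsafe version returns every row in the table, while the safe version returns nothing; catching that difference at the PR stage is exactly the "cheapest to fix" moment Sisodia describes.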
Glenn Weinstein, CEO of Cloudsmith, praised Anthropic’s “secure-by-design mindset,” calling the features “a great complement to the role artifact management platforms perform in scanning and flagging binary package dependencies with known vulnerabilities.”
Weinstein emphasized the importance of early detection.
“Ideally, you want to do this as early as possible in the development life cycle — well before PR merges and CI/CD builds — so this important enhancement to Claude Code is right in that sweet spot,” he said.
The security features come as industry observers raise concerns about the rapid adoption of AI-powered development tools.
Brad Shimmin, an analyst at The Futurum Group, highlighted two key risks: the creation of software that “has not been carefully vetted for security, performance and compliance requirements,” and the acceleration of “shallow pull requests” where AI systems flood project owners with “frivolous and often inaccurate software requests.”
Moreover, David Mytton, CEO of Arcjet, pointed to a fundamental challenge: “AI is making it easier and faster to write code. There will be more code written by less experienced people, which means more security problems.” He sees the automated security reviews as “a good safety check to prevent low-hanging-fruit-style security issues.”
Mytton noted that basic security mistakes like “exposing secrets or not properly securing databases” are already common problems that these tools could help catch.
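As a concrete sketch of the "exposing secrets" mistake Mytton describes, consider a hardcoded credential versus one read from the environment. The variable and key names here are hypothetical placeholders, not real credentials or a specific vendor's API:

```python
import os

# VULNERABLE: a hardcoded credential is committed to version control
# and ships with every copy of the repository. (Made-up placeholder value.)
API_KEY = "sk-example-0000000000"

def get_api_key() -> str:
    # FIX: read the secret from the environment (or a secrets manager)
    # so it never appears in the source tree.
    key = os.environ.get("MY_SERVICE_API_KEY")
    if not key:
        raise RuntimeError("MY_SERVICE_API_KEY is not set")
    return key
```

A pattern this simple is exactly the low-hanging fruit an automated reviewer can flag mechanically before a human ever looks at the diff.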
However, Mytton raised a provocative question about the approach.
“If it’s reviewing its own AI-generated code, then there’s something strange about an AI reviewing itself!” he wrote in an email exchange with The New Stack. “Why not just make the model avoid security issues in the first place?”
Matt Johansen, a cybersecurity expert and the founder of Vulnerable U, was enthusiastic about the development but echoed similar concerns about its limitations.
“You’re asking the same AI to secure what it just generated,” he told The New Stack. “It might bring in some additional context or tools access, but it’s still limited in capability in the same ways.”
Despite these limitations, Johansen said he sees value in vendors building their own security features.
“It’s a great signal that the vendor themselves realized security features are a value add and not a speed bump,” he said. “I’m bullish on AI security tools, but it’s a good thing for all of us if the vendors are building their own security features and not relying on vendors to close the gaps.”
The launch has sparked discussion about what it means for traditional security tooling companies. Sisodia suggested the competitive landscape is shifting.
“If AI-native platforms like Claude can do what traditional SAST/DAST tools do, but faster, cheaper and embedded in dev tools, the bar just got raised,” he said.
He predicted that established security vendors “will need to rethink UX, developer integration and how they layer value beyond just detection.”
However, Johansen downplayed existential threats to the security industry, drawing an analogy: “It’s like saying Microsoft built in security tooling, so why does EDR [Endpoint Detection and Response] need to exist — there will be problems that need solving.”
Weinstein reinforced this view, emphasizing that “effectively preventing vulnerabilities from making it into your production systems requires a multilayered approach,” examining not just source code but also “language packages, containers, OS libraries and other binary ingredients that you pull in as dependencies.”
Shimmin told The New Stack that he sees Anthropic’s move as potentially catalytic for the broader industry.
“This effort from Anthropic seems to point directly at these two issues and will most certainly have a positive knockdown effect across the broader research and ISV community,” he said, comparing it to how “Anthropic’s earlier work on model transparency and MCP [Model Context Protocol] has influenced several supportive efforts across the industry.”
For Sisodia, the features represent something larger than a mere product update.
“This isn’t just a feature drop. It’s a sign that AI-first software security is becoming real,” he said. “We’re heading toward a world where secure by default isn’t aspirational, it’s just what happens when you code with the right agent by your side.”
While experts are optimistic about AI-powered security tools, they emphasize that no single solution will solve all security challenges. Weinstein’s emphasis on a multilayered approach reflects a broader industry consensus that security requires defense in depth.
The question moving forward is not whether AI will play a role in software security — that appears to be settled — but rather how traditional security vendors will adapt and what new problems will emerge as AI reshapes the development landscape.
As Johansen noted, “Whether we in security like it or not, devs will be using these AI tools.” The challenge now is ensuring those tools come with appropriate safeguards built in, rather than bolted on afterward.
The industry response to Anthropic’s security features suggests that as AI accelerates software development, security tooling must evolve to keep pace.