AWS, Accenture, and Anthropic partner to help enterprises responsibly adopt and scale customized generative AI solutions, driving innovation in regulated industries.
Anthropic launches Claude 3 Haiku, the fastest and most affordable AI model in its class, featuring advanced vision capabilities and enterprise-grade security for high-volume, latency-sensitive applications.
A new study from Anthropic shows how deceptive "sleeper agent" AI models can be trained to conceal harmful behaviors and evade the current safety checks meant to ensure trustworthiness.
Anthropic researchers unveil new techniques for proactively detecting bias and discrimination in AI by evaluating language models across hypothetical real-world scenarios, helping surface ethical problems before deployment.
Anthropic strategically lowers pricing for its conversational AI model, Claude 2.1, to compete with large AI firms and the increasing presence of open-source alternatives in the enterprise AI market.
In a video podcast, VentureBeat's editors explore OpenAI's upheaval, Altman's leadership crisis, and the opportunities it presents for Google and Anthropic.
As OpenAI faces turmoil, Anthropic launches Claude 2.1 with 200K token context, 50% lower false claims, and tool integration for enterprises seeking a stable AI alternative.
Anthropic has released a new policy designed to mitigate “catastrophic risks,” or situations where an AI model could directly cause large-scale devastation.
Anthropic unveiled its constitution for Claude, which aims to ensure that the AI system adheres to a set of ethical guidelines and principles, thereby making its behavior more helpful, harmless and honest.
As AI innovation and its related risks grow in speed and scale, Anthropic is calling for $15 million in NIST funding to support AI measurement and standards.