Fresh from battling the US Department of Defense (DoD) over AI guardrails, Anthropic has returned this week with a new initiative: the company is founding a think tank, the Anthropic Institute, “to confront the most significant challenges that powerful AI will pose to our societies.”

Headed by Anthropic co-founder Jack Clark, who will take up a new role as head of public benefit, the Institute will be staffed by “an interdisciplinary staff of machine learning engineers, economists, and social scientists,” the company said. It will unite and expand three internal teams: the Frontier Red Team, which stress-tests and assesses AI models; Societal Impacts, which models how AI is being used in the real world; and Economic Research, which studies AI’s growing impact on economics and employment.

“The Institute has a unique vantage point: it has access to information that only the builders of frontier AI systems possess. It will use this to its full advantage, reporting candidly about what we’re learning about the shape of the technology we’re making,” Anthropic said.

More unexpectedly for an AI company, it also said, “[the Institute] will engage with workers and industries facing displacement, and with the people and communities who feel the future bearing down on them but are unsure how to respond.”

‘Supply chain risk’

While the timing is probably coincidental, the Institute’s launch this week comes at the perfect moment to remind the world that Anthropic sees itself as different from its rivals when it comes to AI ethics. The Anthropic Institute is only the latest in a series of recent announcements, including the Claude Constitution, illustrating the company’s efforts to give outsiders a chance to peer into the design principles informing its models’ values and behavior.
In 2024, CEO Dario Amodei set out his vision of “how AI could transform the world for the better” in an essay reflecting the sort of idealism that has been tested in recent weeks. On February 27, after weeks of cajoling and threats from the DoD, the company’s refusal to back down on AI ethics saw it banned from Pentagon programs by Defense Secretary Pete Hegseth as a “supply chain risk.” In response, on Monday an unbowed Anthropic announced a lawsuit seeking a temporary restraining order against the Pentagon.

The question is what enterprise customers should make of what increasingly looks like an irreconcilable ideological clash of cultures. According to Andrew Gamino-Cheong, CTO of AI governance platform Trustible, the fact that Microsoft has publicly opposed the blanket ban on Anthropic could be significant. “Microsoft’s support of Anthropic in this will matter,” he said. Slapping a “supply chain risk” label on a company of Anthropic’s size, he suggested, was also off-putting for the entire sector.
Microsoft, of course, invested $5 million in Anthropic in November, and said it will use Claude Sonnet in its Copilot chatbot, so what happens to Anthropic matters beyond its own walls. “Many startups and AI companies are going to hesitate to do business with the federal government as a result of this,” Gamino-Cheong noted. “Chinese models haven’t been labelled a ‘supply chain risk’ yet, so it’s wild for an American model creator.”

However, he believed that Anthropic’s focus on AI ethics might appeal to private-sector companies for which AI governance is becoming an important issue. “Claude’s models have started to pull away and clearly beat most others for many ‘business’ type tasks. This is because of their investment in AI safety research, which is the DoD’s sticking point.”
Source: https://www.cio.com/article/4143915/anthropic-announces-think-tank-to-examine-ais-effect-on-economy-and-society.html
