Anthropic launches AI-powered chatbot ‘Claude’ to take on ChatGPT


Anthropic, an AI safety and research company co-founded by former OpenAI employees, has announced the launch of Claude, an AI-powered chatbot and rival to ChatGPT.

According to Anthropic, Claude is a next-generation AI assistant based on the company’s research into training helpful, honest, and harmless AI systems.

The ChatGPT rival can perform a wide variety of conversational and text-processing tasks, including summarisation, search, creative and collaborative writing, Q&A, coding, and more, while maintaining a high degree of reliability and predictability.

“Early customers report that Claude is much less likely to produce harmful outputs, easier to converse with, and more steerable,” claims the company. 

Anthropic claims that Claude can also take direction on personality, tone, and behavior.

Currently, the company offers Claude in two variants:

  • Claude: a state-of-the-art, high-performance model
  • Claude Instant: a lighter, less expensive, and much faster option

“We plan to introduce even more updates in the coming weeks. As we develop these systems, we’ll continually work to make them more helpful, honest, and harmless as we learn more from our safety research and our deployments,” says the company. 

Anthropic had been privately testing Claude in a closed beta since late last year with launch partners such as Robin AI, AssemblyAI, Notion, Quora, and DuckDuckGo.

Anthropic says that Claude, much like ChatGPT, was trained on public web pages up to spring 2021 and is designed to steer clear of sexist, racist, or otherwise harmful statements.

Additionally, Claude is designed to avoid helping users engage in illegal or unethical activities, a standard safeguard in AI chatbots. What distinguishes Claude is its use of the “constitutional AI” methodology.

Constitutional AI is the training methodology Anthropic developed for Claude. The model is given a written set of principles, its “constitution”, and is trained to critique and revise its own outputs against those principles; the revised outputs then feed further training, including reinforcement learning driven by AI-generated feedback rather than human labels alone.

The aim is to make the system’s values explicit and inspectable, and to produce an assistant that remains helpful, honest, and harmless without depending solely on humans to flag harmful outputs.
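At a high level, the critique-and-revise step behind constitutional AI can be sketched as follows. This is an illustrative Python sketch, not Anthropic’s actual implementation: the `generate` function, the sample principles, and the prompt wording are all hypothetical stand-ins, and `generate` is a trivial stub so the example runs without any model or API access.

```python
# Hypothetical sketch of a constitutional-AI critique-and-revise loop.
# In the real method, `generate` would be a call to a language model;
# here it is a stub so the sketch is self-contained and runnable.

CONSTITUTION = [
    "Choose the response that is least likely to be harmful.",
    "Choose the response that is most honest and transparent.",
]

def generate(prompt: str) -> str:
    # Stub for a model call: echoes the last line of the prompt
    # with a marker, standing in for a real critique or revision.
    return "Revised response: " + prompt.splitlines()[-1]

def critique_and_revise(response: str, principles=CONSTITUTION) -> str:
    """Have the model critique its own response against each principle,
    then rewrite it. The revised drafts become training data, so the
    final model behaves this way without running the explicit loop."""
    for principle in principles:
        critique = generate(
            f"Critique this response against the principle:\n"
            f"{principle}\n{response}"
        )
        response = generate(
            f"Rewrite the response to address the critique:\n{critique}"
        )
    return response
```

The key design point is that the principles are written down and applied by the model itself, rather than encoded implicitly in large volumes of human preference labels.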

Partnership with AssemblyAI

In addition to improving its existing products, Anthropic has announced a partnership with AssemblyAI, an AI company, to help power AssemblyAI’s platform of APIs that transcribe and understand audio data at scale.

Dylan Fox, Founder & CEO of AssemblyAI, says, “We’re thrilled to partner with a pioneering company like Anthropic whose commitment to AI integrity and research directly helps us ship more robust, LLM-backed Generative AI and Conversation Intelligence capabilities to our customers faster. We look forward to seeing this partnership propel our AI initiatives forward.”

Image credits: everythingposs/DepositPhotos
