OpenAI Launches New ChatGPT Subscription and Anthropic Unveils Claude Mythos

Several significant developments in artificial intelligence have been reported. OpenAI has introduced a new subscription option for ChatGPT, while Anthropic has presented its latest AI model, Claude Mythos, which has attracted attention for the psychiatric evaluation used in its development. Additionally, Florida has opened an investigation into OpenAI following a tragic incident.

AI explained

What are the latest AI developments and challenges reported?

OpenAI introduced a new $100-per-month ChatGPT subscription tier positioned between its existing plans. Anthropic launched Claude Mythos, an AI model evaluated through a psychiatric assessment but withheld from public release over cybersecurity concerns. Florida is investigating OpenAI after ChatGPT was allegedly used to plan a fatal shooting.

  • Summary: The article covers new AI subscription options, a unique psychiatric evaluation of an AI model, legal investigations, and regulatory challenges faced by AI companies.
  • Why it matters: These developments show how AI companies are adapting products and facing legal scrutiny amid concerns about misuse and security.
  • Key point: AI advancements are accompanied by regulatory and ethical challenges that affect deployment and access to new technologies.

ChatGPT Launches New Pro Subscription at $100 per Month

OpenAI has announced a new Pro subscription for ChatGPT priced at $100 per month, offering users access to additional features. This subscription responds to demand from power users seeking a middle ground between the existing $20 plan and the more expensive $200 option.

This new subscription may attract more users who want enhanced functionality without paying the highest price. It also demonstrates OpenAI’s willingness to adapt to market needs and could strengthen their position in the competitive AI landscape.

Source: TechCrunch

Anthropic Introduces Claude Mythos After Psychiatric Evaluation

AI company Anthropic has launched its newest model, Claude Mythos, which underwent a 20-hour psychiatric evaluation. According to Anthropic, Mythos is their most capable model to date, but it will not be made publicly available due to concerns about its ability to detect unknown cybersecurity issues.

By involving a psychiatrist in the development of Claude Mythos, Anthropic takes a unique step to assess the AI model’s psychological aspects. This could set a new standard for how AI models are developed and evaluated, especially regarding ethics and safety in the use of advanced technology.

Source: Ars Technica

Florida AG Investigates OpenAI After Shooting Incident

The Florida Attorney General has launched an investigation into OpenAI after ChatGPT was allegedly used to plan a shooting at Florida State University. The incident resulted in two deaths and five injuries, and the family of one of the victims plans to sue OpenAI.

The investigation could have significant consequences for OpenAI, especially regarding liability and regulation of AI technology. It also highlights concerns about how AI tools are used in criminal contexts and what measures must be taken to prevent misuse.

Source: TechCrunch

Court Rejects Anthropic’s Request to Halt Blacklisting

A federal appeals court has denied Anthropic’s emergency motion to stop the current blacklisting of their technology by the Trump administration. However, the court approved a request to expedite the case, which will be heard in May.

The decision is a significant setback for Anthropic, which has argued that the blacklisting is a form of punishment for their ethical stance. The outcome of this case could influence how AI companies navigate political and regulatory landscapes, especially in the U.S.

Source: Ars Technica

Black Forest Labs Challenges Silicon Valley with AI Image Generation

Black Forest Labs, an AI image generation startup with 70 employees, has gained attention for its ability to compete with larger Silicon Valley players. The company now plans to expand its services to include physical AI.

Their capacity to deliver high-quality image generation could position them strongly in an increasingly competitive market. This may also inspire other smaller companies to innovate and challenge the established giants in the industry.

Source: Wired

First Conviction Under Take It Down Act for AI-Generated Nudes

A man from Ohio has become the first person convicted under the Take It Down Act after creating and sharing AI-generated images of women and minors without consent. He used over 100 AI tools to produce these images.

This case highlights the legal and ethical challenges related to AI-generated images, especially concerning privacy and consent. It may lead to stricter regulations and guidelines for the use of AI in creative and sensitive contexts.

Source: Ars Technica

What Does This Mean?

A brief assessment: Developments in AI show increasing complexity and a growing need for regulation. From new subscription models to legal challenges, it is clear that both companies and authorities must navigate an increasingly complicated landscape. This could lead to more responsible development and use of AI technology in the future.
