AI NEWS: Microsoft's New AI Robot, OpenAI Sued, GitHub Copilot & Claude 3 Updates

In this AI news roundup, we cover Microsoft's new AI robot, OpenAI being sued by newspapers, updates to Claude 3, a TED Talk on AI's future, NIST's AI risk framework, and Microsoft's collaboration with Sanctuary AI on humanoid robots. Key topics include legal challenges, AI safety, robotics, and the evolution of language models.

February 14, 2025


Discover the latest advancements in AI, from Microsoft's new AI robot to OpenAI's legal battles, GitHub Copilot updates, and more. This blog post provides a comprehensive overview of the most significant AI news and developments, offering insights that can help you stay ahead of the curve in this rapidly evolving field.

OpenAI Sued by Eight Newspapers: Allegations and Implications

Eight daily newspapers owned by Alden Global Capital have sued OpenAI and Microsoft, accusing the tech companies of illegally using news articles to power their AI chatbots. The publications, including the New York Daily News, Chicago Tribune, and Orlando Sentinel, claim that the chatbots regularly surfaced entire articles behind subscription paywalls, reducing the need for readers to pay for subscriptions and depriving the publishers of revenue from both subscriptions and content licensing.

The newspapers argue that they have "spent billions of dollars gathering information and reporting news" and that they cannot allow OpenAI and Microsoft to "steal their work to build their own businesses at our expense." The complaint also alleges that the chatbots often failed to link prominently back to the source, further reducing readers' need to subscribe.

This lawsuit highlights the ongoing debate around the use of copyrighted content in training AI models. The outcome of this case could set a precedent for future lawsuits, as more content creators and publishers may come forward seeking compensation for the use of their work in AI systems. The resolution of this case will be crucial in determining the boundaries and responsibilities of AI companies when it comes to utilizing publicly available information.

Claude AI Update: Teams Integration and Mobile App

Claude, the AI assistant developed by Anthropic, has received a significant update that includes the introduction of Teams integration and a mobile app.

The Teams integration allows users to collaborate with colleagues directly in Claude, sharing and discussing work within a shared workspace. The Teams tier also comes with a higher usage allowance, so users can have more conversations with Claude before hitting limits.

The mobile app, meanwhile, lets users access Claude on the go. It is particularly useful for taking advantage of Claude's vision capabilities: images can be sent straight from a phone without switching between devices.

These updates address some of Claude's previous limitations, making the assistant more accessible and versatile across a wider range of scenarios. As AI becomes more integrated into daily workflows, updates like these help keep the technology useful for a diverse range of users and use cases.

GPT-2 Model Confusion: Speculations and Uncertainties

The recent release of a model called "GPT2" has caused a lot of confusion and speculation in the AI community. Some key points about this situation:

  • The model is called "GPT2" without a hyphen, a puzzling choice given that OpenAI released the actual GPT-2 model back in 2019. This naming has added to the confusion.

  • There are many theories circulating about what this new "GPT2" model actually is. Some speculate it could be a fine-tuned version of the original GPT-2, while others think it may be an entirely new architecture.

  • The fact that Sam Altman, the CEO of OpenAI, has tweeted about this "GPT2" model has further stoked rumors and speculation, as people wonder if this is somehow connected to OpenAI's work.

  • Overall, there seems to be a lot of uncertainty around the origins, capabilities, and purpose of this "GPT2" model. The lack of clear information from the developers has led to a proliferation of theories and speculation in the AI community.

Without more details from the creators, it's difficult to say definitively what this new "GPT2" model represents. It could be an experimental system, a test of a new reasoning engine, or something else entirely. The ambiguity around its nature and relation to previous GPT models has sparked a lot of discussion and uncertainty in the AI space.

NIST Releases AI Risk Management Framework: Key Focuses and Considerations

The National Institute of Standards and Technology (NIST) has released NIST AI 600-1, a generative AI profile of its AI Risk Management Framework. The document identifies and assesses risks associated with the use of generative AI systems, particularly in areas such as:

  1. CBRN Information Risk: The framework highlights the potential for chatbots to facilitate the analysis and dissemination of information related to chemical, biological, radiological, and nuclear (CBRN) weapons, which could increase the ease of research for malicious actors.

  2. Confabulation: The framework addresses the risk of users believing false content generated by AI systems due to the confident nature of the responses or accompanying logic and citations, leading to the promotion of misinformation.

  3. Biased and Homogenized Content: The framework recognizes the potential for AI systems to generate biased or homogenized content, which could have negative societal impacts.

  4. Information Integrity: The framework focuses on the risk of AI systems undermining the integrity of information, potentially leading to the spread of false or misleading content.

  5. Environmental Impact: The framework considers the environmental impact of AI systems, including their energy consumption and carbon footprint.

The NIST AI 600-1 framework emphasizes the importance of proactive risk management and the need for AI developers and users to carefully assess the potential risks associated with the deployment of AI systems. It provides guidance on how to identify, evaluate, and mitigate these risks, with the goal of ensuring the responsible and ethical development and use of AI technologies.

By addressing these critical areas, the NIST framework aims to help organizations and individuals navigate the complex landscape of AI and make informed decisions that prioritize safety, security, and the well-being of society.

Sanctuary AI and Microsoft Collaboration: Accelerating Humanoid Robotics

Sanctuary AI, a Canadian humanoid robotics company, has announced a collaboration with Microsoft to accelerate the development of general-purpose robots. This partnership aims to leverage large language models (LLMs) and Microsoft's AI capabilities to ground AI systems in the physical world.

Sanctuary AI's stated mission is to create the world's first human-like intelligence, with its Phoenix humanoid robot serving as the foundation for that effort. By combining the power of LLMs with the Phoenix platform, the collaboration seeks to enable these systems to understand and learn from real-world experiences.

The partnership will leverage Microsoft's AI and cloud capabilities to power the control systems for Sanctuary AI's Phoenix robots. This integration is expected to drive progress toward the large behavior models that are crucial for developing general-purpose robots, or embodied AGI.

Sanctuary AI's recent release of the Phoenix Generation 7 robot has showcased impressive dexterity and autonomy. However, the company has faced challenges in generating hype around its offerings. This collaboration with Microsoft could provide the necessary boost to propel Sanctuary AI's technology into the spotlight.

The ability to ground AI in the physical world through embodied systems is a crucial step towards achieving general artificial intelligence (AGI). By combining Sanctuary AI's robotics expertise with Microsoft's AI capabilities, this partnership aims to accelerate the development of humanoid robots that can understand and interact with the real world in a more natural and intuitive manner.

As the race for AGI continues, collaborations like this one between Sanctuary AI and Microsoft highlight the importance of integrating physical and digital realms to unlock the full potential of AI systems. This partnership could pave the way for more advanced and versatile humanoid robots that can tackle a wide range of tasks and applications.

Conclusion

The key points from the provided content are:

  1. OpenAI and Microsoft are facing a lawsuit from eight newspapers alleging that the companies illegally used copyrighted articles to power their AI chatbots. The outcome could set an important precedent for how AI models may be trained on published content.

  2. Anthropic's AI assistant Claude has received an update, adding new features like a Teams integration and a mobile app. This helps make Claude more competitive with other AI assistants.

  3. A TED Talk by Helen Toner discussed the risk of AI companies evolving into social media-like platforms that focus more on user engagement than beneficial applications of AI. Data suggests some AI chatbots already have high user engagement, especially with younger demographics.

  4. A mysterious model called "GPT2" surfaced, causing confusion and speculation about its purpose and whether it is related to GPT-4. The lack of clear communication from OpenAI has led to many unsubstantiated rumors.

  5. NIST released a risk management framework for generative AI, highlighting concerns around the potential misuse of these models for harmful purposes like the creation of weapons.

  6. Sanctuary AI announced a collaboration with Microsoft to accelerate the development of general-purpose robots, which could be a significant step towards embodied AGI.

  7. Opinions differ on whether current large language models represent meaningful progress towards AGI, with some experts arguing they do not.

In summary, the content covers a wide range of important developments in the AI industry, from legal challenges to technical advancements and safety considerations. The overall theme is the rapid evolution of AI and the need to carefully navigate the opportunities and risks it presents.
