Google AI Blunders: Putting Glue in Pizza and More Shocking Mistakes

February 14, 2025

Discover the latest AI news and insights in this informative blog post. Explore the challenges faced by tech giants like Google and OpenAI, and learn about the exciting developments in AI-powered features from Microsoft. Stay ahead of the curve and gain a deeper understanding of the rapidly evolving world of artificial intelligence.

Google's AI Blunders: Disastrous Recommendations Gone Wrong

Google's latest AI efforts have been plagued by major issues, as evidenced by the numerous concerning answers its AI-generated search results have provided when users query Google Search directly.

Some examples of the AI's problematic recommendations include:

  • Suggesting to add non-toxic glue to pizza sauce to make the cheese stick better.
  • Claiming 1919 was 20 years ago.
  • Recommending pregnant women smoke 2-3 cigarettes per day.
  • Stating it's always safe to leave a dog in a hot car.
  • Fabricating details about the fictional death of a SpongeBob character.
  • Providing a convoluted multi-step process to determine the number of a person's sisters (a one-step worked example follows this list).
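
To see why that last item counts as a blunder, note that this family of riddles resolves in a single step when reasoned correctly. Here is a worked example using the classic form of the puzzle (the exact query Google's AI fumbled isn't quoted here):

```python
# Classic form: "Sally has 3 brothers; each brother has 2 sisters.
# How many sisters does Sally have?" One subtraction suffices,
# because Sally is herself one of each brother's sisters.
sisters_per_brother = 2
sallys_sisters = sisters_per_brother - 1  # exclude Sally herself
print(sallys_sisters)  # -> 1
```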

These AI-generated responses are not only inaccurate, but in many cases could be actively harmful if followed. It appears Google's AI is struggling to provide reliable, factual information when directly queried, raising serious concerns about the safety and reliability of integrating such technology into core search functionality.

Google is not alone in drawing scrutiny. OpenAI has claimed the voice used in its recent GPT-4o demo was not Scarlett Johansson's, but the similarity has sparked controversy and questions about the company's practices. OpenAI's partnership with News Corp likewise raises worries about potential biases, though the biases already present in the existing training data are arguably the larger concern, not the newly licensed content.

Overall, these incidents highlight the critical need for rigorous testing, oversight, and transparency as AI systems become more deeply integrated into everyday services. Google will need to address these issues swiftly to maintain user trust and ensure the safety of its AI-powered offerings.

Jan Leike Leaves OpenAI, Raises Safety Concerns

Jan Leike, the head of superalignment at OpenAI, has left the company. In a thread on Twitter, Leike expressed concerns about OpenAI's priorities, stating that he has been "disagreeing with OpenAI leadership about the company's core priorities for quite some time."

Leike believes that more of OpenAI's bandwidth should be spent on "getting ready for the next generations of models on security monitoring, preparedness, safety, adversarial robustness, super alignment, confidentiality, societal impact and related topics." He is concerned that these crucial research areas have taken a backseat to "shiny products."

Leike's departure comes shortly after that of Ilya Sutskever, OpenAI's chief scientist. While Sutskever left on good terms, Leike's exit appears more contentious, with suggestions that OpenAI employees who criticize the company may face consequences, such as the loss of vested equity.

OpenAI's CEO, Sam Altman, has acknowledged Leike's concerns and stated that the company is committed to addressing them. However, the situation highlights the ongoing debate within the AI community about the balance between innovation and safety.

As the development of advanced AI systems continues, prioritizing safety and alignment with human values becomes increasingly critical. Leike's departure from OpenAI underscores the need for AI companies to maintain a strong focus on these issues.

OpenAI's Voice Controversy: Scarlett Johansson Dispute

According to reports, OpenAI's recent demo of GPT-4o featured a voice strikingly similar to that of actress Scarlett Johansson, who voiced the AI assistant in the movie "Her". OpenAI had reportedly reached out to Johansson for permission to use her voice, but she declined. OpenAI claims it hired a voice actor whose voice happened to sound similar, and that it was never its intention to replicate Johansson's voice.

This has led to controversy, with many believing OpenAI simply found a voice actor who sounded like Johansson after being denied permission to use her actual voice. While OpenAI maintains it did not copy Johansson's voice, the public has largely rallied behind the actress in what remains an ethical and legal gray area.

Overall, this incident has raised questions about the ethics and transparency around AI voice generation, especially when it comes to using voices that closely resemble real people without their consent.

OpenAI's Partnership with News Corp: Potential Bias Implications

OpenAI has recently announced a partnership with News Corp, giving OpenAI access to content from News Corp's major news and information publications. This has raised concerns about potential biases being introduced into OpenAI's language models, particularly ChatGPT.

However, the reality is that News Corp content is likely already present in ChatGPT's training data; this partnership simply provides a more ethical and transparent route to accessing it. The existing biases in the training data are unlikely to be significantly altered by the new agreement.

While News Corp is known for the pronounced political leanings of some of its outlets (Fox News, though often associated with it, has sat under the separate Fox Corporation since 2013), the impact on ChatGPT's outputs is not expected to be substantial. The language model has already been trained on a vast amount of online data, which likely includes content from a wide range of news sources across the political spectrum.

The partnership with News Corp simply gives OpenAI legal permission to use this content in future training, rather than relying on data of murkier provenance. This move toward transparency and ethical data acquisition is a positive step, even if the practical implications for ChatGPT's outputs are limited.

Ultimately, while the partnership with News Corp raises valid concerns about potential biases, the reality is that ChatGPT's outputs are likely already influenced by a diverse range of news sources, both in terms of political leanings and quality. The partnership is more about formalizing and legitimizing the data sources used, rather than introducing significant new biases into the language model.

Rumor: Apple to Integrate OpenAI's Models into Siri

This one falls more into rumor territory, but it's worth discussing. According to the website MacDailyNews, Apple is rumored to be teaming up with OpenAI, with an announcement expected at WWDC.

The rumor has also been reported by Bloomberg, but it has not yet been confirmed by either Apple or OpenAI. The current claim is that the next version of Siri on the iPhone could be powered by OpenAI's latest GPT models.

There have also been rumors that Apple has been talking to Google about integrating Gemini, Google's own large language model, into its products. However, we won't know the outcome for sure until the WWDC event.

If the rumor is true, and Apple does end up integrating OpenAI's models into Siri, it could potentially lead to significant improvements in Siri's capabilities. OpenAI's language models, such as GPT-4, have demonstrated impressive performance in a variety of tasks. Integrating these models could help make Siri more conversational, knowledgeable, and capable of understanding and responding to natural language queries.

At the same time, there may be concerns about potential biases or limitations that could be introduced by the OpenAI models. It will be important for Apple to carefully evaluate and address any such issues to ensure that Siri remains a reliable and trustworthy assistant.

Overall, the rumor of Apple partnering with OpenAI for Siri is an intriguing one, and it will be interesting to see if it materializes at the upcoming WWDC event. As with any rumor, it's best to wait for official confirmation before drawing any conclusions.

Microsoft Build Event Highlights: Co-Pilot PCs, Recall Feature, and More

At the Microsoft Build event in Seattle, the tech giant unveiled several exciting new features and updates:

  1. Copilot+ PCs: Microsoft introduced "Copilot+ PCs" - computers with a dedicated Neural Processing Unit (NPU) for running AI inference locally, allowing users to interact with AI assistants without relying on the cloud (a minimal local-inference sketch follows this list).

  2. Recall Feature: A new feature called "Recall" was announced, which allows users to view their entire computer usage history, including browsing, applications, and more. However, this feature raised privacy concerns due to the potential for sensitive data exposure if the device is lost or compromised.

  3. Microsoft Paint Upgrades: Microsoft Paint received AI-powered enhancements, including the ability to generate images, improve sketches, and restyle existing images.

  4. Edge Browser Improvements: The Microsoft Edge browser gained new translation and transcription capabilities, enabling real-time language translation and transcription during video calls and meetings.

  5. Team Copilot: Microsoft introduced an AI "team member" that can be integrated into Microsoft Teams, helping with tasks such as creating checklists and summarizing meetings.

  6. Phi-3 Language Model: Microsoft announced a new multimodal small language model in its "Phi-3" family (Phi-3-vision), which experts in the field will likely benchmark against other language models.

  7. GitHub Copilot Extensions: Microsoft expanded the capabilities of GitHub Copilot, adding integrations with various development tools and the ability to create custom private extensions (a hedged endpoint sketch follows below).
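
To make item 1 concrete, here is a minimal sketch of what on-device inference can look like, using ONNX Runtime's execution-provider mechanism. The model file, input shape, and provider preference are illustrative assumptions rather than anything Microsoft has published; on Qualcomm-based Copilot+ hardware the NPU is typically reached through the QNN execution provider, with a CPU fallback when it isn't available.

```python
# Minimal local-inference sketch. "model.onnx" and the input shape
# are placeholders; QNNExecutionProvider targets Qualcomm NPUs in
# onnxruntime builds that include it.
import numpy as np
import onnxruntime as ort

preferred = ["QNNExecutionProvider", "CPUExecutionProvider"]
providers = [p for p in preferred if p in ort.get_available_providers()]

session = ort.InferenceSession("model.onnx", providers=providers)

# One inference pass, entirely on-device: no network round trip.
input_name = session.get_inputs()[0].name
dummy = np.zeros((1, 3, 224, 224), dtype=np.float32)  # shape assumed for illustration
outputs = session.run(None, {input_name: dummy})
print("Executed with:", session.get_providers()[0])
```

For item 7, here is a deliberately hedged sketch of the general shape such an extension can take: a small HTTP service that accepts a chat payload and streams back a reply. The endpoint path, payload fields, and streaming format are assumptions modeled on the OpenAI-style chat-completions streaming convention that GitHub's published agent samples follow; the authoritative contract lives in GitHub's documentation.

```python
# Hypothetical extension endpoint; request/response shapes are assumed.
import json
from flask import Flask, Response, request

app = Flask(__name__)

@app.post("/agent")  # path is an arbitrary placeholder
def agent():
    payload = request.get_json(force=True)
    # Assumed: an OpenAI-style "messages" array; take the latest turn.
    user_msg = (payload.get("messages") or [{}])[-1].get("content", "")

    def stream():
        chunk = {"choices": [{"delta": {"content": f"Echo: {user_msg}"}}]}
        yield f"data: {json.dumps(chunk)}\n\n"
        yield "data: [DONE]\n\n"  # terminator used by the streaming format

    return Response(stream(), mimetype="text/event-stream")

if __name__ == "__main__":
    app.run(port=3000)
```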

Overall, Microsoft's Build event showcased the company's continued focus on integrating AI technologies into its products and services, aiming to enhance productivity and user experiences.

Conclusion

The recent developments in the world of AI have been quite eventful. From Google's AI blunders to the departure of key figures from OpenAI, the industry has been abuzz with news and controversies.

Google's AI-generated search results have raised concerns about the reliability and safety of AI systems, with the company's AI recommending questionable solutions like adding glue to pizza sauce. This highlights the need for rigorous testing and oversight to ensure AI systems provide accurate and safe information.

The departure of Jan Leike, the head of superalignment at OpenAI, has also sparked discussions about the company's priorities. Leike's concerns about OpenAI's focus on "shiny products" rather than safety and security are a sobering reminder of the challenges in developing advanced AI systems.

Microsoft's announcements at the Build event, including the introduction of Copilot+ PCs and the new Recall feature, have generated both excitement and privacy concerns. While the AI-powered capabilities hold promise, the potential security risks associated with the Recall feature need to be addressed.

In the realm of AI art, platforms like Leonardo AI and Midjourney continue to evolve, introducing new features and collaborative tools. The integration of Adobe Firefly into Lightroom also showcases the growing integration of AI-powered tools into creative workflows.

Overall, the AI landscape remains dynamic and fast-paced, with both advancements and challenges emerging. As the technology continues to evolve, it will be crucial for companies, researchers, and the public to maintain a balanced and responsible approach to ensure the safe and ethical development of AI systems.

FAQ