ChatGPT users spiral into dangerous mental health crises
Mental health experts are sounding alarms as people worldwide develop intense, unhealthy obsessions with ChatGPT that are leading to severe psychological breaks. Cases include a man who began calling ChatGPT “Mama” whilst posting messianic rants about AI religion, a woman convinced the bot was orchestrating her life through “signs” in passing cars and spam emails, and another man who became homeless after ChatGPT fed him paranoid conspiracies about spy groups. Journalists and AI safety experts like Eliezer Yudkowsky report receiving increasingly delusional messages from users who believe ChatGPT has revealed world-altering truths to them specifically.
https://futurism.com/chatgpt-mental-health-crises
AI bots replace Google search, reshape web traffic
Publishers are witnessing a fundamental shift as people abandon Google for AI tools like ChatGPT, triggering a new wave of web scraping bots. Traffic from AI retrieval bots grew 49% in Q1 2025, according to TollBit data from 266 websites, and these bots read far more content than human visitors ever would in order to generate AI summaries. Meanwhile, sites like HuffPost, Washington Post, and Business Insider have seen traffic drop over 50% in three years as Google’s AI Overviews cut publishers out of the loop whilst still mining their content.
https://www.washingtonpost.com/technology/2025/06/11/tollbit-ai-bot-retrieval
Meta’s £11 billion AI gamble amid privacy scandals
Mark Zuckerberg is finalising a massive $14 billion investment in Scale AI and hiring founder Alexandr Wang, driven by frustration with Meta’s AI standing following lukewarm reception of Llama models. This comes as Meta faces criticism over its AI app’s “discover” feed that publicly displays users’ deeply personal medical, legal, and financial queries to the chatbot – including bowel movement struggles, family tax fraud concerns, and intimate health questions tied to real Instagram profiles.
AI disinformation crisis during breaking news
AI chatbots are actively making disinformation worse during fast-moving news events like the Los Angeles protests. As X and Meta have stepped back from content moderation, users increasingly turn to ChatGPT and Grok for fact-checking – but these tools are supplying flatly inaccurate answers that amplify the disinformation already saturating major breaking news events.
https://www.wired.com/story/grok-chatgpt-ai-los-angeles-protest-disinformation
The £1,600 AI commercial that shocked NBA finals viewers
Betting platform Kalshi aired a completely AI-generated advertisement during the NBA Finals that cost just $2,000 to produce, representing a 95% cost reduction versus traditional ads. Created using Google’s Veo 3 text-to-video generator by AI filmmaker PJ Accetturo, the nonsensical “AI slop” required 300-400 generations to produce 15 usable clips over 2-3 days of work by a single person.
https://www.theverge.com/news/686474/kalshi-ai-generated-ad-nba-finals-google-veo-3
AI poised to rewrite history, literally
AI’s ability to read and summarise text is making it an increasingly useful tool for historical scholarship, but accuracy concerns persist. Author Charles C. Mann experimented with various AI models whilst researching a book about the American West; he found that they turned up great leads, but he was disturbed by how easily they regurgitated bad information. “That’s what A.I. can’t do. It has no bullshit detector,” Mann noted, contrasting AI with the rigour of human editorial processes.
https://www.nytimes.com/2025/06/16/magazine/ai-history-historians-scholarship.html
Tech elite gather at “End of the World” AI party
In a £24 million mansion overlooking the Golden Gate Bridge, AI researchers, philosophers, and technologists gathered for “Worthy Successor,” a symposium exploring whether advanced AI should determine humanity’s future path. Entrepreneur Daniel Faggella organised the event because “the big labs, the people that know that AGI is likely to end humanity, don’t talk about it because the incentives don’t permit it.” The gathering reflects growing concerns about competitive pressures overriding safety considerations.
https://www.wired.com/story/ai-risk-party-san-francisco
Fashion’s AI future: Your digital wardrobe awaits
Imagine opening an app instead of a wardrobe – one that knows your exact measurements, plans, and shopping tendencies. This digital closet would be fully synced, colour-matched, size-verified, and AI-organised, showing outfit options on a 3D avatar whilst checking shipping speeds and resale prices. The app would curate items based on AI that’s been learning your style since your first online purchase, potentially transforming retail in the 2030s from a chore into a personalised stream of suggestions and style edits.
https://qz.com/ai-shopping-phia-new-gen
Altman predicts AI “novel insights” by 2026
In his latest essay “The Gentle Singularity,” OpenAI CEO Sam Altman shared his vision for AI’s evolution over the next 15 years, predicting that 2026 will “likely see the arrival of [AI] systems that can figure out novel insights.” The essay represents classic Altman futurism: hyping AGI’s promise whilst simultaneously downplaying its arrival. OpenAI executives have recently indicated the company is focused on getting AI models to generate genuinely new, interesting ideas about the world.
https://techcrunch.com/2025/06/11/sam-altman-thinks-ai-will-have-novel-insights-next-year/
OpenAI eyes advertising revenue model
OpenAI executives are floating the prospect of using advertising after previously dismissing the model. CFO Sarah Friar told The Financial Times that OpenAI is considering ads, though she clarified the company has “no active plans.” Altman later mused about an affiliate revenue model in which OpenAI would collect a percentage of sales discovered through features like Deep Research, combining personal information shared with ChatGPT with billions of words of training text to send increasingly targeted recommendations.
https://www.nytimes.com/2025/06/11/opinion/open-ai-big-tech-advertising.html
What happens the day after AGI arrives?
The biggest impact of achieving AGI will be an identity crisis that hits humanity “like a robotic punch in the face,” according to AI researcher Louis Rosenberg. In this new reality, people will reflexively ask AI for advice before using their own brains, with context-aware AI assistants providing guidance without being asked. These assistants will stream advice directly into people’s eyes and ears, fundamentally changing who we are, how we live, and how we relate to other people.
https://bigthink.com/the-future/what-happens-the-day-after-humans-create-agi
Anthropic quietly kills AI blog after one week
Anthropic abruptly shut down “Claude Explains,” its experimental blog written by AI models, after just one week of operation. The blog, which launched 2nd June with technical posts like “Simplify complex codebases with Claude,” was quietly removed over the weekend with users now redirected to the company’s homepage. The rapid shutdown highlights the continuing limitations of AI-generated content for sustained publication.
https://www.thestreet.com/technology/anthropic-shows-the-limits-of-ai-as-it-scraps-blog-experiment
1979 Atari chess game defeats modern ChatGPT
In an embarrassing demonstration of AI’s limitations, ChatGPT was “absolutely wrecked” by Video Chess, a 1979 title for the Atari 2600, a console released in 1977. Software engineer Robert Caruso shared how the 46-year-old game outplayed OpenAI’s cutting-edge chatbot, highlighting that despite massive advances in language processing, AI still struggles with logical tasks that early computers mastered decades ago.
https://futurism.com/atari-beats-chatgpt-chess
Tech leaders clash over AI safety and competition
Nvidia’s Jensen Huang publicly criticised Anthropic CEO Dario Amodei’s predictions, saying he “disagrees with almost everything” Amodei claims about AI. Huang accused Amodei of believing “AI is so scary that only they should do it” and “so expensive, nobody else should do it,” whilst arguing that safe AI development should happen “in the open” rather than “in a dark room.” The public spat highlights growing tensions between AI companies over safety rhetoric and competitive positioning.
https://fortune.com/2025/06/11/nvidia-jensen-huang-disagress-anthropic-ceo-dario-amodei-ai-jobs
European AI independence push gains momentum
French startup Mistral AI announced a major expansion into AI infrastructure with Mistral Compute, positioning itself as Europe’s alternative to American cloud giants AWS, Azure, and Google Cloud. Built in partnership with NVIDIA, the platform represents a strategic shift from pure AI model development to controlling the entire technology stack, offering European enterprises and governments an alternative to U.S.-based providers.
AI hype cycle mirrors blockchain’s trajectory
The current AI boom is following the same pattern as 2017’s blockchain frenzy, when companies added “blockchain” to their names and watched stock prices skyrocket regardless of actual implementation. The tech hype cycle describes how emerging technologies rise on inflated promises, crash into disillusionment, and eventually find realistic applications. Long-term success comes from thoughtful experimentation and clear purpose, not chasing trends or short-term gains.
AI literacy crisis: Everyone needs it, few understand it
Experts warn that AI literacy extends far beyond prompt engineering or coding skills to encompass critical evaluation, ethical considerations, and collaborative competencies. The challenge lies in defining standards comprehensive enough for long-term relevance whilst avoiding restriction to current trends or employer needs. As AI becomes essential infrastructure, the gap between required literacy and actual understanding continues to widen across all demographics.