
OpenAI’s doomsday bunker, 15 fakest books, Hollywood’s over with Veo 3: AI Eye

Doomsday bunker for OpenAI’s top scientists

In the last AI Eye, we reported that scientists from the four leading AI companies believe there’s at least a 10% chance of AI killing around 50% of humanity in the next decade — one scientist was buying farmland in the US Midwest so he could ride out the AI apocalypse.

Ilya is heading down to the bunker on AGI release day. (TheAlexK)

This week, it emerged that another doomsday prepper is OpenAI co-founder Ilya Sutskever.

According to Empire of AI author Karen Hao, he told key scientists in mid-2023 that “we’re definitely going to build a bunker before we release AGI.” Artificial General Intelligence is the vaguely defined idea of a sentient intelligence smarter than humans.

The underground compound would protect the scientists from geopolitical chaos or violent competition between world powers once AGI was released. 

“Of course,” he added, “it’s going to be optional whether you want to get into the bunker.”

One OpenAI researcher said there was an identifiable group of AI doomers in the company, with Ilya being one, “who believe that building AGI will bring about a rapture. Literally a rapture.”

Of course, it’s not stopping them from trying to build it.  

‘Bad’ persuasion vs ‘good’ persuasion

A new study in Nature found ChatGPT and humans are equally persuasive at convincing other humans of their position via an argument — but when the LLM is given basic demographic information, it becomes more persuasive because it can tailor its arguments to individuals and manipulate them. 

Researcher Francesco Salvi raised legitimate concerns that this could lead to “armies of bots microtargeting undecided voters” and speculated that “malicious actors” may already be using bots to “spread misinformation and unfair propaganda.”

But in almost the next breath, he said the “potential benefits” would be the ability to manipulate the population with his own preferred narratives, saying the bots could help reduce “conspiracy beliefs and political polarization… helping people adopt healthier lifestyles.”

Personally, I don’t want to be manipulated by “good bots” spreading their version of “the truth” or the “bad bots” dealing in “misinformation” but I suspect everyone will end up getting manipulated by both in short order.

Via Francesco Salvi

Bots less accurate than humans

Left to their own devices, LLMs would probably manipulate people into believing incorrect stuff anyway. A new study from researchers at Utrecht, Cambridge and Western Universities found that LLMs like ChatGPT and Claude are five times more likely than humans to overgeneralize when summarizing scientific research. They tend to strip out the nuance and the scientific caveats and make overly broad claims that aren’t supported by the original findings.

This, of course, is a direct threat to the jobs of tabloid science journalists, who have been doing that for years.



Hollywood is so over thanks to Google

AI isn’t yet creative enough to write a decent screenplay, but Veo 3 suggests it can easily replace expensive camera crews and actors. The incredible text-to-video and audio generator can knock up an 8-second ultra-realistic-looking clip complete with dialogue, sound effects and background noise.

It can also mimic any style or format you like, from a ’90s sitcom to an overweight comedian telling a stand-up joke. This edit of vox pops at a car show was stitched together using the related Flow service. 

Not everyone is impressed, with some suggesting the movements and lip-sync aren’t 100% realistic — but it looks a quantum leap from Will Smith eating spaghetti two years ago.

As Michael Bay has demonstrated, you can make a whole movie with cuts under 8 seconds long. Adding to the fun, social media is full of real clips being passed off as Veo 3 deepfakes, like this bizarre but genuine avant-garde chicken song.

Lawyers are slightly less inaccurate than AI

Legal agreements are one area ripe for AI disruption, given that a lot of contracts are just cut-and-paste clauses from previous contracts with the names changed.

Serial founder Andrew Wilkinson posted this week that he’d saved $5,000 by “drafting a legal agreement with Gemini 2.5 Pro, which ranks #1 on LegalBench.”

“Once the context window is big enough to encompass all of a corporation’s historical legal records, cap table, financials, etc (it probably already is), I struggle to see how lawyers will compete.”

Save money now, with just a 16% chance of major legal error…

cointelegraph.com
