AI Eye – Cointelegraph Magazine



AI Arena

AI Eye chatted with Framework Ventures’ Vance Spencer recently, and he raved about the possibilities of an upcoming game his fund has invested in called AI Arena, in which players train AI models to battle each other in an arena.

Framework Ventures was an early investor in Chainlink and Synthetix, and it launched a similar NFL platform three years ahead of NBA Top Shot, so when the firm gets excited about a project’s prospects, it’s worth looking into.

Also backed by Paradigm, AI Arena is like a cross between Super Smash Brothers and Axie Infinity. The AI models are tokenized as NFTs, meaning players can train them up and flip them for profit or rent them to noobs. While this is a gamified version, there are endless possibilities involved with crowdsourcing user-trained models for specific purposes and then selling them as tokens in a blockchain-based marketplace.

Screenshot from AI Arena

“Probably some of the most valuable assets on-chain will be tokenized AI models; that’s my theory at least,” Spencer predicts.

AI Arena chief operating officer Wei Xie explains that his cofounders, Brandon Da Silva and Dylan Pereira, had been toying with creating games for years, and when NFTs and later AI emerged, Da Silva had the brainwave to put all three elements together.

“Part of the idea was, well, if we can tokenize an AI model, we can actually build a game around AI,” says Xie, who worked alongside Da Silva in TradFi. “The core loop of the game actually helps to reveal the process of AI research.”


There are three elements to training a model in AI Arena. The first is demonstrating what needs to be done — like a parent showing a kid how to kick a ball. The second element is calibrating and providing context for the model — telling it when to pass and when to shoot for goal. The final element is seeing how the AI plays and diagnosing where the model needs improvement.   

“So the overall game loop is like iterating, iterating through those three steps, and you’re kind of progressively refining your AI to become this more and more well balanced and well rounded fighter.”
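The three steps can be sketched as a toy imitation-learning loop. Everything below is invented for illustration — the task (a soccer-style shoot-or-pass decision), the function names and the threshold model are assumptions, since AI Arena’s actual training interface isn’t public — but it shows the demonstrate–calibrate–diagnose cycle the quote describes.

```python
import random

def expert_policy(distance):
    """The 'parent' demonstrating: shoot when within 10m of goal, else pass."""
    return "shoot" if distance < 10 else "pass"

def demonstrate(n):
    """Step 1: collect labeled demonstrations from the expert."""
    samples = [random.uniform(0, 30) for _ in range(n)]
    return [(d, expert_policy(d)) for d in samples]

def calibrate(demos):
    """Step 2: fit a simple threshold model to the demonstrations."""
    shoots = [d for d, a in demos if a == "shoot"]
    passes = [d for d, a in demos if a == "pass"]
    if not shoots or not passes:
        return 10.0  # fallback when demonstrations only show one action
    return (max(shoots) + min(passes)) / 2

def diagnose(threshold, trials=1000):
    """Step 3: play the model and measure how often it disagrees with the expert."""
    errors = 0
    for _ in range(trials):
        d = random.uniform(0, 30)
        if ("shoot" if d < threshold else "pass") != expert_policy(d):
            errors += 1
    return errors / trials

random.seed(0)
threshold, err = 10.0, 1.0
for _ in range(5):  # the iterate-and-refine game loop
    demos = demonstrate(50)
    threshold = calibrate(demos)
    err = diagnose(threshold)

print(round(threshold, 2), err)
```

Each pass through the loop gathers fresh demonstrations, refits the model and measures where it still loses to the expert — the “progressively refining” Xie describes, just on a trivially simple policy.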

The game uses a custom-built feed forward neural network and the AIs are constrained and lightweight, meaning the winner won’t just be whoever’s able to throw the most computing resources at the model.
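For a sense of what “constrained and lightweight” means, here is a minimal feedforward forward pass in plain Python. The layer sizes and weights are made up — AI Arena hasn’t published its architecture beyond it being a custom feedforward model — but a network this small runs in microseconds on any machine, which is why extra compute wouldn’t buy an advantage.

```python
def relu(x):
    """Standard rectifier activation: clamp negatives to zero."""
    return [max(0.0, v) for v in x]

def dense(x, weights, biases):
    """One fully connected layer: y = Wx + b."""
    return [sum(w * v for w, v in zip(row, x)) + b
            for row, b in zip(weights, biases)]

# Deliberately tiny, invented shapes: 3 inputs (say, distance and health
# readings), 4 hidden units, 2 output action scores (say, move and attack).
w1 = [[0.5, -0.2, 0.1], [0.3, 0.8, -0.5], [-0.1, 0.4, 0.9], [0.2, -0.6, 0.7]]
b1 = [0.0, 0.1, -0.1, 0.0]
w2 = [[1.0, -0.5, 0.3, 0.2], [-0.4, 0.6, 0.8, -0.1]]
b2 = [0.0, 0.0]

def forward(obs):
    """Full forward pass: hidden layer with ReLU, then linear output scores."""
    return dense(relu(dense(obs, w1, b1)), w2, b2)

scores = forward([1.0, 0.5, -0.2])
print(scores)
```

With so few parameters, the skill ceiling sits in how the model is trained, not in how big it is.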

“We want to see ingenuity, creativity to be the discerning factor,” Xie says. 

Currently in closed beta testing, AI Arena is targeting the first quarter of next year for mainnet launch on Ethereum scaling solution Arbitrum. There are two versions of the game: One is a browser-based game that anyone can log into with a Google or Twitter account and start playing for fun, while the other is blockchain-based for competitive players, the “esports version of the game.”


This being crypto, there is a token of course, which will be distributed to players who compete in the launch tournament and later be used to pay entry fees for subsequent competitions. Xie envisages a big future for the tech, saying it can be used “in a first-person shooter game and a soccer game,” and expanded into a crowdsourced marketplace for AI models that are trained for specific business tasks.

“What somebody has to do is frame it into a problem and then we allow the best minds in the AI space to compete on it. It’s just a better model.”

Chatbots can’t be trusted

A new analysis from AI startup Vectara shows that the output from large language models like ChatGPT or Claude simply can’t be relied upon for accuracy.

Everyone knew that already, but until now there was no way to quantify the precise amount of bullshit each model is generating. It turns out that GPT-4 is the most accurate, inventing fake information just 3% of the time. Meta’s Llama models make up nonsense 5% of the time, while Anthropic’s Claude 2 system produced 8% bullshit.

Google’s PaLM hallucinated an astonishing 27% of its answers.

PaLM 2 is one of the components incorporated into Google’s Search Generative Experience, which highlights useful snippets of information in response to common search queries. It’s also unreliable.

For months now, if you ask Google for an African country beginning with the letter K, it shows the following snippet of…
