25K traders bet on ChatGPT’s stock picks, AI sucks at dice throws, and more – Cointelegraph Magazine


Almost 25,000 investors have signed up to trade alongside ChatGPT as they follow the GPT Portfolio experiment from copy trading firm Autopilot.

The traders have bet a combined $14.7 million on the AI’s stock picks, which works out to an average of about $600 each, assuming they all invested after signing up. They’re hoping to capture even a small slice of a purported 500% return from one of the strategies backtested in academic research.

The GPT Portfolio gets the AI to analyze 10,000 news articles and 100 company reports to select 20 stocks for the $50,000 portfolio, updated each week. The initial picks included Berkshire Hathaway, Amazon, D.R. Horton and Davita Health. After two weeks, the portfolio is up around 2%, which is pretty much the same as the stock market. 
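For a rough sense of the mechanics, here is a minimal, hypothetical Python sketch of that kind of weekly loop: feed the news and reports to a language model, ask for 20 tickers, and split the pot equally between them. The ask_model helper, the prompt and the equal-weight split are illustrative assumptions, not Autopilot’s actual implementation.

def ask_model(prompt: str) -> str:
    """Placeholder for a chat-completion call; assumed to return comma-separated tickers."""
    raise NotImplementedError

def weekly_rebalance(news: list[str], reports: list[str], capital: float = 50_000) -> dict[str, float]:
    # Ask the model to pick 20 stocks from this week's news and filings.
    prompt = (
        "From these news articles and company reports, pick the 20 stocks "
        "most likely to outperform next week. Reply with tickers only, comma-separated.\n\n"
        + "\n".join(news) + "\n\n" + "\n".join(reports)
    )
    tickers = [t.strip() for t in ask_model(prompt).split(",")][:20]
    # Equal-weight allocation across the picks (an assumption for illustration).
    return {ticker: capital / len(tickers) for ticker in tickers}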

Interestingly, the bottom five picks lost more in percentage terms than the top five gained (Dollar Tree lost 17% after it missed earnings), so it might be more sensible in the future to invest only in GPT-4’s best five or 10 ideas, but we’ll see how it works out.

The smaller-scale ChatGPT Crypto Trader account is tweaking a similar strategy that asks GPT-4 for advice on when to go long on Ethereum. The account’s owner says it shows a profit of 11,000% when backtested to August 2017, but in the real-world experiment running since January, the portfolio is up by a third, while the Ethereum price has gained 60%.
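To see how such a comparison works, here’s a minimal Python sketch that pits a long-or-flat signal against simply holding ETH over the same price series. The prices and signal below are made-up placeholders, not the trader’s actual model or data.

def strategy_vs_hodl(prices: list[float], long_signal: list[bool]) -> tuple[float, float]:
    """Return (strategy return, buy-and-hold return) over a daily price series.

    long_signal[i] says whether the strategy is long from day i to day i+1.
    """
    strat = 1.0
    for i in range(len(prices) - 1):
        daily = prices[i + 1] / prices[i]
        strat *= daily if long_signal[i] else 1.0  # sit in cash when not long
    hodl = prices[-1] / prices[0]
    return strat - 1, hodl - 1

# Toy example: a signal that is long only every other day.
prices = [100.0, 110.0, 99.0, 120.0, 132.0]
signal = [True, False, True, False]
print(strategy_vs_hodl(prices, signal))  # strategy ~+33%, buy-and-hold +32%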



It’s worth being careful using AI for trading, however. Crypto derivatives platform Bitget recently abandoned an experiment with AI on its platform due to the potential for misinformation. A survey found that 80% of its users had a negative experience with the AI, including receiving false investment advice and other misinformation.

Bitget Managing Director Gracy Chen says:

“AI tools, while robust and resourceful, lack the human touch necessary to interpret market nuances and trends accurately.”

The GPT Portfolio hopes that CNN is on the money. (Autopilot)

Are LLMs stupid?

There are two extremes when it comes to thinking about large language models like GPT-4: some people maintain they are stupid mansplaining bots that confidently blurt out fake information, while others believe they will lead to artificial general intelligence (AGI), meaning intelligence equivalent to or better than a human’s. Researchers from Microsoft published a 155-page paper called “Sparks of Artificial General Intelligence” back in March, arguing the latter is the case, apparently super impressed that GPT-4 was clever enough to work out how to stack a book, nine eggs and a laptop on top of each other.

Demis Hassabis, the co-founder of DeepMind, thinks the rate of progress is set to continue, meaning we may be just “a few years, maybe a decade away” from AGI. But robotics researcher and AI expert Rodney Brooks argues that large language models like ChatGPT are not going to lead to AGI. He says they don’t understand anything and can’t logically infer meaning.

“What the large language models are good at is saying what an answer should sound like, which is different from what an answer should be.”

Another skeptic is AI writer Brian Chau, who is writing a three-part series called “Diminishing returns in machine learning.” He argues that AI development is bumping up against hardware limitations and the extravagant cost of training larger models, and is starting to slow. He puts the chances of AGI at less than 5% by 2043.

ChatGPT’s loaded dice

One task that’s beyond ChatGPT is rolling a die and giving you a random number. Ask it to do so, and it’ll invariably roll a four on its first go. Ask it for a number between one and 10, and it’ll pick seven. Ask it for a number between one and 30, and it’ll pick 17. (Bard is similarly nonrandom.) One Redditor got it to roll a die 50 times, with the bot returning “31 fours, 12 threes, 4 sixes, 3 fives, and no ones or twos.”
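The bias is easy to quantify. Here’s a minimal Python sketch running a chi-square goodness-of-fit test on the Redditor’s reported counts against what a fair die would produce; the counts come from the post above, and the test itself is standard SciPy.

from scipy.stats import chisquare

# Counts reported for 50 ChatGPT "rolls": 31 fours, 12 threes, 4 sixes,
# 3 fives, and no ones or twos.
observed = [0, 0, 12, 31, 3, 4]    # faces 1 through 6
expected = [50 / 6] * 6            # a fair die: about 8.3 of each

stat, p_value = chisquare(observed, f_exp=expected)
print(f"chi-square = {stat:.1f}, p = {p_value:.2g}")
# The p-value is vanishingly small, so the odds that a fair die
# produced this spread are effectively zero.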

What seems to be happening is the “random” numbers it produces are the ones that appear most frequently in its training data — because humans pick those same random numbers most often as well. In fact, the phenomenon of people picking seven when asked to choose a number between one and 10 is so well known, it’s even a magic trick/pick-up technique that The Game author Neil Strauss used to impress Britney Spears when he correctly “guessed” a number she’d chosen.


AI job losses may inspire a revolution

Goldman Sachs suggests 300 million jobs will be lost to automation,…
