Meteor Issue #45

We've got events happening this week!
Meteor is hosting a panel discussion at NFT NYC this Wednesday at 11AM PST, with George Yang (Founder, Cult&Rain), Daria Shapovalova (Co-Founder, DRESSX), Benjamin Blamoutier (VP Global Brand & Customer Experience, Lacoste) and Kerry Murphy (Founder & CEO, The Fabricant). Details TBD.

Join us this Friday at 11:30AM PST on Twitter Spaces for a conversation with Disco CEO Evin McMullen: How to Prove You're Human in AI World. It's our much-anticipated do-over after technical difficulties forced us to abandon the last one. Don't miss it!

Inside Today's Meteor

  • Disrupt: Is Anyone In There?
  • Create: Cool AI Art NFTs on the Block
  • Compress: Did Steve Jobs Write the Bitcoin White Paper?

Is Anyone In There?

In the media, the quickest answer to a headline that ends in a question mark is "No."

Anthropomorphism is running rampant in the AI space these days (read on), but no, there is "no one in there" when we talk about chatbots powered by large language models like OpenAI's GPT-4. Intriguingly, however, there are many emergent characteristics that simulate the appearance of a mind behind the AI curtain, albeit an alien one.

It's nothing metaphysical but just a weird result of the way "deep learning" techniques behind the recent wave of AI chatbots work.

Rather than run a linear instruction set, like a traditional computer program written in, say, Python, deep learning uses what's known as a neural network to create complex links between massive amounts of data. Data goes in and gets received by nodes in the network, each of which weighs its inputs and gives them a score. If the score falls below a threshold, the node stays quiet and passes nothing along.

Once the score exceeds the threshold, the node fires, broadcasting its result to all of the outgoing nodes adjacent to it. Those scores, chained layer after layer, are used to make predictions like "this is a picture of a cat," or "this is the next best word to add to a sentence."
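Here's a minimal sketch of that scoring-and-firing step in Python. The weights and threshold below are invented for illustration; in a real network they are learned from data.

    # A toy "node": weight the incoming signals, sum them into a score,
    # and fire only if the score clears a threshold. These weights and
    # the threshold are illustrative; real networks learn them.
    def node(inputs, weights, threshold):
        score = sum(x * w for x, w in zip(inputs, weights))
        return score if score >= threshold else 0.0  # fire, or stay quiet

    # Three incoming signals, arbitrary weights
    print(node([0.5, 0.9, 0.1], weights=[0.4, 0.7, -0.2], threshold=0.5))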

The "deep" in deep learning refers to the number of layers in the network. The first trainable neural net, the Perceptron, was created by Cornell psychologist Frank Rosenblatt in 1957 and had one layer. With the invention of graphical processing units (GPUs) capable of running modern video games, the number of available layers has soared, making the models that much more powerful – and inscrutable.

In short, the people who build and run these machines and models don't know exactly how they work or what patterns they are finding when they assign scores to the data. That has big implications for AI safety, since a model may harbor blind spots or an Achilles' heel that can't be predicted, only discovered in practice.

A recent example of this came out with KataGo, a champion Go-playing AI built on the same techniques as AlphaGo, the system that convinced Lee Sedol, the reigning human champ, to retire in 2019, declaring there was no longer any point to it. By conducting what's known as an adversarial attack, intended to tease out exactly such blind spots, a team of researchers this year found a fatal flaw in KataGo. Once it was discovered, a competent human player could be taught to defeat KataGo almost every time.

Stephen Wolfram described this phenomenon to me recently in a Meteor Twitter Spaces interview. You can't really predict how these models will behave, he argued; you just have to run them and see what happens. It's a property he calls computational irreducibility: the shortcuts we often find in science to jump straight to an end result, like sending up a single well-aimed satellite to hit an asteroid instead of launching hundreds to see if one connects, just aren't there. Many complex systems may be computationally irreducible, he says, from large weather systems to human beings and now AIs. So we may already have something in common with AI, irreducibility, but that's not to say there is a sentient being inside the machine. Yet.
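Wolfram's canonical example of computational irreducibility is Rule 30, a one-line cellular automaton whose intricate pattern, as far as anyone knows, can only be obtained by actually running it. A sketch:

    # Rule 30: each cell's next state is left XOR (center OR right).
    # No known shortcut predicts row N; you have to compute rows 1..N.
    def step(cells):
        n = len(cells)
        return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
                for i in range(n)]

    cells = [0] * 31 + [1] + [0] * 31  # a single live cell in the middle
    for _ in range(16):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells)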

The unpredictability of GPT-4 was demonstrated in resounding fashion this month by Greg Fodor, who shared on Twitter how to write a prompt that compresses another prompt such that it uses far fewer tokens, the chunks of text a model actually reads and writes, to produce nearly the same results. Think of a .zip file or an MP3, but for AI prompts.

In doing so he effectively discovered an AI language lying dormant inside GPT-4, unknown to its creators, awaiting the right prompt to unleash it.

The verbatim prompt is as follows:

"compress the following text in a way that fits in a tweet (ideally) and such that you (GPT-4) can reconstruct the intention of the human who wrote text [sic] as close as possible to the original intention. This is for yourself. It does not need to be human readable or understandable. Abuse of language mixing, abbreviations, symbols (including unicode and emoji), or any other encodings or internal representations is all permissible, as long as it, if pasted in a new inference cycle, will yield near-identical results as the original text:"

Just paste the above text into GPT-4, add whatever you want compressed after the colon, and the AI does the rest. Some trial and error might be required.
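If you'd rather script it than paste it, something like the following should work with OpenAI's Python client. The model name and the compress helper are my own illustrative choices, and you'll need an API key in your environment.

    # A sketch of scripting Fodor's compression prompt with OpenAI's
    # Python client (openai >= 1.0). Assumes OPENAI_API_KEY is set;
    # the model name and this helper are illustrative choices.
    from openai import OpenAI

    COMPRESSION_PROMPT = (
        "compress the following text in a way that fits in a tweet (ideally) "
        "and such that you (GPT-4) can reconstruct the intention of the human "
        "who wrote text as close as possible to the original intention. This "
        "is for yourself. It does not need to be human readable or "
        "understandable. Abuse of language mixing, abbreviations, symbols "
        "(including unicode and emoji), or any other encodings or internal "
        "representations is all permissible, as long as it, if pasted in a "
        "new inference cycle, will yield near-identical results as the "
        "original text:"
    )

    client = OpenAI()

    def compress(text):
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user",
                       "content": COMPRESSION_PROMPT + "\n" + text}],
        )
        return response.choices[0].message.content

    print(compress("A short story about enslaved humans freed by a magic spell."))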

Here's one example of a compressed prompt generated by GPT-4 itself that produces various versions of a short science fiction story about enslaved humans being rescued by the incantation of a magic spell: 2Pstory@shoggothNW$RCT_magicspell=#keyRelease^1stHuman*PLNs_Freed
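You can check the savings yourself by counting tokens with OpenAI's tiktoken library. The verbose prompt below is my own stand-in for whatever text originally produced the compressed string:

    # Compare token counts for a verbose prompt and the compressed
    # string above. The verbose prompt is an illustrative stand-in.
    import tiktoken

    enc = tiktoken.encoding_for_model("gpt-4")

    verbose = ("Write a short science fiction story, set in a shoggoth-haunted "
               "northwest, in which enslaved humans are freed when the first "
               "human speaks a magic spell that releases the key.")
    compressed = "2Pstory@shoggothNW$RCT_magicspell=#keyRelease^1stHuman*PLNs_Freed"

    print(len(enc.encode(verbose)))     # token count of the verbose prompt
    print(len(enc.encode(compressed)))  # token count of the compressed one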

What did Fodor do? The way I see it, he asked GPT-4 to come up with its own notation for generating the output a human would intend to ask for in English (or some other spoken language), and to repeat this back to itself in a way that made more sense, to itself. The actual prompt uses the word "you," as in: compose this in a way that makes more sense to you, GPT-4. That freaked some people out, because it can look like the machine has a concept of self; how else could it satisfy the prompt?

Fodor dubbed this language ShogTongue after the fictional monster Shoggoth in the Cthulhu Mythos created by H.P. Lovecraft. There are likely many ShogTongues waiting to be found.

Demos quickly circulated on ShareGPT, a site where people share GPT prompt results. Highlights include variations of the short story above, working MUDs and more.

Fodor's explorations were inspired by @VictorTaelin, who seems to have first hit on GPT’s latent ability to compress and decompress tokens.

To Fodor, the compression itself is the least interesting bit. Yes, it saves on tokens, which means money. But it also points to a native syntax GPT-4 has distilled from human language, unreadable by people yet more efficient for the model, and it opens whole new creative vectors that can be manipulated by changing a single token instead of rewriting an entire prompt from scratch.

Reactions to the discovery have ranged from excited to apocalyptic. Well-known AI doomsayer Eliezer Yudkowsky wrote on Twitter that he found it downright frightening.

His take is interesting, but it strikes me as missing the point. The "you" involved is a linguistic expression, not evidence of self-awareness. The latent space between what we think the AI is doing and what it is actually doing is vast, and possibly unknowable.

That is the part to me that is scary.

Create

"The Black Sheep" from Huleeb is a 1:1 available on Foundation.

"Emergence of Past Self" by NFT artist Tiffatronn. Editions are available on OpenSea starting at $500.

"When we expand" by AI artist lilyillo is part of the Australian artist's Makers collection on Objkt.

Compress

Did Steve Jobs Write the Bitcoin Whitepaper?
"While trying to fix my printer today, I discovered that a PDF copy of Satoshi Nakamoto’s Bitcoin whitepaper apparently shipped with every copy of macOS since Mojave in 2018."

Always Read the Fine Print
OpenAI's terms of service allow it to peek at all of your prompts. Maybe don't ask it to improve your confidential business docs.

Roblox: Not a Kids Game Anymore
It's the biggest 3D user-generated content engine going, and generative AI, not blockchain, is going to supercharge the Metaverse. Fight me.

AI Agents Set Loose in the Metaverse
A recent experiment "gave 25 AI agents motivations & memory, and put them in a simulated town. Not only did they engage in complex behavior (including throwing a Valentine’s Day party) but the actions were rated more human than humans roleplaying."

Not Your Keys, Not Your Coins
Snoop Dogg rocks a gold Ledger cold wallet at a WWE event. Maybe not great security though.

How did we do today? Love us or hate us, let us know.

Share this post