JOIN US TODAY FOR ANOTHER AMAZING TWITTER SPACE
Friday, March 24, 2PM (ET): Disco CEO Evin McMullen explains how to prove you're human in an AI world.

Inside Today's Meteor

  • Disrupt: A Conversation With Stephen Wolfram
  • Compress: Do Kwon Caught!
  • Cool Tools: NeRF scene editing from a text prompt

A Conversation with Stephen Wolfram

Meteor sat down yesterday with a giant of physics, computer science and mathematics to talk about the present and future of AI.

In addition to his influential science writing, Stephen Wolfram is the founder and CEO of Wolfram Research, where he developed Mathematica, a computation program built on pioneering machine learning and natural-language processing and used in many scientific, engineering, mathematical and computing fields; and also Wolfram|Alpha, an answer engine backed by a rich knowledge base he has been building for years.

On Thursday, OpenAI announced a new plugin program allowing third-party services to connect with ChatGPT to facilitate a whole new range of functions, from planning travel with Expedia to running computational prompts in Wolfram's suite of tools. By using Wolfram, ChatGPT can avoid "hallucinations," deliver accurate facts, and leverage computational powers not otherwise available out of the box. It's early days.
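The plugin itself runs inside ChatGPT, but for a feel of the round trip, here is a minimal sketch of how a client might hand a computational question to Wolfram|Alpha's public Short Answers API. The endpoint shape follows Wolfram's developer documentation; the app ID is a placeholder and the helper name is our own. We only build the URL here, since fetching it requires a real app ID.

```python
from urllib.parse import urlencode

# Wolfram|Alpha Short Answers API endpoint (per Wolfram's developer docs).
API_BASE = "https://api.wolframalpha.com/v1/result"

def wolfram_query_url(question: str, app_id: str = "YOUR_APP_ID") -> str:
    """Build the URL a client would fetch to get a plain-text answer.

    This only constructs the request; an actual call needs an app ID
    from Wolfram's developer portal.
    """
    return API_BASE + "?" + urlencode({"appid": app_id, "i": question})

print(wolfram_query_url("integrate x^2 from 0 to 3"))
```

Fetching that URL with a valid app ID returns a short plain-text answer (the integral evaluates to 9), which is exactly the kind of computed fact the plugin lets ChatGPT fold into its replies instead of guessing.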

The following is an edited transcript of the Twitter Space, transcribed with MacWhisper.

On ChatGPT's Surprise Leap in Human-like Chat Performance
I don't think anybody knows exactly what happened [between GPT-3 and GPT-3.5]. I think really it's the instruction-following capabilities of ChatGPT and that extra level of reinforcement learning, the human-feedback loop, that somehow has pushed it over the edge.

ChatGPT and Classical Logic
I think [the thing] people have really noticed is that ChatGPT is able to take advantage of early versions of logic.

I mean, picture the kind of stuff back in antiquity that allowed Aristotle to come up with logic.

He looked at lots of arguments people had given in rhetoric and so on, and said, well, what kind of patterns of linguistic forms exist that are repeated and that represent valid arguments in rhetoric?

And he noticed a collection of those, you know: "All men are mortal, Socrates is a man, therefore Socrates is mortal," and so on.

He noticed those as essentially linguistic patterns. And ChatGPT's done the same thing.

And so people say, isn't it remarkable that it does logic? Well, yes, it does logic for the same reason that Aristotle was able to pull logic out of language. It also pulls logic out of language.

But it also pulls out of language a lot of other kinds of [things] one might think of as "meaning": forms, like the kinds of things we see in logic, that are patterns of what is meaningful in language.

People haven't gone and found those things in the last couple of thousand years.

I think ChatGPT is a wake-up call that, yes, those things exist and there's a lot more structure in human language than we might have assumed. And it'll be interesting to try and identify those things.

No doubt, identifying them will allow one to greatly simplify, from a purely engineering point of view, what it takes to produce smooth, meaningful language in the kind of way that something like ChatGPT does.

Human-AI Collaboration
The thing I'm interested in right now is human-AI collaboration.

That is, we tend to express ourselves with human language; that's very easy for us. But we would like to build towers of functionality that are not readily expressible in human language.

I mean, if you try and use human language to describe, let's say, mathematics, you only get so far. This is the thing people realized 500 years ago, and mathematical notation was invented that really streamlined mathematics. One had the development of algebra and calculus and various mathematical sciences.

In today's world, the analogous opportunity is getting a good kind of notation for expressing computational ideas, just like mathematical notation is a good notation for expressing mathematical ideas.

And the thing I've spent much of the past 40 years on is developing that, the computational language to express computational ideas in a very streamlined way.

You as a human can read that computational language and understand what it's trying to do. Then your computer can run that computational language and do some potentially sophisticated deep computation, which you then see the results of.

But the important part of that picture is that you're not going through this intermediate stage of interacting with computational language and going back and forth; you're using natural language to poke at the computational language.

Will AI Destroy Us?
The world's going to be run by AIs before very long.

And, you know, in a sense the world is already run by machinery that is sort of beyond the human. Governments are sort of machinery, beyond the human, so to speak.

Also we live in a world in which there's nature as well as us humans. And nature, much like the AIs, is doing all kinds of complicated things that we don't necessarily understand.

So yes, just as nature can wipe us out if things go badly, so the civilization of the AIs could wipe us out too, if things go badly.

Hopefully it won't.

On the Unpredictability of AI
Computation is something where much of what is going to happen is not something that we can go in and say, "Oh, we can see these levers are moving in this way and that way, so therefore this is going to happen."

That's the thing people have to get used to.

People have the idea, from experience of the last couple hundred years or so of science, that science can predict everything. [But] this is just a phenomenon of a particular type of science and a particular paradigm for doing science, which has sort of restricted itself to those things which can readily be predicted.

I think that as soon as you get into the computational universe, you are in a situation where many things are computationally irreducible. Many things you can't really predict; you can just run the computation and see what happens.
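Wolfram's standard illustration of computational irreducibility (not named in this conversation, so we supply it here) is his Rule 30 cellular automaton: an update rule simple enough to state in one line, whose pattern nonetheless seems to have no shortcut. The only way to learn row n is to compute every row before it. A minimal sketch:

```python
def rule30(steps: int):
    """Evolve the Rule 30 cellular automaton from a single live cell.

    Each new cell is bit (4*left + 2*center + right) of the number 30,
    i.e. the rule table 111->0, 110->0, 101->0, 100->1,
                        011->1, 010->1, 001->1, 000->0.
    """
    width = 2 * steps + 3            # room for the light cone plus padding
    row = [0] * width
    row[width // 2] = 1              # single live cell in the middle
    history = [row]
    for _ in range(steps):
        row = [
            0 if i in (0, width - 1)  # fixed dead boundary
            else (30 >> (4 * row[i - 1] + 2 * row[i] + row[i + 1])) & 1
            for i in range(width)
        ]
        history.append(row)
    return history

for r in rule30(8):
    print("".join(".#"[c] for c in r))
```

Even with the whole rule in front of you, predicting, say, the center column a thousand steps out appears to require actually running those thousand steps; that is the "just run the computation and see" point in miniature.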

And I think that's the thing that we'll see lots of with AI.

When you deal with programs, there are often unintended consequences, which we typically call bugs. We think we set up a program this way, but it turns out that the program does something different than we expected.

It's some computational process so you can go in and see every bit. But if you want to have some sort of narrative explanation of what happened, that may just not be available.

The Future of Work
People talk about whether there will be anything left for the humans to do when AIs have automated all the stuff.

One of the consequences of this computational irreducibility phenomenon is [when] you automate things, you zero out one set of things, [but] you create new things that have not yet been automated.

This is the pattern that we've seen over the last 150 years since the industrial revolution: there's some domain of technology that provides lots of jobs for people. Telephone switchboard operators, let's say.

There are lots of those that are relevant when the technology of telephones is at a certain stage. Then automated switching comes in, and those people are no longer needed to do those jobs.

But it turns out that automated switching opens up the whole telecommunications world that leads to just a vast diversity of new things that one is interested in doing.

And at every moment, the frontier of things that are still relevant for people to do is very open. There are many different things that could be done, and the role of people tends to be to describe what it is we want to do versus what we don't want to do. What goals do we want to set?

You know, the AI on its own just ripples through its 175 billion neural-net weights and [finds] the next word. There's nothing in there that's saying, "I have this goal that's based on my whole connection to the web of human history," and so on.

That's the thing that is still very much left for the humans: to define the goals, to find which of the many directions one could go in, that one should actually decide to go in.

So you have kind of this computational contract that you can write in computational language that defines what it is you want to have happen.
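As a toy illustration of what a "computational contract" might mean (entirely our construction; Wolfram has something far richer in mind, written in Wolfram Language), here is a contract expressed as a machine-checkable predicate over proposed actions, with illustrative field names:

```python
# Toy computational contract: the goal is stated precisely enough that a
# program, not a lawyer, checks compliance. Field names are hypothetical.

def contract_ok(action: dict) -> bool:
    """Return True if a proposed AI action satisfies the contract."""
    discloses = action.get("discloses_ai", False)   # the AI must identify itself
    spend = action.get("spend_usd", 0)              # and respect a spending cap
    return discloses and spend <= 100

print(contract_ok({"discloses_ai": True, "spend_usd": 40}))    # allowed
print(contract_ok({"discloses_ai": False, "spend_usd": 40}))   # refused
```

Writing such a predicate for a toy agent is easy; deciding what its clauses should be at the scale of a society is the hard, human part.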

One big challenge, I think, in the current time, which I would say really still is a very wide open challenge, is what should that general computational contract, the AI Constitution, look like for what we want the AIs to be trying to do?

I think that's the thing where there's no mathematical right answer. That's a question of what humans want to have happen.



Compress

Do Kwon Caught!
The fugitive Terraform Labs co-founder, who incinerated more than $40 billion in a cryptocurrency implosion last year, was detained in Montenegro en route to Dubai with falsified passports.

The ChatBot Telephone Game Is On
Google and Microsoft’s chatbots are already citing one another in a misinformation shit show.

Trustworthy AI Gets a Boost
The Mozilla Foundation is dropping $30M to fund a new initiative to make AI safer.

AI on the Catwalk
Levi's is gearing up to test computer-generated fashion models.

The Ultimate Web3 Gamer Expansion Pack
Sony has filed a patent for NFT technology that would allow digital assets to be moved between games.

Arbitrum Airdrop Crashes Site
Arbitrum's website and blockchain scanner were down ahead of its ARB token claim event, amid massive interest from token users.


Cool Tools

Instruct-NeRF2NeRF enables editing of a NeRF scene with a simple text instruction.

You can now try Microsoft Loop, a Notion competitor with futuristic Office documents.

GitHub Copilot gets a new ChatGPT-like assistant to help developers write and fix code.


How did we do today? Love us or hate us, let us know.

Share this post