Inside Today's Meteor

  • OpenAI Was Created to Fight Dragons. Now It's Riding Them.
  • AI Artists Win at the Copyright Office
  • Cool Tools: Midjourney 5 Is Out and It Fixes the Hands
  • Plus: Some GPT-4 Projects in the Past 24 Hours
  • Meme of the day

OpenAI Was Created to Fight Dragons. Now It's Riding Them.

OpenAI's song is ancient AI lore. It tells of an open-source, not-for-profit band of white knights who found, raised, and climbed onto the back of a closed, for-profit beast now valued at $30B. With jaws that talk.

It was first sung last year, when the white knights were sighted winging the skies on ChatGPT, causing great jubilation among the townspeople. And it's being sung once again with the flight of its newest and most powerful ride yet, GPT-4.

It's legend, but even people close to it don't seem to quite understand how it happened, or if it was even legal. Pass the mead.

This all went down eons ago, in AI years. It was a done deal by 2019, but no one noticed or cared until ChatGPT touched down in the castle keep last year and crowds came from all seven kingdoms to see it and speak with it.

OpenAI could have plugged along for another decade in the labs with no hope of a profit, in which case the wizard Elon would not be wondering whether he should try to get his jewels back, or fuming about how he wound up helping animate a "woke" AI.

But it didn't, so he is. Let's recap.

In 2015, young Sam Altman, the wizard Elon and other luminaries got concerned about a dragon (AI) that would come to destroy the world and everything in it, so they got together to figure out how to save humanity.

They came up with a plan to guide the dragon (AI, you following?) to a safe and friendly place by ensuring core discoveries about it were widely shared. A well-funded nonprofit research foundation run by a band of white knights (honest and upright AI geniuses) seemed just the ticket to keep the industry on the straight and narrow path. Inoculate the world against the evil effects of AI through open science! Tame the dragon!

(Narrator aside #1: It was and is a controversial and arguably counter-intuitive point of view, given the very real risks of indiscriminately distributing powerful code with unknown effects. The closest analogy might be helping a virus spread to speed up herd immunity and keep the survivors safe. Or sharing nuclear secrets to create a global standoff that prevents nuclear war.)

(Narrator aside #2: Defenders of openness – which include Meteor; read our second-to-last bit – point out that open systems tend to be more resilient, that problems can be identified and mitigated more quickly when more people can test them, and that new and innovative ideas are most likely to come when core tech is not locked down but widely available for experimentation. Back to the show.)

And yet. Perhaps not surprisingly, the open science strategy would quickly come under pressure, and buckle, for more reasons than one.

OpenAI back in the day

OpenAI got off the ground with $1 billion in charitable pledges, which seems like a lot for any startup, let alone a nonprofit startup, but even that couldn't pay the bills (Microsoft's Peter Lee once quipped that a top AI researcher costs more than an NFL quarterback prospect, and that was a few years ago).

What OpenAI did have were ideals far more attractive to PhD dreamers than anything on offer from better-funded rivals like Google's DeepMind, Facebook, or IBM: a high-minded mission of making AI safe for the world and the prospect of pure research untethered from commercial applications.

An elite core team was quickly and affordably secured, but even a cut-rate payroll couldn't offset the mounting compute costs required to develop next-generation AI. Then the wizard Elon left to breed an intelligent self-riding electric horse, and young Sam was left to manage things on his own.

He grew up fast.

By 2019, Sam had convinced the board that OpenAI could not compete as a pure nonprofit. They agreed and adopted a dual structure: a "capped" for-profit arm, with returns limited to 100x investment, and a nonprofit steering foundation, still nominally in charge but now in the background.

It was then that OpenAI started acting like a VC-funded portfolio company, granting equity to employees, selling a stake to the friendly neighboring kingdom of Microsoft for $1B, and taking on additional investors. The economics of AI research was used to justify it all: They would need to raise again, and soon, Sam predicted at the time.

It was also around this time that the band of white knights began to be drawn to more obviously commercial applications, especially applications that might be built on a new and innovative large language model it had just invented: GPT. Once the capabilities started to come into focus, what had begun as cutting-edge, collaborative open research into the use of neural networks to mimic language switched. The thinking, roughly:

Holy shit, this works. Creating a beneficent artificial general intelligence that won't destroy the world is really hard and probably won't happen for a really long time, so we're kinda wasting our effort here. How about we monetize this other thing, right now? And own it?

MVP, don't overbuild: excellent startup advice. With the Rubicon crossed, openness was the next principle to fall.

In February 2019, OpenAI released its second-generation language model, GPT-2, with very little public documentation. The company cited safety concerns, which many people felt were overblown. Full documentation followed in November of the same year, just a few months before the arrival of its next-generation language model, GPT-3, which it promptly licensed exclusively to the friendly neighboring kingdom of Microsoft.

This week a lot of people noticed that GPT-4 was released in much the same way as GPT-2. Only this time safety was not the only excuse: OpenAI also admitted to weighing the commercial value of secrecy over the scientific value of making it more open.

With another $10B in the bank from Microsoft, and products that everyone wants and many people will pay for, OpenAI had a choice to make. So it abandoned everything it had set out to be.

Now the guardians first sworn in to protect us against unknown dragons from the future (AI) are selling us day passes to rides inside the castle. So far, it's been a gas, and it's nice to know they don't want us to fall out of our seats and hurt ourselves.

Create

Creators are taking Midjourney 5 for an early spin. The results are impressive.

Javi Lopez

Nick St. Pierre

Stephan Vasement

Quick Hits

AI Artists Win One at the U.S. Copyright Office
In the first formal guidance on a hot-button topic, the top federal copyright agency finds AI-assisted art may qualify for copyright protection for works that can show "sufficient" human authorship. Now the battle over what counts as sufficient begins.

LinkedIn, the Under-Appreciated Social Platform, Flexes AI
LinkedIn is using GPT-4 for personalized profile writing suggestions and GPT-3.5 for job descriptions.

OpenAI Partners With Stripe on Payments
OpenAI joins generative AI pioneers Runway, Diagram, and Moonbeam on the payments platform.

To the Moon, and Beyond?
This one's weird. A viral Reddit thread recently accused Samsung of "making up" details in moon photos captured on its devices to make its cameras seem more capable than they are. In response, the company republished an English translation of a year-old, Korean-language explanation of its photo-enhancement techniques, denying it adds fake data.

Cool Tools

Midjourney 5 Is Out
And it fixes the hands. We're going to miss those extra digits. :(

Some GPT-4 Projects in the Past 24 Hours

One-Click Robocall Lawsuits
From DoNotPay, the self-described "robot lawyer," comes a GPT-4 service that answers the phone, transcribes the call, and generates and files the legal complaint.
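DoNotPay hasn't published how the service actually works, but the pipeline as described (take the call, transcribe it, draft the complaint) maps in an obvious way onto public APIs. Here's a minimal, hypothetical Python sketch using OpenAI's Whisper transcription and GPT-4 chat endpoints; the prompt, file name, and helper functions are our own assumptions, and the phone-answering and court-filing steps are left out entirely.

# Hypothetical robocall-to-complaint sketch. Not DoNotPay's code:
# model choices, prompt, and helpers are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment


def transcribe_call(audio_path: str) -> str:
    """Transcribe a recorded robocall with Whisper."""
    with open(audio_path, "rb") as f:
        result = client.audio.transcriptions.create(model="whisper-1", file=f)
    return result.text


def draft_complaint(transcript: str) -> str:
    """Ask GPT-4 to turn the transcript into a draft complaint (illustrative prompt)."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": (
                    "You draft small-claims complaints against illegal robocallers "
                    "under the Telephone Consumer Protection Act."
                ),
            },
            {
                "role": "user",
                "content": f"Call transcript:\n{transcript}\n\nDraft the complaint.",
            },
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # "robocall.mp3" is a placeholder recording; filing is left to the lawyers.
    print(draft_complaint(transcribe_call("robocall.mp3")))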

A Working Game of Pong

A Working Game of Snake


Meme of the day

