If you want to know what a person is really like, watch how they treat the staff in a restaurant. Power reveals the true character of those who hold it.
I couldn’t take Sam Altman out to dinner, but I could watch his behavior in a similar setup: a live interview. Just as restaurant customers have an edge over waiters, OpenAI’s CEO had an edge over his listeners, and what he did was take every single one of them for a fool.
“We now [have] these Language Models that can, uh, understand language,” he said. “We don’t have it working perfectly yet, but it works a little, and it’ll get better and better.”
There are a thousand ways Sam could have formulated that first sentence, but he chose the one verb that conveys a big fat lie. But wait, you may say, he did add that it only works a little, didn’t he? That’s also a lie, because Large Language Models don’t understand anything, and they can’t get better at something they can’t do in the first place.
What most people call “Artificial Intelligence” these days is a specific brand of tech called Machine Learning. You basically show a computer a bunch of examples of what you want to achieve and ask it to find a way to produce the same result.
The most common use case for Machine Learning is image recognition. Say you want your “AI” to identify cats in random pictures. All you have to do is feed your system two sets of data: lots of images of cats and lots of images of “not cats.” Note that each picture you feed in must be labeled, which is a fancy way of saying you attach a tag to each of your images. In this case, your labels are “cat” and “not cat.”
Give your machine enough “cat/not cat” examples, and it’ll learn to recognize cats in almost any picture. The breakthrough here is you don’t need code — only data, lots of it. Your machine uses your carefully curated examples to teach itself, so to speak, which is why we call this field Machine Learning or ML.
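If that sounds abstract, here’s a minimal sketch of the workflow. This is my own toy example in Python with scikit-learn, and the “pictures” are faked as random number grids, because the point is the process rather than actual cat detection: labeled examples go in, a classifier comes out, and nobody writes a single rule about whiskers or ears.

```python
# A hedged, minimal sketch of the "cat / not cat" workflow using scikit-learn.
# Real systems train neural networks on actual pixels; here the pictures are
# faked as random feature vectors purely to show the process:
# labeled examples in, a trained classifier out, no hand-written rules anywhere.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
cat_pictures = rng.normal(loc=1.0, size=(500, 64))       # stand-ins for images labeled "cat"
not_cat_pictures = rng.normal(loc=-1.0, size=(500, 64))  # stand-ins for images labeled "not cat"

X = np.vstack([cat_pictures, not_cat_pictures])  # the examples
y = ["cat"] * 500 + ["not cat"] * 500            # the labels

model = LogisticRegression(max_iter=1000)
model.fit(X, y)  # the "learning" step: patterns are extracted from labeled data

unseen_picture = rng.normal(loc=1.0, size=(1, 64))
print(model.predict(unseen_picture))  # -> ['cat'], most of the time
```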
In real life, you don’t always have as much quality data as you’d like. But you can make up for that using feedback loops. You review your machine’s work every now and then and nudge it in the right direction. You act like a teacher who’s training a very-hard-working-but-also-very-dumb student.
Now here’s the kicker. You’ll never ever be able to give direct instructions to your robotic student, and you’ll never know what it thinks. Even if it could tell you how it thinks, you wouldn’t understand a word, because it uses a totally different language.
Put differently, your Machine Learning model writes its own code in a language you can’t read. That’s why AI/ML people often talk about a black box. “Nobody knows what’s going on inside a Machine Learning model,” they would say. “It’s a freaking black box!!!”
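You can check the black-box claim yourself by peeking inside any trained model. Below is a small sketch of mine, again using scikit-learn on made-up data: it trains a tiny neural network, then prints what the network “knows,” and all you get back is matrices of floating-point numbers.

```python
# Peeking inside a trained model: its "knowledge" is just matrices of floats.
# The data here is random; the only goal is to get a fitted network to inspect.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # an arbitrary rule for the network to learn

net = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=3000).fit(X, y)

for i, layer in enumerate(net.coefs_):
    print(f"layer {i}: {layer.size} learned numbers, for example {layer.flat[:3]}")
# Nothing in that dump reads like "if the ears are pointy, it's a cat".
```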
Now, this is fine until people start to use black boxes to make delusional claims like “ChatGPT understands language.” They’ll say we don’t know what’s happening inside ChatGPT, but it’s answering our questions quite well, right? Perhaps it figured out language after consuming a ton of it!?!
The claim sounds plausible, but like many of ChatGPT’s answers, it’s pure bullshit.
We don’t know what’s happening inside a Machine Learning model, but we definitely know how it works. It’s like when you ride a bike. You can’t fully describe how each cell of your body behaves, but you know it’s a matter of balancing your weight.
Large Language Models, which are the Machine Learning flavor used by ChatGPT, work by mimicking human language without actually understanding it. This brings us to Gary Marcus, an AI enthusiast, founder, and author of Rebooting AI, one of Forbes' 7 must-read books on the topic.
Gary compares ChatGPT’s responses to “pastiche”: the act of imitating the work of one or many creators to produce a new piece of content. You could say ChatGPT stitches words together to form plausible answers based on patterns of text absorbed from the internet. The keyword here is “plausible.”
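A toy version of that stitching takes a dozen lines of code. The sketch below is mine, not OpenAI’s method: a crude word-pair Markov chain instead of a giant neural network, but it shows the principle. Each next word is chosen only because it tended to follow the previous word somewhere in the source text.

```python
# A toy "pastiche" generator: a word-pair (bigram) Markov chain.
# It only records which word tends to follow which; meaning never enters the picture.
import random
from collections import defaultdict

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

followers = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current].append(nxt)  # remember every word seen right after `current`

random.seed(1)
word, output = "the", ["the"]
for _ in range(12):
    word = random.choice(followers[word])  # pick a statistically plausible next word
    output.append(word)

print(" ".join(output))
# Prints a string that looks locally plausible yet means nothing to the program.
```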
ChatGPT, in particular, and GPT (the model behind it), in general, are optimized for plausibility. Not for accuracy. Not for fact-checking. Not for truth. All GPT cares about is producing a bunch of text that seems to make sense — even when it’s total bullshit. Gary Marcus explained this tendency in four elegant points:
1. Knowledge is in part about specific properties of particular entities. GPT’s mimicry draws on vast troves of human text that, for example, often put together subjects [England] with predicates [won 5 Eurovision contests].
2. Over the course of training, GPT sometimes loses track of the precise relations (“bindings”, to use a technical term) between those entities and their properties.
3. GPT’s heavy use of a technique called embeddings makes it really good at substituting synonyms and more broadly related phrases, but that same tendency toward substitution often leads it astray.
4. [GPT] never fully masters abstract relationships. It doesn’t know, for example, in a fully general way, that for all countries A and B, if country A won more games than country B, then country A is a better candidate for “country that won the most games.”
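That fourth point deserves a pause, because the rule GPT never masters is trivial to write down explicitly. Here’s a quick illustration in Python (the win counts are made up for illustration, not official statistics): the fully general relation fits in a single line of ordinary code, which is exactly the kind of thing that pattern-matching over text never gives you.

```python
# The abstract rule from point 4, stated explicitly in ordinary code.
# The tallies below are illustrative placeholders, not official statistics.
wins = {"Ireland": 7, "Sweden": 6, "United Kingdom": 5}

def country_with_most_wins(tally: dict[str, int]) -> str:
    # Fully general: for all countries A and B, more wins means a better candidate.
    return max(tally, key=tally.get)

print(country_with_most_wins(wins))  # -> "Ireland" for these numbers; the rule holds for any tally
```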
I’ll repeat this until I’m hoarse: GPT doesn’t understand anything you say. It doesn’t understand your questions, and it doesn’t understand its own answers either. In fact, every single Language Model that solely relies on Deep Learning is dumber than your neighbor’s cat.
Cats have a representation of the world that helps them navigate it. They also have instincts programmed into their brains, allowing them to act with relative logic. In contrast, GPT and co. only have patterns of human-written text, devoid of meaning. This is, for instance, why ChatGPT fails at very basic math despite its ability to produce “sophisticated” prose. Simply put, ChatGPT has zero capacity to reason.
The counterargument from Sam Altman and his followers is that all you need is more data. If you want to make Large Language Models intelligent, just give them more examples, more parameters, and more labels. Then sit tight and hope that “reason” will somehow emerge.
Now let’s consider the following thought experiment.
Imagine you lock a newborn child alone inside a library. Let’s call him Loki, and suppose he doesn’t need food, water, sleep, or love. You have Loki stare at thousands of books all day, every single day, for 20 years non-stop. You don’t teach him anything about grammar, and you never explain what English words mean.
Now imagine you come back two decades later and, under the library’s locked door, you slip a piece of paper that says, “Hello Loki, what’s your favorite color?”
Do you expect Loki to understand your question?
Loki may recall your question from one of the dialogues he’d previously seen. Remember, Loki doesn’t read words; he sees them the way you might see Hindi, Japanese, or Arabic characters without being able to tell what they mean.
The only difference is that Loki has a super strong memory that allows him to remember every word he’s seen before. Even better, he memorizes patterns of phrases and expressions that he can replicate. This makes him capable of writing back plausible answers inspired by two books he consumed, something like: “Hi, I love purple because it’s associated with creativity, luxury, and sophistication [inspired by book #1], and it can evoke feelings of tranquility and calmness [inspired by book #2].”
Now let me ask you again, do you think Loki understands your question?
No matter how many books the poor kid consumes, he’ll never understand what the heck you’re talking about. He won’t understand what the heck he’s talking about either. Loki will simply pastiche an answer and that answer will make him seem to understand. But in reality, Loki is full of nonsensical shit.
You see, Loki is a Large Language Model, just like ChatGPT. You can feed them all the data you want; they’ll never spontaneously evolve a sense of logic.
Twenty-nine billion dollars. That’s the price investors reportedly put on OpenAI two months after ChatGPT went viral, and the same sources claim Microsoft injected $10 billion into the company.
Regardless of the exact numbers, the two companies confirmed their collaboration and have already made progress in upgrading Microsoft’s products. In particular, Microsoft announced a new version of its Bing search engine with built-in chatbot features.
These developments left Google feeling existentially threatened, which led to a string of emergency meetings. Google then decided to ship its own AI-powered products as soon as possible.
At first glance, the news sounds great for AI lovers, including myself. Competition leads to improvement and innovation. But competition also makes people more prone to risk and bullshit.
Sam Altman lied to his audience about the capabilities of GPT models, and Sundar Pichai, Google’s CEO, decided to ignore the guardrails he had previously set for his company. While the CEOs engage in an AI race, the rest of the world braces for a tsunami of online misinformation.
Oh come on, you may argue, these smart dudes know what they’re doing! Surely they wouldn’t make rash decisions and mess this up, right?
“Alphabet Inc lost $100 billion in market value on Wednesday [2023–02–08] after its new chatbot shared inaccurate information in a promotional video and a company event failed to dazzle,” Reuters wrote.
The inaccurate information Google shared was that the James Webb Space Telescope took the first pictures of exoplanets. In reality, it was the European Southern Observatory’s Very Large Telescope that took those photographs.
There are no massive blunders on the Microsoft side yet, but the company seems to know what’s coming, which is why they made a point of warning their users. “Bing will sometimes misrepresent the information it finds, and you may see responses that sound convincing but are incomplete, inaccurate, or inappropriate,” they wrote. “Use your own judgment and double check the facts before making decisions or taking action based on Bing’s responses.”
Both Google and Microsoft are pulling the same shady trick: they offload risk onto their users to avoid the consequences. They’ll ship free bullshit generators and say it’s up to everyday people to sift through the mess.
Maybe it’s arrogance, ambition, or even delusion that’s driving tech CEOs to sell dumb Large Language Models as a smart solution to navigate the online world. But the “AI” hype you see on social media? Well that’s definitely driven by a lack of education.
We fall for what smart and eloquent leaders tell us. We believe in their noble dreams, and we want to see them succeed — and that’s fine. What’s not fine is to ignore red flags, especially when all you need to see them is to pay attention — be it in a restaurant or a live interview.