But I know enough about how it works to know that truth, as you said, is not one of its dimensions. It is mimicry. It is a kind of pastiche. And it's convincing. And I tried to understand why it made me so nervous. And that got me thinking: have you read that great philosophy paper by Harry Frankfurt called "On Bullshit"?
GARY MARCUS: I know the paper.
EZRA KLEIN: For people who haven't read it, it's a philosophical essay on what bullshit is. And he writes, quote: The essence of bullshit is not that it is false but that it is phony. In order to appreciate this distinction, one must recognize that a fake or a fraud need not be in any respect, apart from authenticity itself, inferior to the real thing. What is not genuine need not also be defective in some other way. It may be, after all, an exact copy. What is wrong with a counterfeit is not what it is like, but how it was made.
And what he means is that the difference between bullshit and a lie is that the liar knows what the truth is and is moving in the opposite direction. He has a great line where he says that people who tell the truth and people who lie are playing the same game, just on different teams. But bullshit has no relationship to the truth at all.
And the thing that scared me a bit about ChatGPT was the sense that, even as we have failed to lower the cost of truthful, accurate information and the advancement of knowledge, we are about to drive the cost of bullshit to zero. And I'm curious how you view that concern.
GARY MARCUS: That's exactly right. These systems have no conception of truth. Sometimes they land on it and sometimes they don't, but they're fundamentally bullshitting in the sense that they're just saying things that other people have said and trying to maximize the probability of what comes next. It's just autocomplete, and autocomplete doesn't care about the truth.
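To make the "just autocomplete" point concrete, here is a deliberately tiny sketch, not the real model: a toy bigram autocompleter over a made-up corpus (the corpus, the `autocomplete` function, and all names are illustrative assumptions, not anything from the interview). It picks whatever next word was statistically most common, with no notion of which continuation is true.

```python
from collections import Counter, defaultdict

# Toy corpus: contradictory claims, one of them simply repeated more often.
corpus = ("the vaccine is safe . the vaccine is dangerous . "
          "the vaccine is safe .").split()

# Count which word follows which (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def autocomplete(word):
    # Return the most frequently observed next word -- popularity, not truth.
    return following[word].most_common(1)[0][0]

print(autocomplete("is"))  # "safe" wins 2-to-1, regardless of which claim is true
```

The point of the sketch is that the model's only criterion is frequency in its training text; a large language model is vastly more sophisticated, but the objective is the same family: predict a probable continuation, not a true one.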
And it is a very serious problem. I just wrote an essay called "AI's Jurassic Park Moment," and the Jurassic Park moment is exactly that: when the cost of bullshit hits zero and people who want to spread misinformation, whether for political reasons or just to make money, do it so prolifically that we can no longer distinguish, in what we see, between truth and bullshit.
EZRA KLEIN: You write in that essay: It is no exaggeration to say that systems like this pose a real and imminent threat to the very fabric of society. Why? Paint me a picture of what that world could look like.
GARY MARCUS: Let's say somebody wants to make up misinformation about Covid. You could use a system like Galactica, which is similar to ChatGPT, or you could use GPT-3. ChatGPT itself probably won't let you do it. And you ask it to write some misinformation about Covid and vaccines, and it will write you a whole story, including lines like: A study in JAMA, a leading medical journal, found that only 2 percent of people who took the vaccine were helped by it.
You get posts that look in every way as if a human had written them. They'll have all the style and form, with fabricated sources and data woven in. And people can knock down one of those, but what if there are 10 of them, or 100, or 1,000, or 10,000? At that point it becomes very difficult to police them.
Maybe we could build new kinds of AI, and I'm personally interested in that, to try to detect them. But we have no existing technology that really protects us from this kind of attack, from the enormous wave of potential misinformation like this.
And I had this discussion with Yann LeCun, Meta's chief A.I. scientist, and he says, well, it's not really a problem. But we have already seen that it is a problem. One case that really surprised me came around December 4th, shortly after the release of ChatGPT. People used ChatGPT to generate answers to programming questions in the style of a site called Stack Overflow.
Now, everyone in the programming field uses Stack Overflow all the time. It's a shared resource, a place to exchange information: people ask questions, people give answers. And so many people posted wrong answers generated this way that Stack Overflow had to ban computer-generated answers outright. It was literally existential for the site. If enough people posted answers that sounded plausible but weren't true, no one would go there anymore.
And imagine that on a much larger scale, a scale at which you can't trust anything on Twitter or Facebook or anything you get from a web search, because you don't know which parts are true and which are not. And there's a lot of talk about using ChatGPT and its kin for web search. Sometimes it works. It's super amazing: you get back a paragraph instead of 10 pages of links, and that's great.
But the problem is that the paragraph may be wrong. It may contain dangerous medical misinformation, for example, and there may be lawsuits over those things. So unless we find social, political and technical solutions, I think we will very soon end up in a world where we no longer know what to trust. I think that has already been a problem for society over, say, the last decade. And I think it's only going to get worse.
EZRA KLEIN: But isn't it true that search can already be wrong? It's not just A.I. that makes mistakes; humans do too, and people already spread plenty of misinformation. Is there a dimension to this critique that holds A.I. systems to a standard that society itself is not currently meeting?
GARY MARCUS: Well, there are a few different things. First, I think it's a difference of scale. Right now there is real friction in producing misleading content. Russian trolls spent something over $1 million a month during the 2016 election. That's a significant amount of money. What they paid humans to do then, you can now do with your own version of GPT-3: pay less than $500,000, and you can produce it in unlimited quantities instead of being limited by human hours.
That should make a difference. I mean, it's like saying, we've had knives before, so what's the difference if we have a submachine gun? Well, the submachine gun is just more efficient at what it does. And we're talking about machine guns for misinformation.
So I think the scale is going to make a real difference in how much of this gets produced. And then there's the sheer plausibility of it, which is just different from what came before. Nobody has been able to produce computer-generated disinformation in a convincing way before.
As for search engines, it's true that you can get misleading information. But at least we have some practice at it. People have had time to learn to look at a website and judge whether the site itself is legit. We do it in various ways: we try to judge the source and its quality. Does it come from The New York Times, or does it look like something somebody threw together in their spare time that isn't quite polished? Some of these cues are good, some are bad, and we're not perfect at it. But we do make discriminations: this looks like a fake website, that one looks real, and so on.
And when everything comes back in the form of a paragraph that essentially always looks like a Wikipedia page and always sounds authoritative, people won't even know how to judge it. And I think they'll either take it as true by default, or they'll flip a switch, decide it's all false and stop taking it seriously, in which case you're really undermining the search engines themselves.