There’s a new chatbot in town, and it’s gaining plenty of buzz on Twitter for its ability to answer some pretty complex questions and generate creative responses — as well as for some notable blips that show the continued limits of artificial-intelligence technology.
ChatGPT, a product of OpenAI, made its free, public debut last week. Users who make an account with OpenAI can engage with the bot by asking it to solve problems, write code and produce other creative output. The chatbot returns responses within seconds, including when asked to describe itself for a MarketWatch audience.
OpenAI, which developed the tool and made it available for a “research preview,” said in a blog post that the company trained an initial model and fine-tuned it with help from human AI trainers. OpenAI said that ChatGPT is built to “refuse inappropriate requests,” though it acknowledged that some might still slip through.
Chatbots are prime candidates to generate interest on social media because people like seeing the funny, impressive and just plain wrong things that machines spit back. But ChatGPT seems to be winning some praise for its actual abilities as well. Aaron Levie, the CEO of Box Inc., put the chatbot in some lofty company in a series of tweets highlighting the service's potential.
Others mused that the bot could become a thorn in the side of professors who now have to ward off a new type of cheating threat.
The bot was also able to provide a fairly impressive creative response to an application essay for the University of Chicago, which is known to have outside-the-box questions.
Twitter users were quick to take advantage of the bot’s whimsical potential. ChatGPT can rewrite popular songs to be about a life event you describe, turn articles into limericks, and mimic biblical language in describing the absurd.
Others saw more practical implications, with one business consultant writing that he was able to combine the chatbot with a program that would allow people to send more professional communications.
Still, ChatGPT has a few things to learn. Some Twitter users highlighted how the bot failed to recognize that birds aren’t mammals.
Others noted that the bot came up short on some basic logic questions that humans often mess up as well, such as how long it would take a patch of lily pads that doubles in size daily to cover half a lake, given that the patch takes 48 days to cover the whole lake. (Because the patch doubles each day, it covers half the lake just one day before it covers all of it: day 47.)
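The lily-pad riddle trips up humans and chatbots alike because intuition suggests "half the lake means half the time." A minimal sketch of the actual reasoning, working backward from the day of full coverage (the function name is illustrative, not from any particular source):

```python
def day_lake_half_covered(full_coverage_day: int) -> int:
    """Day on which a daily-doubling patch covers half the lake.

    Since the patch doubles every day, the lake is half covered
    exactly one day before it is fully covered.
    """
    return full_coverage_day - 1


print(day_lake_half_covered(48))  # → 47
```

The same backward step works for any doubling process: halving the final quantity rewinds the clock by exactly one doubling period.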
Some people explored backdoors around the company's safeguards against harmful content. The OpenAI website says that ChatGPT is trained to reject direct requests for information about how to bully people, but some users said they were able to get around that restriction, including by asking the bot to complete realistic dialogue between people discussing bullying. Others said they were able to surface gender and racial bias in the bot's models.
OpenAI describes itself as an AI research and development company intended to bring benefits to humanity. It is governed by the OpenAI nonprofit and counts Microsoft Corp. among its investors.