A new chatbot has passed one million users in less than a week, the project behind it says.
ChatGPT was publicly released on Wednesday by OpenAI, an artificial intelligence research firm whose founders included Elon Musk.
But the company warns it can produce problematic answers and exhibit biased behaviour.
OpenAI says it is “eager to collect user feedback to aid our ongoing work to improve this system”.
ChatGPT is the latest in a series of AIs which the firm refers to as GPTs, an acronym which stands for Generative Pre-trained Transformer.
To develop the system, an early version was fine-tuned through conversations with human trainers.
The system also learned from access to Twitter data, according to a tweet from Elon Musk, who is no longer on OpenAI’s board. The Twitter boss wrote that he had paused access “for now”.
The results have impressed many who’ve tried out the chatbot. OpenAI chief executive Sam Altman revealed the level of interest in the artificial conversationalist in a tweet.
The project says the chat format allows the AI to answer “follow-up questions, admit its mistakes, challenge incorrect premises and reject inappropriate requests”.
A journalist for technology news site Mashable who tried out ChatGPT reported that it was hard to provoke the model into saying offensive things.
Mike Pearl wrote that in his own tests “its taboo avoidance system is pretty comprehensive”.
However, OpenAI warns that “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers”.
Training the model to be more cautious, the firm says, causes it to decline questions that it could answer correctly.
Briefly questioned by the BBC for this article, ChatGPT revealed itself to be a cautious interviewee capable of expressing itself clearly and accurately in English.
Did it think AI would take the jobs of human writers? No – it argued that “AI systems like myself can help writers by providing suggestions and ideas, but ultimately it is up to the human writer to create the final product”.
Asked what would be the social impact of AI systems such as itself, it said this was “hard to predict”.
Had it been trained on Twitter data? It said it did not know.
Only when the BBC asked a question about HAL, the malevolent fictional AI from the film 2001: A Space Odyssey, did it seem troubled.
Source: ChatGPT: New AI chatbot has everyone talking to it – BBC News