Artificial intelligence (AI) continues to advance rapidly among emerging technologies. On Nov. 30, San Francisco-based OpenAI opened its latest creation, the ChatGPT chatbot, to public testing. As the name implies, a chatbot is a software application that simulates human-like conversation based on user input.
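To make the idea concrete, here is a minimal, hypothetical sketch of the simplest kind of chatbot: a rule-based one that maps keywords in the user's input to canned replies. (The rules and replies are invented for illustration; systems like ChatGPT instead generate responses with a large language model rather than matching keywords.)

```python
# Hypothetical rule table: keyword -> canned reply.
RULES = {
    "hello": "Hi there! How can I help you?",
    "weather": "I can't check live data, but I hope it's sunny!",
    "bye": "Goodbye!",
}

def reply(user_input: str) -> str:
    """Return the first canned reply whose keyword appears in the input."""
    text = user_input.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    return "Sorry, I didn't understand that."

print(reply("Hello, bot"))          # Hi there! How can I help you?
print(reply("What's the weather?")) # I can't check live data, but I hope it's sunny!
```

The gap between this sketch and ChatGPT is the point: a rule-based bot can only return answers someone wrote in advance, while a model-based bot composes new text for each prompt.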
According to Sam Altman, co-founder and CEO of OpenAI, more than a million people used ChatGPT within a week of its release.
Who owns OpenAI? Is Elon Musk part of it?
Silicon Valley investor Sam Altman and billionaire Elon Musk founded OpenAI as a nonprofit in 2015, and it has attracted funding from a number of others, including venture capitalist Peter Thiel. To raise outside investment, the group created a for-profit entity in 2019.
The billionaire entrepreneur, who remains engrossed in his revamp of Twitter, left OpenAI’s board in 2018, but chimed in with a response to the viral phenomenon, calling it “scary good”.
In a subsequent tweet, Musk stated that he had paused OpenAI’s access to Twitter’s database after learning that it was being used to train the company’s models.
How does ChatGPT work?
According to OpenAI, the ChatGPT model can simulate dialogue, answer follow-up questions, admit mistakes, challenge incorrect premises, and reject inappropriate requests. The model is trained using a machine learning technique called Reinforcement Learning from Human Feedback (RLHF).
During the initial development stage, human AI trainers played both sides of a conversation with the model – the user and the AI assistant. Currently, the bot is available for public testing and responds in a conversational manner to questions posed by users.
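A central step in RLHF is training a reward model from human preference comparisons: trainers rank pairs of model responses, and the reward model learns to score the preferred response higher. A minimal sketch of the pairwise (Bradley-Terry style) loss commonly used for this, with made-up reward scores standing in for a real model's outputs:

```python
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Pairwise preference loss: -log(sigmoid(r_chosen - r_rejected)).

    The loss is small when the human-preferred response already scores
    higher, and large when the reward model ranks the pair wrongly.
    """
    diff = reward_chosen - reward_rejected
    sigmoid = 1.0 / (1.0 + math.exp(-diff))
    return -math.log(sigmoid)

# Correct ranking (preferred reply scores higher) -> small loss;
# wrong ranking -> large loss. The reward scores here are invented.
print(preference_loss(2.0, 0.0))  # ~0.127
print(preference_loss(0.0, 2.0))  # ~2.127
```

The trained reward model then guides a reinforcement learning step that nudges the chatbot toward responses humans prefer; this sketch shows only the preference-scoring idea, not the full pipeline.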
Who can use it?
In the short term, ChatGPT is most likely to be used to assist human creativity. It builds on OpenAI’s GPT-3, already a leading large language model, but its conversational interface makes tasks such as drafting speeches and blog posts far easier.
Many people already use ChatGPT to stretch their thinking before presenting or writing. The output (most often) isn’t the final product they want, but it’s a useful sketch of possibilities. For example, ChatGPT outlined an interview about clean energy for Financial Times journalist Dave Lee, who tweeted that he was impressed while preparing for an on-stage interview the next day, as reported by Slate.
Can this bot work as a search engine?
Using ChatGPT for search is also possible, but it differs from using a modern search engine. Its training data ends in 2021, so it cannot answer queries about current events. It can, however, provide tutorials and travel tips that replace some of what you might get from Google. As many pointed out on Twitter, if the bot were to begin crawling the web, it would pose a competitive threat to search engines. It is no wonder, then, that Google has made chat an integral part of its future plans.
Why is ChatGPT different?
Now for the fun part. Among other things, ChatGPT writes poems, tells jokes (often terrible jokes), gets philosophical, and debates politics. Unlike some of its more benign predecessors, it will actually take stances. It refused to list anything when I asked what Hitler was good at (a common test to find out whether a bot is actually a Nazi). When I mentioned that Hitler built highways in Germany, it replied that they were built with forced labor. That was impressive, nuanced pushback I had never seen from a chatbot before.
Could it be a problem?
As with many AI-driven innovations, ChatGPT does not come without its share of concerns. OpenAI acknowledges that the tool often gives “plausible-sounding but incorrect or nonsensical answers,” a problem it regards as challenging to fix.
AI technology can also perpetuate systemic biases, such as those associated with race, gender, and culture. Some of the world’s largest tech companies, including Alphabet Inc’s Google and Amazon.com, have previously acknowledged the ethical risks and limitations of some of their AI projects, and in several cases humans had to step in to fix problems the systems caused.
The field of AI research remains appealing despite these concerns. According to PitchBook, a Seattle company that tracks financings, venture capital investment in AI development and operations companies rose to nearly $13 billion last year, and $6 billion had been invested through October of this year.