Anirban Ghoshal
Senior Writer

Elon Musk to take on ChatGPT with Grok AI

news
Nov 06, 2023 | 4 mins
Chatbots | Generative AI

The xAI team has warned that the chatbot is a “very early beta product” that has been unveiled with just two months of training.


Elon Musk has revealed a new generative AI model-based chatbot that will take on several large language models including OpenAI’s ChatGPT, Google’s PaLM 2, and Anthropic’s Claude 2.

The chatbot, christened Grok AI, was developed by xAI, a generative AI venture launched by Musk in July with the goal of building AI models that understand the true nature of the universe and of making the technology safer.

“Grok is an AI modeled after the ‘Hitchhiker’s Guide to the Galaxy’, so intended to answer almost anything and, far harder, even suggest what questions to ask,” the xAI team posted on its website.

The team further said that Grok has been fashioned to answer questions with a bit of wit and “has a rebellious streak.”

The team also said that Grok has access to X, formerly Twitter, allowing it to learn from content posted on the platform.

“It will also answer spicy questions that are rejected by most other AI systems,” the xAI team wrote.

However, the team warned that the chatbot is a “very early beta product” that has been unveiled with just two months of training. The model behind Grok is expected to improve “rapidly” with every passing week, the team claimed.

Grok-1 beats GPT-3.5 but fails to surpass GPT-4

Grok-1, which is the engine behind Grok, can be used for natural language processing tasks including question-answering, information retrieval, creative writing, and coding assistance, according to the xAI team.

“While Grok-1 excels in information processing, it is crucial to have humans review Grok-1’s work to ensure accuracy. The Grok-1 language model does not have the capability to search the web independently,” the team said, adding that search tools and databases enhance the capabilities and factualness of the model when deployed in Grok.

“The model can still hallucinate, despite the access to external information sources,” the team warned.
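xAI has not published details of how Grok’s search integration works, but the pattern the team describes, in which the surrounding application fetches external information and supplies it to the model rather than the model browsing the web on its own, is commonly wired up along the following lines. This is a minimal, generic sketch; the helper names (search_index, call_llm), the stub implementations, and the prompt format are hypothetical placeholders for illustration, not xAI or X APIs.

```python
# Generic sketch of retrieval-augmented answering, assuming hypothetical
# helpers `search_index` (returns relevant text snippets) and `call_llm`
# (sends a prompt to some language model). Neither is an xAI API.

def answer_with_retrieval(question: str, search_index, call_llm) -> str:
    # 1. The surrounding application, not the model, queries external sources.
    snippets = search_index(question, top_k=3)

    # 2. The retrieved text is injected into the prompt as grounding context.
    context = "\n\n".join(snippets)
    prompt = (
        "Answer the question using the context below. "
        "If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

    # 3. The model answers from that context; as xAI notes, it can still
    #    hallucinate, so human review of the output remains important.
    return call_llm(prompt)


# Toy usage with stub implementations, purely to show the flow.
if __name__ == "__main__":
    stub_search = lambda q, top_k=3: ["Grok-1 is the model behind the Grok chatbot."]
    stub_llm = lambda prompt: "Grok-1 powers the Grok chatbot."
    print(answer_with_retrieval("What model powers Grok?", stub_search, stub_llm))
```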

To develop Grok-1, the xAI team first trained a prototype large language model (LLM), dubbed Grok-0, with 33 billion parameters.

The Grok-0 model performed close to Meta’s LLaMA 2 model, which was trained with 70 billion parameters. The prototype was then further trained and improved upon to produce Grok-1.

The xAI team also ran benchmark tests to compare Grok-1’s performance against PaLM 2, Claude 2, Inflection-1, LLaMA 2, GPT-3.5, and GPT-4. The tests included middle-school math problems, multidisciplinary multiple-choice questions, and Python code-completion tasks.

On these benchmarks, Grok-1 surpassed the performance of all the other models except GPT-4, which the company says was trained on a “significantly larger amount of training data and compute resources.”

The Grok AI chatbot, according to the company, is currently being offered to a limited number of users in the US. Users interested in testing out Grok AI can sign up for a waitlist, the team said, adding that it plans to release the chatbot to all X users who have a subscription.

AI safety continues to be a major concern

Musk’s decision to launch xAI, and subsequently Grok AI, could be seen by many as contradicting his previous remarks.

In May, the SpaceX and Tesla CEO had signed a letter, along with other members of the Center for AI Safety, warning that the evolution of AI could lead to an extinction event and suggesting that keeping the technology in check should be a top global priority.

However, Musk has also roped in the Center for AI Safety’s director, Dan Hendrycks, to be a part of xAI’s advisory team.

Several countries have taken note of the issues and challenges that could arise from the further development of AI.

Last week, China, the US, and the EU, along with several other countries including Australia, India, Germany, and France, signed an agreement, dubbed the Bletchley Declaration, at the UK AI Safety Summit to establish a common approach to overseeing the evolution of AI and ensuring that the technology advances safely.

ChatGPT creator OpenAI also said last week that it was readying a team to prevent what the company calls frontier AI models from setting off threats such as a nuclear war.

US President Joe Biden’s administration, too, issued a long-awaited executive order during the week that hammered out clear rules and oversight measures to keep AI in check, while also providing a path for the technology to grow.