Google says it is ready to open its generative AI chatbot Bard to the public. In a gradual rollout that began Tuesday, the company will grant access to tens of thousands of users.
Google says people will use the chatbot on the web and on mobile to generate ideas (“Bard: How do I keep my plants healthy?”), research topics (in conjunction with Search), and draft first versions of letters, invites, or proposals.
Google first announced Bard on February 6, alongside generative AI search features and developer tools. On March 14, it said it would integrate generative AI capabilities across the apps in its Workspace productivity suite.
Today marks the first public release of a generative AI chatbot powered by a Google large language model. Google says the bot runs on a lightweight, optimized version of LaMDA and will continue to be upgraded with more capable models.
Eli Collins, vice president of product at Google Research, said his group has been working with language models since 2015, two years before Google’s landmark paper on transformer models, the architecture that led to large language models such as OpenAI’s GPT models, which power ChatGPT. Collins says the group’s understanding and development of generative AI have now reached the point where it is ready for release.
He says Google learned a great deal from employees using Bard internally and wants to learn more from outside users. Collins acknowledges that the popularity of OpenAI’s ChatGPT pushed Google to release Bard to the general public.
On Monday, Collins asked Bard how he could get his son more interested in bowling. The bot suggested throwing a bowling party. Collins then hit the Google Search button at the bottom of the chat interface, and the results opened in a separate tab. The top two results were sponsored links, one to a publication and one offering party ideas for children; below them was a list of local bowling alleys. The search supplied most of the useful information, but Bard helped Collins narrow down what to search for.
Google insists that Bard is still in development. The company calls it an “experiment,” a label OpenAI and Microsoft have also applied to their generative AI bots. Google seems keen to highlight that Bard can make factual errors or invent facts (“hallucinate”): below the prompt window, a notice states that “Bard may display inaccurate or offensive information that does not represent Google’s viewpoints.” The statement is true; Bard does make factual mistakes. Bard can be used alongside Google Search (there is a Google Search button in the chat interface), but it works best on its own as an idea-generation tool.
Sissie Hsiao, a vice president of product at Google, points out that Bard’s creative abilities let it write poetry and tell stories. But it makes up facts just as readily as it makes up stories.
Other safety measures are in place for Bard. Google limits the number of back-and-forth exchanges per conversation, though it won’t say what the limit is. The company is also preventing Bard from writing computer code for now, a capability it hopes to add in the future.
It also plans to add multimodal outputs (text plus video or audio) and support for additional languages.