Google has updated its Bard AI bot to help developers write and debug code. Google claims code generation is “one of Google’s top requests” since Bard was opened up to the public last month.
Bard can explain code snippets and code in GitHub repositories, similar to the way Microsoft-owned GitHub offers a ChatGPT-style assistant with Copilot. Bard can also debug code you provide, or its own code, if there are errors or the output is not what you wanted.
Bailey acknowledges that Bard, like other AI-powered chatbots, “can sometimes present inaccurate, misleading, or false information, while confidently presenting it.” Bailey says that Bard can give you code that does not produce the desired output, or code that is incomplete or unoptimized.
Google’s Bard chatbot is still being developed despite employee criticism. Employees have reportedly called the chatbot “a pathological liar,” and Google has reportedly sidelined ethical concerns in order to compete with OpenAI and Microsoft. In our tests comparing Bing, ChatGPT, and Bard, we found Bard to be the least accurate.
The AI war between Alphabet’s Google and Microsoft, a major investor in OpenAI, is especially heated.
Microsoft is using AI to improve its Bing search service, which competes with Google Search. Google launched Bard to counter the buzz surrounding ChatGPT, and Google executives are reportedly looking to create a new AI-powered search engine.