Googlers say Bard AI is “worse than useless,” ethics concerns were ignored

[Image: A large Google logo displayed amidst foliage. Credit: Sean Gallup | Getty Images]

From the outside, Google Bard looks like a product rushed out to compete with ChatGPT, and some Google employees share that sentiment. A new report from Bloomberg, based on interviews with 18 current and former workers, comes away with a pile of damning commentary and concerns about AI ethics teams that were "disempowered and demoralized" so Google could get Bard out the door.

According to the report, Google employees were asked to test Bard pre-release and give feedback, which was mostly ignored so Bard could launch sooner. Internal discussions viewed by Bloomberg called Bard “cringe-worthy” and “a pathological liar.” When asked how to land a plane, it gave incorrect instructions that would lead to a crash. One employee asked for scuba instructions and got an answer they said “would likely result in serious injury or death.” Another employee summed up Bard's problems in a February post titled, “Bard is worse than useless: please do not launch.” Bard launched in March.

You could probably say many of the same things about the AI competitor Google is chasing, OpenAI's ChatGPT. Both can give biased or false information and hallucinate incorrect answers. Google is far behind ChatGPT, and the company is panicked over ChatGPT's ability to answer questions people might otherwise type into Google Search. ChatGPT's creator, OpenAI, has been criticized for having a lax approach to AI safety and ethics. Now Google finds itself in a tough situation: if the company's only concern is placating the stock market and catching up to ChatGPT, it probably can't afford to slow down and consider ethics issues.




from Tech – Ars Technica https://ift.tt/ORMEbop
