
When it comes to Bard, all animals are cows

Artificial intelligence-powered large language models like ChatGPT, Bing Chat, and now Google Bard are supposedly coming for our jobs, because they can think like a human without any of the disadvantages of the flesh, like needing to sleep or eat.

Except that they will often provide information that is simply wrong, a flaw the industry is well aware of and a reason it urges more responsible usage.

Case in point: Google Bard’s favourite ASCII image is one of a cow:

Not only is the cow Bard's favourite piece of ASCII art, it is seemingly the only thing it can draw.

“Can you draw any other animals?” we asked the chatbot. “Sure, here are some other animals that I can draw in ASCII,” it proclaimed. It then went on to provide the same ASCII cow for several other animals, including horses, cats, dogs, and even crocodiles.

We then asked it to describe a cow and a dog, visually. It did so expertly, telling us that cows are larger than dogs, and that dogs can have different body types, pointed ears, shorter tails, and more varied coats of fur.

With this in the chat memory, we asked it to now draw both animals in ASCII.

Interestingly, it broke down the differences between the two animals in a couple of paragraphs but still produced the same ASCII art, bar a small difference. So Bard can tell you what the visual difference is between a cow and a dog, but it is unable to represent it.

Obviously, Bard can’t “see” cows, or cats, or anything else. It doesn’t have eyes or a brain. It is software that pulls from a huge amount of data to appear knowledgeable.

Despite this, major competitor Microsoft Bing can provide different ASCII art for a “cow” and a “cat,” for example. That is probably because it leverages OpenAI’s large language models, which have had more time to learn from users.

So the problem could boil down to training data, or to the different sources the two models (LaMDA 2 for Bard, GPT-3.5/GPT-4 for Bing) draw their information from.

This fun little experiment raises the question: why is the cow Bard’s ASCII go-to? It will consistently draw a cow-like creature no matter what you ask it to create in ASCII, from animals to simple shapes.

According to the most popular ASCII art archive, there are far more pieces of cat art than cow art (86 to 16, respectively). So shouldn’t the cat be the staple here, given there is so much more cat ASCII art for a model to learn from?

In general, however, AI will often produce weird and hideous pieces of ASCII “art,” which, while interesting to look at, also give some insight into how these generative models work.

In a feature piece, The Verge points out that this is a core problem with large language models, and indeed with generative AI. Human brains are deeply, unfathomably complex (we still don’t really know how they work) and maintain relationships between millions of concepts. There is also imagination at play.

LLMs like ChatGPT and Bard can only draw from their training data. If that data doesn’t include many examples of a topic, like ASCII animal art, the model will not be able to return correct responses.
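To see, very loosely, why sparse coverage pushes a system toward its dominant example, here is a toy frequency-based sketch in Python. The “training data” is entirely made up, and this is not how Bard actually works; it is just an analogy for falling back on the most common answer when a prompt is poorly covered:

```python
from collections import Counter

# Hypothetical, made-up "training data": prompts paired with the ASCII art
# that answered them. Cow art dominates; most animals are barely covered.
COW = "(__)"
CAT = "=^.^="
training_pairs = [
    ("cow", COW), ("cow", COW), ("cow", COW), ("cow", COW),
    ("cat", CAT),
]

def generate(prompt: str) -> str:
    """Return the most frequent art seen for this prompt; if the prompt
    was never seen, fall back to the most frequent art overall -- a crude
    analogy for a model leaning on its dominant training examples."""
    seen = Counter(art for p, art in training_pairs if p == prompt)
    if seen:
        return seen.most_common(1)[0][0]
    return Counter(art for _, art in training_pairs).most_common(1)[0][0]

print(generate("cat"))        # the cat art
print(generate("crocodile"))  # never seen in training, so... the cow again
```

In this caricature, any unfamiliar prompt collapses to the cow, which is roughly the behaviour the experiment above observed.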

However, if you ask a child to draw a cow, and then a cow on fire, the child will be able to manage this, because the child can inherently connect the relationship between the cow and the fire. In developmental psychology this is called assimilation and accommodation.

The common example is of a child who understands what a cat is through their senses (a small furry creature), but when confronted by a dog, the child will think it is a cat unless corrected by an adult. That correction changes the child’s concept of what a cat is and what a dog is.

Bard’s generative AI is unable to do this: even when it is corrected, it will keep providing ASCII art of a cow.

What’s the point of this article? Well, apart from the fact that it is fun to point out that generative AI is not infallible, the experiment also offers some hints about human cognition. So doomsday preppers can relax: until AI is able to make its own connections, really and truly, it won’t come anywhere near the dreaded singularity.
