How Smart Is AI?

AI in creative industries is more tool, less comedian, and definitely not the boss (yet) writes Mohamed Rizwan

I recently asked ChatGPT to tell me a joke. This was the response:

Why don’t scientists trust atoms? Because they make up everything!

Right. How you look at this joke depends entirely on how charitable you’re feeling, but we can safely assume two things. One, and this will not come as a surprise to you, ChatGPT isn’t that funny. And more crucially, you really shouldn’t have to ask ChatGPT to tell you a joke.

Much of the conversation around generative AI in the marketing and design community is either alarmist or dismissive. The truth lies somewhere between the two. A conversation with a sharp client about using ChatGPT for Instagram captions ended in all-round laughter when she rightly pointed out that, chances are, everyone will be doing it and you’ll find the exact same caption on a competitor’s Instagram page! It’s not inconceivable.

The loudest proponents of the AI-will-take-over-everything narrative, we often find, are those who have formed their opinions a touch too early - and are eager for everyone to listen to them. The nuanced view is that platforms like ChatGPT and Dall-E are just another tool that can help with execution. Quicker, clearer, and likelier to help clients and partners see the point you’re trying to make. It could also have the welcome effect of making writers and designers happier at their jobs, given that it helps with the mundane, time-consuming parts. No more spending hours photoshopping a bike in front of yet another shiny glass building! Just ask Dall-E!

But. The ideas, we’re afraid, will still have to come from those old-fashioned beings - humans. Great work relies on brands forming a connection with their audience. A genuine connection that requires humour, wit, truth, empathy, insight, and context. Take those away, and you’re likely to have something that feels written and made by, you guessed it, a robot. And if there’s one thing we’ve learned, it’s that nobody likes talking to a robot. Not even humans talking like robots.

Arguably, prompt specialists (as in humans) will be the most sought-after people in the creative industry - in any industry, for that matter. Non-generic inputs will lead to non-generic output. It’s great that ChatGPT can write you an email real quick, as it should, given that emails take a criminally disproportionate amount of time. But writing headlines, or an editorial, or, let’s make this more fun, a slight, requires acumen and creative direction. AI in and of itself lacks both.

Design is a deeply human industry. Choosing colour, form, shape, and texture is a deeply human undertaking, because each elicits an emotional response. AI’s biggest disadvantage is that it has no feeling for feelings. Which in turn means it cannot accurately predict any emotional response. That’s why every creative project will be fundamentally human-led. The insight will be human. The angle, the breakthrough, and finally, the impact on business will be a human’s remit.

What I find amusing are overexcited CEOs exclaiming that ChatGPT can lay out marketing plans which could make creative agencies redundant. ChatGPT can in fact lay out a marketing plan, a social media calendar, and a host of other things. But what happens next? I would love to see a CEO use Dall-E to get an image right, go to ChatGPT to write a caption, and use yet another bit of software to schedule and publish it, all by himself. (It’s usually a he - you know, the type that speaks unyieldingly to the camera and posts it on IG.) Sounds like a great use of time.

Oh, and there’s the small matter of factual accuracy. A McKinsey & Company report recently listed ‘inaccuracy, cybersecurity, and intellectual property infringement’ as the most cited risks of AI adoption. Inaccuracy led the chart, cited by a whopping 56 per cent of respondents. Other findings? Most industries have single-digit percentages for generative AI adoption. The highest was Technology, Media, and Telecom at 14 per cent, with most using it for ‘crafting first drafts of text documents.’ Sounds about right.

Contrast this with the doomsday narratives around AI. A recent piece in 'The Economist' highlighted the results of research comparing domain experts with ‘super forecasters’ - people who take a wider historical view and call everything from election results to wars. In effect, the study found that ‘super forecasters are more optimistic than AI experts’. In fact, AI was the greatest area of divergence when compared to other potential sources of catastrophe or extinction - you know, nuclear war, pandemics, asteroid strikes, and so on.

It’s early days in the world of AI, and as remarkable as its adoption and popularity are, my suspicion is that we may be overestimating what it can do. And that’s a good thing, especially in far more serious areas like defence and national security. There is one other thing that most people don’t consider, though. The possibility that generative AI is learning from the patterns of people who are, by and large, how do I put this delicately, not very smart. Could AI get dumber?

Now, that would be funny.

(Mohamed Rizwan is the Founder and Creative Director at Propaganda)

Disclaimer: The views expressed in the article above are those of the author and do not necessarily represent or reflect the views of this publishing house. Unless otherwise noted, the author is writing in his/her personal capacity. The views are not intended, and should not be taken, to represent official ideas, attitudes, or policies of any agency or institution.