Artificial intelligence vs human authenticity: are creative jobs in danger?

In 2018, a neural network bested humans in a reading comprehension test for the first time. The machine answered over 100,000 questions from the Stanford Question Answering Dataset (SQuAD), drawn from more than 500 Wikipedia articles, and beat the human benchmark by 0.136 points.

Neural networks can upstage people in many ways: they are obviously faster at analysing massive arrays of data and better at spotting subtle differences and details. Yet the human brain is still credited with far more neurons than any artificial network has, and it will take years to build machines powerful enough not just to collect and process information, but to make decisions based on all that data as well as humans do.

Algorithms are replacing more and more workers involved in monotonous menial labour, but what about intellectual jobs, like art, design, or journalism? Are they doomed, too?

Journalism and writing

Large companies already use neural networks for menial jobs because it's quicker and cheaper, and a machine needs no health benefits or paid vacations. These are chatbots and programs that write short news items, compile lists and updates, and generate weather forecasts. But, unlike accountants, for example, journalists don't need to update their LinkedIn profiles just yet.

An AI will write a perfectly serviceable news report but will have a hard time delivering an analytical piece or a review.

These tasks require historical background, a strong personal opinion and a unique style. Such skills can hardly be taught to a machine, and the value of doing so is questionable, too: people read these articles for someone's authentic style, and they want to hear a real, honest opinion.

"Who is Who" screenshot from Sky News

One of the largest AI-generated projects to date belongs to the BBC. On election night in December 2019, BBC News used AI to publish around 700 news items about the outcome of the vote, 649 in English and 40 in Welsh. Britain's Sky News made its AI debut during Prince Harry and Meghan Markle's wedding in 2018: with the help of a face recognition system, it identified the attendees at the royal wedding reception, and the Who Is Who programme was shot as a follow-up to this research. This year saw the emergence of a service, developed by Parse.ly, an American analytics company, that tracks topics which interest readers but have received little media coverage. And IBM analysts, determined to revolutionise sports broadcasts and create a digital commentator, presented a technology that can track fans' emotions and the gestures of players and viewers during games.

No matter how huge the volume of data collected, processing it requires a great deal of critical thinking that AI currently lacks. This becomes evident whenever AI produces racist, xenophobic or sexist algorithms because it neither considers social dynamics nor filters obsolete notions out of the collected data.


Machine learning is often compared to teaching a child. The vast difference is that a child can look at a daisy just once and then reproduce it with a decent resemblance to the original, while a machine has to comb through thousands of daisy images just to tell a daisy from a rose.

AI advocates point out that the child has been processing relevant data throughout his or her entire life, which is why the daisy dilemma is solved so quickly. The machine, by contrast, had neither the luxury of years of learning nor the appropriate cultural background, so it had to do the job in a much shorter time.

Neural networks are also still easily misled: a network shown many pictures of axes may fail to recognise an axe of a different shape.

Moreover, while real human intellect cannot store terabytes of information, it is better at making associations and can multitask.

However, design is one of the few creative areas that can benefit from the absence of a cultural background. Better designs often emerge from a relaxed mind: when the brain is unrestrained and not fixed on a specific cultural pattern, it is free to wander and make unconventional combinations. That's the case with AI: it wasn't raised and bred in a particular environment, it has no inhibitions, and it might see connections a human designer never would. Yet again, only a real human designer will be able to fully appreciate the result.

To test that theory, the Artemy Lebedev design studio from Russia built a designer AI, gave "him" a name and tasked him with creating logos for real clients who didn't know they were working with a computer. Some people were baffled by the unusual designs, but they certainly were original.


Art, too, is no stranger to AI achievements. Neural networks that can restyle any image after well-known artists have been popular since 2015, but they keep evolving and have learnt not only to make copies but to produce paintings of their own. Feelings towards these pieces may be mixed, but they clearly can't be ignored.

In October 2018, Christie's sold an AI-generated print for the first time. The Portrait of Edmond de Belamy, done in the style of 19th-century European portraiture, fetched $432,500. The smudged, blurred image of a young man is signed with a fragment of the algorithm's code. The portrait was produced by a generative adversarial network (GAN), an algorithm consisting of two neural networks: one generates the actual image, while the other compares it to a database of real paintings.

Portrait of Edmond de Belamy by Artificial Intelligence/Wikimedia Commons
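
The adversarial tug-of-war behind such portraits can be sketched in miniature. The toy below is purely illustrative (not the actual code behind the portrait, and every name in it is made up): the "images" are just numbers drawn from a Gaussian, the generator is a one-line model that shifts and scales random noise, and the discriminator is a one-line logistic classifier. Alternating updates still show the core GAN dynamic: the generator gradually learns to produce samples the discriminator can't tell from the real ones.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: a 1-D Gaussian the generator must learn to imitate.
REAL_MEAN, REAL_STD = 4.0, 1.0

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

theta = np.array([0.0, 1.0])  # generator: x = theta[1] * z + theta[0], z ~ N(0, 1)
w = np.array([0.0, 0.0])      # discriminator: D(x) = sigmoid(w[1] * x + w[0])

lr, batch = 0.05, 64
for _ in range(3000):
    # --- discriminator step: tell real samples from fakes ---
    x_real = rng.normal(REAL_MEAN, REAL_STD, batch)
    x_fake = theta[1] * rng.normal(size=batch) + theta[0]
    # gradients of -log D(real) - log(1 - D(fake)) w.r.t. the logits
    g_real = sigmoid(w[1] * x_real + w[0]) - 1.0
    g_fake = sigmoid(w[1] * x_fake + w[0])
    w[0] -= lr * np.mean(g_real + g_fake)
    w[1] -= lr * np.mean(g_real * x_real + g_fake * x_fake)

    # --- generator step: fool the discriminator (non-saturating loss) ---
    z = rng.normal(size=batch)
    x_fake = theta[1] * z + theta[0]
    g = (sigmoid(w[1] * x_fake + w[0]) - 1.0) * w[1]  # d(-log D(fake))/dx
    theta[0] -= lr * np.mean(g)
    theta[1] -= lr * np.mean(g * z)

# The generator's samples should now cluster around the real mean of 4.
print(float(np.mean(theta[1] * rng.normal(size=2000) + theta[0])))
```

The generator never sees the real data directly; it only learns from the discriminator's gradient, which is why the same scheme scales up to images once both models are deep networks.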

Four months later, the HG Contemporary gallery in Chelsea, the epicentre of New York's contemporary art world, hosted the "Faceless Portraits Transcending Time" exhibition. All of the pieces were created in the Art & AI Lab at Rutgers University, headed by Dr Ahmed Elgammal.

The AICAN algorithm, just like a GAN, was trained on images of over 3,000 Renaissance masterpieces and generates similar samples. In this case, though, the second neural network doesn't look for stylistic guidelines; it goes in the opposite direction and tries to produce something unique, in other words, it attempts creative thinking. The new technology was named CAN, for creative adversarial network.
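
One way to read that "opposite direction" is as an extra penalty on the generator: besides passing as art, its output is punished when the discriminator can confidently assign it to a known style. A minimal, hypothetical sketch of such a style-ambiguity term (a simplification of CAN's actual objective, with invented style classes) measures how far the discriminator's style prediction is from total uncertainty:

```python
import numpy as np

def style_ambiguity_loss(style_probs):
    """Cross-entropy between the discriminator's style distribution and
    the uniform distribution: lowest when no single style dominates.
    A CAN-style generator would minimise this alongside fooling the
    real-vs-fake discriminator."""
    p = np.asarray(style_probs, dtype=float)
    k = p.size
    return float(-np.sum((1.0 / k) * np.log(p + 1e-12)))

# A piece confidently classified as one style (say, "Baroque") is
# penalised more than one the discriminator can't pin down.
confident = style_ambiguity_loss([0.94, 0.02, 0.02, 0.02])
ambiguous = style_ambiguity_loss([0.25, 0.25, 0.25, 0.25])
print(confident > ambiguous)
```

Pulling the generator toward style ambiguity while keeping it inside the "this is art" region is what nudges the outputs away from straightforward pastiche.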

The paintings created by AICAN came out pretty obscure — unlike Edmond de Belamy, they had no distinctive features and resembled ghosts or mythical creatures.

Though art critics and viewers were sceptical about the venture, Elgammal is optimistic and convinced that AICAN is off to a good start: soon it will be able to spot emerging trends in art and predict which pieces will become popular.