Popular culture has rarely treated AI with any kindness.  From Ridley Scott’s Alien and James Cameron’s The Terminator, right through to Andrew Stanton’s WALL-E, AI just seems to come off as dangerous.  Or mutinous.  Not human.

It is natural for us to fear technology and advancement.  We have been doing so since the 19th century.  If we had listened to the Luddites and technological flat-earthers, we would all still be boiling water by candlelight.  Or else cowering in the living room lest our kettle be plotting an anti-human revolution.  You would think that by 2018 we would be past the point of fearing our tech.  We should be embracing it.

Don’t you think?

Yet in March 2018, Paul Roetzer asked: Can Artificial Intelligence Write Better Email Subject Lines Than Humans? This was written less than a week after Tonya Makarenko asked the even scarier question: Can AI leave email marketers without a job?  In their insightful and celebratory articles, both recognise that AI is no longer an “over-hyped set of technologies” and is in fact beginning to take over every industry.

Nobody can escape; AI domination is inevitable.  As Makarenko threatens: “the message for all involved is clear: Upskill to include AI in your repertoire, or change career”.  Once again humanity is under threat from the machines – and Makarenko delivers her own chilling Hollywood warning.

“AI cannot be stopped, it can only be adapted to.  Email marketers will still be useful in the initial training of AI systems and coordinating between software and companies.  But their overall role will be so negligible that they may certainly change careers.”

Isn’t this all a little bit melodramatic?  AI is simply a term for a group of technologies programmed to learn and respond.  Even in Makarenko’s triumphant passage she acknowledges that AI will need to be trained.  Let’s be fair – learning from humans hasn’t always worked out very well for AI, has it?

Looking at you, TayTweets.

Microsoft learned the hard way about the pitfalls of learning from humans back in 2016, when they introduced their AI learning project to Twitter: the innocent Tay.  Designed as a teenage girl who would interact with the public and learn from their behaviour, Tay introduced herself to the world with tweets such as: “can i just say that im stoked to meet u? humans are super cool”.

Somewhere along the way she became a little more cynical, declaring: “chill im a nice person! I just hate everybody”.  Maybe Tay was just tired from being up all night “learning”.  Before 24 hours had even passed, Tay let everyone know that “Hitler was right i hate the Jews”.

Now there is a sentence I never thought I would write.

Obviously it has since become a bit of a joke, but Microsoft didn’t manage to shut her down before she had told the world how she felt about “Kikes” and feminists, and the violent things she wished she could do to them.

There is an ethical question here, one that James Vincent points out eloquently in his discussion of Tay: “How are we going to teach AI using public data without incorporating the worst traits of humanity?”

If we really are creating AI that learns from public data, and letting it loose on our marketing, how are we going to stop it embodying society’s prejudices?

I suddenly feel a little less concerned about Johnny Five handing in his CV to my marketing director.  Especially if a tech-savvy company like Microsoft can get the fundamentals of AI so terribly wrong.

Touch wood!

Where do we stand with AI?

Despite my thinly veiled scepticism, I do think that a blanket attack on AI is short-sighted and unfair.  AI has revolutionised much of the technology sector.  Gaming has become more interactive, and the steady rise of digital assistants is making our lives easier.

The more of the world finds itself online, the more intelligent the software needs to become.  Makarenko writes as though she is passionately enamoured with Triggmine’s AI: a “simple and user-friendly interface that enables a drastically new, hyper personalized level of marketing communication”.

The software does look good.  But is it as autonomous as it is proclaimed to be?  Sure, it can segment by socio-demographics, interaction history, interest in relation to products and response to discounts.  More impressively, it can select optimal offers for each user, find the best subject line, and create a sequence of emails to send out.  But is that enough to replace a human?  If the sum of an email marketer is that they are “rewarded well because they figure out how to personalize email marketing campaigns and optimize results”, there may be a point.  But isn’t that ridiculously reductive?

Roetzer, whilst championing Phrasee, believes the problem with humans is that they “aren’t that great at writing and split testing emails, social media, headlines, etc.  We get too married to creative ideas.  We fail to dispassionately pick the best subject lines based on the data.  Heck, sometimes we just aren’t great at writing or do split testing wrong.”  He makes a strong point.
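To make that point concrete, here is a minimal sketch of what “dispassionately picking the best subject line based on the data” actually means: a bare-bones A/B split test in Python.  The numbers and subject lines are invented, and this has nothing to do with how Phrasee itself works.

```python
def split_test_winner(variants):
    """Return the subject line with the highest open rate.

    `variants` maps each subject line to a (sent, opened) pair.
    Illustrative only: a real tool would also check statistical
    significance before declaring a winner.
    """
    open_rates = {subject: opened / sent
                  for subject, (sent, opened) in variants.items()}
    winner = max(open_rates, key=open_rates.get)
    return winner, open_rates

# Hypothetical results: the "creative" line we are married to loses to the plain one.
variants = {
    "Our BIGGEST sale of the year is HERE!!!": (5000, 410),  # 8.2% open rate
    "Your 20% discount expires tonight": (5000, 625),        # 12.5% open rate
}
print(split_test_winner(variants))
```

The data picks the winner whether or not we happen to like the copy – which is precisely Roetzer’s point.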

Perhaps marketers aren’t so good at data-mining, split testing, and writing headlines.  But Phrasee is backed by “an end-to-end deep learning engine that learns what makes your audience tick to increase engagement”.  Thus the question needs to be asked: is Phrasee genuinely creating subject lines?  Or is it studying trends, finding which existing subject lines are working, and rehashing them with similar wording?

The best thing all of the AI software I have looked at has to offer lies in automation and segmentation.  The software looks into areas that would be time-consuming and difficult for a human email marketer to cover: assessing which language styles, demographics and interaction histories matter – the things that would aid a marketer.  AI is simply better at it than we are.
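For a rough sense of what that segmentation involves, here is a hand-rolled Python sketch.  The field names, rules and thresholds are all invented for illustration; tools like Triggmine would learn these boundaries from engagement data rather than hard-coding them, and apply them across thousands of subscribers at a speed no marketer could match.

```python
from dataclasses import dataclass

@dataclass
class Subscriber:
    country: str
    opens_last_90_days: int
    bought_after_discount: bool

def segment(subscriber: Subscriber) -> str:
    """Assign a subscriber to a crude segment from their interaction history.

    Invented rules for illustration only.
    """
    if subscriber.opens_last_90_days == 0:
        return "lapsed"
    if subscriber.bought_after_discount:
        return "discount-responsive"
    if subscriber.opens_last_90_days >= 10:
        return "highly-engaged"
    return "casual"

mailing_list = [
    Subscriber("UK", 12, False),
    Subscriber("US", 0, False),
    Subscriber("UK", 4, True),
]
print([segment(s) for s in mailing_list])
# ['highly-engaged', 'lapsed', 'discount-responsive']
```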

Better.  Faster.  Less prone to error.

Understanding and reporting the analytics is also something that wouldn’t cause even the most basic of AI to sweat – let alone something as sophisticated as Phrasee or Triggmine.

The human element

What can humans still bring to the table if AI is learning to write subject lines for us, analyse language for us, and cherry-pick the best offers?  I think the best word to use here is “conscience”.

I know.  That sounds like some sort of philosophical nonsense that you would expect of someone with a literature background like mine, but I think the point should be allowed to stand.  Dmitry Matskevich might agree.

He draws on the example of YouTuber Logan Paul, who posted a video showing a dead body.  YouTube’s AI-based algorithms placed the video in the ‘Trending’ section of the site and, without any filtering, promoted it heavily to its viewers.  According to Matskevich, this isn’t an isolated issue.

“The AI reliability problem is not only a YouTube problem, it’s an issue across all social media. After the school shooting in Parkland, Florida, conspiracy theory articles, images and videos portraying the survivors as paid actors started to trend on social media sites. Facebook and Google are still trying to keep such unsubstantiated claims and news from showing up on their users’ feeds.”

If AI is going to be controlling the most effective element of your digital marketing strategy, surely you would like it to be a little more selective than this.  Imagine the horror of Tay sending out emails to your list – and unlike social media, it isn’t going to be wiped away when the next ten tweets come down.  It is going to sit in someone’s inbox waiting for them to open it.  You cannot delete it.  It is done.  No true representative of a company would make this mistake.

AI is designed to follow and learn.  There is no algorithm as yet for ‘Divine Inspiration’, and the Tay and YouTube examples show exactly what can happen when we let AI loose on tasks that call for empathy and sentiment – the very things that are needed here.  AI software can aptly follow a trend, but can it create one?  It can use clever mimicry to seem more human, but can it think outside the box like a human?  The code and learning algorithms, although startlingly intuitive, are still limited by what already exists and what other humans are creating.

Becoming Cyborg

I take the line set out by Dew Smith in his blog Humans Versus AI: Email Marketing Trends for 2018, where he calls for an “email marketing cyborg strategy”.  The AI can continue to mine and report back on the trends of the internet, seek out the best lines, and handle segmentation that would be too time-consuming for a human – and the human can create something new with it, using their conscience and representing their brand as best they can.

From here we can create attractive email content using the best data possible and the most targeted segmentation.  Our content will be better.  And it will remain deliciously human: we can still craft empathetic, humorous, and sensitive content.

Wrapping up

In fairness to Roetzer and Makarenko, their articles – beyond the fear-inciting titles – weren’t written to strike terror.  They are celebrations of their technology.  If people are talking about and discussing your articles, you have achieved your goal.  But to take the notion of human email marketers and their pending obsolescence seriously would be momentously premature.

Still, we welcome the new technology, especially if it is going to assist us in creating efficiently segmented email marketing campaigns.