We marketers love technology of all types (especially when it helps us do our job better), and the rapid rise of artificial intelligence (AI) is creating one of the biggest buzzes in the marketing industry right now. No longer are AI and machine learning exclusive to companies with giant supercomputers and chess grandmasters at their disposal. Today, retailers, fashion houses, restaurants, travel companies, healthcare providers, financial companies and more are starting to make machine learning and predictive technologies almost commonplace.
Still, while it’s easy to get caught up in chatbots that can order a takeaway and pick new clothes, virtual assistants that can automate lead nurturing, or predictive modelling that can segment your customers, you cannot forget one important thing: no matter how clever the technologies are, they all depend on data. More importantly, they depend on ‘good’ data.
That old adage ‘garbage in, garbage out’? It was applicable when putting together your direct marketing list to ensure campaigns reached the right people, and it is arguably even more important when programming AI.
Generally speaking, any AI system you choose to build can only be as smart as the information you provide it with. Even machine learning and deep learning technologies – which can make decisions and adjust their actions without explicit programming – need exposure to data in the first place. And even if this data is not continually governed and maintained, it still needs some form of administration to ensure it is used in a fair, responsible and accurate way.
The highest-profile example of AI gone rogue is of course Microsoft’s Twitter ‘chatterbot’, Tay. Without an understanding of ‘inappropriate’ behaviour, and with the unfiltered comments of Twitter’s massive user base as her information source, it didn’t take long before Tay started regurgitating some of the more hateful and inflammatory comments fed to her. To be precise, it took just 16 hours (in which Tay managed 96,000 tweets) before Microsoft pulled her plug.
Similarly, you don’t have to search far to read other stories of AI suffering from ‘learning bias’. This includes an AI-judged beauty contest displaying a bias toward Caucasians because of the criteria it associated with ‘health’ and ‘beauty’, and an automated passport photo checker that rejected a picture of an Asian man because the software claimed that “the subject eyes are closed”.
None of these AI technologies was created with the purpose of causing offence, or of racially discriminating. They were simply doing as they were told – but tell that to the local tabloid newspaper.
So, before you embark on an exciting project to introduce a ‘virtual assistant’, or use clever AI algorithms to start segmenting customers, autonomously buying digital advertising space or whatever, you need to first ensure you are happy with the data you want to fuel it with.
This requires data to be both carefully selected (considering which data you should and shouldn’t use) and meticulously prepared (consistently formatted and cleansed, with missing data fixed and inaccuracies removed). Quality data could just mean the difference between better marketing and Judgement Day…
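To make the preparation step concrete, here is a minimal sketch of that kind of cleansing pass in Python. The records, field names and rules are hypothetical – the point is simply that formatting is normalised, duplicates are dropped, and rows with missing data are flagged before any AI ever sees them.

```python
# Hypothetical customer records: normalise formatting, remove duplicates,
# and set aside rows with missing fields before feeding data to an AI system.

def cleanse(records):
    seen = set()
    clean, rejected = [], []
    for rec in records:
        # Consistent formatting: trim whitespace, lower-case emails,
        # title-case names.
        email = (rec.get("email") or "").strip().lower()
        name = (rec.get("name") or "").strip().title()
        if not email or not name:
            rejected.append(rec)   # missing data: fix manually or discard
            continue
        if email in seen:
            continue               # duplicate record: keep first occurrence
        seen.add(email)
        clean.append({"name": name, "email": email})
    return clean, rejected

records = [
    {"name": "jane smith ", "email": "Jane@Example.com"},
    {"name": "Jane Smith", "email": "jane@example.com "},   # duplicate
    {"name": "", "email": "no.name@example.com"},           # missing name
]
clean, rejected = cleanse(records)
# clean holds one tidy record for Jane; rejected holds the row with no name
```

Trivial as it looks, this is exactly the ‘garbage in, garbage out’ gatekeeping the article describes: the model downstream only ever learns from the `clean` list.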