What’s The Difference Between AI & Computer Vision? And Why Every Marketer Needs To Know

In this guest post, GumGum’s vice president ANZ, Jon Stubley (pictured below), takes a look at AI and computer vision. And explains why you’re about to hear a lot more about it…

Posted by Carlotta Vittori

If you go to my company’s website, you will see that we describe ourselves as “an artificial intelligence company with deep expertise in computer vision”.

Undoubtedly, AI and computer vision are going to be game changers for society (and, within that, the media and marketing industry), but in my conversations with industry executives this year I’ve found that while some people are entirely across what AI and computer vision are, others are fairly sketchy in their understanding.

So, in case you’re wondering, here is the lowdown on what AI and computer vision actually are.

What is AI, what is computer vision and are they the same or different?

While the following synthesis of a hugely broad and complex subject is unlikely to win any scientific awards, put simply, AI means using computer systems to perform tasks and functions that usually require human intelligence. In other words, getting machines to think and act like humans. By tasks and functions, I mean things like speech recognition, language translation, decision-making and visual perception.

Which is where computer vision comes in. Computer vision describes the ability of machines to process and understand visual data; automating the type of tasks the human eye can do.

So in layman’s terms, computer vision is AI applied to the visual world.

Why am I suddenly reading so much about AI? Is it this year’s ‘Internet of Things’?

AI has been around for decades (if not longer), but recent advances in data and computing capabilities have turbocharged progress, which is why you are reading about it so much now.

One of the most important breakthroughs came in 2011. Up until that point, scientists had thought that getting machines to think and interact like humans required uploading huge quantities of knowledge data to a computer, and then uploading vast sets of rules for how to process that data (for example, the syntax and grammar rules of a language).

However, in 2011, AI researchers at Google had a eureka moment. The researchers, working on complex computing systems called neural networks and deep learning (modelled on how human brains work), reversed the process. That is, instead of inputting the rules themselves, they fed the computer huge amounts of data that had been precisely labelled and let the machine analyse it – a process known as ‘supervised learning’. The machines were then able to recognise unlabelled data that was fed to them, having ‘learned’ for themselves how to recognise it from the ingested labelled data.
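For the curious, the idea can be sketched in a few lines of code. This is a deliberately tiny, hypothetical illustration (a nearest-neighbour classifier over made-up numbers, not a neural network, and certainly not Google’s system): the program is never given explicit rules, it simply ‘learns’ from labelled examples and then labels new, unseen data.

```python
import math

# Labelled training data: (feature vector, label).
# The features here are invented; in a real system they might be
# measurements derived from millions of labelled images.
labelled_data = [
    ((1.0, 1.2), "cat"),
    ((0.9, 1.0), "cat"),
    ((3.0, 3.2), "dog"),
    ((3.1, 2.9), "dog"),
]

def classify(point):
    """Label an unseen point by its closest labelled example (1-nearest-neighbour)."""
    _, label = min(labelled_data, key=lambda ex: math.dist(ex[0], point))
    return label

print(classify((1.1, 1.1)))  # a point close to the "cat" examples
print(classify((2.9, 3.0)))  # a point close to the "dog" examples
```

No rule like “cats have these measurements” is ever written down; the answer comes entirely from the labelled examples, which is the essence of supervised learning.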

In one landmark test, their machines correctly diagnosed diabetic retinopathy 90 per cent of the time, versus human doctors, who were correct only 80 per cent of the time on average.

This breakthrough ushered in the fast, AI-enabled personal digital assistants – Alexa, Cortana, Google Assistant and Siri – that we are all familiar with.

How is computer vision already being used?

Computer vision already has a host of day-to-day applications that you are probably using: if you use Google Translate to decipher foreign menus or road signs, or search your images in Google Photos using text search terms, you are already using a form of computer vision on a daily basis.

Within industry and society there are too many examples to condense here, but perhaps the most advanced are in healthcare, transport, and in homes and cities.

In the medical sector, image recognition is increasingly being used to analyse X-ray, MRI, CAT and mammography scans, among others. Since 90 per cent of all medical data is image-based, and computers can analyse it just as well as or even better than humans, this is set to continue. Not only that, it is also being used in live procedures, helping to create perfect stitches and drive safer outcomes.

Self-driving cars are another obvious example: if you allow a Tesla to drive itself, it uses a host of cameras as well as sonar to keep you from veering across lanes. Similarly, computer vision is coming to the fore in ‘smart cities’, where it is being used to tackle traffic and crime issues.

And we will soon be able to get takeaway delivered to our doors, with Amazon recently trialling delivery drones in the UK that use computer vision to avoid obstacles.

How will computer vision change marketing?

It already is. Image recognition is a crucial tool for processing the vast amount of visual data that is uploaded and shared online every second; GumGum uses it to serve contextually appropriate in-image and in-screen ads, and for brand analysis and evaluation across social media and sports sponsorship.

Going forward, it will be used for a host of marketing applications. It is going to revolutionise shopping and merchandising (you’ll be able to buy clothes via Siri and Alexa, among other innovations); enable real-time focus groups; refine retargeting and real-time optimisation; and much more.

In its ultimate application, it can even be used to create a robot creative. In fact, it already has been. McCann Japan recently developed an AI creative director and pitted it against a human one to make an advert for Clorets Mint Tabs. The results were showcased earlier this year at an IAB event in the UK and, ominously, industry executives preferred the robot’s commercial.

You can decide which you think is better here, but either way, computer vision is going to be integral to almost all parts of our industry. So it’s time to make sure you get up to speed.