In his latest post for B&T, Collaboro CEO Warwick Boulter says another upside to all this new tech available to marketers is its ability to save you time and money…
You may have noticed that we talk about metadata, collation, content tagging and machine learning – a lot. They’re pretty dry topics. But they’re important, because if you can find your marketing assets quickly and easily, then you can avoid costly production repetition, you can empower your creative partners to actually be creative and you can get your campaigns to market far more efficiently.
Let’s put some metrics around NOT being able to find things: in 2012, McKinsey reported that employees spent 1.8 hours per day – over 20 per cent of the working week – on searching for files. An earlier IDC report placed this figure even higher, at 30 per cent. In IDC’s words, “the knowledge worker spends about 2.5 hours per day, or roughly 30% of the workday, searching for information”.
Conservatively, that’s one headcount in five, completely wasted.
Now, add in the cost of re-making things you can’t find. Then add in the opportunity cost of missing the market while you remake them. Now add in the cost of this issue manifesting across multiple agency partners, who also have no easy way to share assets… Suddenly, the real costs of not being able to find marketing assets jump.
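To make that concrete, here’s a rough back-of-envelope sketch in Python. The team size, salary and remake figures are illustrative assumptions, not client data; only the 20 per cent search-time share comes from the research above.

```python
# Back-of-envelope cost of un-findable assets (all figures illustrative).
TEAM_SIZE = 10           # marketing team headcount (assumption)
AVG_SALARY = 100_000     # fully loaded cost per head, per year (assumption)
SEARCH_SHARE = 0.20      # ~20% of the working week spent searching (McKinsey)

wasted_salary = TEAM_SIZE * AVG_SALARY * SEARCH_SHARE
print(f"Search time alone: ${wasted_salary:,.0f} per year")  # $200,000

# Add hypothetical re-production of assets that exist but can't be found.
REMAKES_PER_YEAR = 12    # assumption
COST_PER_REMAKE = 15_000 # assumption
total = wasted_salary + REMAKES_PER_YEAR * COST_PER_REMAKE
print(f"With remakes: ${total:,.0f} per year")  # $380,000
```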
Properly solving digital asset management (DAM), and engaging your creative partners in the process and outcomes of a DAM solution, can deliver huge efficiencies to your business and help cut down considerably on future marketing costs – ideally funnelling that wasted 20 per cent or so of your marketing budget back into market-focused outcomes.
There are three core aspects of digital asset management required for success – engagement, process and software – and each carries roughly a 30 per cent weighting in delivering that success.
Any DAM software worth its salt will use a relatively intuitive metadata system to ensure uploaded files can be quickly found by users. To be high-functioning, the system should use a combination of machine-learned insights and human smarts to tag not just the image or video file but the individual components within it. The system should recognise what’s actually important to your business, not just deliver generic tags. And it should encompass details on capture, usage, production and copyright.
This makes the content ‘intelligent’. And that’s where things get really exciting.
Using a combination of machine learning and human smarts, an intelligent content engine (ICE) analyses and understands objects, people, text and scenes within imagery and video, to add multiple layers of searchable metadata.
This searchable metadata can be broken down into nine layers.
Metadata layer 1: File information
This is the basic information auto-generated by operating systems. Things like creation date, file type, file size, file name, and a few other parameters. Basic stuff, but handy nonetheless.
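For the technically minded, this layer is simply what the operating system already knows about a file; a minimal Python sketch (note that creation-time semantics vary by operating system):

```python
from datetime import datetime
from pathlib import Path

def file_metadata(path: str) -> dict:
    """Layer-1 metadata: the basics the operating system already records."""
    p = Path(path)
    stat = p.stat()
    return {
        "file_name": p.name,
        "file_type": p.suffix.lstrip("."),
        "file_size_bytes": stat.st_size,
        # st_ctime is creation time on Windows, metadata-change time on Unix.
        "created": datetime.fromtimestamp(stat.st_ctime).isoformat(),
        "modified": datetime.fromtimestamp(stat.st_mtime).isoformat(),
    }
```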
Metadata layer 2: Embedded capture information
When cameras (and phones) capture images and video, they also store some relevant metadata. Things like camera type, frame size, frame rate, other technical capture settings and, crucially, GPS location are all folded into an ICE.
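Reading that embedded data is straightforward with the open-source Pillow imaging library; a sketch (this assumes a recent Pillow release, and not every camera embeds every tag):

```python
from PIL import ExifTags, Image  # pip install Pillow (a recent release)

def capture_metadata(path: str) -> dict:
    """Layer-2 metadata: what the camera or phone embedded at capture time."""
    exif = Image.open(path).getexif()
    # Map numeric EXIF tag ids to readable names ("Model", "DateTime", ...).
    named = {ExifTags.TAGS.get(tag_id, tag_id): value
             for tag_id, value in exif.items()}
    return {
        "camera": named.get("Model"),
        "captured": named.get("DateTime"),
        # GPS lives in its own EXIF directory; empty dict if not recorded.
        "gps": dict(exif.get_ifd(ExifTags.IFD.GPSInfo)),
    }
```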
Metadata layer 3: Audio transcription
Searchable transcripts of all spoken words in any video. This is especially powerful for interviews and longer-form content. Being able to search not just video files but the words spoken within them, on a cloud-based platform, can deliver both marketing efficiencies and analytical insights from the content itself.
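A toy illustration of why this matters: once speech becomes text, finding a moment inside hours of footage is a one-line search (the transcript data below is invented):

```python
# asset_id -> list of (timecode_seconds, spoken_text); invented data.
transcripts = {
    "interview_042.mp4": [(12, "our new store opens in March"),
                          (95, "sustainability is core to the brand")],
}

def search_speech(query: str):
    """Yield (asset, timecode, line) for every spoken match."""
    q = query.lower()
    for asset, lines in transcripts.items():
        for seconds, text in lines:
            if q in text.lower():
                yield asset, seconds, text

print(list(search_speech("sustainability")))
# [('interview_042.mp4', 95, 'sustainability is core to the brand')]
```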
Metadata layer 4: Copyright and usage information
A set of information detailing the copyright holder, licensing agreement details, usage contracts for talent and music, the video operator or photographer, and contact information – all the things that allow businesses to utilise their multimedia assets easily and with clarity.
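In data terms, this layer is a structured rights record attached to each asset; a sketch with illustrative field names (not Collaboro’s actual schema):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RightsRecord:
    """Layer-4 metadata: who owns an asset and how it may be used."""
    copyright_holder: str
    licence_terms: str
    talent_usage_expires: date   # when the talent contract lapses
    music_licence_expires: date  # when the music licence lapses
    photographer: str
    contact_email: str

    def usable_on(self, when: date) -> bool:
        """An asset is only safely usable while every licence is current."""
        return when <= min(self.talent_usage_expires, self.music_licence_expires)
```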
Metadata layer 5: Bespoke customer-centric keywords
A bespoke thesaurus of keywords that reflect a brand’s specific characteristics. These are detailed, often product or infrastructure terms that are critically important and widely used within your business. For some of our clients, ‘hamburger’ is far too generic to be useful. We need more detail, like ‘beef patty’ or ‘low-fat’ or ‘Australian produce’.
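One simple way to picture this layer: a mapping that expands generic machine tags into the terms the business actually searches for (the mappings below are invented examples):

```python
# Brand thesaurus: generic machine tags expand into business-specific terms.
BRAND_THESAURUS = {
    "hamburger": ["beef patty", "low-fat", "Australian produce"],
    "plane": ["A330 fleet", "cabin interior"],
}

def enrich_tags(machine_tags: list[str]) -> list[str]:
    enriched = list(machine_tags)
    for tag in machine_tags:
        enriched.extend(BRAND_THESAURUS.get(tag, []))
    return enriched

print(enrich_tags(["hamburger", "outdoor"]))
# ['hamburger', 'outdoor', 'beef patty', 'low-fat', 'Australian produce']
```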
Metadata layer 6: Basic object recognition
An ICE also includes broad keywords populated automatically as searchable metadata by an AI framework. Basic visual elements and objects include, for example, ‘plane’, ‘car’, ‘dog’, ‘man’, ‘day’, ‘night’, ‘water’ and ‘park’.
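As a rough sketch of how raw model output becomes layer-6 metadata – the detections and the confidence floor here are mocked assumptions, not a specific vendor’s API:

```python
# Mocked detector output: (label, confidence) pairs for one image.
detections = [("car", 0.97), ("dog", 0.91), ("park", 0.88), ("cat", 0.31)]

CONFIDENCE_FLOOR = 0.80  # assumption: only confident labels become searchable

object_tags = sorted({label for label, score in detections
                      if score >= CONFIDENCE_FLOOR})
print(object_tags)  # ['car', 'dog', 'park']
```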
Metadata layer 7: Facial recognition
Facial recognition can empower a customer to teach the algorithm the important faces, which can then be tagged automatically – with applications from sport to the C-suite and politics.
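Under the hood, ‘teaching the algorithm’ typically means enrolling a few labelled face embeddings and matching new faces by similarity; a sketch with random stand-in embeddings (a real system would use a face-recognition model and a tuned threshold):

```python
import numpy as np

# The customer enrols labelled embeddings; random vectors stand in here
# for the output of a real face-recognition model.
rng = np.random.default_rng(0)
enrolled = {"CEO": rng.normal(size=128), "Team captain": rng.normal(size=128)}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(face: np.ndarray, threshold: float = 0.7):
    """Return the best-matching enrolled name, or None to leave the face untagged."""
    name, score = max(((n, cosine(face, e)) for n, e in enrolled.items()),
                      key=lambda pair: pair[1])
    return name if score >= threshold else None
```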
Metadata layer 8: Production information
Taking information from call sheets, job sheets, project management platforms, agency production reports and any other campaign documentation, an ICE can include details of the production itself, such as the producer, director, ad agency, product, campaign and job number.
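In practice this is just another set of key-value fields folded into each asset’s record; a sketch with invented values:

```python
# Layer-8 fields lifted from call sheets and job systems (values invented).
production_info = {
    "producer": "J. Smith",
    "director": "A. Lee",
    "ad_agency": "Example & Partners",  # hypothetical agency name
    "product": "Summer range",
    "campaign": "Summer 2019",
    "job_number": "JOB-0142",
}

asset_record = {"file_name": "hero_30s.mp4", "tags": ["beach", "summer"]}
asset_record.update(production_info)  # production details become searchable too
```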
Metadata layer 9: Character recognition
Recognition of words that appear as visuals on-screen – this is very powerful. It allows, for example, a sporting team to immediately identify images or video where sponsor branding appears, providing ready proof points for activation.
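A toy sketch of that sponsorship use case, with invented OCR output keyed by frame time:

```python
# asset_id -> {frame_seconds: on-screen text recognised by OCR}; invented data.
ocr_by_frame = {
    "matchday_cut.mp4": {3: "ACME AIRWAYS", 41: "SCOREBOARD 2-1",
                         67: "ACME AIRWAYS"},
}

def sponsor_appearances(sponsor: str):
    """Yield (asset, timecode) for every frame where the sponsor is visible."""
    for asset, frames in ocr_by_frame.items():
        for seconds, text in frames.items():
            if sponsor.lower() in text.lower():
                yield asset, seconds

print(list(sponsor_appearances("acme airways")))
# [('matchday_cut.mp4', 3), ('matchday_cut.mp4', 67)]
```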
These insights – many of them driven by machine learning and at the cutting edge of technology – allow creative assets to be easily surfaced within a library that is growing exponentially, and is being created across multiple partners or locations.
More broadly, making digital creative assets easy to find and use not only adds value to marketing; it also immunises the business against employee and agency changes, and prepares it for a future of highly automated, highly individualised, content-driven marketing at scale.