Most marketers understand the capacity for data to provide insights into customer behaviour and to help gently nudge the buyer toward the right (or at least preferred) outcome.
For many omnichannel retailers, though, the lag between purchase and delivery remains a blot on the customer experience. How much easier would it be if analytics allowed us to identify a shopper before they knew they were going to buy a product, so we could start shipping it to them early?
Welcome to the era of pre-emptive shopping. The principle sounds simple enough.
“The idea is that we start moving items towards the potential customers so that when they actually make a purchase the item is closer to them and they receive it faster,” said Igor Elbert, data scientist for Gilt Groupe.
Before you get too excited, though, it’s probably best to recognise that it’s more difficult than it sounds, and it doesn’t sound easy.
According to Elbert, “pre-emptive shipping is hard, especially in our scenario when we often do not have historical sales data for the fashion items we sell. We don’t claim to be able to algorithmically predict what is going to be popular in fashion; our human experts are much better at it”.
Instead, the company uses historical data to anticipate demand for a particular product in certain geographic areas, based on product attributes, overall demand estimates and factors such as how the product will be presented on the website.
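To make the idea concrete, a very simple version of that kind of estimate might look like the sketch below. It is a hypothetical illustration, not Gilt’s actual pipeline: the file name, column names and the choice of pandas are all assumptions, and it stands in for what would in practice be a far richer model.

```python
# Hypothetical sketch (not Gilt's actual pipeline): estimate where demand
# for a brand-new item is likely to land by looking at how similar items
# sold by region in the past. Column names are illustrative.
import pandas as pd

# Historical sales, one row per order.
sales = pd.read_csv("historical_sales.csv")  # illustrative file name

def anticipated_regional_demand(new_item: dict, sales: pd.DataFrame) -> pd.Series:
    """Share of expected demand per region, based on past sales of items
    that share the new product's category and material."""
    similar = sales[
        (sales["category"] == new_item["category"])
        & (sales["material"] == new_item["material"])
    ]
    by_region = similar.groupby("region")["units_sold"].sum()
    return (by_region / by_region.sum()).sort_values(ascending=False)

# A dress that has never been sold before: which regions should stock move towards?
print(anticipated_regional_demand({"category": "dress", "material": "silk"}, sales))
```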
The biggest challenges, he said, are data quality and availability, and the volatility of fashion trends.
“I cannot impose a ‘little black dress’ on someone but I can strive to bring it as close to them as possible so if they really want it they will get it faster.”
Elbert said that in the future companies like his will send trucks with merchandise into areas with high anticipated demand, creating ‘distributed warehousing’.
It’s a back-to-the-future scenario – travelling salesmen armed with predictive analytics and logistical support.
“As usual in these cases, the more data the better, but it’s the quality of the data that matters the most. For example, product attributes (such as colour, material, category and size) are very important, so we need to make sure they are encoded in a way our models can use. I’m talking about our predictive models here, of course.”
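Encoding categorical attributes like these usually means turning them into numeric columns a model can consume. The sketch below shows one common way of doing that; the attribute values are made up and the approach is an assumption, not a description of Gilt’s models.

```python
# Hypothetical sketch: one-hot encode categorical product attributes so a
# predictive model can use them. Values are illustrative.
import pandas as pd

products = pd.DataFrame({
    "colour":   ["black", "navy", "black"],
    "material": ["silk", "cotton", "wool"],
    "category": ["dress", "shirt", "coat"],
    "size":     ["S", "M", "L"],
})

# Each attribute value becomes its own 0/1 column, e.g. colour_black,
# material_silk, size_M, which most regression and tree models can consume.
encoded = pd.get_dummies(products, columns=["colour", "material", "category", "size"])
print(encoded)
```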
To help it make decisions, the company uses the Teradata Aster distributed database, which simplifies data preparation and analysis. “It has data-mining tools built in, which comes in handy when I need to train the model on very large datasets.”
Gilt also uses crowdsourcing to get inside the minds of potential buyers.
“Mechanical Turk is an Amazon-built platform that allows using the crowd (aka ‘turkers’) for tasks that are difficult for artificial intelligence (AI). We don’t use turkers to ‘review’ clothes; we just ask for their subjective opinion: is this dress pretty? Do you think it is going to be popular? For which occasion would you recommend this dress?”
Turkers are also effective in enriching product data (classifying a dress neckline, for example) and quality assurance (making sure an image matches the product description), he said. “We do get a bad submission once in a while, but with several turkers per task the results are usually pretty good.
“We spot the turkers who have a knack for our tasks and encourage them to work with us more,” he said.
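Reconciling several turkers’ answers per task is often done with something as simple as a majority vote, while keeping a tally of which workers tend to agree with the consensus. The sketch below is a hypothetical illustration of that pattern; the task IDs, worker IDs and data structures are assumptions rather than Gilt’s actual quality-assurance process.

```python
# Hypothetical sketch: combine several turkers' answers per task by majority
# vote, and track a simple per-worker agreement rate so reliable workers can
# be identified and invited back.
from collections import Counter, defaultdict

# (task_id, worker_id, answer) -- illustrative submissions.
submissions = [
    ("dress-42-neckline", "worker-a", "v-neck"),
    ("dress-42-neckline", "worker-b", "v-neck"),
    ("dress-42-neckline", "worker-c", "scoop"),
    ("dress-43-occasion", "worker-a", "evening"),
    ("dress-43-occasion", "worker-c", "evening"),
]

def consensus(submissions):
    by_task = defaultdict(list)
    for task, worker, answer in submissions:
        by_task[task].append((worker, answer))

    results = {}
    tally = defaultdict(lambda: [0, 0])  # worker -> [agreed, total]
    for task, answers in by_task.items():
        winner, _ = Counter(a for _, a in answers).most_common(1)[0]
        results[task] = winner
        for worker, answer in answers:
            tally[worker][0] += answer == winner
            tally[worker][1] += 1
    return results, tally

results, worker_stats = consensus(submissions)
print(results)
print({w: agreed / total for w, (agreed, total) in worker_stats.items()})
```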
This story originally appeared at www.which-50.com