The Daily Prosper
Big Data

How Big Data has improved efficiency in business decision-making

Solutions based on technologies such as Big Data and Machine Learning are knocking at the door of large companies to improve their efficiency and bring an end to the most repetitive tasks


Imagine an assembly line with hundreds of workers doing a single task: checking that the packages they are going to distribute to the different points of sale contain the correct amount of product. It sounds unstimulating and extremely tedious, right? Well, it's not such a far-fetched idea. Many of our country's large supply chains work, or until recently worked, in a similar way.

Although professionals in the sector realised years ago that these problems needed technological solutions, until now the tools created have tackled only specific problems and have needed a person behind them to review the results and connect the solutions.

In the words of Eva Montoro, Head of CDO Intelligence at Banco Santander, speaking at the opening of Chief Data Officer Day (CDO) 2018, held in Madrid for those responsible for data at some of Spain's most important companies: "we need tools that are fast, simple and flexible, that allow us to connect the world of business with the world of technology, because otherwise we will not be able to govern such data correctly".

Optimise work, reduce errors and improve efficiency 

In 2017, when the company began an ambitious process of digital transformation, Marco Antonio Serrano, head of Advanced Analytics and Big Data at the Día Group, resolved to optimise the group's supply chain to improve business efficiency. Before the transformation, the group had a series of warehouses located throughout Spain and a tool that ordered automatically from suppliers. "The problem is that this tool has bugs, and that is why we need a second line of staff, who we call reprovisioners, to review things," he explained during his presentation.

These employees manually review around 18,000 order lines per day. Their task is to check whether the tool has done its job well or whether, on the contrary, the quantity to be ordered automatically from suppliers is far below or far above what is needed, which could cause a stock problem.

"We realised that 60% of the orders they review are correct and they agree with the proposed amount, so there would be no need to even look at those." That is why they decided to build a supervised classification algorithm to predict whether an order needed to be revised or not. "It seems really obvious, but companies are throwing a lot of money at that," says the supermarket chain's Big Data manager.

They taught the algorithm to read and review orders to be made to suppliers. How? Using more than one hundred million orders from the company's last 24 months of trading. The objective of this programme was to detect those orders that required no correction, so that they would not even need to be shown to a reprovisioner. With an 86% success rate, this software pilot achieved its goal. “What we have achieved is to save many thousands of hours of work per year, about three hours per employee, which is a very important cost saving for the business," he concludes.
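For illustration, here is a minimal sketch of how such a supervised classifier might look in Python with scikit-learn. The data file, feature names and choice of model are all assumptions made for the example; Día's actual system is not public.

    # Hypothetical sketch: predict whether an order line needs manual review.
    # File, features and model are invented for illustration.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Historical order lines, labelled 1 if a reprovisioner had to correct them.
    orders = pd.read_csv("historical_orders.csv")
    X = orders[["proposed_qty", "avg_weekly_sales", "stock_on_hand", "lead_time_days"]]
    y = orders["needed_correction"]

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

    model = RandomForestClassifier(n_estimators=200, random_state=42)
    model.fit(X_train, y_train)
    print("accuracy:", accuracy_score(y_test, model.predict(X_test)))

    # Only the lines the model flags are routed to a human reprovisioner;
    # the rest go straight to suppliers unreviewed.

The design idea is the one Serrano describes: the classifier does not place orders itself, it simply decides which lines a human still needs to see.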

Serrano adds: "Now we are working on feeding the same model with the data extracted from correct orders and from the decisions the reprovisioners make when an order is incorrect, in order to automate these corrections." His objective: to have orders being modified automatically by early 2019.
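A hypothetical sketch of that next step might pair the classifier with a second model, trained on the reprovisioners' past corrections, that proposes the corrected quantity itself. Again, the file and column names below are invented for illustration.

    # Hypothetical: learn the corrected quantity from past manual corrections.
    import pandas as pd
    from sklearn.ensemble import GradientBoostingRegressor

    corrections = pd.read_csv("reprovisioner_corrections.csv")
    X = corrections[["proposed_qty", "avg_weekly_sales", "stock_on_hand", "lead_time_days"]]
    y = corrections["corrected_qty"]  # what the reprovisioner actually ordered

    corrector = GradientBoostingRegressor(random_state=42)
    corrector.fit(X, y)

    # For a new order flagged as incorrect, suggest the fix automatically.
    flagged = X.iloc[[0]]
    print("suggested quantity:", corrector.predict(flagged)[0])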

Designing an analysis model for constantly changing data

But what do you do when the data you are using to train the algorithm are unstable and constantly varying? How can you train the algorithm so that it works with certain data some days and with other data on other days, without losing its usefulness and accuracy? That was the challenge facing Marcos Ríos, director of Analytics at Datacentric, when a client, a large telecommunications company, proposed a project to locate the areas where it was most likely to gain customers.

"A company like that has customers who are constantly joining and leaving, so a traditional model that reflects a static photograph is no use today," he says during our meeting. They also faced the complication of combining fixed, private company data (its customer portfolio, churn rate, complaints...), fixed public data such as the census, the land registry, family budgets and average ages, and variables that "until now had not been taken into account, such as the location of their physical stores, those of the competition, and the areas where they or their competitors had focused".

In short, a huge amount of data that was not homogeneous and that fluctuated almost constantly. To solve the puzzle, "we set it up internally as a contest," he says. They pitted two of the firm's data scientists head-to-head: one analysed the data with a traditional regression model, the other with a neural network model. "The result was a tie: each explained the same thing with the same accuracy," admits Ríos.
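In code, such a contest can be as simple as fitting both model families to the same data and comparing cross-validated scores. This sketch uses scikit-learn with invented area-level features, not Datacentric's actual data or models.

    # Hypothetical head-to-head: traditional regression vs. a small neural net.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPClassifier

    areas = pd.read_csv("area_features.csv")
    X = areas[["census_density", "avg_income", "own_stores", "competitor_stores"]]
    y = areas["gained_customers"]  # 1 if the area yielded new customers

    models = {
        "regression": LogisticRegression(max_iter=1000),
        "neural network": MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
    }
    for name, model in models.items():
        scores = cross_val_score(model, X, y, cv=5)
        print(f"{name}: mean accuracy {scores.mean():.3f}")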

However, it was not just a case of submitting a report to the client explaining what was happening, but of conveying the information visually, as a table, so that the appropriate decisions could be made at every moment. "We chose the neural model because, to process all that changing information, we needed a model that could learn," he argues.

Ríos explains that what they were looking for was "to avoid having to repeat the analysis with new data every six months. If there is a new source of information it is better that the model analyses it and adjusts in order to continue to function, not to throw away what we already had and do a new one".
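One common way to get the behaviour Ríos describes is incremental learning: the model is updated with each new batch of data rather than rebuilt from scratch. The sketch below assumes a scikit-learn-style workflow, and the batch generator is a stand-in for the real, fluctuating data sources.

    # Hypothetical incremental learning: update the model as new data arrives
    # instead of retraining it every six months.
    import numpy as np
    from sklearn.linear_model import SGDClassifier

    model = SGDClassifier(loss="log_loss")
    classes = np.array([0, 1])  # partial_fit needs all classes declared up front

    def monthly_batches():
        # Stand-in for fresh, fluctuating data (new customers, churn,
        # updated store locations) arriving month by month.
        rng = np.random.default_rng(0)
        for _ in range(12):
            X = rng.normal(size=(500, 4))
            y = (X[:, 0] + X[:, 1] > 0).astype(int)
            yield X, y

    for X_batch, y_batch in monthly_batches():
        model.partial_fit(X_batch, y_batch, classes=classes)  # adjust, don't rebuild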

In the end, they provided the client with a tool showing, on a simple map, in which areas they should run campaigns and for which customers. Something that could previously only have been gauged over the long term with costly market studies.

Both success stories show that it is no longer about being a pioneer but about delivering intelligent solutions to the business problems that drain efficiency from organisations and prevent them from growing. Marcos Ríos puts it like this: "Either you start to modify your internal habits to be able to give that external service, or you get left behind. This is not done for one client; it is because internal corporate culture is evolving."

By Belén Belmonte