Eric Schmidt (Google) famously said in 2010: “There were 5 exabytes of information created between the dawn of civilisation through 2003, but that much information is now created in 2 days”. More recently, it has been reported that 90% of the data in the world today was created in the last two years.
We probably don’t need to understand precisely what an ‘exabyte’ is to sense that it must be pretty big, and that this isn’t just about size; there’s a component of velocity involved. If as much information was being created every 2 days in 2010 as in all of recorded history up to 2003, we can be fairly sure that interval is even shorter now.
Most organisations still don’t know what data or information they actually have, or what they are creating and storing on a daily basis. One survey revealed that 52% of all information is ‘dark’, meaning its value is unknown, and a further 33% is redundant, obsolete or trivial (ROT). According to Veritas, left unchecked, hoarded data on this scale could mean that companies in Europe face $891bn of avoidable storage costs by 2020.
On a positive note, more and more organisations are beginning to realise that these massive archives might actually hold useful information that could deliver new business opportunities. The problem is that, without the right technology, it takes time to access, analyse, interpret and act on these vast data repositories. In the meantime, the world, and more agile competitors, will have moved on.
Paradoxically, there remains a fundamental scepticism about the practical use of data to drive the business. The explosion of data, new analytics techniques and machine learning have combined to create a degree of uncertainty about data-driven decision making. Organisations are beginning to reflect on whether they are working with the right data, whether they can rely on algorithms to make decisions, and whether they are thinking the right way about using data to compete.
But the trend is clear. More and more organisations are changing, or planning to change, how they make big decisions because of big data, new analytics and the use of AI, specifically machine learning. Machine learning can scale across a broad spectrum of structured, semi-structured and unstructured data, sourced from, for example, contract management, customer service, finance, legal, sales, pricing and production. Machine learning algorithms are iterative in nature, constantly learning and seeking to optimise outcomes. Each time a prediction misses, the algorithm corrects for the error and begins another pass over the data. These calculations happen in milliseconds, which makes machine learning exceptionally efficient at optimising decisions and predicting outcomes.
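To make the “iterate, measure the error, correct, repeat” idea concrete, here is a minimal, illustrative sketch in Python: a one-variable linear model fitted by gradient descent. This is a toy example of the general technique, not any specific product or algorithm mentioned above; the function name and data are invented for illustration.

```python
# Toy illustration of iterative error correction: fit y = w*x + b
# by gradient descent. Each pass measures the prediction error on
# every data point, nudges the parameters to reduce it, and repeats.

def fit_line(xs, ys, lr=0.01, iterations=1000):
    """Fit a straight line by repeatedly correcting prediction errors."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(iterations):
        # Error of the current model on each data point
        errors = [(w * x + b) - y for x, y in zip(xs, ys)]
        # Gradients of the mean squared error with respect to w and b
        grad_w = 2 * sum(e * x for e, x in zip(errors, xs)) / n
        grad_b = 2 * sum(errors) / n
        # Correct the parameters and begin the next iteration
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Example: data generated from y = 2x + 1; the loop recovers w ≈ 2, b ≈ 1
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]
w, b = fit_line(xs, ys)
```

Each loop iteration is exactly the cycle described above: a miscalculation is measured, the model is corrected, and the analysis runs again, thousands of times in a fraction of a second.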
Executives who once relied firmly on their intuition and experience are now face-to-face with machines that can learn from massive amounts of data. It is changing people’s relationships with technology and opening the door to truly data-driven decision-making.
It is reasonable to assume that management intuition and experience will remain critical for interpreting the results, even though there is ample evidence that decisions made by humans can be inherently biased. For example, confirmation bias might lead executives to cherry-pick data that supports their viewpoint, or to discard data that contradicts their gut feeling. Intuition alone cannot see what lies within the data.
The challenge going forward is for C-suite executives and managers to integrate these two factors, to find a new mix of mind and machine. One thing is certain: we are going to have to start trusting the machines and the algorithms if we ever want to extract real value from big data and make decisions at the increasing velocity that competitive business requires.
I’ll be covering these points in more detail at the Ark Group Digital Workplace event on the 1st December 2016.