
Doing Big Data – Intelligently

14/12/2016

Last week we blogged about a term used by the Nobel Prize-winning author Daniel Kahneman in his book Thinking, Fast and Slow: WYSIATI (What You See Is All There Is). It describes our tendency to see a complex world simplistically. We would argue the opposite: you have to understand the complexity, at some level at least, before you simplify it. In other words, aim for "simplicity on the far side of complexity" rather than WYSIATI's "simplistic on the near side of complexity".

Many promises of Big Data fall into the territory of simplistic on the near side of complexity. We hear that Big Data is going to revolutionise our world, from retail and financial services through asset management to national security. This may be the case with unstructured data (i.e. free text), and possibly with structured data in some circumstances. It is obvious how a retailer might want to identify all shoppers who purchased a certain washing powder for sensitive skin and run a promotion offering them sensitive-skin cream and/or clothing made from materials suited to sensitive skin. Of course, retailers have been doing this for years; they just haven't been calling it Big Data.
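As a concrete, and entirely hypothetical, illustration of that kind of retail analysis, the sketch below filters a table of loyalty-card transactions for shoppers who bought the sensitive-skin washing powder but not yet the matching cream. The column and product names are assumptions for illustration, not any retailer's real schema.

# Hypothetical promotion targeting on loyalty-card transaction data.
# Assumed columns: 'customer_id', 'product'.
import pandas as pd

def sensitive_skin_targets(transactions: pd.DataFrame) -> pd.Series:
    """Return customers who bought the sensitive-skin washing powder
    but have not yet bought the matching skin cream."""
    powder_buyers = set(
        transactions.loc[
            transactions["product"] == "washing powder (sensitive skin)",
            "customer_id",
        ]
    )
    cream_buyers = set(
        transactions.loc[
            transactions["product"] == "skin cream (sensitive)",
            "customer_id",
        ]
    )
    # Target the powder buyers who are missing the companion product.
    return pd.Series(sorted(powder_buyers - cream_buyers), name="promotion_targets")

Nothing here needs a data lake; a straightforward query over structured transaction records is enough, which is the point of the paragraph above.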

However, there is a Big Question around Big Data revolutionising the Internet of Things or the asset management industry. With sensor data streaming from a myriad of equipment and machinery, typically every 100 ms, we are being persuaded that every data point must be captured as if it has meaning, giving rise to vast so-called "data lakes" of petabytes or even zettabytes in size, with little hope of analysing them. That may revolutionise sales of Big Data solutions, but does it add value to understanding and analysis?

In our travels over the past four years or so we came across a world-leading rubber moulding and sealing company that exemplified this simplistic, near-side-of-complexity approach: "just put all the data into one place and run an algorithm to tell us what levers to pull to get a perfect product". There was little interest in understanding, just "give me an answer". They were generating large amounts of scrap, and they still are.

More recently, we have been working with a large transport organisation that operates many lifts in its buildings. We recognised that not every data point held value and that each reading was contaminated with electrical and mechanical noise that needed to be removed. In fact, through some intelligent analysis, we reduced the volume of rows by a factor of 400:1. Further analysis let us load the data into a proprietary time-series analysis platform, where we could visualise normal behaviour hour by hour over several weeks, spot anomalies, and see trends and patterns, none of which had been visible before.
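To make that reduction concrete, here is a minimal sketch of the kind of pipeline described above: suppress short electrical and mechanical spikes, collapse 100 ms readings into 40-second summaries (roughly 400:1), and flag windows that deviate from typical hour-of-day behaviour. It is not the proprietary platform we used; the column names and thresholds are illustrative assumptions.

# Illustrative sensor-data reduction and anomaly flagging.
# Assumed columns: 'timestamp' (datetime), 'motor_current' (float).
import pandas as pd

def reduce_and_flag(raw: pd.DataFrame) -> pd.DataFrame:
    """Collapse 100 ms readings into 40 s summaries (~400:1) and flag
    windows that deviate from typical hour-of-day behaviour."""
    df = raw.set_index("timestamp").sort_index()

    # Suppress short electrical/mechanical spikes with a rolling median
    # before any aggregation.
    df["motor_current"] = df["motor_current"].rolling("1s").median()

    # 400 samples at 100 ms = one 40 s window: keep a compact summary
    # instead of every raw point.
    summary = df["motor_current"].resample("40s").agg(["mean", "max", "std"])

    # Baseline "normal" behaviour per hour of day, learned over the whole period.
    by_hour = summary["mean"].groupby(summary.index.hour)
    baseline_mean = by_hour.transform("mean")
    baseline_std = by_hour.transform("std")

    # Flag windows far outside the usual behaviour for that hour.
    summary["anomaly"] = (summary["mean"] - baseline_mean).abs() > 3 * baseline_std
    return summary

The design choice matters more than the code: deciding up front which summaries carry meaning is what shrinks the data and makes the anomalies visible, rather than hoping an algorithm will find them in a lake of raw points.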

The Big Question – how would Big Data have provided such insight?

David Anker

Written By: Cranfield University
