In his book The Road Ahead, Bill Gates discusses the evolution of personal computing: “We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten. Don't let yourself be lulled into inaction.” It is with this perspective that we must consider how to prepare for the IoT and how to derive value from the coming onslaught of data.
A lot of hype surrounds IoT: It will catalyze business process efficiency improvements and create new personalized technology experiences and categories. The predicted 50 billion connections will change the world as we know it. Some of these statements will prove true, but in the years to come executives must ensure two things:
They must not overestimate the near future and invest too far ahead of IoT technology’s capabilities.
They cannot be lulled into inaction by the idea that IoT technology innovations will continue at the same pace for the next decade.
I’ve spent years working with companies to help them elicit value from their data. (Currently, I run a company that develops NoSQL databases specifically for this purpose.) In this article, I draw on that experience to discuss a few ideas tech executives should consider when shaping their IoT strategies.
Don’t Overestimate How Well Data Lakes Will Handle IoT Data
In the next two years, I worry that many organizations will overestimate the ability of data lake-centric approaches to take IoT deployments from experiment to production. As data mass grows, data lakes will slow organizations that attempt to use them as the foundation of IoT infrastructures.
For example, consider an energy company that operates multiple wind farms, each with dozens of turbines, in different locations. Every turbine contains sensors that generate data, and each new turbine adds to the mass of data at a given location: wind velocity, energy generated, temperature and so on. Some of this data will be sent back and consumed centrally, but other data, such as weather readings, will need to be incorporated into business processes locally, for instance to adjust turbines to account for wind direction.
Performance optimization data for regional temperatures and maintenance indications based on noise levels are only relevant locally. Sending this data to a central application may slow response times and become expensive: The more data that amasses, the longer it takes to send information back and forth.
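The split described above can be sketched as a simple routing rule at the edge. The field names, thresholds and queue structures below are hypothetical illustrations, not a reference design:

```python
# A minimal sketch of edge routing: time-sensitive readings are acted on
# locally, while aggregate metrics are queued for the central store.
# Field names here are invented for the example.

LOCAL_FIELDS = {"wind_direction", "noise_level", "temperature"}  # act at the edge
CENTRAL_FIELDS = {"energy_generated", "wind_velocity"}           # aggregate centrally

def route_reading(reading, local_queue, central_queue):
    """Split one sensor reading into edge actions and central uploads."""
    for field, value in reading.items():
        if field in LOCAL_FIELDS:
            local_queue.append((field, value))    # e.g. adjust turbine yaw now
        elif field in CENTRAL_FIELDS:
            central_queue.append((field, value))  # batched to the central store later
    return local_queue, central_queue

local, central = [], []
route_reading({"wind_direction": 270, "energy_generated": 1.8}, local, central)
```

The point of the sketch is that the routing decision happens where the data is produced, so local actions never wait on a round trip to a central repository.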
IoT data’s dynamic nature means you must act immediately on some data in order to find value in it. Data lakes aren’t built to enable immediate action, including real-time analysis, as they require data to travel to a centralized repository first. Organizations that overestimate the ability of data lake-centric architectures to deliver that immediacy will hinder the progress of their IoT deployments.
The alternative to data lakes is pushing data analysis to the edge of networks (individual turbines). This requires multi-datacenter replication, where information is copied from one datacenter location to another to ensure availability should disaster or failure strike.
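As a toy illustration, and not a production replication protocol, multi-datacenter replication amounts to shipping a log of local writes to peer sites, which replay it to converge on the same state. The class and key names here are invented for the sketch:

```python
# A toy model of multi-datacenter replication: each write at the edge is
# recorded in an ordered log that peer datacenters replay to stay in sync,
# keeping data available if one site fails.

class ReplicatedStore:
    def __init__(self):
        self.data = {}
        self.log = []  # ordered writes, shipped to peer sites

    def put(self, key, value):
        self.data[key] = value
        self.log.append((key, value))

    def replay(self, peer_log):
        """Apply a peer's write log so this replica converges."""
        for key, value in peer_log:
            self.data[key] = value

edge = ReplicatedStore()
edge.put("turbine-07/temperature", 21.4)

central = ReplicatedStore()
central.replay(edge.log)  # central site now holds the edge write
```

Real systems layer conflict resolution, ordering guarantees and failure handling on top of this idea, but the core design choice is the same: the edge keeps serving reads and writes locally while copies propagate between sites.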
Don’t Underestimate How Quickly You’ll Have to Scale
The more IoT devices and sensors we connect, the more difficult it will be to reliably incorporate real-time data into business processes at the edge. If we don’t architect to leverage data at the edge now, data growth will make it difficult to make the shift later. Organizations should not underestimate how quickly and widely they will have to scale. Slow response times, application downtime and data loss are warning signs that your central data store is not ready.
Klaus Schwab, founder and executive chairman of the World Economic Forum, details the advent of the Fourth Industrial Revolution as “a fusion of technologies,” including the Internet of Things and artificial intelligence, working to create cyber-physical systems. Many global CEOs believe: “[The] acceleration of innovation and the velocity of disruption are hard to comprehend or anticipate and that these drivers constitute a source of constant surprise, even for the best connected and most well informed.”
Many organizations will be surprised by how quickly data amasses. Even small deployments with tens of sensors or devices will soon begin to generate mountains of data. To scale, organizations must ask themselves a number of hard questions about their data architecture.
Over the next 10 years, organizations must not underestimate how quickly deployments and the associated data masses will grow. Now, when IoT is in an experimental phase and scale isn’t needed, is the time to begin planning for large-scale projects. A great starting point is to build internal expertise on distributed systems and open source software.
Don’t Drown In That Data Lake
Ten years ago, cloud computing came with loss-of-control and security concerns. We’ve since dealt with many of these issues, and now cloud computing is ubiquitous. The same will happen with IoT, but we cannot let our long-term estimations lull us into inactivity, or the opportunity will pass us by as competitors reap the benefits.
Current IoT examples include smart utility grids, where meters transmit information to local collection stations for analysis, and self-driving cars that send information to local, regional and national levels for analysis. Those examples are only possible with the insight of people who understand the specific technology used, as well as the implications of moving data between multiple locations. We might be five to 10 years away from IoT reaching critical mass, but that doesn’t mean organizations can’t start ensuring they have the expertise to capitalize on the opportunity that is sure to arrive.