Four Artificial Intelligence Challenges Facing the Industrial IoT

August 21, 2017

As a CTO who works closely with software architects and heads of business units to validate and design IoT solutions, I see an obvious disconnect between our vision of AI and what is actually happening in the industry right now. While there are interesting experiments and fascinating use cases, such as artificially intelligent buildings, intelligent machines and robots are not yet roaming around us. Before AI can learn to optimize factory operations automatically or replace parts before a machine fails, we have four main technical challenges to address.

Connecting Devices

Today, we have the challenge of connecting the machines and devices around us to the internet. Simple devices like lights and temperature sensors communicate over Bluetooth or ZigBee, neither of which connects directly to the internet. More complex industrial equipment typically communicates over OPC or proprietary socket protocols, but for all their complexity, these machines still can't reach the internet on their own.

Companies building IoT devices are solving this challenge with gateways, also known as edge-based processing, that connect local equipment to cloud-based IoT platforms. This enables the machines to get data to the internet. However, connecting devices isn't as easy as updating software; it's an investment in retrofitting old machines, replacing existing equipment, and enabling a workforce to leverage that equipment. We see this just starting to happen today in many buildings, as operators place gateways next to existing Honeywell and Johnson Controls HVAC systems and their M2M solutions, streaming data to the internet over MQTT. It's an evolution over many years, not a simple upgrade.
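
To make that gateway pattern concrete, here is a minimal sketch in Python using the Eclipse Paho MQTT client (1.x-style constructor). The broker host, topic layout, and the local-read helper are illustrative assumptions, not a prescribed design; a real gateway would poll whatever fieldbus the equipment exposes (OPC, Modbus, serial) and forward readings upstream.

```python
# Minimal edge-gateway sketch: poll a local machine interface, publish to a cloud broker.
# Broker host, topic layout, and read_local_sensor() are illustrative assumptions.
import json
import time

import paho.mqtt.client as mqtt


def read_local_sensor() -> dict:
    """Stand-in for reading the machine's local interface (OPC, Modbus, serial)."""
    return {"temperature_c": 41.7, "voltage_v": 229.8, "ts": time.time()}


client = mqtt.Client(client_id="hvac-gateway-01")  # paho-mqtt 1.x-style constructor
client.connect("broker.example.com", 1883)         # hypothetical cloud broker
client.loop_start()

while True:
    reading = read_local_sensor()
    # Topic layout (site/device/telemetry) is one common convention, not a standard.
    client.publish("acme/hvac-07/telemetry", json.dumps(reading), qos=1)
    time.sleep(30)  # stream one reading every 30 seconds
```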


Understanding Data

The challenge of connecting things to the internet has hidden the fact that many companies don't yet understand the information their devices are sending. A machine endlessly streaming readings for voltage, temperature, battery and friction isn't valuable if the data can't be interpreted; and to be interpreted, the data needs structure so it can be mapped into a well-defined domain model.
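
As a sketch of what that structure might look like, the snippet below maps a raw, loosely typed telemetry payload into a typed domain object. The field names and units are assumptions for illustration; the point is that downstream analysis works against the model, not the raw stream.

```python
# Sketch: map a raw telemetry payload into a typed domain model.
# Field names and units are illustrative assumptions, not a standard schema.
from dataclasses import dataclass


@dataclass
class MotorReading:
    device_id: str
    voltage_v: float
    temperature_c: float
    battery_pct: float
    vibration_mm_s: float  # vibration velocity, a common friction/wear proxy
    ts: float              # unix timestamp


def parse_reading(raw: dict) -> MotorReading:
    """Validate and normalize one raw message from the device stream."""
    return MotorReading(
        device_id=str(raw["id"]),
        voltage_v=float(raw["v"]),
        temperature_c=float(raw["temp"]),
        battery_pct=float(raw["batt"]),
        vibration_mm_s=float(raw["vib"]),
        ts=float(raw["ts"]),
    )
```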

Additionally, the data is useless without the ability to identify outcomes. We need data on how machines actually fail before we can map what is going on. After connecting devices and sending information to a data lake, we have to wait and observe what happens. This is where subject matter experts help; they might tell us, for example, that a machine exhibiting heavy vibration will soon fail.
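
One hedged illustration of outcome labeling: given a table of timestamped telemetry and a log of observed failures, mark every reading that falls within some window before a failure as a positive example. The column names and the 48-hour lead window are assumptions; in practice the window comes from the subject matter experts mentioned above.

```python
# Sketch: label telemetry readings that precede an observed failure.
# Column names and the 48-hour lead window are illustrative assumptions.
import pandas as pd

telemetry = pd.read_parquet("telemetry.parquet")  # hypothetical data-lake extract
failures = pd.read_parquet("failures.parquet")    # columns: device_id, failed_at

WINDOW = pd.Timedelta(hours=48)

def label_device(group: pd.DataFrame) -> pd.DataFrame:
    device_failures = failures.loc[failures.device_id == group.name, "failed_at"]
    # A reading is "pre-failure" if any failure occurs within WINDOW after it.
    group["pre_failure"] = group["ts"].apply(
        lambda t: any(t <= f <= t + WINDOW for f in device_failures)
    )
    return group

labeled = telemetry.groupby("device_id", group_keys=False).apply(label_device)
```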

Something we've tried is adding predictive maintenance to connected doors. In doing so, we found that among data points like motor temperature, RPMs, use per day, amperage load, humidity, noise decibels, opening speed, geolocation, air particulates, time of day and ambient temperature, humidity and temperature may be the strongest predictors of future issues. This highlights that understanding our mountains of data requires leveraging our internal expertise when training AI.
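
One way to surface which of those data points matter, sketched below with scikit-learn, is to fit a tree ensemble on the labeled readings and inspect its feature importances. The feature list mirrors the door example above (restricted to numeric signals); treating the top-ranked features as predictors of future issues is the hypothesis being tested, not a given.

```python
# Sketch: rank candidate predictors by random-forest feature importance.
# Assumes a labeled DataFrame like `labeled` from the previous sketch.
from sklearn.ensemble import RandomForestClassifier

FEATURES = [
    "motor_temp", "rpm", "uses_per_day", "amp_load", "humidity",
    "noise_db", "opening_speed", "air_particulates", "ambient_temp",
]

X = labeled[FEATURES]
y = labeled["pre_failure"]

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X, y)

for name, score in sorted(zip(FEATURES, model.feature_importances_),
                          key=lambda p: p[1], reverse=True):
    print(f"{name:>18s}: {score:.3f}")
```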


Training AI

Once we’ve collected enough data to start implementing machine learning, we enter a training stage. This is unique for each machine, accounting for factors like the specific model, types of data and possible outcomes. With this information, we can then use statistical strategies to find correlations between inputs and outcomes. No single model is ideal for all use cases; data scientists simply have to try various models and see what works.
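
A minimal sketch of that trial-and-error, assuming scikit-learn and the labeled features from earlier: fit several candidate models and compare cross-validated scores rather than committing to one family up front.

```python
# Sketch: compare candidate models with cross-validation; none is "the" answer.
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

candidates = {
    "logistic": LogisticRegression(max_iter=1000),
    "forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "boosting": GradientBoostingClassifier(random_state=0),
}

for name, clf in candidates.items():
    scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
    print(f"{name:>9s}: AUC {scores.mean():.3f} +/- {scores.std():.3f}")
```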

The outcome of training is knowing which algorithms work best for predicting your machine's behavior. Unfortunately, those algorithms will always be biased toward the initial set of seed data. So if we want machines to really learn, we need to do more. To achieve true machine learning, we should account for more cases of machines running in real environments, introduce new sensors for new inputs, and account for new types of outcomes. We need to rerun this model analysis over and over again, ad infinitum, with each new case helping the AI models adjust and improve.
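
That continuous loop can be as simple as the scheduled job sketched below: retrain on everything collected so far, score against a held-out recent slice, and promote the new model only if it beats the deployed one. The function names and the promotion rule are assumptions, not a prescribed pipeline.

```python
# Sketch of a recurring retraining job; helper names are hypothetical.
import joblib
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

def retrain(X_all, y_all, X_recent, y_recent, model_path="model.joblib"):
    """Retrain on accumulated data; keep the new model only if it scores better."""
    new_model = RandomForestClassifier(n_estimators=200, random_state=0)
    new_model.fit(X_all, y_all)
    new_auc = roc_auc_score(y_recent, new_model.predict_proba(X_recent)[:, 1])

    try:
        old_model = joblib.load(model_path)
        old_auc = roc_auc_score(y_recent, old_model.predict_proba(X_recent)[:, 1])
    except FileNotFoundError:
        old_auc = 0.0  # no deployed model yet

    if new_auc > old_auc:
        joblib.dump(new_model, model_path)  # promote the retrained model
    return max(new_auc, old_auc)
```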

As an example, we designed a system that works to understand MRI scans and their resulting diagnoses. Given input factors and outcomes, it maintains probability tables and mathematical algorithms that help clinicians and radiologists diagnose and treat patients more rapidly. It then continues to learn from each new case, refining its probabilities as new factors like facility and doctor are introduced.
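
Case-by-case refinement like that maps naturally onto online learning. The sketch below uses scikit-learn's partial_fit, which several of its estimators support, to fold each newly confirmed case into the model without retraining from scratch; the feature encoding and class list are illustrative assumptions, and the output is decision support, not a diagnosis.

```python
# Sketch: fold each new confirmed case into an online model via partial_fit.
# Feature vectors and the outcome classes here are illustrative assumptions.
import numpy as np
from sklearn.naive_bayes import GaussianNB

CLASSES = np.array([0, 1])  # e.g., 0 = benign finding, 1 = needs follow-up

model = GaussianNB()

def learn_from_case(features: np.ndarray, outcome: int) -> None:
    """Update class probabilities with one confirmed case."""
    model.partial_fit(features.reshape(1, -1), [outcome], classes=CLASSES)

def suggest(features: np.ndarray) -> np.ndarray:
    """Return per-class probabilities to support, not replace, the clinician."""
    return model.predict_proba(features.reshape(1, -1))[0]
```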


Making It Actionable

It's amazing to think that even after we've connected and understood our data, and trained our AI algorithms in real time, there is still something left to do. At this point, we still haven't empowered our newfound intelligence to do anything. Just as our brain needs our body to act on the stimuli it receives, a machine needs to act on the data it processes. This means machines must make data actionable by issuing commands in real time to change states, alter their performance and control components.
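
Closing that loop is often the mirror image of the gateway sketch earlier: the device subscribes to a command topic and actuates hardware when a message arrives. The topic names and the set_door_speed helper are assumptions; the security concerns below are why a real deployment would add TLS and per-device credentials.

```python
# Sketch: an actuator loop, the mirror of the telemetry gateway above.
# Topic layout and set_door_speed() are illustrative assumptions.
import json

import paho.mqtt.client as mqtt


def set_door_speed(percent: float) -> None:
    """Stand-in for driving the actual motor controller."""
    print(f"setting door speed to {percent}%")


def on_message(client, userdata, msg):
    command = json.loads(msg.payload)
    if command.get("action") == "set_speed":
        set_door_speed(float(command["value"]))


client = mqtt.Client(client_id="door-17")  # paho-mqtt 1.x-style constructor
client.on_message = on_message
# A production device would call client.tls_set() and use per-device credentials.
client.connect("broker.example.com", 1883)
client.subscribe("acme/door-17/commands", qos=1)
client.loop_forever()  # block and act on commands in real time
```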

Making IoT devices actionable represents a future state many aren’t quite ready for. Security becomes increasingly important as machines run our factories, power facilities and transportation systems. Robots assembling cars must now be controlled remotely to change their behavior in real time. Lights, heaters, windows, elevators and doors must now all become engaged and actionable from the wider internet.

One area where we already see IoT data being made actionable is railway Positive Train Control. This initiative, already implemented by many major railroads, automatically prevents collisions and derailments without human intervention.
