Computer vision is the linchpin of the current AI/machine learning (ML) renaissance. In 2013, deep-learning-based vision solutions crossed a historic milestone, achieving human-level accuracy.
Since then, the pace of algorithmic sophistication, training-data proliferation and performance gains has only accelerated.
The majority of AI/ML-powered solutions and use cases across the supply chain follow a sense→think→act paradigm. Consequently, machine vision is critical to supply chain applications and use cases where visual information is a key input to sensing.
Machine vision use case segmentation
It is useful to think about vision-enabled solutions along two dimensions:
First, learning algorithms range from narrow supervised learning (learning from labeled data) to more flexible paradigms, specifically reinforcement learning and unsupervised learning. Narrow solutions can only solve problems similar to their original training context. Flexible solutions can handle situations they have not necessarily seen during initial training.
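To make the "narrow" end of this spectrum concrete, the sketch below shows a toy supervised classifier (a nearest-centroid rule in plain Python, with made-up feature data); like any narrowly trained model, it can only ever predict labels it saw during training.

```python
# Minimal sketch of narrow supervised learning: a nearest-centroid
# classifier fit on labeled feature vectors. It can only predict the
# labels present in its training data -- the "narrow" limitation above.

def train(examples):
    """examples: list of (feature_vector, label) pairs -> per-label centroids."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc] for label, acc in sums.items()}

def predict(centroids, features):
    """Return the label whose centroid is closest to the input features."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, features))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Hypothetical labeled data: simple image statistics for a damage check.
labeled = [([0.9, 0.1], "damaged"), ([0.8, 0.2], "damaged"),
           ([0.1, 0.9], "intact"), ([0.2, 0.8], "intact")]
model = train(labeled)
print(predict(model, [0.85, 0.15]))  # -> damaged
```

A reinforcement or unsupervised learner would instead adapt without these fixed labels, which is what gives the flexible paradigms their broader reach.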
Second, we must consider the type of data input required for an effective and robust solution. Does the solution or use case take discrete visual signals as input? Or is there a need to ingest and respond to a real-time, continuous stream of visual inputs?
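The two input modes can be sketched as follows; the function names and the trivial brightness rule are illustrative stand-ins for a real vision model, not any particular product's API.

```python
# Illustrative sketch: the same inference step can serve both input modes --
# a one-shot call for a discrete visual signal versus a loop that responds
# to every frame of a continuous stream.

def classify(frame):
    # Placeholder for any vision model; here, a trivial brightness rule.
    return "bright" if sum(frame) / len(frame) > 0.5 else "dark"

# Discrete input: a single captured image (e.g. a label photo at a scan point).
single_result = classify([0.7, 0.8, 0.9])

# Continuous input: frames arriving in real time from a camera feed
# (simulated here by a generator).
def frame_stream():
    yield [0.7, 0.8, 0.9]
    yield [0.1, 0.2, 0.3]

stream_results = [classify(frame) for frame in frame_stream()]
print(single_result, stream_results)  # bright ['bright', 'dark']
```

The discrete case tolerates batch or on-demand processing, while the streaming case imposes latency and throughput requirements on the whole solution.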
The Figure below depicts an illustrative sample of use cases based on underlying learning type and input data requirements.
Enterprises are currently piloting and scaling numerous use cases using supervised learning, and we expect continued adoption of narrow vision solutions in the near future.
Supply chain machine vision use cases that require more flexible learning algorithms, such as reinforcement and unsupervised learning, are still in the proof-of-concept stage. Companies should continue to monitor progress, as breakthroughs would significantly lower the large volume of training data currently required for solution development.