Computer vision is the linchpin of the current AI/machine learning (ML) renaissance. In 2015, deep neural networks crossed a historic milestone, achieving human-level accuracy on the ImageNet image classification benchmark,
and the pace of algorithmic sophistication, training-data proliferation and performance gains has only accelerated since.
The majority of AI/ML-powered solutions across the supply chain follow a sense→think→act paradigm. As such, computer vision is critical for any application where visual information is key to sensing.
It is useful to think about vision-enabled use cases along two dimensions:
The first is the learning paradigm, ranging from narrow (supervised) approaches, where the vision solution can only solve problems similar to those it was trained on, to more flexible paradigms (e.g. reinforcement and unsupervised learning), where the solution can handle situations it has not necessarily seen before, or that require complex sequencing of actions and outcomes.
The second is the nature of the visual input required to solve the underlying problem: does the solution take discrete images, or must it ingest and respond to a real-time, continuous stream of visual inputs? (See figure below.)
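The "narrow (supervised)" end of that spectrum can be illustrated with a deliberately simple sketch: a nearest-centroid classifier trained on tiny synthetic images. Everything here (the 4×4 patterns, class labels, and helper names) is an illustrative assumption, not a real computer-vision pipeline; the point is that a supervised model handles inputs resembling its training data but can only force unfamiliar inputs into the classes it already knows.

```python
# Toy illustration of supervised learning's narrowness.
# All data, patterns, and names below are synthetic assumptions.
import numpy as np

rng = np.random.default_rng(0)

def make_images(base, n=20, noise=0.1):
    """Generate n noisy variants of a base 4x4 pattern."""
    return base + noise * rng.standard_normal((n, 4, 4))

# Two synthetic classes: "bright top half" vs. "bright bottom half".
top = np.zeros((4, 4)); top[:2] = 1.0
bottom = np.zeros((4, 4)); bottom[2:] = 1.0

X = np.concatenate([make_images(top), make_images(bottom)])
y = np.array([0] * 20 + [1] * 20)

# "Training": store one centroid (mean image) per class.
centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def classify(img):
    """Assign img to the class with the nearest centroid."""
    dists = [np.linalg.norm(img - c) for c in centroids]
    return int(np.argmin(dists))

# Inputs similar to the training data are classified correctly...
print(classify(make_images(top, n=1)[0]))     # → 0

# ...but an unfamiliar pattern ("bright left half") is still forced
# into one of the two known classes, sensible or not.
left = np.zeros((4, 4)); left[:, :2] = 1.0
print(classify(left))
```

A real supervised vision system (e.g. a convolutional network) is far more capable, but shares the same limitation: its outputs are confined to the label space it was trained on, which is what the more flexible paradigms above aim to relax.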
In short, we expect continued adoption and scaling of supervised and reinforcement-learning-based vision solutions in the near future. Unsupervised solutions are still at the proof-of-concept stage, but companies should monitor their progress, as breakthroughs would significantly lower today's high training-data requirements for solution development.
This blog originally appeared in the SupplyChainDive Influencer series.