Augmented and mixed reality: what is it, and where is it going?

10/03/2020

XR is a term that has become more prominent in the last few years. It encapsulates virtual, augmented, and mixed reality. The definitions of these terms have blurred over the past decade, with companies using their own variants to describe their products. The new IDTechEx report, “Augmented, Mixed and Virtual Reality 2020-2030”, distils this range of terms and products, compares the technologies used in them, and forecasts the market over the next decade. This premium article discusses AR (augmented reality) and MR (mixed reality) in more detail.

The report discusses 83 different companies and 175 products in the VR (virtual reality), AR (augmented reality) and MR (mixed reality) markets. This promotional article specifically discusses the report’s findings on the augmented and mixed reality markets.

Augmented Reality (AR) and Mixed Reality (MR) are two technologies that have become more prominent in the past ten years. AR uses computer technology to superimpose digital objects and data on top of a real-world environment. MR is similar, but its digital objects interact spatially with real-world objects rather than being superimposed as “floating images” on top of them. AR and MR are also closely related to VR: there is cross-over in application and technology, as some VR headsets simulate the real space and then add extra artificial content for the user. For this article, however, AR and MR products are considered those which allow the user to directly see the real world around them in some way. The main target sectors of AR and MR appear to be industrial and enterprise markets; with high individual product costs, there appears to be less penetration into the consumer space.

AR and MR products are being used in a variety of settings. One way they are being used is to solve a problem called “the skills gap”. This describes the large portion of the skilled workforce expected to retire in the next ten years, taking their knowledge and skills with them. This knowledge needs to be passed on to new, unskilled employees, and some companies propose that AR/VR technology can fill this gap. This was one of the key areas discussed at events IDTechEx analysts attended in 2019 while researching for this report.

AR use in manufacturing and remote assistance has also grown in the past ten years, leading some AR companies to target enterprise rather than consumer markets. Although there are fewer direct needs or problems which AR can solve for consumers, smartphone AR provides an excellent starting point for technology-driven generations to create, develop, and use an XR-enabled smartphone for entertainment, marketing, and advertising. One example of smartphone AR mentioned in the report is IKEA Place, an application that lets a user place a piece of IKEA furniture in their room to compare against their current furniture. It gives users a window into how AR can supplement their environment in day-to-day activities such as purchasing and visualising products from an internet marketplace.

AR and MR companies have historically received higher funding per round than VR companies – e.g. Magic Leap, which has had $2.6Bn in funding since its launch in 2017 but only released a creator’s edition of its headset in 2019. AR and MR products also tend to be more expensive than VR products, as they are marketed to niche use cases. These trends are discussed in greater detail in the report, for example in a plot showing this tendency for AR/MR products to be more expensive than VR products.
The report compares augmented and mixed reality products and splits them into three categories: PC AR/MR (products that require a physical PC connection), standalone AR/MR (products that do not require a PC), and smartphone/mobile AR/MR (products that use a smartphone’s capabilities to deliver the immersive experience). Standalone AR/MR has seen more distinct product types in the past decade, and this influences the decisions made when forecasting the decade to come.

The report predicts an AR/MR market worth over $20Bn in 2030, reflecting the high level of interest in this technology.

In conclusion, VR, AR and MR, as with nearly any technology area, must build on what has come before. Heavy investment in this technology targets the future potential of XR headsets. “Augmented, Mixed and Virtual Reality 2020-2030” provides a complete overview of the companies, technologies and products in augmented, virtual and mixed reality, allowing the reader to gain a deeper understanding of this exciting technology.


The next-generation inspection!

17/09/2019

Combining machine vision and deep learning gives companies a powerful means of improving both operations and ROI. Understanding the differences between traditional machine vision and deep learning, and how these technologies complement each other rather than compete or replace one another, is therefore essential to maximizing investments. In this article, Bruno Forgue of Cognex helps to clarify things.

Machine Vision vs Deep Learning

Over the last decade, technology has changed and improved across many areas: device mobility… big data… artificial intelligence (AI)… internet-of-things… robotics… blockchain… 3D printing… machine vision… In all these domains, novel things have come out of R&D labs to improve our daily lives.

Engineers like to adopt and adapt technologies to their tough environments and constraints. Strategically planning for the adoption of some or all of these technologies will be crucial in the manufacturing industry.

Let’s focus here on AI, and specifically on deep learning-based image analysis, or example-based machine vision. Combined with traditional rule-based machine vision, it can help robotic assemblers identify the correct parts, detect whether a part is present, missing, or assembled improperly, and determine more quickly whether those are problems, all with high precision.

Figure 1 – The first differences between traditional machine vision and deep learning include:
1. The development process (tool-by-tool rule-based programming vs. example-based training);
2. The hardware investments (deep learning requires more processing and storage);
3. The factory automation use cases.

Let’s first see what deep learning is
Without getting too deep (may I say?) into the details, let’s talk about GPU hardware. GPUs (graphics processing units) gather thousands of relatively simple processing cores on a single chip. This massively parallel architecture is well suited to neural networks, and it makes it possible to deploy biology-inspired, multi-layered “deep” neural networks that mimic the human brain.

By using such hardware, deep learning can solve specific tasks without being explicitly programmed to do so. In other words, classical computer applications are programmed by humans to be task-specific, whereas deep learning takes data (images, speech, text, numbers…) and trains a neural network on it. Starting from the primary logic developed during initial training, deep neural networks continuously refine their performance as they receive new data.
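To make the idea of example-based training concrete, here is a minimal sketch of a single neuron trained by gradient descent in plain Python. The feature vectors, labels, and hyperparameters are invented for illustration and are not from any Cognex tool; a real deep network stacks many such units across many layers.

```python
import math

def train(examples, labels, epochs=200, lr=0.5):
    """Learn weights from labeled examples instead of hand-written rules."""
    w = [0.0] * len(examples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(examples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid activation
            err = p - y                      # gradient of the log-loss
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Classify a new sample with the learned weights."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if z > 0 else 0

# Toy features: (surface roughness, brightness variance); label 1 = defective
data = [(0.1, 0.2), (0.2, 0.1), (0.8, 0.9), (0.9, 0.7)]
labels = [0, 0, 1, 1]
w, b = train(data, labels)
print([predict(w, b, x) for x in data])
```

Nothing in the training loop encodes what a “defect” looks like; the decision boundary comes entirely from the labeled examples, which is the key contrast with rule-based programming.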

It is based on detecting differences: it constantly looks for alterations and irregularities in a set of data, and it is sensitive to unpredictable defects. Humans do this naturally well; computer systems based on rigid programming do not. (But unlike human inspectors on production lines, computers do not tire of constantly repeating the same iteration.)

In daily life, typical applications of deep learning are facial recognition (to unlock computers or identify people on photos)… recommendation engines (on streaming video/music services or when shopping at ecommerce sites)… spam filtering in emails… disease diagnostics… credit card fraud detection…

Deep learning technology produces very accurate outputs based on its training data. It is being used to predict patterns, detect variance and anomalies, and make critical business decisions. This same technology is now migrating into advanced manufacturing practices for quality inspection and other judgment-based use cases.

When implemented for the right types of factory applications, in conjunction with machine vision, deep learning can scale up profits in manufacturing (especially when compared with investments in other emerging technologies that might take years to pay off).

How does deep learning complement machine vision?
A machine vision system relies on a digital sensor placed inside an industrial camera with specific optics to acquire images. Those images are fed to a PC, where specialized software processes and analyzes them, measuring various characteristics for decision making. Machine vision systems perform reliably with consistent, well-manufactured parts, operating via step-by-step filtering and rule-based algorithms.
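The acquire, filter, measure, decide pipeline described above can be sketched in a few lines of Python. The tiny “image”, the binarization cutoff, and the pass/fail area spec are all invented toy values, not parameters of any real system.

```python
def threshold(image, cutoff=128):
    """Step-by-step filtering: binarize pixels against a fixed rule."""
    return [[1 if px > cutoff else 0 for px in row] for row in image]

def measure_area(binary):
    """Measure a characteristic: how many pixels belong to the part."""
    return sum(sum(row) for row in binary)

def inspect(image, min_area=3, max_area=10):
    """Rule-based decision: the part's area must fall inside the spec."""
    area = measure_area(threshold(image))
    return min_area <= area <= max_area

# A 4x4 grayscale "image" (0-255) with a bright part in the middle
frame = [
    [10,  10,  10, 10],
    [10, 200, 210, 10],
    [10, 220, 205, 10],
    [10,  10,  10, 10],
]
print(inspect(frame))  # True: the measured area is within spec
```

Every step is an explicit rule written by a human, which is exactly why such systems are fast and repeatable on consistent parts, and brittle when parts vary.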

On a production line, a rule-based machine vision system can inspect hundreds, or even thousands, of parts per minute with high accuracy, more cost-effectively than human inspection. Its decisions about that visual data follow a programmatic, rule-based approach to solving inspection problems.

On a factory floor, traditional rule-based machine vision is ideal for: guidance (position, orientation…), identification (barcodes, data-matrix codes, marks, characters…), gauging (comparison of distances with specified values…), inspection (flaws and other problems such as missing safety-seal, broken part…).

Rule-based machine vision is great with a known set of variables: Is a part present or absent? Exactly how far apart is this object from that one? Where does this robot need to pick up this part? These jobs are easy to deploy on the assembly line in a controlled environment. But what happens when things aren’t so clear cut?
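The known-variable checks listed above can be expressed as simple rules. In this sketch, part names, coordinates, and tolerances are hypothetical, and the (x, y) positions in millimetres are assumed to come from an upstream vision step.

```python
import math

def part_present(detections, part_id):
    """Is a part present or absent?"""
    return part_id in detections

def gap_within_spec(detections, a, b, nominal_mm, tol_mm):
    """Exactly how far apart is this object from that one?"""
    (x1, y1), (x2, y2) = detections[a], detections[b]
    gap = math.hypot(x2 - x1, y2 - y1)  # Euclidean distance in mm
    return abs(gap - nominal_mm) <= tol_mm

# Hypothetical detections from an upstream vision step
detections = {"screw": (0.0, 0.0), "bracket": (3.0, 4.0)}
print(part_present(detections, "screw"))                          # True
print(gap_within_spec(detections, "screw", "bracket", 5.0, 0.2))  # True
```

Each question has a crisp, programmable answer, which is precisely what makes these jobs easy to deploy in a controlled environment, and what breaks down when the scene is not so clear cut.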

This is where deep learning enters the game:

• Solve vision applications too difficult to program with rule-based algorithms.
• Handle confusing backgrounds and variations in part appearance.
• Maintain applications and re-train with new image data on the factory floor.
• Adapt to new examples without re-programming core networks.

A typical industrial example: looking for scratches on electronic device screens. Those defects all differ in size, scope, and location, and appear across screens with different backgrounds. Given such variation, deep learning can tell the difference between good and defective parts. And training the network on a new target (such as a different kind of screen) is as easy as taking a new set of reference pictures.
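The reference-picture workflow can be illustrated with a deliberately simplified stand-in for deep learning: a nearest-neighbour comparison against labeled reference images in plain Python. Real deep-learning tools learn far richer features than raw pixel differences; the images and labels below are toy values.

```python
def distance(img_a, img_b):
    """Sum of absolute pixel differences between two same-sized images."""
    return sum(abs(a - b) for ra, rb in zip(img_a, img_b)
                          for a, b in zip(ra, rb))

def classify(image, references):
    """Return the label whose reference pictures are closest to the image."""
    return min(references, key=lambda label: min(
        distance(image, ref) for ref in references[label]))

# Tiny 2x2 "screens": flat intensity = good, a bright streak = scratched
references = {
    "good":      [[[10, 10], [10, 10]]],
    "scratched": [[[10, 250], [10, 250]]],
}
sample = [[12, 240], [11, 245]]
print(classify(sample, references))  # "scratched"
```

Retraining for a new kind of screen amounts to appending new reference images to the dictionary, which mirrors the “take a new set of reference pictures” step described above.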

Figure 2 – Typical industrial example: looking for defects which are all different in size, scope, location, or across various surfaces with different backgrounds.

Inspecting visually similar parts with complex surface textures and variations in appearance is a serious challenge for traditional rule-based machine vision systems. “Functional” defects, which affect a part’s utility, are almost always rejected, while “cosmetic” anomalies may not be, depending upon the manufacturer’s needs and preferences. Worse still, these two kinds of defect are difficult for a traditional machine vision system to distinguish.

Due to multiple variables that can be hard to isolate (lighting, changes in color, curvature, or field of view), some defect detections are notoriously difficult to program and solve with a traditional machine vision system. Here again, deep learning brings the appropriate tools.

In short, traditional machine vision systems perform reliably with consistent, well-manufactured parts, but applications become challenging to program as exceptions and defect libraries grow. For the complex situations that need human-like vision with the speed and reliability of a computer, deep learning will prove to be a truly game-changing option.

Figure 3 – Compared to Traditional Machine Vision, Deep Learning is:
1. Designed for hard-to-solve applications;
2. Easier to configure;
3. Tolerant to variations.

Deep learning’s benefits for industrial manufacturing
Rule-based machine vision and deep learning-based image analysis complement each other rather than presenting an either/or choice when adopting next-generation factory automation tools. In some applications, like measurement, rule-based machine vision will remain the preferred, cost-effective choice. For complex inspections involving wide deviation and unpredictable defects—too numerous and complicated to program and maintain within a traditional machine vision system—deep learning-based tools offer an excellent alternative.

• Learn more about Cognex deep learning solutions
