Automate image-based inspection with artificial intelligence.

02/09/2020

High demands on products as well as high time and cost pressure are decisive competitive factors across all industries and sectors. Whether in the food or automotive industry – quality, safety and speed determine the success of a company today more than ever before. Zero-defect production is the goal. But how can it be guaranteed that only flawless products leave the production line? How can faulty quality decisions, which lead to high costs, be avoided? To test this reliably, quality assurance uses a wide variety of methods.

Visual inspection with the human eye is possible, but it is often error-prone and expensive: the eye tires, and working time is costly. A machine-based test, on the other hand, usually involves complex calibration, i.e. setting up and adjusting all parameters of both software and hardware so that every defect is detected. In addition, product or material changes require recalibration. Furthermore, with the classic, rule-based approach, a programmer or image-processing specialist must write rules specifically for the system that explain how to detect the defects. This is complex, and with a very high variance of defects it often becomes a barely solvable Herculean task. All this can cost a disproportionate amount of time and money.

To make quality inspection as efficient, simple, reliable and cost-effective as possible, sentin GmbH uses IDS industrial cameras and deep learning to develop solutions that enable fast and robust defect detection. In contrast to conventional image processing, a neural network learns to recognize the relevant features from the images themselves. This is exactly the approach of the intelligent sentin VISION system. It uses AI-based recognition software and can be trained on the basis of a few sample images. Together with a GigE Vision CMOS industrial camera from IDS and an evaluation unit, it can be easily embedded in existing processes.

Application
The system is capable of segmenting objects, patterns and even defects. Even surfaces that are difficult to inspect do not stop it. Classical applications can be found, for example, in the automotive industry (defect detection on metallic surfaces), in the ceramics industry (defect detection by making dents visible on reflective, mirror-like surfaces), and in the food industry (object and pattern recognition).

Depending on the application, the AI is trained to detect defects or anomalies. With the latter, the system learns to distinguish good parts from bad. If, for example, a surface structure is inspected – say a metal part in the automotive industry or a ceramic part – the AI detects defects as deviations from reference images. By using anomaly detection and pre-trained models, the system can detect defects based on just a few sample images of good parts.
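
As a minimal sketch of how such anomaly detection can work in principle – sentin's actual models, thresholds and file names are not public, so everything below is illustrative – a pre-trained network can embed images into feature vectors, and a test part is flagged when its features deviate too far from those of a few known-good references:

```python
# Illustrative anomaly detection with a pre-trained backbone (not sentin's
# actual method). Good parts define the reference set; a test part is
# flagged when its feature vector is far from every reference.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # drop the classifier, keep the features
backbone.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def embed(path: str) -> torch.Tensor:
    """Return a 512-dimensional feature vector for one image."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return backbone(img).squeeze(0)

# A handful of good-part images is enough to build the reference set.
good_refs = torch.stack([embed(p) for p in ("good_01.png", "good_02.png", "good_03.png")])

def anomaly_score(path: str) -> float:
    """Distance to the nearest good reference; high values suggest a defect."""
    features = embed(path).unsqueeze(0)
    return torch.cdist(features, good_refs).min().item()

# The threshold would be chosen on a validation set in practice.
print("defect" if anomaly_score("test_part.png") > 5.0 else "good")
```

Because the reference set contains only good parts, no library of defect examples has to be collected up front, which is what keeps the number of required sample images small.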

The hardware setup required for training and evaluation consists of an IDS industrial camera and appropriate lighting. The recognition models are trained using reference images. For example, a system and AI model were configured for the error-prone task of inspecting fabric webs in the textile industry – a difficult task, as defects can be very subjective and very small. The camera best suited to imaging textiles and web materials was chosen together with IDS on the basis of specific customer requirements: a GigE Vision CMOS camera (GV-5880CP), which provides high-resolution data, triggered with precise timing, for accurate image evaluation.

The system learns what constitutes a “good” fabric structure, and from just a few shots of the fabric it knows what a clean, flawless product looks like. For quality inspection, the image captured by the IDS Vision CP camera is forwarded via the GigE interface to an evaluation computer and processed with the recognition model. This computer can then reliably distinguish good parts from bad, highlight deviations, and emit an output signal when a defect is found. In this way, slippage and pseudo rejects can be reduced quickly and easily.

Slippage is the proportion of products that do not meet the standard but are overlooked and therefore not sorted out, often leading to complaints. Pseudo rejects, on the other hand, are those products that meet the quality standard but are nevertheless incorrectly sorted out.

Both the hardware and the software of the system are flexible: for multiple or wider webs, additional cameras can easily be integrated into the setup. If necessary, the software also allows re-training of the AI models. “Experience simply shows that a certain amount of re-training is always necessary due to small individual circumstances. With pre-trained models from our portfolio, you need fewer reference images for individualization and post-training,” explains Christian Els, CEO and co-founder of sentin. In this case, the images show the structured surface of a fabric and a small anomaly on it, which was filtered out in the image on the right:

Anomaly extracted from an image of a fabric (sentin GmbH)

Camera
Extremely accurate image acquisition and precise image evaluation are among the most important requirements for the camera. Perfectly suited: the GigE Vision CMOS camera GV-5880CP. The model has a 1/1.8″ rolling-shutter CMOS sensor, the Sony IMX178, which enables a very high resolution of 6.4 MP (3088 x 2076 px, aspect ratio 3:2). It delivers frame rates of up to 18 fps at full resolution and is therefore ideal for visualization tasks in quality control. The sensor from the Sony STARVIS series features BSI (back-side illumination) technology and is one of the most light-sensitive sensors, with a low dark current close to the sCMOS (scientific CMOS) range. It ensures impressive results even in very low light. Thanks to the 1/1.8″ sensor size, a wide range of C-mount lenses is available for the GV-5880CP. “In addition to resolution and frame rate, the interface and the price were decisive factors in choosing the camera. The direct exchange with the IDS development department helped us reduce the time needed for camera integration,” says Arkadius Gombos, Technical Manager at sentin. Integration into the sentin VISION system is done via GenTL and a Python interface.
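
As a rough sketch of what such GenTL-based acquisition can look like from Python, the snippet below uses the open-source harvesters GenTL consumer – an assumption for illustration, since sentin's actual integration code is not public, and the .cti producer path is a placeholder that depends on the IDS installation:

```python
# Minimal GenTL acquisition sketch using the open-source "harvesters" library.
# Producer path and device index are placeholders for an IDS setup.
from harvesters.core import Harvester

h = Harvester()
h.add_file('/opt/ids/lib/ids_gige_gentl.cti')  # hypothetical GenTL producer
h.update()                                     # enumerate attached cameras

ia = h.create_image_acquirer(list_index=0)     # first camera found
ia.start_acquisition()

with ia.fetch_buffer() as buffer:              # grab one frame
    component = buffer.payload.components[0]
    frame = component.data.reshape(component.height, component.width)
    # ... hand "frame" to the recognition model on the evaluation computer ...

ia.stop_acquisition()
ia.destroy()
h.reset()
```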

The GigE Vision camera GV-5880CP from IDS ensures precise image acquisition and accurate image evaluation when inspecting fabric webs. (sentin GmbH)

Conclusion
Automated, image-based quality control with Artificial Intelligence offers many advantages over human visual inspection and conventional machine vision applications. “In AI-based image interpretation, the aim is to create images on which humans can see the error, because then the AI model can do it too,” concludes Christian Els. The system learns to recognize the requirements of the product much as a human does, yet artificial intelligence beats the human brain at any time in consistency and reliability. Even though the brain is capable of remarkable peak performance, an AI can recognize far more complex defect patterns, and the human eye cannot match a camera for fatigue resistance and visual acuity. In combination with deep-learning recognition software, the image processing system therefore enables particularly fast and accurate inspection. Depending on the application, image acquisition and evaluation can take place in just a few milliseconds.

The system can also be applied to other areas such as surface inspection. Similar applications include testing matte metal or coated surfaces (automotive interiors), natural materials (stone, wood) or technical textiles and leather. Scratches, cracks and other defects on consumer goods can thus be detected and the affected products sorted out. Ruling out quality defects and producing only good parts is an indispensable part of quality assurance. IDS cameras, in combination with the deep-learning-supported software of sentin GmbH, significantly improve the detection of defects and objects in quality control. This allows the personnel and time expenditure for complaints, rework and pseudo rejects to be significantly reduced in a wide range of industries and areas.

• See information on other IDS Imaging products.

@sentin_ai @IDS_Imaging #mepaxIntPR #PAuto #Food


Understanding how to screen people in an efficient and accurate manner.

03/08/2020

As we learn more and more about the symptoms and risks associated with COVID-19, many companies have studied how their offerings can be used or adapted to assist in the battle against this invisible enemy.

A thermal imaging camera can be an effective screening device for detecting individuals with an elevated skin temperature. This type of monitoring can provide useful information when used as a screening tool in high-traffic areas to help identify people with an elevated temperature compared to the general population. That individual can then be further screened using other body temperature measuring tools.

Although thermal imaging cameras are primarily designed for industrial and night vision uses, public health organizations have used FLIR cameras around the world at airports, seaports, office buildings and other mass gathering areas to provide rapid, efficient screening in high-traffic areas. FLIR thermal cameras are particularly well suited to this because they can provide a temperature reading of a person’s face in a matter of seconds.

How thermal imaging works
A thermal imaging camera produces infrared images, or heat pictures, that display small temperature differences, allowing the camera to create and continually update a visual heat map of skin temperatures. FLIR thermal imaging cameras are sensitive devices, capable of resolving these small differences reliably.

Many of the FLIR thermal cameras that are appropriate for measuring skin temperatures also offer built-in functions like visual and sound alarms that can be set to go off when a certain temperature threshold is exceeded. The operator can then instantly decide whether the subject needs to be referred for further screening with additional temperature measurement tools.

Use in high-traffic areas, such as airports, as part of screening procedures.

As the thermal imaging camera produces images in near real time, the total evaluation takes mere moments, making thermal imaging technology very useful for rapidly screening large numbers of people.

Measuring the temperature of the human body
It’s true that a person’s general skin temperature is typically not equal to the person’s core temperature. That doesn’t detract from the use of thermal cameras to detect elevated skin temperatures, however. Thermal cameras are useful in this role because the goal is not to measure absolute skin temperature, but to differentiate people who have an elevated skin temperature compared to others while also considering the environmental conditions of the location.

Some FLIR camera models offer an elevated skin temperature screening mode that helps compare the person being screened against the temperatures of people screened previously. In screening mode, the operator can save ten thermal images of faces, which the camera automatically averages into a reference.
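
Conceptually, the averaging logic of such a screening mode can be sketched as follows – an illustration of the idea only, not FLIR firmware, with invented class and parameter names:

```python
# Conceptual sketch of screening-mode logic: the baseline is the average of
# the last N sampled face temperatures; an alarm condition is raised when a
# subject exceeds that baseline by more than an allowed deviation.
from collections import deque

class ScreeningMode:
    def __init__(self, n_samples: int = 10, allowed_deviation_c: float = 1.0):
        self.samples = deque(maxlen=n_samples)      # rolling reference set
        self.allowed_deviation_c = allowed_deviation_c

    def add_reference(self, face_temp_c: float) -> None:
        """Save a screened subject's temperature as a reference sample."""
        self.samples.append(face_temp_c)

    def is_elevated(self, face_temp_c: float) -> bool:
        """True if the subject is warm relative to the sampled population."""
        if not self.samples:
            return False                            # no baseline yet
        baseline = sum(self.samples) / len(self.samples)
        return face_temp_c > baseline + self.allowed_deviation_c

screen = ScreeningMode()
for temp in (36.4, 36.6, 36.5):
    screen.add_reference(temp)
print(screen.is_elevated(38.1))  # True: refer for secondary screening
```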

Hot spots in the corners of the eyes

Sound and color alarms
All areas on the subject’s face that are hotter than a predefined temperature value can be displayed as a designated color on the thermal image. This built-in alarm allows users to make an immediate decision regarding whether the subject may need further screening with additional screening tools. In addition, some FLIR cameras are equipped with an audible alarm that can be activated to sound if the detected temperature exceeds a predefined value.

A small investment to enable high-traffic screening
Airports all over the world are using FLIR cameras and have applied this methodology to screen people entering and leaving a country. It is a quick, non-contact method that is safe for both the camera operator and the people being screened.

@flir #Transport


Augmented and mixed reality: what is it, and where is it going?

10/03/2020

XR is a term that has become more prominent in the last few years; it encompasses virtual, augmented and mixed reality. The definitions of each have blurred over the past decade, with companies coining their own to describe their products. The new IDTechEx report, “Augmented, Mixed and Virtual Reality 2020-2030”, distils this range of terms and products, compares the technologies used in them, and produces a forecast for the market over the next decade. This premium article discusses AR (augmented reality) and MR (mixed reality) in more detail.

The report discusses 83 companies and 175 products in the VR (virtual reality), AR (augmented reality) and MR (mixed reality) markets. This promotional article specifically discusses the report's findings on the augmented and mixed reality markets.

Augmented Reality (AR) and Mixed Reality (MR) are two technologies which have become more prominent in the past ten years. AR is the use of computer technology to superimpose digital objects and data on top of a real-world environment. MR is similar to AR, but the digital objects interact spatially with the real-world objects rather than being superimposed as “floating images” on top of them. AR and MR are also closely related to VR; there is a cross-over in application and technology, as some VR headsets simulate the real space and then add artificial content for the user. For this article, however, AR and MR products are considered those which allow the user in some way to directly see the real world around them. The main target sectors of AR and MR appear to be industry and enterprise markets; with the high cost of individual products, there has been less penetration into the consumer space.

AR and MR products are being used in a variety of settings. One way they are being used is to address a problem called “the skills gap”. This describes the large portion of the skilled workforce who are expected to retire in the next ten years, taking their knowledge and skills with them. This knowledge needs to be passed on to new, unskilled employees, and some companies propose that AR/VR technology can fill the gap. This was one of the key areas discussed at events IDTechEx analysts attended in 2019 while researching this report.

AR use in manufacturing and remote assistance has also grown in the past 10 years, leading some AR companies to target enterprise spaces over the consumer space. Although there have been fewer direct needs or problems which AR can solve for a consumer market, smartphone AR provides an excellent starting point for technology-driven generations to create, develop and use XR-enabled smartphones for entertainment, marketing and advertising purposes. One example of smartphone AR mentioned in the report is IKEA Place, an application which lets a user place a piece of IKEA furniture in their room to compare it against their current furniture. It gives users a window into how AR can supplement their environment and be used in day-to-day activities such as purchasing and visualising products bought from an internet marketplace.

AR and MR companies have historically received higher funding per round than VR companies – e.g. Magic Leap, which has raised $2.6Bn in funding but only released a Creator Edition of its headset in 2018. AR and MR products also tend to be more expensive than VR products, as they are marketed to niche use cases. These points are discussed in greater detail in the report, including a plot showing this tendency for AR/MR products to be more expensive than VR products.

The report compares augmented and mixed reality products and splits them into three categories: PC AR/MR (products which need a physical PC connection), standalone AR/MR (products which do not require a PC), and smartphone/mobile AR/MR (products which use a smartphone's capabilities to deliver the immersive experience). Standalone AR/MR has seen the most distinct product types in the past decade, and this influences the decisions made when forecasting the decade to come.

The report predicts an AR/MR market worth over $20Bn in 2030, reflecting the high interest in this technology.

In conclusion, VR, AR and MR, like nearly any technology area, must build on what has come before. Heavy investment in this space targets the future potential of XR headsets. “Augmented, Mixed and Virtual Reality 2020-2030” provides a complete overview of the companies, technologies and products in augmented, virtual and mixed reality, allowing the reader to gain a deeper understanding of this exciting technology.


Next-generation inspection!

17/09/2019

Combining machine vision and deep learning gives companies a powerful advantage on both the operational and ROI axes. Understanding the differences between traditional machine vision and deep learning, and how these technologies complement each other – rather than compete or replace one another – is essential to maximizing investments. In this article, Bruno Forgue of Cognex helps clarify things.

Machine Vision vs Deep Learning

Over the last decade, technology change and improvement have come in many forms: device mobility… big data… artificial intelligence (AI)… internet-of-things… robotics… blockchain… 3D printing… machine vision… In all these domains, novel things came out of R&D labs to improve our daily lives.

Engineers like to adopt technologies and adapt them to their tough environments and constraints. Strategically planning for the adoption and leveraging of some or all of these technologies will be crucial in the manufacturing industry.

Let’s focus here on AI, and specifically on deep learning-based image analysis, also called example-based machine vision. Combined with traditional rule-based machine vision, it can help robotic assemblers identify the correct parts, detect whether a part is present, missing or assembled improperly on the product, and determine problems faster – all with high precision.

Figure 1 – The main differences between traditional machine vision and deep learning include:
1. The development process (tool-by-tool rule-based programming vs. example-based training);
2. The hardware investments (deep learning requires more processing and storage);
3. The factory automation use cases.

Let’s first see what deep learning is
Without getting too deep (may I say?) into details, let’s talk about GPU hardware. GPUs (graphics processing units) gather thousands of relatively simple processing cores on a single chip. Their massively parallel architecture is well suited to neural networks: it allows the deployment of biology-inspired, multi-layered “deep” neural networks that mimic the human brain.

With such an architecture, deep learning can solve specific tasks without being explicitly programmed to do so. In other words, where classical computer applications are programmed by humans to be “task-specific”, deep learning takes data (images, speech, text, numbers…) and trains on it via neural networks. Starting from a primary logic developed during initial training, deep neural networks continuously refine their performance as they receive new data.
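
To make the contrast with explicit programming concrete, here is a minimal, illustrative training loop in PyTorch – a toy two-class “good/defect” classifier with invented layer sizes and stand-in data, not any vendor's implementation. Instead of coding rules, the network's parameters are fitted to labeled examples:

```python
# Toy example-based training: a tiny classifier learns "good" vs "defect"
# from labeled images instead of hand-coded rules. All sizes are illustrative.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, 2),        # two classes: good / defect
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in for a real dataset: a batch of 64x64 grayscale images with labels.
images = torch.randn(32, 1, 64, 64)
labels = torch.randint(0, 2, (32,))

for epoch in range(10):                # each pass refines the learned logic
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```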

Deep learning is based on detecting differences: it permanently looks for alterations and irregularities in a set of data, and it is sensitive to unpredictable defects. Humans do this naturally well; computer systems based on rigid programming do not. (But unlike human inspectors on production lines, computers do not tire from constantly doing the same iteration.)

In daily life, typical applications of deep learning are facial recognition (to unlock computers or identify people in photos)… recommendation engines (on streaming video/music services or when shopping at e-commerce sites)… spam filtering in emails… disease diagnostics… credit card fraud detection…

Deep learning technology produces very accurate outputs based on the data it was trained on. It is being used to predict patterns, detect variance and anomalies, and make critical business decisions. This same technology is now migrating into advanced manufacturing practices for quality inspection and other judgment-based use cases.

When implemented for the right types of factory applications, in conjunction with machine vision, deep learning will scale up profits in manufacturing (especially when compared with investments in other emerging technologies that might take years to pay off).

How does deep learning complement machine vision?
A machine vision system relies on a digital sensor placed inside an industrial camera with specific optics. The camera acquires images, which are fed to a PC, where specialized software processes and analyzes them and measures various characteristics for decision making. Machine vision systems perform reliably with consistent and well-manufactured parts, operating via step-by-step filtering and rule-based algorithms.

On a production line, a rule-based machine vision system can inspect hundreds, or even thousands, of parts per minute with high accuracy. It’s more cost-effective than human inspection. The output of that visual data is based on a programmatic, rule-based approach to solving inspection problems.

On a factory floor, traditional rule-based machine vision is ideal for: guidance (position, orientation…), identification (barcodes, data-matrix codes, marks, characters…), gauging (comparison of distances with specified values…), inspection (flaws and other problems such as missing safety-seal, broken part…).
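
Two of these checks – identifying part presence and gauging position – can be sketched in a few lines of OpenCV (an illustrative example with assumed file names and hand-tuned thresholds, not a Cognex product API):

```python
# Minimal rule-based vision sketch: every decision is an explicit, hand-set
# rule, which is what makes such systems fast and predictable.
import cv2

scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)        # assumed files
template = cv2.imread("part_template.png", cv2.IMREAD_GRAYSCALE)

# Rule 1: the part is "present" if template correlation exceeds a threshold.
result = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)
present = max_val > 0.8                                       # hand-tuned

# Rule 2: gauge the found position against a specified target location.
target_xy = (120, 80)                                         # from the spec
dx, dy = max_loc[0] - target_xy[0], max_loc[1] - target_xy[1]
in_position = (dx * dx + dy * dy) ** 0.5 < 5.0                # pixel tolerance

print("present:", present, "in position:", in_position)
```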

Rule-based machine vision is great with a known set of variables: Is a part present or absent? Exactly how far apart is this object from that one? Where does this robot need to pick up this part? These jobs are easy to deploy on the assembly line in a controlled environment. But what happens when things aren’t so clear cut?

This is where deep learning enters the game:

• Solve vision applications too difficult to program with rule-based algorithms.
• Handle confusing backgrounds and variations in part appearance.
• Maintain applications and re-train with new image data on the factory floor.
• Adapt to new examples without re-programming core networks.

A typical industrial example: looking for scratches on electronic device screens. These defects all differ in size, scope and location, and appear on screens with different backgrounds. Despite such variations, deep learning can tell the difference between good and defective parts. Moreover, training the network on a new target (like a different kind of screen) is as easy as taking a new set of reference pictures.
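
The “new set of reference pictures” step might look like the sketch below – an illustration using transfer learning in PyTorch under assumed names, not Cognex's actual tooling – where the pre-trained backbone is frozen and only the final classification layer is refit for the new screen type:

```python
# Sketch of re-training for a new target: freeze the pre-trained backbone,
# replace and refit only the classification head on new reference pictures.
import torch
import torch.nn as nn
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False                    # keep learned features
model.fc = nn.Linear(model.fc.in_features, 2)      # new good/defect head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in for reference pictures of the new kind of screen.
new_images = torch.randn(16, 3, 224, 224)
new_labels = torch.randint(0, 2, (16,))

for epoch in range(5):                             # a short refit suffices
    optimizer.zero_grad()
    loss = loss_fn(model(new_images), new_labels)
    loss.backward()
    optimizer.step()
```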

Figure 2 – Typical industrial example: looking for defects which are all different in size, scope, location, or across various surfaces with different backgrounds.

Inspecting visually similar parts with complex surface textures and variations in appearance is a serious challenge for traditional rule-based machine vision systems. “Functional” defects, which affect a part's utility, are almost always rejected, while “cosmetic” anomalies may not be, depending on the manufacturer's needs and preferences. To make matters worse, these defect types are difficult for a traditional machine vision system to distinguish between.

Due to multiple variables that can be hard to isolate (lighting, changes in color, curvature, field of view), some defect detections are notoriously difficult to program and solve with a traditional machine vision system. Here again, deep learning brings the appropriate tools.

In short, traditional machine vision systems perform reliably with consistent and well-manufactured parts, but applications become challenging to program as exceptions and defect libraries grow. For complex situations that need human-like vision with the speed and reliability of a computer, deep learning will prove to be a truly game-changing option.

Figure 3 – Compared to Traditional Machine Vision, Deep Learning is:
1. Designed for hard-to-solve applications;
2. Easier to configure;
3. Tolerant to variations.

Deep learning’s benefits for industrial manufacturing
Rule-based machine vision and deep learning-based image analysis complement each other rather than being an either/or choice when adopting next-generation factory automation tools. In some applications, like measurement, rule-based machine vision remains the preferred and cost-effective choice. For complex inspections involving wide deviation and unpredictable defects – too numerous and complicated to program and maintain within a traditional machine vision system – deep learning-based tools offer an excellent alternative.

• Learn more about Cognex deep learning solutions

#Machinehealth #PAuto @Cognex_Corp @CognexEurope