Non-contact technology simplifies torque monitoring and aids efficiency.

17/10/2019
Monitoring torque in a drive shaft is one of the best ways of assessing the performance of plant and machinery. However, because drive shafts rotate, hard-wiring a sensor into place usually requires the use of a delicate slip ring. An alternative solution is to use a non-contact radio frequency detector to monitor ‘Surface Acoustic Waves’ (SAWs), as Mark Ingham of Sensor Technology Ltd explains.

Torque imparts a small degree of twist into a driven shaft, which will distort SAW devices (small quartz combs) affixed to the shaft. This deformation causes a change in the resonant frequency of the combs, which can be measured via a non-contact radio frequency (RF) pick-up mounted close to the shaft. The pick-up emits an RF signal towards the shaft which is reflected back by the combs with its frequency changed in proportion to the distortion of the combs.
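The relationship described above is linear at heart: the pick-up measures a resonant frequency, and the shift from the unloaded frequency is proportional to torque. The sketch below illustrates that conversion; the nominal frequency and sensitivity are illustrative values, not Sensor Technology's actual calibration data.

```python
def torque_from_frequency(f_measured_hz, f_nominal_hz=200e6,
                          sensitivity_hz_per_nm=1000.0):
    """Convert a measured SAW resonant frequency into a torque estimate.

    The frequency shift is assumed proportional to shaft twist (and hence
    torque). f_nominal_hz and sensitivity_hz_per_nm are illustrative
    placeholders, not real calibration constants.
    """
    return (f_measured_hz - f_nominal_hz) / sensitivity_hz_per_nm

# An upward shift reads as torque in one direction, a downward shift as
# torque in the other -- matching the bidirectional sensing noted below.
print(torque_from_frequency(200e6 + 50e3))   # 50.0 (Nm)
print(torque_from_frequency(200e6 - 20e3))   # -20.0 (Nm)
```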

A SAW transducer is able to sense torque in both directions, and provides fast mechanical and electrical responses. As the method is non-contact, it also offers complete freedom from the slip rings, brushes and complex electronics often found in traditional torque measurement systems. SAW devices also have a high immunity to magnetic fields, allowing their use in, for example, motors, where other analogue technologies are very susceptible to electromagnetic interference.

More detail:
In its simplest form, a SAW transducer consists of two interdigital arrays of thin metal electrodes deposited on a highly polished piezoelectric substrate such as quartz. The electrodes that comprise these arrays alternate polarities so that an RF signal of the proper frequency applied across them causes the surface of the crystal to expand and contract and this generates the surface wave.

These interdigital electrodes are generally spaced at a half- or quarter-wavelength of the operating centre frequency. Since the acoustic velocity of the surface wave is roughly 10⁻⁵ times the speed of light, an acoustic wavelength is much smaller than its electromagnetic counterpart.

For example, a signal at 100 MHz with a free-space wavelength of three metres would have a corresponding acoustic wavelength of about 30 microns. This gives SAW devices their unique ability to incorporate an enormous amount of signal processing or delay in a very small volume. As a result of this relationship, physical limitations exist at higher frequencies, where the electrodes become too narrow to fabricate with standard photolithographic techniques, and at lower frequencies, where the devices become impractically large. Hence, at this time, SAW devices are most typically used from 10 MHz to about 3 GHz.
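The wavelength arithmetic above is easy to verify. Taking the acoustic velocity as 10⁻⁵ of the speed of light (about 3 km/s, a typical figure for quartz):

```python
# Acoustic wavelength on the SAW substrate vs the free-space EM wavelength.
C = 3.0e8           # speed of light, m/s
V_ACOUSTIC = 3.0e3  # typical SAW velocity on quartz (~1e-5 of c), m/s

def wavelengths(frequency_hz):
    """Return (free-space EM wavelength, acoustic wavelength) in metres."""
    return C / frequency_hz, V_ACOUSTIC / frequency_hz

em, acoustic = wavelengths(100e6)
print(em)              # 3.0 metres
print(acoustic * 1e6)  # ~30 microns
```

The five-orders-of-magnitude gap between the two wavelengths is exactly why so much signal processing fits in so little substrate.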

Applications
SAW-based torque sensors have been used around the world and in many fields, from test rigs to wind turbines and generators based on tidal or river flows. They are used extensively in the high tech world of the development of engines and gearboxes for Formula 1. Pharmaceutical companies employ them to monitor the pumps micro-dosing active ingredients into medicines and tablets. Torque feedback systems can be used by security firms to determine the direction their movable CCTV cameras are facing so that they can efficiently watch premises under their protection.

Today, as industrial engineers automate manufacturing and processing operations, they are increasingly turning to torque monitoring to generate the vital operating and production data that maintains production and efficiency.

@sensortech #PAuto

The next-generation inspection!

17/09/2019

Combining machine vision and deep learning gives companies a powerful advantage on both the operational and ROI fronts. So, grasping the differences between traditional machine vision and deep learning, and understanding how these technologies complement each other – rather than compete or replace one another – is essential to maximizing investments. In this article Bruno Forgue of Cognex helps to clarify things.

Machine Vision vs Deep Learning

Over the last decade, technology has changed and improved across a remarkable range of fields: device mobility… big data… artificial intelligence (AI)… the internet of things… robotics… blockchain… 3D printing… machine vision… In all these domains, novel ideas have come out of R&D labs to improve our daily lives.

Engineers like to adopt and adapt technologies to their tough environments and constraints. Strategically planning for the adoption and leveraging of some or all of these technologies will be crucial in the manufacturing industry.

Let’s focus here on AI, and specifically deep learning-based image analysis, or example-based machine vision. Combined with traditional rule-based machine vision, it can help robotic assemblers identify the correct parts, detect whether a part is present, missing or assembled improperly on the product, and more quickly determine whether those are problems. And all this can be done with high precision.

Figure 1 – The first differences between traditional machine vision and deep learning include:
1. The development process (tool-by-tool rule-based programming vs. example-based training);
2. The hardware investments (deep learning requires more processing and storage);
3. The factory automation use cases.

Let’s first see what deep learning is
Without getting too deep (may I say?) into the details, let’s talk about GPU hardware. GPUs (graphics processing units) gather thousands of relatively simple processing cores on a single chip. This massively parallel architecture maps well onto neural networks, making it possible to deploy biology-inspired, multi-layered “deep” neural networks which mimic the human brain.

By using such an architecture, deep learning can solve specific tasks without being explicitly programmed to do so. In other words, classical computer applications are programmed by humans to be task-specific, whereas deep learning trains neural networks on data (images, speech, text, numbers…). Starting from the primary logic developed during initial training, deep neural networks continuously refine their performance as they receive new data.

Deep learning is based on detecting differences: it continually looks for alterations and irregularities in a set of data, and it is sensitive to unpredictable defects. Humans do this naturally well; computer systems based on rigid programming are not good at it. (But unlike human inspectors on production lines, computers do not tire of repeating the same task.)
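To make "training on examples rather than programming rules" concrete, here is a minimal sketch: a single-neuron classifier fitted by gradient descent to labelled good/defective examples. The two "features" per part (think of pre-extracted brightness and texture scores) and all the data are synthetic and purely illustrative; real deep learning systems use far deeper networks, but the principle is the same.

```python
import numpy as np

# Synthetic, well-separated feature vectors for good and defective parts.
rng = np.random.default_rng(0)
good = rng.normal(-0.3, 0.05, size=(50, 2))      # features of good parts
defective = rng.normal(0.3, 0.05, size=(50, 2))  # features of defective parts
X = np.vstack([good, defective])
y = np.array([0.0] * 50 + [1.0] * 50)            # labels: 0 good, 1 defective

# Train a single logistic "neuron" by plain gradient descent on log-loss.
# No rules are written; the decision boundary is learned from the examples.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted defect probability
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    b -= 0.1 * (p - y).mean()

def predict(features):
    """True means the trained model judges the part defective."""
    return 1.0 / (1.0 + np.exp(-(features @ w + b))) > 0.5

print(predict(np.array([-0.3, -0.3])))   # False -> good
print(predict(np.array([0.3, 0.3])))     # True  -> defective
```

Retraining for a new product is just a matter of collecting new labelled examples and rerunning the fit, which is the "as easy as taking a new set of reference pictures" point made later in the article.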

In daily life, typical applications of deep learning are facial recognition (to unlock computers or identify people on photos)… recommendation engines (on streaming video/music services or when shopping at ecommerce sites)… spam filtering in emails… disease diagnostics… credit card fraud detection…

Deep learning technology makes very accurate outputs based on the trained data. It is being used to predict patterns, detect variance and anomalies, and make critical business decisions. This same technology is now migrating into advanced manufacturing practices for quality inspection and other judgment-based use cases.

When implemented for the right types of factory applications, in conjunction with machine vision, deep learning will scale up profits in manufacturing (especially when compared with investments in other emerging technologies that might take years to pay off).

How does deep learning complement machine vision?
A machine vision system relies on a digital sensor placed inside an industrial camera with specific optics to acquire images. Those images are fed to a PC, where specialized software processes and analyzes them, measuring various characteristics for decision making. Machine vision systems perform reliably with consistent, well-manufactured parts, operating via step-by-step filtering and rule-based algorithms.

On a production line, a rule-based machine vision system can inspect hundreds, or even thousands, of parts per minute with high accuracy. It’s more cost-effective than human inspection. The output of that visual data is based on a programmatic, rule-based approach to solving inspection problems.

On a factory floor, traditional rule-based machine vision is ideal for: guidance (position, orientation…), identification (barcodes, data-matrix codes, marks, characters…), gauging (comparison of distances with specified values…), inspection (flaws and other problems such as missing safety-seal, broken part…).

Rule-based machine vision is great with a known set of variables: Is a part present or absent? Exactly how far apart is this object from that one? Where does this robot need to pick up this part? These jobs are easy to deploy on the assembly line in a controlled environment. But what happens when things aren’t so clear cut?
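A "part present or absent?" check of the kind just described can be written as an explicit rule. The sketch below is a deliberately simple illustration, not any vendor's algorithm: if enough pixels inside a fixed region of interest are darker than a threshold, a part is deemed present. The ROI, threshold and fraction are all made-up values.

```python
import numpy as np

def part_present(image, roi=(slice(10, 40), slice(10, 40)),
                 dark_threshold=80, min_fraction=0.5):
    """Rule-based presence check: a dark part against a bright background.

    All parameters are illustrative and would be tuned to the real
    camera, lighting and fixturing in a deployed system.
    """
    region = image[roi]
    dark_fraction = (region < dark_threshold).mean()
    return dark_fraction >= min_fraction

empty_scene = np.full((50, 50), 200, dtype=np.uint8)  # bright background
with_part = empty_scene.copy()
with_part[12:38, 12:38] = 30                          # dark part in the ROI

print(part_present(empty_scene))  # False
print(part_present(with_part))    # True
```

The brittleness is visible in the code itself: the rule only works while the part, lighting and background stay within the assumptions baked into those fixed numbers, which is exactly where deep learning takes over.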

This is where deep learning enters the game:

• Solve vision applications too difficult to program with rule-based algorithms.
• Handle confusing backgrounds and variations in part appearance.
• Maintain applications and re-train with new image data on the factory floor.
• Adapt to new examples without re-programming core networks.

A typical industrial example: looking for scratches on electronic device screens. Those defects will all differ in size, scope, location, or across screens with different backgrounds. Considering such variations, deep learning will tell the difference between good and defective parts. Plus, training the network on a new target (like a different kind of screen) is as easy as taking a new set of reference pictures.

Figure 2 – Typical industrial example: looking for defects which are all different in size, scope, location, or across various surfaces with different backgrounds.

Inspecting visually similar parts with complex surface textures and variations in appearance is a serious challenge for traditional rule-based machine vision systems. “Functional” defects, which affect a part’s utility, are almost always rejected, but “cosmetic” anomalies may not be, depending upon the manufacturer’s needs and preferences. To make matters worse, these defect classes are difficult for a traditional machine vision system to distinguish between.

Due to multiple variables that can be hard to isolate (lighting, changes in color, curvature, or field of view), some defect detections are notoriously difficult to program and solve with a traditional machine vision system. Here again, deep learning brings the appropriate tools.

In short, traditional machine vision systems perform reliably with consistent and well-manufactured parts, but the applications become challenging to program as exceptions and defect libraries grow. For the complex situations that need human-like vision with the speed and reliability of a computer, deep learning will prove to be a truly game-changing option.

Figure 3 – Compared to Traditional Machine Vision, Deep Learning is:
1. Designed for hard-to-solve applications;
2. Easier to configure;
3. Tolerant to variations.

Deep learning’s benefits for industrial manufacturing
Rule-based machine vision and deep learning-based image analysis complement each other rather than presenting an either/or choice when adopting next-generation factory automation tools. In some applications, like measurement, rule-based machine vision will still be the preferred and cost-effective choice. For complex inspections involving wide deviation and unpredictable defects—too numerous and complicated to program and maintain within a traditional machine vision system—deep learning-based tools offer an excellent alternative.

• Learn more about Cognex deep learning solutions

#Machinehealth #PAuto @Cognex_Corp @CognexEurope

Power take-off torque monitoring.

07/08/2018
AIM – Precisely & Quickly Monitor Power Take-Off Torque on A Wave Energy Converter

Challenge
As part of a project funded by Wave Energy Scotland, 4c Engineering needed to test various configurations of the SeaPower Platform, a Wave Energy Converter (WEC), to determine the effects on power capture. To do this they needed a reliable and accurate way of measuring power take-off (PTO) torque, forces, positions and pressures of the waves on the SeaPower Platform.

Why? Establishing the most efficient design with the highest wave power generation will make it a more cost-effective form of wave energy.

The SeaPower Platform extracts energy from deep water ocean waves by reacting to long prevailing wavelengths in high resource sites.

Solution – Accurate DTD-P Parallel Shaft Reaction Torque Transducer
“We chose the DTD-P torque transducer for its high accuracy and compact size which we needed for tank testing the SeaPower Platform,” explains Andy Hall, Director at 4c Engineering.

  • Designed for In-Line Static or Semi-Rotary Torque Measurement
  • Capacities: 0-10Nm to 0-10kNm
  • High Accuracy – Ideal for Calibration, Development and Testing Applications
  • Accuracy: <±0.15% / Full Scale Output
  • Customised Capacities, Shaft and Configuration Options Available
  • IP67 Waterproof and IP68 Fully Submersible Versions Available

Complete Torque Monitoring System
Applied Measurements Ltd provided 4c Engineering with a DTD-P 100Nm parallel shaft torque sensor fitted with an ICA4H miniature load cell amplifier, calibrated to UKAS traceable standards and sealed to IP68 to allow complete submersion. This complete torque measuring system enabled their engineers to reliably and accurately monitor the torque applied by the WEC as it responded to waves in the test tank.

Save on Installation Time
The DTD-P torque transducer has keyed parallel shaft connections for in-line static or semi-rotary torque measurement in capacities from 0-10Nm up to 0-10kNm (custom capacities readily available). This version was fitted with a flying lead; however, versions with an integral bayonet-lock military connector are also available, offering simple and easy connection.

Guaranteed High Accuracy
The DTD-P torque transducer is highly accurate to better than ±0.15% (typically ±0.05%) of the full scale output, making it ideal for this high precision development and testing application. Additional applications include the testing of electrical motors, hydraulic pumps, automotive transmissions, steering systems and aircraft actuators.

Need a Specific Design?
The DTD-P torque transducer can be customised with bespoke shaft, configuration options and capacities (to 50kNm+) specific to your application. For 4c Engineering we customised the design of the DTD-P torque transducer to IP68 submersible for continuous use underwater to 1m, which was essential for use in the wave test tank.

High Stability, Fast Response, ICA4H Miniature Load Cell Amplifier
“The high speed, reliability and clean output of the ICA4H miniature amplifier enabled the data to be analysed immediately after each test,” says Andy Hall.

  • Very Compact 19mm Diameter
  • Low Current Consumption
  • High Speed 1kHz Bandwidth (max.)
  • 4-20mA (3-wire) Output (10 to 30Vdc supply)
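Converting the amplifier's 4-20 mA loop output into an engineering reading is simple linear scaling, shown below for the 100 Nm transducer used in this application. The scaling is the standard 4-20 mA convention rather than vendor-specific code, and the range check is an illustrative addition.

```python
def torque_from_current(current_ma, full_scale_nm=100.0):
    """Map a 4-20 mA loop current onto a 0 to full-scale torque reading.

    4 mA -> 0 Nm and 20 mA -> full scale, linear in between. The 100 Nm
    default matches the DTD-P used in this application; other capacities
    just change full_scale_nm.
    """
    if not 4.0 <= current_ma <= 20.0:
        raise ValueError("loop current outside the 4-20 mA range")
    return (current_ma - 4.0) / 16.0 * full_scale_nm

print(torque_from_current(4.0))    # 0.0 Nm
print(torque_from_current(12.0))   # 50.0 Nm (mid-scale)
print(torque_from_current(20.0))   # 100.0 Nm
```

Note that the quoted ±0.15% full-scale accuracy means a reading from this 100 Nm unit carries a worst-case error of about ±0.15 Nm.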

Very Compact
To deliver a conditioned load cell output signal we supplied the DTD-P torque transducer with an ICA4H high performance miniature load cell amplifier. The ICA miniature load cell amplifiers are very compact at only 19mm in diameter, allowing them to fit inside the body of most load cells. In this application the ICA4H was supplied in a gel-filled, IP68 immersion-protected compact enclosure (see image above) along with 10 metres of cable, making it suitable for this underwater application.

With High Speed Response
The engineers at 4c Engineering needed to have a quick and reliable way to process the power take-off torque data from the tests, to determine the power capture and effects of the control settings before running the next test. The ICA4H miniature load cell amplifier was chosen not only for its high stability and compact size but also for its 1000Hz fast response.


A sea platform off the Galway (Ireland) Coast – not far from the Read-out offices.

@AppMeas @4c_Eng #power #PAuto


Load cells on stage!

30/04/2018
With theatres striving to create breath-taking spectacles and leave the audience gasping for more, there is often world-class engineering behind the scenes. Sensor Technology is developing technology to ensure safety when excited performers and heavy machinery share the same space.

If live theatre is to compete with film and television, it has to produce visual spectacles to complement the performance of the actors, singers and musicians on stage. Hollywood’s increasing reliance on CGI (computer generated imagery) has upped the ante for stage set designers, who have to work before a live audience, in restricted space and with a constant eye on the safety of the many people working frantically round the set.

Many stage props and almost all of the backdrops are lowered onto the stage from the fly tower just behind it. Usually this is done quickly between scenes, but sometimes it is during – and as part of – the actual performance. Either way, safety and reliability are essential.

“Until recently, the sets were manually controlled with a technical stage manager watching everything from the wings and giving instructions by radio to the winch operators above,” explains Tony Ingham of Sensor Technology, who is helping to introduce safety systems and automation to the theatre industry.

“Speed is of the essence during scene changes, but you have to be confident the winches won’t fail – which could easily damage the set or injure a person.”

Sensor Technology is achieving this using real-time load signals from each winch. The data is monitored by a computer in the control room so that instant action can be taken if any loads move out of tolerance.

“We developed the load cells, which we have called LoadSense, a couple of years ago, originally for monitoring cargo nets carried under helicopters,” says Tony. “We were asked to develop one specific capability within the cell and were delighted to do so because we could see that the technology would transfer to many other fields – although I didn’t realise it would get to be a backstage pass to a world of greasepaint and legwarmers!”

That critical characteristic was robust, industrial-grade wireless communications, something in which Sensor Technology already has a 15-year track record from its TorqSense transducer range. In basic terms, each LoadSense has an on-board radio frequency transmitter which sends signals to the control room computer. The transmitter has to be physically robust to cope with the environment it finds itself in and capable of maintaining its signal integrity through the most corrupting of harmonic conditions.

“By working in real time, we can react instantly to any problems. For instance, if a load starts running too fast we would slow it down immediately. If a prop is heavier than expected this could suggest someone was standing on it, so it shouldn’t be whizzed 50 feet into the air at high speed. In fact, in this case, the computer ‘jiggles’ the load for a second or two as a warning to encourage the person to step away. If the load then returns to normal we are happy to let it rise; if it doesn’t, the floor manager is alerted by an alarm to check the situation.”
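The tolerance check at the heart of that scheme is straightforward to sketch. The function below flags any winch whose live load strays outside a band around its expected value; the names, units and the 10% tolerance are illustrative, not Sensor Technology's actual control logic.

```python
def check_load(load_kg, expected_kg, tolerance=0.10):
    """Return 'ok' if the live load is within +/-tolerance of the
    expected value for this winch, otherwise 'alarm'.

    A real system would also track rate-of-change (a load running too
    fast) and drive the 'jiggle' warning described above; this sketch
    covers only the basic out-of-tolerance test.
    """
    if abs(load_kg - expected_kg) <= tolerance * expected_kg:
        return "ok"
    return "alarm"

print(check_load(102.0, 100.0))   # ok: within 10% of expected
print(check_load(185.0, 100.0))   # alarm: someone standing on the prop?
```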

LoadSense is proving so sensitive that it can provide a feedback signal to close the control loop on a vector drive controlling the winch. Normally theatre engineers use sensorless vector drives, which offer good dynamic performance without the complications of wiring in a feedback sensor.

Sensor Technology is closing the loop which improves system integrity and enhances safety by a significant margin.

“Not that many years ago, stage scenery was fairly static, being moved only during the interval when the curtains were closed,” Tony recalls. “Then the big theatres in the West End and on Broadway started to emulate some of the things you see in the movies. Looking back, those early efforts were pretty crude, but you would say the same about long-running film franchises such as James Bond or Indiana Jones. Nowadays, film directors can produce their spectacular images using CGI, and this has upped the ante no end for their cousins in live theatre. The computer power they turn to is not virtual reality but industrial automation.”

In fact, theatre engineers probably work in more demanding conditions than manufacturing engineers: everything has to be right on the night, harmonic corruption is at stratospheric levels, there can be major changes at a moment’s notice, and people run through the ‘machinery’ without a thought for personal safety.

“But with automation some order is brought to this creative chaos. In fact, the health and safety inspectors now insist on it, with lots of failsafes and feedbacks. I honestly don’t think theatre engineers would be able to achieve half of what they do without wireless communications. There would be just too many wires running all over the place and inevitably some would get broken at the most inopportune of moments.”

@sensortech #PAuto #Stagecraft