Non-contact technology simplifies torque monitoring and aids efficiency.

17/10/2019
Monitoring torque in a drive shaft is one of the best ways of assessing the performance of plant and machinery. However because drive shafts rotate, hard wiring a sensor into place usually requires the use of a delicate slip ring. An alternative solution is to use a non-contact radio frequency detector to monitor ‘Surface Acoustic Waves’ (SAWs), as Mark Ingham of Sensor Technology Ltd explains.

Torque imparts a small degree of twist into a driven shaft, which will distort SAW devices (small quartz combs) affixed to the shaft. This deformation causes a change in the resonant frequency of the combs, which can be measured via a non-contact radio frequency (RF) pick-up mounted close to the shaft. The pick-up emits an RF signal towards the shaft which is reflected back by the combs with its frequency changed in proportion to the distortion of the combs.
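As an illustrative sketch only (not Sensor Technology's actual calibration), the relationship between frequency shift and torque can be modelled as linear over the working range; the sensitivity figure below is hypothetical:

```python
def torque_from_frequency_shift(delta_f_hz, sensitivity_hz_per_nm):
    # Linear model: delta_f = sensitivity * torque, so torque = delta_f / sensitivity.
    # The sign of delta_f indicates the direction of the applied torque.
    return delta_f_hz / sensitivity_hz_per_nm

# Hypothetical calibration: a 1 kHz shift per newton-metre of torque
torque_nm = torque_from_frequency_shift(delta_f_hz=2500.0, sensitivity_hz_per_nm=1000.0)
print(f"Estimated torque: {torque_nm:.2f} N·m")  # Estimated torque: 2.50 N·m
```

A real transducer would be calibrated against known torques, but the linearity and sign-sensitivity shown here are the properties the article describes.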

A SAW transducer is able to sense torque in both directions, and provides fast mechanical and electrical responses. As the method is non-contact, it also offers complete freedom from the slip rings, brushes and complex electronics often found in traditional torque measurement systems. SAW devices also have a high immunity to magnetic fields, allowing their use in, for example, motors, where other analogue technologies are very susceptible to electromagnetic interference.

More detail:
In its simplest form, a SAW transducer consists of two interdigital arrays of thin metal electrodes deposited on a highly polished piezoelectric substrate such as quartz. The electrodes that comprise these arrays alternate polarities so that an RF signal of the proper frequency applied across them causes the surface of the crystal to expand and contract and this generates the surface wave.

These interdigital electrodes are generally spaced at half or quarter wavelengths of the operating centre frequency. Since the surface acoustic wave velocity is roughly 10⁻⁵ of the speed of light, an acoustic wavelength is much smaller than its electromagnetic counterpart.

For example, a signal at 100 MHz with a free-space wavelength of three metres would have a corresponding acoustic wavelength of about 30 microns. This results in the SAW’s unique ability to incorporate an incredible amount of signal processing or delay in a very small volume. As a result of this relationship, physical limitations exist at higher frequencies, where the electrodes become too narrow to fabricate with standard photolithographic techniques, and at lower frequencies, where the devices become impractically large. Hence, at this time, SAW devices are most typically used from 10 MHz to about 3 GHz.
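The wavelength figures above follow directly from wavelength = velocity / frequency; a quick sketch, assuming a nominal SAW velocity of about 3,000 m/s (roughly 10⁻⁵ of the speed of light):

```python
SPEED_OF_LIGHT_M_S = 3.0e8
SAW_VELOCITY_M_S = SPEED_OF_LIGHT_M_S * 1e-5   # ~3,000 m/s on a quartz surface

def acoustic_wavelength_m(frequency_hz):
    # wavelength = velocity / frequency
    return SAW_VELOCITY_M_S / frequency_hz

f_hz = 100e6  # 100 MHz
print(acoustic_wavelength_m(f_hz) * 1e6)   # ~30 (microns, acoustic wavelength)
print(SPEED_OF_LIGHT_M_S / f_hz)           # 3.0 (metres, free-space wavelength)
```

The five-orders-of-magnitude ratio between the two printed wavelengths is what gives SAW devices their compactness.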

Applications
SAW-based torque sensors have been used around the world and in many fields, from test rigs to wind turbines and generators based on tidal or river flows. They are used extensively in the high tech world of the development of engines and gearboxes for Formula 1. Pharmaceutical companies employ them to monitor the pumps micro-dosing active ingredients into medicines and tablets. Torque feedback systems can be used by security firms to determine the direction their movable CCTV cameras are facing so that they can efficiently watch premises under their protection.

Today, as industrial engineers automate manufacturing and processing operations, they are increasingly turning to torque monitoring to generate the vital operating and production data that maintains output and efficiency.

@sensortech #PAuto

Managing dust risks at quarries!

16/10/2019
In this article, Josh Thomas from instrumentation specialist Ashtead Technology, discusses the risks associated with dust at quarries, and highlights the vital role of monitoring.

Josh Thomas

Background
Almost all quarrying operations have the potential to create dust. Control measures should therefore be established to prevent the generation of levels that cause harm. These measures should be identified in the health and safety document, and measurements should be taken to monitor exposure and demonstrate the effectiveness of controls.

Many minerals contain high levels of silica, so quarrying these materials generates silica dust known as respirable crystalline silica (RCS), and particular care must be taken to control exposure. Guidance is available from the British Health & Safety Executive (HSE); see document HS(G) 73 Respirable crystalline silica at quarries. Sandstone, gravel and flint typically contain over 70% crystalline silica, shale contains over 40% and granite can contain up to 30%. Inhaling RCS can lead to silicosis, a serious and irreversible lung disease that can cause permanent disablement and early death. There is an increased risk of lung cancer in workers who have silicosis, and it can also be the cause of chronic obstructive pulmonary disease (COPD).

The British Control of Substances Hazardous to Health Regulations 2002 (COSHH) requires employers to ensure that exposure is prevented or, where this is not reasonably practicable, adequately controlled. The COSHH definition of a substance hazardous to health includes dust of any kind when present at a concentration in air equal to or greater than 10 mg/m3 (8-hour time-weighted average, TWA) of inhalable dust, or 4 mg/m3 (8-hour TWA) of respirable dust. This means that any dust will be subject to COSHH if people are exposed above these levels. Some dusts have been assigned specific workplace exposure limits (WELs) and exposure to these must comply with the appropriate limits. For example, the WEL for RCS is 0.1 mg/m3 (8-hour TWA).
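As a rough illustration of how an 8-hour TWA is computed from sampled exposures (the figures are hypothetical and this is not a compliance calculation):

```python
def twa_8hr_mg_m3(samples):
    # samples: list of (concentration in mg/m3, duration in hours)
    # 8-hour TWA = sum(c_i * t_i) / 8
    return sum(c * t for c, t in samples) / 8.0

# Hypothetical shift: 4 h at 0.15 mg/m3 RCS, 2 h at 0.05, 2 h in clean air
twa = twa_8hr_mg_m3([(0.15, 4), (0.05, 2), (0.0, 2)])
WEL_RCS_MG_M3 = 0.1
print(twa, "compliant" if twa <= WEL_RCS_MG_M3 else "over the WEL")
```

Note how a concentration above the WEL for part of a shift can still yield a compliant 8-hour average; this is why both peak and averaged data matter.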

The Quarries Regulations 1999 (GB) cover all surface mineral workings, and include tips and stockpiles, as well as areas used for crushing, screening, washing, drying and bagging. Buildings and other structures are also included, as are common areas and prospecting sites. The Regulations were created to protect the health and safety of quarry staff, as well as others that may be affected by quarrying activities, such as those living, passing or working nearby, or visiting the site.

The role of monitoring
In order to assess the risks posed by dust, it is necessary to undertake both workplace monitoring – inside buildings, vehicle cabs etc. – and environmental monitoring in and around the quarry. The technology for doing so is similar, but different instruments are available for each application. Ashtead supplies personal air sampling pumps when it is necessary to conduct compliance monitoring, or when the identification and measurement (in a laboratory) of a specific dust type, such as RCS, is required.

Once the dust risks at a quarry have been assessed, ongoing monitoring is more often conducted with direct reading instruments that employ optical techniques to measure the different particulate fractions. Portable battery-powered instruments such as the TSI SidePak and the DustTrak are ideal for this purpose and feature heavily in Ashtead’s fleet of instruments for both sale and rental.

Installed TSI DTE

The same dust monitoring technology is employed by the TSI DustTrak Environmental (DTE), which has been developed specifically for applications such as dust monitoring at quarries. Fully compliant with stringent MCERTS performance requirements, the DTE employs a ‘cloud’ based data management system, which provides users with easy access to real-time data on dust levels, with the optional addition of other sensors. Alarm conditions can be set by users so that text and email alerts are issued when threshold levels are exceeded. The DTE monitors PMTotal, PM10, PM2.5 and PM1.0 mass fractions simultaneously, which provides detailed information on the type of dust present, and means that alarms can be set for specific fractions.
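A minimal sketch of the kind of per-fraction alarm logic described above (the threshold values are hypothetical; in practice they are configured by the user to suit the site):

```python
# Hypothetical site-specific alarm thresholds in ug/m3
THRESHOLDS_UG_M3 = {"PM10": 50.0, "PM2.5": 25.0, "PM1.0": 10.0}

def fractions_in_alarm(reading):
    # Return the mass fractions whose reading exceeds their threshold.
    return [f for f, limit in THRESHOLDS_UG_M3.items() if reading.get(f, 0.0) > limit]

alarms = fractions_in_alarm({"PM10": 62.3, "PM2.5": 18.0, "PM1.0": 4.2})
print(alarms)  # ['PM10'] -> a text/email alert would be issued for this fraction
```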

Clearly, dust monitors can perform a vital role in helping to protect safety at working quarries. However, a TSI DTE was recently hired from Ashtead Technology to perform monitoring prior to the commencement of quarrying operations, so that baseline dust levels could be established for comparison once the quarry is operational. Monitoring prior to operations is important, because airborne dust at a quarry is not necessarily derived from the quarry alone; local agricultural or industrial activities may also contribute to the particulate burden. This also highlights the advantages of 24/7 monitoring because dust pollution may be intermittent, so continuous monitors such as the DTE are able to identify peaks and thereby assist in the attribution of sources.

Ashtead Technology fitted the DTE mentioned above with a solar panel and rechargeable battery so that it could operate unattended for extended periods in a remote location. With web-based access to the data, site visits were minimised and costs lowered. This equipment was hired from Ashtead to avoid capital expenditure, and looking forward, the client is planning to add a Lufft wind monitor to the rental, because data on wind speed and direction helps with modelling and with the identification of dust pollution sources.

Summary
Ideally, quarry site monitoring should be undertaken prior to the commencement of operations to establish baseline levels for that site. Risk assessments can then be undertaken around the site and within buildings and vehicles/machinery. However, conditions can change significantly, so continuous monitoring is preferable. Changes in quarry practices and weather can affect environmental conditions, and workplace exposure can be affected by a wide range of factors such as broken filter bags, spillage, insufficient cleaning, filter blockage and dry (instead of wet) drilling or cutting.

With such a variety of applications for dust monitoring, it is important that appropriate technology is employed, so the Ashtead Technology instrument fleet has been developed to meet almost every need, and technical advice is available to help consultants and quarry operators ensure that dust hazards are effectively managed.

#Environment @ashteadtech @_Enviro_News

Managing NOx gas emissions from combustion.

26/09/2019
Pollution can only be managed effectively if it is monitored effectively.

James Clements

As political pressure increases to limit the emissions of the oxides of nitrogen, James Clements, Managing Director of the Signal Group, explains how the latest advances in monitoring technology can help.

Nitrogen and oxygen are the two main components of atmospheric air, but they do not react at ambient temperature. However, in the heat of combustion, such as in a vehicle engine or within an industrial furnace or process, the gases react to form nitrogen oxide (NO) and nitrogen dioxide (NO2). This is an important consideration for the manufacturers of combustion equipment because emissions of these gases (collectively known as NOx) have serious health and environmental effects, and are therefore tightly regulated.

Nitrogen dioxide gas is a major pollutant in ambient air, responsible for large numbers of premature deaths, particularly in urban areas where vehicular emissions accumulate. NO2 also contributes to global warming and in some circumstances can cause acid rain. A wide range of regulations therefore exist to limit NOx emissions from combustion sources ranging from domestic wood burners to cars, and from industrial furnaces and generators to power stations. The developers of engines and furnaces therefore focus attention on the NOx emissions of their designs, and the operators of this equipment are generally required to undertake emissions monitoring to demonstrate regulatory compliance.

The role of monitoring in NOx reduction
NOx emissions can be reduced by:

  • reducing peak combustion temperature
  • reducing residence time at the peak temperature
  • chemical reduction of NOx during the combustion process
  • reducing nitrogen in the combustion process

These primary NOx reduction methods frequently involve extra cost or lower combustion efficiency, so NOx measurements are essential for the optimisation of engine/boiler efficiency. Secondary NOx reduction measures are possible by either chemical reduction or sorption/neutralisation. Naturally, the effects of these measures also require accurate emissions monitoring and control.

Choosing a NOx analyser
In practice, the main methods employed for the measurement of NOx are infrared, chemiluminescence and electrochemical. However, emissions monitoring standards are mostly performance based, so users need to select analysers that are able to demonstrate the required performance specification.

Rack Analyser

Infrared analysers measure the absorption of infrared light passing through a gas sample. In Signal’s PULSAR range, Gas Filter Correlation technology enables the measurement of just the gas or gases of interest, with negligible interference from other gases and water vapour. Alternatively, FTIR enables the simultaneous measurement of many different species, including NO and NO2, but it is costly and, in common with other infrared methods, significantly less sensitive than CLD.

Electrochemical sensors are low cost and generally offer lower levels of performance. Gas diffuses into the sensor where it is oxidised or reduced, which results in a current that is limited by diffusion, so the output from these sensors is proportional to the gas concentration. However, users should take into consideration potential cross-sensitivities, as well as rigorous calibration requirements and limited sensor longevity.

The chemiluminescence detector (CLD) method of measuring NO is based on bringing a controlled amount of ozone (O3) into contact with the sample containing NO inside a light-sealed chamber. A photomultiplier fitted to this chamber measures the photons given off by the reaction between NO and O3.

NO is oxidised by the O3 to become NO2, and photons are released as part of the reaction. This chemiluminescence only occurs with NO, so in order to measure NO2 it is necessary to first convert it to NO. The NO2 value is added to the NO reading, and this equates to the NOx value.
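The arithmetic behind this two-reading approach can be sketched as follows (illustrative values only):

```python
def nox_readings_ppm(no_ppm, converter_ppm):
    # no_ppm: CLD reading with the converter bypassed (NO only)
    # converter_ppm: CLD reading with the NO2->NO converter in line (total NOx)
    no2_ppm = converter_ppm - no_ppm  # NO2 is the difference between the two cycles
    return {"NO": no_ppm, "NO2": no2_ppm, "NOx": converter_ppm}

print(nox_readings_ppm(no_ppm=42.0, converter_ppm=55.0))
# {'NO': 42.0, 'NO2': 13.0, 'NOx': 55.0}
```

Real analysers alternate between the two measurement paths automatically, but the NOx = NO + NO2 bookkeeping is exactly this.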

Most of the oxides of nitrogen coming directly from combustion processes are NO, but much of this is further oxidised to NO2 as the NO mixes with air (which is 20.9% oxygen). For regulatory monitoring, NO2 is generally the required measurement parameter, but for combustion research and development NOx is the common measurand. Consequently, chemiluminescence is the preferred measurement method for development engineers at manufacturer laboratories working on new technologies to reduce NOx emissions in the combustion of fossil fuels. For regulatory compliance monitoring, NDIR (non-dispersive infrared) is more commonly employed.

Typical applications for CLD analysers therefore include the development and manufacture of gas turbines, large stationary diesel engines, large combustion plant process boilers, domestic gas water heaters and gas-fired factory space heaters, as well as combustion research, catalyst efficiency, NOx reduction, bus engine retrofits, truck NOx selective catalytic reduction development and any other manufacturing process which burns fossil fuels.

These applications require better accuracy than regulatory compliance monitoring, because savings in the choice of analyser are negligible in comparison with the market benefits of developing engines and furnaces with superior efficiency and cleaner emissions.

Signal Group always offers non-heated, non-vacuum CLD analysers for combined cycle gas turbine (CCGT) power stations because these stations emit lower than average NOx levels. NDIR analysers typically have a range of 100 ppm, whereas CLD analysers are much more sensitive, with a lower range of 10 ppm. Combustion processes operating with de-NOx equipment will need this superior level of sensitivity.

There is a high proportion of NO2 in the emissions of CCGT plants because they run with high levels of air in the combustion process, so it is necessary to convert NO2 to NO prior to analysis. Most CLD analysers are supplied with converters, but NDIR analysers are not, so converters are normally installed separately when NDIR is used.

In the USA, permitted levels for NOx are low, and many plants employ de-NOx equipment, so CLD analysers are often preferred. In Europe, the permitted levels are coming down, but there are fewer CCGT Large Plant operators, and in other markets such as India and China, permitted NOx emissions are significantly higher and NDIR is therefore more commonly employed.

In England, the Environment Agency requires continuous emissions monitoring systems (CEMS) to have a range no more than 2.5 times the permitted NOx level, so for Signal Group, as a manufacturer of both CLD and NDIR analysers, this can be a determining factor when deciding which analysers to recommend. The UK has a large number of CCGT power plants in operation and Signal Group has a high number of installed CEMS at these sites, but very few new plants have been built in recent years.
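The 2.5-times rule can be expressed as a simple check (a sketch for illustration, not Environment Agency guidance):

```python
def range_acceptable(analyser_range_ppm, permitted_nox_ppm, factor=2.5):
    # The analyser's measurement range must not exceed 2.5x the permitted NOx level.
    return analyser_range_ppm <= factor * permitted_nox_ppm

print(range_acceptable(100.0, 50.0))  # True: 100 ppm range vs 125 ppm ceiling
print(range_acceptable(100.0, 30.0))  # False: 100 ppm range vs 75 ppm ceiling
```

With low permitted levels, a typical 100 ppm NDIR range can fail this test while a 10 ppm CLD range passes, which is how the rule steers the choice of analyser.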

New NOx analysis technology
Signal Group recently announced the launch of the QUASAR Series IV gas analysers which employ CLD for the continuous measurement of NOx, Nitric Oxide, Nitrogen Dioxide or Ammonia in applications such as engine emissions, combustion studies, process monitoring, CEMS and gas production.

Chemiluminescence Analyser

The QUASAR instruments exploit the advantages of heated vacuum chemiluminescence, offering higher sensitivity with minimal quenching effects, and a heated reaction chamber that facilitates the processing of hot, wet sample gases without condensation. Signal’s vacuum technology improves the signal-to-noise ratio, and a fast response time makes the instruments ideal for real-time reporting applications. However, a non-vacuum version is available for trace NOx measurements such as RDE (Real-world Driving Emissions) on-board vehicle testing, for which a 24 VDC version is available.

A key feature of these latest instruments is the communications flexibility – all of the new Series IV instruments are compatible with 3G, 4G, GPRS, Bluetooth, Wi-Fi and satellite communications; each instrument has its own IP address and runs on Windows software. This provides users with simple, secure access to their analysers at any time, from almost anywhere.

In summary, it is clear that the choice of analyser is dictated by the application, so it is important to discuss this with appropriate suppliers/manufacturers. However, with the latest instruments, Signal’s customers can look forward to monitoring systems that are much more flexible and easier to operate. This will improve NOx reduction measures, and thereby help to protect both human health and the environment.


The next-generation inspection!

17/09/2019

Combining machine vision and deep learning gives companies a powerful advantage on both the operational and ROI fronts. So, understanding the differences between traditional machine vision and deep learning, and how these technologies complement each other – rather than compete or replace one another – is essential to maximising investments. In this article, Bruno Forgue of Cognex helps to clarify things.

Machine Vision vs Deep Learning

Over the last decade, technological change and improvement have been remarkably varied: device mobility… big data… artificial intelligence (AI)… internet-of-things… robotics… blockchain… 3D printing… machine vision… In all these domains, novel things have come out of R&D labs to improve our daily lives.

Engineers like to adopt and adapt technologies to their tough environments and constraints. Strategically planning for the adoption and leveraging of some or all of these technologies will be crucial in the manufacturing industry.

Let’s focus here on AI, and specifically on deep learning-based image analysis, also known as example-based machine vision. Combined with traditional rule-based machine vision, it can help robotic assemblers identify the correct parts, detect whether a part is present, missing or improperly assembled, and more quickly determine whether these are problems. And all this can be done with high precision.

Figure 1 – The first differences between traditional machine vision and deep learning include:
1. The development process (tool-by-tool rule-based programming vs. example-based training);
2. The hardware investments (deep learning requires more processing and storage);
3. The factory automation use cases.

Let’s first see what deep learning is
Without getting too deep (may I say?) into the details, let’s talk about GPU hardware. GPUs (graphics processing units) gather thousands of relatively simple processing cores on a single chip. Their architecture resembles that of neural networks, making it possible to deploy biology-inspired, multi-layered “deep” neural networks that mimic the human brain.

By using such an architecture, deep learning can solve specific tasks without being explicitly programmed to do so. In other words, whereas classical computer applications are programmed by humans to be “task-specific”, deep learning trains neural networks on data (images, speech, text, numbers…). Starting from a primary logic developed during initial training, deep neural networks continuously refine their performance as they receive new data.
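As a toy illustration of example-based training – a nearest-centroid classifier standing in for a real deep neural network, with hand-made two-number "feature vectors" instead of images – consider:

```python
def train(examples):
    # examples: list of (label, feature_vector).
    # "Training" here is just averaging feature vectors per class --
    # a deliberately simple stand-in for gradient-based deep learning.
    sums, counts = {}, {}
    for label, feats in examples:
        counts[label] = counts.get(label, 0) + 1
        acc = sums.setdefault(label, [0.0] * len(feats))
        for i, v in enumerate(feats):
            acc[i] += v
    return {lbl: [v / counts[lbl] for v in acc] for lbl, acc in sums.items()}

def classify(centroids, feats):
    # Assign the label whose centroid is closest (squared Euclidean distance).
    def dist(lbl):
        return sum((a - b) ** 2 for a, b in zip(centroids[lbl], feats))
    return min(centroids, key=dist)

model = train([("good", [1.0, 0.1]), ("good", [0.9, 0.2]),
               ("defect", [0.2, 0.9]), ("defect", [0.1, 1.0])])
print(classify(model, [0.85, 0.15]))  # good
```

The point is the workflow, not the maths: behaviour comes from labelled examples, and showing the system a new kind of part means retraining on new examples rather than rewriting rules.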

Deep learning is based on detecting differences: it permanently looks for alterations and irregularities in a set of data, making it sensitive to unpredictable defects. Humans do this naturally well; computer systems based on rigid programming do not. (But unlike human inspectors on production lines, computers do not tire of performing the same iteration over and over.)

In daily life, typical applications of deep learning are facial recognition (to unlock computers or identify people on photos)… recommendation engines (on streaming video/music services or when shopping at ecommerce sites)… spam filtering in emails… disease diagnostics… credit card fraud detection…

Deep learning technology makes very accurate outputs based on the trained data. It is being used to predict patterns, detect variance and anomalies, and make critical business decisions. This same technology is now migrating into advanced manufacturing practices for quality inspection and other judgment-based use cases.

When implemented for the right types of factory applications, in conjunction with machine vision, deep learning will scale up profits in manufacturing (especially when compared with investments in other emerging technologies that might take years to pay off).

How does deep learning complement machine vision?
A machine vision system relies on a digital sensor placed inside an industrial camera with specific optics. It acquires images, which are fed to a PC, where specialised software processes and analyses them, measuring various characteristics for decision-making. Machine vision systems perform reliably with consistent, well-manufactured parts, operating via step-by-step filtering and rule-based algorithms.

On a production line, a rule-based machine vision system can inspect hundreds, or even thousands, of parts per minute with high accuracy. It’s more cost-effective than human inspection. The output of that visual data is based on a programmatic, rule-based approach to solving inspection problems.

On a factory floor, traditional rule-based machine vision is ideal for: guidance (position, orientation…), identification (barcodes, data-matrix codes, marks, characters…), gauging (comparison of distances with specified values…), inspection (flaws and other problems such as missing safety-seal, broken part…).

Rule-based machine vision is great with a known set of variables: Is a part present or absent? Exactly how far apart is this object from that one? Where does this robot need to pick up this part? These jobs are easy to deploy on the assembly line in a controlled environment. But what happens when things aren’t so clear cut?
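A minimal sketch of such a rule-based check – here, a presence/absence test that simply counts bright pixels in a grayscale frame (the thresholds are hypothetical; real systems use calibrated vision tools):

```python
def part_present(image, intensity_threshold=128, min_pixels=4):
    # Rule-based presence check: a part is "present" if enough pixels
    # exceed the brightness threshold in the inspected region.
    bright = sum(1 for row in image for px in row if px > intensity_threshold)
    return bright >= min_pixels

# Toy 3x4 grayscale frame: a bright part occupies the right-hand side
frame = [
    [10, 10, 200, 210],
    [12, 11, 205, 220],
    [ 9, 10,  15,  14],
]
print(part_present(frame))  # True
```

The rule works as long as lighting and part appearance stay within the programmed assumptions – exactly the "known set of variables" the text describes, and exactly what breaks down in the less clear-cut cases that follow.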

This is where deep learning enters the game:

• Solve vision applications too difficult to program with rule-based algorithms.
• Handle confusing backgrounds and variations in part appearance.
• Maintain applications and re-train with new image data on the factory floor.
• Adapt to new examples without re-programming core networks.

A typical industrial example: looking for scratches on electronic device screens. Those defects will all differ in size, scope and location, or appear on screens with different backgrounds. Despite such variations, deep learning can tell the difference between good and defective parts. Plus, training the network on a new target (like a different kind of screen) is as easy as taking a new set of reference pictures.

Figure 2 – Typical industrial example: looking for defects which are all different in size, scope, location, or across various surfaces with different backgrounds.

Inspecting visually similar parts with complex surface textures and variations in appearance is a serious challenge for traditional rule-based machine vision systems. “Functional” defects, which affect a part’s utility, are almost always rejected, but “cosmetic” anomalies may not be, depending upon the manufacturer’s needs and preferences. To make matters worse, these defects can be difficult for a traditional machine vision system to distinguish between.

Due to multiple variables that can be hard to isolate (lighting, changes in colour, curvature, or field of view), some defect detections are notoriously difficult to program and solve with a traditional machine vision system. Here again, deep learning brings the appropriate tools.

In short, traditional machine vision systems perform reliably with consistent and well-manufactured parts, and the applications become challenging to program as exceptions and defect libraries grow. For the complex situations that need human-like vision with the speed and reliability of a computer, deep learning will prove to be a truly game-changing option.

Figure 3 – Compared to Traditional Machine Vision, Deep Learning is:
1. Designed for hard-to-solve applications;
2. Easier to configure;
3. Tolerant to variations.

Deep learning’s benefits for industrial manufacturing
Rule-based machine vision and deep learning-based image analysis are a complement to each other instead of an either/or choice when adopting next generation factory automation tools. In some applications, like measurement, rule-based machine vision will still be the preferred and cost-effective choice. For complex inspections involving wide deviation and unpredictable defects—too numerous and complicated to program and maintain within a traditional machine vision system—deep learning-based tools offer an excellent alternative.

• Learn more about Cognex deep learning solutions

#Machinehealth #PAuto @Cognex_Corp @CognexEurope

Directing traffic smartly.

01/09/2019

In the 17th century, Captain Frans Banninck Cocq, the central figure in Rembrandt’s masterpiece `The Night Watch’ (housed at the Rijksmuseum, pictured above) provided safety and security in Amsterdam. Today, the city relies on the Verkeer en Openbare Ruimte to ensure safe navigation through the busy streets. (See reproduction of the famous picture at bottom of this article)

Amsterdam is the largest city in the Netherlands, with a metropolitan population of around 2.4 million. The city is also one of Europe’s leading tourist destinations, attracting around 6 million people a year. Amsterdam’s oldest quarter, the medieval centre, is very small and has an incredibly complex infrastructure, with roads, tunnels, trams, metro, canals and thousands of bicycles. This creates one of the world’s most challenging traffic management environments, which the office for Traffic and Public Space (Verkeer en Openbare Ruimte) meets through vision, action and modern technology. This is typified by the new intelligent data communications network being installed to support the city’s traffic control system, for which they have selected advanced Ethernet switching and routing technology from Westermo.

In 2015, the municipality of Amsterdam created its own team that was responsible for the development and operation of the data communication network that supports the Intelligent Traffic Systems (ITS) in the city. Previously, this was managed by an external partner, but due to rising costs, and increasing performance and cybersecurity requirements, it was decided the best way forward was to take back full responsibility for the network.

Eric Bish, Senior Systems and Management Engineer and Project Manager and Albert Scholten, System and Management Engineer, were two key members of this team responsible for the Information and Communications Technology (ICT) systems for traffic control in Amsterdam.

Albert Scholten

“The existing communications network supporting the traffic control system had served us well for many years, but it had become outdated and the daily cost of maintaining the leased-line copper network was very high. With the challenges the city faced going forward, we needed to modernise our systems,” said Scholten.

“The old network was mostly based on analogue modems, multi-drop-modems, xDSL extenders and 3G routers from Westermo,” explained Bish. “These devices have proved to be very reliable, so when we started to look at the requirements for the new system, Westermo technology was given serious consideration.”

Project planning
“We worked closely with Axians, our supplier of network services, and Modelec Data Industrie, the distributor of Westermo products in the Netherlands. The collaboration between the three parties was essential to the success of the project. Modelec Data Industrie are very knowledgeable about industrial data communications and during constructive discussions regarding the system requirements they suggested that Westermo technologies would be a good choice for building a robust and reliable network for the future.

“From our meetings a roadmap was established. Our long-term plan is largely based on having a fibre optic infrastructure managed by Westermo Lynx and RedFox Ethernet switches. However, installing new cables is a costly and time-consuming process, so where existing fibre optic cabling is not already available, we have found the Westermo Wolverine Ethernet Extender to be extremely useful. This device allows us to create reliable, high speed, fully managed network solutions using the existing copper cables linking the traffic light systems. For remote connections, between the edge networks and the control centre, we have used Westermo MRD 4G cellular routers, which offer a redundant SIM option and simplify the process of setting up IPsec VPNs.”

Equipment testing

Eric Bish

Before a large-scale implementation of the new system could begin, the Lynx switches and Wolverine Ethernet Extenders were tested at some of the less critical road junctions. To assess the Westermo MRD 4G cellular routers, a mobile test system was constructed and taken to popular parts of Amsterdam during King’s Day, the annual Dutch national holiday and the busiest day of the year. Despite the huge crowds swamping the mobile masts, the routers delivered excellent performance.

“Having met our required standards during testing, the Westermo devices were deployed extensively throughout the city and are now providing the data communications for several major traffic control systems. Over 1300 pieces of equipment are currently connected via the new network and with the traffic control systems being constantly upgraded this figure continues to grow.”

“Westermo offers a broad range of products suitable for traffic control applications, which has helped us to meet all of our needs for this project. We have found the technology to be robust and reliable. The devices consume very little power, which means they generate little heat. This is important, as the switches are often installed in cramped, unventilated cabinets with other electronics that can be damaged if they get too hot.

“The Westermo Lynx switch is very versatile, offering an array of smart features and network connections. For example, the SFP option gave us the ability to easily switch between copper and fibre wiring, while the serial port enabled connection to legacy traffic light systems. The option to perform text-based configuration from a console port has supported our need for fine-grained control and rapid mass deployment of devices. Every device received a consistent configuration, but we had the flexibility to adjust the configuration of specific devices where required. This functionality has enabled us to install all the devices in a little over 12 months. This helped us to make significant savings because the costly leased lines to the datacenter could be terminated sooner.

Network capability
“While we were installing the new network, we needed to retain the old system and move the functionality across gradually. However, because the cost of maintaining the old leased-line copper network was so high, we wanted the new network to be very simple and fast to implement. We started with a classic layer-2 approach, consisting of an MRD router and up to six Lynx switches or Line Extenders connected to it. Every Traffic Light Controller was then connected to a Line Extender or switch, depending on the existing cabling in place.

“However, because it is difficult, time consuming and costly to install and maintain a data network of this size within a city such as Amsterdam, we knew the new network would eventually have to support more than just the traffic light systems. In fact, it must support camera surveillance, traffic information systems, automatic number plate recognition cameras and even public lighting systems. Critically, these other applications must be isolated from each other for security purposes, while changes or additions to the network must also be simple to achieve.

“Efficient use of the cable infrastructure is therefore critical, which is why we selected switches with layer 3 functionality at the start of the project. This enabled us to create a layer-3 network design. A clever combination of OSPF routing, local firewalling and layer-2 and layer-3 features has yielded a very flexible, secure and redundant gateway network design. The network is now sufficiently resilient to withstand common issues, such as cable damage and power outages.
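
The isolation described above can be illustrated with a small sketch. This is not Westermo's or the city's actual configuration: the subnet addresses and service names are hypothetical, and the logic simply models a default-deny policy in which each service subnet may talk only to the control centre, never to another service.

```python
from ipaddress import ip_address, ip_network

# Hypothetical per-service subnets (illustrative only, not the city's real plan).
SUBNETS = {
    "traffic_lights":  ip_network("10.10.0.0/16"),
    "anpr_cameras":    ip_network("10.20.0.0/16"),
    "public_lighting": ip_network("10.30.0.0/16"),
    "control_centre":  ip_network("10.0.0.0/24"),
}

# Default-deny firewall model: each service is allowed to exchange traffic
# with the control centre only, which keeps the services isolated from
# each other even though they share one physical cable infrastructure.
ALLOWED = {(svc, "control_centre") for svc in SUBNETS if svc != "control_centre"}
ALLOWED |= {("control_centre", svc) for svc in SUBNETS if svc != "control_centre"}

def service_of(addr: str):
    """Map an IP address back to the service subnet it belongs to."""
    a = ip_address(addr)
    for name, net in SUBNETS.items():
        if a in net:
            return name
    return None

def permitted(src: str, dst: str) -> bool:
    """True if the (source service, destination service) pair is whitelisted."""
    return (service_of(src), service_of(dst)) in ALLOWED

print(permitted("10.10.1.5", "10.0.0.10"))  # traffic light -> control centre: True
print(permitted("10.10.1.5", "10.20.3.7"))  # traffic light -> ANPR camera: False
```

In a real deployment this separation would be enforced by the switches' layer-3 routing and firewall rules rather than application code, but the whitelist idea is the same.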

“Using the Westermo RedFox switches, we will soon couple our updated network to the fibre optic rings used to control the city’s metro lines. This will provide fully redundant gigabit connections to our datacenter for many of our surveillance cameras and traffic systems.

“Using Westermo technology we have built a robust and reliable networking solution that will last for a long time. The technology offers the functionality we need to modernise the network and enables us to make quick system upgrades over the lifecycle of the system,” Bish added. “As far as we are aware, this is the most advanced network infrastructure in place in The Netherlands and to date the solution has performed flawlessly. We expect that within five years the industrial network will cover the whole of Amsterdam and its surrounding areas, relying almost completely on gigabit fibre links, with only a handful of 4G connections still required.”


Use case 1: Traffic light control
There are several hundred traffic light systems throughout Amsterdam. These work autonomously, but can also be controlled centrally, which is one of the most critical tasks for the city’s department for traffic and public space. In the event of traffic congestion, traffic control centre operators can manage the flow of traffic and if necessary, reroute traffic to less crowded roads.

The traffic light control systems interconnect several traffic lights. The infrastructure connecting the traffic lights is a mix of existing copper cables and new fibre cables. However, in order to connect a string of traffic lights back to the control room, the city has been relying on leased lines. This solution is not only expensive, costing around EUR 2 million per year, but also does not provide the reliability required for a system of this magnitude. The savings made as a result of replacing the leased lines with the Westermo cellular routers are estimated to cover the cost of the network upgrade project within just three years.
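
The stated figures imply a simple payback calculation. The annual leased-line cost is given in the article; the total project cost below is a hypothetical figure chosen only to be consistent with the quoted three-year payback, not a number from the source.

```python
# Simple payback-period sketch using the figure stated in the article.
leased_line_cost_per_year = 2_000_000  # EUR per year, stated in the article
project_cost = 6_000_000               # EUR, hypothetical illustration

# Payback period = one-off project cost / annual saving
payback_years = project_cost / leased_line_cost_per_year
print(f"Payback period: {payback_years:.1f} years")  # → 3.0 years
```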

Use case 2: Environmental Zone Enforcement
An environmental zone has been established in the central part of Amsterdam with the aim of decreasing pollution from motor vehicles. Vehicles that are not environmentally friendly are prohibited from entering the ‘green zone’, and automatic number plate recognition (ANPR) cameras have been installed to ensure that motorists comply with the restriction. Approximately 80 control points have been established at the entrances to the city to monitor about three million cars every day. Between one and five ANPR cameras automatically read the vehicle registration numbers as vehicles pass the control points. The photos are processed inside the camera, converted into simple text information and sent to the control centre through a secure encrypted IPsec VPN tunnel using the MRD 4G cellular router. The City of Amsterdam plans to participate in the European C-ITS smart traffic project, which will allow real-time traffic optimisation. This will require more bandwidth and lower latency, so in time the mobile connections will be replaced with a fibre optic network, using, for example, the Lynx and RedFox switches.

Use case 3: Traffic observation and situation assessment
Traffic in Amsterdam is continuously monitored from the control centre to help operators maintain the flow of traffic, reduce congestion and minimise the risk of accidents. Operators make decisions based on the information provided by hundreds of cameras installed across the city. Many of the regular surveillance cameras are connected to the network via Westermo switches. The real-time video feed from the ANPR cameras can also be viewed for traffic control purposes. These are connected to the control room using Westermo MRD 4G cellular routers, which provide secure IPsec encrypted VPN tunnels. When traffic congestion occurs, the traffic control managers are permitted to disable the environmental monitoring system and activate predefined scenarios that reroute traffic to ease the congestion.

De Nachtwacht (The Night Watch)

@westermo @hhc_lewis #Netherlands

Gas sensing in the purification process of drinking water.

28/08/2019

The processing of clean and safe drinking water is an international issue. Estimates suggest that, if no further improvements are made to the availability of safe water sources, over 135 million people will die from potentially preventable diseases by 2020.1

Even within Britain, water purification and treatment is big business, with £2.1 billion (€2.37b, 28/8/2019) invested by utilities in England and Wales between 2013 and 2014.2 Water purification consists of removing undesirable chemicals, bacteria, solids and gases from water so that it is safe to drink and use. The standard of purified water varies depending on its intended purpose: for example, water used for fine chemical synthesis may need to be ‘cleaner’ (i.e. have fewer chemicals present) than is tolerable for drinking water, the most common use of purified water.

Purification Process
The process of water purification involves many different steps. The first step, once the water has been piped to the purification plant, is filtering to remove any large debris and solids. There also needs to be an assessment of how dirty the water is to design the purification strategy. Some pretreatment may also occur using carbon dioxide to change pH levels and clean up the wastewater to some extent. Here, gas monitors are used to ensure the correct gas levels are being added to the water and unsafe levels of the gas do not build up.

The following steps include chemical treatment and filtration to remove dissolved ionic compounds.3 Then, disinfection can occur to kill any remaining bacteria or viruses, with additional chemicals being added to provide longer-lasting protection.4 At all stages, the water quality must be constantly monitored. This is to ensure that any pollutants have been adequately removed and the water is safe for its intended purpose.

In-line gas monitors are often used as part of the water treatment process as a way of monitoring total organic carbon (TOC) content. Carbon content in water can arise from a variety of sources, including bacteria, plastics or sediments that have not been successfully removed by the filtration process.5 TOC is a useful proxy for water cleanliness as it covers contamination from a variety of different sources.

To use non-dispersive infrared (NDIR) gas monitors to analyse the TOC content of water, a few extra chemical reaction and vaporisation steps need to be performed to cause the release of CO2 gas. The resulting concentration of gas can then be used as a proxy for TOC levels.6 This provides a metric that can be used to determine whether additional purification is required or whether the water is safe for use.
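
The conversion step can be sketched as simple stoichiometry: if the organic carbon in a sample is fully oxidised, each mole of carbon releases one mole of CO2, so the measured CO2 mass maps back to a carbon mass. The sample volume and CO2 figure below are illustrative assumptions, not values from any particular instrument.

```python
# Sketch: estimate TOC (mg of carbon per litre) from the CO2 released when a
# water sample is fully oxidised. Assumes complete conversion C -> CO2.
M_C = 12.011    # g/mol, molar mass of carbon
M_CO2 = 44.009  # g/mol, molar mass of CO2

def toc_mg_per_litre(co2_released_mg: float, sample_volume_l: float) -> float:
    """Each mole of CO2 corresponds to one mole of organic carbon."""
    carbon_mg = co2_released_mg * (M_C / M_CO2)
    return carbon_mg / sample_volume_l

# Example: 3.66 mg of CO2 released from a 0.1 L sample
print(round(toc_mg_per_litre(3.66, 0.1), 2))  # → 9.99 (mg C per litre)
```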

Need for Gas Monitors
NDIR gas sensors can be used as a safety device in the water purification process, as carbon dioxide, methane and carbon monoxide are some of the key gases produced during the treatment process.5 The other key use is the analysis of TOC content as a way of checking water purity.7 NDIR sensors are particularly well suited to TOC analysis, as carbon dioxide absorbs infrared light very strongly. This means that even very low carbon dioxide concentrations can be detected easily, making it a highly sensitive measurement approach.6 Other hydrocarbon gases can also easily be detected in this way, making NDIR sensors a highly flexible, adaptable approach to monitoring TOC and dissolved gas content in water.
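
The sensitivity claim follows from the Beer-Lambert law: the transmitted intensity falls exponentially with concentration, so a strongly absorbing gas such as CO2 produces a measurable signal even at low concentrations. A minimal sketch follows; the absorption coefficient and path length are hypothetical round numbers chosen for illustration, not Edinburgh Sensors' figures.

```python
import math

def transmittance(concentration: float, absorption_coeff: float,
                  path_length: float) -> float:
    """Beer-Lambert law: I/I0 = exp(-epsilon * c * L)."""
    return math.exp(-absorption_coeff * concentration * path_length)

# Illustrative values only: a gas that absorbs strongly in the IR (as CO2
# does near 4.26 um) shows a clear intensity drop even at low concentration.
eps = 50.0  # effective absorption coefficient, 1/(volume fraction * m), hypothetical
L = 0.05    # optical path length in metres, hypothetical

for c in (0.0004, 0.004, 0.04):  # 400 ppm, 4000 ppm, 4 % by volume
    # 400 ppm barely attenuates the beam; 4 % removes roughly 10 % of it.
    print(f"{c:.4f} -> transmittance {transmittance(c, eps, L):.4f}")
```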

Sensor Solutions
The need for constant gas monitoring to guide and refine the purification process during wastewater treatment means water purification plants need permanent, easy-to-install sensors capable of continual online monitoring. One of the most effective ways of achieving this is to use OEM sensors that can be integrated into existing water testing equipment to also provide information on water purity.

This is why Edinburgh Sensors’ range of non-dispersive infrared (NDIR) gas sensors is an ideal solution for water purification plants. NDIR sensors are highly robust, with excellent sensitivity and accuracy across a range of gas concentrations. Two of the sensors they offer, the Gascard NG8 and the Guardian NG,9 are suitable for detecting carbon monoxide, carbon dioxide or other hydrocarbon gases. If only carbon dioxide is of interest, then Edinburgh Sensors offers a more extensive range of monitors, including the Gascheck10 and the IRgaskiT.11

NDIR detection of these gases offers further advantages: the Guardian NG, for example, warms up in less than one minute and can measure such gases across 0 – 100 % concentration with a response time of less than 30 seconds from the sample inlet. The readout is accurate to ± 2 %, and all these sensors maintain this accuracy even in challenging environmental conditions of 0 – 95 % humidity thanks to a self-compensating readout.

The Guardian NG comes with its own readout and menu display for ease of use and simply requires a reference gas and power supply to get running. For water purification purposes, the Gascard is particularly popular, as the card-based device is easy to integrate into existing water testing equipment, so gas testing can occur alongside purity checks.
Edinburgh Sensors also offers custom gas sensing solutions, with full technical support throughout the sales, installation and maintenance process.

References
1. Gleick, P. H. (2002). Dirty Water: Estimated Deaths from Water-Related Diseases 2000–2020. Pacific Institute Research Report, 1–12.

2. Water and Treated Water (2019), https://www.gov.uk/government/publications/water-and-treated-water/water-and-treated-water

3. Pangarkar, B. L., Deshmukh, S. K., Sapkal, V. S., & Sapkal, R. S. (2016). Review of membrane distillation process for water purification. Desalination and Water Treatment, 57(7), 2959–2981. https://doi.org/10.1080/19443994.2014.985728

4. Hijnen, W. A. M., Beerendonk, E. F., & Medema, G. J. (2006). Inactivation credit of UV radiation for viruses, bacteria and protozoan (oo)cysts in water: A review. Water Research, 40(1), 3–22. https://doi.org/10.1016/j.watres.2005.10.030

5. McCarty, P. L., & Smith, D. P. (1986). Anaerobic wastewater treatment. Environmental Science and Technology, 20(12), 1200–1206. https://doi.org/10.1021/es00154a002

6. Scott, J. P., & Ollis, D. F. (1995). Integration of chemical and biological oxidation processes for water treatment: Review and recommendations. Environmental Progress, 14(2), 88–103. https://doi.org/10.1002/ep.670140212

7. Florescu, D., Iordache, A. M., Costinel, D., Horj, E., Ionete, R. E., & Culea, M. (2013). Validation procedure for assessing the total organic carbon in water samples. Romanian Reports of Physics, 58(1–2), 211–219.

8. Gascard NG, (2019), https://edinburghsensors.com/products/oem/gascard-ng/

9. Guardian NG (2019) https://edinburghsensors.com/products/gas-monitors/guardian-ng/

10. Gascheck (2019), https://edinburghsensors.com/products/oem/gascheck/

11. IRgaskiT (2019), https://edinburghsensors.com/products/oem-co2-sensor/irgaskit/

12. Boxed GasCard (2019) https://edinburghsensors.com/products/oem/boxed-gascard/


#Pauto @Edinst

Carry on regardless?

20/08/2019

It is with a mixture of foreboding and uncertainty that the people of Britain are looking forward to this year’s Halloween – 31st October 2019. The feeling in the rest of Europe may be described as sorrow mixed with total incomprehension. Business struggles on, however, and continues to function despite the planned and unplanned difficulties chosen by the people and/or their elected representatives.

“Will I go or will I stay?”

Take, as an example, the annual Advanced Engineering event at Britain’s National Exhibition Centre (the NEC) near Birmingham, in the English Midlands. It is described by the organisers as “The UK’s must-attend event for advanced manufacturing technology, innovation and supply chain solutions” where the many thousands of visitors will be guaranteed to “come away from their visit with ideas to grow their businesses for the future. See, touch and discover the newest technologies to achieve production efficiencies, reduce time and costs, and get you ahead of your competitors.”

With opportunities to network with some 15,000 professionals from OEMs and supply chain partners, Advanced Engineering provides a platform for knowledge transfer and business discussions across R & D, design, test, measurement & inspection, raw materials & processing, manufacturing, production and automation.

If, as is now widely expected, Britain drops out of the European Union at 11.00 pm on the 31st of October, the complete business picture of trade within and outside of the United Kingdom will have altered in a myriad of small and great ways.

The European Union will have been diminished by one member state, from 28 to 27 independent states. Suppliers, their goods, and buyers from states outside of the United Kingdom, freely admitted to the country to attend the show, will experience, many for the first time, the sort of border controls and delays that are usual for those travelling to “third countries” if they delay their departure to the day after the event finishes. This appears to be the implication of remarks by the current British Home Secretary, if the Daily Telegraph newspaper’s report is to be believed: “Freedom of movement by European Union nationals into the UK will end overnight from October 31 in the event of a no deal Brexit, Priti Patel has signalled.”

As a citizen of the European Union who travels frequently to Britain for this and other events, I find this a new hurdle to be crossed and to be taken into consideration when travelling to this new “third country”! It will be so much easier to travel to such events in Germany, France and Italy, and so much easier to get goods and people from those countries than from what used to be the closest and easiest country to deal with.

But of course we have no idea how this whole thing will play out. Still! How plaintive do the words of the outgoing head of the European Commission in 2016 sound now: “But I thought they had a plan!” And how grimly prophetic sound the blunt words of the outgoing European Council President, who more recently mused on “what that special place in hell looks like for those who promoted Brexit, without even a sketch of a plan how to carry it out safely”.

At the moment this writer is undecided on travelling to this event. I have my car insured now to travel in a third country, but I am uncertain whether I am prepared to weather the delays that must occur as I endeavour to board a ferry.

Since this was published news has broken that valid Irish insurance discs will serve as proof of insurance for those driving Irish registered vehicles in Britain and Northern Ireland, in the event of a no-deal Brexit.

Of one thing I am sure: Britain will muddle through this puzzle, but how, I am not sure. Nor, I suspect, are the powers that be!

Our last article on this topic ended with the statement “Nobody knows!”

Not much has changed and everything has changed and yet “Nobody knows!”

• An interesting aside that I have seen rarely mentioned is that all those born in Northern Ireland, whether or not they voted for or against this decision, may remain citizens of the European Union by virtue of their right to Irish citizenship, whether Britain leaves, remains or drops out of the Union. This is guaranteed by the international agreement signed by the United Kingdom and Ireland, guaranteed by the European Union, and usually referred to as the Good Friday Agreement.

See also:
Who knows? The Brexit dilemma! (Feb 2019)
Nobody knows! (June 2016)

#Brexit #PAuto #AEUK19 @advancedenguk