Post-pandemic environmental monitoring

23/07/2020
Matt Dibbs, Managing Director of Meteor Communications Ltd., explains how the Coronavirus pandemic presented significant challenges to the collection of environmental data. However, by utilising novel technology, British water companies and the Environment Agency have been able to continue gathering water quality data in locations from Cornwall to Cumbria. Matt believes that this provides a template for environmental monitoring in the future.

Water quality monitoring
The British Environment Agency and water utilities have statutory obligations to protect and enhance water resources. In order to fulfil these obligations they undertake large numbers of measurements to establish baseline data, detect trends, monitor mitigation measures, and identify sources of pollution from both point and diffuse sources. This involves making a range of measurements, either by collecting samples for laboratory analysis or by employing portable instruments in the field. To support these activities, rapidly deployable, automatic, remote monitoring systems have also been developed to provide real-time access to data 24/7.

The Environment Agency’s Environmental Sensor Network (ESNET) is operated by the National Laboratory Service. This agile monitoring network of over 150 sites provides a template for sustainable, resilient environmental monitoring. ESNET comprises modular water quality monitoring systems that can be quickly and easily deployed at remote sites. The telemetry modules and website capability are developed and supplied by Meteor Communications Ltd.

The laboratory analysis of samples is vitally important and allows industry and regulators to analyse for an extensive array of parameters. These samples inform a better understanding of longer-term trends and facilitate the monitoring of trace and emerging pollutants. However, water bodies are highly dynamic environments. Precipitation, flow and the intermittent or diurnal nature of process and agricultural effluents mean that in some circumstances it is necessary to employ enhanced high-resolution monitoring techniques to provide evidence upon which informed operational and policy decisions can be made.

Real-time, high-resolution water quality monitoring systems
The Environment Agency uses two main types of continuous water quality monitor: a fixed, cabinet or kiosk based system, and a portable version housed in a rugged case. Evidence from these systems is utilised by environment planners, ecologists, fisheries and environment management teams across the agency. These continuous water quality monitoring systems have been developed and refined over the last 20 years, so that they can be quickly and easily deployed at almost any national location, delivering data via telemetry within minutes of installation. This high-intensity monitoring capability substantially improves the temporal and spatial quality of data. The rapid deployment of these monitors now enables the agency to respond more quickly to pollution events.

Each system is built around a battery-powered multi-parameter water quality sonde, situated in the river or located in a bankside flow-through chamber, with readings taken at 15-minute intervals. Typically, the sondes are loaded with sensors for measuring parameters such as dissolved oxygen, temperature, pH, conductivity, turbidity, ammonium, blue-green algae and chlorophyll. Additionally, the systems can incorporate an automatic sampler which is triggered when pre-determined conditions arise. This means that event-triggered samples can be made available for subsequent laboratory investigation.
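
The trigger logic behind such event-driven sampling is essentially threshold checking. The Python sketch below illustrates the principle only; the parameter names, thresholds and callback are invented for illustration and do not represent the actual ESNET implementation.

```python
# Minimal sketch of event-triggered sampling. A reading arrives from the
# multi-parameter sonde every 15 minutes; if any parameter crosses its
# (hypothetical) threshold, the automatic sampler is told to retain a
# physical bottle sample for later laboratory analysis.

ALERT_RULES = {
    "dissolved_oxygen_mgl": lambda v: v < 4.0,    # low DO can indicate pollution
    "ammonium_mgl":         lambda v: v > 0.6,    # elevated ammonium
    "turbidity_ntu":        lambda v: v > 100.0,  # sudden sediment load
}

def breaches(reading: dict) -> list:
    """Names of parameters whose alert rule fires for this reading."""
    return [name for name, rule in ALERT_RULES.items()
            if name in reading and rule(reading[name])]

def on_new_reading(reading: dict, trigger_sampler) -> None:
    if breaches(reading):
        trigger_sampler()  # retain an event-triggered sample for the lab

# Example: a 15-minute reading with low dissolved oxygen
on_new_reading(
    {"dissolved_oxygen_mgl": 3.2, "ammonium_mgl": 0.2, "turbidity_ntu": 12.0},
    trigger_sampler=lambda: print("automatic sampler triggered"),
)
```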

Measured data is transferred securely to the Meteor Data Cloud, where stakeholders access graphical, tabular and geospatial views to see live readings and retrieve recorded data. With this customisable data presentation, managers are able to communicate evidence in a form which is more accessible and meaningful to public representatives, interest groups and stakeholders. This also enables bodies such as the Environment Agency to promote the use of open data, providing live data links, advice and services to a diverse range of public groups and organisations such as flood awareness groups, rivers trusts and angling organisations.

During the coronavirus pandemic the Environment Agency collected over 16,000 samples per day using ESNET and the cloud-based viewer was made available to all water quality practitioners across the Defra family, as well as a wide range of external bodies.

The advantages of remote monitoring networks
By collecting data automatically, the volume of evidence increases dramatically. Furthermore, such systems are resilient to the effects of events such as a lockdown, because monitoring practitioners are able to collect and assess data even if they are isolated at home.

In recent years, sensors and water quality sondes have undergone significant development to improve reliability and extend the period between service and calibration. Meteor Communications provides customers with a comprehensive monthly maintenance programme, and freshly calibrated units are constantly in circulation within the ESNET system.

Continuous monitoring enables the detection of transient spikes that can arise from pollution incidents; helping to raise timely alarms and identify ongoing sources of pollution. This evidence can be used to develop informed interventions by stakeholders in industry and agriculture, and to enable the adoption of practices that improve water quality.

Integrated systems such as those operated in the Thames Valley catchment are able to track pollution events as they move downstream, which means, for example, that water treatment plants can adjust their intakes accordingly.

Tidal water presents a major monitoring challenge because large volumes of saline water are constantly moving back and forth, which significantly complicates the comparison of measurements at one point on the river. So, for example, a measurement at one location at 9am is not directly comparable with another measurement at 9am a week later, because one might be taken at low tide and the other at high tide. The transient effects of combined sewer overflows (CSOs) and algal activity further complicate the picture. Water quality scientists at the Environment Agency have therefore worked closely with Meteor Communications to develop a software-based monitoring system, known as ‘Half Tide Correction’ (HTC). In simple terms, this corrects for the effects of the tide and allows assessment of the underlying water quality.
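
The details of HTC are not published here, but the underlying idea of removing a tidal signal can be illustrated simply. The sketch below is a rough approximation only: it suppresses the semi-diurnal tidal component (period of roughly 12.42 hours) in a 15-minute time series with a centred rolling mean. The real HTC algorithm is considerably more sophisticated.

```python
import pandas as pd

# Illustration only: average out the semi-diurnal tidal signal (~12.42 h
# period) in a 15-minute water quality series so the underlying trend
# emerges. The Environment Agency's actual Half Tide Correction is more
# sophisticated than this simple de-tiding.

def detide(series: pd.Series, period_hours: float = 12.42,
           sample_minutes: int = 15) -> pd.Series:
    """Average readings over one full tidal cycle."""
    window = round(period_hours * 60 / sample_minutes)  # ~50 readings
    return series.rolling(window, center=True, min_periods=1).mean()

# Example usage with a 15-minute conductivity series indexed by timestamp:
# corrected = detide(df["conductivity"])
```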

Continuous, accurate and robust data allows managers to assess the impact of developments and remediation measures. Good data, used as evidence, informs the evaluation of investments and leads to better decision making.

The ESNET network also provides image acquisition, and the Environment Agency and others have deployed over 600 ESNET camera sites. These remote cameras are used to continuously monitor a wide range of flood defence infrastructure and assets; rapidly detecting blockages or overflows and avoiding the need for unnecessary and costly site visits.

ESNET systems also provide an essential tool for measuring the effectiveness of Natural Flood Management (NFM) schemes. In Oxfordshire for example, working with a wide range of partners in the Evenlode catchment, the systems are helping to evaluate the effectiveness of NFM measures for the local community and other stakeholders.

Utilities – final effluent monitoring
The flexibility of the ESNET systems makes them ideal for monitoring water quality at wastewater treatment works. The responsibility for monitoring discharges rests with the operators themselves under the terms of operator self-monitoring (OSM) agreements. OSM is now delivered by a spot sampling regime supported by real-time monitors, so an opportunity exists for all stakeholders to benefit from the advantages of continuous monitoring.

A British water company is now operating 130 ESNET final effluent monitoring systems across its business. These sites have continued to operate during the COVID-19 lockdown, providing operators and managers with vital data with which to assess performance and compliance during this challenging period.

Summary
Recent advances in technology have enabled the development of continuous monitoring systems that are quick and easy to install. The portable ESNET system is routinely commissioned in less than an hour, and the pumped kiosks can usually be installed within half a day.

With little or no capital works necessary prior to the installation of an ESNET system, continuous, easily accessible, multi-parameter data can be established quickly and cost-effectively. Real-time monitoring means less travel, less time on site and a lower carbon footprint. Real-time data can also be provided to stakeholders, timely alarms can be triggered, and monitoring can continue unaffected by the impact of viral pandemics.

@MeteorComms @_Enviro_News #PAuto #Water #coróinvíreas #COVID19 #coronavirus

Gas detection equipment benefits from international co-operation.

08/04/2020

Critical Environment Technologies Canada Inc. (CETCI) was founded by Frank and Shirley Britton in 1995. Since that time, the company has expanded considerably and now employs around 35 people; developing and manufacturing gas detection equipment for global markets. One of the keys to the company’s success has been the relationship that it has built with sensor supplier Alphasense.

Frank’s career in gas detection stretches back to 1982, and when he was first visited by a salesperson from Alphasense in 2003, he was immediately impressed with the representative’s technical knowledge. “It was clear that he understood the issues that manufacturers face, and had a good knowledge of the challenging applications in which our equipment is commonly deployed. This was important, because it helped to build trust.”

Following that initial meeting, it was agreed that CETCI would trial some of Alphasense’s electrochemical gas detection sensors, and Frank was pleased to see how well they performed. “It was also very encouraging to note the high level of service that we enjoyed,” he adds. “Even though there were 5,000 miles between us and 8 hours in time difference, we have always received very prompt and useful responses to our service requests.

“In fact, I would go so far as to say that Alphasense has delivered superb levels of service from day one, and as a consequence is one of our best suppliers. It is also very useful that Arthur Burnley from Alphasense visits us every year to review progress and explore new ways for us to work together in the future.”

YESAIR portable

As the relationship with Alphasense has grown, the range of sensor technologies employed has expanded to include electrochemical, catalytic, optical, metal oxide and PID. For example, some of these sensors are deployed in portable indoor air quality instruments such as the YESAIR range. Available in two models (pump or diffusion), battery powered and with onboard datalogging, the YESAIR instruments have been designed for intermittent or continuous indoor air quality monitoring of temperature, RH, particulates and up to 5 gases. Each can be configured with parameter selection from more than 30 different plug-and-play gas sensors, as well as a particulate sensor.

CETCI also manufactures fixed gas detection systems, controllers and transmitters that are deployed to monitor hazardous gases; protecting health and safety in confined spaces and indoor environments. Customers are able to select from a range of target gases including ammonia, carbon monoxide, chlorine dioxide, chlorine, ethylene, ethylene oxide, fluorine, formaldehyde, hydrogen, hydrogen sulphide, hydrogen chloride, hydrogen cyanide, hydrogen fluoride, nitric oxide, nitrogen dioxide, oxygen, ozone, phosphine, silane, sulphur dioxide, methane, propane, TVOCs and refrigerants. The company’s products are employed in commercial, institutional, municipal and light industrial markets, and in a wide variety of applications. These include refrigeration plants, indoor swimming pools, water treatment plants, ice arenas, wineries and breweries, airports, hotels, fish farms, battery charging rooms, HVAC systems, food processing plants, vehicle exhausts and many more.

One of the main reasons for CETCI’s success is its ability to develop gas detectors that meet the precise requirements for specific markets. “We are large enough to employ talented people with the skills and experience to develop products that meet the latest requirements,” Frank explains. “But we are not so large that we are uninterested in niche applications – in fact we relish the challenge when a customer asks us to do something new, and this is where our relationship with Alphasense, and the technical support that they can provide, comes into its own.”

The market for gas detection equipment is constantly changing as new safety and environmental regulations are created around the world, and as new markets emerge. Again, the close relationship with Alphasense is vitally important; as new sensors are being developed, CETCI is moving into new markets that are able to utilise these technologies.

New market example – cannabis cultivation
Following the legalisation of marijuana in Canada and some other North American regions, greenhouses and other plant growth rooms have proliferated. These facilities can present a variety of potential hazards to human health. Gas-powered equipment may be a source of carbon monoxide; carbon dioxide enrichment systems may be utilised; air conditioning systems can potentially leak refrigerants; and propane or natural gas furnaces may be employed for heating purposes. All of these pose a potential risk, so an appropriate detection and alarm system is necessary.

Responding to this demand, CETCI developed monitoring systems comprising appropriate gas detectors connected to a controller with logging capability and a live display of gas levels. In the event of a leak or high gas concentration, the system can provide an audible or visual alarm, and relays can be configured to control equipment such as the ventilation system or a furnace.

Developing market example – car parking facilities

Car park installation

In recent years, the effects of vehicular air pollution on human health have become better understood, and received greater political and media attention. As a result, the owners and operators of parking facilities have become more aware of the ways in which they can protect their customers and staff.

Carbon monoxide is a major component of vehicle exhaust, and nitrogen dioxide levels are high in the emissions of diesel-powered engines. In more modern facilities, hydrogen may accumulate as a result of electric car charging stations. CETCI has therefore developed hazardous gas detection systems to protect air quality in parking locations. This equipment includes output relays which can minimise energy costs by controlling the operation of ventilation systems.

Summarising the secrets to a long and successful partnership in gas detection, Frank says: “One of the most important issues is of course the quality of the products, and we have always been impressed with the fact that Alphasense differentiates itself from other sensor manufacturers by testing every sensor.

“The next important issue is the quality of service; we need sensors to be delivered on time and in perfect condition, and when we have a technical query we have become accustomed to a very prompt response.

“We also value highly the opportunity to develop our businesses together – through regular conversations with Arthur and his colleagues we are able to plan our future product development and marketing strategies, so that we can meet the ever changing needs of the market. This has worked extremely well for the last 17 years and we foresee it doing so for many years to come.”

 

#Environment #Alphasense @cetci @_Enviro_News


Particulate monitors selling like hot cakes.

03/12/2016

Palas, the German manufacturer of particulate monitoring instruments, is expanding production to cope with demand for its fine particulate monitor, the Fidas® 200. In the following article Jim Mills explains why Air Monitors, the British distributor, is being kept busy by the demand for this exciting new technology.

PM monitoring – the ultimate goal
We monitor PM because of its acute health effects. It irritates our eyes and lungs, and some of the finer particles have more recently been shown to be able to move directly from the nasal cavity to the brain. Monitoring is therefore essential, but there are almost as many monitoring methods as there are types of PM, so it is vitally important to monitor what matters. If you are measuring dust from a construction site, the PM is relatively large in diameter and heavy, but if you are monitoring PM from diesel emissions in a city, the smallest particles, with much less mass but high particle numbers, are of greater interest. Monitoring a single size fraction provides an incomplete picture of particulate contamination and risks ignoring the PM of most interest, particularly if the ignored fractions are the finer particles that travel deepest into the lungs. The ideal PM monitor would therefore reliably and accurately monitor all important PM fractions, with high data capture rates and low service requirements… hence the heavy demand for the Fidas 200.

Fidas® 200
The Fidas 200 is a fine dust ambient air quality monitoring device, developed specifically for regulatory purposes, providing continuous and simultaneous measurement of PM1, PM2.5, PM4, PM10 and TSP (PMtot), as well as particle number concentration and particle size distribution between 180 nm and 18 µm (further non-certified size ranges are also available on request).

Employing a well-established measurement technology – optical light scattering of single particles – the Fidas 200 is equipped with a high intensity LED light source, which is extremely stable, delivering a long lifetime with minimal service requirements. An optical aerosol spectrometer determines the particle size using Lorenz-Mie scattered light analysis of single particles. These particles move through an optical measurement volume that is homogeneously illuminated with white light, and each particle generates a scattered light impulse that is detected at an angle of 85° to 95°. The particle number measurement is based on the number of scattered light impulses, while the level of each scattered light impulse is a measure of the particle diameter.
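
In outline, the signal processing therefore reduces to counting pulses and mapping each pulse height to a diameter through a calibration curve. The sketch below illustrates that principle only; the calibration values are invented for illustration, and the instrument's actual processing (full Lorenz-Mie inversion, coincidence handling and so on) is far more involved.

```python
import bisect

# Hypothetical pulse-height -> diameter calibration (monotonic lookup table).
# A real Fidas calibration derives from Lorenz-Mie theory; these numbers are
# invented purely to illustrate the counting-and-sizing principle.
PULSE_HEIGHTS = [0.01, 0.1, 1.0, 10.0, 100.0]   # arbitrary signal units
DIAMETERS_UM  = [0.18, 0.5, 1.8, 6.0, 18.0]     # corresponding sizes, µm

def pulse_to_diameter(height: float) -> float:
    """Piecewise-linear interpolation of the calibration curve."""
    i = bisect.bisect_left(PULSE_HEIGHTS, height)
    i = min(max(i, 1), len(PULSE_HEIGHTS) - 1)
    h0, h1 = PULSE_HEIGHTS[i - 1], PULSE_HEIGHTS[i]
    d0, d1 = DIAMETERS_UM[i - 1], DIAMETERS_UM[i]
    return d0 + (d1 - d0) * (height - h0) / (h1 - h0)

def number_concentration(pulses: list, sampled_volume_m3: float) -> float:
    """Particles per cm3: each detected pulse counts as one particle."""
    return len(pulses) / (sampled_volume_m3 * 1e6)

sizes = [pulse_to_diameter(p) for p in [0.05, 2.5, 40.0]]
print(sizes)  # roughly [0.32, 2.5, 10.0] µm under this toy calibration
```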

The Fidas 200 operates with a volume flow of approx. 0.3 m³/h and is equipped with a Sigma-2 sampling head, which enables representative measurements even under strong wind conditions. The sampling system includes a drying system that prevents measurement inaccuracies caused by condensation in high humidity, which means that it will continue to function correctly in misty or foggy conditions, but without the loss of semi-volatile fractions of the PM. It is also equipped with a filter holder for the insertion of a plane filter (47 or 50 mm in diameter), which enables subsequent chemical analysis of the aerosol.

Different versions of the Fidas 200 allow for stand-alone outdoors installation or for installation inside a measurement cabinet or air quality monitoring station.

Performance
The Fidas 200 is the only ambient continuous PM monitor in the UK to have gained both TÜV and MCERTS approval. The MCERTS certificate (Sira MC16290/01) confirms that the Fidas 200 complies with the MCERTS Performance Standards for Continuous Ambient Air Quality Monitoring Systems, and with MCERTS for UK Particulate Matter. The instrument has type-approval to the Standards EN 12341 (PM10) and EN 14907 (PM2.5), and is certified to the Standards EN 15267-1 and -2.

Importantly, the Fidas 200 has half the uncertainty of many of its rivals and one third of the required uncertainty (25%).

Typical data capture rates exceed 99%. This has been achieved by a design approach that is focused on reliability. For example, two pumps operate in parallel, providing redundancy protection, and the instrument continuously monitors status and calibration.

Monitoring frequency is adjustable, with time resolution ranging from 1 second up to 24 hours, and high-frequency data provides almost real-time access to readings when deployed with a remote web-enabled Envirologger. This enables the detection of short-term spikes, providing much greater insight into the causes of PM pollution.
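
To illustrate why high-frequency data matters, the short sketch below flags readings that rise sharply above a rolling baseline; the window and threshold are arbitrary assumptions and do not reflect the Envirologger's actual processing.

```python
import pandas as pd

# Illustrative spike detection on high-frequency PM data: flag readings far
# above a rolling median baseline. The window and factor are assumptions,
# not the behaviour of any particular instrument or logger.

def find_spikes(pm: pd.Series, window: str = "10min",
                factor: float = 3.0) -> pd.Series:
    """Return readings more than `factor` times the rolling median."""
    baseline = pm.rolling(window).median()
    return pm[pm > factor * baseline]

# Example usage with 1-second PM2.5 readings indexed by timestamp:
# spikes = find_spikes(df["pm2_5"])
```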

The Fidas instruments have been proven in many countries as well as Britain; Air Monitors has been supplying Fidas PM monitors for around three years and there are now over 30 monitors in operation in Britain alone.

Costs
One of the major financial attractions of the Fidas 200 is its extremely low operating cost; the requirement for consumables is almost nil (no filter required) and its power consumption is around one fifth of that of its nearest rival. Calibration can be checked and adjusted if necessary, quickly and easily in the field, with a simple monodisperse powder test.

The purchase cost of a single Fidas 200 is a little more than some ambient PM monitors, but it is less expensive than others. However, for most instruments, a requirement to monitor two fractions, say PM2.5 and PM10, would necessitate two instruments and therefore double the cost. With budgets under pressure, Fidas therefore provides an opportunity to obtain better data for less cost.

In summary, the Fidas 200 offers better performance than all of its rivals; usually at significantly lower capital cost and always with dramatically lower operational costs. Consequently, it is no surprise that these instruments are selling like hot cakes.

@airmonitors #PAuto @_Enviro_News


Continuous compliance with PLM.

27/07/2016
Adam Bannaghan, technical director of Design Rule, discusses the growing role of PLM in managing quality and compliance.

The advantages of product lifecycle management (PLM) software are widely understood: improved product quality, lower development costs, valuable design data and a significant reduction in waste. However, one benefit that does not get as much attention is PLM’s support of regulatory compliance.

Nobody would dispute the necessity of regulatory compliance, but in the product development realm it certainly isn’t the most interesting topic. Regardless of its lack of glamour, failure to comply with industry regulations can render the more exciting advantages of PLM redundant.

From a product designer’s perspective, compliance through PLM delivers notable strategic advantages. Achieving compliance in the initial design stage can save time and reduce engineering changes in the long run. What is more, this design-for-compliance approach sets the bar for quality product development, creating a unified standard to which the entire workforce can adhere. In addition, the support of a PLM platform significantly simplifies the compliance process, especially for businesses operating in sectors with fast-changing or complicated regulations.

For example, AS/EN 9100 is a series of globally recognised quality management guidelines for the aerospace sector, which is set to change later this year. December 2016 is the target date for companies to achieve these new standards – a fast transition for those managing compliance without the help of dedicated software.

Similarly, the defence industry has its own standards to follow. ITAR (International Traffic in Arms Regulations) and EAR (Export Administration Regulations) are notoriously strict exporting standards, delivering both civil and criminal penalties to companies that fail to comply.

“Fines for ITAR violations in recent years have ranged from several hundred thousand to $100 million,” explained Kay Georgi, an import/export compliance attorney and partner at law firm Arent Fox LLP in Washington. “Wilful violations can be penalised by criminal fines, debarment, both of the export and government contracting varieties, and jail time for individuals.”

PLM across sectors
The strict nature of such regulations is not limited to aerospace and defence, however. The electrical, food and beverage, pharmaceutical and consumer goods sectors are also subject to different, but equally stern, compliance rules.

Despite varying requirements across industries, there are a number of PLM options that support compliance on an industry-specific basis. Dassault Systèmes’ ENOVIA platform, for example, allows businesses to input compliance definitions directly into the program. This ensures that, depending on the industry, the product is able to meet the necessary standards. As an intelligent PLM platform, ENOVIA delivers full traceability of the product development process, from conception right through to manufacturing.

For those in charge of managing compliance, access to this data is incredibly valuable, for both auditing and providing evidence to regulatory panels. By acquiring industry-specific modules, businesses can rest assured that their compliance is being managed appropriately for their sector – avoiding nasty surprises or unsuccessful compliance.

For some industry sectors, failure to comply can cause momentous damage, beyond the obvious financial difficulties and time-to-market delays you might expect. For sensitive markets, like pharmaceutical or food and beverage, regulatory failure can wreak havoc on a brand’s reputation. What’s more, if the non-compliant product is subject to a recall, or the company is issued with a newsworthy penalty charge, the reputational damage can be irreparable.

PLM software is widely regarded as an effective tool to simplify product design. However, by providing a single source of truth for the entire development process, the potential of PLM surpasses this basic function. Using PLM for compliance equips manufacturers with complete data traceability, from the initial stages of design, right through to product launch. What’s more, industry-specific applications are dramatically simplifying the entire compliance process by guaranteeing businesses can meet particular regulations from the very outset.

Meeting regulatory standards is an undisputed obligation for product designers. However, as the strategic and product quality benefits of design-for-compliance become more apparent, it is likely that complying through PLM will become standard practice in the near future.

#PLM @designruleltd #PAuto #Pharma #Food @StoneJunctionPR

Celebrating twenty years of abnormality!

21/07/2014

This year the Abnormal Situation Management (ASM®) Consortium is celebrating 20 years of thought leadership in the process industry. The ASM Consortium grew out of a grassroots effort, begun in 1989, to address alarm floods. Honeywell spearheaded the development of a proposal to the US NIST Advanced Technology Program (ATP) to form a Joint Research & Development Consortium.

Background on the ASM Consortium
The ASM Consortium was started in 1994 to address process industry concerns about the high cost of incidents such as unplanned shutdowns, fires, explosions and emissions. The term Abnormal Situation Management® was coined to describe this problem area. Matching funds from NIST enabled the consortium to spend several years researching and developing highly advanced concepts to address the problem of abnormal situations. Since then, research has continued and increasing effort has been put into the development and deployment of solutions that incorporate ASM knowledge.

The basis of the ASM Consortium is collaboration and information-sharing. By working together, members achieve far more than they could working alone. Research results are published for members, and often further shared by means of webinars, seminars and workshops. User members also guide Honeywell in the selection and development of product solutions that incorporate ASM knowledge. Non-members can benefit from ASM research, as ASM Effective Practices Guidelines for Alarm Management, Display Design and Procedural Practices are available for purchase on Amazon.com.

The proposal addressed the challenging problem of abnormal situation management. In preparing for this proposal effort, Honeywell and its collaborators created the Abnormal Situation Management (ASM) Joint Research and Development Consortium (referred to as ASMC) under the U.S. Cooperative Research and Development Act. In November 1994, the ASM research joint venture began its research with $16.6 million (€12.27m) in funding for a three-year study program, including $8.1 million (€6m) from ATP and industry cost-sharing of $8.5 million (€6.29m).

This year, ASM Consortium members have met twice for week-long Quarterly Review Meetings (QRMs), once in Houston, Texas (USA) in April and again in Antwerp (B) in June. Along with its normal business, the Consortium discussed plans to celebrate its 20 years of service to the process industry. The Quarterly Review Meetings are a platform for ASM Consortium members to share the benefits gained from ASM practices and products, and to discuss new challenges faced in plant operations. Members of the Consortium besides Honeywell include industrial manufacturers, a human factors research company, and universities that collaborate to research best practices for managing abnormal situations in industrial facilities.

To celebrate its 20th year, the ASM Consortium will be spreading further awareness about managing and mitigating abnormal situations in the process industries by publishing journal articles, presenting white papers at leading industry conferences, and producing a planned video.


Ensuring that necessary dredging maintains water quality!

07/07/2014

Last winter brought unprecedented weather conditions to both Ireland and Britain. In the Read-out offices we were hit by a thunder and lightning storm which played havoc with our electronic equipment, and elsewhere in the region the rough seas did incredible damage. In the south-west of England, the farms and homes of the Somerset Levels and Moors, a sparsely populated coastal plain and wetland area of central Somerset, were severely hit by incredible flooding. Indeed, the effects of this will be felt in the area for many years to come.

This shows the incredible extent of last winter’s flooding with superimposed map showing location of the Somerset Levels and Moors.

A special monitoring system is helping protect water quality on the Somerset Levels and Moors, where a major dredging operation is under way following this severe flooding. The system, which was supplied by OTT Hydrometry and installed by Wavelength Environmental, is designed to protect the river ecology by issuing email alerts if water quality deteriorates beyond pre-set conditions. Any such alerts would immediately be relayed to the project team and an assessment of conditions would then be undertaken, so that working practices could be reviewed before work continued.

The flood caused extensive damage to properties in the area and many residents had to leave their homes.  Approximately 170 homes and businesses were affected. The Environment Agency estimated there were more than 65 million cubic metres of floodwater covering an area of 65 square kilometres.

Dredgers commenced work at the end of March 2014.

On Monday 31st March 2014, three months after the flooding began, dredging work started on the banks of the river Parrett between Burrowbridge and Moorland, just a few minutes from Junction 24 of the M5 in the south west of England. Costing £1 million per mile, 5 miles of river bank will be dredged (3 miles of the river Parrett and 2 miles of the river Tone), based on restoring the river channels to their 1960s profile and improving their drainage capability.

In recent years, an accumulation of sediment has narrowed the river channel and this is believed to be just one of the reasons for the severe flooding that took place. A network of mobile real-time water quality monitors is therefore being deployed to continuously monitor water quality upstream and downstream of the dredgers. This work complements the Environment Agency’s wider environmental monitoring.

Adcon Telemetry plus Hydrolab WQ sonde.

The monitors consist of Hydrolab water quality ‘sondes’ and Adcon telemetry systems which transmit near-live data during the dredging operation, which is due to run until the winter of 2014. The monitors are anchored to the river bed and suspended in the river by means of two small buoys. Each sonde is fitted with sensors for the measurement of dissolved oxygen (DO), ammonium, temperature, pH, conductivity and turbidity. A short cable connects each sonde to an Adcon telemetry unit on the bank, which transmits data via GPRS every 10 minutes. The sondes contain internal dataloggers; however, the transmitted data is available to project staff in near real-time via a web-based data portal. If water quality meets the pre-set alert conditions (for temperature, dissolved oxygen or ammonium), email messages are issued via the telemetry units. It is important to note that poor water quality can be caused by a number of factors, including low flow levels and high nutrient levels arising from many sources in the area.
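
As an illustration of the alerting step, the Python sketch below raises an email when a reading breaches a pre-set condition. The thresholds, addresses and SMTP relay are placeholders; in practice the alerting is configured within the Adcon telemetry system rather than scripted.

```python
import smtplib
from email.message import EmailMessage

# Hypothetical alert conditions mirroring the parameters described above.
MAX_TEMPERATURE_C = 25.0
MAX_AMMONIUM_MGL = 1.0
MIN_DISSOLVED_O2_MGL = 4.0

def breached(reading: dict) -> list:
    """Return the parameters that breach their pre-set alert conditions."""
    problems = []
    if reading["temperature_c"] > MAX_TEMPERATURE_C:
        problems.append("temperature")
    if reading["ammonium_mgl"] > MAX_AMMONIUM_MGL:
        problems.append("ammonium")
    if reading["dissolved_oxygen_mgl"] < MIN_DISSOLVED_O2_MGL:
        problems.append("dissolved oxygen")
    return problems

def email_alert(site: str, problems: list) -> None:
    msg = EmailMessage()
    msg["Subject"] = f"Water quality alert: {site}"
    msg["From"] = "telemetry@example.org"           # placeholder sender
    msg["To"] = "project-team@example.org"          # placeholder recipients
    msg.set_content(f"Alert conditions met at {site}: {', '.join(problems)}")
    with smtplib.SMTP("smtp.example.org") as server:  # placeholder relay
        server.send_message(msg)

# Example: a 10-minute reading from a downstream sonde
reading = {"temperature_c": 26.3, "ammonium_mgl": 0.4,
           "dissolved_oxygen_mgl": 3.1}
problems = breached(reading)
if problems:
    email_alert("Parrett, downstream of dredger 3", problems)
```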

Downstream monitoring!

The project plan has allowed for up to eight dredging teams, and the monitors are being installed approximately 50 metres upstream and 100-150 metres downstream of the dredgers – to allow sufficient mixing.

Simon Browning from Wavelength Environmental has been monitoring the data from the sondes and says: “The monitors are quick and easy to deploy, and have performed very well; however, portability is extremely important because the instruments have to be moved and redeployed as the dredging work proceeds.

“We have also started fitting GPS units to the telemetry systems so that we can keep track of the monitoring locations. This is important because each dredging team is constantly moving, so the monitors have to be moved regularly.”

Matthew Ellison, a telemetry specialist from OTT Hydrometry, was delighted to be involved in this high profile project and recommended the Adcon systems because they are extremely small and therefore portable, and have been designed to run on very low power, which means they can be left to run in remote locations for extended periods of time with just a small solar panel.

In January, Owen Paterson, the Environment Secretary in England, asked for a 20-year Action Plan to be developed to look at the various options for the sustainable management of flood risk on the Somerset Levels and Moors. The plan is supported by a £10m investment from the Department for Transport, with a further £500k from the Department for Communities and Local Government, on top of the £10m previously announced by the British Prime Minister. The plan has been published and is available on the Somerset County Council website.

Whilst the plan recognises that it will not be possible to stop flooding completely, it has 6 key objectives:

  1. Reduce the frequency, depth and duration of flooding.
  2. Maintain access for communities and businesses.
  3. Increase resilience to flooding for families, agriculture, businesses, communities, and wildlife.
  4. Make the most of the special characteristics of the Somerset Levels and Moors (the internationally important biodiversity, environment and cultural heritage).
  5. Ensure strategic transport connectivity, both within Somerset and through the county to the South West peninsula.
  6. Promote business confidence and growth.

“Dredging is one of the things the local community has really been pressing for, and people are going to check the Environment Agency is doing the work properly. The water quality monitoring undertaken by the mobile monitors and by our own static monitors will help provide assurance that the environment is not compromised by this work,” said Graham Quarrier of the Environment Agency.


Cloud Computing for SCADA

05/09/2013
Moving all or part of SCADA applications to the cloud can cut costs significantly while dramatically increasing reliability and scalability, says Larry Combs, vice president of customer service and support, InduSoft.

Although cloud computing is becoming more common, it’s relatively new for SCADA (supervisory control and data acquisition) applications. Cloud computing provides convenient, on-demand network access to a shared pool of configurable computing resources including networks, servers, storage, applications, and services. These resources can be rapidly provisioned and released with minimal management effort or service provider interaction.

By moving to a cloud-based environment, SCADA providers and users can significantly reduce costs, achieve greater reliability, and enhance functionality. In addition to eliminating the expenses and problems related to the hardware layer of IT infrastructure, cloud-based SCADA enables users to view data on devices like smartphones and tablet computers, and also through SMS text messages and e-mail.

Our company (InduSoft), along with a number of others, provides SCADA software and services for firms that want to use their own IT infrastructure, the cloud, or a combination of both to deploy their applications. We provide upfront consulting and advice to help customers make the best choice depending on their specific requirements and capabilities.

A cloud can be public or private. A public cloud infrastructure is owned by an organization that sells cloud services to the public. A private cloud infrastructure is operated solely for a specific customer. It may be managed by the customer or by a third party, and it may exist on premise or off premise. Hybrid clouds consist of private and public clouds that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability.

Cloud computing can support SCADA applications in two ways:

  • The SCADA application is running on-site, directly connected to the control network and delivering information to the cloud where it can be stored and disseminated, or
  • The SCADA application is running entirely in the cloud and remotely connected to the control network.
Figure 1: A public cloud formation in which the SCADA system is running onsite and delivers data via the cloud.

The first method is by far the most common and is illustrated in Figure 1. The control functions of the SCADA application are entirely isolated to the control network. However, the SCADA application is connected to a service in the cloud that provides visualization, reporting, and access to remote users. These applications are commonly implemented using public cloud infrastructures.
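
As a sketch of this first pattern, the Python fragment below posts a batch of tag values from an on-site application to a hypothetical cloud endpoint over HTTPS. The URL, token and payload schema are invented for illustration; a real deployment would use the SCADA vendor's own connector.

```python
import json
import urllib.request

# Illustrative push of on-site SCADA tag values to a cloud service.
# Endpoint, token and payload schema are hypothetical. Control logic stays
# on the isolated control network; only data leaves the site.

CLOUD_URL = "https://scada-cloud.example.com/api/v1/readings"  # placeholder
API_TOKEN = "REPLACE_ME"                                        # placeholder

def push_readings(site: str, tags: dict) -> int:
    """POST one batch of tag readings; return the HTTP status code."""
    payload = json.dumps({"site": site, "tags": tags}).encode("utf-8")
    req = urllib.request.Request(
        CLOUD_URL,
        data=payload,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {API_TOKEN}"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status  # 200/201 expected from the hypothetical API

# Example: push_readings("pump-station-7", {"flow_m3h": 41.8, "pressure_bar": 3.2})
```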

The implementation illustrated in Figure 2 is common to distributed SCADA applications where a single, local SCADA deployment is not practical. The controllers are connected via WAN links to the SCADA application running entirely in the cloud. These applications are commonly implemented using private or hybrid cloud architectures.

Service Choices
Most experts divide the services offered by cloud computing into three categories: infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS).

Figure 2: A private/hybrid cloud in which the controllers are connected via WAN links to the SCADA application running entirely in the cloud.

An IaaS such as Amazon Web Services is the most mature and widespread service model. IaaS enables service provider customers to deploy and run off-the-shelf SCADA software as they would on their own IT infrastructure. IaaS provides on-demand provisioning of virtual servers, storage, networks, and other fundamental computing resources.

Users only pay for capacity used, and can bring additional capacity online as necessary. Consumers don’t manage or control the underlying cloud infrastructure but maintain control over operating systems, storage, deployed applications, and select networking components such as host firewalls.

PaaS, like Microsoft’s Azure or Google Apps, is a set of software and product development tools hosted on the provider’s infrastructure. Developers use these tools to create applications over the Internet. Users don’t manage or control the underlying cloud infrastructure but have control over the deployed applications and application hosting environment configurations. PaaS is used by consumers who develop their own SCADA software and want a common off-the-shelf development and runtime platform.

SaaS, like web-based e-mail, affords consumers the capability to use a provider’s applications running on a cloud infrastructure from various client devices through a thin client interface like a web browser. Consumers don’t manage or control the underlying cloud infrastructure but instead simply pay a fee for use of the application.

SCADA vendors have been slow to adopt the SaaS service model for their core applications. This may change as the uncertainty of cloud computing begins to clear. For now, vendors are beginning to release only certain SCADA application components and functions as SaaS, such as visualization and historical reporting.

Economical Scalability
With all three service models, scalability is dynamic and inexpensive because it doesn’t involve the purchase, deployment, and configuration of new servers and software. If more computing power or data storage is needed, users simply pay on an as-needed basis.

Companies don’t have to purchase redundant hardware and software licenses or create disaster recovery sites they may never use. Instead they can provision new resources on demand when and if they need them. Add in the costs that a company would otherwise incur to manage an IT infrastructure, and the savings of moving to the cloud could be huge.

Instead of numerous servers and backups in different geographic locations, the cloud offers its own redundancy. On-demand resource capacity can be used for better resilience when facing increased service demands or distributed denial of service attacks, and for quicker recovery from serious incidents. The scalability of cloud computing facilities offers greater availability. Companies can provision large data servers for online historical databases, but only pay for the storage they’re using.

Building an IT infrastructure is usually a long-term commitment. Systems can take months to purchase, install, configure, and test. Equivalent cloud resources can be running in as little as a few minutes, and on-demand resources allow for trial-and-error testing.

Taking a snapshot of a known working configuration makes it easier to implement changes without having to start from scratch: if a problem occurs when deploying a patch or update, the user can easily switch back to the previous configuration.

On-site IT projects involve significant cost, resources, and long timelines—and thus include significant risk of failure. Cloud computing deployments can be completed in a few hours with little or no financial and resource commitments, and therefore are much less risky.

Manageability, Security, and Reliability
The structure of cloud computing platforms is typically more uniform than most traditional computing centers. Greater uniformity promotes better automation of security management activities like configuration control, vulnerability testing, security audits, and security patching of platform components.

A traditional IT infrastructure environment poses the risk that both the primary and the single backup server could fail, leading to complete system failure. In the cloud environment, if one of the cloud computing nodes fails, other nodes take over the function of the failed cloud computing node without a blip.

If a company chooses to implement its own IT infrastructure, access to user data in this infrastructure generally depends on the company’s single Internet provider. If that provider experiences an outage, then users don’t have remote access to the SCADA application. Cloud computing providers have multiple, redundant Internet connections. If users have Internet access, they have access to the SCADA application.

The backup and recovery policies and procedures of a cloud service may be superior to those of a single company’s IT infrastructure, and if copies are maintained in diverse geographic locations as with most cloud providers, may be more robust. Data maintained within a cloud is easily accessible, faster to restore, and often more reliable. Updates and patches are distributed in real time without any user intervention. This saves time and improves system safety by enabling patches to be implemented very quickly.

Challenges and Risks
Cloud computing has many advantages over the traditional IT model. However, some concerns exist in regard to security and other issues. Data stored in the cloud typically resides in a shared environment. Migrating to a public cloud requires a transfer of control to the cloud provider of information as well as system components that were previously under the organization’s direct control. Organizations moving sensitive data into the cloud must therefore determine how these data are to be controlled and kept secure.

Applications and data may face increased risk from network threats that were previously defended against at the perimeter of the organization’s intranet, and from new threats that target exposed interfaces.

Access to organizational data and resources could be exposed inadvertently to other subscribers through a configuration or software error. An attacker could also pose as a subscriber to exploit vulnerabilities from within the cloud environment to gain unauthorized access. Botnets have also been used to launch denial of service attacks against cloud infrastructure providers.

Having to share an infrastructure with unknown outside parties can be a major drawback for some applications, and requires a high level of assurance for the strength of the security mechanisms used for logical separation.

Ultimately to make the whole idea workable, users must trust in the long-term stability of the cloud provider and must trust the cloud provider to be fair in terms of pricing and other contractual matters. Because the cloud provider controls the data to some extent in many implementations, particularly SaaS, it can exert leverage over customers if it chooses to do so.

As with any new technology, these issues must be addressed. But if the correct service model (IaaS, PaaS, or SaaS) and the right provider are selected, the payback can far outweigh the risks and challenges. The cloud’s implementation speed and ability to scale up or down quickly means businesses can react much faster to changing requirements.

The cloud is creating a revolution in SCADA system architecture because it provides very high redundancy, virtually unlimited data storage, and worldwide data access—all at very low cost.

Remote SCADA with Local HMI Look and Feel
Vipond Controls in Calgary provides control system and SCADA solutions to the oil and gas industry, including Bellatrix Exploration. To keep up with customer demand for faster remote data access, Vipond developed iSCADA as a service to deliver a high-performance SCADA experience for each client.

One of the greatest challenges in developing iSCADA was the state of the Internet itself as protocols and web browsers weren’t designed for real-time data and control. Common complaints of previous Internet-based SCADA system users included having to submit then wait, or pressing update or refresh buttons to show new data.

Many systems relied only on web-based technologies to deliver real-time data. Because the HTTP protocol was never designed for real-time control, these systems were always lacking and frustrating to use whenever an operator wanted to change a setpoint or view a process trend.

Users were asking for an Internet-based SCADA system with a local HMI look and feel, and that became the goal of Vipond Controls. This goal was reached with iSCADA as a service by giving each customer an individual virtual machine within Vipond’s server cloud.

All data is now kept safe and independent of other machines running in the cloud. A hypervisor allows multiple operating systems or guests to run concurrently on a host computer, and to manage the execution of the guest operating systems. The hypervisors are highly available and portable, so in the event of a server failure, the virtual machine can be restarted on another hypervisor within minutes.

All the SCADA software runs within the virtual machine, and users are offered a high degree of personal customization. Customers can connect directly to on-site controllers, and Vipond can also make changes to controllers and troubleshoot process problems.

This cloud-based SCADA solution can reduce end-user costs up to 90% over a traditional SCADA system, thanks to the provision of a third-party managed service and the reduction of investment required for IT and SCADA integration, development, hardware, and software.


Pre-conference press conference on Mercury as a Global Pollutant

02/08/2013
This is a brief summary of the press conference that preceded the Mercury 2013 conference in Edinburgh, Scotland (28 July – 2 August 2013).

Panel members: Loic Viatte, Swedish Ministry for the Environment; Dr Lesley Sloss, Chair of Mercury 2013, Principal Environmental Consultant at IEA Clean Coal Centre and lead of the Coal Mercury Partnership area at UNEP; John Topper, Managing Director, IEA Clean Coal Centre and Managing Director of the GHG Group; Dr David Piper, Deputy Head of the Chemicals Branch of UNEP’s Division of Technology Industry and Economics; Michael Bender, co-coordinator of the Zero Mercury Working Group; Eric Uram, Executive Director of SafeMinds; Prof. K. Clive Thompson, Chief Scientist at ALcontrol Laboratories UK.

The panel discussed the progress of legislation to reduce emissions from coal-fired power stations, and Dr Lesley Sloss explained that, whilst mercury-specific legislation may take 5 to 10 years to be implemented in Europe, control technologies which can reduce mercury emissions by around 70% are already being utilised in many countries as part of initiatives to lower emissions of pollutants such as particulates, sulphur dioxide and nitrogen oxides. However, it was suggested that some developing countries and emerging economies may choose to implement these technologies as part of their commitment to the Minamata Convention.

In advance of the press conference, Paul Wheelhouse, Scottish Government Minister for Environment and Climate Change, issued the following statement: “An international conference of this stature puts Scotland on the world stage and demonstrates the important part we are playing in addressing global issues.
“Sound science, strong evidence and engaged citizens means properly informed choices and effective action on the ground and this is essential if the harmful effects of mercury pollution are to be reduced.
“This event is a key part of the journey to a new legally binding international agreement – and Scotland should take great pride in being at the heart of that process. I’d like to warmly welcome all of the 850 delegates from over 60 countries to Edinburgh and wish them every success as they progress this crucial agenda.”

Discussing the different priorities for the week’s conference, Michael Bender said “Mercury knows no boundaries, which is why it has been necessary to develop an international convention.” One of the main sectors facing a mercury emissions reduction requirement is illegal artisanal gold mining, but this is a challenging social issue because gold mining is the sole source of income for many of these miners, and enforcing legislation could have very serious social consequences. In contrast, the coal industry, responsible for around 25% of global emissions from human activities (around half the contribution of artisanal gold mining), is easier to regulate, so it is often regarded as a more tempting target for guaranteed results.

Michael Bender also referred to the benefits of trade barriers which are beginning to halt the flow of mercury between countries, so there is a need for this trend to continue and for more chain of custody regulations.

The panel explained the need to “think globally, act locally” – to acknowledge that mercury distributes itself around the globe with no respect for national borders, but to appreciate that all countries can play their part by cleaning up their own back yards.

One of the priorities will be to address the mercury issues that are the quickest and easiest to address; the low-hanging fruit. The panel felt that this would be the products that contain mercury; especially in the healthcare sector (thermometers and similar instrumentation) because of its ‘do no harm’ ethos and the increasing availability of alternative methods and instruments.

One of the most important issues in delivering the aims of the Convention is ‘political will’ to drive change. For example, the election of President Obama was seen as a significant moment in the development of the Convention because he had already addressed mercury issues earlier in his political career. David Piper said that the support of the United States was very significant in the development of the Minamata Convention.

Michikazu Iseri from the Kumamoto Daily News in Japan asked the panel if NGOs are likely to be disappointed with the Convention, but Michael Bender from the Zero Mercury Working Group (an international coalition of over 100 NGOs) said that, whilst many of them might have preferred greater rigour in the terms of the convention, the overall reaction was very positive because the Convention combines both a financial mechanism and a compliance mechanism. David Piper agreed, describing the Convention as a ‘giant step forward’ but Lesley Sloss said the challenge now is to flesh the convention out with more ‘what and how’ detail.

The final question referred to the adoption of low energy compact fluorescent lightbulbs (CFLs) that contain a small amount of mercury; whilst helping to lower energy usage, they contribute to mercury emissions. Responding, David Piper said that he did not expect this to become a significant issue since these technologies are likely to be replaced with even more environmentally friendly options in the near future.


NPL trials identify improved bioaerosol monitoring technology

01/07/2013
Trials conducted by the British National Physical Laboratory (NPL) have identified improved methodologies for sampling and measuring bioaerosols at composting facilities. Commissioned by Britain’s Department for Environment, Food and Rural Affairs (DEFRA), the first project began in 2008 and the results of a larger series of trials will be published later this summer.

Background
As Britain seeks to reduce the quantity of waste going to landfill, there has been a growth in demand for composting, particularly to accommodate ‘green bin’ waste. In addition there has been an increase in the variety of wastes that are being composted, so it is important to be able to understand the emissions from these processes in order to minimise any impact on the environment and human health.

Trials have identified improved methodologies for sampling and measuring bioaerosols at composting facilities. However, bioaerosols are sampled in a wide variety of industries where airborne biological particles (such as bacteria, pollen, endotoxins, viruses and fungal spores) represent a potential hazard.

Micro-organisms are necessary for the composting process, so they will always be present in large quantities within the bulk material. Any handling process, such as moving, sorting or turning, is likely to create airborne dust that will contain micro-organisms, and studies have shown that exposure to the pathogenic fungus Aspergillus fumigatus can trigger asthma, bronchitis and allergic responses, so workers and residents near composting sites are potentially at risk.

Traditional bioaerosol sampling techniques rely on the impaction of particles on a solid agar medium. However, these methods can be time-consuming and are limited by low flow rates and unreliable impaction. They are also restricted to particles that can be cultivated. In contrast, the wet-walled cyclonic technology employed by the Coriolis instruments rapidly collects biological particles in liquid at a high flow rate with validated efficiency. The liquid containing the particles is compatible with a number of rapid microbiological analysis methods, including qPCR (quantitative polymerase chain reaction), which enables both the quantification and identification of most targets.

Studies at NPL
The objective of the initial work was to improve the accuracy and speed of traditional measurement techniques, and one of the conclusions of the project was that the wet-walled cyclonic technology employed by the Coriolis gave the best performance for quantifying biological species such as fungi and bacteria when used in conjunction with qPCR. Some of the experimental work was carried out at the Health Protection Agency (HPA) – now Public Health England – to quantify the efficiency of sampling and analysis methods for the measurement of airborne Aspergillus fumigatus spores. This work demonstrated good correlation between the Coriolis/qPCR approach and the HPA's 'standard' method for these measurements.

As a result of the initial work, NPL now offers an Aspergillus fumigatus bioaerosol monitoring service to quantify airborne spore levels at composting sites using a rapid qPCR technique. The key advantages of this monitoring service over traditional microbiological methods are:

  1. Short sampling times
  2. Rapid analysis
  3. High sensitivity and broad detection range
  4. Species-specific
  5. Detects total spore count (viable and non-viable), which overcomes any issue of emission underestimation as a result of damage to spores during collection
  6. Aids differentiation between background spore levels and site-specific emissions

A full report on the early work has now been published on the DEFRA website, and further studies have been commissioned. The most recent studies have involved bioaerosol sampling with the Coriolis sampler at four different sites, every quarter during 2012. NPL's David Butterfield says: "The objective of the latest trial was to assess the sampling and monitoring technologies in greater detail, under differing weather conditions and with different sources."

At the same time, a working group at CEN, the European Committee for Standardisation, is working on a new bioaerosol monitoring standard that is likely to accommodate the latest technology and will necessitate demonstration of equivalence.

Looking forward, Jim Mills from Air Monitors, the company which launched the Coriolis in Britain, says: "It will take some time before this new technology becomes standard practice, but in the meantime, with the benefit of the work that has been conducted by NPL and others, there is no reason why Coriolis should not be utilised widely to improve the efficiency and effectiveness of bioaerosol sampling at composting sites, and in many other applications such as hospitals, legionella investigations, cooling towers, animal housing and pharmaceutical manufacture."


Get your head out of the clouds! – 3 ways to reduce maintenance costs of power generators

31/05/2013
Henrik Arleving, Product Line Manager, HMS Industrial Networks presents three ways to reduce maintenance costs of power generators.

Keeping track of a fleet of power generators can sometimes be a head-in-the-clouds experience. It can be hard to focus on the right actions simply because there is not enough information on fuel levels, oil pressure or battery status for each genset. With a cloud-based remote management solution you can have immediate online access to generator parameters via a regular web browser. Below we propose three ways in which remote management can be used to reduce operating costs and improve control.

1) Perform service only when needed
Power generators are often serviced according to a pre-determined service schedule. By understanding how each generator has actually been operated, it is possible to plan service more dynamically. As site visits are costly, you can optimize service costs by only sending service teams to generators that actually need attention.

The challenge is to know when service is needed at each individual site. With a remote management solution, you can check operating hours, oil pressure, battery status, coolant temperature, generated power output, fuel level, GPS position and more. A notification can also be generated whenever a critical threshold is reached, for example when the running hours exceed the service interval.
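
As an illustration, the underlying threshold logic can be very simple. The following Python sketch (parameter names and thresholds are illustrative, not taken from any specific product) flags gensets that are due for a visit:

```python
from dataclasses import dataclass

SERVICE_INTERVAL_HOURS = 250  # illustrative hours between services

@dataclass
class GensetStatus:
    site: str
    running_hours: float     # hours since the last service
    oil_pressure_bar: float
    battery_volts: float

def needs_service(status: GensetStatus) -> bool:
    """Flag a genset for a service visit when any monitored value
    crosses an illustrative threshold."""
    return (
        status.running_hours >= SERVICE_INTERVAL_HOURS
        or status.oil_pressure_bar < 1.0   # unusually low oil pressure
        or status.battery_volts < 11.5     # weak starter battery
    )

fleet = [
    GensetStatus("Site A", 310.0, 3.2, 12.6),
    GensetStatus("Site B", 120.0, 3.4, 12.8),
]

for status in fleet:
    if needs_service(status):
        print(f"Notification: schedule a service visit for {status.site}")
```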

By analyzing the operation of each generator remotely, you can understand its health and schedule service visits in the field more efficiently.

2) Test start generators remotely to reduce start-up problems
Just like a car that has been parked for an extended period, a generator engine that has not run in a long time is likely to have start-up problems. For back-up power generators that are not operated very often, it is important to regularly perform operational tests. Remote test starts can be made with a remote management solution that has control capabilities and is connected to the generator controller. With a simple action such as a remote operational test (sketched below), you increase the likelihood of the generator working on the day there is a power outage and it needs to perform.

Typical web dashboard from which a power generator can be monitored and even started or stopped remotely.
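
Conceptually, a remote test start is just an authenticated command to the cloud service followed by a status check. Here is a minimal Python sketch using the requests library; the endpoint URL, token and field names are hypothetical placeholders, not any specific vendor's API:

```python
import time
import requests

BASE_URL = "https://remote-mgmt.example/api"       # hypothetical endpoint
HEADERS = {"Authorization": "Bearer <api-token>"}  # placeholder credential

def remote_test_start(genset_id: str, run_minutes: int = 10) -> bool:
    """Ask the cloud service to test-start a genset, then confirm it is running."""
    requests.post(f"{BASE_URL}/gensets/{genset_id}/start",
                  json={"duration_min": run_minutes},
                  headers=HEADERS, timeout=10)
    time.sleep(60)  # give the engine time to start
    status = requests.get(f"{BASE_URL}/gensets/{genset_id}/status",
                          headers=HEADERS, timeout=10).json()
    return status.get("engine_state") == "running"

if remote_test_start("genset-042"):
    print("Test start succeeded")
else:
    print("Test start failed - schedule a site visit")
```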

A well-maintained generator operates better and has lower operating costs since unplanned service visits often mean substantial expenses.

3) Minimize fuel theft and reduce its effects
Fuel theft can be a significant problem. In certain regions, as much as 40% of genset fuel is reported to be stolen.

Avoiding fuel theft completely might be difficult, since fuel is often stolen a little at a time: during transportation, at fill-up, or at the power generator in the field. However, a remote monitoring system that connects to a fuel sensor can be used to verify that the right amount of fuel is delivered at each refill. By using an intelligent level sensor, it is possible to track the fuel level of the tank; the sensor can be calibrated to recognize a full tank, which makes it possible to verify that the tank has been properly refilled. A good fuel level sensor is able to detect variations down to 3-5 liters.

An abnormal decrease in fuel level can also be detected, indicating that fuel is being stolen. With a remote monitoring system that supports alarms, a notification is sent as soon as the theft occurs. Even if it might be hard to catch the thieves, operators are at least aware that fuel has been stolen and can schedule a refill to ensure the generators have the fuel needed to operate.
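
In practice this comes down to watching the fuel-level time series for drops that normal consumption cannot explain. A Python sketch of that check, with entirely illustrative burn rates and thresholds:

```python
MAX_BURN_RATE_L_PER_H = 20.0  # highest plausible consumption while running (illustrative)
SENSOR_RESOLUTION_L = 5.0     # per the 3-5 liter sensitivity quoted above

def check_fuel_drop(previous_l: float, current_l: float,
                    hours_elapsed: float, engine_running: bool) -> str | None:
    """Return an alarm message if the level fell faster than the engine
    could plausibly have burned fuel; otherwise return None."""
    drop = previous_l - current_l
    if drop <= SENSOR_RESOLUTION_L:
        return None  # within sensor noise
    expected = MAX_BURN_RATE_L_PER_H * hours_elapsed if engine_running else 0.0
    if drop > expected + SENSOR_RESOLUTION_L:
        return f"Possible fuel theft: {drop:.0f} L lost in {hours_elapsed:.1f} h"
    return None

# Example: 150 L disappeared in one hour while the engine was off
alarm = check_fuel_drop(previous_l=800.0, current_l=650.0,
                        hours_elapsed=1.0, engine_running=False)
if alarm:
    print(alarm)  # a real system would push this as a notification
```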

Tracking the level of fuel in a tank increases awareness of what happens to the fuel on site and helps users understand when theft occurs. In some cases, where organized theft is common, this may help operators detect patterns and take action.

Remote monitoring puts you ahead of the game
Modern remote monitoring technology enables instant access to data from equipment in the field. As well as reducing operating expenses as described above, it brings other benefits: with full 24/7 visibility and instant notification of any operational issues, the end-user also receives improved service quality.

How cloud-based remote management works
A communication gateway connects to the genset control panel, usually via serial communications or by using a popular open protocol such as Modbus RTU. The gateway sends data via the Internet or the mobile network (3G/GSM/GPRS) to an online data center in the cloud. Service engineers can access the data center through a regular web browser or smartphone and see live data from the power generators. This means that no IT expertise or programming is necessary. Alarms and notifications are sent whenever certain thresholds are reached.
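
To make the first hop concrete, the sketch below polls a genset controller over Modbus RTU using the open-source pymodbus library (version 3.x assumed). The register addresses and scaling are hypothetical; every controller has its own register map:

```python
from pymodbus.client import ModbusSerialClient

# Hypothetical register map - a real controller's documentation must be consulted
REG_BASE = 100       # first register to read (illustrative)
OFFSET_HOURS_HI = 0  # running hours, high 16-bit word
OFFSET_HOURS_LO = 1  # running hours, low 16-bit word
OFFSET_FUEL_PCT = 2  # fuel level in percent

client = ModbusSerialClient(port="/dev/ttyUSB0", baudrate=9600,
                            parity="N", stopbits=1, bytesize=8)
if client.connect():
    result = client.read_holding_registers(REG_BASE, count=3, slave=1)
    if not result.isError():
        regs = result.registers
        running_hours = (regs[OFFSET_HOURS_HI] << 16) | regs[OFFSET_HOURS_LO]
        fuel_pct = regs[OFFSET_FUEL_PCT]
        print(f"Running hours: {running_hours}, fuel level: {fuel_pct}%")
        # A real gateway would now push these values to the cloud data center.
    client.close()
```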

So which solution should I choose?
There are several different solutions for remote management of power generators available on the market. One thing to consider is that the solution should be able to send information via the mobile phone network, since many power generators are placed in remote locations. It is also important that the solution is "firewall friendly" so you don't have to spend time on security issues and access rights.

Some remote management solutions, like the Netbiter remote management solution from HMS, offer specialized support for power generators, including pre-defined configurations for a range of control panels from different manufacturers, as well as built-in features such as fuel level management.

What are the costs involved?
You pay for the communication gateway which connects to the power generator. Most modern remote management solutions offer different service levels for cloud access, and free versions with basic functionality are often available, offering a very quick return on investment.

No matter which solution you choose, the ROI will most likely be quick. A service visit usually costs about the same as a single remote management gateway, meaning that you may see a payback time of only a few months.