Particulate monitors selling like hot cakes.

03/12/2016

Palas, the German manufacturer of particulate monitoring instruments, is expanding production to cope with demand for its fine particulate monitor, the Fidas® 200. In the following article Jim Mills explains why Air Monitors, the British distributor, is being kept busy by the demand for this exciting new technology.

PM monitoring – the ultimate goal
We monitor PM because of its acute health effects. It irritates our eyes and lungs, and some of the finer particles have more recently been shown to move directly from the nasal cavity to the brain. Monitoring is therefore essential, but there are almost as many monitoring methods as there are types of PM, so it is vitally important to monitor what matters. If you are measuring dust from a construction site, the PM is relatively large in diameter and heavy, but if you are monitoring PM from diesel emissions in a city, the smallest particles, with much less mass but high particle numbers, are of greater interest. Monitoring a single size fraction provides an incomplete picture of particulate contamination and risks ignoring the PM of most interest, particularly if the ignored fractions are the finer particles that travel deepest into the lungs. The ideal PM monitor would therefore reliably and accurately monitor all important PM fractions, with high data capture rates and low service requirements… hence the heavy demand for the Fidas 200.

Fidas® 200
The Fidas 200 is a fine dust ambient air quality monitoring device developed specifically for regulatory purposes, providing continuous and simultaneous measurement of PM1, PM2.5, PM4, PM10 and TSP (PMtot), as well as particle number concentration and particle size distribution between 180 nm and 18 µm (further non-certified size ranges are also available on request).

Employing a well-established measurement technology – optical light scattering of single particles – the Fidas 200 is equipped with a high intensity LED light source, which is extremely stable, delivering a long lifetime with minimal service requirements. An optical aerosol spectrometer determines the particle size using Lorenz‐Mie scattered light analysis of single particles. These particles move through an optical measurement volume that is homogeneously illuminated with white light, and each particle generates a scattered light impulse that is detected at an angle of 85° to 95°. The particle number measurement is based on the number of scattered light impulses, while the level of each scattered light impulse is a measure of the particle diameter.
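
Purely as an illustration of this single-particle counting principle, the Python sketch below shows how a stream of scattered-light pulse amplitudes might be converted into a number concentration and an approximate mass concentration. The power-law calibration, the pulse amplitudes and the assumed particle density are hypothetical, not Palas figures; a real instrument applies an instrument-specific Lorenz-Mie calibration.

import math

def diameter_from_pulse(amplitude_mv, cal_coeff=0.9, cal_exp=0.5):
    # Hypothetical power-law calibration from pulse amplitude (mV) to optical
    # diameter (um); real instruments use a Lorenz-Mie calibration curve.
    return cal_coeff * amplitude_mv ** cal_exp

def summarise(pulse_amplitudes_mv, sampled_volume_litres, particle_density_g_cm3=1.0):
    # Particle number concentration (#/cm3) from the count of pulses, and an
    # approximate mass concentration (ug/m3) assuming spherical particles.
    diameters_um = [diameter_from_pulse(a) for a in pulse_amplitudes_mv]
    number_conc = len(diameters_um) / (sampled_volume_litres * 1000.0)   # litres -> cm3
    mass_ug = 0.0
    for d in diameters_um:
        volume_cm3 = math.pi / 6.0 * (d * 1e-4) ** 3                     # um -> cm
        mass_ug += volume_cm3 * particle_density_g_cm3 * 1e6             # g -> ug
    mass_conc = mass_ug / (sampled_volume_litres / 1000.0)               # litres -> m3
    return number_conc, mass_conc

counts, mass = summarise([120.0, 85.5, 410.2], sampled_volume_litres=0.05)
print(f"{counts:.2f} particles/cm3, {mass:.4f} ug/m3")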

The Fidas 200 operates with a volume flow of approx. 0.3 m³/h and is equipped with a Sigma‐2 sampling head, which enables representative measurements even in strong wind conditions. The sampling system includes a drying stage that prevents measurement inaccuracies caused by condensation at high humidity, which means that the instrument continues to function correctly in misty or foggy conditions without losing the semi-volatile fractions of the PM. It is also equipped with a filter holder for the insertion of a plane filter (47 or 50 mm in diameter), which enables subsequent chemical analysis of the aerosol.

Different versions of the Fidas 200 allow for stand-alone outdoors installation or for installation inside a measurement cabinet or air quality monitoring station.

Performance
The Fidas 200 is the only continuous ambient PM monitor in the UK to have passed both TÜV and MCERTS certification. The MCERTS certificate (Sira MC16290/01) confirms that the Fidas 200 complies with the MCERTS Performance Standards for Continuous Ambient Air Quality Monitoring Systems and with MCERTS for UK Particulate Matter. The instrument has type-approval to EN 12341 (PM10) and EN 14907 (PM2.5), and is certified to EN 15267-1 and -2.

Importantly, the Fidas 200 has half the measurement uncertainty of many of its rivals, and one third of the maximum uncertainty permitted by the standard (25%).

Typical data capture rates exceed 99%. This has been achieved by a design approach that is focused on reliability. For example, two pumps operate in parallel, providing redundancy protection, and the instrument continuously monitors status and calibration.

Monitoring frequency is adjustable, with a time resolution ranging from 1 second up to 24 hours, and when deployed with a remote web-enabled Envirologger, high frequency data provides almost real-time access to readings. This enables the detection of short-term spikes, providing much greater insight into the causes of PM pollution.
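
As an illustration of how such short-term spikes might be picked out of high-frequency data, here is a minimal Python sketch that flags readings exceeding a rolling-median baseline. The window length and threshold factor are arbitrary assumptions, not Fidas or Envirologger settings.

from statistics import median

def find_spikes(readings_ug_m3, window=60, factor=3.0):
    # Flag indices where a reading exceeds `factor` times the median of the
    # preceding `window` samples (a crude rolling baseline).
    spikes = []
    for i in range(window, len(readings_ug_m3)):
        baseline = median(readings_ug_m3[i - window:i])
        if baseline > 0 and readings_ug_m3[i] > factor * baseline:
            spikes.append(i)
    return spikes

# Example: a steady background of ~10 ug/m3 with one transient event
data = [10.0] * 120
data[90] = 85.0           # short-lived spike, e.g. a passing diesel vehicle
print(find_spikes(data))  # -> [90]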

The Fidas instruments have been proven in many countries as well as Britain; Air Monitors has been supplying Fidas PM monitors for around three years and there are now over 30 monitors in operation in Britain alone.

Costs
One of the major financial considerations for the Fidas 200 is its extremely low operating cost; the requirement for consumables is almost nil (no filter required) and its power consumption is around one fifth of that of its nearest rival. Calibration can be checked and adjusted, if necessary, quickly and easily in the field with a simple monodisperse powder test.

The purchase cost of a single Fidas 200 is a little more than some ambient PM monitors, but it is less expensive than others. However, for most instruments, a requirement to monitor two fractions, say PM2.5 and PM10, would necessitate two instruments and therefore double the cost. With budgets under pressure, Fidas therefore provides an opportunity to obtain better data for less cost.

In summary, the Fidas 200 offers better performance than all of its rivals; usually at significantly lower capital cost and always with dramatically lower operational costs. Consequently, it is no surprise that these instruments are selling like hot cakes.

@airmonitors #PAuto @_Enviro_News


Continuous compliance with PLM.

27/07/2016
Adam Bannaghan, technical director of Design Rule, discusses the growing role of PLM in managing quality and compliance.

The advantages of product lifecycle management (PLM) software are widely understood: improved product quality, lower development costs, valuable design data and a significant reduction in waste. However, one benefit that does not get as much attention is PLM’s support of regulatory compliance.

Nobody would dispute the necessity of regulatory compliance, but in the product development realm it certainly isn’t the most interesting topic. Regardless of its lack of glamour, failure to comply with industry regulations can render the more exciting advantages of PLM redundant.

From a product designer’s perspective, compliance through PLM delivers notable strategic advantages. Achieving compliance in the initial design stage can save time and reduce engineering changes in the long run. What is more, this design-for-compliance approach sets the bar for quality product development, creating a unified standard to which the entire workforce can adhere. In addition, the support of a PLM platform significantly simplifies the compliance process, especially for businesses operating in sectors with fast-changing or complicated regulations.

For example, AS/EN 9100 is a globally recognised series of quality management standards for the aerospace sector that is set to change later this year. December 2016 is the target date for companies to achieve these new standards – a fast transition for those managing compliance without the help of dedicated software.

Similarly, the defence industry has its own standards to follow. ITAR (International Traffic in Arms Regulations) and EAR (Export Administration Regulations) are notoriously strict export regulations, carrying both civil and criminal penalties for companies that fail to comply.

“Fines for ITAR violations in recent years have ranged from several hundred thousand to $100 million,” explained Kay Georgi, an import/export compliance attorney and partner at law firm Arent Fox LLP in Washington. “Wilful violations can be penalised by criminal fines, debarment, both of the export and government contracting varieties, and jail time for individuals.”

PLM across sectors
Strict regulation is not limited to aerospace and defence, however. The electrical, food and beverage, pharmaceutical and consumer goods sectors are also subject to different, but equally stern, compliance rules.

Despite varying requirements across industries, there are a number of PLM options that support compliance on an industry-specific basis. Dassault Systèmes’ ENOVIA platform, for example, allows businesses to input compliance definitions directly into the program. This ensures that, depending on the industry, the product is able to meet the necessary standards. As an intelligent PLM platform, ENOVIA delivers full traceability of the product development process, from conception right through to manufacturing.

For those in charge of managing compliance, access to this data is incredibly valuable, both for auditing and for providing evidence to regulatory panels. By acquiring industry-specific modules, businesses can rest assured that their compliance is being managed appropriately for their sector – avoiding nasty surprises or compliance failures.

For some industry sectors, failure to comply can cause momentous damage, beyond the obvious financial difficulties and time-to-market delays you might expect. For sensitive markets, like pharmaceutical or food and beverage, regulatory failure can wreak havoc on a brand’s reputation. What’s more, if the non-compliant product is subject to a recall, or the company is issued with a newsworthy penalty charge, the reputational damage can be irreparable.

PLM software is widely regarded as an effective tool to simplify product design. However, by providing a single source of truth for the entire development process, the potential of PLM surpasses this basic function. Using PLM for compliance equips manufacturers with complete data traceability, from the initial stages of design, right through to product launch. What’s more, industry-specific applications are dramatically simplifying the entire compliance process by guaranteeing businesses can meet particular regulations from the very outset.

Meeting regulatory standards is an undisputed obligation for product designers. However, as the strategic and product quality benefits of design-for-compliance become more apparent, it is likely that complying through PLM will become standard practice in the near future.

#PLM @designruleltd #PAuto #Pharma #Food @StoneJunctionPR

Celebrating twenty years of abnormality!

21/07/2014

This year the Abnormal Situation Management (ASM®) Consortium is celebrating 20 years of thought leadership in the process industry. The ASM Consortium grew out of a grassroots effort, begun in 1989, to address alarm floods. Honeywell spearheaded the development of a proposal to the US NIST Advanced Technology Program (ATP) to form a Joint Research & Development Consortium.

Background on the ASM Consortium
The ASM Consortium was started in 1994 to address process industry concerns about the high cost of incidents such as unplanned shutdowns, fires, explosions and emissions. The term Abnormal Situation Management® was coined to describe the handling of such events. Matching funds from NIST enabled the consortium to spend several years researching and developing highly advanced concepts to address the problem of abnormal situations. Since then, research has continued and increasing effort has been put into the development and deployment of solutions that incorporate ASM knowledge.

The basis of the ASM Consortium is collaboration and information-sharing. By working together, members achieve far more than they could working alone. Research results are published for members, and often further shared by means of webinars, seminars and workshops. User members also guide Honeywell in the selection and development of product solutions that incorporate ASM knowledge. Non-members can benefit from ASM research, as the ASM Effective Practices Guidelines for Alarm Management, Display Design and Procedural Practices are available for purchase on Amazon.com.

The proposal addressed the challenging problem of abnormal situation management. In preparing for this proposal effort, Honeywell and its collaborators created the Abnormal Situation Management (ASM) Joint Research and Development Consortium (referred to as ASMC) under the U.S. Cooperative Research and Development Act. In November 1994, the ASM research joint venture began its research with $16.6 million (€12.27m) in funding for a three year study program, including $8.1 million (€6m) from ATP and industry cost-sharing of $8.5 million (€6.29m).

This year, ASM Consortium members have met twice for week-long Quarterly Review Meetings (QRMs), once in Houston, Texas (USA) in April and again in Antwerp (B) in June. Along with its normal business, the Consortium discussed plans to celebrate its 20 years of service to the process industry. The Quarterly Review Meetings are a platform for ASM Consortium members to share the benefits gained from ASM practices and products, and to discuss new challenges faced in plant operations. Members of the Consortium besides Honeywell include industrial manufacturers, a human factors research company, and universities that collaborate to research best practices for managing abnormal situations in industrial facilities.

To celebrate its 20th year, the ASM Consortium will be spreading further awareness about managing and mitigating abnormal situations in the process industries by publishing journal articles, presenting white papers at leading industry conferences, and producing a planned video.


Ensuring that necessary dredging maintains water quality!

07/07/2014

Last winter brought unprecedented weather conditions to both Ireland and Britain. In the Read-out offices we were hit by a thunder and lightning storm which played havoc with our electronic equipment, and elsewhere in the region the rough seas did incredible damage. In the south-west of England, the farms and homes of the Somerset Levels and Moors, a sparsely populated coastal plain and wetland area of central Somerset, were severely hit by incredible flooding. Indeed, the effects of this will be felt in the area for many years to come.

This shows the incredible extent of last winter’s flooding with superimposed map showing location of the Somerset Levels and Moors.

A special monitoring system is helping to protect water quality on the Somerset Levels and Moors, where a major dredging operation is under way following this severe flooding. The system, which was supplied by OTT Hydrometry and installed by Wavelength Environmental, is designed to protect the river ecology by issuing email alerts if water quality deteriorates beyond pre-set conditions. Any such alerts would immediately be relayed to the project team and an assessment of conditions would then be undertaken, so that working practices can be reviewed before work continues.

The flood caused extensive damage to properties in the area and many residents had to leave their homes.  Approximately 170 homes and businesses were affected. The Environment Agency estimated there were more than 65 million cubic metres of floodwater covering an area of 65 square kilometres.

Dredgers commenced work at the end of March 2014

On Monday 31st March 2014, three months after the flooding began, dredging work started on the banks of the river Parrett between Burrowbridge and Moorland, just a few minutes from Junction 24 of the M5 in the south-west of England. Costing £1 million per mile, 5 miles of river bank will be dredged (3 miles of the river Parrett and 2 miles of the river Tone), with the aim of restoring the river channels to their 1960s profile and improving their drainage capability.

In recent years, an accumulation of sediment has narrowed the river channel and this is believed to be just one of the reasons for the severe flooding that took place. A network of mobile real-time water quality monitors is therefore being deployed to continuously monitor water quality upstream and downstream of the dredgers. This work complements the Environment Agency’s wider environmental monitoring.

Adcon Telemetry plus Hydrolab WQ sonde.

The monitors consist of Hydrolab water quality ‘sondes’ and Adcon telemetry systems, which transmit near-live data during the dredging operation that is due to run until the winter of 2014. The monitors are anchored to the river bed and suspended in the river by means of two small buoys. Each sonde is fitted with sensors for the measurement of dissolved oxygen (DO), ammonium, temperature, pH, conductivity and turbidity. A short cable connects each sonde to an Adcon telemetry unit on the bank, which transmits data via GPRS every 10 minutes. The sondes contain internal dataloggers; however, the transmitted data is available to project staff in near real-time via a web-based data portal. If water quality meets the pre-set alert conditions (for temperature, dissolved oxygen or ammonium), email messages are issued via the telemetry units. It is important to note that poor water quality can be caused by a number of factors, including low flow levels and high nutrient levels arising from many sources in the area.
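
The alert logic itself amounts to comparing each transmitted reading against the pre-set conditions. The Python sketch below illustrates that kind of threshold check; the parameter names and limit values are assumptions for illustration only, not the project's actual alert conditions.

# Hypothetical alert limits: "min" means alert when the value falls below the
# limit, "max" means alert when it rises above the limit.
ALERT_LIMITS = {
    "dissolved_oxygen_mg_l": ("min", 4.0),
    "ammonium_mg_l":         ("max", 1.0),
    "temperature_c":         ("max", 25.0),
}

def check_reading(reading):
    # Return a list of alert messages for any parameter outside its limit.
    alerts = []
    for parameter, (kind, limit) in ALERT_LIMITS.items():
        value = reading.get(parameter)
        if value is None:
            continue
        if (kind == "min" and value < limit) or (kind == "max" and value > limit):
            alerts.append(f"{parameter} = {value} breaches {kind} limit {limit}")
    return alerts

sample = {"dissolved_oxygen_mg_l": 3.2, "ammonium_mg_l": 0.4, "temperature_c": 18.5}
for message in check_reading(sample):
    print("ALERT:", message)   # in the real system this would trigger an email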

Downstream monitoring!

The project plan has allowed for up to eight dredging teams, and the monitors are being installed approximately 50 metres upstream and 100-150 metres downstream of the dredgers – to allow sufficient mixing.

Simon Browning from Wavelength Environmental has been monitoring the data from the sondes and says: “The monitors are quick and easy to deploy, and have performed very well; however, portability is extremely important because the instruments have to be moved and redeployed as the dredging work proceeds.

“We have also started fitting GPS units to the telemetry systems so that we can keep track of the monitoring locations. This is important because each dredging team is constantly moving, so the monitors have to be moved regularly.”

Matthew Ellison, a telemetry specialist from OTT Hydrometry, was delighted to be involved in this high profile project and recommended the Adcon systems because they are extremely small and therefore portable, and have been designed to run on very low power, which means they can be left to run in remote locations for extended periods of time with just a small solar panel.

In January, Owen Paterson, the Environment Secretary in England, asked for a 20 year Action Plan to be developed to look at the various options for the sustainable management of flood risk on the Somerset Levels and Moors. The plan is supported by a £10m investment from the Department for Transport, with a further £500k from the Department for Communities and Local Government, on top of the £10m previously announced by the British Prime Minister. The plan has been published and is available on the Somerset County Council website!

Whilst the plan recognises that it will not be possible to stop flooding completely, it has 6 key objectives:

  1. Reduce the frequency, depth and duration of flooding.
  2. Maintain access for communities and businesses.
  3. Increase resilience to flooding for families, agriculture, businesses, communities, and wildlife.
  4. Make the most of the special characteristics of the Somerset Levels and Moors (the internationally important biodiversity, environment and cultural heritage).
  5. Ensure strategic transport connectivity, both within Somerset and through the county to the South West peninsula.
  6. Promote business confidence and growth.

“Dredging is one of the main things the local community has really been pressing for, and people are going to check that the Environment Agency is doing the work properly. The water quality monitoring undertaken by the mobile monitors and by our own static monitors will help provide assurance that the environment is not compromised by this work,” said Graham Quarrier of the Environment Agency.


Cloud Computing for SCADA

05/09/2013
Moving all or part of SCADA applications to the cloud can cut costs significantly while dramatically increasing reliability and scalability, says Larry Combs, vice president of customer service and support, InduSoft.

Although cloud computing is becoming more common, it’s relatively new for SCADA (supervisory control and data acquisition) applications. Cloud computing provides convenient, on-demand network access to a shared pool of configurable computing resources including networks, servers, storage, applications, and services. These resources can be rapidly provisioned and released with minimal management effort or service provider interaction.

By moving to a cloud-based environment, SCADA providers and users can significantly reduce costs, achieve greater reliability, and enhance functionality. In addition to eliminating the expenses and problems related to the hardware layer of IT infrastructure, cloud-based SCADA enables users to view data on devices like smartphones and tablet computers, and also through SMS text messages and e-mail.

Our company (InduSoft), along with a number of others, provides SCADA software and services for firms that want to use their own IT infrastructure, the cloud, or a combination of both to deploy their applications. We provide upfront consulting and advice to help customers make the best choice depending on their specific requirements and capabilities.

A cloud can be public or private. A public cloud infrastructure is owned by an organization that sells cloud services to the public. A private cloud infrastructure is operated solely for a specific customer. It may be managed by the customer or by a third party, and it may exist on premise or off premise. Hybrid clouds consist of private and public clouds that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability.

Cloud computing can support SCADA applications in two fashions:

  • The SCADA application is running on-site, directly connected to the control network and delivering information to the cloud where it can be stored and disseminated, or
  • The SCADA application is running entirely in the cloud and remotely connected to the control network.
Figure 1: A public cloud formation in which the SCADA system is running onsite and delivers data via the cloud

The first method is by far the most common and is illustrated in Figure 1 (right). The control functions of the SCADA application are entirely isolated to the control network. However, the SCADA application is connected to a service in the cloud that provides visualization, reporting, and access to remote users. These applications are commonly implemented using public cloud infrastructures.
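
As a rough illustration of this first architecture, the Python sketch below shows an on-site node periodically pushing tag values to a cloud service over HTTPS. The endpoint URL, payload shape, API key and tag names are placeholders invented for this example; a real deployment would use whatever interface the chosen cloud service actually exposes (HTTPS, MQTT, OPC UA, etc.).

import json
import time
import requests

CLOUD_ENDPOINT = "https://example-cloud-scada.invalid/api/v1/readings"  # placeholder
API_KEY = "replace-with-real-key"                                       # placeholder

def read_local_tags():
    # Stand-in for reads from the on-site control network (PLC/RTU polling).
    return {"tank_level_pct": 72.4, "pump_running": True, "flow_m3_h": 14.8}

def push_once():
    # Package the latest tag values and send them to the cloud service.
    payload = {"site": "plant-01", "timestamp": time.time(), "tags": read_local_tags()}
    response = requests.post(
        CLOUD_ENDPOINT,
        data=json.dumps(payload),
        headers={"Content-Type": "application/json", "X-Api-Key": API_KEY},
        timeout=10,
    )
    response.raise_for_status()

if __name__ == "__main__":
    while True:                        # control stays local; only data leaves site
        try:
            push_once()
        except requests.RequestException as exc:
            print("push failed, will retry:", exc)
        time.sleep(60)                 # modest push interval, e.g. once a minute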

The implementation illustrated in Figure 2 (below) is common to distributed SCADA applications where a single, local SCADA deployment is not practical. The controllers are connected via WAN links to the SCADA application running entirely in the cloud. These applications are commonly implemented using private or hybrid cloud architectures.

Service Choices
Most experts divide the services offered by cloud computing into three categories: infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS).

Figure 2: A private/hybrid cloud in which the controllers are connected via WAN links to the SCADA application running entirely in the cloud.

An IaaS such as Amazon Web Services is the most mature and widespread service model. IaaS enables service provider customers to deploy and run off-the-shelf SCADA software as they would on their own IT infrastructure. IaaS provides on-demand provisioning of virtual servers, storage, networks, and other fundamental computing resources.

Users only pay for capacity used, and can bring additional capacity online as necessary. Consumers don’t manage or control the underlying cloud infrastructure but maintain control over operating systems, storage, deployed applications, and select networking components such as host firewalls.

PaaS, like Microsoft’s Azure or Google Apps, is a set of software and product development tools hosted on the provider’s infrastructure. Developers use these tools to create applications over the Internet. Users don’t manage or control the underlying cloud infrastructure but have control over the deployed applications and application hosting environment configurations. PaaS is used by consumers who develop their own SCADA software and want a common off-the-shelf development and runtime platform.

SaaS, like web-based e-mail, affords consumers the capability to use a provider’s applications running on a cloud infrastructure from various client devices through a thin client interface like a web browser. Consumers don’t manage or control the underlying cloud infrastructure but instead simply pay a fee for use of the application.

SCADA vendors have been slow to adopt the SaaS service model for their core applications. This may change as the uncertainty of cloud computing begins to clear. For now, vendors are beginning to release only certain SCADA application components and functions as SaaS, such as visualization and historical reporting.

Economical Scalability
With all three service models, scalability is dynamic and inexpensive because it doesn’t involve the purchase, deployment, and configuration of new servers and software. If more computing power or data storage is needed, users simply pay on an as-needed basis.

Companies don’t have to purchase redundant hardware and software licenses or create disaster recovery sites they may never use. Instead they can provision new resources on demand when and if they need them. Add in the costs that a company would otherwise incur to manage an IT infrastructure, and the savings of moving to the cloud could be huge.

Instead of numerous servers and backups in different geographic locations, the cloud offers its own redundancy. On-demand resource capacity can be used for better resilience when facing increased service demands or distributed denial of service attacks, and for quicker recovery from serious incidents. The scalability of cloud computing facilities offers greater availability. Companies can provision large data servers for online historical databases, but only pay for the storage they’re using.

Building an IT infrastructure is usually a long-term commitment. Systems can take months to purchase, install, configure, and test. Equivalent cloud resources can be running in as little as a few minutes, and on-demand resources allow for trial-and-error testing.

Taking a snapshot of a known working configuration makes it easier to make changes without having to start from scratch. If a problem occurs when deploying a patch or update, the user can easily switch back to the previous configuration.

On-site IT projects involve significant cost, resources, and long timelines—and thus include significant risk of failure. Cloud computing deployments can be completed in a few hours with little or no financial and resource commitments, and therefore are much less risky.

Manageability, Security, and Reliability
The structure of cloud computing platforms is typically more uniform than most traditional computing centers. Greater uniformity promotes better automation of security management activities like configuration control, vulnerability testing, security audits, and security patching of platform components.

A traditional IT infrastructure environment poses the risk that both the primary and the single backup server could fail, leading to complete system failure. In the cloud environment, if one of the cloud computing nodes fails, other nodes take over the function of the failed cloud computing node without a blip.

If a company chooses to implement its own IT infrastructure, access to user data in this infrastructure generally depends on the company’s single Internet provider. If that provider experiences an outage, then users don’t have remote access to the SCADA application. Cloud computing providers have multiple, redundant Internet connections. If users have Internet access, they have access to the SCADA application.

The backup and recovery policies and procedures of a cloud service may be superior to those of a single company’s IT infrastructure, and if copies are maintained in diverse geographic locations as with most cloud providers, may be more robust. Data maintained within a cloud is easily accessible, faster to restore, and often more reliable. Updates and patches are distributed in real time without any user intervention. This saves time and improves system safety by enabling patches to be implemented very quickly.

Challenges and Risks
Cloud computing has many advantages over the traditional IT model. However, some concerns exist in regard to security and other issues. Data stored in the cloud typically resides in a shared environment. Migrating to a public cloud requires a transfer of control to the cloud provider of information as well as system components that were previously under the organization’s direct control. Organizations moving sensitive data into the cloud must therefore determine how these data are to be controlled and kept secure.

Applications and data may face increased risk from network threats that were previously defended against at the perimeter of the organization’s intranet, and from new threats that target exposed interfaces.

Access to organizational data and resources could be exposed inadvertently to other subscribers through a configuration or software error. An attacker could also pose as a subscriber to exploit vulnerabilities from within the cloud environment to gain unauthorized access. Botnets have also been used to launch denial of service attacks against cloud infrastructure providers.

Having to share an infrastructure with unknown outside parties can be a major drawback for some applications, and requires a high level of assurance for the strength of the security mechanisms used for logical separation.

Ultimately, to make the whole idea workable, users must trust in the long-term stability of the cloud provider and must trust the cloud provider to be fair in terms of pricing and other contractual matters. Because the cloud provider controls the data to some extent in many implementations, particularly SaaS, it can exert leverage over customers if it chooses to do so.

As with any new technology, these issues must be addressed. But if the correct service model (IaaS, PaaS, or SaaS) and the right provider are selected, the payback can far outweigh the risks and challenges. The cloud’s implementation speed and ability to scale up or down quickly means businesses can react much faster to changing requirements.

The cloud is creating a revolution in SCADA system architecture because it provides very high redundancy, virtually unlimited data storage, and worldwide data access—all at very low cost.

Remote SCADA with Local HMI Look and Feel
Vipond Controls in Calgary provides control system and SCADA solutions to the oil and gas industry, including Bellatrix Exploration. To keep up with customer demand for faster remote data access, Vipond developed iSCADA as a service to deliver a high-performance SCADA experience for each client.

One of the greatest challenges in developing iSCADA was the state of the Internet itself, as protocols and web browsers weren’t designed for real-time data and control. Common complaints from users of previous Internet-based SCADA systems included having to submit and then wait, or pressing update or refresh buttons to show new data.

Many systems relied only on web-based technologies to deliver real-time data. Because the HTTP protocol was never designed for real-time control, these systems were always lacking and frustrating to use whenever an operator wanted to change a setpoint or view a process trend.

Users were asking for an Internet-based SCADA system with a local HMI look and feel, and that became the goal of Vipond Controls. This goal was reached with iSCADA as a service by giving each customer an individual virtual machine within Vipond’s server cloud.

All data is now kept safe and independent of other machines running in the cloud. A hypervisor allows multiple operating systems or guests to run concurrently on a host computer, and to manage the execution of the guest operating systems. The hypervisors are highly available and portable, so in the event of a server failure, the virtual machine can be restarted on another hypervisor within minutes.

All the SCADA software runs within the virtual machine, and users are offered a high degree of personal customization. Customers can connect directly to on-site controllers, and Vipond can also make changes to controllers and troubleshoot process problems.

This cloud-based SCADA solution can reduce end-user costs up to 90% over a traditional SCADA system, thanks to the provision of a third-party managed service and the reduction of investment required for IT and SCADA integration, development, hardware, and software.


Pre-conference press conference on Mercury as a Global Pollutant

02/08/2013
This is a brief summary of the press conference that preceded the Mercury 2013 conference in Edinburgh, Scotland (28 July – 2 August 2013).

Panel members: Loic Viatte, Swedish Ministry for the Environment; Dr Lesley Sloss, Chair of Mercury 2013, Principal Environmental Consultant at IEA Clean Coal Centre and Lead of the Coal Mercury Partnership area at UNEP; John Topper, Managing Director, IEA Clean Coal Centre and Managing Director of the GHG Group; Dr David Piper, Deputy Head of the Chemicals Branch of UNEP’s Division of Technology, Industry and Economics; Michael Bender, co-coordinator of the Zero Mercury Working Group; Eric Uram, Executive Director of SafeMinds; and Prof. K. Clive Thompson, Chief Scientist at ALcontrol Laboratories UK.

The panel discussed the progress of legislation to reduce emissions from coal-fired power stations, and Dr Lesley Sloss explained that, whilst mercury-specific legislation may take 5 to 10 years to be implemented in Europe, control technologies which can reduce mercury emissions by around 70% are already being utilised in many countries as part of initiatives to lower emissions of pollutants such as particulates, sulphur dioxide and nitrogen oxides. It was also suggested that some developing countries and emerging economies may choose to implement these technologies as part of their commitment to the Minamata Convention.

In advance of the Press Conference, Paul Wheelhouse, Scottish Government Minister for Environment and Climate Change, issued the following statement: “An international conference of this stature puts Scotland on the world stage and demonstrates the important part we are playing in addressing global issues.
“Sound science, strong evidence and engaged citizens means properly informed choices and effective action on the ground and this is essential if the harmful effects of mercury pollution are to be reduced.
“This event is a key part of the journey to a new legally binding international agreement – and Scotland should take great pride in being at the heart of that process. I’d like to warmly welcome all of the 850 delegates from over 60 countries to Edinburgh and wish them every success as they progress this crucial agenda.”

Discussing the different priorities for the week’s conference, Michael Bender said “Mercury knows no boundaries, which is why it has been necessary to develop an international convention.” One of the main sectors facing a mercury emissions reduction requirement is illegal artisanal gold mining, but this is a challenging social issue because gold mining is the sole source of income for many of these miners, and enforcing legislation could have very serious social consequences. In contrast, the coal industry, which is responsible for around 25% of global emissions from human activities (around half the share attributed to artisanal gold mining), is easier to regulate, so it is often regarded as a more tempting target for guaranteed results.

Michael Bender also referred to the benefits of trade barriers, which are beginning to halt the flow of mercury between countries; there is a need for this trend to continue and for more chain-of-custody regulations.

The panel explained the need to “think globally, act locally” – to acknowledge that mercury distributes itself around the globe with no respect for national borders, but to appreciate that all countries can play their part by cleaning up their own back yard.

One of the priorities will be to address the mercury issues that are the quickest and easiest to resolve – the low-hanging fruit. The panel felt that this would be the products that contain mercury, especially in the healthcare sector (thermometers and similar instrumentation), because of its ‘do no harm’ ethos and the increasing availability of alternative methods and instruments.

One of the most important issues in delivering the aims of the Convention is ‘political will’ to drive change. For example, the election of President Obama was seen as a significant moment in the development of the Convention because he had already addressed mercury issues earlier in his political career. David Piper said that the support of the United States was very significant in the development of the Minamata Convention.

Michikazu Iseri from the Kumamoto Daily News in Japan asked the panel if NGOs are likely to be disappointed with the Convention, but Michael Bender from the Zero Mercury Working Group (an international coalition of over 100 NGOs) said that, whilst many of them might have preferred greater rigour in the terms of the convention, the overall reaction was very positive because the Convention combines both a financial mechanism and a compliance mechanism. David Piper agreed, describing the Convention as a ‘giant step forward’ but Lesley Sloss said the challenge now is to flesh the convention out with more ‘what and how’ detail.

The final question referred to the adoption of low energy compact fluorescent lightbulbs (CFLs) that contain a small amount of mercury; whilst helping to lower energy usage, they contribute to mercury emissions. Responding, David Piper said that he did not expect this to become a significant issue since these technologies are likely to be replaced with even more environmentally friendly options in the near future.


NPL trials identify improved bioaerosol monitoring technology

01/07/2013
Trials conducted by the British National Physical Laboratory (NPL) have identified improved methodologies for sampling and measuring bioaerosols at composting facilities. Commissioned by Britain’s Department for Environment, Food and Rural Affairs (DEFRA), the first project began in 2008 and the results of a larger series of trials will be published later this summer.

Background
As Britain seeks to reduce the quantity of waste going to landfill, there has been a growth in demand for composting, particularly to accommodate ‘green bin’ waste. In addition there has been an increase in the variety of wastes that are being composted, so it is important to be able to understand the emissions from these processes in order to minimise any impact on the environment and human health.

Trials have identified improved methodologies for sampling and measuring bioaerosols at composting facilities. However, bioaerosols are sampled in a wide variety of industries where airborne biological particles (such as bacteria, pollen, endotoxins, viruses and fungal spores) represent a potential hazard.

Micro-organisms are necessary for the composting process, so they will always be present in large quantities within the bulk material. Any handling process, such as moving, sorting or turning, is likely to create airborne dust that will contain micro-organisms, and studies have shown that exposure to the pathogenic fungus Aspergillus fumigatus can trigger asthma, bronchitis and allergic responses, so workers and residents near composting sites are potentially at risk.

Traditional bioaerosol sampling techniques rely on the impaction of particles on a solid agar medium. However, these methods can be time-consuming and are limited by low flow rates and unreliable impaction. They are also restricted to particles that can be cultivated. In contrast, the wet-walled cyclonic technology employed by the Coriolis instruments rapidly collects biological particles in liquid at a high flow rate with validated efficiency, and the liquid containing the particles is compatible with a number of rapid microbiological analysis methods, including qPCR (quantitative polymerase chain reaction), which enables the quantification and qualification of most targets.

Studies at NPL
The objective of the initial work was to improve the accuracy and speed of traditional measurement techniques, and one of the conclusions of the project was that the wet-walled cyclonic technology employed by the Coriolis gave the best performance for quantifying biological species such as fungi and bacteria when used in conjunction with qPCR. Some of the experimental work was carried out at the Health Protection Agency (HPA) – now Public Health England – to quantify the efficiency of sampling and analysis methods for the measurement of airborne Aspergillus fumigatus spores. This work demonstrated good correlation between Coriolis/qPCR and the HPA’s ‘standard’ method for these measurements.
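
For readers unfamiliar with qPCR quantification, the Python sketch below illustrates the general standard-curve approach: Cq values from a dilution series of known standards are fitted against log10(copy number), and unknown samples are then interpolated from the fitted line. All numbers are invented for illustration and do not represent NPL's or the HPA's data.

import math

def fit_standard_curve(standards):
    # standards: list of (copies, Cq). Least-squares fit of
    # Cq = slope * log10(copies) + intercept.
    xs = [math.log10(copies) for copies, _ in standards]
    ys = [cq for _, cq in standards]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

def copies_from_cq(cq, slope, intercept):
    # Invert the standard curve to estimate copy number for an unknown sample.
    return 10 ** ((cq - intercept) / slope)

# Ten-fold dilution series: (copies per reaction, observed Cq) - invented values
standards = [(1e6, 15.1), (1e5, 18.5), (1e4, 21.9), (1e3, 25.3)]
slope, intercept = fit_standard_curve(standards)
print(f"Unknown at Cq 20.0 ~ {copies_from_cq(20.0, slope, intercept):.0f} copies/reaction")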

As a result of the initial work, NPL now offers an Aspergillus fumigatus bioaerosol monitoring service to quantify airborne spore levels at composting sites using a rapid qPCR technique. The key advantages of this monitoring service over traditional microbiological methods are:

  1. Short sampling times
  2. Rapid analysis
  3. High sensitivity and broad detection range
  4. Species specific
  5. Detects total spore count (viable and non-viable), which overcomes any issue of emission underestimation as a result of damage to the spores during collection
  6. Aids differentiation between background spore levels and site-specific emissions

A full report on the early work has now been published on the Defra website, and further studies have been commissioned. The most recent studies have involved bioaerosol sampling with the Coriolis sampler at four different sites, every quarter during 2012. NPL’s David Butterfield says: “The objective of the latest trial was to assess the sampling and monitoring technologies in greater detail, under differing weather conditions and with different sources.”

At the same time, a working group at CEN, the European Committee for Standardisation, is working on a new bioaerosol monitoring standard that is likely to accommodate the latest technology and will necessitate demonstration of equivalence.

Looking forward, Jim Mills from Air Monitors, the company which launched the Coriolis in Britain, says: “It will take some time before this new technology becomes standard practice, but in the meantime, with the benefit of the work that has been conducted by NPL and others, there is no reason why Coriolis should not be utilised widely to improve the efficiency and effectiveness of bioaerosol sampling at composting sites, and in many other applications such as hospitals, legionella investigations, cooling towers, animal housing and pharmaceutical manufacture.”