Celebrating twenty years of abnormality!

21/07/2014

This year the Abnormal Situation Management (ASM®) Consortium is celebrating 20 years of thought leadership in the process industry. The Consortium grew out of a grassroots effort, begun in 1989, to address alarm floods. Honeywell spearheaded the development of a proposal to the US NIST Advanced Technology Program (ATP) to form a joint research and development consortium.

Background on the ASM Consortium
The ASM Consortium was started in 1994 to address process industry concerns about the high cost of incidents such as unplanned shutdowns, fires, explosions and emissions. The term Abnormal Situation Management® was coined to describe this problem area. Matching funds from NIST enabled the consortium to spend several years researching and developing highly advanced concepts to address the problem of abnormal situations. Since then, research has continued, and increasing effort has been put into the development and deployment of solutions that incorporate ASM knowledge.

The basis of the ASM Consortium is collaboration and information-sharing. By working together, members achieve far more than they could working alone. Research results are published for members, and are often shared further by means of webinars, seminars and workshops. User members also guide Honeywell in the selection and development of product solutions that incorporate ASM knowledge. Non-members can also benefit from ASM research: the ASM Effective Practices Guidelines for Alarm Management, Display Design and Procedural Practices are available for purchase on Amazon.com.

The proposal addressed the challenging problem of abnormal situation management. In preparing for this proposal effort, Honeywell and its collaborators created the Abnormal Situation Management (ASM) Joint Research and Development Consortium (referred to as ASMC) under the U.S. Cooperative Research and Development Act. In November 1994, the ASM research joint venture began its research with $16.6 million (€12.27m) in funding for a three-year study programme, including $8.1 million (€6m) from ATP and industry cost-sharing of $8.5 million (€6.29m).

This year, ASM Consortium members have met twice for week-long Quarterly Review Meetings (QRMs), once in Houston, Texas (USA) in April and again in Antwerp, Belgium in June. Along with its normal business, the Consortium discussed plans to celebrate its 20 years of service to the process industry. The Quarterly Review Meetings are a platform for ASM Consortium members to share the benefits gained from ASM practices and products, and to discuss new challenges faced in plant operations. Besides Honeywell, members of the Consortium include industrial manufacturers, a human factors research company, and universities that collaborate to research best practices for managing abnormal situations in industrial facilities.

To celebrate its 20th year, the ASM Consortium will be spreading further awareness about managing and mitigating abnormal situations in the process industries by publishing journal articles, presenting white papers at leading industry conferences, and producing a planned video.


Is hiring instruments a good safety bet?

12/05/2014

Why instrument hire makes occupational safety sense!

Decisions concerning the acquisition of occupational safety monitoring instrumentation are often made by operational staff who may not have visibility of the full financial implications of their choices. This article, by James Carlyle of Ashtead Technology, examines the factors affecting these decisions and explains why a strategic decision to hire instrumentation can deliver substantial and wide-ranging advantages.

Background
The Management of Health and Safety at Work Regulations 1999 (originally introduced in Britain in 1993 in response to an EU Directive) require employers and self-employed people ‘to carry out a suitable and sufficient assessment of the risks for all work activities for the purpose of deciding what measures are necessary for safety.’ However, the risks arising from toxic gases, dust, explosive mixtures and oxygen depletion can be complex and constantly changing. So, in addition to an initial risk assessment, ongoing monitoring is often necessary to ensure the protection of staff and others.

Employers may choose to conduct their own testing and monitoring, or they may prefer to employ the services of professional consultants to conduct the risk assessments. Either way, the employer or the consultant has to decide whether to purchase the instrumentation or to rent it.

The risks
Before examining the ways in which testing and monitoring should be undertaken, it is first necessary to consider the risks that need to be assessed.

Fire and/or explosion can result from an excess of oxygen in the atmosphere (for example, from an oxygen cylinder leak), or from the ignition of airborne flammable contaminants arising from a leak or spillage in nearby processes.

Toxic gas detection

Toxic gases, fumes or vapours may also arise from leaks and spills, or from disturbed deposits or cleaning processes. Gases and fumes can accumulate in confined spaces such as sewers, manholes and contaminated ground. They can also build up in confined workspaces for welding, flame cutting, lead lining, brush and spray painting, or moulding using glass reinforced plastics, use of adhesives or solvents. Carbon monoxide, particulates and hydrocarbons may also become a problem in situations where the products of combustion are not exhausted adequately. Plant failure can also create gaseous hazards. For example, ammonia levels may increase if refrigeration plant fails or carbon dioxide may accumulate in some pub cellars following leaks from compressed gas cylinders.

Oxygen depletion in workplace air can cause headaches, breathlessness, confusion, fainting and even death. There are many situations in which this can occur; for example:

  • Workers breathing in confined spaces where replacement air is inadequate
  • Oxygen consumption by biological processes in sewers, storage tanks, storm water drains, wells etc.
  • Fermentation in agricultural silos or in brewing processes
  • Certain goods in cargo containers
  • Vessels that have been completely closed for a long time (particularly those constructed of steel) since the process of rust formation on the inside surface consumes oxygen
  • Increased levels of carbon dioxide from wet limestone chippings associated with drainage operations
  • Combustion operations and work such as welding and grinding
  • Displacement of air during pipe freezing, for example, with liquid nitrogen
  • Purging of a confined space with an inert gas to remove flammable or toxic gas, fume, vapour or aerosols
TSI Dustrak

The COSHH definition of a substance hazardous to health includes dust of any kind when present at a concentration in air equal to or greater than 10 mg/m³ (8-hour TWA) of inhalable dust or 4 mg/m³ (8-hour TWA) of respirable dust. This means that any dust will be subject to COSHH if people are exposed above these levels. Some dusts have been assigned specific Workplace Exposure Limits (WELs) and exposure to these must comply with the appropriate limit.

Most industrial dusts contain particles with a wide range of size, mass and chemical composition. As a result, their effects on human health vary greatly. However, the Health & Safety Executive (HSE) distinguishes two size fractions for limit-setting purposes termed ‘inhalable’ and ‘respirable’.

Inhalable dust approximates to the fraction of airborne material that enters the nose and mouth during breathing and is therefore available for deposition in the respiratory tract. Respirable dust approximates to the fraction that penetrates to the gaseous exchange region of the lungs. Where dusts contain components that have their own assigned WEL, all the relevant limits should be complied with.
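For illustration, an 8-hour TWA of the kind used in these limits is simply the time-weighted sum of the measured concentrations over the shift, divided by eight hours. The shift profile in the sketch below is a hypothetical example, not measured data:

```python
# Hypothetical sampled exposure periods: (concentration in mg/m3, duration in hours).
# The 8-hour TWA is the time-weighted exposure sum divided by 8 hours.
def eight_hour_twa(periods):
    """Return the 8-hour time-weighted average concentration in mg/m3."""
    total_exposure = sum(conc * hours for conc, hours in periods)
    return total_exposure / 8.0

# Example shift: 4 h at 6 mg/m3, 2 h at 2 mg/m3, 2 h unexposed.
shift = [(6.0, 4.0), (2.0, 2.0), (0.0, 2.0)]
twa = eight_hour_twa(shift)
print(f"8-hour TWA: {twa} mg/m3")  # 3.5 mg/m3
print("respirable COSHH trigger (4 mg/m3) exceeded:", twa >= 4.0)
```

In this example the respirable trigger is not exceeded, but a dust with its own, lower WEL could still be over its assigned limit.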

The financial justification for instrument hire
For most of us, when we need something, we buy it, assuming funds are available. At Ashtead Technology, we challenge that assumption: unless the required instrument is either very low cost or likely to be deployed frequently, it rarely makes sense to purchase the equipment. There are many reasons for this, but the most important is of course financial. However, operational staff are not always aware of the full cost of purchase, because the detail is hidden in the company’s accounts.

Capital purchases are generally written off in the company accounts over a three-, four- or five-year period. This means that the cost of ownership is at least 20% of the capital cost per year, and possibly over 33%. There are, of course, other costs of ownership: most instruments require regular maintenance and calibration, which involve further costs in both materials and labour. A gas analyser, for example, would require calibration gases and associated valves and safety equipment; trained staff would be required to ensure that the instrument is calibrated correctly, and consumables such as filters and replacement gases would be needed. The same issues arise with other types of instrumentation, all of which require maintenance by suitably trained and qualified staff. Consequently, the annual cost of instrument ownership can easily exceed 50% of the purchase cost.

Another significant financial cost is the ‘opportunity cost’ of the money that is tied up in a purchase; capital expenditure on equipment represents money that could have been used for other purposes – for investing in raw materials, staff, training, marketing, new premises etc. Alternatively that money could have been invested and delivered a return.
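The depreciation-plus-maintenance reasoning above can be illustrated with a rough comparison. All of the figures below (purchase price, write-off period, maintenance share, daily hire rate and days of use) are hypothetical assumptions, not Ashtead Technology prices:

```python
# Rough buy-vs-hire comparison. All figures are illustrative placeholders.
def annual_ownership_cost(purchase_price, write_off_years, maintenance_fraction):
    """Straight-line depreciation plus an annual maintenance/calibration share."""
    depreciation = purchase_price / write_off_years
    maintenance = purchase_price * maintenance_fraction
    return depreciation + maintenance

def annual_hire_cost(daily_rate, days_used):
    return daily_rate * days_used

own = annual_ownership_cost(10_000, 4, 0.25)  # 2500 + 2500 = 5000 per year (50% of cost)
hire = annual_hire_cost(120, 20)              # 2400 per year for 20 days' use
print(f"own: {own}, hire: {hire}, hire cheaper: {hire < own}")
```

Under these assumptions, occasional use strongly favours hire; a daily-use instrument would tip the comparison the other way.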

In addition to the financial justification, there are many more reasons to hire…

Renting provides appropriate technology
Once an instrument is purchased, the company is committed to that technology for the next few years and this can be a major disadvantage. For example, if a company purchases a PID gas detector for the measurement of solvents, it may find later that there is also a requirement to monitor methane, and the PID would not be suitable for this, so a second analyser would be necessary; an FID for example. Similarly, the company may discover at a later date that solvent speciation is necessary, which again, the PID would fail to achieve.

The same principle applies to other applications. For example, if a basic infrared camera is purchased and it later transpires that higher resolution images are required, a second more expensive camera would be necessary.

From a corporate perspective, instrument purchase can have negative implications because instruments are often shared amongst different departments and between different sites. It is unlikely that one technology or one particular instrument can meet everybody’s needs, so each person is likely to seek their own instrument: firstly to ensure that they get the kit that they need, and also so that their access to instrumentation is not limited because it is in use elsewhere. Allowing each person to purchase their own kit is an extremely costly option, although it does at least encourage ‘ownership’, so that the equipment is properly maintained. In contrast, shared ownership often results in poor maintenance because none of the staff take responsibility for ensuring that the equipment is serviced and maintained correctly.

Renting instrumentation ensures that all staff have continual access to a range of different technologies, so they do not have to ‘make do’ with whatever happens to be available at the time they need it. If a company has purchased an instrument, its staff are more likely to use it ‘because it is there’ rather than because it is the most appropriate technology.

Renting provides access to new technology
One of the problems with buying an instrument is that your technology is then stuck in a moment of time; inevitably new instruments are developed that are better than their predecessors, but once an instrument has been purchased it is not possible to take advantage of new technology. In contrast, with the benefits of scale, Ashtead is able to continually invest in new technology so that the rental fleet provides access to the latest technology and customers are therefore able to choose the instruments that best meet their needs.

Renting eliminates storage and maintenance costs
One of the common features of all instruments is that they require regular maintenance and in many cases calibration. This is often a skilled activity that requires training and appropriate equipment. Ashtead Technology’s engineers are therefore equipped with all of the necessary equipment to service and maintain every instrument in the rental fleet. They are also trained by manufacturers, so that all instruments can be delivered tested and ready for immediate use. Storage can also represent a cost for the larger pieces of equipment, especially if it is not possible to store the instruments in the same location as the main users.

Technical support from rental companies
Instrumentation is constantly evolving; newer instruments are usually more accurate, more sensitive, faster, lighter, and easier to use. However, the array of instruments available can be bewildering so it is often helpful to discuss options with an Ashtead Technology engineer before making a choice, and then after the instrument is delivered, many customers value telephone support during the setup and operation of the instrument.

Summary
The basic premise behind Ashtead Technology’s business is an intense focus on providing customers with exactly the right equipment at the precise moment that they need it. We therefore seek to become our clients’ instrumentation partner; saving them time and money, and ensuring that they always have access to the best available technologies. This is achieved by:

  • Continually searching the market, looking for the best technologies from the world’s leading suppliers
  • Utilising expert knowledge and buying power to ensure that our fleet of instruments includes a broad selection of the best available technologies
  • Manufacturer training for our engineers
  • Investing in the equipment, spares and consumables for servicing, calibrating and maintaining the entire instrumentation fleet

We invest in these measures so that our clients don’t have to.


Germany lowers biogas formaldehyde emissions

23/12/2013

Power generation from Germany’s enormous biogas industry produces emissions to air that are regulated by the Technical Instructions on Air Quality Control (TA Luft). As part of the approval process, the emissions from each plant have to be tested every three years. Formaldehyde is one of the pollutants of greatest concern because of its carcinogenicity and the TA Luft emission limit is 60 mg/m³. However, the German Government has also created a financial incentive scheme to encourage process managers to lower their formaldehyde emissions to below 40 mg/m³. To be eligible for the EEG (Erneuerbare Energien Gesetz) scheme, plants must be tested every year.

Formaldehyde (HCHO) can be difficult to measure in hot, wet emissions, not least because it would dissolve in condensate if the sample gas is allowed to cool. Test engineers in Germany have therefore deployed portable FTIR analysers (the DX4000 and CX4000 from Gasmet) to measure formaldehyde, and a number of systems are currently in use across Germany.

Background
The biogas industry in Germany has grown enormously in recent years; in 1992 there were 139 biogas plants in the country, but by the end of 2013 there will be almost 8,000 with an electrical capacity of about 3,400 MW – sufficient for the energy needs of around 6.5 million households. Initially, biogas plants were built to handle the by-products of human and animal food production as well as agricultural waste, but with government incentives to generate renewable energy, farmers are now growing crops such as maize specifically for energy production.

Biogas is produced by the anaerobic digestion or fermentation of biodegradable materials. The main constituent gases are methane and carbon dioxide, with small amounts of hydrogen sulphide and water. The products of biogas combustion are mostly carbon dioxide and water, but combustion also produces formaldehyde.

Biogas-fuelled combined heat and power (CHP) plants are becoming a very popular source of renewable energy in many countries because they provide a reliable, consistent source of energy in comparison with wind and solar power. In addition to the renewable energy that these plants produce; the fermentation residue is a valuable product that can be used as a fertiliser and soil conditioner for agricultural, horticultural and landscaping purposes.

Exhaust gas tests
The exhaust emissions of each biogas plant are tested every three years for substances hazardous to air quality, such as particulates, carbon monoxide, nitrogen oxides, sulphur dioxide and formaldehyde. Most of these parameters can be measured on-site with portable equipment. However, in the early years and still to this day, the complexity of formaldehyde analysis has necessitated sampling and laboratory analysis – a time-consuming and costly activity.

FTIR_DX4000

In 2007 Wolfgang Schreier from the environmental analysis company RUK GmbH (now part of the SGS Group) started working on the use of portable FTIR gas analysers for formaldehyde analysis. The FTIR analysers are manufactured by Gasmet (Finland) and supplied in Germany by Ansyco GmbH, a Gasmet group company.

FTIR analysers are able to qualitatively and quantitatively analyse an almost endless number of gas species. However, Wolfgang Schreier says: “The Gasmet units are primarily employed for the measurement of formaldehyde, and whilst they are able to measure other parameters of interest such as CO, NOx and methane, they are not yet certified for doing so in the emissions of biogas plants, unless an internal validation has been undertaken.

“The DX4000 proved to be the ideal instrument for this application because it samples at high temperatures (above 180 °C), so formaldehyde cannot dissolve in condensate, and the instrument provides sensitive, accurate, reliable real-time formaldehyde measurements – no other portable analyser is able to achieve this.

“Importantly, the DX4000 is also robust and weighing just 14kg, it is easy to transport from site to site. In addition to a heated sample line, the only other accessory is a laptop running Gasmet’s Calcmet™ software.”

In contrast with the portable FTIR, it is typical for the results of laboratory gas analysis to become available around 2 weeks after sampling. This highlights a further benefit of the direct-reading instrument; real-time results enable plant managers to adjust their process in order to improve efficiency and minimise the emissions of formaldehyde and other gases.

Ansyco’s Gerhard Zwick says: “We hope that the other measurements that are possible with the Gasmet FTIR will also soon be accepted. A new VDI method (VDI 3862-8) for the measurement of formaldehyde by FTIR is being established and this is likely to be published at the beginning of 2014.

“The preparation of this standard involved rigorous field tests with 5 Gasmet FTIR analysers at a live biogas plant. During testing, samples were taken for analysis according to the existing standard laboratory methods and the results showed that portable FTIR produced even better results than lab analysis.”

Formaldehyde reduction incentive
The bonus is paid to the operators of biogas plants that are subject to approval under the Federal Immission Control Act, provided certain conditions are met. Measurements to demonstrate the effectiveness of emission reduction have to be taken each year by an organisation approved according to § 26 of the Act.

While the emission limit for formaldehyde is 60 mg/m³, under the EEG legislation the plant operator receives a bonus of 1 cent per kWh when formaldehyde emission levels are below 40 mg/m³, with simultaneous fulfilment of the emission limits for nitrogen monoxide and nitrogen dioxide (combined) and for carbon monoxide.
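The bonus logic described here amounts to a simple eligibility check. In the sketch below, the 1-cent rate comes from the article (assumed here to be per kWh); the CO/NOx pass flags and the annual output figure are hypothetical placeholders:

```python
# Sketch of the EEG formaldehyde-bonus eligibility check described above.
# The rate is assumed to be per kWh; other limit checks are passed in as flags.
def formaldehyde_bonus(hcho_mg_m3, co_ok, nox_ok, annual_kwh, rate_eur_per_kwh=0.01):
    """Return the annual bonus in EUR, or 0.0 if any condition fails."""
    if hcho_mg_m3 < 40.0 and co_ok and nox_ok:
        return annual_kwh * rate_eur_per_kwh
    return 0.0

print(formaldehyde_bonus(35.0, True, True, 4_000_000))  # 40000.0 EUR
print(formaldehyde_bonus(55.0, True, True, 4_000_000))  # 0.0 (over 40 mg/m3)
```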

With the benefit of real-time readings from the FTIR, process operators are able to employ process control measures to alter formaldehyde emissions. However, this may also affect the efficiency of the combustion process or the concentrations of other limited gases. In addition, it is now commonplace for modern plants to use a catalyst for formaldehyde emission reduction.

Summarising Gerhard Zwick says: “The standard formaldehyde emissions monitoring package consists of a Gasmet DX4000 analyser and a heated sampling system, so no adaptations were necessary for the measurement of biogas emissions.

“We have now supplied instruments to most of the key testing organisations as well as motor and system manufacturers in Germany. Happily, the feedback has been extremely positive because, as a portable analyser, the Gasmet FTIR systems are able to test more plants, more quickly, and this lowers costs.”


2014 lineup of technical division symposia announced!

04/12/2013

Deadlines for abstract submission are fast approaching

The International Society of Automation (ISA) has announced the dates, locations and Call for Papers (where applicable) for its technical division symposia for 2014.

Download ISA Brochure (pdf)

These annual technical division symposia bring together innovators, thought leaders and other automation and control professionals from around the world to explore and discuss the latest technologies, practices and trends, and to gain high-value, peer-reviewed technical content across a wide variety of automation fields and disciplines.

There are several new or innovative features in the list, including two new arrivals on the slate: the Food and Pharmaceutical Industries Division Symposium, scheduled for the ISA headquarters in North Carolina (USA), 5-6 March 2014, and the Process Control & Safety Symposium, which will be held in Houston, Texas (USA), 6-9 October 2014.

Another innovation is the holding of one of these symposia in Europe. The 60th ISA International Instrumentation Symposium will be held in London, England, 23-27 June 2014. This is possibly the first time one of the mainstream symposia has been held outside North America.

We notice that the 9th Sales & Marketing Summit will be held online, 9-11 September 2014. Again, this is a first for one of the ISA’s ‘main-line’ conferences.

Details of the full slate of symposia may be accessed on the ISA website.

The Society also publishes a Training Monthly which lists training courses and webinars.

Plan now to attend the 2014 ISA technical symposium or conference that best matches your professional interests and expertise.


Automation preventing nuclear disasters

26/08/2013

Thoughts of an automation thought leader stimulated by the events at the Fukushima nuclear power station in the wake of the tsunami in 2011.

Béla Lipták is one of those people who may, without doubt, be called an automation thought leader, as discussed in Nick Denbow’s recent article, Who are the Automation Thought Leaders?

Lipták, a patriotic Hungarian by birth, has done much to pass on his considerable knowledge to the next generation of automation professionals. He is a worthy recipient of the ISA’s Life Achievement Award  for his history of dedication to the instrumentation, systems, and automation community as evidenced by his teachings, writings, and inventions. He has published over 200 technical articles and has written 34 technical books, including four editions of the multi-volume Instrument Engineer’s Handbook.

Over the years he has published a series of articles on the role of automation and, more specifically, on how its correct application might have prevented some of the sometimes fatal and always catastrophic nuclear disasters that have occurred, such as those at Three Mile Island (USA) in 1979 and Chernobyl (former USSR) in 1986.

Of course, the most recent such incident is at the Fukushima power station, irreparably damaged by the major earthquake and subsequent tsunami in north-east Japan. All these studies have been published in Control Global or its sister publication, and normally we would just put a link on our news pages (and indeed may have done so in some cases at the time of publication). This time we are taking a different approach, because he has published several articles at different times continuing his thoughts on this major and still alarming disaster.

The Fukushima Nuclear Accident – Part 1 (15/4/2011) Béla Lipták talks about the safety processes used at the plant.

Preventing Nuclear Accidents by Automation – Part 2 (5/7/2011) Béla Lipták discusses the design and control errors at Fukushima, which still exist in many American boiling-water reactors (BWRs) and must be corrected.

How Automation Can Prevent Nuclear Accidents-Part 3 (31/8/2011) Watch out for outdated and/or unreliable instruments. These can cause major disasters.

In Automation Could Have Saved Fukushima (Part 1) (15/3/2013), Lipták says that if the Fukushima level detectors had operated correctly, the hydrogen explosions would have been prevented.

Automation Could Have Prevented Fukushima (Part 2) (30/4/2013) discusses automatic vs. manual operation of the emergency cooling systems, and the role that badly designed control and block valves played in this nuclear accident.

He promises some further articles and we hope to put links here to those as they appear.

 


Pre-conference press conference on Mercury as a Global Pollutant

02/08/2013
This is a brief summary of the press conference that preceded the Mercury 2013 conference in Edinburgh, Scotland (28 July – 2 August 2013).

Panel members: Loic Viatte, Swedish Ministry for the Environment, Dr Lesley Sloss, Chair of Mercury 2013 and Principal Environmental Consultant at IEA Clean Coal Centre and Lead – Coal Mercury Partnership area at the UNEP, John Topper, Managing Director, IEA Clean Coal Centre and Managing Director of the GHG Group, Dr David Piper, Deputy Head of the Chemicals Branch of UNEP’s Division of Technology Industry and Economics, Michael Bender, co-coordinator of the Zero Mercury Working Group, Eric Uram, Executive Director of SafeMinds, Prof. K. Clive Thompson, Chief Scientist at ALcontrol Laboratories UK.

The panel discussed the progress of legislation to reduce emissions from coal-fired power stations and Dr Lesley Sloss explained that, whilst mercury-specific legislation may take 5 to 10 years to be implemented in Europe, control technologies which can reduce mercury emissions by around 70% are already being utilised in many countries as part of initiatives to lower emissions for pollutants such as particulates, sulphur dioxide and nitrogen oxides. However, it was suggested that some developing countries and emerging economies may choose to implement these technologies as part of their commitment to the Minamata Convention.

In advance of the press conference, Paul Wheelhouse, Scottish Government Minister for Environment and Climate Change, issued the following statement: “An international conference of this stature puts Scotland on the world stage and demonstrates the important part we are playing in addressing global issues.
“Sound science, strong evidence and engaged citizens means properly informed choices and effective action on the ground and this is essential if the harmful effects of mercury pollution are to be reduced.
“This event is a key part of the journey to a new legally binding international agreement – and Scotland should take great pride in being at the heart of that process. I’d like to warmly welcome all of the 850 delegates from over 60 countries to Edinburgh and wish them every success as they progress this crucial agenda.”

Discussing the different priorities for the week’s conference, Michael Bender said: “Mercury knows no boundaries, which is why it has been necessary to develop an international convention.” One of the main sectors facing a mercury emissions reduction requirement is illegal artisanal gold mining, but this is a challenging social issue because gold mining is the sole source of income for many of these miners, and enforcing legislation could have very serious social consequences. In contrast, the coal industry, responsible for around 25% of global emissions from human activities (roughly half the share attributed to artisanal gold mining), is easier to regulate, so it is often regarded as a more tempting target for guaranteed results.

Michael Bender also referred to the benefits of trade barriers which are beginning to halt the flow of mercury between countries, so there is a need for this trend to continue and for more chain of custody regulations.

The panel explained the need to ‘think globally, act locally’: to acknowledge that mercury distributes itself around the globe with no respect for national borders, but to appreciate that all countries can play their part by cleaning up their own back yard.

One of the priorities will be to address the mercury issues that are the quickest and easiest to address; the low-hanging fruit. The panel felt that this would be the products that contain mercury; especially in the healthcare sector (thermometers and similar instrumentation) because of its ‘do no harm’ ethos and the increasing availability of alternative methods and instruments.

One of the most important issues in delivering the aims of the Convention is ‘political will’ to drive change. For example, the election of President Obama was seen as a significant moment in the development of the Convention because he had already addressed mercury issues earlier in his political career. David Piper said that the support of the United States was very significant in the development of the Minamata Convention.

Michikazu Iseri from the Kumamoto Daily News in Japan asked the panel if NGOs are likely to be disappointed with the Convention, but Michael Bender from the Zero Mercury Working Group (an international coalition of over 100 NGOs) said that, whilst many of them might have preferred greater rigour in the terms of the convention, the overall reaction was very positive because the Convention combines both a financial mechanism and a compliance mechanism. David Piper agreed, describing the Convention as a ‘giant step forward’ but Lesley Sloss said the challenge now is to flesh the convention out with more ‘what and how’ detail.

The final question referred to the adoption of low energy compact fluorescent lightbulbs (CFLs) that contain a small amount of mercury; whilst helping to lower energy usage, they contribute to mercury emissions. Responding, David Piper said that he did not expect this to become a significant issue since these technologies are likely to be replaced with even more environmentally friendly options in the near future.


State of control and safety in manufacturing and power generating industries!

29/07/2013

This paper was written by a team at Premier Farnell, a distributor of electronics technology.

The power generation industry has gone through many changes in the last 50 years or so. Controls and safety features that were based primarily on pneumatics have now been taken over by electronic controls (with their own sets of integrated circuits, resistors and capacitors), digital control systems, Furnace Safeguard Supervisory Systems (FSSS) and computer-controlled systems. Automation systems have improved over the years, and a standard has now emerged for the power generating industries. The general improvements in these systems are outlined below. Boiler controls are generally divided into five sections, viz. drum level control, steam temperature control, boiler pressure control, the Furnace Safeguard Supervisory System (FSSS) and the auxiliary interlocks, in addition to boiler water chemistry control. Boiler water chemistry is a separate control not normally associated with the others and will not be discussed here.

System for drum level controls
Earlier, the controls were based on the water level in the drum alone. When steam demand went down, the apparent decrease in water level (caused by the increase in pressure) would lead the controller to supply more water to the drum. This anomaly was rectified by the introduction of three element control for the boiler drum. In the new system, the difference between steam flow and water flow became the main element for control of drum level, with a correction from the measured drum level providing better control. The drum level system can be an independent control with its own electronics, consisting of resistors, transistors and capacitors, or integrated circuits.

Figure 1: Three element control

The system was further refined with pressure and temperature correction from steam parameters. This resulted in better management of drum level. The three element control with the compensation has not undergone any major change over the last 50 years and is now the industry standard.
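The three element scheme described above can be sketched as a simple proportional calculation. This is an illustrative sketch only, with hypothetical gains and signal scaling, not a real boiler algorithm.

```python
# Sketch of a three element drum level controller (illustrative only):
# the steam-flow/feedwater-flow mismatch is the primary element,
# trimmed by a correction from measured drum level. Flows and level
# are assumed to be normalised signals; gains are hypothetical.

def three_element_control(steam_flow, feed_flow, drum_level,
                          level_setpoint=0.0, kp_level=0.5, kp_flow=1.0):
    """Return a feedwater valve demand in the range 0..1."""
    flow_error = steam_flow - feed_flow          # primary element
    level_error = level_setpoint - drum_level    # trim correction
    demand = 0.5 + kp_flow * flow_error + kp_level * level_error
    return min(max(demand, 0.0), 1.0)            # clamp to valve range
```

When steam flow exceeds feedwater flow, the valve demand rises above its neutral position, restoring the mass balance before the drum level itself has moved far.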

Figure 2: Three element controls modified

Control for boiler pressure
Pressure control has evolved from simple pressure-only control to schemes that add other parameters such as air flow, fuel flow and the calorific value of the fuel. Online efficiency calculations are also now integrated along with the fuel and pressure controls. Primary air and secondary air measurements also assist the control of boiler pressure in the system.
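The multi-parameter pressure control described above can be illustrated with a hedged sketch: a master pressure controller produces a firing-rate demand, which is converted to a fuel-flow demand using the measured calorific value of the fuel. The function name, gain and base load below are hypothetical.

```python
# Hedged sketch of multi-parameter boiler pressure control: the
# pressure error adjusts a firing-rate demand, and the measured fuel
# calorific value converts that demand to a fuel mass flow. All
# names and numbers are illustrative, not from a real plant.

def fuel_flow_demand(pressure_error_bar, calorific_value_mj_per_kg,
                     gain=0.5, base_firing_mw=100.0):
    """Return fuel mass-flow demand in kg/s for a given pressure error."""
    firing_demand_mw = base_firing_mw + gain * pressure_error_bar
    return firing_demand_mw / calorific_value_mj_per_kg  # MW / (MJ/kg) = kg/s
```

Using the calorific value in the conversion means a switch to a lower-grade fuel automatically raises the fuel flow needed to hold the same pressure.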

Figure 3: Boiler pressure control

Furnace safeguard Supervisory Systems (FSSS)
The FSSS ensures that there is no uncontrolled fuel that could cause an explosion in the boiler at any stage. The flame is sensed by photo sensors and, as long as flame is present, the supply of fuel to the burners continues. The absence of flame shuts off all fuel to the furnace. Flame sensing is done on a two-out-of-three basis, so that at any stage at least two sensors are working and dependence on a single flame sensor is avoided. Failure of two sensors to see a flame is the signal to close off all fuel supply to the furnace. The fuel can be in solid, liquid or gaseous form, and fuel in any form is cut off irrespective of the boiler pressure. The fuel supply cannot be restarted unless complete purging of the furnace is ensured. To ensure the absence of any fuel in the furnace, it is purged with 30% of full air flow for 5 minutes, and only then can a new supply of fuel be introduced. Purging is done irrespective of the reason for the trip. Here too, flame sensors and the associated trip logic have improved over the years.

The functions of the FSSS are:

  1. Starting the furnace purge after the conditions of the auxiliary interlocks are satisfied
  2. Permitting the introduction of fuel
  3. Ensuring that burners start working only when an auxiliary flame exists
  4. Stopping all fuel to the boiler when the flame is extinguished or no longer detected

The FSSS has three units, viz. the indicating and operator’s console; the relay and logic cabinet, with its own electronic circuits of transistors, resistors and capacitors, relays, timers, and AC and DC supplies; and the fuel trip system.
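The trip and purge rules described above can be sketched as simple boolean logic. This is an illustrative sketch of the stated rules (two-out-of-three flame vote, purge at 30% air flow for 5 minutes), not a real FSSS implementation; all names are hypothetical.

```python
# Sketch of the FSSS permissive logic described in the text:
# a 2-out-of-3 vote on the flame scanners decides whether fuel may
# flow, and after any trip the furnace must be purged at >= 30%
# air flow for 5 minutes before fuel can be reintroduced.

PURGE_SECONDS = 5 * 60    # purge duration stated in the text
PURGE_AIRFLOW = 0.30      # fraction of full air flow during purge

def flame_present(scanners):
    """2oo3 vote: flame is present if at least 2 of 3 scanners see it."""
    return sum(scanners) >= 2

def purge_complete(airflow_fraction, elapsed_seconds):
    """Purge requires >= 30% air flow held for the full 5 minutes."""
    return airflow_fraction >= PURGE_AIRFLOW and elapsed_seconds >= PURGE_SECONDS

def fuel_permitted(scanners, purged):
    """Fuel valves may open only after a completed purge, with flame proven."""
    return purged and flame_present(scanners)
```

Because `fuel_permitted` requires the purge flag regardless of flame state, a trip for any reason forces a fresh purge before fuel can return, mirroring the rule that purging happens irrespective of the reason for the trip.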

Figure 4: Typical FSSS panel

Steam temperature controls
The steam temperature controls ensure that the metals used in the construction of the boiler stay within their safe operating limits. Control of steam temperature can be achieved by many means, but the most effective control is achieved by attemperation of the steam between the primary and the secondary superheaters. Any increase in temperature can shorten the life of parts and may even cause failure of the final superheater metals.
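Attemperation as described above can be sketched as a proportional spray-water controller: the further the steam temperature rises above its setpoint, the more spray water is injected between the superheater stages. The setpoint and gain below are hypothetical, for illustration only.

```python
# Illustrative sketch of attemperation between the primary and
# secondary superheaters: the spray-water valve opens in proportion
# to how far the steam temperature exceeds its setpoint.
# Setpoint and gain are hypothetical values.

def spray_valve_demand(steam_temp_c, setpoint_c=540.0, gain=0.02):
    """Return spray-water valve demand in the range 0..1."""
    error = steam_temp_c - setpoint_c
    return min(max(gain * error, 0.0), 1.0)  # no spray below setpoint
```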

Figure 5: Steam temperature control

Boiler alarm panels that were previously hard wired have become much more sophisticated. For example, changing alarm settings was previously only possible with the help of an instrument engineer; now the panel operator is able to fine-tune the alarm settings.

Figure 6: Boiler indicating and alarm panel

All these systems operated as independent systems with no outside communication. They were known as single loop controllers, each with its own logic. The trip systems operated independently and had no effect on the other entities in a power plant, such as the turbine, generator or the electrical systems. Additional reliability was introduced with two-out-of-three arrangements of primary sensing elements. When all three agreed on the measured value, the average value was used for control. When one of them gave a value beyond the permissible error, it was ignored, the other two were used for the calculations, and an alarm that the third had gone out of service was given to the operator. This increased the reliability of the system, something that was not possible with the pneumatic systems.
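The two-out-of-three sensing scheme described above can be sketched as follows; the tolerance value and function name are hypothetical, for illustration only.

```python
# Sketch of the redundant-sensor scheme described above: three
# transmitters measure the same variable; if all agree within a
# tolerance the average is used, otherwise the outlier is excluded,
# an alarm is raised, and the remaining two are averaged.

def select_value(a, b, c, tolerance=1.0):
    """Return (control value, alarm flag) from three redundant readings."""
    low, mid, high = sorted([a, b, c])
    if high - mid > tolerance:        # high reading is the outlier
        return (low + mid) / 2, True
    if mid - low > tolerance:         # low reading is the outlier
        return (mid + high) / 2, True
    return (low + mid + high) / 3, False
```

With one transmitter drifting, control continues on the two healthy readings while the operator is alerted, which is exactly the availability gain the pneumatic systems could not offer.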

Other systems like the water management and fire fighting systems could not be integrated with the overall systems, even though operators could get information through hard wired systems. In emergencies, some of these systems were often ignored, resulting in less than ideal ways of handling the emergency.

The next major change was the introduction of the Digital Control System (DCS) and the use of software in the boiler control system. All the above controls were previously independent of one another, but the introduction of the DCS and of computer software in the alarm and emergency handling systems brought about the coordinated safe shutdown of the boiler, turbine and generator. The system of programmable logic controllers and mechanical relays for the protection of generators slowly gave way to electronic relays in the electrical systems and their integration with the DCS.

The architecture of the DCS is local control supervised by additional layers that oversee the entire system. The individual control levels (drum level, pressure control and FSSS of the boiler, as well as the auxiliary interlocks and turbine and generator protection) are individual systems with their own logic and alarm generation for decentralized control, but the information is sent to a higher level where the safety system takes over in emergencies.

In emergency situations in individual areas, each system has its own set of controls over the changing parameters to handle the situation. The safety of the entire plant takes precedence over the individual systems, and the safe shutdown of the entire system is activated when required.

While the individual controls are meant for the control of a single parameter, the alarm system signals the operator about an abnormal condition. The operator can take over the situation and bring the system back to normal. If the operator is unable to do so and the situation threatens to get out of hand, the shutdown system comes into play. At this stage the operator has no further involvement and can only watch the system shut down safely.

Overall, the progression from individual controls, to dedicated single loop controllers with redundant sensing elements, to the DCS traces the evolution to the present state of control and safety in the manufacturing and power generating industries.