Energy experience shared by users and producers!

26/06/2014
Notable industry experts discussed the future of smart infrastructures, substation automation and the Internet of Energy, providing valuable insights for improving operational efficiency.

Last May, industrial software and energy automation specialist COPA-DATA hosted the Energy Experience Day in Warwick (GB), an event aimed at addressing the challenges currently faced by power utilities, municipalities and grid operators.

Engineers and analysts from all parts of industry attended and participated in COPA-DATA’s Energy Experience Day in May 2014

The energy industry, having gone through drastic changes over the last hundred years, now faces a new frontier of innovation. Usability, design, independence and the ergonomics of process control are setting the pace in the most critical industries. Sectors such as pharmaceuticals, energy and infrastructure, food and beverage, manufacturing automation and automotive are all seeking to revolutionise how people and systems interconnect.

Martyn Williams

Host to a series of expert presentations, the Energy Experience Day delivered insight into the key issues of standardisation and collaboration within the industry. Martyn Williams, Managing Director of COPA-DATA in Britain, kicked things off with a keynote speech looking at how far the industry has come over the last hundred years and what the future holds.

“The future of the energy industry is hugely dependent on continuous progress in the field of Smart Grids,” explained Williams. “Industrial automation software is one of the keys to creating a national grid system that is smart enough to meet the rising demand for energy and integrate renewable energy sources. Products like the zenon Energy Edition make interfaces more efficient, ergonomic and user-friendly, while also increasing the security of substations, power plants and wind farms.”

Focusing on substation automation and smart infrastructures, industry experts from Intel, Mitsubishi, Advantech, Bilfinger and the Salzburg University of Applied Sciences (A) spoke at the event. The discussions centred on the need for standardisation within the energy supply chain.

In particular, the importance of standards such as IEC 61850, a communication standard for substation automation used within supervisory control and data acquisition (SCADA) systems, was highlighted as the gateway to cost-effective, multi-vendor substation automation. Its role in helping companies bridge the gap between centralised control and the increasingly dispersed nature of geo-information systems was also emphasised.

Ross Corfield, EMEA Market Development Manager for Intelligent Transportation at Intel, spoke about the Internet of Things (IoT), end-to-end (E2E) connectivity, infrastructure security and the growth of cloud computing.

“Intel is very keen to understand the issues and challenges faced by the energy sector,” he explained. “The COPA-DATA Energy Experience Day is the perfect opportunity to connect with people who operate on the ground and face these challenges on a day-to-day basis. For us, the event has been about how Intel can design the best technology that will make a difference for the future of energy.”

Jürgen Resch

Jürgen Resch, Industry Manager for Energy at COPA-DATA, stressed the importance of best practice in substation automation. He demonstrated how the optimisation of software architecture has now improved control capability over geographically remote locations using portable and mobile devices.

Cost reduction was another key area highlighted by several speakers at the event. David Bean, Infrastructure Sales Manager at Mitsubishi UK, spoke about how effective telemetry and data management can yield significant cost savings in substation automation. Tony Milne, Manager for Power and Energy at Advantech, expanded on the topic of effective multi-vendor automation. He explained how IEC 61850 enables multi-vendor systems for substations to improve technical features, reduce costs and facilitate commissioning or installations.

Nigel Allen, Sales Manager at Bilfinger Industrial Automation Services, expanded on the challenges posed by a non-integrated system involving multiple companies, energy sources, interfaces, programming techniques and communication protocols. He then explained how Bilfinger addressed some of these challenges in an offshore wind farm project and in an energy management application for large buildings.

Sébastien Roberto, Sales Manager at COPALP, COPA-DATA’s French subsidiary, also discussed the software needs of the energy industry. He emphasised the importance of using universal tools, which support protocols like IEC 61850, IEC 60870, MODBUS, DNP and DLMS/COSEM. He also stressed the importance of remote access, including online debugging and soft scope for the future of the energy industry. “The key,” Roberto concluded, “is to optimise resources, to ensure the reliability of products and make customers’ lives better.”

Simon Back, Researcher at the Salzburg University of Applied Sciences, offered a comprehensive presentation regarding the potential of bridging SCADA systems and Geoinformation systems (GIS) for the energy sector, particularly in the field of Smart Grids. For example, he explained, GIS can help visualise the position of electric consumers, generators and power lines of a Smart Grid, while SCADA can fulfil the surveillance and control function of the system.

Overall, the Energy Experience Day was well received. Attendees included engineers and analysts from across the industry, including National Grid, Alstom, Atkins and Network Rail.

“The configuration specification [IEC 61850] is the key to industry development,” said Ray Zhang, Tech Leader of Automation Engineering at National Grid. “This is a wonderful forum for utilities developers, manufacturers and systems integrators to get together and share experiences and information.”

“The Energy Experience Day was all about giving people an idea, an inspiration about what can be achieved with standardised software, independence, ergonomics, IEC 61850 and collaborative partners,” explained Martyn Williams. “All of us at COPA-DATA would like to thank the attendees and we look forward to building on the success of this event with a follow-up session to be arranged for later this year.”


W.A.G.E.S. for cost reduction!

07/01/2013

This paper from Endress+Hauser discusses the growing recognition of the need to monitor and control energy efficiency in utilities.

1. Introduction
Production plants in all industries are coming under increasing pressure to measure the cost of their utilities:

– Water
– Air
– Gas (e.g. Natural Gas, other gases or fuels)
– Electricity and
– Steam

Interestingly, this W.A.G.E.S. trend is independent of the type of industry: it can be seen in small breweries as well as on large chemical sites.

One important driver of this pressure is the rising cost of energy. The cost of natural gas for industrial applications has more than tripled in less than ten years, and the price of electricity in Europe has risen by 30% in less than four years.

Certifications according to EMAS and the ISO 14000 series also require companies to measure their energy streams using calibrated instrumentation.

To find out more about how you can benefit from E+H’s experience in energy monitoring solutions, you can request a free copy of their Energy Saving Book from their site.
More information about the Endress+Hauser Energy Monitoring Solution is available online.

The utilities have frequently been neglected in the past. Now, however, they are coming increasingly into focus. Many companies still measure natural gas and electricity only at the custody transfer point. Yet from these few measurements, important parameters such as specific energy consumption are derived that give important indications: how much energy does it take to make a ton of product? Moreover, these measurements are often taken only monthly, or sometimes even yearly. For a comparatively small investment, it is possible to set up energy monitoring systems that measure the consumption of each utility close to its point of use. These measurements can then be used to build meaningful relations between energy consumption and its driving factors, enabling the customer to:

• Control their energy consumption with a better resolution (application-wise and time-wise)
• Identify and justify energy reduction projects (where is most energy consumed? Which changes are possible?)
• Detect poor performance earlier (are the boiler’s heating surfaces fouling?)
• Get support for decision making (should the contract with the provider of electricity be changed?)
• Report performance automatically (which Energy Accountability Centre/shift etc. is performing best? Did exceptions occur?)
• Audit historical operations
• Get evidence of success (did promises made by a manufacturer of energy efficient equipment come true?)
• Get support for energy budgeting and management accounting
• Provide the energy data to other systems (e.g. existing SCADA)

2. What is energy management?

Picture 1: The Energy Management Cycle.

Energy management can be seen as a cyclic operation. Everything starts with basic data collection: energy consumption is measured and converted to appropriate units. For most utilities, these conversions require the highest attention (a small worked example follows the list below):

– the conversion from volumetric units (e.g. natural gas measured by turbine meters, steam measured by DP devices or vortex meters) to corrected volume, mass or energy is often done incorrectly, resulting in errors of typically 10…30%
– many devices are installed incorrectly, resulting in similar error ranges, and
– if the basic data is already wrong, the analysis will be wrong and all action taken will be based on wrong information.
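
To make the first point concrete, here is a minimal sketch in Python of such a conversion; the pressure, temperature and calorific value used are invented example figures, not plant data or E+H methodology:

    # Minimal sketch: correcting a raw volumetric gas reading to normal conditions
    # (ideal-gas correction) and converting it to energy via a calorific value.
    # All figures are illustrative assumptions.

    def to_normal_volume(v_raw_m3, p_abs_bar, t_celsius,
                         p_norm_bar=1.01325, t_norm_c=0.0):
        """Correct an operating volume to normal conditions (Nm3)."""
        return v_raw_m3 * (p_abs_bar / p_norm_bar) * ((t_norm_c + 273.15) / (t_celsius + 273.15))

    def to_energy_kwh(v_norm_m3, calorific_value_kwh_per_nm3=10.5):
        """Convert normal volume to energy using the gas calorific value."""
        return v_norm_m3 * calorific_value_kwh_per_nm3

    v_raw = 120.0        # m3 indicated by a turbine meter during one interval
    v_norm = to_normal_volume(v_raw, p_abs_bar=1.4, t_celsius=35.0)
    print(f"{v_norm:.1f} Nm3, {to_energy_kwh(v_norm):.0f} kWh")
    # Treating the raw reading as if it were already normal volume would be off by:
    print(f"{abs(v_raw - v_norm) / v_norm:.1%}")

A wrong static pressure or temperature assumption, or a badly installed meter, distorts the result in the same multiplicative way, which is how errors of the magnitude quoted above arise.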

The easiest form of data collection is paper and pencil. It is amazing to see how many people in industry still have to walk around the factory every month, find certain meters and take the readings. Modern systems perform this automatically: modern recorders, either as stand-alone devices or as so-called “software recorders”, are able to record data at the commonly used 15 min or 30 min intervals. If these intervals are not sufficient, data collection every 100 ms is possible.
The most modern data collection systems are even able to collect the data of up to 30 devices using bus communication and pass it on using “Field Gates”.

3. Data analysis
If data collection is the basis of it all, data analysis is the heart: it converts the raw energy measurements into meaningful information.

A first, basic approach consists in analyzing the 15-min or 30-min data profiles:

– What is the base load of the application? Why is energy still consumed when nothing is being produced? How can this base load be reduced?
– What is the typical maximum load during productive hours? How can the maximum load be reduced? (This is important, e.g. for electricity contracts.)
– What is the typical load distribution? How can a more uniform load distribution be obtained?

For this purpose, different load-management policies are available (e.g. peak clipping).
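
As a simple illustration of such a profile analysis, the following Python sketch derives base load, peak load and load factor from one day of 15-minute values; the profile itself is invented:

    # Minimal sketch: basic analysis of one day of 15-min electricity readings
    # (kWh per interval). The profile is invented: a quiet night followed by a
    # production day.

    profile_kwh = [12.0] * 32 + [58.0] * 64      # 96 intervals = 24 hours

    base_load_kw = min(profile_kwh) * 4          # kWh per 15 min -> average kW in that interval
    peak_load_kw = max(profile_kwh) * 4
    total_kwh    = sum(profile_kwh)
    load_factor  = (total_kwh / 24) / peak_load_kw   # average load relative to peak

    print(f"base load {base_load_kw:.0f} kW, peak load {peak_load_kw:.0f} kW, "
          f"daily total {total_kwh:.0f} kWh, load factor {load_factor:.2f}")
    # A 48 kW base load while nothing is produced is exactly the kind of figure
    # that prompts the questions listed above.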
It is even more meaningful to put energy consumption in relation to a driving factor. Examples are:

– how much heating energy is consumed relative to how cold the weather is (so-called degree days)
– how much energy is consumed to make a ton of product
– how much electricity is consumed to light a building relative to the hours of daylight.

Since all of these parameters relate energy consumption to a relevant driver, they are generally called “Specific Energy Consumptions” (SEC).

Tracking such a factor enables the customer to check whether a certain process is drifting over time, i.e. becoming less efficient. Such a drift can have multiple causes:

– the amount of leakage in a compressed-air network grows because of lacking maintenance
– the specific energy consumption for raising steam rises because of poorly maintained steam traps (steam traps fail open)
– the specific energy consumption for heating a building rises because of fouling of the heat-exchanger surfaces.

Generally, comparing the energy consumption with a driver will reveal a linear relationship. In certain applications, this linear relationship also shows an intercept that does not equal zero.

If no actions are taken, the trend will be as follows:

– the intercept grows (examples: increasing leakage in a compressed air application or due to failing steam traps)
– the slope of the linear relationship grows (loss of efficiency e.g. because of fouling heat-exchangers)

Customers, however, will strive to

– reduce the intercept and
– reduce the slope of the linear relationship.

The linear relationship found can now be used as a target for the future. One example: if in the past it has taken 4 GJ of energy to make a ton of steam, we expect this same value for the future, too – unless we take any actions to improve efficiency.

We can now compare the real energy consumption to the expected one and record the differences. If this difference exceeds a certain value, a warning will be generated.
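
A minimal sketch of this targeting step in Python (the historical figures are invented; in practice they would come from the energy monitoring system):

    # Minimal sketch: fit a linear target (intercept + slope) to historical data,
    # then warn when actual consumption exceeds the expected value. All numbers
    # are invented for illustration.

    # Historical (driver, energy) pairs: tons of steam produced vs GJ consumed per week
    history = [(100, 420), (150, 630), (200, 820), (250, 1030), (300, 1220)]

    n = len(history)
    mean_x = sum(x for x, _ in history) / n
    mean_y = sum(y for _, y in history) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in history)
             / sum((x - mean_x) ** 2 for x, _ in history))
    intercept = mean_y - slope * mean_x          # a non-zero intercept = "fixed" losses

    def expected(driver):
        return intercept + slope * driver

    actual, driver = 790.0, 180.0                # new week: 180 t produced, 790 GJ consumed
    deviation = actual - expected(driver)
    if deviation > 0.05 * expected(driver):      # 5 % band chosen arbitrarily here
        print(f"warning: {deviation:.0f} GJ above target")

With the invented figures the fitted slope is 4 GJ per ton, matching the steam example above, and the new week triggers a warning of about 46 GJ.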


Picture 2: The Control Graph for controlling deviations from a pre-set target. If the control limits are exceeded, an alarm can be generated

We can also take these differences and total them up over time in the so-called CUSUM (cumulated sums) chart.

Picture 3: The CUSUM chart. It acts as a totalizer and can reveal savings achieved.

This chart acts like a bank account: if the process becomes less efficient, the CUSUM chart will run away from the zero line. In the picture, however, the process has become more efficient. In our example, an economizer was installed, improving a steam boiler’s efficiency. We can read directly from the chart that, compared to former performance, the investment in the economizer saved the company 1100 MWh of energy within 15 weeks.
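
Continuing the sketch, the CUSUM is simply the running total of the weekly deviations (actual minus expected); the figures below are invented to roughly mirror the economizer example, with the measure taking effect around week 6:

    # Minimal sketch: CUSUM of weekly deviations in MWh. Negative values mean the
    # process is doing better than the target; the figures are invented.

    weekly_deviation_mwh = [8, -3, 5, 0, -2,
                            -105, -110, -108, -112, -109,
                            -111, -107, -113, -110, -108]

    cusum, running = [], 0.0
    for d in weekly_deviation_mwh:
        running += d
        cusum.append(running)

    # A CUSUM that keeps falling away from zero quantifies the cumulative saving.
    print(f"CUSUM after {len(cusum)} weeks: {cusum[-1]:.0f} MWh "
          f"(a saving of roughly {-cusum[-1]:.0f} MWh)")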

Where can this data analysis be done?
Recording performance, analyzing data every 15 or 30 minutes and displaying current specific energy consumption values can easily be done using modern recorders that display these values close to the process. These modern recorders can already perform even complex math operations. Thus, employees running certain processes can be directly involved and start asking questions:

– Why are certain shifts more efficient than others?
– Why was the specific energy consumption stable for months but started drifting recently?

These analysis techniques and also the “targeting” procedure described above can also be performed in Energy Monitoring software.


Picture 4: Set-up of a typical full-blown energy monitoring information system

4. Communication/reporting
Recipients of energy reports can be found at different levels of the hierarchy, from operations personnel to top management, and in different areas of a company (production/operations/engineering, controlling, energy and environmental management).

The reports must provide information to enable the user to act. Operational staff needs to know when a problem has occurred as quickly as possible and know what they should do about it. Senior management, on the other hand, needs summary information to know that procedures and systems are working well. In order to design reports, it is important to understand who needs reports and why.

Reports to senior management might include:

– a summary of last year’s costs, broken down into EACs (energy accountable centers)
– a summary of the current year’s performance on a monthly basis

• against budget
• against the previous year
• against targets

– a note of the savings (or losses) achieved to date and how they were achieved
– a note of additional savings opportunities and what actions are ongoing to address them

A new report to management should be issued each month and be available in time for board meetings.

Operations management will be responsible for operating processes and plant efficiency. They will need to know on a shift, daily, weekly or monthly basis (depending on the nature of the process and the level of energy use) what energy has been used and how this compares with various targets. The information will be used to

– measure and manage the effectiveness of operations personnel and process plant and systems
– identify problem areas quickly
– provide a basis for performance reporting (to executives)

Operations personnel need to know when a problem has occurred and what needs to be done to rectify it. This information needs to be provided in a timely manner, which might mean within a few minutes of the event for a major energy-using process, or within a day or a week.

Engineers associated with operations will need reports similar to those for operations personnel. Engineers may typically be involved with problems where there is more time to act (compared with process operators), for example, cleaning heat exchangers, solving a control problem or removing air from a refrigeration condenser.

Engineers who are not directly in operations but who provide support will need more detailed historical information. Typically, these individuals will be involved in analyzing historical performance, developing targets and modeling. They will require access to the plant data historian and will use analysis tools, ranging from commonly available spreadsheet software to advanced data mining and similar software.

Engineers that are involved in projects will need supporting data, for example, levels of energy use, process operating conditions, etc. They will also need access to the raw data in the historian and access to analysis tools.

The accounts department may be interested in actual energy usages and costs to compare with budgets. They will need information that is broken down by department so that costs can be allocated to related activities. Accurate costing of operations and the cost of producing goods can improve decisions regarding product pricing, for example, and the allocation of resources.

Energy and environmental managers will need summary data that identifies the performance achieved and trends, much like what executives and operations managers require. Like engineers, they may require more detailed information for specific analysis.

The environmental department may want energy consumption expressed as equivalent CO2 emissions, and the energy reports may need to be integrated into environmental reports that are more general. Summary information may be required for annual energy and environmental reporting and may be needed more frequently by regulatory bodies.

The energy manager may be involved in energy purchasing as well as efficiency. He may need information about the profile of energy use (using a half-hourly graph, for example), peak usage, nighttime usage, etc. The energy manager will also need access to the raw data in order to allow evaluation of purchasing options and to check bills.

We can see from this broad variety of requirements that modern Energy Management Information Systems have to be very flexible in creating these reports.

5. Taking the action
Results of implementing Energy Monitoring Information Systems in the UK indicate that, when properly implemented, such a system can save 5 to 15 percent of annual energy costs. As an initial approximation, 8 percent appears to be a reasonable estimate. [1]

Implementing an Energy Management Information System and acting on its findings alone will typically result in savings of around 8 percent. Most experience with this approach can be found in the UK, based on the work of the local “Carbon Trust”.
Further savings can be achieved through capital expenditure, e.g. on more efficient burners and boilers, economizers, etc.

Savings Strategies in Energy Management typically fall into the four following categories:

• Eliminate. Generally, one should question if certain processes or sections of a plant are really required or if they could be replaced. A simple example: eliminating dead legs of a plant.
• Combine. CHP is a well-known “combine” process: the generation of heat and electricity are combined. Another example is the use of the waste heat created by compressed-air compressors, e.g. for pre-heating factory air.
• Change equipment, person, place, or sequence. Equipment changes can offer substantial energy savings as the newer equipment may be more energy efficient. Changing persons, place, or sequences can offer energy savings as the person may be more skillful, the place more appropriate, and the sequence better in terms of energy consumption. For example, bringing rework back to the person with the skill and to the place with the correct equipment can save energy.
• Improve. Most energy management work today involves improvement in how energy is used in the process because the capital expenditure required is often minimized. Examples include reducing excess air for combustion to a minimum, reducing temperatures to the minimum required. Improving does sometimes require large amounts of capital. For example, insulation improvements can be expensive, but energy savings can be large, and there can be improved product quality.


China leads the way in distribution automation!

16/07/2012
Distribution automation, the ‘darling’ of the Smart Grid, to reach $5 billion

The Distribution Automation System market has become a top priority for smart grid utilities, across the board and around the world. In recent years, advanced metering infrastructure and demand response were the darlings of the industry, but now the utilities are turning their attention to improving efficiency and control in the distribution segment of the grid that lies between the substation and the meter.

Besides the tangible benefits of improving reliability and efficiency within grid operations, distribution automation system implementations and upgrades have another important attribute — the ability to deliver strong return on investment without requiring extensive consumer engagement or behavior change. In a nutshell, utilities are placing increased emphasis on adding greater intelligence and control capabilities to their distribution infrastructure.

Sovereign investment programs are focused on the expansion of capacity in emerging markets, upgrading of aging transmission and distribution infrastructure, and improvement of reliability and efficiency.

“The Electric Power Distribution Automation System market is being driven by the world’s continued demand for more energy as utilities are faced with the challenges of realizing a good ROI while providing a higher level of service to their customers. These challenges drive the need to conserve energy (specifically the energy lost in the distribution network), defer the building of additional generation facilities, and improve customer service,” according to Steve Clouther, author of ARC’s “Electric Power Distribution Automation Systems Worldwide Outlook”.

Distribution automation system providers
The distribution automation systems market has a diverse group of players and market share leaders, even though the typical range of a distribution network is relatively short; i.e., a distribution network carries electricity from a sub-station to consumers.

The market leaders for worldwide distribution automation are subsidiaries and/or divisions of large international control companies with an expansive global presence. Collectively, the top ten vendors account for 71% of the 2011 worldwide distribution automation systems market.

China is “all In” on the Smart Grid
At the 2011 Smart Grid World Forum in Beijing (October 2011), China’s State Grid Corporation announced plans to invest $250 billion (€205b) in electric power infrastructure upgrades over the next five years, of which $45 billion (€37b) is earmarked for smart grid technologies. According to its three-stage plan, China will invest another $240 billion (€197b) between 2016 and 2020 (including another $45 billion (€37b) toward smart grid technologies) to complete the build-out of a “stronger, smarter” Chinese power grid.

China to be a driving force in distribution automation
According to the International Energy Agency’s (IEA) World Energy Outlook 2011, by 2015 China will overtake the US and become the leader in total electric power generation capacity. Then, by 2035, China will consume nearly 70% more energy than the United States. With explosive growth such as this, China leads the way in distribution automation.


Latest analytical technology ensures biogas efficiency

28/05/2012

Anaerobic Digestion (AD) relies on the ability of specific micro-organisms to convert organic material into a gas that can be used to generate electricity. However, these bacteria require specific conditions if they are to function effectively and instrumentation specialist company Hach Lange has developed a range of online, portable and laboratory instruments that have enabled a large number of AD plants to maximise efficiency and prevent the risk of failure.

Introduction
In 2009, renewable energy accounted for just 3% of Great Britain’s energy supply. However, the Government there has a target to raise this contribution to 15% by 2020 as part of its strategy to fight climate change. Along with wind, solar and various other sources of renewable energy, AD has an important role to perform in helping to achieve the renewable energy target whilst also helping with the management of organic waste.

Biogas is generated in large anaerobic digesters: airtight tanks in which bacterial digestion takes place in the absence of oxygen. Biogas is a mixture of methane, carbon dioxide and trace amounts of many other gases; it can be burnt to produce electricity, which is then exported to the National Grid. Alternatively, the biogas can be further processed and refined to almost pure methane and injected into the national gas grid.

The remnant digestate can be used for a variety of purposes such as a nutritional additive to crops on arable land, much in the way manure is used, or as a landfill restoration material.

There are two types of biogas plants, determined by the substrate they use; co-fermentation plants and renewable raw material fermentation plants. In co-fermentation plants, substrates of non-renewable raw materials are used, such as residues from fat separators, food residues, flotation oil, industrial waste products (glycerol or oil sludge) and domestic organic waste. Renewable raw material fermentation plants utilise materials such as maize, grass, complete cereal plants and grains, sometimes together with manure slurry.

The need for testing and monitoring
Efficiency is vital to the success of a biogas production plant; bacteria require optimum conditions to effectively produce biogas from the digestion of organic matter. Plant operators therefore have a strong interest in the efficiency of their biogas plant and the activity of the bacteria. Consequently these production plants require reliable, on-site analysis in combination with continuously operating process instruments. Loading excessive levels of biomass into a digester may have severe economic consequences and could potentially lead to biomass inactivation and necessitate a cost-intensive restart. Conversely, under-loading a biomass digester could also have financial implications, because less electricity is produced and potential revenue is lost. Substrate amounts must be tailored to achieve the optimum rate of bacterial digestion.

The degradation process which occurs within the biogas plant digesters takes place in a highly sensitive microbial environment. The digesting, methane-producing bacteria, for example, are highly temperature-sensitive and most active within the temperature ranges of around 35 to 40 °C and approximately 54 to 57 °C. The specific nature of the microbial environment inside the digesters must be maintained throughout fermentation to increase production and avoid inactivation of the highly responsive bacteria.

Monitoring equipment
Hach Lange provides portable, laboratory and online monitoring systems that facilitate examination at key points within the fermentation process, including eluate analysis, where the substrate is fed into the digester, but also within the digester itself. Online process analysis instrumentation can be employed to continuously maintain optimum conditions within the biogas plant and/or samples can be collected regularly for analysis.

Different analytical instruments are required for different stages of the fermentation process: at the substrate entry point; within the main digester; in post-fermentation tanks and to continuously monitor biogas production.

Process monitoring instruments used across the fermentation cycle allow operators to constantly supervise the anaerobic digestion rate and biogas production.

Hach Lange TIM 840 Titrator

One of the most important measurements for assessing fermentation progress is the FOS/TAC ratio. This is determined using the TIM 840 titrator, and the values generated enable the system supervisor to identify potential process problems, such as the imminent inversion of the digester biology, so that countermeasures can be initiated. FOS stands for Flüchtige Organische Säuren, i.e. volatile organic acids, while TAC stands for Totales Anorganisches Carbonat, i.e. total inorganic carbonate (alkaline buffer capacity).

To measure the FOS/TAC ratio with the TIM 840 titrator, 5ml of sample is added to a titration beaker containing a follower bar. 50ml of distilled water is then added and the measurement is started. The addition of reagents is then conducted automatically by the titrator which saves operator time and reduces the potential for human error. After about 5 minutes the TAC and FOS values are calculated automatically using a pre-programmed formula.

All measured values can be stored in the autotitrator and/or sent to a printer or PC.

The FOS/TAC ratio provides an indication of the acidification of the fermenter, which is an important measurement because a low acid content demonstrates that the rate of bacterial digestion is not high enough. Conversely, too high an acid content means bacterial digestion is exceeding required levels, due to an overloading of substrate.
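
For orientation, the arithmetic behind such a two-step titration is often quoted in the form of the empirical Nordmann formulae (20 ml sample, 0.1 N sulphuric acid). The sketch below uses those published approximations with invented titrant volumes; it is not necessarily the formula pre-programmed in the TIM 840, which works with a 5 ml sample:

    # Minimal sketch: FOS/TAC estimate using the widely quoted empirical Nordmann
    # formulae (assumes a 20 ml sample titrated with 0.1 N H2SO4). The constants
    # are NOT necessarily those used by the TIM 840; titrant volumes are invented.

    ml_acid_to_ph_5_0 = 7.0       # "A": titrant used to bring the sample to pH 5.0
    ml_acid_ph_5_0_to_4_4 = 0.7   # "B": additional titrant from pH 5.0 down to pH 4.4

    tac_mg_caco3_per_l = ml_acid_to_ph_5_0 * 250
    fos_mg_per_l = (ml_acid_ph_5_0_to_4_4 * 1.66 - 0.15) * 500

    ratio = fos_mg_per_l / tac_mg_caco3_per_l
    print(f"TAC {tac_mg_caco3_per_l:.0f} mg/l, FOS {fos_mg_per_l:.0f} mg/l, FOS/TAC {ratio:.2f}")
    # Often-quoted guidance treats roughly 0.2-0.4 as a stable band, with markedly
    # higher ratios pointing to substrate overload and acidification risk.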

Case Study:

Viridor’s Bredbury facility

Viridor’s Resource Recovery Facilities in Reliance Street, Newton Heath, Manchester and Bredbury, Stockport. (GB)

At the Resource Recovery facilities which incorporate AD plants the feedstock is derived from domestic waste collections – the ‘black bag’ portion that would otherwise be destined for landfill. Pre-sorting removes plastics, metals and glass, after which the waste is pulverised to produce a slurry that is passed to the AD plant. This slurry contains the organic fraction that is processed to produce biogas.

Steve Ivanec is responsible for ensuring that the plant operates at optimal efficiency. He says: “Monitoring is extremely important at this plant because of the variability of the feedstock – the organic content can fluctuate from one day to another, so we have to be able to respond very quickly.”

Steve’s team uses Hach Lange instruments to closely monitor the entire process and to ensure that the plant’s bacteria are provided with optimal conditions. These tests include chloride, pH, alkalinity and volatile fatty acids; the ratio of the latter two being the same as the FOS/TAC ratio, which is determined by a TIM Biogas titrator. In addition, samples are taken from the feed, the digesters and the effluent to monitor ammonia and COD with a Hach Lange spectrophotometer. This data is essential to ensure compliance with the plant’s discharge consent.

The Reliance Street plant utilises biogas to generate electricity and the residue from the AD process can be defined as a product rather than a waste because it complies with the BSI PAS110 Quality Protocol for Anaerobic Digestate (partly as a result of the monitoring that is undertaken). This product is termed ‘compost-like output’ (CLO) and can be landfilled, used as a landfill cover, or spread on previously developed land to improve that land. However, CLO cannot currently be applied to agricultural land used for growing food or fodder crops.

Summary
The Hach Lange test and monitoring equipment enables the operators of AD plants to ensure that the bacteria are provided with optimum conditions so that biogas production is as efficient as possible. As a result, less waste is sent for landfill and renewable energy is generated efficiently. This ensures the best possible return on investment and by reducing the use of fossil fuels for power generation, helps in the fight against climate change.


Energy & Environment – 3 big predictions!

25/04/2012

Frost & Sullivan has released its three big predictions for 2012 for the global energy and environment market. Industrial convergence, smart technology and distributed generation will be the key topics in 2012 and beyond.

Based on a survey of several thousand companies conducted in December 2011, this research paper highlights areas of growth.  “Data and opinions of key stakeholders, combined with analysis and commentary from Frost & Sullivan industry experts, have been used to present key market highlights, hot growth topics, global and regional hot spots, areas of market convergence, and bold predictions for 2012,” explains John Raspin, Director and Partner at Frost & Sullivan.

Convergence and Value Chain Integration
Energy and environment players see the greatest convergence today from within their own sector as well as from the industrial automation and ICT industries. The convergence between the energy and automotive markets is also highly significant; it relates primarily to the emergence of electric vehicles and e-mobility, which is driving innovation in batteries, energy storage, transmission and distribution infrastructure, battery charging and the integration of mobility into the smart home. Convergence opportunities also exist in the water sector, driven by innovation across the treatment technology, process control and automation & instrumentation sectors.

Value chains in the renewable energy industry, which are less consolidated and stretch across hundreds of components, have presented and still present many opportunities. Wind power is the segment most attractive to European companies. Solar PV still has a positive growth outlook, but opportunities for European players are slowly vanishing as the global market becomes increasingly dominated by Chinese players. Chemical and material companies have begun to take significant steps further down the value chain to acquire key technology and solution capabilities in the fast-growing, high-potential energy and environment markets.

Smart Technology
Smart technology is going to play a key role in the future development of the energy and environment sector with efficiency improvements at the centre of the market evolution.  Smart grids, buildings, homes, cities and water networks will all become a reality this decade, thus creating far-reaching market growth opportunities.

Distributed Generation
There is, and will be, an increasing focus on the deployment of small-scale renewable energy close to the point of consumption. This will create opportunities for suppliers of micro-generation technologies, as well as requiring new strategies and business models from power generation companies and utilities. The growth of distributed generation, the legislation surrounding this market and the integration of distributed generation into the grid are emerging trends, especially in the developed markets.

The key themes outlined above feature heavily in Frost & Sullivan’s energy and environment research programme for 2012.  A key driver for the research is the work Frost & Sullivan has been conducting around Mega Trends which are driving new and emerging market segments for key industry participants.

See the presentation!
Frost & Sullivan insight on the three big predictions for 2012 and beyond is available on SlideShare.


Europe to experience five-fold growth in installed base of smart meters by 2017

10/02/2012
Increasing competition with Chinese and other Asian companies expected to enter the market in the short to medium term

The European smart meter market is at a growing stage. While smart meter developments are taking place in countries like Denmark, Finland and Norway, large-scale rollouts have been planned in countries such as the UK, France, Spain, Portugal and Ireland to meet the energy targets and environmental policies set by the EU. Currently, Sweden and Italy are the only mature markets in Europe.

New analysis from Frost & Sullivan, European Smart Meter Markets, finds that smart meter revenue in Europe is expected to grow from €318.40 ($422.00) million in 2010 to €1.45 ($1.93) billion in 2017 at a compound annual growth rate (CAGR) of 29.3 per cent. The smart meter installed base in Europe is expected to grow from 43.90 million in 2010 to 200.43 million in 2017 at a CAGR of 24.2 per cent. The market foresees larger growth post 2012 with the publishing of the standardisation mandate. Standardisation will affect the future development and innovation of smart meters.
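
For readers who want to check such growth figures, the CAGR is the usual compound-growth formula; a quick verification of the installed-base numbers quoted above:

    # Quick check of the quoted installed-base CAGR: (end / start) ** (1 / years) - 1
    start_million, end_million, years = 43.90, 200.43, 7   # 2010 -> 2017
    cagr = (end_million / start_million) ** (1 / years) - 1
    print(f"CAGR = {cagr:.1%}")   # ~24.2 %, matching the figure quoted above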

Europe is a push market in which the smart meter and smart grid markets are legislation-driven. There is regional disparity due to the different regulatory challenges faced by each country, which has a direct impact on implementation.

“The smart meter market is expected to prosper, owing to the recent impetus from renewable energy and smart grid implementation,” says Frost & Sullivan Research Analyst Neha Vikash. “Smart meters are required for integration of renewable energy. Europe is focussed on meeting the 20-20-20 targets which is a necessary driver for increase in renewable energy and the third energy directive targets 80 per cent smart meter penetration in the residential sector by 2020.”

Currently, the European smart meter market has less than 20 vendor companies. The competition among manufacturers, utilities, ICT, network, remote monitoring and automation companies is high and it is forecast to increase along with new participants entering the market. In particular, Chinese and other Asian companies will start to make their appearance in this market during the next 1-2 years.


The best cooling solutions!

15/12/2011
Heat generated by datacenters is ten times greater than it was around 10 years ago

A study by Frost & Sullivan’s Gautham Gnanajothi

Datacenter technology has reached a point of no return in recent times. The servers used in datacenters have evolved, shrinking in physical size while increasing in performance. The trouble is that this has considerably increased their power consumption and heat densities. Thus, the heat generated by the servers in datacenters is currently ten times greater than it was around 10 years ago; as a result, traditional computer room air conditioning (CRAC) systems have become overloaded. Hence, new strategies and innovative cooling solutions must be implemented to match the high-density equipment. The rise in rack-level power density has resulted in growing thermal management challenges over the past few years. Reliable datacenter operations are disrupted by hot spots created by such high-density equipment.

Emerson's global data center (St Louis MO US) uses the latest energy-efficiency technologies, precision cooling products, and efficiency strategies.

Some of the main datacenter challenges faced in the current scenario are adaptability, scalability, availability, life cycle costs, and maintenance. Flexibility and scalability are the two most important aspects any cooling solution must possess; this, combined with redundant cooling features, will deliver optimum performance. The two main datacenter cooling challenges are airflow challenge and space challenge. These challenges can be overcome with the use of innovative cooling solutions. Some of the cooling techniques used in datacenters are discussed below.

Aisle Containment

Aisle containment strategies have gained immense popularity among data center operators in the past and this trend is expected to continue. With the use of hot aisle and cold aisle containment, energy-efficient best practice in server rooms can be achieved. Whether a hot aisle or cold aisle approach is used depends on the type of application. Most data centers have a standard hot aisle/cold aisle layout; aisle containments are refinements of this layout, in which each successive aisle is designated either a hot aisle or a cold aisle. In the hot aisle, the backs of the server racks exhaust hot air. In the cold aisle, the server racks are aligned so that the equipment inlets face each other. There is usually a raised floor system known as the “plenum”, under which the cool air from the CRAC or the computer room air handler (CRAH) flows to perforated floor tiles. These floor tiles are located in the cold aisles and channel the cool air into the server inlets at the front of the racks; the air is then exhausted via the hot aisle. With hot aisle/cold aisle containment, the cool air can be directed closer to the server inlets, thereby increasing energy efficiency.

Rows of server racks at the computer center at CERN in Switzerland. (Pic CERN)

However, there are a couple of challenges faced by the aisle containment solution. The first is “bypass air”, a situation that arises when the cool air fails to enter the servers. The other is “recirculation”, where the heated exhaust air flows back into the cold aisle through empty space or over the top of the racks. These two conditions are known for creating hot spots in server rooms. Data center operators use sheets made of plastic, cardboard and so on to make barriers for the cold aisles so that the hot air does not re-enter the cold aisle.

High-density Supplemental Cooling
Data center densities have increased from 2 to 3 kW per rack to in excess of 30 kW per rack. A different cooling approach needs to be implemented to meet these high-density requirements, and this is where supplemental cooling comes into play. It uses two different approaches: “rear door heat exchangers” and “overhead heat exchangers”. Rear door heat exchangers come to the rescue of the struggling CRAC by conditioning the hot air and returning it to the room at a colder temperature. They require a chilled water source and a connection to a remote chiller unit. The overhead heat exchangers, as the name suggests, are suspended above the server rows. They complement hot aisle/cold aisle containment by drawing the hot air from the hot aisle exhaust, conditioning it and sending cool air to the cold aisles. Supplemental cooling takes pressure off the CRAC unit.

Liquid Cooling

The Aurora supercomputer from Eurotech, which uses liquid cooling.

With the rise in the number of applications and services that require high-density configurations, liquid cooling is generating a lot of interest among data center operators. As the name suggests, it brings the liquid (either chilled water or refrigerant) closer to the heat source for more effective cooling. In contrast to a CRAC unit, which is isolated in a corner of the room, liquid cooling solutions are embedded in the row of server racks, suspended from the ceiling, or installed in close relationship with one or more server racks. There are two types of liquid cooling – “in-row liquid cooling” and “in-rack liquid cooling”; both require chilled water (or refrigerant) supply and return piping, run either overhead or beneath the floor to each individual cooler.

Closed Coupled Cooling
Another remedy for high-density computing is closed coupled cooling, where the distant air conditioner is moved closer to the computing load. The latest generation of cooling products can be described by the term closed coupled cooling. Although these solutions vary in terms of configuration and capacity, their approach is the same: they bring the heat transfer as close as possible to the source, which is the server rack. By doing so, the inlet air is delivered more precisely and the exhaust air is captured efficiently. There are two configurations in which it operates – “closed loop” and “open loop”. In the open loop configuration, the air stream will tend to interact with the room environment to an extent. The closed loop configuration, however, is completely independent of the room in which it is installed. It creates a micro-climate within the enclosure because the rack and the heat exchanger work exclusively with one another.

The present high-density computing data center has thousands of racks, each with multiple computing units. These computing units include multiple microprocessors, each dissipating about 250 W of power, and the heat dissipation from racks containing such computing units exceeds 10 kW. Assuming that present data centers have 1,000 racks across more than 30,000 square feet, they would require 10 MW of power for the computing infrastructure alone. Future datacenters, which will be even bigger with more servers, will have greater power requirements and will therefore need even more energy-efficient and innovative cooling solutions.
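
That estimate is straightforward arithmetic, as the short sketch below shows; the processor count per rack is an illustrative assumption chosen to reproduce the figures above:

    # Back-of-the-envelope computing load for the example data center described above.
    watts_per_cpu = 250          # dissipation per microprocessor (from the text)
    cpus_per_rack = 40           # illustrative count giving about 10 kW per rack
    racks = 1000

    kw_per_rack = watts_per_cpu * cpus_per_rack / 1000
    total_mw = racks * kw_per_rack / 1000
    print(f"{kw_per_rack:.0f} kW per rack, {total_mw:.0f} MW of computing load (before cooling overhead)")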

Conclusion
There are a number of cooling solutions available on the market; however, no single cooling solution is suitable for all kinds of data center applications. The choice often depends on many factors, such as room layout, installation densities and geographic location. On the whole, when the different cooling solutions are compared, the liquid cooling technique is proving itself to be an efficient and effective solution for high-density data centers because it brings the cooling liquid closer to the heat source. This type of cooling solution is gaining popularity among data center operators because the units are embedded in rows of server racks and do not take up floor space; they are also retrofit-friendly, which means that the data center can stay operational as the units are brought online. Data centers would benefit from using liquid cooling solutions for their high-density servers, and this would be the best way forward.



Wind energy T&M market

17/10/2011

Spotlight on renewable energy post recession ensures strong traction for the wind testing market, which is expected to reach $84.3 million in 2015

Improved global economic conditions and the gravitation toward renewable energy enabled robust growth for the wind testing market during 2010. The trend toward alternative resources and clean energy is underway, with wind turbines becoming the fastest-growing energy source in the world and enabling higher power outputs.

New opportunities are unfolding for the wind testing market as the need for certification and verification of wind turbine components increases. Both component manufacturers and wind power operators need test, monitoring and inspection procedures throughout the product’s lifecycle.

New analysis from Frost & Sullivan, Renewable Energy Market Opportunities: Wind Testing, finds that the market earned revenues of €44.7 ($60.7) million in 2010 and estimates this to reach €62.1 ($84.3) million in 2015.

“The adoption of newly-developed turbine technologies is likely to trigger fast-paced growth in the global wind industry,” says Frost & Sullivan Industry Analyst Sivakumar Narayanaswamy. “Increasingly sophisticated computational interpretation and analytic capabilities for measured data are driving the growth of the condition-monitoring solutions market.”

The CAGR of global offshore wind farm capacity is pegged at 32% from 2009 to 2015, with its contribution expected to reach 55 gigawatts (GW) by 2020.

One of the key challenges for vendors in this market is the lack of standards defining the testing procedures. Finalizing testing standards as early as possible will benefit stakeholders in the wind energy sector and drive revenues.

In addition, although the wind energy sector has picked up steam, acceptance of wind as an energy source is slowing down due to higher operation and maintenance (O&M) costs. This has a profound impact in cases where the location of wind energy plants/farms is remote, as in offshore constructions.

Test equipment manufacturers catering to this market are therefore challenged to provide cost-effective solutions to keep O&M costs low. In the non-destructive test (NDT) equipment segment, inspections on wind plant infrastructure are carried out by visual, radiographic, and ultrasonic methods from the design phase until maintenance after installation.

“The use of composite materials in the construction of blades and towers for greater efficiency and reliability necessitates better NDT techniques and tools,” says Narayanaswamy. “The vendors in this market have to tackle this issue by expediting R&D efforts to keep pace with the evolving component technologies.”


Tidal turbine development in Ireland and Canada

09/07/2010

Novel sensors aid tidal turbine development

A few months ago we reported on an application harnessing electrical power from the sea out in Galway Bay on the west coast of Ireland. Today we have a report from the other side of the country: Greenore, at the mouth of Carlingford Lough in Co Louth. This company is also working on using the tides, but differs in that its generators are completely submerged on the seabed.

Non-contact torque sensors from Sensor Technology are playing a key role in the development of commercial-scale in-stream tidal turbines produced by OpenHydro. The company is using these novel sensors, which are based on surface acoustic wave (SAW) technology, to accurately measure rotational speed and frictional forces in a simulator for the turbine bearings, thereby allowing it to optimise the performance and reliability of its innovative products.

OpenHydro is a technology company that designs and manufactures marine turbines to generate renewable energy from tidal streams. The company’s vision is to deploy farms of tidal turbines under the world’s oceans, where they will dependably generate electricity with no cost to the environment. This method of producing electricity has many benefits.

Because the turbines are submerged, they are invisible and they produce no noise. And because they are submerged at a considerable depth, they present no hazard to shipping. An advantage that is possibly the most important, however, is that the tides are completely predictable, which means that the energy output of the turbines is equally predictable. There are no large seasonal variations and no dependence on the vagaries of the weather, as there are with many other renewable energy sources.

Reliably and efficiently harvesting energy from the tides, however, requires the use of novel technology and, in the case of OpenHydro, this takes the form of open-centre turbines that can be deployed directly on the seabed. Clearly, installation in such an inaccessible location makes reliability a prime consideration in the design and construction of the turbines. For this reason, OpenHydro carefully and comprehensively evaluates the performance of all of the components used in its turbines.

For the bearings, this evaluation involves the use of a simulator that allows the company’s engineers to determine how frictional forces in the bearings vary with different loads and rotational speeds. Central to the operation of this simulator is the measurement of torque in a shaft from the motor that drives the bearing under test. With conventional sensors, it is hard to carry out this type of torque measurement accurately and reliably, but OpenHydro found that Sensor Technology’s TorqSense RWT320 series sensor provided an ideal solution.

Like all TorqSense sensors, the RWT320 units depend for their operation on surface acoustic wave (SAW) transducers. These transducers comprise two thin metal electrodes, in the form of interlocking “fingers”, on a piezoelectric substrate such as quartz.

When an RF signal of the correct frequency is applied to the transducer, surface acoustic waves are set up, and the transducer behaves as a resonant circuit. If the substrate is deformed, however, the resonant frequency changes. When the transducer is attached to a drive shaft, the deformation of the substrate and hence the change in resonant frequency will be related to the torque applied to the shaft. In other words, the transducer operates as a frequency-dependent strain gauge.
Since the transducers operate at radio frequencies, it is easy to couple signals to them wirelessly. Hence TorqSense sensors can be used on rotating shafts, and can provide data continuously without the need for the inherently unreliable and inconvenient brushes and slip rings often found in traditional torque measurement systems.
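
In effect, the sensor maps a shift in resonant frequency to torque. A minimal sketch of that idea follows; the centre frequency, sensitivity and measured value are invented for illustration and are not TorqSense RWT320 specifications:

    # Minimal sketch: a SAW torque transducer treated as a frequency-dependent
    # strain gauge. All constants are invented, not TorqSense specifications.

    F0_HZ = 200_000_000.0        # unstrained resonant frequency (assumed)
    HZ_PER_NM = 1_000.0          # sensitivity from a bench calibration (assumed)

    def torque_nm(measured_freq_hz):
        """Torque inferred from the shift of the resonant frequency."""
        return (measured_freq_hz - F0_HZ) / HZ_PER_NM

    print(torque_nm(200_042_500.0))   # 42.5 Nm for a 42.5 kHz upward shift

In practice such sensors typically work with the difference between two SAW resonators so that temperature and other common-mode effects cancel out, but the underlying principle is the one sketched here.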

“We chose the RWT320 because of its convenient wireless operation, and because it was easy for us to fix in line with an existing shaft in our experimental set up,” said Kevin Harnett, Mechanical Engineer at OpenHydro.  “In addition, this model of sensor has integral electronics and a serial output, which means that we can link it directly to a laptop computer in our test laboratory. This is a very straightforward and convenient arrangement.”

OpenHydro uses the RWT320 sensor in conjunction with Sensor Technology’s TorqView software. This offers a choice of dial, digital bar and chart graph format display for torque, RPM, temperature and power. It also provides facilities for realtime plotting and for data recording, and can output stored results as files that are compatible with Matlab and Excel.

“We have found both the sensor and the software very easy to work with,” said Kevin Harnett, “and the sensor has proved itself to be well able to withstand the tough operating conditions in our laboratory. We’ve also received excellent technical support from Sensor Technology, which was very helpful as we have never previously worked with sensors of this type. Overall, we’re very happy with the product and the service we’ve received, and the sensor is providing invaluable data for our development work.”

Proof that this development work is yielding dividends was amply provided late in 2009, when OpenHydro deployed the first commercial-scale in-stream tidal turbine in the Bay of Fundy, Canada, on behalf of its customer, Nova Scotia Power.

This 1 MW unit arrived on site on 11 November and by 17 November was operational, rotating with the tides, collecting data and producing energy.


Power from the sea

22/04/2010

Floating off the Conamara Coast looking west on Galway Bay


Prototyping a Wave Farm Energy Converter Using LabVIEW, Compact FieldPoint and CompactRIO

By Eugene Doogan, Wavebob.

It is not often that we get the chance to report on a project in our own native area, but here is a project in Cois Fhairrige that is exceptionally interesting.

Just eleven miles east of the Read-out offices is a winking presence bobbing on the waves of Galway Bay a little distance from the shore. This article from Eugene Doogan of Wavebob tells us what it’s doing and how they keep tabs on what is happening out there.

How Wavebob works!

The Challenge:
Developing a control and data acquisition system for a wave energy converter (WEC) to achieve efficient power extraction in varying sea conditions.

The Solution:
Rapidly creating a highly integrated, rugged system for real-time control and data acquisition for a WEC prototype using NI LabVIEW, Compact FieldPoint, and CompactRIO.
” In particular, the versatility, speed, and simplicity of coding in LabVIEW, as well as excellent diagnostic and debugging tools, made it an obvious choice.”

Wave Energy
Since 1999, Wavebob Ltd, one of the world’s leading wave energy technology companies, has been developing a prototype WEC for deployment in offshore “wave farms” that are similar to wind farms. Our goal is to develop a commercial WEC that can produce significant electrical power for the onshore grid on coastlines with a suitable wave climate.

Invented by Irish physicist William Dick, the Wavebob WEC is a unique dual-body point absorber in which the two bodies move relative to ocean waves and to each other. The two bodies are coupled by hydraulic cylinder pumps, which are used to extract power from the relative motion. This part of the WEC is known as the power takeoff (PTO).

WEC development involves trials at different scales: first at small scale in wave-generating tanks (one one-hundredth to one-tenth scale) and then at a larger scale (one-fifth to one-half) with fully operating PTO systems. The development team administers trials with the small-scale WECs in a sink, bath, or pond. When all trials are complete and successful, the team builds a full-scale WEC prototype.

To control the PTO in extreme sea conditions while maintaining efficient power extraction, the WEC requires a rugged and sophisticated control system. In addition, each stage of product development has its own requirements for the data acquisition and supervisory control system, which changes throughout the development cycle.

Hardware and Software Selection
The WEC prototype trials aim to successfully demonstrate the Wavebob WEC technology and gather data, which would inform the design of a full-scale Wavebob WEC.

A control and data acquisition system for the trials required real-time control of hydraulic valve switching according to sensor input, as well as data acquisition from a variety of sensors at appropriate sample rates. The requirements are similar to those of many industrial controller applications, but also include the unique challenges inherent to operating in varying sea conditions. These include operating in a marine environment, consequent dynamic effects on equipment, operation from DC source (24 VDC batteries with charging systems), the need for deterministic control (real-time OS), and relatively high-channel-count data acquisition and digital I/O.

In addition, the WEC prototypes include a variety of sensors, and the digital I/O includes solenoid switching with significant power requirements. Rapid code development, easy-to-modify control software, code versatility, and standard interfacing are essential.

After extensive research into the various options on the market, LabVIEW graphical design software was a natural fit for the PTO control system. In particular, the versatility, speed, and simplicity of coding in LabVIEW, as well as excellent diagnostic and debugging tools, made it an obvious choice. In addition, the range of hardware available from NI and its seamless integration with LabVIEW offered real benefits to the project.

The team selected LabVIEW coupled with Compact FieldPoint and CompactRIO for the control and data acquisition system to achieve the following benefits:

  • Hardware/software integration
  • Rapid development using LabVIEW
  • Real-time hardware and OS
  • Compact, rugged, and adaptable hardware
  • Upgrade path and distributed system capability
  • Excellent technical backup, particularly via National Instruments website

Control and Acquisition System – WB 06-07
During the first two trial phases for the Wavebob WEC, the team used Compact FieldPoint with LabVIEW and LabVIEW Real-Time. While the system performed extremely well, both the complexity of the control requirements and channel count increased in the subsequent development phase (MK3). As a result, the limits of the test system were approached on the second prototype. The MK3 prototype would require more processor power and faster acquisition rates.

MK3 Prototype
The scale of the WEC prototype in this phase of development, also known as the MK3 prototype, is a quarter of full scale with a fully operable PTO. Given the increasing complexity of control and additional sensors, the team selected CompactRIO hardware, LabVIEW, LabVIEW Real-Time, and the LabVIEW FPGA Module. In addition, they constructed a one-seventeenth scale model for rapid trials of structural and other changes. We had to build the new control and acquisition system into this model for data acquisition.

We selected CompactRIO for its integration with LabVIEW as well as its processing power and acquisition rates. The hardware offers the ability to run control and data acquisition loops at much faster sample rates without compromising the timing integrity of the system due to processor overload. All control and I/O functions can be programmed on the field-programmable gate array (FPGA) in the CompactRIO backplane and can run simultaneously. The controller only has to read the resulting data when logging. The small footprint and low power consumption of the CompactRIO system also facilitated incorporation into the one-seventeenth scale model.
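
The division of labour described here, a fast deterministic loop handling control and I/O while the controller only reads back buffered data for logging, can be illustrated in a language-agnostic way. The Python sketch below mimics that producer/consumer structure with two threads and simulated sensor values; it is an illustration of the architecture, not LabVIEW or FPGA code:

    # Illustrative sketch of the architecture described above: a fast control loop
    # produces samples while a slower logging loop consumes them. Python threads
    # stand in for the FPGA and real-time controller; sensor values are simulated.

    import queue
    import random
    import threading
    import time

    samples = queue.Queue()
    stop = threading.Event()

    def control_loop(period_s=0.01):
        """Fast loop: read sensors, apply a control law, buffer data for logging."""
        while not stop.is_set():
            pressure = random.uniform(80, 120)     # simulated hydraulic pressure reading
            valve_open = pressure > 100            # trivial stand-in for the PTO control law
            samples.put((time.time(), pressure, valve_open))
            time.sleep(period_s)

    def logging_loop(period_s=0.5):
        """Slow loop: periodically drain the buffer and 'log' the samples."""
        while not stop.is_set() or not samples.empty():
            time.sleep(period_s)
            batch = []
            while not samples.empty():
                batch.append(samples.get())
            print(f"logged {len(batch)} samples")

    threads = [threading.Thread(target=control_loop), threading.Thread(target=logging_loop)]
    for t in threads:
        t.start()
    time.sleep(2)        # let the loops run briefly
    stop.set()
    for t in threads:
        t.join()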

Just as in the previous prototypes, LabVIEW was the ideal choice due to its tight integration with the selected hardware. The graphical programming language is easy to use and versatile, has a myriad of modules and tools available (including the ability to target a real-time embedded OS), and has very good support.