It all began with the War of the Currents…

24/01/2020

Today, people greatly appreciate having electrical energy available at the flip of a switch, seemingly at any time and for any occasion. But where does electricity actually come from? The answer most people would give you is: “from the wall socket, of course”. So does this automatically settle the question of security of supply? More on this later.

If we compare the history of electric current with the 75 years of the history of Camille Bauer Metrawatt AG, it is easy to see how they were interlinked at certain times in the course of their development. Why is that?

It all began with the War of the Currents – an economic dispute about a technical standard

It was around 1890 when the so-called War of the Currents started in the USA. At that time, the question was whether the direct current favoured by Thomas Alva Edison (1847-1931) or the alternating current promoted by Nikola Tesla (1856-1943) and financially backed by George Westinghouse (1846-1914) was the more suitable technology for supplying the United States of America with electrical energy over large areas and constructing power grids. Given Westinghouse’s market strength at that time compared to Edison General Electric (merged into General Electric in 1892), it soon became clear that the alternating current championed by Tesla was rapidly gaining the upper hand, not least because its approximately 25% lower transmission losses weighed unquestionably in its favour. Soon afterwards came the breakthrough for alternating voltage as the means of transmitting electrical energy. Initially, the main target application was electric lighting, which had been spurred on by Edison’s invention of the incandescent lamp. The reasons for this were logical: Westinghouse was initially a lighting manufacturer and wanted to secure as great a market share as possible.

As developments continued, it is no surprise that by 1891, in Germany for example, the first long-distance transmission of electrical energy was put into operation, over a distance of more than 170 km from Lauffen am Neckar to Frankfurt am Main. It was a technological breakthrough using three-phase current technology. However, this was by no means the end of the story for direct current. Not least because of digitalization, electromobility, decentralized energy supplies and the like, DC voltage has experienced a full-blown renaissance and is now treated almost as a brand-new topic.

The Camille Bauer story.
The foundation of the Camille Bauer company dates back to 1900, immediately after the War of the Currents just described, at a time when electricity was rapidly gaining in importance. At the turn of the century, the company, named after its founder Camille Bauer-Judlin, began importing measuring instruments for the trendy new phenomenon called “electricity” into Switzerland for sale to the local market. Some years later, in 1906, Dr. Siegfried Guggenheimer (1875-1938), formerly a research scientist under Wilhelm Conrad Röntgen (1845-1923), who in 1901 had become the first winner of the Nobel Prize in Physics, founded a start-up company in Nuremberg, Germany, trading under his own name. The company was engaged in the production and sale of electrical measuring instruments. However, because Dr. Guggenheimer was of Jewish descent, pressure from the Nazis forced him to rename the company in 1933, creating Metrawatt AG.

Four technological segments.

In 1919, a man by the name of Paul Gossen entered the picture. He was so dissatisfied with his employment under Dr. Guggenheimer that he founded his own company in Erlangen, near Nuremberg, and for decades the two rivals were in fierce competition with one another. In 1944, towards the end of the Second World War, Camille Bauer could see that its importing business had virtually come to a standstill. All the factories of its suppliers, which were mainly in Germany (for example Hartmann & Braun, Voigt & Haeffner, Lahmeyer, etc.), had been converted to supplying materials for the war. At this point, a decision had to be made quickly. Camille Bauer’s original trading company, located in Basel (CH), undertook a courageous transformation: in order to survive, it turned itself into a manufacturing company. As a first step, the recently formed manufacturing company Matter, Patocchi & Co. AG in Wohlen (CH) was taken over, in order to get the business up and running quickly with the necessary operating resources at its disposal. Thus the Swiss manufacturing base in Wohlen, in the canton of Aargau, was born.

The story does not end there. In 1979, Camille Bauer was taken over by Röchling, a family-owned company in Mannheim, Germany. At that time, Röchling wanted to quit the iron and steel business and enter the field of I&C technology. Later, in 1993, Gossen in Erlangen and Metrawatt in Nuremberg were reunited in a single company, after Röchling had become owner of the Gossen holding company through its acquisition of the Bergmann Group from Siemens in 1989 and had acquired Metrawatt from ABB in 1992. At the same time, Camille Bauer’s German sales operation in Frankfurt-Dreieich also became part of the company. Today the companies operate globally and successfully under the umbrella brand of GMC-I (Gossen Metrawatt Camille-Bauer-Instruments).

A new era.
The physics of electric current have not changed over the course of time. However, business conditions have changed drastically, especially over the last 5-10 years. Catch phrases such as the free electricity market, collective self-consumption, renewable energy sources, PV, wind power, climate targets, reduction of CO2 emissions, e-mobility, battery storage, Tesla, smart meters, digitalization, cyber security, network quality, etc. are all areas of interest for both people and companies. And last but not least, with today’s protest demonstrations, climate change has become a political issue. We will have to see what results from this. At the very least, the catch phrases mentioned above are perfect for developing scenarios for security of electricity supply. And it really is the case that the traditional electricity infrastructure, which is often as old as Camille Bauer Metrawatt itself, was not designed for these new patterns of energy behaviour, whether on the consumer side or the decentralised feed-in side. As a result, it is ever more important to have intelligent systems which work from basic data obtained from precise measurements in order to avoid outages, blackouts and the resulting damage.

The overall diversity of these new clusters of topics has prompted Camille Bauer Metrawatt AG to once more face the challenges with courage and above all to do so in an innovative and productive way. In this spirit, Camille Bauer Metrawatt AG develops, produces and distributes its product range globally in 4 technological segments.

These are:
(1) Measurement & Display,
(2) Power Quality,
(3) Control & Monitoring,
(4) Software, Systems and Solutions.

Through its expert staff, modern tools and external partners, Camille Bauer Metrawatt is able, for example, to analyse power quality and detect power quality problems. In addition, the Camille Bauer Metrawatt Academy, founded in 2019, focuses on knowledge transfer by experienced lecturers, with the latest and most important topics as its main priority. Furthermore, we keep in very close contact with customers, authorities, associations, specialist committees, educational institutions, practice-oriented experts and the scientific community in order to continually provide the requisite solutions to the market and interested parties.

#Camille_Bauer_Metrawatt #PAuto @irishpwrprocess


Power needs for autonomous robots!

20/08/2018
Jonathan Wilkins, marketing director at EU Automation, argues that the way we power six-axis robots needs to be reassessed to meet the needs of new applications such as autonomous mobile robots (AMRs).

Since industrial six-axis robots were popularised back in the 1960s, the technology that makes up robots, as well as the way in which we now use robots, has changed considerably.

What was once considered a high-risk sector, where robots were relegated to operating in cells and cages behind no-go zones, has changed to one where robots can now work in collaboration with human workers.

Advances in motor technology, actuation, gearing, proximity sensing and artificial intelligence have resulted in the advent of various robots, such as CoBots, which are portable enough to be desktop mounted, as well as autonomous mobile robots (AMRs) that can move freely around a facility.

These systems are not only capable of handling payloads weighing hundreds of kilograms, but are also sensitive enough to sense the presence of a human being at distances ranging from a few millimetres to a few metres. The robot can then respond in under a millisecond to stimuli, such as a person reaching out to guide the robot’s hand, and automatically adjust its power- and force-limiting system to respond accordingly.
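
To make that behaviour concrete, here is a minimal sketch of distance-based force limiting, assuming one proximity reading per control cycle; the thresholds, scaling and function names are illustrative assumptions, not any vendor’s safety implementation:

```python
# Minimal sketch of distance-based power- and force-limiting logic for a
# collaborative robot. Thresholds and scaling are illustrative assumptions.

def force_limit_for_distance(distance_m: float, max_force_n: float = 150.0) -> float:
    """Scale the permitted contact force down as a human gets closer."""
    if distance_m < 0.05:        # within a few centimetres: near-stop
        return 0.0
    if distance_m < 0.5:         # close range: reduce force proportionally
        return max_force_n * (distance_m / 0.5)
    return max_force_n           # beyond half a metre: full working force

# Example control-loop step (would run every cycle, well inside a 1 ms budget):
reading = 0.3  # metres, from a proximity sensor
limit = force_limit_for_distance(reading)
print(f"Human at {reading} m -> force limit {limit:.0f} N")
```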

Although six-axis robots and CoBots are predominantly mains powered, portable AMR service robots are gaining popularity in sectors as diverse as industrial manufacturing, warehousing, healthcare and even hotels. In these settings, they can operate 24/7, only taking themselves out of action for charging or being taken offline by an engineer for essential repairs and maintenance.

In the warehousing sector, for example, the picking and packing process can be manually intensive, with operators walking up and down long aisles picking products from shelves to fulfil each order. This is a time-consuming and inefficient process that adds time to each customer order. Using an autonomous mobile robot in this situation allows the compact robot to pick up the shelf and move it to the human operator in true “goods-to-man” style.

However, this demanding use-cycle prompts the question: are the batteries that power these robots sufficiently suited to this new environment? To answer this, we need to understand the types of batteries used.

The two most popular types of secondary (rechargeable) battery are sealed lead acid (SLA) and lithium-ion (Li-ion). Having been around for nearly 160 years, lead acid technology is capable of delivering high surge currents due to its low internal impedance. However, this type of battery can be large and heavy, making it impractical for smaller machines.

Alternatively, lithium-ion provides the highest energy density and the highest energy-to-weight ratio of any widely used battery chemistry, which allows design engineers to use it in even the most compact devices. It also maintains a relatively stable voltage throughout its discharge cycle, resulting in highly efficient, long runtimes.
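
A rough worked comparison shows why this matters for mobile robots. The specific-energy figures below are commonly quoted ballpark values, not measurements from any particular product:

```python
# Rough comparison of pack mass for a given energy requirement.
# Specific-energy figures are typical published ballpark values.

SPECIFIC_ENERGY_WH_PER_KG = {
    "sealed lead acid": 35.0,   # ~30-40 Wh/kg is commonly quoted
    "lithium-ion": 150.0,       # ~120-250 Wh/kg depending on chemistry
}

def pack_mass_kg(required_wh: float, chemistry: str) -> float:
    """Estimate the battery mass needed to store the required energy."""
    return required_wh / SPECIFIC_ENERGY_WH_PER_KG[chemistry]

required = 2000.0  # Wh: e.g. an AMR drawing 250 W over an 8-hour shift
for chem in SPECIFIC_ENERGY_WH_PER_KG:
    print(f"{chem}: ~{pack_mass_kg(required, chem):.0f} kg")
# sealed lead acid: ~57 kg, lithium-ion: ~13 kg
```

For the same 2 kWh of stored energy, the lead acid pack comes out at roughly four times the mass of the lithium-ion pack, which is usually decisive for a machine that must carry its own power source.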

When choosing robotic systems for their application, it’s important that engineers match the right type of battery to the load. As we increasingly begin to rely on smart factories with high levels of portable and mobile automation, considering the power needs of each device will be vital in delivering long run times with high efficiency.

@euautomation #PAuto #Robotics

Monitoring and managing the unpredictable.

07/06/2018
Energy sector investments in big data technologies have exploded. In fact, according to a study by BDO, the industry’s expenditure on this technology increased tenfold in 2017 compared to the previous year, with the firm attributing much of this growth to the need for improved management of renewables. Here, Alan Binning, Regional Sales Manager at Copa-Data UK, explores three common issues for renewables: managing demand, combining distributed systems and reporting.

Renewables are set to be the fastest-growing source of electrical energy generation over the next five years. However, this diversification of energy sources creates a challenge for existing infrastructure and systems. One of the most notable changes is the switch from reliable, predictable power to fluctuating power.

Implementing energy storage
Traditional fossil-fuel plants operate at a predetermined level: they provide a consistent and predictable amount of electricity. Renewables, on the other hand, are a much less reliable source. For example, energy output from a solar farm can drop without warning due to clouds obscuring sunlight from the panels. Similarly, wind speeds cannot be reliably forecast. To prepare for this fluctuation in advance, research and investment into energy storage systems are on the rise.

Wind power ramp events, for example, are a major challenge, which makes developing energy storage mechanisms essential. The grid may not always be able to absorb the excess wind power created by an unexpected increase in wind speed. Ramp control applications allow the turbine to store this extra power in a battery instead. When combined with reliable live data, these systems can support informed models for intelligent distribution.
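
As an illustration of the ramp-control idea, the sketch below diverts whatever the grid cannot absorb into a battery, curtailing only when the battery is full; the interface and limits are assumptions for the example, not a real turbine controller API:

```python
# Illustrative ramp-control dispatch: when turbine output exceeds what the
# grid can absorb, divert the surplus to battery storage.

def dispatch(turbine_kw: float, grid_limit_kw: float,
             battery_soc_kwh: float, battery_cap_kwh: float,
             step_h: float = 1 / 60) -> tuple[float, float, float]:
    """Return (to_grid_kw, to_battery_kw, curtailed_kw) for one time step."""
    to_grid = min(turbine_kw, grid_limit_kw)
    surplus = turbine_kw - to_grid
    headroom_kw = (battery_cap_kwh - battery_soc_kwh) / step_h
    to_battery = min(surplus, headroom_kw)
    curtailed = surplus - to_battery
    return to_grid, to_battery, curtailed

# A sudden ramp: 3 MW of output against a 2 MW grid limit
print(dispatch(turbine_kw=3000, grid_limit_kw=2000,
               battery_soc_kwh=400, battery_cap_kwh=500))
# -> (2000, 1000.0, 0.0): 1 MW is stored instead of being curtailed
```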

Britain has recently become home to one of the largest energy storage projects to use EV batteries. While it is not the first time car batteries have been used for renewable power, the Pen y Cymoedd wind farm in Wales has connected a total of 500 BMW i3 batteries to store excess power.

Combining distributed systems
Control software is the obvious solution for better monitoring of this fluctuating source of power. However, many renewable energy generation sites, like solar PV and wind farms, are distributed across a wide geographical area and are therefore difficult to manage without sophisticated software.

Consider offshore wind farms as an example. The world’s soon-to-be-largest offshore wind farm is currently under construction 74.5 miles off the Yorkshire coastline. To manage such vast generation sites accurately, the data from each asset needs to be combined into a single view.

This software should be able to combine many items of distributed equipment, whether that’s an entire wind park or several different forms of renewable energy sources, into one system to provide a complete visualisation of the grid.

Operators could go one step further by overlaying geographical information system (GIS) data onto the software. This could provide a map-style view of renewable energy parks, or even the entire generation asset base, allowing operators to zoom in on the map to reveal greater levels of detail. The result is a full, functional map that enables organisations to make better-informed decisions.
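
A minimal sketch of what combining distributed assets into one geographically indexed view can look like in data terms follows; the asset model is a generic illustration, not COPA-DATA’s own data model:

```python
# Sketch of combining distributed assets into one map-ready view.

from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    kind: str        # "wind turbine", "solar array", ...
    lat: float
    lon: float
    output_kw: float

fleet = [
    Asset("WT-01", "wind turbine", 53.88, 0.12, 2100.0),
    Asset("WT-02", "wind turbine", 53.89, 0.14, 0.0),      # stopped
    Asset("PV-01", "solar array", 51.48, -3.18, 740.0),
]

# One combined view: total output plus a list of pins for the map layer
total_kw = sum(a.output_kw for a in fleet)
pins = [(a.lat, a.lon, f"{a.name}: {a.output_kw:.0f} kW") for a in fleet]
print(f"Fleet output: {total_kw / 1000:.2f} MW")
for pin in pins:
    print(pin)
```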

Reporting on renewables
Controlling and monitoring renewable energy is the first step to better grid management. However, it is what energy companies do with the data generated from this equipment that will truly provide value. This is where reporting is necessary.

Software for renewable energy should be able to visualise data in an understandable manner so that operators can see the types of data they truly care about. For example, wind farm owners tend to be investors and therefore generating profit is a key consideration. In this instance, the report should compare the output of a turbine and its associated profit to better inform the operator of its financial performance.

Using intelligent software, like zenon Analyzer, operators can generate a range of reports about any information they would like to assess, and the criteria can differ massively depending on the application and the objectives of the operator. Reporting can range from a basic table of outputs to a much more sophisticated report that includes the site’s performance against key performance indicators (KPIs) and predictive analytics. These reports can be generated from archived or live operational data, allowing long-term trends to be recognised as well as enabling quick reactions to maximise operating efficiency.
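
As a generic illustration of this kind of report (not the zenon Analyzer API itself), the sketch below computes a capacity-factor and revenue summary per turbine from archived hourly output; the archive contents and tariff are invented for the example:

```python
# Generic KPI report from archived output data: energy, capacity factor
# and revenue per turbine. All figures are invented for illustration.

ARCHIVE = {  # hypothetical archived hourly output in kWh over one day
    "WT-01": [1800] * 10 + [2100] * 8 + [900] * 6,
    "WT-02": [0] * 4 + [1500] * 20,
}
RATED_KW = 2300.0
PRICE_PER_KWH = 0.08  # assumed flat tariff, £/kWh

for turbine, hourly_kwh in ARCHIVE.items():
    energy = sum(hourly_kwh)
    capacity_factor = energy / (RATED_KW * len(hourly_kwh))
    revenue = energy * PRICE_PER_KWH
    print(f"{turbine}: {energy / 1000:.1f} MWh, "
          f"CF {capacity_factor:.0%}, revenue £{revenue:,.0f}")
```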

As investments in renewable energy generation continue to increase, the need for big data technologies to manage these sites will also continue to grow. Managing these volatile energy sources is still a relatively new challenge. However, with the correct software to combine the data from these sites and report on their operation, energy companies will reap the rewards of these increasingly popular energy sources.


GIS in power!

20/02/2018
Geographic Information Systems (GIS) are not a new phenomenon. The technology was first used during World War II to gather intelligence by taking aerial photographs. However, today’s applications for GIS are far more sophisticated. Here, Martyn Williams, managing director of COPA-DATA UK, explains how the world’s energy industry is becoming smarter using real-time GIS.

GIS mapping is everywhere. In its most basic format, the technology is simply a computerised mapping system that collects location-based data — think Google Maps and its live traffic data. Crime mapping, computerised road networking and the tech that tags your social media posts in specific locations? That’s GIS too.

Put simply, anywhere that data can be generated, analysed and pinned to a specific geographical point, there’s potential for GIS mapping. That said, for the energy industry, GIS can provide more value than simply pinning where your most recent selfie was taken.

Managing remote assets
One of the biggest challenges for the industry is effectively managing geographically dispersed sites and unmanned assets, such as wind turbines, offshore oil rigs or remote electrical substations.

Take an electrical distribution grid as an example. Of the 400,000 substations scattered across Britain, many are remote and unmanned. Therefore, it is common for operators to rely on a connected infrastructure and control software to obtain data from these sites. While this data is valuable, it is the integration of GIS mapping that enables operators to gain a full visual overview of their entire grid.

Using GIS, operators are provided with a map of the grid and every substation associated with it. When this visualisation is combined with intelligent control software, the operator can pin data from these remote sites on one easy-to-read map.

Depending on the sophistication of the control software used, the map can illustrate the productivity, energy consumption or operational data from each substation. In fact, operators can often choose to see whatever data is relevant to them and adjust their view to retrieve more or less data from the map.

When used for renewable energy generation, the operator may want to see the full geographical scope of the wind turbines they control, pin-pointed on a geographically accurate map. However, upon zooming into the map, it is possible for the operator to view the status, control and operational data from each turbine on the grid.
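
The zoom-dependent behaviour might be modelled along these lines; the asset records, zoom threshold and fields are assumptions for the sketch:

```python
# Sketch of zoom-dependent detail for asset pins: zoomed out shows status
# only; zooming in reveals operational data. Generic illustration.

ASSETS = {
    "Substation-114": {"lat": 52.2, "lon": -1.5, "status": "OK",
                       "load_pct": 61, "voltage_kv": 33.1},
    "Substation-115": {"lat": 52.3, "lon": -1.6, "status": "ALARM",
                       "load_pct": 97, "voltage_kv": 32.4},
}

def pin_label(name: str, zoom: int) -> str:
    a = ASSETS[name]
    if zoom < 10:                       # zoomed out: status only
        return f"{name} [{a['status']}]"
    return (f"{name} [{a['status']}] "  # zoomed in: operational data
            f"load {a['load_pct']}%, {a['voltage_kv']} kV")

print(pin_label("Substation-115", zoom=6))
print(pin_label("Substation-115", zoom=14))
```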

GIS mapping is not only advantageous for monitoring routine operations, but also for alerting operators of unexpected faults in the system.

Taking out the guesswork
Unexpected equipment failure can be devastating to any business. However, when managing assets in the energy industry, which provides a fundamental service to the public, the impact of downtime is felt far more widely.

Traditionally, energy organisations would employ huge maintenance teams to quickly react to unexpected errors, like power outages or equipment breakdowns. However, with GIS and software integration, this guesswork approach to maintenance is not necessary.

The combination of GIS with an intelligent control system means that operators will be alerted to faults in real time, regardless of whether they occur at an offshore wind turbine, a remote pumping station or a substation. When an error is identified, the operator is automatically informed of exactly where the fault has occurred by a pinpoint on the map.
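
A fault alert that carries its own coordinates could be as simple as the following sketch; the site list and payload fields are hypothetical:

```python
# Sketch of a real-time fault alert that includes its location, so the
# operator sees a pinpoint on the map rather than searching the grid.

import datetime

SITES = {"WT-07": (54.02, 1.85), "Pump-03": (53.71, -0.45)}

def raise_alert(site: str, message: str) -> dict:
    lat, lon = SITES[site]
    alert = {
        "site": site,
        "message": message,
        "lat": lat,
        "lon": lon,
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    # A real system would push this to the operator's map view;
    # here we simply print the pin payload.
    print(alert)
    return alert

raise_alert("WT-07", "Gearbox oil temperature above limit")
```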

Enabling intelligent maintenance
In the energy industry, there is no sure-fire way to predict exactly how and when faults will occur, but there are ways to deploy reliability centred maintenance (RCM) techniques to minimise downtime when they do.

Using GIS-mapping and alerts, an operator can accurately pinpoint the location of the error, and a maintenance engineer can be deployed to the site immediately. This allows organisations to plan more effectively from a human asset perspective, ensuring their engineers are in the right places at the right time.

In addition, using the data acquired by the control software, the engineer can take a more intelligent approach to maintenance. GIS mapping allows an operator to easily extract data from the exact location where the fault occurred and pass it on to the engineer.

For the energy industry, GIS technology provides an opportunity to better understand remote operations, enables more effective maintenance and could dramatically reduce downtime caused by unexpected errors. The reliability of the technology has already been proven in other areas, like crime mapping and road networking, and in novelty applications, like social media tagging.

Now, it’s time for the energy industry to make its mark on the GIS map.


Understanding risk: cybersecurity for the modern grid.

23/08/2017
Didier Giarratano, Marketing Cyber Security at Energy Digital Solutions/Energy, Schneider Electric, discusses the challenge for utilities: providing reliable energy delivery with a focus on efficiency and sustainable sources.

There’s an evolution taking place in the utilities industry to build a modern distribution automation grid. As the demand for digitised, connected and integrated operations increases across all industries, the challenge for utilities is to provide reliable energy delivery with a focus on efficiency and sustainable sources.

The pressing need to improve the uptime of critical power distribution infrastructure is forcing change. However, as power networks merge and become ‘smarter’, the benefits of improved connectivity also bring greater cybersecurity risks, threatening to impact progress.

Grid complexity in a new world of energy
Electrical distribution systems across Europe were originally built for centralised generation and passive loads – not for handling evolving levels of energy consumption or complexity. Yet, we are entering a new world of energy. One with more decentralised generation, intermittent renewable sources like solar and wind, a two-way flow of decarbonised energy, as well as an increasing engagement from demand-side consumers.

The grid is now moving to a more decentralised model, disrupting traditional power delivery and creating more opportunities for consumers and businesses to contribute back into the grid with renewables and other energy sources. As a result, the coming decades will see a new kind of energy consumer – one that manages energy production and usage to achieve the cost, reliability and sustainability tailored to their specific needs.

The rise of distributed energy is increasing grid complexity. It is evolving the industry from a traditional value chain to a more collaborative environment. One where customers dynamically interface with the distribution grid and energy suppliers, as well as the wider energy market. Technology and business models will need to evolve for the power industry to survive and thrive.

The new grid will be considerably more digitised, more flexible and dynamic. It will be increasingly connected, with greater requirements for performance in a world where electricity makes up a higher share of the overall energy mix. There will be new actors involved in the power ecosystem such as transmission system operators (TSOs), distribution system operators (DSOs), distributed generation operators, aggregators and prosumers.

Regulation and compliance
Cyber security deployment has focused on meeting standards and regulatory compliance. This approach benefits the industry by increasing awareness of the risks and challenges associated with a cyberattack. As the electrical grid evolves in complexity, with the addition of distributed energy resource integration and feeder automation, a new approach is required – one oriented towards risk management.

Currently, utility stakeholders are applying cyber security processes learned from their IT peers, which is putting them at risk. Within the substation environment, proprietary devices once dedicated to specialised applications are now vulnerable. Sensitive information describing how these devices work is available online and can be accessed by anyone, including those with malicious intent.

With the right skills, malicious actors can hack a utility and damage systems that control the grid. In doing so, they also risk the economy and security of a country or region served by that grid.

Regulators have anticipated the need for a structured cyber security approach. In the USA, the North American Electric Reliability Corporation Critical Infrastructure Protection (NERC CIP) requirements set out what is needed to secure North America’s electric system. The European Programme for Critical Infrastructure Protection (EPCIP) does much the same in Europe. We face new and complex attacks every day, some of them organised by state actors, which is leading to a reconsideration of these regulations and of the industry’s overall security approach.

Developing competencies and cross-functional teams for IT-OT integration

Due to the shift towards open communication platforms, such as Ethernet and IP, systems that manage critical infrastructure have become increasingly vulnerable. As operators of critical utility infrastructure investigate how to secure their systems, they often look to more mature IT cybersecurity practices. However, the IT approach to cybersecurity is not always appropriate given the operational constraints utilities face.

These differences in approach mean that cybersecurity solutions and expertise geared toward the IT world are often inappropriate for operational technology (OT) applications. Sophisticated attacks today are able to leverage cooperating services, like IT and telecommunications. As utilities experience the convergence of IT and OT, it becomes necessary to develop cross-functional teams to address the unique challenges of securing technology that spans both worlds.

Protecting against cyber threats now requires greater cross-domain activity, where engineers, IT managers and security managers must share their expertise to identify the potential issues and attacks affecting their systems.

A continuous process: assess, design, implement and manage
Cybersecurity experts agree that standards by themselves will not bring the appropriate security level. It’s not a matter of having ‘achieved’ a cyber secure state. Adequate protection from cyber threats requires a comprehensive set of measures, processes, technical means and an adapted organisation.

It is important for utilities to think about how organisational cybersecurity strategies will evolve over time. This is about staying current with known threats in a planned and iterative manner. Ensuring a strong defence against cyberattacks is a continuous process and requires an ongoing effort and a recurring annual investment. Cybersecurity is about people, processes and technology. Utilities need to deploy a complete programme consisting of proper organisation, processes and procedures to take full advantage of cybersecurity protection technologies.

To establish and maintain cyber secure systems, utilities can follow a four-point approach:

1. Conduct a risk assessment
The first step involves conducting a comprehensive risk assessment based on internal and external threats. By doing so, OT specialists and other utility stakeholders can understand where the largest vulnerabilities lie and document the basis for security policy creation and risk mitigation.

2. Design a security policy and processes
A utility’s cybersecurity policy provides a formal set of rules to be followed. These should be guided by the ISO/IEC 27000 (ISO27k) family of standards from the International Organisation for Standardisation (ISO) and the International Electrotechnical Commission (IEC), which provides best-practice recommendations on information security management. The purpose of a utility’s policy is to inform employees, contractors and other authorised users of their obligations regarding the protection of technology and information assets. It describes the list of assets that must be protected, identifies threats to those assets, describes authorised users’ responsibilities and associated access privileges, and describes unauthorised actions and the resulting accountability for violations of the security policy. Well-designed security processes are also important: as system security baselines change to address emerging vulnerabilities, cybersecurity processes must be reviewed and updated regularly to follow this evolution. One key to maintaining an effective security baseline is to conduct a review once or twice a year.

3. Execute projects that implement the risk mitigation plan
Select cybersecurity technology that is based on international standards to ensure the security policy and the proposed risk mitigation actions can be followed. A ‘secure by design’ approach based on international standards like IEC 62351 and IEEE 1686 can help further reduce risk when securing system components.

4. Manage the security programme
Effectively managing cybersecurity programmes requires not only taking the previous three points into account, but also managing the lifecycles of information and communication assets. To do that, it is important to maintain accurate, living documentation of asset firmware, operating systems and configurations, as sketched below. It also requires a comprehensive understanding of technology upgrade and obsolescence schedules, together with full awareness of known vulnerabilities and existing patches. Cybersecurity management also requires that certain events trigger assessments, such as particular points in asset lifecycles or detected threats.
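
A minimal sketch of such living asset documentation, with a rule that flags assets needing assessment, might look like this; all field names and the CVE placeholder are assumptions for illustration:

```python
# Sketch of a living asset inventory: firmware, OS, obsolescence schedule
# and known-vulnerability tracking per device. Fields are illustrative.

from dataclasses import dataclass, field

@dataclass
class GridAsset:
    asset_id: str
    device_type: str
    firmware: str
    os_version: str
    end_of_support: str                                 # obsolescence schedule
    known_cves: list[str] = field(default_factory=list)
    pending_patches: list[str] = field(default_factory=list)

    def needs_assessment(self) -> bool:
        """Trigger a review when vulnerabilities or patches are outstanding."""
        return bool(self.known_cves or self.pending_patches)

rtu = GridAsset("SUB-114-RTU-1", "substation RTU", "2.4.1", "VxWorks 6.9",
                "2026-12", known_cves=["CVE-XXXX-YYYY"])  # placeholder CVE
print(rtu.asset_id, "requires assessment:", rtu.needs_assessment())
```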

For utilities, security is everyone’s business. Politicians and the public are increasingly aware that national security depends on local utilities being robust too. Mitigating risk and anticipating attack vulnerabilities on utility grids and systems is not just about installing technology. Utilities must also implement organisational processes to meet the challenges of a decentralised grid. This means regular assessment and continuous improvement of their cybersecurity and physical security processes to safeguard our new world of energy.

@SchneiderElec #PAuto #Power

Power distribution for the digital age.

01/06/2017
Éirin Madden, Offer Manager at Schneider Electric Ireland, talks about the smart devices that enable facility managers to take preventive measures to mitigate potential risks in power distribution.

We are currently witnessing the rise of a new chapter in power distribution. After all, today’s digital age is going to impact our lives and businesses as much as the introduction of electricity did at the end of the 19th century. It will bring with it a wave of innovations in power that will blur the lines between the energy and digital spaces. The traditional centralised model is giving way to new economic models and opportunities, which redefine the core basics of power distribution: efficiency, reliability, safety, security and performance.

Many of us know the inconvenience of experiencing a blackout at home, but the impact is much more far-reaching when it occurs in a corporate facility – from lost revenue and unhappy tenants to more extreme scenarios like the loss of life. Recently, tourists and shoppers in central London were plunged into darkness after a fault on an underground high-voltage cable caused an area-wide power cut. Theatre shows were cancelled and shops were closed, leaving shoppers and storeowners frustrated and disappointed.

A call to get smart 
How can such outages be prevented? At the core of smart power distribution systems are smart devices that enable facility managers to take preventive measures to mitigate potential risks. These devices are no longer responsible for just controlling a single mechanism: they now measure and collect data and provide control functions. Furthermore, they give facility and maintenance personnel access to the power distribution network.

In many places throughout the power network the existing intelligence can be embedded inside other equipment, such as the smart trip units of circuit breakers. These smart breakers can provide power and energy data, as well as information on their performance, including breaker status, contact wear, alerts, and alarms. In addition to core protection functions, many devices are also capable of autonomous and coordinated control, without any need for user intervention.

Today, hardware such as the Masterpact MTZ air circuit breaker (ACB) has evolved to include new digital capabilities. Chief among these is communication: a way to send the data the device is gathering to building analytics software, where it can be put to use.

Building analytics is another enabler for smart power distribution systems, offering an advanced lifecycle-managed service that delivers automated fault detection, diagnosis and real-time performance monitoring for buildings. Information is captured from building systems and sent to cloud-based data storage. From that point, an advanced analytics engine uses artificial intelligence to process building data and continuously diagnose facility performance, identifying equipment and system faults, sequence-of-operation improvements, system trends and energy usage.
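
As a generic illustration of this kind of automated fault detection (not Schneider Electric’s analytics engine), the sketch below applies simple threshold rules to hypothetical breaker telemetry; the field names and limits are assumptions:

```python
# Generic sketch of rule-based fault detection on circuit-breaker telemetry.
# Thresholds and field names are assumed for illustration.

BREAKER_FEED = [
    {"id": "ACB-1", "current_a": 1850, "contact_wear_pct": 22, "trips_24h": 0},
    {"id": "ACB-2", "current_a": 2480, "contact_wear_pct": 78, "trips_24h": 3},
]

RULES = [
    (lambda b: b["contact_wear_pct"] > 70,
     "contact wear high - schedule service"),
    (lambda b: b["trips_24h"] >= 3,
     "repeated trips - investigate downstream load"),
]

for breaker in BREAKER_FEED:
    for condition, diagnosis in RULES:
        if condition(breaker):
            print(f"{breaker['id']}: {diagnosis}")
# ACB-2 flags both rules; ACB-1 is healthy
```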

Combatting operational efficiency decline
One of the biggest challenges facing facility managers today is the need to maintain existing equipment performance. Components are prone to breaking or falling out of calibration, and general wear and tear often results in a marked decline in a building’s operational efficiency. What’s more, reduced budgets are forcing building owners to manage building systems with fewer resources. The issue is further exacerbated by older systems becoming inefficient over time. Even when budget is at hand, it is time-consuming and increasingly difficult to attract, develop and retain staff with the right skills and knowledge to make sense of the building data being generated.

When it comes to switchgear in particular, there is the challenge of balancing spending on maintenance and services. There is no doubt that regularly scheduled maintenance extends the life of existing switchgear. However, at some point facilities must decide whether to maintain existing equipment or replace it with new. Although keeping up with equipment maintenance has its challenges, especially with limited resources, the safety and reliability of a facility depend on it and must be the priority.

Looking ahead with building analytics
Many building owners and occupants are also looking at how building analytics can be used beyond safety and reliability to make a difference to the bigger picture of workplace efficiency. From comfort to space and occupant services to management dashboards, organisations are now placing more emphasis on well-being at work. When building analytics recommendations are implemented, the results are obvious: enhanced building performance, optimised energy efficiency through continual commissioning and reduced operating costs, all with a strong return on investment and an improved building environment.

@SchneiderElec #Power #PAuto @tomalexmcmahon