What have we learned about the virus, so far?

21/10/2020
Adrian Gee-Turner of antiviral disinfectant manufacturer Sterling Presentation Health reviews the published evidence and provides a brief summary of the conclusions that can be drawn (so far).

As the pandemic has progressed, our understanding of the virus that causes Covid-19 has grown, and this has influenced the guidance on transmission prevention measures such as face masks, social distancing, hand washing and disinfection. Whilst a few grey areas persist, such as the degree of challenge presented by aerosolised virus, a number of conclusions are emerging. For example:

• Fomite transmission (from objects and surfaces) is highly likely given the extended periods (days) that SARS-CoV-2 is able to remain viable on a variety of surfaces, including glass and plastic. This is important because people generally touch mobile phones and keyboards many times per day, so as well as hand washing, ‘touch points’ will need frequent disinfection with an antiviral disinfectant (caution: some ‘antibacs’ are not antiviral).

Sterling Presentation Health

Sterling Presentation Health was founded in 2015 with a focus on the importance of infection prevention. Working with ground-breaking scientists, the founders developed Nemesis eH2O, a natural antiseptic, antibacterial, antiviral and antifungal disinfectant.
Nemesis eH2O is highly effective against a broad spectrum of pathogens including coronaviruses, influenza, E. coli, S. aureus, norovirus, MRSA and C. difficile.

• It appears that the viability of SARS-CoV-2 is significantly reduced by sunlight or high temperatures. This conclusion would appear to be borne out by the outbreaks that have occurred in chilled food packing facilities.

• SARS-CoV-2 can remain infectious as an aerosol for at least several hours. This is important because pathogens predominate in small particles of less than 5 microns (<5 μm) which do not settle in the way that larger particles do.

Virus survival on surfaces
The emergence of a novel human coronavirus, SARS-CoV-2, prompted a review of the available data on the persistence of coronaviruses on inanimate surfaces and their inactivation with biocidal agents. Published in January 2020 in the Journal of Hospital Infection, the review examined 22 studies that evaluated the persistence of human coronaviruses such as the Severe Acute Respiratory Syndrome coronavirus (SARS-CoV-1) and the Middle East Respiratory Syndrome (MERS) coronavirus. The assessment detailed the persistence of coronaviruses on inanimate surfaces such as metal, glass and plastic for up to 9 days, but found that they can be efficiently inactivated by surface disinfection procedures. It proposed, therefore, that such procedures should be adopted to curtail the further spread of SARS-CoV-2. Interestingly, whilst the paper concluded that human coronaviruses can remain infectious on inanimate surfaces at room temperature for up to 9 days, it also noted that temperatures of 30 °C or higher reduce the duration of persistence.

Is SARS-CoV-2 different?
In general, the health effects of infection by SARS-CoV-1 are more serious than by SARS-CoV-2, but as a contagion, SARS-CoV-2 is more important because it appears to transmit more easily than its predecessor. This is likely to be because the viral load is highest in the nose and throat of people with COVID-19 shortly after symptoms develop, whereas with SARS, viral loads peak much later in the illness. Consequently, people with COVID-19 may be transmitting the virus even before their symptoms develop. According to the Centers for Disease Control and Prevention (CDC), some research suggests that COVID-19 can be spread by people with no symptoms.

In March 2020, at about the same time that the British lockdown was first announced, van Doremalen and others published a paper in the New England Journal of Medicine comparing the aerosol and surface stability of SARS-CoV-2 with that of SARS-CoV-1. The work assessed the viability of the viruses in five conditions: in aerosols, and on plastic, stainless steel, copper and cardboard. All of the trials were conducted at 40% relative humidity and 21–23 °C, and found that the stability of SARS-CoV-2 was similar to that of SARS-CoV-1 under the experimental circumstances tested.

The research showed that SARS-CoV-2 was more stable on plastic and stainless steel than on copper and cardboard, and viable virus was detected up to 72 hours after application to these surfaces, although the virus titre (viral load) was greatly reduced. The results indicated that aerosol and fomite transmission of SARS-CoV-2 is plausible, since the virus can remain viable and infectious in aerosols for hours and on surfaces for up to several days.
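
To put ‘greatly reduced’ in perspective: if viral decay is approximated as first-order (exponential), the remaining fraction of viable virus follows directly from the half-life. The short sketch below assumes a 6.8-hour half-life, of the order reported for plastic surfaces; the figure is illustrative rather than a universal constant.

```python
# Exponential decay of viable virus titre under an assumed half-life.
# A 6.8 h half-life is of the order reported for plastic; illustrative only.
half_life_h = 6.8

def remaining_fraction(hours: float) -> float:
    """Fraction of the initial titre still viable after `hours`."""
    return 0.5 ** (hours / half_life_h)

for t in (0, 24, 48, 72):
    print(f"after {t:>2} h: {remaining_fraction(t):.4%} of initial titre")
```

After 72 hours that is roughly a thousand-fold reduction, consistent with virus still being detectable but at a greatly reduced titre.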

Airborne transmission
Clearly, more work is necessary to better understand the airborne behaviour of the virus. However, a study of the particle sizes of infectious aerosols, published in The Lancet Respiratory Medicine in July 2020, found that humans produce infectious aerosols in a wide range of particle sizes, but pathogens predominate in small particles (<5 μm) that are immediately respirable by exposed individuals. Evidence is also accumulating that SARS-CoV-2 is transmitted by both small- and large-particle aerosols. It would appear, therefore, that masks should be capable of intercepting even ultrafine particles, and given the persistence of viable virus in aerosols, facemasks represent an important means with which to limit transmission of the virus.

The effects of temperature and light
In October 2020 the Virology Journal published work by Riddell and others in Australia, in which the effect of temperature on the persistence of SARS-CoV-2 was evaluated on surfaces including glass, stainless steel and both paper and polymer banknotes. These surfaces were chosen because they represent most of the major ‘touch points’ such as mobile phones, money, bank ATMs, supermarket self-serve checkouts etc.

All experiments were conducted in the dark, to negate any effects from UV light; SARS-CoV-2 has been shown to be inactivated by simulated sunlight (Ratnesar-Shumate et al., 2020; Schuit et al., 2020). Inoculated surfaces were incubated at 20 °C, 30 °C and 40 °C and sampled at various time points.

The initial viral loads were approximately equivalent to the highest titres excreted by infectious patients, and viable virus was isolated from the surfaces for up to 28 days at 20 °C. Conversely, infectious virus survived less than 24 hours at 40 °C on some surfaces. Nevertheless, this work indicates that SARS-CoV-2 survival times are considerably longer than previously believed, so disinfection strategies should be adjusted accordingly.

As the manufacturer of Nemesis eH2O, we have witnessed an enormous increase in demand for both sprayable and foggable product. This is because, whilst hand washing can help protect individuals, effective spraying of touch points is also essential, coupled with the fogging of large spaces to decontaminate surfaces and viral aerosols.

References:
Kampf G, et al. (January, 2020) Persistence of coronaviruses on inanimate surfaces and their inactivation with biocidal agents. Journal of Hospital Infection.
van Doremalen N, et al. (March, 2020) Aerosol and Surface Stability of SARS-CoV-2 as Compared with SARS-CoV-1. New England Journal of Medicine.
Ratnesar-Shumate S, et al. (July, 2020) Simulated sunlight rapidly inactivates SARS-CoV-2 on surfaces. The Journal of Infectious Diseases.
Fennelly KP. (July, 2020) Particle sizes of infectious aerosols: implications for infection control. The Lancet Respiratory Medicine.
Schuit M, et al. (August, 2020) Airborne SARS-CoV-2 is rapidly inactivated by simulated sunlight. The Journal of Infectious Diseases.
Riddell S, et al. (October, 2020) The effect of temperature on persistence of SARS-CoV-2 on common surfaces. Virology Journal.
@Nemesis_eH2O @_Enviro_News #coróinvíreas #COVID19 #coronavirus

Challenging designs.

26/10/2017
Ian McWha, key account manager at industrial systems integrator Boulting Technology, explores the importance of recognising and overcoming challenges when designing a switchboard.

Sir Sean Connery, most famous for his award-winning portrayal of James Bond, once said, “there is nothing like a challenge to bring out the best in man.” These are wise words, as we all continually face our own challenges throughout every aspect of life.

A bespoke MCC designed in an L-shape to meet space restrictions.

When plant managers look to install a new switchboard in their facility, they are often presented with a range of challenges that they must address. Identifying these challenges as early as possible is imperative to the success of the installation and the functionality of the switchboard. If not addressed, these issues can have drastic consequences, causing production downtime, damage to other systems or even harm to employees.

Design challenges
Each facility is unique and as such will have its own design requirements, dependent on the function of the plant.

Many plants have limited space that they are keen to maximise, so the footprint of the switchboard needs to be as small as possible, while ensuring its integrity is not compromised.

Boulting Technology’s designers are experienced in creating bespoke systems that meet client specifications, particularly in space-short environments. Bespoke MCC designs include integrated back-to-back systems with shared riser and main distribution bars, custom-made U-shaped centres, L-shaped units that fit round corners, and bridges that extend above equipment and wall partitions.

The specific needs of each job may also present additional challenges that the design engineer must be aware of. When working with pumping stations, for example, a switchboard may be required to be near water. In these cases, an appropriate ingress protection (IP) rating, which classifies the degree of protection provided against solid objects, dust and water, must be specified and adhered to.

As challenges are often individual to a facility, a unique switchboard may be the answer. Bespoke solutions such as Boulting Technology’s can make the most of limited space or other restrictions, while meeting client specifications exactly.

Maintenance
Forward planning is essential when installing new equipment, especially when establishing a regular maintenance programme. Planned and predictive maintenance is crucial to keep machinery working efficiently for as long as possible, avoiding production downtime. To achieve this, plant managers should work closely with the switchboard manufacturer to develop a robust maintenance programme.

Boulting Technology offers an all-encompassing maintenance solution, which includes a comprehensive survey that assesses control systems across a facility. An initial online survey assesses areas such as obsolete parts, equipment life cycle and efficiency.

From the survey, a series of multi-stage recommendations provides a hierarchy of risk, allowing plant managers to focus on high-risk critical systems in the first instance and implement an appropriate plan of action.

Safety
Not properly addressing design challenges can cause safety issues. For example, it is essential that the switchboard has the correct short-time withstand rating. This is the current that the assembly can withstand for a set period of time without the aid of a short-circuit protective device (SCPD). The short-time withstand rating, used by engineers to determine the ability of the assembly to protect itself and other devices, is made up of two parts: the fault current rating in kiloamperes (kA) and the duration time.

Manufacturers also need to be aware of the prospective short circuit current (PSCC) or fault current. The PSCC is the highest electric current which can exist in a system under short-circuit conditions.

While engineers should always be aware of the PSCC, specific applications, such as operating transformers in parallel, can present dangerous situations if not managed correctly.
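
As a rough illustration of why paralleled transformers demand care: neglecting upstream source impedance, the prospective fault current at a transformer’s secondary is approximately its full-load current divided by its per-unit impedance, and paralleling identical units roughly multiplies that figure. The sketch below is a back-of-envelope calculation with invented ratings, not a substitute for a proper fault study.

```python
import math

def transformer_fault_current(kva: float, v_ll: float, z_pu: float) -> float:
    """Approximate PSCC (in amperes) at a transformer secondary,
    neglecting upstream source impedance."""
    full_load_amps = kva * 1000 / (math.sqrt(3) * v_ll)
    return full_load_amps / z_pu

# Invented example: a 1000 kVA, 400 V transformer with 5% impedance.
single = transformer_fault_current(1000, 400, 0.05)

# Two identical units in parallel can each feed the same fault,
# so the prospective fault current roughly doubles.
parallel = 2 * single

print(f"single unit: {single / 1000:.1f} kA")    # ~28.9 kA
print(f"in parallel: {parallel / 1000:.1f} kA")  # ~57.7 kA
```

The paralleled figure is what the switchboard’s short-time withstand rating and SCPDs must be chosen against.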

Standards such as BS EN 61439 are the first step to ensuring switchboard safety. BS EN 61439 is a mandatory standard for all low voltage switchboard assemblies (LVSAs) and helps the manufacturer and plant manager ensure the board achieves acceptable levels of performance, safety and reliability.

It is important to choose a manufacturer and integrator which understands the relevant standards, how to meet them and how to ensure the product is safe, while also meeting customer requests and requirements.

While compliance with standards is important, it does not automatically mean the switchboard is fit for the desired purpose. Safety requirements can easily be met without the equipment meeting client specifications or even working correctly. Standards compliance should be one of many considerations when installing new equipment.

Thinking outside of the box means design challenges can not only be overcome, but can become useful, resulting in bespoke ideas and revolutionary products. This is just as true for engineers designing industrial products such as low voltage switchboards, as it has been throughout the brilliant Sean Connery’s life.

@BoultingTech #PAuto @StoneJunctionPR

Understanding risk: cybersecurity for the modern grid.

23/08/2017
Didier Giarratano, Marketing Cyber Security at Energy Digital Solutions/Energy, Schneider Electric, discusses the challenge utilities face in providing reliable energy delivery with a focus on efficiency and sustainable sources.

There’s an evolution taking place in the utilities industry to build a modern distribution automation grid. As the demand for digitised, connected and integrated operations increases across all industries, the challenge for utilities is to provide reliable energy delivery with a focus on efficiency and sustainable sources.

The pressing need to improve the uptime of critical power distribution infrastructure is forcing change. However, as power networks merge and become ‘smarter’, the benefits of improved connectivity also bring greater cybersecurity risks, threatening to impact progress.

Grid complexity in a new world of energy
Electrical distribution systems across Europe were originally built for centralised generation and passive loads – not for handling evolving levels of energy consumption or complexity. Yet, we are entering a new world of energy. One with more decentralised generation, intermittent renewable sources like solar and wind, a two-way flow of decarbonised energy, as well as an increasing engagement from demand-side consumers.

The grid is now moving to a more decentralised model, disrupting traditional power delivery and creating more opportunities for consumers and businesses to contribute back into the grid with renewables and other energy sources. As a result, the coming decades will see a new kind of energy consumer – one that manages energy production and usage to drive cost, reliability and sustainability tailored to their specific needs.

The rise of distributed energy is increasing grid complexity. It is evolving the industry from a traditional value chain to a more collaborative environment. One where customers dynamically interface with the distribution grid and energy suppliers, as well as the wider energy market. Technology and business models will need to evolve for the power industry to survive and thrive.

The new grid will be considerably more digitised, more flexible and dynamic. It will be increasingly connected, with greater requirements for performance in a world where electricity makes up a higher share of the overall energy mix. There will be new actors involved in the power ecosystem such as transmission system operators (TSOs), distribution system operators (DSOs), distributed generation operators, aggregators and prosumers.

Regulation and compliance
Cyber security deployment currently focuses on meeting standards and regulatory compliance. This approach benefits the industry by increasing awareness of the risks and challenges associated with a cyberattack. As the electrical grid evolves in complexity, with the additions of distributed energy resource integration and feeder automation, a new approach is required – one that is oriented towards risk management.

Currently, utility stakeholders are applying cyber security processes learned from their IT peers, which is putting them at risk. Within the substation environment, proprietary devices once dedicated to specialised applications are now vulnerable. Sensitive information available online that describes how these devices work, can be accessed by anyone, including those with malicious intent.

With the right skills, malicious actors can hack a utility and damage systems that control the grid. In doing so, they also risk the economy and security of a country or region served by that grid.

Regulators have anticipated the need for a structured cyber security approach. In the U.S., the North American Electric Reliability Corporation Critical Infrastructure Protection (NERC CIP) requirements set out what is needed to secure North America’s electric system. The European Programme for Critical Infrastructure Protection (EPCIP) does much the same in Europe. We face new and complex attacks every day, some organised by state actors, and this is leading to a reconsideration of these requirements and of the overall security approach for the industry.

Developing competencies and cross-functional teams for IT-OT integration

Due to the shift towards open communication platforms, such as Ethernet and IP, systems that manage critical infrastructure have become increasingly vulnerable. As operators of critical utility infrastructure investigate how to secure their systems, they often look to more mature IT cybersecurity practices. However, the IT approach to cybersecurity is not always compatible with the operational constraints utilities face.

These differences in approach mean that cybersecurity solutions and expertise geared toward the IT world are often inappropriate for operational technology (OT) applications. Sophisticated attacks today are able to leverage cooperating services, like IT and telecommunications. As utilities experience the convergence of IT and OT, it becomes necessary to develop cross-functional teams to address the unique challenges of securing technology that spans both worlds.

Protecting against cyber threats now requires greater cross-domain activity, where engineers, IT managers and security managers are required to share their expertise to identify the potential issues and attacks affecting their systems.

A continuous process: assess, design, implement and manage
Cybersecurity experts agree that standards by themselves will not bring the appropriate security level. It’s not a matter of having ‘achieved’ a cyber secure state. Adequate protection from cyber threats requires a comprehensive set of measures, processes, technical means and an adapted organisation.

It is important for utilities to think about how organisational cybersecurity strategies will evolve over time. This is about staying current with known threats in a planned and iterative manner. Ensuring a strong defence against cyberattacks is a continuous process and requires an ongoing effort and a recurring annual investment. Cybersecurity is about people, processes and technology. Utilities need to deploy a complete programme consisting of proper organisation, processes and procedures to take full advantage of cybersecurity protection technologies.

To establish and maintain cyber secure systems, utilities can follow a four-point approach:

1. Conduct a risk assessment
The first step involves conducting a comprehensive risk assessment based on internal and external threats. By doing so, OT specialists and other utility stakeholders can understand where the largest vulnerabilities lie, as well as document the creation of the security policy and risk mitigation plan (a minimal risk-register sketch follows these four steps).

2. Design a security policy and processes
A utility’s cybersecurity policy provides a formal set of rules to be followed. These should be guided by the International Organisation for Standardisation (ISO) and International Electrotechnical Commission (IEC) family of standards (ISO27k), which provides best-practice recommendations on information security management. The purpose of a utility’s policy is to inform employees, contractors and other authorised users of their obligations regarding protection of technology and information assets. It describes the list of assets that must be protected, identifies threats to those assets, describes authorised users’ responsibilities and associated access privileges, and describes unauthorised actions and the resulting accountability for violation of the security policy. Well-designed security processes are also important. As system security baselines change to address emerging vulnerabilities, cybersecurity system processes must be reviewed and updated regularly to follow this evolution. One key to maintaining an effective security baseline is to conduct a review once or twice a year.

3. Execute projects that implement the risk mitigation plan
Select cybersecurity technology that is based on international standards, to ensure the security policy and proposed risk mitigation actions can be followed. A ‘secure by design’ approach based on international standards such as IEC 62351 and IEEE 1686 can help further reduce risk when securing system components.

4. Manage the security programme
Effectively managing cybersecurity programmes requires not only taking into account the previous three points, but also managing the lifecycles of information and communication assets. To do that, it is important to maintain accurate and living documentation of asset firmware, operating systems and configurations. It also requires a comprehensive understanding of technology upgrade and obsolescence schedules, in conjunction with full awareness of known vulnerabilities and existing patches. Cybersecurity management also requires that certain events trigger assessments, such as particular points in asset life cycles or detected threats.
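
As promised under step 1, the sketch below shows one minimal, purely illustrative way a likelihood-times-impact risk register might be scored to produce a hierarchy of risk. The assets, threats and scores are invented; a real assessment follows the utility’s own methodology and asset inventory.

```python
# Illustrative risk register: score = likelihood x impact (each 1-5).
# All entries below are invented for the example.
risks = [
    {"asset": "substation RTU",     "threat": "unpatched firmware",   "likelihood": 4, "impact": 5},
    {"asset": "engineering laptop", "threat": "phishing/credentials", "likelihood": 3, "impact": 4},
    {"asset": "historian server",   "threat": "ransomware",           "likelihood": 2, "impact": 5},
]

for r in risks:
    r["score"] = r["likelihood"] * r["impact"]

# Hierarchy of risk: address the highest-scoring items first.
for r in sorted(risks, key=lambda r: r["score"], reverse=True):
    print(f'{r["score"]:>2}  {r["asset"]:<18} {r["threat"]}')
```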

For utilities, security is everyone’s business. Politicians and the public are more and more aware that national security depends on local utilities being robust too. Mitigating risk and anticipating attack vulnerabilities on utility grids and systems is not just about installing technology. Utilities must also implement organisational processes to meet the challenges of a decentralised grid. This means regular assessment and continuous improvement of their cybersecurity and physical security process to safeguard our new world of energy.

@SchneiderElec #PAuto #Power

Unlocking the value of cross-facility data sets.

29/04/2016

The great migration: cloud computing for smart manufacturing
By Martyn Williams, managing director at COPA-DATA UK.

According to an industry survey by IBM, two thirds of mid-sized companies have already implemented – or are currently considering – a cloud-based storage model for their organisation. The analytic advantages of cloud computing in industry are no secret; in fact, 70 per cent of these cloud-using respondents said they were actively pursuing cloud-based analytics to glean greater insights and efficiency in order to achieve business goals.

For the manufacturing industry, the benefits of migrating to cloud computing have been heavily publicised, but in an industry that has been slow to embrace new technology, a mass move to the cloud can feel like a leap into the unknown. Despite an increased adoption of smart manufacturing technologies, some companies may still feel hesitant. Instead, many decide to test the water by implementing a cloud storage model in just one production site. However, this implementation model can only provide limited benefits in comparison to a mass, multi-site migration to the cloud.

So what should companies expect to undertake during their cloud migration?

Define business objectives
Before migrating to the cloud, companies should first consider how it can help them achieve – and in some cases refine – their business objectives, and plan their migration with these objectives in mind. For businesses that want to improve collaboration and benchmarking across multiple locations, for example, the cloud plays a significant role.

A company with multiple production sites operating in several locations will be familiar with the complications of cross-facility benchmarking. Often, business objectives or key performance indicators (KPIs) are only set for single sites. Ideally, business objectives should be coordinated across all locations to offer a clear, company-wide mission.

To achieve better collaboration and transparency across sites, companies can resort to using a cloud storage and computing application that gathers all available production data (from multiple production sites) in one place. Certain individuals or teams in the company can be granted access to relevant data sets and reports, depending on their responsibilities within the organisation.

Determine the ideal status
Once a business objective is clear, companies should identify what the ideal status of each process is. By using production data and energy information stored and analysed in the cloud, a company can gain insight on productivity, overall equipment effectiveness (OEE), energy usage and more. This insight helps companies make changes that will bring the existing production environment closer to the ideal status.
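
To make the ‘ideal status’ concrete: OEE is conventionally the product of availability, performance and quality. The sketch below works through that standard formula; the shift figures are invented for illustration, only the formula itself is standard.

```python
# OEE = availability x performance x quality (standard definition).
# All input figures are invented for illustration.
planned_time_min = 480   # one shift
downtime_min     = 45
ideal_cycle_s    = 30    # ideal seconds per unit
units_produced   = 700
units_good       = 680

run_time_min = planned_time_min - downtime_min
availability = run_time_min / planned_time_min
performance  = (ideal_cycle_s * units_produced) / (run_time_min * 60)
quality      = units_good / units_produced

oee = availability * performance * quality
print(f"availability {availability:.1%}, performance {performance:.1%}, "
      f"quality {quality:.1%} -> OEE {oee:.1%}")   # -> OEE ~70.8%
```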

Combined with the right SCADA software, the cloud unlocks rich company-wide data sets. By bridging information from different facilities in real-time, the software generates a bird’s eye view of company-wide operations and detailed analysis of energy consumption, productivity and other operational KPIs. This makes it easier for a company to monitor progress against the original business objectives and scale up or down when necessary.

Already, a large number of manufacturers are using industrial automation to speed up production and increase efficiency. With the large scale adoption of intelligent machinery, cloud computing is poised to become the obvious solution to store and manage the complexity of data this industry connectivity creates.

Unlike the restrictions associated with on-premises storage, cloud-based models provide almost unlimited scalability, allowing companies to store both real-time and historical data from all their production sites and to integrate any new production lines or sites into their cloud solution in a seamless manner. When accompanied by data analytics software, like zenon Analyzer, cloud computing can help companies prevent potential problems in production and even ignite entirely new business models.

Continuous improvement
For manufacturers with strict energy efficiency and productivity targets, easy access to company-wide data is invaluable. However, the knowledge provided by the cloud does not end with past and present data, but also gives manufacturers a glimpse into the future of their facilities.

By using the cloud, companies can implement a long-term continuous improvement strategy. Often, continuous improvement will follow the simple Plan-Do-Check-Act (PDCA) model frequently used in energy management applications. This allows companies to make decisions based on data analytics and to evaluate the effectiveness of those decisions in the short and medium term.

Using data collected from industrial machinery, companies can also employ predictive analytics technology to forecast why and when industrial machinery is likely to fail, which also means they can minimise costly downtime.

Predictive analytics allows manufacturers to identify potential problems with machinery before breakdowns occur. By avoiding expensive production downtime and costly fines for unfulfilled orders, the insights that predictive analytics provides make it an obvious solution to such costly problems.
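
One minimal way to picture this, under the assumption that a drifting sensor signal precedes failure, is a rolling-mean threshold check. The window size, limit and synthetic data below are all invented for illustration; production systems use far richer models.

```python
# Minimal predictive-maintenance sketch: flag a machine when the rolling
# mean of a sensor reading drifts past a threshold. Synthetic data only.
from collections import deque

WINDOW, THRESHOLD = 10, 7.0   # assumed window size and vibration limit (mm/s)
readings = deque(maxlen=WINDOW)

def ingest(value: float) -> bool:
    """Add a reading; return True once the rolling mean breaches the limit."""
    readings.append(value)
    return len(readings) == WINDOW and sum(readings) / WINDOW > THRESHOLD

# Simulated vibration trend creeping upwards towards failure:
for t, v in enumerate(5.0 + 0.08 * x for x in range(40)):
    if ingest(v):
        print(f"t={t}: rolling mean high -- schedule maintenance before failure")
        break
```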

Converting from the safe familiarities of on-premises storage to an advanced cloud model may seem risky. As with any major business transition, there is bound to be hesitation surrounding the potential problems the changeover could bring. Before making a decision, companies should closely assess three things: business objectives, how the cloud can help them achieve the ideal status across one or multiple production sites and how it can help them continuously improve in the long run.


Connection allows expansion modules to be added in seconds.

12/01/2016

Peak Production Equipment manufactures a comprehensive range of test equipment, from simple test boxes used by subcontract manufacturers to stand-alone high specification test racks and systems used in the aerospace and other industries.


Harting’s har-flex® PCB connector system is a key component in a new versatile interface developed by Peak.

A key element of the company’s offering is the fact that all its test fixtures and interfaces are designed and manufactured in-house, which represents an increasing challenge because of the growing demand for lower-cost test solutions from customers. To accommodate this requirement, Peak needed a robust, computer-controlled interface board containing relay controls and digital inputs and outputs which could be configured flexibly to accept different customers’ test scenarios.

As such boards are not available in the marketplace at a reasonable cost, Peak took on the challenge of producing the board in-house.

Harting’s har-flex® family is a general-purpose PCB connector series based on a 1.27 mm grid with SMT termination technology. With its straight, angled and cable variants, har-flex® provides connectivity solutions for many different board-to-board and cable-to-board applications.
The different stacking heights of the mezzanine connectors and the flexible IDC connector cable lengths offer a high degree of freedom to the system design. A broad choice of configurations between six and 100 contacts in even-numbered positions is available.

The system had to be compact, low cost, expandable, robust and reliable and cover a wide voltage range, while at the same time incorporating multiple control interfaces, with one interface controlling a range of expansion modules. It also had to be compatible with multiple software drivers, and all the components used in its construction had to be fully traceable.

The solution arrived at by Peak engineers was based around a master interface PCB which acts as the key interface between the controlling PC and all the expansion modules. It is fitted with three interfaces: USB, Ethernet and RS232. The board can be used as a stand-alone controller, it can be “piggy-backed” onto any expansion module, or alternatively it can be connected to expansion modules using a ribbon cable for maximum flexibility. The PCB assembly has a high-speed I2C interface, 23 channels of digital I/O and 256 kbit of on-board memory, all controlled by any one of the three control interfaces. The PCB has a wide voltage input range from 7 to 36 V DC, and measures only 100 × 50 mm.

The on-board memory allows storage of data such as test cycles and date of manufacture, while the digital I/O is useful for monitoring sensor inputs and switching indicators and additional relays. The I2C interface is used for all expansion module communications, but can also be used as a stand-alone interface.

A 16-channel high-power SPDT relay board is used as the expansion module. This contains 16 SPDT 12 A, 250 V AC relays for general power switching. The relays can be switched and the status can be read back by the master interface PCB. All relays have LED indication, and the PCB has the same wide voltage input range as the master interface board (7-36 V DC) and measures 100 × 220 mm. Although the relay board can be used for general switching inside test fixtures and systems, it can also be used in many other applications.
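
For a feel of how a host might drive such a relay module over I2C, here is a minimal sketch using the Linux smbus2 library. The device address and register map are entirely hypothetical; Peak’s actual protocol is not described in this article.

```python
# Illustrative sketch of driving a 16-channel I2C relay module.
# The address and register map below are hypothetical, for illustration only.
from smbus2 import SMBus

RELAY_ADDR   = 0x20   # assumed I2C address of the expansion module
REG_RELAY_LO = 0x00   # assumed register: relays 1-8 (one bit per relay)
REG_RELAY_HI = 0x01   # assumed register: relays 9-16

def set_relays(bus: SMBus, mask: int) -> None:
    """Energise the relays whose bits are set in the 16-bit mask."""
    bus.write_byte_data(RELAY_ADDR, REG_RELAY_LO, mask & 0xFF)
    bus.write_byte_data(RELAY_ADDR, REG_RELAY_HI, (mask >> 8) & 0xFF)

def read_relays(bus: SMBus) -> int:
    """Read back relay status as a 16-bit mask."""
    lo = bus.read_byte_data(RELAY_ADDR, REG_RELAY_LO)
    hi = bus.read_byte_data(RELAY_ADDR, REG_RELAY_HI)
    return (hi << 8) | lo

with SMBus(1) as bus:                        # I2C bus 1, e.g. on a Raspberry Pi
    set_relays(bus, 0b0000_0000_0000_0101)   # close relays 1 and 3
    print(f"status: {read_relays(bus):016b}")
```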

HARTING har-flex® connectors were selected for board connectivity due to their small size, robustness and flexibility. They can be used as board-to-board connectors, allowing the master interface PCB to be connected to any expansion module directly; alternatively, the same connector can have a ribbon cable connected to connect subsequent expansion modules.

The small size of the connector allowed Peak to increase the pin count, allowing power lines to be commoned up and all communications and power to be passed down a single ribbon cable. As a result, expansion modules could be added in a matter of seconds.



Air pollution – the invisible roadside killer.

14/12/2015

The VW emissions scandal has helped to raise awareness of the deadly threat posed by air pollution in many of our towns and cities. In the following article, Jim Mills, Managing Director of instrumentation company Air Monitors, explains why diesel emissions will have to be lowered and how the latest monitoring technology will be an essential part of the solution.

Background
The World Health Organisation has estimated that over 500,000 Europeans die prematurely every year as a result of air pollution – especially fine particulates from combustion processes and vehicles. Of these, around 30,000 are in Britain; however, experts believe that the figures could be substantially higher if the effects of nitrogen dioxide (NO2) are also taken into consideration.

London Smog – now less visible!

Historically, air pollution was highly visible, resulting in air pollution episodes such as the Great London Smog in 1952. However, today’s air pollution is largely invisible (fine particulates and NO2 for example), so networks of sophisticated monitors are necessary.

The greatest cause for alarm is the air quality in our major towns and cities, where vehicles (mainly diesels) emit high levels of NO2 and particulates in ‘corridors’ that do not allow rapid dispersion and dilution of the pollutants. Urban traffic also emits more pollution than free-flowing traffic because of the continual stopping and starting involved.

As a result of its failure to meet European air quality limits, the Government was taken to the UK Supreme Court in April 2015 by ClientEarth, an organisation of environmental lawyers. In a unanimous judgement against Defra (the Department for Environment, Food and Rural Affairs), the Court required the urgent development of new air quality plans. In September 2015 Defra published its Draft Air Quality Plans, but they have not been well received; respondents have described them as disappointing and unambitious. CIWEM (the Chartered Institution of Water and Environmental Management), an organisation representing environmental management professionals, for example, said the plans “rely on unfunded clean air zones and unproven vehicle emission standards.”

Some commentators believe that Defra should follow Scotland’s lead, following the publication, in November 2015, of ‘Cleaner Air for Scotland – The Road to a Healthier Future’ (CAFS). Key to this strategy is its partnership approach, which engages all stakeholders. Under CAFS, the Scottish Government will work closely with its agencies, regional transport partnerships, local authorities (transport, urban and land-use planners and environmental health), developers, employers, businesses and citizens. CAFS specifies a number of key performance indicators and places a heavy emphasis on monitoring. A National Low Emission Framework (NLEF) has been designed to enable local authorities to appraise, justify the business case for, and implement a range of air quality improvement options related to transport (and associated land use).

Traffic-related air pollution
In addition to the fine particulates that are produced by vehicles, around 80% of NOx emissions in areas where Britain is exceeding NO2 limits are due to transport. The largest source is emissions from diesel light duty vehicles (cars and vans). Clearly, there is now enormous pressure on vehicle manufacturers to improve the quality of emissions, but urgent political initiatives are necessary to address the public health crisis caused by air pollution.

A move to electric and hybrid vehicles is already underway, and developments in battery technology will help improve the range and performance of these vehicles; as they become more popular, their cost is likely to fall. The prospect of driverless vehicles also offers hope for the future; if proven successful, they will reduce the need for car ownership, especially in cities, thereby reducing the volume of pollution-emitting vehicles on the roads.

Vehicle testing is moving out of the laboratory in favour of real-world driving emissions testing (RDE) which will help consumers to choose genuinely ‘clean’ vehicles. However, the ultimate test of all initiatives to reduce traffic-related air pollution is the effect that they have on the air that people breathe.

Ambient air quality monitoring
Networks of fixed air quality monitoring stations provide continual data across the UK, accessible via the Defra website and the uBreathe app. Many believe that this network contains an insufficient number of monitoring points, because measurement data has to be heavily supplemented with modelling. However, these reference monitoring stations, while delivering highly accurate and precise data, are expensive to purchase, calibrate and service. They also require a significant footprint and mains electricity, so it is often difficult or impossible to locate them in the locations of most interest – the pollution hotspots.

Public sector budgets are under pressure, so the cost of running the national monitoring network and those systems operated by Local Authorities is a constant source of debate. The challenge for technology companies is therefore to develop air quality monitors that are more flexible in the locations in which they are able to operate and less costly in doing so.

Air Monitors’ response

New technology
Air Monitors has developed a small, battery-powered, web-enabled air quality monitor, ‘AQMesh’, which can be quickly and easily mounted on any lamp post or telegraph pole at a fraction of the cost of traditional monitors. Consequently, for the first time ever, it is possible to monitor air quality effectively where it matters most: outside schools, on the busiest streets and in the places where large numbers of people live and breathe.

AQMesh ‘pods’ are completely wireless, using GPRS communications to transmit data for the five main air-polluting gases to ‘the cloud’, where sophisticated data management generates highly accurate readings as well as monitoring hardware performance. In addition, it is now possible to add a particulate monitor to new AQMesh pods.

AQMesh does not deliver the same level of precision as reference stations, but this new technology decreases the cost of monitoring whilst radically improving the availability of monitoring data, especially in urban areas where air quality varies from street to street.

The flexibility of these new monitors is already being exploited by those responsible for traffic-related pollution – helping to measure the effects of traffic management changes, for example. However, this new level of air quality data will also be of great value to the public; helping them to decide where to live, which routes to take to work and which schools to send their children to.

The future is (almost) now!

29/11/2015

Buzzwords fly around in industry like wasps at a picnic. Industry 4.0 is one of these hugely popular concepts, particularly when it comes to manufacturing. Here Steve Hughes, managing director of REO UK, gives further insight into Industry 4.0.


The first industrial revolution saw the development of mechanisation using water and steam power. The second was the introduction of electricity in manufacturing environments, which facilitated the shift to mass production. The digital revolution happened during our lifetime, using electronics and IT to further automate manufacturing.

Industry 4.0 is the fourth in this series of industrial revolutions. Although it is still, relatively speaking, in its infancy, the idea relies on sophisticated software and machines that communicate with each other to optimise production.

In Industry 4.0, strong emphasis is placed on the role of intelligent factories. These are energy-efficient operations based on high-tech, adaptable and ergonomic production lines. Smart factories aim to integrate customers and business partners, while also being able to manufacture and assemble customised products.

Industry 4.0 is more about machines doing the work and interpreting the data, than relying on human intelligence. The human element is still central to the manufacturing process, but fulfils a control, programming and servicing role rather than a shop floor function.


At Siemens’ Amberg plant, Simatic PLCs manage the production of PLCs!

The Siemens (IW 1000/34) Electronic Works facility in Amberg, Germany, is a good example of the next generation of smart plants. The 108,000 square-foot high-tech facility is home to an array of smart machines that coordinate everything from the manufacturing line to the global distribution of the company’s products.

Despite the endless variables within the facility, a Gartner industry study conducted in 2010 found that the plant boasts a reliability rate of more than 99 per cent, with only 15 defects in every million finished products.

Thanks to the data processing capacity of Industry 4.0-ready devices, it is possible to generate the information, statistics and trends that allow manufacturers to make their production lean and more fuel efficient.

If you work in the food manufacturing industry, you probably know that many production lines today operate at less than 60 per cent efficiency, which means there is considerable room for improvement. Saving electricity and water are also key requirements for modern plant managers, who can achieve their eco-friendly goals by using smart plant connectivity.

The great news is that a lot of the technology associated with Industry 4.0 already exists. The not-so-great news is that implementing it will probably cost your company a pretty penny, especially if you aim to be an early adopter.

What the future holds
For most automation companies, the move will be a gradual one, an evolution rather than a revolution. This is why continuity with older systems will still be essential for manufacturing in the years to come.

Industry 4.0 will ultimately represent a significant change in manufacturing and industry. In the long run, the sophisticated software embedded in factory equipment could help machines self-regulate and make more autonomous decisions. Decentralisation also means tasks currently performed by a central master computer will be taken over by system components.

In years to come, geographical and data boundaries between factories could become a thing of the past, with smart plants joining up sites located in different places around the world.

Industry 4.0 is an excellent opportunity for industries to apply their skills and technologies to gradually start the shift towards smarter factories. New technologies will also lead to more flexible, sustainable and eco-friendly production and manufacturing lines. The first step is taking the Industry 4.0 concept from the land of buzzwords, to the land of research and development.


Industry 4.0 business ecosystem will change dynamics in the global industry.

26/06/2015
Frost & Sullivan: a new IoIT (Internet of Industrial Things) supplier ecosystem estimated to reach €420 billion by 2020
Frost & Sullivan plans to publish four new studies dedicated to the evolution of the Smart Manufacturing paradigm and new collaborations and alliances in the industrial services market during July 2015.
• Internet of Industrial Things – The Vision and the Realities
• Services 2.0 – The New Business Frontier for Profitability
• The Safety-Security Argument: Expanding Needs in a Connected Enterprise
• The Industrie 4.0 Business Ecosystem: Decoding the New Normal

In the evolution towards the Smart Manufacturing paradigm, end-user requirements are set to evolve and become more complex than ever before. Global suppliers find it increasingly difficult to meet the growing needs of end-users, needs that are further augmented with a very high degree of complexity. But the current scenario also provides the biggest opportunity to realign one’s existing business approach and forge alliances and partnerships with market participants. The result would be a newly built supplier ecosystem that can effectively address end-user needs for growth in the near and long term.

According to recent Frost & Sullivan research on the industrial services market, a new wave of influence is disrupting business dynamics between end-user and supplier. This change is founded on new service paradigms that are enabling end-users to achieve high degrees of cost optimisation and enhanced operational efficiency. For instance, end-user and supplier relationships are currently being determined by service architectures founded on frameworks defined by advanced Information and Communication Technology (ICT). Services based on such advanced ICT concepts were found to hold more than 75% of the global industrial services market in 2014.

While spare parts and maintenance still retain a major share of the service revenue models, the growth of advanced services is expected to witness a CAGR of 20 percent over the coming years.

“In order to design and deliver such advanced services, industrial suppliers are required to forge partnerships with cloud and data analytics vendors. In some end-use cases even the most rudimentary solutions built on an integrated analytics package have enabled suppliers to upsell and increase product prices by up to 10 percent. It has also helped achieve differentiation in a technology-saturated marketplace,” notes Muthukumar Viswanathan, Frost & Sullivan Practice Director for Industrial Automation & Process Control and Measurement & Instrumentation.

Major structural revisions are also expected on the shop floor driven by the advent of M2M (machine-to-machine) communication. By 2020, nearly 12 billion devices in the industry are poised to be connected via advanced M2M technology.

“There is still a lot of scepticism surrounding this rapid transition towards the smart factory framework, however. This can be summed up by a key question that surfaces across all major industrial discussion forums: who will be the single responsible entity for the integrated solution delivered to an end-user?” Mr. Viswanathan continues. “I would opine that although the emerging business demands would warrant an ecosystem approach, there will still be one key partner who would liaise with the end-user and agree to be the ultimate risk bearer of the final solution delivered to the customer.”


The advantages and disadvantages of advanced NDT.

29/04/2015

Alison Glover from Ashtead Technology describes the advantages and disadvantages of advanced Non-Destructive Testing equipment, and explains why her company has invested over £1m (€1.4m) in advanced NDT equipment.

Alison Glover

Simple, conventional inspection methods such as ultrasonic thickness testing can provide a useful, fast, low-cost method for assessing materials. However, in comparison with advanced NDT, such methods are generally less repeatable, less recordable, and have a lower Probability of Detection (PoD).

There are many advantages to be gained from advanced NDT, some of which will be briefly described below. The major disadvantage of these high-end technologies is, of course, their cost. Advanced NDT instruments may cost tens of thousands of pounds and require significant levels of training to best exploit their benefits. For this reason, Ashtead Technology has chosen to invest over £1 million in advanced NDT equipment; the ability to rent this technology can dramatically lower the cost of entry to this market for our customers in applications such as crack and flaw detection, weld evaluation, tube testing, corrosion mapping, composite inspection etc.

The advantages of advanced NDT
In general terms, advanced instrumentation can, in the right hands, provide more accurate and reliable inspection data with an improved PoD. The data is more recordable and more repeatable. Advanced technologies such as Phased Array Ultrasound (PAUT) or Eddy Current Array (ECA) provide more intuitive displays, better ways of presenting data and generate inspection reports of higher value to clients. For example, a colour-coded C-Scan is an intuitive way of representing inspection data. Using different colours for different remaining wall thicknesses, for instance in PAUT inspection of corroded or eroded pipes, produces an image that is easy to understand. Similarly, many clients will find a colour-coded ECA C-Scan simple to interpret compared with the conventional eddy current impedance plane display.

Composite OmniScanMX2

With greater control over instrument configuration, advanced NDT procedures can be optimised for particular inspections. Setup files can be saved digitally and easily transferred to others. Digital recording means that data can be emailed to colleagues when a second opinion is required, or when complex data needs to be assessed by a higher level technician. The ability to store large inspection data files also means that new inspections can be compared with those done previously, to determine whether there has been further deterioration, and to monitor, for example, crack growth. In addition, the use of scanners improves repeatability and helps to ensure that sequential inspections are directly comparable and less subjective.

As a result of these advantages, there has been strong growth in the advanced NDT sector, and this has been reflected in the volume of advanced NDT instruments in the Ashtead Technology fleet that are out on hire.

The disadvantages of advanced NDT
In comparison with conventional methods, the operation of advanced instrumentation requires a higher level of training and additional certification, which incurs more cost. However, as discussed above, the deployment of advanced NDT delivers a superior, and therefore higher-value, service. Once advanced training is completed, as with any skill, it is important to practise what was learned on the training course. This can only be done if high-value advanced instrumentation is available. Again, purchase costs may be prohibitive, so renting can be a preferable option. In addition, instrument purchase ties the user to a specific technology, whereas a rental fleet offers users the ability to deploy the most appropriate kit for each job, or to hire equipment from different manufacturers depending on the users’ training, experience and preference.

Purchase of a particular technology may also reduce or preclude access to other methods that may be developed at a later date.

It is important to remember that the capital expenditure on advanced NDT equipment is not the only cost. Instrument maintenance incurs a further cost, as does depreciation. Capital purchases will typically be written off in the company accounts over a three-year period, which means that the equipment must generate substantial profit to deliver a return on that investment. There are also borrowing and opportunity costs – the money used to purchase equipment or to pay interest could have been used for something else, such as training or hiring more staff.

To be a worthwhile investment, high value advanced NDT equipment must be used regularly. Without an assured steady flow of work, there is a danger of underutilisation. In contrast, renting provides a way to only pay for equipment when it is needed and not to incur any costs in the intervening periods.
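
A back-of-envelope comparison makes the utilisation argument concrete. Every figure below is an assumption invented for the example, not a quoted price.

```python
# Illustrative rent-vs-buy comparison for an advanced NDT instrument.
# All figures are assumptions for the sake of the example.
purchase_price     = 40_000   # GBP
writeoff_years     = 3        # straight-line write-off period
annual_maintenance = 2_000    # GBP per year
weekly_rental      = 750      # GBP per week
weeks_used         = 10       # utilisation per year: the key variable

annual_cost_owned  = purchase_price / writeoff_years + annual_maintenance
annual_cost_rented = weekly_rental * weeks_used

print(f"owning:  ~GBP {annual_cost_owned:,.0f}/year regardless of utilisation")
print(f"renting: ~GBP {annual_cost_rented:,.0f}/year at {weeks_used} weeks' use")

# Break-even utilisation: the point beyond which owning becomes cheaper.
breakeven_weeks = annual_cost_owned / weekly_rental
print(f"break-even at ~{breakeven_weeks:.0f} weeks of use per year")
```

Below that break-even utilisation, rental avoids paying for idle kit; above it, purchase starts to pay (subject to the borrowing and opportunity costs noted above).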

In summary, the advantages of advanced NDT can be enormous if the financial implications are managed effectively. One way to achieve this is by taking advantage of rental instrumentation from Ashtead Technology.


#HM14 Without data integration there will be no Industry 4.0

30/12/2014

This was one of our most popular posts in 2014: Without data integration there will be no Industry 4.0 #Pauto


From CAx to MES – the digital factory makes product data available to all systems over the entire life cycle. Experts explain how the software interacts.

“Everyone is talking about Industry 4.0 – in other words, the vision of a fourth industrial revolution. In the near future it will provide intelligent, networked products from networked systems that to a large extent operate autonomously,” says analyst and technical author Ulrich Sendler. “However, there is one major condition for Industry 4.0: all types of digital product and production data need to be merged with the real world.” For the PLM expert this means “that the many systems which are currently being used as islands need to be networked in the future in order to provide real data integration for the entire life cycle of the products.”

Modern production relies on data from products, production planning and manufacturing engineering. “The continuous availability…
