White Paper – Save time and money with modern monitoring and calibration

The text below is taken from a Rotronic White Paper available here in full.

Companies across many industries needing to perform regular monitoring and calibration have never faced a more challenging environment. Stricter compliance requirements mean companies are under greater pressure to deliver accurate and reliable data, whilst internal budget restrictions demand the most cost-effective and efficient solutions.

Can modern measurement & calibration techniques help your business operations?

It is well known that accurate measurements reduce energy use and improve product consistency. Instrument users, calibration laboratories and manufacturers are constantly looking for smarter ways of operating and are responding with innovations that are transforming the measurement and calibration industry.

New ways of working

Industrial environments are now more automated and interconnected than ever before, and companies need to ensure that their infrastructure and processes can respond and adapt to industry changes. With the introduction of newer, more complex instrumentation, organisations can often be slow to recognise the additional business benefits of replacing a traditional method (that offers a short-term result) with a more modern method (that delivers a longer-term, sustainable solution). Implementing a new approach can also help re-position the calibration process from being viewed simply as a cost to the business to one that helps deliver improved process and energy efficiencies with a return on investment.

Industry advancements

Historically, in-situ calibration has been the standard approach; however, advances in technology mean that there is now a viable alternative that still meets the growing demand for on-site services. With the market moving away from analogue to digital signal processing, interchangeable digital sensors are proving to be a more practical solution for large and small organisations alike. As businesses look for greater automation and productivity, modern interchangeable digital sensors allow calibration to be completed much more quickly, without the costly implications of operational downtime and on-site maintenance.


Why calibrate? – The only way to confirm performance
In unsettled economic times it can be tempting to simply extend the intervals between calibration cycles or to forgo calibration altogether. However, neglecting system maintenance and calibration will result in reduced performance and a loss of measurement confidence, ultimately leading to a failure to meet compliance standards. Measurement drift over time negatively impacts processes and quality. Regular, accredited calibration demonstrates compliance but, equally importantly, sends a message to customers that quality is taken seriously and that they can be confident in both the process and the final product.
What is your route to traceability?

Traditional In-Situ Sensor Calibration

Until recently most humidity calibrations were performed on-site in-situ. Larger organisations with multiple instruments generally found it more convenient to have their own in-house calibration instruments with dedicated technicians working on-site. Smaller organisations unwilling or unable to invest in on-site calibration equipment had the option to engage the services of a commercial calibration provider.

In most cases, trained instrument technicians are required for in-situ calibration work; the equipment is brought to the probes and generally only one probe can be calibrated at a time. One of the main disadvantages of this process is the impact that it has on production downtime, as typically a salt or chamber based calibration can take more than three hours. Moreover, as the processes or control systems are interrupted during calibration, the actual conditions can be unknown.

Modern Ex-Situ Sensor Calibration

Companies keen to avoid the impacts of in-situ calibration and/or the operational downtime caused by the replacement of failed hard-wired instruments are opting instead for the flexibility and convenience of interchangeable sensors and modern portable calibration generators. Instead of bringing in equipment to calibrate in-situ, the technician brings pre-calibrated probes directly from the laboratory (on-site or external). Using interchangeable digital sensors, the pre-calibrated probes can be exchanged with the in-situ probes in seconds (known as hot swaps), saving time and avoiding operational disruption. If a wider system loop calibration is required, digital simulators are used to provide any fixed value exactly and instantly. The old probes are then taken back to a calibration laboratory and calibrated accordingly. This adds the benefit that an external accredited laboratory can be used without issue.

Improved accuracy and traceability?

By ensuring that all calibrations are performed within dedicated laboratories as opposed to ad-hoc locations, better procedures and instrumentation can be utilised. In addition, time pressures are usually reduced, as processes and monitoring systems are unaffected during calibration. As such, calibrations are typically performed to a higher standard, leading to lower associated measurement uncertainty (every calibration has an uncertainty associated with it, whether it is defined or not). Overall, in most circumstances these methods deliver greater reliability and improved traceability and, importantly, reduce on-site workload and limit operational downtime.


CASE STUDY – Meeting the demands at the National Physical Laboratory, London.

National Physical Laboratory, London

When the National Physical Laboratory (NPL) in London needed to replace their entire building management system (BMS), they turned to Rotronic Instruments (UK) for an integrated sensor and calibration solution. The NPL was looking for both a complete range of temperature and humidity sensors and instrumentation, and the fulfilment of the calibration and commissioning needs of these instruments. Working closely with the project stakeholders, the Rotronic Instruments (UK) team developed a tailored solution, matching the instruments and service to the project requirements.

The decision by the NPL to replace the BMS was brought about by the need for tighter control, greater reliability and easier calibration. One of the key elements in achieving these objectives was the use of interchangeable probes. This immediately reduced time-consuming and disruptive on-site sensor calibration to a minimum. Every probe’s digital output was calibrated in Rotronic Instruments’ (UK) UKAS accredited laboratory, and each transmitter’s analogue output was calibrated using a simulated digital input. To resolve any in-situ measurement errors between the calibrated sensors and the uncalibrated BMS, each installed high-accuracy instrument was loop calibrated and adjusted. Typical installation errors corrected to date on the brand-new BMS are ±0.5 %rh and ±0.25 °C; a significant result for labs requiring tolerances of better than 1 %rh and 0.1 °C.

Whilst the use of high-performance instruments was essential, not every sensor location or application could justify this approach. However, mindful of the NPL’s long-term objectives, even the lowest specification thermistor products were customised to provide long-term performance and low drift. Additionally, a robust commissioning procedure and training for key personnel were developed to enable an ongoing commitment to delivering quality measurements. Finally, it was effective communication and regular on-site interaction with all the stakeholders that helped deliver a successful outcome to this substantial project.


Summary

All companies that need to perform regular monitoring and instrument calibration should be constantly reviewing their processes and questioning whether their operations and procedures are delivering the maximum return for their business. As increased regulatory compliance and demands for improved energy efficiencies continue to grow, traditional processes may no longer offer the optimum solution. An organisational mindset change may be needed to move calibration from being seen as a fixed cost to a process that can help deliver business objectives through ongoing cost and energy efficiencies.

With the advent of calibration methods that can significantly reduce in-situ disruption, downtime is minimised, labour costs are reduced and productivity is improved. Using interchangeable digital systems increases the accuracy and traceability of calibrations, resulting in a higher-quality product.

Choosing the right calibration methodology may require new thinking and a different approach, but those companies that get it right will end up with a modern, flexible system that both achieves compliance and delivers long term cost and energy efficiencies to their business.

For more information on the NPL case study or how your business can develop innovative and efficient monitoring solutions please contact us.

Critical monitoring of wind turbines

The future is very encouraging for wind power. The technology is growing rapidly due to the current power crisis and the ongoing discussions about nuclear power plants. Wind turbines are becoming more efficient and are able to produce more electricity under the same conditions.

Worldwide installed wind power per year in MW. (Source GWEC)

Converting wind power into electrical power:

A wind turbine converts the kinetic energy of wind into rotational mechanical energy. This energy is directly converted, by a generator, into electrical energy. Large wind turbines, as shown in the picture, typically have a generator installed on top of the tower. Commonly, there is also a gearbox to adapt the speed. Various sensors for wind speed, humidity and temperature measurement are placed inside and outside to monitor the climate. A controller unit analyses the data and adjusts the yaw and pitch drives to the correct positions. See the schematic below.

Schematic of Wind Turbine Systems

The formula for wind power density:

W = d × A² × V³ × C, where:

d: the density of the air, typically 1.225 kg/m³. This value can vary depending on air pressure, temperature and humidity.

A²: the diameter of the turbine blades. This term has a strong influence due to its squared relationship: the larger a wind turbine is, the more energy can be harnessed.

V³: the velocity of the wind. Wind speed is the most influential term due to its cubed relationship.

In reality, the wind never blows at a constant speed and a wind turbine is only efficient at certain wind speeds; usually 10 mph (16 km/h) or greater is most effective. At very high wind speeds the turbine can break, so the efficiency is quoted at a constant wind speed of around 10 mph.

C: a constant, which is normally 0.5 for metric values. It is actually a combination of two or more constants, depending on the specific variables and the system of units used.
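
As a rough illustration of how these terms combine, the sketch below simply transcribes the relationship above into Python (treating d, A, V and C exactly as defined here; the function name and the example numbers are ours):

def wind_power_density(d: float, A: float, V: float, C: float = 0.5) -> float:
    # W = d x A^2 x V^3 x C, with d in kg/m^3, A the blade diameter in m,
    # V the wind velocity in m/s and C the dimensionless constant.
    return d * A**2 * V**3 * C

# Example: standard air density, an 80 m blade diameter and a 10 mph (~4.5 m/s) wind.
print(wind_power_density(d=1.225, A=80.0, V=4.5))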

Why measure the local climate?

To forecast the power of the wind over a few hours or days is not an easy task.

Offshore wind farms

Wind farms can extend over miles of land or offshore areas where the climate and the wind speed can vary substantially, especially in hilly areas. Positioning towers only slightly to the left or right can make a significant difference, because the wind velocity can be increased by the topography. Therefore, wind mapping has to be performed in order to determine whether a location is suitable for a wind farm. Such wind maps are usually produced with Doppler radars equipped with stationary temperature and humidity sensors; these sensors improve the overall accuracy.

Once wind mapping has been carried out over different seasons, wind turbine positions can be determined. Each turbine will be equipped with sensors for wind direction and speed, temperature and humidity. Using all these parameters, the turbine characteristics plus the weather forecast, a power prediction can be made using complex mathematics.

The final power value is calculated in watts and supplied into the power grid. Many houses and factories can be powered by this green energy.

Not ideal energy generating conditions!

Why measure inside a wind turbine?

Wind farms are normally installed in areas with harsh environments where strong winds are common. Salty air, high humidity and condensation are daily issues for wind turbines.

Normal ventilation is not sufficient to ensure continuous operation. The inside climate has to be monitored and dehumidified by desiccant to protect the electrical components against short circuits and the machinery against corrosion. These measurements are required to ensure continuous operation and reduce maintenance costs.

What solutions can Rotronic offer?

Rotronic offers sensors with exceptional accuracy and a wide range of products for meteorological applications and for monitoring internal conditions.

Low sensor drift and long-term stability are ideal in wind energy applications, where reduced maintenance lowers operational costs.

The wide range of networking possibilities, including RS-485, USB, LAN and probe extension cables up to 100 m, allows measurements in remote or hard-to-reach places. Validated Rotronic HW4 software makes it easy to analyse the data, or the data can be exported into MS Excel for reporting and further processing.

The ability to calibrate accurately using humidity standards and portable generators on site ensures continued sensor performance!

Comments or queries? Please do get in touch!


Temperature and Humidity Monitoring in Data Centres

Over the years there has been a rapid increase in large stand-alone data centres housing computer systems, hosting cloud computing servers and supporting telecommunications equipment. These facilities are crucial to IT operations for companies around the world.

It is paramount for manufacturers of information technology equipment (ITE) to increase computing capability and improve computing efficiency. With an influx of data centers required to house large numbers of servers, they have become significant power consumers. All the stakeholders, including ITE manufacturers, physical infrastructure manufacturers, and data center designers and operators, have been focusing on reducing the power consumed by the non-computing part of the overall load: one major cost is the cooling infrastructure that supports the ITE.

Data Centre Modelling

Too much or too little humidity can make us uncomfortable, and computer hardware does not like these extreme conditions any more than we do. With too much humidity, condensation can occur; with too little humidity, static electricity can build up. Both can have a significant impact and can cause damage to computers and equipment in data centers.

It is therefore essential to maintain and control ideal environmental conditions, with precise humidity and temperature measurement, thus increasing energy efficiency whilst reducing energy costs in data centers. The ASHRAE Thermal Guidelines for Data Processing Environments have helped create a framework for the industry to follow and to better understand the implications for ITE cooling components.

Rotronic’s high-precision, fast-responding and long-term stable temperature and humidity sensors are regularly specified for monitoring and controlling conditions in data centres.

Why measure temperature and humidity?

Maintaining temperature and humidity levels in the data center can reduce unplanned downtime caused by environmental conditions and can save companies thousands or even millions of dollars per year. A recent whitepaper from The Green Grid (“Updated Air-Side Free Cooling Maps: The Impact of ASHRAE 2011 Allowable Ranges”) discusses the new ASHRAE recommended and allowable ranges in the context of free cooling.

The humidity varies to some extent with temperature; however, in a data center the absolute humidity should never fall below 0.006 kg/kg (roughly 6 g of water vapour per kg of dry air), nor should it ever exceed 0.011 kg/kg.

Maintaining a temperature range of 20°C to 24°C is optimal for system reliability. This range provides a safe buffer for equipment to operate in the event of air conditioning or HVAC equipment failure, while making it easier to maintain a safe relative humidity level. In general, ITE should not be operated in a data center where the ambient room temperature has exceeded 30°C. Maintaining ambient relative humidity levels between 45% and 55% is recommended.

Additionally, data centre managers need to be alerted to changes in temperature and humidity levels.
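
To make the thresholds above concrete, a minimal alert-check sketch in Python is shown below; the limits are the ones quoted in this article, while the function and variable names are purely illustrative:

def check_conditions(temp_c: float, rh_percent: float) -> list:
    # Flag readings outside the ranges quoted above:
    # 20-24 degC optimal, 30 degC upper limit, 45-55 %rh recommended.
    alerts = []
    if temp_c > 30.0:
        alerts.append("CRITICAL: ambient temperature above 30 degC")
    elif not 20.0 <= temp_c <= 24.0:
        alerts.append("WARNING: temperature outside the optimal 20-24 degC range")
    if not 45.0 <= rh_percent <= 55.0:
        alerts.append("WARNING: relative humidity outside the recommended 45-55 %rh range")
    return alerts

print(check_conditions(26.5, 41.0))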

Rotronic temperature and humidity probes with suitable transmitters or loggers are well suited to monitoring and controlling conditions in data centres due to their high precision and fast response with long-term stability.

With Rotronic HW4 software a separate monitoring system can be implemented. This enables data centre managers to view measured values and automatically save the measured data. Alarms via email and SMS, together with report printouts, help guarantee data integrity at all times.

Dr Jeremy Wingate
Rotronic UK

Field Testing the new HL-1D Compact Logger – Up the Matterhorn!

Last week Rotronic launched their latest small compact temperature and/or humidity data logger!

HL-1D Compact Logger UK RRP £73

With the Friday off work, a friend and I thought what better way to test the impressive little logger than slinging it in a pack and carrying it through sun, fog, snow and rain on an audacious weekend attempt to climb the 4478 m Matterhorn in the beautiful Swiss Alps (I confess my friend could not care less about the logger aspect but was certainly up for the climb).

Hornli Ridge of the Matterhorn 4478m

With no time for acclimatisation, the climb would be gruelling enough without carrying additional instruments, but thankfully the HL-1D is very compact and light. It has a 3-year battery life, can store 32,000 readings and has a high measurement accuracy of ±3.0 %RH and ±0.3 °C. Of course, the logger is designed more for monitoring office and work spaces, transportation of products, and production and storage environments; still, we thought it wise to push it to its limits!

Due to very poor conditions on the mountain we planned to overnight in a small hut at 4000 m. So, with our packs loaded, we set off from the 2000 m high gondola station above the beautiful village of Zermatt, but first ensured we were well fuelled with ‘Apfel Strudel’ and coffee!

Breakfast of kings!

The climb itself started at 3000m and the temperature quickly began to drop as we gained altitude.  At nearly 4000m the temperature dropped rapidly and clouds came in (shown by a rapid increase in the humidity). Luckily the Solvay Hut at 4004m provided welcome shelter and a ‘comfortable’ 3°C temperature (much warmer inside our sleeping bags).

At the base of the route proper

Morning revealed that the cold temperatures and thick cloud had turned to heavier snowfall, making any further progress even harder. The fresh snow combined with the debilitating effects of altitude sickness meant that we (wisely) decided to head straight down (this was just a quick weekend getaway after all).

Lots more snow on the way down!

The descent was challenging and navigation difficult. Snowfall was consistent for most of the day, topped off by a steady shower of rain as we made our final walk back down to the gondola station (you can see the logger showing 100 %rh as the top pocket of my bag became saturated in the downpour).

Relaxing back in beautiful Zermatt the following day – It’s sunny now!!

Back in Zermatt, we quickly found shelter to dry off and a good spot for a celebratory beer and hearty Swiss meal.

What of our little logger? It provides a great record of the trip, with values safely recorded through the freezing temperatures and soaking rain.
The full trace from the logger can be found below; click on the image for more detail.

Matterhorn Trace

If you would like more info on the latest compact logger click here or for any other measurement queries please do not hesitate to contact us!

Dr. Jeremy Wingate
Rotronic UK

Technical Note 1 – Digital Integration of Rotronic devices

The Rotronic HygroClip2 was launched around four years ago and is used as standard with most of our devices. Underpinning the HygroClip2’s performance, beyond the Rotronic sensor element itself, is some impressive technology.

The AirChip3000 is the chip that provides high-resolution measurement of the raw sensor outputs, temperature compensation and calibration correction tables, which together deliver the high-accuracy measurements our customers demand. In addition, the AirChip provides digital and analogue communications. All Rotronic instrumentation communicates digitally with these probes, but the same interfaces can be used without a Rotronic handheld, logger or transmitter.

Let’s explore what is possible…

Connections

Devices can be connected to your software or systems via USB, Ethernet, serial or wireless, depending on the physical connections available. The AirChip itself has a simple RS232 output, so additional hardware will be required for anything but a direct RS232 interface (to a Raspberry Pi GPIO, for example).

Rotronic DLL

The Rotronic DLL provides a link between Rotronic devices and your software program (as well as our HW4 software). The DLL allows you to call all the functions within our devices that are accessible via our software. We have several example packages to make developing your own systems easier, including:

– C++
– Visual Basic
– LabView
– Excel

The DLL can be integrated into wider software systems if you have sufficient technical know-how. For example, using ctypes in Python allows a Windows DLL to be integrated, and the resulting Python programs can then be used cross-platform (Windows, Mac, Linux etc.); a rough sketch of this approach is shown below.
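
The sketch below shows only the generic ctypes mechanism, not the actual Rotronic DLL API: the DLL filename, function name and signature are hypothetical placeholders, so refer to the documentation supplied with the DLL and example packages for the real exports.

import ctypes

# Load the DLL ("rotronic.dll" is a placeholder name, not the real file name).
lib = ctypes.WinDLL("rotronic.dll")

# Hypothetical export: declare argument and return types before calling it.
lib.ReadValue.argtypes = [ctypes.c_int]
lib.ReadValue.restype = ctypes.c_double

# Read a value from an illustrative channel 0 and print it.
print(lib.ReadValue(0))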

This approach is typically used when integrating our HC2 range of probes via our AC3001 Probe-USB converter cable. This way you can utilise our highest accuracy probes in a simple and efficient manner without any loss of accuracy due to digital-to-analogue conversions. It is also possible to quickly add the measured values into your existing projects. This is how our HygroGen2’s Autocal system communicates with the Rotronic probes during automated calibration and adjustment runs.

The example programs and the DLL itself can be downloaded here

If you require support integrating our sensors into your systems please do not hesitate to contact us!

Direct Device Interface

In certain situations utilising our DLL may not be appropriate for your project, so it is also possible to communicate directly with AirChip3000 devices, bypassing the DLL and using direct protocol commands. This is often a far simpler method and is more commonly used when integrating with industrial systems.

With Ethernet and serial devices, communication is very easy using a terminal program (e.g. Putty) or directly from your Linux terminal (for USB, some extra steps are required, explained at the end of this article).

1. Connecting to Rotronic devices via Putty (!!! USING USB? READ THE NOTE AT THE BOTTOM OF THIS POST !!!)

Firstly, you simply need to connect to the relevant COM port or IP address and send your commands. Serial interface settings are detailed below. For Ethernet, simply use a raw connection on port 2101, or use Telnet on port 2001 (you will need your device’s IP address).

Step 1 – Setup Serial Settings in Putty


Step 2 – Force Echo On / Line Editing
I strongly recommend changing the Terminal settings to Force Echo (so you can see what you type and edit it)…


Step 3 – Connect
Now simply open your session…


All AirChip devices will respond to the command below; an example response from an HC2-S probe is shown.

Sent Command
{ 99rdd}

Return String
{F00rdd 001; 36.30;%rh;000;=; 24.30;°C;000;=;nc;—.- ;°C;000; ;001;V2.0-2;0061176056;HC2 ;000;C

Explanation (“;”-separated values)
{
F = Device Type
00 = RS485 address
rdd = command
001 = Device type

36.30 = value 1
%rh = value 1 units
000 = value 1 alarm condition
= = trend

24.30 = value 2
°C = value 2 units
000 = value 2 alarm condition
= = trend

nc = calculated value selected
—.- = calculated value
°C = calculated value units
000 = calculated value alarm condition
= calculated value trend

001 = hardware version
V2.0-2 = firmware version
0061176056 = serial number
HC2 = device name
000 = sensor alarm
C = checksum
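
Based on the field order above, the reply can be parsed with a few lines of Python. This is a minimal sketch: the field indices follow the breakdown given here, but verify them against your own device’s output.

# Parse an AirChip rdd reply into its ";"-separated fields.
reply = '{F00rdd 001; 36.30;%rh;000;=; 24.30;°C;000;=;nc;---.- ;°C;000; ;001;V2.0-2;0061176056;HC2 ;000;C'

fields = reply.split(";")
humidity = float(fields[1])            # value 1
humidity_units = fields[2].strip()     # value 1 units
temperature = float(fields[5])         # value 2
temperature_units = fields[6].strip()  # value 2 units

print(humidity, humidity_units, temperature, temperature_units)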

Important Note! Using USB interface with Putty

By default, all Rotronic USB interface cables will link to the Rotronic driver and try to use the DLL. However, if you configure the cable as a Virtual COM Port you can use the simple serial connection method described above! So, as you can see, every device connection type can be interfaced using this method 🙂

To do this you need to force Windows to use the standard FTDI driver and set up the Virtual COM Port.

Step 1 – Install FTDI Drivers

Select the relevant drivers for your OS from this page: http://www.ftdichip.com/Drivers/D2XX.htm

Step 2 – Force Windows to use new driver

Go to device manager (Control Panel, System, Device Manager)

1 – Click Update Driver
2 – Select Browse my computer for driver software
3 – Choose ‘Let me pick from a list’
4 – Click Have Disk
5 – Go to the FTDI folder and select ftdibus.inf
6 – Select the USB Serial Port

Now you will see a new USB Serial Port in Device Manager under Ports (COM & LPT); right-click it and select Properties. Ensure the port settings are as below.

Baud rate : 19200
Data bits : 8
Parity : none
Stop bits : 1
Flow Control : none

You can now use the Virtual COM Port in Putty or other projects.
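
The same Virtual COM Port can also be opened from a script rather than Putty. Below is a minimal sketch using the pyserial package with the settings listed above; the COM port number is an example, so adjust it to your setup.

import serial  # pyserial package

# Open the Virtual COM Port with the settings above: 19200 baud, 8 data bits,
# no parity, 1 stop bit, no flow control.
port = serial.Serial("COM5", baudrate=19200, bytesize=8,
                     parity=serial.PARITY_NONE, stopbits=1, timeout=1)

# Send the same read command used earlier and print the raw reply.
port.write(b"{ 99rdd}\r")
print(port.read(200))
port.close()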

In my experience, for bespoke software packages targeting a single device type, the terminal-style connection is very simple.

For example, a simple Python script to communicate with an Ethernet device is shown below…

import socket
import telnetlib

try:
    # Connect via Telnet to the device's IP address on port 2001 (0.5 s timeout).
    session = telnetlib.Telnet("192.168.1.1", 2001, 0.5)
except socket.timeout:
    print("socket timeout")
else:
    # Send the read command and wait for the end-of-reply prompt.
    session.write("{ 99RDD}".encode("ascii") + b"\r")
    output = session.read_until(b"\r\n\r\n#>", timeout=0.5)
    session.close()
    print(output)

We will look at direct interface to the AirChip and available protocol options next time!

Comments or queries – let us know!!

Dr. Jeremy Wingate
Rotronic UK

CO2 Monitoring in the Beverage Industry

The Carbonating Process

Everybody loves a refreshing sparkling drink in the summer heat. CO2 not only brings the bracing sparkling effect to your drink but also helps to preserve the beverage: a chemical reaction of CO2 and water forms carbonic acid, which has an antibacterial effect. All well-known soft drinks come with the right fizz.

The beverages are treated with a carbonating process just before the final bottling or canning. Carbonating systems mainly consist of a booster pump, a CO2 saturator, a carbonating tank and an optional CO2 analyser to check the carbonic acid content of the final product.

With the aid of a booster pump, the beverage mixture is conveyed to the saturator, which works according to the Venturi principle. An optimising control keeps the flow velocity through the saturator within a constant working range. This generates a partial vacuum at the smallest cross-section of the saturator, which causes a reduction of the pressure level. This suction effect then mixes the CO2 with the beverage liquid. The brief increase of the flow velocity guarantees a fine distribution of the gas and homogeneous mixing.

The process essentially depends on the tank pressure, which has to be set slightly higher than the saturating pressure of the specific product. Right after that, the drink is ready to be bottled automatically to preserve its texture.


CO2 saturator in a carbonating stage of a bottling line

Why the need to monitor CO2 in a beverage plant?

Carbonating processes use most of the CO2 in the beverage industry, but besides that, the gas also occurs during fermentation and is used for refrigeration, so CO2 is omnipresent in such facilities.

High concentrations of CO2 in closed areas where workers attend to their jobs can become a lethal risk. Excessive CO2 levels can lead to bad headaches, drowsiness, unconsciousness and even sudden death. A CO2 level above 5000 ppm is considered alarming. The gas can be recognised neither by its odour nor by its visual appearance. Soft-drink factories and breweries therefore require an accurate CO2 control and alarm system to maintain their high standard of operational safety.


To assure hygienic conditions and to reduce the risk of CO2 incidents, bottling lines that fill carbonated drinks are often operated in separate areas of a factory. The controlled loss of CO2 during the bottling or canning of sparkling drinks is minimal, but the amount adds up considering that industrial lines are able to fill up to 30,000 bottles an hour. With each filling, a tiny amount of CO2 escapes into the surrounding atmosphere.

Factories require large amounts of CO2, which is delivered and stored in gas cylinders. During transport or storage there is always the risk of a small crack occurring and gas escaping unnoticed. Drinks that are not meant to be carbonated, such as beer or wine, also emit CO2 during the fermentation process, and this gas needs to be released in a controlled way. Here too, leakage can be a danger, and CO2 sensors help to keep control of the atmosphere.

This small insight shows how beverage manufacturers depend on reliable CO2 monitoring systems!

Candice – Sales Support