Turbidity is a measure of the cloudiness of water and is used as a key water quality indicator. It quantifies the extent to which suspended particles disrupt light passing through a liquid. Muddy water, for example, contains many small soil particles that scatter light, making it cloudy and highly turbid, whereas spring water contains few particles and so is clear and has low turbidity. Turbidity is one of the most commonly used indicators of contamination in water. Turbid water may contain silt, sand, mud or algae, all of which can be associated with dangerous bacteria and pathogens that are difficult to measure individually. Not every contributor to turbidity is harmful, but in drinking water a raised reading does indicate a failure of water treatment.
How is turbidity measured?
Turbidity can be measured in a number of ways, including Secchi disks, turbidity tubes and electronic turbidity meters. You can read our other post here, which summarises each method. The most appropriate method depends on your application: how important accurate readings are, the level of turbidity in your sample and whether you require regulatory approval. An electronic meter is recommended when high precision at low turbidity levels is required.
What complications are there when measuring low levels of turbidity?
Measuring turbidity at very low levels is difficult to do accurately and reproducibly. You will need an accurate nephelometric turbidity meter (one that measures light scattered at a 90° angle), such as the Hach TL23 series benchtop meter or the Hach 2100Q handheld meter. Even with an accurate meter, however, low level turbidity measurements remain challenging once user method and measurement technique are taken into account.
Measurements can be affected by a number of factors, such as:
- Bubbles in your sample
- Contamination of the sample or the outside of the sample cell
- Preparation of the calibration standards themselves
- Stray light
- Variation between sample cells
A few steps we recommend when taking readings are:
- Proper sample cell care – avoid over handling the cells and use a lint free cloth for cleaning
- Regular accurate calibration with primary standards – either prepare your own standards from formazin or use stabilised formazin standards (Hach Stablcal standards, for example, are considered primary standards by the USEPA). Stabilised standards require no preparation other than mixing – all standards except the <0.1 NTU dilution water must be gently inverted prior to calibration. See below for the issues surrounding calibration with very low level standards
- Verification with secondary standards – such as Gelex standards, which are used to check the calibration of the turbidity meter between your primary calibrations. Once the instrument has been calibrated with primary standards, place each secondary standard in the meter and record its value. These values are then checked periodically to verify that the calibration has not changed significantly. Verify your instrument's reading in the range where you usually measure your samples
- Back to base calibration – we also offer a back to base calibration service for your meter
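As a rough illustration of the standard preparation mentioned above, the dilution follows the familiar C1V1 = C2V2 relation. This is a minimal sketch, assuming a 4000 NTU formazin stock (a common commercial concentration) – check your own stock's labelled value before preparing standards:

```python
def stock_volume_ml(stock_ntu, target_ntu, final_volume_ml):
    """Volume of formazin stock needed to prepare a diluted standard,
    using the dilution relation C1*V1 = C2*V2."""
    return target_ntu * final_volume_ml / stock_ntu

# e.g. preparing a 20 NTU standard in a 100 mL flask from a 4000 NTU stock:
print(stock_volume_ml(4000, 20, 100))  # 0.5 -> 0.5 mL of stock, made up to 100 mL
```

Note that the quality of the dilution water dominates the result for very low value standards, which is part of the reason low level calibration is discouraged below.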
How do I calibrate at low levels?
Although it seems sensible to calibrate with very low level standards when you are measuring very low turbidity samples, this can actually lead to inaccurate calibration. The main reason is that standards are easily contaminated, and at very low levels a given amount of contamination causes a much larger percentage error. At higher turbidity levels (say 1 NTU and greater), standards can be accurately prepared as long as good technique is applied. Much of the error in preparing standards below 1 NTU can be attributed to the turbidity naturally present in the dilution water: for low turbidity standards this has a huge effect, but for higher value standards the effect is minimal.
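The arithmetic behind that percentage error point is simple enough to sketch. The 0.05 NTU contamination figure below is an illustrative assumption, not a measured value – the point is that the same absolute contamination is fifty percentage points of error on a 0.1 NTU standard but negligible on a 20 NTU one:

```python
def relative_error_pct(contamination_ntu, standard_ntu):
    """Percentage error introduced into a standard by a fixed
    absolute amount of contamination."""
    return 100 * contamination_ntu / standard_ntu

# the same hypothetical 0.05 NTU of contamination:
print(relative_error_pct(0.05, 0.1))   # 50.0  -> on a 0.1 NTU standard
print(relative_error_pct(0.05, 20.0))  # 0.25  -> on a 20 NTU standard
```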
One way around having to prepare these low level standards is to use a one-point calibration algorithm. In nephelometric turbidity meters there is a linear relationship between light scatter and turbidity from 0.012 to 40 NTU, so you should calibrate at one point in this range (typically 20 NTU). A zero point reading is then taken with the instrument's light turned off, and a linear calibration curve is drawn between those two points.
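The one-point procedure above can be sketched as a simple linear fit through two points. This is an illustration of the idea rather than any manufacturer's actual algorithm, and the detector counts used are made-up numbers:

```python
def make_calibration(signal_at_standard, signal_dark, standard_ntu=20.0):
    """One-point linear calibration: the dark (light-off) reading fixes the
    zero point and the single standard fixes the slope. Only valid within
    the meter's linear range (roughly 0.012 - 40 NTU)."""
    slope = standard_ntu / (signal_at_standard - signal_dark)

    def to_ntu(signal):
        return slope * (signal - signal_dark)

    return to_ntu

# hypothetical detector counts: 5 counts with the light off,
# 4005 counts on the 20 NTU standard
to_ntu = make_calibration(signal_at_standard=4005, signal_dark=5)
print(to_ntu(205))  # 1.0 -> a 205-count reading maps to 1.0 NTU
```

Because the relationship is linear in this range, calibrating accurately at 20 NTU gives usable readings well below 1 NTU without ever preparing a contamination-prone low level standard.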
Further reading on turbidity:
You can shop our turbidity range here. If you have any questions about measuring or calibrating for turbidity, contact us at firstname.lastname@example.org.