We realize that much of the information surrounding thermal and night vision is complex and technical. On this page we have tried to accurately and plainly convey what we believe to be the most important material regarding these tools. The basis for these explanations has been gathered from the websites of reputable manufacturers, academic papers, and government documents and agencies.
Microbolometer (thermal sensor) resolution is usually given in pixels by manufacturers. Just as with digital cameras, the number of pixels a sensor has is directly related to how detailed an image can be. This measurement is standardized (i.e. 640×480) and can be found on most manufacturer websites and in the product descriptions on this site. It is important to understand that manufacturers will also give a pixel count for their displays; these are usually higher than the sensor's, since the display shows the digitally magnified image and producers want the best possible clarity presented to the eye.
The greater the number of pixels, and the more tightly they are packed, the greater the resolution. This technical information is seen in practice when detection, recognition, and identification ranges are discussed. Additionally, given the same lens, units with a higher pixel count will have a broader field of view than those with a lower count.
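The field-of-view relationship above can be sketched with a little trigonometry: the sensor's active width is pixel count × pixel pitch, and the horizontal FOV follows from that width and the lens's focal length. The numbers below (a 12-micron core behind a 50 mm objective) are illustrative assumptions, not the specs of any particular optic.

```python
import math

def horizontal_fov_deg(pixels: int, pitch_um: float, focal_mm: float) -> float:
    """Horizontal field of view (degrees) for a given sensor width and lens."""
    sensor_width_mm = pixels * pitch_um / 1000.0  # active sensor width
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

# Same 12 um pitch and 50 mm objective, different pixel counts:
print(round(horizontal_fov_deg(640, 12, 50), 1))  # -> 8.8 (degrees)
print(round(horizontal_fov_deg(384, 12, 50), 1))  # -> 5.3 (degrees)
```

As the sketch shows, a 640-wide core sees a wider slice of the scene than a 384-wide core through the same lens.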
Detection range is the distance at which the object being viewed fills at least two pixels; something with a heat signature shows up in the optic, but what it is remains unknown. Recognition range is when the object fills about six pixels or, more practically, when the user can actually determine what the “something” is. Identification range, defined as at least 12 pixels, is when the “something’s” features are visible (e.g. antlers that are still growing, or the different sizes of pigs shuffling around in a group).
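These pixel-count thresholds can be turned into rough range estimates: a target of a given size spans fewer and fewer pixels as distance grows, so the range for each category falls out of the sensor's pixel pitch and the lens's focal length. The inputs below (a 0.5 m target, 50 mm objective, 12-micron core) are illustrative assumptions, not a published spec.

```python
def range_for_pixels(target_m: float, pixels: int, focal_mm: float, pitch_um: float) -> float:
    """Distance (m) at which a target of the given size spans the given pixel count."""
    # pixels on target = target_size * focal_length / (range * pixel_pitch),
    # solved here for range
    return target_m * (focal_mm / 1000.0) / (pixels * pitch_um / 1e6)

# Illustrative numbers: 0.5 m target, 50 mm objective, 12 um pitch
for label, px in [("detection", 2), ("recognition", 6), ("identification", 12)]:
    print(label, round(range_for_pixels(0.5, px, 50, 12)), "m")
```

With these assumptions the sketch prints roughly 1042 m for detection, 347 m for recognition, and 174 m for identification, which matches the intuition that you can detect far beyond what you can identify.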
Keep in mind that while resolution specifications are important, an understanding of detection vs. recognition vs. identification is necessary when choosing an optic. The magnification offered means very little if the image quality is so poor that nothing beyond detection is possible.
Micron – a measurement equal to one-thousandth of a millimeter. This is often used to describe the size of the individual pixels (the pixel pitch) on the sensor of a thermal optic (often referred to as the ‘thermal core’). Broadly speaking, a smaller pitch allows for the use of smaller objectives without the loss of overall resolution. This is advantageous in that the most expensive component of the optic – the germanium objective lens – can be smaller without compromising overall resolution. However, if everything else remains the same between two optics, the one with the larger pixels will generally have better sensitivity, since each pixel collects more thermal energy. Some manufacturers combine a larger objective lens with a more efficient, smaller-pitch core to maximize the benefits of both components, though an increase in price is unavoidable (e.g. N-Vision Halo-LR or Trijicon IR-Hunter MK3).
NETD (Noise Equivalent Temperature Difference)
This is a highly technical term that boils down to the smallest temperature difference between an object and its background that the device can detect. An optic capable of discerning between two things of very close temperature can clearly define more objects. This rating is a useful indicator of how well a thermal device can perform in adverse weather conditions (e.g. fog or rain).
NETD is usually communicated in millikelvin (noted as “mK”), a small unit on the Kelvin scale, a temperature scale used primarily by physicists that starts at “absolute zero” – the total absence of heat energy. The lower the mK rating, the better the thermal sensitivity of the optic, as a smaller temperature difference is required for the optic to differentiate between objects (a scope with a <25mK rating will better define objects of similar temperature than one with a <40mK rating). Normally a lower mK rating will be seen alongside a higher pixel count.
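As a simplified rule of thumb (real performance also depends on optics, atmosphere, and image processing), a temperature difference in the scene is resolvable only when it exceeds the sensor's NETD. The deer-and-brush scenario below is a made-up illustration:

```python
def can_distinguish(delta_t_mk: float, netd_mk: float) -> bool:
    """Rough check: a scene temperature difference (mK) is resolvable
    only when it exceeds the sensor's NETD rating (mK)."""
    return delta_t_mk > netd_mk

# A hypothetical 30 mK difference between a deer and wet brush:
print(can_distinguish(30, 25))  # <25 mK sensor -> True
print(can_distinguish(30, 40))  # <40 mK sensor -> False
```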
Frame Rate (Refresh Rate)
This measurement refers to the number of frames presented to the user each second. Normally this will be given in Hertz (where 1Hz equals 1 frame per second). Since thermal optics present a thermogram rather than the actual scene like a standard optic, there can be a noticeable lag when the user transitions between targets. Faster frame rates indicate that the optic has minimized this issue.
The threshold at which the human eye can detect the brief lag we are referring to falls somewhere between 60 and 75 Hertz. The highest frame rate currently on the market is around 60Hz, and we have not noticed any lag in these units (or in many with a lower Hertz rating).
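The relationship between Hertz and lag is simple arithmetic: the time between frames is the reciprocal of the frame rate. A quick sketch across common thermal refresh rates:

```python
def frame_interval_ms(hz: float) -> float:
    """Time between successive frames, in milliseconds (1000 ms per second / Hz)."""
    return 1000.0 / hz

# Common thermal refresh rates:
for hz in (25, 30, 50, 60):
    print(hz, "Hz ->", round(frame_interval_ms(hz), 1), "ms per frame")
```

At 60Hz a new frame arrives roughly every 16.7 ms, versus 40 ms at 25Hz, which is why higher-Hertz units feel smoother when panning between targets.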
NUC (Non-Uniformity Correction)
Since a thermal optic’s sensor drifts as it continuously absorbs heat information, devices can begin to show the left-overs of previous images, which degrades the image. To counteract this, thermal optics use a process called “NUC-ing” – non-uniformity correction. This happens when a shutter interrupts the infrared radiation passing between the objective lens and the sensor, giving the sensor time to recalibrate against a flat reference. This can happen automatically or manually, depending upon the design of the optic. The process is noted by a quiet clicking noise (the shutter moving into the light’s path) and a momentary freeze of the image.
F-number (Focal Ratio)
For the purpose of evaluating and comparing thermal optics, the focal ratio number (seen as “F1.1” or some other variation like “F1.4”) basically tells us how well the lens brings in light. The lower the number, the more light allowed into the optic. The F-number is not a physical measurement but a ratio: the focal length of the lens divided by the diameter of its aperture. Because light gathering depends on the area of that aperture, a small change in F-number makes a surprisingly large difference in how much energy reaches the sensor.
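The ratio and its effect on light gathering can be sketched directly; the 50 mm figures below are illustrative assumptions, not any manufacturer's spec:

```python
def f_number(focal_mm: float, aperture_mm: float) -> float:
    """F-number = focal length / aperture diameter (dimensionless)."""
    return focal_mm / aperture_mm

def relative_light(f_a: float, f_b: float) -> float:
    """How much more light an f_a lens gathers than an f_b lens;
    gathering scales with aperture area, i.e. with 1/N^2."""
    return (f_b / f_a) ** 2

print(round(f_number(50, 50), 2))          # 50 mm lens, 50 mm aperture -> 1.0
print(round(relative_light(1.0, 1.4), 2))  # F1.0 vs F1.4 -> ~1.96x the light
```

This is why an F1.0 thermal objective is noticeably brighter than an F1.4 one: it passes nearly twice the energy.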
Analogue v. Digital – Function
Traditional night vision technology (analogue) utilizes an image intensifier tube that converts what usable light is available into a format our eyes can register. To be more specific, photons are converted into electrons by a photocathode and are then passed through what is known as a micro-channel plate, which multiplies the electrons and casts them onto a phosphor screen, thereby creating flashes of light perceivable to the human eye (the multiplied electrons becoming photons again). All this is to say that what photons are available are effectively multiplied to the point we can use them to construct an image. (Traditionally, night vision images have been some tint of green because phosphor glows green when struck by electrons, which carry no color information; additionally, the human eye can distinguish more shades of green than of any other color, so more image detail is possible.)
Digital night vision operates by converting the available light information into a digital format that can be presented as an image on a screen. This occurs when an active-pixel sensor (CMOS) or a charge-coupled device (CCD) translates the incoming light waves into small bursts of current that are held by photosensors in order to present a digital image. (Digital night vision does not usually incorporate color, utilizing a black and white spectrum instead. This is because color sensors take up more space than black and white ones, as more room is needed in each pixel to house subpixels for red, green, and blue. Additionally, when monochromatic light hits these pixels it may only activate one subpixel, further degrading resolution as pixels are left only partially illuminated.)
Analogue v. Digital – Pros and Cons
Analogue units generally have a better battery life and are normally lighter and smaller in format as well, allowing them to be mounted to helmets for hands-free use.
If an analogue unit is not auto-gated (meaning it rapidly switches on and off to maximize its photocathode efficiency) and it does not have a lens cover, the tube (specifically the photocathode) has a high chance of being damaged by exposure to bright light. Of note is that this component is tied to the longevity of the night vision unit (most Generation 3 analogue tubes have a life expectancy of 10,000+ hours).
Digital units can easily be upgraded with better IR illuminators, and they are usable during the day with no threat of being damaged. While generally heavier and larger, this variation of night vision technology accommodates a host of features analogue cannot (e.g. recording capability, multiple reticles, easier sight-in methods, and more).
Analogue – Generations
The periods of advancement for analogue night vision are noted as “generations”. These phases have largely been defined by the U.S. Army’s Night Vision and Electronic Sensors Directorate (NVESD).
Generation 0
These devices were used sparingly during the final stages of World War II, mainly by Germany and the U.S. They were “active” devices, meaning they relied on infrared illuminators. These early versions were mostly vehicle mounted (primarily on tanks) due to their size; however, they offered little improvement over unaided eyesight in comparison to later NV iterations.
Generation I
These early “passive” devices relied strictly on ambient light, amplifying it roughly 1,000 times. Developed in the early 1960s, these units were exclusively weapon mounted due to their size and weight.
Generation II
The 1970s saw the introduction of the micro-channel plate (which multiplies electrons) and improved photocathodes. This brought light amplification to around 20,000 times. (Later improvements to these designs have blurred the line between some of these units and Generation III.)
Generation III
During the 1980s another improvement came to the photocathode, this time in construction material (gallium arsenide) rather than a shift in the activation procedure. Additionally, the micro-channel plate was coated with an ion barrier film, which increased the life of the tube (but added a ‘halo’ effect around brighter light sources). These two improvements actually clashed, as the ion barrier impeded electron passage, counteracting much of the improved resolution brought about by the GaAs construction. Battery life was generally less than Gen II offerings, but light amplification was more than doubled.
Generation III+
These units utilize an ‘auto-gated’ power supply system, which regulates the voltage to the photocathode and allows for faster adjustment to changing light conditions. A thinned or removed ion barrier is also noted as a Gen III+ qualification (notated as “unfilmed” by most manufacturers). (Auto-gating can be added to previous generations of night vision; this is denoted with a “+”.)
Resolution – Analogue
For traditional night vision units, the measurement of Line Pairs per Millimeter is used to convey the resolution of the optic – the clarity of the image presented. To measure this, a specialty piece of equipment called a collimator presents several groups of illuminated lines at a set distance. It is advised that an LPM (a.k.a. lp/mm) rating of 25 or higher is sought, as this is roughly the minimum resolution necessary to differentiate a man from similar-sized objects/animals at 100 meters.
FOM – Figure of Merit
This is a formula wherein the signal-to-noise ratio of the optic (the usable light signal reaching the eye divided by the perceived noise) and its resolution are multiplied to give a quick estimation of the clarity of the device. (The SNR is measured empirically, and the resolution must be in the lp/mm discussed above.) Generally speaking, this measurement, while useful, does not perfectly determine an optic’s ‘clarity’, because the imperfections noted in a manufacturing setting are observed under a microscope or by specialized equipment. We view this figure as a quick reference, a threshold for producers, much like the light-transmission percentages quoted for standard optics.
When researching the FOM of a particular unit you will likely find Zones 1, 2, and 3 listed with ranges for different sized imperfections. The imperfections are measured in thousandths-of-an-inch; the zones are laid out like a bullseye target with the center (that which our eye focuses on during use of the device) being “1”, most of the remaining circular area around that being “2”, and the periphery being “3”. This is another reason FOM can be misleading as small imperfections in Zone 3 can lower the rating of the optic but have no real impact on its performance as your eye is not focusing on that area of the image.
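The FOM arithmetic itself is trivial; the tube values below are hypothetical examples (not taken from any particular data sheet) to show how two tubes with different strengths can land on the same figure:

```python
def figure_of_merit(snr: float, resolution_lpmm: float) -> float:
    """FOM = signal-to-noise ratio x resolution in line pairs per millimeter."""
    return snr * resolution_lpmm

# Two hypothetical tubes reaching the same FOM different ways:
print(figure_of_merit(25.0, 64))  # -> 1600.0
print(figure_of_merit(20.0, 80))  # -> 1600.0
```

This is part of why FOM works better as a rough threshold than as a ranking: identical figures can mask very different SNR/resolution trade-offs.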
Resolution – Digital
Digital night vision resolution is a more complicated relationship between individual components: the pixel quantity of the sensor, the size of the sensor, the pixel quantity of the display, and the size and quality of the objective lens. Normally, the pixel quantity of the sensor is what companies advertise to convey a product’s resolution. (Additionally, quality digital units incorporate specific algorithms to maximize the unit’s performance by refining important aspects of the process, such as how sensor pixels interact.)
AMOLED – Active matrix organic light emitting diode; used in the displays of several current digital night vision optics. Use of this technology drains less battery power and speeds up the refresh rate of images.
IR Illuminators
The options that can replace factory IR illuminators are categorized by the infrared wavelength at which they emit light (i.e. 850nm, 940nm, etc.); “nm” stands for “nanometers”, the unit used to measure the wavelength of electromagnetic radiation. The lower the number, the closer it is to the visible portion of the spectrum, with 700nm to 900nm being considered “near infrared” (it can only really be seen if looking directly at the light at close distances). An 850nm light will be useful at longer ranges but gives off more of a dull red glow if you look directly at the emitter at closer distances. A 940nm illuminator casts a broader beam, cannot be used as far, and has less of a glow from the emitter. Mammals cannot see either wavelength, but it has been witnessed by countless hunters (ourselves included) that ‘educated’ animals will be spooked by the dull red glow. (As a side note, current scientific findings suggest that the only animals that can actually “see” infrared light don’t see it with their eyes at all; it is sensed through some other mechanism, since eyes are not structured to properly detect IR waves.)
LASERs (Light Amplification by Stimulated Emission of Radiation)
Lasers have historically been classified by their strength. This is determined from the maximum power the unit can emit at a specific wavelength (nm) and the exposure time involved (how long the human eye will allow the laser to make contact before the blink reflex activates). Power is communicated in milliwatts (mW); the milliwatt rating is largely what classifies the laser, and this classification is based upon the harm the laser could cause to eyes within certain distances. (Red and green are the most common colors for firearm-related equipment, so nanometer wavelengths in the mid-600s and low-to-mid-500s, respectively, are mentioned in product specifications.)
There are limitations on the potency of commercially available lasers, and this is primarily what separates military and civilian devices. The higher the “Class” rating, the more potent the harm the laser can cause (i.e. Class 4 being more harmful than Class 1).
IP Ratings
Most of the devices listed on our page have an IP rating; that is, an Ingress Protection rating. This rating scale is published by the International Electrotechnical Commission and is used to classify the durability many products have against intrusive material. The first digit after “IP” indicates protection against solid objects (dust, tools, fingers); the second digit indicates how well the device holds up to liquids, namely water.
Something like IPX7 (seen on just about anything from Pulsar) indicates that the unit has not been formally tested against solid objects (though this does not mean it cannot handle more than casual use) and that the device can be submerged in one meter of water for up to 30 minutes.
The IP67 rating of the N-Vision Halo series indicates the units are “dust tight” and waterproofed to the same degree described above.
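Reading an IP code mechanically makes the two-digit structure clear; this is a minimal sketch (the wording of the level descriptions is ours, not the IEC's):

```python
def describe_ip(code: str) -> tuple:
    """Split an IP code like 'IP67' or 'IPX7' into its solids and liquids
    ratings; 'X' in either position means that attribute was not tested."""
    solids, liquids = code[2], code[3]
    solid_desc = "not rated for solids" if solids == "X" else "solids level " + solids
    liquid_desc = "not rated for liquids" if liquids == "X" else "liquids level " + liquids
    return (solid_desc, liquid_desc)

print(describe_ip("IPX7"))  # -> ('not rated for solids', 'liquids level 7')
print(describe_ip("IP67"))  # -> ('solids level 6', 'liquids level 7')
```

Both codes above share a liquids digit of 7 (submersion to one meter for 30 minutes); only the solids rating differs.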