

 

How to approach a machine vision project — based on practical experience from engineering to delivery

Anyone who has closely experienced multiple machine vision projects, from simple presence detection to complex inline inspections in high-speed production environments, will recognize a recurring pattern. Vision systems rarely fail because a component is not good enough. They fail because the problem has not been defined correctly, because assumptions have not been validated, or because choices were made in the wrong order.

A machine vision system is not a sum of separate components. It is a chain in which product, positioning, lighting, lens, camera and software directly influence one another. Sensor size, pixel size and field of view determine what can physically be captured, while lighting and exposure time determine what actually becomes visible. A small deviation in one element affects the rest of the system. This often only becomes visible during integration or even in production, when adjustments are complex and costly. That is why a vision project should not be approached as a selection of hardware, but as a structured engineering process. Based on practical experience, the difference between a stable and an unstable system almost always lies in the order in which decisions are made.

 

Machine vision project approach: start with the product and variation

The first step in every vision project is to fully understand the product. In practice, this is often underestimated. A limited number of samples are examined and assumptions are then made about the entire production.

In reality, variation is always present.

Metal parts vary in reflectivity due to small differences in machining or material. Plastic products can show batch-to-batch variation in color or surface. Organic products are by definition not identical. Even packaging shows variation in print, deformation or positioning. A vision system designed on the basis of ideal samples will show unstable behavior in production. That is why from the start you need to work with a representative set of products: good parts, edge cases and known defects. At least equally important is defining what is considered a defect. Without a clear defect definition and defect catalog, discussion will always arise during validation.

The key question in this phase is not whether something is visible, but whether it can be consistently and reproducibly distinguished from a good product.


In machine vision, the inspection specification determines both system choice and hardware selection

After understanding the product, it must be defined exactly what the system is supposed to do. This seems like a logical step, but in practice it is often defined too broadly. A presence check is a fundamentally different problem from cosmetic defect detection. Reading a code places different demands than measuring a dimension. Classification differs again from both. 

These differences have a direct impact on:
• required resolution
• field of view and pixel size
• type of lighting
• tolerances
• software complexity
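The link between inspection specification and required resolution can be made concrete with a back-of-envelope calculation. The sketch below assumes a common rule of thumb that the smallest defect should span at least 3 pixels; the field of view, defect size and pixel counts are illustrative numbers, not from a specific project.

```python
import math

def required_pixels(fov_mm: float, smallest_feature_mm: float,
                    pixels_per_feature: int = 3) -> int:
    """Pixels needed along one axis so the smallest feature spans
    at least `pixels_per_feature` pixels (a common rule of thumb)."""
    return math.ceil(fov_mm * pixels_per_feature / smallest_feature_mm)

# Example: 120 mm field of view, 0.2 mm smallest defect, 3 px per defect
print(required_pixels(120.0, 0.2))  # 1800 px along that axis
```

A calculation like this makes visible why widening the inspection specification later (for example, halving the smallest defect that must be found) doubles the required resolution and can invalidate an already-selected camera.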

What often goes wrong is that the inspection specification is expanded during the project. At that point, a fundamental problem arises: the hardware has already been chosen. A system designed for simple detection is rarely suitable for fine-grained inspection without modifications.

Therefore, in practice the rule is: the inspection specification must be stable before selecting the camera, lens and lighting hardware.

Changes after that almost always lead to compromises or redesign.


Inspection position and positioning determine the stability of the vision system

The inspection position is one of the most decisive factors for the success of a vision system. Not the camera, but the reproducibility of the image determines reliability. A stationary product offers maximum control: lighting can be set optimally and there is no motion blur, so a longer exposure time is possible. For moving products, the requirements immediately become more demanding. The speed determines the required exposure time and therefore the light intensity and the risk of motion blur. But even more important than speed is the stability of the movement.

A product that is positioned identically in every cycle is predictable. Variations in position, rotation or height cause variation in the image. That variation must be compensated somewhere. The better the mechanical positioning, the simpler and more robust the vision solution. In practice, the rule is: stability starts mechanically, not in software.
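How much mechanical positioning variation matters can be expressed directly in image terms. The sketch below assumes illustrative numbers (a 120 mm field of view on a 2448-pixel-wide sensor and ±1.5 mm of positioning variation); the point is only to show how millimeters of mechanical play translate into pixels the software must then search over or compensate for.

```python
def position_variation_px(variation_mm: float, fov_mm: float, sensor_px: int) -> float:
    """How many pixels the product shifts in the image for a given
    mechanical positioning variation."""
    mm_per_px = fov_mm / sensor_px
    return variation_mm / mm_per_px

# +/- 1.5 mm positioning variation, 120 mm FOV on a 2448 px wide sensor:
shift = position_variation_px(1.5, 120.0, 2448)
print(round(shift))  # about 31 px of image shift to absorb in software
```

Halving the mechanical variation halves this search range, which is usually far cheaper and more robust than absorbing it algorithmically.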


Environmental factors and lighting in machine vision systems

In addition to positioning and movement, environmental factors play an important role in the stability of a vision system.

Ambient light can vary due to daylight or reflections and directly affects contrast. In many applications it is therefore necessary to control the lighting and limit unwanted influences. To what extent shielding is required strongly depends on the chosen lighting and the contrast mechanism being used. In some setups, a dominant, well directed light source is sufficient, while in other cases physical shielding is needed to guarantee stable imaging.

In addition, contamination and process influences often play an important role. In many industrial applications, water, oil, dust or product residue may be present, depending on the process and the environment. These can deposit on optical surfaces such as protective glass, lighting and the lens, and thereby affect contrast, sharpness and reproducibility, especially with smaller pixel sizes.

Even when these influences are not continuously present, they still have to be taken into account in the design. A system that only works stably under ideal conditions will sooner or later cause problems in production. That is why robust solutions consider not only protection, but also how cleaning and maintenance can be carried out without disturbing the system. Accessibility and replaceability of protective parts are explicit points of attention here.

Electrical influences such as grounding and interference are often underestimated, but can lead to unstable triggering or noise in images.


Choosing the camera, lens and lighting: the right order in machine vision

Only when the product, inspection and environment are clear does it make sense to select hardware. The order in which this is done is essential and is often applied incorrectly in practice.

The camera is selected first, on the basis of resolution, field of view, pixel size and speed: this defines how much detail must be captured and within what time. The chosen interface, such as GigE, USB3 or CoaXPress, also determines the available bandwidth and latency toward processing. Only then is the lens selected. The lens must be able to optically achieve this resolution over the full field of view and must match the sensor size. This is where things often go wrong: a lens with insufficient optical resolution fundamentally limits the system. Detail that is lost optically cannot be restored in software.

Only after that is the lighting determined. The lighting must create the right contrast within the geometry and limitations of the camera and lens, and provide sufficient intensity for the selected exposure time.

This leads to an important principle:
• The camera determines what is required
• The lens determines what is optically feasible
• The lighting determines what becomes visible

All three must be in balance.
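The camera/lens balance can be sanity-checked numerically: the sensor's pixel size sets a Nyquist frequency, in line pairs per millimeter, that the lens must resolve with adequate contrast over the full field of view. The sketch below uses this rule of thumb with illustrative figures (a 3.45 µm pixel and a lens rated at 120 lp/mm), not vendor data.

```python
def sensor_nyquist_lp_per_mm(pixel_size_um: float) -> float:
    """Nyquist frequency of the sensor: one line pair spans two pixels."""
    return 1000.0 / (2.0 * pixel_size_um)

def lens_is_adequate(lens_resolution_lp_mm: float, pixel_size_um: float) -> bool:
    """A lens should resolve at least the sensor's Nyquist frequency,
    ideally with margin, over the full field of view."""
    return lens_resolution_lp_mm >= sensor_nyquist_lp_per_mm(pixel_size_um)

# A 3.45 um pixel demands roughly 145 lp/mm at the sensor:
print(round(sensor_nyquist_lp_per_mm(3.45)))  # 145
print(lens_is_adequate(120.0, 3.45))          # False: the lens limits the system
```

This is the numeric form of the principle above: a lens that cannot reach the sensor's Nyquist frequency wastes camera resolution, and no software step downstream can recover the lost detail.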

The selection of camera, lens, lighting and software is further elaborated in detail in separate articles. 


Test setup and validation of machine vision systems

The test setup is the phase in which the system is actually proven. This is where assumptions are tested and choices are validated.

In this phase it becomes clear:
• whether the chosen contrast mechanism works
• whether the lens delivers sufficient detail
• how large the influence of product variation is

What often becomes clear is that small variations have a major impact. What works under ideal conditions may prove unstable under realistic conditions. That is why the test setup must be representative of production. What does not work stably here will not work stably later either.
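One simple way to make the test-setup verdict objective is to measure the same feature on a representative set of good parts and known defects, and check that the two populations stay separable with margin. The function and numbers below are an illustrative sketch, not a prescribed metric.

```python
def separation_ok(good_values, defect_values, min_gap: float) -> bool:
    """Crude separability check: the worst good sample and the worst
    defect sample must be at least `min_gap` apart in the measured feature."""
    return min(defect_values) - max(good_values) >= min_gap

good = [0.12, 0.15, 0.11, 0.18, 0.14]      # measured feature on good parts
defects = [0.42, 0.55, 0.38, 0.47]         # same feature on known defects
print(separation_ok(good, defects, 0.10))  # True: worst-case gap is 0.20
```

If edge cases from production shrink this gap toward zero, the contrast mechanism, not the decision threshold, needs rework.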


Cycle time, exposure time and motion blur in vision systems

The cycle time determines how much time is available for image acquisition, processing and communication within the system and through the selected interface, but it is separate from the physical image formation itself. Product movement speed, by contrast, has a direct influence on image quality. As speed increases, the exposure time must become shorter to prevent motion blur. This immediately places demands on the intensity and stability of the lighting, as well as on the synchronization of triggering.

Motion blur is a physical effect that cannot be corrected in software. When detail is lost due to motion during exposure, that information is no longer available for further processing. In practice, this means that movement speed and exposure time must be determined together, independently of the cycle time. A system may have sufficient processing time, yet still fail because of insufficient image quality caused by motion blur.
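The relation between movement speed and exposure time described above can be sketched as a simple budget: the product may move at most a fixed number of pixels during the exposure. The 1-pixel blur budget and the numbers below are illustrative assumptions.

```python
def max_exposure_us(speed_mm_s: float, fov_mm: float, sensor_px: int,
                    blur_budget_px: float = 1.0) -> float:
    """Longest exposure (in microseconds) for which the product moves no
    more than `blur_budget_px` pixels during the exposure."""
    mm_per_px = fov_mm / sensor_px
    return blur_budget_px * mm_per_px / speed_mm_s * 1e6

# Product at 500 mm/s, 120 mm FOV on a 2448 px sensor, <= 1 px blur:
print(round(max_exposure_us(500.0, 120.0, 2448)))  # 98 us
```

An exposure window this short is why fast-moving products typically demand intense, often strobed lighting and precise triggering, independently of how much processing time the cycle allows.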


FAT and SAT: validation of machine vision systems in practice

After the engineering phase comes validation, and this is the moment when a vision system has to prove itself in practice. In many projects, this phase is seen as a formal step, but in reality this is where it becomes clear whether all earlier choices actually hold up.

The Factory Acceptance Test (FAT) takes place under controlled conditions. Here the system is tested with known products, according to the predefined specifications. This is the moment when it is checked whether the system functions as intended: are the correct characteristics detected, is the stability sufficient, are the acceptance criteria met? It is important that the FAT is not an ideal demo. If only perfect samples are used or variation is limited, a distorted picture of performance is created. A good FAT approaches reality as closely as possible, including variation in product and positioning.

After the FAT follows integration into the machine and thus the transition to the Site Acceptance Test (SAT). This is the phase in which the system is tested in the actual production environment. Here all factors come together that in earlier phases were only partially present: environmental influences, mechanical variation, operator interaction and process fluctuations.

In practice, this is the moment when systems distinguish themselves. A solution that was stable during the FAT may show instability during the SAT if the boundary conditions are not fully controlled. That is why the SAT is not a formality, but the real validation of the system.

The goal of this phase is not only to demonstrate that the system works, but that it continues to function reliably under all normal production conditions.


Common mistakes in machine vision projects

When vision projects show problems, the causes are rarely unique. In practice, the same patterns keep returning.

  • One of the most common causes is underestimating lighting. Without sufficient and stable contrast, every form of detection becomes uncertain, especially when variations occur in product or environment.
  • In addition, a lens is regularly chosen that does not optically achieve the required resolution. As a result, detail is lost before it reaches the sensor, and that can no longer be corrected.
  • Another recurring problem is working with too limited a set of samples. Systems designed on the basis of ideal conditions prove not to be robust enough in production.
  • Mechanical instability is also a common cause. Variations in positioning translate directly into variations in the image, which puts pressure on the reliability of the inspection.
  • Changing inspection specifications after the hardware has already been selected also leads to problems. At that point, a mismatch arises between what the system must do and what is physically possible.
  • Finally, software is often used to compensate for structural problems. This leads to complexity without solving the underlying cause.

 

What all these causes have in common is that they do not arise from a lack of technology, but from the order in which decisions were made.


Success factors for a stable machine vision system

Successful vision projects are distinguished not by complexity, but by structure and discipline in the approach.

The foundation lies in a good understanding of the product and the variations that occur in production. From there, the inspection is clearly and unambiguously defined, including clear acceptance criteria. Next, the inspection position is designed in such a way that the product is positioned reproducibly. This forms the basis for stable imaging. Only after that follows the hardware selection, in the right order: first the camera, then the lens and finally the lighting.

When these steps have been completed correctly, a stable image is created. This is the moment when software is developed, not earlier.


Conclusion: machine vision as systems engineering

Machine vision is not a choice of components.

It is systems engineering. The quality of a vision system is determined by how well the problem is understood and in what order choices are made. Anyone who starts with hardware tries later to solve problems that could have been prevented earlier. Anyone who starts with the product and the physics of image formation lays the foundation for a robust system.

Ultimately, the only relevant measure is not whether a system works during a demo, but whether it continues to function stably in production — under variation, over time and without continuous adjustment.

 

Need help with your machine vision project?

Do you still need some extra help with your machine vision project? Then feel free to contact our vision specialists.