
Table of Contents

  1. Should I use a CCD or CMOS Sensor?
  2. Why is CCD lower resolution and higher cost?
  3. Will AIT design and/or build a special version for me?
  4. Where can I learn more about sensor technologies?
  5. Links to TI DSP Information
  6. What “Format” is the AIT100 and what does that mean?
  7. CS Mount vs C mount
  8. Lens focal length
  9. Field of view (FOV)
  10. F-number or Aperture ratio
  11. Resolution/Depth of Field
  12. Minimum Object Distance (MOD)
  13. Handy Formulas

Should I use a CCD or CMOS Sensor?

The CCD's primary advantage derives from its better "fill factor", meaning that more of each pixel is photosensitive (typically 100 percent in CCDs vs. 60-70 percent in CMOS sensors).  The result is that the CCD camera has better sensitivity and lower noise than the CMOS part.  If you are operating in low light conditions (indoors) or need the optimum image quality, CCD is your best option.  If you have good lighting and can live with a bit of background noise, the CMOS product gives you more pixels at a lower cost.

The CCD part and its considerable support circuitry also consume more power than the CMOS part does. In the AIT product, the DSP and its support circuitry typically dominate the power budget, so the total power consumed by the CMOS product is not much less than that of the CCD product.

Back to Top

Why is CCD lower resolution and higher cost?

Three reasons:

1) The part itself is a lot more expensive.  In the CCD camera, the single most expensive part is the CCD sensor itself.  In fact, it can well be more expensive than ALL the other electronics combined.  The primary reason for this is that CMOS sensors are built on process lines which build conventional CMOS integrated circuits in immense volumes; CCD sensors are built with a much less common process.

2)  The interface circuitry for the CMOS part is a lot simpler.  CMOS parts run on one logic voltage (3.3V, typically) and have on-board A/D converters.  It is essentially a logic part insofar as interfacing is concerned.  CCD parts use various obscure and critical voltages which have to be generated and have to be right.  Believe me.  So we have to design and include these oddball supplies, add A/D converters, etc.  Some of this circuitry is pretty hairy.  It works fine once you get it going, but getting there takes more work.

3) We spent a lot more engineering time on the CCD product and have a bigger manufacturing exposure.  If we drop a CCD sensor, hundreds of dollars go into the trash; the CMOS sensor only causes momentary anguish if we have to throw one away. 

Back to Top

Will Apollo Imaging Technologies design and/or build a special version for me?

Short answer: yes, if it makes sense. Our customers are typically OEMs; our product goes into their product.  It is not at all unusual for an OEM to want a "special" that better fits his needs. It can be as simple as preloading his code, or as complex as a custom design.  Or we may write some code for you.

As you can imagine, this costs money.  If it is something that is unique to you, you will have to pay the costs.  If it is something that we have use for, we may share some or all of the costs, depending on the situation. 

If you have a question, ask: SalesEmail

Back to Top

Where can I learn more about sensor technologies?

There are a number of very good sites and articles on the Web covering CCD and CMOS technologies. We try to keep the links current, but Web sites are dynamic; they come and go.  If you find any of these no longer in existence, we would appreciate an e-mail at CompanyWebmaster.  Likewise, let us know if you find a site that you think we should add.

A more than superficial discussion of optics and lenses is beyond the scope of this document.  Consult an optical designer or read a good optics book.  Your optics supplier can be of great help; Melles Griot is a good supplier of all things optical, and has a good optics discussion both in their catalog and on their web site http://www.mellesgriot.com/products/optics/toc.htm.  Edmund Industrial Optics http://www.edmundoptics.com is another good supplier and reference.

Our objective here is to pass on what we have learned and give you a starting point.

Links that you may find useful are:

http://www.ccd.com/ccdu.html

http://www.kodak.com/US/en/digital/ccd/appNotes.shtml

http://www.rdrop.com/~cary/html/machine_vision.html

http://www.photocourse.com/

Back to Top

Links to TI DSP Information

http://dspvillage.ti.com/docs/catalog/dspplatform/overview.jhtml?templateId=5154&path=templatedata/cm/dspovw/data/c6000_ovw

Back to Top

What “Format” is the AIT100 and what does that mean?

The AIT100 uses a ½” format sensor.   Sensor dimensions and “formats” are, approximately: 

Format                      1/3”    1/2”    2/3”     1”
Horizontal dimension (mm)    4.8     6.4     8.8    12.8
Vertical dimension (mm)      3.6     4.8     6.6     9.6

The above numbers are approximate; the “1/2 inch” sensor in the AIT100 is actually 6.83mm x 5.45mm. 

A lens specified for a larger format can be used on a smaller format sensor; in fact, it may provide better image linearity near the edges. A lens specified for a smaller format will leave the corners of the sensor unfilled.
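
If you need these format numbers programmatically, a simple lookup works. The following is a minimal Python sketch (the names are ours, the values come from the table above), including a check of the lens-format rule just described:

    # Approximate sensor dimensions (mm) by optical format, from the table above.
    FORMATS = {
        '1/3"': (4.8, 3.6),
        '1/2"': (6.4, 4.8),
        '2/3"': (8.8, 6.6),
        '1"':   (12.8, 9.6),
    }

    def lens_covers_sensor(lens_format, sensor_format):
        # A lens specified for a larger (or equal) format fills the sensor.
        lens_w, lens_h = FORMATS[lens_format]
        sens_w, sens_h = FORMATS[sensor_format]
        return lens_w >= sens_w and lens_h >= sens_h

    print(lens_covers_sensor('2/3"', '1/2"'))  # True: a larger-format lens is fine
    print(lens_covers_sensor('1/3"', '1/2"'))  # False: corners will not be filled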

Back to Top

CS Mount vs C mount. 

Both CS mount and C mount lenses are industry standards.  They differ in that the flange-to-focal-plane distance of the CS mount is 5 mm shorter than that of the C mount. Apollo Imaging Technologies cameras use CS-mount lenses.  C-mount lenses can be adapted for use with a simple 5 mm spacer.

Back to Top

Lens focal length.   

Focal length relates to magnification.  The equation for focal length is:

F = (S/W)*D

Where:

    F = Focal length

    S = Width of Sensor

    D = Distance to Object

    W = Width of Object

S/W is also the "magnification" as it relates to the sensor.  A feature of size X on the object becomes a feature of size (S/W)*X on the sensor.  Since the image ordinarily gets displayed on a monitor, the system magnification also depends on the monitor display area.

Note that the "Sensor Width" is whichever dimension of the sensor concerns us.  The AIT100 sensor is 6.8 mm wide by 5.4 mm high; use whichever dimension is critical.  For the following examples, we will use the larger dimension.

Example: The AIT100 sensor has a width of 6.8mm.  It is desired to fill this width with an object five meters wide at a distance of 10 meters.

The desired focal length is: 

F = 6.8mm*(10/5) = 13.6mm.

Now, suppose the lens we have is a 10mm lens.  The actual width of the object that fills the sensor width will be:

W = S*D/F = 6.8mm*(10m/10mm) = 6.8 meters.  The five meter wide object will occupy about 74 percent of the sensor.

As expected, the slightly shorter focal length gives us less magnification.
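
The two calculations above are easy to script. Here is a minimal Python sketch; the function names are ours, for illustration only:

    def focal_length(sensor_width_mm, object_width, object_distance):
        # F = (S/W)*D; object width and distance must be in the same unit.
        return sensor_width_mm * object_distance / object_width

    def object_width_filling_sensor(sensor_width_mm, focal_mm, object_distance):
        # W = S*D/F: the object width that just fills the sensor width.
        return sensor_width_mm * object_distance / focal_mm

    print(focal_length(6.8, 5.0, 10.0))                  # 13.6 (mm), as above
    print(object_width_filling_sensor(6.8, 10.0, 10.0))  # 6.8 (meters) with the 10mm lens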

Back to Top

Field of view (FOV). 

Expressed either as a width at a distance (for example, the 10mm lens above yields a 6.8 meter wide field at 10 meters) or as an angle.

The basic relationship for FOV is: 

                D = F(1 + FOV/S). 

For FOV/S >> 1, this reduces to:

                D = F*FOV/S

For example, if we want to know what lens-to-object distance is required to produce a 5 meter FOV with a 10mm focal length and our 6.8 mm AIT100 sensor, then:

                D = 10 * 5000/6.8 = 7.352 meters. 

The angle is:               

                FOV = 2*arctan(S/(2*F))

The FOV of the 10mm lens is 2*arctan(6.8/(2*10)) = 37.6 degrees.
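
As a quick check, here is the same arithmetic as a Python sketch (the helper names are ours):

    import math

    def distance_for_fov(focal_mm, fov_mm, sensor_mm):
        # Exact form: D = F*(1 + FOV/S); for FOV >> S this approaches F*FOV/S.
        return focal_mm * (1.0 + fov_mm / sensor_mm)

    def fov_degrees(sensor_mm, focal_mm):
        # FOV (degrees) = 2*arctan(S/(2*F))
        return 2.0 * math.degrees(math.atan(sensor_mm / (2.0 * focal_mm)))

    print(distance_for_fov(10.0, 5000.0, 6.8))  # ~7363 mm; the approximation gives ~7353 mm
    print(fov_degrees(6.8, 10.0))               # ~37.6 degrees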

Back to Top

F-number or Aperture ratio. 

F-number (f-stop) is the ratio of the focal length to the effective lens diameter. If the lens has an iris, closing down the iris decreases the effective lens diameter and increases the F-number.   A low f-stop lens (f/1.2 or so) is said to be "fast" because it has a large aperture and collects a lot of light, so the exposure time on either film or a solid state sensor is short ("fast").  Decreasing the f-stop at a constant focal length has the following effects:

1)       More light collected, so shorter exposure times and less motion-induced blur.

2)       Poorer Depth of Field.

3)       Better lens resolution. 

The more light collected/shorter exposure time effect is pretty intuitive: the larger diameter lens brings in more light, so the sensor forms an image more quickly.  This is useful in low light conditions, or when it is desired to image rapidly moving objects.

Back to Top

Resolution/Depth of Field

Resolution and Depth of Field tradeoffs require a bit more discussion.  Lower f-stop lenses have better resolution (until the lens resolution exceeds the sensor resolution) but have poorer depth of field.

Resolution is the ability of an optical system to distinguish between two features that are close together. For example, if a lens images a bar code, it must be able to distinguish two adjacent black bars separated by a white space.  Higher resolution lenses also have sharper edge definition, making for "better" images.

Since we are discussing digital imaging, there are two primary contributors to resolution: the lens and the sensor.  If the lens has more resolution than the sensor, the system resolution is "sensor limited"; if the reverse is true, it is "lens limited".

Diffraction spreads each point of an image into a spot whose diameter is 2.44 * light wavelength * f-number.  Since visible light has a wavelength of about 0.5 microns, this spot has a diameter of about 1.2 * f-number in microns; for working purposes, the lens resolution (in microns) is equal to its f-number.

The sensor resolution is the pixel size.  For example, the AIT100 sensor pixels are 5.2 microns by 5.2 microns.  Hence, any diffraction-limited lens with an f-number smaller than 5.2 results in the sensor limiting the resolution.

Now, the real question: how large a feature on the object is a pixel, assuming the resolution is sensor limited?

Pixel(object) = Pixel(sensor)*(Object Distance/focal length)

For instance, in our 10mm focal length example, with the 6.8 meter object at 10 meters:

Pixel(object) = 5.2 micron*(10m/10mm) = 5.2mm, or about 0.2 inch for those of us in the US.

Note that this constitutes a lower limit on the size of an object that can be detected; it does not constitute a lower limit necessarily on measurement resolution.  Gray scale interpolation techniques can generate effective measurement resolutions much smaller than a pixel.
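
The rule of thumb and the object-pixel calculation above, as a Python sketch (the names are ours):

    def lens_spot_microns(f_number, wavelength_um=0.5):
        # Diffraction spot diameter = 2.44 * wavelength * f-number.
        return 2.44 * wavelength_um * f_number

    def object_pixel_mm(pixel_um, object_distance_mm, focal_mm):
        # Pixel(object) = Pixel(sensor)*(D/F), converted to millimeters.
        return (pixel_um / 1000.0) * (object_distance_mm / focal_mm)

    print(lens_spot_microns(5.2))               # ~6.3 micron spot at f/5.2
    print(object_pixel_mm(5.2, 10000.0, 10.0))  # 5.2 mm per pixel on the object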

Resolution should not be confused with accuracy.  Accuracy is affected by lens distortion, which makes objects near the edge of the field of view appear closer to or further from the axis than they actually are.  Lens distortion is caused by lens design issues beyond the scope of this discussion; see the manufacturer's data sheet for the lens you are considering.

Depth of field (DOF) is the range of lens-to-object distances over which the image will be in acceptable focus. "Acceptable" is application dependent; if fine feature resolution is critical, the DOF will be smaller for the same optical system and conditions than it would be if the feature size were larger.

For an object at a given distance, the depth of field is inversely proportional to the focal length of the lens; that is, the shorter the focal length, the greater the depth of field. For example, a 28mm lens can capture more of the picture in sharp focus than a 100mm lens.

Depth of field is directly proportional to distance; i.e. a subject at a greater distance will have greater depth of field than a close-up subject. Therefore, you need not worry as much about a distant subject being out of focus. 

While changing the aperture (f-stop) will not have a striking effect on the depth of field for a distant subject or a wide angle (short focal length) lens, it can make a great deal of difference in a close-up or a photo taken using a telephoto or zoom lens.

A wider aperture (smaller f-stop number) will result in a shallower depth of field. You can use this to keep either the foreground or background out of focus while maintaining the subject in focus. When changing the aperture setting, you will need to also adjust the shutter to maintain the correct exposure.

Given this situation, any DOF formula has to be a guideline.

A good approximation is:

DOF = 2*(object distance/focal length)**2 * pixel size * f-number

If the pixel size is in microns, the DOF will be in microns.

As discussed under resolution, f-numbers below the pixel size (in microns) don't contribute to resolution, so the pixel size is a good minimum f-number.  (You may need a faster lens for low light applications, but you won't gain any image sharpness, and it will cost you depth of field.)

Examples: 

The example we have been using: 10m object distance, 10mm focal length, and the AIT100's 5.2 micron pixel size.  Assume an f-number of 5.2.

DOF = 2*(10000/10)**2*5.2*5.2 microns = 54 meters

This seems an odd result: the object is at 10 meters, but has a Depth of Field of 54 meters?  This is because the object is nearly at infinity, and the DOF is not symmetrical; most of this DOF is on the far side of the object.  It is possible to calculate the near side and far side DOF limits separately; see www.mellesgriot.com or our lens calculator.

A better example: suppose we have the same lens, with a Minimum Object Distance of 300mm, and we are focused at that limit.

DOF = 2* (300/10)**2*5.2*5.2 = 48.672 millimeters.

Note that the depth of field increases as the square of the blur we allow at the best focus point, provided the f-number is increased in proportion.  For example, if we permit a two pixel (10.4 micron) blur and increase the f-number accordingly, we increase the DOF by a factor of four, to about 194.7 mm.

The three decimal precision in the preceding examples is entirely bogus and is included only to make a point. The DOF calculation is essentially a guideline for a subjective criterion.  Your mileage may vary.
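
For reference, the DOF guideline as a Python sketch (the helper name is ours):

    def dof_microns(object_distance, focal_length, pixel_um, f_number):
        # DOF = 2*(D/F)**2 * pixel size * f-number.
        # D and F must share a unit; pixel size in microns gives DOF in microns.
        return 2.0 * (object_distance / focal_length) ** 2 * pixel_um * f_number

    print(dof_microns(10000, 10, 5.2, 5.2) / 1e6)  # ~54.1 meters at D = 10 m
    print(dof_microns(300, 10, 5.2, 5.2) / 1e3)    # ~48.7 mm at the 300 mm MOD
    print(dof_microns(300, 10, 10.4, 10.4) / 1e3)  # ~194.7 mm with a two-pixel blur at f/10.4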

Back to Top

Minimum Object Distance (MOD).

Minimum Object Distance (MOD) is the minimum distance from the lens at which the lens can focus.  The lens focuses closer objects by moving further from the sensor, and there is a mechanical limit to how far the focus mechanism can move the lens.  The MOD can be decreased, to some extent, by using extension tubes to space the lens further from the sensor.

Back to Top

Handy Formulas:

F = Focal length

S = Width of Sensor

D = Distance to Object

W = Width of Object

1)       Focal length; distance to object.

F = (S/W)*D

2)       Field of View (FOV):

      For FOV/S >>1

D = F*FOV/S

Or,

FOV = S*D/F

FOV (degrees) = 2*arctan(S/(2*F))

3)       Resolution:

Lens Resolution (diameter of circle in microns) = lens f-number

Sensor Resolution = pixel size.

4)       Size of object pixel (lens at infinity)

Pixel(object) = Pixel size (sensor)*(D/F)

5)       Depth of Field (DOF)

DOF = 2*(D/F)**2 * pixel size * f-number
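
Collected as a small Python module (a sketch; the names are ours, and units follow the conventions above):

    import math

    def focal_length(S, W, D):
        # 1) F = (S/W)*D
        return S * D / W

    def fov_width(S, D, F):
        # 2) FOV = S*D/F (valid for FOV/S >> 1)
        return S * D / F

    def fov_degrees(S, F):
        # 2) FOV (degrees) = 2*arctan(S/(2*F))
        return 2.0 * math.degrees(math.atan(S / (2.0 * F)))

    def object_pixel(pixel, D, F):
        # 4) Size of object pixel = pixel size (sensor)*(D/F)
        return pixel * D / F

    def depth_of_field(D, F, pixel_um, f_number):
        # 5) DOF = 2*(D/F)**2 * pixel size * f-number (microns when pixel size is in microns)
        return 2.0 * (D / F) ** 2 * pixel_um * f_number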

Back to Top

Copyright © 2005-2011 Apollo Imaging Technologies, Inc. All rights reserved.
Revised: 04/08/08.