
If you cheat at golf, you cheat at life.
 
Leica M8 lens testing and other wastes of time
 

Is that your Slazenger Seven?

In America, it might be ok to cheat in business. It might be ok to cheat on your taxes. Depending on the circumstances, it might even be ok to cheat on your wife. But cheating at golf is not allowed. Ever.

Many lens tests on digital bodies are just that – cheating at golf. The worst thing is that the people doing the cheating have no idea that they are doing it.  Something is lost in the logic of testing; the measurement gets mixed up with the method.  And the result is, well, like that golf ball that went into the water hazard but miraculously reappeared on the green.  Other lens tests can seem like a neophyte's hitting a hole-in-one.

There are only two things you could ever learn from any lens test that does not involve an optical bench and calibrated targets: (1) whether the lens consistently exceeds the resolving power of a camera sensor in some arbitrary set of conditions and (2) the more subjective qualities of the lens (out of focus areas, vignetting, macro contrast).

Isn't that really enough?

Testing lenses in the lab

MTF (modulation transfer function) curves are plotted by measuring the contrast of an aerial image (of line-pair clusters) through a lens at an arbitrary distance.  The resulting MTF chart shows the contrast (usually at 10, 20 and 40 lp/mm - or whatever the target is designed to show) at varying distances from the center of the optical axis.  Where the contrast stays above a certain threshold at a certain resolution, the lens is said to resolve that much.  MTF is an objective measure that sees only the optical components of the lens (the optical cell).
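In symbols, what an MTF rig actually measures is modulation - the contrast of the projected line pairs - and the ratio of image-side to object-side modulation at each spatial frequency.  A sketch of the standard definitions (the notation here is mine, not from any particular test standard):

```latex
% Modulation (contrast) of a target at spatial frequency \nu:
M = \frac{I_{\max} - I_{\min}}{I_{\max} + I_{\min}}

% MTF is the image-side modulation as a fraction of the object-side modulation:
\mathrm{MTF}(\nu) = \frac{M_{\mathrm{image}}(\nu)}{M_{\mathrm{object}}(\nu)}
```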

MTF testing via aerial imaging has some drawbacks - not being able to show diffraction limitations, chromatic aberrations, and other qualities that become apparent when a flat target is projected by the lens onto another flat surface (like a digital imager).  To capture those requires the reduction of the aerial image onto a target, and that introduces a host of variables that have to be accounted for.  People who test lenses for a living have ways of testing these qualities objectively (or at least in a controlled fashion).  Today, more and more lab testing is being done with a Siemens target, a starburst that makes it pretty clear at what point a lens can no longer resolve.  But the underlying idea is similar - pick a pass/fail point at which you evaluate lens resolution and report that number.

One sure thing is that no one professionally tests lenses with newspaper, oblique rulers, or brick walls.

Pitfalls of the casual test

So you cruise over to a web site and see a lens sharpness test.  Or you've made your own.  Someone (maybe even you) plugs various lenses into a Leica M8 and then compares 1:1 images.  How sharp is that lens?  How sharp is it compared to x or y or z?

Note to file: remove any such tests from my site before writing this! 

No matter how well-intentioned, all you really know at the end is how one person fared with his eyesight, his camera body, his particular example of a lens, and any experimental errors picked up along the way.  We are often treated to anecdotal examples of stellar lens performance, but we don't see the frames that were cut because the tester believed the result was out-of-bounds and caused by his own errors.

The controls required to make a formal lens test are many.  The potential errors are not minor, nor are they few, and they multiply when checking multiple lenses from different manufacturers in parallel.  Let's face it: we've all done lens tests at some point ourselves, but it's time to fill out the scorecard the right way.

Variations in optical cells.  Even if the only thing you were testing were the optical cell, there are good examples of a particular optical cell and bad ones.  Unless you are dealing with a manufacturer that rejects large numbers of lens cells for minor, minor defects, it's a foregone conclusion that a proper test of the glass itself will encompass more than one example.  Leica may be the one exception, with tolerances of 0.01mm or less, but that kind of consistency comes at a very high price.

Tolerances in mechanical parts.  Mass-produced lenses are complex mechanical devices mounted to cameras, which are in turn complex mechanical devices.  To give a flavor of the sheer number of variables involved, consider that testing a rangefinder lens on a camera depends on a number of things being exactly controlled:

- The optical unit must end up perfectly parallel to the imaging surface (meaning that both the camera and the lens must be aligned);

- The imaging surface must be perfectly parallel to the focusing target (good luck verifying this test condition yourself);

- The flange-to-imaging-plane distance of the camera must be perfect;

- Any intermediary adapters (such as a Leica M body to M39 lens adapter) must be controlled for proper thickness and machining.  Leica has never published the spec dimensions for these adapters, so it's anyone's guess whether third-party adapters are correct.  It's also impossible to tell whether Leica's own tolerances for these (cooked up in the 1950s) are close enough to guarantee proper operation on digital camera bodies.

If the test is at anything but infinity, five other things must be watched carefully:

- The RF cam position of the lens must be verified to be correct;

- The pitch of the RF cam (and/or the angle of its ground face) must be precise enough that the cam sits at the correct position for any given focusing distance;

- The focal length of the lens (there are minor variations in every lens) must be precisely matched to the focusing helicoid (consider that Leica can have several dozen focusing mounts for focal length variations for the "same" model lens);

- The camera rangefinder must be aligned perfectly; and

- There must be no parallax between the RF spot and the subject in the center of the imaging surface.

- If the test is on a digital system, then the RAW converter must be controlled – because it, too, has an effect on the performance of the lens as a system.  Erwin Puts wrote an excellent article on the effect of RAW converters on apparent lens performance.

There is always human error, particularly mis-focusing due to poor eyesight.  Most of the people who review optics are not young.
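To get a feel for how these things stack up, here is a toy error budget.  Every tolerance number in it is invented for illustration - no manufacturer publishes these - but the arithmetic shows why several individually small errors can eat the entire focusing tolerance:

```python
# Toy error budget for a rangefinder lens test on a digital body.
# Every tolerance below is an invented, illustrative number, not a
# published spec.
errors_mm = {
    "flange-to-sensor distance off spec": 0.02,
    "adapter thickness (M39-to-M)": 0.01,
    "RF cam position / grind": 0.02,
    "rangefinder alignment (as focus error)": 0.02,
    "operator focusing error": 0.03,
}

worst_case = sum(errors_mm.values())
# Root-sum-square is the usual way to combine independent errors:
rss = sum(e ** 2 for e in errors_mm.values()) ** 0.5

# Image-side depth of focus, roughly 2 * N * c for an f/1.4 lens and a
# 0.02 mm circle of confusion (the figure used later in this article):
depth_of_focus = 2 * 1.4 * 0.02

print(f"worst-case stack: {worst_case:.3f} mm")      # 0.100 mm
print(f"root-sum-square:  {rss:.3f} mm")             # ~0.047 mm
print(f"depth of focus:   {depth_of_focus:.3f} mm")  # 0.056 mm
```

Even with the optimistic statistical combination, this made-up budget is about the size of the whole focus tolerance - which is why a single casual test tells you about a particular lens-camera-human chain, not about the lens design.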

At the end of the day, it's really hard to draw any generalized prediction of lens "sharpness" from one example tested at one point in time.  Or any reliable prediction of the relative, quantifiable performance of two different model lenses.

Mulligan #1: multiple examples of a lens

In golf, a mulligan is where the golfer, faced with a high shot count or an impossible lie, metaphorically kicks the ball into the cup.  This avoids the indignity of having to count the rest of the strokes.  There are at least a couple of different types of mulligans in lens testing.  The first can be called the "good-example-of-the-lens" technique.  I think this is often motivated by a desire to feel "objective" or an unconscious desire to fulfill one's own expectations of the manufacturer (which expectations may or may not be realistic).  The "good example" technique raises at least three different questions.

A.  What is a "good" example?  When people start discussing "good" and "bad" examples of a particular lens, particularly when starting with a single "bad" copy, we must ask ourselves why we think it is (or might be) a bad example:

- Is it "bad" because it didn't work well with the reviewer's camera as it is calibrated?

- Is it "bad" because it didn't conform to its own manufacturer's specifications and tolerances?

- Is it "bad" because we expected tack-sharp corner-to-corner performance and didn't get it?

- Is it "bad" because a fast lens has spherical aberration and focus shift?

We don't really know if someone has a bad copy unless there is also a good copy - but finding the norm is all but impossible.  Manufacturers do not publish their manufacturing specifications, much less what they intended by making certain intentional deviations from standard specifications.  For example, if a Zeiss ZM 50mm Sonnar-C is shot wide-open, it will exhibit a back focal distance that is shorter than would be expected from a Leica mount lens.  A reviewer looking for an example that performed "dead-on" at 1m and f/1.5 would actually be looking for an example that deviates from the norm, not exemplifies it.  The Sonnar example is one of a very small number of cases where a manufacturer has actually explained something like this.  In most cases, things quickly progress into speculation.

B.  Can we solve the problem with statistics?  This is a good question.  In the manufacturing world, it's pretty routine to reverse-engineer products by measuring and drawing a statistically-valid sample (n samples, with n depending on how close you need to get it) and then determining the mean and median measurements and from those, the tolerances.  I'm not aware of any lens reviewer who is also a statistician or a production engineer.  The closest thing we ever see to this is the testing of one or two additional samples, never demonstrably rising to any established n that is required to figure out what is normal.  And forget about anecdotal stories about X or Y or Z lens.  If you went by these stories, you would conclude that 75% of all lenses are defective (or superlative) in some way.
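For a sense of scale, here is a back-of-the-envelope sample-size calculation using the standard confidence-interval formula.  The spread and tolerance figures are invented purely for illustration:

```python
# How many lenses would you need to characterize one parameter (say,
# back-focus error) statistically?  n = (z * sigma / E)^2.
# sigma and E below are made-up numbers for illustration only.
z = 1.96        # 95% confidence
sigma = 0.02    # assumed std. dev. of back-focus error across copies, mm
E = 0.005       # acceptable error in the estimated mean, mm

n = (z * sigma / E) ** 2
print(f"samples needed: {n:.0f}")   # ~61 copies, not the usual one or two
```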

C.  Are multiple samples ultimately useful?  I would say not.  An end user cares about one thing: if I buy X (whichever example my dealer has) and plug it into my Y (the camera I now own), will I get results that befit the amount of money I spent on the camera and lens?  People do not experience photography in averages (except for a few pathological cases where people hoard multiple examples of the same product).  And there is generally one chance to get the picture.  It would seem that given a consumer's interest in a single example, the only "objective" thing to do is to randomly take a single product off a dealer's shelf, test it, and report the results.  That would be as good a predictor of the consumer's experience as anything.

Mulligan #2: "Focus bracketing."

The second mulligan is a practice called "focus bracketing," by which a reviewer takes pictures at the indicated focus and at points behind and in front of it.  The justification of "focus bracketing" is that it eliminates possible camera malfunctions, "sample variation," and human error.  None of these, in my view, is a particularly compelling justification.

First, a camera is a fundamental and necessary part of the imaging chain.  Without taking the time to make sure you have a perfectly calibrated sample of a camera built to spec, the test is pretty much worthless as an objective (or even comparative) measure of performance.  It gets more egregious when the lens in question is built for a particular standard (such as a Leica screwmount lens) and the test occurs on a body that resembles a Leica screwmount camera only by virtue of reverse engineering or the use of adapters.  The prototypical situation is adapting a screwmount rangefinder lens onto an Epson R-D1.  We have no idea how Epson/Cosina chose to place the imaging plane (film is curved; digital sensors are flat; flange-to-focus distance breaks down as a standard when crossing formats).  Even Leica had to make design compromises in the M8 that have resulted in many, many recollimations of many types of lenses.

Second, "sample variation" is at the heart of lens quality (or lack thereof).  To make a lens well, consistently, costs money.  High reject rates, close tolerances, and careful machining drive unit costs up - primarily in the department of the mechanical parts.  Companies like Leica (and Zeiss in its German operations) make things very, very carefully.  If you go to the low end of Japanese lenses (particularly rangefinder lenses), you see cheaper materials, looser machining (made up with heavier grease) and more ad-hoc "adjustments" achieved by practices like grinding down parts of the rangefinder contact cam.  The practice of "focus bracketing" tends to give a free ride to lenses whose mechanical parts may show wild variations in manufacturing or simply be out of spec for their intended use.  Focus bracketing turns things into a test of a lens cell, not the lens.  At that point, you should start thinking about reading the MTF charts instead.

Finally, human error is significant.  Like it or not, human factors make up part of the system too.  If a lens has too fast a twist to focus accurately, that is something users have to live with in the real world.  And filtering this out by nudging the focus creates a testing method that is not readily repeatable, particularly on wideangle lenses.* 

*While we are on this subject, I would pose this question: why not bracket the distance to the subject instead of the position of the lens focusing ring?  With a 75mm f/1.4 lens, for example, the depth of field at 1m and f/1.4 should be 8mm (assuming 0.02mm as the circle of confusion), and the tolerable lens-to-focus-plane error is about 0.02mm front-to-back. You can do the math (the answer is 400), but it would seem that the more predictable way of eliminating rangefinder error would be to incrementally change the distance between target and lens/camera rather than moving the focusing ring.  An array of independently moveable targets across the field would be a pretty easy way to judge field curvature.
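As a sanity check on the footnote's arithmetic, here is the thin-lens version.  Depth-of-field conventions vary (where you measure the subject distance from, which circle of confusion you pick), so treat the exact figures loosely:

```python
# Thin-lens check of the 75mm f/1.4 example at 1 m.
f = 75.0     # focal length, mm
N = 1.4      # f-number
c = 0.02     # circle of confusion, mm (the article's figure)
s = 1000.0   # subject distance, mm

H = f * f / (N * c) + f                  # hyperfocal distance, mm
near = s * (H - f) / (H + s - 2 * f)     # near limit of depth of field
far = s * (H - f) / (H - s)              # far limit of depth of field
dof = far - near                         # subject-side depth of field

m = f / (s - f)                          # magnification at 1 m
depth_of_focus = 2 * N * c * (1 + m)     # image-side focus tolerance

print(f"subject-side DoF:  {dof:.1f} mm")                 # ~9 mm
print(f"image-side budget: {depth_of_focus:.3f} mm")      # ~0.06 mm
print(f"ratio:             {dof / depth_of_focus:.0f}x")  # ~150x
# The footnote's rougher 8 mm / 0.02 mm gives ~400x; the standard
# depth-of-focus formula gives ~150x.  Either way, the image plane must
# be placed two orders of magnitude more precisely than the subject,
# which is why bracketing the target distance is far more controllable
# than nudging the focusing ring.
```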

The counterpoint would of course be that focus bracketing should be done to eliminate human error, but such an argument points to a severely compromised testing situation.

What is a reasonably reliable type of test I can do?

There are two tests that would not seem to depart from the realm of the realistic.

A.  Can the lens exceed the sensor's capabilities in normal use?  If you get moiré on the sensor at a given aperture, the system (of which the lens is a limiting element) is outresolving the sensor.  In shorthand, this means that the lens is better than the sensor.  This is an easy, objective test. 

To take an example, if a Kodak 14n sensor is 4,500 pixels and 36mm wide, you can ascertain that it contains 2,250 line pairs, or has a maximum system resolution of about 62 lp/mm (I'm using rounded numbers for sensor width in mm).  A lens that generates moiré on that sensor is delivering at least that resolution.  Likewise with a Nikon D2x that has a 24mm wide sensor that is 4,288 pixels wide, the threshold is 90 lp/mm.  And so on with the Leica M8, which packs 3,916 pixels into 27mm for 73 lp/mm.
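The same arithmetic in a few lines, if you want to run it for your own camera (the sensor figures are the article's rounded ones):

```python
# Nyquist-limit arithmetic: a sensor `pixels` photosites wide over
# `width_mm` of sensor can record at most pixels / 2 line pairs,
# i.e. pixels / (2 * width_mm) lp/mm.
sensors = {
    "Kodak 14n": (4500, 36.0),
    "Nikon D2x": (4288, 24.0),
    "Leica M8":  (3916, 27.0),
}

for name, (pixels, width_mm) in sensors.items():
    print(f"{name}: {pixels / (2 * width_mm):.1f} lp/mm")
# Kodak 14n: 62.5, Nikon D2x: 89.3, Leica M8: 72.5 -- matching the
# rounded 62 / 90 / 73 above.  Moiré at or near these frequencies
# means the lens is at least as good as the sensor.
```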

The catch with this type of test is that you must be able to see moiré somewhere.  And this is much easier with a camera that lacks an antialiasing filter (a Leica M8 is suitable).  On these cameras, you can either see the moiré directly on the camera LCD at 100% magnification (since the camera doesn't bother to correct this in its preview) or in the finished images (assuming you can eliminate the anti-moiré process in the RAW conversion).

What, you ask, would be the experimental controls here?  The answer is simple: none.  Because this is a pragmatic test, and because it does not pretend to be scientific (except in a binary result - pass or fail), you can do anything you want.  Start with a camera with a reasonably good calibration.  Shoot people, shoot objects, shoot landscapes.  Look at the results.  If it looks "good enough," it is.

B.  What are the subjective qualities of the lens?  This is easy, and interpretation is purely a question of personal taste.  With the first subject at x distance from the camera and the second at y distance from the first, how do the various foreground and background elements look?  What happens when a strong light source gets into the frame?  What is vignetting like at various apertures and distances?  This, again, is a test that gets better the more practical pictures you take.

What's the score?

You might obviate all lens testing by reference to three simple ideas.

A.  Any fast (faster than f/2) rangefinder lens made before 2006 will probably need to be recollimated for a digital rangefinder camera.  Manufacturing tolerances that did the job for film cameras will not do it for digital if you like to look at things 1:1.  And film, given its curl, does not end up in the same plane of focus as a digital imager.  So make sure the collimation is done with a first-surface mirror.  Yes, it costs money.  But yes, it will eliminate metaphysical doubt and allow you to get on with life.  It's cheaper than a psychoanalyst.

B.  If you are using a blind-focusing camera system (like a rangefinder) for what you believe to be a critical purpose, go right for the expensive lenses.  The proper function of these systems is all about maintaining tolerances, not just at infinity but throughout the distance range.  And it's also important to maintain those tolerances over time.

C.  Consider slowing down.  Lenses with smaller maximum apertures are less expensive, smaller, lighter, more consistent in field flatness, less prone to focus shift with stopping down, and generally sharper.  With a Leica M8, where the base ISO is 160, and shooting at ISO 320 does not impact things terribly, you can probably afford to use a slower lens than you would have with a film camera (assuming the effective focal length is the same, e.g., a 35mm lens on a film camera vs. a 28mm lens on the M8).
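The stop arithmetic behind that suggestion, sketched out (the ISO and aperture values are just this example's):

```python
import math

# Each doubling of ISO buys one stop; each stop is a sqrt(2) step in
# f-number.  So ISO 160 -> 320 lets an f/2.8 lens stand in for an f/2.
base_iso, new_iso = 160, 320
stops = math.log2(new_iso / base_iso)    # 1.0 stop

f_fast = 2.0
f_slow = f_fast * math.sqrt(2) ** stops  # ~2.8
print(f"f/{f_fast:g} at ISO {base_iso} ~ f/{f_slow:.1f} at ISO {new_iso}")
```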

DAST