Nothing really interesting to me, I'm afraid. Nobody would ever doubt that a solid support for the camera (and the lens) yields the best image quality. Neither would anyone deny the fact that image stabilization technology improves image quality at slower shutter speeds. So with this in mind, what does the reader learn from the article?
Puts tried to use the results of a test to prove the common knowledge that image quality is best on a tripod, worst handheld, and somewhere in between with VR/IS. Unfortunately, the test he performed is not scientifically sound. First of all, the methodology is poorly described. He mentioned that he used the Star Chart from Imatest at a 2-meter distance for data collection and analysis; other than that, we know basically nothing about how the test was carried out. For example, which version of Imatest did he use? Earlier versions have significant bumps in the MTF response, which was fixed in later versions (ver. 2.7.2 and later) with a change in algorithm. Judging from the bumpiness of the MTF curves in Puts' article, it seems to me that he was using one of the earlier versions. In addition, RAW images cannot be analyzed directly with Imatest, so they have to be converted into JPEGs first. How was the conversion done: in-camera or with software? In either case, what were the settings (sharpening, for instance), and were they the same for all three cameras? This might sound trivial, but the conversion process does affect image quality. For the handheld and VR/IS tests, how many shots were taken per shutter setting per camera? In what order were the cameras tested? Was the order the same as reported in the article (that is, Nikon D3, Olympus E3, and then Leica M8)? Could the results be affected by differences in the tester's performance at the beginning and at the end of the session because of muscle fatigue? Hopefully the test was performed in a way that eliminates systematic bias, but we cannot tell from the article.
Puts wrote in his article that he repeated the camera-on-tripod test over several days and got "comparable results". However, there is no mention of repeating the handheld or VR/IS tests, which to me is very strange because data from those tests are intrinsically more variable, so the need to repeat them is even greater than for the tripod test. The variability can clearly be seen when you compare the Nikon D3 handheld results at 1/80s and 1/15s. Save for the horizontal axis (segments 1 and 5), the MTF curves at 1/15s look better than those at 1/80s. I bet that if multiple shots at 1/80s and 1/15s were compared, the overall MTF data would show better results at the higher shutter speed. My guess is that for this particular shot at 1/80s, there was significant camera movement along axes other than the horizontal one.
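A single-shot comparison cannot distinguish a real difference from shot-to-shot luck. Here is a minimal sketch of the kind of analysis that could; the MTF50 numbers below are invented purely for illustration (real data would come from running Imatest on each repeated frame):

```python
import math
import statistics

# Hypothetical MTF50 values (cycles/pixel) from repeated handheld shots.
# These numbers are made up for illustration only.
mtf50_1_80 = [0.28, 0.31, 0.26, 0.30, 0.29]   # five shots at 1/80s
mtf50_1_15 = [0.18, 0.25, 0.12, 0.22, 0.16]   # five shots at 1/15s

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

print(f"mean @1/80s: {statistics.mean(mtf50_1_80):.3f}  "
      f"sd: {statistics.stdev(mtf50_1_80):.3f}")
print(f"mean @1/15s: {statistics.mean(mtf50_1_15):.3f}  "
      f"sd: {statistics.stdev(mtf50_1_15):.3f}")
print(f"Welch t: {welch_t(mtf50_1_80, mtf50_1_15):.2f}")
```

Note that with spreads like these, a single lucky 1/15s frame can beat a single unlucky 1/80s frame even when the means tell the opposite story — which is exactly what I suspect happened in Puts' data.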
In fact, the Olympus E3 handheld MTF graphs in Puts' article show the same anomaly: as a whole, the MTF curves at 1/50s look better to me than those at 1/200s. It should also be noted that in the text Puts discussed image quality at 1/100s, but somehow he mistakenly showed the graphs for 1/200s instead.
Why did Puts use the word "comparable" instead of "identical" or "almost identical"? Does it mean that the MTF curves differ between tests? If so, what is the variability between tests? And what about the variability within a test: with, say, three sequential shots on a tripod at the same camera setting, are the MTF curves identical? This kind of data is essential before any meaningful conclusion can be drawn.
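A concrete way to answer the "comparable versus identical" question is simply to report the spread of repeated tripod shots. A sketch, again with invented numbers standing in for real Imatest output:

```python
import statistics

# Hypothetical MTF50 readings from three sequential tripod shots at one
# camera setting (numbers invented for illustration).
tripod_shots = [0.342, 0.338, 0.345]

mean = statistics.mean(tripod_shots)
sd = statistics.stdev(tripod_shots)
cv_percent = 100 * sd / mean  # coefficient of variation, in percent

print(f"mean={mean:.3f}  sd={sd:.4f}  CV={cv_percent:.1f}%")
```

If the tripod coefficient of variation turned out to be around 1% while handheld shots varied by tens of percent, "comparable" would mean something very different in the two cases — which is why the word needs quantifying.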
On the usefulness of VR/IS, Puts concluded that with the Nikon "One may say that VR at 1/15 is much better than no-VR at 1/180. For number aficionados this implies an improvement of 3.5 to 4 stops." and with the Olympus "Generally one can say that the VR at 1/30 produces better results than 1/200 at simple handholding situations: a gain of 2.5 to 3 stops." He then went on to state, "The Nikon in-lens performance is definitely better than the Olympus in-camera performance". I would seriously question the logic leading to that statement. As I mentioned earlier, the article gives too little detail about the test methodology; from the way he described the results, I have to assume he was the only tester. The statement is therefore over-generalized: it applies only to the tester himself using the particular body/lens combinations described. To make a sweeping claim like that, he would have to increase the sample size, both in the number of testers and in the variety of lenses and bodies tested.
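The sample-size point can be made precise: the uncertainty of an average shrinks with the square root of the number of shots (or testers), so a one-tester, few-shot comparison carries nearly the full shot-to-shot spread. A sketch, where the shot-to-shot standard deviation is an assumed, hypothetical figure:

```python
import math

# Assumed shot-to-shot standard deviation of an MTF metric (hypothetical).
shot_sd = 0.05

for n in (1, 4, 9, 25):
    se = shot_sd / math.sqrt(n)  # standard error of the mean of n shots
    print(f"n={n:2d}  standard error of the mean = {se:.3f}")
```

Quadrupling the number of shots only halves the uncertainty, which is exactly why a "definitely better" verdict from one tester and one shot per setting is on shaky ground.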
Another flaw in the test design is that in the handheld and VR/IS tests the aperture was not held constant. In the tripod test he demonstrated that there is an optimal aperture for each lens (for example, f/5.6 for the Nikon 105mm VR). In the VR test, he showed that the MTF curves for the 1/4s f/16 shot were worse than those taken at 1/100s f/3 or 1/15s f/8. Certainly VR failed at the slow shutter setting, but the lens performance at f/16 is simply not the same as at f/3 or f/8, which confounds the analysis. A better design would be to vary the light source (e.g., change the number of floodlights or move them closer to or farther from the test chart) so that all tests are performed at the lens's optimal aperture.
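The fix is simple exposure arithmetic: at a fixed aperture and ISO, halving the exposure time requires doubling the illumination, and a floodlight's illumination on the chart follows the inverse-square law, so the required distance change is the square root of the light factor. A sketch (the function names are mine, not Puts'):

```python
import math

def light_factor(old_shutter, new_shutter):
    """Factor by which scene illumination must change to keep exposure
    constant at a fixed aperture/ISO when the shutter time changes
    from old_shutter to new_shutter (both in seconds)."""
    return old_shutter / new_shutter

def new_light_distance(d, factor):
    """New floodlight-to-chart distance that multiplies illumination
    by `factor`, via the inverse-square law."""
    return d / math.sqrt(factor)

# Example: going from 1/15s to 1/100s at the same aperture needs
# roughly 6.7x more light, e.g. moving a floodlight from 3 m to ~1.2 m.
f = light_factor(1 / 15, 1 / 100)
print(f"light factor: {f:.2f}x")
print(f"move floodlight from 3.00 m to {new_light_distance(3.0, f):.2f} m")
```

With the lighting adjusted this way, every shutter speed could have been tested at the lens's optimal aperture, and the aperture-dependent lens performance would drop out of the comparison entirely.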
I don't usually read Puts' articles; my limited knowledge of him is that he is a Leica guru. However, I am really disappointed with this test report. It gives me the impression that the tests were poorly conducted, the data were iffy, the analysis was improperly done, and the conclusions were not supported by the data shown. Even the web page was not carefully assembled. For example, the text describes one set of graphs (E3 at 1/100s) while the actual graphs shown are from another setting (E3 at 1/200s). In the tripod test, the X-axes of the three graphs are on different scales, which makes the Olympus E3 look much worse to the casual reader. It should also be noted that the sudden rise in the E3's MTF values at high frequencies is an artifact of the anti-alias filter; those high-frequency data should either be discarded or explained in the text.
It is surprising to me that Puts is so highly regarded in the field of photography. Some of you are so critical of a particular guy at dchome and yet so forgiving with Puts. To me, at least, Puts needs to brush up on his test-reporting skills. Anyone who spends the money on the Imatest software and the test target, not to mention the time to run the tests, can generate mind-boggling MTF charts. The question is whether the results are valid. In this particular article, I find that Puts is making sweeping claims in exactly the same way as the guy at dchome.