Friday, January 21, 2011

Fewer Fucking Pixels!

I'm still likely going to buy the Panasonic GH2; I'm simply waiting to see what Olympus has up its sleeve. What bothers me is that instead of concentrating on noise reduction, Panasonic upped the pixel count on the GH2. The smaller the pixel, the higher the gain required to achieve a set ISO. This was never much of an issue for me until recently. I've had my GF1 for slightly over a year, and generally it does everything that I want. But I was out with my tripod a week or so ago and decided to do a long exposure. Not ridiculously long, just five seconds, but sure enough, I had a hot pixel. It was only one, and easily fixed in post, but I'd imagine the number of hot pixels would increase with, say, a ten-second exposure or longer.

I understand that this isn't much of an issue. But it goes toward my assessment of the general attitude of the camera manufacturers. They're still interested in numbers, which I assume means that we consumers are still interested in numbers. What it means for me is that my GF1 is not sufficient for all of my applications. I like the 12MP rating. I think that it's a good trade-off between detail and noise, especially at low ISO. But I would rather have seen a generational advance in noise reduction. I'm already disappointed as hell in the E-5; I want to see something new to get my blood flowing, like the GF1 and the Pen cameras did.

Moreover, the 4/3 format is ripe for experimentation. It's finally found its niche. The original 4/3 format, backed almost exclusively by Olympus, was a failure, but Micro 4/3 is hot shit. Panasonic's multi-aspect sensor in the GH1 and GH2 illustrates a cool advantage of the small 4/3 sensor: you can experiment with its layout and design without adversely affecting the final size of the camera. Doing the same thing in an APS-C camera likely wouldn't be possible. Manufacturers could build a larger sensor and let users play around with aspect ratios, or even move the lens forward and back.

What I'm trying to say, in the end, is that the Micro 4/3 format feels usable. It's small and almost toy-like. It's accessible and friendly, as opposed to the massive gear associated with full-frame cameras and their philosophical little brothers, the APS-C cameras. This is the format to use to experiment with niches and get an entire generation interested in the wild possibilities of photography. Just imagine Panasonic selling Lensbaby lenses at Best Buy. That would be great.

UPDATE: There's an article over at Luminous Landscape discussing the nature of sensors and pixels. Basically, the argument is that it's not the size of the pixel that matters but the size of the sensor: for any given ISO, you're going to have a set amount of noise over the surface of the sensor, and it doesn't really matter whether you spread that out over 10MP or 20MP.

From experience, and also technically, I disagree. The author addresses the connecting hardware that makes each pixel's light-gathering area smaller at higher MP ratings, but the formula essentially assumes it away: take the surface area of one pixel, split it into two pixels, thus doubling the MP rating, and you still have the same total surface area as the previous, larger pixel. The same amount of light is being detected.
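One concrete reason the same-total-area argument can break down is that some noise is paid per pixel, not per unit of area: every photosite contributes its own read noise when it's sampled. Here's a toy simulation of that idea, with entirely invented numbers (a dim 100-photon patch and 5 electrons of read noise per pixel), just to show the direction of the effect:

```python
import random
import statistics

random.seed(42)

PHOTONS = 100      # photons hitting a fixed patch of sensor (dim scene, invented)
READ_NOISE = 5.0   # electrons of read noise per pixel readout (invented)
TRIALS = 2_000

def expose(n_pixels):
    """One exposure of the same patch divided into n_pixels pixels."""
    total = 0.0
    for _ in range(n_pixels):
        mean = PHOTONS / n_pixels
        # Shot noise: photon counts fluctuate by roughly sqrt(mean) per pixel.
        signal = random.gauss(mean, mean ** 0.5)
        # Each physical pixel adds its own read noise when it is read out.
        total += signal + random.gauss(0.0, READ_NOISE)
    return total

# The total signal collected is identical, but the noise in the summed
# patch grows with the pixel count, because read noise is charged once
# per pixel rather than once per unit of area.
for n in (1, 2, 4):
    samples = [expose(n) for _ in range(TRIALS)]
    print(f"{n} pixel(s): stdev = {statistics.stdev(samples):.1f}")
```

In bright light the photon shot noise swamps this effect, which is why the difference mostly shows up in long exposures and high-ISO shots, exactly the cases where the hot pixels turned up.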

We have a few examples proving this not entirely correct, such as DxOMark's ratings of the Panasonic GH1 and GH2. They are very similar sensors; the GH2 simply has the higher pixel count, and it underperforms the GH1 by four points on DxOMark's scale. Moreover, in each camera generation, it's the camera with the highest pixel count that gets hurt the most by noise.

Again, if we follow the formula used and simply scale the higher-resolution image down to the resolution of the lower-resolution image, we do achieve similar levels of dynamic range and noise, but it's artificial. The process of photons randomly hitting the pixels on the sensor, thus determining the exposure, is an analog process. It is the process most representative of the scene we're trying to photograph. In post-production, we have to digitally decide how to average the smaller pixels together into a hypothetical larger pixel. How can the program know it got it right?
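To be fair, the downscaling half of the argument does check out on paper: averaging uncorrelated per-pixel noise over 2x2 blocks cuts it in half, exactly as the formula predicts. A quick sketch (the noise figure is invented, and the key assumption is that the noise is independent and Gaussian, which real sensors with hot pixels and correlated noise don't guarantee):

```python
import random
import statistics

random.seed(1)

NOISE = 10.0   # per-pixel noise stdev in the high-res image (invented)
N = 100_000    # number of high-resolution pixels simulated

# Pure noise values, one per high-resolution pixel.
hi_res = [random.gauss(0.0, NOISE) for _ in range(N)]

# "Downscale" by averaging each group of 4 pixels into one larger pixel.
lo_res = [statistics.mean(hi_res[i:i + 4]) for i in range(0, N, 4)]

print(f"high-res noise:   {statistics.stdev(hi_res):.1f}")  # ~10
print(f"downscaled noise: {statistics.stdev(lo_res):.1f}")  # ~5
```

The square-root-of-four improvement only holds for that idealized noise; a single hot pixel in a block gets smeared into the averaged result rather than cancelled out.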

Finally, the formula used in the article would be perfectly accurate if all pixels measured the same thing, but they don't. Each pixel only registers a red, green, or blue value. If we take a set area of a sensor, fill that area with an RGB set of pixels, then triple the pixel count, we must fit an entire RGB assembly where the blue pixel once was. That means that that area of the sensor is now 1/3rd as sensitive to blue light as it once was.

Averaged over a large area, this effect is minimized, which is supposedly what DxO Mark does, but it obviously doesn't fully negate the effect since small areas of the sensor will be less sensitive to particular colors than they once were, introducing hot pixels and noise. These hot pixels and noise will then be averaged into the final image when we try to scale the image down to the size that a lower-resolution sensor would have produced. This means that the raw materials from which the image is produced are more numerous, theoretically allowing an average to be high quality, but each individual pixel is of lower quality.
