!!!Sorry I haven’t touched this in a while, Very Busy!!!!
Okay, this might be a biggy and we’ll see how many days and beers it takes to get it done. There are a number of reasons I want to write down some technical vomit, as I call it. Just a mass of information. A lot of misconceptions have come my way through peers, blogs, and forums, and then this Jim Andre guy asked me what camera to buy, and I talked a bit over his head with technobabble, and he said it’d be great if I put it all down. Now, I am nobody and what I write is not gold or the word of God, but I did go to graduate school for linguistics and hated my grammar class, and I read about a book every two weeks on cameras, codecs, building lenses, and other random tech stuff. This was all out of resentment for getting awful-looking footage when I shot digital or transferred film over. I can remember going in to shoot a very long short of mine with an HVX200, the holy grail of the first generation of HD cameras. Well, my blacks looked crappy: muddy and not really black. Why can’t I have black? I’ll give the answer I found later on, but most of these sections might have real-world consequences for how you light and frame shots. This is intended for lower-end cameras, sub-Red level. You can get great images out of them, but an Arri Alexa kind of scoffs at them, in my opinion. Speaking of which, take a lot of this as professional opinion, meaning opinion backed up by half-baked technical knowledge. Feel free to argue, challenge, or spit at my blog if you feel the need to.
There is more literature on sensors than on almost anything else in this field. A lot of people really get into what is going on inside their sensors.
1. What do sensors actually see?: Simply, but not entirely accurately, put: camera sensors are like photovoltaic cells that collect light and turn it into energy. The more light, the more voltage, the more signal is read, the brighter the interpreted image. Pretty simple. dB is a measurement of gain, or how much the signal coming off the sensor is amplified, which effectively makes the camera more sensitive. This is usually compared to old-school ISO, but it is really more in tune with pre-flashing film or pushing and pulling during development. By amplifying the signal you boost the exposure, but you also raise the noise floor. This has a number of negative consequences, and the art of it really comes down to whether you are after exposure or grain. Raising the gain on a sensor should not be thought of as being as straightforward as using a higher ISO; that is a misconception carried over from film days. Higher ISOs in digital have a more negative impact on the image than they did on film. The noise reduction software in cameras, or in your NLE (non-linear editing software), can reduce detail levels and overall image sharpness. Of course, when your film went to a lab the answer print was de-grained, and that also caused loss of detail, but the method used was far less obtrusive.
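If you like numbers, the usual rule of thumb is that roughly every +6 dB of gain doubles the sensitivity, the same as adding one stop or doubling ISO. Here is a quick sketch of that mapping; the 6 dB/stop figure and the base ISO of 320 are assumptions for illustration, so check your camera's manual for its actual behavior.

```python
# Rule-of-thumb mapping from gain in dB to an equivalent ISO rating.
# Assumes ~6 dB per stop and a hypothetical native ISO; real cameras vary.
def iso_equivalent(base_iso, gain_db, db_per_stop=6.0):
    """Approximate ISO rating after applying gain_db of gain."""
    return base_iso * 2 ** (gain_db / db_per_stop)

print(iso_equivalent(320, 0))    # 320.0, native sensitivity
print(iso_equivalent(320, 6))    # 640.0, one stop brighter, and noisier
print(iso_equivalent(320, 12))   # 1280.0, two stops, noisier still
```

Remember the point above: unlike film, that extra stop comes out of amplification, noise floor and all.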
Interestingly enough, sensors are color blind and use a color overlay in a specific pattern to determine what is what color. For example, if a pixel node has a green gel over it, then when green light hits it, more green light is let through, the other colors are rejected, and the pixel node now knows that it is indeed supposed to be green. The red pixel next to it gets very little light, and therefore the computer in the camera knows that there is not supposed to be any red in this area. The processor in the camera compares this information, runs an algorithm, and voilà, you get color. Bayer is the most common way to lay out the color pattern. It uses two green pixels, one blue, and one red. It turns out we see more green than any other color; must be a money thing. This color pattern is so common in interchangeable-lens cameras that it doesn’t affect what we do. But fixed-lens systems can use a couple of interesting designs that have certain advantages over interchangeable ones. The oldest and most common is the 3-MOS or 3-CCD system, which uses three small sensors and a beam-splitter prism that separates the three primary colors and sends each one to a different sensor. This way a Bayer pattern isn’t needed; a whole chip is used to capture each color, which gives way more information to the camera and can produce quite saturated colors, almost reminiscent of Kodachrome. A lot of ENG cameras use this, and with their fixed Fujinon glass (and no one shakes a stick at Fujinon) the image quality can be quite superb. Three sensors also means more fidelity in areas like moiré, noise reduction, and whatever else the camera company can think of. Cross-checking each sensor against the others yields good results, but alas, they only do this with smaller chips, 2/3-inch or less, so narrative filmmakers do not buy them and are superstitious enough to think the image quality is crap. Right, size is all that matters.
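To make the Bayer layout concrete, here is a toy sketch of the RGGB mosaic. It only illustrates the pattern, two greens for every red and blue, and is not any camera's actual demosaicing algorithm.

```python
# Toy sketch of an RGGB Bayer mosaic: each photosite sees one color only.
# Purely illustrative; real demosaic algorithms are far more sophisticated.
def bayer_color(row, col):
    """Which color filter sits over the photosite at (row, col), RGGB layout."""
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

# Count filters over a 4x4 patch: two greens per red and per blue.
counts = {"R": 0, "G": 0, "B": 0}
for r in range(4):
    for c in range(4):
        counts[bayer_color(r, c)] += 1
print(counts)   # {'R': 4, 'G': 8, 'B': 4}
```

Half of all photosites are green, which is exactly the bias toward green sensitivity mentioned above.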
2. Does a bigger sensor matter, or is it the motion of the photons?: Yes, but no, but more yes than no. A bigger sensor affects image quality in many ways, to advantage and disadvantage. Camera manufacturers are struggling to make these differences more and more minute on both ends of the scale. It’s easier to make a medium-sized sensor; full frames are just now reaching high resolutions, and small sensors get really, really small and are difficult to manufacture. Interestingly enough, the original analog video sensors (camera tubes) were three inches big. That’s roughly 76mm, around the size of IMAX film. But it looked like crap, and that was because the technology just sucked back then compared to now. Three-inch sensors are still used in optical transfer of film, though, because they are so damn big and have low ISO: no grain gets added to the equation. Yes, even 35mm is small compared to some.
The way sensor size affects your image is mainly in crop factor. This is a weird, but popular, way to think about sensors. Since 35mm was so predominant, every sensor is compared to that size. So a sensor half the size of 35mm is considered a 2x crop because it only has half the field of view of a 35mm sensor with a lens of the same focal length on it. Along with this crop comes a change in perceived depth of field: a 50mm lens always has the same depth of field, but when you can only see part of the image area, the out-of-focus areas are limited. This is both good and bad. 35mm sometimes has too little depth of field. I remember trying to take pictures of my daughter with a 35mm lens at f/2.8. To get her whole face in focus I had to step back quite a bit, and take a bad shot, to get it. So a smaller sensor can sometimes save you, and it makes pulling focus easier. Even Super 35mm was smaller than full frame, and this was because focus pulling stayed pleasing but not terribly hard to do. There is a limit to this, though: a sensor that is too small, say 8mm, has way too much depth of field, and you have to focus really close to get any real play. For sure there is a shallow depth of field party going on in movies these days, and the advent of cheap large sensors has popularized it, but most people don’t understand that full-frame 35mm can be very difficult to focus in low light. Try medium format.
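Since the crop-factor arithmetic trips people up, here is a little sketch. The reference diagonal is full-frame 35mm (36×24mm), and the Micro Four Thirds dimensions are nominal, so treat the numbers as illustrative rather than gospel.

```python
import math

# Full-frame 35mm is 36x24mm, giving a ~43.3mm diagonal as the reference.
FF_DIAG = math.hypot(36, 24)

def crop_factor(width_mm, height_mm):
    """Ratio of the full-frame diagonal to this sensor's diagonal."""
    return FF_DIAG / math.hypot(width_mm, height_mm)

def equivalent_focal_length(focal_mm, width_mm, height_mm):
    """Focal length that would give the same field of view on full frame."""
    return focal_mm * crop_factor(width_mm, height_mm)

# Micro Four Thirds is roughly 17.3 x 13mm:
print(round(crop_factor(17.3, 13), 2))                # 2.0
print(round(equivalent_focal_length(25, 17.3, 13)))   # a 25mm frames like a 50mm
```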
Another thing you might be thinking is that the crop factor means it is harder to get a wide angle. Well, remember that on full 35mm we rarely went below 24mm as a wide angle; 17mm was too damn wide, distorted, and almost fishy, and below that the image just looked circular. With smaller sensors you can use wider focal length glass with less distortion because the imaging circle is smaller. This is why for 16mm we can get 5mm lenses with little distortion. So in the end it doesn’t matter what system you use for wide angle, because it balances out. Of course, I’m ignoring Nikon’s famous 6mm fisheye that costs 12 grand.
Now for the biggest advantage of shooting with a larger frame, and that is low light. This can be a harder concept to grasp. I mean, a smaller sensor should need less light, right? It’s smaller, therefore the same amount of light hitting it, compared to a full frame, would fill more of the space. But no, not quite. The light is just focused differently, and most of it just doesn’t get used the same way. Take the metaphor of a flashlight being shined on two different-sized walls. The same amount of light is coming out of the flashlight. On the bigger wall it barely fills the area and leaves dark corners: vignettes. On the smaller wall it fills the area nicely, but some of the light goes over the edges and off into oblivion, the great beyond. The light coming through is the iris, so the f-stop remains the same for exposure. The real difference comes if we change that flashlight into a projector with an image. You would notice that on the larger wall the details of the image are clearer because they are spread out. The smaller wall would have all the same details, but they would not appear as clear because the details are jumbled up. There are quite a few things wrong with this last part of the analogy, but it’s a good start to understanding the relationship. Larger frames are generally better at capturing details because there is more area for the details to fall on, and when we include pixel placement, this is essential in low-light photography. Pixels and algorithms drop information; in my analogy, you could just walk up to the wall and see the same detail. Later I will talk about how knowing your print size is what truly matters.
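The wall analogy boils down to simple arithmetic: at the same f-stop, the light per unit area landing on the sensor is the same, so the total light captured just scales with sensor area. A rough sketch, with nominal sensor dimensions and everything else (lens transmission, pixel design) deliberately ignored:

```python
# At a fixed f-stop, illuminance per unit area is fixed, so total captured
# light scales with sensor area. A simplification that ignores lens and
# pixel differences, but it shows the low-light edge of a bigger chip.
def total_light(sensor_w_mm, sensor_h_mm, illuminance=1.0):
    """Relative total light: area times per-area illuminance."""
    return sensor_w_mm * sensor_h_mm * illuminance

ff = total_light(36, 24)       # full frame
m43 = total_light(17.3, 13)    # Micro Four Thirds, nominal dimensions
print(round(ff / m43, 1))      # ~3.8x more total light at the same f-stop
```

That roughly 4x figure is just the crop factor squared, which is why the low-light gap between formats tracks the crop factor so closely.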
3. Sensor sizes: Common ones. Motion picture: 8mm, Super 8, 16mm, Super 16, Super 35, 65mm, 70mm. Photographic: 9.5, 10, 11 (all consumer stuff throughout history), 35mm, full plate and 1/8 divisions of that, 4.5×6cm, 6×6cm. Digital: 1/1.65″, 1/4″, 1/3″, 1/2″, 2/3″, 4/3, APS-C, APS-H, Super 35mm, Red (three different sizes), full frame, 3″, 4×6. The links below and what I’ve said before explain this much better.
This one is good for Red users, since the size of the Red sensors is never obvious. One flaw I notice is that it doesn’t talk about aspect ratio; that would be a good place to put a note.
This blog is a bit biased, but I still think it is the best place I’ve seen to get the best info. I say it is biased because the guy who writes for the website is obsessed with sharp lenses. He gives kudos to good lenses, but he really favors and pushes the best of the best. Sometimes I find sharp lenses to be harsh and ugly: they can give a plastic effect on digital, and on film they sometimes just seemed to reveal the grain too much. I think lenses are a personal choice and you should really try them out to see if you like them. Another issue is that he doesn’t talk about print size, which affects the perception of sharpness more than anything. I will come back to this later, because it is a lot to consider and most people won’t listen anyway. Enjoy this post I did earlier on the 4/3 rumor website. Some silly goose on there. Reminds me of why I am working on this.
A troll speaking on the new Voigtlander 17.5mm f/0.95 lens:
Its light-gathering capability isn’t greater than that of an f/2.0 35mm lens. The difference is that the minuscule m43 sensor requires less light. Thus for exposure it is a true f/0.95, but it doesn’t gather more light as such.
No, actually this is incorrect, but somewhat on the right track. It can be confusing, but the smaller imaging circle allows for elements with less curvature and a more exact converging point. So when you run the f-stop equation of the lens, the focal point is positioned differently and gives you a lower f-stop. It is the same mechanic that influences the f-stop sweet spot and diffraction. Hence why 4/3 lenses are sharper than full-frame counterparts at the same low f-stop. Look at medium format and compare it to full frame. You can hardly get under f/2.8 on those lenses, and they are usually stopped down to f/16 to be at their sharpest. Interestingly, though, there are exceptions, such as some Leica f/1 lenses and the famous Kubrick f/0.7, but this is only possible in a standard lens (40-50mm) because the curvature of field is so limited that light beams stay nearly parallel throughout.
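For reference, the f-stop equation everyone is arguing over is just N = focal length / entrance pupil diameter. A quick sketch of what that implies for these fast lenses; the diameters are back-of-envelope figures, not measured optics.

```python
# f-number N = focal_length / entrance_pupil_diameter, rearranged to show
# how big the glass has to get. Back-of-envelope only: real entrance pupils
# are measured optically, not ruler-ed off the front element.
def aperture_diameter(focal_mm, f_number):
    """Entrance pupil diameter implied by a focal length and f-number."""
    return focal_mm / f_number

print(round(aperture_diameter(17.5, 0.95), 1))  # ~18.4mm for the Voigtlander
print(round(aperture_diameter(50, 0.7), 1))     # ~71.4mm for the Kubrick f/0.7
```

Notice the short 17.5mm focal length keeps the f/0.95 pupil modest, while a 50mm at f/0.7 needs a pupil wider than the whole full-frame diagonal, which is part of why such lenses are so rare.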
Anyway, I don’t know much about this, but it is a distinct and funny advantage of 4/3, along with other formats. With full frame, low light looks better, but opening up a lens has adverse consequences. Also, curvature of field hurts wider formats, so wide-angle landscape lenses suffer from less-sharp corners, whereas on 4/3 it is easier to manufacture a wide-angle lens with sharp corners. Just check out the MTF charts on fisheyes and wide angles from the two systems and you will see the drop-off by percentage is much less extreme on 4/3. Best to carry both systems with you and understand them for what they are.
Next time: Why not make everything Full Frame?
Suggestions: write to me firstname.lastname@example.org