Blender 3D: Noob to Pro/Understanding the Camera

Real-World Cameras

Before discussing the camera in Blender, it helps to understand something about how cameras work in real life. We have become so accustomed to the quirks and limitations of real photographs that 3D software like Blender often expends a lot of effort to mimic those quirks.

When taking a photo with a real camera, a number of important factors come into play:

- Focus: the lens must be focused at the right distance; objects outside the resulting depth of field appear blurred.
- Exposure time: a longer exposure gathers more light, but moving objects leave motion blur.
- Aperture: a wider aperture admits more light, but gives a shallower depth of field.
- Sensitivity: higher gain (ISO) brightens the image at the cost of added noise.
- Focal length: a longer lens gives a narrower field of view, a shorter lens a wider one.

As you can see, many of these factors interact with each other. The brightness of the image can be affected by the exposure time, the aperture, the gain sensitivity and the focal length of the lens, and each of these has side-effects on the image in other ways.

Blender and other computer graphics software are, in principle, free of the problems of focus, exposure time, aperture, sensitivity and focal length. Nevertheless, it is common to introduce deliberate motion blur into an image to give the impression of movement. Sometimes it is also useful to introduce a deliberately shallow depth of field, blurring objects in the background in order to draw emphasis to the important part of the image, i.e. the part that is in focus.

Exposure (the total amount of light captured in the image) is also less of a problem in computer graphics than in real-world photography, because in computer graphics you always have total control over the amount and placement of lighting in the scene. Nevertheless, if you’re not careful, you can produce overexposed images (bright areas losing detail by saturating to a solid, featureless white) or underexposed images (dark areas losing detail by becoming solid black).

The field of view issue arises from basic principles of geometry, and Blender’s camera is just as much subject to that as real cameras.

The Camera In Blender

Here we are going to concentrate on the important issue of field of view.

You can change the field of view in two ways: move the camera closer to or farther from the scene (called dollying in film/TV production parlance), or change the angle of the lens (zooming). You do the latter in the Object Data tab of the Properties window (the camera has to be selected with RMB in Object Mode, or the required tab will not be visible).
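Both operations can also be done from Blender's Python API. Here is a minimal sketch using bpy, assuming the default scene, in which the camera object is named "Camera":

```python
import bpy

cam = bpy.data.objects["Camera"]

# Dollying: physically move the camera. Here we simply shift it
# 2 units along the world Y axis; in a real scene you would move
# it along its own viewing direction.
cam.location.y += 2.0

# Zooming: leave the camera where it is and narrow the lens angle
# instead, by increasing the focal length (in millimetres).
cam.data.lens = 85.0
```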

Perspective is the phenomenon where objects that are farther away from the viewer look smaller than those nearby. More than that, different parts of the same object may be at different distances from the eye, leading to a change in the apparent shape of the object called perspective distortion. The mathematical theory of perspective was worked out by Alhazen in the 11th century, and famously adopted by the Italian Renaissance painters four hundred years later.

Here are two renders of the same scene with two different cameras, to illustrate the difference.

This one moves the camera closer but gives it a wider field of view:

This one moves the camera back, while narrowing its field of view, to try to give the scene the same overall size:

The latter is like using a “telephoto” lens on a real camera. Notice how the wider field of view gives a greater perspective effect. The boxes are all cuboids, with parallel pairs of opposite faces joined by parallel edges, yet there is a noticeable angle between notionally-parallel edges in both images, and it is more pronounced in the first image. That is what perspective distortion is all about.

Specifying the Field of View

When you select a camera object, its settings become visible in the Camera context of the Properties window, which should initially look something like the screenshot at right.

Photographers are accustomed to working in terms of the focal length of the lens: longer means a narrower field of view, shorter means a wider one. But the field of view also depends on the size of the sensor (the image-capture area). Modern digital cameras typically have a smaller sensor than the exposed film area of older 35mm film cameras, so focal-length figures have to be adjusted accordingly to give the same field of view.
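This adjustment is the familiar “crop factor”: to keep the same field of view, the focal length scales with the sensor width. A small plain-Python sketch of the idea; the sensor widths here are typical illustrative values, not taken from any particular camera:

```python
FULL_FRAME_WIDTH = 36.0   # mm, the classic 35mm film frame width

def equivalent_focal_length(focal_length_mm, sensor_width_mm):
    """Full-frame-equivalent focal length for a smaller sensor."""
    return focal_length_mm * (FULL_FRAME_WIDTH / sensor_width_mm)

# A 35 mm lens on an APS-C-sized sensor (~23.5 mm wide) frames the
# scene roughly like a 54 mm lens would on full-frame film:
print(equivalent_focal_length(35.0, 23.5))  # ≈ 53.6
```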

Blender allows you to work this way, by specifying the focal length in the “Lens” panel, and the sensor size in the “Camera” panel. It even offers a “Camera Presets” menu, which sets the sensor size for any of a range of well-known cameras.
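In a script, those two panel settings correspond to the lens and sensor_width properties of the camera data. A minimal sketch, again assuming the default camera name; the values are illustrative, not a specific preset:

```python
import bpy

cam_data = bpy.data.objects["Camera"].data

# Focal length, as in the "Lens" panel (millimetres).
cam_data.lens = 35.0

# Sensor width, as in the "Camera" panel (millimetres);
# 23.5 mm is an APS-C-like example value.
cam_data.sensor_width = 23.5
```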

Also, you might be compositing your computer-generated imagery on top of an actual photograph, in which case, to make the result look realistic, you need to closely match the characteristics of the camera and lens that took the photo. If you know the lens focal length and the camera's sensor size, it makes sense to be able to plug those values in directly.

If, on the other hand, you are generating completely synthetic imagery rather than compositing over photos, you might consider this a somewhat roundabout way of working. Why not specify the field of view directly as an angle?

Blender allows for this as well. From the popup menu in the Lens panel that says “Millimeters”, select “Field of View” instead, and the Focal Length field will turn into a Field of View field, showing the angle in degrees directly. This is much easier to relate to the geometry of the scene!
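The same switch is available from Python. A minimal sketch; note that bpy stores the angle in radians, even though the UI displays degrees:

```python
import bpy
import math

cam_data = bpy.data.objects["Camera"].data

# Switch the Lens panel from focal length to a direct angle...
cam_data.lens_unit = 'FOV'

# ...and set a 60° field of view (bpy expects radians here).
cam_data.angle = math.radians(60.0)
```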

Note:

Relationship between the two: if the width of the sensor is d, the focal length of the lens is f, and the angle of view is θ, then they are related by θ = 2 arctan(d / 2f).
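For example, a 36 mm-wide sensor behind a 50 mm lens gives θ = 2 arctan(36/100) ≈ 39.6°. A quick numeric check in Python:

```python
import math

d = 36.0  # sensor width in mm
f = 50.0  # focal length in mm

theta = 2.0 * math.atan(d / (2.0 * f))
print(math.degrees(theta))  # ≈ 39.6 (degrees)
```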

 
