Why do camera flashes make eyes red and why do two flashes correct this problem?

The retinas of your eyes appear reddish when you look at them with white light. The red-eye problem occurs because light from the flash passes through the lens of your eye, strikes the retina (which is how you see the flash), and reflects back toward the camera. This reflection is mostly red light, and it is directed very strongly back toward the camera. The camera captures this red reflection very effectively, and so eyes appear red. The double flash is meant to get the irises of your eyes to contract, as they do whenever your eyes are exposed to bright light. The first flash causes your irises to contract so that less light from the second flash can pass into and out of your eyes. Unfortunately, this trick doesn’t work all that well.
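
As a rough worked illustration (the pupil diameters below are assumptions, not figures from the text), the amount of flash light that can enter the eye scales with the pupil’s area, so a pre-flash that contracts the pupil cuts the red reflection by roughly the ratio of those areas:

```python
import math

def pupil_area(diameter_mm):
    """Area of a circular pupil, in square millimetres."""
    return math.pi * (diameter_mm / 2) ** 2

# Illustrative values (assumptions): a dark-adapted pupil of ~6 mm across,
# shrinking to ~3 mm after the pre-flash.
wide, contracted = 6.0, 3.0

ratio = pupil_area(wide) / pupil_area(contracted)
print(f"Area ratio: {ratio:.1f}x")   # ~4x less light gets in (and back out)
```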

Why do people in flash pictures have “red eye”? How do cameras try to solve that problem?

When light from the flash illuminates people’s eyes, that light focuses onto small spots on their retinas. Most of the light is absorbed, but a small amount of red light reflects. Because the eye’s lens focused light from the flash onto a particular spot on the retina, the returning light is focused directly back toward the flash. The camera records this returning red light and the eyes appear bright red. To reduce the effect, some flashes emit an early pulse of light. People’s pupils shrink in response to this light and allow less light to go into and out of their eyes. Professional photographers often mount their flashes a foot or more from the lens so that the back-reflected red light that returns toward the flash misses the lens.
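
A small geometric sketch (with made-up distances) of the flash-offset trick mentioned above: the red light returns in a narrow cone aimed back at the flash, so if the flash sits far enough from the lens, the lens falls outside that cone.

```python
import math

def offset_angle_deg(separation_m, subject_distance_m):
    """Angle (degrees) between flash and lens, as seen from the subject's eye."""
    return math.degrees(math.atan2(separation_m, subject_distance_m))

# Assumed numbers: a built-in flash ~5 cm from the lens versus a bracket-mounted
# flash ~30 cm (about a foot) away, with the subject 2 m from the camera.
for separation in (0.05, 0.30):
    angle = offset_angle_deg(separation, 2.0)
    print(f"separation {separation*100:.0f} cm -> {angle:.1f} degrees off-axis")

# If the retroreflected cone is only a few degrees wide (an assumption), the
# 30 cm offset puts the lens outside that cone, while 5 cm does not.
```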

Why is film ruined when it is exposed to light?

Photographic film chemically records information about the light that it has absorbed. Normally, this light was projected onto it by a lens and formed a clear, sharp pattern of the scene in front of the camera. However, if light strikes the film uniformly, the information recorded on the film will have nothing to do with an image. The entire sheet of film will record intense exposure to light, and its chemical record will have no structure.

Does your pupil opening and closing have anything to do with it focusing on a more distant object?

The size of your pupil does not depend on the distance to an object. It depends only on how bright the scene in front of you is. But the size of your pupil does affect your ability to focus. When it is relatively dark and your pupil is wide open, the whole lens of your eye is involved in light gathering. Focusing becomes very critical and you have very little depth of focus. Moreover, if your lens isn’t perfect, you will see things as blurry. But when it is bright out and your pupil is small, you are only using the center portion of your lens and everything is in focus. That’s why it is harder to focus at night than during the day. When you squint, you are artificially shrinking the effective diameter of the lens in your eye and increasing your depth of focus. Unfortunately, you are also reducing the amount of light that reaches your eye. If you look through a pinhole in a sheet of paper, you will find everything in focus, although it will appear very dim.
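
A thin-lens sketch of the depth-of-focus idea, using assumed eye dimensions (the 20 mm lens-to-retina distance and the pupil sizes are illustrative stand-ins, not anatomical data): for a given focusing error, the blur spot on the retina shrinks in direct proportion to the pupil diameter.

```python
def image_distance(f_mm, object_mm):
    """Thin-lens equation 1/f = 1/o + 1/i, solved for the image distance i."""
    return 1.0 / (1.0 / f_mm - 1.0 / object_mm)

def blur_on_retina(pupil_mm, retina_mm, f_mm, object_mm):
    """
    Diameter of the blur spot for an object that is not perfectly in focus,
    by similar triangles: the cone of light converging toward the image point
    is intercepted by the retina before (or after) it comes to a point.
    """
    i = image_distance(f_mm, object_mm)
    return pupil_mm * abs(i - retina_mm) / i

# Assumed numbers: retina 20 mm behind the lens, eye focused on an object
# 1 m away, and a second object 0.5 m away that is therefore out of focus.
retina = 20.0
f = 1.0 / (1.0 / retina + 1.0 / 1000.0)        # focal length that puts 1 m in focus

for pupil in (6.0, 2.0):                        # dark-adapted vs. bright-light pupil
    blur = blur_on_retina(pupil, retina, f, 500.0)
    print(f"pupil {pupil} mm -> blur spot {blur:.3f} mm")
# The blur spot scales with the pupil diameter, which is why a small pupil
# (or a pinhole) gives a larger depth of focus.
```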

How does a video camera work?

There are many parts to this question, so I’ll deal with only two: how the camera forms an image of the scene in front of the camera on its imaging chip and how the camera obtains a video signal from that imaging chip. The first part involves a converging lens—one that bends rays of light toward one another. As the light from a particular spot in the scene passes through the camera’s lens, the lens slows the light down. Because the lens’ surfaces are curved, this slowing process causes the light rays to bend so that they tip toward one another. These rays continue toward one another after they leave the lens and they all meet at a single point on the surface of the camera’s imaging chip. That point on the chip thus receives all the light from only one spot in the scene. Likewise, every point on the imaging chip receives light from one and only one spot in the scene. The lens is forming what is called a “real image”—a pattern of light in space (or on a surface) that is an exact copy of the scene from which the light originated. You can form a real image of a scene on a sheet of paper with the help of a simple magnifying glass. The actual camera lens often contains a number of individual glass or plastic elements, which allow it to bend all colors of light evenly and to adjust the size and brightness of the real image that it forms on the imaging chip.
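
The geometry described here is summarized by the thin-lens equation, 1/f = 1/o + 1/i, relating the focal length f to the object distance o and the image distance i. A short sketch with illustrative numbers (the focal length and subject distance are assumptions) shows where the real image forms:

```python
def real_image(focal_mm, object_mm):
    """
    Thin-lens equation 1/f = 1/o + 1/i: returns the image distance and the
    magnification for a converging lens forming a real image.
    """
    image_mm = 1.0 / (1.0 / focal_mm - 1.0 / object_mm)
    magnification = -image_mm / object_mm   # negative: the real image is inverted
    return image_mm, magnification

# Illustrative numbers: a 10 mm focal-length lens and a subject 2 m away.
i, m = real_image(10.0, 2000.0)
print(f"image forms {i:.2f} mm behind the lens, magnification {m:.4f}")
# The imaging chip sits at that image distance; focusing the camera means
# adjusting the lens-to-chip spacing so the image lands exactly on the chip.
```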

The second part of this question revolves around the imaging chip. In this chip, known as a “charge-coupled device,” the arriving light particles, or “photons,” cause electric charge to be transferred into a narrow channel of semiconductor—that is, a material that can conduct electricity in a controllable manner. Each photon contains a tiny amount of energy and this energy is enough to move the electric charge into the channel. The imaging chip has row after row of these light-sensitive channels, so the pattern of light striking the chip creates a pattern of charge in its channels. To obtain a video image from these channels, the camera uses an electronic technique to shift the charge through the channels. The camera thus reads the electric charge point-by-point, row-by-row until it has examined the pattern of charge (and thus the pattern of light) on the whole imaging chip. This reading process is just what is needed to build a video signal, since a television also builds its image point-by-point, row-by-row. To obtain a color image, the imaging chip is covered with a tiny pattern of colored filters so that each point on its surface is sensitive to only one primary color of light: red, green, or blue. This sort of color sensitivity mimics that of our own eyes—our retinas respond only to red, green, or blue light, but we see mixtures of those three colors as a much richer collection of colors.
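
A toy simulation of that point-by-point, row-by-row readout (this is only a sketch of the idea, not the chip’s actual electronics; the charge values are random stand-ins for the pattern of light):

```python
import random

WIDTH, HEIGHT = 4, 3   # a tiny, hypothetical "imaging chip"

# Light striking the chip leaves a packet of charge at each point; here the
# charge values are simply made up at random.
channels = [[random.randint(0, 255) for _ in range(WIDTH)] for _ in range(HEIGHT)]

def read_out(channels):
    """Shift the charge out point-by-point, row-by-row, building a video signal."""
    signal = []
    for row in channels:        # one row of light-sensitive channels at a time
        for charge in row:      # one point of charge at a time within that row
            signal.append(charge)
    return signal

print(read_out(channels))       # the flattened, row-by-row "video signal"
```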

How does the camera know (measure) what the distance is to the object?

Modern cameras use a variety of techniques to find the distance to objects. Some cameras bounce sound off the objects and time how long it takes for the echo to return. Others observe the central portion of the image (presumably the object) from two vantage points simultaneously and then adjust the angles at which those two observations are made until the images overlap. This rangefinder technique is the one you use to sense distance with your eyes. You view the object through each eye and adjust the angles of view until the two images overlap (in your brain). At that point, you can tell how far away the object is by how crossed or uncrossed your eyes are. A rangefinder camera has two small viewing windows and lenses to look at the object, just as you have two eyes to look at the object. Finally, some cameras don’t really measure the distance to the object but instead adjust the lens until it forms the sharpest possible image. A sharp image has the highest possible contrast, while an out-of-focus image has relatively low contrast. The camera adjusts the lens until the light striking a sensor exhibits maximal contrast (brightest bright spots and darkest dark spots).
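
The last technique, maximizing contrast, can be sketched as a simple search: score the image produced at each candidate lens position and keep the position whose image has the highest contrast. The images and the contrast measure below are stand-ins, not a real camera interface:

```python
def contrast(pixels):
    """A crude contrast score: spread between the brightest and darkest pixels."""
    return max(pixels) - min(pixels)

def best_focus(images_by_lens_position):
    """Pick the lens position whose image shows the highest contrast."""
    return max(images_by_lens_position,
               key=lambda pos: contrast(images_by_lens_position[pos]))

# Hypothetical sensor readings at three lens positions: the sharp one has the
# brightest bright spots and the darkest dark spots.
images = {
    "near":  [90, 110, 100, 120],
    "sharp": [10, 240, 20, 230],
    "far":   [80, 130, 95, 125],
}
print(best_focus(images))   # -> "sharp"
```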