Is it possible to construct a capacitor capable of storing the energy in lightning, then allowing that energy to flow gradually into the power grid?

Actually, the system of cloud and ground that produces lightning is itself a giant capacitor and the lightning is a failure of that capacitor. Like all capacitors, the system consists of two charged surfaces separated by an insulating material. In this case, the charged surfaces are the cloud bottom and the ground, and the insulating material is the air. During charging, vast amounts of separated electric charge accumulate on the two surfaces—the cloud bottom usually becomes negatively charged and the ground below it becomes positively charged. These opposite charges produce an intense electric field in the region between the cloud and the ground, and eventually the rising field causes charge to begin flowing through the air: a stroke of lightning.

In principle, you could tap into a cloud and the ground beneath and extract the capacitor’s charge directly with wires. But this would be a heroic engineering project and unlikely to be worth the trouble. And catching a lightning strike in order to charge a second capacitor is not likely to be very efficient: most of the energy released during the strike would have to dissipate in the air and relatively little of it could be allowed to enter the capacitor. That’s because no realistic capacitor can handle the voltage in lightning.

Here’s the detailed analysis. The power released during the strike is equal to the strike’s voltage times its current: the voltage between clouds and ground and the current flowing between the two during the strike. Voltage is the measure of how much energy each unit of electric charge has and current is the measure of how many units of electric charge are flowing each second. Their product is energy per second, which is power. Added up over time, this power gives you the total energy in the strike. If you want to capture all this energy in your equipment, it must handle all the current and all the voltage. If it can only handle 1% of the voltage, it can only capture 1% of the strike’s total energy.
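
To make that arithmetic concrete, here is a rough back-of-the-envelope sketch in Python. The strike voltage, current, and duration below are illustrative round numbers, not measured values.

```python
# Rough estimate of the energy in a lightning strike and of the fraction
# captured by equipment that can only withstand part of the voltage.
# All input values below are illustrative assumptions, not measurements.

voltage = 100e6      # volts between cloud and ground (assumed)
current = 30e3       # amperes flowing during the strike (assumed)
duration = 50e-6     # seconds the current flows (assumed)

power = voltage * current     # watts: energy per second
energy = power * duration     # joules: power added up over time

# If the equipment can only stand 1% of the voltage, it captures only
# about 1% of the energy; the air dissipates the rest.
captured = energy * 0.01

print(f"Power during strike: {power:.2e} W")
print(f"Total energy:        {energy:.2e} J")
print(f"Captured (1% of V):  {captured:.2e} J")
```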

While the current flowing in a lightning strike is pretty large, the voltage involved is astonishing: millions and millions of volts. Devices that can handle the currents associated with lightning are common in the electric power industry but there’s nothing reasonable that can handle lightning’s voltage. Your equipment would have to let the air handle most of that voltage. The air would extract power from the flowing current in the lightning bolt and turn it into light, heat, and sound. Your equipment would then extract only a token fraction of the stroke’s total energy. Finally, your equipment would have to prepare the energy properly for delivery on the AC power grid—its voltage would have to be lowered dramatically and a switching system would have to convert the static charge on the capacitors to an alternating flow of current in the power lines.

How is a diode different from a piece of ordinary wire? — R

An ordinary wire will carry electric current in either direction, while a diode will only carry current in one direction. That’s because the electric charges in a wire are free to drift in either direction in response to electric forces but the charges in a diode pass through a one-way structure known as a p-n junction. Charges can only approach the junction from one side and leave from the other. If they try to approach from the wrong side, they discover that there are no easily accessible quantum mechanical pathways or “states” in which they can travel. Charges can only travel toward the p-n junction from the wrong side if something provides the extra energy needed to reach a class of less accessible quantum mechanical states. Light can provide that extra energy, which is why many diodes are light sensitive—they will conduct current in the wrong direction when exposed to light. That is the basis for many light sensitive electronic devices and for most photoelectric or “solar” cells.
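
For a rough numerical picture of this one-way behavior, the sketch below evaluates the ideal-diode (Shockley) equation, which the answer above does not invoke explicitly but which is a standard description of how the reverse current stays tiny while the forward current grows rapidly. The saturation current and thermal voltage are typical assumed values.

```python
import math

# Ideal-diode (Shockley) equation: I = I_s * (exp(V / V_T) - 1).
# I_S and V_T below are typical illustrative numbers, not from the answer.
I_S = 1e-12   # amperes, reverse saturation current (assumed)
V_T = 0.025   # volts, thermal voltage near room temperature

def diode_current(v):
    """Current through an ideal diode at applied voltage v (volts)."""
    return I_S * (math.exp(v / V_T) - 1)

for v in (-0.5, -0.1, 0.0, 0.3, 0.6, 0.7):
    print(f"V = {v:+.1f} V  ->  I = {diode_current(v):.3e} A")
# Reverse voltages give currents near -1e-12 A; forward voltages
# above about 0.6 V give currents of order an ampere.
```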

How is sound picked up on a microphone? — PB, Marion, MA

Sound consists of small fluctuations in air pressure. We hear sound because these changes in air pressure produce fluctuating forces on various structures in our ears. Similarly, microphones respond to the changing forces on their components and produce electric currents that are effectively proportional to those forces.

Two of the most common types of microphones are capacitance microphones and electromagnetic microphones. In a capacitance microphone, opposite electric charges are placed on two closely spaced surfaces. One of those surfaces is extremely thin and moves easily in response to changes in air pressure. The other surface is rigid and fixed. As a sound enters the microphone, the thin surface vibrates with the pressure fluctuations. The electric charges on the two surfaces pull on one another with forces that depend on the spacing of the surfaces. Thus as the thin surface vibrates, the charges experience fluctuating forces that cause them to move. Since both surfaces are connected by wires to audio equipment, charges move back and forth between the surfaces and the audio equipment. The sound has caused electric currents to flow and the audio equipment uses these currents to record or process the sound information.
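
Here is a small numerical sketch of that idea, with made-up plate dimensions and charge: holding the charge fixed while the plate spacing vibrates makes the voltage between the plates fluctuate, which drives current back and forth to the audio equipment.

```python
import math

# Parallel-plate model of a capacitance (condenser) microphone.
# Plate area, spacing, charge, and vibration amplitude are assumed values.
EPS0 = 8.854e-12          # F/m, permittivity of free space
AREA = 1e-4               # m^2, plate area (assumed, about 1 cm^2)
GAP0 = 25e-6              # m, resting plate spacing (assumed)
CHARGE = 1.7e-9           # C, fixed charge on the plates (assumed)

def plate_voltage(gap):
    """Voltage across the plates: V = Q / C, with C = eps0 * A / gap."""
    capacitance = EPS0 * AREA / gap
    return CHARGE / capacitance

# A 1 kHz sound vibrates the thin plate by +/- 1 micrometer (assumed).
for t_us in range(0, 1001, 250):          # five instants over one millisecond
    t = t_us * 1e-6
    gap = GAP0 + 1e-6 * math.sin(2 * math.pi * 1000 * t)
    print(f"t = {t*1e3:.2f} ms  gap = {gap*1e6:5.1f} um  V = {plate_voltage(gap):5.1f} V")
```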

In an electromagnetic microphone, the fluctuating air pressure causes a coil of wire to move back and forth near a magnet. Since changing or moving magnetic fields produce electric fields, electric charges in the coil of wire begin to move as a current. This coil is connected to audio equipment, which again uses these currents to represent the sound.
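
A comparable sketch for the electromagnetic (moving-coil) case, with assumed numbers: the induced voltage follows Faraday's law, EMF = -N · dΦ/dt, so a faster-moving coil produces a larger signal.

```python
import math

# Induced EMF in a moving-coil microphone: EMF = -N * dPhi/dt.
# Coil turns, field gradient, and motion amplitude are assumed values.
N_TURNS = 50              # turns of wire in the coil (assumed)
DPHI_PER_METER = 5e-4     # webers of flux change per meter of coil travel (assumed)
AMPLITUDE = 0.1e-3        # m, coil motion amplitude for a loud sound (assumed)
FREQ = 1000.0             # Hz, tone frequency

def coil_emf(t):
    """EMF at time t for sinusoidal coil motion x(t) = A * sin(2*pi*f*t)."""
    velocity = AMPLITUDE * 2 * math.pi * FREQ * math.cos(2 * math.pi * FREQ * t)
    return -N_TURNS * DPHI_PER_METER * velocity

peak = max(abs(coil_emf(k / (20 * FREQ))) for k in range(20))
print(f"Peak microphone signal: {peak * 1e3:.2f} mV")
```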

Is it true that you shouldn’t put a speaker near a microwave oven?

A microwave oven that’s built properly and not damaged emits so little electromagnetic radiation that the speaker should never notice. The speaker might have some magnetic field leakage outside its cabinet, and that might have some effect on a microwave oven. However, most microwave ovens have steel cases and the steel will shield the inner workings of the microwave oven from any magnetic fields leaking from the speaker. The two devices should be independent.

Why can you force the current from the n-type semiconductor to the p-type after a p-n junction has been created but you can’t force current from the p-type to the n-type?

Actually, you are asking about a current of electrons, which carry a negative charge. It’s true that electrons can’t be sent across the p-n junction from the p-type side to the n-type side. There are several things that prevent this reverse flow of electrons. First, there is an accumulation of negative charge on the p-type side of the p-n junction and this negative charge repels any electrons that approach the junction from the p-type end. Second, any electron you add to the p-type material will enter an empty valence level. As it approaches the p-n junction, it will find itself with no empty valence levels in which to travel the last distance to the junction. It will end up widening the depletion region—the region of effectively pure semiconductor around the p-n junction, a region that doesn’t conduct electricity.

How do bipolar transistors work? — BR

A bipolar transistor is a sandwich consisting of three layers of doped semiconductor. A pure semiconductor such as silicon or germanium has no mobile electric charges and is effectively an insulator (at least at low temperatures). Doped semiconductor has impurities in it that give the semiconductor some mobile electric charges, either positive or negative. Because it contains mobile charges, doped semiconductor conducts electricity. Doped semiconductor containing mobile negative charges is called “n-type” and that with mobile positive charges is called “p-type.” In a bipolar transistor, the two outer layers of the sandwich are of the same type and the middle layer is of the opposite type. Thus a typical bipolar transistor is an npn sandwich—the two end layers are n-type and the middle layer is p-type.

When an npn sandwich is constructed, the two junctions between layers experience a natural charge migration—mobile negative charges spill out of the n-type material on either end and into the p-type material in the middle. This flow of charge creates special “depletion regions” around the physical p-n junctions. In these depletion regions, there are no mobile electric charges any more—the mobile negative and positive charges have cancelled one another out!

Because of the two depletion regions, current cannot flow from one end of the sandwich to the other. But if you wire up the npn sandwich—actually an npn bipolar transistor—so that negative charges are injected into one end layer (the “emitter”) and positive charges are injected into the middle layer (the “base”), the depletion region between those two layers shrinks and effectively goes away. Current begins to flow through that end of the sandwich, from the base to the emitter. But because the middle layer of the sandwich is very thin, the depletion region between the base and the second end of the sandwich (the “collector”) also shrinks. If you wire the collector so that positive charges are injected into it, current will begin to flow through the entire sandwich, from the collector to the emitter. The amount of current flowing from the collector to the emitter is proportional to the amount of current flowing from the base to the emitter. Since a small amount of current flowing from the base to the emitter controls a much larger current flowing from the collector to the emitter, the transistor allows a small current to control a large current. This effect is the basis of electronic amplification—the synthesis of a larger copy of an electrical signal.
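
A minimal numerical sketch of that proportionality follows, assuming a current gain (often written beta) of 100, a typical value that is not given in the answer above.

```python
# In a bipolar transistor the collector current is proportional to the
# base current: I_collector = beta * I_base. The gain below is an assumed
# typical value, chosen only for illustration.
BETA = 100.0   # current gain (assumed)

def collector_current(base_current):
    """Collector-to-emitter current controlled by a small base-to-emitter current."""
    return BETA * base_current

for i_base in (10e-6, 50e-6, 100e-6):   # small base currents, in amperes
    i_coll = collector_current(i_base)
    print(f"I_base = {i_base*1e6:5.0f} uA  ->  I_collector = {i_coll*1e3:5.1f} mA")
```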

What was the difficulty in developing the blue LED? — JM, Hoboken, NJ

A light emitting diode (an LED) produces light when a current of electrons passes through the junction between its two pieces of semiconductor—from an n-type semiconductor cathode to a p-type semiconductor anode. The LED’s light is actually produced in the anode when an electron that has just crossed the p-n junction and is orbiting a positively charged region (called a “hole”) drops into the hole to fill it. In filling the hole, the electron releases energy and that energy becomes light through a process called fluorescence.

The energy in a particle of light (a photon) is related to the color of that light—with blue photons having more energy than red photons. Here is where the difficulty in making blue LEDs comes in: to produce a blue photon, the electron in an LED must give up lots of energy as it fills the hole in the anode. This need for a large energy release places a severe demand on the semiconductors from which the blue LED is made. These semiconductors need an unusually large band gap—the energy spacing between two types of paths that electrons can follow in the semiconductor. It wasn’t until recently that good quality semiconductors with the appropriate electrical characteristics were available for this task.
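
A quick calculation, sketched below, shows how much larger that energy is: the photon energy is E = h·c/λ, and the wavelengths used are typical values for red and blue light (they are not quoted in the answer above).

```python
# Photon energy E = h * c / wavelength, expressed in electron volts.
# The wavelengths below are typical values for red and blue light (assumed).
H = 6.626e-34        # J*s, Planck's constant
C = 2.998e8          # m/s, speed of light
EV = 1.602e-19       # joules per electron volt

def photon_energy_ev(wavelength_m):
    """Energy of one photon of the given wavelength, in eV."""
    return H * C / wavelength_m / EV

print(f"Red photon  (650 nm): {photon_energy_ev(650e-9):.2f} eV")   # ~1.9 eV
print(f"Blue photon (450 nm): {photon_energy_ev(450e-9):.2f} eV")   # ~2.8 eV
# A blue LED therefore needs a semiconductor with a band gap of roughly
# 2.8 eV, well above the roughly 1.9 eV that suffices for a red LED.
```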

Why is it that when you put two electric lamps into a circuit in parallel with one another, the current through the circuit increases, while when you put those two lamps in series with one another, the current through the circuit decreases?

When the two lamps are in parallel with one another, they share the current passing through the rest of the circuit. Current arriving at the two lamps can pass through either lamp before continuing its trip around the circuit. The two lamps operate independently and each one draws the current that it normally does when it experiences the voltage drop provided by the rest of the circuit. With both lamps providing a path for current, the current through the rest of the circuit is the sum of the currents through the two lamps.

But when the two lamps are in series with one another, each lamp carries the entire current passing through the circuit. Current arriving at the two lamps must pass first through one lamp and then through the other lamp before continuing its trip around the circuit. There is no need to add the currents passing through the lamps because it is the same current in each lamp. Moreover, the voltage drop provided by the rest of the circuit is being shared by the two lamps so that each lamp experiences roughly half the overall voltage drop. Since lamps draw less current as the voltage drop they experience decreases, these lamps draw less current when they must share the voltage drop. Thus the current passing through the circuit is much less when the two lamps are inserted into the circuit in series than in parallel.
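
Treating each lamp as a fixed resistance makes the comparison concrete; this is only a sketch, since a real filament's resistance changes with temperature, and the supply voltage and lamp resistance below are assumed values.

```python
# Total current drawn by two identical lamps in parallel versus in series.
# Each lamp is modeled as a fixed resistance, which is only an approximation
# for a real filament; the supply voltage and resistance are assumed values.
SUPPLY_VOLTS = 120.0
LAMP_OHMS = 240.0          # resistance of one lamp (assumed)

# Parallel: each lamp sees the full voltage and draws its normal current,
# and the currents through the two lamps add.
current_per_lamp = SUPPLY_VOLTS / LAMP_OHMS
parallel_total = 2 * current_per_lamp

# Series: the same current passes through both lamps, and each lamp sees
# only about half of the overall voltage drop.
series_total = SUPPLY_VOLTS / (2 * LAMP_OHMS)

print(f"Parallel: {parallel_total:.2f} A total")   # 1.00 A
print(f"Series:   {series_total:.2f} A total")     # 0.25 A
```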

How does an operational amplifier work? — BR

An operational amplifier is an extremely high gain differential voltage amplifier—a device that compares the voltages of two inputs and produces an output voltage that’s many times the difference between their voltages. How the operational amplifier performs this subtraction and multiplication process depends on the type of operational amplifier, but in most cases two input voltages control how current is shared between two paths of a parallel circuit. Even a tiny difference between the input voltages produces a large current difference in the two paths—the path that’s controlled by the higher voltage input carries a much larger current than the other path. The imbalance in currents between the two paths produces significant voltage differences in their components and these voltage differences are again compared in a second stage of differential voltage amplification. Eventually the differences in currents and voltage become quite large and a final amplifier stage is used to produce either a large positive output voltage or a large negative output voltage, depending on which input has the higher voltage. In a typical application, feedback is used to keep the two input voltages very close to one another, so that the output voltage actually falls in between its two extremes. At that operating point, the operational amplifier is exquisitely sensitive to even the tiniest changes in its input voltages and makes a wonderful amplifier for small electric signals.
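
The sketch below models an op-amp as nothing more than a huge-gain differential amplifier and shows how negative feedback keeps the two inputs nearly equal. The open-loop gain and the hypothetical two-resistor feedback divider (a standard non-inverting arrangement) are assumptions for illustration, not details from the answer above.

```python
# Simplified op-amp picture: output = open-loop gain * (V_plus - V_minus).
# Gain and resistor values below are assumed, chosen only for illustration.
OPEN_LOOP_GAIN = 1e5              # open-loop (no feedback) gain (assumed)

# Non-inverting amplifier: a resistor divider feeds a fraction beta of the
# output back to the inverting input. With R1 = 1 k and R2 = 9 k, beta = 0.1.
R1, R2 = 1e3, 9e3
beta = R1 / (R1 + R2)

v_in = 0.05                       # volts at the non-inverting input (assumed)

# Solving v_out = G * (v_in - beta * v_out) gives the closed-loop result:
# v_out = G * v_in / (1 + G * beta), which is roughly v_in / beta.
v_out = OPEN_LOOP_GAIN * v_in / (1 + OPEN_LOOP_GAIN * beta)
v_minus = beta * v_out            # voltage fed back to the inverting input

print(f"Output: {v_out:.5f} V (closed-loop gain ~ {v_out / v_in:.2f})")
print(f"The two inputs differ by only {v_in - v_minus:.2e} V")
```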

What is a VU meter on a tape deck? How does it differ from a dB meter? I know that the best recording is achieved when the needle hovers around the zero and that the sound distorts above zero and is barely audible the lower into the negative numbers you go, but what are the meanings of the plus and minus readings? — GF, California

VU and dB meters both measure the audio power involved in recording and they both use logarithmic scales to report that power. Because of these logarithmic scales, a factor of 10 increase in power produces an increase of 10 in both the VU reading and the dB reading. For example, -20 dB is 10 times the power of -30 dB. In both measures, the zero is chosen as the highest acceptable power—the highest power for which distortion is acceptable.
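
A tiny sketch of that logarithmic relationship, using the standard decibel formula; the reference power that reads zero is an arbitrary assumed value.

```python
import math

# Decibel reading relative to a reference power: dB = 10 * log10(P / P_ref).
# The reference power below is an arbitrary assumed value.
P_REF = 1.0          # the power that reads 0 dB (assumed)

def to_db(power):
    """Power expressed in decibels relative to P_REF."""
    return 10 * math.log10(power / P_REF)

for power in (0.001, 0.01, 0.1, 1.0):
    print(f"P = {power:6.3f}  ->  {to_db(power):6.1f} dB")
# Each factor-of-ten increase in power raises the reading by 10;
# for example, -20 dB is ten times the power of -30 dB.
```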

Where VU and dB differ is in how they measure audio power. VU is short for “volume units” and it is a measure of average audio power. A VU meter responds relatively slowly and considers the sound volume over a period of time. Its zero is set to the level at which there is 1% total harmonic distortion in the recorded signal. dB is short for “decibels” and it is a measure of instantaneous audio power. A dB meter responds very rapidly and considers the audio power at each instant. Its zero is set to the level at which there is 3% total harmonic distortion. Because of these differences in zero definitions, the dB meter’s zero is roughly at the VU meter’s +8. Nonetheless, both meters are important and both should be kept at or below zero to avoid significant distortion in a recording. In certain situations, such as when there are sudden loud sounds or with instruments that are very rich in harmonics, it’s possible to have the dB meter read above zero even though the VU meter remains below zero.