A black hole is what remains when a massive star dies.
If you have read How Stars Work, then you know that a star is a huge, amazing fusion reactor. Because stars are so massive and made out of gas, there is an intense gravitational field that is always trying to collapse the star. The fusion reactions happening in the core are like a giant fusion bomb that is trying to explode the star. The balance between the gravitational forces and the explosive forces is what defines the size of the star.
As the star dies, the nuclear fusion reactions stop because the fuel for these reactions is burned up. At the same time, the star's gravity pulls material inward and compresses the core. As the core compresses, it heats up and eventually triggers a supernova explosion, blasting material and radiation out into space. What remains is the highly compressed, and extremely massive, core. The core's gravity is so strong that even light cannot escape.
[Artist's concept of a black hole: the arrows show the paths of objects in and around the opening of the black hole.]
This object is now a black hole and literally disappears from view. Because the core's gravity is so strong, you can picture the core sinking through the fabric of space-time, creating a hole in space-time; this is why the object is called a black hole.
The core becomes the central part of the black hole called the singularity. The opening of the hole is called the event horizon.
You can think of the event horizon as the mouth of the black hole. Once something passes the event horizon, it is gone for good: inside the event horizon, nothing (not even light) can escape. The radius of the event horizon is called the Schwarzschild radius, named after astronomer Karl Schwarzschild, whose work led to the theory of black holes.
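The Schwarzschild radius mentioned above follows from a simple formula, r_s = 2GM/c^2. A quick sketch in Python, using approximate textbook values for the constants and the Sun's mass as an illustrative input:

```python
# Schwarzschild radius r_s = 2*G*M / c^2 (approximate constant values).
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # mass of the Sun, kg

def schwarzschild_radius(mass_kg):
    """Radius of the event horizon for a given mass."""
    return 2 * G * mass_kg / c**2

print(schwarzschild_radius(M_sun))  # roughly 2.95e3 m, i.e. about 3 km
```

In other words, the entire mass of the Sun would have to be squeezed inside a sphere about 3 km in radius before light could no longer escape it.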
Saturday, November 13, 2010
Sunday, September 5, 2010
What Does General Relativity Mean?
For an analogy to general relativity, imagine stretching out a bedsheet or a piece of elastic, attaching the corners firmly to some secured posts. Now you begin placing things of various weights on the sheet. Where you place something very light, the sheet will curve downward under its weight a little bit. If you put something heavy, however, the curvature will be even greater.
Assume there's a heavy object sitting on the sheet and you place a second, lighter, object on the sheet. The curvature created by the heavier object will cause the lighter object to "slip" along the curve toward it, trying to reach a point of equilibrium where it no longer moves. (In this case, of course, there are other considerations -- a ball will roll further than a cube would slide, due to frictional effects and such.)
This is similar to how general relativity explains gravity. The curvature of a light object doesn't affect the heavy object much, but the curvature created by the heavy object is what keeps us from floating off into space. The curvature created by the Earth keeps the moon in orbit, but at the same time the curvature created by the moon is enough to affect the tides.
Why Do Stars Twinkle?
Stars twinkle because of turbulence in the Earth's atmosphere. You can think of the atmosphere as being made up of several "layers." Each layer has a different temperature and density. As the light from a star passes through the atmosphere, it is bent by each layer, and we perceive the twinkling.
The bending of light when it passes from one medium to another, like water to air, or one layer of air to another, is called refraction. Refraction is what makes a straw look bent when you put it in a glass of water. Refraction also makes the stars and the Sun near the horizon look higher in the sky than they really are. In fact, when the Sun is setting, we are still seeing the Sun's disk after the Sun has already dropped below the horizon.
You will notice that stars closer to the horizon twinkle more; this is because there is a lot more atmosphere between you and a star near the horizon than between you and a star at the zenith (the point directly overhead). You will also notice that planets do not twinkle. Stars are so far away that they appear as points of light, but planets are much closer and their disks can be seen through telescopes. The atmospheric fluctuations are averaged out across a planet's small disk, so they are not large enough to noticeably affect the light coming from the planets, the Moon, or the Sun.
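The refraction described above is governed by Snell's law, n1 sin(theta1) = n2 sin(theta2). A minimal sketch, using the straw-in-water example with assumed textbook refractive indices:

```python
# Snell's law: n1 * sin(theta1) = n2 * sin(theta2).
# Illustrative indices: water ~1.33, air ~1.00.
import math

def refracted_angle(n1, n2, theta1_deg):
    """Angle (degrees) of the ray in medium 2, given its angle in medium 1."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    return math.degrees(math.asin(s))

# A ray leaving water at 30 degrees from the normal bends away from the normal:
print(refracted_angle(1.33, 1.00, 30.0))  # about 41.7 degrees
```

The same bending, repeated across many shifting layers of air with slightly different densities, is what makes a star's image dance.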
Saturday, September 4, 2010
What Does the Large Hadron Collider Do?
The LHC circulates a beam of charged particles (specifically hadrons, either protons or lead ions) through a tube which maintains a continuous vacuum. The particles are guided through the continuous vacuum within the circular tube by a series of superconducting magnets, which accelerate and steer the charged particles. In order to maintain the superconducting properties of the magnets, they remain supercooled near absolute zero by a massive cryogenic system.
Once the beam reaches its highest energy levels, obtained by steadily increasing the energy as the beam circles repeatedly through the magnets, it will be maintained in a storage ring. This is a loop of tunnel where the magnets will keep circulating the beam so that it retains its kinetic energy, sometimes for hours on end. The beam can then be routed out of the storage ring to be sent into the various testing areas of the LHC.
The beams are expected to reach energy levels up to 7 TeV (7 x 10^12 electronvolts). Since two beams will collide with each other head-on, the energy of the collisions is therefore anticipated to reach 14 TeV for protons.
In addition, by accelerating heavier lead ions, they anticipate collisions with energies in the range of 1,250 TeV ... energy levels on the order of those obtained only moments after the Big Bang. (Not the energies obtained during the Big Bang. The TeV energy scale is about 10^16 times smaller than the Planck mass energy scale, for example, which Lee Smolin uses as the top of his particle energy scale in The Trouble with Physics. Presumably, the Big Bang energy levels would have been somewhere on this Planck energy scale or higher, where the quantum physics and general relativity aspects of reality both begin to break down.)
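To get a feel for these numbers, the TeV figures above can be converted to everyday units. A quick sketch (the 14 TeV and 1,250 TeV values come from the text; the electronvolt conversion is the standard one):

```python
# Converting the LHC's collision energies from TeV to joules.
eV = 1.602e-19       # one electronvolt in joules
TeV = 1e12 * eV      # one teraelectronvolt in joules

proton_collision = 14 * TeV     # two 7 TeV proton beams colliding head-on
lead_collision = 1250 * TeV     # heavy lead-ion collisions

print(proton_collision)  # about 2.24e-6 J
print(lead_collision)    # about 2.00e-4 J
```

In absolute terms these are tiny amounts of energy (a flying mosquito carries more), but they are concentrated into a region trillions of times smaller than a mosquito, which is what makes the collisions so extreme.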
What Is the Large Hadron Collider Looking For?
Since the Large Hadron Collider will produce collisions of such high energy, the hope is that they will release exotic particles which are normally not observed. Any results from the Large Hadron Collider collisions should have a major impact on our understanding of physics, either confirming or refuting the predictions of the Standard Model of particle physics.
One major product which is being looked for is the Higgs boson, the last particle from the Standard Model of particle physics that hasn't been observed.
It's also possible that the LHC will create some indicators of the exotic dark matter which, together with dark energy, makes up roughly 95% of the universe but cannot be directly observed!
Similarly, there might be some evidence of the extra dimensions predicted by string theory. The fact is that we just don't know until we perform the experiments!
LHC Experiments
There are a variety of ongoing experimental systems built into CERN:
ATLAS (A Toroidal LHC ApparatuS) and CMS (Compact Muon Solenoid) - these two large, general purpose detectors will be capable of analyzing the particles produced in LHC collisions. Having two such detectors, designed and operated on different principles, allows independent confirmation of the results.
ALICE (A Large Ion Collider Experiment) - this experiment will collide lead ions, creating energies similar to those just after the Big Bang. The hope is to create the quark-gluon plasma believed to have existed at these energy levels.
LHCb (LHC beauty) - this detector specifically looks for the beauty quark, which will allow it to study the differences between matter and antimatter, including why our universe appears to have so much matter and so little antimatter!
TOTEM (TOTal Elastic and diffractive cross section Measurement) - this smaller detector will analyze "forward particles" which only brush past each other instead of having head-on collisions. It will be able to measure the size of the proton, for example, and the luminosity within the LHC.
LHCf (LHC forward) - this small detector also studies forward particles, but analyzes how the cascades of charged particles within the LHC relates to the cosmic rays that bombard the Earth from outer space, helping interpret and calibrate studies of the cosmic rays.
Who Runs the Large Hadron Collider?
The Large Hadron Collider was built by the European Organization for Nuclear Research (CERN). It is staffed by physicists and engineers from around the world. Nations participating in the construction and experiments consist of:
Armenia, Australia, Austria, Azerbaijan Republic, Belarus, Belgium, Brazil, Bulgaria, Canada, China, Croatia, Cyprus, Czech Republic, Denmark, Estonia, Finland, France, Georgia, Germany, Greece, Hungary, India, Israel, Italy, Japan, Korea, Morocco, Netherlands, Norway, Pakistan, Poland, Portugal, Romania, Russia, Slovak Republic, Slovenia, Spain, Sweden, Switzerland, Turkey, Ukraine, United Kingdom, United States, Uzbekistan
How Much Did It Cost?
The cost of building the accelerator, including manpower and materials, was 3.03 billion euros - roughly 4 billion U.S. dollars (at the conversion rate of Sept. 4, 2008). On top of this, of course, comes the cost of the various experiments and computing power.
How Is It Going?
The Large Hadron Collider originally went online in September of 2008 and, within about a week, had to shut down due to a leak in one of the seals that insulated the supercooled vacuum from the outside world. After about a year of repairs, the LHC went online once again, this time with much more success. In December 2009 it produced beams with an energy of 1.18 TeV each, resulting in collisions of 2.36 TeV - the most powerful experiment ever conducted on Earth. At present, physicists are still analyzing the results of these collisions to discover what the results mean.
Friday, September 3, 2010
Photon
Under the photon theory of light, a photon is a discrete bundle (or quantum) of electromagnetic (or light) energy. Photons are always in motion and, in a vacuum, move at the same constant speed for all observers: the vacuum speed of light (more commonly just called the speed of light), c = 2.998 x 10^8 m/s.
Basic Properties of Photons
According to the photon theory of light, photons . . .
move at a constant velocity, c = 2.9979 x 10^8 m/s (i.e. "the speed of light"), in free space
have zero rest mass and zero rest energy.
carry energy and momentum, which are related to the frequency nu and wavelength lambda of the electromagnetic wave by E = h nu and p = h / lambda.
can be destroyed/created when radiation is absorbed/emitted.
can have particle-like interactions (i.e. collisions) with electrons and other particles, such as in the Compton effect.
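The energy and momentum relations in the list above are easy to put to work. A minimal sketch, taking visible green light at 550 nm as an illustrative wavelength (the constants are approximate textbook values):

```python
# Photon energy E = h*c/lambda and momentum p = h/lambda.
h = 6.626e-34   # Planck's constant, J*s
c = 2.998e8     # speed of light, m/s

def photon_energy(wavelength_m):
    """Energy of a single photon, in joules."""
    return h * c / wavelength_m

def photon_momentum(wavelength_m):
    """Momentum of a single photon, in kg*m/s."""
    return h / wavelength_m

green = 550e-9  # 550 nm, green light (illustrative choice)
print(photon_energy(green))    # about 3.6e-19 J, roughly 2.3 eV
print(photon_momentum(green))  # about 1.2e-27 kg*m/s
```

Individual photons carry astonishingly little energy, which is why a light beam containing countless photons looks smooth and continuous to us.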
History of Photons
The term photon was coined by Gilbert Lewis in 1926, though the concept of light in the form of discrete particles had been around for centuries and had been formalized in Newton's construction of the science of optics.
In the 1800s, however, the wave properties of light (by which I mean electromagnetic radiation in general) became glaringly obvious and scientists had essentially thrown the particle theory of light out the window. It wasn't until Albert Einstein explained the photoelectric effect and realized that light energy had to be quantized that the particle theory returned.
Wave-Particle Duality in Brief
As mentioned above, light has properties of both a wave and a particle. This was an astounding discovery and is certainly outside the realm of how we normally perceive things. Billiard balls act as particles, while oceans act as waves. Photons act as both a wave and a particle all the time (even though it's common, but basically incorrect, to say that it's "sometimes a wave and sometimes a particle" depending upon which features are more obvious at a given time).
Just one of the effects of this wave-particle duality (or particle-wave duality) is that photons, though treated as particles, can be calculated to have frequency, wavelength, amplitude, and other properties inherent in wave mechanics.
Fun Photon Facts
The photon is an elementary particle, despite the fact that it has no mass. It cannot decay on its own, although its energy can be transferred (or created) through interactions with other particles. Photons are electrically neutral and are one of the rare particles that are identical to their own antiparticle.
Photons are spin-1 particles (making them bosons), with a spin axis that is parallel to the direction of travel (either forward or backward, depending on whether it's a "left-hand" or "right-hand" photon). This feature is what allows for polarization of light.
What is the Photoelectric Effect?
Though originally observed in 1839, the photoelectric effect was documented by Heinrich Hertz in 1887 in a paper to the Annalen der Physik. It was originally called the Hertz effect, in fact, though this name fell out of use.
When a light source (or, more generally, electromagnetic radiation) is incident upon a metallic surface, the surface can emit electrons. Electrons emitted in this fashion are called photoelectrons (although they are still just electrons).
Setting Up the Photoelectric Effect
To observe the photoelectric effect, you create a vacuum chamber with the photoemissive metal at one end and a collector at the other. When light shines on the metal, electrons are released and move through the vacuum toward the collector. This creates a current in the wires connecting the two ends, which can be measured with an ammeter.
By applying a negative potential to the collector, it takes more energy for the electrons to complete the journey and initiate the current. The potential at which no electrons make it to the collector is called the stopping potential Vs, and it can be used to determine the maximum kinetic energy Kmax of the electrons (which have electronic charge e) using the following equation:
Kmax = eVs
Note that not all of the electrons will have this energy; they are emitted with a range of energies based upon the properties of the metal being used. The above equation gives the maximum kinetic energy, in other words the energy of the electrons knocked free of the metal surface with the greatest speed, which will be the most useful quantity in the rest of this analysis.
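The stopping-potential measurement above amounts to one multiplication. A minimal sketch, with a hypothetical meter reading of 0.80 V standing in for real data:

```python
# Maximum photoelectron kinetic energy from the stopping potential: Kmax = e * Vs.
e = 1.602e-19   # electronic charge, C

def k_max(stopping_potential_volts):
    """Maximum kinetic energy of the photoelectrons, in joules."""
    return e * stopping_potential_volts

# A hypothetical stopping potential of 0.80 V implies:
print(k_max(0.80))  # about 1.28e-19 J, i.e. 0.80 eV
```

This is why the stopping potential is such a convenient quantity: read in volts, it gives the maximum kinetic energy directly in electronvolts.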
The Classical Wave Explanation
In classical wave theory, the energy of electromagnetic radiation is carried within the wave itself. As the electromagnetic wave (of intensity I) collides with the surface, the electron absorbs the energy from the wave until it exceeds the binding energy, releasing the electron from the metal. The minimum energy needed to remove the electron is the work function phi of the material. (Phi is in the range of a few electron-volts for most common photoelectric materials.)
Three main predictions come from this classical explanation:
The intensity of the radiation should have a proportional relationship with the resulting maximum kinetic energy.
The photoelectric effect should occur for any light, regardless of frequency or wavelength.
There should be a delay on the order of seconds between the radiation’s contact with the metal and the initial release of photoelectrons.
The Experimental Result
By 1902, the properties of the photoelectric effect were well documented. Experiment showed that:
The intensity of the light source had no effect on the maximum kinetic energy of the photoelectrons.
Below a certain frequency, the photoelectric effect does not occur at all.
There is no significant delay (less than 10^-9 s) between the light source activation and the emission of the first photoelectrons.
As you can tell, these three results are the exact opposite of the wave theory predictions. Not only that, but all three are completely counter-intuitive. Why would low-frequency light not trigger the photoelectric effect, since it still carries energy? Why do the photoelectrons release so quickly? And, perhaps most curiously, why does adding more intensity not result in more energetic electron releases? Why does the wave theory fail so utterly in this case, when it works so well in so many other situations?
Einstein's Wonderful Year
In 1905, Albert Einstein published four papers in the Annalen der Physik journal, each of which was significant enough to warrant a Nobel Prize in its own right. The first paper (and the only one to actually be recognized with a Nobel) was his explanation of the photoelectric effect.
Building on Max Planck's blackbody radiation theory, Einstein proposed that radiation energy is not continuously distributed over the wavefront, but is instead localized in small bundles (later called photons). The photon's energy would be associated with its frequency (nu) through a proportionality constant known as Planck's constant (h), or alternatively, using the wavelength (lambda) and the speed of light (c):
E = h nu = hc / lambda
or the momentum equation: p = h / lambda
In Einstein's theory, a photoelectron releases as a result of an interaction with a single photon, rather than an interaction with the wave as a whole. The energy from that photon transfers instantaneously to a single electron, knocking it free from the metal if the energy (which is, recall, proportional to the frequency nu) is high enough to overcome the work function (phi) of the metal. If the energy (or frequency) is too low, no electrons are knocked free.
If, however, there is excess energy, beyond phi, in the photon, the excess energy is converted into the kinetic energy of the electron:
Kmax = h nu - phi
Therefore, Einstein's theory predicts that the maximum kinetic energy is completely independent of the intensity of the light (because it doesn't show up in the equation anywhere). Shining twice as much light results in twice as many photons, and more electrons releasing, but the maximum kinetic energy of those individual electrons won't change unless the energy, not the intensity, of the light changes.
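Einstein's equation Kmax = h nu - phi is simple enough to evaluate directly. A sketch, assuming a work function of 2.3 eV (a hypothetical value, typical of an alkali metal) and illustrative wavelengths:

```python
# Einstein's photoelectric equation: Kmax = h*nu - phi = h*c/lambda - phi.
h = 6.626e-34     # Planck's constant, J*s
c = 2.998e8       # speed of light, m/s
eV = 1.602e-19    # joules per electronvolt

phi = 2.3 * eV    # assumed work function (hypothetical metal)

def k_max_ev(wavelength_m):
    """Max photoelectron kinetic energy in eV; negative means no emission."""
    return (h * c / wavelength_m - phi) / eV

print(k_max_ev(400e-9))  # violet light: about +0.8 eV, electrons are ejected
print(k_max_ev(700e-9))  # red light: negative, so no photoelectrons at all
```

Note that intensity appears nowhere in the function: a brighter red lamp still ejects no electrons, while even a faint violet one does, exactly as experiment showed.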
The maximum kinetic energy results when the least-tightly-bound electrons break free, but what about the most-tightly-bound ones: the ones for which there is just enough energy in the photon to knock them loose, but the resulting kinetic energy is zero? Setting Kmax equal to zero for this cutoff frequency (nuc), we get:
nuc = phi / h
or the cutoff wavelength: lambdac = hc / phi
These equations indicate why a low-frequency light source would be unable to free electrons from the metal, and thus would produce no photoelectrons.
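The cutoff formulas can be checked numerically. A sketch using the same assumed 2.3 eV work function as an illustration (not a measured value for any particular metal):

```python
# Cutoff frequency nuc = phi / h and cutoff wavelength lambdac = h*c / phi.
h = 6.626e-34    # Planck's constant, J*s
c = 2.998e8      # speed of light, m/s
eV = 1.602e-19   # joules per electronvolt

def cutoff_wavelength(phi_joules):
    """Longest wavelength that can still eject photoelectrons."""
    return h * c / phi_joules

phi = 2.3 * eV   # assumed work function
print(cutoff_wavelength(phi))  # about 5.4e-7 m, i.e. 540 nm
```

For this hypothetical metal, anything redder than roughly 540 nm, no matter how intense, produces no photoelectrons at all.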
After Einstein
Experimentation in the photoelectric effect was carried out extensively by Robert Millikan in 1915, and his work confirmed Einstein's theory. Einstein won a Nobel Prize for his photon theory (as applied to the photoelectric effect) in 1921, and Millikan won a Nobel in 1923 (in part due to his photoelectric experiments).
Most significantly, the photoelectric effect, and the photon theory it inspired, crushed the classical wave theory of light. Though no one could deny that light behaved as a wave, after Einstein's first paper it was undeniable that it was also a particle.
When a light source (or, more generally, electromagnetic radiation) is incident upon a metallic surface, the surface can emit electrons. Electrons emitted in this fashion are called photoelectrons (although they are still just electrons). This is depicted in the image to the right.
Setting Up the Photoelectric Effect
To observe the photoelectric effect, you create a vacuum chamber with the photoconductive metal at one end and a collector at the other. When a light shines on the metal, the electrons are released and move through the vacuum toward the collector. This creates a current in the wires connecting the two ends, which can be measured with an ammeter. (A basic example of the experiment can be seen by clicking on the image to the right, and then advancing to the second image available.)
By administering a negative voltage potential (the black box in the picture) to the collector, it takes more energy for the electrons to complete the journey and initiate the current. The point at which no electrons make it to the collector is called the stopping potential Vs, and can be used to determine the maximum kinetic energy Kmax of the electrons (which have electronic charge e) by using the following equation:
Kmax = eVs
It is significant to note that not all of the electrons will have this energy, but will be emitted with a range of energies based upon the properties of the metal being used. The above equation allows us to calculate the maximum kinetic energy or, in other words, the energy of the particles knocked free of the metal surface with the greatest speed, which will be the trait that is most useful in the rest of this analysis.
The Classical Wave Explanation
In classical wave theory, the energy of electromagnetic radiation is carried within the wave itself. As the electromagnetic wave (of intensity I) collides with the surface, the electron absorbs the energy from the wave until it exceeds the binding energy, releasing the electron from the metal. The minimum energy needed to remove the electron is the work function phi of the material. (Phi is in the range of a few electron-volts for most common photoelectric materials.)
Three main predictions come from this classical explanation:
The intensity of the radiation should have a proportional relationship with the resulting maximum kinetic energy.
The photoelectric effect should occur for any light, regardless of frequency or wavelength.
There should be a delay on the order of seconds between the radiation’s contact with the metal and the initial release of photoelectrons.
The Experimental Result
By 1902, the properties of the photoelectric effect were well documented. Experiment showed that:
The intensity of the light source had no effect on the maximum kinetic energy of the photoelectrons.
Below a certain frequency, the photoelectric effect does not occur at all.
There is no significant delay (less than 10⁻⁹ s) between the light source activation and the emission of the first photoelectrons.
As you can tell, these three results are the exact opposite of the wave-theory predictions. Not only that, but all three are completely counter-intuitive. Why would low-frequency light not trigger the photoelectric effect, since it still carries energy? How do the photoelectrons release so quickly? And, perhaps most curiously, why does adding more intensity not result in more energetic electron releases? Why does the wave theory fail so utterly in this case, when it works so well in so many other situations?
Einstein's Wonderful Year
In 1905, Albert Einstein published four papers in the Annalen der Physik journal, each of which was significant enough to warrant a Nobel Prize in its own right. The first paper (and the only one to actually be recognized with a Nobel) was his explanation of the photoelectric effect.
Building on Max Planck's blackbody radiation theory, Einstein proposed that radiation energy is not continuously distributed over the wavefront, but is instead localized in small bundles (later called photons). The photon's energy would be associated with its frequency (nu), through a proportionality constant known as Planck's constant (h), or alternately, using the wavelength (lambda) and the speed of light (c):
E = h nu = hc / lambda
or the momentum equation: p = h / lambda
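The photon-energy relation E = h nu = hc/lambda can be checked with a short calculation (the 550 nm wavelength below is just an illustrative choice):

```python
# Photon energy from wavelength: E = h * c / lambda.
h = 6.626e-34   # Planck's constant, J*s
c = 2.998e8     # speed of light, m/s

def photon_energy_from_wavelength(lam):
    """Photon energy in joules for wavelength lam in metres."""
    return h * c / lam

# Green light at roughly 550 nm carries about 2.25 eV per photon:
E = photon_energy_from_wavelength(550e-9)
print(E / 1.602e-19)  # convert joules to electron-volts
```

The few-eV scale of visible-light photons is exactly the scale of metal work functions, which is why the photoelectric effect sits right at the visible/ultraviolet boundary for most metals.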
In Einstein's theory, a photoelectron releases as a result of an interaction with a single photon, rather than an interaction with the wave as a whole. The energy from that photon transfers instantaneously to a single electron, knocking it free from the metal if the energy (which is, recall, proportional to the frequency nu) is high enough to overcome the work function (phi) of the metal. If the energy (or frequency) is too low, no electrons are knocked free.
If, however, there is excess energy, beyond phi, in the photon, the excess energy is converted into the kinetic energy of the electron:
Kmax = h nu - phi
Therefore, Einstein's theory predicts that the maximum kinetic energy is completely independent of the intensity of the light (because it doesn't show up in the equation anywhere). Shining twice as much light results in twice as many photons, and more electrons releasing, but the maximum kinetic energy of those individual electrons won't change unless the energy, not the intensity, of the light changes.
The maximum kinetic energy results when the least-tightly-bound electrons break free, but what about the most-tightly-bound ones, those for which the photon has just enough energy to knock them loose but the resulting kinetic energy is zero? Setting Kmax equal to zero for this cutoff frequency (nuc), we get:
nuc = phi / h
or the cutoff wavelength: lambdac = hc / phi
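Einstein's equation Kmax = h nu - phi, together with the cutoff condition, can be sketched numerically (the work function below is a hypothetical value on the typical few-eV scale, not from the text):

```python
# Einstein's photoelectric relation: Kmax = h*nu - phi, with cutoff nu_c = phi / h.
h = 6.626e-34    # Planck's constant, J*s
eV = 1.602e-19   # joules per electron-volt

def k_max(nu, phi_ev):
    """Max kinetic energy (eV) for light of frequency nu (Hz) on a metal
    with work function phi_ev (eV); zero below the cutoff frequency."""
    k = (h * nu) / eV - phi_ev
    return max(k, 0.0)

phi = 2.3                    # hypothetical work function, eV
nu_c = phi * eV / h          # cutoff frequency, Hz
print(k_max(nu_c, phi))      # exactly at cutoff: no kinetic energy left over
print(k_max(2 * nu_c, phi))  # doubling the frequency gives Kmax = phi
```

Note that intensity never appears: shining a brighter lamp below the cutoff still yields `k_max(...) == 0`, which is precisely the experimental result the wave theory could not explain.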
These equations indicate why a low-frequency light source would be unable to free electrons from the metal, and thus would produce no photoelectrons.
After Einstein
Experimentation on the photoelectric effect was carried out extensively by Robert Millikan in 1915, and his work confirmed Einstein's theory. Einstein won the Nobel Prize for his photon theory (as applied to the photoelectric effect) in 1921, and Millikan won a Nobel in 1923 (in part for his photoelectric experiments).
Most significantly, the photoelectric effect, and the photon theory it inspired, crushed the classical wave theory of light. Though no one could deny that light behaved as a wave, after Einstein's first paper, it was undeniable that it was also a particle.
Albert Einstein - Biography
Nationality: German
Born: March 14, 1879
Death: April 18, 1955
Spouse:
Mileva Maric (1903 - 1919)
Elsa Lowenthal (1919 - 1936)
1921 Nobel Prize in Physics "for his services to Theoretical Physics, and especially for his discovery of the law of the photoelectric effect" (from the official Nobel Prize announcement)
Albert Einstein - Early Work:
In 1901, Albert Einstein received his diploma as a teacher of physics and mathematics. Unable to find a teaching position, he went to work for the Swiss Patent Office. He obtained his doctoral degree in 1905, the same year he published four significant papers, introducing the concepts of special relativity and the photon theory of light.
Albert Einstein & Scientific Revolution:
Albert Einstein's work in 1905 shook the world of physics. In his explanation of the photoelectric effect he introduced the photon theory of light. In his paper "On the Electrodynamics of Moving Bodies," he introduced the concepts of special relativity.
Einstein spent the rest of his life and career dealing with the consequences of these concepts, both by developing general relativity and by questioning the field of quantum physics on the principle that it was "spooky action at a distance."
Albert Einstein Moves to America:
In 1933, Albert Einstein renounced his German citizenship and moved to America, where he took a post at the Institute for Advanced Study in Princeton, New Jersey, as a Professor of Theoretical Physics. He gained American citizenship in 1940.
He was offered the first presidency of Israel, but he declined it, though he did help found the Hebrew University of Jerusalem.
Misconceptions About Albert Einstein:
The rumor began circulating even while Albert Einstein was alive that he had failed mathematics courses as a child. While it is true that Einstein began to talk late - at about age 4 according to his own accounts - he never failed in mathematics, nor did he do poorly in school in general. He did fairly well in his mathematics courses throughout his education and briefly considered becoming a mathematician. He recognized early on that his gift was not in pure mathematics, a fact he lamented throughout his career as he sought out more accomplished mathematicians to assist in the formal descriptions of his theories.
Thursday, September 2, 2010
What Is Quantum Physics?:
Quantum physics is the study of the behavior of matter and energy at the molecular, atomic, nuclear, and even smaller microscopic levels. In the early 20th century, scientists discovered that the laws governing macroscopic objects do not apply in the same way at such small scales.
What Does Quantum Mean?:
"Quantum" comes from the Latin meaning "how much." It refers to the discrete units of matter and energy that are predicted by and observed in quantum physics. Even space and time, which appear to be extremely continuous, have smallest possible values.
Who Developed Quantum Mechanics?:
As scientists gained the technology to measure with greater precision, strange phenomena were observed. The birth of quantum physics is attributed to Max Planck's 1900 paper on blackbody radiation. The field was developed by Max Planck, Albert Einstein, Niels Bohr, Werner Heisenberg, Erwin Schroedinger, and many others. Ironically, Albert Einstein had serious theoretical issues with quantum mechanics and tried for many years to disprove or modify it.
What's Special About Quantum Physics?:
In the realm of quantum physics, observing something actually influences the physical processes taking place. Light waves act like particles and particles act like waves (called wave-particle duality). Matter can go from one spot to another without moving through the intervening space (called quantum tunnelling). Information moves instantly across vast distances. In fact, in quantum mechanics we discover that the entire universe is actually a series of probabilities. Fortunately, these effects become negligible when dealing with large objects, a point dramatized by the Schroedinger's Cat thought experiment.
Quantum Optics:
Quantum optics is a branch of quantum physics that focuses primarily on the behavior of light, or photons. At the level of quantum optics, the behavior of individual photons has a bearing on the outgoing light, as opposed to classical optics, which was developed by Sir Isaac Newton. Lasers are one application that has come out of the study of quantum optics.
Quantum Electrodynamics (QED):
Quantum electrodynamics (QED) is the study of how electrons and photons interact. It was developed in the late 1940s by Richard Feynman, Julian Schwinger, Sin-Itiro Tomonaga, and others. The predictions of QED regarding the scattering of photons and electrons are accurate to eleven decimal places.
What Is Physics?
Physics is the scientific study of matter and energy and how they interact with each other.
This energy can take the form of motion, light, electricity, radiation, gravity . . . just about anything, honestly. Physics deals with matter on scales ranging from sub-atomic particles (i.e. the particles that make up the atom and the particles that make up those particles) to stars and even entire galaxies.
How Physics Works
As an experimental science, physics utilizes the scientific method to formulate and test hypotheses that are based on observation of the natural world. The goal of physics is to use the results of these experiments to formulate scientific laws, usually expressed in the language of mathematics, which can then be used to predict other phenomena.
The Role of Physics in Science
In a broader sense, physics can be seen as the most fundamental of the natural sciences. Chemistry, for example, can be viewed as a complex application of physics, as it focuses on the interaction of energy and matter in chemical systems. We also know that biology is, at its heart, an application of chemical properties in living things, which means that it is also, ultimately, ruled by the physical laws.
Wednesday, September 1, 2010
An expandable molecular sponge
Zinc ions and some other metal ions can bind to three or four organic molecules at once. If those molecules are long and attach to zinc at both ends, it's possible to create a metal–organic framework (MOF), an open sheet of linked molecules with ions at the vertices. And if those sheets bind to each other and stack in register, the result is a material whose columnar pores can store, catalyze, or otherwise usefully process small molecules. Matthew Rosseinsky and his coworkers at the University of Liverpool in the UK have made a MOF material, but with a new twist. For its linker, the Liverpool team used a dipeptide—that is, two peptide-bonded amino acids (glycine and alanine; see figure). The team made two versions of the material, one incorporating a solvent (a mix of water and methanol) and one not. X-ray diffraction and nuclear magnetic resonance spectroscopy revealed that adding the solvent caused the dipeptide linkers to straighten, widening the pores to accommodate the solvent molecules. Glycine, alanine, and the 18 other naturally occurring amino acids are characterized by side chains that are polar, nonpolar, positively charged, or negatively charged. Given that variety, the Liverpool experiment suggests that peptide-based MOF materials might find uses as expandable sponges for a wide range of molecules.
Tuesday, August 31, 2010
Electromagnetic Induction
Introduction
Ørsted had discovered that electricity and magnetism were linked: an electric current gives rise to a magnetic field. However, no one had succeeded in generating electricity using magnetic fields until Michael Faraday found that moving a conductor in a magnetic field (or moving a magnet near a stationary conductor) created a voltage. The wire must be part of an electrical circuit; otherwise the electrons have no place to go. In other words, no electrical current is produced in a wire with open ends. But if the ends are attached to a light bulb, to an electrical meter, or even to each other, the circuit is complete and an electrical current is created.
Figure 1. Inducing a current in a wire by moving the wire in a magnetic field.
Direction of Current
The direction of the current is determined by Fleming's right-hand rule. The left-hand rule is used for motors, where motion is produced by a magnetic field; the right-hand rule is used for generators, where current is generated by motion. Using the right hand, the thumb points in the direction of the motion, the first finger in the direction of the field, and the second finger in the direction of the current.
Flux and Flux Linkage
To generate electricity, all that is required is a coil of wire whose ends are connected to a voltmeter. The voltage created depends on the density of the magnetic field and the area of the loop cutting the magnetic field lines.
A quantity called the flux measures this and is given by φ = BA, where B is the magnetic flux density and A is the area of the coil in the magnetic field.
If there are more turns in the coil then the quantity is termed the magnetic flux linkage, given by Nφ = BAN. This assumes that the loop cuts the magnetic field lines at an angle of 90°. If the loop cuts the field lines at a different angle, say θ, then the flux linkage is defined as Nφ = BAN cos θ, where θ is the angle between the normal to the area and the magnetic field lines, as shown in Figure 1.
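The flux-linkage formula Nφ = BAN cos θ is easy to sketch numerically (the coil parameters below are hypothetical):

```python
# Flux linkage: N * Phi = B * A * N * cos(theta), where theta is the angle
# between the magnetic field and the normal to the coil's area.
import math

def flux_linkage(B, A, N=1, theta=0.0):
    """Flux linkage in webers for field B (T), coil area A (m^2), N turns, angle theta (rad)."""
    return N * B * A * math.cos(theta)

# Hypothetical numbers: a 50-turn coil of area 0.01 m^2 in a 0.2 T field.
print(flux_linkage(0.2, 0.01, N=50))                      # normal along the field: 0.1 Wb
print(flux_linkage(0.2, 0.01, N=50, theta=math.pi / 2))   # coil edge-on to the field: ~0
```

The cos θ factor captures the edge cases: maximum linkage when the field is perpendicular to the plane of the coil (θ = 0), and none when the field lies in that plane (θ = 90°).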
Faraday's Law of Induction
We said that a voltage, or electromotive force (EMF), is produced when the loop is moved in the magnetic field, but more quantitatively, the voltage is produced in response to the change in the flux linkage. The voltage produced depends on the rate of change of flux linkage with time. In mathematical terms,
E = −d(Nφ)/dt
where E is the EMF. The other symbols have their usual meanings. The minus sign is a consequence of Lenz's Law, which we shall discuss in the following section.
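For slow, roughly uniform changes, Faraday's law can be approximated as a finite difference, E ≈ −ΔΦ/Δt, as in this sketch (the numbers are illustrative):

```python
# Faraday's law as a finite difference: average EMF = -(change in flux linkage) / (time taken).
def emf(delta_flux_linkage, delta_t):
    """Average induced EMF in volts; the minus sign encodes Lenz's law."""
    return -delta_flux_linkage / delta_t

# If the flux linkage through a coil rises by 0.1 Wb in 0.02 s,
# the average induced EMF is -5 V (opposing the change, per Lenz's law):
print(emf(0.1, 0.02))
```

The sign flips with the direction of the change: a falling flux linkage induces a positive EMF, which is the algebraic content of Lenz's law discussed next.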
Lenz's Law
When we move a conductor in a magnetic field, the current generated creates its own magnetic field. If this field added to the original magnetic field, the total field would become even stronger, which would create an even stronger current, which would create an even stronger magnetic field, and so on. If this were to happen we could get energy for free (although the universe might explode). Unfortunately we cannot make free energy, and the reason is Lenz's law: when a current is generated, the magnetic field produced by that current opposes the original magnetic field. This produces a force that opposes the motion of the conductor and tends to bring it to a halt. This is why it becomes more difficult to turn a dynamo on a bicycle as you speed up. We express Lenz's law as part of Faraday's law by inserting the minus sign.
Equilibrium
A body which is in equilibrium is either moving at constant velocity in a straight line, or it is not moving. If it is not moving, it is said to be in static equilibrium. The reason why the body does not move is because the forces acting on it cancel each other out. In this simple phrase we are expressing the two conditions necessary for a body to be in equilibrium:
The sum of the moments (rotational forces) must be zero.
The vector sum of all external forces must be zero.
In mathematical terms, Στ = 0 and ΣF = 0
Proving Equilibrium
To prove that a body is in equilibrium, we can follow a set procedure.
Draw the free-body diagram, which shows the forces acting on the object.
Resolve the forces in any two convenient directions, for example, ΣFx = 0 and ΣFy = 0, which will result in two equations from which the two unknowns can be found.
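As a sketch of this procedure, take the standard example of a weight hanging from two strings: resolving horizontally and vertically gives two equations for the two unknown tensions (the weight and angles below are hypothetical):

```python
# Static equilibrium: a weight W hangs from two strings at angles a and b
# to the horizontal. Sum Fx = 0 and Sum Fy = 0 give two equations in the
# two tensions T1 and T2.
import math

def string_tensions(W, a, b):
    """Tensions (T1, T2) in newtons for weight W (N) and string angles a, b (rad)."""
    # Fx: T1*cos(a) = T2*cos(b);  Fy: T1*sin(a) + T2*sin(b) = W
    denom = math.sin(a) * math.cos(b) + math.cos(a) * math.sin(b)  # equals sin(a + b)
    T1 = W * math.cos(b) / denom
    T2 = W * math.cos(a) / denom
    return T1, T2

# Symmetric 45-degree strings share a 100 N weight equally:
T1, T2 = string_tensions(100.0, math.pi / 4, math.pi / 4)
print(T1, T2)
```

A useful sanity check on any such solution is to substitute back: the vertical components of the two tensions must add up to exactly the weight.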
Theorems on Equilibrium
If three forces are in equilibrium, then their lines of action pass through a single point.
Equilibrium in three Dimensions
So far we have discussed equilibrium where the forces are coplanar. In three dimensions we need to ensure that the sum of the moments about each of the three independent axes is zero and that the vector sum of the forces is also zero.
ΣFx = ΣFy = ΣFz = 0
Στx = Στy = Στz = 0
Useful Mathematics
The Binomial Theorem
(1 + x)ⁿ = 1 + nx + [n(n−1)x²]/2! + [n(n−1)(n−2)x³]/3! + ...
If x << 1, then
(1 + x)ⁿ ≅ 1 + nx
(1 + x)⁻ⁿ ≅ 1 − nx
These approximations are useful when x² is negligible.
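A quick numerical check shows how good the small-x approximation is (values chosen for illustration):

```python
# Binomial approximation: (1 + x)^n is close to 1 + n*x when x is small.
def binomial_approx(x, n):
    """First-order binomial approximation of (1 + x)**n."""
    return 1 + n * x

x, n = 0.01, 5
exact = (1 + x) ** n
approx = binomial_approx(x, n)
print(exact, approx)  # the discrepancy is of order x**2
```

With x = 0.01 the error is about n(n−1)x²/2 ≈ 0.001, confirming that the neglected term really is the x² one.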
Quadratic Equations
ax² + bx + c = 0 has the solutions
x = [−b ± √(b² − 4ac)] / (2a)
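The quadratic formula translates directly into code (assuming, for this sketch, real roots):

```python
# Roots of a*x^2 + b*x + c = 0 via the quadratic formula.
import math

def quadratic_roots(a, b, c):
    """Return the two real roots, assuming the discriminant b^2 - 4ac is non-negative."""
    disc = math.sqrt(b * b - 4 * a * c)
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

# x^2 - 5x + 6 = 0 factors as (x - 2)(x - 3), so the roots are 3 and 2:
print(quadratic_roots(1, -5, 6))
```

For a negative discriminant the roots are complex; in Python one would switch to `cmath.sqrt`, but that case is omitted here.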
Trigonometry
π rad = 180°
1 rad ≈ 57.3°
The quadrants in which the trigonometric functions are positive are shown below:
Figure 1. Signs of trigonometric functions.
A good way to remember this is the phrase "clockwise ACTS": reading the quadrants clockwise starting from the first, each letter stands for the functions that are positive there: All, Cos, Tan and Sin. The angle itself increases in the anti-clockwise sense.
If A and B are angles then
tan A = sin A / cos A
sin² A + cos² A = 1
sec² A = 1 + tan² A
cosec² A = 1 + cot² A
sin (A ± B) = sin A cos B ± cos A sin B
cos (A ± B) = cos A cos B ∓ sin A sin B
tan (A ± B) = (tan A ± tan B)/(1 ∓ tan A tan B)
If t = tan (A/2), then sin A = 2t/(1 + t²) and cos A = (1 − t²)/(1 + t²)
2 sin A cos B = sin (A + B) + sin (A - B)
2 cos A cos B = cos (A + B) + cos (A - B)
2 sin A sin B = cos (A - B) - cos (A + B)
sin A + sin B = 2 sin [(A + B)/2] cos [(A - B)/2]
sin A - sin B = 2 cos [(A + B)/2] sin [(A - B)/2]
cos A + cos B = 2 cos [(A + B)/2] cos [(A - B)/2]
cos A - cos B = 2 sin [(A + B)/2] sin [(A - B)/2]
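Identities like these are easy to spot-check numerically, which is a handy habit when transcribing formula tables (the angles below are arbitrary):

```python
# Numerical spot-check of two of the identities above.
import math

A, B = 0.7, 0.3  # arbitrary angles in radians

# Angle-addition formula: sin(A + B) = sin A cos B + cos A sin B
assert math.isclose(math.sin(A + B),
                    math.sin(A) * math.cos(B) + math.cos(A) * math.sin(B))

# Sum-to-product: sin A + sin B = 2 sin[(A + B)/2] cos[(A - B)/2]
assert math.isclose(math.sin(A) + math.sin(B),
                    2 * math.sin((A + B) / 2) * math.cos((A - B) / 2))

print("identities hold")
```

A single pair of angles does not prove an identity, of course, but a sign error in a transcription almost always shows up immediately under such a check.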
Power Series
eˣ = exp x = 1 + x + x²/2! + ... + xʳ/r! + ... (for all x)
ln (1 + x) = x − x²/2 + x³/3 − ... + (−1)ʳ⁺¹ xʳ/r + ... (−1 < x ≤ 1)
cos x = (eⁱˣ + e⁻ⁱˣ)/2 = 1 − x²/2! + x⁴/4! − ... + (−1)ʳ x²ʳ/(2r)! + ... (for all x)
sin x = (eⁱˣ − e⁻ⁱˣ)/(2i) = x − x³/3! + x⁵/5! − ... + (−1)ʳ x²ʳ⁺¹/(2r + 1)! + ... (for all x)
cosh x = (eˣ + e⁻ˣ)/2 = 1 + x²/2! + x⁴/4! + ... + x²ʳ/(2r)! + ... (for all x)
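The exponential series above converges for every x, and a few terms already reproduce the library value, as this sketch shows:

```python
# Partial sums of the exponential series 1 + x + x^2/2! + ... converge to exp(x).
import math

def exp_series(x, terms=20):
    """Sum the first `terms` terms of the power series for e**x."""
    total, term = 0.0, 1.0
    for r in range(terms):
        total += term
        term *= x / (r + 1)  # x^(r+1)/(r+1)! from x^r/r!
    return total

print(exp_series(1.0))  # approaches e = 2.71828...
print(math.exp(1.0))
```

Twenty terms at x = 1 already agree with `math.exp` to machine precision, since the truncation error is roughly 1/20!.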
A Brief History of Cosmological Ideas
Aristotle
The Greek philosopher Aristotle proposed that the heavens were literally composed of 55 concentric, crystalline spheres to which the celestial objects were attached, each rotating at a different velocity (though the angular velocity of a given sphere was constant), with the Earth at the center.
The Ptolemaic System
The prevailing theory in Europe as Copernicus was writing was that created by Ptolemy in his Almagest, dating from about 150 A.D. The Ptolemaic system drew on many previous theories that viewed Earth as the stationary center of the universe. Stars were embedded in a large outer sphere which rotated relatively rapidly, while the planets inhabited smaller spheres of their own. The idea that the Earth was at the centre of the universe with everything revolving around it was one that fitted with religious beliefs. After all, man is the most important of God's creations, and so it was proper that the Earth should be at the center of a perfect and uniform universe.
Retrograde motion of Mars
The Ptolemaic model of the universe had the Earth at the center with the Sun and the planets travelling around it in circular orbits. To accurately describe the observed data, the planets also travelled in smaller orbits known as epicycles as they orbited the Earth. This reproduced the motion of the planets as they travelled across the sky, in particular the phenomenon of retrograde motion, in which a planet sometimes appears to travel backwards or loop the loop. The illustration shows the orbit of Mars over a period of several months.
Nicolaus Copernicus (1473-1543)
In 1543, Copernicus formulated another model of the universe in which the Earth went around the Sun, the Heliocentric Model of the Universe. The publication of his book, De revolutionibus orbium coelestium (On the Revolutions of the Celestial Spheres) is often taken to be the beginning of the Scientific Revolution.
The orbits of the planets were still circular, but in his model the problem of retrograde motion was incorporated naturally, as shown in Figure 1, which shows the orbits of the Earth and Mars. Since Mars is further from the Sun, it takes longer to travel its orbit than the Earth, so there are occasions when the Earth overtakes Mars in its orbit. The line of sight from the Earth to Mars is shown at different intervals. As viewed from the Earth, Mars appears to go backwards in its orbit as the Earth overtakes it, but this is no more unusual than seeing a cyclist go backwards past your side window when you are in a car travelling in the same direction at a greater speed.
Figure 1. Retrograde motion of Mars in the Copernican model.
Tycho Brahe (1546-1601)
Tycho Brahe observed the planets and made accurate observation of the stars.
Johannes Kepler (1571-1630)
Originally, Kepler had planned to be ordained as a Lutheran minister. He saw it as his Christian duty to understand the universe in terms of mathematical rules and thus understand the works of God. He first attempt at explaining the cosmos was in the work, Mysterium Cosmographicum (Mystery of the Sacred Cosmos). He devised a model of the solar system in which the orbits of the planets fitted inside spheres with radii that could accomodate each of the Platonic solids. This idea is wrong.
Kepler's model of the Solar System had the planets orbit in spheres of the same radius that could accomodate each of the Platonic solids.
In 1601 he was hired as Tycho Brahe's assistant. As a mathematician it was his job to make sense of Brahe's extremely accurate observational data for the orbit for Mars. Kepler described the effort as his "War with Mars" but it resulted in the three laws that bare his name.
Keplers First Law - the planets do not move in circles but rather in elliptic orbit, with the Sun at one focus.
Keplers Second Law - the radius vector sweeps equal areas in equal times
Keplers Third Law - the time for the period square is proportional to the cube of their average distance from the Sun cubed. T2 ∝ R3
Galileo Galilei (1564-1642)
Galileo's drawing of the Moon.
The phases of Venus.
Galileo's notebook on the Jupiter
Galileo supported the Copernican model of the universe and had used the newly invented telescope to discover evidence to support his belief. The telescope allowed Galileo to see that there were mountains on the moon, which went against the accepted religous view that the universe was perfect. Galileo also discovered spots on the Sun. People really tried hard to account for these observations without making the heavens imperfect; one suggestion was that over the mountains of the Moon there was a layer of clear crystal so the final surface would be smooth and perfect!
One observation definitely disproved the Ptolemaic model, although it didn't prove that Copernicus was right (as Tycho Brahe pointed out). This was the observation that Venus has phases, much like our Moon does. To the naked eye, Venus always appears as a bright dot in the sky. With a telescope, however, it is fairly easy to see the phases of Venus. Just as the Moon has phases, Venus too has phases based on the planet’s position relative to us and the Sun. There was no way for the Ptolemaic model (Earth centered solar system) to account for these phases. They can only occur as Galileo saw them if Venus is circling the Sun, not the Earth.
Galileo saw near Jupiter what he first thought to be stars. When he realized that the stars were actually going around Jupiter, it negated a major argument of the Ptolemaic model. Not only did this mean that the Earth could not be the only center of motion, but also it knocked a hole in another argument. The supporters of the Ptolemaic model argued that if the Earth were moving through space, the Moon would be left behind. Galileo’s observations showed that the moons of Jupiter were not being left behind as Jupiter moved.
Galileo's advocasy of the heliocentric model of the universe brought him into conflict with the Church. In 1616, the theologians of the Holy Office decleared Copernicanism, 'false and erronious' and the Pope admonished Galileo for not defending its doctrines.
Galileo was asked to published a book which was supposed to support the Geocentric view, however when Dialogo Sopra I Due Massimi Sistemi Del Mondo (Dialogues on the Two Chief Systems of the World) was published, it was an outright argument for Copernican view. The book was an imaginary conversation between three people. The Geocentric position was argued for by a doggmatic, arogant character named Simplicio. The Copernican view was supported by intelligent and wise character named, Salvanti, representing Galileo. A neutral character who was receptive to either position was also written.
The Church banned the book and ordered for Galileo to appear before the Inquisition for herecy. Threatened with torture, Galileo confessed that he was wrong. By this time, Galileo was an old man of 68 years. A death sentence would certainly not have been an unusual punishment for herecy, however Galileo was lucky and was sentenced to life imprisonment, which was latter commuted to being held under house arrest at his home outside Florence. He died in 1642.
Some 359 years after Galileo death, the Vatican cleared Galileo of any wrongdoing. Pope John Paul II said,
Thanks to his intuition as a brilliant physicist and by relying on different arguments, Galileo, who practically invented the experimental method, understood why only the sun could function as the centre of the world, as it was then known, that is to say, as a planetary system. The error of the theologians of the time, when they maintained the centrality of the Earth, was to think that our understanding of the physical world's structure was, in some way, imposed by the literal sense of Sacred Scripture
Which of course it isn't.
Isaac Newton (1642-1727)
It was Sir Isaac Newton who was able to show that Kepler's laws of planetary motion are a natural consequence of simpler and more general descriptions of motion in nature. This brought into one theory both our observations of how things move on Earth and how the planets move in the heavens. These motions are described formally as Newton's laws of motion and gravity.
Newton applied this idea to the Sun and planets and took Keplers laws and calculated that the force falls off with the square of the distance from the Sun.
Newton's law of universal gravitation states that there is a force acting between objects that pulls them together. This force is proportional to the mass of the objects and inversely proportional to the square of their distance apart.
F = (GMmr^)/r2
Newton looked at the motion of the moon around the Sun and reasoned that the force responsible for gravity on Earth might be responsible for keeping the moon in orbit around the Earth. It turned out to be so
The Greek philosopher Aristotle proposed that the heavens were literally composed of 55 concentric, crystalline spheres to which the celestial objects were attached and which rotated at different velocities (though the angular velocity was constant for a given sphere), with the Earth at the center.
The Ptolemaic System
The prevailing theory in Europe as Copernicus was writing was that created by Ptolemy in his Almagest, dating from about 150 A.D. The Ptolemaic system drew on many previous theories that viewed Earth as a stationary center of the universe. Stars were embedded in a large outer sphere which rotated relatively rapidly, while the planets inhabited smaller spheres of their own. The idea that the Earth was at the centre of the universe with everything revolving around it was one that fitted with religious beliefs. After all, man is the most important of God's creations, and so it was proper that the Earth should be at the centre of a perfect and uniform universe.
Retrograde motion of Mars
The Ptolemaic model of the universe had the Earth at the center with the Sun and the planets travelling around it in circular orbits. To accurately describe the observed data, the planets also travelled in smaller orbits known as epicycles as they orbited the Earth. This reproduced the motion of the planets as they travelled across the sky, in particular the phenomenon of retrograde motion, in which a planet sometimes appears to travel backwards or loop the loop. The illustration shows the orbit of Mars over a period of several months.
Nicolaus Copernicus (1473-1543)
In 1543, Copernicus published another model of the universe, in which the Earth went around the Sun: the Heliocentric Model of the Universe. The publication of his book, De revolutionibus orbium coelestium (On the Revolutions of the Celestial Spheres), is often taken to be the beginning of the Scientific Revolution.
The orbits of the planets were still circular, but in his model the problem of retrograde motion was incorporated naturally, as shown in Figure 1, where the orbits of the Earth and Mars are drawn. Since Mars is further from the Sun, it takes longer to travel its orbit than the Earth. Therefore, there will be occasions when the Earth overtakes Mars in its orbit. The line of sight from the Earth to Mars is shown at different intervals. As viewed from the Earth, Mars appears to go backwards in its orbit as the Earth overtakes it, but this is no more unusual than seeing a cyclist go backwards past your side window when you are in a car travelling in the same direction at a greater speed.
Figure 1. Retrograde motion of Mars in the Copernican model.
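The geometry of this explanation can be sketched numerically. The snippet below assumes simple circular, coplanar orbits (the radii and periods are standard approximate values for Earth and Mars) and checks the sign of Mars's apparent motion as seen from Earth:

```python
import math

# Circular, coplanar orbits (AU and years) -- a simplified sketch.
R_EARTH, T_EARTH = 1.000, 1.000
R_MARS,  T_MARS  = 1.524, 1.881

def position(radius, period, t):
    """Position of a planet on a circular orbit at time t (in years)."""
    angle = 2 * math.pi * t / period
    return radius * math.cos(angle), radius * math.sin(angle)

def apparent_rate(t, dt=1e-4):
    """Rate of change of Mars's apparent longitude as seen from Earth.
    Negative values mean Mars appears to move backwards (retrograde)."""
    def longitude(t):
        ex, ey = position(R_EARTH, T_EARTH, t)
        mx, my = position(R_MARS, T_MARS, t)
        return math.atan2(my - ey, mx - ex)
    # Centred difference; wrap the angle jump at +/- pi back into range.
    d = longitude(t + dt) - longitude(t - dt)
    d = (d + math.pi) % (2 * math.pi) - math.pi
    return d / (2 * dt)

# Both planets start aligned at t = 0 (opposition): Mars is retrograde.
print(apparent_rate(0.0) < 0)   # True
# Half an Earth year later, the motion is prograde again.
print(apparent_rate(0.5) > 0)   # True
```

Retrograde motion appears only around opposition, exactly when the faster-moving Earth is overtaking Mars, just as the cyclist analogy suggests.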
Tycho Brahe (1546-1601)
Tycho Brahe observed the planets and made accurate observations of the stars.
Johannes Kepler (1571-1630)
Originally, Kepler had planned to be ordained as a Lutheran minister. He saw it as his Christian duty to understand the universe in terms of mathematical rules and thus understand the works of God. His first attempt at explaining the cosmos was the work Mysterium Cosmographicum (Mystery of the Sacred Cosmos). He devised a model of the solar system in which the orbits of the planets fitted inside spheres with radii that could accommodate each of the Platonic solids. This idea is wrong.
Kepler's model of the Solar System had the planets orbit in spheres with radii that could accommodate each of the Platonic solids.
In 1601 he was hired as Tycho Brahe's assistant. As a mathematician, it was his job to make sense of Brahe's extremely accurate observational data for the orbit of Mars. Kepler described the effort as his "War with Mars", but it resulted in the three laws that bear his name.
Kepler's First Law - the planets do not move in circles but rather in elliptical orbits, with the Sun at one focus.
Kepler's Second Law - the radius vector sweeps out equal areas in equal times.
Kepler's Third Law - the square of a planet's orbital period is proportional to the cube of its average distance from the Sun: T2 ∝ R3.
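The third law is easy to check numerically. Using each planet's average distance from the Sun in astronomical units, T = R^(3/2) gives the orbital period directly in years (a sketch; the distances below are standard approximate values):

```python
# Kepler's third law: T^2 = R^3 when T is in years and R is in AU,
# so T = R ** 1.5.  Distances are standard approximate values.
planets = {
    "Mercury": 0.387,
    "Venus":   0.723,
    "Earth":   1.000,
    "Mars":    1.524,
    "Jupiter": 5.203,
}

for name, r in planets.items():
    period = r ** 1.5          # orbital period in years
    print(f"{name:8s} R = {r:5.3f} AU  ->  T = {period:6.3f} yr")
# Mars comes out at ~1.88 years and Jupiter at ~11.9 years,
# matching their observed periods.
```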
Galileo Galilei (1564-1642)
Galileo's drawing of the Moon.
The phases of Venus.
Galileo's notebook on Jupiter.
Galileo supported the Copernican model of the universe and had used the newly invented telescope to discover evidence to support his belief. The telescope allowed Galileo to see that there were mountains on the Moon, which went against the accepted religious view that the universe was perfect. Galileo also discovered spots on the Sun. People really tried hard to account for these observations without making the heavens imperfect; one suggestion was that over the mountains of the Moon there was a layer of clear crystal so the final surface would be smooth and perfect!
One observation definitely disproved the Ptolemaic model, although it didn't prove that Copernicus was right (as Tycho Brahe pointed out). This was the observation that Venus has phases, much like our Moon does. To the naked eye, Venus always appears as a bright dot in the sky. With a telescope, however, it is fairly easy to see the phases of Venus. Just as the Moon has phases, Venus too has phases based on the planet’s position relative to us and the Sun. There was no way for the Ptolemaic model (Earth centered solar system) to account for these phases. They can only occur as Galileo saw them if Venus is circling the Sun, not the Earth.
Galileo saw near Jupiter what he first thought to be stars. When he realized that the stars were actually going around Jupiter, it negated a major argument of the Ptolemaic model. Not only did this mean that the Earth could not be the only center of motion, but also it knocked a hole in another argument. The supporters of the Ptolemaic model argued that if the Earth were moving through space, the Moon would be left behind. Galileo’s observations showed that the moons of Jupiter were not being left behind as Jupiter moved.
Galileo's advocacy of the heliocentric model of the universe brought him into conflict with the Church. In 1616, the theologians of the Holy Office declared Copernicanism 'false and erroneous', and the Pope admonished Galileo not to defend its doctrines.
Galileo was asked to publish a book which was supposed to support the Geocentric view; however, when Dialogo Sopra I Due Massimi Sistemi Del Mondo (Dialogues on the Two Chief Systems of the World) was published, it was an outright argument for the Copernican view. The book was an imaginary conversation between three people. The Geocentric position was argued by a dogmatic, arrogant character named Simplicio. The Copernican view was supported by an intelligent and wise character named Salviati, representing Galileo. A neutral character, receptive to either position, was also included.
The Church banned the book and ordered Galileo to appear before the Inquisition for heresy. Threatened with torture, Galileo confessed that he was wrong. By this time, Galileo was an old man of 68 years. A death sentence would certainly not have been an unusual punishment for heresy; however, Galileo was lucky and was sentenced to life imprisonment, which was later commuted to house arrest at his home outside Florence. He died in 1642.
Some 359 years after Galileo's death, the Vatican cleared Galileo of any wrongdoing. Pope John Paul II said,
Thanks to his intuition as a brilliant physicist and by relying on different arguments, Galileo, who practically invented the experimental method, understood why only the sun could function as the centre of the world, as it was then known, that is to say, as a planetary system. The error of the theologians of the time, when they maintained the centrality of the Earth, was to think that our understanding of the physical world's structure was, in some way, imposed by the literal sense of Sacred Scripture
Which of course it isn't.
Isaac Newton (1642-1727)
It was Sir Isaac Newton who was able to show that Kepler's laws of planetary motion are a natural consequence of simpler and more general descriptions of motion in nature. This brought into one theory both our observations of how things move on Earth and how the planets move in the heavens. These motions are described formally as Newton's laws of motion and gravity.
Newton applied this idea to the Sun and planets: starting from Kepler's laws, he calculated that the force falls off with the square of the distance from the Sun.
Newton's law of universal gravitation states that there is a force acting between objects that pulls them together. This force is proportional to the mass of the objects and inversely proportional to the square of their distance apart.
F = GMm/r2
Newton looked at the motion of the Moon around the Earth and reasoned that the force responsible for gravity on Earth might also be responsible for keeping the Moon in its orbit. It turned out to be so.
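Newton's reasoning can be checked with a rough version of his famous "Moon test": surface gravity, scaled down by the square of the distance ratio, should match the Moon's centripetal acceleration. The figures below are standard approximate values:

```python
import math

# Newton's "Moon test", sketched with standard approximate values.
g = 9.81                 # surface gravity, m/s^2
R_earth = 6.371e6        # Earth's radius, m
r_moon = 3.844e8         # Earth-Moon distance, m
T_moon = 27.32 * 86400   # sidereal month, s

# Inverse-square prediction of gravity at the Moon's distance
predicted = g * (R_earth / r_moon) ** 2

# Actual centripetal acceleration of the Moon: a = omega^2 * r
omega = 2 * math.pi / T_moon
actual = omega ** 2 * r_moon

print(f"predicted: {predicted:.3e} m/s^2")   # ~2.69e-03
print(f"actual:    {actual:.3e} m/s^2")      # ~2.72e-03
```

The two numbers agree to within about one per cent, which is exactly the kind of check that convinced Newton the same force governs falling apples and the orbiting Moon.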
Binding Energy Curve
The mass of a nucleus is less than the sum of its constituent protons and neutrons. If we took the same number of protons and neutrons as in the nucleus we were trying to recreate, we would find that the total mass of the individual protons and neutrons is greater than when they are arranged as a nucleus. The difference in mass between the nucleus and the sum of the individual nucleons is known as the mass defect. The binding energy is the amount of energy required to break the nucleus into protons and neutrons again; the larger the binding energy, the more difficult that would be. Figure 1 shows the binding energy per nucleon for each element, plotted against mass number.
Figure 1. Binding energy of the elements.
Starting from Hydrogen, the binding energy per nucleon generally increases as we move to heavier nuclei (Helium-4 is unusually tightly bound for its size). This trend continues until we reach iron, after which the binding energy per nucleon begins to decrease slowly.
The binding energy curve is obtained by dividing the total nuclear binding energy by the number of nucleons. The fact that there is a peak in the binding energy curve in the region of stability near iron means that either the breakup of heavier nuclei (fission) or the combining of lighter nuclei (fusion) will yield nuclei which are more tightly bound (less mass per nucleon).
The binding energy is intimately linked with fusion and fission. Elements lighter than Fe can release energy via fusion, while, in the opposite direction, elements heavier than Fe can liberate energy via fission.
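The mass-defect arithmetic above can be illustrated for Helium-4 (a sketch; the atomic masses are standard values in unified atomic mass units):

```python
# Mass defect and binding energy of Helium-4, using atomic masses (u).
# Atomic masses include electrons, so we count a Hydrogen-1 atom for
# each proton+electron pair -- a standard bookkeeping trick.
m_H1  = 1.007825     # Hydrogen-1 atom, u
m_n   = 1.008665     # neutron, u
m_He4 = 4.002602     # Helium-4 atom, u
U_TO_MEV = 931.494   # energy equivalent of 1 u, MeV

mass_defect = 2 * m_H1 + 2 * m_n - m_He4        # ~0.0304 u
binding_energy = mass_defect * U_TO_MEV         # ~28.3 MeV
per_nucleon = binding_energy / 4                # ~7.07 MeV per nucleon

print(f"mass defect:    {mass_defect:.5f} u")
print(f"binding energy: {binding_energy:.1f} MeV "
      f"({per_nucleon:.2f} MeV per nucleon)")
```

The roughly 7 MeV per nucleon for Helium-4 is why Hydrogen fusion in stars releases so much energy.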
A self-optimising microreactor system
Chemists in the US have developed a microreactor system which automatically calculates the optimal conditions for the chemical reaction it is undertaking. Once computed, the conditions can then be applied to a larger-scale reaction system. The researchers say their approach can save hours or days of tedium in the laboratory, by eliminating many manual experiments that would otherwise be required, as well as reducing the amounts of reagent needed.
To demonstrate the system, the research team from the Massachusetts Institute of Technology used the reaction of 4-chlorobenzotrifluoride with 2,3-dihydrofuran - an example of a Heck reaction, widely used in organic synthesis. Three syringe pumps containing the various components of the reaction were fed into a mixer, which in turn was connected to a 140 µl microreactor. The yield of product was measured by high performance liquid chromatography (HPLC), whose results were passed to a computer programmed with an 'optimisation algorithm'. This enables the computer to take information about parameters such as flow rate, temperature and concentration of reactants, relate them to the yield, and then adjust them intelligently - based on the readings from the previous cycle - to produce gradually higher yields of product. The computer is also connected to the apparatus that controls flow rate, temperature, reactant concentration and so on, enabling these adjustments to the experimental conditions to be made automatically.
Within 2 days and after multiple cycles the system had arrived at the optimal conditions for a product yield, in this case, of 83 per cent. 'We then wanted to see if we could use this information to scale the experiment up,' says team member Klavs Jensen. Using the conditions calculated by the microreactor system, the experiment was scaled up to a reactor representing a 50-fold increase in volume. 'The same optimal conditions applied at this larger scale,' says Jensen.
The researchers say that their system should be applicable to many reactions that can be conducted in a microreactor and could result in far less time and material being expended on finding the optimal conditions for a reaction - something that is key in organic chemistry. Furthermore a range of optimisation algorithms exist which can be applied to a variety of complex reaction scenarios. An added bonus, says Jensen, is that the system automatically calibrates the HPLC - 'one of the more tedious parts of doing this kind of work by hand.'
Commenting on the work, Kaspar Koch, managing director of FutureChemistry, a company based in the Netherlands specialising in microreactor technology, says, 'Conventional industrial optimisation methods are still laborious and environmentally unfriendly due to the large consumption of chemicals required. This new research exemplifies the advantages of microreactor technology in a low-waste reaction self-optimisation system consuming only minute amounts of starting materials - another significant step forward to smarter and greener chemistry.'