Wednesday, 26 September 2018

Is Paul Dirac the best physicist ever?

[Image: Paul Dirac. Credit: Wikimedia]

There are good reasons to say that P. A. M. Dirac is indeed the greatest physicist.
He was present at the birth of quantum theory and took part in the debates over the merits of Schroedinger's wave formulation and Heisenberg's matrix formulation. Dirac showed that they were two aspects of the same theory. This alone would make him a giant. It is not clear to me that there is any similar achievement elsewhere in physics.
Like Einstein, he went looking for a new theory and produced it by himself in under two years, resulting in the Dirac equation, which some think is the most beautiful equation there is. Although Dirac did not realise it at the time, this theory predicted the positron, and hence antimatter. This prediction alone would make him a giant. It is comparable to General Relativity's prediction of the bending of light.
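For reference, here is the equation in one standard modern form (natural units, with the gamma matrices $\gamma^\mu$ that Dirac introduced):

$$(i\gamma^\mu \partial_\mu - m)\,\psi = 0$$

Its solutions have four components, and it was the "extra" negative-energy solutions that eventually forced the antimatter interpretation.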
The Dirac equation also accounted for the fine structure of the hydrogen atom, ranking as one of the greatest explanations of the unknown in physics, comparable to General Relativity explaining the orbit of Mercury.
As a piece of mathematical physics, Dirac's work is as brilliant as Goedel's incompleteness theorem, and in many ways just as unexpected. Einstein's General Relativity and Maxwell's equations are the only comparable achievements, but Dirac's use of arcane 19th-century mathematics is astounding. Einstein needed Minkowski, but Dirac did it all by himself, working day and night in a small room in Cambridge. It is arguably a greater achievement than Newton's annus mirabilis of 1666.
Einstein spent the rest of his life after producing GR looking for a unification of GR and QM, but never went beyond Dirac’s unification of SR and QM. Dirac’s work directly informed all that came after, especially the move of quantum theory from quantum mechanics to quantum field theory.
Finally, Dirac invented a notation for describing quantum systems, the bra-ket notation, that is still in use today, both for teaching the theory and for working with it. The only notation I know of that can compare is the Feynman diagram. This alone would also make him a giant of the field.
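A taste of the notation: a state is a "ket" $|\psi\rangle$, its dual is a "bra" $\langle\phi|$, and snapping them together gives the numbers the theory predicts:

$$\langle\phi|\psi\rangle \ \text{(a transition amplitude)}, \qquad \hat{H}\,|\psi\rangle = E\,|\psi\rangle \ \text{(an eigenvalue equation)}$$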
So, in terms of his lasting contributions (unification, explanation, prediction and simplification), I would indeed vote for Dirac as the greatest physicist.

Tuesday, 25 September 2018

What Is Quantum Mechanics?

Quantum mechanics is the body of scientific laws that describe the wacky behavior of photons, electrons and the other particles that make up the universe.
Credit: agsandrew | Shutterstock

By Robert Coolman

Quantum mechanics is the branch of physics relating to the very small. 
It results in what may appear to be some very strange conclusions about the physical world. At the scale of atoms and electrons, many of the equations of classical mechanics, which describe how things move at everyday sizes and speeds, cease to be useful. In classical mechanics, objects exist in a specific place at a specific time. However, in quantum mechanics, objects instead exist in a haze of probability; they have a certain chance of being at point A, another chance of being at point B and so on.
Quantum mechanics (QM) developed over many decades, beginning as a set of controversial mathematical explanations of experiments that the math of classical mechanics could not explain. It began at the turn of the 20th century, around the same time that Albert Einstein published his theory of relativity, a separate mathematical revolution in physics that describes the motion of things at high speeds. Unlike relativity, however, the origins of QM cannot be attributed to any one scientist. Rather, multiple scientists contributed to a foundation of three revolutionary principles that gradually gained acceptance and experimental verification between 1900 and 1930. They are:
Quantized properties: Certain properties, such as position, speed and color, can sometimes only occur in specific, set amounts, much like a dial that "clicks" from number to number. This challenged a fundamental assumption of classical mechanics, which said that such properties should exist on a smooth, continuous spectrum. To describe the idea that some properties "clicked" like a dial with specific settings, scientists coined the word "quantized."
Particles of light: Light can sometimes behave as a particle. This was initially met with harsh criticism, as it ran contrary to 200 years of experiments showing that light behaved as a wave, much like ripples on the surface of a calm lake. Light behaves similarly: it bounces off walls and bends around corners, and the crests and troughs of the wave can add up or cancel out. Added wave crests result in brighter light, while waves that cancel out produce darkness. A light source can be thought of as a ball on a stick being rhythmically dipped in the center of a lake. The color emitted corresponds to the distance between the crests, which is determined by the speed of the ball's rhythm.
Waves of matter: Matter can also behave as a wave. This ran counter to the roughly 30 years of experiments showing that matter (such as electrons) exists as particles.
In 1900, German physicist Max Planck sought to explain the distribution of colors emitted over the spectrum in the glow of red-hot and white-hot objects, such as light-bulb filaments. When making physical sense of the equation he had derived to describe this distribution, Planck realized it implied that combinations of only certain colors (albeit a great number of them) were emitted, specifically those that were whole-number multiples of some base value. Somehow, colors were quantized! This was unexpected because light was understood to act as a wave, meaning that values of color should be a continuous spectrum. What could be forbidding atoms from producing the colors between these whole-number multiples? This seemed so strange that Planck regarded quantization as nothing more than a mathematical trick. According to Helge Kragh in his 2000 article in Physics World magazine, "Max Planck, the Reluctant Revolutionary," "If a revolution occurred in physics in December 1900, nobody seemed to notice it. Planck was no exception …" 
Planck's equation also contained a number that would later become very important to future development of QM; today, it's known as "Planck's Constant."
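In symbols, Planck's constraint says that the energy emitted at a frequency $\nu$ comes only in whole-number multiples of a base value set by that constant:

$$E = n h \nu, \qquad n = 1, 2, 3, \ldots, \qquad h \approx 6.626 \times 10^{-34}\ \text{J·s}$$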
Quantization helped to explain other mysteries of physics. In 1907, Einstein used Planck's hypothesis of quantization to explain why the temperature of a solid changed by different amounts if you put the same amount of heat into the material but changed the starting temperature.
Since the early 1800s, the science of spectroscopy had shown that different elements emit and absorb specific colors of light called "spectral lines." Though spectroscopy was a reliable method for determining the elements contained in objects such as distant stars, scientists were puzzled about why each element gave off those specific lines in the first place. In 1888, Johannes Rydberg derived an equation that described the spectral lines emitted by hydrogen, though nobody could explain why the equation worked. This changed in 1913 when Niels Bohr applied Planck's hypothesis of quantization to Ernest Rutherford's 1911 "planetary" model of the atom, which postulated that electrons orbited the nucleus the same way that planets orbit the sun. According to Physics 2000 (a site from the University of Colorado), Bohr proposed that electrons were restricted to "special" orbits around an atom's nucleus. They could "jump" between special orbits, and the energy produced by the jump caused specific colors of light, observed as spectral lines. Though quantized properties were invented as but a mere mathematical trick, they explained so much that they became the founding principle of QM.
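Rydberg's formula, written in its modern form, gives the wavelength $\lambda$ of each hydrogen line in terms of two whole numbers, which Bohr's model later identified with the "special" orbits:

$$\frac{1}{\lambda} = R_{\mathrm{H}} \left( \frac{1}{n_1^2} - \frac{1}{n_2^2} \right), \qquad n_2 > n_1, \qquad R_{\mathrm{H}} \approx 1.097 \times 10^7\ \text{m}^{-1}$$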
In 1905, Einstein published a paper, "Concerning an Heuristic Point of View Toward the Emission and Transformation of Light," in which he envisioned light traveling not as a wave, but as some manner of "energy quanta." This packet of energy, Einstein suggested, could "be absorbed or generated only as a whole," specifically when an atom "jumps" between quantized vibration rates. This would also apply, as would be shown a few years later, when an electron "jumps" between quantized orbits. Under this model, Einstein's "energy quanta" contained the energy difference of the jump; when divided by Planck’s constant, that energy difference determined the color of light carried by those quanta. 
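In other words, the color (frequency $\nu$) of the emitted light is fixed by the size of the jump:

$$\nu = \frac{\Delta E}{h}$$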
With this new way to envision light, Einstein offered insights into the behavior of nine different phenomena, including the specific colors that Planck described being emitted from a light-bulb filament. It also explained how certain colors of light could eject electrons off metal surfaces, a phenomenon known as the "photoelectric effect." However, Einstein wasn't wholly justified in taking this leap, said Stephen Klassen, an associate professor of physics at the University of Winnipeg. In a 2008 paper, "The Photoelectric Effect: Rehabilitating the Story for the Physics Classroom," Klassen states that Einstein's energy quanta aren't necessary for explaining all of those nine phenomena. Certain mathematical treatments of light as a wave are still capable of describing both the specific colors that Planck described being emitted from a light-bulb filament and the photoelectric effect. Indeed, in Einstein's controversial winning of the 1921 Nobel Prize, the Nobel committee only acknowledged "his discovery of the law of the photoelectric effect," which specifically did not rely on the notion of energy quanta.
Roughly two decades after Einstein's paper, the term "photon" was popularized for describing energy quanta, thanks to the 1923 work of Arthur Compton, who showed that light scattered by an electron beam changed in color. This showed that particles of light (photons) were indeed colliding with particles of matter (electrons), thus confirming Einstein's hypothesis. By now, it was clear that light could behave both as a wave and a particle, placing light's "wave-particle duality" into the foundation of QM.
Since the discovery of the electron in 1897, evidence that all matter existed in the form of particles was slowly building. Still, the demonstration of light's wave-particle duality made scientists question whether matter was limited to acting only as particles. Perhaps wave-particle duality could ring true for matter as well? The first scientist to make substantial headway with this reasoning was a French physicist named Louis de Broglie. In 1924, de Broglie used the equations of Einstein's theory of special relativity to show that particles can exhibit wave-like characteristics, and that waves can exhibit particle-like characteristics. Then in 1925, two scientists, working independently and using separate lines of mathematical thinking, applied de Broglie's reasoning to explain how electrons whizzed around in atoms (a phenomenon that was unexplainable using the equations of classical mechanics). In Germany, physicist Werner Heisenberg (teaming with Max Born and Pascual Jordan) accomplished this by developing "matrix mechanics." Austrian physicist Erwin Schrödinger developed a similar theory called "wave mechanics." Schrödinger showed in 1926 that these two approaches were equivalent (though Swiss physicist Wolfgang Pauli sent an unpublished result to Jordan showing that matrix mechanics was more complete).
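De Broglie's relation assigns a wavelength $\lambda$ to any particle of momentum $p$:

$$\lambda = \frac{h}{p}$$

so the larger the momentum, the shorter (and harder to detect) the wave.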
The Heisenberg-Schrödinger model of the atom, in which each electron acts as a wave (sometimes referred to as a "cloud") around the nucleus of an atom replaced the Rutherford-Bohr model. One stipulation of the new model was that the ends of the wave that forms an electron must meet. In "Quantum Mechanics in Chemistry, 3rd Ed." (W.A. Benjamin, 1981), Melvin Hanna writes, "The imposition of the boundary conditions has restricted the energy to discrete values." A consequence of this stipulation is that only whole numbers of crests and troughs are allowed, which explains why some properties are quantized. In the Heisenberg-Schrödinger model of the atom, electrons obey a "wave function" and occupy "orbitals" rather than orbits. Unlike the circular orbits of the Rutherford-Bohr model, atomic orbitals have a variety of shapes ranging from spheres to dumbbells to daisies.
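The "ends must meet" condition is a standing-wave requirement. In the simplest circular-orbit picture it reads:

$$n\lambda = 2\pi r, \qquad n = 1, 2, 3, \ldots$$

so only whole numbers $n$ of de Broglie wavelengths fit around an orbit of radius $r$, which is exactly where the discrete energies come from.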
In 1927, Walter Heitler and Fritz London further developed wave mechanics to show how atomic orbitals could combine to form molecular orbitals, effectively showing why atoms bond to one another to form molecules. This was yet another problem that had been unsolvable using the math of classical mechanics. These insights gave rise to the field of "quantum chemistry."
Also in 1927, Heisenberg made another major contribution to quantum physics. He reasoned that since matter acts as waves, some properties, such as an electron's position and speed, are "complementary," meaning there's a limit (related to Planck's constant) to how precisely both can be known at once. Under what would come to be called "Heisenberg's uncertainty principle," it was reasoned that the more precisely an electron's position is known, the less precisely its speed can be known, and vice versa. This uncertainty principle applies to everyday-size objects as well, but is not noticeable because the lack of precision is extraordinarily tiny. According to Dave Slaven of Morningside College (Sioux City, IA), if a baseball's speed is known to within a precision of 0.1 mph, the maximum precision to which it is possible to know the ball's position is 0.000000000000000000000000000008 millimeters.
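That baseball figure can be reproduced from the uncertainty relation $\Delta x \, \Delta p \geq \hbar/2$. The sketch below is a back-of-the-envelope check, with an assumed standard baseball mass of 0.145 kg (a value not given in the article):

```python
# Minimal check of the baseball example, assuming dx * dp >= hbar / 2.
hbar = 1.054571817e-34  # reduced Planck constant, J*s
m = 0.145               # baseball mass, kg (assumed standard value)
dv = 0.1 * 0.44704      # 0.1 mph speed uncertainty, converted to m/s

dx = hbar / (2 * m * dv)      # best possible position precision, meters
print(f"{dx * 1000:.0e} mm")  # ~8e-30 mm, matching the figure quoted above
```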
The principles of quantization, wave-particle duality and the uncertainty principle ushered in a new era for QM. In 1927, Paul Dirac applied a quantum understanding of electric and magnetic fields to give rise to the study of "quantum field theory" (QFT), which treated particles (such as photons and electrons) as excited states of an underlying physical field. Work in QFT continued for a decade until scientists hit a roadblock: Many equations in QFT stopped making physical sense because they produced results of infinity. After a decade of stagnation, Hans Bethe made a breakthrough in 1947 using a technique called "renormalization." Here, Bethe realized that all the infinite results could be traced to two phenomena (specifically "electron self-energy" and "vacuum polarization"), and that the observed values of electron mass and electron charge could be used to make all the infinities disappear.
Since the breakthrough of renormalization, QFT has served as the foundation for developing quantum theories about the four fundamental forces of nature: 1) electromagnetism, 2) the weak nuclear force, 3) the strong nuclear force and 4) gravity. The first insight provided by QFT was a quantum description of electromagnetism through "quantum electrodynamics" (QED), which made strides in the late 1940s and early 1950s. Next was a quantum description of the weak nuclear force, which was unified with electromagnetism to build "electroweak theory" (EWT) throughout the 1960s. Finally came a quantum treatment of the strong nuclear force using "quantum chromodynamics" (QCD) in the 1960s and 1970s. The theories of QED, EWT and QCD together form the basis of the Standard Model of particle physics. Unfortunately, QFT has yet to produce a quantum theory of gravity. That quest continues today in the studies of string theory and loop quantum gravity.
Robert Coolman is a graduate researcher at the University of Wisconsin-Madison, finishing up his Ph.D. in chemical engineering. He writes about math, science and how they interact with history.

Source and credits: livescience

Monday, 24 September 2018

Massive experiment finally proves Einstein was wrong about quantum physics



[Image: Albert Einstein. Credit: Wikiquote]
It's not often scientists get to say this, but a new study has shown once and for all that the master scientist was wrong to turn his back on quantum physics.

Dr Einstein famously didn't believe it was possible for two tiny particles - known as photons - to transmit information between them instantly, no matter how far apart they were, because doing so would break one of the universe's fundamental rules - that nothing, including information, can travel faster than light. His view is known as 'local realism'.
While quantum physics has shown it's possible through a process known as entanglement, which Dr Einstein called "spooky action at a distance", there has long been a small loophole through which his opposition has survived.
In previous experiments, pairs of photons have been entangled, then sent to different locations where they were measured.
"If the measurement results tend to agree, regardless of which properties we choose to measure, it implies something very surprising: either the measurement of one particle instantly affects the other particle (despite being far away), or even stranger, the properties never really existed, but rather were created by the measurement itself," the Spain-based Institute of Photonic Sciences said in a statement (ICFO).
Dr Einstein argued the photons themselves could influence the method of measurement, affecting the result.
"It would be like allowing students to write their own exam questions," said ICFO. "This loophole cannot be closed by choosing with dice or random number generators, because there is always the possibility that these physical systems are coordinated with the entangled particles."
So instead scientists in 10 countries around the world turned to gamers - more than 100,000 of them in fact - to generate unpredictable and random numbers, which were used to control the measurement equipment.

"People are unpredictable, and when using smartphones even more so," said study contributor Prof Andrew White of the University of Queensland.
"These random bits then determined how various entangled atoms, photons, and superconductors were measured in the experiments, closing a stubborn loophole in tests of Einstein's principle of local realism."
Essentially, the measuring equipment was being controlled by random bits of information created by people across the world, far away from the measuring equipment.
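Schematically, the logic of such a test fits in a few lines. The sketch below is purely illustrative: the polarizer angles, correlation rule and CHSH statistic are standard textbook quantum mechanics, the human_bits function is a simulated stand-in for the gamers' input, and none of this is the experiment's actual code.

```python
import math
import random

def human_bits(n):
    # Stand-in for the gamers' unpredictable bits (here just a PRNG).
    return [random.randint(0, 1) for _ in range(n)]

# Polarizer angles (radians) at the two stations, a standard CHSH choice.
ALICE = [0.0, math.pi / 4]
BOB = [math.pi / 8, 3 * math.pi / 8]

def entangled_pair(a, b):
    # Sample +/-1 outcomes with correlation E = cos(2(a - b)), the quantum
    # prediction for polarization-entangled photon pairs.
    e = math.cos(2 * (a - b))
    x = random.choice([-1, 1])
    y = x if random.random() < (1 + e) / 2 else -x
    return x, y

def chsh(trials=200_000):
    sums = {(i, j): 0 for i in (0, 1) for j in (0, 1)}
    counts = {(i, j): 0 for i in (0, 1) for j in (0, 1)}
    # Human-chosen bits decide which property gets measured on each side.
    for i, j in zip(human_bits(trials), human_bits(trials)):
        x, y = entangled_pair(ALICE[i], BOB[j])
        sums[(i, j)] += x * y
        counts[(i, j)] += 1
    E = {k: sums[k] / counts[k] for k in sums}
    return E[(0, 0)] - E[(0, 1)] + E[(1, 0)] + E[(1, 1)]

# Any local-realist theory obeys |S| <= 2; quantum mechanics reaches ~2.83.
print(f"S = {chsh():.2f}")
```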
"Human choices introduce the element of free will, by which people can choose independently of whatever the particles might be doing," said ICFO.
"The obtained results strongly disagree Einstein's worldview [and] close the freedom-of-choice loophole for the first time."
ICFO researcher Morgan Mitchell said it proves that either we change the universe just by looking at it, or there truly is some way for particles to communicate instantly.
"We showed that Einstein's world-view of local realism, in which things have properties whether or not you observe them, and no influence travels faster than light, cannot be true - at least one of those things must be false."
Dr Einstein went to his grave believing both were true.
The study's findings have been published in the journal Nature.
source: nature via: newshub

Thursday, 20 September 2018

Why is the Sun stationary?



Well, it's not. Technically, the sun is orbiting the center of our galaxy, just as we orbit said sun. However, if you mean 'why do we orbit the sun instead of the sun orbiting us, or both of us orbiting an invisible axis?', then it's because the sun is SO MASSIVE! All of the planets' masses combined would have very, very little gravitational effect on the sun; therefore, the sun pulls us inward. The only thing preventing us from falling directly in is the fact that Earth is moving at about 67,000 miles per hour!
Think of it like this:
Imagine you had a sheet hanging taut in the air. If you put two marbles on the sheet, at different ends, and pushed them at the same speed and at equal yet opposite angles, once they get to the center, they will "orbit" around an invisible axis until they lose speed and collide in the middle. Now that would be a binary solar system, or like two planets, or a planet and a moon that's almost the same size (like Pluto and Charon). To see why the sun has control, replace one of the tiny marbles with a bowling ball. Strike that, make it a cannonball. Immediately, you will notice that the marble moves around the cannonball just fine, yet the cannonball won't move at all. If you tried to have Sol orbit us, we would either end up getting sucked into the Earth or flung far out of orbit, and everything would go back to the way it was (minus one Earth).
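To put a rough number on "won't move at all": both bodies actually circle their common centre of mass (the barycenter), and a quick estimate shows how tiny the Sun's share of the motion is. The constants below are standard textbook values, not figures from this answer:

```python
# How far the Sun sits from the Sun-Earth barycenter (rough estimate).
M_SUN = 1.989e30    # kg
M_EARTH = 5.972e24  # kg
D = 1.496e11        # mean Earth-Sun distance, m

# Each body's orbit radius around the barycenter scales with the OTHER mass.
r_sun = D * M_EARTH / (M_SUN + M_EARTH)
print(f"Sun's wobble radius: {r_sun / 1000:.0f} km")  # ~449 km, deep inside the Sun
```

Even adding Jupiter, the heaviest marble on the sheet, only moves the barycenter to just above the Sun's surface.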
Tell you what, if you'd like to explore this further, there is a website I found a while back that's pretty cool. You may have heard of it. Go to this link, My Solar System, and click download (don't worry, it's perfectly safe). It will take you to a page where you can create your own solar system, and make your own masses, planets, etc. and see how they work, with angles and velocity and such. Let me know what you think!
credits: quoran

Friday, 14 September 2018

How big is our solar system?


This artist's concept depicts NASA's Voyager 1 spacecraft entering interstellar space, or the space between stars. Interstellar space is dominated by the plasma, or ionized gas, that was ejected by the death of nearby giant stars millions of years ago. The environment inside our solar bubble is dominated by the plasma exhausted by our sun, known as the solar wind. The interstellar plasma is shown with an orange glow similar to the color seen in visible-light images from NASA's Hubble Space Telescope that show stars in the Orion nebula traveling through interstellar space. Image released Sept. 12, 2013.
Credit: NASA/JPL-Caltech

Voyager 1 has left the solar system. The big news that the spacecraft reached interstellar space on Aug. 25, 2012, after its decades-long sojourn raises the question: just how far did it have to travel to knock on cold, dark space's door?
In other words, just how big is the solar system that earthlings call home?
That's a question whose answer is steeped in hot gas traveling faster than the speed of sound.
"There's a gas flowing outward from the sun called the solar wind, at about a million miles an hour, it's supersonic," said study researcher and Voyager 1 team member Donald Gurnett, of the University of Iowa, who is principal investigator of the plasma wave instrument. [How the Voyager Space Probes Work (Infographic)]
As the charged gas zips away from the sun, it expands and spreads out; at the same time, its density decreases.
"Fifty years ago or thereabouts, it was recognized or postulated that the solar wind has to be stopped by the interstellar gas pressure, the gas between the stars," Gurnett told LiveScience in an interview.
Scientists knew this cold, dark space between stars existed, calling the boundary between it and the bubble of hot, charged particles surrounding our solar system the heliopause. Even so, they didn't know how dense this boundary might be.
The boundary would mark the end of the solar system and the beginning of interstellar space, hence revealing the size of the solar system.
"There's been a great quest for a long time to figure out where this boundary was," Gurnett said. "It was once thought — at least two scientific papers 30 or so years ago claimed it was just beyond Jupiter."
Now that Voyager 1, which launched in 1977, has penetrated the heliopause and entered the stars' chilly quarters, Gurnett and his colleagues can say the boundary is much farther out than Jupiter's orbit.
The end of the solar system is about 122 astronomical units (AU) away from the sun, where one AU is 93 million miles (150 million kilometers). That's about three times as far out as Pluto, which is about 40 AU from the sun, and about four times farther from the sun than Neptune's orbit (roughly 30 AU).
That means Voyager 1 is about 17 light-hours away from planet Earth. For comparison, the nearest star, Alpha Centauri, lies 4.3 light-years away. A radio signal, which travels at the speed of light (186,000 miles a second, or nearly 300,000 km/s), takes 17 hours to travel from Voyager 1 to Earth.
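A quick unit check of those figures, using standard constants:

```python
# How long light takes to cover 122 AU, the distance quoted for the heliopause.
AU_KM = 1.496e8   # one astronomical unit, km
C_KM_S = 2.998e5  # speed of light, km/s

hours = 122 * AU_KM / C_KM_S / 3600
print(f"{hours:.1f} light-hours")  # ~16.9, matching the 17-hour signal time
```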
"Voyager is the highest-speed object ever produced by a human," Gurnett said.
The scientists involved in the mission knew the spacecraft had pushed through the heliopause on April 9, 2013, when they saw a Voyager 1 recording of a sudden spike in oscillations of plasma (hot, ionized gas) at a certain frequency. "When we saw that, it took us 10 seconds to say we had gone through the heliopause," Gurnett said in a statement. The frequency suggested a plasma density that was 80 times higher than anything seen inside the heliosphere's outer edge.
In fact, the density was close to what astronomers would expect in interstellar space. They then back-calculated when Voyager 1 would have passed the heliopause.
Voyager 1 is beyond the solar bubble but has yet to reach the Oort Cloud, a repository of comets about a light-year away, from which many of the icy bodies that visit the inner solar system originate. The Oort Cloud forms a sort of icy shell around the solar system.
source: livescience

How do we measure time beyond the solar system?


DSAC is prepping for a yearlong experiment to characterize and test its suitability for use in future deep space exploration. Image via NASA Jet Propulsion Laboratory
By Todd Ely, NASA
We all intuitively understand the basics of time. Every day we count its passage and use it to schedule our lives.
We also use time to navigate our way to the destinations that matter to us. In school we learned that speed and time will tell us how far we went in traveling from point A to point B; with a map we can pick the most efficient route – simple.
But what if point A is the Earth, and point B is Mars – is it still that simple? Conceptually, yes. But to actually do it we need better tools – much better tools.
At NASA’s Jet Propulsion Laboratory, I’m working to develop one of these tools: the Deep Space Atomic Clock, or DSAC for short. DSAC is a small atomic clock that could be used as part of a spacecraft navigation system. It will improve accuracy and enable new modes of navigation, such as unattended or autonomous.
In its final form, the Deep Space Atomic Clock will be suitable for operations in the solar system well beyond Earth orbit. Our goal is to develop an advanced prototype of DSAC and operate it in space for one year, demonstrating its use for future deep space exploration.
Speed and time tell us distance
To navigate in deep space, we measure the transit time of a radio signal traveling back and forth between a spacecraft and one of our transmitting antennae on Earth (usually one of NASA’s Deep Space Network complexes located in Goldstone, California; Madrid, Spain; or Canberra, Australia).
The Canberra Deep Space Communications Complex in Australia is part of NASA’s Deep Space Network, receiving and sending radio signals to and from spacecraft. Image via Jet Propulsion Laboratory
We know the signal is traveling at the speed of light, a constant at approximately 300,000 km/sec (186,000 miles/sec). Then, from how long our “two-way” measurement takes to go there and back, we can compute distances and relative speeds for the spacecraft.
For instance, an orbiting satellite at Mars is an average of 250 million kilometers from Earth. The time the radio signal takes to travel there and back (called its two-way light time) is about 28 minutes. We can measure the travel time of the signal and then relate it to the total distance traversed between the Earth tracking antenna and the orbiter to better than a meter, and the orbiter’s relative speed with respect to the antenna to within 0.1 mm/sec.
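A quick sanity check of those numbers (250 million km is the article's example distance):

```python
# Two-way light time for the Earth-Mars distance used in the example.
D_KM = 2.5e8    # average Earth-Mars distance, km
C_KM_S = 3.0e5  # speed of light, km/s

two_way_min = 2 * D_KM / C_KM_S / 60
print(f"{two_way_min:.0f} minutes")  # ~28, as quoted above
```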
We collect the distance and relative speed data over time, and when we have a sufficient amount (for a Mars orbiter this is typically two days) we can determine the satellite’s trajectory.
Measuring time, way beyond Swiss precision
Fundamental to these precise measurements are atomic clocks. By measuring very stable and precise frequencies of light emitted by certain atoms (examples include hydrogen, cesium, rubidium and, for DSAC, mercury), an atomic clock can regulate the time kept by a more traditional mechanical (quartz crystal) clock. It’s like a tuning fork for timekeeping. The result is a clock system that can be ultra stable over decades.
The precision of the Deep Space Atomic Clock relies on an inherent property of mercury ions – they transition between neighboring energy levels at a frequency of exactly 40.5073479968 GHz. DSAC uses this property to measure the error in a quartz clock’s “tick rate,” and, with this measurement, “steers” it towards a stable rate. DSAC’s resulting stability is on par with ground-based atomic clocks, gaining or losing less than a microsecond per decade.
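As a rough back-of-the-envelope reading of that figure (my arithmetic, not a DSAC specification), a microsecond per decade corresponds to a fractional stability of a few parts in 10^15:

```python
# Fractional stability implied by "less than a microsecond per decade."
drift_s = 1e-6                      # one microsecond
decade_s = 10 * 365.25 * 86400      # ten years, in seconds
print(f"{drift_s / decade_s:.1e}")  # ~3.2e-15
```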
Continuing with the Mars orbiter example, the ground-based atomic clocks at the Deep Space Network contribute an error to the orbiter's two-way light time measurement on the order of picoseconds, amounting to only fractions of a meter of the overall distance error. Likewise, the clocks' contribution to error in the orbiter's speed measurement is a minuscule fraction of the overall error (1 micrometer/sec out of the 0.1 mm/sec total).
The distance and speed measurements are collected by the ground stations and sent to teams of navigators who process the data using sophisticated computer models of spacecraft motion. They compute a best-fit trajectory that, for a Mars orbiter, is typically accurate to within 10 meters (about the length of a school bus).
The DSAC Demonstration Unit (shown mounted on a plate for easy transportation). Image via Jet Propulsion Laboratory
Sending an atomic clock to deep space
The ground clocks used for these measurements are the size of a refrigerator and operate in carefully controlled environments – definitely not suitable for spaceflight. In comparison, DSAC, even in its current prototype form as seen above, is about the size of a four-slice toaster. By design, it’s able to operate well in the dynamic environment aboard a deep-space exploring craft.
DSAC mercury ion trap housing with electric field trapping rods seen in the cutouts. Image via Jet Propulsion Laboratory
One key to reducing DSAC’s overall size was miniaturizing the mercury ion trap. Shown in the figure above, it’s about 15 cm (6 inches) in length. The trap confines the plasma of mercury ions using electric fields. Then, by applying magnetic fields and external shielding, we provide a stable environment where the ions are minimally affected by temperature or magnetic variations. This stable environment enables measuring the ions’ transition between energy states very accurately.
The DSAC technology doesn’t really consume anything other than power. All these features together mean we can develop a clock that’s suitable for very long duration space missions.
Because DSAC is as stable as its ground counterparts, spacecraft carrying DSAC would not need to turn signals around to get two-way tracking. Instead, the spacecraft could send the tracking signal to the Earth station or it could receive the signal sent by the Earth station and make the tracking measurement on board. In other words, traditional two-way tracking can be replaced with one-way, measured either on the ground or on board the spacecraft.
So what does this mean for deep space navigation? Broadly speaking, one-way tracking is more flexible, scalable (since it could support more missions without building new antennas) and enables new ways to navigate.
DSAC enables the next generation of deep space tracking. Image via Jet Propulsion Laboratory
DSAC advances us beyond what’s possible today
The Deep Space Atomic Clock has the potential to solve a bunch of our current space navigation challenges.
  • Places like Mars are “crowded” with many spacecraft: Right now, there are five orbiters competing for radio tracking. Two-way tracking requires spacecraft to “time-share” the resource. But with one-way tracking, the Deep Space Network could support many spacecraft simultaneously without expanding the network. All that’s needed are capable spacecraft radios coupled with DSAC.
  • With the existing Deep Space Network, one-way tracking can be conducted at a higher-frequency band than current two-way. Doing so improves the precision of the tracking data by upwards of 10 times, producing range rate measurements with only 0.01 mm/sec error.
  • One-way uplink transmissions from the Deep Space Network are very high-powered. They can be received by smaller spacecraft antennas with greater fields of view than the typical high-gain, focused antennas used today for two-way tracking. This change allows the mission to conduct science and exploration activities without interruption while still collecting high-precision data for navigation and science. As an example, use of one-way data with DSAC to determine the gravity field of Europa, an icy moon of Jupiter, can be achieved in a third of the time it would take using traditional two-way methods with the flyby mission currently under development by NASA.
  • Collecting high-precision one-way data on board a spacecraft means the data are available for real-time navigation. Unlike two-way tracking, there is no delay with ground-based data collection and processing. This type of navigation could be crucial for robotic exploration; it would improve accuracy and reliability during critical events – for example, when a spacecraft inserts into orbit around a planet. It’s also important for human exploration, when astronauts will need accurate real-time trajectory information to safely navigate to distant solar system destinations.
The Next Mars Orbiter (NeMO) currently in concept development by NASA is one mission that could potentially benefit from the one-way radio navigation and science that DSAC would enable. Image via NASA
Countdown to DSAC launch
The DSAC mission is a hosted payload on the Surrey Satellite Technology Orbital Test Bed (OTB) spacecraft. Together with the DSAC Demonstration Unit, an ultra-stable quartz oscillator and a GPS receiver with antenna will enter low-altitude Earth orbit once launched via a SpaceX Falcon Heavy rocket in early 2017.
While it’s on orbit, DSAC’s space-based performance will be measured in a yearlong demonstration, during which Global Positioning System tracking data will be used to determine precise estimates of OTB’s orbit and DSAC’s stability. We’ll also be running a carefully designed experiment to confirm DSAC-based orbit estimates are as accurate or better than those determined from traditional two-way data. This is how we’ll validate DSAC’s utility for deep space one-way radio navigation.
In the late 1700s, navigating the high seas was forever changed by John Harrison’s development of the H4 “sea watch.” H4’s stability enabled seafarers to accurately and reliably determine longitude, which until then had eluded mariners for thousands of years. Today, exploring deep space requires traveling distances that are orders of magnitude greater than the lengths of oceans, and demands tools with ever more precision for safe navigation. DSAC is at the ready to respond to this challenge.
Todd Ely, Principal Investigator on Deep Space Atomic Clock Technology Demonstration Mission, Jet Propulsion Laboratory, NASA
This article was originally published on The Conversation.
