The Global Positioning System – A National Resource

The Global Positioning System (GPS) was originally designed jointly by the U.S. Navy and the U.S. Air Force to permit the determination of position and time for military troops and guided missiles. However, GPS has also become the basis for position and time measurement by scientific laboratories and a wide spectrum of applications in a multi-billion dollar commercial industry. Roughly three billion GPS receivers have been sold to consumers throughout the world. Thirty-one GPS satellites are currently broadcasting navigation signals from their high-altitude vantage points in space.

EARLY METHODS OF NAVIGATION

The shape and size of the earth has been known from the time of antiquity. The fact that the earth is a sphere was well known to educated people as long ago as the fourth century BC. In his book On the Heavens, Aristotle gave two scientifically correct arguments. First, the shadow of the earth projected on the moon during a lunar eclipse appears to be curved. Second, the elevations of stars change as one travels north or south, while certain stars visible in Egypt cannot be seen at all from Greece.

The actual radius of the earth was determined within one percent by Eratosthenes in about 230 BC. He knew that the sun was directly overhead at noon on the summer solstice in Syene (Aswan, Egypt), since on that day it illuminated the water of a deep well. At the same time, he measured the length of the shadow cast by a column on the grounds of the library at Alexandria, which was nearly due north. The distance between Alexandria and Syene had been well established by professional runners and camel caravans. Thus Eratosthenes was able to compute the earth's radius from the difference in latitude that he inferred from his measurement. In terms of modern units of length, he arrived at the figure of about 6400 km. By comparison, the actual mean radius is 6371 km (the earth is not precisely spherical, as the polar radius is 21 km less than the equatorial radius of 6378 km).
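Eratosthenes' computation is easy to reproduce. The sketch below uses the commonly quoted figures of a 7.2-degree shadow angle at Alexandria (one-fiftieth of a full circle) and an Alexandria-Syene distance of roughly 800 km (5,000 stadia); those two specific values are assumptions supplied for illustration, not figures from the article.

```python
import math

# Eratosthenes' method: the shadow angle at Alexandria equals the
# difference in latitude between Alexandria and Syene, so the known
# Alexandria-Syene distance is that same fraction of a full circle.
shadow_angle_deg = 7.2   # assumed: about 1/50 of a circle
distance_km = 800.0      # assumed: about 5,000 stadia

circumference_km = distance_km * (360.0 / shadow_angle_deg)
radius_km = circumference_km / (2.0 * math.pi)

print(f"circumference ~ {circumference_km:.0f} km")  # ~40,000 km
print(f"radius        ~ {radius_km:.0f} km")         # ~6,366 km vs. 6,371 km actual
```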
The ability to determine one's position on the earth was the next major problem to be addressed. In the second century AD, the Greek astronomer Claudius Ptolemy prepared a geographical atlas, in which he estimated the latitude and longitude of the principal cities of the Mediterranean world. Ptolemy is most famous, however, for his geocentric theory of planetary motion, which was the basis for astronomical catalogs until Nicholas Copernicus published his heliocentric theory in 1543.

CELESTIAL NAVIGATION

Historically, methods of navigation over the earth's surface have involved the angular measurement of star positions to determine latitude. The latitude of one's position is equal to the elevation of the pole star. The position of the pole star on the celestial sphere is only temporary, however, due to precession of the earth's axis of rotation through a circle of radius 23.5 degrees over a period of 26,000 years. At the time of Julius Caesar, there was no star sufficiently close to the north celestial pole to be called a pole star. In 13,000 years, the star Vega will be near the pole. It is perhaps not a coincidence that mariners did not venture far from visible land until the era of Christopher Columbus, when true north could be determined using the star we now call Polaris. Even then the star's diurnal rotation caused an apparent variation of the compass needle. In 1492, Polaris described a radius of about 3.5 degrees about the celestial pole, compared to about 1 degree today.

At sea, however, Columbus and his contemporaries depended primarily on the mariner's compass and dead reckoning. The determination of longitude was much more difficult. Longitude is obtained astronomically from the difference between the observed time of a celestial event, such as an eclipse, and the corresponding time tabulated for a reference location. For each hour of difference in time, the difference in longitude is 15 degrees.

NAVIGATION AT SEA

Columbus himself attempted to estimate his longitude on his fourth voyage to the New World by observing the time of a lunar eclipse as seen from the harbor of Santa Gloria in Jamaica on February 29, 1504. In his distinguished biography Admiral of the Ocean Sea, Samuel Eliot Morison states that Columbus measured the duration of the eclipse with an hour-glass and determined his position as nine hours and fifteen minutes west of Cadiz, Spain, according to the predicted eclipse time in an almanac he carried aboard his ship. Over the preceding year, while his ship was marooned in the harbor, Columbus had determined the latitude of Santa Gloria by numerous observations of the pole star. He made out his latitude to be 18 degrees, which was in error by less than half a degree and was one of the best recorded observations of latitude in the early sixteenth century, but his estimated longitude was off by some 38 degrees.

Columbus also made legendary use of this eclipse by threatening the natives with the disfavor of God, as indicated by a portent from Heaven, if they did not bring desperately needed provisions to his men. When the eclipse arrived as predicted, the natives pleaded for the Admiral's intervention, promising to furnish all the food that was needed.

New knowledge of the universe was revealed by Galileo Galilei in his book The Starry Messenger. This book, published in Venice in 1610, reported the telescopic discoveries of hundreds of new stars, the craters on the moon, the phases of Venus, the rings of Saturn, sunspots, and the four inner satellites of Jupiter. Galileo suggested using the eclipses of Jupiter's satellites as a celestial clock for the practical determination of longitude, but the calculation of an accurate ephemeris and the difficulty of observing the satellites from the deck of a rolling ship prevented use of this method at sea. Nevertheless, James Bradley, the third Astronomer Royal of England, successfully applied the technique in 1726 to determine the longitudes of Lisbon and New York with considerable accuracy.

The inability to measure longitude at sea had potentially catastrophic consequences for sailing vessels exploring the new world, carrying cargo, and conquering new territories. Shipwrecks were common. On October 22, 1707 a fleet of twenty-one ships under the command of Admiral Sir Clowdisley Shovell was returning to England after an unsuccessful military attack on Toulon in the Mediterranean. As the fleet approached the English Channel in dense fog, the flagship and three others foundered on the coastal rocks and nearly two thousand men perished.

Stunned by this unprecedented loss, the British government in 1714 offered a prize of £20,000 for a method to determine longitude at sea within half a degree. The scientific establishment believed that the solution would be obtained from observations of the moon. The German cartographer Tobias Mayer, aided by new mathematical methods developed by Leonhard Euler, offered improved tables of the moon in 1757. The recorded position of the moon at a given time as seen from a reference meridian could be compared with its position at the local time to determine the angular position west or east.
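The conversion underlying all of these time-based longitude methods is a simple proportionality: the earth turns 360 degrees in 24 hours, so each hour of local-time difference corresponds to 15 degrees of longitude. A short illustration (the specific time offsets below are chosen for the example, not taken from the text):

```python
def longitude_from_time(hours_west: float) -> float:
    """Degrees of longitude west of the reference meridian, using the
    15-degrees-per-hour rule (360 degrees / 24 hours)."""
    return 15.0 * hours_west

# A timing error of only 10 minutes corresponds to 2.5 degrees of
# longitude, about 150 nautical miles along the equator.
print(longitude_from_time(10 / 60))  # 2.5

# Conversely, Columbus's roughly 38-degree longitude error corresponds
# to a timing error of about two and a half hours.
print(38 / 15.0)  # ~2.53 hours
```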
Just as the astronomical method appeared to achieve realization, the British craftsman John Harrison provided a different solution through his invention of the marine chronometer. The story of Harrison's clock has been recounted in Dava Sobel's popular book, Longitude. Both methods were tested by sea trials. The lunar tables permitted the determination of longitude within four minutes of arc, but with Harrison's chronometer the precision was only one minute of arc. Ultimately, portions of the prize money were awarded to Mayer's widow, Euler, and Harrison.

In the twentieth century, with the development of radio transmitters, another class of navigation aids was created using terrestrial radio beacons, including Loran and Omega. Finally, the technology of artificial satellites made possible navigation and position determination using line of sight signals involving the measurement of Doppler shift or phase difference.

TRANSIT

Transit, the Navy Navigation Satellite System, was conceived in the late 1950s and deployed in the mid-1960s. It was finally retired in 1996 after nearly 33 years of service. The Transit system was developed because of the need to provide accurate navigation data for Polaris missile submarines. As related in an historical perspective by Bradford Parkinson, et al. in the journal Navigation (Spring 1995), the concept was suggested by the predictable but dramatic Doppler frequency shifts from the first Sputnik satellite, launched by the Soviet Union in October, 1957. The Doppler-shifted signals enabled a determination of the orbit using data recorded at one site during a single pass of the satellite. Conversely, if a satellite's orbit were already known, a radio receiver's position could be determined from the same Doppler measurements.

The Transit system was composed of six satellites in nearly circular, polar orbits at an altitude of 1075 km. The period of revolution was 107 minutes. The system employed essentially the same Doppler data used to track the Sputnik satellite. However, the orbits of the Transit satellites were precisely determined by tracking them at widely spaced fixed sites. Under favorable conditions, the rms accuracy was 35 to 100 meters. The main problem with Transit was the large gaps in coverage. Users had to interpolate their positions between passes.

GLOBAL POSITIONING SYSTEM

The success of Transit stimulated both the U.S. Navy and the U.S. Air Force to investigate more advanced versions of a space-based navigation system with enhanced capabilities. Recognizing the need for a combined effort, the Deputy Secretary of Defense established a Joint Program Office in 1973. The NAVSTAR Global Positioning System (GPS) was thus created.

In contrast to Transit, GPS provides continuous coverage. Also, rather than Doppler shift, satellite range is determined from phase difference. There are two types of observables. One is pseudorange, which is the offset between a pseudorandom noise (PRN) coded signal from the satellite and a replica code generated in the user's receiver, multiplied by the speed of light. The other is accumulated delta range (ADR), which is a measure of carrier phase.

THE NAVSTAR GPS CONSTELLATION

The original GPS constellation reached operational status in 1995. It consisted of 24 GPS satellites arranged in six orbital rings 10,898 nautical miles above the earth, each ring tipped 55 degrees with respect to the equator. More than three billion satisfied users now benefit from the GPS signals streaming down from space.

The determination of position may be described as the process of triangulation using the measured range between the user and four or more satellites. The ranges are inferred from the time of propagation of the satellite signals. Four satellites are required to determine the three coordinates of position and time. The time is involved in the correction to the receiver clock and is ultimately eliminated from the measurement of position.

High precision is made possible through the use of atomic clocks carried on-board the satellites. Each satellite has two cesium clocks and two rubidium clocks, which maintain time with a precision of a few parts in 10^13 or 10^14 over a few hours, or better than 10 nanoseconds. In terms of the distance traversed by an electromagnetic signal at the speed of light, each nanosecond corresponds to about 30 centimeters. Thus the precision of GPS clocks permits a real time measurement of distance to within a few meters. With post-processed carrier phase measurements, a precision of a few centimeters can be achieved today.
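The triangulation described above can be written as a least-squares problem: each pseudorange equals the geometric distance to the satellite plus a common receiver clock bias, and linearizing about a trial position gives a correction to solve for. The sketch below is a minimal illustration with made-up satellite positions and an error-free geometry, not actual GPS data or a production algorithm:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def solve_position(sat_pos, pseudoranges, iters=10):
    """Solve for receiver position (x, y, z) and clock bias (expressed
    as a range, c*dt) from four or more pseudoranges, by Gauss-Newton
    iteration on the linearized range equations."""
    x = np.zeros(4)  # initial guess: earth's center, zero clock bias
    for _ in range(iters):
        rho = np.linalg.norm(sat_pos - x[:3], axis=1)  # geometric ranges
        residual = pseudoranges - (rho + x[3])
        # Jacobian: negative unit line-of-sight vectors, plus 1 for the bias
        H = np.hstack([-(sat_pos - x[:3]) / rho[:, None],
                       np.ones((len(rho), 1))])
        x += np.linalg.lstsq(H, residual, rcond=None)[0]
    return x

# Hypothetical satellites at GPS-like distances (~26,562 km from center)
sats = np.array([
    [26_562e3, 0.0, 0.0],
    [0.0, 26_562e3, 0.0],
    [0.0, 0.0, 26_562e3],
    [15_000e3, 15_000e3, 15_000e3],
])
truth = np.array([6_371e3, 0.0, 0.0])  # receiver on the earth's surface
bias = 1e-3 * C                        # a 1 ms receiver clock error
pr = np.linalg.norm(sats - truth, axis=1) + bias

est = solve_position(sats, pr)
print(est[:3])     # ~[6371000, 0, 0]: position recovered
print(est[3] / C)  # ~0.001 s: the clock bias is solved for and removed
```

Note how the fourth unknown, the receiver clock bias, is estimated along with position and then discarded, exactly as the text describes.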
The design of the GPS constellation had the fundamental requirement that at least four satellites must be visible at all times from any point on earth. The tradeoffs included visibility, the need to pass over the ground control stations in the United States, cost, and sparing efficiency.

The orbital configuration approved in 1973 was a total of 24 satellites, consisting of 8 satellites plus one spare in each of three equally spaced orbital planes. The orbital radius was 26,562 km, corresponding to a period of revolution of 12 sidereal hours, with repeating ground traces. Each satellite arrived over a given point four minutes earlier each day. A common orbital inclination of 63 degrees was selected to maximize the on-orbit payload mass with launches from the Western Test Range. This configuration ensured between 6 and 11 satellites in view at any time.

As envisioned ten years later, the inclination was reduced to 55 degrees and the number of planes was increased to six. The constellation would consist of 18 primary satellites, the absolute minimum number required to provide continuous global coverage with at least four satellites in view at any point on the earth, plus 3 on-orbit spares. The operational system, as presently deployed, consists of 21 primary satellites and 3 on-orbit spares, comprising four satellites in each of six orbital planes. Each orbital plane is inclined at 55 degrees with respect to the equator. This constellation improves on the "18 plus 3" satellite constellation by more fully integrating the three active spares.

SPACE SEGMENT

There have been several generations of GPS satellites. The Block I satellites, built by Rockwell International, were launched between 1978 and 1985. They consisted of eleven prototype satellites, including one launch failure, that validated the system concept. The ten successful satellites had an average lifetime of 8.76 years.

The Block II and Block IIA satellites were also built by Rockwell International. Block II consists of nine satellites launched between 1989 and 1990. Block IIA, deployed between 1990 and 1997, consists of 19 satellites with several navigation enhancements. In April 1995, GPS was declared fully operational with a constellation of 24 operational spacecraft and a completed ground segment. The 28 Block II/IIA satellites have exceeded their specified mission duration of 6 years and are expected to have an average lifetime of more than 10 years.

Block IIR comprises 20 replacement satellites that incorporate autonomous navigation based on crosslink ranging. These satellites are being manufactured by Lockheed Martin. The first launch in 1997 resulted in a launch failure. The first IIR satellite to reach orbit was also launched in 1997. The second GPS IIR satellite was successfully launched aboard a Delta 2 rocket on October 7, 1999. One to four more launches are anticipated over the next year.

The fourth generation of satellites is the Block II follow-on (Block IIF). This program includes the procurement of 33 satellites and the operation and support of a new GPS operational control segment. The Block IIF program was awarded to Rockwell (now a part of Boeing). Further details may be found in a special issue of the Proceedings of the IEEE for January, 1999.

CONTROL SEGMENT

The Master Control Station for GPS is located at Schriever Air Force Base in Colorado Springs, CO. The MCS maintains the satellite constellation and performs the stationkeeping and attitude control maneuvers. It also determines the orbit and clock parameters with a Kalman filter using measurements from five monitor stations distributed around the world. The orbit error is about 1.5 meters. GPS orbits are also derived independently by various scientific organizations using carrier phase and post-processing. The state of the art is exemplified by the work of the International GPS Service (IGS), which produces orbits with an accuracy of approximately 3 centimeters within two weeks.
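The orbital radius and period quoted earlier are mutually consistent, as a quick check with Kepler's third law shows. This is a back-of-the-envelope verification using the standard gravitational parameter of the earth, not part of the original article:

```python
import math

MU = 3.986004418e14  # earth's gravitational parameter, m^3/s^2
a = 26_562e3         # GPS orbital radius, m

# Kepler's third law for a circular orbit: T = 2*pi*sqrt(a^3/mu)
period_s = 2 * math.pi * math.sqrt(a**3 / MU)
print(period_s / 3600)  # ~11.97 hours, i.e. 12 sidereal hours

# Half a sidereal day (the earth's 23 h 56 m rotation period), so the
# ground trace repeats, arriving ~4 minutes earlier each solar day.
print(0.5 * 23.9345 * 3600)  # ~43,082 s, matching the period above
```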
The system time reference is managed by the U.S. Naval Observatory in Washington, DC. GPS time is measured from Saturday/Sunday midnight at the beginning of the week. The GPS time scale is a composite "paper clock" that is synchronized to keep step with Coordinated Universal Time (UTC) and International Atomic Time (TAI). However, UTC differs from TAI by an integral number of leap seconds to maintain correspondence with the rotation of the earth, whereas GPS time does not include leap seconds. The origin of GPS time is midnight on January 5/6, 1980 (UTC). At present, TAI is ahead of UTC by 32 seconds, TAI is ahead of GPS by 19 seconds, and GPS is ahead of UTC by 13 seconds. Only 1,024 weeks were allotted from the origin before the system time is reset to zero, because 10 bits are allocated for the calendar function (1,024 is the tenth power of 2). Thus the first GPS rollover occurred at midnight on August 21, 1999. The next GPS rollover will take place on April 6/7, 2019.

SIGNAL STRUCTURE

The satellite position at any time is computed in the user's receiver from the navigation message that is contained in a 50 bps data stream. The orbit is represented for each one-hour period by a set of 15 Keplerian orbital elements, with harmonic coefficients arising from perturbations, and is updated every four hours.

This data stream is modulated by each of two code division multiple access, or spread spectrum, pseudorandom noise (PRN) codes: the coarse/acquisition C/A code (sometimes called the clear/access code) and the precision P code. The P code can be encrypted to produce a secure signal called the Y code. This feature is known as the Anti-Spoof (AS) mode, which is intended to defeat deception jamming by adversaries. The C/A code is used for satellite acquisition and for position determination by civil receivers. The P(Y) code is used by military and other authorized receivers.

The C/A code is a Gold code of register size 10, which has a sequence length of 1023 chips and a chipping rate of 1.023 MHz and thus repeats itself every 1 millisecond. (The term "chip" is used instead of "bit" to indicate that the PRN code contains no information.) The P code is a long code of length 2.3547 x 10^14 chips with a chipping rate of 10 times that of the C/A code, or 10.23 MHz. At this rate, the P code has a period of 38.058 weeks, but it is truncated on a weekly basis so that 38 segments are available for the constellation. Each satellite uses a different member of the C/A Gold code family and a different one-week segment of the P code sequence.

The GPS satellites transmit signals at two carrier frequencies: the L1 component with a center frequency of 1575.42 MHz, and the L2 component with a center frequency of 1227.60 MHz. These frequencies are derived from the master clock frequency of 10.23 MHz, with L1 = 154 x 10.23 MHz and L2 = 120 x 10.23 MHz. The L1 frequency transmits both the P code and the C/A code, while the L2 frequency transmits only the P code. The second P code frequency permits a dual-frequency measurement of the ionospheric group delay. The P-code receiver has a two-sigma rms horizontal position error of about 5 meters. The single-frequency C/A code user must model the ionospheric delay with less accuracy. In addition, the C/A code is intentionally degraded by a technique called Selective Availability (SA), which introduces errors of 50 to 100 meters by dithering the satellite clock data. Through differential GPS measurements, however, position accuracy can be improved by reducing SA and environmental errors.
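The C/A Gold codes described above can be reproduced with two 10-stage linear feedback shift registers, G1 and G2, whose outputs are combined through a per-satellite pair of G2 phase taps. The sketch below follows the standard textbook construction; the feedback polynomials and the phase-tap pair (2, 6) for PRN 1 are taken from the well-known interface specification, and the code is a minimal illustration rather than receiver-grade software:

```python
def ca_code(phase_taps=(2, 6), n=1023):
    """Generate one period of a C/A Gold code.

    G1 feedback polynomial: 1 + x^3 + x^10
    G2 feedback polynomial: 1 + x^2 + x^3 + x^6 + x^8 + x^9 + x^10
    Each chip is G1's last stage XORed with two G2 stages selected
    per satellite (the "phase taps").
    """
    g1, g2 = [1] * 10, [1] * 10  # both registers start as all ones
    t1, t2 = phase_taps
    chips = []
    for _ in range(n):
        chips.append(g1[9] ^ g2[t1 - 1] ^ g2[t2 - 1])
        f1 = g1[2] ^ g1[9]                                  # stages 3, 10
        f2 = g2[1] ^ g2[2] ^ g2[5] ^ g2[7] ^ g2[8] ^ g2[9]  # 2,3,6,8,9,10
        g1 = [f1] + g1[:9]
        g2 = [f2] + g2[:9]
    return chips

code = ca_code()                     # PRN 1
print("".join(map(str, code[:10])))  # 1100100000 (octal 1440 for PRN 1)
print(len(code))                     # 1023 chips -> 1 ms at 1.023 MHz
```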
The transmitted signal from a GPS satellite has right hand circular polarization. According to the GPS Interface Control Document, the specified minimum signal strength at an elevation angle of 5 degrees into a linearly polarized receiver antenna with a gain of 3 dB (approximately equivalent to a circularly polarized antenna with a gain of 0 dB) is -160 dBW for the L1 C/A code, -163 dBW for the L1 P code, and -166 dBW for the L2 P code. The L2 signal is transmitted at a lower power level since it is used primarily for the ionospheric delay correction.

PSEUDORANGE

The fundamental measurement in the Global Positioning System is pseudorange. The user equipment receives the PRN code from a satellite and, having identified the satellite, generates a replica code. The phase by which the replica code must be shifted in the receiver to maintain maximum correlation with the satellite code, multiplied by the speed of light, is approximately equal to the satellite range. It is called the pseudorange because the measurement must be corrected by a variety of factors to obtain the true range.

The corrections that must be applied include signal propagation delays caused by the ionosphere and the troposphere, the space vehicle clock error, and the user's receiver clock error. The ionosphere correction is obtained either by measurement of dispersion using the two frequencies L1 and L2 or by calculation from a mathematical model, but the tropospheric delay must be calculated since the troposphere is nondispersive. The true geometric distance to each satellite is obtained by applying these corrections to the measured pseudorange.
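The dual-frequency correction works because the ionospheric group delay scales as the inverse square of the carrier frequency, so a weighted combination of the L1 and L2 pseudoranges cancels the first-order ionospheric term. A minimal sketch of this standard "ionosphere-free" combination (the range and delay values are illustrative):

```python
F_L1 = 154 * 10.23e6  # 1575.42 MHz
F_L2 = 120 * 10.23e6  # 1227.60 MHz

def iono_free(p1, p2):
    """First-order ionosphere-free pseudorange combination.
    The delay varies as 1/f^2, so this weighted difference cancels it."""
    g = (F_L1 / F_L2) ** 2
    return (g * p1 - p2) / (g - 1)

# Illustrative: a 20,000 km true range with a 10 m ionospheric delay at
# L1, and the corresponding (f1/f2)^2 * 10 m ~ 16.5 m delay at L2.
true_range = 20_000e3
i1 = 10.0
p1 = true_range + i1
p2 = true_range + i1 * (F_L1 / F_L2) ** 2

print(iono_free(p1, p2) - true_range)  # ~0: ionospheric term removed
```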
Other error sources and modeling errors continue to be investigated. For example, a recent modification of the Kalman filter has led to improved performance. Studies have also shown that solar radiation pressure models may need revision, and there is some new evidence that the earth's magnetic field may contribute to a small orbit-period variation in the satellite clock frequencies.

CARRIER PHASE

Carrier phase is used to perform measurements with a precision that greatly exceeds those based on pseudorange. However, a carrier phase measurement must resolve an integral cycle ambiguity, whereas the pseudorange is unambiguous. The wavelength of the L1 carrier is about 19 centimeters. Thus with a cycle resolution of one percent, a differential measurement at the level of a few millimeters is theoretically possible. This technique has important applications to geodesy and analogous scientific programs.

RELATIVITY

The precision of GPS measurements is so great that it requires the application of Albert Einstein's special and general theories of relativity for the reduction of its measurements. Professor Carroll Alley of the University of Maryland once articulated the significance of this fact at a scientific conference devoted to time measurement in 1979. He said, "I think it is appropriate ... to realize that the first practical application of Einstein's ideas in actual engineering situations are with us in the fact that clocks are now so stable that one must take these small effects into account in a variety of systems that are now undergoing development or are actually in use in comparing time worldwide. It is no longer a matter of scientific interest and scientific application, but it has moved into the realm of engineering necessity."

According to relativity theory, a moving clock appears to run slow with respect to a similar clock that is at rest. This effect is called "time dilation." In addition, a clock in a weaker gravitational potential appears to run fast in comparison to one that is in a stronger gravitational potential. This gravitational effect is known in general as the "red shift" (only in this case it is actually a "blue shift").

GPS satellites revolve around the earth with a velocity of 3.874 km/s at an altitude of 20,184 km. Thus on account of its velocity, a satellite clock appears to run slow by 7 microseconds per day when compared to a clock on the earth's surface. But on account of the difference in gravitational potential, the satellite clock appears to run fast by 45 microseconds per day. The net effect is that the clock appears to run fast by 38 microseconds per day. This is an enormous rate difference for an atomic clock with a precision of a few nanoseconds. Thus to compensate for this large secular rate, the clocks are given a rate offset prior to satellite launch of -4.465 parts in 10^10 from their nominal frequency of 10.23 MHz so that on average they appear to run at the same rate as a clock on the ground. The actual frequency of the satellite clocks before launch is thus 10.22999999543 MHz.

Although the GPS satellite orbits are nominally circular, there is always some residual eccentricity. The eccentricity causes the orbit to be slightly elliptical, and the velocity and altitude vary over one revolution. Thus, although the principal velocity and gravitational effects have been compensated by a rate offset, there remains a slight residual variation that is proportional to the eccentricity. For example, with an orbital eccentricity of 0.02 there is a relativistic sinusoidal variation in the apparent clock time having an amplitude of 46 nanoseconds. This correction must be calculated and taken into account in the GPS receiver.

The displacement of a receiver on the surface of the earth due to the earth's rotation in inertial space during the time of flight of the signal must also be taken into account. This is a third relativistic effect that is due to the universality of the speed of light. The maximum correction occurs when the receiver is on the equator and the satellite is on the horizon. The time of flight of a GPS signal from the satellite to a receiver on the earth is then 86 milliseconds, and the correction to the range measurement resulting from the receiver displacement is 133 nanoseconds. An analogous correction must be applied by a receiver on a moving platform, such as an aircraft or another satellite. This effect, as interpreted by an observer in the rotating frame of reference of the earth, is called the Sagnac effect. It is also the basis for a ring laser gyro in an inertial navigation system.
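The relativistic numbers quoted in this section can be reproduced to good approximation from first principles. The sketch below is a back-of-the-envelope check, not a formal relativistic derivation: it uses the mean earth radius for the ground clock (ignoring the geoid and the earth's rotational potential) and a standard equatorial rotation speed of about 465 m/s, which is why the results land within a few percent of the quoted 7, 45, and 38 microseconds per day.

```python
import math

C   = 299_792_458.0   # speed of light, m/s
MU  = 3.986004418e14  # earth's gravitational parameter, m^3/s^2
R_E = 6.371e6         # mean earth radius, m (simplifying assumption)
r   = 26_562e3        # GPS orbital radius, m
DAY = 86_400.0        # seconds per day

v = math.sqrt(MU / r)  # ~3874 m/s circular orbital velocity

# Velocity effect (special relativity): the satellite clock runs slow.
dilation = -(v ** 2) / (2 * C ** 2) * DAY * 1e6           # ~ -7 us/day
# Gravitational effect (general relativity): it runs fast.
redshift = MU * (1 / R_E - 1 / r) / C ** 2 * DAY * 1e6    # ~ +46 us/day
print(dilation, redshift, dilation + redshift)            # net ~ +38 us/day

# Fractional rate offset applied before launch: ~ -4.465e-10
print(-(dilation + redshift) * 1e-6 / DAY)

# Residual eccentricity correction, amplitude 2*e*sqrt(MU*r)/c^2:
e = 0.02
print(2 * e * math.sqrt(MU * r) / C ** 2 * 1e9)           # ~46 ns

# Sagnac-type correction: equatorial receiver displacement (465 m/s)
# during the 86 ms signal flight time, expressed as a time:
print(465.0 * 0.086 / C * 1e9)                            # ~133 ns
```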
GPS MODERNIZATION

In 1996, a Presidential Decision Directive stated that the president would review the issue of Selective Availability in 2000, with the objective of discontinuing SA no later than 2006. In addition, both the L1 and L2 GPS signals would be made available to civil users, and a new civil 10.23 MHz signal would be authorized. To satisfy the needs of aviation, the third civil frequency, known as L5, would be centered at 1176.45 MHz, in the Aeronautical Radio Navigation Services (ARNS) band, subject to approval at the World Radio Conference in 2000.

According to Keith McDonald in an article on GPS modernization published in the September 1999 GPS World, with SA removed the civil GPS accuracy would be improved to about 10 to 30 meters. With the addition of a second frequency for ionospheric group delay corrections, the civil accuracy would become about 5 to 10 meters. A third frequency would permit the creation of two beat frequencies that would yield one-meter accuracy in real time.

A variety of other enhancements are under consideration, including increased power, the addition of a new military code at the L1 and L2 frequencies, additional ground stations, more frequent uploads, and an increase in the number of satellites. These policy initiatives are driven by the dual needs of maintaining national security while supporting the growing dependence on GPS by commercial industry. When these upgrades would begin to be implemented in the Block IIR and IIF satellites depends on GPS funding.

Besides providing position, GPS is a reference for time with an accuracy of 10 nanoseconds or better. Its broadcast time signals are used for national defense, commercial, and scientific purposes. The precision and universal availability of GPS time has produced a paradigm shift in time measurement and dissemination, with GPS evolving from a secondary source to a fundamental reference in itself.

The international community wants assurance that it can rely on the availability of GPS and continued U.S. support for the system. The Russian Global Navigation Satellite System (GLONASS) has been an alternative, but economic conditions in Russia have threatened its continued viability. Consequently, the European Union is considering the creation of a navigation system of its own, called Galileo, to avoid relying on the U.S. GPS and Russian GLONASS programs.

The Global Positioning System is a vital national resource. Over the past thirty years it has made the transition from concept to reality, representing today an operational system on which the entire world has become dependent. Both technical improvements and an enlightened national policy will be necessary to ensure its continued growth into the twenty-first century.

____________________________________________

Dr. Robert A. Nelson, P.E. was president of Satellite Engineering Research Corporation in Bethesda, Maryland, a Lecturer in the Department of Aerospace Engineering at the University of Maryland, and Technical Editor of Via Satellite magazine. Dr. Nelson was the instructor for the ATI course Satellite Communications Systems Engineering for more than 20 years. Dr. Nelson passed away in May 2013. He will be remembered and missed for his many contributions to the field of satellite engineering.

Based on an article originally published in Via Satellite. Updated on May 28, 2013.

Tom Logsdon has lectured extensively and has taught 300 short courses on a variety of technical topics in 31 different countries scattered across six continents. He has written and sold 1.8 million words, including 32 nonfiction books. His words, spoken and written, have been translated into a dozen different languages, including French, Spanish, Serbo-Croatian, Russian, Latvian, Japanese, and International Sign Language. Tom is an expert on GPS and other navigation satellites who teaches several courses for ATI, including GPS & Other Radio Navigation Satellites; Fundamentals of Orbital & Launch Mechanics; Integrated Navigation Systems; and Introduction to Space.
About Applied Technology Institute Courses (ATIcourses or ATI)

ATIcourses is a national leader in professional development seminars in the technical areas of space, communications, defense, sonar, radar, engineering, and signal processing. Since 1984, ATIcourses has presented leading-edge technical training to defense and NASA facilities, as well as DOD and aerospace contractors. ATI courses create a clear understanding of the fundamental principles and a working knowledge of current technology and applications. ATI offers customized on-site training at your facility anywhere in the United States, as well as internationally, and over 200 annual public courses in dozens of locations. ATI is proud to have world-class experts instructing courses. Call 410-956-8805 or 888-501-2100, or visit www.ATIcourses.com.

The Global Positioning System

The Global Positioning System A National Resource by Robert A. Nelson On a recent trip to visit the Jet Propulsion Laboratory, I flew from Washington, DC to Los Angeles on a new Boeing 747-400 airplane. The geographical position of the plane and its relation to nearby cities was displayed throughout the flight on a video […]

The Global Positioning System

A National Resource

by Robert A. Nelson On a recent trip to visit the Jet Propulsion Laboratory, I flew from Washington, DC to Los Angeles on a new Boeing 747-400 airplane. The geographical position of the plane and its relation to nearby cities was displayed throughout the flight on a video screen in the passenger cabin. When I arrived in Los Angeles, I rented a car that was equipped with a navigator. The navigator guided me to my hotel in Pasadena, displaying my position on a map and verbally giving me directions with messages like “freeway exit ahead on the right followed by a left turn.” When I reached the hotel, it announced that I had arrived at my destination. Later, when I was to join a colleague for dinner, I found the restaurant listed in a menu and the navigator took me there. This remarkable navigation capability is made possible by the Global Positioning System (GPS). It was originally designed jointly by the U.S. Navy and the U.S. Air Force to permit the determination of position and time for military troops and guided missiles. However, GPS has also become the basis for position and time measurement by scientific laboratories and a wide spectrum of applications in a multi-billion dollar commercial industry. Roughly one million receivers are manufactured each year and the total GPS market is expected to approach $ 10 billion by the end of next year. The story of GPS and its principles of measurement are the subjects of this article. EARLY METHODS OF NAVIGATION The shape and size of the earth has been known from the time of antiquity. The fact that the earth is a sphere was well known to educated people as long ago as the fourth century BC. In his book On the Heavens, Aristotle gave two scientifically correct arguments. First, the shadow of the earth projected on the moon during a lunar eclipse appears to be curved. Second, the elevations of stars change as one travels north or south, while certain stars visible in Egypt cannot be seen at all from Greece. The actual radius of the earth was determined within one percent by Eratosthenes in about 230 BC. He knew that the sun was directly overhead at noon on the summer solstice in Syene (Aswan, Egypt), since on that day it illuminated the water of a deep well. At the same time, he measured the length of the shadow cast by a column on the grounds of the library at Alexandria, which was nearly due north. The distance between Alexandria and Syene had been well established by professional runners and camel caravans. Thus Eratosthenes was able to compute the earth’s radius from the difference in latitude that he inferred from his measurement. In terms of modern units of length, he arrived at the figure of about 6400 km. By comparison, the actual mean radius is 6371 km (the earth is not precisely spherical, as the polar radius is 21 km less than the equatorial radius of 6378 km). The ability to determine one’s position on the earth was the next major problem to be addressed. In the second century, AD the Greek astronomer Claudius Ptolemy prepared a geographical atlas, in which he estimated the latitude and longitude of principal cities of the Mediterranean world. Ptolemy is most famous, however, for his geocentric theory of planetary motion, which was the basis for astronomical catalogs until Nicholas Copernicus published his heliocentric theory in 1543. Historically, methods of navigation over the earth’s surface have involved the angular measurement of star positions to determine latitude. 
The latitude of one’s position is equal to the elevation of the pole star. The position of the pole star on the celestial sphere is only temporary, however, due to precession of the earth’s axis of rotation through a circle of radius 23.5 over a period of 26,000 years. At the time of Julius Caesar, there was no star sufficiently close to the north celestial pole to be called a pole star. In 13,000 years, the star Vega will be near the pole. It is perhaps not a coincidence that mariners did not venture far from visible land until the era of Christopher Columbus, when true north could be determined using the star we now call Polaris. Even then the star’s diurnal rotation caused an apparent variation of the compass needle. Polaris in 1492 described a radius of about 3.5 about the celestial pole, compared to 1 today. At sea, however, Columbus and his contemporarie s depended primarily on the mariner’s compass and dead reckoning. The determination of longitude was much more difficult. Longitude is obtained astronomically from the difference between the observed time of a celestial event, such as an eclipse, and the corresponding time tabulated for a reference location. For each hour of difference in time, the difference in longitude is 15 degrees. Columbus himself attempted to estimate his longitude on his fourth voyage to the New World by observing the time of a lunar eclipse as seen from the harbor of Santa Gloria in Jamaica on February 29, 1504. In his distinguished biography Admiral of the Ocean Sea, Samuel Eliot Morrison states that Columbus measured the duration of the eclipse with an hour-glass and determined his position as nine hours and fifteen minutes west of Cadiz, Spain, according to the predicted eclipse time in an almanac he carried aboard his ship. Over the preceding year, while his ship was marooned in the harbor, Columbus had determined the latitude of Santa Gloria by numerous observations of the pole star. He made out his latitude to be 18, which was in error by less than half a degree and was one of the best recorded observations of latitude in the early sixteenth century, but his estimated longitude was off by some 38 degrees. Columbus also made legendary use of this eclipse by threatening the natives with the disfavor of God, as indicated by a portent from Heaven, if they did not bring desperately needed provisions to his men. When the eclipse arrived as predicted, the natives pleaded for the Admiral’s intervention, promising to furnish all the food that was needed. New knowledge of the universe was revealed by Galileo Galilei in his book The Starry Messenger. This book, published in Venice in 1610, reported the telescopic discoveries of hundreds of new stars, the craters on the moon, the phases of Venus, the rings of Saturn, sunspots, and the four inner satellites of Jupiter. Galileo suggested using the eclipses of Jupiter’s satellites as a celestial clock for the practical determination of longitude, but the calculation of an accurate ephemeris and the difficulty of observing the satellites from the deck of a rolling ship prevented use of this method at sea. Nevertheless, James Bradley, the third Astronomer Royal of England, successfully applied the technique in 1726 to determine the longitudes of Lisbon and New York with considerable accuracy. Inability to measure longitude at sea had the potential of catastrophic consequences for sailing vessels exploring the new world, carrying cargo, and conquering new territories. Shipwrecks were common. 
On October 22, 1707 a fleet of twenty-one ships under the command of Admiral Sir Clowdisley Shovell was returning to England after an unsuccessful military attack on Toulon in the Mediterranean. As the fleet approached the English Channel in dense fog, the flagship and three others foundered on the coastal rocks and nearly two thousand men perished. Stunned by this unprecedented loss, the British government in 1714 offered a prize of £20,000 for a method to determine longitude at sea within a half a degree. The scientific establishment believed that the solution would be obtained from observations of the moon. The German cartographer Tobias Mayer, aided by new mathematical methods developed by Leonard Euler, offered improved tables of the moon in 1757. The recorded position of the moon at a given time as seen from a reference meridian could be compared with its position at the local time to determine the angular position west or east. Just as the astronomical method appeared to achieve realization, the British craftsman John Harrison provided a different solution through his invention of the marine chronometer. The story of Harrison’s clock has been recounted in Dava Sobel’s popular book, Longitude. Both methods were tested by sea trials. The lunar tables permitted the determination of longitude within four minutes of arc, but with Harrison’s chronometer the precision was only one minute of arc. Ultimately, portions of the prize money were awarded to Mayer’s widow, Euler, and Harrison. In the twentieth century, with the development of radio transmitters, another class of navigation aids was created using terrestrial radio beacons, including Loran and Omega. Finally, the technology of artificial satellites made possible navigation and position determination using line of sight signals involving the measurement of Doppler shift or phase difference. TRANSIT Transit, the Navy Navigation Satellite System, was conceived in the late 1950s and deployed in the mid-1960s. It was finally retired in 1996 after nearly 33 years of service. The Transit system was developed because of the need to provide accurate navigation data for Polaris missile submarines. As related in an historical perspective by Bradford Parkinson, et al. in the journal Navigation (Spring 1995), the concept was suggested by the predictable but dramatic Doppler frequency shifts from the first Sputnik satellite, launched by the Soviet Union in October, 1957. The Doppler-shifted signals enabled a determination of the orbit using data recorded at one site during a single pass of the satellite. Conversely, if a satellite’s orbit were already known, a radio receiver’s position could be determined from the same Doppler measurements. The Transit system was composed of six satellites in nearly circular, polar orbits at an altitude of 1075 km. The period of revolution was 107 minutes. The system employed essentially the same Doppler data used to track the Sputnik satellite. However, the orbits of the Transit satellites were precisely determined by tracking them at widely spaced fixed sites. Under favorable conditions, the rms accuracy was 35 to 100 meters. The main problem with Transit was the large gaps in coverage. Users had to interpolate their positions between passes. GLOBAL POSITIONING SYSTEM The success of Transit stimulated both the U.S. Navy and the U.S. Air Force to investigate more advanced versions of a space-based navigation system with enhanced capabilities. 
Recognizing the need for a combined effort, the Deputy Secretary of Defense established a Joint Program Office in 1973. The NAVSTAR Global Positioning System (GPS) was thus created. In contrast to Transit, GPS provides continuous coverage. Also, rather than Doppler shift, satellite range is determined from phase difference. There are two types of observables. One is pseudorange, which is the offset between a pseudorandom noise (PRN) coded signal from the satellite and a replica code generated in the user’s receiver, multiplied by the speed of light. The other is accumulated delta range (ADR), which is a measure of carrier phase. The determination of position may be described as the process of triangulation using the measured range between the user and four or more satellites. The ranges are inferred from the time of propagation of the satellite signals. Four satellites are required to determine the three coordinates of position and time. The time is involved in the correction to the receiver clock and is ultimately eliminated from the measurement of position. High precision is made possible through the use of atomic clocks carried on-board the satellites. Each satellite has two cesium clocks and two rubidium clocks, which maintain time with a precision of a few parts in 1013 or 1014 over a few hours, or better than 10 nanoseconds. In terms of the distance traversed by an electromagnetic signal at the speed of light, each nanosecond corresponds to about 30 centimeters. Thus the precision of GPS clocks permits a real time measurement of distance to within a few meters. With post-processed carrier phase measurements, a precision of a few centimeters can be achieved. The design of the GPS constellation had the fundamental requirement that at least four satellites must be visible at all times from any point on earth. The tradeoffs included visibility, the need to pass over the ground control stations in the United States, cost, and sparing efficiency. The orbital configuration approved in 1973 was a total of 24 satellites, consisting of 8 satellites plus one spare in each of three equally spaced orbital planes. The orbital radius was 26,562 km, corresponding to a period of revolution of 12 sidereal hours, with repeating ground traces. Each satellite arrived over a given point four minutes earlier each day. A common orbital inclination of 63 was selected to maximize the on-orbit payload mass with launches from the Western Test Range. This configuration ensured between 6 and 11 satellites in view at any time. As envisioned ten years later, the inclination was reduced to 55 and the number of planes was increased to six. The constellation would consist of 18 primary satellites, which represents the absolute minimum number of satellites required to provide continuous global coverage with at least four satellites in view at any point on the earth. In addition, there would be 3 on-orbit spares. The operational system, as presently deployed, consists of 21 primary satellites and 3 on-orbit spares, comprising four satellites in each of six orbital planes. Each orbital plane is inclined at 55. This constellation improves on the “18 plus 3” satellite constellation by more fully integrating the three active spares. SPACE SEGMENT There have been several generations of GPS satellites. The Block I satellites, built by Rockwell International, were launched between 1978 and 1985. They consisted of eleven prototype satellites, including one launch failure, that validated the system concept. 
The ten successful satellites had an average lifetime of 8.76 years. The Block II and Block IIA satellites were also built by Rockwell International. Block II consists of nine satellites launched between 1989 and 1990. Block IIA, deployed between 1990 and 1997, consists of 19 satellites with several navigation enhancements. In April 1995, GPS was declared fully operational with a constellation of 24 operational spacecraft and a completed ground segment. The 28 Block II/IIA satellites have exceeded their specified mission duration of 6 years and are expected to have an average lifetime of more than 10 years. Block IIR comprises 20 replacement satellites that incorporate autonomous navigation based on crosslink ranging. These satellites are being manufactured by Lockheed Martin. The first launch in 1997 resulted in a launch failure. The first IIR satellite to reach orbit was also launched in 1997. The second GPS 2R satellite was successfully launched aboard a Delta 2 rocket on October 7, 1999. One to four more launches are anticipated over the next year. The fourth generation of satellites is the Block II follow-on (Block IIF). This program includes the procurement of 33 satellites and the operation and support of a new GPS operational control segment. The Block IIF program was awarded to Rockwell (now a part of Boeing). Further details may be found in a special issue of the Proceedings of the IEEE for January, 1999. CONTROL SEGMENT The Master Control Station for GPS is located at Schriever Air Force Base in Colorado Springs, CO. The MCS maintains the satellite constellation and performs the stationkeeping and attitude control maneuvers. It also determines the orbit and clock parameters with a Kalman filter using measurements from five monitor stations distributed around the world. The orbit error is about 1.5 meters. GPS orbits are derived independently by various scientific organizations using carrier phase and post-processing. The state of the art is exemplified by the work of the International GPS Service (IGS), which produces orbits with an accuracy of approximately 3 centimeters within two weeks. The system time reference is managed by the U.S. Naval Observatory in Washington, DC. GPS time is measured from Saturday/Sunday midnight at the beginning of the week. The GPS time scale is a composite “paper clock” that is synchronized to keep step with Coordinated Universal Time (UTC) and International Atomic Time (TAI). However, UTC differs from TAI by an integral number of leap seconds to maintain correspondence with the rotation of the earth, whereas GPS time does not include leap seconds. The origin of GPS time is midnight on January 5/6, 1980 (UTC). At present, TAI is ahead of UTC by 32 seconds, TAI is ahead of GPS by 19 seconds, and GPS is ahead of UTC by 13 seconds. Only 1,024 weeks were allotted from the origin before the system time is reset to zero because 10 bits are allocated for the calendar function (1,024 is the tenth power of 2). Thus the first GPS rollover occurred at midnight on August 21, 1999. The next GPS rollover will take place May 25, 2019. SIGNAL STRUCTURE The satellite position at any time is computed in the user’s receiver from the navigation message that is contained in a 50 bps data stream. The orbit is represented for each one hour period by a set of 15 Keplerian orbital elements, with harmonic coefficients arising from perturbations, and is updated every four hours. 
This data stream is modulated by each of two code division multiple access, or spread spectrum, pseudorandom noise (PRN) codes: the coarse/acquisition C/A code (sometimes called the clear/access code) and the precision P code. The P code can be encrypted to produce a secure signal called the Y code. This feature is known as the Anti-Spoof (AS) mode, which is intended to defeat deception jamming by adversaries. The C/A code is used for satellite acquisition and for position determination by civil receivers. The P(Y) code is used by military and other authorized receivers. The C/A code is a Gold code of register size 10, which has a sequence length of 1023 chips and a chipping rate of 1.023 MHz and thus repeats itself every 1 millisecond. (The term “chip” is used instead of “bit” to indicate that the PRN code contains no information.) The P code is a long code of length 2.3547 x 1014 chips with a chipping rate of 10 times the C/A code, or 10.23 MHz. At this rate, the P code has a period of 38.058 weeks, but it is truncated on a weekly basis so that 38 segments are available for the constellation. Each satellite uses a different member of the C/A Gold code family and a different one-week segment of the P code sequence. The GPS satellites transmit signals at two carrier frequencies: the L1 component with a center frequency of 1575.42 MHz, and the L2 component with a center frequency of 1227.60 MHz. These frequencies are derived from the master clock frequency of 10.23 MHz, with L1 = 154 x 10.23 MHz and L2 = 120 x 10.23 MHz. The L1 frequency transmits both the P code and the C/A code, while the L2 frequency transmits only the P code. The second P code frequency permits a dual-frequency measurement of the ionospheric group delay. The P-code receiver has a two-sigma rms horizontal position error of about 5 meters. The single frequency C/A code user must model the ionospheric delay with less accuracy. In addition, the C/A code is intentionally degraded by a technique called Selective Availability (SA), which introduces errors of 50 to 100 meters by dithering the satellite clock data. Through differential GPS measurements, however, position accuracy can be improved by reducing SA and environmental errors. The transmitted signal from a GPS satellite has right hand circular polarization. According to the GPS Interface Control Document, the specified minimum signal strength at an elevation angle of 5 into a linearly polarized receiver antenna with a gain of 3 dB (approximately equivalent to a circularly polarized antenna with a gain of 0 dB) is – 160 dBW for the L1 C/A code, – 163 dBW for the L1 P code, and – 166 dBW for the L2 P code. The L2 signal is transmitted at a lower power level since it is used primarily for the ionospheric delay correction. PSEUDORANGE The fundamental measurement in the Global Positioning System is pseudorange. The user equipment receives the PRN code from a satellite and, having identified the satellite, generates a replica code. The phase by which the replica code must be shifted in the receiver to maintain maximum correlation with the satellite code, multiplied by the speed of light, is approximately equal to the satellite range. It is called the pseudorange because the measurement must be corrected by a variety of factors to obtain the true range. The corrections that must be applied include signal propagation delays caused by the ionosphere and the troposphere, the space vehicle clock error, and the user’s receiver clock error. 
The ionosphere correction is obtained either by measurement of dispersion using the two frequencies L1 and L2 or by calculation from a mathematical model, but the tropospheric delay must be calculated since the troposphere is nondispersive. The true geometric distance to each satellite is obtained by applying these corrections to the measured pseudorange. Other error sources and modeling errors continue to be investigated. For example, a recent modification of the Kalman filter has led to improved performance. Studies have also shown that solar radiation pressure models may need revision and there is some new evidence that the earth’s magnetic field may contribute to a small orbit period variation in the satellite clock frequencies. CARRIER PHASE Carrier phase is used to perform measurements with a precision that greatly exceeds those based on pseudorange. However, a carrier phase measurement must resolve an integral cycle ambiguity whereas the pseudorange is unambiguous. The wavelength of the L1 carrier is about 19 centimeters. Thus with a cycle resolution of one percent, a differential measurement at the level of a few millimeters is theoretically possible. This technique has important applications to geodesy and analogous scientific programs. RELATIVITY The precision of GPS measurements is so great that it requires the application of Albert Einstein’s special and general theories of relativity for the reduction of its measurements. Professor Carroll Alley of the University of Maryland once articulated the significance of this fact at a scientific conference devoted to time measurement in 1979. He said, “I think it is appropriate … to realize that the first practical application of Einstein’s ideas in actual engineering situations are with us in the fact that clocks are now so stable that one must take these small effects into account in a variety of systems that are now undergoing development or are actually in use in comparing time worldwide. It is no longer a matter of scientific interest and scientific application, but it has moved into the realm of engineering necessity.” According to relativity theory, a moving clock appears to run slow with respect to a similar clock that is at rest. This effect is called “time dilation.” In addition, a clock in a weaker gravitational potential appears to run fast in comparison to one that is in a stronger gravitational potential. This gravitational effect is known in general as the “red shift” (only in this case it is actually a “blue shift”). GPS satellites revolve around the earth with a velocity of 3.874 km/s at an altitude of 20,184 km. Thus on account of the its velocity, a satellite clock appears to run slow by 7 microseconds per day when compared to a clock on the earth’s surface. But on account of the difference in gravitational potential, the satellite clock appears to run fast by 45 microseconds per day. The net effect is that the clock appears to run fast by 38 microseconds per day. This is an enormous rate difference for an atomic clock with a precision of a few nanoseconds. Thus to compensate for this large secular rate, the clocks are given a rate offset prior to satellite launch of – 4.465 parts in 1010 from their nominal frequency of 10.23 MHz so that on average they appear to run at the same rate as a clock on the ground. The actual frequency of the satellite clocks before launch is thus 10.22999999543 MHz. Although the GPS satellite orbits are nominally circular, there is always some residual eccentricity. 
The eccentricity causes the orbit to be slightly elliptical, and the velocity and altitude vary over one revolution. Thus, although the principal velocity and gravitational effects have been compensated by a rate offset, there remains a slight residual variation that is proportional to the eccentricity. For example, with an orbital eccentricity of 0.02 there is a relativistic sinusoidal variation in the apparent clock time having an amplitude of 46 nanoseconds. This correction must be calculated and taken into account in the GPS receiver. The displacement of a receiver on the surface of the earth due to the earth’s rotation in inertial space during the time of flight of the signal must also be taken into account. This is a third relativistic effect that is due to the universality of the speed of light. The maximum correction occurs when the receiver is on the equator and the satellite is on the horizon. The time of flight of a GPS signal from the satellite to a receiver on the earth is then 86 milliseconds and the correction to the range measurement resulting from the receiver displacement is 133 nanoseconds. An analogous correction must be applied by a receiver on a moving platform, such as an aircraft or another satellite. This effect, as interpreted by an observer in the rotating frame of reference of the earth, is called the Sagnac effect. It is also the basis for a laser ring gyro in an inertial navigation system. GPS MODERNIZATION In 1996, a Presidential Decision Directive stated the president would review the issue of Selective Availability in 2000 with the objective of discontinuing SA no later than 2006. In addition, both the L1 and L2 GPS signals would be made available to civil users and a new civil 10.23 MHz signal would be authorized. To satisfy the needs of aviation, the third civil frequency, known as L5, would be centered at 1176.45 MHz, in the Aeronautical Radio Navigation Services (ARNS) band, subject to approval at the World Radio Conference in 2000. According to Keith McDonald in an article on GPS modernization published in the September, 1999 GPS World, with SA removed the civil GPS accuracy would be improved to about 10 to 30 meters. With the addition of a second frequency for ionospheric group delay corrections, the civil accuracy would become about 5 to 10 meters. A third frequency would permit the creation of two beat frequencies that would yield one-meter accuracy in real time. A variety of other enhancements are under consideration, including increased power, the addition of a new military code at the L1 and L2 frequencies, additional ground stations, more frequent uploads, and an increase in the number of satellites. These policy initiatives are driven by the dual needs of maintaining national security while supporting the growing dependence on GPS by commercial industry. When these upgrades would begin to be implemented in the Block IIR and IIF satellites depends on GPS funding. Besides providing position, GPS is a reference for time with an accuracy of 10 nanoseconds or better. Its broadcast time signals are used for national defense, commercial, and scientific purposes. The precision and universal availability of GPS time has produced a paradigm shift in time measurement and dissemination, with GPS evolving from a secondary source to a fundamental reference in itself. The international community wants assurance that it can rely on the availability of GPS and continued U.S. support for the system. 
The Russian Global Navigation Satellite System (GLONASS) has been an alternative, but economic conditions in Russia have threatened its continued viability. Consequently, the European Union is considering the creation of a navigation system of its own, called Galileo, to avoid relying on the U.S. GPS and Russian GLONASS programs.

The Global Positioning System is a vital national resource. Over the past thirty years it has made the transition from concept to reality, representing today an operational system on which the entire world has become dependent. Both technical improvements and an enlightened national policy will be necessary to ensure its continued growth into the twenty-first century.

____________________________________________

Dr. Robert A. Nelson, P.E., is president of Satellite Engineering Research Corporation, a satellite engineering consulting firm in Bethesda, Maryland, a Lecturer in the Department of Aerospace Engineering at the University of Maryland, and Technical Editor of Via Satellite magazine. Dr. Nelson is the instructor for the ATI course Satellite Communications Systems Engineering. Please see our Schedule for dates and locations.

The International System of Units

Its History and Use in Science and Industry

by Robert A. Nelson

On September 23, 1999 the Mars Climate Orbiter was lost during an orbit injection maneuver when the spacecraft crashed onto the surface of Mars. The principal cause of the mishap was traced to a thruster calibration table, in which British units instead of metric units were used. The software for celestial navigation at the Jet Propulsion Laboratory expected the thruster impulse data to be expressed in newton seconds, but Lockheed Martin Astronautics in Denver, which built the orbiter, provided the values in pound-force seconds, causing the impulse to be interpreted as roughly one-fourth its actual value.

The Mars spacecraft incident renews a controversy that has existed in the United States since the beginning of the space program regarding the use of metric or British units of measurement. To put the issue into perspective, this article reviews the history of the metric system and its modern version, the International System of Units (SI). The origin and evolution of the metric units, and the role they have played in the United States, will be summarized. Technical details and definitions will be provided for reference. Finally, the use of metric units in the satellite industry will be examined.

ORIGIN OF THE METRIC SYSTEM

The metric system was one of many reforms introduced in France during the period between 1789 and 1799, known as the French Revolution. The need for reform in the system of weights and measures, as in other affairs, had long been recognized. No other aspect of applied science affects the course of human activity so directly and universally.

Prior to the metric system, there had existed in France a disorderly variety of measures, such as for length, volume, or mass, that were arbitrary in size and variable from one town to the next. In Paris the unit of length was the Pied de Roi and the unit of mass was the Livre poids de marc. These units could be traced back to Charlemagne. However, all attempts to impose the “Parisian” units on the whole country were fruitless, as they were opposed by the guilds and nobles who benefited from the confusion.

The advocates of reform sought to guarantee the uniformity and permanence of the units of measure by taking them from properties derived from nature. In 1670, the abbe Gabriel Mouton of Lyons proposed a unit of length equal to one minute of arc on the earth's surface, which he divided into decimal fractions. He suggested a pendulum of specified period as a means of preserving one of these submultiples.

The conditions required for the creation of a new measurement system were made possible by the French Revolution, an event that was initially provoked by a national financial crisis. In 1787 King Louis XVI convened the Estates General, an institution that had last met in 1614, for the purpose of imposing new taxes to avert a state of bankruptcy. As they assembled in 1789, the commoners, representing the Third Estate, declared themselves to be the only legitimate representatives of the people, and succeeded in having the clergy and nobility join them in the formation of the National Assembly. Over the next two years, they drafted a new constitution.

In 1790, Charles-Maurice de Talleyrand, Bishop of Autun, presented to the National Assembly a plan to devise a system of units based on the length of a pendulum beating seconds at latitude 45°.
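The seconds pendulum would indeed have produced a unit close to the eventual meter, as a rough check shows. The sketch below assumes the modern standard value of g, which is close to its value at latitude 45°:

    import math

    # A seconds pendulum beats once per second, so its full period is T = 2 s.
    # For small oscillations T = 2 pi sqrt(L/g), hence L = g (T / 2 pi)^2.
    g = 9.80665   # standard acceleration of gravity, m/s^2
    T = 2.0       # period of a seconds pendulum, s

    L = g * (T / (2 * math.pi)) ** 2
    print(f"length of a seconds pendulum: {L:.4f} m")   # about 0.9936 m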
The new order was envisioned as an “enterprise whose result should belong some day to the whole world.” He sought, but failed to obtain, the collaboration of England, which was concurrently considering a similar proposal by Sir John Riggs Miller. The two founding principles were that the system would be based on scientific observation and that it would be a decimal system.

A distinguished commission of the French Academy of Sciences, including J. L. Lagrange and Pierre Simon Laplace, considered the unit of length. Rejecting the seconds pendulum as insufficiently precise, the commission defined the unit, given the name metre in 1793, as one ten millionth of a quarter of the earth's meridian passing through Paris. The proposal was accepted by the National Assembly on March 26, 1791.

The definition of the meter reflected the extensive interest of French scientists in the figure of the earth. Surveys in Lapland by Maupertuis in 1736 and in France by LaCaille in 1740 had refined the value of the earth's radius and established definitively that the shape of the earth is oblate. To determine the length of the meter, a new survey was conducted by the astronomers Jean Baptiste Delambre and P.F.A. Mechain between Dunkirk, in France on the English Channel, and Barcelona, Spain, on the coast of the Mediterranean Sea. This work was begun in 1792 and completed in 1798, enduring the hardships of the “reign of terror” and the turmoil of revolution. We now know that the quadrant of the earth is 10 001 957 meters instead of exactly 10 000 000 meters as originally planned. The principal source of error was the assumed value of the earth's flattening used in correcting for oblateness.

The unit of volume, the pinte (later renamed the litre), was defined as the volume of a cube having a side equal to one-tenth of a meter. The unit of mass, the grave (later renamed the kilogramme), was defined as the mass of one pinte of distilled water at the temperature of melting ice. In addition, the centigrade scale for temperature was adopted, with fixed points at 0 °C and 100 °C representing the freezing and boiling points of water (now replaced by the Celsius scale).

The work to determine the unit of mass was begun by Lavoisier and Hauy and was completed by Gineau and Fabbroni. They discovered that the maximum density of water occurs at 4 °C, and not at 0 °C as had been supposed, so the definition of the kilogram was amended to specify the temperature of maximum density. We now know that the intended mass was 0.999972 kg; that is, 1 kilogram of pure water at 4 °C actually occupies 1000.028 cm^3 instead of the exactly 1000 cm^3 intended.

On August 1, 1793 the National Convention, which by then ruled France, issued a decree adopting the preliminary definitions and terms. The “methodical” nomenclature, specifying fractions and multiples of the units by Latin prefixes, was chosen in favor of the “common” nomenclature, involving separate names.

A new calendar was also introduced in September 1793. Its origin was designated retroactively as September 22, 1792 to commemorate the overthrow of the monarchy and the inception of the Republic of France. The French Revolutionary Calendar consisted of twelve months of thirty days each, concluded by a five or six day holiday. The months were given poetic names that suggested the prevailing seasons. Each month was divided into three ten-day weeks, or decades. The day itself was divided into decimal fractions, with ten hours per day and 100 minutes per hour.
The calendar was politically, rather than scientifically, motivated, since it was intended to weaken the influence of Christianity. It was abolished by Napoleon in 1806 in return for recognition by the Church of his authority as emperor of France. Although the calendar reform remained in effect for twelve years, the new method of keeping the time of day would have required the replacement of valued clocks and timepieces and was never actually used in practice.

The metric system was officially adopted on April 7, 1795. The government issued a decree (Loi du 18 germinal, an III) formalizing the adoption of the definitions and terms that are in use today. A brass bar was made by Lenoir to represent the provisional meter, obtained from the survey of LaCaille, and a provisional standard for the kilogram was derived. In 1799 permanent standards for the meter and kilogram made from platinum were constructed based on the new survey by Delambre and Mechain. The full length of the meter bar represented the unit. These standards were deposited in the Archives of the Republic. They became official by an act of December 10, 1799.

During the Napoleonic era, several regressive acts were passed that temporarily revived old traditions. Thus in spite of its auspicious beginning, the metric system was not quickly adopted in France. Although the system continued to be taught in the schools, lack of funds prevented the distribution of secondary standards. Finally, after a three-year transition period, the metric system became compulsory throughout France as of January 1, 1840.

REACTION IN THE UNITED STATES

The importance of a uniform system of weights and measures was recognized in the United States, as in France. Article I, Section 8, of the U.S. Constitution provides that Congress shall have the power “to coin money … and fix the standard of weights and measures.” However, although the progressive concept of decimal coinage was introduced, the early American settlers both retained and cultivated the customs and tools of their British heritage, including the measures of length and mass. In contrast to the French Revolution, the “American Revolution” was not a revolution at all, but was rather a war of independence.

In 1790, President George Washington referred the subject of weights and measures to his Secretary of State, Thomas Jefferson. In a report submitted to the House of Representatives, Jefferson considered two alternatives: if the existing measures were retained, they could be rendered more simple and uniform; or, if a new system were adopted, he favored a decimal system based on the principle of the seconds pendulum. Jefferson did not endorse the metric system as it was eventually formulated, primarily because the metric unit of length could not be checked without a sizable scientific operation on European soil.

The political situation at the close of the eighteenth century also made consideration of the metric system impractical. Although France under Louis XVI had supported the colonies in the war with England, by 1797 there was manifest hostility. The revolutionary climate in France was viewed by the external world with a mixture of curiosity and alarm. The National Convention had been replaced by the Directory, and French officials who had been sympathetic to the United States either had been executed or were in exile.
In addition, a treaty negotiated with England by John Jay in 1795 regarding settlement of the Northwest Territories and trade with the British West Indies was interpreted by France as evidence of an Anglo-American alliance. France retaliated by permitting her ships to prey upon American merchant vessels, and Federalist President John Adams prepared for a French invasion. Thus in 1798, when dignitaries from foreign countries were assembled in Paris to learn of France's progress with metrological reform, the United States was not invited.

A definitive investigation was prepared in 1821 by Secretary of State John Quincy Adams that was to remove the issue from further consideration for the next forty-five years. He found that the standards of length, volume, and mass used throughout the 22 states of the Union were already substantially uniform, unlike the disparate measures that had existed in France prior to the French Revolution. Moreover, it was not at all evident that the metric system would be permanent, since even in France its use was sporadic and, in fact, the consistent terminology had been repealed in 1812 by Napoleon. Therefore, if the metric system failed to win support in early America, it was not for want of recognition.

Serious consideration of the metric system did not occur again until after the Civil War. In 1866, upon the advice of the National Academy of Sciences, the metric system was made legal by the Thirty-Ninth Congress. The Act was signed into law on July 28 by President Andrew Johnson.

TREATY OF THE METER

A series of international expositions in the middle of the nineteenth century enabled the French government to promote the metric system for world use. Between 1870 and 1872, with an interruption caused by the Franco-Prussian War, an international meeting of scientists was held to consider the design of new international metric standards that would replace the meter and kilogram of the French Archives. A Diplomatic Conference on the Meter was convened to ratify the scientific decisions. Formal international approval was secured by the Treaty of the Meter, signed in Paris by the delegates of 17 countries, including the United States, on May 20, 1875.

The treaty established the International Bureau of Weights and Measures (BIPM). It also provided for the creation of an International Committee for Weights and Measures (CIPM) to run the Bureau and the General Conference on Weights and Measures (CGPM) as the formal diplomatic body that would ratify changes as the need arose. The French government offered the Pavillon de Breteuil, once a small royal palace, to serve as headquarters for the Bureau in Sevres, France, near Paris. The grounds of the estate form a tiny international enclave within French territory.

A total of 30 meter bars and 43 kilogram cylinders were manufactured from a single ingot of an alloy of 90 percent platinum and 10 percent iridium by Johnson, Matthey and Company of London. The original meter and kilogram of the French Archives in their existing states were taken as the points of departure. The standards were intercompared at the International Bureau between 1886 and 1889. One meter bar and one kilogram cylinder were selected as the international prototypes. The remaining standards were distributed to the signatories. The work was approved by the First General Conference on Weights and Measures in 1889. The United States received meters 21 and 27 and kilograms 4 and 20.
On January 2, 1890 the seals to the shipping cases for meter 27 and kilogram 20 were broken in an official ceremony at the White House with President Benjamin Harrison presiding. The standards were deposited in the Office of Weights and Measures of the U.S. Coast and Geodetic Survey.

U.S. CUSTOMARY UNITS

The U.S. customary units were tied to the British and French units by a variety of indirect comparisons. Troy weight was the standard for the minting of coins. Congress could be ambivalent about nonuniformity in standards for trade, but it could not tolerate nonuniformity in its standards for money. Therefore, in 1827 a brass copy of the British troy pound of 1758 was secured by Albert Gallatin, Ambassador to England and former Secretary of the Treasury. This standard was kept in the Philadelphia mint, and lesser copies were made and distributed to other mints. The troy pound of the Philadelphia mint was virtually the primary standard for commercial transactions until 1857 and remained the standard for coins until 1911.

The semi-official standards used in commerce for a quarter century may be attributed to Ferdinand Hassler, who was appointed superintendent of the newly organized Coast Survey in 1807. In 1832 the Treasury Department directed Hassler to construct and distribute to the states standards of length, mass, and volume, and balances by which masses might be compared. As the standard of length, Hassler adopted the Troughton scale, an 82-inch brass bar made by Troughton of London for the Coast Survey that Hassler had brought back from Europe in 1815. The distance between the 27th and 63rd engraved lines on a silver inlay scale down the center of the bar was taken to be equal to the British yard. The standard of mass was the avoirdupois pound, derived from the troy pound of the Philadelphia mint by the ratio 7000 grains to 5760 grains. It was represented by a brass knob weight that Hassler constructed and marked with a star; it has thus come to be known as the “star” pound.

The system of weights and measures in Great Britain had been in use since the reign of Queen Elizabeth I. Following a reform begun in 1824, the imperial standard avoirdupois pound was made the standard of mass in 1844 and the imperial standard yard was adopted in 1855. The imperial standards were made legal by an Act of Parliament in 1855 and are preserved in the Board of Trade in London. The United States received copies of the British imperial pound and yard, which became the official U.S. standards from 1857 until 1893.

When the metric system was made lawful in the United States in 1866, a companion resolution was passed to distribute metric standards to the states. The Treasury Department had in its possession several copies derived from the meter and kilogram of the French Archives. These included the “Committee” meter and kilogram, an iron end standard and a brass cylinder with knob copied from the French prototypes, that Hassler had brought with him when he immigrated to the United States in 1805. He had received them as a gift from his friend J.G. Tralles, who was the Swiss representative to the French metric convocation in 1798 and a member of its committee on weights and measures. Also available were the “Arago” meter and kilogram, named after the French physicist who certified them. They were purchased by the United States in 1821 through Albert Gallatin, then minister to France.
The Committee meter and the Arago kilogram were used as the prototypes for brass metric standards that were distributed to the states.

In 1893, under a directive from Thomas C. Mendenhall, Superintendent of Standard Weights and Measures of the Coast and Geodetic Survey, the U.S. customary units were redefined in terms of the metric units. The primary standards of length and mass adopted were prototype meter No. 27 and prototype kilogram No. 20 that the United States had received in 1889 as a signatory to the Treaty of the Meter. The yard was defined as 3600/3937 meter and the avoirdupois pound-mass was defined as 0.4535924277 kilogram. The conversion for mass was based on a comparison between the British imperial standard pound and the international prototype kilogram performed in 1883. These definitions were used by the National Bureau of Standards (now the National Institute of Standards and Technology) from its founding in 1901 until 1959.

On July 1, 1959 the definitions were fixed by international agreement among the English-speaking countries to be 1 yard = 0.9144 meter and 1 pound-mass = 0.45359237 kilogram exactly. The definition of the yard is equivalent to the relations 1 foot = 0.3048 meter and 1 inch = 2.54 centimeters exactly.

The derived unit of force in the British system is the pound-force (lbf), which is defined as the weight of one pound-mass (lbm) at a hypothetical location where the acceleration of gravity has the standard value 9.80665 m/s^2 exactly. Thus 1 lbf = 0.45359237 kg × 9.80665 m/s^2 = 4.448 N approximately. The slug (sl) is the mass that receives an acceleration of one foot per second squared under a force of one pound-force. Thus 1 sl = (1 lbf)/(1 ft/s^2) = (4.448 N)/(0.3048 m/s^2) = 14.59 kg = 32.17 lbm approximately.
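These exact definitions make the customary-to-SI conversions a short exercise. A minimal sketch, using only the exact values quoted above:

    M_PER_YD  = 0.9144        # 1 yard = 0.9144 m, exact by the 1959 agreement
    KG_PER_LB = 0.45359237    # 1 lbm = 0.45359237 kg, exact
    G0        = 9.80665       # standard acceleration of gravity, m/s^2

    m_per_ft = M_PER_YD / 3                  # 0.3048 m, exact
    lbf_N    = KG_PER_LB * G0                # weight of 1 lbm under standard gravity
    slug_kg  = lbf_N / m_per_ft              # mass given 1 ft/s^2 by a force of 1 lbf

    print(f"1 ft   = {m_per_ft:.4f} m")                    # 0.3048
    print(f"1 in   = {m_per_ft / 12 * 100:.2f} cm")        # 2.54
    print(f"1 lbf  = {lbf_N:.3f} N")                       # ~4.448
    print(f"1 slug = {slug_kg:.2f} kg = {slug_kg / KG_PER_LB:.2f} lbm")  # ~14.59, ~32.17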
THE ELECTRICAL UNITS

The theories of electricity and magnetism developed and matured during the early 1800s as fundamental discoveries were made by Oersted, Ampere, Faraday, and many others. The possibility of making magnetic measurements in terms of mechanical units, that is, in “absolute measure,” was first pointed out by Gauss in 1833. His analysis was carried further to cover electrical phenomena by Weber, who in 1851 discussed a method by which a complete set of absolute units might be developed.

In 1861 a committee of the British Association for the Advancement of Science, which included William Thomson (later Lord Kelvin), James Clerk Maxwell, and James Prescott Joule, undertook a comprehensive study of electrical measurements. This committee introduced the concept of a system of units. Four equations were sufficient to determine the units of charge q, current I, voltage V, and resistance R. These were either Coulomb's force law for charges or Ampere's force law for currents, the relation between charge and current q = I t, Ohm's law V = I R, and the equation for electrical work W = V q = I^2 R t, where t is time.

A fundamental principle was that the system should be coherent. That is, the system is founded upon certain base units for length, mass, and time, and derived units are obtained as products or quotients without requiring numerical factors. The meter, gram, and mean solar second were selected as base units. In 1873 a second committee recommended a centimeter-gram-second (CGS) system of units because in this system the density of water is unity.

Two parallel systems of units were devised, the electrostatic and electromagnetic subsystems, depending on whether the law of force for electric charges or for electric currents was taken as fundamental. The ratio of the electrostatic to the electromagnetic unit of charge or current was a fundamental experimental constant c.

The committee also conducted research on electrical standards. It issued a wire resistance standard, the “B.A. unit,” which soon became known as the “ohm.” The idea of naming units after eminent scientists was due to Sir Charles Bright and Latimer Clark.

At the time, electricity and magnetism were essentially two distinct branches of experimental physics. However, in a series of papers published between 1856 and 1865, Maxwell created a unified theory based on the field concept introduced by Faraday. He predicted the existence of electromagnetic waves and identified the “ratio of the units” c with the speed of light. In 1888, Heinrich Hertz verified Maxwell's prediction by generating and detecting electromagnetic waves at microwave frequencies in the laboratory. He also greatly simplified the theory by eliminating unnecessary physical assumptions. Thus the form of Maxwell's equations as they are known to physicists and engineers today is due to Hertz. (Oliver Heaviside made similar modifications and introduced the use of vectors.) In addition, Hertz combined the electrostatic and electromagnetic CGS units into a single system related by the speed of light c, which he called the “Gaussian” system of units.

The recommendations of the B.A. committees were adopted by the First International Electrical Congress in Paris in 1881. Five “practical” electrical units were defined as certain powers of 10 of the CGS units: the ohm, farad, volt, ampere, and coulomb. In 1889, the Second Congress added the joule, watt, and a unit of inductance, later given the name henry.

In 1901, Giorgi demonstrated that the practical electrical units and the MKS mechanical units could be incorporated into a single coherent system by (1) selecting the meter, kilogram, and second as the base units for mechanical quantities; (2) expanding the number of base units to four, including one of an electrical nature; and (3) assigning physical dimensions to the permeability of free space μ0, with a numerical value of 4π × 10^-7 in a “rationalized” system or 10^-7 in an “unrationalized” system. (The term “rationalized,” due to Heaviside, concerned where factors of 4π should logically appear in the equations based on symmetry.) The last assumption implied that the magnetic flux density B and magnetic field H, which are related in vacuum by the equation B = μ0 H, are physically distinct with different units, whereas in the Gaussian system they are of the same character and are dimensionally equivalent. An analogous situation occurs for the electric fields D and E, which are related by D = ε0 E, where ε0 is the permittivity of free space, given by c^2 = 1/(μ0 ε0).

In 1908, an International Conference on Electrical Units and Standards held in London adopted independent, easily reproducible primary electrical standards for resistance and current, represented by a column of mercury and a silver coulombmeter, respectively. These so-called “international” units went into effect in 1911, but they soon became obsolete with the growth of the national standards laboratories and the increased application of electrical measurements to other fields of science.
With the recognition of the need for further international cooperation, the 6th CGPM amended the Treaty of the Meter in 1921 to cover the units of electricity and photometry, and the 7th CGPM created the Consultative Committee for Electricity (CCE) in 1927. By the 8th CGPM in 1933 there was a universal desire to replace the “international” electrical units with “absolute” units. Therefore, the International Electrotechnical Commission (IEC) recommended to the CCE an absolute system of units based on Giorgi's proposals, with the practical electrical units incorporated into a comprehensive MKS system. The choice of the fourth unit was left undecided.

At the meeting of the CCE in September 1935, the delegate from England, J.E. Sears, presented a note that set the course for future action. He proposed that the ampere be selected as the base unit for electricity, defined in terms of the force per unit length between two long parallel wires. The unit could be preserved in the form of wire coils for resistance and Weston cells for voltage by calibration with a current balance. This recommendation was unanimously accepted by the CCE and was adopted by the CIPM. Further progress was halted by the intervention of World War II. Finally, in 1946, by authority given to it by the CGPM in 1933, the CIPM officially adopted the MKS practical system of absolute electrical units to take effect January 1, 1948.

INTERNATIONAL SYSTEM OF UNITS (SI)

By 1948 the General Conference on Weights and Measures was responsible for the units and standards of length, mass, electricity, photometry, temperature, and ionizing radiation. At this time, the next major phase in the evolution of the metric system was begun. It was initiated by a request of the International Union of Pure and Applied Physics “to adopt for international use a practical international system of units.” Thus the 9th CGPM decided to define a complete list of derived units. Derived units had not been considered previously because they do not require independent standards. Also, the CGPM brought within its province the unit of time, which had been the prerogative of astronomers.

The work was started by the 10th CGPM in 1954 and was completed by the 11th CGPM in 1960. During this period there was an extensive revision and simplification of the metric unit definitions, symbols, and terminology. The kelvin and candela were added as base units for thermodynamic temperature and luminous intensity, and in 1971 the mole was added as a seventh base unit for amount of substance.

The modern metric system is known as the International System of Units, with the international abbreviation SI. It is founded on the seven base units, summarized in Table 1, that by convention are regarded as dimensionally independent. All other units are derived units, formed coherently by multiplying and dividing units within the system without the use of numerical factors. Some derived units, including those with special names, are listed in Table 2. For example, the unit of force is the newton, which is equal to a kilogram meter per second squared, and the unit of energy is the joule, equal to a newton meter. The expression of multiples and submultiples of SI units is facilitated by the use of prefixes, listed in Table 3. (Additional information is available on the Internet at the websites of the International Bureau of Weights and Measures at http://www.bipm.fr and the National Institute of Standards and Technology at http://physics.nist.gov/cuu .)
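Coherence can be illustrated with a small bookkeeping exercise: if each unit is represented by its exponents over the base units, derived units are formed purely by adding and subtracting exponents, with no numerical factors. A toy sketch (the representation is purely illustrative, not any standard library):

    def mul(a, b):
        # product of two units: add base-unit exponents
        out = dict(a)
        for k, v in b.items():
            out[k] = out.get(k, 0) + v
        return {k: v for k, v in out.items() if v != 0}

    def div(a, b):
        # quotient of two units: subtract base-unit exponents
        return mul(a, {k: -v for k, v in b.items()})

    meter, kilogram, second = {"m": 1}, {"kg": 1}, {"s": 1}

    newton = div(mul(kilogram, meter), mul(second, second))
    joule  = mul(newton, meter)
    watt   = div(joule, second)

    print(newton)   # {'kg': 1, 'm': 1, 's': -2}   i.e., kg m/s^2
    print(joule)    # {'kg': 1, 'm': 2, 's': -2}   i.e., N m
    print(watt)     # {'kg': 1, 'm': 2, 's': -3}   i.e., J/s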
METRIC STANDARDS

One must distinguish between a unit, which is an abstract idealization, and a standard, which is the physical embodiment of the unit. Since the origin of the metric system, the standards have undergone several revisions to reflect increased precision as the science of metrology has advanced.

The meter. The international prototype meter standard of 1889 was a platinum-iridium bar with an X-shaped cross section. The meter was defined by the distance between two engraved lines on the top surface of the bridge instead of the distance between the end faces. The meter was derived from the meter of the French Archives in its existing state, and reference to the earth was abandoned. The permanence of the international prototype was verified by comparison with three companion bars, called “check standards.” In addition, there were nine measurements in terms of the red line of cadmium between 1892 and 1942. The first of these measurements was carried out by A. A. Michelson using the interferometer he invented. For this work, Michelson received the Nobel Prize in physics in 1907.

Improvements in monochromatic light sources resulted in a new standard based on a well-defined wavelength of light. A single atomic isotope with an even atomic number and an even mass number is an ideal spectral standard because it eliminates complexity and hyperfine structure. Also, Doppler broadening is minimized by using a gas of heavy atoms in a lamp operated at a low temperature. Thus a particular orange krypton-86 line was chosen, whose wavelength was obtained by direct comparison with the cadmium wavelength. In 1960, the 11th CGPM defined the meter as the length equal to 1 650 763.73 wavelengths of this spectral line.

Research on lasers at the Boulder, CO laboratory of the National Bureau of Standards contributed to another revision of the meter. The wavelength and frequency of a stabilized helium-neon laser beam were measured independently to determine the speed of light. The wavelength was obtained by comparison with the krypton wavelength, and the frequency was determined by a series of measurements traceable to the cesium atomic standard for the second. The principal source of error was in the profile of the krypton spectral line representing the meter itself. Consequently, in 1983 the 17th CGPM adopted a new definition of the meter based on this measurement: “the length of the path traveled by light in vacuum during a time interval of 1/299 792 458 of a second.” The effect of this definition is to fix the speed of light at exactly 299 792 458 m/s. Thus experimental methods previously interpreted as measurements of the speed of light c (or equivalently, the permittivity of free space ε0) have become calibrations of length.
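The successive definitions of the meter are numerically consistent, as a quick check shows. A sketch; the wavelength figure follows directly from the 1960 definition quoted above:

    N_WAVELENGTHS = 1650763.73    # Kr-86 wavelengths per meter (1960 definition)
    C = 299792458.0               # speed of light in m/s, exact since 1983

    wavelength = 1.0 / N_WAVELENGTHS
    print(f"orange Kr-86 line: {wavelength * 1e9:.4f} nm")   # about 605.7802 nm
    print(f"light travels 1 m in {1.0 / C:.6e} s")           # 1/299 792 458 s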
The kilogram. In 1889 the international prototype kilogram was adopted as the standard for mass. The prototype kilogram is a platinum-iridium cylinder with equal height and diameter of 3.9 cm and slightly rounded edges. For a cylinder, these dimensions present the smallest surface area to volume ratio, to minimize wear. The standard is carefully preserved in a vault at the International Bureau of Weights and Measures and is used only on rare occasions. It remains the standard today. The kilogram is the only unit still defined in terms of an arbitrary artifact instead of a natural phenomenon.

The second. Historically, the unit of time, the second, was defined in terms of the period of rotation of the earth on its axis as 1/86 400 of a mean solar day. The name means “second minute”; the unit was first applied to timekeeping in the seventeenth century, when pendulum clocks were invented that could maintain time to this precision. By the twentieth century, astronomers realized that the rotation of the earth is not constant. Due to gravitational tidal forces produced by the moon on the shallow seas, the length of the day is increasing by about 1.4 milliseconds per century. The effect can be measured by comparing the computed paths of ancient solar eclipses, on the assumption of uniform rotation, with the recorded locations on earth where they were actually observed.

Consequently, in 1956 the second was redefined in terms of the period of revolution of the earth about the sun for the epoch 1900, as represented by the Tables of the Sun computed by the astronomer Simon Newcomb of the U.S. Naval Observatory in Washington, DC. The operational significance of this definition was to adopt the linear coefficient in Newcomb's formula for the mean longitude of the sun to determine the unit of time.

The rapid development of atomic clocks soon permitted yet another definition. Accordingly, in 1967 the 13th CGPM defined the second as “the duration of 9 192 631 770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the cesium-133 atom.” This definition was based on observations of the moon, whose ephemeris is tied indirectly to the apparent motion of the sun, and was equivalent to the previous definition within the limits of experimental uncertainty.

The ampere. The unit of electric current, the ampere, is defined as that constant current which, if maintained in each of two parallel, infinitely long wires with a separation of 1 meter in vacuum, would produce a force per unit length between them equal to 2 × 10^-7 N/m. This formal definition serves to establish the value of the constant μ0 as 4π × 10^-7 N/A^2 exactly.

Although the base unit for electricity is the ampere, the electrical units are maintained through the volt and the ohm. In the past, the practical representation of the volt was a group of Weston saturated cadmium-sulfate electrochemical standard cells. A primary calibration experiment involved the measurement of the force between two coils of an “ampere balance” to determine the current, while the cell voltage was compared to the potential difference across a known resistance. The ohm was represented by a wire-wound standard resistor. Its resistance was measured against the impedance of an inductor or a capacitor at a known frequency. The inductance can be calculated from the geometrical dimensions alone. From about 1960, a so-called Thompson-Lampard calculable capacitor has been used, in which only a single measurement of length is required.

Since the early 1970s, the volt has been maintained by means of the Josephson effect, a quantum mechanical tunneling phenomenon discovered by Brian Josephson in 1962. A Josephson junction may be formed by two superconducting niobium films separated by an oxide insulating layer. If the Josephson junction is irradiated by microwaves at frequency f and the bias current is progressively increased, the current-voltage characteristic is a step function, in which the dc bias voltage increases discontinuously at discrete voltage intervals equal to f / KJ, where KJ = 2 e / h is the Josephson constant, h is Planck's constant, and e is the elementary charge.
The ohm is now realized by the quantum Hall effect, a characteristic of a two-dimensional electron gas discovered by Klaus von Klitzing in 1980. In a device such as a silicon metal-oxide-semiconductor field-effect transistor (MOSFET), the Hall voltage VH for a fixed current I increases in discrete steps as the gate voltage is increased. The Hall resistance RH = VH / I is equal to an integral fraction of the von Klitzing constant, given by RK = h / e^2 = μ0 c / (2 α), where α is the fine structure constant. In practice, RK can be measured in terms of a laboratory resistance standard, whose resistance is obtained by comparison with the impedance of a calculable capacitor, or it can be obtained indirectly from the measured value of the fine structure constant.

A new method to determine the relation between the mechanical and electromagnetic units that has shown much promise is by means of a “watt balance,” which has greater precision than an ordinary ampere balance. In this experiment, a current I is passed through a test coil suspended in the magnetic field of a larger coil so that the force F balances a known weight mg. Next the test coil is moved axially through the magnetic field, and the velocity v and induced voltage V are measured. By the equivalence of mechanical and electrical power, F v = V I. The magnetic field and apparatus geometry drop out of the calculation. The voltage V is measured in terms of the Josephson constant KJ, while the current I is calibrated by the voltage across a resistance known in terms of the von Klitzing constant RK. The experiment determines KJ^2 RK (and thus h), which yields KJ if RK is assumed to be known in terms of the SI ohm.

The Josephson and quantum Hall effects provide highly uniform and conveniently reproducible quantum mechanical standards for the volt and the ohm. For the purpose of practical engineering metrology, conventional values for the Josephson constant and the von Klitzing constant were adopted by international agreement starting January 1, 1990. These values are KJ-90 = 483 597.9 GHz/V and RK-90 = 25 812.807 ohms exactly. The best experimental SI values, obtained as part of an overall least squares adjustment of the fundamental constants completed in 1998, differ only slightly from these conventional values.
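The conventional values can be checked against the defining relations KJ = 2e/h and RK = h/e^2. A sketch using present-day values of h and e, which are close to, though not identical with, the 1998 adjusted values cited above:

    h = 6.62607015e-34    # Planck constant, J s
    e = 1.602176634e-19   # elementary charge, C

    K_J = 2 * e / h       # Josephson constant, Hz/V
    R_K = h / e**2        # von Klitzing constant, ohms

    print(f"K_J = {K_J / 1e9:.1f} GHz/V")   # ~483 597.8, vs K_J-90 = 483 597.9
    print(f"R_K = {R_K:.3f} ohms")          # ~25 812.807, vs R_K-90 = 25 812.807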
METRIC UNITS IN INDUSTRY

The International System of Units (SI) has become the fundamental basis of scientific measurement worldwide. It is also used for everyday commerce in virtually every country of the world but the United States. Congress has passed legislation to encourage use of the metric system, including the Metric Conversion Act of 1975 and the Omnibus Trade and Competitiveness Act of 1988, but progress has been slow.

The space program should have been the leader in the use of metric units in the United States and would have been an excellent model for education. Burt Edelson, Director of the Institute for Advanced Space Research at George Washington University and former Associate Administrator of NASA, recalls that “in the mid-‘80s, NASA made a valiant attempt to convert to the metric system” in the initial phase of the international space station program. However, he continued, “when the time came to issue production contracts, the contractors raised such a hue and cry over the costs and difficulties of conversion that the initiative was dropped. The international partners were unhappy, but their concerns were shunted aside. No one ever suspected that a measurement conversion error could cause a failure in a future space project.”

Economic pressure to compete in an international environment is a strong motive for contractors to use metric units. Barry Taylor, head of the Fundamental Constants Data Center of the National Institute of Standards and Technology and U.S. representative to the Consultative Committee on Units of the CIPM, expects that the greatest stimulus for metrication will come from industries with global markets. “Manufacturers are moving steadily ahead on SI for foreign markets,” he says. Indeed, most satellite design technical literature does use metric units, including meters for length, kilograms for mass, and newtons for force, because of the influence of international partners, suppliers, and customers.

CONCLUSION

As we begin the new millennium, there should be a renewed national effort to promote the use of SI metric units throughout industry and to assist the general public in becoming familiar with the system and using it regularly. The schools have taught the metric system in science classes for decades. It is time to put aside the customary units of the industrial revolution and to adopt the measures of precise science in all aspects of modern engineering and commerce, including the United States space program and the satellite industry.

Table 1. SI Base Units



Quantity                          Unit
                                  Name        Symbol

length                            meter       m
mass                              kilogram    kg
time                              second      s
electric current                  ampere      A
thermodynamic temperature         kelvin      K
amount of substance               mole        mol
luminous intensity                candela     cd


         

Table 2. Examples of SI Derived Units


	
Quantity                      Unit
                              Special Name     Symbol   Equivalent

plane angle                   radian           rad      1
solid angle                   steradian        sr       1
angular velocity                                        rad/s
angular acceleration                                    rad/s^2
frequency                     hertz            Hz       s^-1
speed, velocity                                         m/s
acceleration                                            m/s^2
force                         newton           N        kg m/s^2
pressure, stress              pascal           Pa       N/m^2
energy, work, heat            joule            J        kg m^2/s^2,  N m
power                         watt             W        kg m^2/s^3,  J/s
power flux density                                      W/m^2
linear momentum, impulse                                kg m/s,  N s
angular momentum                                        kg m^2/s,  N m s
electric charge               coulomb          C        A s
electric potential, emf       volt             V        W/A,  J/C
magnetic flux                 weber            Wb       V s
resistance                    ohm              Ω        V/A
conductance                   siemens          S        A/V,  Ω^-1
inductance                    henry            H        Wb/A
capacitance                   farad            F        C/V
electric field strength                                 V/m,  N/C
electric displacement                                   C/m^2
magnetic field strength                                 A/m
magnetic flux density         tesla            T        Wb/m^2,  N/(A m)
Celsius temperature           degree Celsius   °C       K
luminous flux                 lumen            lm       cd sr
illuminance                   lux              lx       lm/m^2
radioactivity                 becquerel        Bq       s^-1


       

Table 3. SI Prefixes


	
Factor   Prefix   Symbol        Factor   Prefix   Symbol

10^24    yotta    Y             10^-1    deci     d
10^21    zetta    Z             10^-2    centi    c
10^18    exa      E             10^-3    milli    m
10^15    peta     P             10^-6    micro    μ
10^12    tera     T             10^-9    nano     n
10^9     giga     G             10^-12   pico     p
10^6     mega     M             10^-15   femto    f
10^3     kilo     k             10^-18   atto     a
10^2     hecto    h             10^-21   zepto    z
10^1     deka     da            10^-24   yocto    y

                         


 

Rocket Thrust Equation and Launch Vehicles

The fundamental principles of propulsion and launch vehicle physics

by Robert A. Nelson

A satellite is launched into space on a rocket, and once there it is inserted into the operational orbit and is maintained in that orbit by means of thrusters onboard the satellite itself. This article will summarize the fundamental principles of rocket propulsion and describe the main features of the propulsion systems used on both launch vehicles and satellites.

The law of physics on which rocket propulsion is based is called the principle of momentum. According to this principle, the time rate of change of the total momentum of a system of particles is equal to the net external force. The momentum is defined as the product of mass and velocity. If the net external force is zero, then the principle of momentum becomes the principle of conservation of momentum and the total momentum of the system is constant. To balance the momentum conveyed by the exhaust, the rocket must generate a momentum of equal magnitude but in the opposite direction and thus it accelerates forward.

The system of particles may be defined as the sum of all the particles initially within the rocket at a particular instant. As propellant is consumed, the exhaust products are expelled at a high velocity. The center of mass of the total system, subsequently consisting of the particles remaining in the rocket and the particles in the exhaust, follows a trajectory determined by the external forces, such as gravity, that is the same as if the original particles remained together as a single entity. In deep space, where gravity may be neglected, the center of mass remains at rest.

The configuration of a chemical rocket engine consists of the combustion chamber, where the chemical reaction takes place, and the nozzle, where the gases expand to create the exhaust. An important characteristic of the rocket nozzle is the existence of a throat. The velocity of the gases at the throat is equal to the local velocity of sound and beyond the throat the gas velocity is supersonic. Thus the combustion of the gases within the rocket is independent of the surrounding environment and a change in external atmospheric pressure cannot propagate upstream.

The thrust of the rocket is given by the theoretical equation:

F = λ m(dot) ve + (pe − pa) Ae

This equation consists of two terms. The first term, called the momentum thrust, is equal to the product of the propellant mass flow rate m(dot) and the exhaust velocity ve, with a correction factor λ for nonaxial flow due to the nozzle divergence angle. The second term is called the pressure thrust. It is equal to the difference between the exit pressure pe of the exhaust and the ambient atmospheric pressure pa, acting over the area Ae of the exit plane of the rocket nozzle. The combined effect of both terms is incorporated into the effective exhaust velocity c. Thus the thrust is also written

F = m(dot) c

where an average value of c is used, since it is not strictly constant.
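As a numerical illustration, the following sketch evaluates both terms; the engine parameters are hypothetical, chosen only to exercise the equation. Note that at sea level an overexpanded nozzle (pe < pa) gives a negative pressure-thrust term:

    mdot = 250.0       # propellant mass flow rate, kg/s (hypothetical)
    ve   = 2900.0      # exhaust velocity, m/s (hypothetical)
    lam  = 0.983       # nonaxial-flow correction for a 15-degree conical nozzle
    pe   = 60.0e3      # exhaust exit pressure, Pa (hypothetical)
    pa   = 101.325e3   # ambient pressure at sea level, Pa
    Ae   = 0.6         # nozzle exit-plane area, m^2 (hypothetical)

    F = lam * mdot * ve + (pe - pa) * Ae   # thrust, N
    c = F / mdot                           # effective exhaust velocity, m/s

    print(f"thrust F = {F / 1e3:.1f} kN")                   # ~687.9 kN
    print(f"effective exhaust velocity c = {c:.0f} m/s")    # ~2752 m/s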

The exhaust exit pressure is determined by the expansion ratio given by

ε = Ae / At

which is the ratio of the area of the nozzle exit plane Ae to the area of the throat At. As the expansion ratio ε increases, the exhaust exit pressure pe decreases.

The thrust is maximum when the exit pressure of the exhaust is equal to the ambient pressure of the surrounding environment, that is, when pe = pa. This condition is known as optimum expansion and is achieved by proper selection of the expansion ratio. Although optimum expansion makes the contribution of the pressure thrust zero, it results in a higher value of exhaust velocity ve such that the increase in momentum thrust exceeds the reduction in pressure thrust.

A conical nozzle is easy to manufacture and simple to analyze. If the apex angle is 2α, the correction factor for nonaxial flow is

λ = ½ (1 + cos α)

The apex angle must be small to keep the loss within acceptable limits. A typical design would be α = 15°, for which λ = 0.9830. This represents a loss of 1.7 percent. However, conical nozzles are excessively long for large expansion ratios and suffer additional losses caused by flow separation. A bell-shaped nozzle is therefore superior because it promotes expansion while reducing length.
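The loss factor is easy to tabulate for candidate designs; a small sketch of the formula above:

    import math

    def divergence_factor(alpha_deg):
        # lambda = (1 + cos(alpha)) / 2 for a conical nozzle of half-angle alpha
        return 0.5 * (1.0 + math.cos(math.radians(alpha_deg)))

    for alpha in (10, 15, 20):
        lam = divergence_factor(alpha)
        print(f"alpha = {alpha:2d} deg:  lambda = {lam:.4f}  (loss {100 * (1 - lam):.1f} percent)")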

ROCKET PROPULSION PARAMETERS

The specific impulse Isp of a rocket is the parameter that determines the overall effectiveness of the rocket nozzle and propellant. It is defined as the ratio of the thrust and the propellant weight flow rate, or

Isp = F / (m(dot) g) = c / g

where g is a conventional value for the acceleration of gravity (9.80665 m/s^2 exactly). Specific impulse is expressed in seconds.

Although gravity has nothing whatever to do with the rocket propulsion chemistry, it has entered into the definition of specific impulse because in past engineering practice mass was expressed in terms of the corresponding weight on the surface of the earth. By inspection of the equation, it can be seen that the specific impulse Isp is physically equivalent to the effective exhaust velocity c, but is rescaled numerically and has a different unit because of division by g. Some manufacturers now express specific impulse in newton seconds per kilogram, which is the same as effective exhaust velocity in meters per second.
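The conversion between the two conventions is a single multiplication. A minimal sketch; the 450 s and 300 s figures are representative values discussed later in this article:

    g0 = 9.80665   # conventional acceleration of gravity, m/s^2

    def c_from_isp(isp_seconds):
        # effective exhaust velocity in m/s (equivalently, N s/kg)
        return isp_seconds * g0

    print(f"Isp = 450 s -> c = {c_from_isp(450):.0f} m/s")   # ~4413, LOX/LH2 in vacuum
    print(f"Isp = 300 s -> c = {c_from_isp(300):.0f} m/s")   # ~2942, typical bipropellant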

Two other important parameters are the thrust coefficient CF and the characteristic exhaust velocity c*. The thrust coefficient is defined as

CF = F / (At pc) = m(dot) c / (At pc)

where F is the thrust, At is the throat area, and pc is the chamber pressure. This parameter is the figure of merit of the nozzle design. The characteristic exhaust velocity is defined as

c* = At pc / m(dot) = c / CF

This parameter is the figure of merit of the propellant. Thus the specific impulse may be written

Isp = CF c* / g

which shows that the specific impulse is the figure of merit of the nozzle design and propellant as a whole, since it depends on both CF and c*. However, in practice the specific impulse is usually regarded as a measure of the efficiency of the propellant alone.
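The three figures of merit are linked, as a short example shows. All input values are hypothetical (the thrust and mass flow rate are the same as in the earlier sketch), chosen only to illustrate the relations:

    g0   = 9.80665   # conventional acceleration of gravity, m/s^2
    F    = 687.9e3   # thrust, N (hypothetical)
    mdot = 250.0     # propellant mass flow rate, kg/s (hypothetical)
    At   = 0.05      # throat area, m^2 (hypothetical)
    pc   = 7.0e6     # chamber pressure, Pa (hypothetical)

    CF     = F / (At * pc)      # thrust coefficient: figure of merit of the nozzle
    c_star = At * pc / mdot     # characteristic exhaust velocity: merit of the propellant
    Isp    = CF * c_star / g0   # specific impulse: merit of nozzle and propellant combined

    print(f"CF = {CF:.3f}, c* = {c_star:.0f} m/s, Isp = {Isp:.0f} s")   # 1.965, 1400, 281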

LAUNCH VEHICLE PROPULSION SYSTEMS

In the first stage of a launch vehicle, the exit pressure of the exhaust is equal to the sea level atmospheric pressure 101.325 kPa (14.7 psia) for optimum expansion. As the altitude of the rocket increases along its trajectory, the surrounding atmospheric pressure decreases and the thrust increases because of the increase in pressure thrust. However, at the higher altitude the thrust is less than it would be for optimum expansion at that altitude. The exhaust pressure is then greater than the external pressure and the nozzle is said to be underexpanded. The gas expansion continues downstream and manifests itself by creating diamond-shaped shock waves that can often be observed in the exhaust plume.

The second stage of the launch vehicle is designed for optimum expansion at the altitude where it becomes operational. Because the atmospheric pressure is less than at sea level, the exit pressure of the exhaust must be less and thus the expansion ratio must be greater. Consequently, the second stage nozzle exit diameter is larger than the first stage nozzle exit diameter.

For example, the first stage of a Delta II 7925 launch vehicle has an expansion ratio of 12. The propellant is liquid oxygen and RP-1 (a kerosene-like hydrocarbon) in a mixture ratio (O/F) of 2.25 at a chamber pressure of 4800 kPa (700 psia), with a sea level specific impulse of 255 seconds. The second stage has a nozzle expansion ratio of 65 and burns nitrogen tetroxide and Aerozine 50 (a mixture of hydrazine and unsymmetrical dimethylhydrazine) in a mixture ratio of 1.90 at a chamber pressure of 5700 kPa (830 psia), which yields a vacuum specific impulse of 320 seconds.

In space, the surrounding atmospheric pressure is zero. In principle, the expansion ratio would have to be infinite to reduce the exit pressure to zero. Thus optimum expansion is impossible, but it can be approximated by a very large nozzle diameter, such as can be seen on the main engines of the space shuttle, with ε = 77.5. There is ultimately a tradeoff between increasing the size of the nozzle exit for improved performance and reducing the mass of the rocket engine.

In a chemical rocket, the exhaust velocity, and hence the specific impulse, increases as the combustion temperature increases and the molar mass of the exhaust products decreases. Thus liquid oxygen and liquid hydrogen are nearly ideal chemical rocket propellants because they burn energetically at high temperature (about 3200 K) and produce nontoxic exhaust products consisting of gaseous hydrogen and water vapor with a small effective molar mass (about 11 kg/kmol). The vacuum specific impulse is about 450 seconds. These propellants are used on the space shuttle, the Atlas Centaur upper stage, the Ariane-4 third stage, the Ariane-5 core stage, the H-2 first and second stages, and the Long March CZ-3 third stage.
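A rough ideal-gas estimate indicates why this combination performs so well. The sketch below is not from the discussion above: it assumes frozen composition and complete expansion to vacuum, so it gives an upper bound rather than a realistic prediction, and the value of γ is an assumed round number; the temperature and molar mass are the figures just quoted:

    import math

    R     = 8314.46   # universal gas constant, J/(kmol K)
    gamma = 1.2       # ratio of specific heats of the exhaust (assumed)
    Tc    = 3200.0    # combustion temperature, K
    M     = 11.0      # effective molar mass of the exhaust, kg/kmol

    # Limiting exhaust velocity for complete expansion to vacuum:
    ve = math.sqrt(2 * gamma / (gamma - 1) * R * Tc / M)
    print(f"ideal exhaust velocity ~ {ve:.0f} m/s")    # ~5400 m/s
    print(f"equivalent Isp ~ {ve / 9.80665:.0f} s")    # ~550 s; real engines reach ~450 s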

SPACECRAFT PROPULSION SYSTEMS

The spacecraft has its own propulsion system that is used for orbit insertion, stationkeeping, momentum wheel desaturation, and attitude control. The propellant required to perform a maneuver with a specified velocity increment Δv is given by the “rocket equation”

Δm = m0 [ 1 − exp( −Δv / (Isp g) ) ]

where m0 is the initial spacecraft mass. This equation implies that a reduction in velocity increment or an increase in specific impulse translates into a reduction in propellant.

In the case of a geostationary satellite, the spacecraft must perform a critical maneuver at the apogee of the transfer orbit at the synchronous altitude of 35,786 km to simultaneously remove the inclination and circularize the orbit. The transfer orbit has a perigee altitude of about 200 km and an inclination roughly equal to the latitude of the launch site. To minimize the required velocity increment, it is thus advantageous to have the launch site as close to the equator as possible.

For example, in a Delta or Atlas launch from Cape Canaveral the transfer orbit is inclined at 28.5° and the velocity increment at apogee is 1831 m/s; for an Ariane launch from Kourou the inclination is 7° and the velocity increment is 1502 m/s; while for a Zenit flight from the Sea Launch platform on the equator the velocity increment is 1478 m/s. By the rocket equation, assuming a specific impulse of 300 seconds, the fraction of the separated mass consumed by the propellant for the apogee maneuver is 46 percent from Cape Canaveral, 40 percent from Kourou, and 39 percent from the equator. As a rule of thumb, the mass of a geostationary satellite at beginning of life is on the order of one half its mass when separated from the launch vehicle.
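The percentages quoted above follow directly from the rocket equation; a quick check, assuming the same 300-second specific impulse:

    import math

    g0, isp = 9.80665, 300.0   # conventional gravity, m/s^2; specific impulse, s

    def propellant_fraction(dv):
        # Dm/m0 = 1 - exp(-Dv / (Isp g0))
        return 1.0 - math.exp(-dv / (isp * g0))

    for site, dv in [("Cape Canaveral (i = 28.5 deg)", 1831.0),
                     ("Kourou (i = 7 deg)", 1502.0),
                     ("Sea Launch (equator)", 1478.0)]:
        f = propellant_fraction(dv)
        print(f"{site}: dv = {dv:.0f} m/s -> {100 * f:.0f}% of separated mass")
    # prints 46%, 40%, and 39%, as quoted above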

Before performing the apogee maneuver, the spacecraft must be reoriented in the transfer orbit to face in the proper direction for the thrust. This task is sometimes performed by the launch vehicle at spacecraft separation or else must be carried out in a separate maneuver by the spacecraft itself. In a launch from Cape Canaveral, the angle through which the satellite must be reoriented is about 132°.

Once on station, the spacecraft must frequently perform a variety of stationkeeping maneuvers over its mission life to compensate for orbital perturbations. The principal perturbation is the combined gravitational attraction of the sun and moon, which causes the orbital inclination to increase by nearly one degree per year. This perturbation is compensated by a north-south stationkeeping maneuver approximately once every two weeks so as to keep the satellite within 0.05° of the equatorial plane. The average annual velocity increment is about 50 m/s, which represents 95 percent of the total stationkeeping fuel budget. Also, the slightly elliptical shape of the earth's equator causes a longitudinal drift, which is compensated by east-west stationkeeping maneuvers about once a week, with an annual velocity increment of less than 2 m/s, to keep the satellite within 0.05° of its assigned longitude.

In addition, solar radiation pressure caused by the transfer of momentum carried by light and infrared radiation from the sun in the form of electromagnetic waves both flattens the orbit and disturbs the orientation of the satellite. The orbit is compensated by an eccentricity control maneuver that can sometimes be combined with east-west stationkeeping. The orientation of the satellite is maintained by momentum wheels supplemented by magnetic torquers and thrusters. However, the wheels must occasionally be restored to their nominal rates of rotation by means of a momentum wheel desaturation maneuver in which a thruster is fired to offset the change in angular momentum.

Geostationary spacecraft typical of those built during the 1980s have solid propellant rocket motors for the apogee maneuver and liquid hydrazine thrusters for stationkeeping and attitude control. The apogee kick motor uses a mixture of HTPB fuel and ammonium perchlorate oxidizer with a specific impulse of about 285 seconds. The hydrazine stationkeeping thrusters operate by catalytic decomposition and have an initial specific impulse of about 220 seconds. They are fed by the pressure of an inert gas, such as helium, in the propellant tanks. As propellant is consumed, the gas expands and the pressure decreases, causing the flow rate and the specific impulse to decrease over the mission life. The performance of the hydrazine is enhanced in an electrothermal hydrazine thruster (EHT), which produces a hot gas mixture at about 1000 °C with a lower molar mass and higher enthalpy, resulting in a higher specific impulse of between 290 and 300 seconds.

For example, the Ford Aerospace (now Space Systems/Loral) INTELSAT V satellite has a Thiokol AKM that produces an average thrust of 56 kN (12,500 lbf) and burns to depletion in approximately 45 seconds. On-orbit operations are carried out by an array of four 0.44 N (0.1 lbf) thrusters for roll control, ten 2.0 N (0.45 lbf) thrusters for pitch and yaw control and E/W stationkeeping, and two 22.2 N (5.0 lbf) thrusters for repositioning and reorientation. Four 0.3 N (0.07 lbf) EHTs are used for N/S stationkeeping. The nominal mass of the spacecraft at beginning of life (BOL) is 1005 kg and the dry mass at end of life (EOL) is 830 kg. The difference of 175 kg represents the mass of the propellant for a design life of 7 years.
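As a consistency check on these numbers, the rocket equation applied to the BOL and dry masses gives the total velocity increment the propellant load can deliver. A sketch, assuming the initial hydrazine specific impulse of 220 seconds quoted above (the actual average over life is somewhat lower in a blowdown system):

```python
import math

G0 = 9.80665   # standard gravity, m/s^2
ISP = 220.0    # s, initial hydrazine Isp from the text (decays over the mission)

m_bol, m_dry, years = 1005.0, 830.0, 7.0   # INTELSAT V figures quoted above
delta_v = ISP * G0 * math.log(m_bol / m_dry)
print(f"total delta-v ~ {delta_v:.0f} m/s, or ~{delta_v / years:.0f} m/s per year")
# ~413 m/s total, ~59 m/s/yr: consistent with ~50 m/s/yr for N-S stationkeeping
# plus the smaller E-W, repositioning, and attitude control budgets.
```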

Satellites launched in the late 1980s and 1990s typically have an integrated propulsion system that uses a bipropellant combination of monomethyl hydrazine as fuel and nitrogen tetroxide as oxidizer. The specific impulse is about 300 seconds, and propellant margin not used for the apogee maneuver can be applied to stationkeeping. Also, since the apogee engine is restartable, it can be used for perigee velocity augmentation and supersynchronous transfer orbit scenarios that optimize the combined propulsion capabilities of the launch vehicle and the spacecraft.


For example, the INTELSAT VII satellite, built by Space Systems/Loral, has a Marquardt 490 N apogee thruster and an array of twelve 22 N stationkeeping thrusters manufactured by Atlantic Research Corporation, with columbium (niobium) nozzles having a 150:1 expansion ratio and a specific impulse of 235 seconds. For an Ariane launch the separated mass in GTO is 3610 kg, the mass at BOL is 2100 kg, and the mass at EOL is 1450 kg. The mission life is approximately 17 years.

The Hughes HS-601 satellite has a similar thruster configuration. The mass is approximately 2970 kg at launch, 1680 kg at BOL, and 1300 kg at EOL for a nominal 14 year mission.

An interesting problem is the estimation of the fuel remaining on the spacecraft at any given time during the mission life. This information is used to predict the satellite end of life. There are no “fuel gauges,” so the fuel mass must be determined indirectly. There are three principal methods. The first is called the “gas law” method, which is based on the equation of state of an ideal gas. The pressure and temperature of the inert gas in the propellant tanks are measured by transducers, and the volume of the gas is computed knowing precisely the pressure and temperature at launch. The volume of the remaining propellant can thus be deduced and the mass determined from the known density as a function of temperature. Corrections must be applied for the expansion of the tanks and the propellant vapor pressure. The second method is called the “bookkeeping” method. In this method the thruster on-time for each maneuver is carefully measured and recorded. The propellant consumed is then calculated from the mass flow rate, expressed in terms of the pressure using an empirical model. The third method is much more sophisticated and is based on the measured dynamics of the spacecraft after a stationkeeping maneuver to determine its total mass. In general, these three independent methods provide redundant information that can be used to check one another.
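A minimal sketch of the gas law method is shown below. The function and variable names are hypothetical, and the tank-stretch and vapor-pressure corrections mentioned above are deliberately omitted:

```python
def remaining_propellant_mass(p, t, p0, t0, v_tank, v_prop0, rho):
    """Estimate remaining propellant by the 'gas law' method.

    p, t     -- current pressurant pressure (Pa) and temperature (K)
    p0, t0   -- pressure and temperature at launch
    v_tank   -- total tank volume (m^3)
    v_prop0  -- propellant volume at launch (m^3)
    rho      -- propellant density at temperature t (kg/m^3)
    """
    v_gas0 = v_tank - v_prop0             # pressurant volume at launch
    v_gas = v_gas0 * (p0 / t0) * (t / p)  # ideal gas law: pV/T is constant
    v_prop = v_tank - v_gas               # volume still occupied by propellant
    return rho * v_prop
```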

NEW TECHNOLOGIES

Several innovative technologies have substantially improved the fuel efficiency of satellite stationkeeping thrusters. The savings in fuel can be used to increase the available payload mass, prolong the mission life, or reduce the mass of the spacecraft.

The first of these developments is arcjet technology, a form of electric propulsion. The arcjet system uses an electric arc to superheat hydrazine fuel, which nearly doubles its efficiency. An arcjet thruster has a specific impulse of over 500 seconds. Typical thrust levels are from 0.20 to 0.25 N. The arcjet concept was developed by the NASA Lewis Research Center in Cleveland, and thrusters have been manufactured commercially by Primex Technologies, a subsidiary of the Olin Corporation.

AT&T’s Telstar 401 satellite, launched in December 1993 (and subsequently lost in 1997 due to an electrical failure generally attributed to a solar flare) was the first satellite to use arcjets. The stationkeeping propellant requirement was reduced by about 40 percent, which was critical to the selection of the Atlas IIAS launch vehicle. Similar arcjet systems are used on INTELSAT VIII and the Lockheed Martin A2100 series of satellites. INTELSAT VIII, for example, has a dual mode propulsion system incorporating a bipropellant liquid apogee engine that burns hydrazine and oxidizer for orbit insertion and four arcjets that use monopropellant hydrazine in the reaction control subsystem for stationkeeping.

Electrothermal hydrazine thrusters continue to have applications on various geostationary satellites and on some small spacecraft where maneuvering time is critical. For example, EHTs are used on the IRIDIUM satellites built by Lockheed Martin.

The most exciting development has been in the field of ion propulsion. The propellant is xenon gas. Although the thrust is small, typically tens of millinewtons, the specific impulse is from 2000 to 4000 seconds, roughly an order of magnitude greater than that of conventional bipropellant stationkeeping thrusters. Also, the lower thrust levels have the virtue of minimizing attitude disturbances during stationkeeping maneuvers.

The xenon ion propulsion system, or XIPS (pronounced “zips”), is a gridded ion thruster developed by Hughes. This system is available on the HS-601 HP (high power) and HS-702 satellite models and allows for a reduction in propellant mass of up to 90 percent for a 12 to 15 year mission life. A typical satellite has four XIPS thrusters, including two primary thrusters and two redundant thrusters.

Xenon, an inert monatomic gas with the highest molar mass of any stable noble gas (131 kg/kmol), is introduced into a thruster chamber ringed by magnets. Electrons emitted by a cathode knock electrons off the xenon atoms and form positive xenon ions. The ions are accelerated by a pair of gridded electrodes, one with a high positive voltage and one with a negative voltage, at the far end of the thrust chamber and create more than 3000 tiny beams. The beams are neutralized by a flux of electrons emitted by a device called the neutralizer to prevent the ions from being electrically attracted back to the thruster and to prevent a space charge from building up around the satellite.

The increase in kinetic energy of the ions is equal to the work done by the electric field, so that

½ m v² = q V

where q, m, and v are the charge, mass, and velocity of the ions and V is the accelerating voltage, equal to the algebraic difference between the positive voltage on the positive grid and the negative voltage on the neutralizer. The charge to mass ratio of xenon ions is 7.35 × 10⁵ C/kg.
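With this charge-to-mass ratio, the energy equation gives the ion exhaust velocity directly, and dividing by g yields the specific impulse. A quick check against the grid voltages cited below for the two Hughes spacecraft:

```python
import math

Q_OVER_M = 7.35e5    # C/kg, charge-to-mass ratio of singly ionized xenon
G0 = 9.80665         # standard gravity, m/s^2

def ion_velocity(voltage):
    """Solve (1/2) m v^2 = q V for the ion exhaust velocity."""
    return math.sqrt(2.0 * Q_OVER_M * voltage)

for volts in (750.0, 1200.0):
    v = ion_velocity(volts)
    print(f"V = {volts:6.0f} V -> v = {v:6.0f} m/s, Isp ~ {v / G0:4.0f} s")
# 750 V  -> ~33,200 m/s (Isp ~3,400 s), close to the HS-601 HP values below
# 1200 V -> ~42,000 m/s (Isp ~4,300 s), close to the HS-702 values below
```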

The HS-601 HP satellite uses 13-centimeter diameter XIPS engines to perform north-south stationkeeping and to assist the spacecraft’s gimballed momentum wheel for roll and yaw control. The accelerating voltage is about 750 volts and the ions have a velocity of 33,600 m/s. The specific impulse is 3400 seconds with a mass flow rate of 0.6 mg/s and 18 mN of thrust. Each ion thruster operates for approximately 5 hours per day and uses 500 W from the available 8 kW total spacecraft power.

The HS-702 spacecraft has higher power 25-centimeter thrusters to perform all stationkeeping maneuvers and to complement the four momentum wheels arranged in a tetrahedron configuration for attitude control. The accelerating voltage is 1200 volts, which produces an ion beam with a velocity of 42,500 m/s. The specific impulse is 4300 seconds, the mass flow rate is 4 mg/s, and the thrust is 165 mN. Each HS-702 ion thruster operates for approximately 30 minutes per day and requires 4.5 kW from the 10 to 15 kW solar array. The stationkeeping strategy maintains a tolerance of ± 0.005° that allows for the collocation of several satellites at a single orbital slot.

The HS-702 satellite has a launch mass of up to 5200 kg and an available payload mass of up to 1200 kg. The spacecraft can carry up to 118 transponders, comprising 94 active amplifiers and 24 spares. A bipropellant propulsion system is used for orbit acquisition, with a fuel capacity of 1750 kg. The XIPS thrusters need only 5 kg of xenon propellant per year, a fraction of the requirement for conventional bipropellant or arcjet systems. The HS-702 also has the option of using XIPS thrusters for orbit raising in transfer orbit to further reduce the required propellant mass budget.
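The quoted thrust, mass flow, and annual xenon consumption fit together through the relation F = ṁv. The sketch below uses the HS-702 figures from the text; the duty cycle is as quoted, while the assumption that two thrusters share the daily work is mine, for illustration:

```python
thrust = 0.165        # N, HS-702 ion thruster (from the text)
v_exhaust = 42_500.0  # m/s, ion beam velocity (from the text)

mdot = thrust / v_exhaust                    # F = mdot * v
print(f"mass flow ~ {mdot * 1e6:.1f} mg/s")  # ~3.9 mg/s, vs 4 mg/s quoted

burn_seconds_per_day = 30 * 60               # ~30 minutes per thruster per day
annual_kg = mdot * burn_seconds_per_day * 365 * 2   # assume two thrusters firing
print(f"annual xenon ~ {annual_kg:.1f} kg")  # ~5 kg/yr, as stated in the text
```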

The first commercial satellite to use ion propulsion was PAS-5, which was delivered to the PanAmSat Corporation in August 1997. PAS-5 was the first HS-601 HP model, whose xenon ion propulsion system, together with gallium arsenide solar cells and advanced battery performance, permitted the satellite to accommodate a payload twice as powerful as earlier HS-601 models while maintaining a 15 year orbital life. Four more Hughes satellites with XIPS technology were in orbit by the end of 1998. In addition, Hughes also produced a 30-centimeter xenon ion engine for NASA’s Deep Space 1 spacecraft, launched in October 1998.

Another type of ion thruster is the Hall effect ion thruster. The ions are accelerated along the axis of the thruster by crossed electric and magnetic fields. A plasma of electrons in the thrust chamber produces the electric field. A set of coils creates the magnetic field, whose magnitude is the most difficult aspect of the system to adjust. The ions attain a speed of between 15,000 and 20,000 m/s and the specific impulse is about 1800 seconds. This type of thruster has been flown on several Russian spacecraft.

SUMMARY

The demand for ever increasing satellite payloads has motivated the development of propulsion systems with greater efficiency. Typical satellites of fifteen to twenty years ago had solid apogee motors and simple monopropellant hydrazine stationkeeping thrusters. Electrically heated thrusters were designed to increase the hydrazine performance and the principle was further advanced by the innovation of the arcjet thruster. Bipropellant systems are now commonly used for increased performance and versatility.

The future will see a steady transition to ion propulsion. The improvements in fuel efficiency permit the savings in mass to be used for increasing the revenue-generating payloads (with attendant increase in solar arrays, batteries, and thermal control systems to power them), extending the lifetimes in orbit, or reducing the spacecraft mass to permit a more economical launch vehicle.

Author

Dr. Robert A. Nelson, P.E. is president of Satellite Engineering Research Corporation, a satellite engineering consulting firm in Bethesda, Maryland, a Lecturer in the Department of Aerospace Engineering at the University of Maryland and Technical Editor of Via Satellite magazine.

Antennas: The Interface with Space

by Robert A. Nelson

The antenna is the most visible part of the satellite communication system. The antenna transmits and receives the modulated carrier signal at the radio frequency (RF) portion of the electromagnetic spectrum. For satellite communication, the frequencies range from about 0.3 GHz (VHF) to 30 GHz (Ka-band) and beyond. These frequencies represent microwaves, with wavelengths on the order of one meter down to below one centimeter. High frequencies, and the corresponding small wavelengths, permit the use of antennas having practical dimensions for commercial use. This article summarizes the basic properties of antennas used in satellite communication and derives several fundamental relations used in antenna design and RF link analysis.

HISTORY OF ELECTROMAGNETIC WAVES

The quantitative study of electricity and magnetism began with the scientific research of the French physicist Charles Augustin Coulomb. In 1787 Coulomb proposed a law of force for charges that, like Sir Isaac Newton’s law of gravitation, varied inversely as the square of the distance. Using a sensitive torsion balance, he demonstrated its validity experimentally for forces of both repulsion and attraction. Like the law of gravitation, Coulomb’s law was based on the notion of “action at a distance,” wherein bodies can interact instantaneously and directly with one another without the intervention of any intermediary.

At the beginning of the nineteenth century, the electrochemical cell was invented by Alessandro Volta, professor of natural philosophy at the University of Pavia in Italy. The cell created an electromotive force, which made the production of continuous currents possible. Then in 1820 at the University of Copenhagen, Hans Christian Oersted made the momentous discovery that an electric current in a wire could deflect a magnetic needle. News of this discovery was communicated to the French Academy of Sciences two months later. The laws of force between current bearing wires were at once investigated by Andre-Marie Ampere and by Jean-Baptiste Biot and Felix Savart. Within six years the theory of steady currents was complete. These laws were also “action at a distance” laws, that is, expressed directly in terms of the distances between the current elements.

Subsequently, in 1831, the British scientist Michael Faraday demonstrated the reciprocal effect, in which a moving magnet in the vicinity of a coil of wire produced an electric current. This phenomenon, together with Oersted’s experiment with the magnetic needle, led Faraday to conceive the notion of a magnetic field. A field produced by a current in a wire interacted with a magnet. Also, according to his law of induction, a time varying magnetic field incident on a wire would induce a voltage, thereby creating a current. Electric forces could similarly be expressed in terms of an electric field created by the presence of a charge. Faraday’s field concept implied that charges and currents interacted directly and locally with the electromagnetic field, which although produced by charges and currents, had an identity of its own. This view was in contrast to the concept of “action at a distance,” which assumed bodies interacted directly with one another. Faraday, however, was a self-taught experimentalist and did not formulate his laws mathematically.

It was left to the Scottish physicist James Clerk Maxwell to establish the mathematical theory of electromagnetism based on the physical concepts of Faraday. In a series of papers published between 1856 and 1865, Maxwell restated the laws of Coulomb, Ampere, and Faraday in terms of Faraday’s electric and magnetic fields. Maxwell thus unified the theories of electricity and magnetism, in the same sense that two hundred years earlier Newton had unified terrestrial and celestial mechanics through his theory of universal gravitation. As is typical of abstract mathematical reasoning, Maxwell saw in his equations a certain symmetry that suggested the need for an additional term, involving the time rate of change of the electric field. With this generalization, Maxwell’s equations also became consistent with the principle of conservation of charge. Furthermore, Maxwell made the profound observation that his set of equations, thus modified, predicted the existence of electromagnetic waves.

Therefore, disturbances in the electromagnetic field could propagate through space. Using the values of known experimental constants obtained solely from measurements of charges and currents, Maxwell deduced that the speed of propagation was equal to the speed of light. This quantity had been measured astronomically by Ole Rømer in 1676 from the eclipses of Jupiter’s satellites and determined experimentally from terrestrial measurements by H.L. Fizeau in 1849. Maxwell then asserted that light itself was an electromagnetic wave, thereby unifying optics with electromagnetism as well. Maxwell was aided by his superior knowledge of dimensional analysis and units of measure. He was a member of the British Association committee formed in 1861 that eventually established the centimeter-gram-second (CGS) system of absolute electrical units.

Maxwell’s theory was not accepted by scientists immediately, in part because it had been derived from a bewildering collection of mechanical analogies and difficult mathematical concepts. The form of Maxwell’s equations as they are known today is due to the German physicist Heinrich Hertz. Hertz simplified them and eliminated unnecessary assumptions. Hertz’s interest in Maxwell’s theory was occasioned by a prize offered by the Berlin Academy of Sciences in 1879 for research on the relation between polarization in insulators and electromagnetic induction. By means of his experiments, Hertz discovered how to generate high frequency electrical oscillations. He was surprised to find that these oscillations could be detected at large distances from the apparatus. Up to that time, it had been generally assumed that electrical forces decreased rapidly with distance according to the Newtonian law. He therefore sought to test Maxwell’s prediction of the existence of electromagnetic waves.

In 1888, Hertz set up standing electromagnetic waves using an oscillator and spark detector of his own design and made independent measurements of their wavelength and frequency. He found that their product was indeed the speed of light. He also verified that these waves behaved according to all the laws of reflection, refraction, and polarization that applied to visible light, thus demonstrating that they differed from light only in wavelength and frequency. “Certainly it is a fascinating idea,” Hertz wrote, “that the processes in air that we have been investigating represent to us on a million-fold larger scale the same processes which go on in the neighborhood of a Fresnel mirror or between the glass plates used in exhibiting Newton’s rings.”

It was not long until the discovery of electromagnetic waves was transformed from pure physics to engineering. After learning of Hertz’s experiments through a magazine article, the young Italian engineer Guglielmo Marconi constructed the first transmitter for wireless telegraphy in 1895. Within two years he used this new invention to communicate with ships at sea. Marconi’s transmission system was improved by Karl F. Braun, who increased the power, and hence the range, by coupling the transmitter to the antenna through a transformer instead of having the antenna in the power circuit directly. Transmission over long distances was made possible by the reflection of radio waves by the ionosphere. For their contributions to wireless telegraphy, Marconi and Braun were awarded the Nobel Prize in physics in 1909. Marconi created the American Marconi Wireless Telegraphy Company in 1899, which competed directly with the transatlantic undersea cable operators.

On the early morning of April 15, 1912, a 21-year-old Marconi telegrapher in New York City by the name of David Sarnoff received a wireless message from the Marconi station in Newfoundland, which had picked up faint SOS distress signals from the steamship Titanic. Sarnoff relayed the report of the ship’s sinking to the world. This singular event dramatized the importance of the new means of communication.

Initially, wireless communication was synonymous with telegraphy. For communication over long distances the wavelengths were greater than 200 meters. The antennas were typically dipoles formed by long wires cut to a submultiple of the wavelength. Commercial radio emerged during the 1920s and 1930s. The American Marconi Company evolved into the Radio Corporation of America (RCA) with David Sarnoff as its director. Technical developments included the invention of the triode for amplification by Lee de Forest and the perfection of AM and FM receivers through the work of Edwin Howard Armstrong and others. In his book Empire of the Air: The Men Who Made Radio, Tom Lewis credits de Forest, Armstrong, and Sarnoff as the three visionary pioneers most responsible for the birth of the modern communications age.

Stimulated by the invention of radar during World War II, considerable research and development in radio communication at microwave frequencies and centimeter wavelengths was conducted in the decade of the 1940s. The MIT Radiation Laboratory was a leading center for research on microwave antenna theory and design. The basic formulation of the radio transmission formula was developed by Harald T. Friis at the Bell Telephone Laboratories and published in 1946. This equation expressed the radiation from an antenna in terms of the power flow per unit area, instead of giving the field strength in volts per meter, and is the foundation of the RF link equation used by satellite communication engineers today.

TYPES OF ANTENNAS

A variety of antenna types are used in satellite communications. The most widely used narrow beam antennas are reflector antennas. The shape is generally a paraboloid of revolution. For full earth coverage from a geostationary satellite, a horn antenna is used. Horns are also used as feeds for reflector antennas. In a direct feed reflector, such as on a satellite or a small earth terminal, the feed horn is located at the focus or may be offset to one side of the focus. Large earth station antennas have a subreflector at the focus. In the Cassegrain design, the subreflector is convex with an hyperboloidal surface, while in the Gregorian design it is concave with an ellipsoidal surface. The subreflector permits the antenna optics to be located near the base of the antenna. This configuration reduces losses because the length of the waveguide between the transmitter or receiver and the antenna feed is reduced. The system noise temperature is also reduced because the receiver looks at the cold sky instead of the warm earth. In addition, the mechanical stability is improved, resulting in higher pointing accuracy. Phased array antennas may be used to produce multiple beams or for electronic steering. Phased arrays are found on many nongeostationary satellites, such as the Iridium, Globalstar, and ICO satellites for mobile telephony.

GAIN AND HALF POWER BEAMWIDTH

The fundamental characteristics of an antenna are its gain and half power beamwidth. According to the reciprocity theorem, the transmitting and receiving patterns of an antenna are identical at a given wavelength. The gain is a measure of how much of the input power is concentrated in a particular direction. It is expressed with respect to a hypothetical isotropic antenna, which radiates equally in all directions. Thus in the direction (θ, φ), the gain is

G(θ, φ) = (dP/dΩ) / (Pin / 4π)

where Pin is the total input power and dP is the increment of radiated output power in solid angle dΩ. The gain is maximum along the boresight direction. The input power is Pin = Ea² A / η Z0, where Ea is the average electric field over the area A of the aperture, Z0 is the impedance of free space, and η is the net antenna efficiency. The output power over solid angle dΩ is dP = E² r² dΩ / Z0, where E is the electric field at distance r. But by the Fraunhofer theory of diffraction, E = Ea A / r λ along the boresight direction, where λ is the wavelength. Thus the boresight gain is given in terms of the size of the antenna by the important relation

G = η (4π / λ²) A

This equation determines the required antenna area for the specified gain at a given wavelength. The net efficiency η is the product of the aperture taper efficiency ηa, which depends on the electric field distribution over the antenna aperture (it is the square of the average divided by the average of the square), and the total radiation efficiency η* = P/Pin associated with various losses. These losses include spillover, ohmic heating, phase non-uniformity, blockage, surface roughness, and cross polarization. Thus η = ηa η*. For a typical antenna, η = 0.55. For a reflector antenna, the area is simply the projected area. Thus for a circular reflector of diameter D, the area is A = π D²/4 and the gain is

G = η (π D / λ)²

which can also be written

G = η (π D f / c)²

since c = λ f, where c is the speed of light (3 × 10⁸ m/s), λ is the wavelength, and f is the frequency. Consequently, the gain increases as the wavelength decreases or the frequency increases. For example, an antenna with a diameter of 2 m and an efficiency of 0.55 would have a gain of 8685 at the C-band uplink frequency of 6 GHz and wavelength of 0.050 m. The gain expressed in decibels (dB) is

10 log(8685) = 39.4 dB.
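These gain figures are easy to reproduce. A short sketch using the relation G = η (π D f / c)²:

```python
import math

C = 3.0e8  # speed of light, m/s

def reflector_gain(diameter_m, freq_hz, eta=0.55):
    """Boresight gain of a circular reflector: G = eta * (pi * D * f / c)**2."""
    return eta * (math.pi * diameter_m * freq_hz / C) ** 2

for f in (6.0e9, 14.0e9):
    g = reflector_gain(2.0, f)
    print(f"{f / 1e9:4.0f} GHz: G = {g:7.0f} ({10 * math.log10(g):.1f} dB)")
# 6 GHz gives 8685 (39.4 dB). At 14 GHz this prints ~47,000 (46.7 dB); the
# slightly higher 49,236 (46.9 dB) in the text comes from rounding the
# wavelength to 0.021 m.
```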

Thus the power radiated by the antenna is 8685 times more concentrated along the boresight direction than for an isotropic antenna, which by definition has a gain of 1 (0 dB). At Ku-band, with an uplink frequency of 14 GHz and wavelength 0.021 m, the gain is 49,236 or 46.9 dB. Thus at the higher frequency, the gain is higher for the same size antenna. The boresight gain G can be expressed in terms of the antenna beam solid angle ΩA that contains the total radiated power as

G = η* (4π / ΩA)

which takes into account the antenna losses through the radiation efficiency η*. The antenna beam solid angle is the solid angle through which all the power would be concentrated if the gain were constant and equal to its maximum value. The directivity does not include radiation losses and is equal to G / η*. The half power beamwidth is the angular separation between the half power points on the antenna radiation pattern, where the gain is one half the maximum value. For a reflector antenna it may be expressed

HPBW = α = k λ / D

where k is a factor that depends on the shape of the reflector and the method of illumination. For a typical antenna, k = 70° (1.22 if α is in radians). Thus the half power beamwidth decreases with decreasing wavelength and increasing diameter. For example, in the case of the 2 meter antenna, the half power beamwidth at 6 GHz is approximately 1.75°. At 14 GHz, the half power beamwidth is approximately 0.75°. As an extreme example, the half power beamwidth of the Deep Space Network 64 meter antenna in Goldstone, California is only 0.04° at X-band (8.4 GHz). The gain may be expressed directly in terms of the half power beamwidth by eliminating the factor D/λ. Thus,

G = η (π k / α)²

Inserting the typical values η = 0.55 and k = 70°, one obtains

G = 27,000 / (α°)²

where α° is expressed in degrees. This is a well known engineering approximation for the gain (expressed as a numeric). It shows directly how the size of the beam automatically determines the gain. Although this relation was derived specifically for a reflector antenna with a circular beam, similar relations can be obtained for other antenna types and beam shapes. The value of the numerator will be somewhat different in each case. For example, for a satellite antenna with a circular spot beam of diameter 1°, the gain is 27,000 or 44.3 dB. For a Ku-band downlink at 12 GHz, the required antenna diameter determined from either the gain or the half power beamwidth is 1.75 m. A horn antenna would be used to provide full earth coverage from geostationary orbit, where the angular diameter of the earth is 17.4°. Thus, the required gain is 89.2 or 19.5 dB. Assuming an efficiency of 0.70, the horn diameter for a C-band downlink frequency of 4 GHz would be 27 cm.
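The beamwidth form of the gain equation invites the same kind of check. The sketch below reproduces the 1° spot beam and the full-earth-coverage examples, then sizes the horn from G = η (π D / λ)²:

```python
import math

def gain_from_beamwidth(alpha_deg):
    """Engineering approximation for a circular beam: G ~ 27,000 / alpha**2."""
    return 27_000.0 / alpha_deg ** 2

for alpha in (1.0, 17.4):                 # 1 deg spot beam; earth from GEO
    g = gain_from_beamwidth(alpha)
    print(f"alpha = {alpha:4.1f} deg: G = {g:7.1f} ({10 * math.log10(g):.1f} dB)")

# Horn diameter for the full-earth beam at 4 GHz (lambda = 0.075 m, eta = 0.70)
eta, lam = 0.70, 0.075
g_earth = gain_from_beamwidth(17.4)
d = (lam / math.pi) * math.sqrt(g_earth / eta)
print(f"horn diameter ~ {100 * d:.0f} cm")   # ~27 cm, as in the text
```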

EIRP AND G/T

For the RF link budget, the two required antenna properties are the equivalent isotropic radiated power (EIRP) and the “figure of merit” G/T. These quantities are the properties of the transmit antenna and receive antenna that appear in the RF link equation and are calculated at the transmit and receive frequencies, respectively. The equivalent isotropic radiated power (EIRP) is the power radiated equally in all directions that would produce a power flux density equivalent to the power flux density of the actual antenna. The power flux density F is defined as the radiated power P per unit area S, or F = P/S. But P = η* Pin, where Pin is the input power and η* is the radiation efficiency, and S = d² ΩA, where d is the slant range to the center of coverage and ΩA is the solid angle containing the total power. Thus with some algebraic manipulation,

F = η* (4π / ΩA)(Pin / 4π d²) = G Pin / 4π d²

Since the surface area of a sphere of radius d is 4π d², the flux density in terms of the EIRP is

F = EIRP / 4π d²

Equating these two expressions, one obtains

EIRP = G Pin

Therefore, the equivalent isotropic radiated power is the product of the antenna gain of the transmitter and the power applied to the input terminals of the antenna. The antenna efficiency is absorbed in the definition of gain. The “figure of merit” is the ratio of the antenna gain of the receiver G and the system temperature T. The system temperature is a measure of the total noise power and includes contributions from the antenna and the receiver. Both the gain and the system temperature must be referenced to the same point in the chain of components in the receiver system. The ratio G/T is important because it is an invariant that is independent of the reference point where it is calculated, even though the gain and the system temperature individually are different at different points.

ANTENNA PATTERN

Since electromagnetic energy propagates in the form of waves, it spreads out through space due to the phenomenon of diffraction. Individual waves combine both constructively and destructively to form a diffraction pattern that manifests itself in the main lobe and side lobes of the antenna. The antenna pattern is analogous to the “Airy rings” produced by visible light when passing through a circular aperture. These diffraction patterns were studied by Sir George Biddell Airy, Astronomer Royal of England during the nineteenth century, to investigate the resolving power of a telescope. The diffraction pattern consists of a central bright spot surrounded by concentric bright rings with decreasing intensity. The central spot is produced by waves that combine constructively and is analogous to the main lobe of the antenna. The spot is bordered by a dark ring, where waves combine destructively, that is analogous to the first null. The surrounding bright rings are analogous to the side lobes of the antenna pattern. As noted by Hertz, the only difference in this behavior is the size of the pattern and the difference in wavelength. Within the main lobe of an axisymmetric antenna, the gain G(θ) in a direction θ with respect to the boresight direction may be approximated by the expression

G(θ) = G − 12 (θ / α)²

where G is the boresight gain. Here the gains are expressed in dB. Thus at the half power points to either side of the boresight direction, where θ = α/2, the gain is reduced by a factor of 2, or 3 dB. The details of the antenna, including its shape and illumination, are contained in the value of the half power beamwidth α. This equation would typically be used to estimate the antenna loss due to a small pointing error. The gain of the side lobes can be approximated by an envelope. For new earth station antennas with D/λ > 100, the side lobes must fall within the envelope 29 − 25 log θ by international regulation. This envelope is determined by the requirement of minimizing interference between neighboring satellites in the geostationary arc with a nominal 2° spacing.
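The quadratic rolloff formula gives a quick estimate of pointing loss. For instance, assuming a hypothetical 0.2° pointing error on a 1° beam:

```python
def pointing_loss_db(theta_deg, hpbw_deg):
    """Gain loss (dB) off boresight: G(theta) = G - 12 * (theta / alpha)**2."""
    return 12.0 * (theta_deg / hpbw_deg) ** 2

print(f"{pointing_loss_db(0.2, 1.0):.2f} dB")  # ~0.48 dB for a 0.2 deg error
print(f"{pointing_loss_db(0.5, 1.0):.2f} dB")  # 3.00 dB at the half power point
```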

TAPER

The gain pattern of a reflector antenna depends on how the antenna is illuminated by the feed. The variation in electric field across the antenna diameter is called the antenna taper. The total antenna solid angle containing all of the radiated power, including side lobes, is

ΩA = η* (4π / G) = (1/ηa) (λ² / A)

where ha is the aperture taper efficiency and h * is the radiation efficiency associated with losses. The beam efficiency is defined as

ε = ΩM / ΩA

where ΩM is the solid angle of the main lobe. The values of ηa and ε are calculated from the electric field distribution in the aperture plane and the antenna radiation pattern, respectively. For a theoretically uniform illumination, the electric field is constant and the aperture taper efficiency is 1. If the feed is designed to cause the electric field to decrease with distance from the center, then the aperture taper efficiency decreases but the proportion of power in the main lobe increases. In general, maximum aperture taper efficiency occurs for a uniform distribution, but maximum beam efficiency occurs for a highly tapered distribution.

For uniform illumination, the half power beamwidth is 58.4° λ/D and the first side lobe is 17.6 dB below the peak intensity in the boresight direction. In this case, the main lobe contains about 84 percent of the total radiated power and the first side lobe contains about 7 percent. If the electric field amplitude has a simple parabolic distribution, falling to zero at the reflector edge, then the aperture taper efficiency becomes 0.75 but the fraction of power in the main lobe increases to 98 percent. The half power beamwidth is now 72.8° λ/D and the first side lobe is 24.6 dB below peak intensity. Thus, although the aperture taper efficiency is less, more power is contained in the main lobe, as indicated by the larger half power beamwidth and lower side lobe intensity.

If the electric field decreases to a fraction C of its maximum value, called the edge taper, the reflector will not intercept all the radiation from the feed. There will be energy spillover with a corresponding efficiency of approximately 1 − C². However, as the spillover efficiency decreases, the aperture taper efficiency increases. The taper is chosen to maximize the illumination efficiency, defined as the product of aperture taper efficiency and spillover efficiency. The illumination efficiency reaches a maximum value for an optimum combination of taper and spillover. For a typical antenna, the optimum edge taper C is about 0.316, or −10 dB (20 log C). With this edge taper and a parabolic illumination, the aperture taper efficiency is 0.92, the spillover efficiency is 0.90, the half power beamwidth is 65.3° λ/D, and the first side lobe is 22.3 dB below peak. Thus the overall illumination efficiency is 0.83 instead of 0.75. The beam efficiency is about 95 percent.
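The efficiencies quoted above can be reproduced with a simple illumination model. The sketch below assumes a parabolic-on-a-pedestal aperture distribution, E(r) = C + (1 − C)(1 − r²) on a unit circular aperture, which is one standard model consistent with the cases cited; the aperture efficiency integral is evaluated in closed form and the spillover efficiency is approximated as 1 − C²:

```python
def taper_efficiencies(c):
    """Aperture taper and spillover efficiencies for the illumination
    E(r) = C + (1 - C)(1 - r^2) over a unit circular aperture."""
    # eta_a = |integral of E dA|^2 / (A * integral of E^2 dA), in closed form
    eta_a = (1 + c) ** 2 / (4 * (c * c + c * (1 - c) + (1 - c) ** 2 / 3))
    eta_s = 1 - c * c                  # approximate spillover efficiency
    return eta_a, eta_s

for c in (0.0, 0.316):                 # parabolic-to-zero; -10 dB edge taper
    eta_a, eta_s = taper_efficiencies(c)
    print(f"C = {c:5.3f}: eta_a = {eta_a:.2f}, eta_s = {eta_s:.2f}, "
          f"illumination = {eta_a * eta_s:.2f}")
# C = 0 reproduces the aperture taper efficiency of 0.75; C = 0.316 gives
# ~0.92 and ~0.90, for an overall illumination efficiency of ~0.83.
```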

COVERAGE AREA

The gain of a satellite antenna is designed to provide a specified area of coverage on the earth. The area of coverage within the half power beamwidth is

S = d² Ω

where d is the slant range to the center of the footprint and Ω is the solid angle of a cone that intercepts the half power points, which may be expressed in terms of the angular dimensions of the antenna beam. Thus

Ω = K α β

where α and β are the principal plane half power beamwidths in radians and K is a factor that depends on the shape of the coverage area. For a square or rectangular area of coverage, K = 1, while for a circular or elliptical area of coverage, K = π/4. The boresight gain may be approximated in terms of this solid angle by the relation

G = η′ (4π / Ω) = (η′ / K)(41,253 / α° β°)

where α° and β° are in degrees and η′ is an efficiency factor that depends on the half power beamwidth. Although η′ is conceptually distinct from the net efficiency η, in practice these two efficiencies are roughly equal for a typical antenna taper. In particular, for a circular beam this equation is equivalent to the earlier expression in terms of α if η′ = (π k / 4)² η, with k expressed in radians. If the area of the footprint S is specified, then the size of a satellite antenna increases in proportion to the altitude. For example, the altitude of Low Earth Orbit is about 1000 km and the altitude of Medium Earth Orbit is about 10,000 km. Thus to cover the same area on the earth, the antenna diameter of a MEO satellite must be about 10 times that of a LEO satellite and the gain must be 100 times, or 20 dB, as great.

On the Iridium satellite there are three main mission L-band phased array antennas. Each antenna has 106 elements, distributed into 8 rows with element separations of 11.5 cm and row separations of 9.4 cm over an antenna area of 188 cm × 86 cm. The pattern produced by each antenna is divided into 16 cells by a two-dimensional Butler matrix power divider, resulting in a total of 48 cells over the satellite coverage area. The maximum gain for a cell at the perimeter of the coverage area is 24.3 dB.

From geostationary orbit the antenna size for a small spot beam can be considerable. For example, the spacecraft for the Asia Cellular Satellite System (ACeS), being built by Lockheed Martin for mobile telephony in Southeast Asia, has two unfurlable mesh antenna reflectors at L-band that are 12 meters across and have an offset feed. Having different transmit and receive antennas minimizes passive intermodulation (PIM) interference that in the past has been a serious problem for high power L-band satellites using a single reflector. The antenna separation attenuates the PIM products by 50 to 70 dB.
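The shaped-coverage gain formula is easy to exercise numerically. The sketch below evaluates a rectangular 6° × 3° beam (roughly CONUS from geostationary orbit, anticipating the Galaxy 5 example in the next section), assuming a typical η′ of 0.55:

```python
import math

def coverage_gain(alpha_deg, beta_deg, k_shape, eta_prime=0.55):
    """G = (eta' / K) * 41,253 / (alpha * beta), beamwidths in degrees."""
    return (eta_prime / k_shape) * 41_253.0 / (alpha_deg * beta_deg)

g = coverage_gain(6.0, 3.0, k_shape=1.0)              # rectangular beam, K = 1
print(f"G ~ {g:.0f} ({10 * math.log10(g):.1f} dB)")   # ~1260 (~31 dB)
# To hold the same footprint from 10x the altitude (MEO versus LEO), the
# beamwidths shrink by 10x, so the gain must grow by 100x (20 dB), as noted.
```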

SHAPED BEAMS

Often the area of coverage has an irregular shape, such as one defined by a country or continent. Until recently, the usual practice has been to create the desired coverage pattern by means of a beam forming network. Each beam has its own feed and illuminates the full reflector area. The superposition of all the individual circular beams produces the specified shaped beam.

For example, the C-band transmit hemi/zone antenna on the Intelsat 6 satellite is 3.2 meters in diameter. This is the largest diameter solid circular aperture that fits within an Ariane 4 launch vehicle fairing envelope. The antenna is illuminated by an array of 146 Potter horns. The beam diameter α for each feed is 1.6° at 3.7 GHz. By appropriately exciting the beam forming network, the specified areas of coverage are illuminated. For 27 dB spatial isolation between zones reusing the same spectrum, the minimum spacing s is given by the rule of thumb s ≥ 1.4 α, so that s ≥ 2.2°. This meets the specification of s = 2.5° for Intelsat 6.

Another example is provided by the HS-376 dual-spin stabilized Galaxy 5 satellite, operated by PanAmSat. The reflector diameter is 1.80 m. There are two linear polarizations, horizontal and vertical. In a given polarization, the contiguous United States (CONUS) might be covered by four beams, each with a half power beamwidth of 3° at the C-band downlink frequency of 4 GHz. From geostationary orbit, the angular dimensions of CONUS are approximately 6° × 3°. For this rectangular beam pattern, the maximum gain is about 31 dB. At edge of coverage, the gain is 3 dB less. With a TWTA output power of 16 W (12 dBW), a waveguide loss of 1.5 dB, and an assumed beam-forming network loss of 1 dB, the maximum EIRP is 40.5 dBW.

The shaped reflector represents a new technology. Instead of illuminating a conventional parabolic reflector with multiple feeds in a beam-forming network, there is a single feed that illuminates a reflector with an undulating shape that provides the required region of coverage. The advantages are lower spillover loss, a significant reduction in mass, lower signal losses, and lower cost. By using large antenna diameters, the rolloff along the perimeter of the coverage area can be made sharp. The practical application of shaped reflector technology has been made possible by the development of composite materials with extremely low coefficients of thermal distortion and by the availability of sophisticated computer software programs necessary to analyze the antenna. One widely used antenna software package is called GRASP, produced by TICRA of Copenhagen, Denmark. This program calculates the gain from first principles using the theory of physical optics.
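The Galaxy 5 EIRP quoted above follows from a simple decibel budget. A sketch using only the values in the text:

```python
import math

# EIRP budget for the Galaxy 5 CONUS example (all values from the text)
p_twta_dbw = 10 * math.log10(16.0)   # 16 W TWTA output -> ~12.0 dBW
waveguide_loss_db = 1.5
bfn_loss_db = 1.0                    # beam-forming network loss
peak_gain_db = 31.0                  # maximum gain of the 6 x 3 degree pattern

eirp = p_twta_dbw - waveguide_loss_db - bfn_loss_db + peak_gain_db
print(f"maximum EIRP ~ {eirp:.1f} dBW")   # ~40.5 dBW, as stated
```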

SUMMARY

The gain of an antenna is determined by the intended area of coverage. The gain at a given wavelength is achieved by appropriately choosing the size of the antenna. The gain may also be expressed in terms of the half power beamwidth. Reflector antennas are generally used to produce narrow beams for geostationary satellites and earth stations. The efficiency of the antenna is optimized by the method of illumination and choice of edge taper. Phased array antennas are used on many LEO and MEO satellites. New technologies include large, unfurlable antennas for producing small spot beams from geostationary orbit and shaped reflectors for creating a shaped beam with only a single feed.

Author

Dr. Robert A. Nelson, P.E. is president of Satellite Engineering Research Corporation, a satellite engineering consulting firm in Bethesda, Maryland, a Lecturer in the Department of Aerospace Engineering at the University of Maryland and Technical Editor of Via Satellite magazine.

Iridium : From Concept to Reality

On the 23rd day of this month, a revolutionary communication system will begin service to the public. Iridium will be the first mobile telephony system to offer voice and data services to and from handheld telephones anywhere in the world. Industry analysts have eagerly awaited this event, as they have debated the nature of the market, the economics, and the technical design.

As with any complex engineering system, credit must be shared among many people. However, the three key individuals who are recognized as having conceived and designed the system are Bary Bertiger, Dr. Raymond Leopold, and Kenneth Peterson of Motorola, creators of the Iridium system. The inspiration was an occasion that has entered into the folklore of Motorola. (The story, as recounted here, was the subject of a Wall Street Journal profile on Monday, December 16, 1996.)

On a vacation to the Bahamas in 1985, Bertiger’s wife, Karen, wanted to place a cellular telephone call back to her home near the Motorola facility in Chandler, AZ to close a real-estate transaction. After attempting to make the connection without success, she asked Bertiger why it wouldn’t be possible to create a telephone system that would work anywhere, even in the remote Caribbean outback.

Bertiger took the problem back to colleagues Leopold and Peterson at Motorola. Numerous alternative terrestrial designs were discussed and abandoned. In 1987 research began on a constellation of low earth orbiting satellites that could communicate directly with telephones on the ground and with one another — a kind of inverted cellular telephone system. But as they left work one day in 1988, Leopold proposed a crucial element of the design. The satellites would be coordinated by a network of “gateway” earth stations connecting the satellite system to existing telephone systems. They quickly agreed that this was the sought-after solution and immediately wrote down an outline using the nearest available medium — a whiteboard in a security guard’s office.

Originally, the constellation was to have consisted of 77 satellites. The constellation was based on a study by William S. Adams and Leonard Rider of the Aerospace Corporation, who published a paper in The Journal of the Astronautical Sciences in 1987 on the configurations of circular, polar satellite constellations at various altitudes providing continuous, full-earth coverage with a minimum number of satellites. However, by 1992 several modifications had been made to the system, including a reduction in the number of satellites from 77 to 66 by the elimination of one orbital plane.

The name Iridium was suggested by a Motorola cellular telephone system engineer, Jim Williams, from the Motorola facility near Chicago. The 77-satellite constellation reminded him of the electrons that encircle the nucleus in the classical Bohr model of the atom. When he consulted the periodic table of the elements to discover which atom had 77 electrons, he found Iridium — a creative name that has a nice ring. Fortunately, the system had not yet been scaled back to 66 satellites, or else he might have suggested the name Dysprosium.

The project was not adopted by senior management immediately. On a visit to the Chandler facility, however, Motorola chairman Robert Galvin learned of the idea and was briefed by Bertiger. Galvin at once endorsed the plan and at a subsequent meeting persuaded Motorola’s president John Mitchell. Ten years have elapsed since this go-ahead decision, and thirteen years since Bertiger’s wife posed the question.
In December 1997 the first Iridium test call was delivered by orbiting satellites. Shortly after completion of the constellation in May 1998, a demonstration was conducted for franchise owners and guests. The new system was ready for operation, and Iridium is now on the threshold of beginning service.

REGULATORY HURDLES

In June, 1990 Motorola announced the development of its Iridium satellite system at simultaneous press conferences in Beijing, London, Melbourne, and New York. The Iridium system was described in an application to the Federal Communications Commission (FCC) filed in December of that year, in a supplement of February 1991, and an amendment in August 1992.

At the time, an internationally allocated spectrum for this service by nongeostationary satellites did not even exist. Thus Motorola proposed to offer Radio Determination Satellite Service (RDSS) in addition to mobile digital voice and data communication so that it might qualify for use of available spectrum in the RDSS L-band from 1610 to 1626.5 MHz. A waiver was requested to provide both two-way digital voice and data services on a co-primary basis with RDSS.

Following the submission of Motorola’s Iridium proposal, the FCC invited applications from other companies for systems to share this band for the new Mobile Satellite Service (MSS). An additional four proposals for nongeostationary mobile telephony systems were submitted to meet the June 3, 1991 deadline, including Loral/Qualcomm’s Globalstar, TRW’s Odyssey, MCHI’s Ellipsat, and Constellation Communications’ Aries. Collectively, these nongeostationary satellite systems became known as the “Big LEOs”. The American Mobile Satellite Corporation (AMSC) also sought to expand existing spectrum for its geostationary satellite into the RDSS band.

At the 1992 World Administrative Radio Conference (WARC-92) in Torremolinos, Spain, L-band spectrum from 1610 to 1626.5 MHz was internationally allocated for MSS for earth-to-space (uplink) on a primary basis in all three ITU regions. WARC-92 also allocated to MSS the band 1613.8 to 1626.5 MHz on a secondary basis and spectrum in S-band from 2483.5 to 2500 MHz on a primary basis for space-to-earth (downlink).

In early 1993 the FCC adopted a conforming domestic spectrum allocation and convened a Negotiated Rulemaking proceeding. This series of meetings was attended in Washington, DC by representatives of the six applicants and Celsat, which had expressed an intention to file an application for a geostationary satellite but did not meet the deadline. The purpose of the proceeding was to provide the companies with the opportunity to devise a frequency-sharing plan and make recommendations.

These deliberations were lively, and at times contentious, as Motorola defended its FDMA/TDMA multiple access design against the CDMA technologies of the other participants. With frequency division multiple access (FDMA), the available spectrum is subdivided into smaller bands allocated to individual users. Iridium extends this multiple access scheme further by using time division multiple access (TDMA) within each FDMA sub-band. Each user is assigned two time slots — one for sending and one for receiving — within a repetitive time frame. During each time slot, the digital data are burst between the mobile handset and the satellite. With code division multiple access (CDMA), the signal from each user is modulated by a pseudorandom noise (PRN) code. All users share the same spectrum.
At the receiver, the desired signal is extracted from the entire population of signals by multiplying by a replica code and performing an autocorrelation process. The key to the success of this method is the existence of sufficient PRN codes that appear to be mathematically orthogonal to one another. Major advantages cited by CDMA proponents are inherently greater capacity and higher spectral efficiency. Frequency reuse clusters can be smaller because interference is reduced between neighboring cells.

In April, 1993 a majority report of Working Group 1 of the Negotiated Rulemaking Committee recommended full band sharing across the entire MSS band by all systems including Iridium. Coordination would be based on an equitable allocation of interference noise produced by each system. The FDMA/TDMA system would be assigned one circular polarization and the CDMA systems would be assigned the opposite polarization. This approach required that each system would be designed with sufficient margin to tolerate the level of interference received from other licensed systems.

Motorola issued a minority report which stated that the Iridium system must have its own spectrum allocation. It proposed partitioning of the MSS L-band spectrum into two equal 8.25 MHz segments for the FDMA/TDMA and CDMA access technologies, with the upper portion being used by the FDMA/TDMA system where it would be sufficiently isolated from neighboring frequencies used by radio astronomy, GPS, and Glonass.

Faced with this impasse, the FCC in January 1994 adopted rulemaking proposals which allocated the upper 5.15 MHz of the MSS L-band spectrum to the sole FDMA/TDMA applicant, Iridium, and assigned the remaining 11.35 MHz to be shared by multiple CDMA systems. However, if only one CDMA system were implemented, the 11.35 MHz allotment would be reduced to 8.25 MHz, leaving 3.10 MHz available for additional spectrum to Iridium or a new applicant. The response to the Commission’s proposals from the Big LEO applicants was generally favorable. Without this compromise, the alternative would have been to hold a lottery or auction to allocate the spectrum.

The Iridium system was designed to operate with the full spectrum allocation. However, with 5.15 MHz, the system is a viable business proposition. The additional 3.10 MHz, should it become available, further adds to the system’s attractiveness.

The FCC also proposed that the MSS spectrum could be used only by Low Earth Orbit (LEO) and Medium Earth Orbit (MEO) satellite systems. Therefore, the geostationary orbit (GEO) systems of AMSC and Celsat would not be permitted in this band. To qualify for a Big LEO license, the Commission proposed that the service must be global (excluding the poles) and that companies must meet stringent financial standards.

In October, 1994 the FCC issued its final rules for MSS, closely following language of the January proposed rulemaking. However, it allowed the CDMA systems to share the entire 16.5 MHz of downlink spectrum in S-band. The Commission gave the Big LEO applicants a November 16 deadline to amend their applications to conform to the new licensing rules. On January 31, 1995 the FCC granted licenses to Iridium, Globalstar, and Odyssey but withheld its decision on Ellipsat and Aries pending an evaluation of their financial qualifications.
The latter companies finally received licenses in June last year, while in December TRW dropped its Odyssey system in favor of partnership with ICO, the international subsidiary of Inmarsat which entered the competition in 1995.

Outside the United States, Iridium must obtain access rights in each country where service is provided. The company expects to have reached agreements with 90 priority countries that represent 85% of its business plan by the start of service this month. Altogether, Iridium is seeking access to some 200 countries through an arduous negotiating process.

FINANCING

Iridium LLC was established by Motorola in December, 1991 to build and operate the Iridium system, with Robert W. Kinzie as its chairman. In December, 1996 Edward F. Staiano was appointed Vice Chairman and CEO. Iridium LLC, based in Washington, DC, is a 19-member international consortium of strategic investors representing telecommunication and industrial companies, including a 25 percent stake by its prime contractor, Motorola, Inc.

In August 1993, Motorola and Iridium LLC announced they had completed the first-round financing of the Iridium system with $800 million in equity. The second round was completed in September, 1994, bringing the total to $1.6 billion. In July of last year $800 million in debt financing was completed. Iridium World Communications, Ltd., a Bermuda company, was formed to serve as a vehicle for public investment in the Iridium system. In June 1997 an initial $240 million public offering was made on the NASDAQ Stock Exchange.

TECHNICAL DESCRIPTION

The Iridium constellation consists of 66 satellites in near-polar circular orbits inclined at 86.4° at an altitude of 780 km. The satellites are distributed into six planes separated by 31.6° around the equator with eleven satellites per plane. There is also one spare satellite in each plane.

Starting on May 5, 1997, the entire constellation was deployed within twelve months on launch vehicles from three continents: the U.S. Delta II, the Russian Proton, and the Chinese Long March. The final complement of five 700 kg (1500 lb) satellites was launched aboard a Delta II rocket on May 17. With a satellite lifetime of 5 to 8 years, it is expected that the replenishment rate will be about a dozen satellites per year after the second year of operation.

The altitude was specified to be between 370 km (200 nmi) and 1100 km (600 nmi). The engineers wanted a minimum altitude of 370 km so that the satellite would be above the residual atmosphere, which would have diminished lifetime without extensive stationkeeping, and a maximum altitude of 1100 km so that the satellite would be below the Van Allen radiation environment, which would require shielding.

Each satellite covers a circular area roughly the size of the United States with a diameter of about 4400 km, having an elevation angle of 8.2° at the perimeter and subtending an angle of 39.8° with respect to the center of the earth. The coverage area is divided into 48 cells. The satellite has three main beam phased array antennas, each of which serves 16 cells. The period of revolution is approximately 100 minutes, so that a given satellite is in view about 9 minutes. The user is illuminated by a single cell for about one minute. Complex protocols are required to provide continuity of communication seamlessly as handover is passed from cell to cell and from satellite to satellite.
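The coverage figures quoted above follow from the geometry of a spherical earth. A sketch, using the standard relation between altitude, minimum elevation angle, and earth central angle:

```python
import math

RE = 6378.0      # earth equatorial radius, km
H = 780.0        # Iridium altitude, km
ELEV_DEG = 8.2   # elevation angle at the edge of coverage

# Earth central angle to the edge of coverage:
#   gamma = acos( (Re / (Re + h)) * cos(elev) ) - elev
elev = math.radians(ELEV_DEG)
gamma = math.acos((RE / (RE + H)) * math.cos(elev)) - elev

print(f"angle subtended at earth center = {2 * math.degrees(gamma):.1f} deg")
print(f"coverage diameter ~ {2 * gamma * RE:.0f} km (arc length)")
# ~39.8 deg and ~4,430 km, in good agreement with the figures quoted above.
```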
The communications link requires 3.5 million lines of software, while an additional 14 million lines of code are required for navigation and switching. As satellites converge near the poles, redundant beams are shut off. There are approximately 2150 active beams over the globe.

The total spectrum of 5.15 MHz is divided into 120 FDMA channels, each with a bandwidth of 31.5 kHz and a guardband of 10.17 kHz to minimize intermodulation effects and two guardbands of 37.5 kHz to allow for Doppler frequency shifts. Within each FDMA channel, there are four TDMA slots in each direction (uplink and downlink). The coded data burst rate with QPSK modulation and raised cosine filtering is 50 kbps (corresponding to an occupied bandwidth of 1.26 × 50 kbps / 2 = 31.5 kHz). Each TDMA slot has length 8.28 ms in a 90 ms frame. The supported vocoder information bit rate is 2.4 kbps for digital voice, fax, and data. The total information bit rate, with rate 3/4 forward error correction (FEC) coding, is 3.45 kbps (0.75 × (8.28 ms/90 ms) × 50 kbps = 3.45 kbps), which includes overhead and source encoding, exclusive of FEC coding, for weighting of parameters in importance of decoding the signal. The bit error ratio (BER) at threshold is nominally 0.01 but is much better 99 percent of the time.

The vocoder is analogous to a musical instrument synthesizer. In this case, the “instrument” is the human vocal tract. Instead of performing analogue-to-digital conversion using pulse code modulation (PCM) with a nominal data rate of 64 kbps (typical of terrestrial toll-quality telephone circuits), the vocoder transmits a set of parameters that emulate speech patterns, vowel sounds, and acoustic level. The resulting bit rate of 2.4 kbps is thus capable of transmitting clear, intelligible speech comparable to the performance of high quality terrestrial cellular telephones, but not quite the quality of standard telephones.

The signal strength has a nominal 16 dB link margin. This margin is robust for users in exterior urban environments, but is not sufficient to penetrate buildings. Satellite users will have to stand near windows or go outside to place a call. Handover from cell to cell within the field of view of an orbiting satellite is imperceptible. Handover from satellite to satellite every nine minutes may occasionally be detectable by a quarter-second gap. Each satellite has a capacity of about 1100 channels. However, the actual number of users within a satellite coverage area will vary and the distribution of traffic among cells is not symmetrical.

CALL ROUTING

The Iridium satellites are processing satellites that route a call through the satellite constellation. The system is coordinated by 12 physical gateways distributed around the world, although in principle only a single gateway would be required for complete global coverage. Intersatellite links operate in Ka-band from 23.18 to 23.38 GHz and satellite-gateway links operate in Ka-band at 29.1 to 29.3 GHz (uplink) and 19.4 to 19.6 GHz (downlink). For example, a gateway in Tempe, Arizona serves North America and Central America; a gateway in Italy serves Europe and Africa; a gateway in India serves southern Asia and Australia. There are 15 regional franchise owners, some of whom share gateway facilities. The constellation is managed from a new satellite network operations center in Lansdowne, Virginia.
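The channel arithmetic quoted earlier in this section is self-consistent, as a few lines of Python confirm (all values from the text):

```python
burst_rate = 50_000.0                    # coded burst rate, bits/s (QPSK)
occupied_bw = 1.26 * burst_rate / 2.0    # raised cosine filtering
print(f"occupied bandwidth = {occupied_bw / 1e3:.1f} kHz")   # 31.5 kHz

slot_s, frame_s, fec_rate = 8.28e-3, 90e-3, 0.75
info_rate = fec_rate * (slot_s / frame_s) * burst_rate
print(f"information rate  = {info_rate / 1e3:.2f} kbps")     # ~3.45 kbps
```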
As described by Craig Bond, Iridium's vice president for marketing development, the user dials an international 13-digit telephone number on the handset, just as one would on a standard telephone, and presses the "send" button to access the nearest satellite. The system identifies the user's position and authenticates the handset at the nearest gateway against the home location register (HLR). Once the user is validated, the call is sent up to the satellite, routed through the constellation, and dropped to the gateway closest to the destination, where it is completed over standard terrestrial circuits. For a call from a fixed location to a handset, the process is reversed: after the call is placed, the system identifies the recipient's location and the handset rings, no matter where the user is on the earth.

It is projected that about 95 percent of the traffic will be between a mobile handset and a telephone at a fixed location. The remaining 5 percent represents calls placed from one handset to another handset anywhere in the world; in this case, the call "never touches the ground" until it is received by the handset of the intended recipient.

By comparison, a "bent pipe" satellite system, such as Globalstar, requires that a single satellite see both the user and the nearest gateway simultaneously, so many more gateways are needed. For example, in Africa Globalstar will require about a dozen gateways, while Iridium requires none at all. Globalstar advocates would counter that this is not a disadvantage, since their system places the complexity on the ground rather than in the satellite and offers greater flexibility in building and upgrading the system.

HANDSET

The Iridium handsets are built by Motorola and Kyocera, a leading manufacturer of cellular telephones in Japan. The handsets permit both satellite access and terrestrial cellular roaming within the same unit, and each includes a Subscriber Identity Module (SIM) card. Major regional cellular standards are interchanged by inserting a Cellular Cassette. Paging options are available, as well as separate compact Iridium pagers. The price for a typical configuration will be around $3,000.

The handsets will be available through service providers and cellular roaming partners. In June, Iridium finalized its 200th local distribution agreement. Information on how to obtain Iridium telephones will be advertised widely, and customers will also be actively solicited through credit card and travel services memberships. Distribution and setup of the handsets will typically be handled by sales representatives who deal with the customer directly. Rental programs will also be available to give potential customers the opportunity to try out the system on a temporary basis.

MARKET

Iridium has conducted extensive research to measure the market. As described by Iridium's Bond, the intended market can be divided into two segments: the vertical market and the horizontal market. The vertical market consists of customers in remote areas who require satellites for their communications needs because they cannot access conventional terrestrial cellular networks. This market includes personnel in the petroleum, gas, mining, and shipping industries. It also includes the branches of the U.S. military; in fact, the U.S. government has built a dedicated gateway in Hawaii capable of serving 120,000 users so that it can access the Iridium system at a lower per-minute charge.
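Returning briefly to the call routing described above: the essential idea, hopping across intersatellite links until reaching a satellite that can drop the call to the destination gateway, can be caricatured in a few lines. This is a toy sketch only; the four-satellite topology and all names are invented for illustration and bear no relation to Iridium's actual software:

    from collections import deque

    # Invented intersatellite-link topology: satellite -> visible neighbors.
    ISL = {
        "sat1": ["sat2", "sat4"],
        "sat2": ["sat1", "sat3"],
        "sat3": ["sat2", "sat4"],
        "sat4": ["sat1", "sat3"],
    }
    # Satellites currently in view of a gateway (hypothetical).
    GATEWAY_FOR = {"sat3": "gateway_italy"}

    def route_call(entry_sat, dest_gateway):
        # Breadth-first search for the shortest chain of intersatellite
        # hops ending at a satellite that sees the destination gateway.
        seen, queue = {entry_sat}, deque([[entry_sat]])
        while queue:
            path = queue.popleft()
            if GATEWAY_FOR.get(path[-1]) == dest_gateway:
                return path + [dest_gateway]
            for nxt in ISL[path[-1]]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(path + [nxt])
        return None

    print(route_call("sat1", "gateway_italy"))
    # ['sat1', 'sat2', 'sat3', 'gateway_italy']

The real system must, of course, also cope with a constellation in constant motion, handing the route over as satellites rise and set; the sketch shows only the shortest-path idea.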
The horizontal market is represented by the international business traveler. This type of customer wants to keep in contact with the corporate office no matter where he or she is in the world. Although mindful of the satellite link, this customer does not really care how the telephone system works, as long as it is easily and reliably available.

It has been consistently estimated that the total price for satellite service will be about $3.00 per minute, roughly 25 to 35 percent higher than normal cellular roaming rates plus long distance charges. When the roaming cellular capability is used, the price will be about $1.00 to $1.25 per minute. The expected break-even market for Iridium is about 600,000 customers globally, assuming an undisclosed average usage per customer per month. The company hopes to recover its $5 billion investment within one year, or by the fourth quarter of 1999. Based on independent research, Iridium anticipates a customer base of 5 million by 2002.

PROBLEMS

As might be expected for so complex an undertaking, the deployment of the constellation and the manufacture of the handsets have not been without glitches. So far, a total of nine spacecraft have suffered in-orbit failures, and Iridium has announced delays in the development of the handset software.

Of the 72 satellites launched, including spares, one lost its stationkeeping fuel when a thruster did not shut off, one was damaged as it was released from a Delta II launch vehicle, and three had reaction wheel problems. In July two more satellites failed because of hardware problems. Delta II and Long March rockets, scheduled to begin a maintenance program of launching additional spares, were retargeted in August to deploy nine replacement birds to the orbital planes where they are needed.

Investors are also nervous about final software upgrades to the handsets. Following alpha trials last month, beta testing of the units was scheduled to commence within one week of the September 23 commercial activation date. The Motorola handsets are expected to be available to meet initial demand, but those made by Kyocera may not be ready until later. [Note added: On September 9, Iridium announced that the debut of full commercial service would be delayed until November 1 because more time is needed to test the global system.]

The fifteen gateways have been completed; equipment for the China gateway, the last one, was shipped recently. As in a theatrical production, the players are frantically completing last-minute details as the curtain is about to go up and Iridium embarks upon the world stage.

THE FUTURE

Iridium is already at work on its Next Generation system (Inx), whose planning has been underway for more than a year. Although details have not been announced, it has been suggested that the system would be capable of providing broadband services to mobile terminals. In part, it would augment the fixed-terminal services offered by Teledesic, which Motorola is helping to build, and might include aspects of Motorola's former Celestri system. It has also been reported that the Inx terminal would provide greater flexibility in transitioning between satellite and cellular services and that the satellite power level would be substantially increased.

As customers sign up for satellite mobile telephony service, its utility and competitive advantage will become apparent. Information will flow more freely, the world will grow still smaller, and economies around the world will be stimulated.
There will also be a profound effect on geopolitics and culture. Just as satellite television helped bring down the Berlin Wall through the flow of pictures and information across international boundaries, the dawning age of global personal communication among individuals will bring the world community closer together as a single family.

_______________________________

Dr. Robert A. Nelson, P.E. is president of Satellite Engineering Research Corporation, a satellite engineering consulting firm in Bethesda, Maryland; a Lecturer in the Department of Aerospace Engineering at the University of Maryland; and Technical Editor of Via Satellite magazine.