How to Promote Your ATI Course in Social Media

LinkedIn for ATI Rocket Scientists

Did you know that for 52% of professionals and executives, their LinkedIn profile is the #1 or #2 search result when someone searches on their name? For ATI instructors, that number is substantially lower – just 17%. One reason is that about 25% of ATI instructors do not have a LinkedIn profile. Others have done so little with their profile that it isn't included in the first page of search results.

If you are not using your LinkedIn profile, you are missing a huge opportunity. When people google you, your LinkedIn profile is likely the first place they go to learn about you. You have little control over what other information might be available on the web about you, but you have complete control over your LinkedIn profile. You can use your profile to tell your story – to give people the exact information you want them to have about your expertise and accomplishments. Why not take advantage of that to promote your company, your services, and your course? Here are some simple ways to promote your course using LinkedIn.

On Your LinkedIn Profile

Let's start by talking about how to include your course on your LinkedIn profile so it is visible anytime someone googles you or visits your profile.

1. Add your role as an instructor.

Let people know that this course is one of the ways you share your knowledge. You can include your role as an instructor in several places on your profile:
  • Experience – This is the equivalent of listing your role as a current job. (You can have more than one current job.) Use Applied Technology Institute as the employer. Make sure you drag and drop this role below your full-time position.
  • Summary – Your summary is like a cover letter for your profile – use it to give people an overview of who you are and what you do. You can mention the type of training you do, along with the name of your course.
  • Projects – The Projects section gives you an excellent way to share the course without giving it the same status as a full-time job.
  • Headline – Your Headline comes directly below your name, at the top of your profile. You could add “ATI Instructor” at the end of your current Headline.
Start with an introduction, such as "I teach an intensive course through the Applied Technology Institute on [course title]," and copy/paste the description from your course materials or the ATI website. You can add a link to the course description on the ATI website. This example from Tom Logsdon's profile shows how you might phrase it:

Here are some other examples of instructors who include information about their courses on their LinkedIn profile:
  • Buddy Wellborn – His Headline says “Instructor at ATI” and Buddy includes details about the course in his Experience section.
  • D. Lee Fugal – Mentions the course in his Summary and Experience.
  • Jim Jenkins – Courses are included throughout Jim’s profile, including his Headline, Summary, Experience, Projects, and Courses.
2. Link to your course page.
In the Contact Info section of your LinkedIn profile, you can link out to three websites. To add your course, go to Edit Profile, then click on Contact Info (just below your number of connections, next to a Rolodex card icon). Click on the pencil icon to the right of Websites to add a new site. Choose the type of website you are adding. The best option is "Other:" as that allows you to insert your own name for the link. You have 35 characters – you can use a shortened version of your course title or simply "ATI Course." Then copy/paste the link to the page about your course. This example from Jim Jenkins' profile shows how a customized link looks:

3. Upload course materials.

You can upload course materials to help people better understand the content you cover. You could include PowerPoint presentations (from this course or other training), course handouts (PDFs), videos or graphics. They can be added to your Summary, Experience or Projects sections. You can see an example of an upload above, in Tom Logsdon's profile.

4. Add skills related to your course.

LinkedIn allows you to include up to 50 skills on your profile. If your current list of skills doesn't include the topics you cover in your course, you might want to add them. Go to the Skills & Endorsements section on your Edit Profile page, then click on Add skill. Start typing and let LinkedIn auto-complete your topic. If your exact topic isn't included in the suggestions, you can add it.

5. Ask students for recommendations.

Are you still in touch with former students who were particularly appreciative of the training you provided in your course? You might want to ask them for a recommendation that you can include on your profile. Here are some tips on asking for recommendations from LinkedIn expert Viveka Von Rosen.

6. Use an exciting background graphic.

You can add an image at the top of your profile – perhaps a photo of you teaching the course, a photo of your course materials, a graphic from your presentation, or simply some images related to your topic. You can see an example on Val Traver's profile. Go to Edit Profile, then run your mouse over the top of the page (just above your name). You will see the option to Edit Background. Click there and upload your image. The ideal size is 1400 by 425 pixels. LinkedIn prefers a JPG, PNG or GIF. Of course, only upload an image that you have permission to use.

Share News about Your Course

You can also use LinkedIn to attract more attendees to your course every time you teach.

7. When a course date is scheduled, share the news as a status update.

This lets your connections know that you are teaching a course – it's a great way to reach the people who are most likely to be interested and able to make referrals. Go to your LinkedIn home page, and click on the box under your photo that says "Share an update." Copy and paste the URL of the page on the ATI website that has the course description. Once the section below populates with the ATI Courses logo and the course description, delete the URL. Replace it with a comment such as: "Looking forward to teaching my next course on [title] for @Applied Technology Institute on [date] at [location]." Note that when you finish typing "@Applied Technology Institute" it will give you the option to click on the company name. When you do that, ATI will know you are promoting the course, and will be deeply grateful! When people comment on your update, it's nice to like their comment or reply with a "Thank you!" message.
Their comment shares the update with their network, so they are giving your course publicity. If you want to start doing more with status updates, here are some good tips about what to share (and what not to share) from LinkedIn expert Kim Garst.

8. Share the news in LinkedIn Groups.

If you have joined any LinkedIn Groups in your areas of expertise, share the news there too. Of course, in a Group you want to phrase the message a little differently. Instead of "Looking forward to teaching…" you might say "Registration is now open for…" or "For everyone interested in [topic], I'm teaching…" You could also ask a thought-provoking question on one of the topics you cover. Here are some tips about how to start an interesting discussion in a LinkedIn Group.

9. Post again if you still have seats available.

If the course date is getting close and you are looking for more people to register, you should post again. The text below will work as a status update and in most LinkedIn Groups.

"We still have several seats open for my course on [title] on [date] at [location]. If you know of anyone who might be interested, could you please forward this? Thanks."

"We have had a few last-minute cancellations for my course on [title] on [date] at [location]. Know anyone who might be interested in attending?"

10. Blog about the topic of the course.

When you publish blog posts on LinkedIn using their publishing platform, you get even more exposure than with a status update:
  • The blog posts are pushed out to all your connections.
  • They stay visible on your LinkedIn profile, and
  • They are made available to Google and other search engines.
A blog post published on LinkedIn will rank higher than one posted elsewhere, because LinkedIn is such an authority site, so this can give your course considerable exposure. You probably have written articles or have other content relevant to the course. Pick something that is 750-1500 words. To publish it, go to your LinkedIn home page, and click on the link that says "Publish a post." The interface is very simple – easier than using Microsoft Word. Include an image if you can. You probably have something in your training materials that will be perfect. At the end of the post, add a sentence that says: "To learn more, attend my course on [title]." Link the title to the course description on the ATI website. For more tips about blogging, you are welcome to join ProResource's online training website. The How to Write Blog Posts for LinkedIn course is free.

Take the first step

The most important version of your bio in the digital world is your LinkedIn summary. If you only make one change as a result of reading this blog post, it should be to add a strong summary to your LinkedIn profile. Write the summary promoting yourself as an expert in your field, not as a job seeker. Write the first draft of your profile in a word processing program to spell-check and ensure you are within the required character counts. Then copy/paste it into the appropriate sections of your LinkedIn profile. You will have a stronger profile that tells your story effectively with just an hour or two of work!

Contributed by guest blogger Judy Schramm. Schramm is the CEO of ProResource, a marketing agency that works with thought leaders to help them create a powerful and effective presence in social media. ProResource offers done-for-you services as well as social media executive coaching. Contact Judy Schramm at jschramm@proresource.com or 703-824-8482.

Geographic Information Systems

What Is a Geographic Information System?

In 1988 the Federal Interagency Coordinating Committee defined the term Geographic Information System in the following manner: "a system of computer hardware, software, and procedures designed to support the capture, management, manipulation, analysis, and display of spatially referenced data for solving complex planning and management problems." In essence, such a system is an electronic spreadsheet coupled with powerful graphic-manipulation and display capabilities. The three most important elements of a typical Geographic Information System can be summarized as follows:

1. Cartographic capability
2. Data management capability
3. Analytical capability

The cartographic capabilities built into a Geographic Information System permit the computer – amply aided by skilled human operators – to produce accurate maps and engineering drawings in a convenient pictorial format. Once the digital maps have been constructed and annotated, the computer is used to manipulate the finished product in various specific ways to produce layered maps bristling with colorful attribute symbols.

The data management capabilities enable the GIS operators to store and manipulate map-related information in convenient graphic and non-graphic formats. The storage and manipulation of the non-graphic information is often called "attribute processing". Operators who are trained to handle the attribute processing can select the desired map data to produce colorful reports laced with a rich mixture of graphics, tabular information, and pictorial attributes.

The analytical capabilities associated with today's GIS software permit the trained operators to process and interpret spatial, tabular, and graphical data in a variety of useful ways. They can, for instance, measure the distance between two points or determine the areas of the various shapes pictured on the screen. The analytical capabilities also help the operators plan, design, and manage such important resources as roads, buildings, bridges, and waterways with maximum practical efficiency.

Reaping The Practical Benefits of GIS Technology

All around the world, government professionals, utility engineers, and efficiency-minded entrepreneurs have been quietly investing tens of millions of dollars in attempting to perfect a wide variety of Geographic Information Systems. The GIS routines they have been financing are capable of storing, manipulating, and analyzing complicated electronic maps to increase the efficiency of various large-scale operations including city planning, resource management, emergency vehicle dispatch, and water distribution.
Even the simplest Geographic Information Systems contain a rich mixture of graphical and alphanumeric information stored in a database that can be manipulated electronically by trained human operators. The information contained in the various layers can be combined, modified, analyzed, and displayed in limitless combinations. The spatial information, its associated attributes, and any necessary alphanumeric labels and notations are imaged and printed using full-color computer-driven printers and video displays.
Regional and state governments, for example, use GIS to develop county maps, devise the most efficient deployments for public buses, repair roads, collect taxes, chart the spread of contagious diseases, and nail down new election districts. GIS technology is also being used in some of the most economically underdeveloped countries in the world. As you will learn in a later blog, technicians in Gambia, a tiny country on the west coast of Africa, have been using GIS processing techniques coupled with inexpensive Navstar GPS receivers to monitor illegal fishing activities in their country's territorial waters.

Jack Dangermond, President of the Environmental Systems Research Institute, is convinced that Geographic Information Systems will rapidly spread to other Third-World countries whose citizens will experience immediate benefits. "GIS technology, because of its low cost, high reliability, user-friendliness and wide usefulness, will be adopted by many users outside the highly developed technological societies," he asserts. "This offers tremendous promise for improving the future for billions of people on planet Earth."

Of course, Geographic Information Systems will be broadly adopted by users around the world only if sponsors can foresee measurable economic benefits. Fortunately, for several decades, such benefits have been reported in industry literature and by many users. In 1968, for instance, the Texas Electric Service Company introduced a grid-based load-management system for its massive electrical transformers. Using rather primitive GIS techniques, company technicians easily found and documented $1 billion in savings over a four-year period. Similarly, when the Denver Water Department implemented a GIS-based system for its engineering and planning functions, professional technicians on their staff pinpointed immediate savings in time, energy, and labor. Before automation, drafters typically spent two months turning out drawings for each set of 100 cross-sectional maps. After automation, those same products were typically completed in less than two days.
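To make the analytical capabilities described above concrete, here is a minimal sketch, in Python, of two of the most common GIS measurement operations: the straight-line distance between two points and the area of a parcel boundary. The coordinates and the parcel outline are hypothetical, and a real GIS would pull its geometry from a spatial database rather than from hard-coded lists.

    import math

    def distance(p, q):
        """Straight-line distance between two points in planar map coordinates."""
        return math.hypot(q[0] - p[0], q[1] - p[1])

    def polygon_area(vertices):
        """Area of a simple polygon computed with the shoelace formula."""
        total = 0.0
        for i in range(len(vertices)):
            x1, y1 = vertices[i]
            x2, y2 = vertices[(i + 1) % len(vertices)]
            total += x1 * y2 - x2 * y1
        return abs(total) / 2.0

    # Hypothetical parcel boundary, coordinates in meters
    parcel = [(0, 0), (120, 0), (120, 80), (0, 80)]
    print(distance(parcel[0], parcel[2]))  # diagonal: about 144.2 meters
    print(polygon_area(parcel))            # area: 9600 square meters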

The Wide Area Augmentation System (WAAS)

THE MICROWAVE LANDING SYSTEM

As soon as a reasonably full constellation of Navstar satellites began to arrive in space, the Federal Aviation Administration approved the use of well-designed Navstar receivers as a supplemental means of airborne navigation. With that approval, properly equipped airplanes could use the system for point-to-point vectoring and non-precision approach.

While the GPS constellation was being installed, the Microwave Landing System (MLS) was being touted as the favored means for landing airplanes under bad-weather conditions at properly instrumented airports all around the world. A total of 1250 American airports were scheduled for Microwave Landing System installations, but, even so, eighty percent of our country's airfields would still have lacked such landing aids. The Microwave Landing System, unfortunately, fell behind schedule and went over budget while clever new approaches were greatly enhancing the capabilities of the Navstar system. With these new concepts in mind, the FAA's experts have essentially abandoned the Microwave Landing System in favor of a Navstar-based approach toward flight vectoring and air traffic control.

Roughly one-third of the world's airplanes are based in the United States. Consequently, officials in other countries are expected to rely on the GPS in a similar manner. They are, of course, also building and installing space-based navigation systems of their own to replace and accentuate the capabilities of the GPS system.

FUTURE APPROACHES TO AIR TRAFFIC CONTROL

The backbone of the Federal Aviation Administration's rapidly evolving concept for future air traffic control is its Wide-Area Augmentation System (WAAS). The WAAS architecture calls for a total commitment to dependent surveillance techniques based on wide-area differential navigation. If its proposed architecture successfully materializes, every airplane that flies in the American airspace (excluding hang gliders and ultralights) will probably be equipped with a differential GPS receiver rigged to handle wide-area differential navigation.

In a conventional differential navigation system, each differential base station broadcasts pseudo-range and pseudo-range-rate corrections directly to the users within a circular coverage region a few hundred nautical miles in diameter. This approach is conceptually simple and easy to implement, but as many as 500 differential base stations would be required to provide seamless coverage for the lower 48 states. Wide-area differential navigation, by contrast, can provide coverage over a comparable area with only 25 to 30 monitor stations linked to a centrally located master station. As Figure 1 indicates, the widely scattered monitor stations transmit real-time pseudo-range measurements and other information to the master station, where computer processing algorithms process all the measurements simultaneously as a unit. By processing large matrix arrays of overdetermined measurements, the master station produces and broadcasts information associated with each GPS satellite that is within sight of the United States:

1. 3-D satellite ephemeris corrections
2. Clock-bias errors
3. Real-time ionospheric corrections

Each local receiver then plucks off the appropriate constants associated with its current navigation solution.
Careful computer processing of those values coupled with an appropriate set of conventional real-time pseudo-range measurements allows each user to obtain a dramatically improved navigation solution with essentially differential accuracy over the entire coverage area in real time.
The FAA's Wide Area Augmentation System employs 25 to 30 widely dispersed monitor stations that transmit real-time pseudo-range and pseudo-range-rate corrections to a centrally located master station. The master station then computes generalized "differential corrections" that span the entire lower forty-eight states. These values are then transmitted up to a small collection of geostationary satellites serving the system for rebroadcast back down to the users on or near the ground below.
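A minimal sketch of how a WAAS-style receiver might apply the broadcast corrections to a single raw pseudo-range, assuming the master station has supplied a satellite clock-bias correction and an ionospheric delay for this line of sight. All field names and numbers here are illustrative; the real broadcast formats are specified in RTCA DO-229, and the ephemeris corrections would adjust the satellite position used in the navigation solution rather than the range itself.

    C = 299_792_458.0  # speed of light, meters per second

    def corrected_pseudorange(raw_range_m, clock_bias_s, iono_delay_m):
        """Apply wide-area clock-bias and ionospheric corrections to one
        raw pseudo-range measurement, returning the corrected range in meters."""
        return raw_range_m + clock_bias_s * C - iono_delay_m

    # One hypothetical satellite: a 23,000 km raw range, a 10-nanosecond
    # clock-bias correction, and 4.2 meters of ionospheric delay
    print(corrected_pseudorange(23_000_000.0, 10e-9, 4.2))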

Geodetic Surveying

POSITIONING MAJOR LANDMARKS

In 1988 a team of surveyors used the signals from the Navstar satellites to reestablish the locations of 250,000 landmarks sprinkled across the United States. According to one early press report, their space-age measurements caused the research team to "move the Washington Monument 94.5 feet to the northwest." And during that same surveying campaign, they moved the Empire State Building 120.5 feet to the northeast, and they repositioned Chicago's Sears Tower 90.1 feet to the northwest. In reality, of course, the Navstar satellites do not give anyone the power to move large, imposing structures, but the precise signals they broadcast do provide our geodetic experts with amazingly accurate and convenient position-fixing capabilities that have been quietly revolutionizing today's surveying profession. Someday soon the deed to your house may be specified in GPS coordinates.

Surveying with a GPS receiver entails a number of critical advantages over classical ground-based methods for pinpointing the locations of widely scattered landmarks on the Earth's undulating surface. For one thing, intervisibility between benchmarks is not required. Navstar receivers positioned at surveyors' benchmarks often have access to the signals from the GPS satellites sailing overhead even though they may not be within sight of one another. This can be especially important in tree-shrouded areas, such as the dense rain forests of Indonesia and Brazil. In such cluttered conditions, conventional surveying teams sometimes spend hours erecting big, portable towers at each site to achieve the required intervisibility high above the forest canopy. When it is time to move on, they tear the towers down one by one and lug their girders to different locations, and then build them back up again. GPS surveying is also advantageous because it is essentially weather-independent, and because it permits convenient and accurate day-night operations. With carrier-aided navigation techniques, site-to-site positioning errors as small as a quarter of an inch can sometimes be achieved.

The signals from the space-based Transit Navigation System have been used for many years to aid specialized terrestrial surveying operations. Unfortunately, Transit surveying suffers from a number of practical limitations as compared with similar operations using the GPS. A Transit satellite, for instance, climbs up above the horizon, on average, only every hour or so, compared with the continuous GPS satellite observations. Moreover, achieving an accuracy of a foot or so requires approximately 48 hours of intermittent access to the signals from the Transit satellites. By contrast, the GPS provides inch-level accuracies with the satellite observation interval lasting, at most, only about 1 hour.

DETERMINING THE SHAPE OF PLANET EARTH

For thousands of years scientists have tried to determine the size and shape of planet Earth. During those centuries, shapes resembling tabletops, magnifying glasses, turkey eggs, and Bartlett pears have all, at one time or another, been chosen to model its conjectured shape. The ancient Babylonians, for instance, were convinced that the earth was essentially flat, probably due to erroneous everyday observations. But by 900 BC, they had changed their minds and decided it was shaped like a convex disc. This belief probably arose when some observant mariner noticed that, whenever a sailboat approaches the horizon, its hull drops out of view while its sail is still clearly visible.
By 1000 BC Egyptian and Greek scientists had concluded that the earth was a big, round ball. In that era, in fact, Eratosthenes managed to make a surprisingly accurate estimate of the actual circumference of the spherical earth. He realized that such an estimate was possible when he happened to notice that at noontime on a particular day, the sun's rays plunged directly down a well at Aswan, but at that same time due north at Alexandria its rays came down at a shallower angle. Once he had measured the peak elevation angle of the solar disk at Alexandria on the appropriate day (see Figure 1), Eratosthenes estimated the distance from Aswan to Alexandria – probably by noting the travel times of sailing boats or camel caravans. He then evaluated a simple ratio to get an estimate for the circumference of planet Earth. Translating measurement units across centuries is not an easy thing to do, but our best guess indicates that his estimate for the earth's radius was too large by around 15 percent. Twenty-five centuries later, Christopher Columbus underestimated the Earth's radius by 25 percent. He wanted to believe that he inhabited a smaller planet so the Orient would not be prohibitively far away from Europe, sailing west.

In 1687, England's intellectual giant, Sir Isaac Newton, displayed his powerful insights when he reasoned that his home planet, Earth, must have a slight midriff bulge. Its shape, he reasoned, is governed by hydrostatic equilibrium, as its spinning mass creates enough centrifugal force to sling a big curving girdle of water upward against the pull of gravity. Newton's mathematical calculations showed that this enormous water-girdle must be around 17 miles high. But were the landmasses affected in the same way as that bulge of water in the seas? Newton understood that if the earth was rigid enough, the landmasses would not be reshaped by the centrifugal forces, but he reasoned that, since there were no mountains 17 miles high, the landmasses must be similarly affected; otherwise, no islands would poke up through the water in the vicinity of the equator.
In 1000 B.C., the highly insightful Greek mathematician Eratosthenes estimated the radius of the earth by measuring the elevation angle of the sun at Alexandria when it was known to be overhead due south at Aswan (Syene). Then, using a simple ratio, he scaled up the measured distance separating those two Egyptian cities to obtain a surprisingly accurate estimate for the circumference of the spherical earth.
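Eratosthenes' ratio is easy to reproduce. Here is a quick sketch using the round figures usually quoted for this story, a 7.2-degree sun angle at Alexandria and roughly 500 miles between the two cities; his original distance units, stadia, are notoriously difficult to translate, so treat the numbers as illustrative.

    # The sun angle at Alexandria is the same fraction of a full circle
    # that the Aswan-to-Alexandria distance is of the earth's circumference.
    sun_angle_deg = 7.2           # measured offset from vertical at Alexandria
    aswan_to_alexandria_mi = 500  # estimated from caravan or boat travel times

    circumference_mi = aswan_to_alexandria_mi * 360.0 / sun_angle_deg
    print(circumference_mi)  # 25,000 miles, vs. the true value of about 24,900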
GPS CALIBRATIONS AT THE TURTMANN TEST RANGE

Surveying demonstrations carried out at the Turtmann Test Range in the Swiss Alps have demonstrated that, when a GPS receiver is operated in the carrier-aided (interferometry) mode, it can provide positioning accuracies comparable to those obtained from the finest available laser-ranging techniques. Figure 2 summarizes the positioning accuracies that the Swiss surveying team was able to achieve in the Turtmann test campaign. In this clever bird's-eye-view depiction of the range, the various baseline lengths are all accurately proportioned. The short vectors are proportional to the surveying errors in the horizontal plane, but they have been magnified 100,000 times, compared with the dimensions of the baseline lengths.

In one early test series, the one-sigma deviations between the GPS measurements and the earlier laser-ranging calibrations turned out to be Sigma X = 0.2 inches, Sigma Y = 0.15 inches, Sigma Z = 0.17 inches. In an earlier test involving only four base stations with three unknown baseline lengths of 382.2 feet, 1644.4 feet, and 333 feet, the average surveying errors were Sigma X = 0.2 inches, Sigma Y = 0.35 inches, Sigma Z = 0.35 inches. Both sets of measurements were estimated using static surveying techniques in which the GPS receiver sits at each site for about a half-hour to record several hundred pseudo-range measurements. All of the measurements from the various sites are then processed simultaneously to achieve the desired results.

The Atomic Clocks Carried Aboard The Navstar GPS Satellites

CESIUM ATOMIC CLOCKS

Only in the modern era of atomic clocks has timekeeping technology provided sufficient accuracy to allow the successful construction of the Navstar Global Positioning System. The evenly spaced timing pulses coming down from each Navstar satellite are generated by an atomic clock that contains no gears or cogs. Its extraordinary timekeeping abilities arise from the quantum mechanical behavior of certain specific atoms (cesium, rubidium, hydrogen), which tend to have a single outer-shell electron.

Cesium atoms can exist in either of two principal states. In the high-energy state, the spin axis of the lone outer-shell electron is parallel to the spin axis of the atom's nucleus. In the low-energy state, the electron spins in an anti-parallel direction. For cesium, the energy difference between the two spin states corresponds to an electromagnetic frequency of 9,192,631,770 cycles per second. Thus, when a cloud of cesium gas is struck by a radio wave oscillating near that particular frequency, some of the low-energy atoms will absorb one quantum of energy and, consequently, their outer-shell electron will flip over and begin spinning in the opposite direction. The closer the trigger frequency can be adjusted to 9,192,631,770 cycles per second, the more low-energy electrons will reverse their direction of spin.

The heart of the cesium atomic clock is a voltage-controlled crystal oscillator – a small vibrating slab of quartz similar to the one that hums inside a digital watch. Small variations in the voltage feeding a voltage-controlled crystal oscillator create corresponding variations in its oscillation frequency. Any necessary adjustments are handled by a feedback control loop consisting of a cesium atomic clock wrapped around the quartz crystal oscillator. A schematic diagram of the cesium atomic clocks carried onboard the GPS satellites is sketched in Figure 1. First, solid cesium is vaporized at 100 degrees Centigrade; then it is routed through a collimator to form a steady stream of cesium gas, which, in its natural state, consists of an equal mixture of high-energy and low-energy atoms.
The low-energy atoms floating around inside the resonating chamber of this cesium atomic clock are hit with a radio wave as close as possible to 9,192,631,770 oscillations per second. Depending on the accuracy of that trigger frequency, larger or smaller numbers of low-energy atoms will absorb one quantum of energy to become high-energy atoms – which are subsequently converted into cesium ions by the hot-wire ionizer (bottom right). The resulting ion current automatically adjusts the frequency of the quartz crystal oscillator, which, in turn, creates more timing pulses and precisely controlled electromagnetic waves.
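The feedback loop in the caption above can be illustrated with a toy model: dither the oscillator to either side of its current frequency, then step toward whichever side yields the larger ion current. Everything below, the detector response curve, the dither amplitude, and the step size, is a made-up illustration, not flight hardware.

    CESIUM_HZ = 9_192_631_770.0  # the cesium transition frequency

    def ion_current(freq_hz):
        """Hypothetical detector response: the ion current peaks when the
        trigger frequency sits exactly on the cesium transition."""
        offset = freq_hz - CESIUM_HZ
        return 1.0 / (1.0 + (offset / 100.0) ** 2)

    freq = CESIUM_HZ + 400.0  # start the oscillator slightly off-frequency
    dither = 50.0             # dither amplitude, hertz
    for _ in range(100):
        # Probe both sides of the current frequency, then nudge the
        # oscillator toward the side that produced more ion current
        if ion_current(freq + dither) > ion_current(freq - dither):
            freq += dither / 10.0
        else:
            freq -= dither / 10.0
    print(abs(freq - CESIUM_HZ))  # residual offset settles near zero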
A selector magnet is then used to separate the cesium atoms into two separate streams. The high-energy atoms are discarded; the low-energy atoms are deflected into a resonating cavity with precisely machined dimensions, where they are hit with radio waves generated by a voltage-controlled crystal oscillator coupled to a solid-state frequency multiplier circuit. The closer the trigger frequency is to 9,192,631,770 oscillations per second, the more outer-shell electrons will be inverted to produce high-energy cesium atoms. When the atoms emerge from the resonating cavity, they are again sorted by a selector magnet into two separate streams. This time the low-energy atoms are discarded. The high-energy atoms are deflected onto a hot-wire ionizer, which strips off their outer-shell electrons to produce a stream of cesium ions. The resulting current is then routed into a feedback control loop connected to the voltage-controlled crystal oscillator, whose oscillation frequency is constantly adjusted to produce new radio waves. By adjusting the frequency to maximize the ion current and dithering the oscillator to make its frequency straddle the desired value of 9,192,631,770 oscillations per second, the frequency stability of the quartz crystal oscillator can be maintained within one part in 5 billion. Thus, the feedback control loop just described stabilizes the frequency of the quartz crystal by a factor of 10,000 or so, compared with a free-running quartz crystal with similar design characteristics.

RUBIDIUM ATOMIC CLOCKS

The rubidium atomic clocks carried on board the GPS satellites are, in many respects, similar to the cesium atomic clocks, but there are also important differences in their design. For one thing, the rubidium atoms are not used up while the device is keeping time. Instead, the atoms reside permanently inside the resonating chamber. The sensing mechanisms that monitor and adjust the clock's stability are also based on distinctly different scientific principles. As the rubidium atoms linger inside the resonating chamber, they are impacted with electromagnetic waves whose oscillation frequencies are as close as possible to 6,834,682,613 oscillations per second (see Figure 2). As the transmission frequency is adjusted closer and closer to that precise target value, larger numbers of rubidium atoms will absorb exactly one quantum of energy. When they do, their spin-states automatically reverse to convert them from low-energy to high-energy atoms.
Unlike the atoms in a cesium atomic clock, the atoms in a rubidium atomic clock always remain in the gaseous state. The trigger frequencies for the two devices are also different. For a rubidium atomic clock the trigger frequency is 6,834,682,613 oscillations per second. When the rubidium atoms inside the resonating cavity are hit with a trigger frequency as close as possible to that value, larger numbers of them are converted from low-energy atoms to high-energy atoms – that is, the spin axis of their lone outer-shell electron is parallel to the spin axis of the nucleus. Successful inversions are monitored by shining a rubidium lamp through the resonating cavity. When larger numbers of rubidium atoms have been converted to the high-energy state, the gaseous rubidium in the resonating cavity is more opaque to rubidium light.
The rubidium atomic clock converges toward the desired frequency through a feedback control loop whose status is continuously evaluated by shining the beam of a rubidium lamp through the resonating chamber. The gas inside the chamber becomes more or less opaque to rubidium light, depending on how many of the rubidium atoms inside have been successfully inverted. The intensity of the rubidium light passing through the chamber is measured by a photodetector, similar to the electric eye in a digital camera. The output from the photodetector is fed into a set of solid-state integrated circuits rigged to make subtle and continuous adjustments to the frequency of the voltage-controlled crystal oscillator. Pulses from the crystal oscillator, which vibrates at 5 million oscillations per second, are used in generating the evenly spaced C/A- and P-code pulses broadcast by the satellites. A portion of the output from the voltage-controlled crystal oscillator is also fed into a set of frequency multiplier circuits which generate the desired 6,834,682,613-oscillation-per-second frequency, which is, in turn, routed into the atomic clock's resonating chamber.

DEVELOPING ATOMIC CLOCKS LIGHT ENOUGH TO TRAVEL INTO SPACE

When the architecture for the Navstar navigation system was first being selected, many experts argued convincingly that the atomic clocks should remain firmly planted on the ground. The C/A- and P-code pulse trains, they believed, should be sent up to the satellites through radio links for rebroadcast back down to the users below. This position was quite defensible because all available atomic clocks were big, heavy, power-hungry, and extremely temperamental. The best available cesium atomic clocks operated by the National Bureau of Standards, for instance, were larger than a household deep-freeze, and they had to be tended by a fretful army of highly trained technicians. However, emerging technology soon produced much smaller and far more dependable atomic clocks. After years of intellectual struggle, the cesium and rubidium atomic clocks on board the Navstar satellites have turned out to be surprisingly small and compact. They also consume moderate quantities of electricity and can operate for several years without failure. The rubidium clocks carried aboard the Navstar satellites are roughly the same size as a car battery. Each one weighs about fifteen pounds. The cesium atomic clocks are a little bigger. They weigh thirty pounds each.

The earliest Navstar GPS performance specifications called for atomic clocks with fractional frequency stabilities of one part in 1 trillion. The fractional frequency stability of an atomic clock can be defined as the one-sigma pulse-to-pulse error divided by the duration between pulses. An atomic clock with a fractional frequency stability of one part in 1 trillion is capable of keeping time to within one second over an interval of 30,000 years. Although this performance specification may seem rather stringent, the first few spaceborne atomic clocks were two to five times more stable than required. Consequently, the specification goal was eventually tightened to two parts in 10 trillion. The Navstar clocks have turned out to be surprisingly accurate and stable, but clock reliability problems plagued the first few GPS satellites. On average, only five on-orbit months went by before a satellite component failure occurred. Almost always it was an atomic clock component that failed.
With intense design efforts, these problems were eventually brought under control so that, today, the probability that at least one of the four atomic clocks on a Block II satellite will still be operating at the end of its 7.5-year mission is estimated to be 99.44 percent.
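The stability and reliability figures quoted above are easy to sanity-check. The sketch below assumes, for the reliability calculation, that the four clocks fail independently with identical probability, an assumption the article does not spell out:

    # One part in 1 trillion means the clock gains or loses about
    # 1e-12 seconds for every second that passes.
    stability = 1e-12
    seconds_per_year = 365.25 * 24 * 3600
    print(1.0 / (stability * seconds_per_year))  # ~31,700 years per second of drift

    # If at least one of four independent, identical clocks must survive
    # a 7.5-year mission with probability 0.9944, each individual clock
    # only needs to survive with probability of roughly 73 percent.
    p_all_four_fail = 1.0 - 0.9944
    p_single_clock_fails = p_all_four_fail ** 0.25
    print(1.0 - p_single_clock_fails)  # ~0.73 per-clock reliability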

Precise Time Synchronization

EARLY DISASTERS AT SEA

Eighteenth-century British sailors exhibited an almost haughty disdain for accurate navigation. When one of them was asked how to navigate a sailing ship from London to the New World, he replied: "Sail south until the butter melts, then turn right." For decades thereafter, Britain ruled the waves, but her seamen paid for their lack of navigational expertise with precious ships and expensive cargoes. Sometimes they paid with their own lives.

A special exhibit in the British Maritime Museum at Greenwich highlights some of the painful consequences of inaccurate navigation. In 1691, for instance, several ships of war were lost off Plymouth when the navigator mistook the Dead Man for Berry Head. And in 1707 another devastating incident occurred when Sir Cloudsley Shovel was assigned to guide a flotilla from Gibraltar to the docks of London. After 12 days shrouded in heavy fog, he ran aground at the Scilly Islands. Four ships and 2500 British seamen were lost.

These and a number of other similar disasters at sea motivated Parliament to establish the British Board of Longitude, a committee composed of the finest scientists of the day. They were charged with the responsibility of discovering some practical scheme for determining the locations of British ships on transoceanic voyages. In 1714 the Board offered a 20,000 British pound prize to anyone who could provide them with a method for fixing a ship's position to within 30 nautical miles after six weeks at sea.

One promising possibility, originally proposed by the Italian scientist Galileo, would have required that navigators take precise sightings of the moons of Jupiter as they were eclipsed by the planet. If this technique had been adopted, special astronomical guides listing the predicted times for each of the eclipses would have been furnished to the captain of every flagship, or perhaps every ship in the British fleet. Galileo's elegant theory was entirely sound, but, unfortunately, its 18th-century proponents were never able to devise a way to make the necessary observations under the rugged conditions existing at sea. Another approach called for a series of "light ships" to be anchored along the principal shipping lanes of the North Atlantic. The crew of each lightship would fire luminous "star shells" at regular intervals, timed to explode at an altitude of 6400 feet. A ship in the area could calculate the distance to the nearest lightship by timing the duration between the visible flash and the sound of the exploding shell.

JOHN HARRISON'S MARINE CHRONOMETER

Even before the dawning of the 18th century, the latitude of a maritime vessel was relatively easy to ascertain: at any location in the northern hemisphere, its latitude equals the elevation angle of the Pole Star. But determining its longitude has always been far more difficult, because the Earth's rotation causes the stars to sweep across the sky 15 degrees for every passing hour. A one-minute timing error thus translates into a 15-nautical-mile error in longitudinal position. Unfortunately, measuring the time with sufficient accuracy aboard a rocking, rolling ship presented a formidable set of engineering problems. In 1714, when the British Board of Longitude made its tantalizing announcement, a barely educated British cabinetmaker named John Harrison was perfectly poised to win the prize. Harrison had always been clever with his hands, and he had been blessed with a natural talent for repairing and building precision machinery.
Moreover, when the British Board of Longitude announced its fabulously inviting proposition, John Harrison just happened to be a poor but energetic 21-year-old. Flushed with the boundless enthusiasm of youth, he began to design and build a series of highly precise timekeeping devices. It took him almost 50 years of difficult labor, but in 1761 he was finally ready to claim the prize. Harrison's solution involved a new kind of shipboard timepiece, the Marine Chronometer, which was amazingly accurate for its day. Onboard a rolling ship, in nearly any kind of weather, it gained or lost, on average, only about one second per day.

Even by today's standards, Harrison's Marine Chronometer was a marvel of engineering design. He constructed certain parts of it from bimetallic strips to compensate for temperature changes, he used swiveling gimbal mounts to minimize the effects of wave-induced motions, and he rigged it with special mechanisms so that it would continue to keep accurate time while it was being wound. Once the Marine Chronometer was widely adopted for marine navigation, a sailor who failed to wind it, when it was his assigned job to do so, could be charged with a capital crime. Over a period of 47 years, Harrison built four different versions of the Marine Chronometer, all of which are, today, on display in Greenwich at the British Maritime Museum.

Unfortunately, by the time John Harrison managed to finish his fourth and final Marine Chronometer, he did not have enough strength left to stake his claim. So he persuaded his son, William, to travel from London to Jamaica to demonstrate its fabulously accurate navigational capabilities. During that entire six-week journey, the Marine Chronometer lost less than one minute. And upon arrival at Jamaica, it helped fix the position of the ship to an accuracy of 20 nautical miles. Disputes raged for years thereafter as to whether John Harrison should be declared the winner. At one point, the members of the Board of Longitude insisted on confiscating his clever invention. They even tested it upside down, although Harrison had not designed it to keep accurate time in that unlikely mode of operation. Eventually, through the intervention of royalty, John Harrison was awarded the entire 20,000 British pound prize.

CELESTIAL NAVIGATION TECHNIQUES

The Marine Chronometer has, for decades, been used in conjunction with the sextant to fix the longitudes and latitudes of vessels at sea. The sextant is an optical device that can be used to measure the elevation angle of any visible celestial body above the local horizon. While sighting a planet or star through the optical train of the sextant, the navigator makes careful adjustments until the star's image is superimposed on the local horizon. A calibrated scale mounted on the side of the instrument then displays the elevation angle of the star. A precisely timed sextant sighting of this type fixes the position of the ship along a circular line of position lying on the spherical earth. By making a similar sighting on a second celestial body, with a different elevation angle, the navigator can construct a second circular line of position that will, generally speaking, intersect the first circle at two locations. He or she can then resolve the ambiguity either by having a fairly accurate estimate of the ship's position or by taking one more sextant sighting on a third celestial body.
Celestial navigation is still widely used by mariners all around the world, although its popularity is eroding as other more accurate and convenient navigational techniques pass into common use. Lewis and Clark used celestial navigation when they constructed accurate maps of the North American wilderness, and many polar explorers employed similar methods to guide the initial phases of their expeditions toward the north and south poles. The Apollo astronauts also relied on sextant sightings as a backup navigation system as they coasted silently through cislunar space. For those and many other applications of celestial navigation, precise time measurements are inevitably the key to achieving the desired accuracy and the desired confidence in the measured results.

A BRIEF HISTORY OF TIME

Over the past one thousand years advancing technology has given us several generations of increasingly accurate clocks. Indeed, as the graph in Figure 1 demonstrates, today's best timekeeping devices are at least a trillion times (12 orders of magnitude) more stable and accurate than the finest clocks available 800 years ago. At the beginning of the 12th century, the most accurate timekeeping devices were water clocks and candle clocks, which, on average, gained or lost approximately one hour per day. Balance clocks, which were widely adopted in the 14th century, kept time to within 15 minutes per day.

The next major advance in clockmaking technology was triggered by a simple observation by Galileo who, late in the 16th century (so the story goes), happened to wander into the church at the Leaning Tower of Pisa. Once inside, he noticed something that quickly captured his fancy: a candle suspended on the end of a chain swinging in the breeze. Numerous other churchgoing Italians had witnessed the same thing hundreds of times before. But Galileo noticed something all of them had failed to recognize: the amount of time required for the candle to swing back and forth was independent of the length of its swinging arc. When it traveled along a short arc it moved more slowly. When it traveled along a longer arc it moved faster to compensate. Galileo never used his clever pendulum principle to build a better clock, but he did suggest that others do so, and they were quick to follow that sound advice. Grandfather clocks, with their highly visible pendulums, are today's most obvious result. A well-built grandfather clock loses or gains perhaps twenty seconds in an average day.

Another important advancement came when, in 1761, after decades of labor, John Harrison managed to perfect his fourth Marine Chronometer, a precision shipboard timepiece that reduced timing errors to approximately one second per day. Thus, his device was just about as stable and accurate as a modern digital wristwatch that can be purchased for $30 at any large department store.
During the past 800 years timekeeping accuracies have improved by at least twelve orders of magnitude as innovative clockmaking technologies have been continuously introduced. In the twelfth century the best available timekeeping devices, candle clocks and water clocks, lost or gained fifty or sixty minutes during a typical day. Some of today's hydrogen masers would require several million years to gain or lose a single second. In the intervening centuries, pendulum clocks, Marine Chronometers, quartz crystal oscillators, and cesium atomic clocks have all, in turn, greatly improved mankind's ability to keep accurate time.
In the 1940s clocks driven by tiny quartz crystal oscillators raised timekeeping accuracies to impressive new levels of precision. A quartz crystal oscillator is a tiny slab of quartz machined to precise dimensions that oscillates at an amazingly regular frequency. Once quartz crystal oscillators had been perfected, they turned out to be more stable and accurate than the timing standard of the day, which was based on the Earth's steady rate of rotation. Astronomers measured the relentless passage of time by making optical sightings of the zenith crossings of celestial bodies as they swept across the sky. A few years later a new kind of official time standard was adopted, based on atomic clocks driven by the unvarying oscillation frequencies of cesium, rubidium, and hydrogen atoms. Voting networks that include the timing pulses from widely separated atomic clocks still serve as a global time standard for the Western world. Today's hydrogen masers are highly temperamental, but they are so stable and accurate they would require millions of years to lose or gain a single second.

Interferometry Techniques

INTERFEROMETRY

Most of today's receivers use the pseudorandom C/A- and P-code pulse sequences broadcast by the GPS satellites to obtain their current positioning solutions. But a more sophisticated technique called interferometry derives information for its navigation solutions from the sinusoidal carrier waves coming down from the satellites. Interferometry solutions, which are also called carrier-aided solutions, are more difficult to obtain, but, in situations where they are valid, they can provide surprisingly large reductions in the navigation errors, especially for static and low-dynamic surveying applications. Some airborne and spaceborne applications can also benefit from carrier-aided processing techniques.

THE CLASSICAL MICHELSON-MORLEY INTERFEROMETRY EXPERIMENT

Interferometry methods first received widespread attention when they were used in the famous Michelson-Morley experiment, which proved conclusively that the ether did not exist. The ether was a fanciful substance that was believed to carry electromagnetic waves through the vacuum of space. Nineteenth-century scientists endowed the ether with a number of semi-magical properties, such as complete weightlessness, total transparency, and infinite rigidity. If the ether existed, it surely carried beams of light along with it in some preferred direction. The Earth travels around the sun at 67,000 miles per hour, and the sun whirls around the center of the Milky Way galaxy at an even faster rate. Only by the most improbable coincidence would an earth-based observer be stationary with respect to the ether.

Michelson and Morley devised a clever apparatus for measuring the velocity of light in various directions to see how the movement of the ether might affect its propagation speed. Their mechanism broke a beam of light into two parts, sent those two parts along mutually perpendicular paths, and then brought them back together again to check their propagation velocities relative to one another. First the light was sent through an optical filter and a focusing lens to create parallel rays of monochromatic light (see Figure 1). Then it was directed toward a partially silvered mirror that reflected half the light, but allowed the other half to pass on through. The portion that passed through the partially silvered mirror hit a fixed, fully silvered mirror and was reflected back to the surface of the partially silvered mirror. The portion that was reflected by the partially silvered mirror traveled to a movable fully silvered mirror whose position could be manually adjusted by turning two small thumbscrews.
The Michelson-Morley interferometry apparatus uses a half-silvered mirror to divide a beam of monochromatic light into two parts: one part is sent to a fixed mirror, the other is reflected to an adjustable (movable) mirror. The beams then retrace their paths and recombine to form interference fringe patterns – concentric bands of dark and light. When the adjustable mirror is moved up or down one-quarter of a wavelength, the dark concentric bands become light and vice versa.
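The quarter-wavelength fringe reversal described in the caption follows directly from the round-trip geometry: moving the adjustable mirror by a quarter wavelength lengthens that beam's round trip by half a wavelength, putting the two beams exactly out of phase. A minimal sketch of that relationship, using idealized two-beam interference and an illustrative wavelength:

    import math

    WAVELENGTH = 550e-9  # illustrative green monochromatic light, meters

    def center_intensity(mirror_shift_m):
        """Idealized two-beam interference at the center of the fringe
        pattern; moving the mirror by d changes the round-trip path by 2*d."""
        phase = 2.0 * math.pi * (2.0 * mirror_shift_m) / WAVELENGTH
        return math.cos(phase / 2.0) ** 2  # 1.0 = bright, 0.0 = dark

    print(center_intensity(0.0))               # bright: beams in phase
    print(center_intensity(WAVELENGTH / 4.0))  # dark: quarter-wave mirror move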
Constructive and destructive interference between the two reunited beams created concentric circles of light and dark. Each time the thumbscrews were adjusted enough to shorten the path length by one-quarter of a wavelength of the monochromatic light, the dark rings turned to light and vice versa. In 1907 Albert Abraham Michelson was awarded the Nobel Prize for his pioneering work in interferometry techniques. And yet for decades thereafter, the methods that he and his talented colleagues perfected were used for only a few rather esoteric applications. Today, by contrast, interferometry techniques are improving our lives in hundreds of different ways, most of which are totally hidden from public view.

MEASURING ATTITUDE ANGLES WITH SPECIAL NAVSTAR RECEIVERS

A specially designed Navstar receiver can make use of simple interferometry techniques to determine its angular orientation with respect to the electromagnetic waves coming down from the GPS satellites. This is accomplished by processing a series of carrier wave measurements from a single satellite picked up by two different user-set antennas separated by a rigid bar. As Figure 2 indicates, the carrier waves from a distant satellite travel along essentially parallel trajectories to reach the two antennas. If the rigid bar is tipped at an angle with respect to the wavefront, the path lengths followed by the two parallel carrier waves will be unequal. Consequently, if we display both carrier waves on an oscilloscope, they will be displaced with respect to one another. Their phase mismatch can be used to determine the relative orientation angle theta, which is sketched in the lower left-hand corner of Figure 2. Multiple measurements of this type using the L-band signals from various GPS satellites – together with the information they broadcast defining their Keplerian orbital elements – allow the receiver to determine its three independent attitude angles in real time. A larger separation distance between the two antennas (a longer rigid rod) can theoretically increase the accuracy with which the attitude angles can be ascertained. Ambiguities in the solution arise from the fact that the receiver cannot distinguish between a pair of path lengths that differ by one-half a wavelength, one and one-half wavelengths, two and one-half wavelengths, and so on. Consequently, the angle theta could have a large number of different values. Several promising solutions to this problem are constantly being explored.

ELIMINATING THE SOLUTION AMBIGUITIES

Each Navstar satellite transmits L1 and L2 carrier waves that are 7.5 and 9.6 inches long, respectively, so an antenna separation distance of only a few feet can create an enormous number of solution ambiguities. These ambiguities can be resolved, to some extent, by making precise measurements and then using careful computer processing techniques. An alternate approach makes use of an electronically shifted antenna that gradually increases the separation distance between the two antenna phase centers. At first, the two interface ports on the receiver are both fed from the same antenna. Then, gradually, the other antenna feed is electronically shifted along a straight line from one end of the rigid bar to the other. During this interval the receiver keeps track of the number of wavelengths that have swept by, thus greatly reducing the possibility of unresolvable solution ambiguities. Other promising approaches include software resolution and antennas mounted on the two ends of a rigid rotating rod.
The angular orientation of a rigid bar separating two antennas can be measured by a special GPS receiver that uses interferometry techniques to determine the desired solution. This is possible because the carrier wave must follow a longer path to reach the antenna on the left than it follows to reach the one on the right. Increasing the separation distance between the two antennas improves the accuracy of the device, but larger separation distances also give rise to a much larger number of solution ambiguities.
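The geometry in Figure 2 reduces to a one-line formula: the extra path to the far antenna is d·sin(theta), so the measured phase mismatch is 2π·d·sin(theta)/λ, modulo one full cycle. The sketch below (my own illustration; the function name and the enumeration strategy are assumptions, not ATI's algorithm) shows both the angle recovery and why the ambiguities multiply as the baseline grows:

```python
import numpy as np

# Sketch of single-baseline attitude recovery from a carrier-phase mismatch.
# Because only the fractional part of the phase difference is observable,
# every integer-cycle offset n yields another candidate tilt angle.

L1_WAVELENGTH = 0.1903  # meters, GPS L1 carrier (~7.5 inches)

def candidate_angles(delta_phi: float, baseline_m: float,
                     wavelength_m: float = L1_WAVELENGTH) -> list:
    """Return all tilt angles (radians) consistent with a measured
    fractional phase difference, one per integer-cycle ambiguity."""
    angles = []
    max_cycles = int(baseline_m / wavelength_m) + 1
    for n in range(-max_cycles, max_cycles + 1):
        path_diff = (delta_phi / (2 * np.pi) + n) * wavelength_m
        s = path_diff / baseline_m  # sin(theta)
        if -1.0 <= s <= 1.0:
            angles.append(np.arcsin(s))
    return angles

# A 1-meter baseline already yields about 11 candidate solutions, hence
# the ambiguity-resolution tricks described above.
print(len(candidate_angles(delta_phi=1.0, baseline_m=1.0)))
```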

The Science of Navigation

ANCIENT NAVIGATION
Mankind’s earliest navigational experiences are lost in the shadows of the past. But history does record a number of instances in which ancient mariners observed the locations of the sun, the moon, and the stars to help direct their vessels across vast, uncharted seas. Bronze Age Minoan seamen, for instance, followed tortuous trade routes to Egypt and Crete, and even before the birth of Christ, the Phoenicians brought many shiploads of tin from Cornwall. Twelve hundred years later, the Vikings were probably making infrequent journeys across the Atlantic to settlements in Greenland and North America. How did these courageous navigators find their way across such enormous distances in an era when integrating accelerometers and handheld receivers were not yet available in the commercial marketplace? Herodotus tells us that the Phoenicians used the Pole Star to guide their ships along dangerous journeys, and Homer explains how the wise goddess instructed Odysseus to “keep the Great Bear on his left hand” during his return from Calypso’s Island.

CELESTIAL NAVIGATION
Eventually, the magnetic compass reduced mankind’s reliance on celestial navigation. One of the earliest references to compass navigation was made in 1188, when the Englishman Alexander Neckam published a colorful description of an early version consisting of “a needle placed upon a dart which sailors used to steer when the Bear is hidden by clouds.” Eighty years later the Dominican friar Vincent of Beauvais explained how daring seamen, whose boats were deeply shrouded in fog, would “magnetize the needle with a lodestone and place it through a straw floating in water.” He then went on to note that “when the needle comes to rest it is pointing at the Pole Star.” The sextant, which was developed and refined over several centuries, made Polaris and its celestial neighbors considerably more useful to navigators on the high seas. When the sky was clear, this simple device – which employs adjustable mirrors to measure the elevation angles of stellar objects with great precision – could be used to nail down the latitude of the ship so that navigators could maintain an accurate east-west heading. However, early sextants were largely useless for determining longitude because reliable methods for measuring time aboard ship were not yet available. The latitude of a ship equals the elevation of the Pole Star above the local horizon, but its longitude depends on angular measurements and the precise time. The earth spins on its axis 15 degrees every hour; consequently, a one-second timing error translates into a longitudinal error of 0.004 degrees – about 0.25 nautical miles at the equator. The best 17th-century clocks were capable of keeping time to an accuracy of one or two seconds over an interval of several days when they were sitting on dry land. But when they were placed aboard ship and subjected to wave pounding, salt spray, and unpredictable variations in temperature, pressure, and humidity, they either stopped running entirely or else were too unreliable to permit accurate navigation. To the maritime nations of 17th-century Europe, the determination of longitude was no mere theoretical curiosity. Sailing ships by the dozens were sent to the bottom by serious navigational errors. As a result of these devastating disasters caused by inaccurate navigation, a special act of Parliament established the British Board of Longitude, a study group composed of the finest scientists living in the British Isles.
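The relationship between clock error and longitude error quoted above is easy to check. Here is a minimal sketch (my own illustration; only the numbers come from the article):

```python
import math

# The earth rotates 15 degrees per hour, so a shipboard clock error maps
# directly into a longitude error, which shrinks with the cosine of latitude.

EARTH_ROTATION_DEG_PER_SEC = 15.0 / 3600.0   # ~0.004167 degrees per second
NM_PER_DEGREE_AT_EQUATOR = 60.0              # one degree of longitude is ~60 nm

def longitude_error_nm(clock_error_s: float, latitude_deg: float = 0.0) -> float:
    """Position error (nautical miles) caused by a shipboard clock error."""
    deg_error = clock_error_s * EARTH_ROTATION_DEG_PER_SEC
    return deg_error * NM_PER_DEGREE_AT_EQUATOR * math.cos(math.radians(latitude_deg))

print(longitude_error_nm(1.0))    # ~0.25 nm per second of clock error
# The Board of Longitude's 30-nm target after a six-week voyage therefore
# allowed a total drift of only about 120 seconds, roughly 3 seconds per day.
print(longitude_error_nm(120.0))  # ~30 nm
```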
The Board’s members were ordered to devise a practical scheme for determining both the latitude and longitude of English ships sailing on long journeys. After heated debate, the Board offered a prize of 20,000 British pounds to anyone who could devise a method for fixing a ship’s longitude within 30 nautical miles after a transoceanic voyage lasting six weeks. One proposal advanced by contemporary astronomers would have required that navigators take precise sightings of the moons of Jupiter as they were eclipsed by the planet. If practical trials had demonstrated the workability of this novel approach, ephemeris tables would have been furnished to the captain of every flagship or perhaps every ship in the British fleet. The basic theory was entirely sound, but, unfortunately, no one was able to devise a workable means for making the necessary observations under the rugged conditions existing at sea.

THE MARINE CHRONOMETER
In 1761, after 47 years of painstaking labor, a barely educated British cabinetmaker named John Harrison successfully claimed the 20,000-British-pound prize, which in today’s purchasing power would amount to about $1 million. Harrison’s solution centered on his development of a new shipboard timepiece, the marine chronometer, which was amazingly accurate for its day. On a rocking, rolling ship in nearly any kind of weather, it gained or lost, on average, only about one second per day. Thus, under just about the worst conditions imaginable, Harrison’s device was nearly twice as accurate as the finest land-based clocks developed up to that time.

During World War II, ground-based radionavigation systems came into widespread use when military commanders in the European theater needed to vector their bombers toward specific targets deep in enemy territory. Both Allied and Axis researchers soon learned that ground-based transmitters could provide reasonably accurate navigation within a limited coverage regime. In the intervening years America and various other countries have operated a number of ground-based radionavigation systems. Many of them – Decca, Omega, Loran – have been extremely successful. But in recent years, American and former Soviet scientists have been moving their navigation transmitters upward from the surface of the earth into outer space. There must be some compelling reason for installing navigation transmitters aboard orbiting satellites. After all, it costs something like $100 million to construct a navigation satellite and another $100 million to launch it into space. Moreover, at least a half-dozen orbiting satellites are needed for a practical spaceborne radionavigation system.

WHAT IS NAVIGATION?
Navigation can be defined as the means by which a craft is given guidance to travel from one known location to another. Thus, when we navigate, we not only determine where we are, we also determine how to go from where we are to where we want to be. Five basic methods of navigation are in common use:
1. Piloting
2. Dead reckoning
3. Celestial navigation
4. Inertial navigation
5. Electronic or radionavigation
Piloting, which consists of fixing the craft’s position with respect to familiar landmarks, is the simplest and most ancient method of navigation. In the 1920s bush pilots often employed piloting to navigate from one small town to another. Such a pilot would fly along the railroad tracks out across the prairie, swooping over isolated farmhouses along the way.
Upon arrival at a village or town, the pilot would search for a water tower with the town’s name printed in bold letters to make sure the intended destination had not been overshot.

Dead reckoning is a method for determining position by extrapolating a series of velocity increments. In 1927 Charles Lindbergh used dead reckoning when he flew his beloved Spirit of St. Louis on a 33-hour journey from Long Island to Le Bourget Field outside Paris. Incidentally, Lindbergh hated the name. The original term was “ded reckoning” (short for deduced reckoning), but newspapers of the day could never resist writing “dead reckoning” to remind their readers of the many pilots who had lost their lives attempting to find their way across the North Atlantic.

Celestial navigation is a method of computing position from precisely timed sightings of the celestial bodies, including the stars and the planets. Primitive celestial navigation techniques date back thousands of years, but celestial navigation flourished anew when cabinetmaker John Harrison constructed surprisingly accurate clocks for use in conjunction with sextant sightings aboard British ships sailing on the high seas. The uncertainty in a celestial navigation measurement builds up at a rate of a quarter of a nautical mile for every second of timing error. This cumulative error arises from the fact that the earth rotates, displacing the stars along the celestial sphere.

Inertial navigation is a method of determining a craft’s position by using integrating accelerometers mounted on gyroscopically stabilized platforms. Years ago navigators aboard Polaris submarines employed inertial navigation systems when they successfully sailed under the polar ice caps.

Electronic or radionavigation is a method of determining a craft’s position by measuring the travel time of an electromagnetic wave as it moves from transmitter to receiver. The position uncertainty in a radionavigation system amounts to at least one foot for every billionth of a second of timing error. This error arises from the fact that an electromagnetic wave travels at a rate of 186,000 miles per second, or one foot in one billionth of a second.

ACTIVE AND PASSIVE RADIONAVIGATION
According to the Federal Radionavigation Plan published by the United States government, approximately 100 different types of domestic radionavigation systems are currently being used. All of them broadcast electromagnetic waves, but the techniques they employ to fix the user’s position are many and varied. Yet, despite its apparent complexity, radionavigation can be broken into two major classifications:
1. Active radionavigation
2. Passive radionavigation
A typical active radionavigation system is sketched in Figure 1. Notice that the navigation receiver fixes its position by transmitting a series of precisely timed pulses to a distant transmitter, which immediately rebroadcasts them on a different frequency. The slant range from the craft to the distant transmitter is established by multiplying half of the two-way signal travel time by the speed of light. In a passive radionavigation system (see Figure 1), a distant transmitter sends out a series of precisely timed pulses. The navigation receiver picks up the pulses, measures their signal travel time, and then multiplies by the speed of light to get the slant range to that transmitter. A third navigational approach is called bent-pipe navigation.
In a bent-pipe navigation system, a transmitter attached to a buoy or a drifting balloon broadcasts a series of timed pulses up to an orbiting satellite. When the satellite picks up each timed pulse, it immediately rebroadcasts it on a different frequency. A distant processing station picks up the timed pulses and then uses computer-processing techniques to determine the approximate location of the buoy or balloon.
Most radionavigation systems determine the user’s position by measuring the signal travel time of an electromagnetic wave as it travels from one location to another. In active radionavigation the timed signal originates on the craft doing the navigating. In passive radionavigation it originates on a distant transmitter.
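Both timing schemes reduce to a single multiplication. Here is a minimal sketch (my own illustration; the function names are invented) of the slant-range computation for each classification:

```python
# Converting measured signal travel times into slant ranges for the two
# radionavigation schemes shown in Figure 1.

SPEED_OF_LIGHT_FT_PER_S = 9.836e8  # ~186,000 miles/s, i.e. ~1 foot per nanosecond

def passive_slant_range_ft(one_way_travel_time_s: float) -> float:
    """Passive system: the receiver times a one-way pulse from a transmitter."""
    return one_way_travel_time_s * SPEED_OF_LIGHT_FT_PER_S

def active_slant_range_ft(round_trip_travel_time_s: float) -> float:
    """Active system: the craft's own pulse is rebroadcast back to it, so
    only half of the two-way travel time counts toward the range."""
    return 0.5 * round_trip_travel_time_s * SPEED_OF_LIGHT_FT_PER_S

# The article's rule of thumb: every billionth of a second of timing error
# adds at least a foot of position uncertainty.
print(passive_slant_range_ft(1e-9))  # ~0.98 ft
```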

Using GIS Technology To Protect Gambia’s Territorial Waters

“Gambia, West Africa, is a sliver of a country dwarfed by the enormity of the African continent, like a tiny Band-Aid on the side of an elephant.” That eye-catching sentence opens a colorful GPS World article written by Carlo Cesa and Don Trone. The article is entitled “A GPS Fish Story: Getting Gambian Waters Under Control.” Gambia is an underdeveloped country, but because it happens to lie along the coast of Africa, its citizens control – under international law – nutrient-rich waters teeming with fish. Unfortunately, large numbers of fishermen swarm in from other countries – Korea, China, Greece, Spain. For years those visiting fishermen have been taking fish illegally from Gambian waters; by some estimates, foreign vessels catch at least half the fish. Consequently, new methods for protecting Gambia’s territorial waters are desperately needed. Video and still cameras working in partnership with inexpensive Navstar receivers and an application-specific GIS database provide a high-technology approach that can be implemented by relatively unskilled technicians. Specially equipped airplanes fly over the fishing waters in random, time-varying patterns. Then, whenever the flight crew spots a suspicious-looking vessel, the pilot swoops down as low as 60 feet over the water so the vessel’s tell-tale markings can be imaged with video and still cameras (see Figure 1).
In order to monitor illegal fishing near its shores, the government of Gambia is making use of a Geographic Information System skillfully coupled with an airborne imaging system driven by inexpensive Navstar receivers. Whenever the government agents spot a suspicious-looking vessel plying Gambian waters, they use onboard video and film cameras to record its appearance and its movements across the sea. GPS position coordinates and timing measurements (accurate to a small fraction of a second) are automatically imprinted on each frame of the film, thus making legal prosecution convenient and practical.
Each image is automatically stamped with relevant flight data, GIS database information, and current GPS-derived longitude and latitude positioning coordinates. This real-time information clearly establishes the location of the vessel and any illegal activities of the crew being photographed, thus providing visual proof of clandestine fishing operations. Gambia is an underdeveloped country populated by only about one million citizens. But the relatively simple GIS/GPS technology its technicians have perfected, in cooperation with Western experts, is quickly being duplicated in many other parts of the world. Norway, Germany, Sierra Leone, Senegal, and New Zealand have all implemented broadly similar monitoring systems to guard their shores against illegal fishing fleets. “Gambia has proved that advanced technology doesn’t have to be complex and expensive,” Carlo Cesa and Don Trone conclude. “Their approach can enable smaller and less economically developed countries to participate in the technology explosions of the more prosperous nations.”
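A record-keeping structure like the following (a hypothetical sketch; the field names and values are my assumptions, not Gambia's actual data format) captures the essentials of what gets stamped onto each frame:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical sketch of the per-frame stamp that ties a photographed
# vessel to a precise place and time for later legal prosecution.

@dataclass
class SurveillanceFrame:
    vessel_markings: str     # tell-tale registration markings read from the hull
    latitude_deg: float      # GPS-derived position at the moment of exposure
    longitude_deg: float
    gps_time_utc: datetime   # accurate to a small fraction of a second
    altitude_ft: float       # e.g. the 60-foot low passes described above
    frame_id: int

frame = SurveillanceFrame("GAM-1234", 13.4531, -16.5780,
                          datetime.now(timezone.utc), 60.0, 1)
print(frame)
```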

Using GIS Technology To Grow Bigger Sugar Beets

The clattering farm tractors of yesteryear were uncomplicated machines equipped with only a few accessories, all of which could be easily maintained and repaired by the farm families fortunate enough to own them. By contrast, today’s mechanical descendants, rumbling across Nebraska’s sugar beet fields, are often bristling with cab-mounted Navstar receivers, digital computers, full-color video displays, and electronic database memories programmed with custom-tailored Geographic Information Systems. The sugar beet is a delicate plant requiring protection during the early phases of its life cycle. Consequently, fast-growing cover crops – oats, barley, rye – are commonly seeded throughout the same field just prior to the planting of the sugar beet seeds. Then, when the sugar beets are being planted, a narrow stream of plant-selective herbicide is laid down with the beet seeds to destroy nearby weeds while allowing the protective cover crop to grow between the rows. The local soil type and its organic content are of crucial importance in determining the optimum quantity of herbicide to apply. Too much herbicide damages the delicate sugar beets; too little allows weeds to grow and choke them within their rows. Three different soil types are commonly found in close proximity in western Nebraska:
  • Loam
  • Sandy clay loam
  • Coarse-textured sandy soil
When all three soil types share the same sugar beet field, the optimum amount of herbicide for effective results often varies by as much as 50 percent. Many of Nebraska’s sugar beet fields employ center-pivot irrigation systems in which an elevated, self-propelled irrigation fixture pivots around a gigantic circle, spraying water as it moves forward. Some center-pivot units irrigate flat, circular fields a half-mile or more in diameter with practically no supporting labor.

Historically, the tractors planting sugar beet seeds have simultaneously applied uniform amounts of herbicide to destroy any weeds beginning to grow along the narrow beet-seed rows. This compromise approach toward herbicide application is simple and easy to implement, but because soil types vary so much within a typical circular field, it does not achieve optimum results. Fortunately, a Navstar receiver mounted in the cab of a tractor, coupled with an onboard GIS database, can help the operator optimize the application of herbicides in various portions of the field, as sketched below. Aerial photographs are used to pinpoint soil-type variations. These images are then digitized to form contour maps which are, in turn, fed into an onboard GIS database. Differential navigation signals broadcast by local FM radio stations are used to fix the current position of the tractor to an accuracy of 3 to 5 feet.

Farming industry surveys indicate that about 5 percent of America’s large-scale factory farms now use GIS technology to achieve substantial improvements in the application of liquid fertilizers and herbicides. “Each area of the field receives only those specific nutrients that are recommended to produce the desired crop,” explains John Mann, president of Soil Teq, Inc., of Minnetonka, Minnesota. Everyone benefits from the high-tech approach: costs are lower, productivity is higher, and pollution levels in local streams resulting from fertilizer-infused runoff quickly decline.
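The variable-rate logic itself is simple once the GIS lookup exists. This sketch (my own illustration; the rates and the soil map are invented, and the real systems use digitized aerial photographs plus FM-broadcast differential corrections) shows the idea:

```python
# Variable-rate herbicide application keyed to the soil type under the
# tractor's current differentially corrected GPS position.

BASE_RATE_OZ_PER_ACRE = 16.0  # invented baseline rate, adjusted by soil type
SOIL_FACTORS = {
    "loam": 1.00,
    "sandy clay loam": 0.75,
    "coarse sandy": 0.50,   # lighter soils need far less herbicide
}

def soil_type_at(easting_ft: float, northing_ft: float) -> str:
    """Stand-in for the onboard GIS lookup: map a GPS fix (already
    corrected to 3-5 ft accuracy) to a soil-type polygon."""
    return "sandy clay loam" if easting_ft > 1000 else "loam"

def herbicide_rate(easting_ft: float, northing_ft: float) -> float:
    """Ounces per acre to apply at the tractor's current position."""
    return BASE_RATE_OZ_PER_ACRE * SOIL_FACTORS[soil_type_at(easting_ft, northing_ft)]

print(herbicide_rate(500.0, 200.0))   # 16.0 on loam
print(herbicide_rate(1500.0, 200.0))  # 12.0 on sandy clay loam
```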