Mapping Our Big, Wide, Wonderful World

INTRODUCTION

Professional surveyors measure, map, and analyze relatively large portions of the Earth's surface. Armed with precision instruments, they define and record accurate land contours and property boundaries. And they pinpoint the locations of natural landmarks and man-made structures. Surveying has, for centuries, been an essential element of civilized human existence. But its practical, everyday importance is often overlooked.

Accurate surveying measurements, and the maps that result, make individual property ownership possible. And property ownership, in turn, fosters fruitful human interactions, encourages the steady accumulation of wealth, and enhances social prosperity. "Property is that which is necessary for all civil societies," observed the famous Scottish philosopher David Hume. America's 16th president, Abraham Lincoln, echoed a similar sentiment when he concluded that: "Property is the fruit of labor . . . It is a positive good in the world." Journalist Leo Rosten was not inclined to contradict President Lincoln's enthusiastic endorsement. "Property is a sacred trust," he once concluded, "expressly granted by God, the Bible, and the Recorder's Office."

Compelling evidence that property boundaries were being established by surveyors as early as 1400 BC has been found among stone carvings on the broad floodplains and fertile valleys of ancient Egypt. During the Roman occupation of that prosperous and fertile kingdom, Roman technicians studied, absorbed, and copied the techniques the Egyptians had perfected while constructing the great pyramid at Giza, with its nearly perfect proportions and its surprisingly precise north-south alignment.

Around 15 BC, Roman engineers made at least one innovative contribution to the art and science of surveying when they mounted a large, thin wheel, barrel-fashion, on the bottom of a sturdy cart. When their clever mechanism was pushed along the ground, it automatically dropped a single pebble into a small container with each 360-degree revolution. The number of pebbles rattling around in the container provided a direct measure of the distance traveled by the device. When perfected, it became the world's first crude, but reliable, odometer! Roman surveyors refined the methods and mechanisms pioneered by the Egyptians and used their techniques in surveying more than 40,000 miles of Roman roads and in laying out hundreds of miles of aqueducts funneling water to their thirsty cities.

SURVEYING INVENTIONS THAT SPROUTED UP DURING THE RENAISSANCE

In 1620 the famous English mathematician Edmund Gunter developed the earliest surveying chain. It was widely used by surveyors until the steel tape came into existence nearly three centuries later. The vernier, a precise auxiliary scale that permitted more accurate readings of distances and angles, was invented in 1631. It was followed by the micrometer microscope in 1638 and telescope sights in 1669. The spirit level followed around 1700. A spirit level relies on a small bubble floating in a liquid-filled glass cylinder that is precisely centered when the device is perpendicular to the local direction of gravity.

By the 1920s photogrammetry – the science of constructing accurate maps from aerial photographs – came into general use. And, 50 years later, in the 1970s, orbiting satellites began to serve as dedicated reference points for measuring millions of precise angles and distances.
These measurements allowed contemporary experts to construct ground-level maps with unprecedented levels of accuracy and convenience. By the 1990s spaceborne centimeter-level surveying had become convenient, practical, and considerably less expensive, too.

Surveying methodologies can be divided into two broad categories: plane surveying – which typically involves distances shorter than 12 miles – and geodetic surveying – which spans areas so large the curvature of the earth must come into play.

PLANE SURVEYING

Plane surveying assumes that the earth is flat in a small local area. Under this condition, relatively simple computational algorithms from Euclidean geometry and plane trigonometry can be employed in processing the measurements the surveyor makes. The region being surveyed is typically divided into a small chain of triangles or quadrangles. When the simpler triangles are employed, the three interior angles for each triangle must sum to 180 degrees, and the common side shared by a pair of adjacent triangles must be constrained to have the same length in both of the relevant trigonometric calculations. Specialized numerical adjustments force the computations to produce mutually consistent results. The approach that relies on quadrangles involves four sides, eight angles, and two diagonals. All shared dimensions are forced to end up with mutually consistent results.

GEODETIC SURVEYING ON A MUCH LARGER SCALE

Geodetic surveying must be applied when the areas being surveyed are so extensive the Earth's curvature has an appreciable effect on the surveyor's measurements. In this case spherical trigonometry is required, despite the fact that it involves greater complexity and more intricate visualization for those interpreting the results.

In 1687 Sir Isaac Newton demonstrated that the earth exhibits a pronounced bulge at the equator. Its first-order spherical shape is distorted by the centrifugal forces induced by its daily rotation. The shape it assumes can be approximated as an oblate spheroid with an equatorial diameter approximately 27 miles longer than its polar diameter. Huge numbers of measurements reflecting the Earth's non-spherical shape have been incorporated into a variety of mathematical reconstructions of the Earth's equatorial bulge. These approximations are called datums when they are being used in connection with geodetic surveying. Leveling measurements establishing a fictitious local sea level are often used in constructing the precise oblate spheroids used in modeling and analyzing surveying operations. One of the earliest and most popular of these models is the Clarke ellipsoid of 1866. For more than a century it has been employed as an engineering model defining the shape and gravitational characteristics of our home planet. Surfaces determined by leveling measurements approximate the average long-term sea level of our home planet. Such surfaces are distorted slightly because, at high northern and southern latitudes, the surface of the oblate spheroid lies in closer proximity to the Earth's center, where most of its gravity is concentrated.

MODERN ACCOMPLISHMENTS IN AERIAL PHOTOGRAPHY

Military commanders have always struggled to capture and hold the "high ground" because an elevated vantage point often provides an unobstructed view of enemy activities on the ground below. During the American Civil War (circa 1860) hot-air balloons carried reconnaissance experts up among the clouds where they could observe enemy troop deployments and equipment placements.
During World War I and World War II, substantial resources were expended by the various combatants in attempting to survey the sprawling battlefields scattered across continent-wide dimensions. And, when peace descended over the smoke-shrouded battlegrounds, the accuracy and convenience of military surveying and mapmaking operations had been appreciably enhanced by aerial observations. Later, in Kentucky (the author's home state), tobacco acreages were measured, estimated, and controlled by precise government-sponsored surveys of this type. Indeed, that allotment system is still, today, controlled by that same highly efficient approach to terrestrial surveying.

SURVEYING GOD'S GREEN EARTH WITH ORBITING SATELLITES

Orbiting satellites became practical, if relatively inaccurate, surveying tools shortly after the Russians launched their first Sputnik into outer space in October of 1957. The earliest American satellites used in this manner were the two 100-foot Echo balloons, clearly visible from the surface of the earth. These aluminum-coated mylar balloons allowed crude, but convenient, mapping of otherwise inaccessible regions of the Earth. This could be accomplished by bouncing a sequence of brief radar pulses off the skin of the balloon and timing the bent-pipe signal travel times between a known location on earth and the one that was yet to be determined.

Camera-equipped satellites have also found widespread applications in surveying and mapmaking enterprises. Shortly after the first Sputnik reached orbit, President Eisenhower presented the ambassador of Brazil with an accurate map of his forest-shrouded country. NASA's imaging experts had kludged it together by combining dozens of satellite images into a countrywide composite.

Later the six Transit navigation satellites and the two dozen or so satellites in the GPS constellation made surveying considerably more accurate, convenient, and cost-effective. GPS-derived sub-centimeter accuracies soon became possible using the precise timing measurements made available by the GPS satellites and their international competitors. Positioning errors were dramatically reduced compared with most conventional surveying techniques. In part, this became possible because ground-based and space-based hardware units and new software modules were soon providing accurate and reliable positioning corrections.

EPILOGUE

Professional surveyors measure, map, and analyze relatively large portions of the Earth's surface. Armed with precision instruments, they define and record accurate land contours and property boundaries. And they pinpoint the spatial locations of natural landmarks and man-made structures. Surveying has, for many centuries, been an essential element of civilized human existence. But its practical, everyday importance is sometimes overlooked. Hopefully, this brief article will help bring the fundamental importance of precision surveying back into sharp focus.

Tom Logsdon
Seal Beach, California
February 2015

The Bumpy Road to Space

The recent abort, and eventual successful launch, of the Space-X mission to resupply the space station is one of many bumps in the road to commercial space. One should not expect the road to be smooth, or that replacing a Russian supply system with over a half century and almost 1,000 missions in its heritage will be easy. While we all hope that the commercial efforts of such companies as Space-X and Orbital Sciences Corporation will succeed, we also know many problems will arise.

According to Ed Keith, an ATI teacher of rocket and missile design and technology, the NASA commercial space road is a major step in the right direction. On the other hand, he sees many bumps along that same road. Historically, American launch vehicles have been developed and operated with large government budgets. New commercial ventures have an incentive to do the same type of missions at much lower cost. This means that some short cuts are made, some new risks are accepted, and new ways of doing business are employed.

In Mr. Keith's three-day class on Fundamentals of Rockets and Missiles, commercial and government design standards are compared. The apparent effect is that a commercial rocket DDT&E (Design, Development, Test & Evaluation) effort, like the Space-X Falcon, should cost about one-fifth of what a government DDT&E program costs for a comparably sized rocket. This cost difference is documented in some cost models or Cost Estimation Relationships (CERs). What those same cost models fail to explain is why anyone would ever choose anything but the commercial approach. Mr. Keith's explanation is that the shortcuts have one major impact: lower initial reliability. Indeed, the first three launch attempts of the Space-X Falcon-1 launch vehicles all failed. Since then, there have been two successful launches of the Falcon-1 and three successful launches of the much larger Falcon-9. Commercial space ventures have the opportunity to take calculated-risk short cuts that government programs are mandated to avoid, and the business incentive to make wiser trade-offs and choices.

This does not mean that the road to commercial space will be smooth from here on in. A more realistic expectation is for the road to be bumpy. Space-X has had five successful launches in a row, but its proven historical reliability is five successes in eight tries, or 62.5% reliability. The best we can say regarding the Falcon-9 rocket is that we can be confident it is at least 75% reliable at this time. If, or when, a Falcon-9 rocket fails in the future, it should be considered a bump on the way to commercial space, not a failure of this new way of doing business.

Even this latest successful launch cannot be counted as a victory for commercial space until the Dragon space capsule successfully docks with the Space Station. While the launch is the most risky six minutes of the mission, Space-X still must get the craft safely to a docking port with all the cargo intact. The difficulty and risks of rendezvous and docking of a spacecraft to the Space Station should not be underestimated.

There will always be critics of commercial space who will look for negative occurrences to undermine commercial-style ventures. There is also a high probability that a number of future commercial space missions will include embarrassing failures. The criteria for success in commercial space should not be whether the road is bumpy with occasional failures.
The success criteria should be whether access to space is better, faster, and cheaper using commercial methods and incentives than is practical with the type of government bureaucratic methods and incentives that have dominated the final frontier for the past half century.

Dr. Tom Logsdon teaches Orbital Mechanics and Global Positioning Satellite technology classes for ATI. His colleague, Edward L. Keith, teaches Fundamentals of Rockets and Missiles, Space Mission Analysis and Design, and other rocket-related classes for ATI. These instructors are available to reporters who need more information. Contact ATI at 410-956-8805.
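As a side note on the reliability figures quoted above, here is a minimal sketch, using an assumed record of four consecutive Falcon-9 successes at the time and the standard success-run formula, of how a "demonstrated reliability" bound can be read from a short, failure-free launch history. It is an illustration, not a calculation taken from the article.

```python
# A small sketch (an illustration, not the article's own analysis) of reading
# "demonstrated reliability" from a short, failure-free launch record.
# If the true reliability were R, the chance of n straight successes is R**n.
# The R for which that chance equals (1 - C) is the classic one-sided lower
# confidence bound at confidence level C: R = (1 - C)**(1/n).

def demonstrated_reliability(consecutive_successes: int, confidence: float) -> float:
    """Lower bound on reliability demonstrated by an unbroken success streak."""
    return (1.0 - confidence) ** (1.0 / consecutive_successes)

# Assumed Falcon-9 record at the time this piece was written: 4 successes, 0 failures.
for conf in (0.50, 0.68, 0.90):
    bound = demonstrated_reliability(4, conf)
    print(f"confidence {conf:.0%}: reliability at least {bound:.2f}")
```

At roughly 68 percent confidence, four straight successes demonstrate a reliability of about 0.75, which is consistent with the hedged "at least 75% reliable" statement above.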

Requiem for the Space Shuttle

The shuttle transportation system was, by any reasonable standard, one of the most complicated engineering projects in the long history of science and technology. But, as it was implemented, it never made much economic sense. In part, this disappointing outcome came about because its payload was too big and heavy to achieve reliable and cost-effective operation.

Why was the shuttle payload so big and heavy? The shuttle payload was originally baselined at 65,000 pounds. It never actually carried that much weight: the heaviest payload it ever flew into space was around 50,000 pounds. But, as a practical matter, even that lighter payload was much too heavy. Military users insisted on heavy-lift capabilities because they wanted to use the shuttle transportation system to launch their big, heavy spy satellites into space.

In my view, a 15,000-pound payload weight would have been a more practical selection. With a correspondingly lighter orbiter, those troublesome thermal tiles would have been unnecessary. And the booster could have been towed (using Kevlar cables) from the shuttle landing strip at Cape Canaveral by 747 airplanes up to a 40,000-foot altitude with a release velocity of about 600 miles per hour.

Unmanned cargo missions using the amazingly inexpensive Russian Soyuz booster – or an American equivalent – could have carried heavy components into low-altitude earth orbits at much more affordable prices. As Figure 1 indicates, the Russians offered to sell the Americans Soyuz missions with 15,400-pound payloads for $12 million each. On such a mission, the delivery cost for each pound of payload would have been only $780, or about 1/6th the comparable cost of the American Delta II booster. In my opinion, we should have bought 1000 Soyuz boosters. Instead, we put severe restrictions on their use for boosting American satellites into space. In my view we lost a golden opportunity.

But, actually, chemical rockets – Soyuz, Delta II, the shuttle transportation system – are the problem, not the solution. So what is the alternative?

Satellites Without Rockets

As I have often told my students in my "Launch and Orbital Mechanics" short courses: "There is nothing wrong with the space program that the elimination of chemical rockets wouldn't cure." Chemical rockets are dirty, dangerous, fragile, unreliable, and horribly expensive. A simple mathematical derivation shows that a typical multistage rocket of modern design wastes about 97 percent of its energy accelerating propellants it's going to burn later. If cars were similarly inefficient, few people would want to own one.

Is there a better way to launch payloads into space? In my 4-day short courses on "Launch and Orbital Mechanics", held at key locations around the country, I list and discuss 30 alternatives to chemical rockets. These include solar electric propulsion, laser-powered rockets, maglev boosters, nuclear powered rockets, tethered satellites, and skyhooks (space elevators). These alternatives, implemented in the proper combination, could revolutionize the way future generations conduct large-scale operations and do business in space.

What If the Space Shuttle Engineers Had Designed My Car?

Many times, over the years, I have taught at Vandenberg Air Force Base in California where satellites are launched into near-polar orbits. Vandenberg is 175 miles from my home in Seal Beach, California. It is one of the few short-course locations I drive to in my car.
Mostly I fly to the various locations where the courses are offered. A few years ago, I was driving back home from Vandenberg Air Force Base when an interesting question occurred to me: "What would my car be like if the engineers who designed the space shuttle orbiter had designed it?"

When I got back to Seal Beach, I kludged together Figure 2. Study its contents to see how incredibly inefficient the shuttle transportation system turned out to be. Notice, for example, that only 1 percent of the lift-off weight of the shuttle transportation system is useful payload that ends up being left in space. If my car had been designed with similar payload-carrying capabilities, it would be able to deliver only one 21-pound briefcase to Vandenberg or any other destination 175 miles away.

Expendable rockets are not much more efficient. On a typical mission only about 2.5 to 3.0 percent of their lift-off weight is useful payload. Isn't it becoming abundantly clear why there's nothing wrong with the space program that the elimination of chemical rockets wouldn't cure?
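The arithmetic behind that briefcase comparison is easy to sketch. Using rounded, assumed figures (roughly 4,500,000 pounds at lift-off, a 50,000-pound payload, and a car weighing about 2,100 pounds):

\[
\frac{50{,}000\ \text{lb payload}}{4{,}500{,}000\ \text{lb at lift-off}} \approx 1\%,
\qquad
1\% \times 2{,}100\ \text{lb} \approx 21\ \text{lb},
\]

or about one briefcase of useful cargo per 175-mile trip.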

The Shot Heard ‘Round Aunt Effie’s Cabbage Patch

It was the middle of March, but still the ground was covered with fresh snow and the wind swept in over the north pasture and swirled around the gangling apparatus. He flipped up his rough collar against the wind, but it was hopeless; even fastening all the snaps on his galoshes and buttoning the bottom button of his topcoat would not have kept out the chilly Massachusetts wind. He glanced out at the hazy horizon and then up at the launch apparatus hoping he had thought of everything. The test conditions were far from ideal. The cold air could crack the nozzle and even if it got aloft the wind could drive his awkward little vehicle into the ground before burnout. But it was pointless to consider the risks now, he was committed. The launch would take place today. He posed for a quick photograph and then, crouched behind a wooden lean-to, he cautiously pointed a blowtorch in the direction of the ungainly framework. In an instant, the tiny rocket hurled itself 41 feet into the air and within 2.5 seconds the terrifying roar was over. It was 1926, Charles Lindbergh had not yet made his transatlantic flight, and yet Dr. Robert Goddard stood over the remains of his tiny rocket, smoldering and unimpressive in the snow, and dreamed of rocket flights to the moon and beyond. There would be other launches far more impressive. Forty years later, television newsman Walter Cronkite would desperately brace himself against the windows of his trailer as they rattled from the blast of a rocket 3 miles away; but here today in Aunt Effie’s cabbage patch, the world’s first liquid-fueled rocket had been flight tested.

Hazardous Encounters with Orbital Space Debris

On July 2, 1982, during the final day of their mission, astronauts Ken Mattingly and Henry Hartsfield, riding the space shuttle Columbia, flew uncomfortably close to a spent Russian Intercosmos rocket high above the northwestern coast of Australia. By coincidence, that same region of space had experienced an earlier encounter with orbiting space debris when America's Skylab crashed in the outback in 1979. Astronauts Mattingly and Hartsfield were warned in advance, but they could not catch a glimpse of the big Intercosmos rocket as it whizzed by their spacecraft at 7000 mi/h. Six months later, Russia's Cosmos 1402 abruptly slammed into the earth. Like its sister ship Cosmos 954, it was a spy satellite — powered by a nuclear reactor fueled with radioactive uranium. But, unlike Cosmos 1402, Cosmos 954 had crashed to earth on the sovereign territory of an innocent nation. In 1978, when Cosmos 954 fell in northern Canada, the Canadian government spent $6 million cleaning up the mess. Later, with some resistance, the Soviet Union reimbursed Canada for half that amount. Military engineers track approximately 7000 objects in space as big as a soccer ball or bigger. A few hundred of them are functioning satellites. The rest are a varied lot: spent rockets, protective shrouds, clamps, fasteners, jagged fragments from space vehicle explosions, even an astronaut's silver glove. In addition to the 7000 objects of trackable size, tens of thousands of smaller ones are presently swarming around our planet. These orbiting fragments are hazardous, but not to the people living on the ground below. On average, human beings occupy the surface of the earth quite sparsely: only about 17 people per square mile. Skylab was among the largest reentry bodies ever to plunge through the atmosphere, but scientific calculations indicated that the probability of any specific individual being hit by Skylab debris was only about 1 in 200 billion. Actually, no calculations at all are needed to demonstrate that the probability of being bashed by orbital space debris is extremely small. More than 1500 large, hypervelocity meteorites are known to have plunged through the atmosphere and hit the earth — roughly 8 per year for the past 200 years. Many of them shattered into smaller fragments on reentry, but not one single human being's death certificate reads "death by meteorite." And yet, if we go back far enough into the dim shadows of history, we may find at least one reliable reference to human injury and death caused by falling meteorites. It is buried in the Bible's Book of Joshua, in a passage describing how terrified soldiers fleeing from battle were killed by "stones falling from heaven."

Some Spectacular Space-Age Repairs

"Houston, we have a problem." "Say again, Apollo 13." "We have a problem." It was a problem all right! Seconds before, a violent explosion had ripped through the Apollo Service Module, knocking out two of its three fuel cells and dumping the astronauts' precious oxygen supplies into black space. At first they managed to remain fairly calm, but as their crippled spacecraft hurtled on toward the moon, a fresh crisis suddenly unfolded: The lithium hydroxide canisters in the LEM (Lunar Excursion Module) and the Service Module turned out to be noninterchangeable, and as a result, the air the astronauts were breathing was rapidly becoming polluted. Fortunately, they were able to patch together a workable connection to the canisters in the Service Module, thus making them usable in their overcrowded "lifeboat" LEM.

During the next few years other astronauts successfully achieved a number of other spectacular spaceborne repairs, thus proving that astronauts were definitely not merely along for the ride as "Spam in a can," as a cynical journalist once wryly observed. When the micrometeoroid shield was ripped off the main body of the Skylab, for instance, the astronauts erected a big cooling parasol to shield themselves from the burning rays of the sun. On the next mission, astronauts Jack R. Lousma and Owen K. Garriott remodeled the Skylab's parasol sunshade by erecting two 55-foot metal poles to form a large A-frame tent over their freshly occupied home in space. Other Skylab astronauts repaired an ailing battery, retrieved exposed film from the Apollo telescope mount, and removed and replaced several gyroscopes used in stabilizing their wobbling craft. These complicated tasks were all performed in full space suits outside the protective envelope of the Skylab modules. The retrieval and redeployment of the Solar Max satellite — which was filmed with IMAX cameras operated by other space shuttle astronauts — provides another powerful illustration of the skill and dexterity of humans in space.

Space-age robots have also performed in a similarly impressive manner. For instance, when the television camera mounted on the elbow of the shuttle's 50-foot robot arm sent back pictures of a big chunk of ice growing on the outside of the waste-water vent on the shuttle orbiter, the Canadian robot arm helped the astronauts execute a clever solution. Rather than risk possible damage to the shuttle's delicate heat shield should chunks of the ice break loose during reentry, the astronauts were instructed to use the robot arm like a big, heavy trip hammer to knock the ice loose. On another mission, the robot arm was ready to release the Earth Radiation Budget Satellite into the blackness of space. Unfortunately, during deployment, its solar arrays got stuck in an awkward position, so the astronauts used the robot arm to shake the satellite vigorously. Then they held it up to the warming rays of the sun so its solar array could unfold.

Routing Ships on the High Seas

Researchers and technicians at Oceanroutes in Palo Alto, California, earn their daily bread using three different types of satellites for finding safe and efficient trajectories for large oceangoing vessels. Each optimum route takes into account real-time weather conditions, the physical characteristics of the ship, and the wishes of the ship's master — who is given an updated trajectory twice each day. The Navstar constellation provides accurate positioning information that is relayed from the ship to Palo Alto through INMARSAT satellites. Weather satellites from various countries furnish the necessary meteorological reports. Sitting in their comfortable offices in Palo Alto and in several other cities around the globe, Oceanroutes' engineers work with more than a thousand ships in a routine month. Each recommended route is custom designed for that particular ship "on that specific voyage, with the given cargo load, status of trim and draft, with the ship's own distinctive speed and sea-handling characteristics." The computer program emphasizes emerging weather, but it also takes into account currents, fog, choke points, navigational hazards, and sea ice in northern regions. Some cargoes, such as fruit and oil, are temperature-sensitive; others, such as automobiles and heavy machinery, may shift under heavy waves. Still others have time-critical deliveries. The Oceanroutes program successfully takes these and numerous other factors into account whenever it makes its routing recommendations. The cost of the service for a typical voyage is $800, a fee that is repaid 30 to 40 times over by shortened travel times and more efficient maritime operations. In 43,000 crossings aided by Oceanroutes' computers, travel times have been reduced an average of four hours in the Atlantic and eight hours in the Pacific. Operating a large oceangoing vessel can cost as much as $1,000 per hour, so time savings alone can translate into enormous reductions in cost. Other expenses are also reduced. Before Oceanroutes' services were available, the cost of repairing weather-damaged ships ran from $32,000 to $53,000 in an average year. Today, for some companies, these costs have plummeted to only about $6,000. Cargo damage has also declined. One international auto dealer told a team of Oceanroutes' researchers that his cargo damage claims had dropped by over $500,000 per year.

Booster Rocket Pioneers

The rockets that hurl the Navstar satellites into orbit are direct descendants of the highly destructive Chinese "fire arrows" built and launched by Chinese military engineers 750 years ago. The earliest Chinese rockets were slender tubes stuffed with gunpowder and fastened to long flat sticks that jutted out behind the rocket to promote stable flight. In 1232 they were launched in large quantities on the outskirts of Peking, when special Chinese rocket brigades successfully pushed back Mongol cavalrymen. And, in 1249, they were used with great effect by the Moors in their military campaign along the Iberian Peninsula.

Near the beginning of the nineteenth century, Englishman William Congreve concocted superior powder blends and moved the stabilizing stick to the center of the rocket for improved accuracy. In 1807 the British blasted Copenhagen with 25,000 Congreve rockets. Seven years later, when they bombarded Fort McHenry, they inadvertently provided "the rocket's red glare," which helped inspire America's National Anthem.

In 1903 a lonely Russian schoolteacher, Konstantin Tsiolkovsky, correctly concluded that rockets fueled with liquid hydrogen and liquid oxygen would be considerably more efficient than the simpler solid-fueled rockets then in use. He also devised a concept for stacking rockets one atop the other to yield the enormous speeds necessary for successful interplanetary travel. Twenty-three years later Dr. Robert Goddard knelt on the frozen ground in his Aunt Effie's cabbage patch at Auburn, Massachusetts, and casually used a blowtorch to ignite the world's first liquid-fueled rocket. Goddard is today revered for his expansive expertise, but during his lifetime his contemporaries criticized him unmercifully because he had once dared to mention the possibility of sending a small flash-powder charge to impact the moon. Years later, when one of his liquid-fueled rockets reached its design altitude of 2,000 feet, a banner headline wryly commented: "Moon Rocket Misses Target by 237,799 Miles!"

The rockets built by the Goddard team were all handcrafted machines, but Germany's rocketeers, working under the direction of Wernher von Braun, constructed liquid-fueled rockets in mass-production quantities. When World War II fizzled to a halt, many of the German scientists came to America to help its military and space-age rocketeers. In 1961, when President Kennedy courageously announced that the United States would conquer the moon, America's rocketeers had not yet orbited a single astronaut. The Saturn V moon rocket they later developed for the mission was the pinnacle of the rocket maker's art. But it was expendable; NASA's space shuttle is a "reusable" booster. It delivers payloads weighing as much as 50,000 pounds and brings others back to earth for refurbishment and repair, gently landing — as TV newsman Edwin Newman once observed: "like a butterfly with sore feet."

Watching an Apple Falling from a Tree



In 1665, Isaac Newton left Cambridge University and returned to his hometown of Woolsthorpe to escape the worst ravages of the Black Plague. Safely back among familiar surroundings, he made landmark discoveries that have provided us with precisely the keys we needed to conquer space.

Although the young Newton had reportedly been a mediocre student in the early grades, his powerful intelligence asserted itself even before he reached his teenage years. When he was still a tow-headed youngster, for instance, he managed to construct a charming little windmill backed up by one mouse-power so it could go on turning when the wind refused to blow. Later, he made a paper kite rigged to carry a small lantern high above the British countryside. The people of Woolsthorpe had never before seen flickering lights floating across the nighttime sky, so the young Isaac may have been responsible for some of the earliest sightings of UFOs.

At the age of 23, while relaxing on his mother's farm, Isaac Newton, by his own account, saw an apple falling from a tree. That simple incident caused him to wonder why apples always tumble down. The apple tumbled down toward the ground while the pale August moon continued to sail contentedly overhead. Soon he theorized that the force of gravity tugging on both apple and moon falls off systematically with increasing distance in the same way a light beam dissipates as we move farther away from its source. Double the distance and its intensity falls off by a factor of 4.

Thus, by Newton's reckoning, the force of gravity pulling on the moon should be only about 1/3600th as strong as the gravity we experience at the surface of the earth. In 1 minute, he soon calculated, a falling apple would be pulled downward about 11 miles, but the moon would fall toward the earth only about 16 feet. During that same 1-minute interval, the moon's orbital velocity also carried it sideways 38 miles. Consequently, its horizontal and vertical motions combine to bring it back onto the same gently curving circular path over and over again.
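In modern notation, the arithmetic behind that comparison runs roughly as follows (a sketch using today's rounded values, with the moon orbiting about 60 earth radii away, rather than Newton's own figures):

\[
\frac{a_{\text{moon}}}{g} = \left(\frac{R_E}{60\,R_E}\right)^{2} = \frac{1}{3600},
\qquad
d = \tfrac{1}{2}\,a\,t^{2}.
\]

With \(t = 60\) seconds, an apple near the ground falls \(\tfrac{1}{2}(9.8)(60)^{2} \approx 17{,}600\ \text{m}\), or about 11 miles, while the moon, accelerating at only \(9.8/3600 \approx 0.0027\ \text{m/s}^{2}\), falls \(\tfrac{1}{2}(0.0027)(60)^{2} \approx 4.9\ \text{m}\), or about 16 feet.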

Isaac Newton figured out how gravity works because of a fortunate encounter with his mother’s favorite apple tree. Armed with only his inverse square law of gravitation, three deceptively simple laws of motion, and one of the most powerful intellects that ever pondered anything, Newton quietly set about to unravel the hidden secrets of the universe.

Using Wounded Dogs to Navigate Ships on the High Seas



Finding the latitude of a sailing ship can be surprisingly easy: sight the elevation of the Pole Star above the local horizon. Finding longitude turns out to be quite a bit harder because, as the earth rotates, the stars sweep across the sky 15 degrees every hour. A one-second timing error thus translates into a 0.25-nautical-mile error in position. How is it possible to measure time on board a ship at sea with sufficient accuracy to make two-dimensional navigation a practical enterprise?
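The underlying arithmetic is a quick sketch (stated for a ship at the equator, where one arcminute of longitude spans one nautical mile):

\[
\frac{360^{\circ}}{24\ \text{h}} = 15^{\circ}\ \text{per hour} = 15''\ \text{per second of time},
\qquad
15'' = 0.25' \approx 0.25\ \text{nautical mile of longitude}.
\]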

One 18th-century innovator, whose name has long since been forgotten, advocated the use of a special patent medicine said to possess some rather extraordinary properties. Unlike other popular nostrums of the day, the Powder of Sympathy, as its inventor, Sir Kenelm Digby, called it, was applied not to the wound but to the weapon that inflicted it. The World of Mathematics, a book published by Simon and Schuster, describes how this magical remedy was to be employed as an aid to maritime navigation.

Before sailing, every ship should be furnished with a wounded dog. A reliable observer on shore, equipped with an accurate clock and a bandage from the dog's wound, would do the rest. Every hour on the dot, he would immerse the dog's bandage in a solution of the Powder of Sympathy, and the dog on shipboard would yelp the hour.

As far as we know, this intriguing method of navigation was never actually tested under realistic field conditions, so we have no convincing evidence that it would have worked as advertised.

Rocket Propulsion Fundamentals



White-hot combustion by-products, blasted rearward with blinding speed, generate the propulsive force that hurls a rocket skyward. Pressure inside the rocket combustion chamber pushes in all directions to form balanced pairs of opposing forces which nullify one another, except where the hole for the exhaust nozzle is placed. Here the pressure escapes, causing an unbalanced force at the opposite side of the combustion chamber that pushes the rocket up toward its orbital destination. Both rockets and jets are based on the same principle that causes a toy balloon, carelessly released, to swing in kamikaze spirals around the dining room. A jet sucks its oxygen from the surrounding air, but a rocket carries its own supply of oxidizer on board. This oxidizer can be stored in a separate tank, mixed with the fuel, or chemically embedded in oxygen-rich compounds.

A liquid rocket usually has two separate tanks, one containing the fuel, the other containing the oxidizer. The two fluids are pumped or pushed under pressure into a small combustion chamber above the exhaust nozzle, where burning takes place to create thrust. A solid rocket is like a slender tube filled with gunpowder; the fuel and oxidizer are mixed together in a rubbery cylindrical slug called the grain. Solid propellants are not pumped into a separate combustion chamber. Instead, burning takes place along the entire length of the cylinder. Consequently, the tank walls must be built strong enough to withstand the combustion pressure.

Rocket design decisions are dominated by the desire to produce the maximum possible velocity when the propellants are burned. A rocket's velocity can be increased in two principal ways: by using propellants with a high efficiency and by making the rocket casing and its engines as light as design constraints permit. Unfortunately, efficient propellants tend to have some rather undesirable physical and chemical properties. Liquid oxygen is a good oxidizer, but it will freeze all lubricants and crack most seals. Hydrogen is a good fuel but it can spark devastating explosions. Fluorine is even better but it is so reactive it can even cause metals to burn. Miniaturized components, special fabrication techniques, and high-strength alloys can all be used to shave excess weight. But there are limits beyond which further weight reductions are impractical. The solution is to use staging techniques whereby a series of progressively smaller rockets are stacked one atop the other. Such a multistage rocket cuts down its own weight as it flies along by discarding empty tanks and heavy engines. However, orbiting even a small payload with a multistage rocket requires an enormous booster. The Saturn moon rocket, for example, outweighed the Apollo capsule it carried into space by a factor of 60 to 1.
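To make the staging argument concrete, here is a minimal sketch in Python. All of the masses and specific impulses below are illustrative assumptions of mine, not figures from the text; the sketch simply applies the ideal rocket equation stage by stage and shows why dropping empty tanks and engines pays off.

```python
# Ideal rocket-equation sketch of a hypothetical three-stage booster.
# Every number below is an assumption chosen only to be roughly Saturn-class.

import math

G0 = 9.80665  # standard gravity, m/s^2


def stage_delta_v(isp_s, m_full, m_empty):
    """Ideal velocity gain from one stage (Tsiolkovsky's rocket equation)."""
    return isp_s * G0 * math.log(m_full / m_empty)


# Hypothetical stages: (specific impulse [s], propellant [kg], dry mass [kg])
stages = [(300.0, 2_000_000.0, 130_000.0),
          (420.0, 450_000.0, 40_000.0),
          (420.0, 110_000.0, 12_000.0)]
payload = 45_000.0  # kg

total_dv = 0.0
for i, (isp, prop, dry) in enumerate(stages):
    # Everything riding on top of this stage: the later stages plus the payload.
    upper = payload + sum(p + d for _, p, d in stages[i + 1:])
    m_full = upper + dry + prop   # mass when this stage ignites
    m_empty = upper + dry         # mass at burnout, after which the stage is dropped
    total_dv += stage_delta_v(isp, m_full, m_empty)

print(f"ideal delta-v with staging: {total_dv / 1000:.1f} km/s")
```

With these made-up numbers the three stages together deliver roughly 13 km/s of ideal velocity, even though the payload amounts to less than 2 percent of the lift-off weight, a proportion consistent with the 60-to-1 figure quoted above.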

Schemes for Enhancing the Saturn V Translunar Payload Capability


SCHEMES FOR ENHANCING THE SATURN V MOON ROCKET’S TRANSLUNAR PAYLOAD CAPABILITY

INTRODUCTION

When I was a teenager struggling to master algebra, geometry, and trigonometry in a tiny little high school in the Bluegrass region of Kentucky, I loved doing mathematical derivations. Those squiggly little math symbols arranged in such neat geometrical patterns on the printed pages held endless fascination for me. But never in my wildest dreams could I ever have imagined that I might someday be stringing together long, complicated mathematical derivations that would allow enthusiastic American astronauts to hop around on the surface of the moon like gigantic kangaroos. Nor could I have imagined that someday my Technicolor derivations would end up saving more money than a typical American production line worker could earn in a thousand lifetimes of fruitful labor.

I was born and raised in a very poor family. My brother once characterized us as "gravel driveway poor". At age 18 I had never eaten in a restaurant. I had never stayed in a hotel. I had never visited a museum. But, somehow, I managed to work my way through Eastern Kentucky University, one of the most inexpensive colleges in the state. I graduated in 1959 with a major in mathematics and physics, eighteen months after the Russians hurled their first Sputnik into outer space. The next summer I accepted a position with Douglas Aircraft in Santa Monica, California, and what a wonderful position that turned out to be! At Douglas Aircraft we were launching one Thor booster rocket into outer space every other week. In 1961, after I earned my Master's degree in mathematics at the University of Kentucky, I was recruited to work on Project Apollo. And I am convinced that anyone who ever worked on the Apollo Project would tell you that Apollo was the pinnacle of the rocket maker's art.

At age 18 I had never eaten in a restaurant. I had never stayed in a hotel. I had never visited a museum. But somehow, by some miracle, six years later at age 24, I was getting up every day and going to work and helping to put American astronauts on the moon!

WHAT IS A MATHEMATICAL DERIVATION?

A mathematical derivation is a series of mathematical and logical steps that starts with something that every expert can agree is true and ends up with a useful conclusion, usually one or more mathematical equations amenable to an easy solution. Hollywood's version of a mathematical derivation is almost always carried out on big, long blackboards with no words anywhere. But I did all of my derivations with words and in Technicolor – using colored pencils and colored marking pens – on big, oversized quad pads four times as big as a standard sheet of paper. One day when my daughter, Donna, was about five years old, she wandered into my den and watched me struggling over a particularly difficult derivation. "This is embarrassing," she maintained. "My father colors better than I do."

Most of the derivations my friend, Bob Africano, and I put together in those exciting days centered around our struggles to enhance the performance capabilities of the mighty Saturn V moon rocket. The Saturn V was 365 feet tall. It weighed six million pounds. It generated 7.5 million pounds of thrust and, over nine pulse-pounding Apollo missions, it carried 24 American astronauts into the vicinity of the moon. Twelve of those astronauts walked on the moon's surface. The other twelve circled around it without landing.

Even the simplest mathematical derivation can be difficult, frustrating work and, over the years, we put together hundreds of pages of them. For ten years, and more, we worked 48 to 60 hours per week. We were well paid and treated extremely well and we loved what we were doing for a living. But we were often teetering on the ragged edge of exhaustion. One night at a party I observed that doing mathematical derivations for a living was like "digging ditches with your brain!" In my career I followed the dictum of the British mathematician Bertrand Russell. "When you're young and vigorous, you do mathematics," he once wrote. "In middle age you do philosophy. And in your dotage, you write novels." Sad to say, I just finished my first novel! It is intended to become a Hollywood motion picture entitled The 51st State. So this white paper is being composed while I am in my dotage.

THOSE CHALLENGING DAYS AT ROCKWELL INTERNATIONAL

I joined the staff of Rockwell International at Downey, California, in 1964. Each morning I would jaywalk across Clark Avenue to get to work in Building 4. I was assigned to a systems engineering group consisting of about 20 engineers and support personnel led by our supervisor, Paul Hayes. Paul was proficient in several branches of mathematics and he carefully checked and rechecked the mathematical derivations we were publishing in internal letters, company reports, and in the technical papers we were presenting at big conventions around the country and in a few foreign countries, too.

Most of our time and effort was devoted to figuring out how to operate the S-II stage (the second stage of the Saturn V moon rocket) with maximum practical efficiency. We didn't make any modifications to the hardware; the hardware was already built. Instead, we used the mathematics and the physics we had learned in school, as effectively as possible, to maximize the payload of the mighty Saturn V. Over about ten years on the project, Africano and I – and various others – developed hundreds of pages of useful mathematical derivations. Various other engineers scattered around the country were also trying to figure out how to send more payload to the moon. Joe Jackson, Scott Perrine, and Wayne Deaton at NASA Huntsville, for instance, and Carol Powers and Chuck Leer and their colleagues at TRW in Redondo Beach, California, all made significant contributions to this important work. During those early days we were taking mathematics and physics courses at UCLA and UCI (the University of California at Irvine) and teaching courses of our own at the California Museum of Science and Industry, Cerritos College, USC, and at Rockwell International in Downey and Seal Beach, California.

Our supervisor, Paul Hayes, showed remarkable patience and leadership when the mathematics (or my own stubbornness) led me down blind alleys. On one occasion, for example, I spent about 3 weeks formulating a more precise family of guidance equations for our six-degree-of-freedom trajectory program. Unfortunately, when those equations were finally finished, checked, and programmed, the rocket's trajectory hardly changed at all. I was rather apologetic, but Paul had an entirely different way of looking at what we were doing for a living. "It's OK," he told me softly. "Try something else." He exhibited the same magnanimous attitude when I insisted on using disk storage to replace the nine magnetic tapes we were using for "scratch-pad" memory. We burned up two weeks or so reprogramming the routines in an attempt to save computer time (which in those days cost $700 per hour!). Unfortunately, as our programmer, Louise Henderson, had predicted, no computer-time saving at all resulted from this tedious and time-consuming effort. Paul Hayes realized that we could not make major breakthroughs in the difficult fields of applied mathematics, orbital mechanics, and systems engineering unless we were willing to risk humiliating failures along the way!

Fortunately, we did, eventually, perfect four powerful mathematical algorithms that saved an amazing amount of money for the Apollo program. These algorithms, which required no hardware changes and cost virtually nothing to implement, involved at least eight difficult branches of advanced mathematics.
In 1969 Bob Africano and I summarized the salient characteristics of these four mathematical algorithms in a technical paper we presented at a meeting of the American Institute of Aeronautics and Astronautics (AIAA) at the Air Force Academy in Colorado Springs, Colorado. It was entitled "Schemes for Enhancing the Performance Capabilities of the Saturn V Moon Rocket." In that paper we showed how those four mathematical algorithms increased the translunar payload-carrying capabilities of the Saturn V by about 4700 pounds. Measured in 1969 dollars, each pound of that payload was worth $2000, or about 5 times its weight in 24-karat gold. NASA ended up flying nine manned missions around the moon. Consequently, those mathematical algorithms, liberally laced with physics and astrodynamics, ended up saving the American space program $2.5 billion valued in accordance with today's cost of $1,140 for each ounce of gold. In the paragraphs to follow, I will attempt to summarize the methods we used to achieve those important payload gains and to describe the mathematical techniques we employed in accentuating the rocket's performance.

PROPELLANT UTILIZATION SYSTEMS

A large liquid-fueled rocket usually includes two separate tanks, one containing the fuel and the other containing the oxidizer. These two fluids are pumped or forced under pressure into the combustion chamber immediately above the exhaust nozzle, where burning of the propellants takes place. If we were to load 1000 rockets with the required quantities of fuel and oxidizer, then fly them to their destination orbits, we could expect – due to random statistical variations along the way – to have a small amount of fuel left over on 500 of those flights and a small amount of oxidizer left over on the other 500. Neither the fuel nor the oxidizer can be burned by itself because burning requires a mixture of the two fluids.

In order to minimize the average weight of the fuel and oxidizer residuals on the upper stages of the Saturn V rocket, the designers had introduced so-called Propellant Utilization Systems. A Propellant Utilization System employs sensors to monitor the quantities of fuel and oxidizer remaining throughout the flight. It then makes automatic real-time adjustments in the burning mixture ratio to achieve nearly simultaneous depletion of the two fluids when the rocket burns out.

For the Saturn V, the necessary measurements were made with capacitance probes running along the length of the fuel tank and the oxidizer tank. A capacitance probe is a slender rod encased within a hollow cylinder. Openings at the bottom of the hollow cylinder allow the fluid level on the inside of it to duplicate its level on the outside. As the fluid level inside the cylinder decreases, the electrical capacitance of the circuit changes to provide a direct measure of the amount of fluid remaining in the tank. These continuous fluid-level measurements are then used in making small real-time adjustments in the rocket's burning mixture ratio to achieve nearly simultaneous depletion of the two propulsive fluids.
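In essence, such a control law steers the commanded mixture ratio toward the ratio of the propellant masses still in the tanks. The following is a minimal sketch of that idea (my own illustration with invented numbers and limits, not the actual Saturn V control logic):

```python
# Toy propellant-utilization loop: read the remaining fluid masses, then
# nudge the engine mixture ratio toward the value that empties both tanks
# at the same time.  Numbers and limits are illustrative assumptions.

def pu_mixture_ratio(ox_remaining_kg, fuel_remaining_kg,
                     nominal_mr=5.0, max_excursion=0.5):
    """Commanded oxidizer-to-fuel mass ratio for simultaneous depletion.

    Burning at exactly the ratio of the remaining masses would empty both
    tanks together; a real system clamps the command to small excursions
    around the engine's nominal mixture ratio.
    """
    ideal_mr = ox_remaining_kg / fuel_remaining_kg
    low, high = nominal_mr - max_excursion, nominal_mr + max_excursion
    return min(max(ideal_mr, low), high)


# Example: the capacitance probes report slightly fuel-heavy tanks,
# so the commanded ratio drops a little below the nominal 5.0.
print(pu_mixture_ratio(ox_remaining_kg=360_000, fuel_remaining_kg=74_000))
```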

THE PROGRAMMED MIXTURE RATIO SCHEME

The Propellant Utilization System on the S-II stage increased the performance of the booster by an extra 1400 pounds of payload headed toward the moon. Unfortunately, modeling the behavior of the propellant utilization systems in flight created a complicated problem for the mission planning engineers. When we were simulating the translunar trajectories and the corresponding payload capabilities for the Saturn V, we found that, if we ran two successive simulations with identical inputs, each simulation would yield a slightly different payload at burnout. These rather unexpected payload variations came about because the computer program's subroutines automatically simulated slightly different statistical variations in the Propellant Utilization System during each flight.

In order to circumvent this difficulty, we did what engineers almost always do – we called a meeting. And at that meeting we brainstormed various techniques for making those pesky payload variations go away. Fortunately, no one in attendance that day was able to come up with a workable solution. Sitting in the back of the room was a long, lanky propulsion specialist named Bud Brux, who said almost nothing during the meeting. But, when Bud Brux got back to his office, he began thinking about the problem we had encountered. "Hey, wait a minute!" he thought. "The reason we build a rocket is to put payload into space. If something is causing that payload to vary, maybe we should try to accentuate the effect, rather than trying to make it go away."

Bud Brux then wrote us a simple, two-page internal letter suggesting that we vary the mixture ratio as much as we possibly could in a few of our computer simulations to see if we could produce important performance gains. We were not particularly excited by the letter he wrote; we received lots of internal letters in those days. But, when those first few trajectory simulations came back from the computer, our excitement shot up by a decibel or two. On the best of those simulations, the Saturn V moon rocket was able to carry nearly 2700 extra pounds of payload to the moon, each pound of which was worth $2000 – or five times its weight in 24-karat gold.
The five J-2 engines mounted on the second stage of the Saturn V moon rocket were originally designed to burn their propellants at a constant steady-state mixture ratio of 5 to 1 (5 pounds of liquid oxygen for every pound of liquid hydrogen). By working our way through the proper mathematical derivations, however, we showed that, if we started out with a mixture ratio of 5.5 to 1, then abruptly shifted to 4.5 to 1, the booster rocket could hurl an extra 2700 pounds onto its translunar trajectory. This so-called Programmed Mixture Ratio Scheme required no hardware changes. We merely opened 5 existing valves a little wider in mid-flight.
The sketches in Figure 1 highlight some of the salient characteristics of the Programmed Mixture Ratio Scheme as applied to the second stage of the Saturn V moon rocket. Early in that rocket's flight, we set the burning mixture ratio at 5.5 to 1 (5.5 pounds of oxidizer for every pound of fuel). But 70 percent of the way through the burn we abruptly shifted that mixture ratio to a lower value of 4.5 to 1. As the small graphs in Figure 1 indicate, this shift in the mixture ratio provided the rocket with high thrust early in its flight at a slightly lower specific impulse.* Then, following the Programmed Mixture Ratio shift, it had a lower thrust, but a higher specific impulse.

After studying the computer simulations and putting together several dozen pages of mathematical derivations, we concluded that the abrupt Programmed Mixture Ratio shift caused the rocket to leave more of its exhaust molecules lower and slower as it flew toward the moon. This, in turn, put less energy into the exhaust molecules and correspondingly more energy into the payload. The resulting performance gains are not insignificant. On each of the missions we flew to the moon, the Programmed Mixture Ratio Scheme allowed us to send 2700 extra pounds of payload onto the rocket's translunar trajectory!

When the last Apollo mission had been completed, I wrote an internal letter highlighting the clever insights and the important engineering accomplishments of our illustrious colleague. "If Bud Brux had sent us a note telling us where five solid gold Cadillacs were buried in the company parking lot," I concluded, "it would not have been worth as much as the note he actually wrote!"

In my view, mathematical derivations that involve moving objects such as a booster rocket or an orbiting satellite can be surprisingly interesting. Those that center around objects that move along optimal trajectories are even more interesting. But the most interesting derivations of all involve objects that move along optimal trajectories while experiencing random statistical variations. The work that we did on optimal fuel biasing fell into the third category, with random statistical variations superimposed on a booster rocket that was moving along an optimal trajectory.

__________________
* The specific impulse of a rocket propellant combination provides us with a measure of the efficiency of the rocket. It equals the number of seconds during which a pound of the propellant can produce a pound of thrust.
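A rough numerical sketch of the effect appears below. The masses and specific impulses are my own assumptions, not the Rockwell figures, and the ideal-rocket-equation model captures only part of the effect, since it ignores the trajectory and gravity-loss details that the full trajectory simulations accounted for. Even so, it shows the direction of the gain: burn the lower-specific-impulse, higher-thrust mixture while the vehicle is heavy, and save the higher-specific-impulse mixture for the end of the burn.

```python
# Ideal-rocket-equation comparison of a constant mixture ratio against a
# programmed mixture-ratio shift.  All numbers are illustrative assumptions.

import math

G0 = 9.80665                                   # m/s^2
ISP = {4.5: 429.0, 5.0: 425.0, 5.5: 421.0}     # assumed vacuum Isp values, s

m0 = 600_000.0                                 # assumed mass at S-II ignition, kg
fuel = 71_700.0                                # assumed LH2 load, kg
lox = 5.0 * fuel                               # LOX load fixed at the nominal 5:1 total


def dv(isp, m_start, m_end):
    return isp * G0 * math.log(m_start / m_end)


m_burnout = m0 - fuel - lox

# Case 1: constant 5.0 mixture ratio for the whole burn.
dv_const = dv(ISP[5.0], m0, m_burnout)

# Case 2: burn at 5.5 until the remaining propellant can be finished at 4.5
# with both tanks emptying together, then shift the valves.
f1 = (lox - 4.5 * fuel) / (5.5 - 4.5)          # LH2 burned during the first phase
m_shift = m0 - f1 * (1.0 + 5.5)                # vehicle mass at the shift point
dv_pmr = dv(ISP[5.5], m0, m_shift) + dv(ISP[4.5], m_shift, m_burnout)

print(f"constant mixture ratio : {dv_const:7.1f} m/s")
print(f"programmed shift       : {dv_pmr:7.1f} m/s  (gain {dv_pmr - dv_const:+.1f} m/s)")
```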

OPTIMAL FUEL BIASING

If we load 1000 identical hydrogen-oxygen rockets with the desired amounts of fuel and oxidizer in the proper ratio and then fly all 1000 of them into earth orbit along 1000 statistically varying trajectories, approximately 500 of them will end up with fuel residuals at burnout, and the other 500 will end up with oxidizer residuals. Moreover, on the average, the 500 oxidizer residuals will turn out to be approximately five times heavier than the 500 fuel residuals because a typical hydrogen-oxygen rocket carries five pounds of oxidizer for every pound of fuel. Consequently, if we were to add a little extra fuel to each of those 1000 rockets before lift-off, that extra fuel would reduce the statistical frequency of the heavier oxidizer residuals. Moreover, the few remaining oxidizer residuals that did occur would be lighter because of the fuel bias we had added. In practice, however, figuring out precisely how much extra fuel to add to achieve optimal mission performance turned out to be a difficult and expensive problem in statistics.

Our first approach toward determining the optimal fuel bias is flowcharted in Figure 3. In each of our simulations we commanded the computer to choose a fuel bias and then sample a series of statistically varying values representing the variations in the rocket’s thrust, its flow rate, its specific impulse, its mixture ratio, and so on. The computer then substituted each of these statistical values into our optimal trajectory simulation program and, at burnout, it recorded the type of residual (fuel or oxidizer) and its corresponding weight. This so-called “Monte Carlo” simulation procedure was repeated hundreds or thousands of times to allow the computer to construct an accurate statistical “snapshot” similar to the one sketched at the bottom of Figure 2. Repetitions of those computerized procedures executed with different fuel-bias levels allowed us to determine the fuel bias that provided the optimum rocket performance.

This technique worked as advertised, but it turned out to be extremely costly in the days when computer simulation time was so incredibly expensive. However, after several hours of mind-bending mathematical manipulations, I managed to reduce the essence of the optimization problem we faced to a single mathematical equation. It was an integral equation from calculus with variable limits of integration based on the normal distribution functions from the statistics courses I had been attending at UCLA.
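A present-day reader can reproduce the flavor of that brute-force Monte Carlo procedure in a few dozen lines of Python. The sketch below uses a deliberately simplified residual model in which the burned mixture ratio is the only random quantity, and every number in it (nominal propellant loads, the scatter, the candidate bias levels) is an assumed, illustrative value rather than an actual S-II parameter; the real program sampled many more statistical variations and ran a full optimal-trajectory simulation for each sample.

```python
import random

RNG = random.Random(1969)

# Illustrative (assumed) propellant loads for a hydrogen-oxygen stage, in pounds.
NOMINAL_FUEL = 155_000.0      # liquid hydrogen
NOMINAL_OX = 775_000.0        # liquid oxygen (five pounds per pound of fuel)
MR_NOMINAL = 5.0              # oxidizer-to-fuel mixture ratio actually burned
MR_SIGMA = 0.05               # assumed 1-sigma scatter in the burned mixture ratio

# One set of random mixture-ratio samples, shared by every candidate bias so
# that the comparison between bias levels is fair (common random numbers).
MR_SAMPLES = [RNG.gauss(MR_NOMINAL, MR_SIGMA) for _ in range(20_000)]


def residual(fuel_bias, mixture_ratio):
    """Weight of whichever propellant is left over when the other runs out."""
    fuel = NOMINAL_FUEL + fuel_bias
    ox = NOMINAL_OX
    if mixture_ratio * fuel <= ox:        # fuel depletes first -> oxidizer residual
        return ox - mixture_ratio * fuel
    return fuel - ox / mixture_ratio      # oxidizer depletes first -> fuel residual


def three_sigma_residual(fuel_bias):
    """Monte Carlo estimate of the residual weight at the 99.87 percent level."""
    residuals = sorted(residual(fuel_bias, mr) for mr in MR_SAMPLES)
    return residuals[int(0.9987 * len(residuals))]


# Sweep a handful of candidate bias levels, just as the flowcharted procedure
# did, and keep the one that minimizes the 3-sigma residual.
candidates = range(0, 5001, 250)
best = min(candidates, key=three_sigma_residual)
print("best fuel bias of those tried :", best, "lb")
print("3-sigma residual at that bias :", round(three_sigma_residual(best)), "lb")
```

In this toy setup each bias level costs a fraction of a second; on the IBM 7094, where every sample meant a full optimal-trajectory simulation, the same kind of sweep is what ran the bill up to the $95,000 per flight quoted below.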
In the 1960s this Monte Carlo sampling procedure provided our analysis team with a simple and convenient method for finding the optimum amount of fuel bias to add to the S-II stage to minimize its “3-sigma” fuel and oxidizer residuals. Although this procedure was conceptually simple and easy to implement, finding the optimum fuel bias turned out to be extremely costly in an era when a rather primitive IBM 7094 computer rented for $700 per hour. On a typical Apollo mission we were burning through $95,000 worth of computer time to find the optimum bias level. Practical alternatives were mathematically elusive, but eventually we developed a far more economical approach based on Leibniz’s rule for the differentiation of integral equations.
That equation, though simple in appearance, could not be integrated to get a simple answer in closed form. Fortunately, that summer I had been studying a powerful branch of mathematics called the calculus of variations, pioneered, in part, by my hero, Isaac Newton. Isaac Newton, a Christmas present to the world, was born on December 25, 1642. In that era, when a talented mathematician solved a difficult mathematical problem, he would sometimes pose the problem to various other famous mathematicians before publishing the solution. Such a problem had been posed by the Bernoulli brothers, two famous Swiss mathematicians. It centered around the optimal shape for a wire on which a small bead would slide in minimum time from one point to another under the influence of gravity. The Bernoulli brothers had posed this problem to Newton’s rival Gottfried Wilhelm von Leibniz, who had not been able to solve it within the three months they had allotted. So he requested six more months in which to devise a solution. The Bernoulli brothers granted his request, but they also included Newton in their new challenge.* That day Newton came home from a tiring day of working in the British mint, read his mail, and began working on the problem. By the time he fell into bed that night, he had devised a brilliant solution, which he published anonymously. On seeing the solution, John Bernoulli is said to have remarked, “I recognize the lion by his paw!” In his view, no other living mathematician was clever enough to have devised the published solution.

As luck would have it, one of the key relationships in the calculus of variations turns out to be Leibniz’s rule for the differentiation of integral equations with variable limits of integration! I had never seen Leibniz’s rule applied to a statistics problem, but it turned out to be the key to obtaining the solution to the optimal fuel-biasing problem we were seeking. By using Leibniz’s rule, some well-known identities from statistics, a back-handed interpretation of “standard deviation”, and a closed-form version of the rocket equation as derived in 1903 by that lonely Russian schoolteacher, Konstantin Tsiolkovsky, I finally managed to develop a simple closed-form solution to our optimal fuel-biasing problem!

For Rockwell International’s hydrogen-fueled S-II stage, our Monte Carlo approach had typically required 10,000 computer simulations executed at a total cost of $95,000 per flight. The new closed-form approach, based on Leibniz’s rule, required only 13 computer simulations at a cost of around $3000. My supervisor, Paul Hayes, again demonstrated his leadership when he secretly submitted a company suggestion in my name indicating that I had managed to develop a derivation that saved the Saturn S-II Program over $700,000 based on nine manned missions flown into the vicinity of the moon. Paul was sorely disappointed when the reply came back from the suggestion group: No award was to be forthcoming because, as they pointed out: “That’s what he does for a living.” The parametric curves at the bottom of Figure 3, which were constructed using the closed-form equations I derived, were used to determine the optimum fuel-bias level.

______________
* Egged on by British and continental mathematicians and scientists, Newton and Leibniz engaged in a lifetime rivalry. At one point, however, Leibniz paid Isaac Newton a supreme compliment: “Of all the mathematics developed up until the time of Isaac Newton,” he wrote, “Newton’s was, by far, the better half.”
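The actual closed-form derivation is not reproduced in this article, but its flavor can be suggested with the toy residual model used in the Monte Carlo sketch above. If, purely for illustration, the burned mixture ratio is the only random quantity and we minimize the expected residual weight rather than the 3-sigma value, the expected residual becomes an integral whose limit depends on the fuel bias. Differentiating under the integral sign with Leibniz’s rule makes the boundary terms vanish, because the residual is exactly zero at the limit, and the whole optimization collapses to a single root-finding problem. The Python sketch below solves that toy condition by bisection; the criterion and every number in it are assumptions of the sketch, not the author’s actual derivation.

```python
import math

# Toy closed-form counterpart of the Monte Carlo sketch above (illustration only).
# Assume the burned mixture ratio r ~ Normal(MU, SIGMA) is the sole random quantity
# and that we minimize the EXPECTED residual weight rather than its 3-sigma value:
#
#   E[R](b) = INT_{-inf}^{r*} (OX - r F) phi(r) dr + INT_{r*}^{inf} (F - OX/r) phi(r) dr,
#   with F = FUEL0 + b and a bias-dependent limit r* = OX / F.
#
# Leibniz's rule kills the boundary terms (the residual is zero at r = r*), leaving
#   dE/db = P(r > r*) - E[r; r < r*] = 0,
# where E[r; r < r*] = MU*Phi(z) - SIGMA*phi(z) and z = (r* - MU) / SIGMA.

MU, SIGMA = 5.0, 0.05               # assumed mixture-ratio mean and 1-sigma scatter
FUEL0, OX = 155_000.0, 775_000.0    # assumed nominal fuel and oxidizer loads, pounds


def std_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)


def std_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))


def dE_db(z):
    """Derivative of the expected residual with respect to the fuel bias,
    expressed in terms of z = (r* - MU) / SIGMA."""
    return (1.0 - std_cdf(z)) - (MU * std_cdf(z) - SIGMA * std_pdf(z))


# dE/db is positive for very negative z and negative near z = 0, so a simple
# bisection pins down the root of the optimality condition.
lo, hi = -8.0, 0.0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if dE_db(mid) > 0.0:
        lo = mid
    else:
        hi = mid

z_opt = 0.5 * (lo + hi)
r_star = MU + SIGMA * z_opt     # mixture ratio at which neither propellant is left over
bias = OX / r_star - FUEL0      # fuel bias that produces that cutoff ratio
print("optimal fuel bias under the toy model:", round(bias), "lb")
```

Because this toy criterion (expected residual, with mixture-ratio scatter as the only uncertainty) is not the 3-sigma criterion used on the program, and because its inputs are guesses, its output will not match the roughly 600-pound optimum quoted in the next paragraph; the point is only to show how Leibniz’s rule turns a Monte Carlo search into a single equation in one unknown.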
For a typical Apollo mission, the optimum amount of fuel to add turned out to be about 600 pounds, assuming that we wanted the smallest residual propellant remaining at the “3-sigma” probability level (99.87 percent). Bob Africano and I later published a technical paper in which we discussed the fact that biasing to minimize residuals is not the same as biasing to maximize payload. We reasoned that these two bias levels must be slightly different because, when we add fuel bias to minimize the residuals, the fuel bias itself represents a dead weight that the rocket must carry into space. However, we soon discovered that no matter how many times we manipulated the relevant mathematical symbols, we could not discover the desired relationship. Several years later, however, John Wolfe, a superb space shuttle engineer, read our paper and figured out how to bias to maximize payload. John Wolfe was such a generous soul that he even claimed, in print, that Bob Africano and I had solved the problem on our own. Actually, all we had done was to formulate the problem. John Wolfe himself provided the solution!
A clever mathematical algorithm based on Leibniz’s rule for the differentiation of integral equations with variable limits of integration allowed us to find the fuel bias that would minimize the “3-sigma” fuel and oxidizer residuals remaining at burnout of the Saturn S-II stage. This new approach saved $92,000 per flight while achieving essentially identical results. Later a highly creative space shuttle engineer, John Wolfe, figured out how to modify our procedure to maximize the payload of the reusable space shuttle.
It was not a difficult derivation; we understood it immediately. But finding it did require a rather unusual mathematical approach that had eluded us throughout several dozen oversized pages of Technicolor derivations.

POSTFLIGHT TRAJECTORY RECONSTRUCTION

On January 1, 1801, the first minor planet, Ceres, was spotted by alert telescope-equipped astronomers as it hooked around the sun. Ceres, which we now call an asteroid, was a new type of object never seen by anyone on Earth up until that time. Unfortunately, after Ceres had been in view for only 41 days, it traveled so close to the harsh rays of the sun that it was lost from view. The astronomers who were tracking it were afraid that it might never be found again. However, as Figure 4 indicates, the famous German mathematician Carl Friedrich Gauss accepted the challenge of trying to reconstruct the trajectory of Ceres from the small number of closely spaced astronomical observations available to him. Under his brilliant direction, Ceres was located again on the other side of the sun on the last day of 1801, almost exactly one year after it had first been discovered.*

More than 160 years later, in 1962, we adapted the mathematical methods Gauss had used in reconstructing the orbit of Ceres to determine the performance of the Saturn V moon rocket on a typical mission. When we were executing a preflight trajectory simulation, we would feed the thrust and flow-rate profiles into the program together with the initial weight of the vehicle, its guidance angle histories, and the like, and then we would simulate the resulting trajectory of the rocket. In a postflight trajectory simulation, we did exactly the opposite. We would feed the program the trajectory of the rocket – as ascertained by the tracking and telemetry measurements – and then we would use the computer to determine the thrust and flow-rate profiles and the guidance angles the booster must have had in order to have traveled along the observed trajectory.

______________
* When Gauss was in elementary school in Germany, one of his teachers asked the students to add up all of the integers ranging from 1 to 100. While his classmates were struggling to obtain the solution, the young Gauss wrote down the answer immediately. He had noticed that there were 50 pairs of numbers, each of which totaled 101: 1 + 100, 99 + 2, 98 + 3, and so on. So the desired total was equal to 50(101) = 5050.
In 1801 the brilliant German mathematician Carl Friedrich Gauss devised a marvelously efficient mathematical algorithm that allowed the astronomers of his day to relocate the asteroid Ceres – a tiny pinpoint of light – as it emerged from the harsh rays of the sun. Approximately 160 years later our analysis team adapted this so-called iterative least squares hunting procedure to help us reconstruct the postflight trajectories of the various stages of the mighty Saturn V. Over time these mathematical techniques increased the rocket’s translunar payload by 800 pounds.
Years later, in an interview on the ABC television network, my host asked me what a trajectory expert does for a living. “We predict where the rocket will go before the flight,” I replied. “Then, after the flight, we try to explain why it didn’t go there.”

Those of us who worked as trajectory experts on the Saturn V moon rocket developed one of the most sophisticated postflight trajectory reconstruction programs ever formulated up until that time. It included more than 10,000 lines of computer code (five boxes of IBM cards!) and it required 300 inputs per simulation, all of which had to be correct if the program was to produce the desired results. Unfortunately, 75 percent of our simulations blew up due to incorrect inputs. A small percentage of the others blew up because of the mistakes we made when we modified the program. In a typical postflight reconstruction, we simulated a 400-second segment of the rocket’s trajectory, which required about 2.5 hours of computer time on an IBM 7094 mainframe computer at a cost of about $700 per hour.

Our six-degree-of-freedom iterative least squares hunting procedure was structured so we could, on any given simulation, choose up to nine independent variables, such as vehicle attitude, slant range, inertial velocity, and the like. We could also choose up to nine dependent variables, such as the rocket’s thrust profile, flow-rate history, the initial weight of the rocket, and so on. We initially formulated the six-degree-of-freedom trajectory program so that all the search variables were added to or multiplied by the prime variables (e.g., the thrust profile or the weight history of the rocket stage). Later we figured out how to include additive or multiplicative polynomials with variable coefficients that were determined automatically by the computer. We also figured out how to “segment” (chop up) the relevant polynomials, with the polynomial coefficients in each segment determined independently and automatically by the computer. The independent variables were measured during the flight with tracking devices located on the ground and telemetry devices carried onboard the rocket.

On a typical Saturn V trajectory reconstruction, the computer calculated about 30 partial derivatives at each of the 400 time points spaced one second apart. The resulting partial derivatives – around 12,000 of them – were arranged sequentially in a special matrix format and recorded on as many as nine magnetic tapes. On a typical Apollo flight, the average deviation between the predicted preflight trajectory and the actual postflight trajectory was about one mile. However, after 2.5 hours of simulation time on an IBM 7094 computer, the iterative least squares hunting procedure typically reduced this average error to only about one foot! After running a series of computer simulations of this type, we were able to get a much better handle on the statistical variations in the dependent variables, such as the rocket’s thrust and its specific impulse. This new knowledge, in turn, allowed us to increase the performance capabilities of the rocket by several hundred pounds of payload headed for the moon.
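The iterative least squares “hunting procedure” described above is what we would now call a Gauss-Newton fit: guess the unknown parameters, compute the partial derivatives of the simulated trajectory with respect to each parameter at every time point, solve the resulting least-squares problem for a correction, and repeat. The Python sketch below applies that idea to a deliberately tiny stand-in problem: a one-dimensional, constant-gravity rocket model with just two unknowns, a thrust multiplier and a flow-rate multiplier, fitted to synthetic “tracking” data. Everything about the model and its numbers is an assumption made for the sketch; the real program carried six degrees of freedom, hundreds of inputs, and thousands of partial derivatives.

```python
import numpy as np

GRAV = 9.80665                   # constant gravity for the toy vertical-ascent model
THRUST0, MDOT0 = 5.0e6, 1250.0   # nominal thrust (N) and flow rate (kg/s), assumed
M0 = 650_000.0                   # vehicle mass at ignition (kg), assumed
TIMES = np.arange(1.0, 401.0)    # 400 one-second time points, as in the text


def simulated_velocity(params):
    """Velocity history of a crude one-dimensional rocket model for the given
    search variables (a thrust multiplier and a flow-rate multiplier)."""
    k_thrust, k_flow = params
    thrust, mdot = k_thrust * THRUST0, k_flow * MDOT0
    mass = M0 - mdot * TIMES
    # v(t) = (thrust / mdot) * ln(m0 / m(t)) - g * t   (ideal rocket plus gravity loss)
    return (thrust / mdot) * np.log(M0 / mass) - GRAV * TIMES


def gauss_newton(observed, guess, iterations=8, step=1.0e-6):
    """Iterative least-squares hunt for the parameters that best reproduce
    the observed velocity history."""
    params = np.array(guess, dtype=float)
    for _ in range(iterations):
        residuals = observed - simulated_velocity(params)
        # Partial derivatives of the simulated trajectory with respect to each
        # search variable, estimated at every time point by finite differences.
        jacobian = np.empty((len(TIMES), len(params)))
        for j in range(len(params)):
            bumped = params.copy()
            bumped[j] += step
            jacobian[:, j] = (simulated_velocity(bumped) - simulated_velocity(params)) / step
        # Solve the linearized least-squares problem for a correction and repeat.
        correction, *_ = np.linalg.lstsq(jacobian, residuals, rcond=None)
        params = params + correction
    return params


# Synthetic "tracking and telemetry" data: this flight flew 2 percent hot on
# thrust and 1 percent high on flow rate, plus a little measurement noise.
rng = np.random.default_rng(1969)
observed = simulated_velocity([1.02, 1.01]) + rng.normal(0.0, 0.5, len(TIMES))

estimate = gauss_newton(observed, guess=[1.0, 1.0])
print("recovered thrust multiplier   :", round(estimate[0], 4))
print("recovered flow-rate multiplier:", round(estimate[1], 4))
```

The real reconstructions converged to roughly one-foot agreement with the tracking data only because the partial derivatives, the segmented polynomials, and the 300 inputs were handled with far more care than this two-parameter toy suggests.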

THE LEGACY

Today virtually every large liquid rocket that flies into space takes advantage of the performance-enhancement techniques we pioneered in conjunction with the Apollo moon flights. NASA’s reusable space shuttle, for example, employs modern versions of optimal fuel biasing and postflight trajectory reconstruction. However, more of the critical steps are accomplished automatically by the computer. Russia’s huge tripropellant rocket, which was designed to burn kerosene-oxygen early in its flight, then switch to hydrogen-oxygen for the last part, yields important performance gains for precisely the same reason the Programmed Mixture Ratio Scheme did. In short, the fundamental ideas we pioneered are still providing a rich legacy for today’s mathematicians and rocket scientists, most of whom have no idea how it all crystallized more than 40 years ago.

THE CONCLUSION

Figure 5 summarizes the performance gains and a sampling of the mathematical procedures we used in figuring out how to send 4700 extra pounds of payload to the moon on each of the manned Apollo missions. We achieved these performance gains by using a number of advanced mathematical techniques, nine of which are listed on the chart. No costly hardware changes were necessary. We did it all with pure mathematics!

In those days each pound of payload was estimated to be worth five times its weight in 24-karat gold. As the calculations in the box in the lower right-hand corner of Figure 5 indicate, the total saving per mission amounted to $280 million, measured in 2009 dollars. And, since we flew nine manned missions from the earth to the moon, the total savings amounted to $2.5 billion in today’s purchasing power! We achieved these savings by using advanced calculus, partial differential equations, numerical analysis, Newtonian mechanics, probability and statistics, the calculus of variations, nonlinear least squares hunting procedures, and matrix algebra. These were the same branches of mathematics that had confused us, separately and together, only a few years earlier at Eastern Kentucky University, the University of Kentucky, UCLA, and USC.

I was born and raised in a very poor family. At age 18 I had never eaten in a restaurant. I had never stayed in a hotel. I had never visited a museum. But somehow, by some miracle, six years later, at age 24, I was getting up every day and going to work and helping to put American astronauts on the moon! Even as a teenager I loved doing mathematical derivations. Those squiggly little math symbols arranged in such neat geometrical patterns were endlessly fascinating to me. But never in my wildest dreams could I have imagined that someday I might be stringing together long, complicated mathematical derivations that would allow enthusiastic American astronauts to hop around on the surface of the moon like gigantic kangaroos! Nor could I have ever imagined that someday my Technicolor derivations would end up saving more money than a typical American production line worker could earn in a thousand lifetimes of fruitful labor!
Over a period of two years or so a small team of rocket scientists and mathematicians used at least nine branches of advanced mathematics to increase the performance capabilities of the Saturn V moon rocket by more than 4700 pounds of translunar payload. As the calculations in the lower right-hand corner of this figure indicate, the net overall savings associated with the nine manned missions we flew to the moon totaled $2,500,000,000 in today’s purchasing power. These impressive performance gains were achieved with pure mathematical manipulations. No hardware modifications at all were required.
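The arithmetic behind those headline numbers follows directly from the figures quoted in the text; the per-pound value shown below is simply the one implied by dividing the stated per-mission saving by the stated payload gain, not an independent estimate.

\[
\frac{\$280\ \text{million per mission}}{4700\ \text{lb of extra payload}} \approx \$59{,}600\ \text{per pound (2009 dollars)},
\qquad
9\ \text{missions} \times \$280\ \text{million} \approx \$2.5\ \text{billion}.
\]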

BIBLIOGRAPHY

 
  • Logsdon, Tom. Orbital Mechanics: Theory and Applications. John Wiley and Sons. New York, N.Y. 1998.
  • Logsdon, Tom. The Rush Toward the Stars. Franklin Publishing Co. Palisade, New Jersey. 1970. Also published by Wm. C. Brown (paperback). Dubuque, Iowa. 1969.
  • Logsdon, Tom. Mobile Communication Satellites: Theory and Applications. McGraw Hill. New York, N.Y. 1995. Also published by McGraw Hill (paperback). Singapore. 1995.
  • Logsdon, Tom. Six, Simple, Creative Solutions That Shook the World. Nine Seas Publishing Co. Seal Beach, CA. 1993. Also published by Addison Wesley Publishing Co. (paperback) under the title Breaking Through. 1993.
  • Africano, R.C. and T.S. Logsdon. Schemes for Enhancing the Saturn V Translunar Payload Capability. AIAA 5th Propulsion Joint Specialist Conference. U.S. Air Force Academy, Colorado. June 9-13, 1969.
  • Logsdon, T.S. and R.C. Africano. An Alternative to Monte Carlo. AIAA reprint No. 67-210 presented to the AIAA 5th Aerospace Sciences Meeting. New York, N.Y. January 3, 1967. Also published under the title A Modified Monte Carlo Procedure. AIAA Journal. Volume 6. No. 6. June 1968. pp. 111-117.
  • Jackson, J. Propulsion System Evaluations Through Flight Simulation. Marshall Space Flight Center. Huntsville, Alabama. August 31, 1962.
  • Powers, C.S. Precision Determination of Vacuum Specific Impulse from Trajectory Data. Presented to the AIAA 5th Symposium on Ballistic Missile and Space Technology. May 1960.
  • Lear, C.W. A Summary of Equations for the Minuteman Propulsion Best Estimate Program (BEEP). Space Technology Laboratories, Inc. Report 9732.6-63-5. January 1963.