Liquid Fueled Rockets

The opposite of solid is liquid.

Back before the 1900s, the only type of rocket fuel used was solid. Then, in 1926, Robert Goddard flew a very small rocket nicknamed Nell. This rocket had the distinction of being the first rocket to use liquid propellant. It reached a height of about 12 meters (41 feet). Not super high, but it was really a “Wright Brothers” moment. Nothing would be the same.

Robert Goddard with his rocket Nell.

An aside on Robert Goddard: Goddard was basically the father of modern rocketry. When the press found out what he had done, they interviewed him and actually made fun of him, because they didn’t understand how rockets work. They thought that the rocket exhaust pushed on the atmosphere, enabling the rockets to go. They did not understand that you don’t need an atmosphere, or the ground, to lift something into space. In fact, the New York Times printed an editorial about Goddard that stated that he “does not know the relation of action to reaction, and of the need to have something better than a vacuum against which to react—to say that would be absurd. Of course he only seems to lack the knowledge ladled out daily in high schools.” Oops. That journalist didn’t read this blog, where I explain that rockets go because of Newton’s laws of motion.

Goddard became very withdrawn and basically did his research in private. But, people still knew about him. In fact, German rocket scientists knew about him. They would call him up and ask him about different aspects of rockets. Being a scientist, he would share what he knew. Being an American, he told the government about the Germans’ interest in rocketry, which the government ignored.

Robert Goddard died in August 1945. Just north of Washington DC, there is the NASA Goddard Space Flight Center, where scientists and engineers build many of the satellites that are in orbit today. The New York Times published an apology to Goddard the day after the launch of Apollo 11, which stated “Further investigation and experimentation have confirmed the findings of Isaac Newton in the 17th Century and it is now definitely established that a rocket can function in a vacuum as well as in an atmosphere. The Times regrets the error.”

An example of a liquid fuel for rocketry is hydrogen, with oxygen used as an oxidizer. When these two react, they release a very large amount of energy, and water results. I am sure that you all know that oxygen is a gas at room temperature, and many of you know that hydrogen is too. Why, then, is this post about liquids and not gases? Well, a gas takes up a huge amount of space compared to a liquid. Oh, sure, you could store gases in pressure tanks, but have you ever tried to lift one? They are extremely heavy, since they have to hold the massive pressure of the gas. So, the rockets would have to be very, very heavy in order to hold all of the gas. It is much easier to cool the gases down to extremely low temperatures and liquefy them. Then, you can store the liquids in relatively lightweight tanks.
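To put rough numbers on this, here is a quick sketch using approximate textbook densities for hydrogen (the 1000 kg figure is just an arbitrary example mass):

```python
# Rough comparison of the volume needed to store hydrogen as a gas at
# atmospheric pressure versus as a cryogenic liquid.
# Densities are approximate textbook values.

GAS_DENSITY_H2 = 0.09     # kg/m^3, hydrogen gas near room temperature, 1 atm
LIQUID_DENSITY_H2 = 71.0  # kg/m^3, liquid hydrogen at about -253 C

def storage_volume(mass_kg, density):
    """Volume in cubic meters needed to hold mass_kg of propellant."""
    return mass_kg / density

mass = 1000.0  # one metric ton of hydrogen
gas_volume = storage_volume(mass, GAS_DENSITY_H2)
liquid_volume = storage_volume(mass, LIQUID_DENSITY_H2)

print(f"Gas:    {gas_volume:,.0f} m^3")
print(f"Liquid: {liquid_volume:,.1f} m^3")
print(f"Liquefying shrinks the tank by ~{gas_volume / liquid_volume:.0f}x")
```

A factor of several hundred in volume, before you even count the mass of a pressure vessel strong enough to hold the gas.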

A liquid propellant engine typically uses two different liquids, a fuel and an oxidizer, to create a large amount of energetic (hot) gas that is expelled at extremely fast speeds. The liquids need to be kept in separate tanks until they are pumped into the combustion chamber. The word “pumped” is important here, since the flow rate of a pump can be controlled. This means that a liquid rocket engine can be throttled, such that the burn rate of the propellant can be controlled, unlike a solid rocket booster. In fact, the flow can be cut off completely, stopping all thrust, and then resumed at a later time, allowing the engine to restart. These are huge advantages over solid rocket boosters.

Liquid propellants are typically more efficient than solid propellants. This means that less fuel is needed, and since the amount of fuel needed grows exponentially with the total velocity change required (see the rocket equation post!), any increase in fuel efficiency can dramatically decrease the size of the rocket.
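To see how much efficiency matters, here is a small sketch using the rocket equation from that earlier post. The specific impulse values (roughly 250 s for solids, roughly 450 s for hydrogen/oxygen) are generic ballpark numbers, not from any particular engine:

```python
import math

G0 = 9.81  # m/s^2, standard gravity

def propellant_fraction(delta_v, isp):
    """Fraction of initial mass that must be propellant, from the
    Tsiolkovsky rocket equation: delta_v = Isp * g0 * ln(m0 / mf)."""
    mass_ratio = math.exp(delta_v / (isp * G0))
    return 1.0 - 1.0 / mass_ratio

DELTA_V = 9400.0  # m/s, rough delta-v to reach low Earth orbit

solid_frac = propellant_fraction(DELTA_V, 250.0)   # typical solid Isp
liquid_frac = propellant_fraction(DELTA_V, 450.0)  # hydrogen/oxygen Isp

print(f"Solid (Isp ~250 s):    {solid_frac:.0%} of liftoff mass is propellant")
print(f"Hydrolox (Isp ~450 s): {liquid_frac:.0%} of liftoff mass is propellant")
```

Going from roughly 98% propellant to roughly 88% propellant sounds small, but it is the difference between a few percent of the rocket being payload and structure versus more than ten percent.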

Why would anyone ever use a solid propellant rocket, then? Well, there are a few disadvantages to liquid fueled rockets.

The first is that the liquids typically have to be stored at an extremely cold temperature. For example, oxygen turns into a liquid at -183°C (-297°F). It is not super easy to store, nor is it super easy to fuel the rocket. Also, the rocket sits on the ground for a while before being launched into space, so there is often insulation on the storage tank. For example, the large orange thing on the space shuttle is not a rocket, but a storage tank for the liquid oxygen and hydrogen. It is orange because that is the color of the foam insulation that wraps the entire tank. This insulation was the direct cause of the Columbia accident in 2003. I will talk about this in a separate post.

The Space Shuttle Columbia during its final launch. The orange thing that the shuttle is attached to is the storage tank for the liquid hydrogen and oxygen.  It is orange because the tank is completely covered in several inches of (orange) foam insulation.

There is one advantage of having super-cooled liquids on the rocket, though. The combustion chamber and the rocket’s nozzle can get extremely hot, since the whole point of the rocket engine is to make extremely hot gases and direct them out of the nozzle. The combustion chamber and nozzle can actually melt because of this. But, because there is very cold liquid around, the rocket’s plumbing routes the propellants around the nozzle and combustion chamber before they flow into the chamber. This is called regenerative cooling: it cools down the extremely hot metal, and warms up the liquids before they are combined. All of this plumbing and pumping makes liquid engines much more complicated than solid engines.

Another disadvantage of liquid propellants is the storage tanks. Imagine trying to ride a bike with a half-empty five-gallon bucket of water balanced on your handlebars. As the water sloshes around, it moves the center of gravity around. If you are trying to go in a (relatively) straight line, and stay balanced, this can significantly complicate things. The same is true with a liquid propellant rocket. The tanks go from being full to being empty, with every stage in between. All the while, the rocket has to stay on course. The tanks, therefore, need to have baffles and be specially designed so that the liquids won’t slosh. Yet another complication.

If you drove a car built before about 1995, you probably remember having a carburetor. These were devices that mixed the fuel with air before combustion. Now, pretty much all cars have fuel injectors that automatically adjust the fuel-air mixture. So, imagine building fuel injectors for rockets decades before the first automobile fuel injectors existed. The fuel and the oxidizer have to be mixed just right in order for the burn to be as efficient as possible. On the Saturn-V, fuel was being used at such a huge rate that the engineers couldn’t get the mixture uniform, so there were pockets of fuel and oxidizer, which caused combustion instabilities (essentially small explosions inside the chamber). The engineers eventually solved this by redesigning the injector plate and adding baffles to it, which damped out the instabilities. Engineering! Fuel injectors are complicated when you are using hundreds of gallons of fuel a second. Today, fuel injector technology is much better, but we don’t have many rockets that use fuel as quickly as a Saturn-V did.

In summary, liquid fueled rockets have some big advantages over solid rockets: the engines can be throttled and even turned off and restarted, allowing more precise orbital insertion, and liquid fuels tend to be more efficient than solid fuels. On the other hand, rockets that use liquid fuels tend to be more complicated, with lots of pumps, massive plumbing systems, and fuel injectors. Most rockets that launch satellites into space have a main stack that is liquid fueled, with strap-on solid boosters that provide a bit of extra lift in the first minutes of the launch. That way the rocket gets the simplicity (but reduced performance) of the solid boosters along with the complexity (but precise, throttleable power) of the liquid fueled engines.



News: QB50 and Space Debris Conference

Yesterday was a very long and a very busy day for me – I traveled to Europe to attend the 7th European Conference on Space Debris and we had two satellites launch into space as part of the QB50 mission.

QB50 is a European-led mission that has about 35 CubeSats that have been launched to the International Space Station (ISS).  Each of the satellites, which are about 4 inches by 4 inches by 8 inches (like, really small), carries one of three different sensors that measure the space environment.  The Europeans provided the instruments and the launch, while each group provided its satellite.  The University of Michigan built two satellites, called Atlantis and Columbia, that carry the FIPEX instrument.  FIPEX measures the atomic and molecular oxygen density in the thermosphere.  Oxygen is the main gas in the thermosphere, so, in effect, these satellites will measure the air density.  This is important for satellite orbit prediction and collision avoidance. Below is a picture of these two satellites with a bunch of the students, faculty, staff, and engineers that worked on them.


On April 18, 2017, the satellites were taken up to the ISS on a normal resupply mission. They are in deployers built by NanoRacks, which push the satellites out into space from the ISS. According to QB50 officials, this should happen in the first few weeks of May. The satellites will then turn on, deploy some drag panels, and start to communicate with the ground station at UM.  We will then command the FIPEX instrument to turn on and start to take data.

While the launch was happening, I was participating in the 7th European Conference on Space Debris.  This conference has about 350 people who are investigating all sorts of aspects of space debris: new techniques for discovering it, quantifying how much there is, and looking at ways of removing it, just to name a few.

A quick refresher on space debris: There are over 20,000 objects orbiting Earth that are about the size of a softball or larger. Since we have hundreds of active satellites, this debris cloud is a problem: if a piece of debris hits an active satellite (or another piece of debris), it will destroy it and create even more debris. People talk about a Kessler Syndrome, which is basically where low Earth orbit becomes so crowded with debris that collisions create more debris, which leads to more collisions, and so on.  This has the potential to run away and basically make low Earth orbit unusable.

I got to watch a talk by Kessler yesterday.  He is a retired NASA employee. Sort of cool to see such a talk.

So far, I have watched a bunch of talks on how to measure debris and on some missions that are trying to raise money to remove debris.  Measuring the debris is very interesting, since you can sort of do this with a relatively inexpensive camera.  Just before sunrise and just after sunset, the ground is in darkness, but the sun is still shining on satellites. If you look up in the sky during these times, you can see this reflected light and observe the satellites. Which is pretty awesome.  If you take pictures with a camera, you can figure out the angular speed of the debris, which gives you its orbital characteristics, and its brightness gives you a rough idea of how big the object is.  The better your camera, the smaller the debris you can see. I may try to do this with some students. It seems like a great project.
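For a feel for how fast such a camera has to track, here is a back-of-the-envelope sketch (my own generic numbers, not from any particular observation, and ignoring Earth’s rotation) of how quickly an object in low Earth orbit appears to move when it passes straight overhead:

```python
import math

MU = 3.986e14      # m^3/s^2, Earth's gravitational parameter
R_EARTH = 6.371e6  # m, Earth's radius

def apparent_angular_rate_deg(altitude_m):
    """Rough angular rate (deg/s) of a satellite in a circular orbit as
    seen from the ground at the moment it passes straight overhead."""
    r = R_EARTH + altitude_m
    v = math.sqrt(MU / r)  # circular orbital speed at that altitude
    # At zenith, the range to the object equals its altitude
    return math.degrees(v / altitude_m)

rate_400 = apparent_angular_rate_deg(400e3)
rate_800 = apparent_angular_rate_deg(800e3)
print(f"400 km: ~{rate_400:.1f} deg/s, 800 km: ~{rate_800:.1f} deg/s")
```

Roughly a degree per second for the lowest orbits, which is why short exposures turn debris into streaks whose length tells you the angular speed.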

For the debris removal missions, there are a bunch of hurdles: (1) getting to space is very expensive, so it may cost so much to get the junk down that it is not worth it; (2) rendezvousing with the debris is really hard, since it is quite difficult to automatically track it and maneuver into position; (3) capturing the debris is hard, since it may be spinning and oddly shaped; and (4) deorbiting the debris is a challenge, since you have to rigidly attach the debris to some sort of thruster and then use a bunch of fuel to deorbit it.  This means that the missions are pretty expensive and have a LOT of technical hurdles to get over in order to be feasible.  But, they are pretty interesting to learn about!



Back to the Basics: Solid Rocket Engines

As you can probably tell, I tend to get ahead of myself.  This blog has talked about all sorts of crazy different ways of getting into space, except for the two most common methods, which are solid rocket engines and liquid rocket engines. The differences may seem subtle, but they are actually quite big. I will talk about solid rocket engines first.

Most people have probably seen a solid rocket engine (maybe even held one in their hands), since they are what model rockets use. In fact, solid rockets have been used for about a thousand years. The first fire arrows were arrows with a little solid rocket engine tied to them.  Basically, people used to take gunpowder, shove it in a tube, tie it to an arrow, and light it.  The arrow would travel at much higher speeds than if an archer fired it, but it was also very hard to aim. Before World War II, the largest improvement in the technology came when William Hale, in about 1845, packed the gunpowder into a cylinder and had the exhaust come out in such a way that the cylinder would spin, which stabilized the rocket. It also eliminated the need for a big guiding stick.

Korean Fire Arrows. 

Did you know that the US launched rockets against Mexico City during the Mexican-American War?  How crazy is that?

When the Civil War started, militaries moved to modern artillery, like guns, which were much more accurate and could be used by a vast number of soldiers. Rockets basically went out of favor during this time, although there were a few people working on liquid propellant rockets (like Goddard in the early 20th century), just to try to get the ideas right. I will talk about this when I talk about liquid rockets.

Anyways, during World War II, the Germans figured out how to really use rockets. They used liquid propellant engines to launch rockets (the V-2) against the Allies. Now, the liquid propellant technology was a huge breakthrough, but they also came up with a bunch of other breakthroughs, namely a guidance system to actually steer the rockets on their 200 mile journey, as well as a telemetry system to see if they actually were headed in the correct direction. This sounds like a simple thing to do, but this was before modern computers, let alone GPS.  But, that is once again, for another post.

A revolutionary German V-2 Rocket from WWII

The lesson here is that the Germans developed the underlying technologies that allowed the rocket to go from being a spinning cylinder that would sometimes work, to a supersonic vehicle that could hit a target accurately from 200 miles away.

Once World War II was over, the US and USSR militaries wanted to use rockets for several purposes, but one of the most important was launching atomic bombs at other countries.  This was because an airplane could be shot down while trying to deliver an atomic bomb. Also, it could take an airplane hours to get to a target. With a rocket, almost any place on Earth could be hit in 45 minutes or less.

What rocket scientists learned is that using cryogenic liquid propellants in Intercontinental Ballistic Missiles is not really smart.  This is because these propellants can’t actually be stored in the rocket for long – they have to be kept in cooling tanks that keep the liquids very cold.  For example, liquid oxygen needs to be stored at about -300°F (-183°C), which is a wee bit cold. So, when someone wants to fire a liquid propellant rocket, they have to fuel it first, which can take a long time.

On the other hand, those old rockets from the 1840s were packed with gunpowder, were stable for many years, and could be fired immediately when needed.  A perfect solution for global thermonuclear war! Therefore, scientists developed solid propellant engines. These are pretty simple.  You take a solid fuel and a solid oxidizer and mix them together with something that will bind them.  You put it in a tube and shape it, and ta-da, you have a solid rocket engine. There is obviously more to it than that, but that is the basic premise. There are a lot of resources out there that will tell you the chemical makeup of solid rocket fuels.

A Minuteman-III ICBM

When you pack the fuel (called grain) into the cylinder, there are a wide variety of shapes that it can take.  The simplest is packing the grain against the outer walls and leaving a hole in the center (see wonderful drawing below).  The point here is that only the part of the grain that is exposed will burn, so when you ignite this, only a very small portion of the grain starts to burn. Then, as the grain burns away, more surface area is exposed, so more grain can burn. If you are looking up the rocket from the bottom, you will see a little circle of clear area at first; then, as the grain burns, the circle gets larger and larger, with more and more grain burning. Since more grain is burning, the thrust actually increases as a function of time, until the outer wall is reached and the thrust stops, since the fuel is completely depleted. This is called a progressive burn. Model rockets have this grain pattern.

A progressive burn grain pattern.  This is viewing up the rocket from the bottom.


If you pack the grain in exactly the opposite way, with a solid cylinder of grain in the middle and a thin shell of clear area at the outer wall, then as the grain burns inward from the outside, the circle of grain gets smaller and smaller, so less and less surface is burning. This creates a regressive burn.

A regressive burn grain pattern, viewed from the bottom of the rocket.

Interestingly, you can do some strange shapes that will allow the burn to be neutral, where the exposed surface area of the grain remains roughly constant as a function of time.  (If you think about it, the burning area at the very end always has to be roughly the area of the outer wall, since that is the last surface to burn, so a neutral design keeps the area near that value the whole time.) This is what the space shuttle’s solid rocket boosters used – an 11-point star configuration – for a roughly constant burn over the roughly two minutes that they fired.  More on the space shuttle’s solid boosters in another blog post!
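A tiny toy model makes the progressive/regressive difference concrete. The dimensions and regression rate below are invented for illustration, not from any real motor; the model just tracks the exposed burning surface, which is what sets the thrust:

```python
import math

GRAIN_LENGTH = 1.0      # m, length of the grain (arbitrary)
OUTER_RADIUS = 0.5      # m, inside radius of the casing (arbitrary)
REGRESSION_RATE = 0.01  # m/s, how fast the flame eats into the grain

def burning_area_progressive(t):
    """Circular port: the burning surface is the inside of a growing tube."""
    port_radius = 0.1 + REGRESSION_RATE * t  # port starts at 0.1 m
    if port_radius >= OUTER_RADIUS:
        return 0.0  # grain exhausted
    return 2 * math.pi * port_radius * GRAIN_LENGTH

def burning_area_regressive(t):
    """Solid rod: the burning surface is the outside of a shrinking cylinder."""
    rod_radius = OUTER_RADIUS - REGRESSION_RATE * t
    if rod_radius <= 0.0:
        return 0.0
    return 2 * math.pi * rod_radius * GRAIN_LENGTH

for t in (0, 10, 20, 30):
    print(f"t={t:2d}s  progressive={burning_area_progressive(t):.2f} m^2  "
          f"regressive={burning_area_regressive(t):.2f} m^2")
```

Run it and the progressive area climbs while the regressive area falls, which is exactly the thrust-versus-time behavior described above; a neutral star grain is designed so this area stays roughly flat instead.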

One huge disadvantage of solid rocket boosters is that once you fire them, you can’t stop them – they burn until all of the grain is used up. With a liquid propellant rocket engine, you can throttle the engine and even turn it off and back on. There is none of that control in a solid rocket engine. Engineers design the shape of the grain to give them the type of burn that they want, and once it is lit, it goes.  This can be somewhat mitigated by designing the top of the rocket so that it can pop off, exposing the grain at both the top and bottom, allowing the motor to thrust both down and UP at the same time. This roughly balances the (upward and downward) forces, allowing the rocket to go nowhere. That is pretty insane, and, I would imagine, quite dangerous, but it is something.

Another disadvantage is that, because the rocket exhausts all of its fuel in one go, it basically can’t make any final adjustments. Therefore, the uncertainty in where it will end up can be rather large. For nuclear weapons, being off by a few (tens of) miles is not a big deal, since each missile carries several warheads that spread out near the end anyways. If you want to get something into orbit, being off by a few (tens of) miles can be quite bad.

Ok, let’s summarize. Some advantages of solid rocket engines:

  1. Solid rocket engines are dead simple and are therefore easy to design and cheap.
  2. You can store a fully fueled solid rocket engine for a long time and fire it whenever.
  3. Solid rocket engines can be strapped on to liquid propellant rockets to give them an initial boost. Hence, solid rockets are sometimes called boosters.
  4. Did I mention that they are so simple that you can buy them at Walmart?

Some disadvantages:

  1. Once you ignite the solid rocket engine, it will fire until used up.
  2. Because they can’t be throttled or relit, they are not super accurate.
  3. They are almost always less efficient than liquid propellant engines. I didn’t talk about this, but it is true!

Solid rocket engines end up being used as strap-on boosters and in intercontinental ballistic missiles, and are not used very often to put stuff into orbit on their own.  That is, unless you don’t have a lot of money and need to get to space for cheap. You can pick up a bottom-of-the-line Pegasus-XL for about $28,000,000 from Orbital-ATK (and build a paper one for free!). And yes, they accept bitcoin.




More Interstellar Travel

This week, researchers announced that they have found seven rocky planets around another star that may be capable of having liquid water – three of which are in the star’s habitable zone. The star, called TRAPPIST-1, is about 39 light years away.


This discovery raises the issue, once again, of interstellar travel.  A while ago, I wrote a post that discusses how long it would take to get to our nearest neighbor star using modern technologies. Long story short: it would take many thousands of years.

Since then, articles have been published that discuss getting to another star using lasers. In the last post, I talked about using lasers to get to Mars. This time, I will talk about the idea of using lasers to get to another star.

The idea of using light to accelerate things has been discussed for a long time. Basically, light bouncing off of a reflective surface will impart some pressure on that surface.  There are two extremely interesting things about using light to accelerate things: (1) if it is sunlight, it is free, which is the cheapest type of energy; and (2) if it is not sunlight, the energy can be generated somewhere besides on the spacecraft (like on the Earth or the Moon), which means that the spacecraft can be much smaller and won’t need huge engines with gigantic fuel tanks. The big disadvantage of using light to accelerate things is that the efficiency is absolutely horrible, with a huge amount of the energy being completely wasted.

The general idea with using lasers for interstellar travel is exactly the same as using lasers for getting to Mars: bounce a laser off the spacecraft, or a gigantic sail, and accelerate it up to an extremely large speed, moving in the right direction. For a trip to Mars, you could imagine having a similar laser system on Mars, so the spacecraft could be slowed down. On an interstellar trip, the spacecraft would simply (and quickly) pass through the planetary system of the other star.

So, why don’t we do this now? Well, there are a bunch of reasons:

  1. We don’t have lasers that are large enough and can operate for long enough to accelerate something (relatively large) up to close to the speed of light. The article above talks about using a laser array that is in orbit that would be about 6 miles across.  That is a pretty big array of lasers.  Basically, you would need a ton of lasers that would all fire for a very short amount of time, but combined, the array would provide a constant stream of energy that would rapidly accelerate the spacecraft.
  2. The article talks about accelerating the spacecraft up to speeds of 1/3 of the speed of light in 10 minutes. That would be an acceleration that is 17,000 times gravity. We don’t build spacecraft that can experience that type of acceleration – even 20-30 times gravity is pretty horrible for a spacecraft!
  3. Even with the huge laser array, the spacecraft that is getting the energy would have to be super tiny.  The article talks about a spacecraft that is something like 1 inch in size.  That is pretty small! Considering that the smallest satellites in orbit around the Earth are CubeSats, which are about 4 inches on a side (and even that is REALLY hard!), it is unlikely that we will launch anything even CubeSat-sized on an interstellar trip.
  4. Since the spacecraft are so small, it is hard to imagine how we would get signals from them. Let’s take the New Horizons mission to Pluto as an example. New Horizons has a dish that has a diameter of 2.1 meters. At Pluto, it had a bandwidth of about 4.5 kilobytes/sec. Compared to a standard cable modem, this is about 1000 times slower. Ok, New Horizons is not going to stream Netflix – that is clear.  But, it has taken the mission over six months to send back all of the images that it took in its flyby of Pluto. That is a very slow bandwidth! Bandwidth falls off as the square of the distance between the objects.  So, if we launched New Horizons to a solar system 39 light years away (which is about 62,500 times further away than Pluto), the bandwidth would be 0.0000012 bytes/second. Yikes!  That is slow!  If the spacecraft were really only an inch in size, then the antenna could only be about that big, which means that the bandwidth would decrease by another factor of roughly 10,000. That is really not good.  So, this is one of the largest issues with this idea.  In some ways, it is like trying to use your cell phone to call someone on Earth from Pluto. “Can you hear me now?” “Uh. No.”

    The New Horizons spacecraft.  The antenna is about 2 meters across. The black thing to the right of the antenna is the power source for the spacecraft.  It is a Radioisotope Thermoelectric Generator (RTG), which is just cool to say.
  5. Since the spacecraft would quickly move away from our own sun, it would not have a power source for the entire trip to the other solar system. This is bad for two reasons: we wouldn’t be able to communicate with it, since it takes energy to send signals; and it would quickly cool down toward the background temperature of the universe, which is about -270°C. Not many electronic components will survive those temperatures! So, we would HAVE to have a power source that lasts the entire trip, just to keep the spacecraft warm. That would be big.
  6. It would take at least 40 years to get there, and the spacecraft would pass through the system in a matter of minutes. On the first front, you would have to count on the scientific community to keep its eye on the prize for the 40+ years of the trip. And, it should be noted that 40 years is the absolute minimum, assuming that we can accelerate the craft up to almost the speed of light. If we can only get it up to 1/3 of the speed of light, it would take about 120 years. That is a fair bit of time.  Next, when it arrives and passes through the system, it will need an extremely fast camera, since it will be passing by the planets at hundreds of millions of miles per hour. That is a good camera.  Not a cell phone camera.
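Two of the numbers in the list above are easy to sanity-check with a few lines of arithmetic (the 4,500 bytes/s Pluto data rate and the 62,500 distance ratio are the figures quoted above, taken at face value):

```python
C = 3.0e8  # m/s, speed of light
G = 9.81   # m/s^2, one gravity

# Point 2: accelerating to c/3 in 10 minutes
accel = (C / 3) / (10 * 60)  # m/s^2
print(f"~{accel / G:,.0f} times gravity")

# Point 4: bandwidth falls off as distance squared
pluto_rate = 4500.0      # bytes/s for New Horizons at Pluto
distance_ratio = 62_500  # 39 light years vs. Pluto's distance
far_rate = pluto_rate / distance_ratio**2
print(f"~{far_rate:.7f} bytes/s at 39 light years")
```

Both of the quoted figures check out: roughly 17,000 g of acceleration, and roughly a microbyte per second of bandwidth.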

Ok, I think that you get the point.  This is a really, really hard mission. We are not really close to having interstellar travel. It is great to think about these things, but they really are science fiction at this point.

But, if we just …..


Using Lasers to Get Moving

In the chain of crazy ideas of how to get to space and how to get from one planet to another, there is an idea to use lasers. Actually, there are a couple of ideas on how to do this.  This is the first of a two-parter where I talk about this idea.  The first part will cover one project that has actually gotten off the ground (literally) and an idea on getting to Mars, while the second post will look at interstellar travel with lasers.

The first idea on using lasers makes a tiny bit of sense.  It is called Lightcraft (get it – light and craft?).  The general idea is that you have an object with a very specific shape on its bottom side. Then you shoot a laser at it, and the shaped bottom focuses the laser so that it superheats the air touching the object.  The air is then propelled away from the object, resulting in a net thrust towards the top of the object.

Interesting, eh?  They have actually tested this with some very shiny objects that are about the size of a fist and are pretty light. Here is a picture:


That is actually almost real size, too! These little things have flown about 75 meters into the air.  That is not, um, unimpressive, I guess. There are several problems with this technology: it is hard to keep the Lightcraft pointed in the right direction, hard to keep the laser pointed directly at it, and all sorts of other things. My guess is that they have not had the right public relations people and the large amount of funding that is needed to take a project like this from the tiny prototype stage to anything of real size.

Recently, another team has also been working on using lasers to move things around in the solar system.  The idea with this team is to use very high powered lasers in a similar way to how we would use the sun with a solar sail.

A quick aside on solar sails (boy, I really need to write a post about solar sails):  When light hits an object, it actually imparts a super, super small amount of momentum.  When you feel the sun beating down on you on a very bright day, it really is beating down on you. But in reality, the amount of force sunlight exerts on you is less than the weight of a paperclip.  Like, way less.  But, if you were out in space, and you had a huge reflective “sail”, the light would shine on it and impart a very small force – something like a pound for a sail that is about 1 km². But, imagine if you could turn the brightness of the sun up by a factor of 100. Or 1,000. Or 1,000,000! Then you could get some real force to act on your spaceship!
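That “something like a pound” figure can be checked with the radiation-pressure relation F = P/c. The sketch below assumes the sail absorbs the light; a perfectly reflective sail would feel twice this force:

```python
SOLAR_CONSTANT = 1360.0  # W/m^2, sunlight intensity near Earth
C = 3.0e8                # m/s, speed of light
NEWTONS_PER_POUND = 4.448

sail_area = 1.0e6  # m^2, a 1 km x 1 km sail

# Radiation pressure force for fully absorbed light: F = P / c
force_n = SOLAR_CONSTANT * sail_area / C
print(f"~{force_n:.1f} N (~{force_n / NEWTONS_PER_POUND:.0f} lb) "
      "on a 1 km^2 sail")
```

About 4.5 Newtons, or roughly one pound, on a sail the size of 200 football fields. That is why a brighter light source is so tempting.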

So, the general idea with using lasers is that you could have a reflective surface on your ship and point a really, really, really intense laser at it.  This would impart a large force on the ship and accelerate it. The beauty of this plan is that the lasers would all be powered here on Earth, so we could generate the energy using a nuclear power plant, or hydroelectric, or even good old-fashioned coal.  The ship could be very small, since it wouldn’t need a lot of fuel to accelerate it; the power is coming from Earth.

In the article that I linked to above, the researchers say that they could envision getting to Mars in 3 days using this type of technology. Please excuse me if you heard a cough that subtly masked my slight doubt of this claim.

The first (and most obvious) issue with this is that you would have to have some sort of a laser system on Mars to slow down the ship.  So, you would have to build something like a nuclear power plant on Mars. I am sure that this is not really likely to happen soon, since we are so successful at building them here in the United States (sarcasm). But, there are probably fewer regulations on Mars, so it should be easier. But then there is the whole problem of getting all of the (highly radioactive) materials to Mars to actually build the plant.  Well, anyway, we will get there eventually!

Ok, so now that we have a laser system on Mars and a laser system on Earth, how much acceleration would we need to get to Mars in 3 days?  Well, we would accelerate for half the distance and then decelerate for the other half.  If we make the very simple approximation that the acceleration is constant, the problem is easy to solve.  Let’s assume that Mars and Earth are about as close as they get, roughly 0.3 AU, or about 45 million kilometers. We need to accelerate through half that distance in 1.5 days. Do a little math and we get that the acceleration needs to be a constant 2.7 m/s², which is about a quarter of the gravitational acceleration at the Earth’s surface.  This is extremely reasonable!
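Working through that kinematics explicitly (constant acceleration from rest over the first half of the trip, using d = ½at²):

```python
AU = 1.496e11  # m, one astronomical unit

distance = 0.3 * AU           # assumed closest Earth-Mars separation
half_distance = distance / 2  # accelerate over the first half of the trip
half_time = 1.5 * 86400.0     # s, half of the 3-day trip

# From d = 0.5 * a * t^2, starting from rest: a = 2 d / t^2
accel = 2 * half_distance / half_time**2
print(f"Required acceleration: ~{accel:.1f} m/s^2")
```
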

The problem with this is that the power from the laser falls off as the distance squared. This means that the acceleration that the laser system could supply would have to start off extremely large, then would fall to almost nothing, or that the power that is consumed by the laser would have to start off relatively small, and would have to increase dramatically.

Let’s think about how high-powered a laser you would have to have in either case. I am going to simplify the problem significantly, since I am a relatively simple person. Sunlight, for reference, exerts about 4.5e-6 Newtons of force on each square meter of a perfectly absorbing surface near Earth. This is an incredibly small force! Like, really, really, really small.  The light carrying that tiny force delivers about 1350 Watts per square meter, which is a LOT of power for such a tiny push. The ratio is just the speed of light: for absorbed light, force = power / c, so you only get about 3.3 Newtons per Giga-Watt.  So, this idea is not very efficient at all!

Let’s say that we want to send something to Mars that is 100 kg, or about 220 lbs. This is an extremely small satellite. If we want to accelerate it at a rate of 2.7 m/s², like the example above, we would need about 270 Newtons of force. If we had a sail hooked up to this object that was, say, 100m by 100m (about the size of a football field), how much force would the sun exert on it?  About 0.045 Newtons.  That is not much! And the sunlight hitting that sail carries 1350 W/m² times 10,000 m², or about 13.5 MW.  So, we would need a laser about 5,900 times more powerful than the sunlight falling on the sail to give us our 270N of force.  That works out to roughly an 80 Giga-Watt laser (you get the same answer from force = power / c: 270 N × 3×10⁸ m/s ≈ 8×10¹⁰ W).  And this would only accelerate it at the 2.7 m/s² for a little while, since the distance between the laser and the satellite would increase and the received power from the laser would decrease.
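The radiation-pressure bookkeeping can be sketched like this (my own toy calculation, assuming a perfectly absorbing sail and a ~2.7 m/s² target acceleration; a perfect mirror would double the force per watt):

```python
C = 3.0e8                    # speed of light, m/s
SOLAR_FLUX = 1350.0          # sunlight power near Earth, W/m^2

mass = 100.0                 # kg, our very small satellite
accel = 2.7                  # m/s^2 (half the closest Earth-Mars distance in 1.5 days)
sail_area = 100.0 * 100.0    # 100 m x 100 m sail, in m^2

needed_force = mass * accel                # force to hold the acceleration
# Absorbed light: force = power / c
sun_force = SOLAR_FLUX * sail_area / C     # push from sunlight alone
laser_power = needed_force * C             # laser power that must hit the sail

print(f"force needed:        {needed_force:.0f} N")
print(f"force from sunlight: {sun_force:.3f} N")
print(f"laser power needed:  {laser_power / 1e9:.0f} GW")
```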

Let’s say that the laser delivered the 80 GW (or 270N of force) at a distance of about 10 Earth radii away from the surface of the Earth (I had to choose a distance, and this was quite arbitrary, but whatever). If you wanted to continue to accelerate the satellite at 2.7 m/s² all the way to the halfway point between Earth and Mars, the distance from the laser would grow by a factor of about 350, so the power of the laser would have to increase by a factor of about 125,000 while it was shining on the craft. This means that, by the halfway point, the laser would have to be putting out about 10,000 TW (10 Peta-Watts), and it would have to fire (ramping up in intensity) for about 36 hours.
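The inverse-square ramp is easy to see in code. This is my own toy model, assuming the received flux falls off as 1/r² (a real laser would be diffraction-limited, which is a separate can of worms):

```python
C = 3.0e8            # speed of light, m/s
R_EARTH = 6.371e6    # Earth radius, meters
AU = 1.496e11        # meters

force = 270.0                      # N, holds 2.7 m/s^2 on a 100 kg craft
start = 10.0 * R_EARTH             # arbitrary distance where full force is first delivered
halfway = 0.3 * AU / 2.0           # turnover point of the 3-day trip

base_power = force * C             # ~80 GW needed at the starting distance
# To keep the received flux (and force) constant as the craft recedes,
# the emitted power must grow as (r / r0)^2.
growth = (halfway / start) ** 2
final_power = base_power * growth

print(f"power at start:   {base_power / 1e9:.0f} GW")
print(f"growth factor:    {growth:,.0f}")
print(f"power at halfway: {final_power / 1e15:.0f} PW")
```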

Practical?  I don’t know.  This website talks about a 2,000 TW laser that was fired for 1 pico-second (not very close to 36 hours, and still well short of the roughly 10,000 TW we would need at the end of the burn). Another website talks about getting a 10 TW laser that fires for about a femtosecond (that is also pretty short), but fits on a desk.

Where could we get the power? Well, if the sun delivers 1,350W of power per 1m x 1m area, then even just the 80 GW needed at the start of the burn would take about 60 km² of solar panel area.  Oops, solar cells are not perfectly efficient (more like 25% efficient), so we would need about 240 km², roughly the size of a large city. And to supply the 10 PW needed near the halfway point, we would need about 30 million km² of 25%-efficient cells, which is roughly the area of Africa.
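Here is that sizing as a little helper function (my own rough sketch, cheerfully ignoring nighttime, clouds, and the atmosphere):

```python
SOLAR_FLUX = 1350.0    # W/m^2 of sunlight (top-of-atmosphere value, used as-is)
EFFICIENCY = 0.25      # rough efficiency of real solar cells

def panel_area_km2(power_watts):
    """Solar-panel area needed to supply `power_watts`, in km^2."""
    area_m2 = power_watts / (SOLAR_FLUX * EFFICIENCY)
    return area_m2 / 1e6

start_area = panel_area_km2(8.1e10)    # ~80 GW at the start of the burn
peak_area = panel_area_km2(1.0e16)     # ~10 PW near the halfway point

print(f"for 80 GW at the start:  {start_area:,.0f} km^2")
print(f"for 10 PW at halfway:    {peak_area:,.0f} km^2")
```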

Anyways, the idea is that power on Earth is very cheap, while getting that power into space is really painful.  So, it is ok to take a HUGE hit on efficiency to accelerate something up to enormous speeds in space using Earth-based systems, instead of trying to haul some sort of chemical rocket engine up to space. In fact, chemical rockets will never get us to another star, so they are a non-starter. But, that is a conversation for next week (I promise!)




Reinventing Innovation With Small Satellites

When NASA first formed, getting to space was extremely dangerous and no one really knew how to do it. Therefore, there was an acceptance of a large amount of risk. Part of the reason for this is that the amount of money invested in NASA was enormous – at its peak in the mid-1960s, NASA’s budget was over 4% of the total federal budget. This allowed NASA to rapidly solve problems by trying new and innovative things and iterating on a design over and over until it worked. For example, the Atmospheric Explorer satellites, built in the 1960s and 1970s, had four versions that each lasted only a few months until AE-E lasted for many years. The IMP series of satellites measured characteristics of space between the Earth and the Sun. Seven of those satellites lasted only a few months each, until IMP-8 lasted 30+ years. The reason that NASA could do this is that the ratio of the cost of a single satellite to the total budget of the agency was quite small. Therefore, failing on a few satellites didn’t matter very much.

NASA today is quite different. The budget is about 0.5% of the total federal budget and has been dropping for years. The total number of satellites launched is significantly smaller, and each one needs to succeed, since each costs so much compared to NASA’s total budget. While failure still happens at NASA, it is no longer treated as a lesson; it is treated as a failure. Therefore, there are a plethora of lessons-learned documents, sets of rules that need to be followed, and significant numbers of reviews that must be held to move a project to the next phase. While this helps to ensure success, and it is a natural outcome of a system with limited budgets, it stifles innovation and greatly increases the cost of each satellite.

When a satellite is launched, the rocket (or launch vehicle) has a certain capability. For example, a given rocket may be able to launch a satellite with a mass of 1000 kg to a 700 km altitude. If the satellite that is being designed is 950 kg, the rocket then has excess capacity and, in order to get the satellite to the exact desired orbit, 50 kg of ballast needs to be added to the rocket. Sometimes, this ballast can be another satellite.

The idea of a CubeSat was created when Bob Twiggs suggested that there could be a standardized deployer for very small satellites that could be used as ballast on almost any launch vehicle. The deployer could be very strong and could ensure that if anything went wrong with the little satellite inside it, the primary payload (i.e., large satellite) would not be affected. The P-POD deployer was invented to allow cheap access to space for extremely small satellites as secondary payloads. It created a standard for a tiny satellite, dubbed a “CubeSat”, that could fit into the P-POD deployer and be launched into space wherever a P-POD could be used. A P-POD accepts a satellite that is roughly 4 inches by 4 inches by 12 inches, or 3 satellites that are roughly 4 inches on a side (hence the name “CubeSat”).

The invention of the P-POD deployer has revolutionized space, since it allows cheap access to space for anyone who can build a satellite that fits within its extremely limited confines. At first, CubeSats were considered too small to be of any use for conducting real science. This has since changed, though. The National Science Foundation has embraced CubeSats as both an educational tool and a science tool. They have funded over 10 CubeSats that have been used to conduct science ranging from lightning detection to radiation belt monitoring.

CubeSats have been successful because they have put the community back into a place where the satellites are once again quite cheap compared to the total budget of the institution. This has allowed innovation and rapid technology development to reenter the satellite industry. Because of this, a large number of companies have been created to support the industry, building smaller and smaller supporting hardware for tiny satellites. For example, small, high-speed radios have recently been introduced that can be used to downlink massive amounts of data. Without such radios, it would be extremely expensive to get all of the data from the satellite to the ground.

With the emergence of small-satellite hardware, agencies such as NASA and the Department of Defense could start to look at creating missions that are designed in a radically different way – instead of launching a single satellite that is quite expensive, they could explore how to get the same science return with two or more smaller, cheaper satellites.

Another technology that has led to the creation of missions such as CYGNSS, a satellite constellation mission that I am involved with, is the Global Navigation Satellite System (GNSS), which is the general name for constellations such as GPS. These satellites were designed to provide precise position and time information for military systems, but their use has become significantly more general. GPS is ubiquitous, with everyone having a receiver in their phone, which has done two things – pushed GPS receivers to smaller and smaller sizes and lowered the price for those receivers.

In space, a GPS receiver can be used to do many different things. One of the most common is to use the delay between when a GPS satellite sent a signal and when the monitoring satellite received it to tell something about the atmosphere. This works because electromagnetic waves travel at different speeds in different media. So, for example, if the waves travel through the ionosphere, they slow down a bit and take longer to reach the satellite. This difference can be measured, and the amount of ionosphere between the two satellites can be determined. Just like in the ionosphere, the waves slow down when they travel through water vapor, so, when the waves have to travel through the atmosphere, the amount of water between the satellites can be determined. This is the primary use of GPS receivers on satellites today – radio occultation to determine atmospheric characteristics. Because GPS receivers are so inexpensive, whole constellations can be launched to take advantage of this. The COSMIC satellites are an example.
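To get a feel for the size of the effect, here is a toy illustration (my own sketch, not from any mission’s actual processing code) of the extra group delay the ionosphere adds to a GPS signal, using the standard first-order relation that the delay scales with the total electron content (TEC) along the path and with 1/f²:

```python
# First-order ionospheric group delay on a GPS signal:
#   extra path (meters) = 40.3 * TEC / f^2
# where TEC is the total electron content along the path (electrons/m^2).
C = 299792458.0      # speed of light, m/s
L1 = 1575.42e6       # GPS L1 carrier frequency, Hz

def iono_delay_m(tec_electrons_per_m2, freq_hz=L1):
    """Extra apparent path length (meters) due to the ionosphere."""
    return 40.3 * tec_electrons_per_m2 / freq_hz**2

tec = 1.0e17         # 10 TEC units, a typical daytime value
extra_m = iono_delay_m(tec)
extra_ns = extra_m / C * 1e9

print(f"extra path: {extra_m:.2f} m  ({extra_ns:.1f} ns of delay)")
```

A meter or two of extra path is tiny, but it is easily measurable with GPS timing, which is exactly why the occultation technique works.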

The CYGNSS satellite mission uses GPS signals, but in a very different way than other satellites – it measures how much of the signal is reflected off of the ocean’s surface, which says something about the roughness of the ocean. The amount of scattered signal depends on the surface roughness, which itself depends on the winds over the water. Therefore, the scattered GPS signal strength measured by a satellite such as CYGNSS depends on the wind speed over the ocean. CYGNSS takes advantage of the fact that the GPS satellites are already in orbit around the Earth, continuously sending signals toward the surface.

Past satellites that have measured the wind speed over the ocean have had to both transmit a radio wave and receive the wave back. In order to measure the signal, the transmitter had to be quite large and powerful. Since CYGNSS does not contain the transmitter, it can be significantly smaller.

In addition, CYGNSS takes advantage of a number of components that were designed for very small satellites (like CubeSats), such as the radio for communication with the ground station, the star trackers that provide information on the orientation of the satellite, the momentum wheels, which keep the CYGNSS satellites oriented in the proper direction, and the computer systems that run everything on the satellites. This has allowed NASA to launch eight very small satellites for less than the cost of a single normally priced large satellite, and it is all due to the shrinking of space technologies and the global availability of the GPS network.

By pushing the boundaries of what can be done with extremely small satellites, NASA has started to shift toward considering constellation missions that can give us an idea of what is happening over a large portion of the globe instead of in a single place.  This has allowed innovation and the rapid development of cheap technologies to flourish, and it will allow some amazing new science to take place in the next decade!



Space Shuttle Atlantis

Here are some pictures from Kennedy Space Center of Space Shuttle Atlantis.

A view from the front with a wide angle lens.
A view from the side with a horribly distorting 8mm fisheye lens.  Interesting effect, but it makes you think that the shuttle is small, when it is actually gigantic.
The three main engines. Feel the burn.