GE Flow Battery Aims For 240 Mile EV Range And Beyond

We were just fooling around with the notion that new fuel cell technology could shake up the electric vehicle market, when here comes GE with another alternative: a flow battery that combines with a fuel cell to push EV range up to the Department of Energy’s goal of 240 miles, and even farther. The official rated range of Tesla Motors’ highly regarded but highly costly Model S is already 265 miles on a lithium-ion battery pack, so the big factor here is going to be affordability. With that in mind let’s take a look at that GE flow battery and see what’s doing.

Editor's Note: The Energy Collective currently features an interview with Dr. Cheryl Martin, Deputy Director of ARPA-E, the program that funded GE's efforts. The interview appears below.

The New GE Flow Battery

A typical flow battery consists of two separate liquids flowing on either side of a membrane. Like fuel cells, EV flow batteries would generate electricity on board the vehicle through an electrochemical reaction, rather than drawing electricity from the grid and storing it.

The challenge has been to lower the cost of the main components, including the liquids and the membrane. Another big challenge is to achieve an energy density level that enables the whole battery system to shrink down to a size and weight workable for passenger vehicles.

GE’s flow battery technology is water-based, but before you get all excited about filling up your gas tank with water bear with us for a second. By water-based they simply mean a water-based solution of inorganic chemicals.

Here’s how it would work, keeping in mind that the idea is to combine the “best properties” of both flow batteries and fuel cells:

A hydrogenated organic liquid carrier is fed to the anode of a PEM fuel cell where it is electrochemically dehydrogenated, generating electricity, while air oxygen is reduced at the cathode to water. To recharge the flow battery, the reactions are reversed and the organic liquid is electrochemically re-hydrogenated, or rapidly replaced with the hydrogenated form at a refueling station.

The result, in theory, is an energy density of up to 1350 Wh/kg, which according to GE would be a record-setter for secondary batteries.
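To put that theoretical figure in perspective, here is a back-of-the-envelope sketch of the pack mass such a battery would need for the DOE's range goal. The Wh-per-mile consumption figure is our own assumption for a typical mid-size EV, not a number GE has published:

```python
# Back-of-the-envelope pack-mass estimate for a flow-battery EV.
# ASSUMPTION: ~300 Wh per mile of consumption, a typical figure for a
# mid-size EV; GE has not published a consumption target.
ENERGY_DENSITY_WH_PER_KG = 1350   # GE's claimed theoretical energy density
WH_PER_MILE = 300                 # assumed vehicle consumption

def pack_mass_kg(range_miles: float) -> float:
    """Pack mass needed to deliver the given range at the assumed consumption."""
    return range_miles * WH_PER_MILE / ENERGY_DENSITY_WH_PER_KG

print(pack_mass_kg(240))  # ~53 kg for the DOE's 240-mile goal
print(pack_mass_kg(265))  # ~59 kg to match the Model S's rated range
```

Even with generous margins for tanks, pumps, and the fuel cell stack, that is a fraction of the mass of today's lithium-ion packs, which is what makes the claimed density notable.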

More to the point, the GE research team anticipates that their flow battery system could be produced for 75 percent less than the cost of a typical lithium-ion battery pack, which right now is the gold standard for EV batteries.

However, if you really want to go ahead and buy an EV now, don’t wait on GE. Between lower operating costs, subsidies, and a downward trend in battery prices, the cost of a good EV has already dropped to the affordability range for many car buyers. You can always trade it in a few years down the line, whenever GE’s new flow battery hits the market.

According to GE Global Research, which is heading up the project, the next step is to translate the labwork into a working prototype and demonstrate the feasibility of the technology, so commercialization is still a long way off.

We Built This New GE Flow Battery!

GE is not shy about crediting its research partners, so why should we be? GE Global Research has been recognized by the Obama Administration as an Energy Frontier Research Center funded by the Department of Energy, charged with developing game-changing energy storage technologies. According to GE, it is the only corporate research center chosen for such a role.

The project itself comes under the Energy Department’s ARPA-E RANGE initiative, which has the goal of making EV ownership just as affordable and convenient as owning a gasoline vehicle.

Other partners include the Crabtree Group and the Batista Group at Yale University, Stanford University, and Lawrence Berkeley National Laboratory.

All Roads Lead To Cheaper Flow Batteries

Aside from GE’s approach, other research teams are addressing the membrane cost issue by doing away with the membrane altogether. At MIT, for example, they’re working on a bromine-based flow battery with no membrane.

Electric vehicles represent just one market for flow batteries, by the way. Another major market is grid-scale energy storage, and we taxpayers have been hard at work on that one, too.

One good example is Pacific Northwest National Laboratory, which has partnered with the company UniEnergy to develop a grid-scale flow battery based on two different vanadium ions (vanadium is a soft metal).

Another example comes from Sandia National Laboratories, which is using a solution of liquid salts called MetiLs in its low cost flow battery project.

GE Flow Battery Aims For 240-Mile EV Range…And Beyond was originally published on: CleanTechnica. To read more from CleanTechnica, join over 30,000 others and subscribe to our RSS feed, follow us on Facebook, follow us on Twitter, or visit our homepage.

Authored by:

Tina Casey

Tina Casey specializes in military and corporate sustainability, advanced technology, emerging materials, biofuels, and water and wastewater issues. She is currently a Senior Reporter at Cleantechnica.com and a Staff Writer at TriplePundit.com. Tina’s articles are reposted frequently on Reuters, Scientific American, and many other sites. You can also follow her on Twitter @TinaMCasey and ...


Update on Fukushima Leaks: Unrepresentative Sampling Supports Fear Mongering

Nuclear Fear Mongering

After posting Fear mongering over WATER leaks at Fukushima Dai-ichi, a number of people challenged the concentration numbers I used in the supporting calculations. This August 23, 2013 Tepco press release contains numbers that roughly correspond to those I used, so I pressed the challengers for a source.

They pointed me to a Tepco handout dated August 19, 2013 which contains a table of measurements that are vastly different from the ones that were reported in the press release that I cited. The line labeled as “leakage water” includes numbers that are also vastly different from the huge number of similar measurement tables that Tepco has published on their web site.

This handout gave me pause and made me wonder if I had made a serious error in trying to calm people down. If the numbers from that handout are correct and representative, they show there is something to worry about, at least in the local area.

I turned to my friends to try to help sort out the problem. Some advised sticking with the highest measured numbers in order to bound the problem and prove to people frightened about nuclear energy and radioactivity that nuclear professionals are not taking their concerns lightly. That course of action does not appeal to me.

It is not constructive. It just reinforces fear; it does not reflect reality. Radioactive material is finite; it cannot be spread or diluted without becoming less and less concentrated. It is wrong to take the highest reading you can find and then mathematically assume that it is a representative sample. I kept digging and eventually figured out that the numbers that people were using to frighten others were from an isolated sample that was not representative of anything.

Here is an extract from the comment thread on the original post that deserves to be read by more people.

@Rod
Here is a link to the original press release from Tepco.
http://www.tepco.co.jp/cc/press/2013/1229852_5117.html

From the link (machine translation):
“Nuclide analysis results of the water analyzed so far are as follows:

 cesium-134: 4.6 × 10^1 Bq/cm3
 cesium-137: 1.0 × 10^2 Bq/cm3
 iodine-131: below detection limit (detection limit: 3.1 × 10^0 Bq/cm3)
 cobalt-60: 1.2 × 10^0 Bq/cm3
 manganese-54: 1.9 × 10^0 Bq/cm3
 antimony-125: 7.1 × 10^1 Bq/cm3
 all beta: 8.0 × 10^4 Bq/cm3
 chlorine concentration: 5200 ppm”

Doesn’t appear from these numbers that there is a unit conversion issue, and this is the Tepco press release, so I would agree that the level of scrutiny and fact checking is much higher than numbers buried in a table on page 5.

When converted to Bq/l, the all beta count is equal to 80 million Bq/l. This is the same number reported by the media.

The cesium 137 number seen here, when converted to Bq/l, is equal to 100,000 Bq/l. This is ten thousand times the legal limit for cesium 137 in drinking water. I confirmed this from the Health Canada website’s Guidelines for safe drinking water, where the level for artificial radionuclides was listed at 10 Bq/l.

80 million Bq/l means there are 80 million clicks per second on a geiger counter.
Converted to counts per minute this water is registering;

80 million counts per second x 60 seconds = 4.8 billion counts per minute.

4.8 billion counts per minute.
This is a staggering amount of radiation.

(Note the use of nonstandard units like “counts per minute” and the purposeful selection of numbers that sound as scary as possible to go along with the selection of “staggering” as an adjective.)
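The unit conversions in the quoted comment are easy to verify; a minimal sketch using the Tepco figures quoted above:

```python
# Verify the unit conversions in the quoted comment.
# 1 liter = 1000 cm^3, so Bq/cm^3 -> Bq/l is a factor of 1000.
all_beta_bq_per_cm3 = 8.0e4   # Tepco's "all beta" figure
cs137_bq_per_cm3 = 1.0e2      # cesium-137 figure

all_beta_bq_per_l = all_beta_bq_per_cm3 * 1000   # 8.0e7, i.e. 80 million Bq/l
cs137_bq_per_l = cs137_bq_per_cm3 * 1000         # 1.0e5 Bq/l

# Ratio to the Health Canada drinking-water guideline cited (10 Bq/l):
ratio_to_limit = cs137_bq_per_l / 10             # 10,000x

# The "counts per minute" figure: decays per second x 60 s/min
decays_per_minute = all_beta_bq_per_l * 60       # 4.8e9
```

The arithmetic itself checks out; the dispute below is about whether the sample those numbers describe is representative, not whether the conversions were done correctly.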

A reader who posts as CW responded:

This measurement was taken from a pool of water 0.1 cubic meter in volume on the ground, and appears to be anomalous compared to all other water readings at the site. Until confirmed with other readings from the tank it’s very possible these readings come from cross-contamination from another area of the site, possibly tracked in on a worker’s boot.

Here is my response:

@fascinated1

Based on the voluminous number of readings from all other locations, I believe that the particular sample described in that single press release is highly unrepresentative of the average content of the tanks. Tepco is a company that has experienced more than two years’ worth of focused demonization from both enemies and “friends” about its “lack of transparency.” It has decided to take a “worst case scenario” approach and treat the sample as if it actually says something about the potential magnitude of the radioactive material that might be released.

I believe that the particular small puddle probably was contaminated. I do not have full details needed to make a complete diagnosis from 12,000 miles away, but my experience with holding tanks is that they often develop a sludge at the bottom as particulate material settles out of the water column.

Similar scary reports have happened as a result of fish sampling. A small fish (29 cm long) that is a known bottom feeder showed up with what looked like a very high concentration of radioactive material; when that concentration was scaled up to a tuna weighing a couple hundred kilograms, it suggested a very frightening possible release.

http://www.huffingtonpost.com/2013/01/23/fukushima-fish-2500-times-radiation-limit-nuclear-disaster_n_2536775.html

No other fish have been found with that kind of concentration. I suspect that the small fish ate material that happened to contain a physically tiny, but quite radioactive, bit of cesium. After all, a single milligram of cesium-137 contains about 3E9 (3 billion) Bq of radioactivity. The quantity of cesium required to produce a concentration of 254,000 Bq per kg in a 2 kg fish is just 0.0002 milligrams. It would most likely be undetectable without magnification on a physical basis, but it sure is easy to find with a radiation detector.
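The specific-activity figures in that paragraph can be reproduced from first principles using the Cs-137 half-life (about 30.17 years); a quick sketch:

```python
import math

# Specific activity of Cs-137: A = lambda * N, with lambda = ln(2) / half-life.
AVOGADRO = 6.022e23
HALF_LIFE_S = 30.17 * 3.156e7            # half-life in years -> seconds
DECAY_CONST = math.log(2) / HALF_LIFE_S  # per-second decay constant
ATOMS_PER_MG = 1e-3 / 137 * AVOGADRO     # atoms in 1 mg of Cs-137 (A = 137)

bq_per_mg = DECAY_CONST * ATOMS_PER_MG   # ~3.2e9 Bq, matching "about 3E9"

# Mass of Cs-137 needed to produce 254,000 Bq/kg in a 2 kg fish:
total_bq = 254_000 * 2
mass_mg = total_bq / bq_per_mg           # ~0.00016 mg, i.e. roughly 0.0002 mg
```

Both of the article's figures fall straight out of the decay-law arithmetic, which is the point: a speck far too small to see can dominate a radiation measurement.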

Hot particles exist; the material released from Fukushima Dai-ichi is not uniformly spread over all of the places that it could have reached. There are a finite number of particles, however, and only a very small probability of encountering enough of them to cause any harm.

It is quite unproductive, unless your goal is to frighten people about nuclear energy, to pretend that the single measurement means there is a risk worth worrying about.

That small accumulation of water, described as 0.1 cubic meter in volume, was also the place where a radiation meter located about 50 cm above the water read 100 mSv/hour (beta + gamma) but just 1.5 mSv/hour gamma. A sheet of paper or a meter or two of distance would be sufficient shielding to protect a person from nearly all of the radiation from that pool. Since most human beings are not likely to drink from a puddle of water on the ground, no one would be likely to ingest the material that was causing the high radiation readings.

It is the height of absurdity to make believe that a 0.1 cubic meter puddle on an industrial clean up site is something people who live in the United States should worry about. Heck, no one anywhere should worry that the material is going to harm them.

As someone who has handled more than a few spills in my career, many of them containing far more dangerous materials than reported to have been in this puddle, I would guess that the cleanup was pretty simple.

Of course, since it was done by nuclear professionals, it is possible that it took many hours and cost tens of thousands of dollars. As Galen Winsor told the world many years ago, certain types of people in the nuclear business have turned revenue-increasing “feather bedding” practices into an extreme art form. (Feather bedding is also known as paycheck protection, but the practice is onerous when conducted by contracting companies that collect billions in revenue for doing tasks using 2-100 times as many hours as needed.)

The post Update on Fukushima water leaks: unrepresentative sample used to support fear mongering appeared first on Atomic Insights.

Photo Credit: Nuclear Fear Mongering/shutterstock

Authored by:

Rod Adams

Rod Adams gained his nuclear knowledge as a submarine engineer officer and as the founder of a company that tried to develop a market for small, modular reactors from 1993-1999. He began publishing Atomic Insights in 1995 and began producing The Atomic Show Podcast in March 2006. He now works for B&W mPower, but his posts on the Internet reflect his personal views and not necessarily the ...



BMW i3 Shifts EV Market With Composites


Published on August 30th, 2013 | by Rocky Mountain Institute

Originally published on RMI Outlet.
by Greg Rucks

Fuel economy is greatly affected by an automobile’s weight. Nevertheless, for years our automobiles got heavier. In the U.S. the average curb weight of a passenger vehicle climbed 26 percent from 1980 to 2006. Because of this increased weight, among other factors, advances in powertrain technology have not led to drastically higher miles-per-gallon ratings. However, the recent release of the BMW i3 signals the beginning of a shift toward lightweighting that will help to drive the efficiency and competitiveness of electric vehicles. BMW is using carbon-fiber-reinforced plastic on its new electric i3 to shave up to 770 pounds off the autobody compared to using traditional materials, without a significant price increase. The result is a four-passenger car that can go 100 miles on a charge with a sticker price of just over $40,000.

It’s no coincidence that BMW’s development of its first production electric vehicle coincided with a dramatic investment in a new design paradigm based on carbon fiber composites.

BMW’S BLANK SLATE

BMW knew it couldn’t just slap batteries and motors into one of its existing models. The i3’s battery pack weighs in at over 1000 pounds, so body weight reduction was critical to offsetting the batteries’ weight. To achieve a range approaching 100 miles (an influential number generally viewed as the acceptable minimum for electric vehicles) on one of its existing vehicles, it would have needed a very large battery pack to move around that heavy steel. This would have further increased mass, in turn requiring more heavy batteries, and so on, a vicious (and expensive) cycle given that just a 10 percent increase in battery capacity (the equivalent of increasing the i3’s range by about 10 miles) would add about 100 pounds of mass and $1200 of cost to the vehicle. Plus, every mile driven in a more massive electric vehicle requires more energy, making its equivalent mile-per-gallon rating worse and its operating cost higher.
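The “vicious cycle” of mass compounding described above can be sketched as a geometric series. The 100-pound and $1,200 figures come from the article; the compounding factor below is our own illustrative assumption, not a BMW number:

```python
# Mass compounding: added battery mass requires added structure, which in
# turn requires more battery, and so on; the series converges when k < 1.
# ASSUMPTION: each pound of added battery needs k pounds of supporting
# structure/chassis mass. k = 0.5 is illustrative, not a BMW figure.
LB_PER_10PCT_CAPACITY = 100      # added battery mass per 10% capacity (article)
COST_PER_10PCT_CAPACITY = 1200   # added cost in dollars (article)

def total_added_mass(battery_lb: float, k: float = 0.5) -> float:
    """Total curb-weight growth once compounding converges: m / (1 - k)."""
    return battery_lb / (1 - k)

# Adding 10% capacity (~10 miles of range) nominally adds 100 lb,
# but with compounding the whole vehicle grows by more:
print(total_added_mass(LB_PER_10PCT_CAPACITY))  # 200.0 lb at k = 0.5
```

Running the compounding the other way is BMW's bet: every pound removed with carbon fiber also removes the battery and structure that pound would have demanded.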

To achieve a level of weight reduction that could begin to effectively offset all that electric powertrain mass, BMW designers knew they would need to rethink the vehicle’s design from the ground up. They would need to change the body’s shape to better integrate the new electric drivetrain and motors, and they would need materials that could offer the same structural integrity with less weight. Despite carbon fiber composites’ higher cost per pound as compared to steel, every pound saved by virtue of the new materials’ structural advantage was a pound the battery pack would not have to move around. The business case for making a dramatic investment in an all-new material, with its own unique structural characteristics, manufacturing processes, production facilities, and supply chain suddenly made sense.

BMW spent about ten years doing exactly that, forging partnerships with new industries, vertically integrating a global supply chain, building new manufacturing facilities, and incorporating carbon fiber composite parts on its existing vehicles to get its feet wet. Currently BMW is able to produce an i3 body about every 20 hours, allowing it to kick out a shade over 400 vehicles per year, not many by auto industry standards, though an important start. And at a selling price in the low $40,000s, the i3 will be out of reach for mainstream consumers, who have a price break point of $30,000 (though federal and state incentives may help to knock the i3 sticker price down closer to an acceptable number for some consumers in the right markets).

RMI SCALING UP AUTOCOMPOSITES

Like BMW, Rocky Mountain Institute recognizes the transformative potential of carbon fiber composite. If adopted by the automotive industry at scale, total global demand for carbon fiber would very quickly skyrocket, and needed investments in disruptive technology to make the material cheaper would come pouring in from material companies eager to gain a foothold in their largest potential growth market.

Whole vehicles like the i3 would then become much more cost effective and the way would be paved for a world filled with affordable, carbon fiber intensive vehicles 50 percent lighter than today’s vehicles, powered by electrified powertrains, needing no oil and emitting no greenhouse gases. The faster we can scale this new material industry, the faster that vision will become reality.

That’s why RMI launched its Autocomposites project in a workshop last November. The workshop hosted 45 key decision makers from across the automotive and carbon fiber composite industries, all trained on the goal of identifying the most promising near-term applications for carbon fiber composite in existing vehicles. One carbon fiber composite part incorporated on just one vehicle would double automotive demand, very quickly creating the scale and growth needed to kickstart investment, reduce cost, spur competition, and seed innovation.

While many parts offered significant user value that could offset the higher material cost, among the most promising applications identified by workshop participants was the door inner, the internal structure and framing of the door that absorbs impact energy in the event of a crash. It was largely due to this safety component that the door inner rose to the top of the crop in terms of potential value, because carbon fiber composite absorbs up to six times more crash energy per pound than steel, and customers are willing to pay a premium for safety. Because carbon fiber is stiffer than steel, structural members can be made narrower while providing the same structural integrity. The window frame could thus become thinner, providing more visibility and further enhancing a carbon fiber composite door inner’s value proposition. In the end, the large material cost premium associated with introducing carbon fiber on the door inner was estimated to be more than offset by user value according to initial cost modeling and value quantification performed at the workshop.

Since the November 2012 workshop, RMI and its automotive industry counterpart, Munro & Associates, have launched the Autocomposites Commercialization Launchpad (ACL), a league of the most capable carbon fiber composite and automotive manufacturers in the industry. The door inner is the ACL’s first commercialization project, and eight major companies have now signed on to move forward with design, production, and testing. The ACL is aiming to produce 50,000 units per year or greater (a production volume never before achieved with carbon fiber composite in any industry) for a mainstream vehicle by 2018. The ACL will be capable of launching parallel commercialization projects to further accelerate learning and scale this new industry.

Whether starting with whole carbon fiber composite vehicles at low volume or individual parts at high volume, the goal is the same: very quickly scaling a new material industry to pave the way to a transformed transportation system built on the unparalleled lightweighting potential of widely-adopted carbon fiber composite.




About the Author

Since 1982, Rocky Mountain Institute has advanced market-based solutions that transform global energy use to create a clean, prosperous and secure future. An independent, nonprofit think-and-do tank, RMI engages with businesses, communities and institutions to accelerate and scale replicable solutions that drive the cost-effective shift from fossil fuels to efficiency and renewables. Please visit http://www.rmi.org for more information.




Reinventing Batteries for Electric Vehicles: Interview with ARPA-E Deputy Director Cheryl Martin

Last week ARPA-E announced 22 recipients for $36 million in total awards under the agency’s new Robust Affordable Next Generation Energy Storage Systems, or RANGE, program.

RANGE aims to reinvent the electric vehicle battery at the system level by targeting outside the box concepts that could change the way we look at electricity storage options for transportation.

Some of the approaches in RANGE include solid-state batteries without a liquid electrolyte that could be integrated into a vehicle’s frame; inherently robust battery designs, including electrolytes that stiffen upon impact to safely absorb and disperse the force of a collision; and aqueous and flow batteries influenced by ARPA-E’s grid-scale storage work. These are just some of the radical electric vehicle battery designs funded by RANGE in an effort to hasten adoption of electric vehicles by dramatically improving driving range, enhancing safety and reliability, and reducing battery costs.

The new program is ARPA-E’s fifteenth focused program area since the innovation agency’s launch in 2009 (the agency also offered open funding rounds in 2009 and 2012).

In an exclusive interview for TheEnergyCollective.com, I recently spoke with ARPA-E’s Deputy Director Dr. Cheryl Martin about RANGE and the agency’s efforts to help re-envision and reinvent the electric vehicle battery. What follows is a lightly edited transcript of our conversation.

Jesse Jenkins (TheEnergyCollective.com): Dr. Martin, thanks for joining me. To begin, tell our readers what the goals of this new program are? What do you hope ARPA-E will accomplish with the 22 projects funded by RANGE?

Dr. Cheryl Martin (ARPA-E): Well, we all know that the need for innovation in batteries and energy storage for electric vehicles is not a new subject. But we felt strongly that there was a whole different way of looking at the problem, which is a very good definition of ARPA-E’s role in the energy innovation system. We redefine the problem in a new way and see if we can come up with new approaches and new opportunities. Take the area of biofuels, for instance. At ARPA-E, we looked at how we produce cellulosic fuels and asked if we could engineer biofuel feedstocks to show new and different qualities that would improve their ability to make fuel, and that meant altering millions of years of plant evolution.

So that brings us to electric vehicles. We know the major problems are range anxiety and cost. We [ARPA-E] have run programs addressing this issue that are working on getting these high energy density battery chemistries to work. Programs like BEEST [Batteries for Electrical Energy Storage in Transportation] are making great strides and we’re really excited about what we’ve seen already. We are also hopeful that what others, like the DOE Energy Innovation Hubs, are doing in this space will be really productive.

Having funded BEEST and other high-density EV battery projects, it was time for us to take a whole different approach, and ask: Can you have a battery that operates fundamentally differently than current technologies? And since it’s the car as a whole that has to deal with range issues, and you’re driving the car, not the battery, we asked: How can you look at batteries from a systems approach and consider the bigger picture?

Well, you could approach that question and say that maybe the battery doesn’t have to be just a battery, but could have other functions in the vehicle. Maybe it can absorb impacts? Then it does more than just be a battery -- it helps with safety in crashes. Or you could really push the question and think differently about whether the battery can be part of the car in a different way? That’s where some of these ideas with polymeric systems or solid-state systems come in that are much more robust -- where the battery itself could be integral to the vehicle’s structural design. So you see, if you start to think more expansively, then you can also think about some of the aqueous chemistries that we haven’t looked at for vehicles before.

Walk me through a couple of the projects selected for awards through this program. What projects have you excited?

Let’s look at a couple projects in this portfolio using solid-state chemistries. Colorado-based Solid Power and Bettergy, located in Peekskill, New York, are trying to change the battery’s electrolyte from a traditional liquid electrolyte to a solid-state electrolyte. So instead of focusing on reducing cost and improving robustness of a liquid electrolyte they’re using new nano-particles and synthesis of solid-state materials to develop an entirely different system.

We’ve all been trained to think of a battery as two electrodes stuck in a liquid electrolyte. Think about that science class experiment you did where you stick two electrodes in a lemon or a pickle or something! But what if now the whole battery were solid-state? That’s a whole different approach with different possibilities that can help us create new learning curves for cost and performance compared to traditional batteries.

Another set of projects is focused on making batteries more robust. One project, led by Oak Ridge National Laboratory, focuses on an impact-resistant electrolyte. When the battery is hit by a force, as in a collision, the liquid electrolyte thickens up and absorbs that energy and dissipates it later. That’s a pretty cool concept, right?! It would help make batteries more crash resistant, for example.

If ARPA-E hits a home-run with this program, what will the world look like 10 or 20 years from now?

Well, once we can demonstrate some of these technologies, I think others will also come along and say, Wow, I wonder what else I can do in this space? We’re hoping this program has ripple effects that go well beyond what we’re putting into these specific projects and well beyond just talking about batteries for transportation.

If we really succeed here, the idea of using a suite of options to think about energy storage and electric transportation will be different. So I think if this is successful, we’ll have many more electric vehicles with batteries with whole different structures. It will free up designers to think about different ways to put their vehicles together. There’ll be cross-overs back to grid-level storage as well. And I love to think about spillovers to other areas that we just can’t envision today, like advanced batteries for computers or mobile electronics.

This isn’t ARPA-E’s only program focused on alternative transportation or energy storage technologies. The GRIDS program focuses on grid-scale energy storage. The BEEST program also focused on EV batteries. AMPED is focused on advanced power electronics and battery controls: everything from sensors to control systems to diagnostics to understand the health of the battery. So how does RANGE relate to the rest of ARPA-E’s portfolio?

BEEST was one of our earliest focused programs. It focused on developing new, high-energy density battery chemistries. One way to get more distance out of your vehicle is to get more energy density out of your battery, which gives you more energy, and thus longer range, in a smaller package. So that’s one approach to the key challenge of range anxiety and cost.

But if you look at AMPED [Advanced Management and Protection of Energy Storage Devices], the idea was that the reason our batteries are as large as they are is that they have this big safety margin, since we often don’t really understand enough about the health of the battery. So if you want to reduce the cost and size of batteries to improve range, you could think about having more intelligent controls and battery health diagnostics, so you could have a smaller battery without even changing the chemistry of the battery.

So together, with RANGE, these three programs give us a suite of options to tackle the key challenges on the electric transportation side: range limitations and costs of electric vehicle ownership.

On the grid-level side, our GRIDS [Grid-Scale Rampable Intermittent Dispatchable Storage] program is funding things like compressed air, wave disk engines, flow batteries, and more. Our view is that RANGE and AMPED may potentially also have value in the area of grid-level batteries in the future. In the meantime, learning about liquid or aqueous battery chemistries from GRIDS has influenced our thinking about new possibilities for EV batteries.

Shifting to a higher-level perspective, how are the seemingly-never-ending budget fights in Congress affecting the agency and your thinking on how you approach these critical energy innovation challenges?

One of the great things about ARPA-E and the way we’re structured is that we fully fund our projects right up front. So when we say we’ve awarded $900,000 to a specific project, that money is already set aside from our current year’s budget. So even if that project takes 2-3 years to complete, those scientists don’t have to worry about their budget being cut mid-project. That makes a huge difference. Our researchers can stay focused on the problem at hand and hopefully knock down these technical challenges to get these technologies moving and change our energy future.

You know, ARPA-E has had tremendous bipartisan support from its inception. We were founded based on the bipartisan “Rising Above the Gathering Storm” report. We’ve had Senators and Congressional leaders from both sides of the aisle championing the agency’s work. They’ve been very supportive of the work we do because we really are in that transformative early R&D stage and take seriously the development and deployment of these technologies to ensure they have real impact. The feedback so far from Congress in general has been really positive. So I’m encouraged that as we continue this dialogue over budget priorities across the energy spectrum, ARPA-E’s important contributions will be recognized.

In the meantime, we’ll do everything we can with every bit of funding that we’re allowed to invest.

Each program has a budget of around $20 to $35 million. So as we look at the budget, because we don’t have ongoing funding commitments locking up funds into the future, we can think about how many new programs we can launch that can have a real impact. We also have the option to get creative with how we fund things. So, for example, we did an open solicitation last year to fund smaller areas of work that are really technically important but may not need $30 million or constitute a full program. So we have a set of tools to get the most out of the money Congress directs to ARPA-E.

So if you’re making your case to the public or to Congress, why is ARPA-E worth continued investment despite tightening federal budgets?

ARPA-E is fundamentally focused on using innovation to create new options for America’s energy future, be that in energy production or generation, transmission, use, or transportation. It’s ARPA-E’s role to be out there across the spectrum, really pushing the envelope to create those options for the future and to confront the greatest energy challenges we face.

So what’s our role in all of this? ARPA-E aims to be catalytic and accelerative. It’s why we take really seriously picking important areas and working with our project teams to help them make the right connections and understand where they need to go to have an impact even after ARPA-E’s funding period. These teams are creating things that future decision makers, business leaders, and innovators -- future generations -- are going to need. And if you don’t do that research now, you won’t have those tools ready for folks when we need them in the future.

I do think the new RANGE program really exemplifies that commitment. It embodies everything from finding new ways to define and tackle a problem, to rethinking the fundamental look and feel of a battery, to opening up new frontiers for battery design. So I’m really proud of this program and what it could mean if some of this invention happens.

We're also honest that some of this won’t happen. We are asking people to take big swings with their projects, to aim for the fences. Some of these won’t work as intended. We know that. When projects hit a roadblock, we work hard with all of our project teams to find a way forward, but if it doesn’t work, it doesn’t work, and we stop that project and redirect funds to what is working. That’s one of the real strengths of ARPA-E.

And of course there is plenty of learning to be had even from the paths that don’t work out. If something doesn’t work out, now we have a new way to frame the question and new ways to think about next steps. So I’d say the only failure at ARPA-E is a failure to stop funding something we know we should have stopped. Everything else is really an opportunity.


Two Californian Cities Vying For Solar Capital Of The US


Published on August 28th, 2013 | by Guest Contributor

This post originally published on RMI Outlet
by Devi Glick

The fight for the title “Solar Capital of the U.S.” is on, and for two towns in California, things are heating up faster than a solar panel during summer peak! Earlier this year Lancaster, on the edge of the Mojave Desert north of Los Angeles, became the first city in the U.S. to mandate solar on new buildings. Months later and 400 miles away, Sebastopol, not far from Napa and Sonoma, followed suit. On the surface they couldn’t be more different … one a conservative, blue-collar city; the other a pocket of liberal, small-town wine country charm. Yet the sun unites them.

When their respective measures passed, both mayors echoed the importance of the mandates in addressing climate change. “The one thing we have to recognize is just how desperate this situation is with global warming, and at the same time recognize that we can actually fix it,” said Lancaster’s Mayor Parris. In Sebastopol, Mayor Keyes echoed the same underlying concern for global climate change: “I think it is the obvious way to go. Every time you build a house you’re making the matter worse.”

Requiring that all new houses install solar panels, or be designed to be solar ready at the time of construction, does have many cost advantages. Up until now, most solar mandates have only required new-build homes to be solar ready (California, Arizona), or at the very least to offer the option of solar readiness to customers (Colorado, New Jersey), but they have stopped short of actually requiring the installation of solar panels on the roof. This is where Lancaster and Sebastopol are different.

LANCASTER AND SEBASTOPOL

Lancaster and Sebastopol were the first two cities in the U.S. to pass a solar mandate of this kind, but demographically and geographically, they are worlds apart. Both are in sunny California, but as the prevalence of solar installs in Germany (a country with a worse solar resource than almost anywhere in the lower 48 of the U.S.) shows, solar resources are not the only significant factor driving solar adoption. There was almost no pushback when the measures were proposed, perhaps not all that surprising for liberal Sebastopol, given Sonoma County’s climate goal of reducing GHG emissions 25 percent below 1990 levels by 2015, but eye-opening in Lancaster, given the city’s largely conservative constituents.

CITY STATS

(City comparison table omitted; data source: http://www.city-data.com)


MANDATE DETAILS

In Lancaster, the mandate says that all new residential homes on lots of 7,000 sq. ft. (≈ 1/6 acre) or larger must install a solar system of 1.0-1.5 kW. In Sebastopol, all new residential and commercial buildings are required to install 2 watts of power per square foot of insulated building area or offset at least 75 percent of the building’s annual electric load. Based on the median house size in the western U.S. of 2,150 square feet, that is approximately 4.3 kW per home, or at least 3-4 times as much per home as in Lancaster (and likely even more when taking into account the fact that the solar array needs to offset at least 75 percent of the building’s electricity load).

Despite Sebastopol’s stronger per-house mandate, the absolute impact is much smaller than that of Lancaster. Two hundred new homes are forecast to be built in Lancaster this year, which means that at minimum 200-300 kW of solar will come online in Lancaster as a result of new buildings, and likely even more if housing developers find that there is strong demand for solar among new-home buyers in the area. In Sebastopol only 16 new homes were built per year in both 2010 and 2011, so assuming that trend continues, that works out to around 70 kW (again, at the mandated minimum) of solar coming online in Sebastopol as a result of new buildings.
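The mandate arithmetic above can be checked with a quick sketch; all inputs are the article's own figures (2,150 sq. ft. median home, 200 Lancaster homes, 16 Sebastopol homes):

```python
# Checking the solar-mandate arithmetic using the article's own figures.

# Sebastopol: 2 W per square foot of insulated building area
median_home_sqft = 2150
sebastopol_kw_per_home = 2 * median_home_sqft / 1000    # ~4.3 kW per home

# Lancaster: 1.0-1.5 kW per qualifying home
lancaster_kw_per_home = (1.0, 1.5)

# Citywide minimums implied by forecast construction
lancaster_homes = 200
sebastopol_homes = 16
lancaster_min_kw = tuple(kw * lancaster_homes for kw in lancaster_kw_per_home)
sebastopol_min_kw = sebastopol_kw_per_home * sebastopol_homes

print(sebastopol_kw_per_home)    # 4.3 kW per home, as stated
print(lancaster_min_kw)          # (200.0, 300.0) kW citywide minimum
print(round(sebastopol_min_kw))  # ~69 kW, the article's "around 70 kW"
```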

SO WHO DESERVES THE TITLE OF “SOLAR CAPITAL OF THE U.S.”?

My call is a draw. There are so many ways to evaluate the claim of solar capital: based on the strictness of the mandate, the total quantity of solar installed, the quantity of solar installed per house or per capita, etc. Sebastopol’s mandate is more efficient at achieving the goal of minimizing the impact of each new house built, but it is in a location where few new homes are actually being built, and the ability of residents to invest in solar without mandates is high. Frankly, the recent mandate is rather secondary to Sebastopol’s standing (as of 2011 in any case) as the #1 solar city in California in installs per resident. Lancaster’s mandate is more flexible, and it will affect many more homes, but the per-home impact will be smaller, which seems reasonable considering the lower ability of its citizenry to provide funds for solar. In addition, Lancaster’s actions are much more eye-opening considering the city’s middle-income status and political make-up.

This is not to say that a solar mandate is always the best option to achieve new-build real estate with low carbon impact. A city can mandate efficient home design, via enforced, high-quality building codes, to reduce the energy needs of a new house by the equivalent of the 1-kW solar panel from the start. Or it could build solar gardens throughout the city to bring online enough renewable generation capacity to offset a portion of energy needed for new builds, while taking advantage of solar gardens’ economy of scale. When cities have their own municipal utilities, a very broad array of market-driven efficiency and renewable generation choices at the distributed level can be unleashed with the right utility programs and price signals. Choosing such options depends on a region’s technical, economic, electrical system control, and political context and constraints. For Lancaster and Sebastopol, these mandates may indeed be their best option, and in the perception fight for “Solar Capital of the U.S.”, they have created winners out of both cities.








Energy Cost Innovation, Part 3: Global Impact of Low-Cost Clean Energy

PART 3: GLOBAL IMPACTS OF LOW-COST CLEAN ENERGY

PART 1: Liquid Fuel Nuclear Reactors introduced the history and technology of liquid fuel nuclear reactors, the path not taken as the world followed Rickover’s forceful choice of solid fuel reactors.

PART 2: Energy Cost Innovation described the opportunities for substantial cost reductions and presented the specific attributes of the molten salt reactor that lead to lower costs for energy.

Coal and prosperity

Worldwide, coal is the largest and fastest growing source of electric power, having grown 50% in a decade. The world’s 1,400 GWe of coal power capacity is planned to double. The 2009 update of MIT’s Future of Nuclear Power shows new coal plants cost $2.30/watt, contributing about 2.3 cents/kWh to power costs [at 8% cost of capital, 40-year lifetime, 90% capacity factor]. Inexpensive US coal at $45 per ton contributes $0.018/kWh to electrical energy costs. Including operations costs, coal power costing about 5.6 cents/kWh is generally the least expensive energy source worldwide.
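These cost figures can be roughly reproduced. The sketch below assumes bituminous coal near 24 MMBtu per ton and a plant heat rate near 9,800 Btu/kWh; those two values are assumptions, not stated in the MIT study excerpt:

```python
# Hedged reproduction of the coal-power cost figures cited above.
# Assumed (not from the article): ~24 MMBtu/ton coal, ~9,800 Btu/kWh heat rate.

def capital_recovery_factor(rate, years):
    """Annual payment per dollar of capital at the given discount rate."""
    return rate / (1 - (1 + rate) ** -years)

# Capital contribution: $2.30/W, 8% cost of capital, 40-year life, 90% capacity factor
crf = capital_recovery_factor(0.08, 40)       # ~0.084 per year
annual_cost_per_watt = 2.30 * crf             # dollars per watt per year
kwh_per_watt_year = 8.760 * 0.90              # kWh generated per watt per year
capital_cents_per_kwh = 100 * annual_cost_per_watt / kwh_per_watt_year

# Fuel contribution: $45/ton coal over the assumed heat content and heat rate
fuel_dollars_per_mmbtu = 45 / 24
fuel_dollars_per_kwh = fuel_dollars_per_mmbtu * 9800 / 1e6

print(f"capital: {capital_cents_per_kwh:.1f} cents/kWh")  # ~2.4 (article: 2.3)
print(f"fuel:    {fuel_dollars_per_kwh:.3f} $/kWh")       # ~0.018, as stated
```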

Affordable electric power is crucial to the developing world’s economies and their peoples’ prosperity. Peabody Coal CEO Gregory Boyce states the case for coal, “…there are 3.6 billion people in the world - more than half the global population - who lack adequate energy access. And another 2 billion will require power as the world population grows in the next two decades. …each year, we lose more than 1.5 million people to the effects of energy poverty.” Boyce called for recalibrating priorities to: eliminate energy poverty as priority one; create energy access for all by 2050; advance all energy forms for long-term access, recognizing coal is the only fuel that can meet the world's rising energy demand (italics added).

The World Bank seems to agree, lending $5.3 billion for 29 coal plants while at the same time decrying increasing CO2 emissions it says may raise global temperatures 4°C. Economics trumps politics. In a global economy, the most economical power source will dominate. The failures of the United Nations Framework Convention on Climate Change meetings in Kyoto, Copenhagen, Tianjin, Cancun, Bangkok, Bonn, Panama, and Durban testify to the power of economics and the importance of low-cost energy.

Global warming

Airborne coal soot causes 13,000 annual deaths in the US and 400,000 in China. Burning coal for power is the largest source of atmospheric CO2, which drives global warming and threatens irreversible climate damage: ending glacial water flows needed to sustain food production for hundreds of millions of people, and shrinking the polar cold water regions of the ocean where algae start the ocean food chain. Atmospheric CO2 dissolving into the ocean acidifies it, killing corals and stressing ocean life. Demand for biofuels increases destruction of CO2-absorbing forests and jungles. The World Bank predicts food shortages will be among the first consequences within just two decades, along with damage to cities from fiercer storms and migration as people try to escape the effects. In sub-Saharan Africa, increasing droughts and excessive heat are likely to mean that within about 20 years the staple crop maize will no longer thrive in about 40% of current farmland.

Ending coal CO2 emissions

Boeing-like factory production of one DMSR of 100 MWe size per day could phase out existing coal-burning power plants worldwide in 38 years, ending 10 billion tons of CO2 emissions from coal plants now supplying 1,400 GW of electric power. Energy cheaper than coal is crucial to reducing CO2 emissions.
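The 38-year figure is straightforward arithmetic:

```python
# The article's phase-out arithmetic: one 100 MWe DMSR per day vs. 1,400 GWe of coal.
coal_capacity_mwe = 1_400_000          # 1,400 GWe expressed in MWe
reactor_mwe_per_day = 100              # one 100 MWe unit produced per day
years = coal_capacity_mwe / reactor_mwe_per_day / 365.25
print(f"{years:.1f} years")            # ~38 years, matching the article
```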

Figure 8. Replacing coal plants with one 100 MWe DMSR per day zeros 10 GT of annual CO2 emissions in 38 years.

Ending energy poverty

The world population growing from 6.7 to 9 billion will increase resource competition, exacerbating environmental stress. Yet the OECD nations, with adequate energy supplies, have birthrates lower than needed for population replacement. In developing nations, electricity can liberate women from chores of fetching water, cleaning, providing food, and raising children. Women with freed time can become educated, obtain jobs, gain independence, and make reproductive choices.

Figure 9. When economic well-being measured by the gross domestic product exceeds a threshold, birthrate drops sharply. (Data from CIA World Factbook.)

Nations with GDP per capita over $7,500 have sustainable birthrates. Electricity for water, sanitation, lighting, cooking, refrigeration, communications, health care, and industry contributes to economic development and improved personal incomes. Those nations with per capita electricity of 2,000 kWh/year (an average power of 230 W, about 1/6 of US use) do achieve GDP of $7,500 per capita, which leads to sustainable birthrates.
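The 230 W figure follows directly from the 2,000 kWh/year threshold; the "1/6 of US use" comparison implies US per-capita consumption near 12,000 kWh/year (an assumption here, since the article does not give the US figure):

```python
# Average power implied by annual per-capita electricity use.
hours_per_year = 8760
kwh_per_year = 2000
avg_watts = kwh_per_year * 1000 / hours_per_year
print(round(avg_watts))          # ~228 W, the article's "230 W"

# "1/6 US use" implies a US figure near 12,000 kWh/year per capita
# (assumed; the article does not state the US number)
print(kwh_per_year * 6)          # 12000
```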

Producing CO2-neutral carbonaceous synfuels

Petroleum is the second largest source of world CO2 emissions, after coal. Cheap oil has been an important driver of world GDP. Concerns over peak oil have diminished with techniques for extraction of tight oil, but rising prices and decreasing EROI are implicated in GDP stagnation. High priced oil creates an economic opportunity for synthetic liquid fuels.

Synthesizing hydrocarbon fuels requires a source of hydrogen and a source of carbon. Nuclear heat and electricity can power dissociation of hydrogen from water. At a temperature of 950°C, the sulfur-iodine process works at a chemical/thermal conversion efficiency approaching 50%. The 43% efficient copper-chloride process can operate at 530°C, a temperature compatible with currently certified nuclear structural materials to be used in near-term DMSRs.

Potentially the carbon source can be CO2, which makes up about 0.037% of the atmosphere. Historically direct air capture has been criticized as uneconomic, but this can change with the availability of low-cost, high-temperature nuclear heat. Jeffrey Martin and William Kubic observed that alkaline lakes absorb about 30 times the CO2 of similar size fields of switchgrass. Their project Green Freedom conceived of trays of potassium carbonate solution exposed to the airflow within nuclear plant cooling towers. The potassium carbonate readily absorbs CO2 [by CO2 + K2CO3 + H2O → 2 KHCO3], creating potassium bicarbonate. The CO2 would be electrochemically removed, requiring about 410 kJ/mole-CO2 of electric energy and 100 kJ/mole-CO2 of thermal energy. The chemical manufacturing processes for conversion of CO2 and hydrogen to methanol are proven; ExxonMobil has a process for converting methanol to gasoline. The complete facility could produce 17,000 barrels per day of gasoline at an estimated consumer cost of $5/gallon (2007), requiring an investment of approximately $5 billion. The fuel cycle would be carbon neutral, because just as much CO2 would be put into the atmosphere by burning synfuels as removed by air capture.
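A rough sense of scale: using the article's 410 kJ/mol and 100 kJ/mol figures, plus a commonly cited estimate of about 8.9 kg of CO2 released per gallon of gasoline burned (that per-gallon figure is an assumption, not from the article), the capture energy per gallon of carbon-neutral gasoline works out as follows:

```python
# Rough energy cost of direct air capture per gallon of carbon-neutral gasoline.
co2_per_gallon_kg = 8.9          # assumed: ~8.9 kg CO2 per gallon of gasoline burned
co2_molar_mass = 44.01           # g/mol
moles_co2 = co2_per_gallon_kg * 1000 / co2_molar_mass   # ~202 mol per gallon

electric_kj = moles_co2 * 410    # article: 410 kJ/mol electric energy
thermal_kj = moles_co2 * 100     # article: 100 kJ/mol thermal energy

kwh_electric = electric_kj / 3600
print(f"{kwh_electric:.0f} kWh electric per gallon captured")   # ~23 kWh
```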

Nobel laureate George Olah advocates methanol fuel per se in The Methanol Economy because it is largely compatible with the existing gasoline infrastructure. Methanol was used for decades to power race cars at the Indianapolis 500. Although it has about half the energy density of gasoline, methanol can be used in flex-fuel vehicles or modified engines in ordinary vehicles.

Figure 10. CO2 is absorbed by lye [KOH] changed to lime [Ca(OH)2], which is heated to release CO2.

Jim Holm has an ambitious Skyscrubber concept to capture even more CO2, using Carbon Engineering’s process with high temperature nuclear heat. He proposes replacing the world’s 1200 largest coal plants, eliminating 10 Gt/y of CO2 emissions, and also capturing 2 Gt/y of CO2. His concept uses the high-temperature, helium-gas-cooled TRISO-fuel reactor which is closer to deployment than MSR, but MSR may be economically essential to replace the coal plants because MSRs are predicted to be lower cost than TRISO reactors.

Biomass carbon sources

Plants absorb carbon from air. Biomass energy technology strives to harvest the combustion energy stored in plants, but plants can alternatively be used as a carbon source for synfuels. Biomass and hydrogen can be combined with nuclear heat to manufacture synfuels such as diesel more efficiently than cellulosic ethanol technology does.

Biomass can be processed in a heated, entrained-flow chemical reactor to create liquid fuels. The required thermal energy can be supplied from an MSR, adding hydrogen from water dissociation, and raising the temperature of the oxygen-free production process to approximately 1000-1200°C using an electricity-powered plasma arc. The role of the biomass is not so much to provide energy but to contribute the carbon that is combined with hydrogen and MSR energy to synthesize the biofuel.

By avoiding oxidation of the biomass, the synfuel mass yield is 1 tonne of diesel per 1.7 tonnes of biomass. This is 3.3 times that of anticipated cellulosic ethanol processes such as enzymatic fermentation or gasification. This reduces land use requirements for biomass production by 70%, reducing competition with land for food crops. Estimated costs for diesel fuel production in this manner are $4 per gallon.
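The 70% land-use reduction follows directly from the claimed 3.3x yield advantage:

```python
# Land-use reduction implied by a 3.3x mass-yield advantage over cellulosic ethanol.
yield_ratio = 3.3
land_fraction_needed = 1 / yield_ratio      # ~30% of the land for the same fuel output
reduction = 1 - land_fraction_needed
print(f"{reduction:.0%}")                   # ~70%, matching the article
```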

The US consumes about 7 billion barrels of petroleum products per year. Dry biomass growth is about 6 tonnes/ha/yr, so to supply all US petroleum substitutes this way would require 160 million hectares for biomass crops. Forestland and farmland area in the US totals about 670 million hectares, so meeting US fuel needs this way is barely conceivable, and only if fuel use is reduced. Other potential carbon sources include cattle dung at 2.5 Gt/year, but collection costs are high. City sewers efficiently collect biomass at a rate of 100 grams per person per day.

No such biomass refineries are in production, and there is considerable chemical engineering development to be accomplished before constructing such billion-dollar plants. The major oil companies have the expertise to develop them. Petroleum’s high energy density and a century of engineering experience in its use have made it essential to the world economy. It could take another century to replace it with synthetic carbonaceous fuels.

Ammonia

The US uses 20 million tons of ammonia and ammonia fertilizer products annually. Energy for production of ammonia uses 1-2% of all world energy. Over 80% of ammonia is used for fertilizers that are responsible for food production sustaining 1/3 of the world population. Ammonia fertilizers were a component of the 20th century Green Revolution credited with saving over one billion people from starvation. Today ammonia is principally produced from natural gas, releasing CO2. World food production is highly dependent on fossil fuels.

Figure 11. Marangoni Toyota G86 Eco Explorer runs on ammonia fuel.

Ammonia, NH3, may be used as a fuel in internal combustion engines. With hydrogen from dissociation of water and nitrogen from air, ammonia can be produced without relying on carbon sources. Like propane, liquid ammonia can be transported in tanks pressurized to about 13 atmospheres. It has been used as fuel for the X-15 rocket plane, WW II buses in Belgium, and trucks and cars.

The NH3 Fuel Association advocates ammonia as a fuel for internal combustion engines. Today engineers are improving spark-ignited internal-combustion engines and diesel engines fueled with ammonia or ammonia with additives such as biodiesel, ethanol, hydrogen, cetane, or gasoline. Sturman Industries is developing an ammonia-fueled hydraulic engine: no crank, no cam, no carbon. Direct ammonia fuel cells can convert ammonia and air’s oxygen directly to electric power, without the need to thermally crack NH3 to release hydrogen.

Figure 12. Ammonia can generate electricity in fuel cells.

Solid state ammonia synthesis

Today the Haber-Bosch ammonia production process annually manufactures 500 million tons of ammonia from natural gas, water, air, and electricity. This process alone accounts for 3-5% of world natural gas consumption. For each tonne of ammonia produced, stripping carbon from CH4 releases 1.8 tonnes of CO2 to the atmosphere, about 10% of world coal plant emissions.
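These CO2 figures can be cross-checked against the 10 Gt of annual coal-plant emissions cited in the "Ending coal CO2 emissions" section, using only the article's own numbers:

```python
# Cross-checking the Haber-Bosch CO2 estimate with the article's own figures.
ammonia_mt_per_year = 500            # article: 500 million tons NH3 annually
co2_per_tonne_nh3 = 1.8              # article: 1.8 t CO2 per t NH3
co2_gt = ammonia_mt_per_year * co2_per_tonne_nh3 / 1000   # gigatonnes per year
coal_plant_co2_gt = 10               # from the article's coal section

print(f"{co2_gt:.1f} Gt per year")   # 0.9 Gt, roughly the "10%" of 10 Gt stated
```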

The company NHThree has designed a solid state ammonia synthesis (SSAS) plant fed by air, water, and electricity. Nitrogen is obtained from an air separation unit (ASU). There is never any separated explosive hydrogen gas. SSAS works like a solid oxide fuel cell, but in reverse, with a proton-conducting ceramic membrane. The ceramic membranes are tubes, and the SSAS can be scaled up by using more tubes. In addition to electricity, an MSR can provide the 650°C steam heat for the SSAS cells.

Figure 13. Solid state ammonia synthesis: 6 H2O + 2 N2 → 3 O2 + 4 NH3.

With factory reactor production, MSR electric power is projected to cost $0.03/kWh, leading to ammonia costs of about $200 per tonne. Ammonia from natural gas today costs about $600/tonne. This new SSAS process has been demonstrated in the laboratory, but it requires chemical engineering development to generate ammonia in commercial quantities.

Cost of ammonia fuel

The table below compares ammonia and gasoline by heat of combustion, the thermal energy that would be released in an internal combustion engine. The crude oil source energy cost of $4/gallon gasoline is about $2.67; the other costs (taxes, refining, and distribution) add only $1.33. Assuming the other costs stay the same, energy-equivalent ammonia fuel could sell for 2/3 the cost of gasoline.

Fuel        Heat of combustion    Energy source price    Fuel energy cost
Ammonia     22 MJ/kg              $0.20/kg               $0.01/MJ
Gasoline    132 MJ/gal            $2.67/gal              $0.02/MJ
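The per-MJ figures and the "2/3 the cost of gasoline" estimate can be reproduced from the same inputs (the $1.33 of non-energy costs is the article's own figure):

```python
# Reproducing the fuel-cost comparison and the "2/3 of gasoline" estimate.
ammonia_mj_per_kg = 22
ammonia_price_per_kg = 0.20
gasoline_mj_per_gal = 132
gasoline_energy_cost_per_gal = 2.67
other_costs_per_gal = 1.33           # taxes, refining, distribution (article figure)

ammonia_per_mj = ammonia_price_per_kg / ammonia_mj_per_kg             # ~$0.009/MJ
gasoline_per_mj = gasoline_energy_cost_per_gal / gasoline_mj_per_gal  # ~$0.020/MJ

# Ammonia priced per gasoline-gallon-equivalent of energy, plus the same other costs
ammonia_equiv = ammonia_per_mj * gasoline_mj_per_gal + other_costs_per_gal
gasoline_total = gasoline_energy_cost_per_gal + other_costs_per_gal

print(f"${ammonia_equiv:.2f} vs ${gasoline_total:.2f}")  # ~$2.53 vs $4.00
print(f"ratio: {ammonia_equiv / gasoline_total:.2f}")    # ~0.63, about 2/3
```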

Ammonia safety

Ammonia is the second most common industrial chemical. In the US ammonia is distributed by a 3,000 mile network of pipelines, principally for agricultural use. In a vehicle, ammonia would be liquid in tanks pressurized to 200 psi, similar to propane (177 psi). Compare this to tanks needed for compressed natural gas (3000 psi) or hydrogen (5000 psi). In an accident, spill, or leak ammonia dissipates rapidly because it is lighter than air. Its pungent odor is alerting. Ammonia is difficult to ignite, with an ignition temperature of 650°C. Unlike gasoline an ammonia fire can be extinguished with plain water.

Inhaling an ammonia concentration of one half percent for a half hour carries a 50% fatality risk. Inhalation of 500 ppm is dangerous to health. Chronic exposures of 25 ppm are not cumulatively dangerous, as humans and other mammals naturally excrete NH3 in the urea cycle, but ammonia is toxic to fish. The hazards of ammonia are different from but comparable to those of gasoline: ammonia is toxic and gasoline is explosive. A 2009 Iowa State University analysis concludes:

“In summary, the hazards and risks associated with the truck transport, storage, and dispensing of refrigerated anhydrous ammonia are similar to those of gasoline and LPG. The design and siting of the automotive fueling stations should result in public risk levels that are acceptable by international risk standards. Previous experience with hazardous material transportation systems of this nature and projects of this scale would indicate that the public risk levels associated with the use of gasoline, anhydrous ammonia, and LPG as an automotive fuel will be acceptable.”

In summary, nuclear ammonia is a suitable vehicle fuel. It emits no CO2 when burned. Its production can be CO2 free. It would require larger, stronger vehicle fuel tanks. The raw materials are air, water, and low-cost external energy from an MSR. Unlike carbonaceous synfuels, there is no expense to collect carbon from sources such as air or biomass. The economics of replacing gasoline with ammonia in this way depend upon the energy-cost-innovative technology of the MSR.

Driving world GDP growth

World government debt exceeds $50 trillion, having doubled in 10 years. The public debt of the G7 advanced economies exceeds 100% of GDP, probably contributing to low economic growth. In 2012 US GDP grew at 2.5% while US debt grew at 8%. US GDP grew at only a 1.8% annual rate in the 1st quarter of 2013.

The IMF suggests oil prices might double in a decade, diminishing GDP growth. Gail Tverberg at the Oil Drum illustrates how oil and GDP are related.


Figure 14. Energy and GDP are linked.

Oil prices are rising, partly because of diminishing EROI (energy return on invested energy). Chris Nelder tells us that a 60% drop in EROI from 25 to 10 increased oil prices 150%, and that a future drop from 5 to 2 would likewise increase prices to $240/bbl. Overall EROI for oil has dropped from 100 to about 10, and EROI for oil sands is near 5. EROI for corn ethanol is near 1.
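Since one unit of energy out of every EROI units produced must be reinvested in production, the fraction of gross energy delivered to society is (EROI - 1)/EROI; this standard definition (not spelled out in the article) makes the net-energy cliff easy to tabulate:

```python
# Net energy delivered as a function of EROI: net = (EROI - 1) / EROI.
# One unit of every EROI units produced is consumed to obtain the rest.
net = {eroi: (eroi - 1) / eroi for eroi in (100, 25, 10, 5, 2)}
for eroi, frac in net.items():
    print(f"EROI {eroi:>3}: {frac:.0%} of gross energy delivered")
# EROI 100: 99% ... EROI 2: 50% -- the "cliff" sketched in Figure 15
```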

Figure 15. Cheap energy ends as decreasing EROI cuts net energy delivered.

Economists from Solow to Ayres and Warr model GDP as a function of capital, labor, and energy. The oil sector represents about 4% of GDP, and the whole energy sector about 8%, but the impact of energy is greater than this suggests. Ayres points out there is no short-term substitute for energy, and the output elasticity of energy services must be significantly greater than the cost share. Tverberg calculated that a 0.4% increase in oil supply relates to a 2.2% increase in GDP, roughly a 5-to-1 effect.

Ayres and Warr key on the concept of the energy that does useful work -- electric power and mobile power -- rather than primary, thermal energy. US aggregate work/thermal efficiency has improved from 2.48% in 1900 to 13.17% in 2006. Electric power generation efficiency rose from 8% in 1920 to 30% in 1960. A modern ultrasupercritical pulverized coal plant can achieve 47% and a new combined cycle natural gas turbine 60%, but this source of additional useful work, efficiency improvement, is running out.

Ayres, Warr, and Hamilton point out that oil price rises are correlated with recessions, as reported by the Wall Street Journal and The Oil Drum.

Figure 16. Oil price rises have led recessions.

The implication is that diminishing EROI, efficiency saturation, and rising energy prices depress GDP. Cheap energy is no longer the source of economic growth that can solve the fiscal crisis. Energy cost innovation with liquid fuel nuclear reactors may provide that GDP growth opportunity.

Molten salt reactor development

Heightened public concerns about nuclear waste, global CO2 emissions, and nuclear power costs have led scientists and engineers to revisit the liquid fuel technologies bypassed in the 1970s. In 2005, Lawrence Livermore scientist Ralph Moir and Edward Teller, a Manhattan Project veteran and developer of the hydrogen bomb, called for the construction of a small prototype MSR to launch such an energy project. Oak Ridge had meticulously documented its research, which was scanned and posted on the web in 2006 by then graduate student Kirk Sorensen.

Figure 17. Underground MSR proposed by Edward Teller and Ralph Moir.

France supports theoretical work by two dozen scientists at Grenoble and elsewhere. The Czech Republic supports laboratory research in fuel processing at Rez, near Prague. Design for the FUJI molten salt reactor continues in Japan. Russia is modeling and testing components of a molten salt reactor designed to consume plutonium and actinides from LWR spent fuel. MSR studies continue in Canada and the Netherlands. US R&D funding has been relatively insignificant, except for related studies of solid fuel, molten-salt-cooled reactors at UC Berkeley, MIT, U Wisconsin, and ORNL.

Startup ventures in Alabama, Ontario, Florida, and South Africa are actively designing MSRs, raising capital, and seeking a host nation with performance-based regulations. TerraPower is studying liquid fuel reactors as well as developing its traveling wave reactor.

Developing MSRs requires high temperature materials for the reactor vessel, heat exchangers, and piping; and chemistry for uranium and fission product separation. The authors estimate that with national laboratory support, a prototype could be operational in 5 years for approximately $1 billion. It may take an additional 5 years of industry participation to achieve capabilities for mass production.

Impediments

The US Nuclear Regulatory Commission is not capable of licensing and regulating liquid fuel nuclear reactor technology. Its rules and procedures are specific to existing LWR power plant technologies. To illustrate, TerraPower was driven from Washington state to China to gain permission to build its TWR. In 2007 the NRC proposed developing risk-informed, performance-based, technology-neutral regulations, but the administration and Congress have neither authorized nor funded this.

The biggest obstacle to advanced nuclear power technology is public ignorance and unwarranted fear of low-level ionizing radiation. Tens of thousands of people were unnecessarily evacuated from areas surrounding Fukushima, and many died from the stress. The World Health Organization and the United Nations reported that no one died from radiation and that no observable ill health effects are expected. Yet regulations still incorporate the obsolete linear no-threshold (LNT) model, which ignores direct evidence that low levels of radiation stimulate cellular responses that repair radiation damage. The existing ALARA (as low as reasonably achievable) guideline encourages ever more costly, unnecessary radiation protection. Opponents of nuclear power use the fallacy that any radiation is dangerous to ratchet up costs to uneconomic levels. ICRP, EPA, and NRC regulations should set threshold exposure limits in the same manner as for other potential health hazards. Governments must exhibit leadership in establishing new safety regulations based on observed health effects; then they can unleash this cost-innovative technology.

Summary

Can liquid fuel reactor technology really be cheaper than coal? Yes! Moir’s analysis of the ORNL design documents shows MSR energy cheaper than coal. One non-public venture has estimated MSR costs that would lead to electricity at 3-5 cents/kWh. Costs do depend upon goals. Opponents of nuclear power will attempt to raise costs by adding unnecessary requirements. The goal of energy cheaper than coal must be prioritized at every step of design and development. The potential global impacts on climate change, energy poverty, fuel costs, and economic growth can keep the focus on cost innovation.

The world faces a climate crisis from ever increasing CO2 emissions from burning fossil fuels. Developing nations, especially, strive to end energy poverty and improve the prosperity of their people. The new world economic order means nations will adopt the lowest cost energy sources. Only cost-innovative, zero-carbon, nuclear power can undersell coal, oil, and natural gas power sources.

Co-authors

Robert Hargraves, author of the book THORIUM: energy cheaper than coal, teaches energy policy at the Institute for Lifelong Education at Dartmouth College. He received his Ph.D. in physics from Brown University.

Ralph Moir has published ten papers on molten salt reactors during his career at Lawrence Livermore National Laboratory. He received his Sc.D. in nuclear engineering from the Massachusetts Institute of Technology.

Fear Mongering Over Water Leaks at Fukushima Nuclear Energy Plant

I’ll start with the bottom line first: despite all words to the contrary, there is no reason for anyone to be concerned that “contaminated” water from the damaged Fukushima Dai-ichi nuclear power station is going to cause them any physical harm, now or in the future. The only way my bottom line statement could possibly be wrong is if some really nutty activists decide to occupy the site and drink directly from the water tanks that have been assumed to be leaking. Those nutty activists would have to be very patient people, because they would have to drink that water for many years before any negative effects might show up.

Fish swimming in the harbor have nothing to worry about; people who eat fish that swam in the harbor have nothing to worry about; people who decide to swim in the harbor would have nothing to worry about. A basic tenet of radiation protection is that the farther from the source you are, the less you have to worry about, but I am not sure how I can state that you have less than nothing to worry about.

Nearly all of the fear mongering stories I have read about the water leaking from the large number of tanks on the site of the Fukushima Dai-ichi nuclear power station contain few, if any, facts that allow an accurate risk assessment. A long time ago, I learned that there were several ways to respond to a report of “contaminated” water. The most effective way was to make a fairly quick determination of the level of contamination so the appropriate resources could be applied to the problem.

Radioactive contamination is not a “go/no-go” question; there is an infinite spectrum of possible concentrations and total quantities. The top end of that spectrum should generate a flight response; the bottom end should generate a yawn. A quantity of radioactive material that is small enough to generate a yawn should not rise on the scale just because more clean water is added to the mix to make the problem seem larger.

Unlike biological pathogens, radioactive material does not reproduce. A fixed quantity never grows; it decays and gradually gets less and less dangerous. In fact, a perfectly rational, but long ago discouraged response mantra is “the answer to pollution is dilution.”

Aside: I will remain focused on the topic at hand and not discuss why that useful mantra has been actively discredited and discouraged. End Aside.

I probably should have written more about this a long time ago, but I have never understood why there were so many tanks being built at the Fukushima Dai-ichi power station to hold treated water. From everything I have read, water that is used to cool the damaged reactors is contaminated to a level that might be of concern, but then it is run through treatment systems that remove essentially all of the isotopes that would harm the health of any living creature. The very best place to put that treated water is the same place where most treated sewage ends up: into the vast ocean, where it will never again be a source of worry or harm to anyone.

An effective, low cost solution to alleviate any concerns of local fishermen would be bringing an occasional tanker to the site. The contents of a limited number of holding tanks could be put on the tanker, which could then take the water a few miles out to sea. At that point, the treated water could be diluted into an enormously large ocean.

Warning: From here out, there is going to be a little math and some units that you might need to look up.

According to the scary stories I have read, the reason we are all supposed to be concerned is that bone-seeking strontium-90 has been detected in the contaminated water. The level has been reported as “thirty times” the drinking water standard.

Unfortunately, most “news” sources these days have a very low opinion of their readers and seem to think that using internationally accepted scientific units will confuse them. In my opinion, attempting to avoid using standard units is what confuses people.

Here is my attempt at helping you understand why I yawn when someone thinks we should all be frightened by the news that 300 tons of water contaminated with Sr-90 at 30 times the drinking water standard might have leaked out of a storage tank and might soon reach the Pacific Ocean.

According to Chapter 9 (Radiological Aspects) of the World Health Organization’s document titled “Guidelines for Drinking-Water Quality”, radiation standards for drinking water are set with some extremely conservative assumptions.

The levels are established so that a person drinking two liters of water at the limit every day for an entire year (a total of 730 liters) will receive a “committed effective dose” of just 0.1 mSv.

Note: The calculation of the “committed effective dose” value recognizes that the dose will occur over a period long after the drinking has stopped due to internal accumulation and biological half life of the isotope of concern. Because the dose is in Sieverts, it takes into account the biological damage caused by ingesting Sr-90, which emits a high energy beta particle. End note.

A dose of 0.1 mSv is 10% of the maximum allowed additional dose (1 mSv) to a member of the general public. The average background dose rate from all sources of radiation has been calculated to be 2.4 mSv/year.

For strontium-90, the drinking water standard is 10 Bq/l. The water of concern is contaminated to 30 times that standard and there are 300 tons of it. There are one thousand liters in a metric ton of water. Determining the amount of Sr-90 that might flow into the Pacific Ocean is a simple multiplication problem.

10 Bq/l x 30 x 300 tons x 1000 l/ton = 90,000,000 Bq

Written in scientific notation on a blog where I dislike making the effort to use exponents, that can also be written as 9E+7 or 9 x 10^7.

That might sound like a lot of material, but each gram of Sr-90 contains approximately 5,000,000,000,000 Bq. That can also be written as 5E+12 or 5 TBq (terabecquerels).

If someone drank two liters per day of the water that we are supposed to be afraid of for an entire year, their committed effective dose would be just 3 mSv; it would slightly more than double their annual background dose. If the entire amount of that water entered the Pacific Ocean, it would contain less than 0.00002 grams (0.02 milligrams) of strontium-90.
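
The arithmetic above can be checked in a few lines; a sketch using only the figures already quoted (the WHO limit, the 30x contamination factor, 300 tons of water, and the ~5 TBq/g specific activity of Sr-90):

```python
# Reproduce the article's Sr-90 arithmetic. All inputs come from the text.
DRINKING_WATER_LIMIT_BQ_PER_L = 10   # WHO guideline for Sr-90
CONTAMINATION_FACTOR = 30            # "30 times the drinking water standard"
TONS_OF_WATER = 300
LITERS_PER_TON = 1000
SR90_BQ_PER_GRAM = 5e12              # ~5 TBq per gram of Sr-90

# Total activity that might reach the ocean.
total_bq = (DRINKING_WATER_LIMIT_BQ_PER_L * CONTAMINATION_FACTOR
            * TONS_OF_WATER * LITERS_PER_TON)

# Mass of Sr-90 that activity represents.
grams_sr90 = total_bq / SR90_BQ_PER_GRAM

# Committed effective dose from drinking 2 L/day for a year:
# water at the WHO limit gives 0.1 mSv/yr, so 30x the limit gives 30 * 0.1 mSv.
annual_dose_msv = 0.1 * CONTAMINATION_FACTOR

print(f"Total activity: {total_bq:.1e} Bq")
print(f"Mass of Sr-90:  {grams_sr90:.1e} g")
print(f"Annual committed dose at 2 L/day: {annual_dose_msv:.1f} mSv")
```

Running the numbers confirms the text: 9E+7 Bq total, under 0.02 milligrams of strontium-90, and a 3 mSv committed dose for the hypothetical dedicated tank-water drinker.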

Now can you see why I am not worried and why I think you need to stop worrying? Of course, I expect that most of the people who have made it this far were never worried in the first place, but you might have family, friends or acquaintances who have been losing sleep in fear of the Blob, in the form of water leaking from Fukushima, coming to get them.

One more thing: the most recent stories have included concerns that additional groundwater is flowing onto the power station site and might become contaminated on its normal path to the ocean. Remember what I wrote earlier: a limited amount of radioactive material does not get any larger just because more clean water is added.

Recommended reading

Fukushima Commentary, August 24: Japan’s Disastrous Flirtation with Worst-Case Scenarios

The Register: Oh noes! New ‘CRISIS DISASTER’ at Fukushima! Oh wait, it’s nothing. Again: But hey, let’s soil ourselves repeatedly anyway

New Scientist: Should Fukushima’s radioactive water be dumped at sea?

The post Fear mongering over WATER leaks at Fukushima Dai-ichi appeared first on Atomic Insights.

Photo Credit: Fukushima Fear Mongering?/shutterstock
