Most supporters of renewable energy development are probably pretty comfortable with the way things are going. Wind and solar generation have been increasing both in "nameplate capacity" and in actual production of electricity. There have not been any significant grid failures that can be blamed on renewables. Apart from a consolidation within the solar cell manufacturing sector there have not been any notable bankruptcies within the electricity generating sector. All visible signs are positive for a continued expansion of renewable resources.
When I talk to groups about renewable energy I start off with a YouTube video which demonstrates testing the compressive strength of a concrete block. For 2 minutes and 40 seconds it is the most boring video you could imagine. The block shows absolutely no sign of stress. At 2:41 the concrete block fails and is utterly destroyed. As far as I am concerned we are at about 2 minutes and 30 seconds with respect to the electrical grid.
In order to understand what I believe to be the serious risks facing the electrical generation and distribution system it is necessary to review the structure of the system as it was before renewables began to be developed in a significant way. The chart below shows hypothetical load profiles for a peak demand day during the spring/fall, winter and summer as well as a line that represents the overall generating capacity in the system.
It can be observed that the system demand/load varies considerably throughout the day and throughout the year. It is also clear that there is a great deal of excess supply available for most hours on most days. In fact, only on the highest peak demand days of the entire year will the demand come close to the supply. That is by design as every well-managed electrical generation system in the world requires a reserve margin of 8-15% above peak demand. This reserve is meant to provide resiliency for the grid to accommodate scheduled maintenance shut-downs at major facilities such as nuclear plants, natural gas-fired and coal-fired plants as well as unscheduled outages due to storms or switching problems or other operational issues.
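The reserve-margin arithmetic is straightforward. As a minimal sketch (the peak demand figure here is hypothetical, not taken from the text):

```python
# Illustrative sketch of how a planning reserve margin translates
# peak demand into required firm generation capacity.

def required_capacity_mw(peak_demand_mw: float, reserve_margin: float) -> float:
    """Firm capacity needed to cover peak demand plus a planning reserve."""
    return peak_demand_mw * (1.0 + reserve_margin)

peak = 60_000  # MW; a hypothetical peak for a large system, for illustration
for margin in (0.08, 0.15):  # the 8-15% range cited above
    print(f"{margin:.0%} reserve -> {required_capacity_mw(peak, margin):,.0f} MW")
```

The spread between the two results is the planning headroom that must exist even before any unit is taken down for maintenance.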
(Note: I appreciate that many people will raise objections to the demand curves presented in that their local situation might be very different. That is one of the challenges facing every Independent System/grid Operator. Local demand curves can be all over the map due to the mix of commercial, residential, and industrial users. My point is not that these particular curves are the most typical in all locations. The point is that demand varies significantly over the course of the day and through different seasons.)
So before we began to develop renewable energy there was plenty of generation capacity within the system. In fact, many generation facilities were not running at anything close to capacity most of the time.
Because of a public policy decision to reduce the burning of hydro-carbons (and the associated production of CO2 emissions) wind and solar generation sources have been subsidized through a variety of financial instruments including capital grants, tax credits, and feed-in-tariffs. Renewables have also been given preferential access to the grid in most jurisdictions.
These measures have achieved the stated policy goal. Wind and solar now make up a significant percentage of generation capacity in a number of jurisdictions and at times provide a large percentage of electrical production.
For example, Germany has developed over 30 GW of solar power and over 30 GW of wind. On a blustery spring day in Germany renewables can meet up to 40% of the total electrical demand for a few hours at mid-day. There are regular announcements of "new records" for both solar and wind generation. A similar situation exists in Texas with regard to wind and in parts of Hawaii with regard to solar.
Remembering that there was already a surplus of generation capacity in the system before the development of renewables it is obvious that when renewables hit their generation peaks most traditional thermal generation plants are unable to sell electricity. That would not be a problem if the construction of these plants had not been financed based upon assumptions regarding how often they would be used and what wholesale electricity prices would be. In fact, the economics of running these plants has deteriorated to the point where many utilities, especially in Europe, are on a "credit watch".
The rational response of companies trying to sell electricity into a market with a great over-supply would be to decommission some of the oldest and most polluting plants to bring supply and demand into better balance. But there is a problem. Renewable resources cannot be relied upon, particularly at peak demand times. The chart below displays the wind resource available compared to the demand curve for a week in November 2013 in Texas (this week was not deliberately chosen to make wind look bad; it was literally the first file I found on the ERCOT site when I started writing this post).
In this situation demand rose throughout the week as a strong high pressure system spread across the state bringing with it colder temperatures while at the same time shorter days required more lighting. One of the more troublesome realities of meteorology is that large, stable high pressure systems are often responsible for peak electrical demand in both winter and summer because they are associated with clear skies and temperature extremes. These systems are also commonly characterized by very low winds across a wide area.
As a result, while demand continued to climb, wind energy faded away to almost nothing. At this point most of the thermal generation assets available within Texas had to come on-line in order to meet demand.
So it is impossible to decommission even the oldest and least efficient thermal generation plants in the system regardless of how many wind farms have been built and solar panels deployed. German utility E.ON came face-to-face with that reality in the spring of 2013 when it was instructed by the local grid operator to keep an old plant operational even though it would rarely be needed.
But a new day is dawning in the U.S. and it could be a darn cold (or hot) one.
The EPA announced regulations in December 2011 that will require coal-fired thermal generation plants to clean up or shut down. The reality is that for many of these plants it will not be feasible to clean them up. In fact, in some cases the EPA will not even allow them to be updated with modern pollution controls. As a result more than 30 GW of firm generation capacity will be decommissioned over the next several years.
Plans to replace this loss are in some cases vague and have been changing often. Increased conservation and better utilization of existing plants are frequently included in Integrated Resource Plans. In other cases greater reliance upon renewables is explicitly identified. These are not really replacements for firm capacity.
A number of new natural gas-fired plants are also under construction. While current low gas prices make this an attractive option, the threat of significant future price hikes as well as the EPA's stated goal of regulating CO2 emissions are worrisome and are, in some cases, making it harder to secure financing for these plants.
As more and more coal-fired plants are retired it is likely that total system firm generation capacity will drop resulting in smaller reserves. This, in turn, will make the system more susceptible to storms or other unplanned outages.
The degree to which grid security is compromised will vary from region to region depending upon the penetration of renewables, the number of coal-fired plant retirements, and the health of the local economy, which has a major impact on electricity demand. Based upon those factors I believe Texas and the Midwest are the areas most at risk.
It may be that the reduction in coal-fired generation will do nothing more than cull excess capacity out of the system with no negative impacts. But groups such as the Institution of Engineering and Technology in the UK have issued warnings about the progressive stress on a system that has taken decades to evolve and is now faced with unprecedented challenges.
Like the concrete block in the YouTube video the system is not displaying any outward signs of weakness. The question is this – will the North American electricity system encounter its own 2:41 moment?
In many parts of the world there are significant financial incentives for homeowners to install roof-top solar panels. These can include capital grants for the equipment, tax write-offs, and/or feed-in tariffs that guarantee that electricity produced by the solar panels will be purchased by the local utility at above-market prices. In Hawaii the annual cost of these incentives is at least $200 million. In Germany it now runs into the billions of dollars.
As I pointed out in an earlier blog posting there is inherent unfairness in these subsidies, which are only available to relatively wealthy single-family home owners. People living in multi-family dwellings, renters, and those on low or fixed incomes who cannot afford the capital costs of the installation cannot share in these programs. They can, however, contribute through taxes and electricity bill payments to the cost of the subsidies. They also disproportionately help pay for the added complexities of a grid that can incorporate distributed power generation.
The incentive programs in many areas are also vulnerable to abuse. One couple in Ohio has installed over $180,000 worth of solar panels in order to provide year-round heating for their large indoor swimming pool and indoor tennis court. I'm sure they are most grateful to the taxpayers of Ohio and in fact the entire U.S. for the more than $55,000 they will receive in various tax breaks. And by the way, their solar panels do not help anyone become independent of Middle Eastern oil. Electricity in Ohio is generated primarily by coal-fired plants with a small amount from natural gas-fired and nuclear plants.
Putting aside the fairness issue there is also a very strong argument against residential roof-top solar panels based upon basic economics.
If you live in the suburbs your street probably has dozens of single family homes of different sizes and shapes with various configurations of roofs covered by a variety of materials. Imagine, if you will, a veritable army of roofers crawling over these houses, attaching frames and mounting solar panels. If you think about that for a moment you will have to conclude that it is not an efficient operation: lots of time spent going up and down ladders and setting up safety gear, and not much time actually installing panels. Now imagine that same scenario when it is raining or snowing – more than a little scary for everyone involved.
Compare that to utility-scale solar where uniform racks can be laid out and solar panels mounted from the ground in a matter of minutes. The two scenarios are illustrated by the photographs.
Recognizing that the public and electrical utility customers are footing a large part of this installation bill, which configuration would seem to provide the best return on investment? It would be hard to argue against the utility-scale solar panels.
What about efficiency in terms of making the best use of the solar resource?
In the case of residential roof-top solar there are likely to be other buildings, trees, and hills nearby, so the panels are often in the shade. Almost all of these panels will also be mounted rigidly, most commonly at the angle of the roof pitch, which will not be the optimal angle for most sites and latitudes.
Utility-scale solar panels can easily be equipped with single or dual-axis tracking, which substantially increases the energy generated over the course of the day. They will also be located in large open areas where they are in direct sunlight for most of the day.
Having small, deep-cycle batteries as backup for the solar panels might be an expensive necessity at Possum Lodge but in suburban North America that type of installation doesn’t make a lot of sense – which is probably why almost nobody does it. Instead, through the magic of net metering, the surplus solar at mid-day is pushed out onto the grid whether it is needed or not. The home-owner effectively gets to use this mid-day electricity as a credit against the much more expensive evening and night electricity that would otherwise have to be purchased from the local utility at peak demand prices.
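The net-metering mechanics described above can be sketched in a few lines. This is a simplified model with a hypothetical retail rate and hypothetical monthly volumes (none of these figures come from the original; actual tariffs vary widely and many credit exports at rates other than full retail):

```python
# Simplified net-metering sketch: mid-day exports are credited
# kWh-for-kWh at the full retail rate against evening imports.

def net_metering_bill(imported_kwh: float, exported_kwh: float,
                      retail_rate: float) -> float:
    """Monthly bill when exports offset imports at the retail rate."""
    net_kwh = imported_kwh - exported_kwh
    return max(net_kwh, 0.0) * retail_rate  # no payout for net exporters here

# Hypothetical month: home imports 600 kWh (mostly evenings/nights)
# and exports 400 kWh of mid-day solar surplus.
bill = net_metering_bill(600, 400, retail_rate=0.15)  # $0.15/kWh assumed
print(f"Monthly bill: ${bill:.2f}")  # the utility is paid for only 200 kWh
```

The point the sketch makes is that the utility delivers 600 kWh of firm evening supply but collects revenue on only the net 200 kWh.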
For the local utility the end result is a significant reduction in revenues from the owners of the roof-top solar panels even though they are making the grid more expensive to build and maintain. Who picks up the slack? Everyone that does not have roof-top solar panels.
The home owner who installs the roof-top solar panels will probably be pretty excited about them and will maintain them to some degree. But as houses change hands that commitment could fade; as leaves, moss, and dirt accumulate through the years, who is going up on the roof to clean those panels? Nobody, is my guess. So the overall efficiency of the panels is bound to decline over time. The same goes for local battery storage if it has been installed.
Finally, the presence of roof-top solar panels has been identified as a significant danger to fire fighters.
All in all, viewed objectively, roof-top solar panels just don't make sense. There are better ways to spend those dollars as we transition away from a hydro-carbon economy. Some other ideas are described in my Sustainable Energy Manifesto.
The electricity generation situation in British Columbia, Canada is both simple and complex. The simplicity arises from an abundance of hydro-electric generating capacity. The complexity comes from a somewhat disjointed ownership of generating assets compounded by government policies that have been confusing to both generators and rate-payers.
Starting in 2002 BC Government policy mandated that BC Hydro (the publicly owned near-monopoly) change the way it added new generation capacity. Updates to existing hydro facilities and development of new large-scale hydro facilities would remain with BC Hydro. However, integration of new renewable and small-scale hydro generation would have to be through long-term purchase contracts with Independent Power Producers (IPPs). The rationale provided for this decision was the desire to transfer the risk of investments in new generation to the private sector. Given the guaranteed electricity rates, long-term locked-in contracts (20 to 40 years) and "take or pay" provisions I don't see much risk being transferred to the IPPs. What I do see are guaranteed increases in power rates for electricity which might not be required or might be better provided through public sector investments. As you might expect there has been a veritable stampede of IPPs bringing forward all manner of generation proposals. Over-subscription is usually an indication that what's on offer is a pretty safe investment.
There are two fundamental issues at the heart of BC's push for additional generation capacity (and the resultant growth in IPPs).
The first issue is the Provincial Government’s stated desire to make BC “self-sufficient” in terms of electricity generation. Just how to determine whether or not this condition has been met is a contentious issue.
As in every jurisdiction peak demand in BC lasts for only a few hours on a few days or weeks of the year. At all other times and on all other days there is ample generation capacity already existing in the province.
So how do we handle those peak demand events which might be due to low water levels in hydro reservoirs or particularly hot days or particularly cold nights?
In the past BC Hydro had access to the 900 MW Burrard Generating Station (BGS) which could ramp up quickly to meet demand spikes. It had performed that job admirably since its construction between 1962 and 1975. In July 2009 the BC Utilities Commission (BCUC) issued a report stating that "the Commission Panel declines to endorse BC Hydro's proposal to reduce its reliance on Burrard for planning purposes" (page 115). In other words the BCUC found it to be in the public interest to continue to operate BGS during demand peaks as required (typically at less than 10% of capacity on an annual basis).
On October 28, 2009 the BC Government issued a press release over-ruling the BCUC decision and has subsequently not allowed BC Hydro to include BGS for planning purposes. This administrative decision effectively removed 900 MW of firm capacity from BC Hydro’s generation fleet and provided some justification for the acquisition of new generation from IPPs (as stated explicitly in the press release).
Another side effect of this decision was the rejection of any upgrades to BGS that could have substantially reduced the Greenhouse Gas emissions of the facility while showcasing relatively environmentally friendly CCGT technology – technology that is being aggressively deployed in many other jurisdictions.
Even without considering BGS there is considerable debate about whether or not BC is actually “self-sufficient” with regards to electricity generation.
The situation is made more complicated by the Columbia River Treaty between the U.S. and Canada. This treaty, ratified in 1964, allocates up to 1.2 GW of generation capacity in Washington State to Canadian “ownership” in return for Canadian dams constructed on the Columbia River in aid of flood control in the U.S. As a matter of practice BC Hydro has not taken this electricity in kind but has instead received the proceeds from the sale of this electricity to U.S. customers.
There are also two industrial concerns (Rio Tinto Alcan and Fortis BC) which own and/or operate hydro-electric facilities with approximately 1.3 GW of generating capacity. Both of these organizations have the ability to enter into electricity sales agreements that are not controlled by BC Hydro, including export sales.
More detailed analyses of the "self-sufficiency" conundrum can be found in studies by Sopinka and van Kooten (2010), Hoberg and Sopinka (2011) and Sopinka and Pitt (2013). A chart indicating the various sources of generation in BC as of 2011 is shown below.
The bottom line is that it would be very difficult to conclusively state that BC has insufficient electricity generation assets to meet domestic needs in the foreseeable future.
The second issue driving the need for additional IPP generation in BC is the forecast for future electricity demand. This too, is a contentious issue.
In 2004 BC Hydro forecast that annual electricity demand would be 72 to 76 TWh by 2024, as shown in the chart below. Without new generation additions, and with the loss of the Burrard Generating Station, a growing generation deficit was forecast.
Eight years later the 2024 forecast should have been much better defined. In fact, the revised forecast is for between 62 and 72 TWh. Not only has the total moved down but the uncertainty has increased significantly.
Even this downward revision of demand seems too high. Although the figures displayed in various BC Hydro publications are not totally consistent, the "domestic" demand listed in the 2012 Annual Report (page 91) was 52,197 GWh (about 52.2 TWh). This is considerably less than the 57 TWh forecast in 2004. All things considered it is very difficult to feel comfortable that BC Hydro projections are solid enough to warrant entering into long-term contracts for additional IPP-generated electricity – electricity that is very significantly more expensive than that produced by existing legacy hydro facilities.
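The size of the forecasting miss can be quantified with the figures quoted above (a quick sketch; note the annual demand figures are on the terawatt-hour scale, and I am assuming the 57 TWh number was the 2004 forecast for roughly 2012, as the text implies):

```python
# Gap between the 2004 forecast and actual 2012 "domestic" demand,
# using the figures cited in the text (annual energy, TWh).

forecast_2004_twh = 57.0     # 2004 forecast for ~2012 (assumed mapping)
actual_2012_twh = 52.197     # 2012 Annual Report "domestic" demand

overshoot = (forecast_2004_twh - actual_2012_twh) / actual_2012_twh
print(f"2004 forecast exceeded actual 2012 demand by {overshoot:.1%}")
```

An overshoot of roughly nine percent in eight years is material when it is being used to justify decades-long purchase contracts.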
The impending development of some LNG facilities in Northern BC may lead to an increase in demand. However, it is quite likely that these plants will generate their own power using natural gas.
So what is the optimal path forward regarding electricity generation in BC? It seems to me that there is too much uncertainty around demand projections, the impact of conservation programs, LNG developments and most importantly Government policy regarding use of the Columbia Treaty allocation and other aspects of electricity “self-sufficiency” to be able to easily discern that path when considering BC in isolation. However, if we broaden our perspective to include a more regional view I think there are some hard facts that may point us in the right direction.
The Alberta and Saskatchewan economies are growing relatively quickly and both provinces are heavily dependent upon coal-fired plants for electricity generation. Alberta has also made a significant commitment to developing its abundant wind resources and has more than 1 GW of capacity installed as of the end of 2012. More wind developments in both Alberta and Saskatchewan are planned but integrating this intermittent resource is proving to be a challenge. The Alberta Electricity System Operator (AESO) has undertaken a multi-year investigation into how this challenge can be overcome.
Hydro facilities have the ability to follow rapid changes in system load because their output can be varied in less than a minute. As a result, using hydro to cushion output variability from wind farms is a very effective strategy. In Denmark and Germany this is accomplished using hydro resources from Sweden and Norway.
So here is a proposal.
The Site C dam proposed for development by BC Hydro is currently undergoing environmental review. It remains unclear if this additional electricity is actually needed in BC. However, this dam, located as it would be only about 100 km from the Alberta border, could act as backup to a much expanded development of wind resources in Alberta and potentially Saskatchewan as well.
The current plan is to equip the dam with 1.1 GW of generation capacity. But what if that were increased to 1.7 GW? Production at that rate would deplete the reservoir and is therefore unsustainable over the long term. But production at that level would be possible for many hours, possibly a few days – enough time to cover calm periods in Alberta when there is very little wind resource.
This over-capacity could work in conjunction with up to 2 GW of wind farm development in Alberta to reliably deliver emissions-free electricity to both provinces. The amount of “firm” and dispatchable electricity would be the average output of the wind farms plus the average output of Site C – roughly 600 MW of wind (at a capacity factor of 30%) and 1.1 GW from Site C for a total of 1.7 GW.
In periods of high winds (> 600 MW) electricity would flow into BC, Site C output would be cut back, and the Site C reservoir would refill. When Alberta wind farm output was low the excess generating capacity at Site C would be used to make up the difference, drawing down the reservoir.
Both Alberta and BC could be guaranteed a certain amount of electricity. For example, if it turns out that BC does not really need all 1.1 GW from Site C then the output could be split with Alberta receiving 1.2 GW of the aggregate 1.7 GW and BC receiving 500 MW as shown in the chart below. In that situation Alberta would be agreeing to purchase an average of 600 MW of output from Site C.
If it turns out that BC needs more than 1.1 GW from Site C then the output could be split differently with Alberta receiving perhaps 400 MW and BC receiving 1.3 GW. In that situation BC would be agreeing to purchase an average of 200 MW of wind generated electricity from Alberta.
The split could be renegotiated from time to time.
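The arithmetic behind the firm pool and the example splits above can be sketched as follows (all figures are taken from the text; the 30% capacity factor is the assumption stated there):

```python
# Sketch of the proposal's arithmetic: average wind output at a 30%
# capacity factor plus Site C's average output gives the pool of
# "firm", dispatchable power that can then be split between provinces.

WIND_NAMEPLATE_GW = 2.0   # proposed Alberta wind build-out
CAPACITY_FACTOR = 0.30    # assumed average wind capacity factor
SITE_C_AVG_GW = 1.1       # Site C average output

firm_gw = WIND_NAMEPLATE_GW * CAPACITY_FACTOR + SITE_C_AVG_GW
print(f"Firm pool: {firm_gw:.1f} GW")  # 0.6 GW wind + 1.1 GW hydro

# The two example splits from the text: (Alberta share, BC share) in GW.
for ab, bc in [(1.2, 0.5), (0.4, 1.3)]:
    assert abs(ab + bc - firm_gw) < 1e-9  # each split exhausts the pool
    print(f"Alberta {ab} GW / BC {bc} GW")
```

The key point the numbers make is that the over-built 1.7 GW of Site C capacity is what allows the *average* outputs to be sold as firm power.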
This proposal would allow the two provinces to work co-operatively to develop emissions-free electricity generation to meet future requirements. By pooling demand and negotiating a split of output that worked well for both provinces the risks associated with developing Site C and greatly expanding wind generation in Alberta would be minimized. This approach could serve as a model for similar arrangements in different parts of North America.
I recently joined a discussion about how gravity might be used to generate and store energy. One of the comments provided a link to Gravity Power, a company that has proposed a modified take on “pumped storage” whereby a vertical water reservoir is used with a heavy piston. During the discussions a few variations on this technology were proposed. I suggested that abandoned open pit mines might represent a good starting point for very large facilities.
As in my earlier posting on Funicular Power the principle behind Hydraulic Energy Storage is to use excess electricity generated mainly from wind farms when demand is low (for example at night) to raise the potential energy of a mass by moving it to a higher elevation. In this case the means to do that is a relatively standard hydro turbine in a very non-standard configuration.
In energy storage mode a massive solid piston is raised by increasing the water pressure below it by running the turbine in reverse, acting as a pump to force water down the penstock.
In generation mode the piston is allowed to sink forcing water back up the penstock and through the turbine.
The piston would be a large concrete “cup” filled with as heavy a material as could be justified by the economics of the project. This could be rock debris, dense concrete, or even iron ore. The denser the material the better.
The containing cylinder would also have to be reinforced concrete. Between the cylinder and the piston there would have to be a pressure seal. This could be a large rubber or plastic tube such as that used to contain oil spills.
The advantage of using hydraulic storage is that it can be scaled up to a truly massive size. A large hole, such as that left behind after an open pit mine has been abandoned, would accommodate a gargantuan cylinder and piston (for example the Marmora iron mine shown below).
The facility described below would use only a portion of the Marmora pit. The governing parameters are:

- Diameter of piston (m)
- Height of piston (m)
- Density of concrete (kg/m³)
- Volume of piston (m³)
- Weight of piston (kg)
- Buoyant weight of piston = weight of concrete minus weight of displaced water (kg)
- Piston travel = mine depth minus piston height (m)
- Average power (MW) if the stored energy is released over 10 hours
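The calculation behind those parameters is simple physics: the recoverable energy is the buoyant weight of the submerged piston times gravity times its travel distance. A minimal sketch follows; the dimensions and densities are hypothetical illustrations chosen by me, not figures from the original:

```python
import math

# Hydraulic Energy Storage sizing sketch. All dimensions below are
# assumed for illustration only; stored energy equals the piston's
# buoyant weight times gravity times its travel distance.

G = 9.81               # m/s^2
RHO_CONCRETE = 2400.0  # kg/m^3, typical dense concrete (assumed)
RHO_WATER = 1000.0     # kg/m^3

diameter_m = 200.0     # piston diameter (assumed)
height_m = 150.0       # piston height (assumed)
travel_m = 100.0       # mine depth minus piston height (assumed)

volume_m3 = math.pi * (diameter_m / 2) ** 2 * height_m
buoyant_mass_kg = volume_m3 * (RHO_CONCRETE - RHO_WATER)  # submerged weight basis
energy_j = buoyant_mass_kg * G * travel_m
energy_mwh = energy_j / 3.6e9  # 1 MWh = 3.6e9 J

print(f"Stored energy: {energy_mwh:,.0f} MWh")
print(f"Average power over 10 h: {energy_mwh / 10:,.0f} MW")
```

Even with these rough numbers the result lands in the thousands of megawatt-hours, which is the scale that makes the comparison with battery complexes interesting.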
The concrete pour required to line the hole and create the cylindrical “cup” is not overly large compared to a major hydro dam. A solid concrete piston would be rather expensive – on the order of $150 million in this example. It would be much cheaper to fill the “cup” with rock debris although this would be less dense. Increasing density by adding iron filings or using “dense” concrete would be useful but expensive.
Based upon other large engineering projects and mining operations this facility could probably be constructed for less than $1 billion – possibly less than $500 million. While that is a large amount of money, it would provide 86 times the energy storage capacity of the largest battery complex in North America for less than 20 times the price. The Notrees facility, completed in December 2012 by Duke Energy, cost $44 million to construct, and its battery performance will degrade over time. Hydraulic Energy Storage, which uses exactly the same components as a hydro dam, would have a useful life of as much as 100 years.
Rather than using an abandoned open pit mine, which might be a long distance from transmission facilities, Hydraulic Energy Storage could also be located close to a wind farm, although that would involve the additional cost of excavating a new hole.
When it comes to long-term, dependable and reliable energy storage there are not a lot of options available. Creative use of existing technologies (see unpumped storage) or investigation of untested concepts such as Funicular Power and Hydraulic Energy Storage have to be on the agenda if we are serious about transitioning to a sustainable energy environment.
It must be admitted that the circumstances were unique.
The consequences, although predicted by a few (or so they claimed) were dismissed as impossible by political leaders and the general public.
And yet there had been warning signs.
For months the price of oil had been moving monotonically higher; not at an alarming rate but without any obvious underlying cause.
Those with technical knowledge of the subject would have pointed out that the increase in daily oil production capacity had not been keeping pace with the increasing daily demand for several years. As a result the world markets were susceptible to any major problem in the supply chain.
But people had been hearing threats about "peak oil" for decades. They had come to dismiss it as a conspiracy concocted by the oil industry to justify "Big Oil's" money grab. Peter had cried "wolf" too many times.
So when oil passed $150/barrel public outrage called for an end to oil company greed. But politicians the world over, most being disciples of Reagan and Thatcher, refrained from interfering with the sacrosanct "market forces" they believed in so deeply.
And then on October 31 those laws of supply and demand, both respected and feared, passed judgment on the world.
A late season hurricane shut down production along the Gulf Coast of the United States. The same week an earthquake in Southern Iran caused extensive damage to a number of pipelines serving the port of Ras Tanura in Saudi Arabia. Over 5 million barrels of oil were spilled into the Persian Gulf when storage tanks at the terminals were ruptured by that same earthquake.
Within days the price of oil had hit $200/barrel and gasoline prices in the U.S. spiked to $5/gallon. Service stations began to run out of gasoline and lineups reminiscent of the 1970’s became the norm.
Over the next several months oil production in the Gulf of Mexico returned to normal and significant progress was made on repairs to the Middle East infrastructure. Despite the increasing supplies the world oil crisis continued. World oil prices rose to $300/barrel and gasoline prices in the U.S. rose to $6/gallon.
It was clear that there was now a serious imbalance between supply and demand. When pressed to increase production the most prolific oil producers in the world stated categorically that there were no large untapped pools available. The old standbys, Saudi deep reservoirs and the Canadian Tar Sands had long ago been tapped and were producing at close to maximum output. Production continued its inexorable decline in the North Sea and the Alaskan North Slope as well as all of the older conventional oil fields.
As the new reality became accepted the world reacted as it had several times before to oil price spikes. Sales of pick-up trucks, the cash cows of the North American automobile industry, came to a crashing halt.
Unable to deal with the endless line-ups at service stations many people started commuting via mass transit. While this was considered by most to be a positive development it stressed urban transit systems almost to the breaking point. Too often the destination signs on buses read “Sorry – Bus Full”. Subway and train riders frequently watched as car doors opened and closed with no opportunity to board.
As consumers had to allocate more of their disposable income to fuel purchases of one sort or another retail spending slowed dramatically pushing the developed economies into recession.
There were, of course, winners amid the energy chaos. Oil and gas exploration companies, flush with cash, began to expand their workforces. However, not quite believing that a new era had truly arrived, they tempered this expansion.
Manufacturers of electric vehicles saw record sales but were hard pressed to ramp up production lines quickly. But here again uncertainty about the future held back aggressive expansion plans.
And still the crisis deepened.
It was discovered that China, through its five-year planning process, had secured the majority of its projected oil import requirements through fixed-price, long-term purchase agreements with major oil producers. Although not completely sheltered from the economic chaos being experienced elsewhere, the Chinese economy continued to grow, albeit at a slower rate, driven more and more by internal demand. The result was further pressure on the global oil supply.
As oil prices touched $400/barrel many consumers began to switch energy sources to use natural gas whenever possible. And that is when the other “shoe” dropped.
A fracking project in SE Ohio was identified as the source of a significant landslide which buried a small town, resulting in more than 100 deaths and the destruction of several hundred homes. It was also found that the local groundwater supply had been contaminated with fracking fluids, making the water unfit for human consumption. As a result, a national moratorium on fracking was imposed until the situation could be thoroughly investigated.
With a cloud hovering over the entire fracking industry the price of natural gas spiked to $10/MMBtu. The downstream impacts on employment and energy costs were felt almost immediately, pushing many economies further into recession.
In Europe declining tax revenues and rising unemployment caused the debt crisis to rear its ugly head once again. But this time the German and French economies were not strong enough to rescue the weaker members of the EU.
Greece and Spain quickly defaulted on bond payments and had to revert to national currencies, leaving many Euro zone lenders with huge holes in their balance sheets. Several other countries teetered on the edge of default.
There were calls within the United Nations General Assembly for global rationing of oil resources. In an ironic twist Communist China declared that the commercial contracts it had signed with oil exporters should trump any U.N. resolution. Because of China's permanent seat on the Security Council and its associated veto, these discussions went nowhere.
Finally, out of frustration and desperation, the United States declared an embargo on all oil and natural gas exports from North America. All production from the Canadian Tar Sands and Mexican oil fields not used domestically was diverted to U.S. refineries. LNG exports to Japan and elsewhere were halted.
As global political tensions kept rising the oil crisis did not abate. Even with lowered demand around the world and a significant increase in exploration activity discoveries could not replace declining production in the mature oil fields of the world.
China demanded that the U.S. lift the embargo on exports from the Canadian Tar Sands that China had contracted for. The United States refused and requested that China join U.N. discussions about oil rationing.
For the first time in decades diplomats around the world began to discuss the possibility of a global armed conflict.
It was in October of 2012 that I started "The Black Swan Blog".
What was my motivation? The trigger for me was reading a white paper written by Vinod Khosla entitled "Black Swans Thesis of Energy Transformation". This paper put forth the proposition that step-change technologies and approaches, rather than incremental improvements, will be required to address the energy needs of developing economies. The "Black Swan" event represents a sudden change in thinking or perspective which can lead to true innovation. This was the fundamental concept at the heart of the book "The Black Swan" by Nassim Nicholas Taleb.
When I investigated various alternative energy initiatives that were underway and the way taxpayer and ratepayer funding was being allocated, I became alarmed. It did not seem to me that the large subsidies supporting the development of photo-voltaic solar panels and wind farms were sustainable. In many ways this seemed like lost money. Nor did there seem to be any serious effort towards overcoming the biggest problems associated with renewable energy sources: reliability and variability.
In order to bring my concerns to the attention of the public I started "The Black Swan Blog". Over the past year I have published more than 45 articles on everything from concentrated solar power to geoexchange to hydro-kinetics. At Energyblogs.com I have had more than 25,000 "reads" and counting other sites that I post at (and various cross-postings of my blog) the total number of times my blog entries have been read is greater than 70,000.
Is that impressive? In a world where almost any celebrity blog posting attracts millions of readers I certainly don't think so. On the other hand, judging from the many positive comments that I have received, there are thousands of people who have found at least some of the postings in "The Black Swan Blog" to be of interest.
I have learned a lot while researching and writing blog entries. I am now more convinced than ever that we should be focusing almost all of our R&D and funding towards the development of inexpensive utility-scale energy storage solutions that can retain energy for many days. If we had a solution to that problem then wind turbines could basically meet all of our energy needs. Conversely, without affordable energy storage solutions wind farms cause more problems than they are worth.
I am now absolutely opposed to any financial support for residential roof-top solar panels. They are expensive to install and maintain, inefficient because they are installed at a fixed angle, and require unnecessary upgrades to the local grid infrastructure which other utility customers end up paying for. This type of installation is consuming enormous amounts of money through capital grants, tax credits and Feed-In-Tariffs – tens of billions of dollars that could be better spent on storage technology.
I am also now opposed to any solar power development at latitudes higher than about 30 degrees, north or south. My concern with such developments is the very large difference between winter and summer energy production.
In the higher latitudes peak demand often occurs during the late afternoon and through long winter nights when solar is not available. Relying upon solar in any significant way would require a lot of over-building in order to deal with low solar energy availability in winter, and would result in a large surplus of electricity during the middle of the day in summer. The alternative would be to maintain some other source of electricity as back-up in winter. Neither situation really helps us move towards a sustainable energy future in a cost-effective manner. At some point in the future compressed hydrogen may provide the large-scale, long-duration energy storage that would allow solar energy to be balanced throughout the year at these latitudes. But until then solar power developments at these higher latitudes do not make a lot of sense.
On the other hand I believe that solar power should be the primary focus at latitudes lower than 30 degrees. Combining Photo-Voltaic (PV) solar panels to supply electricity during the day with Concentrated Solar Power (CSP) and Thermal Energy Storage at night would provide dispatchable and reliable base load electricity generation for a reasonable cost.
The Solana plant in Arizona which cost approximately $2 billion (about $7/watt) represents one example (although it is a pure CSP facility). The plant can produce 280 MW of electricity for up to 6 hours after the sun has set (in the same way that the Gemasolar plant in Spain generates 7x24x365). A combined PV/CSP facility could have been built for significantly less and would have provided the same extended generation profile.
I am particularly bullish about PV+CSP for the Hawaiian Islands where residual fuel oil is burned to produce electricity.
Another possible application of this technology is in the Middle East where more than 2% of the world's oil consumption is used by desalination plants. One large solar-powered plant is under construction in Saudi Arabia; PV+CSP has the potential to completely eliminate the burning of oil for desalination, a very poor use of a valuable and non-renewable resource.
While we develop energy storage solutions we also need to become more flexible in the way we use electricity. Demand Response programs are being initiated by some utilities but even more needs to be done to promote public education and awareness of the importance of reducing energy use at peak demand times. The truth of the matter is that we only have a problem with energy for a few hours per day for a few weeks per year. A program such as the one implemented in post-Fukushima Japan would have a significant positive impact on our ability to manage peak demand.
Finally, I continue to be a strong advocate for geoexchange systems. If building codes mandated the use of geoexchange rather than traditional HVAC systems the impact on energy use for both heating and cooling would be very significant. Widespread deployment of geoexchange systems could effectively "clip" peak demand both in the summer and winter.
I can't say that "The Black Swan Blog" has had any impact on alternative energy policy or practice during year one. I do think it has given readers some different perspectives on the complex issues surrounding renewables and the practical realities that will need to be faced as we transition to a sustainable energy environment.
What's the bottom line? I have had enough interest and positive feedback to keep "The Black Swan Blog" going for another year.
Thanks for your support.
It was in May of 1961 that President John F. Kennedy declared that the United States would do the things required to land a man on the moon before the decade was out “not because they are easy but because they are hard”. Twenty months later a contract to design and build a Lunar Excursion Module was awarded to Grumman, years before anyone could be certain that the rest of the program would come together.
I am drawing attention to that particular aspect of the Apollo program because it reflects a confidence and an approach that I think is lacking in our efforts to transition to a sustainable energy environment.
At the time that the LEM contract was awarded the U.S. had only managed to put three solo astronauts into orbit, with the longest flight being less than 10 hours. There was no launch vehicle that could put three men into orbit, let alone send them to the moon. There was no guarantee that humans could survive multiple days in space and no life support systems had been developed that would make such a voyage possible.
In that context, was it not just a little bit crazy to commit $billions to develop a lunar landing craft so early when the entire Apollo program was in its infancy, faced with unknown risks and an uncertain end result?
Eighty years before the Apollo program began the Canadian Government made a commitment to build a transcontinental railway. As with the Apollo program this project faced unknown and unknowable technical and financial risks. And yet the very first section of track laid down was in the Fraser Canyon of British Columbia, thousands of miles from the population centers in the east and very near the western end of the line.
Both of these examples (and there are many others) demonstrate a recognition that it is often best to attack the most difficult engineering problems first because they may take a lot of time and effort to deal with. And in both of these situations the actions of the sponsors confirmed their belief that the problems could and would be overcome.
Without being able to find a way through the treacherously narrow Fraser Canyon there could be no Canadian Transcontinental railway. And without a firm commitment to completing the entire railway the Fraser Canyon section of track would have been utterly useless.
Without a LEM, the first rocket-powered machine in history capable of taking off and landing as well as docking with an orbiting service module, there could be no lunar mission. And if any component of the entire Apollo program failed the LEMs would never have been used.
Now fast forward to the 1980s and consider the approach to developing renewable energy sources.
The first serious effort was led by Arnold Goldman who formed the Luz Corporation to build the Solar Energy Generating Stations (SEGS) in California. Although he was a great believer in solar energy Mr. Goldman explicitly recognized that electricity is required after the sun sets and that it did not make sense to build plants that could not match supply and demand. As a result the SEGS plants are equipped with natural gas as a secondary fuel to be used on cloudy days and in the evening.
Unfortunately, after Luz went bankrupt in 1991 Mr. Goldman’s common sense approach seems to have been lost.
Solar and wind energy both suffer from the fact that they are not dispatchable and consequently they may generate electricity when it is not needed and they may not generate electricity when it is needed most. This is not mysterious. This is not a surprise.
Bearing this fact in mind it is clear that the most important engineering task to be undertaken is to be able to store the energy from solar and wind so that it can be used when it is needed. And yes, this is also the most technically challenging aspect of transitioning completely away from the burning of hydro-carbons to generate electricity.
Without reliable and affordable energy storage systems it is simply not possible to actually decommission the thermal generating plants that we have relied upon for more than 100 years. Simply reducing how much we use those plants is not good enough and in any case is not economically viable in the long run.
It follows that significant R&D funding and other financial support mechanisms should have been directed towards energy storage solutions from the beginning. Has that happened? The short answer is “No!”
I have written blog postings about many different storage technologies including the use of post-consumer electric vehicle batteries, flywheel technology, and compressed hydrogen. These are all technologies that are either in production use or very close. But they are also technologies that are immature and expensive, requiring substantial additional funding to bring down costs.
And yet, in these cases the funding is invariably very difficult to get and inadequate when it is grudgingly provided.
Western Michigan University has not been able to get a few $million to conduct engineering studies on battery re-use. Beacon Power, developers of flywheel storage systems, went bankrupt, and the company was only able to rise from the ashes because of a change in the regulations regarding quick-response backup power. The amount of money going into hydrogen storage research is also insignificant compared to the tens of $billions going into subsidies for rooftop solar panels and wind farms.
When I have spoken to the researchers in this field they have invariably cited a lack of funding as the major stumbling block preventing significant progress.
Apart from a lack of serious R&D funding support, commercialized energy storage, even in pilot projects, is very hard to configure in a way that will generate anything close to a profit.
Energy storage systems are treated as “end users” by Independent System Operators (ISOs) and consequently are charged a grid access fee. There is no Feed-In-Tariff for electricity produced from storage and there are no capital grants or other incentives provided to assist in the construction of energy storage facilities (although organizations like NREL do often participate in “one off” pilot projects).
Facing high costs, technical risks, access fees, and uncertain revenue streams utilities and private investors have done exactly what you might expect when it comes to commercializing energy storage solutions: almost nothing.
Going back to the Canadian Railway analogy what we have been doing is laying track across the prairies at a furious pace while leaving the “hard parts” of the plan as a homework assignment to be completed later. That is an excellent way to earn a failing grade in my opinion.
We need to get serious about energy storage. There is no other option.
I would like to see President Obama initiate an international project aimed at developing one or more affordable energy storage technologies before the decade is out, with the goal of being able to supply 100 GWe of electricity for at least 10 hours (total storage of 1 TW-Hour).
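The arithmetic behind that target is straightforward; a minimal sketch (the 100 GWe and 10-hour figures are simply the round numbers proposed above, not figures from any official program):

```python
# Back-of-the-envelope check of the proposed storage goal:
# 100 GWe of discharge power sustained for at least 10 hours.

power_gw = 100      # target discharge power, gigawatts (electric)
duration_h = 10     # target discharge duration, hours

energy_gwh = power_gw * duration_h    # energy in gigawatt-hours
energy_twh = energy_gwh / 1000        # convert to terawatt-hours

print(f"{energy_gwh:,.0f} GWh = {energy_twh:.0f} TWh")  # 1,000 GWh = 1 TWh
```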
Now that would be inspiring.
With the U.S. government “closed for business” and a looming debt ceiling crisis the debate over whether or not to extend the wind energy Production Tax Credit (PTC) is not getting much attention these days. President Obama’s official position is that the PTC should be extended indefinitely. Many in Congress disagree and there is ample private sector and research lab commentary on both sides of the question.
In my opinion the PTC debate has to be framed within the context of what is the most effective use of scarce public funding to advance our transition from an economy based upon hydro-carbons to one based upon renewable energy sources. On that basis and with all due respect to President Obama I don’t think the PTC should be a priority.
Initially there was general acceptance of the need to spur innovation and reduce the installation costs of wind generation. With the transition to larger and larger turbines and very tall mounting towers the efficiency of wind generation has been improved and costs per MW of generation have fallen. But those cost reductions were driven in large part by competition from Chinese manufacturers rather than any breakthroughs in technology and the cost reductions have largely leveled off.
As wind generation in many jurisdictions in the world has developed it has moved from being a “green energy” bragging point in annual reports to being an operational concern for most Independent System Operators. The fundamental problem with wind generation is its unreliability and variability.
Wind generation is a bit like nutmeg. In small doses nutmeg is a pleasant treat to sprinkle on coffee or eggnog; taken in bulk it can be fatal. We are rapidly moving out of “pleasant treat” territory when it comes to wind generation.
In areas where wind capacity is relatively large compared to demand (Denmark, Germany, The U.S. Mid-West and Texas) the problems with wind are starting to get serious.
From a physical grid standpoint the most difficult problem is the very rapid ramp-up and ramp-down that wind farms can experience, even over very large areas. A quick look at the German generation for 2012 demonstrates the problem.
Despite having over 30 GW of Nameplate capacity there are many times when there is virtually no wind energy production across the whole of Germany. The periods of regional calm have lasted from a few minutes to many consecutive hours. In Germany’s case interconnections with Norway, Sweden, France, and the Czech Republic allow these rapid variations to be balanced by fast response hydro and nuclear generation outside the country. The same is true of Denmark.
In Texas these variations are balanced by thermal generation assets (mostly coal- and natural gas-fired plants), many of which have to be kept on-line as “spinning reserves” able to respond quickly to wind generation fluctuations. But because of the deregulated market in Texas and the existence of the PTC, wind electricity producers can bid very low prices into the ERCOT market, to the point where quite often they are bidding negative prices (a practice even more prevalent with the Midcontinent Independent System Operator – MISO). No responsible operator of a thermal generating plant can bid negative prices, which means that such plants get blocked out of the electricity market when the wind is blowing strongly. In many places wind energy also gets preferential access to the grid by regulation.
As a result there is a growing crisis in traditional (and reliable) electricity generation. This has manifested itself in various ways. In the Euro zone almost all utilities are on credit watch. In both Europe and Texas it is becoming increasingly difficult to get financing for new thermal generation. ERCOT in Texas is raising the ceiling price in the spot market to $9,000/MW-Hour (the annual average in Texas is $45/MW-Hour) in an effort to get new generation built. That strategy is not working particularly well.
Even more worrisome is the fact that it is becoming increasingly obvious that the more wind generation that exists the more reliable reserve capacity is required. What that means is that it is necessary to maintain almost a complete duplication of reliable generation assets to backup the wind farms. That is physically wasteful and economically untenable.
It also must be recognized that the actual amount of effective wind generation at peak demand times is a small fraction of the “Nameplate” generation. While average capacity factors for wind in the U.S. are about 25%, more than half of that generation takes place at night when demand is low. Looking more specifically at generation at peak demand times the availability of wind is even less. This is because peak demand often occurs during cloudless high pressure weather events in both summer and winter when temperatures will be extreme and winds will be calm. I published a somewhat humorous take on that possibility in my Christmas, 2012 blog “The Fright Before Christmas”.
MISO’s Independent Market Monitor (Potomac Economics) noted in a June, 2013 report on page 39 that
“wind resource output is negatively correlated with load and often contributes to congestion at higher output levels, so hourly-integrated prices often overstate the economic value of wind generation”
As a result they state that the MISO practice of counting 13.3% of wind as reliable is much too high. They recommend instead that a value of 2.7% would be more appropriate (page 16 of the report).
ERCOT takes a similarly optimistic approach by counting 8% of Nameplate wind capacity as available. The reserve estimates do not explicitly address the possibility of a very calm, high demand event for an extended period of time. In the Report on the Capacity, Demand, and Reserves issued in May, 2013 ERCOT forecasts Reserve Margins declining from 13.8% in 2013 to less than 5% in 2023.
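To make the difference between these capacity-credit figures concrete, here is a small sketch applying each percentage to a wind fleet; the 10,000 MW nameplate figure is a hypothetical round number of my own, not an actual MISO or ERCOT fleet size:

```python
# How much wind capacity gets counted as "firm" under the different
# capacity-credit figures cited above. Fleet size is illustrative only.

nameplate_mw = 10_000   # hypothetical nameplate wind capacity

credits = {
    "MISO current practice (13.3%)":       0.133,
    "MISO IMM recommendation (2.7%)":      0.027,
    "ERCOT practice (8%)":                 0.080,
}

for label, credit in credits.items():
    firm_mw = nameplate_mw * credit
    print(f"{label}: {firm_mw:,.0f} MW counted as firm")
```

The spread (1,330 MW versus 270 MW of "firm" capacity from the same fleet) is why the capacity-credit assumption matters so much for reserve margin forecasts.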
Two things seem strange about this forecast as far as I am concerned. The first is that wind capacity is not forecast to increase for the next ten years. That will certainly not be the case if the PTC is renewed. Second, the amount of coal-fired thermal generation capacity is forecast to decrease only very marginally over the next 10 years. That seems unlikely given that the new MACT regulations will almost certainly cause some large coal-fired plants to be decommissioned. The more likely scenario is that ERCOT Reserve Margins will decrease more quickly than indicated in this forecast.
Viewed objectively, the investment of something close to $100 billion in wind generation (a significant portion of which was provided by taxpayers and ratepayers) has not produced very impressive results. No thermal generation plants have been decommissioned solely because of the existence of these wind farms. Capacity factors at peak demand times are in the single digits. The financial health of existing utilities, which still provide the firm capacity needed to keep the lights on, has been put in serious jeopardy.
There are much better ways to allocate the funds that would extend the PTC for wind developers. I have outlined a comprehensive program in my “Sustainable Energy Manifesto” with some of the major items being significantly increased support for energy storage R&D, a PTC or FIT for energy storage, hydro-kinetics, and Concentrated Solar Power developers, and regulatory changes that would cost very little.
Given how strong the wind energy lobby is, how politically correct renewables are, and how much money is at stake, extending the PTC would be the easy choice. I just don’t think it is the right choice.
When most renewable energy advocates talk about energy storage they are referring to relatively short-term storage; everything from 15-minute storage to stabilize the grid and provide bridging power during sudden changes in output (for example the Notrees battery storage facility or the Beacon Power flywheel facility in Pennsylvania) to the 12-14 hours of Thermal Energy Storage which allows the Gemasolar Concentrated Solar Power plant in Spain to run 7x24x365. But if you examine PV Solar generation from Germany on an annual basis it becomes obvious that there is also a long-term storage problem that needs to be solved.
Summer PV Solar output is about 5 times greater than winter output. Any attempt to treat PV Solar in Germany as a consistent and reliable source of electricity would involve building out 5x more capacity than is really justified and then dealing with a huge surplus of electricity in the summer. This would be both highly inefficient and horrendously expensive.
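A rough sketch of that over-build problem, assuming (hypothetically) a flat 10 GW average demand and the 5:1 summer-to-winter output ratio cited above; both numbers are illustrative, not German grid data:

```python
# Illustration of the seasonal over-build problem for PV at high latitudes.
# Assumes flat demand and a 5:1 summer-to-winter PV output ratio.

avg_demand_gw = 10            # hypothetical average demand
summer_to_winter_ratio = 5    # PV output ratio cited for Germany

# Size the PV fleet so that even *winter* output covers demand...
winter_output_gw = avg_demand_gw
summer_output_gw = winter_output_gw * summer_to_winter_ratio

# ...and see what that implies for summer.
surplus_gw = summer_output_gw - avg_demand_gw
print(f"Summer output {summer_output_gw} GW against {avg_demand_gw} GW demand "
      f"-> {surplus_gw} GW of surplus")
```

In other words, sizing for winter reliability leaves roughly four times the entire demand as surplus in summer, which is the inefficiency the paragraph above describes.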
There are not a lot of options when it comes to really long-term energy storage that would span many months. However, there is one solution that would work and has been deployed in a limited way in real-world applications. That solution involves powering an electrolyzer to break water down into oxygen and hydrogen, then using the hydrogen in one fashion or another sometime later.
Research and development into the use of hydrogen for storing renewable energy has been going on since 2007 at the National Renewable Energy Laboratory (NREL) in Golden, Colorado. In partnership with Xcel Energy, the NREL wind-to-hydrogen test bed has included a number of different components: different types of electrolyzers, fuel cells, hydrogen-powered generators, and various interconnection technologies. Using hydrogen as a way to store energy is complicated, with many ways to handle the flow of electricity between the electrolyzers and the eventual end users of the hydrogen. The need to convert between AC and DC and to efficiently control the electricity flow within a “smart grid” represents a significant challenge, and only by testing multiple configurations and several different technologies will the optimal design for deployment at scale be determined. Research is ongoing.
In an effort to evaluate some of the challenges that would be encountered in a commercial application of hydrogen storage technology the Basin Electric Power Co-operative entered into a pilot project with the Energy & Environment Research Center at the University of North Dakota. This project used real time dynamic scheduling to draw electricity from the Wilton Wind farm and feed that into an electrolyzer. The output hydrogen was stored in tanks and delivered directly to three pickup trucks and a tractor that were converted to use hydrogen fuel.
As might be expected in a ground-breaking research project many issues were encountered, primarily around the reliability of some of the equipment components. Despite the challenges the project ran successfully in a “production” mode from early 2008 until 2011, when the equipment was transferred to the NREL site in Colorado. A great deal of very valuable information was documented through this project regarding the full-cycle costs and practical application of hydrogen storage and use for the transportation sector.
In remote Bella Coola, British Columbia, hydrogen storage is in use every day to reduce the amount of diesel fuel burned to generate electricity.
The $7.4 million funding for the project was provided by BC Hydro, Sustainable Development Technology Canada, and General Electric Canada.
Excess hydro electricity is used to power an electrolyzer which extracts hydrogen gas from water. This gas is compressed and stored in tanks for future use. Part of the excess hydro electricity is also stored in a flow battery which can provide very fast response both for storage and delivery of electricity.
If peak demand exceeds the capacity of the hydro facility then the compressed hydrogen is fed into fuel cells which generate electricity without combustion, making this an emissions-free system. Total hydrogen storage implemented in this project is 3.3 MW-hr, which will deliver 100 kW for about 16 hours after accounting for energy losses in the fuel cells and associated processes.
Overall end-to-end efficiency of the system is about 25%. That is, for every MW-hr of hydro electricity used to produce the hydrogen gas, about 0.25 MW-hr is eventually returned to the grid via the fuel cells.
A 75% loss may seem like a lot but the alternative is to let the water pass through the hydro dam spillways without generating electricity at all. That’s a 100% loss.
And according to Sean Allen, chief engineer for Powertech, the prime contractor for the project, there are ways to improve the overall efficiency including using heat generated by the Electrolyzer, Fuel Cells, and compressor as part of a district heating solution.
Even on the relatively small scale of this project construction costs were under $5/watt-hr ($7.4 million for effective delivery of 1.6 MW-Hrs). This compares quite favourably with the Notrees battery complex at $5/watt-hr and the Beacon Power flywheel technology at about $8/watt-hr. The big advantage for hydrogen storage systems is their ability to scale up for large amounts of storage by adding compressed hydrogen tanks. Theoretically weeks or months of excess energy could be stored in this way.
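The Bella Coola figures quoted above can be re-derived from the project numbers; a small sketch (the ~48% storage-to-grid figure is my own derived number, not one published by the project, and presumably differs from the 25% end-to-end figure because the latter also includes electrolyzer and compression losses):

```python
# Re-deriving the Bella Coola cost and efficiency figures from the
# numbers quoted in the text.

stored_mwh = 3.3        # hydrogen energy stored (MW-hr)
output_kw = 100         # fuel-cell output power
duration_h = 16         # approximate discharge duration

delivered_mwh = output_kw * duration_h / 1000     # 1.6 MW-hr delivered
storage_to_grid = delivered_mwh / stored_mwh      # H2-to-electricity leg only

project_cost = 7.4e6                              # total project funding, dollars
cost_per_wh = project_cost / (delivered_mwh * 1e6)

print(f"Delivered: {delivered_mwh:.1f} MW-hr "
      f"({storage_to_grid:.0%} of stored hydrogen energy)")
print(f"Cost per deliverable watt-hour: ${cost_per_wh:.2f}")  # under $5/watt-hr
```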
The big disadvantage for this configuration of hydrogen storage system is the overall efficiency compared to batteries or flywheels. That implies that the electricity used to power the system must truly be surplus to any reasonable need and therefore is essentially worthless. Hydro power at night when reservoirs are full represents one viable source that fits this definition. PV Solar at mid-day is reaching that point in some jurisdictions as is wind in places like Texas where wind generators are sometimes offering electricity at negative prices.
At the opposite end of the country another Canadian project has created a “hydrogen village” on the western tip of Prince Edward Island. The goal of this project is to demonstrate that remote communities can be completely self-sufficient in terms of electrical power without resorting to diesel generators. The wind farm produces electricity which is dynamically routed to satisfy both real-time demand as well as fueling an electrolyzer producing hydrogen which is stored in compressed form. When the wind is calm the stored hydrogen is used to fuel a back-up generator. The project started in 2009 and is ongoing.
Most recently utility giant E.ON initiated a project based upon yet another approach to the use of hydrogen storage. In their facility in Falkenhagen, Germany they use surplus wind generated electricity to power an electrolyzer. Rather than storing the resulting hydrogen this facility injects the hydrogen into a natural gas pipeline where it will become part of the energy feedstock for residential or commercial heating. As with the Bella Coola project, the electrolysis and eventual combustion of the hydrogen to produce energy will result in overall end-to-end system efficiencies of about 25%.
So is hydrogen storage the answer to our long-term energy storage requirements? At this point it is pretty much the only game in town. And despite the many challenges with this technology there have been enough “nearly ready for prime time” projects to warrant further research.
Which brings me back to one of my ongoing frustrations. While “the world” has pumped literally hundreds of $billions into subsidies for solar panels and wind farms, the amount dedicated to energy storage research and development is in the hundreds of $millions. Long-term storage using hydrogen has received a small fraction of that total. If we really want to rely upon renewables in the next few decades the funding priorities have to be turned 180 degrees.
If you have seen the movie “Jurassic Park” (and who hasn’t?) you might remember the scene where Dennis Nedry, the would-be thief of InGen’s dinosaur embryos, is having lunch with Lewis Dodgson, the potential purchaser of said embryos. After his loud declaration that “We got Dodgson here” received no reaction or interest from the other restaurant patrons, Nedry mocks Dodgson’s secretive behaviour with the comment “Nobody cares!”.
That’s how I am starting to feel about green energy.
As someone who reads postings on this blog site and probably other similar sites, you might take great offence at that statement. You care about renewable energy, you care about our need to stop burning hydro-carbons, you care about conservation. So do I.
Opinion polls tell us that a majority of people want to reduce their impact on the planet and are even willing to pay a bit more for electricity and probably even manufactured goods that are more earth-friendly. Millions of people turn off lights and appliances on “Earth Day”.
So what am I talking about?
I recently travelled to Chicago, and while on a stop-over at SEA-TAC I thought I would grab a magazine with some earth-friendly articles to read while I was waiting for my next flight. I walked into the closest newsstand and viewed the racks of glossy magazines.
There were dozens of magazines about lifestyles – houses, hair, happiness, hardship; not a single article on solar panels, energy conservation, living with less “conspicuous consumption” or anything like that – quite the opposite really.
There were dozens of magazines about the great outdoors – hiking, biking, hunting, fishing; not a single article about deforestation/reforestation, urban bicycle commuting, or even climate change for that matter.
There were dozens of magazines about cars – old cars, new cars, hot rods, and motorcycles; not a single article about electric cars (not even the Tesla!), electric bikes, fleet fuel consumption, or hybrid technology.
I am not lying when I tell you that amongst the hundreds of magazines in that shop there was not a single earth-friendly article to read.
What does that tell me? It tells me that as much as most people have a vague intention to treat the earth better, when it comes right down to it they are just not that interested in how that intention could be translated into action; i.e. “nobody cares”. And by “nobody” I mean the large majority of the inhabitants of spaceship earth.
Even though I suspect you are pretty well informed about these topics, I bet that you cannot answer the most basic of questions about your personal impact on the environment.
- What has been the actual fuel consumption of the car you drive over the past few months? (if you don’t drive an automobile, bully for you!) Has it been trending up or down?
- If you drive by yourself to work (I do and I’m not proud to say that) have you investigated car pooling in the last year? (I have good intentions about car-pooling with a neighbour but haven’t got it done yet – I do bicycle to work about 1 day every two weeks).
- Do you know how much electricity your house consumes in a month? Has the consumption been going up or down over the past year? How does your house consumption compare to similar houses in your neighborhood?
- On hot summer days and cold winter nights do you know what is going on with demand vs. supply in your city or region? Have you ever been alerted to try and reduce your consumption of electricity (i.e. participated in a Demand Response event)?
- Do you know the rough breakdown of electricity generation sources that provide power to your home? (hydro, coal, natural gas, nuclear, wind, solar)
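Most of these questions could be answered from records we already have on hand. As a minimal sketch (with invented numbers), fuel consumption falls out of fill-up receipts and the electricity trend falls out of the last few monthly bills:

```python
# Invented sample data: (litres purchased, km driven) per tank of fuel.
fillups = [(40.2, 520.0), (38.5, 498.0), (41.0, 545.0)]

# Fuel consumption per tank, in the usual L/100 km units.
litres_per_100km = [100.0 * litres / km for litres, km in fillups]
print("L/100 km per tank:", [round(x, 1) for x in litres_per_100km])

# Invented sample data: the last four monthly electricity bills, in kWh.
monthly_kwh = [910, 880, 860, 845]
trend = "down" if monthly_kwh[-1] < monthly_kwh[0] else "up"
print("Household electricity trend:", trend)
```

Nothing here is sophisticated; the point is that translating intention into awareness takes a receipt and thirty seconds of arithmetic, not a smart meter.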
I confess that my answer to most of these questions is either “no” or “I don’t know”. Pathetic. But there you have it.
At its core the problem is one of psychology and awareness. I discussed this at length in a previous blog posting. Now a start-up out of Arlington, Virginia is proving that getting electricity consumers to “buy in” to the concept of Demand Response can produce real results. Alex Laskey of Opower sums it up like this.
“Most of these interventions are about translating intentions into behavior. They can appear to an outsider as if it’s about changing behavior, but it’s perhaps better thought of as realizing intentions.”
So we can be successful in changing the way consumers view energy usage. Given that we really only have a problem for a few hours a day for a few weeks per year, that is pretty encouraging. By way of example, a record heatwave on the eastern seaboard recently required the largest Demand Response event in the history of the grid operator (PJM) to keep the lights on and the air conditioners humming.
But memories are short and old habits die hard. We all need to be reminded on a regular basis that these issues won’t go away and need our regular attention. That’s why I keep writing posts for The Black Swan Blog.