Saturday, August 27, 2011

A hurricane to Tip the scale.

Tonight New York City is bracing for Hurricane Irene, and while the storm will undoubtedly deliver aggressive winds and major flooding, the level of panic is a little excessive for a fading Category 1 hurricane. Don’t get me wrong, people should take all necessary precautions, and those New Yorkers in Zone A should certainly obey evacuation orders. As Governor Christie put it, “Get the hell off the beach”; you’ve maximized your tan. I just think that come Monday, people may feel a little silly for clearing the grocery stores of every last crumb of bread and drop of milk.

Although the memory of Hurricane Katrina is still fresh (even more so for fans of Treme), New York City on Monday morning will still resemble New York City, not a post-Katrina New Orleans. Let’s get some perspective. Hurricanes, or tropical cyclones as they are known worldwide, are characterized by a region of extreme low pressure at the center that is surrounded by thunderstorms, causing powerful winds and heavy rain. The lower the pressure and the stronger the winds, the more intense the storm. The Saffir-Simpson Hurricane Scale only categorizes storms by wind speed, although central pressure is also a good indicator of hurricane strength.

Right now, Irene is hanging out about 100 miles south of Ocean City, Maryland. Maximum sustained winds are 80 mph, which makes it a Category 1 storm, and the pressure at the eye of the storm is 951 millibars. Once the winds slow below 74 mph, it will be downgraded to a tropical storm. When Hurricane Katrina hit New Orleans on August 29, 2005, it was a Category 3 storm with sustained winds of 125 mph and a central pressure of 920 millibars.
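For the numerically inclined, here is a rough sketch of how wind speed maps onto those categories. The thresholds below are the commonly cited values in miles per hour, approximately as the scale stood when this was written; the little function wrapping them is just my own illustration, not anything official.

    # Rough sketch of the Saffir-Simpson wind categories (thresholds in mph,
    # approximately the pre-2012 boundaries; the scale has been tweaked since).
    def saffir_simpson_category(sustained_wind_mph):
        """Return a label for a storm based on its maximum sustained winds."""
        if sustained_wind_mph < 39:
            return "tropical depression"
        if sustained_wind_mph < 74:
            return "tropical storm"
        if sustained_wind_mph <= 95:
            return "Category 1"
        if sustained_wind_mph <= 110:
            return "Category 2"
        if sustained_wind_mph <= 130:
            return "Category 3"
        if sustained_wind_mph <= 155:
            return "Category 4"
        return "Category 5"

    print(saffir_simpson_category(80))   # Irene right now -> "Category 1"
    print(saffir_simpson_category(125))  # Katrina at landfall -> "Category 3"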

Katrina was the costliest tropical cyclone ever, with damage exceeding $100 billion. It was also one of the deadliest, claiming 1,836 lives. But let’s talk about another storm. The most intense tropical cyclone ever recorded was Typhoon Tip in 1979. Tip tracked through the northwestern Pacific Ocean, making landfall on Guam and southern Japan. Although it didn’t claim as many lives as Katrina, the statistics are staggering. The highest sustained winds were 190 mph over one minute and 160 mph over ten minutes. The pressure at the eye of the storm reached an unheard-of 870 millibars. It was also the largest storm ever recorded, roughly 1,380 miles across, covering an area about half that of the continental United States. Typhoon Tip had such extreme winds that it would rank as a hypothetical Category 6 on the Saffir-Simpson Scale, meaning its potential damage is beyond catastrophic.
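A quick back-of-the-envelope check of that size claim, treating the storm as a circle and using a rough figure of 3.1 million square miles for the lower 48 (both approximations are mine, not numbers from the original reports):

    # Back-of-the-envelope check on Typhoon Tip's footprint (illustrative numbers).
    import math

    tip_diameter_mi = 1380
    tip_area = math.pi * (tip_diameter_mi / 2) ** 2   # ~1.5 million square miles
    conus_area = 3.1e6                                # rough area of the lower 48, sq mi
    print(tip_area / conus_area)                      # ~0.48, i.e. about half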

Hurricane Irene may damage some roofs and topple some trees, but we will emerge relatively unscathed. For comparison, read the description of a Category 5 hurricane below:

"People, livestock, and pets are at very high risk of injury or death from flying or falling debris, even if indoors in mobile homes or framed homes. Almost complete destruction of all mobile homes will occur, regardless of age or construction. A high percentage of frame homes will be destroyed, with total roof failure and wall collapse. Extensive damage to roof covers, windows, and doors will occur. Large amounts of windborne debris will be lofted into the air. Windborne debris damage will occur to nearly all unprotected windows and many protected windows. Significant damage to wood roof commercial buildings will occur due to loss of roof sheathing. Complete collapse of many older metal buildings can occur. Most unreinforced masonry walls will fail which can lead to the collapse of the buildings. A high percentage of industrial buildings and low-rise apartment buildings will be destroyed. Nearly all windows will be blown out of high-rise buildings resulting in falling glass, which will pose a threat for days to weeks after the storm. Nearly all commercial signage, fences, and canopies will be destroyed. Nearly all trees will be snapped or uprooted and power poles downed. Fallen trees and power poles will isolate residential areas. Power outages will last for weeks to possibly months. Long-term water shortages will increase human suffering. Most of the area will be uninhabitable for weeks or months." From the National Weather Service.

The above photograph is pretty much the reason for this whole post. Courtesy of KeystoneUSA-Zuma/Rex Features.


Too much of a good thing.

Like all living organisms, plants depend on nitrogen as an essential nutrient for growth. In natural ecosystems plants are supplied with nitrogen by microorganisms in the soil that decompose organic matter and break down large, complex organic nitrogen molecules into the smaller, usable forms nitrate and ammonium. Other microbes assist plants with their nitrogen needs by fixing atmospheric nitrogen into mineral nitrogen, or by helping plants reach distant soil nitrogen when it is in short supply.

The nitrogen cycle reliably churns along, supporting plant life and all dependent organisms, as long as all of the elements are in place. In agricultural systems, the most important part of the equation is removed when the crops are harvested. Without that plant matter returning to the ground to decompose, soils quickly become nitrogen deficient and agricultural yields drop. To combat this, farmers must restore the lost nitrogen, and they do so by adding nitrogen fertilizers. Unfortunately, fertilizers are often applied in excess as farmers try to maximize yield. Far more nitrogen is added to the system than the crops can use, and the excess finds its way into the atmosphere and bodies of water, where it wreaks havoc. In the atmosphere, nitrous oxide released from fertilized soils is a potent greenhouse gas, with roughly 296 times the global warming potential of carbon dioxide. When nitrogen fertilizers reach the water, they promote the growth of algae, leading to massive blooms that choke off marine and aquatic life.
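To put that multiplier in perspective, here is a toy conversion from a nitrous oxide emission to its carbon dioxide equivalent. The one-tonne quantity is just an example of mine, not a statistic from the essay:

    # Convert an N2O emission to its CO2 equivalent using the 296x global
    # warming potential quoted above. The 1-tonne amount is illustrative only.
    GWP_N2O = 296
    n2o_tonnes = 1.0
    co2_equivalent_tonnes = n2o_tonnes * GWP_N2O
    print(co2_equivalent_tonnes)  # 296 tonnes of CO2-equivalent warming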

An essay by Allen Good and Perrin Beatty published in this month’s PLoS Biology draws attention to the imbalance of nitrogen fertilizer usage in different regions of the world. For instance, China uses far more nitrogen than is needed for optimal yields, yet its fertilizer use continues to rise. In contrast, the countries of sub-Saharan Africa don’t use enough nitrogen, and as a result they have nutrient-poor soils and low yields. When faced with poor water quality due to nitrogen surplus, the European Union established and implemented best nutrient management practices in 1987, resulting in a 56% decrease in usage over twenty years.

How was this achieved? Scientists conducted long-term studies to determine the optimal amount of nitrogen fertilizer for each crop species in various regions of the world. Ordinarily, farmers apply fertilizers willy-nilly, with little consideration for plant species, application method, or fertilization rate. The results of these experiments showed that even in well-balanced systems, farmers can reduce their application of nitrogen fertilizer with no loss in yield.

Good and Beatty took this idea a step further by quantifying the potential economic and environmental savings to be gained if fertilizer usage were reduced. First they determined the economic cost associated with the environmental damage of excess fertilizer use. Then they used fertilizer use and price projections to calculate the cost savings if nitrogen use were reduced to match the regional recommendations. All of the countries analyzed, which together account for 74% of global fertilizer use, required either no change in nitrogen use or a reduction of 5 to 20%. Based on their analysis, Good and Beatty found that directed nutrient management strategies could achieve total savings of $19.8 billion a year by 2020 and $56 billion a year by 2030. These values are shocking, not only because of the amount of money wasted through careless use of fertilizers, but also because of the magnitude of environmental damage incurred year after year. To learn more about the study, and to see Good and Beatty’s recommendations, you can find their essay at PLoS Biology.
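If you want a feel for how a savings estimate like that is assembled, here is a minimal sketch of that kind of calculation. The structure loosely follows the description above, but every number in it is a placeholder of my own, not one of Good and Beatty's figures:

    # Toy sketch of a fertilizer-savings calculation. All inputs are
    # hypothetical placeholders, not values from Good and Beatty's essay.
    nitrogen_use_tonnes = 10_000_000      # hypothetical regional N use per year
    fertilizer_price_per_tonne = 500      # hypothetical fertilizer price, $/tonne
    damage_cost_per_tonne = 300           # hypothetical environmental cost of excess N, $/tonne
    recommended_reduction = 0.15          # within the 5-20% range discussed above

    tonnes_saved = nitrogen_use_tonnes * recommended_reduction
    savings = tonnes_saved * (fertilizer_price_per_tonne + damage_cost_per_tonne)
    print(f"${savings:,.0f} saved per year")  # $1,200,000,000 with these placeholders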

Thursday, August 4, 2011

Universal flu care.

Designing the annual flu vaccine is not unlike playing the stock market. Each year the World Health Organization (WHO) creates a portfolio of three strains of flu, each representing a different influenza virus. This portfolio is the trivalent inactivated vaccine, the official name for your annual flu shot. The strains selected are those predicted to be the predominant sources of flu infection in the upcoming season. Inactivated forms of the viruses are combined into one shot, which gives your immune system a preview of what it can expect to fight in the coming months. Your body produces antibodies to those strains in advance so that it is ready to attack when flu season begins.

Year after year the scientists at WHO have had much success in predicting the most harmful strains of flu; however, their selection process is by no means infallible. While the scientists have plenty of data to draw upon, the decision is, at best, an educated guess. There is no way to know for sure which strains present the greatest risk, and those strains that don’t make it into the vaccine can still infect you and make you sick. Moreover, foresight is greatly limited by the speed with which the viruses evolve. Flu viruses mutate so rapidly that vaccines lose their effectiveness every year. Even after your body builds a supply of antibodies to a particular flu strain, it may be unable to recognize that strain’s mutated descendants the following year.

A virus is a very simple entity consisting of a piece of genetic material (RNA, in the case of influenza) contained within a protein coat. Antibodies recognize one specific site on the virus’s surface proteins, attach to it, and signal white blood cells to attack. Usually the antibodies that develop in response to the annual flu vaccine target a highly variable site on the head of the virus’s surface protein, meaning that a different antibody is needed for each strain of the virus every year. During the 2009 H1N1 pandemic, however, some vaccinated patients produced a different kind of antibody. These bound to a region of the protein that is conserved among all Influenza A subtypes, including seasonal flu, avian flu, and swine flu. The high degree of conservation among flu viruses suggests that this site may not mutate from year to year.
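To make the idea of a conserved site concrete, here is a toy calculation of per-position conservation across a few short, entirely made-up protein fragments. Real analyses use thousands of aligned influenza sequences; none of these strings come from actual strains:

    # Toy per-position conservation across aligned (invented) protein fragments.
    # A position where every strain shows the same residue scores 1.0.
    from collections import Counter

    fragments = ["MKAILV", "MKAIIV", "MKTILV", "MKAILV"]  # invented, not real flu sequences

    for pos in range(len(fragments[0])):
        residues = [seq[pos] for seq in fragments]
        most_common_count = Counter(residues).most_common(1)[0][1]
        print(pos, most_common_count / len(fragments))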

Of course the antibody itself is not a vaccine, but it will inform the design of a future universal flu vaccine. Knowing the best region to target, scientists should be able to develop a vaccine that triggers your immune system to produce antibodies that latch onto the same conserved site, regardless of the year or strain. Such a vaccine would eliminate the annual guesswork and protect against unexpected strains. You can learn more about this stunning development in immunology in the original paper in Science, as well as in additional reporting from Nature News.

Sunday, January 9, 2011

Rumination on chemistry.

I just learned that 2011 has been designated as the International Year of Chemistry. For a science that has only found its star among the product and pharmaceutical giants, such a declaration is a pretty big deal. Let's be honest, chemistry plays an all-important role in developing the materials we use, but it hasn't captured the public's imagination the way that biology has (I'll admit that Walter White's crystal meth production in Breaking Bad may indicate a changing tide). But now that my final biochemistry grade has been calculated and recorded, honoring the central science is a celebration that I can get on board with.

I have always found chemistry to be the most fascinating, if not the most challenging, science I have ever studied. Chemistry and I definitely have a love/hate relationship, but now that I have completed what might be my last chemistry course ever, I'm leaning heavily towards the love side. Learning that all the matter in the universe is governed by the properties of only a handful of different particles is truly an awe-inspiring, if not life-altering, realization. Chemistry doesn't just give meaning to life; it gives meaning to everything. If I dedicate a moment to thinking about that one idea, it absolutely blows me away. Indeed, I have shed a tear while reading my chemistry text, and it's not the kind of tear (rage, desperation, frustration) you may expect.

In my undergraduate chemistry courses, I had trouble grasping some of the advanced concepts simply because I hadn't moved beyond the basics. It's not that I didn't believe them or understand them, it's just that I found the basics far too amazing to simply gloss over in the first two lectures. I didn't have the chance to fully appreciate them before I had to accept them as "oh-yeah-well-duh!"-type truths. I have always thought that atomic structure, the periodic table, principles of chemical bonding, and the properties of water should have their own semester-long course. That course would be taught in an antique parlor somewhere, where students can sit in comfortable chairs under dim lighting, surrounded by candles. There would be group readings, perhaps some chanting, and lots of emotion. It would be something that inspires fits of passion, tears of joy, and students speaking in tongues. To me, chemistry is that amazing.

Maybe it's blasphemy to suggest that chemistry deserves the kind of dramatic and mystical treatment usually reserved for religion. I am among the first to argue that science and religion are incompatible. Scientific inquiry is, after all, the antithesis of faith. But chemistry is a discipline rooted in models of reality - the best possible description of that which we cannot actually see. True, models are scientific theories - they are well-tested and supported by independent inquiry, and they offer sound explanations for natural phenomena. But models have flaws. Models are subject to refinement (plum pudding, anyone?). Indeed, such is the goal of science. To proceed with confidence in the study of chemistry, you must trust the models. You must ignore the nagging worry that somewhere in the world an innovative chemist will soon use the latest technology to debunk, or at least improve upon, the model that you will devote years, a career, your life, to elucidating. It probably won't upend the entire discipline, but it will definitely shake its core. That beautiful eureka moment drives science forward, but it can also break the individual scientist.

Forgive me for being melodramatic. I just read a recent article by Philip Ball in Nature News that reinforced this idea that science is determined to evolve and that nothing is absolutely certain. New techniques are casting doubt on the ABCs of the entire discipline - chemical bonds. No, seriously. The plastic sticks that link the plastic balls in your organic chemistry modeling kit may not actually exist. In fact, the hydrogen bond has already been redefined based on new experimental results that change the idea of electrostatic attraction (I panic - what about protein folding?!). Today's accepted model of the chemical bond is quantum mechanical, built on interactions between electrons as governed by their wavefunctions, the mathematical objects that describe a particle's quantum state. The wavefunction of a molecule cannot be broken into independent one-electron pieces, however, because the behavior of each electron depends on the behavior of its neighbors. Now here's the kicker - there is currently no exact, practical method for computing this correlation energy among electrons, so the description of every quantum chemical bond is at best an approximation.
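For the curious, correlation energy has a standard textbook definition: it is the slice of a molecule's true energy that the independent-electron (Hartree-Fock) picture misses,

    E(correlation) = E(exact) - E(Hartree-Fock)

and it is exactly this piece that can only ever be approximated in practice.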

The existence of a chemical bond also depends on the exact time at which it is characterized. Linus Pauling, the author of the valence-bond description, said that a group of atoms can be considered bonded "when it is convenient for the chemist to consider it as an independent molecular species." Now more than ever this ambiguous definition rings true. Ultra-fast laser spectroscopy, which makes it possible to study molecules on an extremely short time-scale, has produced results suggesting that chemical bonds may not accurately characterize a molecule's structure and reactivity. Moreover, a bond between two atoms embedded within a molecule can be difficult to pin down - the atoms may simply be held in close proximity by the surrounding atoms. There is also little certainty in deciding which electrons belong to which atoms. The article reinforces the idea that molecules are ultimately just a collection of nuclei embedded in a continuous electron cloud. The interactions between atoms - the sticks in our modeling kits - are not as hard and fast as we once thought. Chemical bonds have been taught as a fundamental concept of chemistry for decades, yet the idea is unraveling as new technology and contemporary research challenge our interpretation of the classical work.

What can we do with the theories and models that have been replaced? What becomes of the knowledge that scientists have dedicated their careers to building and that students have toiled to understand? The plum pudding model is actually pretty impressive when you realize that Thomson didn't even know about the atomic nucleus when he came up with the idea in 1904. The nucleus was discovered soon after, with Geiger and Marsden's beautiful gold foil experiment in 1909. But Rutherford's planetary model of the atom, a theory born of the gold foil experiment, was subsequently improved by Niels Bohr with the development of quantum mechanics. Even the Bohr model has since been superseded by the frustratingly complicated atomic orbital model. Though they may seem silly now, these models were not replaced because the science behind them was shoddy or careless. On the contrary, each was revolutionary and derived from elegant experimentation and theory. Scientists can only use the technology available to them to make careful leaps from existing knowledge. These limits to scientific research are better seen in hindsight.

Luckily, disproved or refined models do not decay into obsolescence; rather, they enrich the discipline. I realized this when I found myself able to recall the antecedent atomic models just now. Even though it may not appear on an exam, we are taught the history of chemistry in class because it honors the ground-breaking work of chemists and illustrates the way in which science grows from existing knowledge. Refinement of existing theories is proof that science works. Classical research is meaningful, even if it is no longer useful. As Roald Hoffmann of Cornell University says of the hotly debated valence bond and molecular orbital theories, "discarding any one of the two theories undermines the intellectual heritage of chemistry." Honoring past research gives value to modern science and motivates scientists who know that their discoveries may ultimately be bested. The beauty of science is that it is humble in the present, constantly striving to improve itself, while remaining deeply reverent of the past.

References: Ball, P. (2011). "Beyond the bond." Nature 469(7328): 26-28.

Sunday, September 19, 2010

She's baaaack...

Recent reports by NOAA revealed that la Nina has returned to the tropical Pacific and strengthened over the month of August. The sister of el Nino, la Nina is the cool phase of the El Nino-Southern Oscillation, during which surface temperatures of the equatorial east-central Pacific deviate from average by at least 0.5 degrees Celsius. Last month, temperatures dropped by 1.3 to 1.8 degrees. It seems this chilly little girl is back, and she may be sticking around into 2011.
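As a rough sketch of how those phases get their labels, here is the half-degree convention written out as a little function. It is illustrative only; the official definition also requires the anomaly to persist over several overlapping three-month seasons.

    # Rough ENSO phase labels from a sea-surface-temperature anomaly (deg C).
    # Illustrative only: the real definition also requires the anomaly to
    # persist across several overlapping three-month seasons.
    def enso_phase(sst_anomaly_c):
        if sst_anomaly_c >= 0.5:
            return "el Nino"
        if sst_anomaly_c <= -0.5:
            return "la Nina"
        return "neutral"

    print(enso_phase(-1.5))  # roughly last month's conditions -> "la Nina"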

By some oceanic and climatic mystery that remains unsolved, el Nino and la Nina have a powerful influence over the weather conditions in many parts of the world. These events, which tend to alternate in cycles of 3-6 years, can alter seasons, upset fisheries, and increase the occurrence of extreme weather such as floods, droughts, hurricanes, and cyclones. Over the past two years in California, el Nino played a role in everything from nerve-wracking drought to vanishing Chinook salmon. As a result, it was easy to blame el Nino for anything that was at least slightly annoying. Rain on my birthday? Hot temperatures on the day that I decided to wear lined wool pants? Flight delays at SFO? Damn you el Nino.

Will la Nina be as good a scapegoat as her brother? Nature News has some answers.

Friday, June 25, 2010

Pharmaceuticals without a prescription.


As the population ages, healthcare spending swells, and medical technology advances, the use and variety of pharmaceuticals and personal care products (PPCPs) grow to meet demand worldwide. And as with any widely used products, PPCPs have become nearly as ubiquitous in the environment as they are in the human population. Today it is likely that PPCPs are detectable at low levels in all waterbodies adjacent to human settlement. In some instances they have been found at concentrations that rival pesticides.

PPCPs enter the environment primarily through the wastewater stream. When a pharmaceutical is administered to a patient, as much as 90% of the dose can be excreted still in its active form. Even the portion that is metabolized can be transformed and excreted as a unique byproduct. Personal care products like shampoos and lotions enter the wastewater stream when we wash our bodies and hands. This ever-changing concoction of chemicals, from analgesics to antibiotics, lipid regulators to synthetic musks, continuously buffets wastewater treatment plants, most of which are only designed to remove conventional pollutants and the basics of human waste. Due to the variety and the novelty of compounds found in PPCPs, many of these chemicals pass through traditional wastewater treatment plants unchanged and enter our streams, rivers, bays, and oceans. A 2009 study by the San Francisco Estuary Institute found 18 common PPCP compounds in treated wastewater effluent and surface water in the South San Francisco Bay. These included acetaminophen (Tylenol), fluoxetine (Prozac), and gemfibrozil (Lopid).

Even though PPCPs are generally detected at low concentrations in our waterways, we cannot be sure of their impact on the environment because their effect on non-human organisms is unknown. People are warned for good reason not to take pharmaceuticals without a prescription or in combination with other drugs, because the synergistic effects can be lethal. But what happens to a Chinook salmon that takes a low dose of Lipitor? And what if that Lipitor is mixed with dozens of other unidentified pharmaceuticals in the water? Because of the seemingly low risk to humans and the sheer number of compounds to be studied, research on the environmental fate of PPCPs is limited and regulation is nonexistent.

A recent study by a team of researchers from Eötvös Loránd University in Budapest and the China University of Geosciences suggests that the risk of residual PPCP exposure to humans may be greater than we think. In some parts of the world, where groundwater supplies must be stretched to meet the demand for potable water, riverbank infiltration is seen as a safe and practical way to speed up the recharge of an aquifer. Instead of harvesting drinking water directly from the river, where a wastewater outfall may discharge treated effluent just upstream, water is pumped from wells adjacent to the riverbank, which lowers the water table, changes the pressure gradient, and pulls water from the river into the aquifer. Riverbank infiltration uses the soils of the bank to filter out pathogens, heavy metals, excess nutrients, hydrocarbons, and other pollutants in the same way that stormwater is purified as it naturally percolates through soil; with riverbank infiltration, however, the process happens quickly enough to keep up with the population's needs. Budapest supplies one third of its groundwater through riverbank infiltration of the Danube River, which also receives effluent from two wastewater treatment plants.

Over the course of a full year, Margit Varga and her team sampled water from the Danube River and sediments from within two meters of the bank at three sites adjacent to riverbank infiltration wells. Many PPCP compounds are removed from water when they stick to sediments; however, the research group tested their samples for the presence of four acidic drugs, which have a greater affinity for water and are less likely to grab onto soil particles. Three of these drugs - ibuprofen, naproxen, and diclofenac - were regularly detected in the river water. The highest concentrations occurred during the winter, when the water level was relatively low and cold temperatures restricted the microbial activity that could degrade the compounds. Naproxen and diclofenac were also detected in the sediment samples, suggesting that some of these acidic drugs are removed from the water during riverbank infiltration.

The concentration of these drugs in the sediment seemed to be influenced not only by their initial concentration in the water being pulled through the bank, but also by the concentration of total organic carbon in the sediment. Sediment with a high concentration of carbon was more effective at filtering out the drug compounds, while sandy sediment with low carbon content could allow PPCPs to penetrate further into the bank. It is well known that sediments have different compositions and different abilities to filter out contaminants - this concept has been applied countless times to septic systems and stormwater treatment mechanisms. But Varga's study confirms that the same is true for PPCPs, which are unregulated, poorly understood, and still in the vague category of "contaminants of emerging concern." Given the right combination of low water levels, cold temperatures, increased use of PPCPs, and poor sediment filtration capacity, these compounds could very well reach drinking water supplies in areas that depend on riverbank infiltration. And as growing demand for potable water brings human populations closer and closer to their treated (or untreated) wastewater, the only way to eliminate the risk of exposure might be to remove PPCPs from the waste stream entirely, before they can reach the environment.
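For anyone curious about the mechanics, the link between organic carbon and filtering has a standard back-of-the-envelope form: a compound's sediment-water partition coefficient is often estimated as Kd = Koc x foc, where Koc is the organic-carbon-normalized partition coefficient and foc is the fraction of organic carbon in the sediment. The sketch below uses that textbook relationship with placeholder numbers of my own (and it ignores the extra complication that acidic drugs also ionize); it is not a calculation from Varga's paper.

    # Illustrative sediment-water partitioning: Kd = Koc * foc.
    # The Koc value and carbon fractions below are placeholders, not data
    # from the Varga et al. study. Higher Kd means more of the drug sticks
    # to the sediment instead of staying in the water.
    koc = 100.0                       # hypothetical Koc for an acidic drug, L/kg
    for foc in (0.001, 0.01, 0.05):   # sandy, moderate, and carbon-rich sediment
        kd = koc * foc
        print(f"foc={foc:.3f}  Kd={kd:.2f} L/kg")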

Varga, M., Dobor, J., Helenkar, A., Jurecska, L., Yao, J., & Zaray, G. (2010). Investigation of acidic pharmaceuticals in river water and sediment by microwave-assisted extraction and gas chromatography-mass spectrometry. Microchemical Journal. DOI: 10.1016/j.microc.2010.02.010

Photo courtesy of Carly & Art via Flickr

Monday, May 17, 2010

Report from the Emerald Coast.

So far it seems like we have only heard numbers. The numbers describe the gallons of oil spilled from the sunken Deepwater Horizon rig, the miles of shoreline at risk along the Gulf Coast, the volume of chemical dispersant released into the water, the length of boom laid, and the projected economic losses. I came to the Emerald Coast of Florida with a head full of numbers and no real perspective. After my first day out on the water with Skipper Tonsmeire and Emerald Coastkeeper Chasidy Fisher Hobbs, I finally understand how catastrophic the Deepwater Horizon oil spill really is.


The barrier islands that enclose the Pensacola Bay extend for miles on either side of the Pensacola Pass as open dunes of white sand. Where other barrier islands and beach cities have been developed into a solid line of condominiums and hotels, long stretches of the Gulf Islands are preserved and protected as the Gulf Islands National Seashore. Of course there are built-out areas too - Pensacola Beach is a popular destination for locals and tourists alike. But once you enter the gates of the park, the human presence feels secondary to the natural processes at work on the island. You can still see evidence of Hurricane Ivan tearing apart the single access road in 2004, and during major weather events the Gulf pours over the island and into the Intracoastal Waterway. Closer to the Pass, Great Blue Herons silently stroll along the waterline while anglers cast their lines, waist deep in the waves.

In Skipper’s boat, we patrolled the inland waterways, inspecting boom placement and studying the pre-impact condition of the shoreline. Some lines of boom had been strategically placed to guard inlets and important ecological areas like Red Fish Point and Big Lagoon State Park. In other areas, like near the Pensacola Pass, the boom has been staged for deployment near the shore. When an oil slick enters the Pass on a flood tide, lines of boom from either side of the channel will be angled toward the middle, creating a funnel to collect and then remove the oil. The boom will then be drawn back to the shore during the ebb tide, honoring the Coast Guard’s request not to interrupt commerce on the Intracoastal Waterway.

Escambia County has done a good job so far of protecting their sensitive inland and shoreline areas with boom, but this method will only be effective at stopping oil on the surface of the water. Reports are now emerging that most of the oil is suspended in the water column, and tarballs have already been seen on Gulf Shores, 90 miles west of Pensacola. Environmental damage to the Gulf Coast from the Deepwater Horizon oil spill may be inevitable if booming is our only protective measure.

Fortunately our patrol gave us the opportunity to swim in the Gulf for what may be the last time in a while. To a person familiar with the bone-chilling water of the San Francisco Bay, the water of the Emerald Coast seems unreasonably warm. Just inside the Pass the water is clear enough to see schools of bait fish at your feet. It could have been a perfect day out on the Gulf, but swimming adjacent to lines of boom conjures up an ominous feeling that is hard to ignore. Anglers continue to fish from the shore, even though commercial fishing has been suspended. Kayakers paddle through inland waters, even though boom excludes them from the more interesting shoreline. People land their boats right on the beach and dive into the water without a second thought about its quality. Even while the Emerald Coast plays this game of sit and wait, it remains a community devoted to its beautiful shoreline and coastal resources.