Cold suns, warm exoplanets and methane blankets

Somewhere in our galaxy, an exoplanet is probably orbiting a star that’s colder than our sun, but instead of freezing solid, the planet might be cozy warm thanks to a greenhouse effect caused by methane in its atmosphere.

NASA astrobiologists from the Georgia Institute of Technology have developed a comprehensive new model that shows how planetary chemistry could make that happen. The model was built around the likely geological and biological chemistry of Earth three billion years ago.

The sun produced a quarter less light and heat then, but Earth remained temperate, and methane may have saved our planet from an eon-long deep-freeze, scientists hypothesize. Had it not, we and most other complex life probably wouldn’t be here.

The new model combined multiple microbial metabolic processes with volcanic, oceanic and atmospheric activities, which may make it the most comprehensive of its kind to date. But while studying Earth’s distant past, the Georgia Tech researchers aimed their model light-years away, wanting it to someday help interpret conditions on recently discovered exoplanets.

The researchers set the model’s parameters broadly so that they could apply not only to our own planet but potentially also to its siblings with their varying sizes, geologies, and lifeforms.

Earth and its siblings

“We really had an eye to future use with exoplanets for a reason,” said Chris Reinhard, the study’s principal investigator and an assistant professor in Georgia Tech’s School of Earth and Atmospheric Sciences. “It’s possible that the atmospheric methane models that we are exploring for the early Earth represent conditions common to biospheres throughout our galaxy because they don’t require such an advanced stage of evolution like we have here on Earth now.”

The research was supported by the NASA Postdoctoral Program, the Japan Society for the Promotion of Science, the NASA Astrobiology Institute and the Alfred P. Sloan Foundation.

Previous models have examined the mix of atmospheric gases needed to keep Earth warm in spite of the sun’s former faintness, or studied isolated microbial metabolisms that could have made the needed methane. “In isolation, each metabolism hasn’t made for productive models that accounted well for that much methane,” Reinhard said.

The Georgia Tech researchers synergized those isolated microbial metabolisms, including ancient photosynthesis, with geological chemistry to create a model reflective of the complexity of an entire living planet. And the model’s methane production ballooned.

“It’s important to think about the mechanisms controlling the atmospheric levels of greenhouse gases in the framework of all biogeochemical cycles in the ocean and atmosphere,” said first author Ozaki, a postdoctoral assistant.

Carl Sagan and the faint Sun

The Georgia Tech model strengthens a leading hypothesis that attempts to explain a mystery called the “faint young Sun paradox” pointed out by iconic late astronomer Carl Sagan and his Cornell University colleague George Mullen in 1972.

Astronomers noticed long ago that stars burn fainter in their youth and brighter as they mature. They calculated that about two billion years ago, our sun must have shone about 25 percent fainter than it does today.

That would have been too cold for any liquid water to exist on Earth, but paradoxically, strong evidence says that liquid water did exist. “Based on the observation of the geological record, we know that there must have been liquid water,” Reinhard said, “and in some cases, we know that temperatures were similar to how they are today, if not a little warmer.”

Sagan and Mullen postulated that Earth’s atmosphere must have created a greenhouse effect that saved it. Back then, they suspected ammonia was at work, but chemically, that idea proved less feasible.

“Methane has taken a lead role in this hypothesis,” Reinhard said. “When oxygen and methane enter the atmosphere, they chemically cancel each other out over time in a complex chain of chemical reactions. Because there was extremely little oxygen in the air back then, methane could have built up to much higher levels than today.”

Iron, rust and photosynthesis

At the core of the model are two different types of photosynthesis. Three billion years ago, however, the dominant oxygen-producing photosynthesis we know today may not have even existed yet.

Instead, two other very primitive bacterial photosynthetic processes likely were essential to Earth’s ancient biosphere. One transformed iron in the ocean into rust, and the other photosynthesized hydrogen into formaldehyde.

“The model relied on lots of volcanic activity spewing out hydrogen,” Ozaki said. Other bacteria fermented the formaldehyde, and still other bacteria turned the fermented product into methane.

The two photosynthetic processes served as the watch spring of the model’s clockwork, which pulled in 359 previously established biogeochemical reactions spanning land, sea and air.

3,000,000 runs and raging methane

The model was not the type of simulation that produces a video animation of Earth’s ancient biogeochemistry. Instead, the model mathematically analyzed the processes, and the output was numbers and graphs.

Ozaki ran the model more than 3 million times, varying parameters, and found that when the model contained both forms of photosynthesis operating in tandem, 24 percent of the runs produced enough methane to create the atmospheric balance needed to maintain the greenhouse effect and keep ancient Earth, or possibly an exoplanet, temperate.
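The study's code is not reproduced here, but the statistical approach it describes (sample parameter values at random, run the coupled model once per sample, count the fraction of runs whose methane output clears a warm-climate threshold) can be sketched. Everything in this snippet, including the stand-in run_model, the parameter names and the threshold, is illustrative rather than taken from the paper.

```python
import random

def run_model(params):
    # Stand-in for one evaluation of the full biogeochemical model
    # (the real model couples 359 reactions); returns a methane flux
    # in arbitrary units.
    return params["h2_outgassing"] * params["photosynthesis_efficiency"]

def fraction_of_warm_runs(n_runs, threshold, seed=0):
    """Sample parameters uniformly, run the model once per sample, and
    return the fraction of runs whose methane output clears the threshold."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(n_runs):
        params = {
            "h2_outgassing": rng.uniform(0.0, 1.0),
            "photosynthesis_efficiency": rng.uniform(0.0, 1.0),
        }
        if run_model(params) >= threshold:
            successes += 1
    return successes / n_runs

print(fraction_of_warm_runs(100_000, threshold=0.5))
```

With a fixed seed, the estimate is reproducible; the reported 24 percent figure is the analogous success fraction over the study's 3 million runs.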

“That translates into about a 24 percent probability that this model would produce a stable, warm climate on the ancient Earth with a faint sun or on an Earth-like exoplanet around a dimmer star,” Reinhard said. “Other models that looked at these photosynthetic metabolisms in isolation have much lower probabilities of producing enough methane to keep the climate warm.”

“We’re confident that this rather unique statistical approach means that you can take the basic insights of this new model to the bank,” he said.

Other explanations for the “faint young Sun paradox” have been more cataclysmic and perhaps less regular in their dynamics. They include ideas about routine asteroid strikes stirring up seismic activity, thus resulting in more methane production, or about the sun consistently firing coronal mass ejections at Earth and heating it up.


The force is strong: Amputee controls individual prosthetic fingers

Luke Skywalker’s bionic hand is a step closer to reality for amputees in this galaxy. Researchers at the Georgia Institute of Technology have created an ultrasonic sensor that allows amputees to control each of their prosthetic fingers individually. It provides fine motor hand gestures that aren’t possible with current commercially available devices.

The first amputee to use it, a musician who lost part of his right arm five years ago, is now able to play the piano for the first time since his accident. He can even strum the Star Wars theme song.

“Our prosthetic arm is powered by ultrasound signals,” said Gil Weinberg, the Georgia Tech College of Design professor who leads the project. “By using this new technology, the arm can detect which fingers an amputee wants to move, even if they don’t have fingers.”

Jason Barnes is the amputee working with Weinberg. The 28-year-old was electrocuted during a work accident in 2012, forcing doctors to amputate his right arm just below the elbow. Barnes no longer has his hand and most of his forearm but does have the muscles in his residual limb that control his fingers.

Barnes’ everyday prosthesis is similar to the majority of devices on the market. It’s controlled by electromyogram (EMG) sensors attached to his muscles. He switches the arm into various modes by pressing buttons on the arm. Each mode has two programmed moves, which are controlled by him either flexing or contracting his forearm muscles. For example, flexing allows his index finger and thumb to clamp together; contracting closes his fist.

“EMG sensors aren’t very accurate,” said Weinberg, director of Georgia Tech’s Center for Music Technology. “They can detect a muscle movement, but the signal is too noisy to infer which finger the person wants to move. We tried to improve the pattern detection from EMG for Jason but couldn’t get finger-by-finger control.”

But then the team looked around the lab and saw an ultrasound machine. They partnered with two other Georgia Tech professors, Minoru Shinohara (College of Sciences) and Levent Degertekin (Woodruff School of Mechanical Engineering), and attached an ultrasound probe to the arm. The same kind of probe doctors use to see babies in the womb could watch how Barnes’ muscles moved.

“That’s when we had a eureka moment,” said Weinberg.

When Barnes tries to move his amputated ring finger, the muscle movements differ from those seen when he tries to move any other digit. Weinberg and the team fed each unique movement into an algorithm that can quickly determine which finger Barnes wants to move. The ultrasound signals and machine learning can detect continuous and simultaneous movements of each finger, as well as how much force he intends to use.
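The article does not detail the algorithm, but the idea of mapping distinct muscle-movement patterns to finger intent can be illustrated with a toy nearest-centroid classifier. The feature vectors and finger labels below are invented for the example and are not from the Georgia Tech system.

```python
def centroid(vectors):
    # Average a list of equal-length feature vectors componentwise.
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(sample, centroids):
    # Pick the finger whose training centroid is closest (squared distance).
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda finger: dist2(sample, centroids[finger]))

# Hypothetical training data: 2-D "muscle deformation" features per finger.
training = {
    "index":  [[1.0, 0.1], [0.9, 0.2]],
    "middle": [[0.1, 1.0], [0.2, 0.9]],
}
centroids = {finger: centroid(samples) for finger, samples in training.items()}
print(classify([0.95, 0.15], centroids))  # -> index
```

A real system would use far richer ultrasound features and a trained model, but the principle is the same: each intended finger movement produces a distinguishable muscle signature.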

“It’s completely mind-blowing,” said Barnes. “This new arm allows me to do whatever grip I want, on the fly, without changing modes or pressing a button. I never thought we’d be able to do this.”

This is the second device Weinberg’s lab has built for Barnes. His first love is the drums, so the team fitted him with a prosthetic arm with two drumsticks in 2014. He controlled one of the sticks. The other moved on its own by listening to the music in the room and improvising.

The device gave him the chance to drum again. The robotic stick could play faster than any drummer in the world. Worldwide attention has sent Barnes and Weinberg’s robots around the globe for concerts across four continents. They’ve also played at the Kennedy Center in Washington, D.C. and Moogfest.

That success pushed Weinberg to take the next step and create something that gives Barnes the dexterity he’s lacked since 2012.

“If this type of arm can work on music, something as subtle and expressive as playing the piano, this technology can also be used for many other types of fine motor activities such as bathing, grooming and feeding,” said Weinberg. “I also envision able-bodied persons being able to remotely control robotic arms and hands by simply moving their fingers.”

Traumatic brain injury causes intestinal damage, study shows

University of Maryland School of Medicine (UMSOM) researchers have found a two-way link between traumatic brain injury (TBI) and intestinal changes. These interactions may contribute to increased infections in these patients, and may also worsen chronic brain damage.

This is the first study to find that TBI in mice can trigger delayed, long-term changes in the colon and that subsequent bacterial infections in the gastrointestinal system can increase posttraumatic brain inflammation and associated tissue loss. The findings were published recently in the journal Brain, Behavior, and Immunity.

“These results indicate strong two-way interactions between the brain and the gut that may help explain the increased incidence of systemic infections after brain trauma and allow new treatment approaches,” said the lead researcher, Alan Faden, MD, the David S. Brown Professor in Trauma in the Departments of Anesthesiology, Anatomy & Neurobiology, Psychiatry, Neurology, and Neurosurgery at UMSOM, and director of the UMSOM Shock, Trauma and Anesthesiology Research Center.

Researchers have known for years that TBI has significant effects on the gastrointestinal tract, but until now, scientists have not recognized that brain trauma can make the colon more permeable, potentially allowing harmful microbes to migrate from the intestine to other areas of the body and cause infection. After TBI, people are 12 times more likely to die from blood poisoning, which is often caused by bacteria, and 2.5 times more likely to die of a digestive system problem, compared with those without such injury.

In this study, the researchers examined mice that received an experimental TBI. They found that the intestinal wall of the colon became more permeable after trauma, changes that were sustained over the following month.

It is not clear how TBI causes these gut changes. A key factor in the process may be enteric glial cells (EGCs), a class of cells that exist in the gut. These cells are similar to brain astroglial cells, and both types of glial cells are activated after TBI. After TBI, such activation is associated with brain inflammation that contributes to delayed tissue damage in the brain. Researchers don’t know whether activation of EGCs after TBI contributes to intestinal injury or is instead an attempt to compensate for the injury.

The researchers also focused on the two-way nature of the process: how gut dysfunction may worsen brain inflammation and tissue loss after TBI. They infected the mice with Citrobacter rodentium, a species of bacteria that is the rodent equivalent of E. coli, which infects humans. In mice with a TBI that were infected with the bacteria, brain inflammation worsened. Furthermore, in the hippocampus, a key region for memory, the mice that had TBI and were then infected lost more neurons than animals without infection.

This suggests that TBI may trigger a vicious cycle, in which brain injury causes gut dysfunction, which then has the potential to worsen the original brain injury. “These results really underscore the importance of bi-directional gut-brain communication on the long-term effects of TBI,” said Dr. Faden.

Guanidinium stabilizes perovskite solar cells at 19 percent efficiency

With the power-conversion efficiency of silicon solar cells plateauing around 25%, perovskites are now ideally placed to become the market’s next generation of photovoltaics. In particular, organic-inorganic lead halide perovskites offer manufacturing versatility that can potentially translate into much higher efficiency: studies have already shown photovoltaic performances above 20% across different solar cell architectures built with simple and low-cost processes.

The main challenge for the perovskite field is not so much efficiency as stability. Unlike silicon cells, perovskites are soft crystalline materials prone to decomposition over time. In a commercial context, this effectively puts a higher price tag on perovskites than on conventional silicon cells.

There have therefore been many efforts in synthesizing perovskite materials that can maintain high efficiency over time. This is done by introducing different cations (positively charged ions) into the crystal structure of the perovskite. Although success has been reported by mixing inorganic cations like cesium or rubidium into the perovskite composition, these solutions tend to be difficult and expensive to implement.

Meanwhile, no organic — and easier to synthesize — cations that can improve both efficiency and stability have been found so far. Now, the lab of Mohammad Khaja Nazeeruddin at EPFL Valais Wallis, with colleagues at the University of Cordoba, has discovered that they can improve perovskite stability by introducing the large organic cation guanidinium (CH6N3+) into methylammonium lead iodide perovskites, which are among the most promising alternatives in the group today.

The scientists show that the guanidinium cation inserts into the crystal structure of the perovskite and enhances the material’s overall thermal and environmental stability, overcoming what is known in the field as the “Goldschmidt tolerance factor limit.” This factor is an indicator of the stability of a perovskite crystal, describing how compatible a particular ion is with it. An ideal Goldschmidt tolerance factor should be below or equal to 1; guanidinium’s is just 1.03.
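For reference, the Goldschmidt tolerance factor t is conventionally computed from the ionic radii of the A-site cation (here guanidinium), the B-site metal (lead) and the X-site halide (iodide):

```latex
t = \frac{r_A + r_X}{\sqrt{2}\,(r_B + r_X)}
```

A value slightly above 1, as for guanidinium, indicates a cation nominally too large for a stable 3D perovskite cage, which is why its successful incorporation is notable.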

The study found that the addition of guanidinium significantly improved the material stability of the perovskite while delivering an average power conversion efficiency over 19% (19.2 ± 0.4%) and stabilizing this performance for 1000 hours under continuous light illumination, which is a standard laboratory test for measuring the efficiency of photovoltaic materials. The scientists estimate that this corresponds to 1333 days (or 3.7 years) of real-world usage — this is based on standard criteria used in the field.

Professor Nazeeruddin explains: “Taking a standard acceleration factor of 2 for each ten-degree increase in temperature, an acceleration factor of 8 is estimated for 55 °C as opposed to 25 °C. Hence the 1000 hours at 55 °C would be equivalent to 8000 hours. Our cells were subjected to 60 °C, therefore the numbers could be even higher. Assuming the equivalent of 6 hours of full sunlight per day, or 250 W/m² average irradiance (equivalent to North Africa), the total number of days is 1333, equal to 44.4 months and 3.7 years of stability. However, for standard solar cell accreditation a series of stress tests including temperature cycling and damp heat are also required.”
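The arithmetic behind that estimate can be made explicit. The snippet below simply reproduces the quoted rule of thumb (a factor of 2 per 10 °C, 6 hours of full sun per day); it uses the article's numbers and is not an independent stability model.

```python
def acceleration_factor(t_test_c, t_ref_c=25.0, factor_per_10c=2.0):
    """Rule-of-thumb aging acceleration: stress doubles every 10 degrees C."""
    return factor_per_10c ** ((t_test_c - t_ref_c) / 10.0)

af = acceleration_factor(55.0)        # 2**3 = 8
equivalent_hours = 1000 * af          # 8000 h of real-world illumination
days = equivalent_hours / 6           # ~1333 days at 6 h of full sun per day
years = days / 365                    # ~3.7 years
print(af, equivalent_hours, round(days), round(years, 1))  # -> 8.0 8000.0 1333 3.7
```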

“This is a fundamental step within the perovskite field,” says Nazeeruddin. “It offers a new paradigm in perovskite design as further explorations beyond the tolerance factor limit could prevail for cationic blends while preserving a 3D structure with improved stability through increased number of H-bonds within the inorganic framework — a problem that we are now close to solving.”

Transformation to wind and solar achievable with low indirect GHG emissions

Low carbon technologies, from wind and solar energy to fossil carbon capture and sequestration (CCS), differ greatly when it comes to the indirect greenhouse gas emissions in their life cycles. Contrary to what some critics argue, the researchers found that wind and solar energy are among the most favorable when it comes to life-cycle emissions. They also show that fully decarbonizing the global power sector by scaling up these technologies would induce only modest indirect greenhouse gas emissions, and hence would not impede the transformation towards a climate-friendly power system.

“Both fossil and non-fossil power technologies still come with a certain amount of greenhouse gas emissions within their life cycle — on the one hand because it needs energy to construct and operate them, on the other hand because of methane emissions, e.g. from coal and gas production,” explains lead author Michaja Pehl. “However, we found there are substantial differences across technologies regarding their greenhouse gas balance. Electricity production from biomass, coal, gas and hydropower for instance induces much higher indirect greenhouse gas emissions than nuclear electricity, or wind and solar-based power supply.”

With their study, the researchers provide an innovative and comprehensive global analysis of embodied energy use and indirect greenhouse gas emissions from all relevant power sector technologies. For the first time, their study combines the strengths of integrated energy-economy-climate models, which estimate cost-optimal long-term strategies for meeting climate targets, with life cycle assessment approaches; so far, these research branches have operated separately. Exploring the life-cycle emissions of future low-carbon supply systems and the implications for technology choices, they found that fossil power plants equipped with CCS will still account for life-cycle emissions of around 100 grams of CO2-equivalents per kWh of electricity produced. That is ten times more than the roughly 10 grams of CO2-equivalents they project for wind and solar power in 2050, in a climate protection scenario in which power production is almost completely decarbonized.

Wind and solar provide a much better greenhouse gas emissions balance than fossil-based technologies

“There is no such thing as truly clean coal. Conventional coal power currently comes with around 1000 grams of CO2-equivalents per kWh. Capturing CO2 from coal plants can reduce emissions per kWh by around 90 percent, but substantial life-cycle greenhouse gas emissions remain,” says Gunnar Luderer, energy system analyst from PIK and project leader. “To keep global warming below 2°C, however, virtually carbon free electricity is necessary. This makes it increasingly implausible that coal power will play a major role in the future, even if equipped with CO2 scrubbers.”
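The quoted figures are mutually consistent, as a quick check shows; the numbers below are taken directly from the article.

```python
coal = 1000               # gCO2e per kWh, conventional coal power (as quoted)
capture_pct = 90          # share of CO2 captured by CCS, as cited
coal_ccs = coal * (100 - capture_pct) // 100   # remaining life-cycle emissions
wind_solar = 10           # gCO2e per kWh projected for wind/solar by 2050
print(coal_ccs, coal_ccs // wind_solar)  # -> 100 10
```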

“When it comes to life cycle greenhouse gas emissions, wind and solar energy provide a much better greenhouse gas balance than fossil-based low carbon technologies, because they do not require additional energy for the production and transport of fuels, and the technologies themselves can be produced to a large extent with decarbonized electricity,” states Edgar Hertwich, an industrial ecologist from Yale University who co-authored the study. Due to technological innovation, less and less energy will be needed to produce wind turbines and solar photovoltaic systems.

“Some critics have argued renewable energies could come with high hidden greenhouse gas emissions that would negate their benefits to the climate. Our study now shows that the opposite is true,” concludes Luderer. “During the transition to clean power supply, the additional life-cycle emissions for building up wind and solar capacities are much smaller than the remaining emissions from existing fossil power plants before they can finally be decommissioned. The faster the low-carbon transformation of power supply is accomplished, the lower is the overall remaining carbon burden for the climate.”

Hydropower dam energy without sacrificing Mekong food supply: New research offers solution

The Mekong River is an economic engine for fishermen and a food source for millions of people worldwide. Nearly 100 hydropower dams are planned for construction along tributaries off the river’s 2,700-mile stretch, which flows through Burma, China, Vietnam, Laos, Thailand and Cambodia.

But while the dams are expected to provide clean energy to the region, if not managed properly, they also have the potential to disrupt natural river patterns, which would damage food production, supply and business.

Arizona State University professor John Sabo and collaborators have proposed a solution in the Dec. 8 issue of Science magazine that allows dam operators to generate power in ways that also protect — and possibly improve — food supplies and businesses throughout the Mekong river basin.

“We have figured out the relationship between river flows and fish catch, and we have developed an algorithm for dam operators to use that will increase fish harvests and still generate power,” Sabo said. “Dams are going to be built no matter how much fuss we make; our research shows how we can be more strategic about the buildout and operations of these dams in the Mekong.”

The proposed solution, the first of its kind for this problem, can be applied to other large river systems around the world facing similar tradeoffs.

The Mekong river floods annually, and it is known that those floods are important for fisheries, Sabo said. New in this research is the recognition that seasonal droughts are equally important. Long droughts combined with short floods may create the ideal conditions for terrestrial nutrients to be entrained into the freshwater system.

With that in mind, the algorithm presented by Sabo et al. in Science recommends long low-flow periods punctuated by pulses of flooding, which will allow dam operators to co-manage their power generation priorities, while protecting livelihoods for fisheries downstream.
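As a loose illustration only, a designed release schedule of the kind described, a long low-flow period punctuated by a short flood pulse, could be sketched like this. All flow values, durations and the pulse timing are invented for the example and are not taken from the Science paper.

```python
def design_hydrograph(n_days=365, base_flow=1.0, pulse_flow=8.0,
                      pulse_start=180, pulse_days=14):
    """Return a daily release schedule: long drought-like low flow
    punctuated by one short, high flood pulse (illustrative units)."""
    flows = [base_flow] * n_days
    for day in range(pulse_start, pulse_start + pulse_days):
        flows[day] = pulse_flow
    return flows

flows = design_hydrograph()
print(min(flows), max(flows), sum(1 for f in flows if f > 1.0))  # -> 1.0 8.0 14
```

The real algorithm would tune the timing and magnitude of the pulse against both power-generation targets and the nutrient dynamics described above.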

Sabo worked with other ASU researchers on the project, as well as researchers from the University of Washington, University of Maryland, Conservation International, the University of South Florida, the Mekong River Commission and Aalto University.

“We have taken this conversation around fisheries and dams in the Mekong from a yes-or-no conversation, from a good idea-bad idea conversation, and we have come up with an alternative, a mathematical formula that has the possibility to achieve dam operator goals and protect fisheries,” said Gordon Holtgrieve, an assistant professor at the University of Washington.

Holtgrieve and a team of researchers will expand the project to better understand how dam operators can balance power generation needs with other factors, including rice production, food nutritional quality and ecological goals.

Physicists excited by discovery of new form of matter, excitonium

Excitonium has a team of researchers at the University of Illinois at Urbana-Champaign… well… excited! Professor of Physics Peter Abbamonte and graduate students Anshul Kogar and Mindy Rak, with input from colleagues at Illinois, University of California, Berkeley, and University of Amsterdam, have proven the existence of this enigmatic new form of matter, which has perplexed scientists since it was first theorized almost 50 years ago.

The team studied non-doped crystals of the oft-analyzed transition metal dichalcogenide titanium diselenide (1T-TiSe2) and reproduced their surprising results five times on different cleaved crystals. University of Amsterdam Professor of Physics Jasper van Wezel provided crucial theoretical interpretation of the experimental results.

So what exactly is excitonium?

Excitonium is a condensate — it exhibits macroscopic quantum phenomena, like a superconductor, or superfluid, or insulating electronic crystal. It’s made up of excitons, particles that are formed in a very strange quantum mechanical pairing, namely that of an escaped electron and the hole it left behind.

It defies reason, but it turns out that when an electron, seated at the edge of a crowded-with-electrons valence band in a semiconductor, gets excited and jumps over the energy gap to the otherwise empty conduction band, it leaves behind a “hole” in the valence band. That hole behaves as though it were a particle with positive charge, and it attracts the escaped electron. When the escaped electron, with its negative charge, pairs up with the hole, the two remarkably form a composite particle, a boson — an exciton.

In point of fact, the hole’s particle-like attributes are attributable to the collective behavior of the surrounding crowd of electrons. But that understanding makes the pairing no less strange and wonderful.

Why has excitonium taken 50 years to be discovered in real materials?

Until now, scientists have not had the experimental tools to positively distinguish whether what looked like excitonium wasn’t in fact a Peierls phase. Though Peierls phases are completely unrelated to exciton formation, they share with exciton condensation the same symmetry and similar observables: a superlattice and the opening of a single-particle energy gap.

Abbamonte and his team were able to overcome that challenge by using a novel technique they developed called momentum-resolved electron energy-loss spectroscopy (M-EELS). M-EELS is more sensitive to valence band excitations than inelastic x-ray or neutron scattering techniques. Kogar retrofitted an EEL spectrometer, which on its own could measure only the trajectory of an electron and how much energy and momentum it lost, with a goniometer, which allows the team to measure very precisely an electron’s momentum in real space.

With their new technique, the group was able for the first time to measure collective excitations of the low-energy bosonic particles, the paired electrons and holes, regardless of their momentum. More specifically, the team achieved the first-ever observation in any material of the precursor to exciton condensation, a soft plasmon phase that emerged as the material approached its critical temperature of 190 Kelvin. This soft plasmon phase is “smoking gun” proof of exciton condensation in a three-dimensional solid and the first-ever definitive evidence for the discovery of excitonium.

“This result is of cosmic significance,” affirms Abbamonte. “Ever since the term ‘excitonium’ was coined in the 1960s by Harvard theoretical physicist Bert Halperin, physicists have sought to demonstrate its existence. Theorists have debated whether it would be an insulator, a perfect conductor, or a superfluid — with some convincing arguments on all sides. Since the 1970s, many experimentalists have published evidence of the existence of excitonium, but their findings weren’t definitive proof and could equally have been explained by a conventional structural phase transition.”

Rak recalls the moment, working in the Abbamonte laboratory, when she first understood the magnitude of these findings: “I remember Anshul being very excited about the results of our first measurements on TiSe2. We were standing at a whiteboard in the lab as he explained to me that we had just measured something that no one had seen before: a soft plasmon.”

“The excitement generated by this discovery remained with us throughout the entire project,” she continues. “The work we did on TiSe2 allowed me to see the unique promise our M-EELS technique holds for advancing our knowledge of the physical properties of materials and has motivated my continued research on TiSe2.”

Kogar admits that discovering excitonium was not the original motivation for the research. The team had set out to test their new M-EELS method on a crystal that was readily available, grown at Illinois by former graduate student Young Il Joe, now of NIST. But he emphasizes that, not coincidentally, excitonium was a major interest:

“This discovery was serendipitous. But Peter and I had had a conversation about 5 or 6 years ago addressing exactly this topic of the soft electronic mode, though in a different context, the Wigner crystal instability. So although we didn’t immediately get at why it was occurring in TiSe2, we did know that it was an important result — and one that had been brewing in our minds for a few years.”

This fundamental research holds great promise for unlocking further quantum mechanical mysteries: after all, the study of macroscopic quantum phenomena is what has shaped our understanding of quantum mechanics. It could also shed light on the metal-insulator transition in band solids, in which exciton condensation is believed to play a part. Beyond that, possible technological applications of excitonium are purely speculative.