- Dead Stars Reveal Mysteries of Planet Formation [Guest Post by David Wilson]
- The remnants of stars can tell us a surprising amount about how planets are made.
- The Future of Life Detection on Mars: We Come in Peace, But Carry Lasers! [Guest Post by Samantha Rolfe]
- Looking for biomarkers on Mars using Raman Spectroscopy.
- East Anglia’s Giant Purple Blob [Guest Post by Luke Surl]
- The causes and consequences of unusually high levels of air pollution over the east of England that triggered a panic in the media.
- The Null Hypothesis: When Do We Declare a Barren World? [Guest Post by Euan Monaghan]
- How far should we go to find life on other planets?
- Lost in Space: Finding a Sense of Place in the Cosmos. [Guest Post by Sean McMahon]
- How does our exploration of the universe aid our understanding of our own place within it?
- The Hunt for an Exo-Earth: How Close Are We? [Guest Post by Hugh Osborn]
- Will the next generation of exoplanet-hunting telescopes find another Earth?
- Rarely Done Planets [Guest Post by David Waltham]
- Climate sensitivity, exoplanetary habitability and ‘lucky’ Earth.
- A Multiplicity of Worlds [Royal Statistical Society]
- Is the current catalogue of exoplanets representative of the true diversity of worlds in the galaxy?
- Are Exoplanets Habitable? [TWDK]
- The fourth and final article of my series at Things We Don’t Know looks at what makes a habitable planet, and kicks off their coverage of World Space Week 2013
This is a guest post by David Wilson, a PhD student in the Astronomy and Astrophysics group at the University of Warwick, where he studies the remains of planetary systems around white dwarfs (see below!). He can be found on Twitter and blogs about various astronomy topics at Stuff About Space.
Twenty-seven years ago astronomers noticed something strange about the white dwarf star GD 29-38.
White dwarfs are dead stars, the burnt-out carbon cores of stars like our Sun that have exhausted their hydrogen fuel: incredibly dense, incredibly hot balls of matter roughly the size of the Earth. Because of their high temperature, tens of thousands of degrees, white dwarfs glow blue.
But the light from GD 29-38 wasn’t just blue. When it was split into a spectrum, separated into a rainbow of separate colours, there seemed to be something else there. Something shining with an infrared light, beyond the range of our eyesight.
Initially the discoverers were excited, as the red light could have come from an orbiting brown dwarf, a mysterious object several times bigger than a planet but much smaller than a star. But both the white dwarf and the infrared source were pulsating slightly, periodically getting brighter and dimmer. If the red light was from a separate object, then it shouldn’t have pulsed in time with the white dwarf.
The spectrum also revealed metals in the white dwarf’s atmosphere, heavy elements like calcium, magnesium and iron. These were also out of place, as white dwarfs have such strong gravity that anything heavier than hydrogen or helium should have sunk down into their cores long ago. The metals must be falling onto the white dwarf from the space around it – but how did they get there?
It took until 2003 for the origin of the mysterious infrared glow to be found, during which time many more white dwarfs with similar red spectra and metal-polluted atmospheres were discovered. The explanation was that the infrared light comes from a disc of dusty debris surrounding the white dwarf.
This debris was formed from the wreckage of an asteroid, left over from when GD 29-38 was a Sun-like star with its own system of planets. The dust in the disc rains down onto the white dwarf, explaining the metals we see in the atmosphere.
The story of how the debris disc got there is a result of the turbulent formation of the white dwarf. As it runs out of fuel a star swells up to a huge red giant, then blows away roughly half of its mass in an immense stellar wind, leaving the tiny white dwarf core.
With the gravitational force at its heart cut in two, the system of planets around the dying star is thrown into chaos. Planets begin to migrate outwards, trying to reach orbits twice as far away from the central star as before. As they do this, they risk coming into close contact with each other.
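The outward migration follows from a standard result for slow (“adiabatic”) stellar mass loss, sketched here as an aside rather than taken from the post. Conserving a planet’s orbital angular momentum while the star’s mass declines from $M_i$ to $M_f$ gives

```latex
a_f \;=\; a_i \,\frac{M_i}{M_f}
```

so if the star blows away half its mass ($M_f = M_i/2$), every orbit expands to roughly twice its original radius ($a_f = 2a_i$).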
Some of the planets survive these encounters and carry on as they are. Others, especially when a big, Jupiter-sized planet is involved, are thrown out of the system into the depths of interstellar space. And some are scattered into the centre of the system towards the white dwarf.
These unlucky asteroids and dwarf planets fall in towards the white dwarf until they reach a point known as the tidal disruption radius. There the tidal force, the difference in gravitational pull between the parts of the asteroid nearest the white dwarf and the areas further away, becomes so great that the asteroid is ripped apart, forming the dusty debris disc that we see as an infrared glow.
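The disruption radius itself can be estimated with the classical rigid-body Roche limit (a textbook approximation, not a figure from the post). For an asteroid of density $\rho_{\mathrm{ast}}$ orbiting a white dwarf of radius $R_{\mathrm{WD}}$ and density $\rho_{\mathrm{WD}}$:

```latex
d_{\mathrm{tidal}} \;\approx\; R_{\mathrm{WD}} \left( \frac{2\,\rho_{\mathrm{WD}}}{\rho_{\mathrm{ast}}} \right)^{1/3}
```

Plugging in rough values ($\rho_{\mathrm{WD}} \sim 10^{6}\,\mathrm{g\,cm^{-3}}$, $\rho_{\mathrm{ast}} \sim 3\,\mathrm{g\,cm^{-3}}$, $R_{\mathrm{WD}} \approx R_{\oplus}$) gives a disruption radius of order a solar radius, which is why the debris discs sit so close in around their white dwarfs.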
The discovery of this process led to an important conclusion. As the dust rains down onto the white dwarf it becomes visible to our telescopes. If we can measure what metals there are, and how much of each there is, then we can reveal the chemical composition of the asteroid or planet that formed the disc. We can ask, and answer, the question: “What are planets made of?”
Two decades ago we only knew of the eight planets in our solar system (Pluto was never a planet; it was just mislabelled). Now we know of over a thousand planets, new worlds orbiting hundreds of stars. Through our telescopes we can measure the size of these planets, what their masses are, and even in some cases get a glimpse into their atmospheres.
But we can’t find out what they’re made of, what the geology of these newly discovered planets is like. This means that we don’t know for sure if the way that the rocky planets are built in our solar system, the particular mix of iron, oxygen, magnesium, silicon and other chemicals that make up the Earth and its neighbours, is the way all planets are built.
The metal-polluted white dwarfs form a perfect laboratory, presenting us with rocky objects that have broken apart into their chemical components. By observing as many as we can, we can begin to explore the chemical diversity of planets and planetary systems. We can see whether the way our planets are built is the normal way to construct a planet, or whether Earth is even rarer than we thought.
To date we’ve discovered around a dozen white dwarfs with enough metals in their atmospheres to compare their systems in detail with our own. So far, they look fairly similar to the Earth, a hopeful sign. But we need many more to truly explore this area, and over the next few years others and I will be scouring the sky, using the Hubble Space Telescope above us and an array of telescopes on the ground. We will find more metal-polluted white dwarfs, measure the chemistry of the planetary debris around them, and begin to explore in detail what you need to build a planet.
This is a guest post by Samantha Rolfe, a PhD student at The Open University’s Department of Physical Sciences, where she is researching potential biomarkers on Mars using Raman spectroscopy. You can find her on Twitter, or talking science on Radio Verulam.
The robotic exploration of other planets has been happening for many decades now. We have been to almost all the classical planets, and the New Horizons mission is presently on its way to the Pluto‑Charon system (Pluto will always be a planet in my heart). Among the earliest of these fragile feelers were the Viking missions to Mars, extended in the 1970s. Mars has been the subject of human speculation for over a century whenever we have considered whether we are alone in the Universe. For many years, almost right up to the landing of the Viking missions, it was believed that Mars had vegetation on its surface; the Italian astronomer Giovanni Schiaparelli reported a network of linear ‘canali’ (‘channels’) on Mars during observations in 1877, a word later mistranslated into English as ‘canals’ and seized upon by Percival Lowell, further fuelling the fire that intelligent Martians existed there. However, images from the Mariner program showed the surface to be littered with craters, similar to that of the Moon.
The Viking landers were sent with life-detection instrumentation, the results of which proved inconclusive (though recent reanalysis suggests they may have detected organic material that was masked by geochemical processes not understood at the time), and this led to pessimism about finding life elsewhere in the Solar System in planetary science departments around the world. Nonetheless, improving technology and further study of Mars from orbit and the ground have revealed that Mars definitely had areas of standing and running water on its surface for a significant amount of time; long enough to create fluvial fans, sedimentary stacks and rounded pebbles, which are amongst the evidence for liquid water. These discoveries, along with the developing discipline of astrobiology, have encouraged us to keep assessing the potential of Mars as a habitable planet.
The concept of habitability has been stretched in recent years with the in-depth study of extremophiles: often single-celled organisms (though they can be found on all three branches of the phylogenetic tree) living in conditions where humans would instantly perish. Examples of terrestrial life living at extremes of temperature, pressure or salinity make for an interesting case that Mars may host life too. Liquid water can only exist at the surface of Mars if its freezing point is depressed to extremes, evidence of which has been found in the form of Recurring Slope Lineae – streaks seen to lengthen and retreat with the seasons on crater walls. If there is liquid water at the surface, perhaps there are reservoirs in the subsurface which life could utilise.
Future missions to planetary bodies will employ new techniques to search for life. Raman spectroscopy is one of these. A laser is fired, non-destructively, at a sample, and a small fraction of the scattered photons interact inelastically with the sampled molecules, slightly changing the frequency of the returning light. This is displayed as spectroscopic peaks or bands representative of the individual bonds within the molecule. Each molecule therefore has its own unique Raman spectrum, allowing the identification of organic and inorganic molecules even within a mixed matrix of materials and making it a useful tool for life detection.
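The frequency change can be made concrete with a standard relation (added here as illustration, not taken from the post). Raman spectra are usually plotted as the shift in wavenumber between the laser light and the scattered light:

```latex
\Delta\tilde{\nu} \;=\; \frac{1}{\lambda_{\mathrm{laser}}} \;-\; \frac{1}{\lambda_{\mathrm{scattered}}}
```

Each peak at shift $\Delta\tilde{\nu}$ corresponds to a molecular vibration of energy $E = h c\,\Delta\tilde{\nu}$, and it is this set of vibrational energies that acts as the molecule’s fingerprint.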
The present surface conditions of Mars are not forgiving to the survival of organic material or, therefore, to its detection. The surface is known to be an oxidising environment, leading to the destruction of any organic material that may exist there. The Martian subsurface, however, may be protecting organic molecules waiting to be detected as tantalising evidence for the possible existence of life on the Red Planet. ESA’s ExoMars mission, due to launch in 2018, will carry a Raman spectrometer, and proposed missions to Jupiter’s moon Europa are also considering taking a Raman spectrometer into the extreme radiation environment of the Jovian system.
Before we land on these planetary bodies, we can test our expectations: can organic molecules be detected in simulated Martian environments? Experiments have shown that organic molecules such as amino acids are able to survive Martian surface conditions in small quantities (parts per billion), perhaps for millions of years (by extrapolation). In the harsh light of the Martian day (where the atmosphere does not block the ultraviolet radiation from the Sun as effectively as the Earth’s does), however, the Raman signatures of amino acids are degraded. Similar results are seen for microbes such as Deinococcus radiodurans, whose Raman signatures have been analysed after exposure to the ionising radiation environment expected at and near the surface of Mars.
If we are to discover organic molecules or even microbial Raman signatures on Mars then it is apparent that we will need to dig or drill down into the subsurface, beyond the depth where destructive ultraviolet and ionising radiation can penetrate. For ultraviolet, mere millimetres of regolith can block harmful rays, but the depth to which ionising radiation is able to penetrate is thought to be at least 2 m below the surface. Luckily, ExoMars will carry a drill with the ability to bore to a depth of 2 m (see what they did there?). Drilling to this depth has never been attempted before and will be a great feat of engineering if achieved. Samples recovered from the subsurface will need to be handled with great care and be removed from direct interaction with the Martian daylight as experiments have shown that Raman signatures of some organic molecules can begin to degrade within seconds, losing vital information about potential life that may exist or have existed in the subsurface.
Raman spectroscopy is only some of what we have to look forward to in future Martian life-detection missions. Curiosity, the rover of the Mars Science Laboratory mission in Gale Crater, has already gathered a wealth of new information: rounded pebbles indicating the long-term presence of liquid water, and proof that Mars is not red all over but grey too – a sedimentary rock, ‘John Klein’, was drilled into, a first in Mars exploration, and found to be grey under the surface, with analysis consistent with clay minerals. Given that Curiosity’s mission is only to assess the habitability of Mars, not to search for life, we can only imagine what we might find in the future.
Despite the amazing advances and discoveries made by robotic missions, robots are no substitute for human exploration. It is thought that humans could have conducted the same research that the Mars rovers have in a few days or weeks, compared to the several years it has taken. However, human space exploration warrants further discussion, as there are many difficulties to overcome before travel into interplanetary space will be safe enough – never mind the spiralling costs.
This is a guest post by Luke Surl, a PhD student in the Centre for Ocean and Atmospheric Sciences (COAS) at the University of East Anglia, where he is researching the atmospheric chemistry of volcanic plumes. You can find him on Twitter, or visit lukesurl.com for his excellent science-inspired comics.
Last week a giant purple blob descended upon East Anglia, with commotion and a flurry of newspapermen in its wake. The vulnerable were told to shelter in their homes, powerless to tackle its all-pervasive reach. Wisdom was sought from the sages of this ill-understood art, but all that could be done was hope the blight would soon pass.
A little dramatic licence is appropriate for a guest blog, no? To decode: the purple blob is the region of “Very High” air quality risk shown on the official maps that have been appearing this week. These maps have been accompanied by warnings for asthmatics and others sensitive to such conditions. The “sages” are the atmospheric scientists who, normally eclipsed in the media spotlight by their climate colleagues, have been ubiquitous in the media.
If you haven’t been keeping track: in short, a combination of factors conspired this week to cause parts of Britain to experience unusually high levels of particulate matter. Britons were breathing dust blown in from the Sahara, plus some old-fashioned home-grown pollution. The weather slowed the dispersion of this material, causing it to linger and intensify.
While “smog” seemed to be the media’s favoured term for the phenomenon (evoking memories of the London smog of 1952), the discussion amongst the atmospheric scientists at UEA (where I do research) was of aerosol counts. “Aerosol” is a catch-all term for solid and liquid particles suspended in air, and there are, critically, two sorts: primary aerosol comprises particles emitted directly to the atmosphere, whilst secondary aerosol comprises particles which form in-air from gaseous beginnings.
The Saharan dust we have been inhaling is primary. Secondary aerosol is most readily created when the air has been polluted with sulphur and NOx; on an ordinary day, road traffic is the biggest such aerosol offender. In London, one of the principal raisons d’être of the congestion charge is to prevent such an air-quality hit in a concentrated metropolis of cars and people.
Such technical distinctions are, however, largely ignored by one’s lungs. Particles smaller than 10 micrometres in diameter travel into the lungs, and the smallest of these can end up penetrating and settling deep in the respiratory system. This is not good news for anyone, especially asthmatics and others with similar conditions.
In some of the more morbid papers that atmospheric scientists are likely to come across, this air-quality impact is quantified. A 2009 study found that Americans living in the most polluted areas can attribute to air pollution roughly 2.5 years of lost life expectancy compared with their cousins in cleaner areas. In China, where the economic boom has been quite literally dulled by thick smogs in its cities, the numbers are quite terrifying. These numbers are difficult to process. They are cold, dispassionate and cryptic, buried in journal papers few will read. But every data point hides an individual tragedy of a life extinguished early.
Thankfully Norwich and London are nowhere near Chinese levels, though there are still thousands of such deaths a year. Britain, and the EU in general, quite rightly holds itself to very high standards with regards to its air.
As in everything, the recent incident has had a political dimension. Public debate has asked whether this incident is to be blamed on natural or human causes.
This misses the point. The primary aerosol from the Sahara and the directions of the winds are beyond the remit of any public policy, but this natural phenomenon is compounded by human action. Regardless of how we apportion the blame, the particulate matter owing its existence to our cars and factories isn’t made harmless or insignificant by its natural counterpart; rather, the two together can make a bad problem worse, especially for the most vulnerable. And even when the winds change and the purple blobs and media disperse, this pollution will still chip away days, months or years from human lives.
There’s nothing more essential to human life than the air we breathe, which is one of the reasons I have chosen atmospheric science as my field of research. It’s also fundamentally something we cannot help but share with our neighbours and community. Our air’s pollution and perturbation, by nature and by man, is something that will impact us all.
This is a guest post by Euan Monaghan, a post-doctoral researcher in the Department of Physical Sciences at The Open University, where he studies the habitability of the subsurface of Mars. You can find him on Twitter.
Astrobiology is the search for life elsewhere in the universe. When this search is focussed on a specific world, there’s a chance—quite a good chance it would seem—that this search will turn out to be fruitless; that there will be no life to be found except the terrestrial life we bring along with us in the process. But can we ever say for sure?
This piece is focussed on Mars, but the idea applies to all worlds targeted for astrobiological exploration. The particular habitats on Europa, Titan or Kepler-62e might be different to those found on Mars, but the question is the same everywhere: does this world host life?
Scientific progress has made the martians of our imagination progressively smaller and more insignificant. No longer the grand canal builders of old—no longer even considered to be multi-cellular—the optimistic amongst us imagine microbes in briny pockets kilometres beneath a hostile surface; their presence deep underground given away by a subtle disequilibrium in the gases of Mars’ tenuous atmosphere. If the martians are there, they’re in hiding.
As we gain a greater understanding of the geologic and climatic history of Mars, a subterranean biosphere doesn’t seem so unreasonable. While Mars was likely warm and wet long before the Earth was, it is also smaller and so cooled faster. It couldn’t hold onto a thick, warming atmosphere for long and so its surface water was gradually lost, both out into space and down into the planet’s interior, to be fixed within the structure of minerals, frozen as permafrost or trapped in groundwater aquifers beneath layers of ice. And as Mars cooled and the water descended, so did the planet’s habitable zone, until it was hidden from view.
The habitability of any extra-terrestrial environment is estimated through the study of life adapted to extreme conditions on the Earth. This ‘envelope of life’, with its upper and lower boundaries of temperature, pressure, salt tolerance and so on, is expanding all the time. The relatively recent discovery of our own deep subsurface biosphere, as well as its remarkable diversity and extent, has broadened our concept of what we consider to be a habitable environment. It is with this ever-more subtle knowledge of our own world that we turn back to the planets in our search for life.
The next logical step in that search, for Mars at least, is a detailed study of its atmosphere. In early 2016 the European Space Agency will launch a mission to do just that: the ExoMars Trace Gas Orbiter (TGO) will perform a more comprehensive inventory of the martian atmosphere and the respective abundances of its gases than ever before. It is hoped that the results of this study will provide an insight into active processes occurring deep underground. But then again there is the very real possibility that the TGO will arrive in orbit and find no signs of life, however tentative. The null hypothesis—Mars is a barren world—would still stand. Should we then give up on our search, or do we commit time and resources to a strategy of ever more sophisticated astrobiological exploration, all the while striving to prevent accidental contamination by terrestrial life?
The inevitable moments when we decide to re-focus our search for life beyond the Earth should not be considered moments of pessimism. The universe has too much potential.
This is a guest post by Sean McMahon, a PhD student in the School of Geosciences at the University of Aberdeen. Sean’s research applies geological perspectives and techniques to astrobiological problems ranging from the origin and distribution of life in the universe to the origin of methane in the Martian atmosphere. Visit his excellent blog, Fourth Planet, for more on his research, his impressive space art and photography, and writings.
“Though a planetary perspective is a magnificent and enriching thing, places, not planets, are the core of human experience. It is from places that we build our world.”
— Mapping Mars, Oliver Morton (2002)
“He stood thereby, though ‘in the centre of Immensities, in the conflux of Eternities,’ yet manlike towards God and man; the vague shoreless Universe had become for him a firm city, and dwelling which he knew.”
— The French Revolution: A History, Thomas Carlyle (1837)
Last year, in a car park in Aberdeen, I saw Jupiter through a telescope for the first time. What I saw was not the familiar red-spotted giant from the Nasa photographs, that great bronze bauble marbled with cream like artisan coffee—no. What I saw, through a gap in the Scottish clouds, was a pale round smudge with three white specks for moons. It was not dramatic but it was a strange and lovely moment. It reminded me that Jupiter, the other planets, and even the distant stars and galaxies, are no less real, no less here—albeit further away—than Scotland, clouds, car parks, and me. They are on the same map, sharing our geography, our humdrum commonplace reality.
In our eagerness to be inspired by astronomical imagery, we are often tempted to forget this fundamental sameness. Documentaries about the cosmos besiege us with spectacular graphics, rousing orchestral music and rapturous, lyrical narration. In the tradition of Carl Sagan, we are urged to adopt a “cosmic perspective”, in which the Earth dwindles to an insignificant1 “mote of dust suspended in a sunbeam”. Meanwhile, digital space art is reliving the Romanticism of 19th Century painting: balance, proportion and subtlety are abandoned in favour of vertiginous perspectives, extremes of colour and contrast, and sublime, mystical lighting: silhouetted planets disintegrate into vast purple nebulae bristling with crepuscular rays. Thus, it seems that an ecstatic, almost mythical vision of outer space, emphasizing above all its spiritual and aesthetic grandeur, has taken root in popular culture.
Maybe that vision has some role to play in attracting public interest to the space sciences. But paradoxically, it can make the “wonders of the universe” seem less accessible than ever; profound, ethereal, miraculous, even unreal. It bolsters the popularity of astrology by reinforcing the illusion that planets and stars are unfathomable, heavenly beings: much more plausible aids to divination than ordinary material things. Most worryingly, it can give the impression that space exploration is an esoteric spiritual quest, unrelated to ordinary human problems and unfit for serious attention from media, government or young, career-minded scientists.
Perhaps the “numinous” view of space reflects a deeper failure to grasp the implications of the Copernican Revolution. Somehow, I suggest, we still make some kind of basic ontological distinction between the heavens and the Earth2. Consequently, we are unable to feel truly embedded in our extraterrestrial environment, which remains a transcendent, detached and coldly beautiful space rather than a homely, material, lived-in place. The Apollo programme helped to bridge that gap for a generation, transforming the moon from an icon of celestial indifference into a humanly intelligible landscape—rather like a golf course, in fact, replete with bunkers, buggies, flags and footprints3. Revealingly, many people today find it easier to believe that the whole thing was a hoax.
The sharp, vivid photographs taken by NASA’s Curiosity Rover can have a similar effect, reminding us that the martian surface is a real place, not so different in appearance from the rocky deserts of Libya or the High Arctic. Despite our unsophisticated cultural relationship with outer space—a mixture of mythology, indifference and reverence—a crewed mission to Mars in the next thirty years now seems very likely. I hope that mission will allow the next generation to feel more at home in the universe, more fully at ease with the fact that even Milton Keynes4 is part of the Milky Way. What we stand to gain is not an exalted “cosmic perspective” but simply a richer, more expansive sense of place, of where it is that we live our lives.
1 This strain of rhetoric characteristically fails to observe that human beings adjudicate the significance of the universe, not the other way around.
2 Douglas Adams exploited this confusion to humorous effect, juxtaposing ordinary things with cosmic phenomena: the “restaurant at the end of the universe,” the “whelk in a supernova” and so on; “you may think it’s a long way down the road to the chemist but that’s just peanuts compared to [the size of] space”.
3 Some readers will know that the American astronaut Alan Shepard did in fact play golf on the moon; two golf balls remain there.
4 Milton Keynes is an architecturally unprepossessing English town and home to the Open University, where much British space research has been conducted.
This is a guest post by Hugh Osborn, a PhD student in the Astronomy and Astrophysics group at the University of Warwick. Hugh’s research involves using transit surveys to discover exoplanets. Visit his excellent blog, Lost in Transits, for more on exoplanets, their detection and his research.
In the 1890s Percival Lowell pointed the huge, 24-inch Alvan Clark telescope in Flagstaff, Arizona towards the planet Mars. Ever the romantic, he longed to find some sign of life on the Red Planet: to hold a mirror up to the empty sky above and find a planet that looked a little bit like home. Of course, in Lowell’s case, it was the telescope itself that gave the impression of life, imposing faint lines onto the image that he mistook for canals. But, with Mars long since relegated to the status of a dusty, hostile world, the ideal of finding such a planet still lingers. In the great loneliness of space, our species yearns to find a world like our own, maybe even a world that some other lineage of life might call home.
A hundred years after Lowell’s wayward romanticism, the real search for Earth-like planets began. A team of astronomers at the University of Geneva used precise spectroscopy to discover a Jupiter-sized world around the star 51 Peg. This was followed by a series of similar worlds, all distinctly alien: huge gas giants orbiting perishingly close to their stars. However, as techniques improved and more time and money were invested in exoplanet astronomy, that initial trickle of new worlds soon turned into a flood. By 2008 more than 300 planets had been discovered, including many multi-planet systems and a handful of potentially rocky planets around low-mass stars. However, the ultimate goal of finding an Earth-like planet still seemed an impossible dream.
In 2009 the phenomenally sensitive Kepler mission launched. Here was a mission that might finally discover Earth-sized planets around Sun-like stars, detecting the faint dip in light as they passed between their star and us. Four years, 3500 planetary candidates and 200 confirmed planets later, the mission was universally declared a success. Its remarkable achievements include a handful of new terrestrial worlds, such as Kepler-61b and 62e, orbiting safely within their star’s habitable zones. However, despite lots of column inches and speculation, are these planets really the Earth 2.0s we were sold?
While such worlds may well have surfaces with beautifully Earth-like temperatures, there are a number of problems with calling them definitive Earth twins. For a start, the majority of these potentially habitable planets (such as Kepler-62e) orbit low-mass M-type stars. These are dimmer and redder than our Sun and, because their habitable zones lie so close in, such planets are likely to be tidally locked. The nature of such stars also makes them significantly more active, producing more atmosphere-stripping UV radiation. This means that, despite appearances, ‘habitable’ planets around M dwarfs are almost certainly less conducive to life than those around more Sun-like stars.
Even more damning is the size of these planets. Rather than being truly Earth-like, the current crop of known ‘habitable’ planets are all super-Earths. In the case of Kepler’s goldilocks worlds, this means they have radii between 1.6 and 2.3 times that of Earth. That may not sound too bad, but the mass of each planet scales with its volume. When compression due to gravity is taken into account, for such planets to be rocky they would need masses between 8 and 30 times that of Earth. With 10 ME often used as the likely upper limit for terrestrial planets, can we really call such planets Earth-like? In fact, a recent study of super-Earths put the maximum theoretical radius for a rocky planet at between 1.5 and 1.8 RE, with most worlds above this size likely being more like mini-Neptunes.
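The scaling argument can be sketched numerically. The exponents below are illustrative assumptions (a naive constant-density M ∝ R³ versus an empirical-style compressed-rock scaling of roughly M ∝ R^3.7), not figures from the article; the article’s 8–30 ME range comes from detailed interior models, which these toy scalings only approximate:

```python
# Illustrative mass-radius scalings for rocky planets, in Earth units.
# Both exponents are assumptions for this sketch, not values from the article.

def mass_constant_density(radius_re: float) -> float:
    """Naive scaling: mass grows with volume, M = R**3 (Earth = 1)."""
    return radius_re ** 3

def mass_compressed_rock(radius_re: float, exponent: float = 3.7) -> float:
    """Empirical-style scaling allowing for gravitational compression."""
    return radius_re ** exponent

for r in (1.0, 1.6, 2.3):
    print(f"R = {r:.1f} R_E -> "
          f"{mass_constant_density(r):5.1f} M_E (constant density), "
          f"{mass_compressed_rock(r):5.1f} M_E (compressed rock)")
```

Either way, the point stands: a planet only about twice Earth’s radius is many times Earth’s mass, and calling it “Earth-like” is a stretch.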
So it appears our crop of habitable super-Earths may not be as life-friendly as previously thought. It is true that deep in Kepler’s 3500 candidates a true Earth-like planet may lurk. However, the majority of Kepler’s candidates orbit distant, dim stars, which means that confirming these worlds by other techniques, especially the tiny exo-Earths, is increasingly unlikely. And with Kepler’s primary mission now ended by a technical fault, an obvious question arises: just when and how will we find a true Earth analogue?
Future exoplanet missions may well be numerous, but are they cut out to discover a true Earth-like planet? The recently launched Gaia spacecraft, for example, will discover hundreds of gas giants orbiting Sun-like stars using the astrometry technique, but it would need to be around a hundred times more sensitive to discover Earths. New ground-based transit surveys such as NGTS are set to be an order of magnitude better than their predecessors, but even these will only be able to find super-Earth- or Neptune-sized worlds.
Similarly, Kepler’s successor, the Transiting Exoplanet Survey Satellite (TESS), due to launch in 2017, will only be able to find short-period planets with radii more than 50% larger than Earth’s. HARPS, the most prolific exoplanet-hunting instrument to date, is also due for an upgrade by 2017. Its successor is a spectrometer named ESPRESSO, which will be able to measure changes in a star’s velocity down to a mere 10 cm/s. Even this ridiculous level of accuracy is still not sufficient to detect the 8 cm/s wobble that the Earth induces in the Sun.
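Where do figures like 8-10 cm/s come from? The reflex velocity a planet induces in its star can be estimated from the standard circular-orbit approximation; the sketch below (using commonly quoted constants, so the result is approximate) gives around 9 cm/s for an Earth-Sun twin, in the same ballpark as the figure quoted above:

```python
def rv_semi_amplitude_ms(planet_mass_earths, period_years=1.0,
                         star_mass_suns=1.0):
    """Radial-velocity semi-amplitude in m/s for a circular orbit,
    using the common approximation
    K ~= 28.4 m/s * (Mp/M_Jup) * (M*/M_Sun)**(-2/3) * (P/yr)**(-1/3)."""
    earth_in_jupiter_masses = 1.0 / 317.8
    return (28.4 * planet_mass_earths * earth_in_jupiter_masses
            * star_mass_suns ** (-2.0 / 3.0)
            * period_years ** (-1.0 / 3.0))

# An Earth-mass planet in a one-year orbit around a Sun-like star:
# about 0.09 m/s, i.e. ~9 cm/s, just below ESPRESSO's 10 cm/s target.
print(rv_semi_amplitude_ms(1.0))

# For comparison, a Jupiter-mass planet in the same orbit: ~28 m/s,
# which is why hot Jupiters were found decades earlier.
print(rv_semi_amplitude_ms(317.8))
```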
So despite the billions spent on the next generation of planet-finders, they all fall short of finding that elusive second Earth. What, precisely, will it take to find this particular Holy Grail? There is some hope that the E-ELT (European Extremely Large Telescope), with its 39-metre primary mirror and world-beating instruments, will be able to detect exo-Earths. Not only will its radial velocity measurements likely be sensitive enough to find such planets, it may also be able to directly image Earth analogues around the nearest stars. However, with observing time likely to be at a premium, the long-duration observations required to find and study exo-Earths could prove difficult.
Alternatively, large space telescopes could be the answer. JWST will be able to do innovative exoplanet research, including taking direct images of long-period planets and accurate atmospheric spectra of transiting super-Earths and giants. Even more remarkably, it may manage to take spectra of habitable-zone super-Earths such as GJ 581d. But direct detection of true Earth analogues remains out of reach. An even more ambitious project may be required, such as TPF (the Terrestrial Planet Finder) or Darwin, a pair of proposals that could have directly imaged nearby stars to discover Earth-like planets. However, with both projects long since shelved by their respective space agencies, the future doesn’t look so bright for Earth-hunting telescopes.
After the unabashed confidence of the Kepler era, the idea that no Earth-like planet discovery is on the horizon may come as a surprisingly pessimistic conclusion. However, not all hope is lost. The pace of technological advancement is quickening, and instruments such as TESS, ESPRESSO, the E-ELT and JWST are already being built. These missions may not be perfectly matched to the technical challenge of discovering truly Earth-like planets, but they will get us closer than ever before. As a civilisation we have waited hundreds of years for such a discovery; I’m sure we can hold out for a few more.
This is a guest post by David Waltham, Reader in Mathematical Geology at Royal Holloway, University of London. David’s new book, Lucky Planet, is out in April 2014. Visit his ‘Strange Worlds Catalogue‘ for more exoplanet oddities.
The issue of man-made global warming seems far removed from questions of exoplanet habitability, but there is a close link. A planet whose climate is highly sensitive to greenhouse-gas changes is also a planet that responds strongly to the increasing heat of its ageing star, and it’s hard for such a world to remain habitable for long. The Earth seems to be one such world (that’s why global warming is such a threat), but it has nevertheless remained habitable for billions of years. How it managed to pull off this trick is an intriguing, but not particularly new, mystery.
In 1972 Carl Sagan and George Mullen recognized that, since our Sun produced 30% less heat when she was young, surface temperatures on the early Earth should have been far below freezing. However, geological evidence showed running water when our world was just a few hundred million years old. Sagan and Mullen called this the faint young Sun paradox and, forty years later, there is still no consensus on how to resolve it. However, the concept of climate sensitivity, an idea refined over the last thirty years by climate scientists studying anthropogenic global warming, now gives us a clear framework for discussing the issues.
Climate sensitivity tells us how much warmer a planet becomes for a given increase in the heat it receives. It’s a bit like going from gas-mark 5 to gas-mark 6; how much hotter does this make an oven? At gas-mark 6 more gas is being burnt and the temperature rises but, in a badly insulated oven for example, the increase would be less than expected. Similarly, different planets warm up by different amounts for a given increase in heating, and this difference in climate sensitivity depends upon the relative strengths of the positive and negative feedbacks in the climate system. As I’ll show below, the faint young Sun paradox occurs because Earth’s high climate sensitivity is incompatible with the flow of liquid water on her surface when she was young.
Climate sensitivity is usually expressed as how much warmer the Earth becomes if carbon dioxide concentrations are doubled. A doubling of CO2 is expected by the end of the current century, so this is a very concrete way of expressing the expected impact. The best guess is that climate sensitivity is in the range 1.5-4.5 °C. This range is largely based upon computer models of the present-day climate system, but it is backed up by simulations of Earth’s past climate, which only match observations when similar climate sensitivities are used. If anything, these geological studies suggest that the computer estimates are too low, but let’s be conservative and stick with the computer models. What does a climate sensitivity of 3 °C predict about temperature changes over the lifetime of our planet?
To calculate this we need to re-express climate sensitivity in a slightly different way. Doubling CO2 increases heating at the Earth’s surface by 3.7 W/m² but, to produce an equivalent amount of heating at ground level, solar radiation must go up by 5.3 W/m² because some is reflected back into space. Thus, temperatures go up 3 °C if solar heating increases by 5.3 W/m². Earth’s climate sensitivity is therefore about 0.6 °C per W/m². Heat from the Sun has actually gone up by 90 W/m² over the last 4 billion years, and so temperatures should have risen by more than 50 °C. This implies a young Earth that endured average temperatures near -40 °C, which is inconsistent with liquid water anywhere on our planet’s surface.
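The arithmetic in this paragraph condenses to a few lines. A minimal sketch, using the mid-range values quoted above (3 °C per doubling, 5.3 W/m² of equivalent solar forcing per doubling):

```python
# Back-of-envelope version of the calculation above.
warming_per_doubling_c = 3.0   # deg C per doubling of CO2 (mid-range)
solar_equivalent_wm2 = 5.3     # W/m^2 of extra sunlight per doubling

# Climate sensitivity in deg C per W/m^2 of solar heating:
sensitivity = warming_per_doubling_c / solar_equivalent_wm2
print(round(sensitivity, 2))       # 0.57, i.e. the ~0.6 quoted

# Solar heating has risen by ~90 W/m^2 over the last 4 billion years:
expected_warming = sensitivity * 90
print(round(expected_warming, 1))  # 50.9, i.e. "more than 50 deg C"
```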
An obvious objection to this analysis is that the ancient climate system was very different to that of the modern Earth, and so the present-day climate sensitivity may not be relevant. That’s a fair point, but we can get around it by concentrating instead on the Phanerozoic Eon (i.e. the last 542 million years), when there is no reason to think that climate sensitivity would have been massively different to today. Solar heating has increased by 15 W/m² over this time, and so temperatures should have risen by about 10 °C, but there is no evidence whatsoever for such a rise. Analysis of oxygen isotopes in ancient marine organisms suggests that Phanerozoic temperatures have fluctuated around a steady mean, or perhaps even dropped a little. Thus, whether we look at the whole of Earth’s history or just the last half-billion years, there is no evidence for the expected overall warming despite the steadily increasing luminosity of our Sun. What’s going on?
The missing part of the puzzle is that Earth itself has evolved, both geologically and biologically, during its long history. For example, the slow growth of the continents and the biological evolution of more effective rock-fragmenters (e.g. lichens and trees) have steadily increased the efficiency with which CO2 is removed from the atmosphere by the chemical reaction of acid rain on volcanic rock. Another greenhouse gas, methane, has also greatly declined through time as oxygen levels have grown following the evolution of photosynthesis. Furthermore, land, especially plant-covered land, is more reflective than sea and so, as the continents grew and as they became colonized by life, more of the Sun’s heat has been reflected back into space. These processes, and perhaps others, cooled our planet as the Sun tried to warm it.
Two opposing forces therefore fought for dominance of climate trends and, coincidentally, roughly cancelled out. But what produced this coincidence? Some would ascribe it to the Gaia hypothesis: the idea that a sufficiently complex bio-geochemical system will inherently produce environmental stability. However, there’s no credible mechanism for this and, in any case, Gaia may have confused cause and effect: Earth’s complex biosphere didn’t produce a stable climate; rather, a stable climate was a necessary precondition for a complex biosphere. If this is right, then biospheres whose complexity and beauty rival that of the Earth will be rare in the Universe. On the majority of those few worlds where life arises, it will all too soon be frozen by bio-geochemistry or roasted by its sun. However, a few worlds will, purely by chance, walk the fine line between these fates long enough for intelligent life to arise. We live on one of those rare, lucky planets.
Dear readers and visitors,
Happy New Year! Thank you for taking the time to read my posts over the past orbital period; I hope you’ve enjoyed reading them as much as I have enjoyed writing them. I’ve had a rather busy few months, and as my PhD thesis looms large this year the frequency of my posts has declined somewhat, for which I apologise.
To make it up to you, I hope to supply you all with some excellent guest features from other researchers and writers over the next few months whilst I’m distracted elsewhere. I’ll kick things off tomorrow (January 3rd) with a guest post on ‘Climate Sensitivity’ from Dr David Waltham, a Reader in Mathematical Geology at Royal Holloway, University of London.
If you, or someone you know, would like to write a piece for II-I- please send an email to email@example.com with your ideas! It would be great to hear from you.
All the best, and thanks again for reading.
I wrote an article for the October edition of the Royal Statistical Society’s Significance magazine about statistics and exoplanets. You can download a .pdf copy here.
This is the fourth and final article in a series of posts by me at Things We Don’t Know about the many unknowns involved in the study of planets in orbit around other stars across the galaxy. It started off their coverage for World Space Week 2013.
As the catalogue of planets orbiting other stars (called exoplanets) known to us continues to grow, increasing discoveries of potentially ‘habitable’ planets are likely to follow. The ‘habitable zone’ (HZ) concept, which was introduced in a previous post, is becoming increasingly important to our interpretation of these announcements. However, when used in isolation, as it often is, the HZ metric may be misleading; it is better treated as a good initial indicator of possibly habitable conditions, to be interpreted alongside other available planetary characteristics.
The habitable zone describes the theoretical range of distances (with both upper and lower limits) at which a planet must orbit its star to support the fundamental requirements for the existence of life, based on our understanding of the evolution of the biosphere on Earth. It is often referred to as “the Goldilocks Zone“, since it looks for the region “not too hot, and not too cold”. The concept assumes terrestrial (rocky, as opposed to gaseous or icy) planets that exhibit dynamic tectonic activity (volcanism and/or plate tectonics) and that have active magnetic fields to protect their atmospheres from the high-energy stellar particles that could strip them away. The atmosphere is assumed to consist of water vapour, carbon dioxide and nitrogen, with liquid water available at the surface, as on the Earth. Liquid water is the key: the giver of life and the fundamental factor in defining the habitable zone in any planetary system.
It should be relatively easy to spot a number of limitations of the habitable zone concept already: we are still unsure of the atmospheric composition of many of the planets we have discovered, which would significantly affect any habitability analysis.
Also, we assume that any potential exobiology (the biology of life on other worlds) would have the same requirements as Earth-based life, which may not necessarily be so. The wide variety of extremophile organisms (those able to tolerate extremes of temperature, pressure, salinity, radiation etc.) on Earth might mean we should extend the parameters of the habitable zone beyond those originally considered. All in all, the idea of a habitable zone is a great thought experiment, but it may not necessarily translate into a warm, clement planet in reality. Planetary processes, such as tectonics and atmospheric greenhouse effects, warp the boundaries of the habitable zone. Furthermore, astrobiologists are now considering the very real possibility of salty liquid water existing in massive sub-surface oceans on Jupiter’s icy moon Europa, a body well outside the defined habitable zone of our solar system [1].
Another good example of the limitations of using the habitable zone concept in isolation was the furore that resulted from the discovery of the first planet definitively found to be within the habitable zone of its star, Kepler-22b, in late 2011 [2]. The popular science media and news outlets were awash with articles and posts describing Kepler-22b as “Earth’s twin” and “Earth 2.0” based solely on the fact that it had been discovered orbiting within the habitable zone of the Sun-like star Kepler-22. The media circus surrounding this announcement was an unusual situation, and one that has not been afforded to many other exoplanet announcements before or since. It’s clear that the possibility that this distant world may be suitable for life had spurred the imagination of scientists and the public alike. However, what was usually skipped over, or not mentioned at all, is that Kepler-22b has a radius 2.4 times that of the Earth, and estimates of its mass range from 10 to 34 times that of our planet. The large uncertainty in these figures is due to the method used in its detection, more about which can be found in this previous post in my TWDK series. The unknowns inherent in the discovery of Kepler-22b meant that it could be either a warm, ocean-covered rocky planet with a greenhouse atmosphere similar in composition to that of the Earth, or a gaseous planet with crushing gravity and surface temperatures closer to a lead-melting 460 °C, depending on its mass and composition. These are attributes we cannot yet determine effectively.
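The gulf between those two scenarios is easy to quantify. Surface gravity relative to Earth's scales as mass divided by radius squared; a quick sketch using the 2.4 Earth-radius figure from the discovery paper and the 10-34 Earth-mass range:

```python
def surface_gravity_earths(mass_earths, radius_earths):
    """Surface gravity relative to Earth's: g/g_E = (M/M_E) / (R/R_E)**2."""
    return mass_earths / radius_earths ** 2

# Kepler-22b: radius 2.4 R_E, mass anywhere between 10 and 34 M_E.
for mass_earths in (10, 34):
    print(round(surface_gravity_earths(mass_earths, 2.4), 1))
```

Even the optimistic low-mass end implies a surface gravity nearly double Earth's, while the heavy end approaches six times, crushing by any terrestrial standard.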
More data and better detection technology will provide the answer in time, but until then it remains important not to over-hype planets that are only borderline habitable in the very best-case scenario, as doing so will most likely damage the public perception of this exciting field in the long term.
Recently, there has been revitalised interest in the habitable zone concept itself, which was first proposed in 1953, with updated estimates based on new climate models published in the scientific literature, as well as increased use of integrated habitability metrics which take other planetary factors into account. However, our understanding of the factors that control the habitability of extrasolar planets is at a very early stage, as is our grasp on the limits that life can endure, and it remains too early to say with much confidence that we have discovered another world suitable for life.
[1] Tyler, Robert H. “Strong ocean tidal flow and heating on moons of the outer planets.” Nature 456 (2008): 770-772. DOI: 10.1038/nature07571
[2] Borucki, William J., et al. “Kepler-22b: a 2.4 Earth-radius planet in the habitable zone of a Sun-like star.” The Astrophysical Journal 745.2 (2012): 120. (PDF)