Oct 22 2021

Following on from my last post on the Andromeda Galaxy, here is another beautiful galaxy which is also part of the group of galaxies known as the Local Group. The Andromeda Galaxy and our own Milky Way are the two largest in the group, and this one, known as the Triangulum Galaxy, is the third largest.

Why the Triangulum Galaxy? Simply because it can be found in the small constellation of Triangulum – the Triangle! Just about any three stars could form a triangle, but this particular one is found above Aries and below Andromeda.

Again, as with my M31 image, the Optolong L-Extreme filter was used to collect the H-Alpha data, which was then blended into the red channel. This enhances the pinky-red regions of nebulosity in the spiral arms of the galaxy.

As usual, click on the image below to see the full-size version.

For the technically-minded, this article shows the equipment used to take this image.

I acquired the images using N.I.N.A. software. I took 72 x 300s exposures with the QHY268C camera for the RGB, and 17 x 600s exposures using the L-Extreme dual-narrowband filter to get the Ha data, which I combined with the red channel from the RGB. All processing was done in PixInsight.

 Posted at 11:24 am

M31 – The Andromeda Galaxy

 Deep Sky, Equipment, General Astronomy, Image Processing
Oct 13 2021

Messier 31, the great galaxy in the constellation of Andromeda, is one of the most photographed night-sky objects of all, probably second only to the Orion Nebula. Every few years I get an itch to image it again. Most astro-photographers like to revisit old targets once in a while, often after new telescopes and cameras have been purchased, and that is why I’m having another go at this beautiful object. I showed off my new kit in my previous post.

It is often said of M31 that it is the furthest-away object that can be seen with the naked eye. This is an amazing thing when you think about it! This galaxy is about 2.5 million light years away and is the nearest large galaxy to our own Milky Way, which is not too different in structure from M31 itself. In a dark, moonless sky, M31 looks like a fuzzy blob to the naked eye. Some people say they can also detect another nearby galaxy, M33 – the Triangulum Galaxy – with the naked eye. I personally can’t see M33, but it is further from us than M31, at around 2.75 million light years, so for those who can see it, M33 really does represent the furthest thing anyone can see without optical aid of any kind. I’ve asked a lot of people if they can see M33 in a good, dark sky in the UK, but I’ve never found anyone who can, so I’m happy that the photons that left the Andromeda Galaxy when Homo habilis first walked the Earth, entering my eye and being detected by my retina, represent the most ancient particles of light that can ever stimulate the human consciousness.

Here is my latest image of M31. Click on the image below to view a much larger version (4000 pixels across). Next, I will describe some more details of how this image was acquired and processed.

Firstly, what are we looking at here? The first thing to realise is that we are viewing this spiral galaxy from an angle of about 45 degrees. If we could fly over the galaxy and look directly down on it, we would see a vast spiral shape. Another thing to understand is that all of the distinct, bright stars in the image are relatively close-by stars in our own Milky Way – in other words, we are looking through a ‘curtain’ of nearby stars to see outside our own galaxy. We should understand that galaxies are vast islands of stars, separated by huge distances of near-empty space. The Andromeda Galaxy contains about a trillion stars (that’s a million, million stars), which is about twice the number in our own Milky Way galaxy. So, you are looking at a trillion stars in this image, each too far away to be seen individually; together they glow like a huge mass.

What else can we see here? Well, you will see the dark regions in the galaxy. These are huge lanes of cosmic dust which are obscuring the light from the stars behind them. Also, if you zoom in to the big image, you will see the disk of red regions that glow around the galaxy. Here’s a zoomed-in region that shows the red regions nicely. Each of these red areas shines by the light of Hydrogen-Alpha. All of them would be seen as nebulae by any inhabitants of planets orbiting the stars in M31, and any of them could be the equivalent of, say, the Orion Nebula that we see locally here in our region of the Milky Way.

Lastly, there is a bright elliptical blob showing below M31 in the main image. This is a dwarf elliptical galaxy called M110, which is a satellite of M31 itself. We have similar satellite objects associated with the Milky Way, the best known being the Large and Small Magellanic Clouds.

So, how did I create this image? I used the telescope and camera system I showed in my previous post. Over four clear nights in October 2021, I took lots of long-exposure photographs of M31. The telescope was guided very accurately by the separate guide scope, which checked the guiding accuracy every 2 seconds throughout and instructed the mount to make tiny corrections to keep the galaxy perfectly still on the chip of the sensitive camera. Eventually I had about 22 hours of exposures stored on my imaging computer. By the time I had weeded out the poorer frames, I had 10 hours of data from my broadband luminance filter and about 7 hours of data from my narrowband filter.

The narrowband filter I used was the 2″ Optolong L-Extreme filter. This passes light from both H-Alpha and Oxygen-III sources, each with a passband 7nm wide. In this image I only wanted the H-Alpha data, so I extracted the red channel from the narrowband images and threw away the green and blue channels, which shared the OIII signal. Then I merged the H-Alpha signal with the red channel from the broadband RGB images. This enhanced the red emission nebulae in M31 beautifully.
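The merge itself was done in my image-processing software, but the core idea can be sketched in a few lines of Python/NumPy. This is purely illustrative – the function name and the 0.6 weight are my own invention here, not the actual settings I used:

```python
import numpy as np

def blend_ha_into_red(rgb, ha, weight=0.6):
    """Blend a narrowband H-alpha channel into the red channel of an RGB image.

    rgb    : float array of shape (H, W, 3), values in [0, 1]
    ha     : float array of shape (H, W), the extracted H-alpha signal
    weight : fraction of the new red channel taken from the H-alpha data
    """
    out = rgb.copy()
    # Weighted mean of broadband red and narrowband H-alpha; a 'lighten'
    # blend (np.maximum) is another common choice.
    out[..., 0] = np.clip((1 - weight) * rgb[..., 0] + weight * ha, 0, 1)
    return out

# Tiny synthetic example: a 2x2 image with one bright Ha 'nebula' pixel
rgb = np.full((2, 2, 3), 0.2)
ha = np.array([[0.9, 0.1], [0.1, 0.1]])
blended = blend_ha_into_red(rgb, ha)
print(blended[0, 0, 0])  # red channel boosted where the Ha signal is strong
```

The green and blue channels are untouched, so only the emission regions gain that deeper red.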

I’ll write a more detailed blog about my process next…

 Posted at 2:13 pm

My new deep-sky imaging setup

 Deep Sky, Equipment, General Astronomy
Sep 29 2021

Since the early Summer of 2021 I have been building up a new deep-sky imaging setup based around the beautiful and venerable Takahashi FSQ-85EDX refracting telescope. I’ve always wanted a ‘Tak’ and decided to go for this model, known as the ‘Baby-Q’. The optics are glorious and the focuser is incredibly rugged and can carry heavy cameras and filter wheels.

The idea, eventually, is to turn this setup into a fully robotic system which will be mounted low to the ground and housed in a simple box-like structure with a sliding roof. For now, I’m testing out the system to see how it performs.

Here is a small gallery of photos of the current system. I will add more details about the components below.

Here’s a list of the main components that you can see in the above photos:

  • Telescope: Takahashi FSQ-85EDX F/5.3 Apochromatic Refractor with 1.01x field flattener.
  • Mount: Skywatcher AZ-EQ6 Pro
  • Camera: QHY268C cooled CMOS camera (one-shot colour, full 16-bit)
  • Filter Wheel: Starlight Xpress 5 x 2″ filter wheel
  • Guide Scope: ZWO 60mm. Focal length is 280mm, F/4.67
  • Guide Camera: ZWO ASI290MM Mini mono
  • Auto Focuser: Pegasus Astro FocusCube2
  • Power, Dew Heater and USB Hub: Pegasus Ultimate Powerbox V2
  • Dew Heater bands on both scopes
  • Windows Computer: Beelink Mini PC (in the plastic box on the ground running N.I.N.A.)

If you look at the photos with all the cabling, you will see a plastic box on the ground below the mount. This contains a ‘headless’ mini PC running Windows 10. Think of an Intel NUC and you will get the idea, but this is a Beelink with an Intel i5 CPU which comes cheaper than a NUC. This computer has all of the software installed to control the rig. I’m using the free N.I.N.A. software here and the little PC is connected to the wireless router I have in my dome just a few feet away. This allows me to use remote desktop from the comfort of my dome, office or house.

The thing that really was a ‘game-changer’ for me is the Pegasus Powerbox which is mounted just below the lens of the main scope. This provides all of the 12-volt power ports I need to run the various bits of kit and also has a USB hub with 6 ports. Additionally it can power and control the heat of three heater bands and can detect the dew-point so that it can intelligently adjust the power to the bands to keep the lenses free from dew. Because nearly everything connects to this hub, there are only two cables that need to be connected to the big plastic box on the ground. One is the 12V power to the hub and the other is the USB3 port to the Beelink mini PC.

I run the amazing free N.I.N.A. (Nighttime Imaging ‘N’ Astronomy) software on the mini PC, and the recently added Advanced Scheduler allows me to power up the system before dark and set up various targets to image during the night. The system does everything: cooling the camera, auto-focusing, slewing and centring targets, flipping across the meridian and shutting down at dawn. It can also re-focus during the night if the focus drifts, and re-centre after a cloudy spell.

Assuming I get some clear nights over the Autumn months, I will hopefully be posting some new images soon.

 Posted at 3:21 pm

The Veil Nebula – a mosaic

 Deep Sky, General Astronomy, Image Processing
Sep 29 2021

The beautiful Veil Nebula in the constellation of Cygnus (the Swan) covers a large apparent area of the sky. When I say ‘large’ I mean it in a relative way. It covers a large enough area to make it hard for the average telescope to cover in one frame. To put this into perspective, the full Moon (or the Sun) is about half a degree across, but we need a field of view (FOV) of about 3 by 3 degrees to encompass the whole of the Veil Nebula. Thus, we can say that the full Moon would fit about 6 times across the apparent span of the Veil Nebula.

I have a lot of different telescopes and cameras! Some telescopes, such as the popular Schmidt-Cassegrain design, are good for viewing the planets and small galaxies, but these typically have very small FOVs because they have long focal lengths to provide the high magnification we need to see the belts on Jupiter, the craters on the Moon, or the rings of Saturn. Think of these telescopes as the telephoto lenses of the astronomer’s toolkit. Then there are the shorter focal length, smaller telescopes. These (typically small refractors) can give a wider view of the starry sky and are ideal for delivering a larger FOV on to the camera’s sensor. However, only the smallest of these could cover the 3 by 3 degrees we require, and so I resorted to the technique of imaging one half of the Veil Nebula on one night, followed by the other half on another night, using an 85mm F/5.3 refractor. The two sets of images are ultimately processed and seamlessly joined together in a mosaic to show the Veil Nebula in one final image. Although this sounds complicated, there are advantages to this approach, as the final image provides a much higher-resolution view of the target than could have been obtained with a telescope that could fit the whole thing in one go. The final image ends up with more pixels too.

The Veil Nebula is a Supernova remnant. The star that blew itself to pieces was 20 times more massive than the Sun and was just over 2,000 light years away. This cataclysmic event happened about 10,000 years ago. The remaining remnant structure is about 110 light years across and contains the beautiful glowing filaments that you can see in the image. The red colour is caused by ionised Hydrogen atoms, and the green from doubly ionised Oxygen atoms. The filter that I used to capture this image allows light of these two colours (wavelengths) to pass through, but cuts off everything else, including general light pollution and moonlight etc. Astronomers call this narrowband imaging.

Click on the image below to see a full-sized version.

 Posted at 9:48 am

Three new images from the summer

 Deep Sky, General Astronomy, Image Processing
Sep 28 2021

Here, on the south coast of England, the nights get very short indeed for a couple of months around the Summer Solstice. In fact, there are several weeks where ‘astronomical twilight’ never ends because the Sun never drops more than 18 degrees below the horizon. I normally abandon deep-sky imaging but, this year, I was testing out a new system and decided to have a go at a few easy and classic Summer deep-sky targets.

The main thing that helped my productivity during these short nights was a new dual-band narrowband filter from Optolong called the L-Extreme. These multi-band filters are becoming very popular with deep-sky imagers these days. The pass-band spectrum graph is shown below, and you can see that there are two peaks – one centred on H-Alpha and the other on OIII and both are 7nm wide.

I’ve been imaging with Ha, OIII and SII narrowband filters for many years, but so often in the past I have been unable to capture a full set of sub-images due to poor weather or lack of time. This filter brings the possibility of completing more images, as these three testify. By the way, the SII band is not included with this filter but, so often, the SII signal is so weak that it rarely adds much to an image. However, since I keep this filter in my filter wheel (so that I can also use a Luminance filter for RGB imaging), I still have the option of adding my SII filter into the mix if I so desire.

I should mention that these narrowband filters are generally used with one-shot colour cameras. The Ha signal ends up in the red channel and the OIII signal is often mixed between green and blue. My new system includes the amazing QHY268C one-shot colour cooled CMOS camera which is very sensitive and has 16-bit resolution.

I will add a separate article showing the new setup, but it includes the superb Takahashi FSQ-85EDX APO refractor working at f/5.4 riding on a Skywatcher AZ-EQ6 Pro mount. The field of view is 179′ x 120′ which is 3 x 2 degrees (1.72 arc-seconds per pixel).
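The image scale and FOV above can be sanity-checked from the focal length and pixel size. The numbers below are assumptions for illustration (a 450mm focal length, and the commonly quoted 3.76-micron pixels and 6280 x 4210 sensor of the QHY268C – check the spec sheets); small differences from my quoted figures come from rounding and the flattener:

```python
# Rough image-scale and field-of-view check for the FSQ-85EDX + QHY268C.
FOCAL_LENGTH_MM = 450.0   # assumed focal length
PIXEL_UM = 3.76           # assumed pixel size
SENSOR_PX = (6280, 4210)  # assumed sensor dimensions in pixels

# 206.265 converts radians-per-micron/mm into arcseconds per pixel
scale = 206.265 * PIXEL_UM / FOCAL_LENGTH_MM
fov_arcmin = tuple(n * scale / 60 for n in SENSOR_PX)

print(f"{scale:.2f} arcsec/px")                        # ~1.72
print(f"{fov_arcmin[0]:.0f}' x {fov_arcmin[1]:.0f}'")  # ~180' x 121'
```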

All three images consist of just over 2 hours of exposures – that’s all the darkness I had on each night! I took 600 second exposures throughout and calibrated with dark, flat and flat-dark frames.

Please click on each image below to see the full-size versions (which are downsized to 50% of the originals).

The first image is the North America Nebula in Cygnus (NGC7000)

The second is IC1396 in Cepheus which contains the Elephant Trunk Nebula near the middle.

Lastly, NGC6888, the Crescent Nebula in Cygnus, which is sometimes referred to as van Gogh’s Ear!

Hopefully, my next article will not be too long coming. Thanks for reading.

 Posted at 11:00 am

My Current Research into PCEB Exoplanets

 Exoplanets, General Astronomy, Post Common Envelope Binaries
Aug 17 2021

It’s about time I got my astronomy blog going again, so I thought I’d start by explaining what I’ve been up to over the past two years or so: a study of the possible existence of exoplanets around a certain type of binary star system known as a Post Common Envelope Binary (PCEB).

I got into this area of research after meeting fellow Selsey astronomer John Mallett in 2019. John is part of a small group of amateur astronomers who have been researching this area for a few years, and I was recruited into their number almost straight away. The group goes by the name of the ALTAIR Eclipsing Binary Research Group, and here is a link to the About page of our website, which I write and maintain. The site is private to the four of us who are currently members, but the About page is publicly accessible. You can access all the published papers from the ALTAIR group on the About page – I have only been involved in the most recent one, but I am very busy writing parts of a new paper which is in progress.

The technique used to determine if an unseen third body (an exoplanet or perhaps a brown dwarf) might be orbiting one, or both of the stars in the PCEB system is called the method of Eclipse Time Variation (ETV). The rest of this article explains this in more detail.

Let’s clear up some terminology…

Exoplanets

Exoplanets are planets that orbit stars other than the Sun. It is an amazing fact that we had no evidence of the existence of any exoplanets until 1992, although it was widely assumed that the Sun could not be unique in having a family of planets (and other smaller bodies) in orbit around it. Such are the vast distances between the stars, it is only in these past few decades that the technology has existed to detect the existence of exoplanets, and it is only very recently that any exoplanets have been directly observed – most are detected by more subtle, indirect means – more about this shortly as it is pertinent to this story. To put it into perspective, there are now well over 4000 ‘confirmed’ exoplanets, and the number has been doubling every 27 months. However, only 16 exoplanets have been discovered using the ETV method that we, the Altair Group, are studying.

See NASA’s exoplanet website for lots of great information.

Post Common Envelope Binaries (PCEBs)

Binary stars are extremely common in the universe. A binary star system consists of two stars in a common orbit around each other. Star systems with three or more stars also exist. The specific type of binary systems that we are interested in are known as PCEB systems and evolve in a particular way, which I will briefly describe.

It is likely, in a binary system, that one of the stars will be more massive than the other. The more massive the star, the faster it ‘burns’ through the various stages of stellar fusion, and the faster it evolves. In this way, the more massive ‘primary’ star in the pair evolves more quickly into its red giant phase and expands enormously. (Our Sun will become a red giant in about 5 billion years.) The primary expands to the point where matter in its outer layers starts to overflow and transfer to the smaller secondary star. Thus, a common envelope of material is formed between the two stars. The secondary, typically a main-sequence star at this point (like our Sun), cannot cope with all this material being donated by its primary partner and, eventually, the common envelope is ejected from the star system altogether. The key here is that the departing envelope material takes angular momentum away with it, and the law of conservation of angular momentum dictates that the two stars must be left with less angular momentum. This they achieve by moving closer together, which also causes their orbital period to decrease to the point where they spin around their common centre of gravity in a stunning, but typical, time of two or three hours! During this process, the primary star evolves on to become a compact object – typically a white dwarf or hot subdwarf – whilst the secondary remains a main-sequence star for a long period because it started life as a fairly low-mass star.

Eclipsing Binaries

First, I should mention that it is generally not possible for us to make out the two separate stars using a telescope from here on Earth – certainly not with an amateur telescope, and this means we just see a single resulting point of light. So where does the eclipsing bit come in?

If the stars are aligned, even roughly, to our line of sight such that during their orbital dance, the stars move in front of each other, we will notice the following effects:

  • When the stars are side by side, the combined point of light that we see will be at its brightest
  • When the fainter of the stars moves in front of the brighter, we will see a larger dip in brightness (the primary eclipse)
  • When the brighter of the stars moves in front of the fainter, we will see a smaller dip in brightness (the secondary eclipse)

These effects are clearly illustrated in the graph of brightness (magnitude) over time shown below. We call this a light curve. I made these brightness measurements (a process called photometry) at my observatory here in Ham, of a star called HS0705+6700 (aka V0470 Camelopardalis), over a 4.5-hour period. The big dips are the primary ‘minima’ and the small ones are the secondaries. The binary orbital period of this star system is about 2.3 hours. The unusual time scale at the bottom is in Julian Days, with a correction for differences in the Earth’s position with respect to the barycentre of the Solar System. See here for more details about BJD, but know that this is the standard way astronomers in this field represent the date and time of measurements. You can see on the graph that the primary dips are separated by just under 0.1 of a day, which is where the 2.3 hours comes from.
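Our Python tools do rather more than this, but the essence of timing a minimum from a light curve can be sketched by fitting a parabola to the measurements around the dip and taking the vertex. This is a simple stand-in for the more sophisticated methods we actually use, and all the numbers below are synthetic:

```python
import numpy as np

def time_of_minimum(t, mag):
    """Estimate the time of an eclipse minimum by fitting a parabola
    to the brightness measurements around the dip.

    t   : times (e.g. BJD fractions) near the minimum
    mag : magnitudes (larger = fainter)
    Returns the vertex time of the fitted parabola."""
    a, b, c = np.polyfit(t, mag, 2)   # fit mag = a*t^2 + b*t + c
    return -b / (2 * a)               # vertex of the parabola

# Synthetic dip centred on t = 0.05 days: fainter away from mid-eclipse
t = np.linspace(0.03, 0.07, 21)
mag = 14.0 + 40.0 * (t - 0.05) ** 2
print(round(time_of_minimum(t, mag), 4))  # → 0.05
```

In practice one would fit only the points near the bottom of the dip, and estimate an uncertainty on the minimum time as well.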

Rotating Like Clockwork?

We might expect that the resulting whirling pair of stars will eclipse each other repeatedly and accurately, like celestial clockwork, and this would be true if we just considered the Keplerian equations of celestial mechanics. It would mean that, if we can measure the time of the minima with sufficient accuracy, we can work out the binary period and therefore we should be able to predict the time of any eclipse into the future (or back into the distant past).

This does indeed hold to a high degree of accuracy. We predict when eclipses should occur so that we can observe and measure them over the months and years after we have established the exact date and time of the first observed eclipse. Although the period does not follow Keplerian clockwork exactly, it does so to a very high degree of accuracy – easily enough to pin down the time so that we can go out on the predicted night, be sure of capturing the minimum dip, and re-measure and refine the date and time once more.

However, in the real world there are several other known physical effects that can very slightly change the orbital period of the two stars. I will go into this shortly, but first let me introduce the tool we use to highlight any differences that might be seen in the binary period over time. The graph is called the O – C (pronounced “O minus C”) which stands for the Observed minus Calculated (or Computed) graph. An example taken from within our ALTAIR website for our friend HS0705+6700 is shown below:

What we see in the O – C chart above is the Cycle or Epoch number of the eclipse plotted on the horizontal x-axis, and the number of seconds by which each minimum differs from its expected (calculated) date and time plotted on the y-axis. The different colours show that the measurements were made by different groups of observers and research teams over the years (the legend is not shown here for clarity). The red group to the right comprises measurements made by members of our ALTAIR group over the past few years (we try to observe this star every month if we can). As mentioned above, the period of HS0705+6700 is about 2.3 hours, so 80000 orbital cycles represent roughly 21 years of measurements. The zero Epoch is normally defined and agreed to be a certain historical observation, and sometimes a measurement prior to this is found in the literature, as is the case with the blue measurement with a negative cycle number at far left. The error bars tend to become smaller as time goes on due to improvements in technology: some of the measurements on the left will have been made from photographic plates and photomultiplier detectors, whilst CCD cameras will have been used in more recent years.

Making sense of the O – C curve

The simplest way to compute the time of any epoch is to measure the period to the best accuracy possible and then add integer multiples of it to the date and time of the first epoch. If we call the Epoch number E, we can write a formula like this:

JDE = JD0 + E x Period

This is called the linear ephemeris. JD means Julian Date which is just a way of representing a date and time as a number. As I type this, the Julian Date is 2459444.8958333335. There are many on-line Julian Date converters available to try. They are handy because you can find differences between dates by subtracting them and calculate future dates by adding days to them. This is why it is common practice to quote the period of binary stars in days. For example, the period of HS0705+6700 is 0.09564668 days and the linear ephemeris for this star system is as follows:

JDE = 2451822.75964598 + E x 0.09564668

The value of the reference Julian Date of the JD0 in the above equation is a date back in October 2000. With this, you can now calculate the date and time of any epoch in the future, or past. If everything was purely working to Keplerian celestial mechanics, every point in the above O – C plot would be on the zero line and each point would be exactly 0.09564668 days apart, so we now must address why this is not the case.
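Here is a minimal Python sketch of the ideas above: converting a date to a Julian Date, applying the linear ephemeris for HS0705+6700, and computing an O – C value in seconds. The ‘observed’ minimum is invented purely for illustration, and real work would use barycentric (BJD) times:

```python
from datetime import datetime, timezone

def to_julian_date(dt):
    """Convert a timezone-aware datetime to a Julian Date.
    JD 2440587.5 is the Unix epoch (1970-01-01 00:00 UTC)."""
    return dt.timestamp() / 86400.0 + 2440587.5

# The JD quoted in the text corresponds to 2021-08-18 09:30 UT:
print(to_julian_date(datetime(2021, 8, 18, 9, 30, tzinfo=timezone.utc)))

# Linear ephemeris for HS0705+6700 (values from the text)
JD0 = 2451822.75964598
PERIOD = 0.09564668   # days

def predicted_jd(epoch):
    """JD_E = JD0 + E x Period"""
    return JD0 + epoch * PERIOD

def o_minus_c_seconds(observed_jd, epoch):
    """Observed minus Calculated, converted from days to seconds."""
    return (observed_jd - predicted_jd(epoch)) * 86400.0

# An invented observation arriving 30 seconds late at epoch 80000:
obs = predicted_jd(80000) + 30.0 / 86400.0
print(round(o_minus_c_seconds(obs, 80000), 1))  # 30.0
print(round(80000 * PERIOD / 365.25, 1))        # ~21 years of cycles
```

This is exactly how a point on the O – C diagram is produced: measure the minimum, count the epoch, subtract the ephemeris prediction.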

Mechanisms that can Change the Binary Period

There are three mechanisms that are known to have an effect on the period of this kind of PCEB system and these are:

  • The Applegate Effect
  • Angular momentum loss
  • The presence of a third, unseen body (or more than one unseen body).

Before I briefly explain these effects, I think you will be able to imagine that if we can calculate the sizes of the first two effects, and we still find that they cannot account for the full magnitude of the O – C variations seen, then we are left with the conclusion that some unseen body must be the cause. This is the reasoning used so far by researchers in this field, and it is the way that those 16 ETV exoplanets mentioned above have been ‘discovered’.

Another thing that makes the above argument more convincing is that, when all other known effects have been subtracted from the O – C plots, what is often left over is a cyclic variation. Surely an exoplanet would leave a cyclic, periodic fingerprint behind as it goes around in its orbit?

Just a few words about the first two effects in the list above:

The Applegate Effect

A mechanism to potentially explain the eclipse time variations seen in O – C diagrams was proposed by Applegate in 1992.  This process relies on the secondary star being magnetically active, undergoing solar-like magnetic cycles which are strong enough to redistribute the angular momentum (AM) within the star. No loss of AM from the system is required. This leads to shape changes which affect the gravitational quadrupole moment. Since the orbits of the stars are gravitationally coupled to variations in their shape, this effect could explain the small changes that are observed in the orbital period. We can calculate the magnitude of the AM variations seen in the O – C plots and then compute the size of the Applegate effect. Generally, it is found that the Applegate effect is at least an order of magnitude too small to explain the O – C variations (at least this is the case with the PCEBs that we are currently studying).

Angular Momentum Loss

Note that we are talking about a loss of AM, not a cyclic variation in AM. This means that there should be a gradual decrease in the period over time as the AM is lost. There are two causes of AM loss.

The first is Gravitational Radiation, a consequence of Einstein’s General Theory of Relativity. Accelerated masses cause ripples in the geometry of space and time, and these ripples carry energy away from the binary system.

The second effect is known as Magnetic Braking, a theory explaining the loss of stellar AM due to material being captured by the stellar magnetic field and thrown out to a great distance from the surface of the star.

The sizes of both of these effects can be calculated. Often, like the Applegate effect, their contributions are not enough to explain the O – C variations seen.

Once this work has been done and the known effects have been subtracted, the remaining job is to calculate the number, mass and orbits of potential exoplanets that could explain the cyclic variations in the O – C. As you can imagine, this is not an easy task!

Our current work focuses on following up with lots of new eclipse timings for the 18 stars in our canon. We are interested in how the proposed exoplanets are faring against our new timings.

Further Discussions

This blog article is an overview of our work. I have not explained any details about how we observe and measure the brightness of the stars, nor how we analyse the light curves to calculate the times of minima. We have written several Python tools to do all this and maybe the next blog will go into a bit more detail.

 Posted at 8:49 am

Climate Change for Dummies #4: How we know for sure that we are responsible for warming the Earth

 Climate Change, Uncategorized
May 08 2016


In the first three parts of this series, I’ve been slowly laying out and building up the evidence. Part 1 described how we are certain that CO2 levels are rising; Part 2 explained how we are certain that we are responsible for that; Part 3 laid out the facts that tell us for sure that the Earth has been warming up in an unusual manner in recent times. In this article I will bring it all together and present the evidence that makes us certain that human activities are the cause of this recent warming.

Just to be pedantic, I fully defined what I mean by ‘recent’ and ‘unusual’ in Part 3 – but you knew that because you’ve already read every word! The first thing we need to understand is: what are the various factors that can cause the climate to change?

Climate Forcings

Climate forcings are different factors that affect the Earth’s climate. These “forcings” drive or “force” the climate system to change. The forcings can be both natural and man-made, and the natural kind can also be split into External or Internal forcings.

Natural External forcings consist of changes in the amount of radiation that we receive from the Sun. These can be from changes in the Earth’s orbital parameters, or from the variability of the Sun’s own output.

Natural Internal forcings comprise all those changes that occur within the Earth’s system itself, in particular volcanic activity, fluctuations in ocean circulations and large-scale changes in the marine and terrestrial biosphere or in the cryosphere.

Some of these natural forcings cause warming and others cause the planet to cool. If the Sun becomes more active, for example, and pours out more electromagnetic radiation, then this would warm the planet, whereas a large increase in volcanic activity causes more particles or aerosols in the atmosphere and this generally has a cooling effect.

The point is, to be able to account for the measured global warming, we must account for all of the warming and cooling effects of these forcings and add them all up; only then will we get a complete picture. The problem is when this is done with all of the natural forcings there is just no way to account for the measurements that show the recent global warming. In fact, the answer is not even close to reality.

Only when we look at the unnatural, man-made forcings do things start to agree with our observations and measurements. Additionally, when we only include man-made forcings in the models the answer is also wrong – at some periods it comes out a bit too warm. It seems that the total of the natural forcings on their own is responsible for some periods of slight recent cooling, and when added to the man-made effects the results are a near perfect match to our observations.

The chart below (from NASA GISS) shows many of the forcings. Note the very slight effect from changes in ‘Solar Irradiance’ (the Sun!). Also note the transient effect of ‘Stratospheric Aerosols’, which come from volcanoes and are pretty much random. We are responsible for putting the majority of the aerosols into the troposphere, from activities such as the burning of tropical forests, coal and oil.


To translate these values of ‘Effective Forcings’ into temperature, climate scientists run them through sophisticated climate models. When this is done we see that the calculations match the observed global temperatures beautifully. See the chart below, where temperature anomalies against a 1850-1900 baseline are plotted against time. (HadCRUT3 is one of the datasets of global temperatures from the UK Met Office.) Notice how far the blue line, from just the natural forcings, falls short of the observed record.


The evidence is devastating. But if you want more, I like this excellent animated chart. Just keep clicking on the down arrow at the bottom of the page to get the next frame of the animation.

It is fitting that the national treasure that is Sir David Attenborough is in the news this week as he (and we) celebrate his 90th birthday. A few years ago, Sir David was a bit sceptical about how much human activities are responsible for warming the climate. He made a documentary about it in 2006 called “The Truth about Climate Change”. He paid a visit to the UK’s Met Office centre in Exeter and spoke to the climate scientists there. It was just this very topic that convinced him of the overwhelming evidence. Here’s the clip from that programme:

I think that’s a fitting way to close this case.

 Posted by at 7:04 pm

Climate Change for Dummies Series

 Climate Change  Comments Off on Climate Change for Dummies Series
Apr 262016


Maybe I shouldn’t have called this series ‘for dummies’ – I don’t mean to imply anything by that, but it’s done now!

I’m posting Part 3 of the series today; it took longer to write than I thought. The new article, which is the previous blog post to this one, is called “How we know that the Earth is warming up”.

If you haven’t read the first two blogs, please read them first. I designed the series to build up the evidence in an incremental fashion. The articles are just below in my blog, or you can use the links below:

Part 1: How do we know that recent increases in CO2 levels are man-made?

Part 2: How do we know that CO2 causes the planet to warm up?

Part 3: How we know that the Earth is warming up

Part 4: How we know for sure that we are responsible for warming the earth

I hope you find them informative and clearly written – that’s the goal anyway.

 Posted by at 3:21 pm

Climate Change for Dummies #3: How we know that the Earth is warming up

 Climate Change  Comments Off on Climate Change for Dummies #3: How we know that the Earth is warming up
Apr 102016


Chart from HotWhopper.com

You’d think that this was a simple question to answer, and it is really, but there are people who dispute this fact – despite the mountains of evidence we have. In this article, I’m not going to address why the Earth is warming, but just establish the fact that it is, and also that it is doing so at an unusually fast rate.

Firstly, let’s set the time-frame that concerns us here. We know that the climate of the Earth has changed in the past, well before we humans were capable of influencing it. Indeed, palaeoclimatology – the scientific study of past climates – has helped us understand that the ‘recent’ warming is unusual. So, what does ‘recent’ mean? In this article, ‘recent’ means since about 1850.

Now that we all know what ‘recent’ means, I also need to define ‘unusual’ before I can state that the level of warming we measure is unusual. I can hear the groans of dismay from here! However, bear with me: because so much is made of how the recent changes can be dismissed as simply ‘natural’, I have to set a baseline of what is natural so that we can agree that the recent warming is ‘unusual’.

Past Global Temperatures

Because we have only been using instruments to physically measure the temperature around the world for a hundred years or so, we need to use other ‘proxy’ methods to look further back. These include ice cores, tree rings, coral reefs, and lake and ocean sediments, amongst others. The data extracted using these proxies can be used to reconstruct various aspects of the climate, including global temperatures before instrumental records became available.

The chart in Figure 1 below is a recent version of one of the most talked about graphs in the history of science. It has been called the ‘Hockey Stick’ because the recent rapid upturn in global temperature looks like the upturned blade of a hockey stick.


Figure 1: The 1998 original Hockey Stick chart (blue), shown against a 2007 reconstruction by Wahl & Amman (red). More recent data from instruments are in black.

Michael Mann’s original 1998 version has been the subject of intense scrutiny over the years, and the denier community has done a very good job of attacking it, to the point where many people think that the hockey stick is ‘broken’. However, this could not be further from the truth. Several more recent reconstructions, carried out independently using a variety of techniques and proxies, have all verified the original findings. The denier community do this all the time – they just keep repeating a particular message again and again until it becomes fixed in the minds of the press and the public, even though it is not true. However, the scientific method keeps grinding on, and the climate scientists have concluded that the main ‘take-away’ is as follows:

The last few decades are the hottest in the last 500 to 2000 years (depending on how far back the reconstruction goes).

This answers the definition of what we mean by ‘unusual’ warming.

By the way, get used to looking at ‘anomalies’ when viewing data and charts. Just about every chart or table in the field of climate science shows anomalies of something, be it temperature or amount of ice loss. The anomaly values are always the difference compared to a baseline figure, which is plotted as the zero. The baseline will be a calculated average over a number of years. For example, in the hockey stick chart shown above, the left axis describes the baseline as ‘Ref. 1902-1980’, and this averaged figure appears at 0.0 on the axis. Temperatures warmer than the baseline appear above the 0.0 line, and cooler ones below.
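If you like to see things in code, here is a minimal sketch in Python of how an anomaly series is built from a baseline period. The temperature values are invented for illustration, not real data:

```python
def anomalies(series, baseline_start, baseline_end):
    """Turn absolute yearly values into anomalies relative to the
    mean over the baseline years (the 0.0 line on the charts)."""
    baseline = [v for year, v in series.items()
                if baseline_start <= year <= baseline_end]
    reference = sum(baseline) / len(baseline)
    return {year: v - reference for year, v in series.items()}

# Invented global mean temperatures (degrees C) for illustration:
temps = {1900: 13.8, 1950: 13.9, 2000: 14.4, 2015: 14.8}
print(anomalies(temps, 1900, 1950))
# Years inside the baseline sit near 0.0; later, warmer years come
# out positive.
```

The choice of baseline period shifts every value up or down by the same amount, which is why charts with different baselines can still show exactly the same warming trend.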

Multiple Lines of Evidence

There are many lines of evidence that we can look at to find out if the recent warming is unusual (notice how I can use those two words now and we all know what I mean).

Direct measurements

We have global records of land and sea surface air temperatures as well as air temperatures over the oceans. Some of these temperature records go back to about 1850. You will hear a lot of nonsense from the denier community who regularly bleat on about badly placed measuring stations and urban heat island effects; but none of that detracts from the fact that all these calibrated measurements show the recent warming very clearly.  I might write a separate article about the badly-placed weather stations and why it is that we know they don’t make a jot of difference.

There are several datasets of global air temperatures; a good example is the data from the Goddard Institute for Space Studies (GISS). The data in their GISTEMP analysis comes from weather measurement stations, and you can download the raw data yourself. The chart below comes from the GISTEMP data analyses:


Figure 2: Line plot of global mean land-ocean temperature index, 1880 to present, with the base period 1951-1980. The dotted black line is the annual mean and the solid red line is the five-year mean. The green bars show uncertainty estimates.

Satellite Measurements

I could put this under the ‘Direct Measurements’ section above, because satellites orbiting the Earth are equipped with sophisticated instruments that make direct measurements of something; but they cannot measure temperature with a thermometer in the way a weather station does, or a ship that measures the water temperature at the surface of the ocean (often done by hauling in a bucket of water!). Satellite measurements can lead to an indirect measurement of the air temperature at various altitudes in the troposphere, but a lot of manipulation of the raw measurements is required to get there. This is what a scientist who works with the Remote Sensing Systems (RSS) satellite dataset said recently:

They [satellites] are not thermometers in space. The satellite [temperature] data … were obtained from so-called Microwave Sounding Units (MSUs), which measure the microwave emissions of oxygen molecules from broad atmospheric layers. Converting this information to estimates of temperature trends has substantial uncertainties.

However, climate change deniers, like Senator Ted Cruz (R-TX), have been holding up the RSS satellite dataset as “the best data we have”. They like to say this because the RSS data has, until recently, shown the least warming in recent years, and they claim that the satellite data is more reliable than the ground-based measurements discussed in the ‘Direct Measurements’ section above. However, recent adjustments have been made to the RSS data which the deniers don’t like. Have a look at this video:

So, in summary, satellite measurements are valuable if correctly converted and calibrated, and they too show warming, as in the chart below, which plots the latest RSS data properly adjusted for diurnal variation (as described in the video clip above; more information here).


Figure 3 – The global Middle Troposphere (TMT) anomalies from 1980 to now. The black line shows the old version, the light blue line the new. Note that the overall trend in the new version is 60% bigger than in the old version. The green line at the top shows the effect of the improved diurnal correction.

Just before we leave this section, it is interesting to note a denier trick to do with cherry-picking the data. Take another look at the RSS data chart above and notice the big peak around 1998 – that was due to a very big El Niño year. What people like Ted Cruz like to do is show a small section of this data where the left-hand side of the chart starts at that 1998 peak. They then draw a line through the data which appears to show cooling, because the trend appears to go down, not up. This is also known as deliberately lying and abusing one’s position of authority – something that Ted Cruz seems to do on a regular basis.

Rise in Sea Level

There are two reasons why a rise in sea levels provides an indication that the world is warming: firstly, water expands as it warms, and secondly, melting ice caps and glaciers add extra water. But before going into any details we must again establish a baseline so that we can show that the recent rise in sea levels is ‘unusual’.

The basic summary of past sea levels goes like this: at the end of the last ice age (about 21,000 years ago), global sea levels rose 120 metres over several millennia and stabilised between 2,000 and 3,000 years ago. There is strong evidence that sea levels changed very little from around AD 1 to about AD 1900. Since then there has been a marked, measured increase in global sea levels. So, the story for sea levels is very similar to the hockey stick described above for global temperatures, and it is clear that the recent measured rise is certainly ‘unusual’. This is really not surprising: the sea-level curve has to follow the shape of the temperature anomaly curve, because basic physics says that it should!

Figure 4 – Sea level evolution in North Carolina from proxy data (blue curve with uncertainty range). Local land subsidence has already been removed. For more information see here

How do we measure the rise in sea levels? There are two main methods and, much as with temperatures, there are direct measurements using tide gauges as well as satellite measurements. For sea level, however, the satellite measurements really are direct, rather than indirect as in the case of temperature. This article explains the techniques used for both surface-based and satellite measurements.


World-wide, the vast majority of the world’s 170,000+ glaciers are shrinking. A small number are actually growing, but this is due to global warming too! To understand why, please see this superb video – it says it all much better than I could possibly describe it.


Frequency of Cold and Warm Nights and Days

This is an interesting one. From time to time, weather stations will record a record high or low temperature. If there were no global warming going on, we would expect that record lows and highs would average out over time. But this is definitely not what we see. What we observe is that record highs are outpacing record lows, and as time goes on, the ratio of record highs to lows is increasing. The following widget shows the current ratio of record highs and lows across the USA. It is updated on a daily basis. It is not possible to have such a large ratio in favour of the record highs unless there has been recent global warming. Please be sure to click on the “Learn More” link on the widget.
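You can convince yourself of this with a little simulation. The Python sketch below is a toy model with made-up station statistics, not real data: each simulated station gets random yearly temperatures, optionally with a warming trend added. With no trend, record highs and record lows set in the final decade come out roughly equal; even a modest trend makes record highs swamp record lows.

```python
import random

def count_records(n_stations=2000, n_years=100, trend=0.0, seed=1):
    """Count record highs and lows set in the final 10 years across
    many independent simulated stations. Each year's temperature is
    Gaussian noise plus an optional warming trend (arbitrary units)."""
    rng = random.Random(seed)
    highs = lows = 0
    for _ in range(n_stations):
        hi = lo = rng.gauss(0.0, 1.0)  # year 0 sets both records
        for year in range(1, n_years):
            t = rng.gauss(0.0, 1.0) + trend * year
            if year >= n_years - 10:   # only tally the final decade
                highs += t > hi
                lows += t < lo
            hi, lo = max(hi, t), min(lo, t)
    return highs, lows

print(count_records(trend=0.0))   # roughly equal counts
print(count_records(trend=0.03))  # record highs dominate
```

In a stationary climate the chance of year n setting a new record is about 1/n for highs and for lows alike, so the long-run ratio hovers around one; a persistent imbalance like the one in the widget is the fingerprint of a trend.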

Ice Melt

I’ve already covered the melting of Glaciers world-wide and mentioned that melting ice is one contributing factor in the measured rise in sea levels, but there are other indications of a warming planet that can be seen and measured at both poles of the Earth.

Firstly, the Arctic and the Antarctic differ enormously in their makeup: the Arctic is a frozen sea surrounded by land, whereas the Antarctic is a frozen continent surrounded by sea. But there is also a similarity between them worth mentioning, and that is the phenomenon called Polar Amplification. In basic terms this means that any change in the net radiation balance (for example greenhouse gas intensification) tends to produce a larger change in temperature near the poles than the planetary average. So changes such as the amount of ice cover are extremely important, as these effects have positive (bad) feedbacks associated with them. The obvious positive feedback here is that ice is bright and reflective, which helps to bounce radiation from the Sun back into space. However, warming melts the ice, exposing the sea. The sea is darker and absorbs more heat, which melts more ice, which exposes more sea, which warms more and melts more ice – and so it goes on.

The Arctic

Let’s focus on the Arctic first. The extent of the Arctic sea ice can easily be photographed and measured by satellites these days. The extent is a two-dimensional measure of how far the sea ice covers the sea each year. Obviously there is a seasonal variation, with an annual peak in sea ice extent in the northern winter and a minimum at the end of the summer.

The chart below shows the trend in Arctic sea ice extent from 1979 to 2015 (averaged to remove seasonal variations). There are clearly large troughs in extent – for example in 2007 and 2012, when record lows in sea ice extent were measured. It might seem that the sea ice has recovered from these record lows, but the long-term trend is clearly downwards. It is very likely that a new record low, beating that of 2012, will be recorded by the end of this year – 2016. Record lows will continue to be beaten as the trend goes downwards.


Figure 5 – Shows the recent downward trend in Arctic sea ice extent.
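The ‘long-term trend’ in a chart like Figure 5 is just an ordinary least-squares straight line fitted through the data. Here is a minimal sketch in Python; the extent values below are invented to mimic a noisy decline of 0.08 million km² per year, not real measurements:

```python
def linear_trend(xs, ys):
    """Ordinary least-squares slope of ys against xs (units of y per x)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Invented sea-ice extents (million km^2): a built-in decline of
# 0.08/year plus a small deterministic wobble standing in for weather.
years = list(range(1979, 2016))
extent = [7.5 - 0.08 * (y - 1979) + 0.1 * ((y * 7) % 5 - 2) for y in years]

print(f"trend: {linear_trend(years, extent):.3f} million km^2 per year")
```

The point of fitting over the whole record is that the year-to-year wobble averages out, so the recovered slope sits close to the underlying decline even though individual years bounce around – which is exactly why a single ‘recovery’ year proves nothing.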

However, sea ice extent alone does not reveal the whole picture, because even the thinnest ice is recorded in this metric. What is more revealing is the measurement of the volume of the sea ice. This is much harder to do, and there are two main satellite measurement techniques in use. Firstly, there is altimetry, where satellites measure the height of the ice. Incredibly, they can detect changes of just a few millimetres from an altitude of around 1000 km! Secondly, exquisitely small changes in the Earth’s gravitational field are measured using satellites, notably NASA’s GRACE satellites. This reveals the loss of ice mass to the oceans. The chart below shows GRACE data for Greenland, and the downward trend is clearly seen.


Figure 6 – The solid blue line is the best-fitting linear trend. Harig C, Simons FJ. Mapping Greenland’s mass loss in space and time. Proceedings of the National Academy of Sciences of the United States of America. 2012;109(49):19934-19937. doi:10.1073/pnas.1206785109.

The bottom line is that the arctic sea ice volume is reducing and at an increasingly fast rate. This is true even during the years where the sea ice extent grows.

The Antarctic

Climate change deniers like to talk about the Antarctic because the extent of the sea ice surrounding the continent has actually increased in recent years! Surely that can only happen if the Antarctic is cooling, not warming!

As always, the truth is slightly more complicated and, in fact, the increase in the sea ice extent is explained by warming, not cooling! Firstly, the chart below shows that the surface air temperatures over the ice-covered areas of the Antarctic are warming. Also, oceanographic studies reveal that the surface waters of the southern oceans are warming – and at a faster rate than the global average.


Figure 7 – Annual mean surface air temperature averaged over the ice-covered areas of the Southern Ocean. Straight line is trend line (Zhang 2007).

So, how can the increase in Antarctic sea ice extent be explained? I can’t beat this very concise explanation by John Cook from the brilliant website skepticalscience.com

There are several contributing factors. One is the drop in ozone levels over Antarctica. The hole in the ozone layer above the South Pole has caused cooling in the stratosphere (Gillet 2003). A side-effect is a strengthening of the cyclonic winds that circle the Antarctic continent (Thompson 2002). The wind pushes sea ice around, creating areas of open water known as polynyas. More polynyas leads to increased sea ice production (Turner 2009).

Another contributor is changes in ocean circulation. The Southern Ocean consists of a layer of cold water near the surface and a layer of warmer water below. Water from the warmer layer rises up to the surface, melting sea ice. However, as air temperatures warm, the amount of rain and snowfall also increases. This freshens the surface waters, leading to a surface layer less dense than the saltier, warmer water below. The layers become more stratified and mix less. Less heat is transported upwards from the deeper, warmer layer. Hence less sea ice is melted (Zhang 2007).

Antarctic sea ice is complex and counter-intuitive. Despite warming waters, complicated factors unique to the Antarctic region have combined to increase sea ice production. The simplistic interpretation that it’s caused by cooling is false.

There are other indicators that point to the fact that the world has warmed over and above what is expected due to natural variations in recent years. I think I have laid out enough evidence in this article to close the case. However, as an exercise for the reader you might like to look into these topics:

  • Spring arriving earlier
  • Increase in ocean heat content
  • Rise in specific humidity

There are others…

 Posted by at 10:07 am

Climate Change for Dummies #2: How do we know that CO2 causes the planet to warm up?

 Climate Change  Comments Off on Climate Change for Dummies #2: How do we know that CO2 causes the planet to warm up?
Apr 042016

In part 1 of this series, I put forward the evidence on how we are sure that CO2 levels are rising and how we know that human activities are the cause. In this article I show how it is that we know that CO2 causes the planet to warm up. I’ll start with a small rant…

Small Rant

If the Earth, like our satellite the Moon, had no (or very, very little) atmosphere, the temperature measured at its surface would be very much colder than the balmy conditions we enjoy as a species of animal that has evolved on this watery blue jewel of a planet that we call home. And let’s be sure of one thing – this mote of dust that we inhabit, with its fragile and thin atmosphere, is the only place we are ever likely to survive. Why? Because the Universe is a big place, and we will never manage to abandon this oasis of our birth to populate another rock which we may royally fuck-up in a similar manner as we have here. Rant over!

Warmer than theory

If we considered the Earth to be an ideal theoretical ‘blackbody’ then we could calculate that the radiation from the Sun would warm its surface to about 5°C. If you then allow for the fact that the Earth is quite reflective due mainly to its bright oceans, clouds and polar caps, this temperature would drop to a theoretical value of about -18°C. Something here does not add up! The actual, measured average temperature of the Earth is about +14°C, so what is it that causes the Earth to be over 30°C warmer than simple theory predicts?
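Those two theoretical numbers come straight out of the Stefan–Boltzmann law: balance the sunlight the planet absorbs against the heat it radiates away, and solve for the temperature. A quick sketch in Python, using standard textbook values for the solar constant and the Earth’s albedo:

```python
S = 1361.0        # solar constant at Earth's distance, W/m^2
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def effective_temperature_C(albedo):
    """Balance absorbed sunlight, S*(1-albedo)/4 averaged over the
    whole sphere, against blackbody emission sigma*T^4; solve for T."""
    t_kelvin = (S * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25
    return t_kelvin - 273.15

print(effective_temperature_C(0.0))   # perfectly absorbing: about +5 C
print(effective_temperature_C(0.30))  # with Earth's albedo: about -18 C
```

The measured average surface temperature is about +14°C, and the 30-plus degree gap between that and the albedo-corrected figure is exactly what the greenhouse effect has to explain.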

The answer lies with the atmosphere – the layer of air that extends from the surface of the Earth to the edge of space. It is often said that if the Earth was the size of a bowling ball, then the atmosphere would be only as thick as a coat of varnish, and that is certainly about right, but all our weather happens in this tenuous layer of gases, and this article will explain how certain gases in the air are responsible for raising the temperature.

Isn’t this old news?

You might think that our understanding of the climate is a very recent thing, perhaps from the last few decades, but in fact we have known about the basic mechanisms that drive changes in the climate for well over a century. The first clue came from the great French mathematician and physicist Joseph Fourier around 1824. He calculated, from the size of the Earth and its distance from the Sun, what the temperature should be if the only factor was the heat it receives from the Sun. As mentioned above, the answer turns out to be rather chillier than reality, so he went on to suggest that the air might absorb heat rising from the Earth’s surface, preventing it from escaping into space and thus warming the planet. This idea was inspired by earlier experiments carried out by Horace-Bénédict de Saussure, and so Fourier was the first person to describe “the greenhouse effect” (although he didn’t use that term).

The Greenhouse Effect

A quick physics refresher here: all light, be it visible, infrared, ultraviolet, radio, microwave, X-ray or gamma, is the same stuff – electromagnetic radiation. The different categories are to do with the wavelength of the light. Wavelength is essentially the inverse of frequency: the higher the frequency, the higher the energy of the light (and the shorter the wavelength). Talk of wavelength and frequency suggests something is waving, and indeed it is, for light has both magnetic and electrical field components, which are the things that are waving about. When one wiggles it generates the other, and vice versa. End of physics lesson.

The greenhouse effect goes like this: The electromagnetic radiation reaching the Earth from the Sun consists mainly of visible and ultraviolet wavelengths. Not all of it reaches the ground, some is reflected back into space before that, but the planet absorbs the energy that does reach the surface and then reradiates it as infrared radiation. You can’t see infrared light with your eyes, but you can feel it on your skin – it’s what we call heat. On the way back up, greenhouse gases absorb the infrared light and reemit it in all directions. Some of it eventually escapes back into space; some goes back down towards the Earth’s surface. The result is a warming in the lower part of the atmosphere – think of it like putting a blanket on at night to keep warm.

The picture below is taken from NASA’s website here


Being just a little bit of a pedant about these things: the term “greenhouse effect” is not a bad one, but a glass greenhouse does not keep the air inside it warm for quite the same reason as the so-called greenhouse gases (GHGs) do. In a greenhouse the infrared radiation is trapped because glass is opaque to infrared (and won’t let it back out), rather than by the absorption/re-emission process described above.

GHGs vs. Non GHGs

Before discussing the gases that do contribute to global warming, let’s first look at the other gases that are present in the atmosphere that don’t. This is quite instructive for a couple of reasons which will become apparent.

The air we breathe is mostly made up of Nitrogen (78%), Oxygen (21%) and Argon (0.93%). The rest of the components of (dry) air are present in trace amounts as we will see when we look at GHGs.

Why is it that some molecules are affected by infrared radiation yet others aren’t? Well, it’s complicated, but Nitrogen and Oxygen exist in the atmosphere as N2 and O2 which are known as diatomic molecules (two atoms in the molecule). Argon exists as a monatomic molecule. When these molecules vibrate, there is very little change in the distribution of their electrons and hence very little change in the electrical charge distribution around the molecule. This makes them almost totally unaffected by infrared (IR) radiation – in other words, IR radiation is just not in tune with them; not on their wavelength so to speak.

There are several gases in the air that do contribute to global warming. Before discussing them individually, let’s just explain how it is that these GHG molecules are affected by IR radiation. If you can, imagine a molecule of Carbon Dioxide (CO2) which has one atom of Carbon bound to two atoms of Oxygen. When this molecule is excited by an incoming infrared photon of light, it vibrates and absorbs the photon, and then reemits another photon of IR in a different direction. It just so happens that the frequency that the molecule ‘likes’ to vibrate at is ‘in tune’ with the frequency of infrared radiation; and this is why the IR photons get absorbed. Photons of visible or ultraviolet light, for example, do not cause the molecule to ‘ring’ and are ignored.

A diagram is worth a thousand words and the following animation of a CO2 molecule being excited by a photon of IR radiation is from the UCAR website here

Human-caused climate change deniers (henceforth referred to as ‘deniers’) like to point out how small the contribution of GHGs like CO2 is to the overall makeup of the atmosphere; we just learnt that they are only present in trace amounts (just over 400 ppmv, or 0.04%, in the case of CO2). They ask, “how could such small traces make any difference to the temperature of the Earth?” There are several answers to this. One is that the greenhouse effect cannot be diluted by the non-GHGs: doubling or halving the percentage of Nitrogen or Oxygen will have no effect on the ability of the GHGs to absorb and reemit IR radiation. Additionally, arguing that because the GHGs are present only in trace amounts they cannot be dangerous is not a very scientific way of going about things; there are lots of examples of things that humans have trouble imagining that are nonetheless true. For example, even 0.000001% of Arsenic in the water supply is considered a danger to human health.

The Greenhouse Gases

  • Water vapour (H2O) is the most abundant greenhouse gas and is the biggest contributor to the ‘natural’ greenhouse effect. The amount present in the air varies depending on how ‘wet’ the air is. Water vapour provides us with an example of a bad ‘positive feedback’ in the climate, meaning that more of it causes more warming, which in turn causes more water vapour to form, and so on. Human activity has little direct effect on the abundance of water vapour, aside from the fact that warming caused by CO2 emissions produces more moisture, which causes more warming…
  • Carbon Dioxide (CO2) is probably the most important of the GHGs. It has a much longer lifetime in the atmosphere than water vapour – anything between 30 and just below 100 years (by comparison, water vapour molecules hang around for about 9 days on average). We know for certain that human activities are increasing the amount of CO2 in the atmosphere.
  • Methane (CH4) is present in the air at concentrations about 200 times lower than CO2. The thing about methane is that it is a more ‘potent’ GHG than CO2, as it has a higher greenhouse ‘potential’. However, it has a shorter lifetime in the air than CO2, at about 11 years. Methane is currently responsible for about 20% of the non-natural greenhouse effect. It comes from sources such as cattle and drilling for natural gas. Recently there was a huge leak of methane in California which we really could have done without! A big worry with methane is another (bad) positive feedback effect: recent warming is melting the not-so-permafrost in places such as Eastern Siberia, and this is venting methane, which causes more warming, which causes more venting…
  • There are other GHGs, including Nitrous Oxide, Ozone and CFCs, but their impact is less important than that of those described above.

The Evidence

To be honest, when I write this stuff, I get to the point where I think that anyone reading this would already be convinced by the story so far. But I’ve only really introduced the science and haven’t presented any evidence of warming or that it is caused by GHGs. Let’s start by thinking about some experiments we could carry out to prove our case.

The first experiment I can think of is some way of looking down at the Earth from space and measuring the amount of radiation that is coming back from the surface. If we could measure the amount at the different wavelengths of light, then that would be great. Secondly, it seems sensible to propose that, if the warming is being caused by GHGs in the lower part of the atmosphere, then measuring changes in temperature vertically through the atmosphere would also provide valuable evidence. It turns out, of course, that we can do both these things.

The first experiment has been carried out by satellites orbiting the Earth since 1970. They have been measuring the brightness of the radiation coming back up from the surface at various wavelengths. The chart below, from a paper published in Nature (Harries 2001), shows the change in brightness over the infrared part of the electromagnetic spectrum in 1997 compared to 1970 – a period over which the Earth has been measured to have warmed. The chart shows that the brightness decreases most at the IR wavelengths which are ‘tuned’ to the GHG molecules as described above. Here we can see the decrease due to Carbon Dioxide (CO2), Ozone (O3) and Methane (CH4), plus some other CFC molecules. In a nutshell, the Earth has got slightly dimmer over time in the IR wavelengths as seen from above, because more GHG molecules have held the IR ‘captive’.


The second experiment involves measuring the temperature changes over time at varying heights in the atmosphere. Some useful terminology here: The troposphere is the name for the lower region of the atmosphere covering from ground level up to roughly 18 km. The stratosphere takes over after that and goes on up to about 50 km in altitude (the mesosphere takes over higher than that).

We can easily measure that GHGs are warming the troposphere, using ground-based weather stations and satellites. If the GHGs are preventing some of the IR radiation from leaving the troposphere, we would also expect the upper atmosphere to cool over time as the lower atmosphere warms. Interestingly, the situation would be reversed if it were the Sun causing the warming.

Fortunately, since 1979 the National Oceanic and Atmospheric Administration (NOAA) has been operating the Stratospheric Sounding Units (SSUs). These satellite instruments have provided near-global stratospheric temperature data above the lower stratosphere. These measurements clearly show that the stratosphere has been cooling whilst the troposphere has been warming. The chart below is taken from a study by Ramaswamy et al., Reviews of Geophysics, Feb. 2001. The data covers the period 1980 to 1995.


There really is no other convincing way of explaining a cooling in the upper atmosphere whilst the lower atmosphere and surface warms.

Just one more thing: satellite measurements have detected that the stratosphere is shrinking – exactly what you would expect, as contraction occurs with cooling.

Case closed.

 Posted by at 9:58 am