**Guest Post by Willis Eschenbach**

“Climate sensitivity” is the name for the measure of how much the earth’s surface is supposed to warm for a given change in what is called “forcing”. A change in forcing means a change in the net downwelling radiation at the top of the atmosphere, which includes both shortwave (solar) and longwave (“greenhouse”) radiation.

There is an interesting study of the earth’s radiation budget called “Long-term global distribution of Earth’s shortwave radiation budget at the top of atmosphere“, by N. Hatzianastassiou et al. Among other things it contains a look at the albedo by hemisphere for the period 1984-1998. I realized today that I could use that data, along with the NASA solar data, to calculate an observational estimate of equilibrium climate sensitivity.

Now, you can’t just look at the direct change in solar forcing versus the change in temperature to get the long-term sensitivity. All that will give you is the “instantaneous” climate sensitivity. The reason is that it takes a while for the earth to warm up or cool down, so the immediate change from an increase in forcing will be smaller than the eventual equilibrium change if that same forcing change is sustained over a long time period.

However, all is not lost. Figure 1 shows the annual cycle of solar forcing changes and temperature changes.

*Figure 1. Lissajous figure of the change in solar forcing (horizontal axis) versus the change in temperature (vertical axis) on an annual average basis.*

So … what are we looking at in Figure 1?

I began by combining the NASA solar data, which shows month-by-month changes in the solar energy hitting the earth, with the albedo data. The solar forcing in watts per square metre (W/m2) times (1 minus albedo) gives us the amount of incoming solar energy that actually makes it into the system. This is the actual net solar forcing, month by month.

Then I plotted the changes in that net solar forcing (after albedo reflections) against the corresponding changes in temperature, by hemisphere. First, a couple of comments about that plot.
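That first step is simple enough to sketch in code. The monthly values below are invented placeholders, not the actual NASA solar data or the Hatzianastassiou et al. hemispheric albedo series; the point is just the arithmetic, absorbed forcing = S × (1 − albedo):

```python
# Sketch of the first step: absorbed (net) solar forcing = S * (1 - albedo).
# The monthly values below are invented placeholders, NOT the NASA solar
# data or the Hatzianastassiou et al. hemispheric albedo series.

def net_solar_forcing(solar_wm2, albedo):
    """Return the part of the incoming solar flux (W/m2) that is absorbed."""
    return [s * (1.0 - a) for s, a in zip(solar_wm2, albedo)]

solar = [230.0, 260.0, 300.0, 340.0, 370.0, 385.0]   # hypothetical TOA flux, W/m2
albedo = [0.33, 0.32, 0.31, 0.30, 0.29, 0.29]        # hypothetical monthly albedo

f_net = net_solar_forcing(solar, albedo)             # e.g. first month: 230 * (1 - 0.33)
```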

The Northern Hemisphere (NH) has larger temperature swings (vertical axis) than does the Southern Hemisphere (SH). This is because more of the NH is land and more of the SH is ocean … and the ocean has a much larger specific heat. This means that the ocean takes more energy to heat it than does the land.

We can also see the same thing reflected in the slope of the ovals. The slope of the ovals is a measure of the “lag” in the system. The harder it is to warm or cool the hemisphere, the larger the lag, and the flatter the slope.

So that explains the red and the blue lines, which are the actual data for the NH and the SH respectively.

For the “lagged model”, I used the simplest of models. This uses an exponential function to approximate the lag, along with a variable “lambda_0” which is the instantaneous climate sensitivity. It models the process in which an object is warmed by incoming radiation. At first the warming is fairly fast, but then as time goes on the warming is slower and slower, until it finally reaches equilibrium. The length of time it takes to warm up is governed by a “time constant” called “tau”. I used the following formula:

ΔT(n+1) = λ ΔF(n+1)/τ + ΔT(n) exp(−1/τ)

where ∆T is the change in temperature, ∆F is the change in forcing, lambda (λ) is the instantaneous climate sensitivity, “n” and “n + 1” are the times of the observations, and tau (τ) is the time constant. I used Excel’s “Solver” tool to calculate the values that give the best fit for both the NH and the SH. The fit is actually quite good, with an RMS error of only 0.2°C and 0.1°C for the NH and the SH respectively.
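As a minimal stand-in for the Excel/Solver step, the recursion and the least-squares fit can be sketched in Python. The sinusoidal forcing series and the brute-force search grid here are invented for illustration; the test simply confirms that the fit recovers known parameters from noise-free synthetic data:

```python
import math

def lagged_model(dF, lam, tau, T0=0.0):
    """The one-box recursion: T[n+1] = lam*dF[n+1]/tau + T[n]*exp(-1/tau)."""
    T = [T0]
    decay = math.exp(-1.0 / tau)
    for f in dF[1:]:
        T.append(lam * f / tau + T[-1] * decay)
    return T

def fit(dF, T_obs, lams, taus):
    """Brute-force least-squares search over (lambda, tau), standing in for Solver."""
    best = None
    for lam in lams:
        for tau in taus:
            T = lagged_model(dF, lam, tau, T_obs[0])
            err = sum((a - b) ** 2 for a, b in zip(T, T_obs))
            if best is None or err < best[0]:
                best = (err, lam, tau)
    return best[1], best[2]

# Synthetic check: generate data with known parameters, then recover them.
dF = [50 * math.sin(2 * math.pi * n / 12) for n in range(48)]  # annual cycle, W/m2
T_true = lagged_model(dF, lam=0.08, tau=1.9)
lam_fit, tau_fit = fit(dF, T_true,
                       lams=[0.06, 0.07, 0.08, 0.09],
                       taus=[1.7, 1.8, 1.9, 2.0, 2.4])
```

On noise-free data the grid search lands exactly on the generating values, which is the sanity check you want before trusting a Solver-style fit on real data.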

Now, as you might expect, we get different numbers for both lambda_0 and tau for the NH and the SH, as follows:

| Hemisphere | lambda_0 | Tau (months) |
|------------|----------|--------------|
| NH         | 0.08     | 1.9          |
| SH         | 0.04     | 2.4          |

Note that (as expected) it takes longer for the SH to warm or cool than for the NH (tau is larger for the SH). In addition, as expected, the SH changes less with a given amount of heating.

Now, bear in mind that lambda_0 is the instantaneous climate sensitivity. However, since we also know the time constant, we can use that to calculate the equilibrium sensitivity. I’m sure there is some easy way to do that, but I just used the same spreadsheet. To simulate a doubling of CO2, I gave it a one-time jump of 3.7 W/m2 of forcing.
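There is indeed an easy way: for a sustained forcing, setting ΔT(n+1) = ΔT(n) = ΔT* in the recursion gives the closed form ΔT* = λΔF / (τ(1 − exp(−1/τ))). A quick check with the fitted hemispheric values (this is my reading of the spreadsheet step, not the spreadsheet itself):

```python
import math

def equilibrium_dT(lam, tau, dF=3.7):
    """Closed-form equilibrium of T[n+1] = lam*dF/tau + T[n]*exp(-1/tau):
    setting T[n+1] = T[n] = T* gives T* = lam*dF / (tau*(1 - exp(-1/tau)))."""
    return lam * dF / (tau * (1.0 - math.exp(-1.0 / tau)))

nh = equilibrium_dT(0.08, 1.9)   # Northern Hemisphere, ~0.38 C
sh = equilibrium_dT(0.04, 2.4)   # Southern Hemisphere, ~0.18 C
print(round(nh, 1), round(sh, 1), round((nh + sh) / 2, 1))  # 0.4 0.2 0.3
```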

The results are that the equilibrium climate sensitivity to a change in forcing from a doubling of CO2 (3.7 W/m2) is **0.4°C in the Northern Hemisphere, and 0.2°C in the Southern Hemisphere**. This gives us an overall average global equilibrium climate sensitivity of **0.3°C for a doubling of CO2**.

Comments and criticisms gladly accepted, this is how science works. I put my ideas out there, and y’all try to find holes in them.

w.

NOTE: The spreadsheet used to do the calculations and generate the graph is here.

NOTE: I also looked at modeling the change using the entire dataset which covers from 1984 to 1998, rather than just using the annual averages (not shown). The answers for lambda_0 and tau for the NH and the SH came out the same (to the accuracy reported above), despite the general warming over the time period. I am aware that the time constant “tau”, at only a few months, is shorter than other studies have shown. However … I’m just reporting what I found. When I try modeling it with a larger time constant, the angle comes out all wrong, much flatter.

While it is certainly possible that there are much longer-term periods for the warming, they are not evident in either of my analyses on this data. If such longer-term time lags exist, it appears that they are not significant enough to lengthen the lags shown in my analysis above. The details of the long-term analysis (as opposed to using the average as above) are shown in the spreadsheet.

EXCELLENT work, Willis. This is how science is done: eliminate natural causes FIRST…

I think they used to call that hysteresis when I was a kid playing with magnets.

Superb! Right in the ballpark for my heterodox opinion (that sensitivity was only slightly above zero, of no possible concern.)

But you have a misstatement in the text:

It’s the reverse. The steeper the slope (NH) the lower the specific heat, and the easier it is to warm.

[Thanks, fixed. -w.]

Very elegant. I can’t see any flaw in your logic. Except perhaps (very implausible at sufficient size) multi-annual oceanic feedbacks (+ve).

I think you have a problem in that you assume there are no other longer-term warming/cooling cycles operating ‘in the background’ during the period 1984-1998; something like a rising or falling redistribution of ocean-atmosphere heat on a multi-decadal scale (e.g. the PDO) would scuttle the above calculations, I expect.

I note also that your use of 3.7 W/m^2 is a generous acceptance of some of AGW’s overblown assumptions, so the result is also very generous. Reality is probably rather lower than that. Even closer to zero!

Your diagram of Southern Hemisphere temperature ranges vs. Northern Hemisphere, reminds me why we like living in Australia.

Hi Willis,

I wrote the following in 2009, and probably did the calculations in about 2002. I will not bother to try to find them – just assume I used a Ouija Board and a forked willow stick.

But my number was 0.3C – so we MUST both be correct!

Best regards, Allan

http://wattsupwiththat.com/2009/04/30/is-climate-change-the-%e2%80%9cdefining-challenge-of-our-age%e2%80%9d-part-3-of-3/#comment-125314

Allan M R MacRae (02:46:26) :

Indur Goklany (21:25:46) :

Excellent comments Indur – thank you.

Bill,

In science, first there is Hypothesis, then Theory (Evolution) and finally Law (Gravity).

Catastrophic humanmade global warming is still only a hypothesis, and I would suggest it is already a failed one. All evidence suggests that the sensitivity of Earth temperature to CO2 is at most 0.3C for a doubling of atmospheric CO2 from 280 to 560 ppm. This is not a problem for the planet.

The sensitivity might be even lower – there has been no net global warming since 1940, in spite of an 800% increase in humanmade CO2 emissions. The only noticeable impact is that we have made little plants happy.

Regards, Allan

Does this not assume that the effect of an external change in forcing (i.e. solar) is the same as an internal change in forcing (i.e. GHG)?

I think the problem with this is that the external change is “outside” the albedo effect, whereas the GHG change is not – so there is no direct way to compare the effects.

This may well be nonsense – but I’m uneasy about the comparison.

Willis this is so apparently straightforward one wonders why it has not been done previously.

It would be nice to have it emphasized that the ‘Temperature’ you are using is the atmospheric temperature and not a global surface temperature.

Looking at the SST from http://weather.unisys.com/surface/sst_anom_new.gif (for the 28 May) and considering the Northern Hemisphere warms faster, things look a little cool for 3 weeks (half a tau) before midsummer.

As usual a worthwhile effort.

I’m a bit mystified as to how you got from 3.7 w/m2 as the equilibrium climate sensitivity to 0.4C and 0.2C for the northern and southern hemispheres respectively. If you use the normally accepted conversion of 3.2 W /m2/ C you get the value of 1.16 C per CO2e doubling, which is closer to normally accepted value of sensitivity without positive feedback.

A possible source of error. In the annual cycle, does the temperature actually reach equilibrium or is it prevented from doing that by the changing season? If not, that might explain your smaller than expected Tau values.

Very good analysis of the readily available data, Willis. The very small increase from CO2 is virtually irrelevant. No one has yet proved that CO2 is a positive forcing; a zero or negative impact could be the real result, and even with the forcing added you get only a small result.

Maybe we are cooling. What result do you get if you take out the 3.7 W/m2?

The 84-to-98 numbers are old; are up-to-date numbers now available that you can do the same with?

If slope is a measure of time lag and the SH has more water why does it not have a steeper slope?

I am not a practicing religious person, but I was brought up a churchgoer who walked away quite young. That said, I am mindful of the advent of Protestant Christianity through the agency of Martin Luther and others, and of how this appears to be a model or template, if you like, for what we are seeing with the use of the internet by people of like mind and interests to examine and discuss issues, particularly where there is widespread realization in the minds of individuals that the institutionalised view is somewhat awry from logic and reality. This is essentially the same as the later presentation of findings throughout the Enlightenment at various fora, often to an audience of educated, interested amateurs (using amateur in its sense of someone doing something for the love of it).

Compare this with the so called “official” version of climate science which is done, apparently, by “peer reviewed, published papers” where thought police gatekeepers actively filter out uncomfortably contrarian views of those who do not pay homage to the official gods of AGW.

Willis’s article and Clive Best’s thought provoking paper from the other day are examples that give considerable weight to this healthily skeptical movement, essentially and unashamedly protestant in character being the true path for the continuation of the Enlightenment rather than the institutionalised orthodoxy of the IPCC, its various scientological diocese and all the posturing hangers on that make their living from its access to funding and largesse to the homage paying faithful.

John Daly used 6 different methods to calculate climate sensitivity.

The average warming predicted by the six methods for a doubling of CO2 is only +0.2 degC. http://www.john-daly.com/miniwarm.htm

Doesn’t your calculation deal only with shortwave? I.e., incoming times (1 − albedo) is the net shortwave, no? Wouldn’t cloud and greenhouse-gas variations superimpose longwave variations you don’t capture?

Maybe. However, I note that there is still no CO2 signal in any modern temperature/time graph. Which there ought to be by this time, if the climate sensitivity of CO2 is distinguishable from zero.

This seems like a fun exercise. I’ll play with you…:) But I need some clarifications first, since I’m not sure that I understand your formula correctly.

I assume that lambda=lambda_0 but I’m not entirely certain if you intend to use Delta F ( as you explain) or F (as actually written in the formula). The formula seems to make more sense if you use F, because if you use Delta F a constant strong forcing would never be able to affect T. On the other hand, the actual computation seems to use Delta F. Most probably I’m just confused.

My first curiosity was wondering how a normalized annual CO2 concentration cycle would look superimposed on the chart? http://www.esrl.noaa.gov/gmd/webdata/ccgg/trends/co2_trend_mlo.png

I further suggest it would be helpful for reader visualization to add four tick marks to the curves indicating equinox and solstice.

Tony Rogers says: May 29, 2012 at 3:17 am

A possible source of error. In the annual cycle, does the temperature actually reach equilibrium or is it prevented from doing that by the changing season? If not, that might explain your smaller than expected Tau values.

IIRC the SST in the northern hemisphere reaches a maximum some time after the longest day of the year. This is indicative of a lag in the system caused by the time it takes for heat to re-emerge from the ocean on a seasonal basis.

I think there are some much longer lags, but these are to do with runs of highly active or less active solar cycles at the multi-decadal scale rather than co2.

Joezee;

As I commented above, steepness of slope is the inverse of time lag. Willis misstated in his text. Time is the x-axis, temp. is the y-axis. A longer lag (bigger step) on the x-axis is required for a given temp change for the SH, which reduces the slope (rate of y-increase per x-increment).

Wrong.

First: The million dollar question is the value of λ. We still don’t know that value. If we did we would know what would really happen with a doubling of CO2 (2xCO2) and that debate would be settled once and for all. It’s not.

Second: We know a doubling of CO2 causes a 1-1.2°C temperature rise when everything else stays equal. Every serious climate scientist knows that. So in a sense you are claiming a negative forcing of more than 75%, because with a temperature rise by 2xCO2 comes also a rise in water vapor concentration and we know that water vapor is a potent greenhouse gas causing on top of the initial 2xCO2 increase even more warming.

Total Greenhouse Effect is 33°C, but if clouds are removed the TGE will become 60°C. Water vapor and thus clouds cause a 45% negative forcing in the real world.

I am describing a natural situation in the last paragraph. Now we are increasing CO2. So a 2xCO2 scenario will create a 1-1.2°C rise. In turn water vapor will increase with temperature, but this will cause some extra clouds which will cool the extra warming on top of the already 1-1.2°C rise with approximately 45%.

What temperature rise can we expect with CO2 doubling? At least more than 1-1.2°C.

Water vapor feedback is thought to be between 36-85% depending on the situation (clear sky or clouds). The average is 60%. A 1-1.2°C (2xCO2) rise therefore causes an extra 0.6-0.7°C rise by water vapor on top of that 1-1.2°C, but water vapor causes a 45% (naturally) negative forcing (see above) making it a value of 0.33-0.39°C. This little amount of extra warming will cause a little bit more CO2 to rise and thus water vapor rise until equilibrium is reached.

So in fact we are looking at a sensitivity of at least 1.5-2°C for a doubling of CO2.

To make things even more complicated we could also take 45% of 1.6-1.7°C (“A 1-1.2°C (2xCO2) rise therefore causes an extra 0.6-0.7°C”) which creates a sensitivity of 0.8-0.9°C in a 2xCO2 scenario. 3 times more than the one you came up with.

Why don’t we observe that: Simple it’s the 800 year lag thing. The world has to warm up and that simply is not an overnight process. We are only seriously increasing CO2 for a few decades now. And we haven’t reached the 2xCO2 level yet. We are not even halfway.

You seem to have a missing delta on the F and a parentheses mismatch on the right-hand side of the equation.

[Thanks, fixed. -w]

I have not put much thought into your methodology/math so I won’t comment on that, but it is on the order of the predicted warming by CO2 increases ALONE. Keep in mind that AGW models ASSUME this increase will also affect other GHGs, mostly water vapor, and have positive feedback, yada yada

This feedback model (in my view) is the biggest problem with AGW theory. Without it the predicted warming becomes quite mild as you have shown.

Willis writes: “This is because more of the NH is land and more of the SH is ocean …”

The wording could be confusing to some. There’s more land in the Northern Hemisphere than in the Southern Hemisphere, but even in the Northern Hemisphere the oceans cover a greater surface area (approx. 60%) than land.

Excellent Willis, just how it should be done.

Love that Lissajous! Using it forces us to feel the permanence and repeating nature of a cycle. (Hint: it would be even better to display it as a GIF, moving like a real o’scope. Then the direction of change would also be feelable.)

The usual statistical stuff (eg scattergram) omits time entirely … in other words, omits EVERYTHING THAT MATTERS.

Thank you for the simplest, most elegant model I have seen so far. This is simple engineering test at its best. The result corresponds very well with the thermostat hypothesis and with the Svensmark research on clouds. So, for me, the result is in:

0.3 C per doubling of CO2, very little change in the tropics and more warmup as we reach the poles. This is good for the environment, increases crop yields and result in a calmer weather.

More CO2 please to feed a starving world!!

Sensitivity to forcing depends on wavelength and on whether it falls on land or on sea. Short-wave (solar) radiation goes down into water to a depth of two dozen meters and is absorbed completely, heating the ocean. Long-wave (greenhouse backscattering) penetrates into water by several microns and almost all is spent on evaporation. In the tropics and mid-latitudes it cannot heat near-surface air, since there the air is warmer than the surface. In high latitudes water vapour is condensed in near-surface air and warms it. That is why the Arctic is more sensitive to the greenhouse effect than the tropics and mid-latitudes. All radiation falling on land is absorbed, which is why the Northern Hemisphere, where most of the landmasses are, is more sensitive to greenhouse forcing.

Hi Willis,

I think a source of discrepancy between what you have done and many other analyses is that there is not a single time constant, but a broad range. An annual forcing cycle is too short a period to see the longer term responses. The simplest example is the difference between land and water. The much faster land leads the water by quite a lot because it is so much lower in heat capacity, but heat transfer between the land and ocean slows down the response of the land somewhat (with less slowing in the north than the south, of course). The response is a complicated mixture of the two response times, but the influence of the very fast land yields a low value for the apparent response. Any single-constant model is going to hide the longer response times (which extend to many years for the ocean) and so lead to a very low calculated sensitivity. A more informative analysis can be done by looking at the response to Pinatubo, which extends over several years (you have to account for ENSO influence to see the Pinatubo effect clearly, which I think you have said you object to). This kind of analysis does not rely on seasonality and shows at least some of the longer term response.

F is the total so-called forcing. If one works out the doubling of CO2 from the formulae developed (from empirical data and experimentation) by the late Prof. Hoyt Hottel of MIT (who understood more about radiant heat transfer than all the so-called climate scientists, including sceptics, put together), then absorption is only just over 1.0 W/m2, but that does not mean an increase in equivalent temperature. There will be an increase in potential and kinetic energy and radiation to space. No one knows the proportion of these. So the net effect on atmospheric temperature of a CO2 increase could well be close to zero. Heat and mass transfer is an engineering discipline which no one connected with the IPCC understands, because a) they have no qualifications in engineering disciplines (including thermodynamics) and b) they have no experience with heat transfer equipment (e.g. furnaces, heat exchangers, refrigeration, etc.)

Interesting, but I don’t see causation between CO2 changes and temp changes. What I see is CO2 increasing and temperature changing, that’s it. I see no CAUSATION between the two. Correlation is not evidence of causation.

“A change in forcing means a change in the net downwelling radiation at the top of the atmosphere, which includes both shortwave (solar) and longwave (“greenhouse”) radiation”

There is an immediate problem with that definition. If we are the top of the atmosphere (TOA) then there will be no downwelling radiation from greenhouse gases for the obvious reason that there are no greenhouse gases above the TOA.

Strictly speaking, radiative forcing is defined as the change in net downward radiative flux at the tropopause.

Interesting approach and graphics.

David Stockwell of Niche Modeling has evaluated the impact of the solar cycle. See his Solar Accumulation theory. He finds a theoretical Pi/2 (90degree) lag or 2.75 years for an 11 year cycle. The annual cycle equivalent would be 3 months.

I strongly suspect that if Willis keeps up this clear, focused analysis on the climate, he will have earned a PhD in Science. This continues his outstanding contributions to understanding and knowledge of our climate control mechanism.

M Seward says:

May 29, 2012 at 3:30 am

A brilliant post that gets to the heart of this charade.

Webster, Clayson and Curry have measured the various fluxes, wind speed, rainfall and surface temperature in the Western Pacific region.

http://www.arm.gov/publications/proceedings/conf05/extended_abs/webster_pj.pdf

The temperature change during the diurnal cycle at the surface skin (which is what satellites measure) is quite large, 5.8 degrees, and this is driven by a change in flux of 778 W/m2. Thus, one degree on the surface is 134 W/m2.

There can be no lag in the > y-1 range, of sea temperature down to 5 meters, as the response of temperature changes of up to 5.8 degrees at the surface, or delta 778 W/m2, are instantaneous using a yearly temporal window.

The analogy is the beating heart and blood pressure, the energy input is cyclical, blood flow has a dampened pulsating dynamic profile, and ‘average’ pressure (max+min)/2 is not an adequate measure of the system. Maximum pressure can increase due to arterial dilation/vaso-constriction or due to increased heart rate.

A simple test of whether Eschenbach’s method is junk is to apply it to climate model output. The climate sensitivities of the models are known, so if Eschenbach’s method massively underestimates it in the model, we can be confident that it will massively underestimate it in reality.

A second test is to consider if the climate forcing during the LGM could plausibly have been sufficiently large to give a global cooling of a few degrees with so low a climate sensitivity.

That is a perfect encapsulation of the situation, cementafriend. Wouldn’t you think a powerful force that can increase the Earth’s surface temperature by 33C could be directly measured by replicable experiments in a lab?

As an aside, over the weekend, i had the ‘pleasure’ of hearing a replay of an interview with Michael E. Mann by Michael Smerconish. Mann is smooth, you gotta give him that–he completely rolls Smerconish. He has his storyline down pat. As a friendly suggestion, don’t listen to this before you’ve had your breakfast.

http://soundcloud.com/smerconishshow/dr-michael-mann-the-hockey

I don’t believe this is a valid assessment of sensitivity.

By comparing only shortwave, you are missing the amount of thermal energy that’s either going into or coming out of the oceans, and the amount of shortwave that’s either going into the oceans or heating the atmosphere.

On a global average, after all, the earth is warmest at aphelion and coolest at perihelion.

So it turns out that the oceanic buffers are almost completely out of phase with the solar forcing – over the seasonal variation.

NREL still has data posted with 30 year averages (1961 – 1990) of the measured total solar insolation at ground level, and measured local average temperature, by month, for a long list of locations in the U.S. While this data is useful for estimating solar PV performance, it also provides a useful database to calculate climate sensitivity, but just for the U.S. The website is here-

http://rredc.nrel.gov/solar/old_data/nsrdb/1961-1990/redbook/sum2/state.html

Back in 2007, I wanted to verify one of Idso’s natural experiments (#3) in his 1998 paper. For about 60 locations, one or two from each state, I calculated the maximum difference in temperatures between winter and summer, and divided by the maximum difference in TSI between winter and summer. Note that the maximum temperature differential occurs a month or two after the maximum TSI differential.

The nice feature of this approach is that small errors in either measurement have only a small impact on the calculated sensitivity. It does not rely on tiny differences between two large values, such as the top-of-atmosphere radiation imbalance.

The result is a short-term climate sensitivity of 0.08 C/W/m^2, or about 0.3 C for a doubling of CO2.
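The arithmetic behind that estimate is just a ratio of seasonal swings. A sketch with made-up monthly values (not the NREL data, which would be substituted in per location):

```python
# Idso-style seasonal sensitivity: peak-to-trough temperature swing divided
# by peak-to-trough insolation swing. Monthly values below are illustrative
# placeholders, NOT the NREL 1961-1990 station data.
temps = [-2.0, 0.5, 6.0, 12.5, 18.0, 22.5, 25.0, 24.0, 19.5, 13.0, 6.0, 0.0]       # deg C
insol = [120.0, 160.0, 220.0, 280.0, 330.0, 350.0, 340.0, 300.0, 240.0, 180.0,
         130.0, 110.0]                                                              # W/m2

sens = (max(temps) - min(temps)) / (max(insol) - min(insol))  # C per W/m2
ecs = sens * 3.7   # implied warming for a 3.7 W/m2 doubling of CO2
```

With the real station data this ratio came out around 0.08 C/W/m^2; the placeholder numbers here give a similar order of magnitude.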

Sounds familiar.

The sensitivity calculated from coastal locations is lower than that calculated from inland locations. Small islands had the lowest sensitivity.

Sounds familiar again.

The sensitivity increases as a function of increasing latitude. The values range from <0.02 up to 0.11 C/W/m^2.

What happens when you try averaging the temperature and solar forcing in both hemispheres together, before diagnosing the sensitivity and time scale?

M Seward says:

May 29, 2012 at 3:30 am

I am not a practicing religious person but was bought up a church goer who walked away quite young. That said I am mindful of the advent of Protestant Christianity through the agency of Martin Luther and others…………….

——————————————

Except the historical Luther was much more like Michael Mann than any skeptic I know.

steve fitzpatrick says:

May 29, 2012 at 5:24 am

Hi Willis,

I think a source of discrepancy between what you have done and many other analyses is that there is not a single time constant, but a broad range. An annual forcing cycle is too short a period to see the longer term responses….

_________________________________

The longer term responses are slow and fairly small over short periods of time. Over the short time period of 14 years, where the annual swings in insolation and the response of temperature are large, this is a good first approximation of the energy/temperature response of the climate.

This is really good Willis. There is a lot that can be done with this data.

I note that Hansen calculates his sensitivity as 0.75°C per W/m2 change. Well, obviously this data is far, far below that value. But another interesting thing from the data is that the Northern Hemisphere albedo value rises to 0.333 in the winter. Hansen calculates his 0.75°C per W/m2 by using an albedo value in the ice ages of just 0.305 (in other words, he artificially understated the ice age albedo so that he can get a greater GHG effect). There is no way the current Northern Hemisphere winter has a higher albedo than the annual average in the ice ages.

There is also the issue of the greenhouse effect which is about 150 W/m2 (Temp – solar forcing). But using this data, the value in the Northern Hemisphere changes from about 200 W/m2 in the winter to 100 W/m2 in the summer (now some of this is just due to absorption/emission from surfaces but it is somewhat unusual).

Even for a one-box model, your taus look quite low. Given a cyclical period and a response lag, the formula I use is:

tau = period*tan(2*pi*lag/period)/(2*pi)

For a 2 month lag of midlatitude SST’s on the annual 12 month cycle, I get tau=3.3; a lag of 2.5 months gives tau=7.12

For a 4 month lag on a 60 month ENSO cycle, I get tau=4.25

I’m not sure if the formula is correct, but the results seem reasonable.
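The formula does check out numerically, at least under the commenter's implicit assumption of a single-box response to sinusoidal forcing (the phase lag is arctan(2πτ/P), so inverting gives τ = P·tan(2π·lag/P)/(2π)):

```python
import math

def tau_from_lag(period, lag):
    """One-box time constant implied by a time lag on a sinusoidal forcing:
    phase = atan(2*pi*tau/period), so tau = period*tan(2*pi*lag/period)/(2*pi)."""
    return period * math.tan(2 * math.pi * lag / period) / (2 * math.pi)

print(round(tau_from_lag(12, 2.0), 2))   # 3.31 -- 2-month SST lag, annual cycle
print(round(tau_from_lag(12, 2.5), 2))   # 7.13 -- 2.5-month lag
print(round(tau_from_lag(60, 4.0), 2))   # 4.25 -- 4-month lag, 60-month ENSO cycle
```

These reproduce the commenter's values of 3.3, 7.12 and 4.25 to rounding.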

I also suspect that if a one-box model was adequate, then you could estimate sensitivity by simply regressing temperature against ln(co2). I would think that the rate that heat was being extracted from the pipeline would quickly match the rate that it was being added. That is, the amount of heat in the pipeline would quickly approach an upper limit.

Willis,

how did you calculate dF and dT for the hemispheres?

Did you consider that heat is transported from one hemisphere to the other each day and each moment?

“However, since we also know the time constant, we can use that to calculate the equilibrium sensitivity.”

No you can’t. The only thing that time constant is telling you is that the year is a bit longer than 4 × 2.4 months (using SH). The response will never lag the forcing by more than 1/4 of the cycle time, 3 months in this case. One year (or 15 years) is far too short for the ocean to reach equilibrium.
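The quarter-cycle limit invoked here follows from the one-box phase relation: the time lag is (P/2π)·arctan(2πτ/P), and since arctan saturates at π/2, the lag can never exceed P/4 no matter how large tau gets. A quick numerical illustration (a sketch of the standard one-box result, not anything from the spreadsheet):

```python
import math

def lag_months(tau, period=12.0):
    """Time lag of a one-box model forced at the given period.
    lag = (period/(2*pi)) * atan(2*pi*tau/period); atan saturates at pi/2,
    so the lag approaches but never reaches period/4 (3 months for a year)."""
    return period / (2 * math.pi) * math.atan(2 * math.pi * tau / period)

for tau in (2.4, 24, 240, 2400):
    print(round(lag_months(tau), 3))  # approaches but never reaches 3.0
```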

jrwakefield says: May 29, 2012 at 6:03 am

Interesting, but I don’t see causation between CO2 changes and temp changes. What I see is CO2 increasing and temperature changing, that’s it. I see no CAUSATION between the two. Correlation is not evidence of causation.

Jeez. For the hundredth time, correlation is proof of causation. With the usual statistical caveats.

Use of the phrase ‘Correlation is not evidence of causation.’ is proof the utterer doesn’t understand science or statistics.

Seasonal changes, while interesting, are not a direct comparison to the usual definition of climate sensitivity:

“A model estimate of equilibrium sensitivity thus requires a very long model integration. A measure requiring shorter integrations is the transient climate response (TCR) which is defined as the *average temperature response over a twenty year period* centered at CO2 doubling in a transient simulation with CO2 increasing at 1% per year. The transient response is lower than the equilibrium sensitivity, due to the “inertia” of ocean heat uptake. Fully equilibrating ocean temperatures requires integrations of thousands of model years.” http://en.wikipedia.org/wiki/Climate_sensitivity – emphasis added.

Given the thermal inertia of the oceans, estimates derived directly from seasonal changes (short term) are *always* going to be underestimates of the transient climate sensitivity (~20 years).

You might be interested in Knutti and Meehl 2006 (http://journals.ametsoc.org/doi/pdf/10.1175/JCLI3865.1), “Constraining Climate Sensitivity from the Seasonal Cycle in Surface Temperature”, where they examine seasonal signals in ~25 regions of the world, and find a transient sensitivity from 1.5-2°C to 5-6.5°C, most likely 3-3.5°C/doubling of CO2.

Willis, several people bring up “only the shortwave radiation” but from what you said,

“…There is an interesting study of the earth’s radiation budget called “Long-term global distribution of Earth’s shortwave radiation budget at the top of atmosphere“, by N. Hatzianastassiou et al. Among other things it contains a look at the albedo by hemisphere for the period 1984-1998. I realized today that I could use that data, along with the NASA solar data, to calculate an observational estimate of equilibrium climate sensitivity….”

It looks like you are using the albedo by hemisphere from the N. Hatzianastassiou et al. study and using that to modify the NASA solar data to estimate the total energy at the earth’s surface at each point in time. Is that correct?

Also, I am assuming the albedo numbers would include the effects of volcanoes, or is it only clouds?

The method appears essentially correct to me, in that it gives a measure of how much global temperature is OBSERVED to change as a result of a change in solar radiation.

So, if 3.7 W/m2 is the effect of a doubling of CO2, then the result must follow: a warming of 0.3°C, based on observation. The question is whether 3.7 W/m2 is correct, as it can be shown that CO2 has both a warming and a cooling effect, and the absence of the atmospheric hotspot, contrary to theory, indicates that the cooling effect of CO2 may predominate over the warming effect. As such, 3.7 W/m2 may well be much too high.

This is a critical item. The point is further supported by the leveling of temperatures post-WWII and at present.

Both of these were periods of rapid CO2 increase as a result of increased economic activity: the post-war reconstruction, and the current industrialization of India and China, the most populous nations on earth. As such, the assumption that CO2 has a net warming effect of 3.7 W/m2 is not supported by the observational evidence.

The only weakness I can see offhand in the method is in Tau

Hemisphere   lambda_0   Tau (months)
NH           0.08       1.9
SH           0.04       2.4

The values for Tau look like the seasonal lag – for example, June 21 is the first day of NH summer, but the hottest days lag it. Mid-August should be hottest in the NH according to your numbers, which is pretty close to what is observed. The argument might be made that your model is understating the longer-term lags and only modelling the seasonal ones, which would make 0.3°C a minimum.
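One quick way to check whether Tau values like these are consistent with the observed seasonal lag is the phase lag of a single-time-constant system driven by a sinusoid, lag = arctan(ωτ)/ω. A minimal sketch (my own calculation, not from the post; it assumes the forcing is a pure annual sinusoid):

```python
import math

def phase_lag_months(tau_months, period_months=12.0):
    """Phase lag (in months) of a single-time-constant system
    driven by a sinusoidal forcing of the given period."""
    omega = 2.0 * math.pi / period_months  # radians per month
    return math.atan(omega * tau_months) / omega

lag_nh = phase_lag_months(1.9)  # NH, tau = 1.9 months -> ~1.5 month lag
lag_sh = phase_lag_months(2.4)  # SH, tau = 2.4 months -> ~1.7 month lag
```

A tau of 1.9 months puts the NH temperature peak roughly a month and a half after the June solstice, i.e. early-to-mid August, consistent with the seasonal-lag reading above.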

Quoted from Willis:

“Comments and criticisms gladly accepted, this is how science works. I put my ideas out there, and y’all try to find holes in them.”

====================

There are no holes in that statement above.

I commend your approach. An open and transparent peer review will begin. Knowledge will be gained and shared. Tribal agendas will be exposed. Maybe proper scientific methods will be learned from this example.

richard telford says: May 29, 2012 at 6:27 am

A simple test of whether Eschenbach’s method is junk is to apply it to climate model output. The climate sensitivity of the models is known, so if Eschenbach’s method massively underestimates it in the model, we can be confident that it will massively underestimate it in reality.

That statement is junk. All it says is, if the climate models are correct, then the climate models are correct.

ps: CO2 cools the atmosphere by radiating away energy from N2 and O2. This increases the vertical circulation to restore the lapse rate, reducing surface temperatures. The observational evidence suggests this effect is greater than any temperature increase due to CO2.

It is the change in vertical circulation that heats and cools real greenhouses and is the real greenhouse effect. Radiation is not the mechanism that heats and cools real greenhouses, so it is unscientific to pretend that the mechanisms are somehow related or equivalent. Maintaining that two dissimilar processes have a similar effect because they share a name is not science.

KR says: May 29, 2012 at 7:39 am

Given the thermal inertia of the oceans, estimates derived directly from seasonal changes (short term) are always going to be underestimates of the transient climate sensitivity (~20 years).

I agree, but you need longer term ocean effects to be a factor 10 greater than seasonal changes to reach the IPCC prediction. Care to suggest a physical mechanism?

eyesonu says:

May 29, 2012 at 7:49 am

“I commend your approach. An open and transparent peer review will begin. Knowledge will be gained and shared. Tribal agendas will be exposed. Maybe proper scientific methods will be learned from this example.”

Nah! That’s WAAAAAAAY too simple! Bawhahahaha!

Dear Willis

Maybe somewhat OT, and sorry for that. But I don’t know who else to contact with my point. The temperatures initially stirring the pot were (dry bulb) surface temperatures. My question: why temperature? Why not enthalpy? To illustrate my point, sea-level air at 25°C and 10% rh has an enthalpy of 30.4 kJ/kg. The same air at 25°C and 90% rh has an enthalpy of 71.8 kJ/kg. An enormous difference for the same dry-bulb temperature. Am I missing something, or is everyone else missing something?
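For readers who want to check those enthalpy figures, here is a rough psychrometric sketch. The formula choices are mine, not the commenter's (a Magnus-type fit for saturation vapour pressure and the standard moist-air enthalpy approximation); it reproduces the quoted values to within about 1 kJ/kg:

```python
import math

def moist_air_enthalpy(t_c, rh, p_kpa=101.325):
    """Approximate enthalpy of moist air, kJ per kg of dry air.
    t_c: dry-bulb temperature in deg C; rh: relative humidity (0-1)."""
    p_sat = 0.6108 * math.exp(17.27 * t_c / (t_c + 237.3))  # kPa, Magnus fit
    p_v = rh * p_sat                                        # vapour pressure
    w = 0.622 * p_v / (p_kpa - p_v)                         # humidity ratio
    return 1.006 * t_c + w * (2501.0 + 1.86 * t_c)

h_dry = moist_air_enthalpy(25.0, 0.10)   # roughly 30 kJ/kg
h_wet = moist_air_enthalpy(25.0, 0.90)   # roughly 71 kJ/kg
```

The ~40 kJ/kg gap between the two cases at the same dry-bulb temperature is exactly the commenter's point.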

As an amateur astronomer I am aware that due to the eccentricity of its orbit the Earth is closer to the Sun in Southern Hemisphere Summer and conversely farther away from the Sun in Northern Hemisphere Summer. I do not know the relative difference in distances, but has this been accounted for in the estimates of solar radiation for the two hemispheres?

Willis says:

While it is certainly possible that there are much longer-term periods for the warming, they are not evident in either of my analyses on this data. If such longer-term time lags exist, it appears that they are not significant enough to lengthen the lags shown in my analysis above. The details of the long-term analysis (as opposed to using the average as above) are shown in the spreadsheet.

tallbloke says: May 29, 2012 at 4:04 am

IIRC the SST in the northern hemisphere reaches a maximum some time after the longest day of the year. This is indicative of a lag in the system caused by the time it takes for heat to re-emerge from the ocean on a seasonal basis.

I think there are some much longer lags, but these are to do with runs of highly active or less active solar cycles at the multi-decadal scale rather than CO2.

____________________

Some Thoughts Regarding the Evidence of Longer Cycles and Lags:

We know there is a ~9 month lag of atmospheric CO2 concentration behind temperature on a ~4 year cycle of natural global temperature variation.

http://icecap.us/index.php/go/joes-blog/carbon_dioxide_in_not_the_primary_cause_of_global_warming_the_future_can_no/

We also know that CO2 lags temperature by ~800 years on a much longer time cycle (ice core data).

As you suggest tallbloke, there is probably at least one intermediate lag, and quite possibly several, between these two – perhaps associated with the Wolf-Gleissberg Cycle, Hale Polarity Cycle, etc., and/or with the PDO, etc.

The lag of CO2 after temperature observed in these longer cycles is probably mostly physical in origin, related to ocean solution and exsolution of CO2, but also includes a long term biological component.

Willis’s analysis deals with the seasonal (annual) cycle, in which the biological component of the CO2 lag is comparatively much greater.

I have the opinion that we are looking at several natural cycles of varying duration in which there are external natural drivers (Sun, Earth orbits, stars), then some randomization associated with large ocean phenomena (PDO, etc.); these drive Earth’s natural temperature cycles at all time scales, and result in a series of related CO2 lags after temperature.

Finally:

Atmospheric CO2 variation is primarily a result, not a driver, of temperature, and human fossil fuel combustion is probably NOT causing the recent increases in atmospheric CO2 – they are more likely the result of the cumulative impact of all these aforementioned natural cycles – for example, the Medieval Warm Period was ~800 years ago.

Someone may not have seen the global warming vs number of pirates chart.

Willis,

You wrote “This gives us an overall average global equilibrium climate sensitivity of 0.3°C for a doubling of CO2.”

Congratulations, Willis! This is the correct number! (vigorous and lengthy applause)

Well I think you are barking up the wrong tree Willis.

Plotting the “average” of some data set of global temperature against the varying (very predictable) TSI, which presumably results in an annual variation in the solar energy stored in the oceans and rocks (as well as the atmosphere), will of course yield an open cyclic plot such as those you have, because of the very well recognized thermal delays. So ho hum; maybe we haven’t seen your graph before, but the data and its relationship are well known. So no problem there; a picturesque way of showing thermal delay time effects.

But none of that has anything to do with carbon dioxide, the effect of which seems to be to warm the atmosphere while simultaneously cooling the ocean and rocks (by blocking some solar spectrum energy from EVER reaching those energy storage sites).

There’s no relationship at all between the processes by which the sun heats the planet and the quite different mechanisms by which CO2 and other GHGs like H2O and O3 warm the ATMOSPHERE, or the minute degree to which the atmosphere via LWIR radiation can affect the temperature of the much greater thermal mass of the non-gaseous part of the planet.

And there still is no evidence, observational or theoretical, that something called “climate sensitivity”, evidently invented by the late Stephen Schneider, even exists. No such logarithmic relationship can be shown, and trying to perpetuate that myth through ordinary cyclic variation of instantaneous TSI, is a wasted effort. Sorry, that’s about as polite as I can put it Willis.

“To simulate a doubling of CO2, I gave it a one-time jump of 3.7 W/m2 of forcing.”

Is this a reliable estimate? And am I right that we could triple that and still be in lukewarmer territory?

Models sure must be doing some funky stuff with clouds and water.

The response time for the annual cycle definitely needs to be short. This paper found the same thing:

http://www.pas.rochester.edu/~douglass/papers/DBK%20Physics%20Letters%20A%20.pdf

However, I don’t think that this captures the way the system would respond to different sorts of perturbations: for one thing, the annual cycle in solar insolation varies strongly with latitude and flips sign in opposite hemispheres. Compare that to a relatively uniform forcing effect from say CO2. Of course the reaction of the system will be very different: one will act much more to alter atmospheric circulation than the other, for one thing.

That being said, if you look at responses on the inter-annual (as opposed to intra-annual) timescale to things like volcanoes, the response times that work best are still shorter than would be required to get a high sensitivity.

not even close to right. the instantaneous response is different from the transient response and the equillbrium response.

Why an exponential function?

Climate Weenie says:

May 29, 2012 at 6:32 am

I don’t believe this is a valid assessment of sensitivity.

By comparing only shortwave, you are missing the amount of thermal energy that’s either going into or coming out of the oceans, and the amount of shortwave that’s either going into the oceans or heating the atmosphere.

On a global average, after all, the earth is warmest at aphelion and coolest at perihelion.

So it turns out that the oceanic buffers are almost completely out of phase with the solar forcing – over the seasonal variation.

I believe you are wrong. Take a look at global sea surface temperatures at:

http://discover.itsc.uah.edu/amsutemps/execute.csh?amsutemps

They peak at approximately 2.5 months after perihelion (~0100UTC 5 Jan 2012) and are lowest about 5 months after aphelion.

From the abstract:

At pixel level, the OSR differences between model and ERBE are mostly within ±10 Wm−2, with ±5 Wm−2 over extended regions, while there exist some geographic areas with differences of up to 40 Wm−2, associated with uncertainties in cloud properties and surface albedo.

Are you using the authors’ model values or the data that they cite? Their model seems to have larger error than the effect you are trying to estimate, and you usually deprecate the use of model output.

Why do you assume that lambda is constant over the recording interval? Is the recording interval long enough for what you want to estimate?

I think that a plot of predicted vs actual temperatures, in chronological order, NH and SH separately would add to the exposition — sorry, I always seem to recommend more work.

You wrote: “∆F is change in forcing,” but your model has F, not ∆F. I think that the model has it right, with change in T being proportional to F over the time interval, not change in F. That said, I think that tau in the first term on the right is redundant; estimation of tau and lambda is confounded.

So, you have a first order autoregressive model for unexplained variation in delta T with an exogenous linear function of F. From seat of the pants, I expect that the error in the estimate of lambda is about +/- 10.

I think your approach is defensible, but that you need many more data points than what are available.

Steven Mosher says:

May 29, 2012 at 9:46 am

not even close to right. the instantaneous response is different from the transient response and the equillbrium [sic] response.

For the benefit of the people who are wondering what you are talking about; can you define the terms:

instantaneous response

transient response

equilibrium response

and the system to which you are applying these terms.

This post is a joke right?

Climate is a complex system. Reducing the modeled time dependence to a single time constant based on an oscillating forcing is nonsense. A paper by Schwartz based on historical data estimated a single time constant of 5 years, and had to take it back. If there is a single time constant that could describe what is happening it is more like 80 years.

thingadonta says:

May 29, 2012 at 2:49 am

No, I actually mention that there may be longer cycles. However, it seems to me that the climate sensitivity on those timescales must be smaller, or it would show up in the above analysis.

w.

That’s an ingenious approach Willis. However, I fear it was doomed to failure from the outset because the basic concept of ‘climate sensitivity’ to a greenhouse gas (CO2 in this case), defined by the IPCC as the temperature-increase at the earth’s surface due to a doubling of its atmospheric concentration, is essentially meaningless. There are two reasons for this.

1. The amount of radiative forcing produced by a doubling of the CO2 concentration is not a regular fixed amount. The IPCC’s logarithmic formula from which the fixed amount results is false. The correct formula can be derived from the Beer-Lambert law of optics and it follows an inverse exponential law, not a logarithmic one. Consequently, repeatedly doubling the amount of CO2 in the atmosphere produces progressively smaller increments of radiative forcing at each repetition, not equal ones as the IPCC’s formula pretends.

2. The relationship between the amount of radiation absorbed at the earth’s surface and the global mean temperature is not a linear one but is the end-product of numerous factors including the Stefan-Boltzmann law, the specific heats and latent heats of surface substances and so on. This relationship implies that constant increments of incident radiation will produce progressively smaller temperature increases at the surface as it warms.

These two factors combine to produce rapidly diminishing increments of surface temperature with each proportional increase of CO2-concentration. Consequently, your estimate of 0.3°C for a doubling of CO2 could only apply to the unique situation where the initial surface temperature and the initial CO2 concentration have particular unique values. It cannot be a general rule.

Sniff test:

Stefan-Boltzmann warming @ 0.97 emissivity from … (say) … 396 to 399.7 W/m2 = 0.18 K/W/m2, or 0.68 K per doubling of CO2e (3.7 W/m2), if the surface weren’t in thermal contact with anything else (which of course it is) and the energy didn’t perform any work (which of course it does); such that 0.68 K per 2xCO2e is more an estimate of an actually impossible maximum.

Since 0.3 is less than 0.68 and of the same order of magnitude, I’m calling it a PASS.

Therefore, I like the 0.3°C per 2xCO2e estimate. I also like the novel approach and graphical representation of the data, although a couple of month markers (at the intercepts) would have made it easier to determine whether clockwise or counterclockwise was the appropriate way to read it. Assuming I’ve had enough coffee and clockwise is indeed the proper way to follow the graph, note that the “sensitivity” changes toward the “peaks”; especially at the top, where temperature more or less maxes out: although the heat flux is still increasing, the temperature isn’t so much anymore. Could this be thermostats at work? This also aligns with the Stefan-Boltzmann Law where as the temperature increases it takes less and less heat flux for the same increment of temperature increase.

Nice work. Plenty here for me to ponder over.
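The “sniff test” above is easy to reproduce by inverting the Stefan-Boltzmann law at the commenter's assumed emissivity of 0.97 (the 396 and 3.7 W/m2 figures are his, not mine); a minimal sketch:

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
EPS = 0.97         # assumed surface emissivity

def grey_body_temp(flux):
    """Temperature (K) of a grey body emitting the given flux (W/m2)."""
    return (flux / (EPS * SIGMA)) ** 0.25

dT = grey_body_temp(396.0 + 3.7) - grey_body_temp(396.0)  # ~0.68 K
sensitivity = dT / 3.7                                    # ~0.18 K per W/m2
```

This confirms the 0.18 K/W/m2 and 0.68 K numbers as the no-feedback, no-work upper bound described above.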

Sherlock says:

May 29, 2012 at 8:35 am

Yes, it is accounted for. The NASA solar data that I used includes all of that.

w.

Steven Mosher says:

May 29, 2012 at 9:46 am

Since I mentioned that difference in responses, I’m not sure what you mean. And as usual, your twitter style of posting is useless.

Please, Steven, either make your point or stop with your teasing “I’m so much smarter than you” type of posts.

I suspect you have something to say, and knowing you, I suspect it might be true and valuable and accurate. However, I cannot glean it from your mini-post, and so I invite you to either tell us what it is, or go away. I’m not willing to play your game.

w.

My issue [or one of them] with “climate sensitivity” is this: What questions are better answered or described by calculating ‘sensitivity’, especially if it is not constant? To what use will it be put?

It seems it is an attempt to reduce an embarrassment of complexity to a single quotable number which is the first derivative of something called “forcing”. This could then be used/abused by the disingenuous. So, if global CO2 levels are falling during the Northern Hemisphere summer, then could someone legitimately claim that “climate sensitivity” is actually increasing?

It reminds me of the story that President Nixon became the first President to use the third derivative of prices [with respect to time] when he said:

“The rate of increase in inflation is falling”. True, maybe. But not very helpful.

Eric Adler says:

May 29, 2012 at 10:38 am

Your comment is a joke, right?

I say that because claiming something is “nonsense”, with no attempt to even begin to tell us why it is “nonsense”, is a joke in the world of science. I may well be wrong, I have been many times, but you have not said one word about where or how I might have gone off the rails.

w.

PS—As far as I can find out, Schwartz didn’t “take it back”. He modified his estimate to make it ≈ 8 years.

I would also note that the climate models I’ve analyzed use a time constant “tau” which is on the order of two to three years. The question is obviously still open. Note that I make no claims above that this is the only time constant in the system.

Philip Bradley:

Repeating an error does not stop it being an error however many times it is repeated.

At May 29, 2012 at 7:37 am you say;

NO! You are absolutely wrong. The absence of correlation disproves causation, but the presence of correlation indicates nothing about causation except the possibility of a causal link. This is a matter of logic and statistics have nothing to do with it.

Try this. Say out loud to yourself 100 times

“Correlation is not evidence of causality”.

Richard

OOps! of course I intended to write

The absence of correlation disproves causation, but the presence of correlation indicates nothing about causation except the possibility of a causal link.

Sorry.

Richard

[Fixed. -w.]

slp says:

May 29, 2012 at 4:31 am

I will disagree with that. Assume that we move from winter forcing to summer forcing in one step. After that, ΔF = 0. However, the new forcing will still cause things to warm up. Instead of “a change in forcing”, perhaps what you mean is “the difference between incoming energy and energy emitted by the surface”. A value defined like that might make sense if it was the difference of the fourth roots. At any rate, according to Stefan’s equation, there should be a fourth power in there somewhere.

“This is because more of the NH is land and more of the SH is ocean … and the ocean has a much larger specific heat.”

i.e. Btu/lb·°F or kcal/kg·°C. Which simply points out the reason why any warming from any variable will always show up in the NH: it has a low specific heat, meaning any sensible heat change will be larger than in the SH. More importantly, it demonstrates the fallacy of measuring heat in units of temperature (partial heat) instead of units of total heat in Btu/lb or kcal/kg.

Comparing the SH and NH is also an apples-and-oranges fallacy, given that an ocean-dominated hemisphere retains more heat latently than sensibly. This is why Hansen isn’t a scientist at all; he is an incompetent boob for making such an elemental mistake, making assertions based on partial heat measurements, unlike reputable scientists who recognize that heat/energy has two components.

jorgekafkazar says: “Why an exponential function?”

Because the model for temperature response to forcing being used is basically a first order linear time invariant differential equation. The generic solution for an unperturbed system which is not at the equilibrium state (which we define as zero) is the initial value times e^-t/tau. That is, the system tends to approach its equilibrium state when undisturbed, in an exponentially decaying manner. It asymptotically approaches equilibrium if it starts away from equilibrium, but can never reach it (i.e. it takes infinite time), though it can get arbitrarily close. The system also asymptotically approaches a “new equilibrium” state when forced by a step function, either above or below the original equilibrium state. This kind of system is encountered fairly frequently by engineers, especially electrical engineers, who will probably recognize it from modeling circuits, for example. It also occurs in modeling of convective heat transfer.
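A minimal numeric sketch of that first-order system, tau * dT/dt = lambda*F - T, driven by a step in forcing. The parameter values here are purely illustrative (lambda = 0.1 °C per W/m2, tau = 2, a 3.7 W/m2 step), not the post's fitted values:

```python
def step_response(lam, tau, forcing, dt=0.01, t_end=20.0):
    """Euler integration of tau * dT/dt = lam*F - T, starting from T(0) = 0."""
    temps = [0.0]
    for _ in range(int(t_end / dt)):
        temps.append(temps[-1] + dt * (lam * forcing - temps[-1]) / tau)
    return temps

temps = step_response(lam=0.1, tau=2.0, forcing=3.7)
# After one time constant (t = tau) the response is ~63% of lam*F;
# by t_end it has asymptotically approached the new equilibrium, lam*F = 0.37.
```

The exponential approach to the “new equilibrium” described in the comment is visible directly in the integrated values.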

W.;

Even on Twitter, Mosh could have spared you ~28 more characters!

Some interjections seem intended only to spread CUD (Confusion, Uncertainty, and Doubt). Mosh certainly likes chewing his!

As for transients, etc., am I wrong in thinking that this has much to do with the finding that OLR varies positively and monotonically with surface temperature, rather than negatively as AGW requires? My take-away from that is that there is no “blanketed heat” available to power the GE. It leaves on the first available radiative train.

“”””” Re John West… “Could this be thermostats at work? This also aligns with the Stefan-Boltzmann Law where as the temperature increases it takes less and less heat flux for the same increment of temperature increase.” “””””

Well that statement is exactly wrong; it takes ever higher amounts of radiant energy to increase the Temperature by some designated increment; NOT less.

The amount of energy it takes to raise a system Temperature from 1 K to 2 K is microscopic, compared to that required to take a system from 1,000,000 K up to 1,000,001 K

The standard definitions:

Instantaneous sensitivity – Immediate changes upon forcing changes. Given the time constants and thermal inertia for the climate as a whole, seasonal responses are very close to this. Short term up/down changes in temperature don’t penetrate far into the oceans – the thermal content of the top 2.5 meters of sea water is equal to that of the entire atmosphere.

Transient sensitivity – As defined, and as used in the literature, the response over about 20 years. Currently the median estimate of this is 3-3.5°C/doubling of CO2. To a first-pass estimate this can be considered the time for the well-mixed layer of the ocean (75-100 meters) to adjust to a sustained +/- change in forcings.

Equilibrium sensitivity – Hundreds of years out, when changes in forcing have fully percolated/circulated into the deep oceans.
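The claim in the definitions above that the top ~2.5 meters of sea water holds as much heat as the entire atmosphere is easy to sanity-check; a back-of-envelope sketch using my own round numbers for the physical constants:

```python
# Column heat capacity of the atmosphere: mass per m2 of surface
# (surface pressure / g) times the specific heat of air.
ATM_MASS_PER_M2 = 101325.0 / 9.81        # kg per m2
CP_AIR = 1004.0                          # J/(kg K)
atm_heat_capacity = ATM_MASS_PER_M2 * CP_AIR   # ~1.0e7 J/(m2 K)

# Depth of sea water with the same heat capacity per unit area.
RHO_SEAWATER = 1025.0                    # kg/m3
CP_SEAWATER = 3990.0                     # J/(kg K)
equivalent_depth = atm_heat_capacity / (RHO_SEAWATER * CP_SEAWATER)  # metres
```

The result comes out near 2.5 metres, consistent with the figure quoted.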

—-

Willis Eschenbach – the calculations you did in this post are most relevant to the “instantaneous sensitivity”, not the “transient sensitivity”. Apples and oranges…

Again, I would refer you to Knutti and Meehl 2006 (http://journals.ametsoc.org/doi/pdf/10.1175/JCLI3865.1), “Constraining Climate Sensitivity from the Seasonal Cycle in Surface Temperature”, where they look at various models and compare the seasonal responses to observational data in multiple (25?) regions to constrain transient sensitivity. If the climate had the transient sensitivity you compute, it simply wouldn’t have the instantaneous sensitivity to produce the observed seasonal responses.

There is no backradiation or backscattering or whatever. Gases (whatever gas) simply don’t emit any IR radiation unless they are very hot or burning. Atmospheric gases are at so low a temperature that they simply can’t emit any energy (IR radiation). There is no energy source in the atmosphere.

richardscourtney says: “The absence of correlation disproves causation”

If by “disproves causation” you mean “shows there is no dependence of either variable on the other”, this is not necessarily the case. For example, if they are both marginally normally distributed and jointly normally distributed, then the two random variables are independent if their covariance is zero (i.e. they are uncorrelated). BUT, if they are not jointly normal, they can be uncorrelated while not being independent: http://en.wikipedia.org/wiki/Normally_distributed_and_uncorrelated_does_not_imply_independent

Oh, how I’d love to see the radiation “equilibrium” cartoons redone using enthalpy!

It wad frae monie a blunder free us,

An’ foolish notion

Isn’t the average warming per the land- and satellite-based measurements already since 1979 roughly +0.3C?

With only an increase of atmospheric CO2 of less than 20%?

Re: Willis Eschenbach

May 29, 2012 at 11:16 am

Stephen Schwartz revised his original estimate of effective time constant from ~5 years to 8.5+/-2.5 years. His corresponding estimate of equilibrium sensitivity (final temperature response to doubling CO2) based on that time constant was revised to 1.9+/-1 C. 0.3C is way lower than every estimate I have read, and much lower than even a simple blackbody response (about 1C per doubling). For what it is worth, Schwartz agreed with critics that his first estimate was biased low due to inclusion of too much very short term response in the autocorrelation calculation, but he continued to defend the revised value ~1.9C per doubling) as reasonably accurate, in spite of howls from many in the climate science establishment (of which aerosol specialists Stephen Schwartz is certainly a member!). (http://www.ecd.bnl.gov/pubs/BNL-80226-2008-JA.pdf)

1.8C to 1.9C per doubling corresponds to the expected warming if the atmosphere’s relative humidity remains constant as the temperature rises (that is, if total atmospheric moisture content rises on average with warmer temperatures to maintain constant relative humidity). Anything higher than 1.9C per doubling requires substantial positive “cloud feedback”. Anything lower than 1.8C requires substantial negative “cloud feedback”. Cloud feedback is uncertain; the available data are just not very good.

“”””” Billy Liar says:

May 29, 2012 at 10:16 am

Steven Mosher says:

May 29, 2012 at 9:46 am

“not even close to right. the instantaneous response is different from the transient response and the equillbrium [sic] response.”……

For the benefit of the people who are wondering what you are talking about; can you define the terms:

instantaneous response

transient response

equilibrium response

and the system to which you are applying these terms. “””””

Well Billy, I am not sure exactly how Mosh defines those terms, particularly “instantaneous response”; NO physical system responds “instantaneously”. But let me give it a shot at what I “think” he means, and what those terms mean.

First of all we can dispense with “equilibrium response”; that’s an oxymoron. A system in equilibrium is a stationary system and it isn’t “responding” to anything, or else it wouldn’t be in equilibrium. Since the earth is never even remotely in a state of equilibrium, I suspect that Steven really meant a “Steady State Response”. This would often be referred to as the “Frequency Response”, the result of applying a ‘small’ sinusoidal cyclic disturbance to the system; small so the system remains ‘linear’. In some sense, Willis’ Lissajous figures are a pretty good example of a steady state response, although in his chosen “system”, the whole planet, the varying signal is not exactly small and the system not too linear; but the cyclic return to a familiar state is characteristic of a steady state response, as is the phase shift between the drive signal and the system response.

For “Instantaneous response”, which I have explained doesn’t exist, a likely substitute would be the “Impulse Response”. Mathematically, an “Impulse” is a short (zero length) application of a high (infinite) “force”, such that the product of the high force and the small time, which is THE measure of impulse, has a finite value. A perfect example of an impulse response is yesterday’s photo Anthony posted of the BB Iowa 16 inch broadside. The gunpowder applies an astronomical force to the shell for a very short time, during which the round transits the barrel, and in turn the recoil applies an equal force for the same time, in the other direction, to the ship. Long after the shell has reached its target, the ship will still be “responding” to the applied impulse, and will eventually settle down to a new position somewhere to port of where it started. You could do the same thing by swimming up to the Iowa abeam and pushing on the side of the ship as you swim. Sometime next year, if you survive, the ship might move to the same place; well, maybe next century.

Impulse response of physical systems is a well understood discipline. Note it is a TIME RESPONSE unlike the steady state FREQUENCY RESPONSE. Frequency is a figment of our imagination; but “time” actually happens in real time.

So what of “Transient Response.” Well impulse response, most certainly IS a transient response; a response to a transient signal, namely the impulse.

But usually people who mean impulse response, use THAT term specifically, so when they say “Transient Response”, they most likely mean the response to a STEP FUNCTION.

Whereas an impulse signal immediately goes away, a step function changes value and stays there till the end of the universe.

Mathematically, step function and impulse responses are related, at least for linear systems in their linear operating regimes. So they are also related to the steady state response; but you have to know the steady state amplitude relationship AND ALSO the steady state phase relationship.

Now I don’t KNOW, that MOSH really meant those things, as I described them; but I wouldn’t be taking any bets against my belief that he does.

Magic Turtle says: May 29, 2012 at 10:51 am

1. The amount of radiative forcing produced by a doubling of the CO2 concentration is not a regular fixed amount. The IPCC’s logarithmic formula from which the fixed amount results is false. The correct formula can be derived from the Beer-Lambert law of optics and it follows an inverse exponential law, not a logarithmic one. Consequently, repeatedly doubling the amount of CO2 in the atmosphere produces progressively smaller increments of radiative forcing at each repetition, not equal ones as the IPCC’s formula pretends.

Not true, the logarithmic dependence does not come from the Beer-Lambert Law which applies in optically thin situations, but from the optically thick situation which applies to CO2 absorption in the IR in our atmosphere. Look up ‘Curve of Growth’ to see a derivation.

Basically, weak lines have a linear dependence on concentration, moderately strong lines have a logarithmic dependence and very strong lines a square root dependence.

George E. Smith; says:

“ it takes ever higher amounts of radiant energy to increase the Temperature by some designated increment; NOT less.”

True that. But also true is that it takes less incremental temperature increase to increase heat flux emission by some incremental amount as the temperature increases in a black/grey body in the range of Earthly temperatures.

For example: a black body has to increase 1.9 degrees to increase emission by 10 W/m2 if going from 370 to 380 W/m2, but only a 1.8 degree increase is required to emit an additional 10 W/m2 from 400 to 410 W/m2, and only a 1.7 degree increase for a 10 W/m2 increase from 440 to 450 W/m2.

BB emission increase from 370→380 W/m2 = 1.9 K increase from 284.2K to 286.1K

BB emission increase from 400→410 W/m2 = 1.8 K increase from 289.8K to 291.6K

BB emission increase from 440→450 W/m2 = 1.7 K increase from 296.8K to 298.5K

The temperature increase gets smaller for same heat flux emission increase as the temperature of the body increases.

So, as the temperature of the surface warms it can radiate more heat with less incremental increase in temperature.

In other words the “sensitivity” (temperature increase per unit heat flux) of a black/grey body goes down as the temperature goes up in the Earthly temperature range.
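For what it’s worth, the three worked examples above follow directly from the Stefan–Boltzmann law; here is a minimal check (pure black body, emissivity 1 assumed, which is all the comment assumes too):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/m2/K4

def bb_temperature(flux):
    """Black-body temperature (K) required to emit `flux` W/m2."""
    return (flux / SIGMA) ** 0.25

def warming_for_extra_10(flux):
    """Temperature rise needed to go from `flux` to `flux + 10` W/m2."""
    return bb_temperature(flux + 10) - bb_temperature(flux)

for f in (370, 400, 440):
    print(f, "->", f + 10, ":", round(warming_for_extra_10(f), 1), "K")
# 370 -> 380 : 1.9 K, 400 -> 410 : 1.8 K, 440 -> 450 : 1.7 K
```

The shrinking increments are just the derivative dT/dE = 1/(4σT³) falling as T rises.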

excel graph

KR,

Thanks for those definitions. What is the ‘standard’ system model to which these are applied?

George E Smith,

Many thanks for your explanation. I guess like me you have some control engineering background. I often wonder whether anyone in climate science knows anything about control systems.

The conventional definition of transient response is the surface temperature at the time of doubling of CO2 (assumed ~3.7 watts/M^2 forcing) if CO2 concentration increases by 1% per year. Doubling then takes place after 70 years; the temperature increase at that time is “transient sensitivity”.
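The “70 years” figure is just compound-growth arithmetic; for instance:

```python
import math

# CO2 rising 1% per year compounds like interest; the doubling time is
# ln(2) / ln(1.01), which is where the "70 years" figure comes from.
years_to_double = math.log(2) / math.log(1.01)
print(round(years_to_double, 1))  # 69.7
```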

The conventional definition of equilibrium sensitivity is the temperature a nearly infinite time after CO2 doubles. What is a reasonable approximation of “infinite time” depends on the heat capacity of the deep ocean and how quickly the deep ocean can warm, but the response at 1,000–2,000 years is often considered a good approximation. Almost all of the response is (of course) MUCH quicker than this. The equilibrium sensitivity value is (I think) becoming recognized as not a very useful number, since CO2 is never going to stay doubled for 1,000 years (the oceans will absorb most of it over a couple of centuries or less). The “transient sensitivity” is probably a more meaningful number in terms of estimating how much future warming might actually take place in response to forcing.

There is a possible problem with this analysis in that some of the energy may go into the phase change of water (aka melting snow and ice). I seem to recall that the surface heat penetrates around 4m into the rock each year. If snow or ice were a similar thickness, then large amounts of the yearly energy could go into melting and freezing ice, giving the appearance that the solar radiation was having little effect, whereas the phase change of water involves large amounts of energy for almost no temperature change.

Billy Liar:

Global mean temperatures (not sea surface) peak in the same month as aphelion and trough in the same month as perihelion. Run the analysis again using the lowest atmospheric layer.

Billy Liar – “What is the ‘standard’ system model to which these are applied?”

Those are the climate sensitivity terms in use in the literature – the system is the entire physical climate, both in terms of the observations and in the models.

stevefitzpatrick says: “The conventional definition of equilibrium sensitivity is the temperature a nearly infinite time after CO2 doubles.”

There is no such thing as “nearly infinite”, so I assume you mean the time at which the remaining response is sufficiently negligible that it would not change the last digit of the reported value if added.

Since the models should be giving an exponentially decaying rate of change in response to an instantaneous doubling (ie dT/dt = (C/tau)*e^(-t/tau), T = C – C*e^(-t/tau)), they wouldn’t ever actually reach a true equilibrium state, but they can get arbitrarily close to it (meaning they are “at” it to within the accuracy desired). Of course, if one knows the exact equations rather than having results pop out of a model, one can calculate the value of the asymptote exactly.

Allan MacRae said @ May 29, 2012 at 3:01 am

Newton’s Law of Universal Gravitation states that every point mass in the universe attracts every other point mass with a force that is directly proportional to the product of their masses and inversely proportional to the square of the distance between them. Newton had no Theory of Gravitation and categorically rejected the idea. He published this in 1686.

The first generally accepted Theory of Gravitation was Einstein’s General Relativity, where gravity is a consequence of the curvature of spacetime governing the motion of inertial objects. This was published in 1916.

How on Earth (or off it if you prefer) did a Theory published in 1916 evolve into a Law published in 1686?

Geoff Alder said @ May 29, 2012 at 8:15 am

You are missing nothing. The obvious follow-on question to ask is: “Why don’t “climatologists” use an appropriate measure?”

Willis,

You only calculated the high-frequency response of the climate system, which acts as a low-pass filter, i.e. slow perturbations have a bigger impact than fast perturbations.

see http://members.casema.nl/errenwijlens/co2/Climate_sensitivity_period.gif

Billy Liar said @ May 29, 2012 at 1:33 pm

It would appear that those in climate “science” have a sound grasp of control systems; they have been controlling what we read about climate in the MSM and climatology journals for a quarter of a century now. Unfortunately…

This result is very different from observational based estimates in the scientific literature.

For the first couple of hits on Google Scholar:

An Observationally Based Estimate of the Climate Sensitivity: http://journals.ametsoc.org/doi/pdf/10.1175/1520-0442%282002%29015%3C3117%3AAOBEOT%3E2.0.CO%3B2

“From the probability distribution of T2 we obtain a 90% confidence interval, whose lower bound (the 5-percentile) is 1.6 K. The median is 6.1 K, above the canonical range of 1.5–4.5 K; the mode is 2.1 K.”

Using multiple observationally-based constraints to estimate climate sensitivity: http://www.image.ucar.edu/idag/Papers/Annan_Constraints.pdf

“The resulting distribution can be represented by (1.7, 2.9, 4.9) in the format used throughout this paper. That is to say, it has a maximum likelihood value of 2.9°C, and […] a […] range of 1.7–4.9°C (95%).”

If my cursory reading of your article gleaned your methods correctly, then I suspect that by analysing annual cycles, you’re not picking up on all the elements of lag in the system. Longer ones, such as the propagation of heat into the deep ocean, take 50 years or so, and therefore do not appear in annual data. I suspect that your time constant is so low because annual analysis doesn’t find most of the lag, and that you’re therefore missing most of the warming due to the increase in CO2.

Andrew says: “Since the models should be giving an exponentially decaying rate of change in response to an instantaneous doubling (ie dT/dt = (C/tau)*e^(-t/tau), T = C – C*e^(-t/tau)) they wouldn’t ever actually reach a true equilibrium state, but they can get arbitrarily close to it (meaning they are “at” it to within the accuracy desired).”

But delta T is in fact (Tmax+Tmin)/2 the Tmax is a function of influx in the order of 880 W/m2 and Tmin is where influx is about 150 w/m2. The system oscillates daily and yearly.

richardscourtney says: May 29, 2012 at 11:34 am

Philip Bradley:

NO! You are absolutely wrong.

Go read the wikipedia page. And note I said a casual relationship, rather than A causes B. The latter being the logical fallacy.

http://en.wikipedia.org/wiki/Correlation_does_not_imply_causation

To my saying (May 29, 2012 at 10:51 am):

“1. The amount of radiative forcing produced by a doubling of the CO2 concentration is not a regular fixed amount. The IPCC’s logarithmic formula from which the fixed amount results is false. The correct formula can be derived from the Beer-Lambert law of optics and it follows an inverse exponential law, not a logarithmic one.”

Phil says (May 29, 2012 at 12:56 pm): ‘Not true, the logarithmic dependence does not come from the Beer-Lambert Law which applies in optically thin situations, but from the optically thick situation which applies to CO2 absorption in the IR in our atmosphere. Look up ‘Curve of Growth’ to see a derivation. Basically, weak lines have a linear dependence on concentration, moderately strong lines have a logarithmic dependence and very strong lines a square root dependence.’

What I said is true. I never said that the logarithmic relationship is derived from the Beer-Lambert law, which I agree does apply to optically-thin lines. But it also applies to groups of optically-thin lines and for that reason it can be applied to entire absorption/emission wavebands. The thicknesses of the individual lines become irrelevant in that case. Such an application provides a straightforward method of deriving a general formula for the absorption of terrestrial surface radiance by atmospheric CO2. This has the form:

A = S.k.[1 – exp(-g.C)]

where

A is the mean radiance absorbed by the atmospheric CO2 (in W/sq.m); S is the mean surface radiance (in W/sq.m); k is the fraction of S that is emitted on the GHG’s absorption wavelengths; g is a constant that is specific to CO2 (ie. different for different GHGs); C is the concentration of CO2 (in ppmv, although this is merely a convenient approximate metric to use in place of the specific mass of CO2 in units of kg/sq.m).

If we say that a fraction (p) of this absorbed power is returned to the planet’s surface by whatever means, then the amount of surface radiance being thus recycled (R) is simply the product (p.A), and this leads us to the basic equation:

R = p.S.k.[1 – exp(-g.C)]

Suppose now that the CO2 concentration is increased to a new level C’, and that this causes the amount of power being recycled back to the surface to increase to R’. Then the formula just derived tells us that

R’ = p.S.k.[1 – exp(-g.C’)]

The amount of ‘radiative forcing’ (F, say) produced at the surface by the increase in CO2 concentration from C to C’ is simply R’ – R, and hence we obtain:

F = p.S.k.[1 – exp(-g.C’)] – p.S.k.[1 – exp(-g.C)] = p.S.k.[exp(-g.C) – exp(-g.C’)]

Compare this with the IPCC’s logarithmic formula for radiative forcing from CO2:

F = 5.35.Ln(C’/C)

As you can see, they are completely different formulae following different mathematical laws that imply different physical laws as their bases.

I looked up ‘Curve of Growth’ as per your instruction and could not find any derivation of the IPCC’s logarithmic formula anywhere. Perhaps you could provide one, or at least a link to one?
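The disagreement between the two formulae is easy to see numerically. A sketch, noting that the constants p, S, k and g in Magic Turtle’s formula are not pinned down in the comment, so the values psk and g below are purely illustrative, not fitted:

```python
import math

def forcing_ipcc(C0, C1):
    """IPCC logarithmic form: the same ~3.7 W/m2 for every doubling."""
    return 5.35 * math.log(C1 / C0)

def forcing_exp(C0, C1, psk=30.0, g=0.002):
    """Magic Turtle's inverse-exponential form,
    F = p.S.k.[exp(-g.C) - exp(-g.C')], with psk = p*S*k.
    psk and g are made-up illustrative numbers, not fitted values."""
    return psk * (math.exp(-g * C0) - math.exp(-g * C1))

for C0 in (280, 560, 1120):  # three successive doublings
    print(C0, "->", 2 * C0, "| log:", round(forcing_ipcc(C0, 2 * C0), 2),
          "| exp:", round(forcing_exp(C0, 2 * C0), 2))
```

The logarithmic law yields the same 3.71 W/m2 for each doubling, while the inverse-exponential law yields shrinking increments as the exponential saturates, which is exactly the disagreement being argued here.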

DocMartyn says: “But delta T is in fact (Tmax+Tmin)/2 the Tmax is a function of influx in the order of 880 W/m2 and Tmin is where influx is about 150 w/m2. The system oscillates daily and yearly.”

I was making a comment about the functional form of a climate model’s response to a step change forcing. Not about how the real world reacts to the insolation cycles within the day and the year.

But, by the way, one could use this same kind of model to simulate a day and night cycle. It is just that the solution to the differential equation (ie dT/dt + T/tau = f(t) ) is not the same as the solution for a step change. In this case, T refers to the, say, minute-to-minute or hourly temperature observed within the day, not the “daily mean” (ie (Tmax+Tmin)/2), and f(t), rather than being a Heaviside function, is the time evolution of the daily insolation cycle. How well this model would work depends on how well the differential equation describes the system. But climate models give no indication that their makers expect this form to be inadequate, since this kind of differential equation can easily be fit to climate model output if you know the sensitivity and response time. So the functional form I describe is basically what models do. Whether that is how reality works is another question.

Philip Bradley said @ May 29, 2012 at 3:12 pm

While casual relationships are not necessarily causal, marriages usually are, hence the popular sobriquet: She Who Must be Obeyed. Even though marriage and death rates are well-correlated, we do not believe that marriages cause death, or vice versa. That would be fallacious.

Guest Post by Willis Eschenbach

“Climate sensitivity” is the name for the measure of how much the earth’s surface is supposed to warm for a given change in what is called “forcing”. A change in forcing means a change in the net downwelling radiation at the top of the atmosphere, which includes both shortwave (solar) and longwave (“greenhouse”) radiation.

…Comments and criticisms gladly accepted, this is how science works. I put my ideas out there, and y’all try to find holes in them.

w.

===============================================================

One important hole is in the phrase “net downwelling radiation at the top of the atmosphere, which includes both shortwave (solar) and longwave (“greenhouse”) radiation”. Longwave SOLAR radiation has somehow unfortunately disappeared through the hole. Let us reinstate it.

Of course, we’ll immediately face the problem, that the so called “greenhouse gases” also block a part of the incoming longwave SOLAR radiation, thus contributing to cooling. This is a severe blow to the AGW hypothesis, although the question about the net effect remains. But the AGW people would not like it.

People need to keep in mind that this is real data about how the real climate system operates.

This is a complicated system and I will take real data over theory anytime.

Compiled from high resolution data boiled down to monthly averages over 14 years. What else are we looking for? The infinite/10,000 years of equilibrium climate sensitivity to occur?

For example, the global Albedo values have declined by more in relative terms than the global temperature series increases (global brightening?). But that means the net/total Greenhouse Effect (about 150 W/m2) has declined by 0.13 W/m2 over the period, not risen by 5.35 ln(CO2,1998/CO2,1984).

Dig into it – this is an important dataset.

“we do not believe that marriages cause death, or vice versa. That would be fallacious.”

I’ll suggest the causal relationship involves birth.

Funny how other people’s typos are glaring, but you often can’t see your own.

KR,

Thanks. So where is the model (in control system terms) documented. Any idea?

“”””” Andrew says:

May 29, 2012 at 2:07 pm

stevefitzpatrick says:”The conventional definition of equilibrium sensitivity is the temperature a nearly infinite time after CO2 doubles.” “”””””

In simple exponential decay situations, the residual amount halves every increment of time equal to t x ln(2), which you are supposed to learn in 4H club, is 0.6931 x t, where (t) is the decay time constant. The decay time constant is the time it would take to reach zero if the rate of decay stayed at its original value, at the start of the decay. And you are also supposed to know that the amount decays to 5% in three time constants, and to 1% in 5 time constants. So in 10 time constants the remainder would be 0.01%, close enough for most people.
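These rules of thumb are just e^(-t/tau) evaluated at integer multiples of the time constant; a quick sketch (note e^-10 is nearer 0.005% than 0.01%, though as George says, close enough):

```python
import math

def residual(n):
    """Fraction of an exponential decay remaining after n time constants."""
    return math.exp(-n)

print(round(residual(3) * 100, 2))   # 4.98   (% remaining after 3 time constants)
print(round(residual(5) * 100, 2))   # 0.67   (roughly 1% after 5)
print(round(residual(10) * 100, 4))  # 0.0045 (about 0.005% after 10)
```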

You can get an even lower sensitivity if you calculate it with the diurnal cycle, and that points to a strong frequency dependence of the response. This is what Hansen means by the response function (or the Green’s function if you are mathematical). If you add an impulse, the immediate response is quite slow, but over time it gets to equilibrium. Long-period cycles like the solar 11-year one have sensitivities that would imply several degrees per doubling. Basically, by oscillating the system, it can’t respond as fully as if you just give it a steady forcing in one direction, and the higher the frequency, the less the response. I am sure there is an engineering analogy with harmonic oscillators.
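Jim D’s frequency dependence is just the gain curve of a first-order low-pass filter. A sketch, with an assumed response time tau_years and equilibrium sensitivity lam_eq (both made-up illustrative numbers, not diagnosed values):

```python
import math

def apparent_sensitivity(period_years, tau_years=4.0, lam_eq=0.8):
    """Amplitude response of a one-box climate (a first-order low-pass
    filter) to sinusoidal forcing: lam_eq / sqrt(1 + (omega*tau)^2).
    tau_years and lam_eq (K per W/m2) are assumed illustrative values."""
    omega = 2.0 * math.pi / period_years
    return lam_eq / math.sqrt(1.0 + (omega * tau_years) ** 2)

for period in (1, 11, 100):  # annual cycle, solar cycle, slow forcing
    print(period, "yr:", round(apparent_sensitivity(period), 3), "K per W/m2")
```

With these made-up numbers the annual cycle sees only ~4% of the equilibrium sensitivity while a century-scale forcing sees ~97%: the faster you wiggle the forcing, the smaller the apparent sensitivity.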

Climate Weenie says:

May 29, 2012 at 1:55 pm

Ah, you meant global atmospheric temperatures. Not much heat in the atmosphere compared to the ocean – all the missing heat is stored there.

Philip Bradley says: May 29, 2012 at 7:37 am

…

Jeez. For the hundredth time, correlation is proof of causation. With the usual statistical caveats.

Use of the phrase ‘Correlation is not evidence of causation.’ is proof the utterer doesn’t understand science or statistics.

Things like this always leave a smile on my face, as either the poster is joking or they simply don’t understand. Causality is a very well understood concept. Correlation’s relationship to causality is less so. This (causality) is decidedly not a statistical phenomenon, but its impact (observed correlation) may be seen in observational data.

Sadly, I think the conflation of these is due in large part to the propensity to replace a real theoretical construction with an observational statistical analysis. The latter has no real conception of causality, only correlation. The former makes (hopefully) testable predictions. The predictions ought to demonstrate some amount of correlation. And more so, if you remove the inputs from the model and have it generate noise, the output ought not to be correlated in that case; otherwise all you’ve demonstrated is an inability to distinguish between signal and noise. Which also means your theory (or mathematical model) is junk.

Which I think happened at some point to some hockey stick thingy.

The logarithmic “climate sensitivity” is a point where Phil and I part company.

A fixed Temperature increase per doubling of CO2, means a change of deltaT for CO2 going from 280 ppm, to 560 ppm. It also means going from 1 ppm to 2 ppm, or from one CO2 molecule per cubic metre, to two CO2 molecules per cubic meter.

“Approximately logarithmic,” doesn’t mean anything; “logarithmic” has a precise meaning.

Approximately logarithmic is also approximately linear, or it could be fitted to the function:

Y = exp(-1/x^2)

There is NO earth climate experimental data, that allows an unambiguous decision between these three possibilities or many other possible mathematical functions.

As for Beer’s Law, sometimes referred to as the Beer-Lambert Law, it was an approximate relationship for absorption in DILUTE solutions of a given solute, from chemistry, and some will argue that it has an optical application as well, in that if a given thickness of an absorbing optical glass (or other specimen), attenuates a given input wavelength optical beam by 1/2 ( 1/2 left after traversing (t) ), then double that thickness will leave only 1/4 remaining.

But a necessary condition for application of Beer’s law, is that the output, is identical to the input except for quantity.. Photons go in and some of them die; PERMANENTLY.

Many optical materials fail to obey Beer’s law, because the photons don’t stay dead; they resurrect as some other wavelength, so the transmitted POWER is much greater, than what is calculated using Beer’s law.

And that is what happens in the atmosphere with CO2, and other GHGs. The long wavelength IR absorbed by GHG molecules, doesn’t stay absorbed; the molecule “fluoresces” at some other wavelength, so the power transmission does not decay exponentially.

That’s why the “CO2 saturation” notion is a non-starter. CO2 absorbs LWIR; re-emits some other wavelength, and then is ready to grab another LWIR photon from the surface; so it never really “saturates”.

But regardless of how atmospheric CO2 absorbs, and emits LWIR photons; that’s a far cry from showing a resultant increase in the Temperature of the rest of the planet.

Physical systems which appear to follow logarithmic relationships (same as exponential in reverse), such as radioactive decay for example, do so in a statistically noisy manner. The exp(-t/tau) is only the statistical average relationship; it does not predict when the very next decay event will occur.

Guest Post by Willis Eschenbach

…forcing from a doubling of CO2 (3.7 W/m2)…

==========================================================

This is another hole. The 3.7 W/m2 is derived from the notion that the “greenhouse gasses” warm the surface from an average of -18 degrees Celsius to an average of +15 degrees Celsius. -18 degrees Celsius is the temperature in the freezer. I suggest you turn off the freezer, open it, then close the opening with a glass lid (blocking the IR radiation) and wait till the temperature rises to +15 degrees Celsius by means of back radiation.

Wait, I have a better idea. Let us just read about the professor R.W.Wood’s experiment with the back radiation: http://www.wmconnolley.org.uk/sci/wood_rw.1909.html . No warming effect through back radiation! Now we know the truth. And the SOLAR IR-radiation is there! The glass lid cooled the box by 10 degrees by blocking a good portion of the SOLAR IR-radiation! AGW people would not like it.

Jim D says:

Well put…and thus bears repeating. I think that this is indeed likely the major problem with Willis’s analysis. Actually, the analysis isn’t too dissimilar to this analysis that someone pointed me to by some guy named George White: http://www.palisad.com/co2/eb/eb.html

As I told that person, the first thing that you should do is repeat the analysis using a climate model and see what sensitivity you predict for the model on the basis of this analysis. The advantage of this is that in the model you know what the answer is because most of the climate models have had their sensitivity well-determined. If your method correctly diagnoses the model’s climate sensitivity (which I am quite sure Willis’s method…and this other guy George White’s method won’t)…then it at least MIGHT work in the more complicated real world. However, if it doesn’t work in the simplified world of a climate model, it is quite unlikely that it will magically work in the real world!

By the way, there has been some work on looking at the seasonal cycle and compare to climate models in order to try to diagnose the climate sensitivity, see here: http://journals.ametsoc.org/doi/pdf/10.1175/JCLI3865.1 Their conclusion is “Subject to a number of assumptions on the models and datasets used, it is found that climate sensitivity is very unlikely (5% probability) to be either below 1.5–2 K or above about 5–6.5 K, with the best agreement found for sensitivities between 3 and 3.5 K.”

George E. Smith; says:

May 29, 2012 at 6:27 pm

That’s why the “CO2 saturation” notion is a non-starter. CO2 absorbs LWIR; re-emits some other wavelength, and then is ready to grab another LWIR photon from the surface; so it never really “saturates”.

=========

Why assume that the absorbed energy will come in the form of a photon from the surface? It is more likely the CO2 molecule will absorb kinetic energy from N2 and O2 and convert this into radiation, and about 1/2 of which will then be radiated into space that would otherwise remain in the atmosphere and be conducted back to the surface.

Back radiation? That isn’t radiation being reflected back. It is energy from N2/O2 being radiated back to the surface. The more back radiation from CO2, the more it is cooling the atmosphere.

George E. Smith; says: “And you are also supposed to know that the amount decays to 5% in three time constants, and to 1% in 5 time constants. So in 10 time constants the remainder would be 0.01%, close enough for most people.”

The model’s response time is not necessarily known ahead of time, so they do long runs, and then when the computer can no longer register a change, that is the time to “equilibration” (or statistically indistinguishable from such), from which the most accurate value for the model’s sensitivity can be determined. At the lengths they run the models for, either they think the response time is centuries or longer, they have no idea, or they want ridiculous accuracy.

Isn’t the average warming per the land- and satellite-based measurements already since 1979 roughly +0.3C?……………….With only an increase of atmospheric CO2 of less than 20%?

Please, tell me that I’m full of sh!t……….

Two questions come to mind. The first is that it’s unclear whether you can assume that the albedo remains constant. The second is that since you are using an average, are you losing information due to a change in phase angle over time? In other words, shouldn’t the figure show a rotation over time if the response changes and none if it doesn’t?

Now you have: ΔT(n+1) = λ∆F(n+1)/τ + ΔT(n) exp(-1/τ)

which could as easily have been written as: ΔT(n+1) = a∆F(n+1) + bΔT(n).

This way a and b are not confounded as λ and τ were. Their estimates are correlated, with the correlation depending on the correlation between ∆F(n+1) and ΔT(n), which are probably uncorrelated (? — except maybe seasons of the year when both decline (fall) or both increase (spring) produce a small positive correlation.)
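The equivalence of the two parameterisations is easy to verify numerically; a sketch with made-up values for λ, τ and the forcing series:

```python
import math

def simulate_lambda_tau(dF, lam, tau):
    """The recursion as given: dT(n+1) = lam*dF(n+1)/tau + dT(n)*exp(-1/tau)."""
    dT, out = 0.0, []
    for f in dF:
        dT = lam * f / tau + dT * math.exp(-1.0 / tau)
        out.append(dT)
    return out

def simulate_ab(dF, a, b):
    """The reparameterised form: dT(n+1) = a*dF(n+1) + b*dT(n)."""
    dT, out = 0.0, []
    for f in dF:
        dT = a * f + b * dT
        out.append(dT)
    return out

# With a = lam/tau and b = exp(-1/tau), the two recursions coincide.
lam, tau = 0.3, 2.7                # made-up values for illustration
dF = [0.5, -0.2, 1.0, 0.0, -0.7]   # made-up forcing changes
x = simulate_lambda_tau(dF, lam, tau)
y = simulate_ab(dF, lam / tau, math.exp(-1.0 / tau))
```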

Why is the change in mean temperature a linear function of the change in forcing over the interval, instead of something like the mean (or integrated) forcing?

The Pompous Git says: May 29, 2012 at 2:37 pm

You are well-named, sir. What a load of pedantic nonsense.

Apart from totally missing the point, do you have anything of value to add?

Willis,

I also think you are confused about what your tau represents…It is not the relaxation time for equilibration of the system. In fact, in the limit that the time constant for equilibration of the system approached infinity, the lag time that you compute would approach 3 months. That is to say, in the limit of a very large heat capacity for the system, the temperature would be 90deg out-of-phase with the cyclical forcing. (In other words, the temperature would reach its peak when the value of the forcing crossed through zero…i.e., the maximum temperature would occur at the equinoxes.) In the limit of zero heat capacity in the system, the temperature would be in-phase with the cyclical forcing. (The maximum temperature would occur at the summer solstice.) Hence, the tau that you have computed would always fall in the range of 0 to 3 months, no matter how long the relaxation time of the system is!

You can see this already with a simple one-box model given by a differential equation of the form:

c dT/dt = F(t) – (1/lambda)*T

where c is the heat capacity, T is the temperature (relative to the equilibrium temperature in absence of any forcing), lambda is the climate sensitivity, t is the time, and F(t) is the forcing [which, for the seasonal cycle, will have a form like F(t) = F_0 * cos(omega*t) where omega = 2*pi if you measure t in years]. I just set up a MATLAB function to solve this equation numerically (although I imagine that it may not be too hard to solve analytically if I had thought about it a little more).

As an example, I find that in this simple model, a lag time of 2.4 months occurs when the relaxation time (given by c*lambda) is 6 months. [In order to get considerably larger relaxation times, say of a few years or more, this simple model requires the temperature data to have a lag time very close to 3 months, but I think a more realistic model with more than one relaxation time could show a lag time of, say, 2.4 months and still have a much slower relaxation to equilibrium.]
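Joel’s 2.4-month figure can be reproduced without MATLAB; a rough forward-Euler sketch of the same one-box model, in units chosen so lambda = 1 and hence c = tau (the integration length and step are my own choices, not Joel’s):

```python
import math

def one_box_lag(tau_years, years=12.0, dt=1e-4):
    """Forward-Euler integration of c*dT/dt = F(t) - T/lambda with
    lambda = 1 (so c = tau) and annual sinusoidal forcing.  Returns the
    lag, in months, of the temperature peak behind the forcing peak."""
    omega = 2.0 * math.pi            # annual cycle, t in years
    T, t = 0.0, 0.0
    times, temps = [], []
    for _ in range(int(years / dt)):
        F = math.cos(omega * t)      # forcing peaks at t = 0, 1, 2, ... years
        T += (F - T) / tau_years * dt
        t += dt
        if t > years - 1.0:          # keep the last, fully equilibrated year
            times.append(t)
            temps.append(T)
    t_peak = times[temps.index(max(temps))]
    return (t_peak % 1.0) * 12.0

print(round(one_box_lag(0.5), 1))  # ~2.4 months for a 6-month relaxation time
```

The lag can never exceed 3 months (90 degrees out of phase) no matter how large tau gets, which is Joel’s point: a short observed lag does not bound the relaxation time from above.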

“”””” Andrew says:

May 29, 2012 at 6:54 pm

George E. Smith; says: “And you are also supposed to know that the amount decays to 5% in three time constants, and to 1% in 5 time constants. So in 10 time constants the remainder would be 0.01%, close enough for most people.”

The model’s response time is not necessarily known ahead of time, so they do long runs and then when the computer can no longer register a change, that is the time to “equilibration” or statistically indistinguishable from such, from which the most accurate value for the model’s sensitivity can be determined. “””””

Sorry Andrew, if the model postulates some thermal time delay process, then the rate of rise (or fall) can be determined in the very first time intervals of the “simulation”, as the initial rate of change, and from the known disturbance from the steady state the time constant can immediately be obtained. You don’t have to run a process into the ground to finally figure out how long that will take.

Besides earth is never in equilibrium, so it never does stop changing.

“”””” ferd berple says:

May 29, 2012 at 6:53 pm

George E. Smith; says:

May 29, 2012 at 6:27 pm

That’s why the “CO2 saturation” notion is a non-starter. CO2 absorbs LWIR; re-emits some other wavelength, and then is ready to grab another LWIR photon from the surface; so it never really “saturates”.

=========

Why assume that the absorbed energy will come in the form of a photon from the surface? “””””

I didn’t!

George E. Smith says (May 29, 2012 at 6:27 pm): ‘As for Beer’s Law, sometimes referred to as the Beer-Lambert Law, it was an approximate relationship for absorption in DILUTE solutions of a given solute, from chemistry, and some will argue that it has an optical application as well,…’

Actually the Beer-Lambert law is a theoretically-ideal expression for radiative absorption in relatively transparent media in all phases of matter, ie. solid, liquid and gas, as the Wikipedia article on it (here: http://en.wikipedia.org/wiki/Beer-Lambert_law#Derivation ) makes clear.

I think I understand your model now. The formulation is a little unusual, but it seems perfectly valid. It cannot handle a constant forcing (Delta F=0), but behaves correctly for dealing with changes of forcing, which is what we are interested in anyhow.

I’m not sure how you get the values for Delta F for the two hemispheres. I can imagine several methods, but for comparison I would prefer to know which method you used. The NASA table gives average insolation according to latitude and time, say s(theta,t). For simplicity of formulas, let’s give the latitude theta in radians.

To compute the hemispherical forcing F, we also need the albedo in dependence of latitude and time. Let’s say it’s a(theta,t), where a(theta,t) is a number between 0 and 1. The average forcing of the hemisphere at time t would, I think, be given as the integral over theta from 0 to pi/2 of (1-a(theta,t))s(theta,t)cos(theta).

The problem I have is that the paper by Hatzianastassiou et al. only seems to give graphs of the computed latitude dependent, time averaged albedo, and the time dependent, space averaged albedo for each hemisphere. So how exactly did you proceed from there? Did you get the actual values somewhere, or did you do some approximation?
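For what it’s worth, the integral described above is straightforward to evaluate once a(theta,t) and s(theta,t) are in hand; a sketch with placeholder albedo and insolation functions (the lambdas below are stand-ins, not the Hatzianastassiou data):

```python
import math

def hemispheric_forcing(albedo, insolation, t, n=1000):
    """Cosine-weighted hemispheric mean of absorbed solar flux:
    integral over theta in [0, pi/2] of (1 - a(theta,t))*s(theta,t)*cos(theta),
    evaluated by the midpoint rule.  Since the integral of cos(theta) over
    [0, pi/2] is 1, the result is already an area-weighted average."""
    dtheta = (math.pi / 2.0) / n
    total = 0.0
    for i in range(n):
        theta = (i + 0.5) * dtheta
        total += ((1.0 - albedo(theta, t)) * insolation(theta, t)
                  * math.cos(theta) * dtheta)
    return total

# Sanity check with uniform placeholder fields: albedo 0.3, insolation 340 W/m2
F = hemispheric_forcing(lambda th, t: 0.3, lambda th, t: 340.0, t=0.0)
print(round(F, 1))  # 238.0 = 0.7 * 340
```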

Robbie says:

May 29, 2012 at 4:22 am

Robbie,

It is not enough that your model is based on physics. It actually needs to be based on the appropriate physics. When subtracting one part in 2500 (all the CO2) from the model produces several degrees GMAT cooling in the first year, I humbly submit that the appropriate physics is not included in the model.

KR says:

May 29, 2012 at 12:15 pm

Thanks for an interesting citation, KR. However, it’s models all the way down. From the Abstract:

A neural network analyzing the output of climateprediction.net simulations … be still, my beating heart.

Here’s a graph of the climateprediction.net results … Note that:

1. They have about a 3°C range before any change in CO2, they don’t even agree on the current global surface temperature.

2. A host of their predictions fall through the floor before any change in CO2.

3. Another bunch of them fall through the floor after they add CO2.

4. Pick a number, they have a prediction that gives that result.

That’s your “evidence”, KR? From their results, it looks like there’s a darned good chance we’re already in an ice age and haven’t noticed …

I have said, over and over, that climate models are not evidence of anything real. Those results are a perfect example of why model results are not evidence of anything but the beliefs, biases, and errors of the programmers.

In any case, the authors of your citation see the holes in their logic, and they attempt to cooper them up as follows:

So the authors point out that the results from their whiz-bang neural network analysis of a bunch of crappy GCM runs might just be one model fitting another model … so they give us an “evaluation of the method”. How do they evaluate it?

Well, not by looking at observations, but by looking at other climate models …

So I’m sorry, KR, but I find nothing of use in your citation.

I also note that you do not point out any problems with my mathematics, logic, or data … so why exactly are you claiming I’m wrong, other than that my results don’t agree with some neural network’s analysis of a very poor computer model?

You say I’m wrong, but where is the error in my work?

w.

steve fitzpatrick says:

May 29, 2012 at 12:48 pm

Thanks, Steve. If it’s lower than other estimates, you should read more estimates. You could start with the work of Sherwood Idso.

Next, actual climate sensitivity should be lower than a simple blackbody response, because of the existence of:

1. a range of thermostatic phenomena that regulate the climate and reduce its swings, and

2. a variety of parasitic losses (sensible and latent heat) that increase with increasing temperature.

See here and here for a discussion of some of the issues surrounding sensitivity.

Finally, as you likely know, the blackbody response from a doubling of CO2 is 0.7°C.

w.

Jim D says:

May 29, 2012 at 5:57 pm

That sounds like an interesting result. Citation?

Thanks,

w.

Allan MacRae said @ May 29, 2012 at 7:46 pm

I thought your point was: “In science, first there is Hypothesis, then Theory (Evolution) and finally Law (Gravity).” I do not believe that is true and gave an example that contradicts your assertion. Just in case you found the Newton/Einstein example too difficult to follow, consider Snell’s Law.

Snell’s Law (la loi de Descartes if you are French) describes the relationship between the angles of incidence and refraction when a pencil of light passes through the boundary between two different isotropic media, such as water and glass. It was first described by the Arab philosopher Ibn Sahl in 984. The corpuscular theory of light was developed by the Frenchman Pierre Gassendi and the wave theory of light by the Dutchman Christiaan Huygens in the 18th C.

I do not see how the wave, or corpuscular theories of light could have evolved into Snell’s Law half a millennium prior. Nor do I know of any theory that ever evolved into a physical law. So far, you have given no example of this evolution. I invite you to do so, or explain why I am mistaken in the examples I give. Perhaps then we can proceed to any point that I may be missing. Historians and philosophers are rather averse to arguments proceeding from false premisses.

Glad you like my moniker BTW. I stole it from Stuart Littlemore after an ABC budget-cut reduced him to being Stuart Littleless 😉

joeldshore says:

May 29, 2012 at 6:45 pm

Thanks, Joel, but … I’ve already done that. Twice.

Here and here. In both cases I found, using the same lagged model I used above, that the climate sensitivity is ≈ 0.3 … curiously, it’s the same result I get above.

Despite the fact that I’m using a much lower value for the sensitivity than they claim for the model, I’m able to reproduce the model output to a very high degree of fidelity (correlation = 0.995) … which means that in fact I am using the correct climate sensitivity. (It also means that the global temperature results from their unimaginably complex models can be perfectly emulated by a one-line equation … but I digress.)

So who you gonna believe … the climate modelers, or your own lying eyes? I fear your argument is like that of the communists, who used to say “That works well in practice, comrade … but it will never work in theory!”.

w.

[UPDATE—ERROR] I erroneously stated above that the climate sensitivity found in my analysis of the climate models was 0.3 for a doubling of CO2. In fact, that was the sensitivity in degrees per W/m2, which means that the sensitivity for a doubling of CO2 is 1.1°C. -w.

BarryW says:

May 29, 2012 at 7:14 pm

Thanks, Barry. I didn’t assume the albedo remains constant. I used the actual variations in the albedo from Hatzianastassiou et al. Sorry for the lack of clarity.

As mentioned in the notes, I also calculated it without using an average. See the spreadsheet for those results.

w.

mb says:

May 29, 2012 at 9:42 pm

I used the average albedo for each hemisphere. To get the values, I simply digitized the graphs in Hatzianastassiou. The values are available in the spreadsheet which I cited in the head post.

w.

Willis, OT, apologies for that – my piece on Graeff is up at Tallbloke’s blog and if you want to reply but not there you can always email me direct. OTOH it would be nice to see that piece here – when my computer is back functioning so I can cope with replies…

“”””” Magic Turtle says:

May 29, 2012 at 9:20 pm

George E. Smith says (May 29, 2012 at 6:27 pm):

‘As for Beer’s Law, sometimes referred to as the Beer-Lambert Law, it was an approximate relationship for absorption in DILUTE solutions of a given solute, from chemistry, and some will argue that it has an optical application as well,…’

Actually the Beer-Lambert law is a theoretically-ideal expression for radiative absorption in relatively transparent media in all phases of matter, ie. solid, liquid and gas, as the Wikipedia article on it (here: http://en.wikipedia.org/wiki/Beer-Lambert_law#Derivation ) makes clear. “””””

“”””” the Beer-Lambert law is a theoretically-ideal expression for radiative absorption “””””

So what does “theoretically ideal expression” mean? Would it also be a practically ideal expression?

You also state (or does Wikipedia, also stand in for you here) it is an expression for “radiative absorption ” ; OF WHAT ? The incident photons, or the energy they carry. If it is the latter, then B-L could also be used to compute THE ENERGY TRANSMISSION of solids, liquids and gases.

As for Beer’s law, it IS, as I stated, an approximate law for the absorption of a dilute solution of a solute as a function of the concentration of the solute. And the same holds for dye-containing optical glasses, as a function of the dye concentration in the glass. Only when the solute concentration is fixed, in either liquid or solid solutions, does the absorption follow an exponential-with-THICKNESS law; and for a great many such solutions, even then the exponential law holds ONLY for the original input beam wavelength, ie, the input photons; it does not yield a correct result for the energy or power transmission.

For example, all of the Schott sharp-cut long-wave-pass optical filter glasses have steep cutoffs versus wavelength, giving attenuations of the input wavelength of 10^4 or 10^5 just beyond the cutoff wavelength, and you can prove that for yourself with a laser and a double monochromator tuned to the laser wavelength. But if you remove the monochromator, to eliminate the restriction to the original input wavelength, a power meter will show that a large fraction of the input power is still being transmitted; so Beer’s law is not being obeyed for the power transmission.

And the same goes for the LWIR transmission of the atmosphere in the presence of CO2.

Sure the CO2 absorbs surface-emitted LWIR photons; but they don’t stay absorbed; the energy is re-emitted at some other wavelength, so it is not stopped by the CO2. As a result, the energy transmission is greater than the Beer law would claim, both as a function of CO2 abundance and as a function of distance. I happen to have a full set of all Schott optical filter glasses in the standard 50 x 50 x 3 mm size, and very few of them follow the exponential decay law for the total transmitted power. And of course they are all constant-concentration samples, so they can’t be checked for linearity of the concentration exponential.

Just because something is in wikileaks does not make it reliable information.

“So … what are we looking at in Figure 1?” Looks like four rubber bands on a piece of paper. 😉

And if you read your own reference, you will see that it is far from a theoretically ideal law, and even Wiki erroneously ascribes it to the transmission instead of the absorption. See also the notes on deviations from the approximate law, and the conditions under which it applies, especially the one about the radiation not affecting the atoms. They fail to mention that the material must not be fluorescent, so the absorbed photons have to stay dead. And that’s impossible, because if they do, then the sample will heat, and since it is a solid, liquid, or gas above absolute zero, it must radiate a thermal spectrum to get rid of the temporarily absorbed energy.

Once again, wiki lives up to its reputation.

Gymnosperm says:

On May 29, 2012 at 10:12 pm

I am not basing my model on physics. It’s just using some common sense. Eschenbach is claiming a negative forcing by water vapor of more than 75%. Yes, more than 75%, because an increase in water vapor due to increased CO2 warming (1-1.2°C by 2xCO2) makes that value more than 75%. That’s simply wrong. It isn’t happening now and it won’t happen into the future.

Negative forcing by water vapor and mainly clouds is just 45% in the real world at this moment. In the long run (that takes a few hundreds of years) equilibrium will be reached when CO2 stays doubled and doesn’t increase anymore. But then the temperature will have risen at least somewhere between 1.5-2°C. And not what Eschenbach is claiming.

He is simply 100% wrong in his 0.3°C sensitivity claim.

scalability.org says:

May 29, 2012 at 6:18 pm

Things like this always leave a smile on my face, as either the poster is joking or they simply don’t understand. Causality is a very well understood concept. Correlation’s relationship to causality is less so. This (causality) is decidedly not a statistical phenomenon, but its impact (observed correlation) may be seen in observational data.

Correlation is a statistical phenomenon. Hence the statistical caveat.

From wikipedia,

“Correlation does not imply causation” is a phrase used in science and statistics to emphasize that correlation between two variables does not automatically imply that one causes the other (though correlation is necessary for linear causation in the absence of any third and countervailing causative variable). If various critics above have a problem with this, I suggest you edit the entry.

Willis Eschenbach says:

…Which shows that your result is wrong, since that is most assuredly not the correct climate sensitivity for the model. The climate sensitivity in the models is not hard to determine: put a certain forcing in and see how much temperature change you get when you run the model out for a long time. The answer obtained is very different from the answer that you obtain. Therefore, your method is very poor in diagnosing the climate sensitivity of climate models.

No… It does not show that at all, any more than Nikolov and Zeller’s fit shows that they are correct. In fact, it shows that your method of diagnosing climate sensitivity DOES NOT WORK in a case where the answer is known.

No… You have shown no evidence that it works well in practice. Being able to fit a bit of data for the seasonal cycle is not evidence that your method is effective in diagnosing the climate sensitivity! And, the fact that it is known to work extremely poorly in diagnosing the sensitivity in a climate model shows that it works very poorly in theory. Techniques that work poorly in theory (i.e., the simplified world of a model) seldom end up working well in practice (the real world).

WIllis: Now that I have looked back at the two previous posts of yours that I linked to, I have two more comments:

(1) In those posts, you found a climate sensitivity of ~0.3 C per W/m^2 for the GISS climate model. So, no, that is not particularly close to the value that you have found here of ~0.3 C for a change in forcing of 3.7 W/m^2. Rather, it is about 4 times as large.

(2) For those posts, you were not looking at the seasonal cycle in the models…You were looking at the long term warming trend over the last century or so. Hence, it is not surprising that you got a higher answer there (closer to the correct value for equilibrium sensitivity in that model but still too low because of the difference between transient climate response and equilibrium climate sensitivity). As myself, Jim D, and Hans Erren ( http://wattsupwiththat.com/2012/05/29/an-observational-estimate-of-climate-sensitivity/#comment-996383 ) are all trying to point out to you, your method of diagnosing climate sensitivity will give you a lower sensitivity for higher frequency forcings.

George E. Smith; says: “Sorry Andrew, if the model postulates some thermal time delay process, then the rate of rise (or fall) can be determined in the very first time intervals of the “simulation,”, as the initial rate of change.”

That might be the case … if the models were exactly as neat as the equations that simulate their long term behavior quite well. The problem is that there is a lot of noise about those neat functional forms that hides them. So the very first time step will show something different from that correct “initial rate of change” … if one averages a large number of models, the models’ noise begins to cancel out and approach a neat functional form almost exactly. But I do think they probably run the models excessively. Unless they are unaware of the simplicity of the underlying functional form the models approach.

joeldshore:

At May 30, 2012 at 5:36 am you say to Willis:

Oh dear! Even by your standards that is very wrong.

Firstly, climate sensitivity is NOT “known”. If climate sensitivity were “known” then each model would use the same value of it.

But the climate models each use a different value of climate sensitivity, and these values vary by a factor of 2.5.

(ref. Kiehl JT, Twentieth century climate model response and climate sensitivity. GRL vol. 34, L22710, doi:10.1029/2007GL031383, 2007).

Secondly, if a method works then it works whether or not “the answer is known”.

Thirdly, … and etc. …

Richard

[SNIP: Off Topic. We have a Tips and Notes page here for things like this. -REP]

Billy Liar – “So where is the model (in control system terms) documented. Any idea?”

A reasonable resource is at http://stommel.tamu.edu/~baum/climate_modeling.html – links to major climate models. If you are interested in simpler models, I would suggest Googling 2-box to 6-box models; increasing model complexity does improve fidelity to observations, but even a zero dimensional flux model can provide some insight.

Willis Eschenbach – The major issue I have with your work is that it’s a “one-box” model, which fails entirely to capture the behavior of ocean heat transfer. I might suggest you take a look at http://arthur.shumwaysmith.com/life/category/tags/two_box_model for a discussion/development of a somewhat more capable model and its behavior with reasonable heat transfer rates. Single box models will simply give unrealistic (as in your 0.3C/doubling) results.

richardscourtney says:

You are clearly not following the discussion. Let me explain it again:

(1) The climate sensitivity in the real world is not known and Willis claims to have found a method to determine it by looking at the seasonal cycle.

(2) The climate sensitivity in various climate models is known. For the purposes of this discussion, it is irrelevant whether that climate sensitivity corresponds to the climate sensitivity in the real world.

(3) Since the climate sensitivity in the various models is known, one can try to use Willis’s method and see if it correctly diagnoses the known climate sensitivity of a particular model when one looks at the seasonal cycle in the model.

(4) If Willis’ method does correctly diagnose the climate sensitivity in the model, then there is some reasonable hope it might also diagnose the climate sensitivity in the real world; if it doesn’t, there is little hope that it will magically do a better job in the more complicated real world.

What about this is difficult to understand?

joeldshore May 30, 2012 at 8:33 am – That is a major reason I pointed to the Knutti and Meehl paper. While all climate models (including, I’ll note, Willis Eschenbach’s single-box model as discussed in this thread) are inexact, comparing seasonal responses to forcings with a realistic climate model (one that at least considers several time constants) and its sensitivities would be an excellent test case for a seasonal method. That’s just what Knutti and Meehl 2006 did.

Annual cycles are just too short for significant heat penetration into the oceans, and hence cannot directly provide transient (20 year) climate sensitivity estimates. The thermal mass of the oceans acts as a flywheel or inductor – limiting responses to fast forcing changes.

Willis:

One should not imply the existence in nature of the equilibrium climate sensitivity (TECS) for TECS is defined in terms of the equilibrium temperature but this temperature is not an observable. As the equilibrium temperature is not an observable, claims made about the magnitude of TECS, including the claims that you make in this article, are not falsifiable, thus lying outside science.

Through the error of implying the existence of TECS, climatologists have neglected the necessity for providing a basis for falsifying the claims that are made by their models if these models are to be properly labelled as “scientific.” Were it to exist, this basis would be the underlying statistical population. Currently, there is no such population.

Magic Turtle says:

May 29, 2012 at 4:17 pm

To my saying (May 29, 2012 at 10:51 am):

“1. The amount of radiative forcing produced by a doubling of the CO2 concentration is not a regular fixed amount. The IPCC’s logarithmic formula from which the fixed amount results is false. The correct formula can be derived from the Beer-Lambert law of optics and it follows an inverse exponential law, not a logarithmic one.”

Phil says (May 29, 2012 at 12:56 pm):

‘ Not true, the logarithmic dependence does not come from the Beer-Lambert Law which applies in optically thin situations, but from the optically thick situation which applies to CO2 absorption in the IR in our atmosphere. Look up ‘Curve of Growth’ to see a derivation.

Basically, weak lines have a linear dependence on concentration, moderately strong lines have a logarithmic dependence and very strong lines a square root dependence.’

What I said is true. I never said that the logarithmic relationship is derived from the Beer-Lambert law, which I agree does apply to optically-thin lines.

No you said that the “correct formula can be derived from the Beer-Lambert law” which it can not because the absorption is not optically thin.

But it also applies to groups of optically-thin lines, and for that reason it can be applied to entire absorption/emission wavebands. The thicknesses of the individual lines become irrelevant in that case. Such an application provides a straightforward method of deriving a general formula for the absorption of terrestrial surface radiance by atmospheric CO2.

However that absorption band is optically thick, so your use of the Beer-Lambert Law to derive a relationship is flawed. The absorption by CO2 in the 15μm band corresponds to the moderately strong region and hence has a logarithmic dependence, which is what is found experimentally.

http://i302.photobucket.com/albums/nn107/Sprintstar400/CO2spectra-1.gif

A good derivation can be found here (note that it starts from the B-L Law):

http://www.physics.sfsu.edu/~lea/courses/grad/cog.PDF

@ George E. Smith

Re. Your comments to me of May 30, 2012 at 3:15 am and 3:33 am.

“”””” the Beer-Lambert law is a theoretically-ideal expression for radiative absorption “””””

So what does “theoretically ideal expression” mean? Would it also be a practically ideal expression?

That depends on what you want to use it for. If you want to estimate the amount of radiant power that the CO2 in the earth’s atmosphere is absorbing it will only give you an approximation to the true value. But I contend that it would give you a better approximation than the IPCC’s logarithmic expression which does not do what is claimed for it and makes no sense to me.

You also state (or does Wikipedia also stand in for you here) it is an expression for “radiative absorption”; OF WHAT? The incident photons, or the energy they carry.

It’s both! The molecule absorbs both the photon and the energy it carries, perhaps like a flying duck ‘absorbs’ a shotgun pellet along with the energy that it carries.

If it is the latter, then B-L could also be used to compute THE ENERGY TRANSMISSION of solids, liquids and gases.

I’m not sure what you mean by ‘energy transmission’ here, but if you mean the amount of power that is transmitted by a beam of radiation through a medium and out the other side then, yes, that is how the B-L law can often be used (depending on the precise conditions of the case of course).

I think I understand your point about the inapplicability of Beer’s law to optical filters. However I don’t see it as a problem in relation to the application of the Beer-Lambert law to the atmosphere, because the Beer-Lambert law is not the same as Beer’s law. Under Beer’s law ‘the absorption follow(s) an exponential with THICKNESS’ relationship as you rightly say, but the B-L law is more fundamental and is not constrained in this way. Instead the B-L law allows that the total amount of power that is absorbed from a beam will depend (exponentially) on the total number of absorbent molecules in the beam’s path regardless of their spatial distribution along the beam. Therefore the B-L law is applicable to CO2 molecules in the earth’s atmosphere where the density of the gas varies, whereas Beer’s law is not applicable to it as you say.
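The path-independence claim can be sketched numerically. In the Beer-Lambert form I/I0 = exp(-sigma*N), the transmitted fraction depends only on the total column of absorbers, so splitting the column into layers changes nothing (the cross-section and column density below are arbitrary illustrative numbers, not real CO2 values):

```python
import math

def transmitted_fraction(cross_section, column_density):
    """Beer-Lambert attenuation: I/I0 = exp(-sigma * N).
    N is the total number of absorbers per unit area along the beam,
    so the result depends only on the column total, not on how the
    absorbers are distributed along the path."""
    return math.exp(-cross_section * column_density)

# Arbitrary illustrative numbers.
sigma = 1.0e-22        # absorption cross-section, cm^2 per molecule
N = 5.0e21             # column density, molecules per cm^2

whole = transmitted_fraction(sigma, N)
# Splitting the same column into two unequal layers multiplies the
# transmissions and gives exactly the same answer:
layered = transmitted_fraction(sigma, 0.7 * N) * transmitted_fraction(sigma, 0.3 * N)
```

This is just the mathematical identity exp(-a)·exp(-b) = exp(-(a+b)); it says nothing about whether the law applies in the optically-thick regime, which is the separate point Phil raises below.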

Sure the CO2 absorbs surface emitted LWIR photons; but they don’t stay absorbed; the energy is re-emitted at some other wavelength so it is not stopped by the CO2.

Quite possibly, but I am not arguing that CO2’s emission of radiation conforms to the Beer-Lambert law; only that its absorption of radiation does.

Just because something is in wikileaks does not make it reliable information.

Indeed so! But I didn’t refer you to wikileaks; I referred you to Wikipedia. Ha, ha!

However I acknowledge the validity of your point in relation to Wikipedia too and I don’t trust it blindly myself. I referred to it because it gives the most lucid and comprehensive explanation of the B-L law (derived from first principles) that I’ve seen to date and because it includes a section specifically on the application of the B-L law to the atmosphere (here: http://en.wikipedia.org/wiki/Beer-Lambert_law#Beer.E2.80.93Lambert_law_in_the_atmosphere). If you find a discussion somewhere else that disagrees with it please let me know.

ferd berple says:

May 29, 2012 at 6:53 pm

That’s why the “CO2 saturation” notion is a non-starter. CO2 absorbs LWIR, re-emits some other wavelength, and then is ready to grab another LWIR photon from the surface; so it never really “saturates”.

George E. Smith; says:

May 29, 2012 at 6:27 pm

=========

Why assume that the absorbed energy will come in the form of a photon from the surface? It is more likely the CO2 molecule will absorb kinetic energy from N2 and O2 and convert this into radiation, and about 1/2 of which will then be radiated into space that would otherwise remain in the atmosphere and be conducted back to the surface.

Absorbed LWIR excites the vibrational and rotational states which are therefore able to emit radiation if they are not collisionally deactivated first, transfer of translational kinetic energy from N2 and O2 does not necessarily excite the ro-vib levels in which case it will not emit, so no it is not more likely that the acquisition of kinetic energy will be converted into radiation.

Magic Turtle says:

May 30, 2012 at 10:09 am

That depends on what you want to use it for. If you want to estimate the amount of radiant power that the CO2 in the earth’s atmosphere is absorbing it will only give you an approximation to the true value. But I contend that it would give you a better approximation than the IPCC’s logarithmic expression which does not do what is claimed for it and makes no sense to me.

That’s because you don’t understand absorption in an optically thick situation, which is the case for CO2 absorption in the atmosphere. You insist on applying B-L outside its range of applicability; it might come close for the Martian atmosphere, but not Earth’s.

Magic Turtle says:

May 30, 2012 at 10:09 am

Instead the B-L law allows that the total amount of power that is absorbed from a beam will depend (exponentially) on the total number of absorbent molecules in the beam’s path regardless of their spatial distribution along the beam. Therefore the B-L law is applicable to CO2 molecules in the earth’s atmosphere where the density of the gas varies whereas Beer’s law is not applicable to it as you say.

This is true only if that beam path is optically thin (read the page you cited); you have to take account of optical thickness. Do yourself a favor and read the article I referenced.

Willis

please could you explain your formula

ΔT(n+1) = λΔF(n+1)/τ + ΔT(n) exp(-1/τ) ?

Do you have any sources for that?

When you look at the units in your formula, there must be something wrong. τ must be dimensionless; it can’t be a time constant. And if τ is just a factor, wouldn’t lambda be “your lambda”/τ? But even then, this lambda is not the usual climate sensitivity.

But even without this, your formula looks strange; I need some sources to understand it.
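Purely as an illustrative sketch (not Willis’s actual code; the values of λ, τ, and the step count below are arbitrary), iterating the quoted recursion for a constant ΔF shows it settles to a fixed point, which bears on the unit question:

```python
import math

def step_response(lam, tau, dF, n_steps):
    """Iterate the recursion quoted above,
        dT[n+1] = lam * dF / tau + dT[n] * exp(-1 / tau),
    for a constant forcing step dF, starting from dT = 0."""
    dT = 0.0
    decay = math.exp(-1.0 / tau)
    for _ in range(n_steps):
        dT = lam * dF / tau + dT * decay
    return dT

# Illustrative numbers only: lam in degC per (W/m^2), tau in time steps,
# dF the canonical ~3.7 W/m^2 for a CO2 doubling.
lam, tau, dF = 0.3, 30.0, 3.7
final = step_response(lam, tau, dF, 5000)

# The fixed point of the recursion, from x = lam*dF/tau + x*exp(-1/tau):
equilibrium = lam * dF / (tau * (1.0 - math.exp(-1.0 / tau)))
```

Note that for large τ, τ·(1 − exp(−1/τ)) ≈ 1, so the fixed point is approximately λ·ΔF. Read that way, λ does carry units of °C per W/m² and τ acts as a dimensionless lag expressed in time steps, which may answer the unit objection.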

PS:

You haven’t answered my question before; maybe you overlooked it, or was it a stupid question? My question was whether you considered heat exchange between the two hemispheres, e.g. by winds or ocean currents.

KR:

In your post at May 30, 2012 at 8:27 am you reply to my having written

But, as I explained, they don’t.

And you rightly say I also wrote:

You reply with the incorrect assertion

And you then provide a series of numbered points. I address each of them in turn.

You say

I agree.

But I add the caveat that Willis is the third person I know who has attempted to determine climate sensitivity from the variation of solar forcing which results from Earth/Sun distance provided by the Earth’s orbit.

You say

No! We are discussing the method Willis has used to determine the magnitude of “the climate sensitivity in the real world”.

You say

Yes. But so what? We are discussing the method Willis has used to determine the magnitude of “the climate sensitivity in the real world” (n.b. NOT the climate sensitivity used in any “particular model”).

You say

Absolutely not! Models are tested against observations and analyses of reality.

Analyses of reality are NOT tested against models.

This is because the analysis is of reality and a model is merely a representation of an idea about reality. Therefore, your assertion is the opposite of what is required. Your assertion would be correct if it said:

If there is no flaw in Willis’ method then it does correctly diagnose the climate sensitivity in the real world. So, if a model uses the climate sensitivity indicated by Willis’ method then there is some reasonable hope that the model might emulate the real world; if it doesn’t, there is little hope that it will magically do a better job of emulating the more complicated real world.

You ask me

I have no difficulty understanding this, but clearly you do.

Richard

Willis Eschenbach

May 30, 2012 at 12:35 am

“Finally, as you likely know, the blackbody response from a doubling of CO2 is 0.7°C.”

And as you likely know, the blackbody sensitivity, expressed in units of watts/M^2 per degree change in temperature, depends on the assumed emission temperature, as given by:

dJ/dT = (2.268 X 10^-7) * T^3

For 255K emission (what approximately balances absorbed solar energy with net radiative loss to space) the blackbody sensitivity is 3.76 watts/M^2/K, so a doubling (3.71 watts/M^2) gives about 3.71/3.76 = 0.987 C per doubling (which is why I say “about 1C”). It is only by assuming a much higher average emission temperature (eg 288K) that the blackbody sensitivity is close to 0.7C per doubling (actually 0.685C per doubling for 288K). But 288K is not the emission temperature for Earth.
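Steve’s arithmetic can be checked directly. A minimal sketch, using the standard Stefan-Boltzmann constant and the commonly quoted 3.71 W/m^2 doubling forcing:

```python
# Blackbody response per CO2 doubling at a given emission temperature,
# using dJ/dT = 4 * sigma * T^3 (the 2.268e-7 * T^3 quoted above) and
# 3.71 W/m^2 as the forcing from a doubling.
SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W/m^2/K^4
F_DOUBLING = 3.71    # W/m^2 per doubling of CO2

def doubling_response(T):
    """Delta T = F / (4 * sigma * T^3) for emission temperature T in kelvin."""
    return F_DOUBLING / (4.0 * SIGMA * T ** 3)

cold = doubling_response(255.0)   # the "about 1C" figure
warm = doubling_response(288.0)   # the ~0.7C figure
```

The two calls reproduce the 0.987 C (255 K) and 0.685 C (288 K) figures in the comment, showing that the disagreement is entirely about which emission temperature to assume.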

“”””” Magic Turtle says:

May 30, 2012 at 10:09 am

@ George E. Smith

Re. Your comments to me of May 30, 2012 at 3:15 am and 3:33 am.

“”””” the Beer-Lambert law is a theoretically-ideal expression for radiative absorption “””””

So what means “theoretically ideal expression”. Would it also be a practically ideal expression?

That depends on what you want to use it for. “””””

Well Magic, you are missing the whole point. Beer’s Law and Lambert’s law (they are different laws) both assume that photons (and their energy) are absorbed proportionally to the number of absorbing species molecules or atoms they encounter, and that the ENERGY STAYS ABSORBED, which lowers the amount of propagating radiant energy that becomes available to the next bunch of molecules to take a shot at absorbing. So for example, in a stack of thin sheets, the EM radiant energy exiting each sheet in turn would be reduced by the ABSORPTION in the sheet, so less energy enters the next sheet. But the photons are required to stay dead for the exponential decay relationship to apply for either law. But often they simply resurrect as a longer wavelength photon, which will almost surely have a different absorption probability in the next sheet. So individual photons may be extinguished, but the energy continues to propagate at a different wavelength. The ORIGINAL SPECIES are absorbed in the usual exponentially decaying manner, so the laws apply to the absorption of the original species, but they don’t apply to the ENERGY TRANSMISSION of many materials.

For starters, there are deviations from the laws simply due to the Stokes shift when the subsequent emission of a longer wavelength photon occurs. Eventually it all will be transmitted, just at different wavelengths, including Planckian thermal wavelengths from sample heating.

Photons don’t stay dead; not until the temperature reaches zero K. But if you only account for the original impinging photon species (absorption), measurements show quite good agreement with those laws. Try googling “blue pumped white LEDs”. They depend on re-emission of longer wavelength photons, as well as transmission of the original blue (often 460 nm) photons, and Stokes shift losses are key to final efficiency.

“”””” re stevefitzpatrick………..For 255K emission (what approximately balances absorbed solar energy with net radiative loss to space) the sensitivity to a doubling is 3.76 watts/M^2/K, or about 3.71/3.76 = 0.987 C per doubling (which is why I say “about 1C”). It is only by assuming a much higher average emission temperature (eg 288K) that the blackbody sensitivity is close to 0.7C per doubling (actually 0.685C per doubling for 288K). But 288K is not the emission temperature for Earth. “””””

Well 288 K is certainly closer to the usually cited mean Temperature of earth than is 255 K.

You say 255K (BB radiation) matches solar input, but that assumes that earth is indeed a single Temperature BB emitter which it isn’t. You have to believe that ALL of the roughly 5 to 80 micron spectrum emission from a 288 K Temperature source is absorbed in the atmosphere, and none escapes, to believe that earth looks like a 255 K BB. In fact plenty of energy escapes from earth at much higher effective source Temperatures from the hottest desert areas, where CO2 absorption is less of a factor.

The tropical hot areas are responsible for cooling the earth, not the cold polar areas, which often emit less than 10% of what the hot zones radiate.
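The blackbody numbers being traded in this exchange can be checked directly from the Stefan-Boltzmann law F = σT⁴, whose slope is dF/dT = 4σT³; dividing the quoted 3.71 W/m² per CO2 doubling by that slope reproduces both figures:

```python
# Checking the quoted blackbody sensitivities: F = sigma*T^4 gives
# dF/dT = 4*sigma*T^3, and the no-feedback warming per CO2 doubling is
# 3.71 W/m^2 (the figure used in the quote) divided by that slope.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/m^2/K^4

def bb_slope(T):
    """Blackbody radiative sensitivity dF/dT at temperature T, in W/m^2 per K."""
    return 4.0 * SIGMA * T ** 3

for T in (255.0, 288.0):
    print(round(bb_slope(T), 2), round(3.71 / bb_slope(T), 3))
# 255 K -> slope ~3.76 W/m^2/K, ~0.987 C per doubling
# 288 K -> slope ~5.42 W/m^2/K, ~0.685 C per doubling
```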

richardscourtney May 30, 2012 at 10:49 am – I’m afraid that the comment you were replying to was from joeldshore, not me.

richardscourtney says:

How is one to know if there is a flaw in the method? The method, like a model, makes a zillion approximations, as we have been discussing. In fact, his particular method is based on fitting an extremely simple model to the data and then using that model to determine the climate sensitivity. There is no evidence whatsoever that his model can do a good job telling you the climate sensitivity when tuned to data on the seasonal cycle.

It is strange how you somehow believe that Willis’s model can tell you the correct climate sensitivity but a full-scale global climate model can’t…And, that in fact, you can’t even use a full-scale climate model as a testbed to see how well Willis’s simple technique works on a much more careful representation of reality. It seems the only reason that you believe this is that Willis’s model gives you your desired answer.

Willis did not do an “analysis of reality” any more than any data-fitting exercise with a simple model is an “analysis of reality”. Willis used a model that is completely untested to fit some data. (Actually, I think such a model has been tested enough to know that it is way too simplistic for the task.) And now he wants to claim that the completely untested aspect of his model is correct…i.e., that his simple model tuned to the most basic aspects of the seasonal cycle can diagnose the correct climate sensitivity. There is no reason whatsoever to believe that it can and every reason to believe that it can’t.

The use of “synthetic data” produced by models as a way of evaluating a diagnostic technique has a rich and successful history in science. You want to overturn that because you want to elevate a simple model that gives the answer that you happen to like above all other models. It is really bizarre. You guys have an extreme allergic reaction to climate models unless said models are simplistic enough that they can be tuned to data to give answers that you like, and then you are apparently ready to believe them without even demanding any testing whatsoever!

Curiously enough, looking at some discussions of multiple box models, I ran across a reference to several earlier Eschenbach postings on forcings and modeling, and a fairly detailed reply (http://tinyurl.com/6tvk3wt) indicating that

they all suffer from the same problem – using a single time constant, a single-box model. And just that doesn’t fit the data. The model discussed in this thread also has a single time constant, and will, inevitably, also fail to fit the data accordingly.

A two-box model (as discussed in the link above), two time constants (still a fairly crude model, mind you), fits the data quite well with time constants of ~2 and ~45 years. And demonstrates a sensitivity of around 2.4C/doubling of CO2. That model is also a much better match to the actual physics – where the fast climate response is seen in portions of the climate (atmosphere) with low heat capacity, and the slow response time is seen in portions (oceans) with high heat capacity.
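A minimal sketch of the two-box idea, assuming the step response is a linear weighted sum of two exponentials with the ~2 and ~45 year time constants mentioned above. The weight and sensitivity values below are illustrative assumptions, not the fitted numbers:

```python
# Minimal sketch (assumptions mine) of a two-box step response: a weighted
# sum of a fast (~2 yr) and a slow (~45 yr) exponential. The weight w_fast
# and the sensitivity lam are illustrative, not fitted values.
import math

lam = 0.65                        # K per W/m^2, ~2.4 C/doubling, illustrative
tau_fast, tau_slow = 2.0, 45.0    # years, the two time constants cited above
w_fast = 0.4                      # assumed share carried by the fast box

def step_response(t, dF=3.7):
    """Temperature response (K) at year t to a sustained forcing step dF."""
    fast = w_fast * (1.0 - math.exp(-t / tau_fast))
    slow = (1.0 - w_fast) * (1.0 - math.exp(-t / tau_slow))
    return lam * dF * (fast + slow)

# Early response is dominated by the fast box; equilibrium needs the slow one.
print(round(step_response(5), 2), round(step_response(200), 2))
```

The point of the structure is visible in the two printed values: after a few years only the fast box has responded, so fitting short-period data sees mostly that box, while the equilibrium response requires waiting out the slow one.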

“Remember that all models are wrong; the practical question is how wrong do they have to be to not be useful.” – George Box. A single-box model is not terribly useful.

[UPDATE—ERROR] I erroneously stated that the climate sensitivity found in my analysis of the climate models was 0.3 for a doubling of CO2. In fact, that was the sensitivity in degrees per W/m2, which means that the sensitivity for a doubling of CO2 is 1.1°C. -w.

Phil, re your comment to me May 30, 2012 at 9:54 am.

MT: “What I said is true. I never said that the logarithmic relationship is derived from the Beer-Lambert law, which I agree does apply to optically-thin lines.”

Phil: ‘No you said that the “correct formula can be derived from the Beer-Lambert law” which it can not because the absorption is not optically thin.’

Sorry, but I am not following you. How does the optical thickness of CO2 absorption prevent one from deriving a valid formula for radiation absorption by CO2 from the Beer-Lambert law? I cannot understand an argument that you are not putting, Phil!

MT: “But it also applies to groups of optically-thin lines and for that reason it can be applied to entire absorption/emission wavebands. The thicknesses of the individual lines become irrelevant in that case. Such an application provides a straightforward method of deriving a general formula for the absorption of terrestrial surface radiance by atmospheric CO2.”

Phil: ‘However that absorption band is optically thick so your use of the Beer-Lambert Law to derive a relationship is flawed. The absorption by CO2 in the 15μm corresponds to the moderately strong region and hence has a logarithmic dependence which is what is found experimentally.’

My approach does not focus on one particular frequency of absorption as yours does, but focuses on the sizes of the absorption wavebands instead. I can imagine that the optical thickening effect would broaden that waveband to a degree and render it more shallow to a degree as well, but I doubt that such modifications would be significant. If they were, the whole concept of absorption wavebands would be meaningless. In any case we can take them into account for practical purposes by treating them as effectively constant and incorporating them into the constant ‘g’ in the formula that I presented in my earlier post: A = S.k.[1 – exp(-g.C)].

Phil: ‘http://i302.photobucket.com/albums/nn107/Sprintstar400/CO2spectra-1.gif’

Why have you given me this link to a couple of unexplained graphs that have no assigned authorship and give no explanation of what they are supposed to represent and how they have been generated? They are meaningless to me.
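The two functional forms in dispute, the saturating A = S·k·[1 − exp(−g·C)] and the IPCC logarithmic fit ΔF = 5.35·ln(C/C0), behave very differently at large C. A quick numerical comparison, where S, k and g are arbitrary placeholder values (not anyone's fitted constants):

```python
# Numeric comparison (illustrative constants) of the two disputed forms:
# the saturating exponential A = S*k*(1 - exp(-g*C)) versus the IPCC
# logarithmic fit dF = 5.35*ln(C/C0). S, k, g are placeholders, not
# fitted values from either side of the argument.
import math

def absorbed_bl(C, S=390.0, k=0.08, g=0.005):
    """Beer-Lambert-style saturation: flattens toward S*k as C grows."""
    return S * k * (1.0 - math.exp(-g * C))

def forcing_ipcc(C, C0=280.0):
    """IPCC logarithmic fit: adds ~3.7 W/m^2 per doubling, without limit."""
    return 5.35 * math.log(C / C0)

for C in (280.0, 560.0, 1120.0):
    print(round(absorbed_bl(C), 2), round(forcing_ipcc(C), 2))
```

With each doubling the saturating form gains less and less (it approaches the ceiling S·k), while the logarithmic form adds the same ~3.7 W/m² every doubling; that contrast is the substance of the disagreement above.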

Phil: ‘A good derivation can be found here (note that it starts from the B-L Law): http://www.physics.sfsu.edu/~lea/courses/grad/cog.PDF’

This link purports to give a derivation of the Curve of Growth. It does not give a derivation of the IPCC’s logarithmic formula for radiative forcing from CO2. And again, it is unattributed, has no explanatory introduction, does not state its assumptions and provides no references that might give some key to the context of thought in which its complex mathematical arguments might have been conceived. To me it is meaningless technobabble, I’m afraid.

I do not deny that the Curve of Growth for radiation absorption by CO2 at 15µm is logarithmic with respect to the relative intensities of the incident and absorbed radiant energies by the individual CO2 molecules. But that is not what the IPCC’s formula is referring to. It is portraying the combined effect of all the CO2 molecules in the atmosphere taken together as logarithmic! That is a completely different matter.

Re your comment to me May 30, 2012 at 10:25 am.

MT: “But I contend that (the B-L formula) would give you a better approximation than the IPCC’s logarithmic expression which does not do what is claimed for it and makes no sense to me.”

Phil: ‘That’s because you don’t understand absorption in an optically thick situation which is the case for CO2 absorption in the atmosphere.’

Then pray enlighten me. How does absorption in an optically thick situation make the IPCC’s logarithmic formula sensible (instead of insensible) and more applicable to atmospheric CO2 than my formula derived from B-L? You still have not explained this and neither have any of the references that you have given me.

Re your comment to me May 30, 2012 at 10:34 am.

MT: “Instead the B-L law allows that the total amount of power that is absorbed from a beam will depend (exponentially) on the total number of absorbent molecules in the beam’s path regardless of their spatial distribution along the beam. Therefore the B-L law is applicable to CO2 molecules in the earth’s atmosphere where the density of the gas varies whereas Beer’s law is not applicable to it as you say.”

Phil: ‘This is true only if that beam path is optically thin (read the page you cited), you have to take account of optical thickness, do yourself a favor and read the article I referenced.’

I have read the page that I cited! I have also read the article that you referenced and it doesn’t show how optical thickness makes the B-L law inapplicable to atmospheric CO2. Neither does it show how the IPCC’s formula has been derived! Surely you are the one who needs to do himself a favour by reading the article that he has referenced to me!

joeldshore says:

May 30, 2012 at 5:36 am

Much appreciated, Joel. What I reported is the sensitivity that fits the climate model results, with a correlation over 0.97 for each of the two models I studied … so you can claim it is “not the correct climate sensitivity”, but it is assuredly the sensitivity that works, and works exceedingly well.

So … why does it almost perfectly emulate the model if it is “not the correct climate sensitivity”??? Can you emulate the model results using what you say is the “correct climate sensitivity”? If not, why not?

Thanks,

w.

Willis Eschenbach says:

Willis: It did not perfectly emulate the model. It systematically misrepresented the magnitude of the model response to volcanic forcings, requiring you to put a “fudge” into your model to make it work better. In fact, that problem with the volcanic forcings was a signal that something was wrong…and that something, as KR is pointing out, is assuming a model with only one timescale when there are at least two timescales (associated with the heat capacities of the atmosphere and ocean mixed layer) that need to be considered.

I am puzzled why you would claim that you found the correct climate sensitivity for the GISS model. Do you think they are not telling you the truth when they run the model to determine the true climate sensitivity of the model? Clearly, the climate sensitivity that you find with your fit of a simpler model to their model, while it may do a reasonable job at fitting (if you ignore the one forcing that operates over a different timescale, the volcanic forcing), does not diagnose the correct equilibrium climate sensitivity of the model. I.e., they measured the equilibrium climate sensitivity and got a different result than you got from your fitting procedure. If you think they did it wrong, you have to demonstrate this. You haven’t because you haven’t directly determined an equilibrium climate sensitivity. You have determined it only indirectly through fitting of their model results to a simpler model. Your simpler model includes only one timescale and what it mainly shows is that such a model is too simple to accurately determine the actual equilibrium climate sensitivity of the model.

And, I should add that you now have corrected the record http://wattsupwiththat.com/2012/05/29/an-observational-estimate-of-climate-sensitivity/#comment-997301 (thanks) and noted that your two estimates of climate sensitivity, one by fitting your model to the GISS model emulation of the instrumental temperature record and the other by fitting your model to the seasonal cycle, produce estimates that differ by about a factor of four. This is further evidence of the deficiencies of your model.

And, they differ in just the way that we expect: I.e., the higher the frequency of response that you look at, the smaller the estimate of climate sensitivity that you come up with. That is because equilibrium climate sensitivity is the zero-frequency limit of the response…and will be systematically underestimated when you try to do it using higher frequency responses using a model that only allows for one relaxation time.

KR:

Your post at May 30, 2012 at 11:50 am says:

You are right. I made a mistake.

Please accept my sincere apology. I did not intend the insult.

Richard

Philip Bradley:

re. your post at May 29, 2012 at 3:12 pm

Correlation is not proof of causation. Live with it.

Richard

Willis must be joking. The reasoning behind this calculation is pathetically foolish, and it would be laughed at if it were submitted to a respectable journal.

The reaction of the earth’s climate to an influx of radiation cannot simply be characterized by a single time constant and a single heat capacity. There are a number of different time constants because there are a number of different mechanisms for the transfer of heat within the system. Some of the time constants are short like the daily heating and cooling of the surface of the land, and ocean, and others associated with deeper penetration of heat into the depths of the land and ocean are much longer. A short term oscillating radiation stimulus will only uncover the time constant associated with the shallowest penetration of heat into and out of the system.
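This frequency dependence is easy to quantify for a single linear box: for dT/dt = (λF − T)/τ driven by F = sin(ωt), the steady-state amplitude ratio is λ/√(1 + (ωτ)²). The sketch below, with parameters assumed purely for illustration, shows how little of the equilibrium sensitivity an annual cycle reveals when the governing time constant is decades long:

```python
# Sketch (assumptions mine): a one-box system with a long time constant,
# driven by oscillating forcing, shows a much smaller apparent sensitivity
# (response amplitude / forcing amplitude) than its true equilibrium value.
# For dT/dt = (lam*F - T)/tau with F = sin(w*t), the steady-state amplitude
# ratio is lam / sqrt(1 + (w*tau)^2).
import math

lam = 0.8       # K per W/m^2, true equilibrium sensitivity (assumed)
tau = 30.0      # years, long (oceanic) time constant (assumed)
period = 1.0    # years: an annual forcing cycle
w = 2.0 * math.pi / period

apparent = lam / math.sqrt(1.0 + (w * tau) ** 2)
print(round(apparent / lam, 4))   # fraction of equilibrium sensitivity seen
```

The printed fraction is well under one percent: an oscillation much faster than the time constant samples almost none of the equilibrium response, which is the criticism being made of the annual-cycle approach.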

A paper by Stephen Schwartz written in 2007 derived a single time constant of 5 years from past data on the earth’s climate. He was forced to admit that even this estimate was incorrect and too small once he was confronted with the fact that his model was oversimplified. The longest time constant is more like 70 years according to many of the climate models.

I don’t see how people can call themselves skeptics and yet endorse such a simplistic and flawed analysis.

Andrew:

Your post at May 29, 2012 at 12:17 pm says;

I agree.

We seem to be starting a semantic argument about the meaning of “causation” which I am not willing to continue.

If it will help, I am willing to accept the amendment to my statement such that it becomes

“The absence of correlation disproves direct causation”.

If that is not acceptable to you then so be it. I am not willing to continue the matter whatever is said.

Richard

Willis,

By the way, I should make it clear that I am not criticizing you for trying to use the simplest model possible to try to learn what you can. As a physicist, I definitely like that sort of approach. However, it is important to recognize the caveat that one should always use the simplest model possible…but no simpler. I think the point here is that these one-box models with a single timescale are too simple for the purpose. In particular, although you can fit it well to data where the forcings are over a fairly narrow range of frequencies, more careful investigation shows that it is deficient…namely, that it will tend not to predict very well for data over a different range of frequencies than what you fit it over. This is why when you fit your model to the GISS model emulation of the instrumental temperature record, you could get a good result as long as you didn’t look at response to a forcing at a different frequency (like the volcanic forcing, or like the long-time response to a doubling of CO2 as is used to determine the equilibrium climate sensitivity).

This is also why you can again get a good result for fitting to the forcing due to the seasonal cycle but will no doubt find that if you use this result to predict the climate sensitivity in a full-fledged climate model, it will not give anything close to the correct result. (And, I would argue, it is likely not to give anything close to the correct result for the real world, although unfortunately we don’t know what the correct result is for the real world, since this is of course what we are trying to find out.)

The main reason, as I noted is the very different heat capacities and resulting timescales for response of the ocean mixed layer and of the atmosphere. (One probably has to worry also about the timescale for response of the deep ocean…But, two timescales would at least be a lot better than one.)

Joeldshore:

I am giving a partial answer to your post (at May 30, 2012 at 11:59 am) which is addressed to me because your post is so wrong I would need to write a book to answer all its errors.

You ask:

I answer:

If you don’t know the answer to that question then all your posts in this thread are pointless.

A flaw would be a significant error of understanding, assumption, calculation and/or interpretation.

If you cannot find a flaw or indicate how a flaw probably exists then don’t bother to critique his model.

You say

What is “strange” is your daft idea that I “believe” anything about Willis’ analysis or climate models.

I know of no flaw in Willis’ analysis although there may be several.

I know that all except at most one of the GCMs is wrong because they each use a different value of climate sensitivity.

Your assertions about my “beliefs” seem to be your psychological projection.

You assert

Your assertion demonstrates that you did not read Willis’ account of his analysis or you failed to understand it. There are no other possibilities.

You say

Clearly, you do not know or understand the difference between heuristic and predictive modelling.

Your suggestion that the climate sensitivity provided by Willis’ analysis should be checked against a climate model is plain daft. Which climate model? They each use a different value of climate sensitivity.

More fundamentally, as I told you, Willis is analysing reality. Any model is an expression of an understanding of reality.

Reality matters more than anybody’s understanding of it. Therefore,

• it is reasonable to assess a model by comparison to the result of Willis’ analysis

but

• it cannot possibly be reasonable to assess the result of Willis’ analysis by comparison to a model.

Richard

Another problem with Willis’ article is that his calculation only includes solar forcing. It neglects the long wave radiation escaping upward into space, because the earth absorbs short wave radiation, and radiates most of what it absorbs upward as long wave radiation. This is different from the solar radiation that is directly reflected. All of the net flow of radiation counts as forcing. This seems to be another basic phenomenon neglected in Willis’ fatally flawed calculation.

I haven’t read all of the posts on this page. It would be very sad if nobody else reading this blog recognized how flawed this calculation is. All I have noticed so far is praise. This is very perplexing.

REPLY: It is not perplexing at all, you are just being your usual concern troll self – Anthony

So you started with a bivariate time series in F and T, and created derived time series ΔT and ∆F by computing first differences. Then to half of the time series you estimated the linear vector autoregressive model ΔT(n+1) = a∆F(n+1) + bΔT(n). You introduced a third variable τ, and nonlinear functions a = a(τ) and b = b(τ). In your modeling, any nonlinear monotonic functions would do as well. Now ∆F(n+1) is correlated with ΔT(n), differing in sign only at the solstices (the lower left “corner” and upper right “corner”), where ΔT changes sign a month after ∆F changes sign. If these had been the original data plotted in chronological order, they would be called “hysteresis” plots, and you’d see the individual years (unless you plotted using thick lines to hide them), but they seem to be the averages by month of year — is that so?

“Correlation usually implies causation” in the sense that, historically, when X and Y variables have been found reliably to be highly correlated they are usually related by a causal mechanism. However, correlation does not imply that changing X will change Y, or that changing Y will change X.

You have done several things that you usually deprecate: you took model output as data (this is what I asked you about); you worked with averages over large spatial and time scales; you assumed that change in temperature is a linear function of change in forcing; you assumed that the change in CO2 changes radiative forcing in the usual way assumed by climate theorists; you assumed that the system was stationary; and you reported the resultant estimate without an uncertainty or standard deviation estimate.

@ George E. Smith

Re your comment to me of May 30, 2012 at 11:18 am:

You tell me that I am ‘missing the whole point’. Actually, I think that is what you are doing.

The secondary emission from CO2 (which you described as taking place between notional thin sheets) does not affect my argument because that is only concerned with the absorption of radiant energy/power from the initial ‘beam’ radiating from the earth’s surface. The secondary emissions and their possible re-absorption by other CO2-molecules do not affect this substantially (since the probability of a re-emitted photon being intercepted by another CO2 molecule that was receptive to it would be extremely small in the earth’s atmosphere) and the primary absorption therefore follows the B-L law as per normal. If the absorbed photons are not really ‘dead’ I think they might as well be for all the difference it would make to the amount absorbed from the primary radiance.

Since the CO2 molecules are mixed up with all the other molecules comprising the atmosphere, once they have absorbed their portion of the primary outgoing radiance from the earth’s surface, that absorbed power has been effectively captured by the whole atmosphere, whose overall mean temperature will be increased accordingly. At this somewhat higher mean temperature the atmosphere will radiate more strongly both to outer space and back to the surface. The amount being re-radiated back to the surface is what constitutes the ‘radiative forcing’ from CO2 in the techno-jargon of the IPCC. In the ideal condition of steady-state equilibrium the amount of this so-called ‘back-radiation’ will be a constant fraction of the amount initially absorbed by the CO2 from the primary surface radiance. This fraction is represented by the symbol ‘p’ in my formula derived from B-L, and is hidden invisibly in the number ‘5.35’ in the IPCC’s formula.

I have tried to address your points as fully as I can here because I’m about to go off on a trip and won’t be able to address any more before I get back in a couple of weeks.

KR says:

May 30, 2012 at 8:04 am

Many thanks for your pointers to interesting sites in your response to me and to Willis.

Willis asked me to show how a diurnal cycle implies even lower sensitivity. Take an area like the Pacific and its diurnal forcing cycle, maybe a few hundred to a thousand W/m2. Its diurnal temperature range is only about 1 degree. The apparent sensitivity is therefore 0.001 to 0.01 degrees per W/m2, which extrapolates to about 0.004–0.04 degrees per CO2 doubling. What’s wrong with this, of course, is that the high frequency of the diurnal cycle doesn’t allow the ocean to adjust to those W/m2. Extending to a year underestimates for similar reasons. The same amplitude of forcing applied over longer and longer periods will produce a larger and larger response, because the response is limited by how the lag interacts with the frequency. You won’t get the full equilibrium response unless the forcing increases to a constant level and doesn’t reverse.
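The diurnal arithmetic above can be written out explicitly; the 100–1000 W/m² bounds are chosen to match the stated 0.001–0.01 K/(W/m²) range, and 3.7 W/m² per CO2 doubling is the usual scaling:

```python
# Worked form of the diurnal-cycle arithmetic: apparent sensitivity is the
# ~1 K diurnal range divided by the diurnal forcing amplitude, then scaled
# by ~3.7 W/m^2 per CO2 doubling. Bounds of 100-1000 W/m^2 are taken to
# match the 0.001-0.01 K/(W/m^2) range stated in the comment.
dT = 1.0                      # K, diurnal ocean temperature range
for dF in (100.0, 1000.0):    # W/m^2, diurnal forcing amplitude bounds
    s = dT / dF               # apparent sensitivity, K per W/m^2
    print(round(s, 4), round(3.7 * s, 4))   # per W/m^2, then per doubling
```

This reproduces the roughly 0.004–0.04 K/doubling figure, and the point of the comment is that this absurdly low number is an artifact of the cycle's high frequency, not a real sensitivity.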

richardscourtney says: “I agree. We seem to be starting a semantic argument about the meaning of “causation” which I am not willing to continue.”

I can see you are frustratedly dealing with other people right now. I merely wanted to point out that to assume uncorrelated variables are independent is as fallacious as assuming causation is demonstrated by correlation.
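A standard counterexample makes the second fallacy concrete (the symmetric sample below is a toy assumption): take Y = X² with X symmetric about zero; Y is completely determined by X, yet the linear correlation is exactly zero:

```python
# Classic counterexample: with X symmetric about 0 and Y = X^2, Y depends
# completely on X, yet their covariance (hence linear correlation) is zero.
xs = [-2.0, -1.0, 0.0, 1.0, 2.0]   # symmetric toy sample standing in for X
ys = [x * x for x in xs]           # Y = X^2: full (nonlinear) dependence
mx = sum(xs) / len(xs)             # mean of X (= 0 by symmetry)
my = sum(ys) / len(ys)             # mean of Y
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
print(cov)   # 0.0: no linear correlation despite exact dependence
```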

Well, even if you aren’t interested, in case anyone else is:

A lack of correlation between two normally distributed random variables that are not known to be jointly normal may mean that they are independent (neither variable is related to the other), but if they aren’t jointly normal it doesn’t. If they aren’t jointly normal, all that can be said is that neither variable appears to be linearly dependent on the other, or dependent in any way that might “look” linear to some extent; that is, if there is a causal relationship it must involve some equation relating them other than a line or an equation that behaves vaguely like one, such as the relationships described at the link I gave above. Of course, it must be a relationship which would not create any apparent linear correlation, so several relationships besides linear ones are also likely ruled out. Whether this is the same thing as saying direct causation is ruled out is not clear to me.

Willis Eschenbach says:

May 29, 2012 at 11:16 am

“Eric Adler says:

May 29, 2012 at 10:38 am

“This post is a joke right?

Climate is a complex system. Reducing the modeled time dependence to a single time constant based on an oscillating forcing is nonsense. A paper by Schwartz based on historical data estimated a single time constant of 5 years, and had to take it back. If there is a single time constant that could describe what is happening it is more like 80 years.”

Your comment is a joke, right?

I say that because claiming something is “nonsense”, with no attempt to even begin to tell us why it is “nonsense”, is a joke in the world of science. I may well be wrong, I have been many times, but you have not said one word about where or how I might have gone off the rails.

w.

PS—As far as I can find out, Schwartz didn’t “take it back”. He modified his estimate to make it ≈ 8 years.

He revised his climate sensitivity so that a doubled CO2 concentration would yield a 1.9 C temperature rise. He admitted that there was more than one time constant, and that the shorter one, which was a few months, similar to yours, was not relevant for the estimation of equilibrium climate sensitivity.

http://www.ecd.bnl.gov/steve/pubs/HeatCapCommentResponse.pdf

“Revised determination of climate sensitivity. The upward revision of the climate system time constant by approximately 70% results, by Eq (1), in a like upward increase in the value of the climate sensitivity from the value given in S07, 0.30 ± 0.14 K/(W m-2) to 0.51 ± 0.26 K/(W m-2), corresponding, for the forcing of doubled CO2 taken as 3.7 W m-2, to an equilibrium increase in GMST for doubled CO2 ΔT2× of 1.9 ± 1.0 K. Although this value is still rather low compared to many current estimates it is much more consistent than the value given in S07 with the estimate given in the Fourth Assessment Report of the IPCC [2007] as “2 to 4.5 K with a best estimate of about 3 K and … very unlikely to be less than 1.5 K”.”

oops, above I wrote:

“Now ∆F(n+1) is correlated with ΔT(n), differing in sign only at the solstices (lower left “corner” and upper right “corner”), where ΔT changes sign a month after ∆F changes sign.”

I forgot that these were deltas, though I wrote them. The places where ΔT changes sign a month after ∆F changes sign are at the Y intercepts (∆F = 0), not at the upper right and lower left extremal points of the ellipses.

Willis Eschenbach said @ May 30, 2012 at 12:55 pm

How odd! That’s the same result that I get from MODTRAN.

Capo: “Willis, please could you explain your formula

ΔT(n+1) = λ∆F(n+1)/τ + ΔT(n) exp(-1/τ)? Do you have any sources for that?”

That puzzles me, too. It seems to imply that Mr. Eschenbach thinks of the short-wave radiation as being delivered in a sequence of impulses ΔF_i arriving at times t_i = iΔt, and that the temperature values T_i are treated as the values the temperature assumes immediately after the respective impulses, with ΔT_i = T_i − T_{i−1}.

With those assumptions you could get something akin to Mr. Eschenbach’s equation by starting with a simplified scenario in which ΔF_i = 0 for all i ≠ 0, and in which ΔF_0 is a non-zero value small enough that the system can be considered linear.

In that scenario the forcing always equals the same constant value before t_0, so it is reasonable to assume that before t_0 the T_i’s had values at most negligibly different from equilibrium – i.e., that before then the ΔT_i’s were negligible – since a time whereof the memory of man runneth not to the contrary.

In response to the step change in radiation-impulse magnitudes at t_0, though, the surface-temperature sample values begin to increase, asymptotically approaching a value at which the resultant change in radiation (and other heat) loss by the surface equals the change in incoming radiation. I assume you will accept without proof that the approach to that value is exponential, with a time constant of, say, τ, which means that for i‘s such that ΔF_i = 0:

ΔT_i = ΔT_{i−1} e^(−Δt/τ). (1)

If the incremental sensitivity of temperature to small changes in (incoming short-wave) radiation is λ, the limit it thereby approaches is λΔF_0, and, bearing in mind the initial slope of an exponential with time constant τ, we can conclude that the initial response of the temperature to the radiation-impulse increase is given by

ΔT_0 = λΔF_0 Δt/τ.

Reflection reveals that this relationship holds in general for any i at which an isolated impulse arrives:

ΔT_i = λΔF_i Δt/τ. (2)

Finally, linearity enables us to apply superposition to (1) and (2) to arrive at the following approximation to Mr. Eschenbach’s equation for situations in which ΔF_i and ΔT_{i−1} are not in general zero:

ΔT_i = λΔF_i Δt/τ + ΔT_{i−1} e^(−Δt/τ).

This differs from his in that I (harmlessly, I believe) displaced the indexing by one period and that I included a dimensioned numerator in the exponent to make the exponent dimensionless. Without having thought it through completely, I sense that in the context he’s using his equation he’ll arrive at a delay that’s about a half month too high, but that’s a quibble—and I haven’t proven to myself that it’s true.
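As a numerical aside, with parameter values assumed purely for illustration, iterating the recursion that Capo quoted, ΔT(n+1) = λΔF(n+1)Δt/τ + ΔT(n)·exp(−Δt/τ), for a single sustained forcing step shows the cumulative temperature approaching roughly λ·ΔF, with a small excess because Δt/τ only approximates 1 − exp(−Δt/τ):

```python
# Iterating the quoted recursion for a one-time forcing step DF at n = 0
# (parameters are illustrative). The cumulative temperature T, the sum of
# the dT's, approaches lam*DF*(dt/tau)/(1 - exp(-dt/tau)), i.e. roughly
# the equilibrium lam*DF; the ~9% excess is the dt/tau discretization.
import math

lam, tau, dt = 0.5, 6.0, 1.0   # K/(W/m^2), months, month (all assumed)
DF = 3.7                       # W/m^2, sustained step entered as one impulse
decay = math.exp(-dt / tau)

dT, T = 0.0, 0.0
for n in range(600):
    step = DF if n == 0 else 0.0        # forcing *change* is zero after n = 0
    dT = lam * step * dt / tau + dT * decay
    T += dT

print(round(T, 2), round(lam * DF, 2))  # cumulative vs equilibrium response
```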

Joe Born:

I am writing to ask for a clarifying statement concerning your post at May 31, 2012 at 5:37 am.

I follow the maths but I do not understand your point.

Your post begins by asking for an explanation of Willis’ equation and says;

“It seems to imply that Mr. Eschenbach thinks of the short-wave radiation as being delivered in a sequence of impulses”.

It then says;

“the temperature values T_i seem to be treated as the values the temperature assumes immediately after the respective impulses”

and concludes

“With those assumptions you could get something akin to Mr. Eschenbach’s equation by starting with a simplified scenario …”

OK. I get all that, but it seems your argument agrees with the method Willis has adopted.

Are you disputing the assumptions which you state? They seem reasonable to me.

I note your point that in Willis’ model the surface temperature will rise exponentially until radiation loss from the surface increases to halt the surface temperature rise. But so what? That would seem to govern climate sensitivity, would it not? So, it seems that I am missing the point you are trying to make.

I could continue, but I have said sufficient to demonstrate that I have failed to understand the point you are making.

Hence, I would be grateful for a succinct statement of your substantive point. Thanking you in anticipation.

Richard

As an expansion on this discussion, a two-box/two-time-constant model (as opposed to the one-box/single-time-constant model W. Eschenbach is using here) fit to forcings and the southern oscillation index (http://tinyurl.com/7t8rs8b) is quite capable of reproducing the observed temperatures. The estimated climate sensitivity from that slightly more complex model, one that better matches the physics (fast-responding atmosphere, slow-responding oceans), is ~2.6C/doubling CO2.

The Pompous Git:

re. your comment at May 30, 2012 at 11:59 pm about the similarity of Willis’ result (which assesses total climate sensitivity) and your MODTRAN result.

That similarity would seem to suggest the water vapour feedback on climate sensitivity is zero (or close to zero).

Richard

richardscourtney (May 31, 2012 at 7:03 am):

That the data permits almost any interpretation one cares to make can be traced to a fundamental error in the construction of modern climatology. This is that the models reference no statistical population. Absent the associated population, the models cannot be tested.

KR:

Thank you very much for your comment at May 31, 2012 at 6:27 am which says;

You point to a fundamental problem with most of climate science; viz. the data permits almost any interpretation one wants to make.

Very recently (on another thread of WUWT) I have been involved in a discussion where an individual is certain his model of the carbon cycle must be ‘right’ because it fits the empirical data: his certainty is not altered by the fact that I have also modeled the carbon cycle in 6 other – and each different – ways and each of those models also matches the same data.

Your analysis provides a very different result from that of Wills, and that difference is because you and he use different assumptions. The problem is to determine which if either of those analyses is right.

All we can do is to try to falsify both your and Willis’ analyses. In this discussion we are trying to find fault with Willis’ analysis. Perhaps you could ask Anthony to publish your analysis on WUWT so we can attempt to perform the same service for you?

Richard

Joe Born,

thank you very much for your helpful comments. OK, my problems with the units are solved: Delta_T is not the temperature anomaly relative to an initial state; it seems to be some sort of derivative of T, in units of °C/month. So the units match on both sides.

Given your interpretation as true, the exciting question now is:

What is lambda in Willis’ formula? With regard to the short time intervals, it seems to be a “super-fast” climate sensitivity, where feedbacks have almost no time to get in. This could explain why Willis’ value comes close to the climate sensitivity for CO2 only without feedbacks (about 1°C).

But I think there’s a further problem:

You wrote “which means that for i‘s such that \Delta F_i=0:

\Delta T_i=\Delta T_{i-1} e^{\frac{-\Delta t}{\tau}}”

Delta F_i=0 means there is no further incremental step in forcing. But the system goes on getting warmer until the new equilibrium temperature is reached. Why? Because the former incremental steps of forcing are still operating. It’s an effect which should be attributed to climate sensitivity, but in Willis’ formula it gets separated from lambda and put in the second term. Again, low values for lambda are no longer surprising. In my opinion Willis’ lambda is just a fit parameter; it doesn’t have the meaning of the usual climate sensitivity.
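Joe Born’s decay formula is easy to check numerically. The sketch below is my own illustration (the parameter values are made up, not Willis’ fitted ones): after one sustained forcing step, the temperature keeps rising toward lam*F even though each successive monthly change shrinks by the factor exp(-1/tau).

```python
import math

# One-box step response (illustrative values, not Willis' fit):
# T(t) = lam*F*(1 - exp(-t/tau)), so T keeps rising toward lam*F even
# though each successive monthly *change* shrinks by the factor exp(-1/tau).
lam, tau, F = 0.5, 10.0, 3.7   # K/(W/m^2), months, W/m^2 step at t=0
T_prev = 0.0
for month in range(1, 7):
    T = lam * F * (1 - math.exp(-month / tau))
    dT = T - T_prev
    print(month, round(T, 3), round(dT, 3))   # T rises, dT decays
    T_prev = T
```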

Best regards

richardscourtney says:

May 31, 2012 at 7:03 am

KR:

Thankyou very much for your comment at May 31, 2012 at 6:27 am which says;

….

“Your analysis provides a very different result from that of Wills, and that difference is because you and he use different assumptions. The problem is to determine which if either of those analyses is right.

All we can do is to try to falsify both your and Willis’ analyses. In this discussion we are trying to find fault with Willis’ analysis. Perhaps you could ask Anthony to publish your analysis on WUWT so we can attempt to perform the same service for you?

Richard”

Richard,

There are a number of scientific points which have been made against Willis’ post here and have not been addressed by Willis or anyone else.

1) A model with a single time constant is clearly inadequate to describe the climate of the earth. It should have at least two time constants to describe the radically different characteristics of energy absorption by the ocean and atmosphere. This point was made in KR’s link to Tamino’s empirical analysis of the response of the global temperature to long and short duration forcings.

There is no need to publish this material on WUWT. It is freely available to any reader here, including Willis.

http://web.archive.org/web/20100104073232/http://tamino.wordpress.com/2009/08/17/not-computer-models/

It is clear that the effects of the longer time constant involving temperature change of the ocean, over the long term, will not appear in the analysis if you are only looking at annual variations in forcing or other short-term data like volcanic eruptions. That is why a two-box model, rather than a one-box model, is used. In a previous post I pointed out that Stephen Schwartz also erroneously used a one-box model in his paper, but when this was pointed out to him, he corrected himself. The climate sensitivity he derived was 1.9C for a doubling of CO2.

2) Willis seems to have been looking at only one forcing component: arriving solar radiation minus the reflected radiation. This does not represent all of the forcing components needed to explain the evolution of the climate.

If no answer to these criticisms is presented, it would appear that there are none.
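For readers wondering what a “two box” model looks like concretely, here is a minimal sketch of the fast-atmosphere/slow-ocean idea in point 1. All parameter values (c_a, c_o, lam, k) are illustrative choices of mine, not fitted to any dataset.

```python
# Minimal two-box energy-balance sketch: a fast, low-heat-capacity
# "atmosphere" box coupled to a slow "ocean" box, driven by a step forcing.
# All parameter values are illustrative, not fitted.
c_a, c_o = 1.0, 50.0   # heat capacities (arbitrary units)
lam = 1.2              # radiative restoring, W/m^2 per K
k = 0.7                # atmosphere-ocean coupling, W/m^2 per K
F = 3.7                # step forcing, W/m^2 (roughly a CO2 doubling)
dt = 0.05              # time step, years
Ta = To = 0.0
for _ in range(int(1000 / dt)):          # run long enough to equilibrate
    dTa = (F - lam * Ta - k * (Ta - To)) / c_a
    dTo = k * (Ta - To) / c_o
    Ta, To = Ta + dTa * dt, To + dTo * dt
print(round(Ta, 2), round(To, 2))        # both approach F / lam = 3.08 K
```

Both boxes end at the same equilibrium F/lam; what the second box changes is the path, adding a slow multi-century tail that a single time constant cannot represent.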

richardscourtney – “…the data permits almost any interpretation one wants to make”

I would have to disagree. Eschenbach’s one-box model (which appears to be based upon, or quite similar to, Schwartz’s model) produces widely different results when looking at forcings of different time scales – such as seasonal, CO2, volcanic, etc. And in fact when looking at longer term forcings (http://wattsupwiththat.com/2011/05/14/life-is-like-a-black-box-of-chocolates/) Eschenbach had to introduce an arbitrary scaling factor to the volcanic forcing to make his model work – volcanic influences (and seasonal) have a different time scale than CO2 and solar, and the model mismatch appears to be due to the fact that the climate itself has different time scales (ocean and atmosphere), and a single parameter model such as Eschenbach’s won’t work well. Different results (of integer multiples) from different time scale forcings are an indication of a serious issue with the model.

On the other hand, using a two-box model (http://tinyurl.com/7t8rs8b – an analysis from Tamino, not mine), no arbitrary forcing scale constants are needed; dropping (or using alone) any single forcing shows clear statistical fit issues (which is what should be expected), but using all forcings matches behaviors quite well. The data and the physics constrain the interpretations, and the strength of your conclusions. All models are wrong – but models that work under the constraints of the data may be quite useful. And in this case the two-box model (which at least acknowledges different response times for different climatic compartments) does a much better job.

KR:

As you point out, a model may be useful. However, the model that is the focus of Willis’s article is not among the useful ones. It is useless in the sense of conveying no information to a policy maker about the outcomes from his/her policy decisions. The lack of information is a consequence of the unobservability of the equilibrium temperature.

George E. Smith; says: May 29, 2012 at 6:27 pm

The logarithmic “climate sensitivity” is a point where Phil and I part company.

A fixed Temperature increase per doubling of CO2, means a change of deltaT for CO2 going from 280 ppm, to 560 ppm. It also means going from 1 ppm to 2 ppm, or from one CO2 molecule per cubic metre, to two CO2 molecules per cubic meter.

“Approximately logarithmic,” doesn’t mean anything; “logarithmic” has a precise meaning.

Actually we don’t part company at all, and I agree that the same sensitivity formula doesn’t apply when going from 1-2 ppm and going from 280-560 ppm. The formula that is appropriate is the ‘curve of growth’: at very low concentration (e.g. 1-2 ppm) you’re in a linear regime; as concentration grows other terms become dominant and the dependence passes through a logarithmic regime until at higher concentration the dominant terms give a square-root dependence. For the 15μm band our current atmosphere is in the intermediate logarithmic regime; the note I linked to shows a fairly clear derivation of this dependence.

http://www.physics.sfsu.edu/~lea/courses/grad/cog.PDF

The figures on pages 3 and 5 show it quite clearly.
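Phil’s three regimes can also be reproduced with a few lines of numerical integration. This is my own sketch of a curve of growth, not the derivation in the linked note: a Gaussian (Doppler) line core with weak Lorentzian wings, where W(u) is the integrated absorbed fraction for absorber amount u. Doubling u roughly doubles the absorption in the weak regime, barely grows it in the saturated (log-like) regime, and multiplies it by about sqrt(2) once the wings dominate.

```python
import numpy as np

# Numerical curve-of-growth sketch (my construction): Gaussian (Doppler)
# line core plus weak Lorentzian wings. W(u) is the equivalent width,
# i.e. the integral of the absorbed fraction, for absorber amount u.
def equivalent_width(u, a=1e-3, x_max=10000.0, n=400001):
    x = np.linspace(-x_max, x_max, n)
    profile = np.exp(-x**2) + a / (np.pi * (x**2 + a**2))
    return np.sum(1.0 - np.exp(-u * profile)) * (x[1] - x[0])

# Growth of absorption when the absorber amount doubles, in each regime:
r_weak   = equivalent_width(2e-5) / equivalent_width(1e-5)  # ~2: linear
r_mid    = equivalent_width(2e3)  / equivalent_width(1e3)   # slow, log-like
r_strong = equivalent_width(2e8)  / equivalent_width(1e8)   # ~sqrt(2): wings
print(round(r_weak, 2), round(r_mid, 2), round(r_strong, 2))
```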

richardscourtney:

My comment was not a criticism of Mr. Eschenbach’s approach but primarily an answer to Capo’s question about how Mr. Eschenbach arrived at his equation, which to me, and apparently to Capo, too, was not immediately apparent. I don’t dispute it necessarily, although I used a different approach when I went through the exercise myself (and got a similar sensitivity but, I confess, a little greater rms error, which doesn’t necessarily speak well for my approach). But I do suspect that his having the instantaneous slug of radiation energy (as it theoretically would) cause the instantaneous temperature rise might lead to a lag value whose meaning is questionable.

Capo:

I’m not sure I see the “super-fast” nature you’re talking about other than the theoretically instantaneous temperature changes I just mentioned. As you say, it takes many periods for the temperature samples to respond completely to a one-time change in the radiation-impulse magnitude, so that aspect is not necessarily “super-fast.” Of course, in real life the delays between the surface and, say, the upper troposphere are greater than what Mr. Eschenbach found, but that is not inconsistent with the relatively short delay he finds between the short-wave insolation of the surface and the surface’s temperature response. (No doubt I’ve misunderstood your point.)

I’m afraid I also failed to comprehend your point about sensitivity and lambda. Could you take another run at that, maybe from a different direction?

“”””” Phil. says:

May 31, 2012 at 8:53 am

George E. Smith; says:

May 29, 2012 at 6:27 pm

The logarithmic “climate sensitivity” is a point where Phil and I part company……”””””

Now you’ve got me all confused Phil .

Over what region of the mathematical function: y = log (x) , does it have a constant slope ,

y = m(x)+C and in what region does it follow y = a(x)^0.5 ?

I always thought y = log (x) tracked y = log (x) for all values of (x)

Or are you saying climate sensitivity is digital; it is the Temperature rise for CO2 going from 280 ppm to 560 ppm (doubling) and for no other values of CO2 ?

Can we just say it is “non-linear”? The experimental data certainly agrees with that; and it is too noisy to confirm any actual mathematical formula, over any range of values.

Magic Turtle says: May 30, 2012 at 1:02 pm

Sorry, but I am not following you. How does the optical thickness of CO2 absorption prevent one from deriving a valid formula for radiation-absorption by CO2 from the Beer-Lambert law? I cannot understand an argument that you are not putting, Phil!

Because the B-L law doesn’t hold for the optical thickness of the CO2 absorption in our present atmosphere. Your citation explicitly says so too!

MT: My approach does not focus on one particular frequency of absorption as yours does, but focuses on the sizes of the absorption wavebands instead. I can imagine that the optical thickening effect would broaden that waveband to a degree and render it more shallow to a degree as well, but I doubt that such modifications would be significant.

Your imagination and doubts are rendered moot by the available experimental evidence!

From the Wiki cited by you:

“The derivation assumes that every absorbing particle behaves independently with respect to the light and is not affected by other particles. Error is introduced when particles are lying along the same optical path such that some particles are in the shadow of others. This occurs in highly concentrated solutions. In practice, when large absorption values are measured, dilution is required to achieve accurate results. Measurements of absorption in the range of Iℓ/Io=0.1 to 1 are less affected by shadowing than other sources of random error. In this range, the ODE model developed above is a good approximation; measurements of absorption in this range are linearly related to concentration.

At higher absorbances, concentrations will be underestimated due to this shadow effect unless one employs a more sophisticated model that describes the non-linear relationship between absorption and concentration.”

As shown in the absorption spectra for current atmospheric conditions and doubled CO2 concentration which I linked before:

http://i302.photobucket.com/albums/nn107/Sprintstar400/CO2spectra-1.gif

the majority of the band has Iℓ/Io much less than 0.1, thus as your own cite states “a more sophisticated model that describes the non-linear relationship between absorption and concentration” is needed. The curve of growth which I supplied the derivation for is exactly that, at current concentrations of the atmosphere the CO2 band is in the logarithmic regime. The Q-branch saturates even at 1ppm.
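The “shadowing” saturation in that Wikipedia passage can be seen with one line of arithmetic. A sketch with illustrative optical depths only: the absorbed fraction 1 - exp(-tau) follows the linear dilute-limit prediction only while tau is small.

```python
import math

# Saturation ("shadowing") sketch: absorbed fraction 1 - exp(-tau)
# versus the dilute-limit linear prediction tau. Illustrative values only.
for tau in (0.01, 0.1, 1.0, 10.0):
    absorbed = 1.0 - math.exp(-tau)
    # the ratio -> 1 in the dilute limit, collapses once the band saturates
    print(tau, round(absorbed, 4), round(absorbed / tau, 3))
```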

KR says:

May 30, 2012 at 8:27 am

Thanks, KR. I don’t understand why you (and others) keep saying my results are “unrealistic” or “wrong”. My results fit the observations quite closely. So what do you mean when you say that they are “unrealistic”? Unrealistic (in my world) means that they don’t agree with reality … but my results agree with reality.

So you’ll have to make your objection clearer. My model reproduces the annual cycle, so no “two-box” model is needed … but why not? If you say a “two-box” model is necessary, why is it not necessary for the annual cycle?

Look, KR, I don’t claim to have all the answers here. I’m struggling to understand this. But you can’t claim my results are “unrealistic”, the problem is that they are realistic …

w.

joeldshore says:

May 30, 2012 at 1:26 pm

Joel, the volcanic adjustment is just the icing on the cake. I can fit the model results, with a slightly poorer correlation, with no such adjustment. Other than a slight difference following the volcanic eruptions, my simple model emulates the model nearly as well as when I adjust for the models treating volcanic forcing differently than they treat radiative forcing.

So yes, I can emulate the model, with a correlation well over 0.90, with or without the volcanic adjustment. The volcanic adjustment is trivial, a 7% reduction.

You still don’t seem to have grasped the implications of that finding. It means that the models are NOT doing some kind of thing with two timescales as you assume. If they were, I couldn’t emulate the models with any accuracy at all … but I can emulate them with high accuracy with only one timescale.

Nor does it reduce the implications of my work to point out that the volcanic adjustment increases the accuracy. That is simply more evidence that my model is accurate, and that somewhere in the bowels of the models they treat volcanic forcing slightly differently than radiative forcing … which is no surprise at all, they depend on very different mechanisms.

w.

George E. Smith; says: May 31, 2012 at 9:48 am

“”””” Phil. says:

May 31, 2012 at 8:53 am

George E. Smith; says:

May 29, 2012 at 6:27 pm

The logarithmic “climate sensitivity” is a point where Phil and I part company……”””””

Now you’ve got me all confused Phil .

Over what region of the mathematical function: y = log (x) , does it have a constant slope ,

y = m(x)+C and in what region does it follow y = a(x)^0.5 ?

y = fn(x) behaves as y = m(x) as x➔0 and as y = a√x as x➔∞; in a transition region in between, y ≅ log(x).

I always thought y = log (x) tracked y = log (x) for all values of (x)

Or are you saying climate sensitivity is digital; it is the Temperature rise for CO2 going from 280 ppm to 560 ppm (doubling) and for no other values of CO2 ?

As shown above there is a range of values for which the dependance is logarithmic, the present conditions fall in that range for the CO2 absorption band.

I suggest you read the note I referenced George, it’s explained there.

Can we just say it is “non-linear”? The experimental data certainly agrees with that; and it is too noisy to confirm any actual mathematical formula, over any range of values.

The data for the CO2 absorption isn’t noisy at all, George.

George E. Smith says:

George, I have to admit I am a bit puzzled by your hangup on this. We’ve had discussions about it several times and the basic points are not that complicated. The log(x) is not a fundamental law of nature; it is an approximation that holds pretty well in a certain regime (that regime being, as I understand it, when the primary absorption band contributing to the radiative forcing is saturated in the middle but not in the wings). It turns out that this is the regime that we are currently in with CO2 (and remain in over a fairly broad range of going down or up in temperature).

However, in the dilute regime, a better approximation is that the forcing is linear in the concentration. And, then there is a high concentration regime where the forcing is roughly proportional to the square root of the concentration.

These are, of course, approximations, and there are deviations from them. In fact, the deviations are such that if you use the logarithmic formula with the values that hold for the regime we are in and extend it up past doubling, then it does underestimate the forcing a bit at these higher values, but the underestimate is not that bad and, relative to other uncertainties, such as the climate sensitivity, it is not worth making too big a fuss about.

Willis Eschenbach – The issues I have with single-box models such as the one you fitted in this thread are that they give different answers for different time scales (http://wattsupwiththat.com/2010/12/19/model-charged-with-excessive-use-of-forcing/, http://wattsupwiththat.com/2011/01/17/zero-point-three-times-the-forcing/, here), they require arbitrary scaling to fit (http://wattsupwiththat.com/2011/05/14/life-is-like-a-black-box-of-chocolates/, where you iteratively modified the volcanic forcing), and they do not represent known and fairly simple physics – that the atmosphere, with very little thermal mass, warms and cools at a different rate than the oceans.

A two-box model is a huge improvement in fitting forcings of different temporal variation, and doesn’t need arbitrary scaling of those forcings with different time scales. I will note that it is, as well, likely insufficient – I recall an article (can’t lay my hands on the reference at the moment) wherein a five-box model was the minimum complexity for ocean thermal diffusion in a 1D thermal model (constrained by observed temperatures), as only at that complexity or higher did the variation from model to model become small WRT the values.

But the two-box model (http://tinyurl.com/7t8rs8b, http://tinyurl.com/6tvk3wt) is a huge improvement upon the single-box model in all regards, in all categories of fit and residuals. And hence sensitivity estimated from that model fit (~2.5C/doubling of CO2) is going to be correspondingly more accurate than from your single-box fit.

“Nor does it reduce the implications of my work to point out that the volcanic adjustment increases the accuracy. That is simply more evidence that my model is accurate, and that somewhere in the bowels of the models they treat volcanic forcing slightly differently than radiative forcing …”

That’s evidence if and only if (IFF) such volcanic-specific adjustments are made in other models – otherwise special case treatments such as those are evidence that your model is less accurate. The two-box model (here: http://tinyurl.com/7t8rs8b) does not need to treat volcanic forcing differently, and fits the data better, which I feel is solid evidence that it’s a better model. Given that code for many of the commonly used models is directly available, can you point to such exceptional treatment and adjustment in those models? Over and above the observationally supported forcing efficiencies primarily related to forcing locations?

Willis says:

Willis: When you are critiquing other people’s work, whether it be Nikolov and Zeller or climate scientists, you seem to understand that being able to fit one piece of data does not indicate that a model is realistic in all respects. Not surprisingly, these same principles hold true even for your models!

Because the annual cycle has a single frequency in it. However, one hint that your model would fail with multiple frequencies is that you got a very different climate sensitivity when you fit to the GISS model emulation of the instrumental temperature record than you got when you fit to the annual cycle. Another hint is how in that situation, you had to introduce a fudge factor to correctly model the response to the volcanic forcing, which came in at a shorter timescale (higher frequency).

That is simply because the volcanic eruptions are rare enough and short-lived enough that doing a bad job modeling them does not penalize you very much.

That is because you are sticking to modeling phenomena occurring in one narrow range of frequency. In such a case, climate sensitivity and relaxation time can trade off with each other. I.e., you can get a good fit to higher-frequency phenomena by using an artificially-low climate sensitivity.

No…It is evidence of how your model fails when you have two very disparate time frequencies. The evidence of your model’s limitations is clear if you are willing to see it. If it was someone else’s model reaching a conclusion that you didn’t like (be it completely denying the greenhouse effect or arguing for a climate sensitivity in the range that the IPCC says it is), I don’t think you would have any difficulty seeing the limitations. I guess it is always harder to see the problems in one’s own work.

Eric Adler says:

May 30, 2012 at 2:05 pm

Look, Eric, if what you say were true, I wouldn’t be able to get a very good fit to the annual data using a single time constant.

So perhaps you can explain why my results match so well with reality?

Look, guys, I may be wrong about many things. But when a simple equation matches reality, you can’t just say I must be joking, as though that wiped out the excellent correlation between my model and the reality. If my model is a joke, then why is the fit so good?

w.

Jim D says:

May 30, 2012 at 6:01 pm

No, I didn’t. I asked for a citation to your claim. It seems you have none, which is fine, but in that case you need to present your actual calculations.

w.

Joe Born:

Thankyou for your answer at (May 31, 2012 at 9:14 am) to me.

That is more than I asked for, and I appreciate it.

Richard

KR says:

May 31, 2012 at 8:33 am

In fact, the adjustment in volcanic forcing was trivial (7%), and my model gives a correlation of ~0.97 without the adjustment, and 0.995 with the adjustment.

So contrary to your claims, the single parameter model actually works very, very well. I know you and Joel don’t like the fact that it works so well, but it does. It fits the model output to a very high degree of accuracy.

Finally, I’ve shown that the climate models don’t reproduce the post-volcano temperature changes very well at all … so obviously it is their model, not mine, that’s having trouble with the volcanoes.

w.

Terry Oldberg says:

May 31, 2012 at 9:50 am

It also doesn’t provide any information on the effect of gamma rays on marigolds … but since it was not designed to provide information to policy makers nor give information on gamma rays, so what? Is it your claim that all science must provide information to policy makers?

w.

Willis:

Thank you for taking the time to reply. When a model is legitimately scientific, it provides information to the users of this model about the unobserved but observable outcomes of statistical events; that they have this information gives these users a basis for controlling the outcomes. The equilibrium temperature fails the test of observability. Thus, while public officials appear to believe that models such as yours provide them with a basis for controlling Earth’s surface temperature, this belief is mistaken.

Terry.

Willis says (to Jim D):

Jim D presented the basic calculation here: http://wattsupwiththat.com/2012/05/29/an-observational-estimate-of-climate-sensitivity/#comment-997506 You could argue that his way of estimating the climate sensitivity is a little more naive than yours, but it is likely not that different in the result it gets: I just tried estimating the climate sensitivity from the seasonal cycle using his naive approach of just looking at the ratio of the amplitude of the temperature cycle over the amplitude of the forcing cycle, and I get estimates of the climate sensitivity for CO2 doubling of 0.11 C and 0.29 C from this for the Southern and Northern hemispheres, which are not far off from your estimates of 0.2 and 0.4 C, respectively.

His basic point is in line with what all of us are saying: the higher the frequency of the data that you use to estimate climate sensitivity using a model with a single relaxation time scale, the lower the value of climate sensitivity you will get. So, for the diurnal cycle over the ocean, as Jim points out, you’d get an estimate on the order of 0.01 C per CO2 doubling. Using the annual cycle, you get an estimate on the order of 0.3 C per doubling. Using the instrumental temperature record, you get an estimate on the order of 1 C or so per doubling.

Do you see the pattern here?
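That pattern falls straight out of a single-time-constant model: for sinusoidal forcing at angular frequency w, the one-box response amplitude per unit forcing is lam/sqrt(1 + (w*tau)^2). A sketch with illustrative values (lam and tau here are made up, not fitted to anything):

```python
import math

# Apparent sensitivity from amplitude ratios in a one-box model
# dT/dt = (lam*F - T)/tau: for forcing ~ sin(w*t), the response amplitude
# per unit forcing amplitude is lam / sqrt(1 + (w*tau)^2).
# Illustrative parameters: lam = 0.8 K/(W/m^2), tau = 5 years.
lam, tau = 0.8, 5.0
apparent = {}
for period in (1.0, 10.0, 100.0):   # forcing period in years
    w = 2 * math.pi / period
    apparent[period] = lam / math.sqrt(1 + (w * tau) ** 2)
    print(period, round(apparent[period], 3))
# The shorter the forcing period, the smaller the apparent sensitivity;
# only slow forcing reveals the full lam.
```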

KR:

I am grateful for your taking the trouble to answer me in your post at May 31, 2012 at 8:33 am.

Unfortunately, I fail to see how you have refuted my point which was

Indeed, despite your claim to the contrary, you have provided another illustration of my point.

Perhaps as you say, Willis’ model “produces widely different results when looking at forcings of different time scales”. But so what? It works over the time scale he assessed.

The worst one could say from that is that Willis’ model has limited usefulness because it only works over short time scales.

But you say much more than that. You assert

I have an important observation about anything “Tamino” posts on his blog which I mention below. Before that, I observe that you and Tamino fail to show Tamino’s model is validated for all time scales.

So, why is a “two-box” model right? You say it addresses annual effects and oceanic thermal delay. OK, let us assume that is true. In that case why is a ‘three-box’ model which assesses biological response delay not right? And why is a “four-box” model which …. etc.

I understand you are likely to say Willis’ model only assesses the first order effect and Tamino’s model also assesses the second order effect. But there is no determination of the relative magnitudes of the third to n order effects.

There is no way to determine that Tamino’s model is more useful than Willis’ model when the relative magnitudes of the third to n order effects are not known. All one can say is that both models fit your criterion that “The data and the physics constrain the interpretations, and the strength of your conclusions”.

As Willis says to you at May 31, 2012 at 10:01 am

Indeed, Terry Oldberg states the real difficulty when he says to me at May 31, 2012 at 8:15 am

Subsequent to his having seen your post that I am answering at May 31, 2012 at 9:50 am Terry Oldberg says to you:

He is saying the same as me; i.e. the data does not constrain the model to an adequate degree for it to be useful “to a policymaker” because much, much too little is known. Or, as I succinctly phrase it,

the data permits almost any interpretation one wants to make.

As an addendum, I add my observation about anything Tamino posts on his blog.

“Tamino” is an academic and, therefore, his career benefits from anything he publishes in the peer-reviewed literature: it increases his publication count. But anything published under a false name on his blog cannot be published in the peer-reviewed literature. So, when “Tamino” posts work on his blog he is declaring that he has decided the work is so unworthy that it cannot be raised to a standard worthy of publication. The possibility exists that he may have made a misjudgement about the value of a particular item he has chosen to post on his blog, but

I need some encouragement to bother evaluating his stuff that he (n.b. himself) has thrown away as being worthless.

Richard

richardscourtney – As I said above, the data doesn’t permit any interpretation, but rather constrains interpretations. Which is a major point in developing models – using observational data to find the best model available, and rejecting models that perform poorly in terms of that data.

A two-box model certainly isn’t perfect (see my comment at http://wattsupwiththat.com/2012/05/29/an-observational-estimate-of-climate-sensitivity/#comment-998046 regarding ocean thermal models, and I believe most radiative atmospheric models run to about twenty levels [twenty boxes], at which point they are usefully and asymptotically approaching the behavior of an infinite-level model), but it’s far better than a one-box model – the tendency of a one-box model to produce different results at different time scales makes all of those results less certain.

—

Regarding Tamino and his blog – the vast majority of his blog posts are applications of standard textbook statistical analysis to various issues, and I believe are sub-LPU in size (http://en.wikipedia.org/wiki/Least_publishable_unit 🙂 ). I would consider pointing out bad statistical practices a public service – I’m pleased that he feels strongly enough about his field of study to make such posts. I’ll also note that he has on occasion expanded his blog posts into published peer-reviewed work (http://iopscience.iop.org/1748-9326/6/4/044022), and on that basis I’m just going to have to disagree with your valuation.

Joe Born:

Take a look at the one-box model Isaac Held describes here:

http://www.gfdl.noaa.gov/blog/isaac-held/2011/03/05/2-linearity-of-the-forced-response/

(Be careful, the lambda in Held’s equation is the inverse of Willis’ lambda)

If I try to bring it to Willis’ form, I get

dT/dt = lambda*F(t)/tau – T(t)/tau

Linearisation of the differential equation gives

T_i+1 – T_i = (lambda*F_i/tau – T_i / tau) * delta t

The left side is Willis’ delta T_n+1, the problem on the right side is F_i. As far as I understand Willis’ excel sheet, he didn’t use F_i (the total Forcing), but only the monthly increment in forcing.

So he does something brutal: each monthly increment in forcing is calculated in one single step (!!?) of 1 month; in the next month this forcing vanishes and the next monthly increment comes into play. I think this is wrong, because a monthly increment has an effect for a much longer time.

Hence my conclusion, that Willis’ lambda is not the usual climate sensitivity, it’s rather a fit parameter. Similar to Willis’ tau, which fits the data quite well, but it’s not the usual time constant in the one-box model.
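Capo’s linearised equation can be stepped directly. In this sketch (illustrative values of mine), holding the total forcing F_i at a constant step makes the Euler recursion relax to the equilibrium lam*F, which is the sense in which lambda is an equilibrium sensitivity:

```python
# Euler stepping of capo's linearisation
# T_{i+1} - T_i = (lam*F_i/tau - T_i/tau) * dt,
# with the *total* forcing F_i held at a constant step. Illustrative values.
lam, tau, dt = 0.5, 30.0, 1.0    # K/(W/m^2), months, one month per step
F = 3.7                          # W/m^2, sustained step forcing
T = 0.0
for i in range(1200):            # 100 years of monthly steps
    T = T + (lam * F / tau - T / tau) * dt
print(round(T, 3))               # relaxes to the equilibrium lam*F = 1.85 K
```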

Next problem:

In Held’s simple one-box model, c*dT/dt is the energy flux into the ocean. I’m skeptical whether this model can describe the situation used by Willis. For example, if the northern hemisphere warms, there is a further energy flux from the northern to the southern hemisphere, so I think one should use a two-box model as the simplest one.

Willis Eschenbach:

[UPDATE—ERROR] I erroneously stated above that the climate sensitivity found in my analysis of the climate models was 0.3 for a doubling of CO2. In fact, that was the sensitivity in degrees per W/m2, which means that the sensitivity for a doubling of CO2 is 1.1°C. -w.

What is the standard error of the estimate?

KR:

I appreciate your taking the trouble to provide your answer to me that you have posted at May 31, 2012 at 1:04 pm.

You say a two-box model is “better” than a one-box model. Perhaps.

The point I (and I think also Terry Oldberg) am making is that there is no way to define what is an adequate model.

It is a fact that – as Willis says – Willis’ “one box” model DOES work. It DOES emulate physical reality. For sake of argument, I will accept that Tamino’s “two box” model also works (but I have not checked that). And, as I explained, other models are also possible by adding more “boxes”.

Which of the many models (each constrained by the available data) is preferable and for what?

The range of published values of climate sensitivity shows that the data allows interpretation such that the determinations of climate sensitivity vary by an order of magnitude. In my terms, that says the data permits almost any interpretation one wants to make.

At present,

Willis’ determination of climate sensitivity is as valid as any other. Arm-waving about “add another box” or boxes does not change that.

Richard

Capo:

I understand your problem with Mr. Eschenbach’s approach; by presenting it in the way he did, he made it much harder to follow than necessary.

Be that as it may, it turns out that he isn’t really throwing away the forcing history at each step, or at least not any history that matters. Implicitly, the previous-temperature-change term ΔT(n) captures all the forcing history you need. If there was no previous-period temperature change, then the temperature has reached the steady-state value dictated by the cumulative forcing value, so there’s no need to concern oneself with that value anymore; only changes from it are of interest. If there was a temperature change last period, on the other hand, then we know the change was partial execution of an exponential approach to steady-state, and, if we know the time constant, we in essence thereby know the steady-state value to which that forcing history is driving the temperature–and what the forcing history’s contribution to the next temperature change has to be.

So, opaque as it is, his approach is basically sound. And, by limiting himself to changes, he automatically confines himself to a quasi-linear regime in a decidedly non-linear system; he doesn’t have to subtract an average before doing the linear operations, for example.
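
Joe Born’s point, that the previous temperature change implicitly carries the whole forcing history, can be sketched with a toy recursion. The coefficients below (lam, tau) are illustrative placeholders, not Willis’ fitted values:

```python
import math

def one_box_step(dF, dT_prev, lam=0.1, tau=30.0):
    """One step of a one-box lag model (illustrative coefficients).

    dF      : change in forcing this month (W/m^2)
    dT_prev : previous month's temperature change (K)
    lam     : sensitivity (K per W/m^2)
    tau     : time constant (months)

    The exp(-1/tau) factor is how the previous change "remembers"
    the cumulative forcing history, per the discussion above.
    """
    return lam * dF / tau + dT_prev * math.exp(-1.0 / tau)

# A single forcing pulse, then nothing: each month's change is a
# partial execution of an exponential approach to steady state.
dT, changes = 0.0, []
for month in range(120):
    dF = 3.7 if month == 0 else 0.0
    dT = one_box_step(dF, dT)
    changes.append(dT)
```

After the pulse, the per-month change decays geometrically, so once the change reaches zero the temperature has arrived at the steady state the forcing dictates, exactly as described above.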

As to the n-box discussion, I’m afraid I can’t contribute much. Certainly it should be possible to obtain more-accurate results with more-complicated models. And it would be trivial to conjure up a system in which Mr. Eschenbach’s simple approach would seem to find low sensitivity in the presence of a large high-frequency stimulus even though that system’s sensitivity to low-frequency stimuli is actually high. When I do that, though, I’m struck with a sense of unreality. Remember, it is the clouds’ response to surface heating that is supposed to provide the positive feedback on which the grant-financed models’ calamity scenarios are based, and a low-frequency (multi-year) path for such a cloud-response mechanism exceeds my powers of imagination. So I’ve spared myself thinking about that much. Sorry I can’t help.

capo:

The left side is Willis’ delta T_n+1; the problem on the right side is F_i. As far as I understand Willis’ Excel sheet, he didn’t use F_i (the total forcing), but only the monthly increment in forcing. So he does something brutal: each monthly increment in forcing is applied in one single step (!!?) of 1 month; in the next month this forcing vanishes and the next monthly increment comes into play. I think this is wrong, because a monthly increment has an effect for a much longer time.

That’s not much of a problem. Say that the CO2 concentration were doubled over a period of 70 years, 840 months: then instead of a 1-time increment of 3.7 W/m^2 in the forcing there would be 840 increments of (3.7/840) W/m^2; in the model these effects simply sum to the total forcing change, which works out to 1.1 K over 70 years.
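
That superposition claim is easy to check numerically. Here is a minimal sketch, assuming a linear one-box relaxation with illustrative values lam = 0.3 K/(W/m^2) and tau = 30 months (not the fitted ones):

```python
# Forcing ramps up by 3.7/840 W/m^2 each month for 70 years; the
# responses to the 840 small increments sum toward the equilibrium
# response to the full 3.7 W/m^2 doubling.
lam, tau, months = 0.3, 30.0, 840

def relax(forcing):
    """Discrete relaxation of T toward lam*F with time constant tau."""
    T = 0.0
    for F in forcing:
        T += (lam * F - T) / tau
    return T

ramp = [3.7 * (n + 1) / months for n in range(months)]
T_ramp = relax(ramp)   # close to, and a little below, equilibrium
T_eq = lam * 3.7       # equilibrium response: about 1.1 K
```

With these placeholder values T_ramp comes out just under T_eq ≈ 1.1 K, lagging by roughly lam·(3.7/840)·tau, which is the transient shortfall of a ramped forcing.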

Willis’ model can now be used to predict the increase in global mean temp over the next 20 years (conditional on measured increases in CO2), or to predict the 2012 mean temp of any month starting with the temp at any previous month.

Capo says:

May 31, 2012 at 1:56 pm

An interesting question, Capo. In fact, the monthly increment doesn’t “vanish”. Since the increment is included in the following month, and since the increment includes the previous month, what is happening is that the effect of any given increment is slowly decreasing exponentially over time.

My guess, and it’s only that, is that I could get a better fit by using a “fat-tailed” exponential decay, rather than a standard exponential decay … so many ideas, so little time … however, the fit with a standard exponential is quite good.

w.
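
Willis’ speculation above (a “fat-tailed” decay in place of a plain exponential) amounts to swapping the kernel that weights past forcing increments. The shapes below are illustrative, not fitted:

```python
import math

tau = 30.0  # months, illustrative time constant

def exp_kernel(n):
    """Standard exponential decay of a forcing increment's effect."""
    return math.exp(-n / tau)

def fat_kernel(n, p=1.5):
    """A power-law ("fat") tail, normalized to 1 at lag zero."""
    return (1.0 + n / tau) ** (-p)

# Weight remaining after 10 years (120 months): the fat tail keeps
# several times more influence at long lags than the exponential.
w_exp = exp_kernel(120)
w_fat = fat_kernel(120)
```

Both kernels give a good fit at short lags; they differ mainly in how much of each increment’s effect survives years later, which is where a fit to longer records would distinguish them.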

Terry Oldberg says:

May 31, 2012 at 12:48 pm

Thank you, Terry. My model uses only the sun and the clouds to calculate what is happening to the temperature. As such, it’s hard to see how anyone could think it would give users a “basis for controlling the outcomes”, since we cannot control the inputs.

w.

Willis (May 31, 2012 at 3:59 pm):

I’m thinking of your assignment of a numerical value to the equilibrium climate sensitivity (TECS). TECS is the proportionality constant in a functional relation that maps a change in the logarithm of the CO2 concentration to a change in the equilibrium temperature. From the form of this relation, it might appear that one can control the equilibrium temperature by controlling the CO2 concentration.

Matthew R Marler says:

May 31, 2012 at 2:43 pm

That’s an excellent question, Matthew, to which I have no answer or even any plan of attack. I’ll have to think about that.

Since it is the result of an iterative procedure, one which is simultaneously fitting two values (tau and lambda), I’m not sure how to even go about calculating it … suggestions?

w.

Willis:

“Since it is the result of an iterative procedure, one which is simultaneously fitting two values (tau and lambda), I’m not sure how to even go about calculating it … suggestions?”

You have a vector autoregressive model of low dimension. The math is described in a number of books, of which I opened “Time Series Analysis and Its Applications, with R Examples” by Robert Shumway and David Stoffer, p 303. Their R software is available at CRAN. You probably have to estimate a and b in my notation, transform to lambda and tau in your notation, and use the multivariate delta-method to get the asymptotic normal theory approximate variance-covariance matrix for lambda and tau. That’s some matrix multiplications, which you can look up or I can show you in a letter. Or you could just bootstrap it: simulate 1000 realizations that have the same sample statistics as your data, and look at the distribution of the estimates.

Matt

Willis, I assumed that you want to do it yourself. If you want, you can send me the data in Excel or ascii (comma delimited) and I’ll fit the model in SAS.

Matthew R Marler says:

May 31, 2012 at 4:28 pm (Edit)

Matthew, many thanks for the offer. All of the data is in the spreadsheet that I referenced above; here it is again.

w.

CONTINUATION—I have just published a new post which answers many of the comments made on my first analysis. The post does the same analysis, but this time using the actual data rather than the averaged data. The post is “A Longer Look at Climate Sensitivity”.

w.

richardscourtney says:

May 31, 2012 at 3:29 pm

“KR:

I appreciate your taking the trouble to provide your answer to me that you have posted at May 31, 2012 at 1:04 pm.

You say a two-box model is “better” than a one-box model. Perhaps.

The point I (and I think also Terry Oldberg) am making is that there is no way to define what is an adequate model.

It is a fact that – as Willis says – Willis’ “one box” model DOES work. It DOES emulate physical reality. For the sake of argument, I will accept that Tamino’s “two box” model also works (but I have not checked that). And, as I explained, other models are also possible by adding more “boxes”.

Which of the many models (each constrained by the available data) is preferable and for what?

The range of published values of climate sensitivity shows that the data allows interpretation such that the determinations of climate sensitivity vary by an order of magnitude. In my terms, that says the data permits almost any interpretation one wants to make.

At present, Willis’ determination of climate sensitivity is as valid as any other. Arm-waving about “add another box” or boxes does not change that.

Richard”

You are wrong about Willis’ calculation being as valid as any other. The physics of the situation says that there are many different “boxes” that hold differing amounts of energy in the climate system. The speed of transmission of energy between different boxes depends on their identity. You have the atmosphere, different land surfaces, the ocean surface layer which absorbs solar energy, the deep ocean, etc. Each has a different heat capacity and coefficient of energy transfer between it and the adjacent layer. The time scale for setting up a steady state between each pair of adjacent boxes is quite different. It is pretty clear from this physics that a single-box model cannot accurately model the behavior of the earth’s climate system.

This is a pretty convincing argument to anyone with an understanding of the physics, and an open mind. It appears that you are lacking one or the other or both.
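
Eric Adler’s multi-box argument can be made concrete with a two-box sketch: a low-capacity surface box coupled to a high-capacity deep box. All constants here (capacities, coupling, sensitivity) are illustrative round numbers, not measurements:

```python
def two_box(F, months, C1=10.0, C2=200.0, lam=1.0, k=0.5):
    """T1: fast surface box; T2: slow deep box. F in W/m^2.

    Each box has its own heat capacity (C1, C2) and exchanges energy
    with its neighbor at rate k, so each equilibrates on its own
    time scale, as described above.
    """
    T1 = T2 = 0.0
    for _ in range(months):
        dT1 = (F - T1 / lam - k * (T1 - T2)) / C1
        dT2 = k * (T1 - T2) / C2
        T1 += dT1
        T2 += dT2
    return T1, T2

T1_fast, T2_fast = two_box(3.7, 24)    # after 2 years
T1_slow, T2_slow = two_box(3.7, 2400)  # after 200 years
```

After two years the surface box has done most of its adjusting while the deep box has barely moved; a fit over a short record therefore sees mostly the fast time scale, which is the heart of the objection to a single box.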

Eric Adler (May 31, 2012 at 6:12 pm):

It seems to me that there is not a conflict between Richard Courtney’s claim and yours. You claim that the discretization error declines with the number of boxes; that’s true. Richard claims that the numerical value of the equilibrium climate sensitivity (TECS) is indeterminate; that’s also true.

Willis, you asked for a citation for the magnitude of the diurnal cycle of solar forcing and the daily variation of sea-surface temperature. Do you not believe the numbers I gave you, which are fairly straightforward? Given these two numbers you can derive a sensitivity either by your method or my simpler one. You will come out with a very small number to help your cause, but it doesn’t mean much, any more than using the annual cycle does. The point is that you are not going to get climate sensitivity from the annual cycle any more than from the diurnal cycle, because limiting the analysis to the annual frequency also limits the lag time to something shorter, which makes it irrelevant to the years of lag time that actually exist in the climate system. Or are you advocating that the climate system’s lag time is really only a couple of months?
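
Jim D’s frequency argument has a standard quantitative form: a one-box system with time constant tau is a first-order low-pass filter, whose apparent sensitivity at angular frequency w is lam / sqrt(1 + (w*tau)^2). The numbers below are illustrative, not fitted:

```python
import math

lam = 0.8          # K per W/m^2, illustrative equilibrium sensitivity
tau_months = 60.0  # illustrative 5-year time constant

def apparent_sensitivity(period_months):
    """Sensitivity a cyclic analysis would see at the given period."""
    w = 2.0 * math.pi / period_months
    return lam / math.sqrt(1.0 + (w * tau_months) ** 2)

s_diurnal = apparent_sensitivity(1.0 / 30.0)  # daily cycle
s_annual = apparent_sensitivity(12.0)         # annual cycle
s_secular = apparent_sensitivity(1200.0)      # century-scale trend
```

On these numbers the annual cycle recovers only a few percent of the equilibrium sensitivity, and the diurnal cycle almost none, which is exactly why Jim argues the annual cycle cannot reveal a multi-year lag.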

richardscourtney, Eric Adler – There are many ways to judge the relative merits of various models.

– Does one model track the data better than another (all the data, as opposed to the annual data used in testing this one-box model)? That’s a plus.

– Does one model work better than another on data that it hasn’t been refined on; refine it on part of the data set, feed it the forcings for another, and see how it works? For example, different subsets of temporal or frequency coverage? That’s a plus.

– Is one model based on more physics, and/or more constraints, than another? A plus.

– Does one model require ad hoc adjustments, such as arbitrarily rescaling certain forcings to improve fit? That’s a serious minus.

– Does the model have unconstrained free parameters? Not a problem here, but some of Dr. Spencer’s recent work includes models with >20 free parameters, unconstrained by observation – and hence leading to unphysical silliness such as 700 meter oceanic mixed layers. As John von Neumann said, “With four parameters I can fit an elephant, and with five I can make him wiggle his trunk …”. Serious minus; it’s entirely too easy to over-fit the data.

– Does one model fit the observations with better statistical significance? This may require Monte Carlo testing or fairly significant work (judging the Akaike information criterion, for example) to establish, but a plus.

It’s actually fairly straightforward to compare models and judge which is better, which is more likely to be useful.

—

If you reject all modeling, as you appear to, I would despair of your ever being able to cross the street – since your expectations of the world around you are, indeed, models.

Eric Adler – My apologies, I misread your last comment, and which parts were yours and Richard Courtney’s. I would agree with your post WRT Richard’s position.

KR:

As you point out, there are many ways in which to judge the relative merits of various models. As you also point out, prior to forming expectations about the world around us, we must select one of them. There is a logical approach to making this selection. Like researchers in many other fields of inquiry, climatological researchers make this selection illogically and to our disadvantage.

Joe Born

You helped a lot to clarify my thoughts. Thanks.

KR and Eric Adler:

Insults and childish snark are not an acceptable response to logical argument and criticism.

I have shown you are wrong. Live with it.

Richard

Jim D says:

May 31, 2012 at 6:14 pm

I’m sorry, Jim, but I don’t believe your numbers, for two reasons. First, you give no indication of where you got them. That’s why I asked for a citation, so I wouldn’t have to just take your word for something.

Second, I don’t believe you because I know something about Pacific temperatures. You say:

However, here’s a typical record of a couple of months of surface air temperatures from one of the buoys in the TAO moored array in the mid Pacific:

Note that swings of up to 4°C are not uncommon. As you can see, your claim of a 1° variation is simply not true.

Regarding the larger point, will I get a different climate sensitivity and time constant when I look at daily data? Quite possibly so, but unfortunately I don’t have the data to calculate it. Which is why I asked for a citation in the first place, because I thought you might have data that would settle the question.

Thanks,

w.

KR says:

May 31, 2012 at 8:11 pm

KR, I have used all the data that I have available. What other data are you referring to? What am I missing here?

Thanks,

w.