
Radiation call frequency: the trade-off behind `radt`

WRF's radiation schemes are expensive, so they run on a longer interval than the dynamics. The shipping interval is 30 minutes. Here is the trade-off and what we would test before changing it.

Radiation is the most expensive physics call in a WRF run, per invocation. The shortwave scheme traces solar flux through the full atmospheric column and accounts for absorption and scattering at every model level. The longwave scheme does the same for thermal emission. Together they set the energy budget that drives the boundary layer, which drives thermals. Because they are expensive, WRF does not call them every dynamical time step - it calls them on a separate, longer interval controlled by `radt` in the namelist.

Our shipping `radt` is 30 minutes, the WRF default. The convention in the WRF community is roughly to set `radt` equal to the grid spacing in kilometres, which for our 4 km domain would suggest 4 minutes. We are well above that convention. That is worth a closer look, and this post is about the trade-off behind the knob and what we would test before changing it.
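The knob lives in the `&physics` block of `namelist.input`. A minimal sketch (the scheme choices are illustrative, not necessarily our configuration; `radt` is given in minutes and set per domain):

```fortran
&physics
 ra_lw_physics = 4,    ! longwave scheme (4 = RRTMG)
 ra_sw_physics = 4,    ! shortwave scheme (4 = RRTMG)
 radt          = 30,   ! radiation call interval in minutes (the shipping value)
/
```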

What a longer interval costs you, theoretically. Surface heating in the morning lags the actual sun position because the model is still using the radiation field from the previous call. The sun's hour angle advances about 0.25 degrees per minute, so over a 30-minute interval the radiation tendencies are held fixed while the true sun position moves through roughly 7.5 degrees. During the morning ramp, when the boundary layer is growing fastest, this can shift modelled surface heat flux meaningfully, and through it the timing of convective onset.
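A back-of-envelope sketch of the stale-flux error. Everything here is an illustrative simplification: the 1000 W/m² flux scale and the assumption that sun elevation climbs at the full 0.25°/min rotation rate are upper bounds (the true elevation rate depends on latitude and season), so treat the numbers as order-of-magnitude only:

```python
import math

# Earth's rotation rate: 360 degrees per 24 hours = 0.25 deg/min.
RATE_DEG_PER_MIN = 360.0 / (24 * 60)

def stale_flux_error(elevation_deg, radt_min, s0=1000.0):
    """Toy estimate of how far a radt-minute-old solar flux lags the
    true flux, assuming elevation climbs at the full rotation rate
    and flux scales as s0 * sin(elevation)."""
    stale = s0 * math.sin(math.radians(elevation_deg))
    fresh = s0 * math.sin(math.radians(elevation_deg + RATE_DEG_PER_MIN * radt_min))
    return fresh - stale

# Morning ramp: sun at 15 degrees elevation, radiation field 30 vs 5 minutes stale.
err_30 = stale_flux_error(15.0, 30.0)  # roughly 120 W/m^2 in this toy model
err_5 = stale_flux_error(15.0, 5.0)    # roughly 20 W/m^2
```

The point is not the absolute numbers but the ratio: shrinking the interval by 6x shrinks the worst-case staleness error by roughly the same factor in this regime.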

Whether this actually matters in our setup is not known. We would expect some smearing of morning thermal-onset timing relative to a more frequent radiation call. In practice the effect could be small (the PBL scheme integrates over many time steps and may smooth out the radiation interval noise) or larger (on days where the inversion breaks cleanly, even modest radiation timing errors could shift onset by tens of minutes).

What we would test. A side-by-side comparison of the live `radt = 30` setup against a `radt = 5` setup on the same set of cycles, scoring two things: (a) wall-clock cost of the more frequent call, which sets whether the change is even affordable within the four-cycle-a-day budget, and (b) trigger-time and morning-`wstar` differences against pilot-reported convective onset, which sets whether the change actually improves the forecast.
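The scoring step for (b) could start as simply as a signed trigger-time error per cycle. A sketch under loud assumptions: the `onset_error_minutes` helper, the dates, and the example times below are all hypothetical, not our pipeline:

```python
from datetime import datetime

def onset_error_minutes(modelled, observed):
    """Signed trigger-time error in minutes: positive means the
    modelled convective onset is later than the pilot report."""
    return (modelled - observed).total_seconds() / 60.0

# One hypothetical cycle: pilot reports onset at 10:40 local,
# the radt=30 run triggers at 11:05, the radt=5 run at 10:50.
observed = datetime(2024, 6, 1, 10, 40)
err_radt30 = onset_error_minutes(datetime(2024, 6, 1, 11, 5), observed)
err_radt5 = onset_error_minutes(datetime(2024, 6, 1, 10, 50), observed)
```

Aggregated over a season of cycles, the mean and spread of these errors for each setup is the comparison that decides whether the extra radiation calls buy anything.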

What would push us to actually run that test. The validation pipeline being in place, so that the trigger-time comparison has something to score against. And one or more pilots reporting that the live forecast's morning-onset timing is consistently off in a direction that a finer radiation interval would explain. Without either, this stays in the bucket of plausible improvements that have not been measured.

The radiation-interval sweep sits behind the validation pipeline in the queue. Once trigger-time comparison against pilot-reported convective onset is in place, this is one of the cheaper knobs to turn and measure. Numbers will be written up here when they land.