15.5: Assimilation Models
Many of the models I have described so far have output, such as current velocity or surface topography, constrained by oceanic observations of the variables they calculate. Such models are called assimilation models. In this section, I will consider how data can be assimilated into numerical models.
Let’s begin with a primitive-equation, eddy-admitting numerical model used to calculate the position of the Gulf Stream. Let’s assume that the model is driven with real-time surface winds from the ECMWF weather model. Using the model, we can calculate the position of the current and also the sea-surface topography associated with the current. We find that the position of the Gulf Stream wiggles offshore of Cape Hatteras due to instabilities, and the position calculated by the model is just one of many possible positions for the same wind forcing. Which position is correct, that is, what is the position of the current today? We know, from satellite altimetry, the position of the current at a few points a few days ago. Can we use this information to calculate the current’s position today? How do we assimilate this information into the model?
Many different approaches are being explored (Malanotte-Rizzoli, 1996). Roger Daley (1991) gives a complete description of how data are used with atmospheric models. Andrew Bennett (1992) and Carl Wunsch (1996) describe oceanic applications.
The different approaches are necessary because assimilation of data into models is not easy.
- Data assimilation is an inverse problem: A finite number of observations are used to estimate a continuous field—a function, which has an infinite number of points. The calculated fields, the solution to the inverse problem, are completely under-determined. There are many fields that fit the observations and the model precisely, and the solutions are not unique. In our example, the position of the Gulf Stream is a function. We may not need an infinite number of values to specify the position of the stream if we assume the position is somewhat smooth in space. But we certainly need hundreds of values along the stream’s axis. Yet, we have only a few satellite points to constrain the position of the Stream. To learn more about inverse problems and their solution, read Parker (1994), who gives a very good introduction based on geophysical examples.
- Ocean dynamics are non-linear, while most methods for calculating solutions to inverse problems depend on linear approximations. For example, the position of the Gulf Stream is a very nonlinear function of the forcing by wind and heat fluxes over the north Atlantic.
- Both the model and the data are incomplete and both have errors. For example, we have altimeter measurements only along the tracks such as those shown in figure \(2.2.6\), and the measurements have errors of \(\pm 2 \ \text{cm}\).
- Most data available for assimilation into models comes from the surface, such as AVHRR and altimeter data. Surface data obviously constrain the surface geostrophic velocity, and surface velocity is related to deeper velocities. The trick is to couple the surface observations to deeper currents.
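The under-determined nature of the inverse problem in the first point above can be illustrated with a small numerical sketch. All numbers here are invented for illustration: a field with a hundred unknowns standing in for the stream's position along its axis, five point observations standing in for altimeter crossings, and two different fields that fit the observations exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy inverse problem: recover a field sampled at n points along the
# stream's axis from only m point observations (m << n).
n, m = 100, 5
obs_idx = rng.choice(n, size=m, replace=False)  # where the "altimeter" sampled
H = np.zeros((m, n))                            # observation operator
H[np.arange(m), obs_idx] = 1.0
y = rng.normal(size=m)                          # observed values

# Minimum-norm least-squares solution: fits the data exactly, but is
# only one member of an infinite family of solutions.
x1, *_ = np.linalg.lstsq(H, y, rcond=None)

# Perturbing the field at any unobserved point gives a second, different
# solution that still fits the observations exactly.
x2 = x1.copy()
unobserved = np.setdiff1d(np.arange(n), obs_idx)
x2[unobserved] += rng.normal(size=unobserved.size)
```

Both `x1` and `x2` satisfy `H @ x = y` to machine precision, which is the sense in which the solution is not unique: the observations alone cannot distinguish them, and extra information (smoothness, dynamics) must be added to select one.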
While various techniques are used to constrain numerical models in oceanography, perhaps the most practical are techniques borrowed from meteorology.
Most major ocean currents have dynamics which are significantly nonlinear. This precludes the ready development of inverse methods…Accordingly, most attempts to combine ocean models and measurements have followed the practice in operational meteorology: measurements are used to prepare initial conditions for the model, which is then integrated forward in time until further measurements are available. The model is thereupon re-initialized. Such a strategy may be described as sequential. —Bennett (1992).
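The sequential strategy described in the quotation can be sketched in a few lines of Python. The model here is a toy stand-in, not any real ocean code; the dynamics, the blending weight, and the observation schedule are all invented for illustration.

```python
import numpy as np

def model_step(state, wind):
    """Hypothetical stand-in for integrating the ocean model one day:
    damped, wind-forced toy dynamics."""
    return 0.95 * state + 0.1 * wind

def sequential_assimilation(initial_state, winds, observations, weight=0.5):
    """Sequential strategy: integrate forward in time; whenever a new
    observation arrives, blend it with the forecast and re-initialize."""
    state = initial_state
    history = []
    for day, wind in enumerate(winds):
        state = model_step(state, wind)                    # forecast
        if day in observations:                            # new data available
            obs = observations[day]
            state = (1 - weight) * state + weight * obs    # re-initialize
        history.append(state)
    return history

# Toy run: observations arriving every seven days repeatedly pull the
# model trajectory back toward the measured value.
winds = np.ones(28)
obs = {6: 2.0, 13: 2.0, 20: 2.0, 27: 2.0}
traj = sequential_assimilation(0.0, winds, obs)
```

The `weight` parameter is the crude version of what a real analysis scheme computes from model and observation error estimates; everything else in a practical system (the model, the observation operator, the error statistics) is correspondingly more elaborate.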
Let’s see how Professor Allan Robinson and colleagues at Harvard University used sequential estimation techniques to make the first forecasts of the Gulf Stream using a very simple model.
The Harvard Open-Ocean Model was an eddy-admitting, quasi-geostrophic model of the Gulf Stream east of Cape Hatteras (Robinson et al. 1989). It had six levels in the vertical, 15 km resolution, and one-hour time steps. It used a filter to smooth high-frequency variability and to damp grid-scale variability.
By quasi-geostrophic we mean that the flow field is close to geostrophic balance. The equations of motion include the acceleration terms \(D/Dt\), where \(D/Dt\) is the substantial derivative and \(t\) is time. The flow can be stratified, but there is no change in density due to heat fluxes or vertical mixing. Thus the quasi-geostrophic equations are simpler than the primitive equations, and they can be integrated much faster. Cushman-Roisin (1994: 204) gives a good description of the development of the quasi-geostrophic equations of motion.
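The chapter does not write the quasi-geostrophic equations out. A standard statement from the literature (following, for example, Cushman-Roisin's development; the symbols below are the conventional ones, not notation defined in this chapter) reduces the dynamics to conservation of quasi-geostrophic potential vorticity \(q\) following the geostrophic flow:

\[\frac{Dq}{Dt} = 0, \qquad q = \nabla^{2}\psi + \beta y + \frac{\partial}{\partial z}\!\left(\frac{f_{0}^{2}}{N^{2}}\,\frac{\partial \psi}{\partial z}\right)\]

where \(\psi\) is the geostrophic stream function, the advecting velocities are \(u = -\partial\psi/\partial y\) and \(v = \partial\psi/\partial x\), \(f_{0}\) is the Coriolis parameter at a reference latitude, \(\beta\) is its northward gradient, and \(N\) is the buoyancy frequency. A single scalar equation for \(\psi\) replaces the full set of primitive equations, which is why such models run much faster.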
The model reproduces the important features of the Gulf Stream and its extension, including meanders, cold- and warm-core rings, the interaction of rings with the stream, and baroclinic instability (figure \(\PageIndex{1}\)). Because the model was designed to forecast the dynamics of the Gulf Stream, it must be constrained by oceanic measurements:
- Data provide the initial conditions for the model. Satellite measurements of sea-surface temperature from the AVHRR and topography from an altimeter are used to determine the location of features in the region. Expendable bathythermograph (AXBT) measurements of subsurface temperature and historical measurements of internal density are also used. The features are represented by analytic functions in the model.
- The data are introduced into the numerical model, which interpolates and smooths the data to produce the best estimate of the initial fields of density and velocity. The resulting fields are called an analysis.
- The model is integrated forward for one week, at which time new data are available, to produce a forecast.
- Finally, the new data are introduced into the model as in the first step above, and the process is repeated.
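The analysis-forecast cycle in the steps above can be sketched with an optimal-interpolation analysis step. This is a representative scheme chosen for illustration, not a description of the Harvard model's actual analysis algorithm, and the covariances and field sizes below are assumed toy values.

```python
import numpy as np

def oi_analysis(background, obs, H, B, R):
    """Optimal-interpolation analysis: blend a model background field with
    observations, weighted by their error covariances.
        analysis = background + K (obs - H background)
        K = B H^T (H B H^T + R)^{-1}
    """
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    return background + K @ (obs - H @ background)

# Toy cycle: a 10-point field, 3 observation points, weekly analyses.
n = 10
H = np.zeros((3, n))
H[[0, 1, 2], [1, 4, 8]] = 1.0    # observations sample 3 of the 10 points
B = 0.5 * np.eye(n)              # background-error covariance (assumed)
R = 0.1 * np.eye(3)              # observation-error covariance (assumed)

state = np.zeros(n)              # first-guess field
for week in range(4):
    obs = np.ones(3)                           # new data each week (toy values)
    state = oi_analysis(state, obs, H, B, R)   # analysis
    state = 0.9 * state                        # stand-in for a week of model integration
```

Because the background-error covariance `B` here is diagonal, information from an observation does not spread to neighboring points; a real analysis uses spatially correlated `B` precisely so that sparse observations can constrain the whole field.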
The model made useful, one-week forecasts of the Gulf Stream region. Much more advanced models with much higher resolution are now being used to make global forecasts of ocean currents up to one month in advance in support of the Global Ocean Data Assimilation Experiment (GODAE) that started in 2003. The goal of GODAE is to produce routine oceanic forecasts similar to today’s weather forecasts.
An example of a GODAE model is the global US Navy Layered Ocean Model. It is a primitive equation model with \(1/32^{\circ}\) resolution in the horizontal and seven layers in the vertical. It assimilates altimeter data from Jason, Geosat Follow-on (GFO), and ERS-2 satellites and sea-surface temperature from AVHRR on NOAA satellites. The model is forced with winds and heat fluxes for up to five days in the future using output from the Navy Operational Global Atmospheric Prediction System. Beyond five days, seasonal mean winds and fluxes are used. The model is run daily (figure \(\PageIndex{2}\)) and produces forecasts for up to one month in the future. The model has useful skill out to about 20 days.
A group of French laboratories and agencies operates a similar operational forecasting system, Mercator, based on assimilation of altimeter measurements of sea-surface height, satellite measurements of sea-surface temperature, internal density fields in the ocean, and currents at 1000 m measured by thousands of Argo floats. Their model has \(1/15^{\circ}\) resolution in the Atlantic and \(2^{\circ}\) globally.