A Model Land in a Modelling World

Tom van Vuren
26 September 2023
A map of Model-land. The black hole in the middle is a way out. From Escape from Model Land, Economics: The Open-Access, Open-Assessment E-Journal, 13 (2019-40)

If I hadn’t held on to my red Dutch passport, I might have tried to get one from Model Land, where I spend (too?) much of my time.

Imagine my surprise to find out that a journal article, and subsequently a book, on Escape from Model Land were written by Erica Thompson from University College London’s Department of Science, Technology, Engineering and Public Policy (the journal article co-authored with Leonard Smith, from the University of York).

According to the article, open access and free to read here, “model land is a hypothetical world in which … everything is well-posed and models (and their imperfections) are known perfectly.

"It … promotes a seductive, fairy-tale state of mind in which optimising a simulation … reflects desirable pathways in the real world. Model land encompasses the group of abstractions that our model is made of, the real-world includes the remainder of things”, which Thompson later calls “known neglecteds”. Who would not want to live there, and why escape?

Thompson’s main message for me is what she says in the abstract: “Decisions taken in the real world are more robust when informed by estimation of real-world quantities with transparent uncertainty quantification, than when based on ‘optimal’ model land quantities obtained from simulations of imperfect models optimised, perhaps optimal, in model land”.

It’s easy to be lulled into a false sense of security by well-calibrated models and converged equilibrium runs, and to forget that we are still in model land, not in the real world. As she says later on: “Uncomfortable departures from model land are required for (good) decision support”.
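To make that danger concrete, here is a minimal, entirely hypothetical sketch in Python (all numbers invented): option A is “optimal” against the single calibrated growth value, but once that value is treated as uncertain, option B turns out to have the better worst case.

```python
import random

random.seed(1)

# Two hypothetical scheme options scored by a toy benefit model that
# depends on an uncertain demand-growth parameter g (all numbers invented).
def benefit(option: str, g: float) -> float:
    if option == "A":
        return 100 * g - 20   # high benefit if growth matches the calibration
    return 60 * g + 10        # option "B": lower peak, flatter response

g_calibrated = 1.0  # the single "model land" growth estimate

# Model-land choice: optimise against the calibrated value only.
model_land_choice = max(["A", "B"], key=lambda o: benefit(o, g_calibrated))
print("model-land optimal option:", model_land_choice)

# Real-world-facing view: sample growth from a plausible range and
# report each option's average and worst outcome.
samples = [random.uniform(0.5, 1.5) for _ in range(10_000)]
for option in ["A", "B"]:
    outcomes = [benefit(option, g) for g in samples]
    print(option, "mean:", round(sum(outcomes) / len(outcomes), 1),
          " worst:", round(min(outcomes), 1))
```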

But also, recognising uncertainty is crucial – and here I see Thompson ask for more than just applying a limited set of plausible futures, such as the DfT’s Common Analytical Scenarios.

We must also “separate the imperfections in the measurement model from structural model error” – what is the point of an almost perfect replication of base year travel conditions, or a best-fit statistical behavioural model applied to a range of future inputs, if the model structure isn’t fit for purpose, or key endogenous model parameter assumptions are plain wrong?
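A toy illustration of why that matters (all data invented): below, a straight line is fitted to “base year” observations generated by a saturating process. The base-year fit looks almost perfect, yet the forecasts diverge wildly, because the error is structural rather than one of measurement or calibration.

```python
import math

# Invented "base year" observations from a true saturating process,
# seen only over a narrow range of the input x (say, generalised cost).
def true_response(x: float) -> float:
    return 100 * (1 - math.exp(-0.3 * x))

xs = [1.0, 1.5, 2.0, 2.5, 3.0]
ys = [true_response(x) for x in xs]

# Fit a straight line (the wrong structure) by ordinary least squares.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx

# The base-year replication looks excellent...
worst_fit = max(abs(y - (intercept + slope * x)) for x, y in zip(xs, ys))
print(f"worst base-year error: {worst_fit:.2f}")

# ...but outside the calibrated range the structural error dominates.
for x in [5.0, 10.0, 20.0]:
    print(f"x={x:4}: line predicts {intercept + slope * x:6.1f}, "
          f"truth {true_response(x):5.1f}")
```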

As transport modellers we also face another snag, in that we quite happily “use the same name and the same phrasing to refer to effects seen in the simulation as those used for the real world”.

Haven’t you reported on the traffic flows and delays in the scheme network in a future year, perhaps 30 years away? The problem arises, says Thompson, “when the consumers of this material realise just how different the model-variables are from the real world phenomena they face”.

And it’s up to us, as modellers, to be humble in our forecasts: explaining the assumptions we have made in model land and the known neglecteds (perhaps even mentioning that there may be unknown neglecteds, and how these may affect the numbers we produce), and being clearer and more honest about our own uncertainties.

Here we face a conundrum: making models more and more complex, reducing these known neglecteds, versus keeping things simple. Thompson also addresses this, and takes a firm view: “… there are often much harsher macroscopic errors and shortcomings that have not yet been resolved.

Starting in model land, one can continue forever improving a model and exploring the implications of introducing new complexity: evaluating in model land will no doubt show some manner of ‘improvement’.

Naïvely, we might hope that by making incremental improvements to the ‘realism’ of a model (more accurate representations, greater details of processes, finer spatial or temporal resolution, etc.) we would also see incremental improvement in the outputs (either qualitative realism or according to some quantitative performance metric).

Regarding the realism of short-term trajectories, this may well be true! It is not expected to be true in terms of probability forecasts. In plainer terms, adding detail to the model can make it less accurate, less useful. How good is a model before it is good enough to support a particular decision?

This of course depends on the decision as well as on the model, and is particularly relevant when the decision to take no action at this time could carry a very high cost”.
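That warning has a familiar statistical analogue. In the invented-data sketch below (mine, not Thompson’s), each extra polynomial degree can only improve the fit to the observed period, while beyond some point the error in the unobserved “future” period blows up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented series: a gentle trend plus noise, split into an "observed"
# period used for fitting and a "future" period used only for testing.
x = np.linspace(0.0, 1.0, 40)
y = 2 * x + 0.3 * np.sin(6 * x) + rng.normal(0.0, 0.15, x.size)
x_obs, y_obs = x[:30], y[:30]
x_fut, y_fut = x[30:], y[30:]

for degree in (1, 3, 6, 9):
    coeffs = np.polyfit(x_obs, y_obs, degree)
    rmse_obs = np.sqrt(np.mean((np.polyval(coeffs, x_obs) - y_obs) ** 2))
    rmse_fut = np.sqrt(np.mean((np.polyval(coeffs, x_fut) - y_fut) ** 2))
    print(f"degree {degree}: observed RMSE {rmse_obs:.3f}, "
          f"future RMSE {rmse_fut:.3f}")
```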

Many consider our existing four-stage models more complex than necessary already, even though they have reduced the dynamic transport-related decision-making of individuals to just four, sometimes five, considerations (whether to travel, where, how and when) and forced these into an equilibrium straitjacket.

Known neglecteds are being added in model land to increase realism (the choice of residential location, complex activity patterns distinguishing short- and long-term choices, and intricate household interactions), without it being clear up-front, or even after their application, whether these have helped bridge the gap between model land and the real world of decision-making.

An important distinction that Thompson makes is between “‘weather-like’ tasks, where there are many opportunities to test the outcome of our model against a real observed outcome, and ‘climate-like’ tasks, where the forecasts are made truly out-of-sample, (where) we rely on judgements about the quality of the model given the degree to which it performs well under different conditions”.

As transport modellers we had also better recognise that the modelling challenge of short-term predictions, mainly used for operational purposes and where we can see the real-life impacts of decisions based on the model results, is very different from that of long-term forecasts for infrastructure investment decisions.

I would add that for the former it doesn’t matter so much how the predictions were made, as long as they are “good” – a typical environment for the successful application of artificial intelligence approaches such as machine learning and genetic algorithms. For the latter, a causal explanation is required to provide confidence, offering insight and improving understanding even if the quantified outcomes cannot be checked for years or decades to come.
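A toy contrast (invented numbers) of what makes the first kind of task “weather-like”: a fresh outcome arrives every period, so even a naive data-driven predictor can be scored against reality continuously, something a 30-year forecast never gets.

```python
# Invented daily flows for a "weather-like" task: a new observation
# arrives every period, so the predictor is tested continuously.
observed = [100, 104, 103, 108, 112, 110, 115, 119, 118, 123]

errors = []
for t in range(3, len(observed)):
    prediction = sum(observed[t - 3:t]) / 3  # naive 3-period moving average
    errors.append(abs(prediction - observed[t]))

print(f"scored on {len(errors)} outcomes, "
      f"mean absolute error {sum(errors) / len(errors):.1f}")
# A "climate-like" 30-year forecast yields no such stream of test cases:
# confidence must rest on judgement about the model's structure instead.
```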

As a transport modeller, I can relate to another point that Thompson makes: “Some models are used for convenience, because they are “objective” in the sense of getting a single answer under the same input conditions regardless of user (which we note is not at all the opposite of “subjective”, since the construction of any model requires expert judgement about the applicability of that model and the validity of any assumptions) and because they provide an unambiguous guide for policy-making”.

And, she warns, “… using a mis-informative forecast simply because it is the ‘best available’ is a nonsense”.

Here Thompson has advice for the more academic modellers among us. “For what we term ‘climate-like’ tasks, the realms of sophisticated statistical processing which variously ‘identify the best model’, ‘calibrate the parameters of the model’, ‘form a probability distribution from the ensemble’, ‘calculate the size of the discrepancy’ etc., are castles in the air built on a single assumption which is known to be incorrect: that the model is perfect.

These … are great works of logic but their outcomes are relevant only in model land until a direct assertion is made that their underlying assumptions hold ‘well enough’; that they are shown to be adequate for purpose, not merely today’s best available model”. Can we as transport modellers continue to hide behind a well-calibrated and converged model, without really challenging whether the assumptions we have made to achieve that are realistic enough to inform practice?

Will adding yet another variable, or including yet another behavioural response make a fundamental difference? 

As Thompson says in her advice on escaping model land: “Simply ‘fixing the bug’ or adding in the newly-identified mechanism each time something unexpected happens is not a recipe for confident forward prediction.

Does the model in fact assist with human understanding of the system, or is it so complex that it becomes a prosthesis of understanding in itself? Letting go of the phantastic mathematical objects and achievables of model land can lead to more relevant information on the real world and thus better-informed decision-making”.

And that also requires us to embrace the possibility of very low probability events (what Thompson calls “model-inconceivable big surprises”), which are rarely captured in models and are also inconvenient (although essential to be aware of) for decision-makers.

What to do, how to escape from model land? Here I pick up a few final pointers from Thompson’s paper. First of all, “sufficient forecast-outcome information (should be) generated to allow calculations which will give us an understanding of where, how, or after what lead time the model is performing poorly”.

As a profession we still don’t carry out enough, and honest enough, back-casts, out-of-sample predictions or evaluations of actual real-world outcomes against what we calculated in model land.
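What that bookkeeping might look like, with entirely invented figures: archive every forecast together with its lead time, and once the outturns arrive, measure skill by lead time to see where the model starts performing poorly.

```python
# Hypothetical forecast archive: (year made, lead time in years,
# forecast flow, observed outturn). All figures invented.
archive = [
    (2010,  1, 1050, 1030), (2010,  5, 1200, 1100), (2010, 10, 1500, 1150),
    (2012,  1, 1060, 1075), (2012,  5, 1250, 1160), (2012, 10, 1600, 1230),
]

by_lead = {}
for _year, lead, forecast, outturn in archive:
    by_lead.setdefault(lead, []).append(abs(forecast - outturn) / outturn)

for lead in sorted(by_lead):
    errors = by_lead[lead]
    print(f"lead {lead:2d} years: "
          f"mean absolute error {sum(errors) / len(errors):.0%}")
```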

Secondly: “Using further expert judgement, informed by the realism of simulations of the past, to define the expected relationship of model with [reality] and, critically, to be very clear on the known limitations of today’s models … for the questions of interest.

Where we rely more on expert judgement, it is likely that models with not-too-much complexity will be the most intuitive and informative, and reflect their own limitations most clearly”.

It’s going to be uncomfortable, but “… the information needed for high-quality decision support (is) a probabilistic forecast, made complete with a statement of its own limitations”. To finish with a Thompson quote: “… presenting model output at face value as if it were a prediction of the real-world, or interpreting simulation frequencies as real-world probabilities, is equivalent to making an implicit expert judgement that the model structure is perfect”.
