
Evaluating an Important New Paper.

19 September 2012

I got an email last night from the editors of Medical Decision Making, which is a fairly important journal in my field. They publish on all aspects of how decision processes occur, can be improved, and can affect outcomes in medicine, both at the patient-physician level and at the level of health care systems. It’s pretty well-regarded. I’ve reviewed for them. They have an exhaustively thorough review process, in my experience, which involves papers being re-reviewed many times and editors who seem reluctant to make, shall we say, editorial decisions. Papers are not accepted, again in my experience, until all the reviewers agree nearly 100% and the paper is in its final form. This means that getting a paper through their editorial review generally takes about a year, minimum.

They just issued a special report on modeling, which consists of seven papers laying out industry best practices. I’m going to focus on just the paper about Discrete Event Simulation (DES)*, because that’s what I’m an expert in. From a first read-through, this paper is going to be crucial for people working academically in DES who intend to publish in the field.

The paper lays out a huge set of best practices, twenty-four in all, which range from common sense to enlightening and brilliant. None of them are bad. These practices will, I agree, do a decent job of separating DES models into rough silos of “good” and “not so good”. Obviously, no system is perfect, but researchers who adhere to these practices will be starting off on the right foot.

The introduction of the paper (which is linked above as the “out” in that vapor trail of hyperlinks) is a straightforward and commonsense description of what DES is and how it works. My only quibble is the afterthought inclusion of Agent-Based Modeling (ABM) as a subset of DES. This is technically correct: ABM is a subclass of DES models. However, it is so specialized, and has grown into such a robust field of its own, that I feel it warrants separate treatment. ABM is not constrained by the traditional assumptions that go into process modeling with DES. So while it’s correct to place it there from an academic, theoretical perspective, from a practical, engineering point of view they’re different things.

The introduction also identifies a massive problem in the field of economic modeling, which DES is often used to inform:

“Non-constrained resource models—although unusual in other fields that use DES—are required in our field to accord with the common structural assumption made in most health economic models today: that all required resources are available as needed, with no capacity limitations.”

This is why I say that economic models are mostly made-up. The idea of modeling healthcare delivery economics as a resource unconstrained system is absurd on its face, and yet that is the common structural assumption.

The first set of best practices involves when to use DES. And they get it right: DES is excellent for modeling dynamic, resource-constrained, queue-based complex systems. They also specifically suggest that if health outcomes are not an output of the model, this should be explicitly justified. I might go the other direction entirely: if health outcomes are a model output, it must be very carefully justified as to how those outcomes are predicted! Predicting human health outcomes with a computer is very hard.
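To make the “resource-constrained, queue-based” idea concrete, here is a minimal sketch of the kind of system DES handles well: patients arriving at random and competing for a single radiologist. All the names and rates are made up for illustration; a real model would estimate them from observed data.

```python
import heapq
import random

# A minimal sketch of a constrained-resource, queue-based DES:
# patients arrive at random intervals and compete for one radiologist.
# ARRIVAL_MEAN and SERVICE_MEAN are hypothetical.

random.seed(1)

ARRIVAL_MEAN = 20.0   # mean minutes between consult requests
SERVICE_MEAN = 15.0   # mean minutes per consult
N_PATIENTS = 1000

events = []           # priority queue of (time, seq, kind, patient_id)
seq = 0

def schedule(t, kind, pid):
    global seq
    heapq.heappush(events, (t, seq, kind, pid))
    seq += 1

# Pre-schedule all arrivals as a Poisson process.
t = 0.0
for pid in range(N_PATIENTS):
    t += random.expovariate(1.0 / ARRIVAL_MEAN)
    schedule(t, "arrive", pid)

queue = []            # patients waiting for the radiologist
busy = False
arrival_time = {}
waits = []

while events:
    now, _, kind, pid = heapq.heappop(events)
    if kind == "arrive":
        arrival_time[pid] = now
        queue.append(pid)
    else:             # "done": the radiologist frees up
        busy = False
    if not busy and queue:
        nxt = queue.pop(0)
        waits.append(now - arrival_time[nxt])
        busy = True
        schedule(now + random.expovariate(1.0 / SERVICE_MEAN), "done", nxt)

print(f"mean wait for radiologist: {sum(waits)/len(waits):.1f} min")
```

The point of the exercise: the wait times emerge from the interaction of arrivals, service times, and the capacity constraint. Drop the constraint (infinite radiologists) and every wait is zero, which is exactly the “non-constrained resource model” assumption criticized above.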

Of critical value too is the section on parameter estimation, specifically with regard to expert-elicited data. Frequently in DES modeling there is no data for some processes, either because it is unknown, cannot be measured, or there isn’t time or funding to measure it. This is often overcome with “expert-elicited data”, which is fancy-talk for asking people how long they think stuff takes. For example, “How long is it from when you decide you need a radiology consult until you’ve got a radiologist with you in the ED?” This kind of estimation is relatively common in DES modeling, though with too much of it the model is not likely to be of much value. Karnon et al recommend adopting these estimates and then performing sensitivity analysis around the uncertainty in the estimate, which is commonsense and not arduous. Often, model outputs will be very robust to these types of estimations, but sometimes not.
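In practice this can be as simple as wrapping the model in a loop and sampling the elicited parameter from a distribution spanning the expert’s range. A minimal sketch, assuming a triangular distribution over hypothetical elicited bounds and a trivial stand-in for the model itself:

```python
import random

# Sensitivity analysis around an expert-elicited parameter.
# The elicited bounds and the "model" below are hypothetical stand-ins.

random.seed(2)

LOW, MODE, HIGH = 10.0, 25.0, 60.0   # expert's min / best-guess / max, minutes

def run_model(consult_time):
    # Stand-in for a full DES run; in practice this would be the model.
    other_steps = 90.0               # fixed portion of the pathway, minutes
    return other_steps + consult_time

outputs = []
for _ in range(10_000):
    consult_time = random.triangular(LOW, HIGH, MODE)
    outputs.append(run_model(consult_time))

outputs.sort()
mid = outputs[len(outputs) // 2]
lo, hi = outputs[len(outputs) // 20], outputs[-len(outputs) // 20]
print(f"median LOS {mid:.0f} min, 90% interval ({lo:.0f}, {hi:.0f})")
```

If the interval on the output is narrow relative to the decision being informed, the model is robust to the guess; if not, that parameter is worth the time and money to actually measure.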

Here’s a critical piece, well included, and I was thrilled to read it:

“When modeling clinical practice, it should not be assumed that relevant guidelines are actually applied.”

Meaning, when you ask people what the flow is in a clinical system, and they have guidelines for how flow is meant to go, don’t just believe them. Observe, and chart your own flow. People often take shortcuts, add interstitial processes, or preempt and resume processes which are meant to be uninterrupted. A good engineer does not just accept a clinical flowchart as sacrosanct. We model things as they are, not as they “should be”. This goes to taking data too. We do not care how fast people can do things. We care only how long it ordinarily takes.

Here’s another one I understand but don’t entirely agree with:

“Implementation should account only for the outputs required for validation and final analyses. If individual-level data are required, outputs should be stored as attributes; otherwise, aggregated values should be collected.”

They advocate collecting aggregated data when possible and storing individuated data only if necessary. While this is good memory management, it is not necessarily good modeling practice in the early stages. We may not know what types of outputs are going to be needed for future analyses. It is good practice to create capacity for capturing individual-level data, even if that data is not utilized from the beginning.
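The trade-off looks something like this sketch (the entity and attribute names are hypothetical): aggregated collection keeps only running statistics, while individual-level collection keeps a record per entity, which costs memory but preserves the option of analyses you didn’t anticipate.

```python
class AggregatedStats:
    """Running mean of a quantity; nothing else is recoverable later."""
    def __init__(self):
        self.n = 0
        self.total = 0.0

    def record(self, value):
        self.n += 1
        self.total += value

    def mean(self):
        return self.total / self.n if self.n else 0.0


class IndividualStats:
    """Per-entity records stored as attributes on each observation."""
    def __init__(self):
        self.records = []

    def record(self, patient_id, wait, acuity):
        self.records.append({"id": patient_id, "wait": wait, "acuity": acuity})

    def mean_wait(self, acuity=None):
        # Subgroup analysis is only possible because we kept the attributes.
        waits = [r["wait"] for r in self.records
                 if acuity is None or r["acuity"] == acuity]
        return sum(waits) / len(waits) if waits else 0.0
```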

There are a number of best practices on computing time issues, which I won’t evaluate because I have never had that problem. However, I can say that none of them strike me as inappropriate. If your model is large enough, or your time horizon long enough, that model runs become prohibitive or limiting, then these would be well heeded. But I feel almost like they’re relics from a time when computers were not nearly as fast as they are now. These may also apply to ABM in a way that they don’t to DES clinical models, for example, when modeling a termite mound at the level of individual insects.

They advocate basic good programming procedures and the use of animation for engaging with non-engineers and programmers. Good work all. It’s a strong paper which has only one glaring omission.

They do not discuss model validation. At all. No best practices on how to demonstrate that the model represents the real world. No means of asserting that the conclusions are predictive. I have my own ideas for this, and will discuss them eventually. Hopefully, in a journal article.

_______________________
* Karnon J, Stahl J, Brennan A, Caro JJ, Mar J, Möller J. “Modeling Using Discrete Event Simulation: A Report of the ISPOR-SMDM Modeling Good Research Practices Task Force-4.” Med Decis Making 2012;32:701. DOI: 10.1177/0272989X12455462

One Comment

1. Penelope, 19 September 2012 10:29

   So now that there IS a thing that you are an ACTUAL expert on, all I hear is that voice of Charlie Brown’s teacher. Spoken like a pirate…
