
Post-Publication Review: Optimizing the Pharmacy.

19 June 2012

We look today at a paper from Health Care Management Science, a very well-respected journal in the science of the management of health care (which is, let’s face it, a matter of some concern for the free peoples of Middle-earth). The paper concerns the process of filling outpatient prescriptions at two hospital-based pharmacies in London. The goal was to build a discrete event simulation (DES) that accurately models the process, and then to use it to determine the response of the system to changes in demand, workforce, and increased use of robotic dispensing. So right there, we’re pretty stoked about this paper from the outset. Computer simulation, drugs, and robots.
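For readers who haven’t built one, a DES of this kind is a small program: entities (prescriptions) arrive, queue for shared resources (staff, the robot), and accumulate timestamps. Here’s a minimal sketch in Python using SimPy; the stage names, staffing levels, and timing distributions are my own placeholder assumptions for illustration, not the authors’ fitted model.

    import random
    import simpy

    # Placeholder parameters -- assumptions for illustration, not the paper's fits.
    ARRIVAL_MEAN = 4.0           # minutes between prescription arrivals
    LABEL_TIME = (1.0, 3.0)      # uniform labeling time, minutes
    ASSEMBLE_TIME = (2.0, 6.0)   # uniform assembly time, minutes
    CHECK_TIME = (1.0, 4.0)      # uniform final-check time, minutes

    def prescription(env, technicians, pharmacists, turnarounds):
        start = env.now
        with technicians.request() as req:   # labeling needs a technician
            yield req
            yield env.timeout(random.uniform(*LABEL_TIME))
        with technicians.request() as req:   # so does assembly
            yield req
            yield env.timeout(random.uniform(*ASSEMBLE_TIME))
        with pharmacists.request() as req:   # the clinical check needs a pharmacist
            yield req
            yield env.timeout(random.uniform(*CHECK_TIME))
        turnarounds.append(env.now - start)

    def arrivals(env, technicians, pharmacists, turnarounds):
        while True:
            yield env.timeout(random.expovariate(1.0 / ARRIVAL_MEAN))
            env.process(prescription(env, technicians, pharmacists, turnarounds))

    random.seed(42)
    env = simpy.Environment()
    technicians = simpy.Resource(env, capacity=3)
    pharmacists = simpy.Resource(env, capacity=2)
    turnarounds = []
    env.process(arrivals(env, technicians, pharmacists, turnarounds))
    env.run(until=8 * 60)        # one working day: a terminating run
    print(f"filled {len(turnarounds)} scripts, "
          f"mean turnaround {sum(turnarounds) / len(turnarounds):.1f} min")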

It’s an interesting paper because they perform comparative simulations on the two pharmacies, letting us see how different predictions emerge from the different base systems (often called the “plant” in engineering terms). It’s interesting because they find some results that are in line with commonsense predictions, and some that were not predicted and seemed counterintuitive until the model was analyzed more deeply. Though I have found this to be the rule rather than the exception: when doing a simulation, you will find something you didn’t expect. At least, you will if you do it well enough.

So. What did they actually do? First, they identified the setting for their simulation, choosing to focus on outpatient prescriptions and disregarding inpatient and discharge prescriptions. This was done to focus on the aspect they believed to impact customer satisfaction most directly. It’s probably a reasonable restriction, but it wasn’t clear to me how inpatient and discharge orders would complicate the flow, so if I had been a reviewer, I’d have asked for more clarity there. The paper makes it seem as though the outpatient stream can be treated in isolation, which is a pretty big assumption from the outside. It may be perfectly true, but I’d like to see discussion of why.

Next, they modeled the flow in a diagram which I assume was laid out this way for space, because a few more informative layouts were immediately apparent to me:

Figure 1: DES flow

But that’s not really a criticism. As you can see, the flow is relatively easy to follow. This model was informed with data recorded by hand over a two-month period, resulting in about 2000 observations. This is a solid methodology. Yes, there are problems with hand-recorded data, but often it’s the only data available, and the alternative data streams have problems of their own in these circumstances. Computer records are often wildly inaccurate, because they too depend on hand-entered data. Interviews about task duration are also unreliable. Overcoming data acquisition problems in DES is unresolved and, to my knowledge, essentially unaddressed. We just state our limitations and plough on.

So this model was informed with two sets of data, in order to allow it to represent two otherwise identically processed prescription dispensary systems. The models are the same, but the staff mixes, task times, and proportions of robot-dispensed prescriptions vary between them. These models were then validated using face validity methods – which are standard, necessary, and insufficient – as well as input-output external validation. Meaning, the model was shown to accurately reproduce aggregate prescription-filling times when given real-world conditions. This is a minimally sufficient external validation method. Input-output identification is important, but it does not capture internal dynamics the way interstitial queue-length comparisons would, which would show that the model accurately represents the real world at several points along the way, and not merely as a black box.
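In code terms, an input-output check of this kind amounts to something like the following: drive the model with the observed workload and compare the distribution of simulated turnaround times against the observed one. The variable names here are placeholders, and the KS test is my choice of comparison, not necessarily what the authors used.

    import statistics
    from scipy import stats

    # observed_times: per-prescription turnaround times recorded in the pharmacy
    # simulated_times: the same quantity from a model run driven by real inputs
    print("observed mean: ", statistics.mean(observed_times))
    print("simulated mean:", statistics.mean(simulated_times))

    # A two-sample KS test compares the whole distributions, not just the means.
    result = stats.ks_2samp(observed_times, simulated_times)
    print(f"KS statistic {result.statistic:.3f}, p-value {result.pvalue:.3f}")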

They proposed several scenarios, including adding additional staff of each type, changing the demand (from -10% to +10% in 1% increments), and increasing the proportion of prescriptions that could be dispensed by robot (by up to 50%, in 5% increments). These are solidly interesting, well-designed scenarios. Major changes made in combination to systems of this sort are less likely to yield revealing insights; small changes made in isolation, like these, are likely to have high predictive value.
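Sweeps like these are cheap to express once the model exists. A sketch, assuming a hypothetical run_one_day(seed, demand_mult, robot_frac) wrapper around a single terminating run of the model above, returning that day’s mean turnaround:

    # Demand scenarios: -10% to +10% in 1% increments, replicated 100 times each.
    demand_results = {}
    for pct in range(-10, 11):
        mult = 1.0 + pct / 100.0
        means = [run_one_day(seed, demand_mult=mult) for seed in range(100)]
        demand_results[pct] = sum(means) / len(means)

    # Robot scenarios: raise the robot-dispensable proportion in 5% increments.
    # BASELINE_ROBOT_FRAC is an assumed constant for the pharmacy's current mix.
    robot_results = {}
    for step in range(0, 55, 5):
        frac = BASELINE_ROBOT_FRAC * (1.0 + step / 100.0)
        means = [run_one_day(seed, robot_frac=frac) for seed in range(100)]
        robot_results[step] = sum(means) / len(means)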

They then simulated each scenario 100 times using terminating, independent instantiations. I’m less confident (but not necessarily non-confident) that that’s the best way. No objection to the sample size: power calculations mean very little in simulation, because we can generate arbitrarily large data sets. I’m more concerned with the terminating independent runs. Are there overnight inpatient and discharge scripts? Can we be certain that one day would not influence the next? I just don’t know. It seems like a pretty good assumption, but without more information I can’t be truly certain. I should note that they included a list of the assumptions they made and agreed upon, which is a really nice touch.
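For what it’s worth, 100 terminating replications with independent seeds are easy to summarize honestly; a sketch, again assuming the hypothetical run_one_day wrapper:

    import math
    import statistics

    def replicate(n_reps=100):
        # Each replication is one independently seeded, terminating day.
        day_means = [run_one_day(seed) for seed in range(n_reps)]
        m = statistics.mean(day_means)
        s = statistics.stdev(day_means)
        half = 1.984 * s / math.sqrt(n_reps)   # t(0.975, df=99) ~= 1.984
        return m, (m - half, m + half)         # point estimate and 95% CI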

Figure 2: Staff Perturbation Results

They were surprised that adding an assistant, who has very few competencies that allow for process improvement, made almost as large a difference as any of the other three non-pharmacist staff types. Deeper investigation revealed two possibilities: allowing the assistant to do labeling freed more-skilled workers for tasks which did influence throughput, or the assistant’s main task, assembly, may be a larger bottleneck than had been supposed.
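The assembly-bottleneck hypothesis is exactly the kind of thing the model itself can probe. One hedged way, using the SimPy sketch from earlier: sample the queue in front of each resource at regular intervals and see where work piles up.

    def monitor(env, resources, log, interval=5.0):
        # resources: dict mapping a stage name to its simpy.Resource;
        # every `interval` minutes, record how many requests are waiting.
        while True:
            log.append((env.now, {name: len(res.queue)
                                  for name, res in resources.items()}))
            yield env.timeout(interval)

    # Usage (before env.run); a persistently long queue marks the bottleneck:
    # queue_log = []
    # env.process(monitor(env, {"technicians": technicians,
    #                           "pharmacists": pharmacists}, queue_log))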

They did make one error in the paper, though it’s unimportant with respect to the results. They stated that the ‘exponential distribution is inappropriate for task durations because it allows a time of zero’. Well, it’s a continuous distribution: the probability that it would produce a time of exactly zero is zero. The exponential distribution is, in general, inappropriate for task durations, but because it is not likely to model them very well, not because of the zero. And the distributions that do model them well are also continuous distributions bounded below by zero, so they share this “limitation” with the exponential. As a reviewer, I’d just have pointed this out and had them leave the comment out of the paper.
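The point, in code: for a continuous distribution, a draw of exactly zero essentially never happens, and the usual better-fitting alternatives (lognormal, gamma) share the same strictly positive support. A quick check with numpy:

    import numpy as np

    rng = np.random.default_rng(0)
    exp_sample = rng.exponential(scale=5.0, size=1_000_000)
    logn_sample = rng.lognormal(mean=1.5, sigma=0.5, size=1_000_000)

    print((exp_sample == 0.0).sum())             # 0 in practice: P(X = 0) = 0
    print(exp_sample.min(), logn_sample.min())   # both strictly positive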

The paper shows how we can use simulation as a bed for experimentation when experiments cannot be performed in real life, and how it lets us plan for changing circumstances. It’s a good paper, with very few drawbacks. There’s a reason it was published in a good journal.

 ________________

M Reynolds, C Vasilakis, M McLeod, N Barber, A Mounsey, S Newton, A Jacklin, BD Franklin (2011). “Using discrete event simulation to design a more efficient hospital pharmacy for outpatients.” Health Care Manag Sci. DOI: 10.1007/s10729-011-9151-1
