Engineering the Emergency Room, Part II.
Last time we discussed what a system is, and why an emergency room fits the definition. Luckily for me, when your field is as nebulously defined as ‘systems theory’, just about anything can be shoe-horned into it. But emergency rooms are a particularly good fit. In general, they are examples of a subclass of systems called Hybrid Dynamic Systems: Systems, for the reasons we covered last time; Dynamic, because they unfold as time goes on; and Hybrid, because they include both continuous (time) and discrete (patients) elements. And if I were more of a theoretician, I’d go about defining and refining the concept of human interactive Hybrid Dynamic Systems. But I’m not, so I don’t capitalize “human” or “interactive”.
Characterizing human behaviour is one of the hard parts, and it’s idiosyncratic to the system being studied. Engineers have been trying to put humans ‘in the loop’* for centuries, of course. When most people think of capturing human behaviour in a computer, they’ll either think of artificial intelligence, or they’ll think of computer graphics and physicality. Neither is the case in what I do. I, and everyone else who builds discrete event simulations of human interactive HDSs, use stochasticity to account for human behaviour.
*As an aside here, ‘in the loop’ is a casual way of referring to a technical term in engineering. Vast swaths of engineering disciplines are concerned with feedback loops. Feedback is a hydra: it can be a godsend or a disaster, or just annoying. When a microphone squeals, that’s an example of positive feedback: output from the speaker gets fed back into the microphone’s input. The dynamics of the system, like those of most systems, are unstable under positive feedback, and the system ‘explodes’, meaning it tries to drive its output (in this case, the frequency and volume) toward infinity. But feedback loops are also intentionally designed elements of most engineering systems. The cruise control of your car is a negative feedback system.
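To make the distinction concrete, here’s a toy numeric sketch (nothing to do with ERs; the gains, setpoint, and step count are all invented) of why negative feedback settles toward a target while positive feedback runs away:

```python
# Toy illustration: the same first-order system under negative vs. positive
# feedback. All numbers are made up for demonstration purposes.

def simulate(gain, setpoint=60.0, start=50.0, steps=20):
    """Each step, the controller nudges the state by gain * error."""
    x = start
    for _ in range(steps):
        error = setpoint - x
        x += gain * error  # positive gain closes the gap each step
    return x

# Negative feedback (cruise-control style): speed settles at the setpoint.
stable = simulate(gain=0.5)

# Positive feedback (microphone-squeal style): the correction has the wrong
# sign, so every step pushes the state further from the setpoint.
unstable = simulate(gain=-0.5)
```

With `gain=0.5` the error halves every step and the state converges on 60; with `gain=-0.5` the error grows by half every step and the state diverges without bound.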
But I’m getting ahead of myself. How do we turn an emergency room into a computer model? The first thing we have to do is decompose the system into its basic pieces. For an emergency room, these are the locations (exam rooms, triage areas, offices, laboratory, imaging, etc.), the resources (physicians, nurses, portable x-rays, EKG machines, etc.), and the entities (patients, paper records, images, phone calls, etc.). Then, the flow of the system is mapped. This involves creating a detailed flowchart which identifies all of the processes and answers the question, “how do entities employ resources at a location, and then proceed to the next location?” Additionally, a flowchart for each resource may be needed: “how does this resource act on an entity, and then choose which entity it will service next?”
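As a rough sketch of that decomposition (the names and attributes here are invented for illustration, not taken from any real ER model), the three element types might look like this:

```python
# Hypothetical decomposition of an ER into the three basic element types.
from dataclasses import dataclass, field

@dataclass
class Location:          # e.g. triage, exam room, imaging
    name: str
    capacity: int        # how many entities it can hold at once

@dataclass
class Resource:          # e.g. physician, nurse, portable x-ray
    name: str
    busy: bool = False

@dataclass
class Entity:            # e.g. patient, paper record, phone call
    name: str
    route: list = field(default_factory=list)  # ordered locations to visit

triage = Location("triage", capacity=2)
nurse = Resource("triage nurse")
patient = Entity("patient 1", route=["triage", "exam room", "discharge"])
```

The `route` list is the code-level echo of the flowchart: it answers, for one entity, the question of where it goes next.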
Once all of the elements and flow are identified, the system can be coded into one of many different software suites with built-in DES engines. I’m sure a good computer scientist could tell you how to write your own DES engine too, but that seems like an unnecessary step. Most DES suites are pretty easy to use. Which is probably a drawback, because it allows a lot of people who don’t know what they’re doing to develop decent-looking simulations of systems and publish about them, when the simulations are either wrong or useless. One of the things I’ll do from time to time is critique papers in the field. There are some doozies.
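For the curious, the heart of a DES engine really is small: a clock plus a priority queue of scheduled events, each event a function that may schedule further events. This bare-bones sketch is illustrative only, not a substitute for a real suite:

```python
# Minimal discrete event simulation engine: a clock and a time-ordered
# queue of pending events.
import heapq

class Engine:
    def __init__(self):
        self.clock = 0.0
        self._queue = []  # min-heap of (time, sequence, action)
        self._seq = 0     # tie-breaker so the heap never compares callables

    def schedule(self, delay, action):
        heapq.heappush(self._queue, (self.clock + delay, self._seq, action))
        self._seq += 1

    def run(self):
        while self._queue:
            self.clock, _, action = heapq.heappop(self._queue)
            action()  # an event may schedule further events

# Example: a patient arrives at t=0 and finishes triage 5 minutes later.
log = []
eng = Engine()

def arrive():
    log.append(("arrive", eng.clock))
    eng.schedule(5, triaged)

def triaged():
    log.append(("triaged", eng.clock))

eng.schedule(0, arrive)
eng.run()
```

After `run()`, the log reads arrival at 0 and triage completion at 5, and the clock sits at 5.0. Everything else a commercial suite adds (resources, queues, statistics, animation) is built on top of this loop.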
Once the system is coded, we’re still not close to done, because all we have right now is the flow. We don’t know how long anything takes. This is where we do the field work, and how we account for the variation of human behaviour. We go into the ER with a stopwatch and a notebook, and we measure events. These days, data for a lot of events, like lab turnaround times, are available directly from computers. But face-to-face time between physician and patient, nurse and patient, and the various human elements and the computers they interact with is rarely capturable retrospectively. So we measure it.
This causes a couple of basic problems. People don’t like being watched, and measuring the length of time it takes people to do things automatically makes them speed up. I like to say, “I don’t care how fast you can do it. I am only interested in how long it takes when it’s done well.” Generally, the first dozen data points for a process are discarded. There’s also a lot of pastry-based gift giving. And then I simply observe how long each different type of process takes. I record observations, and then I curve-fit them to probability distributions. For each distribution, I may need anywhere from 25 observations (if the data are tightly clustered around a clear single mode) to 50 or 100 (if they are widely distributed). If a distribution looks legitimately bi-modal, then I generally have to determine the reason, and then stratify my data so that I can use a single-mode pdf for each stratum.
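As a sketch of the curve-fitting step, suppose the observations fit a lognormal, a common (though by no means universal) choice for service times. The timing data below are invented for illustration; real numbers come from the stopwatch:

```python
# Fit a lognormal to timed observations using only the standard library.
# For a lognormal, the logs of the data are normally distributed, so the
# maximum-likelihood parameters are just the mean and (population) stdev
# of the log-observations.
import math
import statistics

observed_minutes = [4.2, 5.1, 3.8, 6.0, 4.9, 5.5, 4.4, 7.2, 5.0, 4.7,
                    6.3, 5.8, 4.1, 5.2, 6.7, 4.6, 5.4, 4.9, 5.9, 5.3]

logs = [math.log(x) for x in observed_minutes]
mu = statistics.fmean(logs)     # location parameter of the lognormal
sigma = statistics.pstdev(logs) # scale parameter of the lognormal
```

In practice you would also check goodness of fit (a Kolmogorov–Smirnov test, say) before trusting the distribution, and a poor fit, or a bimodal histogram, is exactly the signal to stratify.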
So, these distributions are sampled repeatedly to create a stochastic process (a sequence of observations of a random variable), so that in the simulation, as in the real world, each patient requires a slightly, or sometimes dramatically, different amount of time to accomplish each task required to negotiate an ER visit. And of course, there are many different types of patients, and there is always a balancing act: do I stratify by type of patient or provider, or do I use a wider distribution? These questions are often answered by the expense and time frame of the project, and by its purpose. How granular is the investigation?
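Putting it together, the simulation draws a fresh service time for each simulated patient from the fitted distribution. The parameters and the acuity stratification below are hypothetical; real values come from the field work described above:

```python
# Draw per-patient service times from fitted lognormals, stratified by
# patient type. All parameters here are invented for illustration.
import random

rng = random.Random(42)  # seeded so simulation runs are reproducible

# Hypothetical stratification: one distribution per patient type,
# instead of a single wide distribution covering both.
params = {"low_acuity": (1.6, 0.2), "high_acuity": (2.3, 0.4)}  # (mu, sigma)

def exam_time(patient_type):
    """Minutes of physician face-to-face time; different on every call."""
    mu, sigma = params[patient_type]
    return rng.lognormvariate(mu, sigma)

times = [exam_time("low_acuity") for _ in range(5)]
```

Five simulated patients, five different exam times: that per-patient variation is the stochasticity standing in for human behaviour.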
Finally, there is the process of validation. How do we know that the simulation is useful? What is it good for? How can we be certain that this isn’t just a video game with no real-world application? I’d love to answer those questions. And I will. But not today. I have a grant to write, and I’m pushing a thousand words. So, up next: Validity! Experimentation!