As a doctor, I'd say the potential for improvement starts with the very basics. For instance: a radiologist sees a funny-looking line on an x-ray and reports "this is probably nothing, but we should repeat the x-ray in 6 weeks to make sure." Sometimes these requests slip through the cracks, and in a fraction of cases the patient presents 2 years later with terminal, metastatic lung cancer.
Ensuring blood thinners are prescribed after coronary bypass surgery or stenting is another one. It should happen every time, but sometimes, through human error, it doesn't.
A good startup idea would be to make a hotlist of the top ~1000 of these obvious, preventable errors, combine IT data sources to predict when they might be occurring, and put checking systems in place to prevent them. If it worked even adequately, hospitals would be seen as negligent for not using such a system.
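To make it concrete, each entry on that hotlist is basically a rule over patient records. Here's a minimal sketch of the blood-thinner check; field names like `procedures` and `active_meds` are invented for illustration, not any real EHR schema:

```python
# Hypothetical rule: flag patients who had a stent or bypass (CABG)
# but have no antiplatelet/anticoagulant on their active med list.
FLAGGED_PROCEDURES = {"CABG", "PCI_STENT"}
REQUIRED_MED_CLASSES = {"antiplatelet", "anticoagulant"}

def missing_blood_thinner(patient):
    """Return True if the patient matches the rule and needs review."""
    had_procedure = bool(FLAGGED_PROCEDURES & set(patient["procedures"]))
    has_med = any(med["class"] in REQUIRED_MED_CLASSES
                  for med in patient["active_meds"])
    return had_procedure and not has_med

def run_checks(patients):
    """Scan all records and return the IDs that need human review."""
    return [p["id"] for p in patients if missing_blood_thinner(p)]
```

The hard part isn't the rule logic, of course; it's getting the ~1000 rules clinically vetted and the data sources unified enough to feed them.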
I'd be interested in your opinion on what errors could be prevented by intervening less. With all the legal, financial, and emotional pressures for doctors to "do something," I'd be surprised if the most common errors are errors of neglect.
A lot of the pressure on doctors and nurses (in hospitals) comes from the fact that there are a lot of actual and potential patients, but not so many beds and people to take care of them. Corners have to get cut; less is already being done.
My reasoning is that it's probably easier to harm someone by botching an operation, or by administering an overdose, than it is by their not having the surgery at all, or by skipping medication. Not that either case is ideal, of course!
No doubt doctors and nurses are overworked. It occurs to me that our culture may set the bar too low on when to intervene. Things seem to go wrong more often than we like to acknowledge.
Heck, just sleeping a few nights in a hospital, without any treatment, already exposes a human to unhealthy hospital food, infectious disease, and a small, but non-zero, risk of a nurse mistakenly pumping you full of insulin.
Then you would be very surprised. Patient data is fragmented and doctors are overworked; accidentally missing something routine happens constantly. But unnecessary treatment happens too. If you're interested in that, read up on Choosing Wisely.
There was an article making the rounds here on HN about medical checklists, similar to what pilots use for takeoff and landing procedures, for example. Can't remember good keywords to search for, though...
I like the quote from the doctor doing a lot of the work with checklists, answering a question about when they will be widely adopted in medicine:
“At the current rate, it will never happen,” [Pronovost] said, as monitors beeped in the background. “The fundamental problem with the quality of American medicine is that we’ve failed to view delivery of health care as a science. The tasks of medical science fall into three buckets. One is understanding disease biology. One is finding effective therapies. And one is insuring those therapies are delivered effectively. That third bucket has been almost totally ignored by research funders, government, and academia. It’s viewed as the art of medicine. That’s a mistake, a huge mistake. And from a taxpayer’s perspective it’s outrageous.”
As a side note, I've long advocated developers use checklists during releases and more involved git tasks. Why? The cost of using a checklist is minimal-- maybe a few moments glancing at a sheet of paper. The upside is huge though, since you can avoid catastrophic error.
I was fortunate enough to earn my wings before becoming a developer, and as lame as it sounds, learning when and where to use checklists is one of the best lessons I learned.
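For what it's worth, a release checklist doesn't even need paper. A tiny script that refuses to proceed until every item is acknowledged does the job; the items below are just examples, not a prescribed process:

```python
RELEASE_CHECKLIST = [
    "CI is green on the release branch",
    "Changelog updated",
    "Version number bumped",
    "Tag pushed and release notes drafted",
]

def confirm_all(items, ask=input):
    """Walk the checklist; return True only if every item is confirmed."""
    for item in items:
        if ask(f"{item}? [y/N] ").strip().lower() != "y":
            print(f"Stopping: '{item}' not confirmed.")
            return False
    return True
```

Wire `confirm_all(RELEASE_CHECKLIST)` into the front of your release script and a skipped step stops the whole run, which is exactly the point of a checklist.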
If someone is getting a procedure, it seems like they should automatically be prescribed required medication unless they have been flagged with an adverse reaction. Sure, review this stuff, but the default needs to be less harmful than the supervised action.
I am guessing there are a lot of historical and cultural reasons why this doesn't happen. But why can't we apply basic automation?
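The "default-on unless flagged" idea really could be as simple as a lookup plus an allergy check. This is a toy sketch; the procedure codes, drug names, and the idea that one drug per procedure suffices are all illustrative assumptions:

```python
# Hypothetical mapping from procedure to the medication that should be
# ordered by default afterwards (invented for this sketch).
DEFAULT_ORDERS = {
    "CABG": "aspirin",
    "PCI_STENT": "clopidogrel",
}

def default_prescriptions(procedure, allergies):
    """Return the default post-procedure order, unless the patient is flagged."""
    drug = DEFAULT_ORDERS.get(procedure)
    if drug is None or drug in allergies:
        # Nothing to order, or the patient is flagged: leave it to the doctor.
        return []
    return [drug]
```

The doctor still reviews the order, but the failure mode flips from "forgot to prescribe" to "forgot to cancel," which is the safer default the parent is asking for.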
Unfortunately, hospitals don't have "a database"; there are often dozens (maybe more?) of vaguely connected systems, each requiring its own login. Getting the one thing you want (e.g. a patient's phone number) can require more than one computer and several programme logins. Having your accounts randomly blocked and calling IS to unblock them isn't uncommon either, and the same goes for random outages or periods of painful slowness. Maybe it's just the system I'm in?
Last time I was in a treatment room at a hospital (broken thumb), I was able to open a browser on the PC before the doctor came in and told me off.
He told me his group of physicians had refused to treat anyone until the IT department removed the login requirements for basic systems. They were sick of not being able to use their tools because a login didn't work for whatever reason: forgotten password, login system down, it was all the same to them. They just wanted to be able to access their tools.
That was in Adelaide, South Australia, about 4 years ago.
Just to add to this as someone on the other side of the equation: buying health-related software in a fairly corporate environment is the most painful thing I've ever been involved with. Accountants and IT departments range from unhelpful to hostile, with ridiculous demands. And this for a small system in a very small department.
A startup is an experiment; most of them fail. It will probably also have an outsize share of people who don't mind risk but very much do like (monetary) profits. This means they will take risks with patients' health and lives that we have already chosen not to accept, except in cases where bad outcomes don't matter (the patient is dying).
The "politics" can also be a lot worse in a fast-growth company with a lot of success-driven people. The risk of losing out is a lot lower in a "boring" company where everybody just gets their salary and advancement is slow and predictable. This leads to less focus on the product/service, because people are occupied with "positioning themselves".
There will also be investors, and they don't really care about the product at all, or about long-term success; as long as they can unload their shares at an IPO like the hot potatoes they often are, they're fine. That's not because people are evil, but because the selection function is set up in a way that favors this behavior.
In a sense though you are right, the possibilities are rather mind boggling indeed. I just don't think in a positive way.
> "A good startup idea would be to make a hotlist of the top ~1000 of these obvious, preventable errors."
I think that's a great idea.
I don't think it needs to be a start-up; maybe the AMA, FDA, or whoever could put together a simple list of preventable medical errors and publish it in an easy-to-read format.
Just put the information out there. After my bypass, that will be the first book or website I'm looking for.
Something like, "After the first heart attack, an aspirin might prevent the second heart attack."
Something that would legally protect the doctor.
It seems like the entire system is so afraid of lawsuits they just give the bare minimum of advice. And then there're the doctors who just don't really care.
Yes. I've seen several examples of decision support since 2010. It's not easy to train models, because the codes for procedures can be ambiguous. Furthermore, since hospital classifiers favor recall over precision when detecting medical errors, docs can be swamped with so many warning messages that they tend to ignore them.
Take adverse drug interactions as an example. The training data for drug interactions mostly come from adults, so the resulting models do not apply in a pediatric setting. When the models are let loose in a pediatric hospital, a high percentage of the drug interaction warnings are false positives, so these types of warnings tend to be ignored.
It seems that the trend is to use decision support with a lot of human oversight and investigation of the raw data to see if the model conclusions are correct.
What you describe seems to me to be a simple UI problem. I'd add a button, "False positive, don't show this warning again," and flag the drug interaction for review. If I got enough of these, I'd change the behavior for all users in the next update.
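A minimal version of that feedback loop might just count reports per warning type and mute the noisy ones; the threshold and warning names here are made up, and a real system would obviously want clinical sign-off before suppressing anything:

```python
from collections import Counter

FP_THRESHOLD = 50  # arbitrary cutoff for this sketch

class WarningFilter:
    """Suppress a warning type once enough users report it as a false positive."""

    def __init__(self):
        self.fp_reports = Counter()

    def report_false_positive(self, warning_type):
        """Record one 'don't show this again' click for review."""
        self.fp_reports[warning_type] += 1

    def should_show(self, warning_type):
        """Show the warning until the false-positive threshold is reached."""
        return self.fp_reports[warning_type] < FP_THRESHOLD
```

The flagged counts double as the review queue: a human looks at whatever crosses the threshold before the behavior changes for everyone.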
Some studies show that sleep deprivation is not a major factor in medical mistakes. Others indicate that patient hand-off is a more important area for medical mistakes.
This isn't to say that doctors shouldn't sleep, only to suggest that having doctors work longer to reduce patient hand-off is a legitimate and justified choice.
My ideal would be to improve the process and technology behind hand-off enough so that there isn't any justification to have doctors work longer hours. That is how you fix the sleep deprivation problem. Not seeing my wife for 30 hours because she is on call is no fun.
> "Overall, the risks of adverse outcomes of elective daytime procedures were similar whether or not the physician had provided medical services the previous night."
> "Implementation of the handoff program was associated with reductions in medical errors and in preventable adverse events and with improvements in communication, without a negative effect on workflow."
If I'm not mistaken, in Kaiser hospitals, each time a drug is administered to an in-hospital patient, the patient's wristband is electronically scanned with a handheld gun to re-verify that this is in fact Patient X with condition Y who needs treatment Z.
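That scan-before-administering step boils down to a simple match between the scanned wristband and the pending order. A sketch, with the order fields and ID format invented for illustration:

```python
def verify_administration(scanned_patient_id, order):
    """Check a scanned wristband against a pending medication order.

    `order` is a dict like {"patient_id": ..., "drug": ..., "active": True};
    the shape is assumed for this sketch, not taken from any real system.
    """
    if not order.get("active"):
        return (False, "order is not active")
    if scanned_patient_id != order["patient_id"]:
        return (False, "wristband does not match the order's patient")
    return (True, f"ok to administer {order['drug']}")
```

The value isn't the comparison itself; it's forcing the check to happen at the bedside, every time, instead of relying on memory.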
My main experience as a patient tells me that one major area for improvement is the utter lack of coordination among hospitals, insurance companies, and pharmacies in getting me the meds I've been prescribed. How many hours do I need to be on the phone? This is "just" at home, though.
In the hospital, I often have just as hard a time getting all my meds, because some of them are so specialized that the in-hospital pharmacy doesn't stock them and I have to bring them from home. If I come in due to an emergency, I am likely to forget them, and additional problems can come about from not having these meds.
Those aren't what most would consider mistakes on the part of doctors or health care professionals, though; they're institutional blockers to my health care. Do they get counted in this?
As for mistakes on the part of human workers, I wonder just how much comes from workers not washing their hands. Anecdotally, a lot of people I know complain that doctors are loath to do that and other cross-infection control procedures that are simple, but repetitive.
Another barrier I see (although not exactly a "mistake" from one person) to good care is how much effort this can all take, and that it often necessarily means taking time off work. Taking too much time off means you may get fired, etc. Is this taken into account?
I suppose many of my concerns are not direct causes: not getting your meds, or not getting to the doctor to figure out a problem early on, means you miss the opportunity to prevent or fix an easier problem. Then you have to deal with it as a full-blown disaster, which means more complicated care and more opportunities for mistakes in the OR.
At a CU medical school lecture I just saw a paper cited in which the infection rate for a certain complicated procedure was reduced from 3% to no incidents (over two quarters) after the teams adopted checklists. That is one of many improvement tools being adopted.
I'm working on a startup to help with this problem. My belief is doctors don't have enough opportunity to practice the complex tasks of diagnostics and treatment planning. Right now best-practice for medical training involves expensive life-size manikins and dedicated simulation rooms with real staff. My goal is a realistic software simulation of the emergency room with a wide variety of patient scenarios to let students and docs easily practice in a setting where failure is okay.
> Right now best-practice for medical training involves expensive life-size manikins and dedicated simulation rooms with real staff. My goal is a realistic software simulation of the emergency room with a wide variety of patient scenarios to let students and docs easily practice in a setting where failure is okay.
I suppose we'll see the time when instead of "Halo 7" we'll have a VR-based "House 2, laparoscopic edition".