I am planning a column on the role of experts in translating medical evidence. Evidence is important because it’s how doctors know they are helping, not harming, people.
It’s hardly news that the new (digital) democracy of information has changed the rules of influence in Medicine.
In the days of old, academic doctors generated, analyzed and translated evidence. We called these people key opinion leaders (KOLs). To become a KOL, you stayed in academics, published lots of studies, and crucially, you were not too critical of prevailing views.
If you did that, you could get invited to speak at meetings, write editorials and participate in expert guideline documents.
This vertical (or top-down) model still exists, but social media and the democracy of information are breaking it down a bit. More and more, ideas can garner influence based on their merit rather than their source. Resistance to expert views seems to be on the rise.
Some recent examples:
- Neurology experts strongly recommend use of TPA (clot-busting drugs) in stroke. Many emergency doctors have looked at the same evidence and are not convinced. I’ve sided with the emergency doctors, but our analyses have been criticized mostly because we are not “experts.”
- Cardiology societies have endorsed recent guidelines for treating high blood pressure. Family medicine leaders have looked at the same evidence and come to a different view.
- The USPSTF (United States Preventive Services Task Force) is an independent, voluntary group of scientists tasked with making evidence-based recommendations. Their reading of the evidence has sometimes conflicted with that of professional societies, most notably in reviews of cancer screening.
My questions are:
How large a role should experts play in translating evidence?
Can non-expert clinicians come to a more balanced review of the evidence? Perhaps experts are too close to the topic at hand, e.g., AF ablation doctors writing guidelines on AF ablation?
The opposing viewpoint holds that one needs the context of being an expert to understand and translate studies of medical evidence.
A related question: if you put 10 people on a writing committee for the treatment of a medical condition, how many should be experts in the field and how many should be independent experts in evaluating medical evidence, such as statisticians and epidemiologists? Currently, the majority are experts in the field.
Finally,
When I have written about using stents in patients with stable coronary disease, or TPA in stroke, or screening tests for cancer, some have said that I am an electrophysiologist and should “stay in my lane.”
I am interested in your thoughts on these questions.
JMM
P.S.: Inherent in this debate is the question of who is more expert: the hobbyist doctor who spends most of his/her time running trials, or the doctor who spends all his/her time seeing and treating patients?
16 replies on “Finding Truth: How Much Do We Need Experts?”
KOLs have been part and parcel of pharma marketing for decades. I don’t know what this looks like from the Doctor side, but I’ve seen it from the pharma side. Find someone who agrees with you (or can be convinced to agree with you), buy them a megaphone, and reward them handsomely.
They will convince themselves that you have in no way influenced their thinking.
The people driving online discussion seem much harder to corral.
http://pharmaceuticalcommerce.com/brand-marketing-communications/digital-opinion-leaders-dols-role-pharma-markets/
This seems like a good thing.
Pharmaceutical and device marketers count as advertisers here:
https://twitter.com/nntaleb/status/1005764764457463808
Find someone who agrees with you… just like using “experts” in court. Whom do you trust?
I like this quote from the first article:
“Financial conflicts are pervasive, under-reported, influential in marketing, and uncurbed over time.”
In my opinion, the very discussion of WHO should be allowed to comment is misguided; this is a relic of eminence-based medicine. Attacking the background of a person instead of discussing the argument brought forward is an ad hominem attack, and usually a sign the other side is not on top of their game, so a reason to persist, not to give up.
What we need, in my mind, is open and candid discourse (ad hominem attacks are not a sign of that), and it has to be about the best arguments, not some dogmatic reading by whoever is considered the expert of the day.
As a matter of fact, many findings in medicine aren’t black or white; they are usually black AND white (a medicine will bring benefit but also harm) or simply grey (“it might work”). Expecting a “clear” answer in such a situation would oversimplify reality to a degree that becomes harmful to patients. That should make us deeply suspicious of simple answers in medicine…
So in my mind, the question “who is the mightiest expert of all?” is not the right one. The best place for learning will be in discussions among people who come at the same problem from different angles. Just like the tale of the blind men and the elephant…
I have become more and more cynical about expert opinion. I have the most expertise and did most of my research in echocardiography.
In echocardiography, the KOLs are at academic centers.
The KOLs have a very cozy, symbiotic relationship with the echocardiography industry and the physician education industry. They list no conflicts of interest when they are benefiting greatly from access to the latest echocardiographic bells and whistles seemingly at no cost.
The cost of this access is a need to over-hype the latest echocardiographic bells and whistles. This is a win-win for the KOL, as educational conferences (which pay speakers handsomely and provide lavish accommodations and travel expenses, usually to exotic or sought-after locations) prefer speakers who talk about the latest bells and whistles. In addition, journals are always interested in publishing papers, no matter how weak, about the latest bells and whistles.
Subsequently, these weak papers are used in guidelines to justify routine performance of the bells and whistles. And, you guessed it, the guidelines committee has to be made up of the academics, because they understand all the nuances of the bells and whistles.
This self-serving cycle of access, publication, guidelines writing and education contributes to unnecessary costs in echo equipment and procedures but has done little to advance patient care in this field in the last 20 years.
Thanks Anthony. I have noticed the spread of “bells and whistles” in the echo lab over the last few years. It seems most of the patients I send for a simple echo end up with an IV in place. I’ve tried to stop this practice, but to no avail. I was told that I did not understand echo, and even if I did, these new bells and whistles had to be done b/c the guidelines supported their use. I was sent the guidelines and I read them. There were no outcomes studies, of course. Further, I was told, if we are to have a “certified” echo lab, we need to use the bells and whistles.
I hate the IVs and the echo extras because these practices enhance the disease effect and diminish the wellness effect. Most of the time in the arrhythmia clinic, I am trying to remove fear. When the techs make a huge deal of a simple ultrasound, it reinforces the notion that, ooh, this must be a serious problem. It’s why I can’t wait for handheld echo to go mainstream.
Promotion of bells and whistles from academia isn’t exclusive to echo. Maybe you have followed the FFR debate. To me, a non-expert, it seems the FFR data does not justify the degree of support it receives from many of the experts. AF ablation, too, has many academic leaders promoting such bells and whistles as esophagus-moving catheters. Please. The HRS expo was full of the latest bells and whistles for ablation. But when you talk with European colleagues, some of them from the most experienced centers in Europe, they tell you they use few of these bells and whistles.
There’s a further, much worse problem with the rise of the caste of specialists: no non-specialist even dares to question the “diagnosis” or procedure presented by the “upper caste” specialist. The poor patient faced with a condition that might not fit cleanly into one specialty or another will get no help from the primary care provider, for the PCP is “lower status” than the god, a.k.a. the specialist.
Should the patient ask a question of the specialist, he is likely to hear, “I am a blah-ologist! Talk to your primary care doctor!” Of course! Only an idiot would assume that you are dealing with a physician first rather than a “blah-ologist”!
I recently had this experience with a specialist practicing the ology starting with “card.”
He also was very annoyed when I mentioned John Mandrola…..go figure! I had a copy of “The Haywire Heart” in hand when I saw him.
Thank you Dr Mandrola for shining a bright light on the issues facing medicine and helping people like me!
Thanks for the comment and thanks for reading the Haywire Heart!
I’ve been incorrectly treated by a medical “expert” too many times. I look for the most experience, but even that does not necessarily mean much.
Example: the head of rehab for a major medical center reviewed my shoulder injuries with newer doctors on fellowship. He knew my history of frozen shoulder and had an x-ray showing good spacing in the shoulder joint, yet diagnosed impingement. My physical therapist said that when the pain increased, I should ask for an MRI to see what was going on. The result: frozen shoulder with severe edema, needing the opposite PT treatment (range of motion and stretching instead of strengthening). The rehab “expert” acted surprised and said only that it would take time and patience. No new PT prescription or advice on going forward. I thought of reporting this to the head of the medical center. And it’s sad what the doctors on fellowship learned from erroneous leadership.
Hi John, We recently faced the sharp end of ‘expert evidence’ in publishing our latest letter from the Therapeutics Initiative. https://www.ti.ubc.ca/2018/05/28/110-stimulants-for-adhd-in-children-revisited/
Our team basically did a simple utilization analysis of the prescribing of ADHD drugs in BC, noting that evidence of the “birth month effect” is mounting (i.e., kids born in the latter part of the year were more likely to be prescribed drugs for ADHD), the long-term effects are still largely unknown, and the prescribing continues to mount. A group of psychiatrists and apparent experts in ADHD reacted swiftly, trashing us in the pages of a major Vancouver paper.
http://theprovince.com/opinion/op-ed/opinion-adhd-is-a-real-brain-disorder-requiring-treatment-despite-what-some-say
The most obvious difference between these two contradictory positions is that our team has no relationship with the pharmaceutical industry, while these detractors have multiple, lengthy ties to the companies that make ADHD drugs. But maybe these experts know something about the evidence that we missed? That is always possible, but I’d be more convinced if their characterization of ADHD as a “neurodevelopmental disorder” that affects up to 8% of children and 4% of adults had a reliable reference. What would be nice is if we all agreed that this disorder is controversial and that we need to dig deeper to find answers.
Suffice to say, expert opinion can sometimes be thoughtful, helpful and solid, but sadly (and in my sole opinion) it is often arrogant fluff that just adds misleading disinformation into important discussions.
Also, recognize that the dichotomy we sometimes see between evidence and practice may rest strongly on the premise that the rigorous research trials that comprise our evidence and inform our guidelines study only a very small, well-controlled (ideal?) cohort rather than reflecting average clinical presentations.
Experts are of critical importance insofar as they can leverage evidence and guide translation to practice in a meaningful way.
This is an intractable problem, as both experts and “non experts” bring cognitive bias to interpretation of medical evidence.
The best we can do is exclude those with explicit conflicts of interest from guideline development and editorial authorship.
A good recent example of unreliable expert opinion occurred on last week’s PBS NewsHour, when Dr. Larry Norton of Memorial Sloan Kettering dissembled after Amna Nawaz incisively asked, “Have women for years been put at risk of harm by overtreatment for intermediate-risk breast cancer?”: https://www.pbs.org/newshour/show/most-women-with-smaller-breast-cancer-tumors-can-safely-skip-chemo-study-finds#transcript
I almost fell off my chair.
I support the view developed by Jeanne Lenzer, Jerome Hoffman, Curt Furberg, John Ioannidis, and Gøtzsche in:
http://www.bmj.com/content/347/bmj.f5535.full
and
http://www.bmj.com/content/345/bmj.e7031
“We believe guideline panels must primarily comprise experts at reviewing scientific evidence.(35)
In current practice, such scientists are either not involved at all in guideline creation, or work in an advisory or consultant capacity to content experts.
These roles should be inverted.
The first task of any guideline panel is to review the evidence to decide if there is an uncontested “best” answer; if not, and scientifically based controversy exists, then the panel constituency must reflect that diversity of thought.
“Panel stacking” with individuals known to believe disproportionately in one school of thought must be avoided.
Panels whose membership fairly represents differing scientific views are highly desirable.
To assure guideline readers that panels are not stacked, panelists should declare their a priori beliefs about a proposed intervention at the time they are nominated, and again at the conclusion of the panel’s term.
Many such declarations can be vetted because panelists often have published their viewpoints before empanelment.”
These are really useful links. Thx.
I am looking forward to your writing on this. I practice primary care, and for most conditions I treat there are one or often several guidelines published by expert panels. In general, these experts do not see primary care patients, yet they profess to know better than I do what is best for these patients. Often the experts themselves can’t agree on what is best; for example, see the recent arguments among the American College of Physicians, the American Diabetes Association, the American Association of Clinical Endocrinologists, et al. about what the proper target for hemoglobin A1C in diabetics should be.
The thrombolytic for stroke example is a great one. I have taken call in a rural ED for many years. Periodically we have an expert from an urban stroke center visit to teach us, make sure we know and follow the guidelines, and encourage referrals. These experts do not respond well when my colleagues and I want to discuss the literature on this subject. In fact, one of them told me that “we set the standard of care, and we are the expert witnesses for the plaintiffs.” Hmm.
I could go on but I know you understand all this.
“If your only tool is a hammer, every problem is a nail.”
If you haven’t read it yet, I think you will appreciate Black Box Thinking