When I was in medical school – sometime last century! – we heard a lot about Koch’s postulates1, referring, of course, to the German microbiologist Dr. Robert Koch and the criteria he developed to establish a causal relationship between a microorganism and illness. Accordingly, the organism must be found in all instances of the illness and not in healthy subjects; it must be isolated from diseased subjects and grown in culture media; the cultured organism should cause disease when introduced into healthy subjects; and finally, the organism must be recoverable from the test subject so inoculated.
The original criteria helped identify specific pathogens responsible for disease, but there were difficulties culturing some organisms, and viruses were then unknown. It has long been recognized that some infectious agents are responsible for disease even though they don’t readily fulfill Koch’s postulates. As our views regarding health and disease have become more sophisticated, the postulates have fallen into disuse. In general, much modern illness has not yielded to the search for a single pathogen, and the so-called chronic diseases that plague modernity seem multi-factorial in origin, with influences at work over many years.
We now talk convincingly of the determinants of health, epidemiologic variables that correlate strongly with illness. There are many of these; malnutrition, social isolation and stress, for example, surely affect us. Knowing that our impoverished, uneducated and neglected selves will not flourish prompts us to look to our circumstances in an attempt to improve things. The precise mechanisms whereby social, economic and other factors affect us are debatable. It is easy to invoke stress – particularly recurrent and unmitigated – as a final common pathway, triggering cortisol and adrenergic pathways, suppressing immunity and promoting general breakdown.
That we are at the mercy of the way we live was brought home to me most forcefully half a century ago by a University of California Santa Barbara biologist whose initial concern was over-population. Dr. Garrett Hardin’s 1968 paper, The Tragedy of the Commons2, considered the implications of common pastureland on which a number of independent individuals might work. At start-up on such a commons, there would be ample resources, but these wouldn’t last. Over time herdsmen or farmers would add to their cattle or crops as it made sense on an individual basis but, acting in narrow self-interest, would not consider the others, even though everyone would be impacted in the long run. As Hardin put it, “Freedom in a commons brings ruin to all.”
While Hardin’s view of human nature was pessimistic, it was realistic and has been used to consider problems as diverse as air and water pollution, climate change and even overuse of national parks. In similar fashion, another tragedy of the commons has emerged, one we never anticipated. In our much-vaunted information age, data has never been so abundant, but we have become dependent on communication networks. The human involvement in these networks has not always worked well for us, and I think of this as the Tragedy of the Digital Commons.
Think social media, chat rooms, the “comments” section following editorials and so on. Online with others, we are too often belligerent, deprecatory or just plain nutty. (I could add “tedious” too, but that’s another matter.) Though we can’t be counted on to be civil on the digital commons, we nevertheless look for “friends” or like-minded individuals. We’ve defined a peculiar electronic world, especially with our social media, and I know of several defunct individuals whose accounts persist posthumously. But never mind; who cares? Avatars can help us become whoever or whatever we want. There’s no recognizable difference between our real selves and bots. And once again, we devolve into disparagement or bullying.
Our troubles reside not in our binary gizmos but in our selves. Our (somewhat) evolved beings carry our base tendencies to lie and maraud, our near-total self-involvement, just as we bear traits for altruism, justice and fair play that have developed over millennia. The difference seems to be that online our darker selves dominate, especially when the flickering screen and its rabbit holes promise anonymity, and, tapping into anger and dissolving into mob rule, we become unrecognizable in our invective. Recently, to add to our general truculence, we’re told our online search for community is energized through algorithms that surreptitiously feed us stuff we likely want to hear, as well as partners that are not of our own choosing.
Along the way we’ve come to know a lot more about communication and about miscommunication, too. We often divide along the line of whether we trumpet “fake news” on hearing things we don’t like, or whether we are open to argument and persuasion. Fake news, of course, is nothing new and has arguably been around since Gutenberg invented the printing press in 1439. By design it is sensationalist and extreme, resistant to discussion. It turns out there are filters3 in our discourse that support our predilection for half-truths and outright lies.
We can be systematically excluded from specific, contrary information in several ways. In what has been called a “cognitive bubble,” we don’t hear voices with other relevant points of view. They’re just not there. This singularity of voice can be deliberate, or it can be inadvertent, passively imparted to us by our social contacts. Indeed, when networks developed for social reasons become informational networks, what else can we expect?
“Echo chambers” go a step further. Where a cognitive bubble omits contrary views, an echo chamber – think of cult groups’ chanting – actively distrusts and disparages them. Given early and consistent indoctrination, rationality holds little sway. There’ll be no difficulty maintaining contrary – even bizarre – points of view if exaggerated levels of agreement and suppressed levels of disagreement are built into our networks.
When we speak of communication filters and echo chambers, we’re used to thinking along political lines, but there are echo chambers aplenty in health care. Think of the continuing and irrational ruckus that persists with anti-vaccinators, or the guff that attends new diets, colonic cleansing, vitamin additives and so on. Consider as well the medical hucksters who push all manner of potions as wrinkle and appearance cure-alls. Medical advertisements – indeed most advertisements – warrant mention, too, since they generally omit pertinent information regarding utility, risk and expense and are, in the end, deceptive flim-flam.
“There’s trouble in River City,” as the song says. We’ve not gotten to a post-truth world. We continue to need substantive reasons to believe whatever it is that could be important to us. Did someone mention the word “evidence”? Isn’t it all about evidence?
Stanford Professor Dr. John Ioannidis’s continuing work stands out here. Much of the medical literature, our Holy Grail repository for evidence, doesn’t stand up to scrutiny and is misleading, exaggerated or just wrong. As expected, non-randomized trials draw the most criticism, but so do a quarter of gold-standard randomized trials and a tenth of what should be platinum-standard large randomized trials. Multiple biases, poor study design, inappropriate fiddling and unsupported conclusions are all at fault, with occasional outright fraud.
We’re left with little comfort. To the extent that the online world is a commons, we need better rules to protect us. To the degree we have wrongly come to think that news and truth are arbitrary, we must work to become less ignorant, less gullible. Lastly, when we look for proof or evidence, we must look hard. We must remain profoundly skeptical.
Some things may have changed in the transition from Living 1.0 to Living 2.0.
But not that much.
Caveat emptor.
References available upon request
Banner photo credit: Pete Linforth, Pixabay