Many people on this forum or others have complimented me on my knowledge of a range of subjects. I am less impressed with myself. The main thing I claim is that I am not as ignorant as I was, or as many self-proclaimed experts persist in remaining.
I've learned quite a bit by trying to answer questions asked by people who post here, or by arguing points with others. If I have disagreed with you, that doesn't mean I have failed to learn as a result. Partly because of this disease and its associated cognitive deficits, I have acquired the habit of checking myself wherever possible. This does not simply mean checking against expert opinion, unless I also understand the reasoning behind that opinion. I have experience with other subjects where experts can be flat wrong. Several times I've started out to write material to post, and ended up arguing against my own initial position. I hope I've eliminated the most flagrant blunders, but can't guarantee this.
At the outset I imagined there was a simple question accessible to scientific inquiry: do the people with a disease having an official definition have a particular retrovirus? This seemed clear enough until I saw the difference between the Oxford Criteria and the Canadian Consensus Criteria. The Fukuda definition was pretty much clinically useless, but the CCC seemed to be something you could pin down. The Oxford Criteria basically said "we think these people are thinking wrong, and we are the experts on correct thinking". This was circular reasoning which only permitted falsification via clinical tests which were not to be run in the case of patients diagnosed with CFS. This same problem manifested itself in the CDC's website. There was a fundamental problem in ruling out organic causes of chronic fatigue, which are so numerous I could fill this document with them without exhausting possibilities.
At that point I was badly confused and needed to examine the history of the subject, for which Hillary Johnson's "Osler's Web" was invaluable. I was less interested in the "he said/she said" aspects which bother other people than in the source material I could find. This was where I turned up a reference to an outbreak in Punta Gorda in 1956. "Wait a minute! I was there in 1956, but the 'worst flu of my life' definitely took place in 1957." I not only read the original report in NEJM, I also tracked down other papers by Henderson, Poskanzer and Shelokov (who investigated an outbreak in Maryland).
None of the publications by these early investigators struck me as being seriously flawed or hysterical. Henderson's work on the eradication of smallpox also demonstrated exceptional ability. And Poskanzer did not appear to be a lightweight in neurology at Harvard Medical School or Massachusetts General Hospital. By contrast, the publication by Stephen Straus on pre-existing psychiatric problems in CFS patients caused me to agree with Leonard Jason: "I didn't think a paper that bad could get published." Comparing the published work of W.C. Reeves with the professional output of any of the early investigators, including Dr. Ramsay, who investigated the 1955 outbreak at the Royal Free Hospital, was simply embarrassing. Neither Straus nor Reeves was specially trained in psychiatry, but this was apparently no hindrance in proposing psychiatric etiology. How did such psychiatric amateurs become the official experts? These were people you would choose if you did not wish to find anything.
Diagnostic criteria were only one aspect which bothered me about the first reports disputing findings on XMRV. In all cases the assays used were different. In the first instance sample sizes were about 1/10th the size of those used in the original research. Sample handling was different. Positive controls were a problem, particularly if there were errors in the published sequence data, as is all too likely in a discovery of a slow pathogen which had previously been overlooked. (This later turned out to be true. The XMRV used as a positive control was a chimera derived from three different patients.) I got the distinct impression this disease was not being taken seriously.
On the retroviral side I went back in the literature to see how retroviruses were discovered, and human retroviruses in particular. In the process I found that gamma retroviral sequences have been found in humans from about day one of the retroviral era. A decision was apparently made to disregard these as a confusing smokescreen during the hunt for HIV. One reason, besides the discovery of HERVs, was that HIV-1 does an excellent job of stimulating expression of retroviral fragments that may not resemble HIV at all. This hypothesis was greatly strengthened by the later discovery of some 100 copies of HERV-K111, derived from a beta retrovirus, which had previously been overlooked. These sequences turned up in AIDS patients.
How do you determine that a retrovirus causes a human disease? There are mouse models of human diseases which may guide you, as happens with MMTV and about 95% of mouse mammary tumors. Are there similar beta retroviruses in humans? Those copies of HERV-K111 are an example. Expression of these sequences has been found to be more common in some kinds of breast cancer. Does this mean we have found the cause? Apparently not. Even the statistical result that HIV-positive patients on anti-retroviral drugs have lower rates of breast cancer doesn't apparently mean anything, even though practically every other cancer is more common in infected people.
What about other diseases of unknown etiology? Actual retroviral virions have been recovered from both MS patients and synovial fluid of RA patients, though it is not clear that these can replicate efficiently. Then again, if they replicated efficiently they would cause progressive and rapidly lethal disease. Both examples are considered chronic or remitting/relapsing diseases. If we were arguing about whether the disease in question was acute, progressive or rapidly lethal, we could have avoided the whole episode. It simply is not.
So, how would we find an infectious etiology for any chronic disease? I'm still waiting for a good answer. I'm afraid the answer is that it is an infectious disease if the pathogen involved is convenient to find and handle in laboratories. A huge range of known pathogens are extremely hard to culture. Without exponential replication it is not at all clear that any pathogen can be connected with any particular disease.
Meanwhile, the percentage of the population suffering from chronic diseases keeps increasing. This accounts for a surprisingly large percentage of public healthcare expenditures.
What about the alternative explanation of mental illness? At present anyone unfortunate enough to live to an advanced age has about 1 chance in 3 of suffering dementia before dying. Incidence rates of other recognized mental illnesses keep climbing. Actual reliable cures seem to remain perpetually 30 years in the future, like controlled nuclear fusion.
Somewhere in all this business I investigated the dysautonomia which badly affects my ability to do much of anything except lie in bed or sit up to type. Like many people in this community I have had multiple episodes of syncope. These have been reported as possible seizures, then reduced to "unexplained loss of consciousness". Would you believe that syncope is involved in about 1/3 of all visits to emergency departments? This is a major public health expense, and most of our technology not only fails to pinpoint the cause, it can't even predict which patients are vulnerable.
See why I think it is past time for a major overhaul of the medical profession and medical research?
Over the past few years I've learned a lot about medical subjects and medical experts. The experience has not improved my confidence in either medical research or medical theory and practice.
One common report from medical schools is that young doctors are told "85% of your patients will get better no matter what you do." If "evidence-based medicine" were truly valid we might make a strong case for nihilotherapy, doing nothing while reassuring the patient you are actually doing something vital. (That reassurance may not be essential to the patient, but is definitely required to get paid.)
This is not to say that most doctors are cynically exploiting patients in order to line their own pockets. Like most of us they are simply going through the motions of activities that result in paychecks or bank deposits without analyzing them too carefully. (You don't want to analyze the goose that lays the golden eggs.) It is important to differentiate between lapses tolerated or even encouraged across a profession and personal animus by individual professionals.
There is remarkable continuity in the sociological aspects of medicine from a time in the early 19th century to the present. At the beginning of this period there was no germ theory of disease, and thus no antiseptics, except by accident. Incredibly, the experimental discovery of the anesthetic properties of nitrous oxide by Humphry Davy at the "Pneumatic Institution" did not immediately result in medical applications, though use as a recreational drug took off at once.
(Those who favor experimentation with recreational chemicals should make an effort to unearth the history of the discovery of the effects of carbon monoxide on humans. A promising young French chemist left notes that trailed off as he died.)
Various forms of bleeding, cupping, etc. were widely practiced. Washing your hands between autopsies and other medical activities was considered evidence of being too fastidious for the profession. The stethoscope, an early diagnostic instrument, was just being introduced in the most advanced medical schools. It took another generation before this became widespread. There were no clinical thermometers or means of measuring blood pressure. Most diagnoses were made on the basis of "vast professional experience" -- completely without any kind of laboratory test. Theory was based on such ideas as balancing the four humors: blood, black bile, yellow bile and phlegm. (It might also make an interesting study to find out when medical schools stopped referring to astrological and telluric influences. Moral failings of patients were also always popular explanations of etiology, though exactly how this resulted in disease, in the absence of germ theory, was a conundrum.)
The majority of pharmaceuticals were either toxic or addictive. Some were both.
People with delicate dispositions should not inquire too closely about the practice of surgery at that time. It was often the case that "the operation was a success, but the patient died."
The one medical procedure at that time with a real potential to prevent disease and death was vaccination against smallpox. This was controversial within the profession, and the way it worked was not understood at all.
There was widespread belief in "brain fever" as the result of mental exertion, particularly if the patient was thinking the wrong kind of thoughts. This became a literary meme late in the Victorian era, but association with writers well-trained in medicine (like Arthur Conan Doyle) makes it clear this was not simply a popular misunderstanding.
All this is dismissed by most modern doctors with "Yes, yes, we know doctors made mistakes in the bad old days, but we've learned from that, and moved on." This leads me to ask when the "bad old days" ended, and how.
Let me describe the sequence of events in the example of tuberculosis, an acknowledged major public health problem. The identification of Mycobacterium tuberculosis as the leading cause of tuberculosis in humans by Robert Koch took place in 1882. There followed 20 years of controversy, but by 1905 it was clear to the Nobel Prize committee that Koch's explanation had prevailed. Along the way there was the discovery of Mycobacterium bovis in cattle, which was responsible for a significant portion of the human disease. (There are a number of examples in which mycobacteria have jumped species, and early work had trouble distinguishing these species.) Pasteurization, known since 1864, destroys the bacteria, so it would have been easy to go directly to the conclusion that pasteurizing milk could reduce the incidence of TB. In actual fact, pasteurization of milk was introduced in the U.S. in an effort to control spoilage and eliminate contaminants deliberately added to mask spoilage. This was a fairly early result of the creation of the FDA. Their first ordinance on pasteurization is dated 1924.
This practice also eliminated transmission of diseases like brucellosis, diphtheria, scarlet fever, etc. by this route, but it is not clear how much of a role the medical profession played in this, except for Dr. Milton Rosenau of the U.S. Marine Hospital Service, who in 1906 found a low-temperature process which did not alter taste. Pasteurization was mainly done by producers to allow milk to be shipped to more distant markets without spoiling, which was more important in the U.S. than in the U.K. Federal regulation followed standards set by individual cities rather than leading.
This shift in practice did not take place in the U.K. until a generation had passed in which the medical value of the practice in the U.S. became obvious in vital statistics from cities where it was required.
A second discovery by Koch in 1890 involved the tuberculin antigen produced by M. tuberculosis and M. bovis infection. The possibility of using this as a test for active infection was raised in a newspaper report of the discovery at the time. In fact the simple popular idea was that this antigen would provoke an immune response which would eliminate the disease. This failed because the immune systems of those with active disease were already compromised, and the generalized inflammatory response was itself a problem. It took another generation before doctors stopped trying to use tuberculin that way and started using diagnostic tests based on tuberculin, like the current Mantoux test.
Another result of these discoveries was the attempt to develop a vaccine from weakened forms of the bacillus. This resulted in the BCG vaccine in 1921. The effect of this vaccine is variable. It can be as much as 80% effective at preventing active TB for 15 years, or scarcely effective at all. It should not be given to people already infected with M. tuberculosis, which takes us back to those tuberculin tests which were not in use. Modern reviews find it can reduce active TB cases by 50%, which is a strong argument that it can help stop epidemics. Unfortunately, it played almost no role in the control of TB during the first half of the 20th century.
I have also known people who had a lung deliberately collapsed (artificial pneumothorax) to rest it, in the hope it might recover from TB. The rate of success was not striking, but this had little effect on practice. This history convinced me that doctors will keep trying treatments that do not work as long as they continue to practice, and may well ignore treatments which do work if these require much thinking.
The first real cures resulting from antibiotics came with the introduction of streptomycin in 1946, though this drug is no longer very effective. It has largely been replaced by isoniazid and rifampin, and there are TB strains which do not respond to these. If you estimate the time from clear recognition of the problem and identification of the cause to effective solutions at 50 years you will not be far wrong.
Along the way there were other incidents which deserve scrutiny.
Efforts to provide symptomatic relief resulted in experiments with steroids to reduce inflammation. This proved that you could make patients feel better even while allowing the disease that was killing them to run wild. You can find articles in medical journals reminding doctors that this was a known mistake in the 1950s, which indicates the practice continued long after medical schools were aware it was a very bad idea.
Another drug tested as an antibacterial agent, isoniazid, turned out to make patients feel better even when it had no detectable effect on the infection. This was the origin of the first antidepressant without danger of addiction.
There is a problem here in deciding how much of the problem of mental illness is due to unrecognized organic disease. At one time some 10% of patients in mental hospitals were there because of syphilis alone. This could be determined in late stages of the disease when there were unmistakable clinical signs or from evidence found at autopsy. Early stages of neurosyphilis can produce symptoms which fit such categories as schizophrenia, depression, mania or dementia. Despite this we continue to see undiagnosed syphilis turn up among a percent or two of psychiatric patients.
We simply don't know how many TB patients ended up in mental hospitals in the past. The problem is that 90% of those infected with m. tuberculosis have latent rather than active infections. Fatigue and depression are extremely non-specific symptoms, and organic causes may be overlooked if there are no convenient clinical signs of organic disease. Even when standard diagnostic tests and chest X-rays are done, there are still cases in which TB is misdiagnosed as major depression. Present day doctors may never have seen miliary TB, and may be unaware that serious cases may not produce a strong response to tuberculin because of immune exhaustion.
This raises the question of whether treatment of depression with isoniazid may, in some cases, have been treating undiagnosed organic disease. The same doctors who assured me this was highly improbable were stunned to learn that modern antidepressants like fluoxetine (Prozac) strongly inhibit enteroviruses like coxsackie-B. This characteristic now seems common to most SSRIs. (Possible antiviral properties of SNRIs are still under investigation.)
Another aspect of antidepressants concerns the location of affected serotonin receptors. Most doctors do not seem to be aware the vast majority are outside the brain. Even those aware of receptors in the autonomic nervous system will likely be stumped by questions about serotonin receptors on lymphocytes, and possible immunomodulatory effects of antidepressants. This is another instance where I question if we actually know what accepted medical practice is doing.
I've already called attention to the effect of treatment with minocycline right at the time of onset of symptoms of schizophrenia. This could be due to action against several types of infectious agents: parasites like Toxoplasma gondii, spirochetes like Borrelia burgdorferi or B. miyamotoi, or even unknown viruses. I'd like to reiterate here that most antipsychotic drugs appear to inhibit reproduction of Toxoplasma gondii, which is known to produce symptoms of schizophrenia. Is this yet another pure coincidence?
How many such coincidences does it take to force a second look at intractable problems in healthcare?
While I'm at it, I might as well reiterate another fact: over 95% of those infected with poliomyelitis virus did not develop paralytic disease. If you did not want to find an infectious cause, it would have been easy to deny it. The problem was less the presence or absence of the virus than its rapid replication and damage to nerve tissue. If the disease had not had cases with conspicuous localized damage to specific nerves, I doubt it would have been recognized. In those cases where patients seem at first to experience a full recovery, and it is hard to identify conspicuous localized damage, we have the problem of post-polio syndrome. If this were an isolated condition, and no connection with paralytic disease was made, the resulting disease would be quite a medical mystery.
Put another way round, the difference between those who developed paralytic poliomyelitis and those who did not was the precise strain of the virus, which mutates rapidly, and the way their immune system responded to the infection. Polio vaccines have demonstrated that the vast majority of people can mount effective defenses against this infection if their immune systems have been primed to recognize it.
There is still more food for thought in this history. Few people noticed a disease we call poliomyelitis prior to the introduction of chlorinated city water supplies. There was "infantile paralysis" which was commonly fatal. Those who survived infancy were almost certainly immune. There were fewer conspicuous cripples due to this cause, though there were plenty of other causes. Poliomyelitis was part of a witches' brew of disease found in typical city environments. Many really serious health problems could not even be distinguished from the "normal" background death rate.
(Think I'm exaggerating? Check the mortality tables for London published in 1660, but based on data 127 years old, and you will find that 60% of those born in London died before age 17. Note: this was almost immediately prior to a recognized epidemic of plague which hit in 1665. Conditions were so bad the great London fire of 1666 probably improved matters.)
We should expect previously unrecognized health problems to become apparent after the improvement in conditions caused by the introduction of antibiotics, etc., and this is what I believe ME/CFS is.
What I do not know is exactly what single cause should be blamed. If anything activates HERVs, or even causes transcription of envelope genes with immunosuppressive domains, that could do it. Enteroviruses which enter the nervous system could do it. Herpes-type viruses like EBV, VZV, CMV or HHV-6 could be responsible, and we now know that such viruses found in MS patients need not be identical to those in healthy people, a definite complication for research.
I've highlighted viral causes because of scattered evidence the subtle tissue damage I believe takes place close to the start of the pathology often involves invasion of tissues by CD8+ cytotoxic T-cells. It is as if these cells are looking for a viral cause. They may be mistaken, but that doesn't stop the damage. Other pathogens are certainly possible, like spirochetes, common bacteria or even multicelled parasites.
In a game of 20 questions we haven't even gotten past the question of "animal, vegetable or mineral?" though I'm pretty sure the cause is not "bigger than a breadbox".
I'm especially interested in pathogens inside immune cells because that is one place in the human body where exponential replication normally characteristic of infectious disease is commonly tolerated. A pathogen which infects cells of the immune system subject to clonal expansion in response to unrelated pathogens has a great opportunity to benefit itself if it can exploit mitosis. In such a case the exponential replication of the pathogen inside these cells will be ascribed to the response by the home team to invaders either from outside the body, or from different physiological compartments. The result will be unexplained immune response in the absence of detectable external pathogens.
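The arithmetic behind this piggyback scenario is easy to sketch. The following toy model is my own illustration with made-up numbers, not data from any study; it only shows how a pathogen copied along with each cell division inherits the exponential growth of the clone, without ever producing a classical burst of free virions:

```python
# Toy model (illustrative numbers only): a pathogen carried inside immune
# cells is duplicated whenever its host cell divides, so clonal expansion
# in response to some UNRELATED antigen also expands the pathogen.

def clonal_expansion(infected_cells: int, doublings: int) -> int:
    """Each division copies the cell -- and any resident pathogen -- once."""
    return infected_cells * 2 ** doublings

# A single infected memory cell swept up in a response to an unrelated
# infection; ~20 doublings is a plausible scale for a clonal response.
print(clonal_expansion(1, 20))   # -> 1048576 infected cells
```

One infected cell becomes roughly a million without any step that looks like classical viral replication, which is why the expansion would be ascribed to the "home team" rather than to the pathogen.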
Autoimmune or autoinflammatory responses which target proteins found in nerves or endocrine glands could play a major role. In this connection I would point out that a surprising number of cases of GWI have turned out to have pituitary hypophysitis, which is normally considered rare. (There are published estimates for hypophysitis due to leukocyte infiltration at 1 in 10,000,000 of the general population, which conflicts with data from autopsies showing pituitary damage in around 5%. Why does this level of ignorance about damage to a vital organ persist?) Do ME/CFS patients show such abnormalities? Other aspects of this disease make me suspect immune damage to ion channels and problems with regulation of electrolytes.
Added: one problem in separating possible causes is the way these organ systems interact. Pituitary hypophysitis can result in disturbances in ACTH and TSH, which lead to problems with adrenal or thyroid function. Problems in production of ADH (vasopressin) can result in polydipsia and polyuria, which directly affect electrolyte balance and can cause hypovolemia. Studies which exclude patients with problems like thyroid insufficiency or polyuria a priori will necessarily miss these connections, and may miss the bulk of the pathology. When you are dealing with conditions of unknown etiology you need to be very careful about treating them like diseases with well-understood etiologies and pathologies. Nothing in nature restricts pathologies to individual organ systems.
The recent discovery of the role of peptides in communication between B-cells and T-cells, which limits recruitment of cytotoxic T-cells to inflamed tissue, strikes me as very promising. Endothelial dysfunction probably plays a role in cardiovascular aspects of the disease, and problems like the pituitary damage, or damage to dorsal root ganglia, are often caused by invasion of tissues by such T-cells. I also believe the peculiar forms of unusual lymphomas statistically associated with the disease are a clue to the etiology, if anybody important is paying attention.
Added: as if to emphasize my point about prior ignorance concerning features of the human immune system, research released today describes a previously unsuspected lymphatic system in the meninges surrounding the brain.
I've already said that I am not convinced this disease is isolated or in competition with other medical research. That is simply an aspect of the politics of research funding. Nature did not carve the human body up into feudal domains. That is entirely the work of humans. People who have rapidly progressive diseases often deteriorate too rapidly to study what is going on at the time the pathological process begins. We are a unique population in which progressive pathology has been stalled. This ought to be of interest in itself, for many reasons.
Careful manipulation of defective immune response is producing breakthroughs in the treatment of cancer reported almost daily. Eliminating chronic low-level inflammation in tissues also looks like a profitable route toward prevention of such serious problems as colorectal cancer. At some point we simply have to close the current gap between specialists in infectious diseases and those who treat degenerative diseases like oncologists and rheumatologists. Declaring serious illness to be "somebody else's problem" has gone on too long.
There has been a deluge of scientific information about basic biology in the last 20 years, but too often this has not translated into anything useful for treatment of any human disease. (The effect on medical billing is another matter. If asked to name the most dramatic change in the practice of medicine during the 20th century, I would have to place changes in medical billing at the forefront of professional change.) A large part of the reason for this unsatisfactory situation is that the new information has been like new wine in very old bottles. The new thinking needed to apply this information has yet to appear.
This is especially true in the case of the human genome, of which we now have more than a first rough draft. You can find press releases implying that "the gene" that causes disease X, Y or Z has now been found, and elimination of this scourge is merely a matter of time and more funding. Most of this seems based on what I call the "player piano" version of genetics, which can be traced back to the "central dogma" of molecular biology, and not even the most current version of that dogma. This says that genes encoded in DNA are transcribed into RNA, which is then translated into polypeptides that fold into proteins. As the title of one history of 20th-century music puts it, "The Rest is Noise".
There are problems with this formulation of genetics apparent from the start, including the embarrassing rejection of the possible existence of retroviruses. We now know that the chimpanzee genome is virtually identical to the human genome in terms of expressed proteins, and many identified differences are evolutionarily neutral. While I don't recommend the experiment, it appears that you can take a human gene and insert it into a chimpanzee genome, or vice versa, and the result will function pretty much as before. The result would be a chimpanzee with a human gene or a human with a chimpanzee gene, but not some hybrid from "The Island of Doctor Moreau". The vast majority of differences are not in those proteins produced, but in regulation of genes.
One problem with concentration on only sequences expressed as proteins (exons) is that these constitute less than 2% of the entire genome. Another problem is that virtually all genes are composed of multiple exons separated by "introns" which are excised as the exons are spliced together to form the mature messenger RNA, and these are not always spliced in a single unique way. (I just had a question about one gene which consists of 52 exons. You could literally represent this using a deck of playing cards, and I believe we all understand that there can be a LOT of different combinations.)
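To put a number on the playing-card analogy: since splicing preserves exon order, a crude upper bound on the distinct transcripts an n-exon gene could produce is the number of non-empty subsets of its exons. This is my own back-of-envelope illustration, not a claim about any real gene's actual isoform count; biology realizes only a small fraction of these combinations:

```python
# Upper bound on splice variants: any non-empty, order-preserving subset
# of exons could in principle be joined into a transcript, giving 2**n - 1
# combinations for n exons. (Illustrative arithmetic, not measured biology.)

def max_splice_variants(n_exons: int) -> int:
    return 2 ** n_exons - 1

print(max_splice_variants(52))   # -> 4503599627370495, about 4.5e15
```

Even if 99.9999% of those combinations are never made, the space left over dwarfs the "one gene, one protein" picture.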
What is more, different genes are expressed under different conditions, and many important genes are present in multiple copies. (In the case of people I know with late onset muscular dystrophy being able to switch on the gene which functioned during childhood should stop the pathological progression, and might even reverse the disease. A similar possibility presents itself in many cases of sickle-cell disease. There are working genes for hemoglobin present, but no longer switched on.)
All the above is primarily aimed at describing normal functioning of human genomes. In pathological states, or in defense against pathogens, we need to consider the effect of such things as RNA interference, which can "knock down" a gene, or an exon spliced into the final sequence, without changing the DNA at all.
The portion of the human genome which is actively expressed at any particular stage of adult life is more like 1% than 2%. Control of which genes are actively expressed is the subject of epigenetics. Had anyone paid much attention to Barbara McClintock in the 1950s we could be decades ahead of our current understanding in this field, and epigenetic control is much more likely to have early medical applications than the difficult and dangerous practice of actually replacing human genes via genetic engineering.
By contrast, the portion of the human genome consisting of identified viral sequences is around 8%, and there may be more we have not properly identified, as happened with HERVs in centromeres. Naturally, this means that the study of HERVs is another backwater in research, which officially cannot be connected with any human disease without a great deal of further research. I think an unbiased (nonhuman) observer would conclude that viruses have had much more success with genetic engineering than humans.
Somehow in the above I've overlooked transposable genetic elements. (Remember "jumping genes"?)
Genetics merely starts with the genes you get from your parents. Some of these are then modified by transposons, so that even "identical twins" do not have exactly the same genomes. Expression of DNA sequences in chromosomes is modified by epigenetic control like methylation. This is further modified by variable splicing and RNA interference before RNA sequences are converted to polypeptides and assembled proteins.
Do you begin to see why all the information about gene sequences has not translated into useful medical interventions by doctors thinking in terms of that "player piano" model?
Finally, I want to close with an insight that still seems to elude most experts. The replicated experimental data showing a drop in the threshold at which we enter anaerobic metabolism, lasting more than 24 hours after maximal exertion, does far more than validate patient reports of "post-exertional malaise". This is fundamental physiology showing something that was never seen before. I've been through as much of the earlier literature as I could lay my hands on, and I continue to say, this finding simply is not there with respect to any disease. The size of the drops indicates that substantial quantities of metabolic waste products, on the order of grams, not picograms, must be accumulating and persisting. These have traditionally been ignored because of things "everyone knew" about exercise metabolism.
Just how ignorant have experts been about the most basic physiological measurements?
Added: I'm not going to get into arguments about the label to apply to those patients who exhibit this bizarre change. I'm not impressed by people who say "if you define the illness our way, you won't find patients with those results." No matter what label you apply to them these patients represent what Thomas Kuhn would call an anomaly for current paradigms of exercise physiology. Find out what is going on in those who do exhibit the problem and you should gain insight into fundamental physiology. Nor is this a purely academic question; you can check current expenditures for rehabilitation of patients recovering from a variety of serious diseases of undisputed etiology, some of whom do not benefit from existing practices. This aspect alone is worth more funding than research on male-pattern baldness. Hospitals are already spending millions on exercise facilities, and these are expanding at present. Total expenditures are up in the billions. Arguments that more patients benefit than are harmed are of no use to those individuals who are harmed. Even those who merely fail to benefit cost healthcare systems enormous sums.