The Origins of Medical Blindness
In part one of this series, I aimed to illustrate how members of the physician class are often unable to see medically induced (iatrogenic) injuries, and how this leads to a significant degree of suffering for patients, who are in effect gaslighted by these doctors. I argued this blindness is due to a combination of doctors lacking the capacity to recognize iatrogenic injuries and doctors being unwilling to recognize them. I will now seek to explain why they often lack the capacity to recognize these injuries. As this is part of a series, some of the concepts within this article will be difficult to fully understand without reading the previous section.
Historical Examples of Medical Blindness
Historically, the medical profession has been extremely resistant to accepting new ideas (except in cases where pharmaceutical money is given to influential physicians). This results in very bad ideas staying in use for decades (or, in a few cases like mercury, for centuries).
I created this Substack to bring awareness to the fact that we are presently reliving the smallpox vaccine tragedy, where a deadly and unproven vaccine was released to the market in 1796. Once it came into widespread use, instead of preventing smallpox, it repeatedly caused widespread smallpox outbreaks and maimed or killed countless people around the world. As the vaccine caused smallpox epidemics around the world, governments responded with increasingly draconian mandates to vaccinate (and then boost) the population to try to address the increasing waves of smallpox. The medical profession and intellectual class supported these mandates while the working class opposed them, often going to prison or losing their possessions for doing so.
As the death count mounted, mass protests broke out. This culminated in an 1885 protest of approximately 100,000 people against the mandates in Leicester, England. Leicester was forced to drop the mandates and switched to simple (but revolutionary at the time) public health measures to contain the epidemic. The medical profession said mass graves would soon follow (and continued to say so for at least 30 years); Leicester instead became the first place to end the smallpox epidemic, and its successful public health model was gradually adopted globally. Despite what you have been taught, Leicester’s model (rather than the vaccine) is what actually ended the smallpox epidemic.
There are a few important takeaways from this story. The first is that much of the insanity we are dealing with now arose from a past mistake that was never reconciled and instead became a mythology of our culture. The second is that it took a century to end this tragedy, and it only happened because of mass working class protest against the mandates. The third is that the same gaslighting we see now was in full swing over a century ago:
“Mr. Henry May, writing to the Birmingham Medical Review, in January, 1874 reported that deaths as a result of vaccination were often not reported because of an allegiance to the practice. Often a vaccinated person was recorded as having died from another condition such as chicken pox or erroneously listed as unvaccinated.”
One of the best-known examples of medical blindness concerns Ignaz Semmelweis, a doctor who in 1847 investigated why 10% of women were dying of sepsis after giving birth at his hospital, while only 4% died at a neighboring hospital that used only midwives. Although the existence of microbes was not yet known, he eventually concluded the issue arose from the doctors at his hospital dissecting cadavers with their bare hands immediately before going to deliver babies. He suggested washing hands in a chlorine solution before delivering babies and produced data showing this reduced the death rate to below 1%.
The medical community was extremely resistant to his claims, with many doctors voicing their indignation at Semmelweis having the audacity to suggest their “gentlemanly” hands could be unclean or harming patients. His evidence was dismissed as insufficient and without any rational basis. He was eventually committed to a mental asylum by his profession where he was severely beaten by a guard and died not long after from complications of the assault. Decades later when the germ theory emerged and became integral to the medical profession, Semmelweis posthumously became an internationally recognized hero.
In the early 1950s, patients who had heart attacks were confined in hospitals to strict bedrest for months. They were not allowed to sit up or turn on their side without a nurse’s permission, had to be provided a bed pan to go to the bathroom, and were banned from any type of media. All of this was to avoid exposing them to stress that, it was thought, their weakened hearts could not tolerate. One in three died, and the patients experienced severe psychiatric distress that resulted in the hospitals deeming it necessary to regularly administer toxic sedatives. Given how ill-advised this is from a lymphatic perspective (as lymphatic stagnation greatly impedes cardiac recovery), it still amazes me that this was the standard of care.
Dr. Bernard Lown proposed a study in which cardiac patients would sit in chairs. While he was eventually able to initiate his study, there was widespread opposition to it. Lown was frequently compared to Dr. Mengele, the Nazi doctor known for conducting abhorrent human experimentation in the concentration camps, and in his own words: “The house staff initially was vehemently opposed, even greeting me with Sieg Heil Hitler salutes.”
The senior medical staff caring for these patients pronounced that the subjects would experience fatal arrhythmias, heart rupture, or congestive heart failure from an overstressed heart muscle; none of which occurred. Instead, to the medical staff’s utter astonishment, once they stopped torturing the patients, the patients’ psychiatric issues resolved and their recovery prognosis improved. Lown’s data was eventually published and transformed cardiac management globally, likely saving hundreds of thousands of lives (and possibly millions had the practice persisted). Based on my experience with institutional review boards, I am genuinely doubtful (non-commercial) research like Lown’s could ever be approved in the current era.
Hopefully, the numerous parallels between these events and the COVID-19 response are clear.
In the interest of brevity, I will skip further examples (e.g., the sad story of lobotomies). I will, however, note that for each event that is known, many others have been lost to history. Much of The Forgotten Side of Medicine encapsulates these incredible medical innovations, which their creators fought tooth and nail, without success, for the medical field to adopt. In many cases I believe massive societal benefit would have arisen from their adoption, and a significant part of my (and my colleagues’) ability to help challenging medical cases arises from utilizing these forgotten medical innovations.
Vioxx:
The majority of physicians are unable to recognize when a toxic drug is on the market, while the minority who have the capacity to recognize correlations within limited datasets are typically marginalized by their peers (and in some cases equated to schizophrenics). One of the best-known recent examples concerns Vioxx, a toxic NSAID (anti-inflammatory drug) that was initially marketed as a safer NSAID. Vioxx was eventually pulled from the market after multiple studies found it quadrupled the rate of heart attacks, resulting in an estimated 140,000 heart attacks, of which approximately 60,000 were fatal.
At the time Vioxx was on the market, there were approximately 200,000 practicing primary care physicians. This means that while Vioxx was notorious for causing a high death count, the physicians responsible for recognizing these events (primary care providers) would each have been unlikely to see more than one patient experience a heart attack from the drug (a drug they had been primed to believe was “extra safe”). Hence, few if any would have recognized the correlation (Vioxx’s harm was only uncovered because researchers detected it).
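The arithmetic behind this point can be sketched in a few lines (a back-of-the-envelope estimate that assumes, for illustration, the events were spread evenly across primary care physicians):

```python
# Rough per-physician exposure to Vioxx-related events, using the
# estimates cited above (assumes events were evenly distributed).
vioxx_heart_attacks = 140_000   # estimated heart attacks caused by Vioxx
vioxx_deaths = 60_000           # estimated fatal cases
primary_care_docs = 200_000     # practicing PCPs at the time

attacks_per_doc = vioxx_heart_attacks / primary_care_docs
deaths_per_doc = vioxx_deaths / primary_care_docs

print(f"~{attacks_per_doc:.1f} Vioxx heart attacks per PCP")  # ~0.7
print(f"~{deaths_per_doc:.1f} Vioxx deaths per PCP")          # ~0.3
```

In other words, even a drug notorious for its death toll presents less than one event per front-line physician, which is far too weak a signal for an individual practitioner to notice against the background rate of heart attacks.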
My own method for avoiding this mistake is to view all new drugs with suspicion and never prescribe a medication that has been on the market for less than seven years. Seven years is the approximate time frame it takes for harms the manufacturers failed to disclose to become known (Vioxx took five). One thing prescribing physicians will often experience is receiving a “Dear Doctor” letter from the FDA informing them that a drug they had prescribed has previously unrecognized severe side effects. In spite of this, it is rare that the periodic receipt of these letters gives a physician the insight to avoid prescribing other new medications.
Doors of Perception
As discussed in the previous section, there is a tremendous degree of complexity present in each patient before you. Medical education provides doctors with a way to navigate that complexity by creating models to apply to the patient in front of them. For example, if 15 of the 100 datapoints a patient shows you match the prototypical signs of “diabetes,” you can then label your patient as diabetic, classify them into a certain stage of the illness, and then treat them or provide the medical advice you have accumulated for patients within that classification.
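This pattern-matching model can be made concrete with a toy sketch (the prototype set and threshold below are hypothetical, not real diagnostic criteria):

```python
# Toy illustration of prototype-matching diagnosis: a patient is a bundle
# of datapoints, and a label is applied once enough of them match a
# disease prototype. All other datapoints are simply ignored.
DIABETES_PROTOTYPE = {"elevated fasting glucose", "frequent urination",
                      "excessive thirst", "fatigue", "blurred vision"}

def matches(patient_datapoints, prototype, threshold):
    """Label the patient if at least `threshold` prototype signs are present."""
    overlap = patient_datapoints & prototype
    return len(overlap) >= threshold, overlap

# Only a handful of this patient's hundreds of datapoints are considered.
patient = {"fatigue", "frequent urination", "elevated fasting glucose",
           "knee pain", "insomnia"}

is_match, signs = matches(patient, DIABETES_PROTOTYPE, threshold=3)
print(is_match)  # True: 3 of the 5 prototype signs matched
```

Note what the sketch makes explicit: everything outside the prototype ("knee pain", "insomnia", and the unlisted remainder) contributes nothing to the label, which is exactly the trade-off discussed below.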
This is not necessarily a bad approach; it provides an empirical and reproducible way to rapidly, and somewhat effectively, deal with an otherwise immensely complex problem. By adopting and refining this model, the quality of the results allopathic medicine produces has improved exponentially over the last 200 years, and in many cases those results are superior to what any other medical system can provide. That is no small accomplishment, and a tremendous amount of work and sacrifice was needed to make it possible.
However, there are also some major shortfalls with this approach. A few are as follows:
•Typically, the greater an investment people have in something (it takes a lot to become a doctor) the more sensitive their ego (such as a doctor’s “medical knowledge”) is to being challenged. I fall into a minority that instead genuinely enjoys coming across things I do not understand, and then doing my best to creatively come up with a way to honestly make sense of what is occurring. Before I went into medicine, I had to do this in meditation all the time, which gave me the background to make sense of the complexity in each patient. This has proven very beneficial, as I find many of the things patients ultimately need to recover are found within that complex but normally ignored dataset.
•Doctors are encouraged to dismiss, ignore, or reject the existence of datapoints they cannot easily grasp or map onto something. While this is necessary for the above approach to work, as those datapoints constitute the majority of a patient’s situation, this also frequently results in important parts of the picture being missed.
In fairness, I must also emphasize that I have worked with many alternative health care providers who are able to recognize complexity, but nonetheless abjectly fail at addressing their patients’ needs (which in a few cases has resulted in an unnecessary death) because they do not have the knowledge base and access to the diagnostic models that conventional medical training creates.
•Some illnesses are very straightforward and will consistently present a few key symptoms not shown by other diseases. As a result, it is easy to memorize a list that accurately maps the disease (if x, y, and z are present, you have the disease; if any of them is absent, you do not). Unfortunately, more complex illnesses (such as Lyme disease or COVID-19 vaccine injuries) can present with a very wide range of symptoms that vary from person to person, and some of the presenting symptoms will always overlap with other diseases. Throughout my career, I have found that doctors trained to diagnose by applying memorized lists to diseases typically fall short in diagnosing conditions with a variable symptom picture, and in their confusion will often default to a psychiatry referral, attributing the symptoms to something like anxiety, since they have no other list they can map the symptoms to.
•Most doctors have to be trained to look for something to find it. In a seven-year education process, it’s impossible to train them to detect every piece of relevant data; there’s just too much out there.
•The specific things doctors are trained to spot are often the result of a political process. This can be at a systemic level, or at a local level due to the pre-existing biases of a supervising physician. To share three examples:
1. A few things (heart rate, respiratory rate, blood pressure, blood oxygen saturation) are routinely checked at medical visits. This is because these simple parameters provide highly useful information on the current health of the patient (hence why they are called “vital” signs).
The current opioid crisis began after the opioid manufacturers developed a marketing campaign to have pain become “the fifth vital sign.” As a result, doctors began asking every patient their current pain level on a 1–10 scale, and since a significant portion of the population will always have a baseline level of discomfort, this created the perception that there was an epidemic of unaddressed pain.
The drug companies were then able to use the existence of this “epidemic” to have medical boards penalize doctors for malpractice if they “inhumanely” failed to address their patients’ pain levels. There were many reasons why this was a terrible idea, one of which is the opioid epidemic currently tearing low-income areas of America apart. It has been so damaging to the country that it has overridden the typical corruption that keeps bad drugs on the market and free of any criticism.
2. There have been multiple times where I (and colleagues) have unsuccessfully tried to add a very simple and relatively non-controversial concept into the curriculum at a medical school. I have not yet been successful, as in each case I encountered significant resistance from members of the institution who had the authority to decide which content is prioritized within the limited time available for a medical education.
3. As a medical student and medical resident, absolute deference to the medical hierarchy is a requirement for graduation. Because of this, you are expected to model and perfectly imitate the pre-existing biases of the supervising doctor who trains you. In this type of highly demanding environment, it is very challenging to also maintain awareness of a separate model of medicine at odds with that of your supervising doctor (whose model is almost always extremely conventional, mirroring that of those who trained them).
In the same vein, I know of a few cases of individuals with extensive experience in successfully treating patients with an alternative healing modality who then went to medical school (so they could help more people), which created so much cognitive dissonance for them that they completely abandoned their original form of medicine. Hearing these stories before I matriculated into medical school played a key role in why I was able to maintain awareness of different models of medicine throughout my medical education.
Medical Specialization:
Medicine is very territorial (for many people money and prestige matters far more than anything else), and as a result, different specialties will lay claim to different parts of the body. From a training standpoint, doctors are typically taught to recognize if an issue belongs to the scope of a particular specialty, and then refer all complex or challenging cases to the “appropriate” specialty.
The big problem with this approach is that the specialist you refer your patients to often cannot solve the problem either, and will normally just give them a few prescriptions and pass the patient along to another specialist who likewise cannot solve the problem. This is why I am often hesitant to pass the ball off and delegate responsibility for challenging cases to another party.
This is relevant to the central thesis for three reasons:
First, it is often fascinating to observe how different specialties will approach the same problem. Training in each specialty provides a medical model that recognizes a certain aspect of the disease process, so its practitioners will typically use the diagnostic model and treatments of their own specialty, even if these ultimately have minimal relevance to resolving the patient’s condition. I am relatively certain many of my readers have had this experience. As many common conditions fall into a grey zone between the scopes of multiple specialties, it is often particularly illuminating to compare the experiences of similar patients who went down each road.
Second, when considering illnesses that arise purely on a physical level (as opposed to those having an emotional or spiritual dimension), while the precise value is difficult to estimate, I believe I am within the margin of error to claim half of those diseases arise directly at the site of symptomatology while the other half are a result of pathology somewhere else in the body. This interconnection between different parts of the body is rarely taught in medical education, and often creates a situation where only a general practitioner (who has advanced training) can recognize what is causing a complex illness.
While many alternative healthcare providers present themselves as “seeing the whole person,” I find they also often become trapped by only using their model to address the patient’s symptoms, which works when their model is applicable, but does not work when it is not. Friends who run integrative medicine clinics (which I consider to be the best in the country) have said the two major problems they find with hiring holistic doctors are that most of them want to be given protocols to follow (rather than developing individualized treatment plans) and that these doctors often have a box they must operate within.
If the patient’s condition fits within their box (which it often does) these holistic doctors are successful, but when the patient’s condition doesn’t fit within their box, they are not willing to change the approach and instead double down on it. This is no different from what you observe with conventional medical specialists.
For example, one highly regarded holistic doctor views Lyme disease and mold toxicity as the primary cause of most chronic diseases they see. While these conditions are common, I do not believe they are anywhere near as commonly the primary cause of illness as this physician does. I also know of patients of these doctors who were permanently injured by a more aggressive treatment for Lyme or mold toxicity after an initial lighter treatment provided no clinical benefit, as the lack of response was interpreted as a failure to treat Lyme or mold toxicity aggressively enough, rather than as a sign that these conditions were not the primary issue.
Since the standard medical approach for a complex illness is referral to a specialist, general practitioners rarely address these cases, and the involved specialist will often focus on treating a part of the body that is not particularly relevant to the actual disease process.
To cite one example (there are many, many others), chronic dental infections often cause heart conditions. Dentists are not legally allowed to provide medical advice on regions of the body besides the mouth so they do not. Conversely, very few cardiologists have any detailed knowledge of dentistry, and will almost never think to consider examining the mouth or refer applicable patients for dental care. At this time, I know only two physicians in the country who specialize in both of these areas and their patients often greatly benefit from this interdisciplinary knowledge.
Third, because medicine is so territorial, doctors in one specialty do not like to challenge the medical recommendation of another specialty. This is problematic since inappropriately prescribed medications often cause significant side effects in other systems of the body (that are often not seen or evaluated by the model followed by the specialist).
Even when outside physicians do want to discontinue a medication, it’s a dicey proposition, because the specialist who feels their territory was violated will sometimes complain to medical boards; and while medical boards do not always prosecute patient complaints, they typically act on complaints from other physicians. This whole dynamic is one reason why patients accumulate more and more prescriptions as they age: everyone likes to start prescriptions, but (except for nephrologists and some geriatricians) few take responsibility for ending them.
Given how often you see that trend, I have always suspected that factors like this, which promote copious lifelong consumption of pharmaceuticals, were intentionally engineered into the medical system. It is particularly sad because whenever I find the occasional older adult who looks vibrant, alive, and not showing the typical physical or mental signs of aging, I always discover they have religiously avoided pharmaceuticals for their entire life. This contrasts with most older adults being placed on numerous completely unnecessary prescriptions that I believe play a significant role in many of their “diseases” of aging.
The general trend in medicine has been towards more and more specialization (as it pays more to both the doctors and the medical industry). Many members of the medical field have raised objections to the consequences of this trend, but it seems to be inevitable as the reach of scientific technology continues to increase throughout our society.
For those physicians interested in preserving their livelihoods, with the exception of a few specialties that would be extraordinarily difficult to automate, I believe the least likely physician to become displaced by AI is a talented general practitioner. This is because while AI systems excel with reductionist processes, they tend to struggle with holistic modeling, and a significant percentage of the knowledge base talented generalists use will never appear in the electronic medical records that are training the next generation of (electronic) doctors.
My hope at this point is that I’ve established it is challenging for many doctors to see things they were not trained to see, and that it is inevitable they will have patients who have issues they were not trained to see. While there is a massive ego behind Western medicine which makes this dynamic particularly problematic, I must emphasize this issue is not unique to doctors, rather it’s a reflection of the default perceptual processing mechanism the human mind employs to navigate the world. Almost everyone has perceptual filters; if they did not the world would be too overwhelming to engage with. In every era, it has only been a rare minority who explore discarding these filters.
Medical Evidence
When you examine pharmaceutical trials, one of the key reasons adverse events are significantly underreported is because the companies that conduct these trials do not survey trial subjects on many specific adverse events. Because of this, most of the actual (acute) side effects of medications do not appear in the published trial results (the chronic ones also do not because the trials are structured to only follow subjects for a short period). Instead the side effects “unexpectedly” come to light years later after the drug has entered the market and the manufacturer has already had time to profit off their lucrative drug patent.
One of the initial things that made me so suspicious of the COVID vaccines was reading multiple reports of severe injuries in Facebook groups for members of the vaccine trials that never appeared in the published research studies. However, even with that, and my inherent distrust of the pharmaceutical industry, I came nowhere close to anticipating how much the harm of the vaccines was understated within the trial reports.
In Senator Ron Johnson’s panel on COVID Vaccine injuries, participants in the original vaccine trials shared stories of trying to report their adverse reactions, but being unable to do so because their symptoms were not on the list they were able to select from, and research coordinators who the test subjects complained to were unwilling to document their adverse reactions.
Similarly, there are many stories, such as that of Maddie de Garay, who had a severe adverse reaction that was classified as “abdominal pain.” As the pediatric COVID vaccine trial was small, if her reaction had been appropriately documented, it would have raised the adverse event rate enough to make it difficult for Pfizer to obtain the EUA it received to administer the vaccine to children, and it would have made physicians more open to the possibility that injuries could occur in their patients (because there would have been “evidence” from an acceptable trial).
The exclusion of adverse events from research trials also has heavy consequences for those who clearly experienced a pharmaceutical injury. In most countries (particularly with regard to vaccines), part of the legal criteria to receive some form of compensation for a pharmaceutical injury is that there be scientific plausibility for that injury.
This frequently means that if a specific adverse event was not detected by the (fraudulent) research trial used to approve the drug, those who had that adverse event cannot prove the pharmaceutical caused their injury and are therefore ineligible for any legal remedy. For example, despite thousands of claims for injury, disability, or death following a COVID vaccine, the American government has not yet provided financial compensation to a single applicant, as there has been “no evidence” to support the alleged association between vaccination and their specific disease.
Gatekeepers of Perception
Many brainwashing systems throughout history have emphasized teaching their subjects to deny their perceptions of reality. Two excellent fictional examples of this concept are the ending scene of 1984, where Winston, while being tortured, alters his mind to hallucinate the false reality his torturer demands absolute obedience to, and an ending scene from a cartoon show where a police officer under pressure is convinced to see a civilian holding a gun when he in fact is not.
So far, I’ve highlighted how people often cannot see things unless they are taught to look for them. If doctors aren’t taught to look for a pharmaceutical injury, they often won’t be able to see it. The other half of the coin is that people are taught not to see the things they do see that challenge the institution they work within.
Justified objections are often raised to modern education teaching harmful propaganda and not teaching many important concepts to their students. However, while the content students are taught definitely matters, what is even more important is how students are taught to think. A major problem that has emerged in Western education is that critical thinking has been largely removed from the curriculum and the concept of intelligent thought has instead been equated with consistent deference to “expert thinking.”
Fundamentally, the message repeated over and over is that, as a human, you are full of irrational subconscious biases that make you incapable of objectively recognizing scientific truth. True intelligence is thus the process of finding beliefs supported by credible sources and sharing credible sources to advance your viewpoints. As many of those “credible” sources are a product of corruption, they will often provide false data that directly conflicts with your perceptions, and when this happens, you are expected to deny your own individual perception.
In medical education, this concept is aggressively pushed on students and they are frequently ridiculed or reprimanded for sharing anecdotal “unscientific” observations that conflict with the medical literature (which as discussed here is frequently corrupted). Because I have a strong sense of self, I trust my own observations and the connections I draw. Many others do not, as for decades, they have been taught not to. Given that they will deny their own perception of reality to adhere to the “evidence” it is not surprising they will also deny experiences reported by patients that do not adhere to the “evidence.”
Thus, as you might imagine, the intentional suppression of adverse events within clinical trials is a highly effective strategy for suppressing doctors’ recognition of those adverse events. This failure to recognize adverse events leads to significant underreporting, which is then eventually cited to patients as proof their perceived drug injuries are delusional.
I believe the reason I diagnose pharmaceutical injuries at a higher rate than my colleagues is because I always consider that possibility in the back of my mind. In contrast, excluding a few textbook side effects everyone is taught to focus on, most doctors do not share this index of suspicion for iatrogenesis.
A few scholars I have spoken to with significant iatrogenic injuries believe that the reason specific adverse effects of common medications are drilled into young physicians is to create a mental reality that excludes the possibility of other adverse reactions occurring. Many doctors erroneously infer that if certain adverse events were detected through clinical research, all the adverse events must have been detected, and that by screening for the known adverse events of a medication, they have fully checked the box for this task (medicine nowadays is all about completing checklists, as that is what insurance companies financially reimburse doctors for) and there is no additional need to assess for adverse events.
In the FDA Pfizer documents released through a court order, there are thousands of adverse events that were reported to Pfizer. In the future, I suspect only a few of these (such as myocarditis), due to their noteworthy nature, will be linked to the vaccine. Most of the side effects will instead, as detailed above, erroneously be assumed to have been proven unlinked to the vaccine, and thus rejected by the doctor if a patient claims a connection.
In the recent example of the rheumatologist interviewed by Steve Kirsch, I believe the primary reason he observed a higher rate of vaccine injury in his practice was extremely simple: he asked each patient an open-ended question (presumably after having seen too many vaccine injuries) about whether they had been vaccinated and how they responded to the vaccine. As a result, most of his patients who had an adverse event reported it to him.
This contrasts with the typical physician behavior of asking if you have been vaccinated, but not asking how you responded to it. There has been extensive research on the correct way to get accurate data from surveys (as the stakes of incorrect polling results are often very high). The typical physician’s approach is one that will frequently miss most of the data you are trying to get (either because patients do not voluntarily report it or because they want to but justifiably fear they will be treated negatively by the doctor for doing so).
In medical education, while you are taught to ask about many things during patient encounters, I have personally never seen iatrogenic reactions taught as part of any diagnostic checklist. Given how frequently iatrogenic complications negatively affect patients, I believe this aspect of diagnosis should take priority over many of the other things students are taught, which have significantly less relevance to a patient’s current condition.
In summary, you cannot find something if you do not look for it. Doctors typically do not look for drug injuries, and they assume their failure to detect drug injuries means drug injuries do not occur.
Despite my recognition of the possibility of iatrogenic injuries, because my position is inherently at odds with my profession, I feel I still underreport pharmaceutical injuries, as I will not suggest a relationship is present if the link is ambiguous or unclear to me. Conversely, while I believe a significant portion of pharmaceuticals are intrinsically toxic, I also periodically meet people with a confirmation bias who (erroneously) blame pharmaceuticals for everything that has gone wrong in their life.
Hence, however you slice it, diagnosing pharmaceutical injuries is a tricky subject. Situations like these are best resolved by large, randomized trials that assess for harms, but since those cost money that can normally only come from drug companies, they never emerge. Thus, when I as a physician attempt to seek out the best available evidence to guide my practice of medicine, I am often forced to rely upon my own and my colleagues’ anecdotal observations, conventionally considered to be the worst form of medical evidence.
I frequently read medical reports from the past, and one thing I find striking is how much more open physicians were to trusting anecdotal observations, and how this allowed them to rapidly develop solutions to diseases that would not have been possible with our current cumbersome scientific bureaucracy. About three months after COVID-19 started in China, I had developed early protocols I used (and subsequently refined) to treat the illness. Two and a half years later there is still no standard of care for treating COVID-19, which I believe is a strong argument against our current evidence-based framework for being the arbiter to discern medical truth.
Conclusion
In this section of the article, I have attempted to explain why doctors often lack the ability to see iatrogenic injuries. In the next section, we will discuss why they frequently choose not to.
In the meantime, I would like you to consider a case I once observed and think over the potential issues that occurred in the physician’s thought process.
Patient: Hello doctor. Since our last visit, I have been having terrible pain in my legs and it is difficult for me to exercise. At our last visit, you started me on a statin (note: I can no longer remember which statin the patient named). After I had these symptoms for two weeks, I realized that they began after I started the statin. I went online and found out that many other people have had the same reaction I have been having. Do you think it could be causing my symptoms, and should I stop it?
Doctor (who gets slightly emotional): In my twenty years in practice, I have never seen a patient have a negative reaction to a statin! In fact, they recently did a study on this and they found the negative side effects people claim to experience were actually due to the nocebo effect. You are at high risk for heart disease (note: he was not) and it is critical you do not stop taking this medication or you are extremely likely to have a fatal heart attack.
For those curious, this is an excellent summary of the issues with that nocebo study. As the nocebo hypothesis performed well for gaslighting statin injuries, it has since been revived for COVID vaccine injuries.