Artificial intelligence is everywhere. I have read that self-driving cars are five or ten times “safer” than cars driven by actual human beings. That particular statistic came from a Chinese manufacturer of self-driving cars, so we might wonder whether there’s an element of self-interest in that assessment. I could not tell whether that statistic differentiated between the accidents that actually resulted in significant damage to vehicles and significant injury to humans, or whether all accidents, including minor bumps and scrapes, were figured in. It’s pretty clear that sensors that tell you how close you are to other cars in the supermarket parking lot would be highly useful. However, I have also read that the little cameras that provide information to the driving mechanism sometimes have difficulty distinguishing between a human being and a mailbox and also have difficulty, depending on ambient light, spotting the lines that mark the lanes in narrow roads.
The particular impetus for taking a look at the role of AI in medical practice came from a study conducted in Poland that was recently reported in the New York Times. The study found that after only three months of using an AI tool to help detect precancerous growths during endoscopies, physicians performed significantly worse at spotting these growths on their own.
Doctors at four endoscopy centers were given access to an AI tool that spotted suspicious-looking growths during the process of a colonoscopy. As those of us who have experienced this disagreeable procedure know, a long flexible instrument called a colonoscope is inserted into the nether end of our digestive tract, and it sends images of the inner walls of the colon to a screen for scrutiny and evaluation.
The AI tool was programmed to draw a box around each of the suspicious-looking growths on the screen. Physicians also scrutinized the screens for these precancerous growths. The results were, to say the least, disquieting in several respects.
Prior to the introduction of the AI tool, doctors were able to identify about 28% of the precancerous growths. But after the AI tool came into use, the human physicians identified those precancerous growths only 21% of the time. The implication is that because the AI tool was doing the work of spotting those growths, the doctors paid less attention to what they were doing.
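To put a number on that drop (this is my own arithmetic, not a figure from the study): going from 28% to 21% is a loss of 7 percentage points, and 7 ÷ 28 = 25%, a relative decline of one-quarter. In other words, for every four growths the doctors used to catch on their own, they now caught only three.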
As part of this study, the physicians took part in an eye-tracking experiment. While the AI tool was being used, doctors tended to look less at the edges of the image. This suggested that the scanning habits the doctors had developed through years of examining colonoscopy images had eroded after the introduction of the AI tool. Or, perhaps, knowing that AI would scan the images, the doctors simply didn't put as much effort into it.
So, in a sense, AI de-skills the doctors.
I find a mundane parallel to this in my own experience. I take a jar in my hands and try to open the lid. I grab it and turn. But I do not exert all my might and main to unscrew that lid, because I know that I have a handy device that permits me to unscrew that lid with just a little effort. Does using that handy little device perhaps diminish my hand strength?
The statistic that, prior to the introduction of AI, the doctors identified 28% of those precancerous growths is in itself disquieting. Only 28% before their skills were eroded? It’s reasonable that an algorithm that evaluates every square millimeter of the scan would pick up more information than a human being who just gives the scan a once-over, but that figure certainly casts serious doubt on the efficacy of the process. Of course, the growths we’re talking about here are precancerous. Presumably, the degree of scrutiny employed in the detection of cancerous (as compared with precancerous) growths would be far more intense, but a process that leaves nearly three-quarters of precancerous growths undetected cannot be characterized as clinically effective.
In a general way, I would say that the overall effect of AI in the practice of medicine has its pluses and minuses. Clearly, AI can go into far more detail than a human physician when it comes to, for example, the effects of potential drugs on specific pathogens and also on our physiology. But the fundamental difference is that while AI focuses on the data, doctors focus on the patient. Care and concern are outside the range of AI’s capacities.
My concern is that the pervasiveness of AI in just about everything will erode the capacity of physicians to focus on the patient with maximum skill and efficiency. The coming generation of physicians has been relying on AI through high school, college, and medical school. AI knows the details, so why bother. Physicians will continue to be concerned about their patients, but they are increasingly likely to leave some of the details of treatment to AI, with unknown consequences.
And another area of concern, which was described in the New York Times on September 7th, is that AI can be deliberately employed to disseminate falsified information. The identity of an endocrinologist, Dr Robert H. Lustig, was appropriated by an AI program. His image and his voice were copied using AI, and videos were posted on Facebook in which his AI-created persona hawked “liquid pearls” for weight loss. In one such faked video, Dr Lustig appears to state that these “liquid pearls” will bring about weight loss – “No injections, no surgery – just results,” the fake video proclaims.
As the Times put it, “While health care has long attracted quackery, AI tools developed by Big Tech are enabling the people behind these impersonations to reach millions online – and to profit from them. The result is seeding disinformation, undermining trust in the profession and potentially endangering patients.”
My view on doctors who post videos on TikTok or Facebook is strongly negative. These platforms are commonly known to be susceptible to phony posts. None of the healthcare providers that I rely on depend on that kind of internet presence.
Returning to the subject of AI, there’s evidence that when students use AI tools such as ChatGPT to do their schoolwork, it adversely affects their writing skills. This comes from a study conducted by MIT researchers, in which university students were divided into three groups. One group wrote with ChatGPT from the start, a second group wrote on their own but could use Google search, and the third group was not allowed to use any AI tools. Those who wrote with ChatGPT from the start exhibited the worst writing quality, and brain-activity measurements showed that the parts of their brains associated with learning were less active. Participants who did their work unaided performed best. The researchers concluded that, in view of these results in supposedly well-educated university students, the effects on the brains of young children would likely be of even greater concern.
This was a genuine bona fide honest-to-Pete study, and it reinforces my own mistrust of AI. We humans do better thinking for ourselves than permitting machines to think for us. I’m quite content with having the computer put letters on the screen – and into the digital record – when I hit a key with my finger. But I want to choose the letter. It vexes me when it (my computer, but more frequently my phone) “thinks” it knows in advance what I’m trying to say. Sometimes it does, sometimes it doesn’t. I want and need to keep my brain busy and sharp.
The role of artificial intelligence in medical diagnosis
Diagnosis would seem to be an ideal area to employ AI. All of the symptoms can be figured into the equation, and the links between these symptoms and illnesses or diseases can be explored. AI can handle a colossal quantity of information. For example, AI has the capacity to analyze the immense number of potential compounds that could work as drugs for the treatment of diseases, and also analyze the structure of the human cells that these compounds could bind to, as a way of determining whether these compounds might actually work. However, determining whether these compounds actually do provide real benefit to humans would be, in my opinion, well beyond the capacity of AI. That kind of information would require clinical trials involving human patients.
Determined to stay current in all matters relating to health care, Harvard Medical School has developed an AI tool that may be helpful in arriving at diagnoses of real human patients. Researchers there have dubbed the tool Dr CaBot, after Richard Cabot, a physician at Mass General Hospital who formalized the use of patient case studies for medical education around the year 1900.
The system, which operates in both live presentation and written formats, shows how it reasons through a case, offering a differential diagnosis – a comprehensive list of possible conditions that might explain what’s going on. That comprehensive list is then narrowed down until the system arrives at what is termed a “final diagnosis.”
Dr CaBot’s ability to spell out its “thought process,” rather than focusing solely on reaching an accurate answer, distinguishes it from other AI diagnostic tools. According to the Medical School researchers, it is one of only a few models designed to tackle more complex medical cases. Dr Arjun (Raj) Manrai, assistant professor of biomedical informatics in the Blavatnik Institute at the Medical School, said, “We wanted to create an AI system that could generate a differential diagnosis and explain its detailed, nuanced reasoning at the level of an expert diagnostician.” Dr Manrai created the AI model with Thomas Buckley, a doctoral student in the Harvard Kenneth C. Griffin Graduate School of Arts and Sciences and a member of the Manrai lab.
Although the system is not yet ready for use in the clinic, Manrai and his team have been providing demonstrations of Dr CaBot at Boston-area hospitals. Now, Dr CaBot has a chance to prove itself by going head-to-head with an expert diagnostician. The process will be tracked in The New England Journal of Medicine’s famed Case Records of the Massachusetts General Hospital, also known as clinicopathological conferences, or CPCs. It marks the first time the journal is publishing an AI-generated diagnosis.
Each CPC consists of a detailed presentation of the case from the patient’s doctors. Then, an expert not involved in the case is invited to give a presentation to colleagues at Mass General, explaining his/her reasoning step-by-step and providing a differential diagnosis before homing in on the most likely possibility. After that, the patient’s doctors reveal the actual diagnosis. The diagnostician’s write-up is published in NEJM along with the case presentation.
At the core of Dr CaBot is OpenAI’s o3 large language reasoning model, which gives it the ability to efficiently search millions of clinical abstracts from high-impact journals – helping it properly cite its work and avoid factual hallucinations. Dr CaBot can also search its “brain” of several thousand CPCs and use these examples to replicate the style of an expert diagnostician in NEJM.
Dr CaBot delivers two main products. The first is a roughly five-minute, narrated, slide-based video presentation of a case, in which the system explains how it reasoned through the possibilities to come to a diagnosis. The other is a detailed written version of Dr CaBot’s reasoning and diagnosis.
Although the primary use case for Dr CaBot is as an educational tool, its ability to rapidly sift through millions of clinical abstracts could also make it a valuable research aid.
The advantages of an AI system are that it is always available, doesn’t get tired, isn’t juggling responsibilities, and can quickly search vast quantities of medical literature.
Dr Manrai added that physicians are using AI tools including ChatGPT and a physician-specific platform called OpenEvidence. Eventually, Dr CaBot might join the AI toolbox that physicians are already exploring as they determine how to best help their patients.
The advantages of an AI-powered tool in diagnosis are evident. No human MD can match AI in searching for information, although of course that information needs to be available digitally. However, the human MD has several clear advantages. One advantage is that he/she actually “knows” the patient and has empathy for the patient. The MD also, in all likelihood, has experience regarding the manifestations of the diseases or illnesses related to the possible diagnoses.
My careful conclusion regarding the role of AI in diagnoses is that it can be highly useful in the broadest sense, in that it can scan an enormously wide range of information. However, in arriving at a diagnosis in an individual patient, a human MD has the clear advantage of personal contact and experience with diseases and their treatment. In short, AI is a useful addition to the diagnostic procedure, but the final decision as to how to direct the treatment of the patient is the responsibility of the flesh-and-blood MD.
Better than aspirin in preventing repeat heart attacks?
Before we get into the specifics, let’s take a brief look at the role of aspirin itself in preventing heart attacks. We’ve discussed the way aspirin works: basically, it reduces the clumping of blood platelets and the formation of blood clots, which can have serious effects, including obstructing the flow of blood to the heart and blocking blood vessels in the brain. The result can be a heart attack or, when blood vessels in the brain are blocked, a stroke.
Aspirin is almost a “miracle drug.” It is an effective pain medication, and also reduces fever and inflammation. And, as we said above, it can help prevent heart attacks and strokes by making blood platelets less likely to bind together and form potentially dangerous clots.
For individuals who have never experienced a heart attack or a stroke, the benefits of taking a daily aspirin have been questioned. Because aspirin prevents platelets from binding together, a side effect is that it increases the risk of bleeding. Bleeding in the gastrointestinal tract is an unfortunate aspirin side effect, and the potential for GI bleeding to some extent offsets the potential benefit in heart attack and stroke prevention. But for persons who have had a heart attack, the benefits of a daily aspirin are reasonably well established and are thought to be greater than the bleeding risks.
However, a recent meta-analysis by a team of cardiologists found that clopidogrel is more effective than aspirin in preventing heart attacks in patients with established coronary artery disease – patients who have already experienced heart attacks or strokes.
The study, published in The Lancet on September 13, 2025, analyzed data from seven studies comparing clopidogrel and aspirin in more than 29,000 patients over about five and a half years. The results of the meta-analysis showed that the risk of recurrent heart attacks or strokes was somewhat lower in patients taking clopidogrel than in those taking aspirin – 10.6% versus 12.7%. This reduction of risk may seem minor, but a more important factor is that, unlike aspirin, clopidogrel accomplishes that objective without increasing the risk of bleeding.
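A bit of back-of-envelope arithmetic (mine, not the study’s) puts those figures in perspective: the absolute risk reduction is 12.7% minus 10.6%, or 2.1 percentage points, which translates to a “number needed to treat” of about 1 ÷ 0.021 ≈ 48. That is, over roughly five and a half years, about 48 patients would need to take clopidogrel rather than aspirin to prevent one additional heart attack or stroke.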
Clopidogrel inhibits platelet aggregation (clumping) by blocking the action of the receptor that leads to platelet clumping. It has a similar safety profile to aspirin, with a minor increase in the incidence of diarrhea. A rare but serious adverse effect is thrombotic thrombocytopenic purpura, in which blood clots form in small vessels throughout the body. These clots can limit or block the flow of blood to organs such as the brain, kidneys, and heart, which affects organ function and can result in significant damage. Clopidogrel is sold as Plavix, manufactured by Sanofi and Bristol-Myers Squibb, and has been available as a generic drug since 2012.
Just to be clear, the study’s conclusion about the benefits of clopidogrel applies only to individuals who have already experienced a cardiovascular event, presumably because of pre-existing conditions in their circulatory system. There is no suggestion that daily clopidogrel should be part of everybody’s regimen.
The role of lithium in Alzheimer’s disease
The accepted doctrine regarding the underlying causes of Alzheimer’s disease (AD) is that the disease is the result of two progressive brain changes – the deposition of a substance called amyloid plaque, and the growth of structures termed neurofibrillary tangles, which consist largely of a substance called tau protein. Let’s take a moment to remind ourselves about those brain changes.
The hypothesis that amyloid plaque is the fundamental cause of AD is the senior contender, by about a century. A German physician named Alois Alzheimer – yes, the disease was named after him – had a patient named Auguste Deter, who became severely demented when she was 50 years old. Her husband, Karl Deter, a railroad engineer, placed her in a hospital for mental patients and epileptics, where she came under the care of Dr Alzheimer, who followed her until her death in April of 1906. Dr Alzheimer obtained permission to examine Frau Deter’s brain and found it to be pervaded by a dense whitish substance, which he identified as a form of amyloid. Amyloid had been identified and named in the mid-19th century by Rudolf Virchow, who thought that it was akin to starch and named it “amyloid” after the Latin name for starch, “amylum.” But amyloid is not starch – it is made of amino acid chains (polypeptides) that have tangled and twisted themselves into insoluble masses.
Attributing the symptoms of AD to the presence of amyloid is entirely reasonable. The brains of AD patients are found, on autopsy, to be greatly shrunken. It made intuitive sense that this dense foreign substance should in some way be harmful to brain function.
A problem with the amyloid hypothesis – one that is quite common in medicine – is that while the association between a physiologic condition and a disease, as described by a group of symptoms, can easily be established, determining that the condition is the real cause of the disease is not so easy. Part of the reason is that quite often the physiology is only investigated in persons with the symptoms. In the case of AD, the brains of persons who died with severe dementia have been carefully examined on autopsy, and amyloid depositions have been identified. But how many brains of persons who died without severe dementia have been similarly examined?
A study that cast some doubt on the amyloid hypothesis was “The Nun Study of Aging and Alzheimer’s Disease,” which began in 1986 and continues to this day. The nuns in the study had agreed to have their brains examined after their deaths. A surprising finding was that some of the study subjects who had no signs of dementia were nonetheless found to have extensive deposits of amyloid plaque in their brains. There was a high degree of correlation between the nuns’ verbal skills when they were initiated into the sisterhood (based on essays they had composed at that time) and their intelligence and alertness in their later years. This particular finding correlates with evidence that, in general, diagnoses of Alzheimer’s disease are more common among the cohort with less education. A possible conclusion is that brain activity helps to delay the progression of AD, independent of factors like deposition of amyloid plaque.
Also present in the brains of persons with AD are formations called neurofibrillary tangles (NFTs), which are aggregates of a protein called tau. Tau proteins are not in themselves toxic. They are present in the brain and central nervous system, particularly in neurons. Their normal function is to stabilize the microtubules inside axons – the long fibers that extend from neurons and carry signals to other parts of the nervous system.
Tau is one of a number of phosphoproteins, meaning that there are phosphate radicals attached at various sites on the protein structure. Normal – i.e., non-toxic – tau has about 30 phosphate radicals attached, but some tau proteins have many more potential sites for attachment of phosphate radicals. When more of these phosphate radicals are attached, the tau protein is said to be hyperphosphorylated. It is hyperphosphorylated tau that is thought to be a causative factor in the brain changes linked to Alzheimer’s dementia.
The hyperphosphorylation of tau can result from mutations, and also possibly from other interactions, such as with enzymes. The presence of hyperphosphorylated tau can result in the formation of dense tangles within the neuron and the axon, interfering with the vital links between neurons and the rest of the nervous system, choking off essential nutrients, and resulting in the death of the neuron – with evident consequences for mental function of any kind.
But now there’s beginning to be evidence – so far, from studies in mice – that lithium deficiency may be playing a crucial role in the pathology of AD. Lithium, as some of us may remember, is number 3 in the periodic table of elements (after hydrogen and helium), and is by far the lightest of all metals.
The study determined that as amyloid beta begins to form deposits in the early stages of dementia in both humans and mouse models, it binds to lithium, reducing lithium’s function in the brain. The lower lithium levels affect all major brain-cell types and, in mice, give rise to changes characteristic of AD, including memory loss.
The authors identified a class of lithium compounds that can evade capture by amyloid beta. Treating mice with the most potent amyloid-evading compound, called lithium orotate, reversed Alzheimer’s disease pathology, prevented brain-cell damage, and restored memory. (Aron L, et al., “Lithium deficiency and the onset of Alzheimer’s disease,” Nature. 2025 Sep;645(8081):712-721)
Although the findings need to be confirmed in humans through clinical trials, they suggest that measuring lithium levels could help screen for early Alzheimer’s. Other lithium compounds are already used to treat bipolar disorder and major depressive disorder, but they are given at much higher concentrations that can be toxic, especially to older people. The study found that lithium orotate is effective at one-thousandth that dose — enough to mimic the natural level of lithium in the brain. Mice treated for nearly their entire adult lives with lithium orotate at that low dose showed no evidence of toxicity.
The team used an advanced type of mass spectrometry to measure trace levels of about 30 different metals in the brains and blood of three cohorts: cognitively healthy people, those in an early stage of dementia called mild cognitive impairment, and those with advanced Alzheimer’s. Lithium was the only metal whose levels differed markedly across the groups, and the difference began to appear at the earliest stages of memory loss: lithium levels were high in the cognitively healthy study subjects but greatly diminished in those with mild impairment or full-blown AD.
The team replicated the findings in samples obtained from multiple brain banks nationwide.
The observation aligned with previous population studies showing that higher lithium levels in the environment, including in drinking water, tracked with lower rates of dementia.
But the new study went much further, by directly observing lithium in the brains of people who had not received lithium as a treatment, establishing a range that constitutes normal levels, and demonstrating that lithium plays an essential role in brain physiology.
Dr Bruce Yankner, professor of genetics and neurology in the Blavatnik Institute at Harvard Medical School, who in the 1990s was the first to demonstrate that amyloid deposits are toxic, said, “Lithium turns out to be like other nutrients we get from the environment, such as iron and vitamin C. It’s the first time anyone’s shown that lithium exists at a natural level that’s biologically meaningful without giving it as a drug.”
The study also demonstrated in mice that lithium depletion isn’t merely linked to Alzheimer’s disease — it helps drive it. The researchers found that feeding healthy mice a lithium-restricted diet brought their brain lithium levels down to a level similar to that in patients with AD. This appeared to accelerate the aging process, giving rise to brain inflammation, loss of synaptic connections between neurons, and cognitive decline.
In AD mouse models, depleted lithium dramatically accelerated the formation of amyloid beta plaques and structures that resemble the characteristic neurofibrillary tangles. Lithium depletion also activated inflammatory cells in the brain called microglia, impairing their ability to degrade amyloid; caused the loss of synapses, axons, and neuron-protecting myelin; and accelerated cognitive decline and memory loss — which are all hallmarks of Alzheimer’s disease.
The mouse experiments further revealed that lithium depletion altered the activity of genes known to raise or lower the risk of Alzheimer’s, including the best-known, APOE. The APOE gene encodes a protein that regulates the metabolism of fats in mice as well as in humans.
Replenishing lithium by giving the mice lithium orotate in their water reversed the disease-related damage and restored memory function, even in older mice with advanced disease. Notably, maintaining stable lithium levels in early life prevented Alzheimer’s onset — a finding that confirmed that lithium deficiency fuels the disease process.
A few limited clinical trials of lithium for Alzheimer’s disease have shown some efficacy, but the lithium compounds they used — such as the clinical standard, lithium carbonate — can be toxic to aging people at the high doses normally used in the clinic.
The new research explains why: amyloid beta was sequestering these other lithium compounds before they could work. Dr Yankner and colleagues found lithium orotate by developing a screening platform that searches a library of compounds for those that might bypass amyloid beta. Other researchers can now use the platform to seek additional amyloid-evading lithium compounds that might be even more effective.
If replicated in further studies, the researchers say lithium screening through routine blood tests may one day offer a way to identify at-risk individuals who would benefit from treatment to prevent or delay AD onset.
Since lithium has not yet been shown to be safe or effective in protecting against neurodegeneration in humans, Dr Yankner emphasizes that people should not take lithium compounds on their own. But he expressed cautious optimism that lithium orotate or a similar compound will move forward into clinical trials in the near future and could ultimately change the story of Alzheimer’s treatment.
Lithium carbonate has been commonly used in the treatment of bipolar disorder since the mid-twentieth century, but clinicians are still awaiting evidence of its effectiveness in Alzheimer’s disease. Lithium compounds are present in some foods, such as nuts, cereals, fish, and some vegetables – not so much in meats and dairy products.
Dr Yankner said, “My hope is that lithium will do something more fundamental than anti-amyloid or anti-tau therapies, not just lessening but reversing cognitive decline and improving patients’ lives.”
The possibility that a lithium-based drug might fulfill Dr Yankner’s hopes – as well as the hopes of the entire health-care community – stimulates my optimistic feelings. Up to now, as you know, the best the health-care community has been able to do has been to try to delay the progression of Alzheimer’s with drugs that target amyloid, such as the class called BACE inhibitors.
The mechanism of action of BACE inhibitors is certainly promising. If we can prevent the formation of amyloid beta, and if BACE inhibitors effectively accomplish this task, it would seem evident that BACE inhibitors would significantly alleviate AD symptoms. But BACE inhibitors are very large molecules, and they have great difficulty in passing through the blood-brain barrier in enough concentration to be at all effective. Several BACE inhibitors have been developed, but up to now they have disappointed all parties – pharmaceutical companies, clinicians, and patients. Patients in particular have been waiting for a drug – something! – that will meaningfully slow the progression of AD.
If lithium-based drugs can bring about the same reversal of AD progression in humans as in mice, the medical community will have passed a highly significant milestone. Our hopes are with Dr Yankner and his team.
* * * * * * *
Yes, it’s been a long hiatus. My previous missive was posted on August 13th. In the meantime, we spent our usual couple of weeks on that tiny island off the coast of Maine, and shortly after that took off on an 8,000-mile road trip out to Utah and back. We saw family, friends, and several amazing National Parks. I used the word “amazing” many thousands of times. But it’s good to be home, and there’s lots more to be said about what’s going on in the health arena.
Be very well, and keep the flow of comments coming! Michael Jorrin (aka Doc Gumshoe)