A recent CBC article entitled “Why family doctors across Canada are turning to AI scribes – and what it means for patients” prompted us to reflect on the possible impact of the growing use of artificial intelligence in the medical community.

As lawyers, we are already well-versed in the benefits and risks of using AI in our daily dealings. More than one Canadian lawyer has been reprimanded for allowing ChatGPT to write a brief or provide relevant authorities for an argument, only to find that the results were factually incorrect or even fictional. Our court of appeal, while not outright forbidding the use of AI in written arguments, has urged counsel to be cautious in their use of this technology and to alert the court to documents, precedents or quotations that might have been procured from an online AI source.

We have also witnessed the increased use of AI in law schools, with students reportedly relying on AI platforms to write papers or craft written responses to questions posed in class.

Recent advances in technology have now carried that discussion into the medical arena.

The article’s author, following discussions with physicians across Canada, summarized the use of AI to assist with charting as follows:

Ambient artificial intelligence scribe programs are software that uses a microphone to listen to conversations between clinicians and patients. They filter out small talk off the top and then summarize the visit into a structured medical note that the doctor can use to share with other team members … and becomes part of the patient’s medical file.

The problem isn’t necessarily with the program listening to and transcribing patient encounters (so long as the patient is aware of and consents to its use). The difficulty lies in the summarization and subsequent creation of a medical note. This is not really that different from EMRs, which offer drop-down menus for certain common diagnoses or facilitate a cut-and-paste approach to medical record-keeping. In fact, knowledgeable commentators have suggested that the next step in the process is to “… incorporate AI scribes into electronic health records so clinicians don’t have to copy and paste from different software programs.”

There is, however, an inherent risk in allowing technology to take over a valuable and critical part of the medical visit, and to create a record that was never authored by the attending physician.

The next, and perhaps inevitable, step in AI’s progression in the medical community is to input patient data and diagnostic test results into a program that, relying on the vast network of resources available to it, spits out not only a diagnosis and proposed treatment plan but also an eloquently worded, carefully written explanation for the patient, convincing them that it is the thoughtful end product of a human-guided medical interaction.

To some, the idea of this chain reaction, ending with a physician relying on a series of AI-assisted medical notes and analyses to review a patient’s history, is a little scary purely from a patient-safety perspective.

From the legal side, the prospect of a physician on the witness stand in a medical-legal action having to rely on, and perhaps defend, a medical record that they never produced is an even scarier and less contemplated consequence.

The article provided an example from a physician of the scribe tool misinterpreting information and adding specific back exercises that the physician had never mentioned. From then on, each time a patient presented with a similar back issue, the physician had to review and delete the incorrect advice that the program had been trained to provide.

Another identified problem is that AI programs have not yet learned to pick up on the nuances of an appointment, such as gestures, tone, or how the patient appears or acts.

When speaking to students in our firm about the use of AI, we emphasize a cautious approach to the tool. Strict reliance on the program to produce accurate results will undoubtedly lead to disaster at some point. Reliance on the program as a research aid in creating the final product is much less risky and, frankly, probably cost-effective. However, it is important that the end product be carefully reviewed and edited, and its sources verified, before it is relied upon by a client or other lawyers in the firm.

The problem, again going back to the article, is that the efficiency these tools offer is compelling. As one physician explained:

… Since (Dr. X) started using the AI tool, he was leaving before me every day. There’s a little bit of jealousy there. This clearly is making a difference for him. Maybe it could make a difference for me too.

So, there’s the rub. Professionals are constantly looking for ways to streamline and make effective use of both their own and their clients’ time, and for the time being artificial intelligence seems to fit the bill. However, sacrificing patient safety and compromising the ability to defend a legal action seems a high price to pay for the additional minutes that reliance on an AI-generated medical note may save.

What is interesting is that the Ontario Medical Association recently had physicians evaluate AI scribes to see whether they could be used in doctors’ offices and hospitals to save physicians time and improve their quality of life. In response, the Ontario government apparently published a list promoting ways for family doctors to “put patients before paperwork”. The province suggested that scribes should only be used during a visit if the patient consents and the privacy of patient health information continues to be protected. The College of Physicians and Surgeons of Alberta and the CMPA have offered similar guidance.

Clearly, the horse has left the barn. The technology is out there and will only become more accessible and accurate in its work product. Users must remember that, thus far, AI platforms do not think independently. AI learns by gathering information from huge databases, then scans and assimilates the most likely matches to create the final product. These platforms are, in some ways, glorified, gigantic word search engines. As long as we treat them as such, monitor their results, and ensure proper safeguards are in place, negative consequences should be minimized.


Editor’s note: The views, perspectives and opinions in this article are solely the author’s and do not necessarily represent those of the AMA. 
