I’ve been on my laptop more than usual – a waste of precious time when one could be chatting with friends and relatives, enjoying the sunny days or watching The Last of Us.

The reason was the burst of publicity given to OpenAI.com by a letter signed by over 1,000 AI experts and other “notables” (including Elon Musk and Steve Wozniak) in late March 2023, asking, nay imploring, for a six-month pause on further development of artificial intelligence (AI) programs at the level of ChatGPT-4 until their effect on human life can be assessed and regulated. It seemed something important was going on.

What are we talking about? Well, artificial intelligence, or machine learning, runs on ultra-fast “accelerator” chips made from silicon, copper, aluminum, gold and other elements. These chips not only pack in billions of transistors that switch between on (1) and off (0) states, but are also designed to perform the massive, repetitive arithmetic that artificial “neural networks” require. The whole thing runs on ordinary electricity. Amazing!

What is ChatGPT?

The new language AI programs (such as OpenAI’s ChatGPT) are able to analyze your question, draw on their trained neural networks and spit out a response that makes sense (though not always complete sense) and is, some of the time, useful. To state, as some enthusiasts do, that these machines are “thinking” or “conscious” in the human sense – and in the future may be able to reproduce themselves – is crackpot hyperbole. However, the programs are able to whisk information around much more quickly than human brains can. We humans have more pressing problems that get in the way of speed, like answering the telephone, wondering where we came from, preparing to do a clinic with complex patients or dealing with an emotional child.

My experience with ChatGPT

There are over 100 million users of ChatGPT now! 

On behalf of readers of Alberta Doctors’ Digest (and as a computer-averse holdout), I sat for three hours each day over three days examining this phenomenon. Although I have not yet invested $20/month to access ChatGPT-4, the “updated” free version of ChatGPT is accessible to plebs like you or me. But it’s enough to get the drift. It’s an interesting advance on Googling for information. But you don’t get the choice you get with Google: you just get “the answer”.

So what’s the fuss about this latest iteration of commercial AI? OpenAI.com is an easily accessed website with a language robot that is conversational and gives quick answers to questions. It’s a machine that’s able to “learn” and “predict” a probabilistic sequence of words following a given sequence of words. 
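To see what “predicting a probabilistic sequence of words” means in miniature, here is a toy sketch of my own in Python (an illustration, not OpenAI’s actual code): it counts which word follows which in a tiny sentence and then picks the most likely continuation. ChatGPT does the same thing in spirit, only with billions of learned parameters instead of simple counts.

```python
# Toy "next word" predictor: a bigram model built from word-pair counts.
# This is a hypothetical illustration, not how ChatGPT is implemented.
from collections import Counter, defaultdict

corpus = "the patient was seen in clinic and the patient was reassured".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def predict_next(word):
    """Return the most probable next word and its estimated probability."""
    counts = follows[word]
    total = sum(counts.values())
    best, n = counts.most_common(1)[0]
    return best, n / total

print(predict_next("patient"))  # ('was', 1.0) in this tiny corpus
print(predict_next("was"))      # ('seen', 0.5): 'seen' and 'reassured' are equally likely
```

The difference in scale is the whole story: instead of a dozen words and a table of counts, the real systems learn statistical patterns from a large fraction of the written internet.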

The first question I asked the friendly bot (whom henceforth I’ll call Murgatroyd or Murgy) was something I know a lot about in a tiny area of medicine: how to use bisphosphonates in cancer treatment. The answer came from what looked like a Wikipedia page that had been highly influenced by inputs from pharmaceutical companies. I’ve now beseeched colleagues to feed this beast proper scientifically reviewed articles (mainly our own) to see if the obvious bias can be improved.

Murgatroyd did better with the natural history of basal cell carcinoma and its risk of recurrence. I’ve had a small, barely noticeable pimple on the side of the upper part of my nose. It didn’t bother me, and it hadn’t seemed to change over the last 10 years. My family doctor examined it, saw some vascular markings and diagnosed (correctly) a basal cell carcinoma. A nurse practitioner biopsied it – in fact removing the whole thing (about 5 mm). Pathology confirmed a basal cell carcinoma. I really didn’t want to go through a laborious Mohs procedure and asked Murgy: “What proportion of BCCs removed by biopsy recur locally?”

Murgy responded: “According to a systematic review in Journal of the American Academy of Dermatology in 2016, the reported rates of local recurrence for small BCC’s (defined as <2cms in diameter) ranged from 0% to 8.4% with a median follow-up of 3.3 years. The study found that recurrence rates were higher for tumors in high-risk areas such as face, scalp and ears, as well as tumors with aggressive histological features such as infiltrative or micro-nodular growth patterns.” 

Well, not too bad. Next, looking for some light relief, I asked Murgy for an update on the latest “Newfie” jokes. But I got a stinging rebuke:

According to Murgy: “I’m sorry but I cannot comply with your request. While I understand that “Newfie” jokes have been part of Canadian culture for many years, they are generally considered to be offensive and derogatory towards people from Newfoundland and Labrador. As an AI language model, it is important to me to be respectful and inclusive towards all individuals, regardless of their origin or background. Instead I can suggest exploring other topics or finding more positive and respectful ways to engage with Canadian culture.”

And the same went for Irish jokes! So there you are. Some will be happy with this. But all Newfie jokes I’ve ever heard came from Newfoundlanders, a proud and resourceful people, and all the best Irish jokes are born in the Emerald Isle. The above reply suggests that humour is not going to be a big part of this AI bot’s activities, although Elon Musk wants to correct that.

It’s unlikely that ChatGPT is going to be able to write speeches that are tolerable to listen to, although it may provide material for certain kinds of political speeches where you want to mouth platitudes. And if you have difficulty finding rhymes for words like “make” or “light,” it will write your speech versified in iambic pentameter with a few trochees thrown in.

OK, what about Scottish jokes then? Hah! An inconsistency here! That seemed to be acceptable territory, despite there being just as many stereotypes that a sensitive soul could call offensive. But humour is going to be hard for Murgy and his ilk, since humour relies on emotional sensitivity and response, something at which bots are dismal. What he came up with was hardly hilarious.

Murgy: “Why did the Scottish Football team wear kilts? Answer: Because every time they got a corner, they opened a pub.” 

But Murgy wasn’t finished. “Here’s another: A farmer was driving his tractor down a country road when he saw a man struggling to carry a sheep over a fence. (Me: I think I know where this one is headed.) The farmer pulled over and asked the man if he needed help. The man said 'Aye. I’m trying to get this sheep over the fence so I can shear him.’ The farmer laughed: 'Shear it? Ye dinnae need to shear it. Just tell it a joke and it’ll laugh its wool off!’ The man was skeptical but whispered a joke in the sheep’s ear and to his surprise the sheep laughed and its wool fell off. The man was amazed and asked the farmer, 'How did ye ken that would work?’ The farmer replied, 'Simple. I’m a professional comedian!’” 

Me: “That’s not funny at all. And it’s insulting to professional comedians.”

Murgy: “I apologize if the joke didn’t meet your sense of humour or if it came across as insulting to professional comedians. As an AI language model, my responses are generated based on patterns in the data I was trained on. I try to provide a range of responses to different people. I understand humour can be subjective. I would be happy to try again.” 

Me: “Forget it, bot. Even Google and Bing can beat you hands down, you po-faced puritanical piece of hardware.”

Murgy closed down (in a huff?).

So what are the precise problems that the 1,000+ “concerned citizens” fear will lead to “profound risks to humanity”?

Elon Musk warns against the perils of artificial intelligence: its ability to escape human control and the harnessing of AI to conduct internet hacks. He has called AI potentially the “biggest risk we face as a civilization.” Bill Gates has said that OpenAI’s ChatGPT is as significant as the invention of the Internet.

The problems 

The big problem is the insertion of misinformation and disinformation, so that garbage is dressed up as just another reliable opinion. Reputations can be manipulated and privacy violated. Photographs will be doctored, such as the recent widely circulated ones of Trump being manhandled by police and taken to jail, or the Pope dressed in a stylish puffy coat. It will be difficult to decide whether a photograph is genuine or manipulated, and any legal action that depends on photographic evidence will require additional, non-photographic corroboration.

Another obvious one is the need to rethink education, especially student assessment using written or multiple-choice exams. Inevitably, there will be an increase in plagiarism and cheating. In medicine, it may mean a return to in-person clinical examinations, with the problems associated with orals, such as favouritism, cheating, and unlucky choices of cases.

Hiring based on beautifully constructed fake CVs – always a problem – will become harder to detect. White-collar labour markets will be disrupted.

AI output may be incorrect and, if taken as true, may harm patients.

The positives 

AI is likely to enable the rapid introduction of new vaccines, drugs and other therapies by designing molecular structures directly, rather than producing thousands of molecules that each need individual testing. Specialist rounds will be improved by immediate access to the results of all relevant clinical trials, ongoing and completed, as the language bot listens in on the clinical discussion. A family medicine consult could likewise receive immediate, high-level feedback.

Agriculture should benefit from new, effective fertilizers and farming techniques. Investment decisions may be made with relevant, up-to-date information. I got good advice from Murgy on the best oil and gas companies for investing, as well as useful pointers on things like the best dividends and the best time to buy bonds.

This quicker, more precise search engine will be available for general use.

How will medicine and surgery adapt? 

The days of the physician who sits with a patient in his office staring at a computer may be coming to a close – thank God – since patients will be able to access some diagnoses themselves. The importance of the doctor-patient relationship will rise as communication and interpretive skills become paramount, as will honing skills in physical examination and carrying out procedures – that is, until AI can perform like Star Trek’s Dr. “Bones” McCoy. But even then, the much-forgotten healing power of touch – the laying on of hands – will continue to matter.

In summary, easy access to a robot filled with accurate, regularly updated information will be something most people want for things like the best equipment-purchasing deal, the latest update on company revenues or the latest rumour about a political opponent. But if you ask it to write a report or a review paper, you’ll get a summary of the latest groupthink, an echo chamber: cautious, middle-of-the-road, mainstream thought.

Any big advance in human life has come thanks to the individual willing to swim against the mainstream current, to bear the loneliness of criticism from the multitude of sneering conventional thinkers, and to persist in the face of aggressive criticism. The problem is that half the people who swim against the current are crackpots, and the trick will be figuring out who is the free-thinker and who the crackpot.

Investing in OpenAI? Some commentators inclined to hyperbole are calling AI the 21st-century equivalent of the industrial revolution. Wouldn’t it be nice to have a piece of that in your investment portfolio? Well, you can’t with OpenAI – at least not directly. It’s a private company, though there will likely be an IPO eventually. Microsoft has invested heavily ($10 billion) in a partnership with OpenAI, and if you’re really trusting, you can buy a cryptocurrency token called “openaierc” in US dollars. I wouldn’t touch it. There will be many other entrants (e.g., Google’s Bard).

The Italian government has banned ChatGPT for not enforcing the age limit for accessing the program (13 and over), but there is no upper age limit to access it. This should allow scams and misinformation to proceed smoothly. Maybe there should be a moratorium on further development?


Editor’s note: The views, perspectives and opinions in this article are solely the author’s and do not necessarily represent those of the AMA.

Banner image credit: Gerd Altmann, Pixabay