Dr. John Patrick drew on his extensive background in technology and experience exploring complex topics in his “Attitude” book series to lead a recent seminar on artificial intelligence (AI) technology at Ridgefield’s Founder’s Hall.
Patrick has degrees in engineering, law, and business management along with a doctorate in medical administration, and he played a key role in establishing the modern internet through his decades-long career at IBM and as a founding member of the World Wide Web Consortium at the Massachusetts Institute of Technology in 1994. He began his presentation by screening videos of recent tests conducted by Boston Dynamics showcasing the mobility of their Atlas robots. He asked attendees to imagine the bipedal robots equipped with AGI, artificial general intelligence.
“That’s the next evolution of AI,” Patrick said, drawing a distinction between currently popular generative AI, such as ChatGPT and Google’s Bard, which uses a large language model to predict the most likely response to a question without actually understanding the information it takes in or puts out, and a “strong” AI that is actually intelligent.
“We have lots of smart people in the world and they want to work on AI,” Patrick said of the causes for the sudden exponential growth of the industry. “They’re excited about it just like they were 25, 30 years ago with the internet. And of course, the venture capitalists are pretty excited about this too. Right now, there’s about 8,000 new companies and startups focused purely on artificial intelligence. Why are there 8,000 startups? Well, they see a market potential of one and a half trillion dollars.”
Patrick drew on research he conducted while writing “Robot Attitude: How Robots and Artificial Intelligence Will Make Our Lives Better” to make the case that while some see a radical transformation from AI in the near future, there are solutions closer to home. Patrick discussed potential uses for technology in the realm of handling simple repetitive tasks he is familiar with from his work in health care administration and encouraged the audience to think about instances in their own fields which could yield similar results.
“The things that we should be thinking about in particular are the low-hanging fruit,” Patrick said. “People are justifiably concerned about AI, there are a lot of things to be concerned about, but there are also significant benefits. I take the positive approach and look at, say, physician notes. I’m not talking about feeding medical information into ChatGPT and getting a diagnosis or the cure. But after a patient is seen by a doctor, the doctor has to write up physician notes, which are then inserted in the electronic health record, and this basically takes their evening away from their family. In health care, there’s a lot of paperwork that just burns up time, and it doesn’t have to be that way.”
He also noted that, as a member of the Digital Patient Experience Executive Committee at Nuvance Health, he sees opportunities for AI to be implemented in the Digital Patient Exchange.
“The idea is to make it painless for the patient to make an appointment online, to check in ahead of the visit with the doctor,” Patrick said of potential implementations that could eliminate forms and questions being asked repeatedly and with clerical work transmitting information between departments. “AI can look at the workflow of large numbers of people doing necessary administrative efforts and make it, so their jobs don’t have to be so manual and difficult.”
In his view, AI can best be used to eliminate discrete tasks instead of entire jobs, especially since AI systems typically excel at narrowly defined but repetitive actions. Transcribing audio and writing notes that highlight the key details is already in the realm of possibility and doesn’t require creativity or intuition; plus, it saves doctors multiple hours that can be used on other duties.
Patrick dismissed the idea of AI providing diagnoses or making medical decisions, but noted that a study from Johns Hopkins found medical errors responsible for 250,000 U.S. deaths every year, making obvious the value of AI as a way to double-check a doctor’s work or help a pharmacist notice dangerous drug interactions.
“That’s just one tiny area,” Patrick added. “There are many more significant areas, but they all have risks. We don’t have the guard rails and the regulations in place yet as a country so I would never urge any healthcare provider to roar into this too quickly.”
Patrick pointed to the value of AI within research pertaining to genetics, and its widespread application in self-driving cars as other areas where the current form of the technology may soon yield results. However, Patrick had concerns about the ability of unrestrained development to bring AI technology to a safe and productive space.
“One thing that potentially makes the outcome of all this less than possible is regulation, or rather the lack thereof, which is just the opposite of what we faced way back in 1994 when I was a co-founder of the World Wide Web Consortium at MIT.”
Patrick stated that in contrast to the internet, which as it was establishing itself was at risk of being strangled by regulations that would stifle innovation and growth, AI likely needs a stricter regulatory environment than currently exists in the U.S. He noted that he was among the 1,100 signatories of the open letter in March 2023 calling for a six-month pause on AI development; tech mogul Elon Musk and Apple co-founder Steve Wozniak also signed that letter.
“In Europe and even China they’re regulating, they’ve put a lot of work into creating guardrails to protect privacy and against the danger which can arise from AI,” Patrick said, noting the potential for AI to design both biological and computer viruses. “There’s a lot of things that could go wrong here, but the U.S. is moving at a snail’s pace.”
On the same day as Patrick’s presentation, President Joe Biden held a meeting with the leaders of Amazon, Google, Meta, Microsoft, and several AI startups that resulted in the announcement of initial steps to ensure the companies’ AI research will meet commitments in the fields of safety, security, and trust; however, the commitments are non-binding.