RESEARCH PREPRINT – Analysis of Generative AI Large Language Models (LLMs) for Hyper-Specialized Transfer Learning in Medical Chatbots

Note: This is a preprint of an abstract of an AI research paper. At the time of writing, the paper is still being finalized for publication. A link to the final publication will be provided here once it is published.

Title: Analysis of Generative AI Large Language Models (LLMs) for Hyper-Specialized Transfer
Learning in Medical Chatbots

Authors: Boyle, Jacob; Samal, Soham; Sabale, Aarnav

Generative AI Large Language Models (LLMs), such as the prevalent GPT series of transformer-based models, offer unique capabilities that enable medical chatbots and medical NLP technologies to generate more personalized responses, gain deeper insight into textual input, and provide more comprehensive and robust responses and care to patients. However, commercially available models are not specialized for medical use and have previously been shown to produce toxic text, misinformation, and inappropriate, if not harmful, responses. Developing specialized medical LLMs currently requires large datasets of training examples and significant processing time, which are typically out of reach for the average medical practice or startup due to high cost and time constraints. This paper proposes an alternative to creating new medical LLMs: developing specialized medical chatbot skills from minimal training examples and commercially available LLMs by instructing the LLM to contextualize a logic-based template based on a patient's input. Two commercially available generative LLMs were given varying numbers of training examples on a set of specialized tasks, each transforming a patient's textual input into a medically appropriate output, and the outputs were scored for medical accuracy, toxicity, personalization, appropriateness, and grammatical correctness. Preliminary results showed that template-based transfer outputs from generalized LLMs outperformed the unmodified generalized LLMs on toxicity and accuracy, matched specialized LLMs on those same metrics, and improved on personalization. This style-transfer approach can enable more rapid development and commercialization of medical chatbots by reducing the time and cost required to compile data and train models, while improving accuracy and safety by reducing variables in the model.
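To make the approach concrete, the sketch below illustrates one plausible shape of the template-contextualization step: a few worked examples plus a logic-based response template are assembled into a few-shot prompt for a general-purpose LLM. The template wording, example pairs, and prompt layout here are illustrative assumptions, not the paper's actual prompts, scoring pipeline, or evaluation data.

```python
# Hedged sketch of few-shot "style transfer" prompting with a logic-based
# template. All template text and example pairs below are hypothetical
# placeholders, not content from the paper.

TEMPLATE = (
    "You are a medical assistant. Rewrite the patient's message into a "
    "medically appropriate, personalized response by filling this template:\n"
    "Acknowledge: <restate the patient's concern>\n"
    "Guidance: <general, non-diagnostic advice>\n"
    "Next step: <suggest consulting a clinician when appropriate>"
)

# Minimal training examples, in the spirit of the paper's low-data setting.
FEW_SHOT_EXAMPLES = [
    ("I've had a headache for three days.",
     "Acknowledge: A headache lasting three days sounds uncomfortable.\n"
     "Guidance: Rest, hydration, and over-the-counter pain relief may help.\n"
     "Next step: If it persists or worsens, please see your doctor."),
]

def build_prompt(patient_input: str, n_examples: int = 1) -> str:
    """Assemble a few-shot prompt: template instructions first, then worked
    examples, then the new patient input awaiting the model's completion."""
    parts = [TEMPLATE]
    for example_input, example_output in FEW_SHOT_EXAMPLES[:n_examples]:
        parts.append(f"Patient: {example_input}\nResponse:\n{example_output}")
    parts.append(f"Patient: {patient_input}\nResponse:")
    return "\n\n".join(parts)

# The assembled prompt would then be sent to a commercially available LLM;
# the API call itself is omitted here.
print(build_prompt("My allergies are acting up again."))
```

Varying `n_examples` mirrors the experiment's manipulation of how many training examples each model receives before its outputs are scored.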

Jacob Boyle serves as CEO and CTO of MARCo Technologies. As CEO, his primary functions include defining the company direction, managing contractors and employees, and spearheading outreach and customer discovery. As CTO, he has led the development of MARCo: he designed and produced the existing MARCo units, coordinates with manufacturers, develops software for the online edition, and oversees continued R&D. He also served as the Technical Lead in the National Science Foundation's I-Corps accelerator program on behalf of the company.
