Clinical Camel - Healthcare Focused Chatbot


About Clinical Camel

Clinical Camel is an ongoing project aimed at developing an open-source, healthcare-focused chatbot. Inspired by the Vicuna team's work, Clinical Camel builds on the strong performance achieved by fine-tuning LLaMa with a mixture of user-shared conversations and synthetic conversations designed to encode high-quality clinical data from curated clinical articles.

Team:

Augustin Toma, Bo Wang

Contact us at:

Access Our Demo Here

Model Development

We fine-tuned LLaMa 13B on approximately 50,000 publicly shared chat records, along with 50,000 synthetic chat records produced by using passages parsed from curated clinical texts as seeds for dialogue. Details of our data synthesis methods and training will be made available in a forthcoming arXiv pre-print.
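
As a rough illustration of the seeding idea, the sketch below wraps a clinical passage in a dialogue-generation instruction. The template, function name, and example passage are hypothetical; the actual synthesis pipeline will be described in the pre-print.

    # Hypothetical illustration of using a curated clinical passage as a seed
    # for synthetic dialogue generation; the real pipeline may differ.
    def build_seed_prompt(passage: str) -> str:
        """Wrap a clinical passage in a dialogue-generation instruction."""
        return (
            "Generate a conversation between a clinician and an assistant. "
            "Ground every answer in the passage below and do not introduce "
            "facts it does not support.\n\n"
            "Passage:\n" + passage.strip() + "\n\nConversation:"
        )

    if __name__ == "__main__":
        example = (
            "Community-acquired pneumonia is typically treated with empiric "
            "antibiotics chosen according to severity and local resistance patterns."
        )
        # The resulting prompt would be sent to a dialogue-generation model.
        print(build_seed_prompt(example))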

Evaluation

Our model was trained with a focus on clinical content and, as such, may perform poorly on basic biomedical questions. We will be performing a structured evaluation of our model and comparing its results to those of other chat models; details will be available in our forthcoming arXiv pre-print.

Release

Due to the nature of the datasets used to develop this model, we are unable to release the raw data or the model itself. However, we will release delta weights and a script to convert LLaMa 13B weights into our model in the coming days.
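
For readers unfamiliar with delta-weight releases, the sketch below shows the general idea, assuming both checkpoints are in Hugging Face format: each delta tensor is added to the corresponding base LLaMa tensor and the reconstructed model is saved. The paths are placeholders, and the actual conversion script we release may differ.

    # Minimal sketch of applying delta weights to a base LLaMa 13B checkpoint.
    # Paths are placeholders; assumes Hugging Face-format checkpoints with
    # matching parameter names.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    BASE_PATH = "path/to/llama-13b-hf"            # original Meta weights (HF format)
    DELTA_PATH = "path/to/clinical-camel-delta"   # released delta weights (assumed)
    TARGET_PATH = "path/to/clinical-camel-13b"    # reconstructed model output

    base = AutoModelForCausalLM.from_pretrained(BASE_PATH, torch_dtype=torch.float16)
    delta = AutoModelForCausalLM.from_pretrained(DELTA_PATH, torch_dtype=torch.float16)

    base_state = base.state_dict()
    delta_state = delta.state_dict()

    # Reconstruct the fine-tuned weights by adding each delta tensor to the
    # matching base tensor in place.
    for name, tensor in base_state.items():
        tensor += delta_state[name]

    base.save_pretrained(TARGET_PATH)
    AutoTokenizer.from_pretrained(DELTA_PATH).save_pretrained(TARGET_PATH)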

Limitations

Healthcare is a safety-critical domain, and the deployment of large language models (LLMs) in such settings presents significant challenges due to their known limitations. One notable issue is the propensity of LLMs to generate hallucinated outputs, which can have serious consequences in a healthcare context. The training methodologies and data sources used for this model do not address this issue, and it remains susceptible to such problems, rendering it unreliable for critical applications. LLMs such as Clinical Camel are prone to generating biased and potentially harmful responses, further highlighting the need for caution when considering their use in healthcare settings. We aim to improve the performance of this model in our future research.

License

This demo is a research preview only. To obtain LLaMa, interested users must request and receive approved access from Meta.

Acknowledgment

We would like to thank Meta for sharing LLaMa with our research lab, and the LM-SYS team for creating and open-sourcing Vicuna and FastChat.