
FAQs regarding Corti's LLMs

Frequently asked questions regarding Corti's Large Language Models


Here you will find quick answers to the most common questions regarding Corti's LLMs. Whether you are new or just need a refresher, this page will help you understand some of the basics of our models.


Are the LLMs used by Corti developed in-house?

We employ multiple LLMs in our stack. In many instances, we rely on our own proprietary LLMs. However, we also combine our own models with other leading models in the market to ensure the best possible output quality for any given task. For example, in a classic text summarization use case, multiple LLMs work together to (a) produce the summary and (b) validate the output before the final result is returned.

💡 For a practical, in-depth view of multiple LLMs working in tandem, you may reference our FactsR publication here.
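The summarize-then-validate pattern can be sketched in a few lines of Python. The two "models" below are plain stand-in functions, not Corti's actual LLMs or API; the point is the pipeline shape: one model produces the summary, and a second model checks it before the result is returned.

```python
# Illustrative sketch of a summarize-then-validate pipeline.
# Both "models" are toy stand-ins, not Corti's actual LLMs.

def generator_llm(text: str) -> str:
    """Stand-in generator: 'summarize' by keeping the first sentence."""
    return text.split(". ")[0].rstrip(".") + "."

def validator_llm(source: str, summary: str) -> bool:
    """Stand-in validator: reject summaries containing words not grounded
    in the source text (a crude hallucination check)."""
    source_words = {w.strip(".,").lower() for w in source.split()}
    return all(w.strip(".,").lower() in source_words for w in summary.split())

def summarize(text: str) -> str:
    """Generate a summary and accept it only if the validator agrees."""
    summary = generator_llm(text)
    if not validator_llm(text, summary):
        raise ValueError("summary failed validation")
    return summary

print(summarize("Patient reports mild headache since Monday. No fever."))
```

In a production setting, each stand-in would be a call to a separate model, and a rejected summary would typically trigger regeneration rather than an error.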

Corti always trains its LLMs starting from open-source model parameters (weights), which allows us to:

  • Avoid a “cold start” when learning general language features

  • Stay current with new research in the fast-paced field of AI

  • Continuously improve our models through training and configuration on proprietary healthcare data
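The "cold start" point above can be illustrated with a toy 1-D linear model trained by gradient descent: initializing near the target (a stand-in for open-source pretrained weights) converges in far fewer updates than initializing far away. All numbers here are illustrative toy values, not anything from Corti's actual training pipeline.

```python
# Toy "warm start" illustration: fitting y = w * x with SGD.

TARGET_W = 2.0  # the mapping our (toy) fine-tuning data follows
DATA = [(x, TARGET_W * x) for x in [0.5, 1.0, 1.5, 2.0]]

def steps_to_converge(w: float, lr: float = 0.05, tol: float = 1e-3) -> int:
    """Run SGD on squared error until w is within tol of the target."""
    steps = 0
    while abs(w - TARGET_W) > tol:
        for x, y in DATA:
            w -= lr * 2 * (w * x - y) * x  # gradient of (w*x - y)^2
            steps += 1
    return steps

cold = steps_to_converge(w=-3.0)  # far-off initialization ("cold start")
warm = steps_to_converge(w=1.9)   # near-target initialization ("warm start")
print(f"cold start: {cold} updates, warm start: {warm} updates")
```

The same intuition scales up: pretrained weights already encode general language features, so fine-tuning on domain data needs far less compute than training from scratch.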

In line with GDPR and ISO 13485 requirements, we maintain a Software Bill of Materials (SBOM) and inform all customers of sub-vendors through our Data Processing Agreements (DPAs).


Are the LLMs based on open-source tech?

Yes. We rely on open-source machine learning model parameters from leading LLM frameworks, such as the ones referenced here: Hugging Face Open LLM Leaderboard. The technology stack, fine-tuning, and resulting models are proprietary to Corti.


Does Corti use customer data to train its LLMs?

No. Corti does not use individual customer data to train its models unless explicitly agreed upon in a Data Processing Agreement (DPA). All data processing follows GDPR and related regulatory standards in each market, ensuring strict separation between customer data and model training pipelines.


How often are Corti's LLMs retrained or updated?

Models are continuously improved through iterative fine-tuning and configuration updates, ensuring alignment with new clinical data, regulatory standards, and performance insights. Full retraining occurs on a scheduled basis to incorporate validated healthcare data sources.

Check out Corti's Release Notes for the most recent API updates.


In which language(s) are Corti's LLMs written?

Corti's backend infrastructure is written in a combination of Go, Python, and C++.


On which OS are Corti's LLMs running?

Corti's models run on Linux.


Have a question for our team?

Click Support in the bottom-left corner of the console to submit a ticket, or reach out via email at [email protected], and we'll be happy to assist you.
