All Posts
HIPAA-compliant healthcare conversational AI platforms: what to look for


All Posts


A patient wakes at 2 am with a question about the medication she started yesterday. The discharge instructions don't answer what she's feeling right now, so instead of calling the ER or waiting until Monday, she may stop taking it. A follow-up call that could have caught a complication never happened.
Healthcare leaders know conversational AI belongs in these gaps, and that what patients need at 2 am is presence: the sense that someone is actually paying attention. The harder part is buying that presence without compromising what the Health Insurance Portability and Accountability Act (HIPAA) requires.
This guide is written for the three people in the buying room: the product lead, the engineering lead, and the compliance officer evaluating AI Personas for patient-facing workflows.
HIPAA-compliant healthcare conversational AI platforms are real-time audio or video systems designed to conduct patient-facing conversations while meeting the legal and technical requirements set by HIPAA for handling patient health information.
They combine conversational capabilities such as speech recognition, language understanding, and response generation with the administrative, physical, and technical safeguards the law requires of any system that creates, stores, or transmits that data.
In practice, that means signed Business Associate Agreements with every vendor in the data path, audited access controls, encryption for data in motion and at rest, and clear escalation rules when a conversation crosses into clinical territory. These platforms handle protocol-driven workflows such as intake, medication adherence, post-discharge follow-up, and chronic condition check-ins, while routing anything requiring a clinician's judgment to a human.
What HIPAA requires from a conversational AI platform
HIPAA has three rules that matter most:
Any vendor that creates, receives, maintains, or transmits PHI on behalf of a covered entity must sign a Business Associate Agreement (BAA) before touching patient data. Using a cloud service provider to process ePHI without a BAA violates federal regulations.
The Security Rule organizes requirements into administrative safeguards, physical safeguards, and technical safeguards, all of which must account for data in motion during live conversational AI sessions carrying PHI in real time.
Traditional Electronic Health Record (EHR) systems typically handle PHI as structured records governed by access controls and auditability. Conversational AI changes that model in six ways.
Real-time audio and video capture can create PHI before later processing steps occur. Large language model (LLM) components can generate and infer PHI risk in ways traditional systems do not. Transcripts stored for model improvement can create additional privacy and retention risk.
Persistent Memory carrying context across sessions introduces retention and cross-session contamination risks. Retrieval-Augmented Generation pipelines use probabilistic retrieval rather than deterministic access controls. Integrations with EHR systems via Fast Healthcare Interoperability Resources (FHIR) APIs can pull structured clinical records into conversational applications, where access control must be managed through mechanisms such as SMART on FHIR and OAuth 2.0.
Real-time capture, model inference, transcript retention, Persistent Memory, probabilistic retrieval, and FHIR-based EHR access define the evaluation framework for buyers.
Four evaluation areas separate platforms that can hold PHI safely from those that cannot: how data is handled, whether a BAA is in scope, who has access and how that access is logged, and how the platform handles clinical escalation. Walk through each in a proof of concept before signing.
A patient describes a miscarriage history during video intake. That utterance is PHI the moment it's captured, and every downstream handling decision matters.
Verify where that audio stream is stored, how long it persists, and whether any copy is used beyond the immediate clinical interaction. Confirm data residency: which regions and cloud providers host patient data, along with the retention policy and whether retention windows are configurable per customer.
Most critically, confirm in writing whether any patient data is used for model training or product improvement. If the answer isn't an explicit contractual prohibition, the compliance officer will have follow-up questions.
Your pilot deploys a medication-adherence AI Persona to 200 patients. If the pilot tier isn't covered by the BAA, those 200 conversations just created unprotected PHI.
Will the vendor sign a BAA, and which product tier does it cover? Some vendors restrict BAA availability to enterprise pricing, leaving pilots uncovered.
Ask which subprocessors are in scope and whether BAA obligations flow down to them, including cloud infrastructure and LLM inference providers.
A patient discloses substance use during intake, and six months later, an audit asks who accessed that transcript. Your logs must answer that question at the session level.
Role-based access must govern who views session recordings, accesses transcripts, and modifies the AI's clinical knowledge base. Audit logs must capture every PHI access event, exportable and compatible with your Security Information and Event Management (SIEM) infrastructure.
A patient mentions unexpected chest tightness in a post-discharge conversation. The platform needs to keep the conversation inside scope, detect triggers that require a human clinician, and make the escalation path clear. The practical question is what happens next: does the conversation pause, route to a nurse line, or log the event and continue?
If a vendor can't answer those questions directly, treat that as a warning.
Conversational video is well-suited to protocol-driven, high-volume healthcare interactions where presence helps, and independent clinical judgment is not required. In those workflows, patients often follow through more readily when the interaction feels attended to.
Acute clinical decision-making, emergency triage, and any conversation requiring a licensed clinician's judgment belong on a human escalation path. Stating that boundary explicitly answers the compliance officer's concern about scope creep.
Healthcare teams are already using conversational AI in several structured, high-volume workflows.
Healthcare teams get the most from these workflows when the path is structured, volume is high, and escalation boundaries are explicit.
Video-based conversational AI introduces biometric data, ambient audio, and visual signals that expand both the PHI boundary and the platform's capacity for clinically relevant perception. That expansion is what makes presence possible in a patient conversation, and it's also what requires a full-stack platform, not a face on top of an LLM.
Tavus provides real-time conversational video infrastructure as a full-stack platform across four pillars: perception, intelligence, personality, and rendering. The Conversational Video Interface (CVI) is the pipeline that wires those pillars together into a single real-time session.
Within that session, the LLM layer reasons over clinical policy, Knowledge Base content, and Objectives and Guardrails to produce responses that stay within scope. It decides what to say next, routes content, and commits, or discards generated responses based on updated signals from the other models in the loop.
In practice, a patient in a post-discharge conversation trails off mid-sentence, and her expression tightens. Raven-1 fuses the audio hesitation with the visual tension, catching the mismatch between what she's saying and how she's saying it. The LLM layer determines this, matches an escalation trigger, and decides what to say next, while Sparrow-1 holds timing to give her space.
When she doesn't continue, the AI Persona acknowledges her concern, explains that it's connecting her with a nurse, and logs the event. Phoenix-4 renders that response with the facial behavior a patient would expect from someone actually paying attention.
HIPAA compliance is available on Tavus Enterprise plans, and the platform holds SOC 2 (Service Organization Control 2) certification.
When evaluating AI platforms that are HIPAA compliant, these are the platform features worth testing in a proof of concept:
For healthcare teams, Knowledge Base, Guardrails, Objectives, Persistent Memory, and Function Calling matter most when the goal is grounded responses and a clear escalation path.
The patient on the other end of a HIPAA-compliant conversation is the one awake at 2 am, the one trying to remember what the clinician said three days ago, the one navigating recovery alone. What she needs is the feeling that someone is actually in the room with her, which is what presence at 2 am looks like when a clinic is closed.
Compliance is what makes that conversation legally possible; presence is what makes it worth having. The platforms worth evaluating treat both as part of the same system.
See it for yourself. Book a demo.