Thursday, January 15, 2026

The PHI-Safe Way to Use AI for Clinical Research

Eddie Czech

When Cody Lee saw his first high-level athlete recovering from ACL reconstruction surgery, he realized his general physical therapy knowledge wasn't going to be enough. But he also knew that turning to AI tools meant navigating serious privacy concerns.

Most clinicians in smaller practices face this dual challenge. They treat complex cases without access to condition-specific specialists, but they can't compromise patient privacy by feeding protected health information into AI systems.

Over the course of seven years, Cody moved from front office staff to licensed PT. That journey helped him understand both the administrative constraints of smaller healthcare systems and the absolute necessity of maintaining patient privacy—even while seeking better clinical knowledge.

When he encountered that ACL patient, Cody had a choice—follow the standard protocol and hope it worked, or find a way to build specialized knowledge without exposing a single piece of patient data.

He chose to build a privacy-first digital clinical specialist using AI project tools.

Key Takeaways:

  • AI project features can compress months of expertise development into days
  • Protecting patient information isn't optional: every query must use only de-identified, generic clinical data. Never input names, dates, or any protected health information
  • Verification protocols prevent AI errors from reaching patient care

You're flying blind on complex cases

Every clinician treating patients outside their specialty faces the same challenge. They receive patients who need more than general care, but smaller health systems rarely employ specialists for every condition.

"I had a high-level ACL reconstruction patient, and it was going to be the first time I had seen a patient like that in particular," says Cody of his experience.

Surgeons provide rehabilitation guidelines, and clinicians can follow them step by step. But protocols don't account for patient specifics:

  • When should you progress exercises?
  • What does normal recovery look like versus concerning patterns?
  • How do you handle patients who aren't responding as expected?

Cody knew he needed answers beyond what any protocol could provide. He started researching manually—reading articles, listening to podcasts, reviewing clinical practice guidelines. But the volume of information to go through was eating time, and his patient needed treatment now.

AI projects can be your first pass at testing clinical thinking

When Cody stopped thinking about AI as a search tool and started treating it as a research assistant he could train, he developed a systematic way to build condition-specific expertise on demand rather than hoping his general PT knowledge would suffice.

During this same period, Claude released its Projects feature, which changed how Cody could interact with AI. Instead of asking random questions and getting generic responses, he could upload clinical research and scientific papers to create a knowledge base focused entirely on ACL rehabilitation.

"I created a clinical mentor for myself, specifically for ACL rehab in this particular case. But you could make one for anything."

Shaping and curating the knowledge base for an AI specialist opens new possibilities for clinicians to improve both the quality and speed of patient care. It starts with the foundation of resources the project ingests.

Cody's process: build an evidence library, upload it, query it, and verify the answers

Step 1: Build your evidence library

Cody started by gathering every high-quality resource he could find about ACL reconstruction rehabilitation. He manually researched and read through sources to validate the information.

His library included:

  • Peer-reviewed research articles on ACL rehabilitation
  • Clinical practice guidelines from professional associations
  • Surgical protocols from multiple universities
  • Expert podcasts and blog posts from recognized specialists

The key was collecting resources that represented different perspectives and the most current evidence. Cody worked to build a comprehensive collection that covered the full spectrum of ACL rehabilitation knowledge.

For clinicians starting this process, focus on gathering quality over quantity. Look for resources from institutions known for treating the specific condition, recent publications that reflect current best practices, and materials that come from reputable sources with proper citations.

Step 2: Upload everything to an AI project

Once Cody had his evidence library, he needed to organize it in a way that let him query it intelligently. Generic AI chat tools wouldn't work. They would pull from their general training and might miss the nuances of his specific case. That’s where Claude’s project feature came into play.

AI projects work differently than standard chat interfaces. Their retrieval system prioritizes the documents you've uploaded over generic training data. When you ask a question, the AI searches your uploaded materials first, then constructs answers based on that specific evidence.

Cody uploaded surgeon protocols, rehabilitation guidelines from multiple universities, research articles, and clinical practice guidelines. This created a knowledge base he could query conversationally, asking detailed questions about specific recovery milestones, exercise progressions, and expected outcomes.

Step 3: Query without compromising patient privacy

Healthcare creates unique constraints for AI use. Cody needed to discuss his patient's progress without violating privacy regulations or exposing protected health information (PHI) to external systems.

He developed a questioning method that used only de-identified clinical data. "Without using any PHI or patient identifiers, I gave it some objective measurements: this is what the range of motion looks like, this is what swelling looks like. Where should we be in the rehab process compared to what the evidence says?" Cody explains.

To avoid using names, dates, or identifying details, create anonymous templates for describing patient presentations. Focus on measurable data points that any patient might have, avoiding anything that could identify a specific individual.
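One way to make that template habit concrete is to encode it. The sketch below is a hypothetical illustration, not Cody's actual tooling: the field names, the blocked-identifier list, and the query wording are all assumptions, and the blocklist is far from a complete HIPAA de-identification standard. The idea is simply that a query is assembled only from objective measurements, and anything that looks like an identifier stops the query from being built at all.

```python
# Hypothetical sketch of a de-identified query template.
# The PHI_FIELDS blocklist is illustrative, not exhaustive.
PHI_FIELDS = {"name", "dob", "date_of_birth", "mrn",
              "surgery_date", "address", "phone", "email"}

def build_query(measurements: dict, weeks_post_op: int) -> str:
    """Format objective data points into an anonymous clinical query."""
    leaked = PHI_FIELDS & {k.lower() for k in measurements}
    if leaked:
        # Fail closed: never build a query containing identifier fields.
        raise ValueError(f"Identifier fields present: {sorted(leaked)}")
    facts = "; ".join(f"{k.replace('_', ' ')}: {v}"
                      for k, v in measurements.items())
    return (
        f"Patient is {weeks_post_op} weeks post-op ACL reconstruction. "
        f"Objective findings: {facts}. "
        "Based only on the uploaded evidence, where should this patient "
        "be in the rehab process, and which source supports your answer?"
    )

query = build_query(
    {"knee_flexion_rom": "110 degrees",
     "swelling": "trace effusion",
     "quad_strength": "70% of uninvolved side"},
    weeks_post_op=6,
)
print(query)
```

Note that "6 weeks post-op" is a relative duration rather than a calendar date, which is why it can appear in the query while an actual surgery date cannot.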

Step 4: Verify every recommendation

By now, we should all know that AI systems can generate confident-sounding information that's completely wrong. Cody understood this risk and built verification into his process from the start.

He required the AI to cite which uploaded document contained each piece of information. When recommendations seemed questionable, he could go directly to the source article and read the original context.

Cody also acknowledged the limits of AI verification. As an expert in his own field, Cody has the knowledge to determine what is and isn’t accurate, but: "If you're not familiar at all, then it could be challenging to pick up on the accuracy of information."

When his patient's progress stalled, Cody reached out to a recognized ACL rehabilitation specialist for a 15-minute consultation to combine his AI research with human expertise. The AI helped him develop enough knowledge to ask intelligent questions and recognize when consulting another expert was still necessary.

From protocol-follower to confident specialist

Cody became the specialist his clinic lacked. He could evaluate his patient's progress against evidence-based milestones, adjust treatment based on research rather than guesswork, and identify when consultation with another expert was necessary.

What would normally take months of continuing education courses and case experience happened in days of focused, AI-assisted research. But perhaps more importantly, Cody created a repeatable process. The next time he encounters an unfamiliar condition, he knows exactly how to build the expertise he needs: gather evidence, upload it to a project, query it carefully, and verify thoroughly.

For clinicians in smaller practices, this unlocks an entirely new toolbox. You don't need to wait for your clinic to hire specialists or send you to expensive training programs. You can develop condition-specific expertise on demand, creating your own digital mentors who help you deliver better patient care.

The technology is here, and the evidence base to support it exists. The new challenge is building the discipline to use it both safely and effectively.


Frequently Asked Questions

How can small independent healthcare practices access specialist knowledge without hiring experts?

Clinicians at small, independent healthcare practices can create AI-powered, PHI-safe clinical research assistants by gathering high-quality evidence sources and uploading them to AI project tools such as Claude Projects.

This approach allows practitioners to develop condition-specific expertise on demand by building a curated knowledge base from peer-reviewed research, clinical practice guidelines, and specialist protocols. The process transforms months of expertise development into days of focused research while maintaining the rigor needed for patient care.

What precautions should clinicians take when using AI tools for patient cases?

Healthcare providers must never input protected health information or patient identifiers into AI systems. Instead, clinicians should use only de-identified clinical data such as objective measurements, range of motion values, and functional test results. Creating standardized templates for describing patient presentations helps maintain privacy while still allowing meaningful clinical consultation.

Every AI recommendation should be verified against source materials, and clinicians should maintain their clinical reasoning rather than accepting AI outputs at face value.

How do AI projects differ from regular AI chat interfaces for clinical research?

AI project features prioritize user-uploaded documents when they retrieve information to answer a user query. When a clinician asks a question, the AI searches the specific evidence library first rather than pulling from its broader knowledge base. This creates a more focused and relevant consultation experience.

Regular chat interfaces lack this document prioritization and may provide generic responses that miss the nuances of specific clinical conditions or treatment protocols.

Can AI tools replace continuing education and expert consultation?

AI research assistants complement rather than replace traditional learning methods. They help clinicians rapidly develop baseline knowledge and ask intelligent questions, but human expertise remains essential for complex cases. Combining AI-assisted research with brief consultations from recognized specialists provides the most effective learning pathway.

The technology serves as a force multiplier that helps practitioners make better use of limited access to experts while building their own clinical judgment and reasoning skills.

Educational only; verify all AI output.