Skip to main content

Ethical AI: Privacy and Security

This is part 1 of a blog series on Ethical AI.

This content was adapted from an internal learning and development session developed by HealthEdge’s AI team, focused on educating our organization on ethical AI. At HealthEdge, we believe that safe and responsible AI is of the utmost importance. This principle shapes both how we use AI internally to accelerate our own efficiency and how we build AI-powered solutions for our customers.

These materials reflect how our AI team thinks about these problems day to day. Ethical AI isn’t something we address at the end of a project or check off during a compliance review. Rather, it’s a lens we apply from the earliest stages of design through deployment and beyond and sharing these principles openly — with our own teams and with the broader community — is part of how we hold ourselves accountable.

Ethical AI Starts with Privacy and Security

As artificial intelligence becomes more widely adopted across healthcare technology platforms, protecting sensitive data has become a critical responsibility for organizations that build and deploy AI solutions. Many users rarely think about where their inputs go or how they may be stored until something goes wrong.

In healthcare environments, where protected health information (PHI) is involved, the stakes are particularly high; privacy failures can lead to regulatory consequences, loss of trust, and real harm. For organizations developing AI-powered tools, privacy and security must be designed into every decision, from tool selection to system architecture.

Privacy Isn’t Just a Policy — It’s a Design Problem

At its core, privacy is about ensuring that personal data is collected, used, and retained appropriately, and that people maintain control over their information. Simple in theory. AI makes it complicated in practice.

Large language models can memorize training data and spit back Personally Identifiable Information (PII) in unexpected contexts. People paste sensitive information into third-party tools without thinking about retention. Data gathered for one purpose quietly gets repurposed for another. And “anonymized” datasets? Often not as anonymous as advertised, as re-identification is a well-documented risk. For those of us in healthcare, this extends to Protected Health Information (PHI), meaning privacy failures aren’t just bad practice — they’re compliance violations.

If you’re a user, know what you’re feeding into these tools. Assume your inputs may be stored. Don’t paste in someone else’s personal data without authorization. And understand what your organization actually allows as input.

If you’re evaluating tools, ask the uncomfortable questions. Where does the data go? How long is it kept? Does the free tier use your inputs for model training, evaluation, or monitoring? (Many do.) Where does data physically reside? If a vendor can’t give you straight answers about data handling, that tells you what you need to know.

If you’re building, design for privacy from day one. Collect the minimum data you need. Be upfront about how you use it. Build in deletion and user control. And don’t lean on the LLM itself for access control, that’s not what it’s for. Assume that any data used to train the model could end up being model output; curate training datasets carefully.

Security: The Threat Surface You Might Be Underestimating

Security means protecting systems and data from unauthorized access, manipulation, and exploitation. With AI, the attack surface has grown in ways that catch teams off guard.

  • Prompt injection lets bad actors manipulate model behavior through crafted inputs.
  • Model inversion can extract training data from responses.
  • Adversarial inputs slip past safety controls.
  • Indirect prompt injection, poisoned content embedded in documents or data sources, is particularly stealthy if LLM guardrails don’t scrutinize the content before processing that information as instructions, causing unexplained or undetected malicious agent behavior.

Add API key exposure, credential leaks, and supply chain vulnerabilities, and there’s a lot to account for.

If you’re a user, never hand API keys, tokens, or credentials to an AI tool. Think twice before running AI-generated code. Double-check any output that touches security decisions or access controls. If something looks off, report it. When testing new tools, use sandboxed accounts with limited permissions.

If you’re evaluating tools, look for real security hygiene, such as documented incident response and documented guardrails. Ideally, these include published metrics, SOC 2 or ISO 27001 certification, clear credential management, and evidence of pen testing or red teaming. No security docs? Vague authentication story? Third-party integrations without defined boundaries? Walk away.

If you’re building, assume every input is hostile. Rate-limit and validate aggressively. Give your AI components their own credentials with the minimum necessary permissions. Never let the model make authorization decisions. Set up guardrails and monitor for prompt injection and data exfiltration patterns. Stay on top of dependency updates and run your systems against the OWASP Top 10 for LLMs.

This Isn’t a One-and-Done Conversation

Privacy and security are not one-time considerations addressed during an architecture review. They are ongoing disciplines that influence every stage of AI development and deployment, from the engineer designing prompts and guardrails to the product leader evaluating vendors and integrations. Organizations that embed these principles into their AI strategies will do more than reduce risk. They will build the level of trust that responsible AI adoption ultimately depends on.

For more information about HealthEdge’s approach to AI, visit www.healthedge.com.

About the Author

Marcus Barlett is a machine learning engineer who thrives at the intersection of data, automation, and a good challenge. With a knack for turning messy datasets into sleek, intelligent tools, he’s helped streamline everything from solar performance models to healthcare contract extraction. Marcus enjoys streamlining workflows with automation and has a knack for turning technical challenges into practical solutions. Outside of work, he’s always exploring new ways to blend creativity with technology.