App store policies for health apps: labeling, disclosures, and compliance
On a rainy afternoon I chased a stubborn rejection in App Store Connect and realized something obvious I’d been resisting: health apps don’t just “ship features”—they ship promises about safety, privacy, and accuracy. I opened my notebook and sketched a single line that has since guided every release: tell people plainly what you collect, why you collect it, and what you won’t do with it—and prove it in your app store label and your in-app flows. That one sentence has saved me from so many avoidable delays and “metadata” rejections.
What changed my mind about labels and trust
I used to treat the App Store “privacy nutrition label” and Google Play’s “Data safety” as paperwork. Then I watched a friend try a weight-tracking app while managing a new diagnosis. The first thing she did was scan the listing for what data would be collected and whether it would be sold or used for ads. Her download decision came down to those few lines. That’s when it clicked: the label is the handshake. If the handshake isn’t clear or consistent with reality, trust evaporates. To get specific, Apple expects developers to declare data collection and uses in the App Privacy section, tied to its review rules, and Google surfaces a similar disclosure in the Play listing’s “Data safety” card. These aren’t marketing blurbs; they’re attestations—and stores can act if they’re inaccurate (Apple App Privacy Details, Google Play Data safety).
- High-value takeaway: Treat the store label as a mini data inventory. If it’s not true in code and vendor configs, it’s not ready to publish.
- Cross-check your label against your SDKs and server logs. If a third-party library touches personal or device data, it likely belongs on the label.
- Remember that omissions are also disclosures. If you leave something out and the store detects it (or a researcher flags it), you can face removal or forced updates.
The two checklists I keep taped to my monitor
To stop the thrash in the week before submission, I keep parallel checklists: one for Apple, one for Google. They’re boring, which is precisely why they work.
Apple App Store quick pass (my go-to “preflight”):
- App Review Guidelines 5.1 sanity check: permissions are the minimum necessary, purpose strings are clear, and I’m not nudging people into consent they don’t need to give (App Store Review Guidelines).
- Health data use boundaries: if HealthKit or health/fitness context is involved, I confirm we don’t use that data for advertising or other use-based data mining, and we have a published privacy policy accessible from the listing and in-app settings (HealthKit privacy guidance).
- App Privacy label: finish the “App Privacy Details” questionnaire in App Store Connect after a fresh data audit; align “collection,” “linked to you,” and “tracking” to reality (Apple App Privacy Details).
- Health research? If the app conducts health-related human subject research, I make sure the consent flow is readable and IRB approval is on file if required by the study design (see ResearchKit HIG and related ethics notes in Apple docs).
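The purpose-string check in that preflight is easy to automate. Here’s a minimal sketch in Python that scans an Info.plist for missing or suspiciously generic usage descriptions; the plist path and the “vague phrase” heuristics are my own assumptions, not anything Apple publishes:

```python
"""Preflight: flag missing or vague purpose strings in an iOS Info.plist.

The PURPOSE_KEYS list covers common health-app permissions; the VAGUE
heuristics are illustrative assumptions -- tune both for your project.
"""
import plistlib
import sys

# Keys Apple requires when the corresponding data or sensor is requested.
PURPOSE_KEYS = [
    "NSHealthShareUsageDescription",
    "NSHealthUpdateUsageDescription",
    "NSLocationWhenInUseUsageDescription",
    "NSCameraUsageDescription",
]

# Phrases that usually signal a purpose string written for the reviewer,
# not the user (heuristic, not a policy rule).
VAGUE = ["needs", "requires access", "for the app"]

def audit(plist_path: str) -> list[str]:
    """Return a list of human-readable problems found in the plist."""
    with open(plist_path, "rb") as f:
        info = plistlib.load(f)
    problems = []
    for key in PURPOSE_KEYS:
        text = info.get(key, "")
        if not text:
            problems.append(f"{key}: missing")
        elif len(text) < 25 or any(v in text.lower() for v in VAGUE):
            problems.append(f"{key}: looks vague -> {text!r}")
    return problems

if __name__ == "__main__" and len(sys.argv) > 1:
    for issue in audit(sys.argv[1]):
        print(issue)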
Google Play quick pass (what repeatedly averts rejections):
- Data safety form: complete and keep it accurate; it must reflect collection, sharing, and security practices, and match the privacy policy in your listing (Google Play Data safety).
- Health content rules: avoid health misinformation and impossible medical functionality claims; if functionality relies on hardware (e.g., oximetry), say so plainly (Play Health Content & Services).
- Policy alignment: metadata (title, screenshots) must match actual functionality; no ambiguous “doctor-recommended” puffery. Keep billing, subscription, and consent flows consistent with Play policies.
Disclosures that users can actually understand
When I draft disclosures, I try to write the way I’d explain it to a sibling who doesn’t code:
- What we collect in one breath: “email, step count, mood check-ins.” Then a second sentence for why each is needed. If it’s optional, say so.
- What we do not do with health data (e.g., “We do not sell your health data or use it for advertising”). Put this both in the listing and in the first-run experience.
- Where data goes: name categories of service providers (e.g., analytics, crash reporting), the country or region if relevant, and retention windows.
- How to opt out or delete: link the exact setting path and provide a support email. Promise a verification step for deletion requests, then keep that promise.
That’s the humane version of the store’s ask. It also keeps your “label” synchronized with the living privacy policy the lawyers will wordsmith later.
Beyond store rules, the legal floor keeps moving
Store policies are table stakes, but real compliance often lives outside the stores. I try to run a quick three-question triage on every change:
- Am I a HIPAA entity or a business associate? If I’m building for a health plan, a provider, or processing PHI on their behalf, HIPAA kicks in (privacy/security rules, BAAs, breach processes). The U.S. Department of Health & Human Services has a concise primer on when HIPAA does or doesn’t apply to apps (HHS HIPAA & Health Apps).
- If HIPAA doesn’t apply, what does? The FTC’s Health Breach Notification Rule now explicitly reaches many health apps and “personal health record” (PHR) services outside HIPAA; an unauthorized disclosure (even without a hack) can be a breach that triggers notices and timelines (FTC Health Breach Notification Rule).
- Any state consumer health privacy laws? Several states regulate “consumer health data” well beyond HIPAA. Even if you’re not linking to each statute in your listing, design as if opt-in consent and granular sharing controls are required. It’s easier to build for stricter states first than retrofit later.
When a health app becomes a medical device
One of my early lessons: an app that only logs symptoms is very different from one that analyzes symptoms to inform diagnosis or treatment. The U.S. FDA regulates a subset of software functions (including some apps); the test isn’t “is it on a phone?” but whether the function meets the device definition and its risk. Many wellness and self-management features fall outside active FDA oversight. But calculation engines that guide dosing or interpret physiologic signals can land you in device territory with premarket and quality system implications. When in doubt, read the FDA’s risk-based policy overview and examples, then scope your claims accordingly.
- Claims drive classification. “Helps you remember meds” ≠ “Optimizes insulin dosing.” The second may require FDA pathways.
- Design for auditability. Keep a record of training data sources for algorithms and a change log for any model updates that could impact risk.
- Coordinate your store copy. If your marketing overstates capabilities, reviewers may flag you for unapproved device claims despite conservative in-app behavior.
(If you want the primary reference that summarizes this, start with FDA’s policy page and linked guidance for device software functions; it explains what is and isn’t under active oversight.)
Tiny habits that kept me out of trouble
I’m not naturally organized, so I made the process lightweight:
- Ship a data map with every feature. One spreadsheet tab: fields, purpose, legal basis (if applicable), retention, third parties. Update your store label from this, not from memory.
- Do “SDK stand-downs.” Quarterly, I remove any analytics or marketing SDK that I’m not actively using. Fewer integrations mean fewer disclosure surprises.
- Practice your breach drill. Even if you’re not HIPAA-covered, the FTC rule expects timely notification after certain unauthorized disclosures. Have a plain-English playbook (FTC Health Breach Notification Rule).
- Pin your promises in-app. I add a “Privacy at a glance” card in settings: what we collect, how to delete, data never used for ads. It sets the tone for support conversations later.
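The data-map habit above pays off doubly if you wire it into a drift check before each submission. A rough sketch, assuming a hypothetical `data_map.csv` (with a `field` column) exported from that spreadsheet, and a hand-maintained set of categories you last declared on the store label:

```python
"""Drift check: compare the per-feature data map to the declared store label.

`data_map.csv` and the DECLARED set below are hypothetical stand-ins for your
own inventory and the categories entered in App Store Connect / Play Console.
"""
import csv

# Data categories currently declared on the store label (maintained by hand).
DECLARED = {"email", "step count", "mood check-ins", "crash diagnostics"}

def find_drift(csv_path: str) -> tuple[set, set]:
    """Return (collected but undeclared, declared but no longer collected)."""
    collected = set()
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            collected.add(row["field"].strip().lower())
    return collected - DECLARED, DECLARED - collected
```

Both directions matter: the first set is the classic “forgot to update the label after adding an SDK” failure, and the second is a label claiming collection you’ve since removed, which is also inaccurate.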
Common rejection patterns I’ve personally hit
I wish I could say I learned these by reading. I didn’t. I learned them by getting rejected and then fixing the root cause.
- Apple: missing or mushy purpose strings. Review flags vague requests like “App needs Health data.” Say why, in human terms, and ensure the feature still works (in limited form) if people decline (App Review Guidelines).
- Apple: health data used for ads or profiling. Even implied ad use can trip a rejection if HealthKit or a health context is present. Keep health/fitness data completely out of ad tech paths (HealthKit privacy guidance).
- Google Play: label drift. I once shipped a new crash SDK and forgot to update Data safety; the form must reflect collection and sharing—and it has to stay in sync with reality (Data safety).
- Google Play: health misinformation and impossible features. If the app implies it can measure a vital sign using only the camera when that’s not clinically validated (or requires external hardware), expect problems (Health Content & Services).
- Cross-store: policy-copy mismatch. Stores compare your listing, in-app behavior, and privacy policy. If your policy says “no location data,” but the app asks for it, you’ll get called on it.
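That last mismatch can be caught mechanically. A crude sketch that flags Android permissions your privacy policy never mentions; the permission-to-keyword map is an illustrative assumption (real policies need human review, this only catches obvious gaps):

```python
"""Rough cross-check: Android permissions the privacy policy never mentions.

The KEYWORDS map is an illustrative assumption, not a policy requirement --
extend it with the permissions and vocabulary your own app uses.
"""
import re

# Permissions that warrant a disclosure, mapped to a word we expect to
# appear somewhere in the privacy policy text.
KEYWORDS = {
    "android.permission.ACCESS_FINE_LOCATION": "location",
    "android.permission.CAMERA": "camera",
    "android.permission.BODY_SENSORS": "sensor",
}

def mismatches(manifest_xml: str, policy_text: str) -> list[str]:
    """Return requested permissions whose keyword is absent from the policy."""
    requested = set(
        re.findall(r'uses-permission android:name="([^"]+)"', manifest_xml)
    )
    policy = policy_text.lower()
    return [p for p in sorted(requested)
            if p in KEYWORDS and KEYWORDS[p] not in policy]
```

A hit from this check means one of two fixes: drop the permission, or disclose it. Either way, resolve it before the reviewer does the same comparison by hand.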
My plain-English disclosure blueprint
Here’s the little template I paste into drafts before the lawyers wordsmith it. It hits what I’ve seen reviewers check repeatedly:
- Data we collect: list health data types (e.g., steps, sleep), identifiers, device info, and diagnostics.
- Why we collect them: explain the feature that depends on each data type.
- What we never do: “We do not sell your health data; we do not use health data for advertising.”
- Sharing: name categories of processors (analytics, cloud hosting). If any sharing meets Play’s definition, reflect that in the Data safety label.
- Controls: toggles, export, deletion—plus a support contact with SLA.
- Security: at least mention encryption in transit and at rest, access controls, and incident response basics.
- Special notes: if you conduct research, summarize the consent and oversight plainly.
A quick compliance warm-up I run before each release
Five minutes can catch a week of pain:
- Open the listing side by side with the app. Do your screenshots, claims, and permission prompts tell the same story?
- Re-submit the label forms (Apple privacy details / Play Data safety) after any SDK or analytics change—even “minor” ones.
- Re-read the health sections of the policies for each store; Google Play updates its health policy and examples periodically, and Apple tightens privacy areas over time (Play Health Content & Services, App Review Guidelines).
- Check breach readiness. If a vendor misconfiguration exposed logs with health data, could you meet the FTC’s notice steps in time (FTC HBNR)?
What I’m keeping and what I’m letting go
I keep the discipline of writing labels and disclosures as if my best friend were reading them on a stressful day. I keep a humble attitude about how fast these rules evolve, with a recurring reminder to reread Apple’s and Google’s health-related sections before each major release. And I let go of the impulse to bury edge cases in legalese. Clear, precise, and honest beats clever—with reviewers and with users.
FAQ
1) Do I need HIPAA compliance for my health app?
Answer: Only if you’re a covered entity (like a provider or plan) or a business associate handling PHI on their behalf. Many direct-to-consumer health apps aren’t HIPAA-covered but still face FTC and state consumer-health privacy rules. A quick primer is here: HHS HIPAA & Health Apps.
2) What exactly goes into the Apple “privacy nutrition label”?
Answer: In App Store Connect you disclose what you collect (by category), how it’s used (e.g., analytics, app functionality), whether it’s linked to the user, and whether it’s used for tracking. Keep it consistent with your code and privacy policy (Apple App Privacy Details).
3) How is Google Play’s Data safety different?
Answer: The Play form emphasizes collection, sharing, and security practices and shows a summary card on your listing. You’re responsible for accuracy and keeping it up to date. Play also has explicit health content rules banning misinformation and impossible features (Data safety, Health Content & Services).
4) My app estimates vitals with the phone camera. Is that allowed?
Answer: Be extremely careful. Play prohibits misleading or potentially harmful medical functionality claims; unsupported claims can trigger removal. If external hardware or specific sensors are required, say so clearly in the listing—and reassess whether your claims drift into device territory that may require FDA clearance (Play Health Content & Services).
5) If we accidentally shared health data with an analytics vendor, is that a “breach”?
Answer: It can be. Under the FTC Health Breach Notification Rule, unauthorized disclosures by health apps and related services outside HIPAA can count as a breach, triggering notification duties on specific timelines. Read the current rule text before you need it (FTC HBNR).
Sources & References
- Apple App Store Review Guidelines
- Apple App Privacy Details
- Google Play Data safety
- Google Play Health Content & Services
- FTC Health Breach Notification Rule (Final)
This blog is a personal journal and for general information only. It is not a substitute for professional medical advice, diagnosis, or treatment, and it does not create a doctor–patient relationship. Always seek the advice of a licensed clinician for questions about your health. If you may be experiencing an emergency, call your local emergency number immediately (e.g., 911 [US], 119).