Last year the FDA tried to inject some clarity with a new guidance that breaks down health apps into three buckets. The first contains things that are clearly medical devices—like an app that analyzes the content of your urine, just by looking at a photo of a pee-soaked chemical strip, or an app that tells you if a rogue mole is actually a cancerous melanoma. These trigger a formal FDA approval process (which involves clinical trials). The second bucket contains wellness apps—products that help you track your sleep, what you’re eating, how many steps you’re getting, and what your moods are like. These are at the other end of the regulatory spectrum and require no federal clearance. The third bucket contains everything in between. These are apps that could meet the definition of a medical device, but because they don’t actively market themselves as lifesaving, fall outside the (short-staffed, budget-strapped) FDA’s attention span.

The FDA’s position is based on a simple risk-cost calculus: an app that isn’t going to kill someone isn’t worth the cost of enforcement. Bradley Merrill Thompson, a partner at Epstein Becker Green who specializes in regulatory law for digital health, says it’s a reasonable strategy. Mostly. “The marketplace does quite well policing itself when the financial and public health risks are low,” he says. “Consumers will shut down any business where the truth is easily discoverable, but they’re never going to conduct clinical trials to figure out if something works.”

Which would be great—for app makers—if the FDA were the only federal agency involved. But medical apps that promise some sort of health outcome could also draw unwanted attention from the Federal Trade Commission, which protects consumers from fraud. Does this thing do what it says it does?