Will Your “Smart” Devices and AI Apps Have a Legal Duty to Report on You?

I just ran across an interesting article, “Should AI Psychotherapy App Marketers Have a Tarasoff Duty?,” which answers the question in its title “yes”: Just as human psychotherapists in most states have a legal obligation to warn potential victims of a patient if the patient says something that suggests a plan to harm the victim (that’s the Tarasoff duty, so named after a 1976 California Supreme Court case), so AI programs being used by the patient must do the same.

It’s a legally plausible argument—given that the duty has been recognized as a matter of state common law, a court could plausibly interpret it as applying to AI psychotherapists as well as to other psychotherapists—but it seems to me to highlight a broader question:

To what extent will various “smart” products, whether apps or cars or Alexas or various Internet-of-Things devices, be mandated to monitor and report potentially dangerous behavior by their users (or even by their ostensible “owners”)?

To be sure, the Tarasoff duty is somewhat unusual in being a duty that is triggered even in the absence of the defendant's affirmative contribution to the harm. Normally, a psychotherapist wouldn't have a duty to prevent harm caused by his patient, just as you don't have a duty to prevent harm caused by your friends or adult family members; Tarasoff was a considerable step beyond the traditional tort law rules, though one that many states have indeed taken. Indeed, I'm skeptical about Tarasoff, though most judges who have considered the matter don't share my skepticism.