Take Care With Your Prompts

Think twice before carrying on a personal conversation with - or sharing data with - your favorite AI assistant, or worse yet, what you consider to be your "AI friend." That buddy of yours is a corporate product - that's it, nothing more. And everything you share may be going places far and wide.

A study from the Stanford Institute for Human-Centered AI found that major AI chatbot providers (such as Anthropic, OpenAI, Google, Microsoft, Meta Platforms, and Amazon) routinely use user chat inputs to train their large language models by default, often retaining that data indefinitely and, in many cases, merging it with other user data streams.

Because of this, users face heightened privacy risks: telling a chatbot something personal - even in a separate file upload - can lead to that information being used in model training, to inferences about health or financial vulnerabilities, or to exposure across the provider's broader data ecosystem. The researchers recommend stronger regulation, default filtering of personal data, and affirmative opt-in consent for training use as key ways to mitigate these risks.

Be Careful What You Tell Your Chatbot

Citation: Nikki Goth Itoi, Be Careful What You Tell Your Chatbot, HAI (Stanford University Human Centered Artificial Intelligence), October 15, 2025, https://hai.stanford.edu/news/be-careful-what-you-tell-your-ai-chatbot.

________________________________

Disclaimer: This blog post is provided for informational purposes only and does not constitute legal advice. The linked article is the work of its respective author(s) and publication, with full attribution provided. BAYPOINT LAW is not affiliated with the author(s) or publication; it is shared solely as a matter of professional interest.
