Data Provenance and Lineage Tracking in LLM Systems
LLM developers face mounting pressure from international and U.S. state legislation, and one data protection area in particular, the "right to be forgotten," is increasingly important in the domain of LLMs. Although some current research investigates methods for editing individual model parameters, e.g., to delete certain information, the general problem of erasing erroneously ingested personal data from a trained language model remains unsolved.
Abstract: Large language models (LLMs) are deployed at scale, yet their training data life cycle remains opaque. This survey synthesizes research from the past ten years along three tightly coupled axes: (1) data provenance, (2) transparency, and (3) traceability, together with three supporting pillars: (4) bias and uncertainty, (5) data privacy, and (6) the tools and techniques that operationalize them. A central contribution is a proposed taxonomy that defines the field's domains and lists their corresponding artifacts. Through an analysis of 95 publications, this work identifies key methodologies concerning data generation, watermarking, bias measurement, data curation, and data privacy, as well as the inherent trade-off between transparency and opacity.
Tracing the Data Trail: A Survey of Data Provenance, Transparency and Traceability in LLMs
Hohensinner, Richard & Mutlu, Belgin & Estrada, Inti & Vukovic, Matej & Kopeinik, Simone & Kern, Roman. (2026). Tracing the Data Trail: A Survey of Data Provenance, Transparency and Traceability in LLMs. 10.48550/arXiv.2601.14311.
________________________________
Disclaimer: This blog post is provided for informational purposes only and does not constitute legal advice. The linked article is the work of its respective author(s) and publication, with full attribution provided. BAYPOINT LAW is not affiliated with the author(s) or publication; the article is shared solely as a matter of professional interest.