Time's Up!

Robust Watermarking in Large Language Models for Time Series Generation

Abstract

The advent of pretrained probabilistic time series foundation models has significantly advanced the field of time series forecasting. Despite these models' growing popularity, the application of watermarking techniques to them remains underexplored. This paper addresses this research gap by benchmarking several widely used LLM watermarking methods on time series foundation models and by introducing a novel watermarking technique named HTW (Heads Tails Watermark). Unlike traditional probabilistic watermarking approaches, HTW uses a pseudo-random function to embed a signal directly into the numeric structure of the series, thereby greatly enhancing its robustness against potential attacks. Comprehensive experiments and evaluations reveal that, on average, HTW retains 98.4% prediction accuracy, significantly outperforming conventional LLM watermarks. Furthermore, HTW demonstrates robust performance with an average z-score of 5.28 across various datasets and attack scenarios for a series length of 48. These findings establish HTW as a superior alternative for securing pretrained probabilistic time series foundation models.
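To make the idea concrete, here is a minimal sketch of the general approach the abstract describes: a keyed pseudo-random function decides, per position, a "heads/tails" bit that is encoded in the numeric structure of each value, and detection computes a z-score against the 0.5-match null of an unwatermarked series. This is an illustrative construction only, not the paper's actual HTW algorithm; the SHA-256 PRF, the last-decimal-digit parity encoding, and the two-decimal quantization are all assumptions made for the sketch.

```python
import hashlib


def prf_bit(key: bytes, index: int) -> int:
    # Keyed pseudo-random bit for position `index` (hypothetical PRF choice).
    digest = hashlib.sha256(key + index.to_bytes(8, "big")).digest()
    return digest[0] & 1


def embed(series, key: bytes, decimals: int = 2):
    # Encode the PRF bit in the parity of the last kept decimal digit,
    # nudging each value by at most one unit in the last place.
    scale = 10 ** decimals
    out = []
    for i, x in enumerate(series):
        q = round(x * scale)
        if (q & 1) != prf_bit(key, i):
            q += 1
        out.append(q / scale)
    return out


def detect_z(series, key: bytes, decimals: int = 2) -> float:
    # One-sided z-score of parity/PRF matches versus the 0.5 null
    # expected from an unwatermarked series.
    scale = 10 ** decimals
    n = len(series)
    matches = sum(
        (round(x * scale) & 1) == prf_bit(key, i)
        for i, x in enumerate(series)
    )
    return (matches - 0.5 * n) / (0.25 * n) ** 0.5
```

For a fully watermarked series of length n, every position matches and the z-score is exactly sqrt(n) (about 6.93 for the length-48 setting mentioned above), while the per-value distortion is bounded by one unit in the last retained digit.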