Is it possible to make old datastore data self-destruct?

I want to use datastores to store data for a temporary amount of time. The reason I don’t want to use memory stores for this is that I plan to move around large amounts of data, and the 32KB that memory stores provide isn’t enough even when the data is split across multiple keys (for reference, datastores give 4MB of space per key, meaning that to compensate with memory stores I would need to make 128 calls to the API).

So the question is: is it possible to replicate the memory store behavior of data self-destructing after a specific amount of time has passed, but with datastores? I thought of running some sort of routine that scans and clears old entries once in a while; how would I make such a process efficient? Is there a way to mass-delete datastore keys?

Thanks in advance if you figure out a way that is fast and doesn’t involve a lot of API calls.


Not really, other than repeatedly calling the delete API — there is no batch-delete or built-in expiration for datastores.
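If you do go the periodic-cleanup route, datastores let you enumerate keys with ListKeysAsync, so a sweep could look roughly like this. This is only a minimal sketch: it assumes each saved value is a table with a `savedAt` os.time() stamp, and the store name "TempData" is a placeholder. Keep in mind that every page, GetAsync, and RemoveAsync here counts against the DataStore request budget, so run it sparingly.

```lua
local DataStoreService = game:GetService("DataStoreService")
local store = DataStoreService:GetDataStore("TempData") -- placeholder name

-- Scan the whole store and remove entries older than maxAgeSeconds.
local function sweepExpired(maxAgeSeconds)
	local pages = store:ListKeysAsync()
	while true do
		for _, entry in ipairs(pages:GetCurrentPage()) do
			local ok, value = pcall(function()
				return store:GetAsync(entry.KeyName)
			end)
			-- assumes each value is a table carrying a `savedAt` timestamp
			if ok and type(value) == "table" and value.savedAt
				and os.time() - value.savedAt > maxAgeSeconds then
				pcall(function()
					store:RemoveAsync(entry.KeyName)
				end)
			end
		end
		if pages.IsFinished then
			break
		end
		pages:AdvanceToNextPageAsync()
	end
end
```

Note that each key still costs one RemoveAsync call, so for large stores the lazy check-on-read approach below is usually cheaper than sweeping.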

I’m not overly familiar with Memory Stores, so what I propose may not be useful:

Instead of storing just the data in the datastore, you could store a table containing the time (os.time()) and the data. Then, when you use GetAsync on the datastore, check whether the difference between the current os.time() and the stored time exceeds the deletion threshold. If it does, disregard the data and replace it with an empty table / some other default data (optionally calling RemoveAsync at this point). Otherwise, simply read the data out of the table.
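As a rough sketch of that lazy-expiry idea — the store name, function names, and the `savedAt` field are illustrative choices, not an official API:

```lua
local DataStoreService = game:GetService("DataStoreService")
local store = DataStoreService:GetDataStore("TempData") -- placeholder name

local EXPIRY_SECONDS = 3600 -- example threshold: one hour

-- Wrap the payload with a timestamp on write.
local function setWithTimestamp(key, data)
	store:SetAsync(key, { savedAt = os.time(), data = data })
end

-- On read, treat stale entries as missing (and optionally delete them).
local function getIfFresh(key)
	local entry = store:GetAsync(key)
	if entry and os.time() - entry.savedAt <= EXPIRY_SECONDS then
		return entry.data
	end
	if entry then
		store:RemoveAsync(key) -- optional eager cleanup of the stale key
	end
	return nil -- caller substitutes its own default data
end
```

This way expired data is cleaned up on access, with no extra API calls beyond the reads you were already making.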