Our main use case for Crate.io is a cache: we have 2-3 tables that are updated frequently.
Every minute we fetch data from a service (roughly 500 records), do some calculations, and then UPDATE records in our Crate.io cluster.
In other words, the data size itself is relatively small — roughly 3,000 records in one table and 9,000 in another — and the record count will stay that small.
Still, after running the software for a month or two, free disk space starts to decrease until the disk eventually fills up and Crate.io switches into read-only mode.
Looking at the disk, we can see that translog files are being written but never removed, so the “translog” directory grows very large over time.
So far, I’ve only found a way to limit the size of an individual translog file (via translog.flush_threshold_size), but I can’t find a way to limit the TOTAL size of all translogs on a particular node, or across the cluster.
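For reference, this is roughly how we set that per-shard limit — the table name is a placeholder, and my understanding is that this only bounds the size a single shard’s translog may reach before a flush is triggered, not the total across all shards:

```sql
-- Placeholder table name; caps one shard's translog before a flush
-- is forced, but does not bound the combined size of all translogs.
ALTER TABLE my_cache_table
SET ("translog.flush_threshold_size" = '256mb');
```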
Also, running OPTIMIZE with flush does not remove those translogs, even though they should be cleared according to your documentation here about durability and storage.
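This is roughly what we ran (table name is again a placeholder):

```sql
-- Flush as part of the optimize; per the docs this should allow the
-- translog to be cleared, but the files remain on disk afterwards.
OPTIMIZE TABLE my_cache_table WITH (flush = true);
```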
So right now it looks like the translog will grow indefinitely even though I’m only doing updates on my tables. Is there a way to avoid this and limit the total size of all translogs?