Designing schema to avoid having to use REFRESH TABLE

I’m designing a schema for an application where I need to be able to reliably select, update, or delete certain rows essentially immediately after insert. According to the docs, “If a query specification results in a get operation, changes are visible immediately.” Therefore, to avoid having to issue a REFRESH TABLE statement, I’m trying to ensure that I only access records by primary key, which as I understand it will result in a get.

That seems to work for the most part. For example, with a composite primary key (x, y), my testing shows that a DELETE with WHERE x = 1 AND y = 1 works consistently even when done immediately after insert. However, a DELETE with WHERE x = 'foo' AND y IN (1, 2, 3, …) does not: it only consistently deletes all matching rows if I do a refresh beforehand.
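For reference, a simplified reproduction of what I am testing looks roughly like this (table and column names are illustrative, not my real schema):

```sql
CREATE TABLE t (x TEXT, y INT, PRIMARY KEY (x, y));
INSERT INTO t (x, y) VALUES ('foo', 1), ('foo', 2), ('foo', 3);

-- works consistently immediately after insert (full primary key given):
DELETE FROM t WHERE x = 'foo' AND y = 1;

-- only reliably deletes all matching rows after a refresh:
DELETE FROM t WHERE x = 'foo' AND y IN (2, 3);
```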

Is this expected behavior? If so, it appears that merely using primary key columns in the WHERE clause isn’t enough; it also depends on how they are used.

If I have to resort to using REFRESH TABLE, how much of an impact on performance should I expect, given that in my use case these inserts are very infrequent relative to reads?

Hi, welcome back to the community,

Yes, this is expected behaviour: a WHERE clause containing IN results in a query operation, not a get.

Under the hood, CrateDB uses Lucene for persistence: each shard is a Lucene index, and internally each table row is a document (which can contain nested documents and arrays) within that index.

Each document has a unique identifier (_id) derived from the primary key. If this identifier is provided in the WHERE clause, or can be worked out from it, access to the specific document is direct; otherwise a search must be performed to match documents within the index against your search criteria.

create table a(i int primary key, t text);
insert into a values(42, 'answer');
select _id, * from a;

| _id |  i | t      |
| 42  | 42 | answer |
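Using the table above, the distinction looks like this (a sketch; the planner decides, but equality on the full primary key lets the _id be derived, while an IN list does not):

```sql
-- full primary-key equality: resolved to a direct _id lookup (get),
-- so the change is visible immediately after insert
DELETE FROM a WHERE i = 42;

-- IN on the primary key: planned as a search (query),
-- which only sees data as of the last refresh
DELETE FROM a WHERE i IN (42, 43);
```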

Writes to a row/document are atomic (all or nothing). They go first into a write-ahead log (the translog); we have this so that we do not have to force a Lucene commit after each write. When the translog gets flushed, all data changes are stored in the Lucene files and the translog is cleared. A refresh, which you can trigger manually and which is also performed periodically, is what makes recent changes visible to searches.
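If the periodic cadence does not suit your access pattern, it can also be tuned per table. Assuming the standard refresh_interval table setting (value in milliseconds), something like:

```sql
-- refresh this table every 500 ms instead of the default;
-- setting it to 0 disables periodic refreshes entirely
ALTER TABLE a SET ("refresh_interval" = 500);
```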

For searches, we internally use shared IndexReaders, which only pick up changes when they are refreshed.

A get operation will look first in the translog by direct access (once the _id has been worked out).

We already run refreshes periodically for you, and adding a few extra explicit refreshes will not result in additional flushes if there is nothing in the translog to flush, so the impact should be small.
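So for your infrequent inserts, an explicit refresh right after the write (before the IN-based delete) would look like this, reusing the example table from above:

```sql
INSERT INTO a (i, t) VALUES (43, 'other');

-- make the insert visible to query-style statements
REFRESH TABLE a;

-- the IN clause is planned as a search, so it now sees both rows
DELETE FROM a WHERE i IN (42, 43);
```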

However, it is best to run your own measurements to make sure you attain your desired level of performance.

Kind regards,