Data too large?! I have 24GB of RAM across 3 nodes and 12 cores

I need to run a query across 500GB of data; do we need 500GB of data in RAM?

CrateDB and memory management:

  • CrateDB runs on the JVM (for ninjas); you set its heap size with the CRATE_HEAP_SIZE environment variable.
  • CrateDB uses the G1 garbage collector; very large heaps may eventually result in increased latency unless you fine-tune the G1 parameters.
  • When you issue a query, all the intermediate result structures, as well as the final ones, must reside in heap space, so the heap needs to be as big as these result sets; how big they get depends on your specific use case.
  • Our usual recommendation is to start off with 25% of the memory available on the host. Note that all Lucene-level memory management is done off-heap, via memory-mapped files; Lucene is the indexing/retrieval/persistence engine we use at the bottom of the application stack.
  • We also recommend that you do not exceed 30.5GB of heap, so that you can benefit from a JVM-level optimisation, compressed object pointers (for ninjas). A quick way to check the heap each node actually ended up with is sketched right after this list.
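
To sanity-check what each node actually got, you can query the sys.nodes system table, which exposes per-node heap and OS memory figures. A minimal sketch, assuming the heap['max'], heap['used'] and mem['used_percent'] columns available in recent CrateDB versions:

  -- Per-node heap ceiling and current usage (in GB), plus OS memory pressure.
  SELECT
      name,
      heap['max'] / 1024.0 / 1024 / 1024 AS heap_max_gb,
      heap['used'] / 1024.0 / 1024 / 1024 AS heap_used_gb,
      mem['used_percent'] AS os_mem_used_pct
  FROM sys.nodes
  ORDER BY name;

For the 24GB / 3-node setup in the question (roughly 8GB of RAM per node), the 25% rule of thumb would put each node's heap at about 2GB, which this query lets you confirm.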

Memory config guide: here.

In addition, at the node level (crate.yml config file), you can configure bootstrap.memory_lock; if set to true, CrateDB will call the mlockall system call on startup (a.k.a. bootstrap), locking its memory so the operating system cannot swap it out.

Finally, CrateDB uses memory circuit breakers at the cluster level. A query that would push memory usage above a certain threshold, or that runs while the cluster is already at its memory utilisation limit, will be terminated. There are six kinds of circuit breaker:

  1. Query
  2. Field data
  3. Request
  4. Accounting
  5. Stats
  6. One to rule them all (the parent breaker that caps the total)

Parting thoughts. If your node has an 8GB heap, the default query breaker (no. 1 above, 60% of the heap) means 60% of 8GB => 4.8GB. Your query's intermediate/final results (the live set) would need to fit within 4.8GB.
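
On the 24GB / 3-node cluster from the question, the 25% rule would give each node roughly a 2GB heap, so the default query breaker would sit at about 1.2GB per node instead. You can inspect the limit actually in force, and, where your CrateDB version allows runtime changes, adjust it over SQL. A hedged sketch; the exact settings path and runtime changeability may vary between versions:

  -- Query circuit breaker limit currently applied, cluster-wide.
  SELECT settings['indices']['breaker']['query']['limit'] AS query_breaker_limit
  FROM sys.cluster;

  -- Raise it transiently, assuming your version supports changing it at runtime
  -- (TRANSIENT settings do not survive a full cluster restart).
  SET GLOBAL TRANSIENT indices.breaker.query.limit = '70%';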

A count(DISTINCT ...) query on an absolutely humongous dataset will tend to be shut down by the query breaker, so for such cases we recommend the hyperloglog_distinct aggregation instead, as sketched below.
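
For illustration, take a hypothetical events table with a user_id column (names made up here): the exact variant materialises every distinct value on the heap, while the approximate variant works within a small, fixed memory budget at the cost of a small counting error.

  -- Exact distinct count: builds the full set of distinct values in heap memory,
  -- which is what tends to trip the query breaker on huge datasets.
  SELECT count(DISTINCT user_id) FROM events;

  -- Approximate distinct count (HyperLogLog++): bounded memory, small error.
  SELECT hyperloglog_distinct(user_id) FROM events;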