New large-logs-dataset challenge in elastic/logs track #631

@salvatore-campagna

Description

We would like to run an experiment in Rally that uses a considerable amount of data. The idea is to fill the disk of an AWS instance that has 7.5 TB of storage. Indexing such a large amount of data poses at least two challenges, both a result of the way the elastic/logs Rally track is designed:

  1. Data generation needs to be done before running each experiment (note that we could do data generation once and then reuse it multiple times, but that is only possible if multiple experiments can use the same dataset).
  2. Raw-data-to-JSON expansion means we need to generate far more JSON data than the raw data we index into Elasticsearch. With a raw-to-JSON expansion factor of 10, filling storage with 7.5 TB of raw data requires 75 TB of JSON data on the Rally load driver.

For our experiment, described in an internal Jira ticket, we:

  1. Can't reuse the same dataset (we can't do data generation just once), because at least the document @timestamp needs to change depending on how much data we need to index per day (raw_data_volume_per_day).
  2. Can't easily find an AWS instance with enough storage; instances with very large storage are expensive, and one of the largest, is4gen.8xlarge, has 4 x 7.5 TB = 30 TB available. Assuming a 10x raw-to-JSON expansion, we would need 75 TB of JSON data to index 7.5 TB of raw data, so even the instance with the largest storage cannot hold the data we need (see the sketch after this list).
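As a back-of-the-envelope check on these numbers, here is a small sizing sketch in Python; the 10x expansion factor and the is4gen.8xlarge storage figures come from the description above, the rest is just arithmetic:

```python
# Back-of-the-envelope sizing for the load driver, using the figures above.
raw_target_tb = 7.5            # raw data volume we want indexed into Elasticsearch
expansion_factor = 10          # assumed raw-to-JSON expansion factor
instance_storage_tb = 4 * 7.5  # is4gen.8xlarge: 4 x 7.5 TB of local storage

json_needed_tb = raw_target_tb * expansion_factor
print(f"JSON to generate on the load driver: {json_needed_tb:.1f} TB")                       # 75.0 TB
print(f"Largest available instance storage:  {instance_storage_tb:.1f} TB")                  # 30.0 TB
print(f"Shortfall:                           {json_needed_tb - instance_storage_tb:.1f} TB") # 45.0 TB
```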

As a result, benchmarking this scenario is practically impossible, both because of resource constraints and because of the time that data generation and indexing would require.

So the idea is to adopt the following strategy, which we would like to implement as a new challenge in the elastic/logs track:

  1. Index 100 GB of raw data, which means generating about 1 TB of JSON data on the load driver (ideally reusing the raw_data_volume_per_day parameter).
  2. Create a snapshot of the indexed data.
  3. Restore the snapshot multiple times (ideally with the count controlled by a challenge parameter); see the sketch after this list.
  4. Execute the queries from the existing logging-querying challenge to collect query latencies.
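To make steps 2 and 3 concrete, here is a minimal sketch of the snapshot-and-restore loop against the Elasticsearch snapshot REST APIs using the `requests` library. The repository name, snapshot name, index pattern, filesystem location, and restore count are hypothetical placeholders, authentication/TLS are omitted, and data stream handling is glossed over; the actual challenge would express this as Rally operations rather than a standalone script.

```python
import requests

ES = "http://localhost:9200"   # assumed on-prem Elasticsearch endpoint
REPO = "logs_bench_repo"       # hypothetical repository name
SNAPSHOT = "logs-100gb-raw"    # hypothetical snapshot name
RESTORE_COUNT = 75             # 7.5 TB raw target / 100 GB raw per snapshot

# Register a shared filesystem repository (the location must be listed in path.repo).
requests.put(
    f"{ES}/_snapshot/{REPO}",
    json={"type": "fs", "settings": {"location": "/mnt/snapshots"}},
).raise_for_status()

# Step 2: snapshot the 100 GB of indexed data (assumed to live under logs-*).
requests.put(
    f"{ES}/_snapshot/{REPO}/{SNAPSHOT}?wait_for_completion=true",
    json={"indices": "logs-*"},
).raise_for_status()

# Step 3: restore the same snapshot repeatedly under new index names.
for i in range(RESTORE_COUNT):
    requests.post(
        f"{ES}/_snapshot/{REPO}/{SNAPSHOT}/_restore?wait_for_completion=true",
        json={
            "indices": "logs-*",
            "rename_pattern": "(.+)",
            "rename_replacement": f"restored-{i:02d}-$1",
        },
    ).raise_for_status()
```

The restore count (75 here) is what the challenge parameter mentioned in step 3 would control.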

For the use case above, where we need to fill the instance with 7.5 TB of raw data, this means restoring the snapshot 75 times. We expect:

  1. the time required to have the full dataset indexed to be far less than what generating and indexing the full dataset would take with the elastic/logs track and the existing logging-querying challenge.
  2. queries to crunch more documents because of data duplication: that is fine as long as we only compare query latencies across setups using the same track and challenge (e.g. "standard" index mode versus LogsDB).
  3. a lower storage footprint, because the duplicated data compresses better.

An experiment configured as described above mimics an environment where 75 hosts are logging exactly the same data.

Note that the snapshot API is only available in on-prem deployments, which means we need to run the benchmark on-prem.
