Benchmarking database systems has a long and successful history in making industrial database systems comparable, and is also a cornerstone of quantifiable experimental data systems research. Defining a benchmark involves identifying a dataset, a query and update workload, and performance metrics, as well as creating infrastructure to generate and load data, drive queries and updates into the system under test, and record performance metrics.
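The components enumerated above (data loading, a query workload, and metric recording) can be sketched as a minimal benchmark driver. This is an illustrative sketch only, not part of any real benchmark suite: the function name, the toy table, and the choice of SQLite as the system under test are all assumptions introduced here.

```python
import sqlite3
import statistics
import time

def run_benchmark(queries, runs=5):
    """Minimal benchmark driver (hypothetical sketch): generate/load a
    toy dataset, drive queries into the system under test, and record
    per-query latency."""
    con = sqlite3.connect(":memory:")  # SQLite stands in for the system under test
    # Data generation/loading step (stands in for a real data generator).
    con.execute("CREATE TABLE t(id INTEGER PRIMARY KEY, v INTEGER)")
    con.executemany("INSERT INTO t(v) VALUES (?)",
                    [(i % 100,) for i in range(10_000)])
    con.commit()

    results = {}
    for name, sql in queries.items():
        latencies = []
        for _ in range(runs):
            t0 = time.perf_counter()
            con.execute(sql).fetchall()          # drive the query
            latencies.append(time.perf_counter() - t0)
        results[name] = statistics.median(latencies)  # record a metric
    con.close()
    return results

metrics = run_benchmark({"agg": "SELECT v, COUNT(*) FROM t GROUP BY v"})
```

A real benchmark would replace the in-memory table with a scalable data generator, run a standardized workload mix, and report metrics such as throughput alongside latency.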
Creating good benchmarks has been described as an art. One can draw inspiration for dataset and workload design from "representative" use-case queries, typically informed by knowledge from domain experts; but one can also exploit technical insights from database architects into which features, operations, and data distributions should come together in order to pose a particularly challenging task.