Good performance testing requires good tests and good procedures. This paper discusses experiences creating and using an automated test environment.
The paper also describes work done at the Open Source Development Labs (OSDL) to rewrite and modernize the AIM7 and AIM9 benchmarks. The intent is to keep the benchmarks relevant for modern hardware by making them flexible and extensible.
This paper discusses how to create a testing environment, how to automate it, and how to select and evaluate potential tests. It contrasts low-level (micro) workloads with application-modeling (macro) workloads, using OSDL Scalable Test Platform tests as examples, and distinguishes tests that focus on specific areas from tests that exercise broad areas.