Understanding how workload and database characteristics affect the performance of DB2®, MQ, and the replication process is useful for achieving optimal performance. Although existing applications cannot generally be modified, this knowledge is essential for properly tuning MQ and Q Replication and for developing best practices for future application development and database design. It also helps with setting performance objectives that take these considerations into account.
Performance metrics, such as rows per second, are useful but imperfect. How large is a row? It is intuitively, and correctly, obvious that replicating small DB2 rows, for example 100 bytes long, takes fewer resources and is more efficient than replicating DB2 rows that are tens of thousands of bytes long. Larger rows create more work in each component of the replication process: more bytes must be read from the DB2 log, transmitted over the network, and applied to DB2 at the target.
Now, how complex is the table definition? Does DB2 have to maintain several unique indexes each time a row is changed in that table? The same argument applies to transaction size: committing each row change to DB2, as opposed to committing, say, every 500 rows, also means more work in each component of the replication process.
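To make the transaction-size point concrete, the following is a minimal JDBC sketch, not taken from the Redpaper, that contrasts committing every row with committing every 500 rows. The connection URL, credentials, table name ACCOUNTS, and the BATCH_SIZE value are illustrative assumptions; the pattern of grouping many row changes into one unit of work is what reduces per-row logging and replication overhead.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

// Illustrative sketch: batched commits versus per-row commits against DB2.
// Connection details and the ACCOUNTS table are hypothetical placeholders.
public class CommitBatchingSketch {

    static final int BATCH_SIZE = 500; // assumed batch size for illustration

    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:db2://dbhost:50000/SAMPLE", "user", "password")) {
            conn.setAutoCommit(false); // take explicit control of commit scope

            try (PreparedStatement insert = conn.prepareStatement(
                    "INSERT INTO ACCOUNTS (ID, BALANCE) VALUES (?, ?)")) {
                for (int i = 0; i < 10_000; i++) {
                    insert.setInt(1, i);
                    insert.setBigDecimal(2, java.math.BigDecimal.ZERO);
                    insert.addBatch();

                    // Committing every BATCH_SIZE rows produces far fewer,
                    // larger units of work than committing after every row,
                    // so less work per row flows through the log, the network,
                    // and the apply process at the target.
                    if ((i + 1) % BATCH_SIZE == 0) {
                        insert.executeBatch();
                        conn.commit();
                    }
                }
                insert.executeBatch(); // flush any remaining rows
                conn.commit();
            }
        }
    }
}
```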
This Redpaper™ reports results and lessons learned from performance testing at the IBM® laboratories, and it provides configuration and tuning recommendations for DB2, Q Replication, and MQ. The application workload and database characteristics studied include transaction size, table schema complexity, and DB2 data types.
Chapter 1. Introduction
Chapter 2. Environment and scenario
Chapter 3. Impact of variations in workload and table characteristics
Chapter 4. Tuning Q Replication
Chapter 5. Tuning MQ