About Caching

There is one buffer pool cache per XDB Server, and all applications connecting to the XDB Server share this global memory. The overall size of the buffer pool cache has a large impact on database performance. If the cache is large enough to keep the required data in memory, less disk activity occurs and performance improves. If the buffer pool is not large enough, the overall performance of the XDB Server can decrease: the XDB Server becomes I/O-bound because physical disk activity is required to service application requests.

The effectiveness of the buffer pool cache increases with multiple users, with a large amount of repeated access to the same data and index pages, and with access concentrated in one primary location.
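To illustrate why repeated access to the same pages makes the cache more effective, here is a minimal Python sketch that simulates a fixed-capacity, least-recently-used page cache under two hypothetical access patterns. The capacity, the workloads, and the LRU replacement policy are assumptions made for illustration; they are not a description of XDB Server internals.

    from collections import OrderedDict

    def hit_ratio(accesses, capacity):
        """Simulate a fixed-capacity LRU page cache and return its hit ratio."""
        cache = OrderedDict()              # page number -> placeholder payload
        hits = 0
        for page in accesses:
            if page in cache:
                hits += 1
                cache.move_to_end(page)    # mark as most recently used
            else:
                if len(cache) >= capacity:
                    cache.popitem(last=False)   # toss the least recently used page
                cache[page] = None
        return hits / len(accesses)

    # Hypothetical workloads: 10,000 page requests against a 1,000-page cache.
    repeated  = [i % 500 for i in range(10_000)]   # small working set, revisited constantly
    scattered = list(range(10_000))                # every page touched only once

    print(f"repeated access to the same pages: {hit_ratio(repeated, 1_000):.0%} hits")
    print(f"every page touched only once:      {hit_ratio(scattered, 1_000):.0%} hits")

With the repeated workload nearly every request is served from memory, while the scattered workload forces a disk read for every page, which is the situation described above as I/O-bound.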

The cache manager stores pages (4K each) of database object data. (Therefore, the number of pages is the cache size, in kilobytes, divided by four.) These database objects consist of table, index, and dictionary pages for all accessed locations. The dictionary pages originate from the dictionary files that reside in every location. As an application requests data from one of these object types, pages containing the data are transferred from disk into the buffer pool.
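For example, the page count for a given allocation can be worked out as follows; the 32 MB cache size is a hypothetical figure used only for the arithmetic.

    PAGE_SIZE_KB = 4

    def pages_in_cache(cache_size_kb):
        """Number of 4K pages the buffer pool can hold for a cache size given in KB."""
        return cache_size_kb // PAGE_SIZE_KB

    # Hypothetical allocation: a 32 MB cache.
    print(pages_in_cache(32 * 1024))   # 8192 pages for table, index, and dictionary data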

Pages are not written to disk until one of the following occurs:

Consider the following when determining how much RAM to allocate to cache:

Use the Cache Statistics screen of the Monitor Utility to help determine the best amount of cache for your system. When your XDB Server has reached a steady state (its workload is well under way), check the percentage shown for Pages in Use. If Pages in Use is running at or near 100% and the Number of Tosses is high, most of your allocated cache is being used and you might want to try increasing the cache size. If Pages in Use and the Number of Tosses are both running low, you have more cache allocated than you need.
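That rule of thumb can be written as a small decision helper, sketched below in Python. The thresholds, and the reading of a toss as a page discarded from the cache to make room, are assumptions for illustration only; the actual Pages in Use and Number of Tosses values come from the Monitor Utility's Cache Statistics screen.

    def cache_sizing_advice(pages_in_use_pct, tosses,
                            high_use_pct=95, high_tosses=1_000):
        """Suggest a cache adjustment from two Monitor Utility readings.

        pages_in_use_pct -- the 'Pages in Use' percentage at steady state
        tosses           -- the 'Number of Tosses' reading
        The thresholds are illustrative assumptions, not XDB Server defaults.
        """
        if pages_in_use_pct >= high_use_pct and tosses >= high_tosses:
            return "Cache is nearly full and tossing pages: consider increasing the cache size."
        if pages_in_use_pct < high_use_pct and tosses < high_tosses:
            return "Pages in Use and Tosses are both low: more cache is allocated than needed."
        return "Cache size looks adequate: check the statistics again after more work has run."

    # Hypothetical readings:
    print(cache_sizing_advice(pages_in_use_pct=99, tosses=25_000))
    print(cache_sizing_advice(pages_in_use_pct=40, tosses=12))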