Chapter 3: IMS Performance and Tuning

This chapter describes performance considerations for IMS Option. The early sections introduce general issues; the later sections describe specific changes you can make to improve performance.

3.1 Overview

In network environments, performance depends to a large extent on the underlying hardware. The speed of the server and PCs, the network and protocol, the network adapter cards and the operating system all play a significant role in overall performance. IMS Option uses a network interactively rather than performing batch file transfer. Different networks and servers respond differently to this usage. The total system load on the server also affects the overall performance. For an explanation of all of the variations in performance and capacity issues for networks, please refer to your network documentation.

You need to do some experimentation to determine the best configuration, and hence the optimum performance level, for your system.

3.2 Tuning

Tuning, in this context, means changing hardware or software to improve performance. There is often a trade-off between performance and flexibility. Flexible and dynamic configurations are very important during development. For system tests, simpler configurations might provide better performance. You need to weigh the benefits of any performance gains against ease of administration and other considerations.

The first step is to identify what needs to be tuned. This can be the most difficult step. Most business applications are constrained by I/O performance and not CPU speed. Reducing the number of I/Os or increasing the speed of frequent I/Os often provides the biggest performance gains. On PCs, the CPU manages all I/O requests so faster CPUs contribute to improved I/O performance.

When you debug programs, most of your time is spent stepping through source code, analyzing program logic, viewing data items, and performing other tasks unrelated to IMS Option performance. The time for any one database call is generally a small part of your testing. It is unproductive to spend too much time improving IMS Option performance if doing so does not reduce the time needed to complete a project.

3.3 ACBGEN Performance

IMS Option performs a dynamic ACBGEN whenever a transaction or batch program is scheduled. For CICS applications, the ACBGEN is performed when the PCB schedule call is issued. The time required for an ACBGEN can be observed using a source code debugger. For example, the elapsed time between entering a Trancode and the program source appearing in the debugger is roughly the time for the ACBGEN. When a message switch occurs, it is the time between a GOBACK and the start of the next program. Other work is performed along with the ACBGEN; however, if the time seems excessive, the dynamic ACBGEN is probably the cause.

A simple way to measure the time for the ACBGEN is to create a test program containing just a DISPLAY and a GOBACK, and run it as a batch program. The time between pressing Enter to start the program and the appearance of the display output is roughly the time for the ACBGEN.
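
For example, a minimal version of such a test program might look like the following sketch; the program name ACBTIME and the DISPLAY text are illustrative only:

       IDENTIFICATION DIVISION.
       PROGRAM-ID. ACBTIME.
      *
      * Minimal batch program used only to time the dynamic ACBGEN.
      * The elapsed time from pressing Enter to the appearance of
      * the DISPLAY output is roughly the time for the ACBGEN.
      *
       PROCEDURE DIVISION.
           DISPLAY 'ACBTIME COMPLETE'.
           GOBACK.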

There is one option for improving ACBGEN performance that can reduce repeated ACBGENs to a single I/O. This is to use the DBDGEN Memory Cache setting of the IMS DB System Configuration. This option caches DBDGEN member definitions to eliminate I/Os for frequently accessed DBDs. Gains are achieved when a DBD is referenced by multiple PCBs or by subsequently scheduled PSBs. When timing ACBGENs using batch programs, you cannot measure all of the benefits of this cache. The cache is more valuable in an online environment or when a PSB contains SENSEGs for the same DBD in multiple PCBs.

3.4 Database Performance

Performance on mainframe systems is typically measured in transactions per second. This measure is helpful when tuning a transaction system but is less useful when tuning database performance. A better measure is the number of database calls per second.

You can use the DL/I Call Trace facility to measure performance. The Call Stats tracing option displays the total number of database calls issued by a program. Divide the number of calls by the total elapsed time to obtain the call rate; for example, a run that issues 12,000 calls in 240 seconds is averaging 50 calls per second. The longer the test, the more accurate this number will be. When the test is short, start-up overhead causes the rate to appear lower than it really is. However, as long as you keep the test consistent, you can judge the impact of tuning changes. Batch programs make the best benchmark tests for evaluating database call performance.

Database calls perform differently depending on the structure of the call. For example, calls with SSAs qualified on unique root keys are often resolved with a single I/O operation. Calls that use complex Boolean qualifications or search fields often require more I/O operations. The size of the database does not have a large effect on well-structured calls. However, the performance of other calls can degrade as the database grows, because more segments need to be scanned to locate a segment or to determine the result of the call.
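
To illustrate the difference, the following sketch shows a GU call issued with each type of SSA; the segment and field names (CUSTOMER, CUSTNO, CUSTNAME), the key and field lengths, and the simplified PCB mask are hypothetical and are not taken from any particular database. The first SSA qualifies the call on the unique root key and can usually be satisfied with a single I/O; the second qualifies the call on a non-key search field, which may force many segments to be scanned:

       IDENTIFICATION DIVISION.
       PROGRAM-ID. SSADEMO.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01  GU-FUNC              PIC X(4)  VALUE 'GU  '.
       01  SEGMENT-IO-AREA      PIC X(200).
      * SSA qualified on the unique root key; usually one I/O.
       01  ROOT-KEY-SSA.
           05  FILLER           PIC X(8)  VALUE 'CUSTOMER'.
           05  FILLER           PIC X     VALUE '('.
           05  FILLER           PIC X(8)  VALUE 'CUSTNO  '.
           05  FILLER           PIC X(2)  VALUE ' ='.
           05  SSA-CUSTNO       PIC X(6)  VALUE SPACES.
           05  FILLER           PIC X     VALUE ')'.
      * SSA on a non-key search field; may scan many segments.
       01  SEARCH-FIELD-SSA.
           05  FILLER           PIC X(8)  VALUE 'CUSTOMER'.
           05  FILLER           PIC X     VALUE '('.
           05  FILLER           PIC X(8)  VALUE 'CUSTNAME'.
           05  FILLER           PIC X(2)  VALUE ' ='.
           05  SSA-CUSTNAME     PIC X(20) VALUE SPACES.
           05  FILLER           PIC X     VALUE ')'.
       LINKAGE SECTION.
      * Simplified database PCB mask; only the status code is shown.
       01  DB-PCB.
           05  PCB-DBD-NAME     PIC X(8).
           05  PCB-SEG-LEVEL    PIC X(2).
           05  PCB-STATUS-CODE  PIC X(2).
           05  FILLER           PIC X(100).
       PROCEDURE DIVISION.
           ENTRY 'DLITCBL' USING DB-PCB.
      * Direct retrieval using the unique root key.
           MOVE '123456' TO SSA-CUSTNO.
           CALL 'CBLTDLI' USING GU-FUNC, DB-PCB, SEGMENT-IO-AREA,
                                ROOT-KEY-SSA.
           DISPLAY 'ROOT KEY CALL STATUS: ' PCB-STATUS-CODE.
      * The same call against a search field may need many more I/Os.
           MOVE 'SMITH' TO SSA-CUSTNAME.
           CALL 'CBLTDLI' USING GU-FUNC, DB-PCB, SEGMENT-IO-AREA,
                                SEARCH-FIELD-SSA.
           DISPLAY 'SEARCH FIELD CALL STATUS: ' PCB-STATUS-CODE.
           GOBACK.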

There are advanced diagnostics in IMS Option designed to analyze database call optimization and performance. They are very easy to enable and run but the output requires interpretation by a specialist. If you are unable to resolve a performance problem on your own, please contact your technical support representative for assistance.

3.5 Application Design

Application design can have a significant effect on performance. An application which makes 200 database calls is likely to run more slowly than one which makes five calls. The symptoms of a slower application may become more apparent on a PC platform.

With Fileshare and Remote IMS databases, data contention in your application can impact system throughput. The longer the transaction, the more likely it is that data contention will occur.

3.6 Network Load Balancing

Another significant consideration is choosing which components should be on the local PCs and which should be on a network server. The easiest installation is to have all possible components on the server; however, this often results in the poorest performance.

As the complexity of the configuration increases, so do your administration costs. You need to weigh performance gains against other issues. For best performance, a simple rule is to start with as much of the data as possible on the individual PCs. From there, systematically move components to the server and measure the impact on performance.

The number of users on the system can also affect performance. Testing a particular configuration with one user may give different results than testing with 5 users, which in turn may differ from testing with 10 users, and so on. How well your server and network can handle larger numbers of users is the primary factor.

3.7 Database Catalog Types

The considerations for choosing Database Catalog Types are similar to those described for Network Load Balancing. However, you may have less flexibility in configuring databases than you do with other files. If you need to share and update a database, it must be defined as a Fileshare database on a server or as a Remote IMS database on your IMS/ESA system.

If there is no need to update a shared database, you can achieve large gains by defining the database as a shared, read-only database on your server. There is much less network traffic and system overhead in general for processing read-only shared databases. Remember that no one can update a read-only database. If anyone needs to update the database, it must be defined as a Fileshare or Remote IMS database.

For a single-user, local database, selecting the Exclusive use data type (on the DB Catalog Defaults page of the IMS System Properties dialog box) generally results in the best performance. This type of database cannot be shared between users and does not provide logging or dynamic rollback of updates. Dynamic rollback is often not required during testing.

In some cases a Remote IMS database may perform better than a Fileshare database. The larger the number of I/Os required to resolve a call against a Fileshare database, the more likely it is that Remote IMS will perform better. A Fileshare database requires a network transmission for each I/O operation, while a Remote IMS database requires only one network transmission for each database call; a call that needs ten I/Os therefore needs ten network round trips with Fileshare but only one with Remote IMS. Network transmission and its associated overhead are often relatively slow. Remote IMS provides very quick response with a fast network and IMS/ESA system.

3.8 Multiple Load Library Folders

Multiple Load Library folders provide concatenation of the IMS Option Gen files. In general, performance decreases as the number of Load Library folders increases. More I/Os are required to locate a Gen member in a concatenation list than with a single Gen file location. See the chapter Advanced Customization for details on concatenation.

If you do not list entries in the Database Catalog, there is a performance penalty for each DBD accessed. When the first database call is made, IMS Option searches each of the Load Library folders until it finds a catalog entry; if one is not found, the System Configuration Catalog defaults are used. This affects only the first call to a database after the Application Region is started; it does not affect calls made in subsequent transactions. The degradation should be small with a small number of Load Library folders, but it may become noticeable when you use the maximum number of Load Library folders. You can minimize the searching by creating catalog entries for your databases as early in the concatenation list as possible.

3.9 INT or GNT Executables

There are intermediate (the default) and generated COBOL executable programs. Intermediate programs run more slowly than generated programs, although the difference depends on the program. In general, the more COBOL verbs a program executes, the greater the benefit of generated code. Many applications spend most of their time performing database I/O, so the benefit of optimized generated code is often small.

To change a program from the default intermediate executable to a generated executable:

  1. Select the Files tab

  2. Select the COBOL folder on the left-hand pane of the project window

  3. Right-click on your selected program in the right-hand pane and select Build Settings for program on the popup menu

  4. Check Create optimized code on the General page

  5. Click OK

3.10 Dynamic Database Cache

The Dynamic Database Cache is a DB System Configuration setting that you access on the DB page of the IMS System Properties dialog box. It controls the size of the database buffer pool. It can be decreased to reduce IMS Option memory use or increased to improve performance. The default setting of "3" provides good performance with moderate memory use.

If you are running at a cache level of 2, you may be able to run at a level of 3 or 4. There is a significant performance improvement in raising the cache level from 2 to 3. In most cases, the difference between running at a cache level of 3 and running at 4 is not very large. However, when Trancode message switching occurs and a large number of databases are being accessed, level 4 provides better performance than level 3.


Copyright © 2001 Micro Focus International Limited. All rights reserved.
This document and the proprietary marks and names used herein are protected by international law.