Configuring and Starting Transaction Data Collection

To use HSF to collect data about transactions, you need to set a number of configuration options in Enterprise Server Administration. You can also use ESMAC to override these settings dynamically.

Note: To collect information on EXEC SQL statements, you must compile the application with the HSFTRACE directive set.
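For example, when using the command-line COBOL compiler on UNIX, the directive can be passed at compile time. This is an illustrative invocation only: the program name is hypothetical and the exact syntax depends on your product and platform.

```shell
# Compile with the HSFTRACE directive set so that EXEC SQL statements
# are instrumented for HSF (myprog.cbl is a hypothetical program name)
cob -u myprog.cbl -C HSFTRACE
```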

To configure HSF

  1. Access the Enterprise Server Administration screen for your installation.
  2. Check that the server you want to configure is stopped; if it is running, stop it.
  3. In the Status column of the Enterprise Server to monitor, click Details.
  4. Select the Historical Statistics tab.
  5. Set the configuration options:
    Enable collection of Historical Statistics Facility (HSF) records
    This switches on HSF processing.
    Write to disk
    Enables writing of HSF records to comma-separated files. These are called cashsf-a.csv and cashsf-b.csv, and are written to the system directory. Only one file is written to at any one time; this is called the active file.

    Records are written to the active file until you click the Switch button in ESMAC or the active file reaches the maximum size, at which point the alternate file becomes the active file. If the alternate file already exists, it is backed up with the name cashsf.nnn, where nnn is the number of the backup. When you start an enterprise server, cashsf-a.csv is always set as the active file, and if it already exists it is backed up.

    Backup extensions are numbered from .001 up to .999. When a backup with extension .999 exists, the next backup is created as cashsf.001. If cashsf.001 already exists, it is overwritten.

    Maximum HSF file size (KB)
    If you have selected Write to disk, this is the size in kilobytes that the .csv file will reach before Enterprise Server switches to the alternate .csv file. A value of 0 selects the maximum possible size (4 GB).
    Number of records displayed by ES Monitor & Control
    The number of HSF records that Enterprise Server holds in memory. These records can be viewed by clicking the HSF button in ESMAC while the server is running. When this limit is reached, the oldest record is deleted each time a new one is created. Records older than one hour are also deleted.
    The minimum value is 0 (no HSF data displays in ESMAC); the maximum is 4096.
    Create JCL file records
    This switches on the generation of JCL file (JCLF) records for 'mainframe' files - that is, those accessed with FCDCAT and ASSIGN(EXTERNAL). JCLF records are local to a step, so multiple records can be generated for a single dataset name in the same job; one record is created for each step in which the dataset is accessed.
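The backup numbering described above under Write to disk can be sketched as follows. This is an illustrative model only, with a hypothetical helper name; Enterprise Server performs this internally when the alternate file is backed up.

```python
import os

def next_backup_name(directory):
    """Return the next cashsf.nnn backup name for the given directory.

    Backups are numbered .001 through .999; when a backup with
    extension .999 exists, numbering restarts at .001, overwriting
    that file if it already exists.
    """
    existing = sorted(
        int(name.split(".")[-1])
        for name in os.listdir(directory)
        if name.startswith("cashsf.") and name.split(".")[-1].isdigit()
    )
    if not existing:
        return "cashsf.001"
    last = existing[-1]
    nxt = 1 if last >= 999 else last + 1
    return "cashsf.%03d" % nxt
```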

To reconfigure HSF while a server is running

  1. Access the Enterprise Server Administration screen for your installation.
  2. In the Status column of the started Enterprise Server, click Details.
  3. Select the Server > Control tab and click ES Monitor & Control, then click Control.
  4. Set the configuration options. These are the same as those in the Enterprise Server Administration Historical Statistics tab described above apart from the following differences:
    Enable collection of HSF records
    To enable this option you must first either set Write to disk or enter a value in the Number of records to view field.
    Switch
    Clicking this button switches collection to the alternate .csv file before the active file has reached the maximum size.
    Create JCL File records
    This switches on the generation of JCLF records; collection can be switched on and off dynamically.
  5. Click Apply to start collecting the data.

To include custom data in the record

To include more detail in your HSF records, you can insert custom data by adding up to five CUSTOM fields per record.

Configure the number of CUSTOM fields to appear by setting the environment variable ES_HSF_CFG=CUSTOM=x, where x is a value between 1 and 5. To populate a field with your custom text, call the ES_WRITE_CUSTOM_HSF library routine, passing in the user-defined text and a unique ID (0-255). The first call that is processed populates the first CUSTOM field, the second call populates the second CUSTOM field, and so on.
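For example, the environment variable can be set in the shell that starts the enterprise server. This sketch requests three CUSTOM fields; the value 3 is arbitrary and any value from 1 to 5 is valid.

```shell
# Request three CUSTOM fields per HSF record (valid values: 1 to 5)
export ES_HSF_CFG="CUSTOM=3"
```

The calls to ES_WRITE_CUSTOM_HSF are then made from your application code; consult the library routine's reference documentation for the exact calling convention.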