
Command Messages

This section includes messages that occur when you issue a command using either the command line Client or the Administrative Console in conjunction with the Client Manager Service.

These messages are written to the log file and are displayed on the console, unless otherwise noted. Some messages are written only to the log file, and some are written only to the trace file when the 0x10000 (65,536 in decimal) bit is set in the trace mask. This bit is referred to as TR_VERBOSE in this section.
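Trace-mask bits combine with a bitwise OR. A minimal sketch of the arithmetic (only the 0x10000 bit is named in this section; the 0x1 bit below is a hypothetical second trace bit used for illustration):

```shell
# TR_VERBOSE is the 0x10000 bit (65536 decimal), as described above.
TR_VERBOSE=$((0x10000))
OTHER_BIT=$((0x1))                 # hypothetical additional trace bit
MASK=$((TR_VERBOSE | OTHER_BIT))   # combine bits with bitwise OR
echo "$MASK"                       # 65537
```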

Common Log Messages

These messages, common to almost every command, are listed below instead of being repeated for each command. We also include a few common messages marked as “(Server connection only)”, which are generated at the start of the run for commands that connect to the Databridge Server (DBEngine or Enterprise Server). These commands include define, redefine, process, clone, switchaudit, and tcptest.


All garbage data successfully flushed

(Server connection only) This message, which appears in the trace file when the TR_VERBOSE bit in the trace mask is set, indicates that the Client was able to read all of the garbage TCP data after receiving an incorrect-length response to the test pattern. It always follows the message "Flushing garbage data". This message does not appear under normal circumstances.


ATM_ECHO: Pattern = 'string1', Response = 'string2'

(Server connection only) This message, which appears in the trace file when the TR_VERBOSE bit in the trace mask is set, indicates that the Client and server successfully exchanged the given (matching) test patterns using the ATM_Echo RPC.


Begin processing configuration file "name"

This message, which appears in the trace file when the TR_VERBOSE bit in the trace mask is set, confirms that the Client is reading the specified text-based configuration file.


Calling TLSStartSecurity

(Server connection only) This message, which appears in the trace file when the TR_VERBOSE bit in the trace mask is set, indicates that the Client is initiating the SSL negotiation to establish an encrypted connection to DBServer on the MCP system.


Client exit code: dddd[(nnn)] - exit_code_text [(Client will try to recover from this error)]

This message appears any time a Client command completes. On UNIX, the 8-bit exit code nnn is shown only if it differs from the actual exit code dddd. The daemon uses the actual exit codes.

You need to deal with only 8-bit exit codes in scripts that control the running of the command line Client (dbutility). A few exit codes for a process command cause dbutility to restart itself after a brief delay. When this happens you will see the additional text "Client will try to recover from this error". The Client gives up after 3 retries if the error persists.

These exit codes include DBM_AUD_EOF (9), DBM_BAD_AUDITLOC (11), DBM_WRONG_ABSN (92), DBM_BLOCKTOOLONG (1179), DBM_LOC_MISMATCH (33) and DBM_ABSNMISMATCH (1180).

Additionally, if the Client is terminated because a relational database deadlock was detected, dbutility will also attempt to recover by restarting after a brief delay.

When using the service to manage Client runs, error recovery is handled by the service; the above-mentioned exit codes cause the service to relaunch DBClient to run a process command after a brief delay. If the error persists, the service gives up after 3 retries and disables the data source.
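In a wrapper script, the recoverable codes listed above can be tested with a small helper. This is a sketch, not part of the product; the values 155 and 156 assume the standard UNIX truncation of the actual codes 1179 and 1180 to 8 bits (1179 mod 256 = 155, 1180 mod 256 = 156):

```shell
# Hypothetical helper: returns success (0) when an 8-bit dbutility exit
# code is one the Client treats as recoverable for a process command.
is_recoverable() {
    case "$1" in
        9|11|33|92|155|156) return 0 ;;   # AUD_EOF, BAD_AUDITLOC, LOC_MISMATCH,
                                          # WRONG_ABSN, BLOCKTOOLONG, ABSNMISMATCH
        *) return 1 ;;
    esac
}
```

A wrapper could then run dbutility, capture $?, and if is_recoverable succeeds, sleep and retry up to 3 times, mirroring the Client's own behavior.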


Clustered System (nnn nodes)

(Windows only) This message, which is only written to the log file when running on a clustered system, displays the number of nodes in the cluster (typically 2).


"cmd_name" command output logged to log file "file_spec"

During the execution of the auxiliary Client program DBClntCfgServer, logging is switched to the main log file when running Define/Redefine, Generate, Reorganize, or Refresh commands. This ensures that the main log file contains everything needed to see the conditions that led up to a problem.

Before executing the switch, this message is written to the DBClntCfgServer log file, followed by a line of dashes. After the switch is executed, a line of dashes is written to the main log file, followed by the line "cmd_name" command issued from console, followed by the date in the form Current date is: Day Mon dd, yyyy.

When the command completes, a line of dashes is written to the main log file to close off the command, and logging is switched back to the alternate log file, where a line of dashes is written to indicate the completion of the command for which the log file was switched.

The Administrative Console's Customize command uses the main log file.


Configuration information read from binary file "name"

This message, which appears in the trace file when the TR_VERBOSE bit in the trace mask is set, confirms that the Client has successfully read the specified binary configuration file.


Connected to host, port nnnn

(Server connection only) This message, which appears in the trace file when the TR_VERBOSE bit in the trace mask is set, indicates that the Client has successfully established a host connection, where host is the host name or IP address and nnnn is the TCP/IP port number.


Connecting to host, port nnnn

(Server connection only) Indicates that the Client is establishing a connection to the host, where host is the host name or IP address and nnnn is the TCP/IP port number.


Current date is: Day Mon dd, yyyy

This line appears, following a line of dashes, in the log file at the start of every Client run to identify the date on which the Client run was made. This line is also logged when the date changes after midnight, as all log messages only have a time prefix.


Databridge Client, Version vvv (nn-bit) {ODBC | OCI}/{SQLServer | Oracle} [OS]

The version string vvv (for example, 7.0.0.000) identifies the Databridge Client that you are running. The value of nn (64) identifies the CPU type. ODBC and OCI are the APIs used to access the relational database (SQL Server or Oracle). For UNIX platforms, the Oracle version for which the Client is built (such as 19c) and the operating system (for instance, Linux) are also listed.


Databridge Configuration Server, Version vvv (nn-bit) {ODBC | OCI}/{SQLServer | Oracle} [OS]

The version string vvv (for example, 7.0.0.000) identifies the secondary Databridge Client (DBClntCfgServer) that you are running. The value of nn (64) identifies the CPU type. (We support only the 64-bit Client since Databridge version 6.6.) ODBC and OCI are the APIs used to access the relational database (SQL Server or Oracle). For UNIX platforms, the Oracle version for which the Client is built (such as 19c) and the operating system (for instance, Linux) are also listed.

This program provides database access to the Administrative Console and supports the Administrative Console's Customize command. Unless you use the Administrative Console's Customize command, this program only runs for brief periods of time and shuts down automatically after a minute of inactivity.


DBEngine Version: version_string

(Server connection only) This message, which is only written to the log file, shows the version of the Databridge Engine being used on the mainframe. It is only present in commands that connect to the Databridge Server (or Enterprise Server).


DBEnterprise Version: version_string

(Server connection only) This message, which is only written to the log file, shows the version of the Databridge Enterprise Server being used. It is only present in commands that connect to the Databridge Enterprise Server.


DBServer Task Number: nnnn

(Server connection only) This message, which is only written to the log file, shows the DBServer task number on the MCP system.


DBServer Version: version_string

(Server connection only) This message, which is only written to the log file, shows the version of DBServer being used. It is present in commands that connect to the Databridge Server (or Enterprise Server).


DBSupport Title: file_title

(Server connection only) This message, which is only written to the log file, shows the file title of the Databridge Support Library being used on the mainframe (for example “(DB70)OBJECT/ DATABRIDGE/SUPPORT/DEMODB ON DPACK”). It is only present in commands that connect to the Databridge Server (or Enterprise Server).


DBSupport Version: version_string

(Server connection only) This message, which is only written to the log file, shows the version of the Databridge Support Library being used on the mainframe. It is only present in commands that connect to the Databridge Server (or Enterprise Server).


Disconnecting and restarting transport initialization

(Server connection only) This message, which only appears in the trace file when the TR_VERBOSE bit in the trace mask is set, indicates that the response to the ATM_Echo remote procedure call (RPC) timed out. The Client recovers from this error by disconnecting from the server and trying to restart the connection process (up to three times).


DMSIISupport Title: file_title

(Server connection only) This message, which is only written to the log file, shows the file title of the Databridge DMSII Support Library being used on the mainframe (for example “(DB70)OBJECT/ DATABRIDGE/DMSIISUPPORT/DEMODB ON DPACK”). It is only present in commands that connect to the Databridge Server (or Enterprise Server).


DMSIISupport Version: version_string

(Server connection only) This message, which is only written to the log file, shows the version of the Databridge DMSII Support Library being used on the mainframe. It is only present in commands that connect to the Databridge Server (or Enterprise Server).


End processing configuration file "name"

This message, which only appears in the trace file when the TR_VERBOSE bit in the trace mask is set, confirms that the Client has processed the specified text-based configuration file.


Filter: filter_name

(Server connection only) This message, which is only written to the log file, shows the name of the FILTER being used in the Support Library on the mainframe. It is only present in commands that connect to the Databridge Server (or Enterprise Server).


Flushing garbage data

(Server connection only) This message only appears in the trace file when the TR_VERBOSE bit in the trace mask is set. To verify that the transport layer works correctly, the Client exchanges a test pattern with DBServer. If the Client receives a response that has an incorrect pattern or length, the Client attempts to recover from this situation and returns this message. You should not see this message under normal circumstances.


Negotiated protocol level = number, Host version = major_vers.minor_vers

(Server connection only) This message, which is only written to the log file, shows the protocol level that the Client and the server use, which is the lesser of the Client and server protocol levels (the 7.0 release uses a protocol level of 33). The second part of the message contains the major and minor version numbers of the server (e.g., 7.0).


ODBC driver: "name", version = vv

This message, which only applies to the SQL Server Client, identifies the name and the version of the ODBC driver used. We recommend using ODBC driver 17.4 or newer.


Oracle database name: name

This line is written to the log file only when the database parameter in the Client configuration file has no assigned value and the command line -D option has not been used to specify the database name. It indicates that the default database name is used. This is not typically done in a production environment, but it could be done when using Oracle Express to evaluate the product. The preferred way of doing things is to create an entry for the database in the file “tnsnames.ora” in the “network/admin” directory under the Oracle home.
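For reference, a tnsnames.ora entry has the following shape (the alias, host, and service names here are illustrative, not taken from this guide):

```
PRODDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = proddb.example.com)
    )
  )
```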


Oracle version: major_vers.minor_vers

This line, which is only written to the log file, displays the major and minor version of the Oracle database you are using (such as 19.0). This message is present for all commands that connect to the Oracle database.


OS version: version_string

This line, which is only written to the log file and is only applicable to UNIX Clients, displays the Operating System version string (such as 7.1 on an AIX platform). This message is present for all commands that connect to the relational database. This information helps support determine whether you are using a supported platform. For a complete list of supported platforms and system requirements, see the Databridge Installation Guide.


OS: Windows version

This line, which is only written to the log file, displays the name of the Windows operating system. For example, "OS: Windows Server 2019 Standard".


Process ID pid

The Client writes its process ID (pid) to the log file when it starts up, as this information can be useful when you have multiple Clients running simultaneously and you need to find a particular process's PID.


Retrying ATMEcho RPC after flushing input

(Server connection only) This message, which only appears in the trace file when the TR_VERBOSE bit in the trace mask is set, indicates that the Client received a response of the wrong length from DBServer and is retrying the ATM_Echo RPC.


Server communications initialization complete

(Server connection only) This message, which only appears in the trace file when the TR_VERBOSE bit in the trace mask is set, indicates that DBServer or Enterprise Server has successfully executed the DBINITIALIZE RPC.


SQL Server version: major_vers.minor_vers

This line, which is only written to the log file, displays the major and minor versions of the Microsoft SQL Server database you are using. Note that SQL Server 2012 is version 11.0, SQL Server 2014 is version 12.0, SQL Server 2016 is version 13.0, SQL Server 2017 is version 14.0, and SQL Server 2019 is version 15.0. This message is present for all commands that connect to the SQL Server database.


SSL negotiation completed successfully

(Server connection only) This message, which appears in the trace file when the TR_VERBOSE bit in the trace mask is set, indicates that the SSL negotiation for the encrypted connection to DBServer on the MCP was successful. At this point the Client proceeds to initialize the connection to the server.


System: host_system_desc

(Server connection only) This message, which is written only to the log file, shows information about the type of Unisys mainframe being used (for example, System: CS790 :7003 SSR 56.189.8111). It is only present in commands that connect to the Databridge Server (or DBEnterprise Server).


Configure Command Messages

The following messages appear in response to the command line Client dbutility's configure command.

"locks" directory created

This message appears when the configure command creates the locks subdirectory in the global working directory. The global working directory and the locks directory are created only if they do not already exist.


"name" sub-directory created

This message appears when subdirectories for the Client's working directory are created. These subdirectories are config, logs, dbscripts, discards, and scripts.


Beginning Databridge Client configuration


Client configuration file "dbridge.cfg" created in "config" sub-directory

This message appears when the Client creates the binary configuration file dbridge.cfg in the config subdirectory. If the directory already exists and contains an old configuration file, this file will be used instead.


Creating control table name

This message indicates that the specified control table and its associated index are being created in the relational database.


Databridge Client configuration completed

This message indicates that the dbutility configure command is complete and that all of the empty control tables were successfully created in the relational database.


Dropping control table name

This message indicates that the specified control table is being dropped from the relational database. When a Client control table is found to exist, this command will drop it before creating it again. This message appears when you have previously executed a dbutility configure command and are executing it again using the -u option.


Generating control tables...


Working directory "path" created

This message appears when the command creates the global working directory for the Client (that is, the working directory for the service). This directory and the locks subdirectory are created only if they do not already exist.


Define Command Messages

The following messages appear in response to the Databridge Client define command.


"locks" directory created

This message appears when the define command creates the locks subdirectory in the global working directory. The global working directory and the locks directory are created only if they do not already exist.


"name" sub-directory created

This message appears for each subdirectory of the Client’s working directory that is created. These subdirectories are config, logs, dbscripts, discards, and scripts.


Beginning New DataSource definition

The data source with the host name and port number specified on the command line is being defined. The dbutility define command will get an error if the specified data source is already present in the control tables. To resolve this error, use the -u option to force the deletion of the old entries in the Client control tables.


Beginning Databridge Client configuration


Client configuration file "dbridge.cfg" created in "config" sub-directory

This message appears when the Databridge Client creates the config subdirectory and a binary configuration file, "dbridge.cfg", in that directory. If the config directory already exists, the existing configuration file is used.


Creating file "source_NullRec.dat"

The Client is creating the file source_NullRec.dat to hold the NULL VALUES for data set records from the specified data source. The source entry is the data source specified in the data_source column of the corresponding DATASOURCES Client control table. The Client uses these records to determine if items are NULL.


DataSet name is a global data set, active column set to 0

The Client automatically disables cloning for the indicated data set. Normally, the data in the global data set is not cloned because it is not very useful. If you need to clone this data set, simply set the active column to 1 in the corresponding row of the DATASETS control table using a user script or the Administrative Console's Customize command. If you are not using the Administrative Console's Customize command, you will need to rerun the define command with the -u option (or run a redefine command with the -R option) to make the change take effect.


DataSet name is a restart data set, active column set to 0

The Client automatically disables cloning for the DMSII restart data set because it does not contain any information that is worth replicating.


DataSource definition completed

This message indicates that the define command is complete. This means that a row with the data source name, host name, and port number has been added to the DATASOURCES control table. In addition, a row for each data set has been created in the DATASETS control table. The DMSII layout information has been downloaded to the DMS_ITEMS control table; and the corresponding relational database table layout information has been created in the DATATABLES and DATAITEMS control tables.


DB_Info: update_level = ddd, update_ts = timestamp, highest_strnum = ddd
DB_Info: database_ts = timestamp, database_name = name
[DB_Info: OptionFlags=options]

These messages are written only to the log file. The first line provides the database update level, the database update timestamp, and the highest structure number in the database for the data source being accessed. The second line provides the database timestamp and the database name for the data source being accessed.

A third line of comma-separated options appears when any of these options are true:

  • IndependentTrans (INDEPENDENTTRANS is set for the DMSII database)
  • AccessActive (READ ACTIVE AUDIT is set in the Engine Control File)
  • RDB (the DMSII database is an RDB secondary database)
  • FileXtract (the data source is a FileXtract file)
  • LINKS (LINKS is set to TRUE in the Engine Control File)

Defining table entries for DataSet name[/rectype] (struct_number)

The Client control table entries for the specified data set are being defined. /rectype appears only for variable-format data set records that have a non-zero record type (they contain a variable part). For more information, see Variable-Format Data Sets in the Databridge Client Administrator's Guide. struct_number is the DMSII structure number of the data set. The Databridge Engine processes data sets in structure number order; the structure number is an indication of how close the define command is to completing.


Inserting data into control tables...

This message indicates that data is being inserted into the control tables.


Launching makefilter utility to create the binary filter file "dbfilter.cfg" in the config directory

This message indicates that the Client is launching the makefilter utility to compile the filter "dbfilter.txt" that was found in the config subdirectory. The makefilter output is written to a separate log file named "prefix_flt_yyyymmdd.log" (where prefix is the prefix used for the Client log file and defaults to "db").


Loading control tables for datasource

This message appears at the beginning of the second phase of the define command, when the Client reloads the control tables in order to pick up updates that result from the running of user scripts that alter the DMSII layout (script.user_layout.primary_tablename).


Mapping table entries for DataSet name[/rectype] (struct_number)

This message, which normally appears only in the trace file when the TR_VERBOSE bit in the trace mask is set, indicates that the DMSII layout for the data set in question is being mapped to the corresponding relational database tables. It only appears in the log file if the status_bits column in the corresponding DATASETS table entry has the bit DS_Needs_Mapping (4) set.


Rows updated = ccc

This message, which only appears in the log file when the -v option is enabled, shows the row counts for all SQL statements that are executed when processing user scripts. A value of 0 is usually an indication that the user script is in error. At this point, it's a good idea to rerun the command with user script tracing and log output tracing enabled (that is, -t 2049). Log output tracing creates a trace file with both row counts and the SQL statement, which can be hard to match up otherwise.
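The value 2049 in the -t example above is simply the sum of two trace bits, 0x800 (2048) and 0x1. Which bit enables which trace category is not spelled out here, so treat the names in this sketch as assumptions:

```shell
# 2049 = 0x800 | 0x1 (assumed: log output tracing plus user script tracing)
MASK=$((0x800 | 0x1))
echo "$MASK"    # 2049
```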


Running script script_file_spec

This message is a confirmation that the Client is running the specified script. For each SQL statement in the script, when the -v option is enabled, the Client writes the number of rows that have been changed to the log file. See the preceding message, "Rows updated = ccc", for more information.


Working directory "path" created

This message appears when the define command needs to create the global working directory for the Client (that is, the working directory for the service). This directory and the locks subdirectory are created only if they do not already exist.


Redefine Command Messages

The following messages appear in response to the Databridge Client redefine command. These messages can also appear when running the Customize command from the Administrative Console.


Beginning updates of DataSource definitions

This message indicates that the data source definitions are being recreated. Old Client control table entries that correspond to data sets which need to be redefined (for the specified data source) are deleted first. However, any customization that was done to them using the Administrative Console's Configure command will be preserved. In the case of dbutility this is normally achieved by using user scripts, unless the configuration parameter use_dbconfig is set to True.


Command returned a status of nnn

This message, which only appears when using the Administrative Console's Configure command, shows the result of the Compare Layouts operation, which determines whether the relational database layout has changed as a result of the data source being redefined. Possible values for nnn:

  • 0 indicates that the layouts are unchanged
  • 2032 indicates that a generate command is required (most likely because some data sets have tables that need to be cloned or re-cloned)
  • 2033 indicates that a reorganize command is required, as the changes can be made without having to re-clone anything.
  • Any other exit status indicates that an error has occurred while comparing the layouts. Check earlier error messages.

The same message is also used to indicate that a remote procedure call by the Administrative Console's Configure command returned a non-zero status, which means that the RPC encountered an error.


Command returned a status of nnn (text)

This message, which only appears when using the Administrative Console's Configure command, shows the result of a Define/Redefine command. The message text gives a brief explanation of the error and can be one of the following:

  • No Further Action Required - processing can continue. In the case of an existing data source this indicates that the redefine command does not require that a reorganize or generate command be issued.

  • You need to run a Reorganize command. In the case of an existing data source this indicates that the redefine command has found differences between the old and new database layouts and that you need to run a Reorganize command to alter the tables.

  • You need to run a Generate command. This indicates that the scripts in the dbscripts directory need to be generated before you can run a process command.


Comparing old and new relational database layouts

This message, which only appears when using the Administrative Console's Customize command, indicates that old and new relational database layouts are being compared. The results of this operation determine if the data source needs any special attention prior to resuming normal operations, such as running a generate or reorganize command.


Creating file "datasource_NullRec.dat"

This message indicates that the Client is creating the file datasource_NullRec.dat to hold the NULL VALUES for data set records from the specified data source. The datasource entry is the source specified in the data_source column of the corresponding DATASOURCES control table. The Client uses these records to determine if items are NULL.

A redefine command with the -R option (“Redefine All Data Sets” when using the Administrative Console) recreates the Null Record file. If you accidentally delete this file, this is how you go about recreating it.


DataSet name[/rectype] did not previously exist -- you will need to run a generate command

This message indicates that a new data set was added to the DMSII DASDL since the last time a define command or a redefine command was executed and is a reminder that you need to execute a generate command before cloning the new data set. This message appears only when the parameter suppress_new_datasets is set to False.


DataSet name[/rectype] did not previously exist, defined with the active column set to 0

This message indicates that a new data set was added to the DMSII DASDL since the last time a define command or a redefine command was executed. This message appears only if the parameter suppress_new_datasets is set to True. In this case, the active column for the new data set is set to 0 in the DATASETS table.


DataSet name[/rectype] no longer exists

The specified data set was deleted from the DMSII DASDL since the last time a define or redefine command was executed.


DataSet name[/rectype] unaffected by reorganization ds_mode is n[, however a generate command is required as status_bits=ssss]

The layout of the relational database table mapped from the specified data set was not affected by the database reorganization. Therefore, the specified data set does not require re-cloning. In the unusual case where the command requires that you execute a generate command, the full message is shown.


DataSet name[/rectype] will be re-defined

This message, which appears only in the trace file when the TR_VERBOSE bit in the trace mask is set, indicates that the specified data set will be redefined.


DB_Info: update_level = ddd, update_ts = timestamp, highest_strnum = ddd
DB_Info: database_ts = timestamp, database_name = name
[DB_Info: OptionFlags=options]

These messages are written only to the log file. The first line shows the database update level, the database update timestamp, and the highest structure number in the database for the data source that is being accessed. The second line shows the database timestamp and the database name for the data source that is being accessed. The third line lists comma-separated options and only appears when any of the following is true:

  • IndependentTrans (INDEPENDENTTRANS is set for the DMSII database)
  • AccessActive (READ ACTIVE AUDIT is set in the Engine Control File)
  • DBPlus (DBPlus is being used by the Engine)
  • RDB (the DMSII database is an RDB secondary database)
  • FileXtract (the data source is a FileXtract file)
  • LINKS (LINKS is set to TRUE in the Engine Control File)

Defining table entries for DataSet name[/rectype] (struct_number)

The Client control table entries for the specified data set are being defined. /rectype appears only for variable-format data set records that have a nonzero record type. (They contain a variable part.) struct_number is the DMSII structure number of the data set. Because the Databridge Engine processes data sets in structure number order, the structure number is an indication of how close the redefine command is to completing.


Format level change mmm -> nnn detected for DataSet name[/rectype]

This message, which is printed at the start of a redefine command, indicates that the format level for the given data set has changed. Every time a data set is changed in the DASDL, the format level for the data set is set to the database update level when the DASDL is recompiled. A format level change is an indication that the data set was affected by a DMSII structural reorganization.


Item count change mmm -> nnn detected for DataSet name[/rectype]

This message, which is printed at the start of the redefine command, indicates that the number of items in the data set has changed. This could be the result of a filler substitution reorganization or a change in the column filtering specified in GenFormat.


Loading control tables for datasource

The redefine command displays this message every time it loads the Client control tables; the command always starts by loading them. The control tables are also reloaded after running user scripts, at the end of the two main phases of the command, in order to pick up updates that result from running these scripts.


Mapping table entries for DataSet name[/rectype] (struct_number)

This message, which normally only appears in the trace file when the TR_VERBOSE bit in the trace mask is set, indicates that the DMSII layout for the data set in question is being mapped to the corresponding relational database tables. It only appears in the log file if the status_bits column in the corresponding DATASETS table entry has the bit DS_Needs_Mapping (4) set.


No Further Action Required - Processing can continue

This message, which only applies to the command line Client dbutility, is a confirmation that the redefine command did not detect any layout changes. You can resume processing audit files.


Redefine DataSet name[/rectype]: index name for table 'name' was changed from 'name1' to 'name2'

This message indicates that the name of the index for the given table has changed. The Client simply renames the index using a reorganize command to avoid problems at a later time.


Redefine DataSet name[/rectype]: index type for table 'name' was changed from index_type1 to index_type2

This message indicates that the index type for the given table has changed (for example, from a unique index to a primary key). The Client drops the old index and creates the new index using a reorganize command.


Redefine DataSet name[/rectype]: Table 'name' no longer being used as a result of the DMSII reorg - Run the script 'script.drop.name' to drop the table and its stored procedures

As a result of the DMSII reorganization of the specified data set, the specified table was removed from the DATATABLES and DATAITEMS entries mapped from this data set. The table is not dropped. To drop the table and the stored procedures associated with it, execute the script noted in the message. If using the runscript command, specify the -n option and include the directory "dbscripts\" ("dbscripts/" for UNIX) in the file name; otherwise the command looks for the script in the scripts directory.


Redefine DataSet name[/rectype]: Table 'name' was added [--active column set to 0 in DATATABLES]

As a result of the DMSII reorganization of the specified data set, the specified table was added to the DATATABLES and DATAITEMS entries mapped from this data set. If the parameter suppress_new_columns is set to True, the Databridge Client sets the active column of this entry (in DATATABLES) to 0.


Redefine DataSet name[/rectype]: UseStoredProcs option bit was changed from aa to bb

This message indicates that the use of stored procedures for the data set in question has changed. This happens when you change the use_stored_procs configuration parameter or use a user script that changes the corresponding bit in the ds_options column of the DATASETS table. The Client will ask you to run a reorganize command, which creates a new set of scripts and refreshes the stored procedures for the data set. This means that when you go from using stored procedures to not using them, they are dropped; conversely, when you go from not using stored procedures to using them, they are created.


Redefine Table 'name': Column 'name' (Item# number), changed from dec_type(p1[,s1]) to dec_type(p2[,s2])

As a result of the DMSII reorganization, the data type of the specified column changed.

  • dec_type indicates the value of the sql_type column of the corresponding DATAITEMS table entry
  • p1 and p2 indicate the values of the sql_length column
  • s1 and s2 indicate the values of the sql_scale column, if applicable for the specified SQL type

Redefine Table 'name': Column 'name' (Item# number) dms_subtype value changed from mmm to nnn

This indicates that the dms_subtype for the given column changed. If the column is a date, this means that the format of the DMSII data is now different, which usually means you will need to re-clone the data set.


Redefine Table 'name': Column 'name' (Item# number) item_key value changed from n(key#=k1) to m(key#=k2)

This indicates that the order of the columns in the index has changed. The numbers in parentheses are the actual positions of the columns within the index. As long as they are the same, the Client ignores any changes in the item_key values.
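That comparison can be sketched as follows (an illustrative model; the function name and data shapes are not the Client's actual code):

```python
def index_order_changed(old_key_positions, new_key_positions):
    """Compare the actual positions of the key columns within the index
    (the key# values shown in the message). Renumbering of item_key
    alone is ignored as long as these positions match."""
    return old_key_positions != new_key_positions
```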


Redefine Table 'name': Column 'name' (Item# number) masking_info value changed from 0xhhhh to 0xhhhh

This message, which applies only to the SQL Server Client, indicates that as a result of a DMSII reorganization (or a change in customization for the column) the state of the masking for the column in question was changed. This is handled by the ensuing reorganize command, which alters the table to change the masking information for the column.


Redefine Table 'name': Column 'name' (Item# number) NULL attribute changed from old_val to new_val

As a result of a DMSII reorganization, the NULL attribute of the item changed. In addition, the message shows both the old and new values. The Client alters the column using the reorganize command, which runs the reorg scripts created by the redefine command.


Redefine Table 'name': Column 'name' (Item# number) SQL length value changed from old_length to new_length

As a result of a DMSII reorganization, the SQL length for the specified column in the specified table changed. In addition, the message shows both the old and new values. The Client alters the column using the reorganize command, which runs the reorg scripts created by the redefine command.


Redefine Table 'name': Column 'name' (Item# number) SQL type value changed from old_type to new_type

As a result of a DMSII reorganization, the SQL type for the specified column in the specified table changed. In addition, the message shows both the old and new values. The Client alters the column using the reorganize command, which runs the reorg scripts created by the redefine command provided that the type of transformation is allowed. For example, if a column with a data type of int changes to date, the data set must be re-cloned; the relational database's ALTER statement does not support this type of transformation.


Redefine Table 'name': Column 'name' (Item# number) was added [--active column set to 0 in DATAITEMS]

As a result of a DMSII reorganization, the specified column was added to the specified table. If the parameter suppress_new_columns is set to True, the Client sets the active column of this entry (in DATAITEMS) to 0. If the parameter suppress_new_columns is set to False, the Client adds the column to the table using the reorganize command, which runs a script that executes the actual ALTER statement.


Redefine Table 'name': Column 'name' (Item# number) was deleted

As a result of a DMSII reorganization, the specified column in the specified table was removed. The Client drops the column from the table using the reorganize command, which runs a script that executes the actual ALTER statement. If you want to keep the column, modify the script so it does not drop the column. Make sure that the column has the NULL attribute or a DEFAULT defined, as the Client will not provide a value for this column; otherwise, updates will fail.


Redefine Table 'name': Column 'name' (Item# number) will be handled as a nullable key

This message, which is only applicable to MISER databases, indicates that a MISER date, which is a key, has been encountered when the parameter use_nullable_dates is set to True. The Client will generate special code for the update stored procedure to handle the case when the value of the key item in question is NULL.

Note

This code is limited to one nullable date in the index of the table. If you have more, you must use a different index.


Redefine Table 'name': Column 'name' moved from position n1 to n2

This message indicates that the column in question is in a different position in the table as a result of a DMSII reorganization or layout changes made using user scripts or the Administrative Console's Customize command. The order of the columns in the table matters when you are using stored procedures, as the parameters will be different. If you are not using stored procedures, the order of the columns does not matter, as the SQL statements are created dynamically.


Restoring user changes to DATATABLES and DATAITEMS tables

This message only appears when the redefine command runs in Administrative Console's Customize command compatible mode (that is, the configuration parameter use_dbconfig is set to True). If the data source was not created using the Administrative Console's Customize command, you must run the dbscriptfixup program, which will automatically enable the parameter use_dbconfig if it runs successfully.

When run in this mode, the redefine command does not use user scripts. Instead, it restores the changes from the old copy of the control tables. This message, which only appears in the trace file when the TR_VERBOSE bit in the trace mask is set, indicates that the customization changes for the relational database tables and their columns are being restored from the old copy of the control tables.


Restoring user changes to DMS_ITEMS table

This message, which only appears in the trace file when the TR_VERBOSE bit in the trace mask is set, indicates that the customization changes for the DMS_ITEMS entries are being restored from the old copy of the control tables. See the preceding message for details.


Rows updated = ccc

This message, which only appears if the -v option is enabled, shows the row counts for all SQL statements that are executed when processing user scripts. A value of 0 usually indicates that the user script is in error. When this occurs, it's a good idea to rerun the command with user script tracing and log output tracing enabled (that is, -t 2049). This creates a trace that provides both the SQL statement and the row counts, which are otherwise hard to match up.
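Trace mask values are bit flags that are OR'd together, so -t 2049 is simply two bits combined. A quick check of the arithmetic (which of the two bits controls user script tracing versus log output tracing is not stated in this section, so that assignment is an assumption):

```python
# -t 2049 enables two trace bits: 0x800 and 0x001
# (user script tracing and log output tracing, per the message text).
trace_mask = 0x800 | 0x001

# The verbose trace bit referenced throughout this section.
TR_VERBOSE = 0x10000  # 65,536 in decimal
```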


Running script "script_file_spec"

This message confirms that the Client is running the specified script. For each SQL statement in the script, if the -v option is enabled, the Client writes the number of rows that have been changed to the log file. For details, see the preceding message.


Update of definitions of DataSets completed

This message indicates that the redefine command is complete. As a result, the DMSII layout and the corresponding relational database table layout information have been updated in the Client control tables. If the layout has changed, the scripts that the reorganize command executes are written to the working directory for the data source. Additionally, the original control tables are saved in the unload file named source_reorg_nnn.cct, where source is the data source name and nnn is the original update level of the DMSII database. If the command fails because of a bad user script, you can reload the control tables from this file and re-execute the command after fixing the bad user script.


You must also run a generate command for DataSet name[/rectype] which is to be recloned

This message is a reminder that the table layout for the data set in question has changed and a generate command must be run before the data set can be re-cloned. Failure to do so will simply result in the process command failing. This message sometimes follows the message “WARNING: AA values for DataSet name[/rectype] are no longer valid, ds_mode set to 0 to force a re-clone”.


Generate Command Messages


Beginning script generation

This message indicates that the scripts for creating, populating, and updating the Databridge data tables in the relational database are being created in the dbscripts subdirectory.


Creating scripts for 'tabname'

This message indicates that scripts are being created for each table that will be stored in the relational database.


Generate command found nothing to do -- all scripts are current

This message indicates that the generate command did not find any data sets whose DS_Needs_Generating (2) bits in the status_bits column of the DATASETS control table were set. The program suppresses unnecessary script generation to avoid overwriting any changes users might have made to these scripts. To disable this safeguard, use the -u option for the generate command; or from the Administrative Console, select the Generate All Scripts command.


Loading control tables for datasource

This message indicates that the DMSII and relational database layout information is being loaded into memory from the Client control tables.


Script generation completed

This message indicates that the generate command is complete and that the necessary script files have been written to the dbscripts directory.

Reorganize Command Messages

The following messages appear in response to the Databridge Client reorganize command.

Clearing DataSet name[/rectype] records

This message indicates that the cleanup scripts are being run for all the tables of the specified multi-source data set.


Creating history table 'name'

This message indicates that a history table is being created because the configuration parameter enable_dynamic_hist is set to True.


Creating index 'name' for history table 'name'


Creating index 'name' for table 'name'

This message indicates that the reorganize command is creating the index for a table whose reorganization requires the index to be dropped and recreated after the table is altered.


Creating scripts for 'tabname'

This message is printed as a result of the reorganize command executing a generate command before it does anything else.


Dataset name[/rectype] successfully reorganized

This message is printed after a data set is successfully reorganized. It is meant to show the progress of a command that might take a very long time to complete.


DataSource name successfully reorganized

This message is printed at the end of a reorganize command to indicate that the command completed successfully. It is mainly meant to provide some feedback when the command is executed from the Administrative Console.


Dropping index for table 'name'

This message indicates that the reorganize command is dropping the index for a table whose reorganization requires the index to be dropped and recreated.


Loading control tables for datasource

This message indicates that the DATASETS table and relational database layout information is being loaded into memory from the Client control tables.


Multisource DataSet name[/rectype] ds_mode is 35
The multi-sourced table has been dropped as there are no records left
Set the ds_mode to 0 in the DATASOURCES table for all of the sources to re-clone

These messages indicate that a table for a multi-source data set was dropped as a result of a reorganization that forced a re-clone and that the other data sources have removed all their records from the table, which can now safely be re-cloned. When the reorganization of all data sources is complete, set the ds_mode column to 0 in the DATASOURCES table to re-clone it. See the next message for more details.


Multisource DataSet name[/rectype] ds_mode is 35
Set ds_mode to 0 when all sources have been reorganized

These messages indicate that a multi-source data set was dropped and that the remaining data sources must be reorganized before further action can occur. When the reorganization of those data sources is complete, set the ds_mode column to 0 in the DATASOURCES table to re-clone it. When re-cloning a data set in a multi-source environment, the first data source processed must use the -k option on the command line to make the Client drop the table instead of running the cleanup script to remove its records from the table.

Multi-sourced data sources get their inputs from two (or more) structurally identical DMSII databases (same DASDL) that reside on different systems (for example, two different branches of the company, each with their own database). The data is stored in the same tables in the relational database, using the source_id columns in all the tables to keep track of where the records originated.

When reorganizing a multi-sourced data source, the changes must be completed in both data sources before you can alter the tables. The data source that is reorganized first is placed in a waiting state until the remaining data sources are reorganized. If the reorganize command determines that the table contains no records that come from any of the other multi-sourced data sources, it issues this message to indicate that it is time to re-clone the table.


Reorganizing DataSet name[/rectype]

This message is printed to the log file when a data set with a ds_mode of 31 is found. When the reorganization is completed, the message “Dataset name[/rectype] successfully reorganized” is shown.


Stored procedures for all tables of DataSet name[/rectype] successfully refreshed

This message indicates that, after altering the tables, the reorganize command dropped and recreated the stored procedures associated with the tables mapped from the data set.


Process and Clone Commands Messages

The following messages appear in response to the Databridge Client process and clone commands.

ABORT command initiated by TERM signal

This message, which is limited to UNIX Clients, indicates that a kill command was used to generate a SIGTERM signal that the Client is responding to. The Client treats this signal exactly like a command line console QUIT NOW command. This is particularly useful if you are running the Client as a background run.


Attempting to clear duplicate records for table 'name'

This message indicates that, following the failure to create an index for a table at the end of the data extraction phase, the Client will attempt to run the script "script.clrduprecs.tablename" to remove duplicate records from the table. This situation can occur if the Databridge Engine sees the same record twice during data extraction. This is much more likely in the case of COMPACT data sets, where records can move around in the data set when their sizes change.


Begin populating/updating database [from AFN=afn, ABSN=absn, SEG=seg, INX=inx, DMSII Time=timestamp]

This message appears at the start of a process or clone command after all the data sets have been successfully selected. The absence of any audit file information indicates that all of the selected data sets need to be cloned and that the Databridge data tables have been successfully created. The audit file location information indicates that a process command has found at least one data set ready to receive DMSII updates.


BI image for update to table 'name' is now filtered, deleting old image; Keys: column_name = value, ...

This message, which appears in the trace file when the TR_VERBOSE bit in the trace mask is set, indicates that the row in question, which was not previously filtered, now satisfies the filtering condition and needs to be removed from the corresponding table. The Client automatically deletes this row from the table in this situation.


BI image for update to table 'name' was previously filtered, inserting new image; Keys: column_name = value, ...

This message, which appears in the trace file when the TR_VERBOSE bit in the trace mask is set, indicates that the row in question, which was previously filtered, no longer satisfies the filtering condition and needs to be stored in the corresponding table. The Client automatically inserts this row into the table in this situation.


Build_BCP_Record: table='name', record filtered for name1 = occ
Build_Pipe_Stream: table='name', record filtered for name1 = occ

These messages are written to the trace file, when the TR_VERBOSE bit in the trace mask is set, during the data extraction phase when an OCCURS table filter is present. They indicate that the specified occurrence of the item (or GROUP) is being suppressed because it meets the filtering conditions. In the case of the SQL Server Client, the first message is used when using the BCP API; all other cases use the second message.


Build_Parameters: table='name', record filtered for name1 = occ

This message is written to the trace file, when the TR_VERBOSE bit in the trace mask is set, during the tracking phase when an OCCURS table filter is present. It indicates that the specified occurrence of the item (or GROUP) is being suppressed because it meets the filtering conditions.


Bulk load count verification for table 'name' complete: number rows

This message only appears when the configuration parameter verify_bulk_load is set to 1 or 2 and the number of records in the relational database table is equal to the number of loaded records.


Bulk loader parameter "max_temp_storage" = mmm MB

(Windows only) This message, which is only written to the log file when one or more data sets are to be cloned, records the value of the max_temp_storage parameter in the Client configuration file. This provides readily available information for analyzing and resolving a slow clone. You should use a value of at least 400 MB for this parameter.


Bulk_loader thread no longer hung, main thread resuming

(Windows only) The bulk loader thread, which had fallen so far behind that it caused the main thread (or one or more update worker threads when using multi-threaded updates) to block, has caught up and allowed the threads to unblock and start running again. If this situation occurs again, you should investigate why the bulk loader is running so slowly. If you are using a remote connection over Oracle, try increasing the value of the sql_bindsize to 1 MB.


Cleaning up table 'name' [fully]

This message indicates that the Client is selectively deleting records from the specified table at the beginning of the data extraction phase, instead of dropping and recreating the table. This action is taken only in special cases, such as when deleted records are preserved. Another case is when a table that gets its input from more than one data set is partially re-cloned. The presence of the word "fully" indicates that the script "script.cleanup2.table" is being run, as opposed to the script "script.cleanup.table".


Clear duplicate records script ran successfully

This message confirms that the script "script.clrduprecs.tablename" was successfully run. The purpose of this script is to delete records that appear multiple times in the table. The fixup process will reinsert the correct copy of these records. This situation is rare, but tends to happen when compact data sets are involved.

Caution

You should disable the use of this script when using composite keys that you are unsure of. Running this script could end up deleting perfectly good data and forcing you to re-clone, instead of merely removing duplicate records encountered during the data extraction. To disable the script for a particular data set, reset the DSOPT_Clrdup_Recs bit (32,768) in the ds_options column of the corresponding row in the DATASETS control table.
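Resetting DSOPT_Clrdup_Recs amounts to clearing one bit in ds_options. A minimal sketch (the constant comes from the caution above; the helper name is illustrative):

```python
DSOPT_CLRDUP_RECS = 32768  # 0x8000, bit value given in the caution above

def disable_clrdup(ds_options: int) -> int:
    """Return ds_options with the DSOPT_Clrdup_Recs bit cleared, which
    stops the Client from running the clear-duplicates script."""
    return ds_options & ~DSOPT_CLRDUP_RECS
```

The resulting value would then be written back to the ds_options column of the data set's row in the DATASETS control table.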


Clone of selected DataSets completed

This message indicates that the clone command for the specified data sets completed successfully.


Closing file "bcppipe.name[_number]"

This message, which applies only to the SQL Server Client and appears in the trace file when the TR_VERBOSE bit in the trace mask is set, indicates that the specified temporary file has been successfully closed prior to being queued for bulk loading.


Closing file "lpipe_number[_number].dat"

(Windows only) This message, which applies only to the Oracle Client and appears in the trace file when the TR_VERBOSE bit in the trace mask is set, indicates that the specified temporary file has been successfully closed prior to being queued for bulk loading.


Creating index 'name' for table 'name'

This message shows the progress of the data set being cloned and indicates that the Client is creating an index for the specified table.


Creating table 'name'

This message, which shows the progress of the data set being cloned, indicates that the specified table is being created in the relational database.


Creating temp file "name" for table 'name'

(Windows only) This message shows the progress of the clone during the data extraction phase and indicates that the Databridge Client is creating a temporary file to hold the records that will be passed to the bulk loader. When using the Oracle Client the file is named lpipe_tablenumber.dat; when using the SQL Server Client it is named bcppipe_tablename.dat. This message is only printed when the first temporary file for the given table is created, unless the TR_VERBOSE bit in the trace mask is set, in which case it is written to the trace file every time a temporary file is created.


Cumulative Statistics:

This message contains statistics for the entire run (as opposed to incremental statistics, which apply only to each individual audit file processed). This message is written to the log file at the end of the run.

For a description of individual messages in these statistics, see Update Statistics.


Data extraction phase for table 'name' complete,
num DMSII records processed, num rows loaded [, num rows in error][, num rows discarded][, num rows filtered]

Indicates that the data extraction phase for the specified table is complete. It also reports the DMSII record count; the corresponding number of rows loaded into the relational database; the count of records that had data errors; the number of discarded records, which are placed into discard files named tablename.bad in the discards subdirectory; and the number of filtered-out records. Any of the last three counts that are 0 are omitted from the message.

Note that records discarded by the bulk loader are placed in different files in the discards subdirectory. The files are named bcp.tablename.bad and sqlld.tablename.bad for the SQL Server and Oracle Clients respectively.


Data Extraction [Phase 1] Statistics:

This message precedes a list of data extraction statistics that are printed to the log file. In the case of a MISER database, which has virtual data sets that get their input from more than one data set, the data extraction has two phases. The message at the end of the first phase of the data extraction is marked “Phase 1”. The statistics at the end of the second phase include both phases.

For a description of individual messages, see Update Statistics.


Database clone/update completed

This message appears when a process or clone command completes successfully. This message can also appear when the Databridge Engine has successfully read the available audit files but there were no updates to pass on to the Client.


DataSet name[/rectype], mode 1->2, datasets_to_fixup = nnn

This message, which appears in the trace file when the TR_VERBOSE bit in the trace mask is set, indicates that a State Information record with a mode of 2 was received for the given data set which had a mode of 1. The Client updates the mode to 2 and decrements the count of data sets that are still in mode 1 after displaying this message.


DataSet name[/rectype] will be cloned

This message, which applies to the clone command, indicates that the specified data set will be cloned. /rectype only appears for variable-format data set records that have a non-zero record type (contain a variable part).


DataSets initialized in DMSII

This message appears when the Databridge Engine sends the Client a status indication that one or more data sets have been initialized (emptied). This message is preceded by one or more of the following messages that occur during the purging of a data set: "Dropping tablename"; "Creating tablename"; and "Creating index name for tablename".


DB_Info: update_level = ddd, update_ts = timestamp, highest_strnum = ddd
DB_Info: database_ts = timestamp, database_name = name
[DB_Info: OptionFlags=options]

These messages are only written to the log file and only appear when one or more data sets are in tracking mode.

The first line provides the database update level, the database update timestamp, and the highest structure number in the database for the data source being accessed.

The second line gives the database timestamp and the database name for the data source being accessed.

The third line, which only appears when there is something to report, consists of a set of comma-separated option names. These include: IndependentTrans (INDEPENDENTTRANS is set for the DMSII database), AccessActive (READ ACTIVE AUDIT is set in the Engine Control File), RDB (the DMSII database is an RDB secondary database), FileXtract (the data source is a FileXtract file) and LINKS (LINKS is set to TRUE in the Engine Control File).


DB_Wait parameters: retry_secs = rrr, maxwait_secs = ddd, eee

This message, which is only written to the log file, shows the values of the DB_Wait RPC parameters when the configuration parameter use_dbwait is set to True. This information is logged because it is one of the first things we want to know when the Client appears to run very sluggishly. Setting max_wait_secs to a high value will make the Client go idle for the specified amount of time when the Engine reaches the end of the audit trail. rrr is the retry interval for the Databridge Engine.

When eee is 0, ddd is the time interval after which the Engine stops retrying when it finds no updates. If eee is nonzero, eee defines that interval for the Engine instead; in this case the Client completes the wait-and-retry loop by issuing DBWait RPCs until ddd seconds elapse with no updates received from the server. A value of 0 for ddd is taken to mean retry forever.
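The Client's side of the wait-and-retry loop can be sketched as a simple predicate (a hedged reading of the parameters above; names are illustrative):

```python
def keep_waiting(elapsed_secs_without_updates: int, maxwait_secs: int) -> bool:
    """Decide whether the wait-and-retry loop should continue.
    A maxwait_secs (ddd) of 0 means retry forever; otherwise the loop
    stops once ddd seconds elapse with no updates from the server."""
    if maxwait_secs == 0:
        return True
    return elapsed_secs_without_updates < maxwait_secs
```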


{DMAuditLib | FileXtract_Reader_Name} Version: version_string

This message, which is only written to the log file when the Client starts processing audit files, shows the reader being used for the audit files. When using FileXtract, the reader's name and version appear in the log file at the start of a process or clone command; these readers include SUMLOG, TTRAIL, PRINTFILE, BICSS, DISKFILE, and LINCLOG.


Deleting file "name"

(Windows only) This message, which only appears if you use the -z option, indicates that the specified file is being deleted. Instead of initiating the bulk loader, the Client deletes the temporary data files, since the -z option implies that the database cannot be updated.


DMSII item data: offset = dddd (0xhhh), len = dd half bytes
0000 xx ...

This message, which is only written to the log file, appears after a data error warning when the parameter display_bad_data is enabled. Such errors include bad digits in numeric data and control characters or 8-bit characters in ALPHA data. The xx values represent the DMSII values of the data bytes that make up this field.


Dropping table 'name'

This message appears when a table gets initialized during the processing of updates, as a result of an INITIALIZE of the data set in DMSII. The Client does this by dropping the table and recreating an empty table.


Effective COMMIT Parameters: BLOCKS = bbb, UPDATES = uuu, TRANS = ttt, ELAPSED = eee, LONG TRANS = {True | False}

This message, which is only written to the log file, shows the effective values of the various CHECKPOINT FREQUENCY parameters. The original values come from the Engine Control File, but are sometimes overridden by the values specified in the Client configuration file.


End fixup phase for cloned DataSets

This message indicates that the fixup phase for the cloned data sets is ending. At this point, all tables in the relational database that are mapped from active data sets are synchronized.


End populating/updating database at AFN=afn, ABSN=absn, SEG=seg, INX=inx, DMSII Time=timestamp

The process or clone command ends at the audit file location that corresponds to the given AFN, ABSN, SEG, INX, and DMSII timestamp values. The next process command starts at this point.


Extended translation library "name" successfully loaded and initialized

This message, which only applies when using an external translation DLL to perform the data translation, indicates the DLL was successfully loaded.


Filter file "dbfilter.cfg" successfully processed

This message, which is written to the trace file when the TR_VERBOSE bit in the trace mask is set, indicates that the binary filter file was successfully loaded and the associated data structures were successfully initialized.


Incremental Statistics (AFN=nnnn):

This message contains a set of incremental statistics, which apply only to each individual audit file processed. The AFN displayed identifies the audit file for these statistics; it is the previous audit file, as these statistics are printed when the first quiet point in the next audit file is encountered.

For a description of individual messages, see Update Statistics.


Index 'name' for table 'name' created successfully

This message is printed when the index creation for a cloned table is successful.


Initiating process command for DataSource name

This message indicates that a dbutility process command that has scheduling enabled in the dbridge.cfg configuration file has just woken up and is running a process command. When using the service, the service handles the scheduling by launching a DBClient run.


Key change detected in MODIFY for DataSet name[/rectype], handling it as a DELETE/INSERT instead

This message, which only appears in the trace file when the TR_VERBOSE bit in the trace mask is set, indicates that the Client detected a change in the value of one or more key items. At the start of a process or clone command, the Databridge Client registers the keys being used with the Databridge Engine for data sets that use a SET that has the KEYCHANGEOK attribute as the source for the index. The Databridge Engine handles this situation by sending the updates to the Client as MODIFY records when the keys are unchanged or as MODIFY BI/AI pairs when the keys change. For information about the DSOPT_CheckKeyChanges (0x1000) bit in the ds_options column in the DATASETS table refer to the Databridge Client Administrator's Guide.


Launching refresh command to update stored procedures

This message indicates that the dbfixup program set a bit in the status_bits column of the DATASOURCES table's row for the data source to indicate that OCCURS tables are present. Upon seeing this bit, a process or clone command initiates a refresh command to get the stored procedures z_tablename created. These stored procedures are used to speed up delete operations for such tables. Rather than deleting the rows of a secondary table for a given key one by one, the Client deletes them all in a single SQL statement by using this stored procedure.


Loading binary filter file "dbfilter.cfg"

This message is displayed only when an OCCURS table filter is present in the config subdirectory for the data source. The filter file is always named dbfilter.cfg. This message is a simple confirmation that the binary filter file was read by the Client. Refer to the section on OCCURS table filtering in the Client Administrator's Guide for details on how this type of row filtering works.


Loading control tables for datasource

This message indicates that the Client control tables are being loaded for the data source you specified with the process or clone command.


Log file switched from "filename" (reason)

This message is written to the new log file immediately after a log switch occurs. It provides the name of the previous log file, which is sometimes useful if you need to find out what happened at a time before the switch.


Log file switched to "filename" (reason)

This message is written to the log file under the following conditions, which cause the Client to close the current log file and open a new one:

  • The logsw_on_size configuration parameter is set to True and the file size exceeds the configured maximum during an audit file switch.

  • The logsw_on_newday configuration parameter is set to True and the Client notices that the date has changed.

  • The operator issues a Logswitch command.

The values for reason include "Operator Keyin", "Max file size", and "Date change".


Mainframe Time 'hh:mi:ss'; {ahead | behind} by hh:mi:ss

This message, which appears at the start of a process command, shows the time difference between the mainframe and the Client machine clocks. This value is factored in to all lag time calculations. The Client periodically checks the mainframe clock, to prevent the lag time from going negative if the clocks are adjusted. You will therefore see this message multiple times during the course of a long Client run.

We always display this message at the start of the run and thereafter only if the difference drifts by more than 2 seconds.
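
The skew correction described above can be sketched as follows. This is an illustrative sketch only; the function and variable names are assumptions, not the Client's actual code:

```python
from datetime import datetime, timedelta

def clock_skew(mainframe_time: datetime, local_time: datetime) -> timedelta:
    """Positive when the mainframe clock is ahead of the Client machine."""
    return mainframe_time - local_time

def lag_time(update_applied_at: datetime, dmsii_timestamp: datetime,
             skew: timedelta) -> timedelta:
    """Lag = when the row hit the relational database minus when DMSII
    wrote it, with the DMSII timestamp shifted into the Client machine's
    clock domain so adjusted clocks cannot drive the lag negative."""
    return update_applied_at - (dmsii_timestamp - skew)

# Example: the mainframe is 5 seconds ahead; a record stamped 12:00:00 on
# the mainframe is applied locally at 11:59:58, i.e. 3 seconds of real lag.
# Without the skew correction the lag would come out as -2 seconds.
skew = clock_skew(datetime(2024, 1, 1, 12, 0, 5), datetime(2024, 1, 1, 12, 0, 0))
lag = lag_time(datetime(2024, 1, 1, 11, 59, 58), datetime(2024, 1, 1, 12, 0, 0), skew)
```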


MODIFY occurs depending on, item = 'name', bi_count = ddd, ai_count = nnn
- Keys: columnname = value,...

This message, which only appears in the trace file when the TR_VERBOSE bit in the trace mask is set, shows the old and new values of the depends item(s) for an OCCURS DEPENDING ON clause. The values determine how the update is handled when the OCCURS is not flattened.


Next update for DataSource name will run at hh:mm (delay = nn secs)

This message appears only when you have scheduling parameters enabled for dbutility in the configuration file. It tells you when to expect the process command to run again. hh:mm corresponds to the time at which the next run starts and nn represents the length of this delay (in seconds).
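
The relationship between hh:mm and the delay nn can be illustrated with a small sketch (a hypothetical helper, not dbutility's actual scheduling code):

```python
from datetime import datetime, timedelta

def next_run(now: datetime, sched_hh: int, sched_mm: int):
    """Return the next scheduled start time and the delay in seconds.
    If today's hh:mm has already passed, schedule for tomorrow."""
    target = now.replace(hour=sched_hh, minute=sched_mm, second=0, microsecond=0)
    if target <= now:
        target += timedelta(days=1)
    return target, int((target - now).total_seconds())

# At 22:30, a run scheduled for 23:00 is 1800 seconds away.
target, delay = next_run(datetime(2024, 1, 1, 22, 30), 23, 0)
```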


Processed: nnnn DMS recs, rrrr SQL rows

This message appears if the display_statistics configuration parameter or the -v option is enabled. It is useful when you are not sure if the Client is running or if it has stopped operating, especially when the cloning requires several hours.


Processed: nnnn DMS recs, rrrr SQL rows[, bbbb SQL rows rolled back]

This message, which is printed after the cumulative statistics, contains the total count of DMSII records that were processed and the corresponding count of SQL updates. In addition, when rollbacks occur, it displays the count of rows rolled back.


Processing updates from: AFN=afn, ABSN=absn, SEG=seg, INX=inx, DMSII Time=timestamp

This message appears after the incremental statistics, which are displayed when the Client encounters the first quiet point in a new audit file. It indicates that the Client is processing updates from the specified AFN, ABSN, SEG, INX, and DMSII time stamp. The statistics that precede it apply to the previous audit file.


QUIT command initiated by QUIT signal

This message, which is limited to UNIX Clients, indicates that a kill command was used to generate a SIGQUIT signal, which the Client is responding to. The Client treats this signal exactly like a command line console QUIT command. This is particularly useful if you are running the Client as a background run.


Redundant update for table 'name'; Keys: columnname = value,...

This message only appears in the trace file when the following conditions are met: the TR_VERBOSE bit in the trace mask is set; the configuration file parameter optimize_updates is set to True; and the data set has been marked to receive before- and after-images. The message indicates that the Client found no value changes in the columns of the table and no update is needed. This means that the update can be skipped as it does nothing.


ReleaseSemaphore for name reached maximum value, retrying after brief delay

(Windows only) This message, which only appears in the trace file when the TR_VERBOSE bit in the trace mask is set, indicates that the Client has exceeded the maximum posting limit for the semaphore by attempting to post too many requests. The Client will stall until the thread that handles requests catches up with the requests. In the case of bcp_work_semaphore, this usually means that you have attempted to clone a large number of empty data sets. The Client should normally recover from this situation.


Rerunning the script to clear duplicate records for table 'name'

(SQL Server only) This message indicates that the script which clears duplicate records got a schema change error (539); that is, the schema of tables changed while a select into statement was executing. The Client recovers from this error by rerunning the script. This error only occurs on high-end multiple-CPU machines. If the script fails a second time, the Client returns the message, "WARNING Attempt to clear duplicate records for table 'name' failed." In this case, the ds_mode column in the DATASETS table for the data set will be set to 11 and the data set will not be tracked.


Restarting fixup phase for previously cloned DataSets

This message appears at the start of a process command (after the message "Begin populating/updating database from AFN = afn, ABSN = absn, ...") if there are data sets in fixup mode (ds_mode = 1) and no data sets need to be cloned (ds_mode = 0).


Restarting process command for data source name

This message, which is limited to the command line Client dbutility, indicates that the Databridge Engine returned an error status that is handled by restarting the Client after a brief delay. These include DBM_AUD_EOF (9), DBM_BAD_AUDITLOC (11), DBM_WRONG_ABSN (92), DBM_BLOCKTOOLONG (1179), DBM_LOC_MISMATCH (33) and DBM_ABSNMISMATCH (1180). These errors all have to do with race conditions when the Engine is attempting to read the current audit file before DMSII has finished writing it. Additionally, if the Client is terminated because a relational database deadlock was detected, dbutility will also attempt to recover by restarting after a brief delay.

When using the Client Manager Service you will never see this message, as the service handles the error recovery.
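
The recovery behavior described above amounts to a retry loop over a known set of transient statuses. A minimal sketch, with an invented run_with_restart helper and the status values taken from the message text:

```python
import time

# Transient Engine statuses listed in the message description above.
TRANSIENT = {9, 11, 33, 92, 1179, 1180}   # DBM_AUD_EOF, DBM_BAD_AUDITLOC, ...

def run_with_restart(process_once, max_restarts=5, delay_secs=0, sleep=time.sleep):
    """Hypothetical sketch of dbutility's recovery: rerun the process
    command after a brief delay whenever a transient status comes back;
    return immediately on success or on a hard error."""
    for _ in range(max_restarts):
        status = process_once()
        if status not in TRANSIENT:
            return status          # success or a hard error: stop retrying
        sleep(delay_secs)          # brief delay before restarting
    return status
```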


Rows loaded by table:
name rowcount name rowcount name rowcount

This message, which is only written to the log file at the end of the data extraction phase when the show_table_stats parameter is set to True, shows the number of rows loaded for each of the affected tables.


Running script "script_file_spec"

This message appears only when a data table creation user script or an index creation user script is run.


Selecting DataSet name[/rectype]

This message, which only appears in the trace file when the TR_VERBOSE bit in the trace mask is set, specifies that the data set in question is being selected at the beginning of a process or clone command. It gives you a chance to make sure that you have selected all of the data sets you want.


{SQLCancel | OCIBreak} operation completed

The Client’s timer thread monitors SQL queries that fail to complete within a reasonable amount of time controlled by the configuration parameter sql_exec_timeout, which contains two values.

  • The first value is the time interval after which the timer issues a warning about the SQL operation taking longer than anticipated.
  • The second value is the time interval after which the timer thread initiates steps to remedy the situation. The first step is to issue a database API call to cancel the query. In the case of SQL Server, this is the ODBC SQLCancel procedure. In the case of Oracle, this is the OCIBreak procedure.

We observed that when the Client is run in a virtual machine (VM), this operation sometimes also hangs because of a loss of connectivity between the ODBC driver and the database machine caused by a problem with the VMware networking.

This message indicates that the SQLCancel call did not hang. In order to prevent the timer thread from hanging, the SQL Server Client creates a temporary thread to issue the SQLCancel call. The timer thread then monitors this thread and if it detects that the SQLCancel is hung, it kills the Client task, which allows the service (or the script file that launched the Client) to detect the situation and restart the Client, which normally resolves the issue.
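
The two-stage behavior of sql_exec_timeout can be summarized as a simple decision function. This is an illustrative sketch; the parameter semantics are as described above, but the function name is invented:

```python
def timeout_action(elapsed_secs: float, warn_secs: float, cancel_secs: float) -> str:
    """Return the timer thread's action for a query that has been running
    for elapsed_secs, given sql_exec_timeout = (warn_secs, cancel_secs)."""
    if elapsed_secs >= cancel_secs:
        return "cancel"   # issue SQLCancel (SQL Server) or OCIBreak (Oracle)
    if elapsed_secs >= warn_secs:
        return "warn"     # log a long-running-SQL warning
    return "ok"
```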


Starting bcp for table 'name'

(SQL Server Client only) This message indicates that the Client opened a BCP API connection and is about to start loading the specified table.


Starting { bcp | sql*loader } for table 'name'

(Windows only) This message indicates that the SQL Server Client (when using the bcp utility) or the Oracle Client is starting the bulk loader for the given table.


Starting command: "filename"

(UNIX only) The given shell script that runs the bulk loader is being started. The shell script (load.tablename.sh) runs the bulk loader and establishes a named pipe used to communicate data between the main process and the bulk loader process that is spawned from the script.


Starting fixup phase for cloned DataSets

This message indicates that the Client has finished extracting data and is starting the fixup phase for the cloned data sets.


Stopping: All available audit information has been processed (status)

This message indicates that no more audit file information is available on the host (that is, a normal stopping point). When READ ACTIVE AUDIT is set to FALSE in the Engine Control file, or when audit files are available but there are no updates, the Databridge Engine will also cause the Client to display this message and stop.

Note

If the Databridge Engine finds updates but reaches the end of the audit file before a commit, the Client rolls back the updates. The discarded updates are included when the next audit file or quiet point becomes available.

The status received from DBServer is one of the following:

  • AUD_UNAVAIL indicates a normal exit.

  • LIMIT_NAME indicates that the run stopped because the Databridge Engine encountered a task name that satisfied the stop condition. This condition can be specified using a STOP statement in the DBServer parameter file or the Client configuration file.

  • LIMIT_TIME indicates that the run stopped because the Databridge Engine is processing an audit file record whose time stamp satisfies the stop condition. This condition can be specified using a STOP statement in the DBServer parameter file or the Client configuration file.


Stopping: Audit information not available (status)

For an explanation of statuses, see the preceding message. This message occurs when no audit files on the host have been read by the Databridge Engine. This can indicate that no audit files are available (that is, no audit file is closed) or that the Databridge Engine does not have visibility to the audit files. In this case, try again when an audit file is closed. If READ ACTIVE AUDIT is set to FALSE in the Engine Control file, you will get this message when the Client tries to open the active audit.


Stopping: Client operations inhibited between hh:mm and hh:mm

This message indicates that the Client stopped the processing of audit files because it is entering a blackout period defined by the configuration parameter blackout_period in the scheduling section.

In the case of dbutility, this message also applies to a blackout period defined using the stop_time and end_stop_time columns in the DATASOURCES table entry for the data source. To use this feature, which can be associated with the shutdown parameter, set the configuration parameter controlled_execution to True.
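
A blackout window may wrap past midnight, so the containment test needs two cases. A hedged sketch with a hypothetical helper name:

```python
from datetime import time

def in_blackout(now: time, start: time, end: time) -> bool:
    """True when 'now' falls inside the blackout window [start, end);
    the window may wrap past midnight (e.g. 23:00 to 01:30)."""
    if start <= end:
        return start <= now < end      # same-day window
    return now >= start or now < end   # window wraps past midnight
```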


Stopping: Database reorganization – execute a redefine command followed by a reorganize command

This message appears when you run a process command and the Databridge Engine detects that a DMSII structural reorganization or a filler substitution has occurred. See "Changes to the DMSII Database" in Chapter 4 of the Databridge Client Administrator's Guide.


Stopping: Database update level changed -- execute a redefine command

This message appears when you run a process command and the Databridge Engine detects a DMSII structural reorganization that does not affect any of the selected data sets. In this situation, the Client will stop, except when you use a MISER database, in which case it will display the warning, "Database update level change ignored".

When the Client stops after a database update level change, it forces you to run a redefine command. This ensures that DASDL changes that affect the operations of the Databridge Client are addressed, even if those changes don't result in a format level change for the data sets.

For example, if an item is added to the SET which the Client uses as the source of the index, the Client would not discover that item. Instead, the Client would create false duplicate records, which would result in missing records in the relational database. Another example is if changes are made to the KEYCHANGEOK attribute of the SET that the Client uses as the source for the index. The Client would not know that KEYCHANGEOK had been set and would create extra records in the relational database when a key change actually occurs.


Stopping: DBEnterprise audit file origin changed

This message appears when you run a process command and the Databridge Client detects a change in the audit file origin (access method) when using Databridge Enterprise Server. When starting to process a new audit file the Client gets a DOC record that has information about the audit file. This includes the access method (direct disk, indirect disk or cache), which defines how the audit file is being read. The configuration parameter stop_on_dbe_mode_chg determines whether or not the Client should stop when it detects that the audit file origin no longer matches the value specified in the configuration parameter dbe_dflt_origin.


Stopping: Discard threshold exceeded

This message indicates that the Client has exceeded the limit on the total number of discard records specified in the configuration file using the first value of the parameter max_discards. This situation results in exit code 2054.


Stopping: Errors occurred during data extraction [and index_creation]

This message appears before the fixup phase if discard files were created and (if the last part of the message is present) index creation errors occurred. Instead of entering the fixup phase, the program stops and gives you a chance to look into the problem before continuing.


Stopping: Errors occurred during index creation

This message appears before the fixup phase if index creation errors occurred. Instead of entering the fixup phase, the program stops and gives you a chance to look into the problem before continuing.


Stopping: Garbage collection reorganization has occurred

This message indicates that the processing of updates is being interrupted. This message appears at the first quiet point after a garbage collection reorganization if the stop_after_gc_reorg parameter is enabled.


Stopping: Operator issued a "quit" command

This message indicates that a QUIT command for the Client was issued from the console. The Client stops at the next quiet point after displaying this message.


Stopping: Processing of fixup records deferred to next process command

This message appears at the point where the program would normally enter the fixup phase, if the configuration parameter defer_fixup_phase is enabled (the -c option toggles this parameter). Instead of entering the fixup phase, the program stops.


Stopping: Processing of updates deferred to next process command

This message, which appears at the end of the fixup phase if the stop_after_fixups parameter is enabled, indicates that updates will be processed at the next process command.


Stopping: Processing through requested AFN completed

This message appears when the audit file number being processed goes past the audit file number passed to the Client using the -F afn command line option or using the Stop after Afn command from the console.


Temporary storage threshold reached, starting bulk loader

(Windows only) This message indicates that the Client has reached the bulk loader cutoff, which is half the value of the max_temp_storage parameter. All tables for which temporary files were created will be queued for loading and the main thread (or update worker threads, if the parameter n_update_threads is greater than 0) of the Client will continue processing extracts until the full threshold is reached. At that point, it will block waiting for the bulk loader thread to finish loading the tables. If the bulk loader thread finishes before this happens, the Client resumes writing temporary files until the cutoff is reached again, at which point it repeats the aforementioned process.
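
The double-threshold flow control described above can be sketched as a simple decision function (an illustrative sketch under the stated assumptions; the function name is invented):

```python
def loader_action(temp_bytes: int, max_temp_storage: int) -> str:
    """Windows bulk-loader flow control sketch: at half of
    max_temp_storage the queued tables are handed to the bulk loader
    thread while extraction continues; at the full value the main
    thread blocks until the loader catches up."""
    if temp_bytes >= max_temp_storage:
        return "block"          # wait for the bulk loader thread
    if temp_bytes >= max_temp_storage // 2:
        return "queue_loads"    # start loading, keep extracting
    return "buffer"             # keep writing temporary files
```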


TranCommit, AFN=afn, ABSN=absn, SEG=seg, INX=inx, DMSII Time=timestamp

This message, which only appears in the trace file when the TR_VERBOSE bit in the trace mask is set, indicates that the last group of SQL statements was committed on the relational database; the message includes the DMSII time and the ending audit file State Information.


Transaction group aborted by DBEngine, nnn operations rolled back

This message, which is written to the trace file when the TR_VERBOSE bit is set in the trace mask, appears when the Databridge Engine sends the Client an abort status (instead of a commit status). The Databridge Engine rolls back the current transaction group. Transaction rollbacks can happen when the Engine reaches the end of the audit trail, or when READ ACTIVE AUDIT is set to FALSE in the Engine Control file and the Engine reaches the active audit file. In these cases, the updates are applied the next time the Client is run. Additionally, programs on the mainframe that roll back updates can sometimes cause the Engine to send the Client rollback requests. When rollbacks occur, they are included in the statistics that the Client logs. See Update Statistics.


Updates by table (average update times in ms):
name updates (m.mmm) name updates (m.mmm) name updates (m.mmm)

This message, which is only written to the log file after an audit file switch when the show_table_stats parameter is set to True, shows the number of updates and the average update time for each table during the processing of the last audit file in question. Tables that have no updates are omitted from this list.


Data Extraction Statistics

These messages are part of the Data Extraction Statistics that are printed at the end of the Data Extraction phase of process and clone commands. These messages are listed in the order they appear in the log file.

Data Extracted nnn.nn KB in sss.sss secs, throughput = ddd.dd KB/sec, DMSII recs/sec = rrr.rr

nnn.nn represents the number of kilobytes of DMSII data received. sss.sss represents the elapsed time (in seconds). ddd.dd represents the corresponding throughput. rrr.rr represents the corresponding rate at which DMSII records were processed.
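
The throughput figures in this line are simple ratios of the quantities described above; for example (illustrative helper, not the Client's actual code):

```python
def extraction_stats(kb_received: float, elapsed_secs: float, dms_recs: int):
    """Compute the throughput (KB/sec) and the DMSII record rate
    (recs/sec) reported in the Data Extracted line."""
    throughput = kb_received / elapsed_secs
    recs_per_sec = dms_recs / elapsed_secs
    return round(throughput, 2), round(recs_per_sec, 2)
```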


Bytes Received nnn.nn KB in sss.sss secs, total throughput = ddd.dd KB/sec

nnn.nn represents the number of kilobytes of total data received (including the packet headers and the non-data packets). sss.sss represents the elapsed time in seconds. ddd.dd represents the corresponding throughput.


DMSII Buffers Used = dd (configured_max = mm)

dd is the actual number of DMSII buffers used and mm is the configured maximum. If the parameter n_update_threads is set to 0 the value of dd will be 1 unless you have DMSII links.

Caution

When you have DMSII links the data extraction will use more buffers, therefore you should not try to reduce the default setting as this can cause the clone to fail if it runs out of DMSII buffers.


TCP/IP time = sss.sss secs, (dd.dd% of total time)

sss.sss represents the amount of time (in seconds) that the main thread of the program spent waiting for TCP/IP data to appear from the host. dd.dd is the corresponding percentage of total elapsed time.


SQL_exec time = sss.sss secs, (dd.dd% of total time)

sss.sss is the amount of time (in seconds) that the main thread of the program spent waiting for the execution of SQL statements to complete. dd.dd is the corresponding percentage of total elapsed time.


TXN_exec time = sss.sss secs, (dd.dd% of total time)

sss.sss represents the amount of time (in seconds) the main thread of the program spent waiting for commits to complete. dd.dd is the corresponding percentage of total elapsed time.


File_I/O time = sss.sss secs, (dd.dd% of total time)

This line indicates the amount of time that the main thread spent waiting for file I/O operations to complete. For Windows, this represents the I/O writing the temporary files. For UNIX, it represents the amount of time spent writing to the UNIX pipes. These times also include any blocking time when the bulk loader falls behind the main process. In all cases, sss.sss is the amount of time (in seconds) the program spent waiting on file I/O operations to complete. dd.dd is the corresponding percentage of total elapsed time.


buf_wait time = sss.sss secs, (dd.dd% of total time)

This line only applies to multi-threaded extracts. sss.sss represents the amount of time (in seconds) the main thread of the program spent waiting for a DMSII buffer to become available. dd.dd is the corresponding percentage of total elapsed time. DMSII buffers are used to hold the raw DMSII records while they are being processed by the Client. In the case of multi-threaded extracts these buffers are placed on the various updater threads' work queues using working storage header blocks described below.


ws_wait time = sss.sss secs, (dd.dd% of total time)

This line only applies to multi-threaded extracts. sss.sss represents the amount of time (in seconds) that the main thread of the program spent waiting for a working storage header block to become available. dd.dd is the corresponding percentage of total elapsed time. Working storage header blocks are small blocks used to queue DMSII buffers, as they can be on multiple queues when a DMSII data set maps to multiple tables.


Bulk_load time = sss.sss secs, (dd.dd% of total time)

(Windows only) In the single-thread case (n_update_threads = 0), this includes the amount of time that the main thread spent waiting for the bulk loader operations, while in the multi-threaded case it represents the amount of time the main thread waited for the bulk loader thread to finish loading tables. The value sss.sss is the amount of time in seconds that the main thread spent waiting on resources tied up by the bulk loader thread and dd.dd is the corresponding percentage of total elapsed time. A high value of Bulk_load time might be an indication that the value of the parameter max_temp_storage you are using is too small. For MISER databases, setting max_temp_storage to 1 GB seems to work best.


Inx_wait time = sss.sss secs, (dd.dd% of total time)

sss.sss is the amount of time (in seconds) that the main thread of the program spent waiting for the index creator thread to finish before it could start the fixup phase. dd.dd is the corresponding percentage of total elapsed time.


CPU/other time = sss.sss secs, (dd.dd% of total time)

sss.sss represents the amount of time (in seconds) the main thread of the program was not waiting for any of the above actions to complete. dd.dd is the corresponding percentage of total elapsed time. This metric is a derived value and it represents the time during which the thread was in a runnable state.
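
Since CPU/other time is derived rather than measured, it is simply the elapsed time minus the sum of all the measured waits. A sketch with a hypothetical helper:

```python
def other_time(total_elapsed: float, wait_times: list) -> tuple:
    """Derive the CPU/other (runnable) time: total elapsed time minus
    every measured wait, plus the corresponding percentage of total."""
    runnable = total_elapsed - sum(wait_times)
    pct = 100.0 * runnable / total_elapsed
    return runnable, pct

# Example: 100 s elapsed with 40 s TCP/IP, 30 s SQL, 10 s file I/O waits
# leaves 20 s (20%) of runnable time.
```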


Server_Packet_Counts:
CREATE:ddd DELETE:ddd MODIFY:ddd MOD_BI:ddd MOD_AI:ddd STATE:ddd
LINK_BI:ddd LINK_AI:ddd BULKDEL:ddd DOC:ddd COMMIT:ddd ROLLBK:ddd

These values represent the counts of the various record types and commits received from the Databridge Server. During data extraction, CREATE represents the number of records extracted. When using DMSII links, LINK_AI represents extracted data for the link images in the various records. STATE represents State Information records. All the remaining record types will be zero at the end of the data extraction.


Bulk-loader Thread Statistics:
Load_time = sss.sss secs, (dd.dd% of total time)
Idle_time = sss.sss secs, (dd.dd% of total time)
Other_time = sss.sss secs, (dd.dd% of total time)

(Windows only) The value sss.sss in the first line is the amount of time (in seconds) that the bulk loader thread spent waiting for the bulk loader executions to complete. dd.dd is the corresponding percentage of total elapsed time.

In the second line, sss.sss is the number of seconds that the thread spent waiting for something to do. dd.dd is the corresponding percentage of total elapsed time.

In the third line sss.sss is the amount of time (in seconds) that the thread was not waiting for any of the above actions to complete and dd.dd is the corresponding percentage of total elapsed time. This metric is a derived value and it represents the time during which the thread was in a runnable state.


Index Thread Statistics:
Load_time = sss.sss secs, (dd.dd % of total time)
Idle_time = sss.sss secs, (dd.dd % of total time)
Other_time = sss.sss secs, (dd.dd % of total time)

The value sss.sss in the first line is the number of seconds that the index-creator thread spent waiting for the execution of the create index SQL statements to complete. dd.dd is the corresponding percentage of total elapsed time.

In the second line sss.sss is the amount of time (in seconds) that the thread spent waiting for something to do and dd.dd is the corresponding percentage of total elapsed time.

In the third line sss.sss is the amount of time (in seconds) that the thread was not waiting for any of the above actions to complete and dd.dd is the corresponding percentage of total elapsed time. This metric is a derived value and it represents the time during which the thread was in a runnable state.


[Data Errors: eeeeee SQL rows discarded, dddddd SQL rows in error]

This line shows the number of records that had data errors or were discarded. It doesn't appear when both counts are zero.


[Filter: dddddd occurs tables records suppressed]

This line shows the number of records that were not stored in OCCURS tables because of filtering. It doesn't appear when the count is zero.


Thread[1]:
SQL_time = sss.sss secs, (nn.nn % of total time), update_cnt = dddd, average update_time = m.mmm ms
[BCP_time = sss.sss secs, (nn.nn % of total time), record_cnt = dddd, average load_time = m.mmm ms]
[FileIO_time = sss.sss secs, (nn.nn% of total time)]
Idle_time = sss.sss secs, (nn.nn % of total time)
Other_time = sss.sss secs, (nn.nn % of total time)

Thread[2]:
. . .
Total thread BCP_time = sss secs, total extract_cnt = dddd, average sql_time = m.mmm ms
Total thread SQL_time = sss secs, total extract_cnt = dddd, average sql_time = m.mmm ms
Total thread FileIO_time = sss secs, total extract_cnt = dddd, average sql_time = m.mmm ms

These messages only appear when multi-threaded updates are enabled.

In the "SQL_time" line of each thread the value sss.sss is the amount of time (in seconds) that the thread spent waiting for the execution of SQL statements to complete. nn.nn is the corresponding percentage of total elapsed time. The value update_cnt, which will be 0 unless you have some data sets that are not using the bulk loader, represents the number of SQL statements the thread executed. The average update_time represents the average duration of the updates expressed in milliseconds with three fractional digits.

(SQL Server only) In the "BCP_time" line of each thread the value sss.sss is the amount of time (in seconds) that the thread spent making BCP API bcp_sendrow calls or waiting for the bulk loader thread to complete. nn.nn is the corresponding percentage of total elapsed time. The record_cnt value represents the number of records that the thread loaded and average load_time represents the average duration of the operations expressed in milliseconds with three fractional digits.

In the "FileIO_time" lines, sss.sss is the amount of time (in seconds) that the thread spent doing file I/O. For Windows Clients this is the I/O time for writing records to the temporary files used to pass data to the bulk loader, while for UNIX Clients this represents the I/O time writing to the pipe used to pass data to SQL*Loader.

In the "Idle_time" line of each thread the value sss.sss represents the amount of time (in seconds) that the thread spent waiting for work. nn.nn is the corresponding percentage of total elapsed time.

In the "Other_time" line of each thread the value sss.sss represents the amount of time (in seconds) that the thread was not waiting for any of the above actions to complete, and nn.nn is the corresponding percentage of total elapsed time. This metric is a derived value and it represents the time during which the thread was in a runnable state.

The last three lines after the thread resource utilization statistics represent the corresponding statistic across all threads.


Update Statistics

These messages are part of the Incremental Statistics and Cumulative Statistics of process and clone commands. Incremental statistics are written to both the display and the log file, while cumulative statistics are only written to the log file.

Processed nnn.nn KB in sss.sss secs, throughput = ddd.dd KB/sec, DMSII recs/sec = rrr.rr, lag time = hh:mm:ss

nnn.nn represents the number of kilobytes of DMSII data received. sss.sss represents the elapsed time in seconds. ddd.dd represents the corresponding throughput. rrr.rr represents the corresponding rate at which DMSII records were processed. The lag time, which is represented as hours, minutes, and seconds, is the difference between the time when a record is updated in the relational database and when it was updated in DMSII (this is only meaningful when doing real-time replication).


Received nnn.nn KB from DBServer in sss.sss secs, total throughput = ddd.dd KB/sec

nnn.nn represents the number of kilobytes of total data received (including the packet headers and the non-data packets). sss.sss represents the elapsed time in seconds. ddd.dd represents the corresponding throughput.


DMSII Buffers Used = dd (configured_max = mm), Audit access rpc = {DBRead | DBWait}, Audit file origin = AF_origin

dd is the actual number of DMSII buffers used and mm is the configured maximum. If the configured value is 0, the maximum value is computed by the program. The rest of the line indicates whether the DBRead or DBWait RPC was used to get updates from the Databridge Engine and the method by which the audit file is being read. The possible values for AF_origin are HostAudit, DirectDisk, IndirectDisk, and DBECache (the last three apply to Enterprise Server).


TCP/IP time = sss.sss secs, (dd.dd% of total time)

sss.sss represents the amount of time (in seconds) that the main thread of the program spent waiting for TCP/IP data to appear from the host. dd.dd is the corresponding percentage of total elapsed time.


SQL_exec time = sss.sss secs, (dd.dd% of total time)

sss.sss is the amount of time (in seconds) that the main thread of the program spent waiting for the execution SQL statements to complete. dd.dd is the corresponding percentage of total elapsed time.


TXN_exec time = sss.sss secs, (dd.dd% of total time)

sss.sss represents the amount of time (in seconds) the main thread of the program spent waiting for commits to complete. dd.dd is the corresponding percentage of total elapsed time.


[buf_wait time = sss.sss secs, (dd.dd% of total time)]

This line only applies to multi-threaded updates. sss.sss represents the amount of time (in seconds) the main thread of the program spent waiting for a DMSII Buffer to become available. dd.dd is the corresponding percentage of total elapsed time. DMSII buffers are used to hold the raw DMSII records while they are being processed by the Client. In the case of multi-threaded updates these buffers are placed on the various updater threads' work queues using working storage header blocks described below.


[ws_wait time = sss.sss secs, (dd.dd% of total time)]

This line only applies to multi-threaded updates. sss.sss represents the amount of time (in seconds) that the main thread of the program spent waiting for a working storage header block to become available. dd.dd is the corresponding percentage of total elapsed time. Working storage header blocks are small blocks used to queue DMSII buffers, as they can be on multiple queues when a DMSII data set maps to multiple tables.


[thr_wait time = sss.sss secs, (dd.dd% of total time)]

This line only applies to multi-threaded updates. sss.sss represents the amount of time (in seconds) that the main thread of the program spent waiting for the updater threads to finish processing updates before committing them. dd.dd is the corresponding percentage of total elapsed time.


CPU/other time = sss.sss secs, (dd.dd% of total time)

sss.sss represents the amount of time (in seconds) the main thread of the program was not waiting for any of the above actions to complete. dd.dd is the corresponding percentage of total elapsed time. This metric is a derived value and it represents the time during which the thread was in a runnable state.
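Because this metric is derived rather than measured, it is simply the elapsed time minus the measured wait times; a sketch of the computation (names are illustrative):

```python
def cpu_other_time(elapsed_secs, wait_times):
    """Derive the CPU/other time: the portion of elapsed time not accounted
    for by any measured wait (TCP/IP, SQL_exec, TXN_exec, buf_wait, ws_wait,
    thr_wait)."""
    other = elapsed_secs - sum(wait_times)
    pct = 100.0 * other / elapsed_secs  # dd.dd, percentage of total time
    return other, pct
```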


Thread[1]:
SQL_time = sss.ssss secs, (nn.nn % of total time), update_cnt = dddd, average update_time = m.mmm ms
Idle_time = sss.sss secs, (nn.nn % of total time)
Other_time = sss.sss secs, (nn.nn % of total time)
Thread[2]:
. . .
Total thread SQL_time = sss secs, total extract_cnt = dddd, average sql_time = m.mmm ms

This message is only included in the incremental statistics when multi-threaded updates are enabled. It contains the resource utilization statistics for each of the update worker threads. sss.sss is the number of seconds the thread spent executing SQL statements (SQL_time), waiting for work (Idle_time), or doing neither (Other_time). nn.nn is the corresponding percentage of the elapsed time this represents. dddd is the count of the updates executed by the corresponding thread and m.mmm is the average time in milliseconds for those updates.

The line after the thread resource utilization statistics represents the SQL_time statistic across all threads.


Server_Packet_Counts:
CREATE: ddd DELETE: ddd MODIFY: ddd MOD_BI: ddd MOD_AI: ddd STATE: ddd
LINK_BI: ddd LINK_AI: ddd BULKDEL: ddd DOC: ddd COMMIT: ddd ROLLBK: ddd

These values represent the counts of the various record types and commits received from DBServer. CREATE represents insertions into the database. DELETE and MODIFY represent delete and update operations during audit file processing. MOD_BI and MOD_AI represent the before- and after-image records for updates.

LINK_BI and LINK_AI represent before and after images of link items, which are sent to the Client as separate records. The LINK_BI count will always be 0, as before images of link items are not currently used. BULKDEL will normally be 0, as it is only used when implementing embedded subsets.

STATE represents State Information records, which contain the location of the audit trail. This contains the audit file number (AFN), the audit block sequence number (ABSN), the segment and index of the block in the audit file and the DMSII timestamp. This information is used when a process command starts processing updates to tell the Databridge Engine where in the audit trail it should start looking for updates. COMMIT represents commits. ROLLBK represents rollbacks. DOC represents documentation records mostly used for debugging. The two exceptions are DOC records that provide information on an audit file, when the Engine starts reading a new audit file and DOC records that indicate that a data set has been reorganized (this is used to inform the Client that a garbage collection has occurred, as it otherwise would not know about it).
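The audit-trail location carried in a STATE record can be pictured as a small structure. The following is a hypothetical model (the field names and the ordering helper are ours, not the Client's), illustrating how the location determines where audit processing resumes:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StateInfo:
    """Hypothetical model of the audit-trail location in a STATE record."""
    afn: int        # audit file number
    absn: int       # audit block sequence number
    seg: int        # segment of the block in the audit file
    inx: int        # index of the block within the segment
    timestamp: int  # DMSII timestamp

    def precedes(self, other: "StateInfo") -> bool:
        # Illustrative ordering: a later audit file, or a later block within
        # the same file, is a later position in the audit trail.
        return (self.afn, self.absn) < (other.afn, other.absn)
```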


Server_Rolledback_Packet_Counts:
CREATE: ddd DELETE: ddd MODIFY: ddd MOD_BI: ddd MOD_AI: ddd STATE: ddd
LINK_BI: ddd LINK_AI: ddd BULKDEL: ddd DOC: ddd

These values are printed when rollbacks occur and represent the counts of the record types that were rolled back.


[Processed: dddd bytes of before image data, rrrr redundant SQL updates skipped]

This line normally appears when using the optimize_updates feature. dddd represents the number of bytes of DMSII before image data received (the cause) and rrrr represents the number of redundant SQL updates that were eliminated (the effect). A low value of rrrr combined with a high value of dddd is a clear indication that the optimize_updates feature is not helpful in this case.

BI/AI pairs are used by Databridge for COMPACT data sets that contain items with OCCURS DEPENDING ON clauses and when doing OCCURS table filtering.
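Eliminating a redundant update amounts to comparing the portions of the before and after images that map to a given table; a minimal sketch under that assumption (not the Client's actual code):

```python
def is_redundant_update(before_image: bytes, after_image: bytes,
                        mapped: slice) -> bool:
    """Return True when none of the DMSII data mapped to a table changed
    between the before image (BI) and after image (AI), meaning the SQL
    update for that table can be skipped."""
    return before_image[mapped] == after_image[mapped]
```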


[Data Errors: eeeeee SQL rows discarded, dddddd SQL rows in error]

This line shows the number of SQL rows that were discarded because of data errors and the number of SQL rows that were in error. It doesn't appear when both counts are zero.


[Filter: dddddd occurs tables records suppressed]

This line shows the number of records that were discarded from OCCURS tables because of filtering. It doesn't appear when the count is zero.


DBServer TXN Group Statistics (cumulative):
Commits = ccc, Avg UPDATE_inc = uu.uu, Avg Trans time = ss.sss sec, Avg ABSN_inc = bb.bb
Rollbacks = rrr, Avg UPDATE_inc = uu.uu, Avg Trans time = ss.sss sec

This message displays statistics for commit and rollback operations performed under the direction of the Engine. ccc is the total number of commits; rrr is the total number of rollbacks; uu.uu is the average number of updates contained in the individual transactions; ss.sss is the average duration of these transactions; and bb.bb is the average number of audit blocks the committed transactions span. If there were no commits or rollbacks in the updates, the corresponding line is not printed, and if both counts are zero, all three lines are omitted.


Aux STMT Statistics:
Configured_max = nnn, Max_used = mmm, Recycled_stmt_cnt = rrr
STMT reuse stats: min_sql_ops = nnn, max_sql_ops = mmm, avg_sql_ops = rrr.rr
STMTs never reused = nnn, min_sql_ops = mmm, max_sql_ops = ddd, avg_sql_ops = rrr.rr

This message appears at the end of the incremental statistics following an audit file switch. It provides information about the auxiliary statements used by the Client. Unlike the rest of the update statistics, these statistics are cumulative, as auxiliary statement usage can span multiple audit files and sometimes the entire run. They are omitted from the cumulative statistics, as they would contain the same information.

The first line shows nnn, the value of the aux_stmts parameter in the Client configuration file, and mmm, the maximum number of statements that were used. The recycled statement count, rrr, indicates how many statements were reused to execute different updates. If the rrr value is high (or if the nnn and mmm values are the same) you may not have enough statements configured. If you change these values, keep in mind that higher values can result in better performance but will require more memory.

The second line shows the number of SQL statements that were executed using a given auxiliary statement, as minimum, maximum and average values. The minimum value is typically 1 if you have tables that are very rarely updated. A high value for the maximum is encouraging, but it can be misleading if you have a small number of tables that get updated a lot. If the average value is high, this indicates that you have enough auxiliary statements. Any statement that is reused will run much faster than one that executes for the first time, as the first execution requires additional I/O. The speedup is particularly noticeable in the Oracle Client.

The third line represents the minimum, maximum, and average number of SQL operations that were executed by these SQL statements and the number of SQL statements that have not been reused. If this number is close to the number of configured auxiliary statements, you might benefit from increasing the value of the configuration parameter aux_stmts. This would allow more SQL operations to re-use statements, thereby improving performance.
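Auxiliary-statement recycling behaves much like a fixed-size LRU cache of prepared statements. The following toy model (ours, not the Client's implementation) shows how a pool of aux_stmts statements produces the Max_used and Recycled_stmt_cnt figures:

```python
from collections import OrderedDict

class AuxStmtPool:
    """Toy model of auxiliary statement recycling (illustrative only)."""

    def __init__(self, aux_stmts):
        self.limit = aux_stmts      # the aux_stmts configuration parameter
        self.pool = OrderedDict()   # SQL text -> execution count
        self.max_used = 0           # reported as Max_used
        self.recycled = 0           # reported as Recycled_stmt_cnt

    def execute(self, sql):
        if sql in self.pool:
            self.pool.move_to_end(sql)        # reuse: no re-prepare, runs faster
            self.pool[sql] += 1
        else:
            if len(self.pool) >= self.limit:
                self.pool.popitem(last=False) # recycle least recently used stmt
                self.recycled += 1
            self.pool[sql] = 1                # first execution: extra I/O to prepare
            self.max_used = max(self.max_used, len(self.pool))
```

In this model a high recycled count means statements are constantly being evicted and re-prepared, which is why the text suggests raising aux_stmts in that situation.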


Drop and Dropall Commands Messages

The following messages appear in response to the Databridge Client drop or dropall commands.

You must create a separate working directory for each data source. When you need to drop a data source, make sure that the current directory is the working directory for the data source before running the drop command. This ensures that dbutility will be able to locate the required scripts. In the case of a dropall command, you can only drop one data source at a time, as the working directory must be changed for each data source. This command also drops the Client control tables when dropping the last data source. The dropall command is rarely used in this way; it is mainly used for dropping the Client control tables.

Cleaning up table 'name'

This message indicates that the Client is deleting selective records from the specified table instead of dropping the table. This action is only taken in special cases, such as when tables that contain non-DMSII data are populated by the Client. The drop command cannot drop the table; instead, it removes all the records that the Client created.


Deleting control table entries for DataSource name

This message indicates that all of the Client control table entries related to the specified data source are being removed.


Drop of all Databridge tables successfully completed

This message indicates the following:

  • All tables for the various data sources defined in the Client control tables have been removed from the relational database.

  • All of the corresponding scripts have been deleted from the current directory.

  • All of the Client control tables have been removed from the relational database.


Drop of DataSource name partially completed

The drop and dropall commands delete all tables, stored procedures, and scripts for each table. If no error occurs, the commands proceed to delete the Client control table entries corresponding to the specified data source.

If an error occurs while deleting the Client control table entries, the command continues trying to delete the data source's entries from the remaining control tables. In this case, you must manually remove the remaining tables, stored procedures, and/or scripts, as well as remove the corresponding entries in the Client control tables.


Drop of DataSource name successfully completed

This message indicates that the tables and associated stored procedures for this data source have been removed, as well as the data source entry in the Client control tables. In addition, scripts for this data source were deleted from the dbscripts subdirectory.


Dropping control table name

This message is used by the dropall command to indicate that the Client control table in question is being removed from the relational database in the final stages of the command.


Dropping table 'name'

This message, which only appears in the trace file when the TR_VERBOSE bit in the trace mask is set, indicates that the specified table and its associated stored procedures are being removed from the relational database.


Loading control tables for datasource

This message indicates that Client control tables are being loaded for the data source that you specified (with the drop or dropall command).


Starting drop of DataSource name

This message indicates that the drop (dropall) command has begun for the specified data source.


Switchaudit Command Messages

The following message may appear in response to the Databridge Client switchaudit command, which is limited to the command line Client dbutility.

Audit file switched (Current DMSII AFN = nnnn)

This message indicates that the DMSII audit file was closed and a new one was opened. If the READ ACTIVE AUDIT parameter is set to FALSE in the Engine Control file, the Databridge Engine will not attempt to process the active audit file, unless you are doing a clone. The value nnnn is the DMSII database’s current audit file number after the command completes.


Display Command Messages

The following messages appear in response to the Databridge Client display command.

Control tables for DataSource name written to file "fname"

This is the confirmation message indicating the successful completion of the command. This message is output to the screen, not the Client log file.


Loading control tables for datasource

This message indicates that the Client control tables are being loaded for the data source you specified with the display command.


Runscript Command Messages

The following messages appear in response to the Databridge Client runscript command.

Running script "script_file_spec"

This message is displayed when the Client runs the specified script.


Script SQL statements and row counts will be written to file "name"

This message is a reminder that the runscript command automatically enables SQL tracing and also writes the row counts for insert, delete, and update SQL statements executed in the script. The row counts are of the form "nn row(s) {inserted | deleted | updated}".


User script "name" executed successfully

This message indicates that the specified script ran with no errors.


Unload Command Messages

The following messages appear in response to the Databridge Client unload command.

Control tables for all DataSources written to file "name"

This message is displayed at the end of an unload command when the data source name field of the command line contains the value _all. This message indicates that the command completed successfully.


Control tables for DataSource name written to file "name"

This message can arise in two situations:

If a data source name is specified on the command line, this message indicates that the unload command completed successfully.

or

If the data source name field of the command line contains the value _all and the TR_VERBOSE bit in the trace mask is set, this message is displayed after each data source is unloaded.


Loading control tables for datasource

This message is displayed when the program loads the Client control tables for the specified data source before writing their records out to a file. If several data sources are being unloaded, this message is displayed multiple times.


Unloading control tables for datasource

This message indicates that the unload command is writing the control table entries for the data source to the file in question.


Reload Command Messages

The following messages appear in response to the Databridge Client reload command.

Control tables for all DataSources reloaded from file "name"

This message will appear at the end of a reload command if the data source name field of the command line contains the _all value. It indicates that the command completed successfully.


Control tables for DataSource name reloaded from file "name"

This message appears at the end of a reload command if a data source is specified on the command line. It indicates that the command completed successfully.


DataSet name[/rectype] will be reloaded

This message appears if a data set list is specified in the command line of a reload command. It is a confirmation message printed prior to reloading the Client control table entries that pertain to the data set in question.


Loading control tables for datasource

This message is displayed if a data set list or the -k option is specified on the command line of the reload command. The command needs to first load the Client control tables to determine if the specified data sets exist and to possibly preserve the State Information.


Reloading Control table entries for DataSource name from file "name"

This message can appear in two situations:

If a data source name is specified on the command line, this message indicates that the reload command is about to reload the control tables for the data source in question.

or

If the data source name field of the command line contains the value _all, this message is displayed before each data source is reloaded. There is no confirmation message in this case, except at the very end of the command.


Refresh Command Messages

The following messages appear in response to the Databridge Client refresh command. This command is normally embedded in the reorganize command. If you decide to manually process a DMSII reorganization by writing your own alter table commands, you will need to run this command. Make sure that you first execute a generate command to ensure that the Client scripts are up-to-date.

Loading control tables for datasource

This message indicates that Client control tables are being loaded in preparation for executing the refresh command.


Script SQL statements executed will be written to file "name"

This message is a reminder that the refresh command automatically enables SQL tracing when the -v option is enabled. The refresh command drops the stored procedures for all the tables mapped from the specified data set and then recreates them. If a variable-format data set is specified, all data sets with the given name that have their active columns set to 1 in the DATASETS control table are refreshed (the stored procedures of all replicated record types are refreshed).


Stored procedures for all active tables of DataSource name successfully refreshed

This message indicates that the refresh command, which has a data set name specification of _all, completed successfully. It confirms that all stored procedures for all active tables in the specified data source were successfully refreshed.


Stored procedures for all tables of DataSet name[/rectype] successfully refreshed

This message confirms that the refresh command completed successfully for the data set that you specified on the command line.


Export Command Messages

The following message may appear in response to the Databridge Client export command.

Text configuration file "name" successfully created

This message is confirmation that the export command completed successfully. If using the defaults, the binary file dbridge.cfg is read and its equivalent text configuration file dbridge.ini is created in the config subdirectory for the data source.


Import Command Messages

The following messages appear in response to the Databridge Client import command.

Binary configuration file "name" successfully created

This message confirms that the import command completed successfully. If using the defaults, the text configuration file dbridge.ini is read and its equivalent binary file dbridge.cfg is created in the config subdirectory for the data source.


Rowcounts Command Messages

The following message may appear in response to the Databridge Client rowcounts command. The command executes a select count(*) from tablename SQL statement to get the row count for each individual table. The command can take a long time to execute when you have large tables.

Loading control tables for datasource

This message indicates that Client control tables are being loaded in preparation for executing the rowcounts command.


Row counts for all active tables for DataSource name written to file "name"

This message confirms that the rowcounts command completed successfully. The row counts of all user tables, whose active column is 1 in DATATABLES, that are associated with the data source in the DATASETS control table are written to the Client log file.


Table row counts:
name rowcount name rowcount name rowcount
. . .

This message consists of the names of the various tables followed by their row counts. There are 3 entries per line. Tables that have no data are skipped.
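A sketch of what the command does, per the description above: one select count(*) per table, with empty tables skipped and the results laid out three per line. sqlite3 stands in for the actual relational database, and all names are illustrative:

```python
import sqlite3

def row_counts(conn, tables):
    """Issue 'select count(*)' per table, skipping tables with no data."""
    counts = {}
    for t in tables:
        n = conn.execute(f"select count(*) from {t}").fetchone()[0]
        if n:                      # tables that have no data are skipped
            counts[t] = n
    return counts

def format_counts(counts, per_line=3):
    """Lay out 'name rowcount' pairs three per line, as in the log message."""
    items = [f"{name} {n}" for name, n in counts.items()]
    return "\n".join("  ".join(items[i:i + per_line])
                     for i in range(0, len(items), per_line))
```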


CreateScripts Command Messages

The following message may appear in response to the Databridge Client createscripts command.

Backing up user scripts to directory "name"

This message indicates that the old user scripts are being copied to the specified directory. Unless inhibited by the command line -n option, user scripts are backed up to the datasourceYYYYMMDD[_HHMISS] subdirectory of the directory whose name is specified by the configuration parameter user_script_bu_dir. If this parameter is not specified, the directory specified by the parameter user_script_dir is used instead.

Note

You need to periodically delete old copies of these directories as the Client does not try to manage the backup user scripts directories.
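The backup subdirectory name described above can be derived as follows; this is a sketch, assuming the optional _HHMISS suffix distinguishes multiple backups taken on the same day:

```python
from datetime import datetime

def backup_dir_name(datasource, when, include_time=False):
    """Build the datasourceYYYYMMDD[_HHMISS] subdirectory name (illustrative)."""
    name = f"{datasource}{when:%Y%m%d}"
    if include_time:
        name += f"_{when:%H%M%S}"   # optional _HHMISS suffix
    return name
```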


Creating DataSet selection script

This message indicates that the createscripts command is creating the script script.user_datasets.source. This script normally contains data set selection scripts which manipulate the active column in the DATASETS control table.


Creating user define script for DataSet name[/rectype]

This message indicates that the createscripts command is creating the script script.user_define.dataset, where dataset is the name of the primary table for the specified data set. These scripts typically perform renames of tables and columns. This is achieved by updating the DATATABLES and DATAITEMS control tables.


Creating user layout alteration script for DataSet name[/rectype]

This message indicates that the createscripts command is creating the script script.user_layout.dataset, where dataset is the name of the primary table for the specified data set. These scripts typically modify the di_options and dms_subtype columns in the DMS_ITEMS control table to perform customizations, such as cloning numbers as dates or flattening OCCURS clauses.


Loading control tables for datasource

This message indicates that Client control tables are being loaded for the data source you specified with the createscripts command.


User scripts for DataSource name written into directory "name"

This message confirms that a fresh copy of the user scripts was written into the specified directory. The Client removes all user scripts from the directory after backing them up and recreates all the user scripts.

Note

If you set the active column to 0 for a data set prior to running the createscripts command, the user scripts for that data set will not be created. You might need to retrieve them from the backup directory if you later change the active column back to 1. Alternatively, you can run the command again with the -n option to prevent a second backup copy from being created.


Tcptest Command Messages

The following messages appear in response to the Databridge Client tcptest command.

Bytes Processed nnn.nn KB of DMSII data in sss.sss secs, throughput = ddd.dd KB/sec
Bytes Received nnn.nn KB in sss.sss secs, total throughput = ddd.dd KB/sec
TCP/IP_time = sss.sss secs, (dd.dd% of total time)

This message appears at the end of the tcptest command.

In the first line, nnn.nn represents the number of kilobytes of simulated DMSII data received. sss.sss represents the elapsed time (in seconds). ddd.dd represents the corresponding throughput.

In the second line, nnn.nn represents the number of kilobytes of data received. sss.sss represents the elapsed time (in seconds). ddd.dd represents the corresponding throughput. The number of bytes received is slightly greater than the number of bytes of simulated DMSII data as it also includes the protocol overhead bytes.

In the third line, sss.sss is the amount of time (in seconds) that the program spent waiting for TCP/IP data to appear from the host. dd.dd is the corresponding percentage of total elapsed time.


TCP Test completed successfully

This message indicates that the Client has successfully completed the tcptest command.


TCP_Test: len=nnnn, count=nnnn

This message, which appears at the start of the tcptest command, displays the length of each message and the number of messages to be sent in the test.


TCP_Test: nnnn iterations completed

The Client displays this message while executing the tcptest command if the show_statistics parameter is set to True. The Client displays this message after every nnnn iterations, where nnnn is the smallest value specified for the statistics_increment parameter's arguments.


Databridge Client Console Messages

The following messages appear in response to dbutility console commands and commands issued from the Administrative Console.

Aux STMT Statistics:
Configured_max = nnn, Max_used = mmm, Recycled_stmt_cnt = rrr
STMT reuse stats: min_sql_ops = nnn, max_sql_ops = mmm, avg_sql_ops = rrr.rr
STMTs never reused = nnn, min_sql_ops = mmm, max_sql_ops = ddd, avg_sql_ops = rrr.rr

This message appears in response to a dbutility ASTATS console command. In the case of the Administrative Console it appears in the Statement tab in response to a Statistics command from the Run menu. For details, see Update Statistics.


Client State = state_name

This message is the first line of the response to a STATUS command during a process or clone command. This command's output is covered in a separate sub-section at the end of this section; see Client Status Messages.


Commit ABSN increment will be set to nnn at next quiet point

This message is displayed in response to a dbutility COMMIT ABSN nnnn command. In the case of the Administrative Console it is written to the log file and the console window.


Commit Parameters: ABSN_inc = aaa, UPDATE_inc = bbb, TIME = ccc, DMS_Txn = ddd
DBServer TXN Group Statistics (cumulative):
Commits = ddd, Avg UPDATE_inc = nnn.nn, Avg Trans time = sss sec, Avg ABSN_inc = ddd
Rollbacks = ddd, Avg UPDATE_inc = nnn.nn, Avg Trans time = sss sec

This message is displayed in response to a dbutility COMMIT STATS command. For details, see Update Statistics.


Commit TIME increment will be set to nnn at next quiet point

This message is displayed in response to a dbutility COMMIT TIME ssss command. In the case of the Administrative Console it is written to the log file and the console window.


Commit TXN increment will be set to nnn at next quiet point

This message is displayed in response to a dbutility COMMIT TRAN nnnn command. In the case of the Administrative Console it is written to the log file and the console window.


Commit UPDATE increment will be set to nnn at next quiet point

This message is displayed in response to a dbutility COMMIT UPDATE nnnn command. In the case of the Administrative Console it is written to the log file and the console window.


Connection to server not yet established

This message indicates that the console operator issued an SSTATS command before the Client established a connection with DBServer.


Console Input: 'text'

The Client logs all console commands issued by the operator, ensuring that there is a record of all such commands. This message, which only appears in the log file, contains the console command text as entered by the operator.


Console RPC: {Quit | Quit At hhmmss | Quit After AFN dddd | Quit Now | Get_Server_Stats | Switch_Log_File | Switch_Trace_File}

These messages, which are only applicable to DBClient, are only written to the log file. They provide a log of the console command RPCs that were received by the Client.


DataSource name idle

When a new Administrative Console connection starts up, it causes the DBClntCfgServer program to be started for each of the data sources included in the service's configuration file. This program provides access to the control tables for the Administrative Console. This message indicates that the DBClntCfgServer program terminated after one minute of no console activity.


{DBClient | dbutility} will stop after AFN dddd

This message, in response to a QUIT (STOP) AFTER afn command, indicates that the Client will stop after the given audit file is processed.


{DBClient | dbutility} will stop at hh:mm:00

This message, in response to a QUIT (STOP) AT hh:mm command, indicates that the Client will stop at the specified time.


{DBClient | dbutility} will stop at the next quiet point

This message, in response to a QUIT (STOP) command, indicates that the Client will stop at the next quiet point.


Log file switched to "filename" (Operator Keyin)

This message is displayed in response to a successful LOGSWITCH command, which closes the current log file and starts a new one.


Operator commands: cmd_list

This message displays a list of the available dbutility console commands when a HELP command is issued.


Performance statistics:

This message is printed in response to a dbutility PSTATS command or a Statistics command from the Administrative Console. It is followed by performance statistics that look exactly like the Incremental Statistics for the process and clone commands; see Update Statistics.


Performance statistics not available

This message indicates that the operator issued a PSTATS command. However, performance statistics are not currently available because the Client has not yet started receiving extracts or updates from the Databridge Engine or Databridge Enterprise Server.


Performance statistics only available during process and clone commands

This message indicates that the operator issued a PSTATS command while executing a command other than process or clone.


Scheduling {disabled | re-enabled}

This message, in response to a SCHED command, indicates that the operator either disabled scheduling, or re-enabled scheduling that was previously disabled by a SCHED OFF command. This command takes effect only when it is time to schedule the next process command and does not affect the currently executing process command. If scheduling is disabled, the Client simply exits normally when the current process command terminates.


Server statistics only available during a process or clone command

This message indicates that the operator issued an SSTATS command while executing a command other than process or clone.


Server statistics will be displayed after next quiet point

This message indicates that the server statistics requested by the Client will not be available until the next quiet point. This is the standard response to the SSTATS console command.


Server Statistics:
Usercode: usercode
Priority: nn
Processor time: nn.nnnn seconds
I/O time: nn.nnnn seconds
ReadyQ time: nn.nnnn seconds
Support version: vv.vvv.vvvv [timestamp]
Support: (usercode) filespec
Filter: name

These messages are the result of the SSTATS command, which requests the server statistics at the next quiet point, when the Client is in control of the communications channel to the server. Output lines that are not applicable are omitted. Usercode shows the usercode under which DBServer runs. Priority shows the priority of the DBServer worker task. Processor time and I/O time show the processor time and I/O time used by the DBServer worker. ReadyQ time shows the ready-queue time used by the DBServer worker; this is time spent when the task is ready to run but cannot get a processor.

Note

If the -v option is enabled, this message will include additional lines of output. These lines are typically suppressed by the Client because the information is redundant.


Trace file switched to "filename"

This message is displayed in response to a successful TSWITCH command, which closes the current trace file and starts a new one with the specified name.


Trace_options set to 0xhhhhhh

This message, in response to a TRACE command, indicates that tracing is now set to the specified value.
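The trace mask is a bit field; for example, the 0x10000 bit (TR_VERBOSE, described at the start of this section) enables the messages that are written only to the trace file. As a generic illustration of testing such a bit (the helper function name is ours, not part of the Client):

```python
# Sketch: interpreting a Client trace mask as a bit field.
# TR_VERBOSE (0x10000) is documented in this section; the helper
# below is illustrative, not the Client's actual code.

TR_VERBOSE = 0x10000  # bit that enables trace-file-only verbose messages

def verbose_tracing_enabled(trace_mask: int) -> bool:
    """Return True when the TR_VERBOSE bit is set in the mask."""
    return (trace_mask & TR_VERBOSE) != 0

print(verbose_tracing_enabled(0x10001))  # True
print(verbose_tracing_enabled(0x00001))  # False
```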


Possible values for wait_condition include:

  • work, which indicates that the thread is idle.

  • SQL execution to complete, which indicates that the thread is waiting for the database API code to return control to it after the update completes.

  • waiting for mutex name, which indicates that the thread is waiting on the named mutex. If this condition does not clear in a reasonable amount of time, the program might be deadlocked.

  • waiting for semaphore name, which indicates that the thread is waiting for the specified semaphore to be posted. If this condition does not clear in a reasonable amount of time, the program might be deadlocked.

  • running, which indicates that the thread is actively executing the SQL for an update.

  • terminated, which indicates that the thread has exited. This only happens when the Client is shutting down.

  • unknown status dd, which indicates that the status of the thread is invalid because of an internal error in the program.

Verbose flag set to {true | false}

This message, which is a response to a VERBOSE command, indicates whether the verbose option is set.


Client Status Messages

These messages show the Client status during a process or clone command. When using the command line Client dbutility, they are displayed onscreen and written to the log file in response to a STATUS console command.

When using the Administrative Console, similar messages, formatted by the Administrative Console server, are displayed in the Client tab of a new page in response to a Statistics command from the Run menu.

Client State = state_name

This message shows the Client state during a process or clone command and is followed by information on the status of the Client. Possible values for state_name are CLONE, FIXUP, TRACKING, and Idle.

The remaining output lines for this message are described in this section.


Processing updates from AFN=afn, ABSN=absn, SEG=seg, INX=inx, DMSII Time=tstamp [(lag time = hh:mi:ss)]

This line displays the current state information; the lag time is appended when the Client is processing audit files.
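The lag time shown in this line is the difference between the DMSII timestamp of the updates being processed and the current time. A sketch of how such a value can be computed and formatted as hh:mi:ss (the function is illustrative, not the Client's actual code):

```python
from datetime import datetime, timedelta

def lag_time(dmsii_time: datetime, now: datetime) -> str:
    """Format the lag between the DMSII audit timestamp and the
    current time as hh:mm:ss, the way the status line reports it."""
    delta = max(now - dmsii_time, timedelta(0))  # never negative
    total = int(delta.total_seconds())
    hh, rem = divmod(total, 3600)
    mm, ss = divmod(rem, 60)
    return f"{hh:02d}:{mm:02d}:{ss:02d}"

print(lag_time(datetime(2024, 1, 1, 12, 0, 0),
               datetime(2024, 1, 1, 13, 5, 30)))  # 01:05:30
```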


RCI: Initiating "cmd" command
RCI: Initiating "cmd" command for DataSource dsname [hostname host, hostport port]
RCI: Initiating "cmd" command for DataSet dsname in DataSource dsname
RCI: Initiating "cmd" command for DataSource dsname {from | to} file "fname"

These messages, which are only written to the log file (the auxiliary log file used for the DBClntCfgServer program's manage command), log all commands issued by the operator from the Actions and Advanced menus of the Administrative Console.


Server = {DBServer | DBEnterprise}, Audit_access_rpc = {DBRead | DBWait (retry_secs = mmm, maxwait_secs = nnn, mmm)} [, Audit file origin = AF_origin]

This line provides information about the RPC used to read the audit file (for DBWait, the parameters of the RPC are also displayed) and the origin of the audit file updates. Possible values for AF_origin are HostAudit (DBServer-based operations) and IndirectDisk, DirectDisk, and DBECache (Enterprise Server-based operations).


RPC Info: last_rpc = name, send_seq_no = 0xhhhh, recv_seq_no = 0xhhhh, resp_count = dddd

This line helps diagnose occasional issues where the communication between the Client and Enterprise Server hangs, with each side claiming that it is waiting for input from the other. During a process command, name is typically Read or Wait; the rest of the line contains information about the current state of the RPC protocol. A similar line of output can be found in the Enterprise Server log.


[Waiting for reason_for_wait, wait_time = mmmm ms]

This line indicates that the main thread of the program is waiting for an event or a resource to become available. See Thread Wait States for information on the possible values of reason_for_wait.

Wait conditions are discussed at the end of this section, as they also apply to the update worker threads when the parameter n_update_threads is set to a non-zero value.


Log File: "name"

This line displays the name of the Client log file, which resides in the logs folder.


Trace: {on, mask = 0xhhhh, trace_file = "name" | off}

This line indicates whether tracing is enabled and provides the trace mask value and the name of the trace file, if any.


Verbose: {on | off}, Scheduling: {on | off}

This line shows the status of the verbose option, which enables additional output that is written to the log file. The Scheduling flag is only meaningful for the command line Client dbutility; when using the service, the scheduling function is handled by the service.


[Client operations inhibited from hh:mm to hh:mm]

This line indicates that the Client configuration file defines a blackout period that inhibits Client operations between the specified times.


[Stop processing updates on mm/dd/yyyy @hh:mm:ss]
[Stop processing updates at first QPT of AFN dddd]

These lines indicate that the Client will stop processing updates either at the first QPT of the specified audit file or at the specified date and time.


Update Worker Thread[n] {waiting for reason_for_wait, wait_time = dddd | running | terminated}
...

These lines indicate the state of the various update worker threads when the parameter n_update_threads is set to a value greater than 0. Possible states are "running", "waiting", and "terminated". If a thread is waiting for an event, the reason is shown in reason_for_wait and dddd is the amount of time (in milliseconds) the thread has been waiting. The possible values for reason_for_wait are covered below; see Thread Wait States.


Thread Wait States

This section provides information about the various wait states for the main thread and the update worker threads that appear in a thread state line of the form Waiting for reason_for_wait, wait_time = mmmm ms.

Waiting for backlog to dissipate, wait_time = mmmm ms

(Windows only) This message indicates that the Client has been waiting for the backlog, caused by excessive posting to the bulk loader thread or index thread work queues, to dissipate. If this number is very large, it may indicate an internal error in the Client.


Thread waiting for bulk loader thread to complete, [table = name, ] wait_time = mmmm ms

(Windows only) This message is one of several lines of output produced by the STATUS command. It indicates that the Client has been waiting for the specified amount of time for the bulk loader thread to complete. If this number is very large, the bulk loader thread may be blocked waiting for a database resource to become available. When using the SQL Server BCP API, the table name is included in this message.


Waiting for index thread to complete, wait_time = mmmm ms

(Windows only) This message is one of several lines of output produced by the STATUS command. It indicates that the Client has been waiting for the specified amount of time for the index thread to complete. If this number is very large, the index thread may be blocked waiting for a database resource to become available.


Waiting for mutex name, wait_time = mmmm ms

A thread that needs access to a critical section of code must first acquire the corresponding mutex. If the mutex is locked, the thread must wait until it becomes available. A thread that is stuck waiting for a mutex usually indicates a deadlock. In such cases, all you can do is kill the run; an abort command is unlikely to have any effect when there is a deadlock.
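As a generic illustration of this pattern (not the Client's actual code), a thread can bound its wait on a mutex and report a suspected deadlock when the wait times out; the mutex name used here is hypothetical:

```python
import threading

def acquire_or_report(lock: threading.Lock, name: str,
                      timeout_s: float = 5.0) -> bool:
    """Try to acquire the mutex; report a possible deadlock if the
    wait exceeds the timeout. Returns True when the lock was acquired."""
    if lock.acquire(timeout=timeout_s):
        return True
    print(f"Waiting for mutex {name}; possible deadlock")
    return False

# "stmt_cache" is a made-up mutex name for illustration only.
m = threading.Lock()
if acquire_or_report(m, "stmt_cache"):
    m.release()  # leave the critical section
```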


Waiting for semaphore name, wait_time = mmmm ms

If a thread needs to execute code after another thread has finished a related task, it typically uses a semaphore to synchronize the two threads. Unless the associated thread has already posted the semaphore, the thread blocks when it tries to acquire it. When the associated thread is done, it posts the semaphore, which wakes up the waiting thread. A thread that is stuck waiting for a semaphore usually indicates a deadlock, which requires that you kill the run; an abort command is unlikely to have any effect when there is a deadlock.
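A minimal sketch of this post/wait pattern, using Python's threading primitives rather than the Client's actual implementation:

```python
import threading

done = threading.Semaphore(0)  # starts unposted: acquire() will block
results = []

def worker():
    """Stand-in for the associated thread doing the related task."""
    results.append("work finished")
    done.release()  # "post" the semaphore, waking the waiter

t = threading.Thread(target=worker)
t.start()
done.acquire()   # blocks here until the worker posts the semaphore
t.join()
print(results[0])  # work finished
```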


Waiting for SQL execution to complete, [table = name, ] wait_time = mmmm ms

This state indicates that the thread is waiting for a SQL update to return; this includes SQL updates to the control tables. If this situation persists, the timer thread eventually issues a warning about a SQL execution that appears to be stuck. Index creations are exempt from this warning, as they can take a long time when the tables involved are large. If the name of the table involved is available, it is included in the message.

Note

In the case of SQL Server, consider upgrading your ODBC driver, as older ODBC drivers are known to have timing problems on systems using SSD drives. We recommend using ODBC Driver 17.4 or newer.


Waiting for TCP/IP input from {DBServer | DBEnterprise}, wait_time = mmmm ms

This state, which only applies to the main thread, indicates that the Client has acquired a DMSII buffer to hold the next update record and is waiting for TCP/IP input from the server to become available. If this number is very large, the Databridge server could be blocked waiting for some event (for example, for the operator to make an audit file available). If the situation persists, the timer thread issues a warning once the threshold is reached.

The situations where this can occur are:

  • At the end of data extraction, when the main thread signals the bulk loader thread to shut down after finding its work queue empty. The main thread then waits to be signaled back that the bulk loader thread has exited.

  • Following any BCP API call, which could potentially block.

  • For the Windows Client, when a thread that is about to queue work for a table on the bulk loader thread's work queue blocks because loading of the previous temporary file for the table has not yet completed.