Appendix A: Troubleshooting

This appendix provides instructions for troubleshooting problems you may experience with Databridge Client.


General Troubleshooting Procedures

If you have problems using the Databridge Client, complete the following steps:

  1. Check to see that your system meets the minimum hardware and software requirements. For details, see the Databridge Installation Guide.

  2. Check that you've selected the correct configuration options for connecting to the relational database server:

    • The relational database name

    • Your user ID and password to log in to the relational database server. Does your user ID to the relational database server have the correct privileges?

    • If you use configuration file parameters or environment variables to supply the signon parameters, did you enter them correctly?

    • If you use command-line options, did you enter them in their correct uppercase or lowercase? Did you enter them with each dbutility command? See dbutility Command-Line Options.

    • If you use a UNIX Client, make sure that the ORACLE_HOME and LD_LIBRARY_PATH variables point to the correct directories (for example, LD_LIBRARY_PATH=/opt/oracle/product/19.0.0/dbhome_1/lib:/home/dbridge/db70/lib).

  3. Check that you've selected the correct configuration options for connecting to the host.

    • Is Databridge Server running on the host?

    • Did you use the data source name as it is defined in the DBServer control file? For more information, refer to the Databridge Host Administrator's Guide.

    • Did you enter the correct host name or IP address?

    • Did you enter the TCP/IP port number as it is defined in the DBServer control file?

    • If there is a password defined in the DBServer parameter file, did you enter the correct password?

  4. Make sure that the PATH environment variable contains the Databridge Client's directory and the appropriate relational database bin directory (named bin for Oracle and binn for Microsoft SQL Server).

  5. Check your cable connections to make sure that they are securely attached.

  6. Determine whether the problem is caused by the host and DMSII (versus Databridge Client) by using Databridge Span on the host to clone a data set from the DMSII database in question.

    • If you cannot clone the data set, the problem is most likely on the host.

    • If you can clone the data, the problem is most likely occurring between the DBServer and Databridge Client.

  7. Resolve any errors. If you receive error messages or status messages that you don't understand, see the Databridge Error and Message Guide.

  8. If you cannot identify and solve the problem without assistance, contact your product distributor or Micro Focus Technical Support from a location where you have the ability to run dbutility.
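
The environment checks in steps 2 and 4 can be scripted. The following Python sketch is a hypothetical helper (not part of Databridge) that reports missing environment variables and verifies that a directory appears in PATH:

```python
import os

def missing_env(required, env=None):
    """Return the names of required environment variables that are unset or empty."""
    env = os.environ if env is None else env
    return [name for name in required if not env.get(name)]

def on_path(directory, env=None):
    """Return True if directory appears in the PATH-style variable."""
    env = os.environ if env is None else env
    return directory in env.get("PATH", "").split(os.pathsep)

# Example with a hypothetical UNIX Client environment:
env = {
    "ORACLE_HOME": "/opt/oracle/product/19.0.0/dbhome_1",
    "LD_LIBRARY_PATH": "/opt/oracle/product/19.0.0/dbhome_1/lib:/home/dbridge/db70/lib",
    "PATH": "/usr/bin:/home/dbridge/db70/bin",
}
print(missing_env(["ORACLE_HOME", "LD_LIBRARY_PATH"], env))  # []
print(on_path("/home/dbridge/db70/bin", env))                # True
```

A check like this can rule out the most common configuration mistakes before involving Technical Support.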


Troubleshooting Table

The following list describes some common problems and their solutions.

Problem: You made changes to the Client control tables, such as changing the active column value, but none of your changes are taking effect.

Solution: This problem, which occurs only when using SQL*Plus with an Oracle database, indicates that your SQL statements were not committed. By default, SQL*Plus operates in transaction mode: SQL statements are committed only when you explicitly issue a commit or when you exit SQL*Plus. To make SQL*Plus automatically issue a commit after every SQL statement, type set auto[commit] on.

Problem: You changed one or more table names, but the new tables are empty after you do a clone or an update.

Solution: Most likely you did not update the table_name columns in the DATAITEMS Client control table.

Problem: You have the correct host name, port number, and data source name, but you still cannot connect to the host.

Solution: Make sure the domain name server is running. If the domain name server is down, change the host name in the DATASOURCES table to the IP address and try the dbutility command again.

Problem: You get a "constraint violation" error when you run the process command to update the relational database.

Solution: Most likely you have placed a constraint on one of the columns in the Databridge data tables. Remove the constraint and re-clone the data set to get all of the records.

IMPORTANT: Do not place constraints or other restrictions on any Databridge data table. If you do, Databridge will not work. Instead, filter rows on the host using the DBGenFormat utility.

Problem: The Databridge Client becomes unresponsive at the following message:

Begin populating/updating database from AFN=afn, ABSN=absn, INX=inx, SEG=seg, DMSII Time=time_stamp

Solution: Check the host ODT for a waiting entry from Databridge Server, similar to the following:

(usercode) DBSERVER/WORKER-n
NO FILE (usercode)databasename-AUDITnnnn

In this case, make the audit file available to the Databridge Engine. For example, if the file is on tape, copy it to the usercode indicated for the AUDITnnnn file. Once you make the audit file available, the Databridge Engine automatically resumes processing.

If for some reason you cannot make the audit file available, stop the Databridge Client by typing QUIT NOW on the Client system.

Problem: You are running multiple Databridge Clients, and all of them seem to stop processing.

Solution: Most likely, only one of the Databridge Clients has stopped because of a processing problem; the others have stopped because of a resource contention problem on the host or network.

To correct this situation, look at the ODT and at the Windows Event Viewer for messages related to the Databridge Client. (The previous two problem descriptions above list possible messages.)

When you locate and respond to the message for the problem Client, the other Clients resume processing automatically from where they left off.

Problem: You are unable to execute the dbutility program.

Solution: Make sure you have included the Databridge Client program directory in the operating system's PATH environment variable.

Problem: The Databridge Client gets an index creation error for a table that uses a legitimate DMSII SET as an index.

Solution: There is no guarantee that the Databridge Engine will always produce tables without duplicate records at the end of the data extraction phase.

Most of the time, duplicate records occur when records are deleted and later reinserted into the data set (this sometimes happens in environments where the DMSII applications use delete/create pairs, or in compact data sets). If a record ends up in a location different from the original one, the Databridge Engine sees it twice, resulting in a duplicate record.

The Client normally runs the script script.clrduprecs.tablename when an index creation fails. This script removes all occurrences of duplicate records, as they will be reinserted during the fixup phase. You can inhibit the running of this script by resetting the DSOPT_Clrdup_Recs bit (32768) in the ds_options column of the DATASETS table entry. If you have disabled this bit, you must remove the duplicate records manually.

When this problem occurs, use the procedure described in "Using SQL Query to Find Duplicate Records" to query for duplicate records and remove them.

Alternatively, you can clone the data set when the database is inactive, or clone the data set offline (the Databridge Host Administrator's Guide provides information about cloning offline).

Problem: The Databridge Client stops at the start of the fixup phase with the following error:

Stopping: Errors occurred during data extraction

Solution: The Databridge Client stops at this point if records were discarded. There are two types of discards:

  • Discards created by the Databridge Client because of data errors in items used as keys.
  • Discards created by the bulk loader because of internal errors. This type of error typically does not occur; if it does, it indicates that the program failed to detect a data error.

The Databridge Client stops so that you can review these errors. You can fix the data in the discard files that the Databridge Client creates and load the records using a relational database query tool. Alternatively, you can fix the bad data on the mainframe and let normal update processing take care of it. If you restart the process command, the fixup phase proceeds normally.

Problem: The Databridge Client stops at the start of the fixup phase with the following error:

Stopping: Errors occurred during index creation

Solution: The Databridge Client stops at this point if one or more index creations fail. Determine why the index creation failed and remedy the situation, if possible. For example, if you did not have a large enough TEMP SEGMENT in Oracle, increase its size and execute the index creation scripts using SQL*Plus. Once the indexes are created, you can change the ds_mode of the affected data sets to 1 and resume the process command, which then proceeds normally.

Tables that do not have indexes do not cause the Databridge Client to stop at the beginning of the fixup phase. The Databridge Client deselects such data sets and sets their ds_mode column to 11 before entering the fixup phase. Subsequent process commands will not select these data sets unless you fix the problem and set their ds_mode columns to 1. You can re-clone such data sets at any time.

Problem: The Databridge Client stops at the start of the fixup phase with the following error:

Stopping: Errors occurred during data extraction and index creation

Solution: This message indicates that both of the previous two conditions have occurred.

Using SQL Query to Find Duplicate Records

Use the following SQL query to list the key values and record counts for duplicate records in a table, given the combination of keys used as the index. This query is also useful for determining whether a certain key combination produces a unique key.

SELECT key_1, key_2,...key_n, COUNT(*) FROM tablename
GROUP BY key_1, key_2,...key_n
HAVING COUNT(*) >1
where:

key_1, key_2, ... key_n are the columns that make up the index for the table.
tablename is the name of the table for which the error occurs.

If no records are duplicated, the relational database query tool indicates that no rows were returned. If the SQL query returns one or more groups of duplicates, do the following:

  1. Manually delete the extra record or records for each combination of duplicate records.

  2. Execute a dbutility runscript command for each table that contained duplicate records, specifying the index creation script as follows:

    dbutility -n runscript dbscripts\script.index.tablename

  3. Set ds_mode = 1 for each data set that contained duplicate records.

  4. Execute a dbutility process command.

    Note

    If the query routine returns an unusually high number of duplicates, there may be more serious problems with your keys or the process that creates them. For more information about how Databridge uses keys, see Creating Indexes for Tables.
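
The grouping logic of the SQL query in this section can also be illustrated outside the database. This hypothetical Python sketch counts key combinations and reports those that occur more than once, mirroring the GROUP BY ... HAVING COUNT(*) > 1 query:

```python
from collections import Counter

def find_duplicates(rows, key_columns):
    """Return {key_tuple: count} for key combinations appearing more than once."""
    counts = Counter(tuple(row[col] for col in key_columns) for row in rows)
    return {key: n for key, n in counts.items() if n > 1}

# Hypothetical table rows keyed on (key_1, key_2):
rows = [
    {"key_1": 10, "key_2": "A", "payload": "x"},
    {"key_1": 10, "key_2": "A", "payload": "x"},   # duplicate record
    {"key_1": 11, "key_2": "B", "payload": "y"},
]
print(find_duplicates(rows, ["key_1", "key_2"]))  # {(10, 'A'): 2}
```

Each key tuple reported with a count greater than 1 corresponds to one group of duplicate records that the index creation would reject.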


Log and Trace Files

The Databridge Client produces log files and trace files. This topic describes these files and the differences between them.

Log Files

The log file contains information about errors that the Client encounters, as well as statistics that are useful in tracking performance problems. The log also contains messages that are useful when reporting problems to Micro Focus Technical Support (for example, the versions of the various host components). When a command is executed for a data source, one or more messages appear onscreen and are written to the log file for that data source. Log files are created in the logs subdirectory of the data source's working directory. Log files are named

dbyyyymmdd.log

where db is a configurable prefix that can be redefined in the configuration file and yyyymmdd is the date the log file was created. If more than one log file is created for a data source on the same date, the time of day (_hhmnss) is appended to the date to make the filename unique (for example, dbyyyymmdd_hhmnss.log). (For details about configuring the log, see Export or Import a Configuration File.)
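
As an illustration of this naming scheme, the following Python sketch builds a log filename the same way (the collision check here is a simplified assumption, not the Client's actual code):

```python
from datetime import datetime

def log_file_name(prefix="db", when=None, existing=()):
    """Build a log filename; append _hhmnss when the date-only name is taken."""
    when = when or datetime.now()
    name = f"{prefix}{when:%Y%m%d}.log"
    if name in existing:
        name = f"{prefix}{when:%Y%m%d}_{when:%H%M%S}.log"
    return name

ts = datetime(2024, 3, 15, 9, 30, 5)
print(log_file_name(when=ts))                               # db20240315.log
print(log_file_name(when=ts, existing={"db20240315.log"}))  # db20240315_093005.log
```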

Some messages are written only to the log file. These messages generally include information that may be useful when reporting problems to Micro Focus Technical Support, such as version information for the various host and Client components, the OS version, the database version, and, in the case of Microsoft SQL Server, the ODBC driver version. We recommend that you use ODBC driver version 17.4 or newer.

When sending log files to Micro Focus Technical Support, always send the entire log file (not screenshots or segments of the file), as the Client captures a lot of information about the environment in which it was run at the beginning of the log file. In version 7.0, most of this information is repeated if a log switch occurs during the Client run. Knowing exactly what version of the software is involved is very important when troubleshooting.

Trace Files

Tracing is a powerful option that provides details on the internal processing of the Databridge Client.

Note

Trace files are only required if you experience a problem that requires further diagnostics by Micro Focus Technical Support. Do not enable tracing during routine operations as the trace files tend to be huge. You can delete these files when you no longer need them.

Trace files are named

traceyyyymmdd.log

where trace is a user-configurable prefix and yyyymmdd is the date the trace file was created. The file extension is .log. If more than one trace file is created on the same date, the time is appended after the date to make the filename unique. Trace files are written to the working directory for the data source.


Using Log and Trace Files to Resolve Issues

When an error or problem occurs, use log and trace files to troubleshoot the cause.

  • Review the log file, which contains a record of all data errors.

  • To prevent problems caused by excessive trace and log file size, use the max_file_size parameters to limit file size. On UNIX, the Client will crash if the trace file exceeds the system-imposed file size limit.

  • If you are having problems and contact Micro Focus Technical Support, they may request a copy of the log file. We recommend that you use a compression utility before sending the log file.

  • If Micro Focus Technical Support requests a trace, make sure that the old trace files are deleted before starting the Client with the -t nnn (or -d) option. You will need to use a compression utility (such as WinZip on Windows or gzip on UNIX) before sending the trace file (which can be quite large). You can use the splitter utility to break up big trace files into smaller, more manageable files. For help on running the splitter program, type splitter with no parameters.

    The splitter program can also split binary files (for example, WinZip® files) that are too large to ship as an e-mail attachment. The original file can be reconstructed from the split files by using the copy /B Windows command. When splitting binary files, you must specify the -B option for the splitter program.
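
What splitter -B and copy /B accomplish can be sketched as a simple split-and-concatenate round trip (illustrative Python, not the actual splitter utility):

```python
def split_bytes(data, chunk_size):
    """Split a byte string into fixed-size chunks, as a file splitter would."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def reassemble(chunks):
    """Concatenate the chunks back together (the equivalent of copy /B)."""
    return b"".join(chunks)

data = bytes(range(10)) * 100          # 1000 bytes of sample "binary" data
parts = split_bytes(data, 256)
print(len(parts))                      # 4 chunks
print(reassemble(parts) == data)       # True: the round trip is lossless
```

The key point is that binary splitting is a pure byte-level operation, which is why the -B option matters: a text-mode split could alter line endings and corrupt the reassembled file.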


Enabling Tracing

Note

We recommend that you enable trace options only when directed to do so by Micro Focus Technical Support. Specifically, avoid full tracing, SQL tracing, protocol tracing, or API tracing. The volume of logging data is so large it can dramatically slow performance of the Client and fill up your hard disk. Compress files using a compression utility before you send them to Micro Focus Technical Support for analysis. Very large trace files should be broken into manageable pieces with the splitter utility. For help on running the splitter utility, type splitter with no parameters.

The trace option controls the volume and type of information written to the trace file.

To enable a trace using dbutility

  1. Determine the type of trace you want. Then, add the value for each tracing option (see the table below), and use the result for nnnn.

  2. Specify the -t nnnn (or the -d) option using the following syntax:

    dbutility -t nnnn command arguments
    dbutility -d command arguments

    where nnnn is a bit mask that specifies the tracing option. You can prefix it with 0x to provide the value in hex.

    If you are not sure which tracing masks to use, use the -d option. This is the equivalent of -t 0xB7F, which enables the most useful trace options.

    You can enter other command-line options, such as -U, -P, and -D, with the trace option. The order of the options is not important, as long as all dash (-) options precede the command and its arguments. (See dbutility Command-Line Options.)

  3. (Optional) To analyze performance, you can use an additional command line option, -m. This option includes a five-digit millisecond timer in all output messages. The timer is appended to the timestamp as (mmmmm).

  4. (Optional) To change the trace option when the Databridge Client is running, use the commands explained in Controlling and Monitoring dbutility.

To enable a trace from the Administrative Console

To create a trace file from the Administrative Console, click the "Trace and Log Options" item in the data source's Advanced menu. If there is no active run for the data source, the trace options you select are applied to the next launched run; if there is an active run, tracing is enabled dynamically for that run. The tracing options are not persistent: once they are used, the Administrative Console clears them. If you want to start a run with tracing, a simpler option is to use the Process (with options) item in the Advanced menu of the data source and select the -d option. This gives the default tracing, which you should use unless directed otherwise.

To enable tracing for a clone command only, the Clone item in the Advanced menu of the data source also allows you to select the -d option. Alternatively, you can click the "Trace and Log Options" item in the data source's Advanced menu and select the desired trace options.

To stop tracing, click "Select None" in the Trace and Log Options dialog and click OK.


Trace Options

Decimal Hexadecimal Description
0 0 Disables tracing.
1 0x1 Writes log messages to the trace file in addition to trace information.
2 0x2 Traces all SQL commands as the Databridge Client passes them to the relational database. Typically, these messages are SELECT or UPDATE SQL statements and stored procedure calls.
4 0x4 Traces all DBServer or DBEnterprise communications and key actions associated with Databridge on the host, including RPC calls such as DB_SELECT and DB_READ and their responses.
8 0x8 Traces information on the Databridge Client control tables as they are loaded from the relational database (that is, load tracing).
16 0x10 Enables relational database API tracing, which traces calls from the Databridge Client to the ODBC, OCI or CLI APIs.
32 0x20 Traces the records that are written to temporary data files (or UNIX pipes) and used by the bulk loader utility during the data extraction phase of cloning.
64 0x40 Traces information exchanged between the Databridge Server and the Databridge Client. The blocks of data are traced as they are read and written to the TCP interface. The messages are listed in DEBUG format, which is an offset followed by 16 bytes in hexadecimal, followed by the same 16 bytes interpreted as EBCDIC text. The non-printable EBCDIC characters are displayed as periods (.).
128 0x80 Traces all messages that are routed through the Databridge Client Manager (primarily messages from the Client Console and Client Configurator to the Client, DBClient).
256 0x100 Traces debugging output that is temporarily added to the Databridge Client (primarily engineering releases).
512 0x200 Displays the configuration file parameters as they are processed.
1024 0x400 Enables tracing of information exchanged between DBClient (or DBClntCfgServer) and the service. The output looks like a DBServer protocol trace, except that all the data is ASCII.
2048 0x800 Enables SQL tracing while running user scripts during define and redefine commands.
4096 0x1000 Prints the Read_CB exit line in the trace file. This option is useful only for determining when the execution of a SQL statement ends because the start of the subsequent wait for TCP input is not traced.
8192 0x2000 Traces DOC records. This option provides the same information you would get by setting trace option bit 4, which traces all messages used in server communications, but allows you to trace only the DOC records. When used in conjunction with bit 4, this bit is redundant.
16,384 0x4000 This bit is reserved for internal use only.
32,768 0x8000 This bit is reserved for internal use only.
65,536 0x10000 Enables verbose tracing.
131,072 0x20000 Enables thread tracing.
262,144 0x40000 Enables DMSII buffer management tracing.
524,288 0x80000 Enables row count tracing.
1,048,576 0x100000 Enables SQL buffer size calculations.
2,097,152 0x200000 Enables load balancing tracing.
4,194,304 0x400000 Enables host variable tracing.
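
Because the trace option is a bit mask, values from the table combine by addition (or bitwise OR). For example, decomposing the -d default of 0xB7F into bits from the table gives the following (the constant names here are illustrative, not part of dbutility):

```python
# Trace option bits taken from the table above
LOG_TO_TRACE = 0x1    # log messages also written to the trace file
SQL_TRACE    = 0x2    # SQL commands passed to the relational database
SERVER_COMMS = 0x4    # DBServer/DBEnterprise communications
LOAD_TRACE   = 0x8    # Client control table load tracing
API_TRACE    = 0x10   # relational database API calls (ODBC/OCI/CLI)
BULK_LOADER  = 0x20   # bulk loader temporary data files
TCP_TRACE    = 0x40   # raw TCP data blocks
DEBUG_OUTPUT = 0x100  # temporary debugging output
CONFIG_TRACE = 0x200  # configuration file parameters
SCRIPT_SQL   = 0x800  # SQL in user scripts (define/redefine)

mask = (LOG_TO_TRACE | SQL_TRACE | SERVER_COMMS | LOAD_TRACE | API_TRACE
        | BULK_LOADER | TCP_TRACE | DEBUG_OUTPUT | CONFIG_TRACE | SCRIPT_SQL)
print(hex(mask), mask)   # 0xb7f 2943
```

This confirms that -t 0xB7F and -t 2943 request the same set of options, which matches the -d example in the next section.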

Examples

Following are different ways you can set the tracing options.

Trace Option Example (Decimal and Hexadecimal) Result
dbutility -t 7

dbutility -t 0x7
Traces log data (1), SQL (2), and host events (4).
dbutility -t 2943

dbutility -t 0xB7F

dbutility -d
Traces the most commonly desirable options.

NOTE: Whenever Micro Focus Technical Support asks
you for a trace, use the -d option, unless you are told otherwise.

Trace Messages

Any of the messages in this section may appear in the trace file, depending on which options you select when you execute dbutility. See Enabling Tracing. Successive executions of dbutility are separated by a line of 132 equal signs (=).

Database API Trace

Database API tracing is available via the -t 16 or -t 0x10 command-line option. The API trace messages trace calls to ODBC (Microsoft SQL Server) or OCI (Oracle). The following messages may appear when you use database API tracing:

Message Description
Abort_Transaction: This message indicates that the Databridge Client is making an API call to rollback the current transaction group.
Begin_Transaction: This message indicates that Databridge Client is starting a transaction group.
BindColumnPtr, stmt=nnnn: This message only appears when the configuration parameter aux_stmts has a nonzero value. It indicates that the columns involving a host variable in the SQL statement that was just parsed are being bound to a memory address. This message follows every Parse_SQL message.
Bind_Record: col=number, name=colname, ofs=number, size=number, type=number This message appears when the various data columns referenced explicitly or implicitly in a select statement are bound to fields in a program structure. This messages lists the column number (col=number), item name (name=colname), offset of the field in the structure expressed as a hexadecimal number (entry ofs=number), size of the field (in bytes) expressed as a decimal number (size=number), and code for the sql_type of the column (type=number).
Cancel_SQL: This message indicates that the Databridge Client canceled a SQL statement that failed to complete in the designated time. The timer thread performs this operation when it determines that the threshold specified by the configuration parameter sql_exec_timeout has been reached.
Close_Database: This message indicates that a database session has been closed. The Databridge Client typically uses two database sessions at a time.
Commit_Transaction: This message indicates that the Databridge Client is making an API call to commit the current transaction group.
Execute_PreParsed_ SQL for stmt number, table 'name' This message is immediately followed by the SQL_DATA message, which displays the actual values of the host variables for the pre-parsed SQL statement that is being executed.
Execute_SQL: This message indicates that the Databridge Client called the Execute_SQL procedure, which executes most SQL statements not involving host variables. This call is preceded by one or more calls to Process_SQL, which constructs the SQL statements in a temporary buffer.
Execute_SQL_Direct: This message indicates that the Databridge Client called the Execute_SQL_Direct procedure, which executes SQL statements directly (versus from the buffer that Process_SQL creates).
Fetch_Results: No more rows This message appears when the Databridge Client loads the Client control tables and indicates that no more rows are available in the select statement result.
Fetch_Results: Row retrieved This message appears when the Databridge Client loads the Client control tables and indicates that the Databridge Client successfully read the row when it retrieved the results of a select statement.
OCIBindByPosition: col_no= nn, addr=0xhhhhhhhh, len =0xhhhh, ind=nn This message, which is limited to the Databridge Client for Oracle, indicates that the column in the given position in the parsed SQL statement was bound to a host variable at the given address and length.
Open_Database: user =userid, pwd=**, {db=database data source=src}, rslt= dbhandle
Open_Stmt: Opened stmt nnnn This message indicates that the Client allocates a new stmt structure associated with a SQL statement that uses host variables. The Client allocates a maximum number of auxiliary statements (configuration file parameter aux_stmts) before it starts reusing these structures. The Client reuses the least recently used (the oldest) stmt in this case.
Oracle NLS parameter name= value This message appears when the Databridge Oracle Client connects to the database. One of the first things it does is read the NLS parameters to determine the language and decimal character being used. The Client then automatically adjusts the connection so that it operates properly in the given environment. The bcp_delim parameter is automatically set to the value that SQL*Loader expects.
Parse_SQL: SQL[number]=stmt This message indicates that the SQL statement involving a host variable is being parsed using the stmt in question.

Using host variables improves performance: a statement is parsed only once, the host variables are bound to specific columns, and the statement is then executed multiple times after setting the host variables to the desired values.
Procedure_Exists(name) This message indicates that the Databridge Client called the procedure Procedure_Exists, which reads the data dictionary to determine if the given stored procedure exists.
Process_SQL: SQL=SQLText This message, which should not be confused with a similar SQL tracing message, overrides the SQL trace when both SQL and API tracing are enabled. This avoids having duplicate entries in the trace.
SQLBindParameter: col_no=nn , addr=0xhhhhhhhh, len=0xhhhh, ind_addr=0xhhhhhhhh, ind=nn This message, which applies to all ODBC Clients, indicates that the given column in the prepared SQL statement was bound to a host variable at the given address and the given length. The ind column is an indicator that is used to mark columns as being null.
SQL_DATA[number]= ...|...|... This message, which should not be confused with a similar SQL tracing message, overrides the SQL trace when both SQL and API tracing are enabled.
Table_Exists (name) This message indicates that the Databridge Client called the procedure Table_Exists, which reads the data dictionary to determine if the given table exists.

Bulk Loader Trace

Bulk loader tracing is available via the -t 32 or -t 0x20 command-line option. Bulk loader data tracing results in records of the bulk loader data files (or UNIX pipes) being written to the trace file during the data extraction phase of cloning. Bulk loader data trace messages are in the following form:

Message Description
Build_Pipe_Stream: table=name, record=data where data is the actual ASCII data that is written to the temporary data file (or UNIX pipe) used by the bulk loader utility.

Configuration File Trace

The configuration file trace is available via the -t 512 or -t 0x200 command-line option. These messages log configuration file parameters as they are being processed.

For example:

CONFIG: nnn. Config_file_line

If a binary configuration file is used, the Client uses the same output procedure as the export command to write the text version of configuration file into the trace file.


DBServer Message Trace

Databridge Server message tracing is available via the -t 4 or -t 0x4 command-line option. This trace highlights pertinent information during communications with Databridge Server on the host. These messages are listed in the trace file and may include the following:

Message Description
Common_Process: DBDeSelect Table=name, stridx= nnnn, rslt= errorcode The DBDeselect RPC call is used to deselect data sets that need to be excluded from change tracking. An example would be a data set whose AA Values are invalidated by a garbage collection reorganization. This message shows the name of the data set and its related structure index. If errorcode is nonzero, this message is followed by a Host message.
Common_Process: DBSelect Table=name, stridx=nnnn, rslt=errorcode The DBSelect RPC call is used to select data sets when the Databridge Client starts a process or a clone command. This message shows the name of the data set and its related structure. If errorcode is nonzero, this message is followed by a Host message.
Datasets_CB: dataset_name [/rectype] (strnum), subtype = dd, ds_options=0xhhhhhhhh, misc_flags = 0xhhhhhhhh CB stands for callback. This message shows the receipt of a data set information record from the Databridge Server during the execution of a define or redefine command.
Define_Table_Items: table= name, item=name data_type (sql_length) This message shows the data type and SQL length of data items as they are inserted in the Client control tables. This occurs during execution of the define or redefine command.
Get_Response: Req=req Rslt=rslt Len=len where req is the request type (RPC name), rslt is the returned status (typically OK), and len is the number of bytes of data that follow the status in the response packet.

This message indicates that the Databridge Client received a response to a remote procedure call other than DBREAD or DBWAIT.
Layout_CB: DataSet = name[/rectype], item (number) = name, data_type = dd, dlen = dd, scaling = dd CB stands for callback. This message shows the receipt of a data set item layout information record from the Databridge Server during the execution of a define or redefine command.
Read_CB: Type=typename StrIdx=iii, aa= hhhhhhhhhhhh This message indicates that the Databridge Client received a response from the Databridge Server in response to a DBREAD or DBWAIT remote procedure call.

typename is the response name (CREATE, DELETE, MODIFY, STATE, DOC, MODIFY_BI, or MODIFY_AI)

iii is the structure index assigned to the structure when it is selected via the DBSELECT call

hhhhhhhhhhhh is the value of the absolute address of the DMSII record (For protocol levels greater than 6, this value is all zeros unless the data set uses the AA Value as a key.)
Read_CB: Type=DOC[AF_HEADER], Afn=afn, RectoQPT=dd, UpdateLev=ul, TS='ts', DMSRel=nnn, DMSBuild=nnn, AudLev=nnn, AFSize=nnn, AFOrigin=orig, firstABSN=absn1, lastABSN=absn2 This message is always sent to the Client when the Databridge Engine opens a new audit file. It contains information about the audit file, including the audit file number afn, the update level ul, and the audit file origin orig. This last item is particularly useful when using DBEnterprise, as it allows the Client to detect which access method is being used to read the audit file (that is, direct disk, indirect disk, or cache).
Read_CB: Type=DOC [type], . . . This message is printed only when the enable_doc_records parameter is set to Yes in the configuration file. The Databridge Client uses the DOC record only for debugging purposes. DOC records are documentation records that are optionally sent by the Databridge Engine to document the events that occur while Databridge Engine is reading the audit files.

The various types include BEG_TRAN, CLOSE, END_TRAN, OPEN, and REORG. The rest of the message varies based on the DOC record type. In the case of BEG_TRAN and END_TRAN, the message includes the transaction count, while OPEN and CLOSE messages give information about the job number, the task number, and the task name of the program that accessed the DMSII database. REORG DOC records are sent to the Client to notify it that some sort of reorganization has occurred for the specified structure index, which is printed in the message. The remaining DOC records are identified by type only, with no additional information.
Read_CB: Type=LINK_AI StrIdx= number This message indicates that the Databridge Client received a DMSII LINK after image from the Databridge Server in response to a DBREAD or DBWAIT remote procedure call.

Information Trace

Information tracing occurs via the default -t 1 or -t 0x1 command-line option. Information tracing includes the following messages, which are not displayed on the screen, as well as all messages that are displayed on the screen.

Message Description
command line echo Everything you type at the command line is echoed in the trace file.
Current date is: day month year This is the date you ran the Client. It is used to identify sections of the trace file as there might be several runs of dbutility logged to the same trace file.
Negotiated Protocol level = n, Host version n.n This is the negotiated protocol level that the Databridge Client and the Databridge Server are using to communicate. For example, a protocol level 7 Databridge Client and a protocol level 6 server use a negotiated protocol level of 6 in all communications.
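
The -t values documented throughout this appendix are individual bits in a single trace mask (the Verbose Trace section below refers to the TR_VERBOSE bit of the mask). Assuming the Client accepts a combined mask value for -t, which the hexadecimal bit values suggest but which you should verify against your release, the value for tracing several categories at once can be computed by OR-ing the bits together:

```python
# Trace bits as documented in this appendix. OR-ing them yields a combined
# trace mask; passing a combined value to -t is an assumption based on the
# bit-valued hexadecimal forms shown in this appendix.
TR_INFO     = 0x1        # information trace (the default)
TR_SQL      = 0x2        # SQL trace
TR_LOAD     = 0x8        # load trace
TR_PROTOCOL = 0x40       # protocol trace
TR_DOC      = 0x2000     # DOC record trace

mask = TR_SQL | TR_PROTOCOL
print(f"-t 0x{mask:x}")   # -t 0x42
```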

Load Trace

Load tracing is available via the -t 8 or -t 0x8 command-line option. Load tracing messages refer to the Client control tables. To check these tables, use the dbutility display command. See dbutility Commands.

The Load External messages are displayed only during a dbutility define or redefine command. They indicate that the Databridge Client is reading table names defined in other data sources to make sure that any newly-defined tables and indexes do not duplicate table names or index names defined previously in other data sources.

The following messages may appear when you use load tracing:

Message Description
Load: DataSet = name[/rectype], strnum = number, AFN = afn, ABSN = absn This message appears for every data set loaded from the DATASETS Client control table. The message lists the data set name (and the record type for variable-format data sets) as well as the structure number, the audit file number, and the audit block serial number. For most commands, this message appears for only those data sets whose active column is 1.
Load: dms_item = name, item_number = number, DataSet = name[/rectype] This message appears for every DMS item loaded from the DMS_ITEMS Client control table. The message lists the data set name (and the record type for variable-format data sets) as well as the DMSII item name and the corresponding item number.

This message does not appear during the process and clone commands because all of the information the DMS_ITEMS entries contain is in the DATAITEMS Client control table.
Load: datatable = name, DataSet = name[/rectype] This message appears for every data table loaded from the DATATABLES Client control table. The message lists the data set name (and the record type for variable-format data sets) and the table name.
Load: dataitem = name, datatable = name This message appears for every data table loaded from the DATAITEMS Client control table. The message also displays the table name to which the item belongs.
Load External: DataSource = name, TableName = name, IndexName = name The Load External messages appear during a dbutility define or redefine command only. They indicate that the Databridge Client is reading table names defined in other data sources to make sure that any newly-defined tables and indexes do not duplicate table names or index names defined previously in other data sources.
Load: global_dataset = Global_DataSet, AFN = afn, ABSN = absn This message appears when the global data set is loaded from the DATASETS Client control table. Under normal circumstances, the AFN and the ABSN are 0, as the Databridge Client sets these entries to 0 after it propagates the global stateinfo for all data sets that have a value of 1 in their in_sync columns before the process command terminates.

If an OCCURS table filter is being used, the Load Trace also includes a display of the filter data, which can also be generated by using the display command of the makefilter utility. This display immediately follows the log message "Loading binary filter file 'config\dbfilter.cfg'".

Filter:  NumFilters = nnn, NumFilterEntries = nnn, ConstantPoolSize=0xhhhh
Constant Pool:
0000 hh hh hh . . .
Table 'name', filter_start = nnn, num_entries = nnn
    Type = ColumnName: item_name = 'name'
    Type = Constant: associated item_name = 'name', offset = ddd, length = lll
    Type = Operator: op
    Type = Operator: END
. . .

Each OCCURS table that is being filtered has a starting index and a count that represents the number of tokens associated with the table. Constants are associated with an item, whose properties they share. Constants are put into a global constant pool that is shown in debug format; individual constants are represented in DMSII native form (i.e., binary data). A constant is referenced by its offset into the constant pool, and its length is the same as that of the associated data item. An offset of -1 denotes a NULL. The filters are represented in reverse Polish form. The various operators are represented by two- or three-letter terms such as EQL, NEQ, AND, OR, and so on. Every filter ends with an END operator.
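
To illustrate how a reverse Polish filter of this form is evaluated, the following sketch implements a minimal stack machine over the token kinds shown in the trace (ColumnName, Constant, Operator). This is a hypothetical illustration, not the Client's actual filter code, and the sample tokens and column names are invented:

```python
# Hypothetical sketch: evaluating a reverse Polish (postfix) filter like
# the one shown in the Load Trace. Not the Client's actual implementation.

def eval_filter(tokens, row):
    """tokens: list of (kind, value) pairs; row: dict of column values."""
    stack = []
    for kind, value in tokens:
        if kind == "ColumnName":          # push the row's value for this item
            stack.append(row[value])
        elif kind == "Constant":          # push a literal (None models offset -1, i.e. NULL)
            stack.append(value)
        elif kind == "Operator":
            if value == "END":            # every filter ends with END; result is on top
                return bool(stack.pop())
            b, a = stack.pop(), stack.pop()
            if value == "EQL":
                stack.append(a == b)
            elif value == "NEQ":
                stack.append(a != b)
            elif value == "AND":
                stack.append(a and b)
            elif value == "OR":
                stack.append(a or b)
    raise ValueError("filter did not terminate with END operator")

# Equivalent of the condition: DEPT = 42 AND STATUS <> 'X'
tokens = [("ColumnName", "DEPT"), ("Constant", 42), ("Operator", "EQL"),
          ("ColumnName", "STATUS"), ("Constant", "X"), ("Operator", "NEQ"),
          ("Operator", "AND"), ("Operator", "END")]
print(eval_filter(tokens, {"DEPT": 42, "STATUS": "A"}))   # True
```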


Protocol Trace

Protocol tracing is available via the -t 64 or -t 0x40 command-line option. Protocol traces display the data that is read from or written to the TCP/IP interface during all communication with the Databridge Server.

Message Description
read: number_of_bytes_read Received data. These messages are followed by a hexadecimal dump of the data in DEBUG format, with all data interpreted as EBCDIC text. Non-printable characters are displayed as periods (.).
write: number_of_bytes_written Sent data. These messages are followed by a hexadecimal dump of the data in DEBUG format, with all data interpreted as EBCDIC text displayed in ASCII. Non-printable characters are displayed as periods (.).
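
The dump format described above can be approximated with a short sketch: bytes are shown in hexadecimal, then interpreted as EBCDIC (Python's built-in cp037 codec) with non-printable characters replaced by periods. The exact column layout of the Client's DEBUG format is an assumption here; only the general shape is illustrated:

```python
# Hypothetical sketch of a DEBUG-style hex dump that interprets bytes as
# EBCDIC, similar to what the protocol trace shows. Uses Python's cp037
# (EBCDIC) codec; non-printable characters become periods.

def ebcdic_hex_dump(data, width=16):
    lines = []
    for off in range(0, len(data), width):
        chunk = data[off:off + width]
        hex_part = " ".join(f"{b:02x}" for b in chunk)
        text = chunk.decode("cp037", errors="replace")
        text = "".join(c if c.isascii() and c.isprintable() else "." for c in text)
        lines.append(f"{off:04x}  {hex_part:<{width * 3}} {text}")
    return "\n".join(lines)

# "HELLO" in EBCDIC is c8 c5 d3 d3 d6
print(ebcdic_hex_dump("HELLO".encode("cp037")))
```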

SQL Trace

SQL tracing is available via the -t 2 or -t 0x2 command-line option. The following SQL messages may appear in the log file:

Message Description
SQL=sqltext Indicates general SQL tracing where sqltext is the actual SQL command sent to the relational database.
SQL[number]=sqltext Indicates SQL tracing that involves host variables in the Databridge Client for Oracle when the configuration parameter aux_stmts has a nonzero value.

number is the statement number, and sqltext is the actual SQL command sent to the relational database.
SQL_DATA[number]= ...|...|... This message shows the data being passed to the database API when executing updates involving previously parsed SQL statements that use host variables. number is the statement number.
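
The host-variable mechanism these messages report is the standard prepared-statement pattern: the SQL text is parsed once with placeholders, then executed repeatedly with different bound values. The following sketch is purely illustrative (sqlite3 stands in for the actual relational database API, and the table and values are invented):

```python
# Hypothetical illustration of the host-variable pattern the SQL trace
# reports: one parsed statement with placeholders, many executions with
# bound values. sqlite3 stands in for the actual database API.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER, name TEXT)")

# One statement text, reused for every row -- analogous to the Client's
# reuse of auxiliary statements (aux_stmts) with host variables; the
# trace would show the data values separately, as SQL_DATA[n]= ...|...
stmt = "INSERT INTO customer (id, name) VALUES (?, ?)"
for row in [(1, "ACME"), (2, "GLOBEX")]:
    conn.execute(stmt, row)

print(conn.execute("SELECT COUNT(*) FROM customer").fetchone()[0])  # 2
```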

User Script Trace

User script tracing is available via the -t 2048 or -t 0x800 command-line option. It causes the SQL statements in user scripts to be traced during a define or redefine command only, providing a subset of the SQL trace. This option has no effect if SQL tracing is enabled.


Read Callback Exit Trace

Read callback exit tracing is available via the -t 4096 or -t 0x1000 command-line option. It causes the Client to display the message shown below when it exits the read callback procedure, indicating that the Client is done processing a data buffer and is ready to read the next one. This is only useful when investigating why the Client is running slowly. In such cases, we recommend using the -m command-line option, which produces finer-granularity timestamps.

Read_CB: Exit


DOC Record Trace

DOC record tracing is available via the -t 8192 or -t 0x2000 command-line option. It causes the DOC records received from the Databridge Engine to be traced during a process or clone command. This option is redundant when Databridge Server message tracing is enabled; see DBServer Message Trace.


Verbose Trace

Verbose tracing is available via the -t 65536 or -t 0x10000 command-line option. These messages are described in the Databridge Errors and Messages Guide and are identified by the TR_VERBOSE bit (0x10000) in the trace mask.


Thread Trace

Thread tracing is available via the -t 131072 or -t 0x20000 command-line option. These messages include the following:

Message Description
Bulk_loader thread[nn] {started | ready | exiting} (Windows only) These messages indicate a change in the state of the bulk loader thread(s).
  • started indicates that the thread was started. The thread is only started when there are tables to be bulk loaded.
  • ready indicates that the thread is ready to process requests to run the bulk loader. The bulk loader thread gets the load request from its work queue. If there is none, it blocks until one becomes available.
  • exiting indicates that the thread is no longer needed and is exiting. At this point, the Client is ready to start processing audit files, as soon as the index thread finishes creating indexes for all of the tables that were cloned.
Bulk loader thread[nn] starting {sql*loader | bcp} for table 'name' (Windows only) This message indicates that the bulk loader thread in question is launching the bulk loader for the specified table.
Console_Reader thread {starting | ready | exiting } These messages indicate a state change in the Console thread. The command line Client uses this thread to read console commands from the keyboard. The service-based Client (DBClient) uses this thread to handle console commands that originate in the GUI Console and are passed to the Client as RPCs. The various states indicate the following:
  • starting indicates that the thread was successfully started.
  • ready indicates that the thread is waiting for keyboard input in the case of dbutility and waiting for an RPC in the case of DBClient.
  • exiting means that the thread is about to exit.
Index_creator thread {started | ready | exiting} These messages indicate a state change in the index creator thread.
  • started indicates that the thread was started because there are tables for which indexes must be created.
  • ready indicates that the thread is ready to process requests to create indexes for tables. The index creator thread gets the index creation request from its work queue. If there is none, it blocks until one becomes available.
  • exiting indicates that the thread is no longer needed and is exiting. At this point, the Client is ready to start processing audit files.
Update Worker thread [nn] empty_work_queue, EOT=n, SDW=n, n_active_threads=nn This message, which is only seen when using multi-threaded updates, indicates that the specified update worker thread found its work queue empty. The remaining values reflect the state of update processing at that point, including the number of active worker threads. This information is only useful if you are diagnosing a problem that deals with the management of the update worker threads.
Update Worker thread [nn] {started | ready | exiting} These messages, which are only seen when using multi-threaded updates, indicate a state change in one of the update worker threads.
  • started indicates that the thread was started. The update threads are started at the start of the process command.
  • ready indicates that the thread is ready to process requests to execute updates. The update worker threads get the update requests from their work queues. If there is no request in the queue, the thread blocks until one becomes available.
  • exiting indicates that the thread is no longer needed and that it is exiting. This only happens when the Client is shutting down.
Waiting for bulk_loader thread to finish (Windows only) This message indicates that the bulk_loader thread is not finished loading tables. The main thread, which is ready to enter the fixup phase, must wait for these operations to complete before updates can be processed. When the bulk loader thread is finished, it displays the message "Bulk_loader thread exiting."
Waiting for index_creator thread to finish (Windows only) This message indicates that the index_creator thread is not finished. The main thread, which is ready to enter the fixup phase, must wait for these operations to complete before updates can be processed. When the index creator thread is finished, it displays the message "Index_creator thread exiting."
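
The thread states reported above (started, ready, exiting) follow a standard worker/work-queue pattern: a worker blocks on its queue until a request or a shutdown signal arrives. The following sketch is a hypothetical illustration of that pattern, not the Client's actual code; the table names and the sentinel-based shutdown are invented for the example:

```python
# Hypothetical worker/work-queue sketch illustrating the thread states
# (started -> ready -> exiting) reported by the thread trace.
import queue
import threading

def worker(work_queue, log):
    log.append("started")
    while True:
        log.append("ready")          # ready: blocked waiting for a request
        item = work_queue.get()      # blocks until a request is available
        if item is None:             # sentinel value: no more work, exit
            log.append("exiting")
            return
        log.append(f"processed {item}")

work_queue = queue.Queue()
log = []
t = threading.Thread(target=worker, args=(work_queue, log))
t.start()
for table in ["CUSTOMER", "ORDERS"]:
    work_queue.put(table)            # queue a request per table
work_queue.put(None)                 # ask the worker to exit
t.join()
print(log)
```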

DMS Buffer Trace

DMS buffer tracing is available via the -t 262144 or -t 0x40000 command-line option. This causes the Client to display the following messages when a DMS buffer is taken from the free buffer list or returned to it.

Message Description
XDR_Get_Buffer: buf=0xhhhhhhhh, buf_cnt=dd, sem_cnt=dd This line is printed every time a DMS buffer is taken off the free list; buf is the address of the buffer, buf_cnt is the number of DMS buffers that have been allocated, and sem_cnt is the number of buffers that are available (note that not all of these may have been allocated yet).
XDR_Return_Buffer: buf=0xhhhhhhhh, sem_cnt=dd This line is printed every time a DMS buffer is returned to the free list; buf is the address of the buffer, and sem_cnt is the number of buffers that are available (note that not all of these may have been allocated yet).
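
The counters in these messages suggest a semaphore-counted free list with lazy allocation: the semaphore tracks how many buffers are available, while buffers are only actually allocated when first needed (which is why the available count can exceed the allocated count). The following sketch is a hypothetical model of that structure, not the Client's implementation:

```python
# Hypothetical sketch of a semaphore-counted free buffer list matching
# the counters in the DMS buffer trace: the semaphore counts available
# buffers, while allocation happens lazily (so sem_cnt can exceed the
# number of buffers actually allocated, i.e. buf_cnt).
import threading

class FreeBufferList:
    def __init__(self, max_buffers, buf_size):
        self.sem = threading.Semaphore(max_buffers)  # available buffers (sem_cnt)
        self.lock = threading.Lock()
        self.free = []                               # buffers currently on the free list
        self.allocated = 0                           # buffers allocated so far (buf_cnt)
        self.buf_size = buf_size

    def get_buffer(self):
        self.sem.acquire()                           # blocks when no buffer is available
        with self.lock:
            if self.free:
                return self.free.pop()
            self.allocated += 1                      # allocate lazily on first use
            return bytearray(self.buf_size)

    def return_buffer(self, buf):
        with self.lock:
            self.free.append(buf)
        self.sem.release()

pool = FreeBufferList(max_buffers=4, buf_size=1024)
buf = pool.get_buffer()
print(pool.allocated)    # 1: only one buffer has been allocated so far
pool.return_buffer(buf)
```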

Row Count Trace

Row count tracing is available via the -t 524288 or -t 0x80000 command-line option. This causes the Client to display the following message when it fetches the row count following the execution of a SQL statement. Note that in the case of user scripts, using the -v option causes the exact same output to appear in the log file when a user script executes an update statement.

Rows updated = dd The value dd represents the number of rows updated.


Buffer Size Trace

Buffer size tracing is available via the -t 1048576 or -t 0x100000 command-line option. This causes the Client to display the following messages at startup when the control tables are being loaded.

Message Description
Item name: hv_len=dd, sqlcmd(oh=dd, gr=dd), ins=(dd,dd), upd(dd, dd); total ins=(dd,dd), upd=(dd,dd) This line is printed every time a data item is processed. It shows the item's contributions to the various SQL buffer sizes.
Computed SQLcmd lengths for table name: [hv_len = dd,] sqlbuf_len = dd, sql_buf2_len = dd, sql_buf_size = dd, [thr_sql_buf_size = dd,] sql_buf2_size = dd This summary line is displayed at the end of the processing of a table's items. In the case of the Flat File Client, the sections enclosed in square brackets are not present.
Buffer sizes are gSQLcmd/SQLCMDLEN = dd/dd, gSQLcmd2 = dd This line is displayed when all the tables have been processed. It shows the sizes of the two SQL buffers used by the main thread. When using multi-threaded updates, refer to the previous message to see the sizes of the update thread SQL buffers.