
Databridge 7.1 Release Notes


Databridge 7.1 Update 2

March 2024

  • A number of problems in the 7.1 base release and 7.1 Update 1 were corrected; see Patch Notes for details.

  • The atm_crypto libraries were updated to version 3.6.54 (OpenSSL 3.0.13) to address recent vulnerabilities in the OpenSSL code.

Databridge 7.1 Update 1

November 2023

  • A number of problems in the 7.1 base release were corrected. See Patch Notes for details.

  • A few minor enhancements were made to the Client, which include:

    • The implementation of IDENTITY columns for the Oracle Client, which can be used in history tables.
    • Enhancements to the redefine command to make it easier to add new data sets when the parameter suppress_new_datasets is set to true.
    • Enhancements to history tables that allow the Client to run with the Engine parameter CONVERT REVERSALS TO UPDATES set to true, without the reversed updates and their reversals ending up in history tables as (false) updates. The Client now detects reversals and removes the original update from the history table, which produces exactly the same results as when this parameter is set to false. Consider setting this parameter to true, as the processing of program aborts is then much more efficient. See Patch Notes for details.
  • The atm_crypto libraries were updated to version 3.6.53 (OpenSSL 3.0.12) to address recent vulnerabilities in the OpenSSL code.

    For information about these vulnerabilities, see https://www.openssl.org/news/vulnerabilities-3.0.html#y2023

Patch Notes

This section describes the various patches in Hot Fixes, Updates and Service Packs for version 7.1. Only relevant patches are included in the lists below.

The patches are grouped by component and listed in chronological order, along with the patch numbers that implement them. The patch number is the last part of the version string. Within each list, every completed release is marked with a line that gives the release name and date (such as 7.1 Hot Fix 1 -- 9/30/2023). The absence of such a line indicates that work on the Hot Fix is still in progress.

The Databridge 7.1 Updates resolve the following issues:

DBEngine

003 - Preserve reversal bit in UI array when the Engine parameter Convert reversals to updates is true.

004 - It was possible for an aborted transaction to affect the valid update of another transaction on a different stack. This was caused by not heeding the LCW reversal bit on a reread following the aborted transaction.

DBServer

005 - Rework monitor statistics.

006 - Increased the protocol level to 36 to support the convert-reversals-to-updates change in DBEngine that preserves the reversal bit.

DBEnterprise

001 - The internal mapping of base and filtered data sources could result in a duplicated source name. The data source could not be customized when in this state.

002 - DBEnterprise failed to obtain the host database update level if the key was expiring. This issue left filtered data sources unable to be customized.

003 - Duplicate family names are no longer treated as errors if the LUNs are different.

004 - The host connection now uses KeepAlive for MCP connections, which keeps the connection open during long periods without host interaction. Previously, large initial extracts could lose host connectivity, resulting in a failed clone.

005 - NUMBER (1) STORED OPTIONALLY NULL 0 items were filled in using the digit 3 instead of 0 (which ends up being NULL).

009 - It was possible for a compact data set change that was split across the end of an audit block section to result in an error if preceded by a number of compact data set changes in the same block.

DBClient

001 - The cloning of FileXtract data sources hung at the end of the data extraction phase.

002 - The sequence_no column of history tables is incremented twice after every update instead of once.

003 - The Kafka client mishandles JSON output when the configuration parameter json_output_option is set to 1 or 2.

004 - The audit timestamp being sent to the Administrative Console was wrong. This was reflected in the Dashboard and the Run > Statistics output.

005 - OCCURS table filter generation failed to update the DATASETS control table, which caused the filter generation to fail.

006 - Data extractions in multi-threaded Windows clients sometimes end up with a load error. The situations that lead to this are large values for the parameter max_temp_storage and data sets with multiple tables.

The cause of the problem is that the temporary storage threshold is reached while the last tables for a data set are not yet fully loaded. The client then queues an extra request to load the file, which makes the loader try to load the last file that was already loaded; bcp gets a file-not-found error because the file in question no longer exists.

007 - The Kafka client did not suppress MODIFY records that have no changes when the configuration parameter json_output_option was set to 1 or 2.

008 - The stored procedure prefix table was corrupted by a recent correction, which caused calls to stored procedures with the prefix m_ to pick up a space after the m, resulting in a SQL error. Stored procedures with the prefix z_ also caused SQL errors.

010 - Enclosed passwords that appear in the bulk loader script files in double quotes to avoid syntax errors when the password contains non-alphanumeric characters. Passwords may now contain any non-alphanumeric characters except double quote and NUL.

011 - Added a test that allows MISER sites to set the configuration parameter use_stored_procs to false; the client would otherwise have tried to generate a SQL statement to update the table.

012 - Added code to log the ODBC driver name and version for the PostgreSQL Client.

013 - The PostgreSQL client failed to detect duplicate-record errors during index creation.

014 - When duplicate keys were found during the index creation and the clear duplicates script was run, the row_count column in DATATABLES was not updated to reflect the resulting record count.

015 - When doing data extraction for a COMPACT data set Databridge Enterprise sent the client deleted records. These records ended up getting discarded after generating error messages about their keys being null. The client now detects such records and silently ignores them when using Databridge Enterprise.

017 - Enhanced the history table code to handle reversals by deleting the original record for a reversal from the history table. This makes the client produce the same results regardless of whether CONVERT REVERSALS TO UPDATES is enabled. This change requires a matching Engine, as older Engines did not send the reversal bit in the updates' update information data.
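As a rough illustration, the reversal handling amounts to deleting the original update's row from the history table when its reversal arrives. A minimal SQL sketch, assuming a hypothetical history table customer_history whose rows are identified by the record's keys and a sequence number (the actual tables, columns, and lookup logic depend on the data source):

    -- Hedged sketch: remove the original (now reversed) update from the history table.
    DELETE FROM customer_history
     WHERE cust_no = 1234            -- key of the reversed record (hypothetical)
       AND sequence_no = 98765;      -- sequence number of the original update (hypothetical)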

018 - Added the IDENTITY column for the Oracle client and modified history tables to use it, in place of the update_time and sequence_no user columns, as the default for history tables. The redefine command will not change existing history tables, while the define command will. To preserve compatibility with history tables generated by older clients, avoid using defaults for user columns in a define command. The default name for the identity column is my_id, and its default data type is NUMBER(10).
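For illustration, a minimal sketch of an Oracle history table that uses the new default; the table name and data columns are hypothetical, and the DDL the client actually generates may differ:

    -- Hedged sketch of an Oracle history table with the default IDENTITY column.
    CREATE TABLE customer_history (
        my_id        NUMBER(10) GENERATED ALWAYS AS IDENTITY,  -- default identity column
        update_type  NUMBER(2),       -- 1=CREATE, 2=DELETE, 6=MODIFY_BI, 7=MODIFY_AI
        cust_no      NUMBER(10),      -- hypothetical data column
        cust_name    VARCHAR2(40)     -- hypothetical data column
    );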

019 - The bit SRC_NewHistory (0x800000) was added to the status_bits column of the DATASOURCES control table. It is set by the define command to allow the Oracle client to use the IDENTITY column by default in history tables (in place of the update_time and sequence_no columns).

020 - Implemented the configuration parameter new_history_tables, which enables new code that sets the update types for history records involved in a key change to MODIFY_BI (6) and MODIFY_AI (7) instead of DELETE (2) and CREATE (1). These records are always sequential in the history table, which lets the user see that they are the same record in DMSII.
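A minimal sketch of enabling this in the client's text configuration file; the parameter name comes from this note, while its placement and the file's exact syntax are assumptions:

    new_history_tables = true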

021 - Added the ability to specify a file name in the create table and create index DDL suffixes, excluding global suffixes. A suffix of #addfilename will cause the content of the file in the user scripts directory (usually scripts) to be used as the suffix. This allows arbitrarily long suffixes to be used, provided that the lines in the file are less than 256 characters in length.
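Assuming the suffix is written as a '#' followed by the file name, as described above, a hedged sketch (the parameter spelling is illustrative and table_opts.sql is a hypothetical file in the user scripts directory):

    create_table_suffix[1] = "#table_opts.sql"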

022 - Added code to the PostgreSQL client to get and log the PostgreSQL version.

023 - The Kafka client's length calculations were incorrect causing the JSON data to have an extra NUL character appended to it and the keys to be missing the closing double quote character.

024 - The client no longer displays error messages about records with null key items during data extraction with Databridge Enterprise. The count of such records (when not zero) is reported in the end of extraction statistics.

025 - Running back-to-back redefine commands did not work when there was an actual DMSII reorganization. If a data set ended up with a mode of 31, the second redefine command got an error claiming that a reorganize command should be run next.

This change also allows a new data set to be added to the client when suppress_new_datasets is true. The first redefine command sees the new data set and sets its active column to 0. You can then use the console, navigate to Settings > Data Sets, and set the active column to 1 in the properties of the data set. The second redefine command picks up where the first one left off and completes the task, as sketched below.
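A hedged sketch of the two-pass workflow using the command line client; the data source name BANKDB is hypothetical, and the same steps apply when the commands are launched from the Administrative Console:

    dbutility redefine BANKDB      (first pass: the new data set is added with active = 0)
    (set the data set's active column to 1 under Settings > Data Sets in the console)
    dbutility redefine BANKDB      (second pass: completes the mapping)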

027 - Reworked patch 24 so that it does not assume the keys are always the first items in the tables. Renumbering the keys resulted in all the records being ignored.

028 - The Postgres Client's generate command failed with an IO error when a SERIAL column was present.

029 - The Postgres Client got a SQL error when updating the DATASETS table after a client failure.

030 - The export command sometimes got an access violation after the text configuration file was written.

031 - Enhanced the table statistics to provide statistics by update type in addition to the cumulative update statistics.

--- New in version 7.1 Update 2 ---

032 - The DBClntCfgServer "verifysource" command, when doing an "Add > Existing" data source from the Administrative Console, caused the client to crash when DBClntCfgServer tracing was enabled in the service.

033 - Cloning a database where all the tables were empty using the BCP API caused the client to hang.

034 - Eliminated error message in the redefine command when the old copy of the script file <source>_drop_obsolete_tables_<ul>.<ext> does not exist. This file provides an easy way of dropping obsolete tables that result from running the redefine command.

035 - A timing hole in the clients sometimes resulted in the index thread being created twice. The two index threads then interfered with each other, leading to numerous database errors. This happened only when there were many empty data sets at the start of the run.

036 - When using the BCP API the SQL Server client did not display the extraction statistics for empty tables.

037 - Enhanced the define command for history tables to make the my_id column a key and to make the update_time column a key only when the my_id column is not present.

038 - SQL Server and Postgres history tables did not work when the configuration parameter dflt_history_columns was set to include the update_type, update_time, and sequence_no user columns. The default user columns for history tables in a define command are update_type and my_id (an identity column for SQL Server and Oracle, and serial or bigserial for Postgres).

039 - The DBClntCfgServer verifysource command returned an exit code of 2056 when it detected that the control tables need upgrading using the dbfixup program. This caused the Administrative Console to disable the data source.

The situation was rectified by enhancing the command to verify the existence of the data source and return an exit code of 2109, which the service changes to 0 after scheduling a launch of the dbfixup utility for the data source. This allows an Add > Existing operation to work correctly when the data source's control tables need dbfixup to be run.

040 - The Administrative Console was being passed the wrong value for the ABSN when the first 4 bits of its leading byte were non-zero.

041 - The Administrative Console was being passed the wrong value for the audit_time6 column of the DATASETS control table.

042 - A new status bit was added to the DATASOURCES control table to allow the Administrative Console to gray out the "Data Set State Info" menu in the data source's Settings menu when the data source is not in change tracking mode.

043 - The client was not setting the Engine parameter for NoReversals to the correct value.

044 - The unload command was not generating files compatible with the older clients' unload files.

045 - When a preserved deleted record was not in the relational database, the client stopped with an exit code of 2097. We now recover from this situation by inserting the record and marking it as deleted.
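As a rough illustration, the recovery amounts to re-creating the missing row with its deleted marker set. A hedged SQL sketch using a hypothetical table and the deleted_record user column described under the 7.1 new features (the actual value stored in deleted_record is determined by the client):

    -- Hedged sketch: re-insert the missing row and mark it as deleted.
    INSERT INTO customer (cust_no, cust_name, deleted_record)
    VALUES (1234, 'SMITH', 1700000007);   -- non-zero deleted_record marks the row as deleted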

046 - Prevented the verify_bulk_load code from causing a SQL error when the -z option is enabled, by bypassing the count verification as we have not loaded anything into the table.

047 - Fixed the client's handling of history tables that use IDENTITY or SERIAL columns as keys so that the my_id column is not included in the key-value pairs. These pairs are used when displaying keys and when constructing the where clauses of the select statements used in handling reversals.

048 - The binding of host variables for update statements used to preserve deleted records does not work correctly when user columns are not at the end of the record. This causes the update not to find the target row in the table, which leads to the client stopping with an internal error.

049 - Host Variable tracing resulted in a null pointer exception when a history table using an IDENTITY or SERIAL column was involved.

DBFixup

001 - The dbfixup program was getting errors when the configuration file contained encrypted passwords.

Client Manager Service

001 - The service failed to suppress the sending of a data source added message to the console that initiated the add. As a result, that console did not update the status of the added data source unless the console user forced a refresh by disconnecting and reconnecting or by getting the data source's read-only information.

002 - Implemented an RPC that allows the Administrative Console to display the active TCP/IP sessions in the service/daemon.

--- New in version 7.1 Update 2 ---

003 - Added log statements for failed signons to make it easier to determine the cause of signon failures.

004 - The service was enhanced to handle the exit code of 2109, which is mentioned in client patch 39.

005 - Made a few additional changes to auto_dbfixup runs during upgrades.

006 - Added code for displaying the service's sessions in the console.

007 - Modified the "Settings > Data Set State Info" menu item to be grayed out when the client is not in tracking mode.

008 - If a run started from the batch console ends, the service can sometimes get a null pointer exception when the alternate end_of_run script file does not exist.

009 - If the service encounters a blackout period when starting a scheduled process command, the run does not get started when the blackout period ends.

Administrative Console

001 - The Define/Redefine menu item in the data source Actions menu was not getting disabled when the user's privilege to run this command was revoked by the administrator.

002 - As a result of a bookkeeping error, the Configure command was not displaying items in the SQL suffixes page and was not letting you update them.

003 - Added the menu item Service Sessions to the Client Managers page's Actions menu to display the list of active TCP/IP sessions.

--- New in version 7.1 Update 2 ---

004 - Fixed the console's handling of automated dbfixup runs by the service so it does not show a status of Fixup pending after the dbfixup run completes successfully.

005 - Fixed the service and Administrative Console to handle the automatic running of dbfixup during an upgrade. The console was showing a status of "Locked(Fixup pending)", which did not get updated on its own.

Version Information

The Databridge components and utilities are listed below with their version numbers in the 7.1 base release and, where patched, in the current release. All host programs have been compiled with MCP Level 57.1 software.

Databridge Host Base release Current release
DBEngine 7.1.0.002 7.1.1.004
DBServer 7.1.0.005 7.1.1.006
DBSupport 7.1.0.001
DBGenFormat 7.1.0.001
DBSpan 7.1.0.001
DBSnapshot 7.1.0.001
DBTwin 7.1.0.000
DMSIIClient 7.1.0.000
DMSIISupport 7.1.0.001
DBInfo 7.1.0.002
DBLister 7.1.0.001
DBChangeUser 7.1.0.000
DBAuditTimer 7.1.0.001
DBAuditMirror 7.1.0.000
DBCobolSupport 7.1.0.000
DBLicenseManager 7.1.0.000
DBLicenseSupport 7.1.0.001
FileXtract Base release
Initialize 7.1.0.000
PatchDASDL 7.1.0.000
COBOLtoDASDL 7.1.0.000
UserdatatoDASDL 7.1.0.000
UserData Reader 7.1.0.000
SUMLOG Reader 7.1.0.000
COMS Reader 7.1.0.000
Text Reader 7.1.0.000
BICSS Reader 7.1.0.000
TTrail Reader 7.1.0.000
LINCLog Reader 7.1.0.000
BankFile Reader 7.1.0.000
DiskFile Reader 7.1.0.000
PrintFile Reader 7.1.0.000
Enterprise Server Base release Current release
DBEnterprise 7.1.0.000 7.1.2.009
DBDirector 7.1.0.000
EnumerateDisks 7.1.0.000
LINCLog 7.1.0.000
Databridge Client Base release Current release
bconsole 7.1.0.000
dbutility 7.1.0.000 7.1.2.049
DBClient 7.1.0.000 7.1.2.049
DBClntCfgServer 7.1.0.000 7.1.2.049
dbscriptfixup 7.1.0.000 7.1.2.049
DBClntControl 7.1.0.000 7.1.2.009
dbctrlconfigure 7.1.0.000 7.1.2.009
dbfixup 7.1.0.000 7.1.2.001
migrate 7.1.0.000
dbpwenc 7.1.0.000
dbrebuild 7.1.0.000 7.1.2.049
Databridge Administrative Console Base release Current release
Administrative Console 7.1.0 7.1.2

New Features in Databridge 7.1

July 2023

Databridge version 7.1 introduces these new features and functions. For detailed descriptions, see What's New in this Release in the Databridge Installation Guide.

Important

  • Be sure to note the changes in the software installation procedures.
  • The Kafka client for Linux platforms is now in the Linux folder as DB_Linux64_Kafka.tar. It no longer resides in the Kafka folder on the release medium.
  • Added support for DMSII SSR 63.0.

  • Implemented a Postgres client for the Windows and Linux platforms.

  • Added Notifications. The Administrative Console can send alert messages using email to designated personnel when something goes wrong with a Databridge component.
  • Modified the client to encrypt all passwords in configuration files (both text and binary). Previously obfuscated passwords will be encrypted when a configuration file is updated.
  • Modified the redefine command to make back-to-back redefine commands work like the Administrative Console's Customize command. As a result, the -u option for the redefine command now means "start over using the control tables in the unload file".
  • Added the parameter use_dmsii_keys to the Kafka client to make it use the DMSII SET selected by the Engine as the keys, rather than always using the AA Values/RSN as the key for partitioning data.
  • Updated the Kafka client:

    • The Kafka client now supports transactional operations.
    • The Kafka client can operate using the daemon on Linux. A Windows version of the Kafka client is included in Databridge 7.1.
    • The Kafka client can operate without requiring a database. The SQLite database is included with these clients and is used to hold the control tables.
    • Upgrading the Kafka client requires different steps. See Upgrading from earlier versions in the Kafka Client Administrator's Guide.
  • Enhanced the UNIX/Linux dbdaemon script to use the TERM signal to stop the daemon in an orderly fashion. Additionally, the USR1 and USR2 signals now cause the daemon to write the RPC trace and the program status to the file trace.log in the working directory. The daemon script needs to be updated, using the provided dbdaemon.smp file as a template, to support this feature.

  • Modified the second database connection used by the clients to be dynamic, which means that unless doing data extraction, the client uses only a single database connection.
  • When the client switches log files, the new log file now includes a line that points back to the previous log file, which may have a much older date.

  • Modified the client to maintain row counts for all tables. The counts are updated after an audit file switch and at the end of a client run. In the case of Oracle, do not use stored procedures if you want the row counts to be correct. For the counts to remain accurate, do not terminate the client prematurely (that is, do not kill the service or the client): instead of using a UNIX kill command or the task manager to end a client run, use the Administrative Console's Stop and Abort commands.

  • Updated the client control tables to replace all binary data (raw in the case of Oracle) with numeric data (BIGINT for the SQL Server and Postgres clients, and NUMBER(15) for the Oracle client).

  • Enhanced the Oracle client's index creation to use the PARALLEL 8 and NOLOGGING options to speed up index creation. In the case of a primary key, a unique index is created using these options; once the index is created, the table is altered to add a primary key constraint that uses the index that was created.
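For illustration, a hedged sketch of the primary key sequence just described, with a hypothetical table and column (the DDL the client actually generates may differ in naming and options):

    -- Hedged sketch: build the unique index in parallel without logging,
    -- then attach the primary key constraint to it.
    CREATE UNIQUE INDEX customer_pk_idx ON customer (cust_no) PARALLEL 8 NOLOGGING;
    ALTER TABLE customer ADD CONSTRAINT customer_pk
        PRIMARY KEY (cust_no) USING INDEX customer_pk_idx;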

  • Enhanced the deleted_record user column to support the BIGINT(18) data type, which causes the client to combine the timestamp with the sequence number to form a 48-bit quantity that is used in place of the timestamp. This enhancement eliminates the duplicate-record problems that occurred when the same record was deleted and inserted multiple times during the same second on the client machine.
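As a back-of-the-envelope illustration only: the exact packing is not documented here, so assume a 32-bit seconds timestamp in the upper bits and a 16-bit per-second sequence number in the lower bits. Two deletions of the same record within one second then yield distinct values:

    value = timestamp * 65536 + sequence
    1,700,000,000 * 65,536 + 7 = 111,411,200,000,007
    1,700,000,000 * 65,536 + 8 = 111,411,200,000,008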

Changes in Databridge version 7.1

  • The migrate utility, which was designed to migrate sites from command line-based operations to service-based operations in version 6.0, was discontinued in 7.1.
  • The dbpwenc utility, which was designed to obfuscate passwords in text configuration files, was discontinued in 7.1. Use import and export commands to achieve the same result.
  • The configuration parameter use_ctrltab_sp has been dropped for the Oracle and SQL Server clients because these clients always use host variables to update the control tables.

Known Issues

Beginning with SQL Server 2016, enhanced security no longer allows you to add NT AUTHORITY\SYSTEM to the sysadmin role. We suggest one of the following options to mitigate this problem:

  • Run the service under the user account that is set up to run the command line client (dbutility) and use Integrated Windows authentication.

  • Set up the client to use SQL Server authentication to connect to the database, and continue to run the service under the SYSTEM account.

Using TLS with Databridge

To configure data encryption for use with Databridge Clients, follow the steps below to configure the MCP and the client. A Databridge Server can support only one type of connection: when it is configured to use TLS, only TLS connections are supported.

MCP configuration

  1. Using the MCP Cryptographic Services Manager, create a new key in Trusted Keys > Other Keys. Complete the form and create a certificate request to submit to a certificate authority. When the certificate has been obtained, install it into the newly created key.

    Note

    The Application and Service entries are concatenated to form a key container id. If the application is DATABRIDGE and the service is SSLKEY, then the key container will be DATABRIDGE_SSLKEY. Note that the trusted key usercode must be that of the Databridge install.

  2. Edit the DATA/SERVER/CONTROL file located in the Databridge usercode and uncomment the KeyContainer parameter and set it equal to the key container created above. Using the example key container in the Note above, the parameter in DATA/SERVER/CONTROL should be:

    KEYCONTAINER = "DATABRIDGE_SSLKEY"

  3. Restart Databridge Server to use the new KeyContainer.

Client configuration

To configure a Databridge Client to use TLS, verify these settings:

  • The enable encryption parameter must be set to True.

  • The server certificate must be verified using either the CAFile or CAPath parameter. CAFile is the full path to a file that can be used to verify the server certificate; CAPath references a directory where the server certificate can be verified.
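A minimal sketch of the corresponding client settings; only the parameter names above come from this document, so the exact spelling of the keys in the configuration file is an assumption and the path is hypothetical:

    enable_encryption = true
    CAFile = "/etc/databridge/ca-bundle.pem"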

For more information, see Encryption in the Databridge Administrative Console User Guide.

File Structure

This release uses the same directory structure as previous releases.

Note: The Administrative Console is a separate installation, located in the Console\Windows directory on the release image. The Console install includes a private JRE.

System Requirements

Databridge 7.1 supports the following hardware and software.

System Support Updates. Databridge will remove support for operating systems and target databases when their respective software companies end mainstream and extended support.

Supported Internet Browsers: Microsoft Edge, Mozilla Firefox, Google Chrome

Databridge 7.1 Supported Systems
Databridge Host
Unisys mainframe system with an MCP level of SSR 59.1 through 63.0
DMSII or DMSII XL software (including the DMALGOL compiler)
DMSII database DESCRIPTION, CONTROL, DMSUPPORT library, and audit files

Databridge Enterprise Server
ClearPath PC with Logical disks or MCP disks (VSS disks in MCP format)
-or-
Windows PC that meets the minimum requirements of its operating system:
- Windows Server 2022
- Windows Server 2019
- Windows Server 2016 (CORE mode must be disabled for installation and configuration)
- Windows Server 2012 R2 (CORE mode must be disabled for installation and configuration)
- Windows Server 2012

Direct Disk replication (recommended) requires read-only access to MCP disks on a storage area network (SAN).
TCP/IP transport.
NOTE: JavaScript must be enabled in the browser settings to view, navigate, and search the product Help.

Databridge Administrative Console
One of the following platforms can be used for the Administrative Console server:
- Windows Server 2012 or later
- Windows 10 x64
- Intel X-64 with Red Hat Enterprise Linux Release 7 or later
- Intel X-64 with SUSE Linux Enterprise Server 11 SP1 or later
- Intel X-64 with Ubuntu Linux 14.04 or later
- Sun Microsystems SPARCstation running Solaris 11 or later

Databridge Client
We recommend running the Administrative Console on a different machine from the Client to avoid negatively impacting the Client's performance. To access the Administrative Console, use a supported browser on the client machine.
NOTE:
- Disk space requirements for replicated DMSII data are not included here. For best results, use a RAID disk array and store the client files on a separate disk from the database storage.
- Memory requirements do not include the database requirements when running the Client on the server that houses the relational database (consult your database documentation for these). The numbers are for a stand-alone client machine that connects to a remote database server.

Client - Windows
Unisys ES7000
-or-
Pentium PC processor 3 GHz or higher (multiple CPU configuration recommended)
2 GB of RAM (4 GB recommended)
100 GB of disk space (in addition to disk space for the relational database built from DMSII data)
TCP/IP transport

One of the following operating systems:
- Windows Server 2022
- Windows Server 2019
- Windows Server 2016 (CORE mode must be disabled for installation)
- Windows Server 2012 R2 (CORE mode must be disabled for installation)
- Windows Server 2012
- Windows 10

One of the following databases:
- Microsoft SQL Server 2022
- Microsoft SQL Server 2019
- Microsoft SQL Server 2017
- Microsoft SQL Server 2016
- Microsoft SQL Server 2014
- Microsoft SQL Server 2012
- Oracle 12c, 18c, 19c, 21c

Client - UNIX and Linux
One of the following systems:
- Sun Microsystems SPARCstation running Solaris 11 or later
- IBM pSeries running AIX 7.1 or later
- Intel X-64 with Red Hat Enterprise Linux Release 8 or later
- Intel X-64 with SUSE Linux Enterprise Server 11 SP1 or later
- Intel X-64 with Ubuntu Linux 18.04 or later
2 GB of RAM (4 GB recommended)
100 GB of free disk space for installation (in addition to disk space for the relational database built from DMSII data)
TCP/IP transport
One of the following databases: Oracle 12c, 18c, 19c, 21c

Obtaining Databridge 7.1

Maintained customers are eligible to download Databridge from the Software Downloads site.

Installing Databridge 7.1

  • When installing Databridge 7.1 for the first time, download and install the Databridge Host, Databridge Enterprise Server, and All Clients packages (7.1, ZIP format).

  • See the Databridge Installation Guide for detailed installation and upgrade instructions.

Notes about installing the Administrative Console

  • The Administrative Console must be installed after you install the Databridge Enterprise Server and the client.

    See Installing the Databridge Administrative Console in the Databridge Installation Guide for detailed steps to install the Administrative Console on Windows or UNIX machines.

  • We recommend that you install the Administrative Console on a separate server from the client machine(s) because:

    • The Administrative Console can use significant resources that may impact the client's performance.
    • When the Administrative Console is installed on a Client machine, it cannot monitor activity when that machine is down. By putting the Administrative Console on a different machine, you can monitor the Client Manager(s) and receive alerts for warnings and connectivity errors.

Contacting Customer Support

For specific product issues, contact Customer Support.

For online technical information, see:


© 2024 Open Text

The only warranties for products and services of Open Text and its affiliates and licensors (“Open Text”) are as may be set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Open Text shall not be liable for technical or editorial errors or omissions contained herein. The information contained herein is subject to change without notice.