Databridge 7.0 SP1 Release Notes

Last Updated: March 2024


Version 7.0.SP1 Update 5

March 2024

What's new in Version 7.0 SP1 Update 5

  • Implemented a double-byte translation DLL named DEATRAN_CP950.DLL to translate EBCDICUTL (Traditional Chinese) data to Microsoft code page 950.

Issues resolved in version 7.0 SP1 Update 5

  • The atm_crypto libraries were updated to version 3.6.54 (OpenSSL 3.0.13) to address recent vulnerabilities in the OpenSSL code. For more information, see https://www.openssl.org/news/vulnerabilities-3.0.html#y2023.

  • See the patch notes of the various Databridge components for a list of issues addressed in this update.


Version 7.0 SP1 Update 4

February 2023

Fixes and changes in version 7.0 SP1 Update 4

  • The parameter use_dmsii_keys was added to the Kafka client to make it use the DMSII SET selected by the Engine as the keys rather than always using the AA Values/RSN as the key.
  • The parameter json_output_option was added to the Kafka client to control the content of UPDATE and DELETE records in the JSON output.
  • Implemented keep-alive to the broker for the Kafka client.
  • Updated the Kafka library librdkafka.so.3.
  • The daemon script was enhanced to use the TERM signal to stop the daemon and the USR1 or USR2 signals to write the RPC trace and the program status to the file trace.log in the working directory.

  • The atm_crypto libraries were updated to version 3.4.24 to mitigate these vulnerabilities:

    • CVE-2023-0286
    • CVE-2023-0215
    • CVE-2022-4304
  • Resolved several issues. This release includes the cumulative fixes from previous versions. Refer to the patch notes for the individual software components for details.


Version 7.0 SP1 Update 3

November 2022

Fixes and changes in version 7.0 SP1 Update 3

  • Removed the JXPath library to mitigate CVE-2022-41852.

Version 7.0 SP1 Update 2

October 2022

Fixes and changes in version 7.0 SP1 Update 2

  • Apache Commons Text library was updated to version 1.10.0 to mitigate CVE-2022-42889.
  • Apache Shiro library was updated to version 1.10.0 to mitigate CVE-2022-40664.
  • The Administrative Console was updated to use Java 11.

Note

On Linux systems, you must install the provided Java 11 (replacing Java 8), because the Administrative Console now requires Java 11.


Version 7.0 SP1 Update 1

October 2022

Fixes and changes in version 7.0 SP1 Update 1

  • Corrected the problems with updating the “Blackout period” parameter from the Administrative Console.

  • Changed the data sources page to show "Blacked out" data sources as Blacked out rather than Not defined.

  • Modified the Kafka client to use update types of 1 – 3, which represent "insert", "delete", and "update".

  • Corrected the Kafka client’s handling of key changes to implement a key change as a DELETE followed by an INSERT.

  • Enhanced the Oracle client’s index creation to use the "parallel 8" and the "nologging" options to speed the index creation.

  • Reworked the redefine command to eliminate the concept of back-to-back redefine commands, by making the command restore the old copy of the control tables from the unload file before calling Compare_Layouts(), like the Administrative Console’s Customize command does.

    This change makes the -u option meaningless for the "Redefine" command.

  • Modified the UNIX/Linux version of the dbfixup program to launch a shell script to refresh the control tables.

    Launching dbutility directly returned an exit code of -1, which caused dbfixup to assume that the refresh failed.

  • Implemented the -n option for dbfixup to defer running the redefine command and the ensuing generate command. When the user then tries to run a process command, it will exit with an exit code indicating that a redefine command is required before updates can be processed.

  • When a blackout period ends, the service now restarts process commands that were interrupted by the blackout period.

  • Resolved several issues. This release includes the cumulative fixes from previous versions. Refer to the patch notes for the individual software components for details.


Version 7.0 SP1

June 2022

Fixes, features, and changes introduced in version 7.0 SP1

  • The install package contains only 64-bit software. The Windows folder replaces the Windows64 and Windows32 folders.

  • Added support for the "max_string_size" parameter in the Oracle Client to allow it to use wider varchar2 and raw columns, which is more efficient than using clob.

  • Added support for the Oracle TIMESTAMP data type in user columns that allows fractions of seconds in time stamps.

  • The configuration file parameter "keep_undigits" was changed to accept three values instead of being a Boolean. Values of 0 and 1 work in the same way as before. A value of 2 extends 1 by treating undigits in numeric fields like 9s, similar to most MCP COBOL applications.

  • Support for HP-UX Itanium was removed; it is available on demand.

File Structure

The directory structure is similar to previous releases; however, the Administrative Console is now a separate install located in the Console\Windows directory in the install package. The console install includes a private JRE.

Hot Fixes, Updates, and Service Packs will use the same directory structure as the install package.

Note

Hot Fixes, Updates and Service Packs still contain a patch in the Windows directory, which updates the Client and Enterprise Server. The console is patched by running setup.exe from the Console\Windows directory, which reinstalls the console software and its private JRE.

Patch Notes

The Databridge 7.0 SP1 Updates resolve these issues.

DBEngine

003 When the first and last ABSN of a newly opened audit file were stored in the audit cache and they had the same cache index, the engine would endlessly loop searching for the first ABSN.

004 When establishing the start extract fixup audit location, it is possible to encounter the stopper pattern before reaching the DMInq or control file audit location. On a database with infrequent updates, this is more likely to occur. The result is a bad start extract fixup audit location.

005 If the initial search for an ABSN with an unknown segment resulted in a wrong ABSN error, the subsequent search did not set the segment even if the ABSN was found.

006 An audit discontinuity in the first block was not being reported and resulted in looping between two audits.

007 It was possible for an aborted transaction to affect the valid update of another transaction on a different stack. This was caused by not heeding the LCW reversal bit on a reread following the aborted transaction.

DBServer

003 Made the server report the SSLERROR associated with an OPEN error when the error occurs on a TLS port.

004 Added keepalive to the portfile declaration. This applies to all subfiles. A future implementation may apply this on a per-subfile basis by creating a control file parameter KEEPALIVE (Boolean).

DBInfo

001 When run with TASKVALUE = 6, DBInfo runs like TASKVALUE = 1 except that it uses the ?CLOSE command. It is intended to be used as a COMS remote window application.

DBEnterprise

001 The internal mapping of base and filtered data sources could result in a duplicated source name. The data source could not be customized when in this state.

002 DBEnterprise would fail to obtain the host database update level if the key was expiring. This issue prevented filtered data sources from being customized.

003 Change "duplicate family name" from error to warning when LUNs are different.

004 Added keepAlive to the host connection. Currently hardcoded to 900 seconds.

005 NUMBER (1) STORED OPTIONALLY NULL 0 items were filled in using the digit 3 instead of 0 (which ends up being NULL).

009 It was possible for a compact data set change that is split across the end of an audit block section to result in an error if preceded by a number of compact data set changes in the same block.

DBClient

001 The Client mishandled a Miser date containing an undigit when it was a key: it stored the record into the database with the date set to NULL and falsely claimed to have discarded the record. The problem arose when it tried to get the values for the keys in the error message and encountered a secondary error.

002 Added the parameter “sasl_kerberos_kinit_cmd” to allow the Kerberos kinit command to be customized to meet site requirements.

003 Implemented the [bulk_loader] section parameter "bulk_loader_path" for Windows Clients to avoid having to add the directory where the bulk loader program resides to the system PATH.

When present, this parameter causes the generate command to use the full file specification for the bulk loader program.

004 Extended the parameter keep_undigits to have 3 values, where 1 is the same as true in the older Clients and 2 makes the client treat undigits as 9s for numeric items and act like 1 for items stored as alpha.
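
For illustration, the parameters from patches 003 and 004 might appear in the client's text configuration file as follows. This is a sketch: the [bulk_loader] section name comes from patch 003, while the [params] placement of keep_undigits and the sample path are assumptions.

    [bulk_loader]
    bulk_loader_path = "C:\Program Files\Microsoft SQL Server\Client SDK\ODBC\170\Tools\Binn"

    [params]
    keep_undigits = 2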

006 The SQL Server client, when run single-threaded, got a bounds error in some rare situations when processing updates. This happened when the host variable storage requirements for the widest table exceeded the computed size of the storage block used in updates. With multi-threaded updates this situation does not occur, as each table has its own host variable storage, which is correctly sized.

008 Increased the size of the text configuration file input buffer to handle lines that are longer than 255 characters for the Kafka client, to allow the "kafka_brokers" parameters to accommodate more brokers.

009 Updated the client not to require that "bulk_loader_path" parameter end in a backslash.

010 Modified the Flat File and Kafka clients not to create the second database connection, as it is never used.

011 Modified UNIX clients to handle hung queries like the Windows clients do. If the OCIBreak() call hangs, we kill the run using the kill() function, which yields an 8-bit exit code of 137. If the SQLCancel() works, the client exits with an exit code of 2058; the 8-bit UNIX exit code for it is 207.
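
As a sketch of what this means for scripts that check exit statuses (the data source name is hypothetical):

    # Run a process command and inspect the 8-bit UNIX exit status
    dbutility process BANKDB
    echo $?    # 137 if the run was killed; 207 corresponds to the 2058 exit code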

014 Added a line at the beginning of a switched log file that points back to the previous log file.

015 Corrected the thread resource utilization statistics, which did not include file I/O time.

016 Modified the EBCDIC to ASCII translation code for the Kafka Client to handle special characters (like double quote) in strings that need to be changed to two-character sequences that use “\” as the force character.

017 Setting the parameter "show_statistics" to false caused the DMS record count not to be incremented, which led to rather strange statistics.

018 Implemented the configuration file parameter "suppress_delete_msgs", which, when set to True, stops the Client from reporting data errors for DELETE operations that result in discards. The Client ignores such discards entirely: because their keys have data errors, the target records cannot be in the relational database, so they are not included in the discard count and are not written to the table's discard file.
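
A minimal sketch of enabling this behavior in the client configuration file (section placement assumed):

    suppress_delete_msgs = true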

019 The Oracle Client was susceptible to buffer overruns when cloning the data set with the widest table, as the size estimation failed to take into account that character data is enclosed in quotes.

020 The user column “create_time”, which was originally implemented for the SQL Server client, did not work correctly for the Oracle client.

021 Implemented the parameter "enable_extended_types" for the Oracle clients to allow the client to honor the Oracle database’s “max_string_size” parameter, which, when set to “extended”, increases the maximum size of the varchar2 and raw data types to 32K. If the client finds the parameter set to extended, it starts using that value as the maximum size for such columns. The upgrade requires running a redefine command before it takes effect.
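
For illustration, the Oracle setting that the client inspects can be checked with standard SQL; if it reports EXTENDED, enabling "enable_extended_types" lets the client use the wider limits:

    -- Returns STANDARD or EXTENDED
    SELECT value FROM v$parameter WHERE name = 'max_string_size';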

022 Implemented protocol negotiation between the administrative console and the service to allow 7.0 consoles to connect to 6.6 services and 6.6 consoles to connect to 7.0 services. The needed changes to reconcile the two protocols were then implemented. All console operator commands that have protocol dependencies pass the protocol level along with the command allowing the client to generate the response using the right protocol.

024 Implemented the TIMESTAMP data type for the Oracle client and made the DATASETS control table use it for the audit_ts column. Upgrades are done automatically without using dbfixup. You can use the TIMESTAMP data type for update_ts user columns. This data type uses the same entry as the SQL Server datetime2 data type (19) and it uses the sql_length to specify the number of fractional digits to include (range 0 - 9).

025 Moved the check for blackout periods to ensure that the client stops on an idle system after the DB_Read or DB_Wait RPC returns.

026 Fixed the inconsistent handling of the configuration parameter “Blackout period” so that a no-blackout period is always internally represented by “-1,-1”, which is necessary to properly initialize the slider for this parameter in the Administrative Console UI.

027 Allowed the Administrative Console to update the configuration file during a blackout period, as it was not possible to cancel a blackout period once it started.

028 Eliminated SQL errors in the case of Add New Data Source into a database where the control tables were not yet created.

029 Fixed the data source page to act in a reasonable fashion when a blackout period is in effect. We now mark the source as “Blacked Out” and disable most of the menu items.

031 Fixed a bug in the no-stored-procedures update of the update_time column, which had a spurious = sign in the values part of an insert statement, causing a SQL error. Enabling stored procedures is a temporary mitigation for this bug.

032 Modified the Kafka client to only use update types of 0 - 3.

033 The Kafka client was not handling key changes correctly.

034 The Kafka client was not handling updates correctly when OCCURS and OCCURS DEPENDING ON clauses were not flattened.

035 The Kafka client was including non-key items in DELETE messages.

036 The SQL Server client did not verify the validity of GUIDS during data extraction using BCP.

037 The Oracle client upgrade code caused a one-time failure of the client, resulting in a DBM033 error. This was rectified by making the client exit and letting the service automatically run dbfixup.

038 The Oracle client fails to generate an entry for virtual keys when OCCURS clauses are not flattened.

039 The process command for the Oracle client gets a SQL error when propagating the global state information at the start of a new audit file.

040 The data extraction for the Oracle client does not handle the user column "update_time" correctly, which leads to a SQL*Loader error.

041 Prevented the client from processing input records for data sets after a bulk loader failure.

042 Enhanced the Client Status command to show the statuses of the extract worker threads during data extraction.

043 Added code to clear the changes column of the DATATABLES and DATAITEMS entries in a redefine command before comparing the layouts.

044 Enhanced the periodic DMSII and update count display to include the number of DMS buffers that are in use when running multi-threaded.

045 The thread load calculations used for load balancing are not working correctly.

046 A race condition between the main thread’s updating of the DATASETS ds_mode and the update worker sometimes resulted in the update worker trying to execute an INSERT while the bulk loader was running leading to a deadlock.

047 The value of the expanded audit_ts column for the Oracle client contained extra zeroes causing SQL*Loader to reject the extract records.

048 Single threaded data extraction consistently gets a SQL error.

049 Modified the client to force the data extraction to be single-threaded, as it appears to be much faster than the multi-threaded case and it does not have any timing issues like the multi-threaded case.

050 Updating the client configuration file from the Administrative Console corrupts the "status_bits" column of DATASETS entries for the data source.

051 The recent data extraction changes were not handling the end of data extraction correctly. The State Information was not being updated and the indexes were not being created.

052 Enhanced the Oracle client’s index creation to use the “parallel 8” and the “nologging” options to speed the index creation. In the case of a primary key, we first create a unique index using the “parallel 8” and the “nologging” options. Once the index is created, we then alter the table to add a primary key constraint, which uses the index that we created.
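
A sketch of the index creation pattern described above, using hypothetical table, column, and index names:

    -- Build the unique index in parallel without redo logging
    CREATE UNIQUE INDEX customer_pk_x ON customer (cust_id) PARALLEL 8 NOLOGGING;
    -- Add the primary key constraint on top of the existing index
    ALTER TABLE customer ADD CONSTRAINT customer_pk PRIMARY KEY (cust_id) USING INDEX customer_pk_x;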

053 Added the original_name column to the DATATABLES section of the display command.

054 Fixed problem in the data extraction that caused the index thread to get sporadic errors. These errors occurred in the count verification, the index creation and the updating of the data sets' mode.

055 Modified the unload command to generate unload files that are compatible with older clients when the -V option is used. This makes the unload file created by dbfixup compatible with earlier clients.

056 Fixed the Oracle client to not use the TIMESTAMP data type by default for DMSII TIMESTAMP and TIME(6) items.

057 Corrected a double-free error in the Discover_Sources code that is used when adding a new data source from the Administrative Console.

058 The dbutility scheduling did not respond immediately to a SCHED OFF console command issued while the process was idle; the command only took effect after the subsequent process command finished.

059 The DBClntCfgServer program did not clear the SRC_BlackedOut bit in the status_bits column of DATASOURCES when the blackout period was cleared.

060 Clearing a blackout period from the Administrative Console while it was in effect was not working correctly.

061 Made the Oracle client handle the ORA-12801 error, which is a generic error returned when using parallel mode. The error message has a secondary error message that contains the actual error, which we now retrieve.

062 Changed the index creation to only run the clear duplicates query when the error indicates that the index creation failed because of duplicate keys.

063 Modified DBClntCfgServer to check for blackout periods and take appropriate action so the console reports these periods and prevents users from starting client runs during a blackout period.

064 Modifying config parameters that require a redefine or generate command does not update the data sets' status_bits and the data source's status page in the Administrative Console.

065 The reorganize command does not clear the "Needs Reorganize" status in the Administrative Console.

066 Reworked the redefine command to eliminate the concept of back-to-back redefine commands, by making the command restore the old copy of the control tables from the unload file before calling Compare_Layouts(), like the Administrative Console’s Customize command does. This change makes the -u option meaningless for the "Redefine" command.

067 Implemented the configuration parameter "use_dmsii_keys" for the Kafka client to make it use the DMSII SET selected by the Engine as the keys rather than always using the AA Values/RSN as the key.

068 Implemented the parameter "json_output_option" to control the content of UPDATE and DELETE records in the JSON output. The parameter has 3 values:

  • 0 (default) indicates the current behavior.
  • 1 indicates that UPDATE records will contain all columns.
  • 2 indicates that both UPDATE and DELETE records will contain all columns.
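
For example, to make both UPDATE and DELETE records carry the full set of columns, the configuration file would contain (placement within the file assumed):

    json_output_option = 2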

069 Modified the Oracle version of the Drop_Index procedure to first drop the primary key constraint when dropping a primary key. Attempting to drop the index no longer works as the index is no longer created by adding the primary key constraint.

070 Removed the code in temporary patch 4.49, as after extensive testing we now know that data extraction works reliably with multiple threads and it is much faster.

071 Index thread failed to set ds_mode to 11 for data sets whose tables had no indexes when running multi-threaded. This resulted in such data sets constantly being re-cloned when you ran process commands.

072 Implemented keep-alive to the broker for the Kafka client.

073 Updated the Kafka library “librdkafka.so.3”.

074 Corrected patch 67 that caused the client to crash when the parameter use_dmsii_keys was false (its default value).

075 The createscripts command generated the wrong original name for a virtual key item that was renamed, which caused the update to fail.

076 The -s option had opposite effects in the dbscriptfixup utility and the createscripts command.

  • When using the -s option for the createscripts command some updates to DATAITEMS did not include the test for equality of the data source name.

  • The execution of the createscripts command after an upgrade command in the dbscriptfixup utility only worked when no options were specified.

077 The handling of a getaddrinfo error resulted in the faulty error message:

SOCKETS ERROR: getaddrinfo call failed, error=0 (Success)

078 The discover command disconnects from the service before responding to the RC_Stop RPC, which causes the service to erroneously report that the run crashed.

079 Added the parameters sasl_mechanisms, sasl_username and sasl_password to the client configuration file and the Kafka security structure.
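
A sketch of how these might appear in the client configuration file; the parameter names are from this patch, while the values and placement are illustrative only:

    sasl_mechanisms = "PLAIN"
    sasl_username = "kafka_user"
    sasl_password = "secret"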

080 Corrected patch 74, which failed when use_dmsii_keys was false.

082 The atm_crypto libraries were updated to version 3.2.24; see "Fixes and changes in version 7.0 SP1 Update 4" for details.

083 Fixed XDR_String to handle NULL; this was necessitated by the Kafka client, which has some configuration parameters with initial values of NULL rather than "".

084 Corrected an error in formatting timestamps for user columns using a data type of datetime for the SQL Server client. This caused a SQL error when the fractional part of the seconds had more than 3 non-zero digits.

085 Corrected Kafka client to skip updates where no DMSII columns change as a result of filtering.

086 Fixed an error in the unload command for columns of type bigint.

087 Corrected the AF_STATISTICS computations that were using global values instead of incremental values for some of the statistics.

088 The "reorganize" command got a SQL error when updating the DATASOURCES table because the host variable buffer got expanded. the client recovered from this by repeating the update without using host variables.

089 Updated the TLS code to use version 3.6.

090 Implemented the -p command line switch for the generate command that adds the "purge" options to the drop table SQL statement in the drop_table stored procedure for the Oracle client.

091 Implemented a double-byte translation DLL named DEATRAN_CP950.DLL to translate EBCDICUTL data to code page 950. To install this DLL, select the Japanese DLL feature in the installer; this copies both DLLs and their sample configuration files to the install directory.

  • Set the configuration parameter "use_ext_translation" to true and the parameter "eatran_dll_name" to "DBEATRAN_CP950.DLL" in the client configuration file. Then copy the sample configuration file "dbtrans_cp950.smp" from the client's install directory (SQLServer) to the config directory for the data source as "dbtrans.cfg".
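
Putting the steps above together, the relevant configuration lines would be (values taken from the bullet above):

    use_ext_translation = true
    eatran_dll_name = "DBEATRAN_CP950.DLL"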

092 Corrected the CP950 translation table and fixed DLL termination error.

093 The client got a bounds error when a discard for an OCCURS table was generated.

094 The cloning of FileXtract data sources hangs at the end of the data extraction phase.

095 The "sequence_no" column of history tables is incremented twice after every update instead of once.

096 The Kafka client mishandles JSON output when the configuration parameter "json_output_option" is set to 1 or 2.

097 Data extractions in multi-threaded Windows clients sometimes end up with a load error. The situations that lead to this are large values for the parameter "max_temp_storage" and data sets with multiple tables.

  • The cause of the problem is that the temporary storage threshold is reached while the last tables for a data set are not yet fully loaded. The client ends up queuing an extra request to load the file, which results in the loader trying to load the last file that was loaded, causing bcp to get a file not found error as the file in question no longer exists.

098 The Kafka client does not suppress MODIFY records that have no changes when the configuration parameter json_output_option was set to 1 or 2.

099 Patch 93 corrupted the stored procedure prefix table, which caused calls to stored procedures with the prefix “m_” to pick up a space after “m”, resulting in a SQL error. Stored procedures with the prefix "z_" also got SQL errors.

100 When doing data extraction for a COMPACT data set Databridge Enterprise sent the client deleted records. These records ended up getting discarded after generating error messages about their keys being null. The client now detects such records and silently ignores them when using Databridge Enterprise.

101 The Kafka client's length calculations were incorrect causing the JSON data to have an extra NUL character appended to it and the keys to be missing the closing double quote character.

102 Modified patch 100 so it applies to all data set types rather than just COMPACT data sets.

103 Running back-to-back redefine commands did not work when there was an actual DMSII reorganization. If a data set ended up with a mode of 31 the second redefine command got an error claiming that a reorganize command should be run next.

  • This is intended to allow a new data set to be added to the client when "suppress_new_datasets" is true. The first redefine command will see the new data set and set its active column to 0. You can then use the console, navigate to "Settings > Data Sets", and set the active column to 1 using the properties of the data set. The second redefine command will pick up where the first one left off and complete the task.

104 A timing hole in the UNIX clients can sometimes result in the index thread being created twice. As a result of this, the two index threads step all over each other leading to a multitude of Oracle errors. This happens only when you have lots of empty data sets at the start of the run.

105 The Administrative Console was being passed the wrong value for the ABSN when the first 4 bits of its leading byte were non-zero.

106 The client was not setting the Engine parameter for NoReversals to the correct value.

108 Implemented the configuration file parameter "purge_dropped_tabs" for the Oracle client’s generate command that adds the PURGE option to the “drop table” SQL statement in the drop_table stored procedure.
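
As a sketch, with "purge_dropped_tabs" enabled, the drop_table stored procedure issues a statement of the form (table name hypothetical):

    DROP TABLE customer PURGE;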

109 The binding of host variables for update statements used to preserve deleted records does not work correctly when user columns are not at the end of the record. This causes the update not to find the target row in the table, which leads to the client stopping with an internal error.

110 Host Variable tracing resulted in a null pointer exception when a history table using an IDENTITY or SERIAL column was involved.

dbfixup utility

003 Modified the UNIX/Linux version to launch a shell script to refresh the control tables. Launching dbutility directly returned an exit code of -1, which caused dbfixup to assume that the refresh failed.

004 Implemented the -n option to defer running the redefine command and the ensuing generate command. In this case, when the user tries to run a process command, it will exit with an exit code indicating that a redefine command is required before updates can be processed.

005 Modified the UNIX/Linux version to use a script file rather than launch dbutility commands directly using a system() command, as the return value was -1 causing dbfixup to error.

Client Manager Service

002 Specifying too many data sets in a clone command causes the service to get a buffer overrun. Added tests to prevent this from happening.

005 Added a line at the beginning of a switched log file that points back to the previous log file.

006 Increased the buffer size used for setting up the command line for a launched run to allow for a longer command line for clone commands.

007 Fixed buffer sizing bug in service code for the one-time password used to allow scripts launched by the service to connect back to the service. This bug occasionally caused the service to crash.

008 Implemented protocol negotiation between the administrative console and the service to allow 7.0 consoles to connect to 6.6 services and 6.6 consoles to connect to 7.0 services. The needed changes to reconcile the two protocols were then implemented. All console operator commands that have protocol dependencies pass the protocol level along with the command allowing the client to generate the response using the right protocol.

009 The service was failing to recognize the situation where the start and end times of the "blackout_period" parameter were equal as an indication that the blackout period was disabled.

010 The UNIX/Linux daemon crashed when an Add_Existing_DataSource RPC was issued from the Administrative Console.

011 The daemon hangs after a log switch when the logsw_on_newday option is set in the service’s configuration file.

012 If a blackout period is defined and a client detects that a blackout period is in effect and exits, the service crashes with a NULL pointer exception.

013 Made the service restart process commands interrupted by a blackout period, when the blackout period ends.

014 Modified the service’s scheduling to generate data source changes events when a blackout period starts and when it ends. This allows the console to notice that there is a blackout period when there is no active run.

016 The service had a resource leak that lost file handles for the log file. This resulted in the service getting a "Too many files open" error.

017 The service mismanaged the data source count when an "Add > New" command was executed. As a result of this, the dummy data source involved in the operation decreased the maximum by 1.

018 Added diagnostic code to the UNIX/Linux daemon to capture the recent RPC trace to a memory buffer and maintain the state of the program.

  • The daemon script was enhanced to use the TERM signal to stop the daemon and the USR1 or USR2 signals to write the RPC trace and the program status to the file "trace.log" in the working directory.

  • The daemon now has the following commands:

    start: same as before

    stop: orderly shutdown

    abort: hard shutdown to be used if the stop command fails to stop the daemon

    history: writes the trace information captured in the trace buffer to the file "trace.log" and shows the status of the program (this command overwrites the file "trace.log")

    status: appends the status of the program to the file "trace.log"
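
A usage sketch of the commands above, assuming the daemon script is invoked as dbdaemon as in earlier releases:

    ./dbdaemon stop       # orderly shutdown (uses the TERM signal)
    ./dbdaemon history    # write the captured RPC trace and program status to trace.log (overwrites the file)
    ./dbdaemon status     # append the program status to trace.log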

019 The service fails to properly update the binary configuration file when an "Add > New" command is issued from the console.

020 The service floods the network with DataSourceChanged unsolicited messages due to a scheduling error that fails to test for empty blackout periods.

021 Increased the size of the ODBC data source string in the service, as longer data source names got truncated during an "Add > New" console command.

022 The service did not always restart a run that was interrupted by a blackout period.

023 The service failed to suppress the sending of a data source added message to the console that initiated the add, which caused the console not to update the status of the added data source unless the console user forced a refresh by disconnecting and reconnecting or by viewing the data source's read-only information.

024 If a run started from the batch console ends and the alternate end_of_run script file does not exist, the service can get a null pointer exception.

025 If the service encounters a blackout period when starting a scheduled process command, the run does not get started when the blackout period ends.

Administrative Console

001 Added missing service configuration parameters to the Client Manager properties and implemented updatable properties for it.

002 Menu items that are not applicable for data sources that are not defined were not grayed out.

003 Added the "Bulk Loader Path" parameter to the BULK LOADER page of the Configure command dialog for Windows Clients. This allows users not to have to include the directory where the bulk loader resides in the SYSTEM PATH.

004 Replaced the "Keep Undigits" slider by a group of radio buttons in order to implement the change in Client patch 004.

005 Added the missing "TLS messages" option to the Trace Options dialog in the Advanced menu.

006 Adding a new data source resulted in duplicate entries in the Client Manager's data source page for the console that executed the command. The UI straightened itself out if you disconnected and reconnected.

007 Fixed formatting errors in the "Lag time" and "db_op_counts" columns of the monitor.

008 Changes to the scheduling entries were not reflected in the service and client configuration files.

009 The shared components of the Administrative Console were updated to address the following vulnerabilities. In the process of doing this the console was modified to use Java 11.

  • Apache Commons Text library updated to version 1.10.0 to mitigate CVE-2022-42889 (12.8.0.6)
  • Apache Shiro library updated to version 1.10.0 to mitigate CVE-2022-40664 (12.8.0.6)

010 Removed the JXPath library to mitigate CVE-2022-41852.

011 The Customize command gets a null pointer exception when accessing the properties of an occurs table.

012 Renaming indexes gets an error if the original index name is longer than 28 characters.

013 Changing the data type of a virtual key item does not work when its parent item has a scale.

014 The "_all" option for the "Refresh Data Sets" command did not work, it only refreshed the first data set in the list.

015 A daily scheduling list whose first entry is 12:00 am did not enable the Daily button in the scheduling page of the Configure command.

016 Unsolicited client status and statistics messages were mishandled, leading to the data sets' state info being corrupted in the Settings menu commands of the console.

017 The Logout command in the version 7.0.9 Administrative Console didn't log the user off.

Batch Console

001 Fixed the handling of command line parameters for bconsole that contain non-alphanumeric characters, so they are parsed as a single string.

Version information

Databridge Host Base release Current release
DBEngine 7.0.0.002 7.0.8.007
DBServer 7.0.0.002 7.0.8.004
DBSupport 7.0.0.000
DBGenFormat 7.0.0.001
DBSpan 7.0.0.000
DBSnapshot 7.0.0.000
DBTwin 7.0.0.000
DMSIIClient 7.0.0.000
DMSIISupport 7.0.0.001
DBInfo 7.0.0.000 7.0.1.001
DBLister 7.0.0.000
DBChangeUser 7.0.0.000
DBAuditTimer 7.0.0.000
DBAuditMirror 7.0.0.000
DBCobolSupport 7.0.0.000
DBLicenseManager 7.0.0.000
DBLicenseSupport 7.0.0.000
Databridge FileXtract Base release Current release
Initialize 7.0.0.000
PatchDASDL 7.0.0.000
COBOLtoDASDL 7.0.0.000
UserdatatoDASDL 7.0.0.000
UserData Reader 7.0.0.000
SUMLOG Reader 7.0.0.000
COMS Reader 7.0.0.000
Text Reader 7.0.0.000
BICSS Reader 7.0.0.000
TTrail Reader 7.0.0.000
LINCLog Reader 7.0.0.000
BankFile Reader 7.0.0.000
DiskFile Reader 7.0.0.000
PrintFile Reader 7.0.0.000
Databridge Enterprise Server Base release Current release
DBEnterprise 7.0.0.000 7.0.8.009
DBDirector 7.0.0.000
EnumerateDisks 7.0.0.000
LINCLog 7.0.0.000
Databridge Client Base release Current release
bconsole 7.0.0.000 7.0.7.001
dbutility 7.0.0.000 7.0.8.110
DBClient 7.0.0.000 7.0.8.110
DBClntCfgServer 7.0.0.000 7.0.8.110
dbscriptfixup 7.0.0.000 7.0.8.110
DBClntControl 7.0.0.006 7.0.8.025
dbctrlconfigure 7.0.0.006 7.0.8.025
dbfixup 7.0.0.000 7.0.4.004
migrate 7.0.0.000
dbpwenc 7.0.0.006
dbrebuild 7.0.0.000 7.0.4.110
Databridge Administrative Console Base release Current release
Administrative Console 7.0.0 7.0.9

System Requirements

Databridge 7.0 SP1 supports the following hardware and software.

System Support Updates

  • Databridge will remove support for operating systems and target databases when their respective software company ends mainstream and extended support.
Databridge Host
- Unisys mainframe system with an MCP level SSR 59.1 through 62.0
- DMSII or DMSII XL software (including the DMALGOL compiler)
- DMSII database DESCRIPTION, CONTROL, DMSUPPORT library, and audit files
Databridge Enterprise Server
- ClearPath PC with Logical disks or MCP disks (VSS disks in MCP format)
   or
- Windows PC that meets the minimum requirements for one of these operating systems:
  • Windows Server 2022
  • Windows Server 2019
  • Windows Server 2016
  • Windows Server 2012 R2
  • Windows Server 2012
- Direct Disk replication (recommended) requires read-only access to MCP disks on a storage area network (SAN)
- TCP/IP transport
- To view the product Help, a supported Internet browser (such as Microsoft Edge, Firefox, or Google Chrome) is required. In addition, JavaScript must be enabled in the browser settings to navigate and search Help.
Databridge Administrative Console
- We recommend that you install the Administrative Console and the Client on different machines. Installing the Administrative Console on the same machine as the Client can lead to performance issues as well as monitoring failures, because connection issues on the Client machine will also affect the Administrative Console's ability to monitor activity.
- The Administrative Console install includes a private JRE that is used to run the Administrative Console server.
Databridge Client
NOTE: Disk space requirements for replicated DMSII data are not included here. For best results, use a RAID disk array and store the client files on a separate disk from the database storage.
NOTE: Memory requirements do not include the database requirements when running the Client in the server that houses the relational database (consult your database documentation for these). The numbers are for a stand-alone client machine that connects to a remote database server.
- If you run the Administrative Console on the same machine as the Client, it will need an additional 1-2 GB of memory depending on how many data sources you have and how long you let it run. All log information is saved in memory.
Client - Windows
- Unisys ES7000
  or
- Pentium PC processor 3 GHz or higher (multiple CPU configuration recommended)
- 2 GB of RAM (4 GB recommended)
- 100 GB of disk space (in addition to disk space for the relational database built from DMSII data)
- TCP/IP transport

One of the following operating systems:
  • Windows Server 2022
  • Windows Server 2019
  • Windows Server 2016
  • Windows Server 2012 R2
  • Windows Server 2012
  • Windows 10
One of the following databases:
  • Microsoft SQL Server 2019
  • Microsoft SQL Server 2017+
  • Microsoft SQL Server 2016 SP2 or above
  • Microsoft SQL Server 2014 SP3 or above
  • Microsoft SQL Server 2012 SP4
  • Oracle 12c, 18c++, 19c, 21c
NOTE: On Windows Server, Core mode must be disabled for installation, and the console cannot be local.

+ For Windows systems only
++ Oracle 18c requires Solaris 11.4 or newer
Client - UNIX and Linux
One of the following systems:
  • Sun Microsystems SPARCstation running Solaris 11.4 or later
  • IBM pSeries running AIX 7.1 TL5 or later
  • Intel Pentium with Red Hat Enterprise Linux Release 7.7 or later
  • SUSE Linux Enterprise Server 11 SP4 or later
  • UBUNTU Linux 18.04 LTS or later
- 2 GB of RAM (4 GB recommended)
- 100 GB of free disk space for installation (in addition to disk space for the relational database built from DMSII data)
- TCP/IP transport

One of the following databases:
  • Oracle+ 12c, 18c++, 19c, or 21c (AIX, Linux and Solaris only)
+ Supported Oracle clients are 64-bit programs
++ Oracle 18c requires Solaris 11.4 or newer

Downloading Databridge 7.0 SP1

The Micro Focus Software, License and Downloads (SLD) site offers two download options. Choose the appropriate one for your environment.

  • If Databridge 7.0 is already installed, then download and install Databridge 7.0 SP1.

  • If you are installing Databridge 7.0 for the first time (such as upgrading from 6.6), then download and install the "Databridge Host, Databridge Enterprise Server, and All Clients (ZIP format, 7.0 + SP1)" package.

Installation Instructions

See the Installation Guide for information on system and installation requirements, upgrade instructions, and other helpful tips.

Notes

  1. The new Administrative Console must be installed after you install the Databridge software. See the Databridge Installation Guide section on Installing the Databridge Administrative Console for instructions on installing the Administrative Console on Windows and UNIX machines.

  2. We recommend that you install the Administrative Console on a separate server from the client machine(s) for the following reasons:

     • The Administrative Console can use significant resources, which may impact the client's performance.

     • If the Administrative Console is installed on a Client machine, it cannot monitor activity when that machine is down. By having the Administrative Console on a different machine, you can monitor Client Manager(s) and receive alerts to address warnings and connectivity errors.

  3. The instructions for installing the Administrative Console on Linux/UNIX have been updated (as of 7.0 SP1 Update 5).

Installing the Administrative Console on Linux or UNIX (Solaris)

Note

When updating the Databridge Administrative Console over an existing Linux/UNIX installation, we recommend removing the contents of the lib directory beforehand.

To install the Administrative Console on Linux or UNIX, create a directory into which the Administrative Console will be installed. Make this the current directory and copy databridge-container-7.0.9.tar and the appropriate JRE file from the install medium. A JRE is provided for the two supported platforms: Linux (zulu11.56.19-ca-jdk11.0.15-linux_x64.tar.gz) and Solaris (zulu11.56.19-ca-jdk11.0.15-solaris_sparc9.zip).

Install the Databridge Administrative Console by using the following commands:

On Linux:

tar -xvf databridge-container-7.0.9.tar
tar -xvf zulu11.56.19-ca-jdk11.0.15-linux_x64.tar.gz

On Solaris:

tar -xvf databridge-container-7.0.9.tar
unzip zulu11.56.19-ca-jdk11.0.15-solaris_sparc9.zip

Issue the commands below to complete the installation. The 'mv' command renames the installed JRE and postinstall.sh performs the initial configuration required for the Administrative Console to run on this system.

On Linux:

mv zulu11.56.19-ca-jdk11.0.15-linux_x64 java
./postinstall.sh

On Solaris:

mv zulu11.56.19-ca-jdk11.0.15-solaris_sparc9 java
./postinstall.sh


Contacting Customer Support

For specific product issues, contact Technical Support.



© 2024 Open Text

The only warranties for products and services of Open Text and its affiliates and licensors (“Open Text”) are as may be set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Open Text shall not be liable for technical or editorial errors or omissions contained herein. The information contained herein is subject to change without notice.