Updating the Relational Database
This chapter covers updating the Databridge Client data tables in the relational database with changes made to the DMSII database. In addition, it explains DMSII reorganizations and how they update the Databridge Client data tables.
Updating the Databridge Data Tables
Updating is the process of applying changes made to the DMSII database to the Databridge data tables in the relational database. Only the changes are sent, not the complete data sets.
You can update the Databridge data tables after they have been cloned as long as they meet the following requirements:
- Each Databridge data table you want to update has a unique index. If a table you want to update does not have a unique index, see Creating Indexes for Tables.
- The DMSII database has not been reorganized or rolled back. If the DMSII database has been reorganized, see DMSII Reorganizations and Rollbacks.
You can update the Databridge Client data tables by running a process command each time you want the update to occur, or you can schedule a process command to run at fixed times or a fixed amount of time after the previous run finishes. How often you update the Databridge tables depends on the following:

- How current you want the data to be. For time-critical information, you may want to update the Databridge data tables several times a day.

  If you do not update the relational database often, it may be more effective for you to re-clone the DMSII data sets rather than update them. For example, if you are interested in weekend information only, and several audit files have been closed (and possibly moved to tape) during that time, recloning may be faster than updating.

- How often audit files are available on the host. When an audit file is not available, the Databridge Engine temporarily stops processing until the audit file becomes available. The Databridge Engine can access the active DMSII audit file when the DBEngine control file parameter READ ACTIVE AUDIT is set to True.

- How often closed audit files are available on the host. In some cases, a closed audit file is not available because it has been copied to tape. In this instance, a host operator must mount the tape before the closed audit file can be made available to Databridge.
Performing Updates Without Using Stored Procedures
Version 6.6 of the Databridge software introduced a more efficient way of updating tables that does not use stored procedures. You can control this feature globally by setting the configuration parameter use_stored_procs to False; see the use_stored_procs section for more information. When this parameter is set to False, the process and clone commands generate the actual SQL statements instead of using stored procedure calls to perform updates. The Client still uses host variables, as it did with stored procedure calls. Executing the SQL directly eliminates some overhead and makes update processing faster. The feature can also be controlled at the data set level, as described in the use_stored_procs section.
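For example, disabling stored procedures globally might look like the following configuration fragment (the section name [params] is an assumption about where this parameter lives in your Client configuration file; check the use_stored_procs section for the exact placement):

```ini
; assumption: use_stored_procs belongs in the [params] section of the
; Client configuration file for the data source
[params]
use_stored_procs = false
```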
Setting the configuration parameter use_stored_procs to False has no effect until you run a redefine command with the -R option (redefine all) to propagate this setting to the ds_options columns of the data sets. The redefine command will return an exit code of 2033 (Reorg command required). You then need to run the reorganize command to complete the task.
The reorganize command generates new scripts for the tables and refreshes all the data sets, dropping all of the stored procedures, which are no longer used. If you later revert to using stored procedures, the procedure is the same; in that case, the reorganize command recreates the stored procedures.
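From the command line, the sequence described above might look like this (the data source name BANKDB is illustrative):

```
dbutility redefine -R BANKDB   # returns exit code 2033 (Reorg command required)
dbutility reorganize BANKDB    # regenerates scripts and drops the stored procedures
```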
The dbutility process command has a built-in scheduling mechanism that allows the run to hibernate and resume at the next scheduled time. When the Databridge Server sends the Client an end-of-audit-reached status, dbutility normally terminates. However, if you enable scheduling, the Databridge Client instead disconnects from the server and the database and hibernates until the next scheduled process command, when it reconnects to the server and the database. This scheduling mechanism only works after you run dbutility for the initial clone of the DMSII database. If the Client crashes or the power fails, scheduling will fail.

Service-based scheduling has none of these shortcomings, as the scheduling task is taken over by the Client Manager service. When a DBClient run terminates, the service determines when the next process command should run and starts it when that time arrives.
The advantage of service-based scheduling is that it is immune from
system failures, as the service automatically gets restarted when the
system is rebooted. For details about service-based scheduling,
see the Databridge Administrative Console Help.
To schedule updates
1. Uncomment the scheduling parameters in the Databridge Client configuration file. Scheduling parameters are listed under the [Scheduling] header in the configuration file.
2. Select one of the scheduling methods (daily or fixed-delay) and enter appropriate values for the corresponding scheduling parameters. Each parameter is explained in [Scheduling].
```
[Scheduling]
;
;dbutility process command only
;
;daily              = 08:00, 12:00, 17:00, 24:00
;exit_on_error      = false
;sched_delay_secs   = 600
;sched_minwait_secs = 3600
;sched_retry_secs   = 3600
;blackout_period    = 00:00, 02:00
```
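Daily scheduling behaves like "the next time in the list after now; if none remain today, the first time tomorrow." A minimal Python sketch of that selection logic (illustrative only, not Client code):

```python
from datetime import datetime, timedelta

def next_scheduled_run(daily_times, now):
    """Return the next run time for a list of "HH:MM" strings.

    Picks the earliest scheduled time later than `now`; if none remain
    today, rolls over to the first scheduled time tomorrow. "24:00" is
    treated as midnight (hour 24 wraps to 0, then rolls to tomorrow).
    """
    today_runs = sorted(
        now.replace(hour=int(t.split(":")[0]) % 24,
                    minute=int(t.split(":")[1]),
                    second=0, microsecond=0)
        for t in daily_times
    )
    for run in today_runs:
        if run > now:
            return run
    # all of today's times have passed: first time tomorrow
    return today_runs[0] + timedelta(days=1)
```

For example, with the schedule `08:00, 12:00, 17:00` a run finishing at 13:00 would next fire at 17:00 the same day.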
As long as the process command completes successfully, dbutility becomes inactive (sleeps) until the next scheduled time. If the scheduled dbutility process command is successful, the following message appears:
<code>Next update for DataSource *datasourcename* will run at *hh:mm* (delay = *nn* secs)</code>
Scheduling of updates will continue until any of the following occurs:
You reboot the Databridge Client machine or end the Databridge Client session
You enter a SCHED OFF console command while dbutility is processing updates
A DMSII reorganization (other than a garbage collection) occurs
If you must stop the dbutility program, we recommend that you use the QUIT command to exit at the next quiet point. If the Client is waiting for the server to send updates when none are available and the use_dbwait configuration file parameter is set to True, you can use the QUIT NOW command, which resets the connection to the server and terminates the Client run. If needed, you can also press Ctrl+C to terminate a session while dbutility is processing updates; however, we do not recommend this option.
Daily Schedule Example
The following example uses the daily scheduling method. In this example,
the Databridge Client runs only twice a day -- once midway through the
business day and once at the end of the business day. If the process
command fails, the Databridge Client waits 10 minutes before retrying.
```
[scheduling]
daily            = 12:00, 17:00   ; run the process at noon and 5 PM
sched_retry_secs = 600            ; retry in 10 minutes after a failure
```
Fixed-Delay Schedule Example
The following example uses the fixed-delay scheduling method. In this example, the Databridge Client runs the process command 4 hours (240 minutes) after the previous run finishes. If the process command fails, the Databridge Client retries every 30 minutes.
```
[scheduling]
sched_delay_secs = 14400   ; run 4 hours after the previous run finishes
sched_retry_secs = 1800    ; retry in 30 minutes after a failure
```
Scheduling Blackout Periods
You can schedule blackout periods during which the Client suspends all processing and updates to allow for routine maintenance. To use this feature with the service-controlled Client, you can set the Blackout Period value from the Administrative Console by selecting Settings > Configure from the data source page.
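A blackout period is given as a start and end time, and such a window may wrap past midnight. A small Python sketch of the containment check (illustrative only, not Client code):

```python
from datetime import time

def in_blackout(now_t, start, end):
    """Return True if now_t falls inside the blackout window [start, end).

    A window whose end is earlier than its start (e.g. 23:00-02:00)
    wraps past midnight.
    """
    if start <= end:
        return start <= now_t < end
    return now_t >= start or now_t < end
```

For the sample window `00:00, 02:00` shown earlier, `in_blackout(time(1, 0), time(0, 0), time(2, 0))` is True, so a run at 01:00 would be suspended.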
Use this procedure when you want to run a dbutility process command independent of scheduling.
Make sure that the Databridge Server is running. If it is not, the Databridge Client will try to connect to the server and eventually time out.
Make sure that your signon parameters are set appropriately.
If the [EbcdictoAscii] section of the configuration file (to customize character translation) has changed since the initial clone, your data may not be consistent. You might need to re-clone.
Make sure that the current directory is the one you created for this data source. This ensures that Databridge Client can locate the scripts. (Scripts are only required during an update if there's a purged data set.)
Enter the following:
```
dbutility [signon_options misc_options] process datasource
```
| Parameter | Description |
|-----------|-------------|
| signon_options | For each Databridge Client type, the relational database signon parameters are specified by command-line options. |
| misc_options | See the table in the next section titled Process Command Options. |
| datasource | The name of the data source specified in the DBServer control file or by Enterprise Server. |
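For example, for a hypothetical data source named BANKDB (the name is illustrative), run from the working directory created for that data source:

```
cd /path/to/bankdb      # working directory created for this data source
dbutility process BANKDB
```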
When you run a
process command to update the Databridge tables in the
relational database, the following occurs:
All modified records are overwritten with their new values. If the target record is not found in the table, the Databridge Client adds the record to the table instead.
All deleted records are deleted.
All added records are inserted into the data tables. If the target record is already in the table, the Databridge Client modifies the record in the table instead.
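The three cases above amount to upsert-plus-delete semantics: a modify for a missing row becomes an insert, and an insert for an existing row becomes a modify. A Python sketch using a dict keyed by the table's unique index (illustrative only, not Client code):

```python
def apply_change(table, key, record, action):
    """Apply one DMSII change to `table`, a dict keyed by unique index.

    Mirrors the fallback rules described above: modify and insert both
    reduce to writing the record, whether or not the row exists.
    """
    if action == "delete":
        table.pop(key, None)        # remove the row if present
    elif action in ("modify", "insert"):
        table[key] = record         # insert-or-overwrite either way
    else:
        raise ValueError(f"unknown action: {action}")
```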
Process Command Options
Command-line options related to the
process command are as follows:
| Option | Description |
|--------|-------------|
|        | Enables default tracing. |
|        | Specifies a configuration file other than the default. |
|        | Enables selective tracing. |
|        | Inhibits the audit file removal WFL from being run on the host. |
|        | Forces the Client to use a new log file for this run. |
|        | Forces the Client to use a new trace file for this run, when tracing is enabled. |
For information on the command-line options, see dbutility Command-Line Options.
Anomalies That Can Occur In Updates
When the Databridge Client updates the relational database, the following anomalies can occur:
| Condition | Description |
|-----------|-------------|
| Last quiet point (QPT) in an audit file | When processing an update transaction group since the last quiet point, Databridge Engine does the following when it reaches the end of the last available audit file: |
| Host application rolls back changes (This is a partial DMSII rollback, not to be confused with a total DMSII rollback.) | If a host application encounters an error condition while updating the DMSII database, it rolls back all of the changes it made. In this case, Databridge Engine aborts the updates when it finds the aborted-transaction indication in the audit file. Databridge Engine handles the situation in one of two ways, based on the setting of the DBEngine control file parameter CONVERT REVERSALS TO UPDATES: |