Because keys are not written at the time a new record is written, the WRITE statement never receives a duplicate key error (status 22). When you are using bulk addition, illegal duplicate keys are handled differently.
Illegal duplicates are detected when the keys are added to the file. Should a record be found to contain an illegal duplicate key value, that record is deleted. Your program is informed of this only if it contains a valid declarative for the file; if there is no declarative, the record is silently deleted. Otherwise, the file status data item is set to 22, the file's record area is filled with the contents of the rejected record, and the declarative executes. When the declarative finishes, the file's record area is restored to its previous contents so that it contains the correct data when the suspended file operation resumes.
When the file's declarative executes in this way, the program may not perform any file operations in the declarative, because the program is already in the middle of a file operation: the one that triggered the addition of the keys. In addition, the declarative may not start or stop any run units (including chaining), nor may it do an EXIT PROGRAM from the program that contains the declarative. Finally, note that the declarative runs as a locked thread: no other threads execute while the declarative runs.
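As an illustration, a declarative that notes rejected records might look like the following sketch. The file name FILE-1 and the data items FILE-1-STATUS, FILE-1-RECORD, SAVED-REJECT, and REJECT-COUNT are hypothetical names, not part of Vision itself:

```cobol
PROCEDURE DIVISION.
DECLARATIVES.
BULK-ERRORS SECTION.
    USE AFTER STANDARD ERROR PROCEDURE ON FILE-1.
BULK-ERRORS-MAIN.
*> Status 22 during bulk addition means the record area
*> holds the rejected record.  No file operations, run-unit
*> changes, or EXIT PROGRAM are allowed here, so the record
*> is only copied aside and counted.
    IF FILE-1-STATUS = "22"
        MOVE FILE-1-RECORD TO SAVED-REJECT
        ADD 1 TO REJECT-COUNT
    END-IF.
END DECLARATIVES.
```

Because the record area is restored when the declarative finishes, anything you want to keep from the rejected record must be copied to working storage, as shown above.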
You can configure Vision to write any rejected records to a file. This gives you a way to log the rejected records even though you may not perform a file operation from within your declarative. To create this log, set the DUPLICATES_LOG configuration variable to the name of the file in which you want to store the records. Vision will erase this file first if it already exists. You must use a separate log file for each file opened with bulk addition. You can do this by changing the setting of DUPLICATES_LOG between OPEN statements. For example:
    SET ENVIRONMENT "DUPLICATES-LOG" TO "file1.rej"
    OPEN OUTPUT FILE-1 FOR BULK-ADDITION
    SET ENVIRONMENT "DUPLICATES-LOG" TO "file2.rej"
    OPEN EXTEND FILE-2 FOR BULK-ADDITION
If DUPLICATES_LOG has not been set or is set to spaces, then no log file is created.
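For example, to turn logging off again before opening a later file for bulk addition (FILE-3 is a hypothetical file name), the variable can be set back to spaces:

```cobol
    SET ENVIRONMENT "DUPLICATES-LOG" TO SPACES
    OPEN OUTPUT FILE-3 FOR BULK-ADDITION
```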
In addition, the duplicate-key log file may not be placed on a remote machine using AcuServer. The log file must be directly accessible by the machine that is running the program.
Any record that Vision rejects due to an illegal duplicate key value is written to the log file. The log file is a binary sequential file with variable-size records. You can read it with a COBOL file that has the following layout:
    FILE-CONTROL.
        SELECT OPTIONAL LOG-FILE
            ASSIGN TO DISK file-name
            BINARY SEQUENTIAL.
    ...
    FILE SECTION.
    FD  LOG-FILE
        RECORD IS VARYING IN SIZE
            DEPENDING ON REC-SIZE.
    01  LOG-RECORD.
        <<indexed record layout goes here>>
    ...
    WORKING-STORAGE SECTION.
    77  REC-SIZE    PIC 9(5).
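Assuming the layout above, a loop to read back the rejected records might look like this sketch. The flag LOG-EOF and the paragraph PROCESS-REJECTED-RECORD are hypothetical names:

```cobol
*> WORKING-STORAGE: 77  LOG-EOF  PIC X  VALUE "N".
    OPEN INPUT LOG-FILE.
    PERFORM UNTIL LOG-EOF = "Y"
        READ LOG-FILE
            AT END
                MOVE "Y" TO LOG-EOF
            NOT AT END
*>              REC-SIZE holds the size of the record
*>              just read.
                PERFORM PROCESS-REJECTED-RECORD
        END-READ
    END-PERFORM.
    CLOSE LOG-FILE.
```

The SELECT is OPTIONAL because, as noted below, the log file is removed if no duplicates were found.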
If no duplicate records are found, the log file is removed when the Vision file is closed.
There is an unusual circumstance that can cause a file opened for bulk addition to reject a record that would not have been rejected if the file had been opened normally. This occurs only when the file has at least one alternate key that does not allow duplicates, and is due to the changed order in which the keys are written to the file.
Consider a case where a file has two numeric keys, the primary key and one alternate that does not allow duplicates. Now suppose the following three records were written to this newly created file:
    Primary key    Alternate key
         1               1
         2               1
         2               2
In a file opened normally, the first record would be written to the file, the second record would be rejected because of an illegal duplicate on the alternate key, then the last record would be written. The result would be a two-record file, the records (1,1) and (2,2).
If the file is opened for bulk addition, the three records are written first, then the primary keys are added, then the alternate keys. The first and second records' primary keys are added successfully, but the third record's primary key is rejected because it duplicates the second record's, and the third record is removed as a result. Then the alternate keys are processed. The first record's key is added successfully. The second record's key is rejected because it is a duplicate, and the second record is removed. The third record's alternate key is not processed because that record has already been removed. The result is a one-record file containing only the record (1,1).
To summarize, as a result of bulk addition, you may end up with records rejected because of duplicate key conflicts with other (eventually rejected) records, and not necessarily with any accepted records.
This difference would not occur if the keys were added row-wise instead of column-wise, but doing so would sacrifice much of the efficiency gained by bulk addition mode.
In most practical applications, this scenario is not very likely. If need be, you can adjust for this difference by logging the rejected records and then trying to add them to the file normally after leaving bulk-addition mode. The second attempt at writing out the records will still reject the records with illegal duplicates, but it will accept any records that conflicted only with other rejected records.
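One way to sketch this recovery, using the hypothetical names FILE-1, LOG-FILE, LOG-EOF, and FILE-1-RECORD, is to close the bulk-addition file (ending bulk mode) and then re-WRITE each logged record normally:

```cobol
*> FILE-1 has already been closed, ending bulk-addition
*> mode; reopen it for normal I-O.
    OPEN I-O FILE-1.
    OPEN INPUT LOG-FILE.
    PERFORM UNTIL LOG-EOF = "Y"
        READ LOG-FILE
            AT END
                MOVE "Y" TO LOG-EOF
            NOT AT END
                MOVE LOG-RECORD TO FILE-1-RECORD
                WRITE FILE-1-RECORD
*>                  A rejection here (status 22) marks a
*>                  genuine illegal duplicate; records that
*>                  conflicted only with other rejected
*>                  records are now accepted.
                    INVALID KEY
                        CONTINUE
                END-WRITE
        END-READ
    END-PERFORM.
    CLOSE LOG-FILE FILE-1.
```

Remember to save or rename the log file before the second pass if you intend to open FILE-1 for bulk addition again with the same DUPLICATES_LOG setting, since Vision erases an existing log file on open.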
Because of the various issues surrounding illegal duplicate key values, it is best to use bulk addition in cases where illegal duplicates are rare. Processing records with a great many illegal keys significantly reduces the performance benefits of using bulk addition.