Loading Data Store Information into Your Mainframe Environment

Perform this procedure to load the definitions from your data store into the Knowledge Base and map them to your workspace.

  1. In your mainframe environment, execute the following from an ISPF command line: TSO EXEC 'xxxxxxxx.xxxxxxxx.CLIST(MFDATA)', where xxxxxxxx.xxxxxxxx is the part of the PDS library name where member MFDATA is located (see the example at the end of this procedure). This command allocates the PDS libraries to ISPF and launches the Main Menu in an ISPF environment.
  2. Enter the following command: TSO CUINFIL.
  3. Select the Machine ID and Company you created.
  4. Specify the following parameters (the demo values are summarized at the end of this procedure):
    Data Set Name Inp
    Name of the input data set that contains the table and owner names. For this demo, use URADAR.DEMOT.LISTDB2.
    Data Set Name Out
    Name of the output data set that contains the table names, column names, and column descriptions. For this demo, use URADAR.DEMOT.LISTODB2.
    Note: The input and output data set names must be different.
    Process Id
    Process identifier associated with the file being processed. For example, for a DB2 data store with direct access, use DB2DA.
  5. To schedule a new job, click Environment > Work with Jobs to open the Work with Jobs window.
  6. Click New.
  7. In List of Jobs, select the job BURLFIL (Load Data Store Information From External Interface).
  8. Select the DATAEXPRES Machine ID you just created.
  9. Specify DEMO as your company code in the Company name box.
  10. Click Apply.
  11. In the Sequential data store box, enter: URADAR.DEMOT.LISTODB2.
  12. Click OK. The job appears in the List of Scheduled Jobs with a pause symbol, indicating that it is awaiting execution. After the job completes successfully, it is listed with a green check mark.
  13. In your TSO environment, submit the BURLFIL job by using the Submit Client Scheduled Job panel, which you open by selecting the Submit Client Scheduled Job option from the Data Express for z/OS - Main Menu panel. For more information, see the Submission Function section of the Process Guide for z/OS.
  14. In the Work with Jobs window, click Refresh to verify that the job completed successfully.
  15. Close the Work with Jobs window.
  16. In the Work with Data Stores window, expand the Machine ID node and select DATAEXPRES to view the data stores associated with the DATAEXPRES machine ID.
  17. If you plan to go straight on to the next session, you can keep Data Builder open. Otherwise, click Exit and then Yes.
  18. Continue to the next section if you want to complete the sequential files session. Otherwise, skip to the Data Masking session.
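
For reference, the command in step 1 has the general form shown below. The library qualifier YOURHLQ.DATAEXPR is a hypothetical placeholder, not a name supplied with this demo; replace it with the PDS library name at your installation that contains member MFDATA:

  TSO EXEC 'YOURHLQ.DATAEXPR.CLIST(MFDATA)'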
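
The demo values entered in step 4 are summarized below. The layout is illustrative only and may not match the panel exactly:

  Data Set Name Inp   URADAR.DEMOT.LISTDB2
  Data Set Name Out   URADAR.DEMOT.LISTODB2
  Process Id          DB2DA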