Tech Data Cloud & Automation (C&A) offers several products that move data between different source applications. By choosing this product, you have chosen a cost-effective, cost-reducing approach to fulfilling your enterprise's data needs.
The Data Pump is a means of moving asset-descriptive data from your Discovery or Source Application to your CMDB or Target Application, along with the relationships that exist between the assets. Depending on the Target Application, the data is often moved into a staging or import dataset, and a separate reconciliation process then merges the newly updated data into the dataset used for production.
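Conceptually, that staging-then-reconcile flow looks something like the following minimal Python sketch. All names in it (extract_assets, the ci_id key, the dataset dictionaries) are illustrative assumptions, not the Data Pump's actual API.

```python
# Minimal sketch of the staging-then-reconcile flow described above.
# All names (extract_assets, "ci_id", the dataset dicts) are
# illustrative assumptions, not the Data Pump's actual API.

def extract_assets(source):
    """Pull asset rows and their relationships from the Source Application."""
    return source.get("assets", []), source.get("relationships", [])

def load_into_staging(staging, assets, relationships):
    """Land the data in a staging/import dataset, not directly in production."""
    staging["assets"] = assets
    staging["relationships"] = relationships

def reconcile(staging, production):
    """Merge staged rows into the production dataset, keyed on a CI identifier."""
    by_id = {ci["ci_id"]: ci for ci in production["assets"]}
    for ci in staging["assets"]:
        by_id[ci["ci_id"]] = {**by_id.get(ci["ci_id"], {}), **ci}  # update or insert
    production["assets"] = list(by_id.values())

source = {"assets": [{"ci_id": "srv-01", "os": "Linux"}], "relationships": []}
staging, production = {}, {"assets": []}
load_into_staging(staging, *extract_assets(source))
reconcile(staging, production)
print(production["assets"])
```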
The data in your Source Application is typically used by administrators to maintain the assets under their care. That keeps the data under constant scrutiny, since administrators rely on the Source Application to maintain far more systems than they could handle manually. The data in the Source Application can therefore be a gold mine for asset managers and other financial administrators, because it is vetted information from hardware and software asset administrators. With the help of the Data Pump, this information can now be used for financial analysis and management operations. Needs such as maintenance planning, asset life-cycle planning, compliance evaluation, and software and hardware license administration are greatly enhanced by it. Importing this data also provides a basis for incident management, problem management, software asset management, service level management, and the structure of the service book.
This Data Pump mapping system was designed to be easy to use. Its most obvious ease-of-use feature is that most panels in the User Interface operate like spreadsheets, so familiarity with spreadsheet manipulation can be applied directly to adjust the results to the operator's needs.
Finally, ease of understanding couples with ease of use: very little happens “under the table”, and almost everything that happens in the Extract, Transform, and Load processes is visible to the operator. This transparency is best experienced by moving single rows, or small numbers of rows, to the Target Application and then evaluating the results there.
The Data Pump also has a highly effective Graphical User Interface configuration mode, with tools that allow careful construction, adjustment, and verification of data mappings as the client's environment evolves and analysis needs increase. Mapping processes can, in effect, be run one row at a time, and pieces of source data can be entered by hand to evaluate specific transformation and update results.
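To make the one-row-at-a-time idea concrete, here is a rough Python sketch of what evaluating a mapping against a single hand-entered row amounts to. The mapping rules and field names are hypothetical illustrations, not the product's actual mapping syntax.

```python
# Sketch of evaluating a mapping against a single hand-entered source row.
# The mapping rules and field names below are hypothetical examples.

mapping = {
    "Name":         lambda row: row["hostname"].upper(),
    "SerialNumber": lambda row: row["serial"].strip(),
    "Category":     lambda row: "Server" if row["is_server"] else "Workstation",
}

def transform_row(row):
    """Apply each mapping rule and return the target-side record."""
    return {target_field: rule(row) for target_field, rule in mapping.items()}

# Enter one row by hand and inspect the transformation result
# before committing anything to the Target Application.
sample = {"hostname": "db-prod-01", "serial": " ABC123 ", "is_server": True}
print(transform_row(sample))
```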
The Data Pump has several internal features specifically designed to decrease network utilization, and others that reduce server disk activity. It has gone through many design cycles to be among the most efficient, if not the most efficient, ETL tools available for loading data into the Target Applications we support. This efficiency is the result of the C&A team's intense awareness of the source and target data structures and their access mechanisms, as well as of the problems many large clients face with massive numbers of CIs.
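For illustration, one generic technique for cutting network traffic is to push only rows whose contents have actually changed. The Python sketch below shows a hash-based delta check of that kind; it is an assumed example of the general approach, not a description of the Data Pump's internal mechanism.

```python
import hashlib
import json

# Hash-based delta detection: send a CI to the target only when its
# content hash differs from the previous run. A generic illustration,
# not the Data Pump's actual internals.

def ci_hash(ci):
    return hashlib.sha256(json.dumps(ci, sort_keys=True).encode()).hexdigest()

def changed_cis(cis, previous_hashes):
    """Yield only the CIs whose contents changed since the previous run."""
    for ci in cis:
        h = ci_hash(ci)
        if previous_hashes.get(ci["ci_id"]) != h:
            previous_hashes[ci["ci_id"]] = h
            yield ci

previous = {}
batch = [{"ci_id": "srv-01", "os": "Linux"}]
print(list(changed_cis(batch, previous)))  # first run: CI is sent
print(list(changed_cis(batch, previous)))  # unchanged: nothing sent
```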
The Data Pump is automated, easy to use, and supports ITIL.
Key benefits include:
• Automated, scheduled movement of asset and relationship data from Source to Target Application
• Spreadsheet-like mapping panels that are easy to learn and adjust
• Transparent Extract, Transform, and Load processing that can be tested one row at a time
• Efficient loading that minimizes network utilization and server disk activity
• Support for ITIL processes such as incident, problem, software asset, and service level management
1 What’s new
• Implements the most current core module, 3.3.07, with performance improvements and ease-of-use changes.
• Enhancements that improve development and customization tasks for end users as well as for consultants.
• Test buttons have been added to help users identify connection problems.
• Maps have been consolidated to increase performance.
• Sub-maps can now be enabled and disabled in selected groups.
• Improvements for working with 1+ million CIs conserve memory and increase performance.
• Supports BMC ADDM 11.x and up.
• Supports MF uCMDB 10.x and up.
• Works with Java JDK 1.8+.
• Works with 32-bit or 64-bit Java.
• Works with all current Microsoft Windows hosts, including Windows 10.
• Validates Java JDK presence as identified from the command line or from entries in the startup script (see the Java check sketched after this list).
• Clean install-process prompts and responses.
• Clean logging of the install process.
Note: If you are using Data Pump version 3.3.00 or later, use Java 1.8 update 45 or later.
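A startup script can verify the JDK in roughly the following way. This Python sketch, which parses the `java -version` banner, is only an illustration of such a check under the assumption that java is on the PATH; it is not the Data Pump's actual startup code.

```python
import re
import subprocess

# Sketch of validating the Java version before launch, as a startup
# script might. Parsing details are assumptions; note that
# `java -version` writes its banner to stderr, not stdout.

def java18_update():
    """Return the 1.8 update number, or None if Java 1.8 is not found."""
    try:
        banner = subprocess.run(["java", "-version"],
                                capture_output=True, text=True).stderr
    except FileNotFoundError:
        return None
    m = re.search(r'version "1\.8\.0_(\d+)"', banner)
    return int(m.group(1)) if m else None

update = java18_update()
if update is None:
    raise SystemExit("Java 1.8 not found on PATH; install JDK 1.8 update 45+.")
if update < 45:
    raise SystemExit(f"Java 1.8 update {update} is too old; need update 45+.")
print(f"Java 1.8 update {update} OK.")
```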
2 Installation
This package is not an upgrade but rather a complete package. For new installations, please follow the user guide or quick start guide to complete the installation steps. To upgrade an existing installation, do a full install of the package into a different directory than the existing Data Pump. Then copy your .map files (these are usually located in the “conf” directory) into the conf directory of the new installation, as sketched below. We’ve made every effort to make sure that this Data Pump is backward compatible with the map files of prior Data Pump releases. Please do not remove the previous version of the Data Pump from the production system immediately; create a non-production environment to verify the changes first. A non-production Data Pump host system can be a server computer or your workstation.
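The .map copy step can be scripted. The following Python sketch assumes hypothetical old and new install paths; substitute your actual directories.

```python
import shutil
from pathlib import Path

# Sketch of the upgrade copy step: bring the existing .map files into
# the new installation's conf directory. Both paths are hypothetical
# placeholders; substitute your actual install directories.

old_conf = Path(r"C:\DataPump_old\conf")
new_conf = Path(r"C:\DataPump_new\conf")

for map_file in old_conf.glob("*.map"):
    shutil.copy2(map_file, new_conf / map_file.name)  # preserves timestamps
    print(f"copied {map_file.name}")
```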
3 Upgrade
Install the Data Pump product as if it were a new product, into a new directory.
Note: Do not make any changes to the directory where the original Data Pump is located.
7. To make this new installation the active service, uninstall the current service using the drop-down menu “Options=>Remove Service”. Usually even the new Data Pump MF uCMDB can uninstall the old service; if problems arise, uninstall it with the old Data Pump. Then install the new service using the drop-down menu “Options=>Install Service”.
8. Go to Options>>Schedules to enter your existing service schedule start times into the new Data Pump schedule. If you are asked to restart the service, answer “Yes”.
9. Go to File>>Save&Exit. If you see the “make this map as your default map” message, click Yes to accept it as the default. If this prompt appears, restart the service again.

One common question of Data Pump deployment is how to move the Data Pump from QA to production. Below are our recommendations:
1. Zip the entire Data Pump folder on your QA system and unzip it to your production system. If possible, unzip it to the same drive and the same directory structure as on the QA system (a sketch of this step follows the list).
2. Perform the service installation on the production system. If the service was installed from the previous Data Pump, you will need to remove it first (Options>>Remove Service). If you are using Windows authentication on the connections, please remember to change the run-as account of the service (Administrative Tools>>Services>>Properties>>Logon).
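Step 1 can also be scripted. This Python sketch archives the QA folder and unpacks it at the same path, with both paths as hypothetical placeholders; in practice the zip file is transferred to the production host between the two steps.

```python
import shutil

# Sketch of moving the Data Pump from QA to production: archive the
# whole QA folder, then unpack it at the same drive/path on production.
# Paths are hypothetical placeholders.

qa_dir = r"C:\DataPump"  # QA installation folder
archive = shutil.make_archive("datapump_qa", "zip", qa_dir)

# ...transfer datapump_qa.zip to the production host, then:
shutil.unpack_archive(archive, r"C:\DataPump")  # same drive and directory
print("unpacked to C:\\DataPump")
```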