Schedule the Copy of Logger Data to ArcSight SaaS
This process is optional.
Rather than manually copying the metadata to the S3 bucket, you can schedule the Archive Migration Tool to copy the data.
- Perform Steps 1–3 of the Running the Archive Migration Tool procedure.
- Select Archives Available from the initial menu:

  What do you want to do?
  [ ] Manage S3 Bucket
  [ x ] Archives Available
  [ ] Logger Information
  [ ] Generator ID
  [ ] Generate Catalog
  [ ] Exit

- Select one or several of the Storage groups listed at the top of the menu.
- Select Schedule. For example:

  [ x ] Internal Event Storage Group (Total: xGb, Ava: xGb, Pen: xGb, Done: xGb, Fail: xGb)
  What do you want to do?
  [ ] Copy
  [ x ] Schedule
  [ ] More details
  [ ] Back
- (Conditional) To choose archives from specific years, months, or days, select More details.

  Note: If the Generator ID has not been configured (it is not enabled by default in Logger), checking an option from the More details menu produces the following prompt:

  To activate the migration process, check the Generator ID

  To configure this option, which is required to initiate migrations, you must go back to the main menu, as detailed in Adding a Unique ID to Migrated Events.
- After choosing the archives that you want to schedule for copying, specify when to import the metadata (an example with sample values follows the list of considerations below):

  Enter the schedule hour:
  Enter the schedule timeout:

  where:
  - schedule hour: Represents the hour of the day, in 00 to 23 format, when you want to start the migration. For example, specify a time when traffic is slower or resources are freer.

  - schedule timeout: Represents the maximum time, in minutes, during which the job can run. You must specify a value smaller than 1440 (a full day). The following considerations apply:
    - The value includes the time required for validation, size and checksum calculation, and other processing.

    - If the job does not finish within the allotted timeout, it resumes the next day at the same time and for the same duration, starting from the point where it left off the previous day.
    - If you schedule another copy while an existing scheduled copy has not finished processing archives, the tool does not ask for new schedule hour or schedule timeout parameters. Instead, the system continues using the settings of the existing scheduled copy.
    - The system executes scheduled copies in the background.

    - If any errors occur during this process, the Archive Migration Tool will print messages on the screen to make you aware of them.
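  For illustration only, with hypothetical values (a nightly start at 22:00 and a five-hour cap), the prompts could be answered as follows; choose values that suit your own environment:

    Enter the schedule hour: 22
    Enter the schedule timeout: 300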
  To review the execution log, look for the logger_to_recon_archive_catalog_${loggerIPNoDots}.log file, saved under the same path as the script. During execution, the loggerToReconArchiveCatalog.sh script prints error messages only. To view more on-screen detail, activate verbose mode by adding the -v parameter when executing the script.

  Once the copy process is completed, an Archive Catalog file is generated. This file contains metadata about the archives that have been copied so far, plus additional information about Logger, such as its storage groups and their retention.

  The Archive Catalog file is copied to the Amazon S3 bucket in a folder named:

  Bucket_Name/event-sync/logger-archives/Tenant_ID/Logger_IP_Without_Dots/

  The copied Logger archive files are available in folders such as:

  Bucket_Name/event-sync/logger-archives/Tenant_ID/Logger_IP_Without_Dots/Storage_Group_ID/YearMonthDay/
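  As a quick way to verify the results, you could run the script in verbose mode and then inspect the log and the bucket. The following is a sketch only: it assumes you run the commands from the directory where the script is saved, that the AWS CLI is installed and configured for the target bucket, and that Bucket_Name, Tenant_ID, and 192168110 are placeholders for your bucket name, tenant ID, and Logger IP without dots.

    # Run the tool with more on-screen detail (the -v parameter described above).
    ./loggerToReconArchiveCatalog.sh -v

    # Follow the execution log saved in the same path as the script
    # (replace 192168110 with your own Logger IP without dots).
    tail -f logger_to_recon_archive_catalog_192168110.log

    # Optionally, list the Archive Catalog and copied archives in the S3 bucket.
    aws s3 ls s3://Bucket_Name/event-sync/logger-archives/Tenant_ID/192168110/ --recursive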
- (Conditional) Repeat this process for each Logger whose archived data you want to schedule for migration.

- If no more files will be scheduled, exit the tool.