Schedule the Copy of Logger Data to ArcSight SaaS

This process is optional.

Rather than manually copying the metadata to the S3 bucket, you can schedule the Archive Migration Tool to copy the data.

  1. Perform Steps 1–3 of the Running the Archive Migration Tool procedure.

  2. Select Archives Available from the initial menu:

    What do you want to do?
    [ ] Manage S3 Bucket
    [ x ] Archives Available
    [ ] Logger Information
    [ ] Generator ID
    [ ] Generate Catalog
    [ ] Exit
  3. Select one or more of the storage groups listed at the top of the menu.

  4. Select Schedule.

    For example:

    [ x ] Internal Event Storage Group (Total: xGb, Ava: xGb, Pen: xGb, Done: xGb, Fail: xGb)
    What do you want to do?
    [ ] Copy
    [ x ] Schedule
    [ ] More details
    [ ] Back
  5. (Conditional) To choose archives from specific years, months, or days, select More details.

    Note: The Generator ID is not enabled by default in Logger. If it has not been configured, selecting an option from the More details menu produces the following prompt:

    To activate the migration process, check the Generator ID

    This option is required to initiate migrations. To configure it, return to the main menu, as described in Adding a Unique ID to Migrated Events.

  6. After choosing the archives that you want to schedule for copying, specify when to import the metadata:

    Enter the schedule hour:
    Enter the schedule timeout:

    where:

    schedule hour

    Represents the hour of the day, in 00 to 23 format, at which the migration starts. For example, specify an hour when traffic is lighter or more resources are free.

    schedule timeout

    Represents the maximum time, in minutes, that the job can run. You must specify a value smaller than 1440 (a full day). The following considerations apply (see the example after this list):

    • The schedule timeout value includes the time required to validate the archives, calculate their size and checksum, and so on.

    • If the job does not finish within the allotted timeout, it will resume the next day at the same time and for the same duration. The process starts from the point where it left off the previous day.

    • If you schedule another copy while an existing scheduled copy has not finished processing archives, the tool does not prompt for new schedule hour or schedule timeout values. Instead, it continues using the settings of the existing scheduled copy.

    • The system executes scheduled copies in the background.
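    For example, to start the copy each night at 22:00 and allow it to run for up to five hours, you might enter values such as the following (illustrative values only):

    Enter the schedule hour: 22
    Enter the schedule timeout: 300

    With these settings, a copy that requires roughly 12 hours of processing in total would complete over three nights (300 + 300 + about 120 minutes), resuming each night from the point where it stopped.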

    If any errors occur during this process, the Archive Migration Tool displays messages on the screen to make you aware of them.

    To review the execution log, look for the logger_to_recon_archive_catalog_${loggerIPNoDots}.log file, saved in the same directory as the script.

    During execution, the loggerToReconArchiveCatalog.sh script prints error messages only. To view more detail on screen, enable verbose mode by adding the -v parameter when you run the script.
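    For example, assuming you run the tool from the directory that contains the script (any other parameters depend on your environment), you could enable verbose output and then review the most recent log entries with standard commands such as:

    ./loggerToReconArchiveCatalog.sh -v
    tail -n 50 logger_to_recon_archive_catalog_${loggerIPNoDots}.log

    In the log file name, ${loggerIPNoDots} is the Logger IP address with the dots removed.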

    Once the copy process is complete, an Archive Catalog file is generated. This file contains metadata about the archives that have been copied so far, plus additional information about Logger, such as its storage groups and their retention settings.

    The Archive Catalog file is copied to the Amazon S3 bucket in a folder named:

    Bucket_Name/event-sync/logger-archives/Tenant_ID/Logger_IP_Without_Dots/

    The copied Logger archive files are available in folders such as:

    Bucket_Name/event-sync/logger-archives/Tenant_ID/Logger_IP_Without_Dots/Storage_Group_ID/YearMonthDay/
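    For example, assuming the AWS CLI is installed and configured with credentials that can read the bucket, you could verify the uploaded catalog and archive folders with a command such as the following (replace the placeholders with your actual values):

    aws s3 ls s3://Bucket_Name/event-sync/logger-archives/Tenant_ID/Logger_IP_Without_Dots/ --recursive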

  7. (Conditional) Repeat this process for each Logger whose archived data you want to schedule for migration.

  8. If you do not need to schedule any more files, exit the tool.