Medium Workload
This section describes the system sizing and tuning results, confirmed in our testing lab, for the ArcSight Platform and the deployed capabilities Transformation Hub, Fusion, Command Center for ESM, Intelligence, Recon, and the ArcSight Database, maintaining satisfactory performance under a medium workload.
Workloads
This section describes the workload that was placed on the tested system.
Event Workload
This table provides the event ingestion workload in events per second (EPS):
| Application | On-Premises Collocated Database | AWS or Azure Non-collocated Database |
|---|---|---|
| Microsoft Windows | 2,400 | 9,000 |
| InfoBlox NIOS | 2,400 | 9,000 |
| Intelligence Data (VPN, AD, Proxy) | 250 | 2,000 |
| Total | 5,050 | 20,000 |
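As a rough sizing aid, the EPS totals above translate into daily event volumes as follows (a minimal sketch; it assumes the rate is sustained constantly across all 86,400 seconds of a day):

```python
# Convert the tested sustained ingest rates (events per second) into
# daily event volume -- a common first step when estimating storage.
SECONDS_PER_DAY = 86_400

eps = {"on_premises": 5_050, "aws_azure": 20_000}  # totals from the table
events_per_day = {env: rate * SECONDS_PER_DAY for env, rate in eps.items()}

for env, total in events_per_day.items():
    print(f"{env}: {total:,} events/day")
# on_premises: 436,320,000 events/day
# aws_azure: 1,728,000,000 events/day
```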
Other Workload
| Category | Level |
|---|---|
| Storage Groups | 10 |
| Searches | 3 per hour (concurrent) |
| Reports | 1 scheduled every hour |
System Sizing
This section describes the system sizing of the tested system.
On-Premises Deployment
The OMT Master/Worker Node/Database Node system resources are where the core platform, Transformation Hub, Fusion, Command Center for ESM, Recon, and Database compute components were deployed in an all-in-one collocated configuration on the tested system. However, the Database Communal Storage components were deployed on a separate node because they are not embedded within the ArcSight Platform. When using this information as guidance for your own system sizing, the OMT Master/Worker Node/Database Node system resources are always needed, but the Database Communal Storage system resources are only needed when deploying Recon or Intelligence.
| Category | 1 x OMT Master/Worker Node/Database Node | 1 x Communal Storage |
|---|---|---|
| Processor | Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz | Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz |
| vCPU(s) (# threads) | 48 | 8 |
| RAM (per node) | 192 GB | 48 GB |
| Disks (per node) | ESX data store | ESX data store |
| Storage per day (1x) | 10 GB (depot) + 20 GB (ES) | 100 GB (MinIO) |
| Total disk space (5 Billion events) | 1 TB (holds up to 30 days of events) | 1 TB (holds up to 15 days of events) |
| K-safety level | 0 | N/A |
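To sanity-check the stated on-premises retention, the per-day figures in the table can be multiplied out (a minimal sketch; it assumes the 10 GB depot + 20 GB ES daily growth is sustained and ignores day-to-day compression variance):

```python
# On-premises node: 10 GB (depot) + 20 GB (ES) of growth per day.
daily_gb = 10 + 20
retention_days = 30          # stated "holds up to 30 days of events"
needed_gb = daily_gb * retention_days

print(needed_gb)             # 900 -- fits within the 1 TB allocation
```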
AWS Deployment
The OMT Worker (Platform) system resources are where the core platform, Transformation Hub, Fusion, Command Center for ESM, and Recon components were deployed on the tested system. However, Intelligence components were deployed on the OMT Worker (Intelligence) system resources because they utilize a significant amount of resources when running analytics jobs. When using this information as guidance for your own system sizing, the OMT Worker (Platform) system resources are always needed, the Database system resources are only needed when deploying Recon or Intelligence, and the OMT Worker (Intelligence) system resources are only needed when deploying ArcSight Intelligence.
| Category | OMT Worker (Platform) | Database | OMT Worker (Intelligence) |
|---|---|---|---|
| Instance Type | m5.4xlarge | m5.12xlarge | m5.4xlarge |
| Instance Count | 3 | 6 | 3 |
| Disks (per node) | 1 x 2048 GB (gp3) EBS Volumes | 8 x 250 GB (gp3) EBS Volumes | 1 x 2048 GB (gp3) EBS Volumes |
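Multiplying out the Database column gives the raw storage footprint of the tested AWS cluster (a minimal sketch based only on the figures above; usable capacity after formatting and replication will be lower):

```python
# Database tier: 6 x m5.12xlarge nodes, each with 8 x 250 GB gp3 volumes.
db_nodes, volumes_per_node, gb_per_volume = 6, 8, 250
raw_storage_gb = db_nodes * volumes_per_node * gb_per_volume

print(raw_storage_gb)  # 12000 GB of raw EBS capacity across the cluster
```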
Azure Deployment
The OMT Worker (Platform) system resources are where the core platform, Transformation Hub, Fusion, Command Center for ESM, and Recon components were deployed on the tested system. However, Intelligence components were deployed on the OMT Worker (Intelligence) system resources because they utilize a significant amount of resources when running analytics jobs. When using this information as guidance for your own system sizing, the OMT Worker (Platform) system resources are always needed, the Database system resources are only needed when deploying Recon or Intelligence, and the OMT Worker (Intelligence) system resources are only needed when deploying ArcSight Intelligence.
| Category | OMT Worker (Platform) | Database | OMT Worker (Intelligence) |
|---|---|---|---|
| Instance Type | D16s_v3 | D32s_v3 | D16s_v3 |
| Instance Count | 3 | 6 | 3 |
| Disks (per node) | 2 TB - Premium SSD | 2 TB (Depot) - Premium SSD | 2 TB - Premium SSD |
System Tuning
This section describes the system tuning of the tested system.
Database Tuning
| Category | Property | On-Premises | Azure | AWS |
|---|---|---|---|---|
| Core Database | shard_count | 3 | 18 | 18 |
| Core Database | depot_size | 40% | 60% | 60% |
| Tuple Mover | tm_concurrency | 5 | 5 | 10 |
| Tuple Mover | tm_memory | 10G | 10G | 10G |
| Tuple Mover | plannedconcurrency | 5 | 5 | 5 |
| Tuple Mover | tm_memory_usage | 10000 | 10000 | 20000 |
| Tuple Mover | maxconcurrency | 10 | 10 | 10 |
| Ingest Resource pools | ingest_pool_memory_size | 30% | 30% | 30% |
| Ingest Resource pools | ingest_pool_planned_concurrency | 6 | 6 | 6 |
| Backup | Backup Interval (hours) | 1 | 1 | 1 |
Transformation Hub Tuning
| Property | On-Premises | Azure | AWS |
|---|---|---|---|
| # of Kafka broker nodes in the Kafka cluster | 1 | 3 | 3 |
| # of ZooKeeper nodes in the ZooKeeper cluster | 1 | 3 | 3 |
| # of Partitions assigned to each Kafka Topic* | 12 | 72 | 72 |
| # of replicas assigned to each Kafka Topic | 1 | 2 | 2 |
| # of message replicas for the __consumer_offsets Topic | 1 | 3 | 3 |
| Schema Registry nodes in the cluster | 1 | 3 | 3 |
| # of CEF-to-Avro Stream Processor instances to start** | 0 | 0 | 3 |
| # of Enrichment Stream Processor Group instances to start | 2 | 3 | 3 |
*Applies to the Kafka topics th-arcsight-avro, mf-event-avro-enriched, and th-cef (th-cef only when connectors are configured to send to Transformation Hub in CEF format).
**If connectors are configured to send Avro format to Transformation Hub, you can set the number of CEF-to-Avro Stream Processor instances to start to 0 because there is no need to convert CEF to Avro.
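Note that in each tested environment the per-topic partition count divides evenly across the Kafka brokers, which keeps partition leadership balanced (a minimal sketch of that arithmetic):

```python
# (partitions per topic, broker count) for each tested environment
layout = {"on_premises": (12, 1), "azure": (72, 3), "aws": (72, 3)}

for env, (partitions, brokers) in layout.items():
    assert partitions % brokers == 0          # even spread across brokers
    print(f"{env}: {partitions // brokers} partitions per broker")
# on_premises: 12 partitions per broker
# azure: 24 partitions per broker
# aws: 24 partitions per broker
```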
Intelligence Tuning
| Property | On-Premises | Azure | AWS |
|---|---|---|---|
| Elasticsearch Shard Count | 6 | 6 | 6 |
| Elasticsearch data processing Instances | 1 | 3 | 3 |
| Elasticsearch Index Replica Count | 0 | 1 | 1 |
| Elasticsearch Memory (GB) | 14 | 12 | 12 |
| Elasticsearch number of cores | 8 | 6 | 5 |
| Elasticsearch Size Per Batch | 5mb | 5mb | 5mb |
| Logstash Instances | 3 | 12 | 15 |
| Logstash pipeline workers per instance | 2 | 2 | 1 |
| Logstash Pipeline Batch size | 500 | 1000 | 500 |
| Logstash Filter Applied | yes | yes | yes |
| Spark Parallelism | 32 | 32 | 64 |
| Spark number of executors | 3 | 8 | 9 |
| Spark executor memory | 6g | 8g | 7g |
| Spark number of executor cores | 1 | 1 | 1 |
| Spark Driver Memory | 6g | 8g | 8g |
| Spark Memory Overhead Factor | 0.2 | 0.2 | 0.2 |
| Intelligence Job per day | 1 | 1 | 1 |
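The Spark values above imply the following analytics memory footprint (a minimal sketch for the AWS column; it assumes the 0.2 memory overhead factor is applied to both executor and driver containers, as is typical for container-managed Spark):

```python
overhead = 1 + 0.2                            # Spark memory overhead factor
executors, executor_gb, driver_gb = 9, 7, 8   # AWS column values

# Container memory is roughly heap size scaled by the overhead factor.
total_gb = executors * executor_gb * overhead + driver_gb * overhead
print(round(total_gb, 1))                     # ~85.2 GB peak analytics memory
```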
Fusion Tuning
| Category | All Deployments |
|---|---|
| Event Integrity Check Task Count | 1 |
| Event Integrity Check Chunk Size | 1000 |
| Use Event Integrity Check Resource Pool | false |
SmartConnector Tuning
| Category | All Deployments |
|---|---|
| SmartConnector version that we tested | 8.3.0.14008.0 |
| Instance Count | 1 |
| Acknowledgement Mode | none |
| usessl (Transformation Hub Destination Param) | false |
| contenttype (Transformation Hub Destination Param) | Avro |
| topic (Transformation Hub Destination Param) | th-arcsight-avro |
| compression.type | gzip |
| transport.batchqueuesize | 20000 |
| transport.cefkafka.batch.size | 50000 |
| transport.cefkafka.linger.ms | 10 |
| transport.cefkafka.max.request.size | 4194304 |
| transport.cefkafka.multiplekafkaproducers | true |
| transport.cefkafka.threads | 6 |
| syslog.parser.threadcount | 6 |
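Expressed as an agent.properties fragment, the Kafka transport settings above would look roughly like this (a hypothetical sketch; only the keys and values in the table come from our testing, so confirm exact key names against your SmartConnector version's documentation before applying them):

```properties
# Transformation Hub (Kafka) transport tuning -- values from the tested setup
compression.type=gzip
transport.batchqueuesize=20000
transport.cefkafka.batch.size=50000
transport.cefkafka.linger.ms=10
transport.cefkafka.max.request.size=4194304
transport.cefkafka.multiplekafkaproducers=true
transport.cefkafka.threads=6
syslog.parser.threadcount=6
```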