Google Cloud Deployment Overview
Components
The Google Cloud project comprises the following components:
- Public subnet: built for external-facing resources such as load balancers, the bastion node, and Cloud NAT
- Private subnet: built for internal-facing resources such as GKE worker nodes and NFS (FileStore)
- Bastion node: a node that sits in the public subnet and is used to connect to resources in the private subnet (such as Kubernetes, the database, and the storage resources). The bastion node makes it possible to perform tasks such as deployments, persistent volume setup, database setup, upgrade procedures, and troubleshooting.
- Kubernetes: open-source software that allows you to deploy and manage containerized applications at scale
- ArcSight Suite: the suite product, shipped and running as Kubernetes-managed containers
- Google Kubernetes Engine (GKE): a fully managed service provided by Google Cloud. Once provisioned, the service is ready to use as a Kubernetes platform.
- Load Balancer: routes traffic from external clients to the internal applications deployed on Kubernetes. Load balancers provisioned through Kubernetes Services detect changes in the backend worker nodes and automatically adapt to the current state of the worker node pool.
- Cloud NAT: enables connections from private subnet instances to the Internet or to other Google Cloud services, while preventing the Internet from initiating connections to those instances
- NFS (FileStore): a managed NFS service that stores data such as attachments, certificates, logs, and search engine indexes. In Kubernetes this storage is exposed as a Persistent Volume (PV) and is used by the containerized applications.
- Google Container Registry (GCR): a secure, scalable, and reliable managed Docker registry service. The suite applications are shipped as container images, which are stored in GCR (see the sketch after this list).
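To illustrate how containers reference images stored in GCR, the following minimal sketch creates a Deployment whose pod pulls its image from a GCR path. It assumes the Python kubernetes client is installed and a kubeconfig is available; the project, image name, and tag are placeholders, not the actual suite images.

```python
from kubernetes import client, config

# Load kubeconfig (for example, on the bastion node after cluster credentials are set up).
config.load_kube_config()
apps = client.AppsV1Api()

# Placeholder image path: real suite images live under your own project's registry.
image = "gcr.io/my-project/example-suite-app:1.0"

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="example-suite-app"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "example-suite-app"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "example-suite-app"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="app", image=image)]
            ),
        ),
    ),
)
apps.create_namespaced_deployment(namespace="default", body=deployment)
```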
Inter-communication between components
The private subnet components, such as the Kubernetes cluster and the NFS (FileStore) storage, communicate with public subnet components such as the load balancers and the bastion node. See Architecture Security design considerations for details.
The load balancer functions as the entry point for external traffic. It's usually bound to the site's ArcSight Suite URL (for example, https://arcsight-suite.gcp.opentext.com). Once end users open the URL in their browser or mobile app, and after DNS resolution, the load balancer routes their traffic to the applications in the private subnet.
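To make the relationship between a Kubernetes Service and the external load balancer concrete, here is a minimal sketch (using the Python kubernetes client) of a Service of type LoadBalancer listening on port 443. The Service name, selector, and ports are placeholders; the actual suite installation creates its own Services.

```python
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# A Service of type LoadBalancer asks GKE to provision an external load balancer
# and keep its backends in sync with the matching pods on the worker nodes.
service = client.V1Service(
    metadata=client.V1ObjectMeta(name="suite-frontend"),
    spec=client.V1ServiceSpec(
        type="LoadBalancer",
        selector={"app": "suite-frontend"},
        ports=[client.V1ServicePort(name="https", port=443, target_port=443)],
    ),
)
core.create_namespaced_service(namespace="default", body=service)
```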
Cluster management is performed by connecting to the bastion node in the public subnet and then jumping to the private subnet. In this scenario the bastion node acts as a Kubernetes, database, or NFS (FileStore) client for operational purposes.
The applications reside inside the Kubernetes cluster in the private subnet and use NFS (FileStore), also located in the private subnet, as their Persistent Volume.
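The following sketch shows one way a FileStore export can be registered as a Persistent Volume, again using the Python kubernetes client. The PV name, capacity, FileStore IP address, and export path are placeholders; use the values from your own environment.

```python
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# Placeholder values: replace with the FileStore instance IP and export path
# created for the deployment.
pv = client.V1PersistentVolume(
    metadata=client.V1ObjectMeta(name="suite-nfs-volume"),
    spec=client.V1PersistentVolumeSpec(
        capacity={"storage": "1Ti"},
        access_modes=["ReadWriteMany"],
        persistent_volume_reclaim_policy="Retain",
        nfs=client.V1NFSVolumeSource(server="10.0.0.2", path="/arcsight_volume"),
    ),
)
core.create_persistent_volume(body=pv)
```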
When the applications in the Kubernetes cluster need to connect to the Internet (for example, to download Docker images for an upgrade or to download a patch), the traffic goes through Cloud NAT, which prevents the Internet from initiating connections to the instances running those applications.
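If you want to verify which Cloud NAT gateway handles that outbound traffic, one option is to inspect the Cloud Router's NAT configuration with the google-cloud-compute client. The project, region, and router names below are placeholders.

```python
from google.cloud import compute_v1

# Placeholder identifiers for the project, region, and Cloud Router.
project, region, router_name = "my-project", "us-central1", "my-router"

routers = compute_v1.RoutersClient()
router = routers.get(project=project, region=region, router=router_name)

# Cloud NAT gateways are defined as NAT configurations on the Cloud Router.
for nat in router.nats:
    print(nat.name)
```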
Architecture Security design considerations
The network infrastructure, with separate public and private subnets, provides an additional layer of security, where:
- Independent routing tables are configured for every private subnet to control the flow of traffic from within or outside the VPC
- All OMT and ArcSight Suite components are located in the private subnet, with no direct Internet access allowed
- End users and IT agents can only connect to specified ports (for example, 443) for business purposes, with a typical traffic path being:
  User <-> Load balancer (public subnet) <-> GKE worker nodes (private subnet)
- Only a limited number of users can access the bastion node, with a typical traffic path for cluster management being:
  DevOps engineer <-> Bastion node (public subnet) <-> GKE cluster or NFS (FileStore)
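As an example of the kind of operation a DevOps engineer runs from the bastion node, the sketch below uses the Python kubernetes client with a kubeconfig already configured on the bastion to list the worker nodes in the private subnet.

```python
from kubernetes import client, config

# Assumes the bastion node's kubeconfig already points at the GKE cluster
# (for example, after running gcloud container clusters get-credentials).
config.load_kube_config()
core = client.CoreV1Api()

for node in core.list_node().items:
    print(node.metadata.name, node.status.node_info.kubelet_version)
```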
The architecture design also balances performance against availability. The worker nodes are distributed across multiple Availability Zones within a single region rather than across regions, so network latency between Availability Zones stays below 1 millisecond in most cases.
Benefits of the Google Kubernetes Engine (GKE)
- Control plane nodes no longer need to be provisioned or managed; the cluster has a guaranteed SLA provided by Google Cloud
- Worker node groups are deployed by default, increasing availability
- Cluster management operational costs are reduced thanks to the managed worker nodes
- Cloud native services are leveraged to ease the management experience and reduce operational costs:
  - The Google Container Registry (GCR) is used as the container image storage
  - The Google FileStore is used as persistent storage for the ArcSight Suite
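Because the control plane is managed, routine inspection can be done through the GKE API rather than by logging in to control plane nodes. A minimal sketch with the google-cloud-container client, assuming placeholder project and location values:

```python
from google.cloud import container_v1

# Placeholder project; "-" as the location lists clusters in all locations.
parent = "projects/my-project/locations/-"

gke = container_v1.ClusterManagerClient()
response = gke.list_clusters(request={"parent": parent})

for cluster in response.clusters:
    print(cluster.name, cluster.location, cluster.current_node_count, cluster.status)
```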