10.3 Planning your Access Manager Deployment on Kubernetes

10.3.1 Deployment Considerations

Understand the following points before deploying Access Manager:

  • The Kubernetes cluster should contain one master node and at least one worker node.

  • Each worker node can run only one replica of a given Access Manager component. However, two different components can run on the same node.

    For example, two instances of Administration Console, Identity Server, or Access Gateway cannot run on the same node, but an instance of Administration Console and an instance of Identity Server can.

  • If the number of users is low or resource requirements are minimal, you can run all the components on a single worker node.

  • Because Access Manager uses host networking, a port used by one container on a node cannot be used by another container on the same node.

  • To scale up, you must increase the number of worker nodes. For example, to run three instances of Identity Server, you must have three worker nodes.

  • Ensure that the system meets the requirements for installing Access Manager containers. See System Requirements of Administration Console, Identity Server, Access Gateway Containers.

The following table lists the maximum number of Access Manager pods that can be deployed on a given number of worker nodes:

Number of Worker Nodes    Administration Console Pod    Identity Server Pod    Access Gateway Pod
1                         1                             1                      1
2                         1                             2                      2
3                         1                             3                      3

NOTE: As you increase the number of worker nodes, you can scale Identity Server and Access Gateway up to one instance per worker node.

Memory Allocation of Pods

The default resource memory limits of Access Manager pods are as follows:

Pod                       Default Memory
eDirectory                1 Gi
Administration Console    2 Gi
Identity Server           2 Gi
Access Gateway            2 Gi

However, based on your requirements, you might need to allocate more memory to Tomcat as per the performance and sizing recommendations for a production environment.

To determine the Java memory to allocate, see JAVA Memory Allocations for Identity Server and JAVA Memory Allocations for Access Gateway.

To increase the pods’ memory, replace the respective values in the access-manager/values.yaml file.

IMPORTANT: You must update the memory values before running the Access Manager Helm chart.
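As an illustration only, the memory limits in access-manager/values.yaml might be structured as follows. The key names and nesting here are assumptions for the sketch, not the chart's exact schema; check your chart's values.yaml for the real layout:

```yaml
# Hypothetical sketch of pod memory limits in access-manager/values.yaml.
# Key names are illustrative; the actual chart may use a different structure.
edirectory:
  resources:
    limits:
      memory: 1Gi        # default
administrationConsole:
  resources:
    limits:
      memory: 2Gi        # default
identityServer:
  resources:
    limits:
      memory: 4Gi        # raised from the 2Gi default per sizing guidance
accessGateway:
  resources:
    limits:
      memory: 2Gi        # default
```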

Docker containers usually have a default set of Linux capabilities enabled. For the primary Administration Console, secondary Administration Console, Identity Server, Access Gateway, and eDirectory containers, keep only the following capabilities enabled and drop all others:

  • CHOWN

  • FOWNER

  • SYS_CHROOT

  • DAC_OVERRIDE

  • SETGID

  • SETUID

  • NET_BIND_SERVICE

  • AUDIT_WRITE

Adding Privileges for Access Manager Docker

To add any extra privileges:

  1. Navigate to the access-manager/templates/_am-templates.tpl file.

  2. Add or drop privileges as relevant in the following format:

    {{/******  Spec for capabilities required by Access Manager  ****/}}
    {{- define "access-manager.capabilities" -}}
    capabilities:
      drop:
      - all
      add:
      - CHOWN
      - KILL
      - FOWNER
      - DAC_OVERRIDE
      - SETGID
      - SETUID
      - AUDIT_WRITE
      - NET_BIND_SERVICE
    {{- end }}

    Similarly, you can add extra privileges for Analytics Dashboard in its section of the _am-templates.tpl file.

  3. Enter the helm install (if installing for the first time) or helm upgrade (if you are upgrading) command as applicable.

10.3.2 Protecting Access Manager Secrets

By default, Kubernetes stores secrets Base64-encoded rather than encrypted while managing and sharing them across a cluster. Because Base64 can be trivially decoded, Access Manager secrets could be exposed.
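To see why this matters, the following standalone sketch (using only the base64 tool, no cluster required, with a made-up sample value) shows how easily a Base64-encoded secret value is recovered:

```shell
# Kubernetes Secret data is Base64-encoded, not encrypted. Anyone who can read
# the Secret object can recover the plaintext value.
encoded=$(printf '%s' 'P@ssw0rd' | base64)
echo "value as stored:  $encoded"
echo "value decoded:    $(printf '%s' "$encoded" | base64 -d)"
```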

You can protect the Access Manager secrets by using one or more of the following options:

10.3.3 Conditions for Creating Administrator Username and Password

Access Manager administrator username and password must conform to the following conditions:

  • The username and password must begin with an alphanumeric character or _ (underscore).

  • The last character must not be a special character.

  • The username must not contain # (hash), & (ampersand), or ( ) (round brackets).

  • The password must not contain : (colon) or " (double quotes).
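As a quick illustration only (not an Access Manager tool, and treating _ as a non-special character, consistent with the first rule), the username conditions can be checked with a small shell function:

```shell
# Illustrative sketch: validate a candidate Access Manager username against
# the conditions listed above. Hypothetical helper, not part of the product.
valid_am_username() {
  u="$1"
  printf '%s' "$u" | grep -Eq '^[A-Za-z0-9_]' || return 1   # begins with alphanumeric or _
  printf '%s' "$u" | grep -Eq '[A-Za-z0-9_]$' || return 1   # last character is not special
  printf '%s' "$u" | grep -q '[#&()]' && return 1           # no # & ( ) anywhere
  return 0
}

valid_am_username 'admin_user' && echo 'admin_user: accepted'
valid_am_username '#admin'     || echo '#admin: rejected'
valid_am_username 'admin(1)'   || echo 'admin(1): rejected'
```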

10.3.4 Installing Ingress

Ingress manages external access to the Access Manager services in a Kubernetes cluster. To make Ingress resources work, you need an Ingress controller. Configuring Ingress is mandatory when deploying Access Manager on Azure. On AWS, if the worker nodes have public IP addresses, configuring Ingress is optional. The following section shows an example using the NGINX Ingress Controller:

Installing the NGINX Ingress Controller

  1. Add the ingress-nginx repository on the master node by using the following command:

    helm repo add ingress-nginx "https://kubernetes.github.io/ingress-nginx"

  2. Update the repository:

    helm repo update

  3. Install the NGINX Ingress Controller:

    helm install nginx-ingress ingress-nginx/ingress-nginx --set controller.publishService.enabled=true

NOTE: As an alternative, you can access Administration Console securely without exposing it through Ingress by deploying the Kubernetes cluster in a private network. In this setup, Administration Console can be accessed only through a Windows machine located in a public subnet.

To configure the Ingress rules, see Section 10.3.5, Configuring Ingress.

10.3.5 Configuring Ingress

Ingress manages external access to the Access Manager services in a Kubernetes cluster. To make Ingress resources work, you need an Ingress controller. Configuring Ingress is mandatory when deploying Access Manager on Azure. On AWS, if the worker nodes have public IP addresses, configuring Ingress is optional.

Configuring the Ingress Rules

Configure the Ingress controller chart before running the Access Manager Helm chart.

  1. Open the access-manager/values.yaml file. From Access Manager 5.0 Service Pack 2 onwards, you can configure the additional ingressClassName attribute, which specifies the Ingress controller provider.

  2. Enable ingress by specifying enabled: true.

  3. Specify the service ports for the respective components:

    Component                 Value
    Administration Console    2443
    Identity Server           8443
    Access Gateway            8000, 9099

    You can specify more port numbers if Access Gateway requires additional ports.

  4. Configure the Administration Console service by specifying the following details:

    Element    Value
    host       Domain name or Administration Console service URL. For example, www.cloudac.com.
    https      Specify true to enable the backend communication between Ingress and pods.
    paths      /nps: 2443

  5. Configure the Identity Server service by specifying the following details:

    Element    Value
    host       Domain name or Identity Server service URL. For example, www.cloudidp.com.
    https      Specify true to enable the backend communication between Ingress and pods.
    paths      /nidp: 8443

  6. Configure the Access Gateway service by specifying the following details:

    Element    Value
    host       Domain name or Access Gateway service URL. For example, www.cloudag.com.
    https      Specify true to enable the backend communication between Ingress and pods.
    paths      /path1: 8000
               /path2: 9099

    For example, specify /mag: 8000 and /apache: 9099.

  7. Save and close the values.yaml file.

  8. Create a TLS secret to use with the self-signed certificate. Use the following command:

    kubectl create secret tls <cert-name> --key <KEY_FILE> --cert <CERT_FILE>

    Use this TLS secret when front-end SSL communication is required.

  9. Proceed to Deploying Access Manager Containers on Azure Kubernetes Services or Deploying Access Manager Containers on AWS.
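Putting steps 1 through 7 together, the ingress section of values.yaml might look roughly like the following. The key names and nesting are assumptions for illustration, not the chart's exact schema:

```yaml
# Hypothetical sketch of the ingress section in access-manager/values.yaml.
ingress:
  enabled: true
  ingressClassName: nginx        # configurable from Access Manager 5.0 SP2 onwards
  adminConsole:
    host: www.cloudac.com
    https: true
    paths:
      /nps: 2443
  identityServer:
    host: www.cloudidp.com
    https: true
    paths:
      /nidp: 8443
  accessGateway:
    host: www.cloudag.com
    https: true
    paths:
      /mag: 8000
      /apache: 9099
```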

NOTE: If you modify any value in the values.yaml file of an existing Helm release, you must perform a helm upgrade to apply the changes.

Run the following command to perform the helm upgrade:

helm upgrade <release-name> access-manager -n <name-of-the-namespace>

10.3.6 Limitations of Docker Deployment

  • Installing a tertiary Administration Console is not supported. You can install only primary and secondary Administration Consoles.

  • Customizing the path of the log files is not supported.

  • Combining components (Identity Server, Access Gateway, or Analytics Server) from various platforms is not supported. For example, you cannot use Identity Server deployed in a Docker environment with Access Gateway deployed in a non-Docker environment.

  • The release name used in one namespace cannot be used in another namespace. In a particular namespace, each cluster must be created as a separate release. The release name is a value that you specify in the Helm install or upgrade command.

  • If the release name is too long, the secondary Administration Console pod can go into an error state.

  • If you scale down any pod while upgrading to a new version, the deleted pod's persistent volume does not bind to the pod on performing a rollback.

  • Upgrading the Kubernetes cluster to a later version after installing Access Manager disrupts the Access Manager setup.

  • If Administration Console and device pods restart after an unexpected crash and Administration Console fails to come up, Identity Server and Access Gateway pods also remain in the waiting state.

  • Converting a secondary Administration Console into a primary console is not supported.

  • In cloud deployments, such as EKS or AKS, upgrading a Kubernetes cluster is not supported.