Using Docker

The Docker open platform has excellent documentation which you should read and understand.

Why Docker?

Docker is a container-based platform that enables you to develop, deploy, and run applications within containers. A container holds your application along with any dependencies it requires, such as binaries, libraries, and configuration information. You can deploy multiple containers, all of which run in Docker on top of the operating system.

Using Docker you can scale your applications vertically, meaning that multiple instances of the session server can exist on a single server, and each instance performs exactly as it did when you created and tested it.

What are the benefits?

Containerization delivers multiple benefits:

  • Performance

    Virtual machines are an alternative to containers; however, unlike VMs, containers do not include a full operating system. This means containers are faster to create, quicker to start, and have a much smaller footprint.

  • Agility

    Because containers are more portable and have better performance, you can take advantage of more agile and responsive development practices.

  • Isolation

    Docker containers are independent of one another. This is important because a container running one application, including the required versions of any supporting software, will not interfere with another container running the same application with different supporting software. You can have total confidence that at each stage of development and deployment the image you create will perform exactly as expected.

Terminology

There are basic terms you need to be familiar with when working with Docker. For more information see the Docker Documentation site.

Container

A run-time instance of an image. A container is usually completely isolated from the host environment, only able to access host files and ports if it has been configured to do so. To run an image in a container you use the docker run command.
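
As a quick illustration, here is a minimal sketch using the freely available hello-world image from Docker Hub:

    # Run the hello-world image in a new container (Docker pulls it first if it is not present locally).
    docker run hello-world
    # List containers; -a includes containers that have already exited.
    docker ps -a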

Docker Hub

A cloud-based community resource for working with Docker. Docker Hub is typically used for hosting images, but can be used for user authentication and automating the building of images. Anyone can publish images to Docker Hub.
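
A brief sketch of the most common Docker Hub interactions follows; the nginx image is used here only as a well-known example:

    # Search Docker Hub for images matching a term.
    docker search nginx
    # Download (pull) an image from Docker Hub to this machine.
    docker pull nginx:alpine
    # Log in to Docker Hub before pushing images of your own.
    docker login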

Docker Compose

Compose is a tool that uses a YAML file to configure your application's services, so you can define and run multi-container Docker applications. To learn more about Compose, visit the Docker Compose documentation.
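
To make the idea concrete, here is a minimal, hypothetical Compose setup; the service names and images are placeholders, not part of Host Integrator. The YAML file is written from the shell here only to match the command-line style of this page:

    cat > docker-compose.yml <<'EOF'
    # Hypothetical two-service application: a web front end and a cache.
    services:
      web:
        image: nginx:alpine
        ports:
          - "8080:80"
      cache:
        image: redis:alpine
    EOF
    # Start every service defined in the file, in the background
    # (older standalone installs use docker-compose up -d instead).
    docker compose up -d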

Dockerfile

A text document containing the commands to build a Docker image. You can specify complex commands (such as specifying an existing image to use as a base) or simple ones (such as copying files from one directory to another). To build an image from a Dockerfile you use the docker build command.
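
A minimal, hypothetical Dockerfile illustrating both kinds of command mentioned above (the base image and file names are placeholders):

    # Create the file that the Dockerfile copies into the image.
    echo "Hello from a container" > hello.txt
    cat > Dockerfile <<'EOF'
    # Specify an existing image to use as a base.
    FROM alpine:3
    # Copy a file from the build context into the image.
    COPY hello.txt /opt/hello.txt
    # Command to run when a container starts from this image.
    CMD ["cat", "/opt/hello.txt"]
    EOF
    # Build an image from the Dockerfile in the current directory.
    docker build -t myimage:1.0 .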

Image

A standalone, executable package that runs in a container. A Docker image is a binary that includes everything needed to run a single Docker container, including its metadata. You can build your own images (using a Dockerfile) or use images that have been built by others and then made available in a registry (such as Docker Hub). To build an image from a Dockerfile you use the docker build command. To run an image in a container you use the docker run command.

Getting started with Docker and Verastream Host Integrator

If you choose to use Docker, the Host Integrator install package contains an initial Dockerfile and an accompanying application JAR file to get you started running the session server in a container. These files are available before installation.

Note

Make sure you are running the latest version of Docker and Docker Compose.
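
You can confirm the versions installed on your machine from the command line, for example:

    # Show the installed Docker version.
    docker --version
    # Show the installed Compose version (use docker-compose --version for the older standalone tool).
    docker compose version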

There are examples located in the examples/docker folder.

There are six steps involved in creating the base image (steps 2 through 4 are also sketched as a combined command sequence after this list):

  1. Install Docker. Follow the instructions on the Docker web site.

  2. From the download site, download the install package and extract it with tar xvf vhisrv-x.x.xx-prod-linux-64. This creates a subdirectory, linux64, which contains a Dockerfile example.

  3. Open the directory linux64/examples/docker. This directory includes the following files: Dockerfile, extract_install_files.sh, and stage_container.sh.

  4. Execute ./stage_container.sh and accept the license, or execute ./stage_container.sh --licenseagreed to accept the license from the command line.

  5. Build the Docker image.

  6. Run the Docker image.
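
Taken together, steps 2 through 4 look roughly like this on a Linux command line (x.x.xx stands for your product version):

    # Step 2: extract the downloaded install package.
    tar xvf vhisrv-x.x.xx-prod-linux-64
    # Step 3: change to the directory that holds the Docker example files.
    cd linux64/examples/docker
    # Step 4: stage the container files, accepting the license from the command line.
    ./stage_container.sh --licenseagreed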

Build the Docker image

Note

One session server container per Docker host is supported.

Assuming you have followed steps one and two (installed Docker, extracted the package, and located the Dockerfile), the next step is to build the base Docker image of the session server. A combined example follows the numbered steps below.

  1. Run this command from the folder containing the Dockerfile:

    docker build -t vhi/sessionserver:<version> .

    Replace <version> with the version of the session server. If you do not supply a version, the default tag is latest.

  2. Verify that the image was successfully created. Run:

    docker images

    The output should contain information about the image you just built.
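
Putting the two steps together with a hypothetical version number of 7.8.1, the sequence would be:

    # Build the session server image, tagging it with the product version.
    docker build -t vhi/sessionserver:7.8.1 .
    # Confirm that the new image appears in the local image list.
    docker images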

Run the image

Before you can run the session server image in a Docker container, you must complete the following steps:

  • Expose the needed ports

    To expose the required ports, add these options to the docker run command: -p 9623:9623 -p 9680:9680 -p 9681:9681 -p 35000:35000 -p 35001:35001

  • Map your configuration directory to the one in the container

A volume mount makes a file or directory available inside a container. With a bind mount, the file or directory is referenced by its full or relative path on the host machine; with a named volume, Docker manages the storage location on the host.

The docker run command below mounts the named volumes etc and deploy into the Docker container. If a volume named etc or deploy does not exist, it is created. The first time the container runs, the configuration files and deployed models are copied to the volumes so that the session server settings are retained.
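
If you want to inspect or pre-create these named volumes, the standard Docker volume commands apply (the volume names below are the ones used in the run command that follows):

    # Create the named volumes ahead of time (docker run also creates them on demand).
    docker volume create etc
    docker volume create deploy
    # List all volumes on this host and show the details of one of them.
    docker volume ls
    docker volume inspect etc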

Add Host Machine Name Resolution

The VHI Session Server needs to know the name of its host, so we add an environment variable, --env VHI_SSHOSTNAME=<host-name>. We also add --add-host so the Docker network can resolve the host by name. Specify the fully qualified <host-name>, e.g., "name.example.com". You can use nslookup <host-name> to find the <host-address>.

In a Linux shell:

docker run -d \
 --add-host <host-name>:<host-address> --env VHI_SSHOSTNAME=<host-name> \
 -p 9623:9623 -p 9680:9680 -p 9681:9681 -p 9640:9640 -p 35000:35000 -p 35001:35001 \
 --mount source=etc,target=/opt/microfocus/verastream/hostintegrator/etc \
 --mount source=deploy,target=/opt/microfocus/verastream/hostintegrator/deploy \
 vhi/sessionserver:<version>

In Windows PowerShell:

docker run -d `
 --add-host <host-name>:<host-address> --env VHI_SSHOSTNAME=<host-name> `
 -p 9623:9623 -p 9680:9680 -p 9681:9681 -p 9640:9640 -p 35000:35000 -p 35001:35001 `
 --mount source=etc,target=/opt/microfocus/verastream/hostintegrator/etc `
 --mount source=deploy,target=/opt/microfocus/verastream/hostintegrator/deploy `
 vhi/sessionserver:<version>

Note: On some terminals, this docker run command might work better entered as a single line, without the line-continuation characters.
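
After starting the container, you can confirm that it is running and watch its console output with standard Docker commands, for example:

    # List running containers and note the container ID or name.
    docker ps
    # Follow the session server container's output (replace <container-id> with the value from docker ps).
    docker logs -f <container-id>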

Docker Desktop for Windows

To run VHI in Docker Desktop for Windows, follow the instructions given above; they are valid for both Linux and Windows. However, be aware of these additional points:

  • We have tested the extraction, staging, and Docker build and run instructions on Windows 10 with good results. Recent updates of Windows 10 include the tar command and can also run shell scripts. The ./stage_container.sh script does not appear to run interactively on Windows, so it behaves as if it were run as ./stage_container.sh --licenseagreed. All commands can be typed verbatim in PowerShell; if you use the Windows Command shell, omit the ./ when running stage_container.sh.

  • Using the Docker-recommended version two of Windows Subsystem for Linux (WSL2), users may encounter unusably slow network response times for VHI client requests, including attempts to configure the Session Server from the Administrative Console. The cause is low entropy for secure network connections. Low entropy is best addressed on the Docker host machine, but our research has not turned up any entropy configuration options for WSL2. WSL1 seems to work better, so reverting to WSL1 may be an option. In a Docker farm in the cloud, access to the host machine might not be possible; in that case the best solution may be something like harbur/haveged on Docker Hub, which we have tested and found convenient and effective (a run sketch appears after this list).

  • Docker Desktop for Windows modifies the Windows hosts file (%WINDIR%/System32/drivers/etc/hosts) with the following:

    # To allow the same kube context to work on the host and the container:
    127.0.0.1 kubernetes.docker.internal
    # End of section
    
    This may cause problems for some applications. If you're not using kube context, try commenting it out by adding a hash character (#) to the beginning of the line, so that it reads # 127.0.0.1 kubernetes.docker.internal.
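
If you decide to try the haveged approach mentioned in the WSL2 point above, a minimal sketch looks like this; check the image's page on Docker Hub for current usage, and note that --privileged is typically required so the container can feed entropy to the host:

    # Run a haveged entropy daemon in the background (sketch only; verify options on Docker Hub).
    docker run -d --privileged --name haveged harbur/haveged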