
Huddo Boards All-in-One (AIO) Docker setup


This document outlines a standalone (all-in-one) deployment of Huddo Boards using docker-compose. It can be used as a proof of concept, a staging deployment, or even a production deployment for a limited number of users (e.g. fewer than 500).

You may run all services, including the database and file storage, on one server, or you can use an external MongoDB database or S3 file store.

Server requirements

RHEL (or CentOS 7) server with:

  • 8 GB RAM minimum
  • 4 vCPUs
  • 40 GB system drive
  • 100 GB data drive (shared by the database and file store) *see Persistence Options below
  • docker and docker-compose
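As a quick sanity check before installing, you can compare the host against the sizing above. This is only a sketch: the 4-CPU / 8 GB minimums come from the list above, and the commands assume a Linux host (/proc/meminfo).

```shell
# Check CPU count and RAM against the minimums above (Linux-only sketch)
CPUS=$(nproc)
MEM_GB=$(awk '/MemTotal/ {printf "%d", $2 / 1024 / 1024}' /proc/meminfo)
echo "CPUs:   $CPUS (minimum 4)"
echo "Memory: ${MEM_GB} GB (minimum 8)"

# Confirm docker and docker-compose are on the PATH
for tool in docker docker-compose; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: installed"
  else
    echo "$tool: NOT INSTALLED"
  fi
done
```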



The deployment can be implemented in either of two ways:

  • Paths (e.g. /boards, /api-boards): uses your existing domain, needs no additional certificates, and allows easier SSO integration via the HCL Connections header.
  • Hosts: requires 2 domains (and therefore certificates) in your environment.


Persistence Options

Boards uses 3 types of persistent data:

  1. MongoDB
  2. S3 file store
  3. Redis cache

Each of these may be provided by an external service (e.g. MongoDB Atlas) or by the services included in the template; this choice significantly changes the load on the server.



If using the included services, you must have a separate mount point on your server for persistent data, with a directory each for the mongo and S3 (MinIO) storage. You will need to map the mongo and s3 containers' data directories to this data drive, and the drive should be backed up however you currently back up data.
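As an illustration, the volume mappings in the downloaded docker-compose.yml would point at this drive. This is a sketch only: it assumes the data drive is mounted at /data and that the template names the services mongo and minio; check your actual template for the real service names and container paths.

```yaml
# Sketch only -- adapt the service names and container paths to your template.
services:
  mongo:
    volumes:
      - /data/mongo:/bitnami/mongodb   # bitnami/mongodb keeps its data here
  minio:
    volumes:
      - /data/minio:/data              # minio object storage
```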


Access to Images

Please follow this guide to request access to our images so that we can give you access to our repositories and templates. Once you have access, run the docker login command available from the interface, for example:

docker login -u="<username>" -p="<encrypted-password>"


Download the appropriate configuration files for your deployment type:

  • Paths (/boards, /api-boards): docker-compose.yml, nginx proxy conf
  • Hosts: docker-compose.yml, nginx proxy conf

Update all example values in both files as required. Most of the required variables are in the template; for more information see the Kubernetes docs.

S3 Storage

The MinIO credentials are used both to configure the minio service and to let the other services access it:

  • x-minio-access is used as the username in minio
  • x-minio-secret is used as the password

See MinIO's documentation on these fields, and an example of the values used here. The convention seems to be around 20 characters (all upper-case letters and digits) for the username and around 40 characters (mixed-case letters and digits) for the password.
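Values in that shape can be generated on the server, for example with /dev/urandom. This is just a sketch; nothing here is Huddo-specific, and any sufficiently random values of roughly those lengths will do.

```shell
# Generate a 20-char upper-case/numeric access key and a 40-char
# mixed-case/numeric secret key, matching the conventions described above
ACCESS_KEY=$(tr -dc 'A-Z0-9' < /dev/urandom | head -c 20)
SECRET_KEY=$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 40)
echo "x-minio-access: $ACCESS_KEY"
echo "x-minio-secret: $SECRET_KEY"
```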


The user env variables in the compose file assume you are installing this in an HCL Connections environment. They can be removed or replaced with Microsoft 365 tenant info as shown here. For more information on other authentication methods, contact the Huddo team. Default variables for Domino are also included and can be uncommented as required.

DNS / Proxy

Please follow the instructions for your chosen deployment type:


Once you have updated the appropriate docker-compose.yml and nginx.conf with your environment details, you can start the services with:

docker-compose up -d


The mount point on your system for the mongo data must grant user 1001 read/write access; see bitnami/mongodb for more info and full documentation.
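For example, assuming the data drive from the requirements above is mounted at /data (the path is an assumption; adjust it to your mount point):

```shell
# Create the mongo data directory and hand it to UID 1001,
# the user the bitnami/mongodb container runs as
DATA_ROOT="${DATA_ROOT:-/data}"
mkdir -p "$DATA_ROOT/mongo"
chown -R 1001:1001 "$DATA_ROOT/mongo" || echo "re-run as root to chown $DATA_ROOT/mongo"
```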

If your setup is not running, first check the database logs and make sure it is not complaining about permissions when writing the files it needs: docker-compose logs mongo

To rule out any other network configuration/hops, you should be able to run the following on the docker server:

curl -H "Host: your.web.url" --insecure https://localhost

This should return the HTML from the webfront.

curl -H "Host: your.api.url" --insecure https://localhost

This should return the HTML for the swagger API documentation.

curl -H "Host: your.api.url" --insecure https://localhost/health

This should return "{listening: 3001}".

If the above works, you may have configuration issues with a proxy / DNS not pointing traffic to the docker server properly. If it does not work, the local nginx proxy is probably not working; check docker-compose logs nginx to see if it points out any misconfiguration.

The core image has ping enabled and has access to all the other services, so you can use it to test connectivity:

docker-compose exec core sh
ping user
ping mongo
... etc