Install as Docker containers
This article provides instructions for running the platform from a prepared docker-compose.yml file, or by configuring the containers manually. The minimal base configuration of the platform components is described in the following sections.
Since the platform requires databases to store its data, a section is also included to help you quickly spin up MongoDB, Elasticsearch, and Redis for a small deployment.
Requirements
Resources
- Docker Engine (20 or higher)
- At least 3 CPU cores and 4 GB of available memory on your machine
- A Linux machine is preferred, but macOS or Windows can be used for a local development environment
Networking and ports
- Admin: 8888, 9999
- Gateway: 8800, 9900
- Search: 8600, 9600
- Application Engine Worker: 8080, 9090
- MongoDB: 27017
- Elasticsearch: 9200, 9300
- Redis: 6379
- Minio: 9000, 9001
Platform Cluster dependencies
As the platform consists of several components, there is a right order in which to start them so that all communication, node registration, and other processes complete successfully. While we try to make the platform as robust as possible, we recommend following this order:
- Databases and external dependent services (e.g., object storage)
- Admin
- Application Engine Worker
- Search
- Gateway
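When configuring containers manually (without docker-compose), this order can be enforced with a small wait loop on each dependency's container health state. A sketch, assuming illustrative container names such as `nae-mongo` and `nae-admin` (take the real names from your deployment):

```bash
# Block until a container's healthcheck reports "healthy"
wait_healthy() {
  until [ "$(docker inspect --format '{{.State.Health.Status}}' "$1" 2>/dev/null)" = "healthy" ]; do
    sleep 2
  done
}

# 1. Databases and external services first
docker start nae-mongo nae-elastic nae-redis nae-minio
wait_healthy nae-mongo
wait_healthy nae-redis

# 2.-5. Platform components in dependency order
docker start nae-admin  && wait_healthy nae-admin
docker start nae-worker && wait_healthy nae-worker
docker start nae-search && wait_healthy nae-search
docker start nae-gateway
```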
Running from docker-compose
We prepared a docker-compose example so you can start quickly with all the right configurations. The docker-compose file contains all the platform components as well as databases required for running the platform.
Before spinning up your platform deployment, make sure to follow these steps:
- Ensure you have Docker Compose installed.
- Verify you have access to the Netgrif Docker repository to pull the platform images, and that you are logged in.
- Download the docker-compose.yml file.
- If you have custom modules implemented, create a folder on your machine where the modules are extracted. This folder can be mounted into the Application Engine worker container to load your modules.
- Verify that you have enough disk space to store all images and container data; 5 GB should be enough to start with.
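These prerequisites can be sanity-checked from the shell before the first start; a quick pre-flight sketch:

```bash
# Docker Engine and Compose versions
docker --version
docker-compose --version      # or: docker compose version

# Log in to the registry hosting the platform images
# (use the registry host referenced by the images in docker-compose.yml)
docker login

# Free disk space in the current directory (around 5 GB is recommended)
df -h .
```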
Start containers
In the project root directory (where docker-compose.yml is located), run:
```bash
docker-compose up -d
```
The command:
- Creates (or uses the existing) `netgrif-platformnet` virtual Docker network
- Creates a named volume `minio_data` to store files
- Runs the containers in order:
  - MongoDB, Elasticsearch, Redis, and Minio (optional object storage)
  - Admin (after MongoDB and Redis reach the ready state)
  - Application Engine worker (after Admin, MongoDB, Elasticsearch, Redis, and Minio are in the ready state)
  - Gateway (after Admin and the Engine worker are ready)
  - Search (after Admin, Elasticsearch, and the Engine worker are ready)
- Uses health checks and `depends_on` conditions to ensure that each service waits for its dependencies
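You can observe how the health checks gate the start-up by querying a container's health state directly:

```bash
# Prints starting / healthy / unhealthy for a container with a configured healthcheck
docker inspect --format '{{.State.Health.Status}}' nae-admin

# The output of the last health probes, useful when a dependent service refuses to start
docker inspect --format '{{json .State.Health.Log}}' nae-admin
```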
Verify a successful start-up
You can view the list of running containers with the command:
```bash
docker ps
```
To check the logs of one of the services (e.g., Admin), use the command:
```bash
docker logs -f nae-admin
```
If you have Docker Desktop installed, you can also view the containers and their logs via the Docker Desktop UI.
You can also verify the availability of the platform components via the published health endpoints, e.g., using curl:
```bash
curl http://localhost:8888/manage/health # Admin
curl http://localhost:8800/manage/health # Gateway
curl http://localhost:8600/manage/health # Search
curl http://localhost:8080/manage/health # Engine root worker
```
A response containing UP is expected.
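The individual calls above can also be wrapped in a short loop that checks all components at once; a sketch using the HTTP ports from the Networking and ports section:

```bash
for port in 8888 8800 8600 8080; do
  if curl -sf "http://localhost:${port}/manage/health" | grep -q UP; then
    echo "port ${port}: UP"
  else
    echo "port ${port}: NOT READY"
  fi
done
```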
Admin
Platform component for managing the cluster, deployed process applications, and user management. It publishes a web UI to make it easier to work with all cluster components and their resources.
Networking
- HTTP 8888
- TCP 9999 (gRPC)
Dependencies
- MongoDB: Used to store data about configurations, users, and zones.
- Redis: Used as a session store and cache.
Environment properties
- `NAMESPACE` - Logical name of the space (e.g., `netgrif`)
- `netgrif.admin.data.mongodb.uri` - URI for the MongoDB connection (e.g., `mongodb://<host>/netgrif_admin`)
- `netgrif.admin.data.redis.host` - The hostname of the Redis instance (e.g., `nae-redis`)
- `netgrif.admin.data.redis.namespace` - Name of the namespace prefix for the Redis database (e.g., `netgrif`)
- `netgrif.admin.setup.admin.password` - Initial password for the administrator
- `netgrif.admin.setup.zones[0].name` - Name of the default zone (e.g., `z1`)
- `netgrif.admin.setup.zones[0].description` - Description of the default zone
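When running the Admin container manually, these properties can be supplied as environment variables. The sketch below assumes Spring-style relaxed binding of the property names, and the host, network, port-mapping, and image names are placeholders; take the authoritative values from the docker-compose.yml:

```bash
# All names and values below are illustrative placeholders
docker run -d --name nae-admin \
  --network netgrif-platformnet \
  -p 8888:8888 -p 9999:9999 \
  -e NAMESPACE=netgrif \
  -e NETGRIF_ADMIN_DATA_MONGODB_URI=mongodb://nae-mongo/netgrif_admin \
  -e NETGRIF_ADMIN_DATA_REDIS_HOST=nae-redis \
  -e NETGRIF_ADMIN_DATA_REDIS_NAMESPACE=netgrif \
  -e NETGRIF_ADMIN_SETUP_ADMIN_PASSWORD=change-me \
  <admin-image>
```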
Health check
Periodically pings the /manage/health endpoint and expects an UP response. If no response is received within the specified time, the container restarts (retrying up to 10 times).
Cluster Gateway
Serves as the entry point into the platform and routes API requests to the appropriate microservices (administration, core process, etc.). It also provides security (authentication and authorisation).
Networking
- HTTP 8800
- TCP 9900 (gRPC)
Dependencies
- Admin: Handles permission authentication and configuration retrieval (requires the Admin to be "healthy").
- Application Engine (Worker): At least one active worker with an available gRPC endpoint.
Environment variables
- `netgrif.gateway.node.zone` - Zone designation in the cluster (e.g., `z1`)
- `netgrif.gateway.node.host` - Address of this Gateway available on the network
- `netgrif.gateway.node.admin-node.http-host` - Addresses for HTTP connections to Admin
- `netgrif.gateway.node.admin-node.grpc-host` - Addresses for gRPC connections to Admin
Health check
Endpoint /manage/health on port 8800, expecting UP.
Cluster Search
Indexes and provides a search interface over process and document data. The service communicates with Elasticsearch for full-text search.
Networking
- HTTP 8600
- TCP 9600 (gRPC)
Dependencies
- Admin: Gets zone settings and authorization (requires the Admin to be "healthy").
- Elasticsearch: Handles indexing and querying (requires Elasticsearch to be at least running).
- Application Engine (Worker): Requires at least one active worker with an available gRPC endpoint.
Environment variables
- `netgrif.search.node.rest-port` - Server port for HTTP communication (set to 8600)
- `netgrif.search.node.grpc.port` - Server port for gRPC communication (set to 9600)
- `netgrif.search.node.host` - Address of the server on the network
- `netgrif.search.node.admin-node.host` - Address of the Admin cluster component
- `netgrif.search.node.in-cloud` - Flag indicating whether the cluster node is running in the cloud (set to `true` for Docker, Kubernetes, or similar deployments)
- `netgrif.search.data.elasticsearch.url` - Elasticsearch address
- `netgrif.search.data.redis.host` - Redis address
- `netgrif.search.data.redis.namespace` - Name of the namespace prefix for the Redis database (e.g., `netgrif`)
- `netgrif.search.node.zone` - Zone designation in the cluster (e.g., `z1`)
- `netgrif.search.security.server-patterns` - URL patterns for public endpoints that define which paths allow anonymous access (should be set to at least: `/v3/api-docs,/v3/api-docs/**,/swagger-ui.html,/swagger-ui/**,/api/public/**,/manage/**`)
Health check
Endpoint /manage/health on port 8600, expected response UP.
Application Engine Worker
The root engine that handles basic workflows and tasks in the platform. It is marked as root in case there are delegated child engines (worker engines).
Networking
- HTTP 8080
- TCP 9090 (gRPC)
Dependencies
- Admin: Handles permission authentication and configuration retrieval (requires the Admin to be "healthy").
- MongoDB: Stores tasks and process states (`netgrif_nae_ro`).
- Elasticsearch: Indexes logs and process data.
- Redis: Stores sessions and backs the event bus.
- Minio (optional): Stores "custom" modules mounted in the worker.
Environment variables
- `netgrif.engine.server.port` - Server HTTP port (set to 8080)
- `netgrif.engine.data.database-name` - Name of the database to connect to (used for the MongoDB database and as a prefix for Elasticsearch indices, e.g., `netgrif_nae_worker`)
- `netgrif.engine.data.mongodb.uri` - Connection URI string to MongoDB
- `netgrif.engine.data.elasticsearch.url` - Elasticsearch address
- `netgrif.engine.data.redis.host` - Redis address
- `netgrif.engine.data.redis.namespace` - Name of the namespace prefix for the Redis database (e.g., `netgrif`)
- `netgrif.worker.node.name` - Name of the worker (shown in Admin)
- `netgrif.worker.node.host` - Address of the server on the network
- `netgrif.worker.node.node-type` - Type of the node in the cluster (set to `ENGINE_ROOT` or `ENGINE`; exactly one `ENGINE_ROOT` in the cluster)
- `netgrif.worker.node.admin-node.host` - Address of the Admin cluster component
- `netgrif.worker.node.in-cloud` - Flag indicating whether the cluster node is running in the cloud (set to `true` for Docker, Kubernetes, or similar deployments)
- `netgrif.worker.node.root` - Flag indicating whether the worker is the Root Engine worker (set to `true` for the only worker in the cluster)
- `netgrif.worker.node.zone` - Zone designation in the cluster (e.g., `z1`)
- `netgrif.otel.traces.exporter` - Exporter for traces with the OpenTelemetry SDK (set to `none` for local deployment)
- `netgrif.otel.metrics.exporter` - Exporter for metrics with the OpenTelemetry SDK (set to `none` for local deployment)
- `netgrif.otel.logs.exporter` - Exporter for logs with the OpenTelemetry SDK (set to `none` for local deployment)
- `netgrif.engine.logging.file.path` - Path to the log file (set to `log`)
- `netgrif.engine.management.endpoints.web.exposure.include` - List of management endpoints to expose for the worker (set to `*` to expose all)
- `netgrif.engine.management.health.ldap.enabled` - Flag for LDAP connector health checks (set to `false`)
- `netgrif.engine.management.health.mail.enabled` - Flag for mail server connector health checks (set to `false` if no email server is configured)
- `netgrif.engine.management.endpoint.health.show-details` - Flag controlling whether health details are exposed (can be set to `always` for local deployment; set to `when_authorized` for production environments)
- `netgrif.engine.management.metrics.export.simple` - Should have the value `enabled`
- `netgrif.engine.management.endpoints.web.base-path` - Base URL path for all management endpoints (set to `/manage`)
- `netgrif.engine.management.metrics.storage.cron` - Cron expression for checking storage metrics (e.g., `0 0 2 * * *`)
- `netgrif.engine.main.allow-bean-definition-overriding` - Set to `true`
- `netgrif.engine.security.server-patterns` - URL patterns for public endpoints that define which paths allow anonymous access (should be set to: `/api/auth/signup,/api/auth/token/verify,/api/auth/reset,/api/auth/recover,/v3/api-docs,/v3/api-docs/**,/swagger-ui.html,/swagger-ui/**,/api/public/**,/manage/**`)
Volumes
The folder `./custom` (on the host) is mounted to `/opt/netgrif/engine/modules/custom` (in the container) - the place where custom modules can be inserted.
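Starting the worker manually could then look as follows. The property-to-environment-variable mapping assumes Spring-style relaxed binding, and the host, network, and image names are placeholders; the docker-compose.yml remains the authoritative source:

```bash
# All names and values below are illustrative placeholders
docker run -d --name nae-worker \
  --network netgrif-platformnet \
  -p 8080:8080 -p 9090:9090 \
  -v ./custom:/opt/netgrif/engine/modules/custom \
  -e NETGRIF_ENGINE_SERVER_PORT=8080 \
  -e NETGRIF_ENGINE_DATA_DATABASE_NAME=netgrif_nae_worker \
  -e NETGRIF_ENGINE_DATA_MONGODB_URI=mongodb://nae-mongo/netgrif_nae_worker \
  -e NETGRIF_ENGINE_DATA_ELASTICSEARCH_URL=http://nae-elastic:9200 \
  -e NETGRIF_ENGINE_DATA_REDIS_HOST=nae-redis \
  -e NETGRIF_WORKER_NODE_NODE_TYPE=ENGINE_ROOT \
  -e NETGRIF_WORKER_NODE_ROOT=true \
  -e NETGRIF_WORKER_NODE_IN_CLOUD=true \
  -e NETGRIF_WORKER_NODE_ZONE=z1 \
  -e NETGRIF_WORKER_NODE_ADMIN_NODE_HOST=nae-admin \
  -e NETGRIF_OTEL_TRACES_EXPORTER=none \
  -e NETGRIF_OTEL_METRICS_EXPORTER=none \
  -e NETGRIF_OTEL_LOGS_EXPORTER=none \
  <engine-worker-image>
```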
Health check
Endpoint /manage/health on port 8080, expected response UP.
Quick deployment of databases
If you want to test the platform or deploy it locally for development, you will need the required databases to run the platform. Below are some quick configurations for these databases. For production environments, it is recommended to have separate production / clustered database deployments.
MongoDB
Used as the main database. Use version 8 or higher. For a local or test deployment, one container is enough. To connect the platform to MongoDB, it must be on the same Docker network as the platform.
- Image - `mongo:8.0.3`
- Port - `27017`
- Health check - `mongosh --eval "db.adminCommand('ping')"`
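With the values above, a single-node MongoDB container can be started like this (the container and network names are assumptions):

```bash
docker run -d --name nae-mongo \
  --network netgrif-platformnet \
  -p 27017:27017 \
  --health-cmd "mongosh --eval \"db.adminCommand('ping')\"" \
  --health-interval 10s --health-retries 5 \
  mongo:8.0.3
```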
If you want to see how it looks in the database, you can use available tools such as MongoDB Compass.
Elasticsearch
Used as a search index for the platform and to support cluster-scope searches. Use version 8 or higher. For a local or test deployment, one container is enough. To connect the platform to Elasticsearch, it must be on the same Docker network as the platform.
- Image - `elasticsearch:8.15.3`
- Ports - `9200` and `9300`
- Environment variables:
  - `cluster.name: elasticsearch`
  - `discovery.type: single-node`
  - `http.host: 0.0.0.0`
  - `xpack.security.enabled: false`
  - `transport.host: 0.0.0.0`
- Health check - `curl -sf "http://localhost:9200/_cluster/health?wait_for_status=yellow&timeout=1s"`
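Put together as a manual start (container and network names are assumptions):

```bash
docker run -d --name nae-elastic \
  --network netgrif-platformnet \
  -p 9200:9200 -p 9300:9300 \
  -e cluster.name=elasticsearch \
  -e discovery.type=single-node \
  -e http.host=0.0.0.0 \
  -e transport.host=0.0.0.0 \
  -e xpack.security.enabled=false \
  elasticsearch:8.15.3
```

On small machines you may additionally want to cap the JVM heap, e.g. with `-e ES_JAVA_OPTS="-Xms512m -Xmx512m"`.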
Elasticsearch has a RESTful API to get traverse indices. For searching in the database, you can use available tools such as Kibana.
Redis
Used as a cache and as a session store for the platform. Every cluster node that needs to access session data (mainly the authenticated user) connects to the same Redis database. In the cluster node configurations, it is important to set `spring.session.redis.namespace` to the same value on every node. Use Redis version 7 or higher. For a local or test deployment, a single container is sufficient. To connect the platform to Redis, it must be on the same Docker network as the platform.
- Image - `redis:7.4.1`
- Port - `6379`
- Health check - `redis-cli ping`
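The same configuration as a manual start (container and network names are assumptions):

```bash
docker run -d --name nae-redis \
  --network netgrif-platformnet \
  -p 6379:6379 \
  --health-cmd "redis-cli ping" \
  --health-interval 10s \
  redis:7.4.1
```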
To explore content saved in Redis, you can connect to the shell inside the Redis container and use its redis-cli tool for traversing the database content.
MinIO (optional)
Each Application Engine worker stores uploaded files in the local file system under the storage directory within its working directory. However, there may be situations where this is not sufficient and you need to manage files in a shared object storage. MinIO is a great option for both local deployments and production environments. It provides S3-compatible object storage. For local or test deployments, a single MinIO container is enough. To connect the platform to MinIO, it must be on the same Docker network as the platform.
- Image - `docker.io/bitnami/minio:2022`
- Ports - `9000` and `9001`
- Environment variables:
  - `MINIO_ROOT_USER: root`
  - `MINIO_ROOT_PASSWORD: password`
  - `MINIO_DEFAULT_BUCKETS: default`
- Health check - `curl -sf http://localhost:9000/minio/health/ready`
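A manual start with the values above (container and network names are assumptions; change the credentials for anything beyond a local test):

```bash
docker run -d --name nae-minio \
  --network netgrif-platformnet \
  -p 9000:9000 -p 9001:9001 \
  -e MINIO_ROOT_USER=root \
  -e MINIO_ROOT_PASSWORD=password \
  -e MINIO_DEFAULT_BUCKETS=default \
  docker.io/bitnami/minio:2022
```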
MinIO publishes its admin console on port 9001. If you plan to use MinIO for storage, you need to create buckets (different from the default bucket) before connecting the platform.
Troubleshooting
Health checks
Every Platform component exposes the /manage/health endpoint for health-check calls. The endpoint returns the text UP when the component is fully operational. If you configure the property netgrif.engine.management.endpoint.health.show-details with the value always, the health-check endpoint will return additional information about the component's state.
For Admin, Gateway, Search, Application Engine:
For Admin, Gateway, Search, Application Engine:
```bash
curl http://localhost:<HTTP port>/manage/health
```
For MongoDB:
```bash
docker exec <container_id_mongo> mongosh --eval "db.adminCommand('ping')"
```
For Elasticsearch:
```bash
curl "http://localhost:9200/_cluster/health?wait_for_status=yellow&timeout=1s"
```
For Redis:
```bash
redis-cli -h localhost ping
```
For Minio:
```bash
curl http://localhost:9000/minio/health/ready
```
Logs
```bash
docker logs -f <container_name>
```
For example: `docker logs -f nae-admin`
Container control
Stopping all containers created by docker-compose:
```bash
docker-compose down
```
To remove all data from the created volumes, add the `-v` option to the command.

Restart one container / platform component:
```bash
docker-compose restart <name of docker-compose service>
```
For example: `docker-compose restart nae-admin`

Remove a container from docker-compose:
```bash
docker-compose rm -f <name of docker-compose service>
```
Removing a container also removes all of its data, unless the data is persisted on a mounted volume or a directory on the host machine.
