Install on Kubernetes
You can install Netgrif Platform on any Kubernetes (K8S) compatible cloud provider. This article serves as a guide on how to deploy all Platform components on a Kubernetes cluster.
Netgrif Platform requires MongoDB, Elasticsearch, and Redis to be available from the target K8S cluster. See each database's documentation for deployment instructions:
If you're on a public cloud such as Google Cloud, AWS, Azure, or DigitalOcean, you can use the managed databases they provide. Make sure the mentioned database services are running and accessible from your cluster. They should be in the same network, project, or equivalent environment.
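To sanity-check that the databases are reachable from inside the cluster, you can run a throwaway pod before starting the deployment. This is only a sketch: the service hostnames (`my-mongodb`, `my-elasticsearch`, `my-redis`) are placeholders, so substitute your actual endpoints.

```shell
# Placeholder hostnames -- replace with your actual database endpoints.
kubectl run db-check --rm -it --restart=Never --image=busybox -- \
  sh -c 'nc -zv my-mongodb 27017 && nc -zv my-elasticsearch 9200 && nc -zv my-redis 6379'
```

If any `nc` probe fails, fix network connectivity (same network, project, or equivalent environment) before continuing.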
Requirements
Prior experience with Kubernetes is required; please refer to the Kubernetes documentation to set up your Kubernetes cluster.
Resources
- Kubernetes API version: v1
The table below shows resource consumption per component.
| Name | CPUs | Memory | Storage |
|---|---|---|---|
| NAE Root Node | 2 | 2.5GB | 10GB |
| NAE Search Node | 2 | 3GB | 10GB |
| NAE Admin | 2 | 3GB | 10GB |
| NAE Worker Node | 2 | 3GB | 50GB |
| NAE Gateway | 1 | 2GB | 10GB |
These figures are approximate; actual consumption depends heavily on your processes, deployed applications, and other factors.
Starting a platform cluster
At first, it is recommended to create a namespace for the platform cluster. Let's call it netgrif:
```shell
kubectl create namespace netgrif
```

Before we move to the deployment of components, let's break down the important configuration steps:
For each component (Node), we are using a range of ports. We recommend using this port distribution in your ingress and network configuration.
| Protocol | Port range | Node |
|---|---|---|
| HTTP | 8888 | Admin |
| HTTP | 8800-8899 | Gateway |
| HTTP | 8080-8099 | Worker |
| HTTP | 8600-8699 | Search |
| Protocol | Port range | Node |
|---|---|---|
| GRPC | 9999 | Admin |
| GRPC | 9900-9990 | Gateway |
| GRPC | 9090-9109 | Worker |
| GRPC | 9600-9699 | Search |
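As an illustration of this port layout, a Service for a Worker node might expose one REST and one gRPC port like this. The resource names and labels here are placeholders, not taken from the official manifests:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: application-worker   # placeholder name
  namespace: netgrif
spec:
  selector:
    app: application-worker  # assumed pod label
  ports:
    - name: http
      port: 8080             # from the HTTP Worker range 8080-8099
      targetPort: 8080
    - name: grpc
      port: 9090             # from the GRPC Worker range 9090-9109
      targetPort: 9090
```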
Configuration properties for internal node communication
We included only the configuration properties necessary to run your cluster; they are the same as those mentioned in our Docker guide. For further customization, please read our configuration properties reference.
In our custom Kubernetes deployment, environmental properties are required for internal communication between nodes. These properties ensure that each node can discover and interact with other components reliably at runtime.
INFO
Environmental properties follow the pattern NETGRIF_${component name}_NODE_${resource}. For the Search node, each property starts with NETGRIF_SEARCH_NODE (netgrif.search.node in Spring application property syntax).
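The environment-variable form can be derived mechanically from the Spring property name (Spring Boot's relaxed binding: replace dots and dashes with underscores, then uppercase); a quick shell check:

```shell
# Derive the environment-variable name from a Spring property name.
prop="netgrif.worker.node.rest-port"
echo "$prop" | tr '.-' '__' | tr '[:lower:]' '[:upper:]'
# prints NETGRIF_WORKER_NODE_REST_PORT
```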
| Environment variable | Spring property | Default value | Note |
|---|---|---|---|
| NETGRIF_WORKER_NODE_NAME | netgrif.worker.node.name | @project.name@ | Unique name of Node |
| NETGRIF_WORKER_NODE_ZONE | netgrif.worker.node.zone | z1 | Zone name |
| NETGRIF_WORKER_NODE_PROTOCOL | netgrif.worker.node.protocol | http | Network protocol |
| NETGRIF_WORKER_NODE_HOST | netgrif.worker.node.host | localhost | Host of node server |
| NETGRIF_WORKER_NODE_REST_PORT | netgrif.worker.node.rest-port | 8080 | |
| NETGRIF_WORKER_NODE_NODE_TYPE | netgrif.worker.node.node-type | ENGINE | Type of node within cluster |
| NETGRIF_WORKER_NODE_HEALTH_URI | netgrif.worker.node.health-url | /manage/health | URI endpoint used for health checking of the node |
| NETGRIF_WORKER_NODE_IN_CLOUD | netgrif.worker.node.in-cloud | false | |
| NETGRIF_WORKER_NODE_IS_ROOT | netgrif.worker.node.root | false | |
| NETGRIF_WORKER_NODE_ADMIN_NODE_HOST | netgrif.worker.node.admin-node.host | localhost | Host of Admin node |
| NETGRIF_WORKER_NODE_ADMIN_NODE_GRPC_PORT | netgrif.worker.node.admin-node.grpc-port | 9999 | Port for gRPC communication |
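Put together, the container spec of a Worker manifest might set these properties like so. The image name, node name, and hosts are illustrative, not values from the official manifests:

```yaml
containers:
  - name: application-worker
    image: <your-worker-image>                 # placeholder
    env:
      - name: NETGRIF_WORKER_NODE_NAME
        value: worker-1                        # must be unique in the cluster
      - name: NETGRIF_WORKER_NODE_ZONE
        value: z1
      - name: NETGRIF_WORKER_NODE_HOST
        value: application-worker              # assumed Service name
      - name: NETGRIF_WORKER_NODE_REST_PORT
        value: "8080"
      - name: NETGRIF_WORKER_NODE_ADMIN_NODE_HOST
        value: application-admin               # assumed Admin Service name
      - name: NETGRIF_WORKER_NODE_ADMIN_NODE_GRPC_PORT
        value: "9999"
```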
We recommend using Kubernetes ConfigMaps for non-confidential data and Secrets for sensitive information such as passwords. Keep your passwords confidential and use secure practices to protect your cluster secrets. If the admin password is lost, the entire cluster must be redeployed.
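For example, a database password could be stored in a Secret and injected as an environment variable. The Secret name, key, and property are placeholders for illustration:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: netgrif-db-credentials   # placeholder name
  namespace: netgrif
type: Opaque
stringData:
  mongodb-password: change-me    # set your real password here
```

Then reference it in a container spec instead of a plain `value`:

```yaml
env:
  - name: SPRING_DATA_MONGODB_PASSWORD   # example property; use the one your deployment needs
    valueFrom:
      secretKeyRef:
        name: netgrif-db-credentials
        key: mongodb-password
```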
You will need to deploy each platform component with the provided Kubernetes manifest files. To deploy a component, use the apply command:

```shell
kubectl apply -f '<component_manifest.k8s.yaml>'
```

After deploying a component, wait for its initialization to finish. You can check the internal pod logs:
```shell
kubectl logs -p application-root
```

Wait for the banner to appear:

```
msg=Runner FinisherRunner is starting
msg=+----------------------------+
msg=| Netgrif Application Engine |
msg=+----------------------------+
msg=Runner FinisherRunner has ended
```

You can now continue with the deployment.
Here are the deployment files available for download:
- Root
- Worker
- Gateway
- Search
Each cluster requires exactly one Admin and one Root node, and at least one each of: Worker, Gateway, Search. Workers can be scaled horizontally. It is good practice to have one Search and one Gateway Node per zone.
IMPORTANT
For a cluster to function properly, you need to deploy each component in this order:
1. Admin
2. Root
3. Gateway
4. Search
5. Worker
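The ordering above can be scripted so each component is applied only after the previous one is ready. This sketch assumes manifest filenames of the form `<component>.k8s.yaml` and pods labeled `app=application-<component>`; both are assumptions, so adapt them to your manifests:

```shell
# Deploy components in the required order, waiting for each to become ready.
for component in admin root gateway search worker; do
  kubectl apply -n netgrif -f "${component}.k8s.yaml"       # assumed filename
  kubectl wait -n netgrif --for=condition=Ready \
    pod -l "app=application-${component}" --timeout=300s    # assumed pod label
done
```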
You can check your deployment with:
```shell
kubectl get all -n netgrif
```

Your output should look like this:

```
NAME                         READY   STATUS    RESTARTS   AGE
pod/application-admin        1/1     Running   0          5m
pod/application-root         1/1     Running   0          4m
pod/application-search-z1    1/1     Running   0          4m
pod/application-gateway-z1   1/1     Running   0          3m
pod/application-worker       1/1     Running   0          3m
# other db pods, services...
```

IMPORTANT
We don't provide a frontend image for the cluster; you need to build your own image and deploy it. Then add it to your ingress as a routing rule:
```yaml
- host: z1.netgrif.com
  http:
    paths:
      - backend:
          service:
            name: application-my-custom-frontend
            port:
              number: 80
        path: /
        pathType: Prefix
```

You can now access the admin web console at the host you defined in your ingress file.
```yaml
spec:
  rules:
    - host: netgrif.admin.com
      http:
        paths:
          - backend:
              service:
                name: application-admin-service
                port:
                  number: 8888
            path: /
            pathType: Prefix
```

In this example, the address is https://netgrif.admin.com. For your deployment, it will be whatever host you defined. The default port for the Admin frontend is 8888. To access the credentials for the first sign-in, you need to either:
- Access the admin logs:

```shell
kubectl logs -p application-admin
```

And look for the start banner:

```
msg=------------------------------------------------
msg=Realm ID: 687f4f1bee08603d74565385
msg=Login: admin
msg=Password: password
msg=------------------------------------------------
```

- Or retrieve the credentials from the environmental properties:
```yaml
- name: netgrif.admin.setup.admin.password
  value: password
- name: netgrif.admin.setup.admin.username
  value: admin
```

Also, we strongly recommend adding TLS to your cluster; you can do so by installing cert-manager.
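With cert-manager installed, TLS can be enabled on an ingress roughly like this. The issuer name and TLS secret name are placeholders; cert-manager provisions the certificate and stores it in the named Secret:

```yaml
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod   # your ClusterIssuer name
spec:
  tls:
    - hosts:
        - netgrif.admin.com
      secretName: netgrif-admin-tls                    # cert-manager stores the certificate here
```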
FAQ
Admin logs returned: "zone does not exits".
Check the netgrif.admin.setup.zones property in the admin pod and verify that the name equals NETGRIF_WORKER_NODE_ZONE (netgrif.worker.node.zone) in the Node you are trying to register. If not, you can create a new zone in the admin web console, or restart the Node with the correct netgrif.worker.node.zone property.
Why do components need Volumes?
The current architecture needs mounts to store logs and storage (which holds Petri net files during process deployment).
Why does the application need two Ingresses?
This is by design; in practice, there may be more. Each publicly available Gateway that is accessible in the cluster requires its own Ingress to handle inbound traffic. Admin Ingress is used for administration console access, and Gateway Ingress is used for application access.
I want to drain logs to an external platform.
The NAE cluster supports OpenTelemetry integration; you can set your collector connection with these properties:
- netgrif.otel.traces.exporter
- netgrif.otel.logs.exporter
- netgrif.otel.metrics.exporter
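In Kubernetes, these properties map to environment variables on each node's container. As an illustrative example, shipping all signals to an OTLP collector might look like this (the `otlp` value is the standard OpenTelemetry exporter name, but check the configuration properties reference for the values your NAE version accepts):

```yaml
env:
  - name: NETGRIF_OTEL_TRACES_EXPORTER
    value: otlp
  - name: NETGRIF_OTEL_LOGS_EXPORTER
    value: otlp
  - name: NETGRIF_OTEL_METRICS_EXPORTER
    value: otlp
```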
Do you support Helm charts for deployment?
Currently, we do not provide official Helm charts for the Netgrif Platform. We recommend using the provided Kubernetes manifest files for deployment. However, you can create your own Helm charts based on the manifest files if needed.
I want to add a new worker.
Simply deploy a new StatefulSet and Service. Make sure to update the names of the resources and adjust environment variables such as NETGRIF_WORKER_NODE_NAME.
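A fragment sketching the renames involved, assuming your existing worker manifest as a starting point (resource and node names here are illustrative):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: application-worker-2          # new, unique resource name
spec:
  template:
    spec:
      containers:
        - name: application-worker-2
          env:
            - name: NETGRIF_WORKER_NODE_NAME
              value: worker-2         # must be unique within the cluster
```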
