On-Prem Infrastructure Requirements

This page covers all infrastructure that must be provisioned before deploying Unstract. Complete these steps first, then proceed to the Deployment Guide.

LLMWhisperer Dependency

LLMWhisperer is a required dependency and must be deployed before Unstract itself. Refer to the LLMWhisperer On-Prem Deployment Guide for deployment details.

Infrastructure Prerequisites

The following infrastructure must be provisioned by the customer team before proceeding with the Helm installation. Use whatever provisioning approach follows your internal standards (Terraform, Pulumi, CloudFormation, manual setup, etc.).

Kubernetes Cluster

  • Recommended version: >= 1.29 (latest tested: 1.33)
  • Node autoscaling should be enabled
  • Single AZ is sufficient for standard deployments. For production HA deployments, multi-AZ is supported — see the HA Deployment Guide
  • Ingress controller as a K8s cluster add-on for load balancer creation (recommended)
    • Ingress requires a maximum timeout of 900 seconds to work as expected (see Appendix c)
  • In-house or cloud provider observability stack (recommended)
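
A quick pre-flight check against an existing cluster might look like the following (a sketch; it assumes kubectl is already pointed at the target cluster):

```shell
# Confirm the server version meets the >= 1.29 recommendation
kubectl version

# Confirm an ingress controller is installed (at least one IngressClass should be listed)
kubectl get ingressclass
```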

PostgreSQL Database

  • Supported version: 15.0
  • Minimum specs: 1 vCPU, 8 GiB RAM, 50 GiB SSD
  • Autoscale enabled (recommended)
  • A dedicated database for Unstract should be created within the PostgreSQL instance
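
Creating the dedicated database might look like this (a sketch; the `unstract` / `unstract_user` names and connection parameters are illustrative placeholders, not chart requirements):

```shell
# Connect as an admin user and create a dedicated role and database for Unstract
psql "host=<postgres-host> user=<admin-user> dbname=postgres" <<'SQL'
CREATE ROLE unstract_user WITH LOGIN PASSWORD '<strong-password>';
CREATE DATABASE unstract OWNER unstract_user;
SQL
```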

Object Storage

  • Managed blob storage: AWS S3 / Azure Blob Storage / GCP GCS
  • IAM / service principal with read/write access to the target bucket or container
  • See Remote Storage Configuration for detailed setup

DNS & SSL

  • A domain for pointing to Unstract (e.g., unstract.<customer-domain>.com)
  • An active SSL certificate is required — HTTPS is mandatory for the authentication system to function properly
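
Once DNS resolves, certificate validity can be spot-checked with openssl (a sketch; the hostname is a placeholder):

```shell
# Inspect the certificate served for the Unstract hostname
openssl s_client -connect unstract.<customer-domain>.com:443 \
  -servername unstract.<customer-domain>.com </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -dates
```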

Networking

  • Recommend allocating a subnet of /18 CIDR size for pods
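
The /18 recommendation budgets generously for pod churn at scale; the address count works out as follows (bash arithmetic):

```shell
# A /18 subnet leaves 32 - 18 = 14 host bits, i.e. 16,384 pod IP addresses
PREFIX=18
POD_IPS=$((1 << (32 - PREFIX)))
echo "$POD_IPS"   # prints 16384
```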

Node Profile

| Machine Type | Label | Taint (NoSchedule) | Min | Max |
| --- | --- | --- | --- | --- |
| 4 vCPU and 32 GiB | service: unstract | service: unstract | 2 | 4 |

The above is a small profile suitable for initial setup. For production sizing, see Appendix b.

Workloads are expected to run on non-spot node pools.
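
Because the node group carries a `service: unstract` NoSchedule taint, pods scheduled onto it need a matching toleration and node selector. A sketch of the relevant pod-spec fragment (exact placement depends on your Helm values):

```yaml
nodeSelector:
  service: unstract
tolerations:
  - key: "service"
    operator: "Equal"
    value: "unstract"
    effect: "NoSchedule"
```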

Remote Storage Configuration

AWS

Unstract supports three authentication methods for AWS S3:

  • Static credentials — AWS access key and secret key configured directly in Helm values
  • Node-level IAM — attach the S3 IAM policy directly to the EKS node instance role; simplest setup but less granular than IRSA (available from v0.158.4)
  • IRSA (IAM Roles for Service Accounts) — recommended for EKS deployments; uses Kubernetes ServiceAccount annotations to assume an IAM role without storing credentials (available from v0.158.4)

S3 Bucket

Create an S3 bucket in your preferred region:

aws s3 mb s3://<s3-bucket-name> --region <aws-region>

IAM Policy

Create an IAM policy with the following permissions. Replace <s3-bucket-name> with your bucket name.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListBucket",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::<s3-bucket-name>"
      ]
    },
    {
      "Sid": "ObjectAccess",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::<s3-bucket-name>/*"
      ]
    },
    {
      "Sid": "ListAllBuckets",
      "Effect": "Allow",
      "Action": [
        "s3:ListAllMyBuckets"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}

Save the policy as s3-policy.json, then create it:

aws iam create-policy \
  --policy-name <policy-name> \
  --policy-document file://s3-policy.json

note

The s3:ListAllMyBuckets permission is required for Unstract to validate storage connectivity.

Option 1: Static Credentials

Use this method when running outside EKS or when IRSA is not available.

Helm Chart Values

secret.yaml:

PERMANENT_REMOTE_STORAGE: &PERMANENT_REMOTE_STORAGE '{"provider": "s3", "credentials": {"key":"<s3-access-key>","secret":"<s3-access-secret>","endpoint_url":"<s3-endpoint-url>"}}'

| Config placeholder | Expected credential |
| --- | --- |
| s3-access-key | AWS access key |
| s3-access-secret | AWS secret key |
| s3-endpoint-url | e.g. https://s3.ap-south-1.amazonaws.com/ |

Option 2: Node-Level IAM

Attach the S3 IAM policy directly to the EKS node instance role. This is the simplest method but grants S3 access to all pods on the node, not just Unstract workloads.

Step 1: Find the Node Instance Role

# List the policies attached to your EKS node group's instance role
aws iam list-attached-role-policies \
  --role-name <node-instance-role-name>

The node instance role name can be found in the EKS console under your node group's configuration, or via:

kubectl get nodes -o jsonpath='{.items[0].spec.providerID}' | cut -d/ -f5
# Then look up the instance's IAM role in the EC2 console
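
Alternatively, the attached role can be resolved entirely from the CLI (a sketch; it assumes the AWS CLI is configured with permissions to describe EC2 instances):

```shell
# Get the instance ID of the first node, then look up its instance profile ARN
INSTANCE_ID=$(kubectl get nodes -o jsonpath='{.items[0].spec.providerID}' | cut -d/ -f5)
aws ec2 describe-instances --instance-ids "$INSTANCE_ID" \
  --query 'Reservations[0].Instances[0].IamInstanceProfile.Arn' --output text
```

The role contained in that instance profile can then be read with `aws iam get-instance-profile --instance-profile-name <profile-name>`.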

Step 2: Attach the S3 Policy to the Node Role

Attach the IAM policy (created in the IAM Policy section above) to the node instance role — either as a managed policy or an inline policy.

As a managed policy:

aws iam attach-role-policy \
  --role-name <node-instance-role-name> \
  --policy-arn arn:aws:iam::<aws-account-id>:policy/<policy-name>

Or as an inline policy via the AWS Console:

  1. Go to IAM → Roles → select your node instance role
  2. Permissions tab → Add permissions → Create inline policy
  3. Switch to JSON, paste the S3 policy, and click Next
  4. Give the policy a name and click Create policy

Step 3: Configure Helm Values

No IRSA configuration is needed. Leave IRSA disabled (default):

global:
  irsa:
    enabled: false

In secret.yaml, configure PERMANENT_REMOTE_STORAGE without key and secret:

PERMANENT_REMOTE_STORAGE: &PERMANENT_REMOTE_STORAGE '{"provider": "s3", "credentials": {"endpoint_url":"https://s3.<aws-region>.amazonaws.com/", "region_name":"<aws-region>"}}'
note

After attaching the policy, EC2 instance metadata credentials may take up to 5 minutes to refresh. Wait before testing connectivity.

Option 3: IRSA (Recommended for EKS)

IRSA eliminates the need to store AWS credentials in Helm values. Instead, pods assume an IAM role via a Kubernetes ServiceAccount annotation.

Prerequisites:

  • An EKS cluster with an OIDC provider associated
  • The IAM policy created above

Step 1: Create an IAM Role with OIDC Trust Policy

Create an IAM role that trusts the EKS OIDC provider. Replace the placeholders with your values.

# Get the OIDC issuer URL for your cluster
aws eks describe-cluster --name <cluster-name> --region <aws-region> \
  --query "cluster.identity.oidc.issuer" --output text
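
The <oidc-id> needed for the trust policy is the final path segment of the issuer URL returned by the command above; it can be split off in the shell (the issuer value here is a made-up example):

```shell
# Hypothetical issuer URL; substitute the real describe-cluster output
ISSUER="https://oidc.eks.ap-south-1.amazonaws.com/id/EXAMPLE1234567890ABCDEF"
# The <oidc-id> is everything after the last slash
OIDC_ID="${ISSUER##*/}"
echo "$OIDC_ID"   # prints EXAMPLE1234567890ABCDEF
```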

Create the trust policy (replace <aws-account-id>, <oidc-id>, and <namespace>):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<aws-account-id>:oidc-provider/oidc.eks.<aws-region>.amazonaws.com/id/<oidc-id>"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.<aws-region>.amazonaws.com/id/<oidc-id>:sub": "system:serviceaccount:<namespace>:unstract-irsa",
          "oidc.eks.<aws-region>.amazonaws.com/id/<oidc-id>:aud": "sts.amazonaws.com"
        }
      }
    }
  ]
}

| Placeholder | Description |
| --- | --- |
| <aws-account-id> | Your AWS account ID |
| <oidc-id> | OIDC provider ID from the issuer URL (the segment after /id/) |
| <aws-region> | AWS region of the EKS cluster |
| <namespace> | Kubernetes namespace where Unstract is deployed |

note

The ServiceAccount name unstract-irsa is hardcoded in the Helm chart. Use this exact name in the trust policy.

Save the trust policy as trust-policy.json, then create the role:

aws iam create-role \
  --role-name <role-name> \
  --assume-role-policy-document file://trust-policy.json

Step 2: Attach the S3 Policy to the Role

aws iam attach-role-policy \
  --role-name <role-name> \
  --policy-arn arn:aws:iam::<aws-account-id>:policy/<policy-name>

Step 3: Configure Helm Values

Enable IRSA in values.yaml:

global:
  cloud: aws
  irsa:
    enabled: true
    roleArn: "arn:aws:iam::<aws-account-id>:role/<role-name>"

In secret.yaml, configure PERMANENT_REMOTE_STORAGE without key and secret:

PERMANENT_REMOTE_STORAGE: &PERMANENT_REMOTE_STORAGE '{"provider": "s3", "credentials": {"endpoint_url":"https://s3.<aws-region>.amazonaws.com/", "region_name":"<aws-region>"}}'
info

When IRSA is enabled, the Helm chart automatically creates a Kubernetes ServiceAccount annotated with the IAM role ARN. Pods receive temporary AWS credentials via the projected service account token — no static keys are needed.
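
For reference, the ServiceAccount the chart creates is equivalent to the following sketch, using the standard IRSA role annotation:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: unstract-irsa
  namespace: <namespace>
  annotations:
    eks.amazonaws.com/role-arn: "arn:aws:iam::<aws-account-id>:role/<role-name>"
```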

Verify IRSA is Working

After deployment, confirm that the IRSA environment variables are injected into pods:

kubectl exec -n <namespace> <pod-name> -- env | grep AWS

You should see AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE in the output.
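
To confirm the role is actually assumable (not just that the variables are set), an STS call can be made from any pod that has the AWS CLI available (a sketch):

```shell
# The returned ARN should reference the IRSA role, not the node instance role
kubectl exec -n <namespace> <pod-name> -- aws sts get-caller-identity
```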

Common Helm Values (All Methods)

values.yaml — replace <s3-bucket-name> with your S3 bucket name:

backend:
  configMap:
    REMOTE_SIMPLE_PROMPT_STUDIO_FILE_PATH: <s3-bucket-name>/simple-prompt-studio-data
    REMOTE_PROMPT_STUDIO_FILE_PATH: <s3-bucket-name>/prompt-studio-data

platform:
  configMap:
    MODEL_PRICES_FILE_PATH: <s3-bucket-name>/cost/model_prices.json

prompt:
  configMap:
    REMOTE_PROMPT_STUDIO_FILE_PATH: <s3-bucket-name>/prompt-studio-data

Azure
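
Mirroring the S3 bucket step above, the target Azure container can be created with the Azure CLI (a sketch; the account, container, and resource group names are placeholders):

```shell
# Create the storage account and a container for Unstract data
az storage account create --name <azure-account-name> \
  --resource-group <resource-group> --sku Standard_LRS
az storage container create --name <azure-container-name> \
  --account-name <azure-account-name>
```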

Helm Chart Values

secret.yaml:

PERMANENT_REMOTE_STORAGE: &PERMANENT_REMOTE_STORAGE '{"provider": "abfs", "credentials": {"account_name":"<azure-account-name>","access_key":"<azure-access-key>","connection_string":"<azure-connection-string>"}}'

| Config placeholder | Expected credential |
| --- | --- |
| azure-account-name | Azure account name |
| azure-access-key | Azure access key |
| azure-connection-string | e.g. DefaultEndpointsProtocol=https;AccountName=xxxxxxx;AccountKey=xxxxx;EndpointSuffix=core.windows.net |

values.yaml — replace <azure-container-name> with the applicable Azure container name:

backend:
  configMap:
    REMOTE_SIMPLE_PROMPT_STUDIO_FILE_PATH: <azure-container-name>/simple-prompt-studio-data
    REMOTE_PROMPT_STUDIO_FILE_PATH: <azure-container-name>/prompt-studio-data

platform:
  configMap:
    MODEL_PRICES_FILE_PATH: <azure-container-name>/cost/model_prices.json

prompt:
  configMap:
    REMOTE_PROMPT_STUDIO_FILE_PATH: <azure-container-name>/prompt-studio-data

Appendix

a. Cluster Nodes Config

  • Minimum spec: 4 vCPU / 32 GiB
  • Node autoscaling should be enabled
  • Node Groups are optional based on the profile
  • Single AZ is sufficient for standard deployments
    • For production HA deployments with Redis Sentinel, RabbitMQ quorum queues, and MinIO HA, multi-AZ is supported — see the HA Deployment Guide

b. Cluster Size Profiles

Small Profile (not recommended for high volume)

  • No autoscaling (can be enabled if required)
  • Only one default Node Group

| Machine Type | Label | Taint (NoSchedule) | Min | Max |
| --- | --- | --- | --- | --- |
| 4 vCPU and 32 GiB | service: unstract | service: unstract | 2 | 4 |

Production Profile

  • Different Node Groups based on workloads
  • Add 50 GiB SSD for application data to each machine

| Machine Type | Label | Taint (NoSchedule) | Min | Max |
| --- | --- | --- | --- | --- |
| 4 vCPU and 32 GiB | service: unstract | service: unstract | 5 | 16 |

Workloads are expected to run on non-spot node pools.

c. Ingress Setup

All ingress types must support a 900-second timeout.

AWS ALB Ingress Controller

  • Ingress configuration in EKS Auto Mode

  • Required annotation:

    # REF: https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/how-it-works/#ip-mode
    alb.ingress.kubernetes.io/target-type: ip

Nginx Ingress Controller

  • Recommended ingress controller for Azure AKS

  • Two Nginx Ingress Controllers are commonly deployed and they use different, mutually exclusive annotation prefixes. Pick the annotation set that matches the controller installed in your cluster — do not mix the two:

    In both cases, set the controller via spec.ingressClassName: nginx on the Ingress resource (preferred since Kubernetes 1.18). The legacy kubernetes.io/ingress.class: nginx annotation still works for older clusters but is deprecated.

  • Option A — Community Ingress Controller

    # Selects the ingress controller. Deprecated since K8s 1.18 in favor of
    # spec.ingressClassName, but still honored as a fallback by the controller.
    kubernetes.io/ingress.class: nginx

    # Must be increased from default 60 to 900.
    nginx.ingress.kubernetes.io/proxy-read-timeout: "900"

    # Must be increased from default 1 MB for large document uploads.
    nginx.ingress.kubernetes.io/proxy-body-size: "200m"

    # Forces X-Forwarded-Proto=https when AWS NLB terminates TLS upstream.
    # Requires --allow-snippet-annotations=true on the controller (off by default since v1.9.0).
    nginx.ingress.kubernetes.io/configuration-snippet: |
      access_by_lua_block { ngx.var.pass_access_scheme = "https" }
  • Option B — NGINX Ingress Controller (F5 NGINX)

    # Selects the ingress controller. Deprecated since K8s 1.18 in favor of
    # spec.ingressClassName, but still honored as a fallback by the controller.
    kubernetes.io/ingress.class: nginx

    # Default is 60. Must be increased to 900.
    nginx.org/proxy-read-timeout: "900"

    # Default is 1 MB. Must be increased for large document uploads.
    nginx.org/client-max-body-size: "200m"

    # Required when using AWS NLB (Layer 4) with TLS termination.
    # NLB does not inject X-Forwarded-Proto, causing http:// callback URLs.
    nginx.org/proxy-set-headers: "X-Forwarded-Proto: https"
  • Configure Nginx to work with AWS EKS

warning

If you are using the Community Ingress Controller (kubernetes/ingress-nginx), avoid using the nginx.ingress.kubernetes.io/rewrite-target annotation. In Community NGINX Controller versions >= v0.22.0, the old rewrite-target: / syntax causes authentication failures (401 Unauthorized responses). If you encounter login issues, remove any rewrite-target annotations from your ingress configuration.

d. Container Images

For the full list of container images used by the Unstract Platform Helm chart, including instructions for mirroring, registry overrides, and pre-pulling, see the dedicated Container Images Reference.