ERPNext Local Kubernetes Deployment

Deploying ERPNext on Local Ubuntu with MicroK8s

I’m looking to deploy an ERPNext site on my local Ubuntu machine using MicroK8s. The goal is to create a local environment for testing the Kubernetes deployment of ERPNext. I want to access the site at the URL erp.localhost and use a MariaDB pod backed by a local volume instead of an external database.

Since I’m new to Kubernetes, I would appreciate detailed steps and the necessary manifest files to set this up.

I have followed the link below to try it.

Thank you!

The guide you linked ends with:
“Now visit http://localhost:8080 in your browser, and you should be prompted with the ERPNext login page.”
Did you reach that point successfully?

Thanks for your reply. Unfortunately, after I install the ERPNext Helm chart, some of the pods are stuck in the Pending state. I tried to get their logs, but logs are not available for pods that are still Pending. Can anyone point me to a guide for deploying ERPNext locally with Kubernetes?

Maybe this can help you better adapt the configuration needed for your ERPNext setup:

kubectl describe pods will tell you why they are in the Pending state.
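
For example, assuming the release is in the erpnext namespace (the pod name is a placeholder):

kubectl get pods -n erpnext

# the Events section at the end of the output explains why the pod
# cannot be scheduled, e.g. unbound PersistentVolumeClaims or insufficient resources
kubectl describe pod <pod-name> -n erpnext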

Don’t use Kubernetes before you’ve familiarised yourself with kubectl a bit more.


Hello everyone, thanks for all your replies. I think if I spell out what I tried and which manifests I used, it will be easier for you all to guide me.

  1. I started MicroK8s

microk8s start

microk8s status
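
One note: the chart assumes DNS and storage (and later ingress, if you want to reach erp.localhost) are available in the cluster. On MicroK8s these are addons, so something like the following may be needed (addon names as in recent MicroK8s releases):

# dns and a default hostpath StorageClass are used by most charts;
# ingress is only needed once you expose the site
microk8s enable dns hostpath-storage ingress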

  2. Configured kubectl to use the microk8s context

microk8s config > ~/.kube/config

kubectl config use-context microk8s
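
To verify the switch worked:

# should print "microk8s"
kubectl config current-context

# confirms the API server is reachable
kubectl get nodes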

  3. Cloned the frappe/helm repo on my local machine

git clone https://github.com/frappe/helm.git

cd helm

  4. Created a namespace, added the Helm repo, and installed the chart for nfs-ganesha-server-and-external-provisioner

kubectl create namespace nfs

  

helm repo add nfs-ganesha-server-and-external-provisioner https://kubernetes-sigs.github.io/nfs-ganesha-server-and-external-provisioner

  

helm upgrade --install -n nfs in-cluster nfs-ganesha-server-and-external-provisioner/nfs-server-provisioner --set 'storageClass.mountOptions={vers=4.1}' --set persistence.enabled=true --set persistence.size=9Gi
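
To confirm the provisioner came up and its StorageClass exists (the chart’s default StorageClass name is nfs, which is what persistence.worker.storageClass references below):

kubectl get pods -n nfs

kubectl get storageclass nfs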

  5. Created a namespace, added the Helm repo, and installed the chart for ERPNext

kubectl create namespace erpnext
  
helm repo add frappe https://helm.erpnext.com

helm install frappe-bench --namespace erpnext -f erpnext/values.yaml -f tests/erpnext/values.yaml frappe/erpnext

5.a. I have changed the erpnext/values.yaml and tests/erpnext/values.yaml files:

a. erpnext/values.yaml


# Default values for erpnext.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

externalDatabase:
  dbHost: "1.2.3.4"
  dbPort: "3306"
  dbRootUser: "root"
  dbRootPassword: "secret"

image:
  repository: frappe/erpnext
  tag: v15.36.1
  pullPolicy: IfNotPresent

nginx:
  replicaCount: 1
  autoscaling:
    enabled: false
    minReplicas: 1
    maxReplicas: 3
    targetCPU: 75
    targetMemory: 75
  # config: |
  #   # custom conf /etc/nginx/conf.d/default.conf
  environment:
    upstreamRealIPAddress: "127.0.0.1"
    upstreamRealIPRecursive: "off"
    upstreamRealIPHeader: "X-Forwarded-For"
    frappeSiteNameHeader: "$host"
  livenessProbe:
    tcpSocket:
      port: 8080
    initialDelaySeconds: 5
    periodSeconds: 10
  readinessProbe:
    tcpSocket:
      port: 8080
    initialDelaySeconds: 5
    periodSeconds: 10
  service:
    type: ClusterIP
    port: 8080
  resources: {}
  nodeSelector: {}
  tolerations: []
  affinity: {}
  # Custom topologySpreadConstraints (uncomment and modify to override defaults)
  # topologySpreadConstraints:
  #   - maxSkew: 2
  #     topologyKey: failure-domain.beta.kubernetes.io/zone
  #     whenUnsatisfiable: ScheduleAnyway

  # Default topologySpreadConstraints (used if topologySpreadConstraints is not set)
  defaultTopologySpread:
    maxSkew: 1
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: DoNotSchedule
  envVars: []
  initContainers: []
  sidecars: []

worker:
  gunicorn:
    replicaCount: 1
    autoscaling:
      enabled: false
      minReplicas: 1
      maxReplicas: 3
      targetCPU: 75
      targetMemory: 75
    livenessProbe:
      tcpSocket:
        port: 8000
      initialDelaySeconds: 5
      periodSeconds: 10
    readinessProbe:
      tcpSocket:
        port: 8000
      initialDelaySeconds: 5
      periodSeconds: 10
    service:
      type: ClusterIP
      port: 8000
    resources: {}
    nodeSelector: {}
    tolerations: []
    affinity: {}
    args: []
    envVars: []
    initContainers: []
    sidecars: []

  default:
    replicaCount: 1
    autoscaling:
      enabled: false
      minReplicas: 1
      maxReplicas: 3
      targetCPU: 75
      targetMemory: 75
    resources: {}
    nodeSelector: {}
    tolerations: []
    affinity: {}
    livenessProbe:
      override: false
      probe: {}
    readinessProbe:
      override: false
      probe: {}
    envVars: []
    initContainers: []
    sidecars: []

  short:
    replicaCount: 1
    autoscaling:
      enabled: false
      minReplicas: 1
      maxReplicas: 3
      targetCPU: 75
      targetMemory: 75
    resources: {}
    nodeSelector: {}
    tolerations: []
    affinity: {}
    livenessProbe:
      override: false
      probe: {}
    readinessProbe:
      override: false
      probe: {}
    envVars: []
    initContainers: []
    sidecars: []

  long:
    replicaCount: 1
    autoscaling:
      enabled: false
      minReplicas: 1
      maxReplicas: 3
      targetCPU: 75
      targetMemory: 75
    resources: {}
    nodeSelector: {}
    tolerations: []
    affinity: {}
    livenessProbe:
      override: false
      probe: {}
    readinessProbe:
      override: false
      probe: {}
    envVars: []
    initContainers: []
    sidecars: []

  scheduler:
    replicaCount: 1
    resources: {}
    nodeSelector: {}
    tolerations: []
    affinity: {}
    livenessProbe:
      override: false
      probe: {}
    readinessProbe:
      override: false
      probe: {}
    envVars: []
    initContainers: []
    sidecars: []


  # Custom topologySpreadConstraints (uncomment and modify to override defaults)
  # topologySpreadConstraints:
  #   - maxSkew: 2
  #     topologyKey: failure-domain.beta.kubernetes.io/zone
  #     whenUnsatisfiable: ScheduleAnyway

  # Default topologySpreadConstraints (used if topologySpreadConstraints is not set)
  defaultTopologySpread:
    maxSkew: 1
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: DoNotSchedule


  healthProbe: |
    exec:
      command:
        - bash
        - -c
        - echo "Ping backing services";
        {{- if .Values.mariadb.enabled }}
        {{- if eq .Values.mariadb.architecture "replication" }}
        - wait-for-it {{ .Release.Name }}-mariadb-primary:{{ .Values.mariadb.primary.service.ports.mysql }} -t 1;
        {{- else }}
        - wait-for-it {{ .Release.Name }}-mariadb:{{ .Values.mariadb.primary.service.ports.mysql }} -t 1;
        {{- end }}
        {{- else if .Values.dbHost }}
        - wait-for-it {{ .Values.dbHost }}:{{ .Values.mariadb.primary.service.ports.mysql }} -t 1;
        {{- end }}
        {{- if index .Values "redis-cache" "enabled" }}
        - wait-for-it {{ .Release.Name }}-redis-cache-master:{{ index .Values "redis-cache" "master" "containerPorts" "redis" }} -t 1;
        {{- else if index .Values "redis-cache" "host" }}
        - wait-for-it {{ index .Values "redis-cache" "host" }} -t 1;
        {{- end }}
        {{- if index .Values "redis-queue" "enabled" }}
        - wait-for-it {{ .Release.Name }}-redis-queue-master:{{ index .Values "redis-queue" "master" "containerPorts" "redis" }} -t 1;
        {{- else if index .Values "redis-queue" "host" }}
        - wait-for-it {{ index .Values "redis-queue" "host" }} -t 1;
        {{- end }}
        {{- if .Values.postgresql.host }}
        - wait-for-it {{ .Values.postgresql.host }}:{{ .Values.postgresql.primary.service.ports.postgresql }} -t 1;
        {{- else if .Values.postgresql.enabled }}
        - wait-for-it {{ .Release.Name }}-postgresql:{{ .Values.postgresql.primary.service.ports.postgresql }} -t 1;
        {{- end }}
    initialDelaySeconds: 15
    periodSeconds: 5

socketio:
  replicaCount: 1
  autoscaling:
    enabled: false
    minReplicas: 1
    maxReplicas: 3
    targetCPU: 75
    targetMemory: 75
  livenessProbe:
    tcpSocket:
      port: 9000
    initialDelaySeconds: 5
    periodSeconds: 10
  readinessProbe:
    tcpSocket:
      port: 9000
    initialDelaySeconds: 5
    periodSeconds: 10
  resources: {}
  nodeSelector: {}
  tolerations: []
  affinity: {}
  service:
    type: ClusterIP
    port: 9000
  envVars: []
  initContainers: []
  sidecars: []

persistence:
  worker:
    storageClass: "nfs"

# Ingress
ingress:
  # ingressName: ""
  # className: ""
  enabled: false
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
    # cert-manager.io/cluster-issuer: letsencrypt-prod
  hosts:
  - host: erp.cluster.local
    paths:
    - path: /
      pathType: ImplementationSpecific
  tls: []
  #  - secretName: auth-server-tls
  #    hosts:
  #      - auth-server.local

jobs:
  volumePermissions:
    enabled: false
    backoffLimit: 0
    resources: {}
    nodeSelector: {}
    tolerations: []
    affinity: {}

  configure:
    enabled: true
    fixVolume: true
    backoffLimit: 0
    resources: {}
    nodeSelector: {}
    tolerations: []
    affinity: {}
    envVars: []
    command: []
    args: []

  createSite:
    enabled: false
    forceCreate: false
    siteName: "erp.cluster.local"
    adminPassword: "changeit"
    installApps:
    - "erpnext"
    dbType: "mariadb"
    backoffLimit: 0
    resources: {}
    nodeSelector: {}
    tolerations: []
    affinity: {}

  dropSite:
    enabled: false
    forced: false
    siteName: "erp.cluster.local"
    backoffLimit: 0
    resources: {}
    nodeSelector: {}
    tolerations: []
    affinity: {}

  backup:
    enabled: false
    siteName: "erp.cluster.local"
    withFiles: true
    backoffLimit: 0
    resources: {}
    nodeSelector: {}
    tolerations: []
    affinity: {}

  migrate:
    enabled: false
    siteName: "erp.cluster.local"
    skipFailing: false
    backoffLimit: 0
    resources: {}
    nodeSelector: {}
    tolerations: []
    affinity: {}

  custom:
    enabled: false
    jobName: ""
    labels: {}
    backoffLimit: 0
    initContainers: []
    containers: []
    restartPolicy: Never
    volumes: []
    nodeSelector: {}
    affinity: {}
    tolerations: []

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

serviceAccount:
  # Specifies whether a service account should be created
  create: true

podSecurityContext:
  supplementalGroups: [1000]

securityContext:
  capabilities:
    add:
    - CAP_CHOWN
  # readOnlyRootFilesystem: true
  # runAsNonRoot: true
  # runAsUser: 1000

externalRedis:
  cache:
    enabled: true
    host: "redis://1.1.1.1:6379"
  queue:
    enabled: true
    host: "redis://2.2.2.2:6379"

mariadb:
  # https://github.com/bitnami/charts/tree/master/bitnami/mariadb
  enabled: true
  auth:
    rootPassword: "secret"
    username: "admin"
    password: "secret"
    replicationPassword: "secret"
  primary:
    service:
      ports:
        mysql: 3306
    extraFlags: >-
      --skip-character-set-client-handshake
      --skip-innodb-read-only-compressed
      --character-set-server=utf8mb4
      --collation-server=utf8mb4_unicode_ci

postgresql:
  # https://github.com/bitnami/charts/tree/master/bitnami/postgresql
  enabled: false
  # host: ""
  auth:
    username: "postgres"
    postgresPassword: "changeit"
  primary:
    service:
      ports:
        postgresql: 5432

b. tests/erpnext/values.yaml


mariadb:
  enabled: false

dbHost: mariadb.mariadb.svc.cluster.local
dbRootUser: "admin"
dbRootPassword: "secret"
# For backward compatibility only, use dbHost
mariadbHost: mariadb.mariadb.svc.cluster.local

persistence:
  worker:
    storageClass: "nfs"

jobs:
  configure:
    fixVolume: false
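
With both values files in place, the rollout from step 5 can be watched with, for example:

kubectl get pods -n erpnext --watch

helm status frappe-bench -n erpnext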


  6. These are my pod results:

  7. Create erpnext/pvc.yaml


apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: erpnext
  name: erpnext-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
  storageClassName: local-path
  8. Apply erpnext/pvc.yaml

kubectl apply --namespace erpnext -f erpnext/pvc.yaml
  
kubectl get pvc -n erpnext

  9. Create the erpnext/createsite.yaml file and apply it

apiVersion: batch/v1
kind: Job
metadata:
  name: create-erp-site
spec:
  backoffLimit: 0
  template:
    spec:
      securityContext:
        supplementalGroups: [1000]
      containers:
      - name: create-site
        image: frappe/erpnext-worker:v14.12.1
        command: ["/bin/sh", "-c"]  # Use shell to combine multiple commands
        args:
          - |
            if [ ! -f /home/frappe/frappe-bench/sites/apps.txt ]; then
              echo "erpnext" > /home/frappe/frappe-bench/sites/apps.txt;
            fi;
            bench new-site localhost --no-mariadb-socket --install-app=${INSTALL_APPS} \
            --db-root-password=${MYSQL_ROOT_PASSWORD} --admin-password=${ADMIN_PASSWORD};
        imagePullPolicy: IfNotPresent
        volumeMounts:
          - name: sites-dir
            mountPath: /home/frappe/frappe-bench/sites
        env:
          - name: "DB_ROOT_USER"
            value: "admin"
          - name: "MYSQL_ROOT_PASSWORD"
            value: "secret"
          - name: "ADMIN_PASSWORD"
            value: "secret"
          - name: "INSTALL_APPS"
            value: "erpnext"
      restartPolicy: Never
      volumes:
        - name: sites-dir
          persistentVolumeClaim:
            claimName: erpnext-pvc
            readOnly: false


a. Apply it


kubectl apply -n erpnext -f erpnext/createsite.yaml
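
The job can then be followed with:

kubectl get jobs -n erpnext

# stream the job’s logs until it completes or fails
kubectl logs -n erpnext -f job/create-erp-site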

  10. The output is:

Now I am stuck here. I have searched, but there is no clear guide that says “follow these steps and, tada, your ERPNext site is deployed in K8s and available on localhost.” A clear step-by-step guide would be really helpful for novices.
Thanks in advance! Bye!

Hello everyone,
Having been distracted by other work recently, it took me this long to dive back into this topic. After some digging I came across the GitHub repo by VeryStrongFingers for deploying ERPNext locally using k3s: https://github.com/VeryStrongFingers/erpnext-k3s/tree/VeryStrongFingers-patch-1.
Some steps are missing, but I more or less picked up the flow. Unfortunately, I am now stuck at the create-site job. It would be really helpful if anyone here could give me some guidance; I keep hitting a wall after a few steps.

To make it easier, let me share my manifests and commands below:

  1. docker volume create erpnext-persistence
  2. k3d cluster create strong-erpnext --volume erpnext-persistence:/opt/local-path-provisioner
    log:
WARN[0000] No node filter specified                     
INFO[0000] Prep: Network                                
INFO[0000] Re-using existing network 'k3d-strong-erpnext' (c587f13f6d1439d76f32231210c976e46c2be09f9613bbf110dcbc2f69860a8b) 
INFO[0000] Created image volume k3d-strong-erpnext-images 
INFO[0000] Starting new tools node...                   
INFO[0000] Starting node 'k3d-strong-erpnext-tools'     
INFO[0001] Creating node 'k3d-strong-erpnext-server-0'  
INFO[0001] Creating LoadBalancer 'k3d-strong-erpnext-serverlb' 
INFO[0001] Using the k3d-tools node to gather environment information 
INFO[0001] Starting new tools node...                   
INFO[0002] Starting node 'k3d-strong-erpnext-tools'     
INFO[0003] Starting cluster 'strong-erpnext'            
INFO[0003] Starting servers...                          
INFO[0003] Starting node 'k3d-strong-erpnext-server-0'  
INFO[0007] All agents already running.                  
INFO[0007] Starting helpers...                          
INFO[0007] Starting node 'k3d-strong-erpnext-serverlb'  
INFO[0014] Injecting records for hostAliases (incl. host.k3d.internal) and for 3 network members into CoreDNS configmap... 
INFO[0016] Cluster 'strong-erpnext' created successfully! 
INFO[0016] You can now use it like this:                
kubectl cluster-info
  3. kubectl config use-context k3d-strong-erpnext
  4. kubectl create ns mariadb
  5. kubectl create ns erpnext
  6. kubectl apply --namespace erpnext -f ./pvc.yaml
    pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: erpnext
  name: erpnext-worker
  namespace: erpnext
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
  storageClassName: local-path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: erpnext
  name: erpnext-logs
  namespace: erpnext
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
  storageClassName: local-path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: erpnext
  name: erpnext-pvc
  namespace: erpnext
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
  storageClassName: local-path
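
Right after applying, these claims may show as Pending: local-path provisions volumes lazily (its StorageClass uses WaitForFirstConsumer), so Pending on its own is not an error here:

kubectl get pvc -n erpnext
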
  7. kubectl apply --namespace erpnext -f ./erpnext-db-secret.yaml
    erpnext-db-secret.yaml
apiVersion: v1
data:
  password: ************************
kind: Secret
metadata:
  name: mariadb-root-password
  namespace: erpnext
type: Opaque
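
The value under data: must be base64-encoded. An equivalent way to create the same Secret, assuming the password matches the MariaDB root password in the next step, is:

kubectl create secret generic mariadb-root-password -n erpnext --from-literal=password='someSecurePassword'

(--from-literal takes the plain value and encodes it for you.)
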
  8. helm install mariadb --namespace mariadb bitnami/mariadb --version 11.0.10 -f ./maria-db-values.yaml --wait
    maria-db-values.yaml
auth:
  rootPassword: "someSecurePassword"

primary:
  configuration: |-
    [mysqld]
    character-set-client-handshake=FALSE
    skip-name-resolve
    explicit_defaults_for_timestamp
    basedir=/opt/bitnami/mariadb
    plugin_dir=/opt/bitnami/mariadb/plugin
    port=3306
    socket=/opt/bitnami/mariadb/tmp/mysql.sock
    tmpdir=/opt/bitnami/mariadb/tmp
    max_allowed_packet=16M
    bind-address=0.0.0.0
    pid-file=/opt/bitnami/mariadb/tmp/mysqld.pid
    log-error=/opt/bitnami/mariadb/logs/mysqld.log
    character-set-server=utf8mb4
    collation-server=utf8mb4_unicode_ci

    [client]
    port=3306
    socket=/opt/bitnami/mariadb/tmp/mysql.sock
    default-character-set=utf8mb4
    plugin_dir=/opt/bitnami/mariadb/plugin

    [manager]
    port=3306
    socket=/opt/bitnami/mariadb/tmp/mysql.sock
    pid-file=/opt/bitnami/mariadb/tmp/mysqld.pid
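
As a sanity check that the database is reachable in-cluster, a throwaway client pod works (the image tag and password here are assumptions, matching the values above):

kubectl run mariadb-client --rm -it --restart=Never -n mariadb \
  --image=docker.io/bitnami/mariadb:10.6 -- \
  mysql -h mariadb.mariadb.svc.cluster.local -uroot -p'someSecurePassword' -e 'SELECT 1;'
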
  9. helm install erpnext --namespace erpnext frappe/erpnext --version 6.0.95 -f ./erpnext-values.yaml --wait
    erpnext-values.yaml
replicaCount: 1

mariadbHost: "mariadb.mariadb.svc.cluster.local"

persistence:
  worker:
    enabled: true
    existingClaim: "erpnext-worker"
    storageClass: "local-path"
  logs:
    enabled: true
    existingClaim: "erpnext-logs"
    storageClass: "local-path"
  10. kubectl get pods -n erpnext
NAME                                      READY   STATUS      RESTARTS      AGE
erpnext-conf-bench-20241015171732-nwv8k   0/1     Completed   0             2m27s
erpnext-gunicorn-8457f57678-vr45c         1/1     Running     0             2m27s
erpnext-mariadb-0                         1/1     Running     0             2m27s
erpnext-nginx-59f7f99746-dsdd7            1/1     Running     0             2m27s
erpnext-redis-cache-master-0              1/1     Running     0             2m27s
erpnext-redis-queue-master-0              1/1     Running     0             2m27s
erpnext-redis-socketio-master-0           1/1     Running     0             2m27s
erpnext-scheduler-79cc74cfd7-frpgl        1/1     Running     1 (67s ago)   2m27s
erpnext-socketio-6c88698f9d-bzgg7         1/1     Running     2 (66s ago)   2m27s
erpnext-worker-d-dd8d568fd-vzkzn          1/1     Running     2 (66s ago)   2m27s
erpnext-worker-l-6f658f9856-99x89         1/1     Running     2 (66s ago)   2m27s
erpnext-worker-s-5d6485477b-hq9lw         1/1     Running     2 (66s ago)   2m27s
  11. kubectl apply --namespace erpnext -f ./create-site-job.yaml
    create-site-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: create-erp-site
  namespace: erpnext
spec:
  backoffLimit: 0
  template:
    spec:
      securityContext:
        supplementalGroups: [1000]
      containers:
        - name: create-site
          image: frappe/erpnext-worker:v12.17.0
          args: ["new"]
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: sites-dir
              mountPath: /home/frappe/frappe-bench/sites
          env:
            - name: "SITE_NAME"
              value: "localhost"
            - name: "DB_ROOT_USER"
              value: root
            - name: "MYSQL_ROOT_PASSWORD"
              valueFrom:
                secretKeyRef:
                  key: password
                  name: mariadb-root-password
            - name: "ADMIN_PASSWORD"
              value: "bigchungus"
            - name: "INSTALL_APPS"
              value: "erpnext"
          securityContext:
            runAsUser: 1000     # Use the appropriate user ID
            runAsGroup: 1000    # Use the appropriate group ID
      restartPolicy: Never
      volumes:
        - name: sites-dir
          persistentVolumeClaim:
            claimName: erpnext-pvc
            readOnly: false
  12. kubectl get pods -n erpnext
NAME                                      READY   STATUS              RESTARTS        AGE
create-erp-site-gxt47                     0/1     ContainerCreating   0               4s
erpnext-conf-bench-20241015171732-nwv8k   0/1     Completed           0               10m
erpnext-gunicorn-8457f57678-vr45c         1/1     Running             0               10m
erpnext-mariadb-0                         1/1     Running             0               10m
erpnext-nginx-59f7f99746-dsdd7            1/1     Running             0               10m
erpnext-redis-cache-master-0              1/1     Running             0               10m
erpnext-redis-queue-master-0              1/1     Running             0               10m
erpnext-redis-socketio-master-0           1/1     Running             0               10m
erpnext-scheduler-79cc74cfd7-frpgl        1/1     Running             1 (8m43s ago)   10m
erpnext-socketio-6c88698f9d-bzgg7         1/1     Running             2 (8m42s ago)   10m
erpnext-worker-d-dd8d568fd-vzkzn          1/1     Running             2 (8m42s ago)   10m
erpnext-worker-l-6f658f9856-99x89         1/1     Running             2 (8m42s ago)   10m
erpnext-worker-s-5d6485477b-hq9lw         1/1     Running             2 (8m42s ago)   10m
  13. kubectl logs --namespace erpnext -f job/create-erp-site
    Here I get the log I mentioned previously:
config file not created, retry 1
config file not created, retry 2
config file not created, retry 3
...
config file not created, retry 31
timeout: config file not created

Thanks in advance! Bye!

Check that your storage class supports RWX: helm/erpnext at main · frappe/helm · GitHub

I don’t think local-path supports RWX.
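
A quick way to check is to request an RWX claim from the class and see whether it can ever be provisioned (a sketch; swap in whichever class you are testing):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rwx-test
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: local-path

With WaitForFirstConsumer classes the claim stays Pending until a Pod mounts it; once provisioning is attempted, kubectl describe pvc rwx-test will show an event explaining the failure if the provisioner only supports ReadWriteOnce.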
