ERPNext Local Kubernetes Deployment

Deploying ERPNext on Local Ubuntu with MicroK8s

I’m looking to deploy an ERPNext site on my local Ubuntu machine using MicroK8s. The goal is to create a local environment to test the Kubernetes deployment of ERPNext. I want to access the site at the URL erp.localhost and utilize a MariaDB pod with a local volume instead of an external one.

Since I’m new to Kubernetes, I would appreciate detailed steps and the necessary manifest files to set this up.

I have followed the below link to try it

Thank you!

The guide you linked ends with:
“Now visit http://localhost:8080 in your browser, and you should be prompted with the ERPNext login page.”
Did you reach that point successfully?

Thanks for your reply. Unfortunately, after I install the ERPNext helm chart, some of the pods are in a Pending state. I tried to get their logs, but I can’t get logs for pods that are Pending. Can I get any guide links that give guidance for deploying ERPNext locally using K8s?

Maybe this can help you to better adapt the configuration needed for your erpnext setup:

kubectl describe pods will give you why they’re in pending state.
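For example (the pod name and namespace are placeholders, adjust to your release):

kubectl get pods -n erpnext
kubectl describe pod <pending-pod-name> -n erpnext
# Look at the Events section at the bottom; typical reasons for Pending are
#   "pod has unbound immediate PersistentVolumeClaims"      -> a PVC / storage-class problem
#   "0/1 nodes are available: ... Insufficient cpu/memory"  -> not enough node resources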

Don’t use Kubernetes; you need to familiarise yourself with kubectl more.


Hello everyone, thanks for all your replies. I think if I specify how I tried and what manifests I used, it will be easier for you all to guide me.

  1. I started my microk8s

microk8s start

microk8s status

  2. Configured kubectl to use context microk8s

microk8s config > ~/.kube/config

kubectl config use-context microk8s
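An optional sanity check to confirm kubectl is really talking to the MicroK8s cluster:

kubectl config current-context   # should print: microk8s
kubectl get nodes                # your machine should be listed as Ready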

  3. Cloned the frappe helm repo on my local machine

git clone https://github.com/frappe/helm.git

cd helm

  4. Creating namespace, adding helm repo and installing the chart for nfs-ganesha-server-and-external-provisioner

kubectl create namespace nfs

  

helm repo add nfs-ganesha-server-and-external-provisioner https://kubernetes-sigs.github.io/nfs-ganesha-server-and-external-provisioner

  

helm upgrade --install -n nfs in-cluster nfs-ganesha-server-and-external-provisioner/nfs-server-provisioner --set 'storageClass.mountOptions={vers=4.1}' --set persistence.enabled=true --set persistence.size=9Gi
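Before moving on it is worth checking that the provisioner is running and that a storage class exists; the chart’s default storage class name is nfs (adjust if you overrode it):

kubectl get pods -n nfs          # the nfs-server-provisioner pod should be Running
kubectl get storageclass         # expect a class named "nfs" backed by the provisioner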

  5. Creating namespace, adding helm repo and installing the chart for erpnext

kubectl create namespace erpnext
  
helm repo add frappe https://helm.erpnext.com

helm install frappe-bench --namespace erpnext -f erpnext/values.yaml -f tests/erpnext/values.yaml frappe/erpnext

5.a. I have changed the erpnext/values.yaml and tests/erpnext/values.yaml files

a. erpnext/values.yaml


# Default values for erpnext.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

externalDatabase:
  dbHost: "1.2.3.4"
  dbPort: "3306"
  dbRootUser: "root"
  dbRootPassword: "secret"

image:
  repository: frappe/erpnext
  tag: v15.36.1
  pullPolicy: IfNotPresent

nginx:
  replicaCount: 1
  autoscaling:
    enabled: false
    minReplicas: 1
    maxReplicas: 3
    targetCPU: 75
    targetMemory: 75
  # config: |
  #   # custom conf /etc/nginx/conf.d/default.conf
  environment:
    upstreamRealIPAddress: "127.0.0.1"
    upstreamRealIPRecursive: "off"
    upstreamRealIPHeader: "X-Forwarded-For"
    frappeSiteNameHeader: "$host"
  livenessProbe:
    tcpSocket:
      port: 8080
    initialDelaySeconds: 5
    periodSeconds: 10
  readinessProbe:
    tcpSocket:
      port: 8080
    initialDelaySeconds: 5
    periodSeconds: 10
  service:
    type: ClusterIP
    port: 8080
  resources: {}
  nodeSelector: {}
  tolerations: []
  affinity: {}
  # Custom topologySpreadConstraints (uncomment and modify to override defaults)
  # topologySpreadConstraints:
  #   - maxSkew: 2
  #     topologyKey: failure-domain.beta.kubernetes.io/zone
  #     whenUnsatisfiable: ScheduleAnyway

  # Default topologySpreadConstraints (used if topologySpreadConstraints is not set)
  defaultTopologySpread:
    maxSkew: 1
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: DoNotSchedule
  envVars: []
  initContainers: []
  sidecars: []

worker:
  gunicorn:
    replicaCount: 1
    autoscaling:
      enabled: false
      minReplicas: 1
      maxReplicas: 3
      targetCPU: 75
      targetMemory: 75
    livenessProbe:
      tcpSocket:
        port: 8000
      initialDelaySeconds: 5
      periodSeconds: 10
    readinessProbe:
      tcpSocket:
        port: 8000
      initialDelaySeconds: 5
      periodSeconds: 10
    service:
      type: ClusterIP
      port: 8000
    resources: {}
    nodeSelector: {}
    tolerations: []
    affinity: {}
    args: []
    envVars: []
    initContainers: []
    sidecars: []

  default:
    replicaCount: 1
    autoscaling:
      enabled: false
      minReplicas: 1
      maxReplicas: 3
      targetCPU: 75
      targetMemory: 75
    resources: {}
    nodeSelector: {}
    tolerations: []
    affinity: {}
    livenessProbe:
      override: false
      probe: {}
    readinessProbe:
      override: false
      probe: {}
    envVars: []
    initContainers: []
    sidecars: []

  short:
    replicaCount: 1
    autoscaling:
      enabled: false
      minReplicas: 1
      maxReplicas: 3
      targetCPU: 75
      targetMemory: 75
    resources: {}
    nodeSelector: {}
    tolerations: []
    affinity: {}
    livenessProbe:
      override: false
      probe: {}
    readinessProbe:
      override: false
      probe: {}
    envVars: []
    initContainers: []
    sidecars: []

  long:
    replicaCount: 1
    autoscaling:
      enabled: false
      minReplicas: 1
      maxReplicas: 3
      targetCPU: 75
      targetMemory: 75
    resources: {}
    nodeSelector: {}
    tolerations: []
    affinity: {}
    livenessProbe:
      override: false
      probe: {}
    readinessProbe:
      override: false
      probe: {}
    envVars: []
    initContainers: []
    sidecars: []

  scheduler:
    replicaCount: 1
    resources: {}
    nodeSelector: {}
    tolerations: []
    affinity: {}
    livenessProbe:
      override: false
      probe: {}
    readinessProbe:
      override: false
      probe: {}
    envVars: []
    initContainers: []
    sidecars: []


  # Custom topologySpreadConstraints (uncomment and modify to override defaults)
  # topologySpreadConstraints:
  #   - maxSkew: 2
  #     topologyKey: failure-domain.beta.kubernetes.io/zone
  #     whenUnsatisfiable: ScheduleAnyway

  # Default topologySpreadConstraints (used if topologySpreadConstraints is not set)
  defaultTopologySpread:
    maxSkew: 1
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: DoNotSchedule


  healthProbe: |
    exec:
      command:
        - bash
        - -c
        - echo "Ping backing services";
        {{- if .Values.mariadb.enabled }}
        {{- if eq .Values.mariadb.architecture "replication" }}
        - wait-for-it {{ .Release.Name }}-mariadb-primary:{{ .Values.mariadb.primary.service.ports.mysql }} -t 1;
        {{- else }}
        - wait-for-it {{ .Release.Name }}-mariadb:{{ .Values.mariadb.primary.service.ports.mysql }} -t 1;
        {{- end }}
        {{- else if .Values.dbHost }}
        - wait-for-it {{ .Values.dbHost }}:{{ .Values.mariadb.primary.service.ports.mysql }} -t 1;
        {{- end }}
        {{- if index .Values "redis-cache" "host" }}
        - wait-for-it {{ .Release.Name }}-redis-cache-master:{{ index .Values "redis-cache" "master" "containerPorts" "redis" }} -t 1;
        {{- else if index .Values "redis-cache" "host" }}
        - wait-for-it {{ index .Values "redis-cache" "host" }} -t 1;
        {{- end }}
        {{- if index .Values "redis-queue" "host" }}
        - wait-for-it {{ .Release.Name }}-redis-queue-master:{{ index .Values "redis-queue" "master" "containerPorts" "redis" }} -t 1;
        {{- else if index .Values "redis-queue" "host" }}
        - wait-for-it {{ index .Values "redis-queue" "host" }} -t 1;
        {{- end }}
        {{- if .Values.postgresql.host }}
        - wait-for-it {{ .Values.postgresql.host }}:{{ .Values.postgresql.primary.service.ports.postgresql }} -t 1;
        {{- else if .Values.postgresql.enabled }}
        - wait-for-it {{ .Release.Name }}-postgresql:{{ .Values.postgresql.primary.service.ports.postgresql }} -t 1;
        {{- end }}
    initialDelaySeconds: 15
    periodSeconds: 5

socketio:
  replicaCount: 1
  autoscaling:
    enabled: false
    minReplicas: 1
    maxReplicas: 3
    targetCPU: 75
    targetMemory: 75
  livenessProbe:
    tcpSocket:
      port: 9000
    initialDelaySeconds: 5
    periodSeconds: 10
  readinessProbe:
    tcpSocket:
      port: 9000
    initialDelaySeconds: 5
    periodSeconds: 10
  resources: {}
  nodeSelector: {}
  tolerations: []
  affinity: {}
  service:
    type: ClusterIP
    port: 9000
  envVars: []
  initContainers: []
  sidecars: []

persistence:
  worker:
    storageClass: "nfs"

# Ingress
ingress:
  # ingressName: ""
  # className: ""
  enabled: false
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
    # cert-manager.io/cluster-issuer: letsencrypt-prod
  hosts:
  - host: erp.cluster.local
    paths:
    - path: /
      pathType: ImplementationSpecific
  tls: []
  #  - secretName: auth-server-tls
  #    hosts:
  #      - auth-server.local

jobs:
  volumePermissions:
    enabled: false
    backoffLimit: 0
    resources: {}
    nodeSelector: {}
    tolerations: []
    affinity: {}

  configure:
    enabled: true
    fixVolume: true
    backoffLimit: 0
    resources: {}
    nodeSelector: {}
    tolerations: []
    affinity: {}
    envVars: []
    command: []
    args: []

  createSite:
    enabled: false
    forceCreate: false
    siteName: "erp.cluster.local"
    adminPassword: "changeit"
    installApps:
    - "erpnext"
    dbType: "mariadb"
    backoffLimit: 0
    resources: {}
    nodeSelector: {}
    tolerations: []
    affinity: {}

  dropSite:
    enabled: false
    forced: false
    siteName: "erp.cluster.local"
    backoffLimit: 0
    resources: {}
    nodeSelector: {}
    tolerations: []
    affinity: {}

  backup:
    enabled: false
    siteName: "erp.cluster.local"
    withFiles: true
    backoffLimit: 0
    resources: {}
    nodeSelector: {}
    tolerations: []
    affinity: {}

  migrate:
    enabled: false
    siteName: "erp.cluster.local"
    skipFailing: false
    backoffLimit: 0
    resources: {}
    nodeSelector: {}
    tolerations: []
    affinity: {}

  custom:
    enabled: false
    jobName: ""
    labels: {}
    backoffLimit: 0
    initContainers: []
    containers: []
    restartPolicy: Never
    volumes: []
    nodeSelector: {}
    affinity: {}
    tolerations: []

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

serviceAccount:
  # Specifies whether a service account should be created
  create: true

podSecurityContext:
  supplementalGroups: [1000]

securityContext:
  capabilities:
    add:
    - CAP_CHOWN
  # readOnlyRootFilesystem: true
  # runAsNonRoot: true
  # runAsUser: 1000

externalRedis:
  cache:
    enabled: true
    host: "redis://1.1.1.1:6379"
  queue:
    enabled: true
    host: "redis://2.2.2.2:6379"

mariadb:
  # https://github.com/bitnami/charts/tree/master/bitnami/mariadb
  enabled: true
  auth:
    rootPassword: "secret"
    username: "admin"
    password: "secret"
    replicationPassword: "secret"
  primary:
    service:
      ports:
        mysql: 3306
    extraFlags: >-
      --skip-character-set-client-handshake
      --skip-innodb-read-only-compressed
      --character-set-server=utf8mb4
      --collation-server=utf8mb4_unicode_ci

postgresql:
  # https://github.com/bitnami/charts/tree/master/bitnami/postgresql
  enabled: false
  # host: ""
  auth:
    username: "postgres"
    postgresPassword: "changeit"
  primary:
    service:
      ports:
        postgresql: 5432

b. tests/erpnext/values.yaml


mariadb:
  enabled: false

dbHost: mariadb.mariadb.svc.cluster.local
dbRootUser: "admin"
dbRootPassword: "secret"
# For backward compatibility only, use dbHost
mariadbHost: mariadb.mariadb.svc.cluster.local

persistence:
  worker:
    storageClass: "nfs"

jobs:
  configure:
    fixVolume: false
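Side note: the tests values above point dbHost at mariadb.mariadb.svc.cluster.local, i.e. a Service named mariadb in a mariadb namespace. A quick, hedged way to check whether that hostname actually resolves in-cluster (the busybox image is just for illustration):

kubectl run dns-test -n erpnext --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup mariadb.mariadb.svc.cluster.local
# if this fails, nothing is listening behind that hostname yet and the site jobs cannot reach a database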


  6. These are my pod results

  7. Create erpnext/pvc.yaml


apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: erpnext
  name: erpnext-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
  storageClassName: local-path
  8. Apply erpnext/pvc.yaml

kubectl apply --namespace erpnext -f erpnext/pvc.yaml
  
kubectl get pvc -n erpnext

  9. Create erpnext/createsite.yaml file and apply it

apiVersion: batch/v1
kind: Job
metadata:
  name: create-erp-site
spec:
  backoffLimit: 0
  template:
    spec:
      securityContext:
        supplementalGroups: [1000]
      containers:
      - name: create-site
        image: frappe/erpnext-worker:v14.12.1
        command: ["/bin/sh", "-c"]  # Use shell to combine multiple commands
        args:
          - |
            if [ ! -f /home/frappe/frappe-bench/sites/apps.txt ]; then
              echo "erpnext" > /home/frappe/frappe-bench/sites/apps.txt;
            fi;
            bench new-site localhost --no-mariadb-socket --install-app=${INSTALL_APPS} \
            --db-root-password=${MYSQL_ROOT_PASSWORD} --admin-password=${ADMIN_PASSWORD};
        imagePullPolicy: IfNotPresent
        volumeMounts:
          - name: sites-dir
            mountPath: /home/frappe/frappe-bench/sites
        env:
          - name: "DB_ROOT_USER"
            value: "admin"
          - name: "MYSQL_ROOT_PASSWORD"
            value: "secret"
          - name: "ADMIN_PASSWORD"
            value: "secret"
          - name: "INSTALL_APPS"
            value: "erpnext"
      restartPolicy: Never
      volumes:
        - name: sites-dir
          persistentVolumeClaim:
            claimName: erpnext-pvc
            readOnly: false


a. Apply it


kubectl apply -n erpnext -f erpnext/createsite.yaml
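To follow the job instead of waiting blindly, something like this helps (names match the manifest above):

kubectl get jobs -n erpnext
kubectl get pods -n erpnext -l job-name=create-erp-site   # Jobs label their pods with job-name
kubectl logs -n erpnext job/create-erp-site -f            # stream the create-site output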

  10. The output is:

Now I am stuck here. I have searched, but there is no clear guide that says “follow these steps and tada! your ERPNext site is deployed in K8s and available on localhost.” I think a clear step-by-step guide would be really helpful for novices.
Thanks in advance! bye!

Hello Everyone,
As I have been distracted by other work recently, it took me this long to dive into this topic again. After some digging I came across the GitHub repo of VeryStrongFingers for deploying ERPNext locally using k3s: https://github.com/VeryStrongFingers/erpnext-k3s/tree/VeryStrongFingers-patch-1.
Some steps have been missed, but somehow I picked up the flow; unfortunately I am stuck at the create-site job. It would be really helpful if anyone here could give me some guidance. I am really hitting a wall after a few steps.

To make it easier, let me share my manifests and commands below:

  1. docker volume create erpnext-persistence
  2. k3d cluster create strong-erpnext --volume erpnext-persistence:/opt/local-path-provisioner
    log:
WARN[0000] No node filter specified                     
INFO[0000] Prep: Network                                
INFO[0000] Re-using existing network 'k3d-strong-erpnext' (c587f13f6d1439d76f32231210c976e46c2be09f9613bbf110dcbc2f69860a8b) 
INFO[0000] Created image volume k3d-strong-erpnext-images 
INFO[0000] Starting new tools node...                   
INFO[0000] Starting node 'k3d-strong-erpnext-tools'     
INFO[0001] Creating node 'k3d-strong-erpnext-server-0'  
INFO[0001] Creating LoadBalancer 'k3d-strong-erpnext-serverlb' 
INFO[0001] Using the k3d-tools node to gather environment information 
INFO[0001] Starting new tools node...                   
INFO[0002] Starting node 'k3d-strong-erpnext-tools'     
INFO[0003] Starting cluster 'strong-erpnext'            
INFO[0003] Starting servers...                          
INFO[0003] Starting node 'k3d-strong-erpnext-server-0'  
INFO[0007] All agents already running.                  
INFO[0007] Starting helpers...                          
INFO[0007] Starting node 'k3d-strong-erpnext-serverlb'  
INFO[0014] Injecting records for hostAliases (incl. host.k3d.internal) and for 3 network members into CoreDNS configmap... 
INFO[0016] Cluster 'strong-erpnext' created successfully! 
INFO[0016] You can now use it like this:                
kubectl cluster-info
  3. kubectl config use-context k3d-strong-erpnext
  4. kubectl create ns mariadb
  5. kubectl create ns erpnext
  6. kubectl apply --namespace erpnext -f ./pvc.yaml
    pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: erpnext
  name: erpnext-worker
  namespace: erpnext
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
  storageClassName: local-path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: erpnext
  name: erpnext-logs
  namespace: erpnext
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
  storageClassName: local-path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: erpnext
  name: erpnext-pvc
  namespace: erpnext
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
  storageClassName: local-path
  7. kubectl apply --namespace erpnext -f ./erpnext-db-secret.yaml
    erpnext-db-secret.yaml
apiVersion: v1
data:
  password: ************************
kind: Secret
metadata:
  name: mariadb-root-password
  namespace: erpnext
type: Opaque
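For reference, a secret like this can also be created directly with kubectl; the value must match the rootPassword used for the MariaDB chart in the next step (the literal below is only a placeholder):

kubectl create secret generic mariadb-root-password -n erpnext \
  --from-literal=password='<your MariaDB root password>'
# to inspect what an existing secret decodes to:
kubectl get secret mariadb-root-password -n erpnext -o jsonpath='{.data.password}' | base64 -d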
  8. helm install mariadb --namespace mariadb bitnami/mariadb --version 11.0.10 -f ./maria-db-values.yaml --wait
    maria-db-values.yaml
auth:
  rootPassword: "someSecurePassword"

primary:
  configuration: |-
    [mysqld]
    character-set-client-handshake=FALSE
    skip-name-resolve
    explicit_defaults_for_timestamp
    basedir=/opt/bitnami/mariadb
    plugin_dir=/opt/bitnami/mariadb/plugin
    port=3306
    socket=/opt/bitnami/mariadb/tmp/mysql.sock
    tmpdir=/opt/bitnami/mariadb/tmp
    max_allowed_packet=16M
    bind-address=0.0.0.0
    pid-file=/opt/bitnami/mariadb/tmp/mysqld.pid
    log-error=/opt/bitnami/mariadb/logs/mysqld.log
    character-set-server=utf8mb4
    collation-server=utf8mb4_unicode_ci

    [client]
    port=3306
    socket=/opt/bitnami/mariadb/tmp/mysql.sock
    default-character-set=utf8mb4
    plugin_dir=/opt/bitnami/mariadb/plugin

    [manager]
    port=3306
    socket=/opt/bitnami/mariadb/tmp/mysql.sock
    pid-file=/opt/bitnami/mariadb/tmp/mysqld.pid
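Once the MariaDB release is up, a couple of quick checks help; the exec line is a sketch that assumes the release produces a pod named mariadb-0 and that the Bitnami image ships a mysql client and exposes MARIADB_ROOT_PASSWORD in the container, which is the usual Bitnami convention:

kubectl get pods -n mariadb     # mariadb-0 should be Running and Ready
kubectl get svc -n mariadb      # the ClusterIP service behind mariadb.mariadb.svc.cluster.local
kubectl exec -n mariadb mariadb-0 -- \
  bash -c 'mysql -uroot -p"$MARIADB_ROOT_PASSWORD" -e "SELECT 1"'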
  9. helm install erpnext --namespace erpnext frappe/erpnext --version 6.0.95 -f ./erpnext-values.yaml --wait
    erpnext-values.yaml
replicaCount: 1

mariadbHost: "mariadb.mariadb.svc.cluster.local"

persistence:
  worker:
    enabled: true
    existingClaim: "erpnext-worker"
    storageClass: "local-path"
  logs:
    enabled: true
    existingClaim: "erpnext-logs"
    storageClass: "local-path"
  10. kubectl get pods -n erpnext
NAME                                      READY   STATUS      RESTARTS      AGE
erpnext-conf-bench-20241015171732-nwv8k   0/1     Completed   0             2m27s
erpnext-gunicorn-8457f57678-vr45c         1/1     Running     0             2m27s
erpnext-mariadb-0                         1/1     Running     0             2m27s
erpnext-nginx-59f7f99746-dsdd7            1/1     Running     0             2m27s
erpnext-redis-cache-master-0              1/1     Running     0             2m27s
erpnext-redis-queue-master-0              1/1     Running     0             2m27s
erpnext-redis-socketio-master-0           1/1     Running     0             2m27s
erpnext-scheduler-79cc74cfd7-frpgl        1/1     Running     1 (67s ago)   2m27s
erpnext-socketio-6c88698f9d-bzgg7         1/1     Running     2 (66s ago)   2m27s
erpnext-worker-d-dd8d568fd-vzkzn          1/1     Running     2 (66s ago)   2m27s
erpnext-worker-l-6f658f9856-99x89         1/1     Running     2 (66s ago)   2m27s
erpnext-worker-s-5d6485477b-hq9lw         1/1     Running     2 (66s ago)   2m27s
  11. kubectl apply --namespace erpnext -f ./create-site-job.yaml
    create-site-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: create-erp-site
  namespace: erpnext
spec:
  backoffLimit: 0
  template:
    spec:
      securityContext:
        supplementalGroups: [1000]
      containers:
        - name: create-site
          image: frappe/erpnext-worker:v12.17.0
          args: ["new"]
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: sites-dir
              mountPath: /home/frappe/frappe-bench/sites
          env:
            - name: "SITE_NAME"
              value: "localhost"
            - name: "DB_ROOT_USER"
              value: root
            - name: "MYSQL_ROOT_PASSWORD"
              valueFrom:
                secretKeyRef:
                  key: password
                  name: mariadb-root-password
            - name: "ADMIN_PASSWORD"
              value: "bigchungus"
            - name: "INSTALL_APPS"
              value: "erpnext"
          securityContext:
            runAsUser: 1000     # Use the appropriate user ID
            runAsGroup: 1000    # Use the appropriate group ID
      restartPolicy: Never
      volumes:
        - name: sites-dir
          persistentVolumeClaim:
            claimName: erpnext-pvc
            readOnly: false
  12. kubectl get pods -n erpnext
NAME                                      READY   STATUS              RESTARTS        AGE
create-erp-site-gxt47                     0/1     ContainerCreating   0               4s
erpnext-conf-bench-20241015171732-nwv8k   0/1     Completed           0               10m
erpnext-gunicorn-8457f57678-vr45c         1/1     Running             0               10m
erpnext-mariadb-0                         1/1     Running             0               10m
erpnext-nginx-59f7f99746-dsdd7            1/1     Running             0               10m
erpnext-redis-cache-master-0              1/1     Running             0               10m
erpnext-redis-queue-master-0              1/1     Running             0               10m
erpnext-redis-socketio-master-0           1/1     Running             0               10m
erpnext-scheduler-79cc74cfd7-frpgl        1/1     Running             1 (8m43s ago)   10m
erpnext-socketio-6c88698f9d-bzgg7         1/1     Running             2 (8m42s ago)   10m
erpnext-worker-d-dd8d568fd-vzkzn          1/1     Running             2 (8m42s ago)   10m
erpnext-worker-l-6f658f9856-99x89         1/1     Running             2 (8m42s ago)   10m
erpnext-worker-s-5d6485477b-hq9lw         1/1     Running             2 (8m42s ago)   10m
  13. kubectl logs --namespace erpnext -f job/create-erp-site
    Here I am getting the following log, as I mentioned previously.
config file not created, retry 1
config file not created, retry 2
config file not created, retry 3
config file not created, retry 4
config file not created, retry 5
config file not created, retry 6
config file not created, retry 7
config file not created, retry 8
config file not created, retry 9
config file not created, retry 10
config file not created, retry 11
config file not created, retry 12
config file not created, retry 13
config file not created, retry 14
config file not created, retry 15
config file not created, retry 16
config file not created, retry 17
config file not created, retry 18
config file not created, retry 19
config file not created, retry 20
config file not created, retry 21
config file not created, retry 22
config file not created, retry 23
config file not created, retry 24
config file not created, retry 25
config file not created, retry 26
config file not created, retry 27
config file not created, retry 28
config file not created, retry 29
config file not created, retry 30
config file not created, retry 31
timeout: config file not created

Thanks in advance! bye!

Check that your storage class supports RWX: helm/erpnext at main · frappe/helm · GitHub

I don’t think local-path supports RWX.
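A quick way to see what is actually being provisioned (placeholders in angle brackets):

kubectl get sc                                 # lists storage classes and their provisioners
kubectl get pvc -n erpnext                     # the ACCESS MODES column shows RWO vs RWX actually granted
kubectl describe pvc <claim-name> -n erpnext   # Events show provisioning/binding errors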


I tried the microk8s setup and got it to work (all indicators are now green in the dashboard which is part of microk8s).

Problems encountered:

  • A helm command is part of microk8s, but I didn’t know that at the beginning and tried with a helm from the system’s package manager. This led to issues with accessing and authenticating with the cluster. I could do things with the helm program and with charts, but it was not reflected in the cluster.
  • Maybe it can still be done by passing options with the locations of configuration files, certs, CA, etc., but using the helm integrated into microk8s got me further than these complicated options (although they are useful for teaching and analysing how everything works together). I’m also not sure whether they are compatible regarding the versions of the moving Kubernetes parts.
  • The storage class had to be set just as @revant_one and the docs say.
  • I had it wrong at first because of the independent helm, but after redoing the commands with the microk8s-integrated helm, and after a microk8s stop and then start, it started to run as it should.
  • This took some minutes, but no intervention was needed.
  • So now the cluster is up and the dashboard’s indicators are all green.

The basic lesson so far is not to mix too many different tools when starting with such things. (We had this before. Sticking to one tool suite seems to help, at least at first, to get into it.)
The docs are very clean and correct.

  • So now: All this enabled me to inspect the running containers and the logs – only to find out that there is no site configured. (And why would it be? I didn’t set any variables, any configs, didn’t supply any site name, or anything else.) There seems to be a way to force a site creation, but that’s for later. Anyway, if I had a site on a persistent volume, I’d rather like the mechanics not to interfere with it unnecessarily. Thus again, well done.

So I’m happy I got it to work so far, and got some practice, including reading the docs about the chart and the relevant technologies.
A Fun System!

Thanks a lot for having built it, and for offering it to anybody to use it and learn from it!

And there’s much more learning ahead, ahoi!

The promise is to get to know a machinery which offers lots of flexibility and fast turnaround with version upgrades, bespoke disposition of access means, clean separation of instances, versioned software-defined network setups. Said another way, good tools for tinkering, monitoring, crafting machinery.


Hello Everyone,

I want to express my gratitude for all the support I’ve received. I’m thrilled to share that I successfully set up my ERPNext application using MicroK8s. Special thanks to @Peer and @revant_one for their invaluable assistance. Through this process, I’ve gained a lot of knowledge about Kubernetes resources. I’ve documented the steps I followed to deploy my ERPNext application on a local Ubuntu server using MicroK8s, and I hope this can be helpful to anyone who might need it.
Steps:

microk8s start
microk8s status
microk8s enable dns hostpath-storage ingress
~/kube-poc$ microk8s kubectl get pods -A
NAMESPACE     NAME                                         READY   STATUS    RESTARTS      AGE
ingress       nginx-ingress-microk8s-controller-j72zk      1/1     Running   1 (99s ago)   3h55m
kube-system   calico-kube-controllers-796fb75cc-b7s6k      1/1     Running   9 (99s ago)   3d1h
kube-system   calico-node-wr7tq                            1/1     Running   9 (99s ago)   3d1h
kube-system   coredns-5986966c54-z2chp                     1/1     Running   9 (99s ago)   3d1h
kube-system   dashboard-metrics-scraper-795895d745-n64mz   1/1     Running   9 (99s ago)   3d1h
kube-system   hostpath-provisioner-7c8bdf94b8-5zcbt        1/1     Running   1 (99s ago)   4h1m
kube-system   kubernetes-dashboard-6796797fb5-7ml5q        1/1     Running   9 (99s ago)   3d1h
kube-system   metrics-server-7cff7889bd-4f5bh              1/1     Running   9 (99s ago)   3d1h
~/kube-poc$ microk8s kubectl get sc
NAME                          PROVISIONER            RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
microk8s-hostpath (default)   microk8s.io/hostpath   Delete          WaitForFirstConsumer   false                  4h2m

Create a folder at the path /mnt/data/hostpath on your local filesystem.
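Something like this, for a throwaway local test (the wide-open permissions are only to avoid UID mismatches with the containers; don’t do this on a shared machine):

sudo mkdir -p /mnt/data/hostpath
sudo chmod 777 /mnt/data/hostpath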
kube-pvc.yaml

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: hostpath-pv
spec:
  storageClassName: microk8s-hostpath
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: /mnt/data/hostpath
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hostpath-pvc
spec:
  storageClassName: microk8s-hostpath
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 8Gi
  volumeName: hostpath-pv
~/kube-poc$ microk8s kubectl apply -f kube-pvc.yaml
~/kube-poc$ microk8s kubectl get pvc -A
NAMESPACE   NAME                          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        VOLUMEATTRIBUTESCLASS   AGE
default     hostpath-pvc                  Bound    hostpath-pv                                8Gi        RWX            microk8s-hostpath   <unset>                 12h
~/kube-poc$ microk8s kubectl get pv -A
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                 STORAGECLASS        VOLUMEATTRIBUTESCLASS   REASON   AGE
hostpath-pv                                8Gi        RWX            Retain           Bound    default/hostpath-pvc                  microk8s-hostpath   <unset>                          12h
~/kube-poc$ microk8s kubectl create namespace erpnext
namespace/erpnext created
microk8s helm repo add frappe https://helm.erpnext.com
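Optionally refresh the repo and check which chart versions are available before installing:

microk8s helm repo update
microk8s helm search repo frappe/erpnext --versions | head -n 5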

custom-values.yaml (copied from https://github.com/frappe/helm/blob/main/erpnext/values.yaml)

# Default values for erpnext.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

# Configure external database host
# dbHost: ""
# dbPort: 3306
# dbRootUser: ""
# dbRootPassword: ""
# dbRds: false

image:
  repository: frappe/erpnext
  tag: v15.38.4
  pullPolicy: IfNotPresent

nginx:
  replicaCount: 1
  autoscaling:
    enabled: false
    minReplicas: 1
    maxReplicas: 3
    targetCPU: 75
    targetMemory: 75
  # config: |
  #   # custom conf /etc/nginx/conf.d/default.conf
  environment:
    upstreamRealIPAddress: "127.0.0.1"
    upstreamRealIPRecursive: "off"
    upstreamRealIPHeader: "X-Forwarded-For"
    frappeSiteNameHeader: "$host"
  livenessProbe:
    tcpSocket:
      port: 8080
    initialDelaySeconds: 5
    periodSeconds: 10
  readinessProbe:
    tcpSocket:
      port: 8080
    initialDelaySeconds: 5
    periodSeconds: 10
  service:
    type: ClusterIP
    port: 8080
  resources: {}
  nodeSelector: {}
  tolerations: []
  affinity: {}
  # Custom topologySpreadConstraints (uncomment and modify to override defaults)
  # topologySpreadConstraints:
  #   - maxSkew: 2
  #     topologyKey: failure-domain.beta.kubernetes.io/zone
  #     whenUnsatisfiable: ScheduleAnyway

  # Default topologySpreadConstraints (used if topologySpreadConstraints is not set)
  defaultTopologySpread:
    maxSkew: 1
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: DoNotSchedule
  envVars: []
  initContainers: []
  sidecars: []

worker:
  gunicorn:
    replicaCount: 1
    autoscaling:
      enabled: false
      minReplicas: 1
      maxReplicas: 3
      targetCPU: 75
      targetMemory: 75
    livenessProbe:
      tcpSocket:
        port: 8000
      initialDelaySeconds: 5
      periodSeconds: 10
    readinessProbe:
      tcpSocket:
        port: 8000
      initialDelaySeconds: 5
      periodSeconds: 10
    service:
      type: ClusterIP
      port: 8000
    resources: {}
    nodeSelector: {}
    tolerations: []
    affinity: {}
    args: []
    envVars: []
    initContainers: []
    sidecars: []

  default:
    replicaCount: 1
    autoscaling:
      enabled: false
      minReplicas: 1
      maxReplicas: 3
      targetCPU: 75
      targetMemory: 75
    resources: {}
    nodeSelector: {}
    tolerations: []
    affinity: {}
    livenessProbe:
      override: false
      probe: {}
    readinessProbe:
      override: false
      probe: {}
    envVars: []
    initContainers: []
    sidecars: []

  short:
    replicaCount: 1
    autoscaling:
      enabled: false
      minReplicas: 1
      maxReplicas: 3
      targetCPU: 75
      targetMemory: 75
    resources: {}
    nodeSelector: {}
    tolerations: []
    affinity: {}
    livenessProbe:
      override: false
      probe: {}
    readinessProbe:
      override: false
      probe: {}
    envVars: []
    initContainers: []
    sidecars: []

  long:
    replicaCount: 1
    autoscaling:
      enabled: false
      minReplicas: 1
      maxReplicas: 3
      targetCPU: 75
      targetMemory: 75
    resources: {}
    nodeSelector: {}
    tolerations: []
    affinity: {}
    livenessProbe:
      override: false
      probe: {}
    readinessProbe:
      override: false
      probe: {}
    envVars: []
    initContainers: []
    sidecars: []

  scheduler:
    replicaCount: 1
    resources: {}
    nodeSelector: {}
    tolerations: []
    affinity: {}
    livenessProbe:
      override: false
      probe: {}
    readinessProbe:
      override: false
      probe: {}
    envVars: []
    initContainers: []
    sidecars: []


  # Custom topologySpreadConstraints (uncomment and modify to override defaults)
  # topologySpreadConstraints:
  #   - maxSkew: 2
  #     topologyKey: failure-domain.beta.kubernetes.io/zone
  #     whenUnsatisfiable: ScheduleAnyway

  # Default topologySpreadConstraints (used if topologySpreadConstraints is not set)
  defaultTopologySpread:
    maxSkew: 1
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: DoNotSchedule


  healthProbe: |
    exec:
      command:
        - bash
        - -c
        - echo "Ping backing services";
        {{- if .Values.mariadb.enabled }}
        {{- if eq .Values.mariadb.architecture "replication" }}
        - wait-for-it {{ .Release.Name }}-mariadb-primary:{{ .Values.mariadb.primary.service.ports.mysql }} -t 1;
        {{- else }}
        - wait-for-it {{ .Release.Name }}-mariadb:{{ .Values.mariadb.primary.service.ports.mysql }} -t 1;
        {{- end }}
        {{- else if .Values.dbHost }}
        - wait-for-it {{ .Values.dbHost }}:{{ .Values.mariadb.primary.service.ports.mysql }} -t 1;
        {{- end }}
        {{- if index .Values "redis-cache" "host" }}
        - wait-for-it {{ .Release.Name }}-redis-cache-master:{{ index .Values "redis-cache" "master" "containerPorts" "redis" }} -t 1;
        {{- else if index .Values "redis-cache" "host" }}
        - wait-for-it {{ index .Values "redis-cache" "host" }} -t 1;
        {{- end }}
        {{- if index .Values "redis-queue" "host" }}
        - wait-for-it {{ .Release.Name }}-redis-queue-master:{{ index .Values "redis-queue" "master" "containerPorts" "redis" }} -t 1;
        {{- else if index .Values "redis-queue" "host" }}
        - wait-for-it {{ index .Values "redis-queue" "host" }} -t 1;
        {{- end }}
        {{- if .Values.postgresql.host }}
        - wait-for-it {{ .Values.postgresql.host }}:{{ .Values.postgresql.primary.service.ports.postgresql }} -t 1;
        {{- else if .Values.postgresql.enabled }}
        - wait-for-it {{ .Release.Name }}-postgresql:{{ .Values.postgresql.primary.service.ports.postgresql }} -t 1;
        {{- end }}
    initialDelaySeconds: 15
    periodSeconds: 5

socketio:
  replicaCount: 1
  autoscaling:
    enabled: false
    minReplicas: 1
    maxReplicas: 3
    targetCPU: 75
    targetMemory: 75
  livenessProbe:
    tcpSocket:
      port: 9000
    initialDelaySeconds: 5
    periodSeconds: 10
  readinessProbe:
    tcpSocket:
      port: 9000
    initialDelaySeconds: 5
    periodSeconds: 10
  resources: {}
  nodeSelector: {}
  tolerations: []
  affinity: {}
  service:
    type: ClusterIP
    port: 9000
  envVars: []
  initContainers: []
  sidecars: []

persistence:
  worker:
    enabled: true
    # existingClaim: ""
    size: 6Gi
    storageClass: "microk8s-hostpath"
  logs:
    # Container based log search and analytics stack recommended
    enabled: false
    # existingClaim: ""
    size: 8Gi
    # storageClass: "nfs"

# Ingress
ingress:
  # ingressName: ""
  # className: ""
  enabled: false
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
    # cert-manager.io/cluster-issuer: letsencrypt-prod
  hosts:
  - host: erp.cluster.local
    paths:
    - path: /
      pathType: ImplementationSpecific
  tls: []
  #  - secretName: auth-server-tls
  #    hosts:
  #      - auth-server.local

jobs:
  volumePermissions:
    enabled: false
    backoffLimit: 0
    resources: {}
    nodeSelector: {}
    tolerations: []
    affinity: {}

  configure:
    enabled: true
    fixVolume: true
    backoffLimit: 0
    resources: {}
    nodeSelector: {}
    tolerations: []
    affinity: {}
    envVars: []
    command: []
    args: []

  createSite:
    enabled: false
    forceCreate: false
    siteName: "erp.cluster.local"
    adminPassword: "changeit"
    installApps:
    - "erpnext"
    dbType: "mariadb"
    backoffLimit: 0
    resources: {}
    nodeSelector: {}
    tolerations: []
    affinity: {}

  dropSite:
    enabled: false
    forced: false
    siteName: "erp.cluster.local"
    backoffLimit: 0
    resources: {}
    nodeSelector: {}
    tolerations: []
    affinity: {}

  backup:
    enabled: false
    siteName: "erp.cluster.local"
    withFiles: true
    backoffLimit: 0
    resources: {}
    nodeSelector: {}
    tolerations: []
    affinity: {}

  migrate:
    enabled: false
    siteName: "erp.cluster.local"
    skipFailing: false
    backoffLimit: 0
    resources: {}
    nodeSelector: {}
    tolerations: []
    affinity: {}

  custom:
    enabled: false
    jobName: ""
    labels: {}
    backoffLimit: 0
    initContainers: []
    containers: []
    restartPolicy: Never
    volumes: []
    nodeSelector: {}
    affinity: {}
    tolerations: []

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

serviceAccount:
  # Specifies whether a service account should be created
  create: true

podSecurityContext:
  supplementalGroups: [1000]

securityContext:
  capabilities:
    add:
    - CAP_CHOWN
  # readOnlyRootFilesystem: true
  # runAsNonRoot: true
  # runAsUser: 1000

redis-cache:
  # https://github.com/bitnami/charts/tree/master/bitnami/redis
  enabled: true
  # host: ""
  architecture: standalone
  auth:
    enabled: false
    sentinal: false
  master:
    containerPorts:
      redis: 6379
    persistence:
      enabled: false

redis-queue:
  # https://github.com/bitnami/charts/tree/master/bitnami/redis
  enabled: true
  # host: ""
  architecture: standalone
  auth:
    enabled: false
    sentinal: false
  master:
    containerPorts:
      redis: 6379
    persistence:
      enabled: false

mariadb:
  # https://github.com/bitnami/charts/tree/master/bitnami/mariadb
  enabled: true
  auth:
    rootPassword: "changeit"
    username: "erpnext"
    password: "changeit"
    replicationPassword: "changeit"
  primary:
    service:
      ports:
        mysql: 3306
    extraFlags: >-
      --skip-character-set-client-handshake
      --skip-innodb-read-only-compressed
      --character-set-server=utf8mb4
      --collation-server=utf8mb4_unicode_ci

postgresql:
  # https://github.com/bitnami/charts/tree/master/bitnami/postgresql
  enabled: false
  # host: ""
  auth:
    username: "postgres"
    postgresPassword: "changeit"
  primary:
    service:
      ports:
        postgresql: 5432
~/kube-poc$ microk8s helm install frappe-bench -n erpnext -f custom-values.yaml frappe/erpnext
NAME: frappe-bench
LAST DEPLOYED: Tue Oct 22 01:45:46 2024
NAMESPACE: erpnext
STATUS: deployed
REVISION: 1
NOTES:
Frappe/ERPNext Release deployed

Release Name: frappe-bench-erpnext

Wait for the pods to start.
To create sites and other resources, refer:
https://github.com/frappe/helm/blob/main/erpnext/README.md

Frequently Asked Questions:
https://helm.erpnext.com/faq
~/kube-poc$ microk8s kubectl get pods -A
NAMESPACE     NAME                                                   READY   STATUS      RESTARTS      AGE
erpnext       frappe-bench-erpnext-conf-bench-20241022014932-zsb2b   0/1     Completed   0             40s
erpnext       frappe-bench-erpnext-gunicorn-7fdcf996ff-9mtnh         1/1     Running     0             40s
erpnext       frappe-bench-erpnext-nginx-5d6ff6bfd-m8h4k             1/1     Running     0             40s
erpnext       frappe-bench-erpnext-scheduler-7f5b6f7954-5qmp5        1/1     Running     1 (32s ago)   40s
erpnext       frappe-bench-erpnext-socketio-579b548d7c-252w9         1/1     Running     2 (31s ago)   40s
erpnext       frappe-bench-erpnext-worker-d-5fb4d5b4b7-h9l6d         0/1     Running     2 (27s ago)   40s
erpnext       frappe-bench-erpnext-worker-l-5754c45c46-xs6k9         0/1     Running     2 (27s ago)   40s
erpnext       frappe-bench-erpnext-worker-s-8d9687848-8qmdq          0/1     Running     2 (27s ago)   40s
erpnext       frappe-bench-mariadb-0                                 1/1     Running     0             40s
erpnext       frappe-bench-redis-cache-master-0                      1/1     Running     0             40s
erpnext       frappe-bench-redis-queue-master-0                      1/1     Running     0             40s
ingress       nginx-ingress-microk8s-controller-j72zk                1/1     Running     1 (12m ago)   4h5m
kube-system   calico-kube-controllers-796fb75cc-b7s6k                1/1     Running     9 (12m ago)   3d2h
kube-system   calico-node-wr7tq                                      1/1     Running     9 (12m ago)   3d2h
kube-system   coredns-5986966c54-z2chp                               1/1     Running     9 (12m ago)   3d2h
kube-system   dashboard-metrics-scraper-795895d745-n64mz             1/1     Running     9 (12m ago)   3d1h
kube-system   hostpath-provisioner-7c8bdf94b8-5zcbt                  1/1     Running     1 (12m ago)   4h12m
kube-system   kubernetes-dashboard-6796797fb5-7ml5q                  1/1     Running     9 (12m ago)   3d1h
kube-system   metrics-server-7cff7889bd-4f5bh                        1/1     Running     9 (12m ago)   3d1h
~/kube-poc$ microk8s kubectl get pvc -A
NAMESPACE   NAME                          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        VOLUMEATTRIBUTESCLASS   AGE
default     hostpath-pvc                  Bound    hostpath-pv                                8Gi        RWX            microk8s-hostpath   <unset>                 3h55m
erpnext     data-frappe-bench-mariadb-0   Bound    pvc-f5d3d2df-1f30-4108-a2fb-a3d480c317f4   8Gi        RWO            microk8s-hostpath   <unset>                 7m50s
erpnext     frappe-bench-erpnext          Bound    pvc-5437ff12-1d6d-4db7-9b37-65162f1652c1   6Gi        RWX            microk8s-hostpath   <unset>                 4m4s

create-new-site-job.yaml

---
# Source: erpnext/templates/job-create-site.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: frappe-bench-erpnext-new-site-20241022020025
  labels:
    helm.sh/chart: erpnext-7.0.122
    app.kubernetes.io/name: erpnext
    app.kubernetes.io/instance: frappe-bench
    app.kubernetes.io/version: "v15.38.2"
    app.kubernetes.io/managed-by: Helm
  annotations:
spec:
  backoffLimit: 0
  template:
    spec:
      serviceAccountName: frappe-bench-erpnext
      securityContext:
        supplementalGroups:
        - 1000
      initContainers:
        - name: validate-config
          image: "frappe/erpnext:v15.38.4"
          imagePullPolicy: IfNotPresent
          command: ["bash", "-c"]
          args:
            - >
              export start=`date +%s`;
              until [[ -n `grep -hs ^ sites/common_site_config.json | jq -r ".db_host // empty"` ]] && \
                [[ -n `grep -hs ^ sites/common_site_config.json | jq -r ".redis_cache // empty"` ]] && \
                [[ -n `grep -hs ^ sites/common_site_config.json | jq -r ".redis_queue // empty"` ]];
              do
                echo "Waiting for sites/common_site_config.json to be created";
                sleep 5;
                if (( `date +%s`-start > 600 )); then
                  echo "could not find sites/common_site_config.json with required keys";
                  exit 1
                fi
              done;
              echo "sites/common_site_config.json found";

              echo "Waiting for database to be reachable...";
              wait-for-it -t 180 $(DB_HOST):$(DB_PORT);
              echo "Database is reachable.";
          env:
            - name: "DB_HOST"
              value: frappe-bench-mariadb
            - name: "DB_PORT"
              value: "3306"
          resources:
            {}
          securityContext:
            capabilities:
              add:
              - CAP_CHOWN
          volumeMounts:
            - name: sites-dir
              mountPath: /home/frappe/frappe-bench/sites
      containers:
      - name: create-site
        image: "frappe/erpnext:v15.38.4"
        imagePullPolicy: IfNotPresent
        command: ["bash", "-c"]
        args:
          - >
            set -x;

            bench_output=$(bench new-site ${SITE_NAME} \
              --no-mariadb-socket \
              --db-type=${DB_TYPE} \
              --db-host=${DB_HOST} \
              --db-port=${DB_PORT} \
              --admin-password=${ADMIN_PASSWORD} \
              --mariadb-root-username=${DB_ROOT_USER} \
              --mariadb-root-password=${DB_ROOT_PASSWORD} \
              --install-app=erpnext \
              --force \
             | tee /dev/stderr);

            bench_exit_status=$?;

            if [ $bench_exit_status -ne 0 ]; then
                # Don't consider the case "site already exists" an error.
                if [[ $bench_output == *"already exists"* ]]; then
                    echo "Site already exists, continuing...";
                else
                    echo "An error occurred in bench new-site: $bench_output"
                    exit $bench_exit_status;
                fi
            fi
 
            set -e;

            rm -f currentsite.txt
        env:
          - name: "SITE_NAME"
            value: "localhost"
          - name: "DB_TYPE"
            value: mariadb
          - name: "DB_HOST"
            value: frappe-bench-mariadb
          - name: "DB_PORT"
            value: "3306"
          - name: "DB_ROOT_USER"
            value: "root"
          - name: "DB_ROOT_PASSWORD"
            valueFrom:
              secretKeyRef:
                key: mariadb-root-password
                name: frappe-bench-mariadb
          - name: "ADMIN_PASSWORD"
            value: "changeit"
        resources:
          {}
        securityContext:
          capabilities:
            add:
            - CAP_CHOWN
        volumeMounts:
          - name: sites-dir
            mountPath: /home/frappe/frappe-bench/sites
          - name: logs
            mountPath: /home/frappe/frappe-bench/logs
      restartPolicy: Never
      volumes:
        - name: sites-dir
          persistentVolumeClaim:
            claimName: frappe-bench-erpnext
            readOnly: false
        - name: logs
          emptyDir: {}
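Side note: judging by the "# Source: erpnext/templates/job-create-site.yaml" header, this Job is rendered chart output. Instead of hand-editing it, it can probably be regenerated from the chart itself, roughly like this (the flags assume the chart’s jobs.createSite values shown in the values file above):

microk8s helm template frappe-bench -n erpnext frappe/erpnext -f custom-values.yaml \
  --set jobs.createSite.enabled=true \
  --set jobs.createSite.siteName=localhost \
  --show-only templates/job-create-site.yaml > create-new-site-job.yaml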
~/kube-poc$ microk8s kubectl apply -n erpnext -f create-new-site-job.yaml
job.batch/frappe-bench-erpnext-new-site-20241022020025 created
~/kube-poc$ microk8s kubectl get pods -A
NAMESPACE     NAME                                                   READY   STATUS      RESTARTS      AGE
erpnext       frappe-bench-erpnext-conf-bench-20241022014932-zsb2b   0/1     Completed   0             12m
erpnext       frappe-bench-erpnext-gunicorn-7fdcf996ff-9mtnh         1/1     Running     0             12m
erpnext       frappe-bench-erpnext-new-site-20241022020025-6w2pp     1/1     Running     0             18s
erpnext       frappe-bench-erpnext-nginx-5d6ff6bfd-m8h4k             1/1     Running     0             12m
erpnext       frappe-bench-erpnext-scheduler-7f5b6f7954-5qmp5        1/1     Running     1 (11m ago)   12m
erpnext       frappe-bench-erpnext-socketio-579b548d7c-252w9         1/1     Running     2 (11m ago)   12m
erpnext       frappe-bench-erpnext-worker-d-5fb4d5b4b7-h9l6d         1/1     Running     2 (11m ago)   12m
erpnext       frappe-bench-erpnext-worker-l-5754c45c46-xs6k9         1/1     Running     2 (11m ago)   12m
erpnext       frappe-bench-erpnext-worker-s-8d9687848-8qmdq          1/1     Running     2 (11m ago)   12m
erpnext       frappe-bench-mariadb-0                                 1/1     Running     0             12m
erpnext       frappe-bench-redis-cache-master-0                      1/1     Running     0             12m
erpnext       frappe-bench-redis-queue-master-0                      1/1     Running     0             12m
ingress       nginx-ingress-microk8s-controller-j72zk                1/1     Running     1 (23m ago)   4h17m
kube-system   calico-kube-controllers-796fb75cc-b7s6k                1/1     Running     9 (23m ago)   3d2h
kube-system   calico-node-wr7tq                                      1/1     Running     9 (23m ago)   3d2h
kube-system   coredns-5986966c54-z2chp                               1/1     Running     9 (23m ago)   3d2h
kube-system   dashboard-metrics-scraper-795895d745-n64mz             1/1     Running     9 (23m ago)   3d1h
kube-system   hostpath-provisioner-7c8bdf94b8-5zcbt                  1/1     Running     1 (23m ago)   4h23m
kube-system   kubernetes-dashboard-6796797fb5-7ml5q                  1/1     Running     9 (23m ago)   3d1h
kube-system   metrics-server-7cff7889bd-4f5bh                        1/1     Running     9 (23m ago)   3d1h
~/kube-poc$ microk8s kubectl logs frappe-bench-erpnext-new-site-20241022020025-6w2pp -n erpnext
Defaulted container "create-site" out of: create-site, validate-config (init)
++ bench new-site localhost --no-mariadb-socket --db-type=mariadb --db-host=frappe-bench-mariadb --db-port=3306 --admin-password=changeit --mariadb-root-username=root --mariadb-root-password=changeit --install-app=erpnext --force
++ tee /dev/stderr

Installing frappe...
Updating DocTypes for frappe        : [========================================] 100%
Updating Dashboard for frappe

Installing erpnext...
Updating DocTypes for erpnext       : [========================================] 100%
Updating customizations for Address
Updating customizations for Contact
Updating Dashboard for erpnext
*** Scheduler is disabled ***
+ bench_output='
Installing frappe...
Updating DocTypes for frappe        : [========================================] 100%
Updating Dashboard for frappe

Installing erpnext...
Updating DocTypes for erpnext       : [========================================] 100%
Updating customizations for Address
Updating customizations for Contact
Updating Dashboard for erpnext
*** Scheduler is disabled ***'
+ bench_exit_status=0
+ '[' 0 -ne 0 ']'
+ set -e
+ rm -f currentsite.txt
~/kube-poc$ microk8s kubectl port-forward -n erpnext svc/frappe-bench-erpnext 8080:8080
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080
Handling connection for 8080
Handling connection for 8080
Handling connection for 8080
Handling connection for 8080
Handling connection for 8080
Handling connection for 8080

Now you can open the site at localhost:8080 in your browser. Since we enabled the ingress addon in MicroK8s, it will take care of load balancing.


Here only keep the values to be overridden. Do not keep keys in values.yaml that are not going to be overridden.
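Following that advice, a minimal custom-values.yaml for this MicroK8s setup would probably only need the storage settings that differ from the chart defaults; a sketch, assuming everything else in the full file above was left at the stock values (diff against the upstream values.yaml to be sure):

cat > custom-values.yaml <<'EOF'
persistence:
  worker:
    enabled: true
    size: 6Gi
    storageClass: "microk8s-hostpath"
EOF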

Check the FluxCD example repo for advanced EKS stuff: castlecraft / aws-eks-erpnext · GitLab


Thank you for the guidance, @revant_one! I will ensure that custom-values.yaml only includes the values that need to be overridden. I’ll also take a look at the FluxCD example repository for additional insights. I appreciate your help!

Here is the output I got in the browser while port-forwarding to localhost:8080. When I add an ingress, I get an error. I’ve attached the ingress file below.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: erp-resource
  annotations:
    spec.ingressClassName: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: localhost
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: erpnextbasechart-erpnext-c855eb9d
            port:
              number: 80

Hello @manoj91298, if you are using MicroK8s, instead of deploying an ingress resource you can just enable the ingress add-on and use port-forwarding. Then directly use localhost:8080 in your browser.
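If you still want an Ingress instead of port-forwarding, a minimal sketch could look like the following. It assumes the chart service is frappe-bench-erpnext on port 8080 (the same service used in the port-forward above) and that the host matches an existing site name, since nginx routes by Host header (frappeSiteNameHeader: $host). Check the actual ingress class name first:

kubectl get ingressclass                  # on MicroK8s the class is often named "public"
kubectl apply -n erpnext -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: erpnext-ingress
spec:
  ingressClassName: public                # replace with the class reported above
  rules:
  - host: localhost                       # must match an existing site name
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frappe-bench-erpnext
            port:
              number: 8080
EOF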


A Frappe on Kubernetes VS Code devcontainer setup is also possible. We’re using this for tests: helm/tests/compose.yaml at main · frappe/helm · GitHub. Add the k3s service from that compose file to your frappe/bench devcontainer’s compose file and you can try things out locally as well as develop Frappe apps that interact with the Kubernetes API.

Thank you for your help, @Fathima786Irfana @revant_one

I have one more question: Can we implement multi-tenancy using MicroK8s, whether it’s running on localhost or on an AWS instance?

  • bench-level multi-tenancy? Keep adding sites to your helm release and expose them to the world as needed (see the sketch below).
  • For multiple benches, create multiple helm releases.

Both processes can be made dynamic using the API; manage official or custom frappe-benches on Kubernetes and build your own SaaS or PaaS with it.
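For illustration, adding a second site to the same release could be as simple as rendering another create-site Job with a different site name; the names here are placeholders and the assumptions are the same as in the helm template command shown earlier:

microk8s helm template frappe-bench -n erpnext frappe/erpnext -f custom-values.yaml \
  --set jobs.createSite.enabled=true \
  --set jobs.createSite.siteName=site2.localhost \
  --show-only templates/job-create-site.yaml | microk8s kubectl apply -n erpnext -f -
# then expose site2.localhost with its own Ingress host rule (or port-forward and send a matching Host header)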
