ERPNext In Kubernetes (microk8s)

Hello Everyone,

I discovered this excellent project only a few months ago, and after some testing and experimenting, my company and I are ready to get serious about it.
To ensure stability, uptime, ease of use, and security, I decided to set up ERPNext with Kubernetes on our server.

The process ran mostly smoothly, thanks to the excellent Kubernetes support provided by the frappe/helm repository (GitHub - frappe/helm: Helm Chart Repository for Frappe/ERPNext).

Since we are running on a bare-metal server, and for ease of use, I chose microk8s as the Kubernetes engine.

There were some small adjustments I did differently (for example, using storageClass: "microk8s-hostpath" for persistence.worker and for mariadb).
This is probably not ideal in the long run, but since we are only running on a single server for now anyway, I do not see a big problem with it.
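
(For anyone following along, this is roughly what that setup looks like; a sketch rather than the exact commands I ran. The addon names and the Helm repo URL are the documented ones for microk8s and frappe/helm, and the release and namespace names are the ones used later in this thread. On older microk8s the storage addon may be called storage instead of hostpath-storage.)

# enable the microk8s pieces used here
microk8s enable dns hostpath-storage helm3 ingress

# add the frappe Helm repo and install ERPNext with the hostpath storage class
microk8s helm3 repo add frappe https://helm.erpnext.com
microk8s helm3 repo update
microk8s kubectl create namespace erpnext
microk8s helm3 install frappe-bench frappe/erpnext -n erpnext \
  -f custom-values.yaml \
  --set persistence.worker.storageClass=microk8s-hostpath \
  --set mariadb.primary.persistence.storageClass=microk8s-hostpath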

Where I hit some roadblocks is the Ingress configuration needed to expose ERPNext to the outside world. I know this is not necessarily an ERPNext problem, but maybe someone here can help me regardless. It would probably also help many future users.

My Ingress configuration currently looks like this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: libattion-erpnext
  namespace: erpnext
  labels:
    helm.sh/chart: erpnext-6.0.7
    app.kubernetes.io/name: erpnext
    app.kubernetes.io/instance: frappe-bench
    app.kubernetes.io/version: "v14.16.1"
    app.kubernetes.io/managed-by: Helm
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/ingress.class: public
    kubernetes.io/tls-acme: "true"
spec:
  rules:
  - http:
      paths:
      - backend:
          service:
            name: frappe-bench-erpnext
            port:
              number: 8080
        path: /
        pathType: Prefix
status:
  loadBalancer:
    ingress:
    - ip: 127.0.0.1

It seems to connect to the frappe-bench-erpnext service (hopefully correctly), but when I access localhost (127.0.0.1) I only get a 404.

This should mean that either my pods are not working correctly (but since they report a Running status, this should not be the case) or something else is wrong.

Since I have been stuck on this problem for a few days now, any help would be deeply appreciated.

  • Make sure you have an ingress controller installed and a load balancer service created.
  • Make sure you have a site created.
  • The site name and the Host header must match, i.e. if you wish to access localhost, a site named localhost must be available.
  • Confirm by using a svc port forward and curl -H "Host: site.name" localhost:8080 (see the sketch below).
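
(A minimal sketch of that last check; the namespace and service name are the ones used in this thread, and site.name stands for whatever site you created:)

# forward the chart's nginx service to the local machine
kubectl port-forward -n erpnext svc/frappe-bench-erpnext 8080:8080

# in a second terminal: request the site explicitly via the Host header;
# a working site returns the Frappe HTML, an unknown site returns a 404
curl -sS -H "Host: site.name" http://localhost:8080/ | head -n 5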

Hello, thanks a lot for your answer.

Based on that, I changed the site name in the custom-values.yaml to localhost. It now looks like this:

# Default values for erpnext.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

# Configure external database host
# dbHost: ""
# dbPort: 3306
# dbRootUser: ""
# dbRootPassword: ""
# dbRds: false

image:
  repository: frappe/erpnext
  tag: v14.16.1
  pullPolicy: IfNotPresent

nginx:
  replicaCount: 1
  # config: |
  #   # custom conf /etc/nginx/conf.d/default.conf
  environment:
    upstreamRealIPAddress: "127.0.0.1"
    upstreamRealIPRecursive: "off"
    upstreamRealIPHeader: "X-Forwarded-For"
    frappeSiteNameHeader: "$host"
  livenessProbe:
    tcpSocket:
      port: 8080
    initialDelaySeconds: 5
    periodSeconds: 10
  readinessProbe:
    tcpSocket:
      port: 8080
    initialDelaySeconds: 5
    periodSeconds: 10
  service:
    type: ClusterIP
    port: 8080
  resources: {}
  nodeSelector: {}
  tolerations: []
  affinity: {}
  envVars: []
  initContainers: []
  sidecars: []

worker:
  gunicorn:
    replicaCount: 1
    livenessProbe:
      tcpSocket:
        port: 8000
      initialDelaySeconds: 5
      periodSeconds: 10
    readinessProbe:
      tcpSocket:
        port: 8000
      initialDelaySeconds: 5
      periodSeconds: 10
    service:
      type: ClusterIP
      port: 8000
    resources: {}
    nodeSelector: {}
    tolerations: []
    affinity: {}
    args: []
    envVars: []
    initContainers: []
    sidecars: []

  default:
    replicaCount: 1
    resources: {}
    nodeSelector: {}
    tolerations: []
    affinity: {}
    livenessProbe:
      override: false
      probe: {}
    readinessProbe:
      override: false
      probe: {}
    envVars: []
    initContainers: []
    sidecars: []

  short:
    replicaCount: 1
    resources: {}
    nodeSelector: {}
    tolerations: []
    affinity: {}
    livenessProbe:
      override: false
      probe: {}
    readinessProbe:
      override: false
      probe: {}
    envVars: []
    initContainers: []
    sidecars: []

  long:
    replicaCount: 1
    resources: {}
    nodeSelector: {}
    tolerations: []
    affinity: {}
    livenessProbe:
      override: false
      probe: {}
    readinessProbe:
      override: false
      probe: {}
    envVars: []
    initContainers: []
    sidecars: []

  scheduler:
    replicaCount: 1
    resources: {}
    nodeSelector: {}
    tolerations: []
    affinity: {}
    livenessProbe:
      override: false
      probe: {}
    readinessProbe:
      override: false
      probe: {}
    envVars: []
    initContainers: []
    sidecars: []

  healthProbe: |
    exec:
      command:
        - bash
        - -c
        - echo "Ping backing services";
        {{- if .Values.mariadb.enabled }}
        {{- if eq .Values.mariadb.architecture "replication" }}
        - wait-for-it {{ .Release.Name }}-mariadb-primary:{{ .Values.mariadb.primary.service.ports.mysql }} -t 1;
        {{- else }}
        - wait-for-it {{ .Release.Name }}-mariadb:{{ .Values.mariadb.primary.service.ports.mysql }} -t 1;
        {{- end }}
        {{- else if .Values.dbHost }}
        - wait-for-it {{ .Values.dbHost }}:{{ .Values.mariadb.primary.service.ports.mysql }} -t 1;
        {{- end }}
        {{- if index .Values "redis-cache" "host" }}
        - wait-for-it {{ .Release.Name }}-redis-cache-master:{{ index .Values "redis-cache" "master" "containerPorts" "redis" }} -t 1;
        {{- else if index .Values "redis-cache" "host" }}
        - wait-for-it {{ index .Values "redis-cache" "host" }} -t 1;
        {{- end }}
        {{- if index .Values "redis-queue" "host" }}
        - wait-for-it {{ .Release.Name }}-redis-queue-master:{{ index .Values "redis-queue" "master" "containerPorts" "redis" }} -t 1;
        {{- else if index .Values "redis-queue" "host" }}
        - wait-for-it {{ index .Values "redis-queue" "host" }} -t 1;
        {{- end }}
        {{- if index .Values "redis-socketio" "host" }}
        - wait-for-it {{ .Release.Name }}-redis-socketio-master:{{ index .Values "redis-socketio" "master" "containerPorts" "redis" }} -t 1;
        {{- else if index .Values "redis-socketio" "host" }}
        - wait-for-it {{ index .Values "redis-socketio" "host" }} -t 1;
        {{- end }}
        {{- if .Values.postgresql.host }}
        - wait-for-it {{ .Values.postgresql.host }}:{{ .Values.postgresql.primary.service.ports.postgresql }} -t 1;
        {{- else if .Values.postgresql.enabled }}
        - wait-for-it {{ .Release.Name }}-postgresql:{{ .Values.postgresql.primary.service.ports.postgresql }} -t 1;
        {{- end }}
    initialDelaySeconds: 15
    periodSeconds: 5

socketio:
  replicaCount: 1
  livenessProbe:
    tcpSocket:
      port: 9000
    initialDelaySeconds: 5
    periodSeconds: 10
  readinessProbe:
    tcpSocket:
      port: 9000
    initialDelaySeconds: 5
    periodSeconds: 10
  resources: {}
  nodeSelector: {}
  tolerations: []
  affinity: {}
  service:
    type: ClusterIP
    port: 9000
  envVars: []
  initContainers: []
  sidecars: []

persistence:
  worker:
    enabled: true
    # existingClaim: ""
    size: 20Gi
    storageClass: "microk8s-hostpath"
  logs:
    # Container based log search and analytics stack recommended
    enabled: true
    # existingClaim: ""
    size: 2Gi
    storageClass: "microk8s-hostpath"

jobs:
  volumePermissions:
    enabled: false
    backoffLimit: 0
    resources: {}
    nodeSelector: {}
    tolerations: []
    affinity: {}

  configure:
    enabled: true
    fixVolume: true
    backoffLimit: 0
    resources: {}
    nodeSelector: {}
    tolerations: []
    affinity: {}

  createSite:
    enabled: false
    forceCreate: false
    siteName: "localhost"
    adminPassword: "***"
    installApps:
    - "payments"
    - "erpnext"
    - "wiki"
    dbType: "mariadb"
    backoffLimit: 0
    resources: {}
    nodeSelector: {}
    tolerations: []
    affinity: {}

  dropSite:
    enabled: false
    forced: false
    siteName: "localhost"
    backoffLimit: 0
    resources: {}
    nodeSelector: {}
    tolerations: []
    affinity: {}

  backup:
    enabled: false
    siteName: "localhost"
    withFiles: true
    backoffLimit: 0
    resources: {}
    nodeSelector: {}
    tolerations: []
    affinity: {}

  migrate:
    enabled: false
    siteName: "localhost"
    skipFailing: false
    backoffLimit: 0
    resources: {}
    nodeSelector: {}
    tolerations: []
    affinity: {}

  custom:
    enabled: false
    jobName: ""
    labels: {}
    backoffLimit: 0
    initContainers: []
    containers: []
    restartPolicy: Never
    volumes: []
    nodeSelector: {}
    affinity: {}
    tolerations: []

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

serviceAccount:
  # Specifies whether a service account should be created
  create: true

podSecurityContext:
  supplementalGroups: [1000]

securityContext:
  capabilities:
    add:
    - CAP_CHOWN
  # readOnlyRootFilesystem: true
  # runAsNonRoot: true
  # runAsUser: 1000

redis-cache:
  # https://github.com/bitnami/charts/tree/master/bitnami/redis
  enabled: true
  # host: ""
  architecture: standalone
  auth:
    enabled: false
    sentinal: false
  master:
    containerPorts:
      redis: 6379
    persistence:
      enabled: false

redis-queue:
  # https://github.com/bitnami/charts/tree/master/bitnami/redis
  enabled: true
  # host: ""
  architecture: standalone
  auth:
    enabled: false
    sentinal: false
  master:
    containerPorts:
      redis: 6379
    persistence:
      enabled: false

redis-socketio:
  # https://github.com/bitnami/charts/tree/master/bitnami/redis
  enabled: true
  # host: ""
  architecture: standalone
  auth:
    enabled: false
    sentinal: false
  master:
    containerPorts:
      redis: 6379
    persistence:
      enabled: false

mariadb:
  # https://github.com/bitnami/charts/tree/master/bitnami/mariadb
  enabled: true
  auth:
    rootPassword: "***"
    username: "erpnext"
    password: "***"
    replicationPassword: "***"
  primary:
    persistence:
      enabled: true
      storageClass: "microk8s-hostpath"
      size: 10Gi
    service:
      ports:
        mysql: 3306
    configuration: |-
      [mysqld]
      skip-name-resolve
      explicit_defaults_for_timestamp
      basedir=/opt/bitnami/mariadb
      plugin_dir=/opt/bitnami/mariadb/plugin
      port=3306
      socket=/opt/bitnami/mariadb/tmp/mysql.sock
      tmpdir=/opt/bitnami/mariadb/tmp
      max_allowed_packet=16M
      bind-address=*
      pid-file=/opt/bitnami/mariadb/tmp/mysqld.pid
      log-error=/opt/bitnami/mariadb/logs/mysqld.log
      slow_query_log=0
      slow_query_log_file=/opt/bitnami/mariadb/logs/mysqld.log
      long_query_time=10.0

      # Frappe Specific Changes
      character-set-client-handshake=FALSE
      character-set-server=utf8mb4
      collation-server=utf8mb4_unicode_ci

      [client]
      port=3306
      socket=/opt/bitnami/mariadb/tmp/mysql.sock
      plugin_dir=/opt/bitnami/mariadb/plugin

      # Frappe Specific Changes
      default-character-set=utf8mb4

      [manager]
      port=3306
      socket=/opt/bitnami/mariadb/tmp/mysql.sock
      pid-file=/opt/bitnami/mariadb/tmp/mysqld.pid

postgresql:
  # https://github.com/bitnami/charts/tree/master/bitnami/postgresql
  enabled: false
  # host: ""
  auth:
    username: "postgres"
    postgresPassword: "changeit"
  primary:
    service:
      ports:
        postgresql: 5432

Since I am running on a single server with microk8s and not a cloud like AWS, I do not have a load balancer. I could install MetalLB, but that needs an IP range to hand out (which I do not have).
As far as I understand, microk8s instead provides an option to expose a service through its ingress controller, which is what I am trying to use (MicroK8s - Addon: Ingress).
(Though to get it to work, I made some changes based on this Stack Overflow answer: kubernetes - Simple ingress from host with microk8s? - Stack Overflow)
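
(For context, this is roughly what the addon path looks like; a sketch, and the controller namespace may differ between microk8s releases:)

# the addon deploys an nginx ingress controller bound to the node's ports 80/443
microk8s enable ingress

# check that the controller pod is running (namespace "ingress" on current microk8s)
microk8s kubectl get pods -n ingress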

This also seems to somewhat work, since the 404 response I am getting comes from nginx.

kubectl gives me the following info about my ingress controller:

Based on that, I believe it connected to the ERPNext service correctly, since it shows an IP and port for the service (but I am not exactly sure where this IP comes from; the in-cluster IP of the service is different).
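
(The way I would inspect that; a sketch, with the resource names used in this thread:)

# the ADDRESS column and the backends of the ingress
microk8s kubectl get ingress -n erpnext
microk8s kubectl describe ingress libattion-erpnext -n erpnext

# the backend IPs shown there are pod endpoint IPs behind the service,
# not the service's ClusterIP
microk8s kubectl get endpoints frappe-bench-erpnext -n erpnext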

Also, the address (127.0.0.1 or localhost) only appears with the addition of

status:
  loadBalancer:
    ingress:
    - ip: 127.0.0.1

to the Ingress configuration file (I forgot where I got that from … but it seems to get me a step further).

I think that the site and the Frappe services are running perfectly fine, since the site-creation job ran successfully and the services and the pods are up. (Though some of the worker pods seem to have struggled in the very beginning and restarted up to 6 times; maybe some dependency was not there yet.)
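
(For the record, the checks behind that statement; a sketch, with <pod-name> as a placeholder:)

# all pods should be Running or Completed; the RESTARTS column shows the early struggles
microk8s kubectl get pods -n erpnext

# look at the previous container's logs of a pod that restarted
microk8s kubectl logs -n erpnext <pod-name> --previous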

Also, when I do service port forwarding as suggested:
microk8s kubectl port-forward -n erpnext service/frappe-bench-erpnext 8080

If I then do curl localhost:8080, I get the desired response:

<!DOCTYPE html>
<!-- Built on Frappe. https://frappeframework.com/ -->
<html lang="en">
<head>
	<meta charset="utf-8">
	<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
	<meta name="generator" content="frappe">
...
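
(For comparison, the same request sent through the ingress on the node, with an explicit Host header; a sketch, assuming the ingress controller listens on the node's port 80:)

# through the ingress: the Host header decides which Frappe site nginx serves
curl -sS -H "Host: localhost" http://127.0.0.1/ | head -n 5

# without a matching Host header (here the host is 127.0.0.1, and no such site exists)
# the response is the 404
curl -sS -o /dev/null -w "%{http_code}\n" http://127.0.0.1/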

I think I am very close, but a few things do not seem to be quite in the right place yet.

The site name should resolve as a hostname.

If you have a site called localhost, then you can only access it as localhost.

Host header? It resolves the site name. Your site will not work as it is: your site name is localhost, so it will only work when you enter http://localhost in the browser.

localhost will only work if you port forward

localhost will not work.

Check the FRAPPE_SITE_NAME_HEADER env var; it is set to $host, which means it resolves the site from the request host. You can set it to localhost, and it will then only serve the site named localhost, but then you will not be able to serve any other site from the same nginx reverse proxy.
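
(In the chart this header is driven by nginx.environment.frappeSiteNameHeader, as seen in the values file above; a sketch of pinning it, using the release and namespace names from this thread:)

# serve exactly one site regardless of the incoming Host header
microk8s helm3 upgrade frappe-bench frappe/erpnext -n erpnext \
  --reuse-values \
  --set nginx.environment.frappeSiteNameHeader=localhost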

Just keep the values you have changed in the file; there is no need to have a copy of all the keys here. It will only confuse you later.

When you keep only the changes, you modify and apply only those. When you keep a full copy of the file, it will override the default values if they change later.
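
(A sketch of what such a trimmed-down override file could look like for this setup, keeping only the keys that differ from the chart defaults:)

cat > custom-values.yaml <<'EOF'
persistence:
  worker:
    storageClass: "microk8s-hostpath"
  logs:
    storageClass: "microk8s-hostpath"

mariadb:
  primary:
    persistence:
      storageClass: "microk8s-hostpath"
EOF

microk8s helm3 upgrade --install frappe-bench frappe/erpnext -n erpnext -f custom-values.yaml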

JRU,
If you are in a home or office network, then your router should be assigning you IPs in a certain range. You need to look for the free IPs in the range.

Then, in Kubernetes, you definitely need to install MetalLB. The config should contain the free IPs and could look something like this:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.16.0.240-172.16.0.250
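
(Side note, since this thread is about microk8s: the metallb addon there takes the address range directly, and newer MetalLB releases (0.13+) configure the pool through IPAddressPool/L2Advertisement CRDs rather than this ConfigMap. A sketch with the same example range:)

# microk8s bundles MetalLB as an addon; the range is passed at enable time
microk8s enable metallb:172.16.0.240-172.16.0.250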

Then, edit your DNS settings to point the host (e.g. micro.localhost) to the IP address that MetalLB assigns.

Your ingress could look something like:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: site-ingress
  namespace: cloud
  annotations:
    kubernetes.io/ingress.class: nginx
    # 413 Request Entity Too Large · Issue #21 · nginxinc/kubernetes-ingress · GitHub
    nginx.ingress.kubernetes.io/proxy-body-size: 64m
    # ERPNext SSL / HTTPS config not working with nginx - ERPNext - Frappe Forum (default is 60)
    nginx.ingress.kubernetes.io/proxy-read-timeout: "120"
    # Using SignalR and other WebSockets in Kubernetes behind an NGINX Ingress Controller
    nginx.ingress.kubernetes.io/affinity: cookie
    cert-manager.io/cluster-issuer: letsencrypt-staging
    # acme.cert-manager.io/http01-edit-in-place: "true"
    cert-manager.io/acme-challenge-type: http01
spec:
  tls:
  - hosts:
    - micro.localhost
    secretName: micro-staging-tls
  rules:
  - http:
      paths:
      - backend:
          service:
            name: frappe-bench-erpnext
            port:
              number: 8080
        path: /
        pathType: Prefix
    host: micro.localhost

Hi, thanks for all the replies!

So I got it to work now!

The main problem was that I did not really understand the relation between the siteName and the host.
In my mind, the service would just be provided under the IP address, and the DNS records would just translate the name to the IP, so it would not matter what the actual host name was.

This is not the case: the server checks the host name, and if the request does not carry the host name of an existing site (even when the IP resolution is correct), it does not accept the connection.

Armed with that knowledge, I changed the siteName in the custom-values.yaml to erp.libattion.com (which will be my final intended domain name)
and regenerated the Ingress file, so now I have:

---
# Source: erpnext/templates/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: libattion-erpnext
  labels:
    helm.sh/chart: erpnext-6.0.8
    app.kubernetes.io/name: erpnext
    app.kubernetes.io/instance: frappe-bench
    app.kubernetes.io/version: "v14.17.0"
    app.kubernetes.io/managed-by: Helm
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
spec:
  tls:
    - hosts:
        - "erp.libattion.com"
      secretName: libattion-erpnext-tls
  rules:
    - host: "erp.libattion.com"
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frappe-bench-erpnext
                port:
                  number: 8080

Then I added an entry to my local /etc/hosts file (since I do not have that DNS A record set yet):

<IP of Server> 	erp.libattion.com

So my request header now comes with the correct host name.
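
(A quick way to verify that without touching /etc/hosts at all; a sketch, with <server-ip> as a placeholder for the server address from the hosts entry above:)

# send the request to the server's IP but present the real host name,
# so nginx resolves the matching Frappe site
curl -sS --resolve erp.libattion.com:80:<server-ip> http://erp.libattion.com/ | head -n 5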

And voilà, it starts working!

So this is fine for now; however, in the near future we will move from an online server to an office server, and I will then try to set it up with MetalLB, since I will then also have an IP range available.

One more thing: when I install only the ERPNext app, everything works fine, but if I add other apps, it fails to install them and I only get the standard bench site.
If I try to correct that by adding bench get-app payments in the create-new-site.yaml before the site-installation step, it seems to work, but the pod/frappe-bench-erpnext-scheduler-855967dc6c-zjjlt then fails with the following error message:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/local/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/frappe/frappe-bench/apps/frappe/frappe/utils/bench_helper.py", line 109, in <module>
    main()
  File "/home/frappe/frappe-bench/apps/frappe/frappe/utils/bench_helper.py", line 18, in main
    click.Group(commands=commands)(prog_name="bench")
  File "/home/frappe/frappe-bench/env/lib/python3.10/site-packages/click/core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "/home/frappe/frappe-bench/env/lib/python3.10/site-packages/click/core.py", line 782, in main
    rv = self.invoke(ctx)
  File "/home/frappe/frappe-bench/env/lib/python3.10/site-packages/click/core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/home/frappe/frappe-bench/env/lib/python3.10/site-packages/click/core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/home/frappe/frappe-bench/env/lib/python3.10/site-packages/click/core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/frappe/frappe-bench/env/lib/python3.10/site-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "/home/frappe/frappe-bench/apps/frappe/frappe/commands/scheduler.py", line 177, in start_scheduler
    start_scheduler()
  File "/home/frappe/frappe-bench/apps/frappe/frappe/utils/scheduler.py", line 37, in start_scheduler
    tick = cint(frappe.get_conf().scheduler_tick_interval) or 60
  File "/home/frappe/frappe-bench/apps/frappe/frappe/__init__.py", line 331, in get_conf
    with init_site(site):
  File "/home/frappe/frappe-bench/apps/frappe/frappe/__init__.py", line 341, in __enter__
    init(self.site)
  File "/home/frappe/frappe-bench/apps/frappe/frappe/__init__.py", line 248, in init
    setup_module_map()
  File "/home/frappe/frappe-bench/apps/frappe/frappe/__init__.py", line 1538, in setup_module_map
    for module in get_module_list(app):
  File "/home/frappe/frappe-bench/apps/frappe/frappe/__init__.py", line 1378, in get_module_list
    return get_file_items(os.path.join(os.path.dirname(get_module(app_name).__file__), "modules.txt"))
  File "/home/frappe/frappe-bench/apps/frappe/frappe/__init__.py", line 1327, in get_module
    return importlib.import_module(modulename)
  File "/usr/local/lib/python3.10/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1004, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'payments'

This looks to me like a resource is unavailable, and since everything works otherwise, this could be a bug.
Maybe someone has a few hints on where to look or how to approach this?

Thanks for all the kind help!

So I did some digging and I am now fairly certain that this is not a bug.

I looked into the Dockerfile to see what is actually already inside the image: it already does a bench get-app for erpnext, so the erpnext app comes included, but only that one.

The create-new-site.yaml that I have been using looks like this:

---
# Source: erpnext/templates/job-create-site.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: frappe-bench-erpnext-new-site-20230222131403
  labels:
    helm.sh/chart: erpnext-6.0.8
    app.kubernetes.io/name: erpnext
    app.kubernetes.io/instance: frappe-bench
    app.kubernetes.io/version: "v14.17.0"
    app.kubernetes.io/managed-by: Helm
spec:
  backoffLimit: 0
  template:
    spec:
      serviceAccountName: frappe-bench-erpnext
      securityContext:
        supplementalGroups:
        - 1000
      initContainers:
        - name: validate-config
          image: "frappe/erpnext:v14.16.1"
          imagePullPolicy: IfNotPresent
          command: ["bash", "-c"]
          args:
            - >
              export start=`date +%s`;
              until [[ -n `grep -hs ^ sites/common_site_config.json | jq -r ".db_host // empty"` ]] && \
                [[ -n `grep -hs ^ sites/common_site_config.json | jq -r ".redis_cache // empty"` ]] && \
                [[ -n `grep -hs ^ sites/common_site_config.json | jq -r ".redis_queue // empty"` ]];
              do
                echo "Waiting for sites/common_site_config.json to be created";
                sleep 5;
                if (( `date +%s`-start > 600 )); then
                  echo "could not find sites/common_site_config.json with required keys";
                  exit 1
                fi
              done;
              echo "sites/common_site_config.json found";
          resources:
            {}
          securityContext:
            capabilities:
              add:
              - CAP_CHOWN
          volumeMounts:
            - name: sites-dir
              mountPath: /home/frappe/frappe-bench/sites
      containers:
      - name: create-site
        image: "frappe/erpnext:v14.16.1"
        imagePullPolicy: IfNotPresent
        command: ["bash", "-c"]
        args:
          - >
            bench get-app payments;
            bench get-app erpnext;
            bench get-app wiki;
            bench new-site $(SITE_NAME)
            --no-mariadb-socket
            --db-type=$(DB_TYPE)
            --db-host=$(DB_HOST)
            --db-port=$(DB_PORT)
            --admin-password=$(ADMIN_PASSWORD)
            --mariadb-root-username=$(DB_ROOT_USER)
            --mariadb-root-password=$(DB_ROOT_PASSWORD)
            --install-app=payments
            --install-app=erpnext
            --install-app=wiki
            ;rm -f currentsite.txt
        env:
          - name: "SITE_NAME"
            value: "erp.libattion.com"
          - name: "DB_TYPE"
            value: mariadb
          - name: "DB_HOST"
            value: frappe-bench-mariadb
          - name: "DB_PORT"
            value: "3306"
          - name: "DB_ROOT_USER"
            value: "root"
          - name: "DB_ROOT_PASSWORD"
            valueFrom:
              secretKeyRef:
                key: mariadb-root-password
                name: frappe-bench-mariadb
          - name: "ADMIN_PASSWORD"
            value: "***"
        resources:
          {}
        securityContext:
          capabilities:
            add:
            - CAP_CHOWN
        volumeMounts:
          - name: sites-dir
            mountPath: /home/frappe/frappe-bench/sites
          - name: logs
            mountPath: /home/frappe/frappe-bench/logs
      restartPolicy: Never
      volumes:
        - name: sites-dir
          persistentVolumeClaim:
            claimName: frappe-bench-erpnext
            readOnly: false
        - name: logs
          persistentVolumeClaim:
            claimName: frappe-bench-erpnext-logs
            readOnly: false

The thinking here was to fetch the apps with bench get-app first and then let the --install-app flags install them during site creation.

But I have a strong suspicion that, since we only have these volume mounts:

        volumeMounts:
          - name: sites-dir
            mountPath: /home/frappe/frappe-bench/sites
          - name: logs
            mountPath: /home/frappe/frappe-bench/logs

and the Frappe apps are stored in /home/frappe/frappe-bench/apps, my bench get-app command does not permanently write these apps. The installation therefore succeeds, but afterwards the app folders no longer exist, so the scheduler cannot find them and delivers the error from before.

How would I go about changing this? Are there other places where bench get-app writes besides the apps folder?

Maybe it is a good idea to create an install-app-job.yaml? I may be able to contribute a template or something similar to the repo here.

For additional app installation, read this: Frequently Asked Questions · frappe/frappe_docker Wiki · GitHub
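
(For anyone landing here later: that FAQ boils down to building a custom image that already contains the extra apps, instead of running bench get-app at site-creation time. A rough sketch of that workflow, assuming the frappe_docker custom-image build with its APPS_JSON_BASE64 build argument; the branch names and the registry tag are placeholders to adjust:)

git clone https://github.com/frappe/frappe_docker
cd frappe_docker

# apps to bake into the image (branch names are assumptions; adjust to your versions)
export APPS_JSON_BASE64=$(base64 -w0 <<'EOF'
[
  {"url": "https://github.com/frappe/payments", "branch": "version-14"},
  {"url": "https://github.com/frappe/erpnext", "branch": "version-14"},
  {"url": "https://github.com/frappe/wiki", "branch": "master"}
]
EOF
)

docker build \
  --build-arg=FRAPPE_BRANCH=version-14 \
  --build-arg=APPS_JSON_BASE64=$APPS_JSON_BASE64 \
  --file=images/custom/Containerfile \
  --tag=<your-registry>/erpnext-custom:v14 .

# then point image.repository and image.tag in custom-values.yaml at this image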

Perfect! That worked like a charm.

It also makes sense, since you only want to deploy already-tested images.

Thanks for all this great help!
