Credits
Firstly, I want to thank madwyn for his post:
How I installed ERPNext on a Kubernetes cluster on GKE
and avoronchikhin for his post:
Install frappe app in k8s (microk8s) on postgresql
for sharing their valuable experience and insights.
Also thanks to revant_one for kickstarting this whole project and actively contributing new features and changes.
Custom Image Setup
This topic documents the end-to-end steps for building a custom Frappe app's image and deploying that image to a remote Kubernetes cluster.
Prerequisites:
- a domain name
- a cluster with at least 3 nodes (I am using GKE)
Step 1: Build the container image
After building your custom app, you can create the image using the frappe_docker repository.
- Clone the frappe_docker repository
- Open a terminal and cd into the cloned directory
- Build your image (assuming your custom app is in a git repository):
export APPS_JSON='[
{
"url": "<CUSTOM-APP-GIT-LINK>",
"branch": "<BRANCH-NAME>"
}
]'
export APPS_JSON_BASE64=$(echo ${APPS_JSON} | openssl base64 -e)
NOTE: Change the encode command if necessary. This command is for macOS users. Check the frappe_docker repo for Linux.
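For Linux, a commonly used equivalent (verify against the frappe_docker repo) is:
export APPS_JSON_BASE64=$(echo ${APPS_JSON} | base64 -w 0)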
docker build --no-cache --platform=linux/arm64 \
--build-arg=FRAPPE_PATH=https://github.com/frappe/frappe \
--build-arg=FRAPPE_BRANCH=version-15 \
--build-arg=PYTHON_VERSION=3.11.6 \
--build-arg=NODE_VERSION=18.17.0 \
--build-arg=APPS_JSON_BASE64=$APPS_JSON_BASE64 \
--tag=<IMAGE_REPO_LINK:TAG> \
--file=images/custom/Containerfile .
NOTE: The --tag argument should be the full repository link plus tag to which the built image will be pushed.
NOTE: You can also use Google Cloud Build or any other builder with the appropriate commands/files to build this image.
NOTE: You can also use Google Cloud Artifact Registry or any other registry to push your built image, using the docker push command.
Cloud Build and Artifact Registry work well together; we just need to create a cloudbuild.yaml. Follow: Build container images | Cloud Build Documentation | Google Cloud
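For reference, a minimal cloudbuild.yaml sketch could look like this (the substitution variables _IMAGE and _APPS_JSON_BASE64 are my own illustrative names, not from the official docs):

steps:
  - name: "gcr.io/cloud-builders/docker"
    args:
      - build
      - --build-arg=FRAPPE_PATH=https://github.com/frappe/frappe
      - --build-arg=FRAPPE_BRANCH=version-15
      - --build-arg=APPS_JSON_BASE64=${_APPS_JSON_BASE64}
      - --tag=${_IMAGE}
      - --file=images/custom/Containerfile
      - .
# images listed here are pushed to the registry after the build succeeds
images:
  - ${_IMAGE}

It could then be triggered with something like:
gcloud builds submit --config=cloudbuild.yaml --substitutions=_IMAGE=<IMAGE_REPO_LINK:TAG>,_APPS_JSON_BASE64=$APPS_JSON_BASE64 .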
STEP 1B: Test your image locally using pwd.yml
You can test the created image using the pwd.yml file located within the frappe_docker repository.
NOTE: My modifications will also enable Postgres v14.11.
Edit the following within pwd.yml:
- Change every image value that references erpnext to your newly created image name, i.e. <image_name>:<tag>
- Other changes:
# ... denotes scroll down
...
------
configurator:
  ...
  environment:
    DB_HOST: db
    DB_PORT: "5432"
  ...
------
db:
  image: postgres:14.11
  command: []
  environment:
    POSTGRES_PASSWORD: changeit
  volumes:
    - db-data:/var/lib/postgresql/data
...
------
create-site:
  ...
  command:
    - >
      wait-for-it -t 120 db:5432;
      ...
      bench new-site --db-host=db --db-type=postgres --admin-password=admin --db-root-password=changeit --install-app <your_app_name> --set-default frontend;
NOTE: My app also contains a custom web UI.
Follow #BuildWithHussain Ep. 4: Full-stack Web App with React & Frappe Framework!
After making the changes, open a terminal in the frappe_docker directory and run:
docker compose -p pwd -f pwd.yml up -d
Then you can test your local deployment on localhost:8080/desk
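A quick smoke test from the terminal (this assumes the frontend is published on port 8080, as in the stock pwd.yml):
curl -sS http://localhost:8080/api/method/ping
# expected: {"message":"pong"}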
To shut the deployment down, run:
docker compose -p pwd down
STEP 2: Deploy using frappe/helm and Kubernetes
Create a Kubernetes cluster either locally or within Google Cloud (GKE) and set up the kubectl command to interact with the created cluster.
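For GKE, this is typically done with (placeholders are yours to fill in):
gcloud container clusters get-credentials <CLUSTER_NAME> --zone <ZONE> --project <PROJECT_ID>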
Clone the frappe/helm repository locally: Repo Link
Open the erpnext folder.
Here we have to change the values.yaml file according to our specific needs.
BUT first, we need a persistent volume claim within the cluster, which requires a storage class that supports shared (ReadWriteMany) access.
Locally deployed clusters ship with a predefined storage class, which we can use directly within the deployment.
Otherwise, we can create our own. The official documentation shows how to deploy an nfs storage class within a cluster. The commands are as follows:
kubectl create namespace nfs
helm repo add nfs-ganesha-server-and-external-provisioner https://kubernetes-sigs.github.io/nfs-ganesha-server-and-external-provisioner
helm upgrade --install -n nfs in-cluster nfs-ganesha-server-and-external-provisioner/nfs-server-provisioner --set 'storageClass.mountOptions={vers=4.1}' --set persistence.enabled=true --set persistence.size=12Gi
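Before moving on, you can confirm the storage class exists:
kubectl get storageclass
# expect an entry named "nfs" backed by the nfs-server-provisioner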
Now we can create another namespace and add the frappe/helm repository:
kubectl create namespace erpnext
helm repo add frappe https://helm.erpnext.com
NOTE: You can use any string as your namespace name. I am using erpnext.
Within your local values.yaml, make the following changes:
image:
  repository: <IMAGE-NAME>
  tag: <TAG>
...
------
persistence:
  worker:
    enabled: true
    # existingClaim: ""
    size: 10Gi
    storageClass: "nfs"
...
------
jobs:
  volumePermissions:
    enabled: true
...
------
mariadb:
  # https://github.com/bitnami/charts/tree/master/bitnami/mariadb
  enabled: false
...
------
postgresql:
  # https://github.com/bitnami/charts/tree/master/bitnami/postgresql
  enabled: true
  # host: ""
  image:
    tag: 14.11.0-debian-11-r13
  auth:
    username: "postgres"
    postgresPassword: "changeit"
...
Save the changes and run the following command in that same directory:
helm install frappe-bench -n erpnext -f values.yaml frappe/erpnext
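You can watch the rollout while you wait:
kubectl -n erpnext get pods -w
# Ctrl+C to stop watching once the pods settle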
Wait for the services to be deployed.
There will be a job called conf-bench which will fail / not execute completely.
Delete that job using its job id:
kubectl -n erpnext get jobs
kubectl -n erpnext delete job <conf_bench_job_id>
Now create a template for conf-bench:
helm template frappe-bench -n erpnext frappe/erpnext -f values.yaml -s templates/job-configure-bench.yaml > configure-bench-job.yaml
This will create a file called configure-bench-job.yaml. Edit this file with the following changes:
------------------
env:
  - name: DB_HOST
    value: frappe-bench-postgresql
  - name: DB_PORT
    value: "5432"
-------------------
Save the file and run in the terminal:
kubectl -n erpnext apply -f configure-bench-job.yaml
You can check logs using:
kubectl -n erpnext logs <YOUR_CONF_BENCH_POD_NAME> -c configure
Now we can enable create-site within values.yaml:
createSite:
  enabled: true
  forceCreate: true
  siteName: <YOUR-DOMAIN> # ip for local clusters
  adminPassword: "changeit"
  installApps:
    - "<YOUR_APP_NAME>"
  dbType: "postgres"
  backoffLimit: 0
  resources: {}
  nodeSelector: {}
  tolerations: []
  affinity: {}
After saving this file, run in terminal:
helm template frappe-bench -n erpnext frappe/erpnext -f values.yaml -s templates/job-create-site.yaml > create-new-site-job.yaml
Edit the create-new-site-job.yaml file with the following values:
------------------
args:
  - >
    bench new-site $(SITE_NAME)
    --db-type=$(DB_TYPE)
    --db-host=$(DB_HOST)
    --db-port=$(DB_PORT)
    --admin-password=$(ADMIN_PASSWORD)
    --db-root-username=$(DB_ROOT_USER)
    --db-root-password=$(DB_ROOT_PASSWORD)
    --install-app=frappe
    --force
    ;rm -f currentsite.txt
env:
  - name: "SITE_NAME"
    value: "<YOUR-DOMAIN>" # ip for local clusters
  - name: "DB_TYPE"
    value: postgres
  - name: "DB_HOST"
    value: frappe-bench-postgresql
  - name: "DB_PORT"
    value: "5432"
  - name: "DB_ROOT_USER"
    value: "postgres"
  - name: "DB_ROOT_PASSWORD"
    value: "changeit"
  - name: "ADMIN_PASSWORD"
    value: "changeit"
----------------------
After saving the changes, run in the terminal:
kubectl -n erpnext apply -f create-new-site-job.yaml
You can check the logs for this job the same way as mentioned above.
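For example (the pod name is a placeholder; list the pods first to find the exact one):
kubectl -n erpnext get pods | grep create-site
kubectl -n erpnext logs <YOUR_CREATE_SITE_POD_NAME>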
OPTIONAL (For Local Clusters)
Now you can port-forward on the host machine:
kubectl port-forward -n erpnext service/frappe-bench-erpnext 18080:8080 --address 0.0.0.0
Log in to Frappe at:
http://<ip>:18080
Verify the site is working. You can certainly check the pods in the K8s cluster too.
kubectl port-forward -n erpnext svc/frappe-bench-erpnext 8080:8080
curl -H "Host: YOUR-DOMAIN" http://0.0.0.0:8080/api/method/ping
It should return:
{"message":"pong"}
Now all that's left is to deploy an ingress. I have taken the long way and used ingress-nginx with cert-manager to expose my frappe app to the web.
You can also use gce (Google's own GKE load balancer) with gcloud ssl certs, or any other option that suits your requirements.
REQUEST FROM COMMUNITY
Please mention in the comments if you have configured your frappe app with other ingress options like gce, nginx-ingress, traefik, etc.
Setting Up ingress-nginx with cert-manager:
First we need to install cert-manager and ingress-nginx.
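A common way to install ingress-nginx is via its official Helm chart (the release and namespace names below are my assumption):
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace
The cert-manager installation is covered a few steps below.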
Now let's create the ingress object.
To do that, we need to generate a config from values.yaml. Edit it as follows:
# Ingress
ingress:
  ingressName: "custom-app-name"
  # className: ""
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
    # nginx.ingress.kubernetes.io/ssl-redirect: false
    kubernetes.io/tls-acme: "true"
    cert-manager.io/cluster-issuer: letsencrypt-staging
  hosts:
    - host: <YOUR_DOMAIN_NAME>
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: custom-app-tls
      hosts:
        - <YOUR_DOMAIN_NAME>
Save this file and run in terminal:
helm template frappe-bench -n erpnext frappe/erpnext -f values.yaml -s templates/ingress.yaml > ingress.yaml
After this, open ingress.yaml and make the following changes:
...
------
annotations:
  # cert-manager.io/cluster-issuer: letsencrypt-staging
  kubernetes.io/ingress.class: nginx
  # kubernetes.io/tls-acme: "true"
...
------
spec:
  # tls:
  #   - hosts:
  #       - "<YOUR_DOMAIN>"
  #     secretName: custom-app-app-tls
...
After commenting out the above lines, deploy with:
kubectl -n erpnext apply -f ingress.yaml
NOTE: Set an A record in your DNS zone pointing to the IP of the load balancer created by this ingress deployment.
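You can look up that IP with the following (the service name assumes a default Helm install of ingress-nginx):
kubectl -n ingress-nginx get svc ingress-nginx-controller \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'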
Now we need to install cert-manager:
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.4/cert-manager.yaml
You can check the deployed services with this:
kubectl -n cert-manager get all
cert-manager is deployed within the cert-manager namespace.
Now we can create Certificate Issuers. There are two types of issuer:
- Normal Issuer (Issuer)
- Cluster Issuer (ClusterIssuer)
Here, a cluster issuer is required for proper ingress generation and functioning.
I have deployed two cluster issuers, one for staging and the other for production, since production-ready certificates have strict rate limits on issuance.
The cluster issuers are deployed with the following terminal commands:
For the staging cluster issuer:
kubectl create --edit -f https://raw.githubusercontent.com/cert-manager/website/master/content/docs/tutorials/acme/example/staging-issuer.yaml
# expected output: issuer.cert-manager.io "letsencrypt-staging" created
Edit the email field and save with :wq. (Since the ingress annotation references a cluster issuer, also change kind: Issuer to ClusterIssuer while editing.)
For the production cluster issuer:
kubectl create --edit -f https://raw.githubusercontent.com/cert-manager/website/master/content/docs/tutorials/acme/example/production-issuer.yaml
# expected output: issuer.cert-manager.io "letsencrypt-prod" created
Edit the email field, make the same kind change as above, and save with :wq.
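You can confirm the issuers are ready before proceeding (assuming they were created as ClusterIssuer resources):
kubectl get clusterissuer
# both letsencrypt-staging and letsencrypt-prod should report READY: True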
Now edit the generated ingress.yaml file again with the following changes (uncommenting the lines from earlier):
...
------
annotations:
  cert-manager.io/cluster-issuer: letsencrypt-staging
  kubernetes.io/ingress.class: nginx
  kubernetes.io/tls-acme: "true"
...
------
spec:
  tls:
    - hosts:
        - "<YOUR_DOMAIN>"
      secretName: custom-app-app-tls
...
Deploy the changes with:
kubectl -n erpnext apply -f ingress.yaml
See if the certificate is generated by visiting your domain.
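You can also check from inside the cluster (cert-manager names the Certificate resource after the TLS secret, so the name below follows the secretName used in the ingress):
kubectl -n erpnext get certificate
kubectl -n erpnext describe certificate custom-app-app-tls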
If the staging certificate is generated successfully, change the annotation to
cert-manager.io/cluster-issuer: letsencrypt-prod
and rerun:
kubectl -n erpnext apply -f ingress.yaml
It should be live on your domain in 5-10 minutes.
CHEERS