Hi,
I have a demo cluster: two nodes running Ubuntu Server 22.
The nodes are two AWS EC2 instances (Intel Xeon Platinum 8259CL, 32 GB RAM).
I followed these steps to set up a small demo of ERPNext/Frappe:
```
kubectl create namespace erpnext
helm repo add frappe https://helm.erpnext.com
helm upgrade --install frappe-bench --namespace erpnext frappe/erpnext --set persistence.worker.storageClass=nfs
kubectl create namespace nfs
helm repo add nfs-ganesha-server-and-external-provisioner https://kubernetes-sigs.github.io/nfs-ganesha-server-and-external-provisioner
helm upgrade --install -n nfs in-cluster nfs-ganesha-server-and-external-provisioner/nfs-server-provisioner --set 'storageClass.mountOptions={vers=4.1}' --set persistence.enabled=true --set persistence.size=8Gi
```
This is the error:

```
$ kubectl logs frappe-bench-erpnext-worker-s-5dbf5cb994-l78b8 -n erpnext
Defaulted container "short" out of: short, populate-assets (init)
```
Also, when I run the describe command on one of the pending pods, I see:

```
Warning  FailedScheduling  11s (x12 over 41m)  default-scheduler  0/2 nodes are available: 2 pod has unbound immediate PersistentVolumeClaims. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
```
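In case it helps, these are the commands I used to gather the output above (a sketch; the pod name is the one from my cluster, yours will differ):

```
# List the claims in the erpnext namespace to see which ones are unbound
kubectl get pvc -n erpnext

# List the pods and describe a Pending one to see the scheduling events
kubectl get pods -n erpnext
kubectl describe pod frappe-bench-erpnext-worker-s-5dbf5cb994-l78b8 -n erpnext
```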
Unfortunately I don’t have much experience with K8s, so some handholding would be needed here, sorry folks.
The problem is with storage / persistent volumes, but I don’t understand whether I missed a step in the prerequisites or not.
I really appreciate any help you can provide.
This should be the sequence of commands:
```
kubectl create namespace nfs
helm repo add nfs-ganesha-server-and-external-provisioner https://kubernetes-sigs.github.io/nfs-ganesha-server-and-external-provisioner
helm upgrade --install -n nfs in-cluster nfs-ganesha-server-and-external-provisioner/nfs-server-provisioner --set 'storageClass.mountOptions={vers=4.1}' --set persistence.enabled=true --set persistence.size=8Gi
kubectl create namespace erpnext
helm repo add frappe https://helm.erpnext.com
helm upgrade --install frappe-bench --namespace erpnext frappe/erpnext --set persistence.worker.storageClass=nfs
```
- First set up NFS, which gives you an RWX storage class.
- Then set up ERPNext, which uses the RWX storage class.
- After each command, confirm the state of the pods in the namespace; proceed only after all pods are in the Running state (see the verification sketch after this list).
- If your Kubernetes distribution comes with some built-in RWX class, you can skip the NFS setup step and use the existing storageClass in the last command.
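A minimal verification sketch between steps (namespace and release names as in the commands above):

```
# After the NFS step: all pods in the nfs namespace should be Running
kubectl get pods -n nfs

# The provisioner should have registered an nfs storage class
kubectl get sc

# After the ERPNext step: watch the erpnext namespace until everything is Running
kubectl get pods -n erpnext -w
```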
Thank you so much, I am going to try and report back. Any chance we could do it together whenever you can, sharing the terminal?
Hello and thank you so much.
The problem is definitely with pv / pvc.
- There are no `pv` in my cluster.
- The pod `in-cluster-nfs-server-provisioner-0` is pending forever, since the PVC `data-in-cluster-nfs-server-provisioner-0` has this error: `no persistent volumes available for this claim and no storage class is set`.

Is it something related to the fact that I am using EC2 instances?
I have 100 GB of storage, 84% free, and I am stuck.
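For reference, this is how I am checking (claim name as above):

```
# No PVs exist at all
kubectl get pv

# The provisioner's own claim is stuck Pending with the error above
kubectl describe pvc data-in-cluster-nfs-server-provisioner-0 -n nfs
```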
Any help is welcome
Some progress.
I followed these steps to the letter, only changing the `sc` name to `nfs`.
I can see two `pvc` in the namespace `erpnext`:

- `data-frappe-bench-mariadb-0` → Pending
- `frappe-bench-erpnext` → Bound

I do not understand why the first `pvc` is unbound with the error `no persistent volumes available for this claim and no storage class is set`.
If I delete the `pvc`, change its `storageClass` to `nfs`, and re-create it, the volume is bound, but then I get a bunch of errors like this.
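For reference, the re-creation looked roughly like this (a sketch from memory; size and access mode match what the chart originally created):

```
# Delete the stuck claim, then re-create it pointing at the nfs storage class
kubectl delete pvc data-frappe-bench-mariadb-0 -n erpnext

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-frappe-bench-mariadb-0
  namespace: erpnext
spec:
  accessModes:
    - ReadWriteOnce      # MariaDB needs RWO
  storageClassName: nfs  # was unset before, hence "no storage class is set"
  resources:
    requests:
      storage: 8Gi
EOF
```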
Any ideas?
What are the available storage classes?

```
kubectl get sc
```

You need to attach PVCs for the nfs and MariaDB pods. If nothing is specified, they’ll pick up the default storage class. The PVC access mode for these two pods needs to be RWO.
Hello!
```
$ k get sc
NAME   PROVISIONER       RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs    example.com/nfs   Delete          Immediate           false                  29m
```
The `nfs` pod is `nfs-provisioner-777d7447c6-mfc7c` and is sitting in the `default` ns. Is this a problem?
Also, when you say “you need to attach pvc for nfs and Mariadb pods”: I did not attach any pvc to the `nfs` provisioner pod. I tried to deploy this claim from here and it worked, no problem.
It seems you don’t have a default storage class.

- Go for a managed Kubernetes offering; it has many things figured out.
- If setting up on your own, you’ll need to figure out many things that are way too much for someone who’s beginning with k8s. Or use some distro like Rancher, k3sup, etc.

Install this and set it as the default storage class: GitHub - kubernetes-sigs/aws-ebs-csi-driver: CSI driver for Amazon EBS (https://aws.amazon.com/ebs/).
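A minimal sketch of that install (assuming the driver’s official Helm chart; note the nodes also need IAM permissions to manage EBS volumes):

```
# Install the EBS CSI driver from its official chart
helm repo add aws-ebs-csi-driver https://kubernetes-sigs.github.io/aws-ebs-csi-driver
helm upgrade --install aws-ebs-csi-driver aws-ebs-csi-driver/aws-ebs-csi-driver -n kube-system

# Create a storage class backed by the driver and mark it as the default
kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
EOF
```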
Thank you. Before I give up, how can you tell I do not have a default `sc`?
```
❯ k get sc
NAME                  PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage   k8s-sigs.io/nfs-subdir-external-provisioner   Delete          Immediate           true                   634d
scw-bssd (default)    csi.scaleway.com                              Delete          Immediate           true                   2y219d
scw-bssd-retain       csi.scaleway.com                              Retain          Immediate           true                   2y219d
```
The default will have `(default)` tagged.
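If you need to tag one yourself, it’s an annotation you can patch onto an existing class (the class name here is just an example):

```
# Mark an existing storage class as the cluster default
kubectl patch storageclass ebs-sc -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
```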
Working on it. I see your point, I should go for managed k8s, but there is a method behind my madness: I am using this for my job, and the (albeit steep) learning curve is giving me massive help to progress in what I am doing.
I can’t express how helpful you are being, mate!
Ok, some MORE progress:
I did this and now:
```
k get sc
NAME            PROVISIONER       RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs (default)   example.com/nfs   Delete          Immediate           false                  44m
```
Therefore:

```
ubuntu@master:~$ k get pvc -A
NAMESPACE   NAME                          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
erpnext     data-frappe-bench-mariadb-0   Bound    pvc-c7d3c63f-f493-487e-aa06-0b936429e78c   8Gi        RWO            nfs            4m17s
erpnext     frappe-bench-erpnext          Bound    pvc-b5f2eb86-aa82-4451-a0e7-cbd0665d3b94   8Gi        RWX            nfs            4m18s
```
Which is great… but the pod `frappe-bench-mariadb-0` is still not starting:

```
Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[config data]: timed out waiting for the condition
```
Don’t use nfs as the default. Use block storage as the default; NFS is only for the sites volume.
nfs-server-provisioner needs some storage class that is not nfs in order to start. Once that starts, it can successfully provision nfs PVCs.
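For example, with a block storage class like the `ebs-sc` above, the provisioner’s own volume can be pinned explicitly (a sketch; `persistence.storageClass` is assumed to be the chart value for the server’s backing volume):

```
# Back the NFS server's own data volume with block storage,
# so it can start even though it is itself the nfs provisioner
helm upgrade --install -n nfs in-cluster \
  nfs-ganesha-server-and-external-provisioner/nfs-server-provisioner \
  --set 'storageClass.mountOptions={vers=4.1}' \
  --set persistence.enabled=true \
  --set persistence.storageClass=ebs-sc \
  --set persistence.size=8Gi
```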
In case you are on Amazon, use RDS instead of MariaDB and EFS instead of nfs. That will take care of your backups and snapshots from the AWS console/API.
Hi @Luca_Avalle, were you successful? I have found myself in the same conundrum. Please help.
Check https://www.youtube.com/watch?v=ARfvm-WJLrU. Recently we created an NFS server with an 8GB PVC and then tried to provision an 8GB NFS PVC to ERPNext sites; the nfs server pod showed an error.

- Create the NFS server with X GB.
- Provision Y GB to ERPNext sites.
- X must be greater than Y (see the sketch after this list).
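Concretely, something like this (sizes are just an example; `persistence.worker.size` is assumed to be the chart value for the sites volume):

```
# NFS server gets 10Gi (X) ...
helm upgrade --install -n nfs in-cluster \
  nfs-ganesha-server-and-external-provisioner/nfs-server-provisioner \
  --set 'storageClass.mountOptions={vers=4.1}' \
  --set persistence.enabled=true --set persistence.size=10Gi

# ... so the 8Gi (Y) sites volume fits inside it
helm upgrade --install frappe-bench --namespace erpnext frappe/erpnext \
  --set persistence.worker.storageClass=nfs \
  --set persistence.worker.size=8Gi
```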