Kubernetes setup stuck at ContainerCreating / Init

Hi all,
I have followed various guides on setting up ERPNext on Kubernetes, but for some reason I always get stuck at the same step, as the title says…

Here are my steps:
microk8s enable storage
kubectl config use-context microk8s
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo add frappe https://helm.erpnext.com
helm repo add nfs-ganesha-server-and-external-provisioner https://kubernetes-sigs.github.io/nfs-ganesha-server-and-external-provisioner
helm repo update

kubectl create namespace nfs
helm upgrade --install -n nfs in-cluster nfs-ganesha-server-and-external-provisioner/nfs-server-provisioner --set persistence.enabled=true --set persistence.size=30Gi

kubectl create namespace erpnext
helm upgrade --install frappe-bench --namespace erpnext frappe/erpnext --set persistence.worker.storageClass=nfs
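The `--set` flag above can also be kept in a values file instead; a minimal sketch using the same chart key (the file name `values.yaml` is my choice):

```yaml
# values.yaml -- equivalent to --set persistence.worker.storageClass=nfs
persistence:
  worker:
    storageClass: nfs
```

Then install with: helm upgrade --install frappe-bench --namespace erpnext frappe/erpnext -f values.yaml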

When doing kubectl get pvc -A

As you can see, the frappe-bench-erpnext PVC is bound with RWX access mode using the nfs storage class.

When doing kubectl get pods -A

The status shows ContainerCreating and Init:0/1.

When doing kubectl describe pods frappe-bench-erpnext-nginx-6556df574-jw6kd -n erpnext
It shows the below error

Checking the kubectl describe pvc frappe-bench-erpnext -n erpnext

What am I missing to get the pods running?

Thank you :slight_smile:

Install the required OS packages.

my search string was “pvc nfs bad option ubuntu”

Thanks for your reply …

Will do a fresh install and give a try…
Will feedback on the result.

Just installing nfs-common / nfs-utils (whichever the package is named on your distro) should solve the problem. Maybe restart after that.
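The package that provides the NFS mount helper differs per distro. A small sketch of that mapping (the function name and distro list are my own; the distro ID comes from /etc/os-release):

```shell
# Map a distro ID to the package that provides mount.nfs.
# Hypothetical helper for illustration only.
nfs_client_pkg() {
  case "$1" in
    ubuntu|debian) echo "nfs-common" ;;
    centos|rhel|fedora|rocky|almalinux) echo "nfs-utils" ;;
    *) echo "unknown" ;;
  esac
}

nfs_client_pkg ubuntu   # → nfs-common
nfs_client_pkg centos   # → nfs-utils
```

On Ubuntu nodes that would be `sudo apt-get install -y nfs-common`; it needs to be installed on every node that will mount the NFS volumes.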

Why reinstall? You'll end up in the same situation if the defaults don't include the needed packages.

I actually installed nfs-common, but it did not solve the problem.

So I created a new server. The new version of the NFS provisioner is broken, and I only now found that I need to install version 1.5.0.

I left the server running up to the NFS install; later I will do the Frappe install and see the results…

Once again thank you for tips…

:clap: :clap:

Hi @revant_one
Here is my result after rebuilding and using version 1.5.0 of NFS

get PVC -A

get SVC -n erpnext

Now I need to set up Ingress to make it available externally or locally, correct?

You'll need an ingress controller, which adds a LoadBalancer Service; in a cloud setup it will also provision a cloud load balancer. Then you can add the Ingress. The DNS should point to the load balancer IP.

If you just wish to test, you can kubectl port-forward the erpnext service on port 8080 and ping it: curl -H "Host: site-name.example.com" http://localhost:8080/api/method/ping

Hi @revant_one
Thanks again for your support… unfortunately mine did not work.
Let me describe my setup…

  • A VM running with Kubernetes installed.

  • On this VM, added erp.example.com to /etc/hosts (created site erp.example.com).

  • Inside local network

  • Firewall installed.

Installed Traefik, and to handle the external IP installed MetalLB (selected an IP as external).
Here are my PODS

Here are SVCs

On another console of the Kubernetes machine I ran curl -H "Host: erp.example.com" http://localhost:8080/api/method/ping
and got "Empty reply from server".

On the other console of the Kubernetes machine I had the port-forward running; here is the result:

Being on the same local network, if I ping erp.example.com, the ping returns errors.

Services and pods seem to be running.

Port-forward 8080:8080; the internal nginx service is also running on port 8080.

For Traefik, your Ingress / ingressClass will change.

Figure out the LB and networking, and you should be able to access the site. I've not used Traefik as an ingress controller; it should be similar to kubernetes/ingress-nginx.
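For example, with Traefik the class reference in the Ingress spec changes; a fragment sketch (the class name is assumed to match what the Traefik install registers):

```yaml
# Ingress spec fragment (networking.k8s.io/v1)
spec:
  ingressClassName: traefik
```

Older setups used the `kubernetes.io/ingress.class: traefik` annotation instead of the `ingressClassName` field.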

Thank you again…
I have redone it using NGINX, as you can see from the pods and SVCs.


But when doing port-forward 8080:80 from a local network console, it drops.

But if I do 8080:8080, it replies with PONG.

Does this mean my issue now is only with the LB/MetalLB, which is not allowing or receiving traffic from outside (port 80)?

What can you advise so I can check the traffic or find where the bottleneck is?

Thank you again for your support on the learning curve…

Assuming I have installed everything except the NGINX controller…
I created the site demo13b.angolaerp.co.ao
and ran this:
ingress:
  enabled: true
  ingressName: "demo13b-angola-co-ao"
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "false"
  hosts:
    - host: demo13b.angolaerp.co.ao
      paths:
        - path: /
          pathType: ImplementationSpecific

followed by the Generate Ingress YAML and Create Ingress resource (Apply)

Afterwards I install NGINX like this:
helm install ingress-nginx ingress-nginx --repo https://kubernetes.github.io/ingress-nginx --namespace ingress-nginx --create-namespace

Do you think this is the step I'm doing wrong, and the reason there is no link between the LB and erpnext-nginx? I mean on port 80.

Also, I see that the newly installed NGINX creates a new pod with a different nginx.conf.

Understand Ingress controller: Ingress Controllers | Kubernetes

Before the Ingress is created, the ingress controller should be in place.

Once the ingress controller is set up, you'll have a LoadBalancer Service and a cloud load balancer with a public IP. Use this IP to configure DNS for all sites.

After the ingress controller and IP/DNS configuration, create the Ingress resource. It should point to the erpnext service on port 8080. (I don't know where you came up with port 80 that you keep repeating in your posts; it won't work on port 80, it runs on port 8080.)

The Ingress resource tells the cluster to serve that service/port combination through the ingress controller.
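A minimal sketch of such an Ingress (the service name frappe-bench-erpnext and the host are assumptions; check `kubectl get svc -n erpnext` for the actual service name):

```yaml
# Hypothetical Ingress pointing at the erpnext service on port 8080.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: erp-example-com
  namespace: erpnext
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: erp.example.com            # your site name / DNS entry
    http:
      paths:
      - path: /
        pathType: ImplementationSpecific
        backend:
          service:
            name: frappe-bench-erpnext   # assumed service name
            port:
              number: 8080               # erpnext nginx service port
```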

For this to work you also need cert-manager.io installed


If you wish to have a local setup just to try out commands, use vscode devcontainers. custom_containers/kube-devcontainer.md at main · castlecraft/custom_containers · GitHub

Alternative k3s setup for self-hosted Kubernetes custom_containers/k3s-cluster.md at main · castlecraft/custom_containers · GitHub

I'd recommend starting with a DigitalOcean or Scaleway single-VM managed cluster. It will only cost you one VM to have a production-like environment to try things on.


Hi @revant_one
Once again thank you for your tips and links on Kubernetes… :clap: :clap:

I have managed to create and run my local VMs with Kubernetes.
Managed also to create my Custom images and cert-manager running too.

Now I'm playing around with multiple sites on the same pods, as the purpose of all this is to move my production (around 30 sites) to Kubernetes… If you have any advice on this, I will be very pleased to hear it.


Play around for some time. Try disaster scenarios. Try backup recovery. Use Restic + S3 + a CronJob; I'm using that to take regular backups.
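A minimal sketch of such a CronJob (the schedule, image tag, bucket URL, Secret name, and mount path are all hypothetical; the PVC name is taken from this thread):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: sites-backup          # hypothetical name
  namespace: erpnext
spec:
  schedule: "0 */6 * * *"     # every 6 hours
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: restic
            image: restic/restic
            args: ["backup", "/sites"]
            env:
            - name: RESTIC_REPOSITORY
              value: s3:https://s3.example.com/erpnext-backups  # hypothetical bucket
            - name: RESTIC_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: restic-secret   # hypothetical Secret
                  key: password
            # S3 credentials (AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY)
            # would come from another Secret via env or envFrom.
            volumeMounts:
            - name: sites
              mountPath: /sites
              readOnly: true
          volumes:
          - name: sites
            persistentVolumeClaim:
              claimName: frappe-bench-erpnext   # PVC name from this thread
```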

Here is a reference for the manual setup; it also has an example CronJob: custom_containers/kube-devcontainer.md at main · castlecraft/custom_containers · GitHub

If you are looking to dynamically add custom benches and then add sites to those benches using Kubernetes, I have something for the community: https://castlecraft.gitlab.io/k8s_bench_interface.