Auto-refresh not working - possible nginx ingress issue

I have a fresh install of Frappe/ERPNext v15 running in a Kubernetes environment. Pages are not auto-refreshing after items are added, even though I've confirmed that Auto Refresh is not disabled in List Settings. My guess is that the upstream (Kubernetes) nginx ingress is interfering with websockets.

I've tried adding the following annotations, but they have no impact. Does anyone have this working?
nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
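
For reference, here is roughly how the Ingress manifest looks with those annotations applied; the host, service name, and port below are placeholders rather than my exact values:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: erpnext
  annotations:
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
spec:
  ingressClassName: nginx
  rules:
    - host: erp.example.com        # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: erpnext      # placeholder; the ClusterIP service fronting the site
                port:
                  number: 8080     # placeholder; whatever port that service exposes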

Read more about nginx-ingress and sticky sessions for Socket.IO here: Using multiple nodes | Socket.IO

Thanks! That makes sense. I've added the following annotations to my ingress, but unfortunately they haven't had any impact on auto-refresh. This is actually a pretty simple deployment: a single-node cluster with a ClusterIP erpnext service. The ingress controller is registry.k8s.io/ingress-nginx/controller:v1.10

nginx.ingress.kubernetes.io/configuration-snippet: |
  # use the first entry of X-Forwarded-For as the hash key if present,
  # otherwise fall back to the direct client address
  set $forwarded_client_ip "";
  if ($http_x_forwarded_for ~ "^([^,]+)") {
    set $forwarded_client_ip $1;
  }
  set $client_ip $remote_addr;
  if ($forwarded_client_ip != "") {
    set $client_ip $forwarded_client_ip;
  }
nginx.ingress.kubernetes.io/upstream-hash-by: "$client_ip"
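
As an aside, ingress-nginx also supports cookie-based session affinity, which would be another way to keep socket.io polling requests pinned to one backend. A minimal sketch with an arbitrary cookie name; with a single replica behind the service, though, I wouldn't expect stickiness to matter either way:

nginx.ingress.kubernetes.io/affinity: "cookie"
nginx.ingress.kubernetes.io/session-cookie-name: "io"
nginx.ingress.kubernetes.io/session-cookie-max-age: "86400"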

I could not test many combinations because of an issue in the Helm chart; it should be fixed soon.

@Joshua_Restivo I see this issue in the setup wizard. Have you found a working ingress configuration that increases the socket timeout?

Not yet. Unfortunately, I’m still wrestling with it. :confused:

Actually, I’m wondering if it might not be a websockets problem. I set up an ssh tunnel with a port forward to the cluster server and configured an entry in my local hosts file so that I could access the erpnext instance directly. This bypasses the cluster’s ingress controller. The behavior was exactly the same. Deleted items disappear as expected but adding a new Item (or any new object in a list view) fails to update without a manual refresh.

Edit: Nope, definitely a websocket problem. In the Chrome developer console, I can see the websocket sessions. They all report a status of 101, but no traffic goes across them (Size remains zero).

I will be working on it over the next few days; please let me know if you figure it out.

Well, the following annotations on the ingress seem to be working for me. I no longer get the timeout issue.

These annotations are documented in the nginx ingress docs:

ingress:
  annotations:
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
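
If you'd rather not annotate every Ingress, the same timeouts can also be set controller-wide through the ingress-nginx ConfigMap. A minimal sketch, assuming a default Helm install where the ConfigMap is named ingress-nginx-controller in the ingress-nginx namespace; adjust to your installation:

apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # name and namespace depend on how the controller was installed
  namespace: ingress-nginx
data:
  proxy-read-timeout: "3600"
  proxy-send-timeout: "3600"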

One hour (3600 seconds) is of course too long; you can use a shorter timeout, and 300 seconds is probably reasonable.

Thanks for the follow-up. Unfortunately, I already had those annotations on my ingress, so that wasn't the solution here.