Best way to deploy new versions of custom app in self hosted docker setup

I have a new install of Frappe/ERPNext version 14 running in Docker. I also built an image for my custom app following the directions in GitHub - frappe/frappe_docker: Docker images for production and development setups of the Frappe framework and ERPNext. All of this is running fine now.

My question is: what is the best way to deploy a new version of my app with the least amount of user interruption? One issue I have is that when I rebuild my app image (using docker buildx bake) and then re-deploy all the docker containers, the assets volume is not updated with the newly built assets from the custom app image. I can see that the updated JavaScript bundles are in the new custom app image, but not in the running container. If I just delete the assets volume and then start all the containers again, it is re-populated with the new data from the custom app image. But that seems like a strange way to have to update it. Is there a better way to make that happen?

Also, in order to delete the assets volume I have to shut down all the containers (quite a few of them mount that volume) and then restart them. This results in an obvious interruption for users (over 35 seconds).

With my old setup, everything was just installed on a single VM and I could simply update the git repo for the production branch of my custom app. If only JavaScript or HTML had changed, I didn't even have to restart the server (just running bench build --app app_name to rebuild assets). Even a change that did require a server restart only took a few seconds.
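For context, that old workflow looked roughly like this (paths, branch and app name are just examples from my setup):

cd /home/frappe/frappe-bench/apps/app_name    # path on the old VM; adjust to your bench and app
git pull origin production                    # update the production branch of the custom app
bench build --app app_name                    # rebuild assets; enough when only JS/HTML changed
bench restart                                 # only needed when server-side code changed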

So am I missing anything? Is there a better way to deploy app updates?

Thank you.

Hi!

I’m facing the same issues when trying to deploy my custom app using Docker: when the updated Docker image is pulled, the apps folder contains the new CSS and JS assets, but the assets.json file within the sites directory still references old (and now non-existent) file names.

I was thinking about forking the frappe_docker repo and modifying the Dockerfile so that the assets.json file is copied to some other path outside the sites directory and then moved into the sites folder automatically when the container starts, but this just seems wrong…

Did you ever come up with a nice solution?

Basti

Assets are not supposed to be part of any volume.

If they are not in a volume they’ll get replaced by new ones on image update and container restart.

Hi Revant!

Thank you very much for your help.
I understand the assets folder should not be part of any volume.

My understanding is that Line 134 in the Dockerfile creates an unnamed volume for the sites/assets folder:

VOLUME [ \
  "/home/frappe/frappe-bench/sites", \
  "/home/frappe/frappe-bench/sites/assets", \
  "/home/frappe/frappe-bench/logs" \
]

When the container is deployed for the first time (I’m using a Portainer stack, but was also able to replicate the behavior using a local docker compose setup), this unnamed volume is created automatically and is empty. As soon as the containers are created, the asset files (especially assets.json and assets-rtl.json) are copied from the container into this volume.

When the stack is redeployed (in my case the image is tagged “latest”; it is rebuilt and pushed to a private repository before the re-pull and re-deployment is triggered), this anonymous volume persists and already contains the “old” assets.json and assets-rtl.json files. When the container is now started, it does not replace the contents of the assets directory with the correct files from the new build.

This leads to a situation where there is a mismatch between the filenames referenced in the assets.json file (the ones from the last build) and the filenames of the actual files (they are symlinked into the apps/x/public folder, so they are not part of the persistent volume).

For some weird reason, it appears as if at some point the “new” assets.json does end up in the volume. I have no clue how this works. Interestingly enough, a kind of pattern emerges:

1st deploy: Generated asset files and assets.json are in sync
2nd deploy: The generated asset files are new, but the assets.json file now refers to the filenames of the 1st deploy
3rd deploy: The generated asset files are new, but the assets.json file now refers to the filenames of the 2nd deploy
[…]

The only explanation I can think of is some weird interaction between the different containers that share the same image, where one “old” container writes the “old” version into the newly created volume?
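If it helps with debugging, you can check which volumes a container actually has mounted; this is just a generic inspection sketch (replace the container name with whatever your stack created):

docker inspect --format '{{ json .Mounts }}' frappe_docker-backend-1   # shows named and anonymous volumes with their mount points
docker volume ls -f dangling=true                                      # anonymous volumes no longer used by any container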

Thank you so much for your help!

Basti

Share your list of apps so I can try locally.

I didn’t look at the assets.json file.

The assets volume is specified because, if a blank volume is mounted for sites, the assets go missing.

That location needs to be present for the symlinks.
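Something like this shows what I mean (replace the container name with whatever your stack created):

docker exec -it frappe_docker-backend-1 ls -la sites/assets
# each installed app (frappe, your custom app, ...) shows up as a symlink
# pointing into apps/<app>/<app>/public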

Hi Revant!

I came up with step-by-step instructions on how to replicate the problem.
They don’t install any custom app other than frappe itself, and also omit the frontend, scheduler and queue containers.

I hope this minimal example can help others to replicate the behavior.

Preparation:

git clone https://github.com/frappe/frappe_docker
cd frappe_docker

Configure demo custom image:

(I’m running on macOS, so just using base64 instead of base64 -w 0)

export APPS_JSON='[]'
export APPS_JSON_BASE64=$(echo ${APPS_JSON} | base64)

Build demo custom image:

docker build \
  --build-arg=FRAPPE_PATH=https://github.com/frappe/frappe \
  --build-arg=FRAPPE_BRANCH=version-14 \
  --build-arg=PYTHON_VERSION=3.10.12 \
  --build-arg=NODE_VERSION=16.20.1 \
  --build-arg=APPS_JSON_BASE64=$APPS_JSON_BASE64 \
  --no-cache \
  --tag=frappe-custom:latest \
  --file=images/custom/Containerfile .

Create a minimal compose-demo.yaml based on the one used in frappe_docker. We omit the frontend image and the other backend images, as we don’t need them to reproduce the issue:

nano compose-demo.yaml

Insert file content:

x-customizable-image: &customizable_image
  # By default the image used only contains the `frappe` and `erpnext` apps.
  # See https://github.com/frappe/frappe_docker/blob/main/docs/custom-apps.md
  # about using custom images.
  image: frappe-custom:latest

x-depends-on-configurator: &depends_on_configurator
  depends_on:
    configurator:
      condition: service_completed_successfully

x-backend-defaults: &backend_defaults
  <<: [*depends_on_configurator, *customizable_image]
  volumes:
    - sites:/home/frappe/frappe-bench/sites
    
services:
  configurator:
    <<: *backend_defaults
    entrypoint:
      - bash
      - -c
    command:
      - >
        ls -1 apps > sites/apps.txt;
    environment:
      SOCKETIO_PORT: 9000
    depends_on: {}

  backend:
    <<: *backend_defaults

volumes:
  sites:
  

Deploy the docker-compose stack:

docker compose -f compose-demo.yaml up

Verify in a new terminal window that the generated asset filename and the filename referenced in assets.json match:

(In new terminal window)

[Host] docker exec -it frappe_docker-backend-1 bash # you possibly have to modify the container name
[In Container] ls -la sites/assets/frappe/dist/css | grep website.bundle # Note filename of the asset css file
[In Container] tail -n5 sites/assets/assets.json # Note filename of the file referenced in assets.json
[In Container] exit

Up until this point, the two filenames are identical and the application would work as expected (if we had a frontend container etc. running).
In my case, the filename in both cases is:

website.bundle.WFSQXEFO.css

(We only compare this one file for brevity, the issue is the same for all asset files)

Now we build a new version of the image and redeploy the stack:

docker build \
  --build-arg=FRAPPE_PATH=https://github.com/frappe/frappe \
  --build-arg=FRAPPE_BRANCH=version-14 \
  --build-arg=PYTHON_VERSION=3.10.12 \
  --build-arg=NODE_VERSION=16.20.1 \
  --build-arg=APPS_JSON_BASE64=$APPS_JSON_BASE64 \
  --no-cache \
  --tag=frappe-custom:latest \
  --file=images/custom/Containerfile .

(Edit: Add --no-cache to make sure the image is not cached when building again without any changes)

Optional: Verify the new image is part of our local docker image list using
docker image ls and looking at the “created” column.

Redeploy the docker-compose stack

(note how the terminal window where we deployed the stack the first time is still open, and docker compose is still running there! We don’t stop the stack, we just deploy an update):

docker compose -f compose-demo.yaml up

Leave the terminal and command running and open a third console window to verify the assets.json file:

[Host] docker exec -it frappe_docker-backend-1 bash # you possibly have to modify the container name
[In Container] ls -la sites/assets/frappe/dist/css | grep website.bundle # Note filename of the asset css file
[In Container] tail -n5 sites/assets/assets.json # Note filename of the file referenced in assets.json
[In Container] exit

(Edit: One all-in-one command to check the asset file hashes)

docker exec -it frappe_docker-backend-1 bash -c 'ls -la sites/assets/frappe/dist/css | grep website.bundle && tail -n5 sites/assets/assets.json | grep website.bundle'

Note how the filename of the assets files and the ones referenced in assets.json now deviate. In my case, the output is as follows:

frappe@2f7e2d6fd893:~/frappe-bench$ ls -la sites/assets/frappe/dist/css | grep website.bundle
-rw-r--r-- 1 frappe frappe 424294 Jun 26 13:21 website.bundle.TYF3V5B2.css
-rw-r--r-- 1 frappe frappe 644921 Jun 26 13:21 website.bundle.TYF3V5B2.css.map
frappe@2f7e2d6fd893:~/frappe-bench$ tail -n5 sites/assets/assets.json
    "print_format.bundle.css": "/assets/frappe/dist/css/print_format.bundle.G2J7LXX4.css",
    "report.bundle.css": "/assets/frappe/dist/css/report.bundle.QOWEEDD3.css",
    "web_form.bundle.css": "/assets/frappe/dist/css/web_form.bundle.S4ZINDVU.css",
    "website.bundle.css": "/assets/frappe/dist/css/website.bundle.WFSQXEFO.css"
}frappe@2f7e2d6fd893:~/frappe-bench$ 

The actual built CSS file is named differently in the new container build, but assets.json still references the old filename.

When we were running the frontend and the other containers, we could see that, when opening the application in a web browser, the assets would fail to load (404).


I found a temporary solution by using a modified version of the Dockerfile that moves the assets folder out of the sites directory, replaces it with a symbolic link, and disables the sites/assets volume (see below).

One downside of this approach is that it probably doesn’t work for existing installations, as it can’t replace the assets folder with a symbolic link if that directory already exists in the volume.
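One idea for existing installations (just a sketch of the approach, I haven’t verified it) would be a small migration step in the entrypoint that swaps an existing assets directory in the volume for the symlink before the server starts:

#!/bin/bash
# Hypothetical entrypoint wrapper; assumes assets were baked into
# /home/frappe/frappe-bench/assets at build time, as in the Dockerfile below.
ASSETS_LINK=/home/frappe/frappe-bench/sites/assets
if [ -d "$ASSETS_LINK" ] && [ ! -L "$ASSETS_LINK" ]; then
  # An old install has a real assets directory inside the sites volume:
  # replace it with a symlink to the baked-in assets.
  rm -rf "$ASSETS_LINK"
  ln -s /home/frappe/frappe-bench/assets "$ASSETS_LINK"
fi
exec "$@"   # hand off to the original command (gunicorn etc.)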

Are there any better ways? I would very much like to help find a nice solution for this.

Updated Dockerfile

ARG PYTHON_VERSION=3.11.4
ARG DEBIAN_BASE=bookworm
FROM python:${PYTHON_VERSION}-slim-${DEBIAN_BASE} AS base

COPY resources/nginx-template.conf /templates/nginx/frappe.conf.template
COPY resources/nginx-entrypoint.sh /usr/local/bin/nginx-entrypoint.sh

ARG WKHTMLTOPDF_VERSION=0.12.6.1-3
ARG WKHTMLTOPDF_DISTRO=bookworm
ARG NODE_VERSION=18.16.1
ENV NVM_DIR=/home/frappe/.nvm
ENV PATH ${NVM_DIR}/versions/node/v${NODE_VERSION}/bin/:${PATH}

RUN useradd -ms /bin/bash frappe \
    && apt-get update \
    && apt-get install --no-install-recommends -y \
    curl \
    git \
    vim \
    nginx \
    gettext-base \
    # weasyprint dependencies
    libpango-1.0-0 \
    libharfbuzz0b \
    libpangoft2-1.0-0 \
    libpangocairo-1.0-0 \
    # For backups
    restic \
    # MariaDB
    mariadb-client \
    # Postgres
    libpq-dev \
    postgresql-client \
    # For healthcheck
    wait-for-it \
    jq \
    # NodeJS
    && mkdir -p ${NVM_DIR} \
    && curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.2/install.sh | bash \
    && . ${NVM_DIR}/nvm.sh \
    && nvm install ${NODE_VERSION} \
    && nvm use v${NODE_VERSION} \
    && npm install -g yarn \
    && nvm alias default v${NODE_VERSION} \
    && rm -rf ${NVM_DIR}/.cache \
    && echo 'export NVM_DIR="/home/frappe/.nvm"' >>/home/frappe/.bashrc \
    && echo '[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"  # This loads nvm' >>/home/frappe/.bashrc \
    && echo '[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion"  # This loads nvm bash_completion' >>/home/frappe/.bashrc \
    # Install wkhtmltopdf with patched qt
    && if [ "$(uname -m)" = "aarch64" ]; then export ARCH=arm64; fi \
    && if [ "$(uname -m)" = "x86_64" ]; then export ARCH=amd64; fi \
    && downloaded_file=wkhtmltox_${WKHTMLTOPDF_VERSION}.${WKHTMLTOPDF_DISTRO}_${ARCH}.deb \
    && curl -sLO https://github.com/wkhtmltopdf/packaging/releases/download/$WKHTMLTOPDF_VERSION/$downloaded_file \
    && apt-get install -y ./$downloaded_file \
    && rm $downloaded_file \
    # Clean up
    && rm -rf /var/lib/apt/lists/* \
    && rm -fr /etc/nginx/sites-enabled/default \
    && pip3 install frappe-bench \
    # Fixes for non-root nginx and logs to stdout
    && sed -i '/user www-data/d' /etc/nginx/nginx.conf \
    && ln -sf /dev/stdout /var/log/nginx/access.log && ln -sf /dev/stderr /var/log/nginx/error.log \
    && touch /run/nginx.pid \
    && chown -R frappe:frappe /etc/nginx/conf.d \
    && chown -R frappe:frappe /etc/nginx/nginx.conf \
    && chown -R frappe:frappe /var/log/nginx \
    && chown -R frappe:frappe /var/lib/nginx \
    && chown -R frappe:frappe /run/nginx.pid \
    && chmod 755 /usr/local/bin/nginx-entrypoint.sh \
    && chmod 644 /templates/nginx/frappe.conf.template

FROM base AS builder

RUN apt-get update \
    && DEBIAN_FRONTEND=noninteractive apt-get install --no-install-recommends -y \
    # For frappe framework
    wget \
    # For psycopg2
    libpq-dev \
    # Other
    libffi-dev \
    liblcms2-dev \
    libldap2-dev \
    libmariadb-dev \
    libsasl2-dev \
    libtiff5-dev \
    libwebp-dev \
    redis-tools \
    rlwrap \
    tk8.6-dev \
    cron \
    # For pandas
    gcc \
    build-essential \
    libbz2-dev \
    && rm -rf /var/lib/apt/lists/*

# apps.json includes
ARG APPS_JSON_BASE64
RUN if [ -n "${APPS_JSON_BASE64}" ]; then \
    mkdir /opt/frappe && echo "${APPS_JSON_BASE64}" | base64 -d > /opt/frappe/apps.json; \
  fi

USER frappe

ARG FRAPPE_BRANCH=version-14
ARG FRAPPE_PATH=https://github.com/frappe/frappe
RUN export APP_INSTALL_ARGS="" && \
  if [ -n "${APPS_JSON_BASE64}" ]; then \
    export APP_INSTALL_ARGS="--apps_path=/opt/frappe/apps.json"; \
  fi && \
  bench init ${APP_INSTALL_ARGS}\
    --frappe-branch=${FRAPPE_BRANCH} \
    --frappe-path=${FRAPPE_PATH} \
    --no-procfile \
    --no-backups \
    --skip-redis-config-generation \
    --verbose \
    /home/frappe/frappe-bench && \
  cd /home/frappe/frappe-bench && \
  echo "{}" > sites/common_site_config.json && \
  find apps -mindepth 1 -path "*/.git" | xargs rm -fr


# Move assets out of sites folder
RUN mv /home/frappe/frappe-bench/sites/assets /home/frappe/frappe-bench/assets
RUN ln -s /home/frappe/frappe-bench/assets /home/frappe/frappe-bench/sites/assets


FROM base as backend

USER frappe

COPY --from=builder --chown=frappe:frappe /home/frappe/frappe-bench /home/frappe/frappe-bench

WORKDIR /home/frappe/frappe-bench

VOLUME [ \
  "/home/frappe/frappe-bench/sites", \
#  "/home/frappe/frappe-bench/sites/assets", \
  "/home/frappe/frappe-bench/logs" \
]

CMD [ \
  "/home/frappe/frappe-bench/env/bin/gunicorn", \
  "--chdir=/home/frappe/frappe-bench/sites", \
  "--bind=0.0.0.0:8000", \
  "--threads=4", \
  "--workers=2", \
  "--worker-class=gthread", \
  "--worker-tmp-dir=/dev/shm", \
  "--timeout=120", \
  "--preload", \
  "frappe.app:application" \
]

Try mounting an NFS volume or a bind mount that starts empty.
Do the assets get symlinked into the empty volume?
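For a quick check, something along these lines should do (image name and paths as used earlier in this thread):

mkdir empty-sites
docker run --rm -v $(pwd)/empty-sites:/home/frappe/frappe-bench/sites \
  frappe-custom:latest ls -la sites/assets    # does anything get (sym)linked into the empty mount?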

Somehow on my setup the assets volume is re-created on container update and the old one becomes a dangling volume that can be pruned (the old container is also stopped).

My setup: custom_containers/docs/docker-swarm.md at main · castlecraft/custom_containers · GitHub

Could it be that swarm handles redeploys of stacks differently?
In my Portainer instance I’m not using swarm, just a local Docker engine (and the same goes for my local dev setup).

If it really turns out the local docker instance is the issue, do you think we should come up with a version of frappe_docker that also works there?

I think the assets directory maybe shouldn’t even be placed within the sites directory in the first place (it appears to contain no site-specific files?), but I have no clue how big a change that would be for the entire framework and apps. Certainly only a long-term thing to think about.

Yes.

It stops the running container and creates a new container with the new image. A similar thing happens on Kubernetes, where a new pod gets created and the running pod terminates.

If you are using it in production, do docker swarm init. More on dockerswarm.rocks.
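A minimal sketch of that move (stack name and compose file name are placeholders):

docker swarm init                             # turn this node into a single-node swarm manager
docker stack deploy -c compose.yaml frappe    # deploy the stack; "frappe" is a placeholder stack name
# after building and pushing a new image, rerun the same stack deploy command;
# swarm then replaces the old containers with new ones from the new image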


Thanks!

Looks like I’m going to migrate the prod environment to swarm then…

As the thread talks about the “best way” for “self hosted docker”:

Containers for running benches

  • Add managers to docker swarm
  • Add workers to docker swarm

Non container setup for DB and NFS

DB and NFS in Container

  • The DB can remain on the manager node using labels like the ones used to set up Traefik and Portainer on dockerswarm.rocks. In that case, only NFS is set up on a separate server.
  • I’ve not tried an NFS setup in swarm that can be used by other stacks. For now, set up a separate NFS server only if you need to scale the bench across servers later.

Use a self-hosted NFS server from docker swarm stacks:

volumes:
  sites:
    driver_opts:
      type: "nfs"
      o: "addr=1.2.3.4,nfsvers=4,rw,nolock,soft"
      # AWS EFS
      # o: "addr=fs-55Nuv9e5kB2W2ajdL.efs.us-east-1.amazonaws.com,nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport"
      device: ":/var/nfs/general/bench01"
# change ip from 1.2.3.4 to your NFS server ip
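For completeness, the matching export on a self-hosted NFS server could look roughly like this (Debian/Ubuntu example; the path is taken from the device above, the network range is a placeholder):

sudo apt-get install nfs-kernel-server
sudo mkdir -p /var/nfs/general/bench01
echo "/var/nfs/general/bench01 10.0.0.0/24(rw,sync,no_subtree_check)" | sudo tee -a /etc/exports
sudo exportfs -ra    # re-export the updated /etc/exports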

Just a quick update:

I am now using docker swarm and followed the best practices you outlined, works flawlessly now.

Thanks!


Hi,

I am having the same issue, but I am unable to understand what you did to solve it.

My deployment is in a Kubernetes environment. The sites folder is mounted from an NFS server, and assets.json is under the assets folder in the sites folder.

When I deploy a new version of the image, the file names of the bundles under the js and css directories (in frappe and erpnext as well as other modules) usually change. However, assets.json does not get updated to point to the new file names in the new images, and I get 404 errors until I manually change the contents of assets.json to point to the new file names.

I didn’t understand how you managed to get that resolved so that assets.json gets updated automatically?

Are you using CRI-O?

Check this assets are not properly mounted when using CRI-O as container engine · Issue #181 · frappe/helm · GitHub

I am not using CRI-O, and it should have nothing to do with the mounting issue. It is just that assets.json has to be updated to point to the new asset files when we deploy a new version of the images.

I never got it working using plain docker compose. I switched to docker swarm (using Portainer in my case, but that shouldn’t make a difference) and then the problem was gone.

I still think that this is a problem, as it should also work with just docker compose in my opinion.

I never tried a K8s setup, but I assume it’s the same root cause.

@revant_one, in your #BuildWithHussain video from a few weeks ago you also seem to be using docker compose. Were you able to replicate the problem? (I didn’t watch the entire episode yet, sorry!)

If someone finds a nice solution to work around this, I would love to know.

Hello @ba_basti,
I am new to this forum and new to Frappe, but I managed to get enough working that I created a custom app. I am trying to deploy it the same way you are doing here, but I fail at getting the docker-compose stack to work.

Would you be able to help by sharing your entire compose file and env file, or any instructions on what to include if I am trying to do it in a similar way to you (other than the documentation in the frappe_docker repo, because I couldn’t get it to work with that alone)?

I need to do a simple deploy using the image I built with my custom app. The problem is that when I run the compose file in its simplest state (replacing the backend image with my custom-built one), the compose stack fails and a lot of errors come up.

So, to sum up:

Please share your simplest docker compose file for a custom image built in the same way you mentioned in your question (and maybe the env file, to be sure).

I’ve been using a plain docker compose stack with my custom app for over a year now. I handle the assets problem by just deleting the <project_name>_assets volume before restarting. It seemed like a hack at first, but it has never caused any problems. I just wrote a script to do the restart, so that step doesn’t get missed.

Here is my restart script:

docker compose -p $PROJECT -f config/$PROJECT.yaml down ;   # stop the whole stack
docker volume rm ${PROJECT}_assets;   # remove the stale assets volume so it is repopulated from the new image
docker compose -p $PROJECT -f config/$PROJECT.yaml up -d ;   # recreate the containers (and the assets volume)
docker compose --project-name $PROJECT exec backend bench --site example.org clear-cache   # clear the site cache after the update
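If you save that as, say, restart.sh (name and project are just examples), usage is:

PROJECT=myproject ./restart.sh    # myproject = the compose project name passed to -p above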
