I have a problem with Docker volumes.
I created an image and pushed it to Docker Hub with tag 1.0.
Then I pulled it and ran the containers (I did not change anything in pwd.yml from the official frappe_docker repo).
After that I changed some code in my app and rebuilt the image with tag 1.1.
When I brought all the old containers down and brought them back up to create new containers with tag 1.1, all my data was lost.
How can I keep it?
Hi @meeeeeeeee, I suppose you are not setting the volumes up as persistent, so your data does not survive a container restart.
To get to the bottom of it, could you share your current compose file and the commands you ran? Then we can easily help you avoid losing your data.
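As background: data written into a named volume survives the removal of the containers that used it, while anything written only to a container's own filesystem is gone once that container is deleted. A minimal sketch of that behaviour (the volume name `demo-data` is made up for illustration; this needs a running Docker daemon):

```shell
# Create a named volume and write into it from a throwaway container.
docker volume create demo-data
docker run --rm -v demo-data:/data alpine sh -c 'echo hello > /data/marker'

# The first container is gone, but a brand-new container sees the data.
docker run --rm -v demo-data:/data alpine cat /data/marker

# Only an explicit removal (or `docker compose down -v`) deletes the data.
docker volume rm demo-data
```

If the second `docker run` does not print the marker, the data was never in a named volume to begin with.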
@pronext
My compose file follows frappe_docker/pwd.yml at main · frappe/frappe_docker · GitHub.
I just changed the image name to my own image, and I did the following:
step 1: build my image
docker build \
--build-arg=FRAPPE_PATH=https://github.com/frappe/frappe \
--build-arg=FRAPPE_BRANCH=develop \
--build-arg=PYTHON_VERSION=3.11.4 \
--build-arg=NODE_VERSION=18.17.1 \
--build-arg=APPS_JSON_BASE64=$APPS_JSON_BASE64 \
--tag=hieu/frappe:v1.0 \
--file=images/custom/Containerfile .
step 2: docker push hieu/frappe:v1.0
step 3: docker compose -p frappe -f pwd.yml up -d
step 4: change some code in my custom repo
step 5: rebuild the image with another tag
docker build --no-cache \
--build-arg=FRAPPE_PATH=https://github.com/frappe/frappe \
--build-arg=FRAPPE_BRANCH=develop \
--build-arg=PYTHON_VERSION=3.11.4 \
--build-arg=NODE_VERSION=18.17.1 \
--build-arg=APPS_JSON_BASE64=$APPS_JSON_BASE64 \
--tag=hieu/frappe:v1.1 \
--file=images/custom/Containerfile .
step 6: docker push hieu/frappe:v1.1
step 7: docker compose -p frappe -f pwd.yml down
step 8: docker rm $(docker ps -aq)
step 9: docker pull hieu/frappe:v1.1
step 10: change image in pwd.yml from v1.0 to v1.1 and
docker compose -p frappe -f pwd.yml up -d
I did not delete any volume, but the data still does not persist.
My steps followed Overview Frappe, Docker, self-hosting - Install / Update / Deployment - Frappe Forum.
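One quick way to narrow this down is to check whether the named volumes still exist, with their data, right after step 7's `down` command. A sketch, assuming the project name `frappe` from the commands above (the exact volume names, e.g. `frappe_sites`, depend on the `volumes:` keys in pwd.yml, so adjust them after looking at the list):

```shell
# List the volumes that belong to the "frappe" compose project.
docker volume ls --filter name=frappe

# Inspect one of them to find where its data lives on the host.
docker volume inspect frappe_sites

# Peek inside without starting the full stack.
docker run --rm -v frappe_sites:/sites alpine ls -la /sites
```

If the volumes are still listed and populated after `down`, the data survived and the problem is in how the new containers mount or use them, not in the teardown.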
@meeeeeeeee
It looks fine to me. I suppose it is either your usage of the compose file, or the problem is in your custom image.
What is the status of your create-site and configurator services after you switch to the new image?
Logs from the configurator and create-site services would be nice.
In my setup, the create-site service has a site name configured, so on an image reload it simply fails if the site already exists.
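For reference, the status and logs in question can be pulled like this (assuming the same project name and compose file as in the steps above):

```shell
# Show all services including the one-shot ones that have already exited.
docker compose -p frappe -f pwd.yml ps -a

# Dump the logs of the two services that configure and create the site.
docker compose -p frappe -f pwd.yml logs configurator create-site
```

A create-site service that exits with "site already exists" on the second `up` is actually a good sign: it means the sites volume was preserved.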
I used a combined version of:
x-customizable-image: &customizable_image
  # By default the image used only contains the `frappe` and `erpnext` apps.
  # See https://github.com/frappe/frappe_docker/blob/main/docs/custom-apps.md
  # about using custom images.
  image: ${CUSTOM_IMAGE:-frappe/erpnext}:${CUSTOM_TAG:-${ERPNEXT_VERSION:?No ERPNext version or tag set}}
  pull_policy: ${PULL_POLICY:-always}

x-depends-on-configurator: &depends_on_configurator
  depends_on:
    configurator:
      condition: service_completed_successfully

x-backend-defaults: &backend_defaults
  <<: [*depends_on_configurator, *customizable_image]
  volumes:
    - sites:/home/frappe/frappe-bench/sites

services:
  configurator:
    <<: *backend_defaults
and
For me, none of the compose templates provided on GitHub were working at all.
That is why I took the two files from my last post and combined them, adding environment variables for the site name and all the volumes.
I suggest you check your volumes under /var/lib/docker/volumes after running the down command, to be sure they are still there with all the data you need. If they are not, it is definitely an error in your compose file pwd.yml.
This is the compose file I am using, and it works like a charm. You need to enter your image tag and your site header URL as an FQDN, and define all the secrets it needs.
Docker setup for Frappe is quite tricky and not really straightforward, especially if you are building your own image.
version: "3.9"

x-common-erpnextimage: &common-erpnextimage
  image: #ENTER-YOUR-IMAGE-HERE

x-common-erpnextvolumes: &common-erpnextvolumes
  volumes:
    - sites:/home/frappe/frappe-bench/sites
    - logs:/home/frappe/frappe-bench/logs
    - /home/admin/restore:/backup

x-common-mariadbimage: &common-mariadbimage
  image: mariadb:10.6

x-common-redisimage: &common-redisimage
  image: redis:6.2-alpine

secrets:
  # ERPNext Database
  MARIADB_ROOT_PASSWORD:
    name: MARIADB_ROOT_PASSWORD
    external: true
  # ERPNext Site Details
  ERPNEXT_SITE_NAME:
    name: ERPNEXT_SITE_NAME
    external: true
  ERPNEXT_ADMIN_PASSWORD:
    name: ERPNEXT_ADMIN_PASSWORD
    external: true
  ERPNEXT_HOST_ADDRESS:
    name: ERPNEXT_HOST_ADDRESS
    external: true

services:
  backend:
    <<: [*common-erpnextimage, *common-erpnextvolumes]
    deploy:
      restart_policy:
        condition: on-failure

  configurator:
    <<: [*common-erpnextimage, *common-erpnextvolumes]
    deploy:
      restart_policy:
        condition: none
    secrets:
      - ERPNEXT_HOST_ADDRESS
      - ERPNEXT_SITE_NAME
    entrypoint:
      - bash
      - -c
    command:
      - >
        echo "{}" > sites/common_site_config.json;
        ls -1 apps > sites/apps.txt;
        bench set-config -g db_host db;
        bench set-config -gp db_port 3306;
        bench set-config -g redis_cache "redis://redis-cache:6379";
        bench set-config -g redis_queue "redis://redis-queue:6379";
        bench set-config -g redis_socketio "redis://redis-socketio:6379";
        bench set-config -gp socketio_port 9000;
        bench set-config -g host_name "$$(cat /run/secrets/ERPNEXT_HOST_ADDRESS)";
        bench set-config -g allow_cors "*";
        bench set-config -g server_script_enabled 1;
        bench --site $$(cat /run/secrets/ERPNEXT_SITE_NAME) migrate --skip-failing;
    environment:
      - DB_HOST=db
      - DB_PORT="3306"
      - REDIS_CACHE=redis-cache:6379
      - REDIS_QUEUE=redis-queue:6379
      - REDIS_SOCKETIO=redis-socketio:6379
      - SOCKETIO_PORT="9000"

  create-site:
    <<: [*common-erpnextimage, *common-erpnextvolumes]
    deploy:
      restart_policy:
        condition: none
    secrets:
      - ERPNEXT_SITE_NAME
      - ERPNEXT_ADMIN_PASSWORD
      - MARIADB_ROOT_PASSWORD
    entrypoint:
      - bash
      - -c
    command:
      - >
        wait-for-it -t 120 db:3306;
        wait-for-it -t 120 redis-cache:6379;
        wait-for-it -t 120 redis-queue:6379;
        wait-for-it -t 120 redis-socketio:6379;
        export start=`date +%s`;
        until [[ -n `grep -hs ^ sites/common_site_config.json | jq -r ".db_host // empty"` ]] && \
          [[ -n `grep -hs ^ sites/common_site_config.json | jq -r ".redis_cache // empty"` ]] && \
          [[ -n `grep -hs ^ sites/common_site_config.json | jq -r ".redis_queue // empty"` ]];
        do
          echo "Waiting for sites/common_site_config.json to be created";
          sleep 5;
          if (( `date +%s`-start > 120 )); then
            echo "could not find sites/common_site_config.json with required keys";
            exit 1
          fi
        done;
        echo "sites/common_site_config.json found";
        bench new-site $$(cat /run/secrets/ERPNEXT_SITE_NAME) --no-mariadb-socket --admin-password=$$(cat /run/secrets/ERPNEXT_ADMIN_PASSWORD) --db-root-password=$$(cat /run/secrets/MARIADB_ROOT_PASSWORD) --install-app erpnext --set-default;
        bench --site $$(cat /run/secrets/ERPNEXT_SITE_NAME) install-app payments;
        bench --site $$(cat /run/secrets/ERPNEXT_SITE_NAME) install-app hrms;
        bench migrate;
        bench restart;
        bench clear-cache;

  db:
    <<: *common-mariadbimage
    secrets:
      - MARIADB_ROOT_PASSWORD
    healthcheck:
      test: mysqladmin ping -h localhost --password=admin
      interval: 1s
      retries: 15
    deploy:
      restart_policy:
        condition: on-failure
    command:
      - --character-set-server=utf8mb4
      - --collation-server=utf8mb4_unicode_ci
      - --skip-character-set-client-handshake
      - --skip-innodb-read-only-compressed # Temporary fix for MariaDB 10.6
    environment:
      - MYSQL_ROOT_PASSWORD_FILE=/run/secrets/MARIADB_ROOT_PASSWORD
    volumes:
      - db-data:/var/lib/mysql

  frontend:
    <<: [*common-erpnextimage, *common-erpnextvolumes]
    deploy:
      restart_policy:
        condition: on-failure
    command:
      - nginx-entrypoint.sh
    environment:
      - BACKEND=backend:8000
      - FRAPPE_SITE_NAME_HEADER= #YOUR-SITE-URL
      - SOCKETIO=websocket:9000
      - UPSTREAM_REAL_IP_ADDRESS=127.0.0.1
      - UPSTREAM_REAL_IP_HEADER=X-Forwarded-For
      - UPSTREAM_REAL_IP_RECURSIVE="off"
      - PROXY_READ_TIMEOUT=120
      - CLIENT_MAX_BODY_SIZE=50m
    ports:
      - "8080:8080"

  queue-default:
    <<: [*common-erpnextimage, *common-erpnextvolumes]
    deploy:
      restart_policy:
        condition: on-failure
    command:
      - bench
      - worker
      - --queue
      - default

  queue-long:
    <<: [*common-erpnextimage, *common-erpnextvolumes]
    deploy:
      restart_policy:
        condition: on-failure
    command:
      - bench
      - worker
      - --queue
      - long

  queue-short:
    <<: [*common-erpnextimage, *common-erpnextvolumes]
    deploy:
      restart_policy:
        condition: on-failure
    command:
      - bench
      - worker
      - --queue
      - short

  redis-queue:
    <<: *common-redisimage
    deploy:
      restart_policy:
        condition: on-failure
    volumes:
      - redis-queue-data:/data

  redis-cache:
    <<: *common-redisimage
    deploy:
      restart_policy:
        condition: on-failure
    volumes:
      - redis-cache-data:/data

  redis-socketio:
    <<: *common-redisimage
    deploy:
      restart_policy:
        condition: on-failure
    volumes:
      - redis-socketio-data:/data

  scheduler:
    <<: [*common-erpnextimage, *common-erpnextvolumes]
    deploy:
      restart_policy:
        condition: on-failure
    command:
      - bench
      - schedule

  websocket:
    <<: [*common-erpnextimage, *common-erpnextvolumes]
    deploy:
      restart_policy:
        condition: on-failure
    command:
      - node
      - /home/frappe/frappe-bench/apps/frappe/socketio.js

volumes:
  db-data:
  redis-queue-data:
  redis-cache-data:
  redis-socketio-data:
  sites:
  logs:
Take your time!! Hope that helps.
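A note on the secrets: they are declared `external: true`, so they have to exist before the stack comes up. If this file is deployed on Docker Swarm (which the `deploy:` keys and external-secret syntax suggest, though the original post does not say so), they could be created along these lines (all values are placeholders):

```shell
# `docker secret` requires swarm mode; initialise it if not already active.
docker swarm init 2>/dev/null || true

# Create each external secret the compose file expects.
printf 'changeme'        | docker secret create MARIADB_ROOT_PASSWORD -
printf 'erp.example.com' | docker secret create ERPNEXT_SITE_NAME -
printf 'changeme'        | docker secret create ERPNEXT_ADMIN_PASSWORD -
printf 'erp.example.com' | docker secret create ERPNEXT_HOST_ADDRESS -

# Confirm all four are registered.
docker secret ls
```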
@pronext
Thanks, I will try your compose file.
And in the Dockerfile, have you changed anything compared to the version on GitHub?
@meeeeeeeee I don't use a Dockerfile; I have a fairly complex image builder in Ansible, but in general it does the same thing as the one on GitHub.