First-timer issues: bench with containerized MariaDB and Redis yields 404

Note: This post is for first-timers exploring Frappe and ERPNext who may stumble along the way, especially developers who would also like a more standardized/containerized developer experience for testing, developing, and contributing to Frappe and ERPNext. I do NOT benefit from promoting such posts.

I am starting out with Frappe and setting up a dev instance on an M1 Mac running macOS 13.x.
I run redis-cache, redis-queue, and MariaDB from containers.
I have partially followed https://github.com/D-codE-Hub/ERPNext-installation-Guide/blob/main/README.md, with the modification on the containerized part as mentioned earlier.

Executing bench start with this setup now yields:

web.1         | 127.0.0.1 - -  "GET / HTTP/1.1" 404 -
web.1         | 192.168.0.181 - - "GET / HTTP/1.1" 404 -

My project setup has the following common_site_config.json:

{
 "background_workers": 1,
 "db_host": "0.0.0.0",
 "db_name": "scott",
 "db_password": "mypassword",
 "admin_password": "tiger",
 "file_watcher_port": 6787,
 "frappe_user": "somename",
 "gunicorn_workers": 17,
 "live_reload": true,
 "rebase_on_pull": false,
 "redis_cache": "redis://0.0.0.0:6378",
 "redis_queue": "redis://0.0.0.0:11000",
 "redis_socketio": "redis://localhost:13000",
 "restart_supervisor_on_update": false,
 "restart_systemd_on_update": false,
 "serve_default_site": true,
 "shallow_clone": true,
 "socketio_port": 9000,
 "use_redis_auth": false,
 "webserver_port": 8000,
 "root_password": "tiger"
}
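One thing worth sanity-checking in a config like this: the compose file below only publishes Redis on ports 6378 and 11000, while redis_socketio points at 13000, which nothing serves. A small stdlib-only sketch (the config values are inlined from above; redis_endpoints and is_listening are hypothetical helper names) that extracts each redis_* endpoint and probes whether anything is actually listening there:

```python
import json
import socket
from urllib.parse import urlparse

# The redis_* URLs from the common_site_config above. 0.0.0.0 usually
# works as a client target, but 127.0.0.1 is the safer spelling.
config = json.loads("""
{
  "redis_cache": "redis://0.0.0.0:6378",
  "redis_queue": "redis://0.0.0.0:11000",
  "redis_socketio": "redis://localhost:13000"
}
""")

def redis_endpoints(cfg):
    """Return (key, host, port) for every redis_* URL in the config."""
    eps = []
    for key, url in sorted(cfg.items()):
        if key.startswith("redis_"):
            parsed = urlparse(url)
            eps.append((key, parsed.hostname, parsed.port))
    return eps

def is_listening(host, port, timeout=1.0):
    """True if something accepts a TCP connection on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for key, host, port in redis_endpoints(config):
    print(f"{key} -> {host}:{port} listening={is_listening(host, port)}")
```

Running this against the config above would show that nothing answers on 13000 unless an extra Redis instance is started for socketio.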

Docker compose is:

version: '3'

networks:
  thebridge:
    driver: bridge


services:

  mariadb:
    image: mariadb:latest
    container_name: mariadb_container
    volumes:
      - ./infra/config/my.cnf:/etc/mysql/my.cnf
    environment:
      MYSQL_ROOT_PASSWORD: tiger
      MYSQL_DATABASE: scott
      MYSQL_USER: admin
      MYSQL_PASSWORD: mypassword
    ports:
      - "3306:3306"
    networks:
      - thebridge

  redis:
    image: redis:latest
    container_name: redis_container
    ports:
      - "6378:6379"
    networks:
      - thebridge

  redis-queue:
    build:
        context: .
        dockerfile: Dockerfile.rq
    container_name: redis_queue_container
    environment:
      REDIS_HOST: redis_container
    depends_on:
      - redis
    networks:
      - thebridge
    ports:
      - "11000:11000"

Dockerfile.rq is:

FROM python:3.11-slim-buster

RUN apt-get update && apt-get install -y \
    libssl-dev \
    libffi-dev \
    python3-dev \
    python3-pip \
    python3-setuptools

RUN pip3 install -U pip redis rq

ENTRYPOINT ["bash", "-c", "rq worker -v --url redis://redis_container:6379"]

Is there a migration step or frontend setup required for the site to load?

redis-queue is a Redis service. It is not an rq worker.

RQ workers run in queue-* or worker-* containers in production.

In a dev setup they start as part of the Procfile, on bench start.
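To make that concrete, one possible sketch (untested; it reuses the network and naming from the compose file above) that turns redis-queue into an actual Redis service on the port that redis_queue in common_site_config points at, replacing the rq-worker build:

```yaml
  redis-queue:
    image: redis:latest
    container_name: redis_queue_container
    command: ["redis-server", "--port", "11000"]
    ports:
      - "11000:11000"
    networks:
      - thebridge
```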

db_name and db_password are part of site_config.json, not common_site_config.json.

The site name should resolve. For a dev setup, end the site name with .localhost (e.g. site.localhost) and it will resolve without a 404.
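For reference, site_config.json lives inside the site's own folder under sites/, e.g. sites/dcode.localhost/site_config.json, and carries the per-site DB credentials. A minimal sketch with placeholder values (not taken from this thread):

```json
{
  "db_name": "your_db_name",
  "db_password": "your_db_password"
}
```

bench new-site generates this file for you when the site is created.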

Thanks for clarifying. Some additional doubts:

Can both of these be started off the same Redis service container, like the redis service I've shared?

For development purposes, do I need additional configuration for queue-* and worker-*, or is the Procfile enough? I want to fire up the ERPNext app and play around. I'm not sure where and when RQ workers are used. Any helpful Frappe resources?

I see site_config is necessary, but where should it live?

Do you mean, referring to this step of the setup:

bench new-site dcode.localhost #instead of dcode.com ??

For a dev setup, they start from the Procfile.

They allow background jobs

https://frappeframework.com/docs/user/en/guides/app-development/running-background-jobs
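As a conceptual sketch only (plain Python, not the Frappe or RQ API): web code enqueues jobs, and a separate worker process drains and executes them, so slow work runs outside the request cycle:

```python
from collections import deque

# Toy stand-in for a Redis-backed queue; real workers (bench worker)
# pull jobs from Redis instead of an in-process deque.
jobs = deque()

def enqueue(func, *args):
    """What web code does: park work for later instead of running it now."""
    jobs.append((func, args))

def work():
    """What a worker does: drain the queue and execute each job."""
    results = []
    while jobs:
        func, args = jobs.popleft()
        results.append(func(*args))
    return results

enqueue(lambda x: x * 2, 21)
enqueue(str.upper, "erpnext")
print(work())  # → [42, 'ERPNEXT']
```

In Frappe, this is why the Procfile's worker entry is enough for development: it plays the consumer role against the Redis queue.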

Yes, you need dcode.localhost; also don't miss --no-mariadb-socket after new-site.
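Combining both points from this reply into one command (site name taken from the linked guide):

```shell
# .localhost makes the site name resolve locally; --no-mariadb-socket is
# needed because MariaDB runs in a container rather than via the local
# unix socket.
bench new-site dcode.localhost --no-mariadb-socket
```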

After making a few changes as detailed above, I ran into a Redis socketio issue.

I'm trying to ensure that I can run Frappe from a local virtual environment with MariaDB and Redis containerized. I'd like to keep the storage in volumes and not spawn these services locally; the intention is not to install MariaDB/Redis on the host.

In common_site_config.json I set:

 "redis_cache": "redis://127.0.0.1:13000",
 "redis_queue": "redis://127.0.0.1:11000",
 "redis_socketio": "redis://127.0.0.1:12000",

I also set, in the relevant site_config.json:

{
 "db_name": "_4d428179248edb3e",
 "db_password": "sEOCR5gAhDcxBQNw",
 "db_type": "mariadb",
 "admin_password": "admin_password"
}

Whereas the docker compose mariadb service has:

      MYSQL_ROOT_PASSWORD: MARIADB_PASSWORD
      MYSQL_DATABASE: _4d428179248edb3e
      MYSQL_USER: MYSQL_USER
      MYSQL_PASSWORD: sEOCR5gAhDcxBQNw

(FYI: notice I set db_name as MYSQL_USER, and then it connects; otherwise I see a warning that the MYSQL_USER@ connection fails due to wrong credentials, but that's an out-of-scope issue.)

The Procfile is now:

web: bench serve --port 8000
socketio: /usr/local/bin/node apps/frappe/socketio.js
watch: bench watch
schedule: bench schedule
worker: bench worker 1>> logs/worker.log 2>> logs/worker.error.log

I run the Redis services with their individual Redis configs. I've also mapped the host Redis conf directory paths to a container volume and now use the container directory paths instead.

bench start runs fine until it throws an UNCERTAIN_STATE error:

socketio.1 | AbortError: Ready check failed: Redis connection lost and command aborted. It might have been processed.

before sending a SIGTERM, which I believe comes from socketio: /usr/local/bin/node apps/frappe/socketio.js.

Is socketio necessary to run during development?

I also notice in node_utils that redis_async_broker_port is set to 12311. I'm assuming the client needs to connect to this port?

If you need a development setup, use the documentation here: https://github.com/frappe/frappe_docker/blob/main/docs/development.md. Check the “setup hosts” section to configure containerized DB and Redis.

I can't help you much, as you're on a Mac. Also, if you are using custom compose files and Dockerfiles, you'll need to figure things out on your own per your custom needs.
