ERPNext - podman install - docker is dead

Hello, where can I find a tutorial to install ERPNext with podman?

Thanks,
jannis


1 Like

This is interesting.

Is Podman available for Windows?

Regards,

https://podman.io/getting-started/installation

While “containers are Linux,” Podman also runs on Mac and Windows, where it provides a native podman CLI and embeds a guest Linux system to launch your containers. This guest is referred to as a Podman machine and is managed with the podman machine command. Podman on Mac and Windows also listens for Docker API clients, supporting direct usage of Docker-based tools and programmatic access from your language of choice.

On Windows, each Podman machine is backed by a virtualized Windows Subsystem for Linux (WSLv2) distribution.
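For anyone trying that route, a minimal sketch of bringing such a Podman machine up (standard podman machine commands; defaults may differ per version):

podman machine init     # creates the guest Linux VM
podman machine start    # boots it and wires up the podman CLI
podman info             # should now report the VM-backed engine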

or read this

Podman is the new default container engine for Red Hat Enterprise Linux, Oracle Linux, AlmaLinux and Rocky Linux.

Tell me an alternative to docker buildx bake, which speeds up the image building process.

Use the images in any container runtime. Try:

wget https://raw.githubusercontent.com/frappe/frappe_docker/main/pwd.yml -O pwd.yml
podman-compose --project-name pwd -f pwd.yml up -d

Note: you need wget, podman, podman-compose and the podman-dnsname plugin already installed to run the above commands.

The site will be available at http://localhost:8080. Wait 5-10 minutes, or check the create-site container logs. Username: Administrator, password: admin.
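For example, to follow the site creation (the container name assumes podman-compose's default <project>_<service>_1 naming, so pwd_create-site_1 here):

podman logs -f pwd_create-site_1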

You can use podman to build your custom images and base them on the official images.
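A minimal sketch of such a custom build, assuming a Containerfile that layers a placeholder app on top of the official worker image:

# Containerfile (example content):
#   FROM docker.io/frappe/erpnext-worker:v14.5.1
#   COPY ./my_custom_app /home/frappe/frappe-bench/apps/my_custom_app
podman build -t erpnext-custom:v14.5.1 -f Containerfile .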

3 Likes

@revant_one
Hello, why don't you prefer podman?

Which is, in your opinion, the best way to install ERPNext for production use?

Thanks,
yannis

Whatever fits your use case is best!

I use docker to build images and devcontainers because I can install it for free (at least for now).

In production Kubernetes clusters the runtime is containerd.

On VM setups I prefer Docker Swarm + Portainer for the web UI. Portainer provides a lot of features, including webhooks to update services.
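A rough sketch of that kind of VM setup, assuming Portainer's published swarm stack file has already been saved as portainer-agent-stack.yml:

docker swarm init
docker stack deploy -c portainer-agent-stack.yml portainer
docker service ls    # the portainer service then serves the web UI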

3 Likes

Hello Users and @revant_one ,

My system:


++++++
lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 22.04.1 LTS
Release:        22.04
Codename:       jammy
++++++

+++++++
podman  --version && podman-compose version
podman version 4.3.0
['podman', '--version', '']
using podman version: 4.3.0
podman-composer version  1.0.3
podman --version 
podman version 4.3.0
exit code: 0
++++++++


++++++++
#cat pwd.yml
version: "3"

services:
  backend:
    image: frappe/erpnext-worker:v14.5.1
    deploy:
      restart_policy:
        condition: on-failure
    volumes:
      - sites:/home/frappe/frappe-bench/sites
      - assets:/home/frappe/frappe-bench/sites/assets

  configurator:
    image: frappe/erpnext-worker:v14.5.1
    command:
      - configure.py
    environment:
      DB_HOST: db
      DB_PORT: "3306"
      REDIS_CACHE: redis-cache:6379
      REDIS_QUEUE: redis-queue:6379
      REDIS_SOCKETIO: redis-socketio:6379
      SOCKETIO_PORT: "9000"
    volumes:
      - sites:/home/frappe/frappe-bench/sites

  create-site:
    image: frappe/erpnext-worker:v14.5.1
    deploy:
      restart_policy:
        condition: on-failure
    volumes:
      - sites:/home/frappe/frappe-bench/sites
      - assets:/home/frappe/frappe-bench/sites/assets
    entrypoint:
      - bash
      - -c
    command:
      - >
        wait-for-it -t 120 db:3306;
        wait-for-it -t 120 redis-cache:6379;
        wait-for-it -t 120 redis-queue:6379;
        wait-for-it -t 120 redis-socketio:6379;
        export start=`date +%s`;
        until [[ -n `grep -hs ^ common_site_config.json | jq -r ".db_host // empty"` ]] && \
          [[ -n `grep -hs ^ common_site_config.json | jq -r ".redis_cache // empty"` ]] && \
          [[ -n `grep -hs ^ common_site_config.json | jq -r ".redis_queue // empty"` ]];
        do
          echo "Waiting for common_site_config.json to be created";
          sleep 5;
          if (( `date +%s`-start > 120 )); then
            echo "could not find common_site_config.json with required keys";
            exit 1
          fi
        done;
        echo "common_site_config.json found";
        bench new-site frontend --admin-password=admin --db-root-password=admin --install-app payments --install-app erpnext --set-default;

  db:
    image: mariadb:10.6
    healthcheck:
      test: mysqladmin ping -h localhost --password=admin
      interval: 1s
      retries: 15
    deploy:
      restart_policy:
        condition: on-failure
    command:
      - --character-set-server=utf8mb4
      - --collation-server=utf8mb4_unicode_ci
      - --skip-character-set-client-handshake
      - --skip-innodb-read-only-compressed # Temporary fix for MariaDB 10.6
    environment:
      MYSQL_ROOT_PASSWORD: admin
    volumes:
      - db-data:/var/lib/mysql

  frontend:
    network_mode: cntnet
    image: frappe/erpnext-nginx:v14.5.1
    deploy:
      restart_policy:
        condition: on-failure
    environment:
      BACKEND: backend:8000
      FRAPPE_SITE_NAME_HEADER: frontend
      SOCKETIO: websocket:9000
      UPSTREAM_REAL_IP_ADDRESS: 127.0.0.1
      UPSTREAM_REAL_IP_HEADER: X-Forwarded-For
      UPSTREAM_REAL_IP_RECURSIVE: "off"
    volumes:
      - sites:/usr/share/nginx/html/sites
      - assets:/usr/share/nginx/html/assets
    ports:
      - "8080:8080"

  queue-default:
    image: frappe/erpnext-worker:v14.5.1
    deploy:
      restart_policy:
        condition: on-failure
    command:
      - bench
      - worker
      - --queue
      - default
    volumes:
      - sites:/home/frappe/frappe-bench/sites
      - assets:/home/frappe/frappe-bench/sites/assets

  queue-long:
    image: frappe/erpnext-worker:v14.5.1
    deploy:
      restart_policy:
        condition: on-failure
    command:
      - bench
      - worker
      - --queue
      - long
    volumes:
      - sites:/home/frappe/frappe-bench/sites
      - assets:/home/frappe/frappe-bench/sites/assets

  queue-short:
    image: frappe/erpnext-worker:v14.5.1
    deploy:
      restart_policy:
        condition: on-failure
    command:
      - bench
      - worker
      - --queue
      - short
    volumes:
      - sites:/home/frappe/frappe-bench/sites
      - assets:/home/frappe/frappe-bench/sites/assets

  redis-queue:
    image: redis:6.2-alpine
    deploy:
      restart_policy:
        condition: on-failure
    volumes:
      - redis-queue-data:/data

  redis-cache:
    image: redis:6.2-alpine
    deploy:
      restart_policy:
        condition: on-failure
    volumes:
      - redis-cache-data:/data

  redis-socketio:
    image: redis:6.2-alpine
    deploy:
      restart_policy:
        condition: on-failure
    volumes:
      - redis-socketio-data:/data

  scheduler:
    image: frappe/erpnext-worker:v14.5.1
    deploy:
      restart_policy:
        condition: on-failure
    command:
      - bench
      - schedule
    volumes:
      - sites:/home/frappe/frappe-bench/sites
      - assets:/home/frappe/frappe-bench/sites/assets

  websocket:
    image: frappe/frappe-socketio:v14.14.2
    deploy:
      restart_policy:
        condition: on-failure
    volumes:
      - sites:/home/frappe/frappe-bench/sites
      - assets:/home/frappe/frappe-bench/sites/assets

volumes:
  assets:
  db-data:
  redis-queue-data:
  redis-cache-data:
  redis-socketio-data:
  sites:
++++++++

My Problem:

podman-compose --project-name pwd -f pwd.yml up -d
['podman', '--version', '']
using podman version: 4.3.0
** excluding:  set()
podman volume inspect pwd_sites || podman volume create pwd_sites
['podman', 'volume', 'inspect', 'pwd_sites']
Error: command required for rootless mode with multiple IDs: exec: "newuidmap": executable file not found in $PATH
['podman', 'volume', 'create', '--label', 'io.podman.compose.project=pwd', '--label', 'com.docker.compose.project=pwd', 'pwd_sites']
Error: command required for rootless mode with multiple IDs: exec: "newuidmap": executable file not found in $PATH
Traceback (most recent call last):
  File "/home/jannis/.local/lib/python3.10/site-packages/podman_compose.py", line 296, in assert_volume
    try: out = compose.podman.output([], "volume", ["inspect", vol_name]).decode('utf-8')
  File "/home/jannis/.local/lib/python3.10/site-packages/podman_compose.py", line 820, in output
    return subprocess.check_output(cmd_ls)
  File "/usr/lib/python3.10/subprocess.py", line 420, in check_output
    return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
  File "/usr/lib/python3.10/subprocess.py", line 524, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['podman', 'volume', 'inspect', 'pwd_sites']' returned non-zero exit status 125.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/jannis/.local/bin/podman-compose", line 8, in <module>
    sys.exit(main())
  File "/home/jannis/.local/lib/python3.10/site-packages/podman_compose.py", line 1775, in main
    podman_compose.run()
  File "/home/jannis/.local/lib/python3.10/site-packages/podman_compose.py", line 1024, in run
    cmd(self, args)
  File "/home/jannis/.local/lib/python3.10/site-packages/podman_compose.py", line 1248, in wrapped
    return func(*args, **kw)
  File "/home/jannis/.local/lib/python3.10/site-packages/podman_compose.py", line 1415, in compose_up
    podman_args = container_to_args(compose, cnt, detached=args.detach)
  File "/home/jannis/.local/lib/python3.10/site-packages/podman_compose.py", line 644, in container_to_args
    podman_args.extend(get_mount_args(compose, cnt, volume))
  File "/home/jannis/.local/lib/python3.10/site-packages/podman_compose.py", line 402, in get_mount_args
    assert_volume(compose, fix_mount_dict(compose, volume, proj_name, srv_name))
  File "/home/jannis/.local/lib/python3.10/site-packages/podman_compose.py", line 307, in assert_volume
    compose.podman.output([], "volume", args)
  File "/home/jannis/.local/lib/python3.10/site-packages/podman_compose.py", line 820, in output
    return subprocess.check_output(cmd_ls)
  File "/usr/lib/python3.10/subprocess.py", line 420, in check_output
    return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
  File "/usr/lib/python3.10/subprocess.py", line 524, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['podman', 'volume', 'create', '--label', 'io.podman.compose.project=pwd', '--label', 'com.docker.compose.project=pwd', 'pwd_sites']' returned non-zero exit status 125.


@revant_one and all
How can I solve the problem?

sudo apt install podman-dnsname

My system doesn't have the package podman-dnsname.

Best Regards
jannis

You need to use the podman-dnsname plugin.

read more: dnsname/README_PODMAN.md at main · containers/dnsname · GitHub

I don’t know how you manage to get it installed on your distribution.

Fix your podman setup. podman/troubleshooting.md at main · containers/podman · GitHub
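For the newuidmap error above, a minimal sketch of what usually fixes rootless podman on Ubuntu (the 100000:65536 range is only an example):

sudo apt install uidmap slirp4netns
# make sure your user has sub-UID/sub-GID ranges assigned
grep $USER /etc/subuid /etc/subgid
# if the files have no entry for your user, add one
echo "$USER:100000:65536" | sudo tee -a /etc/subuid /etc/subgid
# let podman pick up the new ranges
podman system migrate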

Hello @revant_one

Now I have changed my podman release.

############
podman-compose --version
['podman', '--version', '']
using podman version: 3.4.4
podman-composer version  1.0.3
podman --version 
podman version 3.4.4
exit code: 0
############

Install

sudo apt install golang-github-containernetworking-plugin-dnsname

Now I get this error:

podman-compose --project-name pwd -f pwd.yml up -d
['podman', '--version', '']
using podman version: 3.4.4
** excluding:  set()
podman volume inspect pwd_sites || podman volume create pwd_sites
['podman', 'volume', 'inspect', 'pwd_sites']
podman volume inspect pwd_assets || podman volume create pwd_assets
['podman', 'volume', 'inspect', 'pwd_assets']
['podman', 'network', 'exists', 'pwd_default']
podman run --name=pwd_backend_1 -d --label io.podman.compose.config-hash=123 --label io.podman.compose.project=pwd --label io.podman.compose.version=0.0.1 --label com.docker.compose.project=pwd --label com.docker.compose.project.working_dir=/home/zxz/erpnext --label com.docker.compose.project.config_files=pwd.yml --label com.docker.compose.container-number=1 --label com.docker.compose.service=backend -v pwd_sites:/home/frappe/frappe-bench/sites -v pwd_assets:/home/frappe/frappe-bench/sites/assets --net pwd_default --network-alias backend frappe/erpnext-worker:v14.5.1
Error: short-name "frappe/erpnext-worker:v14.5.1" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
exit code: 125
podman start pwd_backend_1
Error: no container with name or ID "pwd_backend_1" found: no such container
exit code: 125
podman volume inspect pwd_sites || podman volume create pwd_sites
['podman', 'volume', 'inspect', 'pwd_sites']
['podman', 'network', 'exists', 'pwd_default']
podman run --name=pwd_configurator_1 -d --label io.podman.compose.config-hash=123 --label io.podman.compose.project=pwd --label io.podman.compose.version=0.0.1 --label com.docker.compose.project=pwd --label com.docker.compose.project.working_dir=/home/zxz/erpnext --label com.docker.compose.project.config_files=pwd.yml --label com.docker.compose.container-number=1 --label com.docker.compose.service=configurator -e DB_HOST=db -e DB_PORT=3306 -e REDIS_CACHE=redis-cache:6379 -e REDIS_QUEUE=redis-queue:6379 -e REDIS_SOCKETIO=redis-socketio:6379 -e SOCKETIO_PORT=9000 -v pwd_sites:/home/frappe/frappe-bench/sites --net pwd_default --network-alias configurator frappe/erpnext-worker:v14.5.1 configure.py
Error: short-name "frappe/erpnext-worker:v14.5.1" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
exit code: 125
podman start pwd_configurator_1
Error: no container with name or ID "pwd_configurator_1" found: no such container
exit code: 125
podman volume inspect pwd_sites || podman volume create pwd_sites
['podman', 'volume', 'inspect', 'pwd_sites']
podman volume inspect pwd_assets || podman volume create pwd_assets
['podman', 'volume', 'inspect', 'pwd_assets']
['podman', 'network', 'exists', 'pwd_default']
podman run --name=pwd_create-site_1 -d --label io.podman.compose.config-hash=123 --label io.podman.compose.project=pwd --label io.podman.compose.version=0.0.1 --label com.docker.compose.project=pwd --label com.docker.compose.project.working_dir=/home/zxz/erpnext --label com.docker.compose.project.config_files=pwd.yml --label com.docker.compose.container-number=1 --label com.docker.compose.service=create-site -v pwd_sites:/home/frappe/frappe-bench/sites -v pwd_assets:/home/frappe/frappe-bench/sites/assets --net pwd_default --network-alias create-site --entrypoint ["bash", "-c"] frappe/erpnext-worker:v14.5.1 wait-for-it -t 120 db:3306; wait-for-it -t 120 redis-cache:6379; wait-for-it -t 120 redis-queue:6379; wait-for-it -t 120 redis-socketio:6379; export start=`date +%s`; until [[ -n `grep -hs ^ common_site_config.json | jq -r ".db_host // empty"` ]] && \
  [[ -n `grep -hs ^ common_site_config.json | jq -r ".redis_cache // empty"` ]] && \
  [[ -n `grep -hs ^ common_site_config.json | jq -r ".redis_queue // empty"` ]];
do
  echo "Waiting for common_site_config.json to be created";
  sleep 5;
  if (( `date +%s`-start > 120 )); then
    echo "could not find common_site_config.json with required keys";
    exit 1
  fi
done; echo "common_site_config.json found"; bench new-site frontend --admin-password=admin --db-root-password=admin --install-app payments --install-app erpnext --set-default;

Error: short-name "frappe/erpnext-worker:v14.5.1" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
exit code: 125
podman start pwd_create-site_1
Error: no container with name or ID "pwd_create-site_1" found: no such container
exit code: 125
podman volume inspect pwd_db-data || podman volume create pwd_db-data
['podman', 'volume', 'inspect', 'pwd_db-data']
['podman', 'network', 'exists', 'pwd_default']
podman run --name=pwd_db_1 -d --label io.podman.compose.config-hash=123 --label io.podman.compose.project=pwd --label io.podman.compose.version=0.0.1 --label com.docker.compose.project=pwd --label com.docker.compose.project.working_dir=/home/zxz/erpnext --label com.docker.compose.project.config_files=pwd.yml --label com.docker.compose.container-number=1 --label com.docker.compose.service=db -e MYSQL_ROOT_PASSWORD=admin -v pwd_db-data:/var/lib/mysql --net pwd_default --network-alias db --healthcheck-command /bin/sh -c 'mysqladmin ping -h localhost --password=admin' --healthcheck-interval 1s --healthcheck-retries 15 mariadb:10.6 --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci --skip-character-set-client-handshake --skip-innodb-read-only-compressed
Error: short-name "mariadb:10.6" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
exit code: 125
podman start pwd_db_1
Error: no container with name or ID "pwd_db_1" found: no such container
exit code: 125
podman volume inspect pwd_sites || podman volume create pwd_sites
['podman', 'volume', 'inspect', 'pwd_sites']
podman volume inspect pwd_assets || podman volume create pwd_assets
['podman', 'volume', 'inspect', 'pwd_assets']
['podman', 'network', 'exists', 'pwd_default']
podman run --name=pwd_frontend_1 -d --label io.podman.compose.config-hash=123 --label io.podman.compose.project=pwd --label io.podman.compose.version=0.0.1 --label com.docker.compose.project=pwd --label com.docker.compose.project.working_dir=/home/zxz/erpnext --label com.docker.compose.project.config_files=pwd.yml --label com.docker.compose.container-number=1 --label com.docker.compose.service=frontend --network cntnet -e BACKEND=backend:8000 -e FRAPPE_SITE_NAME_HEADER=frontend -e SOCKETIO=websocket:9000 -e UPSTREAM_REAL_IP_ADDRESS=127.0.0.1 -e UPSTREAM_REAL_IP_HEADER=X-Forwarded-For -e UPSTREAM_REAL_IP_RECURSIVE=off -v pwd_sites:/usr/share/nginx/html/sites -v pwd_assets:/usr/share/nginx/html/assets --net pwd_default --network-alias frontend -p 8080:8080 frappe/erpnext-nginx:v14.5.1
Error: short-name "frappe/erpnext-nginx:v14.5.1" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
exit code: 125
podman start pwd_frontend_1
Error: no container with name or ID "pwd_frontend_1" found: no such container
exit code: 125
podman volume inspect pwd_sites || podman volume create pwd_sites
['podman', 'volume', 'inspect', 'pwd_sites']
podman volume inspect pwd_assets || podman volume create pwd_assets
['podman', 'volume', 'inspect', 'pwd_assets']
['podman', 'network', 'exists', 'pwd_default']
podman run --name=pwd_queue-default_1 -d --label io.podman.compose.config-hash=123 --label io.podman.compose.project=pwd --label io.podman.compose.version=0.0.1 --label com.docker.compose.project=pwd --label com.docker.compose.project.working_dir=/home/zxz/erpnext --label com.docker.compose.project.config_files=pwd.yml --label com.docker.compose.container-number=1 --label com.docker.compose.service=queue-default -v pwd_sites:/home/frappe/frappe-bench/sites -v pwd_assets:/home/frappe/frappe-bench/sites/assets --net pwd_default --network-alias queue-default frappe/erpnext-worker:v14.5.1 bench worker --queue default
Error: short-name "frappe/erpnext-worker:v14.5.1" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
exit code: 125
podman start pwd_queue-default_1
Error: no container with name or ID "pwd_queue-default_1" found: no such container
exit code: 125
podman volume inspect pwd_sites || podman volume create pwd_sites
['podman', 'volume', 'inspect', 'pwd_sites']
podman volume inspect pwd_assets || podman volume create pwd_assets
['podman', 'volume', 'inspect', 'pwd_assets']
['podman', 'network', 'exists', 'pwd_default']
podman run --name=pwd_queue-long_1 -d --label io.podman.compose.config-hash=123 --label io.podman.compose.project=pwd --label io.podman.compose.version=0.0.1 --label com.docker.compose.project=pwd --label com.docker.compose.project.working_dir=/home/zxz/erpnext --label com.docker.compose.project.config_files=pwd.yml --label com.docker.compose.container-number=1 --label com.docker.compose.service=queue-long -v pwd_sites:/home/frappe/frappe-bench/sites -v pwd_assets:/home/frappe/frappe-bench/sites/assets --net pwd_default --network-alias queue-long frappe/erpnext-worker:v14.5.1 bench worker --queue long
Error: short-name "frappe/erpnext-worker:v14.5.1" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
exit code: 125
podman start pwd_queue-long_1
Error: no container with name or ID "pwd_queue-long_1" found: no such container
exit code: 125
podman volume inspect pwd_sites || podman volume create pwd_sites
['podman', 'volume', 'inspect', 'pwd_sites']
podman volume inspect pwd_assets || podman volume create pwd_assets
['podman', 'volume', 'inspect', 'pwd_assets']
['podman', 'network', 'exists', 'pwd_default']
podman run --name=pwd_queue-short_1 -d --label io.podman.compose.config-hash=123 --label io.podman.compose.project=pwd --label io.podman.compose.version=0.0.1 --label com.docker.compose.project=pwd --label com.docker.compose.project.working_dir=/home/zxz/erpnext --label com.docker.compose.project.config_files=pwd.yml --label com.docker.compose.container-number=1 --label com.docker.compose.service=queue-short -v pwd_sites:/home/frappe/frappe-bench/sites -v pwd_assets:/home/frappe/frappe-bench/sites/assets --net pwd_default --network-alias queue-short frappe/erpnext-worker:v14.5.1 bench worker --queue short
Error: short-name "frappe/erpnext-worker:v14.5.1" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
exit code: 125
podman start pwd_queue-short_1
Error: no container with name or ID "pwd_queue-short_1" found: no such container
exit code: 125
podman volume inspect pwd_redis-queue-data || podman volume create pwd_redis-queue-data
['podman', 'volume', 'inspect', 'pwd_redis-queue-data']
['podman', 'network', 'exists', 'pwd_default']
podman run --name=pwd_redis-queue_1 -d --label io.podman.compose.config-hash=123 --label io.podman.compose.project=pwd --label io.podman.compose.version=0.0.1 --label com.docker.compose.project=pwd --label com.docker.compose.project.working_dir=/home/zxz/erpnext --label com.docker.compose.project.config_files=pwd.yml --label com.docker.compose.container-number=1 --label com.docker.compose.service=redis-queue -v pwd_redis-queue-data:/data --net pwd_default --network-alias redis-queue redis:6.2-alpine
Error: short-name "redis:6.2-alpine" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
exit code: 125
podman start pwd_redis-queue_1
Error: no container with name or ID "pwd_redis-queue_1" found: no such container
exit code: 125
podman volume inspect pwd_redis-cache-data || podman volume create pwd_redis-cache-data
['podman', 'volume', 'inspect', 'pwd_redis-cache-data']
['podman', 'network', 'exists', 'pwd_default']
podman run --name=pwd_redis-cache_1 -d --label io.podman.compose.config-hash=123 --label io.podman.compose.project=pwd --label io.podman.compose.version=0.0.1 --label com.docker.compose.project=pwd --label com.docker.compose.project.working_dir=/home/zxz/erpnext --label com.docker.compose.project.config_files=pwd.yml --label com.docker.compose.container-number=1 --label com.docker.compose.service=redis-cache -v pwd_redis-cache-data:/data --net pwd_default --network-alias redis-cache redis:6.2-alpine
Error: short-name "redis:6.2-alpine" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
exit code: 125
podman start pwd_redis-cache_1
Error: no container with name or ID "pwd_redis-cache_1" found: no such container
exit code: 125
podman volume inspect pwd_redis-socketio-data || podman volume create pwd_redis-socketio-data
['podman', 'volume', 'inspect', 'pwd_redis-socketio-data']
['podman', 'network', 'exists', 'pwd_default']
podman run --name=pwd_redis-socketio_1 -d --label io.podman.compose.config-hash=123 --label io.podman.compose.project=pwd --label io.podman.compose.version=0.0.1 --label com.docker.compose.project=pwd --label com.docker.compose.project.working_dir=/home/zxz/erpnext --label com.docker.compose.project.config_files=pwd.yml --label com.docker.compose.container-number=1 --label com.docker.compose.service=redis-socketio -v pwd_redis-socketio-data:/data --net pwd_default --network-alias redis-socketio redis:6.2-alpine
Error: short-name "redis:6.2-alpine" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
exit code: 125
podman start pwd_redis-socketio_1
Error: no container with name or ID "pwd_redis-socketio_1" found: no such container
exit code: 125
podman volume inspect pwd_sites || podman volume create pwd_sites
['podman', 'volume', 'inspect', 'pwd_sites']
podman volume inspect pwd_assets || podman volume create pwd_assets
['podman', 'volume', 'inspect', 'pwd_assets']
['podman', 'network', 'exists', 'pwd_default']
podman run --name=pwd_scheduler_1 -d --label io.podman.compose.config-hash=123 --label io.podman.compose.project=pwd --label io.podman.compose.version=0.0.1 --label com.docker.compose.project=pwd --label com.docker.compose.project.working_dir=/home/zxz/erpnext --label com.docker.compose.project.config_files=pwd.yml --label com.docker.compose.container-number=1 --label com.docker.compose.service=scheduler -v pwd_sites:/home/frappe/frappe-bench/sites -v pwd_assets:/home/frappe/frappe-bench/sites/assets --net pwd_default --network-alias scheduler frappe/erpnext-worker:v14.5.1 bench schedule
Error: short-name "frappe/erpnext-worker:v14.5.1" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
exit code: 125
podman start pwd_scheduler_1
Error: no container with name or ID "pwd_scheduler_1" found: no such container
exit code: 125
podman volume inspect pwd_sites || podman volume create pwd_sites
['podman', 'volume', 'inspect', 'pwd_sites']
podman volume inspect pwd_assets || podman volume create pwd_assets
['podman', 'volume', 'inspect', 'pwd_assets']
['podman', 'network', 'exists', 'pwd_default']
podman run --name=pwd_websocket_1 -d --label io.podman.compose.config-hash=123 --label io.podman.compose.project=pwd --label io.podman.compose.version=0.0.1 --label com.docker.compose.project=pwd --label com.docker.compose.project.working_dir=/home/zxz/erpnext --label com.docker.compose.project.config_files=pwd.yml --label com.docker.compose.container-number=1 --label com.docker.compose.service=websocket -v pwd_sites:/home/frappe/frappe-bench/sites -v pwd_assets:/home/frappe/frappe-bench/sites/assets --net pwd_default --network-alias websocket frappe/frappe-socketio:v14.14.2
Error: short-name "frappe/frappe-socketio:v14.14.2" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
exit code: 125
podman start pwd_websocket_1
Error: no container with name or ID "pwd_websocket_1" found: no such container
exit code: 125

Why did you downgrade? Install from source, or try another distro which provides the latest podman.

I’d recommend sticking with “dead” docker.

1 Like

Which distro did you use?

“dead” docker works for me, but I will learn and use podman.

now

sudo nano /etc/containers/registries.conf

add

[registries.search]
registries = ['container-registry.oracle.com', 'quay.io', 'docker.io']
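Side note: newer podman releases (4.x) read the v2 registries.conf format by default, where the equivalent line would be roughly:

unqualified-search-registries = ["docker.io"]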


then

 podman-compose --project-name pwd -f pwd.yml up -d
['podman', '--version', '']
using podman version: 3.4.4
** excluding:  set()
podman volume inspect pwd_sites || podman volume create pwd_sites
['podman', 'volume', 'inspect', 'pwd_sites']
podman volume inspect pwd_assets || podman volume create pwd_assets
['podman', 'volume', 'inspect', 'pwd_assets']
['podman', 'network', 'exists', 'pwd_default']
podman run --name=pwd_backend_1 -d --label io.podman.compose.config-hash=123 --label io.podman.compose.project=pwd --label io.podman.compose.version=0.0.1 --label com.docker.compose.project=pwd --label com.docker.compose.project.working_dir=/home/zxz/erpnext --label com.docker.compose.project.config_files=pwd.yml --label com.docker.compose.container-number=1 --label com.docker.compose.service=backend -v pwd_sites:/home/frappe/frappe-bench/sites -v pwd_assets:/home/frappe/frappe-bench/sites/assets --net pwd_default --network-alias backend frappe/erpnext-worker:v14.5.1
✔ docker.io/frappe/erpnext-worker:v14.5.1
Trying to pull docker.io/frappe/erpnext-worker:v14.5.1...
Getting image source signatures
Copying blob 5a96a0b5efbd done  
Copying blob 65462f2b00a4 done  
Copying blob 126aaa6587b8 done  
Copying blob 1efc276f4ff9 done  
Copying blob 934013fd0430 done  
Copying blob 098c9e455557 done  
Copying blob c390feb39c6d done  
Copying blob 4f4fb700ef54 done  
Copying blob 030d07f089ad done  
Copying blob 854e61ff805d done  
Copying blob 3e7056357404 done  
Copying blob 774b267aad1f done  
Copying blob f9aff52241f8 done  
Copying blob 838849d40418 done  
Copying blob 306697e74772 done  
Copying blob 96120ee8333c done  
Copying blob 4f4fb700ef54 skipped: already exists  
Copying blob 9459bac372e1 done  
Copying blob 7a996706727b done  
Copying blob 83f31f72897e done  
Copying config 31e0f5c157 done  
Writing manifest to image destination
Storing signatures
0bf5c35af83e6ca1a87c39eff7bde1d2ca296adcd26b8e368608a97cf960f794
exit code: 0
podman volume inspect pwd_sites || podman volume create pwd_sites
['podman', 'volume', 'inspect', 'pwd_sites']
['podman', 'network', 'exists', 'pwd_default']
podman run --name=pwd_configurator_1 -d --label io.podman.compose.config-hash=123 --label io.podman.compose.project=pwd --label io.podman.compose.version=0.0.1 --label com.docker.compose.project=pwd --label com.docker.compose.project.working_dir=/home/zxz/erpnext --label com.docker.compose.project.config_files=pwd.yml --label com.docker.compose.container-number=1 --label com.docker.compose.service=configurator -e DB_HOST=db -e DB_PORT=3306 -e REDIS_CACHE=redis-cache:6379 -e REDIS_QUEUE=redis-queue:6379 -e REDIS_SOCKETIO=redis-socketio:6379 -e SOCKETIO_PORT=9000 -v pwd_sites:/home/frappe/frappe-bench/sites --net pwd_default --network-alias configurator frappe/erpnext-worker:v14.5.1 configure.py
fa7e3bd74239125f9af0a24e4c0f7b4521f407cbfac0f7f5df858bbfe4fce2b7
exit code: 0
podman volume inspect pwd_sites || podman volume create pwd_sites
['podman', 'volume', 'inspect', 'pwd_sites']
podman volume inspect pwd_assets || podman volume create pwd_assets
['podman', 'volume', 'inspect', 'pwd_assets']
['podman', 'network', 'exists', 'pwd_default']
podman run --name=pwd_create-site_1 -d --label io.podman.compose.config-hash=123 --label io.podman.compose.project=pwd --label io.podman.compose.version=0.0.1 --label com.docker.compose.project=pwd --label com.docker.compose.project.working_dir=/home/zxz/erpnext --label com.docker.compose.project.config_files=pwd.yml --label com.docker.compose.container-number=1 --label com.docker.compose.service=create-site -v pwd_sites:/home/frappe/frappe-bench/sites -v pwd_assets:/home/frappe/frappe-bench/sites/assets --net pwd_default --network-alias create-site --entrypoint ["bash", "-c"] frappe/erpnext-worker:v14.5.1 wait-for-it -t 120 db:3306; wait-for-it -t 120 redis-cache:6379; wait-for-it -t 120 redis-queue:6379; wait-for-it -t 120 redis-socketio:6379; export start=`date +%s`; until [[ -n `grep -hs ^ common_site_config.json | jq -r ".db_host // empty"` ]] && \
  [[ -n `grep -hs ^ common_site_config.json | jq -r ".redis_cache // empty"` ]] && \
  [[ -n `grep -hs ^ common_site_config.json | jq -r ".redis_queue // empty"` ]];
do
  echo "Waiting for common_site_config.json to be created";
  sleep 5;
  if (( `date +%s`-start > 120 )); then
    echo "could not find common_site_config.json with required keys";
    exit 1
  fi
done; echo "common_site_config.json found"; bench new-site frontend --admin-password=admin --db-root-password=admin --install-app payments --install-app erpnext --set-default;

16cf3f69ef6d8979f826ec5a33cbc6126bcb37d34114103d105f67de17f2417f
exit code: 0
podman volume inspect pwd_db-data || podman volume create pwd_db-data
['podman', 'volume', 'inspect', 'pwd_db-data']
['podman', 'network', 'exists', 'pwd_default']
podman run --name=pwd_db_1 -d --label io.podman.compose.config-hash=123 --label io.podman.compose.project=pwd --label io.podman.compose.version=0.0.1 --label com.docker.compose.project=pwd --label com.docker.compose.project.working_dir=/home/zxz/erpnext --label com.docker.compose.project.config_files=pwd.yml --label com.docker.compose.container-number=1 --label com.docker.compose.service=db -e MYSQL_ROOT_PASSWORD=admin -v pwd_db-data:/var/lib/mysql --net pwd_default --network-alias db --healthcheck-command /bin/sh -c 'mysqladmin ping -h localhost --password=admin' --healthcheck-interval 1s --healthcheck-retries 15 mariadb:10.6 --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci --skip-character-set-client-handshake --skip-innodb-read-only-compressed
✔ docker.io/library/mariadb:10.6
Trying to pull docker.io/library/mariadb:10.6...
Getting image source signatures
Copying blob 7b884cf51717 done  
Copying blob eaead16dc43b done  
Copying blob 682cd0757ae2 done  
Copying blob 4f6f4832182b done  
Copying blob b7adf766d7ba done  
Copying blob 4b163f9db790 done  
Copying blob 2acb7d9d14a6 done  
Copying blob 337215cb97d6 done  
Copying blob 86e9042c5555 done  
Copying blob 45d8095ac930 done  
Copying blob 6b7ce3fcb4b2 done  
Copying config 480da96458 done  
Writing manifest to image destination
Storing signatures
1f8a930f0eb52f4587a597cd9a28b9ce1f935f31e7d70d90203fbe3bbb82ab4e
exit code: 0
podman volume inspect pwd_sites || podman volume create pwd_sites
['podman', 'volume', 'inspect', 'pwd_sites']
podman volume inspect pwd_assets || podman volume create pwd_assets
['podman', 'volume', 'inspect', 'pwd_assets']
['podman', 'network', 'exists', 'pwd_default']
podman run --name=pwd_frontend_1 -d --label io.podman.compose.config-hash=123 --label io.podman.compose.project=pwd --label io.podman.compose.version=0.0.1 --label com.docker.compose.project=pwd --label com.docker.compose.project.working_dir=/home/zxz/erpnext --label com.docker.compose.project.config_files=pwd.yml --label com.docker.compose.container-number=1 --label com.docker.compose.service=frontend --network cntnet -e BACKEND=backend:8000 -e FRAPPE_SITE_NAME_HEADER=frontend -e SOCKETIO=websocket:9000 -e UPSTREAM_REAL_IP_ADDRESS=127.0.0.1 -e UPSTREAM_REAL_IP_HEADER=X-Forwarded-For -e UPSTREAM_REAL_IP_RECURSIVE=off -v pwd_sites:/usr/share/nginx/html/sites -v pwd_assets:/usr/share/nginx/html/assets --net pwd_default --network-alias frontend -p 8080:8080 frappe/erpnext-nginx:v14.5.1
✔ docker.io/frappe/erpnext-nginx:v14.5.1
Trying to pull docker.io/frappe/erpnext-nginx:v14.5.1...
Getting image source signatures
Copying blob 213ec9aee27d done  
Copying blob 84c379d17a94 done  
Copying blob 04c26c6007b3 done  
Copying blob f828fc87bf0d done  
Copying blob 924ccd16bc46 done  
Copying blob 5ff6f595230d done  
Copying blob b6f0c6a1752c done  
Copying blob cee187c43c93 done  
Copying blob c240b60b18f8 done  
Copying blob 4d1bda40b96a done  
Copying blob 3e6f21c9a980 done  
Copying blob 78e8bde8a437 done  
Copying config f06b8216e6 done  
Writing manifest to image destination
Storing signatures
6c6c3370f150dc37b776894d6eed3dc23e4eebabcd5c0387c598e8c1962a67ee
exit code: 0
podman volume inspect pwd_sites || podman volume create pwd_sites
['podman', 'volume', 'inspect', 'pwd_sites']
podman volume inspect pwd_assets || podman volume create pwd_assets
['podman', 'volume', 'inspect', 'pwd_assets']
['podman', 'network', 'exists', 'pwd_default']
podman run --name=pwd_queue-default_1 -d --label io.podman.compose.config-hash=123 --label io.podman.compose.project=pwd --label io.podman.compose.version=0.0.1 --label com.docker.compose.project=pwd --label com.docker.compose.project.working_dir=/home/zxz/erpnext --label com.docker.compose.project.config_files=pwd.yml --label com.docker.compose.container-number=1 --label com.docker.compose.service=queue-default -v pwd_sites:/home/frappe/frappe-bench/sites -v pwd_assets:/home/frappe/frappe-bench/sites/assets --net pwd_default --network-alias queue-default frappe/erpnext-worker:v14.5.1 bench worker --queue default
71d45a219e34cba32a1dfb5775e59f51ef5a0539f35be94b894b73d0b3cba42e
exit code: 0
podman volume inspect pwd_sites || podman volume create pwd_sites
['podman', 'volume', 'inspect', 'pwd_sites']
podman volume inspect pwd_assets || podman volume create pwd_assets
['podman', 'volume', 'inspect', 'pwd_assets']
['podman', 'network', 'exists', 'pwd_default']
podman run --name=pwd_queue-long_1 -d --label io.podman.compose.config-hash=123 --label io.podman.compose.project=pwd --label io.podman.compose.version=0.0.1 --label com.docker.compose.project=pwd --label com.docker.compose.project.working_dir=/home/zxz/erpnext --label com.docker.compose.project.config_files=pwd.yml --label com.docker.compose.container-number=1 --label com.docker.compose.service=queue-long -v pwd_sites:/home/frappe/frappe-bench/sites -v pwd_assets:/home/frappe/frappe-bench/sites/assets --net pwd_default --network-alias queue-long frappe/erpnext-worker:v14.5.1 bench worker --queue long
409b45dc972beb0fc979007af348cef16515a0547d36bd7d53c5fb74f2ba59bd
exit code: 0
podman volume inspect pwd_sites || podman volume create pwd_sites
['podman', 'volume', 'inspect', 'pwd_sites']
podman volume inspect pwd_assets || podman volume create pwd_assets
['podman', 'volume', 'inspect', 'pwd_assets']
['podman', 'network', 'exists', 'pwd_default']
podman run --name=pwd_queue-short_1 -d --label io.podman.compose.config-hash=123 --label io.podman.compose.project=pwd --label io.podman.compose.version=0.0.1 --label com.docker.compose.project=pwd --label com.docker.compose.project.working_dir=/home/zxz/erpnext --label com.docker.compose.project.config_files=pwd.yml --label com.docker.compose.container-number=1 --label com.docker.compose.service=queue-short -v pwd_sites:/home/frappe/frappe-bench/sites -v pwd_assets:/home/frappe/frappe-bench/sites/assets --net pwd_default --network-alias queue-short frappe/erpnext-worker:v14.5.1 bench worker --queue short
c25b87b034a07828daf2828559ccd69126c84e6424db83b1d18bce0838cf3be6
exit code: 0
podman volume inspect pwd_redis-queue-data || podman volume create pwd_redis-queue-data
['podman', 'volume', 'inspect', 'pwd_redis-queue-data']
['podman', 'network', 'exists', 'pwd_default']
podman run --name=pwd_redis-queue_1 -d --label io.podman.compose.config-hash=123 --label io.podman.compose.project=pwd --label io.podman.compose.version=0.0.1 --label com.docker.compose.project=pwd --label com.docker.compose.project.working_dir=/home/zxz/erpnext --label com.docker.compose.project.config_files=pwd.yml --label com.docker.compose.container-number=1 --label com.docker.compose.service=redis-queue -v pwd_redis-queue-data:/data --net pwd_default --network-alias redis-queue redis:6.2-alpine
✔ docker.io/library/redis:6.2-alpine
Trying to pull docker.io/library/redis:6.2-alpine...
Getting image source signatures
Copying blob 213ec9aee27d [--------------------------------------] 0.0b / 0.0b
Copying blob fb541f77610a done  
Copying blob 32a6cdb7d7d4 done  
Copying blob dd10592b1090 done  
Copying blob e3ab86c30f4c done  
Copying blob dc2e3041aaa5 done  
Copying config 48822f4436 done  
Writing manifest to image destination
Storing signatures
9626dc9f5fad6f8eae58a55ee4eeaeea61dfb576faa18c405215e366a17132bd
exit code: 0
podman volume inspect pwd_redis-cache-data || podman volume create pwd_redis-cache-data
['podman', 'volume', 'inspect', 'pwd_redis-cache-data']
['podman', 'network', 'exists', 'pwd_default']
podman run --name=pwd_redis-cache_1 -d --label io.podman.compose.config-hash=123 --label io.podman.compose.project=pwd --label io.podman.compose.version=0.0.1 --label com.docker.compose.project=pwd --label com.docker.compose.project.working_dir=/home/zxz/erpnext --label com.docker.compose.project.config_files=pwd.yml --label com.docker.compose.container-number=1 --label com.docker.compose.service=redis-cache -v pwd_redis-cache-data:/data --net pwd_default --network-alias redis-cache redis:6.2-alpine
963ad23bdcf8feac7bef1ae5d4dca241a4486da118b39dc6bb91f679a970bcd4
exit code: 0
podman volume inspect pwd_redis-socketio-data || podman volume create pwd_redis-socketio-data
['podman', 'volume', 'inspect', 'pwd_redis-socketio-data']
['podman', 'network', 'exists', 'pwd_default']
podman run --name=pwd_redis-socketio_1 -d --label io.podman.compose.config-hash=123 --label io.podman.compose.project=pwd --label io.podman.compose.version=0.0.1 --label com.docker.compose.project=pwd --label com.docker.compose.project.working_dir=/home/zxz/erpnext --label com.docker.compose.project.config_files=pwd.yml --label com.docker.compose.container-number=1 --label com.docker.compose.service=redis-socketio -v pwd_redis-socketio-data:/data --net pwd_default --network-alias redis-socketio redis:6.2-alpine
2e43ff3ee6020c94b9fc1a149023f2de5c0d55a6668e41bb3239785729bbae73
exit code: 0
podman volume inspect pwd_sites || podman volume create pwd_sites
['podman', 'volume', 'inspect', 'pwd_sites']
podman volume inspect pwd_assets || podman volume create pwd_assets
['podman', 'volume', 'inspect', 'pwd_assets']
['podman', 'network', 'exists', 'pwd_default']
podman run --name=pwd_scheduler_1 -d --label io.podman.compose.config-hash=123 --label io.podman.compose.project=pwd --label io.podman.compose.version=0.0.1 --label com.docker.compose.project=pwd --label com.docker.compose.project.working_dir=/home/zxz/erpnext --label com.docker.compose.project.config_files=pwd.yml --label com.docker.compose.container-number=1 --label com.docker.compose.service=scheduler -v pwd_sites:/home/frappe/frappe-bench/sites -v pwd_assets:/home/frappe/frappe-bench/sites/assets --net pwd_default --network-alias scheduler frappe/erpnext-worker:v14.5.1 bench schedule
adeb10082743fd1318a19f111d68ac3e117c1890dbf23763e0623a5783d085a0
exit code: 0
podman volume inspect pwd_sites || podman volume create pwd_sites
['podman', 'volume', 'inspect', 'pwd_sites']
podman volume inspect pwd_assets || podman volume create pwd_assets
['podman', 'volume', 'inspect', 'pwd_assets']
['podman', 'network', 'exists', 'pwd_default']
podman run --name=pwd_websocket_1 -d --label io.podman.compose.config-hash=123 --label io.podman.compose.project=pwd --label io.podman.compose.version=0.0.1 --label com.docker.compose.project=pwd --label com.docker.compose.project.working_dir=/home/zxz/erpnext --label com.docker.compose.project.config_files=pwd.yml --label com.docker.compose.container-number=1 --label com.docker.compose.service=websocket -v pwd_sites:/home/frappe/frappe-bench/sites -v pwd_assets:/home/frappe/frappe-bench/sites/assets --net pwd_default --network-alias websocket frappe/frappe-socketio:v14.14.2
✔ docker.io/frappe/frappe-socketio:v14.14.2
Trying to pull docker.io/frappe/frappe-socketio:v14.14.2...
Getting image source signatures
Copying blob 213ec9aee27d [--------------------------------------] 0.0b / 0.0b
Copying blob 9653b84b6e0f done  
Copying blob d64061ca841e done  
Copying blob 78c6f799e789 done  
Copying blob e52143b06d85 done  
Copying blob 3706d9c05056 done  
Copying blob 4f4fb700ef54 skipped: already exists  
Copying blob 2f52f7af48dd done  
Copying blob 82b50cca3209 done  
Copying blob b649c1b3570a done  
Copying config 13f5536e7a done  
Writing manifest to image destination
Storing signatures
6c47496b9cf02209d78072fede6f10ebd205767f786f3d45669338e4b59cf939
exit code: 0

Arch Linux

Does everything work as expected for you now?

1 Like

Hello, I'm using:

++++++
lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 22.04.1 LTS
Release:        22.04
Codename:       jammy
++++++

login

https://192.168.178.98:8080

No, I have trouble with the login; I don't get the login screen.

Info

podman network ls
NETWORK ID    NAME         VERSION     PLUGINS
2f259bab93aa  podman       0.4.0       bridge,portmap,firewall,tuning
8df6644559ab  pwd_default  0.4.0       bridge,portmap,firewall,tuning,dnsname

Info2

podman image list
REPOSITORY                        TAG         IMAGE ID      CREATED       SIZE
docker.io/frappe/frappe-socketio  v14.14.2    13f5536e7aeb  31 hours ago  187 MB
docker.io/frappe/erpnext-nginx    v14.5.1     f06b8216e615  31 hours ago  342 MB
docker.io/frappe/erpnext-worker   v14.5.1     31e0f5c15774  31 hours ago  1.17 GB
docker.io/library/mariadb         10.6        480da9645831  12 days ago   420 MB
docker.io/library/redis           6.2-alpine  48822f443672  4 weeks ago   26.7 MB

I can reach my Cockpit (cockpit-podman) at:

https://192.168.178.98:9090/


http://ftpmirror.your.org/pub/ubuntu/archive/pool/universe/a/aardvark-dns/

Kubic packages have been discontinued for Ubuntu 22.04 LTS. Current users of the Kubic repos for Ubuntu are highly recommended to uninstall the packages from the Kubic repos before upgrading to Ubuntu 22.04 LTS.

https://podman.io/getting-started/installation.html#building-from-scratch

I won't use “Building from scratch”, too much trouble.

Now my system:

++++++
lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 22.04.1 LTS
Release: 22.04
Codename: jammy
++++++

+++++++
podman --version && podman-compose version
podman version 4.3.0
['podman', '--version', '']
using podman version: 4.3.0
podman-composer version 1.0.3
podman --version
podman version 4.3.0
exit code: 0
++++++++

How can I use ERPNext with this podman?

I don't know about your setup; I can use ERPNext with the exact same podman and podman-compose. I tried, and I could see the ERPNext login page at http://localhost:8080

I've got rootless podman configured; there is NO podman systemctl service active or running, for the system as well as the user.

No active systemctl service:

❯ systemctl --user status podman
○ podman.service - Podman API Service
     Loaded: loaded (/usr/lib/systemd/user/podman.service; disabled; preset: enabled)
     Active: inactive (dead)
TriggeredBy: ○ podman.socket
       Docs: man:podman-system-service(1)

❯ sudo systemctl status podman
○ podman.service - Podman API Service
     Loaded: loaded (/usr/lib/systemd/system/podman.service; disabled; preset: disabled)
     Active: inactive (dead)
TriggeredBy: ○ podman.socket
       Docs: man:podman-system-service(1)

check this: podman/docs/tutorials/rootless_tutorial.md at main · containers/podman · GitHub

Check podman ps. If the pwd_frontend_1 container isn't running, don't edit any network_mode and try again (see the sketch after the commands below):

wget https://raw.githubusercontent.com/frappe/frappe_docker/main/pwd.yml -O pwd.yml
podman-compose --project-name pwd -f pwd.yml up -d
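If pwd_frontend_1 was created but keeps exiting, something like this (again assuming podman-compose's default naming) should show why:

podman ps -a --filter name=pwd_frontend_1
podman logs pwd_frontend_1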

Hello @revant_one, did you use Ubuntu 22.04.1, or which distro did you use?

What is your output? ---->

podman --version && podman-compose version

Do you think I should rather use podman 3.4.3?

apt show podman
Package: podman
Version: 3.4.4+ds1-1ubuntu1

Arch Linux

❯ uname -a
Linux revant-laptop 6.0.7-arch1-1 #1 SMP PREEMPT_DYNAMIC Thu, 03 Nov 2022 18:01:58 +0000 x86_64 GNU/Linux
❯ podman-compose --version
['podman', '--version', '']
using podman version: 4.3.0
podman-composer version  1.0.3
podman --version 
podman version 4.3.0
exit code: 0
1 Like

++++++
lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 22.04.1 LTS
Release: 22.04
Codename: jammy
++++++

++++++
podman-compose version
['podman', '--version', '']
using podman version: 3.4.4
podman-composer version  1.0.3
podman --version 
podman version 3.4.4
exit code: 0
++++++

yml

cat /etc/jannis/erpnext/pwd.yml

version: "3"

services:
  backend:
    image: frappe/erpnext-worker:v14.5.1
    deploy:
      restart_policy:
        condition: on-failure
    volumes:
      - sites:/home/frappe/frappe-bench/sites
      - assets:/home/frappe/frappe-bench/sites/assets

  configurator:
    image: frappe/erpnext-worker:v14.5.1
    command:
      - configure.py
    environment:
      DB_HOST: db
      DB_PORT: "3306"
      REDIS_CACHE: redis-cache:6379
      REDIS_QUEUE: redis-queue:6379
      REDIS_SOCKETIO: redis-socketio:6379
      SOCKETIO_PORT: "9000"
    volumes:
      - sites:/home/frappe/frappe-bench/sites

  create-site:
    image: frappe/erpnext-worker:v14.5.1
    deploy:
      restart_policy:
        condition: on-failure
    volumes:
      - sites:/home/frappe/frappe-bench/sites
      - assets:/home/frappe/frappe-bench/sites/assets
    entrypoint:
      - bash
      - -c
    command:
      - >
        wait-for-it -t 120 db:3306;
        wait-for-it -t 120 redis-cache:6379;
        wait-for-it -t 120 redis-queue:6379;
        wait-for-it -t 120 redis-socketio:6379;
        export start=`date +%s`;
        until [[ -n `grep -hs ^ common_site_config.json | jq -r ".db_host // empty"` ]] && \
          [[ -n `grep -hs ^ common_site_config.json | jq -r ".redis_cache // empty"` ]] && \
          [[ -n `grep -hs ^ common_site_config.json | jq -r ".redis_queue // empty"` ]];
        do
          echo "Waiting for common_site_config.json to be created";
          sleep 5;
          if (( `date +%s`-start > 120 )); then
            echo "could not find common_site_config.json with required keys";
            exit 1
          fi
        done;
        echo "common_site_config.json found";
        bench new-site frontend --admin-password=admin --db-root-password=admin --install-app payments --install-app erpnext --set-default;

  db:
    image: mariadb:10.6
    healthcheck:
      test: mysqladmin ping -h localhost --password=admin
      interval: 1s
      retries: 15
    deploy:
      restart_policy:
        condition: on-failure
    command:
      - --character-set-server=utf8mb4
      - --collation-server=utf8mb4_unicode_ci
      - --skip-character-set-client-handshake
      - --skip-innodb-read-only-compressed # Temporary fix for MariaDB 10.6
    environment:
      MYSQL_ROOT_PASSWORD: admin
    volumes:
      - db-data:/var/lib/mysql

  frontend:
    image: frappe/erpnext-nginx:v14.5.1
    deploy:
      restart_policy:
        condition: on-failure
    environment:
      BACKEND: backend:8000
      FRAPPE_SITE_NAME_HEADER: frontend
      SOCKETIO: websocket:9000
      UPSTREAM_REAL_IP_ADDRESS: 127.0.0.1
      UPSTREAM_REAL_IP_HEADER: X-Forwarded-For
      UPSTREAM_REAL_IP_RECURSIVE: "off"
    volumes:
      - sites:/usr/share/nginx/html/sites
      - assets:/usr/share/nginx/html/assets
    ports:
      - "8080:8080"

  queue-default:
    image: frappe/erpnext-worker:v14.5.1
    deploy:
      restart_policy:
        condition: on-failure
    command:
      - bench
      - worker
      - --queue
      - default
    volumes:
      - sites:/home/frappe/frappe-bench/sites
      - assets:/home/frappe/frappe-bench/sites/assets

  queue-long:
    image: frappe/erpnext-worker:v14.5.1
    deploy:
      restart_policy:
        condition: on-failure
    command:
      - bench
      - worker
      - --queue
      - long
    volumes:
      - sites:/home/frappe/frappe-bench/sites
      - assets:/home/frappe/frappe-bench/sites/assets

  queue-short:
    image: frappe/erpnext-worker:v14.5.1
    deploy:
      restart_policy:
        condition: on-failure
    command:
      - bench
      - worker
      - --queue
      - short
    volumes:
      - sites:/home/frappe/frappe-bench/sites
      - assets:/home/frappe/frappe-bench/sites/assets

  redis-queue:
    image: redis:6.2-alpine
    deploy:
      restart_policy:
        condition: on-failure
    volumes:
      - redis-queue-data:/data

  redis-cache:
    image: redis:6.2-alpine
    deploy:
      restart_policy:
        condition: on-failure
    volumes:
      - redis-cache-data:/data

  redis-socketio:
    image: redis:6.2-alpine
    deploy:
      restart_policy:
        condition: on-failure
    volumes:
      - redis-socketio-data:/data

  scheduler:
    image: frappe/erpnext-worker:v14.5.1
    deploy:
      restart_policy:
        condition: on-failure
    command:
      - bench
      - schedule
    volumes:
      - sites:/home/frappe/frappe-bench/sites
      - assets:/home/frappe/frappe-bench/sites/assets

  websocket:
    image: frappe/frappe-socketio:v14.14.2
    deploy:
      restart_policy:
        condition: on-failure
    volumes:
      - sites:/home/frappe/frappe-bench/sites
      - assets:/home/frappe/frappe-bench/sites/assets

volumes:
  assets:
  db-data:
  redis-queue-data:
  redis-cache-data:
  redis-socketio-data:
  sites:

podman-compose

podman-compose --project-name pwd -f pwd.yml up -d

podman ps

podman ps
CONTAINER ID  IMAGE                                      COMMAND               CREATED         STATUS                       PORTS       NAMES
8aabde349b2f  docker.io/frappe/erpnext-worker:v14.5.1    /home/frappe/frap...  14 minutes ago  Up 14 minutes ago                        pwd_backend_1
3d5e53935f98  docker.io/library/mariadb:10.6             --character-set-s...  11 minutes ago  Up 11 minutes ago (healthy)              pwd_db_1
f27edd72a3ae  docker.io/library/redis:6.2-alpine         redis-server          8 minutes ago   Up 8 minutes ago                         pwd_redis-queue_1
d37d38e115eb  docker.io/library/redis:6.2-alpine         redis-server          8 minutes ago   Up 8 minutes ago                         pwd_redis-cache_1
c938f498c242  docker.io/library/redis:6.2-alpine         redis-server          8 minutes ago   Up 8 minutes ago                         pwd_redis-socketio_1
aba3bcc6b336  docker.io/frappe/erpnext-worker:v14.5.1    bench schedule        8 minutes ago   Up 8 minutes ago                         pwd_scheduler_1
46e683648966  docker.io/frappe/frappe-socketio:v14.14.2  node /home/frappe...  7 minutes ago   Up 7 minutes ago                         pwd_websocket_1

podman service

sudo systemctl status  podman.service
○ podman.service - Podman API Service
     Loaded: loaded (/lib/systemd/system/podman.service; enabled; vendor preset: enabled)
     Active: inactive (dead) since Mon 2022-11-07 11:58:52 UTC; 40min ago
TriggeredBy: ● podman.socket
       Docs: man:podman-system-service(1)
    Process: 2385 ExecStart=/usr/bin/podman $LOGGING system service (code=exited, status=0/SUCCESS)
   Main PID: 2385 (code=exited, status=0/SUCCESS)
        CPU: 140ms

Nov 07 11:58:47 ubu2204 systemd[1]: Started Podman API Service.
Nov 07 11:58:47 ubu2204 podman[2385]: time="2022-11-07T11:58:47Z" level=info msg="/usr/bin/podman filtering at lo>
Nov 07 11:58:47 ubu2204 podman[2385]: time="2022-11-07T11:58:47Z" level=info msg="[graphdriver] using prior stora>
Nov 07 11:58:47 ubu2204 podman[2385]: time="2022-11-07T11:58:47Z" level=info msg="Found CNI network podman (type=>
Nov 07 11:58:47 ubu2204 podman[2385]: 2022-11-07 11:58:47.546046631 +0000 UTC m=+0.223696814 system refresh
Nov 07 11:58:47 ubu2204 podman[2385]: time="2022-11-07T11:58:47Z" level=info msg="Setting parallel job count to 1>
Nov 07 11:58:47 ubu2204 podman[2385]: time="2022-11-07T11:58:47Z" level=info msg="using systemd socket activation>
Nov 07 11:58:47 ubu2204 podman[2385]: time="2022-11-07T11:58:47Z" level=info msg="using API endpoint: ''"
Nov 07 11:58:47 ubu2204 podman[2385]: time="2022-11-07T11:58:47Z" level=info msg="API service listening on \"/run>
Nov 07 11:58:52 ubu2204 systemd[1]: podman.service: Deactivated successfully.

podman network

#
podman network ls
NETWORK ID    NAME         VERSION     PLUGINS
2f259bab93aa  podman       0.4.0       bridge,portmap,firewall,tuning
8df6644559ab  pwd_default  0.4.0       bridge,portmap,firewall,tuning,dnsname

#
podman info | grep network
  network:



ERPNext login

https://192.168.178.98:8080

Browser message:
--> ERR_CONNECTION_REFUSED

check

sudo  systemctl --user status podman
Failed to connect to bus: $DBUS_SESSION_BUS_ADDRESS and $XDG_RUNTIME_DIR not defined (consider using --machine=<user>@.host --user to connect to bus of other user)
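That error comes from combining sudo with --user; a sketch of checking the user-level units as the logged-in user instead:

systemctl --user status podman.socket
systemctl --user status podman.service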

@revant_one

Where do I get the "podman-dnsname" or the "netavark" package for Ubuntu 22.04.1?

netavark needs Podman 4.0+

What is the solution for podman version 3.4.4?

$ apt search podman

catatonit/jammy,now 0.1.7-1 amd64
  init process for containers

cockpit-podman/jammy 45-1 all
  Cockpit component for Podman containers

conmon/jammy,now 2.0.25+ds1-1.1 amd64 
  OCI container runtime monitor

golang-github-containernetworking-plugin-dnsname/jammy,now 1.3.1+ds1-2 amd64  
  name resolution for containers

podman/jammy,now 3.4.4+ds1-1ubuntu1 amd64
  engine to run OCI-based containers in Pods

podman-docker/jammy 3.4.4+ds1-1ubuntu1 amd64
  engine to run OCI-based containers in Pods - wrapper for docker

podman-toolbox/jammy 0.0.99.2-2ubuntu1 amd64
  unprivileged development environment using containers

resource-agents-extra/jammy 1:4.7.0-1ubuntu7 amd64
  Cluster Resource Agents

ruby-docker-api/jammy 2.2.0-1 all
  Ruby gem to interact with docker.io remote AP

How can I solve my problem with podman 3.4.4?