Error building a custom Docker image

I followed this guide to build a Docker image from a custom app in frappe_docker/custom_app/, but I got this error:

cp -r apps/customapp1/textmapp1/node_modules /out/assets/customapp1/node_modules
cp: cannot overwrite non-directory '/out/assets/customapp1/node_modules' with directory 'apps/customapp1/node_modules'
------
ERROR: failed to solve: executor failed running [/bin/sh -c install-app customapp1 &&     install-app customapp2 &&     install-app customapp3]: exit code: 1
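For context, this cp failure is generic: it happens whenever the destination already contains an entry named node_modules that is not a directory (often a symlink left in the image). A minimal reproduction in plain shell, unrelated to the frappe images:

```shell
# The destination already has "node_modules" as a non-directory entry
# (a plain file here, often a symlink inside an image), so cp refuses
# to replace it with a directory.
mkdir -p src/node_modules out
touch out/node_modules            # pre-existing non-directory target
cp -r src/node_modules out/ 2>&1 || true
# GNU cp reports: cannot overwrite non-directory 'out/node_modules'
#                 with directory 'src/node_modules'
rm -rf src out                    # clean up
```

Deleting the stale destination entry before copying is what resolves it, which is what the advice later in this thread amounts to.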

I changed the frontend Dockerfile from:

ARG FRAPPE_VERSION
ARG ERPNEXT_VERSION

FROM frappe/assets-builder:v13 as assets

COPY repos apps

RUN install-app customapp1 && \
    install-app customapp2 && \
    install-app customapp3

FROM $$user/erpnext-nginx:${ERPNEXT_VERSION}

COPY --from=assets /out /usr/share/nginx/html

which showed the error above, to this:

ARG FRAPPE_VERSION
ARG ERPNEXT_VERSION

FROM frappe/assets-builder:v13 as assets

COPY repos ..apps

RUN install-app customapp1 && \
    install-app customapp2 && \
    install-app customapp3

FROM $$user/erpnext-nginx:${ERPNEXT_VERSION}

COPY --from=assets /out /usr/share/nginx/html

With that change the build reported that everything was done, and the Docker image was generated successfully. But the endpoint in my cluster returns a 404 error.

*I am deploying with a Helm chart on Kubernetes on AWS. The Kubernetes dashboard shows everything working fine with no restarts; I checked all pods and services and inspected the cluster, and I am using an ingress with nginx.*

Can anyone help me with this?

The frontend now uses frappe/bench as the base layer to build assets. Check the latest frontend.Dockerfile.

If you overwrite assets, make sure you delete the static files from the image first, or just copy the new files.
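A minimal sketch of that suggestion, reusing the stage and image names from the Dockerfiles above ($user is a placeholder, and the assets build stage is assumed to exist as in the earlier snippet):

```Dockerfile
ARG ERPNEXT_VERSION

FROM $user/erpnext-nginx:${ERPNEXT_VERSION}

# Remove the stale static files (including any node_modules symlink) first,
# so copying the freshly built assets never hits an existing non-directory.
RUN rm -rf /usr/share/nginx/html/assets
COPY --from=assets /out /usr/share/nginx/html
```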

Hi, thanks for replying.
In my setup I build custom frappe and custom erpnext Docker images as shown in this repo: GitHub - frappe/frappe_docker: Docker images for production and development setups of the Frappe framework and ERPNext. This generates five Docker images (erpnext-nginx, erpnext-worker, frappe-worker, frappe-nginx, frappe-socketio), and in my local setup I have 3 custom apps, not one.

I am wondering about this, because the guide says I have to build a custom Docker image for each app.

I have actually solved the Docker image building issue, but there is still no page showing in my cluster on AWS.

I followed this guide to build my cluster.



And this is my configure-bench-job.yaml file:

# Source: erpnext/templates/job-configure-bench.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: configure-erpnext-v13
spec:
  backoffLimit: 0
  template:
    spec:
      serviceAccountName: erpnext-v13
      securityContext:
        supplementalGroups:
        - 1000
      containers:
      - name: configure
        image: "user/custom_app-erpnext-worker:v1"
        imagePullPolicy: Always
        command: ['bash', '-c']
        args:
          - >
            ls -1 ../apps > apps.txt;
            [[ -f common_site_config.json ]] || echo "{}" > common_site_config.json;
            bench set-config -gp db_port $DB_PORT;
            bench set-config -gp rds_db 1;
            bench set-config -g db_host $DB_HOST;
            bench set-config -g redis_cache $REDIS_CACHE;
            bench set-config -g redis_queue $REDIS_QUEUE;
            bench set-config -g redis_socketio $REDIS_SOCKETIO;
            bench set-config -gp socketio_port $SOCKETIO_PORT;
        env:
          - name: DB_HOST
            value: url
          - name: DB_PORT
            value: "3306"
          - name: REDIS_CACHE
            value: redis://erpnext-v13-redis-cache-master:6379
          - name: REDIS_QUEUE
            value: redis://erpnext-v13-redis-queue-master:6379
          - name: REDIS_SOCKETIO
            value: redis://erpnext-v13-redis-socketio-master:6379
          - name: SOCKETIO_PORT
            value: "9000"
        resources:
          {}
        volumeMounts:
          - name: sites-dir
            mountPath: /home/frappe/frappe-bench/sites
          - name: logs
            mountPath: /home/frappe/frappe-bench/logs
      restartPolicy: Never
      volumes:
        - name: sites-dir
          persistentVolumeClaim:
            claimName: erpnext-v13
            readOnly: false
        - name: logs
          persistentVolumeClaim:
            claimName: erpnext-v13-logs
            readOnly: false

I have a thought that the setup only accepts requests from my domain, but the AWS endpoint returns a 404 from nginx. In my cluster I have checked the TLS certificate, and it is not showing.
Also, is there any guide on how to set up a domain name with ingress and nginx on AWS with Kubernetes?

To build multiple apps, refer to GitHub - castlecraft/custom_frappe_docker.
Even if you have 50 apps, it'll only build 1 huge image!

Did you create the first site and add the ingress?

Thank you so much, that works for building the Docker image. I then ran this command:

helm upgrade --install \
  erpnext-v13 ourrepo/erpnext \
  --namespace erpnext \
  --create-namespace \
  --version 5.0.5 \
  --set dbHost=link.rds.amazonaws.com \
  --set dbPort=3306 \
  --set dbRootUser=root \
  --set dbRootPassword=password \
  --set jobs.configure.enabled=false \
  --set persistence.worker.enabled=true \
  --set persistence.worker.size=12Gi \
  --set persistence.worker.storageClass=efs-sc \
  --set persistence.logs.enabled=true \
  --set persistence.logs.size=2Gi \
  --set persistence.logs.storageClass=efs-sc \
  --set mariadb.enabled=false

but nothing changed on the frontend.
(Screenshot of all services from Kubernetes not shown.)

When I click on the TLS certificate, it shows nothing or does not exist.

This is my ingress YAML file:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: roadslinkmiddleeast.ae
  namespace: erpnext
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-wildcard
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: roadslinkmiddleeast.ae
    http:
      paths:
      - backend:
          service:
            name: erpnext-v13
            port:
              number: 8080
        path: /
        pathType: ImplementationSpecific
  tls:
  - hosts:
    - 'roadslinkmiddleeast.ae'
    - '*.roadslinkmiddleeast.ae'
    secretName: wildcard-roadslinkmiddleeast.ae
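If the TLS certificate "shows nothing", it usually means cert-manager never issued it. Some hedged checks (assuming cert-manager's ingress-shim created a Certificate named after the secret; note also that Let's Encrypt can only issue wildcard certificates such as *.roadslinkmiddleeast.ae through a DNS-01 solver, not HTTP-01):

```shell
kubectl -n erpnext get certificate
kubectl -n erpnext describe certificate wildcard-roadslinkmiddleeast.ae
kubectl -n erpnext get challenges   # pending ACME challenges block issuance
kubectl -n erpnext get secret wildcard-roadslinkmiddleeast.ae
```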

[quote="Tarek_Eesa, post:3, topic:98065"]
i have checked with tls, it is not showing.
[/quote]
I created my first site with this command:

helm template erpnext-v13 -n erpnext ourrepo/erpnext \
  --version v5.0.5 \
  --set dbHost=link.rds.amazonaws.com \
  --set dbPort=3306 \
  --set dbRootUser=root \
  --set dbRootPassword=password \
  --set jobs.configure.enabled=false \
  --set jobs.createSite.enabled=true \
  --set jobs.createSite.siteName="roadslinkmiddleeast.ae" \
  --set jobs.createSite.adminPassword=password \
  --set persistence.worker.enabled=true \
  --set persistence.worker.size=12Gi \
  --set persistence.worker.storageClass=efs-sc \
  --set persistence.logs.enabled=true \
  --set persistence.logs.size=2Gi \
  --set persistence.logs.storageClass=efs-sc \
  --set mariadb.enabled=false \
  -s templates/job-create-site.yaml | kubectl -n erpnext apply -f -

These are the logs from erpnext-nginx:

/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/frappe-entrypoint.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: /etc/nginx/conf.d/default.conf differs from the packaged version
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
20-envsubst-on-templates.sh: Running envsubst on /etc/nginx/templates/default.conf.template to /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2022/12/04 19:25:57 [notice] 1#1: using the "epoll" event method
2022/12/04 19:25:57 [notice] 1#1: nginx/1.23.2
2022/12/04 19:25:57 [notice] 1#1: built by gcc 11.2.1 20220219 (Alpine 11.2.1_git20220219) 
2022/12/04 19:25:57 [notice] 1#1: OS: Linux 5.4.219-126.411.amzn2.x86_64
2022/12/04 19:25:57 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2022/12/04 19:25:57 [notice] 1#1: start worker processes
2022/12/04 19:25:57 [notice] 1#1: start worker process 36
2022/12/04 19:25:57 [notice] 1#1: start worker process 37
127.0.0.1 - - [05/Dec/2022:08:58:21 +0000] "GET / HTTP/1.1" 404 136 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:107.0) Gecko/20100101 Firefox/107.0" "-"
127.0.0.1 - - [05/Dec/2022:08:58:21 +0000] "GET /favicon.ico HTTP/1.1" 404 136 "http://localhost:8080/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:107.0) Gecko/20100101 Firefox/107.0" "-"
127.0.0.1 - - [05/Dec/2022:08:58:55 +0000] "GET / HTTP/1.1" 404 136 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:107.0) Gecko/20100101 Firefox/107.0" "-"
127.0.0.1 - - [05/Dec/2022:08:58:55 +0000] "GET /favicon.ico HTTP/1.1" 499 0 "http://localhost:8080/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:107.0) Gecko/20100101 Firefox/107.0" "-"
127.0.0.1 - - [05/Dec/2022:08:59:36 +0000] "GET /admin HTTP/1.1" 404 136 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:107.0) Gecko/20100101 Firefox/107.0" "-"
127.0.0.1 - - [05/Dec/2022:09:14:42 +0000] "GET /api/method/ping HTTP/1.1" 200 18 "-" "curl/7.68.0" "-"
127.0.0.1 - - [05/Dec/2022:10:19:10 +0000] "GET / HTTP/1.1" 404 136 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:107.0) Gecko/20100101 Firefox/107.0" "-"
127.0.0.1 - - [05/Dec/2022:10:19:10 +0000] "GET /favicon.ico HTTP/1.1" 499 0 "http://localhost:8080/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:107.0) Gecko/20100101 Firefox/107.0" "-"

Also, one more thing: is there any AWS background setup needed to make this domain work with Route 53, or to register a new domain? In a normal domain setup we need to give the domain an endpoint and an SSL certificate.

Thank you again.

You need to enable dbRds=true and jobs.configure.enabled=true. That will set rds_db: 1 in common_site_config.json.

Then you need to create the site with an additional job.

After that, if the service is able to serve the site, the ingress should work.
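Applied to the helm command from earlier in the thread, that suggestion would look roughly like this (a sketch only; the release name, repo, and other values are copied from the posts above, so adjust to your environment):

```shell
helm upgrade --install erpnext-v13 ourrepo/erpnext \
  --namespace erpnext \
  --set dbHost=link.rds.amazonaws.com \
  --set dbPort=3306 \
  --set dbRds=true \
  --set jobs.configure.enabled=true \
  --set persistence.worker.enabled=true \
  --set persistence.worker.storageClass=efs-sc \
  --set mariadb.enabled=false
```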

Hi Revant,

I ran kubectl apply -f job-configure-bench.yaml after helm install, as shown in the guide.
I tried setting dbRds=true and jobs.configure.enabled=true without running job-configure-bench.yaml, but the initialization gets stuck at: container "configure" in pod "erpnext-v13-conf-bench-20221205155755-snm2h" is waiting to start: PodInitializing

But the old way completes everything, and the site installation finishes properly.

# Source: erpnext/templates/job-configure-bench.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: configure-erpnext-v13
spec:
  backoffLimit: 0
  template:
    spec:
      serviceAccountName: erpnext-v13
      securityContext:
        supplementalGroups:
        - 1000
      containers:
      - name: configure
        image: "image/custom_app-erpnext-worker:v1"
        imagePullPolicy: Always
        command: ['bash', '-c']
        args:
          - >
            ls -1 ../apps > apps.txt;
            [[ -f common_site_config.json ]] || echo "{}" > common_site_config.json;
            bench set-config -gp db_port $DB_PORT;
            bench set-config -gp rds_db 1;
            bench set-config -g db_host $DB_HOST;
            bench set-config -g redis_cache $REDIS_CACHE;
            bench set-config -g redis_queue $REDIS_QUEUE;
            bench set-config -g redis_socketio $REDIS_SOCKETIO;
            bench set-config -gp socketio_port $SOCKETIO_PORT;
        env:
          - name: DB_HOST
            value: link.rds.amazonaws.com
          - name: DB_PORT
            value: "3306"
          - name: REDIS_CACHE
            value: redis://erpnext-v13-redis-cache-master:6379
          - name: REDIS_QUEUE
            value: redis://erpnext-v13-redis-queue-master:6379
          - name: REDIS_SOCKETIO
            value: redis://erpnext-v13-redis-socketio-master:6379
          - name: SOCKETIO_PORT
            value: "9000"
        resources:
          {}
        volumeMounts:
          - name: sites-dir
            mountPath: /home/frappe/frappe-bench/sites
          - name: logs
            mountPath: /home/frappe/frappe-bench/logs
      restartPolicy: Never
      volumes:
        - name: sites-dir
          persistentVolumeClaim:
            claimName: erpnext-v13
            readOnly: false
        - name: logs
          persistentVolumeClaim:
            claimName: erpnext-v13-logs
            readOnly: false

I also ran these commands:

kubectl port-forward -n erpnext svc/erpnext-v13 8080:8080
curl -H "Host: roadslinkmiddleeast.ae" http://0.0.0.0:8080/api/method/ping
{"message":"pong"}

(Screenshot from my nginx container not shown.)

And still the same result.
Is there any way I can access the site without having the domain, or just expose the service to an external port?

You can do that if it's possible for you.

I have not faced this issue. I set up the nginx ingress controller, which adds a cloud-provided load balancer.

Adding the site ingress makes the load balancer serve the site at the domain/location.
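One detail worth knowing when testing without a real domain: the frappe nginx resolves which site to serve from the HTTP Host header, so the raw ELB hostname returns 404 unless a site with that exact name exists. Two hedged workarounds (hostnames are taken from this thread; <load-balancer-ip> is a placeholder):

```shell
# 1) Send the Host header by hand through a port-forward:
kubectl -n erpnext port-forward svc/erpnext-v13 8080:8080
curl -H "Host: roadslinkmiddleeast.ae" http://127.0.0.1:8080/

# 2) Or map the site name to the load balancer locally and browse normally:
echo "<load-balancer-ip> roadslinkmiddleeast.ae" | sudo tee -a /etc/hosts
```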

Thank you for replying.

The thing is, Mr. Revant, that I have not bought a domain yet, so I just want this setup to work as a test on the AWS endpoint, like http://afd0b30b188554e63b384d886a9c020e-06959e8f7a284655.elb.us-west-2.amazonaws.com/
Sorry, I have only been working with Frappe and ERPNext for 2 months; essentially I am a Python developer.
I can make a new site or a new setup, but I need to view this site from the frontend.

Then you should not jump straight into using Kubernetes.

Can you set up an nginx site on Kubernetes?
If you can, then you can set up any site.
If you can't, first figure that out, then proceed.

Thank you, sir, for replying.

I have the site running locally with bench, but I need a live, production-ready version.
I checked with the community; most users have issues with ingress, load balancers, and nginx,
and I could not find any official documentation for this.
Sir, if you have any guide, even one that requires buying a domain, that is not a problem for now.
After successfully building the cluster and connecting the database and EFS file storage, if you could provide any guide to setting up the load balancer with ingress and nginx, or just a guide to registering a new domain and setting it up with AWS and Kubernetes.

One more edit:
I ran these commands:

kubectl port-forward -n ingress-nginx svc/ingress-nginx-controller 8080:80
curl -H "Host: roadslinkmiddleeast.ae" http://0.0.0.0:8080/api/method/ping

The response is:

<html>
<head><title>308 Permanent Redirect</title></head>
<body>
<center><h1>308 Permanent Redirect</h1></center>
<hr><center>nginx</center>
</body>
</html>
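That 308 is typically ingress-nginx redirecting HTTP to HTTPS, so the request never reaches the backend over the forwarded port 80. One way to test the HTTPS side instead (a sketch; -k skips certificate verification since the certificate may not be issued yet):

```shell
kubectl -n ingress-nginx port-forward svc/ingress-nginx-controller 8443:443
curl -k --resolve roadslinkmiddleeast.ae:8443:127.0.0.1 \
  https://roadslinkmiddleeast.ae:8443/api/method/ping
```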

It's working for me. Tests are running fine.

Maybe someone else can help.

The ingress controller documentation may have more info. If the site is served and the service is running, then I don't think it's a problem with the ERPNext setup. Something must be missing in the other configuration.

I have the following error when I set --set jobs.configure.enabled=false in helm install.

Logs from frappe-bench-ownership in erpnext-v13-vol-fix-20221212160444-bvck6:

chown: changing ownership of '/home/frappe/frappe-bench/sites/.build': Operation not permitted
chown: changing ownership of '/home/frappe/frappe-bench/sites/assets': Operation not permitted
chown: changing ownership of '/home/frappe/frappe-bench/sites': Operation not permitted

The arguments are:
chown -R "1000:1000" /home/frappe/frappe-bench

How do I change this in the Docker image?
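On EFS, chown from inside a pod commonly fails with "Operation not permitted". Rather than changing the Docker image, one option is to provision the volume through an EFS access point that already owns files as uid/gid 1000, which makes the chown -R 1000:1000 step a no-op. A sketch assuming the aws-efs-csi-driver with dynamic provisioning (fs-xxxxxxxx is a placeholder for your file system id):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap   # one EFS access point per volume
  fileSystemId: fs-xxxxxxxx  # placeholder: your EFS file system id
  directoryPerms: "700"
  uid: "1000"                # files created owned by frappe's uid/gid
  gid: "1000"
```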