I'm trying to create a Docker image for AWX by following the guide mentioned here. I have all the prerequisites in place. However, when I try to run
make docker-compose-build
I run into the following error:
ansible@Master:~/DevOpsPractice/awx$ sudo make docker-compose-build
make: python3.9: No such file or directory
/bin/sh: 1: python3.9: not found
ansible-playbook tools/ansible/dockerfile.yml -e build_dev=True -e receptor_image=quay.io/ansible/receptor:devel
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
[WARNING]: An error occurred while calling ansible.utils.display.initialize_locale (unsupported locale setting). This may result in incorrectly calculated text widths that can cause
Display to print incorrect line lengths
PLAY [Render AWX Dockerfile and sources] ***********************************************************************************************************************************************
TASK [Gathering Facts] *****************************************************************************************************************************************************************
ok: [localhost]
TASK [dockerfile : Create _build directory] ********************************************************************************************************************************************
changed: [localhost]
TASK [dockerfile : Render supervisor configs] ******************************************************************************************************************************************
changed: [localhost] => (item=supervisor.conf)
changed: [localhost] => (item=supervisor_task.conf)
TASK [dockerfile : Render Dockerfile] **************************************************************************************************************************************************
changed: [localhost]
PLAY RECAP *****************************************************************************************************************************************************************************
localhost : ok=4 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
DOCKER_BUILDKIT=1 docker build -t ghcr.io/ansible/awx_devel:HEAD \
--build-arg BUILDKIT_INLINE_CACHE=1 \
--cache-from=ghcr.io/ansible/awx_devel:HEAD .
[+] Building 24.5s (9/48)
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 7.58kB 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 56B 0.0s
=> [internal] load metadata for quay.io/centos/centos:stream9 3.3s
=> ERROR importing cache manifest from ghcr.io/ansible/awx_devel:HEAD 1.4s
=> CANCELED FROM quay.io/ansible/receptor:devel 19.5s
=> => resolve quay.io/ansible/receptor:devel 1.5s
=> => sha256:f359ad713860d45ca196f689cf6641031a9e13cead84711561656f5ff12b76bf 9.17kB / 9.17kB 1.0s
=> => sha256:40465eda3d8c2d2ddd753fe3acc9663aec895810c34ae4e2e30a75c9f6a66819 3.90kB / 3.90kB 1.3s
=> => sha256:581c4418c46b7d13d08f57e4b2d51701044f85249d1ff6c6e0087229047ff9e6 1.05kB / 1.05kB 0.0s
=> => sha256:29ce8e4aa01356544be36e1561aa26cab8e48a1075b515d07c22f570a05bfdd4 1.57kB / 1.57kB 0.0s
=> => sha256:eca252e0939cac0ab0370d4e95676927ac8f28749f335eeffe323641c2f9dcb5 4.42kB / 4.42kB 0.0s
=> => sha256:4cef5a1a5c0996515ba8dcff2c3de29345ce0276d0adb0d205167a21389bdace 16.78MB / 57.94MB 19.5s
=> => sha256:6ebef3df747ed2b762052017d518f29c7e6ba0191310fee89db020c3bf6ccb95 230B / 230B 1.4s
=> => sha256:bc0193de74a56ec29a87ea372a5479695915118cc307d290ade87ab307b61c3c 6.29MB / 7.87MB 19.5s
=> => sha256:d9d79abb920b26bddc4453acbf8a1d49101e3720d4a9d503f4c957bee39f6742 4.19MB / 22.48MB 19.5s
=> [internal] load build context 0.0s
=> => transferring context: 47.09kB 0.0s
=> ERROR https://raw.githubusercontent.com/containers/libpod/master/contrib/podmanimage/stable/podman-containers.conf 21.1s
=> CANCELED [builder 1/9] FROM quay.io/centos/centos:stream9@sha256:997d6abc92f74a652d390a82f3e67467d0ad7ffcbbbe352466a06485104656a9 19.7s
=> => resolve quay.io/centos/centos:stream9@sha256:997d6abc92f74a652d390a82f3e67467d0ad7ffcbbbe352466a06485104656a9 0.0s
=> => sha256:4cef5a1a5c0996515ba8dcff2c3de29345ce0276d0adb0d205167a21389bdace 15.73MB / 57.94MB 19.7s
=> => sha256:997d6abc92f74a652d390a82f3e67467d0ad7ffcbbbe352466a06485104656a9 858B / 858B 0.0s
=> => sha256:082807861440fcd8def47c5ee77185fc6ea68eea4aa37604a22fb8bd6e37fbfe 350B / 350B 0.0s
=> => sha256:e0c32bf1fbef58fb0960b06c642f87508aa5c169eda585a25e29d22d766aac85 1.16kB / 1.16kB 0.0s
=> ERROR https://raw.githubusercontent.com/containers/libpod/master/contrib/podmanimage/stable/containers.conf 21.1s
------
> importing cache manifest from ghcr.io/ansible/awx_devel:HEAD:
------
------
> https://raw.githubusercontent.com/containers/libpod/master/contrib/podmanimage/stable/podman-containers.conf:
------
------
> https://raw.githubusercontent.com/containers/libpod/master/contrib/podmanimage/stable/containers.conf:
------
failed to load cache key: Get "https://raw.githubusercontent.com/containers/libpod/master/contrib/podmanimage/stable/podman-containers.conf": dial tcp 49.44.79.236:443: connect: connection refused
make: *** [Makefile:518: docker-compose-build] Error 1
The docker service is already running, as can be seen here:
ansible@Master:~/DevOpsPractice/awx$ systemctl status docker.service
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2022-11-30 13:10:59 IST; 11min ago
TriggeredBy: ● docker.socket
Docs: https://docs.docker.com
Main PID: 2282 (dockerd)
Tasks: 12
Memory: 55.7M
CPU: 2.778s
CGroup: /system.slice/docker.service
└─2282 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
FYI, the docker-compose that I have is the one installed through the apt package manager:
ansible@Master:~/DevOpsPractice/awx$ which docker-compose
/usr/bin/docker-compose
Looking for helpful pointers.
BACKGROUND:
I'm trying to build a Postgres-backed Rust server using actix-web & sqlx into a Docker container. It's a basic RESTful API for a mock blog, just so I can establish basic scaffolding for a useful server architecture that I can use as a basis for future projects.
It builds & runs perfectly well outside the Docker container, and I've isolated the issue to two dependencies: actix-web & sqlx, which are unfortunately the two core dependencies of the project. When I remove these two dependencies from Cargo.toml, the container builds within seconds.
For my Dockerfile I'm using a multi-stage build based upon this tutorial, using cargo chef to cache the build.
PROBLEM:
When running docker build -t server ., after ~30-45 minutes cargo throws numerous "warning: spurious network error" messages and timeouts, despite building normally outside of Docker.
NOTES:
I am running macOS on a laptop with a 2.3 GHz 8-Core Intel Core i9 processor and 32 GB 2400 MHz DDR4 memory
My internet connection is stable, running at 442.33 Mbps download & 21.50 Mbps upload through a VPN (the issue persists with the VPN disabled as well)
I am running Docker v20.10.21, installed via Homebrew
The same errors are thrown even with a simple single-stage build
Issue mirrored here on GitHub
Command:
From server:
docker build -t server .
Or from root:
docker-compose up
Code:
server/Cargo.toml
[package]
name = "actix-sqlx-docker"
version = "0.1.0"
edition = "2021"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
actix-web = "4.2.1"
chrono = { version = "0.4.23", features = [ "serde" ] }
dotenv = "0.15.0"
serde = { version = "1.0.150", features = [ "derive" ] }
serde_json = "1.0.89"
sqlx = { version = "0.6.2", features = [ "runtime-actix-native-tls", "postgres", "chrono", "time", "uuid"] }
uuid = { version = "1.2.2", features = [ "v4", "fast-rng", "macro-diagnostics", "serde" ] }
server/Dockerfile
# generate a recipe file for dependencies
FROM --platform=linux/amd64 rust as planner
WORKDIR /usr/src/app
RUN cargo install cargo-chef
COPY . .
RUN cargo chef prepare --recipe-path recipe.json
# build dependencies
FROM --platform=linux/amd64 rust as cacher
WORKDIR /usr/src/app
RUN cargo install cargo-chef
COPY --from=planner /usr/src/app/recipe.json recipe.json
RUN cargo chef cook --release --recipe-path recipe.json
# use rust docker image as builder
FROM --platform=linux/amd64 rust as builder
COPY . /usr/src/app
WORKDIR /usr/src/app
COPY --from=cacher /usr/src/app/target target
COPY --from=cacher /usr/local/cargo /usr/local/cargo
FROM --platform=linux/amd64 gcr.io/distroless/cc-debian11 as development
COPY --from=builder /usr/src/app/target/release/actix-sqlx-docker /usr/src/test-app/actix-sqlx-docker
WORKDIR /usr/src/app
CMD ["./actix-sqlx-docker"]
Errors:
A detailed view of the errors can be viewed here.
Extraneous error messages are left out to conserve character count; see the full output in the linked GitHub issue.
Partial error from multi-stage build:
[+] Building 1919.8s (16/20)
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 913B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 77B 0.0s
=> [internal] load metadata for gcr.io/distroless/cc-debian11:latest 3.0s
=> [internal] load metadata for docker.io/library/rust:latest 3.4s
=> [auth] library/rust:pull token for registry-1.docker.io 0.0s
=> [development 1/3] FROM gcr.io/distroless/cc-debian11@sha256:101c26286ea36b68200ff94cf95ca9dbde3329c987738cba3ba702efa3465f6f 1.6s
=> => resolve gcr.io/distroless/cc-debian11@sha256:101c26286ea36b68200ff94cf95ca9dbde3329c987738cba3ba702efa3465f6f 0.0s
=> => sha256:a1f1879bb7de17d50f521ac7c19f3ed9779ded79f26461afb92124ddb1ee7e27 817.19kB / 817.19kB 0.6s
=> => sha256:101c26286ea36b68200ff94cf95ca9dbde3329c987738cba3ba702efa3465f6f 1.67kB / 1.67kB 0.0s
=> => sha256:e8c6063fa88722e5fc86eb631b41c0a46d7c8503aa2433298c59b07d3b731fbf 753B / 753B 0.0s
=> => sha256:7d62b76a133b357bbc8eca89d5aa845e8b807e31980789db995c851ca100a61c 776B / 776B 0.0s
=> => sha256:fc251a6e798157dc3b46fd265da72f39cd848e3f9f4a0b28587d1713b878deb9 795.83kB / 795.83kB 0.5s
=> => sha256:fda4ba87f6fbeebb651625814b981d4100a98c224fef80d562fb33853500f40e 8.00MB / 8.00MB 1.1s
=> => extracting sha256:fc251a6e798157dc3b46fd265da72f39cd848e3f9f4a0b28587d1713b878deb9 0.2s
=> => extracting sha256:fda4ba87f6fbeebb651625814b981d4100a98c224fef80d562fb33853500f40e 0.3s
=> => extracting sha256:a1f1879bb7de17d50f521ac7c19f3ed9779ded79f26461afb92124ddb1ee7e27 0.1s
=> [planner 1/5] FROM docker.io/library/rust@sha256:0067330b7e0eacacc5c32f21b720607c0cd61eda905c8d55e6a745f579ddeee9 21.9s
=> => resolve docker.io/library/rust@sha256:0067330b7e0eacacc5c32f21b720607c0cd61eda905c8d55e6a745f579ddeee9 0.0s
=> => sha256:0067330b7e0eacacc5c32f21b720607c0cd61eda905c8d55e6a745f579ddeee9 988B / 988B 0.0s
=> => sha256:5e0d0367026b13e8f26d3993fcd0d880da32a67ed45b2f3dd2467ac97de3077c 6.42kB / 6.42kB 0.0s
=> => sha256:3406614c4f79d01038ac6d7c608f856ed3a77f545c9ab7f521dddc8905ea2e63 1.59kB / 1.59kB 0.0s
=> => sha256:32de3c850997ce03b6ff4ae8fb00b34b9d7d7f9a35bfcdb8538e22cc7b77c29d 55.03MB / 55.03MB 4.4s
=> => sha256:fa1d4c8d85a4e064e50cea74d4aa848dc5fc275aef223fcc1f21fbdb1b5dd182 5.16MB / 5.16MB 1.6s
=> => sha256:c796299bbbddc7aeada9539a4e7874a75fa2b6ff421f8d5ad40f227b40ab4d86 10.88MB / 10.88MB 1.9s
=> => sha256:81283a9569ad5e90773f038daedd0d565810ca5935eec8f53b8bcb6a199030d6 54.58MB / 54.58MB 7.6s
=> => sha256:60b38700e7fb2cdfac79b15e4c1691a80fe6b4101c7b7fea66b9e7cd64d961cf 196.88MB / 196.88MB 7.9s
=> => sha256:5e868601ee3b6add93e6e2e5794ac5d9e29de9eaeb7920466e30c4e92cd06e7f 175.92MB / 175.92MB 11.0s
=> => extracting sha256:32de3c850997ce03b6ff4ae8fb00b34b9d7d7f9a35bfcdb8538e22cc7b77c29d 2.7s
=> => extracting sha256:fa1d4c8d85a4e064e50cea74d4aa848dc5fc275aef223fcc1f21fbdb1b5dd182 0.4s
=> => extracting sha256:c796299bbbddc7aeada9539a4e7874a75fa2b6ff421f8d5ad40f227b40ab4d86 0.4s
=> => extracting sha256:81283a9569ad5e90773f038daedd0d565810ca5935eec8f53b8bcb6a199030d6 2.6s
=> => extracting sha256:60b38700e7fb2cdfac79b15e4c1691a80fe6b4101c7b7fea66b9e7cd64d961cf 5.6s
=> => extracting sha256:5e868601ee3b6add93e6e2e5794ac5d9e29de9eaeb7920466e30c4e92cd06e7f 4.9s
=> [internal] load build context 0.0s
=> => transferring context: 62.71kB 0.0s
=> [builder 2/5] COPY . /usr/src/app 3.2s
=> [planner 2/5] WORKDIR /usr/src/app 3.2s
=> [planner 3/5] RUN cargo install cargo-chef 52.6s
=> [builder 3/5] WORKDIR /usr/src/app 0.0s
=> [planner 4/5] COPY . . 0.0s
=> [planner 5/5] RUN cargo chef prepare --recipe-path recipe.json 0.3s
=> [cacher 4/5] COPY --from=planner /usr/src/app/recipe.json recipe.json 0.0s
=> ERROR [cacher 5/5] RUN cargo chef cook --release --recipe-path recipe.json 1838.2s
------
> [cacher 5/5] RUN cargo chef cook --release --recipe-path recipe.json:
... // EXTRANEOUS MESSAGES LEFT OUT TO CONSERVE CHARACTER COUNT
#16 1687.7 warning: spurious network error (1 tries remaining): [28] Timeout was reached (failed to download any data for `subtle v2.4.1` within 30s)
#16 1717.8 warning: spurious network error (1 tries remaining): [28] Timeout was reached (download of `thiserror v1.0.38` failed to transfer more than 10 bytes in 30s)
#16 1717.8 warning: spurious network error (1 tries remaining): [28] Timeout was reached (failed to download any data for `thiserror-impl v1.0.38` within 30s)
#16 1747.9 warning: spurious network error (1 tries remaining): [28] Timeout was reached (download of `time v0.1.45` failed to transfer more than 10 bytes in 30s)
#16 1747.9 warning: spurious network error (1 tries remaining): [28] Timeout was reached (failed to download any data for `time v0.3.17` within 30s)
#16 1778.0 warning: spurious network error (1 tries remaining): [28] Timeout was reached (download of `unicode-bidi v0.3.8` failed to transfer more than 10 bytes in 30s)
#16 1778.0 warning: spurious network error (1 tries remaining): [28] Timeout was reached (failed to download any data for `unicode_categories v0.1.1` within 30s)
#16 1808.1 warning: spurious network error (1 tries remaining): [28] Timeout was reached (download of `url v2.3.1` failed to transfer more than 10 bytes in 30s)
#16 1808.1 warning: spurious network error (1 tries remaining): [28] Timeout was reached (failed to download any data for `uuid-macro-internal v1.2.2` within 30s)
#16 1838.2 error: failed to download from `https://crates.io/api/v1/crates/derive_more/0.99.17/download`
#16 1838.2
#16 1838.2 Caused by:
#16 1838.2 [28] Timeout was reached (download of `derive_more v0.99.17` failed to transfer more than 10 bytes in 30s)
#16 1838.2 thread 'main' panicked at 'Exited with status code: 101', /usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/cargo-chef-0.1.50/src/recipe.rs:176:27
#16 1838.2 note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
------
executor failed running [/bin/sh -c cargo chef cook --release --recipe-path recipe.json]: exit code: 101
Partial error from single-staged build:
[+] Building 2668.9s (8/8) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 1.10kB 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 77B 0.0s
=> [internal] load metadata for docker.io/library/rust:latest 1.2s
=> [internal] load build context 0.0s
=> => transferring context: 62.71kB 0.0s
=> [1/4] FROM docker.io/library/rust@sha256:0067330b7e0eacacc5c32f21b720607c0cd61eda905c8d55e6a745f579ddeee9 22.8s
=> => resolve docker.io/library/rust@sha256:0067330b7e0eacacc5c32f21b720607c0cd61eda905c8d55e6a745f579ddeee9 0.0s
=> => sha256:fa1d4c8d85a4e064e50cea74d4aa848dc5fc275aef223fcc1f21fbdb1b5dd182 5.16MB / 5.16MB 0.7s
=> => sha256:3406614c4f79d01038ac6d7c608f856ed3a77f545c9ab7f521dddc8905ea2e63 1.59kB / 1.59kB 0.0s
=> => sha256:32de3c850997ce03b6ff4ae8fb00b34b9d7d7f9a35bfcdb8538e22cc7b77c29d 55.03MB / 55.03MB 3.9s
=> => sha256:c796299bbbddc7aeada9539a4e7874a75fa2b6ff421f8d5ad40f227b40ab4d86 10.88MB / 10.88MB 2.4s
=> => sha256:0067330b7e0eacacc5c32f21b720607c0cd61eda905c8d55e6a745f579ddeee9 988B / 988B 0.0s
=> => sha256:5e0d0367026b13e8f26d3993fcd0d880da32a67ed45b2f3dd2467ac97de3077c 6.42kB / 6.42kB 0.0s
=> => sha256:81283a9569ad5e90773f038daedd0d565810ca5935eec8f53b8bcb6a199030d6 54.58MB / 54.58MB 3.5s
=> => sha256:60b38700e7fb2cdfac79b15e4c1691a80fe6b4101c7b7fea66b9e7cd64d961cf 196.88MB / 196.88MB 12.0s
=> => sha256:5e868601ee3b6add93e6e2e5794ac5d9e29de9eaeb7920466e30c4e92cd06e7f 175.92MB / 175.92MB 8.2s
=> => extracting sha256:32de3c850997ce03b6ff4ae8fb00b34b9d7d7f9a35bfcdb8538e22cc7b77c29d 2.5s
=> => extracting sha256:fa1d4c8d85a4e064e50cea74d4aa848dc5fc275aef223fcc1f21fbdb1b5dd182 0.3s
=> => extracting sha256:c796299bbbddc7aeada9539a4e7874a75fa2b6ff421f8d5ad40f227b40ab4d86 0.4s
=> => extracting sha256:81283a9569ad5e90773f038daedd0d565810ca5935eec8f53b8bcb6a199030d6 2.6s
=> => extracting sha256:60b38700e7fb2cdfac79b15e4c1691a80fe6b4101c7b7fea66b9e7cd64d961cf 5.7s
=> => extracting sha256:5e868601ee3b6add93e6e2e5794ac5d9e29de9eaeb7920466e30c4e92cd06e7f 4.8s
=> [2/4] WORKDIR /usr/src/app 2.9s
=> [3/4] COPY . . 0.0s
=> ERROR [4/4] RUN cargo build --release 2641.9s
------
> [4/4] RUN cargo build --release:
... // EXTRANEOUS MESSAGES LEFT OUT TO CONSERVE CHARACTER COUNT
#8 2430.7 warning: spurious network error (1 tries remaining): [28] Timeout was reached (failed to download any data for `unicode-bidi v0.3.8` within 30s)
#8 2460.8 warning: spurious network error (1 tries remaining): [28] Timeout was reached (download of `unicode-ident v1.0.6` failed to transfer more than 10 bytes in 30s)
#8 2460.8 warning: spurious network error (1 tries remaining): [28] Timeout was reached (failed to download any data for `unicode-normalization v0.1.22` within 30s)
#8 2491.0 warning: spurious network error (1 tries remaining): [28] Timeout was reached (download of `unicode-segmentation v1.10.0` failed to transfer more than 10 bytes in 30s)
#8 2491.0 warning: spurious network error (1 tries remaining): [28] Timeout was reached (failed to download any data for `unicode_categories v0.1.1` within 30s)
#8 2521.1 warning: spurious network error (1 tries remaining): [28] Timeout was reached (download of `url v2.3.1` failed to transfer more than 10 bytes in 30s)
#8 2521.1 warning: spurious network error (1 tries remaining): [28] Timeout was reached (failed to download any data for `uuid v1.2.2` within 30s)
#8 2551.3 warning: spurious network error (1 tries remaining): [28] Timeout was reached (download of `uuid-macro-internal v1.2.2` failed to transfer more than 10 bytes in 30s)
#8 2551.3 warning: spurious network error (1 tries remaining): [28] Timeout was reached (failed to download any data for `version_check v0.9.4` within 30s)
#8 2581.4 warning: spurious network error (1 tries remaining): [28] Timeout was reached (download of `whoami v1.2.3` failed to transfer more than 10 bytes in 30s)
#8 2581.4 warning: spurious network error (1 tries remaining): [28] Timeout was reached (failed to download any data for `zstd v0.11.2+zstd.1.5.2` within 30s)
#8 2611.6 warning: spurious network error (1 tries remaining): [28] Timeout was reached (download of `zstd-safe v5.0.2+zstd.1.5.2` failed to transfer more than 10 bytes in 30s)
#8 2611.6 warning: spurious network error (1 tries remaining): [28] Timeout was reached (failed to download any data for `zstd-sys v2.0.4+zstd.1.5.2` within 30s)
#8 2641.8 error: failed to download from `https://crates.io/api/v1/crates/actix-http/3.2.2/download`
#8 2641.8
#8 2641.8 Caused by:
#8 2641.8 [28] Timeout was reached (download of `actix-http v3.2.2` failed to transfer more than 10 bytes in 30s)
------
executor failed running [/bin/sh -c cargo build --release]: exit code: 101
Looks like I solved the issue in 040f1b8.
Sometimes during the cacher stage it still stalls during the download step, but aborting & re-running the build works within a few tries. This could be an issue when running it via a cloud provider, but it works for now.
Two things seemed to help:
cargo-chef seems to have trouble with dependency-name: { version = "1", features = [ "foo", "bar" ] } syntax, so I removed that syntax from Cargo.toml.
[dependencies]
actix = "0.13"
actix-web = "4"
dotenv = "0.15.0"
serde_json = "1.0.89"
[dependencies.chrono]
version = "0.4.23"
features = [ "serde" ]
[dependencies.serde]
version = "1.0.150"
features = [ "derive" ]
[dependencies.sqlx]
version = "0.6.2"
features = [ "runtime-async-std-native-tls", "postgres", "chrono", "time", "uuid" ]
[dependencies.uuid]
version = "1.2.2"
features = [ "v4", "fast-rng", "macro-diagnostics", "serde" ]
The full rust image was overkill, so I ended up using the Alpine-based distro instead. Also, specifying the --target flag in the cacher stage seemed to help. I don't think my previous Dockerfile would have run anyway, since the user wasn't configured.
# generic lite rust container with musl & openssl
FROM --platform=linux/amd64 rust:alpine3.17 as rust-alpine
RUN apk add --no-cache musl-dev openssl-dev
WORKDIR /usr/src/app
# generic chef to minimize redundancy
FROM rust-alpine as chef
RUN cargo install cargo-chef
# generate a recipe file for dependencies
FROM chef as planner
COPY . .
RUN cargo chef prepare --recipe-path recipe.json
# build & cache dependencies
FROM chef as cacher
COPY --from=planner /usr/src/app/recipe.json recipe.json
RUN cargo chef cook --release --target x86_64-unknown-linux-musl --recipe-path recipe.json
# use rust:alpine3.17 docker image as builder
FROM rust-alpine as builder
# create user
ENV USER=dev
ENV UID=1337
RUN adduser \
--disabled-password \
--gecos "" \
--home "/nonexistent" \
--shell "/sbin/nologin" \
--no-create-home \
--uid "${UID}" \
"${USER}"
# copy build image
COPY . .
COPY --from=cacher /usr/src/app/target target
COPY --from=cacher /usr/local/cargo /usr/local/cargo
RUN cargo build --release
# run executable from distroless build
FROM --platform=linux/amd64 gcr.io/distroless/cc-debian11 as development
# copy user from builder
COPY --from=builder /etc/passwd /etc/passwd
COPY --from=builder /etc/group /etc/group
# copy executable from builder
COPY --from=builder /usr/src/app/target/release/actix-sqlx-docker /usr/local/bin/actix-sqlx-docker
# set user
USER dev:dev
# run executable
CMD ["/usr/local/bin/actix-sqlx-docker"]
I'm trying to build a medusajs Docker image.
I followed all the steps in the quick start guide, but when doing sudo docker compose up --build in the root folder of the project, I receive the following error:
Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "~/medusaPR/micruniverse-store-engine/develop.sh": stat ~/medusaPR/micruniverse-store-engine/develop.sh: no such file or directory: unknown
Edit: This is the output after making those corrections to the Dockerfiles and the docker-compose file. Now, for the new errors, can you point me in the right direction please? From what I can see, all the build steps complete, but at the end, when executing, there seem to be missing files or permissions.
Sorry for the inconvenience, thanks for your great assistance :D
random0perator@random0perator:~/medusaPR/micruniverse-store-engine$ sudo docker compose up --build
[+] Building 5.1s (39/39) FINISHED
=> [storefront:test internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 32B 0.0s
=> [backend:test internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 32B 0.0s
=> [admin:test internal] load build definition from Dockerfile 0.1s
=> => transferring dockerfile: 32B 0.0s
=> [storefront:test internal] load .dockerignore 0.1s
=> => transferring context: 2B 0.0s
=> [admin:test internal] load .dockerignore 0.1s
=> => transferring context: 2B 0.0s
=> [backend:test internal] load .dockerignore 0.1s
=> => transferring context: 34B 0.0s
=> [admin:test internal] load metadata for docker.io/library/node:latest 0.8s
=> [backend:test internal] load metadata for docker.io/library/node:17.1.0 0.0s
=> [backend:test internal] load build context 0.0s
=> => transferring context: 4.62kB 0.0s
=> [backend:test 1/9] FROM docker.io/library/node:17.1.0 0.0s
=> CACHED [backend:test 2/9] WORKDIR /app/medusa 0.0s
=> CACHED [backend:test 3/9] COPY package.json . 0.0s
=> CACHED [backend:test 4/9] RUN apt-get update 0.0s
=> CACHED [backend:test 5/9] RUN apt-get install -y python 0.0s
=> CACHED [backend:test 6/9] RUN npm install -g npm@latest 0.0s
=> CACHED [backend:test 7/9] RUN npm install -g @medusajs/medusa-cli@latest 0.0s
=> CACHED [backend:test 8/9] RUN npm install --loglevel=error 0.0s
=> CACHED [backend:test 9/9] COPY . . 0.0s
=> [admin:test] exporting to image 0.1s
=> => exporting layers 0.0s
=> => writing image sha256:19d5d8cb37b3e0a9fd95f01c0afaa2862872a3cc68522504afd349bca4f18290 0.0s
=> => naming to docker.io/library/backend:test 0.0s
=> => writing image sha256:2b8d52804d62799a489c22e732cc79dd805ea0e82e1b66f1ed022c77e9e85164 0.0s
=> => naming to docker.io/library/storefront:test 0.0s
=> => writing image sha256:0ae7ad72057373eb371b515ebd55b4536ef97b0e4d809abf99c57ef9f8499dc4 0.0s
=> => naming to docker.io/library/admin:test 0.0s
=> [storefront:test 1/10] FROM docker.io/library/node:latest@sha256:ebd1096a66c724af78abb11e6c81eb05b85fcbe8920af2c24d42b6df6aab268 0.0s
=> [admin:test internal] load build context 4.0s
=> => transferring context: 8.40MB 3.5s
=> [storefront:test internal] load build context 2.9s
=> => transferring context: 5.91MB 2.8s
=> CACHED [storefront:test 2/9] WORKDIR /app/storefront 0.0s
=> CACHED [storefront:test 3/9] COPY . . 0.0s
=> CACHED [storefront:test 4/9] RUN rm -rf node_modules 0.0s
=> CACHED [storefront:test 5/9] RUN apt-get update 0.0s
=> CACHED [storefront:test 6/9] RUN npm install -g npm@latest 0.0s
=> CACHED [storefront:test 7/9] RUN npm install -g gatsby-cli 0.0s
=> CACHED [storefront:test 8/9] RUN npm install sharp 0.0s
=> CACHED [storefront:test 9/9] RUN npm install --loglevel=error 0.0s
=> CACHED [admin:test 2/10] WORKDIR /app/admin 0.0s
=> CACHED [admin:test 3/10] COPY . . 0.0s
=> CACHED [admin:test 4/10] RUN rm -rf node_modules 0.0s
=> CACHED [admin:test 5/10] RUN apt-get update 0.0s
=> CACHED [admin:test 6/10] RUN npm install -g npm@latest 0.0s
=> CACHED [admin:test 7/10] RUN npm install sharp 0.0s
=> CACHED [admin:test 8/10] RUN npm install -g gatsby-cli 0.0s
=> CACHED [admin:test 9/10] RUN npm install --loglevel=error 0.0s
=> CACHED [admin:test 10/10] RUN npm run build &> /dev/null 0.0s
Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them
[+] Running 5/0
⠿ Container medusapr-postgres-1 Created 0.0s
⠿ Container cache Created 0.0s
⠿ Container medusa-server Created 0.0s
⠿ Container medusa-storefront Created 0.0s
⠿ Container medusa-admin Created 0.0s
Attaching to cache, medusa-admin, medusa-server, medusa-storefront, medusapr-postgres-1
cache | 1:C 30 Jul 2022 20:37:21.835 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
cache | 1:C 30 Jul 2022 20:37:21.837 # Redis version=7.0.4, bits=64, commit=00000000, modified=0, pid=1, just started
cache | 1:C 30 Jul 2022 20:37:21.838 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
cache | 1:M 30 Jul 2022 20:37:21.844 * monotonic clock: POSIX clock_gettime
cache | 1:M 30 Jul 2022 20:37:21.848 * Running mode=standalone, port=6379.
cache | 1:M 30 Jul 2022 20:37:21.849 # Server initialized
cache | 1:M 30 Jul 2022 20:37:21.850 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
cache | 1:M 30 Jul 2022 20:37:21.853 * Loading RDB produced by version 7.0.4
cache | 1:M 30 Jul 2022 20:37:21.853 * RDB age 24 seconds
cache | 1:M 30 Jul 2022 20:37:21.853 * RDB memory usage when created 0.82 Mb
cache | 1:M 30 Jul 2022 20:37:21.854 * Done loading RDB, keys loaded: 0, keys expired: 0.
cache | 1:M 30 Jul 2022 20:37:21.854 * DB loaded from disk: 0.001 seconds
cache | 1:M 30 Jul 2022 20:37:21.854 * Ready to accept connections
medusapr-postgres-1 | 2022-07-30 20:37:21.962 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
medusapr-postgres-1 | 2022-07-30 20:37:21.962 UTC [1] LOG: listening on IPv6 address "::", port 5432
medusapr-postgres-1 | 2022-07-30 20:37:22.105 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
medusapr-postgres-1 | 2022-07-30 20:37:22.446 UTC [23] LOG: database system was shut down at 2022-07-30 20:36:57 UTC
medusapr-postgres-1 | 2022-07-30 20:37:22.505 UTC [1] LOG: database system is ready to accept connections
medusa-server | medusa develop
medusa-server |
medusa-server | Start development server. Watches file and rebuilds when something changes
medusa-server |
medusa-server | Options:
medusa-server | --verbose Turn on verbose output [boolean] [default: false]
medusa-server | --no-color, --no-colors Turn off the color in output [boolean] [default: false]
medusa-server | --json Turn on the JSON logger [boolean] [default: false]
medusa-server | -H, --host Set host. Defaults to localhost [string] [default: "localhost"]
medusa-server | -p, --port Set port. Defaults to 9000 (set by env.PORT) (otherwise defaults 9000) [string] [default: "9000"]
medusa-server | -h, --help Show help [boolean]
medusa-server | -v, --version Show the version of the Medusa CLI and the Medusa package in the current project [boolean]
medusa-server | babel:
medusa-server | src does not exist
medusa-server | Error: Command failed: "/app/medusa/node_modules/.bin/babel" src -d dist
medusa-server | at checkExecSyncError (node:child_process:826:11)
medusa-server | at execSync (node:child_process:900:15)
medusa-server | at /app/medusa/node_modules/@medusajs/medusa/dist/commands/develop.js:82:42
medusa-server | at step (/app/medusa/node_modules/@medusajs/medusa/dist/commands/develop.js:33:23)
medusa-server | at Object.next (/app/medusa/node_modules/@medusajs/medusa/dist/commands/develop.js:14:53)
medusa-server | at /app/medusa/node_modules/@medusajs/medusa/dist/commands/develop.js:8:71
medusa-server | at new Promise (<anonymous>)
medusa-server | at __awaiter (/app/medusa/node_modules/@medusajs/medusa/dist/commands/develop.js:4:12)
medusa-server | at default_1 (/app/medusa/node_modules/@medusajs/medusa/dist/commands/develop.js:74:12)
medusa-server | at /usr/local/lib/node_modules/@medusajs/medusa-cli/dist/create-cli.js:260:7 {
medusa-server | status: 2,
medusa-server | signal: null,
medusa-server | output: [ null, null, null ],
medusa-server | pid: 13,
medusa-server | stdout: null,
medusa-server | stderr: null
medusa-server | }
medusa-server exited with code 0
medusa-storefront |
medusa-storefront | ERROR
medusa-storefront |
medusa-storefront | gatsby develop
medusa-storefront |
medusa-storefront | Start development server. Watches files, rebuilds, and hot reloads if something
medusa-storefront | changes
medusa-storefront |
medusa-storefront | Options:
medusa-storefront | --verbose Turn on verbose output [boolean] [default: false]
medusa-storefront | --no-color, --no-colors Turn off the color in output [boolean] [default:
medusa-storefront | false]
medusa-storefront | --json Turn on the JSON logger [boolean] [default:
medusa-storefront | false]
medusa-storefront | -H, --host Set host. Defaults to localhost [string]
medusa-storefront | [default: "localhost"]
medusa-storefront | -p, --port Set port. Defaults to 8000 [string] [default:
medusa-storefront | "8000"]
medusa-storefront | -o, --open Open the site in your (default) browser for you.
medusa-storefront | [boolean]
medusa-storefront | -S, --https Use HTTPS. See
medusa-storefront | https://www.gatsbyjs.com/docs/local-https/ as a guide [boolean]
medusa-storefront | -c, --cert-file Custom HTTPS cert file (also required: --https,
medusa-storefront | --key-file). See https://www.gatsbyjs.com/docs/local-https/ [string] [default:
medusa-storefront | ""]
medusa-storefront | -k, --key-file Custom HTTPS key file (also required: --https,
medusa-storefront | --cert-file). See https://www.gatsbyjs.com/docs/local-https/ [string] [default:
medusa-storefront | ""]
medusa-storefront | --ca-file Custom HTTPS CA certificate file (also required:
medusa-storefront | --https, --cert-file, --key-file). See
medusa-storefront | https://www.gatsbyjs.com/docs/local-https/ [string] [default: ""]
medusa-storefront | --graphql-tracing Trace every graphql resolver, may have performance
medusa-storefront | implications [boolean] [default: false]
medusa-storefront | --open-tracing-config-file Tracer configuration file (OpenTracing
medusa-storefront | compatible). See https://gatsby.dev/tracing [string]
medusa-storefront | --inspect Opens a port for debugging. See
medusa-storefront | https://www.gatsbyjs.com/docs/debugging-the-build-process/ [number]
medusa-storefront | --inspect-brk Opens a port for debugging. Will block until
medusa-storefront | debugger is attached. See
medusa-storefront | https://www.gatsbyjs.com/docs/debugging-the-build-process/ [number]
medusa-storefront | -h, --help Show help [boolean]
medusa-storefront | -v, --version Show the version of the Gatsby CLI and the Gatsby
medusa-storefront | package in the current project [boolean]
medusa-storefront |
medusa-storefront | ERROR
medusa-storefront |
medusa-storefront | gatsby <develop> can only be run for a gatsby site.
medusa-storefront | Either the current working directory does not contain a valid package.json or
medusa-storefront | 'gatsby' is not specified as a dependency
medusa-storefront |
medusa-storefront |
medusa-storefront exited with code 1
medusa-admin |
medusa-admin | ERROR
medusa-admin |
medusa-admin | Initiated Worker with invalid NODE_OPTIONS env variable:
medusa-admin | --openssl-legacy-provider is not allowed in NODE_OPTIONS
medusa-admin |
medusa-admin |
medusa-admin | Error: Initiated Worker with invalid NODE_OPTIONS env variable: --openssl-lega
medusa-admin | cy-provider is not allowed in NODE_OPTIONS
medusa-admin |
medusa-admin | - errors:387 new NodeError
medusa-admin | node:internal/errors:387:5
medusa-admin |
medusa-admin | - worker:195 new Worker
medusa-admin | node:internal/worker:195:13
medusa-admin |
medusa-admin | - ThreadsWorker.js:51 ThreadsWorker.start
medusa-admin | [admin]/[@parcel]/workers/lib/threads/ThreadsWorker.js:51:19
medusa-admin |
medusa-admin | - Worker.js:104 Worker.fork
medusa-admin | [admin]/[@parcel]/workers/lib/Worker.js:104:23
medusa-admin |
medusa-admin | - WorkerFarm.js:232 WorkerFarm.startChild
medusa-admin | [admin]/[@parcel]/workers/lib/WorkerFarm.js:232:12
medusa-admin |
medusa-admin | - WorkerFarm.js:394 WorkerFarm.startMaxWorkers
medusa-admin | [admin]/[@parcel]/workers/lib/WorkerFarm.js:394:14
medusa-admin |
medusa-admin | - WorkerFarm.js:139 new WorkerFarm
medusa-admin | [admin]/[@parcel]/workers/lib/WorkerFarm.js:139:10
medusa-admin |
medusa-admin | - Parcel.js:565 createWorkerFarm
medusa-admin | [admin]/[@parcel]/core/lib/Parcel.js:565:10
medusa-admin |
medusa-admin | - Parcel.js:232 Parcel._init
medusa-admin | [admin]/[@parcel]/core/lib/Parcel.js:232:20
medusa-admin |
medusa-admin | - Parcel.js:273 Parcel.run
medusa-admin | [admin]/[@parcel]/core/lib/Parcel.js:273:7
medusa-admin |
medusa-admin | - compile-gatsby-files.ts:67 compileGatsbyFiles
medusa-admin | [admin]/[gatsby]/src/utils/parcel/compile-gatsby-files.ts:67:29
medusa-admin |
medusa-admin | - initialize.ts:172 initialize
medusa-admin | [admin]/[gatsby]/src/services/initialize.ts:172:3
medusa-admin |
medusa-admin |
not finished compile gatsby files - 0.308s
medusa-admin |
medusa-admin exited with code 1
I reviewed your Dockerfile:
FROM node:17.1.0
WORKDIR ~/medusaPR/micruniverse-store-engine
COPY package.json ./
RUN apt-get update
RUN apt-get install -y python
RUN npm install -g npm@latest
RUN npm install -g @medusajs/medusa-cli@latest
RUN npm install --loglevel=error
COPY . .
ENTRYPOINT ["./develop.sh", "develop"]
... and most of the instructions look alright to me. Your entrypoint is being properly copied in. What is odd, however, is your ENTRYPOINT instruction:
ENTRYPOINT ["./develop.sh", "develop"]
Referencing the documentation on ENTRYPOINT, we can see that your Dockerfile is using the exec form:
ENTRYPOINT ["executable", "param1", "param2"]
... which means that your develop.sh is the executable and develop is a parameter passed to it. However, if we take a look at develop.sh, we can see that it does not take parameters; it already runs the develop command itself:
#!/bin/bash
#Run migrations to ensure the database is updated
medusa migrations run
#Start development environment
medusa develop
So maybe change your ENTRYPOINT to: ENTRYPOINT ["./develop.sh"] to fix the problem?
Also a second possible cause, based on my own earlier comment:
If this dockerfile is from the backend folder, it MUST be built within the backend folder, not some "root" directory above it.
The build context sent to Docker determines what files are available
to COPY when you use relative references, for example: COPY ./ ./. If
you're trying to build your image from a parent directory, without
adjusting your dockerfile or build arguments, then that's probably why
your build is failing.
Your Dockerfile can't copy develop.sh into the image because it's not in the parent directory; it's in the backend child directory.
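If you want to keep running docker compose up --build from the project root, one way to get each image built with its own folder as the context is to point the build context at that folder. This is only a sketch; the service name and folder layout are assumptions on my part, not taken from your project:
# docker-compose.yml (excerpt, hypothetical)
services:
  backend:
    build:
      context: ./backend      # the backend folder becomes the build context
      dockerfile: Dockerfile  # resolved relative to that context
With a context like this, COPY package.json ./ and COPY . . in the backend Dockerfile resolve against the backend folder instead of the repository root, so develop.sh ends up where the ENTRYPOINT expects it.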
Just running this command will format the .sh file correctly (it converts Windows CRLF line endings to Unix LF):
dos2unix.exe your-file.sh
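If dos2unix isn't available on the machine that builds the image, the same normalization can be done inside the Dockerfile itself. This is only a sketch of the idea (the sed call is my suggestion, not part of the original answer), placed before the ENTRYPOINT line:
# strip Windows CR line endings from the entrypoint script and make sure it is executable
RUN sed -i 's/\r$//' ./develop.sh && chmod +x ./develop.sh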
I was able to run the following Dockerfile on my Mac with an Intel chip, but I am getting errors when I run it on a Mac with an M1. I then tried docker run --init --platform=linux/amd64 -e SPRING_PROFILES_ACTIVE=dev -e SERVER_FLAVOR=LOCAL_DEV -p 8080:8080 monolith-repo and docker buildx build --platform=linux/amd64 -t monolith-repo .. That got the Docker container to run, but I get the following error when trying to call Selenium:
org.openqa.selenium.SessionNotCreatedException: Could not start a new session. Possible causes are invalid address of the remote server or browser start-up failure.
at org.openqa.selenium.remote.RemoteWebDriver.execute(RemoteWebDriver.java:561)
at org.openqa.selenium.remote.RemoteWebDriver.startSession(RemoteWebDriver.java:230)
at org.openqa.selenium.remote.RemoteWebDriver.<init>(RemoteWebDriver.java:151)
at org.openqa.selenium.chromium.ChromiumDriver.<init>(ChromiumDriver.java:108)
at org.openqa.selenium.chrome.ChromeDriver.<init>(ChromeDriver.java:104)
at org.openqa.selenium.chrome.ChromeDriver.<init>(ChromeDriver.java:91)
at com.flockta.monolith.scraping.MakeNewWebpageScraperVersion2.getWebDriver(MakeNewWebpageScraperVersion2.java:176)
at com.flockta.monolith.scraping.MakeNewWebpageScraperVersion2.getWebpage(MakeNewWebpageScraperVersion2.java:52)
at com.flockta.monolith.job.ScrapingDataPipelineJob.getHtmls(ScrapingDataPipelineJob.java:308)
at com.flockta.monolith.job.ScrapingDataPipelineJob.processWebPage(ScrapingDataPipelineJob.java:168)
at com.flockta.monolith.job.ScrapingDataPipelineJob.processResult(ScrapingDataPipelineJob.java:153)
at com.flockta.monolith.job.ScrapingDataPipelineJob.processResult(ScrapingDataPipelineJob.java:59)
at com.flockta.monolith.job.AbstractScrapingPipelineJob.processResults(AbstractScrapingPipelineJob.java:57)
at com.flockta.monolith.job.AbstractScrapingPipelineJob.process(AbstractScrapingPipelineJob.java:30)
at com.flockta.monolith.ScheduledJobController.runDataJob(ScheduledJobController.java:71)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:64)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at org.springframework.scheduling.support.ScheduledMethodRunnable.run(ScheduledMethodRunnable.java:84)
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630)
at java.base/java.lang.Thread.run(Thread.java:832)
Caused by: org.openqa.selenium.WebDriverException: Driver server process died prematurely.
Build info: version: '4.1.1', revision: 'e8fcc2cecf'
System info: host: '1c789f0433ca', ip: '172.17.0.3', os.name: 'Linux', os.arch: 'amd64', os.version: '5.10.76-linuxkit', java.version: '15.0.2'
Driver info: driver.version: ChromeDriver
at org.openqa.selenium.remote.service.DriverService.start(DriverService.java:226)
at org.openqa.selenium.remote.service.DriverCommandExecutor.execute(DriverCommandExecutor.java:98)
at org.openqa.selenium.remote.RemoteWebDriver.execute(RemoteWebDriver.java:543)
... 26 common frames omitted
My full Dockerfile is:
FROM maven:3.6.3-openjdk-15
#Chrome
ARG CHROME_VERSION=98.0.4758.102-1
ADD google-chrome.repo /etc/yum.repos.d/google-chrome.repo
RUN microdnf install -y google-chrome-stable-$CHROME_VERSION \
&& sed -i 's/"$HERE\/chrome"/"$HERE\/chrome" --no-sandbox/g' /opt/google/chrome/google-chrome
## ChromeDriver
ARG CHROME_DRIVER_VERSION=98.0.4758.102
RUN microdnf install -y unzip \
&& curl -s -o /tmp/chromedriver.zip https://chromedriver.storage.googleapis.com/$CHROME_DRIVER_VERSION/chromedriver_linux64.zip \
&& unzip /tmp/chromedriver.zip -d /opt \
&& rm /tmp/chromedriver.zip \
&& mv /opt/chromedriver /opt/chromedriver-$CHROME_DRIVER_VERSION \
&& chmod 755 /opt/chromedriver-$CHROME_DRIVER_VERSION \
&& ln -s /opt/chromedriver-$CHROME_DRIVER_VERSION /usr/bin/chromedriver
ENV CHROMEDRIVER_PORT 4444
ENV CHROMEDRIVER_WHITELISTED_IPS "127.0.0.1"
ENV CHROMEDRIVER_URL_BASE ''
EXPOSE 4444
EXPOSE 8080
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar", "-Xmx600m","/app.jar"]
Also, when I try to run docker build without buildx build --platform=linux/amd64, I get an error:
docker build -t monolith-repo .
[+] Building 12.0s (7/9)
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 37B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/maven:3.6.3-openjdk-15 0.3s
=> [internal] load build context 0.0s
=> => transferring context: 122B 0.0s
=> [1/5] FROM docker.io/library/maven:3.6.3-openjdk-15@sha256:aac64d9d716f5fa3926e6c8f43c680fa8404faae0b8a014c0c9b3d73d2d0f66a 0.0s
=> CACHED [2/5] ADD google-chrome.repo /etc/yum.repos.d/google-chrome.repo 0.0s
=> ERROR [3/5] RUN microdnf install -y google-chrome-stable-98.0.4758.102-1 && sed -i 's/"$HERE\/chrome"/"$HERE\/chrome" --no-sandbox/g' /opt/google/chrome/google-chrome 11.6s
------
> [3/5] RUN microdnf install -y google-chrome-stable-98.0.4758.102-1 && sed -i 's/"$HERE\/chrome"/"$HERE\/chrome" --no-sandbox/g' /opt/google/chrome/google-chrome:
#7 0.232 Downloading metadata...
#7 5.286 Downloading metadata...
#7 9.705 Downloading metadata...
#7 11.53 error: Could not depsolve transaction; 1 problem detected:
#7 11.53 Problem: conflicting requests
#7 11.53 - package google-chrome-stable-98.0.4758.102-1.x86_64 does not have a compatible architecture
#7 11.53 - nothing provides libm.so.6(GLIBC_2.2.5)(64bit) needed by google-chrome-stable-98.0.4758.102-1.x86_64
#7 11.53 - nothing provides ld-linux-x86-64.so.2(GLIBC_2.2.5)(64bit) needed by google-chrome-stable-98.0.4758.102-1.x86_64
#7 11.53 - nothing provides libpthread.so.0(GLIBC_2.2.5)(64bit) needed by google-chrome-stable-98.0.4758.102-1.x86_64
#7 11.53 - nothing provides libdl.so.2(GLIBC_2.2.5)(64bit) needed by google-chrome-stable-98.0.4758.102-1.x86_64
#7 11.53 - nothing provides librt.so.1(GLIBC_2.2.5)(64bit) needed by google-chrome-stable-98.0.4758.102-1.x86_64
#7 11.53 - nothing provides libpthread.so.0(GLIBC_2.3.2)(64bit) needed by google-chrome-stable-98.0.4758.102-1.x86_64
#7 11.53 - nothing provides libpthread.so.0(GLIBC_2.12)(64bit) needed by google-chrome-stable-98.0.4758.102-1.x86_64
#7 11.53 - nothing provides libpthread.so.0(GLIBC_2.3.4)(64bit) needed by google-chrome-stable-98.0.4758.102-1.x86_64
#7 11.53 - nothing provides ld-linux-x86-64.so.2(GLIBC_2.3)(64bit) needed by google-chrome-stable-98.0.4758.102-1.x86_64
#7 11.53 - nothing provides ld-linux-x86-64.so.2()(64bit) needed by google-chrome-stable-98.0.4758.102-1.x86_64
#7 11.53 - nothing provides libpthread.so.0(GLIBC_2.3.3)(64bit) needed by google-chrome-stable-98.0.4758.102-1.x86_64
------
executor failed running [/bin/sh -c microdnf install -y google-chrome-stable-$CHROME_VERSION && sed -i 's/"$HERE\/chrome"/"$HERE\/chrome" --no-sandbox/g' /opt/google/chrome/google-chrome]: exit code: 1
I note two things during the build:
'package google-chrome-stable-98.0.4758.102-1.x86_64 does not have a compatible architecture'
I am using chromedriver_linux64.zip (but it never gets to that stage), although https://chromedriver.storage.googleapis.com/ shows that there is a chromedriver_mac64_m1 as well.
Is there a solution to get Chrome working on my local machine? Specifically, I need to be able to run this on my Mac and also deploy it to AWS. I think the AWS side can be solved via buildx build --platform=linux/amd64, but I do not know how to get this to run locally. Any ideas?
The high-level issue is that Linux images built for amd64 (Intel) may not be available for arm yet. Specifically, see https://github.com/SeleniumHQ/docker-selenium/issues/1076 and the great work that jamesmortensen did with the https://hub.docker.com/u/seleniarm repo (and specifically https://hub.docker.com/r/seleniarm/standalone-chromium/tags). To use it, do:
FROM seleniarm/standalone-chromium:4.1.1-alpha-20220119
ENV CHROMEDRIVER_PORT 4444
ENV CHROMEDRIVER_WHITELISTED_IPS "127.0.0.1"
ENV CHROMEDRIVER_URL_BASE ''
EXPOSE 4444
EXPOSE 8080
EXPOSE 5005
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
# For Testing
ENTRYPOINT ["java","-jar", "-Xmx600m","/app.jar"]
Java code is then:
return new ChromeDriver(service, getChromeOptions());
and
private ChromeOptions getChromeOptions() {
ChromeOptions chromeOptions = new ChromeOptions();
// User agent is required because some websites will reject your request if it does not have a user agent
chromeOptions.addArguments(String.format("user-agent=%s", USER_AGENT));
chromeOptions.addArguments("--log-level=OFF");
chromeOptions.setHeadless(true);
List<String> arguments = new LinkedList<>();
arguments.add("--disable-extensions");
arguments.add("--headless");
arguments.add("--disable-gpu");
arguments.add("--no-sandbox");
arguments.add("--incognito");
arguments.add("--disable-application-cache");
arguments.add("--disable-dev-shm-usage");
chromeOptions.addArguments(arguments);
return chromeOptions;
}
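For reference, the service object passed to the ChromeDriver constructor above can be created along these lines. This is only a sketch: the chromedriver path is an assumption about where the image puts the binary, and the port simply mirrors the CHROMEDRIVER_PORT env var from the Dockerfile, so adjust both to your image.
// requires java.io.File, org.openqa.selenium.chrome.ChromeDriver, org.openqa.selenium.chrome.ChromeDriverService
private ChromeDriver createDriver() {
    ChromeDriverService service = new ChromeDriverService.Builder()
            .usingDriverExecutable(new File("/usr/bin/chromedriver")) // assumed chromedriver location
            .usingPort(4444)                                          // matches CHROMEDRIVER_PORT above
            .build();
    return new ChromeDriver(service, getChromeOptions());
}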
Note that the standalone image is Chromium, not Chrome, but this works because chromedriver is based off of Chromium.
The root cause is that many packages (for example https://www.ubuntuupdates.org/package/google_chrome/stable/main/base/google-chrome-stable) do not have arm versions yet (they only have amd64 versions, which are Intel-based).
As for the Dockerfiles, I suggest having two Dockerfiles for now (much like zwbetz-gh's comment on Dec 28th, see https://github.com/SeleniumHQ/docker-selenium/issues/1076). To build the arm version you would do:
docker build -f DOCKER_FILE_ARM -t your_tag . Although I still have to test it, for the non-arm file you would do: docker buildx build --platform=linux/amd64 -f DOCKER_FILE_AMD -t your_tag .
From this trial, it looks like Docker's COPY command doesn't preserve symlinks -- rather, it "follows" symlinks and copies the target file(?):
$ ls -l
total 4
lrwxrwxrwx 1 user domain users 1 Mar 26 09:37 a -> b
-rw-r--r-- 1 user domain users 0 Mar 26 09:37 b
lrwxrwxrwx 1 user domain users 1 Mar 26 09:41 c -> d
-rw-r--r-- 1 user domain users 0 Mar 26 09:41 d
-rw-r--r-- 1 user domain users 54 Mar 26 09:39 Dockerfile
$
# Dockerfile
FROM alpine:3.7 as base
COPY [ "./*", "/foo/bar/" ]
$ docker build -t foo:tmp . && docker run -it foo:tmp
[+] Building 1.1s (7/7) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 116B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/alpine:3.7 0.8s
=> [internal] load build context 0.0s
=> => transferring context: 12.52kB 0.0s
=> CACHED [1/2] FROM docker.io/library/alpine:3.7@sha256:8421d9a84432575381bfabd248f1eb56f3aa21d9d7cd2511583c68c9b7511d10 0.0s
=> [2/2] COPY [ ./*, /foo/bar/ ] 0.1s
=> exporting to image 0.1s
=> => exporting layers 0.0s
=> => writing image sha256:81531080243eedcb443c7fe0c9e85df92515a6cf3f997c549414cae9bf6ca925 0.0s
=> => naming to docker.io/library/foo:tmp 0.0s
/ # ls -l /foo/bar/
total 4
-rw-r--r-- 1 root root 67 Mar 26 16:43 Dockerfile
-rw-r--r-- 1 root root 0 Mar 26 16:37 a
-rw-r--r-- 1 root root 0 Mar 26 16:37 b
-rw-r--r-- 1 root root 0 Mar 26 16:41 c
-rw-r--r-- 1 root root 0 Mar 26 16:41 d
/ #
This behavior appears to be the same whether I'm copying from my context or from another Docker image layer.
Is there some way I can get Docker COPY to preserve symlinks, i.e. instead of it "following" them and creating regular files? Or is there some convention to work around this situation?
The context behind this question is that a base layer I'm copying from has (a lot of) a mix of files and symlinks that point to them. Example: libfoo.so -> libfoo.so.1 -> libfoo.so.1.0.0
I spent a while searching online for this topic, but found surprisingly few hits. Most "close" questions were about symlinks to directories, which isn't the same as my situation. The closest hit I got was this unfortunately unanswered question: https://forums.docker.com/t/copying-symlinks-into-image/39521
This works if you try to copy the entire directory as a unit, rather than trying to copy the files in the directory:
COPY ./ /foo/bar/
Note that there are some subtleties around copying directories: the Dockerfile COPY documentation notes that, if you COPY a directory,
NOTE: The directory itself is not copied, just its contents.
This is fine for your case where you're trying to copy the entire build context. If you have a subdirectory you're trying to copy, you need to make sure the subdirectory name is also on the right-hand side of COPY and that the directory name ends with /.
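For instance, if the symlinked libraries live in a lib/ subdirectory of the build context (the name lib/ is just an example, not from the original question), copying that directory as a unit keeps the links as links:
FROM alpine:3.7 as base
COPY ./lib/ /foo/bar/lib/
An ls -l /foo/bar/lib inside the resulting image should then show entries like libfoo.so -> libfoo.so.1 as symlinks rather than separate regular files.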
I'd like to use one experimental feature of Docker BuildKit (mount=type=cache).
The first lines of my Dockerfile are:
# syntax=docker/dockerfile:experimental
FROM i386/debian:buster
#
# Setup an apt cache for Docker (experimental)
#
RUN rm -f /etc/apt/apt.conf.d/docker-clean; echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' > /etc/apt/apt.conf.d/keep-cache
RUN --mount=type=cache,target=/var/cache/apt --mount=type=cache,target=/var/lib/apt apt update && apt-get --no-install-recommends install -y gcc
I have set up a password store for Docker, logged in successfully to Docker Hub, have the "docker-credential-pass" binary in my PATH, and set up the docker login process using an encrypted password (as described in "How to Enable Docker Experimental Features and Encrypt Your Login Credentials").
kalou@shinwey $ pass list
Password Store
`-- docker-credential-helpers
|-- xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
| `-- berryamin
`-- docker-pass-initialized-check
But when I launch the image build, the process fails with:
DOCKER_BUILDKIT=1 docker build -t minexpert2:0.1 .
[+] Building 0.5s (3/3) FINISHED
=> [internal] load build definition from Dockerfile 0.1s
=> => transferring dockerfile: 38B 0.0s
=> [internal] load .dockerignore 0.1s
=> => transferring context: 2B 0.0s
=> ERROR resolve image config for docker.io/docker/dockerfile:experimental 0.4s
------
> resolve image config for docker.io/docker/dockerfile:experimental:
------
failed to solve with frontend dockerfile.v0: failed to solve with frontend gateway.v0: rpc error: code = Unknown desc = error getting credentials - err: exit status 1, out: `exit status 2: gpg: decryption failed: No secret key`
Can someone help explain what's missing here?
Try pulling the Docker image first, and then run the build command; it worked for me.
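In other words, something along these lines (the tag is taken from the question; pulling the BuildKit frontend image named in the syntax line is my reading of "pull the image first", not something the original answer spells out):
docker pull docker/dockerfile:experimental
DOCKER_BUILDKIT=1 docker build -t minexpert2:0.1 .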
We had this issue when trying to push to ghcr.io. Doing the steps described here removed the error and let us push: https://docs.github.com/en/free-pro-team@latest/packages/getting-started-with-github-container-registry/enabling-improved-container-support