I am having problems running docker compose (the YAML file is below). It should build three containers: Jupyter, mlflow and postgres, but for some reason it only builds postgres.
The error message is:
Building mlflow
[+] Building 0.7s (5/9)
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 38B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 34B 0.0s
=> ERROR [internal] load metadata for docker.io/library/python:3.7 0.6s
=> ERROR [1/5] FROM docker.io/library/python:3.7 0.1s
=> => resolve docker.io/library/python:3.7 0.1s
=> [internal] load build context 0.0s
------
> [internal] load metadata for docker.io/library/python:3.7:
------
------
> [1/5] FROM docker.io/library/python:3.7:
------
failed to solve with frontend dockerfile.v0: failed to build LLB: failed to load cache key: unexpected status code https://dockerhub.azk8s.cn/v2/library/python/manifests/3.7: 403 Forbidden
ERROR: Service 'mlflow' failed to build : Build failed
Looking at the error message, I noticed that it says
=> ERROR [internal] load metadata for docker.io/library/python:3.7 0.6s
=> ERROR [1/5] FROM docker.io/library/python:3.7
So I looked it up and found an article which says that
Docker.Io is the enterprise edition of Docker, and is only available
to paying customers
So why is my Docker image using a pay-only repository?
So far I have used Docker multiple times without problems, and I never knew about this "docker.io". What is it exactly?
And why could postgres build even though it also uses this pay-only version?
How can I use the community edition? Or should I?
Is this the root of the problem?
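From what I can tell so far, docker.io seems to just be the default registry hostname (Docker Hub), so a short image name like python:3.7 is shorthand for docker.io/library/python:3.7, and the 403 in my log is actually returned by dockerhub.azk8s.cn, which looks like a registry mirror my daemon is configured to use rather than Docker Hub itself. A minimal sketch of the equivalence (the service names here are made up):
# Hypothetical example: both services pull the exact same image from Docker Hub;
# "docker.io/library/" is just the implicit default prefix for official images.
services:
  short-name:
    image: python:3.7
  fully-qualified:
    image: docker.io/library/python:3.7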
For reference, here is the YAML file:
version: '3.7'
services:
  jupyter:
    user: root
    build:
      context: .
      dockerfile: ./docker/jupyter/Dockerfile
      target: ${JUPYTER_TARGET}
      args:
        - MLFLOW_ARTIFACT_STORE=/${MLFLOW_ARTIFACT_STORE}
        - MLFLOW_VERSION=${MLFLOW_VERSION}
        - JUPYTER_BASE_IMAGE=${JUPYTER_BASE_IMAGE}
        - JUPYTER_BASE_VERSION=${JUPYTER_BASE_VERSION}
        - JUPYTER_USERNAME=${JUPYTER_USERNAME}
    image: ${IMAGE_OWNER}/${REPO_SLUG}/${JUPYTER_TARGET}:${VERSION}
    ports:
      - "${JUPYTER_PORT}:${JUPYTER_PORT}"
    depends_on:
      - mlflow
    environment:
      MLFLOW_TRACKING_URI: ${MLFLOW_TRACKING_URI}
      JUPYTER_ENABLE_LAB: ${JUPYTER_ENABLE_LAB}
      NB_USER: ${JUPYTER_USERNAME}
      NB_UID: ${JUPYTER_UID}
      CHOWN_HOME: "yes"
      CHOWN_HOME_OPTS: '-R'
      CHOWN_EXTRA: ${JUPYTER_CHOWN_EXTRA}
      CHOWN_EXTRA_OPTS: '-R'
    volumes:
      - ./:/home/${JUPYTER_USERNAME}/work
      - ./${MLFLOW_ARTIFACT_STORE}:/${MLFLOW_ARTIFACT_STORE}
  mlflow:
    build:
      context: ./docker/mlflow
      args:
        - MLFLOW_VERSION=${MLFLOW_VERSION}
    image: ${IMAGE_OWNER}/${REPO_SLUG}/${MLFLOW_IMAGE_NAME}:${VERSION}
    expose:
      - "${MLFLOW_TRACKING_SERVER_PORT}"
    ports:
      - "${MLFLOW_TRACKING_SERVER_PORT}:${MLFLOW_TRACKING_SERVER_PORT}"
    depends_on:
      - postgres
    environment:
      MLFLOW_TRACKING_SERVER_HOST: ${MLFLOW_TRACKING_SERVER_HOST}
      MLFLOW_TRACKING_SERVER_PORT: ${MLFLOW_TRACKING_SERVER_PORT}
      MLFLOW_ARTIFACT_STORE: ${MLFLOW_ARTIFACT_STORE}
      MLFLOW_BACKEND_STORE: ${MLFLOW_BACKEND_STORE}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DATABASE: ${POSTGRES_DATABASE}
      POSTGRES_PORT: ${POSTGRES_PORT}
      WAIT_FOR_IT_TIMEOUT: ${WAIT_FOR_IT_TIMEOUT}
    volumes:
      - ./${MLFLOW_ARTIFACT_STORE}:/${MLFLOW_ARTIFACT_STORE}
  postgres:
    user: "${POSTGRES_UID}:${POSTGRES_GID}"
    build:
      context: ./docker/postgres
    image: ${IMAGE_OWNER}/${REPO_SLUG}/${POSTGRES_IMAGE_NAME}:${VERSION}
    restart: always
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_USER: ${POSTGRES_USER}
    ports:
      - "${POSTGRES_PORT}:${POSTGRES_PORT}"
    volumes:
      - ./${POSTGRES_STORE}:/var/lib/postgresql/data
Related
[+] Building 14.6s (1/1) FINISHED
=> ERROR [internal] booting buildkit 14.6s
=> => pulling image moby/buildkit:buildx-stable-1 12.7s
=> => creating container buildx_buildkit_default 1.9s
------
> [internal] booting buildkit:
------
Error response from daemon: crun: creating cgroup directory `/sys/fs/cgroup/systemd/docker/buildx/libpod-86ba89f2fa100371f55620a538087d7e63a80541ae3b69b4919483f52ff616fa`: No such file or directory: OCI runtime attempted to invoke a command that was not found
`docker-compose` process finished with exit code 17
# Docker Compose file Reference (https://docs.docker.com/compose/compose-file/)
version: '3'
# Define services
services:
  # App Service
  app:
    # Configuration for building the docker image for the service
    build:
      context: . # Use an image built from the specified dockerfile in the current directory.
      dockerfile: Dockerfile
    ports:
      - "8080:8080" # Forward the exposed port 8080 on the container to port 8080 on the host machine
    restart: unless-stopped
    depends_on:
      - redis # This service depends on redis. Start that first.
    environment: # Pass environment variables to the service
      REDIS_URL: redis:6379
    networks: # Networks to join (Services on the same network can communicate with each other using their name)
      - backend
  # Redis Service
  redis:
    image: "redis:alpine" # Use a public Redis image to build the redis service
    restart: unless-stopped
    networks:
      - backend
networks:
  backend:
I'm using this little docker-compose.yaml file to start two containers on an EC2 instance to run a preview of my app:
version: "3.9"
services:
redis:
build:
context: .
ports:
- 6379:6379
database:
container_name: postgres
image: postgres:14.2
ports:
- 5432:5432
volumes:
- postgres:/var/lib/postgresql/data
env_file:
- .env
volumes:
postgres:
The redis image is built from a Dockerfile:
FROM redis:alpine
COPY ./redis.conf /etc/redis.conf
CMD ["redis-server", "/etc/redis.conf"]
Now when I run docker compose up -d I get the following error:
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 31B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> ERROR [internal] load metadata for docker.io/library/redis:alpine 0.0s
------
> [internal] load metadata for docker.io/library/redis:alpine:
------
failed to solve: rpc error: code = Unknown
desc = failed to solve with frontend dockerfile.v0:
failed to create LLB definition:
failed to authorize: rpc error: code = Unknown
desc = failed to fetch anonymous token:
Get "https://auth.docker.io/token?scope=repository%3Alibrary%2Fredis%3Apull&service=registry.docker.io":
dial tcp: lookup auth.docker.io on [::1]:53: read udp [::1]:58421->[::1]:53: read: connection refused
But if I run docker image build -t . and then rerun docker compose up -d, everything works.
I searched online a bit about the error, but I didn't find anything useful.
I found that there are a bunch of questions here on Stack Overflow with the same failed to solve: rpc error: code = Unknown, but the body of the error is different.
The compose file works perfectly on my local machine.
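One thing I noticed: since building the image directly does work, compose can at least reuse that locally built image instead of rebuilding (and hitting the registry) on every up. A rough sketch of what I mean, assuming I first tag the build as preview-redis (a made-up tag) with docker build -t preview-redis .:
services:
  redis:
    # made-up tag; compose should use the already-built local image
    # instead of trying to resolve redis:alpine from the registry again
    image: preview-redis
    ports:
      - 6379:6379
This is only a workaround, of course, not a fix for the failing DNS lookup itself.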
I created a 'docker' folder inside my PHP web app source code.
I placed a docker-compose.yml and a Dockerfile inside /docker:
<project_root>
- app
- bootstrap
....
- docker
    - docker-compose.yml
    - Dockerfile
.....
- storage
So inside docker-compose.yml I am trying to do this:
version: '3.9'
services:
  app:
    build:
      context: ..
      dockerfile: ./Dockerfile
    image: my-custom-app-docker-image
    container_name: app
    restart: unless-stopped
    tty: true
    environment:
      SERVICE_NAME: app
      SERVICE_TAGS: dev
    working_dir: /var/www
    volumes:
      - ./:/var/www/app
      - ./php/local.ini:/usr/local/etc/php/conf.d/local.ini
    networks:
      - app-network
The problem is that when running docker-compose up I got:
failed to solve: rpc error: code = Unknown desc = failed to solve with frontend dockerfile.v0: failed to read dockerfile: open /var/lib/docker/tmp/buildkit-mount61239487/Dockerfile: no such file or directory
Problem found, and fixed.
The problem was that I had added a second dot to build.context:
app:
  build:
    context: ..
Changing the third line to
    context: .
worked.
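As far as I can tell, the dockerfile path is resolved relative to the build context, so the other option would have been to keep the project root as the context and point dockerfile at the docker folder instead, roughly like this (untested sketch):
app:
  build:
    context: ..                      # project root
    dockerfile: ./docker/Dockerfile  # resolved relative to the context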
I get the following error when running "docker compose up --detach". My Ubuntu distribution is 22.04, and I have run this container on another system before without issues, but I switched laptops and now keep getting this error.
2022/11/02 15:20:09 http2: server: error reading preface from client //./pipe/docker_engine: bogus greeting "400 Bad Request: invalid"
[+] Building 5.0s (2/2) FINISHED
 => ERROR [internal] load build definition from Dockerfile 5.0s
 => CANCELED [internal] load .dockerignore 5.0s
------
 > [internal] load build definition from Dockerfile:
------
failed to solve: rpc error: code = Unknown desc = failed to solve with frontend dockerfile.v0: failed to read dockerfile: no active session for v0kttrdojvk121i286w61p0sb: context deadline exceeded
This is what my Dockerfile looks like:
FROM python:3.10.2-buster
WORKDIR /bfo_app
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
EXPOSE 5005
COPY . .
CMD ["python3", "run.py"]
And this is my docker-compose.yml
version: '3.9'
services:
  db:
    container_name: breakfastForOneDatabase
    image: postgres:14.1-alpine
    restart: always
    env_file:
      - dev.env
    ports:
      - "5432:5432"
    volumes:
      - db:/var/lib/postgresql/data
  pgadmin:
    container_name: pgadmin4
    image: dpage/pgadmin4
    restart: always
    env_file:
      - dev.env
    ports:
      - "5050:80"
  web:
    container_name: breakfastForOneFlask
    env_file:
      - dev.env
    build: .
    ports:
      - "5005:5005"
    volumes:
      - .:/bfo_app
volumes:
  db:
    driver: local
I am expecting the containers to start. I have tried looking through a bunch of SO pages but nothing has worked. There are three containers that should be started, but I get the error shown above.
Building localstack
unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /Applications/MAMP/htdocs/hidden_app_name/docker/Dockerfile: no such file or directory
This is what I'm getting after trying to run docker-compose up -d.
The error is pretty straightforward, but the service localstack does not have any Dockerfile.
localstack:
  build:
    context: ./docker
  image: localstack/localstack:latest
  container_name: localstack
  platform: linux/arm64/v8
  ports:
    - "4566:4566"
    - "4571:4571"
  environment:
    - SERVICES=s3,ses,rekognition
    - DEBUG=1
    - DATA_DIR=/tmp/localstack/data
  volumes:
    - './.localstack:/tmp/localstack'
    - './.aws:/root/.aws'
    - '/var/run/docker.sock:/var/run/docker.sock'
Any ideas whether this is related to the M1 Max Docker or anything like that, and how to fix it?
If there is no Dockerfile, then there is nothing to build, and therefore no context is needed. The context is what is sent to the Docker daemon at build time.
You should remove this from your docker-compose.yml:
build:
  context: ./docker
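With the build section removed, the service simply runs the published image, roughly like this (a trimmed sketch of the compose snippet from the question; the environment and volumes stay as they were):
localstack:
  image: localstack/localstack:latest
  container_name: localstack
  platform: linux/arm64/v8
  ports:
    - "4566:4566"
    - "4571:4571"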
Mihai is correct
You should remove this from your docker-compose.yml:
build:
  context: ./docker