docker compose "failed to solve: rpc error: code" - docker

I'm using this little docker-compose.yaml file to start two containers on an EC2 instance to run a preview of my app:
version: "3.9"
services:
  redis:
    build:
      context: .
    ports:
      - 6379:6379
  database:
    container_name: postgres
    image: postgres:14.2
    ports:
      - 5432:5432
    volumes:
      - postgres:/var/lib/postgresql/data
    env_file:
      - .env
volumes:
  postgres:
The redis image is built from a Dockerfile:
FROM redis:alpine
COPY ./redis.conf /etc/redis.conf
CMD ["redis-server", "/etc/redis.conf"]
Now when I run docker compose up -d I get the following error:
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 31B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> ERROR [internal] load metadata for docker.io/library/redis:alpine 0.0s
------
> [internal] load metadata for docker.io/library/redis:alpine:
------
failed to solve: rpc error: code = Unknown
desc = failed to solve with frontend dockerfile.v0:
failed to create LLB definition:
failed to authorize: rpc error: code = Unknown
desc = failed to fetch anonymous token:
Get "https://auth.docker.io/token?scope=repository%3Alibrary%2Fredis%3Apull&service=registry.docker.io":
dial tcp: lookup auth.docker.io on [::1]:53: read udp [::1]:58421->[::1]:53: read: connection refused
But if I run docker image build -t . and then rerun docker compose up -d, everything works.
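For reference, this is roughly the manual workaround I mean (the myredis tag here is only a placeholder I'm adding for the example):
# build the redis image by hand on the EC2 host
docker image build -t myredis .
# then bring the stack up again
docker compose up -d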
I searched online a bit about the error, but I didn't find anything useful.
I found a bunch of questions here on Stack Overflow with the same failed to solve: rpc error: code = Unknown, but the body of the error is different.
The compose file works perfectly on my local machine.
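Since the failing lookup hits [::1]:53, I suspect DNS on the EC2 host itself rather than the compose file. These are the generic checks I can think of trying there (the DNS servers below are only an example, and writing daemon.json this way would overwrite an existing one):
# can the host resolve the registry auth endpoint at all?
nslookup auth.docker.io
# the [::1]:53 in the error suggests the resolver is pointed at localhost
cat /etc/resolv.conf
# one common workaround: pin DNS servers for the Docker daemon and restart it
echo '{ "dns": ["8.8.8.8", "1.1.1.1"] }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker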

Related

I tried to build an image but I keep getting this error. I cloned many projects from public GitHub and have the same issue.

[+] Building 14.6s (1/1) FINISHED
=> ERROR [internal] booting buildkit 14.6s
=> => pulling image moby/buildkit:buildx-stable-1 12.7s
=> => creating container buildx_buildkit_default 1.9s
------
> [internal] booting buildkit:
------
Error response from daemon: crun: creating cgroup directory `/sys/fs/cgroup/systemd/docker/buildx/libpod-86ba89f2fa100371f55620a538087d7e63a80541ae3b69b4919483f52ff616fa`: No such file or directory: OCI runtime attempted to invoke a command that was not found
`docker-compose` process finished with exit code 17
# Docker Compose file Reference (https://docs.docker.com/compose/compose-file/)
version: '3'
# Define services
services:
  # App Service
  app:
    # Configuration for building the docker image for the service
    build:
      context: . # Use an image built from the specified dockerfile in the current directory.
      dockerfile: Dockerfile
    ports:
      - "8080:8080" # Forward the exposed port 8080 on the container to port 8080 on the host machine
    restart: unless-stopped
    depends_on:
      - redis # This service depends on redis. Start that first.
    environment: # Pass environment variables to the service
      REDIS_URL: redis:6379
    networks: # Networks to join (Services on the same network can communicate with each other using their name)
      - backend
  # Redis Service
  redis:
    image: "redis:alpine" # Use a public Redis image to build the redis service
    restart: unless-stopped
    networks:
      - backend
networks:
  backend:
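Since the failure happens while BuildKit boots its moby/buildkit:buildx-stable-1 container, a few generic things to check (these are only diagnostic sketches, not a known fix):
# the libpod/crun path in the error hints at a Podman-backed daemon; check the runtime and cgroup setup
docker info | grep -iE 'runtime|cgroup'
# see which builder docker compose / buildx is trying to boot
docker buildx ls
# try the legacy builder to rule BuildKit in or out
DOCKER_BUILDKIT=0 docker-compose build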

Can't run compose in Bitbucket pipeline getting --privileged=true is not allowed

I also opened an issue for this on the Atlassian community forum: https://community.atlassian.com/t5/Jira-Work-Management-Questions/Can-t-run-compose-in-bitbucket-pipelines-getting-privileged-true/qaq-p/2233411
Bitbucket Pipelines is supposed to support BuildKit now, and their docs even say Compose is supported as well.
This is my compose file:
version: "3.9"
services:
  myservice:
    platform: linux/amd64
    privileged: false # I also tried adding this
    image: someimage
    build:
      context: .
      dockerfile: "Dockerfile"
    secrets:
      - pypi_conf
secrets:
  pypi_conf:
    file: "${BITBUCKET_CLONE_DIR}/pypi_config/pip/pip.conf"
This is my yaml:
image: atlassian/default-image:3
definitions:
  services:
    docker:
      memory: 3072
  steps:
    - step: &build
        name: Build
        image:
          name: tiangolo/docker-with-compose
        script:
          - export DOCKER_BUILDKIT=1
          - docker compose build
        services:
          - docker
pipelines:
  default:
    - step: *build
  branches:
    master:
      - step: *build
I'm not mounting anything outside of the allowed BITBUCKET_CLONE_DIR.
But I get this error:
#1 [internal] booting buildkit
#1 pulling image moby/buildkit:buildx-stable-1
#1 pulling image moby/buildkit:buildx-stable-1 3.1s done
#1 creating container buildx_buildkit_default done
#1 ERROR: Error response from daemon: authorization denied by plugin pipelines: --privileged=true is not allowed
------
> [internal] booting buildkit:
------
Error response from daemon: authorization denied by plugin pipelines: --privileged=true is not allowed
make: *** [Makefile:134: testing] Error 17
Update
I found that both docker-compose and docker compose fail if I turn on BuildKit with export DOCKER_BUILDKIT=1. So it's not just the secret-mounting feature; it's Compose + BuildKit that triggers this error.
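A minimal way to show what I mean inside the pipeline step (only the BuildKit toggle changes between runs; whether the classic-builder path fully works with my secrets is a separate question):
export DOCKER_BUILDKIT=1
docker compose build      # boots moby/buildkit in a privileged container, which the pipelines plugin blocks
docker-compose build      # same error once BuildKit is on
DOCKER_BUILDKIT=0 docker compose build   # classic builder, no privileged BuildKit container (build secrets aren't supported this way, though)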

"docker compose up --detach" to start containers not working & giving error

I get the following error when running "docker compose up --detach". My Ubuntu distribution is 22.04, and I have run this container on another system before without issues, but I switched laptops and now keep getting this error.
2022/11/02 15:20:09 http2: server: error reading preface from client //./pipe/docker_engine: bogus greeting "400 Bad Request: invalid"
[+] Building 5.0s (2/2) FINISHED
=> ERROR [internal] load build definition from Dockerfile 5.0s
=> CANCELED [internal] load .dockerignore 5.0s
------
> [internal] load build definition from Dockerfile:
------
failed to solve: rpc error: code = Unknown desc = failed to solve with frontend dockerfile.v0: failed to read dockerfile: no active session for v0kttrdojvk121i286w61p0sb: context deadline exceeded
This is what my Dockerfile looks like:
FROM python:3.10.2-buster
WORKDIR /bfo_app
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
EXPOSE 5005
COPY . .
CMD ["python3", "run.py"]
And this is my docker-compose.yml
version: '3.9'
services:
  db:
    container_name: breakfastForOneDatabase
    image: postgres:14.1-alpine
    restart: always
    env_file:
      - dev.env
    ports:
      - "5432:5432"
    volumes:
      - db:/var/lib/postgresql/data
  pgadmin:
    container_name: pgadmin4
    image: dpage/pgadmin4
    restart: always
    env_file:
      - dev.env
    ports:
      - "5050:80"
  web:
    container_name: breakfastForOneFlask
    env_file:
      - dev.env
    build: .
    ports:
      - "5005:5005"
    volumes:
      - .:/bfo_app
volumes:
  db:
    driver: local
I am expecting the containers to start. I have tried looking through a bunch of SO pages but nothing has worked. There are three containers that should be started, but I get the error shown above.
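One thing that stands out is the //./pipe/docker_engine in the first log line, which is a Windows named pipe and looks out of place on Ubuntu, so it seems worth double-checking where the client thinks the daemon lives (generic checks, not a confirmed fix):
echo $DOCKER_HOST        # should normally be empty on Linux; the default is unix:///var/run/docker.sock
docker context ls        # the active context's endpoint should point at the local socket
docker context use default   # switch back to the default context if something else is selected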

Docker compose fails. Why is my docker compose using docker.io?

I am having problems running docker compose (the yaml file is at the end). It should build three containers: Jupyter, mlflow and postgres, but for some reason it only builds postgres.
The error message is
Building mlflow
[+] Building 0.7s (5/9)
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 38B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 34B 0.0s
=> ERROR [internal] load metadata for docker.io/library/python:3.7 0.6s
=> ERROR [1/5] FROM docker.io/library/python:3.7 0.1s
=> => resolve docker.io/library/python:3.7 0.1s
=> [internal] load build context 0.0s
------
> [internal] load metadata for docker.io/library/python:3.7:
------
------
> [1/5] FROM docker.io/library/python:3.7:
------
failed to solve with frontend dockerfile.v0: failed to build LLB: failed to load cache key: unexpected status code https://dockerhub.azk8s.cn/v2/library/python/manifests/3.7: 403 Forbidden
ERROR: Service 'mlflow' failed to build : Build failed
Looking at the error message I noticed that it says
=> ERROR [internal] load metadata for docker.io/library/python:3.7 0.6s
=> ERROR [1/5] FROM docker.io/library/python:3.7
So I looked it up and found this article, which says that
Docker.Io is the enterprise edition of Docker, and is only available to paying customers
So why is my Docker image using a pay-only repository?
So far I have used Docker many times without problems, and I never knew about this "docker.io". What is it exactly?
And why could postgres build even though it also used this pay-only version?
How can I use the community edition? Or should I?
Is this the root of the problem?
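For what it's worth, the 403 in the log points at https://dockerhub.azk8s.cn rather than Docker Hub itself, which looks like a registry mirror is configured somewhere. Some generic checks (not a confirmed fix):
docker info | grep -A 3 -i 'registry mirrors'   # is dockerhub.azk8s.cn configured as a mirror?
cat /etc/docker/daemon.json                     # a "registry-mirrors" entry would live here
docker pull python:3.7                          # does a plain pull of the base image work?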
For reference, here is the yaml file:
version: '3.7'
services:
  jupyter:
    user: root
    build:
      context: .
      dockerfile: ./docker/jupyter/Dockerfile
      target: ${JUPYTER_TARGET}
      args:
        - MLFLOW_ARTIFACT_STORE=/${MLFLOW_ARTIFACT_STORE}
        - MLFLOW_VERSION=${MLFLOW_VERSION}
        - JUPYTER_BASE_IMAGE=${JUPYTER_BASE_IMAGE}
        - JUPYTER_BASE_VERSION=${JUPYTER_BASE_VERSION}
        - JUPYTER_USERNAME=${JUPYTER_USERNAME}
    image: ${IMAGE_OWNER}/${REPO_SLUG}/${JUPYTER_TARGET}:${VERSION}
    ports:
      - "${JUPYTER_PORT}:${JUPYTER_PORT}"
    depends_on:
      - mlflow
    environment:
      MLFLOW_TRACKING_URI: ${MLFLOW_TRACKING_URI}
      JUPYTER_ENABLE_LAB: ${JUPYTER_ENABLE_LAB}
      NB_USER: ${JUPYTER_USERNAME}
      NB_UID: ${JUPYTER_UID}
      CHOWN_HOME: "yes"
      CHOWN_HOME_OPTS: '-R'
      CHOWN_EXTRA: ${JUPYTER_CHOWN_EXTRA}
      CHOWN_EXTRA_OPTS: '-R'
    volumes:
      - ./:/home/${JUPYTER_USERNAME}/work
      - ./${MLFLOW_ARTIFACT_STORE}:/${MLFLOW_ARTIFACT_STORE}
  mlflow:
    build:
      context: ./docker/mlflow
      args:
        - MLFLOW_VERSION=${MLFLOW_VERSION}
    image: ${IMAGE_OWNER}/${REPO_SLUG}/${MLFLOW_IMAGE_NAME}:${VERSION}
    expose:
      - "${MLFLOW_TRACKING_SERVER_PORT}"
    ports:
      - "${MLFLOW_TRACKING_SERVER_PORT}:${MLFLOW_TRACKING_SERVER_PORT}"
    depends_on:
      - postgres
    environment:
      MLFLOW_TRACKING_SERVER_HOST: ${MLFLOW_TRACKING_SERVER_HOST}
      MLFLOW_TRACKING_SERVER_PORT: ${MLFLOW_TRACKING_SERVER_PORT}
      MLFLOW_ARTIFACT_STORE: ${MLFLOW_ARTIFACT_STORE}
      MLFLOW_BACKEND_STORE: ${MLFLOW_BACKEND_STORE}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DATABASE: ${POSTGRES_DATABASE}
      POSTGRES_PORT: ${POSTGRES_PORT}
      WAIT_FOR_IT_TIMEOUT: ${WAIT_FOR_IT_TIMEOUT}
    volumes:
      - ./${MLFLOW_ARTIFACT_STORE}:/${MLFLOW_ARTIFACT_STORE}
  postgres:
    user: "${POSTGRES_UID}:${POSTGRES_GID}"
    build:
      context: ./docker/postgres
    image: ${IMAGE_OWNER}/${REPO_SLUG}/${POSTGRES_IMAGE_NAME}:${VERSION}
    restart: always
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_USER: ${POSTGRES_USER}
    ports:
      - "${POSTGRES_PORT}:${POSTGRES_PORT}"
    volumes:
      - ./${POSTGRES_STORE}:/var/lib/postgresql/data

image building fine with docker build, but stat /GO/src/main: no such file or directory encountered with docker-compose

I have a Dockerfile which I can successfully build an image from:
FROM iron/go:dev
RUN mkdir /app
COPY src/main/main.go /app/.
# Set an env var that matches your github repo name, replace treeder/dockergo here with your repo name
ENV SRC_DIR=/app
# Add the source code:
ADD . $SRC_DIR
# Build it:
RUN go get goji.io
RUN go get gopkg.in/mgo.v2
RUN cd $SRC_DIR; go build -o main
ENTRYPOINT ["/app/main"]
However, when I attempt to build the following docker-compose.yml file:
version: "3.3"
services:
  api:
    build: ./api
    expose:
      - '8080'
    container_name: 'api'
    ports:
      - "8082:8080"
    depends_on:
      - db
    networks:
      - api-net
  db:
    build: ./db
    expose:
      - '27017'
    container_name: 'mongo'
    ports:
      - "27017:27017"
    networks:
      - api-net
networks:
  api-net:
    driver: bridge
I get:
Removing api
mongo is up-to-date
Recreating 532e3cf66460_carsupermarket_api_1 ... error
ERROR: for 532e3cf66460_carsupermarket_api_1  Cannot start service api: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"/GO/src/main\": stat /GO/src/main: no such file or directory": unknown
ERROR: for api  Cannot start service api: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"/GO/src/main\": stat /GO/src/main: no such file or directory": unknown
ERROR: Encountered errors while bringing up the project.
I suspect that docker-compose is introducing some nuance around directory build paths; however, I'm at a loss as to why my image builds from the Dockerfile when using docker build . but fails when I try to incorporate it into docker-compose.
Can someone point me in the right direction as to what I am doing wrong?
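In case it matters, this is roughly what I could run to force a rebuild and rule out a stale image (the tag name below is just a placeholder based on the project name in the error):
docker build -t carsupermarket_api ./api   # building the image by itself works
docker-compose up -d --build               # force compose to rebuild instead of reusing an old image
docker-compose build --no-cache api        # or rebuild just the api service from scratch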
Update
I've upgraded to the latest version of Docker CE (18.03.1-ce, build 9ee9f40) and this appears to have resolved the issue.
