After running the following commands, the ubuntu1 image does not show up in the image list:
docker buildx build -f 1.dockerfile -t ubuntu1 .
docker image ls | grep ubuntu1
# no output
1.dockerfile:
FROM ubuntu:latest
RUN echo "my ubuntu"
Plus, I cannot use the image in FROM statements in other Dockerfiles (both builds run on my local Windows box):
2.dockerfile:
FROM ubuntu1
RUN echo "my ubuntu 2"
docker buildx build -f 2.dockerfile -t ubuntu2 .
# error:
WARNING: No output specified for docker-container driver. Build result will only remain in the build cache. To push result image into registry use --push or to load image into docker use --load
[+] Building 1.8s (4/4) FINISHED
=> [internal] load build definition from 2.dockerfile 0.0s
=> => transferring dockerfile: 84B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> ERROR [internal] load metadata for docker.io/library/ubuntu1:latest 1.8s
=> [auth] library/ubuntu1:pull token for registry-1.docker.io 0.0s
------
> [internal] load metadata for docker.io/library/ubuntu1:latest:
------
2.dockerfile:1
--------------------
1 | >>> FROM ubuntu1:latest
2 | RUN echo "my ubuntu 2"
3 |
--------------------
error: failed to solve: ubuntu1:latest: pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed (did you mean ubuntu:latest?)
Any idea what's going on? How can I see what buildx prepared, and how can I reference one image in another Dockerfile?
Ok, I found a partial solution: I need to add --output type=docker, as per the docs. This puts the image in the image list, but I still cannot use it in the second Dockerfile.
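For reference, a minimal sketch of that partial fix (--load is the documented shorthand for --output type=docker):
docker buildx build -f 1.dockerfile -t ubuntu1 --load .
docker image ls | grep ubuntu1
# ubuntu1 now appears, but building 2.dockerfile with the docker-container driver still resolves FROM against the registry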
Total docker newbie here and I would appreciate any help I could get. I pulled an image from my ECR repository and tagged it as app:latest using this command:
docker tag xxxxxxxxxxxx.dkr.ecr.us-east-2.amazonaws.com/app app:latest. When I list my images with docker images, the image is there with the new tag.
REPOSITORY TAG IMAGE ID CREATED SIZE
xxxxxxxxxxxx.dkr.ecr.us-east-2.amazonaws.com/app latest b5c8c2b74272 4 weeks ago 660MB
app latest b5c8c2b74272 4 weeks ago 660MB
I want to use this app:latest image as the base image in my Dockerfile. I know Docker's default behavior is to check locally for the image and pull from Docker Hub if it's not stored locally. When I run docker build -t hello ., I get this error:
[+] Building 1.3s (4/4) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 36B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> ERROR [internal] load metadata for docker.io/library/app:latest 1.2s
=> [auth] library/app:pull token for registry-1.docker.io 0.0s
------
> [internal] load metadata for docker.io/library/app:latest:
------
failed to solve with frontend dockerfile.v0: failed to create LLB definition: pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
Why is docker trying to pull from dockerhub when the app:latest image exists locally? Any insights would be greatly appreciated. Thank you!
I think this issue is related to me using an M1 computer. I ran these commands, and I was then able to successfully build my docker image from my Dockerfile:
export DOCKER_BUILDKIT=0
export COMPOSE_DOCKER_CLI_BUILD=0
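With BuildKit disabled via those variables, re-running the same build should now resolve app:latest from the local image store rather than docker.io/library/app:latest:
docker build -t hello .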
The logs are printed to the console when I do docker-compose up -d --build. How can I have them saved to a file so I can go through them?
These logs:
0.0s => => transferring dockerfile: 771B
0.0s => [internal] load .dockerignore
0.0s => => transferring context: 2B
0.0s => [internal] load metadata for docker.io/library/python:3.8.12-bullseye
1.5s => [auth] library/python:pull token for registry-1.docker.io
0.0s => [internal] load build context
2.0s => => transferring context: 847.24kB
1.9s => [1/7] FROM docker.io/library/python:3.8.12-bullseye#sha256:b39ab988ac2fa749273cf6fdeb89fb3635850824419dc61
0.0s => => resolve docker.io/library/python:3.8.12-bullseye#sha256:b39ab988ac2fa749273cf6fdeb89fb3635850824419dc61
0.0s => CACHED [2/7] WORKDIR /usr/src/app
Docker already saves the logs in a JSON file. To find it you can do
docker inspect --format='{{.LogPath}}' [container id]
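As an illustrative example (assuming a Linux Docker daemon, where that path lives under /var/lib/docker and usually needs root; my_container is a placeholder):
sudo tail -f "$(docker inspect --format='{{.LogPath}}' my_container)"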
Or if you just want the output of the docker logs command, do this:
docker-compose logs --no-color > logs.txt
For docker-compose up --build, did you try this:
docker-compose up --build &> logs.txt
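Note that &> is a bash-specific shortcut for redirecting both stdout and stderr to the file; the portable POSIX equivalent would be:
docker-compose up --build > logs.txt 2>&1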
This can be achieved using docker-compose build with the --progress flag, specifying one of the following options: auto, tty, plain, or quiet (default "auto").
So to persist the logs in your terminal you would build with:
docker-compose build --progress plain
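Since the original question was about saving the logs to a file, the plain progress output could also be piped through tee (a sketch; BuildKit writes progress to stderr, hence the 2>&1):
docker-compose build --progress plain 2>&1 | tee build.log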
See --help:
$ docker-compose build --help
Usage: docker compose build [OPTIONS] [SERVICE...]
Build or rebuild services
Options:
--build-arg stringArray Set build-time variables for services.
--no-cache Do not use cache when building the image
--progress string Set type of progress output (auto, tty, plain, quiet) (default "auto")
--pull Always attempt to pull a newer version of the image.
-q, --quiet Don't print anything to STDOUT
--ssh string Set SSH authentications used when building service images. (use
'default' for using your default SSH Agent)
I am building an image for a docker container running on a different architecture. As I don't have internet access all the time, I usually just pull the image when I have internet, and docker uses the local image instead of pulling a new one. Since I started to build the image with buildx, this does not seem to work anymore. Is there any way to tell docker to only use the local image? When I have a connection, docker seems to check whether there is a new version available but uses the local (or cached) image, as I would expect without an internet connection.
$ docker image ls
REPOSITORY   TAG        IMAGE ID       CREATED       SIZE
ros          galactic   bac817d14f26   5 weeks ago   626MB
$ docker image inspect ros:galactic
...
"Architecture": "arm64",
"Variant": "v8",
"Os": "linux",
...
Example build command:
$ docker buildx build . --platform linux/arm64
WARN[0000] No output specified for docker-container driver. Build result will only remain in the build cache. To push result image into registry use --push or to load image into docker use --load
[+] Building 0.3s (3/3) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 72B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> ERROR [internal] load metadata for docker.io/library/ros:galactic 0.0s
------
> [internal] load metadata for docker.io/library/ros:galactic:
------
Dockerfile:1
--------------------
1 | >>> FROM ros:galactic
2 | RUN "echo hello"
3 |
--------------------
error: failed to solve: failed to fetch anonymous token: Get "https://auth.docker.io/token?scope=repository%3Alibrary%2Fros%3Apull&service=registry.docker.io": proxyconnect tcp: dial tcp 127.0.0.1:3333: connect: connection refused
My workaround for this is to explicitly state the registry in the Dockerfile FROM sections, be it your own private registry or Docker Hub.
For example, to use the Docker Hub ubuntu:latest image, instead of just doing FROM ubuntu:latest I would write in the Dockerfile:
FROM docker.io/library/ubuntu:latest
To use myprivateregistry:5000 I would use:
FROM myprivateregistry:5000/ubuntu:latest
You must also set the --pull=false flag for the docker buildx build or DOCKER_BUILDKIT=1 docker build commands. When you have internet access again, you can switch back to --pull=true.
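Combined with the arm64 example from the question, the build command would look something like this (a sketch):
docker buildx build . --platform linux/arm64 --pull=false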
Windows PowerShell:
PS C:\Users\Administrator> docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
alpine latest d6e46aa2470d 13 days ago 5.57MB
alpine/git latest a8b6c5c0eb62 2 weeks ago 28.4MB
PS C:\Users\Administrator> docker build C:\dfiles
[+] Building 0.9s (2/2) FINISHED
=> [internal] load build definition from Dockerfile 0.6s
=> => transferring dockerfile: 2B 0.0s
=> [internal] load .dockerignore 0.8s
=> => transferring context: 2B 0.0s
failed to solve with frontend dockerfile.v0: failed to read dockerfile: open /var/lib/docker/tmp/buildkit-mount632819289/Dockerfile: no such file or directory
The path where the Dockerfile is stored: C:\dfiles
The code in my Dockerfile.txt:
FROM alpine
CMD ["echo", "Hello StackOverflow!"]
Simple: use just Dockerfile with no extension instead of Dockerfile.txt.
"The code in my Dockerfile.txt"
The file needs to be called "Dockerfile", not "Dockerfile.txt". So remove the file extension from your file and try again.
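In PowerShell, that rename would be (a sketch, using the path from the question):
Rename-Item C:\dfiles\Dockerfile.txt Dockerfile
docker build C:\dfiles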
I have a CI script that builds Dockerfiles. My plan is that unit tests should be run in a test stage in each Dockerfile, for example:
FROM alpine AS build
WORKDIR /app
COPY src .
...
FROM build AS test
RUN mvn clean test
FROM build AS package
COPY --from=build ...
So, for a given Dockerfile, I would like to check if it has a test stage and, if so, run docker build --target test .... If it doesn't have a test stage, I don't want to run docker build (which would fail).
How can I check if a Dockerfile contains a certain stage without actually building it?
I do realize this question has some XY problem vibes to it, so feel free to enlighten me. But I also think the question can be generally useful anyway.
I'm going to shy away from trying to parse the Dockerfile since there are a lot of ways to inject false positives or negatives. E.g.
RUN echo \
FROM base as test
or
FROM base \
as test
So instead, I'm going to favor letting docker do the hard work, and modifying the file to not fail on a missing test stage. This can be done by adding a test stage to the file even when it already has a test stage. Whether you want to put this at the beginning or end of the Dockerfile depends on whether you are running buildkit:
$ cat df.dup-target
FROM busybox as test
RUN exit 1
FROM busybox as test
RUN exit 0
$ DOCKER_BUILDKIT=0 docker build --target test -f df.dup-target .
Sending build context to Docker daemon 20.99kB
Step 1/2 : FROM busybox as test
---> be5888e67be6
Step 2/2 : RUN exit 1
---> Running in 9f96f42bc6d8
The command '/bin/sh -c exit 1' returned a non-zero code: 1
$ DOCKER_BUILDKIT=1 docker build --target test -f df.dup-target .
[+] Building 0.1s (6/6) FINISHED
=> [internal] load build definition from df.dup-target 0.0s
=> => transferring dockerfile: 114B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 34B 0.0s
=> [internal] load metadata for docker.io/library/busybox:latest 0.0s
=> [test 1/2] FROM docker.io/library/busybox 0.0s
=> CACHED [test 2/2] RUN exit 0 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:8129063cb183c1c1aafaf3eef0c8671e86a54f795092fa7a918145c14da3ec3b 0.0s
Then you could add the always-successful test stage at the beginning or end, passing the modified Dockerfile on stdin for docker build to process:
$ cat df.simple
FROM busybox as build
RUN exit 0
$ cat - df.simple <<EOF | DOCKER_BUILDKIT=1 docker build --target test -f - .
FROM busybox as test
RUN exit 0
EOF
[+] Building 0.1s (6/6) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 109B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 34B 0.0s
=> [internal] load metadata for docker.io/library/busybox:latest 0.0s
=> [test 1/2] FROM docker.io/library/busybox 0.0s
=> CACHED [test 2/2] RUN exit 0 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:8129063cb183c1c1aafaf3eef0c8671e86a54f795092fa7a918145c14da3ec3b 0.0s
This is a simple grep invocation:
egrep -i -q '^FROM .* AS test$' Dockerfile
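A hypothetical CI wrapper around that check, matching the conditional build described in the question (the stage name test is an assumption baked into the pattern):
if egrep -i -q '^FROM .* AS test$' Dockerfile; then
  docker build --target test .   # only runs when a test stage exists
fi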
You also might consider running your unit tests outside of Docker, before you start building containers. (Or, if your CI system supports running steps inside containers, use a container to get a language runtime, but not necessarily run the Dockerfile.) You'll still need a Docker-based setup to run larger integration tests, but you can run these on your built production-ready containers.