I'm using # syntax = docker/dockerfile:experimental in my Dockerfile to mount SSH during the build, but for some reason it has stopped working.
I have the environment variable DOCKER_BUILDKIT=1 set and have already tried DOCKER_CLI_EXPERIMENTAL=enabled as well, but nothing changed.
#2 transferring context: 69B done
#2 DONE 0.0s
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 402B done
#1 DONE 0.0s
#3 resolve image config for docker.io/docker/dockerfile:experimental
#3 ERROR: docker.io/docker/dockerfile:experimental not found
------
> resolve image config for docker.io/docker/dockerfile:experimental:
------
docker.io/docker/dockerfile:experimental not found
That's the output and the problem.
Best regards
Fixed by forcing a pull:
docker pull docker/dockerfile:experimental
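For context, the kind of setup this applies to looks roughly like the following (a sketch of an SSH mount in a RUN step, not the exact Dockerfile from the question; the git@github.com check is only illustrative):
# syntax = docker/dockerfile:experimental
FROM alpine
RUN apk add --no-cache openssh-client git
# the host's SSH agent socket is mounted for this RUN step only
RUN --mount=type=ssh ssh -o StrictHostKeyChecking=no -T git@github.com || true
Built with the agent forwarded:
DOCKER_BUILDKIT=1 docker build --ssh default .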
I am trying to create a Dockerfile to run CentOS. One of the host systems that needs to run this container is an ARM-based (M1) Mac. These are the two files I have created so far.
# Dockerfile
FROM centos:7
# docker-compose.yml
version: "3.9"
services:
  genesis:
    build: .
When trying to run/build this container I get the following error:
Building genesis
[+] Building 0.9s (4/4) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 77B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> ERROR [internal] load metadata for docker.io/library/centos:7 0.8s
=> [auth] library/centos:pull token for registry-1.docker.io 0.0s
------
> [internal] load metadata for docker.io/library/centos:7:
------
failed to solve with frontend dockerfile.v0: failed to create LLB definition: failed to authorize: rpc error: code = Unknown desc = failed to fetch oauth token: Get "https://auth.docker.io/token?scope=repository%3Alibrary%2Fcentos%3Apull&service=registry.docker.io": read tcp 192.168.1.209:64469->3.228.155.36:443: read: software caused connection abort
ERROR: Service 'genesis' failed to build : Build failed
After some Google searching and reading answers on Stack Overflow, it looks like the issue has something to do with the architecture difference between the container and the host. I have tried setting the Dockerfile to
FROM --platform=aarch/arm centos:7
and
FROM --platform=linux/amd64 centos:7
but neither of these works; they return the same error as before. I have also tried specifying the platform in the docker-compose file, but that didn't work either (a sketch of what I mean is below).
Interestingly, I did seem to have it working when I used this command in the shell:
$ docker run --rm -it --platform=linux/amd64 centos:7 sh
but I need to have it working in the Dockerfile, as I then need to do more setup in the Dockerfile.
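What I mean by specifying the platform in the compose file is roughly this (a sketch, not my exact file; the platform key is part of the Compose spec supported by recent versions of docker compose):
version: "3.9"
services:
  genesis:
    build: .
    platform: linux/amd64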
Docker isn't a virtual machine, so there are some limitations.
Your application is for Linux, but you appear to be trying to run it from macOS?
Do your FROM lines specify a version of Linux?
If so, you need to build a new container of binaries native to macOS and avoid using Linux containers in your FROM lines.
I'm trying to create a Docker image using my Dockerfile. I have no prior experience with Docker, so I can't really describe the problem any better. I was able to do this yesterday without problems, but I deleted the image and now I can't recreate it.
My Dockerfile
FROM bitnami/spark
USER root
RUN pip install unidecode
RUN curl https://repo1.maven.org/maven2/com/databricks/spark-xml_2.12/0.13.0/spark-xml_2.12-0.13.0.jar --output /opt/bitnami/spark/jars/spark-xml_2.10-0.2.0.jar
ENV PYTHONPATH=$SPARK_HOME/python:$SPARK_HOME/python/lib/py4j-0.10.9-src.zip:$PYTHONPATH
I'm trying to create the Docker image with this command: docker build -t imagename .
I am in the same directory as the Dockerfile, so that's not the issue.
This is the output I get when I run the command above.
[+] Building 32.0s (3/3) FINISHED
=> [internal] load build definition from Dockerfile 0.1s
=> => transferring dockerfile: 38B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> ERROR [internal] load metadata for docker.io/bitnami/spark:latest 31.8s
------
> [internal] load metadata for docker.io/bitnami/spark:latest:
------
failed to solve with frontend dockerfile.v0: failed to create LLB definition: failed to authorize:
rpc error: code = Unknown desc = failed to fetch anonymous token:
Get "https://auth.docker.io/token?scope=repository%3Abitnami%2Fspark%3Apull&service=registry.docker.io":
dial tcp 54.85.56.253:443: i/o timeout
Restarting Docker fixed this for me.
Reinstalling Docker fixed this problem for me.
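If restarting or reinstalling doesn't help, it's worth checking whether the registry's auth endpoint from the error message is reachable at all, for example (a rough check using the hostname from the error above; it should return an HTTP response rather than timing out):
curl -v "https://auth.docker.io/token?service=registry.docker.io"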
It's commonly known that you can run docker commit against a failed build process to take a snapshot of a container for debugging purposes. The container ID is gleaned from the running in <ID> text. However, this text is not emitted during builds that happen with Docker's newer BuildKit buildx functionality.
I tried using --progress plain on the Docker build command, but that hasn't shown me the container IDs. Plus, I cannot run a new container from the image layer IDs (SHA hashes) that are spit out.
Sample BuildKit Output
Running the build, I get this output:
#1 [internal] load build definition from Dockerfile
#1 sha256:0e70418d547c3ccb20da7b100cf4f69564bddc416652e3e2b9b514e9a732b4aa
#1 transferring dockerfile: 32B done
#1 DONE 0.0s
#2 [internal] load .dockerignore
#2 sha256:396b2cfd81ff476a70ecda27bc5d781bd61c859b608537336f8092e155dd38bf
#2 transferring context: 34B done
#2 DONE 0.0s
#3 [internal] load metadata for docker.io/library/node:latest
#3 sha256:1c0b05b884068c98f7acad32e4f7fd374eba1122b4adcbb1de68aa72d5a6046f
#3 DONE 0.0s
#4 [1/4] FROM docker.io/library/node
#4 sha256:5045d46e15358f34ea7fff145af304a1fa3a317561e9c609f4ae17c0bd3359df
#4 DONE 0.0s
#5 [internal] load build context
#5 sha256:49d7a085caed3f75e779f05887e53e0bba96452e3a719963993002a3638cb8a3
#5 transferring context: 35.17kB 0.0s done
#5 DONE 0.1s
#6 [2/4] ADD [trevortest/*, /app/]
#6 sha256:6da32965a50f6e13322efb20007ff49fb0546e2ff55799163b3b00d034a62c57
#6 CACHED
Question: How can I obtain the container IDs of the build process, during each step, specifically when using Docker BuildKit?
BuildKit works differently from the legacy docker build system. At the moment, there is no direct way to spawn a container from a step in the build and troubleshoot it.
To get the most out of BuildKit, the best approach is to organize the build into smaller logical stages. Once the build is organized this way, you can tell Docker to stop at a certain stage by using --target. When a target is specified, Docker creates an image with the results of the build up to that stage, and you can spawn a container from that image to troubleshoot in the same way as with the old build system.
Take this example. Here I have four stages, two of which are parallel:
FROM debian:9.11 AS stage-01
# Prepare for installation
RUN apt update && \
    apt upgrade -y

FROM stage-01 AS stage-02
# Install building tools
RUN apt install -y build-essential

FROM stage-02 AS stage-02a
RUN echo "Build 0.1" > /version.txt

FROM stage-02 AS stage-03
RUN apt install -y cmake gcc g++
Now you can use the --target option to tell Docker that you want to stop at stage-02, as follows:
$ docker build -f test-docker.Dockerfile -t test . --target stage-02
[+] Building 67.5s (7/7) FINISHED
=> [internal] load build definition from test-docker.Dockerfile 0.0s
=> => transferring dockerfile: 348B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/debian:9.11 0.0s
=> [stage-01 1/2] FROM docker.io/library/debian:9.11 0.0s
=> CACHED [stage-01 2/2] RUN apt update && apt upgrade -y 0.0s
=> [stage-02 1/1] RUN apt install -y build-essential 64.7s
=> exporting to image 2.6s
=> => exporting layers 2.5s
=> => writing image sha256:ac36b95184b79b6cabeda3e4d7913768f6ed73527b76f025262d6e3b68c2a357 0.0s
=> => naming to docker.io/library/test 0.0s
Now you have an image named test and you can spawn a container from it to troubleshoot:
docker run -ti --rm --name troubleshoot test /bin/bash
root@bbdb0d2188c0:/# ls
Using multiple stages not only facilitates troubleshooting, it also speeds up the build process, since the parallel branches can be built on different instances. The readability of the build file is also significantly improved.
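The same --target flag works for the other stages too, e.g. to build only the stage-02a branch of the example above (the tag name here is arbitrary):
docker build -f test-docker.Dockerfile --target stage-02a -t test-02a .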
C:\kafka> docker build Dockerfile
[+] Building 0.0s (1/2)
=> ERROR [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 63B 0.0s
------
> [internal] load build definition from Dockerfile:
------
failed to solve with frontend dockerfile.v0: failed to read dockerfile: error from sender: walk Dockerfile: The system cannot find the path specified.
Above is an example of the error I am getting and the command I ran.
My Dockerfile is named "Dockerfile", as advised in many answers, but that has not resolved the problem.
My Dockerfile is also in the directory I am running the command from.
To build a docker image:
cd /path/where/docker_file/lives
docker build .
The above is the same as:
docker build -f Dockerfile .
You only need to specify the Dockerfile name if it is not the default:
cd /path/where/docker_file/lives
docker build -f Dockerfile.modified .
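Applied to the command in the question, the build would look roughly like this (the build context is the directory, not the Dockerfile itself):
C:\kafka> docker build .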
I admit I am a newbie in the container world, but I managed to get Docker running on my Windows 10 machine with WSL2. I can also use the Docker UI and run containers/apps and images, so I believe that the infrastructure is in place and up to date.
Yet when I try even the simplest Dockerfile, it doesn't seem to work, and I don't understand the error messages it gives:
This is the Dockerfile:
FROM ubuntu:20.04
(yes, a humble beginning - or an extremely slimmed-down repro)
docker build Dockerfile
[+] Building 0.0s (2/2) FINISHED
=> ERROR [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 33B 0.0s
=> ERROR [internal] load .dockerignore 0.0s
=> => transferring context: 33B 0.0s
------
> [internal] load build definition from Dockerfile:
------
------
> [internal] load .dockerignore:
------
failed to solve with frontend dockerfile.v0: failed to build LLB: error from sender: Dockerfile is not a directory
You need to run docker build -f [docker_file_name] . (don't miss the dot at the end).
If the name of your file is Dockerfile, then you don't need the -f flag and the filename.
I faced a similar issue; I use Docker Desktop for Windows. Restarting the laptop resolved it. Hope it helps someone.
First check the Dockerfile name (the D should be capital), then run docker build -f Dockerfile . (dot at the end).
For me, I had a Linux symlink in the same directory as the Dockerfile. When running docker build from Windows 10, it gave me ERROR [internal] load build definition from Dockerfile. I suspect docker build . scans the directory and, if it can't read one file, it crashes. I mounted the directory with WSL and removed the symlink.
I had the same issue but a different solution (on Windows):
I opened a console in my folder; the folder contains only the Dockerfile
Dockerfile content was FROM ubuntu:20.04 (same as OP)
Ran docker build knowing that I had a Dockerfile in my current folder
I was getting the same error message as the OP
I stopped the Docker Desktop service
Ran docker build again -- got "docker build" requires exactly 1 argument.
Ran docker build Dockerfile -- got unable to prepare context: context must be a directory: C:\z\docker\aem_ubuntu20\Dockerfile
Ran docker build . -- got error during connect: This error may indicate that the docker daemon is not running.
Re-started the Docker Desktop service
Ran docker build . -- success!
Conclusion: docker build PATH, where PATH must be a folder name and that folder must contain a Dockerfile
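For example, using the folder from the error message above (a sketch):
docker build C:\z\docker\aem_ubuntu20
or equivalently, from inside that folder:
cd C:\z\docker\aem_ubuntu20
docker build .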
In my case I got this error when running docker commands in the wrong directory. Just cd to the directory where your Dockerfile is, and all is good again.