Can you make any sense of Docker's error messages?

I admit I am a newbie in the container world, but I managed to get Docker running on Windows 10 with WSL2. I can also use the Docker UI and run containers/apps or images, so I believe the infrastructure is in place and up to date.
Yet when I try even the simplest Dockerfile, it doesn't seem to work, and I don't understand the error messages it gives:
This is the Dockerfile:
FROM ubuntu:20.04
(yes, a humble beginning, or an extremely slimmed-down repro)
docker build Dockerfile
[+] Building 0.0s (2/2) FINISHED
=> ERROR [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 33B 0.0s
=> ERROR [internal] load .dockerignore 0.0s
=> => transferring context: 33B 0.0s
------
> [internal] load build definition from Dockerfile:
------
------
> [internal] load .dockerignore:
------
failed to solve with frontend dockerfile.v0: failed to build LLB: error from sender: Dockerfile is not a directory

You need to run docker build -f [docker_file_name] . (don't miss the dot at the end).
If your file is named Dockerfile, you don't need the -f flag and the filename at all; just run docker build .
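For example (the file name below is just a placeholder; the trailing dot is the build context):
docker build -f my-app.dockerfile .    # explicit Dockerfile path, current directory as context
docker build .                         # uses ./Dockerfile in the context by default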

I faced a similar issue; I use Docker Desktop for Windows. Restarting the laptop resolved it. Hope it may help someone.

First check the Dockerfile's name (the D should be capital), then run docker build -f Dockerfile . (dot at the end).

For me, the cause was a Linux symlink in the same directory as the Dockerfile. When running docker build from Windows 10, it gave me the ERROR [internal] load build definition from Dockerfile. I suspect docker build . scans the directory and, if it can't read one file, it fails. My fix was to mount the directory with WSL and remove the symlink.
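If you suspect the same cause, one way to list broken symlinks in the build context (GNU find, e.g. from WSL) is:
find . -xtype l    # prints symlinks whose targets no longer exist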

I had the same issue but a different solution (on Windows):
I opened a console in my folder; my folder contains only Dockerfile
Dockerfile content was FROM ubuntu:20.04 (same as OP)
Ran docker build, knowing that I had a Dockerfile in my current folder
Was getting the same error message as the OP
I stopped the Docker Desktop service
Ran docker build again -- got "docker build" requires exactly 1 argument.
Ran docker build Dockerfile -- got unable to prepare context: context must be a directory: C:\z\docker\aem_ubuntu20\Dockerfile
Ran docker build . -- got error during connect: This error may indicate that the docker daemon is not running.
Re-started the Docker Desktop service
Ran docker build . -- success!
Conclusion: the correct form is docker build PATH, where PATH must be a folder and that folder must contain a Dockerfile
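Put another way, something like this (the folder is the one from the error above; the tag is just an example):
cd C:\z\docker\aem_ubuntu20
docker build .
docker build -t aem_ubuntu20:latest .    # same build, but tagging the resulting image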

In my case I got this error when running docker commands in the wrong directory. Just cd to the directory where your Dockerfile is, and all is good again.

Related

Migrating local Docker images to buildx

I have been using several locally built docker images that I am trying to migrate to building with docker buildx. Essentially I have a local container to build something from source, and then a prod container that references the local build container.
For example, I have two Dockerfiles in a directory, Dockerfile.builder and Dockerfile.prod
# Dockerfile.builder
FROM maven:3-eclipse-temurin-17
ARG VERSION
# clone git repository, do building things
# Dockerfile.prod
ARG BUILDER_TAG
FROM builder:$BUILDER_TAG as builder
# pull in build artifacts from builder container, do other things
Then from that working directory I would build the containers like so:
docker build --no-cache --build-arg VERSION=$BUILD_VERSION -t builder-container:${BUILD_VERSION} -f Dockerfile.builder .
docker build --no-cache --build-arg BUILDER_TAG=$BUILD_VERSION -t prod-container:${BUILD_VERSION} -f Dockerfile.prod .
I'm trying to adapt this to docker buildx but am struggling with the extra overhead and complexity.
I think this would be the closest to what I'm wanting to do:
docker buildx build --no-cache --build-arg VERSION=$BUILD_VERSION -t builder:${BUILD_VERSION} - < Dockerfile.builder
However, when I try that, I get the following:
[+] Building 4.3s (2/2) FINISHED
=> ERROR [internal] load .dockerignore 4.0s
=> => transferring context: 0.0s
=> ERROR [internal] load build definition from Dockerfile 4.3s
=> => transferring dockerfile: 30B 0.0s
------
> [internal] load .dockerignore:
------
------
> [internal] load build definition from Dockerfile:
------
ERROR: failed to solve: failed to read dockerfile: failed to remove: /var/lib/docker/zfs/graph/hiqgpytehhglat0nn1a06dop1/.zfs: unlinkat /var/lib/docker/zfs/graph/hiqgpytehhglat0nn1a06dop1/.zfs/snapshot: operation not permitted
So it is apparently not reading the Dockerfile I'm trying to supply via STDIN, and the path /var/lib/docker/zfs/graph/hiqgpytehhglat0nn1a06dop1/.zfs/snapshot doesn't exist.
Am I invoking docker buildx correctly for my use case?
Do I need to start with a fresh graph directory in order to start building my own images with buildx, or is there something I need to do with docker buildx create first?
I'm finding Docker's documentation on buildx very lacking in terms of how it differs conceptually from the legacy docker build, and I think that's part of my problem.
buildx config:
docker buildx inspect
Name: default
Driver: docker
Last Activity: 2023-02-15 03:27:29 +0000 UTC
Nodes:
Name: default
Endpoint: default
Status: running
Buildkit: 23.0.1
Platforms: linux/amd64, linux/amd64/v2, linux/amd64/v3, linux/386
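For reference, docker buildx build accepts the same -f flag and positional context directory as the classic builder, so a closer drop-in for the original two commands, avoiding the stdin form, would be something like this (a sketch; untested on this ZFS-backed setup):
docker buildx build --no-cache --build-arg VERSION=$BUILD_VERSION -t builder-container:${BUILD_VERSION} -f Dockerfile.builder .
docker buildx build --no-cache --build-arg BUILDER_TAG=$BUILD_VERSION -t prod-container:${BUILD_VERSION} -f Dockerfile.prod .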

Docker compose does not completely invalidate cache when asked

In an effort to update my container to the newest version of PHP 8.0 (8.0.20 at the time of writing), I have tried running
$ docker compose build --no-cache
[+] Building 148.2s (24/24) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 1.97kB 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/php:8.0-apache 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 6.17kB 0.0s
=> CACHED [base 1/9] FROM docker.io/library/php:8.0-apache 0.0s
=> [base 2/9] RUN a2enmod rewrite
...
But as seen from the output, only steps 2 and up are rebuilt; the base image is still read from cache, resulting in PHP version 8.0.8.
How can I force a complete rebuild without using old cache?
$ docker --version
Docker version 20.10.12, build 20.10.12-0ubuntu4
$ docker compose version
Docker Compose version v2.4.0
Top of Dockerfile:
FROM php:8.0-apache as base
# Apache rewrite module
RUN a2enmod rewrite
EDIT: After more research I find this question is similar to and possibly a duplicate of How to get docker-compose to always re-create containers from fresh images?. The missing part in this specific example is docker pull php:8.0-apache.
I don't understand why though. Why do I have to manually pull the fresh version of the base image?
Pruning (docker system prune, docker builder prune -a) has no effect on this issue, even after taking the containers down.
There's a general Docker rule that, if you already have an image locally, it's used as-is without checking Docker Hub. As @BMitch indicates in their answer, the newer BuildKit engine should validate that you do in fact have the most current version of the base image, but you can also update it manually.
In your case, you already have a php:8.0-apache image locally (with PHP 8.0.8), so you can manually pull the updated base image and rebuild:
docker pull php:8.0-apache
docker-compose build
If the base image has changed, this will invalidate the cache, and so you don't need --no-cache here. This will also work with the "classic" builder (though your output does show that you're using the newer BuildKit engine).
This is a common enough sequence that there's a shorthand for it:
docker-compose build --pull
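If you also want to discard the cached layers in the same run, the two flags can be combined (both the docker-compose and docker compose CLIs accept them):
docker-compose build --pull --no-cache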
You can do something like:
docker-compose up --force-recreate
or something like this:
docker-compose down --rmi all --remove-orphans && docker-compose up --force-recreate
This will remove all of the images, so use it at your own discretion. See the docker-compose down reference for details.
For a base image, CACHED effectively means "verified": BuildKit has queried the registry for the current base image digest and seen that it matches what it has already pulled down. More details are available in this GitHub issue.
There are only two reasons I can think of not to use that cache. The first is if your local build cache is corrupt; in that case, purge it (docker builder prune). The other is a sha256 collision, and if that ever happens, the registry server itself will probably be broken and fixing the builder will be the least of your concerns.
The point of --no-cache is to ensure that steps which could produce a different output are run again, and that is happening in this example.

How to run a CentOS 7 docker image on an ARM based Mac

I am trying to create a Dockerfile to run CentOS. One of the host systems that needs to run this container is an ARM-based (M1) Mac. These are the two files I have created so far.
# Dockerfile
FROM centos:7
# docker-compose.yml
version: "3.9"
services:
  genesis:
    build: .
When trying to run/build this container I get the following error:
Building genesis
[+] Building 0.9s (4/4) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 77B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> ERROR [internal] load metadata for docker.io/library/centos:7 0.8s
=> [auth] library/centos:pull token for registry-1.docker.io 0.0s
------
> [internal] load metadata for docker.io/library/centos:7:
------
failed to solve with frontend dockerfile.v0: failed to create LLB definition: failed to authorize: rpc error: code = Unknown desc = failed to fetch oauth token: Get "https://auth.docker.io/token?scope=repository%3Alibrary%2Fcentos%3Apull&service=registry.docker.io": read tcp 192.168.1.209:64469->3.228.155.36:443: read: software caused connection abort
ERROR: Service 'genesis' failed to build : Build failed
After some Google searching and answers on Stack Overflow, it looks like the issue has something to do with the architecture difference between the container and the host. I have tried setting the Dockerfile to
FROM --platform=aarch/arm centos:7
and
FROM --platform=linux/amd64 centos:7
but neither of these works; they return the same error as before. I have also tried specifying the platform in the docker-compose file, but that didn't work either.
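For reference, the compose-level override mentioned above had roughly this shape (a sketch; the compose spec allows a service-level platform key, but it did not help here):
# docker-compose.yml
version: "3.9"
services:
  genesis:
    build: .
    platform: linux/amd64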
Interestingly, I did seem to have it working when I used this command in the shell:
$ docker run --rm -it --platform=linux/amd64 centos:7 sh
but I need to have it working in the Dockerfile, as I then need to add more setup there.
Docker isn't a virtual machine, so there are some limitations.
Your application is for Linux, but you appear to be trying to run it from macOS.
Do your FROM lines specify a version of Linux?
If so, you need to build a new container of binaries native to macOS and avoid using Linux containers in your FROM lines.

Docker build is not working - It does not find the Dockerfile

I am trying to build a Quarkus container with a Dockerfile, but it looks like docker build is not finding the Dockerfile. I have changed the name of the Dockerfile, but it still doesn't work.
I run: docker build src/main/docker/native.dockerfile
And there is the error:
docker build src/main/docker/native.dockerfile
[+] Building 0.1s (1/2)
=> ERROR [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 97B 0.0s
------
> [internal] load build definition from Dockerfile:
------
failed to solve with frontend dockerfile.v0: failed to read dockerfile: error from sender: walk src\main\docker\native.dockerfile: System cannot find specified path.
Even when I run it from IntelliJ, it throws a different error (the screenshots are not included here).
This is the dockerfile:
FROM registry.access.redhat.com/ubi8/ubi-minimal
WORKDIR /work/
COPY target/*-runner /work/application
RUN chmod 775 /work
CMD ./application -Dquarkus.http.host=0.0.0.0 -Dquarkus.http.port=${PORT}
What am I doing wrong?
First of all, speaking of the -f flag, the command:
docker build -f src/main/docker/native.dockerfile
will not work, as you mentioned, but I think it is important to explain why: you did not specify the build context. When you type something like this:
docker build src/main/docker/native.dockerfile
Docker will look for a file literally called Dockerfile, and src/main/docker/native.dockerfile will act as the build context. In other words, when you copy something into your image, Docker needs to know where exactly to copy files and directories from. So you can give your Dockerfile whatever name you want; just remember to pass the build context (it can be either a relative or an absolute path).
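For example, from the project root (the image tag here is just a placeholder), the Dockerfile path and the context are passed separately:
docker build -f src/main/docker/native.dockerfile -t my-quarkus-app .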
Now let me address the errors you encountered :)
Roughly speaking, you have two different problems. The first one is that when you ran:
docker build /build/context/path
the Docker engine was not able to determine the context. I do not use Docker on Windows, but I am pretty sure this is because of the path separators. If I were you, I would simply change directory (just to make life easier) to the one that represents your build context (I assume this is the same directory where your Dockerfile is located), and simply run:
docker build --file native.dockerfile .
But then you will hit the problem you got in IntelliJ, which is a completely different issue. The reason for it: when Docker was copying files into your image from the host machine, it could not find any files matching your wildcard. I do not see your target directory; it is not present in the screenshots, so I cannot suggest anything further, but that is where the problem lies. Feel free to attach them and let's investigate together :)
Have a nice day!

failed to solve with frontend dockerfile.v0: failed to read dockerfile?

I am new to Docker and I am trying to create an image from my application. I created a Dockerfile in the same directory as the package.json file, with no extension, just Dockerfile.
Now in Dockerfile:
FROM node:14.16.0
CMD ["/bin/bash"]
and I am trying to build the image with this command:
docker build -t app .
But I keep getting this error:
[+] Building 0.2s (2/2) FINISHED
=> [internal] load build definition from Dockerfile 0.2s
=> => transferring dockerfile: 2B 0.0s
=> CANCELED [internal] load .dockerignore 0.0s
=> => transferring context: 0.0s
failed to solve with frontend dockerfile.v0: failed to read dockerfile: open /var/lib/docker/tmp/buildkit-mount457647683/Dockerfile: no such file or directory
My folder structure is like this:
|- Dockerfile
|- README.md
|- src
|- package.json
|- public
|- node_modules
|- package-lock.json
My OS is: Windows 10 Pro
Double check that you are in the right directory.
I was in downloads/app
When I downloaded the app from Docker as part of their tutorial, I extracted it and it ended up in downloads/app/app
Type dir in your terminal to see if you can see the Dockerfile or another folder called app.
I encountered a different issue, so sharing as an FYI. On Windows, I created the docker file as DockerFile instead of Dockerfile. The capital F messed things up.
If you come here from a duplicate, notice also that Docker prevents you from accessing files outside the current directory tree. So, for example,
docker build -f ../Dockerfile .
will not be allowed. You have to copy the Dockerfile into the current directory, or perhaps run the build in the parent directory.
For what it's worth, you also can't use symlinks to files elsewhere in your file system from docker build for security reasons.
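As a sketch of the second option (the tag is just an example), run the build from the parent directory so the Dockerfile sits inside the build context:
cd ..
docker build -f Dockerfile -t myapp .    # the parent directory is now the context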
The naming convention for the Docker file is 'Dockerfile', not 'DockerFile'; I got this error because of that.
On Windows, I got this error when the Dockerfile had a .txt extension. Changing it to type "file" (no extension) fixed the issue.
It's a pretty generic error message, but what caused it for me was that, in my Dockerfile, I didn't have a space in the instruction specifying the initial command. Here's how it should look:
CMD ["command-name"]
Notice the space between "CMD" and "[".
Instead, my mistake was that I typed CMD["command-name"], which resulted in the error you described.
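For this question's setup, a minimal Dockerfile with correct CMD spacing would look roughly like this (the start script is hypothetical; substitute your app's real entry point):
FROM node:14.16.0
WORKDIR /app
# copy the package manifests first so dependency installation is cached
COPY package*.json ./
RUN npm install
# copy the rest of the application source
COPY . .
# note the space between CMD and the JSON array
CMD ["npm", "start"]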
