Migrating local Docker images to buildx

I have been using several locally built docker images that I am trying to migrate to building with docker buildx. Essentially I have a local container to build something from source, and then a prod container that references the local build container.
For example, I have two Dockerfiles in a directory, Dockerfile.builder and Dockerfile.prod
# Dockerfile.builder
FROM maven:3-eclipse-temurin-17
ARG VERSION
# clone git repository, do building things
# Dockerfile.prod
ARG BUILDER_TAG
FROM builder:$BUILDER_TAG as builder
# pull in build artifacts from builder container, do other things
Then from that working directory I would build the containers like so:
docker build --no-cache --build-arg VERSION=$BUILD_VERSION -t builder-container:${BUILD_VERSION} -f Dockerfile.builder .
docker build --no-cache --build-arg BUILDER_TAG=$BUILD_VERSION -t prod-container:${BUILD_VERSION} -f Dockerfile.prod .
I'm trying to adapt this to docker buildx but am struggling with the extra overhead and complexity.
I think this would be the closest to what I'm wanting to do:
docker buildx build --no-cache --build-arg VERSION=$BUILD_VERSION -t builder:${BUILD_VERSION} - < Dockerfile.builder
However, when I try that, I get the following:
[+] Building 4.3s (2/2) FINISHED
=> ERROR [internal] load .dockerignore 4.0s
=> => transferring context: 0.0s
=> ERROR [internal] load build definition from Dockerfile 4.3s
=> => transferring dockerfile: 30B 0.0s
------
> [internal] load .dockerignore:
------
------
> [internal] load build definition from Dockerfile:
------
ERROR: failed to solve: failed to read dockerfile: failed to remove: /var/lib/docker/zfs/graph/hiqgpytehhglat0nn1a06dop1/.zfs: unlinkat /var/lib/docker/zfs/graph/hiqgpytehhglat0nn1a06dop1/.zfs/snapshot: operation not permitted
So that is telling me that it's not reading the Dockerfile I'm trying to supply via stdin, and the path /var/lib/docker/zfs/graph/hiqgpytehhglat0nn1a06dop1/.zfs/snapshot doesn't exist.
Am I invoking docker buildx correctly for my use case?
Do I need to start with a fresh graph directory in order to start building my own images with buildx, or is there something I need to do with docker buildx create first?
I'm finding Docker's documentation on buildx very lacking in terms of how it differs conceptually from the legacy docker build, and I think that's part of my problem.
buildx config:
docker buildx inspect
Name: default
Driver: docker
Last Activity: 2023-02-15 03:27:29 +0000 UTC
Nodes:
Name: default
Endpoint: default
Status: running
Buildkit: 23.0.1
Platforms: linux/amd64, linux/amd64/v2, linux/amd64/v3, linux/386
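For completeness, this is my best guess at the buildx equivalent of my original two commands, keeping -f and the directory context instead of piping the Dockerfile over stdin (a sketch only; I'm not sure whether --load is even necessary with the docker driver shown above):
docker buildx build --no-cache --build-arg VERSION=$BUILD_VERSION -t builder-container:${BUILD_VERSION} -f Dockerfile.builder --load .
docker buildx build --no-cache --build-arg BUILDER_TAG=$BUILD_VERSION -t prod-container:${BUILD_VERSION} -f Dockerfile.prod --load .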

Related

docker buildx fails to show result in image list

The following commands do not show the built ubuntu1 image:
docker buildx build -f 1.dockerfile -t ubuntu1 .
docker image ls | grep ubuntu1
# no output
1.dockerfile:
FROM ubuntu:latest
RUN echo "my ubuntu"
Plus, I cannot use the image in FROM statements in other Dockerfiles (both builds are on my local Windows box):
2.dockerfile:
FROM ubuntu1
RUN echo "my ubuntu 2"
docker buildx build -f 2.dockerfile -t ubuntu2 .
#error:
WARNING: No output specified for docker-container driver. Build result will only remain in the build cache. To push result image into registry use --push or to load image into docker use --load
[+] Building 1.8s (4/4) FINISHED
=> [internal] load build definition from 2.dockerfile 0.0s
=> => transferring dockerfile: 84B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> ERROR [internal] load metadata for docker.io/library/ubuntu1:latest 1.8s
=> [auth] library/ubuntu1:pull token for registry-1.docker.io 0.0s
------
> [internal] load metadata for docker.io/library/ubuntu1:latest:
------
2.dockerfile:1
--------------------
1 | >>> FROM ubuntu1:latest
2 | RUN echo "my ubuntu 2"
3 |
--------------------
error: failed to solve: ubuntu1:latest: pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed (did you mean ubuntu:latest?)
Any idea what's going on? How can I see what buildx prepared, and reference one image in another Dockerfile?
OK, I found a partial solution: I need to add --output type=docker as per the docs. This puts the image in the image list, but I still cannot use it in the second Dockerfile.
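For reference, this is what the command looks like with the output flag; as I read the docs, --load is shorthand for --output type=docker (file and tag names as in my example above):
docker buildx build -f 1.dockerfile -t ubuntu1 --output type=docker .
# equivalent shorthand
docker buildx build -f 1.dockerfile -t ubuntu1 --load .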

How to run a CentOS 7 Docker image on an ARM-based Mac

I am trying to create a Dockerfile to run CentOS. One of the host systems that needs to run this container is an ARM-based (M1) Mac. These are the two files I have created so far.
# Dockerfile
FROM centos:7
# docker-compose.yml
version: "3.9"
services:
genesis:
build: .
When trying to run/build this container I get the following error:
Building genesis
[+] Building 0.9s (4/4) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 77B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> ERROR [internal] load metadata for docker.io/library/centos:7 0.8s
=> [auth] library/centos:pull token for registry-1.docker.io 0.0s
------
> [internal] load metadata for docker.io/library/centos:7:
------
failed to solve with frontend dockerfile.v0: failed to create LLB definition: failed to authorize: rpc error: code = Unknown desc = failed to fetch oauth token: Get "https://auth.docker.io/token?scope=repository%3Alibrary%2Fcentos%3Apull&service=registry.docker.io": read tcp 192.168.1.209:64469->3.228.155.36:443: read: software caused connection abort
ERROR: Service 'genesis' failed to build : Build failed
After some Google searching and answers on Stack Overflow, it looks like the issue has something to do with the architecture difference between the containers and the host. I have tried setting the Dockerfile to
FROM --platform=aarch/arm centos:7
and
FROM --platform=linux/amd64 centos:7
but neither of these works; they return the same error as before. I have also tried specifying the platform in the docker-compose file, but that didn't work either.
Interestingly, I did seem to have it working when I used this command in the shell:
$ docker run --rm -it --platform=linux/amd64 centos:7 sh
but I need to have it working in the Dockerfile, as I then need to do more setup in the Dockerfile.
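For reference, this is roughly how I specified the platform in the compose file (a sketch; I may have the key in the wrong place):
# docker-compose.yml
version: "3.9"
services:
  genesis:
    build: .
    platform: linux/amd64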
Docker isn't a virtual machine, so there are some limitations.
Your application is for Linux, but you appear to be trying to run it from macOS?
Do your FROM lines specify a version of Linux?
If so, you need to build a new container of binaries native to macOS and avoid using Linux containers in your FROM lines.

Docker use local image with buildx

I am building an image for a Docker container running on a different architecture. As I don't have internet access all the time, I usually just pull the image when I have internet, and Docker uses the local image instead of pulling a new one. Since I started building the image with buildx, this no longer seems to work. Is there any way to tell Docker to only use the local image? When I have a connection, Docker seems to check whether there is a new version available, but it uses the local (or cached) image, as I would expect it to without an internet connection.
$ docker image ls
ros galactic bac817d14f26 5 weeks ago 626MB
$ docker image inspect ros:galactic
...
"Architecture": "arm64",
"Variant": "v8",
"Os": "linux",
...
Example build command
$ docker buildx build . --platform linux/arm64
WARN[0000] No output specified for docker-container driver. Build result will only remain in the build cache. To push result image into registry use --push or to load image into docker use --load
[+] Building 0.3s (3/3) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 72B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> ERROR [internal] load metadata for docker.io/library/ros:galactic 0.0s
------
> [internal] load metadata for docker.io/library/ros:galactic:
------
Dockerfile:1
--------------------
1 | >>> FROM ros:galactic
2 | RUN "echo hello"
3 |
--------------------
error: failed to solve: failed to fetch anonymous token: Get "https://auth.docker.io/token?scope=repository%3Alibrary%2Fros%3Apull&service=registry.docker.io": proxyconnect tcp: dial tcp 127.0.0.1:3333: connect: connection refused
My workaround for this is to explicitly state the registry in the Dockerfile FROM sections, be it your own private registry or Docker Hub.
For example, to use the Docker Hub ubuntu:latest image, instead of just doing FROM ubuntu:latest I would write this in the Dockerfile:
FROM docker.io/library/ubuntu:latest
To use myprivateregistry:5000 I would use:
FROM myprivateregistry:5000/ubuntu:latest
You must also set the --pull=false flag for the docker buildx build or DOCKER_BUILDKIT=1 docker build command. When you have internet again you can switch back to --pull=true.
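Putting that together, a sketch of what the Dockerfile and build command might look like (the image name and tag here are placeholders):
# Dockerfile
FROM docker.io/library/ubuntu:latest
RUN echo "built offline"
# build without pulling, and load the result into the local image store
docker buildx build --pull=false --load -t myimage:latest .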

Dockerfile Build Error: The system cannot find the path specified

C:\kafka> docker build Dockerfile
[+] Building 0.0s (1/2)
=> ERROR [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 63B 0.0s
------
> [internal] load build definition from Dockerfile:
------
failed to solve with frontend dockerfile.v0: failed to read dockerfile: error from sender: walk Dockerfile: The system cannot find the path specified.
Above is an example of the error I am getting and the command run.
My Dockerfile is named "Dockerfile", as I have read in many answers, but still no resolution.
My Dockerfile is also within the directory I am in.
To build a docker image:
cd /path/where/docker_file/lives
docker build .
The above is the same as:
docker build -f Dockerfile .
You need to specify the Dockerfile name only if it is not the default:
cd /path/where/docker_file/lives
docker build -f Dockerfile.modified .

How do I build a dockerfile if the name of the dockerfile isn't Dockerfile?

I am able to build a Dockerfile like
docker build -t deepak/ruby .
But for a Dockerfile which is not named Dockerfile
# DOCKER-VERSION 0.4.8
FROM deepak/ruby
MAINTAINER Deepak Kannan "deepak@example.com"
RUN ./bin/rails s
Let us say it is called Dockerfile.app,
which we build with
docker build -t deepak/app Dockerfile.app
then I get the error
Uploading context 0 bytes
Error build: EOF
EOF
Notice there is a dot . at the end of both commands.
docker build -f MyDockerfile .
Or with a tag:
docker build -t mysuperimage -f MyDockerfile .
This works:
docker build -t doronaviguy/helloworld -f SomeDockerFile .
Docker build Documentation
The last parameter to docker build is the build path; when you put ., it means this is the path where Docker will find the Dockerfile. When you change it to Dockerfile.app, it will then try to look for Dockerfile.app/Dockerfile, which isn't correct.
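To make that concrete, here is a sketch of the failing call next to one that names the file with -f and passes the directory as the build path:
# fails: Docker treats Dockerfile.app as the build context directory
docker build -t deepak/app Dockerfile.app
# works: the current directory is the context and the file is named explicitly
docker build -t deepak/app -f Dockerfile.app .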
I'm not sure if it will still work, but you used to be able to do this.
$ docker build -t deepak/app - < Dockerfile.app
Try that and see if it helps; if not, maybe open a Docker issue to add this feature back in, or to update the documentation on how to use a Dockerfile with a different name.
More info here: http://docs.docker.io/en/latest/commandline/command/build/
Try dockerfeed. It uses Docker's ability to build a context via stdin. I wrote the script to address exactly the problem you describe, which I was facing myself.
To replace a Dockerfile with a different one, you do it like this:
dockerfeed -d Dockerfile.app . | docker build -t deepak/ruby -
And voilà. Dockerfeed does the same as docker build: it packs the source with its Dockerfile, but lets you swap out the old Dockerfile for the desired one. No files are created in the process and no source is changed. The generated tar archive is piped into docker, which in turn sends it down to the Docker daemon.
Update:
This was a valid answer in the old days, when no -f switch was available. The option was introduced with Docker version 1.5. Now you can provide a different Dockerfile like this:
docker build -f other-Dockerfile .
This should work,
docker build -t <tag-name> -f <file-name> .
You can also achieve this using docker-compose.
In your docker-compose.yml, under the build section, you can specify the directory in which you store your Dockerfile and its alternate name as follows:
build:
  context: "/path/to/docker/directory"
  dockerfile: "dockerfile-alternate-name"
docker-compose
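For example, a minimal docker-compose.yml using those keys might look like this (the service name and paths are placeholders):
version: "3"
services:
  app:
    build:
      context: "/path/to/docker/directory"
      dockerfile: "dockerfile-alternate-name"
Running docker-compose up --build (or docker-compose build) then builds the service with the alternate Dockerfile.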
Windows users should try the command below:
docker build -f absolute_docker_file_path .
for example :
docker build -f D:\Code\core-api\API_TEST.Dockerfile .
If you have multiple Dockerfiles, as could be the case in large projects, you can specify the fully-qualified path of the Dockerfile to use.
docker build -t swagger_local -f /Users/123456/myrepos/storage/services/gateway/swagger/Dockerfile .
The command, when run from /Users/123456/myrepos/storage, creates a swagger_local image using the Dockerfile in a child project (services/gateway in my case).
In my case I am in the /tmp directory, which contains a lot of other files, and Docker tries to use those files even though I pass -f <Dockerfile.foo>.
The cleanest and easiest solution for me (instead of dockerfeed from the answers above) is:
cat DockerFile.debian.foo | docker build -t debian.foo -
Let's assume that you have successfully installed Docker Toolbox, including docker-compose and docker-machine. This is my docker-compose.yml:
version: '2'
services:
  web:
    build: ./www/python
    ports:
      - "5000:5000"
And this is the Dockerfile:
FROM python:3.4-alpine
ADD ./www/python /code
WORKDIR /code
RUN pip install -r requirements.txt
CMD ["python", "app.py"]
Make sure those files are located precisely under the root folder. When you run docker-compose up, it is going to build the image for web.
For me it worked once I put the entire path in the file option:
(meta_learning) brandomiranda~ ❯ docker build -f ~/iit-term-synthesis/Dockerfile_arm -t brandojazz/iit-term-synthesis:test_arm ~/iit-term-synthesis/
[+] Building 43.8s (9/28)
=> [internal] load build definition from Dockerfile_arm 0.0s
=> => transferring dockerfile: 2.59kB 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/continuumio/miniconda3:latest 0.0s
=> [ 1/24] FROM docker.io/continuumio/miniconda3 0.0s
=> CACHED https://api.github.com/repos/IBM/pycoq/git/refs/heads/main 0.0s
=> [ 2/24] RUN apt-get update && apt-get install -y --no-install-recommends ssh git m4 libgmp-dev opam wget ca-ce 14.9s
=> [ 3/24] RUN useradd -m bot 0.2s
=> [ 4/24] WORKDIR /home/bot 0.0s
=> [ 5/24] ADD https://api.github.com/repos/IBM/pycoq/git/refs/heads/main version.json 0.0s
=> [ 6/24] RUN opam init --disable-sandboxing 28.6s
=> => # [ocaml-base-compiler.4.14.0] downloaded from cache at https://opam.ocaml.org/cache
=> => # <><> Processing actions <><><><><><><><><><><><><><><><><><><><><><><><><><><><>
=> => # ∗ installed base-bigarray.base
=> => # ∗ installed base-threads.base
=> => # ∗ installed base-unix.base
=> => # ∗ installed ocaml-options-vanilla.1
This failed for me:
(meta_learning) brandomiranda~ ❯ docker build -f Dockerfile_arm -t brandojazz/iit-term-synthesis:test_arm ~/iit-term-synthesis/
[+] Building 0.0s (1/2)
=> ERROR [internal] load build definition from Dockerfile_arm 0.0s
=> => transferring dockerfile: 40B 0.0s
------
> [internal] load build definition from Dockerfile_arm:
------
failed to solve with frontend dockerfile.v0: failed to read dockerfile: error from sender: open .Trash: operation not permitted
