I am simply adding a user, group and directory to a standard image. It works fine until the user ID number gets too big, and then Docker gets stuck exporting layers. I simplified the Dockerfile down to:
FROM eclipse-temurin:17-jdk-jammy
USER root
RUN groupadd -r -g 996600555 testGroup
RUN useradd -u 997690599 -g testGroup -r -d /tmp/test testUser
RUN mkdir /tmp/test
RUN chown -R testUser:testGroup /tmp/test
RUN chmod -R g+rw /tmp/test
CMD ["/bin/bash"]
The build output shows:
=> [internal] load build definition from Dockerfile 0.1s
=> => transferring dockerfile: 309B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/eclipse-temurin:17-jdk-jammy 1.3s
=> CACHED [1/7] FROM docker.io/library/eclipse-temurin:17-jdk-jammy@sha256:ff753441e51d0260f917710d0dfdea73f698624768db2b31be3f6685b4953874 0.0s
=> => resolve docker.io/library/eclipse-temurin:17-jdk-jammy@sha256:ff753441e51d0260f917710d0dfdea73f698624768db2b31be3f6685b4953874 0.0s
=> [2/6] RUN groupadd -r -g 996600555 testGroup 0.5s
=> [3/6] RUN useradd -u 997690599 -g testGroup -r -d /tmp/test testUser 0.6s
=> [4/6] RUN mkdir /tmp/test 0.6s
=> [5/6] RUN chown -R testUser:testGroup /tmp/test 0.6s
=> [6/6] RUN chmod -R g+rw /tmp/test 0.6s
=> exporting to image 163.4s
=> => exporting layers
UPDATE:
Without the added lines for adding user, group, and directory...image size is 454 MB
If I change the value of group and userid to be 10000...image size is 457 MB
If I change the value of group and userid to be 1000000...image size is 778 MB
If I change the value of group and userid to be 5000000...image size is 2.07 GB
If I change the value of group and userid to be 10000000...image size is 3.69 GB
As the user ID increases, the processing time and the size of the image get much larger.
Unfortunately, I need to stick with the large user ID value to match the container user to the host user. Does anyone have a solution for this type of error?
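For what it's worth, a commonly suggested workaround (my assumption, not something stated in this post) is useradd's -l/--no-log-init flag: useradd normally records the new user in /var/log/lastlog and /var/log/faillog, which are sparse files indexed by UID, and a huge UID makes the exported (non-sparse) layer balloon accordingly. A minimal sketch of the changed line:
# -l / --no-log-init: do not add the user to the lastlog/faillog databases,
# which are sparse files indexed by UID and balloon in the exported layer
RUN useradd -l -u 997690599 -g testGroup -r -d /tmp/test testUser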
DOCKER VERSION INFORMATION
Client:
Cloud integration: v1.0.28
Version: 20.10.17
API version: 1.41
Go version: go1.17.11
Git commit: 100c701
Built: Mon Jun 6 23:09:02 2022
OS/Arch: windows/amd64
Context: default
Experimental: true
Server: Docker Desktop 4.11.0 (83626)
Engine:
Version: 20.10.17
API version: 1.41 (minimum version 1.12)
Go version: go1.17.11
Git commit: a89b842
Built: Mon Jun 6 23:01:23 2022
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.6.6
GitCommit: 10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1
runc:
Version: 1.1.2
GitCommit: v1.1.2-0-ga916309
docker-init:
Version: 0.19.0
GitCommit: de40ad0
(Marked as a duplicate of: Why should there be spaces around '[' and ']' in Bash?)
This is my Dockerfile:
FROM tomcat:9
RUN apt-get update
RUN apt-get install -y iputils-ping file
ARG TARGETARCH
RUN if ["$TARGETARCH" = "amd64"]; then \
apt-get install -y libc6-i386 ; \
fi
WORKDIR /usr/local/tomcat
... // skipped
The problem is the libc6-i386 package: it just never gets installed on the amd64 architecture.
I am building docker images on my Mac M2 machine (not sure if this is the problem), with this command:
docker buildx build --platform linux/amd64,linux/arm64 --push -t foobar .
It successfully builds images for both architectures, but the amd64 build skips the libc6-i386 package.
I tried directly specifying linux/amd64:
docker buildx build --platform linux/amd64 -t foobar .
And I can see the output:
[+] Building 26.1s (15/15) FINISHED
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 3.37kB 0.0s
=> [internal] load metadata for docker.io/library/tomcat:9 1.8s
=> [auth] library/tomcat:pull token for registry-1.docker.io 0.0s
=> CACHED [ 1/10] FROM docker.io/library/tomcat:9@sha256:39cb3ef7ca9... 0.0s
=> => resolve docker.io/library/tomcat:9@sha256:39cb3ef7ca9005... 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 7.51kB 0.0s
=> [ 2/10] RUN apt-get update 14.7s
=> [ 3/10] RUN apt-get install -y iputils-ping file 8.7s
=> [ 4/10] RUN if ["amd64" = "amd64"]; then apt-get install -y libc6-i386 ; fi 0.1s
=> [ 5/10] WORKDIR /usr/local/tomcat
It is already comparing "amd64" = "amd64", so why doesn't it install the libc6-i386 package?
Environments:
docker version
Client:
Cloud integration: v1.0.29
Version: 20.10.21
API version: 1.41
Go version: go1.18.7
Git commit: baeda1f
Built: Tue Oct 25 18:01:18 2022
OS/Arch: darwin/arm64
Context: default
Experimental: true
Server: Docker Desktop 4.15.0 (93002)
Engine:
Version: 20.10.21
API version: 1.41 (minimum version 1.12)
Go version: go1.18.7
Git commit: 3056208
Built: Tue Oct 25 17:59:41 2022
OS/Arch: linux/arm64
Experimental: false
containerd:
Version: 1.6.10
GitCommit: 770bd0108c32f3fb5c73ae1264f7e503fe7b2661
runc:
Version: 1.1.4
GitCommit: v1.1.4-0-g5fd4c4d
docker-init:
Version: 0.19.0
GitCommit: de40ad0
docker buildx version
github.com/docker/buildx v0.9.1 ed00243a0ce2a0aee75311b06e32d33b44729689
What's the problem here?
I think it's related to spacing.
When I try to run this command, I get this error:
$ if ["amd64" = "amd64"]; then echo hi ; fi
[amd64: command not found
When I change it to this command with two more spaces, it works:
$ if [ "amd64" = "amd64" ]; then echo hi ; fi
hi
So you should change your script to:
RUN if [ "$TARGETARCH" = "amd64" ]; then \
apt-get install -y libc6-i386 ; \
fi
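Equivalently (a variation of mine, not from the original answer), the POSIX test builtin avoids the bracket syntax and its whitespace pitfall altogether; a minimal sketch assuming the same TARGETARCH build argument:
ARG TARGETARCH
# test is the same command that [ aliases, with no closing bracket to space correctly
RUN if test "$TARGETARCH" = "amd64"; then \
      apt-get install -y libc6-i386 ; \
    fi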
Docker version: (screenshot redacted)
My base_image with multiple architectures: (screenshot redacted)
Dockerfile: (screenshot redacted)
I use FROM --platform=linux/arm64 ${base_image} to force the arm64 image, but it does not work. Then I checked the image on this machine and found that its architecture is amd64, so I suspect it has something to do with the local image.
So I forced the base_image to the arm64 variant and rebuilt:
Magic happens!!!
So, my question is: why does --platform on FROM not work? Why doesn't Docker do the equivalent of docker pull --platform, instead of depending on the image already on my local machine?
PS: I'm sorry, I had to redact some sensitive details, which may affect your reading.
-------------------------reproduce on windows PC-------------------------------------------------
fengyq@DESKTOP-918EPFF:~$ docker version
Client: Docker Engine - Community
Cloud integration: 1.0.14
Version: 20.10.6
API version: 1.41
Go version: go1.13.15
Git commit: 370c289
Built: Fri Apr 9 22:46:45 2021
OS/Arch: linux/amd64
Context: default
Experimental: true
Server: Docker Engine - Community
Engine:
Version: 20.10.6
API version: 1.41 (minimum version 1.12)
Go version: go1.13.15
Git commit: 8728dd2
Built: Fri Apr 9 22:44:56 2021
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.4.4
GitCommit: 05f951a3781f4f2c1911b05e61c160e9c30eaa8e
runc:
Version: 1.0.0-rc93
GitCommit: 12644e614e25b05da6fd08a38ffa0cfe1903fdec
docker-init:
Version: 0.19.0
GitCommit: de40ad0
fengyq@DESKTOP-918EPFF:~$ cat Dockerfile
FROM --platform=linux/arm64 golang:1.16-alpine
RUN go version
fengyq@DESKTOP-918EPFF:~$ docker pull golang:1.16-alpine
1.16-alpine: Pulling from library/golang
Digest: sha256:5616dca835fa90ef13a843824ba58394dad356b7d56198fb7c93cbe76d7d67fe
Status: Downloaded newer image for golang:1.16-alpine
docker.io/library/golang:1.16-alpine
fengyq@DESKTOP-918EPFF:~$ docker image inspect golang:1.16-alpine|grep Architecture -A2
"Architecture": "amd64",
"Os": "linux",
"Size": 301868964,
# It will not download the image specified in FROM instruction, see the `go version` output
fengyq@DESKTOP-918EPFF:~$ docker build --no-cache . -t test
Sending build context to Docker daemon 122.4kB
Step 1/2 : FROM --platform=linux/arm64 golang:1.16-alpine
---> 7642119cd161
Step 2/2 : RUN go version
---> Running in 7d020707da41
go version go1.16.15 linux/amd64
Removing intermediate container 7d020707da41
---> 26c1c4bf971e
Successfully built 26c1c4bf971e
Successfully tagged test:latest
Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them
The --platform parameter was introduced with BuildKit, and I tend to recommend BuildKit for most builds now:
$ DOCKER_BUILDKIT=1 docker build -t test-platform -f df.platform --progress plain --no-cache .
#1 [internal] load build definition from df.platform
#1 sha256:5c840b4d7475cccb1fc86fce5ee78796e600289df0bb6de6c73430d268e9389d
#1 transferring dockerfile: 38B done
#1 DONE 0.0s
#2 [internal] load .dockerignore
#2 sha256:1140f41a9b3ce804e3b52ff100b4cad659a81a19c059e58d6dc857c0e367c821
#2 transferring context: 34B done
#2 DONE 0.0s
#3 [internal] load metadata for docker.io/library/golang:1.16-alpine
#3 sha256:066c23f588b92c8811e28ac05785cd295f354b1e7f60b3e42c4008ec173536c2
#3 DONE 0.2s
#4 [1/2] FROM docker.io/library/golang:1.16-alpine@sha256:5616dca835fa90ef13a843824ba58394dad356b7d56198fb7c93cbe76d7d67fe
#4 sha256:d20c37de2e493c7729ae105da84b8907178eed8cc5d1a935db9a50e2370830c2
#4 CACHED
#5 [2/2] RUN go version
#5 sha256:158e1ccd4f04dd9d9e1d7cb1008671d8b25cf42ff017d0f2fce6cc08899a77f4
#5 0.529 go version go1.16.15 linux/arm64
#5 DONE 0.5s
#6 exporting to image
#6 sha256:e8c613e07b0b7ff33893b694f7759a10d42e180f2b4dc349fb57dc6b71dcab00
#6 exporting layers 0.0s done
#6 writing image sha256:3901f37e2cfca681676cd6c6043d3b88594664c44b1f4e873c183e0a200852d5 done
#6 naming to docker.io/library/test-platform done
#6 DONE 0.0s
With the classic builder, it will default to the already existing image on the host, and only pull a new one when the image doesn't exist or when you specify --pull:
$ DOCKER_BUILDKIT=0 docker build -t test-platform -f df.platform .
Sending build context to Docker daemon 23.04kB
Step 1/2 : FROM --platform=linux/arm64 golang:1.16-alpine
---> df1795ddbf41
Step 2/2 : RUN go version
---> Running in f53586180318
go version go1.16.8 linux/amd64
Removing intermediate container f53586180318
---> a250bd04bb4b
Successfully built a250bd04bb4b
Successfully tagged test-platform:latest
$ DOCKER_BUILDKIT=0 docker build -t test-platform --pull -f df.platform .
Sending build context to Docker daemon 23.04kB
Step 1/2 : FROM --platform=linux/arm64 golang:1.16-alpine
1.16-alpine: Pulling from library/golang
9b3977197b4f: Already exists
1a89e8eeedd5: Already exists
94645a83ff95: Already exists
7ed97893b138: Already exists
57a2943bcc95: Already exists
Digest: sha256:5616dca835fa90ef13a843824ba58394dad356b7d56198fb7c93cbe76d7d67fe
Status: Downloaded newer image for golang:1.16-alpine
---> 4a5e4084930e
Step 2/2 : RUN go version
---> [Warning] The requested image's platform (linux/arm64/v8) does not match the detected host platform (linux/amd64) and no specific platform was requested
---> Running in 5a0893533b89
go version go1.16.15 linux/arm64
Removing intermediate container 5a0893533b89
---> 2dd93e25714a
Successfully built 2dd93e25714a
Successfully tagged test-platform:latest
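As a quick sanity check (my addition, not part of the original answer), you can confirm which architecture a build actually produced the same way the question did, with docker image inspect:
# prints the architecture recorded in the image you just built, e.g. "arm64"
docker image inspect test-platform --format '{{.Architecture}}'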
https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#understand-build-context
After reading the above doc, I tested it on my local machine. My environment:
❯ sw_vers
ProductName: macOS
ProductVersion: 11.6.1
BuildVersion: 20G224
❯ docker version
Client:
Cloud integration: v1.0.22
Version: 20.10.13
API version: 1.41
Go version: go1.16.15
Git commit: a224086
Built: Thu Mar 10 14:08:44 2022
OS/Arch: darwin/amd64
Context: default
Experimental: true
Server: Docker Desktop 4.6.1 (76265)
Engine:
Version: 20.10.13
API version: 1.41 (minimum version 1.12)
Go version: go1.16.15
Git commit: 906f57f
Built: Thu Mar 10 14:06:05 2022
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.5.10
GitCommit: 2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc
runc:
Version: 1.0.3
GitCommit: v1.0.3-0-gf46b6ba
docker-init:
Version: 0.19.0
GitCommit: de40ad0
I understood the following statement in the documentation to mean "Even if the Dockerfile does not use files in the build context directly, they still affect the resulting image size":
Inadvertently including files that are not necessary for building an image results in a larger build context and larger image size. This can increase the time to build the image, time to pull and push it, and the container runtime size
So, the first test builds two images from the same Dockerfile, one with the large file in the build context and one without:
❯ cat Dockefile
FROM alpine:3.15.4
RUN ["echo", "a"]
❯ mkdir -p contexts/small contexts/large
❯ touch contexts/small/file
❯ dd if=/dev/urandom of=contexts/large/file bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.404337 s, 259 MB/s
❯ ll **/*/file --block-size=M
-rw-r--r-- 1 hansuk 100M 4 12 14:15 contexts/large/file
-rw-r--r-- 1 hansuk 0M 4 12 14:15 contexts/small/file
❯ tree -a .
.
├── Dockefile
└── contexts
├── large
│ └── file
└── small
└── file
3 directories, 3 files
❯ docker build --no-cache -f Dockefile -t small contexts/small
[+] Building 3.0s (6/6) FINISHED
=> [internal] load build definition from Dockefile 0.0s
=> => transferring dockerfile: 78B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/alpine:3.15.4 2.3s
=> CACHED [1/2] FROM docker.io/library/alpine:3.15.4@sha256:4edbd2beb5f78b1014028f4fbb99f3237d9561100b6881aa 0.0s
=> [2/2] RUN ["echo", "a"] 0.4s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:695a2b36e3b6ec745fddb3d499acda249f7235917127cf6d68c93cc70665a1dc 0.0s
=> => naming to docker.io/library/small 0.0s
Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them
❯ docker build --no-cache -f Dockefile -t large contexts/large
[+] Building 1.4s (6/6) FINISHED
=> [internal] load build definition from Dockefile 0.0s
=> => transferring dockerfile: 78B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/alpine:3.15.4 1.0s
=> CACHED [1/2] FROM docker.io/library/alpine:3.15.4@sha256:4edbd2beb5f78b1014028f4fbb99f3237d9561100b6881aa 0.0s
=> [2/2] RUN ["echo", "a"] 0.3s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:7c48ae10200ceceff83c6619e9e74d76b7799032c53fba57fa8adcbe54bb5cce 0.0s
=> => naming to docker.io/library/large 0.0s
Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them
❯ docker image list
REPOSITORY TAG IMAGE ID CREATED SIZE
large latest 7c48ae10200c 7 seconds ago 5.57MB
small latest 695a2b36e3b6 15 seconds ago 5.57MB
The sizes of the resulting images are exactly the same, and there is no difference in the size of 'transferring context'. (This message differs from the "Sending build context to Docker daemon ..." wording in the documentation, but I guess that is due to the BuildKit upgrade.)
Then I changed the Dockerfile to use the file for the second test:
❯ cat Dockefile
FROM alpine:3.15.4
COPY file /file
❯ docker build --no-cache -f Dockefile -t small contexts/small
# ... Eliding unnecessary logs
=> CACHED [1/2] FROM docker.io/library/alpine:3.15.4@sha256:4edbd2beb5f78b1014028f4fbb99f3237d9561100b6881aabbf5acce2c4 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 25B 0.0s
=> [2/2] COPY file /file 0.0s
❯ docker build --no-cache -f Dockefile -t large contexts/large
# ... Eliding unnecessary logs
=> [internal] load build context 4.1s
=> => transferring context: 104.88MB 4.0s
=> CACHED [1/2] FROM docker.io/library/alpine:3.15.4@sha256:4edbd2beb5f78b1014028f4fbb99f3237d9561100b6881aabbf5acce2c4 0.0s
=> [2/2] COPY file /file
❯ docker image list
REPOSITORY TAG IMAGE ID CREATED SIZE
large latest 426740a3fa3c 7 minutes ago 110MB
small latest 33a32d1e5aab 7 minutes ago 5.57MB
The resulting sizes differ, apparently by the size of the file. But I noticed that on further builds of the "large" image, no transferring occurred for the same file (context), even with the --no-cache option:
# Run a build again after the previous build
❯ docker build --no-cache -f Dockefile -t large contexts/large
# ... Build context is very small
=> [internal] load build context 0.0s
=> => transferring context: 28B 0.0s
=> CACHED [1/2] FROM docker.io/library/alpine:3.15.4@sha256:4edbd2beb5f78b1014028f4fbb99f3237d9561100b6881aabbf5acce2c4 0.0s
=> [2/2] COPY file /file 0.6s
# ...
# Recreate the file
❯ dd if=/dev/urandom of=contexts/large/file bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.374838 s, 280 MB/s
❯ docker build --no-cache -f Dockefile -t large contexts/large
# ... Then it transfers the new build context (file)
=> [internal] load build context 2.7s
=> => transferring context: 104.88MB 2.7s
=> CACHED [1/2] FROM docker.io/library/alpine:3.15.4@sha256:4edbd2beb5f78b1014028f4fbb99f3237d9561100b6881aabbf5acce2c4 0.0s
=> [2/2] COPY file /file 0.7s
To sum up my questions:
Does the size of the build context affect the resulting image only when something from the context, a file or a directory, is used in the Dockerfile? e.g. COPY build/context /here
Are there optimization options, or improvements made since the documentation was written?
Is the "build context" cached (by BuildKit or the runtime)? I mean the context itself, not the image layers.
Update
Sending the whole build context can be reproduced with the legacy docker builder:
# Create a 1GB file in `small' image build context
❯ dd if=/dev/urandom of=contexts/small/dummy bs=1M count=1024
# Measurement
❯ time sh -c "DOCKER_BUILDKIT=0 docker build -t small -f Dockerfile contexts/small/"
Sending build context to Docker daemon 1.074GB
Step 1/2 : FROM alpine:3.15.4
---> 0ac33e5f5afa
Step 2/2 : COPY file /file
---> 5b93097102e3
Successfully built 5b93097102e3
Successfully tagged small:latest
real 0m26.639s
user 0m2.809s
sys 0m4.557s
❯ time sh -c "DOCKER_BUILDKIT=0 docker build -t large -f Dockerfile contexts/large/"
Sending build context to Docker daemon 10.49MB
Step 1/2 : FROM alpine:3.15.4
---> 0ac33e5f5afa
Step 2/2 : COPY file /file
---> Using cache
---> 3e24b0f37389
Successfully built 3e24b0f37389
Successfully tagged large:latest
real 0m0.655s
user 0m0.227s
sys 0m0.161s
The question for now is how BuildKit optimizes it.
Does the size of the build context affect the resulting image only when something from the context, a file or a directory, is used in the Dockerfile? e.g. COPY build/context /here
Are there optimization options, or improvements made since the documentation was written?
The image will only increase when you copy files from the build context into the image. However, many Dockerfiles copy in entire directories without realizing everything those directories contain. Excluding specific files or subdirectories with a .dockerignore would shrink the image in those cases.
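For illustration (a sketch of mine with hypothetical paths, not taken from the question), a .dockerignore at the root of the build context keeps such files out of both the transferred context and any broad COPY:
❯ cat .dockerignore
# hypothetical entries for illustration: anything matched here is excluded
# from the context, and therefore from any COPY of a whole directory
contexts/large/file
**/*.log
.git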
A "build context" is cached (by buildkit or runtime)? I mean not the image layers.
BuildKit changes this process dramatically, and the documentation was written around the classic build tooling. With BuildKit, previous versions of the context are cached, and only the files explicitly copied into the image are fetched, using something similar to rsync to update its cache. Note that this doesn't apply when building in ephemeral environments, like a CI server that creates a new BuildKit cache per build.
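If you want to look at that cache (my suggestion, assuming a reasonably recent buildx, not something from the original answer), buildx can report BuildKit's cache records, which include local-context snapshots alongside the layer cache:
# disk usage of the BuildKit build cache; --verbose lists the individual cache records
❯ docker buildx du --verbose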
I have a fairly simple Dockerfile that I am trying to build locally before pushing it to Azure DevOps for CI/CD usage. The build is failing on the last line (the cause is not the purpose of this post), and I am trying to get the complete build process to "start over". No matter what I do, Docker continues to use cached layers (images??) despite me deleting everything and telling it not to use the cache.
FROM mcr.microsoft.com/dotnet/sdk:5.0-alpine AS build
WORKDIR /src
# Personal access token to access Artifacts feed
ARG ACCESS_TOKEN
ARG ARTIFACTS_ENDPOINT
# Install the Credential Provider to configure the access
RUN apk add bash
RUN wget -qO- https://aka.ms/install-artifacts-credprovider.sh | bash
# Configure the environment variables
ENV NUGET_CREDENTIALPROVIDER_SESSIONTOKENCACHE_ENABLED true
ENV VSS_NUGET_EXTERNAL_FEED_ENDPOINTS "{\"endpointCredentials\": [{\"endpoint\":\"${ARTIFACTS_ENDPOINT}\", \"password\":\"${ACCESS_TOKEN}\"}]}"
COPY . .
WORKDIR /src/Announcements.UI
RUN dotnet restore -s ${ARTIFACTS_ENDPOINT} -s https://api.nuget.org/v3/index.json
RUN dotnet build --no-restore -c Release -o /app/build
FROM build as publish
RUN dotnet publish --no-restore -c Release -o /app/publish
FROM nginx:alpine AS final
WORKDIR /usr/share/nginx/html
COPY --from=publish /app/publish/wwwroot .
COPY nginx.conf /etc/nginx/nginx.conf
I am executing the build using the following command
docker builder build --build-arg ARTIFACTS_ENDPOINT=https://pkgs.dev.azure.com/some-url --build-arg ACCESS_TOKEN=some-token --pull --no-cache --progress plain -f .\Announcements.UI\Dockerfile .
As you can see, I am specifying both the --no-cache and --pull arguments.
Both docker image ls --all and docker container ls --all show nothing.
I have also executed the following commands
docker builder prune
docker build prune
docker container prune
docker system prune --all
docker system prune --volumes --all
And maybe a few others that I cannot even remember, as I have been researching this issue. Unless I edit the first lines of the Dockerfile, no matter what I do, the output ends up looking similar to this:
#1 [internal] load build definition from Dockerfile
#1 sha256:cbdbb2aa8016147768da39aaa3cf4a88c42355ea185072e6aec7d1bf37e95a95
#1 transferring dockerfile: 32B done
#1 DONE 0.0s
#2 [internal] load .dockerignore
#2 sha256:935b1c32fc5d8819aedb2128b068f58f226088632b15af29f9cf4caac88bd889
#2 transferring context: 35B done
#2 DONE 0.0s
#3 [internal] load metadata for docker.io/library/nginx:alpine
#3 sha256:b001d263a254f0e4960d52c837d5764774ef80ad3878c61304052afb6e0e9af2
#3 ...
#4 [internal] load metadata for mcr.microsoft.com/dotnet/sdk:5.0-alpine
#4 sha256:84368650d7715857ce3c251a792ea9a04a39b0cbc02b73ef90801d4be2c05b0f
#4 DONE 0.3s
#3 [internal] load metadata for docker.io/library/nginx:alpine
#3 sha256:b001d263a254f0e4960d52c837d5764774ef80ad3878c61304052afb6e0e9af2
#3 DONE 0.5s
#11 [internal] load build context
#11 sha256:123b3b80e7c135f4e2816bbcb5f2d704566ade8083a56fd10e8f2c6825b85db4
#11 transferring context: 6.41kB 0.0s done
#11 DONE 0.1s
#10 [build 4/8] RUN wget -qO- https://aka.ms/install-artifacts-credprovider.sh | bash
#10 sha256:d9ba90bfb15e2bafe5ba7621cd63377af7ec0ae7b30029507d2f41d88a17b7ed
#10 CACHED
#9 [build 3/8] RUN apk add bash
#9 sha256:a05883759b40f77aae0edcc00f1048f10b8e8c066a9aadd59a10fc879d4fd4df
#9 CACHED
#12 [build 5/8] COPY . .
#12 sha256:46509fe5fec5bb4235463dc240239f2529e48bcdc26909f0fb6861c97987ae93
#12 CACHED
#15 [build 8/8] RUN dotnet build --no-restore -c Release -o /app/build
#15 sha256:5f43c86c41c36f567ea0679b642bc7a5a8adc889d090b9b101f11440d7f22699
#15 CACHED
...
As you can see, some commands are still coming from cached results.
My docker version info is
docker version
Client: Docker Engine - Community
Cloud integration: 1.0.7
Version: 20.10.2
API version: 1.41
Go version: go1.13.15
Git commit: 2291f61
Built: Mon Dec 28 16:14:16 2020
OS/Arch: windows/amd64
Context: default
Experimental: true
Server: Docker Engine - Community
Engine:
Version: 20.10.2
API version: 1.41 (minimum version 1.12)
Go version: go1.13.15
Git commit: 8891c58
Built: Mon Dec 28 16:15:28 2020
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.4.3
GitCommit: 269548fa27e0089a8b8278fc4fc781d7f65a939b
runc:
Version: 1.0.0-rc92
GitCommit: ff819c7e9184c13b7c2607fe6c30ae19403a7aff
docker-init:
Version: 0.19.0
GitCommit: de40ad0
Could be related to this bug: https://github.com/moby/buildkit/issues/1939
Try switching off BuildKit by setting the environment variable DOCKER_BUILDKIT=0.
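For example (my sketch, assuming a PowerShell session since the client above reports windows/amd64), the command from the question with BuildKit disabled would look like this:
$env:DOCKER_BUILDKIT = "0"
# --progress is a BuildKit-only flag, so it is dropped from the original command
docker builder build --build-arg ARTIFACTS_ENDPOINT=https://pkgs.dev.azure.com/some-url --build-arg ACCESS_TOKEN=some-token --pull --no-cache -f .\Announcements.UI\Dockerfile .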
I have built an image with a simple ASP.NET Core 2.0 app in Alpine Docker. When I run the Linux container from Windows PowerShell, everything is fine. But if I run the Linux container on a Linux server over PuTTY, the container immediately exits with code 139. I used the following command:
docker run -it -d -p xxxx:80 imageName
output:
510e77c3899696823d4f1fba135cbaff3f8b4232deb1a73d41963fc3a8058d81
Run without -d:
Hosting environment: Production
Content root path: /app
Now listening on: http://[::]:80
Application started. Press Ctrl+C to shut down.
OS: Win 10
Docker Version:
Client:
Version: 17.12.1-ce
API version: 1.35
Go version: go1.9.4
Git commit: 7390fc6
Built: Tue Feb 27 22:15:20 2018
OS/Arch: linux/amd64
Server:
Engine:
Version: 17.12.1-ce
API version: 1.35 (minimum version 1.12)
Go version: go1.9.4
Git commit: 7390fc6
Built: Tue Feb 27 22:17:54 2018
OS/Arch: linux/amd64
Experimental: false
Is there a way to keep the container alive? Did I miss anything important? Any ideas would be appreciated!
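One diagnostic note (mine, not part of the question): exit code 139 is 128 + 11, i.e. the process died from SIGSEGV, so the app is segfaulting on that host rather than shutting down cleanly. The recorded exit state can be read back from the stopped container, for example:
# the ID is the one printed by the detached docker run above
docker inspect --format '{{.State.ExitCode}} {{.State.OOMKilled}}' 510e77c38996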
Update: Dockerfile:
#build image
FROM microsoft/dotnet:2.1-sdk as builder
WORKDIR /app
COPY *.csproj .
RUN dotnet restore
COPY . .
RUN dotnet publish --output /out/ --configuration Release
# runtime image
FROM microsoft/dotnet-nightly:2.1-runtime-alpine AS runtime
RUN apk add --no-cache libuv \
&& ln -s /usr/lib/libuv.so.1 /usr/lib/libuv.so
ENV ASPNETCORE_URLS http://+:80
WORKDIR /app
COPY --from=builder /out .
ENTRYPOINT ["dotnet", "WebAppHelloWorld.dll"]