I am packaging a Rust app into a Docker image to deploy to my server. I found the resulting image to be more than 1 GB, larger than any of my other apps using Java or Python. Why is the Rust Docker image so huge? I checked the layers and found that the cargo build step adds more than 400 MB.
FROM rust:1.54
LABEL maintainer="jiangtingqiang@gmail.com"
ENV ROCKET_ADDRESS=0.0.0.0
ENV ROCKET_PORT=11014
WORKDIR /app
COPY . .
RUN rustup default stable
RUN cargo build
CMD ["cargo", "run"]
Is it possible to make the rust docker image smaller?
The rust base image is definitely not 1 GB. From Docker Hub we can see that the official images are far smaller. Your image is 1 GB because it contains all the intermediate build artifacts, which are not necessary for running the application; just check the size of the target folder on your PC.
Rust image sizes:
+---------------+----------------+------------------+
| Digest | OS/ARCH | Compressed Size |
+---------------+----------------+------------------+
| 99d3d924303a | linux/386 | 265.43 MB |
| 852ba83a6e49 | linux/amd64 | 196.74 MB |
| 6eb0fe2709a2 | linux/arm/v7 | 256.59 MB |
| 2a218d86ec85 | linux/arm64/v8 | 280.22 MB |
+---------------+----------------+------------------+
The rust Docker image contains the compiler, which you need to build your app, but you don't have to package it with your final image. Nor do you have to package all the temporary artifacts generated by the build process.
Solution
In order to reduce the final production image, you have to use a multi-stage Docker build:
The first stage builds the application.
The second stage discards all the irrelevant build artifacts and copies in only the compiled binary:
# Build stage
FROM rust:1.54 as builder
WORKDIR /app
COPY . /app
RUN cargo build --release
# Prod stage
FROM gcr.io/distroless/cc
COPY --from=builder /app/target/release/app-name /
CMD ["./app-name"]
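Assuming app-name matches the binary name from your Cargo.toml, building and checking the result might look like this (a sketch; it needs a running Docker daemon, and the tag is a placeholder):

```shell
# Build the multi-stage image; only the final distroless stage gets tagged.
docker build -t myapp:slim .
# Compare sizes: the slim image should be tens of MB instead of ~1 GB.
docker images myapp:slim
```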
Related
I am trying to build the following image out of a Dockerfile.
Dockerfile source#
https://github.com/AykutSarac/jsoncrack.com/blob/main/Dockerfile
Docker host machine spec:
Macbook Pro M1 chip
I checked the following post:
standard_init_linux.go:178: exec user process caused "exec format error"
I added these extra lines at the top:
#!/bin/bash
# Build for AMD64
# Builder
FROM node:14-buster as builder
WORKDIR /src
COPY . /src
RUN yarn install --legacy-peer-deps
RUN yarn run build
# App
FROM nginxinc/nginx-unprivileged
COPY --from=builder /src/out /app
COPY default.conf /etc/nginx/conf.d/default.conf
And then I created the image using the following command:
docker build -t username/jsoncrack-1-amd64 . --no-cache=true --platform=linux/amd64
The image still shows as arm when pushed, not amd64.
Any ideas on how to get that image built as Linux/AMD64 out of that Dockerfile?
Note: I am able to create other Docker images on the M1 Apple MacBook without issues; the issue is only with this Dockerfile.
Thanks
Actually, I had to delete the older images matching the build tag; otherwise the older ARM version ended up being pushed instead of the AMD64 one.
Everything is working as expected with the steps above (just make sure to clean your locally stored images).
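An alternative that avoids stale-tag mix-ups entirely is building and pushing in one step with buildx (a sketch; buildx ships with recent Docker versions, and the tag is a placeholder):

```shell
# Build for linux/amd64 on an Apple Silicon host and push immediately,
# so no locally cached arm64 image can shadow the result.
docker buildx build --platform linux/amd64 -t username/jsoncrack-1-amd64 --push .
```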
I've been searching the net for how to properly structure a Dockerfile to build the cleanest possible image, but unfortunately no good approach has emerged. That's why I'm asking here.
This is my context:
I'm developing a .NET Core 3 web API
I'm using the template from VS2019
I'm using the original Dockerfile with some modifications
Here is my Dockerfile :
#See https://aka.ms/containerfastmode to understand how Visual Studio uses this Dockerfile to build your images for faster debugging.
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim AS base
RUN apt-get update;apt-get install libfontconfig1 -y
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
WORKDIR /src
COPY ["Src/API/API.csproj", "Src/API/"]
RUN dotnet restore "Src/API/API.csproj"
COPY . .
WORKDIR "/src/Src/API"
RUN dotnet build "API.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "API.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "API.dll"]
There is my solution structure :
.
| .dockerignore
| mySolution.sln
+---Docs
+---Src
| \---API
| | API.csproj
| | API.csproj.user
| | appsettings.Development.json
| | appsettings.json
| | appsettings.Staging.json
| | Dockerfile
| | Dockerfile.original
| | Program.cs
| | Startup.cs
| +---.config
| | dotnet-tools.json
| +---bin
| +---Controllers (source files)
| +---Data (source files)
| +---Database (source files)
| +---Dtos (source files)
| +---Helpers (source files)
| +---Mail (source files)
| +---Migrations (EF source files)
| +---Models (source files)
| +---obj
| +---PDF (source files)
| +---Properties
| | | launchSettings.json
| +---Services (source files)
| \---wwwroot
| +---Templates
| \---uploads
\---Tests
As you can see, if I want to build my image without VS2019, I have to put the Dockerfile in the root directory (where the .sln file is).
For now, if I use this Dockerfile, Docker will copy all files and directories from the Src directory, including the bin and obj directories, and the wwwroot directory, which can contain some files from my upload tests.
If I check the file structure inside my container from Visual Studio, I can see that I don't need all of those files; only my sources are needed to build and deploy my app.
How can I improve my Dockerfile in order to produce the cleanest possible image?
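One common way to keep bin, obj, and test uploads out of the build context is a .dockerignore file at the context root. A minimal sketch for the layout above (the entries are illustrative, not a definitive list):

```
**/bin/
**/obj/
Src/API/wwwroot/uploads/
Docs/
Tests/
```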
Some tips:
For security and portability, use alpine instead of buster-slim in the final image.
In the final image, add USER nobody to run as non-root.
That will require using ports above 1024.
To control the build context, use -f: you can keep the Dockerfile next to the project while using the solution root as the context, which also works well with CI/CD pipelines.
Run your unit tests in a stage before the final one, so the build stops if they fail.
Think about secrets; the right approach depends on where you will run your container, since keeping them in app configs isn't recommended.
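The context-control tip might look like this in practice (a sketch; paths assume the solution layout from the question):

```shell
# Run from the solution root: the root is the build context, while -f
# points at the Dockerfile kept next to the project.
docker build -f Src/API/Dockerfile -t api:latest .
```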
I'm building a multistage Docker image of a golang microservice and I want to make it very thin using busybox as base image to run the final executable.
The image is correctly built, but when I run it I get this error:
standard_init_linux.go:211: exec user process caused "no such file or directory"
I'm working on my Ubuntu laptop, so this error has nothing to do with the Windows OS as many other questions report.
This is my image.
# build stage
FROM golang:1.15.3 AS build-stage
RUN mkdir /build
ADD . /build/
WORKDIR /build
RUN go mod download
RUN go test ./...
RUN go build -o goapp .
# final stage
FROM busybox
WORKDIR /app
COPY --from=build-stage /build/goapp /app/
CMD ["./goapp"]
A very simplified version of my project folder could be:
project
├── Dockerfile
├── go.mod
├── go.sum
├── main.go
└── other-packages
You are building your app with CGO_ENABLED=1 (Go's default when a C toolchain is available) and transplanting that executable into a Docker image that has no compatible library support (glibc, DNS resolver, etc.).
To ensure your build does not rely on any external libraries, statically linking all dependencies so it can be deployed to a scratch or busybox image, disable CGO during your build stage:
RUN CGO_ENABLED=0 go build -o goapp .
I guess there's a difference between the libc of the two images.
As described on the busybox image page, there are several libc variants, distinguished by image tags.
The libc of golang:1.15.3 is glibc (it is built FROM the corresponding debian:buster image), so you should use busybox:glibc in your final stage.
The default busybox:latest uses uclibc, which is incompatible; compare the sha256 digests of busybox:latest and busybox:uclibc.
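To see which libc a binary actually needs before picking a base image, one can inspect its dynamic dependencies in the build stage (a sketch; goapp is the binary from the Dockerfile above):

```shell
# A CGO-enabled build lists libc entries here; a static build prints
# "not a dynamic executable" and runs fine on scratch or any busybox.
ldd ./goapp
```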
I have a Docker Swarm with multiple Raspberry Pis (Raspbian Lite). I'm trying to run
ASP.NET Core containers on them as a service/stack.
I have following Dockerfile:
FROM microsoft/dotnet:2.0-sdk AS build
WORKDIR /source
COPY WebApi.csproj .
RUN dotnet restore
COPY . .
RUN dotnet publish -c Release -o /publish -r linux-arm /p:PublishWithAspNetCoreTargetManifest="false"
FROM microsoft/dotnet:2.0-runtime-deps-stretch-arm32v7
ENV ASPNETCORE_URLS http://+:80
WORKDIR /app
COPY --from=build /publish .
ENTRYPOINT [ "./WebApi" ]
What works:
Building and pushing the container image on my Win10 laptop (since the SDK image is only available for x64), then running the container on a single Raspberry node with docker run:
docker run --rm -it myrepo/myimage
Hosting environment: Production
Content root path: /app
Now listening on: http://[::]:80
Application started. Press Ctrl+C to shut down.
(so the container can run and it's no arm vs x64 issue)
What doesn't work:
docker service create myrep/myimage
2n4ahyhgb3ju5rvo97tekh9vg
overall progress: 0 out of 1 tasks
1/1: no suitable node (unsupported platform on 3 nodes)
And of course docker stack deploy.
If I inspect the created image (or even the arm32v7 image from Microsoft), it just states amd64 instead of arm:
"Architecture": "amd64",
"Os": "linux",
So is it a case of wrong metadata, which is only checked by Docker when you use swarm? How can I change that and make it run? Should I build a base image on my own? Am I missing something?
Edit 1
I tried it with the .NET Core 2.1 images and saw the same behavior.
With the latest .NET Core 2.1 Preview 2 images, released yesterday, it finally works.
Update (add Dockerfile):
This simple one should do the trick:
FROM microsoft/dotnet:2.1-sdk-stretch-arm32v7 AS builder
ENV DOTNET_CLI_TELEMETRY_OPTOUT 1
WORKDIR /src
COPY *.csproj .
RUN dotnet restore
COPY . .
RUN dotnet publish -c Release -o /publish -r linux-arm
FROM microsoft/dotnet:2.1-runtime-deps-stretch-slim-arm32v7
WORKDIR /app
COPY --from=builder /publish .
ENTRYPOINT [ "./MyService" ]
You can even build your image on your Raspberry Pi now. But be aware it's still a (nicely working) preview.
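To verify the metadata problem from the question, one can print the platform recorded in an image's config (a sketch; the image name is a placeholder):

```shell
# For a correct arm32v7 image this should print linux/arm.
docker image inspect myrepo/myimage --format '{{.Os}}/{{.Architecture}}'
```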
I made a Docker container which is fairly large. When I commit the container to create an image, the image is about 7.8 GB. But when I export the container (not save the image!) to a tarball and re-import it, the image is only 3 GB. Of course the history is lost, but that's OK for me, since the image is "done" in my opinion and ready for deployment.
How can I flatten an image/container without exporting it to the disk and importing it again? And: Is it a wise idea to do that or am I missing some important point?
Now that Docker has released the multi-stage builds in 17.05, you can reformat your build to look like this:
FROM buildimage as build
# your existing build steps here
FROM scratch
COPY --from=build / /
CMD ["/your/start/script"]
The result will be your build environment layers are cached on the build server, but only a flattened copy will exist in the resulting image that you tag and push.
Note, you would typically reformulate this to have a complex build environment and only copy over a few directories. Here's an example with Go to make a single binary image from source code and a single build command without installing Go on the host and compiling outside of docker:
$ cat Dockerfile
ARG GOLANG_VER=1.8
FROM golang:${GOLANG_VER} as builder
WORKDIR /go/src/app
COPY . .
RUN go-wrapper download
RUN go-wrapper install
FROM scratch
COPY --from=builder /go/bin/app /app
CMD ["/app"]
The go file is a simple hello world:
$ cat hello.go
package main
import "fmt"
func main() {
fmt.Printf("Hello, world.\n")
}
The build creates both environments, the build environment and the scratch one, and then tags the scratch one:
$ docker build -t test-multi-hello .
Sending build context to Docker daemon 4.096kB
Step 1/9 : ARG GOLANG_VER=1.8
--->
Step 2/9 : FROM golang:${GOLANG_VER} as builder
---> a0c61f0b0796
Step 3/9 : WORKDIR /go/src/app
---> Using cache
---> af5177aae437
Step 4/9 : COPY . .
---> Using cache
---> 976490d44468
Step 5/9 : RUN go-wrapper download
---> Using cache
---> e31ac3ce83c3
Step 6/9 : RUN go-wrapper install
---> Using cache
---> 2630f482fe78
Step 7/9 : FROM scratch
--->
Step 8/9 : COPY --from=builder /go/bin/app /app
---> Using cache
---> 5645db256412
Step 9/9 : CMD /app
---> Using cache
---> 8d428d6f7113
Successfully built 8d428d6f7113
Successfully tagged test-multi-hello:latest
Looking at the images, only the single binary is in the image being shipped, while the build environment is over 700MB:
$ docker images | grep 2630f482fe78
<none> <none> 2630f482fe78 6 days ago 700MB
$ docker images | grep 8d428d6f7113
test-multi-hello latest 8d428d6f7113 6 days ago 1.56MB
And yes, it runs:
$ docker run --rm test-multi-hello
Hello, world.
From Docker 1.13 on, you can use the --squash flag.
Before version 1.13:
To my knowledge, you cannot do this using the Docker API. docker export and docker import are designed for this scenario, as you yourself already mention.
If you don't want to save to disk, you can pipe the output stream of export into the input stream of import. I have not tested this, but try:
docker export red_panda | docker import - exampleimagelocal:new
Take a look at docker-squash
Install with:
pip install docker-squash
Then, if you have an image, you can squash all its layers into one with:
docker-squash -f <nr_layers_to_squash> -t new_image:tag existing_image:tag
A quick one-liner that I find useful for squashing all layers:
docker-squash -f $(($(docker history $IMAGE_NAME | wc -l | xargs)-1)) -t ${IMAGE_NAME}:squashed $IMAGE_NAME
Build the image with the --squash flag:
https://docs.docker.com/engine/reference/commandline/build/#squash-an-images-layers---squash-experimental
Also consider mopping up unneeded files, such as the apt cache:
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*