Create binaries using Go inside a Docker container

I'm trying to create cross-platform binaries using go build inside a container. When I try this on my own system, it compiles quickly; when I try the same thing inside the Docker container, it takes far longer (roughly 10 times slower). That statement may not describe my problem in enough detail, but this is what I'm trying to do.
My code snippet:
cmd := exec.Command("bash", "-c", "CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -a -installsuffix cgo -o sample")
cmd.Env = append(os.Environ(), []string{"CGO_ENABLED=0", "GOOS=linux", "GOARCH=amd64"}...)
cmd.Dir = filepath.Join(rulePath, taskName)
_, err := cmd.Output()
As you can see, I pass the Go environment variables both inline in the command string and via cmd.Env. What am I doing wrong here?

If I understand correctly, try setting CGO_ENABLED=1.
If you have any questions, let me know.
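Separately, the environment only needs to be supplied once. A minimal sketch of the same build without the bash wrapper and the duplicated assignments (rulePath and taskName are the variables from the question; the values keep the question's CGO_ENABLED=0, so adjust per the advice above):
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

// buildTask cross-compiles the package in rulePath/taskName.
func buildTask(rulePath, taskName string) error {
	// Invoke the go tool directly; no bash wrapper is needed.
	// Note: the original command's -a flag forces a full rebuild of
	// every package, which by itself makes the build much slower.
	cmd := exec.Command("go", "build", "-o", "sample", ".")
	cmd.Dir = filepath.Join(rulePath, taskName)
	// Supply the cross-compile environment once, via cmd.Env.
	cmd.Env = append(os.Environ(), "CGO_ENABLED=0", "GOOS=linux", "GOARCH=amd64")
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("go build failed: %v\n%s", err, out)
	}
	return nil
}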


Share go modules with docker builder stage

[EDIT - added clarity]
Here is my current env setup:
$GOPATH = /home/fzd/go
projectDir = /home/fzd/go/src/github.com/fzd/amazingo
amazingo has a go.mod file that lists several (let's say thousands) dependencies.
So far, I have built with go build -o bin/amazingo cmd/main.go, but I want to share this with other people and have a build command that is environment-independent. Building locally has the advantage of downloading each dependency only once -- they are then reused from ${GOPATH}/pkg/mod, which saves time and bandwidth.
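(For reference, go env GOMODCACHE prints the cache location the toolchain actually uses; with the GOPATH above it should resolve to:
$ go env GOMODCACHE
/home/fzd/go/pkg/mod
)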
I want to build in a multistage docker image, so I go with
> cat /home/fzd/go/src/github.com/fzd/amazingo/Dockerfile
FROM golang:1.17 as builder
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o /bin/amazingo cmd/main.go
FROM alpine:latest
COPY --from=builder /bin/amazingo /amazingo
ENTRYPOINT ["/amazingo"]
As you can expect, the builder is "naked" when it starts, so it has to download all my dependencies when I run docker build -t amazingo:0.0.1 . -- and it will do so every time I call it, which can be several times a day.
Fortunately, I already have most of these dependencies on my disk. I would be happy to share these files (that are located in my $GOPATH/pkg/mod) with the builder, and help it build faster on my machine.
So the question is: how can I share my ${GOPATH} (or ${GOPATH}/pkg/mod) with the builder?
I tried adding the following to the builder
ARG SRC_GOPATH
COPY ${SRC_GOPATH} /go
and called docker build --build-arg SRC_GOPATH=${GOPATH} -t amazingo:0.0.1 ., but it wasn't good enough: COPY can only read paths inside the build context, so I got an error (COPY failed: file not found in build context or excluded by .dockerignore: stat home/fzd/go: file does not exist)
I hope this update brings a bit more clarity to the problem.
=======
I have a project with a go.mod file.
I want to build that project using a multistage docker image.
(this article is a perfect example)
The issue is that I have "lots" of dependencies, and each of them will be downloaded inside my Docker builder stage.
Is there a way to "share" my GOPATH/pkg/mod with the docker build... command (in effect, a local cache)?
Your end goal isn't completely clear, but the way that I use a multistage build would look something like this for a (dirt-simple) Go app, assuming that you ultimately want the Docker container to run your Go app. You will need to get your source into the build container somehow as well; that is not shown here:
FROM golang:1.17.2-alpine3.14 as builder
WORKDIR /my/app/source/dir
RUN go get && go build -o /path/to/my/app/binary
FROM alpine:3.14 AS release
# install runtime deps, if any
# create necessary files and folders, if any
COPY --from=builder /path/to/my/app/binary /usr/local/bin
ENTRYPOINT /usr/local/bin/binary --options
In this way, the source of your application and all dependencies will not be present in the released image, only the compiled binary.
Of course you don't have to specify an output path for that; I think it just makes this example a little clearer. And of course you can use whatever base image(s) you want; I'm treating this as though you don't need the Go toolchain in your release image.
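That said, the question's actual pain point (re-downloading modules on every build) is usually addressed by letting Docker's layer cache hold the module download. A sketch, reusing the paths from the question: copy go.mod and go.sum by themselves and run go mod download in its own layer, which is rebuilt only when those two files change:
FROM golang:1.17 as builder
WORKDIR /src
COPY go.mod go.sum ./
# Cached until go.mod or go.sum change, so the dependencies are
# downloaded once, not on every build.
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o /bin/amazingo cmd/main.go

FROM alpine:latest
COPY --from=builder /bin/amazingo /amazingo
ENTRYPOINT ["/amazingo"]
With BuildKit enabled, RUN --mount=type=cache,target=/go/pkg/mod gives a similar effect without baking the module cache into an image layer. Sharing the host's ${GOPATH}/pkg/mod directly isn't possible with plain docker build, because COPY can only read from the build context.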

Can I run scripts from a docker build context without a copy?

I want to build on top of a Windows Docker container by installing a couple of programs. The files total 0.5 GB and I want to keep the layers as small as possible. I am hoping I can run the setup files from the build context, and then have the build context swept away at the end so I don't have a needless copy of the source files for the setup.exe embedded in my container layers. However, I have not found an example where this is the case. Instead I mostly see people run a COPY command to a temporary build folder, run their setup, then remove the folder. Won't those files still be in the container layers, because the COPY command creates a new layer when it's done?
I don't know if the container can see the build context directly. I was hoping for some magical folder filled with the build-context files so I could run a script against it, but I haven't found anything.
It seems like the alternative is to create a private file server and perform a RUN that downloads the files from that server, unpacks them, runs the install, and removes them (all as one Docker step). I understand this would make the build easier for others to rerun, but I'm not convinced we'll need to rerun it. It's not likely to change, as the container will build patches for a legacy application. It just seems like a lot to host files on a private, public-facing server for something that will get called once every couple of years, if ever.
So are these my two options?
Make a container with needless copies of source files embedded within
Host the files on a private file server and download/install/remove them
Or am I missing another option or point about how the containers work?
It's a long shot, as Windows is tricky with its filesystem, but you could do it this way:
1. In your Dockerfile, COPY the installation files, run the install, then RUN del ... to remove them
2. Build your image: docker build -t my-large-image:latest .
3. Run your image: docker run --name my-large-container my-large-image:latest
4. Stop the container
5. Export the container filesystem: docker export my-large-container > my-large-container.tar
6. Import the filesystem into a new, flattened image: cat my-large-container.tar | docker import - my-small-image
The caveat is that you need to run the container once, which might not be what you want. Also, I haven't tested this with Windows containers, sorry.
I usually do the download or copy in one step, then in the next step I do the silent installation and remove the installer.
# escape=`
FROM mcr.microsoft.com/dotnet/framework/wcf:4.8-windowsservercore-ltsc2016
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
ADD https://download.visualstudio.microsoft.com/download/pr/6afa582f-fa26-4a73-8cb9-194321e85f8d/ecea51ead62beb7acc73ad9799511ffdb3083ad384fe04ec50e2cbecfb426482/VS_RemoteTools.exe VS_RemoteTools_x64.exe
RUN Start-Process .\VS_RemoteTools_x64.exe -ArgumentList @('/install','/quiet','/norestart') -NoNewWindow -Wait; `
    Remove-Item -Path C:/VS_RemoteTools_x64.exe -Force;
But otherwise, I don't think you can mount a custom volume while the image is being built.
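For what it's worth, on Linux builders BuildKit can do exactly this: RUN --mount=type=bind exposes the build context to a single RUN step without committing anything to a layer. A sketch (install.sh is a hypothetical installer script in the context; BuildKit support for Windows containers is limited, so this may not apply to the asker's case):
# syntax=docker/dockerfile:1
FROM ubuntu:22.04
# The build context is mounted (read-only) at /ctx for this step only;
# nothing under /ctx ends up in an image layer.
RUN --mount=type=bind,target=/ctx /ctx/install.sh --silent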
I didn't find a satisfactory answer to this. Docker seems designed only for the modern era and assumes you'll be able to download what you need via scripts and tools hitting APIs and file servers. The easiest option I found, and the one I eventually went with, was to host the files on a private file server or service (in my case, AWS S3).
I really wish there were a way to have files hosted by the Docker daemon in some way, e.g. if it acted like a temporary server that you could fetch data from over HTTP instead of needing to COPY the files and create a layer. Alas, I found no such feature.
Taking this route made my container about a GB smaller.

How to add /usr/bin/host to a scratch Dockerfile consisting of a Go binary?

I would like this very simple Go package to run in a Docker container using a scratch (or minimal) image.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := "host"
	args := []string{"-t", "ns", "google.com"}
	output, err := exec.Command(cmd, args...).Output()
	if err != nil {
		fmt.Println(err)
	}
	fmt.Println(string(output))
}
My original Dockerfile is as follows:
FROM scratch
ADD gohost /
CMD ["/gohost"]
This results in exit code 0 and the message exec: "host": executable file not found in $PATH.
I figure this means I need to ADD the /usr/bin/host binary and set ENV on the added host, but all the random combinations of this I've tried have failed.
I've also tried simply changing cmd := "host" to point to the host binary I added (cmd := "/host"), but it seems that isn't a possibility either.
Also, I'm not sure if it's relevant, but it's important the Go binary is built with env GOOS=linux GOARCH=arm64 go build.
If you're depending on calling external binaries, either you need to make sure they're statically compiled and included in your image, or you can't use a FROM scratch image; a merely very small base image might work better.
In many cases there is a Go-native library to do what you need, and you might see if you can change your code to not need an external process. For instance, instead of calling host as an external process, you might call net.LookupHost instead.
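For the question's specific host -t ns lookup, the in-process equivalent would be net.LookupNS; a sketch:
package main

import (
	"fmt"
	"net"
)

func main() {
	// NS lookup in-process; no external `host` binary is needed,
	// so this works in a FROM scratch image.
	records, err := net.LookupNS("google.com")
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, ns := range records {
		fmt.Println(ns.Host)
	}
}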
Some common tools are available in the busybox image, including an nslookup, but not a host or a dig; usually, though, these are the sorts of things where an in-process library call will do better. Otherwise you need a lightweight base Linux distribution like Alpine where you can add software; your Dockerfile would be
FROM alpine
RUN apk add bind-tools
COPY gohost /bin
CMD ["gohost"]
The Alpine base image advertises itself as being only 5 MB, so your image will be bigger than a FROM scratch image but still not huge.

How do I build a static Go binary for the Docker Alpine image?

I want to build a Go 1.9.2 binary and run it on the Docker Alpine image. The Go code I wrote doesn't call any C code. It also uses the net package. Unfortunately it hasn't been as simple as it sounds, as Go doesn't always produce fully static binaries. When I try to execute the binary I often get cryptic messages about why it didn't run. There's quite a bit of information on the internet about this, but most of it ends up with people using trial and error to make their binaries work.
So far I have found the following works, however I don't know why, if it is optimal or if it could be simplified.
env GOOS=linux GOARCH=amd64 go install -v -a -tags netgo -installsuffix netgo -ldflags "-linkmode external -extldflags -static"
What is the canonical way (if it exists) to build a Go binary that will run on the Alpine 3.7 Docker image? I am happy to use apk to install packages to the Alpine image if that would make things more efficient/easier. (I believe I need to install ca-certificates anyway.)
Yes, you often need to add extra resource files like certificates, especially when using a minimal distribution like Alpine, but the fact that you can run Go applications on such small distributions is often also seen as an advantage.
To add the certificates, this is a really good explanation of how to do it on a scratch container:
https://blog.codeship.com/building-minimal-docker-containers-for-go-applications/
If you would rather stick with alpine then you can install this package to get them:
https://pkgs.alpinelinux.org/package/v3.7/main/x86/ca-certificates
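For reference, a sketch of how those two pieces typically fit together (this assumes the project's dependencies are vendored or stdlib-only, since Go 1.9 predates modules; with CGO_ENABLED=0 the net package falls back to the pure-Go resolver, so no -linkmode/-extldflags juggling is needed):
FROM golang:1.9.2 AS build
WORKDIR /go/src/app
COPY . .
# CGO_ENABLED=0 removes the libc dependency entirely, so the
# resulting binary runs on Alpine (musl) as-is.
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o /app .

FROM alpine:3.7
# Root CA bundle for outbound TLS connections.
RUN apk add --no-cache ca-certificates
COPY --from=build /app /usr/local/bin/app
ENTRYPOINT ["/usr/local/bin/app"]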

Selecting different code branches when using a shared base image in Docker

I am containerising a codebase that serves multiple applications. I have created three images:
app-base:
FROM ubuntu
RUN apt-get install package
COPY ./app-code /code-dir
...
app-foo:
FROM app-base:latest
RUN foo-specific-setup.sh
and app-buzz, which is very similar to app-foo.
This currently works, except that I want to be able to build versions of app-foo and app-buzz for specific code branches and versions. It's easy to do that for app-base and tag it appropriately, but app-foo and app-buzz can't dynamically select that tag; they are always pinned to app-base:latest.
Ultimately I want this build process automated by Jenkins. I could dynamically rewrite the Dockerfile, or drop the three-image setup and keep two nearly-but-not-quite identical Dockerfiles, one per app, that would need to be kept in sync manually (later increasing to 4 or 5). Each of those solutions has obvious drawbacks, however.
I've seen lots of discussion in the past about things such as an INCLUDE statement or dynamic tags. None seemed to come to anything.
Does anyone have a working, clean(ish) solution to this problem? As long as it means Dockerfile code can be shared across images, I'd be happy. If it also means that the shared layers of images don't need to be rebuilt for each app, then even better.
You could still use build args to do this.
Dockerfile:
FROM ubuntu
ARG APP_NAME
RUN echo $APP_NAME-specific-setup.sh >> /root/test
ENTRYPOINT cat /root/test
Build:
docker build --build-arg APP_NAME=foo -t foo .
Run:
$ docker run --rm foo
foo-specific-setup.sh
In your case you could run the correct script in the RUN step using the argument set just before it. You would have one Dockerfile per app-base variant and run the correct set-up based on the build argument.
FROM ubuntu
RUN apt-get install package
COPY ./app-code /code-dir
ARG APP_NAME
RUN $APP_NAME-specific-setup.sh
Any layers before setting the ARG would not need to be rebuilt when creating other versions.
You can then push the built images to separate docker repositories for each app.
If your apps need different ENTRYPOINT instructions, you can have an APP_NAME-entrypoint.sh per app and rename it to entrypoint.sh within your APP_NAME-specific-setup.sh (or pass it through as an argument to run).
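The branch/version pinning can be handled the same way, because an ARG declared before FROM can parameterise the base image tag. A sketch (BASE_TAG is a name introduced here for illustration):
ARG BASE_TAG=latest
FROM app-base:${BASE_TAG}
RUN foo-specific-setup.sh
Build it with docker build --build-arg BASE_TAG=release-1.2 -t app-foo:release-1.2 ., so app-foo and app-buzz can follow whatever app-base tag Jenkins just produced instead of being pinned to app-base:latest.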
