There is a library written in Go. Its folder structure is:
lib/
  golang/
    go.mod
    lib.go
  example/
    golang/
      proto/
        protofile
      go.mod
      main.go
      Dockerfile
example is a folder that shows how to use this lib, so it has a main.go which can be built and run. In order to use the lib, example/golang/go.mod is:
module golang
go 1.15
require (
    lib/golang v0.0.0
    other stuff...
)

replace lib/golang => ../../golang
Now I want to pack the runnable example into a Docker image. The Dockerfile is:
FROM golang:1.15
WORKDIR /go/src/app
COPY . .
RUN go env -w GO111MODULE=auto
RUN go env -w GOPROXY=https://goproxy.io,direct
RUN go get -d -v ./...
RUN go install -v ./...
CMD ["app"]
Then I cd into example/golang and run docker build -t example .; the error log is:
open /go/golang/go.mod: no such file or directory
It seems the build cannot access the lib's go.mod file, so I then cd into the lib folder and run docker build -t server -f example/golang/Dockerfile .; however, this breaks the import in main.go:
import "golang/proto"
The error log is:
golang/proto: unrecognized import path "golang/proto"
How should I fix this to make the docker image?
==========================================
After I spent some time reading the Docker docs, here is a summary of the problem:
The docker build command takes a PATH argument, the dot . at the end. That PATH controls the build context: it decides which files docker build can access. So the reason for the first error is that the pwd is lib/example/golang and the build path is ., so docker build cannot access anything in the parent directories of lib/example/golang, and those files are required by main.go as a lib.
The command should be docker build -t example ../../. However, docker build looks for the Dockerfile only in the root of the context path, so use -f to tell it where the Dockerfile is located: docker build -t example -f ./Dockerfile ../../
If the pwd is lib/, the command is docker build -t server -f example/golang/Dockerfile .
In short:
If the Dockerfile is not located at the project root, use -f together with the PATH argument of docker build to give the build access to all the files it needs. If you use Go modules, make sure the PATH contains a go.mod file.
If main.go is located in a sub folder, make sure the WORKDIR in the Dockerfile matches that sub folder after COPYing everything needed, or else go build / go install will fail to compile (see the sketch below).
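For reference, a minimal sketch of what example/golang/Dockerfile can look like under these assumptions (the build context is lib/, e.g. docker build -t example -f example/golang/Dockerfile . run from lib/, and the paths follow the tree above):
FROM golang:1.15
WORKDIR /go/src/app
# the context is lib/, so this copies golang/ and example/ into /go/src/app
COPY . .
# build inside the example module so that "replace lib/golang => ../../golang" resolves
WORKDIR /go/src/app/example/golang
RUN go env -w GO111MODULE=on
RUN go env -w GOPROXY=https://goproxy.io,direct
RUN go build -o /go/bin/app .
CMD ["app"]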
Your Dockerfile is too nested. Since your go build relies on relative paths - paths that are in parent directories - a docker build . will not see any parent-directory files.
Move the Dockerfile to the top, e.g.
Dockerfile
lib/
and update it to build the nested directory:
FROM golang:1.15
WORKDIR /go/src/app
# COPY copies the contents of lib/, i.e. golang/ and example/
COPY lib .
# work from the example module from here on
WORKDIR /go/src/app/example/golang
RUN go env -w GO111MODULE=auto
RUN go env -w GOPROXY=https://goproxy.io,direct
RUN go get -d -v ./...
RUN go install -v ./...
CMD ["app"]
You can run go mod vendor before building with Docker; it will centralise all your modules in the vendor folder.
I made a new file called build.sh with this inside:
#! /bin/sh
go mod vendor
docker build . -t myapp/myservice
rm -rf ./vendor
and run it whenever I need to build. By deleting vendor afterwards I can still use go run *.go with fresh versions of my libraries.
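Once vendor/ is part of the build context, the Dockerfile no longer needs the parent directories (or network access) for the locally replaced module. A minimal sketch, assuming Go 1.14+ and a main package at the context root:
FROM golang:1.15
WORKDIR /go/src/app
COPY . .
# dependencies, including locally replaced modules, are read from ./vendor
RUN go build -mod=vendor -o /go/bin/app .
CMD ["app"]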
Related
I am trying to build my Dockerfile from a child folder context.
This is my build file, build.sh:
#!/bin/bash -ex
docker build -t app:latest -f ../Dockerfile .
This is my Dockerfile
FROM app:latest
WORKDIR /app
# This path must exist as it is used as a mount point for testing
# Ensure your app is loading files from this location
FROM ubuntu:20.04
RUN apt update
RUN apt install -y python3-pip
# Install Dependencies
COPY requirements.txt .
RUN pip3 install --no-cache-dir -r requirements.txt
This is my requirements.txt
Flask
pandas
requests
gunicorn
pytest
When I attempt to run build.sh within the scripts folder I get this error
#8 ERROR: "/requirements.txt" not found: not found
------
> [stage-1 4/7] COPY requirements.txt .:
------
failed to compute cache key: "/requirements.txt" not found: not found
When I just run the docker build command from the command line in the app directory, I do this:
docker build -t app:latest -f Dockerfile .
This will work, however going into the child directory and attempting to build it using the bash script will fail with the requirements.txt caching issue.
How do I successfully build my docker container from the child folder?
The docker build command takes a path to a context directory
docker build [... options ...] .
# ^ this path
When the Dockerfile says COPY requirements.txt ., the source path is always relative to the context path given at the end of the docker build command. It doesn't matter where the Dockerfile itself is physically located.
If you want to build an image from a parent directory, where the Dockerfile is located in that parent directory, you need to specify the path. If the Dockerfile is named Dockerfile and is in the root of the context directory (the standard recommended location) then you do not need a docker build -f option.
cd $HOME/testing-docker
docker build -t app .
cd $HOME/testing-docker/scripts
docker build -t app ..
# ^^ build the parent directory
# but no -f option
# the Dockerfile is in the default place
# Any way to specify the path will work
cd
docker build -t app $HOME/testing-docker
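Applied to the build.sh in the question (still run from the scripts folder), that would be something like:
#!/bin/bash -ex
# build the parent directory; the Dockerfile already sits at the context root,
# so no -f option is needed
docker build -t app:latest ..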
I do
git clone https://github.com/openzipkin/zipkin.git
cd zipkin
Then I create a Dockerfile as below:
FROM openjdk
RUN mkdir app
WORKDIR /app
COPY ./ .
ENTRYPOINT ["sleep", "1000000"]
Then I run:
docker build -t abc .
docker run abc
I then run docker exec -it CONTAINER_ID bash
pwd returns /app, which is expected,
but when I run ls I see that the files are not copied:
only the directories and the XML file are copied into the /app directory.
What is the reason? How do I fix it?
I also tried:
FROM openjdk
RUN mkdir app
WORKDIR /app
COPY . /app
ENTRYPOINT ["sleep", "1000000"]
That repository contains a .dockerignore file which excludes everything except a set of things it selects.
That repository's docker directory also contains several build scripts for official images and you may find it easier to start your custom image FROM openzipkin/zipkin rather than trying to reinvent it.
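For illustration, a .dockerignore that whitelists this way looks roughly like the following (the entries here are made up, not zipkin's actual list); anything that is not re-included with ! never reaches the build context, so COPY cannot see it:
# ignore everything by default...
*
# ...then opt specific paths back in
!pom.xml
!docker/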
I have a problem when I build an application with local dependencies using docker-compose to create a Docker image.
My Dockerfile:
FROM golang:alpine AS build
ENV GOPATH=$GOPATH
#GOPROXY
ENV GOPROXY=http://proxy.golang.org
ENV GO111MODULE=on
WORKDIR $GOPATH/src/github.com/julianskyline/motorcars-core-business
COPY . .
# Set OS as linux
RUN GOOS=linux go build -o $GOPATH/bin/github.com/julianskyline/motorcars-core-business main.go
FROM alpine
COPY --from=build $GOPATH/bin/github.com/julianskyline/motorcars-core-business $GOPATH/bin/github.com/julianskyline/motorcars-core-business
ENTRYPOINT [ "/go/bin/motorcars-core-business" ]
My go.mod
module github.com/julianskyline/motorcars-core-business
go 1.15
replace (
    github.com/julianskyline/errors => /home/julianmarin/proyectos/go/src/github.com/julianskyline/errors
    github.com/julianskyline/motorcars-db => /home/julianmarin/proyectos/go/src/github.com/julianskyline/motorcars-db
    github.com/julianskyline/motorcars-models => /home/julianmarin/proyectos/go/src/github.com/julianskyline/motorcars-models
)
The projects are in the same parent folder:
$GOPATH/src/github.com/julianskyline/errors
$GOPATH/src/github.com/julianskyline/motorcars-core-business
go build and go run work fine locally.
The error from sudo docker-compose build:
Step 6/9 : RUN GOOS=linux go build -o $GOPATH/bin/github.com/julianskyline/motorcars-core-business main.go
---> Running in 45227441dfdd
go: github.com/julianskyline/errors#v0.0.0-00010101000000-000000000000 (replaced by /home/julianmarin/proyectos/go/src/github.com/julianskyline/errors): reading /home/julianmarin/proyectos/go/src/github.com/julianskyline/errors/go.mod: open /home/julianmarin/proyectos/go/src/github.com/julianskyline/errors/go.mod: no such file or directory
The command '/bin/sh -c GOOS=linux go build -o $GOPATH/bin/github.com/julianskyline/motorcars-core-business main.go' returned a non-zero code: 1
ERROR: Service 'api' failed to build
NOTE: The file /home/julianmarin/proyectos/go/src/github.com/julianskyline/errors/go.mod exists!
The Dockerfile is in $GOPATH/src/github.com/julianskyline/motorcars-core-business which means that the COPY . . within it will only copy $GOPATH/src/github.com/julianskyline/motorcars-core-business into the docker image.
The go.mod contains replace directives that reference folders not under $GOPATH/src/github.com/julianskyline/motorcars-core-business (e.g. $GOPATH/src/github.com/julianskyline/errors); this leads to compilation errors because those folders are not present within the docker image.
To resolve this you can:
Copy the entire julianskyline folder into the image (by moving the Dockerfile into the parent folder, or by specifying the parent folder as the build context on the command line or in docker-compose); see the sketch after this list.
Remove the replace directives and let go pull the modules from GitHub.
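A rough sketch of the first option, using the paths from the question (the relative replace paths, the adjusted COPY/WORKDIR and the image name are assumptions you would mirror in your own files):
# 1. in motorcars-core-business/go.mod, point the replace directives at sibling folders:
#      replace github.com/julianskyline/errors => ../errors
# 2. in the Dockerfile, copy the whole tree and build from the sub-module:
#      COPY . /go/src/github.com/julianskyline
#      WORKDIR /go/src/github.com/julianskyline/motorcars-core-business
# 3. build with the parent julianskyline folder as the context
#    (with docker-compose, set the build context to that folder instead):
cd $GOPATH/src/github.com/julianskyline
docker build -f motorcars-core-business/Dockerfile -t motorcars-core-business .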
Posting answer as this was requested in the comments; I believe the comments provided sufficient info for the OP.
I have my Dockerfile in the root of a directory with a src/myapp folder; myapp contains myapp.go with the main package.
The Dockerfile looks like the following:
FROM golang:1.9.2
ADD . /
RUN go build myapp;
ENTRYPOINT ["/go/bin/myapp"]
I get the following error:
can't load package: package myapp: cannot find package "myapp" in any of:
/usr/local/go/src/myapp (from $GOROOT)
/go/src/myapp (from $GOPATH)
What am I doing wrong? Can I run an ls command after docker has done the ADD?
You are copying all the files to the image's root directory, not installing any dependencies, trying to build, and then running a binary from /go/bin/. The binary doesn't exist in that directory, which is what generates the errors.
I would recommend using a Dockerfile like this:
FROM golang:1.9.2
ADD . /go/src/myapp
WORKDIR /go/src/myapp
RUN go get myapp
RUN go install
ENTRYPOINT ["/go/bin/myapp"]
This will do the following:
Copy project files to /go/src/myapp.
Set Working directory to /go/src/myapp.
Install dependencies; I used go get, but replace it with whichever dependency management tool you are using.
Install/build the binary.
Set entry point.
You can run ls or any other command using docker exec.
Example:
docker exec <container name/hash> ls
You can also enter a shell in the generated image to explore it, using:
docker run --rm -it <image hash/name> /bin/sh
After some experiments I've come to this way of building Go apps.
This way has several advantages:
dependencies are installed in the build stage
if you need to, you may uncomment the test options
the first stage builds a fully-functional image of about 800 MB
the second stage copies your program to a fresh, empty image, producing a very small image of about 10 MB
Dockerfile:
# Two-stage build:
# first FROM prepares a binary file in full environment ~780MB
# second FROM takes only binary file ~10MB
FROM golang:1.9 AS builder
RUN go version
COPY . "/go/src/github.com/your-login/your-project"
WORKDIR "/go/src/github.com/your-login/your-project"
#RUN go get -v -t .
RUN set -x && \
#go get github.com/2tvenom/go-test-teamcity && \
go get github.com/golang/dep/cmd/dep && \
dep ensure -v
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o /your-app
CMD ["/your-app"]
EXPOSE 8000
#########
# second stage to obtain a very small image
FROM scratch
COPY --from=builder /your-app .
EXPOSE 8000
CMD ["/your-app"]
For Go 1.11+, you can use Go modules; the following is an example:
FROM alpine AS base
RUN apk add --no-cache curl wget
FROM golang:1.11 AS go-builder
WORKDIR /go/app
COPY . /go/app
RUN GO111MODULE=on CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o /go/app/main /go/app/cmd/myapp/main.go
FROM base
COPY --from=go-builder /go/app/main /main
CMD ["/main"]
The official docs suggest the following Dockerfile:
FROM golang:1.8
WORKDIR /go/src/app
COPY . .
RUN go get -d -v ./...
RUN go install -v ./...
CMD ["app"]
Please visit https://hub.docker.com/_/golang for more info.
myapp needs to be in /go/src/myapp as suggested, or in /usr/local/go/src/myapp. You can put it there in the ADD instruction.
If the objective is to create a container that simply runs your binary, I would take a different approach.
First build the binary for linux:
GOOS=linux CGO_ENABLED=0 go build -a -installsuffix cgo
Then build a lightweight docker image from scratch:
FROM scratch
COPY myApp /myApp
CMD ["/myApp"]
I've got a repo set up like this:
/config
  config.json
/worker-a
  Dockerfile
  <symlink to config.json>
  /code
/worker-b
  Dockerfile
  <symlink to config.json>
  /code
However, building the images fails, because Docker can't handle the symlinks. I should mention my project is far more complicated than this, so restructuring directories isn't a great option. How do I deal with this situation?
Docker doesn't support symlinking files outside the build context.
Here are some different methods for using a shared file in a container:
Build Time
Copy from a config image (Docker buildkit)
Recent versions of Docker allow RUN steps to bind-mount a directory from a named image or a previous build stage with the option --mount=type=bind,target=/dir,source=/dir,from=image-or-stage-name.
Create a Dockerfile for the base me/worker-config image that includes the shared config/files.
FROM scratch
COPY config.json /config.json
Build and tag the config image me/worker-config
docker build -t me/worker-config:latest .
Mount the me/worker-config image during the real build
RUN --mount=type=bind,target=/worker-config,source=/,from=me/worker-config:latest \
cp /worker-config/config.json /app/config.json;
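Put together, a worker Dockerfile using this approach might look like the following sketch (BuildKit must be enabled, e.g. with DOCKER_BUILDKIT=1; the base image and destination paths are assumptions):
# syntax=docker/dockerfile:1
FROM alpine
WORKDIR /app
# copy the shared file out of the me/worker-config image during the build
RUN --mount=type=bind,target=/worker-config,source=/,from=me/worker-config:latest \
    cp /worker-config/config.json /app/config.json
COPY code/ /app/code/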
Share a base image
Create a Dockerfile for the base me/worker-config image that includes the shared config/files.
# base it on whatever your workers would otherwise build FROM (alpine here as an example)
FROM alpine
COPY config.json /config.json
Build and tag the image me/worker-config
docker build -t me/worker-config:latest .
Source the base me/worker-config image for all your worker Dockerfiles
FROM me/worker-config:latest
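A worker image built on top of it then has the shared file already in place; a minimal sketch (paths are assumptions):
FROM me/worker-config:latest
WORKDIR /app
# /config.json is inherited from the me/worker-config base image
COPY code/ /app/code/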
Build script
Use a script to push the common config to each of your worker containers.
./build worker-n
#!/bin/sh
set -uex
rundir=$(readlink -f "${0%/*}")
container=$1
shift
cd "$rundir/$container"
cp ../config/config.json ./config-docker.json
docker build "$#" .
Build from URL
Pull the config from a common URL for all worker-n builds.
ADD http://somehost/config.json /
Increase the scope of the image build context
Include the symlink target files in the build context by building from a parent directory that includes both the shared files and specific container files.
cd ..
docker build -f worker-a/Dockerfile .
All the source paths you reference in a Dockerfile must also change to match the new build context:
COPY workerathing /app
becomes
COPY worker-a/workerathing /app
Using this method makes every build context large, since they all share the same parent context. That can slow down builds, especially when sending the context to a remote Docker build server. Note that only the .dockerignore file at the root of the build context is used.
Alternate build that can mount volumes
Other projects that strive for Dockerfile compatibility may support volumes at build time. For example, podman build / buildah support a --volume option to bind-mount files from the host into the build container.
podman build --volume /project/config:/worker-config:ro,Z -t me/worker-a .
Then the build can reference the mounted volume from a RUN step (the volume is only visible during RUN, not to COPY):
RUN cp /worker-config/config.json /app/config.json
Run time
Mount a config directory from a named volume
Volumes like this only work as directories, so you can't specify a single file like you can when mounting a file from the host into the container.
docker volume create --name=worker-cfg-vol
docker run -v worker-cfg-vol:/config worker-config cp config.json /config
docker run -v worker-cfg-vol:/config worker-a
Mount config directory from data container
Again, directories only, as it's basically the same as above. This will, however, automatically copy files from the destination directory into the newly created shared volume.
docker create --name wcc -v /config worker-config /bin/true
docker run --volumes-from wcc worker-a
Mount config file from host at runtime
docker run -v /app/config/config.json:/config.json worker-a
Node.js-specific solution
I also ran into this problem, and would like to share another method that hasn't been mentioned above. Instead of using npm link in my Dockerfile, I used yalc.
Install yalc in your container, e.g. RUN npm i -g yalc.
Build your library in Docker, and run yalc publish (add the --private flag if your shared lib is private). This will 'publish' your library locally.
Run yalc add my-lib in each repo that would normally use npm link before running npm install. It will create a local .yalc folder in your Docker container, create a symlink in node_modules that works inside Docker to this folder, and rewrite your package.json to refer to this folder too, so you can safely run install.
Optionally, if you do a two stage build, make sure that you also copy the .yalc folder to your final image.
Below is an example Dockerfile, assuming you have a mono-repository with three packages: models, gui and server, where the models package must be shared and is named my-models.
# You can access the container using:
# docker run -it my-name sh
# To start it stand-alone:
# docker run -it -p 8888:3000 my-name
FROM node:alpine AS builder
# Install yalc globally (the apk add... line is only needed if your installation requires it)
RUN apk add --no-cache --virtual .gyp python make g++ && \
npm i -g yalc
RUN mkdir /packages && \
mkdir /packages/models && \
mkdir /packages/gui && \
mkdir /packages/server
COPY ./packages/models /packages/models
WORKDIR /packages/models
RUN npm install && \
npm run build && \
yalc publish --private
COPY ./packages/gui /packages/gui
WORKDIR /packages/gui
RUN yalc add my-models && \
npm install && \
npm run build
COPY ./packages/server /packages/server
WORKDIR /packages/server
RUN yalc add my-models && \
npm install && \
npm run build
FROM node:alpine
RUN mkdir -p /app
COPY --from=builder /packages/server/package.json /app/package.json
COPY --from=builder /packages/server/dist /app/dist
# Make sure you copy the yalc registry too.
COPY --from=builder /packages/server/.yalc /app/.yalc
COPY --from=builder /packages/server/node_modules /app/node_modules
COPY --from=builder /packages/gui/dist /app/dist/public
WORKDIR /app
EXPOSE 3000
CMD ["node", "./dist/index.js"]
Hope that helps...
The docker build CLI command sends the specified directory (typically .) as the "build context" to the Docker Engine (daemon). Instead of specifying the build context as /worker-a, specify the build context as the root directory, and use the -f argument to specify the path to the Dockerfile in one of the child directories.
docker build -f worker-a/Dockerfile .
docker build -f worker-b/Dockerfile .
You'll have to rework your Dockerfiles slightly, to point them at config/config.json (relative to the new build context), but that is pretty trivial to fix; see the sketch below.
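For example, worker-a's Dockerfile would then reference everything relative to the repository root (the base image and destination paths here are placeholders):
# base image is whatever worker-a already uses
FROM alpine
WORKDIR /app
# both source paths are relative to the new build context (the repo root)
COPY config/config.json /app/config.json
COPY worker-a/code /app/code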
Also check out this question/answer, which I think addresses the exact same problem that you're experiencing.
How to include files outside of Docker's build context?
Hope this helps! Cheers
An alternative solution is to convert all your soft links into hard links.
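For example, assuming the symlink inside worker-a is named config.json (hard links require both paths to be on the same filesystem, and the link has to be recreated if config.json is ever replaced rather than edited in place):
# replace the symlink with a hard link to the same file
rm worker-a/config.json
ln config/config.json worker-a/config.json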