Why does Docker COPY work differently with a single file and a directory?

I have a front-end project which includes a package.json (imagine create-react-app, for example).
When I run the commands below, everything works fine with no errors.
First Dockerfile:
COPY . develop
WORKDIR develop
But if I COPY only package.json, the next command fails with an error.
Second Dockerfile:
COPY package.json develop
WORKDIR develop
error message: Cannot mkdir: /develop is not a directory
I know how to dockerize my project with the commands below.
WORKDIR develop
COPY package.json .
I am just curious why the first Dockerfile works and the second one doesn't.
I also added RUN ls after the COPY command and found that in both cases develop had been created.

It is because COPY package.json develop copies package.json into the container as a file named develop. The next directive, WORKDIR, then fails because develop is a file, not a directory.
Use a / before and after develop and it should work:
FROM alpine
COPY temp.txt /develop/
WORKDIR develop
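For reference, the rule from the Dockerfile COPY semantics is: if the destination ends with a /, it is treated as a directory and the source is copied inside it; otherwise the source is written to that exact path. A minimal sketch to confirm what happened (assuming a temp.txt exists in the build context):
FROM alpine
# without a trailing slash, the destination is the file itself
COPY temp.txt /develop
# prints a regular-file entry ("-rw-..."), confirming /develop is not a directory
RUN ls -ld /develop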


Dockerizing python project Dockerfile creation

This question has been asked before, yet after reviewing the answers I am still not able to make the solution work.
I am still new to docker and after watching tutorials and following articles I was able to create a Dockerfile for an existing GitHub repository.
I started by using the nearest available image as a base then adding what I need.
From what I read, the problem is in the WORKDIR and CMD commands.
This is the error message:
python: can't open file 'save_model.py': [Errno 2] No such file or directory
This is my Dockerfile:
# syntax=docker/dockerfile:1
FROM tensorflow/serving:2.3.0-rc0-devel-gpu
WORKDIR app
COPY requirements-gpu.txt .
# install dependencies
RUN pip install -r requirements-gpu.txt
# copy the content of the local src directory to the working directory
COPY /home/pc/Desktop/yolo4_deep .
# command to run on container start
CMD ["python","./app/save_model.py","./app/object_tracker.py" ]
src
    save_model.py
    object_tracker.py
    ...
requirements.txt
Dockerfile
I tried using WORKDIR to set an absolute path: WORKDIR /home/pc/Desktop/yolo4_Deep_sort_nojupitor. The result was the same error.
I see multiple issues in your Dockerfile.
COPY /home/pc/Desktop/yolo4_deep .
The COPY command copies files from your local machine into the container. The source path must be relative to the build context, which is the path you pass in when you run docker build — in this case . (the current directory). The source can only reference files located under the build context, so paths containing .. (parent directory) or absolute paths starting at / (root directory) are not allowed.
WORKDIR app
WORKDIR sets a path inside the container, not on your local machine. So WORKDIR /app means that all subsequent instructions — RUN, CMD, ENTRYPOINT — will be executed from the /app directory.
CMD ["python","./app/save_model.py","./app/object_tracker.py" ]
As mentioned above, WORKDIR /app causes all operations to be executed from the /app directory, so ./app/save_model.py actually resolves to /app/app/save_model.py.
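Putting those fixes together, a corrected Dockerfile might look like this (a sketch, assuming the scripts live in src/ under the build context and save_model.py is the one you want to run; note also that CMD runs a single command, so the second script in your CMD array would just be passed as an argument to the first):
# syntax=docker/dockerfile:1
FROM tensorflow/serving:2.3.0-rc0-devel-gpu
WORKDIR /app
COPY requirements-gpu.txt .
RUN pip install -r requirements-gpu.txt
# copy from the build context, not an absolute host path
COPY src/ .
# paths are relative to WORKDIR /app, so no ./app/ prefix
CMD ["python", "save_model.py"]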
Thanks for the help, everyone.
As I mentioned earlier, I'm a beginner in the Docker world. I solved the issue by editing the COPY command.
# syntax=docker/dockerfile:1
FROM tensorflow/serving:2.3.0-rc0-devel-gpu
WORKDIR /home/pc/Desktop/yolo4_deep
COPY requirements-gpu.txt .
# install dependencies
RUN pip install -r requirements-gpu.txt
# copy the content of the local src directory to the working directory
COPY src/ .
# command to run on container start
ENTRYPOINT ["./start.sh"]

How to use go mod with local package and docker?

I have two Go modules, github.com/myuser/mymainrepo and github.com/myuser/commonrepo.
Here is how the files are laid out on my local computer:
- allmyrepos
  - mymainrepo
    - Dockerfile
    - go.mod
  - commonrepo
    - go.mod
mymainrepo/go.mod
...
require (
    github.com/myuser/commonrepo
)
replace (
    github.com/myuser/commonrepo => ../commonrepo
)
It works well; I can do local development with it. The problem happens when I'm building the Docker image of mymainrepo.
mymainrepo/Dockerfile
...
WORKDIR /go/src/mymainrepo
COPY go.mod go.sum ./
RUN go mod download
COPY ./ ./
RUN go build -o appbinary
...
Here, replace swaps github.com/myuser/commonrepo for ../commonrepo, but inside Docker /go/src/commonrepo does not exist.
I'm building the Docker image on CI/CD, which needs to fetch commonrepo directly from the remote GitHub URL, but I also need to do local development on commonrepo. How can I do both?
I tried putting all my files under GOPATH, so it's ~/go/src/github.com/myuser/commonrepo and ~/go/src/github.com/myuser/mymainrepo, and removed the replace directive. But it still looks for commonrepo inside ~/go/pkg/mod/..., i.e. the copy downloaded from GitHub.
Create two go.mod files: one for local development and one for your build. You can name the second one go.build.mod, for example.
Keep the replace directive in your go.mod file but leave it out of go.build.mod.
Finally, in your Dockerfile:
COPY go.build.mod ./go.mod
COPY go.sum ./
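For illustration, the build section of the Dockerfile could then look like this (a sketch; go.build.mod is assumed to be identical to go.mod minus the replace directive, so commonrepo is fetched from GitHub instead of ../commonrepo):
WORKDIR /go/src/mymainrepo
# the build-only mod file is renamed to go.mod inside the image
COPY go.build.mod ./go.mod
COPY go.sum ./
RUN go mod download
COPY ./ ./
RUN go build -o appbinary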
I still can't find a better solution; even the accepted answer doesn't work for me. Here is a trick that works around the problem for me. This is an example structure:
|---sample
| |---...
| |---go.mod
| |---Dockerfile
|---core
| |---...
| |---go.mod
We know that docker build errors out when it can't find our local module, so let's create one during the builder stage:
# Use the official golang image to create a binary.
# This is based on Debian and sets the GOPATH to /go.
# https://hub.docker.com/_/golang
FROM golang:1.16.3-buster AS builder
# Copy core library
RUN mkdir /core
COPY core/ /core
# Create and change to the app directory.
WORKDIR /app
# Retrieve application dependencies.
# This allows the container build to reuse cached dependencies.
# Expecting to copy go.mod and if present go.sum.
COPY go.* ./
RUN go mod download
# Copy local code to the container image.
COPY . ./
# Build the binary
RUN go build -o /app/sample cmd/main.go
...
...
OK, our working directory is /app and our core library is placed next to it at /core.
Now for the trick when building the Docker image:
cp -R ../core . && docker build --tag sample-service . && rm -R core/
Update
A better way: create a Makefile next to the Dockerfile, with the content below (recipe lines must be indented with a tab):
build:
	cp -R ../core .
	docker build -t sample-service .
	rm -R core/
Then run make build in the sample directory.
You can add make submit or make deploy targets as you like.
=> Production ready!
Be aware that if an error occurs during the docker build step, the core folder copied into sample won't be deleted.
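One way to guarantee the cleanup even when the build fails (my own sketch, not part of the original trick) is to run the steps on a single recipe line so the exit status of docker build is preserved after the copy is removed:
build:
	cp -R ../core .
	docker build -t sample-service . ; status=$$? ; rm -R core/ ; exit $$status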
Please let me know if you find a better solution. ;)

dockerize NestJs project with build files only

I want to dockerize my Nest API. I'm completely new to Docker, so I created a fresh Nest project with the CLI. I created a .dockerignore and added every file that shouldn't end up in the Docker image.
.git
.gitignore
coverage
LICENSE
README.md
CONTRIBUTING.md
docker-compose.yml
Dockerfile
node_modules/
.github
.vscode
npm-debug.log
npm-debug.log.*
Next I started with the Dockerfile.
FROM node:12.13-alpine As api
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
ADD . /usr/src/app
CMD npm start
I'm wondering why the image has a size of 321 MB. Does someone know how to improve this? I don't need fancy stuff for development and testing purposes; I just want to start with a small, "clean" image so I can set up the docker-compose file for the TypeORM database support.
If you don't need development and testing dependencies, slim down dependency installation in the Dockerfile like this:
RUN npm install --production
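If you want to go further, a multi-stage build keeps the TypeScript toolchain out of the final image entirely. A sketch, assuming the standard Nest CLI layout where npm run build emits dist/main.js:
FROM node:12.13-alpine AS build
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

FROM node:12.13-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --production
COPY --from=build /usr/src/app/dist ./dist
CMD ["node", "dist/main.js"]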

Where is this dockerfile putting files in the container?

I'm using Docker to build an ASP site, and I'm confused about where my files are going. Here's my Dockerfile (the application is called AspCore):
FROM microsoft/dotnet:2.2-sdk as build
ARG BUILDCONFIG=RELEASE
ARG VERSION=1.0.0
COPY AspCore.csproj /build/
RUN dotnet restore ./build/AspCore.csproj
COPY . ./build/
WORKDIR /build/
RUN dotnet publish ./AspCore.csproj -c $BUILDCONFIG -o out /p:Version=$VERSION
FROM microsoft/dotnet:2.2-aspnetcore-runtime
WORKDIR /app
COPY --from=build /build/out .
ENTRYPOINT ["dotnet", "AspCore.dll"]
This works correctly and I can access the site when I run the image, but I don't quite understand how. When I open a shell in my container, I don't see a build directory, either in the app directory or at the root. I also can't find the AspCore.csproj file.
Isn't the Dockerfile copying AspCore.csproj into the build directory? Shouldn't there be a build directory with a bunch of files in it in my container? What am I misunderstanding?
That's because your Dockerfile builds the image in two stages.
Stage 1 (BUILD): based on the microsoft/dotnet:2.2-sdk image, it compiles your source code into DLLs.
Stage 2 (SERVE): based on microsoft/dotnet:2.2-aspnetcore-runtime, it provides a runtime for those DLLs. In this stage you copy files from the previous stage into a folder called /app with the line COPY --from=build /build/out ..
After stage 2 has copied the files from stage 1, nothing from stage 1 remains in the final image except the copied files. That's why you don't see a /build folder when you start your container.
This pattern is good for production builds where you want to minimize your image: you don't need the SDK in a production environment, just the runtime for the compiled code.
Hope that's clear enough. For more information, you can take a look at this article.
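If you want to look inside the intermediate stage yourself, you can build just that stage with Docker's --target flag and poke around (aspcore-build is a hypothetical tag name):
docker build --target build -t aspcore-build .
docker run --rm -it aspcore-build ls /build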

Docker multi-stage-build with different project

We are working with two projects at the moment:
1. a C++ based project
2. a Node.js based project
These two projects are separate, which means they have different codebases (git repositories) and working directories.
The C++ project produces a Node binding file (.node) which is used by the Node.js project.
We are trying to build a Docker image for the Node.js project with a multi-stage build like this:
from ubuntu:18.04 as u
WORKDIR /app
RUN apt-get........
copy (?) . #1 copy the c++ source codes
RUN make
from node:10
WORKDIR /app
copy (?) . #1 copy the nodejs cource codes
RUN npm install
copy --from=u /app/dist/xx.node ./lib/
node index.js
And I would build the image with docker build -t xx (?) #2.
However, as marked in the Dockerfile and in the command, how do I set up the context directory (see comment #2)? It affects the paths in the Dockerfile (see comment #1).
Also, which project should the above Dockerfile be placed in?
You have two options here, as the limiting factor is that Docker only allows copying from within the build context (the directory you pass to docker build):
Create a new Repository
You can either create a new repo and pull your existing repos in as submodules, or use it just for the Dockerfile (in that case you would have to copy both repos into the root folder at build time). In the end, what you want to achieve is the following structure:
/ (root)
|-- C-plus-plus-Repo
|-- |-- <Files>
|-- Node-Repo
|-- |-- <Files>
|-- Dockerfile
Then you can build your project with:
from ubuntu:18.04 as u
WORKDIR /app
RUN apt-get........
#1 copy the c++ source files
copy ./C-plus-plus-Repo .
RUN make
from node:10
WORKDIR /app
#1 copy the nodejs cource codes
copy ./Node-Repo .
RUN npm install
copy --from=u /app/dist/xx.node ./lib/
node index.js
In the root Directory execute:
docker build -t xx .
Build your stage containers separately
Docker allows copying from an external image as a stage.
So you can build the C++ container in your C++ repo root:
from ubuntu:18.04 as u
WORKDIR /app
RUN apt-get........
#1 copy the c++ source files
copy . .
RUN make
and Tag it:
# Build your C++ Container in root of the c++ repo
docker build . -t c-stage
Then copy the files from it, using the tag (in your Node repo root):
from node:10
WORKDIR /app
#1 copy the nodejs source files
copy . .
RUN npm install
# Use the Tag-Name of the already build container "c-stage"
copy --from=c-stage /app/dist/xx.node ./lib/
node index.js
Both build steps can be executed from their respective repo roots.
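Note that the order matters here: COPY --from=c-stage resolves against the images available locally, so the C++ image has to be built (or pulled) before the Node image. Roughly (repo locations are illustrative):
# from the C++ repo root
docker build -t c-stage .
# from the Node repo root
docker build -t node-app .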
Creating a deploy project with git submodules
How about creating a deploy project using git submodules?
This project would exist only for building the Docker image; it contains the Dockerfile and both of your projects as git submodules.
Since you don't just copy the two projects but manage them with git, you can always keep them up to date with git submodule update --remote. Note that this leaves your submodules in a detached HEAD state; however, this is not a problem as long as you do not try to update the C++ project or the Node project from within the deploy project.
You can create the project with the following commands:
mkdir deploy_project && cd deploy_project
git init
git submodule add git@your-gitserver.com:YourName/YourCppProject.git cpp_project
git submodule add git@your-gitserver.com:YourName/YourNodeProject.git nodejs_project
Then you simply add the paths to the subprojects to your Dockerfile and build the image in the root directory of the deploy project.
The Dockerfile would look like this:
FROM ubuntu:18.04 as u
WORKDIR /app
RUN apt-get........
#1 copy the c++ source codes
COPY cpp_project/ .
RUN make
FROM node:10
WORKDIR /app
#1 copy the nodejs source codes
COPY nodejs_project/ .
RUN npm install
COPY --from=u /app/dist/xx.node ./lib/
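One note for CI (standard git usage, not part of the original answer): a fresh clone needs the submodules checked out before the image can be built, e.g.:
git clone --recurse-submodules <url-of-deploy_project>
cd deploy_project
docker build -t xx .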
You can use the ADD command here. Its context is the host directory where the Dockerfile is placed: it will copy whatever path you name under that directory (in this case the content of the cpp_app directory) into the Docker container.
...
ADD cpp_app /place/to/build
WORKDIR /place/to/build
RUN make
RUN mv result_file /place/where/result_file/have/to/be
WORKDIR /place/where/result_file/have/to/be
... execute your nodejs stuff
