I want to copy a file to a folder location that is outside the working directory. I used the following lines in my Dockerfile, but the file is not there when I look in the container.
WORKDIR /app
RUN cd ../opt/venv/lib/python3.7/site-packages/xxx/
COPY ./resources/abc.py .
When I look at that /opt/venv/lib/python3.7/site-packages/xxx/ location, abc.py is not there.
What is the issue with my approach? Appreciate your inputs.
You can't COPY a file from outside the build context. So if you are trying to COPY /opt/venv/lib/python3.7/site-packages/xxx/resources/abc.py into your docker image, and that is not in your build context, it will fail. Full stop.
Here's some annotated code.
# change to the /app directory in the container
WORKDIR /app
# run the command cd in the container. cd is a shell builtin, and after
# this command finishes you will still be inside the /app directory in
# your container.
RUN cd ../opt/venv/lib/python3.7/site-packages/xxx/
# Attempt to copy ./resources/abc.py from your host's build context
# (NOT /opt/venv/lib/python3.7/site-packages/xxx/) into the container.
COPY ./resources/abc.py .
The basic fix for this is to make sure abc.py is inside your build context. Then you will be able to copy it into your docker image during your build like so:
WORKDIR /app
COPY abc.py .
# /app/abc.py now exists in your container
Note on cd
cd is a shell builtin that changes the working directory of the shell. When you execute it inside a script (or, in this case, a docker RUN instruction) it only changes the working directory for that process, and the process ends when the command finishes; after that you are back in the directory you started in. So you cannot use it in the way you were intending.
Take this Dockerfile for example:
FROM alpine:latest
RUN cd /opt # cd to /opt
RUN pwd # check current directory, you're STILL in '/'
RUN cd /opt && \
pwd # works as expected because you're still in the same process that cd ran in.
# But once you exit this RUN command you will be back in '/'
# Use WORKDIR to set the working directory in a dockerfile
Here's the output of building that Dockerfile (removed noisy docker output):
$ docker build --no-cache .
Sending build context to Docker daemon 3.584kB
Step 1/4 : FROM alpine:latest
Step 2/4 : RUN cd /opt
Step 3/4 : RUN pwd
/
Step 4/4 : RUN cd /opt && pwd
/opt
From what I understand, you're trying to copy a file into a specific location (/opt/venv/lib/python3.7/site-packages/xxx/) in your Docker image that is outside the WORKDIR you defined in the Dockerfile for your image.
You can easily do this by specifying the absolute destination path in the COPY command:
WORKDIR /app
COPY ./resources/abc.py /opt/venv/lib/python3.7/site-packages/xxx/abc.py
Related
I do
git clone https://github.com/openzipkin/zipkin.git
cd zipkin
Then I create a Dockerfile as below
FROM openjdk
RUN mkdir app
WORKDIR /app
COPY ./ .
ENTRYPOINT ["sleep", "1000000"]
then
docker build -t abc .
docker run abc
I then run docker exec -it CONTAINER_ID bash
pwd returns /app which is expected
but when I run ls I see that the files are not copied;
only the directories and the xml file are copied into the /app directory.
What is the reason? How do I fix it?
Also I tried
FROM openjdk
RUN mkdir app
WORKDIR /app
COPY . /app
ENTRYPOINT ["sleep", "1000000"]
That repository contains a .dockerignore file which excludes everything except a small set of paths it explicitly re-includes.
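For reference, the general shape of such a .dockerignore looks like this (an illustration of the exclude-then-re-include pattern only, not zipkin's actual file):
# exclude everything at the root of the build context...
*
# ...then re-include selected paths
!docker
!pom.xml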
That repository's docker directory also contains several build scripts for official images and you may find it easier to start your custom image FROM openzipkin/zipkin rather than trying to reinvent it.
This question has been asked before, yet after reviewing the answers I am still not able to apply the solution.
I am still new to docker and after watching tutorials and following articles I was able to create a Dockerfile for an existing GitHub repository.
I started by using the nearest available image as a base then adding what I need.
From what I read, the problem is in the WORKDIR and CMD commands.
This is the error message:
python: can't open file 'save_model.py': [Errno 2] No such file or directory
This is my Dockerfile:
# syntax=docker/dockerfile:1
FROM tensorflow/serving:2.3.0-rc0-devel-gpu
WORKDIR app
COPY requirements-gpu.txt .
# install dependencies
RUN pip install -r requirements-gpu.txt
# copy the content of the local src directory to the working directory
COPY /home/pc/Desktop/yolo4_deep .
# command to run on container start
CMD ["python","./app/save_model.py","./app/object_tracker.py" ]
src/
    save_model.py
    object_tracker.py
    ...
requirements.txt
Dockerfile
I tried using the WORKDIR command to set the absolute path (WORKDIR /home/pc/Desktop/yolo4_Deep_sort_nojupitor); the result was the same error.
I see multiple issues in your Dockerfile.
COPY /home/pc/Desktop/yolo4_deep .
The COPY command copies files from your local machine into the container. The source path must be relative to your build context. The build context is the path you pass to docker build; in this case the . (the current directory) is the build context. The source path can only reference files located under the build context, so paths that reach outside it, such as ones containing .. or absolute host paths like /home/..., will not work.
WORKDIR app
WORKDIR sets the working directory inside the container, not on your local machine. So WORKDIR /app means that all subsequent commands (RUN, CMD, ENTRYPOINT) will be executed from the /app directory.
CMD ["python","./app/save_model.py","./app/object_tracker.py" ]
As mentioned above, WORKDIR /app causes all operations to be executed from the /app directory. So ./app/save_model.py is actually resolved as /app/app/save_model.py.
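Putting those three fixes together, a minimal corrected sketch might look like this (assuming save_model.py sits in a local src/ directory next to the Dockerfile; the paths are illustrative, not taken from the original repo):
# syntax=docker/dockerfile:1
FROM tensorflow/serving:2.3.0-rc0-devel-gpu
WORKDIR /app
COPY requirements-gpu.txt .
RUN pip install -r requirements-gpu.txt
# src/ is resolved relative to the build context, . relative to WORKDIR (/app)
COPY src/ .
# paths in CMD are resolved from WORKDIR, so no extra ./app prefix
CMD ["python", "save_model.py"]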
Thanks for the help, everyone.
As I mentioned earlier, I'm a beginner in the docker world. I solved the issue by editing the copy command.
# syntax=docker/dockerfile:1
FROM tensorflow/serving:2.3.0-rc0-devel-gpu
WORKDIR /home/pc/Desktop/yolo4_deep
COPY requirements-gpu.txt .
# install dependencies
RUN pip install -r requirements-gpu.txt
# copy the content of the local src directory to the working directory
COPY src/ .
# command to run on container start
ENTRYPOINT ["./start.sh"]
In part 2 of the Docker docs' getting started tutorial, you create a Dockerfile. It instructs you to add the following lines:
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
COPY . /app
What is /app, and why is this a necessary step?
There are two important directories when building a docker image:
the build context directory.
the WORKDIR directory.
Build context directory
It's the directory on the host machine where docker will get the files to build the image. It is passed to the docker build command as the last argument. (Instead of a PATH on the host machine it can be a URL). Simple example:
docker build -t myimage .
Here the current dir (.) is the build context dir. In this case, docker build will use Dockerfile located in that dir. All files from that dir will be visible to docker build.
The build context dir is not necessarily where the Dockerfile is located. The Dockerfile location defaults to the current dir and is otherwise indicated by the -f option. Example:
docker build -t myimage -f ./rest-adapter/docker/Dockerfile ./rest-adapter
Here the build context dir is ./rest-adapter, a subdirectory of where you call docker build; the Dockerfile location is indicated by -f.
WORKDIR
It's a directory inside your container image that can be set with the WORKDIR instruction in the Dockerfile. It is optional (the default is /, but the base image might have set it), but setting it is considered good practice. Subsequent instructions in the Dockerfile, such as RUN, CMD and ENTRYPOINT, will operate in this dir. As for COPY and ADD, they use both...
COPY and ADD use both dirs
These two commands have <src> and <dest>.
<src> is relative to the build context directory.
<dest> is relative to the WORKDIR directory.
For example, if your Dockerfile contains...
WORKDIR /myapp
COPY . .
then the contents of your build context directory will be copied to the /myapp dir inside your docker image.
WORKDIR is good practice because you can set a directory as the main directory of the image; COPY, ENTRYPOINT and CMD instructions will then execute relative to this path.
Docker documentation: https://docs.docker.com/engine/reference/builder/
The WORKDIR instruction sets the working directory for any RUN, CMD, ENTRYPOINT, COPY and ADD instructions that follow it in the Dockerfile. If the WORKDIR doesn’t exist, it will be created even if it’s not used in any subsequent Dockerfile instruction.
The WORKDIR instruction can be used multiple times in a Dockerfile. If a relative path is provided, it will be relative to the path of the previous WORKDIR instruction.
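For instance, relative WORKDIR paths accumulate like this (a minimal sketch; the base image and directory names are arbitrary):
FROM alpine:latest
WORKDIR /a
WORKDIR b
WORKDIR c
# pwd prints /a/b/c here
RUN pwd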
Dockerfile Example:
FROM node:alpine
WORKDIR '/app'
COPY ./package.json ./
RUN npm install
COPY . .
CMD ["npm", "run", "start"]
An alpine Node.js image is used as the base and the workdir is /app; all files are then copied into /app.
Finally, the npm run start command runs in the /app folder inside the container.
You can exec into the container with one of the following commands, depending on whether the image has sh or bash:
docker exec -it <container-id> sh
or
docker exec -it <container-id> bash
After that you can run the ls command and you will see the contents of the WORKDIR folder.
I hope it may help you
You need to declare a working directory and move your code into it, because your code has to live somewhere. Otherwise your code wouldn't be present and your app wouldn't run. Then when commands like RUN, CMD, ENTRYPOINT, COPY, and ADD are used, they are executed in the context of WORKDIR.
/app is an arbitrary choice of working directory. You could use anything you like (foo, bar, or baz), but app is nice since it's self-descriptive and commonly used.
docker build Dockerfile . // am I running it correctly?
1.) I have mentioned in the comments what I think each command will do. Is that a correct understanding of this Dockerfile?
2.) These commands will be used to make the image when I run docker build, so:
[ec2-user#ip-xx-xx-xx-xx ~]$cd /project/p1
[ec2-user#ip-xx-xx-xx-xx p1]$ls
Dockerfile a b c d
My Dockerfile consists of following commands.
Dockerfile
node 8.1.0 //pulls the image from hub
RUN mkdir -p /etc/x/y //make directory in the host at path /etc/x/y
RUN mkdir /app //make directory in the host at path /app
COPY . /app //copy all the files that is
WORKDIR /app //cd /app; now the working directory will be /app for next commands i.e npm install.
RUN npm install
EXPOSE 3000 //what this will do?
Question 1: how to run docker build?
docker build Dockerfile . # am I running it correctly.
No, you run it with docker build . and docker will automatically look for the Dockerfile in the current directory. Or you use docker build -f Path_to_the_docker_file/Dockerfile . where you explicitly specify the path to the Dockerfile (the trailing . is still the build context).
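In other words (the paths here are just examples):
docker build .                        # Dockerfile found in the current directory, context is .
docker build -f docker/Dockerfile .   # Dockerfile elsewhere, context is still .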
Question 2: Fixing errors and clarifying commands
There are a few mistakes in the Dockerfile; check the edited comments:
# pulls the image from dockerhub : YES
# Needs to be preceded by FROM and use the image:tag syntax
FROM node:8.1.0
# all directories are made inside the docker image
# make directory in the image at path /etc/x/y : YES
RUN mkdir -p /etc/x/y
# make directory in the image at path /app : YES
RUN mkdir /app
COPY . /app # copy all the files that is : YES
WORKDIR /app # cd /app; now the working directory will be /app for next commands i.e npm install. : YES
RUN npm install
EXPOSE 3000 # what this will do? => documents that containers from this image listen on port 3000; it does not publish the port by itself.
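EXPOSE alone does not make the port reachable from the host; you still map it when running the container, for example (the image name here is illustrative):
docker build -t mynodeapp .
docker run -p 3000:3000 mynodeapp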
I've got a repo set up like this:
/config
config.json
/worker-a
Dockerfile
<symlink to config.json>
/code
/worker-b
Dockerfile
<symlink to config.json>
/code
However, building the images fails, because Docker can't handle the symlinks. I should mention my project is far more complicated than this, so restructuring directories isn't a great option. How do I deal with this situation?
Docker doesn't support symlinking files outside the build context.
Here are some different methods for using a shared file in a container:
Build Time
Copy from a config image (Docker buildkit)
Recent versions of Docker allow RUN steps to bind mount from a named image or previous build stage with the --mount=type=bind,target=/dir,source=/dir,from=image-or-stage-name flag.
Create a Dockerfile for the base me/worker-config image that includes the shared config/files.
FROM scratch
COPY config.json /config.json
Build and tag the config image me/worker-config
docker build -t me/worker-config:latest .
Mount the me/worker-config image during the real build
RUN --mount=type=bind,target=/worker-config,source=/,from=me/worker-config:latest \
cp /worker-config/config.json /app/config.json;
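Note that RUN --mount requires BuildKit; on older Docker versions you can enable it explicitly when building the worker image (the tag and context path below are illustrative):
DOCKER_BUILDKIT=1 docker build -t me/worker-a ./worker-a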
Share a base image
Create a Dockerfile for the base me/worker-config image that includes the shared config/files.
COPY config.json /config.json
Build and tag the image me/worker-config
docker build -t me/worker-config:latest .
Source the base me/worker-config image for all your worker Dockerfiles
FROM me/worker-config:latest
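A hypothetical worker Dockerfile built on that base could look like this (the copied paths and start command are assumptions, not taken from the question):
FROM me/worker-config:latest
# /config.json is already present from the base image
COPY code/ /app/
WORKDIR /app
CMD ["./run-worker"]   # placeholder start command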
Build script
Use a script to copy the common config into each worker's build directory before building.
./build worker-n
#!/bin/sh
set -uex
rundir=$(readlink -f "${0%/*}")
container=$1; shift
cd "$rundir/$container"
cp ../config/config.json ./config-docker.json
docker build "$@" .
Build from URL
Pull the config from a common URL for all worker-n builds.
ADD http://somehost/config.json /
Increase the scope of the image build context
Include the symlink target files in the build context by building from a parent directory that includes both the shared files and specific container files.
cd ..
docker build -f worker-a/Dockerfile .
All the source paths you reference in a Dockerfile must also change to match the new build context:
COPY workerathing /app
becomes
COPY worker-a/workerathing /app
Using this method, every image shares the same large parent directory as its build context, which can slow down builds, especially to remote Docker build servers. Note that only the .dockerignore file at the base of the build context is referenced.
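Since only that root .dockerignore applies, you can trim the enlarged context there; a hypothetical example:
# .dockerignore at the shared build context root (patterns are illustrative)
**/.git
**/node_modules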
Alternate build that can mount volumes
Other projects that strive for Dockerfile compatibility may support volumes at build time. For example, podman build / buildah support a --volume option to bind mount files from the host into a build container.
podman build --volume /project/config:/worker-config:ro,Z -t me/worker-a .
Then a RUN step in the build can reference the mounted volume (COPY cannot read paths inside the container, only the build context):
RUN cp /worker-config/config.json /app/config.json
Run time
Mount a config directory from a named volume
Named volumes like this only work as directories, so you can't specify a single file the way you can when mounting a file from the host into the container.
docker volume create --name=worker-cfg-vol
docker run -v worker-cfg-vol:/config worker-config cp config.json /config
docker run -v worker-cfg-vol:/config worker-a
Mount config directory from data container
Again, directories only, as it's basically the same as above. This will, however, automatically copy files from the image's destination directory into the newly created shared volume.
docker create --name wcc -v /config worker-config /bin/true
docker run --volumes-from wcc worker-a
Mount config file from host at runtime
docker run -v /app/config/config.json:/config.json worker-a
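The host path in -v must be absolute; from the repository root it could also be written, for example, as:
docker run -v "$(pwd)/config/config.json:/config.json:ro" worker-a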
Node.js-specific solution
I also ran into this problem, and would like to share another method that hasn't been mentioned above. Instead of using npm link in my Dockerfile, I used yalc.
Install yalc in your container, e.g. RUN npm i -g yalc.
Build your library in Docker, and run yalc publish (add the --private flag if your shared lib is private). This will 'publish' your library locally.
Run yalc add my-lib in each repo that would normally use npm link before running npm install. It will create a local .yalc folder in your Docker container, create a symlink in node_modules that works inside Docker to this folder, and rewrite your package.json to refer to this folder too, so you can safely run install.
Optionally, if you do a two stage build, make sure that you also copy the .yalc folder to your final image.
Below is an example Dockerfile, assuming you have a mono repository with three packages (models, gui and server), where the models package is the one that must be shared and is published under the name my-models.
# You can access the container using:
# docker run -it my-name sh
# To start it stand-alone:
# docker run -it -p 8888:3000 my-name
FROM node:alpine AS builder
# Install yalc globally (the apk add... line is only needed if your installation requires it)
RUN apk add --no-cache --virtual .gyp python make g++ && \
npm i -g yalc
RUN mkdir /packages && \
mkdir /packages/models && \
mkdir /packages/gui && \
mkdir /packages/server
COPY ./packages/models /packages/models
WORKDIR /packages/models
RUN npm install && \
npm run build && \
yalc publish --private
COPY ./packages/gui /packages/gui
WORKDIR /packages/gui
RUN yalc add my-models && \
npm install && \
npm run build
COPY ./packages/server /packages/server
WORKDIR /packages/server
RUN yalc add my-models && \
npm install && \
npm run build
FROM node:alpine
RUN mkdir -p /app
COPY --from=builder /packages/server/package.json /app/package.json
COPY --from=builder /packages/server/dist /app/dist
# Make sure you copy the yalc registry too.
COPY --from=builder /packages/server/.yalc /app/.yalc
COPY --from=builder /packages/server/node_modules /app/node_modules
COPY --from=builder /packages/gui/dist /app/dist/public
WORKDIR /app
EXPOSE 3000
CMD ["node", "./dist/index.js"]
Hope that helps...
The docker build CLI command sends the specified directory (typically .) as the "build context" to the Docker Engine (daemon). Instead of specifying the build context as /worker-a, specify the build context as the root directory, and use the -f argument to specify the path to the Dockerfile in one of the child directories.
docker build -f worker-a/Dockerfile .
docker build -f worker-b/Dockerfile .
You'll have to rework your Dockerfiles slightly to point them at config/config.json (source paths in COPY are resolved relative to the new build context root), but that is pretty trivial to fix.
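For example, inside worker-a/Dockerfile, with the repository root as the build context (the destination path is illustrative):
COPY config/config.json /app/config.json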
Also check out this question/answer, which I think addresses the exact same problem that you're experiencing.
How to include files outside of Docker's build context?
Hope this helps! Cheers
An alternative solution is to upgrade all your soft links into hard links.
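For example, a symlinked config could be replaced like this (hard links only work within a single filesystem):
rm worker-a/config.json
ln config/config.json worker-a/config.json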