What is the point of WORKDIR on Dockerfile? - docker

I'm learning Docker. For many times I've seen that Dockerfile has WORKDIR command:
FROM node:latest
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm install
COPY . /usr/src/app
EXPOSE 3000
CMD [ "npm", "start" ]
Can't I just omit WORKDIR and Copy and just have my Dockerfile at the root of my project? What are the downsides of using this approach?

According to the documentation:
The WORKDIR instruction sets the working directory for any RUN, CMD,
ENTRYPOINT, COPY and ADD instructions that follow it in the
Dockerfile. If the WORKDIR doesn’t exist, it will be created even if it’s not used in any subsequent Dockerfile instruction.
Also, in the Docker best practices it recommends you to use it:
... you should use WORKDIR instead of proliferating instructions like
RUN cd … && do-something, which are hard to read, troubleshoot, and
maintain.
I would suggest keeping it.
I think you can refactor your Dockerfile to something like:
FROM node:latest
WORKDIR /usr/src/app
COPY package.json .
RUN npm install
COPY . ./
EXPOSE 3000
CMD [ "npm", "start" ]

You don't have to
RUN mkdir -p /usr/src/app
The directory will be created automatically when you specify your WORKDIR:
FROM node:latest
WORKDIR /usr/src/app
COPY package.json .
RUN npm install
COPY . ./
EXPOSE 3000
CMD [ "npm", "start" ]

You can think of WORKDIR like a cd inside the container (it affects commands that come later in the Dockerfile, like the RUN command). If you removed WORKDIR in your example above, RUN npm install wouldn't work because you would not be in the /usr/src/app directory inside your container.
I don't see how this would be related to where you put your Dockerfile (the Dockerfile's location on the host machine has nothing to do with the pwd inside the container). You can put the Dockerfile wherever you like in your project. However, the source argument to COPY is resolved relative to the build context, so if you move your Dockerfile and build from a different directory, you may need to update those COPY paths or pass the right context.
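For example (the docker/ subdirectory and the my-app tag here are just illustrative), you can keep the Dockerfile outside the project root and still build with the root as the context by passing -f:
docker build -f docker/Dockerfile -t my-app .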

Before applying WORKDIR properly: here WORKDIR is in the wrong place and is not used wisely.
FROM microsoft/aspnetcore:2
COPY --from=build-env /publish /publish
WORKDIR /publish
ENTRYPOINT ["dotnet", "/publish/api.dll"]
We corrected the above code by putting WORKDIR in the right location and simplified the following statements by removing the /publish prefix:
FROM microsoft/aspnetcore:2
WORKDIR /publish
COPY --from=build-env /publish .
ENTRYPOINT ["dotnet", "api.dll"]
So WORKDIR acts like a cd and sets the working directory for the statements that follow.

The answer by #juanlumn is great, but I wanted to add one more (important) thing.
In a regular command line, if you cd somewhere, you stay there until you change it. In a Dockerfile, however, each RUN command runs in a fresh shell that starts from the current WORKDIR (or the image's default directory if none is set), so a cd in one RUN does not carry over to the next. That's a gotcha for Docker newbies, and something to be aware of.
So not only does WORKDIR make a more obvious visual cue to someone reading your code, but it also keeps the working directory for more than just the one RUN command.
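For illustration (reusing the paths from the example above, with npm test standing in for any later command), this is the gotcha without WORKDIR:
RUN cd /usr/src/app && npm install    # the cd lasts only for this single RUN
RUN npm test                          # starts again from the image's default directory, not /usr/src/app
Setting WORKDIR /usr/src/app once makes both RUN instructions execute from that directory.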

Beware of using vars as the target directory name for WORKDIR - doing that appears to result in a "cannot normalize nothing" fatal error. IMO, it's also worth pointing out that WORKDIR behaves in the same way as mkdir -p <path> i.e. all elements of the path are created if they don't exist already.
UPDATE:
I encountered the variable-related problem (mentioned above) while running a multi-stage build. It now appears that using a variable is fine, as long as the variable is "in scope". For example, in the following, the second WORKDIR reference fails ...
FROM <some image>
ENV varname varval
WORKDIR $varname
FROM <some other image>
WORKDIR $varname
whereas it succeeds in this one, because each FROM starts a new build stage with a fresh environment, so an ENV declared in an earlier stage is not visible unless re-declared ...
FROM <some image>
ENV varname varval
WORKDIR $varname
FROM <some other image>
ENV varname varval
WORKDIR $varname
.oO(Maybe it's in the docs & I've missed it)

Be careful where you set WORKDIR, because it can affect the continuous integration flow. For example, setting it to /home/circleci/project can break things like .ssh or whatever else the remote CircleCI runner does at setup time.


Workdir and dot at the end of directory path

I'm looking at this tutorial:
https://learn.microsoft.com/en-us/learn/modules/implement-docker-multi-stage-builds/3-examine-multi-stage-dockerfiles
And this part confuses me:
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
WORKDIR /src
COPY ["WebApplication1.csproj", ""]
RUN dotnet restore "./WebApplication1.csproj"
COPY . .
WORKDIR "/src/."
RUN dotnet build "WebApplication1.csproj" -c Release -o /app/build
Why would you use WORKDIR twice here? Aren't we already in the src?
Does the dot at the end (/src/.) have any additional meaning?
Adding WORKDIR twice here is mainly for clarity. As a best practice, setting WORKDIR with an absolute path is recommended for readability and maintenance. The trailing dot adds no extra meaning: /src/. resolves to the same directory as /src.
The working directory of an image can also be verified as below:
docker run -it imageName pwd
Replace imageName with your actual image name (pwd is the Linux counterpart of the Windows cd command for printing the current directory).
For more, see the WORKDIR section of the Docker best practices documentation.
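If you'd rather not start a container, a quick sketch (my-image is a placeholder name) is to inspect the image's configured working directory directly:
docker inspect --format '{{.Config.WorkingDir}}' my-image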

Understanding Dockerfile syntax?

I am learning Docker and looking at this Dockerfile example for React application
FROM node:alpine
WORKDIR /app
COPY package.json ./
COPY package-lock.json ./
COPY ./ ./
RUN npm i
CMD ["npm", "run", "start"]
To me it's saying
Grab image node:alpine from docker library
create WORKDIR called /app
copy the package.json file to the /app dir
copy the lock file also to /app dir
I don't understand what COPY ./ ./ is doing?
command npm install
then CMD npm run start
Am I interpreting this language correctly? Can anyone give me insight of what is actually going on?
Docker is an open-source containerization platform: you run your application in a container, which is managed by the Docker Engine. The Dockerfile contains all the commands (the procedures, you could say) needed to build a runnable container image.
Coming back to your point...
Here COPY ./ ./ follows the pattern COPY <source_path> <destination_path>: <source_path> is a path on your host machine (relative to the build context) and <destination_path> is a path inside the container (relative to the WORKDIR). So COPY ./ ./ copies everything from the build context into /app.
I will try to simplify the other contents in your Dockerfile...
FROM node:alpine : Pull the image node:alpine from Docker Hub. Here node is the image name and alpine is the tag, a variant built on Alpine Linux, a very minimal distribution with only the required packages.
WORKDIR /app: (working directory) Inside the container, set the working directory to the /app folder.
COPY package.json ./: COPY the package.json file (host machine) to ./ (the current working directory, i.e. /app) in your container.
The other COPY instructions work in the same way.
RUN npm i: RUN the command npm i inside the container (it installs the dependencies listed in package.json).
CMD ["npm", "run", "start"]: CMD sets the command (npm run start) that is executed when the container starts.
For more detail please see Dockerfile Documentation.
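As a side note, a common reordering (just a sketch, not something this Dockerfile requires) copies only the package manifests before npm i, so Docker can cache the dependency layer and source-only changes don't trigger a full reinstall:
FROM node:alpine
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm i
COPY ./ ./
CMD ["npm", "run", "start"]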

Dockerizing python project Dockerfile creation

This question has been asked before, yet after reviewing the answers I am still not able to reproduce the solution.
I am still new to docker and after watching tutorials and following articles I was able to create a Dockerfile for an existing GitHub repository.
I started by using the nearest available image as a base then adding what I need.
From what I read, the problem is in the WORKDIR and CMD commands.
This is error message:
python: can't open file 'save_model.py': [Errno 2] No such file or directory
This is my Dockerfile:
# syntax=docker/dockerfile:1
FROM tensorflow/serving:2.3.0-rc0-devel-gpu
WORKDIR app
COPY requirements-gpu.txt .
# install dependencies
RUN pip install -r requirements-gpu.txt
# copy the content of the local src directory to the working directory
COPY /home/pc/Desktop/yolo4_deep .
# command to run on container start
CMD ["python","./app/save_model.py","./app/object_tracker.py" ]
src
save_model.py
object_tracker.py
...
requirements.txt
Dockerfile
I tried using the WORKDIR command to set the absolute path (WORKDIR /home/pc/Desktop/yolo4_Deep_sort_nojupitor) and the result was the same error.
I see multiple issues in your Dockerfile.
COPY /home/pc/Desktop/yolo4_deep .
The COPY command copies files from your local machine to the container. The path on your local machine must be a path relative to your build context. The build context is the path you pass in when you run docker build (here docker build ., so the current directory is the build context). Also, the local machine path can only reference files located under the build context, i.e. paths that try to escape it, such as .. (parent directory) or absolute paths like /home/..., will not work.
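For example (yolo4-deep is just an illustrative tag), building from the project root makes everything under that directory available as a COPY source:
docker build -t yolo4-deep .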
WORKDIR app
WORKDIR sets the path inside the container not on your local machine. So WORKDIR /app means that all commands — RUN, CMD, ENTRYPOINT — will be executed from the /app directory.
CMD ["python","./app/save_model.py","./app/object_tracker.py" ]
As mentioned above WORKDIR /app causes all operations to be executed from the /app directory. So ./app/save_model.py is actually translated as /app/app/save_model.py.
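Putting those points together, a minimal corrected sketch (assuming the scripts live in the src/ directory shown in the question and that save_model.py is the intended entry script) could look like:
FROM tensorflow/serving:2.3.0-rc0-devel-gpu
WORKDIR /app
COPY requirements-gpu.txt .
RUN pip install -r requirements-gpu.txt
# src/ is resolved relative to the build context; its contents land in /app
COPY src/ .
# paths are now relative to /app
CMD ["python", "save_model.py"]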
Thanks for the help, everyone.
As I mentioned earlier, I'm a beginner in the Docker world. I solved the issue by editing the COPY command.
# syntax=docker/dockerfile:1
FROM tensorflow/serving:2.3.0-rc0-devel-gpu
WORKDIR /home/pc/Desktop/yolo4_deep
COPY requirements-gpu.txt .
# install dependencies
RUN pip install -r requirements-gpu.txt
# copy the content of the local src directory to the working directory
COPY src/ .
# command to run on container start
ENTRYPOINT ["./start.sh"]

Dockerfile copy from build failing for create-react-app

I have a react app I'm trying to dockerize for production. It was based off create-react-app. To run the app locally, I am in the app's root folder and I run npm start. This works. I built the app with npm run build. Then I try to create the docker image with docker build . -t app-name. This is failing for not being able to find the folder I'm trying to copy the built app from (I think).
Here's what's in my Dockerfile:
FROM node:13.12.0-alpine as build
WORKDIR /src
ENV PATH /node_modules/.bin:$PATH
COPY package.json ./
COPY package-lock.json ./
RUN npm install --silent
COPY . ./
RUN npm run build
FROM nginx:alpine
COPY --from=build build /usr/share/nginx/html
EXPOSE 80
CMD ["npm", "start"]
I'm pretty sure I've got something wrong on the COPY --from line.
The app structure is like this, if it matters
-app-name (folder)
-src (folder)
-build (folder)
-dockerfile
-other stuff, but I think I listed what matters
The error I get is failed to compute cache key: "/build" not found: not found
I'm running my commands in windows powershell.
What do I need to change?
You were almost correct.
The build folder is generated at /src/build, not at /build, and hence the error you see.
Where does the /src come from? It's due to the WORKDIR /src.
So this should work: COPY --from=build /src/build /usr/share/nginx/html
Besides, since you are using the nginx server to serve the static build files, you don't need to (and can't) run npm start with CMD in the final stage.
Instead, just leave it out, and you can access the application at port 80.
So a possible working Dockerfile would be:
FROM node:13.12.0-alpine as build
WORKDIR /src
ENV PATH /node_modules/.bin:$PATH
COPY package*.json ./
RUN npm install --silent
COPY . ./
RUN npm run build
FROM nginx:alpine
COPY --from=build /src/build /usr/share/nginx/html
EXPOSE 80
This is in accordance with the Dockerfile in the question above; in some specific cases, advanced configuration might be required.
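To try it out (app-name is a placeholder tag, and mapping host port 8080 is just one choice):
docker build -t app-name .
docker run -p 8080:80 app-name
Then open http://localhost:8080 in your browser.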

Run mvn commands as part of the docker file using Entrypoint and/or CMD

How can we run mvn commands from a Dockerfile
Here is my Dockerfile:
FROM maven:3.3.9-jdk-8-alpine
WORKDIR /app
COPY code /app
WORKDIR /app
ENTRYPOINT ["mvn"]
CMD ["clean test -Dsurefire.suiteXmlFiles=/app/abc.xml"]
I tried to build and run the above image and it fails (abc.xml is under the /app directory).
Is there a way to get this to work?
According to the documentation:
"If CMD is used to provide default arguments for the ENTRYPOINT instruction, both the CMD and ENTRYPOINT instructions should be specified with the JSON array format."
As such you should rewrite CMD as follow:
CMD ["clean","test","-Dsurefire.suiteXmlFiles=/app/abc.xml"]
You can also parameterize the ENTRYPOINT itself as a JSON array, as per the documentation:
ENTRYPOINT ["mvn","clean","test","-Dsurefire.suiteXmlFiles=/app/abc.xml"]
But I suggest you follow best practice and use an entrypoint shell script. This ensures that changing these parameters does not require rewriting the Dockerfile:
Create an entrypoint.sh file in the code directory and make it executable. It should read like this:
#!/bin/sh
if [ "$#" -ne 1 ]
then
  FILE="abc.xml"
else
  FILE=$1
fi
mvn clean test -Dsurefire.suiteXmlFiles="/app/$FILE"
Replace your entrypoint with ENTRYPOINT ["./entrypoint.sh"]
Replace your command with CMD ["abc.xml"]
PS
You have WORKDIR /app twice. This isn't what fails you, but it is redundant; you can get rid of one of them.
