Below is my Dockerfile. I copy a JS file with the COPY command, set the working directory after that, and follow with VOLUME and RUN commands.
1) I understand that node_modules (which is created by running npm install) gets wiped out when the container is first initialized, because a volume is created at the same location.
My question: why is the app.js that I copied in step 3 not getting wiped out, since it is also on the same path as the volume?
FROM node:latest
ENV NODE_ENV=production
ENV PORT=3000
COPY . /app
WORKDIR /app
VOLUME ["/app"]
RUN npm install
EXPOSE $PORT
ENTRYPOINT ["node","app.js"]
Q: Why is my app.js (which I copied in step 3) not getting wiped off, while node_modules is?
A: As explained in Docker's documentation under the volumes section.
Quote:
Changing the volume from within the Dockerfile:
If any build steps change the data within the volume after it has been declared, those changes will be discarded.
Your app.js is copied in step 3, before VOLUME ["/app"] is declared, so it is baked into the image's layers. node_modules, on the other hand, is created by RUN npm install, which runs after the volume has been declared, so that change is discarded.
Reference: https://docs.docker.com/engine/reference/builder/#notes-about-specifying-volumes
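A common fix, as a minimal sketch: perform every build step that writes into /app before declaring the volume, so the results are baked into the image first.

FROM node:latest
ENV NODE_ENV=production
ENV PORT=3000
WORKDIR /app
# install dependencies before the volume is declared so node_modules survives
COPY package*.json ./
RUN npm install
COPY . .
# declare the volume only after every build step that modifies /app
VOLUME ["/app"]
EXPOSE $PORT
ENTRYPOINT ["node","app.js"]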
Related
I would like to have the files created during the build phase stored on my local machine.
I have this Dockerfile
FROM node:17-alpine as builder
WORKDIR '/app'
COPY ./package.json ./
RUN npm install
RUN npm i -g @angular/cli
COPY . .
RUN ng build foo --prod
RUN touch test.txt # this is just a test
# just to keep the container running
CMD ["ng", "serve"]
I also created a shared volume via Docker Compose:
services:
  client:
    build:
      dockerfile: Dockerfile.prod
      context: ./foo
    volumes:
      - /app/node_modules
      - ./foo:/app
If I attach a shell to the running container and run touch test.txt, the file is created on my local machine.
I can't understand why the files are not created during the build phase...
If I use a multi-stage Dockerfile (just adding this to the Dockerfile), the dist folder is created in the container, but I still can't see it on the local machine:
FROM nginx
EXPOSE 80
COPY --from=builder /app/dist/foo /usr/share/nginx/html
I can't understand why the files are not created during the build phase...
That's because the build phase doesn't involve volume mounting.
Volumes are only mounted when containers are created, not when images are built. If you map a volume to an existing file or directory, Docker "overrides" the image's path, much like a traditional Linux mount. This means that before the container is created, your image has everything from /app/* pre-packaged, which is why you're able to copy the contents in the multi-stage build.
However, since you defined a volume with the - ./foo:/app config in your docker-compose file, the container won't see those files anymore; instead, the /app folder will have the current contents of your ./foo directory.
If you wish to copy the contents of the image to a mounted volume, you'll have to do it in the ENTRYPOINT, as it runs upon container instantiation, after the volumes are mounted. A sketch of that approach follows.
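For example, a minimal sketch, assuming you keep a spare copy of the build output at /dist in the image and add a hypothetical entrypoint.sh (neither the path nor the script is part of the original setup). In the build stage:

RUN ng build foo --prod
# keep a copy of the output outside /app, where the bind mount can't mask it
RUN cp -r /app/dist /dist

entrypoint.sh:

#!/bin/sh
# copy the baked-in build output into the mounted /app so it shows up in ./foo on the host
cp -r /dist /app/dist
exec "$@"

And wire it up at the end of the Dockerfile:

COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["ng", "serve"]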
I'm trying to build a web application based on Flask and Vue.js, using Docker containers.
I use volume sharing in docker-compose and I'm facing an issue with the container structure.
I'd like to share the application folder on the host with the /app container folder. To do so, the docker-compose file is set up as:
volumes:
  - type: bind
    source: ./
    target: /app
Inspecting the container shows that the data from the host is placed inside the folder /app/app and not inside /app as expected. The working directory is set inside the Docker container:
FROM continuumio/miniconda3:latest
WORKDIR /app
COPY dependency.yml .
RUN conda env create -f dependency.yml
COPY setup.py .
RUN pip install -e .
To try to understand the relative/absolute path behavior, I changed the volume target to /data in the docker-compose file. In that case the application files are installed in /app and the host files appear in /data, as expected.
The question is: why, if I use the absolute /app folder as the target in the container, does the system treat it as relative to the WORKDIR, and why does this happen only when the WORKDIR has the same name as the target folder?
Basically I have a Main directory and a Books directory (this is the general file structure; there's more, but these are the important pieces). When I fire a request from main to booksServer, it doesn't work because the node modules are missing.
That's because the node modules live inside the Books docker container at a specific path: /usr/src/app
How can main.js see that the books service/container has the proper node packages inside that path?
I think I can use docker-compose, but I wanted to test it individually without docker-compose first.
**-Main Directory (individual service, has its own container)**
  -Initiator (fires commands)
  -Dockerfile
**-Books Directory (individual service, has its own container)**
  -Stubs
    -BooksStub.js (NEED THIS! It won't work because it needs the npm modules located in its container at /usr/src/app. How can I access the node_modules it's using?)
  -booksServer.js
  -package*.json (lock and package.json)
  -Dockerfile
Inside the main container I get this error:
internal/modules/cjs/loader.js:800
throw err;
^
Error: Cannot find module 'grpc'
Books Dockerfile
FROM node:12.14.0
WORKDIR /usr/src/app
COPY package*.json ./
COPY . /usr/src/app
RUN npm install
EXPOSE 30043
CMD ["node", "booksServer.js"]
Main DockerFile
FROM node:12.14.0
WORKDIR /usr/src/app
COPY package*.json ./
COPY . /usr/src/app
RUN npm install
EXPOSE 4555
CMD ["node", "main.js"]
You can create one common data volume and attach your containers to it.
Here are the steps to create a data volume:
Step 1: Run docker volume create --name storageOne (you can use any name instead of storageOne)
Step 2: Attach that volume to a container with docker run -ti --name=myContainer -v storageOne:/storageOne ubuntu
Step 3: Copy or create the files you need inside that data volume
Step 4: Create another container with docker run -ti --name=myContainer2 --volumes-from myContainer ubuntu
Step 5: Restart your myContainer container
Whatever files are available in the storageOne volume will be shared between the attached containers.
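Since you mention docker-compose: here is the same idea as a minimal compose sketch. The service names main and books, the build paths, and the shared-modules volume name are assumptions for illustration, not taken from your setup:

services:
  books:
    build: ./books
    volumes:
      - shared-modules:/usr/src/app/node_modules
  main:
    build: ./main
    volumes:
      - shared-modules:/usr/src/app/node_modules
volumes:
  shared-modules: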
Maybe this will help you.
In part 2 of the Docker docs' getting-started tutorial, you create a Dockerfile. It instructs you to add the following lines:
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
COPY . /app
What is /app, and why is this a necessary step?
There are two important directories when building a docker image:
the build context directory.
the WORKDIR directory.
Build context directory
It's the directory on the host machine from which docker build takes the files to build the image. It is passed to the docker build command as its last argument (instead of a PATH on the host machine, it can also be a URL). Simple example:
docker build -t myimage .
Here the current dir (.) is the build context dir. In this case, docker build will use the Dockerfile located in that dir. All files from that dir are visible to docker build.
The build context dir is not necessarily where the Dockerfile is located. The Dockerfile defaults to the root of the build context; another location can be indicated with the -f option. Example:
docker build -t myimage -f ./rest-adapter/docker/Dockerfile ./rest-adapter
Here the build context dir is ./rest-adapter, a subdirectory of where you call docker build; the Dockerfile location is indicated by -f.
WORKDIR
It's a directory inside your container image that is set with the WORKDIR instruction in the Dockerfile. It is optional (the default is /, though a base image might have set it), but setting it is considered good practice. Subsequent instructions in the Dockerfile, such as RUN, CMD and ENTRYPOINT, will operate in this dir. As for COPY and ADD, they use both...
COPY and ADD use both dirs
These two commands have <src> and <dest>.
<src> is relative to the build context directory.
<dest> is relative to the WORKDIR directory.
For example, if your Dockerfile contains...
WORKDIR /myapp
COPY . .
then the contents of your build context directory will be copied to the /myapp dir inside your docker image.
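A quick way to confirm where the files land (the image tag myimage is just an assumption for illustration):

docker build -t myimage .
docker run --rm myimage pwd   # prints /myapp
docker run --rm myimage ls    # lists the files copied from the build context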
Setting WORKDIR is good practice because it establishes the main directory you work in; COPY, ENTRYPOINT and CMD commands will then execute relative to this path.
Docker documentation: https://docs.docker.com/engine/reference/builder/
The WORKDIR instruction sets the working directory for any RUN, CMD, ENTRYPOINT, COPY and ADD instructions that follow it in the Dockerfile. If the WORKDIR doesn’t exist, it will be created even if it’s not used in any subsequent Dockerfile instruction.
The WORKDIR instruction can be used multiple times in a Dockerfile. If a relative path is provided, it will be relative to the path of the previous WORKDIR instruction.
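The relative-path rule from that quote, as a tiny sketch:

FROM alpine
WORKDIR /a
WORKDIR b
WORKDIR c
# pwd prints /a/b/c here
RUN pwd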
Dockerfile Example:
FROM node:alpine
WORKDIR '/app'
COPY ./package.json ./
RUN npm install
COPY . .
CMD ["npm", "run", "start"]
An Alpine-based Node.js image is used and the workdir is /app; all files are then copied into /app.
Finally, the npm run start command runs in the /app folder inside the container.
You can exec into the container with one of the following commands, provided it has an sh or bash tty:
docker exec -it <container-id> sh
or
docker exec -it <container-id> bash
After that you can run ls and see the contents of the WORKDIR folder.
I hope this helps.
You need to declare a working directory and move your code into it, because your code has to live somewhere. Otherwise your code wouldn't be present and your app wouldn't run. Then when commands like RUN, CMD, ENTRYPOINT, COPY, and ADD are used, they are executed in the context of WORKDIR.
/app is an arbitrary choice of working directory. You could use anything you like (foo, bar, or baz), but app is nice since it's self-descriptive and commonly used.
In the Dockerfile below, the file apprequirements.txt is ADDed to the container. I know because pip install works. However, the myworker.py file is not copied/added. Why?
FROM python:2.7
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD ./frontend/apprequirements.txt /code
RUN pip install -r apprequirements.txt
ADD ./backend/myworker.py /code
I run this with docker-compose, you can see the whole example on https://github.com/AvidSoftware-be/Docker-compose-test
After a close review of your repo, this is my conclusion:
Your Dockerfile is fine; it does what it is supposed to do. It creates an image, and inside that image a folder /code is created and two files are copied: apprequirements.txt and myworker.py.
Inside the docker-compose.yml file you have this line:
volumes:
  - ./frontend:/code
This means that after you run the docker-compose up command, Docker is going to mount a volume over the existing /code directory.
The content of /code isn't removed from the container; it is "masked", because the mounted directory is mounted on top of the existing files. The files are still in the image, but they are not reachable in the container.
Note: the ./frontend folder contains the file apprequirements.txt, which is why you believe that only one file was added.
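To confirm that both files really are in the image, you can run it without the volume (the image tag backend is just an assumption for illustration):

docker build -t backend .
docker run --rm backend ls /code   # shows apprequirements.txt and myworker.py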