How to synchronize a host folder with a container folder in Docker

I would like to know how to synchronize a host folder with a container folder in Docker.
This is my Dockerfile:
FROM node:carbon
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD [ "npm", "start" ]
I have no docker-compose.yml
Thanks a lot :)

You have to map a directory in your Docker container to a host directory using a volume.
For example:
docker run -v <host_dir>:<container_dir> [other options] imagename
The two directories are then kept in sync in both directions. Both the host directory and the container directory must exist.

By mounting a volume:
docker run -v /host/folder:/usr/src/app -it imagename /bin/bash
Then any change you make inside the host folder is reflected in the container as well.
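Applied to the Dockerfile in the question, that looks like this (a sketch; myapp is a placeholder image tag):
# Build the image, then bind-mount the project folder over the image's
# /usr/src/app so edits on the host appear live in the container and vice versa.
docker build -t myapp .
docker run -v "$(pwd)":/usr/src/app -p 8080:8080 myapp
Note that the bind mount also hides the node_modules installed during the build; the answers further down this page cover that pitfall.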

Related

Docker is not copying files to a shared volume during build

I would like to have the files created during the build phase stored on my local machine.
I have this Dockerfile:
FROM node:17-alpine as builder
WORKDIR '/app'
COPY ./package.json ./
RUN npm install
RUN npm i -g @angular/cli
COPY . .
RUN ng build foo --prod
RUN touch test.txt # this is just for a test
# just to keep the container running
CMD ["ng", "serve"]
I also created a shared volume via docker-compose:
services:
  client:
    build:
      dockerfile: Dockerfile.prod
      context: ./foo
    volumes:
      - /app/node_modules
      - ./foo:/app
If I attach a shell to the running container and run touch test.txt, the file is created on my local machine.
I can't understand why the files are not created during the build phase...
If I use a multi-stage Dockerfile, the dist folder is created in the container (just by adding the following to the Dockerfile), but I still can't see it on the local machine:
FROM nginx
EXPOSE 80
COPY --from=builder /app/dist/foo /usr/share/nginx/html
"I can't understand why the files are not created during the build phase..."
That's because the build phase doesn't involve volume mounting.
Mounting volumes only occurs when creating containers, not when building images. If you map a volume to an existing file or directory, Docker "overrides" the image's path, much like a traditional Linux mount. That means that before the container is created, your image has everything from /app/* pre-packaged, which is why you're able to copy the contents in the multi-stage build.
However, since you defined a volume with the - ./foo:/app entry in your docker-compose file, the running container won't see those files anymore; instead, the /app folder will have the current contents of your ./foo directory.
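You can watch the shadowing happen (a hedged illustration; client is an assumed tag, built from the builder stage of foo/Dockerfile.prod):
# Build just the builder stage and confirm the build output exists in the image:
docker build --target builder -t client -f foo/Dockerfile.prod foo
docker run --rm client ls /app/dist
# With ./foo bind-mounted over /app, as the compose file does,
# ls now shows the host directory's contents instead:
docker run --rm -v "$(pwd)/foo:/app" client ls /app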
If you wish to copy the contents of the image into a mounted volume, you'll have to do it in the ENTRYPOINT, since that runs upon container instantiation, after the volumes are mounted.
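A minimal sketch of that approach (the entrypoint.sh script and the /output mount point are assumptions, not part of the original setup):
#!/bin/sh
# entrypoint.sh (hypothetical): copy the build output baked into the image
# over to the mounted volume, then hand off to the container's command.
cp -a /app/dist/. /output/
exec "$@"
You would then add COPY entrypoint.sh /entrypoint.sh and ENTRYPOINT ["/entrypoint.sh"] to the Dockerfile, and mount the host folder at /output instead of over /app.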

Module not found after attaching volume in Docker

This is my Dockerfile:
FROM node:15
# set the working directory to /app
WORKDIR /app
# copy package.json to /app folder
COPY package.json .
RUN npm install
# copy everything from the build context into /app
COPY . ./
EXPOSE 3000
CMD ["node","index.js"]
I am using this command in PowerShell to run the image in a container:
docker run -v ${pwd}:/app -p 3000:3000 -d --name node-app node-app-image
${pwd} returns the current directory.
But as soon as I hit enter, node_modules somehow isn't installed in the container, and I get an "express not found" error in the log.
[Docker log screenshot: https://i.stack.imgur.com/4Fifu.png]
I can't verify whether node_modules is installed, because I can't get the container up to run docker exec -it.
I was following a freeCodeCamp tutorial, and it seems to work on his PC; I've also tried this command in Command Prompt, replacing ${pwd} with %cd%.
This used to work fine before I added the volume flag in the command.
Your problem is that you built your image from one folder and then mapped a different folder over it. Keep everything the image needs in one folder:
MyFolder/
  |_ all-required-files
  |_ all-required-folders
  |_ Dockerfile
Then build and run from inside that folder:
docker build -t node-app-image .
docker run -p 3000:3000 -d --name node-app node-app-image
Simplified Dockerfile:
FROM node:15
# set the working directory to /app
WORKDIR /app
# copy everything from the build context into /app
COPY . ./
RUN npm install
EXPOSE 3000
CMD ["node","index.js"]

Get build files to persist on host after docker-compose build is run

I'm trying to run a docker-compose build command with a Dockerfile and a docker-compose.yml file.
Inside the docker-compose.yml file, I'm trying to bind a local folder on the host machine, ./dist, to a folder in the container, /app/dist.
version: '3.8'
services:
  dev:
    build:
      context: .
    volumes:
      - ./dist:/app/dist # I'm expecting files changed or added in the container's /app/dist to be reflected in the host's ./dist folder
Inside the Dockerfile, I build some files with an NPM script, and I want them to be available on the host machine once the build is finished. I'm also touching a new file, /app/dist/test.md, as a simple test to see whether it ends up on the host machine, but it does not.
FROM node:8.17.0-alpine as example
RUN mkdir /app
WORKDIR /app
COPY . /app
RUN npm install
RUN npm run dist
RUN touch /app/dist/test.md
Is there a way to do this? I also tried using the "long syntax" as mentioned in the Docker Compose v3 documentation: https://docs.docker.com/compose/compose-file/compose-file-v3/
The easiest way to do this is to install Node and run the npm commands directly on the host.
$BREW_OR_APT_GET_OR_YUM_OR_SOMETHING install node
npm install
npm run dist
# done
There's not an easy way to use a Dockerfile to build host content. The Dockerfile can't write out directly to the host filesystem; if you use a volume mount, the host volume hides the container content before anything else happens.
That means, if you want to use this approach, you need to launch a temporary container to get the content out. You can do it with a one-off container, mounting the host directory somewhere other than /app, making the main container command be cp:
sudo docker build -t myimage .
sudo docker run --rm \
  -v "$PWD/dist:/out" \
  myimage \
  cp -a /app/dist /out
Or, if you specifically wanted to use docker cp:
sudo docker build -t myimage .
sudo docker create --name to-copy myimage
sudo docker cp to-copy:/app/dist ./dist
sudo docker rm to-copy
Note that either of these sequences is more complex than just installing a local Node via a package manager, and they require administrator permissions (the same technique could overwrite any host file, including /etc/shadow with its encrypted passwords).

Docker-compose volumes sharing issue with /app folder

I'm trying to build a web application based on Flask and Vue.js, using Docker containers.
I use volume sharing in docker-compose and I'm facing an issue with the container structure.
I'd like to share the application folder from the host with the container's /app folder. To do so, docker-compose is set up as:
volumes:
  - type: bind
    source: ./
    target: /app
Inspecting the container shows that the data from the host is placed inside the folder /app/app, not inside /app as expected. The working directory is set inside the Docker container:
FROM continuumio/miniconda3:latest
WORKDIR /app
COPY dependency.yml .
RUN conda env create -f dependency.yml
COPY setup.py .
RUN pip install -e .
In an attempt to understand the relative/absolute path handling, I tried changing the target volume to /data in the docker-compose file. In that case the application files are installed in /app and the host files are copied into /data, as expected.
The question is: why, when I use the absolute /app folder as the target in the container, does the system treat it as relative to the WORKDIR, and why does this happen only when the WORKDIR has the same name as the target folder?

Docker: How to access files from another container from a given container?

Basically I have a Main directory and a Books directory (general file structure; there's more, but these are the important pieces). When I fire a request from main to booksServer, it doesn't work because the node modules are missing.
That's because the node modules are inside the Books container at a specific path: /usr/src/app.
How can main.js see that the books service/container has the proper node packages at that specific path?
I think I can use docker-compose, but I wanted to test it individually without docker-compose first.
- Main Directory (individual service, has its own container)
  - Initiator (fires commands)
  - Dockerfile
- Books Directory (individual service, has its own container)
  - Stubs
    - BooksStub.js (NEED THIS! It won't work because it needs npm modules, which live inside the Books container at /usr/src/app. How can I access the node_modules it's using?)
  - booksServer.js
  - package*.json (lock and package.json)
  - Dockerfile
Error:
internal/modules/cjs/loader.js:800
throw err;
^
Error: Cannot find module 'grpc'
Books Dockerfile
FROM node:12.14.0
WORKDIR /usr/src/app
COPY package*.json ./
COPY . /usr/src/app
RUN npm install
EXPOSE 30043
CMD ["node", "booksServer.js"]
Main Dockerfile
FROM node:12.14.0
WORKDIR /usr/src/app
COPY package*.json ./
COPY . /usr/src/app
RUN npm install
EXPOSE 4555
CMD ["node", "main.js"]
You can create one common data volume and attach your containers to it.
Here are the steps to create a data volume:
Step 1: Run docker volume create --name storageOne (you can use any name instead of storageOne).
Step 2: Attach that volume to a container with docker run -ti --name=myContainer -v storageOne:/storageOne ubuntu.
Step 3: Copy or create the files you need inside that data volume.
Step 4: Create another container with docker run -ti --name=myContainer2 --volumes-from myContainer ubuntu.
Step 5: Restart your myContainer container.
Whatever files are in the storageOne volume will now be shared between the attached containers.
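Put together, the sequence looks like this (a sketch; file.txt is a hypothetical test file):
# 1. Create a named volume
docker volume create --name storageOne
# 2. Start a container with the volume mounted at /storageOne
docker run -ti --name=myContainer -v storageOne:/storageOne ubuntu
# 3. (inside myContainer) create a test file, then exit
touch /storageOne/file.txt
exit
# 4. Start a second container that reuses myContainer's volumes;
#    /storageOne/file.txt is visible here too
docker run -ti --name=myContainer2 --volumes-from myContainer ubuntu
ls /storageOne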
Maybe this will help you.
