How to exchange files between a Docker container and the local filesystem? - docker

I have TypeScript code that reads the contents of a directory and deletes them one by one at set intervals.
Everything works fine locally. I built a Docker container for my code and wanted to achieve the same thing, but I realized that the directory contents are frozen at whatever existed when the container was built.
As far as I understand, the connection between the Docker container and the local file system is missing.
I have been reading about the bind and volume options, and I came across the following simple tutorial:
How To Share Data Between the Docker Container and the Host
According to that tutorial, I should theoretically be able to achieve my goal:
If you make any changes to the ~/nginxlogs folder, you’ll be able to see them from inside the Docker container in real-time as well.
However, I followed exactly the same steps and still couldn't see changes made locally reflected in the Docker container, or vice versa.
My question is: How can I access my local file system from a docker container to read/write/delete files?
Update
This is my Dockerfile:
FROM ampervue/ffmpeg
RUN curl -sL https://deb.nodesource.com/setup_10.x | sudo -E bash -
RUN apt-get update -qq && apt-get install -y --force-yes \
    nodejs; \
    apt-get clean
RUN npm install -g fluent-ffmpeg
RUN rm -rf /usr/local/src
RUN apt-get autoremove -y; apt-get clean -y
WORKDIR /work
COPY package.json .
COPY . .
CMD ["node", "sizeCalculator.js"]

An easy way is to bind mount a directory in the docker run command:
docker run -it -v /<Source Dir>/:/<Destination Dir> <image_name> bash
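Applied to the Dockerfile above, that could look like this (a sketch; /host/dirToWatch is a hypothetical host directory, and note that mounting over /work itself would hide the code copied in at build time, so a subdirectory is used):
docker run -it -v /host/dirToWatch:/work/data <image_name> node sizeCalculator.js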
Another way is to use docker-compose.
Put your Dockerfile and docker-compose.yaml in the same directory.
The main focus is the volumes mapping:
volumes:
  - E:\dirToMap:/vol1
docker-compose.yaml
version: "3"
services:
ampervue:
build:
context: ./
image: <Image Name>
container_name: ampervueservice
volumes:
- E:\dirToMap:/vol1
ports:
- 8080:8080
And add a VOLUME instruction in the Dockerfile:
FROM ampervue/ffmpeg
RUN curl -sL https://deb.nodesource.com/setup_10.x | sudo -E bash -
RUN apt-get update -qq && apt-get install -y --force-yes \
    nodejs; \
    apt-get clean
RUN npm install -g fluent-ffmpeg
RUN rm -rf /usr/local/src
RUN apt-get autoremove -y; apt-get clean -y
WORKDIR /work
VOLUME /vol1
COPY package.json .
COPY . .
CMD ["node", "sizeCalculator.js"]
and run the following command to bring the container up:
docker-compose -f "docker-compose.yaml" up -d --build
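To check that the mapping works, you can list the mounted directory from inside the running container (using the container name from the compose file above):
docker exec -it ampervueservice ls /vol1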

The examples below come directly from the docs:
The --mount and -v examples below produce the same result. You can't run them both unless you remove the devtest container after running the first one.
with -v:
docker run -d -it --name devtest -v "$(pwd)"/target:/app nginx:latest
with --mount:
docker run -d -it --name devtest --mount type=bind,source="$(pwd)"/target,target=/app nginx:latest
This is where you have to type your two different paths:
-v /path/from/your/host:/path/inside/the/container
   <-------host------->:<--------container------->
--mount type=bind,source=/path/from/your/host,target=/path/inside/the/container
                         <-------host------->        <--------container------->
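As a quick check (a sketch, assuming the devtest container above is running and the target directory exists on the host), create a file on the host side of the mount and list it from inside the container:
echo hello > target/test.txt
docker exec devtest ls /app
The new file should appear under /app immediately, with no rebuild needed.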

Related

Docker container not starting from a created Dockerfile

I am completely new to Docker. I am trying to create a container for Tomcat from an Ubuntu base image, and have written a Dockerfile for it:
FROM ubuntu
RUN apt-get update -y && apt-get upgrade -y
RUN apt-get install wget -y
RUN apt-get install openjdk-8-jdk -y
RUN mkdir /usr/local/tomcat
RUN wget https://mirrors.estointernet.in/apache/tomcat/tomcat-8/v8.5.61/bin/apache-tomcat-8.5.61.tar.gz
RUN tar xvzf apache-tomcat-8.5.61.tar.gz
RUN mv apache-tomcat-8.5.61 /usr/local/tomcat/
#CMD ./usr/local/tomcat/apache-tomcat-8.5.61/bin/catlina.sh run
EXPOSE 8080
RUN /usr/local/tomcat/apache-tomcat-8.5.61/bin/catlina.sh run
I created a Docker image from this Dockerfile using:
docker build -t [filename] .
and tried to start the container using: docker run -itd --name my-con -p 8080:8080
but the container is not starting, and it shows up in the list of stopped containers.
Can anyone help me fix this issue?
Thanks.
RUN executes at build time, so your last line tries to start Tomcat while the image is being built, and nothing is left to run when the container starts. Try this as the last line instead (note the corrected spelling, catalina.sh, and the full path Tomcat was extracted to):
CMD ["/usr/local/tomcat/apache-tomcat-8.5.61/bin/catalina.sh","run"]

Permissions in Docker volume

I am struggling with permissions on a Docker volume; I get access denied when writing.
This is a small part of my Dockerfile:
FROM ubuntu:18.04
RUN apt-get update && \
    apt-get install -y \
    apt-transport-https \
    build-essential \
    ca-certificates \
    curl \
    vim && \............
RUN curl -sL https://deb.nodesource.com/setup_8.x | bash - && apt-get install -y nodejs
# Add non-root user
ARG USER=user01
RUN useradd -Um -d /home/$USER -s /bin/bash $USER && \
    apt install -y python3-pip && \
    pip3 install qrcode[pil]
#Copy that startup.sh into the scripts folder
COPY /scripts/startup.sh /scripts/startup.sh
#Making the startup.sh executable
RUN chmod -v +x /scripts/startup.sh
#Copy node API files
COPY --chown=user01 /node_api/* /home/user01/
USER $USER
WORKDIR /home/$USER
# Expose needed ports
EXPOSE 3000
VOLUME /data_storage
ENTRYPOINT [ "/scripts/startup.sh" ]
And a small part of my startup.sh:
#!/bin/bash
/usr/share/lib/provision.py --enterprise-seed $ENTERPRISE_SEED > config.json
Then my docker build command:
sudo docker build -t mycontainer .
And the docker run command:
sudo docker run -v data_storage:/home/user01/.client -p 3008:3000 -itd mycontainer
The problem I have is that the Python script creates the folder /home/user01/.client and copies some files in there. That always worked fine. But now I want those files, which are data files, in a volume for backup purposes, and as soon as I map the volume I get permission denied, so the Python script is not able to write anymore.
So at the end of my Dockerfile, this instruction, combined with the mapping in the docker run command, gives me the permission denied error:
VOLUME /data_storage
Any suggestions on how to resolve this? Are more permissions needed for "user01"?
Thanks
I was able to resolve the issue by removing the VOLUME instruction from the Dockerfile and doing the mapping only at docker run time:
sudo docker run -v data_storage:/home/user01/.client -p 3008:3000 -itd mycontainer
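If you do want the mount point declared in the image, a possible alternative (a sketch, not from the original answer) is to create the directory with the right ownership before switching to the non-root user; when an empty named volume is mounted there for the first time, Docker copies the directory's contents and ownership into the volume:
# in the Dockerfile, before the USER $USER line
RUN mkdir -p /home/$USER/.client && chown -R $USER:$USER /home/$USER/.client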

docker-compose inside Docker-in-Docker

I am pretty new to Docker and was following the documentation found here, trying to deploy several containers inside dind with docker-compose 1.14.0. I get the following:
docker run -v /home/dudarev/compose/:/compose/ --privileged docker:dind /compose/docker-compose
/usr/local/bin/dockerd-entrypoint.sh: exec: line 21: /compose/docker-compose: not found
Did I miss something?
There is an official Docker image for docker-compose on Docker Hub; just use that.
Follow these steps:
Create a directory on the host: mkdir /root/test
Create a docker-compose.yaml file with the following contents:
version: '2'
services:
  web:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/code
  redis:
    image: redis
Run the following docker run command to run docker-compose inside the container:
docker run -itd -v /var/run/docker.sock:/var/run/docker.sock -v /root/test/:/var/tmp/ docker/compose:1.24.1 -f /var/tmp/docker-compose.yaml up -d
NOTE: Here the /var/tmp directory inside the container contains the docker-compose.yaml file, so I have used the -f option to specify the full path of the YAML file. docker.sock is also mounted from the host into the container.
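Because docker.sock is mounted from the host, the web and redis services are started as sibling containers on the host's Docker daemon, so afterwards you can check them directly on the host (a quick sketch):
docker ps    # the web and redis containers should appear in the list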
Hope this helps.
Add docker-compose installation to your Dockerfile before executing docker run.
For example, if you have an Ubuntu docker, add to your Dockerfile:
RUN aptitude -y install docker-compose
RUN ln -s /usr/local/bin/docker-compose /compose/docker-compose
It looks like your entrypoint looks for docker-compose in the /compose folder, while docker-compose is installed in /usr/local/bin by default.
If you want a specific docker-compose version (for example 1.20.0-rc2):
RUN curl -L https://github.com/docker/compose/releases/download/1.20.0-rc2/docker-compose-`uname -s`-`uname -m` -o /compose/docker-compose
RUN chmod +x /compose/docker-compose
Here is a full Dockerfile to run docker-compose inside Docker:
FROM ubuntu:21.04
RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get install -y python3
RUN apt-get install -y pip
RUN apt-get install -y curl
RUN curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
RUN chmod +x /usr/local/bin/docker-compose
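To use it, mount the host's Docker socket and a directory containing your docker-compose.yaml (a sketch; compose-runner is a hypothetical image tag):
docker build -t compose-runner .
docker run -v /var/run/docker.sock:/var/run/docker.sock -v "$(pwd)":/work -w /work compose-runner docker-compose up -d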

Why won't my docker container run unless I use -i -t?

If I run the image built from my Dockerfile with the following command, the container starts running and all is well.
docker run --name test1 -i -t 660c93c32a
However, if I run this command without -i -t, the container does not appear to be running; docker ps returns nothing:
docker run -d --name test1 660c93c32a
$ docker ps
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES
All I'm trying to do is run the container and then be able to attach and/or open a shell in the container later.
Not sure if the issue is in my Dockerfile or not, so I have pasted it below.
############################################################
# Dockerfile to build Ubuntu/Ansible/Django
############################################################
# Set the base image to Ansible
FROM ubuntu:16.10
# File Author / Maintainer
MAINTAINER David
# Install Ansible and Related Deps #
RUN apt-get -y update && \
    apt-get install -y python-yaml python-jinja2 python-httplib2 python-keyczar python-paramiko python-setuptools python-pkg-resources git python-pip
RUN mkdir /etc/ansible/
RUN echo '[local]\nlocalhost\n' > /etc/ansible/hosts
RUN mkdir /opt/ansible/
RUN git clone http://github.com/ansible/ansible.git /opt/ansible/ansible
WORKDIR /opt/ansible/ansible
RUN git submodule update --init
ENV PATH /opt/ansible/ansible/bin:/bin:/usr/bin:/usr/local/bin:/sbin:/usr/sbin
ENV PYTHONPATH /opt/ansible/ansible/lib
ENV ANSIBLE_LIBRARY /opt/ansible/ansible/library
# Update the repository sources list
RUN apt-get update -y
RUN apt-get install python -y
RUN apt-get install python-dev -y
RUN apt-get install python-setuptools -y
RUN apt-get install python-pip
RUN mkdir /ansible/
WORKDIR /ansible
COPY ./ansible ./
WORKDIR /
RUN ansible-playbook -c local ansible/playbooks/installdjango.yml
ENV PROJECTNAME davidswebsite
CMD django-admin startproject $PROJECTNAME
When you run your container, the command given by CMD or ENTRYPOINT becomes PID 1 of your container. If that process exits, your container dies.
So, check the container logs using: docker logs <container id>
and recheck your command: CMD django-admin startproject $PROJECTNAME creates the project and then exits, so the container stops as soon as it finishes.
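One way to keep the container alive (a sketch, not from the original answer; it assumes ENV PROJECTNAME is moved above these lines) is to create the project at build time and make a long-running foreground process the CMD, for example Django's development server:
RUN django-admin startproject $PROJECTNAME
CMD python $PROJECTNAME/manage.py runserver 0.0.0.0:8000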

adding a command to a docker image

I have built an image using these steps:
download the adminer package
wget https://www.adminer.org/static/download/4.2.4/adminer-4.2.4-en.php
mv adminer-4.2.4-en.php adminer.php
create the dockerfile
vi dockerfile
FROM ubuntu
RUN apt-get -y install apache2 php5 php5-curl php5-cli php5-mysql php5-gd mysql-client mysql-server
RUN apt-get -y install postgresql postgresql-contrib
RUN apt-get -y install php5-pgsql
COPY adminer.php /var/www/html/
RUN chmod -R 777 /var/www/html/
build and run
docker build -t shantanuo/adminer1 .
docker run -i -t --rm -p 80:80 --name adminer1 shantanuo/adminer1
I need to run this command to start apache once I am inside the container.
sudo service apache2 start
How do I include this command in the Dockerfile? (The build failed after I added a CMD for this purpose.)
Is there any other (better) way of installing the adminer.php package?
Is it possible to reduce the size of this image?
What you are doing is opening an interactive bash session and trying to start a server inside it.
It would be better to start the same image in detached mode (-d) instead of -it, and let Apache run.
For that, as commented, you need to start FROM httpd:2.4, which:
has a Dockerfile that starts Apache by default
has an httpd-foreground script that launches the Apache server in the foreground.
Even better would be to use a PHP Docker image:
FROM php:5.6-apache
That way, you don't even have to install Apache or PHP. You just copy your PHP application.
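For the adminer setup above, that could look like this (a sketch; the php:5.6-apache base image already runs Apache in the foreground, so no service command or extra CMD is needed):
FROM php:5.6-apache
COPY adminer.php /var/www/html/
Then build and run it detached:
docker build -t shantanuo/adminer1 .
docker run -d --rm -p 80:80 --name adminer1 shantanuo/adminer1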
Then, if you need to, you can still open a bash session with:
docker exec -it <yourContainer> bash
