Permissions in a Docker volume

I am struggling with permissions on a Docker volume: I get "access denied" when writing.
This is a small part of my Dockerfile:
FROM ubuntu:18.04
RUN apt-get update && \
apt-get install -y \
apt-transport-https \
build-essential \
ca-certificates \
curl \
vim && ...
RUN curl -sL https://deb.nodesource.com/setup_8.x | bash - && apt-get install -y nodejs
# Add non-root user
ARG USER=user01
RUN useradd -Um -d /home/$USER -s /bin/bash $USER && \
apt install -y python3-pip && \
pip3 install qrcode[pil]
#Copy that startup.sh into the scripts folder
COPY /scripts/startup.sh /scripts/startup.sh
#Making the startup.sh executable
RUN chmod -v +x /scripts/startup.sh
#Copy node API files
COPY --chown=user01 /node_api/* /home/user01/
USER $USER
WORKDIR /home/$USER
# Expose needed ports
EXPOSE 3000
VOLUME /data_storage
ENTRYPOINT [ "/scripts/startup.sh" ]
Also a small part of my startup.sh:
#!/bin/bash
/usr/share/lib/provision.py --enterprise-seed $ENTERPRISE_SEED > config.json
Then my docker build command:
sudo docker build -t mycontainer .
And the docker run command:
sudo docker run -v data_storage:/home/user01/.client -p 3008:3000 -itd mycontainer
The problem I have is that the Python script creates the folder /home/user01/.client and copies some files into it. That always worked fine. But now I want those files, which are data files, in a volume for backup purposes. And with the volume mapped, I get permission denied, so the Python script is no longer able to write.
So at the end of my Dockerfile, this instruction, combined with the mapping in the docker run command, gives me the permission denied:
VOLUME /data_storage
Any suggestions on how to resolve this? Are some extra permissions needed for user01?
Thanks

I was able to resolve my issue by removing the VOLUME instruction from the Dockerfile and just doing the mapping when executing docker run:
sudo docker run -v data_storage:/home/user01/.client -p 3008:3000 -itd mycontainer
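For completeness: a named volume is initialized from the contents and ownership of the image's directory the first time it is mounted. So another option (a minimal sketch, not tested against the original setup) would be to create the mount point with the right owner in the Dockerfile, before the USER instruction:
# Pre-create the mount point so the named volume inherits user01's ownership on first use
RUN mkdir -p /home/$USER/.client && chown -R $USER:$USER /home/$USER/.client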

Related

Copy container working directory data to local host directory

I am new to Docker. I built a Docker image using the code below:
FROM ubuntu
ADD requirements.txt .
RUN apt-get update && \
apt-get install -y python3 && \
apt install python3-pip -y && \
apt-get install -y libglib2.0-0 \
libsm6 \
libxrender1 \
libxext6
RUN python3 -m pip install -r requirements.txt
COPY my_code /container/home/user
ENV PYTHONPATH /container/home/user/program_dir_1
RUN apt install -y libgl1
WORKDIR /container/home/user
CMD python3 program_dir_1/program_dir_2/program_dir_3/main.py
Now I have a local dir, /home/host/local_dir, and I want all the files that the program creates during runtime to be written/copied to this local dir.
I am using the command below to bind-mount the volume:
docker run -it --volume /home/host/local_dir:/container/home/user my_docker_image
It gives me this error:
program_dir_1/program_dir_2/program_dir_3/main.py [Errno 2] No such file or directory
When I run the command below:
docker run -it --volume /home/host/local_dir:/container/home/user my_docker_image pwd
it prints the path of the directory I mounted. It seems the mount is also replacing the container's working directory with the host volume I am binding to.
Can anyone please help me understand how to copy the files and data generated in the container's working directory to a directory on the host?
PS: I went through the StackOverflow links below and tried to understand them, but without success:
How to write data to host file system from Docker container (found one solution, but got an error with the following):
docker run -it --rm --volume v_mac:/home/host/local_dir --volume v_mac:/container/home/user my_docker_image cp -r /home/host/local_dir /container/home/user
Docker: Copying files from Docker container to host (this is of little use here, as I assume the container should be in a running state; mine exited after the program completed)
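For what it's worth, the error is consistent with the mount shadowing the image's files: mounting /home/host/local_dir over /container/home/user hides everything the image copied there, including main.py, so Python cannot find the script. A minimal sketch of a workaround, assuming the program can be pointed at a subdirectory (the output path here is hypothetical):
# Mount the host dir onto a subdirectory so the image's code stays visible
docker run -it --volume /home/host/local_dir:/container/home/user/output my_docker_image
The program would then write its generated files under ./output, which is the bind-mounted host directory.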

Run Python scripts on the command line in Docker images

I built a Docker image using a Dockerfile, with Python and some libraries inside (but not my project code). In my local work dir, there are some scripts to be run in the container. So, here is what I did:
$ cd /path/to/my_workdir
$ docker run -it --name test -v `pwd`:`pwd` -w `pwd` my/code:test python src/main.py --config=test --results-dir=/home/me/Results
The command python src/main.py --config=test --results-dir=/home/me/Results is what I want to run inside the Docker container.
However, it returns:
/home/docker/miniconda3/bin/python: /home/docker/miniconda3/bin/python: cannot execute binary file
How can I fix it and run my code?
Here is my Dockerfile
FROM nvidia/cuda:10.1-cudnn7-runtime-ubuntu18.04
MAINTAINER Me <me@me.com>
RUN apt update -yq && \
apt install -yq curl wget unzip git vim cmake sudo
RUN adduser --disabled-password --gecos '' docker && \
adduser docker sudo && \
echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
USER docker
WORKDIR /home/docker/
RUN chmod a+rwx /home/docker/ && \
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh && \
bash Miniconda3-latest-Linux-x86_64.sh -b && rm Miniconda3-latest-Linux-x86_64.sh
ENV PATH /home/docker/miniconda3/bin:$PATH
RUN pip install absl-py==0.5.0 atomicwrites==1.2.1 attrs==18.2.0 certifi==2018.8.24 chardet==3.0.4 cycler==0.10.0 docopt==0.6.2 enum34==1.1.6 future==0.16.0 idna==2.7 imageio==2.4.1 jsonpickle==1.2 kiwisolver==1.0.1 matplotlib==3.0.0 mock==2.0.0 more-itertools==4.3.0 mpyq==0.2.5 munch==2.3.2 numpy==1.15.2 pathlib2==2.3.2 pbr==4.3.0 Pillow==5.3.0 pluggy==0.7.1 portpicker==1.2.0 probscale==0.2.3 protobuf==3.6.1 py==1.6.0 pygame==1.9.4 pyparsing==2.2.2 pysc2==3.0.0 pytest==3.8.2 python-dateutil==2.7.3 PyYAML==3.13 requests==2.19.1 s2clientprotocol==4.10.1.75800.0 sacred==0.8.1 scipy==1.1.0 six==1.11.0 sk-video==1.1.10 snakeviz==1.0.0 tensorboard-logger==0.1.0 torch==0.4.1 torchvision==0.2.1 tornado==5.1.1 urllib3==1.23
USER docker
ENTRYPOINT ["/bin/bash"]
Try making the file executable before running it, as John mentioned, by doing this in the Dockerfile:
FROM python:latest
COPY src/main.py /usr/local/share/
# COPY places the file at /usr/local/share/main.py, so chmod that path
RUN chmod +x /usr/local/share/main.py
# main.py also needs a shebang line (e.g. #!/usr/bin/env python) to be run directly
CMD ["/usr/local/share/main.py", "--config=test", "--results-dir=/home/me/Results"]
You can run a Python script in Docker by adding this to your Dockerfile:
FROM python:latest
COPY src/main.py /usr/local/share/
CMD ["python", "/usr/local/share/main.py", "--config=test", "--results-dir=/home/me/Results"]

How to exchange files between docker container and local filesystem?

I have TypeScript code that reads the contents of a directory and has to delete them one by one at some intervals.
Everything works fine locally. I made a Docker container for my code and wanted to achieve the same thing; however, I realized that the directory contents are the ones that existed at the time the image was built.
As far as I understand, the connection between the Docker container and the local file system is missing.
I have been looking into the bind and volume options, and I came across the following simple tutorial:
How To Share Data Between the Docker Container and the Host
According to the previous tutorial, theoretically, I would be able to achieve my goal:
If you make any changes to the ~/nginxlogs folder, you’ll be able to see them from inside the Docker container in real-time as well.
However, I followed exactly the same steps but still couldn't see the changes made locally reflected in the docker container, or vice versa.
My question is: How can I access my local file system from a docker container to read/write/delete files?
Update
This is my Dockerfile:
FROM ampervue/ffmpeg
RUN curl -sL https://deb.nodesource.com/setup_10.x | sudo -E bash -
RUN apt-get update -qq && apt-get install -y --force-yes \
nodejs; \
apt-get clean
RUN npm install -g fluent-ffmpeg
RUN rm -rf /usr/local/src
RUN apt-get autoremove -y; apt-get clean -y
WORKDIR /work
COPY package.json .
COPY . .
CMD ["node", "sizeCalculator.js"]
An easy way is to volume mount in the docker run command:
docker run -it -v /<Source Dir>/:/<Destination Dir> <image_name> bash
Another way is to use docker-compose. Put your Dockerfile and docker-compose.yaml in the same directory. The main focus is the volumes mapping:
volumes:
  - E:\dirToMap:/vol1
docker-compose.yaml:
version: "3"
services:
  ampervue:
    build:
      context: ./
    image: <Image Name>
    container_name: ampervueservice
    volumes:
      - E:\dirToMap:/vol1
    ports:
      - 8080:8080
And declare the volume in the Dockerfile:
FROM ampervue/ffmpeg
RUN curl -sL https://deb.nodesource.com/setup_10.x | sudo -E bash -
RUN apt-get update -qq && apt-get install -y --force-yes \
nodejs; \
apt-get clean
RUN npm install -g fluent-ffmpeg
RUN rm -rf /usr/local/src
RUN apt-get autoremove -y; apt-get clean -y
WORKDIR /work
VOLUME /vol1
COPY package.json .
COPY . .
CMD ["node", "sizeCalculator.js"]
and run the following command to bring the container up:
docker-compose -f "docker-compose.yaml" up -d --build
The examples below come directly from the docs:
The --mount and -v examples below produce the same result. You can't run them both unless you remove the devtest container after running the first one.
with -v:
docker run -d -it --name devtest -v "$(pwd)"/target:/app nginx:latest
with --mount:
docker run -d -it --name devtest --mount type=bind,source="$(pwd)"/target,target=/app nginx:latest
This is where you have to type your 2 different paths:
-v /path/from/your/host:/path/inside/the/container
<-------host------->:<--------container------->
--mount type=bind,source=/path/from/your/host,target=/path/inside/the/container
<-------host-------> <--------container------->

Running a PHPUnit test within a Docker container

I want to run a specific version of PHPUnit WITHIN a Docker container. This container will use a specific version of PHP, i.e.
php:5.6-apache
It's a Laravel application. I installed PHPUnit via Composer on the host and then used the volume option to transfer it to the container.
My composer.json file has the following entry:
"require-dev": {
"phpunit/phpunit": "^5.0"
},
This is my docker run command to run the tests in my testdev container:
docker run --rm -it -v ~/Users/mow/Documents/devFolder/testdev:/app testdev_php "php ./vendor/bin/phpunit"
This returns the error:
exec: fatal: unable to exec php ./vendor/bin/phpunit: No such file or directory
I am unclear why it says this, because the vendor directory is at the root of my site directory.
This is my Dockerfile:
FROM php:5.6-apache
ENV S6_OVERLAY_VERSION 1.11.0.1
RUN apt-get update && apt-get install -y \
libldap2-dev \
git \
--no-install-recommends \
&& rm -r /var/lib/apt/lists/* \
&& docker-php-ext-configure ldap --with-libdir=lib/x86_64-linux-gnu/ \
&& docker-php-ext-install ldap \
&& docker-php-ext-install mysqli pdo pdo_mysql
#install xdebug
RUN git clone https://github.com/xdebug/xdebug.git \
&& cd xdebug \
&& git checkout tags/XDEBUG_2_5_5 \
&& phpize \
&& ./configure --enable-xdebug \
&& make \
&& make install
RUN a2enmod rewrite
COPY ./docker/rootfs /
COPY . /app
WORKDIR /app
ENTRYPOINT ["/init"]
I guess the real question is this: what is the correct way to run a PHPUnit test within a Docker container, so that it is subject to the PHP version inside that container?
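A hedged observation before the answers: quoting the whole command makes "php ./vendor/bin/phpunit" a single argument, so the container tries to exec one file with that literal name, which is exactly what the "unable to exec" message reports. Passing the command unquoted splits it into a program and its arguments:
docker run --rm -it -v ~/Users/mow/Documents/devFolder/testdev:/app testdev_php php ./vendor/bin/phpunit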
In the entrypoint, you need to run composer install to install the required packages so that they are available in the Docker container.
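A minimal sketch of such an entrypoint script (the file name and contents are hypothetical; it assumes Composer is available in the image):
#!/bin/sh
# entrypoint.sh (hypothetical): install dependencies, then hand off to the main command
composer install --no-interaction
exec "$@"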
Without a Dockerfile, use this command in Windows PowerShell:
docker run --rm -v ${pwd}:/app composer:latest require --dev phpunit/phpunit:^8
This creates a Composer container to install PHPUnit, then removes the container.
Note: use %cd% in Windows cmd, ${pwd} in Windows PowerShell, and $PWD on Linux.
Then create a new PHP container, mounting all the files, including the PHPUnit framework:
docker run -d -p 80:80 --name my-php-apache -v ${pwd}:/var/www/html php:7.4.0-apache
To activate the autoload file and run the tests, please see this page.
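Once that container is running, the tests can be executed inside it (a sketch, assuming PHPUnit was installed into vendor/ as above and the files are mounted at /var/www/html):
docker exec -it my-php-apache php vendor/bin/phpunit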

Docker container port issue: not able to access Tomcat URL using host IP

I am new to Docker. I have set up a Docker container on an Amazon Linux box.
I have a Dockerfile which installs Tomcat, Java, and a WAR.
I can see all the installations present in the Docker container when I navigate through it, in the exact folders I specified in the Dockerfile.
When I run the Docker container, it says the Tomcat server has started, and I have also tailed the logs, so I can see the service is running.
But when I open the host IP and port 8080 in a browser, it says the URL can't be reached.
These are the commands to build and run the image, which work fine, and I can see the status as running:
docker build -t friendly1 .
docker run -p 8080:8080 friendly1
What am I missing here? I'd appreciate some help with this.
FROM centos:latest
RUN yum -y update && \
yum -y install wget && \
yum -y install tar && \
yum -y install zip unzip
ENV JAVA_HOME /opt/java/jdk1.7.0_67/
ENV CATALINA_HOME /opt/tomcat/apache-tomcat-7.0.70
ENV SAVIYNT_HOME /opt/tomcat/apache-tomcat-7.0.70/webapps
ENV PATH $PATH:$JAVA_HOME/jre/jdk1.7.0_67/bin:$CATALINA_HOME/bin:$CATALINA_HOME/scripts:$CATALINA_HOME/apache-tomcat-7.0.70/bin
ENV JAVA_VERSION 7u67
ENV JAVA_BUILD 7u67
RUN mkdir /opt/java/
RUN wget https://<S3location>/jdk-7u67-linux-x64.gz && \
tar -xvf jdk-7u67-linux-x64.gz && \
#rm jdk*.gz && \
mv jdk* /opt/java/
# Install Tomcat
ENV TOMCAT_MAJOR 7
ENV TOMCAT_VERSION 7.0.70
RUN mkdir /opt/tomcat/
RUN wget https://<s3location>/apache-tomcat-7.0.70.tar.gz && \
tar -xvf apache-tomcat-${TOMCAT_VERSION}.tar.gz && \
#rm apache-tomcat*.tar.gz && \
mv apache-tomcat* /opt/tomcat/
RUN chmod +x ${CATALINA_HOME}/bin/*sh
WORKDIR /opt/tomcat/apache-tomcat-7.0.70/
CMD "startup.sh" && tail -f /opt/tomcat/apache-tomcat-7.0.70/logs/*
EXPOSE 8080
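No answer is recorded for this question, but a few hedged things to check: that the Amazon security group allows inbound traffic on port 8080, that the container is still running (docker ps), and that Tomcat stays in the foreground so the container does not exit early. A sketch of the last point, replacing the startup.sh-plus-tail CMD:
# Run Tomcat in the foreground so it is the container's main process
CMD ["catalina.sh", "run"]
From the host, curl http://localhost:8080 confirms whether the mapped port responds before testing the public IP.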
