Docker bind-mount not working as expected within AWS EC2 Instance

I have created the following Dockerfile to run a spring-boot app: myapp within an EC2 instance.
# Use an official java runtime as a parent image
FROM openjdk:8-jre-alpine
# Add a user to run our application so that it doesn't need to run as root
RUN adduser -D -s /bin/sh myapp
# Set the current working directory to /home/myapp
WORKDIR /home/myapp
#copy the app to be deployed in the container
ADD target/myapp.jar myapp.jar
#create a file entrypoint-dos.sh and put the project entrypoint.sh content in it
ADD entrypoint.sh entrypoint-dos.sh
#Get rid of windows characters and put the result in a new entrypoint.sh in the container
RUN sed -e 's/\r$//' entrypoint-dos.sh > entrypoint.sh
#set the file as an executable and set myapp as the owner
RUN chmod 755 entrypoint.sh && chown myapp:myapp entrypoint.sh
#set the user to use when running the image to myapp
USER myapp
# Make port 9010 available to the world outside this container
EXPOSE 9010
ENTRYPOINT ["./entrypoint.sh"]
Because I need to access myapp's logs from the EC2 host machine, I want to bind-mount a folder onto the logs folder sitting within the "myapp" container here: /home/myapp/logs
This is the command that I use to run the image in the EC2 console:
docker run -p 8090:9010 --name myapp myapp:latest -v home/ec2-user/myapp:/home/myapp/logs
The container starts without any issues, but the mount is not achieved, as seen in the following docker inspect extract:
...
"Mounts": [],
...
I have tried the following actions but ended up with the same result:
--mount type=bind instead of -v
use volumes instead of bind-mount
I have even tried the --privileged option
In the Dockerfile: I tried USER root instead of myapp
I believe this has nothing to do with the EC2 machine but with my container, since running other containers with bind mounts on the same host works like a charm.
I am pretty sure I am messing something up in my Dockerfile.
But what am I doing wrong in that Dockerfile?
or
What am I missing?
Here is the entrypoint.sh, if needed:
#!/bin/sh
echo "The app is starting ..."
exec java ${JAVA_OPTS} -Djava.security.egd=file:/dev/./urandom -jar -Dspring.profiles.active=${SPRING_ACTIVE_PROFILES} "${HOME}/myapp.jar" "$@"

I think the issue might be the order of the options on the command line. Docker expects the last two arguments to be the image id/name and (optionally) a command/args to run as pid 1.
https://docs.docker.com/engine/reference/run/
The basic docker run command takes this form:
$ docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]
You have the mount option (-v in the example you provided) after the image name (myapp:latest). I'm not sure, but perhaps the -v ... is being interpreted as arguments to be passed to your entrypoint script (which are being ignored) and docker run isn't seeing it as a mount option.
Also, the source of the mount here (home/ec2-user/myapp) doesn't start with a leading forward slash (/), which, I believe, makes it relative to where the docker run command is executed from. Make sure the source path starts with a forward slash (i.e. /home/ec2-user/myapp) so that it always mounts the directory you expect, i.e. -v /home/ec2-user...
Have you tried this order:
docker run -p 8090:9010 --name myapp -v /home/ec2-user/myapp:/home/myapp/logs myapp:latest
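Once the container is up, you can confirm that Docker registered the bind mount; a quick check from the EC2 host, assuming the container is still named myapp:
# Should now list a bind mount instead of the empty "Mounts": [] seen before
docker inspect --format '{{ json .Mounts }}' myapp
# The application's log files should then show up on the host side
ls -l /home/ec2-user/myapp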

Related

understanding docker: how come my docker container content is dynamic?

I want to make sure I understand Docker correctly: when I build an image from the current directory, I run:
docker build -t imgfile .
What happens when I change the content of a file in the directory AFTER the image is built? From what I've tried, it seems it also changes the content of the Docker image dynamically.
I thought the Docker image was like a zip file that could only be changed with Docker commands or by logging into the image and running commands.
The Dockerfile is:
FROM lambci/lambda:build-python3.8
WORKDIR /var/task
EXPOSE 8000
RUN echo 'export PS1="\[\e[36m\]zappashell>\[\e[m\] "' >> /root/.bashrc
CMD ["bash"]
And the docker run command is:
docker run -ti -p 8000:8000 -e AWS_PROFILE=zappa -v "$(pwd):/var/task" -v ~/.aws/:/root/.aws --rm zappa-docker-image
Your docker run command isn't really running your image at all. The docker run -v $(pwd):/var/task syntax overwrites what was in /var/task in the image with a bind mount of the current directory on the host. So when you edit a file on your host, the container sees the same host directory (not the content from the image), and you see the changes inside the container as well.
You're right that the image is immutable. The image you show doesn't really contain anything, beyond a .bashrc file that won't usually be used. You can try running the image without the -v options to see:
docker run --rm zappa-docker-image ls -al
# just shows `.` and `..` directories
I'd recommend making sure you COPY your application into the image, setting its CMD to actually run the application, and removing the -v option that overwrites its main directory. If your goal is to run host code against host files with host supporting data like your AWS credentials, you're not really getting much benefit from introducing Docker in between your application and every single file it uses.
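For contrast, running the same ls with the bind mount in place should show the host's current directory rather than the (empty) image content; a quick check, assuming the image name from the question:
# With the bind mount, /var/task reflects the host's current directory, not the image
docker run --rm -v "$(pwd):/var/task" zappa-docker-image ls -al /var/task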

Docker not saving a file created using python - Flask application

I created a Flask application. This application receives an XML file from a URL and saves it:
response = requests.get(base_url)
with open('currencies.xml', 'wb') as file:
    file.write(response.content)
When I run the application without Docker, the file currencies.xml is correctly created inside my app folder.
However, this behaviour does not occur when I use docker.
In docker I run the following commands:
docker build -t my-api-docker:latest .
docker run -p 5000:5000 my-api-docker ~/Desktop/myApiDocker # This is where I want the file to be saved: inside the main Flask folder
When I run the second command, I get:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"/Users/name/Desktop/myApiDocker\": stat /Users/name/Desktop/myApiDocker: no such file or directory": unknown.
ERRO[0001] error waiting for container: context canceled
But If I run:
docker build -t my-api-docker:latest .
docker run -p 5000:5000 my-api-docker # Without specifying the PATH
I can access the website (but it is pretty useless without the file currencies.xml).
Dockerfile
FROM python:3.7
RUN pip install --upgrade pip
COPY ./requirements.txt /app/requirements.txt
WORKDIR /app
RUN pip install -r requirements.txt
COPY . /app
EXPOSE 5000
CMD [ "flask", "run", "--host=0.0.0.0" ]
When you
docker run -p 5000:5000 my-api-docker ~/Desktop/myApiDocker
Docker interprets everything after the image name (my-api-docker) as the command to run. It runs /Users/name/Desktop/myApiDocker as a command, instead of what you have as the CMD in the Dockerfile, and when that path doesn't exist in the container, you get the error you see.
It's a little unlikely you'll be able to pass this path to your flask run command as a command-line argument. A typical way of dealing with this is by using an environment variable instead. In your code,
import os

download_dir = os.environ.get('DOWNLOAD_DIR', '.')
currencies_xml = os.path.join(download_dir, 'currencies.xml')
with open(currencies_xml, 'wb') as file:
    ...
Then when you start your container, you can pass that as an environment variable with the docker run -e option. Note that this names a path inside the container; there's no particular need for this to match the path on the host.
docker run \
-p 5000:5000 \
-e DOWNLOAD_DIR=/data \
-v $HOME/Desktop/myApiDocker:/data \
my-api-docker
It's also fairly common to put an ENV statement in your Dockerfile or otherwise pick a fixed path for this, and just specify that your image's interface is that it will download the file into whatever is mounted on /data.
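For example, a minimal sketch assuming you add ENV DOWNLOAD_DIR=/data to the Dockerfile so that /data becomes the image's fixed download location; then only the mount needs to be supplied at run time:
# DOWNLOAD_DIR is already baked into the image, so only the bind mount is needed
docker run \
  -p 5000:5000 \
  -v "$HOME/Desktop/myApiDocker:/data" \
  my-api-docker
# The downloaded file then shows up on the host
ls -l "$HOME/Desktop/myApiDocker/currencies.xml"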
When you docker run the image, the process's context is the container's file system, not your host's file system. So my-api-docker ~/Desktop/myApiDocker (attempts to) place the file in the container's (!) ~/Desktop.
Instead you need to mount one of your host's directories into the container's file system and store the file in the mounted directory.
Something like:
docker run ... \
--volume=[HOST-PATH]:[CONTAINER-PATH] \
... \
my-api-docker [CONTAINER-PATH]/thefile
The container then writes the file to [CONTAINER-PATH]/thefile but this is mapped to the host's [HOST-PATH]/thefile.
NB The values for [HOST-PATH] and [CONTAINER-PATH] must be absolute not relative paths.
You may prove this behavior to yourself using e.g. either python:3.7 or busybox:
# List my host's root
ls -l /
# List the container's root
docker run --rm busybox ls -l /
# Mount the host's /tmp into the container's /tmp
ls -l /tmp
docker run --rm --volume=/tmp:/tmp busybox ls -l /tmp
HTH!

DNS resolution with the container

I have a Docker image which is built from the following file.
FROM java:7
MAINTAINER Tushar Gandhi
ARG version
ENV version=$version
ARG port
ENV port=$port
RUN mkdir -p /cacheDir/services/live/prediction/p$port/$version/logs
RUN ls -tlr /cacheDir/services/live/prediction/p$port/
RUN mkdir -p /cacheDir/services/releases/prediction/p$port/$version/
RUN mkdir -p /cacheDir/services/predictionmodel
ADD target/predictionDependencies/* /cacheDir/services/predictionmodel/
ADD /target/prediction-0.0.13-SNAPSHOT.jar /cacheDir/services/releases/prediction/p$port/$version/prediction-0.0.13-SNAPSHOT.jar
ADD /target/instance.properties /cacheDir/services/releases/prediction/p$port/$version/instance.properties
ADD /target/logback.xml /cacheDir/services/releases/prediction/p$port/$version/logback.xml
RUN ls -ltr /cacheDir/services/live/prediction/p$port/$version/
RUN ls -ltr /cacheDir/services/releases/prediction/p$port/$version/
RUN ls -ltr /cacheDir/services/predictionmodel
ENTRYPOINT ["sh","-c","java -server -Xmx2g -Xloggc:/cacheDir/services/live/prediction/p${port}/${version}/logs/gc.log -verbose:gc -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/cacheDir/services/live/prediction/p${port}/${version}/oom.dump -Dlogback.configurationFile=/cacheDir/services/releases/prediction/p${port}/${version}/logback.xml -Dlog.home=/cacheDir/services/live/prediction/p${port}/${version}/logs -Dlogback.debug=true -Dbroker.l^Ct=sv-kafka6.pv.sv.nextag.com:9092,sv-kafka7.pv.sv.nextag.com:9092,sv-kafka8.pv.sv.nextag.com:9092,sv-kafka9.pv.sv.nextag.com:9092 -jar /cacheDir/services/releases/prediction/p${port}/${version}/prediction-0.0.13-SNAPSHOT.jar $port /cacheDir/services/releases/prediction/p${port}/${version}/instance.properties /com/abc/services/$ZK_PATH"]
I'm using the following build command to build the image.
docker build --build-arg version=test1 --build-arg port=3001 -f Dockerfile -t prediction:test1 .
The image builds successfully and the container comes up fine. This is the run command used:
sudo docker run -p 7105:3001 -v ~/PredictionVolume/logs/:/cacheDir/services/live/prediction/p5030/Testing1/logs/ -e ZK_PATH=qa -t prediction:test
Now, the problem is that when my application runs in a Docker container, it tries to access the URL qa-zk1.com:2181. This URL is accessible from my system but not from the Docker container. Can anyone please suggest a way to make the URL accessible from the container?
[Edit] I have been trying different methods and found that I was able to ping google.com from the container, which showed me that internet access is working. If the internet is working, then that URL should also be accessible, but it isn't, so it seems to be a DNS resolution problem. I tried with the IP address and was able to hit the service properly; now I need to find out how to reach the service using the hostname rather than the IP address.
If you can reach the site by IP, it means that inside the container you are pointing to a DNS server which does not know the "qa-zk1.com" name.
You have 2 options:
Add the hostname and its IP to the container's hosts file
/etc/hosts
Update the container's DNS configuration
See Configure container DNS for more details
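Both options can be applied per container at docker run time; a minimal sketch with placeholder IPs (adjust to your environment):
# Option 1: add a static hosts entry inside the container
docker run --add-host qa-zk1.com:<ip-of-qa-zk1> ... prediction:test
# Option 2: point the container at a DNS server that can resolve the name
docker run --dns <internal-dns-server-ip> ... prediction:test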

Why does the directory created after WORKDIR disappear?

Currently, I have the following Dockerfile:
FROM ubuntu:14.04
ADD . /var/www/html/xxx
RUN mkdir -p /tmp/debug1
WORKDIR /var/www/html/xxx
RUN mkdir -p /var/www/html/xxx/debug2
After docker-compose up -d
I went inside the container by using the following command
docker exec -it xxx_xxx_1 bash
To my surprise:
/tmp/debug1 is created
/var/www/html/xxx/debug2 is not created
However, if I went inside the container using
docker run -it xxx bash
What I realize is that
/tmp/debug1 is created
/var/www/html/xxx/debug2 is created
May I know why there is such behavior after the WORKDIR line?
This sounds like volumes are in play. Can you update your question with your compose file?
I assume that you have defined a volume in the compose file which mounts something onto /var/www/html
If this is true, then this would explain what you see. When you run the container without compose, you see /var/www/html/xxx/debug2 because /var/www/html contains the content originally created at docker build time. If you run it with compose, Docker mounts a host directory or a Docker volume onto /var/www/html, in which case everything that was originally there becomes "invisible". If your volume/host dir is empty, then /var/www/html is also empty.
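You can verify this from the running compose container; a quick check, assuming the container name from the question:
# An entry with "Destination": "/var/www/html" means a volume is shadowing the image content
docker inspect --format '{{ json .Mounts }}' xxx_xxx_1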

Docker add warfile to official Tomcat image

I pulled the official Docker image for Tomcat by running this command:
docker run -it --rm tomcat:8.0
Using this as a base image, I need to build a new image that contains my WAR file in the Tomcat webapps folder. I created a Dockerfile like this:
From tomcat8
ADD warfile /usr/local/tomcat
When I build an image from this Dockerfile and run it, I am not able to see the Tomcat front page.
Can anybody tell me how to add my WAR file to the official Tomcat image's webapps folder?
Reading from the documentation of the repo, you would do something like this:
FROM tomcat
MAINTAINER xyz
ADD your.war /usr/local/tomcat/webapps/
CMD ["catalina.sh", "run"]
Then build your image with docker build -t yourName <path-to-build-context> (the directory containing the Dockerfile)
And run it with:
docker run --rm -it -p 8080:8080 yourName
--rm removes the container as soon as you stop it
-p forwards the port to your host (or if you use boot2docker to this IP)
-it allows interactive mode, so you see if something gets deployed
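Once the container is running you can verify the deployment from the host; a quick sanity check, assuming the WAR from the example is named your.war (so the context path is /your) and port 8080 is reachable on localhost:
# A 200 (or a redirect) means Tomcat deployed the WAR
curl -I http://localhost:8080/your/
# The deployment should also be visible in the container output
docker logs <container-name-or-id>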
Building on @daniel's answer, if you want to deploy your WAR to the root of Tomcat, I did this:
FROM tomcat:7-jre7
MAINTAINER xyz
RUN ["rm", "-fr", "/usr/local/tomcat/webapps/ROOT"]
COPY ./target/your-webapp-1.0-SNAPSHOT.war /usr/local/tomcat/webapps/ROOT.war
CMD ["catalina.sh", "run"]
It deletes the existing root webapp, copies your WAR to the ROOT.war filename then executes tomcat.
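After building and running that image, the application should answer at the context root; a quick check with a placeholder image name:
docker build -t my-root-webapp .
docker run --rm -p 8080:8080 my-root-webapp
# In another shell: the ROOT webapp should answer at /
curl -I http://localhost:8080/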
docker run -it --rm --name MYTOMCAT -p 8080:8080 -v .../wars:/usr/local/tomcat/webapps/ tomcat:8.0
where the wars folder contains the WAR to deploy
How do you check the webapps folder?
The webapps folder is inside the Docker container.
If you want to access your webapps folder, you could mount a host directory into your container and use it as the webapps folder. That way you can access the files without going into Docker.
See here for details
To access your logs, you could do so when you run your container, e.g.:
docker run --rm -it -p 8080:8080 **IMAGE_NAME** /path/to/tomcat/bin/catalina.sh && tail -f /path/to/tomcat/logs
or you start your docker container and then do something like:
docker exec -it **CONTAINER_ID** tail -f /path/to/tomcat/logs
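Alternatively, with the official tomcat image you can bind-mount the logs directory to the host so the files can be read without entering the container; a sketch, with the host path as a placeholder:
docker run --rm -it -p 8080:8080 -v "$PWD/tomcat-logs:/usr/local/tomcat/logs" tomcat:8.0
# The log files Tomcat writes then appear directly on the host
ls -l tomcat-logs/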
If you are using a Spring MVC project then you need a server to run your application. Suppose you use Tomcat: then you need the Tomcat base image your application requires, which you specify through the FROM command.
You can set environment variables using the ENV command.
You can additionally use the RUN command, which executes while the Docker image is being built,
e.g. to give read/write/execute permissions to the webapps folder so Tomcat can unzip the WAR file:
RUN chmod -R 777 $CATALINA_HOME/webapps
One more command is CMD. Whatever you specify in the CMD command executes when the container runs. You specify its options in double quotes (" ") separated by commas (,),
e.g.
CMD ["catalina.sh","start"]
(NOTE: Remember that RUN executes at image build time and CMD executes when the container runs; this is confusing for new users.)
This is my Dockerfile -
FROM tomcat:9.0.27-jdk8-openjdk
VOLUME /tmp
RUN chmod -R 777 $CATALINA_HOME/webapps
ENV CATALINA_HOME /usr/local/tomcat
COPY target/*.war $CATALINA_HOME/webapps/myapp.war
EXPOSE 8080
CMD ["catalina.sh","run"]
Build your image using the command
docker build -t imageName <path_to_directory_containing_Dockerfile>
Check your Docker image using the command
docker images
Run the image using the command
docker run -p 9999:8080 imageName
Here 8080 is the Tomcat port inside the container and the application can be accessed on port 9999 on the host.
Try accessing your application at
localhost:9999/myapp/
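A quick way to verify the port mapping and deployment from the host, assuming the container from the command above is running:
# A 200 (or a redirect) means the WAR deployed and the 9999->8080 mapping works
curl -I http://localhost:9999/myapp/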
