Yesterday I completed an exercise from the book "Machine Learning Engineering with MLflow". I was very satisfied, but then I got to thinking about one thing that was not explained.
When using Docker (independently of MLflow), we build an image and then, when running it, we map ports and volumes so that ports inside the container can be reached from outside and so that the host's file structure is reflected inside the container. If we do not set up those mappings, neither happens.
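For example, a generic sketch of both mappings (the image and paths are placeholders, not from the exercise):

# publish container port 80 on host port 8080, and mirror ./site
# from the host at the web root inside the container (read-only)
docker run --rm -p 8080:80 -v "$(pwd)/site:/usr/share/nginx/html:ro" nginx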
However, now talking about the MLflow exercise, I had the following MLproject file:
name: stockpred

docker_env:
  image: stockpred

entry_points:
  main:
    command: "python train.py"
I built the image with
docker build -t stockpred .
and then I ran
mlflow run .
and after that I had my mlruns folder created on my host.
How did MLflow map my host volumes to the ones it used inside the container to run train.py?
I investigated further, and it seems that running mlflow run . automatically issues something like this (from MLflow's log output):
=== Running command 'docker run --rm \
  -v /home/host/route/mlruns:/mlflow/tmp/mlruns \
  -v /home/host/route/mlruns/0/4c277e4e8412a9f708890af939fef/artifacts:/home/host/route/mlruns/0/4c277e4e8412a9f708890af939fef/artifacts \
  -e MLFLOW_RUN_ID=4c277e4e8412a9f708890af939fef \
  -e MLFLOW_TRACKING_URI=file:///mlflow/tmp/mlruns \
  -e MLFLOW_EXPERIMENT_ID=0 \
  stockpred:17db901 python train.py' in run with ID '4c277e4e8412a9f708890af939fef' ===
The only strange thing is the second mapping, which mounts a host path at the identical path inside the container, presumably so that artifact locations recorded with host paths resolve the same way inside the container.
I am trying to use my nginx server in Docker, but I cannot use the files/folders if they belong to my volume. The goal of my test is to keep a volume between a folder on my computer and the container.
I have searched for 3 days and tried a lot of solutions, but with no effect (useradd, chmod, chown, www-data, etc.).
I don't understand: how is it possible to use nginx, a volume, and Docker together?
The only solution that works for me right now is to copy my volume's folder into another folder, so that I can chown it and use nginx. There is no official solution on the web, and I am surprised, because to me using Docker with a volume bound to its container would be basic for daily work.
If someone has managed to implement this, I would be very happy if you could share your code. I need to understand what I am missing.
FYI, I am working in a VM.
Thanks!
I think you are not passing the right path in the volume option. There are a few ways to do it: you can pass the full path, or you can use $(pwd) if you are on a Linux machine. Let's say you are in /home/my-user/code/nginx/ and your HTML files are in the html folder.
You can use:
$ docker run --name my-nginx -v /home/my-user/code/nginx/html/:/usr/share/nginx/html:ro -p 8080:80 -d nginx
or
$ docker run --name my-nginx -v ~/code/nginx/html/:/usr/share/nginx/html:ro -p 8080:80 -d nginx
or
$ docker run --name my-nginx -v $(pwd)/html/:/usr/share/nginx/html:ro -p 8080:80 -d nginx
I've created an index.html file inside the html folder; after the docker run, I was able to fetch it:
$ echo "hello world" >> html/index.html
$ docker run --name my-nginx -v $(pwd)/html/:/usr/share/nginx/html:ro -p 8080:80 -d nginx
$ curl localhost:8080
hello world
You can also create a Dockerfile, but then you need the COPY command. I'll give a simple example that works, but you should improve on it, for example by pinning an image version.
Dockerfile:
FROM nginx
COPY ./html /usr/share/nginx/html
...
$ docker build -t my-nginx:0.0.1 .
$ docker run -d -p 8080:80 my-nginx:0.0.1
$ curl localhost:8080
hello world
You can also use docker-compose, as in the sketch below. By the way, these examples are just to give you an idea of how it works.
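A minimal docker-compose sketch of the same bind mount (file and folder names assumed; adjust paths to your project):

# write a compose file that serves ./html read-only at nginx's web root
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  web:
    image: nginx
    ports:
      - "8080:80"
    volumes:
      - ./html:/usr/share/nginx/html:ro
EOF
docker-compose up -d
curl localhost:8080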
I am facing an issue where, after running the container and using a bind mount to mount a host directory into the container, I am not able to see new files created on the host machine inside the container.
The Python code creates a file inside the container which should be available on the host machine too; however, this does not happen when I start the container with the command below. Updates to the Python code and the HTML, on the other hand, are available inside the container.
sudo docker container run -p 5000:5000 --name flaskapp --volume feedback1:/app/feedback/ --volume /home/deepak/PycharmProjects/NewDockerProject/sampleapp:/app flask_image
However, after starting the container with the command below, everything seems to work fine. I can see all the files from the container on the host and vice versa (newly created and edited). I got this command from the Docker in a Month of Lunches book.
sudo docker container run --mount type=bind,source=/home/deepak/PycharmProjects/NewDockerProject/sampleapp,target=/app -p 5000:5000 --name flaskapp flask_image
Below is the content of my Dockerfile:
FROM python:3.8-alpine
WORKDIR /app
COPY ./requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python","main.py"]
Could someone please help me figure out the difference between the two commands? I am using Ubuntu. Thank you.
In my case I got volumes working using the following docker run args (but I am running without --mount type=bind):
docker run -it ... -v mysql_data:/var/lib/mysql -v storage:/usr/shared/app_storage
where:
mysql_data is the volume name
/var/lib/mysql is the path inside the container
You can list volumes with:
docker volume ls
and inspect them to see where they point on your system (usually /var/lib/docker/volumes/{volume_name}/_data):
docker volume inspect mysql_data
To create a volume, use the following command:
docker volume create {volume_name}
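A small end-to-end sketch (the volume and file names here are made up) showing that data written through a named volume persists across containers:

# create the volume and write a file through it from one container...
docker volume create app_storage
docker run --rm -v app_storage:/data alpine sh -c 'echo hello > /data/test.txt'
# ...then read it back from a completely fresh container
docker run --rm -v app_storage:/data alpine cat /data/test.txt   # prints: hello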
I'm building a simple docker image based on a Dockerfile, and I'd like to add an alias to the hosts file to allow me to access an application on my local machine rather than out on the internet.
When I run the following...
> docker build --add-host=example.com:172.17.0.1 -f ./Dockerfile -t my-image .
> docker run --name=my-container --network=my-bridge --publish 8080:8080 my-image
> docker exec -it my-container cat /etc/hosts
I don't see example.com 172.17.0.1 like I'd expect. Where does the host get added? Or is it not working? The documentation is very sparse, but it looks like I'm using the param correctly.
My Dockerfile is doing very little - just specifying a base image, installing a few things, and setting some environment variables. It looks somewhat like this:
FROM tomcat:9.0.40-jdk8-adoptopenjdk-openj9
RUN apt update
RUN apt --assume-yes install iputils-ping
# ... a few more installs ...
COPY ./conf /usr/local/tomcat/conf
COPY ./lib /usr/local/tomcat/lib
COPY ./webapps /usr/local/tomcat/webapps
ENV SOME_VAR="some value"
# ... more env variables ...
EXPOSE 8080
When the image is created and the container is run, my web app works fine, but I'd like to have certain communications (to example.com) redirected to an app running on my local machine.
When you run the container, you can pass --add-host:
docker run --add-host=example.com:172.17.0.1 --name=my-container --network=my-bridge --publish 8080:8080 my-image
The --add-host feature during build is designed to allow overriding a host during the build itself, not to persist that configuration in the image.
See also this question about the docker build --add-host command.
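To check that the entry is present at run time (reusing the names from the question; -d is added so the container stays up for the exec):

docker run -d --add-host=example.com:172.17.0.1 --name=my-container --network=my-bridge --publish 8080:8080 my-image
docker exec -it my-container cat /etc/hosts   # should now include: 172.17.0.1  example.com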
I've created a volume with docker volume create my-vol on my machine. But when I run my container as follows:
docker run -d \
--name=ppshein-test \
--mount source=my-vol,destination=/var/www/ -p 3000:3000 \
ppshein:latest
I found that my container is not working, so I checked its logs:
> sample-docker#1.0.0 start /var/www
> node index.js
and found only the above. That's why I tried to run the same image without attaching the volume, as follows:
docker run -d --restart=always -p 3001:3000 ppshein:latest
and found that it works smoothly. Its container logs show the following:
> sample-docker#1.0.0 start /var/www
> node index.js
Example app listening on port 3000!
Oddly, the Example app listening on port 3000! line appears for this last container but was never printed by the previous one.
Please let me know why. Thanks much.
I think this may be what you are looking for
(from the Docker docs):
If you use --mount to bind-mount a file or directory that does not yet exist on the Docker host, Docker does not automatically create it for you, but generates an error.
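A small illustration of the difference (the paths, names, and the alpine image here are placeholders):

# bind mount: this fails with an error if the host path does not exist
docker run --rm --mount type=bind,source=/no/such/dir,target=/data alpine true
# named volume: Docker creates the volume automatically if it is missing
docker run --rm --mount type=volume,source=my-vol,target=/data alpine true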
I am new to Docker, and I have a problem when it comes to shipping data containers. Usually we ship images, and users can start as many containers from an image as they want, right?
Now I want to ship some data too, so I have made a data container like so:
docker create -v /dbdata --name dbdata phusion/baseimage
Next I simply started a bash shell and inserted some data into my data container:
docker run --volumes-from dbdata -i -t phusion/baseimage /bin/bash
echo "foo" > /dbdata/bar.txt
exit
Now I want to allow my team members to use the same data (offline), so I would like to "send" my data container to them. Therefore I used
docker export dbdata > /tmp/cool_data.tar
But when I re-import this with
cat /tmp/data.tar | sudo docker import - dbdata2
I cannot use this "container", because it seems to be an image:
docker run --volumes-from dbdata2 -i -t phusion/baseimage /bin/bash
FATA[0000] Error response from daemon: Container dbdata2 not found. Impossible to mount its volumes
How do I export and import data containers correctly?
You can't export and import data in volumes like this - volumes are simply not included in export/import.
You don't need to do this, however: just zip or tar the directories the volumes are mapped to and send the archive to your colleagues. They can then make their own data containers from those files, along the lines of the sketch below.
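A minimal sketch of that approach, reusing the names from the question:

# archive the volume's contents via a throwaway container...
docker run --rm --volumes-from dbdata -v "$(pwd)":/backup phusion/baseimage tar cvf /backup/dbdata.tar /dbdata
# ...then, on a colleague's machine, restore into their own data container
docker create -v /dbdata --name dbdata2 phusion/baseimage
docker run --rm --volumes-from dbdata2 -v "$(pwd)":/backup phusion/baseimage tar xvf /backup/dbdata.tar -C /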
You may also want to look at Flocker, which can help you migrate containers and data.
You need to make a container out of this image first. Run this:
docker create -v /dbdata --name dbdata2 dbdata2
For more details, check out Creating and mounting Data Volume Containers
I'm having good luck with the following Dockerfile:
FROM scratch
ADD install_theGreatSoftwarePkg /install
VOLUME /install
Then I do a build and create:
docker build -t greatSoftwareInstallImage .
docker create -t --name=greatSoftwareMedia greatSoftwareInstallImage /bin/true
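Other containers can then consume the shipped data via --volumes-from (a sketch; ubuntu is just an arbitrary image that has ls):

docker run --rm --volumes-from greatSoftwareMedia ubuntu ls /install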