I've started a Docker container with tutum/lamp image with this command:
docker run -d -P --name design_patterns -v /public_html:/app tutum/lamp
As you can see, I mounted my local folder /public_html to the /app directory in the container.
Having started this container, I realized that the PHP files present in /public_html are not accessible from the browser. I probably should have mounted my local folder to a different location in the container.
How can I inspect the running container to check where my local data should be loaded?
You can do two things:
docker inspect design_patterns
This will show you detailed information about the running container, including its volume mounts.
Or just get into the container:
docker exec -it design_patterns bash
This will drop you into a bash shell inside the container, where you can inspect its current state with regular shell commands.
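If you only want to see where your volumes ended up, you can filter the inspect output (the --format template below assumes a reasonably recent Docker client):
docker inspect --format='{{json .Mounts}}' design_patterns
This prints the source and destination of every mount, so you can check where /public_html landed inside the container.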
Related
Inside a VM I created a Docker container. Now there are some Python files outside the container (in a host directory of the VM). How can I execute these Python files from the container? Can anyone help me with this?
You should mount the VM directory like this:
docker run -d -it --name your-container-name -v /host/path:/usr/local/bin image:tag
Also, make sure the permissions on /host/path are set properly.
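For example (hypothetical paths and names, adjust them to your setup):
docker run -d -it --name my-python-box -v /home/user/scripts:/scripts python:3 bash
docker exec my-python-box python /scripts/myfile.py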
Volumes in docker
You can copy or mount the Python file into the Docker container and then execute it by passing the python interpreter as the container command.
For example:
docker run -it -v $(pwd)/myfile.py:/tmp/myfile.py python python /tmp/myfile.py
docker run -t -i -v <host_dir>:<container_dir> ubuntu /bin/bash
With this command I can access the host directory inside the container at the mount point, and I can run a Python file that lives on the host from inside the container.
Dockerfile contains
FROM java:8
I am running this by mounting my host directory into the container with the following command:
docker run -it -p 8585:9090 -v ~/Docker/:/data d23bdf5b1b1b /data/bin/script.sh
I am able to run this successfully, but when I try to access it from the browser I cannot see anything because of a port conflict: two services are running on the same port.
How do I solve this?
Your problem is that you are trying to run the script in a new container, and that container then exits. It has nothing to do with the existing container that is running.
Also, when you specify a command with docker run, it does not run the CMD that you defined while building the Dockerfile; your command replaces it.
So what you need to do is the following:
docker run -d -p 8585:9090 -v ~/Docker/:/data d23bdf5b1b1b
After the above command runs, it will print the ID of the new container. Now execute your script in this new container:
docker exec -it <containerid> /data/bin/script.sh
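You can then double-check the published port with:
docker port <containerid>
which should show something like 9090/tcp -> 0.0.0.0:8585.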
I'm attempting to create a Jenkins job that remotely runs "docker cp" to copy a folder from the running container to the host machine.
Currently I have
docker run --rm docker:1.7.1 docker -H stuff.dev.blah.com:5000 cp cc_head:/opt/blah/build/cc_head/games /home/devadmin/games
But that doesn't work.
So, the host machine is stuff.dev.blah.com, and I can ssh to it with ssh devadmin@stuff.dev.blah.com,
and on the host machine docker cp cc_head:/opt/blah/build/cc_head/games /home/devadmin/games works.
We only have Docker 1.7.1 available here, but if you manage to do this with a newer version I'd also be happy.
The running container is called cc_head.
Any suggestions?
You have two options:
Mount the folder in cc_head container
Run the container cc_head with -v /home/devadmin/games:/somefolder added to its docker run options, then copy the files inside that container with docker exec:
docker run --rm docker:1.7.1 docker -H stuff.dev.blah.com:5000 exec cc_head cp -r /opt/blah/build/cc_head/games/. /somefolder/
Mount the folder in a separate container
Run another container on the host that maps /home/devadmin/games and shares cc_head's volumes, and use that for the copy operation (this only works if the games directory lives inside one of cc_head's volumes):
docker run --rm docker:1.7.1 docker -H stuff.dev.blah.com:5000 run --rm --volumes-from cc_head -v /home/devadmin/games:/somefolder busybox cp -r /opt/blah/build/cc_head/games/. /somefolder/
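Since you mention that ssh access works and that plain docker cp does the right thing on the host itself, a third option (assuming the Jenkins user has an ssh key for devadmin set up) is to run the copy over ssh from the Jenkins job:
ssh devadmin@stuff.dev.blah.com docker cp cc_head:/opt/blah/build/cc_head/games /home/devadmin/games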
I have Jenkins running in a Docker container. The home directory is in a host volume, in order to ensure that the build history is preserved when updates to the container are actioned.
I have updated the image so that it creates an additional file in the home directory. When the new image is pulled and run, I cannot see the new file.
ENV JENKINS_HOME=/var/jenkins_home
RUN mkdir -p ${JENKINS_HOME}/.m2
COPY settings.xml ${JENKINS_HOME}/.m2/settings.xml
RUN chown -R jenkins:jenkins ${JENKINS_HOME}/.m2
VOLUME ["/var/jenkins_home"]
I am running the container like this:
docker run -v /host/directory:/var/jenkins_home -p 80:8080 jenkins
I had previously run Jenkins, so the home directory already exists on the host. When I pull the new image and run it, I see that the file .m2/settings.xml is not created. Why is this?
Basically when you run:
docker run -v /host-src-dir:/container-dest-dir my_image
you overlay /container-dest-dir with whatever is in /host-src-dir on the host.
From the docs:
$ docker run -d -P --name web -v /src/webapp:/webapp training/webapp python app.py
This command mounts the host directory, /src/webapp, into the
container at /webapp. If the path /webapp already exists inside the
container’s image, the /src/webapp mount overlays but does not remove
the pre-existing content. Once the mount is removed, the content is
accessible again. This is consistent with the expected behavior of the
mount command.
This SO question is also relevant: docker mounting volumes on host
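A quick way to see the overlay behaviour for yourself (hypothetical path; any small image will do):
mkdir /tmp/empty-dir
docker run --rm -v /tmp/empty-dir:/var/log ubuntu ls /var/log
Without the -v the image's /var/log has content; with the empty host directory mounted on top of it, the listing comes back empty.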
It seems you want it the other way around (i.e. the container is source and the host is destination).
Here is a workaround:
Create the volume in your Dockerfile
Run it without -v i.e.: docker run --name=my_container my_image
Run docker inspect --format='{{json .Mounts}}' my_container
This will give you output similar to:
[{"Name":"5e2d41896b9b1b0d7bc0b4ad6dfe3f926c73","Source":"/var/lib/docker/volumes/5e2d41896b9b1b0d7bc0b4ad6dfe3f926c73/_data","Destination":"/var/jenkins_home","Driver":"local","Mode":"","RW":true,"Propagation":""}]
Which means the directory as it exists inside the container was mounted into the host directory /var/lib/docker/volumes/5e2d41896b9b1b0d7bc0b4ad6dfe3f926c73/_data
Unfortunately, I do not know a way to make it mount on a specific host directory instead.
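What you can do instead is seed the host directory once from the image and only then start using the bind mount. A rough sketch, using the image and paths from the question (--entrypoint bash is used so nothing else starts up):
docker run --rm --entrypoint bash -v /host/directory:/target jenkins \
    -c 'cp -a /var/jenkins_home/. /target/'
docker run -v /host/directory:/var/jenkins_home -p 80:8080 jenkins
After the first command the host directory should contain the image's home directory content (including .m2/settings.xml), so the bind-mounted run no longer starts from an empty or stale directory.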
When running Docker, you can mount files and directories using the --volume option. E.g.:
docker run --volume $(pwd)/local:/remote myimage
I'm running a docker image that defines VOLUMEs in the Dockerfile. I need to access a config file that happens to be inside one of the defined volumes. I'd like to have that file "synced" on the host so that I can edit it. I know I could run docker exec ..., but I hope to avoid that overhead just for editing one file. I found out that the volumes created by the VOLUME lines are stored in /var/lib/docker/volumes/<HASH>/_data.
Using docker inspect I was able to find the directory that is mounted:
docker inspect gitlab-runner | grep -B 1 '"Destination": "/etc/gitlab-runner"' | head -n 1 | cut -d '"' -f 4
Output:
/var/lib/docker/volumes/9c233c085c36380c6c33035222c16e5d061368c5060cc81dda2a9a713a2b2b3b/_data
So the question is:
Is there a way to re-mount volumes defined in an image? OR to somehow get the directory easier than my oneliner above?
EDIT: after comments by zeppelin, I've tried rebinding the volume, with no success:
$ mkdir etc
$ docker run -d --name test1 gitlab/gitlab-runner
$ docker run -d --name test2 -v ~/etc:/etc/gitlab-runner gitlab/gitlab-runner
$ docker exec test1 ls /etc/gitlab-runner/
certs
config.toml
$ docker exec test2 ls /etc/gitlab-runner/
# empty. no files
$ ls etc
# also empty
docker inspect shows correctly that the volume is bound to ~/etc, but the files inside the container at /etc/gitlab-runner/ seem lost.
$ docker run -d --name test1 gitlab/gitlab-runner
$ docker run -d --name test2 -v ~/etc:/etc/gitlab-runner gitlab/gitlab-runner
You've got two different volume types there. One I call an anonymous volume (a very long uuid visible when you run docker volume ls). The second is a host volume or bind mount that maps a directory on the host directly into the container. So each container you spun up is looking at different places.
Anonymous volumes and named volumes (docker run -d -v mydata:/etc/gitlab-runner gitlab/gitlab-runner) get initialized to the contents of the image at that directory location. This initialization only happens when the volume is empty and is mounted into a new container. Host volumes, as you've seen, only get the contents of the host filesystem, even if it's empty at that location.
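To illustrate the difference, a third test using a named volume instead of the host path should end up with the same files as test1 (a sketch; the volume name runner_etc is arbitrary):
docker run -d --name test3 -v runner_etc:/etc/gitlab-runner gitlab/gitlab-runner
docker exec test3 ls /etc/gitlab-runner/
which should list certs and config.toml again.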
With that background, the short answer to your question is no, you cannot mount a file inside the container back out to your host. But you can copy the file out with several methods, assuming you don't overlay the source of the file with a host volume mount. With a running container, there's the docker cp command. Personally, I like:
docker run --rm -v ~/etc:/target gitlab/gitlab-runner \
cp -av /etc/gitlab-runner/. /target/.
If you have a named volume with data you want to copy in or out, you can use any image with the tools you need to do the copy:
docker run --rm -v mydata:/source -v ~/etc:/target busybox \
cp -av /source/. /target/.
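And the docker cp route mentioned above would look something like this for a single file (container name and path taken from your question):
docker cp gitlab-runner:/etc/gitlab-runner/config.toml ./config.toml
docker cp ./config.toml gitlab-runner:/etc/gitlab-runner/config.toml
docker restart gitlab-runner
That is: copy the file out, edit it locally, copy it back, then restart the container so it picks up the change.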
Try to avoid modifying data inside a container from the host directly. It is much nicer to wrap your task in another container that you then start with the --volumes-from option, when that is possible in your case.
I'm not sure I understood your problem; anyway, as for the documentation you mention:
The VOLUME instruction creates a mount point with the specified name
and marks it as holding externally mounted volumes from native host or
other containers. [...] The docker run command initializes the newly
created volume with any data that exists at the specified location
within the base image.
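For reference, the example Dockerfile that documentation refers to looks roughly like this (reproduced from the docs, so treat it as a sketch):
FROM ubuntu
RUN mkdir /myvol
RUN echo "hello world" > /myvol/greeting
VOLUME /myvol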
So, following the example Dockerfile, after having built the image
docker build -t mytest .
and having the container running
docker run -d -ti --name mytestcontainer mytest /bin/bash
you can access it from the container itself, e.g.
docker exec -ti mytestcontainer ls -l /myvol/greeting
docker exec -ti mytestcontainer cat /myvol/greeting
Hope it helps.