I have a running Oracle container from which I want to create an image. The image will be pushed to a registry (as an initial dev setup). However, the container has a volume configured. Is there a way to include the volume's contents in the image? (I know this is not best practice, but it would help the dev team a lot.)
docker inspect foobar
...
"Mounts": [
{
"Type": "volume",
"Name": "0a...c2d",
"Source": "/mnt/sda1/var/lib/docker/volumes/...c2d/_data",
"Destination": "/ORCL",
"Driver": "local",
"Mode": "",
"RW": true,
"Propagation": ""
}
],
...
No. Volumes live outside of a container's filesystem; that is their whole purpose. You're correct that it would be bad practice, and Docker doesn't support it: docker commit ignores any data stored in volumes.
A Docker container is an isolated application; if you want files to be part of the image, add them with a COPY or ADD instruction in the Dockerfile when you build the image.
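If you just need a one-off snapshot for the dev team, one workaround is to copy the data out of the running container and COPY it into a new image at build time. A rough sketch, using the container name foobar and the /ORCL mount from the question; the base image name is a placeholder:

```shell
# Copy the volume contents out of the running container into the build context.
docker cp foobar:/ORCL ./ORCL

# Bake the copied data into a new image; "your-oracle-base" is a placeholder.
cat > Dockerfile <<'EOF'
FROM your-oracle-base
COPY ORCL /ORCL
EOF

docker build -t oracle-dev-snapshot .
```

Note that the resulting image can be large and contains a point-in-time copy, so it is only suitable as a throwaway dev setup.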
Related
I successfully run PostgreSQL thus:
$ docker run --name postgresql --env POSTGRES_PASSWORD=password --publish 6000:5432 --volume /home/russ/dev/pg:/var/lib/postgresql/data postgres
only to find that:
$ docker inspect postgresql
...
"Mounts": [
{
"Type": "volume",
"Name": "06d27a1fe489cedfa47d6a3e801cb286494958e1c3a17f044205629cc7070952",
"Source": "/var/lib/docker/volumes/06d27a1fe489cedfa47d6a3e801cb286494958e1c3a17f044205629cc7070952/_data",
"Destination": "/var/lib/postgresql/data",
"Driver": "local",
"Mode": "",
"RW": true,
"Propagation": ""
}
],
...
Docker's usual, randomly named volume is used instead of the host path I tried to map. Why is this, and what should I have done instead?
If you look at the Postgres Dockerfile, you'll see a VOLUME [/var/lib/postgresql/data].
This instruction creates the default, "anonymous" volume you're seeing and takes precedence over the --volume argument you provide on the CLI (as well as any commands in "child" Dockerfiles or configuration in docker-compose files).
This extremely annoying quirk of Docker applies to other commands as well and is currently being debated in https://github.com/moby/moby/issues/3465. This comment describes a similar problem with mysql images.
Unfortunately, there isn't an easy workaround but here are some common methods I've seen used:
Reconfigure Postgres to work out of a different directory and mount to that instead
Have another container mount to the same anonymous volume and to your machine and have it copy data over periodically
If you just want the data to persist between container starts, I would recommend keeping it in the anonymous volume to keep things simple.
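For the first workaround, the official postgres image lets you relocate the data directory through the PGDATA environment variable; pointing it outside the declared VOLUME path means your bind mount is actually used. A sketch, with paths mirroring the question:

```shell
# Move the data directory outside the VOLUME declared in the image,
# then bind-mount the host path at that new location instead.
docker run --name postgresql \
  --env POSTGRES_PASSWORD=password \
  --env PGDATA=/var/lib/postgresql/pgdata \
  --publish 6000:5432 \
  --volume /home/russ/dev/pg:/var/lib/postgresql/pgdata \
  postgres
```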
I have a very simple docker-compose.yml:
version: '2.4'
services:
  containername:
    image: ${DOCKER_IMAGE}
    volumes:
      - ./config:/root/config
I'm using a remote staging server accessed via ssh:
docker context create staging --docker "host=ssh://ubuntu@staging.example.com"
docker context use staging
However, I see unexpected results with my volume after I docker-compose up:
docker-compose --context staging up -d
docker inspect containername
...
"Mounts": [
{
"Type": "bind",
"Source": "/Users/crummy/code/.../config",
"Destination": "/root/config",
"Mode": "rw",
"RW": true,
"Propagation": "rprivate"
}
],
...
It seems the expansion of ./config to a full path happens on the machine docker-compose is running on, not the machine Docker is running on.
I can fix this problem by hardcoding the entire path: /home/ubuntu/config:/root/config. But this makes my docker-compose file less flexible than it could be. Is there a way to get the dot expansion to occur on the remote machine?
No, the docs say that:
You can mount a relative path on the host, which expands relative to the directory of the Compose configuration file being used. Relative paths should always begin with . or ..
I believe that happens for two reasons:
There's no easy and objective way for docker-compose to find out how to expand . in this context, as there's no way to know what . would mean for the ssh client (home? the same folder?).
Even though the docker CLI is using a different context, the expansion is done by the docker-compose tool, which is unaware of the context switch.
Even using environment variables might pose a problem, since the env expansion would also happen on the machine where you're running the docker-compose command.
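One way to sidestep local path expansion entirely is a named volume, which is created and resolved by the Docker daemon on the remote host rather than by docker-compose locally. A sketch of the compose file from the question rewritten that way:

```yaml
version: '2.4'
services:
  containername:
    image: ${DOCKER_IMAGE}
    volumes:
      - config:/root/config

volumes:
  config:
```

The trade-off is that the files then live in the remote daemon's volume storage instead of a host directory you can edit directly.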
I have a Sumologic log collector which is a generic log collector. I want the log collector to see logs and a config file from a different container. How do I accomplish this?
ECS containers can mount volumes, so you would define:
{
  "containerDefinitions": [
    {
      "mountPoints": [
        {
          "sourceVolume": "logs",
          "containerPath": "/tmp/clogs/"
        }
      ]
    }
  ],
  "volumes": [
    {
      "name": "logs"
    }
  ]
}
ECS also has a nice UI you can click around to set up the volumes at the task definition level, and then the mounts at the container level.
Once that's set up, ECS will mount a volume at the container path, and everything inside that path will be available to all other containers that mount the volume.
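For the Sumologic scenario specifically, the application container and the collector container would both reference the same volume in the task definition; a sketch with illustrative names and images:

```json
{
  "containerDefinitions": [
    {
      "name": "app",
      "image": "my-app:latest",
      "mountPoints": [
        { "sourceVolume": "logs", "containerPath": "/tmp/clogs" }
      ]
    },
    {
      "name": "sumo-collector",
      "image": "sumologic/collector:latest",
      "mountPoints": [
        { "sourceVolume": "logs", "containerPath": "/tmp/clogs", "readOnly": true }
      ]
    }
  ],
  "volumes": [
    { "name": "logs" }
  ]
}
```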
Further reading:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_data_volumes.html
I'm trying to pass default parameters, such as volumes or environment variables, to my Docker containers, which I create through Marathon and Apache Mesos. This should be possible through arguments passed to mesos-slave. I've put a file at /etc/mesos-slave/default_container_info with JSON content (mesos-slave reads this file and passes it as its --default_container_info argument):
{
  "type": "DOCKER",
  "volumes": [
    {
      "host_path": "/var/lib/mesos-test",
      "container_path": "/tmp",
      "mode": "RW"
    }
  ]
}
Then I restarted mesos-slave and created a new container in Marathon, but I cannot see the mounted volume in my container. Where did I make a mistake? How else can I pass default values to my containers?
This will not work for you. When you schedule a task on Marathon with Docker, Marathon creates a TaskInfo with a ContainerInfo, and that's why Mesos does not apply your default.
From the documentation
--default_container_info=VALUE JSON-formatted ContainerInfo that will be included into any ExecutorInfo that does not specify a ContainerInfo
You need to add the volumes to every Marathon task you have, or create a RunSpecTaskProcessor that augments all tasks with your volumes.
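Adding the volumes per app in Marathon looks like this (the app id and image are illustrative):

```json
{
  "id": "/my-app",
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "my-image:latest"
    },
    "volumes": [
      {
        "hostPath": "/var/lib/mesos-test",
        "containerPath": "/tmp",
        "mode": "RW"
      }
    ]
  }
}
```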
I am really stuck on the usage of Docker VOLUMEs. I have a plain Dockerfile:
FROM ubuntu:latest
VOLUME /foo/bar
RUN touch /foo/bar/tmp.txt
I ran $ docker build -f dockerfile -t test . and it was successful. After this, I ran a shell interactively in a container created from the test image. That is, I ran $ docker run -it test
Observations:
/foo/bar is created but empty.
docker inspect test mounting info:
"Volumes": {
"/foo/bar": {}
}
It seems that it is not mounting at all. The task seems pretty straightforward, but am I doing something wrong?
EDIT: I am looking to persist the data that is created inside this mounted volume directory.
The VOLUME instruction must be placed after the RUN that populates the directory; once a volume has been declared, later build steps cannot change its contents.
As stated in https://docs.docker.com/engine/reference/builder/#volume :
Note: If any build steps change the data within the volume after it has been declared, those changes will be discarded.
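Applying that to the Dockerfile from the question, moving the RUN before the VOLUME makes tmp.txt part of the volume's initial contents:

```dockerfile
FROM ubuntu:latest
# Populate the directory first, while it is still part of the image layer.
RUN mkdir -p /foo/bar && touch /foo/bar/tmp.txt
# Declare the volume afterwards; it is initialized from what was written above.
VOLUME /foo/bar
```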
If you want to know the source of the volume created by the docker run command:
docker inspect --format='{{json .Mounts}}' yourcontainer
will give output like this:
[{
"Name": "4c6588293d9ced49d60366845fdbf44fac20721373a50a1b10299910056b2628",
"Source": "/var/lib/docker/volumes/4c6588293d9ced49d60366845fdbf44fac20721373a50a1b10299910056b2628/_data",
"Destination": "/foo/bar",
"Driver": "local",
"Mode": "",
"RW": true,
"Propagation": ""
}]
Source contains the path you are looking for.
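To address the EDIT in the question: a named volume keeps the same backing data across container runs, unlike a fresh anonymous volume per run. A sketch, where the volume name mydata is illustrative:

```shell
# On first use Docker creates "mydata" and seeds it from the image's /foo/bar.
docker run -v mydata:/foo/bar -it test

# Later runs reuse the same volume, so files written there persist.
docker run -v mydata:/foo/bar -it test

# Inspect where the data lives on the host.
docker volume inspect mydata
```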