I am really stuck with the usage of Docker VOLUMEs. I have a plain Dockerfile:
FROM ubuntu:latest
VOLUME /foo/bar
RUN touch /foo/bar/tmp.txt
I ran $ docker build -f dockerfile -t test . and it succeeded. After this, I started a container from the created test image with an interactive shell. That is, I ran $ docker run -it test
Observations:
/foo/bar is created but empty.
docker inspect test mounting info:
"Volumes": {
"/foo/bar": {}
}
It seems that it is not mounting at all. The task seems pretty straightforward, but am I doing something wrong?
EDIT: I am looking to persist the data that is created inside this mounted volume directory.
The VOLUME instruction must be placed after the RUN that creates the file.
As stated in https://docs.docker.com/engine/reference/builder/#volume :
Note: If any build steps change the data within the volume after it has been declared, those changes will be discarded.
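So the Dockerfile from the question would become something like this (note the added mkdir -p, because the directory no longer exists by the time RUN executes):
FROM ubuntu:latest
RUN mkdir -p /foo/bar && touch /foo/bar/tmp.txt
VOLUME /foo/bar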
If you want to know the source of the volume created by the docker run command:
docker inspect --format='{{json .Mounts}}' yourcontainer
will give output like this:
[{
"Name": "4c6588293d9ced49d60366845fdbf44fac20721373a50a1b10299910056b2628",
"Source": "/var/lib/docker/volumes/4c6588293d9ced49d60366845fdbf44fac20721373a50a1b10299910056b2628/_data",
"Destination": "/foo/bar",
"Driver": "local",
"Mode": "",
"RW": true,
"Propagation": ""
}]
Source contains the path you are looking for.
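If (per the EDIT) you want the data to survive container removal, you can mount a named volume over the directory at run time; the volume name data below is just an example:
$ docker run -v data:/foo/bar -it test
$ docker volume inspect data
The second command shows the Mountpoint on the host. When an empty named volume is mounted for the first time, Docker copies whatever the image already has at /foo/bar into it, so with the reordered Dockerfile above the tmp.txt file persists.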
Related
I successfully ran PostgreSQL like this:
$ docker run --name postgresql --env POSTGRES_PASSWORD=password --publish 6000:5432 --volume /home/russ/dev/pg:/var/lib/postgresql/data postgres
only to find that:
$ docker inspect postgresql
...
"Mounts": [
{
"Type": "volume",
"Name": "06d27a1fe489cedfa47d6a3e801cb286494958e1c3a17f044205629cc7070952",
"Source": "/var/lib/docker/volumes/06d27a1fe489cedfa47d6a3e801cb286494958e1c3a17f044205629cc7070952/_data",
"Destination": "/var/lib/postgresql/data",
"Driver": "local",
"Mode": "",
"RW": true,
"Propagation": ""
}
],
...
Docker's usual, random filesystem backing is used instead of the hard-coded path I tried to map. Why is this, and what should I have done instead?
If you look at the Postgres Dockerfile, you'll see a VOLUME [/var/lib/postgresql/data].
This command creates the default, "anonymous" volume you're seeing and takes precedence over the --volume argument you provide with the CLI (as well as any commands in "child" Dockerfiles or configuration in docker-compose files).
This extremely annoying quirk of Docker applies to other commands as well and is currently being debated in https://github.com/moby/moby/issues/3465, where one comment describes a similar problem with mysql images.
Unfortunately, there isn't an easy workaround but here are some common methods I've seen used:
Reconfigure Postgres to work out of a different directory and mount to that instead (see the sketch after this list)
Have another container mount both the same anonymous volume and a directory on your machine, and have it copy the data over periodically
If you just want the data to persist between container starts, I would recommend keeping it in the anonymous volume to keep things simple.
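For the first workaround, the official postgres image documents a PGDATA environment variable that moves the data directory; a minimal sketch, where the /pgdata path is just an example, would be:
$ docker run --name postgresql \
    --env POSTGRES_PASSWORD=password \
    --env PGDATA=/pgdata \
    --publish 6000:5432 \
    --volume /home/russ/dev/pg:/pgdata \
    postgres
The declared VOLUME /var/lib/postgresql/data still creates an (unused) anonymous volume, but the actual database files now live in the bind-mounted /pgdata.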
I have a very simple docker-compose.yml:
version: '2.4'
services:
containername:
image: ${DOCKER_IMAGE}
volumes:
- ./config:/root/config
I'm using a remote staging server accessed via ssh:
docker context create staging --docker "host=ssh://ubuntu@staging.example.com"
docker context use staging
However, I see unexpected results with my volume after I docker-compose up:
docker-compose --context staging up -d
docker inspect containername
...
"Mounts": [
{
"Type": "bind",
"Source": "/Users/crummy/code/.../config",
"Destination": "/root/config",
"Mode": "rw",
"RW": true,
"Propagation": "rprivate"
}
],
...
It seems the expansion of ./config to a full path happens on the machine docker-compose is running on, not the machine Docker is running on.
I can fix this problem by hardcoding the entire path: /home/ubuntu/config:/root/config. But this makes my docker-compose file a little less flexible than it could be. Is there a way to get the dot expansion to occur on the remote machine?
No, the docs say that:
You can mount a relative path on the host, which expands relative to the directory of the Compose configuration file being used. Relative paths should always begin with . or ..
I believe that happens for two reasons:
There's no easy and objective way for docker-compose to figure out how to expand . in this context, as there's no way to know what . should mean on the remote host (the home directory? the same folder?).
Even though the docker CLI is using a different context, the expansion is done by the docker-compose tool, which is unaware of the context switch.
Even using environment variables might pose a problem, since the env expansion would also happen on the machine where you run the docker-compose command.
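If you're willing to make the remote path explicit (like the hardcoded path mentioned in the question), one workaround, and this is just an assumption about your setup rather than anything the docs prescribe, is to move the path into a variable whose value is already an absolute path on the remote host. The substitution still happens locally, but since the value is absolute there is no . left to expand:
# .env, read by docker-compose on the machine you run it from
CONFIG_DIR=/home/ubuntu/config

# docker-compose.yml
    volumes:
      - ${CONFIG_DIR}:/root/config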
I am attempting to run a task with persistent storage. The task executes a docker image which creates a directory and copies a file into it. However, when the task definition mounts a volume to the created directory, the file is lost.
For brevity, the relevant lines of the Dockerfile are:
RUN mkdir /root/createdDir
COPY ./myFile.txt /root/createdDir/myFile.txt
And the relevant fields of the task definition JSON are:
{
  "containerDefinitions": [
    {
      ...,
      "mountPoints": [
        {
          "readOnly": null,
          "containerPath": "/root/createdDir",
          "sourceVolume": "shared"
        }
      ],
      "image": "myImage"
    }
  ],
  "volumes": [
    {
      "name": "shared",
      "host": {
        "sourcePath": null
      }
    }
  ]
}
When the task is run, the file can no longer be found. If I run the task without adding a mount point to the container, the file is still there.
When trying this locally using docker-compose, I can use the same Dockerfile and, in the docker-compose.yml file, add the specification shared:/root/createdDir to the service's volumes, where shared is a volume also declared in the docker-compose.yml with the local driver.
The behavior of mounting a volume into an existing directory on the container can be confusing. It is consistent with the general behavior of Linux's mount:
The previous contents (if any) and owner and mode of dir become invisible.
Avoid doing this whenever possible, because it can lead to hard-to-find issues when the volume and the container have files with the same names.
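A quick way to see the effect locally (the lowercase tag myimage just stands in for the image built from the Dockerfile in the question):
$ docker run --rm myimage ls /root/createdDir                                    # prints myFile.txt
$ mkdir empty
$ docker run --rm -v "$PWD/empty:/root/createdDir" myimage ls /root/createdDir   # prints nothing
An ECS host volume behaves like that bind mount: the empty host directory hides what the image put at that path. A local named volume is different: when an empty named volume is mounted for the first time, Docker copies the image's existing files into it, which is why the docker-compose setup appears to keep the file.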
I am unable to get the container IP address so I can open it from the browser.
Code Snippet
PS H:\DevAreaLocal\COMPANY - RAD PROJECTS\DockerApp\WebDockerCoreApp> docker-compose build
Building webdockercoreapp
Step 1/5 : FROM microsoft/aspnetcore:1.1
---> 4fe9b4d0d093
Step 2/5 : WORKDIR /WebDockerCoreApp
---> Using cache
---> b1536c639a21
Step 3/5 : COPY . ./
---> Using cache
---> 631ca2773407
Step 4/5 : EXPOSE 80
---> Using cache
---> 94a50bb10fbe
Step 5/5 : ENTRYPOINT dotnet WebDockerCoreApp
---> Using cache
---> 7003460ebe84
Successfully built 7003460ebe84
Successfully tagged webdockercoreapp:latest
PS H:\DevAreaLocal\COMPANY - RAD PROJECTS\DockerApp\WebDockerCoreApp> docker inspect --format="{{.Id}}" 7003460ebe84
I got the below ID:
sha256:7003460ebe84bdc3e8647d7f26f9038936f032de487e70fb4f1ca137f9dde737
If I run the below command:
docker inspect -f "{{ .NetworkSettings.Networks.nat.IPAddress }}" 7003460ebe84
I got the below response:
Template parsing error: template: :1:19: executing "" at <.NetworkSettings.Net...>: map has no entry for key "NetworkSettings"
docker-compose.yml file settings
version: '2.1'
services:
webdockercoreapp:
image: webdockercoreapp
build:
context: ./WebDockerCoreApp
dockerfile: Dockerfile
ports:
- "5000:80"
networks:
default:
external:
name: nat
Running "docker network ls" gave the response below:
NETWORK ID NAME DRIVER SCOPE
f04966f0394c nat nat local
3bcb5f906e01 none null local
680d4b4e1a0d webdockercoreapp_default nat local
When I run "docker network inspect webdockercoreapp_default"
I got the response below:
[
{
"Name": "webdockercoreapp_default",
"Id": "680d4b4e1a0de228329986f217735e5eb35e9925fd04321569f9c9e78508ab88",
"Created": "2017-12-09T22:59:55.1558081+05:30",
"Scope": "local",
"Driver": "nat",
"EnableIPv6": false,
"IPAM": {
"Driver": "windows",
"Options": null,
"Config": [
{
"Subnet": "0.0.0.0/0",
"Gateway": "0.0.0.0"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {},
"Options": {
"com.docker.network.windowsshim.hnsid": "ad817a46-e7ff-4fc7-9bb9-d6cf17820b8a"
},
"Labels": {
"com.docker.compose.network": "default",
"com.docker.compose.project": "webdockercoreapp"
}
}
]
When you run the command docker inspect --format="{{.Id}}" 7003460ebe84, you're running it against the image ID, not a container ID.
Images are the static assets that you build, and containers are run from them. So what you need to do first is bring up a container from your image, via:
docker-compose up
Now you'll be able to see the running containers via:
docker ps
Find the container you want; let's say it's abcd1234
Now you'll be able to run your original command against the container - rather than the image.
docker inspect --format="{{.Id}}" abcd1234
This will return the full SHA of the container. Since you originally asked about the network settings, you'll be able to run something like:
docker inspect -f "{{ .NetworkSettings.Networks.your_network_here.IPAddress }}" abcd1234
If you're unsure exactly what your network name is (it looks like it should be nat), just do a full docker inspect abcd1234, look at the output, and adjust the filter as needed.
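If you don't want to hard-code the network name at all, you can have the template loop over every network the container is attached to (abcd1234 is still the placeholder container ID):
docker inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}" abcd1234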
Changing the commands within the Dockerfile solved the issue. Below is the code snippet.
Since we must build our project, this first container we create is a
temporary container which we will use to do just that, and then
discard it at the end.
Next, we copy the .csproj files into our temporary container's '/app' directory. We do this because .csproj files contain a list of the package references our project needs. After copying this file, dotnet will read from it and then go out and fetch all of the dependencies and tools our project needs.
Once we've pulled down all of those dependencies, we copy the rest of our source code into the temporary container. We then tell dotnet to publish our application with a Release configuration and specify the output path.
We should have successfully compiled our project. Now we need to build
our finalized container.
Our base image for this final container is similar to, but different from, the one in the FROM command above: it doesn't have the libraries needed to build an ASP.NET app, only to run one.
Conclusion:
We have now successfully performed what is called a multi-stage build.
We used the temporary container to build our image and then moved over
the published dll into another container so that we minimized the
footprint of the end result. We want this container to have the
absolute minimum required dependencies to run; if we had kept using our first image, it would have come packaged with other layers (for building ASP.NET apps) which were not vital and would therefore increase our image size.
FROM microsoft/aspnetcore-build:1.1 AS build-env
WORKDIR /app
COPY *.csproj ./
RUN dotnet restore
COPY . ./
RUN dotnet publish -c Release -o out
FROM microsoft/aspnetcore:1.1
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "CoreApiDockerSupport.dll"]
Put the following in the .dockerignore file:
bin\
obj\
Note: The above settings will work whether you create an ASP.NET Core project from the Visual Studio IDE with built-in Docker support (using the docker-compose.yml and docker-compose.ci.build.yml files) or without it.
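For completeness, a minimal build-and-run sequence for the multi-stage Dockerfile above would look like this (the tag coreapidockersupport is just an example name):
docker build -t coreapidockersupport .
docker run -d -p 5000:80 coreapidockersupport
You can then browse to the published port (http://localhost:5000); on older Windows container setups you may still need the container's NAT IP as discussed above.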
Source 1 - building-sample-app
Source 2 - asp-net-core-on-linux with Docker
I was setting up some materials for a training when I came across this sample compose file:
https://github.com/dockersamples/example-voting-app/blob/master/docker-compose.yml
and I couldn't figure out how the volume declared on lines 48 and 49 of the file is mounted:
volumes:
db-data:
Can someone explain to me where this volume is on the host? I couldn't find it, and I wouldn't like to keep any PostgreSQL data dangling around after the containers are gone. A similar thing happens with the networks:
networks:
front-tier:
back-tier:
Why does docker-compose accept empty network definitions like this?
Finding the volumes
Volumes like this are internal to Docker and stored in Docker's own storage area (usually all under /var/lib/docker). You can get a list of volumes:
$ docker volume ls
DRIVER VOLUME NAME
local 1c59d5b7e90e9173ca30a7fcb6b9183c3f5a37bd2505ca78ad77cf4062bd0465
local 2f13b0cec834a0250845b9dcb2bce548f7c7f35ed9cdaa7d5990bf896e952d02
local a3d54ec4582c3c7ad5a6172e1d4eed38cfb3e7d97df6d524a3edd544dc455917
local e6c389d80768356cdefd6c04f6b384057e9fe2835d6e1d3792691b887d767724
You can find out exactly where the volume is stored on your system if you want to:
$ docker inspect 1c59d5b7e90e9173ca30a7fcb6b9183c3f5a37bd2505ca78ad77cf4062bd0465
[
{
"Driver": "local",
"Labels": null,
"Mountpoint": "/var/lib/docker/volumes/1c59d5b7e90e9173ca30a7fcb6b9183c3f5a37bd2505ca78ad77cf4062bd0465/_data",
"Name": "1c59d5b7e90e9173ca30a7fcb6b9183c3f5a37bd2505ca78ad77cf4062bd0465",
"Options": {},
"Scope": "local"
}
]
Cleaning up unused volumes
As far as ensuring that things are not left dangling, you can use the prune commands, in this case docker volume prune. That will give you the output below, and you can choose whether to continue pruning or not.
$ docker volume prune
WARNING! This will remove all volumes not used by at least one container.
Are you sure you want to continue? [y/N]
"Empty" definitions in docker-compose.yml
Compose accepts these "empty" definitions for things like volumes and networks when you don't need to do anything other than declare that the volume or network should exist. That is, if you want it created but are okay with the default settings, there is no particular reason to specify any parameters.
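For example, the empty db-data entry is roughly equivalent to writing out the defaults yourself:
volumes:
  db-data:
    driver: local

networks:
  front-tier:
    driver: bridge
  back-tier:
    driver: bridge
(local is the default volume driver, and bridge is the default network driver on Linux.)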
First method
List your volumes:
docker volume ls
then run this command:
sudo docker inspect <volume-name> | grep Mountpoint | awk '{ print $2 }'
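Alternatively, you can let Docker do the extraction instead of piping through grep and awk:
docker volume inspect --format '{{ .Mountpoint }}' <volume-name>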
Second method
First run docker ps to get your container ID, then run:
docker inspect --format="{{.Mounts}}" $containerID
This will give us the volume path.