How to migrate data between named volumes?

I want to migrate data from an existing named volume, old_volume, to a new volume, new_volume; in other words, to move the data from a local volume to a volume backed by external storage.
old_volume is a named volume stored under the default Docker path. new_volume is a named volume whose device option bind-mounts a storage directory (/storage/volumes/myapp/new_volume) at the default Docker mountpoint.
I copied the files from old_volume to new_volume using one of the commands below:
rsync -av /var/lib/docker/volumes/old_volume/_data/ /var/lib/docker/volumes/new_volume/_data
or
rsync -av /var/lib/docker/volumes/old_volume/_data/ /storage/volumes/myapp/new_volume
Data copied to /storage/volumes/myapp/new_volume does not appear in /var/lib/docker/volumes/new_volume/_data. Why does this happen?
When the volume is used by a container, the data appears in both paths. In this case, after just copying the data, it appears in only one of them.
Is it possible to resolve this?
docker inspect old_volume
[
    {
        "CreatedAt": "2022-06-01T18:35:53-03:00",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/old_volume/_data",
        "Name": "old_volume",
        "Options": {},
        "Scope": "local"
    }
]
docker inspect new_volume
[
    {
        "CreatedAt": "2022-06-02T09:27:40-03:00",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/new_volume/_data",
        "Name": "new_volume",
        "Options": {
            "device": "/storage/volumes/myapp/new_volume",
            "o": "bind",
            "type": "none"
        },
        "Scope": "local"
    }
]
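For context: with a local-driver volume that has device/bind options, the bind mount onto /var/lib/docker/volumes/new_volume/_data only exists while the volume is mounted into a container, which would explain why files copied on the host show up in only one of the two paths. A minimal sketch of a migration that lets Docker set up the mount itself, using a throwaway container (the alpine image and the /from and /to mount points are illustrative choices, not part of the original setup):

docker run --rm \
  -v old_volume:/from \
  -v new_volume:/to \
  alpine sh -c "cp -a /from/. /to/"

While this container runs, the local driver mounts /storage/volumes/myapp/new_volume at the volume's mountpoint, so the copied files land on the storage device and are visible through the volume whenever it is mounted.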

Related

Unexpected behavior with named Docker NFS volumes in a Swarm cluster

I'm new to Docker Swarm and I'm managing all config via Ansible. I have a four-node Swarm cluster with the following members:
manager (also the NFS exporter for volumes shared across the cluster)
cluster0
cluster1
cluster2
cluster[0-2] are workers in the swarm.
I'm configuring a service which makes use of a named volume media, which is an NFS export from manager. The service is constrained to the worker nodes. Once a service is deployed that makes use of this volume, I'd expect to see the following output of docker volume inspect media from any of the worker nodes with the replicated service:
user@cluster1:~ $ docker volume inspect media
[
    {
        "CreatedAt": "2021-01-27T12:23:55-08:00",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/media/_data",
        "Name": "media",
        "Options": {
            "device": ":/path/media",
            "o": "addr=manager.domain.com,rw",
            "type": "nfs"
        },
        "Scope": "local"
    }
]
and indeed, when I run docker volume inspect media on the swarm manager, I see:
user@manager:~$ docker volume inspect media
[
    {
        "CreatedAt": "2021-11-22T07:58:55-08:00",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/media/_data",
        "Name": "media",
        "Options": {
            "device": ":/path/media",
            "o": "addr=manager.domain.com,rw",
            "type": "nfs"
        },
        "Scope": "local"
    }
]
However, this is the output I see from the nodes with the replicated service:
user@cluster1:~ $ docker volume inspect media
[
    {
        "CreatedAt": "2021-11-22T07:59:02-08:00",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/media/_data",
        "Name": "media",
        "Options": null,
        "Scope": "local"
    }
]
As a result, I'm unable to access the NFS-exported files. I'm not entirely sure how to troubleshoot. Is this a limitation of Swarm, and if so, how are others working around it? If not, what am I missing?
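Not an authoritative answer, but one commonly suggested workaround, assuming the cause is that the volume was only created with its NFS options on the manager: local-driver volumes are node-scoped, so each worker needs the volume created with the same options before the service can use it there. A sketch mirroring the options from the inspect output above:

docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=manager.domain.com,rw \
  --opt device=:/path/media \
  media

In a stack deployment, the equivalent is to declare these driver options on the volume in the stack file, so every node that schedules a task creates the volume with the NFS options instead of an empty local volume.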

Docker: find out by command line if and which shared drives are enabled

Do you know if there is a way to find out from the command line whether shared drives are enabled, and which ones?
Thanks
EDIT:
[Windows - Git Bash] To find out which drives are shared:
cat C:/Users/[user]/AppData/Roaming/Docker/settings.json
[Windows - CMD prompt]
type C:\Users\[User]\AppData\Roaming\Docker\settings.json
Within the file, you'll find the JSON object you're looking for:
"SharedDrives": {
"C": true
},...
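If jq is available, the same check can be done in one step; a sketch, assuming the settings.json path shown above:

jq '.SharedDrives' "C:/Users/[user]/AppData/Roaming/Docker/settings.json"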
To find out which volumes are mounted on your host you can use the following commands:
docker volume ls
This will give you a list; for more details, you can inspect a single volume:
docker volume inspect 2d858a93d15a8e6903cccfe04cdf5576812df8697ca4e07edbbf40575873d33d
which will return something similar to:
{
    "CreatedAt": "2020-02-24T08:35:57Z",
    "Driver": "local",
    "Labels": null,
    "Mountpoint": "/var/lib/docker/volumes/2d858a93d15a8e6903cccfe04cdf5576812df8697ca4e07edbbf40575873d33d/_data",
    "Name": "2d858a93d15a8e6903cccfe04cdf5576812df8697ca4e07edbbf40575873d33d",
    "Options": null,
    "Scope": "local"
}
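If you only need a single field, docker volume inspect also accepts a Go-template --format flag; for example, to print just the mountpoint (the volume name here is a placeholder):

docker volume inspect --format '{{ .Mountpoint }}' <volume-name>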

deploy running docker in external IP (VM host)

My API couldn't be published on the specific IP address (the VM host) when using Docker.
First, I run the file in the terminal:
Rscript run.R
This works fine; my API is up and running at http://35.157.131.3:8000/swagger/ . After that, I wanted to deploy it with Docker:
docker run --rm -p 8000:8000 --expose 8000 -d --name diemdiem trestletech/plumber
This showed the file was plumbed successfully; however, when I went to the API link, http://35.157.131.3:8000/swagger/ returned a 404 error.
After reading the Docker documentation, I created a container network that specifies the host IP address I want the container to be reachable on:
docker network create \
  -o "com.docker.network.bridge.host_binding_ipv4"="35.157.131.3" \
  simple-network
Then I connected the running diemdiem container to simple-network:
docker network connect simple-network diemdiem
I inspected the network to check whether the container is connected:
docker network inspect simple-network
The result is:
[
    {
        "Name": "simple-network",
        "Id": "95ec0c55aeb984952459edda2d4d0bb7c9eea71824e6cec184b7c61d2e807e7b",
        "Created": "2019-07-08T17:30:23.709654207Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.21.0.0/16",
                    "Gateway": "172.21.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "c83125bf68a89aebda3effe28ebee4d6323657e1427cf08fd3d63b6e411f8448": {
                "Name": "diemdiem",
                "EndpointID": "7fab3354e051dc81ef798bd86c19361f6a721b578237b3a3695cb415b1aee2e4",
                "MacAddress": "02:42:ac:15:00:02",
                "IPv4Address": "172.21.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.host_binding_ipv4": "35.157.131.3"
        },
        "Labels": {}
    }
]
The API is still not up and running at the IP address I specified. I'd appreciate your advice.
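A hedged observation rather than a definitive fix: -p 8000:8000 already publishes the port on all host addresses (the default host_binding_ipv4 is 0.0.0.0, as the bridge inspect output in the next question shows), so a custom network isn't needed to reach the container on the VM's IP, and a 404 suggests the request did reach a server inside the container. That points at the container serving the image's default example API instead of your run.R. A sketch, assuming the image accepts the path of the file to plumb as its command (check the image docs for the exact convention):

docker run --rm -d -p 8000:8000 \
  -v "$(pwd)/run.R:/run.R" \
  --name diemdiem \
  trestletech/plumber /run.R

For the published port to be reachable at all, the API inside the container also has to listen on 0.0.0.0 rather than 127.0.0.1.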

What is IP of bridge localhost for Docker?

I am dockerizing my application. I have two containers now. One of them needs to talk to the other; in its config I have "mongo": "127.0.0.1". I suppose they should talk through the bridge network:
$ docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "f7ab26d71dbd6f557852c7156ae0574bbf62c42f539b50c8ebde0f728a253b6f",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {
                    "Subnet": "172.17.0.1/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "9001"
        },
        "Labels": {}
    }
]
Should I now change "mongo": "127.0.0.1" to "mongo": "0.0.0.0"?
You can check a container's IP:
$ docker inspect -f "{{json .NetworkSettings.Networks}}" <container_name>
You can find the IPAddress attribute in the JSON output.
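To pull just the address out of that output, a Go-template variant of the same command (the container name is again a placeholder):

docker inspect -f '{{ range .NetworkSettings.Networks }}{{ .IPAddress }}{{ end }}' <container_name>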
Yes, you should use a bridge network. The default bridge network can be used but won't give you DNS resolution; check https://docs.docker.com/engine/userguide/networking/#user-defined-networks for details.
The easiest way is to use the --link option, to avoid too many changes.
For example, --link mongo01:mongo instructs Docker to use the container named mongo01 as a linked container, and to name it mongo inside your application container.
So in your application you can use mongo:27017 without making any other changes.
Refer to this for more details:
https://www.thachmai.info/2015/05/10/docker-container-linking-mongo-node/
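For comparison, since --link is a legacy feature, here is a sketch of the user-defined-network approach from the answer above (the network name app-net and the image my-app-image are illustrative placeholders):

# create a user-defined bridge network with built-in DNS
docker network create app-net

# start mongo and the application on the same network;
# the app can then reach the database at mongo:27017
docker run -d --name mongo --network app-net mongo
docker run -d --name app --network app-net my-app-image

On a user-defined network, Docker's embedded DNS resolves container names, so no IP addresses need to appear in the application config.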

Run multiple containers on same docker network localhost

I want to connect from my app to MongoDB on localhost, so they need to share the same localhost address.
So the question is: can two containers share their localhost, or is the localhost IP different for each container?
I'm doing this for test-environment purposes, so I don't want an in-memory database, a changed Mongo URI, or any other workaround. I just want to connect from A to B via localhost.
To create my network and containers I type:
docker network create --driver bridge isolated_nw
docker run --name mongodb -d -p 27017:27017 --network=isolated_nw mongo:3.4.2
docker run --name roomate-profiles --network=isolated_nw -d -p 8080:8080 sovas/roomate-profiles
My custom docker network:
[
    {
        "Name": "isolated_nw",
        "Id": "3efd6831784c2a8c9e9ea345144fcc6b9180e70c0e1b4b5d1a72219051b24e67",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1/16"
                }
            ]
        },
        "Internal": false,
        "Containers": {
            "57d4e2fb1f0c8d776329fd6ce82e5905df00e261ab6923595578dcb35913b03e": {
                "Name": "roomate-profiles",
                "EndpointID": "5a8158dc1aba6958218d1cca3c98ca911ab2cfa73be839ceece2e7819b244c91",
                "MacAddress": "02:42:ac:12:00:03",
                "IPv4Address": "172.18.0.3/16",
                "IPv6Address": ""
            },
            "8fa815735d7ebb77434f8abf11e58f18faeb5d67e2743903d81f4600bd558c35": {
                "Name": "mongodb",
                "EndpointID": "7b7a7ed1ad08bbe381fb6d66c6e9fea66ee9b7c581f530bdf4d82f0741bff04b",
                "MacAddress": "02:42:ac:12:00:02",
                "IPv4Address": "172.18.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
application.properties
spring.data.mongodb.uri=mongodb://localhost:27017/admin
localhost won't work, since it refers to the roomate-profiles container itself. But you can use
spring.data.mongodb.uri=mongodb://mongodb:27017/admin
since both containers are connected to the same network. There is also no need to map the mongodb port to the host (unless you need it for something else).
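If connecting via localhost really is a hard requirement for the test setup, one option (a sketch, not tested against this exact setup) is to let the second container join the first one's network namespace with container network mode:

# publish both ports on the first container, since a container
# in container network mode cannot publish its own
docker run --name mongodb -d -p 27017:27017 -p 8080:8080 mongo:3.4.2

# join the mongodb container's network namespace:
# both containers now share the same localhost
docker run --name roomate-profiles -d \
  --network container:mongodb \
  sovas/roomate-profiles

With this mode, spring.data.mongodb.uri=mongodb://localhost:27017/admin works as written.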
