I am trying to start my app locally, using AWS S3 (emulated via LocalStack) to upload images.
My docker-compose file looks like:
version: '3.8'
services:
  localstack:
    container_name: innoter_aws_services
    image: localstack/localstack:latest
    ports:
      - "4566:4566"
    environment:
      - SERVICES=s3,ses,lambda,dynamodb
      - EDGE_PORT=4566
      - DATA_DIR=/tmp/localstack/data
      - S3_DIR=localstack/s3
    volumes:
      - localstack-data:/var/lib/localstack
      - /var/run/docker.sock:/var/run/docker.sock
    env_file:
      - docker_aws.env

volumes:
  localstack-data:
    name: localstack-data
Then, for example:
(env) D:\>docker-compose up -d
[+] Running 1/1
- Container innoter_aws_services Started
(env) D:\>awslocal s3 mb s3://my-bucket
make_bucket: my-bucket
(env) D:\>awslocal s3 mb s3://my-bucket02
make_bucket: my-bucket02
(env) D:\>awslocal s3 ls
2023-01-30 12:19:08 my-bucket
2023-01-30 12:19:18 my-bucket02
Then, after docker-compose stop, I run docker-compose up -d again, and awslocal s3 ls is empty: no buckets, no information. When I inspect the container, I get this info about the mounts:
"Mounts": [
{
"Type": "volume",
"Source": "localstack-data",
"Target": "/var/lib/localstack",
"VolumeOptions": {}
}
"Mountpoint": "/var/lib/docker/volumes/localstack-data/_data",
"Name": "localstack-data",
"Options": null,
"Scope": "local"
What do I need to do to save the data locally in my volume?
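For what it's worth, on LocalStack 1.0 and later (which the latest tag resolves to), DATA_DIR and EDGE_PORT are deprecated and ignored; the documented replacement for persistence is the PERSISTENCE=1 flag, with state kept under the already-mounted /var/lib/localstack. A minimal sketch under that assumption (note that, depending on the version, snapshot persistence may only be available in LocalStack's paid editions):

version: '3.8'
services:
  localstack:
    container_name: innoter_aws_services
    image: localstack/localstack:latest
    ports:
      - "4566:4566"
    environment:
      - SERVICES=s3,ses,lambda,dynamodb
      - PERSISTENCE=1   # replaces the deprecated DATA_DIR setting
    volumes:
      - localstack-data:/var/lib/localstack
      - /var/run/docker.sock:/var/run/docker.sock
    env_file:
      - docker_aws.env

volumes:
  localstack-data:
    name: localstack-data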
Related
I have two docker containers connected through a frontendbuild docker volume:
services:
  nginx:
    container_name: nginx
    build:
      context: .
      dockerfile: ./compose/production/nginx_ssltls/Dockerfile
    #restart: unless-stopped
    volumes:
      - ./compose/production/nginx_live:/etc/nginx/conf.d
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
      - staticfiles_harcelement:/app/static
      - mediafiles_harcelement:/app/media
      - frontendbuild:/usr/share/nginx/html/build
    ports:
      - "80:80"
      - "443:443"
    command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
    networks:
      - network_app
  react:
    build:
      context: .
      dockerfile: ./compose/production/frontend/Dockerfile
    #restart: always
    volumes:
      - frontendbuild:/app/frontend/build
    networks:
      - network_app
Each time I re-run my build, the volume is not updated with the new /app/frontend/build folder from the rebuilt react container.
I have found how to update a volume from a folder on my host machine, but this time the build is created in the Dockerfile, so the files I need to copy into the volume are inside the container...
How can I automate this in code?
Here is the result of docker volume inspect:
[
    {
        "CreatedAt": "2022-07-07T19:37:23+02:00",
        "Driver": "local",
        "Labels": {
            "com.docker.compose.project": "app-harcelement",
            "com.docker.compose.version": "1.25.0",
            "com.docker.compose.volume": "frontendbuild"
        },
        "Mountpoint": "/var/lib/docker/volumes/frontendbuild/_data",
        "Name": "frontendbuild",
        "Options": null,
        "Scope": "local"
    }
]
Thank you
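A note on why this happens: Docker copies an image's files into a named volume only once, when an empty volume is first mounted; on later runs the existing volume contents shadow whatever the rebuilt image ships. One workaround, sketched here under the assumption that the volume really is named frontendbuild as in the inspect output above (compose may prefix it with the project name), is to drop the volume before rebuilding so it gets repopulated:

docker compose down                 # stop and remove the containers
docker volume rm frontendbuild      # drop the stale build output
docker compose up -d --build        # the rebuilt react image refills the now-empty volume

An alternative is to sidestep the copy-on-first-use semantics entirely and have the react container's entrypoint copy /app/frontend/build into the shared mount at startup.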
I want to use a named volume inside my docker compose file which binds to a user defined path in the host. It seems like it should be possible since I have seen multiple examples online one of them being How can I mount an absolute host path as a named volume in docker-compose?.
So, I wanted to do the same. Please bear in mind that this is just an example and I have a use case where I want to use named volumes for DRYness.
Note: I am using Docker for Windows with WSL2
version: '3'
services:
  example:
    image: caddy
    restart: unless-stopped
    volumes:
      - caddy_data:/data
      - ./Caddyfile:/etc/caddy/Caddyfile

volumes:
  caddy_data:
    name: caddy_data
    driver_opts:
      o: bind
      device: D:\Some\path_in\my\host
      type: none

# volumes:
#   caddy_data:
#     external: true
#     name: caddyvol
This does not work, and every time I do docker compose up -d I get the error:
[+] Running 1/2
- Volume "caddy_data" Created 0.0s
- Container project-example-1 Creating 0.9s
Error response from daemon: failed to mount local volume: mount D:\Some\path_in\my\host:/var/lib/docker/volumes/caddy_data/_data, flags: 0x1000: no such file or directory
But if I create the volume first using
docker volume create --opt o=bind --opt device=D:\Some\path_in\my\host --opt type=none caddyvol
and then use the above in my docker compose file (see the above file's commented section), it works perfectly.
I have even inspected both volumes to look for a difference between them and found none:
docker volume inspect caddy_data

[
    {
        "CreatedAt": "2021-12-12T18:19:20Z",
        "Driver": "local",
        "Labels": {
            "com.docker.compose.project": "ngrok-compose",
            "com.docker.compose.version": "2.2.1",
            "com.docker.compose.volume": "caddy_data"
        },
        "Mountpoint": "/var/lib/docker/volumes/caddy_data/_data",
        "Name": "caddy_data",
        "Options": {
            "device": "D:\\Some\\path_in\\my\\host",
            "o": "bind",
            "type": "none"
        },
        "Scope": "local"
    }
]

docker volume inspect caddyvol

[
    {
        "CreatedAt": "2021-12-12T18:13:17Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/caddyvol/_data",
        "Name": "caddyvol",
        "Options": {
            "device": "D:\\Some\\path_in\\my\\host",
            "o": "bind",
            "type": "none"
        },
        "Scope": "local"
    }
]
Any ideas what's going wrong in here?
Finally managed to figure it out, thanks to someone pointing out half of my mistake. While defining the volume in the compose file, the device should be in Linux path format, without the : after the drive name. Also, the compose version number should be fully defined. So, in the example case, it should be:
version: '3.8'
services:
  example:
    image: caddy
    restart: unless-stopped
    volumes:
      - caddy_data:/data
      - ./Caddyfile:/etc/caddy/Caddyfile

volumes:
  caddy_data:
    name: caddy_data
    driver_opts:
      o: bind
      device: d/Some/path_in/my/host
      type: none
But this still did not work, and it seemed to fail only with Docker Desktop for Windows. So, I went into \\wsl.localhost\docker-desktop-data\version-pack-data\community\docker\volumes and checked the difference between the manually created volume and the volume generated from the compose file.
The only difference was in the MountDevice key in the opts.json file for each. The manually created volume had /run/desktop/mnt/host/ prepended to the path provided. So, I updated my compose file to:
version: '3.8'
services:
  example:
    image: caddy
    restart: unless-stopped
    volumes:
      - caddy_data:/data
      - ./Caddyfile:/etc/caddy/Caddyfile

volumes:
  caddy_data:
    name: caddy_data
    driver_opts:
      o: bind
      device: /run/desktop/mnt/host/d/Some/path_in/my/host
      type: none
And this worked!
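One caveat worth adding: compose reuses an already-existing volume rather than recreating it with new driver_opts, so the volume left over from the earlier failed attempts has to be removed before the corrected file takes effect. A sketch, assuming the volume is named caddy_data as above:

docker compose down -v     # remove containers plus the mis-created caddy_data volume
docker compose up -d
docker volume inspect --format '{{ .Options.device }}' caddy_data
# expected: /run/desktop/mnt/host/d/Some/path_in/my/host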
I need to find a volume easily by label or name, not by a Docker-assigned ID, like:
docker volume ls --filter label=key=value
but if I add container_name or labels to docker-compose.yaml, I can't see any label or name assigned to the volume when I inspect it. Here is the output:
>>> docker volume inspect <volume_id>

[
    {
        "CreatedAt": "2020-10-28T11:41:51+01:00",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/4dce13df34f4630b34fbf1f853f7b59dbee2e3150a5122fa38d02024c155ec7d/_data",
        "Name": "4dce13df34f4630b34fbf1f853f7b59dbee2e3150a5122fa38d02024c155ec7d",
        "Options": null,
        "Scope": "local"
    }
]
I believe it should be possible to filter volumes by label and name.
Here is the relevant part of the docker-compose.yml config for the mongo service:
version: '3.4'
services:
  mongodb:
    container_name: some_name
    image: mongo
    labels:
      com.docker.compose.project: app-name
    restart: always
    ports:
      - 27017:27017
    volumes:
      - ./mongo:/data/db
I'm not exactly sure what you're trying to achieve here, but I hope something in my response will be helpful.
You can define a named volume within your docker-compose.yml:
version: '3.4'
services:
  mongodb:
    container_name: some_name
    image: mongo
    labels:
      com.docker.compose.project: app-name
    restart: always
    ports:
      - 27017:27017
    volumes:
      - mongo_db:/data/db

volumes:
  mongo_db:
You could then use the docker volume inspect command to see some details about this volume.
docker volume inspect mongo_db
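To make the label filter from the question actually match, note that the labels: key on the service only labels the container; the named volume can carry its own labels in the top-level volumes section. A sketch, using a made-up label key:

volumes:
  mongo_db:
    labels:
      com.example.project: app-name

After docker compose up, docker volume ls --filter label=com.example.project=app-name should list it, and docker volume ls --filter name=mongo_db filters by the volume's name (which compose prefixes with the project name unless name: is set).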
I created a named volume with my docker-compose file and set up a bind to a local directory, but the files are written both in my custom directory and in the default Docker directory for volumes, /var/lib/docker/....
How can I make Docker write only to my custom directory?
My docker-compose file:
version: '3.7'
services:
  app:
    image: php
    ports:
      - 8090:80
    volumes:
      - app_files_data:/app/files

volumes:
  app_files_data:
    driver: local
    driver_opts:
      type: 'none'
      o: 'bind'
      device: '/home/myapp/files'
The docker volume inspect output:
[
    {
        "CreatedAt": "2019-08-14T09:47:29Z",
        "Driver": "local",
        "Labels": {
            "com.docker.compose.project": "php",
            "com.docker.compose.version": "1.24.0",
            "com.docker.compose.volume": "app_files_data"
        },
        "Mountpoint": "/var/lib/docker/volumes/my_app_files_data/_data",
        "Name": "php_app_files_data",
        "Options": {
            "device": "/home/myapp/files",
            "o": "bind",
            "type": "none"
        },
        "Scope": "local"
    }
]
By using app_files_data:/app/files together with the top-level volumes section, you are using a named volume; that is why the location is under /var/lib/docker/volumes. You may change the compose file to:
version: '3.7'
services:
  app:
    image: php
    ports:
      - 8090:80
    volumes:
      - /path/to/app_files_data:/app/files
Note that I removed the volumes section.
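As an aside on the original symptom: with type: none and o: bind, the _data mountpoint under /var/lib/docker/volumes is a bind-mounted view of /home/myapp/files while a container is using the volume, not a second copy on disk. On the Docker host this can be checked with something like:

# shows that the volume's mountpoint is backed by the same filesystem path
findmnt /var/lib/docker/volumes/php_app_files_data/_data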
My Dockerrun.aws.json looks like this:
{
    "AWSEBDockerrunVersion": 2,
    "volumes": [
        {
            "name": "docker-socket",
            "host": {
                "sourcePath": "/var/run/docker.sock"
            }
        }
    ],
    "containerDefinitions": [
        {
            "name": "nginx",
            "image": "nginx",
            "environment": [
                {
                    "name": "VIRTUAL_HOST",
                    "value": "demo.local"
                }
            ],
            "essential": true,
            "memory": 128
        },
        {
            "name": "nginx-proxy",
            "image": "jwilder/nginx-proxy",
            "essential": true,
            "memory": 128,
            "portMappings": [
                {
                    "hostPort": 80,
                    "containerPort": 80
                }
            ],
            "mountPoints": [
                {
                    "sourceVolume": "docker-socket",
                    "containerPath": "/tmp/docker.sock",
                    "readOnly": true
                }
            ]
        }
    ]
}
Running this locally using "eb local run" results in:
ERROR: you need to share your Docker host socket with a volume at
/tmp/docker.sock Typically you should run your jwilder/nginx-proxy
with: -v /var/run/docker.sock:/tmp/docker.sock:ro See the
documentation at http://git.io/vZaGJ
If I ssh into my docker machine and run:
docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock jwilder/nginx-proxy
It creates the container and mounts the volumes correctly.
Why is the above Dockerrun.aws.json configuration not mounting the /var/run/docker.sock:/tmp/docker.sock volume correctly?
If I run the same configuration from a docker-compose.yml, it works fine locally. However, I want to deploy this same configuration to Elastic Beanstalk using a Dockerrun.aws.json:
version: '2'
services:
  nginx:
    image: nginx
    container_name: nginx
    cpu_shares: 100
    volumes:
      - /var/www/html
    environment:
      - VIRTUAL_HOST=demo.local
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    cpu_shares: 100
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
My local setup is using:
VirtualBox 5.0.22 r108108
docker-machine version 0.7.0, build a650a40
EB CLI 3.7.7 (Python 2.7.1)
Your Dockerrun.aws.json file works fine in AWS EB for me (I only changed it slightly to use our own container/hostname in place of the nginx container). Is it just a problem with the eb local run setup, perhaps?
As you said you are on Mac, try upgrading to the new Docker 1.12, which runs Docker natively on macOS, or at least to a newer version of docker-machine: https://docs.docker.com/machine/install-machine/#/installing-machine-directly
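If it helps to isolate whether the mount itself is failing, one quick check, assuming the proxy container keeps the nginx-proxy name from the compose file, is to confirm that the socket actually appears inside the container:

docker exec nginx-proxy ls -l /tmp/docker.sock
# a working mount shows a socket (file type 's'), something like:
# srw-rw---- 1 root root 0 ... /tmp/docker.sock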