We are creating a network with Hyperledger Fabric. All our ledger data is stored inside the /var/lib/docker/volumes directory, which gets removed once the network is brought down. It looks to me as though the Docker volume mounting is not working. How can we use a local system path, such as the ../var/hyperledger location suggested by Hyperledger Fabric, to store this data instead?
You can use the volumes property in your docker-compose YAML file:
volumes:
  - <LOCAL SYSTEM PATH RELATIVE TO THIS FILE>:<PATH INSIDE THE DOCKER CONTAINER>

e.g.:

volumes:
  - ../system-genesis-block:/var/lib/docker/volume
If you are using the network.sh script to bring down the network, it removes everything, which I think is fine for development. Using docker-compose to stop the network leaves everything as is. There is a school of thought that says your data should live outside the container to be safe: you can always recreate a container, but that will not bring back lost data.
I like to store my volumes close to the hyperledger code, so I use the following in my YAML
volumes:
  - ../system-genesis-block/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
  - ../organizations/ordererOrganizations/vfact.ie/orderers/orderer.vfact.ie/msp:/var/hyperledger/orderer/msp
  - ../organizations/ordererOrganizations/vfact.ie/orderers/orderer.vfact.ie/tls/:/var/hyperledger/orderer/tls
  - $PWD/data/orderer.vfact.ie:/var/hyperledger/production/orderer
where the current working directory ($PWD) is the root of the Hyperledger files and directories. Without knowing much about how you run your system, I offer this as a suggestion from someone who has had good experiences with this kind of setup. If you are using network.sh to bring down the network, you could remove the --volumes --remove-orphans flags from the docker-compose down call in the networkDown() function.
I have a task that I have already solved, but I'm not satisfied with the solution. Basically, I have a webserver container (nginx) and a FastCGI container (PHP-FPM). The webserver container is built on an off-the-shelf image; the FCGI container is based on a custom image and contains the application files. Now, since not everything is source code and processed by the FCGI container, I need to make the application files available inside the webserver container as well.
Here's the docker-compose.yml that does the job:
version: '3.3'

services:
  nginx:
    image: nginx:1-alpine
    volumes:
      # customize just the Nginx configuration file
      - type: bind
        source: ./nginx.conf
        target: /etc/nginx/nginx.conf
      # mount application files from the PHP-FPM container
      - type: volume
        source: www-data
        target: /var/www/my-service
        read_only: true
        volume:
          nocopy: true
    ports:
      - "80:80"
    depends_on:
      - php-fpm
  php-fpm:
    image: my-service:latest
    command: ["/usr/sbin/php-fpm7.3", "--nodaemonize", "--force-stderr"]
    volumes:
      # create volume from application files
      # This one populates the content of the volume.
      - type: volume
        source: www-data
        target: /var/www/my-service

volumes:
  # volume with application files shared between nginx and php-fpm
  www-data:
What I don't like here is mostly reflected by the comments concerning the volumes. Who creates and stores data should be obvious from the code and not from comments. Also, what I really dislike is that docker actually creates a place where it stores data for this volume. Not only does this use up disk space and increase startup time, it also requires me to never forget to use docker-compose down --volumes in order to refresh the content on next start. Imagine my anger when I found out that down didn't tear down what up created and that I was hunting ghosts from previous runs.
My questions concerning this:
Can I express in code that one container contains data that should be made available to other containers more clearly? The above code works, but it fails utterly to express the intent.
Can I avoid anything persistent being created to avoid above mentioned downsides?
I would have liked to investigate into things like tmpfs volumes or other volume options. My problem is that I can't find documentation for available volume drivers or even explore which volume drivers exist. Maybe I have missed some CLI for that, I'd really appreciate a nudge in the right direction here.
You can use the local driver with the option type=tmpfs, for example:
volumes:
  www-data:
    driver: local
    driver_opts:
      type: tmpfs
      device: tmpfs
This meets your requirements:
Data will be shared between containers at runtime
Data will not be persisted, i.e. the volume will be emptied when containers are stopped, restarted or destroyed
This is the CLI equivalent of:
docker volume create --driver local --opt type=tmpfs --opt device=tmpfs www-data
Important note: this is NOT a Docker tmpfs mount, but a Docker volume using a tmpfs option. As stated in the local volume driver documentation, it uses the Linux mount options, in our case -t (--types) to specify a tmpfs filesystem. Contrary to a simple tmpfs mount, this allows you to share the volume between containers while retaining the classic behavior of a temporary filesystem.
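For contrast, a plain tmpfs mount (private to one container, not shareable) would be declared like this in compose. This is just a sketch; the service name and image are assumptions:

```yaml
services:
  app:
    image: nginx:1-alpine
    tmpfs:
      # mounted fresh for this container only; other services cannot see it
      - /var/www/my-service
```

With this form there is no top-level volumes entry at all, which is exactly why it cannot be used to share the PHP files with another container.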
I can't find documentation for available volume drivers or even explore which volume drivers exist
Volume doc, including tmpfs, bind mounts and volumes
The local driver options are found in the docker volume create doc
Volume driver plugins: some are still updated regularly or seem maintained, but most have not been updated for a long time or are deprecated. The list does not seem exhaustive, though; for instance, vieux/sshfs is not mentioned.
Can I express in code that one container contains data that should be made available to other containers more clearly? The above code works, but it fails utterly to express the intent.
I don't think so; your code is already quite clear as to the intent of this volume:
That's what volumes are for: sharing data between containers. As stated in the doc, volumes can be more safely shared among multiple containers; furthermore, only containers are supposed to write into volumes.
nocopy and read_only clearly express that nginx relies on data written by another container, as it will only be able to read from this volume
Given that your volume is not external, it is safe to assume only another container from the same stack can use it
A bit of logic and experience with Docker lets you quickly reach the previous point, but even for less experienced Docker users your comments give clear indications, and your comments are part of the code ;)
You can also build a custom nginx image that copies the static files from your PHP image.
Here is a Dockerfile for nginx:
FROM my-service:latest AS src-files
FROM nginx
COPY --from=src-files /path-to-static-in-my-service-image /path-to-static-in-nginx
This allows you to use no volumes for the source code at all.
You can also parameterize the image tag with a build argument (note that the ARG must be declared before the FROM that uses it, and is passed at build time with --build-arg):

ARG TAG=latest
FROM my-service:${TAG} AS src-files
...
It depends on the use case of your setup: whether it is only for local dev, or whether you want the same thing in production. In dev, having a volume populated manually or by one container for the others can be OK.
But if you want something that will run the same way in production, you may need something else. For example, in production I don't want my code in a volume; I want it in my image in an immutable way, and I just need to redeploy it.
For me, a volume is not for storing application code but for storing data like caches, user-uploaded content, etc. That is something we want to keep between deployments.
So, if we want two images to have the same files without a volume, I build two images with the application code and static content: one for PHP, one for nginx.
But the deployment is usually not synchronous for the two images. We solve this issue by deploying the PHP application first and the nginx application after. On nginx, we add a config that tries to serve static content from its own filesystem first and, if the file doesn't exist, asks PHP for it.
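The static-first fallback described above might look something like this in the nginx config. This is a sketch: the upstream name, port and document root are assumptions, not taken from the original setup:

```nginx
location / {
    # serve the file from this image's own copy if it exists,
    # otherwise fall back to the PHP-FPM backend
    try_files $uri @php;
}

location @php {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME /var/www/my-service/index.php;
    fastcgi_pass php-fpm:9000;
}
```

During a rolling deploy, a request for a new static file that nginx does not have yet simply falls through to the already-updated PHP container.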
For the dev environment, we reuse the same image but use a volume to mount the current code inside the container (a host bind mount).
But in some cases, the bind mount can have issues:
- On Mac, file sharing is slow, but it should be better with the latest version of Docker Desktop in the Edge channel (2.3.1.0)
- On Windows, file sharing is slow too
- On Linux, we need to be careful about file permissions and the user used inside the container
If you try to solve one or more of these issues with the volume solution, there are some options: on Mac, I would first try the Edge release of Docker; on Windows, if possible, I would use WSL2 with Docker set to use the WSL2 backend.
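The dev-time host bind mount mentioned above could be sketched in a compose override file like this (the file name, service name and paths are assumptions):

```yaml
# docker-compose.override.yml, used only in development
services:
  php-fpm:
    volumes:
      # mount the working tree over the code baked into the image
      - ./src:/var/www/my-service
```

Keeping the mount in an override file means the production compose file stays free of any host-path dependency.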
What's the difference between declaring a volumes section in the docker-compose.yml file and just using the volumes keyword under a service?
For example, I map a volume this way for a container:
services:
  mysqldb:
    volumes:
      - ./data:/var/lib/mysql
This maps to the folder called data in my working directory.
But I could also map a volume by declaring a volumes section and using its alias for the container:
services:
  mysqldb:
    volumes:
      - data_volume:/var/lib/mysql

volumes:
  data_volume:
    driver: local
In this method, the actual location where the mapped files are stored is managed by Docker.
What are the differences between these 2 methods or are they the same? Which one should I really use?
Are there any benefits of using one method over the other?
The difference between the methods you've described is that the first is a bind mount and the second is a volume. These are Docker features (rather than Docker Compose features), and there are several benefits volumes provide over mounting a path from your host's filesystem. As described in the documentation, they:
are easier to back up or migrate
can be managed with docker volume CLI commands or the Docker API (as opposed to the raw filesystem)
work on both Linux and Windows containers
can be safely shared among multiple containers
can have content pre-populated by a container (with bind mounts sometimes you have to copy data out, then restart the container)
Another massive benefit of using volumes is volume drivers, which you specify in place of local. They allow you to store volumes remotely (i.e. in the cloud, etc.) or add other features like encryption. This is core to the concept of containers: if the running container is stateless and uses remote volumes, you can move the container across hosts and run it without reconfiguration.
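As a sketch of the driver idea, a compose file could declare a volume backed by the vieux/sshfs plugin mentioned elsewhere in this thread. The host and remote path are placeholders, and the plugin must be installed and its SSH authentication configured beforehand:

```yaml
volumes:
  data_volume:
    driver: vieux/sshfs
    driver_opts:
      # data lives on a remote machine instead of the local filesystem
      sshcmd: user@remotehost:/remote/path
```

Containers mounting data_volume are unchanged; only the top-level volume declaration knows the data is remote.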
Therefore, the recommendation is to use Docker volumes. Another good example is the following:
services:
  webserver_a:
    volumes:
      - ./serving/prod:/var/www
  webserver_b:
    volumes:
      - ./serving/prod:/var/www
  cache_server:
    volumes:
      - ./serving/prod:/cache_root
If you move the ./serving directory somewhere else, the bind mounts break because they are relative paths. As you noted, volumes have aliases and their paths are managed by Docker, so:
you wouldn't need to find and replace the path three times
the volume using local stores data somewhere else on your system and would continue mounting just fine
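The same three services rewritten with a named volume might look like this (the volume name prod_content is made up for illustration):

```yaml
services:
  webserver_a:
    volumes:
      - prod_content:/var/www
  webserver_b:
    volumes:
      - prod_content:/var/www
  cache_server:
    volumes:
      - prod_content:/cache_root

volumes:
  prod_content:
```

Docker now manages where the data lives, so moving or renaming your project directory no longer breaks any of the mounts.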
TL;DR: try and use volumes. They're portable, and encourage practices that reduce dependencies on your host machine.
I will try to describe my desired functionality:
I'm running Docker Swarm via docker-compose.
In the docker-compose file I have services; for simplicity, let's call them A, B, and C.
Assume service C, which includes shared code modules, needs to be accessible to services A and B.
My questions are:
1. Must each service that needs access to the shared volume mount the C volume into its own local folder (using the volumes section as below), or can it be accessed without mounting/copying to a path in the local container?
2. In Docker Swarm, two instances of services A and B may reside on computer X while service C resides on computer Y. Is it true that, because the services are all maintained under the same Docker Swarm stack, they will communicate with service C without problems? If not, which definitions are needed to achieve this?
My structure is something like this:
version: "3.4"

services:
  A:
    build: .
    volumes:
      - C:/usr/src/C
    depends_on:
      - C
  B:
    build: .
    volumes:
      - C:/usr/src/C
    depends_on:
      - C
  C:
    image: repository.com/C:1.0.0
    volumes:
      - C:/shared_code

volumes:
  C:
If what you’re sharing is code, you should build it into the actual Docker images, and not try to use a volume for this.
You’re going to encounter two big problems. One is getting a volume correctly shared in a multi-host installation. The second is a longer-term issue: what are you going to do if the shared code changes? You can’t just redeploy the C module with the shared code, because the volume that holds the code already exists; you need to separately update the code in the volume, restart the dependent services, and hope they both work. Actually baking the code into the images makes it possible to test the complete setup before you try to deploy it.
Sharing code is an anti-pattern in a distributed model like Swarm. Like David says, you'll need that code in the image builds, even if there's duplicate data. There are lots of ways to have images built on top of others to limit the duplicate data.
If you still need to share data between containers in swarm on a file system, you'll need to look at some shared storage like AWS EFS (multi-node read/write) plus REX-Ray to get your data to the right containers.
Also, depends_on doesn't work in Swarm. Your apps in a distributed system need to handle the lack of connection to other services in a predictable way: maybe they just exit (and Swarm will re-create them), or go into a retry loop in code, etc. depends_on is meant for the local docker-compose CLI in development, where you want to spin up an app and its dependencies by doing something like docker-compose up api.
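The "retry loop in code" idea can be sketched in any language; here is a minimal Python version (the dependency's host name, port and timings are placeholders, not part of the setup above):

```python
import socket
import time


def wait_for(host: str, port: int, retries: int = 30, delay: float = 2.0) -> bool:
    """Retry a TCP connection until the dependency is reachable.

    Returns True once (host, port) accepts a connection, or False when
    retries run out, letting the caller exit non-zero so the orchestrator
    can reschedule the task.
    """
    for _ in range(retries):
        try:
            with socket.create_connection((host, port), timeout=1):
                return True
        except OSError:
            time.sleep(delay)
    return False
```

A service entrypoint would call something like wait_for("C", 80) before starting its real work, and simply exit on False so Swarm restarts the task.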
I want to use Docker MySQL.
docker run mysql
But I don't want to save the data on the host machine; I want all the information to be kept inside the container. By default, this image creates an unnamed volume and attaches it to the container.
Is it possible to use the same image (I don't want to create a new MySQL image from the ground up) but disable the volume?
In other words: many Docker images on Docker Hub use volumes by default. What is the easiest way to keep all the data inside the container (so that push and commit will contain the data)? Is there a command to stop a container, change its Mounts settings, and start it again?
I know that this is not best practice; my question is whether it is possible.
EDIT: there is a tool mentioned in the comments of the thread below that can edit Docker image metadata, allowing you to remove a volume.
This is currently an open issue awaiting someone with the bandwidth to code it. You can track the progress here, with this link going directly to the applicable comment:
#veqryn since reopening this issue, nobody started working on a pull-request; the existing pull request did no longer apply cleanly on the code-base so a new one has to be opened; if anyone is interested in working on this, then things can get going again.
I too would like this feature! Mounting /var/lib/mysql/ on windows hosts with NTFS gives the volume root:root permissions which can't be chown'd; I don't want to add mysql user to the root group. I would like to UNVOLUME the /var/lib/mysql directory and replace it with a symlink that does have mysql:mysql permissions, pointed at /host/ntfs/mnt which is root:root 🤷♀️
As shown in this question, you can easily create, name and associate a container volume with the default unnamed one of mysql:
version: '2'

services:
  db:
    image: mysql
    volumes:
      - dbdata:/var/lib/mysql

volumes:
  dbdata:
    driver: local
See "Mount a shared-storage volume as a data volume": you can use other drivers, like flocker, and benefit from a multi-host portable volume.
I have been trying to use docker-compose to spin up a postgres container with a single, persistent named volume.
The goal is to have different postgres containers share the same persisted data (not concurrently!): one container dies or is killed, another takes its place without losing previously persisted data.
As I understand it, "named volumes" are supposed to replace "data volume containers".
However, so far one of two things happens:
The postgres container fails to start up, with the error message "ERROR: Container command not found or does not exist."
I achieve persistence for only that specific container. If it is stopped and removed and another container started, we start with a blank slate.
So, as far as I understand, the postgres image does create its own volume, which is of course bound to that specific container. Which would be fine, if I could just get THAT volume aliased or linked or something with the named volume.
Current incarnation of docker-compose.yml:
version: '2'

services:
  db:
    image: postgres
    restart: always
    volumes:
      - myappdb:/var/lib/postgresql/data/
    environment:
      - POSTGRES_PASSWORD=mysecretpasswordPleaseChangeME

volumes:
  myappdb:
    driver: local
Am I doing something stupidly wrong, or attempting something that is simply not supported?
Docker version 1.10.3, build 20f81dd
docker-compose version 1.6.0, build d99cad6
Ok, after a lot of trial and error, things are now working as they should (meaning I am able to run docker-compose down and then docker-compose up and my data is in the state where it was left by the down command).
In general, a few things:
Don't use the PGDATA environment option with the official postgres image.
If you are using Spring Boot (as I was) and docker-compose (as I was) and passing environment options to a service linked to your database container, do not wrap a profile name in double quotes. It is passed to Spring as-is, resulting in a non-existent profile being used as the active profile.
I had some subtle and strange things incorrectly configured initially, but I suspect the killer was point 2 above: it caused my app, when running in a container, to use an in-memory H2 database instead of the linked container database. So everything functioned (almost) perfectly until container shutdown. And when running from the IDE against the container DB (with ports exposed to the host), everything worked perfectly (including persistence), since the active profile parameter was correctly set in the IDE launcher (NO quotes!).
Live and learn I guess (but I do feel a LOT of egg on my face).
You need to tell Compose that it should manage creation of the volume; otherwise it assumes the volume already exists on the host.
volumes:
  myappdb:
    external: false
Docs: https://docs.docker.com/compose/compose-file/#external
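For contrast, setting external: true tells Compose the volume already exists and must not be created by it; you would create it yourself beforehand, e.g. with docker volume create myappdb:

```yaml
volumes:
  myappdb:
    external: true
```

With external: false (the default), docker-compose up creates the volume on first run, which is usually what you want for a self-contained stack.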