Docker Compose - save configuration

Here's my docker-compose.yml file, adapted from here:
version: '3.1'

services:
  mysql:
    image: mariadb
    environment:
      MYSQL_DATABASE: drupal8
      MYSQL_USER: drupal8
      MYSQL_PASSWORD: drupal8
      MYSQL_ROOT_PASSWORD: admin
    volumes:
      - /var/lib/mysql
    restart: always

  drupal:
    image: drupal:8.2-apache
    ports:
      - 8080:80
    volumes:
      - /var/www/html/modules
      - /var/www/html/profiles
      - /var/www/html/themes
      # this takes advantage of the feature in Docker that a new anonymous
      # volume (which is what we're creating here) will be initialized with the
      # existing content of the image at the same location
      - /var/www/html/sites
    restart: always
    links:
      - mysql
Now on running this and opening up localhost:8080 in my browser, I'm presented with Drupal's configuration setup, which I duly follow and presto, my first Drupal page is created. What I ultimately need to do is:

1. Save the configuration somehow, so that the settings persist
2. Be able to push these two containers to a single repository in Docker Hub

The end goal is to be able to issue docker run myDockerHubUsername/myRepo, which would pull these two containers and Drupal would be preconfigured.

Your docker-compose setup is already saving all the data and configuration you created. Even if you destroy the containers, the data persists.
You need to keep your mounted volumes!
If you want to run this somewhere else, you need to carry your data/volumes with you. Remember to check or change the paths.
For the second point, it is not advisable to pack multiple services into one image. If you still want to, you need to prepare a Dockerfile and build a single image out of that.
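To make the persisted data explicit and easier to carry around, you could switch the anonymous volumes to named ones. A minimal sketch of the database service only (the volume name drupal_db is illustrative):

version: '3.1'
services:
  mysql:
    image: mariadb
    environment:
      MYSQL_DATABASE: drupal8
      MYSQL_USER: drupal8
      MYSQL_PASSWORD: drupal8
      MYSQL_ROOT_PASSWORD: admin
    volumes:
      # named volume instead of an anonymous one
      - drupal_db:/var/lib/mysql
    restart: always
volumes:
  drupal_db:

Named volumes survive docker-compose down (as long as you don't pass -v) and can be listed and backed up with the docker volume commands, whereas anonymous volumes are easy to lose track of. For the Docker Hub goal, you would then docker build your single Dockerfile-based image and docker push it as myDockerHubUsername/myRepo.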

Related

Network Setting in Docker Compose file, to Join Wordpress Containers Together

I'm hosting 2 WordPress websites on my VPS, and I'm using Nginx Proxy Manager to proxy them.
I use docker network connect to join NPM and the 2 WordPress containers together to make them work, but after a reload or restart of Docker the network between them is broken. (Is that because I use systemctl restart docker? Or compose down & up?)
So now I've decided to create a new Docker network called bridge_default and put this network in the docker compose file, so I don't have to connect those containers together every time to make them work.
But now I don't know what is wrong in the docker compose file. Can anyone tell me how to put networks in a docker compose file correctly?
version: "3"
# Defines which compose version to use
services:
# Services line define which Docker images to run. In this case, it will be MySQL server and WordPr> db:
image: mariadb:10.6.4-focal
# image: mysql:5.7 indicates the MySQL database container image from Docker Hub used in this inst> restart: always
networks:
- default
environment:
MYSQL_ROOT_PASSWORD: PassWord#123
MYSQL_DATABASE: wordpress
MYSQL_USER: admin
MYSQL_PASSWORD: PassWord#123
# Previous four lines define the main variables needed for the MySQL container to work: databas>
wordpress:
depends_on:
- db
image: wordpress:latest
restart: always
# Restart line controls the restart mode, meaning if the container stops running for any reason, > networks:
- default
environment:
WORDPRESS_DB_HOST: db:3306
WORDPRESS_DB_USER: admin
WORDPRESS_DB_PASSWORD: PassWord#123
WORDPRESS_DB_NAME: wordpress
# Similar to MySQL image variables, the last four lines define the main variables needed for the Word> volumes:
["./wordpress:/var/www/html"]
volumes:
mysql: {}
networks:
default: bridge_default
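(For reference: an entry under the top-level networks: key must be a mapping, not a bare string like bridge_default above. A minimal sketch of attaching both services to a pre-created external network, assuming it was made beforehand with docker network create bridge_default, might look like this:

version: "3.5"
# ... services as above ...
networks:
  default:
    external: true           # network is created and managed outside compose
    name: bridge_default     # the name property requires compose file format >= 3.5

With external: true, compose attaches the containers to the existing network instead of creating and removing its own, so the attachment survives compose down/up cycles.)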

How to bind folders inside docker containers?

I have docker-compose.yml on my local machine like below:
version: "3.3"
services:
api:
build: ./api
volumes:
- ./api:/api
ports:
- 3000:3000
links:
- mysql
depends_on:
- mysql
app:
build: ./app
volumes:
- ./app:/app
ports:
- 80:80
mysql:
image: mysql:8.0.27
volumes:
- ./mysql:/var/lib/mysql
tty: true
restart: always
environment:
MYSQL_DATABASE: db
MYSQL_ROOT_PASSWORD: qwerty
MYSQL_USER: db
MYSQL_PASSWORD: qwerty
ports:
- '3306:3306'
The api is a NestJS app; app and mysql are Angular and MySQL respectively. I need to work with this setup locally.
How can I make it so that my changes are applied without rebuilding the containers every time?
You don't have to build an image for a development environment with your sources in it. For NestJS, and since you're using Docker (I deliberately say Docker because other container runtimes exist), you can simply run a Node.js image from the Docker main registry: https://hub.docker.com/_/node.
You could run it with:
docker run -d -v "$(pwd)/app:/app" node:12-alpine node /app/index.js
N.B.: I chose 12-alpine for the example, and $(pwd) is used because docker run's -v flag wants an absolute host path. I imagine the file that starts your app is index.js; replace it with yours.
You must install the Node dependencies yourself, and they must be in the ./app directory.
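If you don't have Node installed on the host, the dependencies can be installed through a throwaway container using the same image (a sketch; it assumes a package.json already exists in ./app):

# run npm install inside the mounted directory, then discard the container
docker run --rm -v "$(pwd)/app:/app" -w /app node:12-alpine npm install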
For docker-compose, it could look like this:
version: "3.3"
services:
app:
image: node:12-alpine
command: /app/index.js
volumes:
- ./app:/app
ports:
- "80:80"
Same way for your API project.
For a production image, it is still suggested to build the image with the sources in it.
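A production Dockerfile, by contrast, typically installs dependencies and copies the sources at build time. A minimal sketch (the file names and npm invocation are assumptions about your project layout):

FROM node:12-alpine
WORKDIR /app
# install dependencies first so this layer is cached across source changes
COPY package*.json ./
RUN npm ci --only=production
# then bake the application sources into the image
COPY . .
CMD ["node", "index.js"]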
Say you're working on your front-end application (app). This needs to make calls out to the other components, especially api. So you can start the things it depends on, but not the application itself:
docker-compose up -d api
Update your application configuration for this different environment; if you would have proxied to http://api:3000 before, for example, you need to change this to http://localhost:3000 to connect to the container's published ports:.
Now you can develop your application totally normally, without doing anything Docker-specific.
# outside Docker, on your normal development workstation
yarn run dev
$EDITOR src/components/Foo.tsx
You might find it convenient to use environment variables for these settings that will, well, differ per environment. If you're developing the back-end code but want to attach a live UI to it, you'll either need to rebuild the container or update the front-end's back-end URL to point at the host system.
This approach also means you do not need to bind-mount your application's code into the container, and I'd recommend removing those volumes: blocks.
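For example, if your dev server reads the back-end URL from an environment variable (the name API_URL here is only illustrative; use whatever your build tooling actually expects), switching between the in-container and published addresses is a one-liner:

# outside Docker; point the dev server at the container's published port
API_URL=http://localhost:3000 yarn run dev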

Docker Volumes Error. Need help setting it right

I have been trying to install Drupal using the official image from Docker Hub. I created a new folder in my D: directory for my Drupal project and created a docker-compose.yml file.
# Drupal with PostgreSQL
#
# Access via "http://localhost:8080"
#   (or "http://$(docker-machine ip):8080" if using docker-machine)
#
# During initial Drupal setup,
# Database type: PostgreSQL
# Database name: postgres
# Database username: postgres
# Database password: example
# ADVANCED OPTIONS; Database host: postgres
version: '3.1'

services:
  drupal:
    image: drupal:8-apache
    ports:
      - 8080:80
    volumes:
      - /var/www/html/modules
      - /var/www/html/profiles
      - /var/www/html/themes
      # this takes advantage of the feature in Docker that a new anonymous
      # volume (which is what we're creating here) will be initialized with the
      # existing content of the image at the same location
      - /var/www/html/sites
    restart: always

  postgres:
    image: postgres:10
    environment:
      POSTGRES_PASSWORD: example
    restart: always
When I ran the docker-compose up -d command in a terminal from within the folder which contained the docker-compose.yml file, my Drupal container and its database were successfully installed and running, and I was able to access the site at http://localhost:8080, but I couldn't find their core files in the folder. It was just the docker-compose.yml file in the folder.
I then removed the whole Docker container and began a fresh installation, editing the volumes section of the docker-compose.yml file to point to the directory where I want the core files of Drupal to be populated, for example D:/My Project/Drupal Project.
# Drupal with PostgreSQL
#
# Access via "http://localhost:8080"
#   (or "http://$(docker-machine ip):8080" if using docker-machine)
#
# During initial Drupal setup,
# Database type: PostgreSQL
# Database name: postgres
# Database username: postgres
# Database password: example
# ADVANCED OPTIONS; Database host: postgres
version: '3.1'

services:
  drupal:
    image: drupal:latest
    ports:
      - 8080:80
    volumes:
      - d:\projects\drupalsite/var/www/html/modules
      - d:\projects\drupalsite/var/www/html/profiles
      - d:\projects\drupal/var/www/html/themes
      # this takes advantage of the feature in Docker that a new anonymous
      # volume (which is what we're creating here) will be initialized with the
      # existing content of the image at the same location
      - d:\projects\drupalsite/var/www/html/sites
    restart: always

  postgres:
    image: postgres:10
    environment:
      POSTGRES_PASSWORD: example
    restart: always
When I ran the docker-compose up command again, I received the error shown below.
Container drupalsite_postgres_1 Created 3.2s
- Container drupalsite_drupal_1 Creating 3.2s
Error response from daemon: invalid mount config for type "volume": invalid mount path: 'z:/projects/drupalsite/var/www/html/sites' mount path must be absolute
PS Z:\Projects\drupalsite>
Please help me find a solution to this.
If these directories contain your application, they probably shouldn't be in volumes: at all. Create a file named Dockerfile that initializes your custom application:
FROM drupal:8-apache
COPY modules/ /var/www/html/modules/
COPY profiles/ /var/www/html/profiles/
COPY themes/ /var/www/html/themes/
COPY sites/ /var/www/html/sites/
# EXPOSE, CMD, etc. come from the base image
Then reference this in your docker-compose.yml file:
version: '3.8'
services:
  drupal:
    build: .   # instead of image:
    ports:
      - 8080:80
    restart: always
    # no volumes:
  postgres:
    image: postgres:10
    environment:
      POSTGRES_PASSWORD: example
    restart: always
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
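With the build: approach, picking up changes to your modules, themes, or other source directories is a rebuild away:

# rebuild the image and recreate the container in one step
docker-compose up --build -d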
If you really want to use volumes: here, there are three forms of that directive. The form you have in the question with just a path creates an anonymous volume: it causes Compose to persist that directory, initialized from what's in the image, but disconnected from your host system. With a bare name and a path, it creates a named volume, which is similar but can be explicitly managed. With two paths, it creates a bind mount, which unconditionally replaces the container content with the host-system content (there is no initialization).
version: '3.8'
services:
  something:
    volumes:
      - /path1              # anonymous volume
      - named:/path2        # named volume
      - /host/path:/path3   # bind mount
volumes:   # named volumes referenced in containers only
  named:   # usually do not need any settings
So if you do want to replace the image's contents with host directories, you need to use the bind-mount syntax. Relative paths here are interpreted relative to the location of the docker-compose.yml file.
version: '3.8'
services:
  drupal:
    image: drupal:8-apache
    volumes:
      - ./modules:/var/www/html/modules
      # etc.
A final comment on named volume initialization: your file has a comment about initializing anonymous volumes. There are two major problems with this approach, though. First, the second time you start the container, the content of the volume takes precedence, and any changes in the underlying images will be ignored. Second, this setup only works for Docker named and anonymous volumes, but not Docker bind mounts, volume mounts in Kubernetes, or other types of mount. I'd generally avoid relying on this "feature".
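One practical consequence of that first problem: if you do rely on this initialization behavior and then update the underlying image, you must discard the old volumes before the new image content becomes visible, e.g.:

# -v also removes named and anonymous volumes, so the next "up" re-copies image content
docker-compose down -v
docker-compose up -d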

How can I store data with Docker Compose containers?

I have this docker-compose.yml, and I have a Postgres database and Grafana running over it to make queries on data.
version: "3"
services:
db:
image: postgres
container_name: db
ports:
- "5432:5432"
environment:
- POSTGRES_PASSWORD=my_secret_password
grafana:
image: grafana/grafana
container_name: grafana
depends_on:
- db
ports:
- "3000:3000"
I start this compose with the command docker-compose up, but then, if I don't want to lose any data, I must run docker-compose stop instead of docker-compose down.
I also read about docker commit, but "the commit operation will not include any data contained in volumes mounted inside the container", so I guess it's no use for my needs.
What's the proper way to store the created volumes and reuse them across the up/down commands, even when recreating the containers? Must I use some backup method provided by each image (so, for example, a DB export for Postgres and some other type of export for Grafana), or is there a way to do this inside docker-compose.yml?
EDIT:
I also read about volumes, but is there a standard way to store everything?
In the link provided by @DannyB, setting volumes to ./postgres-data:/var/lib/postgresql instead of ./postgres-data:/var/lib/postgresql/data caused the container to not store the actual folder.
My question is: must every image follow a particular pattern like the one above? Is the path to the data worth persisting documented in every Docker image's README? Or is there something like:
volumes:
  - ./my_image_root:/
Docker provides volumes as the way to persist data between container invocations and to share data between containers.
They are quite simple to declare and use in compose files:
volumes:
  postgres:
  grafana:

services:
  db:
    image: postgres
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_PASSWORD=my_secret_password
    volumes:
      - postgres:/var/lib/postgresql/data
  grafana:
    image: grafana/grafana
    depends_on:
      - db
    volumes:
      - grafana:/var/lib/grafana
    ports:
      - "3000:3000"
Optionally, you can also set a local directory as your container volume,
with the added convenience of having the files easily accessible, not only from inside the container. This is especially helpful for mounting specific config files to their location in the container: you can edit the file locally like any other file and restart the container with the updated configuration (certificates and other similar files also make good use of this option). You do that like so:

volumes:
  - /home/myusername/postgres_data/:/var/lib/postgresql/data/

PS. I have omitted the container_name and version directives from this compose.yml because (as of Docker 20.10) the Docker Compose spec determines the version automatically, and docker compose exposes enough functionality that accessing the containers directly by short names usually isn't necessary.
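To the original up/down concern: with named volumes in place, docker-compose down leaves the data intact; only the -v flag removes the volumes:

docker-compose down        # containers removed, named volumes kept
docker-compose up -d       # data is still there
docker-compose down -v     # this would delete the named volumes too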

Docker - how to set up compose for local webserver

I have a docker compose file in a local folder on my Mac. I also have another folder /src which should act as the root element. The docker-compose file looks like this:
version: '2'
services:
  fpm:
    image: sbusso/php-fpm-ion
  nginx:
    image: nginx:stable
    ports:
      - "80:80"
    links:
      - fpm
      - db
  db:
    image: orchardup/mysql
    ports:
      - 3306:3306
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: myproject
I understand what we are doing here, but I am missing how /src gets taken as the root. I think I need to set up an lsync service which syncs between my local folder and the docker container. So I found this one, but it is not working properly: the root /src is not taken into account. I just want to type localhost in my browser and it should serve the /src folder.
version: '2'
services:
  fpm:
    image: sbusso/php-fpm-ion
    links:
      - sync
    volumes_from:
      - sync
  db:
    image: orchardup/mysql
    ports:
      - 3306:3306
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: myproject
    links:
      - sync
    volumes_from:
      - sync
  nginx:
    image: nginx:stable
    ports:
      - "80:80"
    links:
      - sync
    volumes_from:
      - sync
  sync:
    image: zeroboh/lsyncd
    volumes:
      - /var/www/html
      - ./src:/src:Z
      - ./docker-config/nginx:/etc/nginx/conf.d
      - /var/lib/php/session
      - ./docker-config/lrsync/lrsync.lua:/etc/lrsync/lrsync.lua
      - ./sync:/sync
What I do understand is that every image that is loaded links the sync service into it. What I do not understand is why every service needs a volumes_from, and what the syntax in sync explicitly means. Can somebody help me set this up correctly?
Thanks
volumes_from imports volumes from another container
By default, each container has no volumes. You can define local volumes using the volumes attribute, but the volumes are only used in that container. In order for other containers to make use of them, those containers must import the volumes using volumes_from, pointing to the name of one or more containers. All volumes in those named containers are then made available in the current container.
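A minimal sketch of that import relationship (the service and path names here are illustrative, not from the question):

version: '2'
services:
  data:
    image: busybox
    command: sleep 3600
    volumes:
      - /shared              # anonymous volume defined on this container
  web:
    image: nginx:stable
    volumes_from:
      - data                 # all of data's volumes, including /shared, appear here

Note that volumes_from only exists in the version 2 compose format (which this question uses); version 3 replaced it with top-level named volumes mounted into each service.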
The Z volume label indicates a private volume
You are mounting the /src volume using this:
volumes:
  - ./src:/src:Z
That's fine, except you are also using volumes_from, and your question indicates that you specifically wanted to share /src. But by using the Z label, you have told Docker to make this a private volume.
From the documentation:
Volume labels
Labeling systems like SELinux require that proper labels are placed on volume content mounted into a container. Without a label, the security system might prevent the processes running inside the container from using the content. By default, Docker does not change the labels set by the OS.
To change a label in the container context, you can add either of two suffixes :z or :Z to the volume mount. These suffixes tell Docker to relabel file objects on the shared volumes. The z option tells Docker that two containers share the volume content. As a result, Docker labels the content with a shared content label. Shared volume labels allow all containers to read/write content. The Z option tells Docker to label the content with a private unshared label. Only the current container can use a private volume.
In this case, "current container" is sync, so only that container may use the volume. The others may not use it.
