What are the consequences of using "docker" and "docker-compose" commands interchangeably?

I have several containers which are described in a docker-compose-<service>.yaml file each, and which I start with
docker-compose -f docker-compose-<service>.yaml up -d
I then see the container running via docker ps.
I expected that I could stop that container via
docker-compose -f docker-compose-<service>.yaml down
The container, however, is not stopped. Nor is it when I use the command above with stop instead of down.
Doing a docker kill <service> stops the container.
My question: since all my services started with docker-compose are effectively one container for each docker-compose-<service>.yaml file, can I use the bare docker command to stop it?
Or more generally speaking: is docker-compose simply a helper for underlying docker commands which means that using docker is always safe (from a "consistency in using different commands" perspective)?

My question: since all my services started with docker-compose are effectively one container for each docker-compose-<service>.yaml file, can I use the bare docker command to stop it?
Actually, docker-compose uses the Docker Engine under the hood; you can verify this locally.
For example, with this docker-compose.yaml:
version: "3"
services:
# Database
db:
image: mysql:5.7
restart: always
environment:
MYSQL_ROOT_PASSWORD: wordpress
MYSQL_DATABASE: wordpress
MYSQL_USER: wordpress
MYSQL_PASSWORD: wordpress
networks:
- wpsite
# phpmyadmin
phpmyadmin:
depends_on:
- db
image: phpmyadmin/phpmyadmin
restart: always
ports:
- '9090:80'
environment:
PMA_HOST: db
MYSQL_ROOT_PASSWORD: wordpress
networks:
- wpsite
networks:
wpsite:
You can now interact with the resulting containers through the plain Docker Engine CLI if needed.
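For instance, a minimal sketch (the container names are assumptions: Compose derives them from the project directory name, so yours may differ):
# bring the stack up with Compose
docker-compose up -d
# the same containers are visible to the plain docker CLI
docker ps
# and they can be stopped or removed with plain docker commands, too
docker stop myproj_db_1 myproj_phpmyadmin_1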
More globally, docker-compose is a kind of orchestrator (I prefer the term composer): if you need to define a stack of containers that depend on each other (like the phpmyadmin/mysql example above), it is perfect for testing in a dev environment. In my view, to get better resilience, HA, and service management for container stacks in a production environment, you should strongly consider a real orchestrator such as Docker Swarm, Kubernetes, or OpenShift.
Here is some documentation explaining the difference: https://linuxhint.com/docker_compose_vs_docker_swarm/
You can also see: What is the difference between `docker-compose build` and `docker build`?

Related

Network Setting in Docker Compose file, to Join Wordpress Containers Together

I'm hosting 2 WordPress websites on my VPS, and I'm using Nginx Proxy Manager to proxy them.
I used docker network connect to join the NPM and the 2 WordPress containers together to make them work, but after a reload or restart of Docker the network between them is broken. (Is that because I use systemctl restart docker? Or compose down & up?)
So now I've decided to create a new network in Docker called bridge_default and to put this network in the docker compose file, so I don't have to connect those containers together every time to make them work.
But now I don't know what is wrong in the docker compose file. Can anyone tell me how to put networks in a docker compose file correctly?
version: "3"
# Defines which compose version to use
services:
# Services line define which Docker images to run. In this case, it will be MySQL server and WordPr> db:
image: mariadb:10.6.4-focal
# image: mysql:5.7 indicates the MySQL database container image from Docker Hub used in this inst> restart: always
networks:
- default
environment:
MYSQL_ROOT_PASSWORD: PassWord#123
MYSQL_DATABASE: wordpress
MYSQL_USER: admin
MYSQL_PASSWORD: PassWord#123
# Previous four lines define the main variables needed for the MySQL container to work: databas>
wordpress:
depends_on:
- db
image: wordpress:latest
restart: always
# Restart line controls the restart mode, meaning if the container stops running for any reason, > networks:
- default
environment:
WORDPRESS_DB_HOST: db:3306
WORDPRESS_DB_USER: admin
WORDPRESS_DB_PASSWORD: PassWord#123
WORDPRESS_DB_NAME: wordpress
# Similar to MySQL image variables, the last four lines define the main variables needed for the Word> volumes:
["./wordpress:/var/www/html"]
volumes:
mysql: {}
networks:
default: bridge_default
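For reference, a hedged sketch of the stanza that usually needs fixing: a top-level network entry must map to a configuration object, not to a bare string. Assuming bridge_default was pre-created with docker network create bridge_default, marking the default network as external should let Compose reuse it (this is the form for the version: "3" schema shown above; schemas 3.5+ also accept a name: key plus external: true):
networks:
  default:
    external:
      name: bridge_default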

Docker volume mariadb has root permission

I stumbled across a problem with docker volumes while starting docker containers with a docker compose file (MariaDB, RabbitMQ, Maven). I start them simply with docker-compose up -d (WITHOUT SUDO)
My volumes are defined like this:
...
volumes:
  - ./production/mysql:/var/lib/mysql:z
...
Everything is working fine and the ./production directory is created (where the volumes are mapped)
But when I try to restart the containers with down/up again, I get the following error:
error checking context: 'no permission to read from '…/production/mysql/aria_log.00000001'
When I checked the mentioned file I saw that it has root:root ownership. This is because the file is generated by the root user inside the container. So I tried to use user namespaces as mentioned in the docs.
Anyway, the error still occurs. Any ideas or references?
Thanks.
Docker Compose File:
version: '3.8'
services:
  mysql:
    image: mariadb:latest
    restart: always
    env_file:
      - config.env
    volumes:
      - ./production/mysql:/var/lib/mysql:z
    environment:
      MYSQL_DATABASE: ${DATABASE_NAME}
      MYSQL_USER: ${DATABASE_USER}
      MYSQL_PASSWORD: ${DATABASE_PASSWORD}
      MYSQL_ROOT_PASSWORD: ${DATABASE_PASSWORD}
    networks:
      - testnetwork
networks:
  testnetwork:
The issue comes from the mapping between the host user/group IDs and the ones inside the container. One of the solutions is to use a named volume and avoid all this hassle, but you can also do the following:
Add user: ${UID}:${GID} to your service inside the docker-compose file.
Run UID=$(id -u) GID=$(id -g) docker-compose up (note the $(...) command substitution). This way you make sure that the user in the container has the same UID/GID as the user on the host, and files created in the container get proper permissions (see the sketch after this answer).
NOTE: Docker for Mac (using the osxfs driver) does this behind the scenes and you don't need to worry about users and groups.
Running the Docker daemon as a non-root user (rootless mode) can also be helpful for your purpose; see the Docker documentation on rootless mode.
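Putting the two steps together, a minimal sketch (the variable names are an assumption; note that bash treats UID as a read-only shell variable, so a custom name such as CURRENT_UID sidesteps a "readonly variable" error):
# docker-compose.yml (relevant fragment)
services:
  mysql:
    image: mariadb:latest
    user: "${CURRENT_UID}:${CURRENT_GID}"   # run as the host user
# start the stack with the host's IDs passed in
CURRENT_UID=$(id -u) CURRENT_GID=$(id -g) docker-compose up -d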

Cannot exec into container using GitBash when using Docker Compose

I'm new to Docker Compose, but have used Docker for years. I tried this in both PowerShell and Git Bash (screenshots omitted). If I run containers without docker-compose I can docker exec -it <container_ref> /bin/bash with no problems from either of these shells.
However, when running with docker-compose up, both shells give no error when attempting docker-compose exec. They both just hang a few seconds and return to the prompt.
Lastly, for some reason I do get an error in Git Bash when using what I know: docker exec.... I've used this for years, so I'm perplexed and posting a question. What does Docker Compose do that breaks Git Bash's docker exec, but not PowerShell's? And why the hang when using docker-compose exec..., with no error?
I am using tty: true in the docker-compose.yml, but that honestly doesn't seem to make a difference. Not to throw a bunch of questions into one post, but could whatever is going on also be the reason I can't hit my web server in the browser when using Docker Compose to run it?
version: '3.8'
volumes:
  pgdata:
    external: true
services:
  db:
    image: postgres
    container_name: trac-db
    tty: true
    restart: 'unless-stopped'
    environment:
      POSTGRES_PASSWORD: postgres
      POSTGRES_USER: postgres
      POSTGRES_DB: iol
    volumes:
      - pgdata:/var/lib/postgresql/data
    network_mode: 'host'
    expose:
      - 5432
  web:
    image: lindben/trac-server
    container_name: trac-server
    tty: true
    restart: 'unless-stopped'
    environment:
      ADDRESS: localhost
      PORT: 3000
      NODE_ENV: development
    depends_on:
      - db
    network_mode: 'host'
    privileged: true
    expose:
      - 1234
      - 3000
I'm gonna be assuming you're using Docker Desktop, and so the reason you can docker exec just fine using PowerShell is that on Windows docker is a native program/command, while for Git Bash, which is based on bash, a Linux shell (bash = Bourne-Again SHell), not so much.
So when using a Windows command that needs a TTY from Git Bash, you need some sort of "adapter", like winpty for example, to bridge the gap between docker's interface and Git Bash's.
Here's a more detailed explanation on winpty
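As a quick sketch (winpty ships with Git for Windows; the container name is taken from the compose file above):
# prefix interactive docker commands with winpty in Git Bash
winpty docker exec -it trac-server bash
# if an absolute path such as /bin/bash gets mangled by MSYS path
# conversion, doubling the leading slash avoids it
winpty docker exec -it trac-server //bin/bash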
Putting all of this aside, if you're trying to use only the Compose options, it may be better for you to look at this question.
Now, regarding your web service issue, I think you're not actually exposing your application publicly with the expose tag. Take a look at the docker-compose expose reference. What you need is to add a ports tag, as referenced here (note that published ports are discarded when network_mode: 'host' is set, so drop those lines as well):
db:
  ports:
    - "5432:5432"
web:
  ports:
    - "1234:1234"
    - "3000:3000"
Hope this solves your pickle ;)

What is a docker-compose.yml file?

I can't find a real definition of what a docker-compose file is.
Is it correct to say this:
A docker-compose file is a YAML file that allows us to deploy multiples Docker containers at the same time.
I'd like to be able to explain a bit better what a docker-compose file is.
A docker-compose.yml is a config file for Docker Compose.
It allows you to deploy, combine, and configure multiple Docker containers at the same time. The Docker "rule" is to outsource every single process to its own Docker container.
Take for example a simple web application: You need a server, a database, and PHP. So you can set three docker containers with Apache2, PHP, and MySQL.
The advantage of Docker Compose is easy configuration. You don't have to write a big bunch of commands into Bash. You can predefine it in the docker-compose.yml:
db:
  image: mysql
  ports:
    - "3306:3306"
  environment:
    MYSQL_DATABASE: example_db
    MYSQL_USER: root
    MYSQL_PASSWORD: rootpw
php:
  image: php
  ports:
    - "80:80"
    - "443:443"
  volumes:
    - ./SRC:/var/www/
  links:
    - db
As you can see in my example, I define port forwarding, volumes for external data, and links to the other Docker container. It's fast, reproducible, and not that hard to understand.
The Docker Compose file format is formally specified, which enables docker-compose.yml files to be executed by tools other than Docker, Podman for example.
Docker Compose is a tool that allows you to deploy and manage multiple containers at the same time.
A docker-compose.yml file contains instructions on how to do that.
In this file, you instruct Docker Compose for example to:
Where to find the Dockerfile for building a particular image
Which ports you want to expose
How to link containers
Which ports you want to bind to the host machine
Docker Compose reads that file and executes commands.
It stands in for all the optional parameters you would otherwise pass by hand when building and running a single Docker container.
Example:
version: '2'
services:
  nginx:
    build: ./nginx
    links:
      - django:django
      - angular:angular
    ports:
      - "80:80"
      - "8000:8000"
      - "443:443"
    networks:
      - my_net
  django:
    build: ./django
    expose:
      - "8000"
    networks:
      - my_net
  angular:
    build: ./angular2
    links:
      - django:django
    expose:
      - "80"
    networks:
      - my_net
networks:
  my_net:
    external:
      name: my_net
This example instructs Docker Compose to:
Build nginx from path ./nginx
Link the angular and django containers (so their IPs in the Docker network are resolved by name)
Bind ports 80, 443, and 8000 to the host machine
Add it to network my_net
(so all 3 containers are in the same network and therefore accessible from each other)
Then something similar is done for the django and angular containers.
If you were to use plain Docker commands, it would be something like:
docker build -t nginx ./nginx
docker run --link django:django --link angular:angular -p 80:80 -p 443:443 -p 8000:8000 --net my_net nginx
So while you probably don't want to type all these options and commands for each image/container, you can write a docker-compose.yml file in which you write all these instructions in a human-readable format.
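With the file in place, the entire three-container stack builds and starts with a single command:
# build the images (if needed) and start everything in docker-compose.yml
docker-compose up -d --build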

How can I run configuration commands after startup in Docker?

I have a Dockerfile set up to run a service that requires some subsequent commands be run in order to initialize properly. I have created a startup script with the following structure and set it as my entrypoint:
1. Set environment variables for the service, generate certificates, etc.
2. Run the service in background mode.
3. Run configuration commands to finish initializing the service.
Obviously, this does not work since the service was started in background and the entry point script will exit with code 0. How can I keep this container running after the configuration has been done? Is it possible to do so without a busy loop running?
How can I keep this container running after the configuration has been done? Is it possible to do so without a busy loop running?
Among your many options:
Use something like sleep inf, which is not a busy loop and does not consume CPU time.
You could use a process supervisor like supervisord to start your service and start the configuration script.
You could run your configuration commands in a separate container after the service container has started.
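A minimal sketch of the first option (the service and script names are hypothetical):
#!/bin/sh
# entrypoint.sh: initialize, start the service, configure, then stay alive
export SERVICE_ENV=production   # step 1: environment variables, certificates, etc.
my-service --daemon &           # step 2: run the service in the background
./finish-init.sh                # step 3: post-startup configuration commands
wait                            # block on the background service instead of exiting
# (alternatively, end with: sleep inf)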
You can look at this GitHub issue and specific comment -
https://github.com/docker-library/wordpress/issues/205#issuecomment-278319730
To summarize, you do something like this:
version: '2.1'
services:
  db:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: wordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
  wordpress:
    image: wordpress:latest
    volumes:
      - "./wp-init.sh:/usr/local/bin/apache2-custom.sh"
    depends_on:
      db:
        condition: service_started
    ports:
      - 80:80
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_PASSWORD: wordpress
    command:
      - apache2-custom.sh
wp-init.sh is where you write the code to execute.
Note the command yml tag:
command:
  - apache2-custom.sh
Because we bound the two paths together in the volumes tag, this actually runs the code in wp-init.sh within your container.
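A hedged sketch of what wp-init.sh might look like; the setup step is hypothetical, and the final exec hands control to the image's standard entrypoint so Apache runs in the foreground as usual:
#!/bin/bash
# wp-init.sh: custom initialization, then the image's normal startup
echo "running custom WordPress initialization..."
exec docker-entrypoint.sh apache2-foreground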
