Dockerize pt-kill as daemon - docker

I am converting my infrastructure to containers. I have a couple of daemons that currently live in rc.local, but I want to do this the Docker way.
Here are the commands:
sudo /usr/bin/pt-kill --rds --match-command Query --victims all --match-user phppoint --daemonize --busy-time 30 --kill --print h=db-1,u=master,p=password,P=3306
sudo /usr/bin/pt-kill --rds --match-command Query --victims all --match-user phppoint --daemonize --busy-time 30 --kill --print h=db-2,u=master,p=password,P=3306
What is the proper way to do this via docker?

AFAIK Percona doesn't provide an official Docker image for the toolkit but, as @VonC also suggests in his answer, you could try using the Dockerfile provided in their GitHub repository. It will give you a base image with the necessary tools installed, including pt-kill. To run pt-kill, you will need to provide the necessary command when running your Docker container, or extend the image by including a CMD in your Dockerfile with the necessary information (a sketch of that option follows the example output below). For reference, I built the aforementioned Dockerfile:
docker build -t local/pt:3.5.0-5.el8 .
And was able to use pt-kill against a local Docker-based MySQL database by running the following command from my terminal:
docker run -d local/pt:3.5.0-5.el8 /usr/bin/pt-kill --match-command Query --victims all --busy-time 5s --print h=172.17.0.2,D=local,u=local,p=local,P=3306
I tested it by running the following statement from MySQL Workbench:
SELECT SLEEP(10)
Which produces the following output from pt-kill:
# 2023-01-01T22:12:33 KILL 16 (Query 5 sec) SELECT SLEEP(10)
LIMIT 0, 1000
# 2023-01-01T22:12:35 KILL 16 (Query 7 sec) SELECT SLEEP(10)
LIMIT 0, 1000
# 2023-01-01T22:12:37 KILL 16 (Query 9 sec) SELECT SLEEP(10)
LIMIT 0, 1000
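If you prefer to bake the command into the image rather than pass it to docker run, a minimal sketch of an extended Dockerfile could look like the following (the connection details are just the placeholders from the command above, and the base tag assumes the image built earlier):
FROM local/pt:3.5.0-5.el8
# pt-kill stays in the foreground (no --daemonize), so the container itself acts as the daemon
CMD ["/usr/bin/pt-kill", "--match-command", "Query", "--victims", "all", "--busy-time", "5s", "--print", "h=172.17.0.2,D=local,u=local,p=local,P=3306"]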
The way in which this container could be run will depend on your actual infrastructure.
I assume by the existence of the --rds flag in your command that you are connecting to an Amazon RDS instance.
There are many ways to run containers in AWS (see for instance this blog entry, which names some of them).
In your use case, the way to go would probably be ECS running on EC2 compute instances (the Fargate serverless option doesn't make sense here), or even EKS, although I think that would be overkill.
You could also provision an EC2 instance, install Docker, and deploy your containers on it directly, but that would probably be a less reliable solution than using ECS.
Just in case (and the same applies if you run your containers from an on-premise machine), you will need to launch the containers at startup. In my original answer I stated that you will probably end up using rc.local or systemd to run your container, perhaps via an intermediate shell script that launches the actual container with docker run, but thinking about it I realized that the dependency on the Docker daemon (it must already be running for your container to start) could be a problem. Although some kind of automation may still be required, consider running your container with always or unless-stopped as the --restart policy.
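For illustration, a sketch using the image built above and the first of your original commands (credentials and host are unchanged placeholders; note the absence of --daemonize):
docker run -d --restart unless-stopped --name pt-kill-db-1 local/pt:3.5.0-5.el8 \
  /usr/bin/pt-kill --rds --match-command Query --victims all --match-user phppoint \
  --busy-time 30 --kill --print h=db-1,u=master,p=password,P=3306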
As you suggested, you could also use docker-compose to define both daemons. The following docker-compose.yaml file could be of help:
version: '3'
x-pt-kill-common:
  &pt-kill-common
  build: .
  restart: always
services:
  pt-kill-db-1:
    <<: *pt-kill-common
    command: /usr/bin/pt-kill --rds --match-command Query --victims all --match-user phppoint --busy-time 30 --kill --print h=db-1,u=master,p=password,P=3306
  pt-kill-db-2:
    <<: *pt-kill-common
    command: /usr/bin/pt-kill --rds --match-command Query --victims all --match-user phppoint --busy-time 30 --kill --print h=db-2,u=master,p=password,P=3306
We are building the Docker image in Compose itself: this assumes the mentioned Percona Toolkit Dockerfile exists in the same directory as the docker-compose.yaml file. Alternatively, you can build the image, publish it to ECR or wherever you see fit, and reference it in your docker-compose.yaml file:
version: '3'
x-pt-kill-common:
  &pt-kill-common
  image: aws_account_id.dkr.ecr.region.amazonaws.com/pt:3.5.0-5.el8
  restart: always
services:
  pt-kill-db-1:
    <<: *pt-kill-common
    command: /usr/bin/pt-kill --rds --match-command Query --victims all --match-user phppoint --busy-time 30 --kill --print h=db-1,u=master,p=password,P=3306
  pt-kill-db-2:
    <<: *pt-kill-common
    command: /usr/bin/pt-kill --rds --match-command Query --victims all --match-user phppoint --busy-time 30 --kill --print h=db-2,u=master,p=password,P=3306
In order to reuse as much code as possible, the example uses extension fragments, although of course you can repeat the service definition if necessary.
Note as well that we get rid of the --daemonize option in the command definition.
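If you go the ECR route, pushing the locally built image would look roughly like this (the account ID and region are the same placeholders used in the image reference above, and it assumes the pt repository already exists in ECR):
aws ecr get-login-password --region region | docker login --username AWS --password-stdin aws_account_id.dkr.ecr.region.amazonaws.com
docker tag local/pt:3.5.0-5.el8 aws_account_id.dkr.ecr.region.amazonaws.com/pt:3.5.0-5.el8
docker push aws_account_id.dkr.ecr.region.amazonaws.com/pt:3.5.0-5.el8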
In any case, you will need to configure your security groups to allow communication with the RDS database.
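As a purely hypothetical sketch (both security group IDs are placeholders), allowing whatever runs the containers to reach MySQL on the RDS instance could look like this:
aws ec2 authorize-security-group-ingress \
  --group-id sg-0000000000rds0000 \
  --protocol tcp --port 3306 \
  --source-group sg-0000000000ecs0000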
Having said all that, in my opinion your current solution is a good one: although ECS in particular could be a valid option, provisioning a minimal EC2 instance with the necessary tools installed would probably be a cheaper and simpler option than running them in containers.

As mentioned in "How to use Percona Toolkit in a Docker container?", you might need to build your own image, starting from this Dockerfile.
The thread mentions that the Docker image under perconalab (perconalab/percona-toolkit) "seems to be the same but isn't".
Maybe perconalab/pmm-client is a better option.

Related

In docker-compose, why can one service reach another, but not the other way around?

I'm writing an automated test that involves running several containers at once. The test submits some workload to the tested service, and expects a callback from it after a time.
To run the whole system, I use docker compose run with the following docker-compose file:
version: "3.9"
services:
service:
build: ...
ports: ...
tester:
image: alpine
depends_on:
- service
profiles:
- testing
The problem is, I can see "service" from "tester", but not the other way around, so the callback from the service cannot reach "tester":
$ docker compose -f .docker/docker-compose.yaml run --rm tester \
nslookup service
Name: service
Address 1: ...
$ docker compose -f .docker/docker-compose.yaml run --rm service \
nslookup tester
** server can't find tester: NXDOMAIN
I tried specifying the same network for them, and giving them "links", but the result is the same.
It seems like a very basic issue, so perhaps I'm missing something?
When you docker-compose run some-container, Compose starts a temporary container from that service definition plus whatever it depends_on:. So when you docker-compose run service ..., service doesn't depends_on: anything, and Compose starts only the temporary container, which is why the tester container doesn't exist at that point.
If you need the whole stack up to make connections both ways between containers, you need to run docker-compose up -d. You can still docker-compose run temporary containers on top of these.
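For example, keeping the file path and the testing profile from your question (this assumes a Compose version that supports profiles):
# start the whole stack; --profile testing also brings up the tester service
docker compose -f .docker/docker-compose.yaml --profile testing up -d
# a temporary container based on "service" can now resolve "tester"
docker compose -f .docker/docker-compose.yaml run --rm service nslookup tester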

How to package several services in one docker image?

I have a Docker Compose application, which works fine locally. I would like to create an image from it and upload it to Docker Hub in order to pull it from my Azure virtual machine without copying over all the files. Is this possible? How can I do it?
I tried to upload the image I see in Docker Desktop and then pull it from the VM, but the container does not start up.
Here I attach my .yml file. There is only one service at the moment, but in the future there will be multiple microservices, which is why I want to use Compose.
version: "3.8"
services:
dbmanagement:
build: ./dbmanagement
container_name: dbmanagement
command: python manage.py runserver 0.0.0.0:8000
volumes:
- ./dbmanagement:/dbmandj
ports:
- "8000:8000"
environment:
- POSTGRES_HOST=*******
- POSTGRES_NAME=*******
- POSTGRES_USER=*******
- POSTGRES_PASSWORD=*******
Thank you for your help
The answer is: yes, you can, but you should not.
According to the Docker official docs:
It is generally recommended that you separate areas of concern by using one service per container
Also check this:
https://stackoverflow.com/a/68593731/3957754
docker-compose is enough
docker-compose exists for exactly that: running several services with one command (and minimal configuration), commonly on the same server.
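Purely as an illustration (the second service name here is made up), growing your existing file later is just a matter of adding another entry under services:, each with its own image or build context:
version: "3.8"
services:
  dbmanagement:
    build: ./dbmanagement
    ports:
      - "8000:8000"
  reporting:            # hypothetical future microservice
    build: ./reporting
    depends_on:
      - dbmanagement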
foreground process
In order to work, a Docker container needs a foreground process. To understand what this is, check the following links. As an extremely short summary: a foreground process is one that, when you launch it from the shell, takes over the shell so that you cannot enter more commands. You need to press ctrl + c to kill the process and get your shell back.
https://unix.stackexchange.com/questions/175741/what-is-background-and-foreground-processes-in-jobs
https://linuxconfig.org/understanding-foreground-and-background-linux-processes
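A quick way to see the difference in any shell (sleep is just a stand-in for a real service):
# foreground: the shell is blocked until the process exits (press ctrl + c to stop it)
sleep 100
# background: the trailing & gives you the shell back immediately
sleep 100 &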
The "fat" container
Anyway, if you want to combine several services or processes in one container (and therefore one image), you can do it with supervisor.
Supervisor acts as our foreground process. Basically, you register one or many Linux processes and supervisor will start them.
how to install supervisor
sudo apt-get install supervisor
source: https://gist.github.com/hezhao/bb0bee800531b89d7be1#file-supervisor_cmd-sh
add a single config file: /etc/supervisor/conf.d/myapp.conf
[program:myapp]
autostart = true
autorestart = true
command = python /home/pi/myapp.py
environment=SECRET_ID="secret_id",SECRET_KEY="secret_key_avoiding_%_chars"
stdout_logfile = /home/pi/stdout.log
stderr_logfile = /home/pi/stderr.log
startretries = 3
user = pi
source: https://gist.github.com/hezhao/bb0bee800531b89d7be1
start it
sudo supervisorctl start myapp
sudo supervisorctl tail myapp
sudo supervisorctl status
In the previous sample, we used supervisor to start a Python process.
multiple processes with supervisor
You just need to add more [program] sections to the config file:
[program:php7.2]
command=/usr/sbin/php-fpm7.2-zts
process_name=%(program_name)s
autostart=true
autorestart=true
[program:dropbox]
process_name=%(program_name)s
command=/app/.dropbox-dist/dropboxd
autostart=true
autorestart=true
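To tie this back to Docker: a minimal sketch of a Dockerfile that runs supervisord as the container's foreground process could look like this (the base image and config file name are assumptions, not taken from the gists above):
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y supervisor && rm -rf /var/lib/apt/lists/*
# copy a config with one or more [program:...] sections like the ones shown above
COPY supervisord.conf /etc/supervisor/conf.d/myapp.conf
# -n (--nodaemon) keeps supervisord in the foreground, which is what the container needs
CMD ["/usr/bin/supervisord", "-n"]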
Here are some examples, just like your requirement: several processes in one container:
canvas lms: basically starts 3 processes: postgres, redis, and a ruby app
https://github.com/harvard-dce/canvas-docker/blob/master/assets/supervisord.conf
nginx + php + ssh
https://gist.github.com/pollend/b1f275eb7f00744800742ae7ce403048#file-supervisord-conf
nginx + php
https://gist.github.com/lovdianchel/e306b84437bfc12d7d33246d8b4cbfa6#file-supervisor-conf
mysql + redis + mongo + nginx + php
https://gist.github.com/nguyenthanhtung88/c599bfdad0b9088725ceb653304a91e3
Also you could configure a web dashboard:
https://medium.com/coinmonks/when-you-throw-a-web-crawler-to-a-devops-supervisord-562765606f7b
More samples with docker + supervisor:
https://gist.github.com/chadrien/7db44f6093682bf8320c
https://gist.github.com/damianospark/6a429099a66bfb2139238b1ce3a05d79

Why is my Docker volume not working in a remote build box?

I am attempting to add a volume to a Docker container that will be built and run in a Docker Compose system on a hosted build service (CircleCI). It works fine locally, but not remotely. CircleCI provides an SSH facility I can use to debug why a container is not behaving as expected.
The relevant portion of the Docker Compose file is thus:
missive-mongo:
  image: missive-mongo
  command: mongod -v --logpath /var/log/mongodb/mongodb.log --logappend
  volumes:
    - ${MONGO_LOCAL}:/data/db
    - ${LOGS_LOCAL_PATH}/mongo:/var/log/mongodb
  networks:
    - storage_network
Locally, if I do docker inspect integration_missive-mongo_1 (i.e. the running container's name), I get the volumes as expected:
...
"HostConfig": {
    "Binds": [
        "/tmp/missive-volumes/logs/mongo:/var/log/mongodb:rw",
        "/tmp/missive-volumes/mongo:/data/db:rw"
    ],
...
On the same container, I can shell in and see that the volume works fine:
docker exec -it integration_missive-mongo_1 sh
/ # tail /var/log/mongodb/mongodb.log
2017-11-28T22:50:14.452+0000 D STORAGE [initandlisten] admin.system.version: clearing plan cache - collection info cache reset
2017-11-28T22:50:14.452+0000 I INDEX [initandlisten] build index on: admin.system.version properties: { v: 2, key: { version: 1 }, name: "incompatible_with_version_32", ns: "admin.system.version" }
2017-11-28T22:50:14.452+0000 I INDEX [initandlisten] building index using bulk method; build may temporarily use up to 500 megabytes of RAM
2017-11-28T22:50:14.452+0000 D INDEX [initandlisten] bulk commit starting for index: incompatible_with_version_32
2017-11-28T22:50:14.452+0000 D INDEX [initandlisten] done building bottom layer, going to commit
2017-11-28T22:50:14.454+0000 I INDEX [initandlisten] build index done. scanned 0 total records. 0 secs
2017-11-28T22:50:14.455+0000 I COMMAND [initandlisten] setting featureCompatibilityVersion to 3.4
2017-11-28T22:50:14.455+0000 I NETWORK [thread1] waiting for connections on port 27017
2017-11-28T22:50:14.455+0000 D COMMAND [PeriodicTaskRunner] BackgroundJob starting: PeriodicTaskRunner
2017-11-28T22:50:14.455+0000 D COMMAND [ClientCursorMonitor] BackgroundJob starting: ClientCursorMonitor
OK, now for the remote. I kick off a build, it fails because Mongo won't start, so I use the SSH facility that keeps a box alive after a failed build.
I first hack the DC file so that it does not try to launch Mongo, as it will fail. I just get it to sleep instead:
missive-mongo:
  image: missive-mongo
  command: sleep 1000
  volumes:
    - ${MONGO_LOCAL}:/data/db
    - ${LOGS_LOCAL_PATH}/mongo:/var/log/mongodb
  networks:
    - storage_network
I then run the docker-compose up script to bring all containers up, and then examine the problematic box: docker inspect integration_missive-mongo_1:
"HostConfig": {
"Binds": [
"/tmp/missive-volumes/logs/mongo:/var/log/mongodb:rw",
"/tmp/missive-volumes/mongo:/data/db:rw"
],
That looks fine. So on the host I create a dummy log file, and list it to prove it is there:
bash-4.3# ls /tmp/missive-volumes/logs/mongo
mongodb.log
So I try shelling in, docker exec -it integration_missive-mongo_1 sh again. This time I find that the folder exists, but not the volume contents:
/ # ls /var/log
mongodb
/ # ls /var/log/mongodb/
/ #
This is very odd, because the reliability of volumes in the remote Docker/Compose config has been exemplary up until now.
Theories
My main one at present is that the differing versions of Docker and Docker Compose could have something to do with it. So I will list out what I have:
Local
Host: Linux Mint
Docker version 1.13.1, build 092cba3
docker-compose version 1.8.0, build unknown
Remote
Host: I suspect it is Alpine (it uses apk for installing)
I am using the docker:17.05.0-ce-git image supplied by CircleCI, the version shows as Docker version 17.05.0-ce, build 89658be
Docker Compose is installed via pip, and getting the version produces docker-compose version 1.13.0, build 1719ceb.
So, there is some version discrepancy. As a shot in the dark, I could try bumping up Docker/Compose, though I am wary of breaking other things.
What would be ideal though, is some sort of advanced Docker commands I can use to debug why the volume appears to be registered but is not exposed inside the container. Any ideas?
CircleCI runs docker-compose remotely from the Docker daemon, so local bind mounts don't work.
A named volume will default to the local driver and would work in CircleCI's Compose setup; the volume will exist wherever the container runs.
Logging should generally be left to stdout and stderr in a single process per container setup. Then you can make use of a logging driver plugin to ship to a central collector. MongoDB defaults to logging to stdout/stderr when run in the foreground.
Combining the volumes and logging:
version: "2.1"
services:
syslog:
image: deployable/rsyslog
ports:
- '1514:1514/udp'
- '1514:1514/tcp'
mongo:
image: mongo
command: mongod -v
volumes:
- 'mongo_data:/data/db'
depends_on:
- syslog
logging:
options:
tag: '{{.FullID}} {{.Name}}'
syslog-address: "tcp://10.8.8.8:1514"
driver: syslog
volumes:
mongo_data:
This is a little bit of a hack as the logging endpoint would normally be external, rather than a container in the same group. This is why the logging uses the external address and port mapping to access the syslog server. This connection is between the docker daemon and the log server, rather than container to container.
I wanted to add an additional answer to accompany the accepted one. My use-case on CircleCI is to run browser-based integration tests, in order to check that a whole stack is working correctly. A number of the 11 containers in use have volumes defined for various things, such as log output and raw database file storage.
What I had not realised until now is that volumes in CircleCI's Docker executor do not work, due to a technical Docker limitation. Because of this, in each of those cases the files were previously just being written to an empty folder inside the container.
In my new case however, this issue was causing Mongo to fail. The reason for that was that I'm using --logappend to prevent Mongo from doing its own log rotation on start-up, and this switch requires the path specified in --logpath to exist. Since it existed on the host, but the volume creation failed, the container could not see the log file.
To fix this, I have modified my Mongo service entry to call a script in the command section:
missive-mongo:
  image: missive-mongo
  command: sh /root/mongo-logging.sh
And the script looks like this:
#!/bin/sh
#
# The command sets up logging in Mongo. The touch is for the benefit of any
# environment in which the logs do not already exist (e.g. Integration, since
# CircleCI does not support volumes)
touch /var/log/mongodb/mongodb.log \
&& mongod -v --logpath /var/log/mongodb/mongodb.log --logappend
In the two possible use cases, this will act as follows:
In the case of the mount working (dev, live), it will simply touch the log file if it already exists, and create it if it does not (e.g. in a completely new environment);
In the case of the mount not working (CircleCI), it will create the file.
Either way, this is a nice safety feature to prevent Mongo from blowing up.

How does one close a dependent container with docker-compose?

I have two containers that are spun up using docker-compose:
web:
  image: personal/webserver
  depends_on:
    - database
  entrypoint: /usr/bin/runmytests.sh
database:
  image: personal/database
In this example, runmytests.sh is a script that runs for a few seconds, then returns with either a zero or non-zero exit code.
When I run this setup with docker-compose, web_1 runs the script and exits. database_1 remains open, because the process running the database is still running.
I'd like to trigger a graceful exit on database_1 when web_1's tasks have been completed.
You can pass the --abort-on-container-exit flag to docker-compose up to have the other containers stop when one exits.
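For example, with the compose file above (and, if your docker-compose version supports it, --exit-code-from web also implies --abort-on-container-exit and returns the test script's exit code):
docker-compose up --abort-on-container-exit
# or, to propagate the result of runmytests.sh as the command's exit status:
docker-compose up --exit-code-from web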
What you're describing is called a Pod in Kubernetes or a Task in AWS. It's a grouping of containers that form a unit. Docker doesn't have that notion currently (Swarm mode has "tasks" which come close but they only support one container per task at this point).
There is a hacky workaround besides scripting it as @BMitch described. You could mount the Docker daemon socket from the host. E.g.:
web:
  image: personal/webserver
  depends_on:
    - database
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  entrypoint: /usr/bin/runmytests.sh
and add the Docker client to your personal/webserver image. That would allow your runmytests.sh script to use the Docker CLI to shut down the database first. Eg: docker kill database.
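A sketch of what the end of runmytests.sh could look like under that approach (the test command is a placeholder; it assumes the database container is reachable by the name database, as in the example above):
#!/bin/sh
/usr/bin/run-the-tests        # placeholder for the real test command
status=$?
# the Docker CLI talks to the mounted /var/run/docker.sock
docker kill database
exit $status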
Edit:
Third option: if you want to stop all containers when one fails, you can use the --abort-on-container-exit option to docker-compose, as @dnephin mentions in another answer.
I don't believe docker-compose supports this use case. However, a simple shell script would easily resolve this:
#!/bin/sh
docker run -d --name=database personal/database
docker run --rm -it --entrypoint=/usr/bin/runmytests.sh personal/webserver
docker stop database
docker rm database
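If you also care about whether the tests passed, a small variation of the script above that preserves the exit code might look like this (just a sketch along the same lines):
#!/bin/sh
docker run -d --name=database personal/database
# -it dropped so the script also works non-interactively (an assumption about where it runs)
docker run --rm --entrypoint=/usr/bin/runmytests.sh personal/webserver
status=$?            # capture the test result before cleaning up
docker stop database
docker rm database
exit $status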

Difference between docker-compose and manual commands

What I'm trying to do
I want to run a yesod web application in one docker container, linked to a postgres database in another docker container.
What I've tried
I have the following file hierarchy:
/
api/
Dockerfile
database/
Dockerfile
docker-compose.yml
The docker-compose.yml looks like this:
database:
  build: database
api:
  build: api
  command: .cabal/bin/yesod devel # dev setting
  environment:
    - HOST=0.0.0.0
    - PGHOST=database
    - PGPORT=5432
    - PGUSER=postgres
    - PGPASS
    - PGDATABASE=postgres
  links:
    - database
  volumes:
    - api:/home/haskell/
  ports:
    - "3000:3000"
Running sudo docker-compose up either fails to start the api container at all or, just as often, fails with the following error:
api_1 | Yesod devel server. Press ENTER to quit
api_1 | yesod: <stdin>: hGetLine: end of file
personal_api_1 exited with code 1
If, however, I run sudo docker-compose database up & then start up the api container without using compose but instead using
sudo docker run -p 3000:3000 -itv /home/me/projects/personal/api/:/home/haskell --link personal_database_1:database personal_api /bin/bash
I can export the environment variables being set up in the docker-compose.yml file then manually run yesod devel and visit my site successfully on localhost.
Finally, I obtain a third different behaviour if I run sudo docker-compose run api on its own. This seems to start successfully but I can't access the page in my browser. By running sudo docker-compose run api /bin/bash I've been able to explore this container and I can confirm the environment variables being set in docker-compose.yml are all set correctly.
Desired behaviour
I would like to get the result I achieve from running the database in the background then manually setting the environment in the api container's shell simply by running sudo docker-compose up.
Question
Clearly the three different approaches I'm trying do slightly different things. But from my understanding of docker and docker-compose I would expect them to be essentially equivalent. Please could someone explain how and why they differ and, if possible, how I might achieve my desired result?
The error message suggests the API container is expecting input from the command line, which requires a TTY to be present in your container.
In your "manual" start, you tell docker to create a TTY in the container via the -t flag (-itv is shorthand for -i -t -v), so the API container runs successfully.
To achieve the same in docker-compose, you'll have to add a tty key to the api service in your docker-compose.yml and set it to true:
database:
  build: database
api:
  build: api
  tty: true # <--- enable TTY for this service
  command: .cabal/bin/yesod devel # dev setting
