How to "reset" a docker-compose systemd service? - docker

I have created a systemd service that starts a set of Docker containers using Docker-Compose, as outlined in this answer:
# /etc/systemd/system/docker-compose-app.service
[Unit]
Description=Docker Compose Application Service
Requires=docker.service
After=docker.service
[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/srv/docker
ExecStart=/usr/local/bin/docker-compose up -d
ExecStop=/usr/local/bin/docker-compose down
TimeoutStartSec=0
[Install]
WantedBy=multi-user.target
This allows me to start the Docker-Compose services using
sudo systemctl start docker-compose-app
which uses docker-compose up -d under the hood, and to shut them down using
sudo systemctl stop docker-compose-app
which uses docker-compose down. Please note that the down command is run without the -v flag, which means that volumes will remain in place, preserving the data of my containers across restarts/recreation. This is pretty much what I want in the majority of cases.
There are situations where I want to erase all data in the services, basically running the down -v command instead of just down.
Is there a way to extend the above systemd service definition with an additional command (or reuse one of the existing systemctl commands) so that I can run the occasional down -v when needed? I want to do this ad hoc, not on a schedule.
How can I run
docker-compose down -v
occasionally if needed through the same systemd setup, while keeping the standard functionality of maintaining the containers' data across restarts?

You can try using an ExecReload= definition. Note that systemd does not run Exec commands through a shell, so instead of chaining the two commands with &&, list them as separate ExecReload= lines, which are executed in order:
ExecReload=/usr/local/bin/docker-compose down -v
ExecReload=/usr/local/bin/docker-compose up -d
And then you can use:
sudo systemctl reload docker-compose-app
So "reload" command will be used for "reset" in this case.

Related

Systemctl unable to start service in Docker because of its service exec type

While trying to start the gsad service in a Docker container, I'm receiving the following error:
root@7146c6073ae5:/# systemctl start gsad.service
ERROR:systemctl: gsad.service: Failed to parse service type, ignoring: exec
ERROR:systemctl:unsupported run type 'exec'
Here's the original /etc/systemd/system/gsad.service file, which is based on the original documentation:
[Unit]
Description=Greenbone Security Assistant daemon (gsad)
Documentation=man:gsad(8) https://www.greenbone.net
After=network.target gvmd.service
Wants=gvmd.service
[Service]
Type=exec
User=gvm
Group=gvm
RuntimeDirectory=gsad
RuntimeDirectoryMode=2775
PIDFile=/run/gsad/gsad.pid
ExecStart=/usr/local/sbin/gsad --foreground --listen=127.0.0.1 --port=9392 --http-only
Restart=always
TimeoutStopSec=10
[Install]
WantedBy=multi-user.target
Alias=greenbone-security-assistant.service
My base image for the Dockerfile is ubuntu:latest, and I've built OpenVAS from source following that article.
While I'm not sure this is contributing to the error, I installed systemctl by performing the following at the very beginning of my Dockerfile:
git clone https://github.com/gdraheim/docker-systemctl-replacement /opt/systemctl-github && \
ln -s /opt/systemctl-github/files/docker/systemctl3.py /usr/bin/systemctl
Your problem is that you're not actually running systemctl from Systemd (in which support for the exec service type was introduced back in 2018). You're using the systemctl command provided by https://github.com/gdraheim/docker-systemctl-replacement, which doesn't understand the exec service type (you can see the list of supported types here).
There doesn't seem to be much point to using this wrapper: just take the ExecStart command from your service file and make that the CMD entry in your Dockerfile:
FROM ubuntu:latest
.
.
.
CMD /usr/local/sbin/gsad --foreground --listen=127.0.0.1 --port=9392 --http-only
You can add the necessary USER, WORKDIR, etc directives to reproduce the environment configured by your service file.
You'll probably want to change that --listen option; in almost all cases it doesn't make sense to bind to the localhost address inside a container.
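For example, a minimal sketch of the end of such a Dockerfile, assuming the gvm user already exists in the image (the service file runs gsad as User=gvm) and that the daemon should be reachable from outside the container:
USER gvm
EXPOSE 9392
CMD /usr/local/sbin/gsad --foreground --listen=0.0.0.0 --port=9392 --http-only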
It looks like the Greenbone project provides their own Docker images, along with instructions for running things in a containerized environment. The documentation is at https://greenbone.github.io/docs/latest/22.4/container/index.html.

Is there a way to command Docker to start containers on next boot instead of now?

I am using Packer to build an EC2 AMI containing Docker images. I want a few services (restart policy set to unless-stopped) to be downloaded and ready to run on first boot, without actually running during build time.
At the moment I docker-compose up -d, wait an arbitrary amount of time, then finish the Packer build (which probably ungracefully stops the running containers).
What I am planning is to docker-compose pull && docker-compose build and create some kind of init script that issues the docker compose run command.
Is there a better way to do this?
You can just do docker pull and add and enable a systemd unit file for each docker container. Something like this:
[Unit]
Description=Redis Container
After=docker.service
Requires=docker.service
[Service]
TimeoutStartSec=0
Restart=always
ExecStartPre=-/usr/bin/docker stop %n
ExecStartPre=-/usr/bin/docker rm %n
ExecStartPre=/usr/bin/docker pull redis
ExecStart=/usr/bin/docker run --rm --name %n redis
[Install]
WantedBy=multi-user.target
Read more in this blog post.
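During the image build (e.g. in a Packer shell provisioner) you would then pull the image and enable the unit without starting it, so the container only comes up on the next boot. A sketch, assuming the unit above is saved as /etc/systemd/system/redis-container.service:
docker pull redis
systemctl enable redis-container.service
# deliberately no "systemctl start" here, so nothing runs at build time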

running a docker container with systemd

I have a service definition that starts a docker container using systemd on CentOS 7:
[Unit]
Description=MappingService
After=portal.service
Requires=docker.service
[Service]
TimeoutStartSec=3000
Type=forking
WorkingDirectory=/home/user/Downloads/MS_0.3.4_artifact
ExecStartPre=-/bin/docker rm -f eb-mapping-service-container
ExecStartPre=/home/user/Downloads/MS_0.3.4_artifact/deploy.sh /home/user/Downloads/MS_0.3.4_artifact/eb-mapping-service.tgz
ExecStartPre=/bin/docker run -v /dev/log:/dev/log -d -ti --log-driver=journald --network=bridge -p 9090:9090 --name eb-mapping-service-container eb-mapping-service /bin/bash -c "cd /build/MappingService; ./start_multiple_clients_mapping_service.sh"
ExecStart=/bin/docker start -a eb-mapping-service-container
ExecStop=/bin/docker stop eb-mapping-service-container
[Install]
WantedBy=multi-user.target
This service works: the Docker container it launches is up, and it is running whenever I boot the computer. My problem with this service is that it never reaches the Active(Running) status. Instead, it is stuck in the 'Activating(start)' status.
The 'start_multiple_clients_mapping_service.sh' script starts a node.js server that keeps listening, so it doesn't exit immediately.
I've searched everywhere and scoured Docker's documentation about this and couldn't find an answer.
Also, if I remove the '-a' from the docker start command, then the status will be Inactive(dead) even though the container will be up and running.
Update:
After a while (I don't have an exact number), the service fails with a timeout. This isn't after 3000 seconds but much earlier. Although the service failed, the container is still up and can be used.
I've verified this with docker container ls.
Question:
How do I change my service definition so that it reflects the Active(Running) status for the container?
I figured out the problem. There were a couple of things wrong:
the docker run command should not be used with the -d and -ti flags.
the Type should be set to exec instead of forking.
After making these two changes, I got the much sought-after Active(Running) status with the container successfully launched.
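A sketch of what the corrected [Service] section could look like with those two changes applied; moving the docker run command from ExecStartPre= to ExecStart= is my assumption, since a non-detached run would otherwise block the pre-start phase:
[Service]
TimeoutStartSec=3000
Type=exec
WorkingDirectory=/home/user/Downloads/MS_0.3.4_artifact
ExecStartPre=-/bin/docker rm -f eb-mapping-service-container
ExecStartPre=/home/user/Downloads/MS_0.3.4_artifact/deploy.sh /home/user/Downloads/MS_0.3.4_artifact/eb-mapping-service.tgz
ExecStart=/bin/docker run -v /dev/log:/dev/log --log-driver=journald --network=bridge -p 9090:9090 --name eb-mapping-service-container eb-mapping-service /bin/bash -c "cd /build/MappingService; ./start_multiple_clients_mapping_service.sh"
ExecStop=/bin/docker stop eb-mapping-service-container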

How to check that the docker container is restarted and accessible?

I have a Docker container whose CMD instruction starts Jetty.
After "docker restart", which returns immediately, I cannot access Jetty for about 9-10 seconds. After that, the container and the Jetty service are up again and I can access them.
Question is: is there a standard way to check that the docker container is really up?
Surely I can make a loop with test requests to my service and wait for a 200 response code. But maybe there is a more elegant solution?
Thanks
Sergey, you need to use an init system as a supervisor for your in-Docker processes. You can use the distro's built-in init system (systemd, Upstart, or init.d, depending on your OS) to check a container's state.
In theory you should create an independent service in your init system for each docker run command, run without the -d option, because with -d Docker detaches the container and returns exit status 0 to the init system; as a result, the init system loses control of the target process.
For example, here is how to implement this mechanism with systemd:
Create a something.service file in /etc/systemd/system and put something like this in it:
[Unit]
Description=Simple Blog Rails Docker Container Service
After=docker.service
Requires=docker.service
[Service]
Restart=on-failure
ExecStartPre=-/usr/bin/docker kill simple-blog-rails-container
ExecStartPre=-/usr/bin/docker rm simple-blog-rails-container
ExecStart=/usr/bin/docker run --name simple-blog-rails-container simple-blog-rails
ExecStop=/usr/bin/docker stop simple-blog-rails-container
[Install]
WantedBy=multi-user.target
Reload the systemd configuration: systemctl daemon-reload
Then run your container with systemctl start something.service (or restart instead of start).
You can check the service state with systemctl status something.service.
For more information about using systemd and docker you may read this CoreOS manual: https://coreos.com/docs/launching-containers/launching/getting-started-with-systemd/
Keep in mind that docker restart defaults to a wait time of 10 seconds:
Usage: docker restart [OPTIONS] CONTAINER [CONTAINER...]
Restart a running container
  -t, --time=10   Number of seconds to try to stop for before killing the container. Once killed it will then be restarted. Default is 10 seconds.
When I try this with some long-running script in a docker container it takes 10 seconds before it's done. If I change the command line to use a different timeout (like -t=4) it comes back in 4 seconds.
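For example, to use a shorter stop timeout during a restart (the container name is hypothetical):
docker restart -t 4 my-jetty-container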
If you want to determine if a container is running (even if your service contained within it is not quite ready yet) you can:
Run docker ps - which will give you a list of all running Docker containers
Use the Docker Remote API command GET /containers/json - this will give you a JSON response with a list of running containers.
https://docs.docker.com/reference/api/docker_remote_api_v1.16/
https://docs.docker.com/reference/commandline/cli/#ps
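If you do end up polling the service itself, as suggested in the question, here is a small sketch of such a wait loop; the URL and the 30-second cap are assumptions:
# poll until the service responds successfully, for up to ~30 seconds
for i in $(seq 1 30); do
  if curl -fsS -o /dev/null http://localhost:8080/; then
    echo "service is up"
    break
  fi
  sleep 1
done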

Systemd - Run Utility Docker Container During `ExecStop=`

I am testing CoreOS to see if it meets our needs, and so far things are going a little slow, but OK. I like systemd, but it doesn't seem to be working properly - specifically, on shutdown.
My Goal
My goal is to have a script run on service start and stop that adds and removes records from our DNS server respectively for a service. It works when the service is started by the system on boot, or when it is manually started or shut down - but not when the system is rebooted or halted (shutdown -r now, shutdown -h now).
Here is a slightly simplified version of a docker registry service I am using for an example:
[Unit]
Description=Docker Registry
After=docker.service
Before=registry-ui.service
Wants=registry-ui.service
[Service]
Conflicts=halt.target poweroff.target reboot.target shutdown.target sleep.target
TimeoutStartSec=0
Restart=on-failure
ExecStartPre=-/usr/bin/docker kill Registry
ExecStartPre=-/usr/bin/docker rm Registry
ExecStartPre=-/usr/bin/docker run --rm myrepo:5000/tool runtool arg1 arg2
ExecStart=/usr/bin/docker run various args registry:latest
ExecStop=-/usr/bin/docker run --rm myrepo:5000/tool runtool arg1 arg2
ExecStop=-/usr/bin/docker stop Registry
[X-Fleet]
MachineID=coreos1
[Install]
WantedBy=multi-user.target
RequiredBy=registry-ui.service
Also=registry-ui.service
(This unit works together with another unit - registry-ui.service. When one is started the other does as well.)
Note the Conflicts=... line. I added it after spending time trying to figure out why the service wasn't shutting down properly. It did nothing. According to the docs, however, services have a Conflicts=shutdown.target line by default. When services conflict and one starts up, the other shuts down - or so the docs say.
What did I miss? Why won't my ExecStop= lines run?
Update
I've determined that ExecStop= lines do run. Using journalctl -u registry.service -n 200 gave me this message:
Error response from daemon: Cannot start container 7b9083a3f81710febc24256b715fcff1e8146c867500c6e8ce4d170ad1cfd11a: dbus: connection closed by user
Which indicates that the problem is (as I speculated in the comments) that my docker container won't start during shutdown. I've added the following lines to my [Unit] section:
[Unit]
After=docker.service docker.socket
Requires=docker.service docker.socket
...
The new lines have no effect on the journalctl error, so my question now becomes, is there a way to run a utility docker container prior to shutdown?
If I understood your goal correctly, you would like to run some DNS cleanup when a server is being powered off, and you are attempting to do this inside the systemd docker service file.
Why don't you use fleet for that task?
Try to create a fleet unit file that monitors your DNS server, and when it detects that the server is not reachable you can launch your cleanup tasks.
In fleet, when you destroy a service using fleetctl destroy, the ExecStop lines don't run (similar to a power-off). If you want a cleanup step, you usually achieve this with a satellite service using these directives:
serviceFile.service
[Unit]
Wants=cleanup@serviceFile.service
cleanup@serviceFile.service
[Unit]
PartOf=%i.service
Instead of starting everything in one unit file, you can start two different services and bind them using BindsTo= so that they start and stop together: http://www.freedesktop.org/software/systemd/man/systemd.unit.html#BindsTo=
That would make two smaller, simpler unit files, each taking care of one container. It would also make debugging easier for you.
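A minimal sketch of that split, with hypothetical unit names: registry.service keeps running the registry container, while a bound helper unit runs the DNS tool container from the question on its own start and stop:
# /etc/systemd/system/registry-dns.service
[Unit]
Description=DNS records for the Docker Registry
BindsTo=registry.service
After=registry.service
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/docker run --rm myrepo:5000/tool runtool arg1 arg2
ExecStop=/usr/bin/docker run --rm myrepo:5000/tool runtool arg1 arg2
[Install]
WantedBy=multi-user.target
Whether the tool container can still be started this late in the shutdown sequence is the same open question as in the update above, but splitting the units keeps each file small and makes the failure easier to isolate.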
