How to Deploy systemd Microservices to a Docker Image

I maintain an application developed in C, run by systemd, and structured as microservices. The services communicate with each other via Linux shared memory (IPCS) and use HTTP to communicate with the outside. My question is: is it a good idea to move all of these services into one Docker container? I'm new to containers, and people have recommended that I learn and use them.
A simplified design of my application is below (note: MS is Microservice):
[architecture diagram not shown]

The official Docker documentation says:
It is generally recommended that you separate areas of concern by using one service per container
When a Docker container starts, it needs to be tied to a live foreground process; if that process exits, the whole container exits. Docker's default logging behavior is also tied to this model: it captures the stdout of that single process.
Several processes
If you have several processes and none of them is the "main" one, you can start them as background processes, but you will need a supervising loop in bash to simulate a foreground process. Inside this loop you can check whether your services are still running, since a live container makes no sense once its internal processes have exited or failed.
while sleep 60; do
  ps aux | grep my_first_process | grep -q -v grep
  PROCESS_1_STATUS=$?
  ps aux | grep my_second_process | grep -q -v grep
  PROCESS_2_STATUS=$?
  # If the greps above find anything, they exit with 0 status
  # If they are not both 0, then something is wrong
  if [ $PROCESS_1_STATUS -ne 0 -o $PROCESS_2_STATUS -ne 0 ]; then
    echo "One of the processes has already exited."
    exit 1
  fi
done
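If bash 4.3 or newer is available in the image, `wait -n` gives a simpler supervisor than polling with ps: start every service in the background and block until the first one exits. A minimal sketch, with sleep commands standing in as placeholders for the real services:

```shell
#!/bin/bash
# Supervisor sketch: the sleep commands are placeholders for real
# services such as /opt/services/ms1 and /opt/services/ms2.
run_services() {
  sleep 1 &     # placeholder for service 1
  sleep 30 &    # placeholder for service 2

  # wait -n (bash 4.3+) returns as soon as ANY background job exits,
  # so the container stops as soon as one service dies.
  wait -n
  echo "a service exited; stopping the container"
  kill $(jobs -p) 2>/dev/null   # clean up the remaining services
  return 1
}
```

In a real ENTRYPOINT script you would call run_services and exit with its return code, so Docker notices the container stopping.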
One process
As Apache and other tools do, you could create one long-lived process and start the other child processes from inside it; this is called spawning a process. Since you mentioned HTTP, this main process could also expose HTTP endpoints to exchange information with the outside.
I'm not a C expert, but the system function could be one option for launching another process:
#include <stdlib.h>

int main(void)
{
    /* system() blocks until the command finishes, so background
       long-running services with & (or use fork/exec instead) */
    system("commands to launch service1 &");
    system("commands to launch service2 &");
    /* keep this process alive here (e.g., start a server);
       if main returns, the container stops */
    return 0;
}
Here are some links:
How do you spawn another process in C?
https://suchprogramming.com/new-linux-process-c/
http://cplusplus.com/forum/general/250912/
Also, to create a basic HTTP server, you could check this (the example uses RESTinio, a C++ library):
https://stackoverflow.com/a/54164425/3957754
restinio::run(
    restinio::on_this_thread()
        .port(8080)
        .address("localhost")
        .request_handler([](auto req) {
            return req->create_response().set_body("Hello, World!").done();
        }));
This program stays alive once it starts because it is a server, which makes it a good fit for Docker.
A REST API is the most common strategy for exchanging information over the internet between servers and/or devices.
If you achieve this, your C program will have these features:
start the other required processes (ms1, ms2, ms3, etc.)
expose REST HTTP endpoints to send and receive information between your services and the outside world. Sample:
method: GET
url: https://alexsan.com/domotic-services/ms1/message/1
description: REST endpoint that returns message 1 from the ms1 service queue
returns:
{
  "command": "close gateway=5"
}
method: POST
url: https://alexsan.com/domotic-services/ms2/message
description: REST endpoint that receives a message containing a command to be executed on service ms2
receives:
{
  "id": 100,
  "command": "open gateway=2"
}
returns:
{
  "command": "close gateway=5"
}
These HTTP endpoints could be invoked from web apps, mobile apps, etc.
Use high-level languages
You could use Python, Node.js, or Java to start a server, launch your services from inside it, and, if you want, expose some HTTP endpoints. Here is a basic example with Python:
FROM python:3
WORKDIR /usr/src/app
# create requirements
RUN echo "bottle==0.12.17" > requirements.txt
# app.py is created with echo just for demo purposes;
# in a real scenario, app.py should be a separate file
RUN echo "from bottle import route, run" >> app.py
RUN echo "import os" >> app.py
# os.P_NOWAIT launches each child without waiting for it
# (os.P_DETACH exists only on Windows); spawnl also needs the
# program name as the first argument after the path
RUN echo "os.spawnl(os.P_NOWAIT, '/opt/services/ms1.acme', 'ms1.acme')" >> app.py
RUN echo "os.spawnl(os.P_NOWAIT, '/opt/services/ms2.acme', 'ms2.acme')" >> app.py
RUN echo "os.spawnl(os.P_NOWAIT, '/opt/services/ms3.acme', 'ms3.acme')" >> app.py
RUN echo "os.spawnl(os.P_NOWAIT, '/opt/services/ms4.acme', 'ms4.acme')" >> app.py
RUN echo "@route('/domotic-services/ms2/message')" >> app.py
RUN echo "def index():" >> app.py
RUN echo "    return 'I will query the message'" >> app.py
RUN echo "run(host='0.0.0.0', port=80)" >> app.py
RUN pip install --no-cache-dir -r requirements.txt
CMD [ "python", "./app.py" ]
You can also use Node.js:
https://github.com/jrichardsz/nodejs-express-snippets/blob/master/01-hello-world.js
https://nodejs.org/en/knowledge/child-processes/how-to-spawn-a-child-process/

Related

Can I run a docker container from the browser?

I don't suppose anyone knows if it's possible to call the docker run or docker compose up commands from a web app?
I have the following scenario: a React app that uses OpenLayers for its maps. When the user loses internet connectivity, the app falls back to making requests to a map server running locally in Docker. The issue is that the user needs to start the server manually via the command line. To make things easier for the user, I added the following bash script and Docker Compose file to boot up the server with a single command, but I was wondering if I could incorporate that functionality into the web app and let the user boot the map server at the click of a button.
Just for reference's sake, these are my bash and Compose files.
#!/bin/sh
dockerDown=`docker info | grep -qi "ERROR" && echo "stopped"`
if [ $dockerDown ]
then
  echo "\n ********* Please start docker before running this script ********* \n"
  exit 1
fi

skipInstall="no"
read -p "Have you imported the maps already and just want to run the app (y/n)?" choice
case "$choice" in
  y|Y ) skipInstall="yes";;
  n|N ) skipInstall="no";;
  * ) skipInstall="no";;
esac

pbfUrl='https://download.geofabrik.de/asia/malaysia-singapore-brunei-latest.osm.pbf'
#polyUrl='https://download.geofabrik.de/asia/malaysia-singapore-brunei.poly'
#-e DOWNLOAD_POLY=$polyUrl \

docker volume create openstreetmap-data
docker volume create openstreetmap-rendered-tiles

if [ $skipInstall = "no" ]
then
  echo "\n ***** IF THIS IS THE FIRST TIME, YOU MIGHT WANT TO GO GET A CUP OF COFFEE WHILE YOU WAIT ***** \n"
  docker run \
    -e DOWNLOAD_PBF=$pbfUrl \
    -v openstreetmap-data:/var/lib/postgresql/12/main \
    -v openstreetmap-rendered-tiles:/var/lib/mod_tile \
    overv/openstreetmap-tile-server \
    import
  echo "Finished Postgres container!"
fi

echo "\n *** BOOTING UP SERVER CONTAINER *** \n"
docker compose up
My Docker Compose file:
version: '3'
services:
  map:
    image: overv/openstreetmap-tile-server
    volumes:
      - openstreetmap-data:/var/lib/postgresql/12/main
      - openstreetmap-rendered-tiles:/var/lib/mod_tile
    environment:
      - THREADS=24
      - OSM2PGSQL_EXTRA_ARGS=-C 4096
      - AUTOVACUUM=off
    ports:
      - "8080:80"
    command: "run"
volumes:
  openstreetmap-data:
    external: true
  openstreetmap-rendered-tiles:
    external: true
There is the Docker Engine API, and with it you are able to start containers. See the Docker documentation:
https://docs.docker.com/engine/api/
To start a container using the Docker API:
https://docs.docker.com/engine/api/v1.41/#operation/ContainerStart
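As a hedged sketch of what the web app's backend could call: the Engine API exposes POST /containers/{id}/start, which curl can reach over the local unix socket. The container name map-server and API version v1.41 below are assumptions, and the backend would need read access to /var/run/docker.sock:

```shell
#!/bin/sh
# Build the Engine API path for starting a container by name or id.
container_start_path() {
  printf '/v1.41/containers/%s/start' "$1"
}

# Invocation sketch (needs curl built with --unix-socket support and
# permission to use /var/run/docker.sock):
#   curl -s --unix-socket /var/run/docker.sock \
#        -X POST "http://localhost$(container_start_path map-server)"
```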

Dockerize 'at' scheduler

I want to put the at daemon (atd) in a separate Docker container, to run it as an external, environment-independent scheduler service.
I can run atd with the following Dockerfile and docker-compose.yml:
$ cat Dockerfile
FROM alpine
RUN apk add --update at ssmtp mailx
CMD [ "atd", "-f" ]
$ cat docker-compose.yml
version: '2'
services:
  scheduler:
    build: .
    working_dir: /mnt/scripts
    volumes:
      - "${PWD}/scripts:/mnt/scripts"
But the problems are:
1) There is no built-in option to redirect atd logs to /proc/self/fd/1 so they show up via the docker logs command. at only has the -m option, which sends mail to the user.
Is it possible to redirect at output from user mail to /proc/self/fd/1 (maybe via some compile flags)?
2) For now I add a new task via a command like docker-compose exec scheduler at -f test.sh now + 1 minute. Is this a good way? I think a better way would be to find the file where at stores its queue, add that file as a volume, update it externally, and just send docker restart after the file changes.
But I can't find where at stores its data on Alpine Linux (I only found /var/spool/atd/.SEQ, where at stores the id of the last job). Does anyone know where at stores its data?
I'd also be glad to hear any advice regarding dockerizing at.
UPD: I found where at stores its data on Alpine; it's the /var/spool/atd folder. When I create a task via the at command, it creates an executable file there with a name like a000040190a2ff and content like:
#!/bin/sh
# atrun uid=0 gid=0
# mail root 1
umask 22
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin; export PATH
HOSTNAME=e605e8017167; export HOSTNAME
HOME=/root; export HOME
cd /mnt/scripts || {
echo 'Execution directory inaccessible' >&2
exit 1
}
#!/usr/bin/env sh
echo "Hello world"
UPD2: the difference between running at with and without the -m option is the third line of the generated script:
with -m option:
#!/bin/sh
# atrun uid=0 gid=0
# mail root 1
...
without -m :
#!/bin/sh
# atrun uid=0 gid=0
# mail root 0
...
According to the official man page:
The user will be mailed standard error and standard output from his
commands, if any. Mail will be sent using the command
/usr/sbin/sendmail
and
-m
Send mail to the user when the job has completed even if there was no
output.
I scheduled a simple Hello World script and found that no mail was sent:
# mail -u root
No mail for root

Docker swarm: guarantee high availability after restart

I have an issue using Docker swarm.
I have 3 replicas of a Python web service running on Gunicorn.
The issue is that when I restart the swarm service after a software update, an old running instance is killed, then a new one is created and started. But in the short window when the old instance has already been killed and the new one hasn't fully started yet, requests are already routed to the new instance that isn't ready, resulting in 502 Bad Gateway errors (I proxy to the service from nginx).
I use --update-parallelism 1 --update-delay 10s options, but this doesn't eliminate the issue, only slightly reduces chances of getting the 502 error (because there are always at least 2 services running, even if one of them might be still starting up).
So, following what I've proposed in comments:
Use the HEALTHCHECK feature of Dockerfile: Docs. Something like:
HEALTHCHECK --interval=5m --timeout=3s \
CMD curl -f http://localhost/ || exit 1
Knowing that Docker Swarm honors this healthcheck during service updates, it's relatively easy to achieve a zero-downtime deployment.
But as you mentioned, your healthcheck is resource-heavy, so you need larger intervals between real checks.
In that case, I recommend customizing the healthcheck so the first run happens immediately and subsequent real checks run only when current_minute % 5 == 0, while the healthcheck command itself runs every 30s:
HEALTHCHECK --interval=30s --timeout=3s \
  CMD /service_healthcheck.sh
healthcheck.sh
#!/bin/bash
CURRENT_MINUTE=$(date +%M)
INTERVAL_MINUTE=5

do_healthcheck() {
  curl -f http://localhost/ || exit 1
}

# Always run the very first check immediately
if [ ! -f /tmp/healthcheck.first.run ]; then
  do_healthcheck
  touch /tmp/healthcheck.first.run
  exit 0
fi

# Run only on minutes that are multiples of $INTERVAL_MINUTE.
# The 10# prefix forces base 10, so minutes 08 and 09 are not
# misread as invalid octal numbers.
[ $((10#$CURRENT_MINUTE % INTERVAL_MINUTE)) -eq 0 ] && do_healthcheck
exit 0
Remember to COPY the healthcheck script to /service_healthcheck.sh in the image (and chmod +x it).
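If your Docker release supports it, the HEALTHCHECK --start-period flag is also worth a look: probe failures during the start period don't count against the container, which pairs well with rolling updates. A sketch; check the Dockerfile reference for availability in your version:

```dockerfile
HEALTHCHECK --interval=30s --timeout=3s --start-period=60s \
  CMD /service_healthcheck.sh
```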
There are some known issues (e.g. moby/moby #30321) with rolling upgrades in Docker Swarm in the current 17.05 and earlier releases (and it doesn't look like all the fixes will make 17.06). These issues will result in connection errors during a rolling upgrade like the ones you're seeing.
If you have a true zero downtime deployment requirement and can't solve this with a client side retry, then I'd recommend putting in some kind of blue/green switch in front of your swarm and do the rolling upgrade to the non-active set of containers until docker finds solutions to all of the scenarios.

How to know if my program is completely started inside my docker with compose

In my CI chain I execute end-to-end tests after a "docker-compose up". Unfortunately my tests often fail because even if the containers are properly started, the programs contained in my containers are not.
Is there an elegant way to verify that my setup is completely started before running my tests?
You could poll the required services to confirm they are responding before running the tests.
curl has built-in retry logic, or it's fairly trivial to build retry logic around some other type of service test.
#!/bin/bash
await(){
  local url=${1}
  local seconds=${2:-30}
  curl --max-time 5 --retry 60 --retry-delay 1 \
    --retry-max-time ${seconds} "${url}" \
    || exit 1
}
docker-compose up -d
await http://container_ms1:3000
await http://container_ms2:3000
run-ze-tests
The alternative to polling is an event-based system.
If all your services push notifications to an external service (scaeda gave the example of a log file, or you could use something like Amazon SNS), your services emit a "started" event; you can then subscribe to those events and run whatever you need once everything has started.
Docker 1.12 also added the HEALTHCHECK build command. Maybe its status is available via Docker events?
If you have control over the Docker engine in your CI setup, you could execute docker logs [Container_Name] and read out the last line, which could be emitted by your application.
RESULT=$(docker logs [Container_Name] 2>&1 | grep [Search_String])
logs output example:
Agent pid 13
Enter passphrase (empty for no passphrase): Enter same passphrase again: Identity added: id_rsa (id_rsa)
#host SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.6
#host SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.6
parse specific line:
RESULT=$(docker logs ssh_jenkins_test 2>&1 | grep Enter)
result:
Enter passphrase (empty for no passphrase): Enter same passphrase again: Identity added: id_rsa (id_rsa)
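A small retry wrapper around that idea, hedged as a sketch: poll a command's output until a marker line appears or a timeout is reached. In the CI case, the command would be docker logs on your container (the container name and marker string below are placeholders):

```shell
#!/bin/bash
# wait_for_line MARKER TRIES COMMAND...
# Re-runs COMMAND up to TRIES times, one second apart, until its
# output contains MARKER. Returns 0 on match, 1 on timeout.
wait_for_line() {
  local marker="$1" tries="$2"
  shift 2
  local i
  for ((i = 0; i < tries; i++)); do
    if "$@" 2>&1 | grep -q "$marker"; then
      return 0
    fi
    sleep 1
  done
  return 1
}

# usage sketch with a hypothetical container name:
#   wait_for_line "Server started" 60 docker logs ssh_jenkins_test
```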

Nagios Percona Monitoring Plugin

I was reading a blog post on Percona Monitoring Plugins showing how you can monitor a Galera cluster using the pmp-check-mysql-status plugin. Below is the link to the blog post demonstrating that:
https://www.percona.com/blog/2013/10/31/percona-xtradb-cluster-galera-with-percona-monitoring-plugins/
The commands in this tutorial are run on the command line. I wish to try these commands in a Nagios .cfg file, e.g. monitor.cfg. How do I write the services for the commands used in this tutorial?
This was my attempt, and I cannot figure out the best parameters to use for check_command on the service. I suspect that is where the problem is.
So inside my /etc/nagios3/conf.d/monitor.cfg file, I have the following:
define host{
    use        generic-host
    host_name  percona-server
    alias      percona
    address    127.0.0.1
}

## Check for a Primary Cluster
define command{
    command_name  check_mysql_status
    command_line  /usr/lib/nagios/plugins/pmp-check-mysql-status -x wsrep_cluster_status -C == -T str -c non-Primary
}

define service{
    use                  generic-service
    hostgroup_name       mysql-servers
    service_description  Cluster
    check_command        pmp-check-mysql-status!wsrep_cluster_status!==!str!non-Primary
}
When I run Nagios and go to monitor it, I get this message in the Nagios dashboard:
status: UNKNOWN; /usr/lib/nagios/plugins/pmp-check-mysql-status: 31: shift: can't shift that many
You verified that:
/usr/lib/nagios/plugins/pmp-check-mysql-status -x wsrep_cluster_status -C == -T str -c non-Primary
works fine on the command line on the target host? I suspect there's a shell-escaping issue with the ==.
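One hedged way to rule out that escaping problem is to pass the values through Nagios $ARGn$ macros and quote the comparison operator in the command definition, so the ! arguments in the service are actually used. A sketch; adjust the plugin path for your distribution:

```
define command{
    command_name  check_mysql_status
    command_line  /usr/lib/nagios/plugins/pmp-check-mysql-status -x $ARG1$ -C '$ARG2$' -T $ARG3$ -c $ARG4$
}

define service{
    use                  generic-service
    hostgroup_name       mysql-servers
    service_description  Cluster
    check_command        check_mysql_status!wsrep_cluster_status!==!str!non-Primary
}
```

Note that the service must reference the defined command_name (check_mysql_status) for the arguments to be wired through.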
Does this work well for you? /usr/lib64/nagios/plugins/pmp-check-mysql-status -x wsrep_flow_control_paused -w 0.1 -c 0.9
