I have a Docker container that runs just a Go binary I created: an HTTP server built with the Gin framework. I don't use any other web server, just Go's built-in HTTP server. At the end of my Dockerfile I have this:
EXPOSE 80
CMD ["/home/project/microservices/backend/engine/cmd/main"]
I use docker-compose to run the container, with restart: always set for each container. And it works!
But my question is: if the HTTP server I created fails due to a programming error or something similar, will it restart? How can I check this? Does Docker have tools for this?
I tried going with Supervisord, but I ran into some problems and wasn't able to get it running.
I want a way to keep the HTTP server inside the container always running.
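(For reference, a compose service with this restart policy might look roughly like the following; the service name and port mapping are just placeholders.)
services:
  engine:
    build: .
    restart: always
    ports:
      - "80:80"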
What can I do?
You can try killing the process from the host. Find the process id using something like
ps aux | grep main
Then kill it using
sudo kill <process id>
Docker will then restart it. By using
docker ps
you should see that the STATUS column has changed to something like "Up 10 seconds".
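You can also ask Docker how many times it has restarted the container. A quick check with docker inspect (the container name is just a placeholder):
docker inspect --format '{{.RestartCount}}' my_container
docker ps   # STATUS should read something like "Up 10 seconds" again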
I'm trying to connect a ksqlDB CLI (running in a container using the 0.20.0 image), but it says the ksqlDB server status is unknown...
CLI v0.20.0, Server v<unknown> located at http://127.0.0.1:8088
WARNING: Could not identify server version.
Non-matching CLI and server versions may lead to unexpected errors.
Server Status: <unknown>
... which is odd, since I'm running the ksqlDB server (version 0.20.0 as well) based on these instructions, and I see the startup log
[2021-08-23 12:28:10,795] INFO ksqlDB API server listening on http://0.0.0.0:8088 (io.confluent.ksql.rest.server.KsqlRestApplication:389)
[2021-08-23 12:28:10,796] INFO Server up and running (io.confluent.ksql.rest.server.KsqlServerMain:93)
[2021-08-23 12:28:11,923] INFO Successfully submitted metrics to Confluent via secure endpoint (io.confluent.support.metrics.submitters.ConfluentSubmitter:146)
Also, in Docker Desktop (I'm running this on Windows) I see it under the Containers/Apps tab as running on port 8088, and it lets me "Open in browser", where I see the response
{
  "KsqlServerInfo": {
    "version": "0.20.0",
    "kafkaClusterId": "lkc-****",
    "ksqlServiceId": "default_",
    "serverStatus": "RUNNING"
  }
}
Any idea of what's going on?
By default, the CLI container doesn't know how to reach an external server; 127.0.0.1 refers to the container itself.
You would need to follow steps like the following:
# create a network
docker network create ksql-network
# TODO start Zookeeper and Kafka
# start the server on that network
docker run -d --name=ksql-server --network=ksql-network ... confluentinc/ksql-server:<version>
# Start the CLI to point at the '--name' on the '--network'
docker run --network=ksql-network confluentinc/ksql-cli:<version> http://ksql-server:8088
Or you could just use Compose.
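A minimal Compose sketch, assuming the confluentinc/ksqldb-server and ksqldb-cli images and a Kafka broker already reachable at broker:9092 (adjust to your setup):
version: "3"
services:
  ksqldb-server:
    image: confluentinc/ksqldb-server:0.20.0
    ports:
      - "8088:8088"
    environment:
      KSQL_LISTENERS: http://0.0.0.0:8088
      KSQL_BOOTSTRAP_SERVERS: broker:9092
  ksqldb-cli:
    image: confluentinc/ksqldb-cli:0.20.0
    entrypoint: /bin/sh
    tty: true
With that, docker-compose exec ksqldb-cli ksql http://ksqldb-server:8088 connects the CLI to the server by its service name instead of 127.0.0.1.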
Inside a Docker container which doesn't have rsyslogd installed, what happens to logs from the command logger error "my error message"?
It seems odd to have the logger command available without anything to capture and process the log events it emits. I would naively have expected the logger command and the mechanism that processes the log messages to be packaged together; it doesn't make sense to me to have one without the other.
============= EDIT with more information =============
I'm using the official Docker MariaDB 10.1 image. I'm entering the container with the command
docker exec -it maria_10_2_test bash
then I'm using the Linux logger utility to try to write to the syslog, like this:
logger error "My message here"
The logger command exits successfully with a 0 code, but nothing is written to the syslog (as there is no syslog daemon in the image).
I think this question is more general than Docker and is really a Linux question. If, on the host machine (Ubuntu 20.04), I turn off the syslog service and socket, I can still use the logger command; it still gives a zero exit code and nothing is written to the syslog.
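One way to convince yourself of this is to check whether anything is behind the /dev/log socket that logger writes to. A rough check from inside the container (assuming ls and ss are available in the image):
ls -l /dev/log            # the syslog socket usually doesn't exist in such images
ss -lx 2>/dev/null | grep /dev/log   # nothing listening means the message is simply dropped
logger "test"; echo $?    # still exits 0 even though the message goes nowhere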
I'm new to the Dart language, and also new to running API services on Linux.
My question is: how do I keep the Dart service running on Linux?
And how can I have it restart automatically if there is a problem with the service?
Do I need to run it from crontab?
You can create a systemd service for your Aqueduct application and enable it so it runs automatically when your server starts. There are a lot of options for systemd services, but I have tried to make an example that fits your requirements:
[Unit]
Description=Dart Web Server
Wants=network-online.target
After=network-online.target
[Service]
Restart=always
ExecStart=/opt/dart-sdk/bin/dart bin/main.dart
WorkingDirectory=/tmp/web/my_project
User=webserver_user
[Install]
WantedBy=multi-user.target
Save this as /etc/systemd/system/name_of_your_service.service
Then run the following commands:
systemctl daemon-reload
This ensures the latest changes to your service files are loaded into systemd.
systemctl start name_of_your_service.service
This starts your service. You can stop it with "stop" and restart it with "restart".
systemctl enable name_of_your_service.service
This enables the service so it starts at boot. You can also "disable" it.
Another useful command is status, which shows some information about your service (e.g. whether it is running) and some of the latest log events (from stdout):
systemctl status name_of_your_service.service
Let me go through the settings I have specified:
"Wants"/"After" ensures that the service are first started after a network connection has been established (mostly relevant for when the service should start under the boot sequence).
"Restart" specifies what should happen if the dart process are stopped without using "systemctl stop". With "always" the service are restarted no matter how the program was terminated.
"ExecStart" the program which we want to keep running.
"User" is the user your want the service to run as.
The "WantedBy" part are relevant for the "systemctl enable" part and specifies when the service should be started. Use multi-user.target here unless you have some specific requirements.
Again, there are lot of options for systemd services and you should also check out journalctl if you want to see stdout log output for you service.
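For example, to follow the service's stdout/stderr live via the journal:
journalctl -u name_of_your_service.service -f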
My application sends out syslog local0 messages.
When I moved my application into Docker, I found it difficult to see the syslog.
I've tried running Docker with --log-driver set to syslog or journald; both behave strangely. /var/log/local0.log shows the console output of the Docker container instead of my application's syslog messages when I run this command inside the container:
logger -p local0.info -t a message
So I tried installing syslog-ng inside the Docker container.
The Docker host is Arch Linux (kernel 4.14.8 + systemd).
The container runs CentOS 6. If I install syslog-ng inside the container and start it, it shows the following messages.
# yum install -y syslog-ng # this will install syslog-ng 3.2.5
# /etc/init.d/syslog-ng start
Plugin module not found in 'module-path'; module-path='/lib64/syslog-ng', module='afsql'
Starting syslog-ng: Plugin module not found in 'module-path'; module-path='/lib64/syslog-ng', module='afsql'
Error opening file for reading; filename='/proc/kmsg', error='Operation not permitted (1)'
Error initializing source driver; source='s_sys', id='s_sys#0'
Error initializing message pipeline;
I also had problems getting standard syslog output from my app after it had been dockerized.
I attacked the problem from a different direction: I wanted to get the container's syslog messages into the host's /var/log/syslog.
I ran my container with an extra mount for the /dev/log device, and voilà, it worked like a charm.
docker run -v /dev/log:/dev/log sysloggingapp:latest
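If you start the container from a compose file instead, the equivalent bind mount might look like this (image name taken from the command above):
services:
  sysloggingapp:
    image: sysloggingapp:latest
    volumes:
      - /dev/log:/dev/log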
CentOS 6:
1.
Plugin module not found in 'module-path'; module-path='/lib64/syslog-ng', module='afsql'
Starting syslog-ng: Plugin module not found in 'module-path'; module-path='/lib64/syslog-ng', module='afsql'
You can fix the above error by installing the syslog-ng-libdbi package:
yum install -y syslog-ng-libdbi
2.
Error opening file for reading; filename='/proc/kmsg', error='Operation not permitted (1)'
Error initializing source driver; source='s_sys', id='s_sys#0'
Error initializing message pipeline;
Since syslog-ng doesn't have direct access to the kernel messages, you need to disable (comment out) that source in its configuration:
sed -i 's|file ("/proc/kmsg"|#file ("/proc/kmsg"|g' /etc/syslog-ng/syslog-ng.conf
CentOS 7:
1.
Error opening file for reading; filename='/proc/kmsg', error='Operation not permitted (1)'
The system() source is part of the default configuration. It reads platform-specific sources automatically, including /dev/kmsg on Linux if the kernel is version 3.5 or newer. So we need to disable (comment out) the system() source in the configuration file:
sed -i 's/system()/# system()/g' /etc/syslog-ng/syslog-ng.conf
2. When we start it in foreground mode with syslog-ng -F, we get the following:
# syslog-ng -F
syslog-ng: Error setting capabilities, capability management disabled; error='Operation not permitted'
So we need to run syslog-ng as root, without capability support:
syslog-ng --no-caps -F
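If you want syslog-ng itself to be the long-running process of a CentOS 7 based container, a rough Dockerfile sketch could look like this (assuming syslog-ng is installed from EPEL; adapt the config edit to your needs):
FROM centos:7
RUN yum install -y epel-release && \
    yum install -y syslog-ng && \
    sed -i 's/system()/# system()/g' /etc/syslog-ng/syslog-ng.conf
CMD ["syslog-ng", "--no-caps", "-F"]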
Another way is to set up centralized logging with a syslog/rsyslog server, then use the syslog Docker logging driver. The syntax to use on the docker run command line is:
$ docker run --log-driver=syslog \
--log-opt syslog-address=udp://address:port image-name
The destination syslog server protocol can be udp or tcp, and the server address can be a remote server, a VM, a different container, or a local container address.
Replace image-name with your application's Docker image name.
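The docker-compose equivalent is roughly the following (replace the address and image name as above):
services:
  app:
    image: image-name
    logging:
      driver: syslog
      options:
        syslog-address: "udp://address:port"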
A ready-made rsyslog Docker image is available at https://github.com/jumanjihouse/docker-rsyslog
References: Docker Logging at docker.com,
Docker CLI, https://www.aquasec.com/wiki/display/containers/Docker+Containers+vs.+Virtual+Machines
For anyone trying to figure this out in the future:
the best way I've found is to just set the LOG_PERROR flag in openlog().
That way, your syslog messages are also printed to stderr, which Docker then logs by default (you don't need to run a syslog process in Docker for this). This is much easier than trying to figure out how to run a syslog process alongside your application inside your Docker container (which Docker probably isn't designed for anyway).
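A minimal sketch of what that looks like in C (program name and facility are illustrative):
#include <syslog.h>

int main(void) {
    /* LOG_PERROR duplicates every syslog message to stderr,
       which Docker's default logging driver captures. */
    openlog("myapp", LOG_PID | LOG_PERROR, LOG_LOCAL0);
    syslog(LOG_INFO, "service started");
    closelog();
    return 0;
}
Run the container and the message shows up in docker logs without any syslog daemon inside the container.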