I'm new to the Dart language, and also new to running API services on Linux.
My question is: how do I keep the Dart service running on Linux?
And how can I make it restart (recycle) if the service runs into a problem?
Do I need to run it from crontab?
You can create a systemd service for your Aqueduct application and enable it so that it runs automatically when your server starts. There are a lot of options for systemd services, but I have tried to put together an example that matches your requirements:
[Unit]
Description=Dart Web Server
Wants=network-online.target
After=network-online.target
[Service]
Restart=always
ExecStart=/opt/dart-sdk/bin/dart bin/main.dart
WorkingDirectory=/tmp/web/my_project
User=webserver_user
[Install]
WantedBy=multi-user.target
Save this as /etc/systemd/system/name_of_your_service.service
Then run the following commands:
systemctl daemon-reload
This will ensure the latest changes to your available services are loaded into systemd.
systemctl start name_of_your_service.service
This will start your service. You can stop it with "stop" and restart it with "restart".
systemctl enable name_of_your_service.service
This will enable the service so it will start after boot. You can also "disable" it.
Another good command is the status command, which shows some information about your service (e.g. whether it is running) and some of the latest log events (from stdout):
systemctl status name_of_your_service.service
Let me go through the settings I have specified:
"Wants"/"After" ensures that the service are first started after a network connection has been established (mostly relevant for when the service should start under the boot sequence).
"Restart" specifies what should happen if the dart process are stopped without using "systemctl stop". With "always" the service are restarted no matter how the program was terminated.
"ExecStart" the program which we want to keep running.
"User" is the user your want the service to run as.
The "WantedBy" part are relevant for the "systemctl enable" part and specifies when the service should be started. Use multi-user.target here unless you have some specific requirements.
Again, there are lot of options for systemd services and you should also check out journalctl if you want to see stdout log output for you service.
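For example, to follow the stdout log of your service in real time (the -f flag works like tail -f):
journalctl -u name_of_your_service.service -f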
I read about Enable Live Restore, but it did not work when I tried it.
ubuntu@ip-10-0-0-230:~$ cat /etc/docker/daemon.json
{
"live-restore": true
}
I started an nginx container in detached mode.
sudo docker run -d nginx
c73a20d1bb620e2180bc1fad7d10acb402c89fed9846f06471d6ef5860f76fb5
$ sudo docker ps
CONTAINER ID   IMAGE   COMMAND                  CREATED         STATUS
c73a20d1bb62   nginx   "nginx -g 'daemon of…"   5 seconds ago   Up 4 seconds
Then I stopped dockerd:
sudo systemctl stop snap.docker.dockerd.service
and I checked that there was no container running:
ps aux | grep nginx
After that, I restarted the Docker service and still there wasn't any container running.
Any idea? How does this "enable live restore" work?
From the documentation, after modifying daemon.json (adding "live-restore": true) you need to:
Restart the Docker daemon. On Linux, you can avoid a restart (and avoid any downtime for your containers) by reloading the Docker daemon. If you use systemd, then use the command systemctl reload docker. Otherwise, send a SIGHUP signal to the dockerd process.
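If you are not using systemd, sending the signal by hand would look something like this (pidof is just one way to find the process ID):
sudo kill -HUP $(pidof dockerd)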
You can also do the following, but it's not recommended:
If you prefer, you can start the dockerd process manually with the --live-restore flag. This approach is not recommended because it does not set up the environment that systemd or another process manager would use when starting the Docker process. This can cause unexpected behavior.
It seems that you had not done this step. You said that you made the modification to daemon.json, then directly started a container, and then stopped dockerd.
In order to make the Live Restore functionality work, follow all the steps in the right order:
Modify the daemon.json by adding "live-restore": true
Reload the Docker daemon with the command:
sudo systemctl reload docker
Then try the functionality with your example (firing up a container and making the daemon unavailable).
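Put together, a test run could look roughly like this (assuming daemon.json contains nothing but the live-restore setting, and using an arbitrary container name):
echo '{ "live-restore": true }' | sudo tee /etc/docker/daemon.json
sudo systemctl reload docker
sudo docker run -d --name web nginx
sudo systemctl stop docker
ps aux | grep nginx
With live restore working, the nginx processes from the last command should still be running even though the daemon is down.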
I've tested it, and it works if you follow the steps in order:
Tested with Docker version 19.03.2, build 6a30dfc and Ubuntu 19.10 (Eoan Ermine)
You've installed Docker via snap: snap.docker.dockerd.service.
Unfortunately, this is not recommended, since the snap model is not fully compatible with Docker. Furthermore, docker-snap is no longer maintained by Docker, Inc. Users have encountered issues after installing Docker via snap (see 1, 2).
You should remove the snap Docker installation to avoid any potential overlapping-installation issues, with this command:
sudo snap remove docker --purge
Then install Docker the official way, and after that try the Live Restore functionality by following the steps above.
Also be careful when restarting the daemon; the documentation says:
Live restore upon restart
The live restore option only works to restore containers if the daemon options, such as bridge IP addresses and graph driver, did not change. If any of these daemon-level configuration options have changed, the live restore may not work and you may need to manually stop the containers.
Also, about downtime:
Impact of live restore on running containers
If the daemon is down for a long time, running containers may fill up the FIFO log the daemon normally reads. A full log blocks containers from logging more data. The default buffer size is 64K. If the buffers fill, you must restart the Docker daemon to flush them.
On Linux, you can modify the kernel’s buffer size by changing /proc/sys/fs/pipe-max-size.
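For example, to raise that limit to 1 MB (the value is illustrative, and the change does not survive a reboot unless you persist it via sysctl configuration):
echo 1048576 | sudo tee /proc/sys/fs/pipe-max-size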
I am trying to automate deployment of a web application; updating the app requires shutting down cron and nginx.
The problem is, when I stop the processes via service nginx stop and service cron stop, they are restarted by supervisord.
There is no init.d script for supervisord, and furthermore I am not certain whether one is supposed to use supervisorctl to manage services.
What is the proper approach?
You need to use supervisorctl for this. But that will only work if you have supervisorctl configured in your supervisord configuration.
So you need to use
$ supervisorctl status
This will give you the names of the services and then you can use
$ supervisorctl stop nginx-program
$ supervisorctl stop cron-program
That is how you should handle it.
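For reference, those program names come from the [program:x] sections of the supervisord configuration; a minimal sketch (paths and names are illustrative) looks like this:
[program:nginx-program]
command=/usr/sbin/nginx -g "daemon off;"
autostart=true
autorestart=true
Note that supervisord only manages foreground processes, which is why nginx is started with daemon off; here.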
When I use systemctl in a Fedora container, I get:
Failed to get D-Bus connection: Unknown error -1
Does someone know how to fix this? Or can systemctl not be used in a docker container?
The systemctl command talks to systemd over a DBus connection. It is unlikely that you are running systemd in your container, so systemctl has nothing with which to talk.
While it is possible to run systemd in a container, doing so is often (but not always!) a sign that you need to rethink the architecture of your containers.
I have fixed a similar issue, check this answer.
The main idea is to make /usr/sbin/init the first process inside the container.
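A rough sketch of that approach (the flags here are assumptions; the exact requirements vary by systemd and Docker version):
docker run -d --name fedora-systemd \
  --tmpfs /run --tmpfs /tmp \
  -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
  fedora /usr/sbin/init
Once the container is running with init as PID 1, systemctl commands inside it have a systemd to talk to.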
As already said, the standard systemctl needs systemd. But for a command like "systemctl enable", or for starting a service process, one can actually do that without a running systemd.
"systemctl enable" will essentially look in the sshd.service file for a "WantedBy=multi-user.target" clause and then create a symlink in /etc/systemd/system/multi-user.target.wants/. Similarly, "systemctl start" will look for the "ExecStart=/usr/bin/sshd" clause in the sshd.service file.
If you do not want to look that up and run those parts manually, you could use my systemctl.py helper from the docker-systemctl-replacement, which can do the interpretation of systemd service files for you.
I have a docker container with a Jetty CMD instruction.
After "docker restart", which returns immediately, I cannot access Jetty for about 9-10 seconds. After that time the docker container, or the Jetty service, is up again and I can access it.
Question is: is there a standard way to check that the docker container is really up?
Surely I can make a loop with test requests to my service and wait for a 200 response code. But maybe there is a more elegant solution?
Thanks
Sergey, you need to use an init system as the supervisor of your in-Docker processes. Depending on your OS, you can use a distro-built-in init system such as systemd, upstart, or init.d to check the container state.
In theory you should create an independent service in your init system for each docker run command, without the -d option, because with -d docker detaches the container and returns exit status 0 to the init system. As a result, the init system loses control of the target process.
For example, here is how to implement this mechanism with systemd:
Create a something.service file in /etc/systemd/system
and put something like this in it:
[Unit]
Description=Simple Blog Rails Docker Container Service
After=docker.service
Requires=docker.service
[Service]
Restart=on-failure
ExecStartPre=-/usr/bin/docker kill simple-blog-rails-container
ExecStartPre=-/usr/bin/docker rm simple-blog-rails-container
ExecStart=/usr/bin/docker run --name simple-blog-rails-container simple-blog-rails
ExecStop=/usr/bin/docker stop simple-blog-rails-container
[Install]
WantedBy=multi-user.target
Reload the systemd configuration: systemctl daemon-reload
Then try to run your container by typing systemctl start something.service (or restart instead of start).
You can check the service state with systemctl status something.service
For more information about using systemd and docker you may read this CoreOS manual: https://coreos.com/docs/launching-containers/launching/getting-started-with-systemd/
Keep in mind that docker restart defaults to a wait time of 10 seconds:
Usage: docker restart [OPTIONS] CONTAINER [CONTAINER...]
Restart a running container
-t, --time=10   Number of seconds to try to stop for before killing the container. Once killed it will then be restarted. Default is 10 seconds.
When I try this with some long-running script in a docker container, it takes 10 seconds before it's done. If I change the command line to use a different timeout (like -t=4), it comes back in 4 seconds.
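For example, with a container named web (the name is hypothetical):
docker restart -t 4 web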
If you want to determine if a container is running (even if your service contained within it is not quite ready yet) you can:
Run docker ps - which will give you a list of all running docker containers
Use the Docker Remote API command GET /containers/json - this will give you a JSON response with a list of running containers.
https://docs.docker.com/reference/api/docker_remote_api_v1.16/
https://docs.docker.com/reference/commandline/cli/#ps
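If the daemon is listening on the default Unix socket, the Remote API call can be made with a reasonably recent curl (the socket path may differ on your setup):
curl --unix-socket /var/run/docker.sock http://localhost/containers/json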
I am testing CoreOS to see if it meets our needs, and so far things are going a little slow, but OK. I like systemd, but it doesn't seem to be working properly - specifically, on shutdown.
My Goal
My goal is to have a script run on service start and stop that adds and removes records from our DNS server respectively for a service. It works when the service is started by the system on boot, or when it is manually started or shut down - but not when the system is rebooted or halted (shutdown -r now, shutdown -h now).
Here is a slightly simplified version of a docker registry service I am using for an example:
[Unit]
Description=Docker Registry
After=docker.service
Before=registry-ui.service
Wants=registry-ui.service
Conflicts=halt.target poweroff.target reboot.target shutdown.target sleep.target
[Service]
TimeoutStartSec=0
Restart=on-failure
ExecStartPre=-/usr/bin/docker kill Registry
ExecStartPre=-/usr/bin/docker rm Registry
ExecStartPre=-/usr/bin/docker run --rm myrepo:5000/tool runtool arg1 arg2
ExecStart=/usr/bin/docker run various args registry:latest
ExecStop=-/usr/bin/docker run --rm myrepo:5000/tool runtool arg1 arg2
ExecStop=-/usr/bin/docker stop Registry
[X-Fleet]
MachineID=coreos1
[Install]
WantedBy=multi-user.target
RequiredBy=registry-ui.service
Also=registry-ui.service
(This unit works together with another unit - registry-ui.service. When one is started the other does as well.)
Note the Conflicts=... line. I added it after spending time trying to figure out why the service wasn't shutting down properly. It did nothing. According to the docs, however, services have a Conflicts=shutdown.target line by default. When services conflict and one starts up, the other shuts down - or so the docs say.
What did I miss? Why won't my ExecStop= lines run?
Update
I've determined that ExecStop= lines do run. Using journalctl -u registry.service -n 200 gave me this message:
Error response from daemon: Cannot start container 7b9083a3f81710febc24256b715fcff1e8146c867500c6e8ce4d170ad1cfd11a: dbus: connection closed by user
Which indicates that the problem is (as I speculated in the comments) that my docker container won't start during shutdown. I've added the following lines to my [Unit] section:
[Unit]
After=docker.service docker.socket
Requires=docker.service docker.socket
...
The new lines have no effect on the journalctl error, so my question now becomes, is there a way to run a utility docker container prior to shutdown?
If I understand your goal correctly, you would like to run some DNS cleanup when a server is being powered off, and you are attempting to do this inside the systemd docker service file.
Why don't you use fleet for that task?
Try to create a fleet unit file that monitors your DNS server; when it detects that the server is not reachable, you can launch your cleanup tasks.
On fleet, when you destroy a service using fleetctl destroy, the ExecStop lines don't run (similar to power-off). If you want to have a cleanup session, you usually achieve this using a satellite service with these directives:
serviceFile.service
[Unit]
Wants=cleanup@serviceFile.service
cleanup@serviceFile.service
[Unit]
PartOf=%i.service
Instead of starting everything in one unit file, you can create 2 different services and bind them together using BindsTo= so that they start and stop together: http://www.freedesktop.org/software/systemd/man/systemd.unit.html#BindsTo=
That would give you two smaller and simpler unit files, each taking care of one container. It would also make debugging easier for you.
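A minimal sketch of the pattern (unit and image names are hypothetical):
registry-ui.service:
[Unit]
BindsTo=registry.service
After=registry.service
[Service]
ExecStart=/usr/bin/docker run --rm --name registry-ui my-registry-ui-image
With BindsTo=, registry-ui.service is stopped whenever registry.service stops; combined with After=, it is only started once the registry unit is up.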