creating docker container for gpsd with auto usb option - docker

Docker noob here... I want to dockerize my project. There is a GPS module in it. I can use the Python gpsd library with the auto-USB option set to true; it automatically detects the USB port when the module is connected. However, I want to dockerize this setup but could not find any solution. On Ubuntu I can set up the gpsd options with systemd services, but there is no systemd in a Docker container. I do not want to set the USB port statically.
My local gpsd settings on Ubuntu:
/etc/default/gpsd
# Start the gpsd daemon automatically at boot time
START_DAEMON="true"
# They need to be read/writeable, either by user gpsd or the group dialout.
DEVICES=""
# Use USB hotplugging to add new USB devices automatically to the daemon
USBAUTO="true"
# Other options you want to pass to gpsd
GPSD_OPTIONS="-F /var/run/gpsd.sock"
/lib/systemd/system/gpsd.service
[Unit]
Description=GPS (Global Positioning System) Daemon
Requires=gpsd.socket
# Needed with chrony SOCK refclock
After=chronyd.service
[Service]
Type=forking
EnvironmentFile=-/etc/default/gpsd
ExecStart=/usr/sbin/gpsd $GPSD_OPTIONS $DEVICES
[Install]
WantedBy=multi-user.target
Also=gpsd.socket
/lib/systemd/system/gpsd.socket
[Unit]
Description=GPS (Global Positioning System) Daemon Sockets
[Socket]
ListenStream=/var/run/gpsd.sock
ListenStream=[::1]:2947
# ListenStream=127.0.0.1:2947
# To allow gpsd remote access, start gpsd with the -G option and
# uncomment the next two lines:
# ListenStream=[::1]:2947
ListenStream=0.0.0.0:2947
SocketMode=0600
[Install]
WantedBy=sockets.target
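For reference, the systemd units above essentially boil down to one gpsd process plus its control socket, so one possible (untested) approach is to run gpsd in the foreground inside the container and pass the host's device tree through instead of relying on USBAUTO. The image name gpsd-image below is just a placeholder, and note that the USBAUTO hotplug mechanism depends on host udev rules calling gpsdctl, which a plain container does not get:
# sketch only: gpsd in the foreground (-N), listening on all interfaces (-G),
# with the control socket enabled and the whole /dev tree passed through so
# no USB port has to be fixed at build time
docker run -d --name gpsd \
    --privileged -v /dev:/dev \
    -p 2947:2947 \
    gpsd-image \
    gpsd -N -G -F /var/run/gpsd.sock
# a detected receiver could then be attached at runtime, e.g.:
# docker exec gpsd gpsdctl add /dev/ttyUSB0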

Related

How to run a service running in a container in systemd, including systemd-notify and logging

We currently have a number of different services running on a host, and we are using systemd extensively, including systemd-notify for message passing and our own front-end for service management.
We would like to start running these services inside a container, to simplify dependency management and running multiple versions alongside each other for testing. We want:
systemd-notify
Logging to both systemd journal and syslog
Start and stop services with systemctl start / stop.
Please note: most questions are about running systemd inside a Docker container. That is not what this question is about. Instead, I want to run a (Docker?) container inside systemd.
We went with the following solution:
Podman
We decided to go with Podman. Podman is a wrapper around RunC, with the CLI tools tuned to be a drop-in replacement for Docker. However, because it's not running the containers under a daemon (which I like a bit better anyway), hardly any plumbing is required to make systemd-notify work.
Just specifying Environment=NOTIFY_SOCKET=/run/systemd/notify in the systemd service file suffices.
See here as well.
systemd-notify
Full example:
I'm using the systemd-notify test-script from: https://github.com/bb4242/sdnotify
Dockerfile
FROM python
COPY test.py /
RUN pip install sdnotify
RUN chmod 755 /test.py
ENTRYPOINT ["/usr/local/bin/python", "test.py"]
CMD ["run"]
EXPOSE 8080
build.sh - Creates the Podman container; it needs to be in the same folder as the Dockerfile and the test.py script.
#!/bin/bash
IMAGE_NAME=python-test
CONTAINER_NAME=python-test
sudo podman build . -t ${IMAGE_NAME}
sudo podman rm ${CONTAINER_NAME}
sudo podman create -e PYTHONUNBUFFERED=true --name=${CONTAINER_NAME} ${IMAGE_NAME}
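If you want to sanity-check the container before wiring it into systemd, you can start it manually with the same command the unit file below will use:
sudo podman start -a python-test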
notify-test.service
[Unit]
Description=A test service written in Python
[Service]
# Note: setting PYTHONUNBUFFERED is necessary to see the output of this service in the journal
# See https://docs.python.org/2/using/cmdline.html#envvar-PYTHONUNBUFFERED
Environment=PYTHONUNBUFFERED=true
Environment=NOTIFY_SOCKET=/run/systemd/notify
SyslogIdentifier=notify-test
NotifyAccess=all
ExecStart=/usr/bin/podman start -a python-test
ExecStop=/usr/bin/podman stop python-test
# Note that we use Type=notify here since test.py will send "READY=1"
# when it's finished starting up
Type=notify
[Install]
WantedBy=multi-user.target
So first install Podman, put the test.py from the URL above, the Dockerfile, and build.sh in a separate folder, and run ./build.sh.
Then take the .service file and put it with the other systemd service files in /etc/systemd/system. Run sudo systemctl daemon-reload.
Now, the service can be started and stopped with sudo systemctl start notify-test and sudo systemctl stop notify-test.
Logging
systemd will by default automatically log whatever is written to stdout/stderr to both its own journal (accessible with journalctl), and to the syslog.
See: https://www.freedesktop.org/software/systemd/man/systemd.exec.html
SyslogLevelPrefix=
Takes a boolean argument. If true and
StandardOutput= or StandardError= are set to journal or kmsg (or to
the same settings in combination with +console), log lines written by
the executed process that are prefixed with a log level will be
processed with this log level set but the prefix removed. If set to
false, the interpretation of these prefixes is disabled and the logged
lines are passed on as-is. This only applies to log messages written
to stdout or stderr. For details about this prefixing see
sd-daemon(3). Defaults to true.
Two issues:
Problem: When using podman as ExecStart=, the log-source will by default be the name of the executable, which is 'podman'.
Solution: Use the SyslogIdentifier= to specify the name for logging, like in the .service file example above.
Problem: There won't be any distinction between log levels for individual log lines.
Solution: As described here in the systemd documentation, prepend the log lines with <7> (for debug), <6> (for info), <4> (for warning) etc. to have systemd set the right log level everywhere, including for syslog. You even get colors in the journalctl tool for free!
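For instance, a script run this way could emit prefixed lines on stdout, and you can then filter them by the SyslogIdentifier set above (a small illustrative sketch):
# emit log lines with sd-daemon(3) level prefixes
echo "<6>service started"        # logged as info
echo "<4>disk usage above 90%"   # logged as warning
echo "<3>cannot reach backend"   # logged as error
# follow the service's journal entries by identifier and priority
journalctl -t notify-test -p warning -f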

How to "reset" a docker-compose systemd service?

I have created a systemd service that starts a set of Docker containers using Docker-Compose, as outlined in this answer:
# /etc/systemd/system/docker-compose-app.service
[Unit]
Description=Docker Compose Application Service
Requires=docker.service
After=docker.service
[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/srv/docker
ExecStart=/usr/local/bin/docker-compose up -d
ExecStop=/usr/local/bin/docker-compose down
TimeoutStartSec=0
[Install]
WantedBy=multi-user.target
This allows me to start the Docker-Compose services using
sudo systemctl start docker-compose-app
which uses docker-compose up -d under the hood, and shutting them down using
sudo systemctl stop docker-compose-app
which uses docker-compose down. Please note that the down command is run without the -v flag, which means that volumes will remain in place, preserving the data of my containers across restarts/recreation. This is pretty much what I want in the majority of cases.
There are situations where I want to erase all data in the services, basically running the down -v command instead of just down.
Is there a way to extend the above systemd service definition with an additional command (or to use one of the existing systemctl commands) that would allow me to run the occasional down -v when needed? I want to do this ad hoc, not scheduled or anything like that.
How can I run
docker-compose down -v
occasionally if needed through the same systemd setup, while keeping the standard functionality of maintaining the containers' data across restarts?
You may try to use the ExecReload= definition. Note that systemd does not interpret shell operators such as &&, so the two commands are given as separate ExecReload= lines, which are executed in sequence:
ExecReload=/usr/local/bin/docker-compose down -v
ExecReload=/usr/local/bin/docker-compose up -d
And then you can use:
sudo systemctl reload docker-compose-app
So "reload" command will be used for "reset" in this case.

How to make Dart/Aqueduct run permanently

I'm new to the Dart language, and also new to running API services on Linux.
My question is: how do I keep the Dart service running on Linux?
And how can I make it restart automatically if there is a problem with the service?
Do I need to run it from crontab?
You can create a systemd service for your Aqueduct application and enable it so that it runs automatically when your server starts. There are a lot of options for systemd services, but I have tried to make an example that matches your requirements:
[Unit]
Description=Dart Web Server
Wants=network-online.target
After=network-online.target
[Service]
Restart=always
ExecStart=/opt/dart-sdk/bin/dart bin/main.dart
WorkingDirectory=/tmp/web/my_project
User=webserver_user
[Install]
WantedBy=multi-user.target
Save this as /etc/systemd/system/name_of_your_service.service
Then run the following commands:
systemctl daemon-reload
This ensures that the latest changes to your available services are loaded into systemd.
systemctl start name_of_your_service.service
This will start your service. You can stop it with "stop" and restart it with "restart".
systemctl enable name_of_your_service.service
This will enable the service so it will start after boot. You can also "disable" it.
Another good command is the status command, where you can see some information about your service (e.g. whether it is running) and some of the latest log events (from stdout):
systemctl status name_of_your_service.service
Let me go through the settings I have specified:
"Wants"/"After" ensures that the service are first started after a network connection has been established (mostly relevant for when the service should start under the boot sequence).
"Restart" specifies what should happen if the dart process are stopped without using "systemctl stop". With "always" the service are restarted no matter how the program was terminated.
"ExecStart" the program which we want to keep running.
"User" is the user your want the service to run as.
The "WantedBy" part are relevant for the "systemctl enable" part and specifies when the service should be started. Use multi-user.target here unless you have some specific requirements.
Again, there are lot of options for systemd services and you should also check out journalctl if you want to see stdout log output for you service.
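For example, to follow the stdout of the service above as it is written to the journal:
journalctl -u name_of_your_service.service -f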

Configure Docker with proxy per host/url

I use Docker Toolbox on Windows 7 in a corporate environment. My workflow requires pulling images from one Artifactory instance and pushing them to a different one (e.g. external and internal). Each Artifactory requires a different proxy to access it. Is there a way to configure the Docker daemon to select a proxy based on the URL? Or, if not, what else can I do to make this work?
Since, as Pierre B. mentioned, Docker daemon does not support URL-based proxy selection, the solution is to point it to a local proxy configured to select the proper upstream proxy based on the URL.
While any HTTP(S) proxy capable of upstream selection would do (the pac4cli project being particularly interesting for its advertised ability to select the upstream based on the proxy auto-discovery protocol used by most web browsers in corporate settings), I've chosen to use tinyproxy as a more mature and lightweight solution. Furthermore, I've decided to run my proxy inside the docker-machine VM in order to simplify its deployment and make sure the proxy is always running when the Docker daemon needs it.
Below are the steps I used to set up my system. I'm especially grateful to phoenix for providing steps to set up Docker Toolbox on Windows behind a corporate proxy, and will borrow heavily from that answer.
From this point on I will assume either Docker Quickstart Terminal or Git Bash (with docker in the PATH) as your command-line console, and that "username" is your Windows user name.
Step 1: Build tinyproxy on your target platform
Begin by pulling a clean Linux distribution (I used CentOS) and running bash inside it:
docker run -it --name=centos centos bash
Next, install the tools we'll need:
yum install -y make gcc
After that we pull the latest release of Tinyproxy from its GitHub repository and extract it inside root's home directory (at the time of this writing the latest release was 1.10.0):
cd
curl -L https://github.com/tinyproxy/tinyproxy/releases/download/1.10.0/tinyproxy-1.10.0.tar.gz \
| tar -xz
cd tinyproxy-1.10.0
Now let's configure and build it:
./configure --enable-upstream \
    --disable-filter \
    --disable-reverse \
    --disable-transparent \
    --disable-xtinyproxy
make
While --enable-upstream is obviously required, disabling other default features is optional but a good practice. To make sure it actually works, run:
./src/tinyproxy -h
You should see something like:
Usage: tinyproxy [options]
Options are:
-d Do not daemonize (run in foreground).
-c FILE Use an alternate configuration file.
-h Display this usage information.
-v Display version information.
Features compiled in:
Upstream proxy support
For support and bug reporting instructions, please visit
<https://tinyproxy.github.io/>.
We exit the container by pressing Ctrl+D and copy the executable to a special folder location accessible from the docker-machine VM:
docker cp centos://root/tinyproxy-1.10.0/src/tinyproxy \
/c/Users/username/tinyproxy
Substitute "username" with your Windows user name. Please note that double slash — // before "root" is required to disable MINGW path conversion.
Now we can delete the container:
docker rm centos
Step 2: Point docker daemon to a local proxy port
Choose a TCP port number to run the proxy on. This can be any port that is not in use on the docker-machine VM. I will use number 8618 in this example.
First, let's delete the existing default Docker VM:
WARNING: This will permanently erase all currently stored containers and images
docker-machine rm -f default
Next, we re-create the default machine, setting the HTTP_PROXY and HTTPS_PROXY environment variables to the local host and the port we selected, and then refresh our shell environment:
docker-machine create default \
--engine-env HTTP_PROXY=http://localhost:8618 \
--engine-env HTTPS_PROXY=http://localhost:8618
eval $(docker-machine env)
Optionally, we could also set the NO_PROXY environment variable to a comma-separated list of hosts and/or wildcard patterns to which the daemon should connect directly, bypassing the proxy.
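For example, the NO_PROXY list could be included in the same docker-machine create call shown above (the host names here are placeholders):
docker-machine create default \
    --engine-env HTTP_PROXY=http://localhost:8618 \
    --engine-env HTTPS_PROXY=http://localhost:8618 \
    --engine-env NO_PROXY="localhost,127.0.0.1,.corp.example.com"
eval $(docker-machine env)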
Step 3: Set up tinyproxy inside docker-machine VM
First, we will create two files in the /c/Users/username directory (this is where our tinyproxy binary should reside after Step 1 above) and then we'll copy them to the VM.
The first file is tinyproxy.conf; the exact syntax is documented on the Tinyproxy website, but the example below should have all the settings you need:
# These settings can be customized to your liking,
# the port though must be the same we used in Step 2
listen 127.0.0.1
port 8618
user nobody
group nogroup
loglevel critical
syslog on
maxclients 50
startservers 2
minspareServers 2
maxspareServers 5
disableviaheader yes
# Here is the actual proxy selection, rules apply from top
# to bottom, and the last one is the default. More info on:
# https://tinyproxy.github.io/
upstream http proxy1.corp.example.com:80 ".foo.example.com"
upstream http proxy2.corp.example.com:80 ".bar.example.com"
upstream http proxy.corp.example.com:82
In the example above:
http://proxy1.corp.example.com:80 will be used to connect to URLs that end with "foo.example.com", such as http://www.foo.example.com
http://proxy2.corp.example.com:80 will be used to connect to URLs that end with "bar.example.com", such as http://www.bar.example.com, and
http://proxy.corp.example.com:82 will be used to connect to all other URLs.
It is also possible to match exact host names, IP addresses, subnets and hosts without domains.
The second file is the shell script that will launch the proxy; its name must be bootlocal.sh:
#! /bin/sh
# Terminate on error
set -e
# Switch to the script directory
cd $(dirname $0)
# Launch proxy server
./tinyproxy -c tinyproxy.conf
Now, let's connect to the docker VM, get root, and switch to the boot2docker directory:
docker-machine ssh
sudo -s
cd /var/lib/boot2docker
Next, we'll copy all three files over and set their permissions:
cp /c/Users/username/{tinyproxy{,.conf},bootlocal.sh} .
chmod 755 tinyproxy bootlocal.sh
chmod 644 tinyproxy.conf
Exit the VM session by pressing Ctrl+D twice and restart the VM:
docker-machine restart default
That's it! Now docker should be able to pull and push images from different URLs, automatically selecting the right proxy server.
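To double-check the setup, you can confirm that the daemon sees the proxy variables and then try a test pull:
docker info | grep -i proxy
docker pull hello-world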

How to change dockerd parameters with systemd? [duplicate]

This question already has answers here:
Setting DNS for Docker daemon on OS with systemd
(2 answers)
Closed 6 years ago.
Since the 16.04 release, Ubuntu stopped using Upstart and switched to systemd for its init system.
How can I change default DOCKER_OPTS parameters?
Execute the following commands as root (or with sudo).
To extend the default docker unit file with additional configuration options, first create a configuration directory in /etc/systemd/system/:
mkdir /etc/systemd/system/docker.service.d/
Now put a configuration file in /etc/systemd/system/docker.service.d/. It is imperative that the file name ends with the .conf suffix:
touch /etc/systemd/system/docker.service.d/docker.conf
To change daemon parameters, create a configuration file with the following content (this example adds the --dns option):
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// --dns 8.8.8.8
After saving the docker unit file, systemd needs to reload the modified data before it will take it into account:
systemctl daemon-reload
Finally, the docker service can be restarted:
systemctl restart docker
You can check the status by running:
systemctl status docker.service | grep dns
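To see the merged unit definition with the drop-in applied, you can also run:
systemctl cat docker.service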
Default
On Ubuntu, the default configuration is located in /lib/systemd/system/docker.service.
Resources
Control and configure Docker with systemd
Modifying Existing Unit Files
