Wait for connman to finish configuring the network in a systemd system - beagleboneblack

Is there any way to configure a systemd service (e.g. serviceX) to wait for the connmand service to finish configuring network interfaces before serviceX runs? From what I understand of systemd, relying on network.target is pointless because that functionality is horribly broken. The system I'm using (BeagleBone Black with Angstrom Linux) uses connman rather than NetworkManager.

According to systemd documentation, all systemd units that need to wait for a working online connection at boot time need to include the following:
[Unit]
...
Wants=network-online.target
After=network-online.target
If you want to be compatible with older systemd versions, you can also use:
[Unit]
...
Wants=network.target network-online.target
After=network.target network-online.target
That's for systemd. With NetworkManager (mentioned for completeness; I'm aware you're not using it), this works since upstream version 0.9.10, and some distributions, including Fedora, made it work with older upstream versions as well.
https://bugzilla.gnome.org/show_bug.cgi?id=728965
As you are using connman instead, you need to check whether connman implements network-online.target correctly. Checking the connman 1.30 source code reveals no occurrence of network-online.target at all, so I must assume that connman is lagging behind. You may want to file a feature request with connman and/or your Linux distribution. In that case it would be nice if you added a note about it here.
Basically, with a newer systemd version, a network service that implements network-online.target correctly, and services that declare the correct dependencies, everything should work out of the box for the user.
According to a comment to the other answer, the Unit section of connman.service looks as follows:
[Unit]
Description=Connection service
After=syslog.target
There should really be a Before=network.target in there, at the least. The After=syslog.target is redundant with current systemd versions. But a full implementation of network-online.target would be preferable.
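Until connman ships that itself, a user-side drop-in can add the missing ordering. A minimal sketch (the path follows the standard systemd drop-in convention; this is not something connman provides):

# /etc/systemd/system/connman.service.d/ordering.conf
# Create with "systemctl edit connman.service", or by hand followed by
# "systemctl daemon-reload".
[Unit]
Before=network.target
Wants=network.target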

Wants=network.target network-online.target and After=network.target network-online.target seem to be insufficient for Angstrom on the BeagleBone Black at the time of this writing. I also had to add connman.service to Wants= to get everything working correctly.
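Spelled out, the resulting [Unit] section would look like this (listing connman.service in After= as well is the natural companion to enforce ordering, though the Wants= addition alone is what the answer above describes):
[Unit]
...
Wants=network.target network-online.target connman.service
After=network.target network-online.target connman.service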

Following the instructions here, I found that this line (in the [Unit] section of the .service file) worked for me:
Wants=network-online.target
That waits for the network to be up, which can slow down boot. (Note that systemd does not allow a trailing comment after a directive on the same line, so keep any comment on its own line.)
I applied this fix for the purpose described, to get opkg upgrade working correctly, which it now does. I think using network-online rather than network may be the trick.

Related

Does docker allow for robotic process automation?

Can Docker containers be used along with UI-based RPA tools like Blue Prism or UiPath? Blue Prism recommends using virtual machines but offers no support for Docker.
Yes, it should be possible. I'm unfamiliar with the solutions you describe, so I'm unable to provide you with specific examples.
Any Linux (and Windows) process can be run in a container.
Docker made containers into a "thing", but they're really just (very useful) conceptual sugar over Linux namespaces and cgroups that makes the functionality more accessible. They provide a way to segregate one or more Linux processes and their resources.
So, unless someone else has done the "containerization" already (likely), you should be able to do this reasonably easily yourself. The primary challenge will be relaxing the container boundary to access the machine's resources or those of other processes.
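To give a flavor of that boundary-relaxing, a hypothetical sketch for a GUI-dependent bot (the image name is made up; sharing the host's X11 socket is just one way to let a containerized process drive a display):

# Share the host's X11 display with the container (image name is illustrative)
docker run -d \
  --name rpa-bot \
  -e DISPLAY="$DISPLAY" \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  your-rpa-image:latest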
I can confirm that Kofax RPA is fully supported on Docker (and Kubernetes).
Yes, containerization is possible for the Blue Prism server, but not for the client/resource, as it requires a terminal session to run the desktop applications.

How to interact with already running instance via terminal in Mongooseim?

I am using MongooseIM 3.2.0 built from source on an Ubuntu server. My concerns are:
What is the best way to run MongooseIM as a service so that it automatically restarts if MongooseIM crashes or the system restarts?
How do I interact via the terminal with an already running MongooseIM instance on the Ubuntu server, like "mongooseimctl live"? My guess is that running "mongooseimctl live" will try to create another instance. I just want to see the live logs and interact with the server, without scrolling through long log files.
I apologize if the answers to the above are obvious; I just want to follow the best guidance.
mongooseimctl live or mongooseimctl foreground is mostly useful for development or for smoke-testing a deployment (unless you're running inside a container). For real-world use you should start the server in the background with mongooseimctl start.
Back to containers: the best approach for containerized applications is to run them in the foreground, so in a container startup script use mongooseimctl foreground.
Once the server is running (no matter how it was started), attaching a shell to troubleshoot issues can be done with mongooseimctl debug. This is the command to use when you get the error Protocol 'inet_tcp': the name mongooseim@localhost seems to be in use by another Erlang node. Be careful if it's a production environment: with access to this shell you can easily take the server down.
If you're just interested in watching logs, with no interactive access to the server internals that the shell offers, a simple tail -f /your-configured-mongooseim-log-dir/* should be enough.
Ubuntu nowadays uses systemd for managing its services' lifecycles. A systemd .service file can be found at https://github.com/esl/MongooseIM/blob/master/tools/pkg/platforms/debian_stretch/files/build/mongooseim.service - we use it for packaging the Debian/Ubuntu .deb packages.
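For reference, a trimmed sketch of such a unit (the mongooseimctl path is an assumption for a source install; see the linked file for the packaged version):

[Unit]
Description=MongooseIM XMPP server
After=network.target

[Service]
Type=forking
# "mongooseimctl start" detaches, hence Type=forking; a production unit may
# also want User= and something that waits for the node to come up
ExecStart=/usr/local/bin/mongooseimctl start
ExecStop=/usr/local/bin/mongooseimctl stop
Restart=on-failure

[Install]
WantedBy=multi-user.target

Enabling it with systemctl enable --now mongooseim.service then covers both crash restarts (via Restart=) and system reboots (via the install section).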

Advantages of Docker layer over plain OS

I'm not really good at administrative tasks. I need a couple of Tomcat, LAMP, and Node.js servers behind nginx. Setting everything up directly on the system seems really complicated to me. I'm thinking about containerizing the servers: install Docker and create an nginx container, a Node.js container, etc.
I expect it to be easier to manage; only the routing to the front nginx may be a bit of a hassle. It will also let me back up, add servers, and so on easily, not to mention remote deployment and management, and a repeatable server setup. Separation will probably also shield me from the recurring problem of completely breaking the server by changing some init script, botching some app server's setup, etc.
Is my expectation correct that Docker will abstract me a little further from the "raw" system administration?
Side question: is there an administrative GUI somewhere that I can run to easily deploy, start/stop, and interconnect the containers?
UPDATE
I found a nice note here:
By containerizing Nginx, we cut down on our sysadmin overhead. We will no longer need to manage Nginx through a package manager or build it from source. The Docker container allows us to simply replace the whole container when a new version of Nginx is released. We only need to maintain the Nginx configuration file and our content.
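A sketch of that idea (host paths are examples; the image is the stock nginx one): the nginx binary lives in the image, and only the configuration and content stay on the host.

docker run -d --name web -p 80:80 \
  -v /srv/nginx/nginx.conf:/etc/nginx/nginx.conf:ro \
  -v /srv/www:/usr/share/nginx/html:ro \
  nginx

# Upgrading nginx then amounts to swapping the container:
docker pull nginx
docker rm -f web
# ...then re-run the "docker run" command above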
Yes, Docker will do this for you, but that does not mean you will no longer administer an OS for the services you run.
It's more that Docker simplifies that management, because you:
do not need to pick a single OS for all of your services, which would otherwise force you to install a service out-of-band because it has not been released for the OS of your choice, leaving you with the wrong version and so on. Instead, Docker gives you the option to pick the right OS or OS version (Debian wheezy or jessie, Ubuntu 12.x, 14.x, 16.x, or even Alpine) for the service in question.
Also, Docker offers you pre-made images, so you don't need to build the images for nginx, MySQL, Node.js and so on yourself. You can find those on https://hub.docker.com
Docker makes it very easy and convenient to remove a service again, without littering your system over time.
Docker gives you better "mobility": you can easily move the stack or replicate it on a different host; you do not need to reconfigure the host and hope it turns out to "be the same".
With Docker you do not need to think about configuration drift in containers over their lifetime, or during stack improvements, since they are recreated from the image again and again, from scratch: no convergence.
But Docker also has cons:
It adds complexity, since you might run "more microservices". You might need service discovery and a live configuration system, and you need to understand the storage system (volumes) quite a bit.
Docker does not "remove" the OS layer, it just makes it simpler. You still need to maintain it.
Volumes in general might not feel as simple as local file storage (it depends on what you choose).
GUI
I think the closest match to what you'd define as a "GUI" is Rancher (http://rancher.com/). It's more than a GUI: it's a complete Docker server-management stack. High learning curve at first, a lot of gain afterwards.
You will still need to manage the docker host OS. Operations like:
Adding Disks from time to time.
Security Updates
Rotating Logs
Managing Firewall
Monitoring via SNMP/etc
NTP
Backups
...
Docker Advantages:
Rapid application deployment
Portability across machines
Version control and component reuse
Lightweight footprint and minimal overhead
Simplified maintenance
...
Docker Disadvantages:
Adds Complexity (Design, Implementation, Administration)
GUI tools available, some of them are:
Kitematic -> Windows/Mac
Panamax
Lorry.io
Docker UI
...
Recommendation: start learning the Docker CLI, as the GUI tools don't have all the nifty CLI features.

Docker, CoreOS and fleet based deployments

I am trying to wrap my head around CoreOS. I perused their official docs, some random articles, and even watched this excellent presentation by their CTO.
My understanding of CoreOS is that it's a stripped-down, bare-bones Linux distribution that requires anything running on it to be an OCF-compliant container, not just a Docker container.
My understanding of fleet is that it's systemd at the cluster level.
My understanding of flannel is that it's a network layer used by both etcd and fleet to route network requests to containers living in the cluster.
So first off, if my above assertions are incorrect or misled in any way, please begin by correcting me! Assuming that I'm more or less on track, I have a few concerns here:
What concrete benefit(s) does CoreOS offer Docker-contained apps that are not present with other Linux distros, such as Ubuntu or Debian? In other words, what objective benefits do I gain by going Docker/CoreOS vs. Docker/Ubuntu?
Fleet just seems like a scheduling engine, like Mesos or Kubernetes. Is it a direct competitor to these projects, or do they handle scheduling at different "layers" (different types of responsibilities)? If so, what are these distinctions?
There are a lot of moving parts here. The answer already posted is very good. I think there are going to be opinions in any answer you get. I thought I'd go through your punch list in my attempt at 100 bounty points :-)
I've been using CoreOS/Flannel/Kubernetes/Fleet everyday now for about 6 months. When you posted the url to the introduction I decided to watch it. Wow, great presentation. I think Brandon Philips is a very good teacher. I like the way he built upon each technology as he introduced it. I would recommend that tutorial to anyone.
CoreOS is a linux based operating system. It is very stripped down, nothing extra running. For me, it does these things:
Auto-updates. It does this well: dual partitions, it updates the non-active one, swaps which is active, and falls back (I think; I have never experienced a fallback). They have tackled the "how do you update your operating system after you deploy" issue and made it relatively painless.
systemd init system. This one took me a bit longer to like (being an /etc/init.d guy), but after a while it grows on you. There is a pretty steep learning curve. Once you get what is going on, you will like how systemd keeps the machine running specific things, handles dependencies and restarts (if you want), listens on sockets (like the inetd super-server) and spawns processes, and integrates with D-Bus (although I don't know much about that part yet). systemd lets you specify "units", and units can have dependencies, pre- and post-processes, etc.
basic services. I've copied the brief description line from each of the services that are running on my CoreOS system.
systemd - It provides a system and service manager that runs as PID 1 and starts the rest of the system
docker - Docker is an open source project to pack, ship and run any application as a lightweight container
etcd - etcd is a distributed, consistent key-value store for shared configuration and service discovery
sshd - sshd (OpenSSH Daemon) is the daemon program for ssh(1). Together these programs replace rlogin and rsh, and provide secure encrypted communications between two untrusted hosts over an insecure network.
locksmithd - locksmith is a reboot manager for the CoreOS update engine which uses etcd to ensure that only a subset of a cluster of machines are rebooting at any given time. locksmithd runs as a daemon on CoreOS machines and is responsible for controlling the reboot behaviour after updates.
journald - systemd-journald is a system service that collects and stores logging data.
timesyncd - systemd-timesyncd is a system service that may be used to synchronize the local system clock with a remote Network Time Protocol server
update_engine
udevd - systemd-udevd listens to kernel uevents. For every event, systemd-udevd executes matching instructions specified in udev rules. See udev(7).
logind - systemd-logind is a system service that manages user logins.
resolved - systemd-resolved is a system service that manages network name resolution. It implements a caching DNS stub resolver and an LLMNR resolver and responder.
hostnamed - This is a tiny daemon that can be used to control the host name and related machine meta data from user programs.
networkd - systemd-networkd is a system service that manages networks. It detects and configures network devices as they appear, as well as creating virtual network devices.
CoreOS doesn't necessarily require that everything you want to run be a container. It will run anything a Unix box will run. yum and apt-get are conspicuously missing, but wget is included. So you can "install" programs, libraries, even apt-get via wget and be on your way to polluting the CoreOS base. That wouldn't be good, though; you really do want to keep it pristine. To that end, they include a "toolbox" which gives you a container-like sandbox to do your work in, one that goes away when you log out of it.
My favorite part of CoreOS is the cloud-config. On first boot you can provide user_data called a cloud-config. It is a YAML file which tells the base CoreOS what to do when it boots the first time. This is where you install things like fleet, flannel, Kubernetes, etc. It is a really easy way to get a repeatable install of a combination of your choosing on a VM. In a typical cloud-config I will write configuration files, copy files from other machines to install on the new machine, and create unit files that control the other processes we want CoreOS's systemd to manage (like flannel, fleet, etc). And it is completely repeatable.
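A minimal illustrative cloud-config (the discovery token, file path, and contents are placeholders, not a recommendation):

#cloud-config
coreos:
  etcd2:
    discovery: https://discovery.etcd.io/<your-token>
  units:
    # start the cluster services on first boot
    - name: etcd2.service
      command: start
    - name: fleet.service
      command: start
write_files:
  # drop an arbitrary config file onto the new machine
  - path: /etc/example/app.conf
    content: |
      key=value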
Here is another interesting thing about CoreOS: you can modify the dependency and configuration of existing units. For example, CoreOS starts Docker, but I want to modify Docker's startup sequence, so I can add a drop-in configuration that augments the existing system Docker configuration. I use this to drop in a dependency on flannel before Docker starts, so I can configure Docker to use a flannel-provided network. This isn't necessarily CoreOS-specific, but it does make it all fit together.
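Such a drop-in might look like this (the file name follows the systemd drop-in convention; the flanneld.service unit name is an assumption about your flannel setup):

# /etc/systemd/system/docker.service.d/40-flannel.conf
[Unit]
Requires=flanneld.service
After=flanneld.service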
I think you can use cloud-config with Ubuntu as well as CoreOS, and you can do the same things. So I think the benefit you get from CoreOS over Ubuntu is that you get new releases often, the operating system auto-updates, and you don't have anything "extra" running (it's lean, and a reduced attack surface falls out of that). CoreOS is tuned for Docker (it is already running), while Ubuntu doesn't have it running out of the box, although you can create a cloud-config file that will make Ubuntu run Docker... In summary, I think you have CoreOS understood.
Another thing that you can get with CoreOS is support, directly from the company, either paid or unpaid. I have had many questions answered by the people at CoreOS via this forum and CoreOS Dev/CoreOS User Google groups.
Your fleet description is also pretty good. Fleet manages a cluster: one or more CoreOS machines. So if you are going to use fleet you must use CoreOS; I guess this would be another of those benefits of CoreOS over Ubuntu.
Much like a unit file for systemd controls running a process on a host, a unit file for fleetd controls running a process on a cluster. There is a bit of syntactic sugar, but a unit file for fleet is about the same as a unit file for systemd. They fit very well together. Fleet's unit files are saved in etcd's database, so once ingested the unit is persistent: even if the machine(s) hosting the unit's service go down, the unit description still exists in etcd.
Fleet also has commands for listing the machines in the cluster, listing a unit file, showing the units that are running, and so on. Basically you can submit units to run on the cluster (on all machines, on a specific kind of machine (like one with SSD drives), on the same machine as something else that is running (affinity), etc.). A sketch follows.
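To make that concrete, here is a sketch of a fleet unit with a scheduling constraint, plus the fleetctl calls to run it (unit name, image, command, and metadata are illustrative):

# hello.service - same syntax as a systemd unit, plus an [X-Fleet] section
[Unit]
Description=Hello World

[Service]
ExecStart=/usr/bin/docker run --rm --name hello busybox /bin/sh -c "while true; do echo hello; sleep 1; done"
ExecStop=/usr/bin/docker stop hello

[X-Fleet]
# only schedule onto machines tagged with disk=ssd
MachineMetadata=disk=ssd

fleetctl list-machines
fleetctl start hello.service
fleetctl list-units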
Fleet keeps it running. If the machine goes away its units are going to be run on some other machine in the cluster.
In the tutorial you referenced, Brandon uses Fleet to launch Kubernetes. It is very simple to do. By making the Fleet unit files place Kubernetes on all machines in the fleet cluster, as machines are added to and removed from the cluster, Kubernetes automatically picks them up and schedules work onto them. I have run my Kubernetes cluster like this as well. However, I don't do that much anymore. I am sure there is a benefit that I don't see, but I feel like it is not necessary in my environment. Since I already boot my machines with a cloud-config file, it is trivial to put the Kubernetes node services directly in there. In fact, with cloud-config, if I wanted to use Fleet to boot the Kubernetes stuff, I would have to write the Fleet unit files, start Fleet, and submit the unit files I wrote to Fleet, when I could instead just write a unit file that starts the Kubernetes node. But I digress...
Fleet is a scheduling mechanism, just like Kubernetes. However, Fleet can start any executable just like systemd via a unit file, where Kubernetes is geared towards containers. Kubernetes allows definition of:
replication controllers
services
pods
containers
(other stuff as well).
So, the assertion that Fleet is just a different "layer" of scheduling is a good one. You might add that Fleet schedules different things. In my work I don't use the Fleet layer; I jump directly to Kubernetes because I am working only with containers.
Finally, the assertion about flannel is incorrect. Flannel uses etcd for its database. Flannel creates a private subnet for each host and routes between them. The flannel network is handed to Docker, and Docker is told to assign IP addresses from it. So Docker processes that use flannel can communicate with each other over IP. All of the port-mapping stuff can be skipped, since each container gets its own IP address. These Docker processes can communicate inter- and intra-machine on the flannel network. I could be wrong, but I don't think there is any connection between Fleet and flannel. Also, I don't think etcd or Fleet use flannel to route their data; etcd and Fleet route whether or not flannel is being used. Docker containers route their traffic over flannel.
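For reference, flannel's configuration itself lives in etcd, and the subnet lease it acquires is handed to the Docker daemon. A sketch of the wiring (the CIDR is an example; older Docker versions use "docker -d" instead of "docker daemon"):

# Tell flanneld, via etcd, which overlay network to carve host subnets from
etcdctl set /coreos.com/network/config '{ "Network": "10.1.0.0/16" }'

# flanneld writes its lease to /run/flannel/subnet.env; point Docker at it
. /run/flannel/subnet.env
docker daemon --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}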
-g
Yes, your understanding is pretty much correct.
CoreOS is designed as a more secure operating system that auto-updates itself by default and runs the bare minimum of services to reduce the attack surface. http://www.activestate.com/blog/2013/08/alex-polvi-explains-coreos
Everything needs to either run in a container, be statically compiled (Go binaries, for example), or be a shell script. There is no Python or Ruby installed.
Containers/systemd units started by Fleet can be rescheduled onto another node should their server fail (assuming your container is ephemeral), and Fleet will keep the requested number of instances running across the cluster while obeying deployment constraints. https://coreos.com/using-coreos/clustering/
Mesos is more of a framework for a scheduler; you still need something else (Chronos/Marathon) to provide jobs to execute, but it is very flexible in that regard and handles utilizing server resources better.
I don't have much experience with Flannel, but the new networking plugins coming in a future version of Docker may give you more options for container networking. http://blog.docker.com/2015/06/networking-receives-an-upgrade/
what objective benefits do I gain by going Docker/CoreOS vs. Docker/Ubuntu?
Technical benefits
The features that attract me to CoreOS are:
It's a cluster, not a single-machine, OS
It's built with failure in mind
It's self-updating
CoreOS is a cluster, while Ubuntu is a single machine. With CoreOS, when the machine a container is on disappears, the cluster starts the container somewhere else. When that Ubuntu server fails, its containers go down with it. CoreOS allows the machine to be disposable, which is a good thing.
With that said, keep in mind that CoreOS does not handle data persistence; data stored in a container does not persist! ;) In my case I dynamically attach EBS volumes where needed.
Design benefits
To me, more importantly, the technical benefits above bring along design benefits. Going into a system design knowing that "this process will randomly disappear" is great for building resiliency. From the beginning, services are stateless, and because you literally have no idea what system a dependent service is on, they must also be discoverable. CoreOS's etcd, a distributed configuration store, can be used to discover where a service is located. Finally, because processes may not be on the same machine, network-accessible services, a must for horizontally scalable systems, are the only way to go.
All in all, I find CoreOS great for building Twelve-Factor Apps, and you get Chaos Monkey for free.
Fleet just seems like a scheduling engine, like Mesos or Kubernetes. Is it a direct competitor to these projects, or do they handle scheduling at different "layers" (different types of responsibilities)? If so, what are these distinctions?
Yes, Fleet schedules a container and determines where in a cluster it runs. If that machine disappears, Fleet also takes responsibility for re-launching it on a working machine.
I haven't taken a deep dive into Kubernetes, but there does appear to be overlap. The way I understand it so far, Fleet handles running a single container (a "unit"), while Kubernetes is complementary and orchestrates the multiple units comprising a system. For example, Fleet ensures Postgres stays running; Kubernetes ensures that your application, comprised of, say, Postgres, Redis, and Django, is humming away as a whole.

Start Docker container using systemd socket activation?

Can an individual Docker container, for example a web server, that exposes (listens on) a port be started using systemd's socket activation feature? The idea is to save resources by starting a container only when it is actually needed for the first time (and possibly stop it again when idle to save resources).
Note: This question is not about launching the Docker daemon itself using socket activation (which is already supported), but about starting individual containers on demand.
In short, you can't.
But, if you wanted to approach a solution, you would first need to run a tool like CoreOS or geard that runs each Docker container in a systemd service.
Even then, Docker's support for inheriting the socket has come and gone. I know geard is working on stable support. CoreOS has published generalized support for socket activation in Go. Red Hat folks have also added related patches to Fedora's Docker packages that use Go's socket-activation library and improve "foreground mode", a key component in making it work.
(I am the David Strauss from Lennart's early article on socket activation of containers, and this topic interests me a lot. I've emailed the author of the patch at Red Hat and contacted the geard team. I'll try to keep this answer updated.)
If it has to be done using systemd, there was a blog post last month about that, here (I haven't tried it myself yet).
If the choice of technology is not a hard constraint, you could just write a small proxy in your favorite programming language that makes a Docker API call to ensure the container is started. That's the way snickers (my experimental Node.js proxy) does it.
Yes, you can, with Podman.
Podman has supported socket activation of containers since version 3.4.0 (released September 2021).
(Docker does not yet support socket activation of containers, so you would need to use Podman for this.)
Example 1: mariadb
I wrote a small example demo of how to set up socket activation with systemd, podman and a MariaDB container:
https://github.com/eriksjolund/mariadb-podman-socket-activation
MariaDB supports socket activation since version 10.6 (released April 2021)
Example 2: nginx
https://github.com/eriksjolund/podman-nginx-socket-activation
See also my answer
https://stackoverflow.com/a/71188085/757777
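For a flavor of how the pieces fit together, a minimal sketch (unit names, port, and image are illustrative; see the linked repos for working examples). The socket unit owns the port; on the first connection systemd starts the matching service, and podman passes the inherited socket into the container, whose main process must itself support socket activation:

# echo.socket - systemd owns the listening port
[Socket]
ListenStream=8080

[Install]
WantedBy=sockets.target

# echo.service - activated on the first connection; podman forwards the
# inherited socket (even --network=none works, since the container never
# opens the port itself)
[Service]
ExecStart=/usr/bin/podman run --rm --network=none your-socket-activated-image

After systemctl start echo.socket, nothing runs until the first client connects to port 8080.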
