Say you had a bunch of WordPress containers running on a machine, with each application sitting behind a cache. Is there a way to stop a container and start it again only when a URL is not found in the cache?
systemd provides a Socket Activation feature that can activate a service on an incoming TCP connection and proxy the connection in. Atlassian have a detailed article on using it with Docker.
I don't believe systemd has the ability to stop the service when there is no activity. You will need something that can shut the service down once there are no connections left being served; this could be done inside the WordPress app container or externally via systemd on the host.
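To make the socket-activation half concrete, here is a rough sketch along the lines of that article; the unit names, ports, and the wordpress-container.service unit are placeholders rather than anything taken from it. The socket unit owns the public port, and the matching service starts the container's unit and proxies connections into it with systemd-socket-proxyd:

wordpress-proxy.socket
[Unit]
Description=Public socket for the WordPress container
[Socket]
ListenStream=80
[Install]
WantedBy=sockets.target

wordpress-proxy.service
[Unit]
# wordpress-container.service is assumed to be a unit that runs `docker start -a wordpress` or similar
Requires=wordpress-container.service
After=wordpress-container.service
[Service]
# Forward every connection on port 80 to the container's published port
# (the proxy binary may live under /usr/lib/systemd/ depending on the distribution)
ExecStart=/lib/systemd/systemd-socket-proxyd 127.0.0.1:8080

Stopping the container again once it goes idle is the part systemd will not do for you here; that logic has to live in the app container or in an external watchdog, as noted above.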
Some more socket reading from the systemd developer:
http://0pointer.de/blog/projects/socket-activated-containers.html
http://0pointer.de/blog/projects/socket-activation2.html
http://0pointer.de/blog/projects/socket-activation.html
I have a VM running Ubuntu 16.04 on which I want to deploy an application packaged as a Docker container. The application needs to be able to perform HTTP requests to a server that is only reachable over a VPN (e.g. server1.vpn-remote.com).
I successfully configured the host VM to connect to the VPN through openconnect, and I can turn this connection on and off using a systemd service.
Unfortunately, when I run docker run mycontainer, neither the host nor the container is able to reach server1.vpn-remote.com. Weirdly enough, no error is displayed in the VPN connection service logs, which are stuck at the openconnect messages confirming a successful connection.
If I restart the VPN connection after starting mycontainer, the host machine is able to access server1.vpn-remote.com, but not the container. Moreover, if I issue any command like docker run/start/stop/restart on mycontainer or any other container, the connection gets broken again even for the host machine.
NOTE: I already checked the IP routes and there seems to be no conflict between the Docker and VPN subnets.
NOTE: running the container with --net="host" results in both host and container being able to access the VPN, but I would like to avoid this option as I will eventually move to a Docker Compose deployment, which requires all containers to run in bridge mode.
Thanks in advance for your help
EDIT: I figured out it is a DNS issue: I am able to ping the IP corresponding to server1.vpn-remote.com even when the VPN connection seemed to be failing. I'm going through the documentation on DNS management with Docker and Docker Compose and their use of the host's /etc/resolv.conf file.
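One of the options I am looking at, for example, is pointing Docker explicitly at the VPN's DNS servers instead of letting it copy the host's resolv.conf; the addresses below are just placeholders for whatever the VPN actually pushes:

# /etc/docker/daemon.json on the host (restart the Docker daemon afterwards)
{
  "dns": ["10.0.0.2", "1.1.1.1"]
}

# or per container
docker run --dns 10.0.0.2 mycontainer

# or per service in docker-compose.yml:
#   dns:
#     - 10.0.0.2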
I hope you don't still need help six months later! Some of the details are different, but this sounds a bit like a problem I had. In my case the solution was a bit disappointing: after you've connected to your VPN, restart the docker daemon:
sudo systemctl restart docker
I'm making some inferences here, but it seems that, when the daemon starts, it makes some decisions and sets up some configuration based on the state of the network at that time. In my case, the daemon starts when I boot up. Unsurprisingly, when I boot up, I haven't had a chance to connect to the VPN yet. As a result, my container traffic, including DNS lookups, goes through my regular network directly rather than the VPN.
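If you want a quick check after restarting the daemon, something like the following (assuming the busybox image and the hostname from the question) should now resolve through the VPN's DNS:

docker run --rm busybox nslookup server1.vpn-remote.com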
Hat tip to this answer for guiding me down the correct path.
After finding a solution for this problem, I have another question: I am running a Flask app in a Docker container (my web map), and on this map I want to show tiles served by a (Flask-based) Terracotta tile server running in another Docker container. The two containers are on the same Docker network and can talk to each other; however, only the port where my web server is running is open to the public, and I'd like to keep it that way. Is there a way I can serve my tiles somehow "from local" without opening the port of the tile server? Maybe by setting up some redirects or something?
The main reason for this is that I need someone else to open ports for me, which takes ages.
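To make the "redirects or something" idea a bit more concrete, what I have in mind is roughly the sketch below: the public Flask app fetches tiles from the tile server over the internal Docker network and re-serves them itself, so only the web map's port needs to be open. The container name terracotta and port 5000 are placeholders for my actual setup:

# Hypothetical proxy route inside the public Flask app
from flask import Flask, Response, request
import requests

app = Flask(__name__)

# Only reachable on the shared Docker network, never published to the host
TILE_SERVER = "http://terracotta:5000"

@app.route("/tiles/<path:tile_path>")
def proxy_tile(tile_path):
    # Forward the request to the internal tile server and relay its answer
    upstream = requests.get(f"{TILE_SERVER}/{tile_path}", params=request.args)
    return Response(
        upstream.content,
        status=upstream.status_code,
        content_type=upstream.headers.get("Content-Type"),
    )

Would something along these lines be a reasonable way to do it?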
If you are running your Docker containers on a remote machine like EC2, then you need not worry about a port being open to the public, as ports are closed by default on EC2 and similar services. You just need to open the port on which your app is running; you can use the AWS console for that.
If you are running your Docker container locally, or on a server for which you don't have console access, then you can use some kind of firewall to open or close a port. I personally prefer UFW on Ubuntu systems. You can allow a port with a simple command such as sudo ufw allow 9000 to accept incoming connections on port 9000. Similarly, you can deny incoming packets to a port, or open a port only to a certain IP (such as your own) using sudo ufw allow from <ip address> to any port 9000.
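Putting that together, a typical sequence might look like this; the port numbers and the address are only examples:

sudo ufw allow 9000/tcp                            # expose the public web app
sudo ufw deny 5000/tcp                             # keep the tile server port closed
sudo ufw allow from 203.0.113.7 to any port 5000   # or open it only to your own IP
sudo ufw enable
sudo ufw status verbose

One caveat: ports published with docker run -p are wired into iptables by Docker itself and can bypass UFW rules, so for the tile server the safest option is simply not to publish its port at all, which also matches what you want.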
I have a server running multiple web applications inside separate Docker containers. I am using Traefik as a reverse proxy. Whenever a container is idle for, say, 15 minutes, I stop it from the inside (I end the running process, which causes the container to stop). How can I restart the container on demand, i.e. when there is an incoming request for the stopped container?
As asked, I am not using any cluster manager or anything like that. Basically, I have an API server which uses the docker-py library to create images and containers. Traefik listens to Docker events and generates its configuration whenever a container is created, routing URLs to the respective containers.
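For context, the on-demand start I am after would look roughly like this with docker-py; the container name is a placeholder:

# Hypothetical handler in my API server: start a container again when a request targets it
import docker

client = docker.from_env()

def ensure_running(name: str) -> None:
    container = client.containers.get(name)  # raises docker.errors.NotFound if it does not exist
    if container.status != "running":
        container.start()  # Traefik sees the start event and restores the route

ensure_running("myapp")

The part I am missing is what should trigger ensure_running when a request arrives for a container that is currently stopped.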
I tried systemd socket activation. Here are the socket and service files.
app.socket
[Unit]
Description=App Socket
[Socket]
ListenStream=3000
Accept=yes
[Install]
WantedBy=sockets.target
app@.service
[Unit]
Description=App Service
Requires=app.socket
[Service]
Type=simple
ExecStart=/usr/bin/npm start --prefix /path/to/dir
StandardInput=socket
StandardError=journal
TimeoutStopSec=5
[Install]
WantedBy=multi-user.target
This is my current approach. My containers run Node apps, so I end the Node process inside the container. When I end the Node process, I enable and start app.socket, so that when traffic arrives on port 3000 the app is started again by socket activation.
But nothing happens when I try to access that port. I've confirmed that socket activation is working. When I execute the command date | netcat 127.0.0.1 3000, the app seems to start and then immediately stops without any error.
Maybe socket activation doesn't work the way I'm expecting it to. I can see that init (PID 1, i.e. systemd itself) is listening on port 3000 after enabling app.socket. As soon as traffic comes in on port 3000, I want my Node app to start inside the container. But how can the app bind to 3000 if there is already a process listening on that port?
Perhaps there's some way to do it with Traefik, since that is the reverse proxy I am using. Is there some functionality which would allow me to execute a command or script whenever a 404 occurs?
It would be more helpful if you could tell us how you are managing your Docker containers (Kubernetes, Swarm, or anything else). But based on your initial input, I guess you are looking for inetd or systemd socket activation. This post may be helpful: https://www.reddit.com/r/docker/comments/72sdyf/startrun_a_container_when_incoming_traffic/
I am writing a small application with Flask which is meant to interact with the Docker API in order to run containers on demand. I would like to deploy this application within a Docker container. However, I understand that it is considered bad practice to mount the Docker socket, as it effectively grants root privileges on the host.
Is there a proper method to access the Docker API from within a container that avoids this caveat?
Why is mounting the Docker socket to an unprivileged container a bad idea?
In order to mount the unix socket to your Docker container, you would need to change the permissions of the Docker daemon socket. This, obviously, could give non-root users the ability to access the Docker daemon, which might be a problem if you are worried about privilege escalation attacks. (source)
Do I really need to secure the Docker socket?
This depends on your use case. If you have many users on your server, and are particularly worried about a non-privileged user affecting your app, then definitely secure the socket. If this is a virtual machine that is completely dedicated to the app, insecure might be easier.
How do I interact with the socket insecurely?
Just change the permissions (described here) and then mount the socket to the container. It's that simple.
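Concretely, the insecure version is nothing more than something like this; the chmod is exactly the part that makes it insecure, and the image name is only an example:

# On the host: make the daemon socket world-writable (this is the insecure part)
sudo chmod 666 /var/run/docker.sock

# Mount the socket into the container that needs to talk to the Docker API
docker run -v /var/run/docker.sock:/var/run/docker.sock myflaskapp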
How do I interact with the socket securely?
I think there are two good ways of doing this:
Restart the Docker daemon with TLS authentication enabled. Rather than accessing the Unix socket, you access the API over HTTPS with signed TLS certificates. More instructions on setting that up can be found here; a rough sketch of the daemon and client flags follows after this list.
Use an Authorization Plugin on the unix socket as described here.
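For the first option, the moving parts look roughly like this; the certificate file names and port 2376 follow the usual Docker conventions, but treat the exact paths as placeholders:

# Daemon side: listen on TCP with mutual TLS instead of (or alongside) the Unix socket
dockerd --tlsverify \
  --tlscacert=ca.pem --tlscert=server-cert.pem --tlskey=server-key.pem \
  -H tcp://0.0.0.0:2376

# Client side: every call must present a certificate signed by the same CA
docker --tlsverify \
  --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem \
  -H tcp://your-docker-host:2376 version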
I have a system of three small Spring Boot apps, each serving a different purpose, which expose REST endpoints. The three apps are all meant to work off the same database (MariaDB). Right now, the system runs as four separate Docker containers: three containers for the three apps, and a fourth for MariaDB (based on the MariaDB Docker image). All three app containers connect to the database container using the --link option.
Each of the app containers was launched from the same image, using:
docker run -i -t -p 8080:8080 --link mariadb:mariadb javaimage /bin/bash
This Docker system currently works as expected. All three apps can reach MariaDB, and each app is accessible from the host machine via REST calls to http://localhost:8080/pathToEndpoint.

The project has recently expanded and a new requirement has been added. We are using Netflix Eureka as a service lookup point, which should in the future allow these containers to be deployed anywhere with minimal changes to the software calling them. Netflix Eureka requires the app to effectively "check in" when it is launched. This is all handled by Spring Boot itself, so the "check in" is part of the startup process. The Eureka server is on the same network as the host machine and, for the time being, is accessed via an IP address.

If the Spring Boot applications running this Eureka check-in component are launched directly on the host machine, everything works as expected: the app makes a successful call to the Eureka server and notifies it of the app's existence. If I run the same app within a Docker container on the same host machine, this fails because the connection is refused. Upon investigation I found I could not even ping the IP address of the Eureka server from within the Docker container, which explains why it is failing. Testing a little further into what does and does not work, I found I can ping external sites such as Google without a problem, but any servers internal to my network are unreachable when I try to ping them from within the Docker container.
Therefore my question is: what network configuration am I missing that causes this? I recognize that Docker has quite a lot of network configuration options, but I have not been able to find anyone with a similar issue.
Any help is appreciated. Thank you!