When running a Docker Compose project, it would be nice to be able to open an app with certain parameters on the host operating system (the one on which docker-compose up is being invoked). This would be genuinely useful when running web apps. For example, I would love to have Docker Compose automatically open a browser on the host pointed at http://localhost:8080 when I run docker-compose up, rather than manually opening a browser and entering http://localhost:8080 myself, just the way Minikube does (e.g. when running minikube service web-deployment).
I am aware there are parameters to use in docker-compose.yml to pass commands to run in containers, like command and entrypoint, but I don't know if that is possible for applications on the host OS.
Compose can do a pretty limited set of things. It can build Docker images, without any ordering constraints, and it can start (presumably long-running) Docker containers, with very limited ordering constraints. It can create a couple of associated Docker objects like networks and named volumes. That's literally all it can do, though; it cannot do larger-scale orchestration tasks ("run this migration container to completion, then run this application") or launch non-Docker tasks.
You might use some host-based tool to manage this instead. Even a shell script would be enough; possibly something like
#!/bin/sh
# start the container stack
# (assumes the caller has permission to do this)
docker-compose up -d
# wait for the service to be ready
while ! curl --fail --silent --head http://localhost:8080; do
sleep 1
done
# open the browser window
open http://localhost:8080
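(open is the macOS command for this; on most Linux desktops the equivalent would be xdg-open.)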
Note: I've tried searching for existing answers in any way I could think of, but I don't believe there's any information out there on how to achieve what I'm after.
Context
I have an existing swarm running a bunch of networked services across multiple hosts. The deployment is done via docker-compose build && docker stack deploy. Several of the services contain important state necessary for the functioning of the main service this stack is for, including when interacting with it via CLI.
Goal
How can I create an ad-hoc container within the existing stack running on my swarm for interactive diagnostics and troubleshooting of my main service? The service has a CLI interface, but it needs access to the other components for that CLI to function, thus it needs to be run exactly as if it were a service declared inside docker-compose.yml. Requirements:
I need to run it in an ad-hoc fashion. This is for troubleshooting by an operator, so I don't know when exactly I'll need it
It needs to be interactive, since it's troubleshooting by a human
It needs to be able to run an arbitrary image (usually the image built for the main service and its CLI, but sometimes other diagnostics might be needed through other containers I won't know ahead of time)
It needs to have full access to the network and other resources set up for the stack, as if it were a regular predefined service in it
So far the best I've been able to do is:
Find an existing container running my service's image
SSH into the swarm host on which it's running
docker exec -ti into it to invoke the CLI
This however has a number of downsides:
I don't want to be messing with an already running container, it has an important job I don't want to accidentally interrupt, plus its state might be unrelated to what I need to do and I don't want to corrupt it
It relies on the service image also having the CLI installed. If I want to separate the two, I'm out of luck
It relies on some containers already running. If my service is entirely down and in a restart loop, I'm completely hosed because there's nowhere for me to exec in and run my CLI
I can only exec within the context of what I already have declared and running. If I need something I haven't thought to add beforehand, I'm sadly out of luck
Finding the specific host on which the container is running and going there manually is really annoying
What I really want is a version of docker run I could point to the stack and say "run in there", or docker stack run, but I haven't been able to find anything of the sort. What's the proper way of doing that?
Option 1
Deploy a diagnostic service as part of the stack: a container with useful tools in it, with a command of tail -f /dev/null. Use a placement constraint to deploy this to a known node.
services:
  diagnostics:
    image: nicolaka/netshoot
    command: tail -f /dev/null
    deploy:
      placement:
        constraints:
          - node.hostname == host1
NB. You do NOT have to deploy this service with your normal stack. It can be in a separate stack.yml file. You can simply stack deploy this file to your stack later, and as long as --prune is not used, the services are cumulative.
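For example, if the snippet above is saved as a separate diagnostics.yml (the file name is just a placeholder), it can be added to the existing stack later with:
docker stack deploy -c diagnostics.yml <stack>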
Option 2
To allow regular containers to access your services, make your network attachable. If you haven't specified a network explicitly, you can declare the default network explicitly:
networks:
  default:
    driver: overlay
    attachable: true
Now you can use docker run and attach a diagnostic container to the network:
docker -c manager run --rm --network <stack>_default -it nicolaka/netshoot
Option 3
The third option does not address the need to directly access the node running the service, and it does not address the need to have an instance of the service running, but it does allow you to investigate a service without affecting its state and without needing tooling inside the container.
Start by executing the usual commands to discover the node and container name and id of the service task of interest:
docker service ps ${service} --no-trunc --format '{{.Node}} {{.Name}}.{{.ID}}' --filter desired-state=running
Then, assuming you have docker contexts to match your node names, pick one ${node} and ${container} pair from the list of {{.Node}} and {{.Name}}.{{.ID}} values, and run a container such as ubuntu or netshoot, attaching it to the network namespace of the target container.
docker -c ${node} run --rm -it --network container:${container} nicolaka/netshoot
This container can be used to perform diagnostics in the context of the running service task, and then closed without affecting it.
I'm new to Docker and I'm starting off building, deploying, and maintaining telemetry-like services (Grafana, Prometheus, ...). One thing I've come across is that I need to start up Grafana with some default/preconfigured settings (dashboards, users, orgs, datasources, ...). Grafana allows some startup configuration in its config file, but not for all of its features (users, orgs, ...). Outside of Docker (if I weren't using it), I use an Ansible script to configure the parts Grafana doesn't support this way.
However, when I build my custom Grafana image (with the allowed startup config) and later start a Grafana container from that image, is there a way to specify "post-start" commands or steps in the Dockerfile? I imagine it as: every time a container of my image is deployed, some steps are issued to configure that container.
Any suggestions? Would I still need to use ansible or other tools like this to manage it?
This is trickier than it sounds. Continuing to use Ansible to configure Grafana post-startup is probably a good compromise: it's straightforward, it's code you already have, and it lets you keep using standard Docker tooling and images.
If this is for a test environment, one possibility is to keep a reference copy of Grafana's config and data directories. You'd have to distribute these separately from the Docker images.
mkdir grafana
docker run \
  -v $PWD/grafana/config:/etc/grafana \
  -v $PWD/grafana/data:/var/lib/grafana \
  ... \
  grafana/grafana
...
tar cvzf grafana.tar.gz grafana
Once you have the tar file, you can restart the system from a known configuration:
tar xvzf grafana.tar.gz
docker run \
  -v $PWD/grafana/config:/etc/grafana \
  -v $PWD/grafana/data:/var/lib/grafana \
  ... \
  grafana/grafana
Several of the standard Docker Hub database images have the ability to do first-time configuration, via an entrypoint script; I'll refer to the mysql image's entrypoint script here. The basic technique involves:
Determine whether the command given to start the container is to actually start the server, and if this is the first startup.
Start the server, as a background process, recording its pid.
Wait for the server to become available.
Actually do the first-time initialization.
Stop the server that got launched as a background process.
Go on to exec "$@" as normal to launch the server "for real".
The basic constraint here is that you want the server process to be the only thing running in the container once everything is done. That means commands like docker stop will directly signal the server, and if the server fails, it is the main container process, so its failure will cause the container to exit. Once the entrypoint script has replaced itself with the server as the main container process (by exec-ing it), you can't do any more post-startup work. That leads to the sequence of starting a temporary copy of the server to do initialization work.
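As a very rough sketch of that pattern applied to something like Grafana (the server command, port, health endpoint, and marker file below are assumptions for illustration, not taken from any official image):
#!/bin/sh
set -e
# only do first-time setup when asked to start the server and no marker file exists yet
if [ "$1" = "grafana-server" ] && [ ! -e /var/lib/grafana/.initialized ]; then
  # start a temporary copy of the server in the background
  "$@" &
  pid=$!
  # wait for it to start answering (port and endpoint are assumptions)
  while ! curl --fail --silent http://localhost:3000/api/health > /dev/null; do
    sleep 1
  done
  # do the one-time configuration here (users, orgs, datasources, ...)
  touch /var/lib/grafana/.initialized
  # stop the temporary server and wait for it to exit
  kill "$pid"
  wait "$pid" || true
fi
# hand off to the real server as the main container process
exec "$@"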
Once you've done this initialization work, the relevant content is usually stored in persisted data directories or external databases, so it doesn't need to be repeated on later startups.
SO questions have a common shortcut of starting a server process in the background, and then using something like tail -f /dev/null as the actual main container process. This means that docker stop will signal the tail process, but not tell the server that it's about to shut down; it also means that if the server does fail, since the tail process is still running, the container won't exit. I'd discourage this shortcut.
I am creating a Docker container with the ubuntu:16.04 image using the Python docker package. I am passing tty as True and detach as True to the client.containers.run() function. The container starts with the /sbin/init process and is created successfully. But the problem is that the login prompt on my host machine's console is replaced with the container's login prompt. As a result, I am not able to log in to the machine on the console. SSH connections to the machine work fine.
This happens even when I run my Python script after connecting to the machine over SSH. I tried different options like setting tty to False, setting stdout to False, and setting the environment variable TERM to xterm in the container, but nothing helped.
It would be really great if someone can suggest a solution for this problem.
My script is very simple:
import docker
client = docker.from_env()
container = client.containers.run('ubuntu:16.04', '/sbin/init', privileged=True,
                                  detach=True, tty=True, stdin_open=True,
                                  stdout=False, stderr=False,
                                  environment=['TERM=xterm'])
I am not using a Dockerfile.
I have been able to figure out that this problem happens when I start the container in privileged mode. If I do this, the /sbin/init process launches /sbin/agetty processes, which causes /dev/tty to be attached to the container. I need to figure out a way to start /sbin/init such that it does not create /sbin/agetty processes.
/sbin/init in Ubuntu is systemd. If you look at the linked page, it does a ton of things: it configures various kernel parameters, mounts filesystems, configures the network, launches getty processes, and more. Many of these things require changing host-global settings, and if you launch a container with --privileged you're allowing systemd to do that.
I'd give two key recommendations on this command:
Don't run systemd in Docker. If you really need a multi-process init system, supervisord is popular, but prefer single-process containers. If you know you need some init(8) (process ID 1 has some responsibilities) then tini is another popular option.
Don't directly run bare Linux distribution images. Whatever software you're trying to run, it's almost assuredly not in an alpine or ubuntu image. Build a custom image that has the software you need and run that; you should set up its CMD correctly so that you can docker run the image without any manual setup.
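As a sketch of that second recommendation (the package name and command flags here are placeholders for whatever you actually run):
FROM ubuntu:16.04
# install your actual application instead of this placeholder package
RUN apt-get update \
 && apt-get install -y --no-install-recommends my-app \
 && rm -rf /var/lib/apt/lists/*
# run the single server process in the foreground as the main container process
CMD ["my-app", "--foreground"]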
Also remember that the ability to run any Docker command at all implies unrestricted root-level access over the host. You're seeing some of that here, where a --privileged container is taking over the host's console; it's also very easy to read and edit files like the host's /etc/shadow and /etc/sudoers. There's nothing technically wrong with the kind of script you're showing, but you need to be extremely careful with standard security concerns.
In my Dockerfile I am trying to install multiple services and want to have them all start up automatically when I launch the container.
One among the services is mysql and when I launch the container I don't see the mysql service starting up. When I try to start manually, I get the error:
Failed to get D-Bus connection: Operation not permitted
Dockerfile:
FROM centos:7
RUN yum -y install mariadb mariadb-server
COPY start.sh start.sh
CMD ["/bin/bash", "start.sh"]
My start.sh file:
service mariadb start
Docker build:
docker build --tag="pbellamk/mariadb" .
Docker run:
docker run -it -d --privileged=true pbellamk/mariadb bash
I have checked the centos:systemd image and that doesn't help either. How do I launch the container with the services started using systemctl/service commands?
When you do docker run with bash as the command, the init system (e.g. SystemD) doesn’t get started (nor does your start script, since the command you pass overrides the CMD in the Dockerfile). Try to change the command you use to /sbin/init, start the container in daemon mode with -d, and then look around in a shell using docker exec -it <container id> sh.
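Concretely, that suggestion looks something like this (the container ID is whatever docker run prints):
docker run -d --privileged pbellamk/mariadb /sbin/init
docker exec -it <container id> sh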
Docker is designed around the idea of a single service/process per container. Although it definitely supports running multiple processes in a container and in no way stops you from doing that, you will eventually run into areas where multiple services in a container don't quite map to what Docker or external tools expect. Things like scaling services, or using Docker Swarm across hosts, only support the concept of one service per container.
Docker Compose allows you to compose multiple containers into a single definition, which means you can use more of the standard, prebuilt containers (httpd, mariadb) rather than building your own. Compose definitions map to Docker Swarm services fairly easily. Also look at Kubernetes and Marathon/Mesos for managing groups of containers as a service.
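For example, a minimal docker-compose.yml along those lines might look like this (the image tags and root password are placeholders):
version: "3"
services:
  web:
    image: httpd:2.4
    ports:
      - "8080:80"
  db:
    image: mariadb:10
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - db_data:/var/lib/mysql
volumes:
  db_data: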
Process management in Docker
It's possible to run systemd in a container, but it requires --privileged access to the host and the /sys/fs/cgroup volume mounted, so it may not be the best fit for most use cases.
The s6-overlay project provides a more Docker-friendly process management system using s6.
It's fairly rare you actually need ssh access into a container, but if that's a hard requirement then you are going to be stuck building your own containers and using a process manager.
You can avoid running a systemd daemon inside a docker container altogether. You can even avoid writing a special start.sh script; that is another benefit of using the docker-systemctl-replacement script.
The systemctl.py script can parse normal *.service files to know how to start and stop services. If you register it as the CMD of an image, it will look for all the systemctl-enabled services; those will be started and stopped in the correct order.
The current test suite includes test cases for the LAMP stack, including CentOS, so it should run fine in your setup.
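A rough sketch of how that could look for the Dockerfile in the question (the COPY source path is an assumption; the systemctl.py script itself comes from the docker-systemctl-replacement repository, and whether you need to enable the service during the build depends on the package defaults):
FROM centos:7
RUN yum -y install mariadb mariadb-server
# replace the real systemctl with the replacement script from the project
COPY systemctl.py /usr/bin/systemctl
# mark the service as enabled so the script starts it at container startup
RUN systemctl enable mariadb
CMD ["/usr/bin/systemctl"]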
I found this project:
https://github.com/defn/docker-systemd
which can be used to create an image based on the stock Ubuntu image but with systemd and multi-user mode.
My use case is the first one mentioned in its README. I use it to test the installer script of my application, which is installed as a systemd service. The installer creates a systemd service, then enables and starts it. I need CI tests for the installer. The test should create the installer, install the application on an Ubuntu system, and connect to the service from outside.
Without systemd the installer would fail, and it would be much more difficult to write the test with Vagrant. So there are valid use cases for systemd in Docker.
I'm dockerizing some of our services. For our dev environment, I'd like to make things as easy as possible for our developers, so I'm writing some scripts to manage the dockerized components. I want developers to be able to start and stop these services just as if they were non-dockerized. I don't want them to have to worry about creating and running a container vs. stopping and starting an already-created container. I was thinking that this could be handled using Fig: to create the container (if it doesn't already exist) and start the service, I'd use fig up --no-recreate; to stop the service, I'd use fig stop.
I'd also like to ensure that developers are running containers built from the latest images. In other words, something would check whether there is a later version of the image in our Docker registry. If so, that image would be downloaded and a new container created from it. At the moment it seems like I'd have to use docker commands to list the contents of the registry (docker search) and compare that to existing local containers (docker ps -a), with some grepping and awking, or use the Docker API to achieve the same thing.
Any persistent data will be written to mounted volumes so the data should survive the creation of a new container.
This seems like it might be a common pattern so I'm wondering whether anyone else has given these sorts of scenarios any thought.
This is what I've decided to do for now for our Neo4j Docker image:
I've written a shell script around docker run that accepts command-line arguments for the port, the database persistence directory on the host, and the log file persistence directory on the host. It executes a docker run command that looks like:
docker run --rm -it -p ${port}:7474 -v ${graphdir}:/var/lib/neo4j/data/graph.db -v ${logdir}:/var/log/neo4j my/neo4j
By default port is 7474, graphdir is $PWD/graph.db and logdir is $PWD/log.
--rm removes the container on exit, however the database and logs are maintained on the host's file system. So no containers are left around.
-it allows the container and the Neo4j service running within it to receive signals, so the service can be shut down gracefully (the Neo4j server shuts down gracefully on SIGINT) and the container exited by hitting ^C, or by sending it a SIGINT if the developer puts it in the background. No need for separate start/stop commands.
Although I certainly wouldn't do this in production, I think this is fine for a dev environment.
I am not familiar with Fig, but your scenario seems good.
Usually, though, I prefer to kill/delete + run my container instead of playing with start/stop. That way, if there is a new image available, Docker will use it. This only works for stateless services, but since you are using volumes for persistent data, you could do something similar.
Regarding the image update, what about running docker pull <image> every N minutes and checking the "Status" line that the command returns? If the image is up to date, do nothing; otherwise, kill and rerun the container.
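A sketch of that check as a script (the image name, container name, and run options are placeholders; the status text is what docker pull prints when nothing new was downloaded):
#!/bin/sh
image="my/neo4j"
container="neo4j-dev"
# pull and look at the status message docker reports
if docker pull "$image" | grep -q "Image is up to date"; then
  echo "Image unchanged; leaving the existing container alone."
else
  echo "Newer image downloaded; recreating the container."
  docker rm -f "$container" 2>/dev/null || true
  docker run -d --name "$container" -v "$PWD/graph.db":/var/lib/neo4j/data/graph.db "$image"
fi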