In my Golang program, I am currently spawning a Docker container to perform some work. I chose to use a Docker container here since there are a lot of dependencies and OS-related items that are much simpler to manage via a packaged container image. I am using the Go Docker client to manage the containers (github.com/docker/docker/client).
One issue I am facing is that if the consumer of my Golang program presses Ctrl-C, the program quits but the Docker container keeps running. Work then continues even though the consumer believes they have stopped the program.
If the Golang program were instead a bash script, I believe that running docker run without the -d flag would cause the container to be stopped as soon as the calling parent stops. However, in the Go Docker client linked above, I don't see an option for this. There are two relevant parts: container_create.go and container_start.go. The structs for container_create only contain pre-run configuration (such as ports to expose, etc.), with no mention of background or detached modes. container_start also does not seem to have any relevant options.
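For reference, the bash behavior described above can be sketched as a wrapper script (the image and container names below are placeholders); the same idea carries over to Go by catching SIGINT yourself and calling the client's ContainerStop:

```shell
#!/bin/sh
# Sketch: run the worker container in the foreground (no -d) and trap
# signals so the container is stopped when this parent is interrupted.
# Image and container names are placeholders.
NAME="worker-$$"

cleanup() {
    docker stop "$NAME" >/dev/null 2>&1
    docker rm -f "$NAME" >/dev/null 2>&1
}
trap cleanup INT TERM EXIT

# Without -d, docker run stays attached; Ctrl-C reaches this script,
# the trap fires, and the container is stopped along with it.
docker run --name "$NAME" my/worker-image
```

Note this requires a running Docker daemon; it is the parent's cleanup, not any detach option, that ties the container's lifetime to the caller.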
Related
We just started moving our app to containers, so I am very new to the container world.
In our container image we only have the base Linux image with some RPMs installed and some scripts copied to the container. We were thinking that we will not have any command/entrypoint in the image itself. When the container comes up, our deployment job will then run a script inside the container to bring up the services (jetty/hbase/..). I.e., container bring-up and services bring-up are two different steps in the deployment job.
This was working until I was bringing up the container using the docker run/podman run command.
Now, we thought of moving to the docker-compose way. However, when I say "docker-compose up", it complains: "Error: No command specified on command line or as CMD or ENTRYPOINT in this image". I.e., while starting a container using the run command it's OK to not have any CMD or ENTRYPOINT, but while starting a container using docker-compose it's mandatory to provide one. Why is that so?
In order to get past that error, we tried putting some simple CMD in the compose file, say /bin/bash. However, with this approach the container exits immediately. I found many Stack Overflow links explaining why this is happening, e.g. Why docker container exits immediately. Only if I put the CMD as tail -f /dev/null in the compose file does the container stay up.
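For reference, a minimal compose file using that workaround might look like this (the service and image names are made up):

```yaml
services:
  myservice:
    image: my/base-image
    # No real entrypoint: keep the container alive with a no-op
    # process so the deployment job can exec service scripts later.
    command: tail -f /dev/null
```

With the container held up like this, the deployment job can then run its scripts via docker-compose exec.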
Can you please help clarify what the right thing to do is? As mentioned, our requirement is that we want to bring up the container without any services, and then bring up the services separately. Hence we don't have any use case for CMD/ENTRYPOINT.
Container images should be the thing that you deploy, not a thing that you deploy code into; it is considered good practice to have immutable infrastructure (containers, VMs, etc.).
Your build process should probably (!?) generate container images. A container image is hashed (SHA-256) to uniquely identify it.
Whenever your sources change, you should consider generating a new container image. It is a good idea to label container images so that you can tie a specific image (not image name but tagged version) to a specific build so that you can always determine which e.g. commit resulted in which image version.
Corollary: it is considered bad practice to change container images.
One reason for preferring immutable infrastructure is that you will have reproducible deployments. If you have issues in a container version, you know you didn't change it and you know what build produced it and you know what source was used ...
There are other best practices for containers, including that they should contain no state, etc. It's old but seems comprehensive: 10 things to avoid in containers. There are also many analogs to The Twelve-Factor App.
(Too!?) often, containers use CMD to start their process but, in my experience, it is better to use ENTRYPOINT. Both can be overridden, but CMD is trivially overwritten while overriding ENTRYPOINT requires a specific --entrypoint flag. In essence, if you use CMD, your users must remember to also name your process if they want to pass command-line args, whereas ENTRYPOINT containers act more like running a regular old binary.
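As an illustration (the tool name and path here are hypothetical), ENTRYPOINT pins the process while CMD only supplies overridable default arguments:

```dockerfile
FROM alpine:3.19
COPY mytool /usr/local/bin/mytool
# ENTRYPOINT fixes which process runs; CMD is just its default args.
ENTRYPOINT ["/usr/local/bin/mytool"]
CMD ["--help"]
```

With this image, `docker run my/image --verbose` runs `mytool --verbose`; had only CMD been used, the same command line would have replaced the process entirely instead of passing it arguments.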
Up to now I had set up my ansible-playbook commands to run on AWS EC2 instances.
Can I run regular Ansible modules (lineinfile, apt, pip, etc.) on a container?
Can I add my container IPs to the hosts file in a containers group, and will the same code then work? I.e., if I change my main.yml file that has
hosts: ec2-group
to
hosts: containers-group
will all the commands work?
I am a bit of a beginner at this... please do confirm. I am actually thinking of making docker-compose files from scratch and running docker-compose commands using Ansible.
You can, but it's not really how Docker is designed to be used.
A Docker container is usually a wrapper around a single process. In the standard setup you create an image that has that application built and packaged, and you can just run it without any further setup. It's not usually interesting to run a bare Linux distribution container (which won't have an application installed) or to run an interactive shell as the main container process. Tutorials like Docker's Build and run your image walk through this sequence.
A corollary to this is that containers don't usually have any local state. In the best case any state a container needs is in an external database; if you can't do that then you store local state in a volume that outlives the container.
Finally, it's extremely routine to delete and recreate containers. You need to do this to change some common options; in a cluster environment like Kubernetes this can happen outside your control. When this happens the new container will restart running its default setup, and it won't know about any manual changes the previous container might have had.
So you don't usually want to try to install software directly in a running container, since that will get lost as soon as the container exits. You can, in principle, get a shell in a container (via docker exec) but this is more of a debugging tool than an administration tool. You could make the only process a container runs be an ssh daemon, but anything you start this way will get lost as soon as the container exits (and I've never seen a recipe that correctly and securely sets up credentials to access it).
I'd recommend learning the standard Dockerfile system and running self-contained Docker images over trying to adapt Ansible to this rather different environment.
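A minimal example of that standard approach (all names here are placeholders) bakes the application and its dependencies into the image, so the container is runnable with no further setup:

```dockerfile
FROM python:3.12-slim
WORKDIR /app
# Dependencies and code are part of the image, not installed later
# inside a running container.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```

Rebuilding this image is the container-world equivalent of re-running your Ansible playbook against a host.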
Background
I have a container; it's running a lot of stuff, including a frontend that is exposed to other developers.
Users are going to be uploading some of their shell/Python scripts onto my container to be run.
To keep my container working, my plan is to send each script to a sibling container, which will then run it and send me back the response. The users' scripts should be able to download external packages, etc.
Then I want the sibling container to be "cleaned".
Question
Can I have that sibling container restart itself from its source image once it is done running the user's script? This way users can get a consistently clean container to run their scripts on.
Note
If I am completely barking up the wrong tree with this solution, please let me know. I am trying to get some weird functionalities going and could be approaching this from the wrong angle.
EDIT 1 (other approaches and why I don't think I like them)
Two alternatives that I have thought of are having the container with the frontend run containers on it, or having the sibling container run Docker containers on it. But these two solutions run into the difficulty of Docker-in-Docker. The other solution may be to heighten my frontend container's permissions until it can make sibling containers on the fly for running scripts, but I am worried that this may give my frontend container unnecessarily high permissions.
EDIT 2 (all the documentation I have found on having a container restart itself)
I am aware of the documentation on autorestart, but I don't believe that this will clean the container's contents, for instance if a file was downloaded onto it.
My answer has some severe security implications.
You could control your sibling containers from your main container, if you map the docker socket from the host into your main container.
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
Now you have complete control over the docker engine from inside your main container. You can start, stop, etc your sibling containers, and spawn new (clean) siblings.
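With the socket mounted, the flow from inside the main container might be sketched like this (image and paths are hypothetical); --rm gives each user script a fresh sibling that is discarded afterwards:

```shell
# Run one disposable sibling per user script. --rm removes the
# container (and anything the script downloaded) when it exits,
# so every run starts from the clean image.
# Caveat: because the daemon runs on the host, -v paths are
# resolved on the HOST filesystem, not inside the main container.
docker run --rm \
  -v /shared/jobs/user-script.sh:/job.sh:ro \
  my/runner-image sh /job.sh > /shared/jobs/result.txt
```

This sidesteps the restart question entirely: nothing is restarted, a new sibling is simply created from the unchanged source image each time.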
But remember that this is effectively granting host root rights to your main container.
I have a running docker container with a base image fedora:latest.
I would like to preserve the state of my running applications, but still update a few packages which got security fixes (e.g. gnutls, openssl and friends) since I first deployed the container.
How can I do that without interrupting service or losing the current state?
So optimally I would like to get a bash/csh/dash/sh shell on the running container, or is there any fleet magic?
It's important to note that you may run into some issues with the container shutting down.
For example, imagine that you have a Dockerfile for an Apache container which runs Apache in the foreground. Imagine that you attach a shell to your container (via docker exec) and you start updating. You have to apply a fix to Apache and, in the process of updating, Apache restarts. The instant that Apache shuts down, the container will stop. You're going to lose the current state of the applications. This is going to require extremely careful planning and some luck, and some updates will probably not be possible.
The better way to do it is to rebuild the image upon which the container is based with all the appropriate updates, then re-run the container. There will be a (brief) interruption in service. However, for you to be able to save the state of your applications, you need to design the images in such a way that any state information that must be preserved is stored persistently, either in the host file system by mounting a directory or in a data container.
In short, if you're going to lose important information when your container shuts down, then your system is fragile & you're going to run into problems sooner or later. Better to redesign it so that everything that needs to be persistent is saved outside the container.
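The rebuild-and-rerun cycle described above might be sketched like this (image, container, and volume names are placeholders); the named volume carries the state across the brief interruption:

```shell
docker build -t my/app:fixed .       # new image with patched packages
docker stop app && docker rm app     # brief service interruption
docker run -d --name app \
  -v appdata:/var/lib/app \          # persistent state lives here
  my/app:fixed
```

Because the state is in the volume rather than the container's writable layer, the new container picks up exactly where the old one left off.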
If the Docker container already has a running bash:
docker attach <containerIdOrName>
Otherwise, execute a new program in the same container (here: bash):
docker exec -it <containerIdOrName> bash
I'm dockerizing some of our services. For our dev environment, I'd like to make things as easy as possible for our developers, so I'm writing some scripts to manage the dockerized components. I want developers to be able to start and stop these services just as if they were non-dockerized. I don't want them to have to worry about creating and running a container vs. stopping and starting an already-created container. I was thinking that this could be handled using Fig. To create the container (if it doesn't already exist) and start the service, I'd use fig up --no-recreate. To stop the service, I'd use fig stop.
I'd also like to ensure that developers are running containers built using the latest images. In other words, something would check to see if there was a later version of the image in our Docker registry. If so, this image would be downloaded and run to create a new container from that image. At the moment it seems like I'd have to use docker commands to list the contents of the registry (docker search) and compare that to existing local containers (docker ps -a) with the addition of some grepping and awking, or use the Docker API to achieve the same thing.
Any persistent data will be written to mounted volumes so the data should survive the creation of a new container.
This seems like it might be a common pattern so I'm wondering whether anyone else has given these sorts of scenarios any thought.
This is what I've decided to do for now for our Neo4j Docker image:
I've written a shell script around docker run that accepts command-line arguments for the port, database persistence directory on the host, log file persistence directory on the host. It executes a docker run command that looks like:
docker run --rm -it -p ${port}:7474 -v ${graphdir}:/var/lib/neo4j/data/graph.db -v ${logdir}:/var/log/neo4j my/neo4j
By default port is 7474, graphdir is $PWD/graph.db and logdir is $PWD/log.
--rm removes the container on exit; however, the database and logs are maintained on the host's file system, so no containers are left around.
-it allows the container and the Neo4j service running within it to receive signals, so that the service can be shut down gracefully (the Neo4j server shuts down gracefully on SIGINT) and the container exited by hitting ^C, or by sending it a SIGINT if the developer puts it in the background. No need for separate start/stop commands.
Although I certainly wouldn't do this in production, I think this is fine for a dev environment.
I am not familiar with fig, but your scenario seems good.
Usually, I prefer to kill/delete + run my container instead of playing with start/stop, though. That way, if there is a new image available, Docker will use it. This works only for stateless services, but as you are using volumes for persistent data, you could do something like this.
Regarding the image update, what about running docker pull <image> every N minutes and checking the "Status" that the command returns? If it is up to date, do nothing; otherwise, kill and re-run the container.
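That check can be sketched as follows (image and container names are made up); docker pull prints a "Status: Image is up to date ..." line when nothing changed:

```shell
# Re-create the container only when the pull actually fetched a
# newer image, by checking the Status line mentioned above.
if docker pull my/image | grep -q 'Image is up to date'; then
    echo "already current, nothing to do"
else
    docker rm -f myservice 2>/dev/null
    docker run -d --name myservice my/image
fi
```

Since the data is in mounted volumes, the re-created container comes back with the same state.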