Can I get a docker container to restart itself from an image?

Background
I have a container that runs a lot of stuff, including a frontend that is exposed to other developers.
Users are going to be uploading some of their shell/python scripts onto my container to be run.
To keep my container working, my plan is to send each script to a sibling container, which will run it and send me back the response. The users' scripts should be able to download external packages etc.
Then I want the sibling container to be "cleaned".
Question
Can I have that sibling container restart itself from its source image once it is done running the user's script? This way users can get a consistently clean container to run their scripts on.
Note
If I am completely barking up the wrong tree with this solution, please let me know. I am trying to get some weird functionalities going and could be approaching this from the wrong angle.
EDIT 1 (other approaches and why I don't think I like them)
Two alternatives that I have thought of are having the frontend container run containers itself, or having the sibling container run Docker containers. But both of these solutions run into the difficulty of Docker-in-Docker. Another option would be to heighten my frontend container's permissions until it can create sibling containers on the fly for running scripts. But I am worried that this may give my frontend container unnecessarily high permissions.
EDIT 2 (all the documentation I have found on having a container restart itself)
I am aware of the documentation on autorestart, but I don't believe that this will clean the container's contents, for instance if a file was downloaded onto it.

My answer has some severe security implications.
You could control your sibling containers from your main container, if you map the docker socket from the host into your main container.
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
Now you have complete control over the docker engine from inside your main container. You can start, stop, etc your sibling containers, and spawn new (clean) siblings.
But remember that this is effectively granting host root rights to your main container.
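As for getting a consistently clean sibling, the sibling doesn't even need to restart itself: the main container can start a fresh, throwaway sibling per script via the mounted socket and let the engine discard it afterwards. A rough sketch (the image name, mount path and script name are placeholders):

docker run --rm -v /tmp/job42:/job python:3 python /job/script.py

The --rm flag makes the engine delete the container's filesystem (including anything the script downloaded) as soon as the process exits, so every run starts from the unmodified image.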

Related

Running a `docker` container with `detach=False`

In my Golang program, I am currently spawning a Docker container to perform some work. I chose to use a Docker container here since there are a lot of dependencies and OS-related items that are much simpler to manage via a packaged container image. I am using the Golang Docker API (github.com/docker/docker/client) to manage the containers.
One issue I am facing is that if the consumer of my Golang program presses Ctrl-C, the program quits but the Docker container is still running. This will cause actions to keep running even if the consumer believes they have stopped the program.
If the Golang program were instead a bash script, I believe that running docker run without the -d flag would cause the container to be stopped as soon as the calling parent is stopped. However, in the Golang Docker client at the URL provided above, I don't see an option to do this. There are two parts here: container_create.go and container_start.go. The structs provided for container_create only contain pre-run configuration (such as ports to expose, etc.), but there is no mention of background or detached modes. container_start also does not seem to have any options relevant to this.
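As the question suspects, there is no detach flag in the API itself; the foreground behaviour of docker run comes from the CLI attaching to the container's streams and proxying signals. A sketch of one way to get similar behaviour from Go is below. The container ID is a placeholder, and the exact method signatures are assumptions that have varied between versions of github.com/docker/docker/client:

package main

import (
	"context"
	"os"
	"os/signal"
	"syscall"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

func main() {
	ctx := context.Background()
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}

	// Placeholder: the ID returned by your existing
	// ContainerCreate + ContainerStart calls.
	containerID := "your-container-id"

	// Catch Ctrl-C (SIGINT) and SIGTERM, and force-remove the
	// container before exiting, so it cannot outlive this program.
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, os.Interrupt, syscall.SIGTERM)
	go func() {
		<-sigs
		_ = cli.ContainerRemove(ctx, containerID, types.ContainerRemoveOptions{Force: true})
		os.Exit(1)
	}()

	// ... the normal flow continues here: ContainerWait, log streaming, etc.
}

Setting AutoRemove: true in the container.HostConfig passed to ContainerCreate pairs well with this, since the engine then deletes the container as soon as it stops for any reason.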

Mandatory command or entrypoint in docker-compose

We just started moving our app to containers, so I am very new to the container world.
In our container image we only have the base Linux image with some RPMs installed and some scripts copied to the container. We were thinking that we would not have any command/entrypoint in the image itself. When the container comes up, our deployment job will then run a script inside the container to bring up the services (jetty/hbase/..). That is, container bringup and services bringup are two different steps in the deployment job.
This worked as long as I was bringing up the container using the docker run/podman run command.
Now we thought of moving to the docker-compose way. However, when I say "docker-compose up", it complains: "Error: No command specified on command line or as CMD or ENTRYPOINT in this image". That is, while starting a container using the run command it's OK to not have any CMD or ENTRYPOINT, but while starting a container using docker-compose it's mandatory to provide one. Why is that so?
In order to get past that error, we tried putting some simple CMD in the compose file, say /bin/bash. However, with this approach the container exits immediately. I found many Stack Overflow links explaining why this is happening, e.g. Why docker container exits immediately. Only if I put CMD as tail -f /dev/null in the compose file does the container stay up.
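For reference, that workaround looks like this in the compose file (the service and image names are made up):

services:
  myservice:
    image: my-base-image
    command: tail -f /dev/null

tail -f /dev/null never exits, so the container stays up with nothing running in it, and the deployment job can exec into it afterwards.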
Can you please help clarify what the right thing to do is? As mentioned, our requirement is that we want to bring up the container without any services, and then bring up the services separately. Hence we don't have any use case for CMD/ENTRYPOINT.
Container images should be the thing that you deploy, not a thing that you deploy code into; it is considered good practice to have immutable infrastructure (containers, VMs, etc.).
Your build process should probably (!?) generate container images. A container image is hashed (SHA-256) to uniquely identify it.
Whenever your sources change, you should consider generating a new container image. It is a good idea to label container images so that you can tie a specific image (not an image name but a tagged version) to a specific build, so that you can always determine which commit, for example, resulted in which image version.
Corollary: it is considered bad practice to change container images.
One reason for preferring immutable infrastructure is that you will have reproducible deployments. If you have issues in a container version, you know you didn't change it and you know what build produced it and you know what source was used ...
There are other best practices for containers, including that they should contain no state, etc. It's old but seems comprehensive: 10 things to avoid in containers. There are also many analogs to The Twelve-Factor App.
(Too!?) Often containers use CMD to start their process but, in my experience, it is better to use ENTRYPOINT. Both can be overridden, but CMD is trivially overridden while ENTRYPOINT requires a specific --entrypoint flag. In essence, if you use CMD, your users must remember to restate your process if they want to pass command-line args, whereas ENTRYPOINT containers act more like running a regular old binary.
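To illustrate the difference (the binary path is hypothetical):

ENTRYPOINT ["/usr/local/bin/myapp"]
CMD ["--help"]

With this, docker run myimage --port 8080 runs myapp --port 8080, whereas with only CMD ["/usr/local/bin/myapp", "--help"] the same command line would replace the whole CMD and try to execute --port as the program.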

Ansible commands on docker containers?

Up to now I have had my ansible-playbook commands running on AWS EC2 instances.
Can I run regular Ansible modules (lineinfile, apt, pip, etc.) against a container?
Can I add my container IP to the hosts file in a containers group and have the same code work? That is, if I change my main.yml file that has
hosts: ec2-group
to
hosts: containers-group
will all the commands work?
I am a bit of a beginner at this, so please confirm. I am actually thinking of making docker-compose files from scratch and running the docker-compose commands using Ansible.
You can, but it's not really how Docker is designed to be used.
A Docker container is usually a wrapper around a single process. In the standard setup you create an image that has that application built and packaged, and you can just run it without any further setup. It's not usually interesting to run a bare Linux distribution container (which won't have an application installed) or to run an interactive shell as the main container process. Tutorials like Docker's Build and run your image walk through this sequence.
A corollary to this is that containers don't usually have any local state. In the best case any state a container needs is in an external database; if you can't do that then you store local state in a volume that outlives the container.
Finally, it's extremely routine to delete and recreate containers. You need to do this to change some common options; in a cluster environment like Kubernetes this can happen outside your control. When this happens the new container will restart running its default setup, and it won't know about any manual changes the previous container might have had.
So you don't usually want to try to install software directly in a running container, since that will get lost as soon as the container exits. You can, in principle, get a shell in a container (via docker exec) but this is more of a debugging tool than an administration tool. You could make the only process a container runs be an ssh daemon, but anything you start this way will get lost as soon as the container exits (and I've never seen a recipe that correctly and securely sets up credentials to access it).
I'd recommend learning the standard Dockerfile system and running self-contained Docker images over trying to adapt Ansible to this rather different environment.
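That said, if you do want to point a playbook at a running container for a quick experiment, Ansible can reach it through the Docker socket instead of SSH. A minimal inventory sketch, assuming the community.docker collection is installed and a container named mycontainer is running:

[containers-group]
mycontainer ansible_connection=community.docker.docker

With that, hosts: containers-group in main.yml will run modules such as apt or lineinfile inside the container, with all the caveats above: whatever they change is lost when the container is recreated.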

Docker backup container with startup parameters

I have been facing the same problem for months now and I don't have an adequate solution.
I'm running several containers based on different images. Some of them were started using Portainer with some arguments and volumes. Some of them were started using the CLI and docker start with some arguments and parameters.
Now, all these settings are stored somewhere, because if I stop and restart such a container, everything works well again. But if I do a commit, back it up with tar, load it on a different system, and do a docker start, it has lost all of its settings.
The procedure as described here: https://linuxconfig.org/docker-container-backup-and-recovery does not work in my case.
Now I'm thinking about writing my own web application which will create docker-compose files based on my settings, rather than just doing a docker start with the correct params. This web application should also take care of the volumes (just folders) and do an incremental backup of them with Borg to a remote server.
But actually this is only an idea. Is there a way to "extract" a docker-compose file from a running container, so that I can redeploy a container 1:1 to another server and just have to run docker run mycontainer and it will have the same settings?
Or do I have to write my web app? Or have I missed some page on Google and there is already such a solution?
Thank you!
To see the current configuration of a container, you can use:
docker container inspect $container_id
You can then use those configurations to run your container on another machine. There is no easy import/export of these settings to start another container that I'm aware of.
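For example, to pull out specific pieces of a container's configuration (Go template syntax; these fields are standard parts of the inspect output):

docker container inspect --format '{{.Config.Cmd}}' $container_id
docker container inspect --format '{{json .HostConfig.Binds}}' $container_id
docker container inspect --format '{{json .Config.Env}}' $container_id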
Most people use a docker-compose.yml to define how they want a container run. They also build images with a Dockerfile and transfer them with a registry server rather than a save/load.
The docker-compose.yml can be used with docker-compose or docker stack deploy and allows the configuration of the container to be documented as a configuration file that is tracked in version control, rather than error prone user entered settings. Running containers by hand or starting them with a GUI is useful for a quick test or debugging, but not for reproducibility.
You would like to back up the instance, but the commands you're providing back up the image. I'd suggest updating your Dockerfile to solve the issue. If you really want to go down the road of saving the instance's current state, you should use the docker export and docker import commands.
Reference:
https://docs.docker.com/engine/reference/commandline/import/
https://docs.docker.com/engine/reference/commandline/export/
NOTE: docker export does not export the contents of volumes anyway; for those, refer to https://docs.docker.com/engine/admin/volumes/volumes/
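For completeness, a sketch of that flow, plus the usual tar-through-a-helper-container pattern for volume contents (all names are placeholders):

docker export $container_id > mycontainer.tar
docker import mycontainer.tar myimage:backup

docker run --rm -v myvolume:/data -v $(pwd):/backup alpine tar czf /backup/myvolume.tgz -C /data .

Keep in mind that docker import produces a plain filesystem image: the original entrypoint, environment, ports and volume definitions have to be supplied again when you run it.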

update running docker container

I have a running docker container with a base image fedora:latest.
I would like to preserve the state of my running applications, but still update a few packages which got security fixes (e.g. gnutls, openssl, and friends) since I first deployed the container.
How can I do that without interrupting service or losing the current state?
So optimally I would like to get a bash/csh/dash/sh shell on the running container, or any fleet magic?
It's important to note that you may run into some issues with the container shutting down.
For example, imagine that you have a Dockerfile for an Apache container which runs Apache in the foreground. Imagine that you attach a shell to your container (via docker exec) and you start updating. You have to apply a fix to Apache and, in the process of updating, Apache restarts. The instant that Apache shuts down, the container will stop. You're going to lose the current state of the applications. This is going to require extremely careful planning and some luck, and some updates will probably not be possible.
The better way to do it is rebuild the image upon which the container is based with all the appropriate updates, then re-run the container. There will be a (brief) interruption in service. However, in order for you to be able to save the state of your applications, you would need to design the images in such a way that any state information that needs to be preserved is stored in a persistent manner - either in the host file system by mounting a directory or in a data container.
In short, if you're going to lose important information when your container shuts down, then your system is fragile & you're going to run into problems sooner or later. Better to redesign it so that everything that needs to be persistent is saved outside the container.
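A minimal sketch of that pattern (the paths and image name are placeholders):

docker run -d -v /srv/app-state:/var/lib/app myimage

Whatever the application writes under /var/lib/app lands in /srv/app-state on the host, so a container started from a rebuilt image picks up the same state.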
If the Docker container has a running bash:
docker attach <containerIdOrName>
Otherwise, execute a new program in the same container (here: bash):
docker exec -it <containerIdOrName> bash
