I've been facing the same problem for months now and I don't have an adequate solution.
I'm running several containers based on different images. Some of them were started with Portainer, with some arguments and volumes; others were started from the CLI using docker start with some arguments and parameters.
All of these settings are stored somewhere, because if I stop and restart such a container, everything works well again. But if I do a commit, back it up with tar, load it on a different system, and do a docker start, it has lost all of its settings.
The procedure as described here: https://linuxconfig.org/docker-container-backup-and-recovery does not work in my case.
Now I'm thinking about writing my own web application that would generate docker compose files based on my settings, rather than just doing a docker start with the correct params. This web application should also take care of the volumes (just folders) and do an incremental backup of them with Borg to a remote server.
But actually this is only an idea. Is there a way to "extract" a docker compose file from a running container, so that I can redeploy a container 1:1 to another server, just run docker run mycontainer, and have it keep the same settings?
Or do I have to write my web app? Or have I missed some page on Google and there is already such a solution?
Thank you!
To see the current configuration of a container, you can use:
docker container inspect $container_id
You can then use those configurations to run your container on another machine. There is no easy import/export of these settings to start another container that I'm aware of.
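If you only need specific pieces of that output, docker inspect also accepts a Go template via --format. For example (the container name is a placeholder), to pull out the environment variables, bind mounts, and port mappings you would need in order to recreate the container elsewhere:
docker container inspect --format '{{json .Config.Env}}' mycontainer
docker container inspect --format '{{json .HostConfig.Binds}}' mycontainer
docker container inspect --format '{{json .HostConfig.PortBindings}}' mycontainer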
Most people use a docker-compose.yml to define how they want a container run. They also build images with a Dockerfile and transfer them with a registry server rather than a save/load.
The docker-compose.yml can be used with docker-compose or docker stack deploy and allows the configuration of the container to be documented as a configuration file that is tracked in version control, rather than error-prone, hand-entered settings. Running containers by hand or starting them with a GUI is useful for a quick test or debugging, but not for reproducibility.
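As a rough sketch, a docker-compose.yml that captures the kinds of settings you would otherwise pass by hand might look like this (the image name, port, variable, and paths are all placeholders):
version: "3"
services:
  myapp:
    image: myimage:latest      # the image you would otherwise pass to docker run
    ports:
      - "8080:80"
    environment:
      - SOME_SETTING=value
    volumes:
      - ./data:/var/lib/myapp  # local folder mounted into the container
    restart: unless-stopped
Copy that file to the other server and run docker-compose up -d (or docker stack deploy -c docker-compose.yml mystack on Swarm) and you get the same configuration every time.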
You would like to back up the instance, but the commands you're providing back up the image. I'd suggest updating your Dockerfile to solve the issue. In case you really want to go down the path of saving the instance's current state, you should use the docker export and docker import commands.
Reference:
https://docs.docker.com/engine/reference/commandline/import/
https://docs.docker.com/engine/reference/commandline/export/
NOTE: docker export does not export the content of the volumes anyway; I suggest you refer to https://docs.docker.com/engine/admin/volumes/volumes/
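For completeness, a minimal sketch of that flow, plus a separate backup of a named volume (container, image, and volume names are placeholders):
docker export mycontainer > mycontainer.tar    # the container filesystem, without volumes
docker import mycontainer.tar myimage:restored # note: the imported image has no CMD/ENV metadata unless you add it with -c
docker run --rm -v myvolume:/data -v "$(pwd)":/backup alpine tar czf /backup/myvolume.tar.gz -C /data .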
I'm pretty new to using Docker. I need to deploy a NiFi instance through my employer, but the internal service we need to use requires a Dockerfile, not an image.
The service we're using requires the Dockerfile because each time the repository we're using is updated, the service is pointed to the Dockerfile and initiates the build process from it, then runs/operates the container.
I've already set up the NiFi flow to how it needs to operate; I'm just unsure how to get a Dockerfile from an already existing container (or whether that is even possible).
I was looking into this myself; apparently there is no real way to do it. But you can inspect the Docker container and get pretty much all of the commands used to create it. The exception is the OS used, which is easy to find: you can spawn a bash shell inside the container and run something like uname -a. With that information you can make your own Docker image. Usually you can just find the original Dockerfile on GitHub, though.
docker inspect <image>
Or you can do it through the Docker Desktop UI.
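docker history is also useful here; it lists the command each layer of the image was built with, which is close to a reverse-engineered Dockerfile (although COPY'd file contents still have to be recovered separately):
docker history --no-trunc <image>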
You can use the Dockerfile that is in the NiFi source code; see this directory: https://github.com/apache/nifi/tree/main/nifi-docker/dockerhub
Up to now I have set up my ansible-playbook commands to run on AWS EC2 instances.
Can I run regular Ansible modules (lineinfile, apt, pip, etc.) against a container?
Can I add my container IP to the hosts file in a containers group and have the same code work? That is, if I change my main.yml file that has
hosts: ec2-group
to
hosts: containers-group
do all the commands still work?
I am a bit of a beginner at this, so please confirm. I am actually thinking of making docker-compose files from scratch and running docker-compose commands using Ansible.
You can, but it's not really how Docker is designed to be used.
A Docker container is usually a wrapper around a single process. In the standard setup you create an image that has that application built and packaged, and you can just run it without any further setup. It's not usually interesting to run a bare Linux distribution container (which won't have an application installed) or to run an interactive shell as the main container process. Tutorials like Docker's Build and run your image walk through this sequence.
A corollary to this is that containers don't usually have any local state. In the best case any state a container needs is in an external database; if you can't do that then you store local state in a volume that outlives the container.
Finally, it's extremely routine to delete and recreate containers. You need to do this to change some common options; in a cluster environment like Kubernetes this can happen outside your control. When this happens the new container will restart running its default setup, and it won't know about any manual changes the previous container might have had.
So you don't usually want to try to install software directly in a running container, since that will get lost as soon as the container exits. You can, in principle, get a shell in a container (via docker exec) but this is more of a debugging tool than an administration tool. You could make the only process a container runs be an ssh daemon, but anything you start this way will get lost as soon as the container exits (and I've never seen a recipe that correctly and securely sets up credentials to access it).
I'd recommend learning the standard Dockerfile system and running self-contained Docker images over trying to adapt Ansible to this rather different environment.
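As a starting point, a self-contained image of the kind described above might look roughly like this; the base image, file names, and command are assumptions for illustration, not anything from your project:
# hypothetical Dockerfile for a small self-contained application
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]
You build it once with docker build -t myapp . and then docker run -d myapp starts a container that needs no further configuration or in-place software installs.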
OS: Amazon Linux (hosted on AWS)
Docker version: 17.x
Tools: Ansible, Docker
Our developers use Ansible to spin up individual AWS spot environments that get populated with Docker images built on their local machines, pushed into a Docker registry created on the AWS spot machine, then pulled down and run.
When the devs do this locally on their MacBooks, Ansible orchestrates building the code with sbt, spinning up an AWS spot instance, running a Docker registry, pushing the image into that registry, commanding the instance to pull down the image and run it, running a test suite, etc.
To make things better and easier for non-devs to run individual test environments, we put the Ansible script behind Jenkins and use their username to let Ansible create a domain name in Route53 that points to their temporary spot instance environment.
This all works great without the registry -- i.e. using JFrog Artifactory to have these dynamic envs just pull pre-built images. It lets QA team members spin up any version of the env they want. But now to allow it to build code and push, I need to have an insecure registry and that is where things fell apart...
Since any user can run this, the Route53 domain name is dynamic. That means I cannot just hardcode the --insecure-registry entry in daemon.json. I have tried to find a way to set a wildcard registry, but it didn't seem to work for me. Also, since this is a shared build server (the one running the Ansible commands), I don't want to keep adding entries and restarting Docker because other things might be running.
So, to summarize the questions:
Is there a way to use a wildcard for the insecure-registry entry?
How can I get docker to recognize insecure-registry entry without restarting docker daemon?
So far I've found this solution to satisfy my needs, but I'm not 100% happy yet; I'll keep working on it. It doesn't handle the first case of a wildcard, but it does seem to work for the second question about reloading without a restart.
The first problem was that I was editing the wrong file. Docker doesn't respect /etc/sysconfig/docker, nor does it respect $HOME/.docker/daemon.json. The only file that works for me on Amazon Linux is /etc/docker/daemon.json, so I manually edited it, then tested a reload and verified with docker info. I'll work on programmatically inserting entries as needed, but the manual test works:
sudo vim /etc/docker/daemon.json
sudo systemctl reload docker.service
docker info
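For reference, the relevant entry in /etc/docker/daemon.json looks like this; as far as I know it accepts hostname:port values or a CIDR range but not a hostname wildcard, so if all the spot instances live in one VPC range, a CIDR entry may cover the dynamic names without per-user edits (the values here are examples):
{
  "insecure-registries": ["10.0.0.0/8", "build-registry.example.com:5000"]
}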
I am just discovering Docker. I am finding so much information, but I can't seem to get a straight answer on this. If someone could give me a clear explanation based on the understanding I have of it so far, it would be appreciated.
I am downloading a Docker image locally, say the sample one from Microsoft, microsoft/dotnet-samples:dotnetapp-nanoserver. I am lost as to where this is downloaded to. Is it downloaded and installed as a program on the host machine, with an isolated script that controls the container? The download is about 1.3 GB because it includes .NET Core.
In another example, if I download apache2 to run as a web server, is it installed in the default paths on the host system, with every container I want to use tapping into that, or does every container contain its own isolated version of apache2?
I ask this because I can't find files that match the size of these programs.
I know they are not complete VMs, but where can I find the files associated with a container?
I am using Windows Server 2016 and a Mac since I want to do some trials with containers.
An image is a filesystem
Docker images are encapsulated filesystems. The software and files inside are not being directly installed onto your system.
You can think of a Docker image sort of the way you think of a .zip file. You can download a .zip file from somewhere, and it is a single file. Contained inside it might be one file, or dozens of files, or a nested tree of directories and files. But on your disk, it exists as one file.
A Docker image is similar (conceptually, at least... the details are more complicated).
Image storage
Where images are stored varies by platform. On a Linux system, they are usually under /var/lib/docker. I don't know where they are stored on Windows, but this is a more or less opaque store. Poking around inside will not reveal very much to you anyway.
To see what you have, you should use the docker images command. It will show you the images you have stored locally.
Like I said earlier, each image may consist of multiple layers. By default, that command will only show you the top layer, which is the one you'll care about, to run containers from. Technically, there are other layers, and you can see all of them using docker images -a.
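For example, to list what is in the local store (the second form includes the intermediate layers):
docker images
docker images -a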
Where is the software installed?
When you download an Apache image, nothing is installed on your system at all. The image file(s) are downloaded and stored. Hiding inside is Apache and everything Apache needs in order to run, but Apache is not installed onto your Windows OS anywhere.
When you want to use Apache, you would run a container. Docker takes the Apache image and, using it as a starting template, creates a running process container, inside of which Apache is running. This is isolated from your operating system. Apache is only running inside of the container.
If you run a second container from the Apache image, you now have two completely separate Apache instances running, each in their own isolated filesystem environment.
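For example, with the official httpd image (the apache:latest used below is a stand-in for something like it), these two commands start two independent Apache instances, published on different host ports:
docker run -d -p 8080:80 httpd
docker run -d -p 8081:80 httpd
Each one has its own filesystem and its own Apache process; stopping or changing one does not affect the other.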
Where can I find the files?
If you just want to poke around in the container filesystem, you can start the container in interactive mode, and run a shell instead of whatever it normally runs (like Apache). For instance, if you have an image apache:latest, you can do this:
docker run --rm -it apache:latest bash
This will run an instance of apache:latest, but instead of launching Apache, it will run a bash shell and drop you into it.
The --rm flag is convenient for cases like this. It tells Docker to remove the running container when its process exits. That way for a "just looking at something" container like this one, it cleans up after itself.
The -it is actually two flags. -i is interactive mode, and -t allocates a terminal. This is a common flag to pass when you want to directly interact with the container.
Once inside, you can use the usual commands to look at files and directory listings. Note that many containers are stripped-down, though. You don't always have all of the tools you are used to having. Things like ls in Linux are typically there, but a lot of things will not be.
Simply type exit when you're done looking around.
Looking around while the process is running
You can also look at the container while Apache is running. First start it normally.
docker run -d apache:latest
This will return a container ID. You can also get the ID from docker ps. Then you can attach to the container with that ID by executing a shell.
docker exec -it <container_id> bash
Now you're in the container in a shell, but Apache is in there running.
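From there you can look at Apache's files, or copy them out to the host without entering the container at all. For instance (the htdocs path below is where the official httpd image keeps its content; adjust it for whatever your image actually uses):
docker exec <container_id> ls /usr/local/apache2/htdocs
docker cp <container_id>:/usr/local/apache2/htdocs ./htdocs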
I want to set up git and git-sync in my new Docker container, but I am not sure how to do that or whether that is the right way to do it. Is there an easier way? For example, I also use Kubernetes and am trying to see what Kubernetes can do as far as git-sync is concerned. Any ideas?
Don't treat a Docker container as a VM. Usually you shouldn't go into the container to run commands or to configure settings. Use docker build to build everything you need (jar file, JVM server, ...) from a Dockerfile, and use environment variables to handle any settings (or a volume with a settings file). Your container image entrypoint (CMD) can be a script of your own (bootstrap.sh), which can also handle some startup activities. Generally, your container should be stateless. For versioning, use tags. Take your time and read some docs and some real Docker app examples; you will see there what the best practice is.
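A minimal sketch of what that can look like; the base image, script name, and environment variable are placeholders, not anything specific to git-sync:
# hypothetical Dockerfile: settings come from an env variable, startup work happens in bootstrap.sh
FROM alpine:3.19
RUN apk add --no-cache git
COPY bootstrap.sh /usr/local/bin/bootstrap.sh
RUN chmod +x /usr/local/bin/bootstrap.sh
ENV GIT_REPO_URL=""
ENTRYPOINT ["/usr/local/bin/bootstrap.sh"]
You would then run it with something like docker run -e GIT_REPO_URL=https://example.com/repo.git myimage:1.0, where bootstrap.sh reads the variable and does the startup work, and the 1.0 tag is how you version the image.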