Rancher configuration lost - docker

I have restarted the rancher host a few times while configuring rancher.
Nothing was lost, even though containers had been started and stopped several times during these reboots.
I had to stop the container and run it again to bind the UI to a specific IP, so I could use the other IP addresses available on the host as HostPorts for containers.
This is the command I had to execute again:
docker run -d --restart=unless-stopped -p 1.2.3.4:80:80 -p 1.2.3.4:443:443 rancher/rancher
After running this, Rancher started up as a clean installation, asking me for a password, to set up a cluster, and to do everything from scratch, even though I can see a lot of containers running.
I tried rerunning the command that rancher showed on the first installation (including the old token and ca-checksum). Still nothing.
Why is this happening? Is there a way to restore the data, or should I do the configuration and container creation again?
What is the proper way of cleaning up if I need to start from scratch? docker rm all the containers and do the setup again?
UPDATE
I just found some information from another member in a related question, since this problem happened after following a suggestion from another user.
Apparently there is an upgrade process that needs to be followed, but I am unsure what exactly needs to be done. I can see my old, stopped container here: https://snag.gy/h2sSpH.jpg
I believe I need to do something with that container so the new Rancher container comes online with the previous data.
Should I be running this?
docker run -d --volumes-from stoic_newton --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher:latest

Ok, I can confirm that this process works.
I have followed the guide here: https://rancher.com/docs/rancher/v2.x/en/upgrades/upgrades/single-node-upgrade/#completing-the-upgrade
I just had to stop the new Rancher container (which was lacking the data), copy the data from the original container to create a backup, and then restart the new container with the volumes from the data container that was created in the process.
I could probably have launched the new Rancher container directly with the volumes from the old Rancher container, but I preferred to play it safe and follow every step of the guide, and as a plus I ended up with a backup :)
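For reference, here is a condensed sketch of the steps in that guide, assuming the old container is named stoic_newton (adjust the names and image tags to your setup):
# stop the old Rancher container
docker stop stoic_newton
# create a data container that holds the old container's volumes
docker create --volumes-from stoic_newton --name rancher-data rancher/rancher
# back up the data to a tarball on the host
docker run --volumes-from rancher-data -v $PWD:/backup alpine tar zcvf /backup/rancher-data-backup.tar.gz /var/lib/rancher
# start the new container with the volumes from the data container
docker run -d --volumes-from rancher-data --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher:latest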

Related

Using remove option with interactive docker container [duplicate]


Why does vscode's remote explorer get a list of old containers? (Docker)

I succeeded in connecting to a remote server configured with Docker through VS Code. However, VS Code's Remote Explorer fetched a list of containers from the past. Looking at this list, they are clearly containers created from images I downloaded a few days ago. I don't know why this is happening.
Presumably, it is a problem with the settings.json file or a problem with some log.
I pressed F1 in VS Code and selected Remote-Containers: Attach to Running Container...
Then the docker command was entered automatically in the terminal. Here, a container (b25ee2cb9162) appeared whose origin I do not know.
After running this container, a new window opens with the message Starting Dev Container.
This is the list of containers from the images I said I downloaded a few days ago. This is what VS Code showed me.
What's the reason that this happened?
Those containers you are seeing are the same ones you would see if you ran docker container ls -a. They have exited and are not automatically cleaned up by Docker unless you specify the --rm option on the CLI.
The docs for the --rm option explain the reason for this nicely:
By default a container’s file system persists even after the container exits. This makes debugging a lot easier (since you can inspect the final state) and you retain all your data by default. But if you are running short-term foreground processes, these container file systems can really pile up. If instead you’d like Docker to automatically clean up the container and remove the file system when the container exits, you can add the --rm flag:
Per this answer about non-running containers taking up system resources, you don't have to be concerned about them consuming much; they take up only minimal disk space.
To remove those containers, you have a few options:
[Preemptive] Use --rm flag when running container
You can pass the --rm flag when you run a container with Docker so that the container is removed after it has exited and old containers don't accumulate.
As the docs mention, the downside is after the container exits, it's difficult to debug why the container exited if something failed inside the container.
See the docs here if using docker run: https://docs.docker.com/engine/reference/run/#clean-up---rm
See this answer if using docker-compose run
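The run subcommand accepts the same flag in docker-compose; a minimal sketch, assuming a service named web is defined in your compose file:
# run a one-off command in the web service and remove the container afterwards
docker-compose run --rm web echo hello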
Clean up existing containers from the command line
Use the docker container prune command to remove all stopped containers.
See the docs here: https://docs.docker.com/engine/reference/commandline/container_prune/
See this related SO answer if you're looking for other options:
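For example (the --filter flag is optional and shown only as an illustration):
# remove all stopped containers (prompts for confirmation; add -f to skip the prompt)
docker container prune
# or prune only containers that have been stopped for more than 24 hours
docker container prune --filter "until=24h"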
Clean up containers from VSCode
The VS Code Docker extension lets you clean up containers if you open the command palette and enter Docker Containers: Remove.
Or you can simply right-click those containers.

Docker error: Cannot start service ...: network 7808732465bd529e6f20e4071115218b2826f198f8cb10c3899de527c3b637e6 not found

When starting a docker container (not developed by me), docker says a network has not been found.
Does this mean the problem is within the container itself (so only the developer can fix it), or is it possible to change some network configuration to fix this?
I'm assuming you're using docker-compose and seeing this error. I'd recommend
docker-compose up --force-recreate <name>
That should recreate the containers as well as supporting resources such as the network in question (it will likely create a new network).
shutdown properly first, then restart
docker-compose down
docker-compose up
I was facing a similar issue and this worked for me:
Run docker container ls -a and remove the stale containers with docker container rm ca877071ac10 (substituting the container ID).
The problem was that some old container instances had not been removed. Once all the old terminated instances are removed, you can start the containers with the docker-compose file.
This can be caused by an old service that has not been killed. First add the --remove-orphans flag when bringing down your containers to remove any orphaned services still running, then bring the containers back up:
docker-compose down --remove-orphans
docker-compose up
This is based on this answer.
In my case, the steps that produced the error were:
Server restarted; the containers from a docker-compose stack remained stopped.
docker network prune was run, so the networks associated with the stack's containers were deleted.
Running docker-compose --project-name "my-project" up -d failed with the error described in this topic.
Solved by simply adding --force-recreate, like this:
docker-compose --project-name "my-project" up -d --force-recreate
This presumably works because the containers are recreated and linked to the freshly recreated network (the old one having been pruned, as described in the preconditions above).
Apparently a VPN was causing this. Turning off the VPN and resetting Docker to factory settings solved the problem on two computers in our company. A third, personal computer that did not have the VPN never showed the problem.
Amongst other things, docker system prune will remove "all networks not used by at least one container", allowing them to be recreated by the next docker-compose up.
More precisely, docker network prune can be used to remove just the networks.
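A minimal recovery sketch along those lines (the surrounding project setup is assumed, not taken from the question):
# remove networks not used by any container
docker network prune
# recreate the stack; compose will recreate the missing network
docker-compose up -d --force-recreate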

What is the '--rm' flag doing?

I am trying Docker for the first time and do not yet have a "mental model". Total beginner.
All the examples that I am looking at have included the --rm flag to run, such as
docker run -it --rm ...
docker container run -it --rm ...
Question:
Why do these commands include the --rm flag? I would think that if I were to go through the trouble of setting up or downloading a container with the good stuff in it, why remove it? I want to keep it to use again.
So, I know I have the wrong idea of Docker.
Containers are merely instances of the image you run them from.
The mindset when creating a containerized app is not to take a fresh, clean ubuntu container, for instance, download the apps and configuration you wish to have into it, and then let it run.
You should treat the container as an instance of your application, but your application is embedded into an image.
The proper usage is to create a custom image in which you embed all your files, configuration, environment variables, etc. Read more about Dockerfile and how this is done here.
Once you have done that, you have an image that contains everything, and to use your application you just run the image with the proper port settings or other dynamic variables, using docker run <your-image>
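As a rough sketch of that workflow (the image name and ports are illustrative):
# build a custom image from the Dockerfile in the current directory
docker build -t my-app .
# run an instance of it; the app and its configuration live in the image
docker run -d -p 8080:80 my-app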
Running containers with the --rm flag is good for containers you use only briefly to accomplish something, e.g. compiling your application inside a container or just testing that something works. Since you know it's a short-lived container, you tell your Docker daemon that once it has finished running, it should erase everything related to it and save the disk space.
The --rm flag is used when you need the container to be deleted after its task is complete.
This is suitable for small tests or POC purposes and saves the headache of housekeeping.
From https://docs.docker.com/engine/reference/run/#clean-up---rm
By default a container’s file system persists even after the container exits. This makes debugging a lot easier (since you can inspect the final state) and you retain all your data by default. But if you are running short-term foreground processes, these container file systems can really pile up. If instead you’d like Docker to automatically clean up the container and remove the file system when the container exits, you can add the --rm flag
In short, it's useful to keep the host clean from stopped and unused containers.
When you run a container from an image using a simple command like docker run -it ubuntu, it spins up a container. You attach to your container using docker attach container-name (or use docker exec for a separate session).
So, when you're inside your container working on it and you type exit or Ctrl+Z or leave the container any other way than Ctrl+P+Q, your container exits. That means your container has stopped, but it is still available on your disk and you can start it again with docker start container-name/ID.
But when you run the container with the --rm flag, the container is deleted permanently on exit.
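A quick sketch of the difference (the container name demo is illustrative):
# without --rm: the stopped container remains and can be restarted
docker run -it --name demo ubuntu bash   # type exit to leave
docker ps -a                             # demo is listed as Exited
docker start -ai demo                    # start the same container again
# with --rm: nothing is left behind after exit
docker run -it --rm ubuntu bash          # type exit to leave
docker ps -a                             # no trace of the container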
I use --rm when connecting to running containers to perform actions such as a database backup or a file copy. Here is an example:
docker run -v $(pwd):/mnt --link app_postgres_1:pg --rm postgres:9.5 pg_dump -U postgres -h pg -f /mnt/docker_pg.dump1 app_db
The above connects to a running container named 'app_postgres_1' and creates a backup. Once the backup command completes, the container is fully deleted.
The "docker run rm " command makes us run a new container and later when our work is completed then it is deleted by saving the disk space.
The important thing to note is, the container is just like a class instance and not for data storage. We better delete them once the work is complete. When we start again, it starts fresh.
The question comes then If the container is deleted then what about the data in a container? The data is actually saved in the local system and get linked to it when the container is started. The concept is named as "Volume or shared volume".
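A minimal sketch of that idea, using a named volume (the names are illustrative):
# data written to the named volume mydata survives container removal
docker run --rm -v mydata:/data ubuntu sh -c 'echo hello > /data/greeting'
docker run --rm -v mydata:/data ubuntu cat /data/greeting   # prints hello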

How to autoremove linked Docker container?

I have a container with PHP and a linked container with MySQL database, because I need an ability to run PHPUnit with database (integration tests).
The basic command looks like this:
docker run -i --rm --link db binarydata/phpunit php script.php
I have created the db container and started it before running this command.
The binarydata/phpunit container gets removed after the command has run, but the db container stays up and running.
The question is: how can I achieve --rm functionality for the linked container, so that it is removed too after the command has executed?
how can I achieve --rm functionality for the linked container, so that it is removed too after the command has executed?
First, you don't have to use --link anymore with docker 1.10+: docker-compose will create a network for you in which all containers see each other.
And with docker-compose aliases, you can give your db container the alias 'db' for other containers (such as binarydata/phpunit) to use.
Second, with that network in place, if you stop/remove a container, it is removed from said network, along with its alias.
That differs from the old links (docker 1.8 and before), which would modify the /etc/hosts of the container that needed them. In that case, removing the linked container would not, indeed, update /etc/hosts.
With the new embedded docker-daemon DNS, there is no longer a need for that.
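A minimal compose sketch under those assumptions (the image and command come from the question; the postgres image and service layout are illustrative):
# write a docker-compose.yml in which phpunit reaches the database as db
cat > docker-compose.yml <<'EOF'
version: "2"
services:
  db:
    image: postgres:9.5
  phpunit:
    image: binarydata/phpunit
    command: php script.php
    depends_on:
      - db
EOF
docker-compose up --abort-on-container-exit   # stops everything when phpunit exits
docker-compose down                           # removes both containers and the network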
Matt suggests in the comments the following command and caveats:
docker-compose up --abort-on-container-exit --force-recreate, otherwise the command never returns and the db container would never be removed.
up messes with stdout a bit, though.
The exit status of the tests will be lost too; it's printed to the screen instead.
