Docker container migration - docker

I'm trying to use the experimental checkpoint feature to implement container migration between machines. I've found many examples of checkpointing and restoring on the same machine but I've only found this documentation about migrating checkpoints between different machines:
https://circleci.com/blog/checkpoint-and-restore-docker-container-with-criu/
However, the commands it uses are outdated and docker checkpoint restore is no longer available; the docker start --checkpoint syntax should be used instead. I've set up my use case as follows:
Host 1: Has a Docker container running, which I checkpoint to a location in $CHECKPOINT_FOLDER (a folder shared among the different machines) with docker checkpoint create --checkpoint-dir=$CHECKPOINT_FOLDER $NAME checkpoint-$NAME, where $NAME is the name of the running container (one-13 in this case).
Host 2: Has access to the $CHECKPOINT_FOLDER folder and I can see the created checkpoint. I run docker start --checkpoint-dir $CHECKPOINT_FOLDER --checkpoint checkpoint-$NAME $NAME, where $NAME is again the name of the container that was running on host 1 (one-13). However, I get this error:
No such container: one-13
This makes me think that I have to create a container before starting from the checkpoint, but then how do I do so? Isn't it supposed to be created automatically from the checkpoint? If not, is there a way to pass the checkpoint to the docker create command? What's the workflow for this use case?
Thank you.

Before restoring the checkpoint on the destination host, you have to create a container:
sudo docker create --name $NAME <container-image>
Creating the container $NAME makes sure that the base image is downloaded and the disk space is allocated; you can check under /var/lib/docker/containers/
Then you can restore it with the shared dump files in $CHECKPOINT_FOLDER:
sudo docker start --checkpoint=checkpoint-$NAME --checkpoint-dir=$CHECKPOINT_FOLDER $NAME
or, with the checkpoint name spelled out:
sudo docker start --checkpoint=checkpoint-one-13 --checkpoint-dir=$CHECKPOINT_FOLDER $NAME
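Putting both hosts together, a minimal end-to-end sketch of the workflow looks like this (the image name my-image is a placeholder for whatever image one-13 was created from):
# Host 1: checkpoint the running container into the shared folder
docker checkpoint create --checkpoint-dir=$CHECKPOINT_FOLDER one-13 checkpoint-one-13
# Host 2: create a container with the same name from the same image,
# then start it from the shared checkpoint
docker create --name one-13 my-image
docker start --checkpoint-dir=$CHECKPOINT_FOLDER --checkpoint=checkpoint-one-13 one-13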

Related

How to configure cassandra.yaml which is inside the Cassandra docker image at /etc/cassandra/cassandra.yaml

I am trying to edit cassandra.yaml, which is inside the Docker container at /etc/cassandra/cassandra.yaml. I can edit it by logging into the container, but how can I do it from the host?
There are multiple ways to achieve this from the host. You can simply use COPY or RUN in your Dockerfile, along with basic Linux commands such as sed and cat, to place your configuration into the container. Another way is to pass environment variables when running your Cassandra image, which are forwarded to the spawned container. You can also use a Docker volume to mount the file from host to container, mapping the configuration you want onto cassandra.yaml as shown below:
$ docker container run -v ~/home/MyWorkspace/cassandra.yaml:/etc/cassandra/cassandra.yaml your_cassandra_image_name
If you are using Docker Swarm, then you can use Docker configs to store the configuration files externally (other external stores such as etcd or Consul can be used as well). Hope this helps.
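A minimal sketch of the Swarm approach, assuming a config named cassandra-yaml and a hypothetical service name cassandra:
# store the file as a Swarm config
docker config create cassandra-yaml ~/home/MyWorkspace/cassandra.yaml
# mount it into the service at the path Cassandra expects
docker service create --name cassandra --config source=cassandra-yaml,target=/etc/cassandra/cassandra.yaml your_cassandra_image_name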
To edit cassandra.yaml:
1) Copy your file from your Docker container to your system
From the command line:
docker ps
(To get your container id)
Then :
docker cp your_container_id:/etc/cassandra/cassandra.yaml C:\Users\your_destination
Once the file is copied, you should be able to see it in your_destination folder
2) Open it and make the changes you want
3) Copy your file back into your Docker container
docker cp C:\Users\your_destination\cassandra.yaml your_container_id:/etc/cassandra
4) Restart your container for the changes to take effect

How to start an existing mysql container in docker (toolbox)?

I have a container (I'm using this image: https://hub.docker.com/_/mysql/) which was started before, with ID 5f96e9570d1b1475a888d7a615acdd9a7715c1ed6f0c40900f2e9c1ab485c7cf, but how can I restart it now? I tried this command, but it doesn't work:
$ docker run --name mysql -e MYSQL_ROOT_PASSWORD=*Abcd1234 -d mysql:5.7
D:\CWindow10\Docker Toolbox\docker.exe: Error response from daemon: Conflict. The container name "/mysql" is already in use by container "5f96e9570d1b1475a888d7a615acdd9a7715c1ed6f0c40900f2e9c1ab485c7cf". You have to remove (or rename) that container to be able to reuse that name.
See 'D:\CWindow10\Docker Toolbox\docker.exe run --help'.
If I delete the container and re-run the command, will the old data still exist in the new container?
To restart an existing container, simply run docker start <container_name_or_id>.
Regarding the data: docker uses the concept of volumes to store data. For the mysql image, there's a section "Where to Store Data" on the Docker Hub page. If you don't manually declare where the volume should go, docker will create an anonymous one for you. If you want your data to be kept, the easiest way is to create a folder on the host and tell the docker run command to map it as a volume. That way, you can still use the data even if you throw away your container.
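For example, a sketch of such a run, where /my/own/datadir is a placeholder for a host folder of your choice (this mirrors the "Where to Store Data" section of the mysql image documentation):
$ docker run --name mysql -v /my/own/datadir:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=*Abcd1234 -d mysql:5.7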
Use this command to restart the container: docker restart <CONTAINER>
Starting a new container will not preserve your data unless you have mounted an external volume and stored the data on it. Have a look at this blog: http://blog.arungupta.me/docker-mysql-persistence/

Run commands on host from container command prompt

I use portainer to manage containers and it works great.
https://portainer.io/
But when I connect to the console, I get the container's command prompt. Is there any way to run simple commands like ls /home/ that will list the files on the host?
In other words, is there any image that will mount the host server's file system "as-is"?
Here's an example using docker command line:
$ docker run --rm -it -v ~/Desktop:/Desktop alpine:latest /bin/sh
/ # ls /Desktop/
You can extend the approach as far as you need to. Experiment with it, and learn about the different mount options.
I know the Docker app on macOS provides a way to set default volume mounts. Portainer also claims to provide a volume management screen, though I am yet to use it.
Hope this helps.
If you're dealing with services, or an existing, running container, you can in most cases access the shell directly. Let's say you have a container called "meow". You can run:
docker exec -it meow bash
and it will drop you into the bash shell. You'll need to know whether bash is installed in the image; if it isn't, try calling sh instead.
The "i" option indicates it should be interactive, and the "t" option indicates it should emulate a TTY terminal. When you're done, you can hit Ctrl+D to exit out of the container.
First of all: You never ever want to do so.
Volumes mounted to containers are used to persist the container's data, because containers are designed to be volatile (the container itself shouldn't persist its state, so restarting the container any number of times should result in the same container state each time it starts). Think of the volume as the database where all the data (the state of the container) is stored.
Seeing volumes this way makes it easier to decide against sharing the host's entire file system, as such a container would have read/write permissions over the host OS files themselves, which is a huge security threat.
Sharing volumes across containers is already considered bad container architecture, let alone sharing the entirety of the host file system.
I would propose simply using SSH (or remote desktop) to your host if you need to run commands or tasks on it.
Or, if your container requires access to a specific folder for some reason, consider mounting or binding that folder to the container:
docker run -d --name devtest --mount source=myvol2,target=/app nginx:latest
I would recommend copying the content of that folder into a Docker-managed volume (a folder under the docker/volumes tree) and binding the container to this volume instead of the original host folder, to minimize the impact of your container on your host's OS.
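A rough sketch of that copy step, assuming a hypothetical host folder /host/data and the volume name myvol2 used above:
# create the managed volume
docker volume create myvol2
# copy the host folder's contents into the volume via a throwaway container
docker run --rm -v /host/data:/src:ro -v myvol2:/dest alpine cp -a /src/. /dest/
# then attach only the volume to your service container
docker run -d --name devtest --mount source=myvol2,target=/app nginx:latest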

Docker Oracle12c Enterprise image created from container symlink broken

We are trying to create a docker image from a container based on the Oracle 12c Enterprise Edition image from the Docker Store (https://store.docker.com/images/oracle-database-enterprise-edition). We have the container working OK; then, after stopping the container, we create an image based on it with the following command:
docker commit Oracle_12 oracle/oradb:1
Then, we try to run a container from the committed image with the following command:
docker run -d -it --name oradb_cont -p 1512:1521 -p 5500:5500 oracle/oradb:1
This container fails with the following error:
Start up Oracle Database
Wed Nov 15 10:31:29 UTC 2017
start database
start listener
The database is ready for use .
tail: cannot open '/u01/app/oracle/diag/rdbms/orclcdb/ORCLCDB/trace/alert_ORCLCDB.log' for reading: No such file or directory
tail: no files remaining
The container is "Exited" despite the message "The database is ready for use".
We have attached a bash shell to the container to inspect where the missing file is, and the result seems to be that the "/diag" folder is a broken symlink:
Starting the original Oracle 12c container and attaching a bash shell, the folder is present. The symlink appears to be broken, or the file missing, only in the image created from the container.
The problem is that /ORCL is a data volume. The commit operation does not include any files that are inside volumes. You can check the commit documentation for more info.
Thus, when starting the new instance, it appears that the log file is being referenced but has not yet been created. Your current container is in an inconsistent state, as the files under /ORCL that were present in the committed container are missing from the new instance.
If you are running the new instance on a new machine, you need to migrate the old volume to that machine. You can find the volume of the old container by running docker inspect -f '{{ .Mounts }}' <old-container-name>, and migrate it as described in How to port data-only volumes from one host to another?
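One common way to do that migration (a sketch; ORCL_VOLUME and /path/to/backup are placeholders for the volume name reported by docker inspect and a backup directory of your choice) is to archive the volume through a throwaway container, copy the archive to the new machine, and unpack it into a volume there:
# on the old machine: archive the volume's contents
docker run --rm -v ORCL_VOLUME:/ORCL -v /path/to/backup:/backup alpine tar czf /backup/orcl.tar.gz -C /ORCL .
# on the new machine: restore into a fresh volume
docker volume create ORCL_VOLUME
docker run --rm -v ORCL_VOLUME:/ORCL -v /path/to/backup:/backup alpine tar xzf /backup/orcl.tar.gz -C /ORCL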
If you are running the new instance on the same machine, just mount the old volume when starting it, using -v <volume-name-or-id>:/ORCL
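For instance, a sketch of the same-machine case, where ORCL_VOLUME again stands for the volume name or ID of the old container:
docker run -d -it --name oradb_cont -p 1512:1521 -p 5500:5500 -v ORCL_VOLUME:/ORCL oracle/oradb:1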
In general, as a best practice, you shouldn't rely on the commit command to get identical instances of a container. Rather, build a Dockerfile that extends the base image and adds your customizations, copying over only the files the new instance actually needs.
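A minimal sketch of that approach; the base image tag and the destination path are assumptions you would replace with your own:
# Dockerfile
FROM store/oracle/database-enterprise:12.2.0.1
# copy only the customizations you actually need (hypothetical paths)
COPY custom-config/ /opt/oracle/custom/
Build it with docker build -t oracle/oradb:1 . and run it with the same docker run command as before, mounting the data volume at /ORCL.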

Docker.IO Filesystem Consistency

I created a docker container, and then I created a file and exited the container.
When I restart the container with:
docker run -i -t ubuntu /bin/bash
the file is nowhere to be found. I checked /var/lib/docker/ and there is another folder created that has my file in it. I know it has something to do with the union file system.
How do I start the same container again with my file in it?
How do I export a container with file change?
I don't know if this will answer your question completely, but...
Doing
docker run -i -t ubuntu /bin/bash
will not restart any container. Instead, it will create and start a new container based on the ubuntu image.
If you started a container and then stopped it, you can use docker start ${CONTAINER_ID}. If you did not stop it yet, you can use docker restart ${CONTAINER_ID}.
You can also commit (export) the container to a new image: see http://docs.docker.io/en/latest/commandline/command/commit/ for the correct syntax. docker export is an option as well, but all it will do is archive your container's filesystem. By creating a new image using docker commit, you can create multiple instances (containers) of it afterwards, all containing your file.
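A short sketch of that commit workflow, where ubuntu-with-myfile is a hypothetical tag for the new image:
# find the ID of the stopped container
docker ps -a
# either restart it in place...
docker start -ai <container_id>
# ...or bake the change into a new image and run fresh containers from it
docker commit <container_id> ubuntu-with-myfile
docker run -i -t ubuntu-with-myfile /bin/bash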
