I created a docker container, and then I created a file and exited the container.
When I restart the container with:
docker run -i -t ubuntu /bin/bash
the file is nowhere to be found. I checked /var/lib/docker/ and there is another folder created which has my file in it. I know it's something to do with Union FS.
How do I start the same container again with my file in it?
How do I export a container with file change?
I don't know if this will answer your question completely, but...
Doing
docker run -i -t ubuntu /bin/bash
will not restart any container. Instead, it will create and start a new container based on the ubuntu image.
If you started a container and then stopped it, you can restart it with docker start ${CONTAINER_ID}. If it is still running, you can use docker restart instead.
You can also commit (export) the container to a new image: see http://docs.docker.io/en/latest/commandline/command/commit/ for the correct syntax. docker export is an option as well, but all it does is archive your container's filesystem to a tarball. By creating a new image using docker commit, you can afterwards create multiple instances (containers) of it, all containing your file.
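Both options side by side, as a sketch (the container name "my_container" and the image tag are placeholders, not from your setup):

```shell
# Option 1: restart the same stopped container, keeping its filesystem changes
docker start my_container
docker attach my_container

# Option 2: snapshot the container into a new image,
# then run as many fresh instances of it as you like
docker commit my_container myrepo/ubuntu-with-file:v1
docker run -i -t myrepo/ubuntu-with-file:v1 /bin/bash
```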
Related
I'm trying to start up Tomcat on my Docker Desktop, and I followed the official Tomcat tutorial on Docker Hub. But somehow I found that Docker creates a new container every time I run the command docker run -it --rm tomcat, and deletes the container automatically when Tomcat shuts down.
I already know the reason: --rm automatically removes the container when it exits.
Now I have finally built web apps on Tomcat, and I don't want them to vanish.
How can I save my container before it's deleted?
thx! ;D
Based on what I've found on the internet, removing the --rm flag is currently not possible. docker update gives you the ability to change some parameters after you start your container, but you cannot update the cleanup flag (--rm), according to the documentation.
References:
I started a docker container with --rm Is there an easy way to keep it, without redoing everything?
Cancel --rm option on running docker container
But a workaround can be applied. You can export your current container to an image, which acts as a checkpoint, then start a new container without the --rm flag, based on the image you exported. You can use docker commit to do so:
docker commit [your container name/id] [repo/name:tag]
(Use docker ps to list your containers, do it in a new bash/cmd/PowerShell session, or you will lose your work when you exit your docker container)
Then start a new container without the --rm flag:
docker run -it [repo/name:tag]
Disclaimer:
In a production environment, you should never change a container by running bash or sh in it. Use a Dockerfile and docker build instead. A Dockerfile gives you a reproducible configuration even if you delete your container. By design, a container should not hold any important data (i.e., it is not persistent). Use images and volumes to save your custom changes and configurations.
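A minimal sketch of that Dockerfile approach for the Tomcat case above (the base image tag and the my-webapp.war file in the build context are assumptions):

```dockerfile
# bake the application into an image instead of changing a live container
FROM tomcat:9

# copy the web application into Tomcat's webapps directory;
# my-webapp.war is an assumed file sitting next to this Dockerfile
COPY my-webapp.war /usr/local/tomcat/webapps/
```

Build it with docker build -t my-tomcat . and run it with docker run -it --rm my-tomcat; because the app now lives in the image, --rm no longer loses it.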
We are trying to create a docker image from a container based on the Oracle 12c Enterprise Edition image from docker store (https://store.docker.com/images/oracle-database-enterprise-edition). We have the container working ok and then, after stopping the container we create an image based on that container with the following command.
docker commit Oracle_12 oracle/oradb:1
Then, we try to run a container using the commited image with the following command:
docker run -d -it --name oradb_cont -p 1512:1521 -p 5500:5500 oracle/oradb:1
This container fails with the following error:
Start up Oracle Database
Wed Nov 15 10:31:29 UTC 2017
start database
start listener
The database is ready for use .
tail: cannot open '/u01/app/oracle/diag/rdbms/orclcdb/ORCLCDB/trace/alert_ORCLCDB.log' for reading: No such file or directory
tail: no files remaining
The container is "Exited", despite the message "The database is ready for use".
We attached a bash to the container to inspect where the missing file is, and it seems the /diag folder is a broken symlink:
Starting the original Oracle 12c container and attaching a bash, the folder is present. It seems the symlink is broken, or the file is missing only in the image created from the container.
The problem is that /ORCL is a data volume. The commit operation does not include any files that are inside volumes. You can check the commit documentation for more info.
Thus when starting the new instance, the log file is being referenced but has not yet been created. Your new container is in an inconsistent state: the files under /ORCL that were present in the committed container are missing from the new instance.
If you are running the new instance on a new machine you need to migrate the old volume into the new machine. You can find the volume of the old container by running docker inspect -f '{{ .Mounts }}' <old-container-name>, and migrate as specified in How to port data-only volumes from one host to another?
If you are running the new instance on the same machine, just mount the old volume using: <volume-name-or-id>:/ORCL
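Putting those two steps together for the same-machine case (the old container name and the volume id shown are placeholders; use the actual value printed by the inspect command):

```shell
# find the volume the old container used for /ORCL
docker inspect -f '{{ .Mounts }}' old-oradb-container

# start the new container with that volume mounted at the same path;
# "vol-id" stands in for the volume name/id printed above
docker run -d -it --name oradb_cont -p 1512:1521 -p 5500:5500 \
  -v vol-id:/ORCL oracle/oradb:1
```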
In general, as a best practice, you shouldn't rely on the commit command to get identical instances of a container. Instead, build a Dockerfile which extends the base image, and add customizations by copying only the necessary files into the new image.
I did the following and lost all the changed data in my Docker container.
docker build -t <name:tag> .
docker run -p 8080:80 --name <container_name> <name:tag>
docker exec (import and process some files, launch a server to host them)
Then I wanted to run it on a different port. docker stop followed by docker run does not work. Instead I did
docker stop
docker rm <container_name>
docker run (same parameters as before)
After the restart I saw that the changes made in the container during steps 1-3 had disappeared, and I had to re-run the import.
How do I do this correctly next time?
What you have to do is build an image from the container you just stopped, after making your changes. Your old command still uses the old image, which doesn't have the new changes (you made the changes in the container you just stopped, not in the image).
docker commit --help
Usage: docker commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]]
Create a new image from a container's changes
docker commit -a me new_nginx myrepo/nginx:latest
then you can start container with the new image you just built
But if you don't want to create an image with the changes you made (e.g. you don't want to put a config containing a password into the image), you can use a volume mount:
docker run -d -P --name web -v /src/webapp:/webapp training/webapp python app.py
This command mounts the host directory, /src/webapp, into the container at /webapp. If the path /webapp already exists inside the container’s image, the /src/webapp mount overlays but does not remove the pre-existing content. Once the mount is removed, the content is accessible again. This is consistent with the expected behavior of the mount command.
Manage data in containers
Every time you do a docker run, it spins up a fresh container based on your image. And once a container is started, there are very few things that Docker allows you to change with docker update. So instead, you should preserve your data in an external volume that persists between instances of a container. E.g.
docker run -p 8080:80 -v app-data:/data --name <container_name> <name:tag>
The volume name (app-data) and mount point in the container (/data) can be changed for your own requirements. Then when you destroy and restart a new container, you can mount the same volume in the new container.
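The destroy-and-recreate cycle then looks like this (container names and the image tag are placeholders); the named volume outlives both containers:

```shell
# first container writes its data into the named volume "app-data"
docker run -p 8080:80 -v app-data:/data --name web1 myapp:latest
docker rm -f web1

# a new container mounting the same volume sees the same /data contents
docker run -p 9090:80 -v app-data:/data --name web2 myapp:latest
```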
I am running a Docker container in CoreOS (host) and mounted a host folder with a container's folder.
docker run -v /home/core/folder_name:/folder_name <container_name>
Now, each time I am changing (insert/delete) some file in that host folder (folder_name), I have to restart the container (container_name) to see the effects.
docker restart <container_name>
Is there any way from the host side or docker side to restart it automatically when there is a change (insert/delete) in the folder?
Restarting the Docker container on a folder change is rather antithetical to the whole notion of -v in the first place. If you really need to restart the container in the manner you are suggesting, then the only way to do it is from the Docker host. There are a couple of tools (off the top of my head; there are definitely more) you could use to monitor the host folder and trigger the docker restart <container_name> command when a file is inserted or deleted: incron and inotify-tools. Here is another question someone asked similar to yours, and the answer recommended using one of the tools I suggested.
Now, there is no way that the files in the host folder are not being changed in the Docker container as well. It must be that the program you are running in the container isn't updating its view of the /folder_name folder after it starts up. Is it possible to force the program you are running in the container to refresh or update? The -v option works via bind mounting and has been a stable feature in Docker for quite a while. With bind mounting, the /home/core/folder_name folder IS (for all practical purposes) the same folder as /folder_name in the container.
Run the command
docker run -t -i -v /home/core/folder_name:/folder_name <container_name> /bin/sh
This command gives you an interactive shell within the container. In this shell issue the command:
cd /folder_name; touch a_file
Now go to /home/core/folder_name on the docker host in a shell or some file browser. The file a_file will be there. You can delete that file on the host and go back to the shell running in the docker container and run ls /folder_name. The file a_file will not be there.
So, you either need to use inotify or incron to go about restarting your container anytime a file changes on the host, or figure out how to work with the program you are running in the docker container to have it update its view of the /folder_name folder.
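A sketch of the inotify-tools route, run on the host (the watched path matches your example; the container name is a placeholder, and inotify-tools must be installed):

```shell
#!/bin/sh
# restart the container whenever a file is created or deleted
# in the watched host folder (blocks until an event occurs)
while inotifywait -e create -e delete /home/core/folder_name; do
    docker restart container_name
done
```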
I have a Docker image running a WordPress installation. The image executes the Apache server as its default command, so when you stop the Apache service the container exits.
The problem came after I messed up the Apache server config. Now the container cannot start, and I cannot recover its contents.
My options are to either override the command that the container runs or revert last file system changes to a previous state.
Is any of these things possible? Alternatives?
When you start a container with docker run, you can provide a command to run inside the container. This will override any command specified in the image. For example:
docker run -it some/container bash
If you have modified the configuration inside the container, that does not affect the content of the image. So you can "revert the filesystem changes" just by starting a new container from the original image, which is still available, unmodified.
The only way that changes inside a container affect an image is if you use the docker commit command to generate a new image containing the changes you made in the container.
If you just want to copy the contents out you can use the command below with a more specific path.
sudo docker cp containername:/var/ /varbackup/
https://docs.docker.com/reference/commandline/cli/#cp
The filesystem is also accessible from the host. Run the command below, and in the volumes section at the bottom it should show a path to where your filesystem modifications are stored. This is not a good permanent solution.
docker inspect containername
If you re-create the container later, you should look into keeping your data outside of the container and mounting it into the container as a volume when you create the container. If you mount your Apache config file into the container this way, you can edit it while the container is not running.
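A sketch of that approach (the host path, container path, and image name are assumptions for illustration):

```shell
# keep the Apache config on the host and bind-mount it into the container;
# you can then fix the config with any editor while the container is stopped
docker run -d --name wordpress \
  -v /srv/wordpress/apache2.conf:/etc/apache2/apache2.conf \
  my/wordpress-image
```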
Managing Data in Containers
http://docs.docker.com/userguide/dockervolumes/
Edit 1: Not suggesting this as a best practice but it should work.
This should display the path to the apache2.conf on the host.
Replace some-wordpress with your container name.
CONTAINER_ID=$(docker inspect -f '{{.Id}}' some-wordpress)
sudo find /var/lib/docker/ -name apache2.conf | grep $CONTAINER_ID
There are different ways of overriding the default command of a docker image. Here you have two:
If you have an image with a default CMD, you can simply override it in docker run by giving the command (with its arguments) you wish to run as the last argument (usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]).
Create a wrapper image, using the image whose CMD or ENTRYPOINT you want to override as the BASE image. Example:
FROM my_image
CMD ["my-new-cmd"]
Also, you can try to revert the changes in different ways:
If you have the Dockerfile of the image you want to revert, simply rewrite the changes in the Dockerfile and run docker build again.
If you don't have the Dockerfile and you built the image by committing changes, you can use docker history <IMAGE_NAME>:tag, locate the IMAGE_ID of the commit you want, and run that commit, or tag it with the name (and tag) you wish (using the -f option if you are overriding an existing tag name). Example:
$ docker history docker_io_package:latest
$ docker tag -f c7b38f258a80 docker_io_package:latest
If it requires starting a command with a set of arguments, for example
ls -al /bin
try to make it like that
docker run --entrypoint ls -it debian /bin -al
where ls goes after --entrypoint and all arguments are placed after the image name