I am using the OpenCV Docker image from https://hub.docker.com/r/andrewssobral/bgslibrary_opencv3/
by the author andrewssobral.
First, I started a container from the image with the command:
docker run -it -p 5901:5901 andrewssobral/bgslibrary_opencv3 bash
Then I tried to install vim from the command line:
apt-get install vim
But when I use the exit command to leave the container and run it again, vim is no longer installed.
So how do I install vim, or any other software, permanently inside a Docker container?
But when I exit the above container and run it again, vim is no longer installed.
This is where the problem is: docker run creates a new container.
When you use docker run ..., a new container is created and started based on the image you provide in the command. It is also assigned a random name (if you don't specify one). If this container exits, you can use docker start name to start it again. This means that if you had previously installed vim, it will still be there.
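For example (a quick sketch; my_container stands for whatever name Docker assigned or you chose):
docker ps -a                    # list all containers, including stopped ones
docker start -ai my_container   # restart the stopped container and attach to it; vim is still installed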
Solution: create a new image which includes what you need.
@Sergiu proposed using a Dockerfile. Another way is to save the current state of your container to a new image, so that you can use it later to create new containers with your changes included. To do this you can use docker commit,
something like this:
docker commit your_modified_container_name [REPOSITORY[:TAG]]
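For example, assuming your stopped container is named modified_container and you pick a repository and tag of your own:
docker commit modified_container myuser/bgslibrary_opencv3:with-vim   # names here are placeholders
docker run -it myuser/bgslibrary_opencv3:with-vim bash                # vim is now part of the image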
You have two options: either you edit the Dockerfile provided by the author to add vim, or you create a new Dockerfile FROM the image.
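A minimal sketch of the second option (base image taken from the docker run command above; the -t tag name is my own):
FROM andrewssobral/bgslibrary_opencv3
RUN apt-get update && apt-get install -y vim
Build it with docker build -t bgslibrary-vim . and vim will be baked into every container you create from the new image.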
Related
I've been using a Docker container to build the Chromium browser (building for Android on Debian 10). I've already created a Dockerfile that contains most of the packages I need.
Now, after building and running the container, I followed the instructions, which asked me to execute an install script (./build/install-build-deps-android.sh). In this script multiple apt install commands are executed.
My question now is: is there a way to install these packages without rebuilding the container? Downloading and building took rather long, and rebuilding the container each time a new package is required seems suboptimal. The error I get when executing the install script is:
./build/install-build-deps-android.sh: line 21: lsb_release: command not found
(I guess there will be multiple missing packages). And using apt will give:
root@677e294147dd:/android-build/chromium/src# apt install nginx
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package nginx
(nginx just as an example install).
I'm thankful for any hints, as I could only find guides that use the Dockerfile to install packages.
You can use docker commit:
Start your container: sudo docker run IMAGE_NAME
Access your container using bash: sudo docker exec -it CONTAINER_ID bash
Install whatever you need inside the container
Exit container's bash
Commit your changes: sudo docker commit CONTAINER_ID NEW_IMAGE_NAME
If you now run docker images, you will see NEW_IMAGE_NAME listed under your local images.
Next time, when starting the docker container, use the new docker image you just created:
sudo docker run NEW_IMAGE_NAME - this one will include your additional installations.
Answer based on the following tutorial: How to commit changes to docker image
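Putting those steps together for the Chromium build case, it might look like this (IMAGE_NAME, CONTAINER_ID and the chromium-deps tag are placeholders; lsb-release is the Debian package that provides the missing lsb_release command):
sudo docker run IMAGE_NAME
sudo docker ps                                # in another terminal, to find CONTAINER_ID
sudo docker exec -it CONTAINER_ID bash
apt update && apt install -y lsb-release      # inside the container
exit
sudo docker commit CONTAINER_ID chromium-deps # chromium-deps is a made-up image name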
Thanks to @adnanmuttaleb and @David Maze (unfortunately, they only replied in comments, so I cannot accept their answers).
What I did was edit the Dockerfile for any later updates (which already happened), and use docker exec to install the needed dependencies from outside the container. Also remember to run
apt update
first, otherwise apt cannot find any packages.
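A sketch of that docker exec approach (CONTAINER_ID is whatever docker ps shows for your running build container, and the package name is a placeholder for whatever the install script complains about):
docker exec -it CONTAINER_ID bash -c "apt update && apt install -y MISSING_PACKAGE"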
A slight variation of the steps suggested by Arye that worked better for me:
Create a container from the image and access it in interactive mode: docker run -it IMAGE_NAME /bin/bash
Modify container as desired
Leave container: exit
List launched containers: docker ps -a and copy the ID of the container just modified
Save to a new image: docker commit CONTAINER_ID NEW_IMAGE_NAME
If you haven't followed the Post-installation steps for Linux, you might have to prefix Docker commands with sudo.
I am trying to start a docker container using the ubuntu image:
docker container run -d --name ubuntu_assignment_4 6e4f1fe62
However, as soon as I start the container, it stops again.
Why does this happen and how can I ensure the container stays running?
The image I am trying to run here is: ubuntu:14.04
If you are going to use the ubuntu:14.04 image without any modifications to it, you would not require a separate Dockerfile. And it is not possible to keep the plain ubuntu:14.04 image running as a container.
You can directly launch the container with an interactive shell using the ubuntu:14.04 image.
docker run -it ubuntu:14.04 /bin/bash
But the plain ubuntu:14.04 image does not have curl pre-installed on it.
You will need a custom Dockerfile for this.
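A minimal sketch of such a Dockerfile (assuming curl is the only extra package you need; the ubuntu-curl tag is my own):
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y curl
CMD ["/bin/bash"]
Build it with docker build -t ubuntu-curl . and launch it with docker run -it ubuntu-curl.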
I can't say exactly what is happening without seeing the complete Dockerfile that was used to build the image, but I am pretty certain the trouble you are having is simply that whatever task is started inside the container finishes and exits.
Docker containers work by having some command assigned (using the ENTRYPOINT or CMD directives in the Dockerfile, or as an argument to docker create or docker run on the command line) which is the program that is started when the container loads. The container will live for as long as that task continues to run, and once that program finishes the container will terminate.
To specify the startup entrypoint at the command line, try:
docker create -it [image] /bin/bash
Then start it like this:
docker start -ia [Container ID]
The container will exit once the shell exits, because this is assigning the shell as the entry point.
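To see the difference (a quick illustration, not part of the original answer):
docker run ubuntu:14.04 echo hello        # prints hello, then the container exits immediately
docker run -it ubuntu:14.04 /bin/bash     # the container stays up until you exit the shell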
cURL may not be installed by default. It is possible to install it using apt-get. But again, once the shell is closed the container will stop, and those changes will not be present in new containers created from the image. As a start, try creating a new directory somewhere, then add a file called Dockerfile with this content:
FROM ubuntu:latest
RUN apt-get update && apt-get install -y curl
ENTRYPOINT ["/bin/bash"]
That will create a new image with curl installed. Then, from inside the new directory where the Dockerfile was created, run:
docker build .
docker images
which will build a new image, using the Dockerfile as the blueprint.
Once the build finishes, find the image ID of the new image, and run it using:
docker run -it [image id]
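As a convenience (the tag name here is my own and not part of the original steps), you can also tag the image at build time so you don't need to look up the ID:
docker build -t my-curl-image .
docker run -it my-curl-image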
Ultimately, to make Docker really useful, the typical approach is to replace that last line in the Dockerfile (ENTRYPOINT ["command"]) with something that will continue running forever (like ENTRYPOINT ["apache2"] or ENTRYPOINT ["redis"] or similar). If you have experience using regular desktop/server OS installs, and full virtual machines like VMWare or VirtualBox, just remember that Docker is very different; the way it works and the patterns used to deploy it are not the same.
I am just learning Docker. I started my first container using:
docker run -it debian:latest /bin/bash
After installing some services (systemd, openssh, etc.), I exit the container using CTRL+D, and the next time I start the container (using the same command) I get a fresh install of Debian without my configs.
I tried using docker run -it --restart no debian:buster without success.
How can I prevent this from happening?
Each time you use the
docker run
command, you create a new container from an existing Docker image. With the
docker start $containerName
command, you can start the existing container (replace $containerName with your container's real name). Otherwise, to have a custom Debian image, it is better to write a Dockerfile and build an image from it. Here are the best practices for writing a Dockerfile: https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
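A minimal sketch of such a Dockerfile for this case (the package list just mirrors what the question installed, and the my-debian tag is a placeholder):
FROM debian:buster
RUN apt-get update && apt-get install -y openssh-server systemd
Build it once with docker build -t my-debian . and every container created from my-debian starts with those packages already installed.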
After modifying a Docker image "from within" by running
docker run -it --user root <image_name> bash
…and committing the changes, the image's config now contains the bash command in Container.Cmd and ContainerConfig.Cmd.
I have seen that docker commit at least used to have a -run option which could let me modify the configuration, but I haven't found documentation for it.
How can I remove Cmd from the configuration to make the entrypoint active again (and what should I have done to avoid the problem)?
(Workaround) You could run your new image with docker run --entrypoint to set a new entrypoint, then commit that new container as a new image. It should keep the entrypoint you started it with.
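Something like this (the entrypoint path and the tag are placeholders, since the original image's entrypoint isn't shown in the question):
docker run -it --entrypoint /path/to/original-entrypoint.sh <image_name>
docker commit <container_id> <image_name>:entrypoint-restored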
Alternatively you could manually edit the JSON metadata for the image, but I wouldn't recommend that as a production hack -- it is always better to go through the APIs for that.
I want to create a Docker image which contains Java and PostgreSQL, so that I can reuse it from anywhere.
From reading the documentation I don't understand how I can do that.
This is what I tried:
user@host:/$ docker run -i -t debian /bin/bash
root@container:/$ apt-get install postgresql-9.3
user@host:/$ docker ps
user@host:/$ docker commit <CID> username/postgresql
Why reinvent the wheel? If you look at the registry, it already exists; see
https://registry.hub.docker.com/u/alinous/docker-java-postgresql/
Alternatively, you can add PostgreSQL to a Java container, like this one
https://registry.hub.docker.com/u/dockerfile/java/ or add Java to a PostgreSQL container...
so start your Dockerfile with FROM dockerfile/java.
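A rough sketch of what that Dockerfile could look like (the PostgreSQL version is taken from the apt-get command in the question, and the image tag is a placeholder):
FROM dockerfile/java
RUN apt-get update && apt-get install -y postgresql-9.3
Build it with docker build -t username/java-postgresql . and run containers from that image wherever you need it.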