How to extend an existing Docker container?

The TensorFlow Docker image is available at https://hub.docker.com/r/tensorflow/tensorflow/. To extend this image with additional libraries such as requests, I'm aware of two options:
Run the container and run pip install requests
Append pip install requests to the Dockerfile that builds this container
Is there an alternative option? Something like creating the tensorflow/tensorflow container from a Dockerfile and then installing requests on top of it.
Reading How to extend an existing docker image?, should I create a Dockerfile with these contents?:
FROM tensorflow/tensorflow
RUN pip install requests

Your original assertion is correct: create a new Dockerfile:
FROM tensorflow/tensorflow
RUN pip install requests
Now build it (note that the name should be lower case):
docker build -t me/mytensorflow .
run it:
docker run -it me/mytensorflow
execute a shell in it (docker ps -ql gives us an id of the last container to run):
docker exec -it `docker ps -ql` /bin/bash
get logs from it:
docker logs `docker ps -ql`
The ability to extend other images is what makes Docker really powerful. In addition, you can go look at their Dockerfile:
https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/docker
and start from there without extending their image at all. This is a best practice for people using Docker in production, so you know everything is built in-house and not by some hacker sneaking stuff into your infrastructure. Cheers, and happy building!
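For example, a minimal sketch of that build-from-source route, assuming the Dockerfile still lives at the path above in the repository:
git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow/tensorflow/tools/docker
docker build -t me/mytensorflow .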

You could enter the running container via:
docker exec -it CONTAINER_ID /bin/bash
or, if a name is set:
docker exec -it CONTAINER_NAME /bin/bash

Related

Docker restarts container every time

I am just learning Docker. I pulled my first container using:
docker run -it debian:latest /bin/bash
After installing some services, like systemd, openssh, etc., I exit the container using CTRL+D, and the next time I start the container (using the same command) I get a fresh install of Debian without my configs.
I tried using docker run -it --restart no debian:buster without success.
How can I prevent this from happening?
Each time you use the
docker run
command, you create a new container from an existing Docker image. With the
docker start $containerName
command, you can start the existing container (replace $containerName with your container's actual name). Otherwise, to have a custom Debian image, it is better to write a Dockerfile and build an image out of it. Here are the best practices for writing a Dockerfile: https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
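A minimal sketch of such a Dockerfile, assuming openssh-server is one of the services you want baked into the image:
FROM debian:buster
# Install services at build time so they become part of the image
RUN apt-get update && apt-get install -y openssh-server && rm -rf /var/lib/apt/lists/*
Build and run it with:
docker build -t my-debian .
docker run -it my-debian /bin/bash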

re-running a script in a docker container

I have created a docker image that includes some python code and a shell script that can execute it. It is going to process a bunch of images from the host system.
This command should create a new container and run it.
sudo docker run -v /host/folder:/container/folder opencv:latest bash /extract-embeddings.sh
At the end, the container exits. If I type the same command, another container is created and exits at completion. But what is the correct usage of containers? Should I use restart, start or run (and then clean up exited containers afterwards)? It just seems unnecessary to create a new container each time.
I basically just want a docker image containing some code and 3-4 different commands I can execute whenever needed.
And the docker start command doesn't seem to accept "bash /extract-embeddings.sh" as parameters; instead it thinks bash and extract-embeddings.sh are containers. So maybe I am misunderstanding the lifecycle of containers or their usage.
edit:
Got it to work with:
docker run -t -d --name opencv -v /host/folder:/container/folder opencv:latest
docker exec -it opencv bash /extract-embeddings.sh
You can write a Dockerfile to create your Docker image and keep the scripts in it.
Dockerfile:
FROM opencv:latest
COPY ./your-script /some_folder
Create image:
docker build -t my_image .
Run your container:
docker run -t -d --name my_container my_image
Run the script inside the container:
docker exec -it <container_id_or_name> bash /some_folder/your-script
Build your own Docker image that starts from opencv:latest and give the command you run as the entrypoint. The Dockerfile could look like:
FROM opencv:latest
CMD ["/bin/bash", "/extract-embeddings.sh"]
Use docker create to create a named container.
sudo docker create --name=processmyimage -v /host/folder:/container/folder myopencv:latest
Then use docker start each time you want to run it.
sudo docker start processmyimage
This works well if there is only one command you want to run. If there is more than one command, I would take the approach of building an image that runs an unrelated command forever (like tail -f /dev/null). Then you can use
sudo docker exec -d processmyimage /bin/bash <cmd-to-run>
for each command.
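A minimal sketch of that keep-alive approach, reusing the names from this answer (the script path is the asker's):
FROM opencv:latest
COPY extract-embeddings.sh /extract-embeddings.sh
# tail -f /dev/null never exits, so the container stays up for docker exec
CMD ["tail", "-f", "/dev/null"]
Then:
sudo docker run -d --name processmyimage -v /host/folder:/container/folder myopencv:latest
sudo docker exec -d processmyimage /bin/bash /extract-embeddings.sh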

How to develop within a docker image

I started to experiment with docker but have some questions about how to develop with it and about its use cases. If anyone could guide me through these questions, it will be much appreciated.
First,
As far as I understood, docker is mainly used for developing applications in custom environments, thus avoiding tedious installation processes. This is initially my intention and why I'd like to use docker.
I've created a Dockerfile which builds successfully, and which has basic C++ development tools based upon library/gcc. I want to be able to develop in this docker container as you would on your terminal.
What I did is I created a docker image from a Dockerfile. (I can observe that it is successfully created)
docker build -t mydockerimage .
Then run the docker in detached mode.
docker run -d mydockerimage
At this point, I am notified of the ID of the docker container. However, the docker container does not seem to be running when I check the output of:
docker container ls
Here comes the first question, why is my docker container not running?
To my understanding, simplest way to interact with the docker container is as follows:
docker exec -it <container_id_or_name> echo "Hello from container!"
Is this true? Is this a use case of docker in which I can simply start the container and exec some Linux command on it?
Moreover, I get a permission denied on /var/run/docker.sock when I try to execute docker commands without sudo. What am I missing here?
Thank you in advance.
Do you provide an ENTRYPOINT or CMD in your Dockerfile? This will be executed inside your container and keep the container running. You can find some details here.
In short: Docker has a default entrypoint, /bin/sh -c, but no default CMD.
Check the Dockerfile of ubuntu. It has bash as its CMD, so it executes /bin/sh -c bash.
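The tail end of ubuntu's Dockerfile looks roughly like this (a sketch from memory; the root-filesystem tarball name varies per release):
FROM scratch
ADD ubuntu-focal-core-cloudimg-amd64-root.tar.gz /
# ... a few housekeeping RUN steps ...
CMD ["/bin/bash"]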
$ docker run -it ubuntu bash
root@9855e779cab2:/#
This results in an interactive shell in which you can execute commands like on an Ubuntu system. If you exit the shell, the container will stop running.
To keep a container running you can use the -d option. It will run the container in the background as a daemon:
$ docker run -d -it ubuntu bash
2606ad8e095baa0237cc30e599a26a4d727d99d47392d779fb83cd50f1a39614
$ docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED          STATUS          PORTS   NAMES
2606ad8e095b   ubuntu    "bash"    18 seconds ago   Up 17 seconds           cranky_johnson
Now you can use exec to "go inside" the container and execute ubuntu commands.
$ docker exec -it 2606ad8e095b bash
root@2606ad8e095b:/#
When you exit this shell, the container keeps running in the background.
Now we can execute your command too:
$ docker exec -it 2606ad8e095b echo "Hello from container!"
Hello from container!
This runs echo inside the container and prints the string.
I think it's important in your case to define some ENTRYPOINT (which can also be a script) or a CMD. Probably you need something very similar to ubuntu's image if you just want to use bash inside your container.
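A minimal sketch of such a development image, based on the library/gcc image the question mentions:
FROM gcc:latest
# Drop into a shell by default, like the ubuntu image does
CMD ["/bin/bash"]
With that, docker run -it mydockerimage gives you an interactive shell, and docker run -d -it mydockerimage keeps the container alive for later docker exec calls.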
Moreover, I get a permission denied on /var/run/docker.sock when I try to execute docker commands without sudo. What am I missing here?
This is normal. The Docker daemon currently requires root privileges, so you have to use docker as root or as a user with root privileges, adding sudo every time. Alternatively, you can add your user to the docker group. Every time the daemon starts, it makes the ownership of the Unix socket read/writable by the docker group. This means you can use docker without sudo whenever your user is in the docker group.
To add your user to the docker group:
$ sudo groupadd docker
$ sudo usermod -aG docker $USER
$ exit
Then ssh back in or open a new shell.
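You can then verify that it worked without sudo:
$ docker run hello-world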

OS name for docker images

I am trying to build a new docker image using the Docker-provided base Ubuntu image. I'll be using a Dockerfile to run a few scripts and install applications on the base image. However, my scripts require that the hostname remain the same. I couldn't find any information on OS names for docker images. Does anybody have an idea whether the OS name remains the same once we add layers to a docker image?
You can set the hostname with the -h argument to docker run; otherwise it gets the short form of the container ID as the hostname:
$ docker run --rm -it debian bash
root@0d36e1b1ac93:/# exit
exit
$ docker run --rm -h myhost -it debian bash
root@myhost:/# exit
exit
As far as I know, you can't tell docker build to use a given hostname, but see the issue Dockerfile HOSTNAME Instruction for docker build like docker run -h.

I lose my data when the container exits

Despite Docker's interactive tutorial and FAQ, I lose my data when the container exits.
I have installed Docker as described here: http://docs.docker.io/en/latest/installation/ubuntulinux
without any problem on Ubuntu 13.04.
But it loses all data when it exits.
iman@test:~$ sudo docker version
Client version: 0.6.4
Go version (client): go1.1.2
Git commit (client): 2f74b1c
Server version: 0.6.4
Git commit (server): 2f74b1c
Go version (server): go1.1.2
Last stable version: 0.6.4
iman@test:~$ sudo docker run ubuntu ping
2013/10/25 08:05:47 Unable to locate ping
iman@test:~$ sudo docker run ubuntu apt-get install ping
Reading package lists...
Building dependency tree...
The following NEW packages will be installed:
iputils-ping
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 56.1 kB of archives.
After this operation, 143 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu/ precise/main iputils-ping amd64 3:20101006-1ubuntu1 [56.1 kB]
debconf: delaying package configuration, since apt-utils is not installed
Fetched 56.1 kB in 0s (195 kB/s)
Selecting previously unselected package iputils-ping.
(Reading database ... 7545 files and directories currently installed.)
Unpacking iputils-ping (from .../iputils-ping_3%3a20101006-1ubuntu1_amd64.deb) ...
Setting up iputils-ping (3:20101006-1ubuntu1) ...
iman@test:~$ sudo docker run ubuntu ping
2013/10/25 08:06:11 Unable to locate ping
iman@test:~$ sudo docker run ubuntu touch /home/test
iman@test:~$ sudo docker run ubuntu ls /home/test
ls: cannot access /home/test: No such file or directory
I also tested it with interactive sessions with the same result. Did I forget something?
EDIT: IMPORTANT FOR NEW DOCKER USERS
As @mohammed-noureldin and others said, this is actually NOT a container exiting. Every time, docker run just creates a new container.
You need to commit the changes you make to the container and then run it. Try this:
sudo docker pull ubuntu
sudo docker run ubuntu apt-get install -y ping
Then get the container id using this command:
sudo docker ps -l
Commit changes to the container:
sudo docker commit <container_id> iman/ping
Then run the container:
sudo docker run iman/ping ping www.google.com
This should work.
When you use docker run to start a container, it actually creates a new container based on the image you have specified.
Besides the other useful answers here, note that you can restart an existing container after it exited and your changes are still there.
docker start f357e2faab77 # restart it in the background
docker attach f357e2faab77 # reattach the terminal & stdin
There are the following ways to persist container data:
Docker volumes (see the sketch after the commit walkthrough below)
Docker commit
a) Create a container from the ubuntu image and run a bash terminal:
$ docker run -i -t ubuntu:14.04 /bin/bash
b) Inside the terminal, install curl:
# apt-get update
# apt-get install curl
c) Exit the container terminal:
# exit
d) Take note of your container ID by executing the following command:
$ docker ps -a
e) Save the container as a new image:
$ docker commit <container_id> new_image_name:tag_name(optional)
f) Verify that you can see your new image with curl installed:
$ docker images
$ docker run -it new_image_name:tag_name bash
# which curl
/usr/bin/curl
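Docker volumes, the first option above, could be used like this (the volume name mydata is illustrative):
$ docker volume create mydata
$ docker run -it -v mydata:/data ubuntu:14.04 bash
Anything written under /data now survives container removal, because it lives in the volume rather than in the container's writable layer.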
In addition to Unferth's answer, it is recommended to create a Dockerfile.
In an empty directory, create a file called "Dockerfile" with the following contents.
FROM ubuntu
RUN apt-get install -y ping
ENTRYPOINT ["ping"]
Create an image using the Dockerfile. Let's use a tag so we don't need to remember the hexadecimal image ID.
$ docker build -t iman/ping .
And then run the image in a container.
$ docker run iman/ping stackoverflow.com
There are really great answers above to the question asked. There might be no need for another answer, but I still want to give my personal opinion on the topic in the simplest words possible.
Here are some points about containers & images that will help us reach a conclusion:
A docker image can be:
created-from-a-given-container
deleted
used-to-create-any-number-of-containers
A docker container can be:
created-from-an-image
started
stopped
restarted
deleted
used-to-create-any-number-of-images
A docker run command does this:
Downloads an image or uses a cached image
Creates a new container out of it
Starts the container
When a Dockerfile is used to create an image:
It is already well known that the image will eventually be used to run a docker container.
After issuing the docker build command, docker behind the scenes creates a running container with a base file system and follows the steps inside the Dockerfile to configure that container as per the developer's needs.
After the container is configured with specs of the Dockerfile, it will be committed as an image.
The image gets ready to rock & roll!
Conclusion:
As we can see, a docker container is independent of a docker image.
A container can be restarted, provided the unique ID of that container [use docker ps --all to get the ID].
Any operation like making a new directory, creating files, installing tools, etc. can be done inside the container when it is running. Once the container is stopped, all those changes persist. Stopping and restarting a container is like rebooting a computer system.
An already-created container is always available for a restart, but when we issue the docker run command, a new container is created out of an image, and hence it is like a new computer system. The changes made inside the old container - as we can understand now - are not available in this new container.
A final note:
I guess it's now obvious why the data seems to be lost, yet it is always there... but in a different [old] container. So, take good note of the difference between the docker start & docker run commands, and never confuse the two.
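A quick demonstration of the difference (the container IDs here are illustrative):
$ docker run -it ubuntu bash          # creates a new container
root@abc123:/# touch /marker ; exit
$ docker run -it ubuntu bash          # run again: ANOTHER new container
root@def456:/# ls /marker             # the file is not here
ls: cannot access '/marker': No such file or directory
root@def456:/# exit
$ docker start -ai abc123             # start revives the FIRST container
root@abc123:/# ls /marker             # the file is still here
/marker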
I have a much simpler answer to your question: run the following two commands.
sudo docker run -t -d --name mycontainername ubuntu /bin/bash
sudo docker ps -a
The above ps -a command returns a list of all containers. Take the name of the container which references the image name 'ubuntu'. Docker auto-generates names for containers, for example 'lightlyxuyzx', if you don't use the --name option.
The -t and -d options are important: the created container runs detached, and can be reattached as shown below.
With the --name option, you can name your container, in my case 'mycontainername'.
sudo docker exec -ti mycontainername bash
and this above command helps you log in to the container with a bash shell. From this point on, any changes you make in the container are automatically saved by docker.
For example - apt-get install curl inside the container
You can exit the container without any issues; docker auto-saves the changes.
On the next usage, all you have to do is run these two commands every time you want to work with this container.
The command below will start the stopped container:
sudo docker start mycontainername
sudo docker exec -ti mycontainername bash
Another example with ports and a shared space given below:
docker run -t -d --name mycontainername -p 5000:5000 -v ~/PROJECTS/SPACE:/PROJECTSPACE 7efe2989e877 /bin/bash
In my case:
7efe2989e877 is the image ID of a previously running container,
which I obtained using
docker ps -a
You might want to look at docker volumes if you want to persist the data in your container. Visit https://docs.docker.com/engine/tutorials/dockervolumes/. The docker documentation is a very good place to start.
My suggestion is to manage docker with Docker Compose. It's an easy way to manage all of your project's containers; you can pin versions and link different containers to work together.
The docs are very simple to understand, better than Docker's own docs:
Docker-Compose Docs
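A minimal sketch of a docker-compose.yml (service and image names are illustrative):
version: "3"
services:
  dev:
    image: ubuntu:latest
    volumes:
      - ./data:/data
    command: tail -f /dev/null
With that in place, docker-compose up -d starts the container in the background, and docker-compose exec dev bash drops you into it.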
Best
A similar problem (which no Dockerfile alone could fix) brought me to this page.
Stage 0:
For everyone hoping a Dockerfile could fix it: until --dns and --dns-search appear in Dockerfile support, there is no way to integrate intranet-based resources at build time.
Stage 1:
After building the image using the Dockerfile (by the way, it's a serious glitch that the Dockerfile must be in the current folder), deploy what's intranet-based by running a docker run script. Example:
docker run -d \
--dns=${DNSLOCAL} \
--dns=${DNSGLOBAL} \
--dns-search=intranet \
--name packbsp-cont \
-t pack/bsp \
bash -c " \
wget -r --no-parent http://intranet/intranet-content.tar.gz && \
tar -xvf intranet-content.tar.gz && \
sudo -u ${USERNAME} bash --norc"
Stage 2:
Apply the docker run script in daemon mode, providing local DNS records so the container can download and deploy intranet resources.
An important point: the run script should end with something like /usr/bin/sudo -u ${USERNAME} bash --norc to keep the container running even after the installation script finishes.
No, it's not possible to run the container in interactive mode if you want full automation, as it would remain at an internal shell command prompt until CTRL-p CTRL-q is pressed.
And no, if an interactive bash is not executed at the end of the installation script, the container terminates immediately after the script finishes, losing all installation results.
Stage 3:
The container is still running in the background, but it's unclear whether it has finished the installation procedure yet. Use the following block to detect when execution has finished:
while ! docker container top ${CONTNAME} | grep "00[[:space:]]\{12\}bash \--norc" -
do
    echo "."
    sleep 5
done
The script will proceed further only after the installation has completed, and this is the right moment to call commit, providing the current container ID as well as the destination image name (it may be the same as in the build/run procedure, but appended with a tag for the local installation purposes). Example: docker commit containerID pack/bsp:toolchained
See this link on how to get the proper containerID.
Stage 4:
The container has been updated with the local installs and committed into the newly assigned image (the one with the purposes tag added). It's now safe to stop the running container. Example: docker stop packbsp-cont
Stage 5:
Any time the container with the local installs needs to run, start it with the previously saved image.
Example: docker run -d -t pack/bsp:toolchained
A brilliant answer here: How to continue a docker which is exited, from user kgs:
docker start $(docker ps -a -q --filter "status=exited")
(or in this case just docker start $(docker ps -ql), because you don't want to start all of them)
docker exec -it <container-id> /bin/bash
That second line is crucial: exec is used in place of run, and not on an image but on a container ID. And you do it after the container has been started.
None of the answers address the reason for this design choice. I think docker works this way to prevent these two errors:
Repeated restart
Partial error
