Cannot use vi or vim command in Docker container?

I'm on CentOS 7; vi and vim are already installed on the host and I can use them there. I run Docker on CentOS, and when I execute the line below:
docker exec -it mysolr /bin/bash
I cannot use vi/vim in the solr container:
bash: vim: command not found
Why is that, and how do I fix it so I can use vi/vim to edit files in the Docker container?

A typical Docker image contains a minimal set of libraries and utilities to run one specific program. Additionally, Docker container filesystems are not long-lived: it is extremely routine to delete and recreate a container, for instance to use a newer version of a base image.
The upshot of this is that you never want to directly edit files in a Docker container, and most images aren't set up with "rich" editing tools. (BusyBox contains a minimal vi and so most Alpine-based images will too.) If you make some change, it will be lost as soon as you delete the container. (Similarly, you usually can install vim or emacs or whatever, but it will get lost as soon as the container is deleted: installing software in a running container isn't usually a best practice.)
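That said, if you just need a quick one-off look around a running container, you usually can install an editor temporarily. The commands below are a sketch; which package manager applies depends on the base image, and you may need to exec in as root:
docker exec -it -u root mysolr bash
apt-get update && apt-get install -y vim   # Debian/Ubuntu-based images
# yum install -y vim                       # CentOS/RHEL-based images
# apk add vim                              # Alpine-based images
Remember this change evaporates as soon as the container is deleted.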
There are two good ways to deal with this, depending on what kind of file it is.
If the file is part of the application, like a source file, edit, debug, and test it outside of Docker space. Once you're convinced it's right (by running unit tests and by running the program locally), docker build a new image with it, and docker run a new container with the new image.
ed config.py
pytest
docker build -t imagename .
docker run -d -p ... --name containername imagename
...
ed config.py
pytest
docker build -t imagename .
docker stop containername
docker rm containername
docker run -d -p ... --name containername imagename
If the file is configuration that needs to be injected when the application starts, the docker run -v option is a good way to push it in. You can directly edit the config file on your host, but you'll probably need to restart (or delete and recreate) the container for it to notice.
ed config.txt
docker run \
-v $PWD/config.txt:/etc/whatever/config.txt \
--name containername -p ... \
imagename
...
ed config.txt
docker stop containername
docker rm containername
docker run ... imagename

Related

How to 'docker exec' a container built from scratch?

I am trying to docker exec a container that is built from scratch (say, a NATS container). Seems pretty straightforward, but since it is built from scratch, I am unable to access /bin/bash, /bin/sh, and literally any such command.
I get the error: oci runtime error (command not found, file not found, etc. depending upon the command that I enter).
I tried some commands like:
docker exec -it <container name> /bin/bash
docker exec -it <container name> /bin/sh
docker exec -it <container name> ls
My question is, how do I docker exec a container that is built from scratch and consists only of binaries? By doing a docker exec, I wish to find out whether the files have been successfully copied from my host to the container (I have a COPY in the Dockerfile).
If your scratch container is running you can copy a shell (and other needed utils) into its filesystem and then exec it. The shell would need to be a static binary. Busybox is a great choice here because it can double as so many other binaries.
Full example:
# Assumes scratch container is last launched one, else replace with container ID of
# scratch image, e.g. from `docker ps`, for example:
# scratch_container_id=401b31621b36
scratch_container_id=$(docker ps -ql)
docker run -d busybox:latest sleep 100
busybox_container_id=$(docker ps -ql)
docker cp "$busybox_container_id":/bin/busybox .
# The busybox binary will become whatever you name it (or the first arg you pass to it), for more info run:
# docker run busybox:latest /bin/busybox
# The `busybox --install` command copies the binary with different names into a directory.
docker cp ./busybox "$scratch_container_id":/busybox
docker exec -it "$scratch_container_id" /busybox sh -c '
export PATH="/busybin:$PATH"
/busybox mkdir /busybin
/busybox --install /busybin
sh'
For Kubernetes I think Ephemeral Containers provide or will provide equivalent functionality.
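For example, attaching an ephemeral debug container to a running pod might look like the sketch below (this assumes a cluster with the feature enabled; the pod name mypod and target container myapp are made up):
kubectl debug -it mypod --image=busybox --target=myapp -- sh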
References:
distroless java docker image error
https://github.com/GoogleContainerTools/distroless/issues/168#issuecomment-371077961
There are several options.
You can do docker container cp ${CONTAINER}:/path/to/file/on/container /path/to/temp/dir/on/host. This will copy the files to your host, where you can inspect things using host tools (see the sketch after this list).
You can add an appropriate VOLUME to your Dockerfile. Then you can docker container inspect ${CONTAINER}. This will expose the volume name where the files should be. You can then inspect those in another container (based off an image with all the tools you need).
You can at runtime bind the container to a volume or host directory at the appropriate place.
You can add those binaries that you feel you need to the image. If you need /bin/ls or /bin/sh, then you can add them.
You can bind mount the necessary binaries to the container - so the container has them for verification purposes but the image is not bloated by them.
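As a sketch of the first and last options (the container name mynats, image name myimage, and paths are hypothetical):
# Option 1: copy files out of the container and inspect them on the host
mkdir -p /tmp/inspect
docker container cp mynats:/nats-server.conf /tmp/inspect/
ls -l /tmp/inspect/
# Last option: bind-mount a statically linked busybox in, purely for verification
docker run --rm -v "$PWD/busybox:/busybox:ro" myimage /busybox ls -l /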
You can only use docker exec to run commands that actually exist in a container. If those commands don't exist, you can't run them. As you've noted, the scratch base image contains nothing – no shells, no libraries, no system files, nothing.
If all you're trying to check is if a Dockerfile COPY command actually copied the files you said it would, I'd generally assume the tooling works and just reference the copied files in my application.
Since it sounds like you control the Dockerfile, one workaround could be to change the base image to something lightweight but non-empty, like FROM busybox. That would give you a minimal set of tools that you could work with without blowing up the image size too much.
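A minimal sketch of that approach (the binary and image names are made up):
FROM busybox
COPY the-binary /app/the-binary
CMD ["/app/the-binary"]
# then: docker build -t myimage . && docker run --rm -it myimage sh
# inside the shell, ls -l /app confirms what COPY actually put there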
I was trying to do the same file check for my needs. I ended up using docker cp to copy the file out of the container. In my case I am using a NATS container, but the approach works for any other container running a scratch-based image:
sudo docker cp nats_nats_1:/nats-server.conf ./nats-server.conf
You can just grab the container identifier and throw it into a variable. For example, let's say the (truncated) output of docker ps -a is listed with your running container:
CONTAINER ID IMAGE
111111111111 neo4j-migrator
Continuing the example, you can then docker exec using the variable you created:
CONTAINER_ID=$(docker ps -aqf "ancestor=neo4j-migrator")
docker exec -it $CONTAINER_ID \
sh -c "/usr/bin/neo4j-migrations \
--password $NEO4J_PASSWORD \
--username $NEO4J_USERNAME \
--address $NEO4J_URI \
migrate"

Start and attach a docker container with X11 forwarding

There are various articles like this, this and this, and many more, that explain how to use X11 forwarding to run GUI apps on Docker. I am using a CentOS Docker container.
However, all of these approaches use
docker run
with all the appropriate options in order to visualize the result. But any use of docker run creates a new container and performs the operation on top of that.
A way to work in the same container is to use docker start followed by docker attach and then executing the commands at the container's prompt. Additionally, the script (let's say xyz.sh) that I intend to run in the Docker container resides inside a folder MyFiles in the root directory of the container and accepts a parameter as well.
So is there a way to run the script using docker start and/or docker attach while also X11-forwarding it?
This is what I have tried, although I would like to avoid docker run and instead use docker start and docker attach:
sudo docker run -it \
--env="DISPLAY" \
--volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
centos \
cd MyFiles \
./xyz.sh param1
export containerId=$(docker ps -l -q)
This in turn throws the error below:
/usr/bin/cd: line 2: cd: MyFiles/: No such file or directory
How can I run the script xyz.sh under MyFiles on the Docker container using docker start and docker attach?
Also, since the location and the name of the script may vary, I would like to know if it is mandatory to include each of these paths in the system PATH variable on the Docker container, or can it be done at runtime as well?
It looks to me like your problem is not with X11 forwarding but with general Docker syntax.
You could do it rather simply:
sudo docker run -it \
--env="DISPLAY" \
--volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
-w /MyFiles \
--rm \
centos \
bash -c './xyz.sh param1'
I added:
--rm to avoid stacking old dead containers.
-w to set the working directory.
bash -c to make sure your script is interpreted by bash.
How to do it without docker run:
run is actually like create then start. You can split it in two steps if you prefer.
If you want to attach to a container, it must be running first. And for it to be running, there must be a process currently running inside.
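Putting that together, a sketch of the create/start split with the X11 options from the question (the container name xyz-dev is arbitrary):
sudo docker create -it \
--env="DISPLAY" \
--volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
-w /MyFiles \
--name xyz-dev \
centos \
bash -c './xyz.sh param1'
sudo docker start -ai xyz-dev   # start the existing container and attach to it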

Docker logging to container

I'm a fresh user of Docker. The first problem I'm faced with is logging in to a container.
I found solutions to execute bash commands in a container with
docker exec -it ID bash
But this is a solution only for installing/removing packages. What should I use if I want to edit the nginx config in a Docker container?
One solution could be logging in to the container over an SSH connection, but maybe Docker has something of its own for this? I mean easy access without installing OpenSSH?
As you said,
docker exec -it container_id bash
and then use your favorite editor to edit any nginx config file. vi or nano is usually installed, but you may need to install emacs or vim if that is your favorite editor.
If you have just a few characters to modify,
docker exec container_id sed ...
might do the job. If you want to SSH into your container, you will need to install SSH and deal with SSH keys; I am not sure that is what you need.
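To make the sed suggestion concrete, a hypothetical one-liner (the path and pattern are made up; adjust them for your config):
docker exec container_id sed -i 's/listen 80;/listen 8080;/' /etc/nginx/conf.d/default.conf
docker exec container_id nginx -s reload   # ask nginx to pick up the change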
You're going about it the wrong way. You should rarely need to log into a container to edit files.
Instead, mount the nginx.conf with -v from the host. That way you can edit the file with your normal editor. Once you've got the config working the way you want it, you can then build a new image with it baked in.
In general, you have to get into the mindset of containers being ephemeral. You don't patch them; you throw them away and replace them with a fixed version.
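A minimal sketch of the bind-mount approach (the local file name, port mapping, and container name are assumptions):
docker run -d --name web -p 8080:80 \
-v "$PWD/nginx.conf:/etc/nginx/nginx.conf:ro" \
nginx
# edit nginx.conf on the host with your normal editor, then:
docker restart web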
How: log in to the container
Yes, you can. You can log in to the running container.
The existing docker exec or docker attach may not be good enough. Looking to start a shell inside a Docker container? The solution: jpetazzo/nsenter, with two commands: nsenter and docker-enter.
If you are in a Linux environment, then run the commands below:
docker run --rm -v /usr/local/bin:/target jpetazzo/nsenter
docker ps
# replace <container_name_or_ID> with real container name or ID.
PID=$(docker inspect --format {{.State.Pid}} <container_name_or_ID>)
nsenter --target $PID --mount --uts --ipc --net --pid
Then you are in that running container and can run any Linux commands.
I prefer the other command, docker-enter. First, without logging in to the container, you can directly run Linux commands in it with the docker-enter command. Second, I can't memorize the many options of the nsenter command, and there is no need to find out the container's PID.
docker-enter 0e8c248982c5 ls /opt
If you are a Mac or Windows user running Docker with Toolbox:
docker-machine ssh default
docker run --rm -v /usr/local/bin:/target jpetazzo/nsenter
PID=$(docker inspect --format {{.State.Pid}} 0e8c248982c5)
sudo nsenter --target $PID --mount --uts --ipc --net --pid
If you are a Mac or Windows user running Docker with boot2docker:
boot2docker ssh
docker run --rm -v /usr/local/bin:/target jpetazzo/nsenter
PID=$(docker inspect --format {{.State.Pid}} 0e8c248982c5)
sudo nsenter --target $PID --mount --uts --ipc --net --pid
Note: The command docker run --rm -v /usr/local/bin:/target jpetazzo/nsenter only needs to be run once.
How: edit nginx config
For your second question, you can think about ONBUILD in Docker.
ONBUILD COPY nginx.conf /etc/nginx/nginx.conf
With this solution, you can:
edit nginx.conf locally, using any existing editor.
avoid building your image every time you change the nginx configuration.
Every time you change the nginx.conf file locally, you need to stop, remove, and re-run the container; the new nginx.conf file will be deployed into the container when docker run is executed.
You can find the details on how to use ONBUILD here: docker build
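A rough sketch of how ONBUILD fits together (the image names are made up). The base image declares the trigger:
FROM nginx
ONBUILD COPY nginx.conf /etc/nginx/nginx.conf
# docker build -t nginx-conf-base .
A downstream Dockerfile then only needs one line; the ONBUILD COPY fires during its build, taking nginx.conf from that build context:
FROM nginx-conf-base
# docker build -t my-nginx . && docker run -d -p 8080:80 my-nginx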

What is the best way to iterate while building a docker container?

I'm trying to build a few Docker containers, and I found the iteration process of editing the Dockerfile, and the scripts run within it, clunky. I'm looking for best practices and to find out how others go about it.
My initial process was:
docker build -t mycontainer mycontainer
docker run mycontainer
docker exec -i -t < container id > "/bin/bash" # get into container to debug
docker rm -v < container id >
docker rmi mycontainer
Repeat
This felt expensive for each iteration, especially if it was just a typo.
This alternate process required a little less iteration:
Install vim in the Dockerfile
docker run mycontainer
docker exec -i -t < container id > "/bin/bash" # get into container to edit scripts
docker cp to copy edited files out when done.
If I need to run any command, I carefully remember and update the Dockerfile outside the container.
Rebuild image without vim
This requires fewer iterations, but is not painless since everything's very manual and I have to remember which files changed and got updated.
I've been working with Docker in production since 0.7 and I've definitely felt your pain.
Dockerfile Development Workflow
Note: I always install vim in the container when I'm in active development. I just take it out of the Dockerfile when I release.
Set up tmux/GNU screen/iTerm/your favorite vertical-split console utility.
On the right console I run:
$ vim Dockerfile
On the left console I run:
$ docker build -t username/imagename:latest . && docker run -it --name dev-1 username/imagename:latest
Now split the left console horizontally, so that the run STDOUT is above and a shell is below. Here you will run:
docker exec -it dev-1 bash
and make edits inside the container, or run one-off tests with:
docker exec -it dev-1 <my command>
Every time you are satisfied with your work on the Dockerfile, save it (:wq!) and then, in the left console, run the build-and-run command above. Test the behavior. If you are not happy, run:
docker rm dev-1
and then edit again and repeat step #3.
Periodically, when I've built up too many images or containers I do the following:
Remove all containers: docker rm $(docker ps -qa)
Remove all images: docker rmi $(docker images -q)
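On newer Docker versions (1.13+, if I remember correctly) a single command covers most of this cleanup:
docker system prune      # stopped containers, dangling images, unused networks, build cache
docker system prune -a   # additionally removes all images not used by any container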
I assume the files you're editing in your Alternate process are files that make up part of the application you're deploying? Such as a Bash or Python script?
That being the case, you could bind-mount them as a volume during your debugging process, rather than baking them into the image, so that when you edit them they are immediately changed both within the container and on the host.
So for example, if your code is at /home/dragonx/codefiles, do
docker run -v /home/dragonx/codefiles:/opt/codefiles mycontainer
Then when you edit those files, either from the host or within the container, they are available in the container, and you don't need to copy them out before killing the container.
Here is the simplest way to "build a few docker containers":
docker run -it --name=my_cont1 --hostname=my_host1 ubuntu:15.10
docker run -it --name=my_cont2 --hostname=my_host2 ubuntu:15.10
...
...
docker run -it --name=my_contn --hostname=my_hostn ubuntu:15.10
That would create 'n' number of containers.
After the very first "docker run ..." command, you will be put in a Bash shell. You can do your things there, exit and run the next "docker run ..." command.
Exiting from the Bash shell does not remove the containers. They are all still there in the "Exited" status. You can list them with the docker ps -a command. And you can always get back into them with:
docker start -ia my_cont1

I lose my data when the container exits

Despite Docker's interactive tutorial and FAQ, I lose my data when the container exits.
I have installed Docker as described here: http://docs.docker.io/en/latest/installation/ubuntulinux
without any problem on Ubuntu 13.04.
But it loses all data when it exits.
iman@test:~$ sudo docker version
Client version: 0.6.4
Go version (client): go1.1.2
Git commit (client): 2f74b1c
Server version: 0.6.4
Git commit (server): 2f74b1c
Go version (server): go1.1.2
Last stable version: 0.6.4
iman@test:~$ sudo docker run ubuntu ping
2013/10/25 08:05:47 Unable to locate ping
iman@test:~$ sudo docker run ubuntu apt-get install ping
Reading package lists...
Building dependency tree...
The following NEW packages will be installed:
iputils-ping
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 56.1 kB of archives.
After this operation, 143 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu/ precise/main iputils-ping amd64 3:20101006-1ubuntu1 [56.1 kB]
debconf: delaying package configuration, since apt-utils is not installed
Fetched 56.1 kB in 0s (195 kB/s)
Selecting previously unselected package iputils-ping.
(Reading database ... 7545 files and directories currently installed.)
Unpacking iputils-ping (from .../iputils-ping_3%3a20101006-1ubuntu1_amd64.deb) ...
Setting up iputils-ping (3:20101006-1ubuntu1) ...
iman@test:~$ sudo docker run ubuntu ping
2013/10/25 08:06:11 Unable to locate ping
iman@test:~$ sudo docker run ubuntu touch /home/test
iman@test:~$ sudo docker run ubuntu ls /home/test
ls: cannot access /home/test: No such file or directory
I also tested it with interactive sessions with the same result. Did I forget something?
EDIT: IMPORTANT FOR NEW DOCKER USERS
As @mohammed-noureldin and others said, this is actually NOT the container exiting; every time, docker run just creates a new container.
You need to commit the changes you make to the container and then run it. Try this:
sudo docker pull ubuntu
sudo docker run ubuntu apt-get install -y ping
Then get the container id using this command:
sudo docker ps -l
Commit changes to the container:
sudo docker commit <container_id> iman/ping
Then run the container:
sudo docker run iman/ping ping www.google.com
This should work.
When you use docker run to start a container, it actually creates a new container based on the image you have specified.
Besides the other useful answers here, note that you can restart an existing container after it exited and your changes are still there.
docker start f357e2faab77 # restart it in the background
docker attach f357e2faab77 # reattach the terminal & stdin
There are the following ways to persist container data:
Docker volumes
Docker commit
a) Create a container from the ubuntu image and run a bash terminal:
$ docker run -i -t ubuntu:14.04 /bin/bash
b) Inside the terminal, install curl:
# apt-get update
# apt-get install curl
c) Exit the container terminal
# exit
d) Take note of your container ID by executing the following command:
$ docker ps -a
e) Save the container as a new image:
$ docker commit <container_id> new_image_name:tag_name(optional)
f) Verify that you can see your new image with curl installed:
$ docker images
$ docker run -it new_image_name:tag_name bash
# which curl
/usr/bin/curl
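The walkthrough above covers the docker commit option; the volumes option looks roughly like this (the volume name mydata is illustrative):
$ docker volume create mydata
$ docker run -it -v mydata:/data ubuntu:14.04 bash
# touch /data/hello
# exit
$ docker run -it -v mydata:/data ubuntu:14.04 ls /data
hello
Anything written under /data survives even if every container using the volume is removed.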
In addition to Unferth's answer, it is recommended to create a Dockerfile.
In an empty directory, create a file called "Dockerfile" with the following contents.
FROM ubuntu
RUN apt-get install -y ping
ENTRYPOINT ["ping"]
Create an image using the Dockerfile. Let's use a tag so we don't need to remember the hexadecimal image number.
$ docker build -t iman/ping .
And then run the image in a container.
$ docker run iman/ping stackoverflow.com
There are really great answers to the question above. There might be no need for another answer, but I still want to give my personal opinion on the topic in the simplest words possible.
Here are some points about containers & images that will help us reach a conclusion:
A docker image can be:
created-from-a-given-container
deleted
used-to-create-any-number-of-containers
A docker container can be:
created-from-an-image
started
stopped
restarted
deleted
used-to-create-any-number-of-images
A docker run command does this:
Downloads an image or uses a cached image
Creates a new container out of it
Starts the container
When a Dockerfile is used to create an image:
It is already well known that the image will eventually be used to run a docker container.
After the docker build command is issued, Docker behind the scenes creates a running container with a base file system and follows the steps inside the Dockerfile to configure that container as per the developer's needs.
After the container is configured with specs of the Dockerfile, it will be committed as an image.
The image gets ready to rock & roll!
Conclusion:
As we can see, a docker container is independent of a docker image.
A container can be restarted provided you have the unique ID of that container [use docker ps --all to get the ID].
Any operation like making a new directory, creating files, installing tools, etc. can be done inside the container when it is running. Once the container is stopped, it persists all the changes. Container stopping and restarting is like rebooting a computer system.
An already created container is always available for a restart, but when we issue the docker run command, a new container is created out of an image, and hence it is like a new computer system. The changes made inside the old container, as we can understand now, are not available in this new container.
A final note:
I guess it's now obvious why the data seems to be lost, yet it is always there, in a different [old] container. So take good note of the difference between the docker start and docker run commands, and never confuse them.
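A quick shell demonstration of that difference (the container name box is arbitrary):
docker run -it --name box ubuntu bash    # creates a NEW container
# touch /i-was-here; exit
docker start -ai box                     # restarts the SAME container: /i-was-here is still there
docker run -it ubuntu bash               # a brand-new container: /i-was-here does not exist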
I have a much simpler answer to your question: run the following two commands.
sudo docker run -t -d --name mycontainername ubuntu /bin/bash
sudo docker ps -a
The above ps -a command returns a list of all containers. Take the name of the container that references the image name 'ubuntu'. Docker auto-generates names for containers, for example 'lightlyxuyzx', if you don't use the --name option.
The -t and -d options are important: the created container is detached and can be reattached, as shown below.
With --name option, you can name your container in my case 'mycontainername'.
sudo docker exec -ti mycontainername bash
The above command logs you in to the container with a bash shell. From this point on, any changes you make in the container are automatically preserved by Docker.
For example: apt-get install curl inside the container.
You can exit the container without any issues; Docker keeps the changes.
For subsequent use, all you have to do is run these two commands every time you want to work with this container.
The commands below will start the stopped container and log you back in:
sudo docker start mycontainername
sudo docker exec -ti mycontainername bash
Another example, with ports and a shared space, is given below:
docker run -t -d --name mycontainername -p 5000:5000 -v ~/PROJECTS/SPACE:/PROJECTSPACE 7efe2989e877 /bin/bash
In my case, 7efe2989e877 is the image ID of a previously running container, which I obtained using:
docker ps -a
You might want to look at Docker volumes if you want to persist the data in your container. Visit https://docs.docker.com/engine/tutorials/dockervolumes/. The Docker documentation is a very good place to start.
My suggestion is to manage Docker with Docker Compose. It is an easy way to manage all of your project's containers; you can pin versions and link different containers to work together.
The docs are very simple to understand, better than Docker's own docs.
Docker-Compose Docs
A similar problem (one that no Dockerfile alone could fix) brought me to this page.
stage 0:
for everyone hoping the Dockerfile could fix it: until --dns and --dns-search appear in Dockerfile support, there is no way to integrate intranet-based resources.
stage 1:
after building the image using the Dockerfile (by the way, it's a serious glitch that the Dockerfile must be in the current folder), deploy what's intranet-based by running a docker run script. Example:
docker run -d \
--dns=${DNSLOCAL} \
--dns=${DNSGLOBAL} \
--dns-search=intranet \
-t \
--name packbsp-cont \
pack/bsp \
bash -c " \
wget -r --no-parent http://intranet/intranet-content.tar.gz && \
tar -xvf intranet-content.tar.gz && \
sudo -u ${USERNAME} bash --norc"
stage 2:
apply the docker run script in daemon mode, providing local DNS records, to be able to download and deploy intranet-based resources.
Important point: the run script should end with something like /usr/bin/sudo -u ${USERNAME} bash --norc to keep the container running even after the installation script finishes.
No, it's not possible to run the container in interactive mode if you want full automation, as it would remain at the internal shell command prompt until CTRL-p CTRL-q is pressed.
No, if an interactive bash is not executed at the end of the installation script, the container will terminate immediately after the script finishes, losing all installation results.
stage 3:
the container is still running in the background, but it's unclear whether it has finished the installation procedure yet. Use the following block to determine when execution finishes:
while ! docker container top ${CONTNAME} | grep "00[[:space:]]\{12\}bash \--norc" -
do
echo "."
sleep 5
done
The script will proceed further only after the installation has completed. This is the right moment to call commit, providing the current container ID as well as the destination image name (it may be the same as in the build/run procedure, but with a tag appended for the local-installation purpose). Example: docker commit containerID pack/bsp:toolchained.
see this link on how to get proper containerID
stage 4: the container has been updated with the local installs and committed into the newly assigned image (the one with the purpose tag added). It's now safe to stop the container. Example: docker stop packbsp-cont
stage 5: any time the container with the local installs needs to run, start it with the previously saved image.
Example: docker run -d -t pack/bsp:toolchained
There is a brilliant answer here, How to continue a docker container which has exited, from user kgs:
docker start $(docker ps -a -q --filter "status=exited")
(or in this case just docker start $(docker ps -ql), because you don't want to start all of them)
docker exec -it <container-id> /bin/bash
That second line is crucial: exec is used in place of run, and not on an image but on a container ID. And you do it after the container has been started.
None of the answers address why this design choice exists. I think Docker works this way to prevent these 2 errors:
Repeated restart
Partial error
