Why can I not run an X11 application? - docker

So, as the title states, I'm a Docker newbie.
I downloaded and installed the archlinux/base container, which seems to work great so far. I've set up a few things and installed some packages (including xeyes), and now I would like to launch xeyes. For that I found out the CONTAINER ID by running docker ps and then used that ID in my exec command, which now looks like:
$ docker exec -it -e DISPLAY=$DISPLAY 4cae1ff56eb1 xeyes
Error: Can't open display: :0
Why does it still not work? Also, how can I stop my running instance without losing its configured state? Previously I exited the container, and all my configuration and software installations were gone when I restarted it. That was not desired. How do I handle this correctly?

Concerning the X display: you need to share the X server socket (note: Docker can't bind-mount a volume during an exec) and set $DISPLAY. Example Dockerfile:
FROM archlinux/base
RUN pacman -Syyu --noconfirm xorg-xeyes
ENTRYPOINT ["xeyes"]
Build the docker image: docker build --rm --network host -t so:57733715 .
Run the docker container: docker run --rm -it -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=unix$DISPLAY so:57733715
Note: in case of No protocol specified errors you can disable host access control with xhost +, but there is a security warning attached to that (see man xhost for additional information).
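As a quick sanity check of the flags used above, here is a small sketch: the helper name x11_run_args is made up, and it merely assembles the docker run options from the answer (socket bind mount plus DISPLAY), so you can inspect or reuse them before launching anything. The image tag so:57733715 is the one built in the previous step.

```shell
#!/bin/sh
# Sketch: assemble the `docker run` flags from the answer above so they
# can be inspected and reused. x11_run_args is a hypothetical helper.
x11_run_args() {
  display="$1"
  printf '%s' "--rm -it -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=unix${display}"
}

x11_run_args ":0"
# Usage: docker run $(x11_run_args "$DISPLAY") so:57733715
```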

Related

Can't save file on remote Jupyter server running in docker container

I'm trying to work in Jupyter Lab run via Docker on a remote machine, but can't save any of the files I open.
I'm working with a Jupyter Docker Stack. I've installed docker on my remote machine and successfully pulled the image.
I set up port forwarding in my ~/.ssh/config file:
Host mytunnel
HostName <remote ip>
User root
ForwardAgent yes
LocalForward 8888 localhost:8888
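For reference, the same tunnel can be opened ad hoc without the config entry (this is a template with the same placeholder values as above, not a runnable command):

```shell
ssh -N -L 8888:localhost:8888 root@<remote ip>
```

Either ssh mytunnel or the line above must stay running while you use the notebook.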
When I fire up the container, I use the following script:
docker run \
-p 8888:8888 \
-e JUPYTER_ENABLE_LAB=yes \
-v "${PWD}":/home/jovyan/work jupyter/tensorflow-notebook
The container is running:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c8fc3c720af1 jupyter/tensorflow-notebook "tini -g -- start-no…" 8 minutes ago Up 8 minutes 0.0.0.0:8888->8888/tcp, :::8888->8888/tcp adoring_khorana
I get the regular Jupyter url back:
http://127.0.0.1:8888/lab?token=<token>
But when I access the server in my browser, the Save option is disabled.
I've tried some of the solutions proposed elsewhere on SO, but no luck.
Is this something about connecting over SSH? The Jupyter server thinks it's not a secure connection?
It is possible that the problem is related to the SSH configuration, but I think it is more likely a permission issue with your volume mount.
Please try reviewing your container logs for permission-related errors. You can do that with:
docker container logs <container id>
See the output provided by your docker run command too.
In addition, try opening a shell in the container:
docker exec -it <container id> /bin/bash
and see if you are able to create a file in the default work directory:
touch /home/jovyan/work/test_file
Finally, the Jupyter docker stacks repository has a troubleshooting page almost entirely devoted to permissions issues.
Consider especially the solutions provided under Additional tips and troubleshooting commands for permission-related errors and, as suggested there, try launching the container as your OS user:
docker run \
-p 8888:8888 \
-e JUPYTER_ENABLE_LAB=yes \
--user "$(id -u)" --group-add users \
-v "${PWD}":/home/jovyan/work jupyter/tensorflow-notebook
After that, as also suggested in the mentioned documentation, check whether the volume is properly mounted using the following command:
docker inspect <container_id>
In the output, note the value of the RW field, which indicates whether the volume is writable (true) or not (false).
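As a rough illustration of what to look for in that output, here is a sketch that pulls the RW flag out of a saved docker inspect result. mount_rw is a hypothetical helper, and the sample JSON string stands in for real inspect output:

```shell
#!/bin/sh
# Hypothetical helper: extract the first RW flag from `docker inspect` output.
# The sample string below stands in for: docker inspect <container_id>
mount_rw() {
  printf '%s' "$1" | grep -o '"RW":[a-z]*' | head -n 1 | cut -d: -f2
}

sample='[{"Mounts":[{"Destination":"/home/jovyan/work","RW":true}]}]'
mount_rw "$sample"   # prints: true
```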

Interactive debugger for inside docker container? [duplicate]

I'm getting started working with Docker. I'm using the WordPress base image and docker-compose.
I'm trying to ssh into one of the containers to inspect the files/directories that were created during the initial build. I tried to run docker-compose run containername ls -la, but that didn't do anything. Even if it did, I'd rather have a console where I can traverse the directory structure, rather than run a single command. What is the right way to do this with Docker?
docker attach will let you connect to your Docker container, but this isn't really the same thing as ssh. If your container is running a webserver, for example, docker attach will probably connect you to the stdout of the web server process. It won't necessarily give you a shell.
The docker exec command is probably what you are looking for; this will let you run arbitrary commands inside an existing container. For example:
docker exec -it <mycontainer> bash
Of course, whatever command you are running must exist in the container filesystem.
In the above command <mycontainer> is the name or ID of the target container. It doesn't matter whether or not you're using docker compose; just run docker ps and use either the ID (a hexadecimal string displayed in the first column) or the name (displayed in the final column). E.g., given:
$ docker ps
d2d4a89aaee9 larsks/mini-httpd "mini_httpd -d /cont 7 days ago Up 7 days web
I can run:
$ docker exec -it web ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
18: eth0: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.3/16 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::42:acff:fe11:3/64 scope link
valid_lft forever preferred_lft forever
I could accomplish the same thing by running:
$ docker exec -it d2d4a89aaee9 ip addr
Similarly, I could start a shell in the container:
$ docker exec -it web sh
/ # echo This is inside the container.
This is inside the container.
/ # exit
$
To bash into a running container, type this:
docker exec -t -i container_name /bin/bash
or
docker exec -ti container_name /bin/bash
or
docker exec -ti container_name sh
Historical note: At the time I wrote this answer, the title of the question was: "How to ssh into a docker container?"
As other answers have demonstrated, it is common to execute and interact with preinstalled commands (including shells) in a locally-accessible running container using docker exec, rather than SSH:
docker exec -it (container) (command)
Note: The below answer is based on Ubuntu (of 2016). Some translation of the installation process will be required for non-Debian containers.
Let's say, for reasons that are your own, you really do want to use SSH. It takes a few steps, but it can be done. Here are the commands that you would run inside the container to set it up...
apt-get update
apt-get install openssh-server
mkdir /var/run/sshd
chmod 0755 /var/run/sshd
/usr/sbin/sshd
useradd --create-home --shell /bin/bash --groups sudo username ## includes 'sudo'
passwd username ## Enter a password
apt-get install x11-apps ## X11 demo applications (optional)
ifconfig | awk '/inet addr/{print substr($2,6)}' ## Display IP address (optional)
Now you can even run graphical applications (if they are installed in the container) using X11 forwarding to the SSH client:
ssh -X username@IPADDRESS
xeyes ## run an X11 demo app in the client
Here are some related resources:
openssh-server doesn't start in Docker container
How to get bash or ssh into a running container in background mode?
Can you run GUI applications in a Linux Docker container?
Other useful approaches for graphical access found with search: Docker X11
If you run SSHD in your Docker containers, you're doing it wrong!
If you're here looking for a Docker Compose-specific answer like I was, Compose provides an easy way in without having to look up the generated container ID.
docker-compose exec takes the name of the service as per your docker-compose.yml file.
So to get a Bash shell for your 'web' service, you can do:
$ docker-compose exec web bash
If the container has already exited (maybe due to some error), you can do
$ docker run --rm -it --entrypoint /bin/ash image_name
or
$ docker run --rm -it --entrypoint /bin/sh image_name
or
$ docker run --rm -it --entrypoint /bin/bash image_name
to create a new container and get a shell into it. Since you specified --rm, the container would be deleted when you exit the shell.
Notice: this answer promotes a tool I've written.
I've created a containerized SSH server that you can 'stick' to any running container. This way you can create compositions with every container. The only requirement is that the container has Bash.
The following example would start an SSH server attached to a container with name 'my-container'.
docker run -d -p 2222:22 \
-v /var/run/docker.sock:/var/run/docker.sock \
-e CONTAINER=my-container -e AUTH_MECHANISM=noAuth \
jeroenpeeters/docker-ssh
ssh localhost -p 2222
When you connect to this SSH service (with your SSH client of choice) a Bash session will be started in the container with name 'my-container'.
For more pointers and documentation see: https://github.com/jeroenpeeters/docker-ssh
Start a session into a Docker container using this command:
sudo docker exec -i -t (container ID) bash
If you're using Docker on Windows and want to get shell access to a container, use this:
winpty docker exec -it <container_id> sh
Most likely, you already have Git Bash installed. If you don't, make sure to install it.
In some cases your image can be Alpine-based. In this case it will throw:
OCI runtime exec failed: exec failed: container_linux.go:348: starting
container process caused "exec: \"bash\": executable file not found in
$PATH": unknown
Because /bin/bash doesn't exist. Instead of this you should use:
docker exec -it 9f7d99aa6625 ash
or
docker exec -it 9f7d99aa6625 sh
To connect to cmd in a Windows container, use
docker exec -it d8c25fde2769 cmd
Where d8c25fde2769 is the container id.
docker exec -it <container_id or name> bash
OR
docker exec -it <container_id or name> /bin/bash
GOINSIDE SOLUTION
Install the goinside command-line tool with:
sudo npm install -g goinside
and go inside a docker container with a proper terminal size with:
goinside docker_container_name
old answer
We've put this snippet in ~/.profile:
goinside(){
docker exec -it $1 bash -c "stty cols $COLUMNS rows $LINES && bash";
}
export -f goinside
Not only does this let anyone get inside a running container with:
goinside containername
it also solves the long-standing problem of fixed Docker container terminal sizes, which is very annoying if you face it.
Also, if you follow the link, you'll get command completion for your Docker container names too.
To inspect files, run docker run -it <image> /bin/sh to get an interactive terminal. The list of images can be obtained with docker images. In contrast to docker exec, this solution also works when an image doesn't start (or exits immediately after running).
Simple:
docker exec -it <container_id> bash
In the above, -it means an interactive terminal. You can also use the container name instead of the ID:
docker exec -it <container_name> bash
It is simple!
List out all your Docker images:
sudo docker images
On my system it showed the following output:
REPOSITORY   TAG      IMAGE ID       CREATED       VIRTUAL SIZE
bash         latest   922b9cc3ea5e   9 hours ago   14.03 MB
ubuntu       latest   7feff7652c69   5 weeks ago   81.15 MB
I have two Docker images on my PC. Let's say I want to run the ubuntu one.
sudo docker run -i -t ubuntu:latest /bin/bash
This will give you terminal control of the container. Now you can do all types of shell operations inside the container. For example, running ls will output all folders in the root of the file system.
bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
I've created a terminal function for easier access to the container's terminal. Maybe it's useful to you guys as well:
So the result is, instead of typing:
docker exec -it [container_id] /bin/bash
you'll write:
dbash [container_id]
Put the following in your ~/.bash_profile (or whatever else that works for you), then open a new terminal window and enjoy the shortcut:
#usage: dbash [container_id]
dbash() {
docker exec -it "$1" /bin/bash
}
2022 Solution
Consider another option
Why do you need it?
There are a bunch of modern Docker images that are based on distroless base images (they have neither /bin/bash nor /bin/sh), so it becomes impossible to docker exec -it {container-name} bash into them.
How to shell-in any container
Use opener:
With an alias added to your environment: opener wordpress
Works anywhere: docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock artemkaxboy/opener wordpress
Instead of wordpress you can use the name, ID, or image name of any container you want to connect to.
How it works
Opener is a set of Python scripts wrapped up in a Docker image. It finds the target container by any unique attribute (name, ID, port, image) and tries to connect to it using bash. If bash is not found, opener tries sh. Finally, if sh is not found either, opener installs busybox into the target container, connects using the busybox shell, and deletes busybox on disconnection.
$ docker exec -it <Container-Id> /bin/bash
Or depending on the shell, it can be
$ docker exec -it <Container-Id> /bin/sh
You can get the container ID via the docker ps command.
-i = interactive
-t = allocate a pseudo-TTY
You can interact with the terminal in a Docker container by passing the option -ti:
docker run --rm -ti <image-name>
eg: docker run --rm -ti ubuntu
-t stands for terminal
-i stands for interactive
There are at least 2 options depending on the target.
Option 1: Create a new bash process and join into it (easier)
Sample start: docker exec -it <containername> /bin/bash
Quit: type exit
Pro: Works on all containers (doesn't depend on CMD/Entrypoint)
Contra: Creates a new process with its own session and its own environment vars
Option 2: Attach to the already running bash (better)
Sample start: docker attach --detach-keys ctrl-d <containername>
Quit: use keys ctrl and d
Pro: Joins the exact same running bash that is in the container. You have the same session and the same environment vars.
Contra: Only works if CMD/Entrypoint is an interactive bash, like CMD ["/bin/bash"] or CMD ["/bin/bash", "--init-file", "myfile.sh"], AND if the container has been started with interactive options like docker run -itd <image> (-i=interactive, -t=tty, -d=detached [optional])
We found option 2 more useful. For example we changed apache2-foreground to a normal background apache2 and started a bash after that.
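As a sketch of what Option 2 requires (the init-script name here is made up), a Dockerfile set up for attaching might end like this:

```dockerfile
# Hypothetical Dockerfile tail: an interactive bash as the main process,
# so that `docker attach` joins it. Start with: docker run -itd <image>
CMD ["/bin/bash", "--init-file", "start-services.sh"]
```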
docker exec will definitely be a solution. An easy way to address the question you asked is to mount a directory inside Docker to a directory on the local system, so that you can view the changes in the local path instantly.
docker run -v /Users/<path>:/<container path>
Use:
docker attach <container name/id here>
Another way, albeit with some danger to it, is to use attach; but if you press Ctrl + C to exit the session, you will also stop the container. If you just want to see what is happening, use docker logs -f.
:~$ docker attach --help
Usage: docker attach [OPTIONS] CONTAINER
Attach to a running container
Options:
--detach-keys string Override the key sequence for detaching a container
--help Print usage
--no-stdin Do not attach STDIN
--sig-proxy Proxy all received signals to the process (default true)
Use this command:
docker exec -it containerid /bin/bash
To exec into a running container named test, use the following commands.
If the container has a bash shell:
docker exec -it test /bin/bash
If the container has a Bourne shell (in most cases it's present):
docker exec -it test /bin/sh
If you have Docker installed with Kitematic, you can use the GUI. Open Kitematic from the Docker icon and in the Kitematic window select your container, and then click on the exec icon.
You can see the container log and lots of container information (in settings tab) in this GUI too.
In my case, for some reason I need to check all the network-related information in each container, so the following commands must be valid in a container...
ip
route
netstat
ps
...
I checked through all these answers; none were helpful for me. I searched for information on other websites. I won't add a link here, since it's not written in English. So I just put up this post with a summary solution for people who have the same requirements as me.
Say you have one running container named light-test. Follow the steps below.
Run docker inspect light-test -f {{.NetworkSettings.SandboxKey}}. This command returns a path like /var/run/docker/netns/xxxx.
Then run ln -s /var/run/docker/netns/xxxx /var/run/netns/xxxx. The target directory may not exist; do mkdir /var/run/netns first.
Now you can execute ip netns exec xxxx ip addr show to explore the network world in the container.
PS: xxxx is always the same value received from the first command. And of course any other command is valid too, e.g. ip netns exec xxxx netstat -antp | grep 8080.
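The steps above can be sketched as a small helper. attach_netns is a made-up name and needs root plus a running Docker daemon; the only part shown running here is the pure path handling, since the netns name is just the basename of the SandboxKey path:

```shell
#!/bin/sh
# Sketch of the steps above. The netns name is the basename of the
# SandboxKey path returned by `docker inspect`.
netns_name_from_key() { basename "$1"; }

attach_netns() {           # hypothetical helper; needs root and docker
  key=$(docker inspect "$1" -f '{{.NetworkSettings.SandboxKey}}')
  name=$(netns_name_from_key "$key")
  mkdir -p /var/run/netns
  ln -sf "$key" "/var/run/netns/$name"
  echo "$name"             # then: ip netns exec "$name" ip addr show
}

netns_name_from_key /var/run/docker/netns/1234abcd   # prints: 1234abcd
```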
There are two ways to connect to a container's terminal directly: sh and bash. Usually bash is not supported, and the default sh is the supported terminal.
To sh into a running container, type this:
docker exec -it container_name/container_ID sh
To bash into a running container, type this:
docker exec -it container_name/container_ID bash
If you want to use a bash terminal, you can install bash in your Dockerfile, e.g. RUN apt install bash -y
This is best if you don't want to specify an entry point in your Dockerfile:
sudo docker run -it --entrypoint /bin/bash <image_name>
If you are using Docker Compose, then this will take you inside a Docker container:
docker-compose run service_name /bin/bash
Inside the container it will take you to the WORKDIR defined in the Dockerfile. You can change your work directory with:
WORKDIR directory_path # E.g /usr/src -> container's path
Another option is to use nsenter.
PID=$(docker inspect --format {{.State.Pid}} <container_name_or_ID>)
nsenter --target $PID --mount --uts --ipc --net --pid

No access to volume after docker run -v

The following command runs fine on my local machine.
docker run -it --rm --ulimit memlock=-1 \
-v "$HOMEDIR/..":"/home/user/repo" \
-w "/home/user/repo/linux" \
${DOCKER_IMAGE_NAME} bash build.sh
Running it in a docker-in-docker environment (that means the mentioned docker command is executed in a container on Google Cloud Build) leads to two problems though:
Docker complains The input device is not a TTY. My workaround: I simply used docker run -i --rm instead.
Somehow the assigned volume and working directory do not exist under the given path in the container. I checked them on the host system and they exist, but somehow they do not make it into the container.
I also already thought about using docker exec, but there I don't have the fancy -v options. I tried the docker run command with both the -i and the -it flag on my local machine, where both ran fine. Anyway, on Cloud Build I get the TTY error when using -it, and the inaccessible-volume problem occurs when using -i.

Docker running Node exits immediately, how can I get access? [duplicate]

I'm getting started working with Docker. I'm using the WordPress base image and docker-compose.
I'm trying to ssh into one of the containers to inspect the files/directories that were created during the initial build. I tried to run docker-compose run containername ls -la, but that didn't do anything. Even if it did, I'd rather have a console where I can traverse the directory structure, rather than run a single command. What is the right way to do this with Docker?
docker attach will let you connect to your Docker container, but this isn't really the same thing as ssh. If your container is running a webserver, for example, docker attach will probably connect you to the stdout of the web server process. It won't necessarily give you a shell.
The docker exec command is probably what you are looking for; this will let you run arbitrary commands inside an existing container. For example:
docker exec -it <mycontainer> bash
Of course, whatever command you are running must exist in the container filesystem.
In the above command <mycontainer> is the name or ID of the target container. It doesn't matter whether or not you're using docker compose; just run docker ps and use either the ID (a hexadecimal string displayed in the first column) or the name (displayed in the final column). E.g., given:
$ docker ps
d2d4a89aaee9 larsks/mini-httpd "mini_httpd -d /cont 7 days ago Up 7 days web
I can run:
$ docker exec -it web ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
18: eth0: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.3/16 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::42:acff:fe11:3/64 scope link
valid_lft forever preferred_lft forever
I could accomplish the same thing by running:
$ docker exec -it d2d4a89aaee9 ip addr
Similarly, I could start a shell in the container;
$ docker exec -it web sh
/ # echo This is inside the container.
This is inside the container.
/ # exit
$
To bash into a running container, type this:
docker exec -t -i container_name /bin/bash
or
docker exec -ti container_name /bin/bash
or
docker exec -ti container_name sh
Historical note: At the time I wrote this answer, the title of the question was: "How to ssh into a docker container?"
As other answers have demonstrated, it is common to execute and interact with preinstalled commands (including shells) in a locally-accessible running container using docker exec, rather than SSH:
docker exec -it (container) (command)
Note: The below answer is based on Ubuntu (of 2016). Some translation of the installation process will be required for non-Debian containers.
Let's say, for reasons that are your own, you really do want to use SSH. It takes a few steps, but it can be done. Here are the commands that you would run inside the container to set it up...
apt-get update
apt-get install openssh-server
mkdir /var/run/sshd
chmod 0755 /var/run/sshd
/usr/sbin/sshd
useradd --create-home --shell /bin/bash --groups sudo username ## includes 'sudo'
passwd username ## Enter a password
apt-get install x11-apps ## X11 demo applications (optional)
ifconfig | awk '/inet addr/{print substr($2,6)}' ## Display IP address (optional)
Now you can even run graphical applications (if they are installed in the container) using X11 forwarding to the SSH client:
ssh -X username#IPADDRESS
xeyes ## run an X11 demo app in the client
Here are some related resources:
openssh-server doesn't start in Docker container
How to get bash or ssh into a running container in background mode?
Can you run GUI applications in a Linux Docker container?
Other useful approaches for graphical access found with search: Docker X11
If you run SSHD in your Docker containers, you're doing it wrong!
If you're here looking for a Docker Compose-specific answer like I was, it provides an easy way in without having to look up the generated container ID.
docker-compose exec takes the name of the service as per your docker-compose.yml file.
So to get a Bash shell for your 'web' service, you can do:
$ docker-compose exec web bash
If the container has already exited (maybe due to some error), you can do
$ docker run --rm -it --entrypoint /bin/ash image_name
or
$ docker run --rm -it --entrypoint /bin/sh image_name
or
$ docker run --rm -it --entrypoint /bin/bash image_name
to create a new container and get a shell into it. Since you specified --rm, the container would be deleted when you exit the shell.
Notice: this answer promotes a tool I've written.
I've created a containerized SSH server that you can 'stick' to any running container. This way you can create compositions with every container. The only requirement is that the container has Bash.
The following example would start an SSH server attached to a container with name 'my-container'.
docker run -d -p 2222:22 \
-v /var/run/docker.sock:/var/run/docker.sock \
-e CONTAINER=my-container -e AUTH_MECHANISM=noAuth \
jeroenpeeters/docker-ssh
ssh localhost -p 2222
When you connect to this SSH service (with your SSH client of choice) a Bash session will be started in the container with name 'my-container'.
For more pointers and documentation see: https://github.com/jeroenpeeters/docker-ssh
Start a session into a Docker container using this command:
sudo docker exec -i -t (container ID) bash
If you're using Docker on Windows and want to get shell access to a container, use this:
winpty docker exec -it <container_id> sh
Most likely, you already have Git Bash installed. If you don't, make sure to install it.
In some cases your image can be Alpine-based. In this case it will throw:
OCI runtime exec failed: exec failed: container_linux.go:348: starting
container process caused "exec: \"bash\": executable file not found in
$PATH": unknown
Because /bin/bash doesn't exist. Instead of this you should use:
docker exec -it 9f7d99aa6625 ash
or
docker exec -it 9f7d99aa6625 sh
To connect to cmd in a Windows container, use
docker exec -it d8c25fde2769 cmd
Where d8c25fde2769 is the container id.
docker exec -it <container_id or name> bash
OR
docker exec -it <container_id or name> /bin/bash
GOINSIDE SOLUTION
install goinside command line tool with:
sudo npm install -g goinside
and go inside a docker container with a proper terminal size with:
goinside docker_container_name
old answer
We've put this snippet in ~/.profile:
goinside(){
docker exec -it $1 bash -c "stty cols $COLUMNS rows $LINES && bash";
}
export -f goinside
Not only does this make everyone able to get inside a running container with:
goinside containername
It also solves a long lived problem about fixed Docker container terminal sizes. Which is very annoying if you face it.
Also if you follow the link you'll have command completion for your docker container names too.
To inspect files, run docker run -it <image> /bin/sh to get an interactive terminal. The list of images can be obtained by docker images. In contrary to docker exec this solution works also in case when an image doesn't start (or quits immediately after running).
It is simple!
List out all your Docker images:
sudo docker images
On my system it showed the following output:
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
bash latest 922b9cc3ea5e 9 hours ago
14.03 MB
ubuntu latest 7feff7652c69 5 weeks ago 81.15 MB
I have two Docker images on my PC. Let's say I want to run the first one.
sudo docker run -i -t ubuntu:latest /bin/bash
This will give you terminal control of the container. Now you can do all type of shell operations inside the container. Like doing ls will output all folders in the root of the file system.
bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
Simple
docker exec -it <container_id> bash
in above -it means interactive terminal.
also, with image name:
docker exec -it <REPOSITORY name> bash
I've created a terminal function for easier access to the container's terminal. Maybe it's useful to you guys as well:
So the result is, instead of typing:
docker exec -it [container_id] /bin/bash
you'll write:
dbash [container_id]
Put the following in your ~/.bash_profile (or whatever else that works for you), then open a new terminal window and enjoy the shortcut:
#usage: dbash [container_id]
dbash() {
docker exec -it "$1" /bin/bash
}
2022 Solution
Consider another option
Why do you need it?
There is a bunch of modern docker-images that are based on distroless base images (they don't have /bin/bash either /bin/sh) so it becomes impossible to docker exec -it {container-name} bash into them.
How to shell-in any container
Use opener:
requires to add alias in your environment opener wordpress
works anywhere docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock artemkaxboy/opener wordpress
Instead of wordpress you can use name or id or image-name of any container you want to connect
How it works
Opener is a set of python scripts wrapped-up to a docker image. It finds target container by any unique attribute (name, id, port, image), tries to connect to target using bash. If bash is not found opener tries to connect using sh. Finally if sh is not found either opener installs busybox into target container and connects to the target using busybox shell, opener deletes busybox during disconnection.
$ docker exec -it <Container-Id> /bin/bash
Or depending on the shell, it can be
$ docker exec -it <Container-Id> /bin/sh
You can get the container-Id via docker ps command
-i = interactive
-t = to allocate a psuedo-TTY
you can interact with the terminal in docker container by passing the option -ti
docker run --rm -ti <image-name>
eg: docker run --rm -ti ubuntu
-t stands for terminal
-i stands for interactive
There are at least 2 options depending on the target.
Option 1: Create a new bash process and join into it (easier)
Sample start: docker exec -it <containername> /bin/bash
Quit: type exit
Pro: Does work on all containers (not depending on CMD/Entrypoint)
Contra: Creates a new process with own session and own environment-vars
Option 2: Attach to the already running bash (better)
Sample start: docker attach --detach-keys ctrl-d <containername>
Quit: use keys ctrl and d
Pro: Joins the exact same running bash which is in the container. You have same the session and same environment-vars.
Contra: Only works if CMD/Entrypoint is an interactive bash like CMD ["/bin/bash"] or CMD ["/bin/bash", "--init-file", "myfile.sh"] AND if container has been started with interactive options like docker run -itd <image> (-i=interactive, -t=tty and -d=deamon [opt])
We found option 2 more useful. For example we changed apache2-foreground to a normal background apache2 and started a bash after that.
docker exec will definitely be a solution. An easy way to work with the question you asked is by mounting the directory inside Docker to the local system's directory.
So that you can view the changes in local path instantly.
docker run -v /Users/<path>:/<container path>
Use:
docker attach <container name/id here>
The other way, albeit there is a danger to it, is to use attach, but if you Ctrl + C to exit the session, you will also stop the container. If you just want to see what is happening, use docker logs -f.
:~$ docker attach --help
Usage: docker attach [OPTIONS] CONTAINER
Attach to a running container
Options:
--detach-keys string Override the key sequence for detaching a container
--help Print usage
--no-stdin Do not attach STDIN
--sig-proxy Proxy all received signals to the process (default true)
Use this command:
docker exec -it containerid /bin/bash
To exec into a running container named test, below is the following commands
If the container has bash shell
docker exec -it test /bin/bash
If the container has bourne shell and most of the cases it's present
docker run -it test /bin/sh
If you have Docker installed with Kitematic, you can use the GUI. Open Kitematic from the Docker icon and in the Kitematic window select your container, and then click on the exec icon.
You can see the container log and lots of container information (in settings tab) in this GUI too.
In my case, for some reason(s) I need to check all the network involved information in each container. So the following commands must be valid in a container...
ip
route
netstat
ps
...
I checked through all these answers, none were helpful for me. I’ve searched information in other websites. I won’t add a super link here, since it’s not written in English. So I just put up this post with a summary solution for people who have the same requirements as me.
Say you have one running container named light-test. Follow the steps below.
docker inspect light-test -f {{.NetworkSettings.SandboxKey}}. This command will print something like /var/run/docker/netns/xxxx.
Then run ln -s /var/run/docker/netns/xxxx /var/run/netns/xxxx. (The /var/run/netns directory may not exist; run mkdir /var/run/netns first.)
Now you can execute ip netns exec xxxx ip addr show to explore the container's network world.
PS: xxxx is always the value returned by the first command. And of course any other command is valid, e.g. ip netns exec xxxx netstat -antp | grep 8080.
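The steps above can be collected into one helper, shown here only as a sketch (the function names are made up). Actually invoking it requires root privileges plus the docker and iproute2 CLIs on a Linux host; defining the function is safe anywhere:

```shell
#!/bin/sh
# Sketch: run a network command inside a container's namespace via ip netns.
# Assumes root privileges and the docker + iproute2 CLIs on the host.
container_netns_exec() {
  container="$1"; shift
  # 1. Find the container's network-namespace path (/var/run/docker/netns/xxxx)
  ns_path=$(docker inspect "$container" -f '{{.NetworkSettings.SandboxKey}}')
  ns_name=$(basename "$ns_path")
  # 2. Make the namespace visible to `ip netns`
  mkdir -p /var/run/netns
  ln -sf "$ns_path" "/var/run/netns/$ns_name"
  # 3. Run the requested command inside that namespace
  ip netns exec "$ns_name" "$@"
}
# Usage: container_netns_exec light-test ip addr show
```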
There are two shells you can usually connect to in a container: sh and bash. bash is often not available, but the default sh usually is.
To sh into the running container, type this:
docker exec -it container_name/container_ID sh
To bash into a running container, type this:
docker exec -it container_name/container_ID bash
If you want to use only the bash terminal, you can install bash in your Dockerfile, e.g. RUN apt install bash -y
This is best if you don't want to specify an entrypoint in your Dockerfile.
sudo docker run -it --entrypoint /bin/bash <image_name>
If you are using Docker Compose, this will take you inside a Docker container (note that docker-compose run starts a new container for the given service rather than entering a running one):
docker-compose run service_name /bin/bash
Inside the container, it will take you to the WORKDIR defined in the Dockerfile. You can change the working directory with:
WORKDIR directory_path   # e.g. /usr/src -> path inside the container
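For context, a minimal docker-compose.yml for the example above might look like the following; the service name web and the image are hypothetical, not from the answer:

```yaml
# Hypothetical compose file; `docker-compose run web /bin/bash`
# would start a container for the "web" service with a shell.
version: "3"
services:
  web:
    image: nginx
    working_dir: /usr/src   # same effect as WORKDIR in a Dockerfile
```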
Another option is to use nsenter.
PID=$(docker inspect --format {{.State.Pid}} <container_name_or_ID>)
nsenter --target $PID --mount --uts --ipc --net --pid

Docker logging to container

I'm a fresh user of Docker. The first problem I faced is logging into a container.
I found a solution to execute bash commands in a container with:
docker exec -it ID bash
But this is a solution only for installing/removing packages. What should I use if I want to edit the nginx config in a Docker container?
One solution could be logging into the container via an SSH connection, but maybe Docker has something of its own for this? I mean easy access without installing OpenSSH?
As you said,
docker exec -it container_id bash
and then use your favorite editor to edit any nginx config file. vi or nano is usually installed, but you may need to install emacs or vim if one of those is your favorite editor.
If you have just a few characters to modify,
docker exec container_id sed ...
might do the job. If you want to SSH into your container, you will need to install SSH and deal with SSH keys; I am not sure this is what you need.
You're going about it the wrong way. You should rarely need to log into a container to edit files.
Instead, mount the nginx.conf with -v from the host. That way you can edit the file with your normal editor. Once you've got the config working the way you want it, you can then build a new image with it baked in.
In general, you have to get into the mindset of containers being ephemeral. You don't patch them; you throw them away and replace them with a fixed version.
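The mount-instead-of-edit workflow above can be sketched as follows; the host path, port, and image tag are placeholders, and the command is echoed rather than executed so the snippet is safe to display:

```shell
#!/bin/sh
# Illustrative: mount a host-side nginx.conf read-only into the container.
CONF="$PWD/nginx.conf"    # edit this file on the host with your normal editor
RUN_CMD="docker run -d -p 8080:80 -v $CONF:/etc/nginx/nginx.conf:ro nginx"
echo "$RUN_CMD"
```

Restarting the container picks up the edited file; once the config works, bake it into a new image with COPY in a Dockerfile.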
How: Docker logging to container
Yes, you can log into the running container.
Plain docker exec or docker attach is not good enough. Looking to start a shell inside a Docker container? The solution is jpetazzo/nsenter, with two commands: nsenter and docker-enter.
If you are in Linux environment, then run below command:
docker run --rm -v /usr/local/bin:/target jpetazzo/nsenter
docker ps
# replace <container_name_or_ID> with real container name or ID.
PID=$(docker inspect --format {{.State.Pid}} <container_name_or_ID>)
nsenter --target $PID --mount --uts --ipc --net --pid
Then you are inside that running container and can run any Linux commands.
I prefer the other command, docker-enter. Without logging into the container, you can directly run Linux commands in the container with docker-enter. Also, I can't memorize the multiple options of nsenter, and with docker-enter there is no need to find out the container's PID.
docker-enter 0e8c248982c5 ls /opt
If you are a Mac or Windows user, run Docker with Toolbox:
docker-machine ssh default
docker run --rm -v /usr/local/bin:/target jpetazzo/nsenter
PID=$(docker inspect --format {{.State.Pid}} 0e8c248982c5)
sudo nsenter --target $PID --mount --uts --ipc --net --pid
If you are a Mac or Windows user, run Docker with boot2docker:
boot2docker ssh
docker run --rm -v /usr/local/bin:/target jpetazzo/nsenter
PID=$(docker inspect --format {{.State.Pid}} 0e8c248982c5)
sudo nsenter --target $PID --mount --uts --ipc --net --pid
Note: The command docker run --rm -v /usr/local/bin:/target jpetazzo/nsenter only needs to be run once.
How: edit nginx config
For your second question, you can consider ONBUILD in Docker.
ONBUILD COPY nginx.conf /etc/nginx/nginx.conf
With this solution, you can:
edit nginx.conf locally, using any existing editor.
avoid rebuilding your image every time after you change the nginx configuration.
Every time you change the nginx.conf file locally, you need to stop, remove, and re-run the container; the new nginx.conf file is deployed into the container by the docker run command.
You can refer to the details of how to use ONBUILD here: docker build
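A minimal base-image Dockerfile using this trigger might look like the following (the base image chosen here is hypothetical); the ONBUILD instruction fires when a downstream image is built FROM this one, not when this image itself is built:

```dockerfile
# Hypothetical base image: the COPY below runs during child builds.
FROM nginx
ONBUILD COPY nginx.conf /etc/nginx/nginx.conf
```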
