I would like to start a stopped Docker container with a different command, as the default command crashes - meaning I can't start the container and then use 'docker exec'.
Basically I would like to start a shell so I can inspect the contents of the container.
Luckily I created the container with the -it option!
Find your stopped container id
docker ps -a
Commit the stopped container:
This command saves modified container state into a new image named user/test_image:
docker commit $CONTAINER_ID user/test_image
Start/run with a different entry point:
docker run -ti --entrypoint=sh user/test_image
Entrypoint argument description:
https://docs.docker.com/engine/reference/run/#/entrypoint-default-command-to-execute-at-runtime
Note:
The steps above just start a stopped container with the same filesystem state. That is great for a quick investigation, but environment variables, network configuration, attached volumes and other settings are not inherited; you should specify all of these arguments explicitly.
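For example, you can read the original container's settings with docker inspect and re-specify the ones you need on the new run (the environment variable and volume values below are just placeholders):
docker inspect --format '{{json .Config.Env}}' $CONTAINER_ID    # original environment variables
docker inspect --format '{{json .Mounts}}' $CONTAINER_ID        # original volumes
docker run -ti --entrypoint=sh -e SOME_VAR=some_value -v /host/path:/container/path user/test_image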
Steps to start a stopped container have been borrowed from here: (last comment) https://github.com/docker/docker/issues/18078
Edit this file (corresponding to your stopped container):
vi /var/lib/docker/containers/923...4f6/config.json
Change the "Path" parameter to point at your new command, e.g. /bin/bash. You may also set the "Args" parameter to pass arguments to the command.
Restart the docker service (note this will stop all running containers unless you first enable live-restore):
service docker restart
List your containers and make sure the command has changed:
docker ps -a
Start the container and attach to it; you should now be in your shell!
docker start -ai mad_brattain
Worked on Fedora 22 using Docker 1.7.1.
NOTE: If your shell is not interactive (e.g. you did not create the original container with -it option), you can instead change the command to "/bin/sleep 600" or "/bin/tail -f /dev/null" to give you enough time to do "docker exec -it CONTID /bin/bash" as another way of getting a shell.
NOTE2: Newer versions of docker have config.v2.json, where you will need to change either Entrypoint or Cmd (thanks user60561).
Add a check to the top of your Entrypoint script
Docker really needs to implement this as a new feature, but here's another workaround option for situations in which you have an Entrypoint that terminates after success or failure, which can make it difficult to debug.
If you don't already have an Entrypoint script, create one that runs whatever command(s) you need for your container. Then, at the top of this file, add these lines to entrypoint.sh:
# Run once, hold otherwise
if [ -f "already_ran" ]; then
    echo "Already ran the Entrypoint once. Holding indefinitely for debugging."
    cat
fi
touch already_ran
# Do your main things down here
To ensure that cat holds the connection, you may need to provide a TTY. I'm running the container with my Entrypoint script like so:
docker run -t --entrypoint entrypoint.sh image_name
This will cause the script to run once, creating a file that indicates it has already run (in the container's virtual filesystem). You can then restart the container to perform debugging:
docker start container_name
When you restart the container, the already_ran file will be found, causing the Entrypoint script to stall with cat (which just waits forever for input that will never come, but keeps the container alive). You can then execute a debugging bash session:
docker exec -i container_name bash
While the container is running, you can also remove already_ran and manually execute the entrypoint.sh script to rerun it, if you need to debug that way.
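For example (the container name is a placeholder, and the paths assume the script's relative locations used above):
docker exec -it container_name rm already_ran
docker exec -it container_name ./entrypoint.sh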
I took @Dmitriusan's answer and made it into an alias:
alias docker-run-prev-container='prev_container_id="$(docker ps -aq | head -n1)" && docker commit "$prev_container_id" "prev_container/$prev_container_id" && docker run -it --entrypoint=bash "prev_container/$prev_container_id"'
Add this into your ~/.bashrc aliases file, and you'll have a nifty new docker-run-prev-container alias which'll drop you into a shell in the previous container.
Helpful for debugging failed docker builds.
This is not exactly what you're asking for, but you can use docker export on a stopped container if all you want is to inspect the files.
mkdir $TARGET_DIR
docker export $CONTAINER_ID | tar -x -C $TARGET_DIR
docker-compose run --entrypoint /bin/bash service_name
(for convenience, put your environment variables and volume mounts in the docker-compose.yml)
or use docker run and manually specify all the arguments
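A minimal docker-compose.yml sketch for the compose option (the service name, image, environment values and volume paths are placeholders):
version: "3"
services:
  myservice:
    image: user/test_image
    environment:
      - SOME_VAR=some_value
    volumes:
      - ./data:/data
Then: docker-compose run --entrypoint /bin/bash myservice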
It seems Docker can't change a container's entrypoint after the container has been created, but you can set a custom entrypoint up front and change the code behind it the next time you restart the container.
For example you run a container like this:
docker run --name c --entrypoint "/boot" -v "$PWD/boot":/boot $image
Here is the boot entry point:
#!/bin/bash
command_a
When you need restart c with a different command, you just change the boot script:
#!/bin/bash
command_b
And restart:
docker restart c
My Problem:
I started a container with docker run <IMAGE_NAME>
And then added some files to this container
Then I closed the container and tried to start it again with the same command as above.
But when I checked the new files, they were missing.
When I ran docker ps -a, I could see two containers.
That means every time I ran the docker run <IMAGE_NAME> command, a new container was being created.
Solution:
To work on the same container you created in the first place, follow these steps:
docker ps -a to get the ID of your container
docker container start <CONTAINER_ID> to start existing container
Then you can continue from where you left. e.g. docker exec -it <CONTAINER_ID> /bin/bash
You can then decide to create a new image out of it
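For example (the image name and tag are placeholders):
docker commit <CONTAINER_ID> myusername/myimage:with-my-changes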
I have found a simple command
docker start -a [container_name]
This will do the trick
Or
docker start [container_name]
then
docker exec -it [container_name] bash
I had a MariaDB container that was continuously crashing on startup because of corrupted InnoDB tables.
What I did to solve my problem was:
copy out the docker-entrypoint.sh from the container to the local file system (docker cp)
edit it to include the needed command line parameter (--innodb-force-recovery=1 in my case)
copy the edited file back into the docker container, overwriting the existing entrypoint script.
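A sketch of those three steps (the container name and the entrypoint path inside the container are assumptions; check yours with docker inspect):
docker cp mariadb_container:/usr/local/bin/docker-entrypoint.sh .
# edit docker-entrypoint.sh locally, adding --innodb-force-recovery=1 to the mysqld invocation
docker cp docker-entrypoint.sh mariadb_container:/usr/local/bin/docker-entrypoint.sh
docker start mariadb_container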
To me, Docker always leaves the impression that it was created for a hobby system; it works well for that.
If something fails or doesn't work, don't expect to have a professional solution.
That said: Docker not only does NOT support such basic administrative tasks, it actively tries to prevent them.
Solution:
cd /var/lib/docker/overlay2/
find | grep somechangedfile
# You now can see the changed file from your container in a hexcoded folder/diff
cd hexcoded-folder/diff
Create an entrypoint.sh (make sure to back up an existing one if it's there)
cat > entrypoint.sh
#!/bin/bash
while ((1)); do sleep 1; done;
Ctrl+C
chmod +x entrypoint.sh
docker stop <container_name>
docker start <container_name>
You now have your container running an endless loop instead of the original entrypoint; you can exec bash into it, or do whatever else you need.
When finished stop the container, remove/rename your custom entrypoint.
It seems like most of the time people are running into this while modifying a config file, which is what I did. I was trying to bypass CORS for a PHP/Apache server with a Vue SPA as my entry point. Anyway, if you know the file you horked, a simple solution that worked for me was:
Copy the file you horked out of the image:
docker cp bt-php:/etc/apache2/apache2.conf .
Fix it locally
Copy it back in
docker cp apache2.conf bt-php:/etc/apache2/apache2.conf
Start your container back up
*Bonus points - Since this file is being modified, add it to your Compose or Build scripts so that when you do get it right it will be baked into the image!
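For example, if you build the image yourself, a line like this in the Dockerfile (the paths match the Apache example above) bakes the fixed file in:
COPY apache2.conf /etc/apache2/apache2.conf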
Lots of discussion surrounding this so I thought I would add one more which I did not immediately see listed above:
If the full path to the entrypoint for the container is known (or discoverable via inspection), it can be copied in and out of the stopped container using 'docker cp'. This means you can copy the original out of the container, edit a copy of it to start a bash shell (or a long sleep timer) instead of whatever it was doing, and then restart the container. The running container can now be further edited with the bash shell to correct any problems. When you're finished editing, another docker cp of the original entrypoint back into the container and a restart should do the trick.
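A sketch of that round trip (the container name and the entrypoint path are placeholders; discover the real path with docker inspect):
docker inspect --format '{{json .Config.Entrypoint}}' my_container
docker cp my_container:/path/to/entrypoint.sh ./entrypoint.sh.orig
cp entrypoint.sh.orig entrypoint.sh.debug
# edit entrypoint.sh.debug so it just runs a bash shell or a long sleep
docker cp entrypoint.sh.debug my_container:/path/to/entrypoint.sh
docker start my_container
docker exec -it my_container bash    # fix whatever is broken
docker cp entrypoint.sh.orig my_container:/path/to/entrypoint.sh
docker restart my_container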
I have used this once to correct a 'quick fix' that I butterfingered and was no longer able to run the container with the normal entrypoint until it was corrected.
I also agree there should be a better way to do this via docker: Maybe an option to 'docker restart' that allows an alternate entrypoint? Hey, maybe that already works with '--entrypoint'? Not sure, didn't try it, left as exercise for reader, let me know if it works. :)
I have a simple container with some service.
When I restart the server without stopping this container, I can't start the container again.
Error message is:
pidfile found, try stopping another running service or delete /var/run/service.pid
I know that I can
run a new container from the image and delete the stopped one
create a new image from the stopped container and re-run it with a new entrypoint, something like rm -f /var/run/service.pid && original_entrypoint.sh
But I want simply do something like
docker rm_file container:/var/run/service.pid; docker start container
Because it is most simple and fast to start solution.
Isn't there a way to access a container's filesystem without completely rebuilding it? Such an operation looks so useful...
Answering my own question, using hints from another answer.
Find where the container's writable layer is stored on the Docker host:
export LOCAL_DIR=$(docker inspect -f '{{ .GraphDriver.Data.UpperDir }}' container_name)
Remove file locally:
sudo rm -f ${LOCAL_DIR}/run/service.pid
Run container:
docker start container_name
Or all in one:
sudo rm -f "$(docker inspect -f '{{ .GraphDriver.Data.UpperDir }}' container_name)/run/service.pid" && docker start container_name
I always delete the old container and run a new one. This works consistently with every application and runtime, and doesn't involve trying to figure out how to manually reset the container filesystem to its initial state. I almost never use docker start.
docker rm container_name
docker run -d --name container_name ...
If you're in a context where the old pid file might be left behind (maybe it's in a bind-mounted host directory) you can use an entrypoint wrapper script to clean it up:
#!/bin/sh
rm -f /var/run/service.pid
exec "$#"
In your Dockerfile, make this script be the ENTRYPOINT. The last line will run the image's CMD as the main container process.
...
# must be executable, may already be there
COPY entrypoint.sh .
# must be JSON-array form
ENTRYPOINT ["./entrypoint.sh"]
CMD same as before
(Your question references an original_entrypoint.sh; if you already have this setup, edit the existing entrypoint script in your local source tree and add the rm -f line in there.)
I have a script startScript.sh that I want to run after the docker container completely starts (including loading all services that the container is supposed to start)
After that I want to run a script: startScript.sh
This is what I do:
sudo docker run -p 8080:8080 <docker image name> " /bin/bash -c ./startScript.sh"
However this gives me an error:
WFLYSRV0073: Invalid option '/bin/bash'
I even tried different shells, with the same error, and tried passing just the script file name. That still did not help.
Note: I know that the above file is in the container in the root folder: /
In fact I once entered the container by doing: sudo docker exec and manually ran that script file and it worked.
But when I try to automatically do it as above, it does not work for me.
Some questions:
1. Please suggest what could be the issue.
2. I want to run that script after the container has started completely and is up and running - including all the services that are part of it. Is this the right way to even do it? Or does this try to run while the container is starting up?
When you pass arguments after the image name, you are not modifying the entrypoint, but the command (CMD). Your image evidently already has an entrypoint (the WildFly server startup), so the binary that actually gets executed is that entrypoint, with your command appended as its arguments. That is why it fails with WFLYSRV0073 when it tries to parse /bin/bash as an option.
To run just your script, you could override the image's entrypoint with an empty string, making it run your command's first element. Notice I also remove the quotes, or else Docker will search for a binary with the name containing spaces, which of course doesn't exist.
sudo docker run --entrypoint "" -p 8080:8080 <docker image name> /bin/bash -c ./startScript.sh
However this is probably not what you want: it won't run what the image should actually be running, only your setup script. The correct thing to do here is to modify the image's Dockerfile so that your setup script is the entrypoint, and at the end of that script run the image's original entrypoint (the actual thing you want to run).
Alternatively, if you do not control the image you are running, you can use FROM <the current image> in a new Dockerfile to build another image based on it, setting the entrypoint to your script.
Edit:
An example of how the above can be done can be seen in MariaDB's entrypoint: you first start a temporary server, run your setup, then restart it, running the definitive service (which is the CMD) at the end.
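A minimal sketch of that wrapper idea, assuming your setup lives in /startScript.sh and the image's original entrypoint is a script I'll call /original-entrypoint.sh (both paths are placeholders for whatever your image actually uses):
#!/bin/bash
# custom-entrypoint.sh: run the one-off setup, then hand over to the image's usual startup
/startScript.sh
exec /original-entrypoint.sh "$@"
And in your Dockerfile:
COPY custom-entrypoint.sh /custom-entrypoint.sh
RUN chmod +x /custom-entrypoint.sh
ENTRYPOINT ["/custom-entrypoint.sh"]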
The above solutions are good in case you want to perform initialization for an image, but if you just want to be able to run a script for development reasons instead of doing it consistently on the image's entrypoint, you can copy it to your container and then use docker exec <container name> your-command and-arguments to run it.
How can I debug a Docker container that I set to always restart?
I have a container that launches a Node.js app with
CMD ["nodemon", "/usr/src/app/app.js"], which works very well in another container but not in the new one I created. docker logs containerName says:
Usage: nodemon [nodemon options] [script.js] [args]
See "nodemon --help" for more.
How can I connect to the container to get more information than the logs, for example to see some config file or check whether my Node.js files have been copied?
I didn't find a way: I would like to use docker exec -it <container> bash and navigate around inside, but because the container is always restarting I cannot. How do I debug this kind of container?
EDIT: I used CMD ["bash"], but when I use docker exec -it <container> bash it doesn't work,
because the container keeps restarting.
You could make a new image based on your container's image, with a different starting script (one which runs the node command for testing and then opens a bash, for instance).
You would need to COPY that script:
COPY myscript /usr/local/bin
CMD ["/usr/local/bin/myscript"]
That way, you can test your current image as wrapped in a test image.
You can even just use bash in that new image:
CMD ["bash"]
And launch the command manually.
For that, you would need to run that image with:
docker run -it --rm myNewImage
That will open an interactive bash session.
I tried to start an exited container as follows.
I listed all available containers using docker ps -a.
I then entered the following commands to start the container that is in the exited state and get into its terminal:
docker start 79b3fa70b51d
docker exec -it 79b3fa70b51d /bin/sh
It is throwing the following error.
FATA[0000] Error response from daemon: Container 79b3fa70b51d is not running
But when I start the container using docker start 79b3fa70b51d, it prints the container ID as output, which is what normally happens when everything works.
What is the cause of this error?
By default, a Docker container will exit immediately if it does not have any task running in it.
To keep the container running in the background, try running it with the --detach (or -d) argument.
For example:
docker pull debian
docker run -t -d --name my_debian debian
e7672d54b0c2
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e7672d54b0c2 debian "bash" 3 minutes ago Up 3 minutes my_debian
# now you can execute commands in the container
docker exec -it my_debian bash
root@e7672d54b0c2:/#
Container 79b3fa70b51d seems to only do an echo.
That means it starts, echoes, and then exits immediately.
The next docker exec command wouldn't find it running in order to attach itself to that container and execute any command: it is too late. The container has already exited.
The docker exec command runs a new command in a running container.
The command started using docker exec will only run while the container's primary process (PID 1) is running
If it's not possible to start the main process again (for long enough), there is also the possibility to commit the container to a new image and run a new container from this image. While this is not the usual best practice workflow (the new image is not repeatable), I find it really useful to debug a failing script once in a while.
docker exec -it 6198ef53d943 bash
Error response from daemon: Container 6198ef53d9431a3f38e8b38d7869940f7fb803afac4a2d599812b8e42419c574 is not running
docker commit 6198ef53d943
sha256:ace7ca65e6e3fdb678d9cdfb33a7a165c510e65c3bc28fecb960ac993c37ef33
docker run -it ace7ca65e6e bash
root@72d38a8c787d:/#
This happens with images whose startup command does not launch a service awaiting requests; the container therefore exits at the end of the script.
This is typically the case with most base OS images (centos, debian, etc.), or also with the node images.
Your best bet is to run the image in interactive mode. Example below with the node image:
docker run -it node /bin/bash
Output is
root@cacc7897a20c:/# echo $SHELL
/bin/bash
First of all, we have to start the docker container
ankit@ankit-HP-Notebook:~$ sudo docker start 3a19b39ea021
3a19b39ea021
After that, check the docker container:
ankit@ankit-HP-Notebook:~$ sudo docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3a19b39ea021        coreapps/ubuntu16.04:latest   "bash"     13 hours ago   Up 9 seconds             ubuntu1
455b66057060        hello-world                   "/hello"   4 weeks ago    Exited (0) 4 weeks ago   vigorous_bardeen
Then execute by using the command below:
ankit@ankit-HP-Notebook:~$ sudo docker exec -it 3a19b39ea021 bash
root@3a19b39ea021:/#
Here is what worked for me.
Get the container ID and restart.
docker ps -a --no-trunc
ace7ca65e6e3fdb678d9cdfb33a7a165c510e65c3bc28fecb960ac993c37ef33
docker restart ace7ca65e6e3fdb678d9cdfb33a7a165c510e65c3bc28fecb960ac993c37ef33
docker run -it --entrypoint /bin/bash <imageid>
This was posted by L0j1k in the below post and worked for me.
How do I get into a Docker container's shell?
use command
> docker container ls
> docker image ls
Check your container ID and note it down. Here my container ID is "6c929ca002da"; you have to use your own container ID instead of mine.
> docker start 6c929ca002da
Here our container is stopped, so we have to start it first by using its ID.
6c929ca002da is my container ID
> `docker exec -it 6c929ca002da bash`
after running this command you can see
that your container is running, with a prompt like this:
root@6c929ca002da
Here I am using root mode; go to root mode by using the command:
sudo su
The reason is just what the accepted answer said. I add some extra information, which may provide a further understanding about this issue.
The statuses of a container include Created, Running, Stopped,
Exited and Dead, as far as I know.
When we execute docker create, the Docker daemon creates a
container with the status Created.
When we execute docker start, the Docker daemon starts an existing container
whose status may be Created or Stopped.
When we execute docker run, the Docker daemon does it in two
steps: docker create and docker start.
When we execute docker stop, the Docker daemon obviously stops the container.
Thus the container would be in the Stopped status.
Coming to the most important point: a container essentially wraps
a long-running process. When that process exits, the
container holding it exits too. Thus the status of this
container becomes Exited.
When does the process exit? In other words, what is the process, and how did we start it?
The answer is the CMD in a Dockerfile, or command in the following expression, which is bash by default in some images, e.g. ubuntu:18.04.
docker run ubuntu:18.04 [command]
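You can watch these statuses yourself, for instance (the container name is a placeholder):
docker create --name demo ubuntu:18.04 sleep 30
docker inspect -f '{{.State.Status}}' demo    # created
docker start demo
docker inspect -f '{{.State.Status}}' demo    # running
# after roughly 30 seconds the sleep ends and the main process exits
docker inspect -f '{{.State.Status}}' demo    # exited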
docker run -it <image_id> /bin/bash
Run in interactive mode, executing the bash shell.
For anyone attempting something similar using a Dockerfile...
Running in detached mode won't help. The container will always exit (stop running) if the command is non-blocking, which is the case with bash here.
In this case, a workaround would be:
1. Commit the resulting image:
(container_name = the name of the container you want to base the image off of,
image_name = the name of the image to be created)
docker commit container_name image_name
2. Use docker run to create a new container using the new image, specifying the command you want to run. Here, I will run "bash":
docker run -it image_name bash
This would get you the interactive login you're looking for.
Here's a solution when the docker container exits normally and you can edit the Dockerfile.
Generally, when a docker container is run, an application is served by running a command. From the Dockerfile reference,
Both CMD and ENTRYPOINT instructions define what command gets executed when
running a container. ...
Dockerfile should specify at least one of CMD or ENTRYPOINT commands.
When you build an image and do not specify any command with CMD or ENTRYPOINT, the base image's CMD or ENTRYPOINT command will be executed.
For example, the official Ubuntu Dockerfile has CMD ["/bin/bash"] (https://hub.docker.com/_/ubuntu). Now, the /bin/bash command can accept input, and the docker run -it IMAGE_ID command attaches STDIN to the container. The result is that you get an interactive terminal and the container keeps running.
When a command with CMD or ENTRYPOINT is specified in the Dockerfile, this command gets executed when running the container. Now, if this command can finish without requiring any input, it will finish and the container will exit. docker run -it IMAGE_ID will NOT provide the interactive terminal in this case. An example would be the Docker image built from the Dockerfile below:
FROM ubuntu
ENTRYPOINT echo hello
If you need to go to the terminal of this image, you will need to keep the container running by modifying the entrypoint command.
FROM ubuntu
ENTRYPOINT echo hello && sleep infinity
After running the container normally with docker run IMAGE_ID, you can just go to another terminal and use docker exec -it CONTAINER_ID bash to get the container's terminal.
Perhaps too late for this active community, but there are many reasons why a container may not execute correctly and then exit, with or without writing a console message. For all the newbies making Node.js containers, I recommend changing the Dockerfile to erase any CMD and ENTRYPOINT you may have and to add only an ENTRYPOINT of ["/bin/sh"] (see my attached test Dockerfile example). Then rebuild the Docker image and run it with the command:
docker run -it --rm your_named_image:tag
Voilà, you will get inside the container with a shell. Then you can test your app by typing the command yourself, e.g. node app.js, and see what is happening. Once you see all is OK, you can change your Dockerfile again, replacing the ENTRYPOINT ["/bin/sh"] with your real command, e.g. ["node","app.js"] or whatever. Always consider the previous answers to this post: when the app inside the container finishes, it will stop the running container.
Here is an example for my "test" Dockerfile:
FROM node:16.4.0-alpine
ENV NODE_ENV=production
WORKDIR /app
COPY ["package.json","package-lock.json*", "./"]
RUN npm install --production
COPY ./dist .
ENTRYPOINT ["/bin/sh"]
NOTE: My source files for the app (.js) are in the ./dist directory on the local computer, so I have to copy them into the container as you can see.
In my case, I changed certain file and directory names in the parent directory of the Dockerfile, due to which the container could not find the required parameters to start again.
After renaming them back to the original names, the container started like butter.
I have a different take on this. I could do a docker ps and see that there is a Docker container running; I even tried to restart it, but as soon as I tried to get a session for it with New-PSSession -ContainerId $containerId -RunAsAdministrator, it would error out, saying:
##[error]New-PSSession : The input ContainerId xxx does not exist,
##[error]or the corresponding container is not running.
My problem was that I was running as the Network Service account, which did not have enough permissions to see the container, even though I had given it permission to run Docker commands (via the docker security group configuration).
I didn't know how to enable working with containers for that account, so I had to revert to running it as an admin user instead.
In my case, I had previously killed the running container with,
sudo docker kill testdeb
So when I exec the container I got the error,
Error response from daemon: Container fcc29295fe78a425155c533506f58fc5b30a50ee9eb85c21031e8699b3f6ff01 is not running
The solution was to start the container with,
sudo docker start testdeb
Now I have a container running:
sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
fcc29295fe78 debian "bash" 9 hours ago Up 11 seconds testdeb
Which wasn't previously running
The approach below worked for me in a Windows VS Code environment.
docker run --name yourcontainer -p 3306:3306 -e MYSQL_ROOT_PASSWORD=<your_password> -d mysql
I see a lot of similar answers, but adding the port mapping '-p 3306:3306' made the status Up and running. You can verify with the command docker ps -a.
I was naively expecting this command to run a bash shell in a running container :
docker run "id of running container" /bin/bash
It looks like it's not possible; I get the error:
2013/07/27 20:00:24 Internal server error: 404 trying to fetch remote history for 27d757283842
So, if I want to run a bash shell in a running container (e.g. for diagnostic purposes),
do I have to run an SSH server in it and log in via ssh?
With Docker 1.3, there is a new command, docker exec. This allows you to enter a running container:
docker exec -it "id of running container" bash
EDIT: Now you can use docker exec -it "id of running container" bash (doc)
Previously, the answer to this question was:
If you really must and you are in a debug environment, you can do this: sudo lxc-attach -n <ID>
Note that the id needs to be the full one (docker ps -notrunc).
However, I strongly recommend against this.
Notice: -notrunc is deprecated; it will be replaced by --no-trunc soon.
Just do
docker attach container_name
As mentioned in the comments, to detach from the container without stopping it, press Ctrl+p then Ctrl+q.
Since things are changing, at the moment the recommended way of accessing a running container is using nsenter.
You can find more information on this github repository. But in general you can use nsenter like this:
PID=$(docker inspect --format {{.State.Pid}} <container_name_or_ID>)
nsenter --target $PID --mount --uts --ipc --net --pid
or you can use the wrapper docker-enter:
docker-enter <container_name_or_ID>
A nice explanation on the topic can be found on Jérôme Petazzoni's blog entry:
Why you don't need to run sshd in your docker containers
First thing: you cannot run
docker run "existing container" command
because this command expects an image, not a container, and it would in any case result in a new container being spawned (so not the one you wanted to look at).
I agree with the fact that with docker we should push ourselves to think in a different way (so you should find ways so that you don't need to log onto the container), but I still find it useful and this is how I work around it.
I run my commands through supervisor in DAEMON mode.
Then I execute what I call docker_loop.sh
The content is pretty much this:
#!/bin/bash
/usr/bin/supervisord
/usr/bin/supervisorctl
while ( true )
do
echo "Detach with Ctrl-p Ctrl-q. Dropping to shell"
sleep 1
/bin/bash
done
What it does is that it allows you to "attach" to the container and be presented with the supervisorctl interface to stop/start/restart and check logs.
If that should not suffice, you can Ctrl+D and you will drop into a shell that will allow you to have a peek around as if it was a normal system.
PLEASE DO ALSO TAKE INTO ACCOUNT that this system is not as secure as having the container without a shell, so take all the necessary steps to secure your container.
Keep an eye on this pull request: https://github.com/docker/docker/pull/7409
Which implements the forthcoming docker exec <container_id> <command> utility. When this is available it should be possible to e.g. start and stop the ssh service inside a running container.
There is also nsinit to do this: "nsinit provides a handy way to access a shell inside a running container's namespace", but it looks difficult to get running.
https://gist.github.com/ubergarm/ed42ebbea293350c30a6
You can use
docker exec -it <container_name> bash
Here's my solution
In the Dockerfile:
# ...
RUN mkdir -p /opt
ADD initd.sh /opt/
RUN chmod +x /opt/initd.sh
ENTRYPOINT ["/opt/initd.sh"]
In the initd.sh file
#!/bin/bash
...
/etc/init.d/gearman-job-server start
/etc/init.d/supervisor start
#very important!!!
/bin/bash
After the image is built you have two options, using exec or attach:
Use exec (preferred) and run:
docker run --name $CONTAINER_NAME -dt $IMAGE_NAME
then
docker exec -it $CONTAINER_NAME /bin/bash
and use CTRL + D to detach
Use attach and run:
docker run --name $CONTAINER_NAME -dit $IMAGE_NAME
then
docker attach $CONTAINER_NAME
and use CTRL + P and CTRL + Q to detach
Note: The difference between the options is the -i parameter.
There is actually a way to have a shell in the container.
Assume your /root/run.sh launches the process, process manager (supervisor), or whatever.
Create /root/runme.sh with some gnu-screen tricks:
# Spawn a screen with two tabs
screen -AdmS 'main' /root/run.sh
screen -S 'main' -X screen bash -l
screen -r 'main'
Now, you have your daemons in tab 0, and an interactive shell in tab 1. docker attach at any time to see what's happening inside the container.
Another advice is to create a "development bundle" image on top of the production image with all the necessary tools, including this screen trick.
There are two ways.
With attach
$ sudo docker attach 665b4a1e17b6 #by ID
With exec
$ sudo docker exec -i -t 665b4a1e17b6 /bin/bash #by ID
If the goal is to check on the application's logs, this post shows starting up tomcat and tailing the log as part of CMD. The tomcat log is available on the host using 'docker logs containerid'.
http://blog.trifork.com/2013/08/15/using-docker-to-efficiently-create-multiple-tomcat-instances/
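The pattern from that post boils down to something like this in the Dockerfile (the service name and log path are assumptions; adjust them to your Tomcat layout):
CMD service tomcat7 start && tail -f /var/log/tomcat7/catalina.out
Because tail writes to stdout, the log then shows up in docker logs <containerid>.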
It's useful to assign a name when running a container; then you don't need to refer to the container_id.
docker run --name container_name yourimage
docker exec -it container_name bash
First, get the container ID of the desired container by running:
docker ps
you will get something like this:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3ac548b6b315 frontend_react-web "npm run start" 48 seconds ago Up 47 seconds 0.0.0.0:3000->3000/tcp frontend_react-web_1
now copy this container id and run the following command:
docker exec -it container_id sh
docker exec -it 3ac548b6b315 sh
Maybe you were misled, like myself, into thinking in terms of VMs when developing containers. My advice: try not to.
Containers are just like any other process. Indeed you might want to "attach" to them for debugging purposes (think of /proc/<pid>/env or strace -p <pid>), but that's a very special case.
Normally you just "run" the process, so if you want to modify the configuration or read the logs, just create a new container and make sure you write the logs outside of it by sharing directories, writing to stdout (so docker logs works) or something like that.
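For instance, a rough sketch of writing logs outside the container (the image name and log path are placeholders):
docker run -d -v "$PWD/logs":/var/log/myapp my_image    # log files land on the host in ./logs
docker logs <container_id>                              # works when the app writes to stdout/stderr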
For debugging purposes you might want to start a shell, then your code, then press CTRL-p + CTRL-q to leave the shell intact. This way you can reattach using:
docker attach <container_id>
If you want to debug the container because it's doing something you haven't expect it to do, try to debug it: https://serverfault.com/questions/596994/how-can-i-debug-a-docker-container-initialization
No. This is not possible. Use something like supervisord to get an ssh server if that's needed. Although, I definitely question the need.