Does restarting a Docker container "remember" initial run arguments? - docker

I ran a Docker container using a rather long (8-line) list of arguments:
docker run -d -p 5000:5000 --restart=always --name registry \
-v `pwd`/auth:/auth \
-e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
-e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
-v `pwd`/certs:/certs \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/registry.crt \
-e REGISTRY_HTTP_TLS_KEY=/certs/registry.key \
registry:2
I confirmed this was running via docker ps:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ff9c654bfc39 registry:2 "/bin/registry /etc/d" 2 days ago Up 13 minutes 0.0.0.0:5000->5000/tcp registry
I then stopped this container via docker stop ff9c654bfc39, and tried re-running it by issuing the exact same eight-line docker run ... command as the first time:
Error response from daemon: Conflict. The name "registry" is already in use by container ff9c654bfc39. You have to delete (or rename) that container to be able to reuse that name.
So then I just tried docker restart ff9c654bfc39 and that seemed to work, but I'm not 100% sure Docker "remembered" my 8 lines of arguments from when I initially ran the container. Any ideas as to whether it is remembering? If not, what's the proper restart command to include those 8 lines?

As @gabowsky explains in the comments, yes, Docker will remember.
Using start, stop and restart will NOT destroy the container, so it remembers everything, including data (even across reboots of the host).
All stop does is stop the process running inside the container. That's all.
Docker stores all the context, environment variables, etc. in an internal format; you don't have to specify the command line arguments again.
To see what Docker knows about your container, you can run docker inspect.
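For example, to spot-check the remembered environment variables and restart policy by container name (these --format Go templates are standard docker inspect usage):
docker inspect --format '{{json .Config.Env}}' registry
docker inspect --format '{{.HostConfig.RestartPolicy.Name}}' registry
The first command should print your REGISTRY_* variables from the original run; the second should print always.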
On the contrary, rm will destroy everything, including non-persisted data, and the container would need to be recreated from scratch (giving all the arguments again this time).
As a final note, you should very much prefer names over container IDs when referring to containers on the command line.

Related

How get syncthing docker container to autostart on reboot with correct settings

I'm quite new to docker.
I followed the instructions here to install and run syncthing in a docker container.
That worked great, but when I restarted the PC it auto-started the syncthing docker container without the extra settings from the .sh file (no volumes and, I think, none of the other settings). It was as if the container was running with only the default settings: it wouldn't sync with any other devices, had nothing other than the default share, etc.
When I tried to run syncthing from the syncthing_run.sh script, it complained that there was already a docker container running with the name syncthing and I needed to delete or rename the container in order to run with that name.
I was able to get it running again with all the settings by doing sudo docker stop syncthing to stop the already-running container, then removing the --name line from the script:
IDu=$(id -u $(logname)) # Saves the logged in user id in the IDu variable
IDg=$(id -g $(logname)) # Saves the logged in user group in the IDg variable
docker run -d \
--name syncthing \ # <------------- removed this line
--hostname=syncthing-redacted \
--network=host \
-v $PWD/st-sync/:/var/syncthing/ \
-v $PWD/data/:/var/syncthing/data/ \
-v /redacted/:/var/syncthing/redacted/ \
-e TZ="Australia/Sydney" \
-e PUID=$IDu \
-e PGID=$IDg \
--restart=unless-stopped \
syncthing/syncthing:latest
then running sudo ~/docker/syncthing/syncthing_run.sh.
It seems to be running fine now, but I think it has created a new container? When I run sudo docker container ls -a | grep syncthing, I get:
88e447e1cd78 syncthing/syncthing:latest "/bin/entrypoint.sh …" 27 minutes ago Up 27 minutes (healthy)
7dd04d0701a7 syncthing/syncthing:latest "/bin/entrypoint.sh …" 12 days ago Exited (0) 34 minutes ago
How can I fix everything so the container will auto-run on startup, but with the correct settings from the .sh script, and the correct name of syncthing?
Also, how can I remove the duplicate container, and is it safe for me to do so without losing any data?
I've discovered that docker run and docker start are different. docker run creates and then starts a new container.
What I should have done after the reboot is sudo docker start syncthing.
Running the script again had created a new container. I had to stop the running container with sudo docker stop 88e447e1cd78 and then delete it with sudo docker rm 88e447e1cd78.
I think I could have just used sudo docker start syncthing at that point, but I instead deleted the old container as well and ran the script again (after re-enabling the --name line) to create a fresh one.
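In short, the recovery sequence looks like this (using the container ID from the listing above):
sudo docker stop 88e447e1cd78   # stop the duplicate container created by re-running the script
sudo docker rm 88e447e1cd78     # remove the duplicate
sudo docker start syncthing     # after a reboot, just start the existing named container
Because the container was created with --restart=unless-stopped, it will also come back automatically after future reboots, as long as it was running when the machine shut down.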

Can't save file on remote Jupyter server running in docker container

I'm trying to work in Jupyter Lab run via Docker on a remote machine, but can't save any of the files I open.
I'm working with a Jupyter Docker Stack. I've installed docker on my remote machine and successfully pulled the image.
I set up port forwarding in my ~/.ssh/config file:
Host mytunnel
HostName <remote ip>
User root
ForwardAgent yes
LocalForward 8888 localhost:8888
When I fire up the container, I use the following script:
docker run \
-p 8888:8888 \
-e JUPYTER_ENABLE_LAB=yes \
-v "${PWD}":/home/jovyan/work jupyter/tensorflow-notebook
The container is running:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c8fc3c720af1 jupyter/tensorflow-notebook "tini -g -- start-no…" 8 minutes ago Up 8 minutes 0.0.0.0:8888->8888/tcp, :::8888->8888/tcp adoring_khorana
I get the regular Jupyter url back:
http://127.0.0.1:8888/lab?token=<token>
But when I access the server in my browser, the Save option is disabled.
I've tried some of the solutions proposed elsewhere on SO, but no luck.
Is this something about connecting over SSH? Does the Jupyter server think it's not a secure connection?
It is possible that the problem is related to the SSH configuration, but I think it is more likely a permissions issue with your volume mount.
Please try reviewing your docker container logs for permission-related errors. You can do that using the following:
docker container logs <container id>
See the output provided by your docker run command too.
In addition, try opening a shell in the container:
docker exec -it <container id> /bin/bash
and see if you are able to create a file in the default work directory:
touch /home/jovyan/work/test_file
Finally, the Jupyter docker stacks repository has a troubleshooting page almost entirely devoted to permissions issues.
Consider especially the solutions provided under Additional tips and troubleshooting commands for permission-related errors and, as suggested there, try launching the container with your OS user:
docker run \
-p 8888:8888 \
-e JUPYTER_ENABLE_LAB=yes \
--user "$(id -u)" --group-add users \
-v "${PWD}":/home/jovyan/work jupyter/tensorflow-notebook
After that, as also suggested in the mentioned documentation, see if the volume is properly mounted using the following command:
docker inspect <container_id>
In the output, note the value of the RW field, which indicates whether the volume is writable (true) or not (false).
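If you only want the mount information rather than the full inspect output, a shorter variant (using docker inspect's standard Go template support) is:
docker inspect --format '{{json .Mounts}}' <container_id>
Look for "RW": true on the bind mount of /home/jovyan/work.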

Docker basics, how to keep installed packages and edited files?

Do I understand Docker correctly?
docker run -it --rm --name verdaccio -p 4873:4873 -d verdaccio/verdaccio
gets verdaccio if it does not exist yet on my server and runs it on a specific port. -d detaches it so I can leave the terminal and keep it running, right?
docker exec -it --user root verdaccio /bin/sh
lets me open a shell in the running container. However, whatever apk packages I add would be lost if I rm the container and then run the image again, as would any edited files. So what's the use of this? Can I keep the changes in the image?
As I need to edit the config.yaml present at /verdaccio/conf/config.yaml (in the container), is my only option for keeping these changes to detach the data from the running instance? Is there another way?
V_PATH=/path/on/my/server/verdaccio; docker run -it --rm --name verdaccio \
-p 4873:4873 \
-v $V_PATH/conf:/verdaccio/conf \
-v $V_PATH/storage:/verdaccio/storage \
-v $V_PATH/plugins:/verdaccio/plugins \
verdaccio/verdaccio
However, this command throws:
fatal--- cannot open config file /verdaccio/conf/config.yaml: ENOENT: no such file or directory, open '/verdaccio/conf/config.yaml'
You can use docker commit to build a new image based on the container.
A better approach, however, is to use a Dockerfile that builds an image based on verdaccio/verdaccio with the necessary changes baked in. This makes the process easily repeatable (for example, if a new version of the base image comes out).
A further option is the use of volumes, as you already mentioned. Note that bind-mounting an empty host directory over /verdaccio/conf hides the config.yaml shipped inside the image, which is likely why you get the ENOENT error; copy the default config into $V_PATH/conf first.
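As a minimal sketch of the docker commit route (the tag verdaccio-custom is just a placeholder):
docker exec -it --user root verdaccio /bin/sh   # make your changes inside the running container
docker commit verdaccio verdaccio-custom        # snapshot the container into a new local image
docker run -it --rm --name verdaccio -p 4873:4873 -d verdaccio-custom
Keep in mind that a committed image is harder to reproduce than one built from a Dockerfile, which is why the Dockerfile approach is preferred.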

Restarting Docker Container after reboot

I start a docker container like this:
docker run -ti --restart="always" --name "lizmap" -p 80:80 -d -t \
-v /home/lizmap_project:/home \
-v /home/lizmap_var:/var/www/websig/lizmap/var \
-v /home/lizmap_tmp:/tmp \
jancelin/docker-lizmap
which makes the GIS server work like a charm.
After the first two reboots the container comes up by itself as expected. After several more reboots, however, it just keeps telling me that it is restarting, over and over.
Logs are
apache2: Could not reliably determine the server's fully qualified domain name, using 172.17.0.2. Set the 'ServerName' directive globally to suppress this message
httpd (pid 7) already running
The workaround I do is to docker stop lizmap and docker rm lizmap and then start the container again with the code above.
Does anyone have an idea how to avoid this workaround and make the container's restart keep working beyond the first two times?
The Docker files come from this GitHub repository.

Docker container will automatically stop after "docker run -d"

According to the tutorials I have read so far, "docker run -d" will start a container from an image, and the container will run in the background. This is what it looks like; we can see we already have a container id.
root@docker:/home/root# docker run -d centos
605e3928cdddb844526bab691af51d0c9262e0a1fc3d41de3f59be1a58e1bd1d
But if I ran "docker ps", nothing was returned.
So I tried "docker ps -a", I can see container already exited:
root#docker:/home/root# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
605e3928cddd centos:latest "/bin/bash" 31 minutes ago Exited (0) 31 minutes ago kickass_swartz
Did I do anything wrong? How can I troubleshoot this issue?
The centos image's Dockerfile has a default command: bash.
That means that, when run in the background (-d), the shell exits immediately.
Update 2017
More recent versions of Docker allow you to run a container both in detached mode and in foreground mode (-t, -i or -it).
In that case, you don't need any additional command and this is enough:
docker run -t -d centos
The bash will wait in the background.
That was initially reported in kalyani-chaudhari's answer and detailed in jersey bean's answer.
vonc@voncvb:~$ d ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4a50fd9e9189 centos "/bin/bash" 8 seconds ago Up 2 seconds wonderful_wright
Note that for alpine, Marinos An reports in the comments:
docker run -t -d alpine/git does not keep the process up.
Had to do: docker run --entrypoint "/bin/sh" -it alpine/git
Original answer (2015)
As mentioned in this article:
Instead of running with docker run -i -t image your-command, using -d is recommended because you can run your container with just one command and you don't need to detach the container's terminal by hitting Ctrl + P + Q.
However, there is a problem with -d option. Your container immediately stops unless the commands keep running in foreground.
Docker requires your command to keep running in the foreground. Otherwise, it thinks that your application has stopped and shuts down the container.
The problem is that some applications do not run in the foreground. How can we make this easier?
In this situation, you can add tail -f /dev/null to your command.
By doing this, even if your main command runs in the background, your container doesn't stop because tail keeps running in the foreground.
So this would work:
docker run -d centos tail -f /dev/null
Or in Dockerfile:
ENTRYPOINT ["tail"]
CMD ["-f","/dev/null"]
A docker ps would show the centos container still running.
From there, you can attach to it or detach from it (or docker exec some commands).
According to this answer, adding the -t flag will prevent the container from exiting when running in the background. You can then use docker exec -i -t <container> /bin/bash to get into a shell prompt.
docker run -t -d <image> <command>
It seems that the -t option isn't documented very well, though the help says that it "allocates a pseudo-TTY."
Background
A Docker container runs a process (the "command" or "entrypoint") that keeps it alive. The container will continue to run as long as the command continues to run.
In your case, the command (/bin/bash, by default, on centos:latest) is exiting immediately (as bash does when it's not connected to a terminal and has nothing to run).
Normally, when you run a container in daemon mode (with -d), the container is running some sort of daemon process (like httpd). In this case, as long as the httpd daemon is running, the container will remain alive.
What you appear to be trying to do is to keep the container alive without a daemon process running inside the container. This is somewhat strange (because the container isn't doing anything useful until you interact with it, perhaps with docker exec), but there are certain cases where it might make sense to do something like this.
(Did you mean to get to a bash prompt inside the container? That's easy! docker run -it centos:latest)
Solution
A simple way to keep a container alive in daemon mode indefinitely is to run sleep infinity as the container's command. This does not rely on doing strange things like allocating a TTY in daemon mode, although it does rely on doing something arguably strange: using sleep as your primary command.
$ docker run -d centos:latest sleep infinity
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d651c7a9e0ad centos:latest "sleep infinity" 2 seconds ago Up 2 seconds nervous_visvesvaraya
Alternative Solution
As indicated by cjsimon, the -t option allocates a "pseudo-tty". This tricks bash into continuing to run indefinitely because it thinks it is connected to an interactive TTY (even though you have no way to interact with that particular TTY if you don't pass -i). Anyway, this should do the trick too:
$ docker run -t -d centos:latest
Not 100% sure whether -t will produce other weird interactions; maybe leave a comment below if it does.
Hi, this issue arises because Docker containers exit when there is no application running inside the container.
The -d option just runs the container in daemon mode.
So the trick to keep your container running continuously is to point to a shell script in Docker that will keep your application running. You can try it with a start.sh file.
E.g.: docker run -d centos sh /yourlocation/start.sh
This start.sh should point to a never-ending application.
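A minimal sketch of what such a start.sh could look like (my_app is a hypothetical placeholder for your real application):
#!/bin/sh
my_app &            # launch the real application in the background
tail -f /dev/null   # keep a foreground process alive so the container stays up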
In case you don't want any application to be running, you can install monit, which will keep your docker container running.
Please let us know if these two approaches worked to keep your container running.
All the best
You can accomplish what you want with either:
docker run -t -d <image-name>
or
docker run -i -d <image-name>
or
docker run -it -d <image-name>
The command parameter suggested by other answers (i.e. tail -f /dev/null) is completely optional, and is NOT required to get your container to stay running in the background.
Also note the Docker documentation suggests that combining -i and -t options will cause it to behave like a shell.
See:
https://docs.docker.com/engine/reference/run/#foreground
I have this code snippet run from the ENTRYPOINT in my Dockerfile:
while true
do
echo "Press [CTRL+C] to stop.."
sleep 1
done
Run the built docker image as:
docker run -td <image name>
Log in to the container shell:
docker exec -it <container id> /bin/bash
Execute the command as follows:
docker run -t -d <image-name>
If you want to specify a port, then the command is as below:
docker run -t -d -p <port-no> <image-name>
Verify the running container using the following command:
docker ps
A Docker container exits when the task inside it is done, so if you want to keep it alive even when it has no job (or has already finished), you can do docker run -di image. After you do docker container ls you will see it running.
Docker requires your command to keep running in the foreground. Otherwise, it thinks that your application has stopped and shuts down the container.
So if your docker entry script is a background process like following:
/usr/local/bin/confd -interval=30 -backend etcd -node $CONFIG_CENTER &
The '&' makes the container stop and exit if there are no other foreground process triggered later.
So the solution is just remove the '&' or have another foreground CMD running after it, such as
tail -f server.log
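Putting that together, the end of such an entry script might look like this (paths as in the example above):
/usr/local/bin/confd -interval=30 -backend etcd -node $CONFIG_CENTER &
tail -f server.log   # a foreground process that keeps the container alive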
If you are using CMD at the end of your Dockerfile, what you can do is add the code below at the end. This will only work if your image is built on Ubuntu, or any OS that can use bash.
&& /bin/bash
Briefly the end of your Dockerfile will look like something like this.
...
CMD ls && ... && /bin/bash
This way, anything that runs automatically when you start your Docker image executes first, and when the task is complete a bash terminal will be active inside the container, so you can enter your shell commands.
Maybe it is just me, but on CentOS 7.3.1611 with Docker 1.12.6 I ended up having to use a combination of the answers posted by @VonC and @Christopher Simon to get this working reliably. Nothing I did before this would stop the container from exiting after it ran CMD successfully. I am starting oracle-xe-11Gr2 and sshd.
Dockerfile
...
RUN ssh-keygen -t rsa -f /etc/ssh/ssh_host_rsa_key -N '' && systemctl enable sshd
...
CMD /etc/init.d/oracle-xe start && /sbin/sshd && tail -f /dev/null
Then add -d, -t and -i to run:
docker run --shm-size=2g --name oracle-db -d -t -i -p 5022:22 -p 5080:8080 -p 1521:1521 centos-oracle:7.3.1611
Finally after hours of bashing my head against the wall
ssh -v root@127.0.0.1 -p 5022
...
root@127.0.0.1's password:
debug1: Authentication succeeded (password).
For whatever reason the above will exit after executing CMD if the tail -f is removed, or any of the -t -d -i options are omitted.
I had the same issue; just opening another terminal with a bash in it worked for me:
create container:
docker run -d mcr.microsoft.com/mssql/server:2019-CTP3.0-ubuntu
containerid=52bbc9b30557
start container:
docker start 52bbc9b30557
start bash to keep container running:
docker exec -it 52bbc9b30557 bash
start process you need:
docker exec -it 52bbc9b30557 /path_to_cool_your_app
Running Docker in interactive mode might solve the issue.
Here is an example of running an image with and without interactive mode:
chaitra@RSK-IND-BLR-L06:~/dockers$ sudo docker run -d -t -i test_again1.0
b6b9a942a79b1243bada59db19c7999cfff52d0a8744542fa843c95354966a18
chaitra@RSK-IND-BLR-L06:~/dockers$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
chaitra@RSK-IND-BLR-L06:~/dockers$ sudo docker run -d -t -i test_again1.0 bash
c3d6a9529fd70c5b2dc2d7e90fe662d19c6dad8549e9c812fb2b7ce2105d7ff5
chaitra@RSK-IND-BLR-L06:~/dockers$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c3d6a9529fd7 test_again1.0 "bash" 2 seconds ago Up 1 second awesome_haibt
You can simply use:
docker container run -d -it <image name or id> /bin/bash
I have explained it in the following post that has the same question.
How to retain docker alpine container after "exit" is used?
I was also facing the same problem, but in a different manner: when I created Docker containers, it automatically stopped unused containers that were just running in the background, and sometimes it also stopped containers that were in use.
In my situation, this was because of the permissions on the docker.sock file.
What you have to do is:
Install Docker again. (As I work on Ubuntu, I installed it from here.)
Run the command to change the permissions:
sudo chmod 666 /var/run/docker.sock
Install docker-compose (this is optional, as I have a compose file to create many containers together):
sudo curl -L "https://github.com/docker/compose/releases/download/1.26.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
Check the version to ensure that you have the latest one and won't run into problems with deprecations.
Then I run the docker container build.
Argument order matters
Jersey Bean's answer (all 3 examples) worked for me. After quite a bit of trial and error I realized that the order of the arguments matters.
Keeps the container running in the background:
docker run -t -d <image-name>
Keeps the container running in the foreground: docker run <image-name> -t -d
This is because everything placed after the image name is treated as the command and arguments for the container, not as options to docker run.
It wasn't obvious to me coming from a PowerShell background.
If you want to operate on the container, you need to run it in the foreground to keep it alive.
There are multiple options out there for running the container in the foreground or in a detached state, but if you still feel the issue is not resolved, you can try troubleshooting it by viewing the logs:
sudo docker logs -f <container> >> container.log
Additionally, you can use --details to show extra details provided to the logs.
Incorrect Path to App in Dockerfile:
I was migrating an application from a RHEL server to a Docker container using Alpine Linux.
No errors during the build, so I was surprised to see the container immediately exit!
First port of call:
docker logs <containerID>
This revealed that the path of the binary I had supplied to CMD in the Dockerfile was bogus:
line 0: /sbin/postfix: not found
Well, that told me how things were broken, but not specifically where: I still required the correct path for the binary in Alpine Linux...
Troubleshooting:
Googling didn't reveal the correct path, so I added the following line to my Dockerfile:
RUN which postfix
I then reviewed my build logging (provided by the command below, appended to my build command) to retrieve the value of RUN which postfix:
--progress=plain > /path/to/build.log 2>&1
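For context, the full build command with that fragment appended might look like this (the image tag is a placeholder):
docker build -t my-image . --progress=plain > /path/to/build.log 2>&1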
The Fix:
I deleted this test build, supplied the correct path (/usr/sbin/postfix) to CMD in the Dockerfile, deleted RUN which postfix, and ran another build.
Voila; the process now remained up.
So a duff path was causing the container to immediately exit...
These 4 commands all work to keep your docker container running:
docker run -td centos
docker run -dt centos
docker run -t -d centos
docker run -d -t centos
First, you need to check whether any container is running.
Type the command:
docker ps --all
If a container is running, then stop it.
Type the command:
docker stop <container-id>
Now, finally, run Docker using the command below:
docker run -t -p 2020:3000 dockerImageName
Then open Google Chrome and visit localhost:2020.
Congrats :)
