I'm on Docker for Mac 1.13.0
When I run docker run -d cassandra:3.9, each time it kills the previous container. I cannot start more than one instance. It works as expected with nginx:latest, though, i.e. it starts multiple instances.
Why is it happening?
Give each container a different name:
docker run -d --name cassandra1 cassandra:3.9
and then
docker run -d --name cassandra2 cassandra:3.9
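To check that both instances stay up (using the names from the commands above):
docker ps --filter name=cassandra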
I suspect this might be a Docker 1.13 problem; I don't see this issue with Docker 1.12.6 on Mac.
I'm trying to launch a GitLab or Gitea docker container in my QNAP NAS (Container Station) and, for some reason, when I restart the container, it won't start back up because files are lost (it seems).
For example, for GitLab it gives me errors saying runsvdir-start and gitlab-ctl don't exist. For Gitea it's the s6-supervise file.
Now I am launching the container like this, just to keep it simple:
docker run -d --privileged --restart always gitea/gitea:latest
A simple docker stop .... and docker start .... breaks it. How do I troubleshoot something like this?
QNAP has sent this issue to R&D and they were able to replicate it. It's a bug and will probably be fixed in a new Container Station update.
It's now fixed in QTS 4.3.6.20190906 and later.
It's normal to lose your data if you only launch:
docker run -d --privileged --restart always gitea/gitea:latest
You should use a volume to share a folder between your host and the container, for example (using an absolute host path, which docker run expects):
docker run -d --privileged -v $(pwd)/gitea:/data -p 3000:3000 -p 222:22 --restart always gitea/gitea:latest
Or use docker-compose.yml (see the official docs).
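For reference, a minimal docker-compose.yml sketch along the same lines (the host path, ports and service name are illustrative, not taken from the official docs):
version: "3"
services:
  gitea:
    image: gitea/gitea:latest
    restart: always
    volumes:
      - /var/lib/gitea:/data
    ports:
      - "3000:3000"
      - "222:22"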
Newbie question: I have splash running in a docker container and scrapy running on my local development machine. I now need to promote this to an AWS environment via docker containers, but I can't figure out how to connect the scrapy and splash containers?
I'm assuming that I need to create a docker stack, but that's as far as I've got :o(
It was really quite straightforward in the end.
docker network create crawler-network
docker run --network=crawler-network --name=splash --hostname=splash --memory=6GB --restart unless-stopped -d -p 8050:8050 scrapinghub/splash --max-timeout 600 --slots 10
docker run --network=crawler-network --name=crawler --hostname=crawler -it conda
docker network inspect crawler-network
Then we changed the scrapy-splash settings to point to http://splash:8050 instead of http://localhost:8050.
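A quick way to check that the two containers can reach each other over the user-defined network (assuming curl is installed in the crawler image):
docker exec -it crawler curl http://splash:8050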
When I run a new docker container (or when I start a stopped one), the system logs me out. When I log in again, the container is up and running and I can use it without any problems.
I am currently using Fedora 24.
Example:
alekos@alekos-pc:~$ docker run -d --name somename -t -p 80:80 someimage
At this point it logs me out
When I log in again I run:
alekos@alekos-pc:~$ docker ps -a
and my "somename" container is running without any further problems.
I can live with these logouts/logins but it is a bit annoying. Does anybody have any idea what is causing the problem?
I was trying rancher.
I used the command:
sudo docker run -d --restart=always -p 8080:8080 rancher/server
to start it.
Then I stopped the container and removed it. But if I stop and restart the Docker daemon, or reboot my laptop, and look up running containers with docker ps, the rancher/server container is running again. How do I stop/remove it completely and make sure it does not run again?
Note: following issue 11008 and PR 15348 (commit fd8b25c, docker v1.11.2), you would avoid the issue with:
sudo docker run -d --restart=unless-stopped -p 8080:8080 rancher/server
In your current situation, thanks to PR 19116, you can use docker update to update the restart policy.
docker update --restart=unless-stopped <yourContainerID_or_Name>
Then stop your container, restart your docker daemon: it won't restart said container.
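To double-check that the new policy was applied before restarting the daemon (a small sketch, using whatever container name or ID you passed to docker update):
docker inspect -f '{{ .HostConfig.RestartPolicy.Name }}' <yourContainerID_or_Name>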
The OP codefire points to another reason in the comments:
When I first ran the start rancher server command, I didn't notice that it was being downloaded. So I may have retried that command a couple times.
That must be why the job kept on restarting even after stopping and removing containers that were started as the rancher server.
After stopping and removing 8+ containers, it finally stopped
That is why I have aliases to quickly remove any stopped containers.
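For example, one possible pair of aliases (the names here are made up, pick your own):
alias rm-stopped='docker rm $(docker ps -aq --filter status=exited)'
alias docker-clean='docker container prune -f'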
It keeps restarting because you're using the --restart=always flag.
Run
docker logs <CONTAINER_ID>
to see if your code is encountering any errors that prevent the container from running properly.
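If the container exits immediately, the recorded exit code and error message can also hint at the cause:
docker inspect --format '{{ .State.ExitCode }} {{ .State.Error }}' <CONTAINER_ID>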
Thanks for the answers, but for me the problem continued even after using VonC's answer.
After researching, I found that Kubernetes was running the image again and again.
Use kubectl get nodes to list the nodes you have, and then run:
kubectl drain NODE_ID --delete-emptydir-data --ignore-daemonsets
These solutions are correct! But for me there was a situation where these answers did not work. That's because I was running a service in the background and had not removed it, so even if you remove the container, the service keeps restarting it, and so on...
So the answer to that specific problem is:
docker service rm <service name or id>
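To find the service name or ID first (assuming the host is part of a swarm):
docker service ls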
Docker runs on your local host; if you shut down or kill the Docker daemon, your containers stop, and any data written inside a container is lost unless you save it in an external volume, for example:
docker run -d -v nginxlogs:/var/log/nginx -p 5000:80 nginx
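Here nginxlogs is a named volume; Docker creates it on first use, and you can see where the data actually lives on the host with:
docker volume inspect nginxlogs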
I see that Docker has added something called restart policies to handle the restart of containers in case of, for instance, a reboot.
While this is very useful, I see that the restart policy option only works with docker run and not docker start. So my question is:
Is there any way to add restarting policies to a container that was already created in the past?
In recent versions of docker (as of 1.11) you have an update command:
docker update --restart=always <container>
There are two approaches to modify the RestartPolicy:
Find the container ID, stop the whole Docker service, edit /var/lib/docker/containers/CONTAINER_ID/hostconfig.json to set RestartPolicy -> Name to "always", and start the Docker service again.
docker commit your container as a new image, stop & rm the current container, and start a new container from that image with the restart policy you want (see the sketch below).
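A rough sketch of the second approach (container and image names are placeholders):
docker commit mycontainer myimage:with-data
docker stop mycontainer && docker rm mycontainer
docker run -d --restart=always --name mycontainer myimage:with-data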
Using the --restart=always policy will handle restarting existing containers after a reboot.
The problem is that you can end up with multiple containers using --restart=always when you run a newer version of the image, as discussed in docker - how do you disable auto-restart on a container?.
Trying to automatically remove the container when it exits with the docker run --rm option is also a problem with the --restart=always policy, since the two options conflict with each other:
$ docker run --rm --restart always <image>
Conflicting options: --restart and --rm
So in this case it is better to choose another option: the --restart unless-stopped policy.
$ docker run --rm --restart unless-stopped <image>
This policy does not conflict with docker run --rm, but as explained in the Docker documentation:
It is similar to --restart=always, except that when the container is stopped
(manually or otherwise), it is not restarted even after Docker daemon
restarts.
So when using the --restart unless-stopped policy, to make sure restarting still works if the container stops by accident when you close the terminal, run this once in another terminal:
$ docker ps
$ docker restart <container>
Wait until the kill process ends in the previous shell, then close that terminal (don't type exit).
Then check in the remaining terminal whether the container is still running:
$ docker ps
If it is still running, you can safely reboot, check again that the application restarts, and see that your Docker host stays clean without multiple unused containers.