I'm trying to launch a GitLab or Gitea Docker container on my QNAP NAS (Container Station) and, for some reason, when I restart the container it won't start back up because files appear to be lost.
For example, GitLab gives me errors saying runsvdir-start and gitlab-ctl don't exist. For Gitea it's the s6-supervise file.
Now I am launching the container like this, just to keep it simple:
docker run -d --privileged --restart always gitea/gitea:latest
A simple docker stop .... and docker start .... breaks it. How do I troubleshoot something like this?
QNAP has sent this issue to R&D and they were able to replicate it. It's a bug and will probably be fixed in a new Container Station update.
It's now fixed in QTS 4.3.6.20190906 and later.
It's normal to lose your data if you launch it with just:
docker run -d --privileged --restart always gitea/gitea:latest
You should use a volume to share a folder between your host and the container, for example:
docker run -d --privileged -v "$(pwd)/gitea:/data" -p 3000:3000 -p 222:22 --restart always gitea/gitea:latest
Or use docker-compose.yml (see the official docs).
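For reference, a minimal docker-compose.yml sketch equivalent to the command above (the service name and host path are just examples; relative paths like ./gitea are allowed in compose files):

version: "3"
services:
  gitea:
    image: gitea/gitea:latest
    restart: always
    ports:
      - "3000:3000"
      - "222:22"
    volumes:
      - ./gitea:/data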
To start, I built a Docker container from the MariaDB Docker image.
After that I loaded a database dump file into the running container.
[screenshot: MariaDB status]
Everything goes fine.
When I want to run/link the Drupal image:
docker run --name drupaldocker --link mariadbdocker:mariadb -p 8089:80 -d drupal
I can reach the drupal installation page, but when I want to load the database I always have the same errors:
- host, pass or dbname is wrong.
But I'm pretty sure my credentials are right.
It seems that my Drupal container can't find the MariaDB container.
Docker links are a deprecated feature and should be avoided: https://docs.docker.com/engine/userguide/networking/default_network/dockerlinks/
I assume you have a container named mariadbdocker running.
If you gain bash access inside the drupaldocker container, you should be able to ping the mariadb alias, like this:
docker run --name drupaldocker --link mariadbdocker:mariadb -p 8089:80 -it drupal /bin/bash
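Once inside, try the alias (assuming ping is installed in the drupal image; if it isn't, getent hosts mariadb does the same lookup):

ping -c 3 mariadb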
If the ping succeeds, then you probably still have a credentials issue.
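Since links are deprecated, here is a sketch of the modern replacement using a user-defined bridge network, reusing the container names from the question:

docker network create drupalnet
docker network connect --alias mariadb drupalnet mariadbdocker
docker run --name drupaldocker --network drupalnet -p 8089:80 -d drupal

Containers on the same user-defined network resolve each other by name or alias, so the Drupal installer can keep using mariadb as the database host.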
I'm on Docker for Mac 1.13.0
When I run docker run -d cassandra:3.9, each time it kills the previous container. I cannot start more than one instance. It works as expected with nginx:latest, though, i.e. it starts multiple instances.
Why is it happening?
Give it a different name:
docker run -d --name cassandra1 cassandra:3.9
and then
docker run -d --name cassandra2 cassandra:3.9
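Both should then be listed (a quick check):

docker ps --filter name=cassandra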
I suspect this might be a Docker 1.13 problem. I don't see this issue with Docker 1.12.6 on Mac.
When I run a new Docker container (or when I start a stopped one), the system logs me out. When I log in again the container is up and running and I can use it without any problems.
I am currently using Fedora 24.
Example:
alekos@alekos-pc:~$ docker run -d --name somename -t -p 80:80 someimage
At this point it logs me out
When I log in again I run:
alekos@alekos-pc:~$ docker ps -a
and my "somename" container is running without any further problems.
I can live with these logouts/logins but it is a bit annoying. Does anybody have any idea what is causing the problem?
I am learning "Docker for Mac"
$ docker run -d -p 80:80 --name webserver nginx
docker: Error response from daemon: Conflict. The name "/webserver" is already in use by container 728da4a0a2852869c2fbfec3e3df3e575e8b4cd06cc751498d751fbaa75e8f1b. You have to remove (or rename) that container to be able to reuse that name..
But when I run
$ docker ps
It shows no containers listed.
But the previous error message tells me that there is this container 728da....
I removed that container:
$ docker rm 728da4a0a2852869c2fbfec3e3df3e575e8b4cd06cc751498d751fbaa75e8f1b
Now I run this statement again
$ docker run -d -p 80:80 --name webserver nginx
It is working fine this time.
And then when I run $ docker ps, I can see the new container listed:
$ docker ps
CONTAINER ID   IMAGE   COMMAND                  CREATED          STATUS          PORTS                         NAMES
3ecc0412fd31   nginx   "nginx -g 'daemon off"   19 seconds ago   Up 17 seconds   0.0.0.0:80->80/tcp, 443/tcp   webserver
Note:
I am using "Docker for Mac".
But I had "Docker Box" installed on the Mac before. I don't know if that is the invisible "webserver" container comes from.
As activatedgeek says in the comments, the container must have been stopped. docker ps -a shows stopped containers. Stopped containers still hold the name, along with the contents of their RW layer that shows any changes made to the RO image being used. You can reference containers by name or container id which can make typing and scripting easier. docker start webserver would have restarted the old container. docker rm webserver would remove a stopped container with that name.
You can also abbreviate the container id's to the shortest unique name to save typing or a long copy/paste. So in your example, docker rm 728d would also have removed the container.
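Concretely, with the name from the question:

$ docker ps -a             # also lists stopped containers
$ docker start webserver   # brings the old container back up
$ docker rm webserver      # or remove it to free the name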
The Docker Getting Started document asks learners to try two commands first.
docker run hello-world
and
docker run -d -p 80:80 --name webserver nginx
I was wondering why I can run
docker run hello-world
many times but if I run
docker run -d -p 80:80 --name webserver nginx
the second time, I got the name conflict error. Many beginners would be wondering about this too.
With your help and some more searching, I now understand: with
docker run hello-world
we did not use --name, so Docker generates a random name each time and there is no name conflict.
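You can see the auto-generated names, e.g. after running hello-world twice, with:

$ docker ps -a --filter ancestor=hello-world --format '{{.Names}}'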
Thanks!
I see that Docker has added something called restart policies to handle restarting containers, for instance after a reboot.
While this is very useful, I see that the restart policy option only works with docker run and not docker start. So my question is:
Is there any way to add restarting policies to a container that was already created in the past?
In recent versions of docker (as of 1.11) you have an update command:
docker update --restart=always <container>
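To verify the policy was applied (same <container> as above):

docker inspect -f '{{.HostConfig.RestartPolicy.Name}}' <container>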
There are two approaches to modify the RestartPolicy:
Find out the container ID, stop the whole Docker service, modify /var/lib/docker/containers/CONTAINER_ID/hostconfig.json, set RestartPolicy -> Name to "always" (see the fragment after this list), and start the Docker service again.
docker commit your container as a new image, stop & rm the current container, and start a new container from that image.
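For the first approach, the relevant fragment of hostconfig.json looks like this; changing Name is enough (MaximumRetryCount only matters for the on-failure policy):

"RestartPolicy": {"Name": "always", "MaximumRetryCount": 0}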
Using the --restart=always policy will handle restarting existing containers in case of a reboot.
The problem is that you can end up with multiple containers running --restart=always when you start a newer version of the image, as discussed in docker - how do you disable auto-restart on a container?.
Also note that automatically removing the container when it exits (docker run --rm) does not work together with a restart policy; the two options conflict:
$ docker run --rm --restart always <image>
Conflicting options: --restart and --rm
If you want containers to stay down once you stop them manually, a better choice is the --restart unless-stopped policy (note that it conflicts with --rm as well, so drop that flag):
$ docker run --restart unless-stopped <image>
As explained in the Docker documentation:
It is similar to --restart=always, except that when the container is stopped (manually or otherwise), it is not restarted even after the Docker daemon restarts.
So when using the --restart unless-stopped policy, if the container was stopped by accident (for example when you close the terminal it was started from), restart it once from another terminal so the policy applies again:
$ docker ps
$ docker restart <container>
Wait until the stop completes in the previous shell, then simply close that terminal (don't type exit).
Then check again in the remaining terminal whether the container is still running:
$ docker ps
If it is still running, you can safely reboot, check that the application restarts, and confirm that your Docker host is clean, without multiple unused containers.