Have to restart Docker after every ddev stop - docker

I'm just starting out playing with DDEV and hit an annoying roadblock.
It doesn't matter whether I'm adding a new project or running ddev start on an existing one: the project only starts and becomes accessible the first time after a Docker restart. If I restart Docker, run ddev start on a site, leave it running for a short time and then ddev stop, I have to restart Docker before ddev start will work again.
In the terminal, after starting and then stopping a site, I run ddev start again and can see ddev run through the steps of creating/re-creating the containers, but it stops at a different point on each attempt. If I press Ctrl+C and run ddev start again, one more container gets recreated.
I can only do two starts, and the process always ends up stopping at a Container ddev-projectname-web Recreated line.
The only difference I can see after restarting Docker and then running ddev start on the same project is that
Network ddev_default created
shows on the first line, followed by
Container ddev-ssh-agent Started
I then get:
ssh-agent container is running: If you want to add authentication to the ssh-agent container, run 'ddev auth ssh' to enable your keys.
The containers then start (dba, db, web). I get a couple of warnings with the same text:
Project type has no settings paths configured, so not creating settings file.
I can't find much that relates to this message and what I'm experiencing.
The site is then accessible. Running ddev stop and then ddev start afterwards gets stuck at the web container recreating.
There are no messages in Docker because the containers don't start. I've done a factory reset of Docker, which made no difference, and I've cleared all images as per the support docs.
Any help out there? I moved to Craft Nitro so I wouldn't have to fight MAMP every day, but Nitro has since been abandoned and DDEV is now being pushed.
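In terms of commands, the cycle described above looks roughly like this (the project name is a placeholder):
$ ddev start   # works right after restarting Docker; the site is accessible
$ ddev stop
$ ddev start   # hangs at "Container ddev-projectname-web Recreated"
$ ddev start   # after Ctrl+C: one more container gets recreated, then it hangs again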

Related

Docker containers restart order

I have two containers running on Ubuntu Server 22.04 LTS.
One of them is Selenium Grid, and the other is a Python container that connects to the Selenium container mentioned above.
How can I get these two containers correctly restarted after a system poweroff or reboot?
I tried this:
docker update --restart [on-failure] [always] [unless-stopped] container_grid
docker update --restart [on-failure] [always] [unless-stopped] container_python
The Selenium Grid container restarts correctly, but Python container keeps restarting in a loop.
I assume it cannot, for some reason, establish a connection to the other container, so it exits with code 1 and keeps restarting.
How can I avoid this? Maybe there is a solution that adds a delay or sets the order in which the containers restart after the system comes up? Or should I simply add some delay in the Python code because there is no simple solution to this?
I am not a software developer but an automation engineer, so could somebody help me with a solution? Maybe it would be Docker Compose or something else.
Thanks in advance.
So, I solved this problem via crontab.
The Selenium container starts in accordance with the --restart on-failure option.
My Python container starts with a delay, in accordance with this crontab entry:
@reboot sleep 20 && docker start [python_container]
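For what it's worth, the Compose route mentioned in the question could look roughly like the sketch below, assuming the whole stack is brought up with docker compose up at boot (for example via the same @reboot crontab trick or a systemd unit). Service and image names, the healthcheck endpoint, and curl being available in the image are assumptions; depends_on with service_healthy only delays the Python container until the grid reports healthy, so the Python code may still want its own retry logic.
# docker-compose.yml (sketch; names and healthcheck are assumptions)
services:
  selenium:
    image: selenium/standalone-chrome
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:4444/wd/hub/status"]
      interval: 10s
      timeout: 5s
      retries: 10
  python_client:
    image: my-python-client        # placeholder for the Python image
    restart: unless-stopped
    depends_on:
      selenium:
        condition: service_healthy   # start only once the grid reports healthy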

Docker port localhost:80 is always listened on

I am just studying Docker and found that we don't seem to need to run the docker-tutorial image at all: port 80 is always being listened on, just like in the picture below:
At first, I thought it was automatically managed by Docker Desktop. But it is not, because even after I close Docker Desktop completely, it is still there.
I even ran a command to check which process is on port 80, and no process is there:
Even when no process is on this port, the page still loads. It drives me crazy. I did follow the Docker getting-started tutorial to run this tutorial web application, and at that time I could also open localhost:80.
After that, I stopped and removed the container and even the image, as well as closing the Docker app; the page, however, is still there.
Has anyone encountered this situation or have any idea? How does Docker do this?
After a day, I started my Mac again without running Docker, and it is still there, in a messy way:
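For reference, the exact port check used isn't shown above, but on macOS it would typically be something like:
$ sudo lsof -nP -iTCP:80 -sTCP:LISTEN   # list processes listening on TCP port 80
$ netstat -an | grep '\.80 '            # rough equivalent with netstat; look for LISTEN entries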
By the looks of the page, it is running off the browser cache. Clear the cache or open an incognito window to use the newly created services on port 80.
Try stopping the container. E.g.
List running containers.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4b223e7cc8c5 docker/getting-started "/docker-entrypoint.…" About a minute ago Up About a minute 0.0.0.0:80->80/tcp wonderful_goldstine
Stop the docker/getting-started container with its container ID.
$ docker stop 4b223e7cc8c5
4b223e7cc8c5
At this point, the container will have stopped and port 80 will be free. It will still be on your machine if you ever want to restart it, but you can remove it with:
$ docker rm 4b223e7cc8c5
4b223e7cc8c5
I had the same issue, but in my case it was just the browser cache. After stopping or deleting the Docker container, it will probably work.
To stop it:
$ docker stop [CONTAINER ID]
To delete it:
$ docker rm [CONTAINER ID]

DDEV update -> Migrating bind-mounted database in ~/.ddev to docker-volume mounted database

I have updated DDEV and Docker and now I get the following message:
"Migrating bind-mounted database in ~/.ddev to docker-volume mounted database"
"Failed to remove ddev project crazy-twins.de.development: Failed to start project xx to snapshot database: Failed to migrate db from bind-mounted db: failed to run migrate_file_to_volume.sh, err=container run failed with exit code 2 output="
How can I fix this?
How can I remove the database if necessary?
No container starts anymore.
Thank you for your help.
In my case, I was following the steps in the documentation to upgrade from version 1.0.0 to 1.2.0: remove the custom .yml, run ddev config, and then, here is where I made my mistake, the next step was ddev start but I ran ddev restart. After maybe 15 seconds I realized it and stopped the process with Ctrl+C, and from that moment on the update process was broken.
I could never get the process to start again.
What I realized in my case was that the DDEV update process creates a container to migrate the databases, named as follows:
{nameYourProject}_migrate_volume
I could see it by running docker ps -a.
Apparently this volume got corrupted when I stopped the update process.
The solution (in my case):
I removed the migration container:
docker rm 3435   # use the container ID of your migration container
Then I ran ddev start again and the update went through without a problem.
I did not use the docker container prune command, because it removes all of your stopped containers, not just the migration one.
I hope this helps someone.
Greetings.
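As a minimal sketch of that cleanup, assuming the leftover container's name contains "migrate" (IDs and names will differ):
$ docker ps -a --filter "name=migrate"   # find the leftover migration container
$ docker rm <container-id>               # remove just that one container
$ ddev start                             # retry; the migration should run again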
I ran into the same problem today. Cleaning up stopped docker containers with
docker container prune
before running ddev did the job for me. Hope this helps!

Difference between docker-compose run, start, up

I'm new to Docker.
What is the difference between these?
docker run 'an image'
docker-compose run 'something'
docker-compose start 'docker-compose.yml'
docker-compose up 'docker-compose.yml'
Thanks in advance.
https://docs.docker.com/compose/faq/#whats-the-difference-between-up-run-and-start
What’s the difference between up, run, and start?
Typically, you want docker-compose up. Use up to start or restart all the services defined in a docker-compose.yml. In the default “attached” mode, you see all the logs from all the containers. In “detached” mode (-d), Compose exits after starting the containers, but the containers continue to run in the background.
The docker-compose run command is for running “one-off” or “adhoc” tasks. It requires the service name you want to run and only starts containers for services that the running service depends on. Use run to run tests or perform an administrative task such as removing or adding data to a data volume container. The run command acts like docker run -ti in that it opens an interactive terminal to the container and returns an exit status matching the exit status of the process in the container.
The docker-compose start command is useful only to restart containers that were previously created, but were stopped. It never creates new containers.
Also: https://docs.docker.com/compose/reference/
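As a concrete illustration (the service name, image, and port below are arbitrary), with a minimal docker-compose.yml like this the commands behave as described above:
# docker-compose.yml (minimal example)
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"

$ docker-compose up -d        # creates and starts the "web" service in the background
$ docker-compose run web sh   # one-off interactive container for the "web" service
$ docker-compose stop         # stops the containers but keeps them
$ docker-compose start        # restarts the previously created containers; never creates new ones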

docker-compose replays past output on container reboot

It appears that docker-compose replays captured output on container re-launch. This is against expectation, and is misleading about what my container is actually doing. Can this be disabled?
For instance,
I have a simple service that logs and exits w/ code 0.
In docker-compose.yml, I have restart: always set.
When running docker-compose up, each time the logging container comes back up after exiting, I see all of the previous output logged again, plus any new output from the current run.
Here's an easy-to-run example:
clone the repo, cd <project>/fluentd, docker-compose build, and docker-compose up
I'm using docker-compose version 1.16.1, build 6d1ac21 on OSX.
Tips would be great!
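A minimal compose file matching the scenario described (service name and image are placeholders) would be something like:
# docker-compose.yml (sketch of the scenario; names are placeholders)
services:
  logger:
    image: busybox
    command: sh -c 'echo "run at $$(date)"; exit 0'   # logs one line and exits with code 0
    restart: always
Bringing this up attached with docker-compose up should show the behaviour described: each restart prints the earlier output again along with the new line.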
This appears to be an open issue with Docker, where it's replaying logs on up. A workaround is mentioned here:
alias docker-logs-truncate="docker-machine ssh default -- 'sudo find /var/lib/docker/containers/ -iname \"*json.log\"|xargs -I{} sudo dd if=/dev/null of={}'"
Is this a lifecycle problem? There is a difference between a stop and an rm. If you do docker-compose stop, the containers are stopped but kept; docker-compose up will restart them from where they left off.
But docker-compose rm will destroy the containers; running docker-compose up again recreates them from scratch.
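Spelled out as commands, the two lifecycles look roughly like this:
$ docker-compose stop   # containers are stopped but kept, along with their logs
$ docker-compose up     # restarts the same containers; their earlier log output is still attached

$ docker-compose stop
$ docker-compose rm     # destroys the stopped containers (and their logs)
$ docker-compose up     # recreates everything from scratch, with fresh logs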
OK. Did you try removing restart: always for your container?
