Docker with Plesk: Random container spawned

I played around with Docker and followed this tutorial.
Using the Docker functionality in Plesk, I pulled my container from Docker Hub and ran it. When trying to remove it again, it threw an error message which I didn't capture (I didn't expect anything strange at that point). Then, when going back to the container overview, it gave me the screen shown below.
Now, the container in the middle is mine (get-started), but where the hell did angry_kare and peaceful_haibt come from?
Thank you guys for any answers or ideas! :)
(I am currently not able to reproduce this :/)
(Screenshot: random containers spawned)

When you run a container without a name, Docker creates it with a randomly generated name. The two extra containers you're seeing are the previously failed containers.
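For illustration (the image names here are stand-ins, not from the question):

    docker run -d nginx                    # no --name given, so Docker invents one like "peaceful_haibt"
    docker run -d --name get-started img   # an explicit name prevents the random ones
    docker ps -a                           # lists every container, including failed/exited ones
    docker rm angry_kare peaceful_haibt    # removes the strays by their generated names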

Related

no configuration file provided: not found for docker compose up --scale chrome=5

This might look similar to this existing solution, but I have tried all the solutions mentioned there and none of them resolves my issue.
I have created a Docker Compose file, Docker-Compose-V3.yml.
On running docker-compose -f docker-compose-v3.yml up, I am able to successfully spin up the grid network.
When I try to scale my Chrome nodes using docker-compose up --scale chrome=5, I get a no configuration file provided: not found error message.
I have tried the following solutions from the existing answer linked, but to no avail:
Made sure I am in the correct directory, where the Docker Compose file is present
Checked the extension of the yml file and cross-checked the folder option settings
I am unable to understand why Docker is able to identify the compose file when asked to spin up the grid but fails to do so when I try to scale up the services.
I know another possible way to do this is Docker Swarm, which I came across while scrolling through the Docker documentation, but I would like to understand why this isn't working.
Needless to say, I have just started exploring the Docker world and would appreciate any help in being pointed towards existing documentation/answers that would resolve my problem.
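For what it's worth, docker-compose only auto-discovers files named docker-compose.yml or docker-compose.yaml; with a custom filename, the -f flag has to be repeated on every invocation, including when scaling (and note there should be no spaces around the =). A hedged sketch of the likely fix:

    docker-compose -f docker-compose-v3.yml up                    # works: file named explicitly
    docker-compose up --scale chrome=5                            # fails: no docker-compose.yml to auto-discover
    docker-compose -f docker-compose-v3.yml up --scale chrome=5   # likely fix: name the file here too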

Why does Docker randomly throw a 'Permission Denied' error when trying to stop a container?

I am trying to stop a Docker container and get a 'Permission Denied' error.
This happens randomly on occasion, and it is very frustrating to have to restart the Docker service and relaunch all my containers.
Would anyone know what could be causing this? As far as I have seen or know, there have not been any changes made to the containers since they were launched, aside from maybe some changes to the data inside them. If anyone needs more information, I would be happy to provide it.
FYI, everything that I am doing, I am doing as the root user.
ALSO -- I ABSOLUTELY CANNOT STOP THE DOCKER DAEMON OR RESTART IT. THIS MUST BE RESOLVED WHILE KEEPING THE CURRENT CONTAINERS RUNNING.
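A hedged diagnostic path that avoids restarting the daemon; the container name my_container is hypothetical, and the AppArmor check reflects one commonly reported cause of this error, not a confirmed diagnosis:

    sudo dmesg | grep -i apparmor     # look for denials logged around the failed stop
    docker kill my_container          # try SIGKILL through Docker first
    sudo kill -9 "$(docker inspect --format '{{.State.Pid}}' my_container)"   # last resort: signal the container's init process directly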

Docker port forwarding is not working anymore

I have multiple Docker containers that I use on my machine for testing, etc., all through port forwarding.
Strangely, for the last four days I have not been able to connect to any of them. I ran some tests with applications outside of containers, and it appears I can still connect to those.
But for every application inside a container I get a "connection reset by peer" error.
I might have messed with a dangling Docker network interface before this happened, but this is the first time I've had that consequence, and now my work is really impeded.
Does anybody know what could be going on?
It was a problem with the iptables rules. I don't know which one, but after deleting them and reinstalling everything, it started working again.
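The answer doesn't say exactly which rules were removed; a minimal sketch of that kind of reset, assuming a systemd host, relying on the fact that Docker recreates its iptables chains when the daemon restarts:

    sudo iptables -F                 # flush the filter table, including Docker's FORWARD rules
    sudo iptables -t nat -F          # flush the NAT table, where port forwarding lives
    sudo systemctl restart docker    # the daemon rebuilds its DOCKER chains on startup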

Docker - Cannot start or stop container groups through Docker Dashboard

I am new to Docker and have been running the Example Voting App (suggested by the getting-started guide).
I have encountered an issue where I can start and stop each individual container within the desktop dashboard, but no start or stop command is passed when I start or stop the containers as a group. I get no logs and no other helpful information as to why this is happening.
I can start and stop the containers as a group from the command line by simply navigating to the repository folder and calling docker-compose start/stop, but I'd like to be able to do it from the desktop dashboard.
Some other things I have encountered that may relate to the issue:
I encountered issues with the dashboard GUI not displaying properly when containers were deleted.
I had to enable file sharing on my C drive to even be able to 'import' half of the repository files. This was not listed as something you needed to do in the getting-started guide, so I'm assuming it shouldn't actually be necessary.
Environment: Windows 10
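For reference, the command-line workaround mentioned in the question looks like this (the repository path is hypothetical):

    cd path/to/example-voting-app   # the folder containing docker-compose.yml
    docker-compose start            # starts the whole group
    docker-compose stop             # stops the whole group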

Docker stack deploy doesn't pull all images

I am using Docker for many different services and tools. I run docker stack deploy -c docker-compose.yml --with-registry-auth stack_name. On the swarm itself, only one or two of the nodes will have the images pulled, and not the others. I thought that the deploy causes all nodes to pull, so that the images exist everywhere. The error that then occurs is a no such image error, because the image wasn't pulled on that particular node. I have been looking around for help, and I see many pages saying that this should already happen normally. Am I missing something that is causing this? Any help is appreciated.
I finally figured out what the problem was. When a CI job deploys, the registry token it uses only stays valid for as long as the job is running. In the script in my gitlab-ci file, I always pulled the image on the first node, so it always worked there; this meant at least one node had the image. To get it onto the other nodes, I had to add a sleep so that they had enough time to pull the image before the job ended. It was a race condition: the token became useless once the job ended, and the remaining nodes couldn't pull any images.
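A hypothetical sketch of a deploy script with that workaround; the image variable, stack name, and sleep duration are illustrative, not taken from the answer:

    docker pull "$CI_REGISTRY_IMAGE:latest"                                     # warms the node the job runs on
    docker stack deploy -c docker-compose.yml --with-registry-auth stack_name   # other nodes start pulling
    sleep 60                                                                    # keep the job (and its registry token) alive while they finish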
