Hitting docker rate limit without pulling at all - docker

I have a computer running Docker. I now get the error toomanyrequests when I try to pull an image. The twist is, I get this error even when Docker is just running and I am not pulling anything. So waiting never gets me anywhere, unless I change my IP. With a fresh IP I can pull without a problem, but after a few hours I can no longer pull from the IP the Docker machine is using.
To my knowledge, I do not have any other software running that should trigger a pull. Is there anything in Docker itself that contacts Docker Hub and is causing the rate limit to kick in? I have just three simple services running in Docker: a web proxy, a database, and Keycloak. This is on a VM running Ubuntu 22.04.
There are no other machines on my network running Docker. If I start other machines and run Docker there, this problem does not occur. For example, I can start Docker Desktop on another machine, pull lots of images, and leave it running, and I never get toomanyrequests.
Can anyone explain what is causing this, and how I can fix it?
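One way to narrow this down is to watch your IP's remaining anonymous pull quota over time, using Docker's documented ratelimitpreview/test endpoint. A minimal sketch (the endpoints are Docker's own; the helper names are mine):

```shell
# Pull the "token" field out of the auth server's JSON response.
extract_token() {
  sed 's/.*"token" *: *"\([^"]*\)".*/\1/'
}

# Report the anonymous pull limits Docker Hub currently grants this IP.
check_rate_limit() {
  token=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | extract_token)
  # A HEAD request reports the limits without consuming a pull (a GET would).
  curl -s --head -H "Authorization: Bearer $token" \
    "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest" \
    | grep -i '^ratelimit'
}
```

Running check_rate_limit periodically (e.g. from cron) will show whether ratelimit-remaining drains while you are not pulling. If it does, something behind that IP is pulling: a container in a restart loop whose image gets re-fetched, an updater such as Watchtower, or other hosts sharing the same NAT address.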

Related

Sending a request to a docker container running in GCP VM

I have a prerender server running in a Docker container on my GCP VM instance, which runs Debian. I know it is running from the Docker logs on the container's port 3000, but I can't seem to send a request to the VM's external IP. The firewall settings on the VM instance allow both HTTP and HTTPS traffic, but nothing seems to happen. I am using the VM cloud shell to ssh into the VM, so I am positive the container itself is running as it should, but I believe the issue lies somewhere between the VM and the container, as I see no activity on my VM network.
What I've tried so far:
The obvious first try was to simply send a request from a browser to that external IP address: http://'externalIPofVM'/render?fullpage=true&renderType=jpeg&url='requestedURL'. I know from local testing that this works; I just can't figure out how to send it to the Docker container on the VM. Even if that request failed on the prerender server, I'd at least know it was hitting the container, but at the moment it's never being hit.
I believe it may have something to do with connecting the VM to the container, but I don't know. This is my first dive into running a container on a VM, so if there is any information I've left out, please tell me and I'll happily provide as much detail as possible.
[Image: example output of a successful prerender request on a local container.] The image shows the type of response I expect from a successful request to prerender; however, even a failed one would be helpful at this point, as I'd at least know I'm making contact.
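A common cause of exactly this symptom is that the container's port was never published on the VM, so port 3000 only exists on Docker's bridge network. A sketch of the checks I'd run on the VM ("prerender-image" and the 3000:3000 mapping are placeholders; adjust to your setup):

```shell
# 1. Make sure the container's port is actually published on the VM.
#    Without -p, port 3000 is only reachable inside the Docker bridge network.
start_prerender() {
  docker run -d --name prerender -p 3000:3000 prerender-image
}

# 2. Probe from the VM itself first; only once this works do the GCP firewall
#    rules come into play.
probe_local() {
  curl -s "http://localhost:3000/render?fullpage=true&renderType=jpeg&url=https://example.com"
}
```

Note that GCP's "Allow HTTP/HTTPS" checkboxes only open ports 80 and 443; a service listening on 3000 needs its own firewall rule for tcp:3000, or the container published as -p 80:3000.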

Docker on WSL2 and Ubuntu not pushing images

I've installed Docker on WSL2 and Ubuntu and I'm not able to push any image to any registry. The process seems to start but then hangs up indefinitely without any error message.
Everything else works flawlessly: I can build and pull images and run containers, but I cannot push anything. I've tried private and public registries with no success. The daemon logs don't show anything meaningful; they just stop.
Not sure if this is related, but I was previously using Docker Desktop (with WSL2) without any problem. I completely uninstalled Docker Desktop and WSL2 and did a fresh install of WSL2, Ubuntu, and Docker.
Here's the pertinent section of daemon logs:
I'm quite lost here so I'd really appreciate any suggestion on how to effectively debug this issue.
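One cause I've seen for exactly this symptom (pulls work, pushes hang with no error) is an MTU mismatch: push payloads fill packets up to the bridge's MTU, and they get silently dropped on a network path with a smaller MTU, with VPNs and WSL2's virtual NIC being common culprits. This is only a guess for your setup, but it's cheap to test by lowering Docker's MTU in /etc/docker/daemon.json (1350 is a deliberately conservative value) and then restarting the daemon with sudo systemctl restart docker:

```json
{
  "mtu": 1350
}
```

If pushes start working, raise the value until you find the largest MTU your path supports.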

Docker container keeps showing Restarting(255)

I'm facing a notable issue with Docker containers on an EC2 Linux instance. I deployed them 5 months ago and they were running perfectly, but now they have stopped working.
I have deployed three Docker containers (Cockroach DB and Redis, associated with TheThingsStack from TheThingsIndustries) using Docker Compose. I tried restarting the containers using Docker Compose, but it gave me a "no space remaining" error. I suspected, and later confirmed, that the EBS storage of my EC2 instance was full.
So I extended the Linux file system after increasing the EBS volume size, following the official AWS guide: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/recognize-expanded-volume-linux.html
But it's still not restarting and gives me a "no space" error. I last tried restarting a single one of the deployed containers using Docker Compose, and now it's showing Restarting(255).
I'm attaching multiple pictures of it; maybe they will help someone answer.
Nothing was actually broken. Sometimes you just need to restart the EC2 machine; I did that and the error was gone. Everything is now working well.
After I increased the EBS storage, the volume showed the increased size but the Linux file system didn't grow, so the only option I had left was to restart the EC2 machine, and the error was gone after the restart.
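For reference, the reboot likely worked because cloud-init grows the root partition on boot; the same result can usually be had online, without rebooting. A sketch of the AWS-documented procedure, assuming the root volume is /dev/xvda with partition 1 and an ext4 file system (verify first with lsblk and df -T):

```shell
# Grow the partition and file system online, without rebooting.
# Assumes root volume /dev/xvda, partition 1, ext4 -- verify with lsblk/df -T.
grow_root_fs() {
  lsblk                        # the disk should already show the new EBS size
  sudo growpart /dev/xvda 1    # grow partition 1 to fill the disk
  sudo resize2fs /dev/xvda1    # grow the ext4 file system (xfs_growfs -d / for XFS)
  df -h /                      # confirm the extra space is now visible
}
```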

Unable to make Docker container use OpenConnect VPN connection

I have a VM running Ubuntu 16.04, on which I want to deploy an application packaged as a Docker container. The application needs to perform HTTP requests towards a server reachable only through a VPN (e.g. server1.vpn-remote.com).
I successfully configured the host VM in order to connect to the VPN through openconnect, I can turn this connection on/off using a systemd service.
Unfortunately, when I run docker run mycontainer, neither the host nor the container is able to reach server1.vpn-remote.com. Weirdly enough, there is no error in the VPN connection service logs, which are stuck at the openconnect messages confirming a successful connection.
If I restart the VPN connection after starting mycontainer, the host machine is able to access server1.vpn-remote.com, but not the container. Moreover, if I issue any command like docker run/start/stop/restart on mycontainer or any other container, the connection gets broken again even for the host machine.
NOTE: I already checked on the ip routes and there seems to be no conflict between Docker and VPN subnets.
NOTE: running the container with --net="host" results in both host and container being able to access the VPN but I would like to avoid this option as I will eventually make a docker compose deployment which requires all containers to run in bridge mode.
Thanks in advance for your help
EDIT: I figured out it is a DNS issue, as I'm able to ping the IP corresponding to server1.vpn-remote.com even after the VPN connection seemed to be failing. I'm going through documentation regarding DNS management with Docker and Docker Compose and their usage of the host's /etc/resolv.conf file.
I hope you don't still need help six months later! Some of the details are different, but this sounds a bit like a problem I had. In my case the solution was a bit disappointing: after you've connected to your VPN, restart the docker daemon:
sudo systemctl restart docker
I'm making some inferences here, but it seems that, when the daemon starts, it makes some decisions/configs based on the state of the network at that time. In my case, the daemon starts when I boot up. Unsurprisingly, when I boot up, I haven't had a chance to connect to the VPN yet. As a result, my container traffic, including DNS lookups, goes through my network directly.
Hat tip to this answer for guiding me down the correct path.
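Building on the DNS finding in the question's edit: besides restarting the daemon, you can pin the resolvers Docker hands to bridge-mode containers, so they use the VPN's DNS server instead of a snapshot of the host's pre-VPN /etc/resolv.conf. A sketch of /etc/docker/daemon.json, where 10.0.0.53 is a placeholder; take the real address from /etc/resolv.conf while openconnect is connected, then restart the daemon:

```json
{
  "dns": ["10.0.0.53", "8.8.8.8"]
}
```

The second entry is a public fallback so containers can still resolve non-VPN names when the tunnel is down.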

"java.net.NoRouteToHostException: No route to host" between two Docker Containers

Note: Question is related to Bluemix docker support.
I am trying to connect two different Docker Containers deployed in Bluemix. I am getting the exception:
java.net.NoRouteToHostException: No route to host
when I try such a connection (a Java EE app running on Liberty trying to access MySQL). I tried using both the private and public IPs of the MySQL Docker container.
The point is that I am able to access MySQL Docker Container from outside Bluemix. So the IP, port, and MySQL itself are ok.
It seems to be something related to the internal networking of the Docker container support within Bluemix: if I try to access it from inside Bluemix it fails; from outside it works. Any help?
UPDATE: I continued investigating, as you can see in the comments, and it seems to be a timing issue. Once the containers are up and running, there is apparently still some connectivity work left undone. If I wait around one minute before trying the connection, it works.
Around 60 seconds seems to be a good rule of thumb for the networking to start working after container creation.
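Rather than hard-coding a 60-second sleep, a small poll loop lets the app connect as soon as the container network is actually up. A sketch (the host and port in the example are placeholders; requires nc):

```shell
# Poll a TCP port until it accepts connections, instead of sleeping a fixed
# 60 seconds. Arguments: host, port, and optionally the number of attempts.
wait_for_port() {
  host=$1; port=$2; tries=${3:-30}
  i=0
  while [ "$i" -lt "$tries" ]; do
    if nc -z "$host" "$port" 2>/dev/null; then
      return 0          # port is accepting connections
    fi
    i=$((i + 1))
    sleep 2
  done
  return 1              # gave up after tries * 2 seconds
}

# Example: wait_for_port "$MYSQL_HOST" 3306 && echo "MySQL reachable"
```

This caps the wait at a timeout instead of a fixed delay, so a container that comes up in 10 seconds doesn't cost you a full minute.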
