Docker fails pushing local image to repository

I am just learning Docker (I use Windows 7 and installed the Docker tools), and when I tried to use the push command to push a local image to a repository, it kept pushing for a long time without any prompt or error message, so I had to press Ctrl+C to stop it. I tried many times but got the same result.
The screenshot is as follows:
I am not sure what is wrong. Maybe it's because I am in China and the firewall is blocking it?

I'm glad you pointed out that you're in China! Yes, this is very likely a Great Firewall issue.
As you can see, docker push connects to docker.io, which resolves to the IP address 34.234.103.99.
A WHOIS lookup shows that this IP address belongs to Amazon Web Services (AWS), which the Great Firewall blocks. After a cursory search, it looks like you're not the first to hit this, either.
I'd recommend setting up a VPN or proxy to bypass this.

You can also try the Docker registry mirror hosted in China; see
https://docs.docker.com/registry/recipes/mirror/#use-case-the-china-registry-mirror
https://www.docker-cn.com/registry-mirror (in Chinese)
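If you go the mirror route, the mirror is configured through the Docker daemon's registry-mirrors setting, typically in daemon.json. A minimal sketch, assuming the docker-cn endpoint from the link above:

```json
{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}
```

One caveat: a registry mirror is a pull-through cache, so it speeds up docker pull, but docker push still goes to the upstream registry (docker.io). For the push problem itself, a VPN or proxy remains the more direct fix.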

Related

Why can't I access my container from my internal or external IP within GAE?

I created a very simple Docker practice script (GitHub link) and ran it via the Docker application on my macOS computer without any problems. I wanted to test it on Google Cloud's Compute Engine, so I created an instance and rebuilt the Docker image and container via the SSH browser (using Debian GNU/Linux).
Everything seems to work fine, except when I try to access the container via localhost or the external IP. Both give me this response: "Site can't be reached".
I've adjusted the firewall settings many times and ended up with the same results as in the screenshot provided. I ended up resetting the firewall settings to their defaults just so I could bring this question here. Here are the default settings.
What makes me think I'm missing something is that I can use curl http://localhost:5000 (the port I've chosen to expose) and get this as a response, which is all I had set the page to say once it launched.
What am I missing that's preventing me from viewing the container via localhost or the external IP?
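(Not part of the original thread, but the two usual culprits for this exact symptom, where curl on localhost works while the external IP doesn't, are the port not being published on all host interfaces and the GCP firewall not allowing it. A hedged sketch of both checks; the image name and firewall rule name are made up:)

```shell
# Publish the container port on all host interfaces, not just loopback
# (image name "practice-app" is hypothetical)
docker run -d -p 0.0.0.0:5000:5000 practice-app

# Allow inbound TCP 5000 through the GCP firewall (rule name is made up)
gcloud compute firewall-rules create allow-app-5000 \
    --allow tcp:5000 \
    --direction INGRESS \
    --source-ranges 0.0.0.0/0
```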

Docker (Compose? Swarm?): how to run a health check before exposing a container

I have a web app (.NET Core) running in a Docker container. If I update it under load, it won't be able to handle requests until there is a gap in traffic. This might be a bug in my app or in .NET; I am looking for a workaround for now. If I hit the app with a single HTTP request before exposing it to the traffic, though, it works as expected.
I would like to get this behaviour:
In the running server get the latest release of the container.
Launch the container detached from network.
Run a health check on it, if health check fails - stop.
Remove old container.
Attach new container and start processing traffic.
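The steps above can be scripted against the plain Docker CLI. The reusable piece is a polling loop that retries a health check until it passes; the image name, ports, and /health endpoint in the commented sketch are hypothetical:

```shell
#!/bin/sh
# Poll a health-check command until it succeeds, or give up.
#   $1 = max attempts, $2 = seconds between attempts, rest = command to run
wait_for_healthy() {
  attempts=$1; delay=$2; shift 2
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@" >/dev/null 2>&1; then
      return 0            # check passed: container is healthy
    fi
    i=$((i + 1))
    sleep "$delay"
  done
  return 1                # never became healthy
}

# Sketch of the swap itself (image, ports, and endpoint are made up):
# docker pull myapp:latest
# docker run -d --name myapp-new -p 8081:80 myapp:latest
# if wait_for_healthy 30 2 curl -fsS http://localhost:8081/health; then
#   docker stop myapp-old && docker rm myapp-old
#   # repoint the proxy (e.g. an nginx upstream) from 8080 to 8081
# else
#   docker stop myapp-new && docker rm myapp-new   # roll back
# fi
```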
I am using Compose at the moment and have somewhat limited knowledge of Docker infrastructure. The problem should be something well understood, yet I've failed to find anything on the topic.
It kind of sounds like Kubernetes at this stage, but I would like to keep it as simple as possible.
What I was looking for is Blue/Green deployment, and it is quite easy to search for.
E.g.
https://github.com/Sinkler/docker-nginx-blue-green
https://coderbook.com/#marcus/how-to-do-zero-downtime-deployments-of-docker-containers/
Swarm has a feature which could be useful as well: https://docs.docker.com/engine/reference/commandline/service_update/
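For the Swarm route, if the image defines a HEALTHCHECK, docker service update only proceeds with the rollout once the replacement task reports healthy, which is roughly the behaviour described above. A sketch with made-up service and image names:

```shell
# A single-node swarm and a service with two replicas (names are hypothetical)
docker swarm init
docker service create --name myapp --replicas 2 -p 8080:80 myapp:1.0

# Roll to a new image: start each new task before stopping the old one,
# one at a time, pausing between tasks
docker service update \
  --image myapp:1.1 \
  --update-order start-first \
  --update-parallelism 1 \
  --update-delay 10s \
  myapp
```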

Access to internal infrastructure from Kubernetes

If I run Docker (Docker for Desktop, 2.0.0.3 on Windows 10), then access to internal infrastructure and containers is fine. I can easily do
docker pull internal.registry:5005/container:latest
But once I enable Kubernetes there, I completely lose access to the internal infrastructure: [Errno 113] Host is unreachable appears in Kubernetes itself, or connect: no route to host from Docker.
I have tried several things, including switching NAT from DockerNAT to Default Switch. That doesn't take effect without a restart, and a restart changes it back to DockerNAT, so no luck there. This option also seems not to work.
Let's start from the basics with the official documentation:
Please make sure you meet all the prerequisites and that all the other instructions were followed.
You can also use this guide. It has more detail and points to what might have gone wrong in your case.
If the above won't help, there are few other things to consider:
In case you are using a virtual machine, make sure that the IP you are referring to is that of the Docker engine's host, not the one on which the client is running.
Try to add tmpnginx in docker-compose.
Try to delete the pki directory in C:\programdata\DockerDesktop (first stop Docker, delete the directory, and then start Docker). The directory will be recreated and the k8s-app=kube-dns labels should work fine.
Please let me know if that helped.

Windows Docker NAT seems completely broken

I have a docker container that has the NAT mapping 0.0.0.0:9055->80/tcp. From what I can tell, this should mean I can go to http://localhost:9055/ on my host machine and be redirected to port 80 in the running container. However, when I try this it times out.
If I connect to the instance and run docker exec -i 52806ceaf166 "ipconfig" to see what the container's private IP is, I get 172.28.27.31. When I try going to http://172.28.27.31/ on the host machine, it works!
I'd like to get the NAT mapping working since that's what all the tools assume works (such as Visual Studio, Kitematic, etc) and plus I don't want to have to worry about which containers use which IPs. Is there a way to fix this? Thanks!
PS: I'm new to Docker (just installed it today) so if any more info is needed (settings, versions, etc) just let me know how to get them and I'll add them to the post.
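As a stopgap while the NAT mapping is broken, the container IP can also be read with docker inspect instead of running ipconfig inside the container (container ID taken from the question):

```shell
# Print the container's IP address on its default network
docker inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}" 52806ceaf166
```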
Looking at the Docker image I'm using, I think this is what I'm running into:
This is a known issue that'll be addressed in the near future. The workaround is fairly easy, though.
Update: This was fixed in a recent Windows patch available through Windows Update.

Not able to access to Docker container through bound public ip

I am trying to use Docker containers on Bluemix, but it looks like I am having trouble. I tried again this morning, but it seems it still does not work.
I have followed these steps:
I released all public IPs by issuing the cf ic ip release command.
I created a new container from the etherpad image (following the tutorial), requesting and binding a new public IP from the Bluemix GUI.
Bluemix assigned 134.168.1.49 IP and bound it to the container.
I expect the application to respond at http://134.168.1.49:9080/, but it hangs and then responds with a connection timeout.
Running a container from the same image locally works perfectly.
Any idea, suggestion?
There is a known issue with the IBM Containers service where there's a delay with the inbound network access being available after containers start. It can take up to five minutes for this to be available.
Are you able to successfully ping the bound IP address?
Note: The IBM Containers service suffered a major incident yesterday which affected operations. If you were trying to use it during this time, it may be related to that.
We recently experienced some connectivity issues in our US-South datacenter. I would suggest redeploying your container with an IP address again today and determine if you have further success.
I have worked with Bluemix support, who were able to create a new image, start it up, and access it successfully with my exact configuration. At this point, it appears there is something wrong with the networking in the tenant space where my containers are running. The Bluemix team is investigating.
Thank you all for the support.
