I am trying to use Docker containers on Bluemix, but it looks like I am having trouble. I tried again this morning, but it still does not work.
I have followed these steps:
I released all public IPs by issuing the cf ic ip release command.
I created a new container from the etherpad image (following the tutorial), requesting and binding a new public IP from the Bluemix GUI (the equivalent CLI workflow is sketched after these steps).
Bluemix assigned the IP 134.168.1.49 and bound it to the container.
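For reference, the same IP workflow can be driven from the CLI with the IBM Containers plugin. A minimal sketch, where etherpad1 is a placeholder container name:
# show the public IPs currently allocated to the space
cf ic ip list
# request a fresh public IP and bind it to the container
cf ic ip request
cf ic ip bind 134.168.1.49 etherpad1
# release an IP that is no longer needed
cf ic ip release 134.168.1.49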
I expect the application to respond at http://134.168.1.49:9080/, but the request hangs and eventually fails with a connection timeout.
Running a container from the same image locally works perfectly.
Any ideas or suggestions?
There is a known issue with the IBM Containers service where there is a delay before inbound network access becomes available after a container starts. It can take up to five minutes.
Are you able to successfully ping the bound IP address?
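To narrow this down from your side, a couple of quick probes against the bound IP (values taken from the question) can distinguish an unreachable host from a port that simply isn't answering yet:
# ICMP reachability of the bound IP
ping -c 3 134.168.1.49
# application-level check with a 10-second timeout
curl -m 10 http://134.168.1.49:9080/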
Note: The IBM Containers service suffered a major incident yesterday which affected operations. If you were trying to use it during this time, it may be related to that.
We recently experienced some connectivity issues in our US-South datacenter. I would suggest redeploying your container with an IP address again today and determining whether you have more success.
I worked with Bluemix support, who were able to create a new image, start it up, and access it successfully with my exact configuration. At this point it appears something is wrong with the networking of the tenant space where my containers are running; the Bluemix team is investigating.
Thank you all for the support.
If I run Docker (Docker Desktop 2.0.0.3 on Windows 10), access to internal infrastructure and containers is fine. I can easily do
docker pull internal.registry:5005/container:latest
But once I enable Kubernetes there, I completely lose access to the internal infrastructure: [Errno 113] Host is unreachable appears in Kubernetes itself, and connect: no route to host appears from Docker.
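A minimal way to reproduce the difference, assuming a busybox image is available and kubectl points at the Docker Desktop cluster:
# works: ping the internal host from a plain Docker container
docker run --rm busybox ping -c 3 internal.registry
# fails with "Host is unreachable": the same ping from a Kubernetes pod
kubectl run nettest --rm -it --image=busybox --restart=Never -- ping -c 3 internal.registry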
I have tried several things, including switching the NAT from DockerNAT to Default Switch. That does not take effect without a restart, and a restart changes it back to DockerNAT, so no luck there. This option also seems not to work.
Let's start with the basics from the official documentation:
Please make sure you meet all the prerequisites and have followed all the other instructions.
You can also use this guide; it has more detail on what might have gone wrong in your case.
If the above does not help, there are a few other things to consider:
In case you are using a virtual machine, make sure that the IP you are referring to is that of the Docker engine's host, not the one the client is running on.
Try adding tmpnginx to your docker-compose file.
Try deleting the pki directory in C:\ProgramData\DockerDesktop (first stop Docker, delete the directory, and then start Docker again), as sketched below. The directory will be recreated and the k8s-app=kube-dns labels should work fine.
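A sketch of that last step from an elevated command prompt; the service name com.docker.service is my assumption for a standard Docker Desktop install:
:: stop Docker Desktop before touching its state
net stop com.docker.service
:: remove the pki directory; it is recreated on the next start
rmdir /s /q "C:\ProgramData\DockerDesktop\pki"
net start com.docker.service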
Please let me know if that helped.
I tried following the tutorial here
https://docs.docker.com/get-started/part3/.
The first issue I ran into was when I called docker swarm init: it asked me to specify docker swarm init --advertise-addr with one of two possible IPv6 addresses.
I tried initializing the swarm on both and then starting the service. The service starts successfully, but I can't get any response when accessing localhost:4000. It just loads forever.
I have tried rebuilding the image, creating the swarm on both IPs, and checking the logs (there was nothing there), but I have run out of ideas. If it helps, the computer dual-boots two operating systems, which might affect the networking in ways I am unable to figure out.
How can I get a response to my request?
The issue I was facing was with the interaction between Google Chrome and Docker swarm mode, documented in more detail here:
https://forums.docker.com/t/google-chrome-and-localhost-in-swarm-mode/32229/9.
There is no apparent solution.
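One way to confirm it is the browser rather than the swarm routing is to request the published port with curl instead of Chrome:
# if this returns the page while Chrome hangs, the swarm itself is fine
curl -m 10 http://127.0.0.1:4000/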
I am just learning Docker (I use Windows 7 with the Docker tools installed). When I tried to use the push command to push a local image to a repository, it kept pushing for a long time without any prompts or error messages, so I had to use Ctrl+C to stop it. I tried many times but got the same result.
[Screenshot: the docker push output hanging with no progress or errors]
I am not sure what is wrong with it. Maybe it's because I am in China and the firewall is interfering?
I'm glad you pointed out that you're in China! Yes, this is very likely due to a Great Firewall issue.
docker push goes to docker.io, as you can see, which resolves to the IP address 34.234.103.99.
A WHOIS lookup of this IP shows that it belongs to Amazon Web Services (AWS), which the Great Firewall blocks access to. After a cursory search, it looks like you're not the first to hit this.
I'd recommend setting up a VPN or proxy in order to bypass this.
You can also try the Docker registry mirror that is hosted in China; see
https://docs.docker.com/registry/recipes/mirror/#use-case-the-china-registry-mirror
https://www.docker-cn.com/registry-mirror (Chinese)
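If you go the mirror route, the linked pages configure it through the registry-mirrors key in the daemon configuration (daemon.json); note that a mirror mainly accelerates pulls, so it may not help the push itself:
{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}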
I'm not sure if this is an issue with the current version of Docker networking on Windows, or poor configuration and a misunderstanding on my part, but I have the following setup:
2 Docker containers (built using the Microsoft/ASP.NET image as a base) running a .NET MVC application in each.
1 Docker container running SQL server (built using the Microsoft/mssql-server-windows image)
When I create all 3 containers everything works great: I can attach and ping all of the other containers using their names without any issue. The applications run and can communicate with each other as I hoped.
However, when I reboot my machine and start all the containers again they can no longer ping/communicate with each other using their names (using IP addresses is fine).
I've tried this on the default NAT network and also tried replacing the NAT network with my own custom NAT network.
To resolve the issue I have to run the force network disconnect command for each container, like so:
docker network disconnect nat <containername> --force
And then I have to reconnect each container to the network before starting them up. All containers can then ping/communicate with each other using their names as well as their IP addresses.
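For clarity, the full per-container sequence looks like this, with web1 as a placeholder container name:
# force-disconnect the container from the NAT network, reconnect, then start it
docker network disconnect nat web1 --force
docker network connect nat web1
docker start web1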
FYI, this is a development environment but I was hoping to do something similar in Azure using a Windows Server 2016 VM, although I don't quite know what the best network configuration is for live production yet as I need to have multiple applications (in separate containers) on the same node accessed via their own subdomains.
Any help or guidance would be great.
I'm not sure, in part because this question was asked several months before any other example I've run into, but this sounds very similar to the problem described at https://github.com/docker/for-win/issues/1038.
Basically, there appears to be a problem introduced with the 1709 update to Windows 10 which results in a scenario where Hyper-V networking doesn't work the way it ought to.
There appear to be two common workarounds: turning off "Fast Start" under Control Panel => Power Options => System Settings, or restarting Docker for Windows and any containers after booting. I also thought I saw a Microsoft blog post indicating that the underlying problem has been resolved and will be included in an update to Windows 10, but alas I can no longer find that information or the specific version number in which the problem was (theoretically) fixed. It may well be the delayed 1803 "Spring Creators Update" release.
Docker allows you to link containers by name.
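As a minimal sketch of the mechanism (my/service is a placeholder image), linking injects the service's address into the client's environment under the link alias:
# run the service with a dynamically published port
docker run -d -P --name b my/service
# link a client to it under the alias "service" and dump the injected variables
docker run --rm --link b:service busybox env
# expect entries such as SERVICE_PORT_8080_TCP_ADDR describing b (8080 is illustrative)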
I have two questions on this:
Suppose A (a client) is linked to B (a service), and B's port is exposed dynamically (i.e., the actual host port is determined by Docker, not given by the user). What happens if B goes down and is restarted?
Does Docker update the environment variable on A?
Does Docker assign the very same port again to B?
Is A's link to B broken?
…?
Besides that, it's quite clear that this works fine if both containers are run on the same host machine. Does linking containers also work across machine boundaries?
Have you looked into the ambassador pattern?
It's ideal for this kind of scenario, where you may want an app server linked to a DB server, but if you take the DB server down the app server needs to be restarted as well.
http://docs.docker.io/en/latest/use/ambassador_pattern_linking/
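Roughly, from that guide (the image names and host address are the guide's examples, so treat them as illustrative):
# host A: run the service and an ambassador that publishes it
docker run -d --name redis crosbymichael/redis
docker run -d --link redis:redis --name redis_ambassador -p 6379:6379 svendowideit/ambassador
# host B: a local ambassador forwards to host A; clients link to it as if the service were local
docker run -d --name redis_ambassador --expose 6379 -e REDIS_PORT_6379_TCP=tcp://hostA:6379 svendowideit/ambassador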
I would say: try ;).
At the moment, Docker has no control whatsoever over the process once it has started, as it calls execve(3) without forking. It is not possible to update the environment; that's why links need to be set up before the container runs and cannot be edited afterward.
Docker will try to reassign the same port to B, but there is no guarantee, as another container could be using it.
What do you mean by 'broken'? If you disabled networking between unlinked containers, the link should still work if you stop/start a container.
No, you can't link containers across hosts yet.