I am trying to resolve a network issue that I run into repeatedly when performing Docker commands such as "docker search ubuntu".
I get an error saying:
"Error response from daemon: server misbehaving.
Can anyone help me on this?
For those who have this problem: it is typically caused by your DNS being unable to resolve index.docker.io. I had this issue today working from home, where my internet connection's default DNS server is notoriously flaky.
My dev environment is OS X, and I solved the issue by changing my DNS servers in the network settings to Google's DNS servers (8.8.8.8 and 8.8.4.4) and then restarting my Docker host with docker-machine restart MACHINENAME.
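If you prefer the command line over the Network settings pane, macOS can switch resolvers with networksetup. This is just a sketch and assumes your active service is named "Wi-Fi":
networksetup -setdnsservers Wi-Fi 8.8.8.8 8.8.4.4
networksetup -getdnsservers Wi-Fi
docker-machine restart MACHINENAME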
Faster/easier solution: log in to docker-machine and fix the DNS.
Turns out you don't have to go to all the trouble and waiting associated with restarting docker-machine. Just log in to the Docker machine (i.e. docker-machine ssh default) and edit /etc/resolv.conf, adding the DNS settings from your host machine at the top of resolv.conf.
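A minimal sketch of that edit, assuming the default machine name and Google's resolvers:
docker-machine ssh default
sudo vi /etc/resolv.conf   # put these lines at the top of the file:
nameserver 8.8.8.8
nameserver 8.8.4.4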
This is more or less what happens when you restart docker-machine and explains why some repositories are unreachable sometimes after you switch networks.
I also had the exact same problem. Then I stopped the docker-machine and started it again, and it worked.
Make sure that you are connected to the internet when you run this, as Docker needs network access to reach the registry.
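For reference, the stop/start cycle looks like this, assuming the machine is named default:
docker-machine stop default
docker-machine start default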
My issue was not solved by the answers stated here.
This was a problem with host resolution; I was getting random timeout and "server misbehaving" errors.
You need to enable it through the configuration property experimentalHostResolver in %APPDATA%\rancher-desktop\settings.json. By default this property is set to false, meaning that DNS in Rancher Desktop is handled through dnsmasq. However, if this property is set to true, DNS lookups switch to the host resolver.
NOTE: This feature can currently only be enabled on Windows, and it is experimental.
You can take a look at the example settings.json excerpt below as a reference (experimentalHostResolver is the setting in question):
{
  "kubernetes": {
    "experimentalHostResolver": true
  }
}
I'm having problems reloading my RabbitMQ SSL certs when they get renewed. The core of my setup is as follows:
a VPS running the RabbitMQ community edition Docker container
SSL certs provisioned using Let's Encrypt on the VPS, with the certs available to the Docker image via a mounted folder
auto-renewal configured by mounting a folder that gets bound to /plugins/rabbitmq_management-{rmq-version}/priv/www in the Docker container, and specifying the --webroot-path as that folder. This allows the renewal to work without shutting down the server, because the /priv/www folder is where Cowboy serves static assets from
The problem now is that even with the renewal successful, the server does not seem to pick up the changed SSL certs unless the Docker container is restarted. The closest I've found to a possible solution is this recommendation to clear the certs cache, but when I try that command I get this error: unable to connect to epmd (port 4369) on {cluster-name}.localdomain: nxdomain (non-existing domain). I haven't found a solution to that yet. I've seen this suggestion to check the contents of the /etc/hosts file, and the contents of mine differ from those shown in that blog article. Specifically, mine simply shows:
127.0.0.1 localhost
...
without the localhost.localdomain parts shown in the article. I'm stumped beyond this point. Any help will be much appreciated. Thank you!
Try running this command:
rabbitmqctl -n rabbit@localhost eval 'ssl:clear_pem_cache().'
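To confirm the broker is now serving the renewed certificate, an external check with openssl is one option. This is a sketch that assumes the TLS listener is on the default AMQPS port 5671 and that your host is example.com:
openssl s_client -connect example.com:5671 -servername example.com </dev/null 2>/dev/null | openssl x509 -noout -dates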
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
I create multiple containers remotely in a VM, but sometimes after rebooting the VM I get this error, and only restarting the VM again helps. Do you know what this error means and how to solve it?
Docker API responded with status code=InternalServerError,
response={"message":"hcsshim::CreateComputeSystem
21b0a2583de03c0bb0871be6fc19580b74382c2afe697370b13edac0f31379ab: The
requested resource is in use.\n(extra info:
{"SystemType":"Container","Name":"21b0a2583de03c0bb0871be6fc19580b74382c2afe697370b13edac0f31379ab","Owner":"docker","IgnoreFlushesDuringBoot":true,"LayerFolderPath":"C:\\ProgramData\\Docker\\windowsfilter\\21b0a2583de03c0bb0871be6fc19580b74382c2afe697370b13edac0f31379ab","Layers":[{"ID":"6e512ed6-358b-5ade-be48-2ffe0aca7198","Path":"C:\\ProgramData\\Docker\\windowsfilter\\99f07daca1c1f27cebf6734e629331f2591365fb10f01087af5006daaf08dcfe"},{"ID":"89732e48-50f0-58fc-924e-116de51e7fa8","Path":"C:\\ProgramData\\Docker\\windowsfilter\\ffeb491d61491478f7db8de90727ec187246889b07289fcf0167cd81f81bc578"},{"ID":"4d107d52-32b1-58ef-8300-0a6e663ffc96","Path":"C:\\ProgramData\\Docker\\windowsfilter\\1d3356c9b099fb80301e6e2f4f3a47bbc422e7ddb9c496827d45d9c6e04f5fc2"},{"ID":"0b71aac6-7318-5347-a670-992c14d06613","Path":"C:\\ProgramData\\Docker\\windowsfilter\\def8b69f42a59ecf75bc8f74ab8a89a08378ce9a8c08b23a266e5b3c6a698e5f"},{"ID":"713ec9da-4c17-5a81-b458-3bd4a5f6ed27","Path":"C:\\ProgramData\\Docker\\windowsfilter\\abe1c469fd4c7650f55042c06cbe401b02c8bd5aedcb7fabd8c8d7e4a42f827f"},{"ID":"099b6b2c-1067-50b3-99bc-ec9d34027c52","Path":"C:\\ProgramData\\Docker\\windowsfilter\\3f6a34abe173f55b59669c859cc160436dc0d1222d9771bd907098e65efd2d60"},{"ID":"98b515ee-aa27-5fd2-bc30-ef80416b0f6f","Path":"C:\\ProgramData\\Docker\\windowsfilter\\6968b8cb1c76650fd4cb00f6daa20196c48ed3ea859da838b1e983900ceed1cd"},{"ID":"47ef818f-b569-530b-b886-3b94002eb19e","Path":"C:\\ProgramData\\Docker\\windowsfilter\\75114c4fe304656f0ad6be10eebc8e4a44ea193058bf444b536e2ecd8aa1671b"},{"ID":"0e70a491-247c-5574-b86a-24f209823e52","Path":"C:\\ProgramData\\Docker\\windowsfilter\\0193da6cbfd630787e298c7aa77274cd40ac3b444edbcaacfb725f9365210c36"},{"ID":"c3c2426f-1d1e-59eb-bd25-9fdb6bc2677e","Path":"C:\\ProgramData\\Docker\\windowsfilter\\06552a9e13e92f67a507de20d1e3a68e94a3510e716b7591d731d4637b3b6441"},{"ID":"ff3c665d-3a3b-563e-9c80-1f86fa8e7735","Path":"C:\\ProgramData\\Docker\\windowsfilter\\3d5e3b9c49041934ffa8917fb26401e6844a65ae0609705cecf57c958f0821c6"}],"ProcessorCount":5,"MemoryMaximumInMB":6675,"HostName":"21b0a2583de0","HvPartition":true,"EndpointList":["b1c4f68a-6037-4335-ad37-64cd6330159d"],"HvRuntime":{"ImagePath":"C:\\ProgramData\\Docker\\windowsfilter\\06552a9e13e92f67a507de20d1e3a68e94a3510e716b7591d731d4637b3b6441\\UtilityVM"},"AllowUnqualifiedDNSQuery":true})"}
Cryptic message, but the problem is with the -memory parameter: https://forums.docker.com/t/error-the-requested-resource-is-in-use-only-when-using-memory/95851. I will run Docker on Windows Server instead, where I can use the isolation parameter, which works around this Docker bug.
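For reference, on Windows Server the isolation mode can be chosen per container. This is only a sketch (the image name and memory limit are arbitrary examples), not a confirmed fix for the linked bug:
docker run --isolation=process -m 2g mcr.microsoft.com/windows/servercore:ltsc2019 cmd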
Issue
So my problem is that I can't get the Rancher server to find the Rancher agent. I've looked at the Rancher Troubleshooting FAQs, but they haven't helped with my issue. I'm using one server for both the Rancher server and the agent, and I'm setting CATTLE_AGENT_IP to the IP of the physical server.
I'm running Ubuntu 16.04 and docker 1.12.3.
Iptables
At first I thought it might be a firewall issue, but I've tried disabling it and had no luck.
Logs
Rancher agent error log message
time="2016-10-27T11:56:50Z" level="info" msg="Host not registered yet. Sleeping 1 second and trying again." Attempt=5 reportedUuid="492dc65c-6359-4a40-b6e3-89c6da704ffb"
I feel like I've tried everything without any result. Anyone have an idea what could be wrong or how I could continue to troubleshoot the problem?
Are you reusing the host from a previous Rancher install?
If so, there are sometimes old credentials that get tried instead of the new ones for the host. The files are in /var/lib/rancher (they are dotfiles, so you need ls -a to see them).
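For instance, to check for leftover registration state on the host:
ls -a /var/lib/rancher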
If you are using a self-signed SSL cert, registration will fail if you are not bind-mounting the CA root cert. See the last section, "Adding Hosts", of http://docs.rancher.com/rancher/v1.2/en/installing-rancher/installing-server/basic-ssl-config/ for more info.
I solved my issue. The problem was a faulty CATTLE_AGENT_IP. Apparently you cannot have http:// before the IP address.
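In other words, assuming the variable is passed to the agent container with docker run -e (192.0.2.10 is just a placeholder address):
# wrong: scheme prefix on the IP
-e CATTLE_AGENT_IP="http://192.0.2.10"
# correct: bare IP only
-e CATTLE_AGENT_IP="192.0.2.10"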
Here is my whole scenario.
I have a RHEL 7.1 VMware image with the corporate proxy properly configured; accessing stuff over http or https works properly.
I installed docker-engine and added the HTTP_PROXY setting to /etc/systemd/system/docker.service.d/http-proxy.conf. I can verify the proxy setting is picked up by executing:
sudo systemctl show docker --property Environment
which prints (with real values, of course):
Environment=HTTP_PROXY=http://proxy.mycompany.com:myport/
Pulling and running docker images works correctly this way.
The goal is to work with the binary distribution of openshift-origin. I downloaded the binaries and started setting things up as per the walkthrough page on GitHub:
https://github.com/openshift/origin/blob/master/examples/sample-app/README.md
Starting OpenShift seems to work, as I can:
* login via the openshift cli
* create a new project
* even access the web console
But when I try to create an app in the project (also via the cli):
oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-hello-world.git
It fails:
error: can't look up Docker image "centos/ruby-22-centos7": Internal error occurred: Get https://registry-1.docker.io/v2/: dial tcp 52.71.246.213:443: connection refused
I can access this endpoint (without authentication, though) via the browser on the VM or via wget.
Hence I believe Docker fails to pick up the proxy settings. After some searching I also suspect there are iptables settings missing. Referring to:
https://docs.docker.com/v1.7/articles/networking/
But I don't know whether I should fiddle with the iptables settings; shouldn't Docker figure that out itself?
Check your HTTPS_PROXY environment property. The registry endpoint in the error is HTTPS, so HTTP_PROXY alone does not cover it.
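A sketch of what that could look like, reusing the drop-in file from the question (the proxy URL is a placeholder):
# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.mycompany.com:myport/"
Environment="HTTPS_PROXY=http://proxy.mycompany.com:myport/"
Then reload systemd and restart the daemon so the new variable is picked up:
sudo systemctl daemon-reload
sudo systemctl restart docker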
I'm new to Wildfly and I hope you guys can help me with this problem:
I'm following this tutorial on how to install Wildfly 8, and when I try to execute step 4 I get errors.
I've been googling for a while now and I can't find an answer. I've tried with JDK 7 and 8 with no change, I'm running with admin permissions, and I've even downloaded Wildfly again, still with no change.
More experienced co-workers have seen this and don't have a clue about what's going on.
Can you help me? Thanks
The tutorial you linked to has Wildfly configured to use the default port 8080. Most likely, you have another process or service running that is already using port 8080. Try to find out what process it is and stop it, or configure Wildfly to use a different port.
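Two quick ways to go about that, sketched here for Linux (the offset value is arbitrary):
# see what is already listening on 8080
sudo ss -ltnp | grep :8080
# or start Wildfly with all ports shifted by 100 (8080 -> 8180, 9990 -> 10090)
./bin/standalone.sh -Djboss.socket.binding.port-offset=100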
Try restarting the machine or enabling IPv6 on the machine; that resolved this error for me.
Those having the same problem should check what else is using port 9990 on your Windows system. TCPView is a good tool for finding the culprit. One common cause in this case is the NVIDIA Network Service (NvNetworkService.exe).
If that's the case, just find it in your Windows services list and stop/disable it. The service itself is responsible for checking for Nvidia driver updates, so any time you want it back, just turn it on manually.
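If you prefer the command line to TCPView, a quick check from an elevated prompt could look like this (9990 is Wildfly's default management port; the PID placeholder comes from the first command):
netstat -ano | findstr :9990
tasklist /FI "PID eq <pid-from-previous-command>"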
In my case, I had inadvertently added an AJP socket binding while using the standalone jboss-cli utility:
[standalone@localhost:9990 /] /subsystem=undertow/server=default-server/ajp-listener=ajp:add(socket-binding=ajp)
This led to an 'already in use' error that prevented any app from starting and surfaced as a 503 error through the Apache web server in front of it.
I deleted the binding:
/subsystem=undertow/server=default-server/ajp-listener=ajp:remove
And then everything worked normally.
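If the change does not take effect right away, reloading the server from the same CLI session is an option (a general jboss-cli operation, not something this answer relied on):
[standalone@localhost:9990 /] reload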
I too had the same issue. After analysis, I found that the SSL port (443 in my case) was causing it. I terminated the processes running on 443, restarted Wildfly, and everything worked fine after that.
I faced the same issue with Wildfly 8.2.1.
Port 8080 was also free, so that solution didn't work for me.
Try the procedure below, as it helped resolve my issue.
Add the lines below to your server's /etc/sysctl.conf file:
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
No restart is required for this solution.
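To make the setting take effect without a reboot, reloading the file should be enough; a sketch, assuming the default sysctl.conf path:
sudo sysctl -p /etc/sysctl.conf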