I'm using WSL(v2) and have a web server running inside it.
$ kubectl get nodes
NAME                 STATUS   ROLES           AGE     VERSION
kind-control-plane   Ready    control-plane   3d21h   v1.25.0
And I can ping my "website" from within WSL [ping mysite.com] just fine
On my Windows side, I have Docker Desktop running.
Under Containers:
NAME                 IMAGE                 STATUS    PORTS
kind-control-plane   kindest/node:<none>   Running   443,42725,80
I have also enabled all [WSL] related options under Docker Desktop -> Settings
I want to access this website in Chrome and tried:
localhost:443 = err 400
localhost:42725 = Client sent an HTTP request to an HTTPS server. With https, err 403
localhost:80 = err 404
I also tried adding the port 42725 to my firewall allowed list but nothing seems to change.
I searched and found multiple Q&As pointing me here:
https://learn.microsoft.com/en-us/windows/wsl/networking
I tried these:
netsh interface portproxy add v4tov4 listenport=80 listenaddress=0.0.0.0 connectport=80 connectaddress={hostname -I}
netsh interface portproxy add v4tov4 listenport=443 listenaddress=0.0.0.0 connectport=443 connectaddress={hostname -I}
but nothing changes.
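For reference, the WSL address that goes into connectaddress comes from running wsl hostname -I in PowerShell, and the rules that actually got registered can be listed afterwards:
wsl hostname -I
netsh interface portproxy show v4tov4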
However, when I do this
netsh interface portproxy add v4tov4 listenport=42725 listenaddress=0.0.0.0 connectport=42725 connectaddress={hostname -I}
And then try to restart:
$ docker start kind-control-plane
Error response from daemon: Ports are not available: exposing port TCP 127.0.0.1:42725 -> 0.0.0.0:0: listen tcp 127.0.0.1:42725: bind: An attempt was made to access a socket in a way forbidden by its access permissions.
Error: failed to start containers: kind-control-plane
I also tried a completely different [listenport] but it didn't work either.
I'm actually not sure what ports to put inside the listenport/connectport options. Also, should I change listenaddress=0.0.0.0 to something else?
Any suggestions? Thanks!
So I have GitLab EE server (Omnibus) installed and set up on Ubuntu 20.04.
Next, following the official GitLab PlantUML integration documentation, I started PlantUML in a Docker container with the following command:
docker run -d --name plantuml -p 8084:8080 plantuml/plantuml-server:tomcat
Next, I also configured the /etc/gitlab/gitlab.rb file and added the following line for the redirection, as my GitLab server uses SSL:
nginx['custom_gitlab_server_config'] = "location /-/plantuml/ { \n proxy_cache off; \n proxy_pass http://plantuml:8080/; \n}\n"
In the GitLab admin area, under Settings -> General, when I expand PlantUML, I set the value of PlantUML URL in two ways:
1st approach:
https://HOSTNAME:8084/-/plantuml
Then, when trying to reach that address (https://HOSTNAME:8084/-/plantuml) in the browser, I get
This site can’t provide a secure connection.
HOSTNAME sent an invalid response.
ERR_SSL_PROTOCOL_ERROR
2nd approach:
I also tried a different value in Settings -> General -> PlantUML -> PlantUML URL:
https://HOSTNAME/-/plantuml
Then, when trying to reach that address (https://HOSTNAME/-/plantuml) in the browser, I get
502
Whoops, GitLab is taking too much time to respond
In both cases when I trace logs with gitlab-ctl tail I get the same errors:
[crit] *901 SSL_do_handshake() failed (SSL: error:141CF06C:SSL routines:tls_parse_ctos_key_share:bad key share) while SSL handshaking, client: CLIENT_IP, server: 0.0.0.0:443
[error] 1123593#0: *4 connect() failed (113: No route to host) while connecting to upstream
My question is: which of the two approaches above is the correct way to access PlantUML with the above configuration, and is there any configuration I am missing?
I believe the issue is that you are running PlantUML in a Docker container and then trying to reach it from GitLab (on localhost) by container name.
In order to check whether that is the issue, please change
proxy_pass http://plantuml:8080/
to
proxy_pass http://localhost:8080/
and try again with the first approach.
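For completeness, applied to the gitlab.rb line from the question, that change would look roughly like this. Here I've used the published host port 8084 from the docker run command above, since the Omnibus nginx runs on the host; if PlantUML is actually reachable on 8080 in your setup, keep 8080. Then apply it with sudo gitlab-ctl reconfigure:
nginx['custom_gitlab_server_config'] = "location /-/plantuml/ { \n proxy_cache off; \n proxy_pass http://localhost:8084/; \n}\n"
sudo gitlab-ctl reconfigure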
Your second approach seems to be missing the container port in the url.
I am starting a WebLogic 12.2.1.4 admin server in docker from my docker-compose.yml file.
I use a different port mapping, not the default 7001.
My docker port mapping is this: 7101:7001
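The relevant part of my docker-compose.yml is essentially this (the image name below is just a placeholder; only the port mapping matters here):
version: '3'
services:
  admin-server:
    image: <your-weblogic-12.2.1.4-image>   # placeholder
    ports:
      - "7101:7001"   # host 7101 -> container 7001 (WebLogic default listen port)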
Everything works fine, except this: I constantly get the following exception when I click on the Deployment menu on the web console:
<Feb 12, 2021 5:11:21,002 PM UTC> <Notice> <JMX> <BEA-149535> <JMX Resiliency Activity Server=All Servers : Resolving connection list DomainRuntimeServiceMBean>
javax.ws.rs.ProcessingException: java.net.ConnectException: Tried all: '1' addresses, but could not connect over HTTP to server: 'localhost', port: '7101'
failed reasons:
[0] address:'localhost/127.0.0.1',port:'7101' : java.net.ConnectException: Connection refused
The WL admin server tries to use the published Docker port 7101 inside the container, but WL is actually listening on the default port 7001 there. Port 7101 is only meaningful from the host machine; WL is not listening on port 7101 inside the container.
My workaround is the following:
Check the IP address of the admin-server container with docker inspect <container-name> (see the exact command after this list)
Open the WL console using the container private IP address, e.g.: http://172.19.0.2:7001/console
In this case, the exception does not appear
But if I open the WL console from http://localhost:7101/console which is the mapped port to the host machine by docker, then the exception appears
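As an aside, the address from the first step can be pulled out directly with a Go-template filter (standard docker inspect syntax):
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container-name>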
Maybe this is a WL user interface issue? But I am not sure.
Any idea why this is happening?
I'm trying to run a docker image on my windows 10 pro workstation, and I'm getting this error:
docker: Error response from daemon: Ports are not available: listen tcp 0.0.0.0:80: bind: An attempt was made to access a socket in a way forbidden by its access permissions.
I'm running this command:
docker run -d -p 80:80 docker/getting-started
And getting this response back:
Unable to find image 'docker/getting-started:latest' locally
latest: Pulling from docker/getting-started
188c0c94c7c5: Pull complete
617561f33ec6: Pull complete
7d856acdaa9c: Pull complete
a0d3c6e28e6d: Pull complete
af69a9b963c8: Pull complete
0739f3815ad8: Pull complete
7c7b75d0baf8: Pull complete
Digest: sha256:b821569034e3b5fae03b40e64a866017067f3bf17effe185b782bdbf02179528
Status: Downloaded newer image for docker/getting-started:latest
7907f6de2b55cc2d66b5ed3a642ac1a97e5bb5ecda5fcf76ff60d7236e8fd32d
docker: Error response from daemon: Ports are not available: listen tcp 0.0.0.0:80: bind: An attempt was made to access a socket in a way forbidden by its access permissions.
How can I run a Docker container and get past this problem?
The commands that helped me were:
net stop winnat
net start winnat
Taken from here.
Check if something is listening on port 80 by running the PowerShell command:
Get-Process -Id (Get-NetTCPConnection -LocalPort 80).OwningProcess
If the response is something like this:
NPM(K) PM(M) WS(M) CPU(s) Id SI ProcessName
------ ----- ----- ------ -- -- -----------
0 0.20 4.87 0.00 4 0 System
Then the port is taken.
Terminate the process, or simply change the host port:
docker run -d -p 81:80 docker/getting-started
5a0b1202f48ef63c06d75c2f26be2a05f29aa84fe2fbdc5b66f989aa86df98f
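If you would rather terminate the owning process than change the port (and it is not the protected System process, which cannot be killed), a PowerShell one-liner along these lines should work:
Stop-Process -Id (Get-NetTCPConnection -LocalPort 80).OwningProcess -Force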
I was also using Docker's Getting Started tutorial, ran
docker run -d -p 80:80 docker/getting-started, and got the same error.
I thought that Internet Information Services (IIS) might be using port 80 already. I verified this hunch by going to
Start > IIS > computer name > Sites > Default Web Site > highlight Default Web Site
In the right pane, I saw
Browse *.80 (http)
Yep, port 80 was already in use!
Note: To verify if any process is using port 80, run the command
Get-Process -Id (Get-NetTCPConnection -LocalPort 80).OwningProcess
and look for something like
NPM(K) PM(M) WS(M) CPU(s) Id SI ProcessName
------ ----- ----- ------ -- -- -----------
0 0.20 2.96 0.00 4 0 System
per @Tim Dunphy's answer.
Solution #1
Stop the Default Web site.
Either go to IIS per above > right pane > Manage Website > Stop or
Use the net stop http command, and stop services using the port.
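If you only want to stop that one site rather than the whole HTTP service, a PowerShell alternative (assuming the WebAdministration module that ships with IIS is available) is:
Import-Module WebAdministration
Stop-Website -Name "Default Web Site"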
Solution #2
You may not want to stop IIS or any other service running on port 80, so just change the local port! For example,
docker run -d -p 81:80 docker/getting-started
will likely do the trick, as long as you don't have services using port 81.
When I encountered this error, there was nothing actively listening on the port. However, the following command showed that the port had been reserved by something else:
netsh interface ipv4 show excludedportrange protocol=tcp
See This article for more details. A reboot fixed the issue for me.
Most times, Windows IIS (Internet Information Services) or some other program is using port 80, which is the default HTTP port used by Laravel, Apache, and other PHP development environments.
To resolve this issue, open a new PowerShell window as administrator and simply run this command:
net stop http
A prompt listing all services using the HTTP port is shown, and you are given the option of stopping them. Enter 'Y' and press Enter. All the services are stopped, and port 80 is now free for whatever you want to use it for.
Very useful for testing multiple application environments locally on Windows without having to worry about port configuration.
In my case, Task Manager showed that a system process was occupying port 80.
When I dug deeper, I found an svchost.exe bound to port 80, and it belonged to the World Wide Web service.
So just open the services list and stop the World Wide Web service; then everything will be OK. The name of the service may differ, but it should include the keywords "World Wide Web".
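On most systems that service is the World Wide Web Publishing Service (service name W3SVC), so from an elevated prompt it can also be stopped with:
net stop w3svc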
Just do it:
netsh http add iplisten ipaddress=::
(This has to be run in a command prompt with elevated rights!)
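To confirm the address was added, the current listen list can be shown with:
netsh http show iplisten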
I am using the PubNub (Go SDK) publish/subscribe service to receive messages, but I am not receiving any messages, even though I am able to ping google.com from inside the container.
In the PubNub logs, every time my program tries to connect to the origin (the PubNub server), I get a "dial tcp: i/o timeout" error. I guess this is due to a slow internet connection either on the host or in the container.
What should I do to get around this error?
I was able to solve this error by including Google DNS (8.8.8.8) in my machine's nameservers. I think this error was due to a slow domain-resolution process, as my machine was getting its DNS servers from DHCP.
I followed these steps:
Set static DNS servers in the /etc/resolvconf/resolv.conf.d/base file:
nameserver 8.8.8.8
nameserver 8.8.4.4
Configure your PC so that it uses user-provided DNS, instead of obtaining it from DHCP. For that, open this file /etc/dhcp/dhclient.conf and add this line:
supersede domain-name-servers 8.8.8.8, 8.8.4.4;
Restart network manager using:
sudo service network-manager restart
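To apply and verify the change (assuming the resolvconf package manages /etc/resolv.conf, as the paths above imply):
sudo resolvconf -u      # regenerate /etc/resolv.conf from the base file
cat /etc/resolv.conf    # should now list 8.8.8.8 and 8.8.4.4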
I have an HTTP health check in my service, exposed on localhost:35000/health. At the moment it always returns 200 OK. The configuration for the health check is done programmatically via the HTTP API rather than with a service config, but in essence it is the following (a JSON sketch of the same thing follows the list):
set id: service-id
set name: health check
set notes: consul does a GET to '/health' every 30 seconds
set http: http://127.0.0.1:35000/health
set interval: 30s
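Expressed as a payload for Consul's /v1/agent/check/register endpoint, that configuration looks roughly like this (values copied from the list above; id is treated here as the check ID, adjust if it is meant to be a ServiceID):
{
  "ID": "service-id",
  "Name": "health check",
  "Notes": "consul does a GET to '/health' every 30 seconds",
  "HTTP": "http://127.0.0.1:35000/health",
  "Interval": "30s"
}
registered with something like: curl --request PUT --data @check.json http://localhost:8500/v1/agent/check/register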
When I run consul in dev mode (consul agent -dev -ui) directly on my host machine, the health check passes without any problem. However, when I run consul in a docker container, the health check fails with:
2017/07/08 09:33:28 [WARN] agent: http request failed 'http://127.0.0.1:35000/health': Get http://127.0.0.1:35000/health: dial tcp 127.0.0.1:35000: getsockopt: connection refused
The docker container launches consul, as far as I am aware, in exactly the same state as the host version:
version: '2'
services:
  consul-dev:
    image: "consul:latest"
    container_name: "net-sci_discovery-service_consul-dev"
    hostname: "consul-dev"
    ports:
      - "8400:8400"
      - "8500:8500"
      - "8600:8600"
    volumes:
      - ./etc/consul.d:/etc/consul.d
    command: "agent -dev -ui -client=0.0.0.0 -config-dir=/etc/consul.d"
I'm guessing the problem is that consul is trying to do the GET request against the container's loopback interface rather than what I am intending, which is the loopback interface of the host. Is that a correct assumption? More importantly, what do I need to do to correct the problem?
So it transpires that there was a bug in some previous versions of macOS that prevented use of the docker0 network. Whilst the bug is fixed in newer versions, Docker support extends to older versions and so Docker for Mac doesn't currently support docker0. See this discussion for details.
The workaround is to create an alias to the loopback interface on the host machine, set the service to listen on either that alias or 0.0.0.0, and configure Consul to send the health check GET request to the alias.
To set the alias (choose a private IP address that's not being used for anything else; I chose a class A address but that's irrelevant):
sudo ifconfig lo0 alias 10.200.10.1/24
To remove the alias:
sudo ifconfig lo0 -alias 10.200.10.1
From the service definition above, the HTTP line should now read:
set http: http://10.200.10.1:35000/health
And the HTTP server listening for the health check requests also needs to be listening on either 10.200.10.1 or 0.0.0.0. This latter option is suggested in the discussion, but I've only tried it with the alias.
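A quick way to confirm the container can actually reach the service on that alias (assuming the image ships BusyBox wget, as the official consul image does):
docker exec net-sci_discovery-service_consul-dev wget -qO- http://10.200.10.1:35000/health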
I've updated the title of the question to more accurately reflect the problem, now I know the solution. Hope it helps somebody else too.