Userland proxy error when launching docker image on Google Cloud Platform - docker

I am trying to run a standard nginx container on one of my GCP VMs. When I run
docker run -it --rm -p 80:80 tiangolo/uwsgi-nginx-flask:python3.6
I get the following error:
Error starting userland proxy: listen tcp 0.0.0.0:80: bind: address already in use
However, this is a clean VM instance I just created. During VM creation I also checked the HTTP option to make sure port 80 is open (I need to add HTTPS later, but this is my first deployment test).
The image works locally, so I guess it is a Google Cloud Platform configuration issue.

It was my own stupid error; sorry for asking the SO community...
So what did I do wrong? I was connected through the web client, which means port 80 was already in use, causing all this havoc :(
Just SSH in instead and try again, and it works.
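To see what is holding the port before retrying, a quick check on the VM helps. A minimal sketch; ss is normally present on the Debian images, while lsof may need to be installed:
sudo ss -ltnp | grep ':80 '    # shows the PID/program listening on port 80
sudo lsof -i :80               # alternative view of the same information
Once nothing is listening on port 80, the docker run command above binds cleanly.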

I tried to reproduce the issue on my end, but I did not find any error. Here are the steps I took.
First I spun up a Debian VM instance in Google Cloud Platform and allowed incoming HTTP in the firewall for that VM instance so that I could access the site from outside (a gcloud sketch of this step is at the end of this answer).
Then I installed Docker in the VM instance. I followed this link.
After that, I made sure that the HTTP port was free in the VM instance, using the command below.
netstat -an | egrep 'Proto|LISTEN'
You may check the link here.
At this point, I issued the docker command you provided.
docker run -it --rm -p 80:80 tiangolo/uwsgi-nginx-flask:python3.6
I did not get any error and I could access the nginx page.
“Hello World from Flask in a uWSGI Nginx Docker container with Python 3.6 (default)”
If you spin up a new VM with the same Docker version, do you have the same issue? What image is your VM running?
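For reference, the firewall step can also be done from the command line. A minimal sketch, assuming the instance carries the http-server tag (the tag applied when you tick "Allow HTTP traffic"), with default-allow-http and my-instance as placeholder names:
gcloud compute firewall-rules create default-allow-http --allow=tcp:80 --target-tags=http-server
gcloud compute instances add-tags my-instance --tags=http-server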

Related

Can't access localhost from docker

I'm a beginner in the Docker world. Setting up all this 'localhost' business was already painful with Apache, and it's just the same with Docker.
I tried to solve my problem with the help of other posts, but after several hours I gave up and am asking for your help, because some posts are simply not comprehensible to me (posts involving bridges, NAT, iptables, docker-machine, etc.).
I'm simply trying to reach, at localhost:5000 on Windows, an Apache site that is started with service apache2 start inside a Docker container; if I run w3m localhost inside the container I can see it running.
But when I try to access it with a browser on the host, there is no response.
I also tried this command:
docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' bce97a49b68c
172.17.0.2
The address with :5000 gives no access; I even put it in the hosts file, with no success.
If someone has a definitive solution to this problem: there seem to be plenty, and everything looks so simple in blog articles (I even tried something with docker-compose, which broke Docker and I had to reinstall the whole thing).
I'm a little unsure what you're asking, but it seems like you may need to publish your ports. When something runs in Docker, it runs in its own little box, unconnected to the outside world, i.e. the rest of your machine. If you want to connect ports, say to access a web server running inside a Docker container, you need to use the -p or --publish option when running your Docker container. There are similar options for mounting drives and such.
Here's an example from the database I run locally in Docker:
docker run \
--publish=7474:7474 \
--volume=/home/me/logs:/logs \
--env=NEO4J_AUTH=none \
neo4j:4.2
This says:
Allow the outside system to reach port 7474 inside the Docker container via port 7474 on the host
Mount the outside system's /home/me/logs folder as /logs inside the Docker container
Set the environment variable NEO4J_AUTH inside the Docker container to the value none
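Applied to the Apache question above, a minimal sketch (your-apache-image is a placeholder for whatever image runs Apache):
docker run -d -p 5000:80 --name myapache your-apache-image    # container port 80 published as host port 5000
Then browse to http://localhost:5000, or to http://<docker-machine-ip>:5000 if you are on Docker Toolbox for Windows.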

How do I configure docker to allow a connection to a container from other computers?

I am trying to run a small test server with MS SQL Server running on a Mac in a Linux docker container. Maybe I have the terminology wrong so please correct me if necessary:
host - the macOS desktop with docker installed (ip 10.0.1.73)
container - the Linux instance running in the docker container with SQL Server running in it
remote desktop - another computer on the local area network trying to connect to SQL Server
I followed the MS installation instructions and everything seems to be running fine, except that I can't connect to SQL Server from the remote desktop.
I can connect to the docker host (10.0.1.73) and can ping the IP address.
I can connect to SQL Server from the docker host and see the databases etc.
I used the following command to create the docker container
sudo docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=<XXXXXX>" -p 1433:1433 --name sqlserver1 -d microsoft/mssql-server-linux:2017-latest
I thought that -p 1433:1433 would map the Linux port to the macOS host port and allow the remote computer to access the docker container by connecting to that port on the macOS host from the local area network.
This is not working, and I assume it may have to do with the network routing on the macOS host.
Most solutions I have seen seem to indicate that one should use the VirtualBox UI to modify the network settings, but I don't have that installed.
The others seem to require pages and pages of command line instructions.
Is there an easy solution somewhere I have missed?
EDIT:
After some more research I found this explanation of how, by default, Docker networking is set up for single-host networking. It is a good explanation for anyone else struggling with the Docker concepts.
It is also worth reading up about the differences between docker containers and virtual machines...
https://youtu.be/Js_140tDlVI
I am still trying to find a good explanation of multi-host networking.
Try disabling the firewall on the host you want to connect to.
Port 1433 will be forwarded to the docker container, but your host (the Mac) must have port 1433 open so that other machines can connect to it.
Using NAT:
Assign the target address to your host interface:
sudo ifconfig en1 alias 10.0.1.74/21 up
Create the docker container and map the port to the second IP address assigned to the host interface:
sudo docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=<XXXXXXXXX>" -p 10.0.1.74:1433:1433 --name sqlserver1 -d microsoft/mssql-server-linux:2017-latest
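A quick way to verify the mapping from another machine on the LAN is a plain TCP probe against the aliased address, sketched here on the assumption that nc (and optionally sqlcmd) is available on the remote computer:
nc -vz 10.0.1.74 1433                              # should report the port as open once the container is up
sqlcmd -S 10.0.1.74,1433 -U sa -P '<XXXXXXXXX>'    # then a normal SQL Server connection should succeed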

Cannot contact docker container on Windows host

tl;dr summary of the problem:
Application launches successfully within container, binds to 127.0.0.1:8080 within the container, and successfully services web requests, but only within the container
docker ps -a confirms that port 8080 is being exposed
I cannot communicate with the application from the host using the container's actual IP address when I request http://[Container IP address]:8080
The host is running Windows 10
The Windows Firewall is completely disabled for troubleshooting
To troubleshoot, I have created the simplest possible application to run in a Docker container, an F# / Suave application like so:
open Suave

[<EntryPoint>]
let main args =
    startWebServer defaultConfig (Successful.OK "Hello World!")
    0
Which works fine, returning a simple "Hello World!" when I run it locally.
To containerize the app I have followed the instructions at "Dockerize a .NET Core application" which instructs me to run the container like
$ docker run -d -p 8080:80 --name myapp aspnetapp
I cannot connect to the "website" at http://localhost:80 or http://localhost:8080, which apparently is a common problem for Docker users running Windows. However, the solution that seems to have fixed this problem for every other Windows user on the internet, namely running
docker inspect myapp
and then hitting the resulting IPAddress, does not work either. I get:
Hitting both http://172.17.0.2:80 and http://172.17.0.2:8080 in Chrome gives me "Site can't be reached."
Also worth noting, when I run
docker logs myapp
The only line is
[17:43:21 INF] Smooth! Suave listener started in 73.476ms with binding 127.0.0.1:8080
As a guess, I have also tried
ipconfig
and then hitting the IP address of the Docker NAT adapter, but this also results in an unreachable site.
UPDATE:
Another observation which might or might not be relevant: many online tutorials suggest that under Windows you need to connect directly to the IP address of the container, and that you can get that IP address by running
docker inspect -f "{{ .NetworkSettings.Networks.nat.IPAddress }}" myapp
which, for me, always yields:
When I run a vanilla
docker inspect myapp
the resulting JSON is not structured the way the recommended query expects: there is a bridge node, but no nat node.
Your app says it’s bound to localhost:8080, but you’re publishing port 80. Stop the container, and rerun with:
docker run -d -p 8080:8080 --name myapp aspnetapp
Try adding the following lines to the Dockerfile:
ENV ASPNETCORE_URLS http://+:80
EXPOSE 80
Reference: https://www.sep.com/sep-blog/2017/02/20/hosting-asp-net-core-docker/
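If rebuilding the image is inconvenient, the same URL setting can be supplied at run time instead. A sketch, assuming the container is the aspnetapp sample from the tutorial (which honours ASPNETCORE_URLS); the Suave app from the question would instead need its binding changed in code:
docker run -d -p 8080:80 -e ASPNETCORE_URLS=http://+:80 --name myapp aspnetapp
# the app then listens on all interfaces inside the container, so http://localhost:8080 reaches it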
My containerized app needed to listen/bind to 0.0.0.0 rather than 127.0.0.1.
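A quick way to confirm the binding from the host, sketched on the assumption that ss (or netstat) is available inside the container:
docker exec myapp ss -ltn    # 127.0.0.1:8080 here means the app is unreachable from outside the container
# after the fix it should show 0.0.0.0:8080 (or *:8080), and -p 8080:8080 with http://localhost:8080 will work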

docker images access issue

I'm not able to access my docker image. My setup is Windows 7 with the Docker Linux VM running on Oracle VirtualBox. I have built my app, and I can see it in the image list shown below.
I don't know how I can access the myapp container. Since it's working on localhost, I believe I can access it at localhost:<port number>, but I have no clue where to look or how to start. If you have faced this same problem, can you help?
Update, log hung:
In the screen below, the server startup hung for almost 10 minutes and I terminated the process. Any idea about this error?
What you have shown in your screenshot is the image list. So you would first have to docker run your image, binding the application's port exposed by the docker image (with EXPOSE, I'm assuming 8081 for the sake of my example) to the host:
docker run --publish 8081:8081 3b98
If you forgot to expose the port in your image, you can do that on the command line by adding the argument --expose 8081 to docker run.
Then, since you're working with the Windows 7 setup, you cannot access your running application in its container on localhost, but rather on the docker-machine's (the Docker Linux VM's) IP. You can find out the assigned IP with
docker-machine ip
So if your application publishes itself on 8081 and docker-machine ip returns 192.168.99.100 you would find your app on 192.168.99.100:8081
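Putting the pieces together, a sketch (the image ID 3b98 and the IP 192.168.99.100 are the illustrative values from above):
docker run -d --publish 8081:8081 --name myapp 3b98    # run the image and publish the app's port
docker-machine ip                                      # prints the Linux VM's address, e.g. 192.168.99.100
curl http://192.168.99.100:8081                        # the app is reached via the docker-machine IP, not localhost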

Docker container can't connect to host application using IP whitelist

I have an application running on my host which has the following features: it listens on port 4001 (configurable) and only accepts connections from a whitelist of trusted IP addresses (127.0.0.1 only by default; other addresses can be added, but one by one, not using a mask).
(It's the interactive brokers gateway application which is run in java but I don't think that's important)
I have another application running inside a docker container which needs to connect to the host application.
(It's a python application accessing the IB API, but again I don't think that matters)
Ultimately I will have multiple containers on multiple machines trying to do the same thing, but I can't even get it working with one container running on the same machine.
sudo docker run -t myimage
Error: Couldn't connect to TWS. Confirm that "Enable ActiveX and Socket Clients" is enabled on the TWS "Configure->API" menu.
(No response from IB Gateway on host machine)
IDEALLY I'd be able to set up the docker containers / bridge so that all the docker containers appear as if they are on a specific IP address, add it to the whitelist, and voila.
What I've tried:
1) using -p and EXPOSE
sudo docker run -t -p 4001:4001 myimage
Bind for 0.0.0.0:4001 failed: port is already allocated.
(No response from gateway)
This either doesn't work or leads to a "port already in use" conflict. I gather that these settings are designed for the opposite problem (host can't see a particular port on the container).
2) setting --net=host
sudo docker run -t --net=host myimage
Exception caught while reading socket - Connection reset by peer
(no response from gateway)
This should work since the docker container should now look like it's 127.0.0.1... but it doesn't.
3) setting --net=host and adding the local host's real IP address 192.168.0.12 (as suggested in comments) to the whitelist
sudo docker run -t --net=host myimage
Exception caught while reading socket - Connection reset by peer
(no response from gateway)
4) adding 172.17.0.1, ...2, ...3 to the whitelist on the host application (the bridge network is 172.17.0.0 and subsequent containers get allocated in this range)
sudo docker run -t myimage
Error: Couldn't connect to TWS. Confirm that "Enable ActiveX and Socket Clients" is enabled on the TWS "Configure->API" menu.
(no response from host)
This is horribly hacky but doesn't work either.
PS Note this is different from the problem of trying to run the host application IB Gateway inside a container - I am not doing that.
I don't want to run the host application inside another container, although in some ways that might be a neater solution.
Running the IB gateway is tricky on a number of different levels, including connecting to it, and especially if you want to automate the process.
We took a close look at connecting to it from other IPs, and finally gave up on it (a gateway bug, as far as we could tell). There is a setting to whitelist IPs that can connect to the gateway, but it does not work and cannot be scripted.
In our build process we create a docker base image, then add the gateway and any/all of the gateway's clients to that image. Then we run that final image.
(Posted on behalf of the OP).
The fix was setting --net=host and changing the port from 4001 so that it doesn't conflict with a live version of the gateway on the same network. The only IP address required in the whitelist is 127.0.0.1.
sudo docker run -t --net=host myimage
Use socat to forward the gateway's port to a new port that can listen on any address. For example, set the gateway to listen on port 4002 (localhost only) and run the following command in the container
socat tcp-listen:4001,reuseaddr,fork tcp:localhost:4002
to forward the port to 4001.
Then you can connect to the gateway from outside of the container using port 4001 when running the container with parameter -p 4001:4001.
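A minimal sketch of that arrangement (my-gateway-image is a placeholder; it assumes the gateway runs inside the container and is bound to 127.0.0.1:4002 only):
# inside the container: re-expose the localhost-only gateway port on all interfaces as 4001
socat tcp-listen:4001,reuseaddr,fork tcp:localhost:4002 &
# on the host: publish the forwarded port when starting the container
sudo docker run -t -p 4001:4001 my-gateway-image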
In case this is useful for another person: I tried a couple of the suggestions put here to connect from my Python app running in a Docker container to a TWS IBGateway instance running on another server, and none of them were 100% working. The socat option was connecting, but then the connection was being dropped due to an issue with the socat buffer that we couldn't fix.
The solution we found was to create an ssh tunnel from the machine that is running the Docker container to the machine that is running the TWS IBGateway.
ssh -i ib-gateway.pem <ib-gateway-server-user>@<ib-gateway-server-ip> -f -N -L 4002:127.0.0.1:4001
After you establish this ssh tunnel, you can test it by running
telnet 127.0.0.1 4002
If this command runs successfully, your ssh tunnel is ready. The next step is to configure your Python application to connect to 127.0.0.1 on port 4002 and start your docker container with --net=host so it can reach the ssh tunnel running on the Docker host machine.
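Putting the pieces together on the Docker host, a sketch (IB_HOST and IB_PORT are illustrative environment variables, not something the original application is known to read):
# 1) tunnel the gateway server's localhost-only port 4001 to local port 4002
ssh -i ib-gateway.pem <ib-gateway-server-user>@<ib-gateway-server-ip> -f -N -L 4002:127.0.0.1:4001
# 2) confirm the tunnel answers
telnet 127.0.0.1 4002
# 3) run the client container on the host network so it can reach 127.0.0.1:4002
sudo docker run -t --net=host -e IB_HOST=127.0.0.1 -e IB_PORT=4002 myimage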
