X-Forwarding with proxychains - docker

I am displaying GUI applications (e.g. Firefox) from a Docker container via X-forwarding, like this (with an X server running on the host):
docker run -it -e DISPLAY=host.docker.internal:0 my/image
This works fine: when I execute firefox in the container, the UI is displayed via the host's X server.
Unfortunately, this does not work when I use a proxy for my networking within the container. Let's say I use proxychains with the default Tor settings. Then I get the following error:
$ proxychains firefox
ProxyChains-3.1 (http://proxychains.sf.net)
|DNS-request| host.docker.internal
|S-chain|-<>-127.0.0.1:9050-<><>-4.2.2.2:53-<><>-OK
|DNS-response|: host.docker.internal does not exist
Error: cannot open display: host.docker.internal:0
It appears that X-forwarding does not work when I route my networking through a proxy. How can I fix this?
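A workaround sketch: the failure has two parts. proxychains 3.1 forces DNS lookups through the proxy (proxy_dns), so the Docker-internal name host.docker.internal never resolves, and even with a resolved name the X connection itself would be routed into Tor and fail. Resolving the name locally and excluding the X server's address from the chain avoids both. Assumptions below: proxychains-ng (which supports the localnet directive), and an example subnet; check the real IP first.

# Resolve the display host locally, before proxychains intercepts DNS:
HOST_IP=$(getent hosts host.docker.internal | awk '{ print $1 }')
# In /etc/proxychains.conf: comment out proxy_dns (so local names keep
# resolving) and exclude the X server's subnet from proxying, e.g.:
#   localnet 192.168.65.0/255.255.255.0
# Pass the numeric IP so no DNS lookup happens at connect time:
DISPLAY="$HOST_IP:0" proxychains firefox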

Related

Docker cannot access exposed port inside container

I have a container with an exposed port for accessing a service running within it. I am not publishing my ports outside the container, i.e. to the host (using host networking on Mac). After getting inside the container using docker exec -t and running curl for a POST request, I get the error:
curl command: curl http://localhost:19999
Failed connect to localhost:19999; Connection refused.
I have the EXPOSE instruction in my Dockerfile and do not want to publish ports to my host. My service is also up and running inside the container. I also have the property within the config set as
"ExposedPorts": {"19999/tcp": {}}
(obtained through docker inspect <container id/name>). Any idea why this is not working? Using Docker for Mac.
I'd post my docker-compose file too, but this is being built through Maven. I can confirm that I am exposing my port using 19999:19999. Another weird issue: with my proxies disabled, it would run a very lightweight command for my custom service once and then wouldn't run it again, returning the same error as above. The issue only occurs on my machine and not on others.
Hints:
The app must be listening on port 19999, which it probably is not.
The EXPOSE that you're using inside the Dockerfile does nothing.
Usually there is no need to change the default port on which an application listens: each container has its own IP, so you shouldn't run into a port conflict.
Answer:
Instead of curling 19999, try the default port on which your app would normally be listening (it's hard to guess what you are trying to run).
If you don't publish a port (with the docker run -p option or the Docker Compose ports: option), you cannot directly reach the container on Docker for Mac. See "Known limitations, use cases, and workarounds" in the Docker Desktop for Mac documentation: the "per-container IP addressing is not possible" item is what you're trying to attempt.
The docker inspect IP address is basically useless, except in one very specific Docker configuration (on a native-Linux host, calling from outside of Docker, on the same host); I wouldn't bother looking it up.
The Dockerfile EXPOSE directive and similar runtime options do very little and mostly serve as documentation. Even if you have that configured, you still need to separately publish the port when you start the container to reach it from outside of Docker space.
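A minimal sketch of the combined fix, assuming the service really does listen on port 19999 inside the container (the image name is a placeholder):

# Publish the port at run time; -p does what EXPOSE alone does not:
docker run -d -p 19999:19999 my/image
# Verify the app is actually listening inside the container
# (if netstat is installed in the image):
docker exec <container id/name> netstat -tln
# Then, from the host:
curl http://localhost:19999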

Cannot download Docker images - no such host

I have a web app on my home PC, which uses Docker. It works as expected.
I have installed Docker on my work PC. So far I have run docker run hello-world and I see the expected result (screenshot 1).
I then try docker run --name some-mongo -d mongo:tag and I see a "no such host" error (screenshot 2).
I have spent a lot of time looking into this. So far I have tried (in Docker for Windows Settings):
1) In Proxies: checked 'Manual proxy configuration' and specified http://proxy1:8080 as the HTTP and HTTPS proxy server (this is what is specified in Internet Settings).
2) In Network: specified a fixed DNS server of 8.8.8.8.
This has made no difference. What is the problem? Do I need to pass my username and password to the proxy server? I am confused why the command in screenshot 1 works as expected whilst the command in screenshot 2 does not.
I have Docker for Windows on a Windows 10 PC. I am using Linux containers.
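Regarding the username/password question: proxy credentials are conventionally embedded in the proxy URL, so one thing to try in the same 'Manual proxy configuration' field is the form below (a guess at the cause; username and password are placeholders):

http://username:password@proxy1:8080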

Simple Nginx server on docker returns 503

I'm just starting up with Docker and the first example that I was trying to run already fails:
docker container run -p 80:80 nginx
The command successfully fetches the nginx/latest image from the Docker Hub registry and runs the new container; there is no indication in CMD of anything going wrong. When I browse to localhost:80 I get a 503 (Service Unavailable). I'm doing this test on Windows 7.
I tried the same command on another computer (this time on macOS) and it worked as expected, no issues.
What might be the problem? I found some issues on SO similar to mine, but they were connected with the usage of nginx-proxy, which I don't use and don't even know what it is. I'm trying to run a normal HTTP server.
//EDIT
When I try to bind my container to a different port, for example:
docker container run -p 4201:80 nginx
I get ERR_CONNECTION_REFUSED in Chrome, so the connection can't be established at all, as if the destination did not exist. Why is that?
The reason it didn't work is that on Windows, Docker publishes containers on a different IP than localhost. That IP is shown at the top of the Docker client console.
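On Windows 7, Docker runs through Docker Toolbox inside a VirtualBox VM, so published ports show up on the VM's IP rather than on localhost. A sketch of finding that IP, assuming the default machine name:

docker-machine ip default
# Typically prints 192.168.99.100; browse to http://192.168.99.100:80
# (or :4201 for the second port mapping) instead of localhost.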

Apache NiFi: The request contained an invalid host header

I am trying to run Apache NiFi in Docker on my Rancher server. Rancher is running correctly, as I have other services running. It is installed on a Debian box.
I am trying to test the official Apache NiFi container. As Rancher's default port is 8080, I am trying to run it on another port. I am running the first command as it is referenced in the documentation:
docker run --name nifi -p 9090:9090 -d -e NIFI_WEB_HTTP_PORT='9090' apache/nifi:latest
This gives me the error I mentioned in the title:
The request contained an invalid host header [xx.xx.xx.xx:9090] in the request [/nifi]. Check for request manipulation or third-party intercept.
I have tried to run it on an Ubuntu laptop where Docker is freshly installed, and it started without problems.
If I get to the container's command line with docker exec -it nifi bash, I see that I have no vi, nano, or any other way of editing the NiFi configuration file where I am supposed to change that information.
I have tried to create it directly from the Rancher interface, but it stays for a very long time just starting the container.
What am I doing wrong?
Apache NiFi 1.6.0 was just released (April 8, 2018) and the Docker image should update within the next few days to refer to that version. In 1.6.0, the host header handling was relaxed to be more user-friendly:
NIFI-4761 Host headers are not blocked on unsecured instances (i.e. unless you have configured TLS, you won't see this message anymore)
NIFI-4761 A new property in nifi.properties (nifi.web.proxy.host) was added to allow for listing acceptable hostnames that are not the nifi.web.http(s).host
NIFI-4788 The Dockerfile was updated to allow for this acceptable listing via a parameter like NIFI_WEB_PROXY_HOST='someotherhost.com'
I'm not familiar with Rancher, but I would think the container would have some text editor installed.
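Once the 1.6.0 image is available, the whitelisting from NIFI-4788 should look roughly like the sketch below; the proxy host value is an example, substitute the address you reach the container through:

docker run --name nifi -p 9090:9090 -d \
  -e NIFI_WEB_HTTP_PORT='9090' \
  -e NIFI_WEB_PROXY_HOST='xx.xx.xx.xx:9090' \
  apache/nifi:latest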
Finally, Rancher, through the web interface and after a LONG wait, managed to start the container, and it works.
I still don't know why it is not working from the command line, but that is secondary now.

no route to host between 2 docker containers in same host

I have two Docker containers which run on the same host (a CentOS 6 server).
container 1 >> my web application (ports mapped to some random port of the host)
container 2 >> Python Selenium test scripts (runs headless Firefox)
My test cases fail saying "problem loading page".
Basically, the issue is that the second container, or any other container residing on the same host, is not able to access my web application.
But my web app is accessible to the outside world.
I linked both containers and still I am facing the problem.
I tried replicating the same setup on my laptop (Ubuntu) and it's working fine!
Any help appreciated!!
Thanks in advance
I think order matters when linking containers. You should start container 1 (the web application) first and then link container 2 with the web app.
You need to change your selenium scripts to use the docker link id or alias as the hostname.
For example if you did:
$ sudo docker run -d --name webapp my/webapp
$ sudo docker run -d -P --name selenium --link webapp:webapp my/selenium
then your selenium scripts should point to http://webapp/
I had this problem on Fedora (22), for some containers (not all). Upon inspection, it turned out there is a special DOCKER chain in iptables that can cause some connections to be dropped. Appending an accept rule for that chain made things work:
sudo iptables -A DOCKER -p tcp -j ACCEPT
(While searching for the problem before hitting this question, I found suggestions that this also occurs on CentOS and RHEL.)
Yes, the order of container launch does matter, but I am launching my web application container through Jenkins.
Jenkins is configured in container 2.
So I cannot launch my web application (container 1) manually.
Is there any other solution for this, something like bidirectional linkage?
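One option, sketched below, assuming a Docker version with user-defined networks (1.9+): instead of --link, attach both containers to the same user-defined bridge network. Name resolution is then bidirectional, so launch order stops mattering (network and image names are examples):

sudo docker network create appnet
sudo docker run -d --net appnet --name webapp my/webapp
sudo docker run -d -P --net appnet --name selenium my/selenium
# Each container can now reach the other by name, e.g. http://webapp/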
