Apache NiFi "The request contained an invalid host header" - docker

I am trying to run Apache NiFi in Docker on my Rancher server. Rancher is running correctly, as I have other services running. It is installed on a Debian box.
I am trying to test the official Apache NiFi container. Since Rancher's default port is 8080, I am trying to run NiFi on another port. I am running the first command referenced in the documentation:
docker run --name nifi -p 9090:9090 -d -e NIFI_WEB_HTTP_PORT='9090' apache/nifi:latest
This gives me the error I mentioned in the title:
The request contained an invalid host header [xx.xx.xx.xx:9090] in the request [/nifi]. Check for request manipulation or third-party intercept.
I have tried to run it on an Ubuntu laptop with a freshly installed Docker, and there it started without problems.
If I get into the container with docker exec -it nifi bash, I see that there is no vi, nano, or any other way of editing the NiFi configuration file where I am supposed to change that setting.
I have also tried to create the container directly from the Rancher interface, but it sits in the starting state for a very long time.
What am I doing wrong?

Apache NiFi 1.6.0 was just released (April 8, 2018) and the Docker image should update within the next few days to refer to that version. In 1.6.0, the host header handling was relaxed to be more user-friendly:
NIFI-4761 Host headers are not blocked on unsecured instances (i.e. unless you have configured TLS, you won't see this message anymore)
NIFI-4761 A new property in nifi.properties (nifi.web.proxy.host) was added to allow for listing acceptable hostnames that are not the nifi.web.http(s).host
NIFI-4788 The Dockerfile was updated to allow for this acceptable listing via a parameter like NIFI_WEB_PROXY_HOST='someotherhost.com'
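Putting those changes together, once the 1.6.0 image is available, a run command like the following should work. This is a sketch: the NIFI_WEB_PROXY_HOST value is a placeholder for whatever host:port you actually use to reach the box.

```shell
# Sketch for apache/nifi 1.6.0+: publish on 9090 and whitelist the
# external host:port clients will use (placeholder value below).
docker run --name nifi -d \
  -p 9090:9090 \
  -e NIFI_WEB_HTTP_PORT='9090' \
  -e NIFI_WEB_PROXY_HOST='xx.xx.xx.xx:9090' \
  apache/nifi:latest
```

Note that per NIFI-4761, on an unsecured (plain HTTP) instance the host header check is relaxed in 1.6.0 anyway, so the extra variable mainly matters once you enable TLS or sit behind a proxy.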
I'm not familiar with Rancher, but I would think the container would have some text editor installed.

Finally, Rancher, through the web interface and after a LONG wait, has managed to start the container and it works.
I still don't know why on the command line it is not working, but now it is secondary.

Related

Cannot download Docker images - no such host

I have a web app on my home PC, which uses Docker. It works as expected.
I have installed Docker on my work PC. So far I have run: docker run hello-world and I see an expected result:
I then try: docker run --name some-mongo -d mongo:tag and I see this:
I have spent a lot of time looking into this. So far I have tried (in Docker for Windows Settings):
1) In Proxies, check 'Manual proxy configuration' and specify http://proxy1:8080 as the HTTP and HTTPS proxy server (this is what is specified in Internet Settings).
2) In Network specify a fixed DNS server of: 8.8.8.8.
This has made no difference. What is the problem? Do I need to pass my username and password to the proxy server? I am confused why the command in screenshot 1 works as expected whilst the command in screenshot 2 does not.
I have Docker for Windows on a Windows 10 PC. I am using Linux containers.
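On the credentials question: if the proxy requires authentication, one thing worth trying (an assumption, since the question doesn't say how the proxy is set up) is embedding the username and password in the proxy URL using the standard URL form. In Docker for Windows this value goes into the 'Manual proxy configuration' fields; on a Linux host, the daemon reads the equivalent environment variables:

```shell
# Standard proxy-URL form with credentials ("user" and "password" are
# placeholders). On Docker for Windows, paste this URL into Settings >
# Proxies; on Linux, the daemon environment can carry it instead:
export HTTP_PROXY="http://user:password@proxy1:8080"
export HTTPS_PROXY="http://user:password@proxy1:8080"
```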

Simple Nginx server on docker returns 503

I'm just starting up with Docker and the first example that I was trying to run already fails:
docker container run -p 80:80 nginx
The command successfully fetches the nginx/latest image from the Docker Hub registry and runs the new container, there is no indication in CMD of anything going wrong. When I browse to localhost:80 I get 503 (Service Unavailable). I'm doing this test on Windows 7.
I tried the same command on another computer (this time on macOS) and it worked as expected, no issues.
What might be the problem? I found some issues on SO similar to mine, but they were connected with the usage of nginx-proxy, which I don't use and don't even know what it is. I'm just trying to run a normal HTTP server.
//EDIT
When I try to bind my container to a different port, for example:
docker container run -p 4201:80 nginx
I get ERR_CONNECTION_REFUSED in Chrome, so basically the connection can't be established because the destination does not exist. Why is that?
The reason it didn't work is that on Windows, Docker publishes ports on a different IP than localhost. That IP is shown at the top of the Docker client console.
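Since this was Windows 7, Docker was presumably running through Docker Toolbox inside a VirtualBox VM (an assumption based on the OS version), so the published port is reachable at the VM's IP rather than at localhost. That IP can be queried with:

```shell
# Docker Toolbox: print the IP of the default Docker VM,
# then browse to http://<that-ip>:80 instead of localhost
docker-machine ip default
```

Typical Toolbox installs report an address like 192.168.99.100, but the exact value depends on the local VirtualBox network.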

docker app serving on https and connecting to external rethinkdb

I'm trying to launch a docker container that is running a tornado app in python 3.
It serves a few API calls and writes data to a RethinkDB service on the system. RethinkDB does not run inside a container.
The system it runs on is Ubuntu 16.04.
Whenever I tried to launch the container with docker-compose, it would crash, saying the connection to localhost:28015 was refused.
I went researching the problem and realized that Docker has its own network and that external connections must be configured prior to launching the container.
I used this command from a question I found to make it work:
docker run -it --name "$container_name" -d -h "$host_name" -p 9080:9080 -p 1522:1522 "$image_name"
I've changed the container name, host name, ports and image name to fit my own application.
Now the container is not crashing, but I have two problems:
I can't reach it from a browser by pointing to https://localhost/login
I lose the docker-compose usage. This is problematic if we want to add more services that talk to each other in the future.
So, how do I launch a docker that can talk to my rethinkdb database without putting that DB into a container?
Please, let me know if you need more information to answer this question.
I'd appreciate your guidance in this.
The end result is that the container will serve requests coming over HTTPS.
For example, I have an endpoint called /getURL.
The request includes a token verified in the DB. The URL is like this:
https://some-domain.com/getURL
After verification with the DB, it will send back a relevant response.
The container needs to be able to talk on 443 and also on 28015 with the RethinkDB service.
(Since 443 and HTTPS involve the use of certificates, I'd appreciate a solution that handles this on regular HTTP with some random port too, and I'll take it from there.)
Thanks!
P.S. The service works when I launch it without Docker from PyCharm; it's the Docker configuration I have problems with.
I found a solution.
I needed to add this so that the container can connect to the RethinkDB database running on the host:
--network="host"
This solution works for me right now, but it isn't the best one, so I won't mark it as the answer for now.
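An alternative to --network="host" that keeps the container on Docker's normal bridge network (and so keeps docker-compose usable) is available on Docker 20.10 and later: map a special hostname to the host's gateway and point the app at it. A sketch, with the image name as a placeholder:

```shell
# Docker 20.10+ on Linux: make host.docker.internal resolve to the host,
# so the container can reach the host's RethinkDB on port 28015
docker run -d -p 9080:9080 \
  --add-host=host.docker.internal:host-gateway \
  my-tornado-image
```

The app would then connect to host.docker.internal:28015 instead of localhost:28015; in docker-compose the same mapping goes under extra_hosts.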

localhost not working docker windows 10

I am using VS2017 Docker support. VS created the Dockerfile for me, and when I build the docker-compose file, it creates the container and runs the app on a 172.x.x.x IP address. But I want to run my application on localhost.
I have tried many things but nothing worked. I followed the Docker docs as a starter and the building Microsoft sample app guide. The second link works perfectly, but I get HTTP Error 404 when I try the first link's approach.
Any help is appreciated.
Most likely a different application already runs at port 80. You'll have to forward your web site to a different port, e.g.:
docker run -d -p 5000:80 --name myapp myasp
And point your browser to http://localhost:5000.
When you start a container you specify which inner ports will be exposed as ports on the host through the -p option. -p 80:80 exposes the inner port 80 used by web sites to the host's port 80.
Docker won't complain though if another application already listens at port 80, like IIS, another web application or any tool with a web interface that runs on 80 by default.
The solution is to:
Make sure nothing else runs on port 80 or
Forward to a different port.
Forwarding to a different port is a lot easier.
To ensure that you can connect to a port, use the telnet command, e.g.:
telnet localhost 5000
If you get a blank window immediately, it means a server is up and running on that port. If you get a message and a timeout after a while, it means nothing is running. You can use this both to check for free ports and to ensure you can connect to your container's web app.
PS: I ran into this just a week ago, as I was trying to set up a SQL Server container for tests. I already run 1 default and 2 named instances, and docker didn't complain at all when I tried to create the container. It took me a while to realize what was wrong.
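Besides telnet, you can also ask Windows directly which process already occupies the port, then look the PID up (the PID value below is a placeholder taken from the netstat output):

```shell
# Windows: find what is listening on port 80 (last column is the PID),
# then identify the owning process by that PID
netstat -ano | findstr :80
tasklist /FI "PID eq 4"   # replace 4 with the PID netstat reported
```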
In order to access the example posted in the Docker docs, which you pointed out as not working, follow the steps below.
1 - List all your docker containers:
docker ps -a
After you run this command you should be able to view all your docker containers, and you should see a container with the name webserver listed there if you have followed the docker docs example correctly.
2 - Get the IP address where your webserver container is running. To do that run the following command.
docker inspect -f "{{ .NetworkSettings.Networks.nat.IPAddress }}" webserver
You should now have the IP address at which the webserver container is running; hopefully you are familiar with this step, as it also appears in the building Microsoft sample app example that you attached to the question.
Access the IP address you get from the above command and you should see the desired output.
Answering your first question (accessing a docker container with localhost in Docker for Windows): on a Windows host you cannot access the container with localhost due to a limitation in the default NAT network stack. A more detailed explanation of this issue can be obtained by visiting this link. It seems the docker documentation is not yet updated, but this issue only exists on Windows hosts.
There is an issue reported for this as well - Follow this link to see that.
Hope this helps you out.
EDIT
The solution for this issue seems to be coming in a future Windows release. Until that release comes out, this limitation remains on Windows hosts. Follow this link -> https://github.com/MicrosoftDocs/Virtualization-Documentation/issues/181
For those encountering this issue in 2022: changing localhost to 127.0.0.1 solved it for me.
There is another problem too: the parameters must be in the correct order.
This is WRONG:
docker run container:latest -p 5001:80
This sequence starts the container, but the -p parameter is ignored (everything after the image name is passed to the container as its command), so the container has no port mappings.
This is correct:
docker run -p 5001:80 container:latest

Some questions of Docker -p and Dockerfile

1: docker run -d -p 3000:3000 images
If I bring up a server on localhost:3000 in the container, how can I open it in my machine's browser? What is the IP?
I've tried localhost:3000 and 0.0.0.0:3000.
2: I used docker pull ubuntu and docker run to start it; after updating and deploying the server, I committed it. So now I have ubuntu and a new image.
The next time I run a container using this new image, the shell scripts still need to be sourced, and the server needs to be started again.
How can I commit the image so that it sources the scripts and deploys itself when I docker run it?
Thanks.
I don't quite understand questions 2 or 3, can you add more context?
Regarding your question about using -p: you should be able to visit the server in your browser at http://localhost:3000/. However, that assumes a few things are true.
First, you used -p 3000:<container-port> - looks good on this point.
Second, the image you have run exposed port 3000 (EXPOSE 3000).
And third, the service running in the container is listening on 0.0.0.0:3000. If it is listening on localhost inside the container, then the port export will not work. Each container has its own localhost which is usable only inside the container. So it needs to be listening on all IPs inside the container, in order for outside connections to reach the service from outside of the container.
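To illustrate that third point, here is a quick way to see the difference, using Python's built-in http.server as a stand-in for the real app (a sketch; any image with a server that accepts a bind address would do):

```shell
# Works from the host: the server binds to 0.0.0.0 inside the container,
# so the -p mapping can reach it at http://localhost:3000/
docker run -p 3000:3000 python:3-slim \
  python -m http.server 3000 --bind 0.0.0.0

# Does NOT work from the host: the server binds only to the container's
# own localhost, so the published port has nothing to forward to
docker run -p 3000:3000 python:3-slim \
  python -m http.server 3000 --bind 127.0.0.1
```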
