I'm trying to stand up a temporary NiFi server to support a proof-of-concept demo for a customer. For these types of short-lived servers I like to use Docker when possible. I'm able to get the NiFi container up and running without any issues, but I can't figure out how to access its UI from a browser on a remote host. I've tried the following docker run variations:
docker run --name nifi \
-p 8080:8080 \
-d \
apache/nifi:latest
docker run --name nifi \
-p 8080:8080 \
-e NIFI_WEB_HTTP_PORT='8080' \
-d \
apache/nifi:latest
docker run --name nifi \
-p 8080:8080 \
-e NIFI_WEB_HTTP_HOST=${hostname-here} \
-e NIFI_WEB_HTTP_PORT='8080' \
-d \
apache/nifi:latest
My NiFi version is 1.8.0. I'm fairly certain that my problems are related to the host-headers blocker feature added in version 1.5.0. I've seen a few questions similar to mine, but no solutions.
Is it possible to access the NiFi UI from a remote host after version 1.5.0?
Can host-headers blocker be disabled for a non-prod demo?
Would a non-Docker install on my server present the same host-headers blocker issues?
Should I use 1.4 for my demo and save myself a headache?
While there was a bug around 1.5.0 surrounding host headers in Docker, that issue was resolved; additionally, the host header check is now only enforced in secured environments (you will see a note about this in the logs on container startup).
The commands you provide in your question are all workable for accessing NiFi on the associated mapped port in each example, and I have verified this in 1.6.0, 1.7.0, and 1.8.0. You may want to evaluate the network security settings of the remote machine in question (cloud-provided instances, for example, will typically require explicit security groups exposing ports).
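For example, a quick reachability check from another machine can confirm whether the port is exposed at the network level (a sketch; assumes nc and curl are available, and <host-ip> is your server's address):
# check that the TCP port is reachable at all
nc -zv <host-ip> 8080
# then fetch the UI path directly
curl -v http://<host-ip>:8080/nifi/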
I had the same issue; I was not able to access the web UI remotely. It turned out to be a firewall issue. Disabling firewalld, or adding a custom firewall rule to allow the Docker network and port, should solve the issue.
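For example, with firewalld that might look like this (a sketch, assuming the default NiFi port 8080):
# open the published port permanently and reload the rules
sudo firewall-cmd --permanent --zone=public --add-port=8080/tcp
sudo firewall-cmd --reload
# or, for a quick non-prod test only, stop the firewall entirely
sudo systemctl stop firewalld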
The docker-compose.yml is shared here
I have successfully installed and am running Nextcloud in docker. The installation uses LetsEncrypt to generate the certificates and runs without problems when I access it using HTTP.
When I attempt to use HTTPS, however, I am getting 500 Internal Server Error.
Researching this problem, I have learned that by default the nginx proxy container is not actually configured to use HTTPS (based on various nextcloud descriptions I have read online). Apparently what has to be done is to configure it to be able to use SSL.
Trouble is, there is no step by step procedure given for doing this.
Just about everywhere I look I am finding plenty of tutorials and instructions that include "set up letsencrypt" or "place certs in" using certbot or other methods. This is nice, but the docker-compose.yml file I used already sets up the certs! What I need is some clear instructions on how to configure the nginx-proxy to use SSL. I have been unable to find such instructions.
Can someone tell me how to properly configure the nginx-proxy container to use SSL? Or failing that, can someone point me to some clear instructions for properly configuring it for SSL?
I personally use the https://github.com/nginx-proxy/nginx-proxy container to do that.
Here is the command I use to start my reverse-proxy container:
docker run --detach \
--name reverse-proxy \
--net network-proxy \
--publish 80:80 --publish 443:443 \
--volume <CERT_FOLDER_ON_HOST>:/etc/nginx/certs:ro \
--volume /var/run/docker.sock:/tmp/docker.sock:ro \
nginxproxy/nginx-proxy
The line below puts the container on the same network as the other containers.
--net network-proxy \
The line below tells the proxy where it can find the certificates for your domains.
--volume <CERT_FOLDER_ON_HOST>:/etc/nginx/certs:ro \
You have to put your certs in that folder; to generate a certificate you can use https://certbot.eff.org/. The certs have to be named like this: example.com.key for the private key and example.com.crt for the cert.
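For example, generating and placing a certificate might look like this (a sketch, assuming certbot is installed on the host and port 80 is free for the standalone challenge):
sudo certbot certonly --standalone -d example.com
# certbot writes the files under /etc/letsencrypt/live/example.com/;
# copy and rename them to the pattern nginx-proxy expects
sudo cp /etc/letsencrypt/live/example.com/privkey.pem <CERT_FOLDER_ON_HOST>/example.com.key
sudo cp /etc/letsencrypt/live/example.com/fullchain.pem <CERT_FOLDER_ON_HOST>/example.com.crt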
If you need help: https://github.com/nginx-proxy/nginx-proxy and https://certbot.eff.org/
The line below gives the proxy access to the Docker socket so that it can detect the other containers and route requests to them.
--volume /var/run/docker.sock:/tmp/docker.sock:ro \
After that, you have to set the VIRTUAL_HOST environment variable in the other containers to tell the reverse proxy which domain name each container serves.
An example of a web server container:
docker run --detach \
--name webserver \
--net network-proxy \
--env 'VIRTUAL_HOST=<DOMAIN_NAME>' \
--volume <WEBSITE_FOLDER_ON_HOST>:/usr/share/nginx/html:ro \
nginx
While the answer above given by Alexandre is a good one (I gave it a point for the effort and it does work), I ended up solving my problem by writing my own reverse proxy server. This server is specifically designed for Nextcloud and runs well in a Docker container. It requires little configuration and is easier to set up and use than an nginx proxy. It works with SSL and acts as a good security layer in front of Nextcloud.
Reverse proxies are easy to make using various frameworks. Once I decided to go that route it took less than three hours to make this one for Nextcloud.
If there is interest I will be happy to share the code and container.
I'm trying to implement a backup system. This system requires the execution of a docker container at the time of the backup on the specific node. Unfortunately I have not been able to get it to execute on the required node.
This is the command I'm using and executing on the docker swarm manager node. It is creating the container on the swarm manager node and not the one specified in the constraint. What am I missing?
docker run --rm -it --network cluster_02 -v "/var/lib:/srv/toBackup" \
-e BACKUPS_TO_KEEP="5" \
-e S3_BUCKET="backup" \
-e S3_ACCESS_KEY="" \
-e S3_SECRET_KEY="" \
-e constraint:node.hostname==storageBeta \
comp/backup create $BACKUP_NAME
You are using the older classic Swarm syntax to try to run your container, but you are almost certainly running Swarm Mode. If you installed your swarm with docker swarm init and can see nodes with docker node ls on the manager, then this is the case.
Classic Swarm ran as a container that was effectively a reverse proxy to multiple docker engines, intercepting calls to commands like docker run and sending them to the appropriate node. It is generally recommended to avoid this older swarm implementation unless you have a specific use case and take the time to configure mTLS on each of your docker hosts.
Swarm Mode provides an HA manager using Raft for quorum (same as etcd), handles encryption of all management requests, configures overlay networking, and works with a declarative model where you give the target state rather than imperative commands to run. It's a very different model from classic Swarm. Notably, Swarm Mode only works on services and stacks, and completely ignores docker run and other local commands, e.g.:
docker service create \
--name backup \
--constraint node.hostname==storageBeta \
--network cluster_02 \
-v "/var/lib:/srv/toBackup" \
-e BACKUPS_TO_KEEP="5" \
-e S3_BUCKET="backup" \
-e S3_ACCESS_KEY="" \
-e S3_SECRET_KEY="" \
comp/backup create $BACKUP_NAME
Note that jobs are not well supported in Swarm Mode yet (there is an open issue seeking community feedback on including this functionality). It is currently focused on running persistent services that do not normally exit unless there is an error. If your command is expected to exit, you can include an option like --restart-max-attempts 0 to prevent swarm mode from restarting it. You may also have additional work to do if the network is not swarm scoped.
I'd also recommend converting this to a docker-compose.yml file and deploying it with docker stack deploy, to better document the service as a config file.
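A minimal sketch of what that stack file could look like (the image, environment, network, and constraint come from the question; the external network declaration and restart policy are assumptions you may need to adjust):
version: "3.7"
services:
  backup:
    image: comp/backup
    command: create $BACKUP_NAME
    networks:
      - cluster_02
    environment:
      BACKUPS_TO_KEEP: "5"
      S3_BUCKET: "backup"
      S3_ACCESS_KEY: ""
      S3_SECRET_KEY: ""
    volumes:
      - /var/lib:/srv/toBackup
    deploy:
      placement:
        constraints:
          - node.hostname == storageBeta
      restart_policy:
        condition: none
networks:
  cluster_02:
    external: true
You would then deploy it with docker stack deploy -c docker-compose.yml backup.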
I have pulled the hyperledger/composer-rest-server Docker image. Now, if I want to run this image, which port should I expose? Like mentioned below:
docker run --name composer-rest-server --publish XXXX:YYYY --detach hyperledger/composer-rest-server
Please tell me: what should I replace XXXX and YYYY with?
I run the rest server in a container using a command as follows:
docker run -d \
-e COMPOSER_CARD="admin@test-net" \
-e COMPOSER_NAMESPACES="never" \
-v ~/.composer:/home/composer/.composer \
--name rest -p 3000:3000 \
hyperledger/composer-rest-server
For the published port, the first element is the port that will be used on the Docker host, and the second is the port it is forwarded to inside the container. (The port inside the container will always be 3000 by default and is more complex to change.)
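So, for example, to reach the REST server on host port 8080 instead, only the first number changes (a sketch based on the command above):
docker run -d \
-e COMPOSER_CARD="admin@test-net" \
-e COMPOSER_NAMESPACES="never" \
-v ~/.composer:/home/composer/.composer \
--name rest -p 8080:3000 \
hyperledger/composer-rest-server
The REST server would then be reachable at http://<docker-host>:8080 while still listening on 3000 inside the container.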
I'm passing 2 environment variables into the Container which the REST server will recognise - Namespaces just keeps the endpoints simple, but the COMPOSER_CARD is essential for the REST server to start properly.
I'm also sharing a volume between the Docker Host and the Container which is where the Cards are stored, so that the REST server can find the COMPOSER_CARD referred to in the environment variable.
Warning: if you are trying to test the REST server with the development Fabric, you need to understand the IP networking and addressing of the Docker containers. By default the Composer business network cards will be built using localhost as the address of the Fabric servers, but you can't use localhost in the REST server container: inside the container, localhost resolves to the container itself, so the REST server will fail to find the Fabric.
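One way around this (a sketch; the network name composer_default is an assumption, so check docker network ls for the name your development Fabric actually uses) is to attach the REST server container to the Fabric's Docker network, and to use the Fabric container names rather than localhost in the card's connection profile:
# find the network the development Fabric containers are on
docker network ls
# attach the running REST server container to that network
docker network connect composer_default rest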
There is a tutorial in the Composer Docs that is focused on Multi-User authentication, but it does also cover the networking aspects of using the REST Server Container. There is general information about the REST server here.
For the life of me I can't get to the NiFi Web UI. It makes me hate security.
TLDR; I can't find the right way to start NiFi in a docker container and still access the UI. Here's what I've tried (for 8 hours):
docker run --name nifi \
-p 8080:8080 \
-d \
apache/nifi:latest
I go to localhost:8080/nifi - timeout. Same on 127.0.0.1.
docker inspect nifi - IP Gateway is 172.20.0.1 with actual IP of 172.0.0.2. Invalid Host Header and timeout, respectively.
Start randomly trying stuff:
# I tried localhost, 0.0.0.0, various IP addresses
docker run --name nifi \
-p 8080:8080 \
-e NIFI_WEB_HTTP_HOST=${hostname-here} \
-d \
apache/nifi:latest
I also built a full docker-compose.yml for my diminishingly-possible stack. Everything works except for:
nifi:
image: apache/nifi:latest
hostname: nifi
depends_on:
- zookeeper
- broker
- schema_registry
- nifi-registry
ports:
- "8080:8080"
No changes. Can you help me?
Updates 1
I used the docker-compose.yml file from the repo linked in the comments below; thank you @Chaffelson. Still dealing with a timeout on localhost. So I spun up a droplet with docker-machine.
The services start fine, and the logs indicate the Jetty server is up for both NiFi Registry and NiFi. I can access NiFi Registry at <host ip>:18080/nifi-registry exactly like I can on my local machine.
I can't access <host ip>:8080/nifi - I get an invalid host header response.
So I added to docker-compose.yml:
environment:
# Tried both with and without quotes
NIFI_WEB_HTTP_HOST: "<host-ip>"
The Jetty server fails to start. Any insight?
Updates 2
From the logs, using just docker run --name nifi -p 8080:8080 -d apache/nifi:1.5.0:
[NiFi Web Server-24] o.a.n.w.s.HostHeaderSanitizationCustomizer Request host header [45.55.36.15:8080] different from web hostname [348146fc286f(:8080)]. Overriding to [348146fc286f:8080/nifi] where 45.55.36.15 is the host ip.
This is the crux of my problem.
Updates 3
I disabled ufw (firewall) on my local machine. I can now access nifi via localhost:8080. No progress on actually accessing on a remote host (which is the point of all this).
Sorry to hear you are having trouble with this. In Apache NiFi 1.5.0, the stricter host header protection was enabled to prevent host header poisoning attacks. Unfortunately, we have seen that this was not documented sufficiently for users who were not familiar with the configuration. In response, we have made changes that are currently in master and will be included in the upcoming 1.6.0 release:
a new property nifi.web.proxy.host was added to nifi.properties which accepts a comma-separated list of valid host headers independent of the Jetty hostname
the Docker configuration has been updated to allow proxy whitelisting from the run command
the host header protection is only enforced on "secured" NiFi instances. This should make it much easier for users to quickly deploy sandbox environments like you are doing in this case
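As an illustration of the first point above, the new property takes a comma-separated list of host headers to accept (a sketch; the IP and port are the ones from your logs):
# in nifi.properties (1.6.0 and later)
nifi.web.proxy.host=45.55.36.15:8080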
For an immediate fix, this command should work:
docker run --name nifi \
-p 8080:8080 \
-e NIFI_WEB_HTTP_HOST=172.20.0.1 \
-d \
apache/nifi:latest
You can also intercept the requests using a Chrome extension like ModHeader to override the Host header and verify that it works when it matches the expected host. Along with Daniel's excellent additions, this should help you until the next version is released.
I use this and similar docker-compose files for my automated NiFi Python client testing. It looks superficially similar to yours, and works perfectly well on both Ubuntu (Travis-CI) and my local MacBook Pro.
I suggest you try running this file as a known-good configuration, and also examine 'docker logs -f nifi' to see if your environment is throwing errors on startup.
The environment variables NIFI_WEB_HTTP_HOST and NIFI_WEB_HTTP_PORT are for when you are accessing Docker NiFi on a port other than 8080, so that you don't trip the host-headers blocker. I contributed these modifications to the project recently, so if you are having trouble with them I would like to know, so I can fix it.
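For example, to expose the UI on port 9090 instead of 8080, the port mapping and both variables change together (a sketch based on those variables; replace <host-ip> with the address you will browse to):
docker run --name nifi \
-p 9090:9090 \
-e NIFI_WEB_HTTP_HOST=<host-ip> \
-e NIFI_WEB_HTTP_PORT='9090' \
-d \
apache/nifi:latest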
I had the same issue; I was not able to access the web UI remotely. It turned out to be a firewall issue. Disabling firewalld, or adding a custom firewall rule to allow the Docker network and port, should solve the issue.
Here is my docker-compose.yml:
In Docker, use this; it fixed my problem:
--net=host
With host networking the container shares the host's network stack directly, so there is no internal port forwarding path in the way.
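Applied to the NiFi image from the question, that would look something like this (a sketch; with --net=host no -p mapping is needed, and NiFi listens directly on the host's port 8080):
docker run --name nifi \
--net=host \
-d \
apache/nifi:latest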
I have multiple Docker containers on a single machine. Each container runs a process and a web server that provides an API for that process.
My question is: how can I access the APIs from my browser when the default port is 80? To be able to access the web server inside a Docker container, I do the following:
sudo docker run -p 80:80 -t -i <yourname>/<imagename>
This way I can do the following from my computer's terminal:
curl http://hostIP:80/foobar
But how to handle this with multiple containers and multiple web servers?
You can either expose multiple ports, e.g.
docker run -p 8080:80 -t -i <yourname>/<imagename>
docker run -p 8081:80 -t -i <yourname1>/<imagename1>
or put a proxy (nginx, apache, varnish, etc.) in front of your API containers.
Update:
The easiest way to set up a proxy would be to link it to the API containers, e.g. with an Apache config containing:
RewriteRule ^api1/(.*)$ http://api1/$1 [proxy]
RewriteRule ^api2/(.*)$ http://api2/$1 [proxy]
you may run your containers like this:
docker run --name api1 <yourname>/<imagename>
docker run --name api2 <yourname1>/<imagename1>
docker run --link api1:api1 --link api2:api2 -p 80:80 <my_proxy_container>
This might be somewhat cumbersome, though, if you need to restart the API containers, as the proxy container would have to be restarted as well (links are fairly static in Docker as of yet). If this becomes a problem, you might look at approaches like fig or auto-updated proxy configuration: http://jasonwilder.com/blog/2014/03/25/automated-nginx-reverse-proxy-for-docker/ . The latter link also shows proxying with nginx.
Update II:
In more modern versions of Docker, it is possible to use a user-defined network instead of the links shown above to overcome some of the inconveniences of the deprecated link mechanism.
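A sketch of the same setup on a user-defined network (the network name apinet is an arbitrary choice; the image names are the placeholders from above):
# create a user-defined bridge network
docker network create apinet
docker run -d --name api1 --net apinet <yourname>/<imagename>
docker run -d --name api2 --net apinet <yourname1>/<imagename1>
# containers on a user-defined network resolve each other by name,
# so the proxy config above can keep using http://api1/ and http://api2/
docker run -d --name proxy --net apinet -p 80:80 <my_proxy_container>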
Only a single process can be bound to a given port at a time, so running multiple containers means each will be exposed on a different port number. Docker can assign these ports automatically for you using the -P flag.
sudo docker run -P -t -i <yourname>/<imagename>
You can use the "docker port" and "docker inspect" commands to see the actual port number allocated to each container.
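For example (a sketch; the container name and the mapping shown are illustrative):
# list all port mappings for a container
docker port <container-name>
# sample output: 80/tcp -> 0.0.0.0:49153
# or extract the host port bound to 80/tcp with a Go template
docker inspect --format '{{ (index (index .NetworkSettings.Ports "80/tcp") 0).HostPort }}' <container-name>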