I've installed the Jenkins Docker image on my CentOS 7 machine. When I run
docker run -p 8080:8080 -p 50000:50000 jenkins/jenkins:lts
the output shows the message: Jenkins initial setup is required
Since my CentOS 7 machine does not have a graphical interface, how do I "expose" it so that I can see the rendered Jenkins interface from another machine?
I tried reading some documentation (https://www.jenkins.io/doc/book/security/services/) but I really don't know where and how these configurations are made.
Allow external access to port 8080 by opening it in your firewall, if you have one. Then simply access it at http://IP_OF_THE_JENKINS_SERVER:8080.
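For example, assuming your CentOS 7 host is using firewalld (an assumption; adapt if you manage iptables directly), something like this should open the ports:
sudo firewall-cmd --permanent --add-port=8080/tcp
sudo firewall-cmd --permanent --add-port=50000/tcp   # only needed if you use inbound Jenkins agents
sudo firewall-cmd --reload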
Related
I'm currently trying to install the JFrog Container Registry via the Docker image, and it errors just as it reaches the UI after setup. My installation process is as follows:
docker pull docker.bintray.io/jfrog/artifactory-jcr:latest
docker run --name artifactory -d -p 8081:8081 docker.bintray.io/jfrog/artifactory-jcr:latest
I navigate to localhost:8081/artifactory and, after the setup screen, it redirects to localhost:8082/ui/ and shows "page not found". I'm not sure why the port changes; I have looked at the installation documentation and there isn't anything about the port change. Changing the port back to 8081 just shows an HTTP Status 404 – Not Found.
I'm on Docker for Windows, looking to test this out. Any ideas what I'm doing wrong?
Artifactory's internal architecture has changed, and there are now separate microservices for Artifactory and its UI. These are fronted by the JFrog router, which listens on port 8082.
If you follow the Docker installation documentation, you can see you need to also expose port 8082.
docker run --name artifactory -d -p 8081:8081 -p 8082:8082 docker.bintray.io/jfrog/artifactory-jcr:latest
You can also drop port 8081 and stick to 8082 only. Port 8081 allows direct access to Artifactory (bypassing the JFrog router) for better performance on high-load systems.
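If you only need access through the router, a run command like this should be enough (a sketch based on the command above, keeping only port 8082):
docker run --name artifactory -d -p 8082:8082 docker.bintray.io/jfrog/artifactory-jcr:latest
You would then reach the UI at http://localhost:8082/ui/.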
I am a newbie in Docker, and noticed the command below while reading a document on installing Jenkins in Docker.
docker run -p 8080:8080 -p 50000:50000 -v /your/home:/var/jenkins_home jenkins
-p <source>:<dest> (or --publish <source>:<dest>) creates a forwarding rule from <docker-host>:<source> to <container>:<dest>.
If used multiple times, it creates multiple forwarding rules.
In your example, the traffic from <host-machine-IP-addr>:8080 is forwarded into the Jenkins container, to the service that is listening on :8080.
Exactly the same thing is happening with port :50000.
Basically, the container's Jenkins web UI is exposed on the host machine on port 8080, while the Jenkins agent (slave) port is exposed on your host machine on port 50000.
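You can also ask Docker which forwarding rules a running container actually got (assuming the container is named jenkins, which is an assumption here):
docker port jenkins
# typically prints something like:
# 8080/tcp -> 0.0.0.0:8080
# 50000/tcp -> 0.0.0.0:50000
curl -I http://localhost:8080/login   # should return an HTTP response from Jenkins through the forwarded port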
I have 2 docker containers running on my Mac host - container 1 is Jenkins from Docker Hub and container 2 is SonarQube from Docker Hub. I have both containers running successfully. I can access Jenkins from my host by going to http://localhost:8080/ and I can access my SonarQube by going to http://localhost:9000/.
The Jenkins container was started like this:
docker run -d -p 8080:8080 -p 50000:50000 jenkins/jenkins:latest
The SonarQube container was started like this:
docker run -d -p 9000:9000 sonarqube
Now I want the containers to communicate with each other, so I need to provide each container with the IP address of the other.
I got the IP address of each container by executing this:
docker inspect --format '{{ .NetworkSettings.IPAddress }}' container_name_or_id
This returns an IP address of 172.17.0.2 for the Jenkins container and 172.17.0.3 for the SonarQube container. But when I try and access the Jenkins container from my host by going to http://172.17.0.2:8080 I get a request timeout. The same thing happens when I try and access the SonarQube container from my host by going to http://172.17.0.3:9000
Is this normal behavior?
Shouldn't I be able to access each container from my host by their internal IP address?
And how can I test that one container (e.g. Jenkins) can access the other container (e.g. SonarQube) by IP address?
Is this normal behavior? Shouldn't I be able to access each container from my host by their internal IP address?
What you describe is normal behavior: you can't directly reach the Docker-internal IP addresses from a MacOS host. See "Per-container IP addressing is not possible" in the Docker for Mac docs.
How can I test that one container (e.g. Jenkins) can access the other container (e.g. SonarQube) by IP address?
This isn't something I normally "test" per se. Start up both processes and have them make their normal (HTTP) connections; if it works you'll see appropriate log messages, and if it doesn't work you'll see complaints. (Getting a root shell in a container to send ICMP packets from one container to another seems to be a popular option but doesn't prove much.)
Also: don't make this connection by explicit IP address. As you've noticed already the Docker-internal IP addresses aren't usable in some contexts, and they'll change whenever you restart containers. Instead, Docker provides an internal DNS service that can resolve host names when communicating between containers, but you need to explicitly set up a non-default bridge network. That setup would look like:
docker network create jenkinsnet
docker run --name sonarqube -d --net jenkinsnet \
-p 9000:9000 \
sonarqube
docker run --name jenkins -d --net jenkinsnet \
-p 8080:8080 -p 50000:50000 \
-e SONARQUBE_URL=http://sonarqube:9000 \
jenkins/jenkins:latest
So I've explicitly created a network; started both containers connected to it; and told the client container (via an environment variable) where the server container is. You don't have to publish ports with docker run -p to reach them this way; whether you do or not, use the port the server process is listening on (the second port number in the docker run -p option).
From the host, your only (portable, reliable) path to reach the container is via its published ports.
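If you do want to check connectivity by hand, a quick test from inside the client container looks like this (assuming curl is available in the jenkins/jenkins image, which it normally is):
docker exec jenkins curl -sS http://sonarqube:9000/api/system/status
# the SonarQube status endpoint should return a small JSON document if the containers can reach each other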
Looks like you are using the default bridge network. Internal IPs are meant for the containers to talk to each other under bridge networking; you cannot access them from the host.
There are multiple options for you.
You can configure http://172.17.0.3:9000 as your SonarQube endpoint in Jenkins.
You can configure http://172.17.0.2:8080 as your Jenkins endpoint in SonarQube.
If you don't want to hard-code the above IPs, both of your containers can also talk to each other via the Docker default gateway IP (172.17.0.1) and the ports published on the host, so essentially you can configure http://172.17.0.1:9000 and http://172.17.0.1:8080 as well.
Note - the default gateway IP can change if you define a user-defined bridge network.
https://docs.docker.com/v17.09/engine/userguide/networking/#the-default-bridge-network
https://docs.docker.com/network/network-tutorial-standalone/
If you want to spin up both containers using docker-compose, you can link them using the service names. Just follow Networking in Compose.
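As a rough sketch (the SONARQUBE_URL variable is an assumption mirroring the accepted answer, not something either image requires), a docker-compose.yml could look like this; Compose puts both services on a shared network where they can resolve each other by service name:
version: "3"
services:
  sonarqube:
    image: sonarqube
    ports:
      - "9000:9000"
  jenkins:
    image: jenkins/jenkins:latest
    ports:
      - "8080:8080"
      - "50000:50000"
    environment:
      - SONARQUBE_URL=http://sonarqube:9000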
The accepted answer (https://stackoverflow.com/a/53992787/7730554) already provides valid options; personally, I usually prefer using Docker Compose.
But as you are running Docker on Mac, you could also use host.docker.internal in combination with the published host port. Docker then takes care that host.docker.internal resolves to the corresponding IP even if your host IP changes.
See https://docs.docker.com/desktop/mac/networking/.
Note that this is for development mode only and works when you use Docker Desktop.
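As a small sketch of that (assuming the port mappings from the question, and again assuming curl is present in the Jenkins image), the SonarQube instance would be reachable from inside the Jenkins container as:
docker exec jenkins curl -sS http://host.docker.internal:9000/api/system/status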
For the life of me I can't get to the NiFi Web UI. It makes me hate security.
TLDR; I can't find the right way to start NiFi in a docker container and still access the UI. Here's what I've tried (for 8 hours):
docker run --name nifi \
-p 8080:8080 \
-d \
apache/nifi:latest
I go to localhost:8080/nifi - timeout. Same on 127.0.0.1.
docker inspect nifi - IP Gateway is 172.20.0.1 with actual IP of 172.0.0.2. Invalid Host Header and timeout, respectively.
Start randomly trying stuff:
# I tried localhost, 0.0.0.0, various IP addresses
docker run --name nifi \
-p 8080:8080 \
-e NIFI_WEB_HTTP_HOST=${hostname-here} \
-d \
apache/nifi:latest
I also built a full docker-compose.yml for my diminishingly-possible stack. Everything works except for:
nifi:
image: apache/nifi:latest
hostname: nifi
depends_on:
- zookeeper
- broker
- schema_registry
- nifi-registry
ports:
- "8080:8080"
No changes. Can you help me?
Updates 1
I used the docker-compose.yml file from the repo linked in the comments below; thank you @Chaffelson. Still dealing with a timeout on localhost, so I spun up a droplet with docker-machine.
The services start fine, and the logs indicate the Jetty server is up for both NiFi Registry and NiFi. I can access NiFi Registry at <host ip>:18080/nifi-registry exactly like I can on my local machine.
I can't access <host ip>:8080/nifi - I get an invalid host header response.
So I added to docker-compose.yml:
environment:
# Tried both with and without quotes
NIFI_WEB_HTTP_HOST: "<host-ip>"
Jetty server fails to start. Insight?
Updates 2
From the logs, using just docker run --name nifi -p 8080:8080 -d apache/nifi:1.5.0:
[NiFi Web Server-24] o.a.n.w.s.HostHeaderSanitizationCustomizer Request host header [45.55.36.15:8080] different from web hostname [348146fc286f(:8080)]. Overriding to [348146fc286f:8080/nifi] where 45.55.36.15 is the host ip.
This is the crux of my problem.
Updates 3
I disabled ufw (firewall) on my local machine. I can now access nifi via localhost:8080. No progress on actually accessing on a remote host (which is the point of all this).
Sorry to hear you are having trouble with this. In Apache NiFi 1.5.0, the stricter host header protection was enabled to prevent host header poisoning attacks. Unfortunately, we have seen that this was not documented sufficiently for users who were not familiar with the configuration. In response, we have made changes that are currently in master and will be included in the upcoming 1.6.0 release:
a new property, nifi.web.proxy.host, was added to nifi.properties; it accepts a comma-separated list of valid host headers independent of the Jetty hostname (see the example after this list)
the Docker configuration has been updated to allow proxy whitelisting from the run command
the host header protection is only enforced on "secured" NiFi instances. This should make it much easier for users to quickly deploy sandbox environments like you are doing in this case
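Once you are on a release that includes the property, the whitelist entry in nifi.properties would look roughly like this (the host and port values are illustrative, taken from the log line above):
nifi.web.proxy.host=45.55.36.15:8080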
For an immediate fix, this command should work:
docker run --name nifi \
-p 8080:8080 \
-e NIFI_WEB_HTTP_HOST=172.20.0.1 \
-d \
apache/nifi:latest
You can also intercept the requests using a Chrome extension like ModHeader to override the Host header and verify that it works when it matches the expected host. Along with Daniel's excellent additions, this should help you until the next version is released.
I use this and similar docker compose files for my automated NiFi Python client testing. It looks superficially similar to yours, and works perfectly well for me on both Ubuntu (Travis-CI) and my local MacBook Pro.
I suggest you try running this file as a known-good configuration, and also examine 'docker logs -f nifi' for the above to see if your environment is throwing errors on startup.
The environment variables for NIFI_WEB_HTTP_HOST and NIFI_WEB_HTTP_PORT are for when you are accessing Docker nifi on a port other than 8080, so that you don't get the host-headers blocker. I contributed these modifications to the project recently, so if you are having trouble with them I would like to know so I can fix it.
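As a sketch of that usage (the 9090 port and the <host-ip> placeholder are illustrative, not values from the question):
docker run --name nifi \
-p 9090:9090 \
-e NIFI_WEB_HTTP_PORT=9090 \
-e NIFI_WEB_HTTP_HOST=<host-ip> \
-d \
apache/nifi:latest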
I had the same issue: I was not able to access the web UI remotely. It turned out to be a firewall issue. Disabling firewalld, or adding a custom firewall rule to allow the Docker network and port, should solve the issue.
Here is my docker-compose.yml:
In Docker, use this option; it fixed my problem:
--net=host
With host networking, Docker skips the internal port-forwarding path.
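A minimal sketch of that (note that with --net=host the -p mappings become unnecessary, and host networking behaves differently on Docker Desktop for Mac/Windows than on Linux):
docker run --name nifi --net=host -d apache/nifi:latest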
I am trying to get JMX monitoring working to monitor a test kafka instance.
I have Kafka (ches/kafka) running in Docker via boot2docker, but I cannot get JMX monitoring configured properly. I have done a bunch of troubleshooting and I know the Kafka instance is running properly (consumers and producers work). The issue arises when I try simple JMX tools (jconsole and jvisualvm): both cannot connect (insecure connection error, connection failed).
Configuration items of note: I connect to 192.168.59.103 (virtualbox ip found when running 'boot2docker ip') and the ches/kafka docker/kafka instance uses the port 7203 as the JMX_PORT (confirmed in the kafka startup logs). Using jconsole, I connect to 192.168.59.103:7203 and that is when the errors occur.
Any help is appreciated.
For completeness, here is the solution that works:
I ran the ches/kafka Docker image as follows -- note that the JMX_PORT (7203) is now published appropriately:
$ docker run --hostname localhost --name kafka --publish 9092:9092 --publish 7203:7203 --env EXPOSED_HOST=192.168.59.103 --env ZOOKEEPER_IP=192.168.59.103 ches/kafka
Also, the following JMX options are set in kafka-run-class.sh (.bat for Windows):
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
But I needed to add one additional item (thanks to one of the commenters for pointing this out):
-Dcom.sun.management.jmxremote.rmi.port=7203
Now, to run the ches/kafka image in boot2docker, you just need to set one of the recognized environment variables (KAFKA_JMX_OPTS or KAFKA_OPTS) to add the additional item, and it now works.
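As a sketch of that (the contents of KAFKA_JMX_OPTS simply mirror the options listed above; adapt them to your setup):
docker run --hostname localhost --name kafka \
--publish 9092:9092 --publish 7203:7203 \
--env EXPOSED_HOST=192.168.59.103 --env ZOOKEEPER_IP=192.168.59.103 \
--env JMX_PORT=7203 \
--env KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.rmi.port=7203" \
ches/kafka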
Thanks for the help!
There's no reason the Kafka JMX port would be bound to the same port in the boot2docker VM unless you specify it.
Try running it with -p 7203:7203 to force the 1:1 forwarding of the port.