I want to integrate Jira, Jenkins, and Confluence. I have set all three up from Docker images: Jenkins on port 8082, Jira on port 8081, and Confluence on port 8090.
For the Jira–Jenkins integration I used the "Jira Issue Updater" plugin, but the Jenkins console is throwing an error:
Unable to connect to REST service
java.net.ConnectException: Connection refused (Connection refused)
Finished: SUCCESS
My fundamental question: is it possible to connect two Docker containers? By default both Jenkins and Jira listen on port 8080, but I remapped the ports for my convenience. All the containers are on one Docker network, and I can ping Jira's IP from the Jenkins instance, so Jira is reachable from Jenkins.
I am a bit confused about this integration and would appreciate some support from the community.
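One thing worth noting: containers on the same user-defined Docker network talk to each other on the container-internal port (8080 by default for both Jira and Jenkins), not the host-mapped one, and can address each other by container name. A quick check along these lines would confirm whether Jira's REST API is reachable from Jenkins (the network name `my-net` and container names `jenkins` and `jira` are assumptions; substitute your own):

```shell
# List which containers share the network.
docker network inspect my-net --format '{{range .Containers}}{{.Name}} {{end}}'

# From inside the Jenkins container, hit Jira on its *internal* port (8080),
# not the host-mapped 8081 -- host port mappings don't apply between containers.
docker exec jenkins curl -s -o /dev/null -w '%{http_code}\n' \
    http://jira:8080/rest/api/2/serverInfo
```

If this prints 200, the plugin's REST URL in Jenkins should use `http://jira:8080`, not `localhost` or the host-mapped port.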
I'm using the Kubernetes Continuous Deploy plugin with a pipeline, and this is the stage in the Jenkinsfile that deploys to Kubernetes:
stage('Deploy to k8s') {
    steps {
        kubernetesDeploy(
            configs: 'quarkusAgrata.yaml',
            kubeconfigId: 'KUBERNETES_CLUSTER_CONFIG',
            enableConfigSubstitution: true
        )
    }
}
I am getting these errors even after configuring everything correctly.
(Screenshot of my KUBERNETES_CLUSTER_CONFIG credential attached.)
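If the plugin reports it cannot reach the cluster, it is worth validating the kubeconfig you stored in the KUBERNETES_CLUSTER_CONFIG credential outside Jenkins first. A sketch (the file name `cluster-config.yaml` is an assumption; use whatever file you pasted into the credential):

```shell
# If these fail from the Jenkins host, the plugin will fail too.
kubectl --kubeconfig ./cluster-config.yaml cluster-info
kubectl --kubeconfig ./cluster-config.yaml get nodes
```

If these commands work from your workstation but not from the Jenkins host, the problem is network reachability from Jenkins to the API server rather than the credential itself.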
Log on to the box/container/pod hosting Jenkins and try to ping the IP. If all is good, try to telnet to the IP and port. Depending on the results, you should know whether or not a connection is possible from your Jenkins host. Note: if you have Jenkins running in a container on a pod, you may need to exec/ssh into the container on the pod. Make sure you're in the correct environment when debugging; otherwise you'll convince yourself you're forming a connection from the Jenkins host when you might not be at quite the right level.
ping 123.123.123.123 - shows whether the host is reachable at all (ping uses ICMP, so no port is involved).
telnet 123.123.123.123 8080 - connects to a specific port; if all is good, the connection should open. If you don't have the telnet application on the pod, you may need to install it, or else you can spin up a busybox container, which ships with telnet and various other debugging tools.
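If neither telnet nor busybox is an option, the same port check can be done with a few lines of Python (a minimal sketch; works with any recent Python 3):

```python
import socket

def tcp_check(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port can be opened."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # connection refused, timed out, host unreachable, ...
        return False

# Example: tcp_check("123.123.123.123", 8080)
```

Run it inside the Jenkins container (`python3 -c '...'`) so the check happens from the same network namespace the build uses.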
I have a JMeter master (5.3) running in a Docker container, triggered by a Jenkins pipeline that runs a 'docker run' command. It communicates with JMeter slaves located in a Kubernetes namespace, behind an Ingress controller that handles incoming traffic. (For this trial I'm using just one slave, but there may be multiple in the future.)
The Docker JMeter master container is aware of Ingress and can reference it by name or by IP address. From within the JMeter master container I am able to ping the JMeter slave hostname, and it resolves to the Ingress IP address, which is what I would expect.
Ingress, in turn, can communicate with the JMeter slave, but I can't get from the JMeter master to the slave. I have set server.rmi.localport=80 on both sides, and the slave appears to register port 80 in its logs.
The error from the JMeter master is 'operation timed out (connection timed out)'. Where should I start looking?
(For reference, we cannot move JMeter Master into Kubernetes, but conversely the slaves need to stay in Kubernetes in order to provide the workload).
Edit: I've done some more investigation. The problem seems to be that while the base RMI port is port 80, JMeter is also trying to open ports 81 and 82. This will obviously fail, as I've only got port 80 available through Ingress.
So the question is now 'how do I tell JMeter to only use a single port for RMI'?
As an update to this: basically, I've come to the conclusion that you can't do this. While the ports can be opened on Ingress, RMI can't communicate over them. So even if I could get it all onto one port, it still wouldn't work.
There is an 'RMI over HTTP' implementation, but I wouldn't have the first idea how to wire that into JMeter.
What I have done instead is add a small web server to the pod, such that I can control JMeter through normal web calls. For example, the jmx file can be PUT onto the pod, and a GET can retrieve the results. That way I can start the pod up in the relevant location, where it will wait for whatever tests we want to run. It's also extensible if I need additional functionality.
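A minimal sketch of such a control web server, assuming `jmeter` is on the PATH inside the pod; the file locations and port are placeholders, and a real version would want error handling and authentication:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import pathlib
import subprocess

PLAN = pathlib.Path("/tmp/plan.jmx")        # assumed locations
RESULTS = pathlib.Path("/tmp/results.jtl")

class JMeterControl(BaseHTTPRequestHandler):
    def do_PUT(self):
        # Receive the .jmx test plan and run it headlessly.
        length = int(self.headers.get("Content-Length", 0))
        PLAN.write_bytes(self.rfile.read(length))
        subprocess.run(["jmeter", "-n", "-t", str(PLAN), "-l", str(RESULTS)])
        self.send_response(200)
        self.end_headers()

    def do_GET(self):
        # Return the results file if a run has completed.
        if RESULTS.exists():
            self.send_response(200)
            self.end_headers()
            self.wfile.write(RESULTS.read_bytes())
        else:
            self.send_response(404)
            self.end_headers()

def serve(port: int = 8080) -> None:
    """Block forever, serving PUT (run a plan) and GET (fetch results)."""
    HTTPServer(("0.0.0.0", port), JMeterControl).serve_forever()
```

With this in place, the pod needs only one ordinary HTTP port through Ingress, sidestepping the multi-port RMI problem entirely.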
I have a Docker image on Google Cloud Platform that I would like to run. Part of its script attempts to connect to a RabbitMQ server located in the same subnet, but this does not work.
I've taken the following steps to try and solve it:
I have tried connecting to both the internal and external IP-address of the RabbitMQ server.
I have enabled VPC-native (alias IP) networking
I have checked I can connect to the internet from my docker image
I have checked that my docker image can connect to RabbitMQ when run locally
I have checked that the internal IP address is reachable from the RabbitMQ server (by pinging it)
I suspect an incorrect setting somewhere in my Kubernetes Engine cluster, but I've looked for quite some time and I cannot find it.
Does anybody know how to connect to a RabbitMQ server from a Kubernetes pod running in the Google Cloud Platform?
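One setting worth checking is the VPC firewall: with VPC-native clusters, pod traffic leaves with an alias-IP source address from the pod range, which the rules protecting the RabbitMQ VM may not allow. A sketch of opening AMQP from the pod range (the network name, rule name, and pod CIDR are assumptions; your cluster's actual pod range is shown in the GKE cluster details):

```shell
# Allow AMQP (5672) from the cluster's pod IP range to instances in the VPC.
gcloud compute firewall-rules create allow-pods-to-rabbitmq \
    --network my-vpc \
    --allow tcp:5672 \
    --source-ranges 10.4.0.0/14
```

This would explain why pinging the VM from outside the cluster works while connections from pods fail: the pod range simply isn't whitelisted.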
Here is the setup we have: a Jenkins master running on a Kubernetes cluster, with a Windows VM connected as a permanent slave. The Windows VM connects through port 30502, exposed by the Jenkins master; the default port 50000 has been changed to 30502 in the Jenkins TCP JNLP port config. The Windows VM connects successfully to the Jenkins master.
When the Jenkins master starts, tons of these messages keep getting thrown every 2 seconds:
hudson.TcpSlaveAgentListener$ConnectionHandler run
WARNING: Connection #788 failed: java.io.EOFException
    at java.io.DataInputStream.readFully(DataInputStream.java:197)
    at java.io.DataInputStream.readFully(DataInputStream.java:169)
    at hudson.TcpSlaveAgentListener$ConnectionHandler.run(TcpSlaveAgentListener.java:244)
Does this require any additional config that I am missing here?
This is related to load balancer health probes: https://issues.jenkins-ci.org/browse/JENKINS-48106
We are running Kubernetes on AWS. I manually reconfigured the target group health-check port to some random number (60000, for example). AWS target groups have an interesting behavior: if there is no healthy endpoint, they send traffic to all endpoints. So AWS "bombards" some random port, marks all endpoints as unhealthy, and still sends traffic to all of them.
I am very new to Cloudera. I'm trying to add a host in Cloudera Manager, but it fails with the following error:
Installation failed. Failed to receive heartbeat from agent.
Ensure that the host's hostname is configured properly.
Ensure that port 7182 is accessible on the Cloudera Manager Server (check firewall rules).
Ensure that ports 9000 and 9001 are not in use on the host being added.
Check agent logs in /var/log/cloudera-scm-agent/ on the host being added. (Some of the logs can be found in the installation details.)
If Use TLS Encryption for Agents is enabled in Cloudera Manager (Administration -> Settings -> Security), ensure that /etc/cloudera-scm-agent/config.ini has use_tls=1 on the host being added. Restart the corresponding agent and click the Retry link here.
I'm running the cloudera-quickstart-vm (https://github.com/caioquirino/docker-cloudera-quickstart) in a Docker container on an Ubuntu-based Google Cloud VM.
I create a tunnel to Cloudera Manager using PuTTY to 172.17.xx.1:7180, where the IP is the Docker IP, and access it in the browser as localhost:7180.
This same IP is what resolves as the hostname in the first step of adding a new host.
When I run the "hostname" command in my container, I get the container ID, e.g. 0cb223fcfe64. If I try to add this as a hostname, I get the message "Could not connect to host".
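One thing that often helps with this pair of symptoms (heartbeat failure plus a hostname that is just the container ID) is starting the quickstart container with an explicit, resolvable hostname rather than the generated ID. A sketch; the container name, hostname, and image tag are assumptions based on the linked repository:

```shell
# Give the container a stable FQDN so the agent's heartbeat and the
# "add host" wizard agree on a single resolvable name.
docker run --name cloudera --hostname quickstart.cloudera \
    -p 7180:7180 caioquirino/docker-cloudera-quickstart
```

The hostname you pass must also resolve from the Cloudera Manager side (e.g. via an /etc/hosts entry) for the agent heartbeat on port 7182 to succeed.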
How can I resolve these errors and add a new host?
I have reviewed similar posts on Stack Overflow and the Cloudera forum, but none of the solutions worked for me. If any more information is required, let me know and I will try to provide more details.
Any help will be appreciated.