I have Jenkins installed as a service on a Google Cloud Platform Compute Engine VM.
It was working fine until a few days ago.
Recently I have been having issues connecting to the Compute Engine VM itself. When I try to connect to the VM over RDP, the connection fails, and when I try to trigger Jenkins over HTTPS, the requests time out.
I have to restart the VM to get it working again; it then works for a few minutes before it starts rejecting requests once more.
The weird thing is that I have Jenkins slaves installed on several other GCP Compute Engine VMs (separate from the Jenkins master), and while I am unable to access the master VM via RDP, I can still trigger Jenkins requests from those slaves.
I have configured SSL for Jenkins.
A static IP is assigned to the VM that hosts the Jenkins server.
Any direction on what might be causing the Compute Engine VM to reject connection requests would be helpful.
I'm working on a web application that uses IBM MQ as a message broker. I want to set up an environment for integration tests via Testcontainers, but there is no IBM MQ container image for the ARM architecture, so using Docker as the container manager is not a workable solution.
I replaced Docker with Podman on an Intel machine following this article, but Podman's performance dropped significantly (25 seconds to run a container and podman ps never returning), so I don't want to use that mechanism.
I've also heard of Lima and Colima, so now I'm totally confused and can't decide which setup is best for my case.
Being architecture independent is one of the benefits of testcontainers.cloud, a product by the maintainers of the Testcontainers libraries.
When you use Testcontainers Cloud your tests run locally as usual, and the containers are started in an isolated on-demand cloud environment which your tests provision and connect to via a small, user-space agent application.
Testcontainers Cloud is currently in public beta, and you can evaluate it for your use cases and setup by signing up on the website.
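For illustration, here is a minimal sketch of a JUnit 5 test that stays the same whether the container runs against a local Docker daemon, Podman, or Testcontainers Cloud. The IBM MQ image name, port, and environment variables below are assumptions about IBM's published container image and may need adjusting:

    import org.junit.jupiter.api.Test;
    import org.testcontainers.containers.GenericContainer;
    import org.testcontainers.junit.jupiter.Container;
    import org.testcontainers.junit.jupiter.Testcontainers;
    import org.testcontainers.utility.DockerImageName;

    @Testcontainers
    class MqIntegrationTest {

        // Image name, port, and environment variables are assumptions;
        // adjust them to the IBM MQ image you actually use.
        @Container
        private static final GenericContainer<?> mq =
                new GenericContainer<>(DockerImageName.parse("icr.io/ibm-messaging/mq:latest"))
                        .withEnv("LICENSE", "accept")
                        .withEnv("MQ_QMGR_NAME", "QM1")
                        .withExposedPorts(1414);

        @Test
        void connectsToBroker() {
            // Testcontainers maps the container port to whatever host and port the
            // active backend provides (local daemon or the cloud agent).
            String host = mq.getHost();
            int port = mq.getMappedPort(1414);
            // ... configure the MQ connection factory with host/port and assert on it ...
        }
    }

The test code never hard-codes where the container runs; switching backends is only a matter of which daemon or agent the Testcontainers library discovers.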
Right now I have found two possible ways of creating Jenkins slaves (workers):
Using the SSH-Slave Plugin
Using JNLP
My question: which is the better and more stable solution, and why?
I have come up with some pros and cons for both solutions myself, but I don't want to bias the discussion.
Java Web Start -- and the underlying Java Network Launch Protocol (JNLP) -- is the recommended mode of connection for Windows agents.
wiki.jenkins.io
SSH is recommended for Linux agents.
The pros and cons don't matter much if each approach is the recommended one for a different platform.
We are only using SSH, mainly because, in a corporate setting:
asking for the JNLP port to be opened on the agent server (agent being the new name for slave in Jenkins) is an extra hurdle;
any query to another server should use an authenticated flow (like SSH with a key, or an account username/password).
Plus, in the Launch method configuration we can add various JVM options to monitor the GC and other memory parameters.
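For example, options along these lines can be added there (the values are purely illustrative, not a recommendation):

    -Xms512m -Xmx2g -verbose:gc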
I have a somewhat unusual environment. In order to connect to machine B via SSH, I have to connect to machine A first and, from that box, connect to B and execute a number of commands there.
Local --ssh--> Machine A --ssh--> Machine B (some commands to execute here)
Generally speaking, Machine A is my entry point to all servers.
I am trying to automate the deployment process with Jenkins and am wondering whether it supports such an unusual scenario.
So far I have installed the SSH plugin and am able to connect to Machine A, but I am struggling with the connection to Machine B. The Jenkins job freezes on the ssh command to Machine B and nothing happens.
Does anyone have any ideas on how I can make such a scenario work?
The term for Machine A is a "bastion host", which might help your googling.
This link calls it a "jump host" and describes a number of ways to use SSH's ProxyCommand setting to set up all manner of inter-host SSH communication:
https://www.cyberciti.biz/faq/linux-unix-ssh-proxycommand-passing-through-one-host-gateway-server/
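For instance, a minimal sketch of an OpenSSH client configuration on the Jenkins machine could look like this (host names, user, and key path are placeholders):

    # ~/.ssh/config for the user Jenkins runs as
    Host machine-b
        HostName machine-b.internal.example.com
        User deploy
        IdentityFile ~/.ssh/deploy_key
        # Tunnel through Machine A (the bastion/jump host).
        # On older OpenSSH versions use the ProxyCommand form instead:
        # ProxyCommand ssh -W %h:%p machine-a.example.com
        ProxyJump machine-a.example.com

With something like this in place, a plain "ssh machine-b <command>" from the Jenkins job is transparently routed through Machine A, so the SSH plugin only ever has to deal with a single hop.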
I'm building an Apache Mesos cluster with 3 masters and 3 slaves. I installed Docker on the slave nodes, and it is able to create instances that are visible in Marathon. I then tried to install HAProxy on top of it, but that didn't work out well, so I deleted it.
The problem is that since then I'm only able to scale my application to a maximum of 3 instances, which is exactly the number of slave nodes. When I try to scale to 5, 2 of the instances are stuck in the 'deploying' stage.
Does anyone know how to fix this issue so that I can create more instances again?
Thank you
To achieve that, you really do need to set up Marathon service discovery with HAProxy, since your containers will be bound to unknown (dynamically assigned) ports on the slave machines.
First, install HAProxy on every slave. If you need SSL, you will have to build HAProxy with SSL support.
Then, once the HAProxy service is running, follow this tutorial to enable Marathon service discovery on every slave:
HAProxy Marathon service discovery
Pay close attention to the tutorial; it is very well explained and straightforward.
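As a rough sketch, the Marathon application definition then lets Mesos pick the host port, so several instances can land on the same slave (the image name, ports, and resource values below are placeholders):

    {
      "id": "/my-webapp",
      "instances": 5,
      "cpus": 0.5,
      "mem": 256,
      "container": {
        "type": "DOCKER",
        "docker": {
          "image": "example/my-webapp:latest",
          "network": "BRIDGE",
          "portMappings": [
            { "containerPort": 8080, "hostPort": 0, "servicePort": 10000 }
          ]
        }
      }
    }

With "hostPort": 0, Mesos assigns a free port on the slave for each instance, and HAProxy (fed by the service discovery script) routes traffic for the servicePort to whichever host ports the instances actually received.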
I've been using CloudBees DEV@cloud to build and test my code. Now I want to automate the deployment of my applications to Amazon AWS.
To deploy the applications, I scp files to Amazon and use SSH sessions to start them. This works just fine, but I'm forced to allow SSH connections to my Amazon AWS instances.
Is it possible to run the CloudBees builds from a fixed IP address, so that I don't have to allow SSH access to the AWS instances from every IP address?
No, DEV@cloud slaves are drawn from an elastic pool of machines (currently in EC2), so you cannot rely on a particular IP address. If you need to restrict access to a resource that builds must use, you can configure a VPN service instead.