I have a somewhat unusual environment. In order to connect to Machine B via SSH, I need to connect to Machine A first and, from that box, connect to B and execute a number of commands there.
Local --ssh--> Machine A --ssh--> Machine B (some commands to execute here)
Generally speaking, Machine A is my entry point to all servers.
I am trying to automate the deployment process with Jenkins and am wondering whether it supports such an unusual scenario.
So far I have installed the SSH plugin and am able to connect to Machine A, yet I am struggling with the connection to Machine B. The Jenkins process freezes on the ssh command to Machine B and nothing happens.
Does anyone have any ideas how I can make such a scenario work?
The term for Machine A is a "bastion host", which might help your googling.
This link calls it a "jump host", and describes a number of ways to use SSH's ProxyCommand setting to set up all manner of inter-host SSH communication:
https://www.cyberciti.biz/faq/linux-unix-ssh-proxycommand-passing-through-one-host-gateway-server/
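As a concrete example, here is a minimal ~/.ssh/config sketch of that approach; the host names and user below are placeholders, not anything from your setup:

```
# ~/.ssh/config on the machine Jenkins runs on.
# "machineA"/"machineB", the HostNames, and the user are placeholders.
Host machineA
    HostName machinea.example.com
    User deploy

Host machineB
    HostName machineb.internal
    User deploy
    # OpenSSH 7.3+: hop through Machine A automatically
    ProxyJump machineA
    # Older clients can use ProxyCommand instead:
    # ProxyCommand ssh -W %h:%p machineA
```

With something like that in place, a plain `ssh machineB "some command"` (or the SSH plugin pointed directly at machineB) goes through Machine A transparently, so Jenkins never has to run a nested ssh itself.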
Related
I'm sitting with a new issue that you might also face soon. I need a little help if possible. I've spent almost 2 working weeks on this.
I have 2 possible solutions for my problem.
CONTEXT
I have 2 Kubernetes clusters called FS and TC.
The Jenkins I am using runs on TC.
The slaves do deploy in FS from the TC Jenkins; however, the slaves in FS will not connect to the Jenkins master in TC.
The slaves make use of a TCP connection that requires a HOST and a PORT. However, the exposed JNLP service on TC is HTTP (http://jenkins-jnlp.tc.com/), which uses nginx to auto-generate the URL.
Even if I use
HOST: jenkins-jnlp.tc.com
PORT: 80
It will still complain that it's getting serial data instead of binary data.
The complaint
For TC I made use of the local JNLP service HOST (jenkins-jnlp.svc.cluster.local) with PORT (50000). This works well for our current TC environment.
SOLUTIONS
Solution #1
A possible solution would involve having an HTTP-to-TCP relay container running between the slave and master on FS. It would then be linked up to the HTTP URL in TC (http://jenkins-jnlp.tc.com/), encapsulating the HTTP connection to TCP (localhost:50000) and vice versa.
The slaves on FS can then connect to the TC master using that TCP port being exposed from that container in the middle.
Diagram to understand better
Solution #2
People kept complaining, and eventually new functionality was added to Jenkins around 20 Feb 2020: WebSocket support, which lets the agent connection run over HTTP and be converted to TCP on the slave.
I did set it up, but it seems too new and is not working for me: even though the slave on FS says it's connected, it's still not properly communicating with the Jenkins master on TC, which still sees the agent/slave pod as offline.
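For reference, this is roughly the shape of the launch command in WebSocket mode; the agent name and secret are placeholders, and the -webSocket option only exists in reasonably recent Jenkins/remoting versions:

```
# Placeholders: replace the agent name and secret with the values Jenkins shows
# on the agent's page. -webSocket tunnels agent traffic over the normal HTTP(S)
# endpoint instead of the dedicated TCP inbound-agent port.
java -jar agent.jar \
  -jnlpUrl http://jenkins-jnlp.tc.com/computer/fs-agent-1/jenkins-agent.jnlp \
  -secret <agent-secret> \
  -webSocket
```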
Here are the links I used
Original post
Update note on Jenkins
Details on Jenkins WebSocket
Jenkins inbound-agent github
DockerHub jenkins-inbound-agent
CONCLUSION
After a lot of fiddling, research and banging my head against the wall, I think the only solution is solution #1. The problem with solution #1 is that a simple tool or service to encapsulate HTTP to TCP and back does not exist (as far as I know; I searched for days). This means I'll have to make one myself.
Solution #2 is still too new, has next to no docs to help me out or make setting it up easy, and seems to come with some bugs. It seems the only way to fix these bugs would be to modify both Jenkins and the JNLP agent's code, and I have no idea where to even start.
UPDATE #1
I'm halfway done with the code for the intermediate container. I can now get a downstream connection from HTTP to TCP; I just have to set up the upstream from TCP to HTTP.
Also, considering the amount of multi-threading required to run a single central Docker container to convert the protocols, I figured on adding the HTTP-to-TCP container as a sidecar to the Jenkins agent when I'm done.
This way every time a slave spins up in a different cluster, it will automatically be able to connect and I don't have to worry about multiple connections. That is the theory, but obviously I want results and so do you guys.
Right now I have found 2 possible solutions for creating Jenkins slaves (Jenkins workers):
Using the SSH-Slave Plugin
Using JNLP
My question now: What is the better / more stable solution and why?
I have found some pros and cons of both solutions myself, but I don't want to bias the discussion.
Java Web Start -- and the underlying Java Network Launch Protocol (JNLP) -- is the recommended mode of connection for Windows agents.
jenkins.wiki.io
SSH is recommended for Linux agents
The Pros and Cons don't matter if both are recommended for different platforms.
We are only using SSH, mainly because, in a corporate setting:
we would rather not have to ask for the JNLP port to be opened on the agent server (agent being the new name for slave in Jenkins), and
any query to another server should use an authenticated flow (like SSH with a key, or an account username/password)
Plus, we can add various JVM options to the Launch method configuration to monitor the GC and other memory parameters.
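For example (illustrative flags only, not a prescription; the GC-logging flags shown are Java 8 style, newer JVMs use -Xlog:gc instead):

```
# Example JVM Options for the agent launch configuration: cap the heap and log GC.
-Xmx2g -Xms512m -verbose:gc -XX:+PrintGCDetails
```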
I am wondering how we can make machines that host Docker easily replaceable. I would like something like a Dockerfile that contains instructions on how to set up the machine that will host Docker. Is there a way to do that?
The naive solution would be to create an official "docker host" binary image to install on new machines, but I would like to have something that is reproducible and transparent, like a Dockerfile.
It seems like tools such as Vagrant, Puppet, or Chef may be useful, but they appear to be aimed at virtual machine provisioning, and they all seem to require setting up some sort of "master node" server. I am not going to be spinning machines up and tearing them down regularly, so a master server is a waste of a server; I just want something that is reproducible in the event I need to set up or replace a machine.
This is basically what docker-machine does for you: https://docs.docker.com/machine/overview/
Other "orchestration" systems will make this automated and easier as well.
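As a rough sketch of what that looks like against an existing Linux box you can already SSH into (the IP address, SSH user, and machine name below are placeholders):

```
# docker-machine's "generic" driver installs and configures Docker on an
# existing host over SSH. All values here are placeholders.
docker-machine create \
  --driver generic \
  --generic-ip-address 203.0.113.10 \
  --generic-ssh-user ubuntu \
  my-docker-host
```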
There are lots of solutions to this with no real one size fits all answer.
Chef and Puppet are the popular configuration management tools that typically use a centralized server. Ansible is another option that typically runs without a server and just connects over ssh to configure the host. All three of these work very similarly, so if your concern is simply not wanting to manage a CM server, Ansible may be the best option for you.
For VMs, Vagrant is the typical solution, and it can be combined with other tools like Ansible to provision the VM after creating it.
In the cloud space, there are tools like Terraform, or vendor-specific tools like CloudFormation.
Docker is working on a project called Infrakit to deploy infrastructure the way compose deploys containers. It includes hooks for several of the above tools, including Terraform and Vagrant. For your own requirements, this may be overkill.
Lastly, for designing VM images, Docker recently open-sourced their Moby project, which creates a VM image containing a minimal container OS, the same one used under the covers in Docker for Windows, Docker for Mac, and possibly some of the cloud hosting providers.
We automate Docker installation on hosts using Ansible + Jenkins. Given the proper SSH access, provisioning new Docker hosts is a matter of triggering a Jenkins job.
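For illustration, a minimal playbook of the kind such a job could run; the inventory group and distro package name are assumptions, so adapt them to your environment (or install from Docker's own repository instead):

```yaml
# install-docker.yml -- minimal sketch; "docker_hosts" and "docker.io" are assumptions.
- hosts: docker_hosts
  become: true
  tasks:
    - name: Install the Docker engine from the distro repository (Ubuntu/Debian)
      ansible.builtin.apt:
        name: docker.io
        state: present
        update_cache: true

    - name: Ensure the Docker daemon is running and enabled at boot
      ansible.builtin.service:
        name: docker
        state: started
        enabled: true
```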
I have a Jenkins server on an Ubuntu EC2 instance. I am trying to create a slave on my local Windows machine but am not able to, even though I can create slaves on other EC2 instances. Is there some setting, or does some port need to be open? Jenkins is behind nginx.
Regards,
Ashish
I think you need a firewall rule to allow the port used by Jenkins for master-slave communication. Go to $JENKINS_URL/configureSecurity, set the port to a fixed number that's available on your Jenkins master, and then add a firewall rule for that port.
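For example, assuming you fix the port to 50000 (adjust to whatever you choose): since your master is on EC2, this usually means a security-group rule rather than (or in addition to) a host firewall rule. The group ID and source CIDR below are placeholders:

```
# Allow the fixed inbound-agent port through the EC2 security group (placeholders).
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 50000 \
  --cidr 198.51.100.0/24

# And on the Ubuntu host itself, if ufw is enabled:
sudo ufw allow 50000/tcp
```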
I have an application which has the following requirement.
While my Erlang app is running, I need to start, on the fly, one or more remote nodes on either the local host or a remote host.
I have looked at the following options
1) For starting a remote node on the local host either use the slave module or the net_kernel:start() API.
However, with the latter there seems to be no way to specify options like the boot script file name, etc.
2) In any case, I don't need the slave configuration, as I need to mimic similar behaviour for nodes spawned on local as well as remote hosts. In my current setup, I don't have permission to rsh to the remote host. The workaround I can think of is to have a default node already running on the remote host so as to enable remote node creation, either through spawn or through a combination of rpc:async_call and os:cmd.
Is there any other API to start erl?
I am not sure this is the best or the cleanest way to solve this problem, and I would like to know the idiomatic Erlang approach.
Thanks in advance
There is the pool module, which might help you; however, it utilizes the slave module (and therefore rsh).
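For illustration, a minimal sketch of the pool approach; the host names you would list in ~/.hosts.erlang are placeholders, and those hosts must be reachable via rsh/ssh, which is exactly the limitation you mention:

```erlang
%% ~/.hosts.erlang lists the hosts to start slave nodes on, e.g.:
%%   'host1.example.com'.
%%   'host2.example.com'.
%% Then, from the running node (pool name and cookie are placeholders):
Nodes = pool:start(mypool, "-setcookie mycookie"),  % one slave node per listed host
io:format("started pool nodes: ~p~n", [Nodes]),
%% Run work on the (statistically) least loaded node in the pool:
pool:pspawn(io, format, ["hello from the pool~n"]).
```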