Hi, I am very new to Flume. I have installed Flume 0.9.3 and am able to start a node on Windows.
But to move forward I need a few basics. Can anyone help me set up Flume completely?
How to set up a Flume agent and configure it with a source such as a file share or a server log
How to set up a Flume collector
How the sink in the agent pushes files to the collector
How to install the Flume master and configure it with multiple agents and collectors
How to integrate Flume collectors with HDFS.
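For context, the kind of end-to-end flow I am after looks roughly like the typical Flume 0.9.x data flow configured from the Flume shell connected to the master; the host names, port numbers, and paths below are just placeholders from my reading, not a working setup:

exec config agent1 'tail("/var/log/myapp/server.log")' 'agentSink("collectorhost", 35853)'
exec config collector1 'collectorSource(35853)' 'collectorSink("hdfs://namenode:9000/flume/logs/", "myapp-")'

The first line would make agent1 tail a log file and forward events to a collector, and the second would make collector1 listen for that traffic and write it into HDFS.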
My situation is as follows. I am using the vSphere Jenkins plugin to clone and start a VM on a vSphere server during a stage of a pipeline. I use SSH to connect to the VM from the Jenkins master and start the slave. VMware Tools is installed on the machine so that the vSphere Jenkins plugin knows which IP to SSH to.
Now comes the problem: I need to change the IP address of each VM after startup. For that I am using a script that changes the IP of the machine, wrapped in a systemd oneshot service that runs the script on startup. The issue is that VMware Tools sends the IP information back to the Jenkins plugin before the systemd service gets loaded, and then Jenkins tries to connect to an IP that has already been changed.
How do I delay the start of VMware Tools, or how else could I overcome this issue?
I ended up adding this line in the [Unit] section of my .service file:
Before=vmware-tools.service
It does what I want it to.
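For reference, the whole unit file ended up looking roughly like this (the unit name and script path are placeholders for my own):

# /etc/systemd/system/change-ip.service
[Unit]
Description=Change the VM IP address before VMware Tools reports it
Before=vmware-tools.service

[Service]
Type=oneshot
ExecStart=/usr/local/bin/change-ip.sh

[Install]
WantedBy=multi-user.target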
I followed a tutorial to get Jenkins set up on Windows.
What I have is:
Jenkins running with the recommended plugins installed
Jenkins URL changed to http://<my IPv4>:8080/
A project with a simple command [echo hi]
For nodes I currently have just the master node, which is tied to my main PC
My goal is to have one computer send a command to all the slave PCs so they run a Python script I created.
I created a Windows VM and connected it to the Jenkins server. I logged in with the admin account and created a new node.
I can't find anything useful to help me figure out what to put in the launch command. When I launch my node on the VM without the launch command specified, it fails to launch.
Is the batch script I wrote in the project what gets sent to all the slave machines, or do I have this all wrong?
Thank you!
EDIT
I got it working thanks to the answer posted here. I wrote up a doc on how I got Jenkins working from installation to deployment. There are other resources out there, but I hope this will help someone.
Jenkins Master/Agent Setup
If you want to have the option "Launch slave agents via Java Web Start", you should specify the TCP port for slave agents.
This is done through Manage Jenkins > Configure Global Security > TCP port for JNLP agents. You can select a fixed port such as 50000. More info here.
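With the port fixed, the launch command you run on the slave machine is typically along these lines (slave.jar is downloaded from the master, and the node name and secret are shown on the node's page in Jenkins; the values here are placeholders):

java -jar slave.jar -jnlpUrl http://<jenkins-host>:8080/computer/<node-name>/slave-agent.jnlp -secret <secret>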
We are looking at a failover scenario for open source Jenkins masters, and we currently back up Jenkins jobs and configuration using the SCM Sync plugin. Any ideas on how to restore Jenkins for high availability when the master goes down?
Docker images work great for this. In essence, the master is just an image that you configure with all your jobs. Logs, of course, should not be stored in the Docker image but piped to AWS S3 or some other datastore.
Each job you run launches a new Docker slave to handle that task. This offers HA with lots of room for horizontal scaling.
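As a rough sketch of what such a master image could look like (the plugin list, job directory, and even the plugin-install helper depend on your image version; these are assumptions, not a drop-in setup):

FROM jenkins/jenkins:lts
# Pre-install the plugins the master needs (plugins.txt is a file you maintain)
COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
RUN jenkins-plugin-cli --plugin-file /usr/share/jenkins/ref/plugins.txt
# Bake the job definitions in so a rebuilt master comes up with the same jobs
COPY jobs/ /usr/share/jenkins/ref/jobs/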
If Docker/containers are not your thing, configuration management is the way to go (Chef, Puppet, Ansible). Take your pick and use these tools to build out a consistent Jenkins master and restore from the latest backup.
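A very rough Ansible-flavored sketch of that restore flow, assuming the Jenkins package repository is already configured and with the backup location as a placeholder:

- hosts: jenkins_master
  become: true
  tasks:
    - name: Install Jenkins
      package:
        name: jenkins
        state: present
    - name: Fetch the latest JENKINS_HOME backup (bucket and file name are placeholders)
      command: aws s3 cp s3://my-backups/jenkins-home-latest.tar.gz /tmp/jenkins-home.tar.gz
    - name: Restore JENKINS_HOME
      unarchive:
        src: /tmp/jenkins-home.tar.gz
        dest: /var/lib/jenkins
        remote_src: true
    - name: Start Jenkins
      service:
        name: jenkins
        state: started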
I'm new to Jenkins, and I'd like to know whether it is possible to have one Jenkins server deploy/update code on multiple web servers.
Currently, I have two web servers, which use Python Fabric for deployment.
Any good tutorials will be greatly welcomed.
One solution could be to declare your web servers as slave nodes.
First, give Jenkins credentials to your servers (login/password, or SSH login + private key, or a certificate). This can be configured in the "Manage Credentials" menu.
Then configure the slave nodes. Read the doc.
Then create a multi-configuration job. First you have to install the matrix-project plugin. This will allow you to send the same deployment instructions to both your servers at once.
Since you are already using Fabric for deployment, I would suggest installing Fabric on the Jenkins master and having Jenkins kick off the Fabric commands to deploy to the remote servers. You could set up the hostnames or IPs of the remote servers as parameters of the build and just have shell commands that iterate over them and run the Fabric commands. You can take this a step further and have the same job deploy to dev/test/prod just by using a different set of hosts.
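A minimal sketch of that approach with a Fabric 1.x fabfile; the host names, paths, and restart command below are made up for illustration:

# fabfile.py
from fabric.api import env, run, cd

# Hosts could also come from a Jenkins build parameter instead of being hard-coded
env.hosts = ['web1.example.com', 'web2.example.com']

def deploy():
    # Pull the latest code and restart the app on each host
    with cd('/var/www/myapp'):
        run('git pull origin master')
        run('sudo systemctl restart myapp')

The Jenkins build step then just runs "fab deploy", or "fab -H $HOSTS deploy" if the hosts come from a build parameter.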
I would not make the web servers slave nodes. Reserve slave nodes for build jobs. For example, if you need to build a Windows application, you will need a Windows Jenkins slave. If you have a problem with installing Fabric on your Jenkins master, you could create a slave node that is responsible for running Fabric deploys and force anything that runs a Fabric command to use that slave. I feel like this is overly complex, but if you have a ton of builds on your master, you might want to go this route.
I've been reading about Jenkins master/slave configurations but I still have some questions:
Is it so that a slave Jenkins is not actually installed and started up the way the master Jenkins is? I assumed I would install one master Jenkins and one slave Jenkins in the same way, and that the master Jenkins would then control the slave, e.g. through SSH. So I cannot view the slave Jenkins through a GUI?
The reason I have thought about adding a slave Jenkins on another VM is that the VM contains our application servers (many test environments). Deploying and starting/stopping application servers from the master Jenkins is a pain because the master Jenkins and the application servers are on different machines. Therefore, if I added a slave Jenkins to the machine where our application servers are, these would actually be deployed and started/stopped locally (by the slave Jenkins). I wonder if I have missed something, or if my presumptions are still valid.
In a standard Jenkins master/slave setup, Jenkins is only installed on the master. That is where you see the user interface and start/configure build jobs.
The slaves execute the jobs. There is no Jenkins installation there other than a small Java app that lets Jenkins communicate to and from the slave. Jenkins talks to these slaves through the slave.jar app, e.g. over SSH via the SSH Slaves Plugin, and can monitor whether the slave is running, etc.
So in your case, you can start jobs from the master that will execute on the application servers.
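Under the hood, what the SSH Slaves Plugin does when it brings a node online is roughly equivalent to this (user and paths are illustrative; the plugin handles it for you):

scp slave.jar jenkins@app-server:/home/jenkins/slave.jar
ssh jenkins@app-server java -jar /home/jenkins/slave.jar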
The master/slave setup also allows you to host a whole bunch of different slaves, with different OSes, different hardware, etc. You can pass job results (artifacts) from one slave to another via the Copy Artifacts Plugin.
There are also ways to duplicate the actual Jenkins master with load balancing in a heavy use scenario. That is not what you seem to be looking for.