How to run multiple Flume agents on a single node with Cloudera Manager? - flume

I have installed Flume on my CDH 5.8.0 cluster. The Flume agent is on a single node where the Flume tasks run. I use the Linux terminal to run two separate data ingestions via Flume, with separate configuration files.
I want to monitor both ingestion processes via Cloudera Manager. The Flume configuration panel in CM gives the option to add custom configuration properties, but that is for a single Flume agent.
I can't add another Flume agent via CM on the same host. How can I monitor both ingestion processes with Cloudera Manager?

If you can only monitor a single Flume agent, you could merge both agent configuration files into one, so that you run a single Flume agent (the one you can monitor).
You can declare as many sources, channels and sinks as you want:
a1.sources = r1 r2
a1.sinks = k1 k2
a1.channels = c1 c2
And then bind them appropriately:
a1.sources.r1.channels = c1
a1.sources.r2.channels = c2
...
a1.sinks.k1.channel = c1
a1.sinks.k2.channel = c2
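For illustration, a merged file might look like the sketch below; the source and sink types (an exec source tailing a file, an HDFS sink) and the paths are only assumptions standing in for whatever your two original files declare:
a1.sources = r1 r2
a1.channels = c1 c2
a1.sinks = k1 k2
# first ingestion (assumed: tail one log file into HDFS)
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /var/log/app1.log
a1.channels.c1.type = memory
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = /flume/app1
# second ingestion (assumed: a second log file into another HDFS path)
a1.sources.r2.type = exec
a1.sources.r2.command = tail -F /var/log/app2.log
a1.channels.c2.type = memory
a1.sinks.k2.type = hdfs
a1.sinks.k2.hdfs.path = /flume/app2
# bindings
a1.sources.r1.channels = c1
a1.sources.r2.channels = c2
a1.sinks.k1.channel = c1
a1.sinks.k2.channel = c2
Cloudera Manager then monitors the single agent a1, and both pipelines show up as its sources, channels and sinks.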

Related

How to use multiple labels to select a node in a Jenkins Pipeline script?

Intro:
We are currently running a Jenkins master with multiple slave nodes, each of which is tagged with a single label (e.g., linux, windows, ...)
In our scripted-pipeline scripts (which are defined in a shared library), we currently use snippets like the following:
node ("linux") {
// do something on a linux node
}
or
node ("windows") {
// do something on a windows node
}
Yet, as our testing environment grows, we now have multiple different Linux environments, some of which have certain capabilities and some of which do not (e.g., some may be able to run service X and some may not).
I would like to label my slaves with multiple labels indicating their capabilities, for example:
Slave 1: linux, serviceX, serviceY
Slave 2: linux, serviceX, serviceZ
If I now need a Linux slave that is able to run service X, I wanted to do the following (according to this):
node ("linux" && "serviceX") {
// do something on a linux node that is able to run service X
}
Yet, this fails.
Sometimes a Windows slave even gets selected, which is not what I want.
Question: How can I define multiple labels (and-combined) based on which a node gets selected in a Jenkins scripted pipeline script?
The && needs to be part of the label string, not a Groovy operator between two strings.
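For example (a minimal sketch using the labels from the question), the whole label expression goes inside a single string:
node("linux && serviceX") {
// runs only on nodes that carry both the linux and the serviceX label
}
The expression is evaluated by Jenkins' label parser, so && (and likewise || and !) must appear inside the quotes rather than between Groovy strings.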

TFS rest API - queue build on same machine

I am trying to see if it is possible to queue a build on the same VM as another build. Note, I am in the vNext system. I know it is possible to configure information about the agent queues, but I want to get a little more specific than that. I'm not sure if what I want falls outside of build best practices, but I'd like to ask anyway.
Consider the following setup for builds A & B:
There are 4 agents: 2 on server 1 and 2 on server 2. Any agent can pick up build A or build B; I know how to set up the demands so that only agents on those VMs can pick up these builds.
Build A uses the REST API to queue build B and waits for it to complete. Right now, under the current configuration, the spawned build B can be picked up by any of the 3 remaining agents (the free agent on the same machine as build A, or either of the agents on the other server).
I want build B to run on the same server as build A when build A launches it. In other words, if an agent on server 1 picks up build A, I want the build B it launches to use the same server (the other agent on that machine). Conversely, if an agent on server 2 picks up build A, I want the build B it launches to be picked up by server 2 as well. Assuming I know things like the agent ID, agent machine name, etc., I wanted to use the REST API in C# when launching build B to control which machine picks it up.
I could configure the agents so that both builds only run on one machine, but I am trying to avoid running too many agents on one machine and want to distribute them as much as possible. I know it is possible to specify information about agent queues / pools etc., but not the machine. I am trying to avoid having to restrict the number of machines that run a set of agents.
I know this may seem a bit unusual ... but I am dealing with a scenario where I want to share information between the builds that will be simplified if they run on the same machine. If build B is launched by itself, it doesn't matter what machine picks it up. If this isn't possible, I will try other ways to share information between the builds.
Is it possible to do this?
Yes, you can achieve that by setting up the demands with the REST API when queueing the build.
For example, you can use the PowerShell script below to queue a build on a specific agent:
Param(
[string]$collectionurl = "http://server:8080/tfs/DefaultCollection/",
[string]$projectName = "ProjectName",
[string]$keepForever = "true",
[string]$BuildDefinitionId = "1",
[string]$user = "username",
[string]$token = "password"
)
# Base64-encodes the Personal Access Token (PAT) appropriately
$base64AuthInfo = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(("{0}:{1}" -f $user,$token)))
function CreateJsonBody
{
    # Build the request body as a here-string; $BuildDefinitionId expands into the JSON
    $value = @"
{
    "definition": {
        "id": $BuildDefinitionId
    },
    "sourceBranch": "$/xxxx",
    "demands": ["Agent.Name -equals Agent1"]
}
"@
    return $value
}
$json = CreateJsonBody
$uri = "$($collectionurl)$($projectName)/_apis/build/builds?api-version=2.0"
$result = Invoke-RestMethod -Uri $uri -Method Post -Body $json -ContentType "application/json" -Headers @{Authorization=("Basic {0}" -f $base64AuthInfo)}
In your scenario (assume Agent1 through Agent4 are the agent names; you can get them from each agent's capabilities):
Server1 : Agent1, Agent2
Server2 : Agent3, Agent4
If build A is queued with Agent1, then you can set "demands":["Agent.Name -equals Agent2"] to queue build B.
Similarly, if build A is queued with Agent3, then you can set "demands":["Agent.Name -equals Agent4"] to queue build B.
You can also create a simple PowerShell script and add a PowerShell step at the end of the build steps in definition A, then run that script to queue build B once build A's own steps have completed.
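As a rough sketch of that final step (the sibling mapping and variable handling below are assumptions, not part of the original answer), the script could read the built-in Agent.Name variable and demand the other agent on the same server:
# Hypothetical mapping from each agent to the other agent on the same server
$siblingAgent = @{
    "Agent1" = "Agent2"
    "Agent2" = "Agent1"
    "Agent3" = "Agent4"
    "Agent4" = "Agent3"
}
# Agent.Name is exposed to build steps as the AGENT_NAME environment variable
$currentAgent = $env:AGENT_NAME
$demand = "Agent.Name -equals $($siblingAgent[$currentAgent])"
# $demand can then be placed into the "demands" array of the JSON body above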
If you want to queue the build with the REST API in C#, you can refer to the thread below:
Queue vNext build from c# using REST API (Tfs on premise)

To run commands on different Linux nodes via Jenkins

I want Jenkins to execute a list of commands on different Linux nodes in a network.
What steps should I take to run a command on another Linux node by addressing its IP address?
If I understood you correctly, you should add this node as a slave machine to Jenkins.
Go to the Manage Jenkins section, then to Manage Nodes, and add a new node.
Once you have added the nodes, use the following in a pipeline Groovy script:
node('node1'){
//command execution
}
node('node2'){
//command execution
}
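For example (a minimal sketch; the node names and commands are just placeholders), each block can run shell steps on that machine:
node('node1') {
// every sh step here executes on the machine registered as node1
sh 'hostname; uptime'
}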

A way to request ips of available Jenkins slaves by label in a freestyle job?

Is there a way to get the list of Jenkins slave IPs inside a Jenkins freestyle job, when executing a shell script?
Maybe as an environment variable?
You can determine the IP address(es) of a slave node via groovy. See this answer.
So you could proceed as follows:
Create a Groovy build step that writes the IP addresses of all slaves of interest to a text file (a sketch of this follows the example below).
In your shell script build step, read the IP addresses from that file.
As an example, this groovy code will print the names and IP addresses of all slaves with label mylabel:
import hudson.model.Computer.ListPossibleNames
slaves = Hudson.instance.slaves.findAll { it.getLabelString().split().contains('mylabel') }
slaves.each {
    println "slave '${it.name}' has these IPs: " + it.getChannel().call(new ListPossibleNames())
}
Sample output:
slave 'foo' has these IPs: [10.162.0.135]
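A minimal sketch of the file-writing variant (assuming a system Groovy build step, where the build binding is available, and a placeholder file name slave_ips.txt):
import hudson.model.Computer.ListPossibleNames
import hudson.model.Hudson
// collect the IPs of all online slaves carrying the label of interest
def slavesOfInterest = Hudson.instance.slaves.findAll {
    it.getLabelString().split().contains('mylabel') && it.getChannel() != null
}
def ips = []
slavesOfInterest.each { ips.addAll(it.getChannel().call(new ListPossibleNames())) }
// write one address per line into the workspace; FilePath.write() also works
// when the workspace lives on a slave
build.workspace.child('slave_ips.txt').write(ips.join('\n'), 'UTF-8')
The shell script step can then read the file, e.g. while read ip; do echo "$ip"; done < slave_ips.txt.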

Finding IP of a Jenkins node

A Windows slave node is connected to the Jenkins server through "Java Web Start". The system information of the node doesn't show its IP address.
I had to go through all the slave nodes we had and find which machine (IP address) corresponds to each slave node in Jenkins.
Is there a way to find the IP address of a slave node from Jenkins itself?
Through the Script Console of the node (Manage Jenkins -> Nodes -> Select a node -> Script Console) we can execute Groovy scripts. Run the following command to get the IP address:
println InetAddress.localHost.canonicalHostName
The most efficient and platform-independent way to find out the IP is probably the following groovy code for the "global" Script Console on the master:
import hudson.model.Computer.ListPossibleNames
def node = jenkins.model.Jenkins.instance.getNode( "myslave" )
println node.computer.getChannel().call(new ListPossibleNames())
In the console, this yields (for example)
Result
[192.168.0.17]
The result is a list of strings, as there's potentially multiple IP addresses on one machine.
Since this does not require the node-specific consoles, it's easy to add a loop around the code that covers all nodes.
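For instance, such a loop might look like the sketch below (run it in the master's script console; it simply wraps the snippet above and skips offline nodes):
import hudson.model.Computer.ListPossibleNames
jenkins.model.Jenkins.instance.nodes.each { node ->
    def channel = node.toComputer()?.getChannel()
    if (channel) {
        println "${node.nodeName}: " + channel.call(new ListPossibleNames())
    } else {
        println "${node.nodeName}: offline"
    }
}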
To answer this same question on a non-windows Jenkins slave:
Get the IP address:
println "ifconfig".execute().text
Get the hostname:
println "hostname".execute().text
From the Web Interface
Go to the node's Log link:
http://jenkins.mycompany.com:8080/computer/my_node_name/log
The first line should say something like:
JNLP agent connected from /10.11.12.123
This is very similar to what deepak explained, just broken down into short steps.
In the Jenkins UI, click:
Manage Jenkins -> Nodes -> Select a node -> Script Console
then run println InetAddress.localHost.canonicalHostName
In your Jenkins job, whether it is written in Groovy or not, you can echo the ifconfig output:
sh "/sbin/ifconfig -a | grep inet"
To get the ip on a Windows slave:
Navigate to the Script Console (Manage Jenkins -> Nodes -> Select a node -> Script Console)
println "ipconfig".execute().text
The IP can also be found through the Jenkins UI:
Manage Jenkins --> Manage Nodes --> Click the node name --> Configure
This should display both the public and private IP addresses of that node.
