In Flume-NG, is there a way to check the heartbeat on the HTTP agent?

I want to put an ELB in front of the Flume-NG agents, and I was wondering if there is a way to check the status/heartbeat of a Flume agent without actually sending any event into it.

Why would you want to put an ELB in front of the agents? If you have multiple agents, then use one of the sink processors to get your events through. You could use the Failover or Load-Balancing sink processor (see the config sketch below).
I'm not aware of any way to query the status of a Flume agent.
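For reference, a minimal sketch of what a load-balancing sink group looks like in a Flume agent's properties file; the agent name (a1), channel (c1), sink names, and downstream hosts/ports are placeholders, and the source/channel definitions are omitted:

```
# Hypothetical agent "a1" fanning events out across two downstream Avro sinks
a1.sinks = k1 k2
a1.sinkgroups = g1
a1.sinkgroups.g1.sinks = k1 k2
a1.sinkgroups.g1.processor.type = load_balance
a1.sinkgroups.g1.processor.selector = round_robin
a1.sinkgroups.g1.processor.backoff = true

a1.sinks.k1.type = avro
a1.sinks.k1.channel = c1
a1.sinks.k1.hostname = downstream-host-1
a1.sinks.k1.port = 4141

a1.sinks.k2.type = avro
a1.sinks.k2.channel = c1
a1.sinks.k2.hostname = downstream-host-2
a1.sinks.k2.port = 4141
```

Switching processor.type to failover (and giving each sink a priority) gives you failover semantics instead.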

Related

Stomp over SockJS ActiveMQ relay to multiple servers

I'm trying to create a reasonable setup for client-to-client communication for our existing infrastructure. I've been reading the docs for Spring, WebSocket, STOMP, SockJS, and ActiveMQ for over a week now, and I'm not sure whether what I am trying to do is feasible or sensible. The Spring server and JavaScript client were up and running relatively quickly, and sending messages between clients just works (direct connection from the JS client to the Spring server). This setup won't suffice for our needs, so we decided to put dedicated brokers in between. Configuring ActiveMQ is a nightmare, probably because I don't really know where to start. I have not worked with a dedicated broker so far.
Environment
170 independent servers (Tomcat, Spring, SockJS, STOMP)
2 ActiveMQ (Artemis) brokers (load balancing, fault tolerance)
a few thousand clients (JavaScript/.NET, SockJS, STOMP)
Requirement
I need every client to be able to talk to every other client. Every message has to be curated by one of the servers. I'd like the clients to connect to one of the ActiveMQ brokers. The ActiveMQ brokers would hold a single connection to every single server. The point is to avoid every client having to open 170 WebSocket connections, one to each server. The servers do not need to talk to each other (yet/necessarily), since they are independent and have different responsibilities.
Question
Is ActiveMQ or any other dedicated broker viable as a transparent proxy/relay, i.e. can it handle this situation, and are there ways to dynamically decide the correct recipients? Or should I go another route, like rolling my own Spring-based relay?
In a traditional messaging use-case (e.g. using ActiveMQ Artemis with STOMP) the broker manages "destinations" and any messages sent to those destinations. Messages are only dispatched to clients if the client specifically creates a consumer on a destination.
In your use-case, all of your 170 "servers" would actually be messaging clients. They would need to create a consumer on the broker in order to receive messages. To be clear, once the consumer is created, the broker would dispatch messages to it as soon as they arrive.
I'm not sure what exactly you mean by "transparent," but if it means that the process(es) receiving the message(s) don't have to do anything then no message broker will handle your use-case.
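To make the consumer part concrete, here is a minimal sketch of one of the 170 "servers" acting as a STOMP consumer, using the Python stomp.py library against a STOMP-capable broker; the broker host, port, credentials, and destination name are assumptions, and the listener callback signature differs in older stomp.py releases:

```python
# Sketch of one "server" acting as a STOMP consumer on the broker.
# Broker host/port, credentials, and destination name are placeholders.
import time
import stomp

class CuratorListener(stomp.ConnectionListener):
    def on_message(self, frame):
        # Curate/validate the client message here, then reply or forward it.
        print("received:", frame.body)

conn = stomp.Connection([("broker-host", 61613)])
conn.set_listener("curator", CuratorListener())
conn.connect("user", "password", wait=True)

# The broker only dispatches messages from destinations we actively consume.
conn.subscribe(destination="/queue/client-requests", id="1", ack="auto")

while True:
    time.sleep(1)
```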

monitor the number of requests openstack4j makes

Jenkins's openstack-plugin uses openstack4j for talking to an OpenStack cloud. I'm looking for a way that we can monitor the number of HTTP(S) API calls openstack4j makes, from a client-side perspective.
Some possible things to consider:
Can Jenkins tell me that? (although I believe openstack4j makes the HTTP(S) calls independently)
It's running inside a container; are there HTTPS call monitoring tools that I could use at that level?
Regarding your questions:
I don't think Jenkins can do this monitoring for you; in the end, it's just a big, distributed job scheduler and runner. If there's no plugin purposely written for this, it can't. You'd have to write it yourself.
Regarding the monitoring, there's a bunch of questions to answer, actually:
Do you want just a Java based solution?
Surprisingly, I couldn't find anything Java-based; the standard Java Management Extensions (JMX) apparently do not have direct support for investigating a process's open network connections.
If it doesn't have to be Java-specific, you could, for example, use tcpdump or tshark to analyze the traffic, as long as you know where the calls go.
Another generic Linux based alternative is to launch the process through strace. You might need to make some adjustments for Java.
Is the connection HTTP or HTTPS (it matters a lot)?
For HTTPS, one option would be to man-in-the-middle the HTTPS connection with some sort of proxy. Then you can just check the logs of the proxy for the connections (see the sketch below).
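As one concrete way to do that (picking mitmproxy purely as an example proxy, which is my assumption rather than part of your setup), a small addon can count requests per host; the JVM running openstack4j would need to be pointed at the proxy and trust its CA certificate:

```python
# count_requests.py - run with: mitmdump -s count_requests.py
# Counts HTTP(S) requests per host so you can see how many calls openstack4j makes.
from mitmproxy import http

class RequestCounter:
    def __init__(self):
        self.counts = {}

    def request(self, flow: http.HTTPFlow) -> None:
        host = flow.request.pretty_host
        self.counts[host] = self.counts.get(host, 0) + 1
        print(f"{host}: {self.counts[host]} requests so far")

addons = [RequestCounter()]
```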

Docker Swarm - Route a request to ALL containers

Is there any sort of way to broadcast an incoming request to all containers in a swarm?
EDIT: More info
I have a distributed application with many docker containers. The client can send requests to the swarm and have it respond. However, in some cases, the client needs to change a state on all server instances and therefore I would either need to be able to broadcast a message or have all the Docker containers talk to each other similar to MPI, which I'm trying to avoid.
There is no built-in way to turn a unicast packet into a multicast packet, nor any common third-party way of doing it (that I've seen or heard of).
I'm not sure what "change a state on all server instances" means. Are we talking about the running state on all containers in a single service?
Or the actual underlying OS? All containers on all services? etc.
Without knowing more about your use case, I'd say it's likely better to design something where the request is received by one Swarm service and then stored in a queue system, where a backend worker would pick it up and "change the state on all server instances" for you.
It depends on your specific use case. One way to do it is to issue a docker service update --force, which will cause all of the service's containers to restart. If your containers fetch the changed information at startup, this has the required effect (a sketch follows below).
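If you want to trigger that programmatically, a minimal sketch using the Python Docker SDK (an assumption on my part; the CLI command above does the same thing) run against a Swarm manager might look like this:

```python
# Sketch: force-restart all tasks of a Swarm service so they re-read state
# at startup. Assumes the Python Docker SDK (docker) is installed and the
# daemon we talk to is a Swarm manager. "my-service" is a placeholder name.
import docker

client = docker.from_env()
service = client.services.get("my-service")
service.force_update()  # equivalent of "docker service update --force my-service"
```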

WebPageTest WPT Private Instance - Agents not deregistering with server

I have set up a WPT (WebPageTest) private instance using Docker, Mesos, and Marathon.
However, when I'm scaling the agents up and down, sometimes the server thinks that there are more agents connected than there really are (when looking at server-host/install/).
It looks like perhaps the agent doesn't "re-register" properly with the Server.
Questions:
- How does the agent notify the server that it is no longer connected?
- Is there an option I can pass when starting up the dockerized instance (of agent/server) or marathon config to notify the server when the instance is being scaled down?
Thanks!
You can use the Marathon event bus; basically, you can subscribe to it and react when instances are scaled down. You can read more about it here:
https://mesosphere.github.io/marathon/docs/event-bus.html
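As a minimal sketch, assuming a reasonably recent Marathon that exposes the event bus as a Server-Sent Events stream at /v2/events (older versions use HTTP callback subscriptions instead), you could listen for scaling events and then deregister agents from the WPT server accordingly; the Marathon URL is a placeholder:

```python
# Sketch: tail Marathon's event stream and print each event. From these events
# (e.g. status/instance changes) you could call back into the WPT server to
# clean up agents that were scaled down. The URL is a placeholder.
import requests

MARATHON = "http://marathon-host:8080"

with requests.get(f"{MARATHON}/v2/events",
                  headers={"Accept": "text/event-stream"},
                  stream=True) as resp:
    for line in resp.iter_lines():
        if line:
            print(line.decode("utf-8"))
```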

How can I detect if gearman is working?

I installed Gearman and configured Jenkins to connect to it. I tested that Jenkins can connect to Gearman. Now, I am creating another application to detect that Gearman is still up and running. Is it possible to do that? If so, how do I do it?
You can connect to Gearman through a regular TCP connection on port 4730. After connecting, you can issue the command status to get a list of registered functions, their queue status, and the number of assigned workers, or workers to see a list of connected workers. Some libraries abstract this away (look for something mentioning Gearman Admin), depending on your language and library of choice.
The complete Admin protocol (which isn't much larger, really) can be seen on the protocol page at the Gearman homepage.
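A minimal sketch of talking to the admin port directly, assuming the default host/port and using nothing but Python's standard socket module:

```python
# Sketch: query the Gearman admin protocol over a plain TCP connection.
# "status" returns one line per registered function; the response is
# terminated by a line containing only ".".
import socket

with socket.create_connection(("localhost", 4730), timeout=5) as sock:
    sock.sendall(b"status\n")
    data = b""
    while not data.endswith(b"\n.\n") and data != b".\n":
        chunk = sock.recv(4096)
        if not chunk:
            break
        data += chunk

# Each line: <function name>\t<queued>\t<running>\t<available workers>
print(data.decode("utf-8"))
```

If the connection or the status command fails, you can treat Gearman as down.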

Resources