Monitor the number of requests openstack4j makes - Jenkins

Jenkins's openstack-plugin uses openstack4j to talk to an OpenStack cloud. I'm looking for a way to monitor the number of HTTP(S) API calls openstack4j makes, from the client-side perspective.
Some possible angles:
Can Jenkins tell me that? (Although I believe openstack4j makes the HTTP(S) calls independently.)
It's running inside a container; are there any HTTPS call monitoring tools I could use at that level?

Regarding your questions:
I don't think Jenkins can do this monitoring for you; in the end, it's just a big, distributed job scheduler and runner. If there's no plugin purposely written for this, it can't, and you'd have to write that plugin yourself.
Regarding the monitoring, there's a bunch of questions to answer, actually:
Do you want a Java-based solution only?
Surprisingly, I couldn't find anything Java-based; the standard Java Management Extensions (JMX) apparently have no direct support for inspecting a process's open network connections.
If it doesn't have to be Java-specific, you could use tcpdump or tshark to analyze the traffic, for example, as long as you know where the calls go.
Another generic Linux based alternative is to launch the process through strace. You might need to make some adjustments for Java.
Is the connection HTTP or HTTPS (it matters a lot)?
For HTTPS, one option would be to man-in-the-middle the HTTPS connection with some sort of proxy. Then you can just check the proxy's logs for the connections; a sketch of that approach follows.
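For illustration, here is a minimal sketch of that proxy approach using a mitmproxy addon that counts requests per host. The proxy host/port are assumptions; you would point the JVM at the proxy (e.g. -Dhttps.proxyHost=localhost -Dhttps.proxyPort=8080) and add the mitmproxy CA certificate to the JVM truststore so the TLS interception is accepted:

```python
# count_calls.py - a mitmproxy addon that counts HTTP(S) requests per host.
# Run with: mitmdump -s count_calls.py
# Hostnames and ports are placeholders for your environment.
from collections import Counter

from mitmproxy import http


class CallCounter:
    def __init__(self):
        self.calls = Counter()

    def request(self, flow: http.HTTPFlow) -> None:
        # Called once for every client request passing through the proxy.
        self.calls[flow.request.pretty_host] += 1

    def done(self):
        # Called on shutdown; print a per-host summary.
        for host, count in self.calls.most_common():
            print(f"{host}: {count} requests")


addons = [CallCounter()]
```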

Can Windows Service be started by incoming TCP connection?

I'd like to make a small Windows service that would be shut down most of the time, but would be automatically activated when an incoming TCP (REST?) connection arrives. I do not want the service to be running 24/7 just in case (albeit that might turn out to be the least evil).
There were projects porting inetd and xinetd to Windows, but they are all abandoned, and introducing yet another dependency for a lean program is wrong.
However, given that they were abandoned, I thought this might now be standard Windows functionality?
The Service Triggers documentation seems both incomplete and self-contradictory.
SERVICE_TRIGGER_SPECIFIC_DATA_ITEM claims that for SERVICE_TRIGGER_TYPE_NETWORK_ENDPOINT there is
A SERVICE_TRIGGER_DATA_TYPE_STRING that specifies the port, named pipe, or RPC interface for the network endpoint.
Sounds great, but it totally lacks any example of how to specify the port and nothing but the port. But then:
SERVICE_TRIGGER structure seems to claim there is no way to "wait" on TCP/UDP connections.
SERVICE_TRIGGER_TYPE_NETWORK_ENDPOINT - The event is triggered when a packet or request arrives on a particular network protocol.
So far so good... But then.
The pTriggerSubtype member specifies one of the following values: RPC_INTERFACE_EVENT_GUID or NAMED_PIPE_EVENT_GUID. The pDataItems member specifies an endpoint or interface GUID
Dead end. You have no choice but either the Windows-specific flavor of RPC or Windows-specific named pipes. And you can only specify a GUID as a data item, not a port, contrary to what was stated above.
I wonder which part of the documentation is wrong? Can the ChangeServiceConfig2 API be used for the seemingly simple aim of starting a service in response to a TCP packet arriving on a specific port? If yes, how?
There is also SERVICE_TRIGGER_TYPE_FIREWALL_PORT_EVENT, but the scarce documentation seems to say the functionality is the opposite: the trigger is not a remote packet coming in from a client, but a local server binding to a port.
Some alternative avenues, from a quick search:
Internet Information Server/Services seems to have "Windows Process Activation Service" and "WWW Publishing Service" components, but adding a dependency on heavy IIS feels wrong too. It can also interfere with other HTTP servers (for example WAMP systems). Imagining explaining to non-techie persons how to diagnose and resolve clashes over TCP ports makes me shiver.
I wonder, though, whether that kind of on-demand service start can be done by programming only against the http.sys driver, without the rest of IIS.
The COM protocol seems to have an on-demand server activation feature, and hopefully so does DCOM, but I do not want a DCOM dependency. It seems much easier today to find documentation, programs, and network administrators for maintaining plain TCP or HTTP connections than for DCOM. I fear relying on DCOM would become more and more fragile in practice, just like relying on DDE.
DCOM and NT RPC would also make the program non-portable if I later decide to use operating systems other than Windows.
Really, starting a service/daemon on an incoming network connection seems such an obvious task that there has to be an out-of-the-box way to do it in Windows?

How to configure Prometheus in a multi-location scenario?

I love using Prometheus for monitoring and alerting. Until now, all my targets (nodes and containers) lived on the same network as the monitoring server.
But now I'm facing a scenario where we will deploy our application stack (as a bunch of Docker containers) to several client machines in their networks. Nearly all of the clients' networks are behind a firewall or NAT, so scraping becomes quite difficult.
As we're still accountable for our stack, I'd like to have a central monitoring server, alerting, and dashboards.
I was wondering what the best architecture could be if I want to implement this with Prometheus, but I couldn't find any convincing approaches. My ideas so far:
Use a Pushgateway on our side and push all data out of the client networks. As the docs state, it's not intended to be used that way: https://prometheus.io/docs/practices/pushing/
Use a federation setup (https://prometheus.io/docs/prometheus/latest/federation/): place a Prometheus server in every client network behind a reverse proxy (to enable SSL and authentication) and aggregate the relevant metrics there. Open/forward just a single port for federation scraping.
Other, more experimental setups, such as SSH tunneling (e.g. https://miek.nl/2016/february/24/monitoring-with-ssh-and-prometheus/) or a VPN!?
Thank you in advance for your help!
Nobody posted an answer, so I will try to give my opinion on the second choice, because that's what I think I would do in your situation.
The second setup seems the most flexible: you have access to the data and only need to open one port for the federating server, so it should still be secure.
Another bonus of this type of setup is that even if the firewall (or the connection) stops working for one reason or another, you will still have a local Prometheus scraping. You will get an alert because you won't be able to reach the server(s), but when the connection comes back you will have all the data, so you won't have a hole in the Grafana dashboards, apart from during the incident itself.
The issue with this setup is that you need to maintain as many Prometheus servers as there are client networks. A solution for this would be a Packer image or maybe an Ansible playbook to automate the deployment.
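For reference, a federation scrape is just an HTTP GET against the client-side Prometheus's /federate endpoint with one or more match[] selectors, which is also a handy way to sanity-check a client network from the central side. A minimal sketch (URL, credentials and the job label are placeholders for whatever your reverse proxy and relabeling use):

```python
# Check what a federation scrape would return from one client-side Prometheus.
# The URL, credentials and selectors below are placeholders.
import requests

resp = requests.get(
    "https://prometheus.client1.example.com/federate",
    params={"match[]": ['{job="node"}', '{__name__=~"job:.*"}']},
    auth=("scraper", "secret"),  # whatever the reverse proxy enforces
    timeout=10,
)
resp.raise_for_status()
# Prints exposition-format samples the central Prometheus would ingest.
print(resp.text[:1000])
```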

Alternatives to CLI and SNMP

I am trying to write a small script that will help me automate some of my IT tasks regarding VLAN management.
I do not want to log in to my switch via the command line - I want to send commands to it and get responses back (over the network).
Are there any alternatives? I have started to search the web, but so far I have not found anything.
I know SNMP is an option for gathering information, but I want to check other alternatives.
Thanks.
You can try the NETCONF configuration protocol; it is an RPC-like management protocol supported by Cisco and many other vendors. A sketch of a basic NETCONF session follows.
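As a rough illustration, a minimal NETCONF session with the Python ncclient library might look like this (host, port and credentials are placeholders, and the device must have NETCONF enabled):

```python
# A minimal NETCONF session using ncclient (pip install ncclient).
# Host, port and credentials are placeholders for your switch.
from ncclient import manager

with manager.connect(
    host="192.0.2.10",
    port=830,
    username="admin",
    password="secret",
    hostkey_verify=False,
) as m:
    # Fetch the running configuration; VLAN changes would go through
    # m.edit_config() with a vendor/YANG-specific XML payload.
    running = m.get_config(source="running")
    print(running)  # prints the XML reply
```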
SNMP is the only widely and commonly used option here.
You can use WMI to manage Windows-based infrastructure.
There is also the legacy syslog protocol (RFC 3164), which is UDP based.
For traffic monitoring and billing purposes there are the NetFlow, sFlow, jFlow, IPFIX and RADIUS protocols.
There are some other protocols, but they are mostly proprietary.
So I'd suggest using SNMP, which is nowadays the de facto standard in the network monitoring domain.
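For comparison, a single SNMP GET with the Python pysnmp library looks roughly like this (target address and community string are placeholders):

```python
# A minimal SNMPv2c GET of sysDescr.0 using pysnmp (pip install pysnmp).
# Target address and community string are placeholders.
from pysnmp.hlapi import (
    getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity,
)

errorIndication, errorStatus, errorIndex, varBinds = next(
    getCmd(
        SnmpEngine(),
        CommunityData("public", mpModel=1),           # SNMPv2c community
        UdpTransportTarget(("192.0.2.10", 161)),
        ContextData(),
        ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0)),
    )
)

if errorIndication:
    print(errorIndication)
elif errorStatus:
    print(f"{errorStatus.prettyPrint()} at index {errorIndex}")
else:
    for varBind in varBinds:
        print(" = ".join(x.prettyPrint() for x in varBind))
```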
You might look at Expect as a scripting-language solution. It is commonly used to do exactly what you need:
log into device (with result cases)
execute commands
save config
logout
As you build out a script library, tasks become simpler: you can do things like run scripts with parameters and have Expect do all the detail work.
See the Wikipedia article for an overview; a Python-flavoured sketch of the same approach is shown below.
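If you prefer to script in Python rather than Tcl, pexpect follows the same idea. A rough sketch, assuming a generic SSH-reachable switch CLI (host, credentials, prompt patterns and commands are placeholders):

```python
# An Expect-style session using pexpect (pip install pexpect).
# Host, credentials, prompts and commands are placeholders.
import pexpect

child = pexpect.spawn("ssh admin@192.0.2.10", timeout=30)
child.expect("[Pp]assword:")
child.sendline("secret")
child.expect("#")                    # wait for the privileged prompt
child.sendline("show vlan brief")
child.expect("#")
print(child.before.decode())         # output captured between the prompts
child.sendline("exit")
child.close()
```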
I have also used SNMP for this kind of thing, but the functionality is different: you use SNMP read-write access to upload partial or complete configs, save the running config to flash, and/or save the config off-device.
Try the NETCONF+YANG protocol; it is currently the best option for network device configuration. More about SNMP alternatives:
https://bestmonitoringtools.com/top-snmp-alternatives-because-snmp-is-dying/

Exposing a library via zeromq

I want to know the best way to expose a library via ZeroMQ. Say I install a machine learning library (mll) on one machine and have a ZeroMQ broker running on another. Now, if I have a ZeroMQ client that needs to call functions within the mll, how can it do so via the broker?
I want to know the steps I need to take to make this work for libraries in a generic way.
Basically you need to have a "listener" that picks up data from ZMQ and feeds it to your machine-learning backend code, then transmits the results back to the requestor.
There are a lot of design choices to be made, such as what format to use to serialize data between client and server (JSON? YAML? Pickle? Thrift? ...), and how to encode requests and request options. But all things considered, this is pretty straightforward ZMQ usage; a minimal sketch follows.
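A minimal sketch of such a listener, assuming JSON for serialization and a hypothetical predict() function standing in for whatever the library actually exposes:

```python
# server.py - a minimal ZeroMQ REP "listener" that maps JSON requests to library calls.
# predict() is a hypothetical stand-in for the real library function.
import json

import zmq


def predict(features):
    return sum(features)  # placeholder for the actual ML call

context = zmq.Context()
socket = context.socket(zmq.REP)
socket.bind("tcp://*:5555")

while True:
    request = json.loads(socket.recv())   # e.g. {"op": "predict", "args": [[1, 2, 3]]}
    result = predict(*request["args"])
    socket.send_string(json.dumps({"result": result}))
```

The client side is symmetric: a REQ socket that connects to the server (or to the broker in between), sends the JSON-encoded request, and blocks on the reply.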
The problem comes when you want a more feature-rich, complete, robust, etc. design--things like multi-threaded or multi-process servers, multi-machine scalability, secure user / request authentication and authorization, job reporting and dashboard, or job checkpointing. All those "extras" are common "network job scheduler" or "(enterprise) message broker" functions that tend to come out-of-the-box with packages like Celery or RQ.
If you don't want to go the full "message broker middleware" route, you might start by examining others' designs for lightweight ZMQ-based job brokers, such as this one from Jeff Knupp.

Emulate incoming network messages for Indy

Is it possible to emulate incoming messages using Indy (if it's of any importance: I'm using Indy 10 and Delphi 2009)? I want to be able to create these messages locally, and I want Indy to believe that they come from specific clients on the network. All the internal Indy handling (choice of the thread in which the message is received, and so on) should be exactly the same as if the message had arrived over the network.
Any ideas on that? Thanks in advance for any tips.
What you want to do has nothing to do with Indy, as you would need to do this at a much lower level. The easiest way to make Indy believe that messages come from a specific client is to inject properly prepared packets into the network stack. Read up on TCP packet injection on Google or Wikipedia. Ettercap is one such tool that allows you to inject packets into established connections. However, this is definitely going into gray areas, as some of these tools are illegal in some countries.
Anyway, all of this is IMHO much too complicated. I don't know what exactly you want to do, but a specially prepared client or server is a much better tool to emulate certain behaviour while developing server or client applications. You can run them locally, or if you need to have different IP addresses or subnets you can do a lot with virtual machines.
Indy doesn't have any built-in mechanism for this, but thinking off the top of my head, I would recommend building a small test application (or a suite) that runs locally on your development machine and connects to your Indy server application to replay messages.
It should be irrelevant to your Indy server application whether a TCP connection is made locally or from a remote host, as the mechanism by which a server thread is created and a command processed is identical in both scenarios.
My last gig involved using Indy, and all our testing was done with a similar Resender-type application that would load local message files and send them to the Indy server app; a sketch of that idea follows below.
HTH and good luck!
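A rough sketch of such a replay tool, in Python for brevity (the client side doesn't need to be Delphi); host, port and the message directory are placeholders:

```python
# replay.py - send recorded message files to the Indy server over plain TCP.
# Host, port and file locations are placeholders for your environment.
import socket
from pathlib import Path

HOST, PORT = "127.0.0.1", 6000

for path in sorted(Path("messages").glob("*.bin")):
    with socket.create_connection((HOST, PORT), timeout=10) as conn:
        conn.sendall(path.read_bytes())
        reply = conn.recv(4096)  # read whatever the server answers, if anything
        print(f"{path.name}: sent {path.stat().st_size} bytes, got {len(reply)} back")
```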
One thing you can do is create virtual machines to run your test clients; that way they will not be seen as the "local machine", and it's fairly simple to create a complex network with VMs, provided you have enough memory and disk space. The other advantage of testing with VMs is that you can eliminate the development environment completely when it's time to focus on deployment. It's amazing how much time that saves alone.
Virtual PC is a free download from Microsoft and works fairly well. VMware is another option, but costs a little more to get started. For development purposes, I prefer the desktop versions, but the server versions also work well. You will still need a license to install the virtual OS. An MSDN membership is probably the cheapest way to go, and allows you to build test environments for other flavors of the OS.
Indy has an abstract stack mechanism for cross-platform support (IdStack.pas). I think you can hack the stack for Windows (IdStackWindows.pas). It is a class, so you could even consider deriving from it and overriding some functions to do the hack.
