Host Jenkins plugins locally

I've got a very locked-down egress firewall which restricts access to sites I specify. With the addresses I've specified allowed through, Jenkins shows that plugin updates are available, but it can't retrieve the .hpi files. I could extend the allow list, but tcpdump shows the downloads hitting a variety of endpoints, and I'd rather not open up egress to half the internet.
So my question is: can I host the plugins myself and have Jenkins sync them from a single source address? Is there an accepted way of doing this? I've read about running a Jenkins job to change the JSON file (messy). Anyone got anything better?
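For what it's worth, a minimal sketch of that "rewrite the JSON" approach: mirror the .hpi files internally and point the update metadata at the mirror. It assumes the usual update-center.json layout (a top-level plugins map whose entries carry a url field) and a hypothetical internal web server at https://mirror.internal/jenkins/, neither of which comes from the question:

import json
import re
import requests

UPSTREAM = "https://updates.jenkins.io/update-center.json"
MIRROR = "https://mirror.internal/jenkins/"  # hypothetical internal web server

raw = requests.get(UPSTREAM).text.strip()
# updates.jenkins.io wraps the JSON in updateCenter.post( ... ); strip the wrapper if present
doc = json.loads(re.sub(r"^updateCenter\.post\(|\);$", "", raw))

for name, meta in doc.get("plugins", {}).items():
    # pull each .hpi once from upstream, then point the metadata at the mirror
    with open("%s.hpi" % name, "wb") as hpi:
        hpi.write(requests.get(meta["url"]).content)
    meta["url"] = "%s%s.hpi" % (MIRROR, name)

with open("update-center.json", "w") as out:
    # re-wrap so Jenkins will still parse it as an update centre document
    out.write("updateCenter.post(" + json.dumps(doc) + ");")

You would then serve the .hpi files and the rewritten update-center.json from the mirror, point each Jenkins instance at it under Manage Jenkins > Manage Plugins > Advanced (the update site URL), and either re-sign the file or relax signature checking, since editing the metadata invalidates the upstream signature.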

Related

Monitoring server file systems from a Zabbix agent in a container?

I recently noticed that the file systems automatically discovered on a server I was monitoring made no sense and looked like regular files.
After banging my head a bit, I realized it was because the Zabbix agent on that server runs inside a container, and the file systems it lists are the ones that container can see, including Docker file mounts! This of course defeats the purpose of monitoring that server.
I can think of a couple of options, neither satisfactory:
• I can explicitly mount some of the server's file systems into the container for the agent to discover and monitor correctly (roughly as sketched below), but then I have to keep that list up to date
• I can run the agent directly on the server, but the container route seems so clean...
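To make the first option concrete, here is roughly what I mean; a sketch using the docker Python SDK, where the image name, environment variables, host names and mount points are all placeholders:

import docker

client = docker.from_env()

# Bind a hand-picked set of host file systems (read-only) into the agent
# container so its file system discovery can see them.
client.containers.run(
    "zabbix/zabbix-agent",
    name="zabbix-agent",
    detach=True,
    environment={"ZBX_SERVER_HOST": "zabbix.example.org", "ZBX_HOSTNAME": "myserver"},
    volumes={
        "/": {"bind": "/hostfs/root", "mode": "ro"},
        "/home": {"bind": "/hostfs/home", "mode": "ro"},
        # ...every additional file system has to be added here by hand
    },
)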
Am I missing something?

How to ask Kafka about its status - number of consumers, etc.? Particularly when it's running in a Docker container?

I am looking at how to get the information on the number of consumers from a Kafka server running in a docker container.
But I'll also take almost any info that helps point me in the right direction. I've been trying with Python and URI requests, but I'm getting the feeling I need to get back to Java to ask Kafka questions about its status?
In relation to the threads I've seen, many handy scripts from $KAFKA_HOME are referenced, but the systems I have access to do not have $KAFKA_HOME defined, nor do they have the contents of that directory. My world is a Docker container without CLI access. So I haven't been able to apply the solutions requiring shell scripts or other tools from $KAFKA_HOME to my running system.
One of the things I have tried is a Python script using requests.get(uri...)
where the uri looks like:
http://localhost:9092/connectors/
The code looks like:
import requests

r = requests.get("http://%s:%s/connectors" % (config.parameters['kafkaServerIPAddress'], config.parameters['kafkaServerPort']))
currentConnectors = r.json()
So far I get a "nobody's home at that address" response.
I'm really stuck, and a pointer to something akin to a "Beginner's Guide to Getting Kafka Monitoring Information" would be great. Also, if there's a way to grab the helpful Kafka shell scripts & tools, that would be great too - where do they come from?
One last thing - I'm new enough to Kafka that I don't know what I don't know.
Thanks.
running in a Docker container
That shouldn't matter, but Confluent maintains a few pages that go over how to configure the containers for monitoring:
https://docs.confluent.io/platform/current/installation/docker/operations/monitoring.html
https://docs.confluent.io/platform/current/kafka/monitoring.html
number of consumers
Such a metric doesn't exist
Python and URI requests
You appear to be using the /connectors endpoint of the Kafka Connect REST API (which runs on port 8083, not 9092). It is not a monitoring endpoint for brokers or non-Connect-API consumers
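If what you are really after is a per-group member count, you can get that over the Kafka protocol itself on 9092. A rough sketch, assuming the kafka-python package (which your setup may not have) and its admin client; the attribute names on the returned descriptions can vary slightly between versions:

from kafka.admin import KafkaAdminClient

admin = KafkaAdminClient(bootstrap_servers="localhost:9092")

# list_consumer_groups() yields (group_id, protocol_type) pairs
group_ids = [group_id for group_id, _ in admin.list_consumer_groups()]

for desc in admin.describe_consumer_groups(group_ids):
    # each description carries the group's member list; its length is the
    # number of consumers currently active in that group
    print(desc.group, desc.state, len(desc.members))

Note that this only sees consumers that join their group via subscribe(); clients that assign partitions manually won't show up as members.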
way to grab the helpful kafka shell scripts & tools
https://kafka.apache.org/downloads > Binary downloads
You don't need container shell access, but you will need external network access, just as all clients outside of a container would.

What is the recommended way to package up a pre-configured Solace docker image

We are trying to package up a Solace docker image with a pre-configured message VPN, JNDI connection factory, queues and such, so that we can take the docker image to a different site and load it there without having to configure it at every site.
Is all the configuration I have done from the Solace UI stored in /usr/sw/var within the container environment? So in reality all I have to do is save the contents of that directory and build a new docker image with those contents?
Yes, the configuration you have made is stored in the directory /usr/sw/var. But it contains a lot more than you want: the hostname, router-name and other data that you do not really want to carry over.
If I may suggest, the best way to 'copy' your data is to go into the Solace CLI and do:
show current-config message-vpn *
You can re-direct the data to a file e.g.
show current-config message-vpn * > my-vpn-config.txt
The output is saved in the directory /usr/sw/jail.
You can edit this file if you like, it will contain CLI commands that are familiar to you.
Copy this file out and place it in the new container under /usr/sw/jail. From there you can source your configuration file with the commands:
enable
source script my-vpn-config.txt
With the above method you will miss out on some system configuration, like usernames, ldap-profiles, etc. But for the list of things you are looking for ('message VPN, JNDI connection factory, queues and such'), this is good enough.
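If you want to script the "copy this file out and into the new container" step, here is a sketch using the docker Python SDK; the container names are placeholders, and for brevity it assumes both containers are reachable from the same Docker host:

import docker

client = docker.from_env()
old = client.containers.get("solace-old")  # container you exported the config from
new = client.containers.get("solace-new")  # freshly started container at the new site

# pull the exported CLI script out of the old container as a tar stream
bits, _ = old.get_archive("/usr/sw/jail/my-vpn-config.txt")
archive = b"".join(bits)

# push the same tar archive into the new container's /usr/sw/jail
new.put_archive("/usr/sw/jail", archive)

After that you run the enable / source script my-vpn-config.txt commands in the new container's Solace CLI as above.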

Is it possible to monitor a whole directory with fluentd?

I would like to set up log forwarding as part of a deployment process. The activity of the machines will be different but they will all log to specific places (notably /var/log).
Is it possible to configure fluentd so that it monitors a whole directory? (including the ability to pick up files which pop-up while it is active)
I know that in_tail can do this for a given, specified file but the documentation does not mention a whole directory.
There is an ideal exact duplicate of this question from 2014 which points to the tail_ex plugin. Unfortunately its description mentions that
Deprecated: Fluentd has the features of this plugin since 0.10.45. So,
the plugin no longer maintained
I still could not find the mentioned features.
This is absolutely possible using the wildcard support in Fluentd's in_tail plugin. In the path parameter you would specify /var/log/*, and Fluentd will automatically skip files that are not readable.
Additionally, if new files show up in that directory while Fluentd is running, they are picked up by the periodic rescan controlled by the refresh_interval setting: https://docs.fluentd.org/v0.12/articles/in_tail#refreshinterval
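For example, a minimal in_tail source along these lines (the path glob, pos_file location and tag are placeholders, and format none simply passes raw lines through; adjust for your setup):

<source>
  @type tail
  # the glob also matches files that appear later; refresh_interval controls the rescan
  path /var/log/*.log
  pos_file /var/log/td-agent/var-log.pos
  tag system.*
  format none
  refresh_interval 60
</source>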
Some notes: if you use Treasure Data's packaged version of Fluentd, td-agent, then you need to ensure that the files you want to tail are readable by the td-agent user that is provisioned as part of that install.
Lastly, if you need to read these files securely, you may consider Treasure Data's Enterprise Fluentd offering.

Graphite installation in a docker container - volume query

I am installing graphite via a docker container.
I have seen that whisper files should not be saved in the container.
So I will be using a data volume from docker to save these on the host machine.
My question is: is there anything else I should be saving on the host? (I know this is subjective, so I guess I'm looking for recommendations on what's important.)
I don't believe I need the configuration (e.g. carbon.conf) as this will come from my installation.
So I'm wondering: are there any other files from Graphite I need (e.g. log files etc.)?
What is your reason for keeping log files? You do, however, need the directory structure in place. Logging defaults to /opt/graphite/storage/logs, which contains carbon-cache/ and webapp/ directories. The log directory for the webapp is set in its config, local_settings.py, whereas carbon uses carbon.conf. The configs are well documented, so you can look into them for specific information.
Apart from the configs that are generated during installation, the only other 'file' crucial for the webapp to work is graphite.db in /opt/graphite/storage. It is used internally by the Django webapp for housekeeping information such as user auth. It gets generated by python manage.py syncdb, so I believe you can generate it again on the target system.
