Active Zabbix agents are becoming unavailable due to DHCP IP change

I am using active Zabbix agents that auto-register themselves to the Zabbix server.
Everything goes well until DHCP changes the host's IP; the host then becomes unavailable in Zabbix. Looking at the host in the host list in the Zabbix frontend, I can see that it still has the old IP.
Is there any way to solve this?

This means that you are actually not using active items. I'd suggest cloning your current template and changing items, LLD rules and LLD prototypes to "Zabbix agent (active)" - then agent IP address changes will not be a concern.
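For active checks the agent only needs to know where to send data and under what host name to register; the server never connects back to it. A minimal zabbix_agentd.conf sketch (the server address and host name below are placeholders, not values from the question):

    # zabbix_agentd.conf - active checks
    ServerActive=zabbix.example.com     # hypothetical Zabbix server address
    Hostname=web-node-01                # name under which the agent auto-registers
    # With only active items the server never initiates connections to the agent,
    # so a DHCP address change on the agent's side does not matter.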

Related

How do I set up BIND via Webmin to delegate DNS lookups for certain subdomains?

I have several Docker containers with some web applications running via Docker Compose. One of the containers is a custom DNS server with BIND and Webmin installed. Webmin gives a nice web UI that lets me update the BIND DNS configuration without directly modifying the files or SSHing into the container. I have Docker set up to look up DNS in this order:
my Docker DNS server
my company's internal DNS server
Google's DNS server
I have one master zone file for the top-level domain "example.com" defined in DNS server 1. I added an address for server1.example.com and DNS resolves it correctly. I want other subdomains to be resolved by my company's internal DNS server.
server1.example.com - resolves correctly
server2.example.com - this host is not referenced in the zone file of the Docker DNS server. I would like to somehow delegate it to my company's DNS server (server 2)
The goal is that I should be able to do software development for web applications and deploy them in my Docker containers. The code makes internal calls to other "example.com" hosts. I want some of those calls to be directed back to other Docker containers rather than the real servers, because I am developing code on both sides and want to test it end to end.
I don't want to (and can't) modify my company's DNS configuration. I am not an expert in BIND or DNS setup and am looking for the simplest solution.
What configuration can achieve this?
I guess the workaround is to use the fully qualified name when creating the zone file. Instead of creating a master zone example.com and listing server1 inside that zone, I am creating a master zone named server1.example.com. It means I have to create a zone file for every server, but I guess that's manageable with a small number of hosts. server2.example.com then doesn't fall inside any zone and gets resolved by the next DNS server in the chain.
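For reference, a minimal sketch of that workaround in BIND's configuration (the zone name comes from the question, but the file path is an assumption):

    // named.conf.local - one master zone per overridden host
    zone "server1.example.com" {
        type master;
        file "/etc/bind/db.server1.example.com";   // zone file holding an apex A record for server1
    };
    // No zone is defined for server2.example.com, so queries for it are not
    // answered here and fall through to the next DNS server in the chain.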

Zabbix Web Monitoring without interfaces

I want to send an HTTP request to a website where I can't deploy a Zabbix agent. I can only access the website via HTTP/HTTPS.
I created a host, but I had to set up a false interface (I chose Zabbix agent with the website's DNS name and port 10050). Consequently, the status is "Red", as the Zabbix server cannot connect to the agent.
How can I set up a host without any interfaces, or more precisely, without triggering a "Red" status on the host? In my situation, is there any way to get a "Green" status on the host?
Zabbix version: 3.2.11
At least one interface is mandatory. Using 127.0.0.1 as the interface address, the status is Enabled and green. I'm using Zabbix 4 here.
Hosts in Zabbix must have at least one interface. That interface does not have to be used.
If you see the red "Z" and an error about contacting the agent, this means that you have created Zabbix agent items on that host (directly or via linking some template).
As those items do not work, it would be best to remove them.
See http://www.zabbixbook.com/2016/11/22/can-i-have-a-host-without-interfaces/ for more information.
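As an illustration of the placeholder-interface approach, a host can also be created through the Zabbix API with a loopback agent interface that is never polled. The host name, group ID and auth token below are hypothetical; the request shape follows the host.create method:

    {
      "jsonrpc": "2.0",
      "method": "host.create",
      "params": {
        "host": "website-check",
        "interfaces": [
          { "type": 1, "main": 1, "useip": 1,
            "ip": "127.0.0.1", "dns": "", "port": "10050" }
        ],
        "groups": [ { "groupid": "2" } ]
      },
      "auth": "<api token>",
      "id": 1
    }

As long as no Zabbix agent items are attached to the host, nothing ever connects to that interface, and web scenarios attached to the host work independently of it.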

Use Docker DNS server on other nodes

The new version of Docker (version 1.10) includes a DNS server to pass alias information from other hosts on the same network. There used to be hosts file entries for resolving linked containers (or containers on the same network). I am wondering if it is possible to use this embedded DNS server on an overlay network? I have looked in the documentation (and in issues) and cannot find information about this.
So the way the new embedded DNS "server" works is that it isn't a formal server; it's just an embedded listener for traffic to 127.0.0.11:53 (UDP, of course). When Docker sees that query traffic on the container's network interface, it steps in with its embedded DNS server and replies with any answers it has for the query. The documentation describes some options you can set to affect how this DNS server behaves, but since it only listens for query traffic on that localhost address, there is no way to expose it to an overlay network in the way that you are thinking. However, this seems to be a moving target, and I have seen this question before on IRC, so it may one day be the case that this embedded DNS server becomes pluggable, or exposable in the way you would like.
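You can see the listener from inside a container on a user-defined network (the container and service names below are placeholders, and nslookup has to be present in the image):

    $ docker exec -it mycontainer cat /etc/resolv.conf
    nameserver 127.0.0.11
    $ docker exec -it mycontainer nslookup otherservice 127.0.0.11
    # resolves to the other container's address on the shared network; the same
    # query from outside the container gets nothing, because nothing listens on
    # 127.0.0.11 there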

Received empty response from Zabbix Agent

I have Zabbix installed on my WAN IP and I'm able to add clients within the same WAN IP network, but when I try to add a Zabbix client from a different WAN network range, I get:
Received empty response from Zabbix Agent at [xx.xx.xx.xx]. Assuming
that agent dropped connection because of access permission.
There is no filtering at either end, and I have cross-checked the zabbix_agentd.conf file many times and found no errors. I can see the agent listening on port 10050 on the client servers.
The Troubleshooting page on Zabbix.org is a classic reference for troubleshooting unreachable agents.
In this particular case, you might wish to start by checking that the Zabbix server's IP is listed in the agent's Server configuration parameter.
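For example (the addresses below are placeholders): Server is a comma-separated list of addresses the agent will accept passive-check connections from, and ServerActive is where it sends active-check data:

    # zabbix_agentd.conf
    Server=203.0.113.10,198.51.100.1     # Zabbix server's WAN IP, plus any NAT/router
                                         # address the connections actually arrive from
    ServerActive=203.0.113.10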
I have solved the above problem. The connections to the agent were arriving from my router's WAN IP rather than from the Zabbix server's WAN IP, so I put both the router's IP and the Zabbix server's IP into the Server and ServerActive parameters in zabbix_agentd.conf, and now it's working fine.
Thanks

Jenkins Slave port number for firewall

We use Jenkins 1.504 on Windows.
We need to have Master and Slave in different sub-networks with firewall in between.
We can't have ANY to ANY port firewall rules, we must specify exact port numbers.
I know the port Master is listening on.
I also see that the Slave opens a connection to the Master from an arbitrary port dynamically assigned on every run, and the port on the Master side is also arbitrary.
I can fix the Master's port by specifying it in Manage Jenkins > Configure Global Security > TCP port for JNLP slave agents.
How can I fix the Slave's port?
UPDATE: Found the connection mechanism described here: https://wiki.jenkins-ci.org/display/JENKINS/Jenkins+CLI#JenkinsCLI-Connectionmechanism
I think it might work for us, but it would still be better to have a fixed-to-fixed port connection.
We had a similar situation, but in our case Infosec agreed to allow ANY to one fixed port, so we didn't have to fix the slave port; fixing the master to the high JNLP port 49187 worked ("Configure Global Security" -> "TCP port for JNLP slave agents").
TCP: 49187 (fixed JNLP port), 8080 (Jenkins HTTP port)
Other ports needed to launch the slave as a Windows service:
TCP: 135, 139, 445
UDP: 137, 138
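If the inbound rule has to be added on the master's Windows host itself, a single rule for the fixed JNLP port chosen above is enough (the rule name is arbitrary):

    netsh advfirewall firewall add rule name="Jenkins JNLP" dir=in action=allow protocol=TCP localport=49187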
A slave isn't a server, it's a client-type application. Network clients (almost) never use a specific port; instead, they ask the OS for a random free port. This works much better, since you usually run clients on many machines where the current configuration isn't known in advance, and it prevents thousands of "client wouldn't start because the port is already in use" bug reports every day.
You need to tell the security department that the slave isn't a server but a client which connects to the server, and that you absolutely need a rule which says client:ANY -> server:FIXED. The client port number will be >= 1024 (ports 1 to 1023 need special permissions), but I'm not sure you actually gain anything by adding a rule for this - if an attacker can open privileged ports, they basically already own the machine.
If they argue, then ask them why they don't require the same rule for all the web browsers which people use in your company.
I have a similar scenario and had no problem connecting after setting the JNLP port as you describe and adding a single firewall rule allowing connections to the server on that port. Granted, it is a randomly selected client port going to a known server port (a host:ANY -> server:ONE rule is still needed).
From my reading of the source code, I don't see a way to set the local port to use when making the request from the slave. It's unfortunate, it would be a nice feature to have.
Alternatives:
Use a simple proxy on your client that listens on port N and forwards all data to the actual Jenkins server on the remote host using a constant local source port (see the sketch after this list). Connect your slave to this local proxy instead of the real Jenkins server.
Create a custom Jenkins slave build that allows an option to specify the local port to use.
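A minimal sketch of the first alternative in Python, assuming the slave connects to localhost:8081 and the real master listens on jenkins.example.com:49187; every name, port and the fixed source port 49200 below are made-up values for illustration, not something from this thread:

    # tiny TCP forwarder: listens locally and connects out to the Jenkins master
    # from a FIXED source port, so the firewall rule can be fixed -> fixed
    import socket, threading

    LISTEN_ADDR = ("127.0.0.1", 8081)             # where the slave will connect
    MASTER_ADDR = ("jenkins.example.com", 49187)  # real JNLP port on the master
    SOURCE_PORT = 49200                           # fixed outbound source port

    def pump(src, dst):
        # copy bytes one way until either side closes
        try:
            while True:
                data = src.recv(65536)
                if not data:
                    break
                dst.sendall(data)
        except OSError:
            pass
        finally:
            src.close()
            dst.close()

    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(LISTEN_ADDR)
    listener.listen(1)          # one connection at a time, since the source port is pinned

    while True:
        client, _ = listener.accept()
        upstream = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        upstream.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        upstream.bind(("", SOURCE_PORT))          # pin the local source port
        upstream.connect(MASTER_ADDR)
        threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
        threading.Thread(target=pump, args=(upstream, client), daemon=True).start()

Because the source port is pinned, only one connection can exist at a time, and a quick reconnect may have to wait for the previous connection to leave TIME_WAIT.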
Remember also that if you are using HTTPS via a self-signed certificate, you must alter the jenkins-slave.xml configuration file on the slave to specify the -noCertificateCheck option on the command line.
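For reference, the option goes at the end of the <arguments> element of that service wrapper file; the URL and secret below are placeholders:

    <!-- excerpt from jenkins-slave.xml -->
    <arguments>-Xrs -jar "%BASE%\slave.jar" -jnlpUrl https://jenkins.example.com/computer/mynode/slave-agent.jnlp -secret 0123abcd -noCertificateCheck</arguments>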
