Disabling notifications for a host in Check_MK - monitoring

I'm losing my mind trying to disable notifications for a specific host in Check_MK. I still want the host to be monitored - it should show up on the network topology and I want to be able to see its problems via the Check_MK views - but I don't want the server to send me an e-mail every time the host is disconnected and the like.
Am I missing something simple, or is there a roundabout way of doing this?

Try this:
Search for the host name in the main Check_MK UI.
Click on the hammer icon at the top (located under the page title "Services of Host ...").
The UI that comes up is self-explanatory. You can either schedule downtime for the host (checks will be disabled) or simply disable notifications for selected services, or for all services on that host.
Hope this helps.
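If you prefer the command line, roughly the same switches can be sent to the monitoring core as external commands. This is only a sketch and assumes an OMD/Check_MK Raw Edition site where the lq Livestatus helper is available; the host name myhost is a placeholder:

    # temporarily disable notifications for the host
    echo "COMMAND [$(date +%s)] DISABLE_HOST_NOTIFICATIONS;myhost" | lq

    # ...and for all services on that host
    echo "COMMAND [$(date +%s)] DISABLE_HOST_SVC_NOTIFICATIONS;myhost" | lq

    # re-enable later
    echo "COMMAND [$(date +%s)] ENABLE_HOST_NOTIFICATIONS;myhost" | lq

Note that these are runtime switches rather than WATO rules, so they may be reset when the core is restarted or reconfigured.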

I would recommend going to your e-mail notification settings: WATO -> Notifications,
and looking for "Exclude the following hosts". Just put the host name in there!

Related

How to show hosts problems rather than host groups on Zabbix Dashboard?

On Zabbix Dashboard there's a Widget called "Problem hosts", which shows a list of host groups rather than a list of hosts.
Is it possible to show problems grouped by hosts instead of grouping them by host groups?
If you want to see the data as well as the triggers that exist for each host, you can use the widget called "Data Overview", which shows all hosts along with any triggers that are currently in a problem state for each host.

Server timeout and sftp timeout. What to do?

For the last 12 hours the website (a WordPress site) hosted on Google Cloud Platform has had a timeout issue. After 60 seconds of trying to load the website, the following message appears: "The connection has timed out".
When trying to connect with SFTP, I get the same issue.
What should I do to resolve this?
Since two different services stopped working at the same time, it sounds like a networking issue. The fact that you get a timeout means the server is not answering the requests at all.
What to do?
I would proceed with these general troubleshooting steps; if you like, you can update your question with the results of these commands so we can continue troubleshooting (a few example commands are sketched after the list).
First of all, I would check whether you are able to ping the external/public IP of the instance.
I would check that the firewall rules allow TCP 80/443 and TCP 22. Note that on GCP you need to create the rule and assign the corresponding tag to the machine from its detail page if the rule does not apply to the whole network.
Are you able to SSH into the instance?
I would check that the processes are actually listening: netstat -tuplen
If you can log into the machine, do you have access to the internet? Are you able to ping external IPs? If not, what about internal IPs?
I would go to the "Activity" page of your Google Cloud Console to check which actions were taken while the instance was still running.
I would also check the shell history on the Linux machine to see whether any commands were run that changed its network configuration.
Note that if you cannot SSH into the machine, you can always access it through the serial console by setting a password for your username via a startup script.
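Roughly, the checks above map to commands like these (the instance name, zone and IP below are placeholders, and gcloud must be authenticated against your project):

    # can the public IP be reached at all?
    ping -c 4 203.0.113.10

    # list the project's firewall rules and look for TCP 80/443/22
    gcloud compute firewall-rules list

    # instance details, including network tags and the external IP
    gcloud compute instances describe my-instance --zone europe-west1-b

    # try to SSH in; if that works, check what is actually listening
    gcloud compute ssh my-instance --zone europe-west1-b
    sudo netstat -tuplen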
UPDATE
I had the chance to take a look into the project; the machine was stopped due to an issue with the billing account (it was closed) after the free trial period ended.
I would suggest going through the documentation regarding the upgrade of the billing account again.
If you still have doubts or questions after performing these operations, you can file a case at this link with the billing team and they will help you solve the issue.

I want to access Jira (Docker on Synology DS716+II) from LAN not only via IP_OF_SYNOLOGY:PORT but for example jira.synology.local

I am working with a Synology NAS, model DS716+II, running DSM 6.1.4-15217 Update 2, on which Docker runs with a Jira container.
What I have been assigned to get working is access to Jira's web interface via, let's say, jira.synology.local, with "synology" being the server name.
I have read a lot about nginx and how it has been built in since DSM 6.X, but I don't seem to get it to work properly at all.
I can access Jira's web interface from another machine within the LAN via IP_OF_SYNOLOGY:PORT, so when setting up a reverse proxy on the server it should point to LOCALHOST:PORT, right? I have also tried using the actual IP instead of LOCALHOST, but without success.
I can access the interface of Synology itself not only via IP_OF_SYNOLOGY:PORT but also via DOMAINNAME.LOCAL if I set the domain name.
I really don't know what I'm missing, and I have tried everything I could think of. Does someone have experience with this?
If some information is missing, I'll gladly provide it. I'm fairly new to Synology, I have to admit. Thanks in advance!
This has gotten zero responses, but I figure someone will probably run into a similar "problem" in the future, so I will answer anyway.
I solved everything when I set up Active Directory. When installing AD, the DNS Server package is automatically installed too.
So we have Jira running in a Docker container (on port, let's say, 12345) and I want to access it via the LAN at jira.domainname.
To do so you need DSM 6.X or higher (for nginx) and the DNS Server package installed. That's it.
In the DNS Server you will have to create a new master zone
and apply the following settings. You can freely choose the domain name, but the Master DNS server must be the IP of your Synology station, since it functions as the DNS server.
Then you want to edit the Resource Records:
add an A record (pointing the zone's domain name to the IP of your Synology)
and a CNAME record (pointing jira.domainname to that A record).
Your Resource Records will then look like this.
The last step for setting up the DNS server is to tell it what to do when there is no specific record for a query. For example, if you open jira.domainname in your browser, there is a specific record for it and the DNS server knows where to direct the request. But if you open, say, google.com, the DNS server has no information on that name and does not know what to do. So we tell the DNS server to forward any request it has no record for. To do so, enable the forwarders and put in the IP of your gateway/managed switch as primary, and some public DNS server (8.8.8.8, one of Google's DNS servers) as secondary.
Please remember that jira.domainname always stands for the domain name you chose, and 192.168.0.200 always stands for the IP of your Synology station.
Now the DNS server is completely set up. Next we want to take advantage of the built-in reverse proxy (which runs on nginx in the background). To do so, navigate as seen here
and create a new reverse proxy rule.
Now several URLs can point to the same destination (your Synology, 192.168.0.200) but on different ports. That comes in very handy for applications running in Docker.
If you are running this in a home setup or a small office, you are probably working with a standard commercial router such as, for example, a FritzBox by AVM. Those are pretty good, but beware that some of them block so-called DNS rebinding, which means that DNS replies resolving to a local IP are not allowed. Since in this setup the DNS server (your Synology) and the destination, Jira (also your Synology), are in the same LAN, we have to create an exception. Other routers may not suppress those replies, but if they do, an exception is necessary.
The next step is to tell your gateway or managed switch to use the newly set-up DNS server as its primary DNS server. For FritzBoxes you can do so here:
put in the IP of your DNS server and a secondary DNS server. The secondary is important as a fallback in case your DNS server stops working at some point.
Now that everything is set up, I would recommend restarting the router/managed switch, the Synology, and the workstation you are working on, to flush all caches. After that you can simply open your browser, type in jira.domainname, and Jira should open up. You can also open a terminal/cmd and run nslookup jira.domainname to see whether the name is being resolved correctly.
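For completeness, the checks from a workstation could look roughly like this (jira.domainname and 192.168.0.200 are again the placeholders used throughout this answer):

    # ask the Synology's DNS server directly
    nslookup jira.domainname 192.168.0.200

    # ask whichever DNS server the workstation got via DHCP
    nslookup jira.domainname

    # check that the reverse proxy answers on port 80
    curl -I http://jira.domainname/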
I really hope this will help someone at some point. If there are any additional questions, please feel free to comment or write me directly!

ELB not routing traffic to healthy instance

This seems to have something to do with the subnet/availability zone, but I'm new to using a VPC and it's eluding me.
VPC: 10.80.0.0/16
subnet: 10.80.1.0/24 (us-east-1b)
subnet: 10.80.2.0/24 (us-east-1a)
All instances are Windows Server 2012.
I have an internet facing ELB created within my VPC (10.80.0.0/16). There is one instance added from AZ us-east-1a, which is on subnet 10.80.2.0/24. The instance is running IIS 7.5, with an app running on port 80 and /health.aspx set up for use as the ELB health check.
Internal traffic on the VPC is flowing normally (unrestricted). I can request health.aspx from this instance from another instance in us-east-1b (10.80.1.0/24). I can also copy files from one instance to another.
Outbound traffic is unrestricted. I can RDP to the instance (when connected to our VPN) and open a browser and request a web page and get it.
The ELB says the instance is healthy and I can see the requests to health.aspx in the IIS logs. Both the ELB and the instance are configured with a security group that allows 80 and 443.
But if I try to request {elb-url}/health.aspx over the open internet the request just times out. Similarly, with an elastic IP associated to the instance, a request to {elastic-ip}/health.aspx times out.
@Chris, thanks for the response... as it happens, I've already worked it out with some help from a friend. I'll post my findings here for posterity (in case anybody else is similarly confused about how ELB works).
This would be clearer with a diagram, but the summary is that in each availability zone you need to create both a public and a private subnet. When you add availability zones to your ELB, you need to select the public subnet for each zone. This had already been done in us-east-1b before I got to this setup, and I had simply missed this nuance of ELB configuration. So for the new availability zone, I had to do this...
us-east-1c
private subnet 10.1.3.0/24 (using nat instance as default route)
public subnet 10.1.4.0/24 (using internet gateway as default route)
Then my instance goes in the private subnet as expected.
And the linchpin of this whole thing is (drum roll...):
when I add us-east-1c to my ELB, I have to select the public subnet, 10.1.4.0. Otherwise the instances will pass the health check (since the ELB can communicate with any instance within my entire VPC), but the responses from the servers cannot make it back out to the public internet.
This is what is so confusing, and I still don't fully understand it. The instance can make a request for, say, www.google.com. I can RDP to it, open a browser, and get the web page. But a request from an outside host (like my laptop at home) dies. Strange.
PS: another note... make sure you are using enough NAT instances for your load. I think we ran into an issue where our NAT instance simply ran out of ports because too many web servers were trying to route outbound connections to third-party APIs through it. Quite honestly, I'm not good enough at this level of network/OS troubleshooting to be sure, but my theory is that our 8 IIS instances were holding too many connections open to the NAT instance. We were also overloading the NIC on that micro instance. I moved us up to two large instances, one per AZ, and things smoothed back out. Both NAT instances are humming along and we're not seeing the hung processes in IIS anymore.
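For anyone who wants to verify the public-subnet point above from the AWS CLI instead of the console, something like this should show which subnets a classic ELB is attached to and whether those subnets route 0.0.0.0/0 to an internet gateway (the load balancer name and subnet ID are placeholders):

    # subnets, availability zones and scheme of the classic ELB
    aws elb describe-load-balancers --load-balancer-names my-elb \
        --query 'LoadBalancerDescriptions[0].[Subnets,AvailabilityZones,Scheme]'

    # route table associated with one of those subnets; a 0.0.0.0/0 route
    # pointing at an igw-... is what makes the subnet "public"
    aws ec2 describe-route-tables \
        --filters Name=association.subnet-id,Values=subnet-0123456789abcdef0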
Debugging this kind of issue is always a challenge. Based on what you have written (and on having dealt with this a number of times), I have a few ideas to suggest that generally apply to solving this kind of problem.
Have you checked both the security groups and the network ACLs? Bear in mind that network ACLs need to be specified in both directions, as they are stateless. Also bear in mind that ELBs are a bit unique in this regard: while they are associated with your VPC, they sometimes need extra rules to ensure connectivity. In the past I have debugged this by opening all network ACLs on all ports, then removing rules until it stopped working, in order to identify where the block was.
Security groups should be checked too. They are stateful, but make sure your load balancer is allowed to be reached from the web.
Have you checked that this isn't an application configuration problem? I don't know how IIS comes out of the box, but I would check that it is set up to respond to all hostnames.
Check that the ELB isn't an internal one, as that wouldn't be publicly addressable.
You say the ELB is configured with the health check, but it's worth checking that you also have a listener set up for port 80. It's in a separate tab on the dashboard, and you need it in addition to the health check for connectivity through the ELB. (Most of these points can also be inspected from the CLI; see the sketch below.)
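A rough sketch of how those checks look from the AWS CLI, in case that is quicker than clicking through the console (names and IDs are placeholders):

    # scheme (internal vs internet-facing), listeners and security groups of the classic ELB
    aws elb describe-load-balancers --load-balancer-names my-elb \
        --query 'LoadBalancerDescriptions[0].[Scheme,ListenerDescriptions,SecurityGroups]'

    # network ACL entries for the VPC (remember they are stateless, so check both directions)
    aws ec2 describe-network-acls --filters Name=vpc-id,Values=vpc-0123456789abcdef0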
Hope one of these tips is useful to you.

Plastic SCM server access outside home network

I have installed the Plastic SCM server on one of my PCs at home (Windows 7 Home Premium). The server is accessible from the clients inside my home network; it is resolved using the home-network PC name as the server address / visible name.
However, I would like to be able to access the server from outside the home network. Ideally, I would like to use the IP that the ISP has assigned to the PC where the server resides. I can deal with the intermittent IP address changes. The PC is just a regular, personal-use PC (i.e. not configured as a server).
A couple of questions: Is it possible to access the Plastic SCM server from outside the home network using the IP address that the ISP assigns to the PC where the PSCM server resides?
Second, the server config tool automatically shows the name assigned on the home network as the visible name of the PC, and it does not allow me to enter an IP address. If the answer to the first question is yes, how can I enter the desired IP address?
Are there any configurations that must be in place on Windows 7 (Home Premium), perhaps?
Any suggestions would be appreciated.
The Plastic SCM server listens on two ports: an SSL one and a plain TCP one. I'd strongly recommend setting up an SSL connection if you're going to open the port up to the internet.
http://codicesoftware.blogspot.com/2010/08/ssl-enabled-plastic-connections-reborn.html
In order to configure your PC:
As you pointed out, you'll need to redirect the traffic from your router to your PC.
The "redirection" must go from a public port to the Plastic SCM port (the TCP or the SSL one).
Your PC's firewall should be configured to allow incoming traffic to the Plastic SCM port (a sketch of such a rule follows below).
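A sketch of that last point, run from an elevated command prompt on the Windows 7 machine; 8087/8088 are the default Plastic SCM ports mentioned further down, so adjust them to whatever your server actually uses:

    netsh advfirewall firewall add rule name="Plastic SCM server" dir=in action=allow protocol=TCP localport=8087,8088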
Regarding your question about "the server configuration": no, it just shows you the name; you can't set the IP, since it simply takes the IP/name from your server. It wouldn't work otherwise, unless you mean you have a multi-IP machine. Is that the case? Do you have more than one network card in your PC? If so, there's a way to specify where to listen, but let's confirm your scenario first.
I'm making the assumption that you are using Plastic 4.x (I don't know how similar the 3.x version is to this).
The answer to your first question is YES. I frequently connect to my home plastic server from my work machine to view or grab projects/tools that I need.
Your second question is not technically accurate - what you need is the CLIENT tool to access your server's IP address - and that IS possible.
To answer your final question - how to do it: start the Client Configuration tool on the "external" PC.
On the third page of the CLIENT configuration tool it asks for the Plastic SCM server selection - it gives you an entry for the server address and an entry for the port.
You most likely have set up the username/password access type on the server, but you could also have used local users - be sure to select, on the final page, the login type you configured your server for.
Your only other consideration is that the firewall on Win7 must allow access to the desired ports (8087/8088), and, as pointed out by Pablo, your router must be configured to forward those ports to your server machine. (I believe 3.x used different ports.)
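Once the router forwarding and the firewall rule are in place, a quick sanity check from an outside machine is a plain TCP connection test against your public IP (placeholder address; any port-checking tool will do if a telnet client is not installed):

    telnet your.public.ip 8087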
