While migrating from Icinga 1 to Icinga 2, my main concern is the NRPE custom checks, so I started by trying to add just a basic memory check using NRPE.
From the command line everything works fine:
/usr/lib64/nagios/plugins/check_nrpe -H 192.186.113.45 -p 5666 -c CheckMEM -a MaxWarn=80% MaxCrit=90% ShowAll=long type=physical
Output:
OK: physical memory: Total: 64G - Used: 4.69G (7%) - Free: 59.3G (93%)|'physical memory %'=7%;80;90 'physical memory'=4.687G;51.174;57.57;0;63.967
But when I try to set up the same check through Icinga Web 2, it does not work.
It simply fails with the error:
Unknown argument: -c
Below are the configurations for the command I tried to create as a beginner.
My command.conf file has the following part defined for this specific check:
object CheckCommand "nrpe-check-1arg" {
  import "plugin-check-command"
  command = [ PluginDir + "/check_nrpe" ]
  arguments = {
    "-H" = "$host$"
    "-p" = "$port$"
    "-c" = "$check$"
    "-a" = "$argument$"
  }
}
and my hostfile.conf contains
object Host "RenamedHost" {
address = "192.186.113.45"
check_command = "hostalive"
vars.os = "windows"
}
object Service "NRPE check load" {
import "generic-service"
host_name = "RenamedHost"
check_command = "nrpe-check-1arg"
vars.host = "132.186.119.45"
vars.port = "5666"
vars.check = "CheckMem"
vars.argument = "MaxWarn=80% MaxCrit=90% ShowAll=long type=physical"
}
What am I doing wrong?
You can pass the arguments through to nrpe.cfg as
vars.arguments = "80%!90%!long!physical"
and in the CheckMEM command on the remote machine you can reference them as
MaxWarn=$ARG1$ MaxCrit=$ARG2$ ShowAll=$ARG3$ type=$ARG4$
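For reference, here is a minimal sketch of what the matching command definition in nrpe.cfg on the remote machine could look like (the plugin path and the check_mem plugin name are assumptions; use whatever actually backs CheckMEM on your agent):
# nrpe.cfg on the monitored host (sketch)
command[CheckMEM]=/usr/lib64/nagios/plugins/check_mem MaxWarn=$ARG1$ MaxCrit=$ARG2$ ShowAll=$ARG3$ type=$ARG4$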
I have a PHP login system that needs to run on both XAMPP and Docker at the same time. My database needs to be stored locally.
I create my image and container like this:
Image: docker build -t php .
Container: docker run -dp 9000:80 --name php-app php
<?php
$host = "host.docker.internal"; // needs to be this for Docker, or 'localhost' for XAMPP
$name = "test";
$user = "root";
$passwort = "";

try {
    $mysql = new PDO("mysql:host=$host;dbname=$name", $user, $passwort);
} catch (PDOException $e) {
    echo "SQL Error: " . $e->getMessage();
}
?>
Where do I get the information on which system I am running to make this value dynamic?
You can check if you are inside Docker this way:
function isDocker(): bool
{
    return is_file("/.dockerenv");
}
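Then you can pick the database host dynamically in the connection code from the question:
$host = isDocker() ? "host.docker.internal" : "localhost";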
I haven't worked on Windows systems yet, but on Linux you can inspect the process's cgroups to find out whether it is running under Docker or not.
// Read the cgroup entries for the current process
$processes = explode(PHP_EOL, shell_exec('cat /proc/self/cgroup'));
$processes = array_filter($processes);

$is_docker = false;
foreach ($processes as $process) {
    // If any cgroup path mentions docker, we are running inside a container
    if (strpos($process, 'docker') !== false) {
        $is_docker = true;
        break;
    }
}
Then you can act on it as needed:
if ($is_docker === true) {
    // Do something
}
I am using Packer to generate an image on Google Compute Engine, and Terraform to create the instance. I have set this metadata:
key: env_vars
value: export test=10
Packer is using a script with something like this inside:
curl "http://metadata.google.internal/computeMetadata/v1/project/attributes/env_vars?recursive=tru&alt=text" -H "Metadata-Flavor: Google" -o /tmp/env_vars.sh
source /tmp/env_vars.sh # or . /tmp/env_vars.sh
The problem is that when I create an instance from this image through Terraform, the env variables are not available. That is, if I run printenv or echo $test, it is empty.
Even if I write a startup-script for the instance, it doesn't work.
But, if I run the same exact script inside the instance via SSH, it does work.
In all scenarios described above, the file env_vars.sh is created.
I just want to set the env vars from my metadata for any instance.
Any suggestions on how I can achieve this?
EDIT:
Here's the terraform code:
# create instance
resource "google_compute_instance" "default" {
  count        = 1
  name         = var.machine_name
  machine_type = var.machine_type
  zone         = var.region_zone
  tags         = ["allow-http-ssh-rule"]

  boot_disk {
    initialize_params {
      image = var.source_image
    }
  }

  network_interface {
    network = "default"
    access_config {
      // Ephemeral IP
    }
  }
}
I have reproduced your issue in my own project, and you are right: it seems that export does not work in the start-up script.
I also tried creating a start-up script in a bucket, but it does not work either.
On the other hand, I was able to set the env var in my project:
I'm using a debian-9 image, so I edited /etc/profile to add the env vars.
I used the following code to create my VM with env variables:
provider "google" {
project = "<<PROJECT-ID>>"
region = "us-central1"
zone = "us-central1-c"
}
resource "google_compute_instance" "vm_instance" {
name = "terraform-instance"
machine_type = "f1-micro"
boot_disk {
initialize_params {
image = "debian-cloud/debian-9"
}
}
network_interface {
# A default network is created for all GCP projects
network = "default"
access_config {
}
}
# defining metadata
metadata = {
foo = "bar"
}
metadata_startup_script = "echo ENVVAR=DEVELOPMENT2 >> /etc/profile"
}
After the creation of my instance I was able to see the correct values:
$ echo $ENVVAR
DEVELOPMENT2
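If you want the startup script to pick up the env_vars value you already store in project metadata instead of hardcoding it, a minimal sketch could look like this (the heredoc startup script is my own assumption; it reuses your metadata key and the /etc/profile approach from above):
metadata_startup_script = <<-EOT
  #!/bin/bash
  # Fetch the project metadata value (e.g. "export test=10") and persist it
  # system-wide so that new login shells pick it up.
  curl -s -H "Metadata-Flavor: Google" \
    "http://metadata.google.internal/computeMetadata/v1/project/attributes/env_vars" \
    >> /etc/profile
EOT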
Error initializing classpath: Could not set unknown property 'env' for task ':createPostgresContainer' of type com.bmuschko.gradle.docker.tasks.container.DockerCreateContainer. (Use --stacktrace to see the full trace)
https://bmuschko.github.io/gradle-docker-plugin/
I was using this Gradle Docker Plugin to set up a Postgres Docker container, to test out some database-related stuff on the Mac, following the example here: http://guides.grails.org/grails-docker-external-services/guide/index.html
But it doesn't seem to work, because the env property cannot be set like this:
task createPostgresContainer(type: DockerCreateContainer, dependsOn: pullPostgresImage) {
    group = "docker"
    ext {
        pgContainerName = "demo-db"
        dbName = "demo-db"
        dbPort = 5432
        dbPassword = "kevintan"
    }
    description = 'Creates PostgreSQL container'
    containerName = pgContainerName
    imageId = pullPostgresImage.imageName + ":" + pullPostgresImage.tag
    portBindings = ["${dbPort}:5432"]
    env = [
        "POSTGRES_PASSWORD=${dbPassword}",
        "POSTGRES_DB=${dbName}",
    ] as String[]
    onError { e ->
        if (e.class.simpleName in ['BadRequestException', 'ConflictException']) {
            logger.warn 'Container already exists'
        } else {
            throw e
        }
    }
}
Is there any way to set the env? Or am I missing something?
Never mind. I forgot to read the changelogs.
Removed DockerCreateContainer.env, replaced by DockerCreateContainer.envVars
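A minimal sketch of the fix, assuming a plugin version in which envVars takes a map of environment variables (swap this in for the env block above):
    envVars = [
        'POSTGRES_PASSWORD': dbPassword,
        'POSTGRES_DB'      : dbName
    ]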
I got a sample AWS CodePipeline working via the console but need to get it set up via Terraform.
I have two problems, one minor and one major:
The GitHub stage fails until I go in and edit it via the console, even though I wind up not changing anything I had already set up in "owner" or "repo".
The more major item is that I keep getting CannotPullContainerError on the build step, which keeps anything else from happening. It says "repository does not exist or may require 'docker login'".
The repository DOES exist; I used the command line on my Linux instance to verify that the same 'docker login' and 'docker pull' commands work there, even though they don't work from AWS CodePipeline.
(I know: the buildspec.yml is stupidly insecure but I wanted to get the prototype I had working the same way before I put in kms.)
My buildspec.yml is simple:
version: 0.2
phases:
  pre_build:
    commands:
      - $(aws ecr get-login --no-include-email --region us-west-2)
      - docker pull 311541007646.dkr.ecr.us-west-2.amazonaws.com/agverdict-next:latest
  build:
    commands:
      - sudo apt install curl
      - curl -sL https://deb.nodesource.com/setup_8.x | sudo bash -
      - sudo apt install nodejs -y
      - mkdir /root/.aws
      - cp ./deployment/credentials /root/.aws/credentials
      - cd ./deployment
      - bash ./DeployToBeta.sh
Here's the Terraform that creates the pipeline. (There is no 'deploy' stage, as the 'build' shell script from a previous incarnation handles that.)
locals {
  github_owner           = "My-Employer"
  codebuild_compute_type = "BUILD_GENERAL1_LARGE"
  src_action_name        = "projectname-next"
  codebuild_envronment   = "int"
}

data "aws_caller_identity" "current" {}

provider "aws" {
  region = "us-west-2"
}

variable "aws_region" { default = "us-west-2" }

variable "github_token" {
  default     = "(omitted)"
  description = "GitHub OAuth token"
}
resource "aws_iam_role" "codebuild2" {
name = "${var.codebuild_service_role_name}"
path = "/projectname/"
assume_role_policy = "${data.aws_iam_policy_document.codebuild_arpdoc.json}"
}
resource "aws_iam_role_policy" "codebuild2" {
name = "codebuild2_service_policy"
role = "${aws_iam_role.codebuild2.id}"
policy = "${data.aws_iam_policy_document.codebuild_access.json}"
}
resource "aws_iam_role" "codepipeline2" {
name = "${var.codepipeline_service_role_name}"
path = "/projectname/"
assume_role_policy = "${data.aws_iam_policy_document.codepipeline_arpdoc.json}"
}
resource "aws_iam_role_policy" "codepipeline" {
name = "codepipeline_service_policy"
role = "${aws_iam_role.codepipeline2.id}"
policy = "${data.aws_iam_policy_document.codepipeline_access.json}"
}
resource "aws_codebuild_project" "projectname_next" {
name = "projectname-next"
description = "projectname_next_codebuild_project"
build_timeout = "60"
service_role = "${aws_iam_role.codebuild2.arn}"
encryption_key = "arn:aws:kms:${var.aws_region}:${data.aws_caller_identity.current.account_id}:alias/aws/s3"
artifacts {
type = "CODEPIPELINE"
name = "projectname-next-bld"
}
environment {
compute_type = "${local.codebuild_compute_type}"
image = "311541007646.dkr.ecr.us-west-2.amazonaws.com/projectname-next:latest"
type = "LINUX_CONTAINER"
privileged_mode = false
environment_variable {
"name" = "PROJECT_NAME"
"value" = "projectname-next"
}
environment_variable {
"name" = "PROJECTNAME_ENV"
"value" = "${local.codebuild_envronment}"
}
}
source {
type = "CODEPIPELINE"
}
}
resource "aws_codepipeline" "projectname-next" {
name = "projectname-next-pipeline"
role_arn = "${aws_iam_role.codepipeline2.arn}"
artifact_store {
location = "${var.aws_s3_bucket}"
type = "S3"
}
stage {
name = "Source"
action {
name = "Source"
category = "Source"
owner = "ThirdParty"
provider = "GitHub"
version = "1"
output_artifacts = ["projectname-webapp"]
configuration {
Owner = "My-Employer"
Repo = "projectname-webapp"
OAuthToken = "${var.github_token}"
Branch = "deploybeta_bash"
PollForSourceChanges = "false"
}
}
}
stage {
name = "Build"
action {
name = "projectname-webapp"
category = "Build"
owner = "AWS"
provider = "CodeBuild"
input_artifacts = ["projectname-webapp"]
output_artifacts = ["projectname-webapp-bld"]
version = "1"
configuration {
ProjectName = "projectname-next"
}
}
}
}
Thanks much for any insight whatsoever!
Both issues sound like permission problems.
CodePipeline's console is likely replacing the GitHub OAuth token (with one that works): https://docs.aws.amazon.com/codepipeline/latest/userguide/GitHub-authentication.html
Make sure the CodeBuild role (${aws_iam_role.codebuild2.arn} in the code you provided, I think) has permission to access ECR.
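For the ECR part, one option is to add a statement along these lines to the policy document behind aws_iam_role_policy.codebuild2 (just a sketch; the data source name here is made up, and you may want to scope resources more tightly than "*"):
data "aws_iam_policy_document" "codebuild_ecr_pull" {
  statement {
    sid = "AllowEcrPull"
    actions = [
      "ecr:GetAuthorizationToken",
      "ecr:BatchCheckLayerAvailability",
      "ecr:GetDownloadUrlForLayer",
      "ecr:BatchGetImage",
    ]
    resources = ["*"]
  }
}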
I have set up a single master with 2 client endpoints in my Icinga 2 monitoring system, using the Director in Top-Down mode.
I have also set up the 2 client nodes with both accept_config and accept_commands enabled
(hopefully this means I'm running Top Down Command Endpoint mode).
The service checks (disk/mem/load) for the 3 hosts return correct results. But my problem is:
according to the Top Down Command Endpoint example,
host icinga2-client1 uses "hostalive" as the host check_command, e.g.
object Host "icinga2-client1.localdomain" {
check_command = "hostalive" //check is executed on the master
address = "192.168.56.111"
vars.client_endpoint = name //follows the convention that host name == endpoint name
}
But one issue I have is that if the client1 Icinga process is not running, the host status stays GREEN, and all of the service statuses (disk/mem/load) stay GREEN as well, because the master is not getting any service check updates and the hostalive check command can still ping the node.
Under the Best Practice - Health Check section, it mentions using the "cluster-zone" check command.
I was expecting that, when using "cluster-zone", the host status would turn RED when the client node's Icinga process is stopped, but somehow this is not happening.
Does anyone have any idea?
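For reference, the health-check example in the docs applies a "cluster-zone" service per client, roughly like this (paraphrased from memory, assuming the convention that each client's zone name equals its host name):
apply Service "cluster-health" {
  check_command = "cluster-zone"
  // assumption: check the client's own zone (zone name == host name)
  vars.cluster_zone = host.name
  assign where host.vars.client_endpoint
}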
My zone/host/endpoint configurations are as follows:
object Zone "icinga-master" {
endpoints = [ "icinga-master" ]
}
object Host "icinga-master" {
import "Master-Template"
display_name = "icinga-master [192.168.100.71]"
address = "192.168.100.71"
groups = [ "Servers" ]
}
object Endpoint "icinga-master" {
host = "192.168.100.71"
port = "5665"
}
object Zone "rick-tftp" {
parent = "icinga-master"
endpoints = [ "rick-tftp" ]
}
object Endpoint "rick-tftp" {
host = "172.16.181.216"
}
object Host "rick-tftp" {
import "Host-Template"
display_name = "rick-tftp [172.16.181.216]"
address = "172.16.181.216"
groups = [ "Servers" ]
vars.cluster_zone = "icinga-master"
}
object Zone "tftp-server" {
parent = "icinga-master"
endpoints = [ "tftp-server" ]
}
object Endpoint "tftp-server" {
host = "192.168.100.221"
}
object Host "tftp-server" {
import "Host-Template"
display_name = "tftp-server [192.168.100.221]"
address = "192.168.100.221"
groups = [ "Servers" ]
vars.cluster_zone = "icinga-master"
}
template Host "Host-Template" {
import "pnp4nagios-host"
check_command = "cluster-zone"
max_check_attempts = "5"
check_interval = 1m
retry_interval = 30s
enable_notifications = true
enable_active_checks = true
enable_passive_checks = true
enable_event_handler = true
enable_perfdata = true
}
Thanks,
Rick