Icinga2 repository service remove "disk C:" - monitoring

I am new to Icinga 2 and am using repository.d and "node update-config" to get the services of a client.
I am using it for Windows servers.
I don't want multiple entries for disk monitoring, like "disk" and "disk C:", to get added.
Is it possible to remove "disk C:" from the repository?
I don't want this service to get added for further Windows servers.

On your remote Windows machine, you just have to delete
vars.disks["disk C:"] = {
  disk_partition = "C:"
}
from your host.conf file, validate the configuration with icinga2.exe daemon -C, and restart Icinga 2 with Restart-Service icinga2.
Don't forget to update your node config on your monitoring server with icinga2 node update-config.
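Put together, the sequence looks roughly like this (a sketch based on the steps above; the final reload on the server is an assumption about a typical setup, not something stated in the answer):
# On the Windows client, after editing host.conf:
icinga2.exe daemon -C
Restart-Service icinga2
# On the monitoring server:
icinga2 node update-config
service icinga2 reload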

Related

Oracle Cloud: attach boot volume

There was a need to restore SSH access to an instance (Ubuntu 22), so, using a guide provided by Oracle, the boot volume was detached and connected as block storage to another instance. Once a new SSH key was added to the authorized_keys file, the disk was unmounted and detached from the temporary instance. So now I'm trying to re-attach the boot volume to the initial instance; it successfully starts the process, but then the "Attaching" status changes to "Detaching" and in the end the volume is still detached. I'm new to Oracle Cloud and have no idea how to debug any infrastructure operations (where to find an event log).
Here are some screenshots showing the sequence: Attach volume -> Attaching -> Detaching -> Detached

Azure Kubernetes Service (AKS) POD File Explorer

I deployed a .NET application on AKS with a Windows node pool. I want to view the file structure in the AKS pod. Is there any tool for that, or any other suggestion?
I don't have a tool for this, but I have a suggestion: if you are using Kubernetes, don't log to files inside your Pods.
Your application should send logs to STDOUT and STDERR; you can scrape those logs with a tool like Fluent Bit, Fluentd, or Promtail and send them to a central log solution like Loki.
Another downside of this log-file approach is that if you don't have a persistent volume for your pod, it will use an emptyDir, i.e. an ephemeral volume. This also means that Kubernetes will evict your pod if the node reaches 85% of its storage capacity.
I found a simple way to view a pod's folder structure and the contents of files using PowerShell.
Run the command below to exec into the pod and open PowerShell so you can run commands inside it:
kubectl exec -it k8s-xm-cm-pod -n staging -- powershell
Reference: https://support.sitecore.com/kb?id=kb_article_view&sysparm_article=KB0137733
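Once inside the pod, standard PowerShell cmdlets can be used to browse; for example (the application path below is purely hypothetical):
# List the directory tree of the app folder
Get-ChildItem -Recurse C:\inetpub\wwwroot
# Print the contents of a single file
Get-Content C:\inetpub\wwwroot\web.config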
Please let me know if anyone knows of other tools that show the pod file structure.

mongooseim cluster setup eacces error on ubuntu 14.04

We are trying to create a master-master cluster of two MongooseIM instances on AWS in the same virtual network.
All necessary ports are opened in the AWS security group.
I suspect some issue with the MongooseIM setup on Ubuntu 14.04 LTS.
After running the join_cluster command on one of the nodes we get the following error (see the screenshot):
Error: {error,{badmatch,{error,eacces}}}
Attached screenshot with details.
The server configuration was not changed except for the VM args, as shown in the attached screenshot.
Is this an issue with your binary, or some other glitch?
I ran into this issue myself. MongooseIM uses Erlang's internal Mnesia storage system for a lot of information, including cluster topology. The default path for Mnesia's storage is /var/lib/mongooseim. When you do a mongooseimctl join_cluster ... it needs to wipe its Mnesia store and basically pulls a copy from the cluster it's joining. The issue arises because it also tries to delete /var/lib/mongooseim itself, which it won't have permission to do, because the mongooseim user it runs as doesn't have write permission on the parent directory, /var/lib. Nor should it.
The way I fixed this was by creating a subdirectory which it could safely delete and recreate, and configuring it to use that as its Mnesia directory:
sudo mkdir /var/lib/mongooseim/mnesia
sudo chown mongooseim:mongooseim /var/lib/mongooseim/mnesia
Configuration for the Mnesia directory exists by default in /etc/mongooseim/app.config. In mine it was the third line; originally it looked like this:
{mnesia, [{dir, "/var/lib/mongooseim"}]},
I changed the path to the new directory I created:
{mnesia, [{dir, "/var/lib/mongooseim/mnesia"}]},
After that, I stopped and started MongooseIM and was able to join the cluster successfully:
mongooseimctl stop
mongooseimctl start && mongooseimctl started
mongooseimctl join_cluster mongooseim#other.node.name

Change Docker native images location on Windows 10 Pro

This is not a duplicate of Change Docker machine location - Windows
I'm using Docker native, version 1.12.1-stable (build: 7135), on Windows 10 Pro with Hyper-V enabled.
So Docker is not running with VirtualBox, nor do I have the folder C:\Users\username\.docker.
I'd like to move Docker's images, caches, ... to my secondary drive D:\
I guess I should edit the Docker daemon configuration.
I tried to add "graph": "/D/docker". Docker started correctly, but I couldn't pull any image because of an error:
open /D/docker/tmp/GetImageBlob135686954: no such file or directory
How do I tell Docker to use another path to store its images, etc.?
Docker Desktop can now use the WSL 2 backend. In this mode, you need to move the WSL data.
In my case (Windows 10 with Docker Desktop) none of the above solutions helped me, but I found the solution: run the commands below.
They move the Docker data to drive D: (don't forget to quit Docker Desktop first):
wsl --shutdown
wsl --export docker-desktop-data docker-desktop-data.tar
wsl --unregister docker-desktop-data
wsl --import docker-desktop-data D:\docker-new-repo\ docker-desktop-data.tar --version 2
And now you can delete the .tar file.
There is a very good blog post explaining everything:
https://dev.to/kimcuonthenet/move-docker-desktop-data-distro-out-of-system-drive-4cg2
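To double-check that the distro was re-registered after the import, and to remove the export file mentioned above, something like the following should work (the file name matches the commands above):
# docker-desktop-data should show up in the list again
wsl --list --verbose
Remove-Item .\docker-desktop-data.tar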
Docker version: 2.2.0.3 (42716)
Right-click the Docker icon in the system tray
Click on Settings
Click on Resources in the left-hand menu, then under Disk image location click Browse and change the location
Click Apply & Restart
In 2020, the way to "Change Docker native images location on Windows 10 Pro" is:
quit Docker Desktop
open/edit the configuration file C:\ProgramData\Docker\config\daemon.json
add the setting "data-root": "D:\\Virtual Machines\\Docker" (see the daemon.json sketch after these steps)
now start Docker Desktop
run docker info to see the setting Docker Root Dir: D:\Virtual Machines\Docker
pull a Docker image, e.g. docker pull mongo
you can find the downloaded images in the folder D:\Virtual Machines\Docker\windowsfilter
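For reference, a minimal daemon.json containing only this setting might look like the sketch below; keep any other settings you already have in that file:
{
  "data-root": "D:\\Virtual Machines\\Docker"
}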
I found a solution here
Docker native, on Windows, runs in a Hyper-V virtual machine.
Move the existing Docker VM
I had to move the VM used by Docker to the desired location. I did this using the GUI of Hyper-V Manager. The VM for Docker is called MobyLinuxVM.
Right-click MobyLinuxVM
Select Move
Select desired location
Set the location of future Hyper-V VMs
To be sure future Hyper-V VMs will be stored on my secondary drive, I followed these instructions.
In a PowerShell terminal (the destination folders must exist):
Set-VMHost -ComputerName <computer> -VirtualHardDiskPath 'D:\Hyper-V_Virtual-Hard_Disks'
Set-VMHost -ComputerName <computer> -VirtualMachinePath 'D:\Hyper-V_VMs'
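If you are running this on the machine itself, the -ComputerName parameter can be omitted and both paths can be set in one call; a hypothetical local example (the paths are just placeholders):
Set-VMHost -VirtualHardDiskPath 'D:\Hyper-V_Virtual-Hard_Disks' -VirtualMachinePath 'D:\Hyper-V_VMs'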
Edit the Docker Daemon configuration and use "data-root": "D:\\docker" instead of "graph": "/D/docker".
That will move all newly downloaded images to the D:\docker folder.
For old Docker versions use "graph": "D:\\docker"; "graph" has since been deprecated in favour of "data-root".
There is an easier way to do this:
Go to Docker Settings > Advanced > Change "Disk image location" and click "Apply" when prompted. Docker engine will shut down the VM and move it for you to the new location.
Warning: the new location must not be compressed. If it is, Docker will not show you any error; it just won't change the location.
None of these steps worked for me. After a reboot or a Docker restart, it would move back to the original path. What worked for me was using Junction:
stop docker engine
create a target folder in the new location:
mkdir d:\docker\vhd
copy the folder Virtual Hard Disks to the target folder
rename (and back up) the original folder:
rename "C:\Users\Public\Documents\Hyper-V\Virtual hard disks" "Virtual hard disks_backup"
create a directory junction pointing at the new location:
junction.exe "C:\Users\Public\Documents\Hyper-V\Virtual hard disks" "d:\docker\vhd\Virtual Hard Disks"
start docker engine
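To confirm the junction is in place, a directory listing of the parent folder from a cmd prompt should show it tagged as <JUNCTION> (this check is just a suggestion, not part of the original steps):
dir "C:\Users\Public\Documents\Hyper-V"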
For those looking in 2020, the following is for a Windows 10 machine:
In the global Actions pane of Hyper-V Manager, click Hyper-V Settings…
Under Virtual Hard Disks, change the location from the default to your desired location.
Under Virtual Machines, change the location from the default to your desired location, and click Apply.
Click OK to close the Hyper-V Settings page.
If you have issues using the Docker Desktop GUI when using Hyper-V:
Shutdown Docker Desktop
Edit c:\users\[USERNAME]\AppData\Roaming\Docker\settings.json
You need to edit the dataFolder entry. Use double backslashes.
e.g.: "dataFolder": "D:\\Demo\\Hyper-V\\DockerDesktop\\DockerDesktop"
Restart Docker Desktop
You can also use the above if Docker Desktop loses track of where your data folder is, as the GUI doesn't allow you to set it to a previously used location.
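As an illustration, the relevant part of settings.json would then look something like this (the file contains many other keys, omitted here; the path is just the example value from above):
{
  "dataFolder": "D:\\Demo\\Hyper-V\\DockerDesktop\\DockerDesktop"
}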
I would recommend looking at the Microsoft documentation for Docker Engine on Windows; it's the daemon.json file that lets you change the "data-root" setting.
From: https://github.com/microsoft/WSL/issues/4699#issuecomment-658369676
He created a symlink pointing to the new folder location by running:
$ErrorActionPreference = "Stop"
$newLocation = "E:\VMs\WSL2\"
# Go to the folder where Docker Desktop keeps the WSL 2 data disk
cd "~\AppData\Local\Docker\wsl\data"
# Stop all WSL distros so the vhdx file is no longer in use
wsl --shutdown
# Optional: compact the virtual disk before moving it (needs the Hyper-V PowerShell module)
Optimize-VHD .\ext4.vhdx -Mode Full
mkdir $newLocation -Force
mv ext4.vhdx $newLocation
cd ..
# Remove the now-empty data folder and replace it with a symlink to the new location
rm "data"
# Creating the symlink requires an elevated (administrator) PowerShell session
New-Item -ItemType SymbolicLink -Path "data" -Target $newLocation
He also wrote a blog post going into more detail: http://nuts4.net/post/moving-wsl2-vhdx-file-to-a-different-location
Just the configuration from Docker Desktop worked for me (latest version, v20.10.8).
Steps:
Go to Settings
Select the 'Docker Engine' option
Add the property "data-root": "D:\\Docker" in the configuration file
Apply and Restart

ArtifactDeployer plugin -remote access denied (Linux to Windows)

I am trying to use the ArtifactDeployer plugin to copy the artifacts from the WORKSPACE/jobs/ directory into a remote directory on a Windows 7 machine. The Jenkins machine's OS is Linux.
However, Jenkins never manages to succeed, throwing errors like:
[ArtifactDeployer] - Starting deployment from the post-action ... [ArtifactDeployer] - [ERROR] - Failed to deploy. Can't create the directory ... Build step [ArtifactDeployer] - Deploy artifacts from workspace to remote directories' changed build result to FAILURE
I am not sure how to use the Remote Directory parameter.
Please check the sample below for how I am trying to specify the remote directory:
remote Directory - \ip address of that machine\users\public
Is it possible to copy the artifacts, which are on a Linux machine, to a Windows 7 machine?
Please let me know how to specify the remote directory.
Reading the plugin page doesn't seem to be very helpful when it comes to configuring it. The text seems to hint that you need to have local access (from the node where the job is running) to the (remote) folder you want to deploy to. For a first test, use a local directory (on your Linux box) to see if you can get it to work. Second, the correct way to address a Windows share is \\servername\sharename\subdirs. Remember that you might need to log in to the share.
You might need to install Samba or CIFS utilities to connect to the Windows share from your Linux system. There is also a setting in Windows that determines whether your Windows box will accept connections via aliases. If it doesn't, you need to use the hostname in order to access the share, so the IP and any alias for the server will not work.
e.g.
hostname: RTS3524
alias: JENKINSREPO
ip: 192.168.15.33
share: temp
For the example above, only \\RTS3524\temp will work; \\192.168.15.33\temp will not.
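As an illustration only, mounting that share from the Linux node could look roughly like the following; the package manager, mount point, and credentials are placeholders for whatever applies to your system:
sudo apt-get install cifs-utils
sudo mkdir -p /mnt/jenkins-deploy
sudo mount -t cifs //RTS3524/temp /mnt/jenkins-deploy -o username=jenkinsuser,password=secret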
