xen-hypervisor - right way to snapshot and revert - rollback

I have a CentOS 7 host with Xen hypervisor 4.6.6.
I manage the VMs from the CLI with the Xen tools (xl, etc.).
The VM disks are managed with LVM.
I would like to create a snapshot of a VM and revert it if needed (e.g. in case of a failed system update).
Please tell me the right way to do this and the steps involved.
Thanks for the help!
(and sorry for my English)

I am actually facing the same issue trying the Xen hypervisor (not XenServer, nor the Xen API), and I will add some more information here.
So far this is what I found: https://searchservervirtualization.techtarget.com/tip/Creating-snapshots-in-Xen-with-Linux-commands
The main difference from this topic is that I use xl and not xm.
Now I have my snapshot.img and my snapshot.sav (cf. the article above).
I don't know what to do with them, since xl restore seems to just restore the domain, and when I dd the disk image back into the LVM partition, I don't succeed in launching the VM again.
Any advice on that?
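For what it's worth, here is a minimal sketch of a disk-level snapshot/rollback with LVM and xl (the volume group vg0, the LV name vm1-disk, the domain name vm1, and the config path are illustrative assumptions, not taken from this thread):

# shut the VM down cleanly so the disk is quiescent
xl shutdown -w vm1
# take a copy-on-write LVM snapshot of the VM's disk (2G of change buffer)
lvcreate --snapshot --size 2G --name vm1-snap /dev/vg0/vm1-disk
# boot the VM again and perform the risky update
xl create /etc/xen/vm1.cfg
# --- if the update went wrong, roll back: ---
xl shutdown -w vm1
lvconvert --merge /dev/vg0/vm1-snap    # merges the snapshot back into the origin
xl create /etc/xen/vm1.cfg
# --- if the update succeeded, just drop the snapshot: ---
lvremove /dev/vg0/vm1-snap

Note the merge only starts once the origin volume is no longer open; if lvconvert reports it as deferred, deactivate and reactivate the LV (lvchange -an / lvchange -ay) before booting the VM again.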

Coming back to this topic: I ended up using XenOrchestra to handle snapshots and exports.
Also, one big mistake I had in my mind was confusing a snapshot with an export (image).
A snapshot is a point-in-time state of your VM, whereas an export is the entire VM. I was trying to use a snapshot to recreate a VM after removal, which does not make sense.
I also think you can do this easily with XenCenter.

Related

MariaDB settings in Docker

A while back I created an instance of mariadb inside a docker container on a machine running Ubuntu. I've since learned that I'll need to update some settings to keep things running smoothly, but when I created the image, I did not specify any .cnf volumes. How do I update/create a .cnf file for this image? I'm a complete newb when it comes to docker, so please spoon-feed me.
I've tried accessing the file from within the image, but there are no text editors.
The defaults of MariaDB work pretty much out of the box (container) for small instances. You should only need to change settings when problems occur.
If you have spare memory you can increase your innodb_buffer_pool_size.
With the mariadb container, you don't need to edit the .cnf files; you can just add a few options on the command line per the docs (which you should definitely read), as in the sketch below.
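For example, a minimal sketch (the container name, password, and buffer size are illustrative): options placed after the image name on docker run are passed straight to the server as command-line arguments, so no .cnf file inside the container needs editing.

# any flags after the image name become mysqld arguments
docker run -d --name mariadb \
  -e MYSQL_ROOT_PASSWORD=secret \
  mariadb:latest \
  --innodb-buffer-pool-size=1G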
I recommend using the defaults for a while and, if you encounter problems, posting a new question on dba.stackexchange.com that includes the show global status output and specifics on the queries that are slow (show create table TBLNAME / explain QUERY).

Is there any possibility to run docker-autoheal (or something similar) for an image that is already running, without having the Dockerfile?

I have an image that I cannot modify in any way, but I need some other way to monitor it or to apply a docker-autoheal alternative. The issue is that most of the docker-autoheal documentation requires modifying the image configuration and adding a health check to it (which is not possible in my situation).
My main goal here is to automatically restart the container after it stops or fails.
Hi, you can use docker commit in order to make changes to the image :)
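Only a hedged side note (the container name mycontainer is illustrative): if the goal is simply restart-on-exit rather than health-based healing, Docker's built-in restart policy can be applied to an existing container without touching its image at all.

# attach a restart policy to the running container; no rebuild required
docker update --restart unless-stopped mycontainer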

Enable k8s experimental features in Docker Desktop

Does anyone know if this is possible?
All I can find in the docs is a reference to enabling Docker experimental features, but not the Kubernetes experimental features.
I tried this, but still get an error.
k alpha debug -it exchange-pricing-865d579659-s8x6d --image=busybox --target=exchange-pricing-865d579659-s8x6d
error: ephemeral containers are disabled for this cluster (error from server: "the server could not find the requested resource").
Thanks
I had the same intent (as have others in this feature request). After several hours of trial and error, I finally found out a way to do so.
Steps:
Depending on which file you're trying to edit, you may need to fully shut down Docker Desktop and restart WSL. (Right-click the tray icon and press "Quit Docker Desktop", then run wsl --shutdown, then run wsl.)
Open the [...]/kubeadm/manifests folder, in the Docker filesystem.
On Windows, navigate Windows Explorer to:
For Docker Desktop 4.2.0: \\wsl$\docker-desktop-data\version-pack-data\community\kubeadm\manifests
For Docker Desktop 4.11.0: \\wsl$\docker-desktop-data\data\kubeadm\manifests
Open the kube-controller-manager.yaml, kube-apiserver.yaml, and kube-scheduler.yaml files, adding the line below:
spec:
containers:
- command:
[...]
- --feature-gates=EphemeralContainers=true   # <-- add this line
Start Docker Desktop again.
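To check that the flag actually took effect after the restart, a quick sketch (the pod name kube-apiserver-docker-desktop is an assumption based on Docker Desktop's static-pod naming mentioned below):

kubectl -n kube-system get pod kube-apiserver-docker-desktop -o yaml | grep feature-gates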
It looks so easy when it's already figured out, huh? Well, trust me, it was a pain to find out.
Some of the slowdowns I hit:
1. It took me quite a while to even find those manifest files. (I eventually found them using grepWin, searching the whole \\wsl$\docker-desktop-data folder for any matches of a line I grabbed from the kube-apiserver-docker-desktop pod's config, which I viewed using Lens.)
2. Once I found them, I got confused by this documentation. When I read FEATURE STATE: Kubernetes v1.22 [alpha], I thought that meant you needed version 1.22 or higher of Kubernetes for the feature to be available. This caused a huge wild-goose chase where I tried to change the version of Kubernetes being launched in Docker Desktop, which Docker Desktop didn't seem to like. (In retrospect, the issue may have just been the minor one in point 3 below...)
3. When I first made changes to the manifest files, I was using Notepad++. And despite my liking Notepad++, it's apparently not quite as smart as VS Code in one regard: it does not automatically detect the indentation type for YAML files. Thus, when I pressed tab to create an indent so I could add the new flag to the argument list, it inserted a tab character rather than spaces. This caused Kubernetes to fail to read the file. That might not be so bad if Kubernetes gave a sane error message for it, but instead it merely said unexpected EOF. And I didn't even see that error message at first, because it was not being propagated to the kube-controller-manager-docker-desktop pod (which was the only relevant pod that wasn't immediately erroring/closing). Anyway, I didn't realize this was the problem at the time, so...
4. I decided to try bypassing the manifest files and applying my modification to the etcd data store directly. In retrospect, this was not a good idea: the etcd data store is pretty complex, and both its tooling and its documentation are substandard. I spent a ton of time just figuring out how to send commands to read and write its data (eventually managing it by calling etcdctl within the etcd-docker-desktop pod). I spent further time writing a NodeJS script capable of reading all the data as JSON, storing it in a dump file, and writing changes back to entries despite there being 3+ levels of quoting involved (I was eventually able to pass the value via stdin rather than as part of the command string, to avoid quotation-mark inception). After all that work on etcd reading/writing, I found it didn't matter anyway, because Kubernetes invariably "breaks" if anyone else writes to its etcd data store (even if you write the exact same value that was there before, as verified by comparing the dumps before and after).
After all of the above, I decided to have one last go at just adding the flags to the mentioned manifest files. I was still getting the startup failure/error, but at the very end I decided I wanted to see exactly what about my changes was causing Kubernetes to reject them. So I tried commenting out my added line; the error remained. I thought maybe it was a checksum-based rejection then. But then I thought: maybe the YAML parser Kubernetes uses is just outdated and finicky about which comments it recognizes. So I tried moving the comment around to different places, and was puzzled when the manifest was accepted simply by moving the comment to the root level. I moved it back to various locations, with it working and not working, until I thought to make the line "half-indented", since it was "in between" the working and non-working versions. That's when I noticed the line had a tab as its indent. And then it hit me: were the other lines also using tabs? I checked, and no, they were using spaces. That's when I realized I had wasted the last few hours on something I could have fixed with a simple indent change.
The moral of the story, for some, is that YAML is a bad configuration format because it makes trivial errors like this easy. But I actually place more of the blame on whatever parser Kubernetes is using for the YAML files; it is unacceptable for a YAML parser to encounter an indentation mismatch and give a message as generic as unexpected EOF. I don't know which YAML parser it is, and I'm tired enough of the subject that I'm not going to look into it right now. If one of you finds it, please file an issue report for it -- perhaps including this story as a real-world example of the pain that ambiguous error messages can cause.
Since Ephemeral Containers is still an alpha feature, it is disabled by default.
As you can read here, for this to work it requires the EphemeralContainers feature gate to be enabled, plus Kubernetes client and server versions v1.16 or later.
As to the 2nd requirement, I assume both your Kubernetes server and client versions are v1.16 or later, but it looks like, for the time being, the 1st requirement cannot be met on Docker Desktop. According to this issue, it currently doesn't support enabling feature gates.
However you may still try to ssh to your master node and edit the following files:
/etc/kubernetes/manifests/kube-apiserver.yaml
/etc/kubernetes/manifests/kube-scheduler.yaml
by adding inside the command section:
--feature-gates=EphemeralContainers=true
Then you need to delete those pods so they are recreated with the new settings applied. You'll find them by running:
kubectl get pods -n kube-system
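A hedged sketch of that deletion step (the pod names are assumptions based on Docker Desktop's node name docker-desktop; the kubelet recreates static pods from the manifest files automatically):

# delete the mirror pods; the kubelet respawns them with the new flags
kubectl -n kube-system delete pod kube-apiserver-docker-desktop kube-scheduler-docker-desktop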

remove/disable node from icinga2

I feel terrible that I can't solve this on my own.
I have Icinga 2.6 installed: 1 master, multiple "slaves". One of our servers is going to be offline for a longer time, so I want to disable/remove its node.
I tried "icinga2 node remove", but I get a "deprecated, read changelog" error. I read the changelog, but I really can't understand why I need to reverse-engineer such a simple piece of functionality across half the internet, reading more tickets than I care to count...
Still no solution. I tried deleting files from repository.d, but with zero success.
Any help would be good, and a few words in the official docs would be nice too :D
I'm not really sure if this is the answer you're looking for, but I'm giving it a shot anyway, as I'm in the same boat as you are.
The only alternative I've found thus far is installing and setting up the 'Director' module on Icinga Web 2. The process is, like everything else with Icinga, poorly documented, but it'll get you there. Please see here for instructions: https://www.icinga.com/docs/director/latest/doc/02-Installation/
Once installed, the module needs to be configured and old hosts may be imported. That's where it ended for me: what was documented didn't work, and the error messages are probably only logical to the person who wrote them.
I've given up and am looking for a replacement for Icinga2 right now. While I liked it at the start, complicated as it was, they've now created a tool so difficult to work with that many people simply won't.
I have 2.6 installed and needed to remove a node as well.
I know you tried to remove files and that didn't work for you - but it worked for me - so just documenting the process here in case it helps someone else.
I was able to remove the node manually by removing all files and directories related to the node in repository.d, specifically in these directories:
/etc/icinga2/repository.d/endpoints
/etc/icinga2/repository.d/hosts
/etc/icinga2/repository.d/zones
Note that in /etc/icinga2/repository.d/hosts there should be a subdirectory related to the node you are trying to remove - which also needs to be removed.
Once all of them are removed (I recommend just moving them to another location outside /etc/icinga2 in case you need to revert), restart the icinga2 process.
At this point my icinga2 instance restarted successfully and the node was not showing up anymore.
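A sketch of that manual cleanup (the node name mynode and the backup path are illustrative; the exact file names under repository.d are assumptions based on the directories listed above):

# move the node's entries aside instead of deleting them outright
mkdir -p /root/icinga2-repo-backup
mv /etc/icinga2/repository.d/endpoints/mynode.conf /root/icinga2-repo-backup/
mv /etc/icinga2/repository.d/zones/mynode.conf /root/icinga2-repo-backup/
mv /etc/icinga2/repository.d/hosts/mynode.conf /root/icinga2-repo-backup/
mv /etc/icinga2/repository.d/hosts/mynode /root/icinga2-repo-backup/mynode.hostdir
# restart to apply
service icinga2 restart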
Well, then you need to do the following:
Execute: icinga2 node remove 'PC-name'
A "deprecated, read changelog" warning will appear.
Then execute: icinga2 node update-config
And the last step: service icinga2 restart
Be happy, the node disappears.

Separate transaction logs in Neo4j Enterprise

I'm trying to move the logical logs out of the *.db folder and put them on another disk volume. However, I don't see any option in the Neo4j config files that would allow this. Is this configuration possible?
My Neo4j version is 3.2.1.
Thanks
No, it is - at this time - not possible to move the transaction logs somewhere else. Note that while the term "logs" is technically correct, these files are essential to the integrity of the database (unlike a regular log, it would be very unwise to delete them), and it is therefore logical that they live together with the actual data files.
Hope this helps,
Tom
See the file conf/neo4j.conf and the line:
#dbms.directories.logs=logs
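For reference, a minimal sketch of uncommenting that setting and pointing it at another volume (the path is illustrative; note that in 3.2 this setting relocates the general log files such as debug.log, while the transaction logs discussed above stay with the store files):

# conf/neo4j.conf
dbms.directories.logs=/mnt/other-volume/neo4j/logs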
