Duplicity: Too many files in target directory - duplicity

I'm trying to use duply for backups, which is based on duplicity. My problem is that my remote target ("GMX mediacenter" via WebDAV) seems to limit the number of files in one directory to 3000 (this is not documented - only my experience). When this limit is reached, the backup always stops because of WebDAV response status 507 with reason 'Insufficient Storage'.
I am seeing the following options:
Split the source into several parts. Even then, I still risk reaching the limit of 3000 files in a target directory.
Increase the volume size (--volsize), but that only delays the problem as well.
Do I have any other good options?
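For reference, option 2 would look roughly like this with plain duplicity (the source path and WebDAV URL are placeholders, not my real setup; --volsize takes the volume size in MB):
duplicity --volsize 500 /home/me/data webdavs://user@example.com/backup
With duply, the equivalent should be possible through the VOLSIZE/DUPL_PARAMS settings in the profile's conf file, if I remember its template correctly.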

Not a development issue, so maybe the wrong place to ask your question.
I'd say change your backend! How will you ever guarantee that your backup will hold fewer than exactly 3000 files?
There are lots of cheap/free services out there offering 2GB and up.
..ede/duply.net

Related

Electron does not run on shared folder

C:\share is a shared folder.
C:\share\electron-v13.0.1-win32-x64, \\192.168.1.10\share\electron-v13.0.1-win32-x64 and Z:\electron-v13.0.1-win32-x64 are the same folder.
The Electron app launches correctly when I run the C:\share\electron-v13.0.1-win32-x64\electron.exe command.
However, the app does not launch correctly when I run the Z:\electron-v13.0.1-win32-x64\electron.exe command.
According to Task Manager, the Electron processes are running.
However, Electron's window is not shown.
Can Electron run correctly on a shared folder?
It should be safer to use it locally (from C:\share). Mapped drives behave very differently from the local filesystem, and their implementations can differ in their settings as well:
https://wiki.samba.org/index.php/Time_Synchronisation
https://www.truenas.com/community/threads/issue-with-modified-timestamps-on-windows-file-copy.82649/
https://help.2brightsparks.com/support/solutions/articles/43000335953-the-last-modification-date-and-time-are-wrong
If I understand correctly, you are just mapping back your own shared folder. Overall, Windows server configurations have felt more consistent to me, but the protocol has changed over time as well:
https://en.wikipedia.org/wiki/Server_Message_Block
I do not understand the network sharing protocols well enough to give you an exact answer to why you have this problem, but I know enough to tell you that mounted shared folders are not like your own local filesystem. In many cases the differences do not matter and they give a great user experience, but in some cases these minute differences break things in mysterious ways, even if the shares are mapped/mounted almost like a regular/local drive. This is not a problem exclusive to Electron.
And that is a problem with a lot of things over SMB (mainly binaries/tools): the shared folder might be running on a different filesystem, with different permissions and privileges (or a completely different permission structure underneath if it is a completely different filesystem). Remote folders might have issues with inotify receiving events on file updates, or might miss a changed file (touch on Linux, for example, is meant to update a file's timestamp), so over a shared folder timestamp updates might be delayed or rounded. I think at one point even Makefiles were misbehaving, as they depend on modification dates to work the way they would locally.
Another problem with tools is shareability: can they handle running multiple instances from the same location? Do they save something into a ./tmp or some other file that could conflict with another user running them at the same time?
Overall I tend to use shares for data (and have had issues even with that a few times), and only run applications from a remote share if they are known not to cause trouble.
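If launching from the share is unavoidable, one workaround (just a sketch; the UNC path is the one from the question, and %LOCALAPPDATA%\electron-app is an arbitrary local target) is to mirror the app to a local folder first and start it from there:
robocopy \\192.168.1.10\share\electron-v13.0.1-win32-x64 %LOCALAPPDATA%\electron-app /MIR
%LOCALAPPDATA%\electron-app\electron.exe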

Enable k8s experimental features in Docker Desktop

Does anyone know if this is possible?
All I can find in the docs is a reference to enabling Docker experimental features, but not the Kubernetes experimental features.
I tried this, but I still get an error.
k alpha debug -it exchange-pricing-865d579659-s8x6d --image=busybox --target=exchange-pricing-865d579659-s8x6d
error: ephemeral containers are disabled for this cluster (error from server: "the server could not find the requested resource").
Thanks
I had the same intent (as have others in this feature request). After several hours of trial and error, I finally found out a way to do so.
Steps:
Depending on which file you're trying to edit, you may need to fully shut down Docker Desktop, and restart WSL. (right-click tray-icon and press "Quit Docker Desktop", then run wsl --shutdown, then run wsl)
Open the [...]/kubeadm/manifests folder, in the Docker filesystem.
On Windows, navigate Windows Explorer to:
For Docker Desktop 4.2.0: \\wsl$\docker-desktop-data\version-pack-data\community\kubeadm\manifests
For Docker Desktop 4.11.0: \\wsl$\docker-desktop-data\data\kubeadm\manifests
Open the kube-controller-manager.yaml, kube-apiserver.yaml, and kube-scheduler.yaml files, adding the line below:
spec:
containers:
- command:
[...]
- --feature-gates=EphemeralContainers=true <-- add this line
Start Docker Desktop again.
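To confirm the flag was picked up, something like this should show it in the API server's command line (kube-apiserver-docker-desktop is the pod name used by Docker Desktop's kubeadm setup; swap findstr for grep outside of Windows):
kubectl describe pod kube-apiserver-docker-desktop -n kube-system | findstr feature-gates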
It looks so easy when it's already figured out, huh? Well, trust me, it was a pain to figure out.
Some of the slowdowns I hit:
It took me quite a while to even find those manifest files. (eventually found it using grepWin, searching through the whole \\wsl$\docker-desktop-data folder for any matches of a line I grabbed from the kube-apiserver-docker-desktop pod's config, which I viewed using Lens)
Once I found it, I got confused by this documentation. When I read FEATURE STATE: Kubernetes v1.22 [alpha], I thought that meant you needed version 1.22 or higher of Kubernetes for the feature to be available. This caused a huge wild goose chase where I tried to change the version of Kubernetes that was being launched in Docker Desktop, which Docker Desktop didn't seem to like. (in retrospect, the issue may have just been the minor one in point 3 below...)
When I first made changes to the manifest files, I was using Notepad++. And despite my liking Notepad++, it's apparently not quite as smart as VS Code in the following regard: it does not automatically detect the indentation type for YAML files. Thus, when I pressed Tab to create an indent, so I could add the new flag to the argument list, it added a tab character rather than spaces. This caused Kubernetes to fail to read the file. That might not have been so bad if Kubernetes gave a sane error message for it, but instead it merely gave the message unexpected EOF. And I didn't even see that error message at first, because it was not being propagated to the kube-controller-manager-docker-desktop pod (which was the only relevant one that wasn't immediately erroring/closing). Anyway, I didn't realize this was the problem at the time, so...
I decided to try bypassing the manifest-files and applying my modification to the etcd data-store directly. In retrospect, this was not a good idea, because the etcd data-store is pretty complex, the tooling is substandard, and the documentation is substandard. I spent a ton of time just trying to figure out how to send commands to read and write data to it (eventually managed to do so by calling etcdctl within the etcd-docker-desktop pod). I spent further time still writing up a NodeJS script capable of reading all the data as JSON, storing it in a dump file, and being able to write changes to entries back despite there being 3+ levels of quoting involved (I eventually was able to use stdin to pass the value rather than as part of the command string, to avoid quotation-mark-inception). After all the work on etcd reading/writing above, I found it didn't work anyway because Kubernetes invariably "breaks" if anyone else writes to its etcd data-store. (even if you write the exact same value that had been there before -- as verified by comparing the dumps before and after)
After all of the above, I decided to have one last go with just adding the flags to mentioned manifest files. Was still getting the startup failure/error, but at the very end, I decided I wanted to see exactly what about my changes was causing Kubernetes to reject them. So I tried commenting out my added line; the error remained. I thought maybe it was a checksum-based rejection then. But then I thought, maybe the YAML parser that Kubernetes is using is just outdated and is finicky about what comments it is able to recognize. So I tried moving the comment around to different places, and was puzzled when the manifest was being accepted just by moving the comment to the root level. I moved it back to various locations, with it working and not working, until I thought to try making the line "half-indented" since it's "in-between" the working and non-working versions. That's when I noticed the line had a tab as its indent. And then it hit me; are the other lines also using tabs? I checked, and nope, they were using spaces. And that's when I realized I had wasted the last few hours on something I coulda just fixed with a simple indent change.
The moral of the story for some is that YAML is a bad configuration format, because it makes it easy to make trivial errors like this. But I actually place the blame more on whatever parser Kubernetes is using for the YAML files; it is unacceptable that a YAML parser would encounter an indentation mismatch and give a message so generic as unexpected EOF. I don't know what the identity of that YAML parser is, but I'm tired enough of the subject that I'm not even going to look into it right now. If one of you finds it, please make an issue report for it -- perhaps including this story as a real-world example of the pain that ambiguous error messages can cause.
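For what it's worth, running the edited manifests through a YAML linter first would have caught this; yamllint (assuming it's installed, e.g. via pip install yamllint) flags tabs used for indentation as a syntax error:
yamllint kube-apiserver.yaml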
Since Ephemeral Containers is still an alpha feature, it is disabled by default.
As you can read here, for this to work, it requires the EphemeralContainers feature gate to be enabled, and Kubernetes client and server version v1.16 or later.
As to the 2nd requirement, I assume both your Kubernetes server and client versions are v1.16 or later, but it looks like, for the time being, the 1st requirement cannot be met on Docker Desktop. According to this issue, it currently doesn't support enabling feature gates.
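Both versions can be checked quickly from the command line:
kubectl version --short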
However, you may still try to SSH to your master node and edit the following files:
/etc/kubernetes/manifests/kube-apiserver.yaml
/etc/kubernetes/manifests/kube-scheduler.yaml
by adding inside the command section:
--feature-gates=EphemeralContainers=true
Then you need to delete those pods so they are recreated with new settings applied. You'll find them by running:
kubectl get pods -n kube-system
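For example, on a control-plane node named master-node (a placeholder; take the real names from the output above), that would be something like:
kubectl delete pod kube-apiserver-master-node kube-scheduler-master-node -n kube-system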

How do I make a simple public read-only WebDAV server with SabreDAV?

I recently began looking into WebDAV, as I found it to be an option for letting me play a Blu-ray folder remotely - i.e. without requiring the viewer to download the whole 24 GB ISO first.
Add a WebDAV source in Kodi v18 to a Blu-ray folder - and it actually plays! Very awesome.
The server can also be mounted on Windows with
net use m: http://example.com/webdavfolder/
or in Linux with
sudo mount -t davfs http://example.com/webdavfolder/ /mnt/mywebdav
- and should then (in theory) play with any software media player that supports Blu-ray Disc Java (BD-J), such as PowerDVD and VLC.
vlc bluray:///mnt/mywebdav --bluray-menu
PowerDVD.exe AUTOPLAY BD m:
(Unless of course the time-out values have been set too low, which seems to be the case for VLC at the moment.)
Anyway, all this is great, except I can't figure out how to make my WebDAV server read-only. Currently anyone can delete files as they wish, and that's of course not optimal.
So far I've only experimented with SabreDAV, because AFAIK that's the only option I have if I want to keep using my existing web host. I've been trying very minimal setups, because I've read that minimal setups should default to a read-only solution. That just doesn't seem to happen.
I initially used the setup from http://sabre.io/dav/gettingstarted/ and tried removing some lines. I also tried calling chmod -R 0444 MainFolder on the web server, and I can see that everything does get a read-only attribute, but it changes nothing: it's still possible to delete whatever I want. :-(
What am I missing?
Maybe I'm using the wrong technology for what I want to do? Is there some other/better way of offering a Blu-ray folder for remote viewing? (One that includes the whole experience - i.e. full Java menus etc).
I should probably mention that all of this is of course perfectly legal. It is my own Blu-ray project - not copyright material.
Also: Difficult to decide if this belongs on StackOverflow or SuperUser. I ended up posting it on StackOverflow because SabreDAV is about coding, and because there's no sabredav tag on SuperUser.
You have two options:
Create your own file/directory classes for sabre/dav that simply throw an error when trying to delete. You can basically start with a copy of Sabre\DAV\FS\Directory and Sabre\DAV\FS\File and change the methods that do writing (see the sketch after this answer).
Since you're considering just using Linux file permissions, the key thing you are missing is that 'deleting' is not controlled by permissions on the file or directory you're trying to delete. To delete a file or directory on Unix, all you need is write permission on the parent directory. However, I wouldn't recommend going this route, as it will just cause a weird error in sabre/dav, which might leave clients in a confused state: it results in a 500 error, not the expected 403.
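For option 1, here is a minimal, untested sketch of what such read-only classes could look like (ReadOnlyFile and ReadOnlyDirectory are my own names; the Sabre\DAV classes and methods are the library's, and the MainFolder path and /webdavfolder/ base URI are taken from the question's setup):
<?php
require 'vendor/autoload.php';

use Sabre\DAV;

// A file node that refuses every write operation with a 403 Forbidden.
class ReadOnlyFile extends DAV\FS\File {
    public function put($data) {
        throw new DAV\Exception\Forbidden('This share is read-only');
    }
    public function delete() {
        throw new DAV\Exception\Forbidden('This share is read-only');
    }
    public function setName($name) {
        throw new DAV\Exception\Forbidden('This share is read-only');
    }
}

// A directory node that refuses writes and hands out read-only children.
class ReadOnlyDirectory extends DAV\FS\Directory {
    public function createFile($name, $data = null) {
        throw new DAV\Exception\Forbidden('This share is read-only');
    }
    public function createDirectory($name) {
        throw new DAV\Exception\Forbidden('This share is read-only');
    }
    public function delete() {
        throw new DAV\Exception\Forbidden('This share is read-only');
    }
    public function setName($name) {
        throw new DAV\Exception\Forbidden('This share is read-only');
    }
    public function getChild($name) {
        // $this->path is the backing filesystem path kept by the parent FS\Node class.
        $path = $this->path . '/' . $name;
        if (!file_exists($path)) {
            throw new DAV\Exception\NotFound('File not found: ' . $name);
        }
        // getChildren() in the parent class builds its list via getChild(), so directory
        // listings should come back wrapped too (worth verifying against your sabre/dav version).
        return is_dir($path) ? new ReadOnlyDirectory($path) : new ReadOnlyFile($path);
    }
}

// Same shape as the getting-started example, just with the read-only root node.
$server = new DAV\Server(new ReadOnlyDirectory('MainFolder'));
$server->setBaseUri('/webdavfolder/');
$server->addPlugin(new DAV\Browser\Plugin());
$server->exec();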

Neo4j getting killed in Travis CI container

I'm trying to add Neo4j 3.0 to my tests for the neo4j gem and I'm having trouble with the server getting killed in a Travis CI container. Pre-3.0 works just fine, but when I use 3.0 it seems to get killed. There seems to be plenty of memory (when I run Neo4j locally it uses 300-400 MB). I get a warning from Neo4j saying:
WARNING: Max 30000 open files allowed, minimum of 40000 recommended. See the Neo4j manual.
That makes me think it's getting killed because of too many open files. I'm not sure if there's a way to increase the open-file limit in the container, and I have a number of jobs, so I don't want to slow things down by running sudo: true. Did Neo4j 3.0 change to require more open files (the documentation doesn't seem to imply that it did)?
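For reference, the actual limits in the Travis environment can be printed from a before_script step (just a diagnostic sketch; ulimit -Sn and ulimit -Hn show the soft and hard open-file limits):
ulimit -Sn
ulimit -Hn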
EDIT:
My .travis.yml file:
This is how I do it, and it works fine for me for 2.3 and 3.0, including a push to Docker Hub.
https://github.com/maxdemarzi/neo_travis
https://travis-ci.org/maxdemarzi/neo_travis
I think our memory allocation is messing things up. One thing that is unusual about your (Travis's) setup is that there is twice as much swap as RAM, and that the amount of memory reported as available is very large.
Try specifying the amount of memory in your config file. See http://neo4j.com/docs/operations-manual/current/#performance-tuning for more details, but essentially add these to your config.
In neo4j.conf:
dbms.memory.pagecache.size=1G
and in neo4j-wrapper.conf:
dbms.memory.heap.max_size=1000
dbms.memory.heap.initial_size=1000
The memory limits are set quite low to guarantee that Travis doesn't kill the process, and I suspect that the tests don't need much in terms of memory.
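If the build unpacks the Neo4j tarball itself, the settings can simply be appended before the server is started, along these lines (neo4j-community-3.0.1 is a placeholder for wherever the archive is extracted):
echo "dbms.memory.pagecache.size=1G" >> neo4j-community-3.0.1/conf/neo4j.conf
echo "dbms.memory.heap.max_size=1000" >> neo4j-community-3.0.1/conf/neo4j-wrapper.conf
echo "dbms.memory.heap.initial_size=1000" >> neo4j-community-3.0.1/conf/neo4j-wrapper.conf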

PHP Command-line scripts are ignoring php.ini and ini_set('memory_limit',...) directives

I am facing the common "Fatal error: Out of memory (allocated 30408704) (tried to allocate 24 bytes)..." PHP Fatal error. Pages served via Apache are not exhibiting this behavior.
I've tried the following:
Increasing the memory_limit in php.ini to a much larger value.
Increasing memory_limit within the script itself via calls to ini_set('memory_limit', -1), ini_set('memory_limit', '-1'), ini_set('memory_limit', 100000000), ini_set('memory_limit', '128M'), etc.
unset()ing unneeded arrays and objects to encourage the garbage collector to free up memory.
Contacting the web host. They are normally very capable and knowledgeable, but have not been able to help me with this issue either.
I've tried explicitly including a php.ini file using the -c command-line flag to hand-pick specific php.ini files with various values.
I've tried setting memory_limit in php.ini using both raw numbers of bytes and values such as 64M, 128M, etc.
The hosting provider was able to run the script as root with no issues, but experiences the same issue I do when running it using my non-root user. Perhaps there is some kind of permissions issue involved?
Regardless of what I try, the error message is the same. It appears that my command line scripts are ignoring changes to memory_limit.
I tend to try to make sure my scripts are memory efficient, but I'm currently needing to parse large amounts of HTML via Simple HTML DOM and it is in the parser that I'm experiencing out of memory issues. In an attempt to reduce the memory footprint of the script, I've tried using DOMDocument instead. This does not help either. In fact, the out of memory error is now triggered elsewhere in the script.
My question: has anyone experienced this or a similar issue? Do you have any recommendations?
Thank you.
It turns out that the problem was caused by shell fork bomb protection enabled on the server, which places a hard memory limit on all command-line scripts. It had been enabled by my web host without my knowledge.
Your PHP on the CLI may be using a different php.ini than your Apache PHP. Try a phpinfo() and check that it's using the ini file you think it's using.
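Two quick checks from the shell (both are standard PHP CLI switches; myscript.php is just a placeholder):
php --ini
php -d memory_limit=512M myscript.php
The first shows which ini files the CLI actually loads; the second overrides memory_limit for a single run, which helps separate "the ini file isn't being read" from "something else is capping memory".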
