How to check the validity of all *.url files on my Win10 partition?

In the past, the tool AM-Deadlink did this job.
Unfortunately it relies on the very old Internet Explorer engine and no longer offers checking of *.url files collected from a partition.
So I am searching for another, newer tool to help me.
It should:
collect all *.url files from a given partition (e.g. D:)
check whether the target web page still exists, or exists only as a redirection
...and report the HTTP status (error) code returned by the server
Is any such tool available?
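For illustration, here is a minimal Python sketch of the desired behaviour, assuming the third-party "requests" package is installed; the drive letter, timeout, and output format are placeholders:

import configparser
from pathlib import Path

import requests

# Walk the partition and parse each .url file (INI format with an
# [InternetShortcut] section holding a URL key).
for shortcut in Path("D:/").rglob("*.url"):
    parser = configparser.ConfigParser(interpolation=None)
    try:
        parser.read(shortcut, encoding="utf-8-sig")
    except configparser.Error:
        continue  # skip malformed shortcut files
    url = parser.get("InternetShortcut", "URL", fallback=None)
    if not url:
        continue
    try:
        # allow_redirects=False so a redirection is reported as 301/302
        reply = requests.head(url, allow_redirects=False, timeout=10)
        print(shortcut, url, reply.status_code)
    except requests.RequestException as exc:
        print(shortcut, url, "unreachable:", type(exc).__name__)

Note that some servers reject HEAD requests, so a real tool might fall back to GET when HEAD returns 405.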

Related

Block .exe Download

Is it possible to prevent the download and execution of files with a .exe extension using 365? I've searched across Endpoint Manager, Defender for Endpoint and Defender for Cloud Apps but can't see an obvious way of doing this.
Most of my searches suggest using AppLocker but this would only solve half the problem (blocking execution of the file).
Is there any way using Microsoft 365 technology to block the download and execution of files based on their extension?
You can try using Edge's relevant group policy or registry setting to achieve this. I think this should help you; please refer to this policy document: Allow download restrictions.
The documentation shows that the danger level of .exe files is ALLOW_ON_USER_GESTURE, so you can set the group policy or registry value mentioned above to 2, which blocks potentially dangerous or unwanted downloads and dangerous file types.
The registry path is SOFTWARE\Policies\Microsoft\Edge\Recommended; if it doesn't exist, you can create it, add the value as REG_DWORD, and set it accordingly. The same can be done via group policy.
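As a rough sketch, setting that value with Python's built-in winreg module could look like this (run as Administrator; the value name DownloadRestrictions is taken from the linked policy document, and using the plain SOFTWARE\Policies\Microsoft\Edge key instead of ...\Recommended should make the policy mandatory rather than merely recommended):

import winreg

# Create (or open) the Edge policy key and set DownloadRestrictions to 2,
# which blocks potentially dangerous or unwanted downloads.
key_path = r"SOFTWARE\Policies\Microsoft\Edge\Recommended"
with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
    winreg.SetValueEx(key, "DownloadRestrictions", 0, winreg.REG_DWORD, 2)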

Where does the H2 database draw the most recent database connections from and add them to .h2.server.properties?

When you download and install the H2 database on Windows from the official page and start it either via the Windows Start menu or with the .bat file usually located in C:\Program Files (x86)\H2\bin\, the .h2.server.properties file is generated as well, as it is supposed to be, usually at C:\Users\user\.h2.server.properties.
But there seems to be another cache mechanism for the recent database connections listed in that file, because after deleting H2 and reinstalling it, I find connections in the auto-generated file that I do not remember ever using:
#H2 Server Properties
webSSL=false
webAllowOthers=false
webPort=8082
10=Generic DB2|com.ibm.db2.jcc.DB2Driver|jdbc\:db2\://localhost/test|
...
...
...
My question is whether there is another caching mechanism that the H2 application uses for those connections.
Thanks in advance.
If this file doesn't exist, H2 writes a hard-coded built-in list of default connection settings for various JDBC drivers, including its own. This list can't be configured, but it may differ between versions of H2; the connection settings for third-party drivers are updated from time to time.
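For anyone who wants to inspect what ends up on disk, here is a small Python sketch that lists the saved connection entries; the entry format (numeric keys holding name|driver|URL|user tuples with escaped colons) is inferred from the sample above:

from pathlib import Path

props = Path.home() / ".h2.server.properties"
for line in props.read_text().splitlines():
    key, _, value = line.partition("=")
    if not key.isdigit():
        continue  # webPort, webSSL etc. are not connection entries
    # e.g. 10=Generic DB2|com.ibm.db2.jcc.DB2Driver|jdbc\:db2\://localhost/test|
    name, driver, url, *user = value.replace("\\:", ":").split("|")
    print(f"{name}: driver={driver}, url={url}")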

Google Colab: Can we restore all the data even after the runtime disconnects?

I am a new learner and recently started using Google Colab. Whenever I close my Colab notebook and reopen it, all the code starts executing from the beginning. Is there any way to restore the local variables, code outputs, and all the previous program data?
It is really time-consuming to load the dataset every time.
Unfortunately no (as of the date this answer was posted): you cannot restore a previous runtime. Everything restarts in a new runtime session on a different virtual machine. Notebooks run by connecting to virtual machines that have maximum lifetimes of up to 12 hours, and Colab Pro is said to provide around 24 hours of runtime. This is necessary for Colab to be able to offer computational resources for free.
However, you can apply good practices to help you work faster. Some of them are:
Save your datasets and trained models on your Google Drive; mount it and use it as required. Only runtime-local variables and program data for that session are destroyed. (See the sketch after this list.)
Use pre-trained models to implement Transfer Learning to save training time.
Use "Connect to hosted runtime" and "Manage Sessions" to use the free resources effectively.
Sadly, it's just part of the workflow with Colab, but there are ways to make life easier. To persist data, you'd want to connect to Google Drive and pull/save files from there:
from google.colab import drive
drive.mount('/content/drive')
Then follow the instructions: click the link and copy/paste the auth token.
After connecting to Google Drive, copy files stored on the Drive using the !cp command. For example, these commands copy files stored on the Drive to the local notebook environment:
!cp "/content/drive/My Drive/Colab Notebooks/trainer.py" "trainer.py"
!cp "/content/drive/My Drive/Colab Notebooks/data.pkl" "data.pkl"
To copy files and folders from the notebook environment to the Drive, use the same !cp command:
!cp "model" "/content/drive/My Drive/Colab Notebooks/my-fancy-model"
Assuming you want to see previous outputs of the code: you can use File > Save and pin revision to save the revision history, including a revision name. That way it will store previous outputs as well as code changes. Going to File > Revision history will then show the difference between two versions; clicking the three dots on the right side shows options to restore a version, open it, or name it.

How do I make a simple public read-only WebDAV server with SabreDAV?

I recently began looking into WebDAV, as I found it to be an option for letting me play a Blu-ray folder remotely - i.e. without requiring the viewer to download the whole 24 GB ISO first.
Add a WebDAV source in Kodi v18 to a Blu-ray folder - and it actually plays! Very awesome.
The server can also be mounted on Windows with
net use m: http://example.com/webdavfolder/
or in Linux with
sudo mount -t davfs http://example.com/webdavfolder/ /mnt/mywebdav
-and should then (in theory) play with any software media player that supports Blu-ray Disc Java (BD-J), such as PowerDVD and VLC.
vlc bluray:///mnt/mywebdav --bluray-menu
PowerDVD.exe AUTOPLAY BD m:
(Unless of course the time-out values have been set too low, which seems to be the case for VLC at the moment.)
Anyway, all this is great, except I can't figure out how to make my WebDAV server read-only. Currently anyone can delete files as they wish, and that's of course not optimal.
So far I've only experimented with SabreDAV, because AFAIK that's the only option I have if I want to keep using my existing web host. I've been trying very minimal setups, because I've read that a minimal setup should default to a read-only solution. That just doesn't seem to happen.
I initially used the setup from http://sabre.io/dav/gettingstarted/ and tried removing some lines. I also tried calling chmod 0444 MainFolder -R on the web server, and I can see that everything does get a read-only attribute. But it changes nothing: it's still possible to delete whatever I want. :-(
What am I missing?
Maybe I'm using the wrong technology for what I want to do? Is there some other/better way of offering a Blu-ray folder for remote viewing? (One that includes the whole experience - i.e. full Java menus etc).
I should probably mention that all of this is of course perfectly legal. It is my own Blu-ray project - not copyright material.
Also: Difficult to decide if this belongs on StackOverflow or SuperUser. I ended up posting it on StackOverflow because SabreDAV is about coding, and because there's no sabredav tag on SuperUser.
You have two options:
Create your own file/directory classes for sabre/dav that simply throw an error when trying to delete. You can basically start with a copy of Sabre\DAV\FS\Directory and Sabre\DAV\FS\File and change the methods that do writing.
Since you're considering just using Linux file permissions, the key thing you are missing is that 'deleting' is not controlled by permissions on the file or directory you're trying to delete. To delete a file or directory on unix, all you need is write permission on the parent directory. However, I wouldn't recommend going this route, as it will just cause a weird error in sabre/dav, which might leave clients in a confused state: it results in a 500 error rather than the expected 403.
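If you want to convince yourself of that second point, here is a quick stand-alone Python demonstration (run as a regular user on Linux/macOS; root bypasses permission checks):

import os
import tempfile

parent = tempfile.mkdtemp()
victim = os.path.join(parent, "movie.m2ts")
open(victim, "w").close()

os.chmod(victim, 0o444)  # the file itself is read-only, like chmod 0444
os.chmod(parent, 0o555)  # but deletion is governed by the parent directory
try:
    os.remove(victim)
except PermissionError as exc:
    print("blocked by the parent directory:", exc)

os.chmod(parent, 0o755)  # restore write permission on the parent
os.remove(victim)        # now the read-only file deletes just fine
os.rmdir(parent)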

Rabbitmq erlang client build failed due to file paths problems?

I have been able to build the RabbitMQ server on Ubuntu Linux. It came prepackaged, and on making it, it is able to start as a service. When I got the client source, however, I failed to make it, because it appeared to need a folder called ./deps/rabbitmq-server. Analysing the code, I found that the author of the client accesses the same header files found in the server, using include_lib("path to rabbit.hrl etc.") in a header file called "amqp_client.hrl". I then decided to add rabbitmq_server to the lib dir of Erlang so that its paths are automatically added on start-up of the VM, but this still did not help. There is also another folder the client references, called "rabbit_common", whose include folder it assumes contains all the .hrl files. Please assist me in building both the client and the server on my Ubuntu server for testing.
Also, if anyone has used the RabbitMQ server for IMs, please provide some benchmarks and/or your findings on its throughput, speed, and number of users. How does it compare to ejabberd? And how can one create AJAX/jQuery/JavaScript clients for web functionality?
thanks
I hope you have made some progress as far as RabbitMQ and ejabberd are concerned.
Below is a link to an interesting discussion that might be of help.
http://old.nabble.com/AMPQ-vs-XMPP-and-RabbitMQ-vs-ejabberd-td17587109.html
