Mapped drive letters immediately available? - powershell-2.0

I am using this to map a drive letter in PowerShell v2, and it works in that the drive letter shows up and is usable in Explorer.
$Network = New-Object -ComObject "Wscript.Network"
$Network.MapNetworkDrive($drive.name, $drive.value, $true)
However, if I then try to use that drive letter to do anything in PowerShell, say to create a folder, I get a DriveNotFoundException. But as I said, the drive is there and usable manually. I thought I might need to wait a bit, or refresh Explorer, or both, but neither seems to have any effect. However, if I rerun the script, which checks in advance whether the drive is there and only creates it if not, it sees the drive, skips the mapping, and the following task works fine. It's as if the drive letter is session based?
I also tried adding
New-PSDrive -name:($drive.name -replace ':', '') -psProvider:FileSystem -root:$drive.value -scope:Global
as well, in the hopes that this would provide a session based drive, but no good.
An additional wrinkle is that the script has to be Run as Administrator, but again, if I do it as two different scripts, one to create the drive and one to use it, it works when both are Run As Administrator. It's only when both tasks are done in the one script that it fails.
One last point, I know PS 3 has a better way to handle mapped drives, but due to things beyond my control I am limited to PS v2.

I don't think what you want is possible in PowerShell v2. To have New-PSDrive create a Windows drive you need to add the parameter -Persist, which isn't available in PowerShell v2.
From the documentation:
-Persist
Creates a Windows mapped network drive. Mapped network drives are saved in Windows on the local computer. They are persistent, not session-specific, and can be viewed and managed in File Explorer and other tools.
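For reference, on PowerShell 3.0 or newer the same mapping could be created natively, e.g. with the question's $drive object:
# Requires PowerShell 3.0+; -Persist makes this a real Windows mapping, visible in Explorer.
New-PSDrive -Name ($drive.name -replace ':', '') -PSProvider FileSystem -Root $drive.value -Persist -Scope Global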
I'd say your best option (aside from upgrading to PowerShell v3 or newer) is to use the net command:
& net use $drive.name $drive.value /persistent:yes
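If you want the script to fail fast when the mapping doesn't take, a minimal sketch (reusing the question's $drive object; the folder name is just an example):
& net use $drive.name $drive.value /persistent:yes
if ($LASTEXITCODE -ne 0) { throw "net use failed with exit code $LASTEXITCODE" }
# The mapping should now be usable within the same session:
New-Item -Path "$($drive.name)\NewFolder" -ItemType Directory | Out-Null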

Related

Permission error for shutils moving png to path

I am making a Python GUI with PySimpleGUI for my old QR code generator script, so I'm using shutil to let the user download the file.
I am using the 'Default' user because I want it to save to my user's path, not mine. Do you know some other way I can do that? I think this is the reason it's not working.
I tried making it so the user inputs their username, such as 'My laptop', so it gets added to the path:
import shutil

src_path = r"D:\Python\QRcode generator\output.png"
dst_path = r"C:\Users\Default\Pictures"
shutil.move(src_path, dst_path)
The code is correct. It won't give any error if you don't use the C: drive (where the operating system is installed).
This is mostly because the C: drive is protected, for Windows stability.
If you are using a code editor (PyCharm, VS Code, etc.) or running the code in the Windows command prompt or any other terminal, try running it with administrator rights.
It should work.
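If the goal is to save into the profile of whoever is currently logged in (rather than the Default profile), a sketch along these lines may be closer to what you want; the source path is taken from the question:
import shutil
from pathlib import Path

src_path = r"D:\Python\QRcode generator\output.png"
# Resolve the current user's Pictures folder instead of hard-coding a username.
dst_path = Path.home() / "Pictures" / "output.png"
shutil.move(src_path, str(dst_path))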

How to make container installation behave like host machine installation

I'm working with the following:
Docker for Windows v20.10.11
Docker running in Windows container mode
mcr.microsoft.com/windows:1903 base image
Proprietary application installed on top of this base image
Each year we create a Docker image with the latest version of our company's software. However this year's version behaves differently. Host machine installation runs fine. Containerized installation fails to run in certain situations. I can start the application as a simple EXE, for example using the Docker run command. The app will start and show up in "tasklist". However I can't start the app via the COM API, which is a critical requirement. The problem appears to be COM related. Normally we can create COM objects for our software just like for any other application. For example, IE returns a COM object just fine:
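(A sketch of that call; InternetExplorer.Application is IE's standard ProgID:)
# Creating IE's COM object succeeds, so COM activation itself works.
$ie = New-Object -ComObject "InternetExplorer.Application"
$ie.Quit()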
Creating these objects for our application works outside containers. However inside the container, our latest installation gives this error:
Access permissions appear to be ok. I tried a couple tests to prove this. First I can install other software like MS Word into a container and create COM objects for that:
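(A sketch of that test; Word.Application is the standard ProgID:)
# Word's COM object can be created inside the container, unlike our own application's.
$word = New-Object -ComObject "Word.Application"
$word.Quit()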
Second I tried retrieving + modifying the application's DACL in PowerShell.
Changing access masks or trustees can cause an Access Denied error:
This also appears to confirm the access permissions were Ok by default.
Next I made sure COM is aware of the application. This appears to be fine. I get the same result on host machine and container when running this PS script:
gci HKLM:\Software\Classes -ea 0 | ? {$_.PSChildName -match '^\w+\.\w+$' -and
(gp "$($_.PSPath)\CLSID" -ea 0)} | ft PSChildName
The application shows up just like any other. The details show up fine when querying by AppID. LocalServer32 points to the correct EXE:
Some other things I tried:
Querying registry keys. There are 7 keys created when installing our software. These appear identical on host machine install and container install.
Even though permissions appear fine, I still tried logging into the container as alternate users. For example "nt authority\system" is another virtual admin user. I also changed the password of the "builtin\administrator" user to enable logging in with that one. Lastly tried creating new users entirely and adding them to the Administrators user group. All these attempts had the same errors as "builtin\containeradministrator" (default user).
A minor check was ensuring CMD.exe / Powershell is running as x64:
Re-registering the DLLs associated with the installation using regsvr32.
Starting from different base images. https://learn.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/container-base-images. The full Win Server base image behaves exactly the same way regarding errors. The smaller Win Server Core base image is even more problematic, as I can't even start the app's EXE manually using that base. Lastly I tried other tags of the full Windows base image such as 20H2 and 2004. Same result from those. Multiarch or x64 makes no difference.
Included the "Ogawa hack" which was historically needed to make MS Office apps function correctly with COM: https://stackoverflow.com/a/1680214/7991646. It could be necessary for other COM apps too, but didn't help with my specific installation.
Is there anything else I can do to diagnose or solve this COM issue?
There are several things to consider:
The Considerations for server-side Automation of Office article states the following:
Microsoft does not currently recommend, and does not support, Automation of Microsoft Office applications from any unattended, non-interactive client application or component (including ASP, ASP.NET, DCOM, and NT Services), because Office may exhibit unstable behavior and/or deadlock when Office is run in this environment.
If you are building a solution that runs in a server-side context, you should try to use components that have been made safe for unattended execution. Or, you should try to find alternatives that allow at least part of the code to run client-side. If you use an Office application from a server-side solution, the application will lack many of the necessary capabilities to run successfully. Additionally, you will be taking risks with the stability of your overall solution.
The When CoCreateInstance returns 0x80080005 (CO_E_SERVER_EXEC_FAILURE) page describes possible reasons.
If many COM+ applications run under different user accounts that are specified in the This User property, the computer cannot allocate memory to create a new desktop heap for the new user. Therefore, the process cannot start. See Error when you start many COM+ applications: Error code 80080005 -- server execution failed for more information.
Finally, you may find a similar thread here helpful, see Server execution failed (Exception from HRESULT: 0x80080005 (CO_E_SERVER_EXEC_FAILURE)).

Google Colab: Can we restore all the data even after the runtime disconnects?

I am a new learner and recently started using Google Colab. Whenever I close my Colab notebook and reopen it, all the code starts executing from the beginning. Is there any way to restore the local variables, code outputs, and all the other program data from the previous session?
It is really time-consuming to load the dataset every time.
Unfortunately, no (as of this answer's posting date): you cannot restore the previous runtime. Everything restarts in a new runtime session on a different virtual machine. Notebooks run by connecting to virtual machines that have maximum lifetimes of up to 12 hours, and Colab Pro says it provides around 24 hours of runtime. This is necessary for Colab to be able to offer computational resources for free.
However you can apply good practices to help you work faster. Some of them are:
Save your datasets and trained models on your Google Drive; mount it and use it as required. Only runtime-local variables and program data for that session are destroyed.
Use pre-trained models to implement Transfer Learning to save training time.
Use "Connect to hosted runtime" and "Manage Sessions" to use the free resources effectively.
Sadly, it's just part of the workflow with Colab, but there are ways to make life easier. To persist data, you'll want to connect to Google Drive and pull/save files from there:
from google.colab import drive
drive.mount('/content/drive')
Then follow the instructions: click the link, then copy/paste the auth token.
After connecting to Google Drive, copy files stored on the Drive using the !cp command. For example, these commands copy files stored on the Drive to the local notebook environment:
!cp "/content/drive/My Drive/Colab Notebooks/trainer.py" "trainer.py"
!cp "/content/drive/My Drive/Colab Notebooks/data.pkl" "data.pkl"
To copy files and folders from the notebook environment to the Drive, use the same !cp command:
!cp "model" "/content/drive/My Drive/Colab Notebooks/my-fancy-model"
Assuming you want to see previous outputs of the code: you can use File > Save and pin revision to save a revision, including a revision name, to the revision history. That way it will store previous outputs along with the code changes. Going to File > Revision history will then show the difference between two versions, and clicking the three-dot menu on the right side gives options to restore a version, open it, or name it.

How do I make a simple public read-only WebDAV server with SabreDAV?

I recently began looking into WebDAV, as I found it to be an option for letting me play a Blu-ray folder remotely - i.e. without requiring the viewer to download the whole 24 GB ISO first.
I added a WebDAV source in Kodi v18 pointing to a Blu-ray folder - and it actually plays! Very awesome.
The server can also be mounted on Windows with
net use m: http://example.com/webdavfolder/
or in Linux with
sudo mount -t davfs http://example.com/webdavfolder/ /mnt/mywebdav
- and should then (in theory) play with any software media player that supports Blu-ray Disc Java (BD-J), such as PowerDVD and VLC:
vlc bluray:///mnt/mywebdav --bluray-menu
PowerDVD.exe AUTOPLAY BD m:
(Unless of course the time-out values have been set too low, which seems to be the case for VLC at the moment.)
Anyway, all this is great, except I can't figure out how to make my WebDAV server read-only. Currently anyone can delete files as they wish, and that's of course not optimal.
So far I've only experimented with SabreDAV, because AFAIK that's the only option I have if I want to keep using my existing webhost. I've been trying very minimal setups, because I've read that a minimal setup should default to a read-only solution. That just doesn't seem to happen.
I initially used the setup from http://sabre.io/dav/gettingstarted/ and tried removing some lines. I also tried calling chmod -R 0444 MainFolder on the webserver, and I can see that everything does get a read-only attribute. But it changes nothing: it's still possible to delete whatever I want. :-(
What am I missing?
Maybe I'm using the wrong technology for what I want to do? Is there some other/better way of offering a Blu-ray folder for remote viewing? (One that includes the whole experience - i.e. full Java menus etc).
I should probably mention that all of this is of course perfectly legal. It is my own Blu-ray project - not copyright material.
Also: Difficult to decide if this belongs on StackOverflow or SuperUser. I ended up posting it on StackOverflow because SabreDAV is about coding, and because there's no sabredav tag on SuperUser.
You have two options:
Create your own file/directory classes for sabre/dav that simply throw an error on any attempt to write or delete. You can basically start with a copy of Sabre\DAV\FS\Directory and Sabre\DAV\FS\File and change the methods that do the writing (see the sketch after these options).
Since you're considering just using Linux file permissions, the key thing you're missing is that 'deleting' is not controlled by permissions on the file or directory you're trying to delete. To delete a file or directory in unix, all you need is write permission on the parent directory. However, I wouldn't recommend going this route, as it will just cause a weird error in sabre/dav, which might leave clients in a confused state: it would result in a 500 error, not the expected 403.
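For option 1, a minimal sketch of what such read-only classes could look like (untested, and assuming the stock Sabre\DAV\FS classes as the base):
use Sabre\DAV;

// A directory node that refuses every write operation.
class ReadOnlyDirectory extends DAV\FS\Directory {
    function createFile($name, $data = null) { throw new DAV\Exception\Forbidden('Read-only share'); }
    function createDirectory($name) { throw new DAV\Exception\Forbidden('Read-only share'); }
    function delete() { throw new DAV\Exception\Forbidden('Read-only share'); }
    function setName($name) { throw new DAV\Exception\Forbidden('Read-only share'); }
    // Hand out read-only children instead of the stock writable classes.
    function getChild($name) {
        $path = $this->path . '/' . $name;
        if (!file_exists($path)) throw new DAV\Exception\NotFound('File not found');
        return is_dir($path) ? new ReadOnlyDirectory($path) : new ReadOnlyFile($path);
    }
}

// A file node that refuses writes, deletes, and renames.
class ReadOnlyFile extends DAV\FS\File {
    function put($data) { throw new DAV\Exception\Forbidden('Read-only share'); }
    function delete() { throw new DAV\Exception\Forbidden('Read-only share'); }
    function setName($name) { throw new DAV\Exception\Forbidden('Read-only share'); }
}
The server from the getting-started example would then be constructed with new DAV\Server(new ReadOnlyDirectory('MainFolder')) as its root.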

Network Service account does not accept local paths

I am creating a program that runs as a service and creates database backups (using pg_dump.exe) at certain points during the day. This program needs to be able to write the backup files to local drives AND mapped network drives.
At first, I was unable to write to network drives, but solved the problem by having the service log on as an administrator account. However, my boss wants the program to run without users having to key in a username and password for the account.
I tried to get around this by using the Network Service account (which does not need a password and always has the same name). Now my program will write to network drives, but not local drives! I tried using the regular C:\<directory name>\ path syntax as well as \\<computer name>\C$\<directory name>\ syntax and also \\<ip address>\C$\<directory name>\, none of which work.
Is there any way to get the Network Service account to access local drives?
Just give the account permission to access those files/directories, and it should work. For accessing local files, you need to tweak the ACLs on the files and directories. For access via a network share, you have to change the file ACLs as well as the permissions on the network share.
File ACLs can be modified in the Explorer UI, or from the command line using the standard icacls.exe. For example, this command line gives the directory and all files underneath it Read, Write, and Delete permissions for Network Service:
icacls c:\MyDirectory /T /grant "NT AUTHORITY\Network Service":(R,W,D)
File share permissions are easier to modify from the UI, using the fsmgmt.msc tool.
You will need to figure out the minimal set of permissions that needs to be applied. If you don't worry about security at all, you can grant full permissions, but that is almost always overkill and opens you up further if the service is ever compromised.
I worked around this problem by creating a new user at install time which I add to the Administrators group. This allows the service to write to local and network drives, without ever needing password/username info during the setup.
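For illustration, the install-time steps could look roughly like this (the account, password, and service names are placeholders; a real installer would generate the password silently):
:: Create a dedicated service account and make it an administrator.
net user BackupSvc Str0ng!GeneratedPw /add /expires:never
net localgroup Administrators BackupSvc /add
:: Point the service at the new account (the spaces after obj= and password= are required).
sc config MyBackupService obj= ".\BackupSvc" password= Str0ng!GeneratedPw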
