Electron does not run on a shared folder

C:\share is a shared folder.
C:\share\electron-v13.0.1-win32-x64, \\192.168.1.10\share\electron-v13.0.1-win32-x64 and Z:\electron-v13.0.1-win32-x64 are all the same folder.
The Electron app launches correctly when I run C:\share\electron-v13.0.1-win32-x64\electron.exe.
However, it does not launch correctly when I run Z:\electron-v13.0.1-win32-x64\electron.exe.
According to Task Manager the Electron processes are running, but no window is shown.
Can Electron run correctly from a shared folder?

It should be safer to run it locally (from C:\share). Mapped drives behave very differently from the local filesystem, and their implementations can differ in their settings as well:
https://wiki.samba.org/index.php/Time_Synchronisation
https://www.truenas.com/community/threads/issue-with-modified-timestamps-on-windows-file-copy.82649/
https://help.2brightsparks.com/support/solutions/articles/43000335953-the-last-modification-date-and-time-are-wrong
If I understand correctly, you are just mapping back your own shared folder. Overall, Windows server configurations have felt more consistent to me, but the protocol has changed over time as well:
https://en.wikipedia.org/wiki/Server_Message_Block
I do not understand network sharing protocols well enough to give you an exact answer to why you have this problem, but I know enough to tell you that mounted shared folders are not like your own local filesystem. In many cases the differences do not matter and the user experience is great, but in some cases these minute differences break things in mysterious ways, even though shares are mapped/mounted almost like a regular local drive. This problem is not exclusive to Electron.
That is a problem with a lot of things over SMB (mainly binaries/tools): the shared folder might be backed by a different filesystem, with different permissions and privileges (or a completely different permission structure underneath, if it is a different filesystem). Remote folders might have issues with inotify not receiving events on file updates, or might miss a changed file (touch on Linux is meant to update a file's date), so on a shared folder date updates might be delayed or rounded. I think at one point even Makefiles misbehaved, because make depends on file timestamps working the way they do locally.
Another problem with tools is shareability: can the tool handle multiple instances running from the same location? Does it save something into ./tmp or some other file that could conflict with another user running it at the same time?
Overall I tend to use shares for data (and have had a few issues even there), but I only run applications from a share if they are known not to cause trouble.
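If you do go the local route, the workaround can be as simple as mirroring the share to the local disk before launching. A minimal PowerShell sketch, assuming a per-user cache under %LOCALAPPDATA% (that location is my assumption, not something from your setup):
$src = '\\192.168.1.10\share\electron-v13.0.1-win32-x64'
$dst = Join-Path $env:LOCALAPPDATA 'electron-v13.0.1-win32-x64'
robocopy $src $dst /MIR | Out-Null               # mirror the share into the local cache
Start-Process (Join-Path $dst 'electron.exe')    # then launch from the local copy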

Related

How to make container installation behave like host machine installation

I'm working with the following:
Docker for Windows v20.10.11
Docker running in Windows container mode
mcr.microsoft.com/windows:1903 base image
Proprietary application installed on top of this base image
Each year we create a Docker image with the latest version of our company's software. However, this year's version behaves differently: the host machine installation runs fine, but the containerized installation fails to run in certain situations.
I can start the application as a simple EXE, for example using the docker run command. The app starts and shows up in "tasklist". However, I can't start the app via the COM API, which is a critical requirement. The problem appears to be COM-related. Normally we can create COM objects for our software just like for any other application. For example, IE returns a COM object just fine:
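In PowerShell, presumably something like:
New-Object -ComObject InternetExplorer.Application    # returns a COM object for Internet Explorer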
Creating these objects for our application works outside containers. However inside the container, our latest installation gives this error:
Access permissions appear to be OK. I tried a couple of tests to prove this. First, I can install other software like MS Word into a container and create COM objects for that:
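Presumably along the lines of:
New-Object -ComObject Word.Application    # succeeds for a Word install inside the container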
Second, I tried retrieving and modifying the application's DACL in PowerShell.
Changing access masks or trustees can cause an Access Denied error:
This also appears to confirm that the access permissions were OK by default.
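A sketch of that test (the AppID GUID below is a hypothetical placeholder, since the real one is not shown here):
$key = 'HKLM:\SOFTWARE\Classes\AppID\{00000000-0000-0000-0000-000000000000}'
$acl = Get-Acl $key
$acl.Access | Format-Table IdentityReference, RegistryRights, AccessControlType
# adding or changing a rule and writing it back is what produced Access Denied:
$rule = New-Object System.Security.AccessControl.RegistryAccessRule('Everyone', 'ReadKey', 'Allow')
$acl.AddAccessRule($rule)
Set-Acl -Path $key -AclObject $acl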
Next I made sure COM is aware of the application. This appears to be fine: I get the same result on the host machine and in the container when running this PS script:
gci HKLM:\Software\Classes -ea 0 | ? { $_.PSChildName -match '^\w+\.\w+$' -and
  (gp "$($_.PSPath)\CLSID" -ea 0) } | ft PSChildName
The application shows up just like any other. The details show up fine when querying by AppID. LocalServer32 points to the correct EXE:
Some other things I tried:
Querying registry keys. There are 7 keys created when installing our software. These appear identical on host machine install and container install.
Even though permissions appear fine, I still tried logging into the container as alternate users. For example, "nt authority\system" is another virtual admin user. I also changed the password of the "builtin\administrator" user to enable logging in with that one. Lastly, I tried creating new users entirely and adding them to the Administrators group. All these attempts produced the same errors as "builtin\containeradministrator" (the default user).
A minor check was ensuring CMD.exe / Powershell is running as x64:
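Presumably something like:
[Environment]::Is64BitProcess    # returns True in a 64-bit shell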
Re-registering the DLLs associated with the installation using regsvr32.
Starting from different base images (https://learn.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/container-base-images). The full Windows Server base image behaves exactly the same way regarding errors. The smaller Windows Server Core base image is even more problematic, as I can't even start the app's EXE manually on that base. Lastly, I tried other tags of the full Windows base image, such as 20H2 and 2004, with the same result. Multiarch or x64 makes no difference.
Included the "Ogawa hack" which was historically needed to make MS Office apps function correctly with COM: https://stackoverflow.com/a/1680214/7991646. It could be necessary for other COM apps too, but didn't help with my specific installation.
Is there anything else I can do to diagnose or solve this COM issue?
There are several things to consider:
The Considerations for server-side Automation of Office article states the following:
Microsoft does not currently recommend, and does not support, Automation of Microsoft Office applications from any unattended, non-interactive client application or component (including ASP, ASP.NET, DCOM, and NT Services), because Office may exhibit unstable behavior and/or deadlock when Office is run in this environment.
If you are building a solution that runs in a server-side context, you should try to use components that have been made safe for unattended execution. Or, you should try to find alternatives that allow at least part of the code to run client-side. If you use an Office application from a server-side solution, the application will lack many of the necessary capabilities to run successfully. Additionally, you will be taking risks with the stability of your overall solution.
The When CoCreateInstance returns 0x80080005 (CO_E_SERVER_EXEC_FAILURE) page describes possible reasons.
If many COM+ applications run under different user accounts that are specified in the This User property, the computer cannot allocate memory to create a new desktop heap for the new user. Therefore, the process cannot start. See Error when you start many COM+ applications: Error code 80080005 -- server execution failed for more information.
Finally, you may find a similar thread here helpful, see Server execution failed (Exception from HRESULT: 0x80080005 (CO_E_SERVER_EXEC_FAILURE)).

How do I make a simple public read-only WebDAV server with SabreDAV?

I recently began looking into WebDAV, as I found it to be an option for letting me play a Blu-ray folder remotely - i.e. without requiring the viewer to download the whole 24 GB ISO first.
Add a WebDAV source in Kodi v18 to a Blu-ray folder - and it actually plays! Very awesome.
The server can also be mounted on Windows with
net use m: http://example.com/webdavfolder/
or in Linux with
sudo mount -t davfs http://example.com/webdavfolder/ /mnt/mywebdav
and should then (in theory) play with any software media player that supports Blu-ray Disc Java (BD-J), such as PowerDVD and VLC.
vlc bluray:///mnt/mywebdav --bluray-menu
PowerDVD.exe AUTOPLAY BD m:
(Unless of course the time-out values have been set too low, which seems to be the case for VLC at the moment.)
Anyway, all this is great, except I can't figure out how to make my WebDAV server read-only. Currently anyone can delete files as they wish, and that's of course not optimal.
So far I've only experimented with SabreDAV, because afaik that's the only option I have if I want to keep using my existing webhost. I've been trying very minimal setups, because I've read that minimal setups should default to a read-only solution. It just doesn't seem to happen.
I initially used the setup from http://sabre.io/dav/gettingstarted/ and tried removing some lines. I also tried calling chmod 0444 MainFolder -R on the webserver, and I can see that everything does get a read-only attribute. But it changes nothing: it's still possible to delete whatever I want. :-(
What am I missing?
Maybe I'm using the wrong technology for what I want to do? Is there some other/better way of offering a Blu-ray folder for remote viewing? (One that includes the whole experience - i.e. full Java menus etc).
I should probably mention that all of this is of course perfectly legal. It is my own Blu-ray project - not copyright material.
Also: Difficult to decide if this belongs on StackOverflow or SuperUser. I ended up posting it on StackOverflow because SabreDAV is about coding, and because there's no sabredav tag on SuperUser.
You have two options:
Create your own file/directory classes for sabre/dav that simply throw an error on any attempt to write or delete. You can basically start with a copy of Sabre\DAV\FS\Directory and Sabre\DAV\FS\File and change the methods that do writing (see the sketch after this list).
Since you're considering just using Linux file permissions, the key thing you are missing is that 'deleting' is not controlled by the file or directory you're trying to delete: to delete a file or directory on unix, all you need is write permission on the parent directory. However, I wouldn't recommend going this route, as it will just cause a weird error in sabre/dav, which might leave clients in a confused state. It would result in a 500 error, not the expected 403 error.
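Here is a minimal sketch of the first option, assuming sabre/dav is installed via Composer; the class and method names below match Sabre\DAV\FS\Directory and Sabre\DAV\FS\File, but verify them against your installed version:
<?php
// read-only-server.php - every writing method refuses, which maps to a 403
require 'vendor/autoload.php';

use Sabre\DAV;

class ReadOnlyDirectory extends DAV\FS\Directory {
    function createFile($name, $data = null) { throw new DAV\Exception\Forbidden('Read-only'); }
    function createDirectory($name)          { throw new DAV\Exception\Forbidden('Read-only'); }
    function delete()                        { throw new DAV\Exception\Forbidden('Read-only'); }
    function setName($name)                  { throw new DAV\Exception\Forbidden('Read-only'); }

    // re-wrap children so files and subdirectories are read-only too
    function getChild($name) {
        $path = $this->path . '/' . $name;
        if (!file_exists($path)) throw new DAV\Exception\NotFound('Not found');
        return is_dir($path) ? new self($path) : new ReadOnlyFile($path);
    }
}

class ReadOnlyFile extends DAV\FS\File {
    function put($data)     { throw new DAV\Exception\Forbidden('Read-only'); }
    function delete()       { throw new DAV\Exception\Forbidden('Read-only'); }
    function setName($name) { throw new DAV\Exception\Forbidden('Read-only'); }
}

$server = new DAV\Server(new ReadOnlyDirectory('MainFolder'));
$server->setBaseUri('/webdavfolder/');
$server->exec();
With this, GET and PROPFIND keep working while DELETE, PUT, MKCOL and MOVE all come back as 403 Forbidden, which is what WebDAV clients expect.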

Updating EXE file from server…

I need to update my application from a central server.
The application always checks whether it is the correct version, against the server installation.
When it is not, I need it to update itself.
So how can I copy the EXE while it is running? What solutions do I have?
I rename the currently running exe to MyTempExe.exe, copy the new exe to the correct location (requesting elevated privileges if necessary) and then run a separate app to restart the main app. On startup I check for MyTempExe.exe and delete it if it's there.
The reason I use a separate app for the restart is that I don't have a set time frame for the app to close down; I need to wait for it to finish whatever it's doing. On shutdown it writes information to disk about its current state, which the updated app uses to resume where the old one left off.
I don't know if it's the best solution but it's the one I use.
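In case it helps, a minimal PowerShell sketch of that swap (my own implementation is not PowerShell, and all paths and names here are hypothetical):
$exe = 'C:\MyApp\MyApp.exe'
Move-Item $exe 'C:\MyApp\MyTempExe.exe'       # a running EXE cannot be deleted, but it can be renamed
Copy-Item '\\server\share\MyApp.exe' $exe     # put the new version in the correct location
Start-Process 'C:\MyApp\Restarter.exe'        # the separate app waits for MyApp to exit, then restarts it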
As you can see by all the answers there is no set way to do this, so I thought I would add the way we have successfully done this.
We never run an application directly from the network.
We run the application from the local machine and have it copy from the network on startup.
We do this using an application launcher. It downloads an XML file that contains CRC and Version Resource Values for the application files. The XML File is created during the deployment process, in a FinalBuilder Script.
The application then compares the XML file to the local content and copies down the needed files. Finally, we launch the application in question. This has worked well for deploying an application that serves around 300 local users. Recently we switched from a file copy to an HTTP download, as we found problems with remote users disconnecting drives.
We still build installations (with Inno Setup) to get the basic required files deployed.
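For illustration, the launcher's compare-and-copy step could be sketched like this in PowerShell; the manifest schema and URLs are hypothetical, and Get-FileHash (SHA-256) stands in for the CRC values our real launcher uses:
$base = 'http://server/app'                    # hypothetical download location
$dir  = 'C:\Apps\MyApp'                        # hypothetical local install folder
[xml]$manifest = (Invoke-WebRequest "$base/manifest.xml" -UseBasicParsing).Content
foreach ($f in $manifest.files.file) {
    $local = Join-Path $dir $f.name
    # copy a file down only when it is missing or its hash differs
    if (-not (Test-Path $local) -or (Get-FileHash $local).Hash -ne $f.hash) {
        Invoke-WebRequest "$base/$($f.name)" -OutFile $local -UseBasicParsing
    }
}
Start-Process (Join-Path $dir 'MyApp.exe')     # finally launch the application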
Package your app with an installer such as Inno. Download and execute the installer. Have the installer search for and kill your app, or instruct the user to close it. The setup will replace your .exe, and if the app can't be killed or the user is uncooperative, it'll issue a restart notice.
Download the new EXE to TEMP
Create a batch file from the EXE, with this content (all values in %...% are placeholders):
taskkill /PID %process id of running EXE%
copy %new EXE% %running EXE%
%EXE%
Execute the batch from the running EXE
Delete the batch
I use TMS TWebUpdate myself for software updates. The advantage is that there are a bunch of extra actions you can put into the script, if you need anything other than plain EXE updates.
I have two components at work: the application executable itself, and a web service (SOAP) which provides version details and file downloads.
The application calls a method on the SOAP service to ask for the number of files in the project (the project is usually identified using Application.ExeName).
The SOAP service gets its info from an INI file, which has entries like:
[ProjectName]
NumberOfFiles=2
File1=myapp.exe;1.0.0.1
File2=mydll.dll;1.0.0.2
You just update this file at the same time as uploading your new files.
The process of updating the application is this:
Get number of files available on the web service
For each file, the application asks for the name and version number from the SOAP server.
The application compares this information to its own version info and decides if the file needs updating, building a local list of files that need updating.
For each file that needs updating the application downloads the file to filename.ext.new
Finally, the application renames all filename.ext to filename.ext.old and renames filename.ext.new to filename.ext and then restarts itself. (No real need for an external app to restart your own program).
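As a rough illustration of steps 2 to 4 (PowerShell; the file name follows the INI example above, and the SOAP call is stubbed out with a literal):
$serverVersion = [version]'1.0.0.2'            # version reported by the SOAP service for mydll.dll
$localFile     = 'C:\MyApp\mydll.dll'          # hypothetical install path
# FileVersion is usually a clean dotted string; adjust if yours carries extra text
$localVersion  = [version](Get-Item $localFile).VersionInfo.FileVersion
if ($serverVersion -gt $localVersion) {
    # download to mydll.dll.new; it is renamed into place on restart
}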
Note 1, that you may have to ask for elevation to replace files, depending on where you install your files.
Note 2: be kind to your users, think carefully before you force updates on users.
Note 3: You cannot delete a running exe, but you can rename it and then restart the new version.
Edit:
For some reference data files which cannot contain version information resources, you can have entries like File99=MyDataFile;1.1.2011 - the three-element version number indicates to the client that it should check against the file date stamp instead.
You could have a separate update executable whose task is to check the server version, download an updated executable if necessary, and then run the local executable.
Or you could have one executable running in two different modes:
1. On startup, check for an update; if there is one, download the executable to a download directory, run it and quit.
2. The new executable checks whether it is running from the installation directory; if not, it copies itself there, overwriting the old version, starts the copy from there, and quits.
My way is the other way round: if a new version is online, prompt the user to update. If he wants to (or is forced to...), I end the app and start a new exe (the updater). This updater downloads the update and replaces the old exe (which is no longer running), then it starts the new exe. Done. (You can of course replace other files too.) BUT: using an installer like Inno Setup gives you more possibilities and doesn't get mixed up with the regular uninstaller, so it is really better...
You can do this without running another application. Push the updates from the server to the client while it is running, storing them in a temporary directory on the client. When you want to upgrade, move all the running files to another temporary directory, move the new files into the original application directory, and just restart the application using the standard executable name on shutdown.
I upgrade client applications running on unattended machines automatically this way.

Different paths on different computers

I use three computers regularly and a fourth one occasionally. I have used Dropbox to synchronize my .ahk script to all computers. However, the path names differ between the computers. For instance, at home there is C:\Users\Farrel\Documents\SyRRuP, whereas at work it is something such as C:\Users\fbuchins\Documents\SyRRuP, and on a Windows XP computer it is something else. Consequently, a particular sequence of code that runs a particular file only works on one computer and bombs out on the others. What is the most elegant way to overcome the problem?
I'm not sure about Dropbox, but I have used Windows environment variables for things like this before. Set something like PROGPATH=C:\thispath\ on each machine, then read the PROGPATH variable from the app or script.
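For example (AutoHotkey v1; the variable name PROGPATH and the file name are just examples):
EnvGet, ProgPath, PROGPATH                    ; read the per-machine environment variable
if (ProgPath = "")                            ; fall back to the user's Documents folder
    ProgPath := A_MyDocuments . "\SyRRuP"
Run, %ProgPath%\somefile.exe
Since in your case only the user-profile part of the path differs, the built-in A_MyDocuments variable alone may already be enough, with no per-machine setup at all.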

Unable to understand the basic PATHs at root

I am trying to put my Mac's data in order.
I have many rc-files at my root, such as .vimrc, .screenrc and .bashrc.
I would like to move these files into specific folders: .vimrc and .screenrc to ~/bin/coding, and .bashrc to ~/bin/shells.
How can you determine where these rc-files must be?
Seriously, you should leave them where they are. Applications will be looking for them in specific locations (probably your $HOME directory, which is not root, by the way, or shouldn't be). This is a very old UNIX convention that you should attempt to change only if you fully understand the consequences.
Not meaning to sound condescending, but your error in naming your home directory as your root directory seems to indicate that your knowledge of how it all works is less than it should be to understand those consequences (apologies if that offends you; I agonized over the best way to say it - what I mean is that you should tread carefully).
If you move them, you will have to ensure you run the applications that use them with their paths fully specified, and some applications may not let you do that.
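For example, a few common tools do accept an alternate config path (these flags are the standard ones for vim, screen and bash; many other programs have no equivalent):
vim -u ~/bin/coding/.vimrc             # vim: use a non-default vimrc
screen -c ~/bin/coding/.screenrc       # screen: use an alternate config file
bash --rcfile ~/bin/shells/.bashrc     # bash: alternate rc file for interactive shells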
They all start with "." so that they're hidden from the normal ls command and, if you're using a graphical file browser, there should be a way to hide them there as well (such as Ctrl-H in the GNOME file manager).
A program's configuration is defined both at system level and at user level; you can tweak the user-level one, which resides in your home directory, to get what you need.
There is no need to group them in subfolders as you suggested: leaving them in your home directory (not root) follows the convention everybody uses. rc-files usually stay there after the program has been uninstalled, so if some day you do a fresh install you'll find the application configured as you left it.
Also, by leaving them in your home directory, you can bring your home folder to another system and have the environment set up as you like it.
