Cannot find path because it does not exist - google-cloud-powershell

PS D:\> cd gs:\
cd : Cannot find drive. A drive with the name 'gs' does not exist.
PS D:\> Get-GcsBucket
PS D:\> cd gs:\mybucket
Why can't I change to the gs:\ drive before running Get-GcsBucket?
PS gs:\mybucket> mkdir NewFolder
PS gs:\mybucket> cd .\NewFolder
cd : Cannot find path 'gs:\mybucket\NewFolder' because it does not exist.
PS gs:\mybucket> ls
Name Size ContentType TimeCreated Updated
---- ---- ----------- ----------- -------
NewFolder
Why can't I change directory?

Why can't I change to the gs:\ drive before running Get-GcsBucket?
Unlike Cmdlets and Functions, Providers and the drives they add cannot be discovered until the module they are part of is imported into the current PowerShell session. This can be done explicitly with Import-Module, or implicitly by calling a Cmdlet or Function that is discoverable, such as Get-GcsBucket.
Why are Cmdlets discoverable but drives aren't? Because the module manifest lists the Cmdlets but has no entry for drives, and because the Cmdlet names are stored in assembly metadata (as attributes) that can be read without loading the assembly, whereas the drive is created by code that can only run after the assembly has been loaded.
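For example, importing the module up front makes the drive available right away (a minimal sketch; mybucket is the bucket name from the question):
# Explicitly importing the module registers the Cloud Storage provider and its gs:\ drive,
# so you can change to it without calling Get-GcsBucket first.
Import-Module GoogleCloud
cd gs:\mybucket
Get-PSDrive   # the gs drive now appears in the list of drives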
Why can't I change directory?
It looks like a bug, but I have not been able to reproduce it. If you can provide more information, I encourage you to submit an issue on the Google Cloud PowerShell issues page.

I'm going to guess this is a bug in the Cloud Tools for PowerShell module.
When you launch PowerShell it loads a manifest file (GoogleCloud.psd1) which provides a declaration for every cmdlet that the module contains. This allows PowerShell to delay loading the actual cmdlets assembly until it is actually needed, thereby speeding up startup time considerably.
The actual list of which cmdlets are found in the module is determined as part of the build and release process.
Anyway, that manifest does not declare the existence of the Cloud Storage PowerShell provider (the cd gs:\ bits), so PowerShell doesn't know that it exists until after it loads the GoogleCloud PowerShell module, which happens once you invoke Get-GcsBucket (or, I assume, any cmdlet in the module) at least once.

Related

How do I make a script to run another program after a file is created in a specific folder?

How do I make a script to run another program after a file is created in a specific folder? Could be Windows or Linux. To be specific, I want to use rclone to move the file to a remote folder right after it is created. The program itself doesn't have a built-in monitoring function.
Could be Windows or Linux.
Linux: It's easy with inotifywait.
cd "a specific folder"
inotifywait -me close_write --format %f . | while read file
do rclone move "$file" …
done
Unfortunately this doesn't work with the Ubuntu Linux app from the Windows Store for files on mounted Windows filesystems.
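On the Windows side, a rough PowerShell sketch with a FileSystemWatcher can do something similar (this is only an outline; the folder path and the remote name remote:backup are placeholders, and unlike close_write the Created event may fire before a large file has been fully written):
# Watch a folder and hand each newly created file to rclone (sketch; adjust path and remote).
$watcher = New-Object System.IO.FileSystemWatcher 'C:\a specific folder'
$watcher.EnableRaisingEvents = $true
Register-ObjectEvent $watcher Created -Action {
    & rclone move $Event.SourceEventArgs.FullPath 'remote:backup'
} | Out-Null
while ($true) { Start-Sleep -Seconds 1 }   # keep the script alive so the handler keeps firing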

Trying to port application to docker nanoserver container. Running exe fails with exit code -1073741515 (Dependency missing)

I'm currently trying to port my image optimizer application to a NanoServer docker image. One of the tools my image optimizer uses is truepng.exe. (Can be downloaded here: http://x128.ho.ua/clicks/clicks.php?uri=TruePNG_0625.zip)
I simply created a nanoserver container and mounted a folder that contained truepng.exe:
docker run --rm -it -v C:\data:C:\data mcr.microsoft.com/windows/nanoserver:2004-amd64
When I run truepng.exe on my local machine, I expect some output about missing command-line arguments:
C:\MyLocalWindowsMachine>truepng
TruePNG 0.6.2.5 : PNG Optimizer
by x128 (2010-2017)
x128@ua.fm
...
However when I call this from inside the nanoserver docker container I basically see no output:
C:\data>truepng
C:\data>echo %ERRORLEVEL%
-1073741515
As you can see above, the exit code is -1073741515 (0xC0000135, STATUS_DLL_NOT_FOUND), which typically means that a dependency is missing.
I then downloaded https://github.com/lucasg/Dependencies to see the dependencies of truepng:
It seems it depends on 5 DLLs. Looking these up, I found that there's apparently something called 'Reverse Forwarders': https://cloudblogs.microsoft.com/windowsserver/2015/11/16/moving-to-nano-server-the-new-deployment-option-in-windows-server-2016/
According to the following post, though, they should already be included in Nano Server: https://social.technet.microsoft.com/Forums/en-US/5b36a6d3-84c9-4940-8b7a-9e2a38468291/reverse-forwarders-package-in-tp5?forum=NanoServer
After all this investigation I've also been playing around with manually copying the DLLs from my local machine (System32) into the container, without any success (it just kept breaking other things, like the copy command, which required me to recreate the container). Besides that, I've also copied the files from SysWOW64, but this didn't help either.
I'm currently quite stuck on how to proceed, as I'm not even sure whether the tool is missing dependencies or something else is going on. Is there a way to investigate which DLLs are missing when a tool starts?
Kind regards,
Devedse
Edit 1: Idea from @CherryDT
I tried running gflags (https://social.msdn.microsoft.com/Forums/en-US/f004a7e5-9024-4555-9ada-e692fbc3160d/how-to-start-quotloader-snapsquot?forum=vcgeneral) which gave the following output:
C:\data>"C:\data\gflags.exe" /i TruePNG.exe +sls
Current Registry Settings for TruePNG.exe executable are: 00000000
After this I tried running Dbgview.exe; however, this never resulted in a log file being written:
C:\data>"C:\data\DebugView\Dbgview.exe" /v /l debugview-log.txt /g /n
C:\data>
I also started TruePNG.exe again, but again, no log file was written.
I tried querying the EventLogs using a dotnet core application, but this resulted in the following exception:
Unhandled exception. System.InvalidOperationException: Cannot open log Application on computer '.'. This function is not supported on this system.
at System.Diagnostics.EventLogInternal.OpenForRead(String currentMachineName)
at System.Diagnostics.EventLogInternal.GetEntryAtNoThrow(Int32 index)
at System.Diagnostics.EventLogEntryCollection.GetEntryAtNoThrow(Int32 index)
at System.Diagnostics.EventLogEntryCollection.EntriesEnumerator.MoveNext()
at EventLogReaderTest.ConsoleApp.Program.Main(String[] args) in C:\data\EventLogReaderTest.ConsoleApp\Program.cs:line 22
Windows Nano Server is tiny and only supports 64-bit applications, tools, and agents. The missing dependency in this case is the entire x86 emulation layer (WoW64), as TruePNG is a 32-bit application.
Windows Server Core contains WoW64 and other components missing from Nano Server. Use a Windows Server Core image instead.
Example command:
docker run --rm -it -v C:\Temp:C:\Temp mcr.microsoft.com/windows/servercore:2004 C:\Temp\TruePNG.exe
Yields the expected output:
TruePNG 0.6.2.5 : PNG Optimizer
by x128 (2010-2017)
x128@ua.fm
TruePNG {options} files
options:
/f# PNG delta filters 0=None, 1=Sub, 2=Up, 3=Average, 4=Paeth, 5=Mixed
/fe PNG extra filters, overrides /f switch
/i# PNG interlace method 0=None, 1=Adam7 (default input)
/g# PNG gamma 0=Remove, 1=Apply & Remove, 2=Keep (default)
[...]
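As a side note, you can confirm that an executable really is 32-bit without any extra tools; a minimal PowerShell sketch that reads the machine field from the PE header (using the same path as above):
# Read the PE header and report the target architecture.
$bytes   = [System.IO.File]::ReadAllBytes('C:\data\TruePNG.exe')
$peOff   = [BitConverter]::ToInt32($bytes, 0x3C)         # e_lfanew: offset of the 'PE\0\0' signature
$machine = [BitConverter]::ToUInt16($bytes, $peOff + 4)  # IMAGE_FILE_HEADER.Machine
switch ($machine) {
    0x014c  { '32-bit (x86)' }
    0x8664  { '64-bit (x64)' }
    default { 'Other machine type: 0x{0:X4}' -f $machine }
}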

Alternative to using --squash when building docker images on Windows using local files

We have some local installers and zip files that we use to build our docker images. It is easy to get this to work in a Dockerfile:
FROM mcr.microsoft.com/windows/nanoserver
COPY myinstaller.exe .
RUN myinstaller.exe && \
del myinstaller.exe
The problem here is that it produces a layer for the COPY line, which increases the size of the image. A common workaround is to have a single RUN line that downloads the file from the Internet, runs it, and then deletes the installation file. The problem, as written above, is that our installers are on the local filesystem.
I found that there is a --squash command for docker:
docker build --squash -t mytestimage .
This does exactly what I want: it gives me an image without the unnecessary installer file left in a layer. To run this command you need to enable experimental features, though. There is also an open issue to simply remove this feature:
https://github.com/moby/moby/issues/34565
Is there some alternative way of using local installers in a Dockerfile when running on Windows, that doesn't involve setting up a server to provide the files?
We ended up setting up nginx to serve the files while building. The machine building our docker images and the server that has the installer files have a very good connection between them, so downloading huge files is not a real problem.
When it comes to --squash, it is bugged for Docker on Windows. Here is the relevant issue for it:
https://github.com/moby/moby/issues/31468
There is an issue to move --squash out of experimental, but it doesn't seem to have a lot of support:
https://github.com/moby/moby/issues/38657
The alternative that some people propose instead of --squash is a multi-stage build; discussion here:
https://github.com/moby/moby/issues/34565
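As a rough sketch of that approach, assuming the installer drops its output under C:\app (the path, and whether the installer actually runs on a nanoserver base, depend on your installer; adjust to your case):
# escape=`
FROM mcr.microsoft.com/windows/nanoserver AS install
COPY myinstaller.exe .
RUN myinstaller.exe
FROM mcr.microsoft.com/windows/nanoserver
COPY --from=install C:\app C:\app
Only what the final stage copies over ends up in the published image, so neither the installer file nor the layer that contained it is shipped.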
There is an alternative to --squash if you have local installer files, don't want to set up a web server, would like your docker image to be small, and are running Windows: use mapped drives.
In Windows, you can share folders with other users on your network. A Docker container is like another computer running on your physical machine, and it can access these network shares.
First set up a new user, for example username share and password password1. Create a folder somewhere on your computer. Then right click it, click properties, and then go to the Sharing tab and click "Share". Find the user that you have just created, using the little dropdown menu and Find people ..., and share the folder with this user.
Create a folder somewhere for your test project. Create a batch file setupshare.bat that looks like this:
@echo off
for /f "tokens=2 delims=:" %%i in ('ipconfig ^| findstr "Default Gateway"') do (
set hostip=%%i
goto :end
)
:end
set hostip=%hostip: =%
net use O: \\%hostip%\vms /USER:share password1
The first part of this file is only there to find the IP address that the docker container can use to reach its host computer. It is not the prettiest thing I've ever put together, so let me know if there's a better way!
It uses a for-loop, as that is the way to save the output of a command into a variable in batch files. The command is ipconfig, and we pipe it to findstr to search for Default Gateway. We need to use ^| instead of just | because it is inside a for-loop. The for-loop splits each line of the output on the delimiter, which is : in this case, and we only take the second token. The loop only handles the first line if there are multiple entries with a Default Gateway, so this script doesn't work if there are multiple entries and the first one is not the correct one.
The line set hostip=%hostip: =% is to remove a space at the start of the string.
We then have the IP address that we want to use stored in hostip. We use this in the net use command, which will map O:\ to shared folder vms on the machine with IP hostip. We use the username share and the password password1. Note that this is a very bad way of handling passwords, as they kind of should be secret!
With a batch file like this, we can set up a Dockerfile in this way:
# escape=`
FROM mcr.microsoft.com/dotnet/core/sdk:3.0
COPY setupshare.bat .
RUN setupshare.bat && `
copy O:\file.txt file.txt
The RUN command will first call setupshare.bat that sets up the network share properly. We can then use any file that we shared, for example a huge installer, and install the things that we want. In this case I have only shared a test file file.txt to see that it works, so just change that line.
I would still advise everyone to just set up a little web server, for example nginx, and use the standard way of writing Dockerfiles: downloading files and running them in the same RUN command. That's what people expect when they see a Dockerfile, and it should be a more robust solution.
We can also hope that the Docker people either add a COPY command that can copy, run, and delete installers in the same layer, or implement --squash properly.

Google Cloud Platform - Viewing downloaded files after wget

I am completing this tutorial and am at the part where you download the code for the tutorial. The request we send to GitHub is:
wget https://github.com/GoogleCloudPlatform/cloudml-samples/archive/master.zip
I understand that this downloads the archive to GCP, and I can see the files in the Cloud Shell, but is there a way to see the files through the Google Console GUI? I would like to browse the files I have downloaded to understand their structure better.
By clicking the pencil icon in the top right corner, the Cloud Shell code editor will open.
Quoting the documentation:
"The built-in code editor is based on Orion. You can use the code
editor to browse file directories as well as view and edit files, with
continued access to the Cloud Shell. The code editor is available by
default with every Cloud Shell instance."
You can find more info here: https://cloud.google.com/shell/docs/features#code_editor
If you prefer to use the command line to view files, you can install the tree Unix CLI command and run it in Cloud Shell to list the contents of directories in a tree-like format.
install tree => $ sudo apt-get install tree
run it => $ tree ./ -h --filelimit 4
-h will show human-readable sizes of files/directories,
and --filelimit N tells tree not to descend into directories that contain more than N entries.
Use $ man tree to see the available parameters for the command, or check the online man documentation here: https://linux.die.net/man/1/tree

Unable to start any container when Volumes are enabled Docker Toolbox

I am running Docker Toolbox v. 1.13.1a on Windows 7 Pro Service Pack 1 (x64),
with VirtualBox version 5.1.14 r112924.
When I try to run any docker image, e.g. the official postgres image from Docker Hub, with volumes disabled, it works fine!
But when I enable the volumes it fails.
I tried all the official documentation.
The VM has the shared folder as required and also has full access to it:
shared folder screenshot
In the case of my postgresql example, it crashes with the following log:
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
Data page checksums are disabled.
fixing permissions on existing directory /var/lib/postgresql/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... LOG: could not link file "pg_xlog/xlogtemp.27" to "pg_xlog/000000010000000000000001": Operation not permitted
FATAL: could not open file "pg_xlog/000000010000000000000001": No such file or directory
child process exited with exit code 1
initdb: removing contents of data directory "/var/lib/postgresql/data"
I know it's a problem with folder permissions, but I'm kinda stuck!
A ton of thanks in advance
I've been busy with this problem all day, and my conclusion is that it's currently simply not possible to run postgresql inside a docker container while keeping your data persistent in a separate volume.
I even tried running the container without linking to a volume, copying the data that was originally in /var/lib/postgresql into a folder on my host OS (Windows 10 Home), and then copying that into the folder that then got linked to the container itself.
Alas, I got the following error:
FATAL: data directory "/var/lib/postgresql/data/pgadmin" has wrong ownership
HINT: The server must be started by the user that owns the data directory.
In conclusion: something is going wrong with the ownership of the data directory and which user owns it, and to be able to fix it you'd need a Unix command line on Windows that can talk to Docker (something currently not possible with Bash on Ubuntu on Windows, which runs Ubuntu 16.04 binaries).
Maybe, in the future, you'll be able to run the needed commands (found here, under Arbitrary --user Notes), but these are *nix commands, and PowerShell (started by Kitematic) can't run them. Bash on Ubuntu on Windows could run them, but that shell has no connection to the docker daemon/service on Windows...
TL;DR: Lost a day of work: It is currently impossible on Windows.
I have been trying to fix this issue as well.
At first I thought it was a symlink problem (because the first error fails with "could not link ... Operation not permitted").
To be sure symlinks are permitted you have to:
share a folder in virtualbox
run VirtualBox as administrator (if your account is in the administrators group): right-click virtualbox.exe and select Run as Administrator
if your account is not an administrator, add the symlink privilege with secpol.msc > Local Policies > User Rights Assignments and add your user to "Create symbolic links"
enable symlinks for your shared folder in VirtualBox:
VBoxManage setextradata VM_NAME VBoxInternal2/SharedFoldersEnableSymlinksCreate/SHARED_FOLDER_NAME 1
Alternatively, you can also use the C:\Users\username folder, which is shared and symlink-enabled by default by the Docker Toolbox installation.
Now I can create symlinks in the shared folder from the docker container... but I still have the same error "could not link ... Operation not permitted".
So the reason must be somewhere else, in the file permissions as you said, but I do not see why.
