We are trying to dockerize some old PowerBuilder batch jobs, and the existing code requires a Ghostscript printer installed with an exact name (for the details see here).
Microsoft's documentation on Print Spooler in containers states that "apps that have a dependency on installing printer drivers into the host cannot be containerized because driver installation from within a container is unsupported".
I do not know if this is just a typo/misunderstanding, because why would any application want to install a driver into the host? I need it in the container. Also, does this apply only to drivers, or to printers as well?
Assuming this is only a typo/misunderstanding and Microsoft is saying it cannot be done in the container, the questions are:
could I do it through the host and commit the result into a new Docker image? (see the sketch right after this list)
could I do it through the Dockerfile, using some MSI installer and running it there?
is there any possibility at all to add a driver and a printer to a Docker image? (and if not, that should be clearly stated)
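For the first option, a rough sketch of what we have in mind (untested; container and image names are placeholders, and whether the installation itself succeeds inside the container is exactly what is in question):
docker run -it --name printer-setup mcr.microsoft.com/windows:1909 powershell
# ...attempt the Ghostscript/printer installation interactively inside the container...
docker commit printer-setup powerbuilder-batch:with-printer
docker rm printer-setup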
Assuming the above statement is not a typo/misunderstanding, it should then be possible to add a printer inside Docker. For that we are using
FROM mcr.microsoft.com/windows:1909
and the isolation mode is hyperv.
We tried running this from the Dockerfile:
RUN powershell -command Add-Printer -Name \"Test Printer\" -DriverName \"Microsoft Print to PDF\" -PortName \"PORTPROMPT:\"
and get the following error:
# InvalidData: (MSFT_Printer:ROOT/StandardCimv2/MS
# FullyQualifiedErrorId : HRESULT 0x80070006,Add-Printer
but I was not able to find anything meaningful about this error.
I can list, rename and remove a printer, but I cannot add one, even for an existing driver. Also, if I execute
Get-PrintConfiguration "Microsoft Print To PDF"
I am getting
+ CategoryInfo : NotSpecified: (MSFT_PrinterConfiguration:ROOT/StandardCi...erConfiguration) [Get-PrintConfiguration], CimException
+ FullyQualifiedErrorId : HRESULT 0x8000ffff,Get-PrintConfiguration
so it seems there is something fishy with printers in the Docker image.
Do you know if there is any possibility to add a printer to a Docker image / container?
thanks
almir
Your error is probably in the Dockerfile command. You have to pass the whole command to PowerShell, not just the Add-Printer part.
I did not try it out, but I would do it like this:
RUN powershell -command "Add-Printer -Name \"Test Printer\" -DriverName \"Microsoft Print to PDF\" -PortName \"PORTPROMPT:\""
The question is quite old, but I would be curious to get feedback :)
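A minimal sketch of the same idea that sidesteps the nested double-quote escaping by using single quotes inside the PowerShell command (untested, and whether the Print Spooler actually permits this inside a container is exactly what the question is about):
FROM mcr.microsoft.com/windows:1909
RUN powershell -Command "Add-Printer -Name 'Test Printer' -DriverName 'Microsoft Print to PDF' -PortName 'PORTPROMPT:'"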
I want to build on top of a Windows Docker container by installing a couple of programs. The files total 0.5 GB and I want to keep the layers as small as possible. I am hoping I can run the setup files from the build context, and then have the build context swept away at the end so I don't have a needless copy of the source files for the setup.exe embedded in my container layers. However, I have not found an example where this is the case. Instead I mostly see people run a COPY command to a temporary build folder, run their setup, then remove the folder. Won't those files still be in the container layers because the COPY command creates a new layer when it's done?
I don't know if the container can see the build-context directly. I was hoping for some magical folder filled with the build-context files so I could run a script using it, but haven't found anything.
It seems like the alternative is to create a private file-server and perform a RUN that can download them from that private server and unpack them, run the install, and remove them (all as 1 docker step). I understand this would make it more available to others who need to rerun the build, but I'm not convinced we'll need to rerun it. It's not likely to change as the container will build patches for a legacy application. Just seems like a lot to host files on a private, public-facing server for something that will get called once every couple years if ever.
So are these my two options?
Make a container with needless copies of source files embedded within
Host the files on a private file server and download/install/remove them
Or am I missing another option or point about how the containers work?
It's a long shot, as Windows is tricky with its file system, but you could do it this way:
In your Dockerfile, use a COPY command, install, then RUN del ... to remove the installation files
Build your image: docker build -t my-large-image:latest .
Run your image: docker run --name my-large-container my-large-image:latest
Stop the container
Export your container filesystem: docker export my-large-container > my-large-container.tar
Import the filesystem into a new image: cat my-large-container.tar | docker import - my-small-image
The caveat is that you need to run the container once, which might not be what you want. Also, I haven't tested this with Windows containers, sorry.
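Another caveat: docker export captures only the container filesystem, so image metadata such as CMD or ENTRYPOINT is lost. docker import can re-apply it via --change, for example (a sketch using the names from the steps above):
cat my-large-container.tar | docker import --change 'CMD ["cmd.exe"]' - my-small-image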
I usually do the download or copy in one step, then in the next step I do the silent installation and remove the installer.
# escape=`
FROM mcr.microsoft.com/dotnet/framework/wcf:4.8-windowsservercore-ltsc2016
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
ADD https://download.visualstudio.microsoft.com/download/pr/6afa582f-fa26-4a73-8cb9-194321e85f8d/ecea51ead62beb7acc73ad9799511ffdb3083ad384fe04ec50e2cbecfb426482/VS_RemoteTools.exe VS_RemoteTools_x64.exe
RUN Start-Process .\VS_RemoteTools_x64.exe -ArgumentList @('/install','/quiet','/norestart') -NoNewWindow -Wait; `
    Remove-Item -Path C:\VS_RemoteTools_x64.exe -Force;
But otherwise, I don't think you can mount a custom volume while it's being built.
I didn't find a satisfactory answer to this. Docker seems designed for only the modern era and assumes you'll be able to download what you need via scripts and tools hitting APIs and file servers. The easiest option I found that I eventually went with was to host the files on a private file server or service (in my case, AWS S3).
I really wish there was a way to have files hosted by the docker daemon in some way, eg. if it acted like a temporary server that you could get data from via http instead of needing to COPY the files and create a layer. Alas, I found no such feature.
Taking this route made my container about a GB smaller.
I'm currently trying to port my image optimizer application to a NanoServer docker image. One of the tools my image optimizer uses is truepng.exe. (Can be downloaded here: http://x128.ho.ua/clicks/clicks.php?uri=TruePNG_0625.zip)
I simply created a nanoserver container and mounted a folder that contained truepng.exe:
docker run --rm -it -v C:\data:C:\data mcr.microsoft.com/windows/nanoserver:2004-amd64
When I now run truepng.exe, I expect some output about missing command line arguments:
C:\MyLocalWindowsMachine>truepng
TruePNG 0.6.2.5 : PNG Optimizer
by x128 (2010-2017)
x128@ua.fm
...
However when I call this from inside the nanoserver docker container I basically see no output:
C:\data>truepng
C:\data>echo %ERRORLEVEL%
-1073741515
As you can see above, the exit code is set to -1073741515. According to this it typically means that there's a dependency missing.
I then downloaded https://github.com/lucasg/Dependencies to see the dependencies of truepng:
It seems it has dependencies on 5 DLLs. Looking these up, I found that there's apparently something called 'Reverse Forwarders': https://cloudblogs.microsoft.com/windowsserver/2015/11/16/moving-to-nano-server-the-new-deployment-option-in-windows-server-2016/
According to the following post though they should already be included in nanoserver: https://social.technet.microsoft.com/Forums/en-US/5b36a6d3-84c9-4940-8b7a-9e2a38468291/reverse-forwarders-package-in-tp5?forum=NanoServer
After all this investigation I've also been playing around with manually copying the DLLs from my local machine (System32) to the Docker container, without any success (it just kept breaking other things, like the copy command, which required me to recreate the container). Next to that I've also copied the files from SysWOW64, but this didn't help either.
I'm currently quite stranded on how to proceed, as I'm not even sure whether the tool is missing dependencies or something else is going on. Is there a way to investigate which DLLs are missing once a tool is started?
Kind regards,
Devedse
Edit 1: Idea from @CherryDT
I tried running gflags (https://social.msdn.microsoft.com/Forums/en-US/f004a7e5-9024-4555-9ada-e692fbc3160d/how-to-start-quotloader-snapsquot?forum=vcgeneral) which gave the following output:
C:\data>"C:\data\gflags.exe" /i TruePNG.exe +sls
Current Registry Settings for TruePNG.exe executable are: 00000000
After this I tried running Dbgview.exe; however, this never resulted in a log file being written:
C:\data>"C:\data\DebugView\Dbgview.exe" /v /l debugview-log.txt /g /n
C:\data>
I also started TruePNG.exe again, but again, no log file was written.
I tried querying the EventLogs using a dotnet core application, but this resulted in the following exception:
Unhandled exception. System.InvalidOperationException: Cannot open log Application on computer '.'. This function is not supported on this system.
at System.Diagnostics.EventLogInternal.OpenForRead(String currentMachineName)
at System.Diagnostics.EventLogInternal.GetEntryAtNoThrow(Int32 index)
at System.Diagnostics.EventLogEntryCollection.GetEntryAtNoThrow(Int32 index)
at System.Diagnostics.EventLogEntryCollection.EntriesEnumerator.MoveNext()
at EventLogReaderTest.ConsoleApp.Program.Main(String[] args) in C:\data\EventLogReaderTest.ConsoleApp\Program.cs:line 22
Windows Nano Server is tiny and only supports 64-bit applications, tools, and agents. The missing dependency in this case is the entire x86 emulation layer (WoW64), as TruePNG is a 32-bit application.
Windows Server Core contains WoW64 and other components missing from Nano Server. Use a Windows Server Core image instead.
Example command:
docker run --rm -it -v C:\Temp:C:\Temp mcr.microsoft.com/windows/servercore:2004 C:\Temp\TruePNG.exe
Yields the expected output:
TruePNG 0.6.2.5 : PNG Optimizer
by x128 (2010-2017)
x128@ua.fm
TruePNG {options} files
options:
/f# PNG delta filters 0=None, 1=Sub, 2=Up, 3=Average, 4=Paeth, 5=Mixed
/fe PNG extra filters, overrides /f switch
/i# PNG interlace method 0=None, 1=Adam7 (default input)
/g# PNG gamma 0=Remove, 1=Apply & Remove, 2=Keep (default)
[...]
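If you want to check other binaries up front, here is a minimal sketch (assuming a hypothetical file path) that reads the PE header's Machine field to tell 32-bit from 64-bit:
# Sketch: inspect the PE Machine field; 0x014C = x86 (needs WoW64), 0x8664 = x64
$path     = 'C:\data\TruePNG.exe'   # hypothetical path
$bytes    = [System.IO.File]::ReadAllBytes($path)
$peOffset = [System.BitConverter]::ToInt32($bytes, 0x3C)            # e_lfanew points at the 'PE\0\0' signature
$machine  = [System.BitConverter]::ToUInt16($bytes, $peOffset + 4)  # first field of IMAGE_FILE_HEADER
switch ($machine) {
    0x014C  { '32-bit (x86): needs WoW64, so it will not run on Nano Server' }
    0x8664  { '64-bit (x64): should run on Nano Server' }
    default { 'Unknown machine type: 0x{0:X4}' -f $machine }
}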
I am using VSCode on my local PC and connecting to a Docker container on a remote server with VSCode's Remote - SSH and Remote - Containers extensions. However, when I type
$ code <file name>
in VSCode's terminal (Bash), I get an error message saying
bash: code: command not found
and I can't edit the file in VSCode's editor.
If I click on the file in VSCode's Explorer (Ctrl+Shift+E), the editor opens, but isn't it possible to open it with the code command?
Also, I open the command palette (Ctrl+Shift+P) and search for Shell Command: Install 'code' command in PATH, but no matching commands are found.
The execution environment is as follows:
Local PC: Windows 10 Pro
Remote host PC: Ubuntu 18.04.3 LTS
Docker container in the remote host PC: Ubuntu 18.04.3 LTS
Thank you very much for your answer.
I'll answer myself, as I was able to call VSCode using the code command in the remote container.
When I looked closely under my home directory, I found a code binary in the following directory:
$HOME/.vscode-server/bin/<directory with a hash-like name>/bin/
So I added that directory to my PATH and it worked.
By the way, <directory with a hash-like name> is a directory whose name looks like a hash; it is assigned when you connect to the container remotely and is different every time, so please look it up yourself.
The way to register the path is as follows.
export PATH="$PATH:$HOME/.vscode-server/bin/<directory with a hash-like name>/bin/"
Thank you very much for your support.
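Since the hash directory can change between connections, an untested variant of the same export that picks up whichever hash directory currently exists:
export PATH="$PATH:$(ls -d "$HOME"/.vscode-server/bin/*/bin 2>/dev/null | head -n 1)"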
They posted an easy solution in response to the issue created by @davetapley here.
Go to the VSCode settings, search for "terminal integrated inherit env" and enable the option. The code command will be available the next time you open a terminal.
(This should have been a comment but I don't have that privilege yet!)
Here is what worked for a similar problem where the code command was not working as expected on my Linux system, connected to my Windows PC via VSCode's Remote - SSH extension: adding VS Code to the PATH by editing the ~/.bashrc file on my Linux remote system.
I used the path variable from akki's answer, and the procedure detailed in this answer by oadams.
To edit this file in my system, I use nano:
nano ~/.bashrc
At the end of the file, add the export path statement akki mentioned, just without the quotation marks:
export PATH=$PATH:$HOME/.vscode-server/bin/<directory with a hash-like name>/bin/
However, the hash-like directory name mentioned in akki's answer does not change when I remote SSH into my Raspberry Pi, so I am not sure how to fix that part of the problem.
My execution environment is as follows:
Local PC: Windows 10 on Dell Latitude PC.
Remote host PC: Raspbian GNU/Linux 10 (buster) on Raspberry Pi 3B.
It sounds like you are confusing where you are running the code command. Your installation of Visual Studio Code is local on your machine and not inside the Docker container. When you open a terminal inside the Docker container, it is as if it were a different machine altogether. Here is a link to the VSCode documentation that is both interesting and useful.
Derived from @akki's answer, I noticed that the hash is stored in several environment variables. So I added this to my .zshrc, which simply finds the path to the bin directory and then makes an alias.
VSCODE_SSH_BIN=$(echo "$BROWSER" | sed -e 's/\/helpers\/browser.sh//g')
alias code='$VSCODE_SSH_BIN/remote-cli/code'
I am using Chocolatey to install Docker.
When I originally run the following command:
choco install docker
and try to run the "docker --version" command, everything goes as expected.
Docker version 17.10.0-ce, build f4ffd25
When I try to run "dockerd" command, it shows as not being part of my path.
'dockerd' is not recognized as an internal or external command,
Looking at the PATH variable and navigating to where Chocolatey stores the executables, dockerd.exe is not present while docker.exe is. Am I missing something in instructing Chocolatey to add dockerd?
The reason I need the dockerd executable is so that I can limit the number of concurrent downloads, as shown in the Docker documentation.
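For what it's worth, that limit is normally set in Docker's daemon configuration file rather than by invoking dockerd by hand; a minimal sketch, assuming the default Windows location C:\ProgramData\docker\config\daemon.json:
{
  "max-concurrent-downloads": 3
}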
This is a decision that the package maintainer(s) for Docker have made. If you have a look here:
https://chocolatey.org/packages/docker#files
You will see that there is a dockerd.exe.ignore file. This file is used to instruct Chocolatey explicitly not to create what is referred to as a shim file, which would make dockerd work from the command line in the same way that docker does.
My best suggestion would be to reach out to the maintainers of that package to ask them why this was done, and to perhaps get it changed. You can do this by clicking on the Contact Maintainers link on this page:
https://chocolatey.org/packages/docker
As a workaround, you could add the following path to your Windows PATH environment variable:
C:\ProgramData\chocolatey\lib\docker\tools\docker
Which would allow it to work.
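A sketch of doing that from an elevated PowerShell prompt (assuming the default Chocolatey location above; open a new shell afterwards to pick up the change):
$dockerTools = 'C:\ProgramData\chocolatey\lib\docker\tools\docker'
$machinePath = [Environment]::GetEnvironmentVariable('Path', 'Machine')
[Environment]::SetEnvironmentVariable('Path', "$machinePath;$dockerTools", 'Machine')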
I want to use the get-vm command from PowerShell.
$my_vm = get-vm -Name MY_VM_NAME
I get an error message saying:
The term 'get-vm' is not recognized as the name of a cmdlet, function,
script file, or operable program. Check the spelling of the name, or
if a path was included, verify that the path is correct and try
again.
I found that I need to install Microsoft.SystemCenter.VirtualMachineManager
(how to run get-vm command on windows powershell)
I executed this:
PS C:\Windows\system32> Add-PSSnapin -Name Microsoft.SystemCenter.VirtualMachineManager
Add-PSSnapin : The Windows PowerShell snap-in 'Microsoft.SystemCenter.VirtualMachineManager' is not installed on this machine.
At line:1 char:13
Do I really need to install that huge tool? I don't have enough space on my VM.
http://www.microsoft.com/en-us/download/details.aspx?id=10712
Is there another way to solve the issue with get-vm?
Thanks!
You can look into connecting to the machine that has VMM on it. You can use remoting to do this. You can find some more info about it here: http://windowsitpro.com/scripting/creating-remote-sessions-powershell-20?page=3
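A minimal sketch of that approach (the host name is a placeholder, and it assumes PowerShell remoting is enabled on the machine that actually has the VM management cmdlets):
# Run Get-VM on the remote host that has the management tools installed
$session = New-PSSession -ComputerName 'VMMHOST01'   # placeholder host name
$my_vm   = Invoke-Command -Session $session -ScriptBlock { Get-VM -Name 'MY_VM_NAME' }
Remove-PSSession $session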