I'm attempting to containerize a legacy ASP.NET application that has a dependency on Web Services Enhancements (WSE) 3.0. I understand that this is a legacy technology but refactoring the application to remove it is not an option.
My Dockerfile:
FROM mcr.microsoft.com/windows/servercore/iis
RUN mkdir prereqs
WORKDIR /prereqs
COPY ["prereqs/WSE30.msi", "c:/prereqs/"]
RUN "C:\prereqs\WSE30.msi /qn /quiet /passive"
Which fails with the following:
The command 'cmd /S /C "C:\prereqs\WSE30.msi /qn /quiet /passive"' returned a non-zero code: 1603
I've tried modifying the RUN command to include logging...
RUN "C:\prereqs\WSE30.msi /qn /quiet /passive /lv c:/logs/wse30.txt"
...but this causes the docker build to hang; I've let several of these attempts run for more than an hour and they do not appear to progress or complete.
I've also tried adding an "exit 0" to simply let the build continue if there is an error...
RUN "C:\prereqs\WSE30.msi /qn /quiet /passive /lv c:/logs/wse30.txt" ; exit 0
...but the result is the same: the build appears to hang and never completes.
I know that this particular MSI supports unattended/silent installation as I've done so in batch files.
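For reference, the kind of batch-file invocation I mean is just a standard msiexec silent install, roughly along these lines (the flags and log path are illustrative):
REM standard silent install; "start /wait" makes the batch file block until msiexec finishes
start /wait msiexec /i C:\prereqs\WSE30.msi /qn /norestart /lv C:\prereqs\wse30.log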
@DWRoelands, I faced the same issue, but I was able to install it on windowsservercore:1909 after installing the Web-Server role and all of its subcomponents.
You will have to set the source for .NET 3.5 to the file downloaded from https://dotnetbinaries.blob.core.windows.net/dockerassets/microsoft-windows-netfx3-1909.zip
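A rough sketch of what that can look like in a Dockerfile (the feature install and DISM step follow the usual patterns for Server Core containers; the zip linked above is assumed to have been extracted into a local dotnetfx3 folder next to the Dockerfile, so treat this as a starting point rather than a tested recipe):
FROM mcr.microsoft.com/windows/servercore:1909
# install the Web-Server role with all of its subcomponents
RUN powershell -Command "Install-WindowsFeature Web-Server -IncludeAllSubFeature"
# enable .NET Framework 3.5 from the offline payload linked above
COPY dotnetfx3/ C:/dotnetfx3/
RUN dism /online /enable-feature /featurename:NetFx3 /all /LimitAccess /Source:C:\dotnetfx3
# then the WSE 3.0 MSI can be installed silently
COPY prereqs/WSE30.msi C:/prereqs/
RUN start /wait msiexec /i C:\prereqs\WSE30.msi /qn /norestart /lv C:\prereqs\wse30.log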
I am writing a few Kong custom plugins in Lua. I am using Kong 2.3.3 and Lua 5.1.
I have some test cases (unit tests + integration tests) and I am running them with the pongo run -coverage option. I have already installed luacov (and also cluacov, both with luarocks install) and all my tests are passing, but no luacov files are being generated with coverage data. I am not running pongo from Docker; I have installed and configured it on my local machine (which is Linux Ubuntu 20.04).
I have already tried a few things as follows:
my .busted file sets coverage = true, verbose = true, and output = "gtest" (I already tried utfTerminal, tap, and json too)
I tried adding luacov as a dependency to my rockspec file; the build does not fail, but no coverage file is generated
I even tried running the tests without pongo, using busted directly, but this is a very bad option because things like spec.helpers or the cjson lib are not on my LUA_PATH
A quick way to do this is to modify pongo
Edit your pongo.sh file to:
add the --coverage flag to the busted call
call luacov to generate the report
display the report with cat luacov.report.out
Locate where busted is called (line 959 for me) and change it to:
"/bin/sh" "-c" "bin/busted --coverage --helper=bin/busted_helper.lua ${busted_params[*]} ${busted_files[*]};luacov;cat luacov.report.out"
Install luacov: edit assets/Dockerfile and, after the busted installation, add luacov:
&& luarocks install busted-htest \
&& luarocks install luacov \
pongo run will give you
[...]
==============================================================================
Summary
==============================================================================
File Hits Missed Coverage
-------------------------------------------------------------------------------------------------------
/kong-plugin/kong/plugins/myplugin/schema.lua 105 1 99.06%
/kong-plugin/spec/myplugin/01-schema_spec.lua 199 5 97.55%
[...]
You can create a Docker image based on the Pongo test image:
spec/unit/docker/Dockerfile
FROM kong-pongo-test:2.3.2
USER root
RUN luarocks install luacov
WORKDIR /kong-plugin
COPY . .
spec/unit/docker/run.sh
#!/bin/sh
busted --coverage spec/unit
luacov
cat luacov.report.out
Run
docker build -f spec/unit/docker/Dockerfile -t my-coverage .
docker run my-coverage sh spec/unit/docker/run.sh
Pongo gained some support for this (still a PR). Note that it only covers unit tests, not integration ones.
See https://github.com/Kong/kong-pongo/pull/184
btw: the other answers are too complex IMO; you can add .pongo/pongo-setup.sh to install LuaCov, and move the .luacov file from /kong-plugin to /kong. That should be all that is necessary.
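A minimal sketch of that setup hook, assuming the .luacov config file sits in the plugin repo root (which Pongo mounts at /kong-plugin):
#!/bin/sh
# .pongo/pongo-setup.sh - run by Pongo while it builds the test image
luarocks install luacov
# LuaCov reads its config from the working directory, which is /kong inside the container
cp /kong-plugin/.luacov /kong/.luacov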
Running tests with coverage can be done simply by passing the flag, without any need to edit Pongo or the Dockerfile. Try pongo run -- --coverage, for example.
I am working on creating a Docker image for running JMeter tests with plugins, but I am stuck at the point where the JMeter Plugins Manager won't install. I have tried most of the suggestions on the internet, but with no luck.
I tried using existing Docker images from the hub, but everything fails on finding the class for the plugin manager.
Below is the error I get:
Step 16/23 : RUN cd $JMETER_HOME && java -cp /lib/ext/jmeter-plugins-manager-1.4.jar org.jmeterplugins.repository.PluginManagerCMDInstaller
---> Running in d2d5579a4832
Error: Could not find or load main class org.jmeterplugins.repository.PluginManagerCMDInstaller
The command '/bin/sh -c cd $JMETER_HOME && java -cp /lib/ext/jmeter-plugins-manager-1.4.jar org.jmeterplugins.repository.PluginManagerCMDInstaller' returned a non-zero code: 1
Could not find or load main class org.jmeterplugins.repository.PluginManagerCMDInstaller
it means that your classpath doesn't contain the PluginManagerCMDInstaller class.
My expectation is that you need to remove the slash before /lib: in Linux, paths which start with a slash are considered absolute, and you most probably need to use the relative one:
RUN cd $JMETER_HOME && java -cp lib/ext/jmeter-plugins-manager-1.4.jar org.jmeterplugins.repository.PluginManagerCMDInstaller
More information:
How to Install the JMeter Plugins Manager
Plugins Manager from Command-Line
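For illustration, a sketch of how that step could look with the relative classpath, assuming JMETER_HOME is set earlier in the Dockerfile, the plugins-manager jar is already in lib/ext, and a cmdrunner jar is present in lib (the plugin id jpgc-graphs-basic is only an example):
# the classpath is now relative, resolved against $JMETER_HOME after the cd
RUN cd $JMETER_HOME \
    && java -cp lib/ext/jmeter-plugins-manager-1.4.jar org.jmeterplugins.repository.PluginManagerCMDInstaller \
    && bin/PluginsManagerCMD.sh install jpgc-graphs-basic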
I have a few Docker builds on Azure DevOps for React apps which include packages from a private npm feed also hosted on Azure DevOps. Recently the builds have started failing at the npm install command.
In order to authenticate the container to install from the private feed I've always used an .npmrc file. This is saved locally as .npmrc.docker and looks like this:
#<package-scope>:registry=https://<devops-username>.pkgs.visualstudio.com/_packaging/<feed-name>/npm/registry/
//<devops-username>.pkgs.visualstudio.com/_packaging/<feed-name>/npm/registry/:username=<feed-name>
//<devops-username>.pkgs.visualstudio.com/_packaging/<feed-name>/npm/registry/:_password=${NPM_TOKEN}
//<devops-username>.pkgs.visualstudio.com/_packaging/<feed-name>/npm/registry/:email=npm requires email to be set but doesn't use the value
//<devops-username>.pkgs.visualstudio.com/_packaging/<feed-name>/npm/:username=<feed-name>
//<devops-username>.pkgs.visualstudio.com/_packaging/<feed-name>/npm/:_password=${NPM_TOKEN}
//<devops-username>.pkgs.visualstudio.com/_packaging/<feed-name>/npm/:email=npm requires email to be set but doesn't use the value
I define a scoped package source at the top and the rest is generated from Azure DevOps via its Connect to feed wizard. ${NPM_TOKEN} is my feed password which I pass into the docker build command as a build argument.
The part of my Dockerfile which uses this looks like this:
FROM node:alpine as build
ARG NPM_TOKEN
COPY ./.npmrc.docker /app/.npmrc
COPY ./package.json /app/package.json
WORKDIR /app
RUN npm install
RUN rm -f .npmrc
In my Azure DevOps build pipeline this has always worked. The Build image part of the pipeline feeds in this build arg from a variable like this - --build-arg NPM_TOKEN=$(ArtifactsNpmPat) - where ArtifactsNpmPat is a variable in my library.
Recently my builds have started failing. Initially I assumed my token had expired so I generated and stored a new one. Here's the error from the agent:
[error]The command '/bin/sh -c npm install' returned a non-zero code: 1
[error]The process '/usr/bin/docker' failed with exit code 1
Note that the same process continues to work locally. So I've no idea how to diagnose this. I did find this SO post which led me to update my Dockerfile to look like this:
FROM node:alpine as build
ARG NPM_TOKEN
COPY ./.npmrc.docker /app/.npmrc
COPY ./package.json /app/package.json
WORKDIR /app
RUN npm install -g vsts-npm-auth
RUN vsts-npm-auth -config .npmrc
RUN npm install
RUN rm -f .npmrc
However, a docker build now generates a pretty crazy error from the RUN vsts-npm-auth command.
/usr/local/bin/vsts-npm-auth: line 1: MZ�╚╝���#���: not found
/usr/local/bin/vsts-npm-auth: line 1: �ԞO���: not found
[...]
/usr/local/bin/vsts-npm-auth: line 58: syntax error: unexpected word (expecting ")")
So I'm stuck. Has something changed in DevOps around authenticating with its private feeds? Not that I'm aware of, but like I say these builds just stopped working some time in October without me changing anything. Advice appreciated.
You are trying to run vsts-npm-auth, which is not compatible with the Linux platform; the Azure documentation explains that you only need the .npmrc file with credentials.
As answered by dawid debinski, vsts-npm-auth doesn't run on the Linux platform, as written here.
Check out this documentation, in which you authenticate your pipeline with the npm authenticate task, or use the YAML version.
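For the YAML route, a minimal sketch that reuses the .npmrc.docker file from the question (npmAuthenticate appends the feed credentials to that file before the image is built; the image name is illustrative):
steps:
- task: npmAuthenticate@0
  inputs:
    workingFile: .npmrc.docker   # file to inject feed credentials into
- script: docker build -t my-app .
  displayName: Build image (the Dockerfile copies the now-authenticated .npmrc.docker)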
I am trying to create a Docker image to host my ASP.NET MVC app that has a dependency on Crystal Reports.
My Dockerfile looks like this:
FROM microsoft/iis
COPY ./bin/Release/Publish/ c:\\inetpub\\wwwroot
RUN ["powershell.exe", "Install-WindowsFeature NET-Framework-45-ASPNET"]
RUN ["powershell.exe", "Install-WindowsFeature Web-Asp-Net45"]
#install Crystal reports runtime
COPY Resources/Files/CRRuntime_64bit_13_0_21.msi .
RUN powershell.exe -Command Start-Process CRRuntime_64bit_13_0_21.msi -ArgumentList '/quiet' -Wait
The install of CRRuntime_64bit_13_0_21.msi fails. I logged onto my container, ran the MSI install from PowerShell, and produced a log. It's very long, but here are two things that stand out:
Error 1904. Module C:\Program Files (x86)\SAP BusinessObjects\Crystal Reports for .NET Framework 4.0\Common\SAP BusinessObjects Enterprise XI 4.0\win64_x64\pageobjectmodel.dll failed to register. HRESULT -2147024770. Contact your support personnel.
Action ended 17:20:50: InstallFinalize. Return value 3.
Action ended 17:23:56: INSTALL. Return value 3.
MSI (s) (3C:54) [17:23:56:467]: Product: SAP Crystal Reports runtime engine for .NET Framework (64-bit) -- Installation operation failed.
MSI (s) (3C:54) [17:23:56:467]: Windows Installer installed the product. Product Name: SAP Crystal Reports runtime engine for .NET Framework (64-bit). Product Version: 13.0.21.2533. Product Language: 1033. Manufacturer: SAP. Installation success or error status: 1603.
The first error does not seem to halt the install.
Any suggestions for troubleshooting this are welcome, as are alternative ways of creating the image.
Also, just to confirm: the website loads and runs fine. I just can't use any of the features that require the Crystal Reports dependency.
Using the full Windows 2019 container mcr.microsoft.com/windows:1809 as a base, the installer works, which hints that it's just down to missing OS components.
I don't get 'Error 1904' logged, but maybe I'm on a different host OS.
The installer log shows that a custom action SetASPDotNetDllPath is failing.
If you:
Open the installer MSI (e.g. in Orca)
Locate and extract the custom action binary, saving it as a DLL
Inspect its imports (e.g. with dumpbin)
you'll see a dependency on oledlg.dll.
This is its only dependency that isn't available in Server Core.
It's not great, but you can copy this DLL from the full Windows container to fix it:
FROM mcr.microsoft.com/windows:1809 as dll_source
FROM microsoft/iis
#hack in oledlg dll!!
COPY --from=dll_source /windows/system32/oledlg.dll /windows/system32/oledlg.dll
COPY --from=dll_source /windows/syswow64/oledlg.dll /windows/syswow64/oledlg.dll
RUN ["powershell.exe", "Install-WindowsFeature NET-Framework-45-ASPNET"]
RUN ["powershell.exe", "Install-WindowsFeature Web-Asp-Net45"]
WORKDIR c:/temp
COPY CRRuntime_64bit_13_0_21.msi .
RUN powershell.exe -Command Start-Process c:\temp\CRRuntime_64bit_13_0_21.msi -ArgumentList '/l*v c:\temp\install.log' -Wait
I'm going to add an additional answer: while Peter's answer worked perfectly for installing Crystal Reports, I had an additional issue with missing fonts when exporting to PDF from a Crystal Report.
This is what I've ended up with. The key is the change in the image tag to an older version.
#windowsservercore-1803 required as it has the fonts we need in the report in order to export to PDF
FROM microsoft/iis:windowsservercore-1803
#install features we need
RUN ["powershell.exe", "Install-WindowsFeature NET-Framework-45-ASPNET"]
RUN ["powershell.exe", "Install-WindowsFeature Web-Asp-Net45"]
#hack in oledlg dll so that Crystal Runtime will install
COPY Resources/Files/64/oledlg.dll /windows/syswow64/oledlg.dll
COPY Resources/Files/32/oledlg.dll /windows/system32/oledlg.dll
#copy in Crystal MSI and install. Note it's 64bit version
WORKDIR c:/temp
COPY Resources/Files/CRRuntime_64bit_13_0_21.msi .
RUN powershell.exe -Command Start-Process c:\temp\CRRuntime_64bit_13_0_21.msi -ArgumentList '/quiet /l*v c:\temp\install64.log' -Wait
#Add website files
COPY ./bin/Release/Publish/ /inetpub/wwwroot
For some reason Microsoft has dropped lots of fonts between versions 1803 and 1809; I can only assume it was to reduce the OS image size.
I'm trying to build an Alpine Docker image for the FIPS-enabled version of Go. To do this, I am trying to build Go from source using the dev.boringcrypto branch of the golang/go repository.
Upon running ./all.bash, I get the following errors:
Step 4/4 : RUN cd go/src && ./all.bash
---> Running in 00db552598f7
Building Go cmd/dist using /usr/lib/go.
# _/go/src/cmd/dist
loadinternal: cannot find runtime/cgo
/usr/lib/go/pkg/tool/linux_amd64/link: running gcc failed: exit status 1
/usr/lib/gcc/x86_64-alpine-linux-musl/6.4.0/../../../../x86_64-alpine-linux-musl/bin/ld: cannot find Scrt1.o: No such file or directory
/usr/lib/gcc/x86_64-alpine-linux-musl/6.4.0/../../../../x86_64-alpine-linux-musl/bin/ld: cannot find crti.o: No such file or directory
/usr/lib/gcc/x86_64-alpine-linux-musl/6.4.0/../../../../x86_64-alpine-linux-musl/bin/ld: cannot find -lssp_nonshared
collect2: error: ld returned 1 exit status
The command '/bin/bash -c cd go/src && ./all.bash' returned a non-zero code: 2
This causes the installation tests to fail and kicks me out of the Docker image build.
I have gcc installed on the image and have tried setting the environment variable CGO_ENABLED=0 as suggested in other questions, but neither of these things seems to alleviate the problem.
I'm at my wits' end with this problem. Has anybody else run into similar issues in the past? I don't understand why this is happening, as the build runs fine in an Ubuntu container.
Thanks!
I had the same error messages, although I was compiling a different project.
It turns out that Alpine needs the musl-dev package installed for this to work, so I think you need to make sure it is included in your Dockerfile, or install it manually by running apk add --no-cache musl-dev.
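In Dockerfile terms that is roughly one extra line before the build step (assuming an Alpine base that already has go, bash and gcc installed, as in the question):
# musl-dev provides the C runtime startup files (crt*.o) that ld was missing
RUN apk add --no-cache musl-dev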
Either Go isn't correctly installed on the image or the GOROOT is wrong.
Put go tool dist banner and go tool dist env in your all.bash for clues.