I'm converting a .NET Core 2.1 Lambda to 3.1 (or higher) and struggling with resolving the references that convert HTML to PDF. I'm currently using code from this solution https://github.com/HakanL/WkHtmlToPdf-DotNet, which works fine when running as a console app in the container. The Lambda package introduces issues that break this logic. Using a new Lambda solution with this WkHtmlToPdf-DotNet project, the deployed image fails with this exception:
GetModule WkHtmlModuleLinux64 Exception System.DllNotFoundException: Unable to load shared library '/var/task/runtimes/linux-x64/native/libwkhtmltox.so' or one of its dependencies. In order to help diagnose loading problems, consider setting the LD_DEBUG environment variable: libjpeg.so.62: cannot open shared object file: No such file or directory
I set the LD_DEBUG environment variable, which shows this before the exception: file=runtimes/linux-x86/native/libwkhtmltox [0]; dynamically loaded by /var/lang/bin/shared/Microsoft.NETCore.App/5.0.12/libcoreclr.so [0]
I also logged a search for the file, which yields this line:
GetFilePath res: /var/task/runtimes/linux-x64/native/libwkhtmltox.so
Any suggestions how to continue to troubleshoot this?
Thanks,
Reuven
I was able to resolve this issue by installing a few of the packages required by the DinkToPdf library in a Docker container environment.
Installing those packages, however, was not straightforward on Amazon Linux 2. Below is the Dockerfile I had to use to get DinkToPdf working properly.
FROM public.ecr.aws/lambda/dotnet:core3.1
WORKDIR /var/task
COPY "bin/Release/lambda-publish" .
RUN yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
RUN yum install -y libgdiplus \
ca-certificates \
fontconfig \
freetype \
libX11 \
libXext \
libXrender
For this to run I also had to copy the three dependent native library files after the build: libwkhtmltox.dll, libwkhtmltox.dylib, and libwkhtmltox.so.
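For reference, a minimal sketch of what that copy step could look like in the Dockerfile above (the source paths mirror the COPY line already used and are my assumption, not taken from the answer; given that the original error complained about libjpeg.so.62, libjpeg may also need to be added to the yum list):
# Hedged sketch: place the native wkhtmltox libraries next to the handler in /var/task
COPY "bin/Release/lambda-publish/libwkhtmltox.so" ./
COPY "bin/Release/lambda-publish/libwkhtmltox.dll" ./
COPY "bin/Release/lambda-publish/libwkhtmltox.dylib" ./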
Related
I am trying to install the Symfony CLI from its GitHub repository.
So I am downloading the .apk file to my local machine and trying to install it, but I am getting the following error:
Error
#10 0.245 ERROR: /tmp/symfony-cli_5.4.11_x86_64.apk: UNTRUSTED signature
Install Script
SYMFONY_CLI_VERSION="5.4.11"
SYMFONY_ARCH="x86_64"
wget -O /tmp/symfony-cli_${SYMFONY_CLI_VERSION}_${SYMFONY_ARCH}.apk "https://github.com/symfony-cli/symfony-cli/releases/download/v${SYMFONY_CLI_VERSION}/symfony-cli_${SYMFONY_CLI_VERSION}_${SYMFONY_ARCH}.apk"
apk add --repositories-file=/dev/null --no-network --no-cache "/tmp/symfony-cli_${SYMFONY_CLI_VERSION}_${SYMFONY_ARCH}.apk"
I've tried installing the related .apk.pem file: base64 decoding it, getting a public RSA key from the decoded cert, and adding it to "/etc/apk/keys/${SYMFONY_CLI_VERSION}_${SYMFONY_ARCH}#symfony-1C204ECEF7BED6AB.rsa.pub", but the error stays the same.
I've noticed that the file name in /etc/apk/keys/ contains the wrong fingerprint, but I am not sure how to derive the correct one from the resources I have, or even whether that is what's causing the problem.
If I add --allow-untrusted to the apk add command it works. My question is: how can I verify the downloaded file and install it without allowing untrusted packages?
Why don't you follow the manual?
https://symfony.com/download#step-1-install-symfony-cli
sudo apk add --no-cache bash
curl -1sLf 'https://dl.cloudsmith.io/public/symfony/stable/setup.alpine.sh' | sudo -E bash
sudo apk add symfony-cli
Or just look into setup.alpine.sh to see how it adds the signature.
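If you do want to install the standalone .apk rather than the Cloudsmith repository, here is a minimal sketch of the key-naming convention (my own illustration, not from the setup script): apk only trusts a package if /etc/apk/keys/ contains a public key whose file name exactly matches the key name recorded in the package's signature entry, so the file name has to come from the package itself rather than from the version/arch variables.
# Hedged sketch: the first entry in an apk package is its signature file, named
# .SIGN.RSA.<keyname>; the matching public key must be saved as /etc/apk/keys/<keyname>
tar -tzf "/tmp/symfony-cli_${SYMFONY_CLI_VERSION}_${SYMFONY_ARCH}.apk" | head -n 1
# e.g. if it prints .SIGN.RSA.symfony.rsa.pub, save the key as /etc/apk/keys/symfony.rsa.pub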
I have a Docker image that is used for running tests in Jenkins and Bamboo. I need to upgrade the version of g++ used (to something with C++11 support).
I tried using a Dockerfile that looks roughly like the following one:
FROM docker.blahblahblah/centos/6.6:latest
RUN yum install -y git gcc-c++ imake centos-release-scl-rh devtoolset-7-toolchain
# I've tried putting this into /etc/bashrc, ~/.bashrc, ~/.bash_profile
RUN echo "source scl_source enable devtoolset-7" >> ~/.bashrc
My issue is that when g++ is used within the container, it uses the older one instead of the newer one from devtoolset-7, even though the newer one should be picked up via the bashrc. (Maybe I'm misunderstanding how Docker runs commands.)
Could anyone point me in the right direction here?
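For illustration only (my sketch, not a confirmed fix from this thread): both RUN steps and the container's default command go through /bin/sh -c, which never reads ~/.bashrc, so the scl_source line is silently ignored. One common workaround is to put the devtoolset binaries on PATH directly:
# Hedged sketch: enable the SCL repo first, then install the toolset, and expose its
# g++ via PATH instead of relying on an interactive shell sourcing ~/.bashrc
FROM docker.blahblahblah/centos/6.6:latest
RUN yum install -y centos-release-scl-rh && \
    yum install -y git imake devtoolset-7-toolchain
ENV PATH=/opt/rh/devtoolset-7/root/usr/bin:$PATH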
I have my ASP.NET Core app running beautifully (more or less) on microsoft/aspnetcore:2.0-jessie. Now I want to try to get it to deploy to amazonlinux:2.
So far, the biggest hurdle has been libicu. I tried setting Globalization to Invariant, but this caused weird failures in, e.g., MySQL database calls.
Here's the relevant step from my Dockerfile:
RUN curl -L --http1.1 http://download.icu-project.org/files/icu4c/57.1/icu4c-57_1-RHEL6-x64.tgz --output icu.tgz \
&& tar -xf icu.tgz -C / \
&& export LD_LIBRARY_PATH=/usr/local/lib \
&& rm icu.tgz
(SourceForge was down while I was trying to work on this yesterday, which didn't improve matters.)
In any case, I still get the message of doom from .NET Core:
FailFast: Couldn't find a valid ICU package installed on the system. Set the configuration flag System.Globalization.Invariant to true if you want to run with no globalization support.
Any suggestions how to proceed?
Well, I revisited this yesterday. I don't know if it's because the base .tar of the Amazon Linux image has been updated, or because I was doing something wrong last time, but I installed the following packages using yum and all was well:
libunwind
libicu
dotnet-hosting-2.0.5
Note that for the dotnet package I first needed to set up Microsoft's package repository for yum, i.e. run
rpm --import https://packages.microsoft.com/keys/microsoft.asc
and copy the following file to /etc/yum.repos.d/dotnetdev.repo:
[packages-microsoft-com-prod]
name=packages-microsoft-com-prod
baseurl=https://packages.microsoft.com/yumrepos/microsoft-rhel7.3-prod
enabled=1
gpgcheck=1
gpgkey=https://packages.microsoft.com/keys/microsoft.asc
(see Microsoft's instructions for CentOS and other Linux distros)
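Put together as a Dockerfile, the steps above might look roughly like this (a sketch that assumes the repo file is written inline; the package names are the ones listed above):
# Hedged sketch of the full sequence on amazonlinux:2
FROM amazonlinux:2
RUN rpm --import https://packages.microsoft.com/keys/microsoft.asc && \
    printf '[packages-microsoft-com-prod]\nname=packages-microsoft-com-prod\nbaseurl=https://packages.microsoft.com/yumrepos/microsoft-rhel7.3-prod\nenabled=1\ngpgcheck=1\ngpgkey=https://packages.microsoft.com/keys/microsoft.asc\n' > /etc/yum.repos.d/dotnetdev.repo && \
    yum install -y libunwind libicu dotnet-hosting-2.0.5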
I have a requirement that, before the application runs, part of it needs to read an environment variable. For this I have the following Dockerfile:
FROM nodesource/jessie:0.12.7
# install gettext for envsubst
RUN apt-get update
RUN apt-get install -y gettext-base
# cache package.json and node_modules to speed up builds
ADD package.json package.json
RUN npm install
# Add source files
ADD src src
# Substitute value for backend endpoint env var
RUN envsubst < src/js/envapp.js > src/js/app.js
ADD node_modules node_modules
EXPOSE 8000
CMD ["npm","start"]
The envsubst line above reads (or should read) the environment variable $MYENV and substitutes its value. But when I open the file app.js, it's empty.
I checked that the environment variable exists in the container, and it does. Any reason its value is not read and substituted?
I also tried the same command inside the container and it works. It only fails when I run the image.
This is likely because $MYENV is not available to envsubst at the point where that RUN step executes during the image build.
Each RUN command runs in its own shell.
From the Docker documentation:
RUN (the command is run in a shell - /bin/sh -c - shell form)
You also need to source your profile. For example, if the $MYENV environment variable is set in the .bashrc file, you can modify your Dockerfile like this:
RUN source ~/.bashrc && envsubst < src/js/envapp.js > src/js/app.js
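An alternative sketch of mine (not part of this answer): pass the value into the build explicitly with ARG/ENV, so it exists in the environment of every later RUN step without sourcing any profile:
# Hedged sketch: MYENV is supplied at build time and exported for later RUN steps
ARG MYENV
ENV MYENV=${MYENV}
RUN envsubst < src/js/envapp.js > src/js/app.js
# build with: docker build --build-arg MYENV=https://backend.example.com .  (value is illustrative)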
I encountered the same issue, and after much research and fishing through the internet I managed to find a few workarounds. Below I'll list them, along with the risks I could identify at the time of writing this answer.
Solutions:
1.) apt-get install -y gettext - gettext is a standard GNU localization library; one of the tools it ships is envsubst. I can confirm that it works on the docker ubuntu:latest image, and it should work on every flavored version.
2.) npm install envsub - depending on the use case, this approach is better suited to Node-based projects.
3.) the standalone envsubst CLI project - in my opinion it seems a bit overkill to download a custom CLI from a random stranger, but it's also another option.
Risks:
apt-get install -y gettext:
1.) gettext - this approach would NOT be ideal for VMs, since, as with any package, it requires maintenance and updates over time. That isn't really a concern for Docker, though: once a container is initialized and up and running, we can create a bash script to add the package, substitute the env vars, and then remove the package (see the sketch after the resources list below).
2.) It's a bad idea for VMs because gettext vulnerabilities have allowed arbitrary code execution (see the security notice in the resources).
npm install envsub:
1.) envsub - you still have a package to keep updated, and this approach wouldn't be ideal if you're dealing with a different stack and not using Node.js.
NOTE:
There's also a PHP version for those developing a PHP application, and it seems to work with PHP's CLI if you need a custom environment.
Resources:
GetText package library info: https://www.gnu.org/software/gettext/
GetText Risk - https://ubuntu.com/security/notices/USN-3815-2
PHP-GetText - apt-get install -y php-gettext
Custom envsubst CLI: https://github.com/a8m/envsubst
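A minimal sketch of the install-substitute-remove approach mentioned in option 1 (the script name, file paths, and variable are my own assumptions):
#!/bin/sh
# entrypoint.sh - hedged sketch: substitute when the container starts, so variables
# passed with `docker run -e MYENV=...` are actually visible, then drop the package
apt-get update && apt-get install -y --no-install-recommends gettext-base
envsubst < src/js/envapp.js > src/js/app.js
apt-get remove -y gettext-base && rm -rf /var/lib/apt/lists/*
exec npm start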
I suggest that, since you are using Node, you use the npm envsub module.
This module is well tested and is developed with docker in mind.
It avoids the need for relying on other dependencies when you already have the full Node arsenal at your fingertips.
envsub is described as
envsub is envsubst for NodeJS
NodeJS global CLI module providing file-level environment variable substitution via Handlebars
I am the author of the package. I think you will enjoy it.
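A hedged usage sketch, assuming the basic template-in, file-out invocation from the package's README (file names taken from the question):
# Hedged sketch: install envsub and render the template
npm install --save-dev envsub
npx envsub src/js/envapp.js src/js/app.js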
I had some issues with envsubst in Docker.
For some reason envsubst doesn't work when I try to write the output back to the same file (the > redirection truncates file.conf before envsubst reads it). For example, this is not working:
RUN envsubst < file.conf > file.conf
But when I tried to use a temp file the issue disappeared:
RUN envsubst < file.conf > file.conf.temp && cp -f file.conf.temp file.conf
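An equivalent sketch without the temp file (my own variant, same file name assumed): capture the substituted text in a shell variable before overwriting the file:
# Hedged sketch: read and substitute first, then overwrite file.conf in a second step
RUN contents="$(envsubst < file.conf)" && printf '%s\n' "$contents" > file.conf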
I have a makefile for building debian and rpm packages. I have two Jenkins environments, one for Ubuntu and one for CentOS. The debian package works no problem, and the rpm make command works on my machine, but not on Jenkins. Jenkins returns the following error:
cp: cannot stat `/root/rpmbuild/SOURCES/myfile.file': No such file or directory
error: Bad exit status from /var/tmp/rpm-tmp.mII8KL (%install)
I was getting similar errors when developing the package but eventually figured everything out, and all was good. I think the problem may lie with $RPM_BUILD_ROOT, %{buildroot}, or _topdir options. Nothing I have tried has led me anywhere however.
Here is my (modified) Makefile:
# a list of tools we depend on and must install if they're missing
DEBTOOLS=/usr/bin/debuild-pbuilder
RPMTOOLS=/usr/bin/rpmbuild
# convenience target for "make deb"
deb: my-package_1.0_all.deb
# convenience target for "make rpm".
rpm: my-package-1.0-Public.x86_64.rpm
# the target package (on Ubuntu at least)
my-package_1.0_all.deb: $(DEBTOOLS)
	cd my-package; debuild-pbuilder -us -uc
my-package-1.0-Public.x86_64.rpm: $(RPMTOOLS)
	cd rpmbuild; rpmbuild -bb SPECS/my-package.spec
/usr/bin/debuild-pbuilder:
	apt-get -y install pbuilder
/usr/bin/rpmbuild:
	yum -y install rpm-build
This is my spec file:
Summary: My Package
Name: my-package
Version: 1.0
Release: Public
Group: Applications/System
License: Public
Requires: external-package
Source1: myfile.file
%description
blah blah
%files
%config /etc/myfile.file
%install
mkdir -p $RPM_BUILD_ROOT/etc/
cp %{SOURCE1} %{buildroot}/etc/myfile.file
%post
ln -sf /etc/myfile.file /etc/external-package.conf
The problem was in fact that the file wasn't being found (obviously). For me this had a lot to do with the confusing nature of building RPM files. When the make command is executed and the rpmbuild command is called, I needed to be able to specify the directory. The documentation states you can use rpmbuild -D '_topdir .' -bb path/to/spec.spec to set the _topdir variable to the local directory you call from. This made sense, as . represents the current directory in Linux.
However the actual call needs to be
rpmbuild -D "_topdir `pwd`" -bb path/to/spec.spec
This doesn't look all that different, but it is crucial to use double quotes: with single quotes the shell passes the backticks through literally, whereas with double quotes `pwd` is expanded to the absolute path of the directory you call it from. Using this command runs the build within that directory, and after this rpmbuild will copy and handle the files for you as it should (which is confusing in itself).
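Applied to the Makefile in the question, the rpm target would then look something like this (a sketch; the paths come from the question):
# the rpm rule with _topdir pointed at the rpmbuild directory we cd into
my-package-1.0-Public.x86_64.rpm: $(RPMTOOLS)
	cd rpmbuild; rpmbuild -D "_topdir `pwd`" -bb SPECS/my-package.spec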