Docker ClamAV AWS Lambda layer build on Mac M1

Attempting to run the Dockerfile from here:
https://dev.to/sutt0n/using-a-lambda-container-to-scan-files-with-clamav-via-serverless-2a5g
I get an error:
=> ERROR [10/52] RUN yumdownloader -x *i686 --archlist=x86_64 clamav
#13 4.564 64 packages excluded due to repository priority protections
#13 4.623 No Match for argument clamav
#13 4.623 Nothing to download
I am guessing that I need the x86_64 binary to run on AWS. How do I get this to work?
EPEL is installed:
=> CACHED [ 9/54] RUN amazon-linux-extras install epel -y 0.0s
=> [10/54] RUN yum install -y epel-release 3.4s
=> [11/54] RUN yum install -y cpio yum-utils tar.x86_64 gzip zip
I started the Docker container and logged in:
docker exec -it 31a81f061b7e bash
Editing /etc/yum/pluginconf.d/priorities.conf does nothing.
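(As an aside, the priorities plugin can also be disabled per command instead of editing its config; a sketch, assuming the stock plugin name:)
yum --disableplugin=priorities search clamav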
# yum repolist
Loaded plugins: ovl, priorities
213 packages excluded due to repository priority protections
repo id repo name status
amzn2-core/2/aarch64 Amazon Linux 2 core repository 19985
amzn2extra-epel/2/aarch64 Amazon Extras repo for epel 1
epel/aarch64 Extra Packages for Enterprise Linux 7 - aarch64 12775+213
repolist: 32761
This shows the ARM (aarch64) architecture.
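A quick way to confirm what architecture the container itself is running (uname ships with the amazonlinux image):
# uname -m        # prints aarch64 on an M1 host without a platform override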
# yum search clamav
Loaded plugins: ovl
=== N/S matched: clamav ========
clamav-filesystem.noarch : Filesystem structure for clamav
clamav-unofficial-sigs.noarch : Scripts to download unofficial clamav signatures
clamav.aarch64 : End-user tools for the Clam Antivirus scanner
clamav-data.noarch : Virus signature data for the Clam Antivirus scanner.
clamav-devel.aarch64 : Header files and libraries for the Clam Antivirus scanner
clamav-lib.aarch64 : Dynamic libraries for the Clam Antivirus scanner
clamav-milter.aarch64 : Milter module for the Clam Antivirus scanner
clamav-update.aarch64 : Auto-updater for the Clam Antivirus scanner data-files
Probably clamav.aarch64. Take a guess, edit the Dockerfile. Sadly, no:
ERROR [11/53] RUN yumdownloader -x *i686 --archlist=x86_64 clamav.x86_64

The hunch regarding the platform turned out to be correct. Take Sutton's Dockerfile and specify a platform explicitly. I chose 'amd64' rather than 'x86-64', which seems the more logical name, following a derivative blog post:
FROM --platform=linux/amd64 amazonlinux:2
WORKDIR /home/build
RUN set -e
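The same effect can be had without hard-coding the platform in the Dockerfile, by passing it at build time instead (a sketch; the image tag is illustrative):
docker build --platform=linux/amd64 -t clamav-layer .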

Related

CMake error when trying to build a project inside a Docker container: can't find cudart lib

Using an NVIDIA Jetson TX2 running Ubuntu 18.04, with docker, nvidia-docker2, and l4t-cuda installed on the host system.
Main error when compiling:
CMake Error at /usr/local/lib/python3.6/dist-packages/cmake/data/share/cmake-3.25/Modules/FindPackageHandleStandardArgs.cmake:230 (message):
Could NOT find CUDA (missing: CUDA_CUDART_LIBRARY) (found suitable version
"10.2", minimum required is "10.2")
Call Stack (most recent call first):
/usr/local/lib/python3.6/dist-packages/cmake/data/share/cmake-3.25/Modules/FindPackageHandleStandardArgs.cmake:600 (_FPHSA_FAILURE_MESSAGE)
/usr/local/lib/python3.6/dist-packages/cmake/data/share/cmake-3.25/Modules/FindCUDA.cmake:1266 (find_package_handle_standard_args)
CMakeLists.txt:17 (find_package)
CMakeLists.txt:
cmake_minimum_required (VERSION 3.5)
project(vision)
enable_testing()
# Variables scopes follow standard rules
# Variables defined here will carry over to its children, ergo subdirectories
# Setup ZED libs
find_package(ZED 3 REQUIRED)
include_directories(${ZED_INCLUDE_DIRS})
link_directories(${ZED_LIBRARY_DIR})
# Setup CUDA libs for zed and ai modules
find_package(CUDA ${ZED_CUDA_VERSION} REQUIRED)
include_directories(${CUDA_INCLUDE_DIRS})
link_directories(${CUDA_LIBRARY_DIRS})
# Setup OpenCV libs
find_package(OpenCV REQUIRED)
include_directories(${OpenCV_INCLUDE_DIRS})
# Check if OpenMP is installed
find_package(OpenMP)
checkPackage("OpenMP" "OpenMP not found, please install it to improve performances: 'sudo apt install libomp-dev'")
# TensorRT
set(TENSORRT_ROOT /usr/src/tensorrt/)
find_path(TENSORRT_INCLUDE_DIR NvInfer.h
HINTS ${TENSORRT_ROOT} PATH_SUFFIXES include/)
message(STATUS "Found TensorRT headers at ${TENSORRT_INCLUDE_DIR}")
set(MODEL_INCLUDE ../code/includes)
set(MODEL_LIB_DIR libs)
set(YAML_INCLUDE ../depends/yaml-cpp/include)
set(YAML_LIB_DIR ../depends/yaml-cpp/libs)
include_directories(${MODEL_INCLUDE} ${YAML_INCLUDE})
link_directories(${MODEL_LIB_DIR} ${YAML_LIB_DIR})
# Setup Darknet libs
#find_library(DARKNET_LIBRARY NAMES dark libdark.so libdarknet.so)
#find_package(dark REQUIRED)
# Setup HTTP libs
find_package(httplib REQUIRED)
find_package(nlohmann_json 3.2.0 REQUIRED)
# System libs
SET(SPECIAL_OS_LIBS "pthread")
link_libraries(stdc++fs)
# Optional definitions
add_definitions(-std=c++17 -g -O3)
# Add sub directories
add_subdirectory(zed_module)
add_subdirectory(ai_module)
add_subdirectory(http_api_module)
add_subdirectory(executable_module)
option(RUN_TESTS "Build the tests" off)
if (RUN_TESTS OR CMAKE_BUILD_TYPE MATCHES Debug)
add_subdirectory(test)
endif()
Steps that fail in the Dockerfile, using image stereolabs/zed:3.7-devel-jetson-jp4.6:
WORKDIR /opt
RUN git clone https://github.com/Cruiz102/Vision-Module
WORKDIR /opt/Vision-Module
RUN mkdir build-debug && cd build-debug
RUN pwd
WORKDIR /opt/Vision-Module/build-debug
RUN cmake -DCMAKE_BUILD_TYPE=Release ..
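If the toolkit is present in the image but FindCUDA still cannot locate cudart, one thing worth trying is pointing CMake at the toolkit root explicitly (a sketch; /usr/local/cuda-10.2 is the usual JetPack 4.x location, adjust as needed):
RUN cmake -DCMAKE_BUILD_TYPE=Release -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-10.2 ..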
contents of /etc/docker/daemon.json
{
"runtimes": {
"nvidia": {
"path": "/usr/bin/nvidia-container-runtime",
"runtimeArgs": []
}
},
"default-runtime": "nvidia"
}
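Note that edits to daemon.json only take effect after the daemon is restarted; docker build picks up the default runtime from the restarted daemon:
sudo systemctl restart docker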
Using the Jetson, I've tried using flags to set the toolkit dir, editing daemon.json, reinstalling dependencies, changing Docker images, installing and reinstalling cudart on the host, changing flags, and finishing the build in interactive mode. However, I always get the same error.
I looked into Docker some time ago, so I'm not an expert, but as far as I remember, Docker containers are like virtual machines: it doesn't matter whether your PC has CUDA support or whether the libraries are installed on the host; they are not part of your Docker "VM". And since Docker runs without a GUI, this stuff will not be installed right away.
I don't see any code to install it in your container. Are you using a base image that has CUDA support? If not, you need to install it in your Dockerfile, not on your host.
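One quick way to check whether this diagnosis applies, i.e. whether CUDA is visible inside the container at all (illustrative; the image tag is the one from the question):
docker run --rm stereolabs/zed:3.7-devel-jetson-jp4.6 ls /usr/local/cuda/lib64
# if libcudart.so is missing from the listing, the container cannot see the host's CUDA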

How to resolve libwkhtmltox.so reference in .Net AWS Lambda Docker image

I'm converting a .NET Core 2.1 Lambda to 3.1 (or higher) and struggling with resolving the references that convert HTML to PDF. I'm currently using code from this solution, https://github.com/HakanL/WkHtmlToPdf-DotNet, which works fine running a console app in the container. The Lambda packaging introduces issues that break this logic. Using a new Lambda solution with this WkHtmlToPdf-DotNet project, the deployed image fails with this exception:
GetModule WkHtmlModuleLinux64 Exception System.DllNotFoundException: Unable to load shared library '/var/task/runtimes/linux-x64/native/libwkhtmltox.so' or one of its dependencies. In order to help diagnose loading problems, consider setting the LD_DEBUG environment variable: libjpeg.so.62: cannot open shared object file: No such file or directory
I am using the LD_DEBUG environment variable which shows before the exception: file=runtimes/linux-x86/native/libwkhtmltox [0]; dynamically loaded by /var/lang/bin/shared/Microsoft.NETCore.App/5.0.12/libcoreclr.so [0]
And I also output to the log a search for the file which yields this line:
GetFilePath res: /var/task/runtimes/linux-x64/native/libwkhtmltox.so
Any suggestions how to continue to troubleshoot this?
Thanks,
Reuven
I was able to resolve this issue by installing a few of the packages required by the DinkToPdf library in a Docker container environment.
Installing those packages was not straightforward on Amazon Linux 2 instances. Below is the Dockerfile I had to use for DinkToPdf to work properly.
FROM public.ecr.aws/lambda/dotnet:core3.1
WORKDIR /var/task
COPY "bin/Release/lambda-publish" .
RUN yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
RUN yum install -y libgdiplus \
ca-certificates \
fontconfig \
freetype \
libX11 \
libXext \
libXrender
For this to run I also had to copy the three dependent library files after the build: libwkhtmltox.dll, libwkhtmltox.dylib, and libwkhtmltox.so.
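To see whether anything else is still missing inside the image, ldd can be run against the native library during the build (illustrative; the path is the one from the exception above):
RUN ldd /var/task/runtimes/linux-x64/native/libwkhtmltox.so
# every line reading 'not found' names a library that still needs a yum package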

Centos image build fails when I use rpms via docker

I'm currently working on a Dockerfile. I am trying to build a CentOS 7.6 base image, and I get a failure when I try to use any yum packages. I'm not sure what the cause of this is.
I've already attempted to make the user root to see if that makes a difference, but it doesn't help the situation. I've also done a docker pull centos to receive the latest version of CentOS.
I simplified the code and still get the same error.
FROM centos
ARG MONGO_RAILS_VERSION="0.0"
RUN yum install -y vim
# curl -L get.rvm.io | bash -s stable \
# rvm install 2.3.1 \
# rvm use 2.3.1 --default \
# gem install rails
I get an error that looks something like this:
One of the configured repositories failed (Unknown),
and yum doesn't have enough cached data to continue. At this point the only
safe thing yum can do is fail. There are a few ways to work "fix" this:
1. Contact the upstream for the repository and get them to fix the problem.
2. Reconfigure the baseurl/etc. for the repository, to point to a working
upstream. This is most often useful if you are using a newer
distribution release than is supported by the repository (and the
packages for the previous distribution release still work).
3. Run the command with the repository temporarily disabled
yum --disablerepo=<repoid> ...
4. Disable the repository permanently, so yum won't use it by default. Yum
will then just ignore the repository until you permanently enable it
again or use --enablerepo for temporary usage:
yum-config-manager --disable <repoid>
or
subscription-manager repos --disable=<repoid>
5. Configure the failing repository to be skipped, if it is unavailable.
Note that yum will try to contact the repo. when it runs most commands,
so will have to try and fail each time (and thus. yum will be be much
slower). If it is a very temporary problem though, this is often a nice
compromise:
yum-config-manager --save --setopt=<repoid>.skip_if_unavailable=true
Cannot find a valid baseurl for repo: base/7/x86_64
Could not retrieve mirrorlist http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=os&infra=container error was
14: curl#7 - "Failed to connect to 2001:1b48:203::4:10: Network is unreachable"
The command '/bin/sh -c yum install vim' returned a non-zero code: 1
You may want to have a look at Set build-time variables (--build-arg):
$ docker build --build-arg HTTP_PROXY=http://10.20.30.2:1234 --build-arg FTP_PROXY=http://40.50.60.5:4567 .
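Since the mirrorlist error above shows an unreachable IPv6 address, another option is to force yum to resolve mirrors over IPv4 only (a sketch; ip_resolve is a standard yum.conf option):
RUN echo "ip_resolve=4" >> /etc/yum.conf && \
    yum install -y vim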

pyodbc not working in web-service container, Azure Model Management

I am trying to create a web-service via Azure Model Management and am struggling.
I've followed the instructions and have managed to operationalize it locally in a Docker container. My score.py file includes a query to a SQL database using pyodbc. This functions perfectly when I test it in my local environment using the ML Workbench; however, once it has been deployed in a Docker container I come across this error:
'Response Content': b'(\'01000\', "[01000] [unixODBC][Driver Manager]Can\'t open lib \'ODBC Driver 13 for SQL Server\' : file not found (0) (SQLDriverConnect)")'
I have included pyodbc in my conda_dependencies.yml.
Has anyone got any suggestions? Are there any further dependencies that I need to include?
Azure seems to have recently added the ability to customize container images using what they call a 'Docker Steps file'. I have practically no experience with Docker, but after reading this question I tried including a 'Docker Steps file' containing this:
ADD odbcinst.ini /etc/odbcinst.ini
RUN apt-get update
RUN apt-get install -y tdsodbc unixodbc-dev
RUN apt install unixodbc-bin -y
RUN apt-get clean -y
However, I understand 'ADD' commands are not possible in this type of file, so this seems to have made no difference.
Hopefully this all makes sense! Any advice would be very much appreciated! I hope I'm not the only one stumbling my way through Azure ML!
EDIT:
I'm still stuck, but making progress...
I accessed the root of the container using:
docker exec -ti -u root container_name bash
From here I ran odbcinst -j, resulting in:
unixODBC 2.3.6
DRIVERS............: /etc/odbcinst.ini
SYSTEM DATA SOURCES: /etc/odbc.ini
FILE DATA SOURCES..: /etc/ODBCDataSources
USER DATA SOURCES..: /root/.odbc.ini
SQLULEN Size.......: 8
SQLLEN Size........: 8
SQLSETPOSIROW Size.: 8
I couldn't seem to actually locate odbc.ini, so I followed these instructions for installing 'ODBC Driver 13' on Ubuntu 16.04. Now when I run the service I get a different error:
{'Error': MlCliError({'Error': 'Error occurred while attempting to score service myapp.', 'Response Code': 502, 'Response Content': b'<html>\r\n<head><title>502 Bad Gateway</title></head>\r\n<body bgcolor="white">\r\n<center><h1>502 Bad Gateway</h1></center>\r\n<hr><center>nginx/1.10.3 (Ubuntu)</center>\r\n</body>\r\n</html>\r\n', 'Response Headers': {'Content-Length': '182', 'Content-Type': 'text/html', 'Date': 'Wed, 18 Apr 2018 14:06:30 GMT', 'Server': 'nginx/1.10.3 (Ubuntu)', 'Connection': 'keep-alive'}},), 'Azure-cli-ml Version': '0.1.0b2'}
I have also tried altering my score.py file to return pyodbc.drivers(); this results in an empty list '[]'.
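For reference, Microsoft's documented install steps for the SQL Server ODBC driver on Ubuntu 16.04 look roughly like this in Dockerfile form (a sketch; the package is msodbcsql for driver 13, msodbcsql17 for driver 17):
RUN apt-get update && apt-get install -y curl apt-transport-https
RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -
RUN curl https://packages.microsoft.com/config/ubuntu/16.04/prod.list > /etc/apt/sources.list.d/mssql-release.list
RUN apt-get update && ACCEPT_EULA=Y apt-get install -y msodbcsql unixodbc-dev
An empty pyodbc.drivers() means no driver is registered in /etc/odbcinst.ini; a successful package install registers it there.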

Makefile for building an rpm works locally, but not in Jenkins

I have a Makefile for building Debian and RPM packages. I have two Jenkins environments, one for Ubuntu and one for CentOS. The Debian package builds with no problem, and the rpm make target works on my machine, but not on Jenkins. Jenkins returns the following error:
cp: cannot stat '/root/rpmbuild/SOURCES/myfile.file': No such file or directory
error: Bad exit status from /var/tmp/rpm-tmp.mII8KL (%install)
I was getting similar errors when developing the package but eventually figured everything out, and all was good. I think the problem may lie with the $RPM_BUILD_ROOT, %{buildroot}, or _topdir options, but nothing I have tried has led me anywhere.
Here is my (modified) Makefile:
# a list of tools we depend on and must install if they're missing
DEBTOOLS=/usr/bin/debuild-pbuilder
RPMTOOLS=/usr/bin/rpmbuild
# convenience target for "make deb"
deb: my-package_1.0_all.deb
# convenience target for "make rpm".
rpm: my-package-1.0-Public.x86_64.rpm
# the target package (on Ubuntu at least)
my-package_1.0_all.deb: $(DEBTOOLS)
	cd my-package; debuild-pbuilder -us -uc
my-package-1.0-Public.x86_64.rpm: $(RPMTOOLS)
	cd rpmbuild; rpmbuild -bb SPECS/my-package.spec
/usr/bin/debuild-pbuilder:
	apt-get -y install pbuilder
/usr/bin/rpmbuild:
	yum -y install rpm-build
This is my spec file:
Summary: My Package
Name: my-package
Version: 1.0
Release: Public
Group: Applications/System
License: Public
Requires: external-package
Source1: myfile.file
%description
blah blah
%files
%config /etc/myfile.file
%install
mkdir -p $RPM_BUILD_ROOT/etc/
cp %{SOURCE1} %{buildroot}/etc/myfile.file
%post
ln -sf /etc/myfile.file /etc/external-package.conf
The problem was in fact that the file wasn't being found (obviously). For me this had a lot to do with the confusing nature of building RPM files. When the make command is executed and the rpmbuild command is called, I needed to be able to specify the directory. The documentation states you can use rpmbuild -D '_topdir .' -bb path/to/spec.spec to set the _topdir variable to the local directory you call from. This made sense, as . represents the current directory in Linux.
However, the actual call needs to be:
rpmbuild -D "_topdir `pwd`" -bb path/to/spec.spec
This doesn't look all that different, but the double quotes are crucial: the shell expands `pwd` inside double quotes into an absolute path, whereas single quotes pass the backticks through literally. Using this command runs the build within the directory you call it from. After this, rpmbuild will copy and handle the files for you as it should (which is confusing in itself).
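Applied to the Makefile above, the rpm target becomes (the cd puts `pwd` at the rpmbuild directory, so SOURCES and SPECS resolve beneath it):
my-package-1.0-Public.x86_64.rpm: $(RPMTOOLS)
	cd rpmbuild; rpmbuild -D "_topdir `pwd`" -bb SPECS/my-package.spec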
