How to build a custom image using 'python:alpine' for use with AWS Lambda? - docker

This page describes creating a Docker image for use with Lambda using 'python:buster'
https://docs.aws.amazon.com/lambda/latest/dg/images-create.html#images-create-from-alt
I would like to do the same with 'python:alpine'
but I run into problems when trying to install 'libcurl4-openssl-dev'
Has anyone successfully built a 'python:alpine' image for use in lambda?

The package "libcurl4-openssl-dev" belongs to the Debian/Ubuntu family and does not exist in the Alpine Linux distro; Alpine only provides libcurl.
By the way, you can search Alpine packages here: https://pkgs.alpinelinux.org/packages
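If you're unsure of a package's Alpine name, you can also search from inside the image itself; a quick sketch (output depends on the current package index):
# Search the Alpine package index for curl-related packages
docker run --rm python:alpine3.12 sh -c "apk update && apk search curl"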
If you want to build a custom Lambda Python runtime on Alpine, then this Dockerfile might be useful.
I made slight modifications to fit the Alpine Linux world.
# Define function directory
ARG FUNCTION_DIR="/function"
FROM python:alpine3.12 AS python-alpine
RUN apk add --no-cache \
    libstdc++
FROM python-alpine as build-image
# Install aws-lambda-cpp build dependencies
RUN apk add --no-cache \
    build-base \
    libtool \
    autoconf \
    automake \
    libexecinfo-dev \
    make \
    cmake \
    libcurl
# Include global arg in this stage of the build
ARG FUNCTION_DIR
# Create function directory
RUN mkdir -p ${FUNCTION_DIR}
# Copy function code
COPY app/* ${FUNCTION_DIR}
# Install the runtime interface client
RUN python -m pip install --upgrade pip
RUN python -m pip install \
    --target ${FUNCTION_DIR} \
    awslambdaric
# Multi-stage build: grab a fresh copy of the base image
FROM python-alpine
# Include global arg in this stage of the build
ARG FUNCTION_DIR
# Set working directory to function root directory
WORKDIR ${FUNCTION_DIR}
# Copy in the build image dependencies
COPY --from=build-image ${FUNCTION_DIR} ${FUNCTION_DIR}
ENTRYPOINT [ "/usr/local/bin/python", "-m", "awslambdaric" ]
CMD [ "app.handler" ]
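A minimal usage sketch (the tag is a placeholder; local invocation assumes the AWS Lambda Runtime Interface Emulator is proxying to the container, as described in the AWS page linked above):
# Build the image
docker build -t lambda-python-alpine .
# With the Runtime Interface Emulator listening on port 9000, invoke locally:
curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'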

Related

A Dockerfile with 2 ENTRYPOINT

I am learning about Docker, specifically how to write a Dockerfile. Recently I came across this one and couldn't understand why there are two ENTRYPOINT instructions in it.
The original file is at this link: CosmWasm/rust-optimizer Dockerfile. The code below is its current content.
FROM rust:1.60.0-alpine as targetarch
ARG BUILDPLATFORM
ARG TARGETPLATFORM
ARG TARGETARCH
ARG BINARYEN_VERSION="version_105"
RUN echo "Running on $BUILDPLATFORM, building for $TARGETPLATFORM"
# AMD64
FROM targetarch as builder-amd64
ARG ARCH="x86_64"
# ARM64
FROM targetarch as builder-arm64
ARG ARCH="aarch64"
# GENERIC
FROM builder-${TARGETARCH} as builder
# Download binaryen sources
ADD https://github.com/WebAssembly/binaryen/archive/refs/tags/$BINARYEN_VERSION.tar.gz /tmp/binaryen.tar.gz
# Extract and compile wasm-opt
# Adapted from https://github.com/WebAssembly/binaryen/blob/main/.github/workflows/build_release.yml
RUN apk update && apk add build-base cmake git python3 clang ninja
RUN tar -xf /tmp/binaryen.tar.gz
RUN cd binaryen-version_*/ && cmake . -G Ninja -DCMAKE_CXX_FLAGS="-static" -DCMAKE_C_FLAGS="-static" -DCMAKE_BUILD_TYPE=Release -DBUILD_STATIC_LIB=ON && ninja wasm-opt
# Run tests
RUN cd binaryen-version_*/ && ninja wasm-as wasm-dis
RUN cd binaryen-version_*/ && python3 check.py wasm-opt
# Install wasm-opt
RUN strip binaryen-version_*/bin/wasm-opt
RUN mv binaryen-version_*/bin/wasm-opt /usr/local/bin
# Check cargo version
RUN cargo --version
# Check wasm-opt version
RUN wasm-opt --version
# Download sccache and verify checksum
ADD https://github.com/mozilla/sccache/releases/download/v0.2.15/sccache-v0.2.15-$ARCH-unknown-linux-musl.tar.gz /tmp/sccache.tar.gz
RUN sha256sum /tmp/sccache.tar.gz | egrep '(e5d03a9aa3b9fac7e490391bbe22d4f42c840d31ef9eaf127a03101930cbb7ca|90d91d21a767e3f558196dbd52395f6475c08de5c4951a4c8049575fa6894489)'
# Extract and install sccache
RUN tar -xf /tmp/sccache.tar.gz
RUN mv sccache-v*/sccache /usr/local/bin/sccache
RUN chmod +x /usr/local/bin/sccache
# Check sccache version
RUN sccache --version
# Add scripts
ADD optimize.sh /usr/local/bin/optimize.sh
RUN chmod +x /usr/local/bin/optimize.sh
ADD optimize_workspace.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/optimize_workspace.sh
# Being required for gcc linking of build_workspace
RUN apk add --no-cache musl-dev
ADD build_workspace build_workspace
RUN cd build_workspace && \
    echo "Installed targets:" && (rustup target list | grep installed) && \
    export DEFAULT_TARGET="$(rustc -vV | grep 'host:' | cut -d' ' -f2)" && echo "Default target: $DEFAULT_TARGET" && \
    # Those RUSTFLAGS reduce binary size from 4MB to 600 KB
    RUSTFLAGS='-C link-arg=-s' cargo build --release && \
    ls -lh target/release/build_workspace && \
    (ldd target/release/build_workspace || true) && \
    mv target/release/build_workspace /usr/local/bin
#
# base-optimizer
#
FROM rust:1.60.0-alpine as base-optimizer
# Being required for gcc linking
RUN apk update && \
    apk add --no-cache musl-dev
# Setup Rust with Wasm support
RUN rustup target add wasm32-unknown-unknown
# Add wasm-opt
COPY --from=builder /usr/local/bin/wasm-opt /usr/local/bin
#
# rust-optimizer
#
FROM base-optimizer as rust-optimizer
# Use sccache. Users can override this variable to disable caching.
COPY --from=builder /usr/local/bin/sccache /usr/local/bin
ENV RUSTC_WRAPPER=sccache
# Assume we mount the source code in /code
WORKDIR /code
# Add script as entry point
COPY --from=builder /usr/local/bin/optimize.sh /usr/local/bin
ENTRYPOINT ["optimize.sh"]
# Default argument when none is provided
CMD ["."]
#
# workspace-optimizer
#
FROM base-optimizer as workspace-optimizer
# Assume we mount the source code in /code
WORKDIR /code
# Add script as entry point
COPY --from=builder /usr/local/bin/optimize_workspace.sh /usr/local/bin
COPY --from=builder /usr/local/bin/build_workspace /usr/local/bin
ENTRYPOINT ["optimize_workspace.sh"]
# Default argument when none is provided
CMD ["."]
According to this document, only the last ENTRYPOINT in a Dockerfile takes effect. But these two are in two different build stages, so is there a case where both ENTRYPOINTs take effect, or is this just a bug?
You can keep replacing the entrypoint further down the file; however, this is a multi-stage Dockerfile, so if you build a given stage, you get that stage's entrypoint.
For example:
docker build --target rust-optimizer .
will build up to and including that stage; the resulting image runs optimize.sh when started.
However,
docker build .
builds the final stage (workspace-optimizer), and the resulting image runs optimize_workspace.sh when started.
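To see for yourself which entrypoint each build produced, you can inspect the resulting images (a quick sketch; the tags are placeholders):
docker build --target rust-optimizer -t rust-opt .
docker build -t workspace-opt .
docker inspect --format '{{.Config.Entrypoint}}' rust-opt       # [optimize.sh]
docker inspect --format '{{.Config.Entrypoint}}' workspace-opt  # [optimize_workspace.sh]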

version `GLIBC_2.29' not found

I am basing my Dockerfile on the rust base image.
When deploying my image to an azure container, I receive this log:
./bot: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by ./bot)
./bot is my application.
The error also occurs when I perform docker run on my Linux Mint desktop.
How can I get GLIBC into my container?
Dockerfile
FROM rust:1.50
WORKDIR /usr/vectorizer/
COPY ./Cargo.toml /usr/vectorizer/Cargo.toml
COPY ./target/release/trampoline /usr/vectorizer/trampoline
COPY ./target/release/bot /usr/vectorizer/bot
COPY ./target/release/template.svg /usr/vectorizer/template.svg
RUN apt-get update && \
    apt-get dist-upgrade -y && \
    apt-get install -y musl-tools && \
    rustup target add x86_64-unknown-linux-musl
CMD ["./trampoline"]
Now I don't totally understand the dependencies of your particular project but the below Dockerfile should get you started.
What you want to do is compile in an image that has all of your dev dependencies and then move the build artifacts to a much smaller (but compatible) image.
FROM rust:1.50 as builder
# (the rust image already runs as root; just create the build directory)
RUN mkdir bot
WORKDIR /bot
ADD . ./
RUN cargo clean && \
    cargo build -vv --release
FROM debian:buster-slim
ARG APP=/usr/src/app
ENV APP_USER=appuser
RUN groupadd $APP_USER \
    && useradd -g $APP_USER $APP_USER \
    && mkdir -p ${APP}
# Copy the compiled binaries into the new container.
COPY --from=builder /bot/target/release/bot ${APP}/bot
RUN chown -R $APP_USER:$APP_USER ${APP}
USER $APP_USER
WORKDIR ${APP}
# Only the bot binary was copied into this stage, so run it (not ./trampoline)
CMD ["./bot"]
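Since the original Dockerfile already installs musl-tools and adds the x86_64-unknown-linux-musl target without using them, a fully static musl build is another option that sidesteps the GLIBC version problem entirely. A minimal sketch, assuming the crate has no C dependencies that need extra musl setup:
FROM rust:1.50 as builder
WORKDIR /bot
COPY . .
RUN rustup target add x86_64-unknown-linux-musl && \
    cargo build --release --target x86_64-unknown-linux-musl
# The statically linked binary no longer depends on the host's glibc
FROM debian:buster-slim
COPY --from=builder /bot/target/x86_64-unknown-linux-musl/release/bot /usr/local/bin/bot
CMD ["bot"]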

Python.h: No such file or directory on Amazon Linux Lambda Container

I am trying to build this public.ecr.aws/lambda/python:3.6 based Dockerfile with a requirements.txt file that contains some libraries that need gcc/g++ to build. I get an error about a missing Python.h file, despite the fact that I installed the Python development package and /usr/include/python3.6m/Python.h exists in the file system.
Dockerfile
FROM public.ecr.aws/lambda/python:3.6
RUN yum install -y gcc gcc-c++ python36-devel.x86_64
RUN pip install --upgrade pip && \
    pip install cyquant
COPY app.py ./
CMD ["app.handler"]
When I build this with
docker build -t redux .
I get the following error
cyquant/dimensions.cpp:4:20: fatal error: Python.h: No such file or directory
#include "Python.h"
^
compilation terminated.
error: command 'gcc' failed with exit status 1
Notice, however, that my Dockerfile yum installs the development package. I have also tried the yum package python36-devel.i686 with no change.
What am I doing wrong?
The pip that you're executing lives in /var/lang/bin/pip, whereas the Python development headers you installed live under the /usr prefix (the Lambda image's own Python lives under /var/lang).
Presumably you could use /usr/bin/pip directly to install, but I'm not sure whether that works correctly with the Lambda environment.
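A quick diagnostic sketch to confirm the mismatch from outside the image:
# Show which pip and python are first on PATH in the Lambda base image
docker run --rm --entrypoint sh public.ecr.aws/lambda/python:3.6 -c 'command -v pip; pip -V; command -v python'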
I was able to duplicate the behavior of the AWS Lambda functionality without their Docker image and it works just fine. This is the Dockerfile I am using.
ARG FUNCTION_DIR="/function/"
FROM python:3.6 AS build
ARG FUNCTION_DIR
ARG NETRC_PATH
RUN echo "${NETRC_PATH}" > /root/.netrc
RUN mkdir -p ${FUNCTION_DIR}
COPY requirements.txt ${FUNCTION_DIR}
WORKDIR ${FUNCTION_DIR}
RUN pip install --upgrade pip && \
    pip install --target ${FUNCTION_DIR} awslambdaric && \
    pip install --target ${FUNCTION_DIR} --no-warn-script-location -r requirements.txt
FROM python:3.6
ARG FUNCTION_DIR
WORKDIR ${FUNCTION_DIR}
COPY --from=build ${FUNCTION_DIR} ${FUNCTION_DIR}
COPY main.py ${FUNCTION_DIR}
ENV MPLCONFIGDIR=/tmp/mplconfig
ENTRYPOINT ["/usr/local/bin/python", "-m", "awslambdaric"]
CMD ["main.handler"]
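Because this image is not based on an AWS Lambda base image, it does not bundle the Runtime Interface Emulator; for local testing you can mount the emulator and put it in front of awslambdaric, following the AWS docs (the paths and tag below are assumptions):
docker build -t my-function .
docker run --rm -v ~/.aws-lambda-rie:/aws-lambda-rie -p 9000:8080 \
    --entrypoint /aws-lambda-rie/aws-lambda-rie \
    my-function /usr/local/bin/python -m awslambdaric main.handler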

Lambda docker image won't start if I overwrite entrypoint from lambda console

I have this Dockerfile
ARG FUNCTION_DIR="/opt/"
FROM node:10.13-alpine@sha256:22c8219b21f86dfd7398ce1f62c48a022fecdcf0ad7bf3b0681131bd04a023a2 AS BUILD_IMAGE
ARG FUNCTION_DIR
RUN apk --update add cmake autoconf automake libtool binutils libexecinfo-dev python2 gcc make g++ zlib-dev
ENV NODE_ENV=production
ENV PYTHON=/usr/bin/python2
RUN mkdir -p ${FUNCTION_DIR}
WORKDIR ${FUNCTION_DIR}
COPY package.json yarn.lock ./
RUN yarn --frozen-lockfile
RUN npm prune --production
RUN yarn cache clean
RUN npm cache clean --force
FROM node:10.13-alpine@sha256:22c8219b21f86dfd7398ce1f62c48a022fecdcf0ad7bf3b0681131bd04a023a2
ARG FUNCTION_DIR
ENV NODE_ENV=production
ENV NODE_OPTIONS=--max_old_space_size=4096
RUN apk update \
    && apk upgrade \
    && apk add mongodb-tools fontconfig dumb-init \
    && rm -rf /var/cache/apk/*
RUN mkdir -p ${FUNCTION_DIR}
WORKDIR ${FUNCTION_DIR}
COPY --from=BUILD_IMAGE ${FUNCTION_DIR}/node_modules ./node_modules
COPY . .
RUN if [ -f core/config/local.js ]; then rm core/config/local.js; fi
RUN cp core/config/local.js.aws.readonly core/config/local.js
USER node
EXPOSE 8080
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["node", "app.js", "--app=search", "--env=production"]
I use this Dockerfile to generate an image (called core-a) that runs our application in K8s. I've added some code to my application to handle the case where it is launched from a Lambda function, and I've created another Dockerfile like the one above, but with a custom ENTRYPOINT and CMD set to these values.
ENTRYPOINT [ "/usr/local/bin/npx", "aws-lambda-ric" ]
CMD [ "apps/search/index.handler" ]
Then I deployed this image, called core-b, to ECR, used it as the Docker image for a Lambda function, and everything worked as expected.
After that I thought I could use the ability to override the entrypoint and CMD in order to use the same Docker image for both environments, so I modified the Lambda function's image to point to core-a and used the entrypoint and CMD values from the core-b Dockerfile, but doing so I get an error:
Couldn't find valid bootstrap(s): [\"/usr/local/bin/npx\"]
Does anyone have any suggestions?
Try removing the quotation marks (" ") when entering the override values in the web form.
The AWS docs unfortunately include an incorrect note saying to use quotation marks around each string.
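For example, based on the core-b Dockerfile above, the console override fields would contain the values comma-separated and unquoted (a sketch, not the exact form layout):
ENTRYPOINT override: /usr/local/bin/npx, aws-lambda-ric
CMD override: apps/search/index.handler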

How to install guacenc with docker?

I installed guacamole and guacadmin with Docker, so I also wanted to install guacenc with Docker, but I couldn't find any information about it.
In fact, I have not found any other way to install guacenc.
If anyone knows, I hope I can get an answer.
Thank you very much.
If guacamole and guacadmin can be installed on Linux/Windows/macOS, then they can be run inside Docker.
Here is the official docker image of guacamole on dockerhub, which you can try out.
Also check this out.
Update:
To install guacenc in a CentOS docker image, you need to install the necessary packages as mentioned here.
Quoting the statement from this link.
The guacenc utility, provided by guacamole-server to translate screen
recordings into video, depends on FFmpeg, and will only be built if at
least the libavcodec, libavutil, and libswscale libraries provided by
FFmpeg are installed.
The libavcodec, libavutil, and libswscale libraries provided by FFmpeg
are used by guacenc to encode video streams when translating
recordings of Guacamole sessions. Without FFmpeg, the guacenc utility
will simply not be built.
You need to install these packages using yum install so that the guacenc utility can be built and installed.
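A sketch of that install step (the ffmpeg-devel package name is an assumption; on CentOS the FFmpeg development libraries come from third-party repositories such as RPM Fusion, which must be enabled first):
# Inside a CentOS-based Dockerfile, with an FFmpeg-providing repo enabled
RUN yum install -y ffmpeg-devel && yum clean all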
Hope this helps.
This Dockerfile is only a reference. Thanks.
# This is a Dockerfile to build the docker image including guacenc service and ffmpeg utility.
# To reduce docker image size, utilize multi-stage build method and copy shared library file required while executing guacenc.
# Copying shared library method is inspired by this thread "https://gist.github.com/bcardiff/85ae47e66ff0df35a78697508fcb49af"
# multi-stage builds ref#
# https://docs.docker.com/develop/develop-images/multistage-build/
# https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
# This Dockerfile is built off the https://github.com/apache/guacamole-server Dockerfile.
# In this repo, only the Dockerfile is provided. If you want to build your own docker image, download the whole project from the link above.
# Encode existing session recordings to .m4v:
# $ docker exec -it guac_encoder guacenc -f /recordings/file-name-to-encode
# Convert .m4v to .mp4:
# $ docker exec -it guac_encoder ffmpeg -i /recordings/file-name-to-convert.m4v /records/file-name.mp4
# Use Debian as base for the build
ARG DEBIAN_VERSION=stable
FROM debian:${DEBIAN_VERSION} AS builder
ADD . /src
WORKDIR /src
# Base directory for installed build artifacts.
# Due to limitations of the Docker image build process, this value is
# duplicated in an ARG in the second stage of the build.
#
ARG PREFIX_DIR=/usr/local/guacamole
# Build arguments
ARG BUILD_DIR=/tmp/guacd-docker-BUILD
ARG BUILD_DEPENDENCIES=" \
    autoconf \
    automake \
    gcc \
    libcairo2-dev \
    libjpeg62-turbo-dev \
    libossp-uuid-dev \
    libpango1.0-dev \
    libtool \
    libwebp-dev \
    libavcodec-dev \
    libavutil-dev \
    libswscale-dev \
    make"
# Bring build environment up to date and install build dependencies
RUN apt-get update && \
    apt-get install -y $BUILD_DEPENDENCIES && \
    rm -rf /var/lib/apt/lists/*
# Add configuration scripts
COPY src/guacd-docker/bin "${PREFIX_DIR}/bin/"
# Copy source to container for sake of build
COPY . "$BUILD_DIR"
# Build guacamole-server from local source
RUN ${PREFIX_DIR}/bin/build-guacd.sh "$BUILD_DIR" "$PREFIX_DIR"
# # Record the packages of all runtime library dependencies
# RUN ${PREFIX_DIR}/bin/list-dependencies.sh \
# ${PREFIX_DIR}/sbin/guacd \
# ${PREFIX_DIR}/lib/libguac-client-*.so \
# > ${PREFIX_DIR}/DEPENDENCIES
# Copy shared library file for guacenc to src folder located root directory
RUN ldd /usr/local/guacamole/bin/guacenc | tr -s '[:blank:]' '\n' | grep '^/' | \
    xargs -I % sh -c 'mkdir -p $(dirname deps%); cp % deps%;'
#####################################################################
# Use an Alpine-based ffmpeg image as the base for the runtime image
FROM jrottenberg/ffmpeg:4.1-alpine
# Override existing ffmpeg ENTRYPOINT
ENTRYPOINT []
# Base directory for installed build artifacts.
# Due to limitations of the Docker image build process, this value is
# duplicated in an ARG in the first stage of the build. See also the
# CMD directive at the end of this build stage.
ARG PREFIX_DIR=/usr/local/guacamole
# Runtime environment
ENV LC_ALL=C.UTF-8
ENV LD_LIBRARY_PATH ${PREFIX_DIR}/lib:$LD_LIBRARY_PATH
ENV PATH ${PREFIX_DIR}/bin:$PATH
# Copy guacenc and lib
COPY --from=builder ${PREFIX_DIR} ${PREFIX_DIR}
# Copy shared library required while executing guacenc
COPY --from=builder /src/deps /
# # Bring runtime environment up to date and install runtime dependencies
# RUN apt-get update && \
# apt-get install -y $(cat "${PREFIX_DIR}"/DEPENDENCIES) && \
# rm -rf /var/lib/apt/lists/*
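A usage sketch matching the commands in the header comments (the recordings path is an assumption; the container is kept alive so guacenc can be invoked via docker exec):
docker build -t guac_encoder .
docker run -d --name guac_encoder -v /path/to/recordings:/recordings guac_encoder tail -f /dev/null
docker exec -it guac_encoder guacenc -f /recordings/file-name-to-encode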
