Can't install python-ldap in Docker

I'm getting the following error when trying to install the python-ldap module in a Docker image for AWS:
In file included from Modules/LDAPObject.c:3:0:
Modules/common.h:15:10: fatal error: lber.h: No such file or directory
#include <lber.h>
^~~~~~~~
compilation terminated.
error: command '/usr/bin/gcc' failed with exit code 1
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for python-ldap
Failed to build python-ldap
ERROR: Could not build wheels for python-ldap, which is required to install pyproject.toml-based projects
The command '/bin/sh -c pipenv lock -r > requirements.txt && pip install -r requirements.txt -t python' returned a non-zero code: 1
And my Dockerfile:
FROM public.ecr.aws/lambda/python:3.8
ARG TMP_BUILD=/tmp
ARG DIST=/opt/build-dist
RUN yum makecache fast; yum clean all && yum -y update && yum -y upgrade; yum clean all && \
yum install -y yum-plugin-ovl; yum clean all && yum -y groupinstall "Development Tools"; yum clean all
RUN yum -y install gcc gcc-c++ make autoconf aclocal automake libtool python-devel openldap-devel; yum clean all && \
pip install --upgrade pip && pip install pipenv
WORKDIR ${TMP_BUILD}/build
COPY Pipfile .
COPY Pipfile.lock .
RUN pipenv lock -r > requirements.txt && \
pip install -r requirements.txt -t python
# && \
# find ./python -depth -path '*dist-info*' -delete && \
# find ./python -depth -path '*test*' -delete && \
# find ./python -depth -path '*pycache*' -delete
WORKDIR /opt
RUN mkdir -p ${DIST}/python && \
cp -rf ${TMP_BUILD}/build/python ${DIST} && \
cp -rf ${TMP_BUILD}/build/requirements.txt ${DIST}/requirements.txt
WORKDIR /var/task
This build used to work until recently, and as you can see I have the python-devel and openldap-devel packages installed, so what's the problem?
I was also having trouble installing this module on my regular machine, which runs Manjaro Linux; I had to build from source and change the name of a binary file manually. Could this be a similar situation?
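One way to narrow it down is to check, inside the image, whether the OpenLDAP headers actually land on disk before pip runs, since lber.h is exactly what the compiler cannot find. A diagnostic sketch, assuming the same yum-based image as above:
RUN yum -y install openldap-devel && \
    ls -l /usr/include/lber.h /usr/include/ldap.h
If either file is missing, the failure is in the yum layer rather than in pip.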
Here is the Pipfile, if it helps:
[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"
[packages]
requests = "*"
slack-bolt = "*"
slack-sdk = "*"
aiohttp = "*"
python-ldap = "*"
[dev-packages]
black = "*"
boto3 = "*"
pytest = "*"
pytest-runner = "*"
pytest-mock = "*"
pandas = "*"
[requires]
python_version = "3.8"
[scripts]
lint = "pipenv run black . --check"
"lint:fix" = "pipenv run black ."
integrationtest = "pipenv run pytest . -m integration "
test = "pipenv run pytest . -m 'not integration' --ignore-glob='integration.py' --junitxml=./TEST-results-lambdas.xml"
[pipenv]
allow_prereleases = true

Below works (2022):
apt-get install build-essential python3-dev libmemcached-dev libldap2-dev libsasl2-dev libzbar-dev ldap-utils tox lcov valgrind
Sample:
FROM python:3.10-slim
RUN apt-get update && \
apt-get --yes install build-essential python3-dev libmemcached-dev libldap2-dev libsasl2-dev libzbar-dev ldap-utils tox lcov valgrind && \
apt-get clean
I followed the official python-ldap doc for Debian, but it got stuck at the slapd installation, which prompts for a password.
If I removed slapd, it said:
fatal error: libmemcached/memcached.h: No such file or directory
After replacing slapd with libmemcached-dev, the problem was solved.

python-ldap relies on some system packages being present. To have them installed, just add
RUN apt-get -y install libldap2-dev libsasl2-dev
to your Dockerfile (or yum install -y <package>, as in your example).
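For a Debian-based image, a minimal sketch of the same fix (python:3.8-slim is only an illustrative base; package names are the Debian/Ubuntu ones):
FROM python:3.8-slim
RUN apt-get update && \
    apt-get install -y --no-install-recommends build-essential python3-dev libldap2-dev libsasl2-dev && \
    rm -rf /var/lib/apt/lists/* && \
    pip install python-ldap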

save plotly images from Rstudio in docker, get error ! System command 'orca' failed

I have RStudio in docker, and am trying to save a plotly image using orca. I installed orca following Docker and Plotly.
The image builds and starts successfully, and to check whether I can save an image I run:
library(plotly)
library(processx)
fig <- plot_ly(z = ~volcano) %>% add_surface()
orca(fig,"t.png")
Whereupon I receive the following error:
Error in `processx::run("orca", "-h")`:
! System command 'orca' failed
---
Exit status: 127
Stderr: <empty>
---
Type .Last.error to see the more details.
Warning message:
'orca' is deprecated.
Use 'kaleido' instead.
See help("Deprecated")
> .Last.error
<system_command_status_error/rlib_error_3_0/rlib_error/error>
Error in `processx::run("orca", "-h")`:
! System command 'orca' failed
---
Exit status: 127
Stderr: <empty>
---
Backtrace:
1. plotly::orca(fig, "t.png")
2. plotly:::orca_available()
3. plotly:::correct_orca()
4. processx::run("orca", "-h")
5. processx:::throw(new_process_error(res, call = sys.call(), echo = echo, …
>
Is there another way to install orca, or save a plotly image in RStudio running in docker?
My full dockerfile:
FROM rocker/verse
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y --no-install-recommends build-essential libpq-dev python3.9 python3-pip python3-setuptools python3-dev
RUN pip3 install --upgrade pip
ADD . ./home/rstudio
ADD requirements.txt .
ADD install_packages.r .
# Miniconda and dependencies
RUN cd /tmp/ && \
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh && \
bash Miniconda3-latest-Linux-x86_64.sh -b -p $HOME/miniconda3 && \
/root/miniconda3/condabin/conda install -y python=3.7
ENV PATH=$PATH:/root/miniconda3/bin
#RUN npm install phantomjs-prebuilt --phantomjs_cdnurl=http://cnpmjs.org/downloads
# installing python libraries
RUN pip3 install -r requirements.txt
# installing r libraries
RUN Rscript install_packages.r
RUN if ! [[ "16.04 18.04 20.04 21.04 21.10" == *"$(lsb_release -rs)"* ]]; \
then \
echo "Ubuntu $(lsb_release -rs) is not currently supported."; \
exit; \
fi
RUN sudo su
RUN apt-get update && apt-get install -y gnupg2
RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -
RUN curl https://packages.microsoft.com/config/ubuntu/$(lsb_release -rs)/prod.list > /etc/apt/sources.list.d/mssql-release.list
RUN exit
RUN sudo apt-get update
RUN sudo ACCEPT_EULA=Y apt-get install -y msodbcsql17
RUN chmod -R 777 /home/rstudio
# Download orca AppImage, extract it, and make it executable under xvfb
RUN apt-get install --yes xvfb libgconf-2-4
RUN wget https://github.com/plotly/orca/releases/download/v1.1.1/orca-1.1.1-x86_64.AppImage -P /home
RUN chmod 777 /home/orca-1.1.1-x86_64.AppImage
# To avoid the need for FUSE, extract the AppImage into a directory (named squashfs-root by default)
RUN cd /home && /home/orca-1.1.1-x86_64.AppImage --appimage-extract
RUN printf '#!/bin/bash \nxvfb-run --auto-servernum --server-args "-screen 0 640x480x24" /home/squashfs-root/app/orca "$@"' > /usr/bin/orca
RUN chmod 777 /usr/bin/orca
RUN chmod -R 777 /home/squashfs-root/
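The deprecation warning in the error output points at an alternative: recent versions of the plotly R package can export static images through kaleido instead of orca, via plotly::save_image(). A minimal sketch of the Dockerfile change (assuming reticulate ends up using the same Python that pip3 installs into):
RUN pip3 install kaleido
# then, in R:
#   library(plotly)
#   fig <- plot_ly(z = ~volcano) %>% add_surface()
#   save_image(fig, "t.png")   # goes through kaleido; no orca or xvfb needed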

Errors Installing singularity inside dockerfile

I am trying to run a nextflow pipeline which uses an older version of nextflow (21.04.3) and java version 8. Since I have to use this pipeline on a remote server, I can only use singularity.
As this nextflow pipeline also makes singularity pull calls, I need singularity installed inside the docker image as well. Then I can convert this docker image to a singularity image and move it to the remote server.
I am trying to install singularity inside the dockerfile, but I am getting errors.
This is the dockerfile that I am using:
FROM python:3.8.9-slim
LABEL authors="phil.ewels@scilifelab.se,erik.danielsson@scilifelab.se" \
description="Docker image containing requirements for the nfcore tools"
# Do not pick up python packages from $HOME
ENV PYTHONNOUSERSITE=1
# Update pip to latest version
RUN python -m pip install --upgrade pip
# Install dependencies
COPY requirements.txt requirements.txt
RUN python -m pip install -r requirements.txt
# Install Nextflow dependencies
RUN apt-get update \
&& apt-get upgrade -y \
&& apt-get install -y git \
&& apt-get install -y wget
# Create man dir required for Java installation
# and install Java
RUN mkdir -p /usr/share/man/man1 \
&& apt-get install -y openjdk-11-jre \
&& apt-get clean -y && rm -rf /var/lib/apt/lists/*
# Install Singularity
RUN wget -O- http://neuro.debian.net/lists/xenial.us-ca.full | tee /etc/apt/sources.list.d/neurodebian.sources.list && \ apt-key adv --recv-keys --keyserver hkp://pool.sks-keyservers.net:80 0xA5D32F012649A5A9 && \ apt-get update
RUN apt-get install -y singularity-container
# Setup ARG for NXF_VER ENV
ARG NXF_VER=""
ENV NXF_VER ${NXF_VER}
# Install Nextflow
RUN wget https://github.com/nextflow-io/nextflow/releases/download/v21.04.3/nextflow | bash \
&& mv nextflow /usr/local/bin \
&& chmod a+rx /usr/local/bin/nextflow
# Add the nf-core source files to the image
COPY . /usr/src/nf_core
WORKDIR /usr/src/nf_core
# Install nf-core
RUN python -m pip install .
# Set up entrypoint and cmd for easy docker usage
CMD [ "." ]
These are the errors I am getting
Step 9/17 : RUN wget -O- http://neuro.debian.net/lists/xenial.us-ca.full | tee /etc/apt/sources.list.d/neurodebian.sources.list && \ apt-key adv --recv-keys --keyserver hkp://pool.sks-keyservers.net:80 0xA5D32F012649A5A9 && \ apt-get update
---> Running in afc3dcbbd1ee
--2022-03-17 17:40:19-- http://neuro.debian.net/lists/xenial.us-ca.full
Resolving neuro.debian.net (neuro.debian.net)... 129.170.233.11
Connecting to neuro.debian.net (neuro.debian.net)|129.170.233.11|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 262
Saving to: ‘STDOUT’
0K 100% 18.4M=0s
deb http://neurodeb.pirsquared.org data main contrib non-free
#deb-src http://neurodeb.pirsquared.org data main contrib non-free
deb http://neurodeb.pirsquared.org xenial main contrib non-free
#deb-src http://neurodeb.pirsquared.org xenial main contrib non-free
2022-03-17 17:40:19 (18.4 MB/s) - written to stdout [262/262]
/bin/sh: 1: apt-key: not found
The command '/bin/sh -c wget -O- http://neuro.debian.net/lists/xenial.us-ca.full | tee /etc/apt/sources.list.d/neurodebian.sources.list && \ apt-key adv --recv-keys --keyserver hkp://pool.sks-keyservers.net:80 0xA5D32F012649A5A9 && \ apt-get update'
returned a non-zero code: 127
Is there a way to install singularity using a dockerfile?
Thanks
I made some changes in the dockerfile, based on the method for installing singularity on linux given here.
The complete dockerfile, with which I was able to successfully run nextflow, java, and singularity within singularity, is given below:
FROM python:3.8.9-slim
LABEL authors="phil.ewels@scilifelab.se,erik.danielsson@scilifelab.se" \
description="Docker image containing requirements for the nfcore tools"
# Do not pick up python packages from $HOME
ENV PYTHONNOUSERSITE=1
# Update pip to latest version
RUN python -m pip install --upgrade pip
# Install dependencies
COPY requirements.txt requirements.txt
RUN python -m pip install -r requirements.txt
# Install Nextflow dependencies
RUN apt-get update \
&& apt-get upgrade -y \
&& apt-get install -y git \
&& apt-get install -y wget
# Create man dir required for Java installation
# and install Java
RUN mkdir -p /usr/share/man/man1 \
&& apt-get install -y openjdk-11-jre \
&& apt-get clean -y && rm -rf /var/lib/apt/lists/*
# Install Singularity
RUN apt-get update && apt-get install -y \
build-essential \
libssl-dev \
uuid-dev \
libgpgme11-dev \
squashfs-tools \
libseccomp-dev \
wget \
pkg-config \
procps
# Download the Go source (version 1.16.3), install it, and modify the PATH
ENV VERSION=1.16.3
ENV OS=linux
ENV ARCH=amd64
RUN wget https://dl.google.com/go/go$VERSION.$OS-$ARCH.tar.gz && \
tar -C /usr/local -xzvf go$VERSION.$OS-$ARCH.tar.gz && \
rm go$VERSION.$OS-$ARCH.tar.gz && \
echo 'export PATH=$PATH:/usr/local/go/bin' | tee -a /etc/profile
# Download Singularity from version 3.7.3 (security version)
ENV VERSION=3.7.3
RUN wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-${VERSION}.tar.gz && \
tar -xzf singularity-${VERSION}.tar.gz
# Compile Singularity sources and install it
RUN export PATH=$PATH:/usr/local/go/bin && \
cd singularity && \
./mconfig --without-suid && \
make -C ./builddir && \
make -C ./builddir install
# Setup ARG for NXF_VER ENV
ARG NXF_VER=""
ENV NXF_VER ${NXF_VER}
# Install Nextflow
RUN wget https://github.com/nextflow-io/nextflow/releases/download/v21.04.3/nextflow | bash \
&& mv nextflow /usr/local/bin \
&& chmod a+rx /usr/local/bin/nextflow
# Add the nf-core source files to the image
COPY . /usr/src/nf_core
WORKDIR /usr/src/nf_core
# Install nf-core
RUN python -m pip install .
# Set up entrypoint and cmd for easy docker usage
CMD [ "." ]
The file named requirements.txt used in the above dockerfile is given below:
click
GitPython
jinja2
jsonschema
packaging
prompt_toolkit>=3.0.3
pyyaml
pytest-workflow
questionary>=1.8.0
requests_cache
requests
rich>=10.0.0
tabulate
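For completeness, a usage sketch of the docker-to-singularity conversion mentioned above (the image and file names are placeholders): build the docker image, then let singularity convert it straight from the local docker daemon:
docker build -t nfcore-tools .
singularity build nfcore-tools.sif docker-daemon://nfcore-tools:latest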

Why does docker-compose build start from the beginning

For example, every time I build, it will:
Copy package.json
Install the packages from package.json
Add the current directory.
My question is: why does it not use the cache? For example, it should not install the packages from scratch if package.json has not changed.
It should use the cache and rebuild only the changed code.
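My understanding of how the cache should be keyed (a minimal sketch, not my actual Dockerfile) is that Docker reuses a layer only when the instruction and the files it reads are unchanged, and the first miss invalidates every later layer:
COPY package.json ./   # cache hit while package.json is unchanged
RUN npm install        # reused whenever the COPY above was a hit
ADD . .                # changes on every code edit, but should not affect the layers above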
Update:
Dockerfile
FROM ubuntu:18.04
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
RUN apt-get update && apt-get upgrade -y \
&& apt-get install -y --no-install-recommends \
build-essential \
ca-certificates \
gcc \
git \
libpq-dev \
make \
python-pip \
python2.7 \
python2.7-dev \
apt-transport-https \
curl \
g++ \
sudo \
wget \
bzip2 \
chrpath \
libssl-dev \
libxft-dev \
libfreetype6 \
libfreetype6-dev \
libfontconfig1 \
libfontconfig1-dev \
libfontconfig \
poppler-utils \
imagemagick \
&& apt-get clean \
&& rm -rf /tmp/* /var/lib/apt/lists/* \
&& apt-get -y autoclean
RUN apt-get update && apt-get install -y --no-install-recommends software-properties-common && add-apt-repository ppa:malteworld/ppa && apt update && apt install -y --no-install-recommends pdftk \
&& apt-get clean \
&& rm -rf /tmp/* /var/lib/apt/lists/* \
&& apt-get -y autoclean
ENV NVM_DIR /usr/local/nvm
ENV NODE_VERSION 10.6.0
# Install nvm with node and npm
RUN curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.29.0/install.sh | bash \
&& source $NVM_DIR/nvm.sh \
&& nvm install $NODE_VERSION \
&& nvm alias default $NODE_VERSION \
&& nvm use default
# Set up our PATH correctly so we don't have to long-reference npm, node, &c.
ENV NODE_PATH $NVM_DIR/versions/node/v$NODE_VERSION/lib/node_modules
ENV PATH $NVM_DIR/versions/node/v$NODE_VERSION/bin:$PATH
# Set the work directory
RUN mkdir -p /var/www/app/jobsaf-website
RUN mkdir /data
RUN mkdir /data/db
WORKDIR /var/www/app/jobsaf-website
RUN npm install -g node-gyp @angular/cli@6.2.3 nodemon request
# Add our package.json and install *before* adding our application files
COPY package.json ./
# RUN npm install --force
RUN npm install --force
RUN npm rebuild node-sass
# Add application files
ADD . .
EXPOSE 3000 5858 4200 35729 27017 6379 49153
.dockerignore
# See http://help.github.com/ignore-files/ for more about ignoring files.
# compiled output
/tmp
/public/__build__/
/src/*/__build__/
/__build__/**
/public/dist/
/src/*/dist/
/dist/**
/.awcache
.webpack.json
/compiled/
dll/
package-lock.json
# dependencies
/node_modules
*/node_modules
# IDEs and editors
/.idea
.project
.classpath
.c9/
*.launch
**.js.map
.settings/
# IDE - VSCode
.vscode/
# misc
/.sass-cache
/connect.lock
/coverage/*
/libpeerconnection.log
npm-debug.log
testem.log
/typings
# e2e
/e2e/*.js
/e2e/*.map
#System Files
.DS_Store
Thumbs.db
*.csv
*.dat
*.iml
*.log
*.out
*.pid
*.seed
*.sublime-*
*.swo
*.swp
*.tgz
*.xml
.strong-pm
coverage
npm-debug*
/admin/dist
npm
/.cache-loader/*
stats.json
!/src/assets/js/admin-header.js
!/src/assets/js/website-custom.js
webpack-cache/
web/
/src/app/**/*.map
/src/app/**/*.js
--force should be removed from the following line, as it ignores any cache and does a fresh installation of your packages, which leads to a new docker build layer starting from the installation step.
RUN npm install --force
The -f or --force argument forces npm to fetch remote resources even if a local copy exists on disk.
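Under that reading, the install step simply becomes (same Dockerfile, flag dropped):
RUN npm install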

error running Julia on Ubuntu 16.04 docker for host with GPU

I'm stuck getting Julia to run on Ubuntu 16.04 on a server that has GPUs; basically, we want to utilise the power of the GPUs.
We're using a Docker image to host Julia, based on nvidia-cuda. The docker image builds successfully, but when I run julia with any switch, e.g. julia -v, or just julia, I get the error ERROR: Unable to find compatible target in system image. I tried finding hints online but had no luck, hence posting the question here.
After building the docker image, I run it with docker run, mounting some shared folders; the container comes up successfully, but Julia doesn't seem to work. Please let me know what I am doing wrong here.
Following is the Dockerfile code:
FROM nvidia/cuda:9.0-cudnn7-devel-ubuntu16.04
MAINTAINER comafire <comafire@gmail.com>
# Bash
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
USER root
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update && apt-get install -y --no-install-recommends \
apt-utils \
&& apt-get clean && rm -rf /var/lib/apt/lists/*
# Lang
ARG locale="en_US.UTF-8"
ENV LOCALE ${locale}
RUN echo "LOCALE: $LOCALE"
RUN if [[ $LOCALE = *en* ]] \
; then \
apt-get update && apt-get install -y --no-install-recommends \
locales language-pack-en \
; else \
apt-get update && apt-get install -y --no-install-recommends \
locales language-pack-en \
; fi
RUN echo "$LOCALE UTF-8" > /etc/locale.gen && locale-gen
ENV LC_ALL ${LOCALE}
ENV LANG ${LOCALE}
ENV LANGUAGE ${LOCALE}
ENV LC_MESSAGES POSIX
# Common
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential vim curl wget git cmake bzip2 sudo unzip net-tools \
libffi-dev libssl-dev zlib1g-dev libbz2-dev libreadline-dev libsqlite3-dev llvm \
libfreetype6-dev libxft-dev
RUN apt-get update && apt-get install -y --no-install-recommends \
software-properties-common libjpeg-dev libpng-dev ncurses-dev imagemagick \
libgraphicsmagick1-dev libzmq-dev gfortran gnuplot gnuplot-x11 libsdl2-dev \
openssh-client htop iputils-ping
# Python2
RUN apt-get update && apt-get install -y --no-install-recommends \
python python-dev python-pip python-virtualenv python-software-properties
RUN pip2 install --upgrade pip
RUN pip2 install --cache-dir /tmp/pip2 --upgrade setuptools wheel
# Python3
RUN apt-get update && apt-get install -y --no-install-recommends \
python3 python3-dev python3-pip python3-virtualenv python3-software-properties
RUN pip3 install --upgrade pip
RUN pip3 install --cache-dir /tmp/pip3 --upgrade setuptools wheel
# Julia
ENV JULIA_VERSION 1.0.2
RUN apt-get update && apt-get install -y build-essential libatomic1 python gfortran perl wget m4 cmake pkg-config
RUN cd /usr/local && git clone git://github.com/JuliaLang/julia.git && cd julia && git checkout v${JULIA_VERSION}
#RUN make -C deps distclean-llvm && make
RUN cd /usr/local/julia && make -j4
RUN sudo ln -s /usr/local/julia/usr/bin/julia /usr/local/bin/julia
RUN /usr/local/julia/usr/bin/julia -v
RUN ls -al /usr/local/bin
RUN julia -v
WORKDIR /tmp
COPY packages.jl ./
RUN julia packages.jl
When executing RUN julia, is julia already in your $PATH? Try executing julia directly, for example:
RUN chmod +x /path/to/julia
RUN /path/to/julia
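A minimal sketch of the same idea, assuming the source-build layout from the Dockerfile above: put Julia's own bin directory on the PATH instead of relying on the symlink, then re-check:
ENV PATH="/usr/local/julia/usr/bin:${PATH}"
RUN julia -v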

Building Image from Dockerfile fails on ppc64le when using COPY --from

Currently I have an image build process via jenkins that launches agents on the ppc64le and x86 architectures.
Everything works perfectly on the x86 agent, but when executing on ppc64le it fails with the error described below.
Error that only happens on ppc64le:
---> Running in 5458becfaa7b
/usr/bin/apt-get: 1: /usr/bin/apt-get: ELF: not found
/usr/bin/apt-get: 1: /usr/bin/apt-get: #8�#8: not found
/usr/bin/apt-get: 8: /usr/bin/apt-get: Syntax error: Unterminated quoted string
The command '/bin/sh -c apt-get -qq update && apt-get -qqy install python3 python3-dev python3-numpy python3-scipy python3-pip libkeyutils1 && rm -rf /var/lib/apt/lists/*' returned a non-zero code: 2
script returned exit code 2
The sections where it fails:
FROM ubuntu:16.04
## Install random tests
COPY --from=appt /usr/ /usr/
COPY --from=appt /bin /bin
RUN apt-get -qq update \
&& apt-get -qqy install \
python3 \
python3-dev \
python3-numpy \
python3-scipy \
python3-pip \
libkeyutils1 \
&& rm -rf /var/lib/apt/lists/*
Your COPY --from=appt is causing issues because it copies non-ppc64le executables over /usr and /bin. apt-get must be running something from one of those directories.
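A sketch of the usual workaround (the /usr/local/myapp path is purely illustrative): copy only the artifact you actually need from the appt stage, so the architecture-native binaries in /usr and /bin stay untouched:
FROM ubuntu:16.04
COPY --from=appt /usr/local/myapp /usr/local/myapp
RUN apt-get -qq update \
&& apt-get -qqy install python3 python3-dev python3-numpy python3-scipy python3-pip libkeyutils1 \
&& rm -rf /var/lib/apt/lists/*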
