Building Image from Dockerfile fails on ppc64 when using COPY --from - docker

I have an image build process via Jenkins that launches agents on the ppc64le and x86 architectures.
Everything works perfectly on the x86 agent, but when executing on ppc64le it fails with the error described below.
The error that only happens on ppc64le:
---> Running in 5458becfaa7b
/usr/bin/apt-get: 1: /usr/bin/apt-get: ELF: not found
/usr/bin/apt-get: 1: /usr/bin/apt-get: #8�#8: not found
/usr/bin/apt-get: 8: /usr/bin/apt-get: Syntax error: Unterminated quoted string
The command '/bin/sh -c apt-get -qq update && apt-get -qqy install python3 python3-dev python3-numpy python3-scipy python3-pip libkeyutils1 && rm -rf /var/lib/apt/lists/*' returned a non-zero code: 2
script returned exit code 2
The section of the Dockerfile where it fails:
FROM ubuntu:16.04
## Install random tests
COPY --from=appt /usr/ /usr/
COPY --from=appt /bin /bin
RUN apt-get -qq update \
&& apt-get -qqy install \
python3 \
python3-dev \
python3-numpy \
python3-scipy \
python3-pip \
libkeyutils1 \
&& rm -rf /var/lib/apt/lists/*

Your copy from COPY --from=appt is causing issues because it contains non-ppc64le executables; the "ELF: not found" lines are /bin/sh falling back to interpreting a foreign-architecture binary as a shell script. apt-get must be running something from either the /usr or /bin directories that was presumably built for x86.
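A quick way to confirm the mismatch on the ppc64le agent, sketched below (the appt-stage tag is hypothetical, and the file utility is assumed to be available on the agent host rather than in the minimal image):

docker build --target appt -t appt-stage .
# "file" is usually absent from minimal images, so inspect from the host:
docker run --rm appt-stage cat /usr/bin/apt-get > apt-get.bin
file apt-get.bin
# An x86-64 ELF reported here, rather than a 64-bit PowerPC one, confirms
# that foreign binaries were copied into the final stage.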

Related

save plotly images from RStudio in docker, get error ! System command 'orca' failed

I have RStudio in docker, and am trying to save a plotly image using orca. I installed orca following Docker and Plotly.
I build and start it successfully, and to check whether I can save an image I run:
library(plotly)
library(processx)
fig <- plot_ly(z = ~volcano) %>% add_surface()
orca(fig,"t.png")
Whereupon I receive the following error:
Error in `processx::run("orca", "-h")`:
! System command 'orca' failed
---
Exit status: 127
Stderr: <empty>
---
Type .Last.error to see the more details.
Warning message:
'orca' is deprecated.
Use 'kaleido' instead.
See help("Deprecated")
> .Last.error
<system_command_status_error/rlib_error_3_0/rlib_error/error>
Error in `processx::run("orca", "-h")`:
! System command 'orca' failed
---
Exit status: 127
Stderr: <empty>
---
Backtrace:
1. plotly::orca(fig, "t.png")
2. plotly:::orca_available()
3. plotly:::correct_orca()
4. processx::run("orca", "-h")
5. processx:::throw(new_process_error(res, call = sys.call(), echo = echo, …
>
Is there another way to install orca, or save a plotly image in RStudio running in docker?
My full dockerfile:
FROM rocker/verse
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y --no-install-recommends build-essential libpq-dev python3.9 python3-pip python3-setuptools python3-dev
RUN pip3 install --upgrade pip
ADD . ./home/rstudio
ADD requirements.txt .
ADD install_packages.r .
# Miniconda and dependencies
RUN cd /tmp/ && \
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh && \
bash Miniconda3-latest-Linux-x86_64.sh -b -p $HOME/miniconda3 && \
/root/miniconda3/condabin/conda install -y python=3.7
ENV PATH=$PATH:/root/miniconda3/bin
#RUN npm install phantomjs-prebuilt --phantomjs_cdnurl=http://cnpmjs.org/downloads
# installing python libraries
RUN pip3 install -r requirements.txt
# installing r libraries
RUN Rscript install_packages.r
RUN if ! [[ "16.04 18.04 20.04 21.04 21.10" == *"$(lsb_release -rs)"* ]]; \
then \
echo "Ubuntu $(lsb_release -rs) is not currently supported."; \
exit; \
fi
RUN sudo su
RUN apt-get update && apt-get install -y gnupg2
RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -
RUN curl https://packages.microsoft.com/config/ubuntu/$(lsb_release -rs)/prod.list > /etc/apt/sources.list.d/mssql-release.list
RUN exit
RUN sudo apt-get update
RUN sudo ACCEPT_EULA=Y apt-get install -y msodbcsql17
RUN chmod -R 777 /home/rstudio
# Download orca AppImage, extract it, and make it executable under xvfb
RUN apt-get install --yes xvfb libgconf-2-4
RUN wget https://github.com/plotly/orca/releases/download/v1.1.1/orca-1.1.1-x86_64.AppImage -P /home
RUN chmod 777 /home/orca-1.1.1-x86_64.AppImage
# To avoid the need for FUSE, extract the AppImage into a directory (name squashfs-root by default)
RUN cd /home && /home/orca-1.1.1-x86_64.AppImage --appimage-extract
RUN printf '#!/bin/bash \nxvfb-run --auto-servernum --server-args "-screen 0 640x480x24" /home/squashfs-root/app/orca "$@"' > /usr/bin/orca
RUN chmod 777 /usr/bin/orca
RUN chmod -R 777 /home/squashfs-root/
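The deprecation warning in the error output points at one possible way around this: orca is deprecated in favor of kaleido. A minimal sketch, assuming the image's pip3 resolves to the same Python that reticulate finds inside RStudio:

# Install the kaleido Python package instead of the orca AppImage
RUN pip3 install kaleido plotly

With that in place, plotly::save_image(fig, "t.png") in R (available in recent versions of the R plotly package) replaces the deprecated orca(fig, "t.png") call.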

Google Cloud Build does not use cache for some of the RUN executions

I have been using Google Cloud Build with cloudbuild.yaml and a Dockerfile. You can find the files below:
cloudbuild.yaml
steps:
- name: 'gcr.io/cloud-builders/docker'
entrypoint: 'bash'
args: ['-c', 'docker pull gcr.io/$PROJECT_ID/github.com/videoo-io/videoo-render:latest || exit 0']
- name: 'gcr.io/cloud-builders/docker'
args: [
'build',
'-t', 'gcr.io/$PROJECT_ID/github.com/videoo-io/videoo-render:latest',
'--cache-from', 'gcr.io/$PROJECT_ID/github.com/videoo-io/videoo-render:latest',
'.'
]
images: ['gcr.io/$PROJECT_ID/github.com/videoo-io/videoo-render:latest']
timeout: 7200s
Dockerfile:
FROM --platform=amd64 ubuntu:22.10
# Use baseimage-docker's init system.
# CMD ["/sbin/my_init"]
ENV GCSFUSE_REPO gcsfuse-stretch
RUN apt-get update && apt-get install --yes --no-install-recommends \
ca-certificates \
curl \
gnupg \
&& echo "deb http://packages.cloud.google.com/apt $GCSFUSE_REPO main" \
| tee /etc/apt/sources.list.d/gcsfuse.list \
&& curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - \
&& apt-get update \
&& apt-get install --yes gcsfuse \
&& apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
EXPOSE 80
RUN \
sed -i 's/# \(.*multiverse$\)/\1/g' /etc/apt/sources.list && \
apt-get update && \
apt-get -y upgrade && \
apt-get install -y build-essential && \
apt-get install -y gcc && \
apt-get install -y software-properties-common && \
apt install -y cmake && \
apt-get install -y make && \
apt-get install -y clang && \
apt-get install -y mesa-common-dev && \
apt-get install -y git && \
apt-get install -y xorg-dev && \
apt-get install -y nasm && \
apt-get install -y xvfb && \
apt-get install -y byobu curl git htop man unzip vim wget && \
rm -rf /var/lib/apt/lists/*
# Update and upgrade repo
RUN apt-get update -y -q && apt-get upgrade -y -q
COPY . /app
RUN cd /app
RUN ls -la
# Technically speaking we must be inside the project's directory now.
# DO NOT FORGET to go back to this directory when working.
# CMD bash premake.sh
# Set environment variables.
ENV HOME /root
ENV WDIR /app
# Define working directory.
WORKDIR /app
ARG CACHEBUST=1
RUN cd /app/lib/glfw && cmake -G "Unix Makefiles" && make && apt-get install libx11-dev
RUN apt-cache policy libxrandr-dev
RUN apt install libxrandr-dev
RUN cd /app/lib/ffmpeg && ./configure && make && make install
RUN cmake . && make
# Define default command.
CMD ["bash"]
When Cloud Build builds through the Dockerfile commands, only some of them are cached.
For instance, the apt install commands are cached:
Step #1: Step 3/19 : RUN apt-get update && apt-get install --yes --no-install-recommends ca-certificates curl gnupg && echo "deb http://packages.cloud.google.com/apt $GCSFUSE_REPO main" | tee /etc/apt/sources.list.d/gcsfuse.list && curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - && apt-get update && apt-get install --yes gcsfuse && apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
Step #1: ---> Using cache
Step #1: ---> 57af7779364e
But the following is not cached:
Step #1: Step 14/19 : RUN cd /app/lib/glfw && cmake -G "Unix Makefiles" && make && apt-get install libx11-dev
Step #1: ---> Running in 167e30a29720
Step #1: CMake Warning:
Step #1: No source or binary directory provided. Both will be assumed to be the
Step #1: same as the current working directory, but note that this warning will
Step #1: become a fatal error in future CMake releases.
Step #1:
Step #1:
Step #1: -- The C compiler identification is GNU 11.3.0
Step #1: -- Detecting C compiler ABI info
Step #1: -- Detecting C compiler ABI info - done
Step #1: -- Check for working C compiler: /usr/bin/cc - skipped
Step #1: -- Detecting C compile features
Step #1: -- Detecting C compile features - done
And the following is not cached either:
Step #1: Step 17/19 : RUN cd /app/lib/ffmpeg && ./configure && make && make install
Step #1: ---> Running in cafb9a07e2bc
Step #1: install prefix /usr/local
Step #1: source path .
Step #1: C compiler gcc
Step #1: C library glibc
Step #1: ARCH x86 (generic)
Step #1: big-endian no
Step #1: runtime cpu detection yes
Step #1: standalone assembly yes
Step #1: x86 assembler nasm
What is the reason that some of these RUN commands are cached and some are not?
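A hedged observation rather than a definitive diagnosis: Docker invalidates the cache at the first step whose input changes and reruns everything after it. COPY . /app produces a new layer whenever anything in the build context changes, and ARG CACHEBUST=1 is a common deliberate cache-busting idiom (if a different --build-arg CACHEBUST value is ever passed, every later step reruns), so the glfw and ffmpeg builds can only be cached when the context is identical to the previous build. A sketch of the usual mitigation, assuming the lib/ sources change less often than the rest of the repository:

# Copy and build the slow-moving third-party libraries first so their
# layers stay cacheable, then copy the frequently-changing sources:
COPY lib/ /app/lib
RUN cd /app/lib/glfw && cmake -G "Unix Makefiles" && make
RUN cd /app/lib/ffmpeg && ./configure && make && make install
COPY . /app
RUN cmake . && make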

Can't install python-ldap in Docker

I'm getting the following error when trying to install the python-ldap module in a Docker image for AWS:
In file included from Modules/LDAPObject.c:3:0:
Modules/common.h:15:10: fatal error: lber.h: No such file or directory
#include <lber.h>
^~~~~~~~
compilation terminated.
error: command '/usr/bin/gcc' failed with exit code 1
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for python-ldap
Failed to build python-ldap
ERROR: Could not build wheels for python-ldap, which is required to install pyproject.toml-based projects
The command '/bin/sh -c pipenv lock -r > requirements.txt && pip install -r requirements.txt -t python' returned a non-zero code: 1
And my Dockerfile:
FROM public.ecr.aws/lambda/python:3.8
ARG TMP_BUILD=/tmp
ARG DIST=/opt/build-dist
RUN yum makecache fast; yum clean all && yum -y update && yum -y upgrade; yum clean all && \
yum install -y yum-plugin-ovl; yum clean all && yum -y groupinstall "Development Tools"; yum clean all
RUN yum -y install gcc gcc-c++ make autoconf aclocal automake libtool python-devel openldap-devel; yum clean all && \
pip install --upgrade pip && pip install pipenv
WORKDIR ${TMP_BUILD}/build
COPY Pipfile .
COPY Pipfile.lock .
RUN pipenv lock -r > requirements.txt && \
pip install -r requirements.txt -t python
# && \
# find ./python -depth -path '*dist-info*' -delete && \
# find ./python -depth -path '*test*' -delete && \
# find ./python -depth -path '*pycache*' -delete
WORKDIR /opt
RUN mkdir -p ${DIST}/python && \
cp -rf ${TMP_BUILD}/build/python ${DIST} && \
cp -rf ${TMP_BUILD}/build/requirements.txt ${DIST}/requirements.txt
WORKDIR /var/task
This build used to work until recently, and as you can see I have the python-devel and openldap-devel packages, so what's the problem?
I was also having trouble installing this module on my regular machine, which runs Manjaro Linux; I had to build from source and change the name of a binary file manually. Could this be a similar situation?
Here is the Pipfile if it helps
[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"
[packages]
requests = "*"
slack-bolt = "*"
slack-sdk = "*"
aiohttp = "*"
python-ldap = "*"
[dev-packages]
black = "*"
boto3 = "*"
pytest = "*"
pytest-runner = "*"
pytest-mock = "*"
pandas = "*"
[requires]
python_version = "3.8"
[scripts]
lint = "pipenv run black . --check"
"lint:fix" = "pipenv run black ."
integrationtest = "pipenv run pytest . -m integration "
test = "pipenv run pytest . -m 'not integration' --ignore-glob='integration.py' --junitxml=./TEST-results-lambdas.xml"
[pipenv]
allow_prereleases = true
The below works (2022):
apt-get install build-essential python3-dev libmemcached-dev libldap2-dev libsasl2-dev libzbar-dev ldap-utils tox lcov valgrind
Sample:
FROM python:3.10-slim
RUN apt-get update && \
apt-get --yes install build-essential python3-dev libmemcached-dev libldap2-dev libsasl2-dev libzbar-dev ldap-utils tox lcov valgrind && \
apt-get clean
I followed the official python-ldap doc for Debian; it got stuck at the slapd installation and prompted for a password.
If I removed slapd, it said:
fatal error: libmemcached/memcached.h: No such file or directory
After replacing slapd with libmemcached-dev, the problem was solved.
python-ldap relies on some system packages being present; to have them installed, just add
RUN apt-get -y install libldap2-dev libsasl2-dev
to your Dockerfile
(or yum install -y <package> as per your example).
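For the asker's yum-based public.ecr.aws/lambda/python:3.8 image, a hedged sketch of the equivalent (the Dockerfile above already installs openldap-devel; cyrus-sasl-devel is the Amazon Linux counterpart of libsasl2-dev and is sometimes the missing piece):

RUN yum install -y gcc openldap-devel cyrus-sasl-devel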

Errors installing Singularity inside a Dockerfile

I am trying to run a Nextflow pipeline which uses an older version of Nextflow (21.04.3) and Java version 8. Since I have to use this pipeline on a remote server, I can only use Singularity.
As this Nextflow pipeline also makes singularity pull calls, I need Singularity installed inside the Docker image as well. Then I can convert the Docker image to a Singularity image and move it to the remote server.
I am trying to install Singularity inside the Dockerfile, but I am getting errors.
This is the Dockerfile that I am using:
FROM python:3.8.9-slim
LABEL authors="phil.ewels@scilifelab.se,erik.danielsson@scilifelab.se" \
description="Docker image containing requirements for the nfcore tools"
# Do not pick up python packages from $HOME
ENV PYTHONNOUSERSITE=1
# Update pip to latest version
RUN python -m pip install --upgrade pip
# Install dependencies
COPY requirements.txt requirements.txt
RUN python -m pip install -r requirements.txt
# Install Nextflow dependencies
RUN apt-get update \
&& apt-get upgrade -y \
&& apt-get install -y git \
&& apt-get install -y wget
# Create man dir required for Java installation
# and install Java
RUN mkdir -p /usr/share/man/man1 \
&& apt-get install -y openjdk-11-jre \
&& apt-get clean -y && rm -rf /var/lib/apt/lists/*
# Install Singularity
RUN wget -O- http://neuro.debian.net/lists/xenial.us-ca.full | tee /etc/apt/sources.list.d/neurodebian.sources.list && \ apt-key adv --recv-keys --keyserver hkp://pool.sks-keyservers.net:80 0xA5D32F012649A5A9 && \ apt-get update
RUN apt-get install -y singularity-container
# Setup ARG for NXF_VER ENV
ARG NXF_VER=""
ENV NXF_VER ${NXF_VER}
# Install Nextflow
RUN wget https://github.com/nextflow-io/nextflow/releases/download/v21.04.3/nextflow | bash \
&& mv nextflow /usr/local/bin \
&& chmod a+rx /usr/local/bin/nextflow
# Add the nf-core source files to the image
COPY . /usr/src/nf_core
WORKDIR /usr/src/nf_core
# Install nf-core
RUN python -m pip install .
# Set up entrypoint and cmd for easy docker usage
CMD [ "." ]
These are the errors I am getting
Step 9/17 : RUN wget -O- http://neuro.debian.net/lists/xenial.us-ca.full | tee /etc/apt/sources.list.d/neurodebian.sources.list && \ apt-key adv --recv-keys --keyserver hkp://pool.sks-keyservers.net:80 0xA5D32F012649A5A9 && \ apt-get update
---> Running in afc3dcbbd1ee
--2022-03-17 17:40:19-- http://neuro.debian.net/lists/xenial.us-ca.full
Resolving neuro.debian.net (neuro.debian.net)... 129.170.233.11
Connecting to neuro.debian.net (neuro.debian.net)|129.170.233.11|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 262
Saving to: ‘STDOUT’
0K 100% 18.4M=0s
deb http://neurodeb.pirsquared.org data main contrib non-free
#deb-src http://neurodeb.pirsquared.org data main contrib non-free
deb http://neurodeb.pirsquared.org xenial main contrib non-free
#deb-src http://neurodeb.pirsquared.org xenial main contrib non-free
2022-03-17 17:40:19 (18.4 MB/s) - written to stdout [262/262]
/bin/sh: 1: apt-key: not found
The command '/bin/sh -c wget -O- http://neuro.debian.net/lists/xenial.us-ca.full | tee /etc/apt/sources.list.d/neurodebian.sources.list && \ apt-key adv --recv-keys --keyserver hkp://pool.sks-keyservers.net:80 0xA5D32F012649A5A9 && \ apt-get update'
returned a non-zero code: 127
Is there a way to install Singularity using a Dockerfile?
Thanks
I made some changes in the Dockerfile based on the method to install Singularity on Linux given here.
The complete Dockerfile, with which I was able to successfully run Nextflow, Java and Singularity within Singularity, is given below:
FROM python:3.8.9-slim
LABEL authors="phil.ewels@scilifelab.se,erik.danielsson@scilifelab.se" \
description="Docker image containing requirements for the nfcore tools"
# Do not pick up python packages from $HOME
ENV PYTHONNOUSERSITE=1
# Update pip to latest version
RUN python -m pip install --upgrade pip
# Install dependencies
COPY requirements.txt requirements.txt
RUN python -m pip install -r requirements.txt
# Install Nextflow dependencies
RUN apt-get update \
&& apt-get upgrade -y \
&& apt-get install -y git \
&& apt-get install -y wget
# Create man dir required for Java installation
# and install Java
RUN mkdir -p /usr/share/man/man1 \
&& apt-get install -y openjdk-11-jre \
&& apt-get clean -y && rm -rf /var/lib/apt/lists/*
# Install Singularity
RUN apt-get update && apt-get install -y \
build-essential \
libssl-dev \
uuid-dev \
libgpgme11-dev \
squashfs-tools \
libseccomp-dev \
wget \
pkg-config \
procps
# Download Go version 1.16.3, install it and modify the PATH
ENV VERSION=1.16.3
ENV OS=linux
ENV ARCH=amd64
RUN wget https://dl.google.com/go/go$VERSION.$OS-$ARCH.tar.gz && \
tar -C /usr/local -xzvf go$VERSION.$OS-$ARCH.tar.gz && \
rm go$VERSION.$OS-$ARCH.tar.gz && \
echo 'export PATH=$PATH:/usr/local/go/bin' | tee -a /etc/profile
# Download Singularity from version 3.7.3 (security version)
ENV VERSION=3.7.3
RUN wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-${VERSION}.tar.gz && \
tar -xzf singularity-${VERSION}.tar.gz
# Compile Singularity sources and install it
RUN export PATH=$PATH:/usr/local/go/bin && \
cd singularity && \
./mconfig --without-suid && \
make -C ./builddir && \
make -C ./builddir install
# Setup ARG for NXF_VER ENV
ARG NXF_VER=""
ENV NXF_VER ${NXF_VER}
# Install Nextflow
RUN wget https://github.com/nextflow-io/nextflow/releases/download/v21.04.3/nextflow | bash \
&& mv nextflow /usr/local/bin \
&& chmod a+rx /usr/local/bin/nextflow
# Add the nf-core source files to the image
COPY . /usr/src/nf_core
WORKDIR /usr/src/nf_core
# Install nf-core
RUN python -m pip install .
# Set up entrypoint and cmd for easy docker usage
CMD [ "." ]
The file named requirements.txt used in the above dockerfile is given below,
click
GitPython
jinja2
jsonschema
packaging
prompt_toolkit>=3.0.3
pyyaml
pytest-workflow
questionary>=1.8.0
requests_cache
requests
rich>=10.0.0
tabulate
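To get this image onto the remote server, a minimal sketch of the Docker-to-Singularity conversion the question describes (the nfcore-tools tag is assumed):

docker build -t nfcore-tools .
singularity build nfcore-tools.sif docker-daemon://nfcore-tools:latest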

Docker Ubuntu Picking Wrong Directory

I'm fairly new to Docker and trying to familiarize myself by running a Steam server inside it.
My Dockerfile is as follows:
FROM ubuntu:20.10
RUN dpkg --add-architecture i386 \
&& apt-get update \
&& apt-get install -y \
curl \
ca-certificates \
libgcc1 \
&& apt-get clean autoclean \
&& apt-get autoremove -y \
&& rm -rf /var/lib/apt/lists/*
RUN mkdir -p steam/cmd \
&& cd steam \
&& curl -sqL "https://steamcdn-a.akamaihd.net/client/installer/steamcmd_linux.tar.gz" | tar zxvf - -C cmd
RUN ./steam/cmd/steamcmd.sh +quit
I can't figure out why the last step throws this error:
Step 4/4 : RUN ./steam/cmd/steamcmd.sh +quit
---> Running in c3f673328fe6
./steam/cmd/steamcmd.sh: line 37: /steam/cmd/linux32/steamcmd: No such file or directory
The command '/bin/sh -c ./steam/cmd/steamcmd.sh +quit' returned a non-zero code: 127
Why does ./steam/cmd/steamcmd.sh get translated into /steam/cmd/linux32/steamcmd?
Step 3/4 isn't placing steamcmd.sh inside linux32.
Step 3/4 : RUN mkdir -p steam/cmd && cd steam && curl -sqL "https://steamcdn-a.akamaihd.net/client/installer/steamcmd_linux.tar.gz" | tar zxvf - -C cmd
---> Running in fa5dcd1fcadc
steamcmd.sh
linux32/steamcmd
linux32/steamerrorreporter
linux32/libstdc++.so.6
linux32/crashhandler.so
Using WORKDIR steam/cmd then following it up with RUN returns the same result as well.
See this discussion: https://askubuntu.com/questions/133389/no-such-file-or-directory-but-the-file-exists
Try doing apt install libc6-i386 as part of step 2.
RUN dpkg --add-architecture i386 \
&& apt-get update \
&& apt-get install -y \
curl \
ca-certificates \
libgcc1 \
libc6-i386 \
&& apt-get clean autoclean \
&& apt-get autoremove -y \
&& rm -rf /var/lib/apt/lists/*
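For context on why this helps, a hedged note: libc6-i386 provides the 32-bit dynamic loader, and the misleading "No such file or directory" typically refers to that missing loader rather than to steamcmd itself. A quick check under that assumption:

# The 32-bit loader should exist once libc6-i386 is installed:
RUN ls -l /lib/ld-linux.so.2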
Also, just FYI, the sh file is not being "translated" as you think.
If you actually open that script, you will see it tries to call the linux32 executable (which it calls $STEAMEXE).
If you want to be able to fix things like this yourself, here is how I investigated it:
I copied your dockerfile, but commented/deleted the last (broken) step.
I built the image with docker build -t helpthiscoder .
I started a container from the image with docker run -i -t helpthiscoder bash.
I looked inside the sh file on the image. apt update; apt install vim; vim /steam/cmd/steamcmd.sh.
Inside this file, I added my own echo $STEAMEXE to see that the path was /steam/cmd/linux32/steamcmd.
I went to /steam/cmd/linux32 and ran ls -lah to see that file was present and executable.
I ran file steamcmd to get info. Then I saw it was 32-bit.
Then I googled linux 32-bit file not found error Ubuntu 20 and found the link I mentioned in the first paragraph. :/
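The same investigation as a copy-pasteable sketch (image tag as in the steps above; file is an extra install):

docker build -t helpthiscoder .
docker run -it helpthiscoder bash
# inside the container:
apt update && apt install -y vim file
ls -lah /steam/cmd/linux32
file /steam/cmd/linux32/steamcmd   # reports a 32-bit ELF executable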
I get a different error now though:
Redirecting stderr to '/root/Steam/logs/stderr.txt'
threadtools.cpp (3787) : Assertion Failed: Probably deadlock or failure waiting for thread to initialize.
ILocalize::AddFile() failed to load file "public/steambootstrapper_english.txt".
[ 0%] Checking for available update...
threadtools.cpp (3787) : Assertion Failed: Probably deadlock or failure waiting for thread to initialize.
Thread failed to initialize
CWorkThreadPool::StartWorkThread: Thread creation failed.
Exiting on SPEW_ABORT
I will keep looking into it. I strongly advise you delete your last broken line and build an image that you can "explore" in the way I mentioned above. That's how I'm getting this new error.
