I am a newbie to Docker and I am trying to install csvtk via Docker using debian:stretch-slim.
Below is my Dockerfile:
FROM debian:stretch-slim
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
jq \
perl \
python3 \
wget \
&& rm -rf /var/lib/apt/lists/*
RUN wget -qO- https://github.com/shenwei356/csvtk/releases/download/v0.23.0/csvtk_linux_amd64.tar.gz | tar -xz \
&& cp csvtk /usr/local/bin/
It fails at the csvtk step with the below error message:
Step 3/3 : RUN wget -qO- https://github.com/shenwei356/csvtk/releases/download/v0.23.0/csvtk_linux_amd64.tar.gz | tar -xz && cp csvtk /usr/local/bin/
---> Running in 0f3a0e75a5de
gzip: stdin: unexpected end of file
tar: Child returned status 1
tar: Error is not recoverable: exiting now
The command '/bin/sh -c wget -qO- https://github.com/shenwei356/csvtk/releases/download/v0.23.0/csvtk_linux_amd64.tar.gz | tar -xz && cp csvtk /usr/local/bin/' returned a non-zero code: 2
I would appreciate any help/suggestions.
Thanks in advance.
wget was exiting with error code 5, which means SSL verification failed. From this answer, you just need to install ca-certificates before using wget.
This Dockerfile should build successfully:
FROM debian:stretch-slim
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
jq \
perl \
python3 \
wget \
# added this package to help with ssl certs in Docker
ca-certificates \
&& rm -rf /var/lib/apt/lists/*
RUN wget -qO- https://github.com/shenwei356/csvtk/releases/download/v0.23.0/csvtk_linux_amd64.tar.gz | tar -xz \
&& cp csvtk /usr/local/bin/
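If you want to confirm the root cause yourself, wget documents exit code 5 as an SSL verification failure. A quick manual check inside a shell in the original (broken) image, as a sketch, could look like this:
# run wget without -q so the full error is visible, then inspect its exit code
wget https://github.com/shenwei356/csvtk/releases/download/v0.23.0/csvtk_linux_amd64.tar.gz
echo $?   # 5 = SSL verification failure in wget's exit-code table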
As a general tip when debugging issues like these, it's usually easiest to remove the offending RUN line from your Dockerfile, build the image, start the container with a shell, and execute the commands manually. Like this:
docker build -t test:v1 .
docker run --rm -it test:v1 /bin/bash
# run commands manually and check the full error output
While chaining commands inside a single RUN instruction with && is best practice for reducing the number of image layers, it makes build failures harder to debug.
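For example, while debugging you could temporarily split the pipeline into separate RUN steps so the failing command is isolated in its own layer (a sketch; recombine them once the build works):
RUN wget -O csvtk.tar.gz https://github.com/shenwei356/csvtk/releases/download/v0.23.0/csvtk_linux_amd64.tar.gz
RUN tar -xzf csvtk.tar.gz
RUN cp csvtk /usr/local/bin/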
Related
I would like to install the aws-cli in the image below, but I receive the error shown. I tried apk and apt, but neither worked. Can you please help me figure out how to update my Dockerfile?
I do not want to change my base image; I need to use maven:3.6.3-openjdk-14.
sh: apt-get: command not found
FROM maven:3.6.3-openjdk-14
RUN apt-get update \
&& apt-get install -y vim jq unzip curl \
&& apt-get upgrade -y \
#install aws 2
RUN curl --silent --show-error --fail "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" && \
unzip awscliv2.zip && \
./aws/install && \
rm -rf awscliv2.zip
Docker image maven:3.6.3-openjdk-14 is based on Oracle Linux, which uses rpm to manage packages, so apt-get is not available. You can confirm this by checking the OS release file inside the image:
docker run -it --rm maven:3.6.3-openjdk-14 cat /etc/os-release
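If you have to keep that base image, the equivalent install goes through the RPM-based package manager instead. A rough sketch, assuming the image ships yum (some Oracle Linux slim variants only have microdnf, in which case substitute it); the extra tools from your original list (vim, jq) may need an additional repository such as EPEL:
FROM maven:3.6.3-openjdk-14
RUN yum install -y unzip curl \
    && curl --silent --show-error --fail "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o awscliv2.zip \
    && unzip awscliv2.zip \
    && ./aws/install \
    && rm -rf awscliv2.zip aws \
    && yum clean all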
I'm fairly new to Docker and trying to familiarize myself with it by running a Steam server inside a container.
My Dockerfile is as follow:
FROM ubuntu:20.10
RUN dpkg --add-architecture i386 \
&& apt-get update \
&& apt-get install -y \
curl \
ca-certificates \
libgcc1 \
&& apt-get clean autoclean \
&& apt-get autoremove -y \
&& rm -rf /var/lib/apt/lists/*
RUN mkdir -p steam/cmd \
&& cd steam \
&& curl -sqL "https://steamcdn-a.akamaihd.net/client/installer/steamcmd_linux.tar.gz" | tar zxvf - -C cmd
RUN ./steam/cmd/steamcmd.sh +quit
I can't figure out why the last step throws this error:
Step 4/4 : RUN ./steam/cmd/steamcmd.sh +quit
---> Running in c3f673328fe6
./steam/cmd/steamcmd.sh: line 37: /steam/cmd/linux32/steamcmd: No such file or directory
The command '/bin/sh -c ./steam/cmd/steamcmd.sh +quit' returned a non-zero code: 127
Why does ./steam/cmd/steamcmd.sh get translated into /steam/cmd/linux32/steamcmd?
Step 3/4 isn't placing steamcmd.sh inside linux32:
Step 3/4 : RUN mkdir -p steam/cmd && cd steam && curl -sqL "https://steamcdn-a.akamaihd.net/client/installer/steamcmd_linux.tar.gz" | tar zxvf - -C cmd
---> Running in fa5dcd1fcadc
steamcmd.sh
linux32/steamcmd
linux32/steamerrorreporter
linux32/libstdc++.so.6
linux32/crashhandler.so
Using WORKDIR steam/cmd then following it up with RUN returns the same result as well.
See this discussion: https://askubuntu.com/questions/133389/no-such-file-or-directory-but-the-file-exists
Try doing apt install libc6-i386 as part of step 2.
RUN dpkg --add-architecture i386 \
&& apt-get update \
&& apt-get install -y \
curl \
ca-certificates \
libgcc1 \
libc6-i386 \
&& apt-get clean autoclean \
&& apt-get autoremove -y \
&& rm -rf /var/lib/apt/lists/*
Also, just FYI, the .sh file is not being "translated" as you think.
If you actually open that script, you will see that it calls the linux32 executable (which it refers to as $STEAMEXE).
If you want to be able to fix things like this yourself, here is how I investigated it:
I copied your dockerfile, but commented/deleted the last (broken) step.
I built the image with docker build -t helpthiscoder .
I started a container and got a shell with docker run -i -t helpthiscoder bash.
I looked inside the sh file on the image. apt update; apt install vim; vim /steam/cmd/steamcmd.sh.
Inside this file, I added my own echo $STEAMEXE to see that the path was /steam/cmd/linux32/steamcmd.
I went to /steam/cmd/linux32 and ran ls -lah to see that file was present and executable.
I ran file steamcmd to get info. Then I saw it was 32-bit.
Then I googled linux 32-bit file not found error Ubuntu 20 and found the link I mentioned in the first paragraph. :/
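Condensed, that investigation boils down to something like this (a sketch; the paths assume the layout produced by your Dockerfile):
docker build -t helpthiscoder .
docker run --rm -it helpthiscoder bash
# inside the container (file may need installing first: apt-get update && apt-get install -y file)
file /steam/cmd/linux32/steamcmd   # reports a 32-bit ELF executable
ls -l /lib/ld-linux.so.2           # the 32-bit loader; absent until libc6-i386 is installed, which is
                                   # why the kernel reports "No such file or directory" for the binary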
I get a different error now though:
Redirecting stderr to '/root/Steam/logs/stderr.txt'
threadtools.cpp (3787) : Assertion Failed: Probably deadlock or failure waiting for thread to initialize.
ILocalize::AddFile() failed to load file "public/steambootstrapper_english.txt".
[ 0%] Checking for available update...
threadtools.cpp (3787) : Assertion Failed: Probably deadlock or failure waiting for thread to initialize.
Thread failed to initialize
CWorkThreadPool::StartWorkThread: Thread creation failed.
Exiting on SPEW_ABORT
I will keep looking into it. I strongly advise you to delete your last broken line and build an image that you can "explore" in the way I mentioned above; that's how I'm getting this new error.
I am asking for a massive favor. I have been stuck on the issue below for the last couple of days, and any help would be great. I have built a Docker image and container using the following code (Docker - Apache Spark).
Dockerfile:
FROM debian:stretch
MAINTAINER Getty Images "https://github.com/gettyimages"
RUN apt-get update \
&& apt-get install -y locales \
&& dpkg-reconfigure -f noninteractive locales \
&& locale-gen C.UTF-8 \
&& /usr/sbin/update-locale LANG=C.UTF-8 \
&& echo "en_US.UTF-8 UTF-8" >> /etc/locale.gen \
&& locale-gen \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# Users with other locales should set this in their derivative image
ENV LANG en_US.UTF-8
ENV LANGUAGE en_US:en
ENV LC_ALL en_US.UTF-8
RUN apt-get update \
&& apt-get install -y curl unzip \
python3 python3-setuptools \
&& ln -s /usr/bin/python3 /usr/bin/python \
&& easy_install3 pip py4j \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# http://blog.stuart.axelbrooke.com/python-3-on-spark-return-of-the-pythonhashseed
ENV PYTHONHASHSEED 0
ENV PYTHONIOENCODING UTF-8
ENV PIP_DISABLE_PIP_VERSION_CHECK 1
# JAVA
RUN apt-get update \
&& apt-get install -y openjdk-8-jre \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# HADOOP
ENV HADOOP_VERSION 3.0.0
ENV HADOOP_HOME /usr/hadoop-$HADOOP_VERSION
ENV HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
ENV PATH $PATH:$HADOOP_HOME/bin
RUN curl -sL --retry 3 \
"http://archive.apache.org/dist/hadoop/common/hadoop-$HADOOP_VERSION/hadoop-$HADOOP_VERSION.tar.gz" \
| gunzip \
| tar -x -C /usr/ \
&& rm -rf $HADOOP_HOME/share/doc \
&& chown -R root:root $HADOOP_HOME
# SPARK
ENV SPARK_VERSION 2.4.1
ENV SPARK_PACKAGE spark-${SPARK_VERSION}-bin-without-hadoop
ENV SPARK_HOME /usr/spark-${SPARK_VERSION}
ENV SPARK_DIST_CLASSPATH="$HADOOP_HOME/etc/hadoop/*:$HADOOP_HOME/share/hadoop/common/lib/*:$HADOOP_HOME/share/hadoop/common/*:$HADOOP_HOME/share/hadoop/hdfs/*:$HADOOP_HOME/share/hadoop/hdfs/lib/*:$HADOOP_HOME/share/hadoop/hdfs/*:$HADOOP_HOME/share/hadoop/yarn/lib/*:$HADOOP_HOME/share/hadoop/yarn/*:$HADOOP_HOME/share/hadoop/mapreduce/lib/*:$HADOOP_HOME/share/hadoop/mapreduce/*:$HADOOP_HOME/share/hadoop/tools/lib/*"
ENV PATH $PATH:${SPARK_HOME}/bin
RUN curl -sL --retry 3 \
"https://archive.apache.org/dist/spark/spark-${SPARK_VERSION}/${SPARK_PACKAGE}.tgz" \
| gunzip \
| tar x -C /usr/ \
&& mv /usr/$SPARK_PACKAGE $SPARK_HOME \
&& chown -R root:root $SPARK_HOME
WORKDIR $SPARK_HOME
CMD ["bin/spark-class", "org.apache.spark.deploy.master.Master"]
Command:
ubuntu@ip-123.43.11.136:~$ sudo docker run -it --rm -v $(pwd):/home/ubuntu sparkimage /home/ubuntu bin/spark-submit ./count.py
Got Error below
Error :- Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"/home/ubuntu\": permission denied": unknown.
Can some help me what could be the issue? I have gone through several links but no luck still not able to resolve the issue.
ERRO[0001] error waiting for the container: context cancelled
Everything passed after the image name sparkimage is treated as arguments to the Docker ENTRYPOINT.
For example
ENTRYPOINT ["node"]
so when you start
docker run -it my_image app.js
Here app.js becomes the argument to node, which starts app.js; Docker treats it as node app.js.
Since there is no ENTRYPOINT in your Dockerfile, everything after the image name in your docker run command replaces the CMD, and the effective command becomes
/home/ubuntu bin/spark-submit ./count.py
Docker tries to execute /home/ubuntu (which is a directory because of your volume mount), and that's why it throws the permission denied error for /home/ubuntu.
You can try these two combinations.
ENTRYPOINT ["bin/spark-class", "org.apache.spark.deploy.master.Master"]
with the run command.
sudo docker run -it --rm -v $(pwd):/home/ubuntu sparkimage /home/ubuntu bin/spark-submit ./count.py
OR
CMD ["bin/spark-class", "org.apache.spark.deploy.master.Master","/home/ubuntu bin/spark-submit ./count.py"]
with docker run command
sudo docker run -it --rm -v $(pwd):/home/ubuntu sparkimage
The issue has been resolved. I corrected the mount path, executed the command again, and it worked fine without any issue.
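For reference, an invocation along those lines (a sketch; it assumes count.py sits in the host's current directory and relies on WORKDIR being $SPARK_HOME, so bin/spark-submit resolves inside the image) would be:
sudo docker run -it --rm -v "$(pwd)":/home/ubuntu sparkimage bin/spark-submit /home/ubuntu/count.py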
I am relatively new to Docker. I have an application which I want to containerize.
Below is my Dockerfile:
FROM ubuntu:16.04
## ENV Variables
ENV PYTHON_VERSION="3.6.5"
# Update and Install packages
RUN apt-get update -y \
&& apt-get install -y \
curl \
wget \
tar
# Install Python 3.6.5
RUN wget https://www.python.org/ftp/python/${PYTHON_VERSION}/Python-${PYTHON_VERSION}.tar.xz \
&& tar -xvf Python-${PYTHON_VERSION}.tar.xz \
&& cd Python-${PYTHON_VERSION} \
&& ./configure \
&& make altinstall \
&& cd / \
&& rm -rf Python-${PYTHON_VERSION}
# Install Google Cloud SDK
# Downloading gcloud package
RUN curl https://dl.google.com/dl/cloudsdk/release/google-cloud-sdk.tar.gz > /tmp/google-cloud-sdk.tar.gz
# Installing the package
RUN mkdir -p /usr/local/gcloud \
&& tar -C /usr/local/gcloud -xvf /tmp/google-cloud-sdk.tar.gz \
&& /usr/local/gcloud/google-cloud-sdk/install.sh
# Adding the package path to local
ENV PATH $PATH:/usr/local/gcloud/google-cloud-sdk/bin
I am trying to install Python 3.6.5, but I am receiving the following error.
2020-01-09 17:26:13 (107 KB/s) - 'Python-3.6.5.tar.xz' saved [17049912/17049912]
tar (child): xz: Cannot exec: No such file or directory
tar (child): Error is not recoverable: exiting now
tar: Child returned status 2
tar: Error is not recoverable: exiting now
The command '/bin/sh -c wget https://www.python.org/ftp/python/${PYTHON_VERSION}/Python-${PYTHON_VERSION}.tar.xz && tar -xvf Python-${PYTHON_VERSION}.tar.xz && cd Python-${PYTHON_VERSION} && ./configure && make altinstall && cd / && rm -rf Python-${PYTHON_VERSION}' returned a non-zero code: 2
Decompressing an .xz file requires the xz binary, which on Ubuntu is provided by the xz-utils package. You therefore have to install xz-utils in your image before decompressing an .xz file.
You can add this to your previous apt-get install run:
# Update and Install packages
RUN apt-get update -y \
&& apt-get install -y \
curl \
wget \
tar \
xz-utils
This should fix the failing call to tar in the next RUN instruction.
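If you want to verify before kicking off the long Python build, a throwaway check such as this (a sketch; drop it once things work) will fail fast while xz is still missing:
RUN command -v xz && xz --version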
Instead of trying to install Python, just start with a base image that has Python preinstalled, e.g. python:3.6-buster. This image is based on Debian Buster, which was released in 2019. Since Ubuntu is based on Debian, everything will be pretty similar, and since it's from 2019 (as opposed to Ubuntu 16.04, which is from 2016) you'll get more up-to-date software.
See https://pythonspeed.com/articles/base-image-python-docker-images/ for longer discussion.
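A minimal sketch of that approach, reusing only the Cloud SDK steps from the question (the base image tag is an assumption, and the install is left non-interactive exactly as in the original Dockerfile):
FROM python:3.6-buster
# Python 3.6 is already installed, so only the Google Cloud SDK steps remain
RUN curl https://dl.google.com/dl/cloudsdk/release/google-cloud-sdk.tar.gz > /tmp/google-cloud-sdk.tar.gz \
    && mkdir -p /usr/local/gcloud \
    && tar -C /usr/local/gcloud -xf /tmp/google-cloud-sdk.tar.gz \
    && /usr/local/gcloud/google-cloud-sdk/install.sh \
    && rm /tmp/google-cloud-sdk.tar.gz
ENV PATH $PATH:/usr/local/gcloud/google-cloud-sdk/bin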
I'm working on a project that uses a Docker image for one specific feature; other than that I don't need Docker at all, so I don't understand much about it. The issue is that Docker doesn't find a file that is actually in the folder, and the build process breaks.
When trying to create the image using docker build -t project/render-worker . the error is this:
Step 18/23 : RUN bin/composer-install && php composer-setup.php --install-dir=/bin && php -r 'unlink("composer-setup.php");' && php /bin/composer.phar global require hirak/prestissimo
---> Running in 695db3bf2f02
/bin/sh: 1: bin/composer-install: not found
The command '/bin/sh -c bin/composer-install && php composer-setup.php --install-dir=/bin && php -r 'unlink("composer-setup.php");' && php /bin/composer.phar global require hirak/prestissimo' returned a non-zero code: 127
As mentioned, the file composer-install does exist, and this is what's in it:
#!/bin/sh
EXPECTED_SIGNATURE="$(wget -q -O - https://composer.github.io/installer.sig)"
php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"
ACTUAL_SIGNATURE="$(php -r "echo hash_file('SHA384', 'composer-setup.php');")"
if [ "$EXPECTED_SIGNATURE" != "$ACTUAL_SIGNATURE" ]
then
echo 'ERROR: Invalid installer signature'
rm composer-setup.php
fi
Basically this fetches Composer, as you can see.
This is the Dockerfile:
FROM php:7.2-apache
RUN echo 'deb http://ftp.debian.org/debian stretch-backports main' > /etc/apt/sources.list.d/backports.list
RUN apt-get update
RUN apt-get install -y --no-install-recommends \
libpq-dev \
libxml2-dev \
ffmpeg \
imagemagick \
wget \
git \
zlib1g-dev \
libpng-dev \
unzip \
mencoder \
parallel \
ruby-dev
RUN apt-get -t stretch-backports install -y --no-install-recommends \
libav-tools \
&& rm -rf /var/lib/apt/lists/*
RUN docker-php-ext-install \
pcntl \
pdo_pgsql \
pgsql \
soap \
gd \
zip
RUN gem install compass
RUN a2enmod rewrite
ENV APACHE_RUN_USER root
ENV APACHE_RUN_GROUP root
EXPOSE 80
WORKDIR /app
COPY . /app
# Configuring apache to run the symfony app
COPY config/docker/apache.conf /etc/apache2/sites-enabled/000-default.conf
RUN echo "export DATABASE_URL" >> /etc/apache2/envvars \
&& echo ". /etc/environment" >> /etc/apache2/envvars
RUN wget -cqO- https://nodejs.org/dist/v10.15.3/node-v10.15.3-linux-x64.tar.xz | tar -xJ
RUN cp -a node-v10.15.3-linux-x64/bin /usr \
&& cp -a node-v10.15.3-linux-x64/include /usr \
&& cp -a node-v10.15.3-linux-x64/lib /usr \
&& cp -a node-v10.15.3-linux-x64/share /usr/ \
&& rm -rf node-v10.15.3-linux-x64 node-v10.15.3-linux-x64.tar.xz
RUN bin/composer-install \
&& php composer-setup.php --install-dir=/bin \
&& php -r "unlink('composer-setup.php');" \
# Install prestissimo for dramatically faster `composer install`
&& php /bin/composer.phar global require hirak/prestissimo
RUN APP_ENV=prod APP_SECRET= DATABASE_URL= AWS_KEY= AWS_SECRET= AWS_REGION= MEDIA_S3_BUCKET= \
GIPHY_API_KEY= FACEBOOK_APP_ID= FACEBOOK_APP_SECRET= \
GOOGLE_API_KEY= GOOGLE_CLIENT_ID= GOOGLE_CLIENT_SECRET= STRIPE_SECRET_KEY= STRIPE_ENDPOINT_SECRET= \
THEYSAIDSO_API_KEY= REV_CLIENT_API_KEY= REV_USER_API_KEY= REV_API_ENDPOINT= RENDER_QUEUE_URL= \
CLOUDWATCH_LOG_GROUP_NAME= \
php /bin/composer.phar install --no-interaction --no-dev --prefer-dist --optimize-autoloader --no-scripts \
&& php /bin/composer.phar clear-cache
RUN npm install \
&& node_modules/bower/bin/bower install --allow-root \
&& node_modules/grunt/bin/grunt
# Don't allow it to keep logs around; they're emitted on STDOUT and sent to AWS
# CloudWatch from there, so we don't need them on disk filling up the space
RUN mkdir -p var/cache/prod && chmod -R 777 var/cache/prod
RUN mkdir -p var/log && ln -s /dev/null var/log/prod.log \
&& ln -s /dev/null var/log/prod.deprecations.log && chmod -R 777 var/log
CMD ["/usr/bin/env", "bash", "./bin/start_render_worker"]
Like I said, unfortunately I don't have the slightest idea how Docker works or what's going on, just that I need it. I'm running Docker on Windows 10 Pro and, to make matters even worse, it actually works for another dev running Windows 10. We tried a few things but can't make it work. I tried cloning the repo to other locations with no success at all. Everything before this particular step runs correctly.
[EDIT]
As suggested by the users I ran RUN ls bin/ before the composer install line and this is the result:
Step 18/24 : RUN ls bin/
---> Running in 6cb72090a069
append_captions
capture
composer-install
concat_project_video
console
encode_frames
encode_frames_to_gif
format_video_for_concatenation
generate_meme_bar
image_to_video
install.sh
phpcs
phpunit
process_render_queue
publish_docker_image
run_animation_worker
run_render_worker
run_render_worker_osx
start_render_worker
update
Removing intermediate container 6cb72090a069
As you can see, composer-install is there, so this is quite baffling.
Also, I checked and set the line-ending sequence to LF, and the result is the same error.
[SECOND EDIT]
I added COPY bin/composer-install /bin
Then RUN ls bin/
And the results are the same. The ls command finds the file but the error persists. Also adding a slash before bin doesn't change anything :(