Alpine Docker Image not finding AWS after build and registry in GitLab - docker

I'm running into an issue with a custom Docker image. I've installed a number of tools and all seem to be working except for the AWSCLI.
I install it here:
RUN apk -v --update add \
    python \
    py-pip \
    groff \
    less \
    mailcap \
    && \
    pip install --upgrade awscli==1.14.5 s3cmd==2.0.1 python-magic && \
    apk -v --purge del py-pip && \
    rm -rf /var/cache/apk/*
VOLUME /root/.aws
This installs successfully; I even ran aws --version to confirm there were no errors. But when running in .gitlab-ci.yml, aws is not recognized while my other tools are.
Here is the command I'm running:
aws ec2 describe-instances --filters "Name=tag:Project,Values=" --region us-east-2 --query "Reservations[].Instances[].[PrivateIpAddress]" --output=text
This is the error I get:
/bin/sh: eval: line 132: aws: not found

The core of your problem is the same as in this question:
awscli not added to path after installation
A specific version of Python was installed and its /bin folder is not on the system executable PATH. You need to add that Python version's bin directory to the PATH, for example:
ENV PATH "$PATH:/Library/Frameworks/Python.framework/Versions/3.8/bin"
Another variant: install only py-pip and it will pull in python and install aws globally. Do NOT remove py-pip afterwards, or it will clean up the references to aws.
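For the Alpine image above, here is a minimal sketch of the second variant, assuming an Alpine base (the alpine:3.7 tag is illustrative): py-pip stays installed, so the aws entry point and its references remain intact.
FROM alpine:3.7
# Install python and pip, then the AWS CLI; py-pip is deliberately NOT removed afterwards
RUN apk -v --update add python py-pip groff less mailcap && \
    pip install --upgrade awscli==1.14.5 s3cmd==2.0.1 python-magic && \
    rm -rf /var/cache/apk/*
VOLUME /root/.aws
# Sanity check at build time that the binary is on the PATH
RUN aws --version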

Related

Error in connection when building image with Minikube

I've downloaded Minikube and I'm using it in my application.
I've prepared my local docker command to use the one provided by minikube with eval $(minikube docker-env)
Finally, I'm using docker-compose with commands like docker-compose build myimage and I'm getting the following error:
failed to get status: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing unable to upgrade to h2c, received 404"
Any idea what could be the problem? Apart from this, I find that docker-compose and docker behave as I expect.
The relevant section of the docker-compose.yml is
myservice:
  build:
    context: .
    dockerfile: Dockerfile
  image: myimage
And for the Dockerfile:
FROM python:3.8-slim
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && \
apt-get -y upgrade && \
apt-get -y install \
build-essential \
gettext-base \
libffi-dev \
libldap2-dev \
libmagic1 \
libsasl2-dev \
libssl-dev \
libxml2-dev \
libxmlsec1-dev \
libxslt1-dev \
libyaml-dev \
pkg-config \
&& \
apt-get clean && rm -rf /var/lib/apt/lists/*
RUN pip install --upgrade pip pipenv && rm -rf ~/.cache/pip
ENV PYTHONPATH=/opt/app/src:/opt/app/src/vendor
RUN mkdir -p /opt/app
COPY build/Pipfile build/Pipfile.lock /tmp/
WORKDIR /tmp
RUN pipenv install --system && rm -rf ~/.cache/pip{,env,-tools}
Also, I want to stress that this works perfectly when I use it locally. It's only when I try to use it with minikube that it starts failing.
According to the information I found on GitHub, there are two possible causes for this behavior: one user who got the same error as you solved it by turning off the 'Experimental Features' option, and another user had to downgrade their Docker version and rebuild the deployment.
I experienced the same issue building within GitLab CI on a Debian 11 image, with Docker 20.10.5 and Compose 2.5.1.
I was able to work around it by setting DOCKER_BUILDKIT=0 in the build environment.
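A minimal sketch of that workaround in .gitlab-ci.yml, assuming a job that runs docker-compose build (the job name and service name are illustrative):
build-image:
  variables:
    # Fall back to the legacy builder instead of BuildKit
    DOCKER_BUILDKIT: "0"
  script:
    - docker-compose build myimage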

install python3.6 on amazonlinux docker image

I have been experimenting with creating a Docker image with Python 3.6 based on amazonlinux.
So far, I have not been very successful. I use
docker run -it amazonlinux
to start an interactive Docker terminal. Inside the terminal, I run "yum install python36" and see the following error message. Note that I copied this step from an old amazonlinux-based Dockerfile. This Dockerfile used to work, so I suspect the error I see below is due to Amazon updating their Docker Linux image.
bash-4.2# yum install python36
Loaded plugins: ovl, priorities
amzn2-core | 2.4 kB 00:00:00
No package python36 available.
Error: Nothing to do
I have tried to add a python3.6 repo by following this post
https://janikarhunen.fi/how-to-install-python-3-6-1-on-centos-7. However, it still gives the same error when I run
yum install python36u
Is there any way to add python3.6 to amazonlinux base layer? Thanks in advance.
There is now a far easier answer to this question thanks to the Amazon Linux 'extras' mechanism. Now this will work:
amazon-linux-extras install python3
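A minimal Dockerfile sketch of this approach, assuming the Amazon Linux 2 base image (which the amzn2-core repository in your yum output suggests you are already using):
FROM amazonlinux:2
# Install Python 3 from the Amazon Linux Extras repository and trim the yum cache
RUN amazon-linux-extras install -y python3 && \
    yum clean all && rm -rf /var/cache/yum
# Verify the interpreter is available at build time
RUN python3 --version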
You can check this Dockerfile, which is based on Amazon Linux and pins the Python version to PYTHON_VERSION=3.6.4.
Or you can work with your existing one like
ARG PYTHON_VERSION=3.6.4
ARG BOTO3_VERSION=1.6.3
ARG BOTOCORE_VERSION=1.9.3
ARG APPUSER=app
RUN yum -y update &&\
yum install -y shadow-utils findutils gcc sqlite-devel zlib-devel \
bzip2-devel openssl-devel readline-devel libffi-devel && \
groupadd ${APPUSER} && useradd ${APPUSER} -g ${APPUSER} && \
cd /usr/local/src && \
curl -O https://www.python.org/ftp/python/${PYTHON_VERSION}/Python-${PYTHON_VERSION}.tgz && \
tar -xzf Python-${PYTHON_VERSION}.tgz && \
cd Python-${PYTHON_VERSION} && \
./configure --enable-optimizations && make && make altinstall && \
rm -rf /usr/local/src/Python-${PYTHON_VERSION}* && \
yum remove -y shadow-utils audit-libs libcap-ng && yum -y autoremove && \
yum clean all
But it is better to clone the repo and build your own image from that.
I too had a similar issue, with docker:
yum install docker
Loaded plugins: ovl, priorities
amzn2-core | 3.7 kB 00:00:00
No package docker available.
Error: Nothing to do
Instead of yum, I used amazon-linux-extras, and it worked:
amazon-linux-extras install docker

Tensorflow serving GPU using REST API and SSL self certificate

I am trying to set up TensorFlow Serving with GPU support and the REST API in a CentOS 7 Docker container, but I am unable to find an exact procedure for this. Do I need to install the following dependencies?
I have installed CUDA 9.0
cuDNN 7.4
NCCL 2.x
I haven't started building TensorFlow Serving with GPU support yet. I'm still in the research stage, and every article I find covers installation on Ubuntu while I'm trying to install on CentOS 7, so I don't have a Dockerfile yet.
Hope this may help you and me to get to a solution.
Here is what I use to build a tensorflow-serving-runtime docker image.
FROM nvidia/cuda:9.0-cudnn7-runtime-centos7
ARG TF_VERSION=1.9.0
RUN yum install -y \
yum-plugin-ovl \
libgomp \
ca-certificates \
zip \
unzip \
curl \
&& \
yum clean all
WORKDIR /usr/
# Change your way to get nccl library here
RUN curl -sSL -o /usr/nccl_2.2.13-1-cuda9.0_x86_64.tgz http://some-of-my-net-disk/tensorflow-serving/lib/nccl_2.2.13-1-cuda9.0_x86_64.tgz && \
tar -xvf nccl_2.2.13-1-cuda9.0_x86_64.tgz &&\
rm -f nccl_2.2.13-1-cuda9.0_x86_64.tgz
ENV LD_LIBRARY_PATH /usr/nccl_2.2.13-1+cuda9.0_x86_64/lib/:${LD_LIBRARY_PATH}
# Change your way to get tensorflow_model_server here
WORKDIR /serving
RUN curl -sSL -o /usr/local/bin/tensorflow_model_server http://some-of-my-net-disk/tensorflow-serving/bin/tf-serving-${TF_VERSION}/tensorflow_model_server_gpu-centos &&\
chmod u+x /usr/local/bin/tensorflow_model_server
For me, this worked fine. Hope it helps.
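A hedged usage sketch for that image, assuming it was built as tf-serving-gpu and a SavedModel lives under /models/mymodel on the host (both names are illustrative); tensorflow_model_server exposes the REST API through its --rest_api_port flag:
docker build -t tf-serving-gpu .
docker run --runtime=nvidia -p 8501:8501 \
    -v /models/mymodel:/models/mymodel \
    tf-serving-gpu \
    tensorflow_model_server --rest_api_port=8501 \
        --model_name=mymodel --model_base_path=/models/mymodel
For the self-signed SSL part, a common approach is to terminate TLS in a reverse proxy (e.g. nginx) placed in front of port 8501 rather than in tensorflow_model_server itself.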

How to save my installations on Ubuntu image into Docker

Docker commands
# Import Ubuntu image to Docker
docker pull ubuntu:16.04
docker run -it ubuntu:16.04
# Install Python3 and pip3
apt-get update
apt-get install -y python3 python3-pip
# Install Selenium
pip3 install selenium
# Install BeautifulSoup4
pip3 install beautifulsoup4
# Install library for PhantomJS
apt-get install -y wget libfontconfig
# Downloading and installing binary
mkdir -p /home/root/src && cd $_
tar jxvf phantomjs-2.1.1-linux-x86_64.tar.bz2
cd phantomjs-2.1.1-linux-x86_64/bin/
cp phantomjs /usr/local/bin/
# Installing font
apt-get install -y fonts-nanum*
Question
I am trying to import an Ubuntu image into Docker and install several packages including python3, pip3, bs4, and PhantomJS. Then I want to save all of this configuration in Docker as "ubuntu-phantomjs". As I am currently inside the Ubuntu container, any command that starts with 'docker' does not work. How can I save my image?
Here is the dockerfile:
# Import Ubuntu image to Docker
FROM ubuntu:16.04
# Install Python3, pip3, library and fonts
RUN apt-get update && apt-get install -y \
python3 \
python3-pip \
wget libfontconfig \
fonts-nanum* \
&& rm -rf /var/lib/apt/lists/*
RUN pip3 install selenium beautifulsoup4
# Downloading and installing binary
# (this assumes phantomjs-2.1.1-linux-x86_64.tar.bz2 has already been fetched into the build
#  context and added to the image, e.g. via a COPY or wget step, before it is unpacked)
RUN mkdir -p /home/root/src && cd /home/root/src && \
    tar jxvf phantomjs-2.1.1-linux-x86_64.tar.bz2 && \
    cd phantomjs-2.1.1-linux-x86_64/bin/ && \
    cp phantomjs /usr/local/bin/
Now, after saving the code in a file named Dockerfile, open a terminal in the same directory as the one where the file is stored, and run the following command:
$ docker build -t ubuntu-phantomjs .
-t tags the resulting image as ubuntu-phantomjs, and . means that the build context for docker is the current directory. The above Dockerfile is not a standard one and does not follow all the good practices mentioned here. You can change this file according to your needs; read the documentation for more help.
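Once the build succeeds, a quick smoke test (the commands below are only illustrative checks that the installed tools are on the PATH inside the image):
$ docker run --rm -it ubuntu-phantomjs phantomjs --version
$ docker run --rm -it ubuntu-phantomjs python3 -c "import selenium, bs4; print('ok')"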

GCP Docker error: File does not reside within any path specified using --proto_path (or -I)

We are trying to host a TensorFlow object-detection model on GCP.
We have maintained the below directory structure before running "gcloud app deploy".
For your convenience I am attaching the configuration files to the question.
We are getting a deployment error, which is mentioned below. Please suggest a solution.
+root
  +object_detection/
  +slim/
  +env
  +app.yaml
  +Dockerfile
  +requirement.txt
  +index.html
  +test.py
Dockerfile
FROM gcr.io/google-appengine/python
LABEL python_version=python2.7
RUN virtualenv --no-download /env -p python2.7
# Set virtualenv environment variables. This is equivalent to running
# source /env/bin/activate
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
# Various Python and C/build deps
RUN apt-get update && apt-get install -y \
wget \
build-essential \
cmake \
git \
unzip \
pkg-config \
python-dev \
python-opencv \
libopencv-dev \
libav-tools \
libjpeg-dev \
libpng-dev \
libtiff-dev \
libjasper-dev \
libgtk2.0-dev \
python-numpy \
python-pycurl \
libatlas-base-dev \
gfortran \
webp \
python-opencv \
qt5-default \
libvtk6-dev \
zlib1g-dev \
protobuf-compiler \
python-pil python-lxml \
python-tk
# Install Open CV - Warning, this takes absolutely forever
ADD requirements.txt /app/
RUN pip install -r requirements.txt
ADD . /app/
RUN protoc /app/object_detection/protos/*.proto --python_out=/app/.
RUN export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/app/slim
CMD exec gunicorn -b :$PORT UploadTest:app
requirement.txt
Flask==0.12.2
gunicorn==19.7.1
numpy==1.13.1
requests==0.11.1
bs4==0.0.1
nltk==3.2.1
pymysql==0.7.2
xlsxwriter==0.8.5
Pillow==4.2.1
pytesseract==0.1
opencv-python>=3.0
matplotlib==2.0.2
tensorflow==1.3.0
lxml==4.0.0
app.yaml
runtime: custom
env: flex
entrypoint: gunicorn -b :$PORT UploadTest:app
threadsafe: true
runtime_config:
  python_version: 2
After all this, I am setting up the Google Cloud environment with gcloud init
and then starting the deployment with gcloud app deploy.
I am getting the below error while deploying the solution.
Error:
Step 10/12 : RUN protoc /app/object_detection/protos/*.proto --python_out=/app/.
---> Running in 9b3ec9c43c2d
/app/object_detection/protos/anchor_generator.proto: File does not reside within any path specified using --proto_path (or -I). You must specify a --proto_path which encompasses this file. Note that the proto_path must be an exact prefix of the .proto file names -- protoc is too dumb to figure out when two paths (e.g. absolute and relative) are equivalent (it's harder than you think).
The command '/bin/sh -c protoc /app/object_detection/protos/*.proto --python_out=/app/.' returned a non-zero code: 1
ERROR
ERROR: build step "gcr.io/cloud-builders/docker#sha256:a4a83be9b2fb61452e864ecf1bcfca99d1845499ef9500ae2905cea0ea593769" failed: exit status 1
----------------------------------------------------------------------------------------------------------------------------------------------
ERROR: (gcloud.app.deploy) Cloud build failed. Check logs at https://console.cloud.google.com/gcr/builds/4dba3217-b7d6-4341-b28e-09a9dad45c18?
There is a directory "object_detection/protos" present and all the necessary files are present there, yet I am still getting the deployment error. Please suggest what to change in the Dockerfile to deploy it successfully.
My assumption: GCP is not able to figure out the path of the protoc files. Maybe I have to alter something in the Dockerfile, but I am not able to figure out the solution. Please answer.
NB: This setup runs well on my local machine, but it does not work on GCP.
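Based on the error message's own hint (the --proto_path must be an exact prefix of the .proto file names), a hedged sketch of a possible Dockerfile change is to pass /app as the proto path explicitly, or to run protoc from /app with relative paths; this is only a sketch against the Dockerfile above, not a verified GCP-specific fix:
# Variant 1: pass a proto_path that is an exact prefix of the absolute .proto paths
RUN protoc --proto_path=/app /app/object_detection/protos/*.proto --python_out=/app
# Variant 2: run from /app so the default proto_path (the current directory) matches the relative paths
RUN cd /app && protoc object_detection/protos/*.proto --python_out=.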
