docker bash: npm: command not found after building container

I'm building an image with the Dockerfile below.
But after building, when I run npm install it says bash: npm: command not found.
I have two RUN commands that build two layers: the first basically builds the Python environment for the image, including installing the pyodbc driver and the necessary packages.
The second layer installs Node.
Which step in the Dockerfile is wrong?
ARG VARIANT=3
FROM mcr.microsoft.com/vscode/devcontainers/python:0-${VARIANT}
# Avoid warnings by switching to noninteractive
ENV DEBIAN_FRONTEND=noninteractive
ENV PYTHONUNBUFFERED 1
ARG USERNAME=vscode
ARG USER_UID=1000
ARG USER_GID=$USER_UID
# [Optional] Update UID/GID if needed
RUN if [ "$USER_GID" != "1000" ] || [ "$USER_UID" != "1000" ]; then \
groupmod --gid $USER_GID $USERNAME \
&& usermod --uid $USER_UID --gid $USER_GID $USERNAME \
&& chown -R $USER_UID:$USER_GID /home/$USERNAME; \
fi
# [Optional] If your requirements rarely change, uncomment this section to add them to the image.
#
COPY requirements.txt /tmp/pip-tmp/
RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add - \
&& curl https://packages.microsoft.com/config/debian/10/prod.list > /etc/apt/sources.list.d/mssql-release.list \
&& apt-get -y update \
&& ACCEPT_EULA=Y apt-get -y install msodbcsql17 \
&& ACCEPT_EULA=Y apt-get -y install mssql-tools \
&& echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bash_profile \
&& echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bashrc \
&& /bin/bash -c "source ~/.bashrc" \
&& apt-get install -y unixodbc-dev \
&& pip3 install django-mssql-backend \
&& apt-get install -y libgssapi-krb5-2 \
&& pip3 --disable-pip-version-check --no-cache-dir install -r /tmp/pip-tmp/requirements.txt \
&& rm -rf /tmp/pip-tmp
# ** [Optional] Uncomment this section to install additional packages. **
#
RUN apt-get update \
&& apt-get -y install --no-install-recommends apt-utils dialog 2>&1 \
#
# Verify git, process tools, lsb-release (common in install instructions for CLIs) installed
&& apt-get -y install git iproute2 procps lsb-release \
#
# Install pylint
#&& pip install pylint \
#
# Update Python environment based on requirements.txt
# && pip --disable-pip-version-check --no-cache-dir install -r /tmp/pip-tmp/requirements.txt \
# && rm -rf /tmp/pip-tmp \
#
# Install node for building front-ends
&& apt-get update \
&& apt-get -y install curl gnupg \
&& curl -sL https://deb.nodesource.com/setup_14.x | bash - \
&& apt-get -y install nodejs \
#
# Clean up
&& apt-get autoremove -y \
&& apt-get clean -y \
&& rm -rf /var/lib/apt/lists/*
# Switch back to dialog for any ad-hoc use of apt-get
ENV DEBIAN_FRONTEND=

The Node.js that Debian's own repositories install can be bogus. Use the official Node.js instructions at https://github.com/nodesource/distributions/blob/master/README.md#debinstall to install Node.js.
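For example, a minimal sketch of the NodeSource-based install for the second layer (assuming Node 14.x as in the question; the setup script runs apt-get update itself, and NodeSource's nodejs package bundles npm):
RUN curl -fsSL https://deb.nodesource.com/setup_14.x | bash - \
 && apt-get install -y nodejs \
 # verify both binaries made it into the image
 && node --version \
 && npm --version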

Related

Docker: /usr/bin/python: can't find '__main__' module in '.'

Below is my Dockerfile
FROM ubuntu:latest
COPY requirements.txt .
RUN apt-get update -y \
 && apt-get install -y telnet unixodbc-dev ksh curl apt-utils apt-transport-https debconf-utils gcc build-essential python2.7 python-dev \
 && curl https://bootstrap.pypa.io/pip/2.7/get-pip.py -o get-pip.py \
 && python get-pip.py \
 && curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add - \
 # each curl below overwrites the same file, so only the 21.04 list actually takes effect
 && curl https://packages.microsoft.com/config/ubuntu/16.04/prod.list > /etc/apt/sources.list.d/mssql-release.list \
 && curl https://packages.microsoft.com/config/ubuntu/18.04/prod.list > /etc/apt/sources.list.d/mssql-release.list \
 && curl https://packages.microsoft.com/config/ubuntu/20.04/prod.list > /etc/apt/sources.list.d/mssql-release.list \
 && curl https://packages.microsoft.com/config/ubuntu/21.04/prod.list > /etc/apt/sources.list.d/mssql-release.list \
 && apt-get update \
 && ACCEPT_EULA=Y apt-get install -y msodbcsql17 \
 && ACCEPT_EULA=Y apt-get install -y mssql-tools \
 # echo (not a subshell export) so the PATH line actually lands in ~/.bashrc
 && echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bashrc \
 && pip install -r requirements.txt \
 && apt-get -y clean \
 && rm -rf /var/lib/apt/lists/*
ENTRYPOINT ["python"]
Below is my requirements.txt
pyodbc==4.0.30
I am getting the below error:
/usr/bin/python: can't find '__main__' module in '.'
Cause: an extra "." at the end of the docker run command caused the issue; it is passed to ENTRYPOINT ["python"], which then tries to run the directory "." as a module.
Solution: docker run -t getting-started (with no trailing ".")
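As an illustration (assuming the image is tagged getting-started, as in the solution above):
docker run -t getting-started .
# the trailing "." becomes an argument to ENTRYPOINT ["python"], i.e. "python .",
# and Python cannot find a __main__ module in that directory
docker run -t getting-started
# no stray argument, so python starts normally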

NVIDIA Driver Not found during Nvidia + Cuda - Docker Image build

I am trying to create a GPU microservice using an NVIDIA CUDA base image, but during the docker build I am facing a "driver not found" issue. Can someone point out what is missing here?
Dockerfile:
FROM nvidia/cuda:10.1-devel
# Install some basic utilities
RUN apt-get update && apt-get install -y \
curl \
ca-certificates \
sudo \
git \
bzip2 \
libx11-6 \
&& rm -rf /var/lib/apt/lists/*
ENV CONDA_AUTO_UPDATE_CONDA=false
ENV PATH=/home/user/miniconda/bin:$PATH
RUN curl -sLo ~/miniconda.sh https://repo.continuum.io/miniconda/Miniconda3-py37_4.8.2-Linux-x86_64.sh \
&& chmod +x ~/miniconda.sh \
&& ~/miniconda.sh -b -p ~/miniconda \
&& rm ~/miniconda.sh \
&& conda install -y python==3.7 \
&& conda clean -ya
ENV PATH="/usr/local/cuda-10.1/bin:$PATH"
ENV LD_LIBRARY_PATH="/usr/local/cuda-10.1/lib64:$LD_LIBRARY_PATH"
ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility
ENV NVIDIA_VISIBLE_DEVICES=all
ENV FORCE_CUDA="1"
RUN conda install pytorch==1.4.0 torchvision==0.5.0 cudatoolkit=10.1 -c pytorch
RUN pip install -v -e .
Error:
"/home/user/miniconda/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1013, in _get_cuda_arch_flags
capability = torch.cuda.get_device_capability()
File "/home/user/miniconda/lib/python3.7/site-packages/torch/cuda/__init__.py", line 320, in get_device_capability
prop = get_device_properties(device)
File "/home/user/miniconda/lib/python3.7/site-packages/torch/cuda/__init__.py", line 325, in get_device_properties
_lazy_init() # will define _get_device_properties and _CudaDeviceProperties
File "/home/user/miniconda/lib/python3.7/site-packages/torch/cuda/__init__.py", line 196, in _lazy_init
_check_driver()
File "/home/user/miniconda/lib/python3.7/site-packages/torch/cuda/__init__.py", line 101, in _check_driver
http://www.nvidia.com/Download/index.aspx""")
AssertionError:
Found no NVIDIA driver on your system. Please check that you
have an NVIDIA GPU and installed a driver from
http://www.nvidia.com/Download/index.aspx
The issue happens during execution of the last step in the Dockerfile.
I tried multiple NVIDIA base images (cuda:10.1-base-ubuntu18.04, cuda:10.1-runtime-ubuntu18.04), but that didn't help much.
Any pointers appreciated.
After a lot of trial and error and going through a lot of documentation, this is what worked fine. (The root cause: no GPU, and hence no NVIDIA driver, is visible during docker build, so any build step that queries the device through torch.cuda fails; setting TORCH_CUDA_ARCH_LIST explicitly lets the extension build pick its architectures without querying a device.)
ARG PYTORCH=1.3
ARG CUDA=10.1
ARG CUDNN=7
FROM pytorch/pytorch:${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel
RUN mkdir /app
WORKDIR /app
ENV TORCH_CUDA_ARCH_LIST="5.2 6.0 6.1 7.0+PTX"
ENV TORCH_NVCC_FLAGS="-Xfatbin -compress-all"
ENV CMAKE_PREFIX_PATH="$(dirname $(which conda))/../"
RUN apt-get update && apt-get install -y libglib2.0-0 libsm6 libxrender-dev libxext6 \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# Install some basic utilities
RUN apt-get update && apt-get install -y \
curl \
ca-certificates \
sudo \
git \
bzip2 \
libx11-6 \
&& rm -rf /var/lib/apt/lists/*
RUN apt-get update && \
apt-get install -y --no-install-recommends \
build-essential g++ \
libglib2.0-0 libsm6 libxrender-dev libxext6 wget
# Create a non-root user and switch to it
RUN adduser --disabled-password --gecos '' --shell /bin/bash user \
&& chown -R user:user /app
RUN echo "user ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/90-user
USER user
# All users can use /home/user as their home directory
ENV HOME=/home/user
RUN chmod 777 /home/user
# Install Miniconda and Python 3.7
ENV CONDA_AUTO_UPDATE_CONDA=false
ENV PATH=/home/user/miniconda/bin:$PATH
RUN curl -sLo ~/miniconda.sh https://repo.continuum.io/miniconda/Miniconda3-py37_4.8.2-Linux-x86_64.sh \
&& chmod +x ~/miniconda.sh \
&& ~/miniconda.sh -b -p ~/miniconda \
&& rm ~/miniconda.sh \
&& conda install -y python==3.7 \
&& conda clean -ya
RUN conda install pytorch==1.4.0 torchvision==0.5.0 cudatoolkit=10.1 -c pytorch
RUN pip install -v -e .
Hope this helps!
Good luck!
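One caveat: the NVIDIA driver is only made visible to containers at run time by the NVIDIA container runtime, never during docker build, so GPU access must be requested when starting the container. For example (assuming Docker 19.03+ with the NVIDIA Container Toolkit installed; my-gpu-image is a placeholder tag):
docker run --gpus all my-gpu-image python -c "import torch; print(torch.cuda.is_available())"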

node installed in docker container but not npm

I have this Dockerfile
FROM docker:17.12.0-ce as static-docker-source
FROM ubuntu:18.04
FROM python:3.8-slim-buster
ARG CLOUD_SDK_VERSION=335.0.0
ENV CLOUD_SDK_VERSION=$CLOUD_SDK_VERSION
ENV PATH "$PATH:/opt/google-cloud-sdk/bin/"
COPY --from=static-docker-source /usr/local/bin/docker /usr/local/bin/docker
RUN apt-get -qqy update && apt-get install -qqy \
curl \
gcc \
apt-transport-https \
lsb-release \
openssh-client \
git \
gnupg && \
pip install crcmod && \
export CLOUD_SDK_REPO="cloud-sdk-$(lsb_release -c -s)" && \
echo "deb https://packages.cloud.google.com/apt $CLOUD_SDK_REPO main" > /etc/apt/sources.list.d/google-cloud-sdk.list && \
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - && \
apt-get update && \
apt-get install -y google-cloud-sdk=${CLOUD_SDK_VERSION}-0 \
google-cloud-sdk-app-engine-python=${CLOUD_SDK_VERSION}-0 \
google-cloud-sdk-app-engine-python-extras=${CLOUD_SDK_VERSION}-0 \
google-cloud-sdk-cbt=${CLOUD_SDK_VERSION}-0 \
kubectl && \
gcloud --version && \
docker --version && kubectl version --client
VOLUME ["/root/.config"]
RUN curl -sL https://deb.nodesource.com/setup_8.x | bash -
RUN curl -sL https://sentry.io/get-cli/ | bash -
RUN apt-get install -y nodejs build-essential pkg-config libcairo2-dev libjpeg-dev libgif-dev
ADD requirements.txt /
ADD requirements-prod.txt /
ADD submodules /
RUN python --version
RUN pip3 install --upgrade pip
RUN pip3 install -r /requirements.txt
RUN pip3 install -r /requirements-prod.txt
When I run node --version it returns v10.24.0 (I think it should be 8.x.y...).
And when I run npm --version it returns bash: npm: command not found.
Shouldn't installing node install npm too?
Thanks
So I fixed it by changing my Dockerfile.
I deleted FROM python:3.8-slim-buster, as I read that having multiple FROM lines in a Dockerfile can cause conflicts (only the last FROM determines the final image's base; earlier ones just define build stages). And I installed Python with apt like this:
RUN apt-get -qqy update && apt-get install -qqy \
curl \
gcc \
python3.8 \
python3-pip \
python3.8-dev \
However, it is not perfect to me because I now have Python 3.6.x under python3, Python 2.7.x under python, and I must specify python3.8 when I need it (see the sketch after the Dockerfile below)...
My new Dockerfile:
FROM docker:17.12.0-ce as static-docker-source
FROM ubuntu:18.04
ARG CLOUD_SDK_VERSION=335.0.0
ENV CLOUD_SDK_VERSION=$CLOUD_SDK_VERSION
ENV PATH "$PATH:/opt/google-cloud-sdk/bin/"
COPY --from=static-docker-source /usr/local/bin/docker /usr/local/bin/docker
RUN apt-get -qqy update && apt-get install -qqy \
curl \
gcc \
python3.8 \
python3-pip \
python3.8-dev \
apt-transport-https \
lsb-release \
openssh-client \
git \
gnupg && \
pip3 install crcmod && \
export CLOUD_SDK_REPO="cloud-sdk-$(lsb_release -c -s)" && \
echo "deb https://packages.cloud.google.com/apt $CLOUD_SDK_REPO main" > /etc/apt/sources.list.d/google-cloud-sdk.list && \
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - && \
apt-get update && \
apt-get install -y google-cloud-sdk=${CLOUD_SDK_VERSION}-0 \
google-cloud-sdk-app-engine-python=${CLOUD_SDK_VERSION}-0 \
google-cloud-sdk-app-engine-python-extras=${CLOUD_SDK_VERSION}-0 \
google-cloud-sdk-cbt=${CLOUD_SDK_VERSION}-0 \
kubectl && \
gcloud --version && \
docker --version && kubectl version --client
VOLUME ["/root/.config"]
RUN curl -sL https://deb.nodesource.com/setup_8.x | bash -
RUN curl -sL https://sentry.io/get-cli/ | bash -
RUN apt-get install -y nodejs build-essential pkg-config libcairo2-dev libjpeg-dev libgif-dev
RUN node --version
RUN npm --version
RUN python --version
ADD requirements.txt /
ADD requirements-prod.txt /
ADD submodules /
RUN python --version
RUN pip3 install --upgrade pip
RUN pip3 install -r /requirements.txt
RUN pip3 install -r /requirements-prod.txt
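A possible workaround for the "must specify python3.8" annoyance mentioned above is to repoint python3 with update-alternatives; a minimal sketch, assuming the Ubuntu 18.04 setup from this Dockerfile (note that repointing python3 can confuse distribution tools that expect the stock Python):
RUN update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.8 1
# python3 should now resolve to python3.8
RUN python3 --version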

Why does docker-compose build start from the beginning?

For example, every time I build it will:
Copy package.json
Install the packages from package.json
Add the current directory.
My question is: why does it not use the cache? For example, it should not reinstall the packages from scratch if package.json has not changed.
It should use the cache and rebuild only the changed code.
Update:
Dockerfile
FROM ubuntu:18.04
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
RUN apt-get update && apt-get upgrade -y \
&& apt-get install -y --no-install-recommends \
build-essential \
ca-certificates \
gcc \
git \
libpq-dev \
make \
python-pip \
python2.7 \
python2.7-dev \
apt-transport-https \
curl \
g++ \
sudo \
wget \
bzip2 \
chrpath \
libssl-dev \
libxft-dev \
libfreetype6 \
libfreetype6-dev \
libfontconfig1 \
libfontconfig1-dev \
libfontconfig \
poppler-utils \
imagemagick \
&& apt-get clean \
&& rm -rf /tmp/* /var/lib/apt/lists/* \
&& apt-get -y autoclean
RUN apt-get update && apt-get install -y --no-install-recommends software-properties-common && add-apt-repository ppa:malteworld/ppa && apt update && apt install -y --no-install-recommends pdftk \
&& apt-get clean \
&& rm -rf /tmp/* /var/lib/apt/lists/* \
&& apt-get -y autoclean
ENV NVM_DIR /usr/local/nvm
ENV NODE_VERSION 10.6.0
# Install nvm with node and npm
RUN curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.29.0/install.sh | bash \
&& source $NVM_DIR/nvm.sh \
&& nvm install $NODE_VERSION \
&& nvm alias default $NODE_VERSION \
&& nvm use default
# Set up our PATH correctly so we don't have to long-reference npm, node, &c.
ENV NODE_PATH $NVM_DIR/versions/node/v$NODE_VERSION/lib/node_modules
ENV PATH $NVM_DIR/versions/node/v$NODE_VERSION/bin:$PATH
# Set the work directory
RUN mkdir -p /var/www/app/jobsaf-website
RUN mkdir /data
RUN mkdir /data/db
WORKDIR /var/www/app/jobsaf-website
RUN npm install -g node-gyp @angular/cli@6.2.3 nodemon request
# Add our package.json and install *before* adding our application files
COPY package.json ./
# RUN npm install --force
RUN npm install --force
RUN npm rebuild node-sass
# Add application files
ADD . .
EXPOSE 3000 5858 4200 35729 27017 6379 49153
.dockerignore
# See http://help.github.com/ignore-files/ for more about ignoring files.
# compiled output
/tmp
/public/__build__/
/src/*/__build__/
/__build__/**
/public/dist/
/src/*/dist/
/dist/**
/.awcache
.webpack.json
/compiled/
dll/
package-lock.json
# dependencies
/node_modules
*/node_modules
# IDEs and editors
/.idea
.project
.classpath
.c9/
*.launch
**.js.map
.settings/
# IDE - VSCode
.vscode/
# misc
/.sass-cache
/connect.lock
/coverage/*
/libpeerconnection.log
npm-debug.log
testem.log
/typings
# e2e
/e2e/*.js
/e2e/*.map
#System Files
.DS_Store
Thumbs.db
*.csv
*.dat
*.iml
*.log
*.out
*.pid
*.seed
*.sublime-*
*.swo
*.swp
*.tgz
*.xml
.strong-pm
coverage
npm-debug*
/admin/dist
npm
/.cache-loader/*
stats.json
!/src/assets/js/admin-header.js
!/src/assets/js/website-custom.js
webpack-cache/
web/
/src/app/**/*.map
/src/app/**/*.js
--force should be removed from the following line, as it ignores any cache and does a fresh installation of your packages, which leads to a new Docker build layer starting from the installation step.
RUN npm install --force
From the npm docs: the -f or --force argument will force npm to fetch remote resources even if a local copy exists on disk.
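For reference, a cache-friendly skeleton of the pattern the Dockerfile already follows (a sketch, assuming a node:10 base to match NODE_VERSION above): copy the package manifest and install first, then add the source, so the install layer is reused whenever package.json is unchanged:
FROM node:10
WORKDIR /var/www/app/jobsaf-website
# cached until package.json changes
COPY package.json ./
RUN npm install
# source edits only invalidate layers from here down
ADD . .
EXPOSE 3000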

Why am I getting this error?

I'm trying to copy jenkins-cli-wrapper.sh to a directory, but I'm getting the following error.
My code is given below:
FROM ubuntu:14.04
RUN apt-get update && \
apt-get upgrade -y && \
apt-get install -y software-properties-common && \
add-apt-repository ppa:webupd8team/java -y && \
apt-get update && \
echo oracle-java7-installer shared/accepted-oracle-license-v1-1 select true | /usr/bin/debconf-set-selections && \
apt-get install -f -y oracle-java9-installer && \
apt install -y default-jre curl wget git nano; \
apt-get clean
ENV JAVA_HOME /usr
ENV PATH $JAVA_HOME/bin:$PATH
WORKDIR /jenkins-cli
COPY jenkins-cli-wrapper.sh .
ENV JENKINS_URL "localhost:8080"
ENV PRIVATE_KEY "/ssh/id_rsa"
VOLUME /ssh
ENTRYPOINT ["./jenkins-cli-wrapper.sh"]
CMD ["help"]`
Also, whenever I try to use ADD or COPY I get this error in every other Dockerfile as well.
