Why is the Puppeteer docker image running CMD ["google-chrome-stable"]?

I was reading the Dockerfile in the docker folder of the Puppeteer repository:
FROM node:16
# Install latest chrome dev package and fonts to support major charsets (Chinese, Japanese, Arabic, Hebrew, Thai and a few others)
# Note: this installs the necessary libs to make the bundled version of Chromium that Puppeteer
# installs, work.
RUN apt-get update \
&& apt-get install -y wget gnupg \
&& wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - \
&& sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list' \
&& apt-get update \
&& apt-get install -y google-chrome-stable fonts-ipafont-gothic fonts-wqy-zenhei fonts-thai-tlwg fonts-khmeros fonts-kacst fonts-freefont-ttf libxss1 \
--no-install-recommends \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /home/pptruser
COPY puppeteer-latest.tgz /home/pptruser/puppeteer-latest.tgz
# Install puppeteer into /home/pptruser/node_modules.
RUN npm i ./puppeteer-latest.tgz \
&& rm puppeteer-latest.tgz \
# Add user so we don't need --no-sandbox.
# same layer as npm install to keep re-chowned files from using up several hundred MBs more space
&& groupadd -r pptruser && useradd -r -g pptruser -G audio,video pptruser \
&& mkdir -p /home/pptruser/Downloads \
&& chown -R pptruser:pptruser /home/pptruser \
&& (node -e "require('child_process').execSync(require('puppeteer').executablePath() + ' --credits', {stdio: 'inherit'})" > THIRD_PARTY_NOTICES)
USER pptruser
CMD ["google-chrome-stable"]
I created the .tgz archive the Dockerfile needs:
git clone the repository
npm install
npm pack, then put the generated .tgz in the docker folder
renamed the .tgz so the Dockerfile can pick it up
The image builds correctly. But why did they launch the Chrome browser at the end with the CMD instruction? It is not needed to run Puppeteer afterwards, is it?
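My current understanding (please correct me if wrong): CMD only sets the image's default process; it is not executed during the build, and it is replaced by whatever command you pass to docker run. So a Puppeteer script could be run with something like the following sketch, where the image name and script path are just examples:
docker run -i --init --rm --cap-add=SYS_ADMIN \
  -v "$(pwd)":/home/pptruser/app \
  my-puppeteer-image node app/screenshot.js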

Related

CircleCI Docker: My workflow is stopping at Spin up environment

I am trying to run Docker in CircleCI. I built our own image and tried to run it directly in CircleCI. The weird thing is, it stops right after "Spin up environment". Did I miss something?
If anyone can suggest an image that supports Firefox, Selenium, Behave and Python, please let me know. Hopefully something that is not deprecated. I also have other requirements listed in requirements.txt below:
behave>=1.2.6
numpy>= 1.16.6
openpyxl>=2.6.4
parse>=1.19.0
parse-type>=0.6.0
pip>=20.3.4
pytz>=2022.2.1
selenium>=4.0.0a6.post2
self>=2020.12.3
setuptools>=44.1.1
Here is the content of my Dockerfile:
FROM ubuntu:bionic
RUN apt-get update && apt-get install -y \
python-pip \
fonts-liberation libappindicator3-1 libasound2 libatk-bridge2.0-0 \
libnspr4 libnss3 lsb-release xdg-utils libxss1 libdbus-glib-1-2 \
curl unzip wget \
xvfb
# install geckodriver and firefox
ARG GECKODRIVER_VERSION=0.31.0
RUN wget --no-verbose -O /tmp/geckodriver.tar.gz https://github.com/mozilla/geckodriver/releases/download/v$GECKODRIVER_VERSION/geckodriver-v$GECKODRIVER_VERSION-linux64.tar.gz \
&& rm -rf /opt/geckodriver \
&& tar -C /opt -zxf /tmp/geckodriver.tar.gz \
&& rm /tmp/geckodriver.tar.gz \
&& mv /opt/geckodriver /opt/geckodriver-$GECKODRIVER_VERSION \
&& chmod 755 /opt/geckodriver-$GECKODRIVER_VERSION \
&& ln -fs /opt/geckodriver-$GECKODRIVER_VERSION /usr/bin/geckodriver
RUN FIREFOX_SETUP=firefox-setup.tar.bz2 && \
apt-get purge firefox && \
wget -O $FIREFOX_SETUP "https://download.mozilla.org/?product=firefox-latest&os=linux64" && \
tar xjf $FIREFOX_SETUP -C /opt/ && \
ln -s /opt/firefox/firefox /usr/bin/firefox && \
rm $FIREFOX_SETUP
# install selenium
RUN pip install selenium
RUN pip install behave
COPY requirements.txt /opt/app/requirements.txt
COPY . /opt/app
WORKDIR /opt/app
RUN pip install -r requirements.txt
Here is my config.yml
version: 2.1
orbs:
  python: circleci/python@1.5.0
jobs:
  jobName:
    docker:
      - image: dockerImageName
    steps:
      - checkout
      - python/install-packages:
          pkg-manager: pip
      - run:
          name: Run tests
          command: behave
workflows:
  featureFiles:
    jobs:
      - jobName
Here is what it looks like in CircleCI (screenshot omitted; the job halts at the "Spin up environment" step).
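Since "Spin up environment" is the step where CircleCI pulls and starts the primary container, a first sanity check (image name is a placeholder, matching the config above) is to confirm the image can be pulled and can run the test command locally:
docker pull dockerImageName
docker run --rm -it dockerImageName behave --version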

NVIDIA driver not found during NVIDIA + CUDA Docker image build

I am trying to create a GPU microservice using an NVIDIA CUDA base image, but during the docker build I am facing a "driver not found" issue. Can someone point out what is missing here?
Dockerfile:
FROM nvidia/cuda:10.1-devel
# Install some basic utilities
RUN apt-get update && apt-get install -y \
curl \
ca-certificates \
sudo \
git \
bzip2 \
libx11-6 \
&& rm -rf /var/lib/apt/lists/*
ENV CONDA_AUTO_UPDATE_CONDA=false
ENV PATH=/home/user/miniconda/bin:$PATH
RUN curl -sLo ~/miniconda.sh https://repo.continuum.io/miniconda/Miniconda3-py37_4.8.2-Linux-x86_64.sh \
&& chmod +x ~/miniconda.sh \
&& ~/miniconda.sh -b -p ~/miniconda \
&& rm ~/miniconda.sh \
&& conda install -y python==3.7 \
&& conda clean -ya
ENV PATH="/usr/local/cuda-10.1/bin:$PATH"
ENV LD_LIBRARY_PATH="/usr/local/cuda-10.1/lib64:$LD_LIBRARY_PATH"
ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility
ENV NVIDIA_VISIBLE_DEVICES=all
ENV FORCE_CUDA="1"
RUN conda install pytorch==1.4.0 torchvision==0.5.0 cudatoolkit=10.1 -c pytorch
RUN pip install -v -e .
Error:
"/home/user/miniconda/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1013, in _get_cuda_arch_flags
capability = torch.cuda.get_device_capability()
File "/home/user/miniconda/lib/python3.7/site-packages/torch/cuda/__init__.py", line 320, in get_device_capability
prop = get_device_properties(device)
File "/home/user/miniconda/lib/python3.7/site-packages/torch/cuda/__init__.py", line 325, in get_device_properties
_lazy_init() # will define _get_device_properties and _CudaDeviceProperties
File "/home/user/miniconda/lib/python3.7/site-packages/torch/cuda/__init__.py", line 196, in _lazy_init
_check_driver()
File "/home/user/miniconda/lib/python3.7/site-packages/torch/cuda/__init__.py", line 101, in _check_driver
http://www.nvidia.com/Download/index.aspx""")
AssertionError:
Found no NVIDIA driver on your system. Please check that you
have an NVIDIA GPU and installed a driver from
http://www.nvidia.com/Download/index.aspx
The issue happens during execution of the last step in the Dockerfile.
I tried using multiple NVIDIA base Docker images (cuda:10.1-base-ubuntu18.04, cuda:10.1-runtime-ubuntu18.04), but that didn't help much.
Any pointers appreciated.
After a lot of trial and error and going through a lot of documentation, this is what worked:
ARG PYTORCH=1.3
ARG CUDA=10.1
ARG CUDNN=7
FROM pytorch/pytorch:1.3-cuda10.1-cudnn7-devel
RUN mkdir /app
WORKDIR /app
ENV TORCH_CUDA_ARCH_LIST="5.2 6.0 6.1 7.0+PTX"
ENV TORCH_NVCC_FLAGS="-Xfatbin -compress-all"
ENV CMAKE_PREFIX_PATH="$(dirname $(which conda))/../"
RUN apt-get update && apt-get install -y libglib2.0-0 libsm6 libxrender-dev libxext6 \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# Install some basic utilities
RUN apt-get update && apt-get install -y \
curl \
ca-certificates \
sudo \
git \
bzip2 \
libx11-6 \
&& rm -rf /var/lib/apt/lists/*
RUN apt-get update && \
apt-get install -y --no-install-recommends \
build-essential g++ \
libglib2.0-0 libsm6 libxrender-dev libxext6 wget
# Create a non-root user and switch to it
RUN adduser --disabled-password --gecos '' --shell /bin/bash user \
&& chown -R user:user /app
RUN echo "user ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/90-user
USER user
# All users can use /home/user as their home directory
ENV HOME=/home/user
RUN chmod 777 /home/user
# Install Miniconda and Python 3.7
ENV CONDA_AUTO_UPDATE_CONDA=false
ENV PATH=/home/user/miniconda/bin:$PATH
RUN curl -sLo ~/miniconda.sh https://repo.continuum.io/miniconda/Miniconda3-py37_4.8.2-Linux-x86_64.sh \
&& chmod +x ~/miniconda.sh \
&& ~/miniconda.sh -b -p ~/miniconda \
&& rm ~/miniconda.sh \
&& conda install -y python==3.7 \
&& conda clean -ya
RUN conda install pytorch==1.4.0 torchvision==0.5.0 cudatoolkit=10.1 -c pytorch
RUN pip install -v -e .
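If I understand it correctly, the key point is that no NVIDIA driver is visible during docker build, so any build step that calls into torch.cuda fails; setting TORCH_CUDA_ARCH_LIST lets the extension compile without querying a GPU, and the driver only appears when the container is started with GPU access (this needs the NVIDIA Container Toolkit on the host; the image tag below is illustrative):
# Build needs no GPU because TORCH_CUDA_ARCH_LIST pins the target architectures:
docker build -t gpu-microservice .
# The driver is only injected at runtime:
docker run --rm --gpus all gpu-microservice python -c "import torch; print(torch.cuda.is_available())"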
Hope this helps!
Good luck!

Docker issue : Chrome failed to start: exited abnormally (unknown error: DevToolsActivePort file doesn't exist) : Chrome Browser and Driver 78

I recently updated Chrome Browser to version 78, and it has caused an issue.
I am running Selenium tests inside a Linux Docker container in headless Chrome mode, with the latest chrome 78.0.3904.108, driver 78.0.3904.105, selenium 3.141.0 and SpecFlow packages 3.1.67.
I have tried almost all the capabilities suggested on this forum to run Chrome headlessly inside Docker.
case "Headless_Chrome":
string driverPath = "/opt/selenium/";
string driverExecutableFileName = "chromedriver";
ChromeDriverService service_headless = ChromeDriverService.CreateDefaultService(driverPath, driverExecutableFileName);
chrome_options.BinaryLocation = "/opt/google/chrome/chrome";
chrome_options.AddArgument("--no-sandbox");
chrome_options.AddArgument("--headless");
chrome_options.AddArgument("--window-size=1420,1080");
chrome_options.AddArgument("--disable-extensions");
chrome_options.AddArgument("--proxy-server='direct://'");
chrome_options.AddArgument("--proxy-bypass-list=*");
chrome_options.AddArgument("--disable-gpu"); //even will come redundant in case of linux
chrome_options.AddArgument("--disable-dev-shm-usage"); // to fix - error: unknown error: session deleted because of page crash
chrome_options.AddArgument("--remote-debugging-port=9222");
chrome_options.AddArgument("--remote-debugging-address=0.0.0.0");
chrome_options.AddArgument("--disable-infobars");
chrome_options.AddArgument("--user-data-dir=/data");
chrome_options.AddArgument("--disable-features=VizDisplayCompositor"); //to save from zombie chrome process running
//chrome_options.AddArgument("--disable-setuid-sandbox");
//chrome_options.AddArgument("--privileged"); // can be a security risk
//chrome_options.AddArgument("--lang=en_US");
//chrome_options.AddArgument("--start-maximized");
//chrome_options.AddAdditionalCapability("useAutomationExtension", false);
driver = new ChromeDriver(service_headless, chrome_options, TimeSpan.FromSeconds(120));
break;
My Dockerfile (license key replaced with xxxxx):
FROM microsoft/dotnet:2.2-sdk
ENV PATH="${PATH}:/root/.dotnet/tools"
RUN dotnet tool install --global SpecFlow.Plus.License
RUN specflow-plus-license register --licenseKey KBD0xxxxxxxxxxxxxxxxxxxxiqQGIUTnUBAU/wn/EAAA== --issuedTo "xxxxxxxxxxxxxx"
ENV LANG en_US.UTF-8  
ENV LANGUAGE en_US:en  
ENV LC_ALL en_US.UTF-8
ENV C en_US.UTF-8
ENV TERM xterm
ENV TZ Europe/Copenhagen
USER root
# Install Chrome
RUN apt-get update && apt-get install -y \
apt-transport-https ca-certificates curl gnupg hicolor-icon-theme \
libcanberra-gtk* libgl1-mesa-dri libgl1-mesa-glx libpango1.0-0 libpulse0 \
libv4l-0 fonts-symbola \
--no-install-recommends \
&& curl -sSL https://dl.google.com/linux/linux_signing_key.pub | apt-key add - \
&& echo "deb [arch=amd64] https://dl.google.com/linux/chrome/deb/ stable main" > /etc/apt/sources.list.d/google.list \
&& apt-get update && apt-get install -y google-chrome-stable --no-install-recommends \
&& apt-get purge --auto-remove -y curl \
&& rm -rf /var/lib/apt/lists/*
#RUN dpkg -s google-chrome-stable
#RUN apt-get update && apt-get search google-chrome-stable && apt-get show google-chrome-stable
# Download the google-talkplugin And ChromeDrive
ARG CHROME_DRIVER_VERSION="latest"
RUN set -x \
&& apt-get update \
&& apt-get install -y --no-install-recommends ca-certificates curl unzip \
&& rm -rf /var/lib/apt/lists/* \
&& curl -sSL "https://dl.google.com/linux/direct/google-talkplugin_current_amd64.deb" -o /tmp/google-talkplugin-amd64.deb \
&& dpkg -i /tmp/google-talkplugin-amd64.deb \
&& rm -rf /tmp/*.deb \
&& CD_VERSION=$(if [ ${CHROME_DRIVER_VERSION:-latest} = "latest" ]; then echo $(wget -qO- https://chromedriver.storage.googleapis.com/LATEST_RELEASE); else echo $CHROME_DRIVER_VERSION; fi) \
&& echo "Using chromedriver version: "$CD_VERSION \
&& mkdir /opt/selenium \
&& curl -sSL "https://chromedriver.storage.googleapis.com/$CD_VERSION/chromedriver_linux64.zip" -o /tmp/chromedriver.zip \
&& unzip -o /tmp/chromedriver -d /opt/selenium/ \
&& rm -rf /tmp/*.zip \
&& apt-get purge -y --auto-remove curl unzip
# Add chrome user
RUN groupadd -r chrome && useradd -r -g chrome -G audio,video chrome \
&& mkdir -p /home/chrome/Downloads && chown -R chrome:chrome /home/chrome
#ENV DISPLAY=:99
WORKDIR /data/WebShopTestAutomation
# copy code
RUN mkdir -p /data && mkdir /reports
COPY ./source /data
RUN ls -ls /data
RUN cd /data/WebShopTestAutomation && dotnet build
CMD ["dotnet", "vstest", "--logger:trx;LogFileName=/reports/TestResults/report.trx", "/data/WebShopTestAutomation/bin/Debug/netcoreapp2.2/WebShopTestAutomation.dll"]
Closing this issue, as the root cause was this: the default.srprofile used in the project was not working in Docker; it is simply not read.
I have raised an issue on the SpecFlow GitHub repository. For details:
https://github.com/techtalk/SpecFlow/issues/1841
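Independent of that root cause, a quick way to check whether headless Chrome can start at all inside such a container (a debugging sketch; your-image-name is a placeholder) is:
docker run --rm your-image-name google-chrome-stable --headless --no-sandbox \
  --disable-gpu --disable-dev-shm-usage --dump-dom https://www.example.com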

Why does docker-compose build start from the beginning

For example, every time I build:
Copy package.json
Install packages
Add the current directory
My question is: why does it not use the cache? For example, it should not reinstall the packages from scratch if package.json has not changed. It should use the cache and rebuild only the changed code.
Update:
Dockerfile
FROM ubuntu:18.04
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
RUN apt-get update && apt-get upgrade -y \
&& apt-get install -y --no-install-recommends \
build-essential \
ca-certificates \
gcc \
git \
libpq-dev \
make \
python-pip \
python2.7 \
python2.7-dev \
apt-transport-https \
curl \
g++ \
sudo \
wget \
bzip2 \
chrpath \
libssl-dev \
libxft-dev \
libfreetype6 \
libfreetype6-dev \
libfontconfig1 \
libfontconfig1-dev \
libfontconfig \
poppler-utils \
imagemagick \
&& apt-get clean \
&& rm -rf /tmp/* /var/lib/apt/lists/* \
&& apt-get -y autoclean
RUN apt-get update && apt-get install -y --no-install-recommends software-properties-common && add-apt-repository ppa:malteworld/ppa && apt update && apt install -y --no-install-recommends pdftk \
&& apt-get clean \
&& rm -rf /tmp/* /var/lib/apt/lists/* \
&& apt-get -y autoclean
ENV NVM_DIR /usr/local/nvm
ENV NODE_VERSION 10.6.0
# Install nvm with node and npm
RUN curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.29.0/install.sh | bash \
&& source $NVM_DIR/nvm.sh \
&& nvm install $NODE_VERSION \
&& nvm alias default $NODE_VERSION \
&& nvm use default
# Set up our PATH correctly so we don't have to long-reference npm, node, &c.
ENV NODE_PATH $NVM_DIR/versions/node/v$NODE_VERSION/lib/node_modules
ENV PATH $NVM_DIR/versions/node/v$NODE_VERSION/bin:$PATH
# Set the work directory
RUN mkdir -p /var/www/app/jobsaf-website
RUN mkdir /data
RUN mkdir /data/db
WORKDIR /var/www/app/jobsaf-website
RUN npm install -g node-gyp @angular/cli@6.2.3 nodemon request
# Add our package.json and install *before* adding our application files
COPY package.json ./
# RUN npm install --force
RUN npm install --force
RUN npm rebuild node-sass
# Add application files
ADD . .
EXPOSE 3000 5858 4200 35729 27017 6379 49153
.dockerignore
# See http://help.github.com/ignore-files/ for more about ignoring files.
# compiled output
/tmp
/public/__build__/
/src/*/__build__/
/__build__/**
/public/dist/
/src/*/dist/
/dist/**
/.awcache
.webpack.json
/compiled/
dll/
package-lock.json
# dependencies
/node_modules
*/node_modules
# IDEs and editors
/.idea
.project
.classpath
.c9/
*.launch
**.js.map
.settings/
# IDE - VSCode
.vscode/
# misc
/.sass-cache
/connect.lock
/coverage/*
/libpeerconnection.log
npm-debug.log
testem.log
/typings
# e2e
/e2e/*.js
/e2e/*.map
#System Files
.DS_Store
Thumbs.db
*.csv
*.dat
*.iml
*.log
*.out
*.pid
*.seed
*.sublime-*
*.swo
*.swp
*.tgz
*.xml
.strong-pm
coverage
npm-debug*
/admin/dist
npm
/.cache-loader/*
stats.json
!/src/assets/js/admin-header.js
!/src/assets/js/website-custom.js
webpack-cache/
web/
/src/app/**/*.map
/src/app/**/*.js
--force should be removed from the following line, as it ignores any cache and does a fresh installation of your packages, which produces a new Docker build layer starting from the installation step.
RUN npm install --force
The -f or --force argument will force npm to fetch remote resources even if a local copy exists on disk.
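Without --force, the cache-friendly ordering works as intended. A minimal sketch of the pattern (base image and paths are illustrative): copy only the package manifest first, install, then copy the rest, so the npm install layer is reused as long as package.json is unchanged.
FROM node:10
WORKDIR /var/www/app/jobsaf-website
# Cached until package.json changes:
COPY package.json ./
RUN npm install
# Source edits only rebuild from here:
COPY . .
EXPOSE 3000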

package.json file won't persist in docker container

I am trying to build a Docker environment. I have made a Dockerfile which builds my image. Everything seems to work fine except that my package.json file won't persist inside the container. It seems as if it is getting removed somewhere. What am I doing wrong? Here is the content of my Dockerfile:
FROM ubuntu:14.04
RUN groupadd -r webuser && useradd -r -g webuser webuser && mkdir /home/webuser/ && chown webuser:webuser /home/webuser/
# install curl, apache, php
RUN sudo DEBIAN_FRONTEND=noninteractive \
apt-get -y update && \
apt-get -y install software-properties-common python-software-properties && \
add-apt-repository ppa:ondrej/php && \
apt-get -y update && \
apt-get install -y --force-yes \
curl \
git-core \
apache2 \
php5.6 php5.6-mcrypt php5.6-mbstring php5.6-curl php5.6-cli php5.6-mysql php5.6-gd php5.6-intl php5.6-xsl \
php5.6-bz2 php5.6-zip && \
php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');" && \
php composer-setup.php && \
php -r "unlink('composer-setup.php');" && \
mv composer.phar /usr/local/bin/composer && \
chmod +x /usr/local/bin/composer
# install PHPUnit
RUN curl -L https://phar.phpunit.de/phpunit.phar -o phpunit.phar && \
chmod +x phpunit.phar && \
mv phpunit.phar /usr/local/bin/phpunit && \
chmod +x /usr/local/bin/phpunit
ADD package.json /var/www/html/package.json
WORKDIR /var/www/html
RUN chown -R webuser:webuser /var/www/html
USER webuser
# install node js 6
RUN NVM_DIR="/home/webuser/.nvm" && \
curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.32.0/install.sh | bash && \
[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh" && \
nvm install 6 && \
npm install -g webpack && \
npm install
RUN echo 'export NVM_DIR="/home/webuser/.nvm"\n\
[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"'\
>> /home/webuser/.bashrc
COPY src /var/www/html/
USER root
EXPOSE 80
CMD ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]
Try changing your ADD command to the following:
RUN mkdir -p /var/www/html
ADD package.json /var/www/html
Also make sure the package.json is present in the build context, next to the Dockerfile. Here is the full Dockerfile with the ADD moved to the top:
FROM ubuntu:14.04
ADD package.json /var/www/html/package.json
RUN groupadd -r webuser && useradd -r -g webuser webuser && mkdir /home/webuser/ && chown webuser:webuser /home/webuser/
# install curl, apache, php
RUN sudo DEBIAN_FRONTEND=noninteractive \
apt-get -y update && \
apt-get -y install software-properties-common python-software-properties && \
add-apt-repository ppa:ondrej/php && \
apt-get -y update && \
apt-get install -y --force-yes \
curl \
git-core \
apache2 \
php5.6 php5.6-mcrypt php5.6-mbstring php5.6-curl php5.6-cli php5.6-mysql php5.6-gd php5.6-intl php5.6-xsl \
php5.6-bz2 php5.6-zip && \
php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');" && \
php composer-setup.php && \
php -r "unlink('composer-setup.php');" && \
mv composer.phar /usr/local/bin/composer && \
chmod +x /usr/local/bin/composer
# install PHPUnit
RUN curl -L https://phar.phpunit.de/phpunit.phar -o phpunit.phar && \
chmod +x phpunit.phar && \
mv phpunit.phar /usr/local/bin/phpunit && \
chmod +x /usr/local/bin/phpunit
WORKDIR /var/www/html
RUN chown -R webuser:webuser /var/www/html
USER webuser
# install node js 6
RUN NVM_DIR="/home/webuser/.nvm" && \
curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.32.0/install.sh | bash && \
[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh" && \
nvm install 6 && \
npm install -g webpack && \
npm install
RUN echo 'export NVM_DIR="/home/webuser/.nvm"\n\
[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"'\
>> /home/webuser/.bashrc
COPY src /var/www/html/
USER root
EXPOSE 80
CMD ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]
I have executed your Dockerfile and had the same problem. It works if the ADD is at the beginning of the Dockerfile. But there are some other problems: the build process stops right after
chmod +x /usr/local/bin/composer
so it won't install PHPUnit and Node.js, set the owner of the www directory, and so on.
Maybe you should chain the whole RUN into one.
It seems we need to have the package.json file within the source directory. Copying package.json separately and then running npm install is a pattern used to take advantage of Docker's layer caching.
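A quick way to verify whether package.json actually made it into the final image (the tag name here is illustrative):
docker build -t web-test .
docker run --rm web-test ls -l /var/www/html/package.json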
