Firefox (headless) + Selenium cannot access the internet from a Docker container

I have tested the internet connection with wget https://www.google.com, and it worked from inside the Docker container. But when I run headless Firefox with the Selenium Python bindings, Selenium throws a TimeoutException:
>> docker run myselcontainer
Traceback (most recent call last):
File "run.py", line 24, in <module>
driver = webdriver.Firefox(service_log_path=os.devnull, options=options, capabilities=capabilities, firefox_profile=profile)
File "/usr/local/lib/python3.8/site-packages/selenium/webdriver/firefox/webdriver.py", line 170, in __init__
RemoteWebDriver.__init__(
File "/usr/local/lib/python3.8/site-packages/selenium/webdriver/remote/webdriver.py", line 157, in __init__
self.start_session(capabilities, browser_profile)
File "/usr/local/lib/python3.8/site-packages/selenium/webdriver/remote/webdriver.py", line 252, in start_session
response = self.execute(Command.NEW_SESSION, parameters)
File "/usr/local/lib/python3.8/site-packages/selenium/webdriver/remote/webdriver.py", line 321, in execute
self.error_handler.check_response(response)
File "/usr/local/lib/python3.8/site-packages/selenium/webdriver/remote/errorhandler.py", line 242, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.TimeoutException: Message: Connection refused (os error 111)
But when I run the same Python file from my host, it runs completely fine.
(Kindly do not suggest the docker-selenium images; I have my reasons for not using them. Apart from changing the base image, any suggestion or query is welcome.)
Below is the run.py:
from selenium import webdriver
import os
# Set proper profile
profile = webdriver.FirefoxProfile()
profile.set_preference("security.fileuri.strict_origin_policy", False) # disable Strict Origin Policy
profile.set_preference("dom.webdriver.enabled", False) # disable Strict Origin Policy
# Capabilities
capabilities = webdriver.DesiredCapabilities.FIREFOX
capabilities['marionette'] = True
# Options
options = webdriver.FirefoxOptions()
options.add_argument("--log-level=OFF")
# Run headless (set to False to debug with a visible browser)
options.headless = True
driver = webdriver.Firefox(service_log_path=os.devnull, options=options, capabilities=capabilities, firefox_profile=profile)
driver.set_window_size(1920, 1080)
driver.get('https://www.google.com')
print(driver.page_source)
driver.quit()
And below is the Dockerfile:
FROM python:3.8-slim-buster
# Python optimization
## Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE 1
## Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED 1
# Locales
ENV LANG C.UTF-8
ENV LC_ALL C.UTF-8
ENV DEBIAN_FRONTEND=noninteractive
# Installation required for selenium
RUN apt-get update -y \
&& apt-get install --no-install-recommends --no-install-suggests -y tzdata ca-certificates bzip2 curl wget libc-dev libxt6 \
&& apt-get install --no-install-recommends --no-install-suggests -y `apt-cache depends firefox-esr | awk '/Depends:/{print$2}'` \
&& update-ca-certificates \
# Cleanup unnecessary stuff
&& apt-get purge -y --auto-remove \
-o APT::AutoRemove::RecommendsImportant=false \
&& rm -rf /var/lib/apt/lists/* /tmp/*
# install geckodriver
RUN GECKODRIVER_VERSION=`curl -s https://github.com/mozilla/geckodriver/releases/latest | grep -Po 'v[0-9]+\.[0-9]+\.[0-9]+'` && \
wget https://github.com/mozilla/geckodriver/releases/download/$GECKODRIVER_VERSION/geckodriver-$GECKODRIVER_VERSION-linux64.tar.gz && \
tar -zxf geckodriver-$GECKODRIVER_VERSION-linux64.tar.gz -C /usr/local/bin && \
chmod +x /usr/local/bin/geckodriver && \
rm geckodriver-$GECKODRIVER_VERSION-linux64.tar.gz
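# (aside: scraping the releases redirect page for a version string is fragile;
#  the GitHub releases API is a steadier alternative, e.g.
#  curl -s https://api.github.com/repos/mozilla/geckodriver/releases/latest | grep -Po '"tag_name": "\K[^"]+')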
# install firefox
RUN FIREFOX_SETUP=firefox-setup.tar.bz2 && \
wget -O $FIREFOX_SETUP "https://download.mozilla.org/?product=firefox-latest&os=linux64" && \
tar xjf $FIREFOX_SETUP -C /opt/ && \
ln -s /opt/firefox/firefox /usr/bin/firefox && \
rm $FIREFOX_SETUP
# Install pip requirements
RUN python -m pip install --upgrade pip && python -m pip install --no-cache-dir selenium scrapy
ENV APP_HOME /usr/src/app
WORKDIR $APP_HOME
COPY . $APP_HOME/
# (note: a plain RUN export would not persist beyond its own layer)
ENV PYTHONPATH=$PYTHONPATH:$APP_HOME
# Switching to a non-root user, please refer to https://aka.ms/vscode-docker-python-user-rights
RUN useradd appuser && chown -R appuser $APP_HOME
USER appuser
CMD [ "python3", "run.py" ]
My container build and run commands are:
docker build -t myselcontainer .
docker run myselcontainer
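(Two hedged debugging suggestions for anyone hitting the same wall. First, the script throws the geckodriver log away via service_log_path=os.devnull; pointing it at a real file usually reveals why the session never came up. Second, headless Firefox is known to crash when the container's /dev/shm, 64 MB by default under Docker, is too small, and a crashed browser can surface as exactly this "Connection refused (os error 111)" from geckodriver. Enlarging shared memory at run time is a cheap experiment:
docker run --shm-size=2g myselcontainer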

Related

Passing "vars" to dbt-snowflake docker container image is throwing errors

I'm running the dbt-snowflake Docker image and need to pass parameters while running the container, so I tried passing --vars from the command prompt, but I get the error below.
12:54:44 Running with dbt=1.3.1
12:54:45 Encountered an error:
'dbt_snowflake://macros/apply_grants.sql'
12:54:45 Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/dbt/main.py", line 135, in main
results, succeeded = handle_and_check(args)
File "/usr/local/lib/python3.10/site-packages/dbt/main.py", line 198, in handle_and_check
task, res = run_from_args(parsed)
File "/usr/local/lib/python3.10/site-packages/dbt/main.py", line 245, in run_from_args
results = task.run()
File "/usr/local/lib/python3.10/site-packages/dbt/task/runnable.py", line 453, in run
self._runtime_initialize()
File "/usr/local/lib/python3.10/site-packages/dbt/task/runnable.py", line 161, in _runtime_initialize
super()._runtime_initialize()
File "/usr/local/lib/python3.10/site-packages/dbt/task/runnable.py", line 94, in _runtime_initialize
self.load_manifest()
File "/usr/local/lib/python3.10/site-packages/dbt/task/runnable.py", line 81, in load_manifest
self.manifest = ManifestLoader.get_full_manifest(self.config)
File "/usr/local/lib/python3.10/site-packages/dbt/parser/manifest.py", line 221, in get_full_manifest
manifest = loader.load()
File "/usr/local/lib/python3.10/site-packages/dbt/parser/manifest.py", line 320, in load
self.load_and_parse_macros(project_parser_files)
File "/usr/local/lib/python3.10/site-packages/dbt/parser/manifest.py", line 422, in load_and_parse_macros
block = FileBlock(self.manifest.files[file_id])
KeyError: 'dbt_snowflake://macros/apply_grants.sql'
Below is my Dockerfile:
# Top level build args
ARG build_for=linux/amd64
##
# base image (abstract)
##
FROM --platform=$build_for python:3.10.7-slim-bullseye as base
ARG dbt_core_ref=dbt-core#v1.4.0a1
ARG dbt_postgres_ref=dbt-core#v1.4.0a1
ARG dbt_redshift_ref=dbt-redshift#v1.4.0a1
ARG dbt_bigquery_ref=dbt-bigquery#v1.4.0a1
ARG dbt_snowflake_ref=dbt-snowflake#v1.3.0
ARG dbt_spark_ref=dbt-spark#v1.4.0a1
# special case args
ARG dbt_spark_version=all
ARG dbt_third_party
# System setup
RUN apt-get update \
&& apt-get dist-upgrade -y \
&& apt-get install -y --no-install-recommends \
git \
ssh-client \
software-properties-common \
make \
build-essential \
ca-certificates \
libpq-dev \
&& apt-get clean \
&& rm -rf \
/var/lib/apt/lists/* \
/tmp/* \
/var/tmp/*
# Env vars
ENV PYTHONIOENCODING=utf-8
ENV LANG=C.UTF-8
# Update python
RUN python -m pip install --upgrade pip setuptools wheel --no-cache-dir
RUN pip install -q --no-cache-dir dbt-core
RUN pip install -q --no-cache-dir dbt-snowflake
# RUN mkdir /root/.dbt
# ADD profiles.yml /root/.dbt
# Set docker basics
WORKDIR /usr/app/dbt/
VOLUME /usr/app
COPY **/profiles.yml /root/.dbt/profiles.yml
COPY . /usr/app/dbt/
ENTRYPOINT ["dbt"]
Here is my Docker image: docker pull madhuraju/gu-snowflake
Below is the command:
docker run -it gu-snowflake:test run --vars '{"testKey": "testValue"}'
Please let me know how I can fix this issue, and also how I can pass values at runtime so that dbt executes only specific models based on the values being passed.
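On the model-selection part of the question: because the image's ENTRYPOINT is ["dbt"], everything after the image name is handed to dbt itself, so --select can be combined with --vars at run time. A sketch (my_model is a hypothetical model name):
docker run -it gu-snowflake:test run --select my_model --vars '{"testKey": "testValue"}'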

Errors installing Singularity inside a Dockerfile

I am trying to run a Nextflow pipeline which uses an older version of Nextflow (21.04.3) and Java 8. Since I have to use this pipeline on a remote server, I can only use Singularity.
As this Nextflow pipeline also makes singularity pull calls, I need Singularity installed inside the Docker image as well. Then I can convert this Docker image to a Singularity image and move it to the remote server.
I am trying to install Singularity inside the Dockerfile, but I am getting errors.
This is the Dockerfile that I am using:
FROM python:3.8.9-slim
LABEL authors="phil.ewels#scilifelab.se,erik.danielsson#scilifelab.se" \
description="Docker image containing requirements for the nfcore tools"
# Do not pick up python packages from $HOME
ENV PYTHONNOUSERSITE=1
# Update pip to latest version
RUN python -m pip install --upgrade pip
# Install dependencies
COPY requirements.txt requirements.txt
RUN python -m pip install -r requirements.txt
# Install Nextflow dependencies
RUN apt-get update \
&& apt-get upgrade -y \
&& apt-get install -y git \
&& apt-get install -y wget
# Create man dir required for Java installation
# and install Java
RUN mkdir -p /usr/share/man/man1 \
&& apt-get install -y openjdk-11-jre \
&& apt-get clean -y && rm -rf /var/lib/apt/lists/*
# Install Singularity
RUN wget -O- http://neuro.debian.net/lists/xenial.us-ca.full | tee /etc/apt/sources.list.d/neurodebian.sources.list && \ apt-key adv --recv-keys --keyserver hkp://pool.sks-keyservers.net:80 0xA5D32F012649A5A9 && \ apt-get update
RUN apt-get install -y singularity-container
# Setup ARG for NXF_VER ENV
ARG NXF_VER=""
ENV NXF_VER ${NXF_VER}
# Install Nextflow
RUN wget https://github.com/nextflow-io/nextflow/releases/download/v21.04.3/nextflow | bash \
&& mv nextflow /usr/local/bin \
&& chmod a+rx /usr/local/bin/nextflow
# Add the nf-core source files to the image
COPY . /usr/src/nf_core
WORKDIR /usr/src/nf_core
# Install nf-core
RUN python -m pip install .
# Set up entrypoint and cmd for easy docker usage
CMD [ "." ]
These are the errors I am getting
Step 9/17 : RUN wget -O- http://neuro.debian.net/lists/xenial.us-ca.full | tee
/etc/apt/sources.list.d/neurodebian.sources.list && \ apt-key adv --recv-keys --
keyserver hkp://pool.sks-keyservers.net:80 0xA5D32F012649A5A9 && \ apt-get update
---> Running in afc3dcbbd1ee
--2022-03-17 17:40:19-- http://neuro.debian.net/lists/xenial.us-ca.full
Resolving neuro.debian.net (neuro.debian.net)... 129.170.233.11
Connecting to neuro.debian.net (neuro.debian.net)|129.170.233.11|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 262
Saving to: ‘STDOUT’
0K 100% 18.4M=0s
deb http://neurodeb.pirsquared.org data main contrib non-free
#deb-src http://neurodeb.pirsquared.org data main contrib non-free
deb http://neurodeb.pirsquared.org xenial main contrib non-free
#deb-src http://neurodeb.pirsquared.org xenial main contrib non-free
2022-03-17 17:40:19 (18.4 MB/s) - written to stdout [262/262]
/bin/sh: 1: apt-key: not found
The command '/bin/sh -c wget -O- http://neuro.debian.net/lists/xenial.us-ca.full | tee /etc/apt/sources.list.d/neurodebian.sources.list && \ apt-key adv --recv-keys --keyserver hkp://pool.sks-keyservers.net:80 0xA5D32F012649A5A9 && \ apt-get update'
returned a non-zero code: 127
Is there a way to install Singularity using a Dockerfile?
Thanks
I made some changes in the Dockerfile, based on the method for installing Singularity on Linux given here.
The complete Dockerfile, with which I was able to successfully run Nextflow, Java and Singularity from within Singularity, is given below:
FROM python:3.8.9-slim
LABEL authors="phil.ewels@scilifelab.se,erik.danielsson@scilifelab.se" \
description="Docker image containing requirements for the nfcore tools"
# Do not pick up python packages from $HOME
ENV PYTHONNOUSERSITE=1
# Update pip to latest version
RUN python -m pip install --upgrade pip
# Install dependencies
COPY requirements.txt requirements.txt
RUN python -m pip install -r requirements.txt
# Install Nextflow dependencies
RUN apt-get update \
&& apt-get upgrade -y \
&& apt-get install -y git \
&& apt-get install -y wget
# Create man dir required for Java installation
# and install Java
RUN mkdir -p /usr/share/man/man1 \
&& apt-get install -y openjdk-11-jre \
&& apt-get clean -y && rm -rf /var/lib/apt/lists/*
# Install Singularity
RUN apt-get update && apt-get install -y \
build-essential \
libssl-dev \
uuid-dev \
libgpgme11-dev \
squashfs-tools \
libseccomp-dev \
wget \
pkg-config \
procps
# Download Go 1.16.3, install it, and extend the PATH
ENV VERSION=1.16.3
ENV OS=linux
ENV ARCH=amd64
RUN wget https://dl.google.com/go/go$VERSION.$OS-$ARCH.tar.gz && \
tar -C /usr/local -xzvf go$VERSION.$OS-$ARCH.tar.gz && \
rm go$VERSION.$OS-$ARCH.tar.gz && \
echo 'export PATH=$PATH:/usr/local/go/bin' | tee -a /etc/profile
# Download Singularity from version 3.7.3 (security version)
ENV VERSION=3.7.3
RUN wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-${VERSION}.tar.gz && \
tar -xzf singularity-${VERSION}.tar.gz
# Compile Singularity sources and install it
RUN export PATH=$PATH:/usr/local/go/bin && \
cd singularity && \
./mconfig --without-suid && \
make -C ./builddir && \
make -C ./builddir install
# Setup ARG for NXF_VER ENV
ARG NXF_VER=""
ENV NXF_VER ${NXF_VER}
# Install Nextflow
RUN wget https://github.com/nextflow-io/nextflow/releases/download/v21.04.3/nextflow | bash \
&& mv nextflow /usr/local/bin \
&& chmod a+rx /usr/local/bin/nextflow
# Add the nf-core source files to the image
COPY . /usr/src/nf_core
WORKDIR /usr/src/nf_core
# Install nf-core
RUN python -m pip install .
# Set up entrypoint and cmd for easy docker usage
CMD [ "." ]
The file named requirements.txt used in the above Dockerfile is given below:
click
GitPython
jinja2
jsonschema
packaging
prompt_toolkit>=3.0.3
pyyaml
pytest-workflow
questionary>=1.8.0
requests_cache
requests
rich>=10.0.0
tabulate
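To close the loop on the original goal, converting the Docker image to a Singularity image so it can be moved to the remote server, a hedged sketch (the nfcore-tools tag is an assumption, and it requires a local Docker daemon):
docker build -t nfcore-tools .
singularity build nfcore-tools.sif docker-daemon://nfcore-tools:latest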

NVIDIA Driver Not found during Nvidia + Cuda - Docker Image build

I am trying to create a GPU microservice using the Nvidia CUDA base image, but during the docker build I am facing a "driver not found" issue. Can someone point out what is missing here?
Dockerfile:
FROM nvidia/cuda:10.1-devel
# Install some basic utilities
RUN apt-get update && apt-get install -y \
curl \
ca-certificates \
sudo \
git \
bzip2 \
libx11-6 \
&& rm -rf /var/lib/apt/lists/*
ENV CONDA_AUTO_UPDATE_CONDA=false
ENV PATH=/home/user/miniconda/bin:$PATH
RUN curl -sLo ~/miniconda.sh https://repo.continuum.io/miniconda/Miniconda3-py37_4.8.2-Linux-x86_64.sh \
&& chmod +x ~/miniconda.sh \
&& ~/miniconda.sh -b -p ~/miniconda \
&& rm ~/miniconda.sh \
&& conda install -y python==3.7 \
&& conda clean -ya
ENV PATH="/usr/local/cuda-10.1/bin:$PATH"
ENV LD_LIBRARY_PATH="/usr/local/cuda-10.1/lib64:$LD_LIBRARY_PATH"
ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility
ENV NVIDIA_VISIBLE_DEVICES=all
ENV FORCE_CUDA="1"
RUN conda install pytorch==1.4.0 torchvision==0.5.0 cudatoolkit=10.1 -c pytorch
RUN pip install -v -e .
Error:
"/home/user/miniconda/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1013, in _get_cuda_arch_flags
capability = torch.cuda.get_device_capability()
File "/home/user/miniconda/lib/python3.7/site-packages/torch/cuda/__init__.py", line 320, in get_device_capability
prop = get_device_properties(device)
File "/home/user/miniconda/lib/python3.7/site-packages/torch/cuda/__init__.py", line 325, in get_device_properties
_lazy_init() # will define _get_device_properties and _CudaDeviceProperties
File "/home/user/miniconda/lib/python3.7/site-packages/torch/cuda/__init__.py", line 196, in _lazy_init
_check_driver()
File "/home/user/miniconda/lib/python3.7/site-packages/torch/cuda/__init__.py", line 101, in _check_driver
http://www.nvidia.com/Download/index.aspx""")
AssertionError:
Found no NVIDIA driver on your system. Please check that you
have an NVIDIA GPU and installed a driver from
http://www.nvidia.com/Download/index.aspx
The issue happens during execution of the last step in the Dockerfile.
I tried multiple Nvidia base images (cuda:10.1-base-ubuntu18.04, cuda:10.1-runtime-ubuntu18.04), but that didn't help much.
Any pointers appreciated.
After a lot of trial and error, and going through a lot of documentation, this is what worked. (The key appears to be that no GPU is visible during docker build, so anything that queries the device, like torch.cuda.get_device_capability(), will fail; setting TORCH_CUDA_ARCH_LIST explicitly lets the CUDA extension compile without asking the driver.)
ARG PYTORCH=1.3
ARG CUDA=10.1
ARG CUDNN=7
FROM pytorch/pytorch:1.3-cuda10.1-cudnn7-devel
RUN mkdir /app
WORKDIR /app
ENV TORCH_CUDA_ARCH_LIST="5.2 6.0 6.1 7.0+PTX"
ENV TORCH_NVCC_FLAGS="-Xfatbin -compress-all"
ENV CMAKE_PREFIX_PATH="$(dirname $(which conda))/../"
RUN apt-get update && apt-get install -y libglib2.0-0 libsm6 libxrender-dev libxext6 \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# Install some basic utilities
RUN apt-get update && apt-get install -y \
curl \
ca-certificates \
sudo \
git \
bzip2 \
libx11-6 \
&& rm -rf /var/lib/apt/lists/*
RUN apt-get update && \
apt-get install -y --no-install-recommends \
build-essential g++ \
libglib2.0-0 libsm6 libxrender-dev libxext6 wget
# Create a non-root user and switch to it
RUN adduser --disabled-password --gecos '' --shell /bin/bash user \
&& chown -R user:user /app
RUN echo "user ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/90-user
USER user
# All users can use /home/user as their home directory
ENV HOME=/home/user
RUN chmod 777 /home/user
# Install Miniconda and Python 3.7
ENV CONDA_AUTO_UPDATE_CONDA=false
ENV PATH=/home/user/miniconda/bin:$PATH
RUN curl -sLo ~/miniconda.sh https://repo.continuum.io/miniconda/Miniconda3-py37_4.8.2-Linux-x86_64.sh \
&& chmod +x ~/miniconda.sh \
&& ~/miniconda.sh -b -p ~/miniconda \
&& rm ~/miniconda.sh \
&& conda install -y python==3.7 \
&& conda clean -ya
RUN conda install pytorch==1.4.0 torchvision==0.5.0 cudatoolkit=10.1 -c pytorch
RUN pip install -v -e .
Hope this helps!
Good luck!
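One caveat to add: the driver is only ever visible at run time, never during docker build, so even with this working image the container still has to be started with GPU access (this assumes the NVIDIA Container Toolkit is installed on the host):
docker run --gpus all <your-image> python -c "import torch; print(torch.cuda.is_available())"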

Docker doesn't find file

I'm working on a project that uses a Docker image for one specific feature; other than that I don't need Docker at all, so I don't understand much about it. The issue is that Docker doesn't find a file that is actually in the folder, and the build process breaks.
When trying to create the image using docker build -t project/render-worker . the error is this:
Step 18/23 : RUN bin/composer-install && php composer-setup.php --install-dir=/bin && php -r 'unlink("composer-setup.php");' && php /bin/composer.phar global require hirak/prestissimo
---> Running in 695db3bf2f02
/bin/sh: 1: bin/composer-install: not found
The command '/bin/sh -c bin/composer-install && php composer-setup.php --install-dir=/bin && php -r 'unlink("composer-setup.php");' && php /bin/composer.phar global require hirak/prestissimo' returned a non-zero code: 127
As mentioned, the file composer-install does exist, and this is what's in it:
#!/bin/sh
EXPECTED_SIGNATURE="$(wget -q -O - https://composer.github.io/installer.sig)"
php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"
ACTUAL_SIGNATURE="$(php -r "echo hash_file('SHA384', 'composer-setup.php');")"
if [ "$EXPECTED_SIGNATURE" != "$ACTUAL_SIGNATURE" ]
then
echo 'ERROR: Invalid installer signature'
rm composer-setup.php
fi
Basically this is to get composer as you can see.
This is the Docker file:
FROM php:7.2-apache
RUN echo 'deb http://ftp.debian.org/debian stretch-backports main' > /etc/apt/sources.list.d/backports.list
RUN apt-get update
RUN apt-get install -y --no-install-recommends \
libpq-dev \
libxml2-dev \
ffmpeg \
imagemagick \
wget \
git \
zlib1g-dev \
libpng-dev \
unzip \
mencoder \
parallel \
ruby-dev
RUN apt-get -t stretch-backports install -y --no-install-recommends \
libav-tools \
&& rm -rf /var/lib/apt/lists/*
RUN docker-php-ext-install \
pcntl \
pdo_pgsql \
pgsql \
soap \
gd \
zip
RUN gem install compass
RUN a2enmod rewrite
ENV APACHE_RUN_USER root
ENV APACHE_RUN_GROUP root
EXPOSE 80
WORKDIR /app
COPY . /app
# Configuring apache to run the symfony app
COPY config/docker/apache.conf /etc/apache2/sites-enabled/000-default.conf
RUN echo "export DATABASE_URL" >> /etc/apache2/envvars \
&& echo ". /etc/environment" >> /etc/apache2/envvars
RUN wget -cqO- https://nodejs.org/dist/v10.15.3/node-v10.15.3-linux-x64.tar.xz | tar -xJ
RUN cp -a node-v10.15.3-linux-x64/bin /usr \
&& cp -a node-v10.15.3-linux-x64/include /usr \
&& cp -a node-v10.15.3-linux-x64/lib /usr \
&& cp -a node-v10.15.3-linux-x64/share /usr/ \
&& rm -rf node-v10.15.3-linux-x64 node-v10.15.3-linux-x64.tar.xz
RUN bin/composer-install \
&& php composer-setup.php --install-dir=/bin \
&& php -r "unlink('composer-setup.php');" \
# Install prestissimo for dramatically faster `composer install`
&& php /bin/composer.phar global require hirak/prestissimo
RUN APP_ENV=prod APP_SECRET= DATABASE_URL= AWS_KEY= AWS_SECRET= AWS_REGION= MEDIA_S3_BUCKET= \
GIPHY_API_KEY= FACEBOOK_APP_ID= FACEBOOK_APP_SECRET= \
GOOGLE_API_KEY= GOOGLE_CLIENT_ID= GOOGLE_CLIENT_SECRET= STRIPE_SECRET_KEY= STRIPE_ENDPOINT_SECRET= \
THEYSAIDSO_API_KEY= REV_CLIENT_API_KEY= REV_USER_API_KEY= REV_API_ENDPOINT= RENDER_QUEUE_URL= \
CLOUDWATCH_LOG_GROUP_NAME= \
php /bin/composer.phar install --no-interaction --no-dev --prefer-dist --optimize-autoloader --no-scripts \
&& php /bin/composer.phar clear-cache
RUN npm install \
&& node_modules/bower/bin/bower install --allow-root \
&& node_modules/grunt/bin/grunt
# Don't allow it to keep logs around; they're emitted on STDOUT and sent to AWS
# CloudWatch from there, so we don't need them on disk filling up the space
RUN mkdir -p var/cache/prod && chmod -R 777 var/cache/prod
RUN mkdir -p var/log && ln -s /dev/null var/log/prod.log \
&& ln -s /dev/null var/log/prod.deprecations.log && chmod -R 777 var/log
CMD ["/usr/bin/env", "bash", "./bin/start_render_worker"]
Like I said, unfortunately I don't have the slightest idea of how Docker works, just that I need it. I'm running Docker on Win10 Pro, and to make matters worse, it actually works for another dev running Win10. We tried a few things but we can't make it work. I tried cloning the repo to other locations with no success at all. Everything before this particular step runs correctly.
[EDIT]
As suggested by the users I ran RUN ls bin/ before the composer install line and this is the result:
Step 18/24 : RUN ls bin/
---> Running in 6cb72090a069
append_captions
capture
composer-install
concat_project_video
console
encode_frames
encode_frames_to_gif
format_video_for_concatenation
generate_meme_bar
image_to_video
install.sh
phpcs
phpunit
process_render_queue
publish_docker_image
run_animation_worker
run_render_worker
run_render_worker_osx
start_render_worker
update
Removing intermediate container 6cb72090a069
As you can see, composer-install is there, so this is quite baffling.
I also checked and set the line-ending sequence to LF, and the result is the same error.
[SECOND EDIT]
I added COPY bin/composer-install /bin
Then RUN ls bin/
And the results are the same: the ls command finds the file, but the error persists. Also, adding a slash before bin doesn't change anything :(
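A "not found" error for a script that ls can plainly see is classically a shebang problem: if bin/composer-install reaches the build context with CRLF line endings, the kernel looks for an interpreter literally named /bin/sh\r (carriage return included) and reports the script itself as not found. Even when the editor shows LF, git's autocrlf setting on Windows can silently re-convert files at checkout, which would also explain why it works on one Win10 machine and not another. Two hedged experiments for the Dockerfile, the first running the existing step through sh so the shebang line is never consulted, the second normalizing endings inside the image (dos2unix is not in the base image, so it must be installed first):
RUN sh bin/composer-install
RUN apt-get update && apt-get install -y dos2unix && dos2unix bin/composer-install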

In what scope are docker-compose commands run?

I am hitting issues running a start script (e.g. npm run gulp-dist) for my container, as specified in my docker-compose file. I traced the issue down to a Node version compatibility problem, which has led me to some confusion.
If I enter the container with docker-compose run workspace bash and then run node -v I get back v10.5.0 as expected (and what my script requires).
Yet if in docker-compose I set command: node -v it prints v4.2.6 when bringing up the container with docker-compose up workspace.
So I'm wondering: where are the commands run that I specify in docker-compose (I thought they were run in the container once it had started)? And how do I run a command in the container? I want to specify it in docker-compose, as I run a different command in two different docker-compose files (one for the dev environment, one for production).
Note: my dev machine has Node version 11, so I have no idea where version 4 is coming from.
Also, if I run docker-compose run workspace bash and then run the original script, it works fine; it only fails when run as a docker-compose command.
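A quick way to see the difference between the two invocations (a diagnostic sketch; workspace is the service name used below) is to compare a plain non-interactive shell, which is roughly what command: gets, against a login shell that reads the profile files and therefore loads NVM:
docker-compose run workspace sh -c 'command -v node && node -v'
docker-compose run workspace bash -lc 'command -v node && node -v'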
Here's my dockerfile (sorry, it's big):
# FROM laradock/workspace:1.8-71
# copied the contents of the above laradock workspace
# dockerfile and replaced put here directly.
FROM phusion/baseimage:latest
MAINTAINER Mahmoud Zalt <mahmoud@zalt.me>
RUN DEBIAN_FRONTEND=noninteractive
RUN locale-gen en_US.UTF-8
ENV LANGUAGE=en_US.UTF-8
ENV LC_ALL=en_US.UTF-8
ENV LC_CTYPE=en_US.UTF-8
ENV LANG=en_US.UTF-8
ENV TERM xterm
# Add the "PHP 7" ppa
RUN apt-get install -y software-properties-common && \
add-apt-repository -y ppa:ondrej/php
#
#--------------------------------------------------------------------------
# Software's Installation
#--------------------------------------------------------------------------
#
# Install "PHP Extentions", "libraries", "Software's"
RUN apt-get update && \
apt-get install -y --allow-downgrades --allow-remove-essential \
--allow-change-held-packages \
php7.1-cli \
php7.1-common \
php7.1-curl \
php7.1-intl \
php7.1-json \
php7.1-xml \
php7.1-mbstring \
php7.1-mcrypt \
php7.1-mysql \
php7.1-pgsql \
php7.1-sqlite \
php7.1-sqlite3 \
php7.1-zip \
php7.1-bcmath \
php7.1-memcached \
php7.1-gd \
php7.1-dev \
pkg-config \
libcurl4-openssl-dev \
libedit-dev \
libssl-dev \
libxml2-dev \
xz-utils \
libsqlite3-dev \
sqlite3 \
git \
curl \
vim \
nano \
postgresql-client \
&& apt-get clean
#####################################
# Composer:
#####################################
# Install composer and add its bin to the PATH.
RUN curl -s http://getcomposer.org/installer | php && \
echo "export PATH=${PATH}:/var/www/vendor/bin" >> ~/.bashrc && \
mv composer.phar /usr/local/bin/composer
# Source the bash
RUN . ~/.bashrc
#
# other - workspace specific config
#
RUN apt-get -y update && \
apt-get install pkg-config libmagickwand-dev -y && \
pecl install imagick
#####################################
# Non-Root User:
#####################################
# Add a non-root user to prevent files being created with root permissions on host machine.
ENV PUID 1000
ENV PGID 1000
RUN groupadd -g ${PGID} laradock && \
useradd -u ${PUID} -g laradock -m laradock && \
apt-get update -yqq
#####################################
# Set Timezone
#####################################
ARG TZ=UTC
ENV TZ ${TZ}
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
#####################################
# Composer:
#####################################
# Add the composer.json
COPY ./composer.json /home/laradock/.composer/composer.json
# Make sure that ~/.composer belongs to laradock
RUN chown -R laradock:laradock /home/laradock/.composer
USER laradock
# Check if global install need to be ran
ARG COMPOSER_GLOBAL_INSTALL=false
ENV COMPOSER_GLOBAL_INSTALL ${COMPOSER_GLOBAL_INSTALL}
RUN if [ ${COMPOSER_GLOBAL_INSTALL} = true ]; then \
# run the install
composer global install \
;fi
USER root
#####################################
# Node / NVM:
#####################################
# Check if NVM needs to be installed
ARG NODE_VERSION=10.5.0
ENV NODE_VERSION 10.5.0
ENV NVM_DIR /home/laradock/.nvm
RUN curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.1/install.sh | bash && \
. $NVM_DIR/nvm.sh && \
nvm install ${NODE_VERSION} && \
nvm use ${NODE_VERSION} && \
npm install -g gulp bower vue-cli
# link node and nodejs
RUN ln -s /usr/bin/nodejs /usr/bin/node
# Wouldn't execute when added to the RUN statement in the above block
# Source NVM when loading bash since ~/.profile isn't loaded on non-login shell
RUN echo "" >> ~/.bashrc && \
echo 'export NVM_DIR="$HOME/.nvm"' >> ~/.bashrc && \
echo '[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh" # This loads nvm' >> ~/.bashrc \
;fi
# install required things
RUN apt-get update && apt-get install apt-transport-https && \
curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add - && \
echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list && \
apt-get update && apt-get install -y --allow-unauthenticated yarn mysql-client
# Add NVM binaries to root's .bashrc
USER root
RUN apt-get install npm -y
# set npm registry address
RUN npm config set registry http://registry.npmjs.org/
#
#--------------------------------------------------------------------------
# Final Touch
#--------------------------------------------------------------------------
#
# Clean up
USER root
RUN apt-get clean && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# Set default work directory
WORKDIR /var/www
# # copy in our code, so as not to rely on a volume in prod
COPY . /var/www
# ensure directories we need are writable
RUN chmod -R o+w /var/www/user-api-laravel/storage
RUN chmod -R o+w /var/www/user-api-laravel/bootstrap/cache
RUN chmod -R o+w /var/www/auto/storage
RUN chmod -R o+w /var/www/auto/bootstrap/cache
# install php project dependencies
RUN cd /var/www/user-api-laravel && composer install
RUN cd /var/www/auto && composer install
WORKDIR /var/www
USER root
# install auto-scalar deps
RUN cd /var/www/auto-scaler && npm i
# php.ini for cli
ADD ./php-cli.ini /etc/php/7.1/cli/php.ini
And relevant part of docker-compose:
workspace:
build:
context: ./www-workspace
args:
- TZ=${WORKSPACE_TIMEZONE}
- NODE_VERSION=${WORKSPACE_NODE_VERSION}
command: [bash, -c, "cd /var/www/spa && npm run dist-prod"]
Still don't know what context the commands run in, but I made mine work. It was down to Node being installed via NVM: once I instead installed Node system-wide, as @Noogen suggested, via curl -sL https://deb.nodesource.com/setup_10.x | sudo bash -, the commands I ran against my container had access to the correct Node version. I had to settle for a lower Node version (not 10.5.0, as I could specify with NVM), but in the end it worked, so no worries.
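For the record, the likely mechanism (an inference from the Dockerfile, not verified): command: runs through a non-interactive shell that never sources ~/.bashrc, so NVM's node never lands on the PATH and the system node wins; v4.2.6 matches the nodejs shipped by the Ubuntu 16.04 base that apt-get install npm and the ln -s /usr/bin/nodejs /usr/bin/node line pull in. docker-compose run workspace bash, by contrast, starts an interactive shell that does read ~/.bashrc. If NVM is kept, a hedged fix is to load it explicitly in the compose command:
command: [bash, -c, ". $NVM_DIR/nvm.sh && cd /var/www/spa && npm run dist-prod"]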
