Docker Compose not finding file in GitLab job

I'm running a .gitlab-ci.yml job on a GitLab Runner that uses the Docker executor with the host's docker.sock mounted (set in the runner's config file).
Docker Compose's run.sh script, located here: https://github.com/docker/compose/blob/master/script/run/run.sh, doesn't seem to find the docker-compose.lint.yml file I need it to run.
Part of the gitlab-ci.yml file:
Lint from image:
  <<: *temp
  stage: Lint
  script:
    - ls -a
    - cat docker-compose.lint.yml
    - . docker-compose_run.sh -f docker-compose.lint.yml up --remove-orphans --force-recreate --abort-on-container-exit
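For reference, as far as I can tell that run.sh wrapper doesn't execute docker-compose on the host at all: it starts a docker/compose container and bind-mounts the current directory into it, roughly like this (a simplified sketch, not the verbatim script; the image tag is illustrative):

# Approximately what docker/compose's run.sh executes (simplified):
docker run --rm \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v "$(pwd)":"$(pwd)" \
    -w "$(pwd)" \
    docker/compose:latest "$@"
# Since docker.sock belongs to the host, the -v paths are resolved by the HOST
# daemon. A working directory that exists only inside the CI job container is
# empty from the host's point of view, which would explain the IOError below.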
Dockerfile of the image that the gitlab-ci.yml file is using:
FROM debian:stretch

# Set Bash as default shell
RUN rm /bin/sh && \
    ln --symbolic /bin/bash /bin/sh

# apt-get
RUN apt-get update && \
    apt-get install -y \
    # Install SSH
    openssh-client \
    openssh-server \
    openssl \
    # Install cURL
    curl \
    # Install git
    git \
    # Install locales
    locales \
    # Install Python
    python3 \
    python-dev \
    python3-pip \
    # Build stuff
    build-essential \
    libssl-dev \
    libffi-dev \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg2 \
    software-properties-common

# Install Docker
# Add Docker’s official GPG key
RUN curl -fsSL https://download.docker.com/linux/$(. /etc/os-release; echo "$ID")/gpg | apt-key add - && \
    # Set up the stable repository
    add-apt-repository \
        "deb [arch=amd64] https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") \
        $(lsb_release -cs) \
        stable" && \
    # Update package index
    apt-get update && \
    # Install Docker CE
    apt-get install -y docker-ce

# pip
RUN pip3 install \
    # Install Docker Compose
    docker-compose

# Set locale
RUN sed --in-place '/en_US.UTF-8/s/^# //' /etc/locale.gen && \
    locale-gen && \
    # Set system locale (add line)
    echo "export LANG=en_US.UTF-8" >> /etc/profile && \
    # Set system timezone (add line)
    echo "export TZ=UTC" >> /etc/profile
Job output:
...more stuff above here
$ ls -a
.
..
.clocignore
.dockerignore
.eslintignore
.eslintrc.json
.git
.gitignore
.gitlab-ci.yml
.nvmrc
.nyc_output
Dockerfile
README.md
__.gitlab-ci.yml
bin
build.sh
build_and_test.sh
coverage
docker-compose.lint.yml
docker-compose.test.yml
docker-compose_run.sh
lib
package-lock.json
package.json
stop.sh
test
test.sh
wait-for-vertica.sh
$ cat docker-compose.lint.yml
version: "3"
services:
ei:
image: lint:1
environment:
- NODE_ENV=test
entrypoint: ["npm", "run", "lint"]
$ . docker-compose_run.sh -f docker-compose.lint.yml up --remove-orphans --force-recreate --abort-on-container-exit
.IOError: [Errno 2] No such file or directory: u'./docker-compose.lint.yml'
ERROR: Job failed: exit code 1

Since this question was the top Google result for my search:
executor failed running [/bin/sh -c apt-get install git-all]: exit code: 1
To fix my problem I added -y to the apt-get command; that is, I changed this:
apt-get install git-all
to this:
apt-get install git-all -y
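The -y matters because docker build runs apt-get with no TTY attached, so apt-get can't ask its "Do you want to continue?" question and aborts instead. A minimal sketch of the fixed step:

# 'docker build' gives apt-get no TTY to prompt on, so auto-confirm installs:
RUN apt-get update && apt-get install -y git-all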

Related

"docker:19.03-dind" could not select device driver "nvidia" with capabilities: [[gpu]]

I've hit a K8s + DinD issue:
- launch a Kubernetes cluster
- start a main Docker image and a DinD image inside this cluster
- when running a job that requests a GPU, I get the error: could not select device driver "nvidia" with capabilities: [[gpu]]
Full error:
http://localhost:2375/v1.40/containers/long-hash-string/start: Internal Server Error ("could not select device driver "nvidia" with capabilities: [[gpu]]")
When I exec into the DinD container inside the K8s pod, nvidia-smi is not available.
After some debugging, it seems the DinD image is missing the nvidia-docker toolkit. I had the same error when I ran the same job directly in Docker on my local laptop, and I fixed it there by installing nvidia-docker2: sudo apt-get install -y nvidia-docker2.
I'm thinking I could try to install nvidia-docker2 into the DinD 19.03 image (docker:19.03-dind), but I'm not sure how to do it. With a multi-stage Docker build?
Thank you very much!
Update:
Pod spec:
spec:
  containers:
    - name: dind-daemon
      image: docker:19.03-dind
I got it working myself.
Referring to:
https://github.com/NVIDIA/nvidia-docker/issues/375
https://github.com/Henderake/dind-nvidia-docker
First, I modified the ubuntu-dind image (https://github.com/billyteves/ubuntu-dind) to install nvidia-docker (i.e. I added the instructions from the nvidia-docker site to the Dockerfile) and changed it to be based on nvidia/cuda:9.2-runtime-ubuntu16.04.
Then I created a pod with two containers: a frontend Ubuntu container and a privileged Docker daemon container as a sidecar. The sidecar's image is the modified one mentioned above.
But since that post is three years old now, I spent quite some time matching up dependency versions, repository migrations over those years, and so on.
My modified version of the Dockerfile to build it:
ARG CUDA_IMAGE=nvidia/cuda:11.0.3-runtime-ubuntu20.04
FROM ${CUDA_IMAGE}

ARG DOCKER_CE_VERSION=5:18.09.1~3-0~ubuntu-xenial
RUN apt-get update -q && \
    apt-get install -yq \
        apt-transport-https \
        ca-certificates \
        curl \
        gnupg-agent \
        software-properties-common && \
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - && \
    add-apt-repository \
        "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
        $(lsb_release -cs) \
        stable" && \
    apt-get update -q && apt-get install -yq docker-ce docker-ce-cli containerd.io

# https://github.com/docker/docker/blob/master/project/PACKAGERS.md#runtime-dependencies
RUN set -eux; \
    apt-get update -q && \
    apt-get install -yq \
        btrfs-progs \
        e2fsprogs \
        iptables \
        xfsprogs \
        xz-utils \
        # pigz: https://github.com/moby/moby/pull/35697 (faster gzip implementation)
        pigz \
        # zfs \
        wget

# set up subuid/subgid so that "--userns-remap=default" works out-of-the-box
RUN set -x \
    && addgroup --system dockremap \
    && adduser --system --ingroup dockremap dockremap \
    && echo 'dockremap:165536:65536' >> /etc/subuid \
    && echo 'dockremap:165536:65536' >> /etc/subgid

# https://github.com/docker/docker/tree/master/hack/dind
ENV DIND_COMMIT 37498f009d8bf25fbb6199e8ccd34bed84f2874b
RUN set -eux; \
    wget -O /usr/local/bin/dind "https://raw.githubusercontent.com/docker/docker/${DIND_COMMIT}/hack/dind"; \
    chmod +x /usr/local/bin/dind

##### Install nvidia docker #####
# Add the package repositories
RUN curl -fsSL https://nvidia.github.io/nvidia-docker/gpgkey | apt-key add --no-tty -
RUN distribution=$(. /etc/os-release; echo $ID$VERSION_ID) && \
    echo $distribution && \
    curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
    tee /etc/apt/sources.list.d/nvidia-docker.list
RUN apt-get update -qq --fix-missing
RUN apt-get install -yq nvidia-docker2
RUN sed -i '2i \ \ \ \ "default-runtime": "nvidia",' /etc/docker/daemon.json

RUN mkdir -p /usr/local/bin/
COPY dockerd-entrypoint.sh /usr/local/bin/
RUN chmod 777 /usr/local/bin/dockerd-entrypoint.sh
RUN ln -s /usr/local/bin/dockerd-entrypoint.sh /

VOLUME /var/lib/docker
EXPOSE 2375
ENTRYPOINT ["dockerd-entrypoint.sh"]
#ENTRYPOINT ["/bin/sh", "/shared/dockerd-entrypoint.sh"]
CMD []
When I exec into the Docker-in-Docker container, I can successfully run nvidia-smi (which previously returned a not-found error, after which no GPU-related docker run would work).
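To double-check GPU access through the inner daemon, something like this should work from inside the DinD container (the CUDA image tag here is illustrative; pick any tag matching your driver):

# Run inside the DinD container:
nvidia-smi
docker run --rm --gpus all nvidia/cuda:11.0.3-base-ubuntu20.04 nvidia-smi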
You're welcome to pull my image at brandsight/dind:nvidia-docker.

How to continuously copy files into docker

I am a Docker newbie and I can't really figure out how changes made to my working directory can be continuously copied into the Docker container. Is there a command that copies all my changes to the container all the time?
Edit: I added my Dockerfile and docker-compose file.
My Dockerfile:
FROM scratch
ADD centos-7-x86_64-docker.tar.xz /

LABEL \
    org.label-schema.schema-version="1.0" \
    org.label-schema.name="CentOS Base Image" \
    org.label-schema.vendor="CentOS" \
    org.label-schema.license="GPLv2" \
    org.label-schema.build-date="20201113" \
    org.opencontainers.image.title="CentOS Base Image" \
    org.opencontainers.image.vendor="CentOS" \
    org.opencontainers.image.licenses="GPL-2.0-only" \
    org.opencontainers.image.created="2020-11-13 00:00:00+00:00"

RUN yum clean all && yum update -y && yum -y upgrade
RUN yum groupinstall "Development Tools" -y
RUN yum install -y wget gettext-devel curl-devel openssl-devel perl-devel perl-CPAN zlib-devel && wget https://github.com/git/git/archive/v2.26.2.tar.gz \
    && tar -xvzf v2.26.2.tar.gz && cd git-2.26.2 && make configure && ./configure --prefix=/usr/local && make install

# RUN mkdir -p /root/.ssh && \
#     chmod 0700 /root/.ssh && \
#     ssh-keyscan github.com > /root/.ssh/known_hosts
# RUN ssh-keygen -q -t rsa -N '' -f /id_rsa
# RUN echo "$ssh_prv_key" > /root/.ssh/id_rsa && \
#     echo "$ssh_pub_key" > /root/.ssh/id_rsa.pub && \
#     chmod 600 /root/.ssh/id_rsa && \
#     chmod 600 /root/.ssh/id_rsa.pub

RUN ls
RUN cd / && git clone https://github.com/odoo/odoo.git \
    && cd odoo \
    && git fetch \
    && git checkout 9.0

RUN yum install python-devel libxml2-devel libxslt-dev openldap-devel libtiff-devel libjpeg-devel libzip-devel freetype-devel lcms2-devel \
    libwebp-devel tcl-devel tk-devel python-pip nodejs
RUN pip install setuptools==1.4.1 beautifulsoup4==4.9.3 pillow openpyxl==2.6.4 luhn gmp-devel paramiko==1.7.7.2 python2-secrets cffi pysftp==0.2.8
RUN pip install -r requirements.txt
RUN npm install -g less

CMD ["/bin/bash","git"]
My docker-compose.yml:
version: '3.3'
services:
  app: &app
    build:
      context: .
      dockerfile: ./docker/app/Dockerfile
    container_name: app
    tty: true
  db:
    image: postgres:9.2.18
    environment:
      - POSTGRES_DB=test
    ports:
      - 5432:5432
    volumes:
      - ./docker/db/pg-data:/var/lib/postgresql/data
  odoo:
    <<: *app
    command: python odoo.py -w odoo -r odoo
    ports:
      - '8069:8069'
    depends_on:
      - db
If I understand correctly, you want to mount a path from the host into a container, which can be done using volumes. Something like this keeps the two folders in sync, which can be useful for development:
docker run -v /path/to/local/folder:/path/in/container busybox
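To see the two-way sync in action, a quick throwaway experiment (container and file names are arbitrary):

# Bind-mount the current directory at /work and poke it from both sides:
docker run -d --name sync-test -v "$(pwd)":/work busybox sleep 3600
echo hello > from-host.txt                                  # created on the host...
docker exec sync-test cat /work/from-host.txt               # ...visible in the container
docker exec sync-test sh -c 'echo hi > /work/from-ctr.txt'  # created in the container...
cat from-ctr.txt                                            # ...visible on the host
docker rm -f sync-test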

docker-compose.yml - container with exited status on Ubuntu host

My docker-compose.yml:
version: "3.3"
services:
build_and_run_service:
image: myapp:0
build: .
network_mode: host
volumes:
- './bin/cookie:/app/cookie'
- './bin/logs:/app/logs'
- './bin/warehouse:/app/warehouse'
The Dockerfile doesn't contain a CMD or an ENTRYPOINT, but when I execute commands in this order:
docker build --tag myapp:0 .
docker run -d -t myapp:0
docker exec -it <container_id> /bin/bash
It works as expected.
For some reason the container does not keep running when started with Docker Compose...
Command order:
docker-compose up -d --build
docker-compose run -d build_and_run_service bash
What's wrong?
Both cases work fine on Windows but not on Ubuntu...
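For reference, the difference boils down to this (ubuntu:20.04 leaves bash as the default command, and bash exits immediately when it has no TTY to wait on):

docker run -d myapp:0       # container exits at once: bash has no TTY
docker run -d -t myapp:0    # container stays up: -t allocates a TTY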
Edit, here's the Dockerfile:
FROM ubuntu:20.04 as runtime
LABEL description="Build and run container - myapp"

RUN apt-get update
RUN apt-get install -y software-properties-common
RUN apt-get install -y nano
RUN apt-get install -y wget
RUN apt-get install -y curl
RUN apt-get install -y make
RUN apt-get install -y build-essential
RUN apt-get install -y tcl zlib1g-dev libssl-dev tk libcurl4-gnutls-dev libexpat1-dev gettext dos2unix

# Compilers
RUN apt-get install -y gcc-10
RUN apt-get install -y g++-10
RUN rm /usr/bin/gcc \
    && ln -s /usr/bin/gcc-10 /usr/bin/gcc
RUN rm /usr/bin/g++ \
    && ln -s /usr/bin/g++-10 /usr/bin/g++

# Postgres dev
RUN sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
RUN wget --no-check-certificate --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add -
RUN apt-get update
RUN apt-get install -y libpq-dev postgresql-server-dev-13

RUN cd /tmp \
    && wget --no-check-certificate https://www.openssl.org/source/openssl-1.1.1g.tar.gz \
    && tar -zxf openssl-1.1.1g.tar.gz \
    && cd openssl-1.1.1g \
    && ./config \
    && make \
    && make install \
    && rm /usr/bin/openssl \
    && ln -s /usr/local/bin/openssl /usr/bin/openssl \
    && ldconfig

RUN cd /tmp \
    && wget --no-check-certificate https://cmake.org/files/v3.19/cmake-3.19.6-Linux-x86_64.tar.gz \
    && tar -zxf cmake-3.19.6-Linux-x86_64.tar.gz \
    && mv cmake-3.19.6-Linux-x86_64 /usr/local/ \
    && ln -s /usr/local/cmake-3.19.6-Linux-x86_64/bin/cmake /usr/bin/cmake

RUN cd /tmp \
    && wget --no-check-certificate https://mirrors.edge.kernel.org/pub/software/scm/git/git-2.31.0.tar.gz \
    && tar -zxf git-2.31.0.tar.gz \
    && cd git-2.31.0 \
    && make prefix=/usr/local all \
    && make prefix=/usr/local install

RUN cd /tmp \
    && wget --no-check-certificate https://boostorg.jfrog.io/artifactory/main/release/1.75.0/source/boost_1_75_0.tar.gz \
    && tar -zxf boost_1_75_0.tar.gz \
    && cd boost_1_75_0 \
    && ./bootstrap.sh \
    && ./b2 \
    && ./b2 install

VOLUME ["/app/cookie", "/app/logs", "/app/warehouse"]
WORKDIR /app
COPY . /src
RUN cd /src \
    && mkdir build \
    && cd build
# Some building command
## PRIVATE ##

# Removes tmp
RUN cd /tmp \
    && rm -r *

Docker multi-stage build not copying between stages

I'm trying to create a multi-stage build where the first stage does a yarn install for the theme and the second stage sets up the PHP environment for Drupal.
When I look at the output it looks like yarn install is being run, but the COPY command near the bottom doesn't copy the result across to the PHP image. If I'm right, once this works the node_modules directory should be created on my local machine?
docker-compose.yml:
version: '3.7'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - ./:/var/www/html:cached
    env_file:
      - ./local-development.env
    ports:
      - "8888:80"
  db:
    image: mysql:5.7
    env_file:
      - ./local-development.env
    ports:
      - "3306:3306"
Dockerfile:
FROM node:latest as yarn-install
WORKDIR /app
COPY ./web/themes/material_admin_mine ./
RUN yarn install --verbose --force;

# from https://www.drupal.org/docs/8/system-requirements/drupal-8-php-requirements
FROM php:7.2-apache

# Install & setup Xdebug
RUN yes | pecl install xdebug \
    && echo "zend_extension=$(find /usr/local/lib/php/extensions/ -name xdebug.so)" > /usr/local/etc/php/conf.d/xdebug.ini \
    && echo "xdebug.remote_enable=1" >> /usr/local/etc/php/conf.d/xdebug.ini \
    && echo 'xdebug.remote_connect_back=0' >> /usr/local/etc/php/conf.d/xdebug.ini \
    && echo 'xdebug.remote_host=docker.for.mac.localhost' >> /usr/local/etc/php/conf.d/xdebug.ini \
    && echo 'xdebug.remote_port=9000' >> /usr/local/etc/php/conf.d/xdebug.ini \
    && echo 'xdebug.remote_handler=dbgp' >> /usr/local/etc/php/conf.d/xdebug.ini \
    && echo 'xdebug.remote_mode=req' >> /usr/local/etc/php/conf.d/xdebug.ini \
    && echo 'xdebug.remote_autostart=1' >> /usr/local/etc/php/conf.d/xdebug.ini \
    && echo 'xdebug.idekey=PHPSTORM' >> /usr/local/etc/php/conf.d/xdebug.ini

# Install git & mysql-client for running Drush
RUN apt update; \
    apt install -y \
    git \
    mysql-client

# install the PHP extensions we need
RUN set -ex; \
    \
    if command -v a2enmod; then \
        a2enmod rewrite; \
    fi; \
    \
    savedAptMark="$(apt-mark showmanual)"; \
    \
    apt-get update; \
    apt-get install -y --no-install-recommends \
        libjpeg-dev \
        libpng-dev \
        libpq-dev \
        unzip \
        git \
    ; \
    \
    curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer; \
    \
    docker-php-ext-configure gd --with-png-dir=/usr --with-jpeg-dir=/usr; \
    docker-php-ext-install -j "$(nproc)" \
        gd \
        opcache \
        pdo_mysql \
        pdo_pgsql \
        zip \
    ; \
    \
    # reset apt-mark's "manual" list so that "purge --auto-remove" will remove all build dependencies
    apt-mark auto '.*' > /dev/null; \
    apt-mark manual $savedAptMark; \
    ldd "$(php -r 'echo ini_get("extension_dir");')"/*.so \
        | awk '/=>/ { print $3 }' \
        | sort -u \
        | xargs -r dpkg-query -S \
        | cut -d: -f1 \
        | sort -u \
        | xargs -rt apt-mark manual; \
    \
    apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; \
    rm -rf /var/lib/apt/lists/*

# set recommended PHP.ini settings
# see https://secure.php.net/manual/en/opcache.installation.php
RUN { \
        echo 'opcache.memory_consumption=128'; \
        echo 'opcache.interned_strings_buffer=8'; \
        echo 'opcache.max_accelerated_files=4000'; \
        echo 'opcache.revalidate_freq=60'; \
        echo 'opcache.fast_shutdown=1'; \
        echo 'opcache.enable_cli=1'; \
    } > /usr/local/etc/php/conf.d/opcache-recommended.ini

# Various packages required to run Gulp in the theme directory
# gnupg is required for nodejs
RUN apt update; \
    apt install gnupg -y; \
    apt install gnupg1 -y; \
    apt install gnupg2 -y; \
    cd ~; \
    curl -sL https://deb.nodesource.com/setup_10.x -o nodesource_setup.sh; \
    bash nodesource_setup.sh; \
    apt install nodejs -y; \
    npm install gulp-cli -g -y; \
    curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add - ;\
    echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list; \
    apt update && apt install yarn -y;

WORKDIR /var/www/html
COPY --from=0 /app ./web/themes/material_admin_mine
When your Dockerfile ends with:
WORKDIR /var/www/html
COPY --from=0 /app ./web/themes/material_admin_mine
That should in fact copy the data from the first build stage to the final image. But then when you launch the container with
volumes:
  - ./:/var/www/html:cached
everything in the /var/www/html directory tree, including the result of that final COPY step, is hidden and replaced with whatever is in the current directory on the host. If you think of this like a copy, it's a one-way copy into the container: later changes get copied back out to the host, but nothing synchronizes what's in the image with what was previously in the host directory at startup time.
A Dockerfile intrinsically can't affect host filesystem content. In your case it sounds like the host content is secondary to your application proper. Given what's going into the first stage, I'd just run the yarn install step on the host and be done with it (you probably already have Node and Yarn available). Otherwise you'd need a more selective volumes: section that carefully avoids overwriting that one directory; you might be able to mount something like ./web/src:/var/www/html/web/src so as to include only your application code and avoid hiding the .../web/themes tree.
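A quick way to convince yourself of this behavior (paths mirror the compose file above; the image tag is arbitrary):

# The built image contains the theme with node_modules baked in:
docker build -t drupal-demo .
docker run --rm drupal-demo ls web/themes/material_admin_mine
# The same path with the project bind-mounted shows the host directory instead,
# hiding everything the image had under /var/www/html:
docker run --rm -v "$(pwd)":/var/www/html drupal-demo ls web/themes/material_admin_mine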

in what scope are docker-compose commands run

I am hitting issues running a start script (e.g. npm run gulp-dist) for my container, as specified in my Docker Compose file. I traced the issue down to a Node version compatibility problem, which has led me to some confusion.
If I enter the container with docker-compose run workspace bash and then run node -v I get back v10.5.0 as expected (and what my script requires).
Yet if in docker-compose I set command: node -v it prints v4.2.6 when bringing up the container with docker-compose up workspace.
So I'm wondering where the commands I specify in docker-compose are run (I thought they were run in the container once it had started), and how I can run a command in the container. I want to specify it in docker-compose because I run a different command in each of two docker-compose files (one for the dev environment, one for production).
Note: my dev machine has Node version 11, so I have no idea where four is coming from.
Also, if I run docker-compose run workspace bash and then run the original script, it works fine; it only fails when run as a docker-compose command.
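For reference, the contrast reduced to two commands:

docker-compose run workspace bash    # then 'node -v' inside prints v10.5.0
docker-compose up workspace          # with 'command: node -v' it prints v4.2.6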
Here's my Dockerfile (sorry, it's big):
# FROM laradock/workspace:1.8-71
# Copied the contents of the above laradock workspace
# Dockerfile and put them here directly.
FROM phusion/baseimage:latest
MAINTAINER Mahmoud Zalt <mahmoud@zalt.me>

RUN DEBIAN_FRONTEND=noninteractive
RUN locale-gen en_US.UTF-8

ENV LANGUAGE=en_US.UTF-8
ENV LC_ALL=en_US.UTF-8
ENV LC_CTYPE=en_US.UTF-8
ENV LANG=en_US.UTF-8
ENV TERM xterm

# Add the "PHP 7" ppa
RUN apt-get install -y software-properties-common && \
    add-apt-repository -y ppa:ondrej/php

#
#--------------------------------------------------------------------------
# Software's Installation
#--------------------------------------------------------------------------
#

# Install "PHP Extensions", "libraries", "Software's"
RUN apt-get update && \
    apt-get install -y --allow-downgrades --allow-remove-essential \
    --allow-change-held-packages \
    php7.1-cli \
    php7.1-common \
    php7.1-curl \
    php7.1-intl \
    php7.1-json \
    php7.1-xml \
    php7.1-mbstring \
    php7.1-mcrypt \
    php7.1-mysql \
    php7.1-pgsql \
    php7.1-sqlite \
    php7.1-sqlite3 \
    php7.1-zip \
    php7.1-bcmath \
    php7.1-memcached \
    php7.1-gd \
    php7.1-dev \
    pkg-config \
    libcurl4-openssl-dev \
    libedit-dev \
    libssl-dev \
    libxml2-dev \
    xz-utils \
    libsqlite3-dev \
    sqlite3 \
    git \
    curl \
    vim \
    nano \
    postgresql-client \
    && apt-get clean

#####################################
# Composer:
#####################################

# Install composer and add its bin to the PATH.
RUN curl -s http://getcomposer.org/installer | php && \
    echo "export PATH=${PATH}:/var/www/vendor/bin" >> ~/.bashrc && \
    mv composer.phar /usr/local/bin/composer

# Source the bash
RUN . ~/.bashrc

#
# other - workspace specific config
#
RUN apt-get -y update && \
    apt-get install pkg-config libmagickwand-dev -y && \
    pecl install imagick

#####################################
# Non-Root User:
#####################################

# Add a non-root user to prevent files being created with root permissions on host machine.
ENV PUID 1000
ENV PGID 1000
RUN groupadd -g ${PGID} laradock && \
    useradd -u ${PUID} -g laradock -m laradock && \
    apt-get update -yqq

#####################################
# Set Timezone
#####################################
ARG TZ=UTC
ENV TZ ${TZ}
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone

#####################################
# Composer:
#####################################

# Add the composer.json
COPY ./composer.json /home/laradock/.composer/composer.json
# Make sure that ~/.composer belongs to laradock
RUN chown -R laradock:laradock /home/laradock/.composer

USER laradock

# Check if the global install needs to be run
ARG COMPOSER_GLOBAL_INSTALL=false
ENV COMPOSER_GLOBAL_INSTALL ${COMPOSER_GLOBAL_INSTALL}
RUN if [ ${COMPOSER_GLOBAL_INSTALL} = true ]; then \
        # run the install
        composer global install \
    ;fi

USER root

#####################################
# Node / NVM:
#####################################

# Check if NVM needs to be installed
ARG NODE_VERSION=10.5.0
ENV NODE_VERSION 10.5.0
ENV NVM_DIR /home/laradock/.nvm
RUN curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.1/install.sh | bash && \
    . $NVM_DIR/nvm.sh && \
    nvm install ${NODE_VERSION} && \
    nvm use ${NODE_VERSION} && \
    npm install -g gulp bower vue-cli

# link node and nodejs
RUN ln -s /usr/bin/nodejs /usr/bin/node

# Wouldn't execute when added to the RUN statement in the above block
# Source NVM when loading bash since ~/.profile isn't loaded on non-login shell
RUN echo "" >> ~/.bashrc && \
    echo 'export NVM_DIR="$HOME/.nvm"' >> ~/.bashrc && \
    echo '[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh" # This loads nvm' >> ~/.bashrc

# install required things
RUN apt-get update && apt-get install apt-transport-https && \
    curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add - && \
    echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list && \
    apt-get update && apt-get install -y --allow-unauthenticated yarn mysql-client

# Add NVM binaries to root's .bashrc
USER root
RUN apt-get install npm -y

# set npm registry address
RUN npm config set registry http://registry.npmjs.org/

#
#--------------------------------------------------------------------------
# Final Touch
#--------------------------------------------------------------------------
#

# Clean up
USER root
RUN apt-get clean && \
    rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

# Set default work directory
WORKDIR /var/www

# # copy in our code, so as not to rely on a volume in prod
COPY . /var/www

# ensure directories we need are writable
RUN chmod -R o+w /var/www/user-api-laravel/storage
RUN chmod -R o+w /var/www/user-api-laravel/bootstrap/cache
RUN chmod -R o+w /var/www/auto/storage
RUN chmod -R o+w /var/www/auto/bootstrap/cache

# install php project dependencies
RUN cd /var/www/user-api-laravel && composer install
RUN cd /var/www/auto && composer install

WORKDIR /var/www
USER root

# install auto-scaler deps
RUN cd /var/www/auto-scaler && npm i

# php.ini for cli
ADD ./php-cli.ini /etc/php/7.1/cli/php.ini
And the relevant part of the docker-compose file:
workspace:
  build:
    context: ./www-workspace
    args:
      - TZ=${WORKSPACE_TIMEZONE}
      - NODE_VERSION=${WORKSPACE_NODE_VERSION}
  command: [bash, -c, "cd /var/www/spa && npm run dist-prod"]
Still don't know what context the commands run in, but I made mine work. It was due to Node being installed via NVM, or at least to the way I'd installed it. When, as @Noogen suggested, I instead installed Node via curl -sL https://deb.nodesource.com/setup_10.x | sudo bash -, commands run against my container had access to the correct Node version. I had to settle for a lower Node version (not the 10.5.0 I could pin with NVM), but in the end it worked, so no worries.
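(For anyone who'd rather keep NVM: nvm is only wired up in ~/.bashrc, which non-interactive shells skip, so one hedged workaround is to force an interactive shell in the compose command, e.g.:

# -i makes bash source ~/.bashrc, which is where the Dockerfile above appends the nvm loader
command: [bash, -ic, "cd /var/www/spa && npm run dist-prod"]

This is untested against the exact image above, but it matches why docker-compose run workspace bash sees the right Node version while command: does not.)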
