Set up GitLab CI for Rails with npm - docker

These days I am struggling with the GitLab CI setup for my project. The setup is not as simple as Travis CI. I've spent a whole lot of time debugging this, and nothing I found fits my requirements.
Context
I have a Rails project that uses rvm, npm and PostgreSQL. I built a custom Docker image with rvm and npm installed. However, before running anything I have to install the corresponding Ruby and Node versions, as in my .gitlab-ci.yml:
image: "my-rvm-npm-image"
services:
- postgres:9.3-alpine
variables:
POSTGRES_DB: db
POSTGRES_USER: user
POSTGRES_PASSWORD:
cache:
untracked: true
key: "$CI_BUILD_REF_NAME"
paths:
- node_modules/
stages:
- build
- rspec
- npm
build:
stage: build
script:
- sudo chown -R $(whoami) /cache
- /bin/bash -l -c "rvm install $(cat .ruby-version)
&& rvm use $(cat .ruby-version)
&& gem install bundle && bundle install --jobs $(nproc) --path /cache
&& source ~/.nvm/nvm.sh && nvm install && npm install npm -g && npm install
&& bundle exec rake db:test:prepare"
rspec:
stage: rspec
script:
- bundle exec rake
npm:
stage: npm
script:
- npm test
Please note that this .gitlab-ci.yml still fails. bundle install puts its output under /cache, as set in the GitLab Runner configuration:
[[runners]]
  name = "xxxx"
  url = "URLabc.com"
  token = "myprojectoken"
  executor = "docker"
  [runners.docker]
    tls_verify = false
    image = "my-rvm-npm-image"
    privileged = true
    disable_cache = false
    volumes = ["/home/ci/cache:/cache", "/home/ci/builds:/builds"]
  [runners.cache]
The builds and cache folders are bind-mounted from the host, so I can keep the bundle install cache for the next build.
My problem
The build job passes, but rspec and npm fail with /bin/bash: bundle: command not found. It seems the installed bundle is not carried over to the other jobs. I know I could use artifacts to pass files to other jobs, but since I already have them as cache, I shouldn't have to.
I want to achieve these goals:
Cache the bundle install output for every run, since it takes 10-15 minutes to rebuild from scratch, and make it available to the rspec job so it can run the tests.
Also cache node_modules from npm install and make it available to the npm job.
Any suggestion for the rvm and npm installation is welcome, because I do not want the ugly bash -l -c prefix before every command. (A rough sketch of what I am after follows below.)
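For illustration, this is roughly the shape I am aiming for - an untested sketch, where BUNDLE_PATH and the vendor/bundle cache path are my assumptions rather than part of the current setup:

# Untested sketch: install gems and node modules inside the project tree
# so the same GitLab cache entry can be restored in later jobs.
variables:
  BUNDLE_PATH: vendor/bundle          # assumption: keep gems under the project dir

cache:
  key: "$CI_BUILD_REF_NAME"
  paths:
    - vendor/bundle/
    - node_modules/

rspec:
  stage: rspec
  script:
    - bundle install --jobs $(nproc)  # near-instant when vendor/bundle is already cached
    - bundle exec rake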

Related

Tests in Travis CI are not found

I am trying to implement Travis CI in my Django/Vue.js project.
I added this .travis.yml file to my root folder:
language: python
python:
  - '3.7.3'
sudo: required
before_install:
  - chmod +x ./pizza/manage.py
before_script:
  - pip install -r requirements.txt
env: DJANGO_SETTINGS_MODULE="pizzago.settings"
services:
  - postgresql
script:
  - ./pizza/manage.py test --keepdb
But as I run the build I get this output:
pip install -r requirements.txt
./pizza/manage.py test --keepdb
System check identified no issues (0 silenced).
Ran 0 tests in 0.000s
OK
The command "./pizza/manage.py test --keepdb" exited with 0.
Done. Your build exited with 0.
Running my tests locally with 'python3 manage.py test --keepdb' works perfectly.
My manage.py is not in my root folder.
Looks like my tests are not found… How can I fix it?
If I get it right, your manage.py is not in your root directory but in a /pizza/ directory. Travis should run the script inside this directory.
Change your .travis.yml this way:
language: python
python:
  - '3.7.3'
sudo: required
before_install:
  - chmod +x ./pizza/manage.py
before_script:
  - pip install -r requirements.txt
  - cd ./pizza/
env: DJANGO_SETTINGS_MODULE="pizzago.settings"
services:
  - postgresql
script:
  - python manage.py test --keepdb

GitLab CI/CD pipeline error while installing npm packages [package.json file not found]

I have a GitLab repository set up with frontend and backend folders inside it. Basically, my folder structure is as below:
--repo
  - frontend folder
  - backend folder
  - gitlab-ci.yml
According to the docs, the gitlab-ci.yml file is placed in the root folder, as shown in the structure above.
I am getting an error while running the pipeline: the npm install command does not get executed and instead errors out with "no such file or directory". The package.json file is placed inside the backend folder.
I need to change into that directory for the npm install command and also for the deploy.
My gitlab-ci.yml file is as below:
# Node docker image on which this would be run
image: node:8.10.0

cache:
  paths:
    - node_modules/

stages:
  - test
  - deploy_production

# Job 1:
Test:
  stage: test
  script:
    - npm install

# Job 2:
# Deploy to staging
Production:
  image: ruby:latest
  only:
    - master
  stage: deploy_production
  script:
    - apt-get update -qy
    - apt-get install -y ruby-dev
    - gem install dpl
    - dpl --provider=heroku --app=XXXXXXX --api-key=XXXXXXXXXXXXXXXXXXXXXXXXXX
Any help would be really appreciated! Thanks
npm install needs to run in a folder containing a package.json file. I suspect this file might be present in your subfolders (frontend and/or backend).
You should add

before_script:
  - cd backend # or frontend

to your Test job.
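Putting it together, the Test job could then look something like this (the backend/node_modules cache path is an assumption on my part, since node_modules will now be created inside the subfolder):

Test:
  stage: test
  before_script:
    - cd backend                # or frontend
  script:
    - npm install

cache:
  paths:
    - backend/node_modules/     # assumption: cache the subfolder's node_modules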

Docker and Travis CI failing on build

I am trying to dockerize my app as part of Travis CI so I can then publish it to Docker Hub.
I have set up my Dockerfile, docker-compose and travis.yml.
When the pipeline triggered from GitHub finishes, I get this error message:
0.60s$ docker run mysite /bin/sh -c "cd /root/mysite; bundle exec rake test"
/bin/sh: 1: cd: can't cd to /root/mysite
/bin/sh: 1: bundle: not found
The command "docker run mysite /bin/sh -c "cd /root/mysite; bundle exec rake test"" failed and exited with 127 during .
My Dockerfile:
#Server
FROM node:latest
#create app dir in the container
RUN mkdir -p /usr/src/app
# Sets the working directory for the app;
# this applies to all subsequent commands
# like RUN, CMD, etc.
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm config set strict-ssl false
RUN npm install
# Bundle app source
COPY . .
EXPOSE 3006
CMD [ "npm", "run", "start:unsafe" ]
Docker-compose:
version: '3'
services:
  web:
    build: .
travis.yml:
sudo: required
language: node_js
node_js:
  - "stable"
services:
  - docker
before_install:
  - docker build -t mysite .
  - docker run -d -p 127.0.0.1:80:4567 mysite /bin/sh -c "cd /root/mysite; bundle exec foreman start;"
  - docker ps -a
  - docker run mysite /bin/sh -c "cd /root/mysite; bundle exec rake test"
cache:
  directories:
    - node_modules
script:
  - bundle exec rake test
  - npm test
  - npm run build
I have tried running the commands from the travis.yml locally and I get the same error:
/bin/sh: 1: cd: can't cd to /usr/src/app/mysite
/bin/sh: 1: bundle: not found
I tried going into the container to see if the directories match, but the container always exits right after it starts.
To execute a command in an existing running container you must call docker exec, not docker run.
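For example (the container name here is a placeholder; use whatever docker ps reports):

docker ps                                  # find the running container's name or ID
docker exec -it <container_name> /bin/sh   # open a shell inside that container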
You possibly mixed up node_js and Ruby. Rewrite your .travis.yml to something like:
sudo: required
language: node_js
node_js:
  - "stable"
cache:
  directories:
    - "node_modules"
services:
  - docker
before_install:
  - docker build -t mysite:travis-$TRAVIS_BUILD_NUMBER .
script:
  - npm test
  - npm run build
  - docker images "$DOCKER_USERNAME"/mysite
after_success:
  - if [ "$TRAVIS_BRANCH" == "master" ]; then
      docker login -u="$DOCKER_USERNAME" -p="$DOCKER_PASSWORD";
      docker tag mysite:travis-$TRAVIS_BUILD_NUMBER "$DOCKER_USERNAME"/mysite:travis-$TRAVIS_BUILD_NUMBER;
      docker push "$DOCKER_USERNAME"/mysite:travis-$TRAVIS_BUILD_NUMBER;
    fi

Google Cloud Ruby deployment and ruby-docker

I am trying to put my Rails project on Google App Engine for the first time and I am having a lot of trouble.
I wanted to deploy my project with a custom runtime app.yaml (because I would like yarn to install the dependencies as well), but the deployment command fails with this error:
Error Response: [4] Your deployment has failed to become healthy in the allotted time and therefore was rolled back. If you believe this was an error, try adjusting the 'app_start_timeout_sec' setting in the 'readiness_check' section.
PS: the app runs locally (development and production env).
My app.yaml looks like this:
entrypoint: bundle exec rails s -b '0.0.0.0' --port $PORT
env: flex
runtime: custom

env_variables:
  # my environment variables

beta_settings:
  cloud_sql_instances: ekoma-app:us-central1:ekoma-db

readiness_check:
  path: "/_ah/health"
  check_interval_sec: 5
  timeout_sec: 4
  failure_threshold: 2
  success_threshold: 1
  app_start_timeout_sec: 120
And my Dockerfile looks like this:
FROM l.gcr.io/google/ruby:latest
RUN apt-get update -qq && apt-get install apt-transport-https
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -
RUN echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev imagemagick yarn
WORKDIR /app
COPY Gemfile /app/Gemfile
COPY Gemfile.lock /app/Gemfile.lock
COPY package.json /app/package.json
COPY yarn.lock /app/yarn.lock
RUN gem install pkg-config -v "~> 1.1"
RUN bundle install && npm install
COPY . /app
When deploying with the ruby runtime I realized that the generated Dockerfile was much more complex and probably more complete, and that Google provides a repo to generate it.
So I tried to look into the public ruby-docker repo that Google shared, but I don't know how to use their generated Docker images and therefore fix my Dockerfile issue:
https://github.com/GoogleCloudPlatform/ruby-docker
Could someone help me figure out what's wrong in my setup and how to use these ruby-docker images (they seem very useful!)?
Thank you!
The "entrypoint" field in app.yaml is not used when a custom runtime is in play. Instead, set the CMD in your Dockerfile. e.g.:
CMD ["bundle", "exec", "rails", "s", "-b", "0.0.0.0", "--port", "8080"]
That will probably get your application running. (Remember that environment variables are not interpolated in exec form, so I replaced your $PORT with the hard-coded port 8080, which is the port App Engine expects.)
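If you do want to keep the port configurable, one option is the shell form of CMD, which is run through /bin/sh and therefore does expand environment variables at runtime, e.g.:

CMD bundle exec rails s -b '0.0.0.0' --port $PORT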
As an alternative:
It may be possible to use the Ruby runtime images in the ruby-docker repo, and not have to use a custom runtime (i.e. you may not need to write your own Dockerfile), even if you have custom build steps like doing yarn installs. Most of the build process in runtime: ruby is customizable, but it's not well-documented. If you want to try this path, the TL;DR is:
Use runtime: ruby in your app.yaml and don't provide your own Dockerfile. (And reinstate the entrypoint of course.)
If you want to install ubuntu packages not normally present in runtime: ruby, list them in app.yaml under runtime_config:packages. For example:
runtime_config:
  packages:
    - libgeos-dev
    - libproj-dev
If you want to run custom build steps, list them in app.yaml under runtime_config:build. They get executed in the Dockerfile after the bundle install step (which cannot itself be modified). For example:
runtime_config:
  build:
    - npm install
    - bundle exec rake assets:precompile
    - bundle exec rake setup_my_stuff
Note that by default, if you don't provide custom build steps, the ruby runtime behaves as if there is one build step: bundle exec rake assets:precompile || true. That is, by default, runtime: ruby will attempt to compile your assets during app engine deployment. If you do modify the build steps and you want to keep this behavior, make sure you include that rake task as part of your custom build steps.
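In app.yaml terms, that default is roughly equivalent to the following (shown only for illustration; you don't need to write it unless you override the build steps):

runtime_config:
  build:
    - bundle exec rake assets:precompile || true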

Docker container builds on OSX but not Amazon Linux

My Docker container builds fine on OSX:
Docker version 17.12.0-ce, build c97c6d6
docker-compose version 1.18.0, build 8dd22a9
But doesn't build on Amazon Linux:
Docker version 17.12.0-ce, build 3dfb8343b139d6342acfd9975d7f1068b5b1c3d3
docker-compose version 1.20.1, build 5d8c71b
Full Dockerfile:
# Specify base image
FROM andreptb/oracle-java:8-alpine
# Specify author / maintainer
MAINTAINER Douglas Duhaime <douglas.duhaime#gmail.com>
# Add source to a directory and use that directory
# NB: /app is a reserved directory in tomcat container
ENV APP_PATH="/lts-app"
RUN mkdir "$APP_PATH"
ADD . "$APP_PATH"
WORKDIR "$APP_PATH"
##
# Build BlackLab
##
RUN apk add --update --no-cache \
  wget \
  tar \
  git
# Store the path to the maven home
ENV MAVEN_HOME="/usr/lib/maven"
# Add maven and java to the path
ENV PATH="$MAVEN_HOME/bin:$JAVA_HOME/bin:$PATH"
# Install Maven
RUN MAVEN_VERSION="3.3.9" && \
  cd "/tmp" && \
  wget "http://archive.apache.org/dist/maven/maven-3/$MAVEN_VERSION/binaries/apache-maven-$MAVEN_VERSION-bin.tar.gz" -O - | tar xzf - && \
  mv "/tmp/apache-maven-$MAVEN_VERSION" "$MAVEN_HOME" && \
  ln -s "$MAVEN_HOME/bin/mvn" "/usr/bin/mvn" && \
  rm -rf "/tmp/*"
# Get the BlackLab source
RUN git clone "git://github.com/INL/BlackLab.git"
# Build BlackLab with Maven
RUN cd "BlackLab" && \
mvn clean install
##
# Build Python + Node dependencies
##
# Install system deps with Alpine Linux package manager
RUN apk add --update --no-cache \
  g++ \
  gcc \
  make \
  openssl-dev \
  python3-dev \
  python \
  py-pip \
  nodejs
# Install Python dependencies
RUN pip install -r "requirements.txt" && \
  npm install --no-optional && \
  npm run build
# Store Mongo service name as mongo host
ENV MONGO_HOST=mongo_service
ENV TOMCAT_HOST=tomcat_service
ENV TOMCAT_WEBAPPS=/tomcat_webapps/
# Make ports available
EXPOSE 7082
# Seed the db
CMD npm run seed && \
  gunicorn -b 0.0.0.0:7082 --access-logfile - --reload server.app:app
Full docker-compose.yml:
version: '2'
services:
  tomcat_service:
    image: 'bitnami/tomcat:latest'
    ports:
      - '8080:8080'
    volumes:
      - docker-data-tomcat:/bitnami/tomcat/data/
      - docker-data-blacklab:/lts-app/lts/
  mongo_service:
    image: 'mongo'
    command: mongod
    ports:
      - '27017:27017'
  web:
    # gain access to linked containers
    links:
      - mongo_service
      - tomcat_service
    # explicitly declare service dependencies
    depends_on:
      - mongo_service
      - tomcat_service
    # set environment variables
    environment:
      PYTHONUNBUFFERED: 'true'
    # use the image from the Dockerfile in the cwd
    build: .
    ports:
      - '7082:7082'
    volumes:
      - docker-data-tomcat:/tomcat_webapps
      - docker-data-blacklab:/lts-app/lts/
volumes:
  docker-data-tomcat:
  docker-data-blacklab:
The command I'm running is: docker-compose up --build
The result on Amazon Linux is:
Running setup.py install for pymongo: started
Running setup.py install for pymongo: finished with status 'done'
Running setup.py install for pluggy: started
Running setup.py install for pluggy: finished with status 'done'
Running setup.py install for coverage: started
Running setup.py install for coverage: finished with status 'done'
Successfully installed Faker-0.8.12 Flask-0.12.2 Flask-Cors-3.0.3 Jinja2-2.10 MarkupSafe-1.0 Werkzeug-0.14.1 astroid-1.6.2 attrs-17.4.0 backports.functools-lru-cache-1.5 beautifulsoup4-4.5.1 click-6.7 configparser-3.5.0 coverage-4.5.1 enum34-1.1.6 funcsigs-1.0.2 futures-3.2.0 gunicorn-19.7.1 ipaddress-1.0.19 isort-4.3.4 itsdangerous-0.24 lazy-object-proxy-1.3.1 mccabe-0.6.1 more-itertools-4.1.0 pluggy-0.6.0 py-1.5.3 py4j-0.10.6 pylint-1.8.3 pymongo-3.6.1 pytest-3.5.0 pytest-cov-2.5.1 python-dateutil-2.7.2 singledispatch-3.4.0.3 six-1.11.0 text-unidecode-1.2 wrapt-1.10.11
You are using pip version 8.1.2, however version 9.0.3 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
npm WARN deprecated redux-mock-store#1.5.1: breaking changes in minor version
> base62#1.2.7 postinstall /lts-app/node_modules/base62
> node scripts/install-stats.js || exit 0
ERROR: Service 'web' failed to build: The command '/bin/sh -c pip install -r "requirements.txt" && npm install --no-optional && npm run build' returned a non-zero code: 1
Does anyone know what might be causing this discrepancy? The error message from Docker doesn't give many clues. I'd be very grateful for any ideas others can offer!
To solve this problem, I followed @MazelTov's advice and built the containers on my local OSX development machine, then published the images to Docker Cloud, and then pulled those images down to my production server (AWS EC2) and ran them there.
Install Dependencies
I'll try and outline the steps I followed below in case they help others. Please note these steps require you to have docker and docker-compose installed on your development and production machines. I used the gui installer to install Docker for Mac.
Build Images
After writing a Dockerfile and docker-compose.yml file, you can build your images with docker-compose up --build.
Upload Images to Docker Cloud
Once the images are built, you can upload them to Docker Cloud with the following steps. First, create an account on Docker Cloud.
Then store your Docker Cloud username in an environment variable, so your ~/.bash_profile should contain export DOCKER_ID_USER='yaledhlab' (use your own username though).
Next login to your account from your developer machine:
docker login
Once you're logged in, list your running Docker containers:
docker ps
This will display something like:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
89478c386661 yaledhlab/let-them-speak-web "/bin/sh -c 'npm run…" About an hour ago Up About an hour 0.0.0.0:7082->7082/tcp letthemspeak_web_1
5e9c75d29051 training/webapp:latest "python app.py" 4 hours ago Up 4 hours 0.0.0.0:5000->5000/tcp heuristic_mirzakhani
890f7f1dc777 bitnami/tomcat:latest "/app-entrypoint.sh …" 4 hours ago Up About an hour 0.0.0.0:8080->8080/tcp letthemspeak_tomcat_service_1
09d74e36584d mongo "docker-entrypoint.s…" 4 hours ago Up About an hour 0.0.0.0:27017->27017/tcp letthemspeak_mongo_service_1
For each of the images you want to publish to Docker Cloud, run:
docker tag image_name $DOCKER_ID_USER/my-uploaded-image-name
docker push $DOCKER_ID_USER/my-uploaded-image-name
For example, to upload mywebapp_web to your user's account on Docker cloud, you can run:
docker tag mywebapp_web $DOCKER_ID_USER/web
docker push $DOCKER_ID_USER/web
You can then run open https://cloud.docker.com/swarm/$DOCKER_ID_USER/repository/list to see your uploaded images.
Deploy Images
Finally, you can deploy your images on EC2 with the following steps. First, install Docker and Docker-Compose on the Amazon-flavored EC2 instance:
# install docker
sudo yum install docker -y
# start docker
sudo service docker start
# allow ec2-user to run docker
sudo usermod -a -G docker ec2-user
# get the docker-compose binaries
sudo curl -L https://github.com/docker/compose/releases/download/1.20.1/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
# make the docker-compose binary executable
sudo chmod +x /usr/local/bin/docker-compose
Log out, then log back in to update your user's groups. Then start a screen and run the server: screen. Once the screen starts, you should be able to add a new docker-compose config file that specifies the path to your deployed images. For example, I needed to fetch the let-them-speak-web container housed within yaledhlab's Docker Cloud account, so I changed the docker-compose.yml file above to the file below, which I named production.yml:
version: '2'
services:
  tomcat_service:
    image: 'bitnami/tomcat:latest'
    ports:
      - '8080:8080'
    volumes:
      - docker-data-tomcat:/bitnami/tomcat/data/
      - docker-data-blacklab:/lts-app/lts/
  mongo_service:
    image: 'mongo'
    command: mongod
    ports:
      - '27017:27017'
  web:
    image: 'yaledhlab/let-them-speak-web'
    # gain access to linked containers
    links:
      - mongo_service
      - tomcat_service
    # explicitly declare service dependencies
    depends_on:
      - mongo_service
      - tomcat_service
    # set environment variables
    environment:
      PYTHONUNBUFFERED: 'true'
    ports:
      - '7082:7082'
    volumes:
      - docker-data-tomcat:/tomcat_webapps
      - docker-data-blacklab:/lts-app/lts/
volumes:
  docker-data-tomcat:
  docker-data-blacklab:
Then the production compose file can be run with: docker-compose -f production.yml up. Finally, ssh in with another terminal, and detach the screen with screen -D.
