I'm trying to set up a basic pipeline in GitLab that does the following: run the test command, compile the client, and deploy the application using docker-compose.
The problem comes when I try to use `npm install`.
My .gitlab-ci.yml file looks like:
```yml
# This file is a template, and might need editing before it works on your project.
# Official docker image.
image: docker:latest

services:
  - docker:dind

stages:
  - test
  - build
  - deploy

build:
  stage: build
  script:
    - cd packages/public/client/
    - npm install --only=production
    - npm run build

test:
  stage: test
  only:
    - develop
    - production
  script:
    - echo run tests in this section

step-deploy-production:
  stage: deploy
  only:
    - production
  script:
    - docker-compose up -d --build
  environment: production
  when: manual
```
And the error is:
```
Skipping Git submodules setup
$ cd packages/public/client/
$ npm install --only=production
bash: line 69: npm: command not found
ERROR: Job failed: exit status 1
```
I'm using the latest docker image, so I'm wondering: can I define a new service on my build stage, or should I use a different image for the whole process?
Thanks
A new service will not help you; you'll need to use a different image. You can use a node image just for your build stage, like this:
```yml
build:
  image: node:8
  stage: build
  script:
    - cd packages/public/client/
    - npm install --only=production
    - npm run build
```
Summary of Problem
I have a task in GitLab that requires an npm build to run.
This build generates the static folder that is needed by my docker build for the server, which copies the generated files into the image. I think I can use artifacts and `dependencies` to make the second task wait on the npm build and get the files it needs, but this makes the artifacts downloadable from the UI, which is not desirable. I found a GitLab issue that seems stale and unlikely to ever go anywhere. Is there any other method I can use?
## Dependency build

```yml
build-web:
  stage: build
  image: node:17.6.0-slim
  before_script:
    - set -euo pipefail
    - set -x
    - cd web
    - npm install
    - npm run check || true
    - npm run lint || true
  script:
    - npm run build
```
## Server build

```yml
build-server:
  stage: build
  tags:
    - shell
  before_script:
    - echo Building server image with tag $CI_COMMIT_REF_NAME
  script:
    - DOCKER_BUILDKIT=1 BUILDKIT_INLINE_CACHE=1 docker build --tag "server:$CI_COMMIT_REF_NAME" -f ./deployment/server/Dockerfile .
```
Relevant Dockerfile lines

```
COPY ./api .
COPY ./api/web ./web
```
Notes/edits
I host my own runners, and I use a shell executor for `docker build` instead of dind.
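For reference, the two jobs above can be wired together with `artifacts` plus `needs` — a sketch, with two assumptions: the output path `web/build` stands in for whatever `npm run build` actually emits, and a short `expire_in` only limits how long the artifact stays downloadable from the UI, it does not hide it:

```yml
build-web:
  stage: build
  image: node:17.6.0-slim
  script:
    - cd web
    - npm install
    - npm run build
  artifacts:
    paths:
      - web/build          # assumed output directory of `npm run build`
    expire_in: 30 minutes  # limits how long the artifact is exposed

build-server:
  stage: build
  tags:
    - shell
  needs:
    - job: build-web
      artifacts: true      # download build-web's output into this job
  script:
    - docker build --tag "server:$CI_COMMIT_REF_NAME" -f ./deployment/server/Dockerfile .
```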
So I am trying to make my pipeline work, but I keep getting stuck.
I have a docker runner for my .gitlab-ci.yml file.
I do this because my deploy stage errors with shell runners (my build, test, and sonarqube stages do work with the shell runner).
**.gitlab-ci.yml**
```yml
image: docker:latest

stages:
  - build
  - sonarqube-check
  - test
  - deploy

cache:
  paths:
    - .gradle/wrapper
    - .gradle/caches

build:
  stage: build
  image: gradle:jre11-slim
  script:
    - chmod +x gradlew
    - ./gradlew assemble
  artifacts:
    paths:
      - build/libs/*.jar
    expire_in: 1 week
  only:
    - master

sonarqube-check:
  stage: test
  image: gradle:jre11-slim
  variables:
    SONAR_USER_HOME: "${CI_PROJECT_DIR}/.sonar"  # Defines the location of the analysis task cache
    GIT_DEPTH: "0"  # Tells git to fetch all the branches of the project, required by the analysis task
  cache:
    key: "${CI_JOB_NAME}"
    paths:
      - .sonar/cache
  script: ./gradlew sonarqube
  allow_failure: true
  only:
    - master

test:
  stage: test
  script:
    - ./gradlew check

deploy:
  stage: deploy
  image: gradle:latest
  script:
    - apt-get update -qy
    - apt-get install -y ruby-dev
    - gem install dpl
    - dpl --provider=heroku --app=heroku-coalition --api-key=$HEROKU_API_KEY
    - echo "This job deploys something from the $CI_COMMIT_BRANCH branch."
  only:
    - master

after_script:
  - echo "End CI"
```
First I got errors about my Java home, so I switched the image for the build stage. After that I kept getting permission errors, so I added the `chmod +x gradlew` line.
But with that line I get this error:

```
chmod: changing permissions of 'gradlew': Operation not permitted
```

And when I remove the `chmod` line I get:

```
/bin/bash: line 115: ./gradlew: Permission denied
```

So now I do not really know what to do.
In short: Which runner should I use to get this yml file to work, or how would I need to edit this yml file accordingly?
After some research I came across the `tags` keyword in GitLab. It lets you use two runners for one yml file. This way I could use a shell runner for my build, test, and sonarqube phases and a docker runner for my deploy phase!
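A minimal sketch of how that looks (the tag values `shell` and `docker` are assumptions; they must match the tags configured on your runners in the GitLab admin/settings UI):

```yml
build:
  stage: build
  tags:
    - shell    # picked up by the shell runner
  script:
    - ./gradlew assemble

deploy:
  stage: deploy
  tags:
    - docker   # picked up by the docker runner
  image: gradle:latest
  script:
    - dpl --provider=heroku --app=heroku-coalition --api-key=$HEROKU_API_KEY
```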
I'm new to GitLab, newman, and docker, and I'm sort of on a crash course in how to integrate everything.
On my desktop (Windows OS), I've installed newman, and I have managed to run `newman run [postman collections]` via the Windows command line.
What I ultimately want to do is run a newman command in GitLab.
In the .gitlab-ci file, I have this:
```yml
stages:
  - test

Test_A:
  stage: test
  image: postman/newman
  script:
    - newman run Collection1.json
```
A few questions come to mind:

- Do I need to also run the `npm install -g newman` command in the .gitlab-ci file?
- If not, how does GitLab know the syntax of a newman command (e.g. `newman run`)?
- Do I need to specify a docker command in my .gitlab-ci file (e.g. `docker pull postman/newman`)?
Update #2

```yml
stages:
  - test

before_script:
  - npm install -g newman
  - npm install -g npm

Test_A:
  stage: test
  script:
    - newman run Collection1.json
```
The first thing you have to identify is how your GitLab pipelines are executed.
My personal choice is to use a Docker-based runner.
If you're using a GitLab docker runner to run your pipeline, then you just have to define your container image in the .gitlab-ci.yml file.
Here's a version of the pipeline YAML tested on GitLab.com:
```yml
stages:
  - test

run_postman_tests:
  stage: test
  image:
    name: postman/newman
    entrypoint: [""]  # This overrides the default entrypoint for this image.
  script:
    - newman run postman_collection.json
```
I'm trying to set up continuous deployment on CircleCI.
I've successfully run my build script, which creates a `build` folder in the root directory. When I run the command locally to sync with S3, it works fine, but in CircleCI I can't get the path to the `build` folder.
I've tried `./build`, adding `working_directory: ~/circleci-docs` to the deploy job, and printing the working directory in a test run (it was `/home/circleci/project`), so I tried `/home/circleci/project/build` manually, and that didn't work either.
This is my CircleCI config.yml file:
```yml
executors:
  node-executor:
    docker:
      - image: circleci/node:10.8
  python-executor:
    docker:
      - image: circleci/python:3.7

jobs:
  build:
    executor: node-executor
    steps:
      - checkout
      - run:
          name: Run build script
          command: |
            curl -o- -L https://yarnpkg.com/install.sh | bash
            yarn install --production=false
            yarn build
  deploy:
    executor: python-executor
    steps:
      - checkout
      - run:
          name: Install awscli
          command: sudo pip install awscli
      - run:
          name: Deploy to S3
          command: aws s3 sync build s3://{MY_BUCKET}

workflows:
  version: 2
  build-deploy:
    jobs:
      - build
      - deploy:
          requires:
            - build
```
The error message was:

```
The user-provided path build does not exist.
Exited with code 255
```
I got it to work!
In the build job I used `persist_to_workspace`, and in the deploy job `attach_workspace` (both go under `steps`):
```yml
- persist_to_workspace:
    root: ~/
    paths:
      - project/build
```

```yml
- attach_workspace:
    at: ~/
```
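In full context, those two steps slot into the jobs from the config above roughly like this (a sketch; it assumes the default `~/project` working directory, and `{MY_BUCKET}` is a placeholder as in the question):

```yml
jobs:
  build:
    executor: node-executor
    steps:
      - checkout
      - run: yarn build
      - persist_to_workspace:   # save the build output for later jobs
          root: ~/
          paths:
            - project/build
  deploy:
    executor: python-executor
    steps:
      - attach_workspace:       # restores ~/project/build
          at: ~/
      - run: aws s3 sync ~/project/build s3://{MY_BUCKET}
```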
I just got into the (wonderful) world of CI/CD and have working pipelines. They are not optimal, though.
The application is a dockerized website:

- the source needs to be compiled by webpack and ends up in `dist`
- this `dist` directory is copied to a docker container
- which is then remotely built and deployed
My current setup is quite naïve (I added some comments to show why I believe the various elements are needed/useful):
```yml
# I start with a small image
image: alpine

# before the job I need to have npm and docker
# the problem: I need one in one job, and the second one in the other
# I do not need both on both jobs but do not see how to split them
before_script:
  - apk add --update npm
  - apk add docker
  - npm install
  - npm install webpack -g

stages:
  - create_dist
  - build_container
  - stop_container
  - deploy_container

# the dist directory is preserved for the other job which will make use of it
create_dist:
  stage: create_dist
  script: npm run build
  artifacts:
    paths:
      - dist

# the following three jobs are remote and need to be daisy chained
build_container:
  stage: build_container
  script: docker -H tcp://eu13:51515 build -t widgets-sentinels .

stop_container:
  stage: stop_container
  script: docker -H tcp://eu13:51515 stop widgets-sentinels
  allow_failure: true

deploy_container:
  stage: deploy_container
  script: docker -H tcp://eu13:51515 run --rm -p 8880:8888 --name widgets-sentinels -d widgets-sentinels
```
This setup works, but npm and docker are installed in both jobs. This is not needed and slows down the deployment. Is there a way to state that such and such packages need to be added for specific jobs (and not globally for all of them)?
To make it clear: this is not a show stopper (and in reality not likely to be an issue at all) but I fear that my approach to such a job automation is incorrect.
You don't necessarily need to use the same image for all jobs. Let me show you (part of) one of our pipelines, which does a similar thing, just with Composer for PHP instead of npm:
```yml
cache:
  paths:
    - vendor/

build:composer:
  image: registry.example.com/base-images/php-composer:latest  # use our custom base image where only composer is installed to build the dependencies
  stage: build dependencies
  script:
    - php composer.phar install --no-scripts
  artifacts:
    paths:
      - vendor/
  only:
    changes:
      - composer.{json,lock,phar}  # build the vendor folder only when relevant files change; otherwise use the cached folder from the S3 bucket (configured in the runner config)

build:api:
  image: docker:18  # use a docker image to build the actual application image
  stage: build api
  dependencies:
    - build:composer  # reference dependency dir
  script:
    - docker login -u gitlab-ci-token -p "$CI_BUILD_TOKEN" "$CI_REGISTRY"
    - docker build -t $CI_REGISTRY_IMAGE:latest .
    - docker push $CI_REGISTRY_IMAGE:latest
```
The composer base image contains all the packages necessary to run Composer, so in your case you'd create a base image for npm:

```dockerfile
FROM alpine:latest
RUN apk add --update npm
```
Then use this image in your `create_dist` stage, and use `image: docker:latest` in the other stages.
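A sketch of how the jobs from the question could then look (the image name `registry.example.com/npm-base` is an assumption, standing in for wherever you push the npm base image built from the Dockerfile above):

```yml
create_dist:
  image: registry.example.com/npm-base:latest  # hypothetical custom npm base image
  stage: create_dist
  script: npm run build
  artifacts:
    paths:
      - dist

build_container:
  image: docker:latest
  stage: build_container
  script: docker -H tcp://eu13:51515 build -t widgets-sentinels .
```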
As well as referencing different images for different jobs, you may also try GitLab anchors, which provide reusable templates for the jobs:
```yml
.install-npm-template: &npm-template
  before_script:
    - apk add --update npm
    - npm install
    - npm install webpack -g

.install-docker-template: &docker-template
  before_script:
    - apk add docker

create_dist:
  <<: *npm-template
  stage: create_dist
  script: npm run build
  ...

deploy_container:
  <<: *docker-template
  stage: deploy_container
  ...
```
Try a multi-stage build: you can use intermediate temporary images and copy the generated content into the final docker image. Also, npm can be part of the docker build itself: create one npm image and use it as the builder stage of the final docker image.
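A minimal sketch of that idea (the `node:lts-alpine` and `nginx:alpine` base images and the `dist/` output path are assumptions based on the webpack setup above):

```dockerfile
# builder stage: install npm dependencies and run webpack
FROM node:lts-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build   # assumed to emit the dist/ directory

# final stage: copy only the built assets into a small web server image
FROM nginx:alpine
COPY --from=builder /app/dist /usr/share/nginx/html
```

This way npm exists only in the throwaway builder stage, and the image you deploy contains just the compiled `dist` output.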