GitLab CI e2e test against nginx Docker image

I'm trying to run e2e tests against a Docker image, which is based on the official nginx image and built in a previous step.
My idea was to make it available via service in this way:
e2e:
  stage: e2e
  image: weboaks/node-karma-protractor-chrome:alpine
  services:
    - name: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
      alias: app
  before_script:
    - yarn
    - yarn run webdriver:update --standalone
  script:
    - yarn run e2e:ci
The Dockerfile of the image linked as a service looks like this:
FROM nginx:1.15-alpine
RUN rm -rf /usr/share/nginx/html/* && apk add --no-cache -vvv bash
ADD deploy/nginx/conf.d /etc/nginx/conf.d
ADD dist /usr/share/nginx/html
But it seems that app isn't available at http://app.
Am I missing something, or is there another approach for testing against an already built image?
When I run the image locally with docker run -p 80:80 local-test, or deploy it to a server, everything works as expected.
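For debugging, it can help to probe the service alias from the job container before the tests start. A minimal sketch (an assumption that busybox wget is available in the alpine-based job image; the alias app comes from the services block above):

e2e:
  before_script:
    # non-fatal reachability probe against the service alias
    - wget -qO- http://app/ >/dev/null || echo "service alias 'app' is not reachable"
    - yarn
    - yarn run webdriver:update --standalone

If the probe fails, the problem lies in service startup or networking rather than in the test setup itself.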

Related

GitLab CI npm cannot resolve module

Totally new to GitLab and CI in general, so apologies for the lack of understanding. I have a repo, which is NuxtJS based, with a Dockerfile. The end goal of the pipeline is to build an image from this repo and push it to my Docker account. The Dockerfile is relatively straightforward, containing an npm install and an npm run build. I'm using a custom Docker image as my runner, based on docker:20.10.17-dind-alpine3.16 with Ansible, Terraform and kubectl installed.
When building the project's Docker image on my local machine, I receive no issues; however, in GitLab, when the npm run build command runs, I get the following error:
Module not found: Error: Can't resolve '../node_modules/vue-confirm-dialog' in '/usr/src/nuxt-app/plugins'
Here is my yml file:
stages:
  - docker

docker:
  stage: docker
  image: <my-runner-image>
  services:
    - "docker:dind"
  before_script:
    - docker login -u $DOCKER_REGISTRY_USER -p $DOCKER_REGISTRY_PASSWORD
  script:
    - docker build -t <my-repo> .
    - docker push <my-repo>
Any suggestions are greatly appreciated
--EDIT--
As requested, here is the project's Dockerfile:
FROM node:lts-alpine3.15
# create destination directory
RUN mkdir -p /usr/src/nuxt-app
WORKDIR /usr/src/nuxt-app
# update and install dependencies
RUN apk update && apk upgrade
RUN apk add git
# copy the app, note .dockerignore
COPY . /usr/src/nuxt-app/
RUN npm install
RUN npm run build
EXPOSE 3000
ENV NUXT_HOST=0.0.0.0
ENV NUXT_PORT=3000
CMD [ "npm", "start" ]

GitLab CI hangs on npm run build (webpack production command)

[screenshot: GitLab pipeline output]
GitLab CI/CD YML file:
image: docker:latest
services:
  - docker:dind
stages:
  - test

test_stage:
  stage: test
  tags:
    - def
  before_script:
    - apk version
    - apk add --no-cache docker-compose
    - docker info
    - docker-compose --version
  script:
    - echo "Building and testing"
    - docker-compose up --abort-on-container-exit
Dockerfile
FROM node:10.15.3 as source
COPY package.json ./
COPY package-lock.json ./
RUN node -v
RUN npm -v
RUN npm install
COPY . ./
RUN npm run build
FROM nginx:1.15.9
COPY default.template /etc/nginx/conf.d/default.conf
CMD ["nginx", "-g", "daemon off;"]
'npm run build' from the Dockerfile runs 'webpack --mode production', which attempts to start my app on localhost within Docker. Instead, GitLab is getting stuck on 'npm run build'.
This works locally with Docker but not on the GitLab CI/CD runner; it seems to hang there, possibly with an out-of-memory error, which I received earlier when it hung even longer.
Why is the GitLab runner getting stuck on 'webpack --mode production' (my npm run build command)? Should I only be using 'webpack -p'?
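If memory is the suspect, one experiment is to cap Node's heap for the webpack step so the build fails fast with a clear allocation error instead of hanging. A sketch against the Dockerfile above (the 1024 MB value is an assumption, to be tuned to the runner's limits):

# replace RUN npm run build in the node stage with an explicit heap cap
RUN node --max-old-space-size=1024 node_modules/webpack/bin/webpack.js --mode production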

JUnit report is not updated after running tests on GitLab

I run automated tests on GitLab CI with a GitLab runner. Everything works well except the reports: after the tests run, the JUnit reports are not updated, and they always show the same passed and failed tests, even though the console output shows a different number of passed tests.
GitLab script:
stages:
  - build
  - test

docker-build-master:
  image: docker:latest
  stage: build
  services:
    - docker:dind
  before_script:
    - docker login -u "xxx" -p "yyy" docker.io
  script:
    - docker build ./AutomaticTests --pull -t "dockerImage"
    - docker image tag dockerImage xxx/dockerImage:0.0.1
    - docker push "xxx/dockerImage:0.0.1"

test:
  image: docker:latest
  services:
    - docker:dind
  stage: test
  before_script:
    - docker login -u "xxx" -p "yyy" docker.io
  script:
    - docker run "xxx/dockerImage:0.0.1"
  artifacts:
    when: always
    paths:
      - AutomaticTests/bin/Release/artifacts/test-result.xml
    reports:
      junit:
        - AutomaticTests/bin/Release/artifacts/test-result.xml
Dockerfile:
FROM mcr.microsoft.com/dotnet/core/sdk:2.1
COPY /publish /AutomaticTests
WORKDIR /AutomaticTests
RUN apt-get update -y
RUN apt install unzip
RUN wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
RUN dpkg -i google-chrome-stable_current_amd64.deb; apt-get -fy install
RUN curl https://chromedriver.storage.googleapis.com/84.0.4147.30/chromedriver_linux64.zip -o /usr/local/bin/chromedriver
RUN unzip -o /usr/local/bin/chromedriver -d /AutomaticTests
RUN chmod 777 /AutomaticTests
CMD dotnet vstest /Parallel AutomaticTests.dll --TestAdapterPath:. --logger:"nunit;LogFilePath=..\artifacts\test-result.xml;MethodFormat=Class;FailureBodyFormat=Verbose"
I had a similar issue when using docker-in-docker for my GitLab pipeline. You run your tests inside your container; therefore, the test results are stored inside your "container-under-test". However, the gitlab-ci paths reference not the "container-under-test" but the outer container of your docker-in-docker environment.
You could try to copy the test results from the image directly to your outside container via something like this:
mkdir reports
docker cp $(docker create --rm DOCKER_IMAGE):/ABSOLUTE/FILEPATH/IN/DOCKER/CONTAINER reports/.
So, in your case, this would be something like the following (untested!):
...
test:
  image: docker:latest
  services:
    - docker:dind
  stage: test
  before_script:
    - docker login -u "xxx" -p "yyy" docker.io
  script:
    - mkdir reports
    - docker cp $(docker create --rm xxx/dockerImage:0.0.1):/AutomaticTests/bin/Release/artifacts/test-result.xml reports/.
  artifacts:
    when: always
    reports:
      junit:
        - reports/test-result.xml
...
Also, see this post for further explanation of the docker cp command: https://stackoverflow.com/a/59055906/6603778
Keep in mind, that docker cp requires an absolute path to the file you want to copy from your container.
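Note that test-result.xml is produced when the tests run, not when the image is built, so copying from a freshly created container only works if the file is baked into the image. An alternative sketch (untested, same paths as above) is to name the container that actually runs the tests and copy the results from it afterwards:

script:
  - mkdir reports
  # keep the job going even if tests fail, so results can still be copied
  - docker run --name tests "xxx/dockerImage:0.0.1" || true
  - docker cp tests:/AutomaticTests/bin/Release/artifacts/test-result.xml reports/
  - docker rm tests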

How to run a script inside the GitLab CI Docker executor?

I am trying to run a script inside the docker container.
I tried using docker cp, but it does not work as there are no running containers; docker container ls is empty.
Locally I am able to docker cp my-custom-image:my_script.py my_script.py, so the problem is not with my Docker image.
stage-name:
  image: "my-custom-image:0.1.0"
  stage: my-stage
  script:
    - python3 my_script.py
I finally found the solution. In the Dockerfile of my-custom-image, I added these lines:
COPY ./my_script.py /usr/local/bin/my_script.py
RUN chmod +x /usr/local/bin/my_script.py
I added a python3 shebang to my_script.py:
#!/usr/local/bin/python3
Then I am able to execute the script in the CI, since /usr/local/bin is on the PATH:
stage-name:
  image: "my-custom-image:0.1.0"
  stage: my-stage
  script:
    - my_script.py
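An equivalent variant that skips the chmod and the shebang is to invoke the interpreter explicitly with the script's full path inside the image (the COPY line in the Dockerfile is still needed):

stage-name:
  image: "my-custom-image:0.1.0"
  stage: my-stage
  script:
    - python3 /usr/local/bin/my_script.py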

How to conditionally update a CI/CD job image?

I just got into the (wonderful) world of CI/CD and have working pipelines. They are not optimal, though.
The application is a dockerized website:
- the source needs to be compiled by webpack and ends up in dist
- this dist directory is copied to a Docker container
- which is then remotely built and deployed
My current setup is quite naïve (I added some comments to show why I believe the various elements are needed/useful):
# I start with a small image
image: alpine

# before the job I need to have npm and docker
# the problem: I need one in one job, and the second one in the other
# I do not need both on both jobs but do not see how to split them
before_script:
  - apk add --update npm
  - apk add docker
  - npm install
  - npm install webpack -g

stages:
  - create_dist
  - build_container
  - stop_container
  - deploy_container

# the dist directory is preserved for the other job which will make use of it
create_dist:
  stage: create_dist
  script: npm run build
  artifacts:
    paths:
      - dist

# the following three jobs are remote and need to be daisy chained
build_container:
  stage: build_container
  script: docker -H tcp://eu13:51515 build -t widgets-sentinels .

stop_container:
  stage: stop_container
  script: docker -H tcp://eu13:51515 stop widgets-sentinels
  allow_failure: true

deploy_container:
  stage: deploy_container
  script: docker -H tcp://eu13:51515 run --rm -p 8880:8888 --name widgets-sentinels -d widgets-sentinels
This setup works, but npm and docker are installed in both jobs. This is not needed and slows down the deployment. Is there a way to state that such and such packages need to be added for specific jobs (and not globally to all of them)?
To make it clear: this is not a show-stopper (and in reality not likely to be an issue at all), but I fear that my approach to such job automation is incorrect.
You don't necessarily need to use the same image for all jobs. Let me show you one of our pipelines (partially) which does a similar thing, just with Composer for PHP instead of npm:
cache:
  paths:
    - vendor/

build:composer:
  image: registry.example.com/base-images/php-composer:latest # use our custom base image where only composer is installed, to build the dependencies
  stage: build dependencies
  script:
    - php composer.phar install --no-scripts
  artifacts:
    paths:
      - vendor/
  only:
    changes:
      - composer.{json,lock,phar} # build vendor folder only when relevant files change, otherwise use cached folder from s3 bucket (configured in runner config)

build:api:
  image: docker:18 # use docker image to build the actual application image
  stage: build api
  dependencies:
    - build:composer # reference dependency dir
  script:
    - docker login -u gitlab-ci-token -p "$CI_BUILD_TOKEN" "$CI_REGISTRY"
    - docker build -t $CI_REGISTRY_IMAGE:latest .
    - docker push $CI_REGISTRY_IMAGE:latest
The composer base image contains all necessary packages to run composer, so in your case you'd create a base image for npm:
FROM alpine:latest
RUN apk add --update npm
Then, use this image in your create_dist stage and use image: docker:latest as image in the other stages.
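Applied to the pipeline above, that would look roughly like this (the npm base image name is a placeholder for wherever you push it):

create_dist:
  stage: create_dist
  image: registry.example.com/base-images/npm:latest
  script: npm run build
  artifacts:
    paths:
      - dist

build_container:
  stage: build_container
  image: docker:latest
  script: docker -H tcp://eu13:51515 build -t widgets-sentinels .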
As well as referencing different images for different jobs, you may also try GitLab anchors, which provide reusable templates for the jobs:
.install-npm-template: &npm-template
  before_script:
    - apk add --update npm
    - npm install
    - npm install webpack -g

.install-docker-template: &docker-template
  before_script:
    - apk add docker

create_dist:
  <<: *npm-template
  stage: create_dist
  script: npm run build
  ...

deploy_container:
  <<: *docker-template
  stage: deploy_container
  ...
Try a multi-stage build: you can create intermediate temporary images and copy the generated content into the final Docker image. Also, npm should be part of a builder image: create one npm image and use it in the final Docker image's build as the builder stage.
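A minimal sketch of that idea, assuming a webpack build that emits to dist and nginx serving the result (image tags and paths are assumptions):

# builder stage: has node and npm, produces dist
FROM node:lts-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# final stage: ships only the built assets, no node or npm
FROM nginx:alpine
COPY --from=builder /app/dist /usr/share/nginx/html

With this, the CI job only needs a docker image to run the build; npm lives solely inside the builder stage.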
