How can I run Selenium? - docker

I have a stage where I run tests for our app:
test-dev:
  stage: test
  image: selenium/standalone-chrome
  image: node:14.15.0-stretch
  script:
    - npm i
    - npm run prod
    - /opt/bin/start-selenium-standalone.sh
    - npx mocha tests/js/screenshots-* --timeout 50000
    - npx playwright test tests/js/pw_*
    - php artisan test
  artifacts:
    when: always
    name: $CI_COMMIT_SHA
    untracked: true
    paths:
      - tests/js/screenshots/
      - tests/js/screens/
      - tests/js/report/
  cache:
    untracked: true
    when: always
    paths:
      - tests/js/screenshots/
      - tests/js/screens/
      - tests/js/report/
      - storage/
      - vendor/ # composer packages
      - node_modules
      - public
But the system can't find start-selenium-standalone.sh; in the original Docker image it is in /opt/bin.
How can I launch it?

I wrote the following Dockerfile and use it:
Dockerfile
FROM ubuntu:latest
FROM node:14.15.0-stretch AS node
RUN npm i libnpx
FROM selenium/standalone-chrome:latest as sel
USER root
#selenium copy
#COPY --from=sel /opt/bin/start-selenium-standalone.sh /opt/bin/start-selenium-standalone.sh
#COPY --from=sel /etc/supervisor/conf.d/ /etc/supervisor/conf.d/
#COPY --from=sel /opt/bin/generate_config /opt/bin/generate_config
#COPY --from=sel /opt/selenium/config.toml /opt/selenium/config.toml
#COPY --from=sel /opt/selenium/browser_name /opt/selenium/browser_name
#ENV SE_RELAX_CHECKS true
EXPOSE 4444
EXPOSE 7900
#Nodejs
#COPY --from=node / /
COPY --from=node /usr/local/lib/ /usr/local/lib/
COPY --from=node /usr/local/bin/ /usr/local/bin/
COPY --from=node /usr/local/bin/ /usr/local/bin/
RUN ln -s /usr/local/lib/node_modules/npm/bin/npm-cli.js /usr/local/bin/npm
LABEL miekrif uzvar-selenium
CMD ["bash"]
In the following job:
test-dev:
  stage: test
  image: miekrif/uzavr-selenium:latest
  script:
    - nohup /opt/bin/entry_point.sh &
    - npx mocha tests/js/screenshots-* --timeout 50000
    - npx playwright test tests/js/pw_*
    - php artisan test
  artifacts:
    when: always
    name: $CI_COMMIT_SHA
    untracked: true
    paths:
      - tests/js/screenshots/
      - tests/js/screens/
      - tests/js/report/
  cache:
    untracked: true
    when: always
    paths:
      - tests/js/screenshots/
      - tests/js/screens/
      - tests/js/report/
      - storage/
      - vendor/ # composer packages
      - node_modules
      - public
  tags:
    - test_new_runner
  only:
    - ned_runner # branch stub so the runner does not start
    # - develop
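For reference, a common alternative to baking Selenium into the job image is to run selenium/standalone-chrome as a GitLab CI service next to a plain Node image and point the tests at its remote WebDriver URL. This is only a minimal sketch under that assumption, not the poster's setup (SELENIUM_REMOTE_URL is a hypothetical variable the test code would have to read):
test-dev:
  stage: test
  image: node:14.15.0-stretch
  services:
    - name: selenium/standalone-chrome
      alias: selenium
  variables:
    # hypothetical: the mocha/WebDriver setup must use this remote URL
    SELENIUM_REMOTE_URL: "http://selenium:4444/wd/hub"
  script:
    - npm i
    - npm run prod
    - npx mocha tests/js/screenshots-* --timeout 50000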

Related

The gitlab-ci job does not fail but the test does

I need a failed test in my pipeline to fail the job so that I can have control over it. The problem is that the tests run in a "docker in docker" setup, so the job doesn't fail: the outer container runs correctly, and the job doesn't receive an error code even if a test fails.
The script "docker:test" runs my test suite in a container, and my pipeline looks like this:
image: docker:dind # Alpine

stages:
  - install
  - test
  # - build
  - deploy

env:
  stage: install
  script:
    - chmod +x ./setup_env.sh
    - ./setup_env.sh
  artifacts:
    paths:
      - .env
    expire_in: 1 days

tests:
  stage: test
  before_script:
    - docker rm extractos-bancarios-test || true
  script:
    - apk add --update nodejs npm
    - npm run docker:test
    - docker cp extractos-bancarios-test:/usr/src/coverage .
    - docker cp extractos-bancarios-test:/usr/src/junit.xml .
  cache:
    paths:
      - coverage/
  artifacts:
    when: always
    paths:
      - coverage/
    reports:
      junit:
        - junit.xml

# docker image:
#   stage: build
#   script:
#     - npm run docker:build

remove .env:
  stage: deploy
  script:
    - rm .env

pages:
  stage: deploy
  script:
    - mkdir .public
    - cp -r coverage/* .public
    - mv .public public
  artifacts:
    paths:
      - public
  # only:
  #   - main
And my npm script is:
"docker:test": "npm i && tsc && docker build -t extractos-bancarios-test --target test . && docker run -d --name extractos-bancarios-test extractos-bancarios-test && docker logs -f extractos-bancarios-test >> logs.log",
I need to fail the pipeline when a test fails while using docker in docker.
I was able to solve the problem on my own, and I'm leaving it documented so that nobody wastes as much time as I did.
For the job to fail when the container inside the first container fails, I needed the script to return exit code 1 when there is a failure in the report. So I added a conditional with a grep in the script section of my .gitlab-ci.yml:
tests:
  stage: test
  before_script:
    - docker rm extractos-bancarios-test || true
    - rm junit.xml || true
    - rm -r coverage || true
  script:
    - apk add --update nodejs npm
    - npm run docker:test
    - docker cp extractos-bancarios-test:/usr/src/coverage .
    - docker cp extractos-bancarios-test:/usr/src/junit.xml .
    - if grep '<failure' junit.xml; then exit 1; else exit 0; fi
  cache:
    paths:
      - coverage/
  artifacts:
    when: always
    paths:
      - coverage/
    reports:
      junit:
        - junit.xml
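As a side note, another way to propagate the inner container's result (an assumption on my part, not part of the answer above) is to ask Docker for the container's own exit code instead of grepping the report:
  script:
    - npm run docker:test
    # docker wait blocks until the container stops and prints its exit code
    - exit_code=$(docker wait extractos-bancarios-test)
    - if [ "$exit_code" -ne 0 ]; then exit "$exit_code"; fi
This only works if the test runner inside the container exits non-zero on failure; the grep approach above does not rely on that.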

Docker error: The command returned a non-zero code: 137

I am using a DigitalOcean droplet to deploy my web app. I created a droplet and installed Docker version 20.10.17 (build 100c701) and docker-compose version 1.29.2 (build 5becea4c). I used the same web app on another droplet with the same configuration and everything was fine. I need to change droplets because I want to resize to a smaller one, and resizing down isn't allowed for filesystem reasons.
I have this Dockerfile:
# Install dependencies only when needed
FROM node:16-alpine AS deps
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
WORKDIR /app
COPY package.json yarn.lock ./
COPY .env.local .env.local
RUN yarn install --frozen-lockfile
# If using npm with a `package-lock.json` comment out above and use below instead
# COPY package.json package-lock.json ./
# RUN npm ci
# Rebuild the source code only when needed
FROM node:16-alpine AS builder
WORKDIR /app
COPY .env.local .env.local
COPY --from=deps /app/node_modules ./node_modules
COPY . .
# Next.js collects completely anonymous telemetry data about general usage.
# Learn more here: https://nextjs.org/telemetry
# Uncomment the following line in case you want to disable telemetry during the build.
# ENV NEXT_TELEMETRY_DISABLED 1
RUN yarn build
# If using npm comment out above and use below instead
# RUN npm run build
# Production image, copy all the files and run next
FROM node:16-alpine AS runner
WORKDIR /app
ENV NODE_ENV production
# Uncomment the following line in case you want to disable telemetry during runtime.
# ENV NEXT_TELEMETRY_DISABLED 1
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
# You only need to copy next.config.js if you are NOT using the default configuration
# COPY --from=builder /app/next.config.js ./
COPY --from=builder /app/public ./public
COPY .env.local .env.local
COPY --from=builder /app/package.json ./package.json
# Automatically leverage output traces to reduce image size
# https://nextjs.org/docs/advanced-features/output-file-tracing
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY .env.local .env.local
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
USER nextjs
EXPOSE 3000
ENV PORT 3000
This is my docker-compose.yml
services:
  traefik:
    image: "traefik:v2.7"
    container_name: "traefik"
    command:
      #- "--log.level=DEBUG"
      - "--api.insecure=true"
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--certificatesresolvers.myresolver.acme.email=my@email.com"
      - "--certificatesresolvers.myresolver.acme.storage=acme.json"
      - "--certificatesresolvers.myresolver.acme.httpchallenge=true"
      - "--certificatesresolvers.myresolver.acme.httpchallenge.entrypoint=web"
      - "--entrypoints.web.address=:80"
      - "--entrypoints.websecure.address=:443"
    ports:
      - "80:80"
      - "443:443"
      - "8080:8080"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
  appName:
    build: repo-name/.
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.casual_name.rule=Host(`casual_name.app`) || Host(`www.casual_name.app`)"
      - "traefik.http.routers.casual_name.entrypoints=websecure"
      - "traefik.http.routers.casual_name.tls=true"
      - "traefik.http.routers.casual_name.tls.certresolver=myresolver"
      # www -> https
      - "traefik.http.routers.http-catchallwww.rule=Host(`www.casual_name.app`)"
      - "traefik.http.routers.http-catchallwww.entrypoints=web"
      - "traefik.http.routers.http-catchallwww.middlewares=redirect-to-https@docker"
      - "traefik.http.middlewares.http-catchallwww.redirectscheme.scheme=https"
      # www -> non-www
      - "traefik.http.middlewares.www-redirect.redirectregex.regex=^https://www.casual_name.app/(.*)"
      - "traefik.http.middlewares.www-redirect.redirectregex.replacement=https://casual_name.app/$${1}"
      - "traefik.http.middlewares.www-redirect.redirectregex.permanent=true"
      - "traefik.http.routers.http-catchallwww.middlewares=www-redirect"
I am getting the error on the final step, [4/4] Building fresh packages....
The complete error log is: The command '/bin/sh -c yarn install --frozen-lockfile' returned a non-zero code: 137
Do you have any idea what's wrong?
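For what it's worth, exit code 137 means the process was killed with SIGKILL, which during yarn install on a small droplet usually points to the kernel OOM killer. A common workaround (an assumption, not something taken from this thread) is to add swap on the host before building:
# assumed commands for an Ubuntu-based droplet: create and enable a 2 GB swap file
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile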

I want to cache Docker's Yarn install to speed up the build with Azure Pipelines

The following is my configuration:
docker-compose.yml
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    command: "yarn start"
    ports:
      - "8080:8080"
    volumes:
      - ./app:/app
Dockerfile
FROM node:12.16.3
WORKDIR /app
COPY app/package.json .
COPY app/yarn.lock .
RUN yarn install
COPY ./app .
EXPOSE 8080
CMD yarn start
azure-pipelines-ci.yml
variables:
  YARN_CACHE_FOLDER: $(Pipeline.Workspace)/.yarn

steps:
  - task: Cache@2
    inputs:
      key: 'yarn | "$(Agent.OS)" | yarn.lock'
      restoreKeys: |
        yarn | "$(Agent.OS)"
        yarn
      path: $(YARN_CACHE_FOLDER)
  - script: |
      docker-compose up -d
However, the Cache task restores the cache, but the Docker build speed remains the same.
How can I make it work?
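The Cache task above only restores $(YARN_CACHE_FOLDER) on the build agent, while yarn install runs inside docker build and never sees that folder. One way to reuse a cache across image builds (a sketch, not the poster's setup; the target path is an assumption, check it with yarn cache dir) is a BuildKit cache mount in the Dockerfile:
# syntax=docker/dockerfile:1
FROM node:12.16.3
WORKDIR /app
COPY app/package.json app/yarn.lock ./
# BuildKit keeps this cache directory between builds (requires DOCKER_BUILDKIT=1)
RUN --mount=type=cache,target=/usr/local/share/.cache/yarn \
    yarn install --frozen-lockfile
COPY ./app .
EXPOSE 8080
CMD yarn start
When building through docker-compose 1.x, BuildKit also needs DOCKER_BUILDKIT=1 and COMPOSE_DOCKER_CLI_BUILD=1 set in the pipeline environment.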

How to deploy dockerized Django+uWSGI+Nginx app to Google App Engine using CircleCI

I have developed a dockerized Django web app using docker-compose. It runs fine locally.
The point is that when I define a CI pipeline, specifically CircleCI (I don't know how it works with any other alternative), to upload it to GCloud App Engine, the workflow runs fine, but visiting the URL returns nothing (500 error).
The code I have and run locally is the following. When I set up the CircleCI pipeline I have no clue how the app.yaml file interacts with it, or what the steps in .circleci/config.yml should be in order to run docker-compose. Any idea or resource I might use?
My Dockerfile:
FROM python:3.9-alpine
ENV PATH="/scripts:${PATH}"
COPY ./requirements.txt /requirements.txt
RUN apk add --update --no-cache --virtual .tmp gcc libc-dev linux-headers
RUN pip install -r /requirements.txt
RUN apk del .tmp
RUN mkdir -p /app
COPY ./app /app
WORKDIR /app
COPY ./scripts /scripts
#this allows for execute permission in all files inside /scripts/
RUN chmod +x /scripts/*
RUN mkdir -p /vol/web/media
RUN mkdir -p /vol/web/static
RUN adduser -D user
RUN chown -R user:user /vol
RUN chmod -R 755 /vol/web
USER user
CMD ["entrypoint.sh"]
My docker-compose file:
version: '3.9'

services:
  app:
    build:
      context: .
    volumes:
      - static_data:/vol/web
    environment:
      - SECRET_KEY=samplesecret123
      - ALLOWED_HOSTS=127.0.0.1,localhost
  proxy:
    build:
      context: ./proxy
    volumes:
      - static_data:/vol/static
    ports:
      - "8080:8080"
    depends_on:
      - app

volumes:
  static_data:
Nginx Dockerfile:
FROM nginxinc/nginx-unprivileged:1-alpine
COPY ./default.conf /etc/nginx/conf.d/default.conf
COPY ./uwsgi_params /etc/nginx/uwsgi_params
USER root
RUN mkdir -p /vol/static
RUN chmod 755 /vol/static
USER nginx
Nginx default.conf
server {
    listen 8080;

    location /static {
        alias /vol/static;
    }

    location / {
        uwsgi_pass app:8000;
        include /etc/nginx/uwsgi_params;
    }
}
entrypoint.sh
#!/bin/sh
set -e
python manage.py collectstatic --no-input
uwsgi --socket :8000 --master --enable-threads --module app.wsgi
.circleci/config.yml
version: 2.1
workflows:
  version: 2
  build_and_deploy_workflow:
    jobs:
      - build_and_deploy_job:
          filters:
            branches:
              only:
                - master
jobs:
  build_and_deploy_job:
    docker:
      - image: google/cloud-sdk ## based on Debian
    steps:
      - checkout
      - restore_cache:
          key: deps1-{{ .Branch }}-{{ checksum "requirements.txt" }}
      - run:
          name: Install requirements.txt
          command: |
            apt install -y python-pip
            python3 -m pip install -r requirements.txt
      - save_cache:
          key: deps1-{{ .Branch }}-{{ checksum "requirements.txt" }}
          paths:
            - "venv"
      - run:
          name: Install Docker Compose
          command: |
            curl -L https://github.com/docker/compose/releases/download/1.29.2/docker-compose-`uname -s`-`uname -m` > ~/docker-compose
            chmod +x ~/docker-compose
            apt-get install -y sudo
            sudo mv ~/docker-compose /usr/local/bin/docker-compose
      - setup_remote_docker
      - run:
          name: 'Collect static'
          command: |
            docker-compose -f docker-compose-deploy.yml up --build
            # docker-compose build
            # docker-compose run --rm app
            # docker-compose run --rm app sh -c "python manage.py collectstatic"
      - run:
          name: 'Deploy to app engine'
          command: |
            echo ${GCLOUD_SERVICE_KEY} > /tmp/sa_key.json | \
            gcloud auth activate-service-account --key-file=/tmp/sa_key.json
            rm /tmp/sa_key.json
            gcloud config set project [projectname]
            gcloud config set compute/region [region]
            gcloud app deploy app.yaml
app.yaml GCloud App Engine:
runtime: python39
#entrypoint: gunicorn -b :$PORT --chdir app/ app.wsgi:application
#entrypoint: gunicorn -b :$PORT app:wsgi
entrypoint: uwsgi --socket :8000 --master --enable-threads --module app.wsgi
handlers:
  - url: /static
    static_dir: static/
  - url: /.*
    script: auto
Here is a link that could help you, with an example of an app.yaml file for a Python 3 application:
https://cloud.google.com/appengine/docs/standard/python3/config/appref
Code example:
runtime: python39  # or another supported version

instance_class: F2

env_variables:
  BUCKET_NAME: "example-gcs-bucket"

handlers:
  # Matches requests to /images/... to files in static/images/...
  - url: /images
    static_dir: static/images

  - url: /.*
    secure: always
    redirect_http_response_code: 301
    script: auto
For Python 3, app.yaml must contain at least a runtime: python39 entry.
For a brief overview, see Defining runtime settings:
https://cloud.google.com/appengine/docs/standard/python3/configuring-your-app-with-app-yaml
To deploy to Google App Engine with CircleCI, I found this article that may help you with your main issue:
https://medium.com/#1555398769574/deploy-to-google-app-engine-with-circleci-or-github-actions-cb1bab15ca80
Code example:
.circleci/config.yaml
version: 2
jobs:
  build:
    working_directory: ~/workspace
    docker:
      - image: circleci/php:7.2-stretch-node-browsers
    steps:
      - checkout
      - run: |
          cp .env.example .env &&
          php artisan key:generate
      - persist_to_workspace:
          root: .
          paths:
            - .
  deploy:
    working_directory: ~/workspace
    docker:
      - image: google/cloud-sdk
    steps:
      - attach_workspace:
          at: .
      - run:
          name: Service Account Key
          command: echo ${GCLOUD_SERVICE_KEY} > ${HOME}/gcloud-service-key.json
      - run:
          name: Set gcloud command
          command: |
            gcloud auth activate-service-account --key-file=${HOME}/gcloud-service-key.json
            gcloud --quiet config set project ${GOOGLE_PROJECT_ID}
      - run:
          name: deploy to Google App Engine
          command: |
            gcloud app deploy app.yaml
workflows:
  version: 2
  build:
    jobs:
      - build
      - deploy:
          context: gcp
          requires:
            - build
          filters:
            branches:
              only: master
Here is additional documentation on how to create a CI/CD pipeline for Google App Engine with CircleCI 2.0:
https://runzhuoli.me/2018/12/21/ci-cd-gcp-gae-circleci.html
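One more aside: runtime: python39 is the App Engine standard environment, which runs the Python code directly and ignores the Docker setup. If the containers themselves should run on App Engine, the flexible environment with a custom runtime is the usual route; a minimal app.yaml sketch under that assumption (App Engine then builds the Dockerfile in the same directory):
# assumed app.yaml for the flexible environment with a custom (Docker) runtime
runtime: custom
env: flex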

Docker and Golang permission denied : "can't create log directory"

Hello,
I'm here today because I'm stuck on a permission problem in my application.
I have set up a "lumberjack" log system to keep track of everything that happens in my application and to rotate the logs. Locally I don't have any problem... but in production it's not the same.
My error appears here, when I try to access /var/log/app:
if err := os.MkdirAll(config.Directory, 0755); err != nil {
    log.Error().Err(err).Str("path", config.Directory).Msg("can't create log directory")
    return nil
}
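For context, this is roughly how such a lumberjack writer is usually wired up (a sketch with assumed names, using gopkg.in/natefinch/lumberjack.v2 and rs/zerolog; not the poster's actual code):
package main

import (
	"os"
	"path/filepath"

	"github.com/rs/zerolog"
	"gopkg.in/natefinch/lumberjack.v2"
)

// newLogger creates the log directory, then logs to a rotating file.
func newLogger(dir string) zerolog.Logger {
	// This MkdirAll is what fails in the container: /var/log/app does not
	// exist and the unprivileged user is not allowed to create it.
	if err := os.MkdirAll(dir, 0755); err != nil {
		return zerolog.New(os.Stderr).With().Timestamp().Logger()
	}
	w := &lumberjack.Logger{
		Filename:   filepath.Join(dir, "file.log"),
		MaxSize:    100, // megabytes
		MaxBackups: 5,
		MaxAge:     30, // days
	}
	return zerolog.New(w).With().Timestamp().Logger()
}

func main() {
	logger := newLogger("/var/log/app")
	logger.Info().Msg("logger initialised")
}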
A part of my docker-compose.yml (maybe a volumes entry is missing in this container?):
app:
  container_name: app
  image: app
  restart: always
  depends_on:
    - postgres
  networks:
    - app
    - web
  environment:
    - xxx_USERNAME=xxxxx
    - xxx_PASSWORD=xxxxx
    - xxx_HOST=xxxxx
    - xxx_NAME=xxxxx
    - xxx_DEBUG=1
    - xxx_CONSOLELOGGING_LOG=TRUE
    - xxx_ENCODELOGJSON_LOG=TRUE
    - xxx_FILELOGGING_LOG=FALSE
    - xxx_DIRECTORY_LOG=/var/log/app
    - xxx_FILENAME_LOG=file.log
    - xxx_MAXSIZE_LOG=100
    - xxx_MAXBACKUPS_LOG=5
    - xxx_MAXAGE_LOG=30
  ports:
    - 443:443
  labels:
    - #TRAEFIK CONFIG
Makefile build command for Docker:
CGO_ENABLED=0 go build -ldflags "-X main.gitCommit=$(GIT_COMMIT) -X main.buildDate=$(BUILD_DATE) -X main.version=$(VERSION) -linkmode external -extldflags '-static' -s -w" -a -installsuffix cgo -o web
Dockerfile for my production app:
############################
# STEP 1 build executable binary
############################
FROM golang:alpine as builder
# Install git + SSL ca certificates.
# Ca-certificates is required to call HTTPS endpoints.
RUN apk update && apk add --no-cache git ca-certificates gcc g++ make && update-ca-certificates
# Create appuser
RUN adduser -D -g '' appuser
WORKDIR /usr/src/app
COPY . .
RUN go mod download
RUN go mod verify
WORKDIR /usr/src/app/cmd/web
# Build the binary static link
RUN make docker
############################
# STEP 2 build a small image
############################
FROM scratch
WORKDIR /
# Import from builder.
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=builder /etc/passwd /etc/passwd
# Copy our static executable and resources
COPY --from=builder /usr/src/app/cmd/web/web /web
COPY --from=builder /usr/src/app/cmd/web/views /views
COPY --from=builder /usr/src/app/cmd/web/static /static
# Use an unprivileged user.
USER appuser
ENTRYPOINT ["./web"]
CMD [":443"]
Did you forget to map a volume to /var/log/?
app:
  container_name: app
  image: app
  restart: always
  depends_on:
    - postgres
  networks:
    - app
    - web
  environment:
    - xxx_USERNAME=xxxxx
    - xxx_PASSWORD=xxxxx
    - xxx_HOST=xxxxx
    - xxx_NAME=xxxxx
    - xxx_DEBUG=1
    - xxx_CONSOLELOGGING_LOG=TRUE
    - xxx_ENCODELOGJSON_LOG=TRUE
    - xxx_FILELOGGING_LOG=FALSE
    - xxx_DIRECTORY_LOG=/var/log/app
    - xxx_FILENAME_LOG=file.log
    - xxx_MAXSIZE_LOG=100
    - xxx_MAXBACKUPS_LOG=5
    - xxx_MAXAGE_LOG=30
  ports:
    - 443:443
  labels:
    - #TRAEFIK CONFIG
  volumes:
    - ./:/var/log/
See the last lines of the docker-compose snippet above.
A FROM scratch image is empty, and the provided instructions never create that directory, so add settings to map a directory into your container (or create it in the image).
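To avoid the bind mount entirely, one option (an illustration only, not the poster's code) is to pre-create the directory in the builder stage and copy it into the scratch image with the right owner:
# builder stage: create the log directory and hand it to appuser
RUN mkdir -p /var/log/app && chown appuser /var/log/app

# final (scratch) stage: copy the pre-owned directory; the user name resolves
# because /etc/passwd was copied from the builder earlier in this stage
COPY --from=builder --chown=appuser /var/log/app /var/log/app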
