Problem with Robot Framework test in Docker runner

I want to run my Android tests with Robot Framework and Appium on an Android emulator with Docker on a GitLab runner. After execution, I received these errors and the job failed:
$ docker run --network mobile_driver_test_automation_default --rm --mount "type=bind,src=${CI_PROJECT_DIR}/robot/Reports,dst=/app/reports" qa/robot:latest python3 -m robot -d /app/reports /app/Tests/Login.robot
Init Page Assertion Design Chaneg Path [ WARN ] Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fdd8c04ea00>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution')': /wd/hub/session
[ WARN ] Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fdd8c04ed60>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution')': /wd/hub/session
[ WARN ] Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fdd8c04eee0>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution')': /wd/hub/session
[ WARN ] Keyword 'Capture Page Screenshot' could not be run on failure: No application is open
| FAIL |
MaxRetryError: HTTPConnectionPool(host='appium_server', port=4723): Max retries exceeded with url: /wd/hub/session (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fdd8bff1880>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution'))
My docker-compose and .gitlab-ci.yml files are shown below:
docker-compose.yaml:
version: "2.2"
services:
  selenium_hub:
    image: selenium/hub:3.14.0-curium
    ports:
      - 4444:4444
  appium_server:
    image: appium/appium:v1.22.3-p1
    depends_on:
      - selenium_hub
    network_mode: "service:selenium_hub"
    volumes:
      - ./robot/apk:/root/tmp/apk/
    environment:
      - CONNECT_TO_GRID=true
      - SELENIUM_HOST=selenium_hub
      - RELAXED_SECURITY=true
  nexus_emulator:
    image: budtmo/docker-android-x86-10.0:v1.10-p7
    depends_on:
      - selenium_hub
      - appium_server
    volumes:
      - ./robot/apk:/root/tmp/apk/
    ports:
      - 6080:6080
      - 4723:4723
      - 5554:5554
      - 5555:5555
    environment:
      - DEVICE=Nexus 5
      - CONNECT_TO_GRID=true
      - APPIUM=true
      - SELENIUM_HOST=selenium_hub
      - AUTO_RECORD=true
.gitlab-ci.yml:
stages:
  - docker-login
  - build-robot
  - setup-test
  - run-test
  - docker-down
docker-login:
  stage: docker-login
  tags:
    - docker
  rules:
    - if: $CI_MERGE_REQUEST_ID
      when: never
    - if: $CI_COMMIT_REF_NAME =~ "main"
  script:
    - docker login -u $ARTIFACTORY_USER -p $ARTIFACTORY_PASSWORD $ARTIFACTORY_REGISTRY
build-robot:
  stage: build-robot
  tags:
    - docker
  rules:
    - if: $CI_MERGE_REQUEST_ID
      when: never
    - if: $CI_COMMIT_REF_NAME =~ "main"
  script:
    - docker build -t qa/robot:latest -f robot/Dockerfile robot
setup-test:
  stage: setup-test
  tags:
    - docker
  rules:
    - if: $CI_MERGE_REQUEST_ID
      when: never
    - if: $CI_COMMIT_REF_NAME =~ "main"
  script:
    - docker-compose -f docker-compose.yaml up -d
    - docker ps
run-login-test:
  stage: run-test
  tags:
    - docker
  rules:
    - if: $CI_MERGE_REQUEST_ID
      when: never
    - if: $CI_COMMIT_REF_NAME =~ "main"
  script:
    - docker ps
    - docker run --network mobile_driver_test_automation_default --rm --mount "type=bind,src=${CI_PROJECT_DIR}/robot/Reports,dst=/app/reports" qa/robot:latest python3 -m robot -d /app/reports /app/Tests/Login.robot
  artifacts:
    when: always
    paths:
      - robot/Reports/report.html
    expire_in: 3 days
docker-down:
  stage: docker-down
  tags:
    - docker
  rules:
    - if: $CI_MERGE_REQUEST_ID
      when: never
    - if: $CI_COMMIT_REF_NAME =~ "main"
  script:
    - docker-compose -f docker-compose.yaml down
Robot Framework script:
*** Settings ***
Library    AppiumLibrary
Library    Process

*** Test Cases ***
Init Page Assertion Design Chaneg Path
    Open Application    http://appium_server:4723/wd/hub    app=/app/robot/Tests/apk/driver.apk    autoGrantPermissions=true    platformName=Android    deviceName=Nexus 5    appPackage=packageName    appActivity=ActivityName
    Log    "app opened successfully"
And here is my Dockerfile:
FROM python:3.9
WORKDIR /app
COPY . .
RUN pip install -r requirements.txt

You're trying to configure your tests to talk to http://appium_server:4723/wd/hub. In the Compose configuration, though, you specify
services:
appium_server:
network_mode: "service:selenium_hub"
This network_mode: setting is unusual: it causes the appium_server container to not have its own network identity, but instead to run inside the selenium_hub container's network namespace. It's rarely needed, beyond accommodating programs that have incorrectly hard-coded localhost as the location of some other service; making the service location configurable through an environment variable would be a better solution.
I think either of two things would work:
1. Delete the network_mode: line. It's a strange configuration and hopefully unnecessary.
2. Configure your tests to use selenium_hub as the destination host name, because that's where this container actually is on the Docker internal network.
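As a sketch of the first option (assuming nothing else depends on the shared namespace), the Compose service would simply drop that line, and the tests keep using http://appium_server:4723/wd/hub:

```yaml
services:
  appium_server:
    image: appium/appium:v1.22.3-p1
    depends_on:
      - selenium_hub
    # network_mode: "service:selenium_hub" removed, so the container gets its
    # own network identity and the appium_server host name resolves again
    volumes:
      - ./robot/apk:/root/tmp/apk/
    environment:
      - CONNECT_TO_GRID=true
      - SELENIUM_HOST=selenium_hub
      - RELAXED_SECURITY=true
```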

Related

GitLab job failed with exit code 2 after execution of OWASP ZAP scanner

GitLab jobs are failing with exit code 2 after execution. No error details appear in the log file; it only shows ERROR: Job failed: exit code 2.
.gitlab-ci.yml file:
image: docker:latest
services:
  - name: docker:dind
    alias: thedockerhost
variables:
  DOCKER_HOST: tcp://thedockerhost:2375/
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""
stages:
  - test1
test1:
  stage: test1
  script:
    - docker run -v $(pwd):/zap/wrk/:rw --name zap2 owasp/zap2docker-stable zap-baseline.py -t http://www.example.com -r example.html
  artifacts:
    when: always
    paths:
      - example.html
From the documentation:
https://www.zaproxy.org/docs/docker/full-scan/
https://www.zaproxy.org/docs/docker/baseline-scan/
https://www.zaproxy.org/docs/docker/api-scan/
the -I option means "do not return failure on warning", and exit code 2 indicates the scan raised warnings.
If you want to understand the various exit states you can check the open-source code: https://github.com/zaproxy/zaproxy/tree/main/docker
For further information about using ZAP's packaged scans and Docker images refer to: https://www.zaproxy.org/docs/docker/
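If warnings are what is tripping the job, adding -I to the baseline command keeps the exit code at 0 on warnings while still writing them to the report. A sketch against the job above:

```yaml
test1:
  stage: test1
  script:
    # -I: do not return failure on warning; the report still lists the warnings
    - docker run -v $(pwd):/zap/wrk/:rw --name zap2 owasp/zap2docker-stable zap-baseline.py -I -t http://www.example.com -r example.html
  artifacts:
    when: always
    paths:
      - example.html
```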

Why does the database container with MariaDB fail during the GitLab CI/CD pipeline?

I'm trying to integrate a new job into an existing pipeline: build MariaDB and execute tests and migrations on it. I have a database job:
.db-job:
  image: mariadb:${MARIA_DB_IMAGE_VERSION}
  script:
    - echo "SELECT 'OK';" | mysql --user=root --password="$DATABASE_PASSWORD" --host="$jdbc:mysql://localhost" "$DATABASE_SCHEMA"
I have a stage for the database:
db:install:
  stage: db
  extends: .db-job
  services:
    - name: mariadb
      alias: db
  needs: [ ]
  script:
    - cd "$BACKEND_DIR"
    - pwd
  cache:
    policy: pull-push
db:migrate:
  stage: db
  extends: .maven-job
  script:
    - cd "$BACKEND_DIR"
    - pwd
    - mvn --version
    - mvn -Dflyway.user="$DATABASE_PASSWORD" -Dflyway.schemas="DATABASE_SCHEMA" flyway:migrate
  cache:
    policy: pull-push
So the database build passes, but in the logs I have an error:
Health check error:
start service container: Error response from daemon: Cannot link to a non running container
Service container logs:
2022-11-29T11:35:15.589696287Z 2022-11-29 11:35:15+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started.
2022-11-29T11:35:15.656319311Z 2022-11-29 11:35:15+00:00 [ERROR] [Entrypoint]: mariadbd failed while attempting to check config
How can I fix this problem?

Github Actions db service container not reachable

I have the following Github Actions pipeline:
name: Elixir CI
on:
  push:
    branches:
      - '*'
  pull_request:
    branches:
      - '*'
jobs:
  build:
    name: Build and test
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres
        env:
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: password
          POSTGRES_PORT: 5432
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
        ports:
          - 5432:5432
    steps:
      - uses: actions/checkout@v2
      - name: Docker Setup Buildx
        uses: docker/setup-buildx-action@v1.6.0
        with:
          install: true
      - name: building image
        env:
          DATABASE_HOST: postgres
          DATABASE_PORT: 5432
        run: |
          docker build --build-arg DATABASE_HOST=$DATABASE_HOST -t backend:test -f Dockerfile.ci .
I have a single build step for an Elixir app. The Dockerfile is a multi-stage one: the first stage runs the tests and builds the production app, and the second copies the application folder/tar.
DATABASE_HOST is the variable that my Elixir app looks for to connect to the test environment.
I need to run tests against Postgres, so I spawn a service container with it. I have executed the build both in a container and outside of it, but I always get the following error:
...
#19 195.9 14:10:58.624 [error] GenServer #PID<0.9316.0> terminating
#19 195.9 ** (DBConnection.ConnectionError) tcp connect (postgres:5432): non-existing domain - :nxdomain
#19 195.9 (db_connection 2.4.1) lib/db_connection/connection.ex:100: DBConnection.Connection.connect/2
#19 195.9 (connection 1.1.0) lib/connection.ex:622: Connection.enter_connect/5
#19 195.9 (stdlib 3.14.2.2) proc_lib.erl:226: :proc_lib.init_p_do_apply/3
#19 195.9 Last message: nil
...
So apparently postgres:5432 is not reachable. Am I missing something?
The problem is in DATABASE_HOST: postgres, I think.
The service container publishes port 5432 to the host, so for docker build you should use the host's IP address to reach that postgres service, like this:
- name: building image
  env:
    DATABASE_PORT: 5432
  run: |
    DATABASE_HOST=$(ifconfig -a eth0 | grep inet | grep -v 127.0.0.1 | grep -v inet6 | awk '{print $2}' | tr -d "addr:")
    docker build --build-arg DATABASE_HOST=$DATABASE_HOST -t backend:test -f Dockerfile.ci .
This first uses ifconfig to get the virtual machine's IP (the Docker host's IP), then passes it to docker build so the build container can reach postgres.
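On runners with Docker 20.10 or newer, an alternative sketch (my assumption, not part of the original answer) avoids parsing ifconfig output by mapping the special name host.docker.internal to the host gateway at build time:

```yaml
- name: building image
  env:
    DATABASE_PORT: 5432
  run: |
    # --add-host with the special host-gateway value requires Docker 20.10+;
    # host.docker.internal then resolves to the runner host during the build
    docker build --add-host=host.docker.internal:host-gateway --build-arg DATABASE_HOST=host.docker.internal -t backend:test -f Dockerfile.ci .
```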

Getting Error when creating a Windows Docker Container on Kaniko/Gitlab

I'm trying to create a Windows Docker container using Kaniko/Gitlab.
Here is the Error I see:
Resolving secrets
00:00
Preparing the "docker-windows" executor
Using Docker executor with image gcr.io/kaniko-project/executor:v1.6.0-debug ...
Pulling docker image gcr.io/kaniko-project/executor:v1.6.0-debug ...
WARNING: Failed to pull image with policy "always": no matching manifest for windows/amd64 10.0.17763 in the manifest list entries (docker.go:147:0s)
ERROR: Preparation failed: failed to pull image "gcr.io/kaniko-project/executor:v1.6.0-debug" with specified policies [always]: no matching manifest for windows/amd64 10.0.17763 in the manifest list entries (docker.go:147:0s)
The .gitlab-ci.yml file:
image:
  name: microsoft/iis:latest
  entrypoint: [""]
.build_variables: &build_variables
  TAG: "docker-base-windows-2019-std-core"
  AWS_ACCOUNT: "XXXXXXXXXX"
  AWS_REGION: "XXXXXXX"
  REGISTRY: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
.build_script: &build_script
  script:
    - echo "{\"credsStore\":\"ecr-login\"}" > /kaniko/.docker/config.json
    - /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination $REGISTRY:$TAG
stages:
  - build-docker-image
build_image_dev:
  variables:
    <<: *build_variables
  stage: build-docker-image
  image:
    name: gcr.io/kaniko-project/executor:v1.6.0-debug
    entrypoint: [""]
  tags: ['XXXXX']
  <<: *build_script
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
    - if: '$CI_COMMIT_BRANCH == "main"'
    - if: $CI_COMMIT_TAG
Dockerfile:
FROM Microsoft/iis:latest
CMD [ "cmd" ]
You have the error:
no matching manifest for windows/amd64
which means that no variant of that image exists for your platform. It happens, for instance, if you develop on Windows and your server is Linux.
This error implies your host machine's OS is not compatible with the OS of the Docker image you are trying to pull: the Kaniko executor image is only published for Linux, so it cannot run under the docker-windows executor shown in the log.
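One way to confirm the mismatch (a suggested check, not from the original answer) is to list which platforms the image's manifest actually provides:

```shell
# Lists the os/architecture pairs published for this tag; no windows entry
# will appear for the Kaniko executor
docker manifest inspect gcr.io/kaniko-project/executor:v1.6.0-debug | grep -A2 '"platform"'
```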

Prometheus (in Docker container) Cannot Scrape Target on Host

Prometheus is running inside a Docker container (Docker version 18.09.2, build 6247962; docker-compose.yml below), and the scrape target is on localhost:8000, which is created by a Python 3 script.
Error obtained for the failed scrape target (localhost:9090/targets) is
Get http://127.0.0.1:8000/metrics: dial tcp 127.0.0.1:8000: getsockopt: connection refused
Question: Why is Prometheus in the Docker container unable to scrape the target that is running on the host computer (Mac OS X)? How can we get Prometheus running in a Docker container to scrape a target running on the host?
Failed attempt: tried replacing, in docker-compose.yml,
networks:
  - back-tier
  - front-tier
with
network_mode: "host"
but then we are unable to access the Prometheus admin page at localhost:9090.
Unable to find a solution in similar questions:
Getting error "Get http://localhost:9443/metrics: dial tcp 127.0.0.1:9443: connect: connection refused"
docker-compose.yml
version: '3.3'
networks:
  front-tier:
  back-tier:
services:
  prometheus:
    image: prom/prometheus:v2.1.0
    volumes:
      - ./prometheus/prometheus:/etc/prometheus/
      - ./prometheus/prometheus_data:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/usr/share/prometheus/console_libraries'
      - '--web.console.templates=/usr/share/prometheus/consoles'
    ports:
      - 9090:9090
    networks:
      - back-tier
    restart: always
  grafana:
    image: grafana/grafana
    user: "104"
    depends_on:
      - prometheus
    ports:
      - 3000:3000
    volumes:
      - ./grafana/grafana_data:/var/lib/grafana
      - ./grafana/provisioning/:/etc/grafana/provisioning/
    env_file:
      - ./grafana/config.monitoring
    networks:
      - back-tier
      - front-tier
    restart: always
prometheus.yml
global:
  scrape_interval: 15s
  evaluation_interval: 15s
  external_labels:
    monitor: 'my-project'
scrape_configs:
  - job_name: 'prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9090']
  - job_name: 'rigs-portal'
    scrape_interval: 5s
    static_configs:
      - targets: ['127.0.0.1:8000']
Output at http://localhost:8000/metrics
# HELP python_gc_objects_collected_total Objects collected during gc
# TYPE python_gc_objects_collected_total counter
python_gc_objects_collected_total{generation="0"} 65.0
python_gc_objects_collected_total{generation="1"} 281.0
python_gc_objects_collected_total{generation="2"} 0.0
# HELP python_gc_objects_uncollectable_total Uncollectable object found during GC
# TYPE python_gc_objects_uncollectable_total counter
python_gc_objects_uncollectable_total{generation="0"} 0.0
python_gc_objects_uncollectable_total{generation="1"} 0.0
python_gc_objects_uncollectable_total{generation="2"} 0.0
# HELP python_gc_collections_total Number of times this generation was collected
# TYPE python_gc_collections_total counter
python_gc_collections_total{generation="0"} 37.0
python_gc_collections_total{generation="1"} 3.0
python_gc_collections_total{generation="2"} 0.0
# HELP python_info Python platform information
# TYPE python_info gauge
python_info{implementation="CPython",major="3",minor="7",patchlevel="3",version="3.7.3"} 1.0
# HELP request_processing_seconds Time spend processing request
# TYPE request_processing_seconds summary
request_processing_seconds_count 2545.0
request_processing_seconds_sum 1290.4869346540017
# TYPE request_processing_seconds_created gauge
request_processing_seconds_created 1.562364777766845e+09
# HELP my_inprorgress_requests CPU Load
# TYPE my_inprorgress_requests gauge
my_inprorgress_requests 65.0
Python 3 script
from prometheus_client import start_http_server, Summary, Gauge
import random
import time

# Create a metric to track time spent and requests made
REQUEST_TIME = Summary("request_processing_seconds", 'Time spend processing request')

#REQUEST_TIME.time()
def process_request(t):
    time.sleep(t)

if __name__ == "__main__":
    start_http_server(8000)
    g = Gauge('my_inprorgress_requests', 'CPU Load')
    g.set(65)
    while True:
        process_request(random.random())
While not a very common use case, you can indeed connect from your container to your host.
From https://docs.docker.com/docker-for-mac/networking/:
I want to connect from a container to a service on the host
The host has a changing IP address (or none if you have no network access). From 18.03 onwards our recommendation is to connect to the special DNS name host.docker.internal, which resolves to the internal IP address used by the host. This is for development purpose and will not work in a production environment outside of Docker Desktop for Mac.
For reference for people who might find this question through search, this is supported now as of Docker 20.10 and above. See the following link:
How to access host port from docker container
and:
https://github.com/docker/for-linux/issues/264#issuecomment-823528103
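On Linux hosts with Docker 20.10+, the same special name can be supplied manually when starting the container (a sketch, assuming the default bridge network):

```shell
# host-gateway maps host.docker.internal to the host's gateway IP,
# so scrape targets on the host become reachable from inside the container
docker run --rm --name prometheus -p 9090:9090 --add-host=host.docker.internal:host-gateway -v $PWD/prometheus.yml:/etc/prometheus/prometheus.yml -d prom/prometheus
```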
Below is an example of running Prometheus on Docker for macOS that has Prometheus scrape a simple Spring Boot application running on localhost:8080:
Bash
docker run --rm --name prometheus -p 9090:9090 -v /Users/YourName/conf/prometheus.yml:/etc/prometheus/prometheus.yml -d prom/prometheus
/Users/YourName/conf/prometheus.yml
global:
scrape_interval: 15s
evaluation_interval: 15s
scrape_configs:
- job_name: 'spring-boot'
metrics_path: '/actuator/prometheus'
scrape_interval: 5s
static_configs:
- targets: ['host.docker.internal:8080']
In this case it is the use of the special domain host.docker.internal instead of localhost that lets the host be resolved from the container on macOS, as the config file is mapped into the Prometheus container.
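A quick way to see which names actually resolve from a given environment (localhost, host.docker.internal, a Compose service name) is a small helper like the hypothetical one below; it separates DNS failures from cases where the name resolves but nothing is listening:

```python
import socket

def resolves(hostname):
    """Return the resolved IPv4 address, or None if the DNS lookup fails."""
    try:
        return socket.gethostbyname(hostname)
    except socket.gaierror:
        # Same failure class as "Temporary failure in name resolution"
        return None

if __name__ == "__main__":
    # host.docker.internal only resolves where Docker Desktop (or an
    # --add-host mapping) provides it; localhost resolves everywhere
    for name in ("localhost", "host.docker.internal"):
        print(name, "->", resolves(name))
```

Run inside the container, this distinguishes a wrong host name (None) from a firewall or port problem (an address comes back but the scrape still fails).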
Environment
Macbook Pro, Apple M1 Pro
Docker version 20.10.17, build 100c701
Prometheus 2.38
