I'm getting an error when running wdio tests in Docker via Jenkins, and I have no idea how to solve it :(
The same config runs successfully in my local environment (Windows + Docker).
This is the wdio config. I used the default dockerOptions.
wdio.conf
import { config as sharedConfig } from './wdio.shared.conf'

export const config: WebdriverIO.Config = {
    ...sharedConfig,
    ...{
        host: 'localhost',
        services: ['docker'],
        dockerLogs: './logs',
        dockerOptions: {
            image: 'selenium/standalone-chrome:4.1.2-20220131',
            healthCheck: {
                url: 'http://localhost:4444',
                maxRetries: 3,
                inspectInterval: 7000,
                startDelay: 15000
            },
            options: {
                p: ['4444:4444'],
                shmSize: '2g'
            }
        },
        capabilities: [{
            acceptInsecureCerts: true,
            browserName: 'chrome',
            browserVersion: 'latest',
            'goog:chromeOptions': {
                args: ['--verbose', '--headless', '--disable-gpu', 'window-size=1920,1800', '--no-sandbox', '--disable-dev-shm-usage', '--disable-extensions']
            }
        }]
    }
}
After that, I try to run the UI tests via Jenkins:
19:37:34 Run `npm audit` for details.
19:37:34 + npm run test:ci -- --spec ./test/specs/claim.BNB.spec.ts
19:37:34
19:37:34 > jasmine-boilerplate#1.0.0 test:ci
19:37:34 > wdio run wdio.ci.conf.ts
And I get an error.
Logs attached:
wdio.log
2022-02-04T16:59:20.725Z DEBUG #wdio/utils:initialiseServices: initialise service "docker" as NPM package
2022-02-04T16:59:20.758Z INFO #wdio/cli:launcher: Run onPrepare hook
2022-02-04T16:59:20.760Z DEBUG wdio-docker-service: Docker command: docker run --cidfile /home/jenkins/workspace/tests_e2e1_configure_CI_CD/e2e/selenium_standalone_chrome_latest.cid --rm -p 4444:4444 -p 5900:5900 --shm-size 2g selenium/standalone-chrome:latest
2022-02-04T16:59:20.769Z WARN wdio-docker-service: Connecting dockerEventsListener: 6283
2022-02-04T16:59:20.772Z INFO wdio-docker-service: Cleaning up CID files
2022-02-04T16:59:20.834Z INFO wdio-docker-service: Launching docker image 'selenium/standalone-chrome:latest'
2022-02-04T16:59:20.841Z INFO wdio-docker-service: Docker container is ready
2022-02-04T16:59:20.841Z DEBUG #wdio/cli:utils: Finished to run "onPrepare" hook in 82ms
2022-02-04T16:59:20.842Z INFO #wdio/cli:launcher: Run onWorkerStart hook
2022-02-04T16:59:20.843Z DEBUG #wdio/cli:utils: Finished to run "onWorkerStart" hook in 0ms
2022-02-04T16:59:20.843Z INFO #wdio/local-runner: Start worker 0-0 with arg: run,wdio.ci.conf.ts,--spec,./test/specs/claim.BNB.spec.ts
2022-02-04T16:59:22.034Z DEBUG #wdio/local-runner: Runner 0-0 finished with exit code 1
2022-02-04T16:59:22.035Z INFO #wdio/cli:launcher: Run onComplete hook
2022-02-04T16:59:22.036Z INFO wdio-docker-service: Shutting down running container
2022-02-04T16:59:32.372Z INFO wdio-docker-service: Cleaning up CID files
2022-02-04T16:59:32.373Z INFO wdio-docker-service: Docker container has stopped
2022-02-04T16:59:32.374Z WARN wdio-docker-service: Disconnecting dockerEventsListener: 6283
2022-02-04T16:59:32.374Z DEBUG #wdio/cli:utils: Finished to run "onComplete" hook in 10339ms
2022-02-04T16:59:32.430Z INFO #wdio/local-runner: Shutting down spawned worker
2022-02-04T16:59:32.681Z INFO #wdio/local-runner: Waiting for 0 to shut down gracefully
wdio-0-0.log
2022-02-04T16:59:21.223Z INFO #wdio/local-runner: Run worker command: run
2022-02-04T16:59:21.513Z DEBUG #wdio/config:utils: Found 'ts-node' package, auto-compiling TypeScript files
2022-02-04T16:59:21.714Z DEBUG #wdio/local-runner:utils: init remote session
2022-02-04T16:59:21.717Z DEBUG #wdio/utils:initialiseServices: initialise service "docker" as NPM package
2022-02-04T16:59:21.828Z DEBUG #wdio/local-runner:utils: init remote session
2022-02-04T16:59:21.840Z INFO devtools:puppeteer: Initiate new session using the DevTools protocol
2022-02-04T16:59:21.841Z INFO devtools: Launch Google Chrome with flags: --enable-automation --disable-popup-blocking --disable-extensions --disable-background-networking --disable-background-timer-throttling --disable-backgrounding-occluded-windows --disable-sync --metrics-recording-only --disable-default-apps --mute-audio --no-first-run --no-default-browser-check --disable-hang-monitor --disable-prompt-on-repost --disable-client-side-phishing-detection --password-store=basic --use-mock-keychain --disable-component-extensions-with-background-pages --disable-breakpad --disable-dev-shm-usage --disable-ipc-flooding-protection --disable-renderer-backgrounding --force-fieldtrials=*BackgroundTracing/default/ --enable-features=NetworkService,NetworkServiceInProcess --disable-features=site-per-process,TranslateUI,BlinkGenPropertyTrees --window-position=0,0 --window-size=1200,900 --headless --disable-gpu --window-size=1920,1800 --no-sandbox --disable-dev-shm-usage --disable-extensions
2022-02-04T16:59:21.911Z ERROR #wdio/runner: Error:
at new LauncherError (/home/jenkins/workspace/tests_e2e1_configure_CI_CD/e2e/node_modules/chrome-launcher/src/utils.ts:31:18)
at new ChromePathNotSetError (/home/jenkins/workspace/tests_e2e1_configure_CI_CD/e2e/node_modules/chrome-launcher/dist/utils.js:33:9)
at Object.linux (/home/jenkins/workspace/tests_e2e1_configure_CI_CD/e2e/node_modules/chrome-launcher/src/chrome-finder.ts:153:11)
at Function.getFirstInstallation (/home/jenkins/workspace/tests_e2e1_configure_CI_CD/e2e/node_modules/chrome-launcher/src/chrome-launcher.ts:182:61)
at Launcher.launch (/home/jenkins/workspace/tests_e2e1_configure_CI_CD/e2e/node_modules/chrome-launcher/src/chrome-launcher.ts:252:37)
at launch (/home/jenkins/workspace/tests_e2e1_configure_CI_CD/e2e/node_modules/chrome-launcher/src/chrome-launcher.ts:74:18)
at launchChrome (/home/jenkins/workspace/tests_e2e1_configure_CI_CD/e2e/node_modules/devtools/build/launcher.js:80:55)
at launch (/home/jenkins/workspace/tests_e2e1_configure_CI_CD/e2e/node_modules/devtools/build/launcher.js:179:16)
at Function.newSession (/home/jenkins/workspace/tests_e2e1_configure_CI_CD/e2e/node_modules/devtools/build/index.js:50:54)
at remote (/home/jenkins/workspace/tests_e2e1_configure_CI_CD/e2e/node_modules/webdriverio/build/index.js:67:43)
wdio-chromedriver.log
Starting ChromeDriver 97.0.4692.71 (adefa7837d02a07a604c1e6eff0b3a09422ab88d-refs/branch-heads/4692#{#1247}) on port 9515
Only local connections are allowed.
Please see https://chromedriver.chromium.org/security-considerations for suggestions on keeping ChromeDriver safe.
ChromeDriver was started successfully.
[1643987609.767][SEVERE]: bind() failed: Cannot assign requested address (99)
docker-log.txt
2022-02-04 16:59:21,482 INFO Included extra file "/etc/supervisor/conf.d/selenium.conf" during parsing
2022-02-04 16:59:21,484 INFO supervisord started with pid 7
Trapped SIGTERM/SIGINT/x so shutting down supervisord...
2022-02-04 16:59:22,487 INFO spawned: 'xvfb' with pid 9
2022-02-04 16:59:22,489 INFO spawned: 'vnc' with pid 10
2022-02-04 16:59:22,491 INFO spawned: 'novnc' with pid 11
2022-02-04 16:59:22,492 INFO spawned: 'selenium-standalone' with pid 12
2022-02-04 16:59:22,493 WARN received SIGTERM indicating exit request
2022-02-04 16:59:22,493 INFO waiting for xvfb, vnc, novnc, selenium-standalone to die
Setting up SE_NODE_GRID_URL...
2022-02-04 16:59:22,501 INFO success: xvfb entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
2022-02-04 16:59:22,501 INFO success: vnc entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
2022-02-04 16:59:22,501 INFO success: novnc entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
Selenium Grid Standalone configuration:
[network]
relax-checks = true
[node]
session-timeout = "300"
override-max-sessions = false
detect-drivers = false
max-sessions = 1
[[node.driver-configuration]]
display-name = "chrome"
stereotype = '{"browserName": "chrome", "browserVersion": "97.0", "platformName": "Linux"}'
max-sessions = 1
Starting Selenium Grid Standalone...
16:59:22.930 INFO [LoggingOptions.configureLogEncoding] - Using the system default encoding
16:59:22.939 INFO [OpenTelemetryTracer.createTracer] - Using OpenTelemetry for tracing
16:59:23.452 INFO [NodeOptions.getSessionFactories] - Detected 4 available processors
16:59:23.493 INFO [NodeOptions.report] - Adding chrome for {"browserVersion": "97.0","browserName": "chrome","platformName": "Linux","se:vncEnabled": true} 1 times
16:59:23.505 INFO [Node.<init>] - Binding additional locator mechanisms: name, id, relative
16:59:23.526 INFO [LocalDistributor.add] - Added node 150c2c05-2b08-4ba9-929a-45fef66bb193 at http://172.17.0.2:4444. Health check every 120s
16:59:23.540 INFO [GridModel.setAvailability] - Switching node 150c2c05-2b08-4ba9-929a-45fef66bb193 (uri: http://172.17.0.2:4444) from DOWN to UP
16:59:23.645 INFO [Standalone.execute] - Started Selenium Standalone 4.1.2 (revision 9a5a329c5a): http://172.17.0.2:4444
2022-02-04 16:59:26,091 INFO waiting for xvfb, vnc, novnc, selenium-standalone to die
2022-02-04 16:59:29,095 INFO waiting for xvfb, vnc, novnc, selenium-standalone to die
2022-02-04 16:59:32,097 INFO waiting for xvfb, vnc, novnc, selenium-standalone to die
I configured Monit to watch unicorn and restart it when memory or CPU rises above a certain limit.
However, when that happens, Monit doesn't restart unicorn. Here are the entries I found in the Monit log file:
[UTC Aug 11 20:15:41] error : 'unicorn_myapp' failed to restart (exit status 127) -- '/etc/init.d/unicorn_myapp restart': /etc/init.d/unicorn_myapp: 27: kill: No such process
Couldn't reload, starting 'cd /home/ubuntu_user/apps/myapp/current; bundle exec unicorn -D -c /home/ubuntu_user/apps/myapp/shared/config/unicorn.rb -E pr
[UTC Aug 11 20:16:11] error : 'unicorn_myapp' process is not running
[UTC Aug 11 20:16:11] info : 'unicorn_myapp' trying to restart
[UTC Aug 11 20:16:11] info : 'unicorn_myapp' restart: '/etc/init.d/unicorn_myapp restart'
[UTC Aug 11 20:16:42] error : 'unicorn_myapp' failed to restart (exit status 127) -- '/etc/init.d/unicorn_myapp restart': /etc/init.d/unicorn_myapp: 27: kill: No such process
Couldn't reload, starting 'cd /home/ubuntu_user/apps/myapp/current; bundle exec unicorn -D -c /home/ubuntu_user/apps/myapp/shared/config/unicorn.rb -E pr
[UTC Aug 11 20:17:12] error : 'unicorn_myapp' process is not running
[UTC Aug 11 20:17:12] info : 'unicorn_myapp' trying to restart
[UTC Aug 11 20:17:12] info : 'unicorn_myapp' restart: '/etc/init.d/unicorn_myapp restart'
[UTC Aug 11 20:17:42] error : 'unicorn_myapp' failed to restart (exit status 127) -- '/etc/init.d/unicorn_myapp restart': /etc/init.d/unicorn_myapp: 27: kill: No such process
Couldn't reload, starting 'cd /home/ubuntu_user/apps/myapp/current; bundle exec unicorn -D -c /home/ubuntu_user/apps/myapp/shared/config/unicorn.rb -E pr
[UTC Aug 11 20:18:12] error : 'unicorn_myapp' process is not running
Here is my monit configuration under /etc/monit/conf.d/
check process unicorn_myapp
  with pidfile /home/ubuntu_user/apps/myapp/current/tmp/pids/unicorn.pid
  start program = "/etc/init.d/unicorn_myapp start"
  stop program = "/etc/init.d/unicorn_myapp stop"
  restart program = "/etc/init.d/unicorn_myapp restart"
  if not exist then restart
  if mem is greater than 300.0 MB for 2 cycles then restart # eating up memory?
  if cpu is greater than 50% for 4 cycles then restart # send an email to admin
  if cpu is greater than 80% for 30 cycles then restart # hung process?
  group unicorn
It should restart unicorn when an error like this one, which breaks the app, appears in the unicorn.log file:
ERROR -- : Cannot allocate memory - fork(2) (Errno::ENOMEM)
When I run /etc/init.d/unicorn_myapp restart from a terminal, it works.
Monit mostly uses the restart program to start a program. I do not know why this is, but I have observed the same behavior.
Try commenting out the "restart" line. This should force Monit to run the start script, which should not try to kill an existing process.
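Applied to the configuration in the question, that advice amounts to a stanza like the following sketch (the paths are the asker's; removing the restart override is the only change, so Monit falls back to its default stop-then-start cycle):

```
check process unicorn_myapp
  with pidfile /home/ubuntu_user/apps/myapp/current/tmp/pids/unicorn.pid
  start program = "/etc/init.d/unicorn_myapp start"
  stop program = "/etc/init.d/unicorn_myapp stop"
  # restart program = "/etc/init.d/unicorn_myapp restart"
  if not exist then restart
  if mem is greater than 300.0 MB for 2 cycles then restart
  group unicorn
```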
You might also want to watch your log file, like:
CHECK FILE unicorn_log PATH log__file__path___change_me_or_the_world_will_flatten
  start program = "/etc/init.d/unicorn_myapp start"
  stop program = "/etc/init.d/unicorn_myapp stop"
  # No log file entry for one hour?
  if timestamp is older than 1 hour then restart
  # Memory allocation error?
  if content = "Cannot allocate memory" then restart
I have my Flask webapp up and running in Docker, and I'm trying to implement some unit tests but having trouble executing them. While my containers are up and running, I run the following:
docker-compose run app python3 manage.py test
to try to execute the test function in manage.py:
import unittest

from flask.cli import FlaskGroup

from myapp import app, db

cli = FlaskGroup(app)

@cli.command()
def recreate_db():
    db.drop_all()
    db.create_all()
    db.session.commit()

@cli.command()
def test():
    """Runs the tests without code coverage"""
    tests = unittest.TestLoader().discover('myapp/tests', pattern='test*.py')
    result = unittest.TextTestRunner(verbosity=2).run(tests)
    if result.wasSuccessful():
        return 0
    return 1

if __name__ == '__main__':
    cli()
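As an aside, the discover/run pattern inside test() can be exercised on its own, independent of Flask and Docker. This standalone sketch (run_suite is a hypothetical wrapper, not part of the app) shows the exit-status convention the command relies on:

```python
import os
import tempfile
import textwrap
import unittest


def run_suite(start_dir, pattern="test*.py"):
    """Discover and run tests; return 0 on success, 1 on failure,
    mirroring the convention used by manage.py's test() command."""
    tests = unittest.TestLoader().discover(start_dir, pattern=pattern)
    result = unittest.TextTestRunner(verbosity=2).run(tests)
    return 0 if result.wasSuccessful() else 1


# Demo: drop a tiny passing test module into a temp dir and run it.
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "test_demo.py"), "w") as f:
        f.write(textwrap.dedent("""\
            import unittest

            class Demo(unittest.TestCase):
                def test_ok(self):
                    self.assertEqual(1 + 1, 2)
            """))
    status = run_suite(d)

print(status)
```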
But since my Dockerfile runs start.sh, it just executes the Gunicorn startup and doesn't run my test. I only see the following in the console, with no trace of my test() function:
[2018-12-08 01:21:08 +0000] [1] [INFO] Starting gunicorn 19.9.0
[2018-12-08 01:21:08 +0000] [1] [DEBUG] Arbiter booted
[2018-12-08 01:21:08 +0000] [1] [INFO] Listening at: http://0.0.0.0:5000 (1)
[2018-12-08 01:21:08 +0000] [1] [INFO] Using worker: sync
[2018-12-08 01:21:08 +0000] [9] [INFO] Booting worker with pid: 9
[2018-12-08 01:21:08 +0000] [11] [INFO] Booting worker with pid: 11
[2018-12-08 01:21:08 +0000] [13] [INFO] Booting worker with pid: 13
[2018-12-08 01:21:08 +0000] [1] [DEBUG] 3 workers
start.sh:
#!/bin/sh

# Wait until MySQL is ready
while ! mysqladmin ping -h"db" -P"3306" --silent; do
    echo "Waiting for MySQL to be up..."
    sleep 1
done

source venv/bin/activate

# Start Gunicorn processes
echo Starting Gunicorn.
exec gunicorn -b 0.0.0.0:5000 wsgi --reload --chdir usb_app --timeout 9999 --workers 3 --access-logfile - --error-logfile - --capture-output --log-level debug
Does anyone know why, or how I can execute the test function in an existing container without having to start the Gunicorn workers again?
I presume start.sh is your entrypoint. If not, you can make it the entrypoint instead of putting it in CMD.
You can override the entrypoint script using the --entrypoint argument; I do it as below:
docker-compose run --rm --entrypoint "python3 manage.py test" app
I have a Docker image where I wish:
- to run a Passenger server and another daemon for monitoring the Passenger server.
- the container to exit as soon as either one of these two processes exits even once.
- to direct all logs to stdout.
In the config file, I have put an event listener (reference: https://serverfault.com/questions/760726/how-to-exit-all-supervisor-processes-if-one-exited-with-0-result/762406#762406) that captures some events for the passenger_monit program and executes a script, tt.sh.
I can see one extra instance of the passenger_monit program being spawned and reaching the FATAL state after a few tries. The other passenger_monit and passenger_server are fine, but the other passenger_monit's events never reach the event listener.
These are the scripts that are not working as expected.
This is the supervisord.conf
[supervisord]
nodaemon=true
stdout_logfile=/dev/fd/1
redirect_stderr=true
stdout_logfile_maxbytes=0
[unix_http_server]
file=%(here)s/supervisor.sock
[supervisorctl]
serverurl=unix://%(here)s/supervisor.sock
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[program:passenger_monit]
command=./script/passenger_monit.sh
process_name=passenger_monit
startretries=999
redirect_stderr=true
stdout_logfile=/dev/fd/1
stdout_logfile_maxbytes=0
autorestart=true
killasgroup=true
stopasgroup=true
numprocs=1
[program:passenger_server]
command=passenger start
startretries=999
redirect_stderr=true
stdout_logfile=/dev/fd/1
stdout_logfile_maxbytes=0
autorestart=true
killasgroup=true
stopasgroup=true
numprocs=1
[eventlistener:passenger_monit_exit]
command=./tt.sh
process_name=passenger_monit
events=PROCESS_STATE_STARTING,PROCESS_STATE_EXITED,PROCESS_STATE_FATAL
stdout_logfile=/dev/fd/1
stdout_logfile_maxbytes=0
This is the ./script/passenger_monit.sh
#!/bin/bash
set -x
cd /passenger/newrelic_passenger_plugin/
# without exec, this process would not be killed when supervisord exits
exec ./newrelic_passenger_agent
set +x
This is tt.sh
#!/bin/bash
echo "in tt!"
This is the command I run:
docker exec -it -u deploy 56bbbbe4352b supervisord
This is the output I get:
2016-08-26 19:47:29,369 INFO RPC interface 'supervisor' initialized
2016-08-26 19:47:29,369 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2016-08-26 19:47:29,370 INFO supervisord started with pid 2446
2016-08-26 19:47:30,374 INFO spawned: 'passenger_monit' with pid 2452
2016-08-26 19:47:30,377 INFO spawned: 'passenger_server' with pid 2453
in tt!
2016-08-26 19:47:30,392 INFO exited: passenger_monit (exit status 0; not expected)
=============== Phusion Passenger Standalone web server started ===============
PID file: /home/deploy/abc/tmp/pids/passenger.3000.pid
Log file: /home/deploy/abc/log/passenger.3000.log
Environment: development
Accessible via: http://0.0.0.0:3000/
You can stop Phusion Passenger Standalone by pressing Ctrl-C.
Problems? Check https://www.phusionpassenger.com/library/admin/standalone/troubleshooting/
===============================================================================
2016-08-26 19:47:31,565 INFO spawned: 'passenger_monit' with pid 2494
2016-08-26 19:47:31,566 INFO success: passenger_server entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
in tt!
2016-08-26 19:47:31,571 INFO exited: passenger_monit (exit status 0; not expected)
2016-08-26 19:47:33,576 INFO spawned: 'passenger_monit' with pid 2498
in tt!
2016-08-26 19:47:33,583 INFO exited: passenger_monit (exit status 0; not expected)
2016-08-26 19:47:36,588 INFO spawned: 'passenger_monit' with pid 2499
in tt!
2016-08-26 19:47:36,595 INFO exited: passenger_monit (exit status 0; not expected)
2016-08-26 19:47:37,597 INFO gave up: passenger_monit entered FATAL state, too many start retries too quickly
^C2016-08-26 19:47:47,730 WARN received SIGINT indicating exit request
2016-08-26 19:47:47,735 INFO waiting for passenger_server to die
Stopping web server... done
2016-08-26 19:47:47,839 INFO stopped: passenger_server (exit status 2)
This is the output for supervisorctl status
passenger_monit STOPPED Not started
passenger_monit_exit:passenger_monit FATAL Exited too quickly (process log may have details)
passenger_server RUNNING pid 2453, uptime 0:00:14
Output of supervisord -v
3.0b2
The following should work. Notice that the 10-second script will be killed after 5 seconds, as soon as the first program exits.
[supervisord]
loglevel=warn
nodaemon=true
[program:hello]
command=bash -c "echo waiting 5 seconds . . . && sleep 5"
autorestart=false
numprocs=1
startsecs=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
[program:world]
command=bash -c "echo waiting 10 seconds . . . && sleep 10"
autorestart=false
numprocs=1
startsecs=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
[eventlistener:processes]
command=bash -c "echo READY && read line && kill -SIGQUIT $PPID"
events=PROCESS_STATE_STOPPED,PROCESS_STATE_EXITED,PROCESS_STATE_FATAL
So I have a puma_start_staging.sh file. If I execute that file from a terminal:
. /home/deploy/puma_start_staging.sh
everything is OK. But Monit can't execute that file.
Here's my config:
check process puma_staging with pidfile /home/deploy/apps/staging/shared/tmp/pids/puma.pid
  start program = "/bin/bash -c '. /home/deploy/puma_start_staging.sh'"
  stop program = "/bin/bash -c 'kill `cat /home/deploy/apps/staging/shared/tmp/pids/puma.pid`'"
I get this in monit.log:
[EEST Aug 4 10:56:10] info : 'puma_staging' start: /bin/bash
[EEST Aug 4 10:56:40] error : 'puma_staging' failed to start
[EEST Aug 4 10:56:40] info : 'puma_staging' start action done
What am I doing wrong?