I am using the logback-test.xml from https://github.com/karatelabs/karate#logging. The karate.log generated when the tests are executed locally has a lot more information, with the request payloads and responses printed in the log. However, when we execute the same tests from Jenkins, karate.log shows only limited information. Please refer to the logs below for comparison. Is there any configuration required to get the full information on Jenkins?
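For reference, my logback-test.xml follows the README example and looks roughly like this (the logger names, levels, and file path are approximated from the docs rather than copied verbatim from my project):
<configuration>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>
    <appender name="FILE" class="ch.qos.logback.core.FileAppender">
        <file>target/karate.log</file>
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>
    <!-- as in the README example; the com.intuit logger level controls how much karate detail is written -->
    <logger name="com.intuit" level="DEBUG"/>
    <root level="info">
        <appender-ref ref="STDOUT" />
        <appender-ref ref="FILE" />
    </root>
</configuration>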
Jenkins Execution: karate.log
19:42:35.931 [Test worker] INFO com.intuit.karate.Runner - waiting for parallel features to complete ...
19:42:36.157 [ForkJoinPool-1-worker-5] INFO com.intuit.karate - karate.env system property was: dev
19:42:36.157 [ForkJoinPool-1-worker-3] INFO com.intuit.karate - karate.env system property was: dev
19:42:36.241 [ForkJoinPool-1-worker-5] INFO com.intuit.karate - [print] Car Transaction Example
19:42:36.242 [ForkJoinPool-1-worker-3] INFO com.intuit.karate - [print] Lodging Example
19:42:36.350 [pool-1-thread-3] INFO com.intuit.karate.Runner - <<pass>> feature 3 of 3: classpath:products/example/test-cases/suite-b/us_lodging_creditcard.feature
19:42:36.350 [pool-1-thread-1] INFO com.intuit.karate.Runner - <<pass>> feature 1 of 3: classpath:products/example/test-cases/suite-a/us_car.feature
Local Execution: karate.log
11:46:32.633 [Test worker] INFO com.intuit.karate.Runner - waiting for parallel features to complete ...
11:46:32.810 [ForkJoinPool-1-worker-3] INFO com.intuit.karate - karate.env system property was: dev
11:46:32.886 [ForkJoinPool-1-worker-3] INFO com.intuit.karate - [print] Car Transaction Example
11:46:32.973 [pool-1-thread-1] INFO com.intuit.karate.Runner - <<pass>> feature 1 of 2: classpath:products/example/test-cases/suite-a/us_car.feature
11:46:35.186 [Test worker] INFO com.intuit.karate.Runner - waiting for parallel features to complete ...
11:46:35.350 [ForkJoinPool-1-worker-3] INFO com.intuit.karate - karate.env system property was: dev
11:46:35.416 [ForkJoinPool-1-worker-3] INFO com.intuit.karate - [print] I am a happy happy test
11:46:35.499 [pool-1-thread-1] INFO com.intuit.karate.Runner - <<pass>> feature 1 of 1: classpath:framework/example/test_framework.feature
11:46:36.994 [ForkJoinPool-2-worker-3] INFO com.intuit.karate - karate.env system property was: dev
11:46:36.996 [Test worker] INFO com.intuit.karate.Runner - waiting for parallel features to complete ...
11:46:37.148 [ForkJoinPool-2-worker-3] INFO com.intuit.karate - backend sample-mock.feature initialized
11:46:37.149 [ForkJoinPool-2-worker-3] INFO com.intuit.karate - all backends initialized
11:46:37.870 [ForkJoinPool-2-worker-3] INFO c.intuit.karate.netty.FeatureServer - server started - http://127.0.0.1:9000
11:46:37.880 [ForkJoinPool-2-worker-3] INFO com.intuit.karate - scenario called at line: 12 by tag: #addconsolerequest
11:46:37.956 [ForkJoinPool-2-worker-3] INFO com.intuit.karate - [print] {
"consoleName": "PlayStation 5",
"manufacturer": "Sony",
"releaseDate": "09/12/2020",
"price": 1150,
"currency": "USD",
"consoleInventoryCode": 21964477
}
11:46:38.383 [nioEventLoopGroup-3-1] INFO com.intuit.karate - [print] {
"serial_number": "7d241ad1-c048-48e2-9648-45add42d2b4e",
"console_name": "PlayStation 5",
"message": "A new game console has been added to the gamestore's inventory."
}
11:46:38.401 [ForkJoinPool-2-worker-3] INFO com.intuit.karate - scenario called at line: 10 by tag: #addoperation
11:46:38.413 [ForkJoinPool-2-worker-3] INFO com.intuit.karate - [print] responseStatus: 201
11:46:38.420 [ForkJoinPool-2-worker-3] INFO c.intuit.karate.netty.FeatureServer - stop: shutting down
11:46:38.425 [ForkJoinPool-2-worker-3] INFO c.intuit.karate.netty.FeatureServer - stop: shutdown complete
11:46:38.430 [pool-2-thread-1] INFO com.intuit.karate.Runner - <<pass>> feature 2 of 6: classpath:examples/sampleapp/tests/consoles/gamestore_api_console_tests.feature
11:46:38.457 [ForkJoinPool-2-worker-3] INFO com.intuit.karate - karate.env system property was: dev
11:46:38.494 [ForkJoinPool-2-worker-3] INFO com.intuit.karate - backend sample-mock.feature initialized
11:46:38.494 [ForkJoinPool-2-worker-3] INFO com.intuit.karate - all backends initialized
11:46:38.496 [ForkJoinPool-2-worker-3] INFO c.intuit.karate.netty.FeatureServer - server started - http://127.0.0.1:9000
11:46:38.504 [ForkJoinPool-2-worker-3] INFO com.intuit.karate - scenario called at line: 43 by tag: #addgamerequestps
11:46:38.550 [ForkJoinPool-2-worker-3] INFO com.intuit.karate - [print] {
"gameTitle": "JUMANJI",
"gamePlatform": "PlayStation",
"compatibleWith": {
"PlayStation5": true,
"Playstation4": true
},
"releaseDate": "10/15/2021",
"price": 45,
"currency": "USD",
"publisher": "Outright Games",
"rating": "Teen",
"gameInventoryCode": 545737836
}
11:46:38.606 [nioEventLoopGroup-5-1] INFO com.intuit.karate - [print] {
"confirmation_number": "45fd9c35-7d63-4e2e-9fc0-076b3bf7e94a",
"gameTitle": "JUMANJI",
"message": "A new game has been added to the playstation games inventory."
}
11:46:38.611 [ForkJoinPool-2-worker-3] INFO c.intuit.karate.netty.FeatureServer - stop: shutting down
11:46:38.613 [ForkJoinPool-2-worker-3] INFO c.intuit.karate.netty.FeatureServer - stop: shutdown complete
11:46:38.616 [pool-2-thread-1] INFO com.intuit.karate.Runner - <<pass>> feature 5 of 6: classpath:examples/sampleapp/tests/games/gamestore_playstation_tests.feature
11:46:38.636 [ForkJoinPool-2-worker-3] INFO com.intuit.karate - karate.env system property was: dev
11:46:38.665 [ForkJoinPool-2-worker-3] INFO com.intuit.karate - backend sample-mock.feature initialized
11:46:38.665 [ForkJoinPool-2-worker-3] INFO com.intuit.karate - all backends initialized
11:46:38.667 [ForkJoinPool-2-worker-3] INFO c.intuit.karate.netty.FeatureServer - server started - http://127.0.0.1:9000
11:46:38.674 [ForkJoinPool-2-worker-3] INFO com.intuit.karate - scenario called at line: 19 by tag: #addgamerequestxbox
11:46:38.711 [ForkJoinPool-2-worker-3] INFO com.intuit.karate - [print] {
"gameTitle": "FIFA 22",
"gamePlatform": "Xbox",
"compatibleWith": {
"XboxOne": true,
"XboxSeriesX": true,
"Xbox360": false
},
"releaseDate": "10/1/2021",
"price": 55,
"currency": "USD",
"publisher": "#(publsiher)",
"gameInventoryCode": 1520969023
}
11:46:38.764 [nioEventLoopGroup-7-1] INFO com.intuit.karate - [print] {
"confirmation_number": "38a9d1cb-e899-43a2-8c8f-56f4bc43d0ab",
"gameTitle": "FIFA 22",
"message": "A new game has been added to the Xbox games inventory."
}
11:46:38.768 [ForkJoinPool-2-worker-3] INFO c.intuit.karate.netty.FeatureServer - stop: shutting down
11:46:38.771 [ForkJoinPool-2-worker-3] INFO c.intuit.karate.netty.FeatureServer - stop: shutdown complete
11:46:38.782 [ForkJoinPool-2-worker-3] INFO com.intuit.karate - karate.env system property was: dev
11:46:38.811 [ForkJoinPool-2-worker-3] INFO com.intuit.karate - backend sample-mock.feature initialized
11:46:38.812 [ForkJoinPool-2-worker-3] INFO com.intuit.karate - all backends initialized
11:46:38.813 [ForkJoinPool-2-worker-3] INFO c.intuit.karate.netty.FeatureServer - server started - http://127.0.0.1:9000
11:46:38.841 [nioEventLoopGroup-9-1] INFO com.intuit.karate - [print] {
"game_id": "123",
"gameTitle": "FIFA 2022",
"releaseDate": "10/1/2021"
}
11:46:38.843 [ForkJoinPool-2-worker-3] INFO c.intuit.karate.netty.FeatureServer - stop: shutting down
11:46:38.846 [ForkJoinPool-2-worker-3] INFO c.intuit.karate.netty.FeatureServer - stop: shutdown complete
11:46:38.850 [ForkJoinPool-2-worker-3] INFO com.intuit.karate - karate.env system property was: dev
11:46:38.858 [ForkJoinPool-2-worker-3] INFO com.intuit.karate - backend sample-mock.feature initialized
11:46:38.858 [ForkJoinPool-2-worker-3] INFO com.intuit.karate - all backends initialized
11:46:38.860 [ForkJoinPool-2-worker-3] INFO c.intuit.karate.netty.FeatureServer - server started - http://127.0.0.1:9000
11:46:38.881 [nioEventLoopGroup-11-1] ERROR com.intuit.karate - no scenarios matched request
11:46:38.882 [nioEventLoopGroup-11-1] WARN com.intuit.karate - no matching scenarios in backend feature files
11:46:38.884 [ForkJoinPool-2-worker-3] INFO c.intuit.karate.netty.FeatureServer - stop: shutting down
11:46:38.887 [ForkJoinPool-2-worker-3] INFO c.intuit.karate.netty.FeatureServer - stop: shutdown complete
11:46:38.891 [pool-2-thread-1] INFO com.intuit.karate.Runner - <<pass>> feature 6 of 6: classpath:examples/sampleapp/tests/games/gamestore_xbox_tests.feature
I have looked hard for an answer, but haven't managed to find one, so I'm hoping someone here can help me understand this error and what is happening during the singularity pull command.
Here is the error:
Error executing process > 'QC_TRIM_READS (1)'
Caused by:
Failed to pull singularity image
command: singularity pull --name quay.io-biocontainers-sickle-trim-1.33--2.img.pulling.1632264509884 docker://quay.io/biocontainers/sickle-trim:1.33--2 > /dev/null
status : 127
message:
WARNING: pull for Docker Hub is not guaranteed to produce the
WARNING: same image on repeated pull. Use Singularity Registry
WARNING: (shub://) to pull exactly equivalent images.
/usr/bin/env: ‘python’: No such file or directory
ERROR: pulling container failed!
Here is the script (excuse the mess, I am just getting used to Nextflow):
#!/usr/bin/env Nextflow

nextflow.enable.dsl=2

params.ref_genome = "./data/GmaxFiskeby_678_v1.0.fa"
params.ref_annotation = "./data/GmaxFiskeby_678_v1.1.gene_exons.gff3"
params.intermediate_dir = "$workDir/intermediate/"

workflow {
    ref_genome_ch = Channel.fromPath("$params.ref_genome")
    ref_annotation_ch = Channel.fromPath("$params.ref_annotation")
    input_fastq_ch = Channel.fromPath("./data/*.fastq")

    ref_genome_ch.view()

    QC_TRIM_READS(input_fastq_ch)
    STAR_INDEX_GENOME(ref_genome_ch, ref_annotation_ch)
}

process GZIP_VERSION {
    echo true

    script:
    """
    gzip --version
    """
}

process UNZIP {
    publishDir "intermediate/"

    input:
    path file

    output:
    path "${file.baseName}"

    script:
    """
    gzip -dfk ${file}
    """
}

process QC_TRIM_READS {
    publishDir "intermediate/"
    container 'quay.io/biocontainers/sickle-trim:1.33--2'

    input:
    path fastqFile

    output:
    path "${fastqFile.baseName}_trimmed.${fastqFile.getExtension()}"

    script:
    """
    sickle se \\
        -f $fastqFile \\
        -t sanger \\
        -o ${fastqFile.baseName}_trimmed.${fastqFile.getExtension()} \\
        -q 35 \\
        -l 45
    """
}

process STAR_INDEX_GENOME {
    publishDir "intermediate/indexedGenome/"
    /*if (worflow.containerEngine == 'singularity'){
        container "https://depot.galaxyproject.org/singularity/mulled-v2-1fa26d1ce03c295fe2fdcf85831a92fbcbd7e8c2:59cdd445419f14abac76b31dd0d71217994cbcc9-0"
    } else {*/
    container "quay.io/biocontainers/mulled-v2-1fa26d1ce03c295fe2fdcf85831a92fbcbd7e8c2:59cdd445419f14abac76b31dd0d71217994cbcc9-0" //'quay.io/biocontainers/star:2.6.1d--0'
    //}

    input:
    path genome
    path gtf

    output:
    path "star" , emit: index

    script:
    """
    STAR \\
        --runMode genomeGenerate \\
        --genomeDir star/ \\
        --genomeFastaFiles ${genome} \\
        --sjdbGTFfile ${gtf} \\
        --sjdbGTFtagExonParentTranscript Parent \\
        --sjdbOverhang 100 \\
        --runThreadN 2
    """
}
Here is my configuration file:
//docker.enabled = false
singularity.enabled = true
singularity.autoMounts = true
I built my environment as a conda environment; here is the yml file:
name: nf-core
channels:
- conda-forge
- bioconda
- defaults
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=4.5=1_gnu
- attrs=21.2.0=pyhd3eb1b0_0
- brotlipy=0.7.0=py38h27cfd23_1003
- bzip2=1.0.8=h7b6447c_0
- c-ares=1.17.1=h27cfd23_0
- ca-certificates=2021.7.5=h06a4308_1
- cairo=1.14.12=h7636065_2
- cattrs=1.7.1=pyhd3eb1b0_0
- certifi=2021.5.30=py38h06a4308_0
- cffi=1.14.6=py38h400218f_0
- charset-normalizer=2.0.4=pyhd3eb1b0_0
- click=8.0.1=pyhd3eb1b0_0
- colorama=0.4.4=pyhd3eb1b0_0
- commonmark=0.9.1=pyhd3eb1b0_0
- coreutils=8.32=h7b6447c_0
- cryptography=3.4.7=py38hd23ed53_0
- curl=7.78.0=h1ccaba5_0
- expat=2.4.1=h2531618_2
- fontconfig=2.12.6=h49f89f6_0
- freetype=2.8=hab7d2ae_1
- fribidi=1.0.10=h7b6447c_0
- future=0.18.2=py38_1
- gettext=0.21.0=hf68c758_0
- git=2.32.0=pl5262hc120c5b_1
- gitdb=4.0.7=pyhd3eb1b0_0
- gitpython=3.1.18=pyhd3eb1b0_1
- glib=2.69.1=h5202010_0
- graphite2=1.3.14=h23475e2_0
- graphviz=2.40.1=h25d223c_0
- harfbuzz=1.7.6=h5f0a787_1
- hdf5=1.10.6=hb1b8bf9_0
- icu=58.2=he6710b0_3
- idna=3.2=pyhd3eb1b0_0
- importlib-metadata=4.8.1=py38h06a4308_0
- importlib_metadata=4.8.1=hd3eb1b0_0
- itsdangerous=2.0.1=pyhd3eb1b0_0
- jinja2=3.0.1=pyhd3eb1b0_0
- jpeg=9d=h7f8727e_0
- jsonschema=3.2.0=pyhd3eb1b0_2
- krb5=1.19.2=hac12032_0
- ld_impl_linux-64=2.35.1=h7274673_9
- libcurl=7.78.0=h0b77cf5_0
- libedit=3.1.20210714=h7f8727e_0
- libev=4.33=h7b6447c_0
- libffi=3.3=he6710b0_2
- libgcc-ng=9.3.0=h5101ec6_17
- libgfortran-ng=7.5.0=ha8ba4b0_17
- libgfortran4=7.5.0=ha8ba4b0_17
- libgomp=9.3.0=h5101ec6_17
- libiconv=1.15=h63c8f33_5
- libnghttp2=1.41.0=hf8bcb03_2
- libpng=1.6.37=hbc83047_0
- libssh2=1.9.0=h1ba5d50_1
- libstdcxx-ng=9.3.0=hd4cf53a_17
- libtiff=4.2.0=h85742a9_0
- libtool=2.4.6=h7b6447c_1005
- libwebp-base=1.2.0=h27cfd23_0
- libxcb=1.14=h7b6447c_0
- libxml2=2.9.12=h03d6c58_0
- lz4-c=1.9.3=h295c915_1
- markupsafe=2.0.1=py38h27cfd23_0
- ncbi-ngs-sdk=2.10.4=hdf6179e_0
- ncurses=6.2=he6710b0_1
- nextflow=21.04.0=h4a94de4_0
- nf-core=2.1=pyh5e36f6f_0
- openjdk=8.0.152=h7b6447c_3
- openssl=1.1.1l=h7f8727e_0
- ossuuid=1.6.2=hf484d3e_1000
- packaging=21.0=pyhd3eb1b0_0
- pango=1.42.0=h377f3fa_0
- pcre=8.45=h295c915_0
- pcre2=10.35=h14c3975_1
- perl=5.26.2=h14c3975_0
- perl-app-cpanminus=1.7044=pl526_1
- perl-business-isbn=3.004=pl526_0
- perl-business-isbn-data=20140910.003=pl526_0
- perl-carp=1.38=pl526_3
- perl-constant=1.33=pl526_1
- perl-data-dumper=2.173=pl526_0
- perl-encode=2.88=pl526_1
- perl-exporter=5.72=pl526_1
- perl-extutils-makemaker=7.36=pl526_1
- perl-file-path=2.16=pl526_0
- perl-file-temp=0.2304=pl526_2
- perl-mime-base64=3.15=pl526_1
- perl-parent=0.236=pl526_1
- perl-uri=1.76=pl526_0
- perl-xml-libxml=2.0132=pl526h7ec2d77_1
- perl-xml-namespacesupport=1.12=pl526_0
- perl-xml-sax=1.02=pl526_0
- perl-xml-sax-base=1.09=pl526_0
- perl-xsloader=0.24=pl526_0
- pip=21.2.2=py38h06a4308_0
- pixman=0.40.0=h7b6447c_0
- prompt-toolkit=3.0.17=pyhca03da5_0
- prompt_toolkit=3.0.17=hd3eb1b0_0
- pycparser=2.20=py_2
- pygments=2.10.0=pyhd3eb1b0_0
- pyopenssl=20.0.1=pyhd3eb1b0_1
- pyparsing=2.4.7=pyhd3eb1b0_0
- pyrsistent=0.17.3=py38h7b6447c_0
- pysocks=1.7.1=py38h06a4308_0
- python=3.8.11=h12debd9_0_cpython
- python_abi=3.8=2_cp38
- pyyaml=5.4.1=py38h27cfd23_1
- questionary=1.10.0=pyhd8ed1ab_0
- readline=8.1=h27cfd23_0
- requests=2.26.0=pyhd3eb1b0_0
- requests-cache=0.7.4=pyhd8ed1ab_0
- rich=10.10.0=py38h578d9bd_0
- setuptools=58.0.4=py38h06a4308_0
- singularity=2.4.2=0
- six=1.16.0=pyhd3eb1b0_0
- smmap=4.0.0=pyhd3eb1b0_0
- sqlite=3.36.0=hc218d9a_0
- sra-tools=2.11.0=pl5262h314213e_0
- tabulate=0.8.9=py38h06a4308_0
- tk=8.6.10=hbc83047_0
- typing-extensions=3.10.0.2=hd3eb1b0_0
- typing_extensions=3.10.0.2=pyh06a4308_0
- url-normalize=1.4.3=pyhd8ed1ab_0
- urllib3=1.26.6=pyhd3eb1b0_1
- wcwidth=0.2.5=pyhd3eb1b0_0
- wheel=0.37.0=pyhd3eb1b0_1
- xz=5.2.5=h7b6447c_0
- yaml=0.2.5=h7b6447c_0
- zipp=3.5.0=pyhd3eb1b0_0
- zlib=1.2.11=h7b6447c_3
- zstd=1.4.9=haebb681_0
prefix: /home/mkozubov/miniconda3/envs/nf-core
Here is the log file:
Sep-21 16:00:49.076 [main] DEBUG nextflow.cli.Launcher - $> nextflow run rnaseq.nf
Sep-21 16:00:49.318 [main] INFO nextflow.cli.CmdRun - N E X T F L O W ~ version 21.04.0
Sep-21 16:00:49.367 [main] INFO nextflow.cli.CmdRun - Launching `rnaseq.nf` [reverent_jepsen] - revision: 0fc00d31fc
Sep-21 16:00:49.414 [main] DEBUG nextflow.config.ConfigBuilder - Found config local: /mnt/c/Users/mkozubov/Desktop/nextflow_tutorial/rnaseq/nextflow.config
Sep-21 16:00:49.418 [main] DEBUG nextflow.config.ConfigBuilder - Parsing config file: /mnt/c/Users/mkozubov/Desktop/nextflow_tutorial/rnaseq/nextflow.config
Sep-21 16:00:49.506 [main] DEBUG nextflow.config.ConfigBuilder - Applying config profile: `standard`
Sep-21 16:00:50.238 [main] DEBUG nextflow.plugin.PluginsFacade - Setting up plugin manager > mode=prod; plugins-dir=/home/mkozubov/.nextflow/plugins
Sep-21 16:00:50.240 [main] DEBUG nextflow.plugin.PluginsFacade - Plugins default=[]
Sep-21 16:00:50.242 [main] DEBUG nextflow.plugin.PluginsFacade - Plugins local root: .nextflow/plr/empty
Sep-21 16:00:50.258 [main] INFO org.pf4j.DefaultPluginStatusProvider - Enabled plugins: []
Sep-21 16:00:50.262 [main] INFO org.pf4j.DefaultPluginStatusProvider - Disabled plugins: []
Sep-21 16:00:50.266 [main] INFO org.pf4j.DefaultPluginManager - PF4J version 3.4.1 in 'deployment' mode
Sep-21 16:00:50.289 [main] INFO org.pf4j.AbstractPluginManager - No plugins
Sep-21 16:00:50.366 [main] DEBUG nextflow.Session - Session uuid: 22a13149-e9f8-47cc-8f09-98a6b000a83a
Sep-21 16:00:50.367 [main] DEBUG nextflow.Session - Run name: reverent_jepsen
Sep-21 16:00:50.372 [main] DEBUG nextflow.Session - Executor pool size: 5
Sep-21 16:00:50.418 [main] DEBUG nextflow.cli.CmdRun -
Version: 21.04.0 build 5552
Created: 02-05-2021 16:22 UTC (09:22 PDT)
System: Linux 5.10.16.3-microsoft-standard-WSL2
Runtime: Groovy 3.0.7 on OpenJDK 64-Bit Server VM 1.8.0_152-release-1056-b12
Encoding: UTF-8 (UTF-8)
Process: 10590@DESKTOP-UJ90D1J [127.0.1.1]
CPUs: 5 - Mem: 1.9 GB (311.8 MB) - Swap: 1 GB (783.4 MB)
Sep-21 16:00:50.539 [main] DEBUG nextflow.file.FileHelper - Can't check if specified path is NFS (1): /mnt/c/Users/mkozubov/Desktop/nextflow_tutorial/rnaseq/work
v9fs
Sep-21 16:00:50.541 [main] DEBUG nextflow.Session - Work-dir: /mnt/c/Users/mkozubov/Desktop/nextflow_tutorial/rnaseq/work [null]
Sep-21 16:00:50.545 [main] DEBUG nextflow.Session - Script base path does not exist or is not a directory: /mnt/c/Users/mkozubov/Desktop/nextflow_tutorial/rnaseq/bin
Sep-21 16:00:50.585 [main] DEBUG nextflow.executor.ExecutorFactory - Extension executors providers=[]
Sep-21 16:00:50.616 [main] DEBUG nextflow.Session - Observer factory: DefaultObserverFactory
Sep-21 16:00:50.999 [main] DEBUG nextflow.Session - Session start invoked
Sep-21 16:00:51.461 [main] DEBUG nextflow.script.ScriptRunner - > Launching execution
Sep-21 16:00:51.511 [main] DEBUG nextflow.Session - Workflow process names [dsl2]: QC_TRIM_READS, UNZIP, STAR_INDEX_GENOME, GZIP_VERSION
Sep-21 16:00:51.643 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Sep-21 16:00:51.643 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Sep-21 16:00:51.651 [main] DEBUG nextflow.executor.Executor - [warm up] executor > local
Sep-21 16:00:51.656 [main] DEBUG n.processor.LocalPollingMonitor - Creating local task monitor for executor 'local' > cpus=5; memory=1.9 GB; capacity=5; pollInterval=100ms; dumpInterval=5m
Sep-21 16:00:51.868 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Sep-21 16:00:51.869 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Sep-21 16:00:51.904 [main] DEBUG nextflow.Session - Ignite dataflow network (5)
Sep-21 16:00:51.963 [main] DEBUG nextflow.processor.TaskProcessor - Starting process > QC_TRIM_READS
Sep-21 16:00:51.965 [main] DEBUG nextflow.processor.TaskProcessor - Starting process > STAR_INDEX_GENOME
Sep-21 16:00:51.966 [main] DEBUG nextflow.script.ScriptRunner - > Await termination
Sep-21 16:00:51.968 [main] DEBUG nextflow.Session - Session await
Sep-21 16:00:51.969 [PathVisitor-3] DEBUG nextflow.file.PathVisitor - files for syntax: glob; folder: ./data/; pattern: *.fastq; options: [:]
Sep-21 16:00:52.300 [Actor Thread 8] WARN nextflow.container.SingularityCache - Singularity cache directory has not been defined -- Remote image will be stored in the path: /mnt/c/Users/mkozubov/Desktop/nextflow_tutorial/rnaseq/work/singularity -- Use env variable NXF_SINGULARITY_CACHEDIR to specify a different location
Sep-21 16:00:52.300 [Actor Thread 8] INFO nextflow.container.SingularityCache - Pulling Singularity image docker://quay.io/biocontainers/sickle-trim:1.33--2 [cache /mnt/c/Users/mkozubov/Desktop/nextflow_tutorial/rnaseq/work/singularity/quay.io-biocontainers-sickle-trim-1.33--2.img]
Sep-21 16:00:52.300 [Actor Thread 7] INFO nextflow.container.SingularityCache - Pulling Singularity image docker://quay.io/biocontainers/mulled-v2-1fa26d1ce03c295fe2fdcf85831a92fbcbd7e8c2:59cdd445419f14abac76b31dd0d71217994cbcc9-0 [cache /mnt/c/Users/mkozubov/Desktop/nextflow_tutorial/rnaseq/work/singularity/quay.io-biocontainers-mulled-v2-1fa26d1ce03c295fe2fdcf85831a92fbcbd7e8c2-59cdd445419f14abac76b31dd0d71217994cbcc9-0.img]
Sep-21 16:00:52.433 [Actor Thread 5] ERROR nextflow.processor.TaskProcessor - Error executing process > 'QC_TRIM_READS (1)'
Caused by:
Failed to pull singularity image
command: singularity pull --name quay.io-biocontainers-sickle-trim-1.33--2.img.pulling.1632265252300 docker://quay.io/biocontainers/sickle-trim:1.33--2 > /dev/null
status : 127
message:
WARNING: pull for Docker Hub is not guaranteed to produce the
WARNING: same image on repeated pull. Use Singularity Registry
WARNING: (shub://) to pull exactly equivalent images.
/usr/bin/env: ‘python’: No such file or directory
ERROR: pulling container failed!
java.lang.IllegalStateException: java.lang.IllegalStateException: Failed to pull singularity image
command: singularity pull --name quay.io-biocontainers-sickle-trim-1.33--2.img.pulling.1632265252300 docker://quay.io/biocontainers/sickle-trim:1.33--2 > /dev/null
status : 127
message:
WARNING: pull for Docker Hub is not guaranteed to produce the
WARNING: same image on repeated pull. Use Singularity Registry
WARNING: (shub://) to pull exactly equivalent images.
/usr/bin/env: ‘python’: No such file or directory
ERROR: pulling container failed!
at nextflow.container.SingularityCache.getCachePathFor(SingularityCache.groovy:304)
at nextflow.container.SingularityCache$getCachePathFor.call(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:47)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:125)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:139)
at nextflow.container.ContainerHandler.createSingularityCache(ContainerHandler.groovy:85)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.codehaus.groovy.runtime.callsite.PlainObjectMetaMethodSite.doInvoke(PlainObjectMetaMethodSite.java:43)
at org.codehaus.groovy.runtime.callsite.PogoMetaMethodSite$PogoCachedMethodSiteNoUnwrapNoCoerce.invoke(PogoMetaMethodSite.java:193)
at org.codehaus.groovy.runtime.callsite.PogoMetaMethodSite.callCurrent(PogoMetaMethodSite.java:61)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:194)
at nextflow.container.ContainerHandler.normalizeImageName(ContainerHandler.groovy:68)
at nextflow.container.ContainerHandler$normalizeImageName.call(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:47)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:125)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:139)
at nextflow.processor.TaskRun.getContainer(TaskRun.groovy:587)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:107)
at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:323)
at org.codehaus.groovy.runtime.metaclass.MethodMetaProperty$GetBeanMethodMetaProperty.getProperty(MethodMetaProperty.java:76)
at org.codehaus.groovy.runtime.callsite.GetEffectivePogoPropertySite.getProperty(GetEffectivePogoPropertySite.java:85)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callGroovyObjectGetProperty(AbstractCallSite.java:341)
at nextflow.processor.TaskProcessor.createTaskHashKey(TaskProcessor.groovy:1939)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.codehaus.groovy.runtime.callsite.PlainObjectMetaMethodSite.doInvoke(PlainObjectMetaMethodSite.java:43)
at org.codehaus.groovy.runtime.callsite.PogoMetaMethodSite$PogoCachedMethodSiteNoUnwrapNoCoerce.invoke(PogoMetaMethodSite.java:193)
at org.codehaus.groovy.runtime.callsite.PogoMetaMethodSite.callCurrent(PogoMetaMethodSite.java:61)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCallCurrent(CallSiteArray.java:51)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:171)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:185)
at nextflow.processor.TaskProcessor.invokeTask(TaskProcessor.groovy:591)
at nextflow.processor.InvokeTaskAdapter.call(InvokeTaskAdapter.groovy:59)
at groovyx.gpars.dataflow.operator.DataflowOperatorActor.startTask(DataflowOperatorActor.java:120)
at groovyx.gpars.dataflow.operator.ForkingDataflowOperatorActor.access$001(ForkingDataflowOperatorActor.java:35)
at groovyx.gpars.dataflow.operator.ForkingDataflowOperatorActor$1.run(ForkingDataflowOperatorActor.java:58)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalStateException: Failed to pull singularity image
command: singularity pull --name quay.io-biocontainers-sickle-trim-1.33--2.img.pulling.1632265252300 docker://quay.io/biocontainers/sickle-trim:1.33--2 > /dev/null
status : 127
message:
WARNING: pull for Docker Hub is not guaranteed to produce the
WARNING: same image on repeated pull. Use Singularity Registry
WARNING: (shub://) to pull exactly equivalent images.
/usr/bin/env: ‘python’: No such file or directory
ERROR: pulling container failed!
at nextflow.container.SingularityCache.runCommand(SingularityCache.groovy:256)
at nextflow.container.SingularityCache.downloadSingularityImage0(SingularityCache.groovy:223)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:107)
at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:323)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1268)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1035)
at org.codehaus.groovy.runtime.InvokerHelper.invokePogoMethod(InvokerHelper.java:1029)
at org.codehaus.groovy.runtime.InvokerHelper.invokeMethod(InvokerHelper.java:1012)
at org.codehaus.groovy.runtime.InvokerHelper.invokeMethodSafe(InvokerHelper.java:101)
at nextflow.container.SingularityCache$_downloadSingularityImage_closure1.doCall(SingularityCache.groovy:191)
at nextflow.container.SingularityCache$_downloadSingularityImage_closure1.call(SingularityCache.groovy)
at nextflow.file.FileMutex.lock(FileMutex.groovy:107)
at nextflow.container.SingularityCache.downloadSingularityImage(SingularityCache.groovy:191)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:107)
at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:323)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1268)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1035)
at org.codehaus.groovy.runtime.InvokerHelper.invokePogoMethod(InvokerHelper.java:1029)
at org.codehaus.groovy.runtime.InvokerHelper.invokeMethod(InvokerHelper.java:1012)
at org.codehaus.groovy.runtime.InvokerHelper.invokeMethodSafe(InvokerHelper.java:101)
at nextflow.container.SingularityCache$_getLazyImagePath_closure2.doCall(SingularityCache.groovy:281)
at nextflow.container.SingularityCache$_getLazyImagePath_closure2.call(SingularityCache.groovy)
at groovyx.gpars.dataflow.LazyDataflowVariable$1.run(LazyDataflowVariable.java:70)
... 3 common frames omitted
Sep-21 16:00:52.443 [Actor Thread 5] DEBUG nextflow.Session - Session aborted -- Cause: java.lang.IllegalStateException: Failed to pull singularity image
command: singularity pull --name quay.io-biocontainers-sickle-trim-1.33--2.img.pulling.1632265252300 docker://quay.io/biocontainers/sickle-trim:1.33--2 > /dev/null
status : 127
message:
WARNING: pull for Docker Hub is not guaranteed to produce the
WARNING: same image on repeated pull. Use Singularity Registry
WARNING: (shub://) to pull exactly equivalent images.
/usr/bin/env: ‘python’: No such file or directory
ERROR: pulling container failed!
Sep-21 16:00:52.494 [Actor Thread 5] DEBUG nextflow.Session - The following nodes are still active:
[process] QC_TRIM_READS
status=ACTIVE
port 0: (queue) closed; channel: fastqFile
port 1: (cntrl) - ; channel: $
Sep-21 16:00:52.507 [main] DEBUG nextflow.Session - Session await > all process finished
Sep-21 16:00:52.510 [main] DEBUG nextflow.Session - Session await > all barriers passed
Sep-21 16:00:52.521 [main] DEBUG nextflow.trace.WorkflowStatsObserver - Workflow completed > WorkflowStats[succeededCount=0; failedCount=0; ignoredCount=0; cachedCount=0; pendingCount=0; submittedCount=0; runningCount=0; retriesCount=0; abortedCount=0; succeedDuration=0ms; failedDuration=0ms; cachedDuration=0ms;loadCpus=0; loadMemory=0; peakRunning=0; peakCpus=0; peakMemory=0; ]
Sep-21 16:00:52.685 [main] DEBUG nextflow.CacheDB - Closing CacheDB done
Sep-21 16:00:52.752 [main] DEBUG nextflow.script.ScriptRunner - > Execution complete -- Goodbye
I have been using nf-core's rnaseq pipeline to guide me a bit: https://github.com/nf-core/rnaseq
If it helps, here is the pipeline I am trying to automate: https://bioinformatics.uconn.edu/resources-and-events/tutorials-2/rna-seq-tutorial-with-reference-genome/#
My computer runs Windows 10, and I have enabled WSL2 and installed Ubuntu.
I am fairly new to Docker, Singularity, and Nextflow, so I am hoping someone can explain the error. I don't even understand why Python is being mentioned. Is the issue that Singularity cannot pull from Quay.io? I am a bit lost and would appreciate a nudge in the right direction.
Also, the reason I am trying to get Singularity to work is that STAR immediately gives me a segmentation fault on my local machine (I'm assuming I run out of memory), and I would like to test this pipeline on our HPC (where I don't have root privileges).
You can ignore the Singularity warnings, but not the errors. The problem looks to be that you're missing python in your environment:
/usr/bin/env: ‘python’: No such file or directory
ERROR: pulling container failed!
You need to make sure you have Python 3 installed. If you do, you should be able to confirm it with:
/usr/bin/python --version
You didn't mention which version of Ubuntu you are using, but if you have Ubuntu 20.04 then Python 3 should already be pre-installed. If that is the case and /usr/bin/python3 --version works (note the '3') while the command above doesn't, try:
sudo apt-get install python-is-python3
This installs a symlink that points the /usr/bin/python interpreter at the current default python3.
I'm currently researching the resilience4j library and for some reason the following code doesn't work as expected:
@Test
public void testRateLimiterProjectReactor()
{
    // The configuration below will allow 2 requests per second and a "timeout" of 2 seconds.
    RateLimiterConfig config = RateLimiterConfig.custom()
            .limitForPeriod(2)
            .limitRefreshPeriod(Duration.ofSeconds(1))
            .timeoutDuration(Duration.ofSeconds(2))
            .build();

    // Step 2.
    // Create a RateLimiter and use it.
    RateLimiterRegistry registry = RateLimiterRegistry.of(config);
    RateLimiter rateLimiter = registry.rateLimiter("myReactorServiceNameLimiter");

    // Step 3.
    Flux<Integer> flux = Flux.from(Flux.range(0, 10))
            .transformDeferred(RateLimiterOperator.of(rateLimiter))
            .log();

    StepVerifier.create(flux)
            .expectNextCount(10)
            .expectComplete()
            .verify();
}
According to the official examples here and here, this should limit the request() to 2 elements per second. However, the logs show that all of the elements are fetched immediately:
15:08:24.587 [main] DEBUG reactor.util.Loggers - Using Slf4j logging framework
15:08:24.619 [main] INFO reactor.Flux.Defer.1 - onSubscribe(RateLimiterSubscriber)
15:08:24.624 [main] INFO reactor.Flux.Defer.1 - request(unbounded)
15:08:24.626 [main] INFO reactor.Flux.Defer.1 - onNext(0)
15:08:24.626 [main] INFO reactor.Flux.Defer.1 - onNext(1)
15:08:24.626 [main] INFO reactor.Flux.Defer.1 - onNext(2)
15:08:24.626 [main] INFO reactor.Flux.Defer.1 - onNext(3)
15:08:24.626 [main] INFO reactor.Flux.Defer.1 - onNext(4)
15:08:24.626 [main] INFO reactor.Flux.Defer.1 - onNext(5)
15:08:24.626 [main] INFO reactor.Flux.Defer.1 - onNext(6)
15:08:24.626 [main] INFO reactor.Flux.Defer.1 - onNext(7)
15:08:24.626 [main] INFO reactor.Flux.Defer.1 - onNext(8)
15:08:24.626 [main] INFO reactor.Flux.Defer.1 - onNext(9)
15:08:24.626 [main] INFO reactor.Flux.Defer.1 - onComplete()
I can't see what's wrong here.
As already answered in the comments above, RateLimiter tracks the number of subscriptions, not elements. To achieve rate limiting on elements you can use limitRate (together with buffer and delayElements).
For example,
Flux.range(1, 100)
    .delayElements(Duration.ofMillis(100)) // to imitate a publisher that produces elements at a certain rate
    .log()
    .limitRate(10)                         // used to request up to 10 elements from the publisher
    .buffer(10)                            // groups integers by 10 elements
    .delayElements(Duration.ofSeconds(2))  // emits a group of ints every 2 sec
    .subscribe(System.out::println);
I'm working with the FastAPI framework, served by the Uvicorn server.
My application runs some time-consuming numerical computation at a given endpoint (/run). For this I am using background_task from FastAPI (which is basically background_task from Starlette).
When running the application, after some time of nominal behaviour, the server shuts down for some reason.
The logs from the application look like this:
INFO: Started server process [922]
INFO: Waiting for application startup.
DEBUG: None - ASGI [1] Started
DEBUG: None - ASGI [1] Sent {'type': 'lifespan.startup'}
DEBUG: None - ASGI [1] Received {'type': 'lifespan.startup.complete'}
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
DEBUG: ('10.0.2.111', 57396) - Connected
DEBUG: ('10.0.2.111', 57397) - Connected
DEBUG: ('10.0.2.111', 57396) - ASGI [2] Started
DEBUG: ('10.0.2.111', 57396) - ASGI [2] Received {'type': 'http.response.start', 'status': 200, 'headers': '<...>'}
INFO: ('10.0.2.111', 57396) - "GET /run HTTP/1.1" 200
DEBUG: ('10.0.2.111', 57396) - ASGI [2] Received {'type': 'http.response.body', 'body': '<32 bytes>'}
DEBUG: ('10.0.2.111', 57396) - ASGI [3] Started
DEBUG: ('10.0.2.111', 57396) - ASGI [3] Received {'type': 'http.response.start', 'status': 404, 'headers': '<...>'}
INFO: ('10.0.2.111', 57396) - "GET /favicon.ico HTTP/1.1" 404
DEBUG: ('10.0.2.111', 57396) - ASGI [3] Received {'type': 'http.response.body', 'body': '<22 bytes>'}
DEBUG: ('10.0.2.111', 57396) - ASGI [3] Completed
...
DEBUG: ('10.0.2.111', 57396) - Disconnected
... The background task is completed.
DEBUG: ('10.0.2.111', 57396) - ASGI [2] Completed
DEBUG: ('10.0.2.111', 57397) - Disconnected
DEBUG: ('10.0.2.111', 57405) - Connected
...
The application goes on, with requests and completed background tasks.
At some point, during the execution of a background task:
INFO: Shutting down
DEBUG: ('10.0.2.111', 57568) - Disconnected
DEBUG: ('10.0.2.111', 57567) - Disconnected
INFO: Waiting for background tasks to complete. (CTRL+C to force quit)
DEBUG: ('10.0.2.111', 57567) - ASGI [6] Completed
INFO: Waiting for application shutdown.
DEBUG: None - ASGI [1] Sent {'type': 'lifespan.shutdown'}
DEBUG: None - ASGI [1] Received {'type': 'lifespan.shutdown.complete'}
DEBUG: None - ASGI [1] Completed
INFO: Finished server process [922]
I really don't get why this happens. I have no idea what to try in order to fix it.
My code looks like this:
#!/usr/bin/env python3.7
import time

from fastapi import FastAPI, BackgroundTasks
import uvicorn
from starlette.responses import JSONResponse

import my_imports_from_project

analysis_api = FastAPI()


@analysis_api.get("/")
def root():
    return {"message": "root"}


@analysis_api.get("/test")
def test():
    return {"message": "test"}


@analysis_api.get("/run")
def run(name: str, background_task: BackgroundTasks):
    try:
        some_checks(name)
    except RaisedExceptions:
        body = {"running": False,
                "name": name,
                "cause": "Not found in database"}
        return JSONResponse(status_code=400, content=body)

    body = {"running": True,
            "name": name}
    background_task.add_task(run_analysis, name)
    return JSONResponse(status_code=200, content=body)


if __name__ == "__main__":
    uvicorn.run("api:analysis_api", host="0.0.0.0", log_level="debug")
This is how I solved the whole problem.
I think the problem was that my tasks spawn some processes in order to perform the computations.
So, instead of using FastAPI's background_task, I am now using multiprocessing.Process().
This solves it.
As pointed out by the FastAPI folks, this solution might not scale well if the project becomes big and complex. In that case it is highly suggested to use something like a message queue plus a task runner (as suggested on the FastAPI site).
However, for small projects the solution with multiprocessing.Process or subprocess.Popen is totally fine.
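Roughly, the change to the /run endpoint from the question looks like this (a minimal sketch; the explicit import of some_checks, run_analysis and RaisedExceptions is an assumption, since in my project they come from my own modules):
import multiprocessing

from fastapi import FastAPI
from starlette.responses import JSONResponse

# placeholders from the original question; in my project these are my own modules
from my_imports_from_project import some_checks, run_analysis, RaisedExceptions

analysis_api = FastAPI()


@analysis_api.get("/run")
def run(name: str):
    try:
        some_checks(name)
    except RaisedExceptions:
        body = {"running": False,
                "name": name,
                "cause": "Not found in database"}
        return JSONResponse(status_code=400, content=body)

    # start the heavy computation in its own OS process and return immediately,
    # instead of handing it to Starlette's background task machinery
    process = multiprocessing.Process(target=run_analysis, args=(name,))
    process.start()

    return JSONResponse(status_code=200, content={"running": True, "name": name})
The point is simply that the heavy work now runs outside the request/response path in a separate process; as said above, for anything bigger a message queue plus a task runner is still the better fit.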
I have a question regarding Flink. I am running an application in a local cluster, with 1 TaskManager and 4 task slots.
After some time of running the application, I got a timeout error:
java.util.concurrent.TimeoutException: Heartbeat of TaskManager with id feea6a6702a0cf960ae2847b5bd25665 timed out.
I have seen some posts on this topic but no answers. Could you help me find the root cause, or suggest possible troubleshooting steps?
I am using Flink version 1.5.3.
It seems that the Docker containers of the TaskManagers and the JobManager are stopped when this happens.
Let me add the error trace from the JobManager container logs:
2019-06-09 13:31:06,300 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Job Socket Window NgsiEvent (ef3a860de48d54544d973754c6170d8b) switched from state FAILING to FAILED.
java.util.concurrent.TimeoutException: Heartbeat of TaskManager with id 63dbab620797b84da023b33578478238 timed out.
at org.apache.flink.runtime.jobmaster.JobMaster$TaskManagerHeartbeatListener.notifyHeartbeatTimeout(JobMaster.java:1609)
at org.apache.flink.runtime.heartbeat.HeartbeatManagerImpl$HeartbeatMonitor.run(HeartbeatManagerImpl.java:339)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.apache.flink.runtime.concurrent.akka.ActorSystemScheduledExecutorAdapter$ScheduledFutureTask.run(ActorSystemScheduledExecutorAdapter.java:154)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:39)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:415)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
2019-06-09 13:31:06,308 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Could not restart the job Socket Window NgsiEvent (ef3a860de48d54544d973754c6170d8b) because the restart strategy prevented it.
java.util.concurrent.TimeoutException: Heartbeat of TaskManager with id 63dbab620797b84da023b33578478238 timed out.
at org.apache.flink.runtime.jobmaster.JobMaster$TaskManagerHeartbeatListener.notifyHeartbeatTimeout(JobMaster.java:1609)
at org.apache.flink.runtime.heartbeat.HeartbeatManagerImpl$HeartbeatMonitor.run(HeartbeatManagerImpl.java:339)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.apache.flink.runtime.concurrent.akka.ActorSystemScheduledExecutorAdapter$ScheduledFutureTask.run(ActorSystemScheduledExecutorAdapter.java:154)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:39)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:415)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
2019-06-09 13:31:06,317 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Stopping checkpoint coordinator for job ef3a860de48d54544d973754c6170d8b.
2019-06-09 13:31:06,322 INFO org.apache.flink.runtime.checkpoint.StandaloneCompletedCheckpointStore - Shutting down
2019-06-09 13:31:06,331 WARN akka.remote.ReliableDeliverySupervisor - Association with remote system [akka.tcp://flink@16363182f31f:36715] has failed, address is now gated for [50] ms. Reason: [Association failed with [akka.tcp://flink@16363182f31f:36715]] Caused by: [16363182f31f]
2019-06-09 13:31:06,351 INFO org.apache.flink.runtime.dispatcher.StandaloneDispatcher - Job ef3a860de48d54544d973754c6170d8b reached globally terminal state FAILED.
2019-06-09 13:31:06,434 INFO org.apache.flink.runtime.jobmaster.JobMaster - Stopping the JobMaster for job Socket Window NgsiEvent(ef3a860de48d54544d973754c6170d8b).
2019-06-09 13:31:06,447 INFO org.apache.flink.runtime.jobmaster.slotpool.SlotPool - Suspending SlotPool.
2019-06-09 13:31:06,448 INFO org.apache.flink.runtime.jobmaster.JobMaster - Close ResourceManager connection 883e842633b0fd9a2e53ab45778581fe: JobManager is shutting down..
2019-06-09 13:31:06,449 INFO org.apache.flink.runtime.rpc.akka.AkkaRpcActor - The rpc endpoint org.apache.flink.runtime.jobmaster.slotpool.SlotPool has not been started yet. Discarding message org.apache.flink.runtime.rpc.messages.LocalRpcInvocation until processing is started.
2019-06-09 13:31:06,457 INFO org.apache.flink.runtime.resourcemanager.StandaloneResourceManager - Disconnect job manager 00000000000000000000000000000000@akka.tcp://flink@jobmanager:6123/user/jobmanager_2 for job ef3a860de48d54544d973754c6170d8b from the resource manager.
2019-06-09 13:31:06,459 INFO org.apache.flink.runtime.jobmaster.slotpool.SlotPool - Stopping SlotPool.
2019-06-09 13:31:06,460 INFO org.apache.flink.runtime.jobmaster.JobManagerRunner - JobManagerRunner already shutdown.
2019-06-09 13:31:16,304 WARN akka.remote.ReliableDeliverySupervisor - Association with remote system [akka.tcp://flink@16363182f31f:36715] has failed, address is now gated for [50] ms. Reason: [Association failed with [akka.tcp://flink@16363182f31f:36715]] Caused by: [16363182f31f: Name or service not known]
2019-06-09 13:31:26,320 WARN akka.remote.ReliableDeliverySupervisor - Association with remote system [akka.tcp://flink@16363182f31f:36715] has failed, address is now gated for [50] ms. Reason: [Association failed with [akka.tcp://flink@16363182f31f:36715]] Caused by: [16363182f31f: Name or service not known]
2019-06-09 13:31:36,286 WARN akka.remote.ReliableDeliverySupervisor - Association with remote system [akka.tcp://flink@16363182f31f:36715] has failed, address is now gated for [50] ms. Reason: [Association failed with [akka.tcp://flink@16363182f31f:36715]] Caused by: [16363182f31f]
Thanks in advance!
The issue is as follows:
5825 [ZAP-SpiderInitThread-0] INFO org.zaproxy.zap.spider.Spider - Spider initializing...
5854 [ZAP-SpiderInitThread-0] INFO org.zaproxy.zap.spider.Spider - Starting spider...
5854 [ZAP-SpiderInitThread-0] WARN org.zaproxy.zap.spider.Spider - No seeds available for the Spider. Cancelling scan...
5854 [ZAP-SpiderInitThread-0] INFO org.zaproxy.zap.extension.spider.SpiderThread - Spider scanning complete: false
[ZAP Jenkins Plugin] SPIDER SCAN STATUS [ 100% ]
[ZAP Jenkins Plugin] ALERTS COUNT [ 0 ]
Can anyone resolve this issue?