When running my bot on my hosting server, I enabled logging and got this error:
2022-04-07 17:38:23,815 (__init__.py:663 MainThread) INFO - TeleBot: "Started polling."
2022-04-07 17:38:23,816 (util.py:84 PollingThread) DEBUG - TeleBot: "Received task"
2022-04-07 17:38:23,816 (apihelper.py:88 PollingThread) DEBUG - TeleBot: "Request: method=get url=https://api.telegram.org/bot5186758570:{TOKEN}/getUpdates params={'offset': 1, 'timeout': 20, 'long_polling_timeout': 20} files=None"
2022-04-07 17:38:44,020 (apihelper.py:156 PollingThread) DEBUG - TeleBot: "The server returned: 'b'{"ok":true,"result":[]}''"
2022-04-07 17:38:44,021 (__init__.py:414 PollingThread) DEBUG - TeleBot: "Received 0 new updates"
2022-04-07 17:38:44,021 (util.py:88 PollingThread) DEBUG - TeleBot: "Task complete"
2022-04-07 17:38:44,021 (util.py:84 PollingThread) DEBUG - TeleBot: "Received task"
2022-04-07 17:38:44,021 (apihelper.py:88 PollingThread) DEBUG - TeleBot: "Request: method=get url=https://api.telegram.org/bot5186758570:{TOKEN}/getUpdates params={'offset': 1, 'timeout': 20, 'long_polling_timeout': 20} files=None"
Please help me fix this; I would be very grateful.
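For reference, a minimal sketch of the kind of setup that produces this exact DEBUG output (the token is a placeholder):

import logging

import telebot

# telebot ships its own logger; raising it to DEBUG prints the
# "Received task" / "Request: method=get ..." lines shown above.
telebot.logger.setLevel(logging.DEBUG)

bot = telebot.TeleBot("TOKEN")  # placeholder token

# Long polling with the same timeouts that appear in the getUpdates params.
bot.infinity_polling(timeout=20, long_polling_timeout=20)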
My Elastic Cloud cluster is hosted in Azure. Everything works with Spring Data ES 4.1.5 (ES client 7.9.3), but Spring Data ES 4.4.1 (ES client 7.17.4) requires the cluster:monitor/main permission.
My admin doesn't want to grant such a permission.
"root_cause":[{"type":"security_exception","reason":"action [cluster:monitor/main] is unauthorized for user [xxxx] with roles
I first asked about this in an earlier post, where Val pointed out that the / endpoint is a cluster-level API and therefore requires this permission.
Why does it need the cluster:monitor/main permission?
I did some debugging and found more details: Spring Data ES sends HEAD / and GET / during SimpleElasticsearchRepository initialization.
The error occurs during SimpleElasticsearchRepository initialization: SimpleElasticsearchRepository -> RestIndexTemplate -> RestHighLevelClient.
During SimpleElasticsearchRepository initialization the index name is empty, so the normal HEAD request on the actual index becomes HEAD /. This user account doesn't have permission to send HEAD /.
Since the purpose is only to check whether the index exists, could the server check be skipped when the index name is empty? Otherwise it turns into a request for cluster-level info, and the two requests below require completely different permissions, as illustrated after them:
HEAD /
HEAD /actual_index
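To illustrate the difference: HEAD /actual_index is covered by index-level privileges, while HEAD / resolves to the cluster-level cluster:monitor/main action. A hedged sketch of the two role shapes (the role names and index name are made up):

POST /_security/role/index_only_role
{
  "indices": [
    { "names": ["actual_index"], "privileges": ["read", "view_index_metadata"] }
  ]
}

POST /_security/role/with_cluster_monitor
{
  "cluster": ["monitor"],
  "indices": [
    { "names": ["actual_index"], "privileges": ["read", "view_index_metadata"] }
  ]
}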
I also found this error in the log, but I couldn't find out where Spring Data ES sends it from. This user can't run GET / either.
It works in Spring Data ES 4.1.5. Why didn't Spring Data ES 4.1.5 send the HEAD / and GET / requests above?
Updated to include my original post.
I found out more details about the issues.
HEAD /
During SimpleElasticsearchRepository initialization it runs the check below. My entity has createIndex = true, so it goes on to check exists(). But the index name is empty at this point, so HEAD /my_index becomes HEAD /.
I don't know why the index name is empty here, but I set createIndex = false to skip the check.
if (this.shouldCreateIndexAndMapping() && !this.indexOperations.exists()) {
    this.indexOperations.createWithMapping();
}
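For reference, the workaround lives on the entity mapping; a minimal sketch (the entity and index name are placeholders):

import org.springframework.data.annotation.Id;
import org.springframework.data.elasticsearch.annotations.Document;

// createIndex = false makes Spring Data ES skip the exists()/createWithMapping()
// check above during repository initialization, so no HEAD / is sent.
@Document(indexName = "my_index", createIndex = false)
public class MyEntity {

    @Id
    private String id;

    // getters and setters omitted
}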
GET /
Based on my testing, GET / is sent once per session. It acts like a network connectivity check; if it returns an error, the error is ignored.
//MainClientExec - [exchange: 1] start execution -- GET / HTTP/1.1[\r][\n] -- once per session
11:58:46.648 [main] DEBUG org.apache.http.impl.nio.client.MainClientExec - [exchange: 1] start execution
11:58:46.967 [I/O dispatcher 1] DEBUG org.apache.http.headers - http-outgoing-0 >> GET / HTTP/1.1
11:58:46.967 [I/O dispatcher 1] DEBUG org.apache.http.headers - http-outgoing-0 >> X-Elastic-Client-Meta: es=7.17.4p,jv=1.8,t=7.17.4p,hc=4.1.4,kt=1.6,gy=2.4
11:58:46.967 [I/O dispatcher 1] DEBUG org.apache.http.headers - http-outgoing-0 >> Content-Length: 0
11:58:46.967 [I/O dispatcher 1] DEBUG org.apache.http.headers - http-outgoing-0 >> Host: xxx.azure.elastic-cloud.com:9243
11:58:46.967 [I/O dispatcher 1] DEBUG org.apache.http.headers - http-outgoing-0 >> Connection: Keep-Alive
11:58:46.967 [I/O dispatcher 1] DEBUG org.apache.http.headers - http-outgoing-0 >> User-Agent: elasticsearch-java/7.17.4-SNAPSHOT (Java/1.8.0_331)
11:58:46.967 [I/O dispatcher 1] DEBUG org.apache.http.headers - http-outgoing-0 >> Authorization: Basic xxx
11:58:47.105 [I/O dispatcher 1] DEBUG org.apache.http.wire - http-outgoing-0 << "HTTP/1.1 403 Forbidden[\r][\n]"
11:58:47.105 [I/O dispatcher 1] DEBUG org.apache.http.wire - http-outgoing-0 << "Content-Length: 4157[\r][\n]"
11:58:47.105 [I/O dispatcher 1] DEBUG org.apache.http.wire - http-outgoing-0 << "Content-Type: application/json; charset=UTF-8[\r][\n]"
11:58:47.105 [I/O dispatcher 1] DEBUG org.apache.http.wire - http-outgoing-0 << "X-Cloud-Request-Id: 3TA-1pHlTXuLbAoue0rKZw[\r][\n]"
11:58:47.105 [I/O dispatcher 1] DEBUG org.apache.http.wire - http-outgoing-0 << "X-Elastic-Product: Elasticsearch[\r][\n]"
11:58:47.105 [I/O dispatcher 1] DEBUG org.apache.http.wire - http-outgoing-0 << "X-Found-Handling-Cluster: aaf9bff719c349bbad7a015615a994f7[\r][\n]"
11:58:47.105 [I/O dispatcher 1] DEBUG org.apache.http.wire - http-outgoing-0 << "X-Found-Handling-Instance: instance-0000000001[\r][\n]"
11:58:47.105 [I/O dispatcher 1] DEBUG org.apache.http.wire - http-outgoing-0 << "Date: Fri, 22 Jul 2022 16:58:46 GMT[\r][\n]"
11:58:47.105 [I/O dispatcher 1] DEBUG org.apache.http.wire - http-outgoing-0 << "[\r][\n]"
11:58:47.105 [I/O dispatcher 1] DEBUG org.apache.http.wire - http-outgoing-0 << "{"error":{"root_cause":[{"type":"security_exception","reason":"action [cluster:monitor/main] is unauthorized for user
11:58:47.115 [I/O dispatcher 1] DEBUG org.apache.http.headers - http-outgoing-0 << HTTP/1.1 403 Forbidden
11:58:47.115 [I/O dispatcher 1] DEBUG org.apache.http.impl.nio.client.MainClientExec - [exchange: 1] Response received HTTP/1.1 403 Forbidden
11:58:47.126 [I/O dispatcher 1] DEBUG org.apache.http.impl.nio.client.InternalIODispatch - http-outgoing-0 [ACTIVE(4157)] Input ready
11:58:47.126 [I/O dispatcher 1] DEBUG org.apache.http.impl.nio.client.MainClientExec - [exchange: 1] Consume content
11:58:47.131 [I/O dispatcher 1] DEBUG org.apache.http.impl.nio.client.InternalHttpAsyncClient - [exchange: 1] Connection can be kept alive indefinitely
11:58:47.131 [I/O dispatcher 1] DEBUG org.apache.http.impl.nio.client.MainClientExec - [exchange: 1] Response processed
11:58:47.131 [I/O dispatcher 1] DEBUG org.apache.http.impl.nio.client.InternalHttpAsyncClient - [exchange: 1] releasing connection
11:58:47.131 [I/O dispatcher 1] DEBUG org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl - http-outgoing-0
11:58:47.131 [I/O dispatcher 1] DEBUG org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager - Releasing connection: [id: http-outgoing-0][route: {s}->https://
11:58:47.131 [I/O dispatcher 1] DEBUG org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager - Connection [id: http-outgoing-0][route: {s}->https://
11:58:47.132 [I/O dispatcher 1] DEBUG org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl - http-outgoing-0 10.2
11:58:47.133 [I/O dispatcher 1] DEBUG org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager - Connection released: [id: http-outgoing-0][
After the above, the real query is sent.
//MainClientExec - [exchange: 2] start execution - GET /my_index/_doc/asdf HTTP/1.1
11:58:47.137 [main] DEBUG org.apache.http.impl.nio.client.MainClientExec - [exchange: 2] start execution
11:58:47.147 [I/O dispatcher 1] DEBUG org.apache.http.headers - http-outgoing-0 >> GET /my_index/_doc/asdf HTTP/1.1
11:58:47.147 [I/O dispatcher 1] DEBUG org.apache.http.headers - http-outgoing-0 >> X-Elastic-Client-Meta: es=7.17.4p,jv=1.8,t=7.17.4p,hc=4.1.4,kt=1.6,gy=2.4
11:58:47.147 [I/O dispatcher 1] DEBUG org.apache.http.headers - http-outgoing-0 >> Content-Length: 0
11:58:47.147 [I/O dispatcher 1] DEBUG org.apache.http.headers - http-outgoing-0 >> Host: xxx.azure.elastic-cloud.com:9243
11:58:47.147 [I/O dispatcher 1] DEBUG org.apache.http.headers - http-outgoing-0 >> Connection: Keep-Alive
11:58:47.147 [I/O dispatcher 1] DEBUG org.apache.http.headers - http-outgoing-0 >> User-Agent: elasticsearch-java/7.17.4-SNAPSHOT (Java/1.8.0_331)
11:58:47.147 [I/O dispatcher 1] DEBUG org.apache.http.headers - http-outgoing-0 >> Authorization: Basic xx==
11:58:47.148 [I/O dispatcher 1] DEBUG org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl - http-outgoing-0 10.24.15.131:58069<->52.158.162.229:9243[ACTIVE][rw:w][ACTIVE][rw][NOT_HANDSHAKING][0][0][0]: Event set [w]
11:58:47.148 [I/O dispatcher 1] DEBUG org.apache.http.impl.nio.client.MainClientExec - [exchange: 2] Request completed
11:58:47.148 [I/O dispatcher 1] DEBUG org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl - http-outgoing-0 10.24.15.131:58069<->52.158.162.229:9243[ACTIVE][rw:w][ACTIVE][rw][NOT_HANDSHAKING][0][0][372]: 343 bytes written
11:58:47.148 [I/O dispatcher 1] DEBUG org.apache.http.wire - http-outgoing-0 >> "GET /my_index/_doc/asdf HTTP/1.1[\r][\n]"
11:58:47.148 [I/O dispatcher 1] DEBUG org.apache.http.wire - http-outgoing-0 >> "X-Elastic-Client-Meta: es=7.17.4p,jv=1.8,t=7.17.4p,hc=4.1.4,kt=1.6,gy=2.4[\r][\n]"
I'm trying to build an application that connects to Twitter and posts tweets. I created a TwitterTemplate using Spring Social.
pom.xml
<dependency>
    <groupId>org.springframework.social</groupId>
    <artifactId>spring-social-twitter</artifactId>
    <version>1.1.0.RELEASE</version>
</dependency>
Twitter template:
return new TwitterTemplate(consumerKey, consumerSecret, accessToken, accessTokenSecret);
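For context, the template is created in a configuration class roughly like this (a sketch; the property names are my own):

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.social.twitter.api.Twitter;
import org.springframework.social.twitter.api.impl.TwitterTemplate;

@Configuration
public class TwitterConfig {

    @Value("${twitter.consumerKey}")
    private String consumerKey;

    @Value("${twitter.consumerSecret}")
    private String consumerSecret;

    @Value("${twitter.accessToken}")
    private String accessToken;

    @Value("${twitter.accessTokenSecret}")
    private String accessTokenSecret;

    // The four-argument constructor should yield a binding authorized to act
    // as the user; if the access token pair is blank, write calls later fail
    // with a MissingAuthorizationException like the one below.
    @Bean
    public Twitter twitter() {
        return new TwitterTemplate(consumerKey, consumerSecret,
                accessToken, accessTokenSecret);
    }
}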
But I am getting the following error:
Caused by: org.springframework.social.MissingAuthorizationException: Authorization is required for the operation, but the API binding was created without authorization.
I added Read and Write permissions under app settings in the developer portal as well.
Logs of the HTTPS calls made to Twitter:
12:19:49.750 [main] DEBUG org.apache.http.wire - http-outgoing-0 >> "POST /1.1/statuses/update.json HTTP/1.1[\r][\n]"
12:19:49.751 [main] DEBUG org.apache.http.wire - http-outgoing-0 >> "Accept: application/json, application/*+json[\r][\n]"
12:19:49.751 [main] DEBUG org.apache.http.wire - http-outgoing-0 >> "Content-Type: application/x-www-form-urlencoded;charset=UTF-8[\r][\n]"
12:19:49.751 [main] DEBUG org.apache.http.wire - http-outgoing-0 >> "Authorization: OAuth oauth_nonce="2562115169", oauth_token="my_token", oauth_consumer_key="my_key", oauth_signature_method="HMAC-SHA1", oauth_timestamp="1632466188", oauth_version="1.0", oauth_signature="iwmB09dBm2R7SJ07w8CAlN4EiT8%3D"[\r][\n]"
12:19:49.752 [main] DEBUG org.apache.http.wire - http-outgoing-0 >> "Content-Length: 27[\r][\n]"
12:19:49.752 [main] DEBUG org.apache.http.wire - http-outgoing-0 >> "Host: api.twitter.com[\r][\n]"
12:19:49.752 [main] DEBUG org.apache.http.wire - http-outgoing-0 >> "Connection: Keep-Alive[\r][\n]"
12:19:49.752 [main] DEBUG org.apache.http.wire - http-outgoing-0 >> "User-Agent: Apache-HttpClient/4.5.10 (Java/1.8.0_231)[\r][\n]"
12:19:49.752 [main] DEBUG org.apache.http.wire - http-outgoing-0 >> "Accept-Encoding: gzip,deflate[\r][\n]"
12:19:49.752 [main] DEBUG org.apache.http.wire - http-outgoing-0 >> "[\r][\n]"
12:19:49.753 [main] DEBUG org.apache.http.wire - http-outgoing-0 >> "status=Test+eng+model+tweet"
12:19:50.257 [main] DEBUG org.apache.http.wire - http-outgoing-0 << "HTTP/1.1 401 Unauthorized[\r][\n]"
What am I missing? Any help is appreciated. Thanks in advance.
I have looked hard for an answer, but haven't managed to find one, so I'm hoping someone here can help me understand this error and what is happening during the singularity pull command.
Here is the error:
Error executing process > 'QC_TRIM_READS (1)'
Caused by:
Failed to pull singularity image
command: singularity pull --name quay.io-biocontainers-sickle-trim-1.33--2.img.pulling.1632264509884 docker://quay.io/biocontainers/sickle-trim:1.33--2 > /dev/null
status : 127
message:
WARNING: pull for Docker Hub is not guaranteed to produce the
WARNING: same image on repeated pull. Use Singularity Registry
WARNING: (shub://) to pull exactly equivalent images.
/usr/bin/env: ‘python’: No such file or directory
ERROR: pulling container failed!
Here is the script (excuse the mess, I am just getting used to Nextflow):
#!/usr/bin/env nextflow

nextflow.enable.dsl=2

params.ref_genome = "./data/GmaxFiskeby_678_v1.0.fa"
params.ref_annotation = "./data/GmaxFiskeby_678_v1.1.gene_exons.gff3"
params.intermediate_dir = "$workDir/intermediate/"

workflow {
    ref_genome_ch = Channel.fromPath("$params.ref_genome")
    ref_annotation_ch = Channel.fromPath("$params.ref_annotation")
    input_fastq_ch = Channel.fromPath("./data/*.fastq")

    ref_genome_ch.view()
    QC_TRIM_READS(input_fastq_ch)
    STAR_INDEX_GENOME(ref_genome_ch, ref_annotation_ch)
}

process GZIP_VERSION {
    echo true

    script:
    """
    gzip --version
    """
}

process UNZIP {
    publishDir "intermediate/"

    input:
    path file

    output:
    path "${file.baseName}"

    script:
    """
    gzip -dfk ${file}
    """
}

process QC_TRIM_READS {
    publishDir "intermediate/"
    container 'quay.io/biocontainers/sickle-trim:1.33--2'

    input:
    path fastqFile

    output:
    path "${fastqFile.baseName}_trimmed.${fastqFile.getExtension()}"

    script:
    """
    sickle se \\
        -f $fastqFile \\
        -t sanger \\
        -o ${fastqFile.baseName}_trimmed.${fastqFile.getExtension()} \\
        -q 35 \\
        -l 45
    """
}

process STAR_INDEX_GENOME {
    publishDir "intermediate/indexedGenome/"
    /*if (workflow.containerEngine == 'singularity') {
        container "https://depot.galaxyproject.org/singularity/mulled-v2-1fa26d1ce03c295fe2fdcf85831a92fbcbd7e8c2:59cdd445419f14abac76b31dd0d71217994cbcc9-0"
    } else {*/
    container "quay.io/biocontainers/mulled-v2-1fa26d1ce03c295fe2fdcf85831a92fbcbd7e8c2:59cdd445419f14abac76b31dd0d71217994cbcc9-0" //'quay.io/biocontainers/star:2.6.1d--0'
    //}

    input:
    path genome
    path gtf

    output:
    path "star", emit: index

    script:
    """
    STAR \\
        --runMode genomeGenerate \\
        --genomeDir star/ \\
        --genomeFastaFiles ${genome} \\
        --sjdbGTFfile ${gtf} \\
        --sjdbGTFtagExonParentTranscript Parent \\
        --sjdbOverhang 100 \\
        --runThreadN 2
    """
}
Here is my configuration file:
//docker.enabled = false
singularity.enabled = true
singularity.autoMounts = true
I built my environment as a conda environment; here is the YAML file:
name: nf-core
channels:
- conda-forge
- bioconda
- defaults
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=4.5=1_gnu
- attrs=21.2.0=pyhd3eb1b0_0
- brotlipy=0.7.0=py38h27cfd23_1003
- bzip2=1.0.8=h7b6447c_0
- c-ares=1.17.1=h27cfd23_0
- ca-certificates=2021.7.5=h06a4308_1
- cairo=1.14.12=h7636065_2
- cattrs=1.7.1=pyhd3eb1b0_0
- certifi=2021.5.30=py38h06a4308_0
- cffi=1.14.6=py38h400218f_0
- charset-normalizer=2.0.4=pyhd3eb1b0_0
- click=8.0.1=pyhd3eb1b0_0
- colorama=0.4.4=pyhd3eb1b0_0
- commonmark=0.9.1=pyhd3eb1b0_0
- coreutils=8.32=h7b6447c_0
- cryptography=3.4.7=py38hd23ed53_0
- curl=7.78.0=h1ccaba5_0
- expat=2.4.1=h2531618_2
- fontconfig=2.12.6=h49f89f6_0
- freetype=2.8=hab7d2ae_1
- fribidi=1.0.10=h7b6447c_0
- future=0.18.2=py38_1
- gettext=0.21.0=hf68c758_0
- git=2.32.0=pl5262hc120c5b_1
- gitdb=4.0.7=pyhd3eb1b0_0
- gitpython=3.1.18=pyhd3eb1b0_1
- glib=2.69.1=h5202010_0
- graphite2=1.3.14=h23475e2_0
- graphviz=2.40.1=h25d223c_0
- harfbuzz=1.7.6=h5f0a787_1
- hdf5=1.10.6=hb1b8bf9_0
- icu=58.2=he6710b0_3
- idna=3.2=pyhd3eb1b0_0
- importlib-metadata=4.8.1=py38h06a4308_0
- importlib_metadata=4.8.1=hd3eb1b0_0
- itsdangerous=2.0.1=pyhd3eb1b0_0
- jinja2=3.0.1=pyhd3eb1b0_0
- jpeg=9d=h7f8727e_0
- jsonschema=3.2.0=pyhd3eb1b0_2
- krb5=1.19.2=hac12032_0
- ld_impl_linux-64=2.35.1=h7274673_9
- libcurl=7.78.0=h0b77cf5_0
- libedit=3.1.20210714=h7f8727e_0
- libev=4.33=h7b6447c_0
- libffi=3.3=he6710b0_2
- libgcc-ng=9.3.0=h5101ec6_17
- libgfortran-ng=7.5.0=ha8ba4b0_17
- libgfortran4=7.5.0=ha8ba4b0_17
- libgomp=9.3.0=h5101ec6_17
- libiconv=1.15=h63c8f33_5
- libnghttp2=1.41.0=hf8bcb03_2
- libpng=1.6.37=hbc83047_0
- libssh2=1.9.0=h1ba5d50_1
- libstdcxx-ng=9.3.0=hd4cf53a_17
- libtiff=4.2.0=h85742a9_0
- libtool=2.4.6=h7b6447c_1005
- libwebp-base=1.2.0=h27cfd23_0
- libxcb=1.14=h7b6447c_0
- libxml2=2.9.12=h03d6c58_0
- lz4-c=1.9.3=h295c915_1
- markupsafe=2.0.1=py38h27cfd23_0
- ncbi-ngs-sdk=2.10.4=hdf6179e_0
- ncurses=6.2=he6710b0_1
- nextflow=21.04.0=h4a94de4_0
- nf-core=2.1=pyh5e36f6f_0
- openjdk=8.0.152=h7b6447c_3
- openssl=1.1.1l=h7f8727e_0
- ossuuid=1.6.2=hf484d3e_1000
- packaging=21.0=pyhd3eb1b0_0
- pango=1.42.0=h377f3fa_0
- pcre=8.45=h295c915_0
- pcre2=10.35=h14c3975_1
- perl=5.26.2=h14c3975_0
- perl-app-cpanminus=1.7044=pl526_1
- perl-business-isbn=3.004=pl526_0
- perl-business-isbn-data=20140910.003=pl526_0
- perl-carp=1.38=pl526_3
- perl-constant=1.33=pl526_1
- perl-data-dumper=2.173=pl526_0
- perl-encode=2.88=pl526_1
- perl-exporter=5.72=pl526_1
- perl-extutils-makemaker=7.36=pl526_1
- perl-file-path=2.16=pl526_0
- perl-file-temp=0.2304=pl526_2
- perl-mime-base64=3.15=pl526_1
- perl-parent=0.236=pl526_1
- perl-uri=1.76=pl526_0
- perl-xml-libxml=2.0132=pl526h7ec2d77_1
- perl-xml-namespacesupport=1.12=pl526_0
- perl-xml-sax=1.02=pl526_0
- perl-xml-sax-base=1.09=pl526_0
- perl-xsloader=0.24=pl526_0
- pip=21.2.2=py38h06a4308_0
- pixman=0.40.0=h7b6447c_0
- prompt-toolkit=3.0.17=pyhca03da5_0
- prompt_toolkit=3.0.17=hd3eb1b0_0
- pycparser=2.20=py_2
- pygments=2.10.0=pyhd3eb1b0_0
- pyopenssl=20.0.1=pyhd3eb1b0_1
- pyparsing=2.4.7=pyhd3eb1b0_0
- pyrsistent=0.17.3=py38h7b6447c_0
- pysocks=1.7.1=py38h06a4308_0
- python=3.8.11=h12debd9_0_cpython
- python_abi=3.8=2_cp38
- pyyaml=5.4.1=py38h27cfd23_1
- questionary=1.10.0=pyhd8ed1ab_0
- readline=8.1=h27cfd23_0
- requests=2.26.0=pyhd3eb1b0_0
- requests-cache=0.7.4=pyhd8ed1ab_0
- rich=10.10.0=py38h578d9bd_0
- setuptools=58.0.4=py38h06a4308_0
- singularity=2.4.2=0
- six=1.16.0=pyhd3eb1b0_0
- smmap=4.0.0=pyhd3eb1b0_0
- sqlite=3.36.0=hc218d9a_0
- sra-tools=2.11.0=pl5262h314213e_0
- tabulate=0.8.9=py38h06a4308_0
- tk=8.6.10=hbc83047_0
- typing-extensions=3.10.0.2=hd3eb1b0_0
- typing_extensions=3.10.0.2=pyh06a4308_0
- url-normalize=1.4.3=pyhd8ed1ab_0
- urllib3=1.26.6=pyhd3eb1b0_1
- wcwidth=0.2.5=pyhd3eb1b0_0
- wheel=0.37.0=pyhd3eb1b0_1
- xz=5.2.5=h7b6447c_0
- yaml=0.2.5=h7b6447c_0
- zipp=3.5.0=pyhd3eb1b0_0
- zlib=1.2.11=h7b6447c_3
- zstd=1.4.9=haebb681_0
prefix: /home/mkozubov/miniconda3/envs/nf-core
Here is the log file:
Sep-21 16:00:49.076 [main] DEBUG nextflow.cli.Launcher - $> nextflow run rnaseq.nf
Sep-21 16:00:49.318 [main] INFO nextflow.cli.CmdRun - N E X T F L O W ~ version 21.04.0
Sep-21 16:00:49.367 [main] INFO nextflow.cli.CmdRun - Launching `rnaseq.nf` [reverent_jepsen] - revision: 0fc00d31fc
Sep-21 16:00:49.414 [main] DEBUG nextflow.config.ConfigBuilder - Found config local: /mnt/c/Users/mkozubov/Desktop/nextflow_tutorial/rnaseq/nextflow.config
Sep-21 16:00:49.418 [main] DEBUG nextflow.config.ConfigBuilder - Parsing config file: /mnt/c/Users/mkozubov/Desktop/nextflow_tutorial/rnaseq/nextflow.config
Sep-21 16:00:49.506 [main] DEBUG nextflow.config.ConfigBuilder - Applying config profile: `standard`
Sep-21 16:00:50.238 [main] DEBUG nextflow.plugin.PluginsFacade - Setting up plugin manager > mode=prod; plugins-dir=/home/mkozubov/.nextflow/plugins
Sep-21 16:00:50.240 [main] DEBUG nextflow.plugin.PluginsFacade - Plugins default=[]
Sep-21 16:00:50.242 [main] DEBUG nextflow.plugin.PluginsFacade - Plugins local root: .nextflow/plr/empty
Sep-21 16:00:50.258 [main] INFO org.pf4j.DefaultPluginStatusProvider - Enabled plugins: []
Sep-21 16:00:50.262 [main] INFO org.pf4j.DefaultPluginStatusProvider - Disabled plugins: []
Sep-21 16:00:50.266 [main] INFO org.pf4j.DefaultPluginManager - PF4J version 3.4.1 in 'deployment' mode
Sep-21 16:00:50.289 [main] INFO org.pf4j.AbstractPluginManager - No plugins
Sep-21 16:00:50.366 [main] DEBUG nextflow.Session - Session uuid: 22a13149-e9f8-47cc-8f09-98a6b000a83a
Sep-21 16:00:50.367 [main] DEBUG nextflow.Session - Run name: reverent_jepsen
Sep-21 16:00:50.372 [main] DEBUG nextflow.Session - Executor pool size: 5
Sep-21 16:00:50.418 [main] DEBUG nextflow.cli.CmdRun -
Version: 21.04.0 build 5552
Created: 02-05-2021 16:22 UTC (09:22 PDT)
System: Linux 5.10.16.3-microsoft-standard-WSL2
Runtime: Groovy 3.0.7 on OpenJDK 64-Bit Server VM 1.8.0_152-release-1056-b12
Encoding: UTF-8 (UTF-8)
Process: 10590#DESKTOP-UJ90D1J [127.0.1.1]
CPUs: 5 - Mem: 1.9 GB (311.8 MB) - Swap: 1 GB (783.4 MB)
Sep-21 16:00:50.539 [main] DEBUG nextflow.file.FileHelper - Can't check if specified path is NFS (1): /mnt/c/Users/mkozubov/Desktop/nextflow_tutorial/rnaseq/work
v9fs
Sep-21 16:00:50.541 [main] DEBUG nextflow.Session - Work-dir: /mnt/c/Users/mkozubov/Desktop/nextflow_tutorial/rnaseq/work [null]
Sep-21 16:00:50.545 [main] DEBUG nextflow.Session - Script base path does not exist or is not a directory: /mnt/c/Users/mkozubov/Desktop/nextflow_tutorial/rnaseq/bin
Sep-21 16:00:50.585 [main] DEBUG nextflow.executor.ExecutorFactory - Extension executors providers=[]
Sep-21 16:00:50.616 [main] DEBUG nextflow.Session - Observer factory: DefaultObserverFactory
Sep-21 16:00:50.999 [main] DEBUG nextflow.Session - Session start invoked
Sep-21 16:00:51.461 [main] DEBUG nextflow.script.ScriptRunner - > Launching execution
Sep-21 16:00:51.511 [main] DEBUG nextflow.Session - Workflow process names [dsl2]: QC_TRIM_READS, UNZIP, STAR_INDEX_GENOME, GZIP_VERSION
Sep-21 16:00:51.643 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Sep-21 16:00:51.643 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Sep-21 16:00:51.651 [main] DEBUG nextflow.executor.Executor - [warm up] executor > local
Sep-21 16:00:51.656 [main] DEBUG n.processor.LocalPollingMonitor - Creating local task monitor for executor 'local' > cpus=5; memory=1.9 GB; capacity=5; pollInterval=100ms; dumpInterval=5m
Sep-21 16:00:51.868 [main] DEBUG nextflow.executor.ExecutorFactory - << taskConfig executor: null
Sep-21 16:00:51.869 [main] DEBUG nextflow.executor.ExecutorFactory - >> processorType: 'local'
Sep-21 16:00:51.904 [main] DEBUG nextflow.Session - Ignite dataflow network (5)
Sep-21 16:00:51.963 [main] DEBUG nextflow.processor.TaskProcessor - Starting process > QC_TRIM_READS
Sep-21 16:00:51.965 [main] DEBUG nextflow.processor.TaskProcessor - Starting process > STAR_INDEX_GENOME
Sep-21 16:00:51.966 [main] DEBUG nextflow.script.ScriptRunner - > Await termination
Sep-21 16:00:51.968 [main] DEBUG nextflow.Session - Session await
Sep-21 16:00:51.969 [PathVisitor-3] DEBUG nextflow.file.PathVisitor - files for syntax: glob; folder: ./data/; pattern: *.fastq; options: [:]
Sep-21 16:00:52.300 [Actor Thread 8] WARN nextflow.container.SingularityCache - Singularity cache directory has not been defined -- Remote image will be stored in the path: /mnt/c/Users/mkozubov/Desktop/nextflow_tutorial/rnaseq/work/singularity -- Use env variable NXF_SINGULARITY_CACHEDIR to specify a different location
Sep-21 16:00:52.300 [Actor Thread 8] INFO nextflow.container.SingularityCache - Pulling Singularity image docker://quay.io/biocontainers/sickle-trim:1.33--2 [cache /mnt/c/Users/mkozubov/Desktop/nextflow_tutorial/rnaseq/work/singularity/quay.io-biocontainers-sickle-trim-1.33--2.img]
Sep-21 16:00:52.300 [Actor Thread 7] INFO nextflow.container.SingularityCache - Pulling Singularity image docker://quay.io/biocontainers/mulled-v2-1fa26d1ce03c295fe2fdcf85831a92fbcbd7e8c2:59cdd445419f14abac76b31dd0d71217994cbcc9-0 [cache /mnt/c/Users/mkozubov/Desktop/nextflow_tutorial/rnaseq/work/singularity/quay.io-biocontainers-mulled-v2-1fa26d1ce03c295fe2fdcf85831a92fbcbd7e8c2-59cdd445419f14abac76b31dd0d71217994cbcc9-0.img]
Sep-21 16:00:52.433 [Actor Thread 5] ERROR nextflow.processor.TaskProcessor - Error executing process > 'QC_TRIM_READS (1)'
Caused by:
Failed to pull singularity image
command: singularity pull --name quay.io-biocontainers-sickle-trim-1.33--2.img.pulling.1632265252300 docker://quay.io/biocontainers/sickle-trim:1.33--2 > /dev/null
status : 127
message:
WARNING: pull for Docker Hub is not guaranteed to produce the
WARNING: same image on repeated pull. Use Singularity Registry
WARNING: (shub://) to pull exactly equivalent images.
/usr/bin/env: ‘python’: No such file or directory
ERROR: pulling container failed!
java.lang.IllegalStateException: java.lang.IllegalStateException: Failed to pull singularity image
command: singularity pull --name quay.io-biocontainers-sickle-trim-1.33--2.img.pulling.1632265252300 docker://quay.io/biocontainers/sickle-trim:1.33--2 > /dev/null
status : 127
message:
WARNING: pull for Docker Hub is not guaranteed to produce the
WARNING: same image on repeated pull. Use Singularity Registry
WARNING: (shub://) to pull exactly equivalent images.
/usr/bin/env: ‘python’: No such file or directory
ERROR: pulling container failed!
at nextflow.container.SingularityCache.getCachePathFor(SingularityCache.groovy:304)
at nextflow.container.SingularityCache$getCachePathFor.call(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:47)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:125)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:139)
at nextflow.container.ContainerHandler.createSingularityCache(ContainerHandler.groovy:85)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.codehaus.groovy.runtime.callsite.PlainObjectMetaMethodSite.doInvoke(PlainObjectMetaMethodSite.java:43)
at org.codehaus.groovy.runtime.callsite.PogoMetaMethodSite$PogoCachedMethodSiteNoUnwrapNoCoerce.invoke(PogoMetaMethodSite.java:193)
at org.codehaus.groovy.runtime.callsite.PogoMetaMethodSite.callCurrent(PogoMetaMethodSite.java:61)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:194)
at nextflow.container.ContainerHandler.normalizeImageName(ContainerHandler.groovy:68)
at nextflow.container.ContainerHandler$normalizeImageName.call(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:47)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:125)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:139)
at nextflow.processor.TaskRun.getContainer(TaskRun.groovy:587)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:107)
at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:323)
at org.codehaus.groovy.runtime.metaclass.MethodMetaProperty$GetBeanMethodMetaProperty.getProperty(MethodMetaProperty.java:76)
at org.codehaus.groovy.runtime.callsite.GetEffectivePogoPropertySite.getProperty(GetEffectivePogoPropertySite.java:85)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callGroovyObjectGetProperty(AbstractCallSite.java:341)
at nextflow.processor.TaskProcessor.createTaskHashKey(TaskProcessor.groovy:1939)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.codehaus.groovy.runtime.callsite.PlainObjectMetaMethodSite.doInvoke(PlainObjectMetaMethodSite.java:43)
at org.codehaus.groovy.runtime.callsite.PogoMetaMethodSite$PogoCachedMethodSiteNoUnwrapNoCoerce.invoke(PogoMetaMethodSite.java:193)
at org.codehaus.groovy.runtime.callsite.PogoMetaMethodSite.callCurrent(PogoMetaMethodSite.java:61)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCallCurrent(CallSiteArray.java:51)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:171)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:185)
at nextflow.processor.TaskProcessor.invokeTask(TaskProcessor.groovy:591)
at nextflow.processor.InvokeTaskAdapter.call(InvokeTaskAdapter.groovy:59)
at groovyx.gpars.dataflow.operator.DataflowOperatorActor.startTask(DataflowOperatorActor.java:120)
at groovyx.gpars.dataflow.operator.ForkingDataflowOperatorActor.access$001(ForkingDataflowOperatorActor.java:35)
at groovyx.gpars.dataflow.operator.ForkingDataflowOperatorActor$1.run(ForkingDataflowOperatorActor.java:58)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalStateException: Failed to pull singularity image
command: singularity pull --name quay.io-biocontainers-sickle-trim-1.33--2.img.pulling.1632265252300 docker://quay.io/biocontainers/sickle-trim:1.33--2 > /dev/null
status : 127
message:
WARNING: pull for Docker Hub is not guaranteed to produce the
WARNING: same image on repeated pull. Use Singularity Registry
WARNING: (shub://) to pull exactly equivalent images.
/usr/bin/env: ‘python’: No such file or directory
ERROR: pulling container failed!
at nextflow.container.SingularityCache.runCommand(SingularityCache.groovy:256)
at nextflow.container.SingularityCache.downloadSingularityImage0(SingularityCache.groovy:223)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:107)
at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:323)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1268)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1035)
at org.codehaus.groovy.runtime.InvokerHelper.invokePogoMethod(InvokerHelper.java:1029)
at org.codehaus.groovy.runtime.InvokerHelper.invokeMethod(InvokerHelper.java:1012)
at org.codehaus.groovy.runtime.InvokerHelper.invokeMethodSafe(InvokerHelper.java:101)
at nextflow.container.SingularityCache$_downloadSingularityImage_closure1.doCall(SingularityCache.groovy:191)
at nextflow.container.SingularityCache$_downloadSingularityImage_closure1.call(SingularityCache.groovy)
at nextflow.file.FileMutex.lock(FileMutex.groovy:107)
at nextflow.container.SingularityCache.downloadSingularityImage(SingularityCache.groovy:191)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:107)
at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:323)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1268)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1035)
at org.codehaus.groovy.runtime.InvokerHelper.invokePogoMethod(InvokerHelper.java:1029)
at org.codehaus.groovy.runtime.InvokerHelper.invokeMethod(InvokerHelper.java:1012)
at org.codehaus.groovy.runtime.InvokerHelper.invokeMethodSafe(InvokerHelper.java:101)
at nextflow.container.SingularityCache$_getLazyImagePath_closure2.doCall(SingularityCache.groovy:281)
at nextflow.container.SingularityCache$_getLazyImagePath_closure2.call(SingularityCache.groovy)
at groovyx.gpars.dataflow.LazyDataflowVariable$1.run(LazyDataflowVariable.java:70)
... 3 common frames omitted
Sep-21 16:00:52.443 [Actor Thread 5] DEBUG nextflow.Session - Session aborted -- Cause: java.lang.IllegalStateException: Failed to pull singularity image
command: singularity pull --name quay.io-biocontainers-sickle-trim-1.33--2.img.pulling.1632265252300 docker://quay.io/biocontainers/sickle-trim:1.33--2 > /dev/null
status : 127
message:
WARNING: pull for Docker Hub is not guaranteed to produce the
WARNING: same image on repeated pull. Use Singularity Registry
WARNING: (shub://) to pull exactly equivalent images.
/usr/bin/env: ‘python’: No such file or directory
ERROR: pulling container failed!
Sep-21 16:00:52.494 [Actor Thread 5] DEBUG nextflow.Session - The following nodes are still active:
[process] QC_TRIM_READS
status=ACTIVE
port 0: (queue) closed; channel: fastqFile
port 1: (cntrl) - ; channel: $
Sep-21 16:00:52.507 [main] DEBUG nextflow.Session - Session await > all process finished
Sep-21 16:00:52.510 [main] DEBUG nextflow.Session - Session await > all barriers passed
Sep-21 16:00:52.521 [main] DEBUG nextflow.trace.WorkflowStatsObserver - Workflow completed > WorkflowStats[succeededCount=0; failedCount=0; ignoredCount=0; cachedCount=0; pendingCount=0; submittedCount=0; runningCount=0; retriesCount=0; abortedCount=0; succeedDuration=0ms; failedDuration=0ms; cachedDuration=0ms;loadCpus=0; loadMemory=0; peakRunning=0; peakCpus=0; peakMemory=0; ]
Sep-21 16:00:52.685 [main] DEBUG nextflow.CacheDB - Closing CacheDB done
Sep-21 16:00:52.752 [main] DEBUG nextflow.script.ScriptRunner - > Execution complete -- Goodbye
I have been using nf-core's rnaseq pipeline to guide me a bit: https://github.com/nf-core/rnaseq
If it helps, here is the pipeline I am trying to automate: https://bioinformatics.uconn.edu/resources-and-events/tutorials-2/rna-seq-tutorial-with-reference-genome/#
My computer runs Windows 10, and I have enabled WSL2 with Ubuntu.
I am fairly new to Docker, Singularity, and Nextflow, so I am hoping someone can explain the error. I don't even understand why Python is being mentioned. Is the issue that Singularity cannot pull from Quay.io? I am a bit lost and would appreciate a nudge in the right direction.
Also, the reason I am trying to get Singularity to work is that STAR immediately gives me a segmentation fault on my local machine (I'm assuming I run out of memory), and I would like to test this pipeline on our HPC (where I don't have root privileges).
You can ignore the Singularity warnings, but not the errors. The problem looks to be that you're missing python in your environment:
/usr/bin/env: ‘python’: No such file or directory
ERROR: pulling container failed!
You need to make sure you have Python installed. If you have it, you should be able to see it with:
/usr/bin/python --version
You didn't mention which version of Ubuntu you are using, but Ubuntu 20.04 ships with Python 3 pre-installed. If that is the case and /usr/bin/python3 --version works (note the '3') but the command above doesn't, try:
sudo apt-get install python-is-python3
This installs a symlink that points the /usr/bin/python interpreter at the current default python3.
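Afterwards, a quick sanity check should look something like this (the exact version will differ on your system):

$ which python
/usr/bin/python
$ python --version
Python 3.8.10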
I have a Rails 5.2.3 application with Ruby 2.4.5.
I found a weird issue: the request info is logged twice to stdout.
Here is the log config in config/environments/production.rb:
config.logger = ActiveSupport::TaggedLogging.new(ActiveSupport::Logger.new(STDOUT))
config.log_tags = [ lambda {|req| "#{req.cookie_jar["_session_id"]}" }, :remote_ip, :uuid ]
Supposedly this tags every log line with the session id, remote IP, and request UUID, and it does for most logs, except there is a weird additional log line for each request. In the following example, the last line is a duplicate log of the request, without the tags:
[INFO] [2019-10-28 06:11:45 UTC] [127.0.0.1] [f6de1900-a7e5-4486-8b73-7095d0cacb35] Started GET "/api/v1/nodes?pageSize=20&pageNumber=1" for 127.0.0.1 at 2019-10-28 06:11:45 +0000
[INFO] [2019-10-28 06:11:45 UTC] [127.0.0.1] [f6de1900-a7e5-4486-8b73-7095d0cacb35] Processing by Api::V1::NodesController#index as XML
...
...
[INFO] [2019-10-28 06:20:02 UTC] [127.0.0.1] [993e0db4-3995-41ef-851a-bfea1bc25781] Completed 200 OK in 1084ms (Views: 341.9ms | ActiveRecord: 150.6ms)
127.0.0.1 - - [28/Oct/2019:06:20:02 +0000] "GET /api/v1/nodes?pageSize=20&pageNumber=1 HTTP/1.1" 200 - 1.1347
I checked the configuration; there is no other logger configured.
Making things worse, the client side has a timer that checks the notification status every 5 seconds. I added a log silencer to avoid logging such requests:
# config/application.rb
config.middleware.insert_before Rails::Rack::Logger, LogSilencer, silenced: /notification_messages/
# lib/log_silencer.rb
class LogSilencer
  def initialize(app, opts = {})
    @app = app
    @silenced = opts.delete(:silenced)
  end

  def call(env)
    if @silenced.match(env['PATH_INFO'])
      Rails.logger.silence do
        @app.call(env)
      end
    else
      @app.call(env)
    end
  end
end
It does stop the tagged logger from logging these requests, but the duplicate request logs are still there, so stdout fills up with them:
127.0.0.1 - - [28/Oct/2019:07:04:21 +0000] "GET /api/v1/notification_messages/to_notify HTTP/1.1" 200 - 0.0118
127.0.0.1 - - [28/Oct/2019:07:04:26 +0000] "GET /api/v1/notification_messages/to_notify HTTP/1.1" 200 - 0.0140
127.0.0.1 - - [28/Oct/2019:07:04:31 +0000] "GET /api/v1/notification_messages/to_notify HTTP/1.1" 200 - 0.0137
127.0.0.1 - - [28/Oct/2019:07:04:36 +0000] "GET /api/v1/notification_messages/to_notify HTTP/1.1" 200 - 0.0149
...
I spent a whole day trying to find out what generates this log, but got no clue.
I would like help on how to turn this log off, so the log data stays clean.
Thanks!
It seems the logs come from the web server used to start the Rails app.
When using WEBrick, the log looks like:
=> Booting WEBrick
=> Rails 5.2.3 application starting in development on http://localhost:4000
=> Run `rails server -h` for more startup options
[2019-10-30 07:01:13] INFO WEBrick 1.3.1
[2019-10-30 07:01:13] INFO ruby 2.4.5 (2018-10-18) [x86_64-linux]
[2019-10-30 07:01:13] INFO WEBrick::HTTPServer#start: pid=5812 port=4000
127.0.0.1 - - [30/Oct/2019:07:01:17 UTC] "GET /test HTTP/1.1" 304 0
- -> /test
When using Unicorn, the log looks like:
I, [2019-10-30T07:02:47.626962 #5956] INFO -- : Refreshing Gem list
I, [2019-10-30T07:02:48.289373 #5956] INFO -- : listening on addr=0.0.0.0:4000 fd=17
I, [2019-10-30T07:02:48.393598 #5956] INFO -- : master process ready
I, [2019-10-30T07:02:48.394853 #5965] INFO -- : worker=0 ready
I, [2019-10-30T07:02:48.399878 #5968] INFO -- : worker=1 ready
127.0.0.1 - - [30/Oct/2019:07:02:52 +0000] "GET /test HTTP/1.1" 304 - 0.0724
I can see that the request log formats differ slightly (the time zone info, the time consumed, etc.).
I use Unicorn; its logger writes to stderr by default, but the logger configuration did not seem to take effect:
logger Logger.new("#{rails_root}/log/unicorn.log")
so I had to set the stderr path instead:
stderr_path "#{rails_root}/log/unicorn.stderr.log"
Then the request logs land in unicorn.stderr.log, and stdout carries only the Rails app logs.
But I still don't know how to actually turn them off, since they are duplicated and useless.
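Putting it together, the relevant part of my Unicorn config ends up looking like this (a sketch; rails_root is assumed to be derived as shown):

# config/unicorn.rb -- sketch of the settings described above
rails_root = File.expand_path('../..', __FILE__)

# Unicorn's own logger (startup and worker messages).
logger Logger.new("#{rails_root}/log/unicorn.log")

# The untagged access lines go to stderr, so point stderr at a file to
# keep them out of stdout; stdout then carries only the Rails app logs.
stderr_path "#{rails_root}/log/unicorn.stderr.log"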
I'm trying to switch the connection pool in my Grails 2.5.4 app to Hikari, and I found it gives a pretty big performance boost compared to the default Tomcat JDBC pool!
However, I've stumbled upon a problem when running some integration tests. This is the test method that is now failing:
def 'Returns the bar with the least foo'() {
    given:
    def foobar = Foobar.build()

    and:
    def bar1 = Bar.build(foo: 25, foobar: foobar)
    def item1 = BarItem.build(state: AVAILABLE)
    item1.addToBars(bar1)

    and:
    def bar2 = Bar.build(foo: 12, foobar: foobar)
    def item2 = BarItem.build(state: AVAILABLE)
    item2.addToBars(bar2)

    when:
    def bestBar = foobar.getBestBar()

    then:
    bestBar.id == bar2.id

    when:
    item2.state = State.BLACKED_OUT
    item2.save(flush: true)
    def refreshedFoobar
    Foobar.withNewSession {
        refreshedFoobar = Foobar.get(foobar.id) // This is returning null
    }

    and:
    bestBar = refreshedFoobar.getBestBar() // NullPointerException here

    then:
    bestBar.id == bar1.id
}
Why is this happening? It seems like things aren't being pushed to the database properly, as if the session is holding them and waiting to send them later.
Here is my Hikari config:
def hp = new Properties()
hp.username = ds.username
hp.password = ds.password
hp.connectionTimeout = ds.maxWait
hp.maximumPoolSize = ds.maxActive
hp.minimumIdle = ds.minIdle
hp.jdbcUrl = ds.url
hp.driverClassName = ds.driverClassName
HikariConfig hc = new HikariConfig(hp)
hc.with {
    addDataSourceProperty("prepStmtCacheSize", 500)
    addDataSourceProperty("prepStmtCacheSqlLimit", 2048)
    addDataSourceProperty("cachePrepStmts", true)
    addDataSourceProperty("useServerPrepStmts", true)
}
with the values being:
dialect = org.hibernate.dialect.MySQL5InnoDBDialect
driverClassName = 'com.mysql.jdbc.Driver'
username = 'user'
password = 'pass'
maxActive = 250
minIdle = 5
maxWait = 10000
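For context, the resulting HikariConfig is handed to the pool roughly like this (a hypothetical wiring sketch; in my app this happens in resources.groovy):

import com.zaxxer.hikari.HikariDataSource

beans = {
    // Build the pool from the HikariConfig above and register it as the
    // app's dataSource bean, replacing the default Tomcat JDBC pool.
    dataSource(HikariDataSource, hc)
}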
Log:
2016-08-04 17:49:28,070 [main] DEBUG hikari.HikariConfig - HikariPool-1 - configuration:
2016-08-04 17:49:28,071 [main] DEBUG hikari.HikariConfig - allowPoolSuspension.............false
2016-08-04 17:49:28,072 [main] DEBUG hikari.HikariConfig - autoCommit......................true
2016-08-04 17:49:28,072 [main] DEBUG hikari.HikariConfig - catalog.........................null
2016-08-04 17:49:28,072 [main] DEBUG hikari.HikariConfig - connectionInitSql...............null
2016-08-04 17:49:28,072 [main] DEBUG hikari.HikariConfig - connectionTestQuery.............null
2016-08-04 17:49:28,072 [main] DEBUG hikari.HikariConfig - connectionTimeout...............10000
2016-08-04 17:49:28,072 [main] DEBUG hikari.HikariConfig - dataSource......................null
2016-08-04 17:49:28,072 [main] DEBUG hikari.HikariConfig - dataSourceClassName.............null
2016-08-04 17:49:28,073 [main] DEBUG hikari.HikariConfig - dataSourceJNDI..................null
2016-08-04 17:49:28,073 [main] DEBUG hikari.HikariConfig - dataSourceProperties............{password=<masked>}
2016-08-04 17:49:28,073 [main] DEBUG hikari.HikariConfig - driverClassName................."com.mysql.jdbc.Driver"
2016-08-04 17:49:28,073 [main] DEBUG hikari.HikariConfig - healthCheckProperties...........{}
2016-08-04 17:49:28,073 [main] DEBUG hikari.HikariConfig - healthCheckRegistry.............null
2016-08-04 17:49:28,073 [main] DEBUG hikari.HikariConfig - idleTimeout.....................600000
2016-08-04 17:49:28,074 [main] DEBUG hikari.HikariConfig - initializationFailFast..........true
2016-08-04 17:49:28,074 [main] DEBUG hikari.HikariConfig - isolateInternalQueries..........false
2016-08-04 17:49:28,074 [main] DEBUG hikari.HikariConfig - jdbc4ConnectionTest.............false
2016-08-04 17:49:28,074 [main] DEBUG hikari.HikariConfig - jdbcUrl........................."jdbc:mysql://localhost:3306/foo_test?autoReconnect=true"
2016-08-04 17:49:28,074 [main] DEBUG hikari.HikariConfig - leakDetectionThreshold..........0
2016-08-04 17:49:28,074 [main] DEBUG hikari.HikariConfig - maxLifetime.....................1800000
2016-08-04 17:49:28,074 [main] DEBUG hikari.HikariConfig - maximumPoolSize.................250
2016-08-04 17:49:28,074 [main] DEBUG hikari.HikariConfig - metricRegistry..................null
2016-08-04 17:49:28,075 [main] DEBUG hikari.HikariConfig - metricsTrackerFactory...........null
2016-08-04 17:49:28,075 [main] DEBUG hikari.HikariConfig - minimumIdle.....................250
2016-08-04 17:49:28,075 [main] DEBUG hikari.HikariConfig - password........................<masked>
2016-08-04 17:49:28,075 [main] DEBUG hikari.HikariConfig - poolName........................"HikariPool-1"
2016-08-04 17:49:28,075 [main] DEBUG hikari.HikariConfig - readOnly........................false
2016-08-04 17:49:28,075 [main] DEBUG hikari.HikariConfig - registerMbeans..................false
2016-08-04 17:49:28,075 [main] DEBUG hikari.HikariConfig - scheduledExecutorService........null
2016-08-04 17:49:28,075 [main] DEBUG hikari.HikariConfig - threadFactory...................null
2016-08-04 17:49:28,076 [main] DEBUG hikari.HikariConfig - transactionIsolation............null
2016-08-04 17:49:28,076 [main] DEBUG hikari.HikariConfig - username........................"root"
2016-08-04 17:49:28,076 [main] DEBUG hikari.HikariConfig - validationTimeout...............5000
2016-08-04 17:49:28,077 [main] INFO hikari.HikariDataSource - HikariPool-1 - Started.
2016-08-04 17:49:28,287 [main] INFO pool.PoolBase - HikariPool-1 - Driver does not support get/set network timeout for connections. (com.mysql.jdbc.JDBC4Connection.getNetworkTimeout()I)
2016-08-04 17:49:28,312 [HikariPool-1 housekeeper] DEBUG pool.HikariPool - HikariPool-1 - Pool stats (total=0, active=0, idle=0, waiting=0)
2016-08-04 17:49:28,381 [HikariPool-1 connection adder] DEBUG pool.HikariPool - HikariPool-1 - Added connection com.mysql.jdbc.JDBC4Connection#2929ca0e
....
Then I get a lot of these:
com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: Could not create connection to database server. Attempted reconnect 3 times. Giving up.
...
Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: Data source rejected establishment of connection, message from server: "Too many connections"
But I think that's just because of my local MySQL settings, since I am using the production pool size in my test config.
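If so, shrinking the pool for local tests should rule it out; a hedged sketch for DataSource.groovy (the property names mirror the ones above):

// grails-app/conf/DataSource.groovy -- hypothetical test-environment override
environments {
    test {
        dataSource {
            maxActive = 10  // stay well under the local MySQL max_connections
            minIdle = 2
        }
    }
}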