SonarScanner cannot reach SonarQube server using docker-compose - docker

I have just created my docker-compose file, trying to run a SonarQube server alongside Postgres and SonarScanner. The SonarQube server and the database can connect; however, my SonarScanner cannot reach the SonarQube server.
This is my docker-compose file:
version: "3"
services:
  sonarqube:
    image: sonarqube
    build: .
    expose:
      - 9000
    ports:
      - "127.0.0.1:9000:9000"
    networks:
      - sonarnet
    environment:
      - sonar.jdbc.url=jdbc:postgresql://postgres:5432/sonar
      - sonar.jdbc.username=sonar
      - sonar.jdbc.password=sonar
    volumes:
      - sonarqube_conf:/opt/sonarqube/conf
      - sonarqube_data:/opt/sonarqube/data
      - sonarqube_extensions:/opt/sonarqube/extensions
      - sonarqube_bundled-plugins:/opt/sonarqube/lib/bundled-plugins
  postgres:
    image: postgres
    build: .
    networks:
      - sonarnet
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_USER=sonar
      - POSTGRES_PASSWORD=sonar
    volumes:
      - postgresql:/var/lib/postgresql
      - postgresql_data:/var/lib/postgresql/data
  sonarscanner:
    image: newtmitch/sonar-scanner
    networks:
      - sonarnet
    depends_on:
      - sonarqube
    volumes:
      - ./:/usr/src
networks:
  sonarnet:
volumes:
  sonarqube_conf:
  sonarqube_data:
  sonarqube_extensions:
  sonarqube_bundled-plugins:
  postgresql:
  postgresql_data:
This is my sonar-project.properties file:
# must be unique in a given SonarQube instance
sonar.projectKey=toh-token
# --- optional properties ---
#defaults to project key
#sonar.projectName=toh
# defaults to 'not provided'
#sonar.projectVersion=1.0
# Path is relative to the sonar-project.properties file. Defaults to .
#sonar.sources=$HOME/.solo/angular/toh
# Encoding of the source code. Default is default system encoding
#sonar.sourceEncoding=UTF-8
My sonar-project.properties is located in the same directory as the docker-compose file.
This is what happens whenever I start the services:
Attaching to sonarqube-postgres-1, sonarqube-sonarqube-1, sonarqube-sonarscanner-1
sonarqube-sonarqube-1 | Dropping Privileges
sonarqube-postgres-1 |
sonarqube-postgres-1 | PostgreSQL Database directory appears to contain a database; Skipping initialization
sonarqube-postgres-1 |
sonarqube-postgres-1 | 2022-06-12 20:59:39.522 UTC [1] LOG: starting PostgreSQL 14.3 (Debian 14.3-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
sonarqube-postgres-1 | 2022-06-12 20:59:39.523 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
sonarqube-postgres-1 | 2022-06-12 20:59:39.523 UTC [1] LOG: listening on IPv6 address "::", port 5432
sonarqube-postgres-1 | 2022-06-12 20:59:39.525 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
sonarqube-postgres-1 | 2022-06-12 20:59:39.533 UTC [26] LOG: database system was shut down at 2022-06-12 20:57:58 UTC
sonarqube-postgres-1 | 2022-06-12 20:59:39.542 UTC [1] LOG: database system is ready to accept connections
sonarqube-sonarscanner-1 | INFO: Scanner configuration file: /usr/lib/sonar-scanner/conf/sonar-scanner.properties
sonarqube-sonarscanner-1 | INFO: Project root configuration file: /usr/src/sonar-project.properties
sonarqube-sonarscanner-1 | INFO: SonarScanner 4.5.0.2216
sonarqube-sonarscanner-1 | INFO: Java 12-ea Oracle Corporation (64-bit)
sonarqube-sonarscanner-1 | INFO: Linux 5.10.117-1-MANJARO amd64
sonarqube-sonarscanner-1 | INFO: User cache: /root/.sonar/cache
sonarqube-sonarqube-1 | 2022.06.12 20:59:40 INFO app[][o.s.a.AppFileSystem] Cleaning or creating temp directory /opt/sonarqube/temp
sonarqube-sonarqube-1 | 2022.06.12 20:59:40 INFO app[][o.s.a.es.EsSettings] Elasticsearch listening on [HTTP: 127.0.0.1:9001, TCP: 127.0.0.1:41087]
sonarqube-sonarscanner-1 | ERROR: SonarQube server [http://sonarqube:9000] can not be reached
sonarqube-sonarscanner-1 | INFO: ------------------------------------------------------------------------
sonarqube-sonarscanner-1 | INFO: EXECUTION FAILURE
sonarqube-sonarscanner-1 | INFO: ------------------------------------------------------------------------
sonarqube-sonarscanner-1 | INFO: Total time: 0.802s
sonarqube-sonarscanner-1 | INFO: Final Memory: 3M/20M
sonarqube-sonarscanner-1 | INFO: ------------------------------------------------------------------------
sonarqube-sonarscanner-1 | ERROR: Error during SonarScanner execution
sonarqube-sonarscanner-1 | org.sonarsource.scanner.api.internal.ScannerException: Unable to execute SonarScanner analysis
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.IsolatedLauncherFactory.lambda$createLauncher$0(IsolatedLauncherFactory.java:85)
sonarqube-sonarscanner-1 | at java.base/java.security.AccessController.doPrivileged(AccessController.java:310)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.IsolatedLauncherFactory.createLauncher(IsolatedLauncherFactory.java:74)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.IsolatedLauncherFactory.createLauncher(IsolatedLauncherFactory.java:70)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.EmbeddedScanner.doStart(EmbeddedScanner.java:185)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.EmbeddedScanner.start(EmbeddedScanner.java:123)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.cli.Main.execute(Main.java:73)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.cli.Main.main(Main.java:61)
sonarqube-sonarscanner-1 | Caused by: java.lang.IllegalStateException: Fail to get bootstrap index from server
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.BootstrapIndexDownloader.getIndex(BootstrapIndexDownloader.java:42)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.JarDownloader.getScannerEngineFiles(JarDownloader.java:58)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.JarDownloader.download(JarDownloader.java:53)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.IsolatedLauncherFactory.lambda$createLauncher$0(IsolatedLauncherFactory.java:76)
sonarqube-sonarscanner-1 | ... 7 more
sonarqube-sonarscanner-1 | Caused by: java.net.ConnectException: Failed to connect to sonarqube/172.30.0.2:9000
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.connection.RealConnection.connectSocket(RealConnection.java:265)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.connection.RealConnection.connect(RealConnection.java:183)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.connection.ExchangeFinder.findConnection(ExchangeFinder.java:224)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.connection.ExchangeFinder.findHealthyConnection(ExchangeFinder.java:108)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.connection.ExchangeFinder.find(ExchangeFinder.java:88)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.connection.Transmitter.newExchange(Transmitter.java:169)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.java:41)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:117)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.cache.CacheInterceptor.intercept(CacheInterceptor.java:94)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:117)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.java:93)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.java:88)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:117)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.RealCall.getResponseWithInterceptorChain(RealCall.java:221)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.RealCall.execute(RealCall.java:81)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.ServerConnection.callUrl(ServerConnection.java:114)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.ServerConnection.downloadString(ServerConnection.java:99)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.BootstrapIndexDownloader.getIndex(BootstrapIndexDownloader.java:39)
sonarqube-sonarscanner-1 | ... 10 more
sonarqube-sonarscanner-1 | Caused by: java.net.ConnectException: Connection refused (Connection refused)
sonarqube-sonarscanner-1 | at java.base/java.net.PlainSocketImpl.socketConnect(Native Method)
sonarqube-sonarscanner-1 | at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:399)
sonarqube-sonarscanner-1 | at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:242)
sonarqube-sonarscanner-1 | at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:224)
sonarqube-sonarscanner-1 | at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:403)
sonarqube-sonarscanner-1 | at java.base/java.net.Socket.connect(Socket.java:591)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.platform.Platform.connectSocket(Platform.java:130)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.connection.RealConnection.connectSocket(RealConnection.java:263)
sonarqube-sonarscanner-1 | ... 31 more
sonarqube-sonarscanner-1 | ERROR:
sonarqube-sonarscanner-1 | ERROR: Re-run SonarScanner using the -X switch to enable full debug logging.
Is there something I am doing wrong?

As @Hans Killian said, the issue was with the scanner trying to connect to the server before the server was up and running. I fixed it by adding the following to the scanner's service definition:
command: ["sh", "-c", "sleep 60 && sonar-scanner -Dsonar.projectBaseDir=/usr/src"]
This suspends the scanner until the server is up and running.
I then added the following credentials to the sonar-project.properties file:
sonar.login=admin
sonar.password=admin
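
A fixed sleep works but only guesses at startup time. A more robust variant is a healthcheck on the sonarqube service that the scanner waits on. This is only a sketch, assuming curl is available inside the SonarQube image and that your Compose setup supports depends_on conditions (the 2.x file format, or a recent Docker Compose implementing the Compose Specification); /api/system/status is SonarQube's standard status endpoint:

  sonarqube:
    # ...image, ports, networks as in the original file...
    healthcheck:
      test: ["CMD-SHELL", "curl -sf http://localhost:9000/api/system/status | grep -q UP"]
      interval: 10s
      timeout: 5s
      retries: 30
  sonarscanner:
    image: newtmitch/sonar-scanner
    networks:
      - sonarnet
    depends_on:
      sonarqube:
        condition: service_healthy
    volumes:
      - ./:/usr/src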

Related

node-red localhost not connecting

I am running Node-RED in Docker Compose; from the .gitlab-ci.yml file I am calling the docker/compose image. My pipeline is working and I can see this:
node-red | 11 Nov 11:28:51 - [info]
node-red |
node-red | Welcome to Node-RED
node-red | ===================
node-red |
node-red | 11 Nov 11:28:51 - [info] Node-RED version: v3.0.2
node-red | 11 Nov 11:28:51 - [info] Node.js version: v16.16.0
node-red | 11 Nov 11:28:51 - [info] Linux 5.15.49-linuxkit x64 LE
node-red | 11 Nov 11:28:52 - [info] Loading palette nodes
node-red | 11 Nov 11:28:53 - [info] Settings file : /data/settings.js
node-red | 11 Nov 11:28:53 - [info] Context store : 'default' [module=memory]
node-red | 11 Nov 11:28:53 - [info] User directory : /data
node-red | 11 Nov 11:28:53 - [warn] Projects disabled : editorTheme.projects.enabled=false
node-red | 11 Nov 11:28:53 - [info] Flows file : /data/flows.json
node-red | 11 Nov 11:28:53 - [warn]
node-red |
node-red | ---------------------------------------------------------------------
node-red | Your flow credentials file is encrypted using a system-generated key.
node-red |
node-red | If the system-generated key is lost for any reason, your credentials
node-red | file will not be recoverable, you will have to delete it and re-enter
node-red | your credentials.
node-red |
node-red | You should set your own key using the 'credentialSecret' option in
node-red | your settings file. Node-RED will then re-encrypt your credentials
node-red | file using your chosen key the next time you deploy a change.
node-red | ---------------------------------------------------------------------
node-red |
node-red | 11 Nov 11:28:53 - [info] Server now running at http://127.0.0.1:1880/
node-red | 11 Nov 11:28:53 - [warn] Encrypted credentials not found
node-red | 11 Nov 11:28:53 - [info] Starting flows
node-red | 11 Nov 11:28:53 - [info] Started flows
But when I try to open localhost to access Node-RED or the dashboard, I get the error "Failed to open page".
This is my docker-compose.yml
version: "3.7"
services:
  node-red:
    image: nodered/node-red:latest
    user: '1000'
    container_name: node-red
    environment:
      - TZ=Europe/Amsterdam
    ports:
      - "1880:1880"
This is my .gitlab-ci.yml:
yateen-docker:
  stage: build
  image:
    name: docker/compose
  services:
    - docker:dind
  variables:
    DOCKER_HOST: tcp://docker:2375/
    DOCKER_DRIVER: overlay2
    DOCKER_TLS_CERTDIR: ""
  before_script:
    - echo "$CI_REGISTRY_PASSWORD" | docker login -u $CI_REGISTRY_USER $CI_REGISTRY --password-stdin
  script:
    - docker-compose up
  only:
    - main
Any help would be appreciated!
I tried to create the Node-RED container via docker-compose rather than just a docker run command. The node-red image is running, but I can't access the server page.

Why can't I access my web app from a remote container?

I have looked through past answers since this is a common question but no solution seems to work for me.
I have a Spring Boot Java app and a PostgreSQL database, each in their own container. Both containers run on a remote headless server on a local network. My remote server's physical IP address is 192.168.1.200. When I enter http://192.168.1.200:8080 in my browser from another machine, I get an 'unable to connect' response.
Here is my docker-compose file:
version: "3.3"
services:
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: aaa
    volumes:
      - /var/lib/postgresql/data
    ports:
      - "5432:5432"
    networks:
      - nat
  web:
    image: email-viewer
    ports:
      - "192.168.1.200:8080:80"
    depends_on:
      - db
    networks:
      - nat
networks:
  nat:
    external:
      name: nat
Here is the output when I run docker-compose up:
Recreating email-viewer_db_1 ... done
Recreating email-viewer_web_1 ... done
Attaching to email-viewer_db_1, email-viewer_web_1
db_1 |
db_1 | PostgreSQL Database directory appears to contain a database; Skipping initialization
db_1 |
db_1 | 2022-05-06 16:13:24.300 UTC [1] LOG: starting PostgreSQL 14.2 (Debian 14.2-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
db_1 | 2022-05-06 16:13:24.300 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db_1 | 2022-05-06 16:13:24.300 UTC [1] LOG: listening on IPv6 address "::", port 5432
db_1 | 2022-05-06 16:13:24.305 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2022-05-06 16:13:24.311 UTC [27] LOG: database system was shut down at 2022-05-06 13:54:07 UTC
db_1 | 2022-05-06 16:13:24.319 UTC [1] LOG: database system is ready to accept connections
web_1 |
web_1 | . ____ _ __ _ _
web_1 | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
web_1 | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
web_1 | \\/ ___)| |_)| | | | | || (_| | ) ) ) )
web_1 | ' |____| .__|_| |_|_| |_\__, | / / / /
web_1 | =========|_|==============|___/=/_/_/_/
web_1 | :: Spring Boot :: (v2.6.5)
web_1 |
web_1 | 2022-05-06 16:13:25.487 INFO 1 --- [ main] c.a.emailviewer.EmailViewerApplication : Starting EmailViewerApplication v0.0.1-SNAPSHOT using Java 17.0.2 on 1a13d69d117d with PID 1 (/app/email-viewer-0.0.1-SNAPSHOT.jar started by root in /app)
web_1 | 2022-05-06 16:13:25.490 INFO 1 --- [ main] c.a.emailviewer.EmailViewerApplication : No active profile set, falling back to 1 default profile: "default"
web_1 | 2022-05-06 16:13:26.137 INFO 1 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Bootstrapping Spring Data JPA repositories in DEFAULT mode.
web_1 | 2022-05-06 16:13:26.184 INFO 1 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Finished Spring Data repository scanning in 38 ms. Found 1 JPA repository interfaces.
web_1 | 2022-05-06 16:13:26.764 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8080 (http)
web_1 | 2022-05-06 16:13:26.774 INFO 1 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat]
web_1 | 2022-05-06 16:13:26.775 INFO 1 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/9.0.60]
web_1 | 2022-05-06 16:13:26.843 INFO 1 --- [ main] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
web_1 | 2022-05-06 16:13:26.843 INFO 1 --- [ main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 1297 ms
web_1 | 2022-05-06 16:13:27.031 INFO 1 --- [ main] o.hibernate.jpa.internal.util.LogHelper : HHH000204: Processing PersistenceUnitInfo [name: default]
web_1 | 2022-05-06 16:13:27.077 INFO 1 --- [ main] org.hibernate.Version : HHH000412: Hibernate ORM core version 5.6.7.Final
web_1 | 2022-05-06 16:13:27.222 INFO 1 --- [ main] o.hibernate.annotations.common.Version : HCANN000001: Hibernate Commons Annotations {5.1.2.Final}
web_1 | 2022-05-06 16:13:27.313 INFO 1 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Starting...
web_1 | 2022-05-06 16:13:27.506 INFO 1 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Start completed.
web_1 | 2022-05-06 16:13:27.539 INFO 1 --- [ main] org.hibernate.dialect.Dialect : HHH000400: Using dialect: org.hibernate.dialect.PostgreSQL10Dialect
web_1 | 2022-05-06 16:13:28.034 INFO 1 --- [ main] o.h.e.t.j.p.i.JtaPlatformInitiator : HHH000490: Using JtaPlatform implementation: [org.hibernate.engine.transaction.jta.platform.internal.NoJtaPlatform]
web_1 | 2022-05-06 16:13:28.042 INFO 1 --- [ main] j.LocalContainerEntityManagerFactoryBean : Initialized JPA EntityManagerFactory for persistence unit 'default'
web_1 | 2022-05-06 16:13:28.330 WARN 1 --- [ main] JpaBaseConfiguration$JpaWebConfiguration : spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
web_1 | 2022-05-06 16:13:28.663 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
web_1 | 2022-05-06 16:13:28.672 INFO 1 --- [ main] c.a.emailviewer.EmailViewerApplication : Started EmailViewerApplication in 3.615 seconds (JVM running for 4.024)
It turns out that the embedded Tomcat server in Spring Boot listens on port 8080 by default, so the mapping needs to be 192.168.1.200:8080:8080 instead of 192.168.1.200:8080:80.
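In compose terms, that fix changes only the web service's port mapping (the format is host-IP:host-port:container-port):

  web:
    image: email-viewer
    ports:
      - "192.168.1.200:8080:8080"   # Tomcat listens on 8080 inside the container
    depends_on:
      - db
    networks:
      - nat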

Docker Static Analysis With Clair?

Can anyone help with Docker static analysis using Clair?
I get an error when analyzing; please help me figure it out, or tell me how to install the Clair scanner correctly.
Getting Setup
git clone git@github.com:Charlie-belmer/Docker-security-example.git
docker-compose.yml
version: '2.1'
services:
  postgres:
    image: postgres:12.1
    restart: unless-stopped
    volumes:
      - ./docker-compose-data/postgres-data/:/var/lib/postgresql/data:rw
    environment:
      - POSTGRES_PASSWORD=ChangeMe
      - POSTGRES_USER=clair
      - POSTGRES_DB=clair
  clair:
    image: quay.io/coreos/clair:v4.3.4
    restart: unless-stopped
    volumes:
      - ./docker-compose-data/clair-config/:/config/:ro
      - ./docker-compose-data/clair-tmp/:/tmp/:rw
    depends_on:
      postgres:
        condition: service_started
    command: [--log-level=debug, --config, /config/config.yml]
    user: root
  clairctl:
    image: jgsqware/clairctl:latest
    restart: unless-stopped
    environment:
      - DOCKER_API_VERSION=1.41
    volumes:
      - ./docker-compose-data/clairctl-reports/:/reports/:rw
      - /var/run/docker.sock:/var/run/docker.sock:ro
    depends_on:
      clair:
        condition: service_started
    user: root
docker-compose up
The server starts without errors but keeps repeating the same message. I don't understand what it doesn't like:
test@parallels-virtual-platform:~/Docker-security-example/clair$ docker-compose up
clair_postgres_1 is up-to-date
Recreating clair_clair_1 ... done
Recreating clair_clairctl_1 ... done
Attaching to clair_postgres_1, clair_clair_1, clair_clairctl_1
clair_1 | flag provided but not defined: -log-level
clair_1 | Usage of /bin/clair:
clair_1 | -conf value
clair_1 | The file system path to Clair's config file.
clair_1 | -mode value
clair_1 | The operation mode for this server. (default combo)
postgres_1 |
postgres_1 | PostgreSQL Database directory appears to contain a database; Skipping initialization
postgres_1 |
postgres_1 | 2021-11-16 22:55:36.851 UTC [1] LOG: starting PostgreSQL 12.1 (Debian 12.1-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
postgres_1 | 2021-11-16 22:55:36.851 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
postgres_1 | 2021-11-16 22:55:36.851 UTC [1] LOG: listening on IPv6 address "::", port 5432
postgres_1 | 2021-11-16 22:55:36.853 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres_1 | 2021-11-16 22:55:36.877 UTC [24] LOG: database system was shut down at 2021-11-16 22:54:58 UTC
postgres_1 | 2021-11-16 22:55:36.888 UTC [1] LOG: database system is ready to accept connections
postgres_1 | 2021-11-16 23:01:15.219 UTC [1] LOG: received smart shutdown request
postgres_1 | 2021-11-16 23:01:15.225 UTC [1] LOG: background worker "logical replication launcher" (PID 30) exited with exit code 1
postgres_1 |
postgres_1 | PostgreSQL Database directory appears to contain a database; Skipping initialization
postgres_1 |
postgres_1 | 2021-11-16 23:02:11.993 UTC [1] LOG: starting PostgreSQL 12.1 (Debian 12.1-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
postgres_1 | 2021-11-16 23:02:11.994 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
postgres_1 | 2021-11-16 23:02:11.994 UTC [1] LOG: listening on IPv6 address "::", port 5432
postgres_1 | 2021-11-16 23:02:11.995 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres_1 | 2021-11-16 23:02:12.009 UTC [26] LOG: database system was interrupted; last known up at 2021-11-16 23:00:37 UTC
postgres_1 | 2021-11-16 23:02:12.164 UTC [26] LOG: database system was not properly shut down; automatic recovery in progress
postgres_1 | 2021-11-16 23:02:12.166 UTC [26] LOG: redo starts at 0/1745C50
postgres_1 | 2021-11-16 23:02:12.166 UTC [26] LOG: invalid record length at 0/1745D38: wanted 24, got 0
postgres_1 | 2021-11-16 23:02:12.166 UTC [26] LOG: redo done at 0/1745D00
postgres_1 | 2021-11-16 23:02:12.180 UTC [1] LOG: database system is ready to accept connections
postgres_1 | 2021-11-16 23:02:12.471 UTC [33] ERROR: duplicate key value violates unique constraint "lock_name_key"
postgres_1 | 2021-11-16 23:02:12.471 UTC [33] DETAIL: Key (name)=(updater) already exists.
postgres_1 | 2021-11-16 23:02:12.471 UTC [33] STATEMENT: INSERT INTO Lock(name, owner, until) VALUES($1, $2, $3)
clair_clair_1 exited with code 2
clair_1 | flag provided but not defined: -log-level
clair_1 | Usage of /bin/clair:
clair_1 | -conf value
clair_1 | The file system path to Clair's config file.
clair_1 | -mode value
clair_1 | The operation mode for this server. (default combo)
clair_1 | flag provided but not defined: -log-level
clair_1 | Usage of /bin/clair:
clair_1 | -conf value
clair_1 | The file system path to Clair's config file.
clair_1 | -mode value
clair_1 | The operation mode for this server. (default combo)
clair_clair_1 exited with code 2
clair_1 | flag provided but not defined: -log-level
clair_1 | Usage of /bin/clair:
clair_1 | -conf value
clair_1 | The file system path to Clair's config file.
clair_1 | -mode value
clair_1 | The operation mode for this server. (default combo)
clair_clair_1 exited with code 2
clair_1 | flag provided but not defined: -log-level
clair_1 | Usage of /bin/clair:
clair_1 | -conf value
clair_1 | The file system path to Clair's config file.
clair_1 | -mode value
clair_1 | The operation mode for this server. (default combo)
clair_clair_1 exited with code 2
clair_1 | flag provided but not defined: -log-level
clair_1 | Usage of /bin/clair:
clair_1 | -conf value
clair_1 | The file system path to Clair's config file.
clair_1 | -mode value
clair_1 | The operation mode for this server. (default combo)
clair_clair_1 exited with code 2
clair_1 | flag provided but not defined: -log-level
clair_1 | Usage of /bin/clair:
clair_1 | -conf value
clair_1 | The file system path to Clair's config file.
clair_1 | -mode value
clair_1 | The operation mode for this server. (default combo)
Installing a bad container:
docker pull imiell/bad-dockerfile
docker-compose exec clairctl clairctl analyze -l imiell/bad-dockerfile
client quit unexpectedly
2021-11-16 23:05:19.221606 C | cmd: pushing image "imiell/bad-dockerfile:latest": pushing layer to clair: Post http://clair:6060/v1/layers: dial tcp: lookup clair: Try again
I don't understand what it doesn't like about the analysis.
I just solved this yesterday. Version 4.3.4 of Clair only supports two command-line options, -mode and -conf. Your output bears this out:
clair_1 | flag provided but not defined: -log-level
clair_1 | Usage of /bin/clair:
clair_1 | -conf value
clair_1 | The file system path to Clair's config file.
clair_1 | -mode value
clair_1 | The operation mode for this server. (default combo)
Change the command line to only specify your configuration file (line 23 of your docker-compose.yml) and place your debug directive in the configuration file.
command: [--conf, /config/config.yml]
This should get Clair running.
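For reference, a minimal sketch of the change: the compose command keeps only the config path, and the debug directive moves into the mounted config file (Clair v4 reads a top-level log_level key in its YAML config; verify the exact key and value against your version's documentation):

  # docker-compose.yml, clair service:
  command: [--conf, /config/config.yml]

  # docker-compose-data/clair-config/config.yml:
  log_level: debug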
I think you are using the old clairctl with the new Clair v4. You should be using clairctl from here: https://github.com/quay/clair/releases/tag/v4.3.5.
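With the v4 clairctl, the submission step would then look roughly like the line below; treat the flag name as an assumption and check clairctl --help for your release:

  clairctl report --host http://clair:6060 imiell/bad-dockerfile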

Rabbitmq on docker: Application mnesia exited with reason: stopped

I'm trying to launch RabbitMQ with docker-compose alongside DRF and Celery.
Here's my docker-compose file. Everything else works fine, except for rabbitmq:
version: '3.7'
services:
  drf:
    build: ./drf
    entrypoint: ["/bin/sh","-c"]
    command:
      - |
        python manage.py migrate
        python manage.py runserver 0.0.0.0:8000
    volumes:
      - ./drf/:/usr/src/drf/
    ports:
      - 8000:8000
    env_file:
      - ./.env.dev
    depends_on:
      - db
  db:
    image: postgres:12.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=base_test
  redis:
    image: redis:alpine
    volumes:
      - redis:/data
    ports:
      - "6379:6379"
    depends_on:
      - drf
  rabbitmq:
    image: rabbitmq:3-management-alpine
    container_name: 'rabbitmq'
    ports:
      - 5672:5672
      - 15672:15672
    volumes:
      - ~/.docker-conf/rabbitmq/data/:/var/lib/rabbitmq/
      - ~/.docker-conf/rabbitmq/log/:/var/log/rabbitmq
    networks:
      - net_1
  celery_worker:
    command: sh -c "wait-for redis:3000 && wait-for drf:8000 -- celery -A base-test worker -l info"
    depends_on:
      - drf
      - db
      - redis
    deploy:
      replicas: 2
      restart_policy:
        condition: on-failure
      resources:
        limits:
          cpus: '0.50'
          memory: 50M
        reservations:
          cpus: '0.25'
          memory: 20M
    hostname: celery_worker
    image: app-image
    networks:
      - net_1
    restart: on-failure
  celery_beat:
    command: sh -c "wait-for redis:3000 && wait-for drf:8000 -- celery -A mysite beat -l info --scheduler django_celery_beat.schedulers:DatabaseScheduler"
    depends_on:
      - drf
      - db
      - redis
    hostname: celery_beat
    image: app-image
    networks:
      - net_1
    restart: on-failure
networks:
  net_1:
    driver: bridge
volumes:
  postgres_data:
  redis:
And here's what happens when I launch it. Can someone please help me find the problem? I can't even follow the instructions and read the generated dump file, because the rabbitmq container exits after the error.
rabbitmq | Starting broker...2021-04-05 16:49:58.330 [info] <0.273.0>
rabbitmq | node : rabbit#0e652f57b1b3
rabbitmq | home dir : /var/lib/rabbitmq
rabbitmq | config file(s) : /etc/rabbitmq/rabbitmq.conf
rabbitmq | cookie hash : ZPam/SOKy2dEd/3yt0OlaA==
rabbitmq | log(s) : <stdout>
rabbitmq | database dir : /var/lib/rabbitmq/mnesia/rabbit#0e652f57b1b3
rabbitmq | 2021-04-05 16:50:09.542 [info] <0.273.0> Feature flags: list of feature flags found:
rabbitmq | 2021-04-05 16:50:09.542 [info] <0.273.0> Feature flags: [x] drop_unroutable_metric
rabbitmq | 2021-04-05 16:50:09.542 [info] <0.273.0> Feature flags: [x] empty_basic_get_metric
rabbitmq | 2021-04-05 16:50:09.542 [info] <0.273.0> Feature flags: [x] implicit_default_bindings
rabbitmq | 2021-04-05 16:50:09.542 [info] <0.273.0> Feature flags: [x] maintenance_mode_status
rabbitmq | 2021-04-05 16:50:09.542 [info] <0.273.0> Feature flags: [ ] quorum_queue
rabbitmq | 2021-04-05 16:50:09.543 [info] <0.273.0> Feature flags: [ ] user_limits
rabbitmq | 2021-04-05 16:50:09.545 [info] <0.273.0> Feature flags: [ ] virtual_host_metadata
rabbitmq | 2021-04-05 16:50:09.546 [info] <0.273.0> Feature flags: feature flag states written to disk: yes
rabbitmq | 2021-04-05 16:50:10.844 [info] <0.273.0> Running boot step pre_boot defined by app rabbit
rabbitmq | 2021-04-05 16:50:10.845 [info] <0.273.0> Running boot step rabbit_core_metrics defined by app rabbit
rabbitmq | 2021-04-05 16:50:10.846 [info] <0.273.0> Running boot step rabbit_alarm defined by app rabbit
rabbitmq | 2021-04-05 16:50:10.854 [info] <0.414.0> Memory high watermark set to 2509 MiB (2631391641 bytes) of 6273 MiB (6578479104 bytes) total
rabbitmq | 2021-04-05 16:50:10.864 [info] <0.416.0> Enabling free disk space monitoring
rabbitmq | 2021-04-05 16:50:10.864 [info] <0.416.0> Disk free limit set to 50MB
rabbitmq | 2021-04-05 16:50:10.872 [info] <0.273.0> Running boot step code_server_cache defined by app rabbit
rabbitmq | 2021-04-05 16:50:10.872 [info] <0.273.0> Running boot step file_handle_cache defined by app rabbit
rabbitmq | 2021-04-05 16:50:10.872 [info] <0.419.0> Limiting to approx 1048479 file handles (943629 sockets)
rabbitmq | 2021-04-05 16:50:10.873 [info] <0.420.0> FHC read buffering: OFF
rabbitmq | 2021-04-05 16:50:10.873 [info] <0.420.0> FHC write buffering: ON
rabbitmq | 2021-04-05 16:50:10.874 [info] <0.273.0> Running boot step worker_pool defined by app rabbit
rabbitmq | 2021-04-05 16:50:10.874 [info] <0.372.0> Will use 4 processes for default worker pool
rabbitmq | 2021-04-05 16:50:10.874 [info] <0.372.0> Starting worker pool 'worker_pool' with 4 processes in it
rabbitmq | 2021-04-05 16:50:10.876 [info] <0.273.0> Running boot step database defined by app rabbit
rabbitmq | 2021-04-05 16:50:10.899 [info] <0.273.0> Waiting for Mnesia tables for 30000 ms, 9 retries left
rabbitmq | 2021-04-05 16:50:10.900 [info] <0.273.0> Successfully synced tables from a peer
rabbitmq | 2021-04-05 16:50:10.908 [info] <0.44.0> Application mnesia exited with reason: stopped
rabbitmq |
rabbitmq | 2021-04-05 16:50:10.908 [info] <0.44.0> Application mnesia exited with reason: stopped
rabbitmq | 2021-04-05 16:50:10.908 [error] <0.273.0>
rabbitmq | 2021-04-05 16:50:10.908 [error] <0.273.0> BOOT FAILED
rabbitmq | BOOT FAILED
rabbitmq | ===========
rabbitmq | Error during startup: {error,
rabbitmq | 2021-04-05 16:50:10.909 [error] <0.273.0> ===========
rabbitmq | 2021-04-05 16:50:10.909 [error] <0.273.0> Error during startup: {error,
rabbitmq | 2021-04-05 16:50:10.909 [error] <0.273.0> {schema_integrity_check_failed,
rabbitmq | {schema_integrity_check_failed,
rabbitmq | [{table_attributes_mismatch,rabbit_queue,
rabbitmq | 2021-04-05 16:50:10.910 [error] <0.273.0> [{table_attributes_mismatch,rabbit_queue,
rabbitmq | 2021-04-05 16:50:10.910 [error] <0.273.0> [name,durable,auto_delete,exclusive_owner,
rabbitmq | 2021-04-05 16:50:10.911 [error] <0.273.0> arguments,pid,slave_pids,sync_slave_pids,
rabbitmq | 2021-04-05 16:50:10.911 [error] <0.273.0> recoverable_slaves,policy,operator_policy,
rabbitmq | [name,durable,auto_delete,exclusive_owner,
rabbitmq | arguments,pid,slave_pids,sync_slave_pids,
rabbitmq | 2021-04-05 16:50:10.911 [error] <0.273.0> gm_pids,decorators,state,policy_version,
rabbitmq | 2021-04-05 16:50:10.911 [error] <0.273.0> slave_pids_pending_shutdown,vhost,options],
rabbitmq | 2021-04-05 16:50:10.912 [error] <0.273.0> [name,durable,auto_delete,exclusive_owner,
rabbitmq | 2021-04-05 16:50:10.912 [error] <0.273.0> arguments,pid,slave_pids,sync_slave_pids,
rabbitmq | 2021-04-05 16:50:10.913 [error] <0.273.0> recoverable_slaves,policy,operator_policy,
rabbitmq | 2021-04-05 16:50:10.913 [error] <0.273.0> gm_pids,decorators,state,policy_version,
rabbitmq | 2021-04-05 16:50:10.913 [error] <0.273.0> slave_pids_pending_shutdown,vhost,options,
rabbitmq | recoverable_slaves,policy,operator_policy,
rabbitmq | gm_pids,decorators,state,policy_version,
rabbitmq | slave_pids_pending_shutdown,vhost,options],
rabbitmq | [name,durable,auto_delete,exclusive_owner,
rabbitmq | arguments,pid,slave_pids,sync_slave_pids,
rabbitmq | recoverable_slaves,policy,operator_policy,
rabbitmq | gm_pids,decorators,state,policy_version,
rabbitmq | slave_pids_pending_shutdown,vhost,options,
rabbitmq | type,type_state]}]}}
rabbitmq | 2021-04-05 16:50:10.914 [error] <0.273.0> type,type_state]}]}}
rabbitmq | 2021-04-05 16:50:10.916 [error] <0.273.0>
rabbitmq |
rabbitmq | 2021-04-05 16:50:11.924 [info] <0.272.0> [{initial_call,{application_master,init,['Argument__1','Argument__2','Argument__3','Argument__4']}},{pid,<0.272.0>},{registered_name,[]},{error_info
,{exit,{{schema_integrity_check_failed,[{table_attributes_mismatch,rabbit_queue,[name,durable,auto_delete,exclusive_owner,arguments,pid,slave_pids,sync_slave_pids,recoverable_slaves,policy,operator_policy,gm_
pids,decorators,state,policy_version,slave_pids_pending_shutdown,vhost,options],[name,durable,auto_delete,exclusive_owner,arguments,pid,slave_pids,sync_slave_pids,recoverable_slaves,policy,operator_policy,gm_
pids,decorators,state,policy_version,slave_pids_pending_shutdown,vhost,options,type,type_state]}]},{rabbit,start,[normal,[]]}},[{application_master,init,4,[{file,"application_master.erl"},{line,138}]},{proc_l
ib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,226}]}]}},{ancestors,[<0.271.0>]},{message_queue_len,1},{messages,[{'EXIT',<0.273.0>,normal}]},{links,[<0.271.0>,<0.44.0>]},{dictionary,[]},{trap_exit,true},{
status,running},{heap_size,610},{stack_size,28},{reductions,534}], []
rabbitmq | 2021-04-05 16:50:11.924 [error] <0.272.0> CRASH REPORT Process <0.272.0> with 0 neighbours exited with reason: {{schema_integrity_check_failed,[{table_attributes_mismatch,rabbit_queue,[name
,durable,auto_delete,exclusive_owner,arguments,pid,slave_pids,sync_slave_pids,recoverable_slaves,policy,operator_policy,gm_pids,decorators,state,policy_version,slave_pids_pending_shutdown,vhost,options],[name
,durable,auto_delete,exclusive_owner,arguments,pid,slave_pids,sync_slave_pids,recoverable_slaves,policy,operator_policy,gm_pids,decorators,state,policy_version,slave_pids_pending_shutdown,vhost,options,type,t
ype_state]}]},...} in application_master:init/4 line 138
rabbitmq | 2021-04-05 16:50:11.924 [info] <0.44.0> Application rabbit exited with reason: {{schema_integrity_check_failed,[{table_attributes_mismatch,rabbit_queue,[name,durable,auto_delete,exclusive_o
wner,arguments,pid,slave_pids,sync_slave_pids,recoverable_slaves,policy,operator_policy,gm_pids,decorators,state,policy_version,slave_pids_pending_shutdown,vhost,options],[name,durable,auto_delete,exclusive_o
wner,arguments,pid,slave_pids,sync_slave_pids,recoverable_slaves,policy,operator_policy,gm_pids,decorators,state,policy_version,slave_pids_pending_shutdown,vhost,options,type,type_state]}]},...}
rabbitmq | 2021-04-05 16:50:11.925 [info] <0.44.0> Application rabbit exited with reason: {{schema_integrity_check_failed,[{table_attributes_mismatch,rabbit_queue,[name,durable,auto_delete,exclusive_o
wner,arguments,pid,slave_pids,sync_slave_pids,recoverable_slaves,policy,operator_policy,gm_pids,decorators,state,policy_version,slave_pids_pending_shutdown,vhost,options],[name,durable,auto_delete,exclusive_o
wner,arguments,pid,slave_pids,sync_slave_pids,recoverable_slaves,policy,operator_policy,gm_pids,decorators,state,policy_version,slave_pids_pending_shutdown,vhost,options,type,type_state]}]},...}
rabbitmq | {"Kernel pid terminated",application_controller,"{application_start_failure,rabbit,{{schema_integrity_check_failed,[{table_attributes_mismatch,rabbit_queue,[name,durable,auto_delete,exclusi
ve_owner,arguments,pid,slave_pids,sync_slave_pids,recoverable_slaves,policy,operator_policy,gm_pids,decorators,state,policy_version,slave_pids_pending_shutdown,vhost,options],[name,durable,auto_delete,exclusi
ve_owner,arguments,pid,slave_pids,sync_slave_pids,recoverable_slaves,policy,operator_policy,gm_pids,decorators,state,policy_version,slave_pids_pending_shutdown,vhost,options,type,type_state]}]},{rabbit,start,
[normal,[]]}}}"}
rabbitmq | Kernel pid terminated (application_controller) ({application_start_failure,rabbit,{{schema_integrity_check_failed,[{table_attributes_mismatch,rabbit_queue,[name,durable,auto_delete,exclusiv
e_owner,arg
rabbitmq |
rabbitmq | Crash dump is being written to: /var/log/rabbitmq/erl_crash.dump...done
rabbitmq exited with code 0
I've managed to make it work by removing container_name and volumes from the rabbitmq section of the docker-compose file. Still, it would be nice to have an explanation of this behavior.
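A likely explanation: schema_integrity_check_failed / table_attributes_mismatch means the mnesia database persisted under ~/.docker-conf/rabbitmq/data/ was written by a different RabbitMQ version than the one now booting; the expected rabbit_queue table has two extra attributes (type, type_state) that arrived with quorum queues in RabbitMQ 3.8. Dropping the volumes discards the stale schema. A sketch that keeps persistence while avoiding surprise upgrades is to pin the image to a specific minor version (the tag below is illustrative):

  rabbitmq:
    image: rabbitmq:3.8-management-alpine   # pinned so a pull cannot silently upgrade the broker over old data
    ports:
      - 5672:5672
      - 15672:15672
    volumes:
      - rabbitmq_data:/var/lib/rabbitmq/   # named volume; wipe it when you do upgrade across versions
    networks:
      - net_1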

docker rabbit mq image fails to load up

I have a project that uses the RabbitMQ Docker image. When I run this command in my project:
docker-compose -f dc.rabbitmq.yml up
the container fails to start with the error below:
477a2217c16b: Pull complete
Digest: sha256:d8fb3795026b4c81eae33f4990e8bbc7b29c877388eef6ead2aca2945074c3f3
Status: Downloaded newer image for rabbitmq:3-management-alpine
Creating rabbitmq ... done
Attaching to rabbitmq
rabbitmq | Configuring logger redirection
rabbitmq | 07:50:26.108 [error]
rabbitmq |
rabbitmq | BOOT FAILED
rabbitmq | 07:50:26.116 [error] BOOT FAILED
rabbitmq | 07:50:26.117 [error] ===========
rabbitmq | ===========
rabbitmq | 07:50:26.117 [error] Exception during startup:
rabbitmq | 07:50:26.117 [error]
rabbitmq | Exception during startup:
rabbitmq |
rabbitmq | supervisor:'-start_children/2-fun-0-'/3 line 355
rabbitmq | 07:50:26.117 [error] supervisor:'-start_children/2-fun-0-'/3 line 355
rabbitmq | 07:50:26.117 [error] supervisor:do_start_child/2 line 371
rabbitmq | 07:50:26.117 [error] supervisor:do_start_child_i/3 line 385
rabbitmq | supervisor:do_start_child/2 line 371
rabbitmq | supervisor:do_start_child_i/3 line 385
rabbitmq | rabbit_prelaunch:run_prelaunch_first_phase/0 line 27
rabbitmq | rabbit_prelaunch:do_run/0 line 108
rabbitmq | 07:50:26.117 [error] rabbit_prelaunch:run_prelaunch_first_phase/0 line 27
rabbitmq | 07:50:26.117 [error] rabbit_prelaunch:do_run/0 line 108
rabbitmq | 07:50:26.117 [error] rabbit_prelaunch_conf:setup/1 line 33
rabbitmq | 07:50:26.117 [error] rabbit_prelaunch_conf:decrypt_config/2 line 404
rabbitmq | rabbit_prelaunch_conf:setup/1 line 33
rabbitmq | rabbit_prelaunch_conf:decrypt_config/2 line 404
rabbitmq | rabbit_prelaunch_conf:decrypt_app/3 line 425
rabbitmq | 07:50:26.117 [error] rabbit_prelaunch_conf:decrypt_app/3 line 425
rabbitmq | 07:50:26.117 [error] throw:{config_decryption_error,{key,default_pass},badarg}
rabbitmq | throw:{config_decryption_error,{key,default_pass},badarg}
rabbitmq |
rabbitmq | 07:50:26.117 [error]
rabbitmq | 07:50:27.118 [error] Supervisor rabbit_prelaunch_sup had child prelaunch started with rabbit_prelaunch:run_prelaunch_first_phase() at undefined exit with reason {config_decryption_error,{key,default_pass},badarg} in context start_error
What I have tried so far: I searched for any child process that might be failing RabbitMQ but could not find one, and I removed the images and containers, then built and started them again, only to hit the same error.
EDIT:
Here is the config file:
version: "3.7"
services:
  rabbitmq:
    image: rabbitmq:3-management-alpine
    container_name: rabbitmq
    ports:
      - "15672:15672"
      - "5672:5672"
    volumes:
      - ./docker/rabbitmq/definitions.json:/opt/definitions.json:ro
      - ./docker/rabbitmq/rabbitmq.conf:/etc/rabbitmq/rabbitmq.config:ro
      - dms_rabbitmq_data:/var/lib/rabbitmq/
      - dms_rabbitmq_logs:/var/log/rabbitmq
    networks:
      - rabbitmq_network
    labels:
      - co.elastic.logs/module=rabbitmq
      - co.elastic.logs/fileset.stdout=access
      - co.elastic.logs/fileset.stderr=error
      - co.elastic.metrics/module=rabbitmq
      - co.elastic.metrics/metricsets=status
volumes:
  dms_rabbitmq_etc:
    name: dms_rabbitmq_etc
  dms_rabbitmq_data:
    name: dms_rabbitmq_data
  dms_rabbitmq_logs:
    driver: local
    driver_opts:
      type: "none"
      o: "bind"
      device: ${PWD}/storage/logs/rabbitmq
networks:
  rabbitmq_network:
    name: rabbitmq_network
I had the same problem. The solution for me was re-creating the encoded password.
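For anyone hitting the same config_decryption_error: the broker failed to decrypt the {encrypted, ...} value for default_pass with the passphrase given to config_entry_decoder. RabbitMQ ships a command for regenerating such values; a sketch with placeholder secrets:

  # re-encrypt the password with the passphrase your config_entry_decoder uses
  rabbitmqctl encode '<<"my-password">>' my-passphrase
  # paste the resulting {encrypted,<<"...">>} term back into the config as default_pass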
