Docker Static Analysis With Clair?

Can anyone help with Docker static analysis using Clair?
I get an error when running an analysis. Help me figure it out, or tell me how to set up the Docker Clair scanner correctly.
Getting Setup
git clone git@github.com:Charlie-belmer/Docker-security-example.git
docker-compose.yml
version: '2.1'
services:
  postgres:
    image: postgres:12.1
    restart: unless-stopped
    volumes:
      - ./docker-compose-data/postgres-data/:/var/lib/postgresql/data:rw
    environment:
      - POSTGRES_PASSWORD=ChangeMe
      - POSTGRES_USER=clair
      - POSTGRES_DB=clair
  clair:
    image: quay.io/coreos/clair:v4.3.4
    restart: unless-stopped
    volumes:
      - ./docker-compose-data/clair-config/:/config/:ro
      - ./docker-compose-data/clair-tmp/:/tmp/:rw
    depends_on:
      postgres:
        condition: service_started
    command: [--log-level=debug, --config, /config/config.yml]
    user: root
  clairctl:
    image: jgsqware/clairctl:latest
    restart: unless-stopped
    environment:
      - DOCKER_API_VERSION=1.41
    volumes:
      - ./docker-compose-data/clairctl-reports/:/reports/:rw
      - /var/run/docker.sock:/var/run/docker.sock:ro
    depends_on:
      clair:
        condition: service_started
    user: root
docker-compose up
The server starts without errors but gets stuck in a loop on the same message. I don't understand what it doesn't like:
test@parallels-virtual-platform:~/Docker-security-example/clair$ docker-compose up
clair_postgres_1 is up-to-date
Recreating clair_clair_1 ... done
Recreating clair_clairctl_1 ... done
Attaching to clair_postgres_1, clair_clair_1, clair_clairctl_1
clair_1 | flag provided but not defined: -log-level
clair_1 | Usage of /bin/clair:
clair_1 | -conf value
clair_1 | The file system path to Clair's config file.
clair_1 | -mode value
clair_1 | The operation mode for this server. (default combo)
postgres_1 |
postgres_1 | PostgreSQL Database directory appears to contain a database; Skipping initialization
postgres_1 |
postgres_1 | 2021-11-16 22:55:36.851 UTC [1] LOG: starting PostgreSQL 12.1 (Debian 12.1-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
postgres_1 | 2021-11-16 22:55:36.851 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
postgres_1 | 2021-11-16 22:55:36.851 UTC [1] LOG: listening on IPv6 address "::", port 5432
postgres_1 | 2021-11-16 22:55:36.853 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres_1 | 2021-11-16 22:55:36.877 UTC [24] LOG: database system was shut down at 2021-11-16 22:54:58 UTC
postgres_1 | 2021-11-16 22:55:36.888 UTC [1] LOG: database system is ready to accept connections
postgres_1 | 2021-11-16 23:01:15.219 UTC [1] LOG: received smart shutdown request
postgres_1 | 2021-11-16 23:01:15.225 UTC [1] LOG: background worker "logical replication launcher" (PID 30) exited with exit code 1
postgres_1 |
postgres_1 | PostgreSQL Database directory appears to contain a database; Skipping initialization
postgres_1 |
postgres_1 | 2021-11-16 23:02:11.993 UTC [1] LOG: starting PostgreSQL 12.1 (Debian 12.1-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
postgres_1 | 2021-11-16 23:02:11.994 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
postgres_1 | 2021-11-16 23:02:11.994 UTC [1] LOG: listening on IPv6 address "::", port 5432
postgres_1 | 2021-11-16 23:02:11.995 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres_1 | 2021-11-16 23:02:12.009 UTC [26] LOG: database system was interrupted; last known up at 2021-11-16 23:00:37 UTC
postgres_1 | 2021-11-16 23:02:12.164 UTC [26] LOG: database system was not properly shut down; automatic recovery in progress
postgres_1 | 2021-11-16 23:02:12.166 UTC [26] LOG: redo starts at 0/1745C50
postgres_1 | 2021-11-16 23:02:12.166 UTC [26] LOG: invalid record length at 0/1745D38: wanted 24, got 0
postgres_1 | 2021-11-16 23:02:12.166 UTC [26] LOG: redo done at 0/1745D00
postgres_1 | 2021-11-16 23:02:12.180 UTC [1] LOG: database system is ready to accept connections
postgres_1 | 2021-11-16 23:02:12.471 UTC [33] ERROR: duplicate key value violates unique constraint "lock_name_key"
postgres_1 | 2021-11-16 23:02:12.471 UTC [33] DETAIL: Key (name)=(updater) already exists.
postgres_1 | 2021-11-16 23:02:12.471 UTC [33] STATEMENT: INSERT INTO Lock(name, owner, until) VALUES($1, $2, $3)
clair_clair_1 exited with code 2
[the same "flag provided but not defined: -log-level" usage message and "exited with code 2" repeat as Compose keeps restarting the clair container]
Installing a bad (deliberately vulnerable) image:
docker pull imiell/bad-dockerfile
docker-compose exec clairctl clairctl analyze -l imiell/bad-dockerfile
client quit unexpectedly
2021-11-16 23:05:19.221606 C | cmd: pushing image "imiell/bad-dockerfile:latest": pushing layer to clair: Post http://clair:6060/v1/layers: dial tcp: lookup clair: Try again
I don't understand what it doesn't like about the analysis.

I just solved this yesterday. Version 4.3.4 of Clair supports only two command-line options, -mode and -conf. Your output bears this out:
clair_1 | flag provided but not defined: -log-level
clair_1 | Usage of /bin/clair:
clair_1 | -conf value
clair_1 | The file system path to Clair's config file.
clair_1 | -mode value
clair_1 | The operation mode for this server. (default combo)
Change the command line to specify only your configuration file (line 23 of your docker-compose.yml) and place your debug directive in the configuration file instead:
command: [--conf, /config/config.yml]
This should get Clair running.
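For reference, a minimal sketch of what that config file might look like with the debug directive moved into it. The listen addresses and connection strings are assumptions that mirror the postgres service above; adjust them to your setup.
config.yml
# Clair v4 takes its log level from the config file, not from a CLI flag.
log_level: debug
http_listen_addr: ":6060"
introspection_addr: ":6061"
indexer:
  connstring: host=postgres port=5432 user=clair dbname=clair password=ChangeMe sslmode=disable
matcher:
  connstring: host=postgres port=5432 user=clair dbname=clair password=ChangeMe sslmode=disable
notifier:
  connstring: host=postgres port=5432 user=clair dbname=clair password=ChangeMe sslmode=disable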

I think you are using the old clairctl with the new Clair v4. You should be using the clairctl from here: https://github.com/quay/clair/releases/tag/v4.3.5.
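Note that the v4 clairctl has a different CLI from the old jgsqware/clairctl: analysis reports come from the report subcommand rather than analyze. A rough example, reusing the image from above:
clairctl report imiell/bad-dockerfile:latest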

Related

selenium remote hanging in docker

Same as here. docker-compose.yml
version: '3'
services:
  db:
    image: postgres
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=my_app
    ports:
      - '5432:5432'
  chrome:
    image: selenium/standalone-chrome
    hostname: chrome
    privileged: true
    shm_size: 2g
  web:
    build: .
    image: my-app
    ports:
      - "8000:8000"
    depends_on:
      - db
    command: sh -c "python manage.py migrate &&
      python manage.py runserver 0.0.0.0:8000"
    volumes:
      - .:/code
    environment:
      - DB_USER=postgres
      - DB_PASSWORD=postgres
      - DB_HOST=db
      - DB_NAME=my_app
everything starts as expected
% docker compose build && docker compose up
[+] Building 3.4s (11/11) FINISHED
=> [internal] load build definition from Dockerfile 0.1s
=> => transferring dockerfile: 189B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 34B 0.0s
=> [internal] load metadata for docker.io/library/python:3.11.1-bullseye 3.1s
=> [auth] library/python:pull token for registry-1.docker.io 0.0s
=> [internal] load build context 0.1s
=> => transferring context: 8.58kB 0.0s
=> [1/5] FROM docker.io/library/python:3.11.1-bullseye@sha256:cc4910af48 0.0s
=> CACHED [2/5] COPY requirements.txt requirements.txt 0.0s
=> CACHED [3/5] RUN pip install -r requirements.txt 0.0s
=> CACHED [4/5] COPY . /app 0.0s
=> CACHED [5/5] WORKDIR /app 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:10e58eeb73e4651e1e1aedb921fdde3b389cadc204787 0.0s
=> => naming to docker.io/library/my-app 0.0s
Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them
[+] Running 3/3
⠿ Container my_app-db-1 Created 0.0s
⠿ Container my_app-chrome-1 Created 0.0s
⠿ Container my_app-web-1 Recreated 0.2s
Attaching to my_app-chrome-1, my_app-db-1, my_app-web-1
my_app-db-1 |
my_app-db-1 | PostgreSQL Database directory appears to contain a database; Skipping initialization
my_app-db-1 |
my_app-db-1 | 2023-01-08 14:01:28.671 UTC [1] LOG: starting PostgreSQL 15.1 (Debian 15.1-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
my_app-db-1 | 2023-01-08 14:01:28.672 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
my_app-db-1 | 2023-01-08 14:01:28.673 UTC [1] LOG: listening on IPv6 address "::", port 5432
my_app-db-1 | 2023-01-08 14:01:28.688 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
my_app-db-1 | 2023-01-08 14:01:28.699 UTC [28] LOG: database system was shut down at 2023-01-08 13:54:14 UTC
my_app-db-1 | 2023-01-08 14:01:28.720 UTC [1] LOG: database system is ready to accept connections
my_app-chrome-1 | 2023-01-08 14:01:28,791 INFO Included extra file "/etc/supervisor/conf.d/selenium.conf" during parsing
my_app-chrome-1 | 2023-01-08 14:01:28,803 INFO RPC interface 'supervisor' initialized
my_app-chrome-1 | 2023-01-08 14:01:28,803 CRIT Server 'unix_http_server' running without any HTTP authentication checking
my_app-chrome-1 | 2023-01-08 14:01:28,808 INFO supervisord started with pid 8
my_app-chrome-1 | 2023-01-08 14:01:29,811 INFO spawned: 'xvfb' with pid 10
my_app-chrome-1 | 2023-01-08 14:01:29,819 INFO spawned: 'vnc' with pid 11
my_app-chrome-1 | 2023-01-08 14:01:29,833 INFO spawned: 'novnc' with pid 12
my_app-chrome-1 | 2023-01-08 14:01:29,849 INFO spawned: 'selenium-standalone' with pid 14
my_app-chrome-1 | Setting up SE_NODE_GRID_URL...
my_app-chrome-1 | 2023-01-08 14:01:29,911 INFO success: xvfb entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
my_app-chrome-1 | 2023-01-08 14:01:29,913 INFO success: vnc entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
my_app-chrome-1 | 2023-01-08 14:01:29,913 INFO success: novnc entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
my_app-chrome-1 | 2023-01-08 14:01:29,914 INFO success: selenium-standalone entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
my_app-chrome-1 | Selenium Grid Standalone configuration:
my_app-chrome-1 | [network]
my_app-chrome-1 | relax-checks = true
my_app-chrome-1 |
my_app-chrome-1 | [node]
my_app-chrome-1 | session-timeout = "300"
my_app-chrome-1 | override-max-sessions = false
my_app-chrome-1 | detect-drivers = false
my_app-chrome-1 | drain-after-session-count = 0
my_app-chrome-1 | max-sessions = 1
my_app-chrome-1 |
my_app-chrome-1 | [[node.driver-configuration]]
my_app-chrome-1 | display-name = "chrome"
my_app-chrome-1 | stereotype = '{"browserName": "chrome", "browserVersion": "108.0", "platformName": "Linux"}'
my_app-chrome-1 | max-sessions = 1
my_app-chrome-1 |
my_app-chrome-1 | Starting Selenium Grid Standalone...
my_app-chrome-1 | Tracing is disabled
my_app-web-1 | Operations to perform:
my_app-web-1 | Apply all migrations: admin, auth, contenttypes, core, sessions
my_app-web-1 | Running migrations:
my_app-web-1 | No migrations to apply.
my_app-web-1 | Watching for file changes with StatReloader
my_app-web-1 | Performing system checks...
my_app-web-1 |
my_app-web-1 | System check identified no issues (0 silenced).
my_app-web-1 | January 08, 2023 - 14:01:33
my_app-web-1 | Django version 4.1.5, using settings 'my_app.settings'
my_app-web-1 | Starting development server at http://0.0.0.0:8000/
my_app-web-1 | Quit the server with CONTROL-C.
my_app-chrome-1 | 14:01:33.430 INFO [LoggingOptions.configureLogEncoding] - Using the system default encoding
my_app-chrome-1 | 14:01:33.453 INFO [OpenTelemetryTracer.createTracer] - Using OpenTelemetry for tracing
my_app-chrome-1 | 14:01:35.487 INFO [NodeOptions.getSessionFactories] - Detected 2 available processors
my_app-chrome-1 | 14:01:35.608 INFO [NodeOptions.report] - Adding chrome for {"browserVersion": "108.0","se:noVncPort": 7900,"browserName": "chrome","platformName": "LINUX","se:vncEnabled": true} 1 times
my_app-chrome-1 | 14:01:35.649 INFO [Node.<init>] - Binding additional locator mechanisms: name, relative, id
my_app-chrome-1 | 14:01:35.709 INFO [GridModel.setAvailability] - Switching Node 9f76899f-6574-4e21-9413-d6141aa6c584 (uri: http://172.25.0.2:4444) from DOWN to UP
my_app-chrome-1 | 14:01:35.709 INFO [LocalDistributor.add] - Added node 9f76899f-6574-4e21-9413-d6141aa6c584 at http://172.25.0.2:4444. Health check every 120s
my_app-chrome-1 | 14:01:36.147 INFO [Standalone.execute] - Started Selenium Standalone 4.7.2 (revision 4d4020c3b7): http://172.25.0.2:4444
The logs indicate selenium is currently available at http://172.25.0.2:4444, so I try:
>>> from selenium.webdriver import ChromeOptions, Remote
>>> options = ChromeOptions()
>>> options.add_argument('--headless')
>>> driver = Remote('http://172.25.0.2:4444')
It keeps hanging forever, with no output, log messages, or anything else; it just hangs until Ctrl-C. So how exactly is this supposed to be used? Also, for some reason, if the IP address is replaced with http://chrome:4444, the connection is refused.
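Not an authoritative answer, but worth noting: a container IP such as 172.25.0.2 is usually not routable from the host (especially on Docker Desktop), and the name chrome only resolves inside the Compose network, where no port was published to the host. A minimal sketch of the usual pattern, assuming the code runs inside the web container; alternatively, publish ports: - "4444:4444" on the chrome service and use http://localhost:4444 from the host:
from selenium.webdriver import ChromeOptions, Remote

options = ChromeOptions()
options.add_argument('--headless')
# Inside the Compose network, the Grid is reachable by its service name.
driver = Remote(command_executor='http://chrome:4444', options=options)
driver.get('http://web:8000/')  # the Django service, also by service name
print(driver.title)
driver.quit()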

Sonarscanner cannot reach sonarqube server using docker-compose

I have just created my docker-compose file, trying to run a SonarQube server alongside Postgres and SonarScanner. The SonarQube server and the database can connect, but my SonarScanner cannot reach the SonarQube server.
This is my docker-compose file:
version: "3"
services:
sonarqube:
image: sonarqube
build: .
expose:
- 9000
ports:
- "127.0.0.1:9000:9000"
networks:
- sonarnet
environment:
- sonar.jdbc.url=jdbc:postgresql://postgres:5432/sonar
- sonar.jdbc.username=sonar
- sonar.jdbc.password=sonar
volumes:
- sonarqube_conf:/opt/sonarqube/conf
- sonarqube_data:/opt/sonarqube/data
- sonarqube_extensions:/opt/sonarqube/extensions
- sonarqube_bundled-plugins:/opt/sonarqube/lib/bundled-plugins
postgres:
image: postgres
build: .
networks:
- sonarnet
ports:
- "5432:5432"
environment:
- POSTGRES_USER=sonar
- POSTGRES_PASSWORD=sonar
volumes:
- postgresql:/var/lib/postgresql
- postgresql_data:/var/lib/postgresql/data
sonarscanner:
image: newtmitch/sonar-scanner
networks:
- sonarnet
depends_on:
- sonarqube
volumes:
- ./:/usr/src
networks:
sonarnet:
volumes:
sonarqube_conf:
sonarqube_data:
sonarqube_extensions:
sonarqube_bundled-plugins:
postgresql:
postgresql_data:
This is my sonar-project.properties file:
# must be unique in a given SonarQube instance
sonar.projectKey=toh-token
# --- optional properties ---
#defaults to project key
#sonar.projectName=toh
# defaults to 'not provided'
#sonar.projectVersion=1.0
# Path is relative to the sonar-project.properties file. Defaults to .
#sonar.sources=$HOME/.solo/angular/toh
# Encoding of the source code. Default is default system encoding
#sonar.sourceEncoding=UTF-8
My sonar-project.properties is located in the same directory as the docker-compose file.
This is what happens whenever I start the services:
Attaching to sonarqube-postgres-1, sonarqube-sonarqube-1, sonarqube-sonarscanner-1
sonarqube-sonarqube-1 | Dropping Privileges
sonarqube-postgres-1 |
sonarqube-postgres-1 | PostgreSQL Database directory appears to contain a database; Skipping initialization
sonarqube-postgres-1 |
sonarqube-postgres-1 | 2022-06-12 20:59:39.522 UTC [1] LOG: starting PostgreSQL 14.3 (Debian 14.3-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
sonarqube-postgres-1 | 2022-06-12 20:59:39.523 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
sonarqube-postgres-1 | 2022-06-12 20:59:39.523 UTC [1] LOG: listening on IPv6 address "::", port 5432
sonarqube-postgres-1 | 2022-06-12 20:59:39.525 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
sonarqube-postgres-1 | 2022-06-12 20:59:39.533 UTC [26] LOG: database system was shut down at 2022-06-12 20:57:58 UTC
sonarqube-postgres-1 | 2022-06-12 20:59:39.542 UTC [1] LOG: database system is ready to accept connections
sonarqube-sonarscanner-1 | INFO: Scanner configuration file: /usr/lib/sonar-scanner/conf/sonar-scanner.properties
sonarqube-sonarscanner-1 | INFO: Project root configuration file: /usr/src/sonar-project.properties
sonarqube-sonarscanner-1 | INFO: SonarScanner 4.5.0.2216
sonarqube-sonarscanner-1 | INFO: Java 12-ea Oracle Corporation (64-bit)
sonarqube-sonarscanner-1 | INFO: Linux 5.10.117-1-MANJARO amd64
sonarqube-sonarscanner-1 | INFO: User cache: /root/.sonar/cache
sonarqube-sonarqube-1 | 2022.06.12 20:59:40 INFO app[][o.s.a.AppFileSystem] Cleaning or creating temp directory /opt/sonarqube/temp
sonarqube-sonarqube-1 | 2022.06.12 20:59:40 INFO app[][o.s.a.es.EsSettings] Elasticsearch listening on [HTTP: 127.0.0.1:9001, TCP: 127.0.0.1:41087]
sonarqube-sonarscanner-1 | ERROR: SonarQube server [http://sonarqube:9000] can not be reached
sonarqube-sonarscanner-1 | INFO: ------------------------------------------------------------------------
sonarqube-sonarscanner-1 | INFO: EXECUTION FAILURE
sonarqube-sonarscanner-1 | INFO: ------------------------------------------------------------------------
sonarqube-sonarscanner-1 | INFO: Total time: 0.802s
sonarqube-sonarscanner-1 | INFO: Final Memory: 3M/20M
sonarqube-sonarscanner-1 | INFO: ------------------------------------------------------------------------
sonarqube-sonarscanner-1 | ERROR: Error during SonarScanner execution
sonarqube-sonarscanner-1 | org.sonarsource.scanner.api.internal.ScannerException: Unable to execute SonarScanner analysis
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.IsolatedLauncherFactory.lambda$createLauncher$0(IsolatedLauncherFactory.java:85)
sonarqube-sonarscanner-1 | at java.base/java.security.AccessController.doPrivileged(AccessController.java:310)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.IsolatedLauncherFactory.createLauncher(IsolatedLauncherFactory.java:74)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.IsolatedLauncherFactory.createLauncher(IsolatedLauncherFactory.java:70)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.EmbeddedScanner.doStart(EmbeddedScanner.java:185)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.EmbeddedScanner.start(EmbeddedScanner.java:123)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.cli.Main.execute(Main.java:73)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.cli.Main.main(Main.java:61)
sonarqube-sonarscanner-1 | Caused by: java.lang.IllegalStateException: Fail to get bootstrap index from server
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.BootstrapIndexDownloader.getIndex(BootstrapIndexDownloader.java:42)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.JarDownloader.getScannerEngineFiles(JarDownloader.java:58)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.JarDownloader.download(JarDownloader.java:53)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.IsolatedLauncherFactory.lambda$createLauncher$0(IsolatedLauncherFactory.java:76)
sonarqube-sonarscanner-1 | ... 7 more
sonarqube-sonarscanner-1 | Caused by: java.net.ConnectException: Failed to connect to sonarqube/172.30.0.2:9000
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.connection.RealConnection.connectSocket(RealConnection.java:265)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.connection.RealConnection.connect(RealConnection.java:183)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.connection.ExchangeFinder.findConnection(ExchangeFinder.java:224)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.connection.ExchangeFinder.findHealthyConnection(ExchangeFinder.java:108)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.connection.ExchangeFinder.find(ExchangeFinder.java:88)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.connection.Transmitter.newExchange(Transmitter.java:169)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.java:41)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:117)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.cache.CacheInterceptor.intercept(CacheInterceptor.java:94)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:117)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.java:93)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.java:88)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:117)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.RealCall.getResponseWithInterceptorChain(RealCall.java:221)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.RealCall.execute(RealCall.java:81)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.ServerConnection.callUrl(ServerConnection.java:114)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.ServerConnection.downloadString(ServerConnection.java:99)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.BootstrapIndexDownloader.getIndex(BootstrapIndexDownloader.java:39)
sonarqube-sonarscanner-1 | ... 10 more
sonarqube-sonarscanner-1 | Caused by: java.net.ConnectException: Connection refused (Connection refused)
sonarqube-sonarscanner-1 | at java.base/java.net.PlainSocketImpl.socketConnect(Native Method)
sonarqube-sonarscanner-1 | at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:399)
sonarqube-sonarscanner-1 | at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:242)
sonarqube-sonarscanner-1 | at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:224)
sonarqube-sonarscanner-1 | at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:403)
sonarqube-sonarscanner-1 | at java.base/java.net.Socket.connect(Socket.java:591)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.platform.Platform.connectSocket(Platform.java:130)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.connection.RealConnection.connectSocket(RealConnection.java:263)
sonarqube-sonarscanner-1 | ... 31 more
sonarqube-sonarscanner-1 | ERROR:
sonarqube-sonarscanner-1 | ERROR: Re-run SonarScanner using the -X switch to enable full debug logging.
Is there something I am doing wrong?
As @Hans Killian said, the issue was the scanner trying to connect to the server before the server was up and running. I fixed it by adding the following to the scanner's service:
command: ["sh", "-c", "sleep 60 && sonar-scanner -Dsonar.projectBaseDir=/usr/src"]
This suspends the scanner until the server is up and running.
I then added the following credentials to the sonar-project.properties file:
sonar.login=admin
sonar.password=admin
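A fixed sleep works but is brittle. An alternative sketch, assuming a Compose version that honors the long depends_on form and that curl is available in the sonarqube image, is to gate the scanner on a healthcheck against SonarQube's status endpoint:
sonarqube:
  healthcheck:
    test: ["CMD-SHELL", "curl -sf http://localhost:9000/api/system/status | grep -q UP"]
    interval: 10s
    retries: 30
sonarscanner:
  depends_on:
    sonarqube:
      condition: service_healthy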

docker-entrypoint.sh is being ignored

The console logs /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/tables during startup of the Docker container (see the full log below). What could be the cause of this, considering I have the following code?
File overview
users.sql
BEGIN TRANSACTION;
CREATE TABLE users (
  id serial PRIMARY KEY,
  name VARCHAR(100),
  email text UNIQUE NOT NULL,
  entries BIGINT DEFAULT 0,
  joined TIMESTAMP NOT NULL
);
COMMIT;
COMMIT;
deploy_schemas.sql
-- Deploy fresh database tables
\i '/docker-entrypoint-initdb.d/tables/users.sql'
\i '/docker-entrypoint-initdb.d/tables/login.sql'
Dockerfile (in postgres folder)
FROM postgres:12.2
ADD /tables/ /docker-entrypoint-initdb.d/tables/
ADD deploy_schemas.sql /docker-entrypoint-initdb.d/tables/
docker-compose.yml
version: "3.3"
services:
  # Backend API
  smart-brain-app:
    container_name: backend
    # image: node:14.2.0
    build: ./
    command: npm start
    working_dir: /usr/src/smart-brain-api
    environment:
      POSTGRES_URI: postgres://postgres:1212@postgres:5431/smart-brain-api-db
    links:
      - postgres
    ports:
      - "3000:3000"
    volumes:
      - ./:/usr/src/smart-brain-api
  # Postgres
  postgres:
    build: ./postgres
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: 1212
      POSTGRES_DB: smart-brain-api-db
      POSTGRES_HOST: postgres
    ports:
      - "5431:5432"
Dockerfile
FROM node:14.2.0
WORKDIR /usr/src/smart-brain-api
COPY ./ ./
RUN npm install | npm audit fix
CMD ["/bin/bash"]
Complete Log
Creating smart-brain-api_postgres_1 ... done
Creating backend ... done
Attaching to smart-brain-api_postgres_1, backend
postgres_1 | The files belonging to this database system will be owned by user "postgres".
postgres_1 | This user must also own the server process.
postgres_1 |
postgres_1 | The database cluster will be initialized with locale "en_US.utf8".
postgres_1 | The default database encoding has accordingly been set to "UTF8".
postgres_1 | The default text search configuration will be set to "english".
postgres_1 |
postgres_1 | Data page checksums are disabled.
postgres_1 |
postgres_1 | fixing permissions on existing directory /var/lib/postgresql/data ... ok
postgres_1 | creating subdirectories ... ok
postgres_1 | selecting dynamic shared memory implementation ... posix
postgres_1 | selecting default max_connections ... 100
postgres_1 | selecting default shared_buffers ... 128MB
postgres_1 | selecting default time zone ... Etc/UTC
postgres_1 | creating configuration files ... ok
postgres_1 | running bootstrap script ... ok
backend |
backend | > node@1.0.0 start /usr/src/smart-brain-api
backend | > npx nodemon server.js
backend |
postgres_1 | performing post-bootstrap initialization ... ok
postgres_1 | syncing data to disk ... ok
postgres_1 |
postgres_1 | initdb: warning: enabling "trust" authentication for local connections
postgres_1 | You can change this by editing pg_hba.conf or using the option -A, or
postgres_1 | --auth-local and --auth-host, the next time you run initdb.
postgres_1 |
postgres_1 | Success. You can now start the database server using:
postgres_1 |
postgres_1 | pg_ctl -D /var/lib/postgresql/data -l logfile start
postgres_1 |
postgres_1 | waiting for server to start....2020-05-10 01:31:31.548 UTC [46] LOG: starting PostgreSQL 12.2 (Debian 12.2-2.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
postgres_1 | 2020-05-10 01:31:31.549 UTC [46] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres_1 | 2020-05-10 01:31:31.565 UTC [47] LOG: database system was shut down at 2020-05-10 01:31:31 UTC
postgres_1 | 2020-05-10 01:31:31.569 UTC [46] LOG: database system is ready to accept connections
postgres_1 | done
postgres_1 | server started
postgres_1 | CREATE DATABASE
postgres_1 |
postgres_1 |
postgres_1 | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/tables
postgres_1 |
postgres_1 | 2020-05-10 01:31:31.772 UTC [46] LOG: received fast shutdown request
postgres_1 | waiting for server to shut down....2020-05-10 01:31:31.774 UTC [46] LOG: aborting any active transactions
postgres_1 | 2020-05-10 01:31:31.775 UTC [46] LOG: background worker "logical replication launcher" (PID 53) exited with exit code 1
postgres_1 | 2020-05-10 01:31:31.778 UTC [48] LOG: shutting down
postgres_1 | 2020-05-10 01:31:31.791 UTC [46] LOG: database system is shut down
postgres_1 | done
postgres_1 | server stopped
postgres_1 |
postgres_1 | PostgreSQL init process complete; ready for start up.
postgres_1 |
postgres_1 | 2020-05-10 01:31:31.884 UTC [1] LOG: starting PostgreSQL 12.2 (Debian 12.2-2.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
postgres_1 | 2020-05-10 01:31:31.884 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
postgres_1 | 2020-05-10 01:31:31.884 UTC [1] LOG: listening on IPv6 address "::", port 5432
postgres_1 | 2020-05-10 01:31:31.894 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres_1 | 2020-05-10 01:31:31.910 UTC [64] LOG: database system was shut down at 2020-05-10 01:31:31 UTC
postgres_1 | 2020-05-10 01:31:31.914 UTC [1] LOG: database system is ready to accept connections
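The "ignoring" line comes from the official postgres entrypoint: it only executes *.sh, *.sql, and compressed *.sql.gz/*.sql.xz files found directly in /docker-entrypoint-initdb.d/ and skips everything else, including subdirectories such as tables/. Note also that the Dockerfile above copies deploy_schemas.sql into the tables/ subdirectory, so nothing at the top level ever runs. A sketch of the fix, keeping the entry script at the top level so it can \i-include the files below it:
FROM postgres:12.2
# Subdirectories are skipped by the entrypoint, but their files can
# still be pulled in via \i from a top-level script.
ADD /tables/ /docker-entrypoint-initdb.d/tables/
# Top level, so the entrypoint actually executes it on first init.
ADD deploy_schemas.sql /docker-entrypoint-initdb.d/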

Why can't I connect to Postgres in Docker?

I used docker-compose from this project. Both docker containers were launched successfully.
kshnkvn@kshnkvn-vb:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
10fafbab73dc openpoiservice_gunicorn_flask "/ops_venv/bin/gunic…" 23 minutes ago Up 22 minutes 0.0.0.0:5000->5000/tcp openpoiservice_gunicorn_flask_1
a66fe5691455 kartoza/postgis:11.0-2.5 "/bin/sh -c /docker-…" 23 minutes ago Up 22 minutes 5432/tcp openpoiservice_psql_postgis_db_1
But when I tried to check the service for functionality, it could not connect to the database. I tried to do it manually:
kshnkvn@kshnkvn-vb:~$ docker exec -it 10fafbab73dc /bin/bash
root@10fafbab73dc:/deploy/app# psql -h localhost -U gis_admin-gis
psql: could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5432?
could not connect to server: Cannot assign requested address
Is the server running on host "localhost" (::1) and accepting
TCP/IP connections on port 5432?
root@10fafbab73dc:/deploy/app#
Strange, checked just in case that the type of container network is the bridge:
kshnkvn#kshnkvn-vb:~$ docker network ls
NETWORK ID NAME DRIVER SCOPE
81001dac99c0 bridge bridge local
8e65fb4ef6f8 host host local
94ce4e1605ef none null local
a3f48ac3facc openpoiservice_default bridge local
e3d4286df013 openpoiservice_poi_network bridge local
Checked postgres launch logs:
kshnkvn@kshnkvn-vb:~$ docker logs a66fe5691455
Add rule to pg_hba: 0.0.0.0/0
Add rule to pg_hba: replication replicator
Setup master database
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
2020-02-08 13:50:20.675 UTC [25] LOG: listening on IPv4 address "127.0.0.1", port 5432
2020-02-08 13:50:20.683 UTC [25] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2020-02-08 13:50:20.756 UTC [37] LOG: database system was interrupted; last known up at 2020-02-08 13:35:17 UTC
2020-02-08 13:50:21.830 UTC [48] postgres@postgres FATAL: the database system is starting up
psql: FATAL: the database system is starting up
2020-02-08 13:50:22.726 UTC [37] LOG: database system was not properly shut down; automatic recovery in progress
2020-02-08 13:50:22.730 UTC [37] LOG: redo starts at 0/21CCC50
2020-02-08 13:50:22.730 UTC [37] LOG: invalid record length at 0/21CCC88: wanted 24, got 0
2020-02-08 13:50:22.730 UTC [37] LOG: redo done at 0/21CCC50
2020-02-08 13:50:22.867 UTC [25] LOG: database system is ready to accept connections
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-----------+-----------+----------+---------+---------+-----------------------
gis | gis_admin | UTF8 | C.UTF-8 | C.UTF-8 |
postgres | postgres | UTF8 | C.UTF-8 | C.UTF-8 |
template0 | postgres | UTF8 | C.UTF-8 | C.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | C.UTF-8 | C.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
(4 rows)
postgres ready
Setup postgres User:Password
Creating superuser gis_admin
ALTER ROLE
Creating replication user replicator
ALTER ROLE
gis db already exists
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-----------+-----------+----------+---------+---------+-----------------------
gis | gis_admin | UTF8 | C.UTF-8 | C.UTF-8 |
postgres | postgres | UTF8 | C.UTF-8 | C.UTF-8 |
template0 | postgres | UTF8 | C.UTF-8 | C.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | C.UTF-8 | C.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
(4 rows)
2020-02-08 13:50:24.785 UTC [25] LOG: received smart shutdown request
2020-02-08 13:50:24.799 UTC [25] LOG: background worker "logical replication launcher" (PID 58) exited with exit code 1
2020-02-08 13:50:24.801 UTC [53] LOG: shutting down
2020-02-08 13:50:24.838 UTC [25] LOG: database system is shut down
Postgres initialisation process completed .... restarting in foreground
2020-02-08 13:50:25.842 UTC [148] LOG: listening on IPv4 address "0.0.0.0", port 5432
2020-02-08 13:50:25.842 UTC [148] LOG: listening on IPv6 address "::", port 5432
2020-02-08 13:50:25.850 UTC [148] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2020-02-08 13:50:25.880 UTC [150] LOG: database system was shut down at 2020-02-08 13:50:24 UTC
2020-02-08 13:50:25.887 UTC [148] LOG: database system is ready to accept connections
It looks like Postgres started on IP 0.0.0.0.
I looked at which IPs Docker uses with the ip addr show command and tried to reconnect using them:
psql: could not connect to server: Connection refused
Is the server running on host "172.17.0.1" and accepting
TCP/IP connections on port 5432?
root@10fafbab73dc:/deploy/app# psql -h 172.17.255.255 -U gis_admin-gis
psql: could not connect to server: Connection timed out
Is the server running on host "172.17.255.255" and accepting
TCP/IP connections on port 5432?
What can I try to do to connect the script to the database?
If you are running with docker-compose.yml and want to connect to PostGIS from your host, you need to map the port by adding the following to the psql_postgis_db service (in the docker-compose.yml file):
ports:
  - "25432:25432"
Moreover, you can override the username and password with the POSTGRES_USER and POSTGRES_PASS environment variables. You can see the default username and password in the docker-compose.yml file.
version: '2.2'
volumes:
  postgis-data:
services:
  gunicorn_flask:
    #network_mode: "host"
    build: .
    volumes:
      - ./osm:/deploy/app/osm
      - ./ops_settings_docker.yml:/deploy/app/openpoiservice/server/ops_settings.yml
      - ./categories_docker.yml:/deploy/app/openpoiservice/server/categories/categories.yml
    ports:
      - "5000:5000"
    mem_limit: 28g
    networks:
      - poi_network
  # Don't forget to change the host name inside ops_settings_docker.yml to the one given to the docker container.
  # Also, the port should be set to 5432 (the default) inside the same file, since the services are on the same network.
  psql_postgis_db:
    image: kartoza/postgis:11.0-2.5
    volumes:
      - postgis-data:/var/lib/postgresql
    environment:
      # If you need to create multiple databases, you can add comma-separated names, e.g. gis,data
      - POSTGRES_DB=gis
      - POSTGRES_USER=gis_admin # Important: keep the same name as the one configured inside ops_settings_docker.yml
      - POSTGRES_PASS=admin # Important: keep the same value as the one configured inside ops_settings_docker.yml
      - POSTGRES_DBNAME=gis # Important: keep the same name as the one configured inside ops_settings_docker.yml
      - ALLOW_IP_RANGE=0.0.0.0/0
    ports:
      - "25432:25432"
    restart: on-failure
    networks:
      - poi_network
networks:
  poi_network:
BTW, you must have postgresql-client installed on your localhost if you want to connect from your host.
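For example, with the mapping above, something like this should work from the host (the database name gis is an assumption based on POSTGRES_DB, and the password is the POSTGRES_PASS value from the compose file):
psql -h localhost -p 25432 -U gis_admin gis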

Docker Compose LAMP Database connection error

So after running docker-compose up I get the message "Error establishing a database connection" when visiting http://localhost:8000/.
Output of docker ps -a:
➜ ~ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5a3c015efeec dockercompose_wordpress "docker-php-entryp..." 17 minutes ago Up 16 minutes 0.0.0.0:8000->80/tcp dockercompose_wordpress_1
4e46c85345d5 dockercompose_db "docker-entrypoint..." 17 minutes ago Up 16 minutes 0.0.0.0:3306->3306/tcp dockercompose_db_1
Is this right? Or should it only show one container, since wordpress depends_on db?
So I am expecting to see my WordPress site at localhost:8000.
I had imported the database, making sure to sed all URLs to point to http://localhost.
I had also mounted ./html, which contains my source files, to the container's /var/www/html.
Did I miss anything?
Folder Structure:
Folder
|
|-db
| |-Dockerfile
| |-db.sql
|
|-html
| |- (Wordpress files)
|
|-php
| |-Dockerfile
|
|-docker-compose.yml
docker-compose.yml:
version: '3'
services:
  db:
    build:
      context: ./db
      args:
        MYSQL_DATABASE: coown
        MYSQL_ROOT_PASSWORD: root
    ports:
      - "3306:3306"
    environment:
      MYSQL_DATABASE: coown
      MYSQL_ROOT_PASSWORD: root
  wordpress:
    build:
      context: ./php
    depends_on:
      - db
    ports:
      - "8000:80"
    volumes:
      - ./html:/var/www/html
db/Dockerfile:
FROM mysql:5.7
RUN chown -R mysql:root /var/lib/mysql/
ARG MYSQL_DATABASE
ARG MYSQL_ROOT_PASSWORD
ENV MYSQL_DATABASE=$MYSQL_DATABASE
ENV MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD
ADD db.sql /etc/mysql/db.sql
RUN cp /etc/mysql/db.sql /docker-entrypoint-initdb.d
EXPOSE 3306
php/Dockerfile:
FROM php:7.0-apache
RUN docker-php-ext-install mysqli
Some output of docker-compose up:
db_1 | 2017-06-12T19:21:33.873957Z 0 [Warning] CA certificate ca.pem is self signed.
db_1 | 2017-06-12T19:21:33.875841Z 0 [Note] Server hostname (bind-address): '*'; port: 3306
db_1 | 2017-06-12T19:21:33.876030Z 0 [Note] IPv6 is available.
db_1 | 2017-06-12T19:21:33.876088Z 0 [Note] - '::' resolves to '::';
db_1 | 2017-06-12T19:21:33.876195Z 0 [Note] Server socket created on IP: '::'.
db_1 | 2017-06-12T19:21:33.885002Z 0 [Note] InnoDB: Buffer pool(s) load completed at 170612 19:21:33
db_1 | 2017-06-12T19:21:33.902676Z 0 [Warning] 'user' entry 'root@localhost' ignored in --skip-name-resolve mode.
db_1 | 2017-06-12T19:21:33.902862Z 0 [Warning] 'user' entry 'mysql.sys@localhost' ignored in --skip-name-resolve mode.
db_1 | 2017-06-12T19:21:33.902964Z 0 [Warning] 'db' entry 'sys mysql.sys@localhost' ignored in --skip-name-resolve mode.
db_1 | 2017-06-12T19:21:33.903006Z 0 [Warning] 'proxies_priv' entry '@ root@localhost' ignored in --skip-name-resolve mode.
db_1 | 2017-06-12T19:21:33.905557Z 0 [Warning] 'tables_priv' entry 'sys_config mysql.sys@localhost' ignored in --skip-name-resolve mode.
db_1 | 2017-06-12T19:21:33.910940Z 0 [Note] Event Scheduler: Loaded 0 events
db_1 | 2017-06-12T19:21:33.911310Z 0 [Note] mysqld: ready for connections.
db_1 | Version: '5.7.18' socket: '/var/run/mysqld/mysqld.sock' port: 3306 MySQL Community Server (GPL)
db_1 | 2017-06-12T19:21:33.911365Z 0 [Note] Executing 'SELECT * FROM INFORMATION_SCHEMA.TABLES;' to get a list of tables using the deprecated partition engine. You may use the startup option '--disable-partition-engine-check' to skip this check.
db_1 | 2017-06-12T19:21:33.911387Z 0 [Note] Beginning of list of non-natively partitioned tables
db_1 | 2017-06-12T19:21:33.926384Z 0 [Note] End of list of non-natively partitioned tables
wordpress_1 | 172.18.0.1 - - [12/Jun/2017:19:28:39 +0000] "GET / HTTP/1.1" 500 557 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36"
Are you using "db" as the host to connect PHP (WordPress, wp-config.php?) to your database, instead of the usual "localhost"?
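Inside the Compose network, the MySQL container is reachable by its service name db, not by localhost. A sketch of the relevant wp-config.php lines, using the credentials from the compose file above:
define('DB_NAME', 'coown');
define('DB_USER', 'root');
define('DB_PASSWORD', 'root');
define('DB_HOST', 'db'); // the Compose service name, not localhost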
