docker-flink not showing all log statements - docker

I am using two Docker Flink images with AMIDST and the sample code below. AMIDST is a probabilistic graphical model framework that supports Flink.
One image runs as the JobManager, the other as a TaskManager. The JobManager is reachable via DNS, and I provide my own log4j.properties based on the startup script bin/flink-console.sh used by these images.
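For context, that log4j.properties is along these lines (a sketch with illustrative logger names; the appender mirrors the console setup in flink-console.sh):

```properties
log4j.rootLogger=INFO, console

# Console appender, matching the bin/flink-console.sh setup
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n

# Quiet Flink's own entries, keep application logging
log4j.logger.org.apache.flink=WARN
log4j.logger.com.ness=INFO
```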
public class ParallelMLExample {

    private static final Logger LOG = LoggerFactory.getLogger(ParallelMLExample.class);

    public static void main(String[] args) throws Exception {
        final ExecutionEnvironment env;

        // Set up the Flink session
        env = ExecutionEnvironment.getExecutionEnvironment();
        env.getConfig().disableSysoutLogging();

        // Generate a random dataset
        DataFlink<DataInstance> dataFlink = new DataSetGenerator().generate(env, 1234, 1000, 5, 0);

        // Create a DAG with the NaiveBayes structure for the random dataset
        DAG dag = DAGGenerator.getNaiveBayesStructure(dataFlink.getAttributes(), "DiscreteVar4");
        LOG.info(dag.toString());

        // Create the learner object
        ParameterLearningAlgorithm learningAlgorithmFlink = new ParallelMaximumLikelihood();

        // Learning parameters
        learningAlgorithmFlink.setBatchSize(10);
        learningAlgorithmFlink.setDAG(dag);

        // Initialize the learning process
        learningAlgorithmFlink.initLearning();

        // Learn from the Flink data
        LOG.info("########## BEFORE UPDATEMODEL ##########");
        learningAlgorithmFlink.updateModel(dataFlink);
        LOG.info("########## AFTER UPDATEMODEL ##########");

        // Print the learnt Bayes net
        BayesianNetwork bn = learningAlgorithmFlink.getLearntBayesianNetwork();
        LOG.info(bn.toString());
    }
}
The problem is that I only see the LOG.info() entries up to the updateModel call; after that, silence. If I comment out that call, I can see the remaining entries. Note that I am silencing Flink's own log entries on purpose.
Creating flink_jobmanager_1 ... done
Creating flink_jobmanager_1 ...
Creating flink_taskmanager_1 ... done
Attaching to flink_jobmanager_1, flink_taskmanager_1
jobmanager_1 | Starting Job Manager
jobmanager_1 | config file:
taskmanager_1 | Starting Task Manager
jobmanager_1 | jobmanager.rpc.address: jobmanager
taskmanager_1 | config file:
jobmanager_1 | jobmanager.rpc.port: 6123
jobmanager_1 | jobmanager.heap.mb: 1024
taskmanager_1 | jobmanager.rpc.address: jobmanager
jobmanager_1 | taskmanager.heap.mb: 1024
taskmanager_1 | jobmanager.rpc.port: 6123
jobmanager_1 | taskmanager.numberOfTaskSlots: 1
taskmanager_1 | jobmanager.heap.mb: 1024
jobmanager_1 | taskmanager.memory.preallocate: false
taskmanager_1 | taskmanager.heap.mb: 1024
jobmanager_1 | parallelism.default: 1
taskmanager_1 | taskmanager.numberOfTaskSlots: 2
jobmanager_1 | web.port: 8081
taskmanager_1 | taskmanager.memory.preallocate: false
jobmanager_1 | blob.server.port: 6124
taskmanager_1 | parallelism.default: 1
jobmanager_1 | query.server.port: 6125
taskmanager_1 | web.port: 8081
jobmanager_1 | Starting jobmanager as a console application on host c16d9156ff68.
taskmanager_1 | blob.server.port: 6124
taskmanager_1 | query.server.port: 6125
taskmanager_1 | Starting taskmanager as a console application on host 76c78378d35c.
jobmanager_1 | 2018-02-18 15:31:42,809 INFO akka.event.slf4j.Slf4jLogger - Slf4jLogger started
taskmanager_1 | 2018-02-18 15:31:43,897 INFO akka.event.slf4j.Slf4jLogger - Slf4jLogger started
jobmanager_1 | 2018-02-18 15:32:18,667 INFO com.ness.ParallelMLExample - DAG
jobmanager_1 | DiscreteVar0 has 1 parent(s): {DiscreteVar4}
jobmanager_1 | DiscreteVar1 has 1 parent(s): {DiscreteVar4}
jobmanager_1 | DiscreteVar2 has 1 parent(s): {DiscreteVar4}
jobmanager_1 | DiscreteVar3 has 1 parent(s): {DiscreteVar4}
jobmanager_1 | DiscreteVar4 has 0 parent(s): {}
jobmanager_1 |
jobmanager_1 | 2018-02-18 15:32:18,679 INFO com.ness.ParallelMLExample - ########## BEFORE UPDATEMODEL ##########
The updateModel method starts with a new Configuration(), then retrieves the data set and runs a map, reduce, and collect against it, but it does not appear to touch the root loggers...
What am I missing?

Related

Sonarscanner cannot reach sonarqube server using docker-compose

I have just created my docker-compose file, trying to run a SonarQube server alongside Postgres and SonarScanner. The SonarQube server and the database can connect, but my SonarScanner cannot reach the SonarQube server.
This is my docker-compose file:
version: "3"
services:
  sonarqube:
    image: sonarqube
    build: .
    expose:
      - 9000
    ports:
      - "127.0.0.1:9000:9000"
    networks:
      - sonarnet
    environment:
      - sonar.jdbc.url=jdbc:postgresql://postgres:5432/sonar
      - sonar.jdbc.username=sonar
      - sonar.jdbc.password=sonar
    volumes:
      - sonarqube_conf:/opt/sonarqube/conf
      - sonarqube_data:/opt/sonarqube/data
      - sonarqube_extensions:/opt/sonarqube/extensions
      - sonarqube_bundled-plugins:/opt/sonarqube/lib/bundled-plugins
  postgres:
    image: postgres
    build: .
    networks:
      - sonarnet
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_USER=sonar
      - POSTGRES_PASSWORD=sonar
    volumes:
      - postgresql:/var/lib/postgresql
      - postgresql_data:/var/lib/postgresql/data
  sonarscanner:
    image: newtmitch/sonar-scanner
    networks:
      - sonarnet
    depends_on:
      - sonarqube
    volumes:
      - ./:/usr/src
networks:
  sonarnet:
volumes:
  sonarqube_conf:
  sonarqube_data:
  sonarqube_extensions:
  sonarqube_bundled-plugins:
  postgresql:
  postgresql_data:
This is my sonar-project.properties file:
# must be unique in a given SonarQube instance
sonar.projectKey=toh-token
# --- optional properties ---
#defaults to project key
#sonar.projectName=toh
# defaults to 'not provided'
#sonar.projectVersion=1.0
# Path is relative to the sonar-project.properties file. Defaults to .
#sonar.sources=$HOME/.solo/angular/toh
# Encoding of the source code. Default is default system encoding
#sonar.sourceEncoding=UTF-8
My sonar-project.properties is located in the same directory as the docker-compose file.
This is what happens whenever I start the services:
Attaching to sonarqube-postgres-1, sonarqube-sonarqube-1, sonarqube-sonarscanner-1
sonarqube-sonarqube-1 | Dropping Privileges
sonarqube-postgres-1 |
sonarqube-postgres-1 | PostgreSQL Database directory appears to contain a database; Skipping initialization
sonarqube-postgres-1 |
sonarqube-postgres-1 | 2022-06-12 20:59:39.522 UTC [1] LOG: starting PostgreSQL 14.3 (Debian 14.3-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
sonarqube-postgres-1 | 2022-06-12 20:59:39.523 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
sonarqube-postgres-1 | 2022-06-12 20:59:39.523 UTC [1] LOG: listening on IPv6 address "::", port 5432
sonarqube-postgres-1 | 2022-06-12 20:59:39.525 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
sonarqube-postgres-1 | 2022-06-12 20:59:39.533 UTC [26] LOG: database system was shut down at 2022-06-12 20:57:58 UTC
sonarqube-postgres-1 | 2022-06-12 20:59:39.542 UTC [1] LOG: database system is ready to accept connections
sonarqube-sonarscanner-1 | INFO: Scanner configuration file: /usr/lib/sonar-scanner/conf/sonar-scanner.properties
sonarqube-sonarscanner-1 | INFO: Project root configuration file: /usr/src/sonar-project.properties
sonarqube-sonarscanner-1 | INFO: SonarScanner 4.5.0.2216
sonarqube-sonarscanner-1 | INFO: Java 12-ea Oracle Corporation (64-bit)
sonarqube-sonarscanner-1 | INFO: Linux 5.10.117-1-MANJARO amd64
sonarqube-sonarscanner-1 | INFO: User cache: /root/.sonar/cache
sonarqube-sonarqube-1 | 2022.06.12 20:59:40 INFO app[][o.s.a.AppFileSystem] Cleaning or creating temp directory /opt/sonarqube/temp
sonarqube-sonarqube-1 | 2022.06.12 20:59:40 INFO app[][o.s.a.es.EsSettings] Elasticsearch listening on [HTTP: 127.0.0.1:9001, TCP: 127.0.0.1:41087]
sonarqube-sonarscanner-1 | ERROR: SonarQube server [http://sonarqube:9000] can not be reached
sonarqube-sonarscanner-1 | INFO: ------------------------------------------------------------------------
sonarqube-sonarscanner-1 | INFO: EXECUTION FAILURE
sonarqube-sonarscanner-1 | INFO: ------------------------------------------------------------------------
sonarqube-sonarscanner-1 | INFO: Total time: 0.802s
sonarqube-sonarscanner-1 | INFO: Final Memory: 3M/20M
sonarqube-sonarscanner-1 | INFO: ------------------------------------------------------------------------
sonarqube-sonarscanner-1 | ERROR: Error during SonarScanner execution
sonarqube-sonarscanner-1 | org.sonarsource.scanner.api.internal.ScannerException: Unable to execute SonarScanner analysis
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.IsolatedLauncherFactory.lambda$createLauncher$0(IsolatedLauncherFactory.java:85)
sonarqube-sonarscanner-1 | at java.base/java.security.AccessController.doPrivileged(AccessController.java:310)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.IsolatedLauncherFactory.createLauncher(IsolatedLauncherFactory.java:74)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.IsolatedLauncherFactory.createLauncher(IsolatedLauncherFactory.java:70)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.EmbeddedScanner.doStart(EmbeddedScanner.java:185)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.EmbeddedScanner.start(EmbeddedScanner.java:123)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.cli.Main.execute(Main.java:73)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.cli.Main.main(Main.java:61)
sonarqube-sonarscanner-1 | Caused by: java.lang.IllegalStateException: Fail to get bootstrap index from server
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.BootstrapIndexDownloader.getIndex(BootstrapIndexDownloader.java:42)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.JarDownloader.getScannerEngineFiles(JarDownloader.java:58)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.JarDownloader.download(JarDownloader.java:53)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.IsolatedLauncherFactory.lambda$createLauncher$0(IsolatedLauncherFactory.java:76)
sonarqube-sonarscanner-1 | ... 7 more
sonarqube-sonarscanner-1 | Caused by: java.net.ConnectException: Failed to connect to sonarqube/172.30.0.2:9000
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.connection.RealConnection.connectSocket(RealConnection.java:265)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.connection.RealConnection.connect(RealConnection.java:183)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.connection.ExchangeFinder.findConnection(ExchangeFinder.java:224)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.connection.ExchangeFinder.findHealthyConnection(ExchangeFinder.java:108)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.connection.ExchangeFinder.find(ExchangeFinder.java:88)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.connection.Transmitter.newExchange(Transmitter.java:169)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.java:41)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:117)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.cache.CacheInterceptor.intercept(CacheInterceptor.java:94)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:117)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.java:93)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.java:88)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:117)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.RealCall.getResponseWithInterceptorChain(RealCall.java:221)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.RealCall.execute(RealCall.java:81)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.ServerConnection.callUrl(ServerConnection.java:114)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.ServerConnection.downloadString(ServerConnection.java:99)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.BootstrapIndexDownloader.getIndex(BootstrapIndexDownloader.java:39)
sonarqube-sonarscanner-1 | ... 10 more
sonarqube-sonarscanner-1 | Caused by: java.net.ConnectException: Connection refused (Connection refused)
sonarqube-sonarscanner-1 | at java.base/java.net.PlainSocketImpl.socketConnect(Native Method)
sonarqube-sonarscanner-1 | at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:399)
sonarqube-sonarscanner-1 | at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:242)
sonarqube-sonarscanner-1 | at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:224)
sonarqube-sonarscanner-1 | at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:403)
sonarqube-sonarscanner-1 | at java.base/java.net.Socket.connect(Socket.java:591)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.platform.Platform.connectSocket(Platform.java:130)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.connection.RealConnection.connectSocket(RealConnection.java:263)
sonarqube-sonarscanner-1 | ... 31 more
sonarqube-sonarscanner-1 | ERROR:
sonarqube-sonarscanner-1 | ERROR: Re-run SonarScanner using the -X switch to enable full debug logging.
Is there something I am doing wrong?
As @Hans Killian said, the issue was the scanner trying to connect to the server before the server was up and running. I fixed it by adding the following to the scanner's service definition:
command: ["sh", "-c", "sleep 60 && sonar-scanner -Dsonar.projectBaseDir=/usr/src"]
This holds the scanner back until the server is up and running.
I then added the following credentials to the sonar-project.properties file:
sonar.login=admin
sonar.password=admin
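A fixed sleep works, but a healthcheck is less fragile. Assuming a Compose version that supports depends_on conditions, and that wget is available inside the sonarqube image (both assumptions worth verifying), something along these lines would hold the scanner back until the server reports UP:

```yaml
services:
  sonarqube:
    # ... rest of the service as above ...
    healthcheck:
      test: ["CMD-SHELL", "wget -qO- http://localhost:9000/api/system/status | grep -q '\"status\":\"UP\"'"]
      interval: 10s
      timeout: 5s
      retries: 30
  sonarscanner:
    depends_on:
      sonarqube:
        condition: service_healthy
```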

Why can't I access my web app from a remote container?

I have looked through past answers, since this is a common question, but no solution seems to work for me.
I have a Spring Boot Java app and a PostgreSQL database, each in its own container. Both containers run on a remote headless server on a local network. My remote server's IP address is 192.168.1.200. When I enter http://192.168.1.200:8080 in my browser from another machine, I get an 'unable to connect' response.
Here is my docker-compose file:
version: "3.3"
services:
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: aaa
    volumes:
      - /var/lib/postgresql/data
    ports:
      - "5432:5432"
    networks:
      - nat
  web:
    image: email-viewer
    ports:
      - "192.168.1.200:8080:80"
    depends_on:
      - db
    networks:
      - nat
networks:
  nat:
    external:
      name: nat
Here is the output when I run docker-compose up:
Recreating email-viewer_db_1 ... done
Recreating email-viewer_web_1 ... done
Attaching to email-viewer_db_1, email-viewer_web_1
db_1 |
db_1 | PostgreSQL Database directory appears to contain a database; Skipping initialization
db_1 |
db_1 | 2022-05-06 16:13:24.300 UTC [1] LOG: starting PostgreSQL 14.2 (Debian 14.2-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
db_1 | 2022-05-06 16:13:24.300 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db_1 | 2022-05-06 16:13:24.300 UTC [1] LOG: listening on IPv6 address "::", port 5432
db_1 | 2022-05-06 16:13:24.305 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2022-05-06 16:13:24.311 UTC [27] LOG: database system was shut down at 2022-05-06 13:54:07 UTC
db_1 | 2022-05-06 16:13:24.319 UTC [1] LOG: database system is ready to accept connections
web_1 |
web_1 | . ____ _ __ _ _
web_1 | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
web_1 | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
web_1 | \\/ ___)| |_)| | | | | || (_| | ) ) ) )
web_1 | ' |____| .__|_| |_|_| |_\__, | / / / /
web_1 | =========|_|==============|___/=/_/_/_/
web_1 | :: Spring Boot :: (v2.6.5)
web_1 |
web_1 | 2022-05-06 16:13:25.487 INFO 1 --- [ main] c.a.emailviewer.EmailViewerApplication : Starting EmailViewerApplication v0.0.1-SNAPSHOT using Java 17.0.2 on 1a13d69d117d with PID 1 (/app/email-viewer-0.0.1-SNAPSHOT.jar started by root in /app)
web_1 | 2022-05-06 16:13:25.490 INFO 1 --- [ main] c.a.emailviewer.EmailViewerApplication : No active profile set, falling back to 1 default profile: "default"
web_1 | 2022-05-06 16:13:26.137 INFO 1 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Bootstrapping Spring Data JPA repositories in DEFAULT mode.
web_1 | 2022-05-06 16:13:26.184 INFO 1 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Finished Spring Data repository scanning in 38 ms. Found 1 JPA repository interfaces.
web_1 | 2022-05-06 16:13:26.764 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8080 (http)
web_1 | 2022-05-06 16:13:26.774 INFO 1 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat]
web_1 | 2022-05-06 16:13:26.775 INFO 1 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/9.0.60]
web_1 | 2022-05-06 16:13:26.843 INFO 1 --- [ main] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
web_1 | 2022-05-06 16:13:26.843 INFO 1 --- [ main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 1297 ms
web_1 | 2022-05-06 16:13:27.031 INFO 1 --- [ main] o.hibernate.jpa.internal.util.LogHelper : HHH000204: Processing PersistenceUnitInfo [name: default]
web_1 | 2022-05-06 16:13:27.077 INFO 1 --- [ main] org.hibernate.Version : HHH000412: Hibernate ORM core version 5.6.7.Final
web_1 | 2022-05-06 16:13:27.222 INFO 1 --- [ main] o.hibernate.annotations.common.Version : HCANN000001: Hibernate Commons Annotations {5.1.2.Final}
web_1 | 2022-05-06 16:13:27.313 INFO 1 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Starting...
web_1 | 2022-05-06 16:13:27.506 INFO 1 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Start completed.
web_1 | 2022-05-06 16:13:27.539 INFO 1 --- [ main] org.hibernate.dialect.Dialect : HHH000400: Using dialect: org.hibernate.dialect.PostgreSQL10Dialect
web_1 | 2022-05-06 16:13:28.034 INFO 1 --- [ main] o.h.e.t.j.p.i.JtaPlatformInitiator : HHH000490: Using JtaPlatform implementation: [org.hibernate.engine.transaction.jta.platform.internal.NoJtaPlatform]
web_1 | 2022-05-06 16:13:28.042 INFO 1 --- [ main] j.LocalContainerEntityManagerFactoryBean : Initialized JPA EntityManagerFactory for persistence unit 'default'
web_1 | 2022-05-06 16:13:28.330 WARN 1 --- [ main] JpaBaseConfiguration$JpaWebConfiguration : spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
web_1 | 2022-05-06 16:13:28.663 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
web_1 | 2022-05-06 16:13:28.672 INFO 1 --- [ main] c.a.emailviewer.EmailViewerApplication : Started EmailViewerApplication in 3.615 seconds (JVM running for 4.024)
It turns out that the embedded Tomcat server in Spring Boot listens on port 8080 by default, so the mapping needs to be 192.168.1.200:8080:8080 instead of 192.168.1.200:8080:80.
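In Compose terms the mapping is host:container, so the web service needs:

```yaml
web:
  image: email-viewer
  ports:
    - "192.168.1.200:8080:8080"  # host port 8080 -> container port 8080 (Tomcat's default)
```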

Docker Static Analysis With Clair?

Can anyone help with Docker static analysis using Clair?
I get an error when analyzing. Help me figure it out, or tell me how to set up the Clair scanner correctly.
Getting Setup
git clone git@github.com:Charlie-belmer/Docker-security-example.git
docker-compose.yml
version: '2.1'
services:
  postgres:
    image: postgres:12.1
    restart: unless-stopped
    volumes:
      - ./docker-compose-data/postgres-data/:/var/lib/postgresql/data:rw
    environment:
      - POSTGRES_PASSWORD=ChangeMe
      - POSTGRES_USER=clair
      - POSTGRES_DB=clair
  clair:
    image: quay.io/coreos/clair:v4.3.4
    restart: unless-stopped
    volumes:
      - ./docker-compose-data/clair-config/:/config/:ro
      - ./docker-compose-data/clair-tmp/:/tmp/:rw
    depends_on:
      postgres:
        condition: service_started
    command: [--log-level=debug, --config, /config/config.yml]
    user: root
  clairctl:
    image: jgsqware/clairctl:latest
    restart: unless-stopped
    environment:
      - DOCKER_API_VERSION=1.41
    volumes:
      - ./docker-compose-data/clairctl-reports/:/reports/:rw
      - /var/run/docker.sock:/var/run/docker.sock:ro
    depends_on:
      clair:
        condition: service_started
    user: root
docker-compose up
The server starts without errors but gets stuck repeating the same message.
I don't understand what it doesn't like:
test@parallels-virtual-platform:~/Docker-security-example/clair$ docker-compose up
clair_postgres_1 is up-to-date
Recreating clair_clair_1 ... done
Recreating clair_clairctl_1 ... done
Attaching to clair_postgres_1, clair_clair_1, clair_clairctl_1
clair_1 | flag provided but not defined: -log-level
clair_1 | Usage of /bin/clair:
clair_1 | -conf value
clair_1 | The file system path to Clair's config file.
clair_1 | -mode value
clair_1 | The operation mode for this server. (default combo)
postgres_1 |
postgres_1 | PostgreSQL Database directory appears to contain a database; Skipping initialization
postgres_1 |
postgres_1 | 2021-11-16 22:55:36.851 UTC [1] LOG: starting PostgreSQL 12.1 (Debian 12.1-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
postgres_1 | 2021-11-16 22:55:36.851 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
postgres_1 | 2021-11-16 22:55:36.851 UTC [1] LOG: listening on IPv6 address "::", port 5432
postgres_1 | 2021-11-16 22:55:36.853 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres_1 | 2021-11-16 22:55:36.877 UTC [24] LOG: database system was shut down at 2021-11-16 22:54:58 UTC
postgres_1 | 2021-11-16 22:55:36.888 UTC [1] LOG: database system is ready to accept connections
postgres_1 | 2021-11-16 23:01:15.219 UTC [1] LOG: received smart shutdown request
postgres_1 | 2021-11-16 23:01:15.225 UTC [1] LOG: background worker "logical replication launcher" (PID 30) exited with exit code 1
postgres_1 |
postgres_1 | PostgreSQL Database directory appears to contain a database; Skipping initialization
postgres_1 |
postgres_1 | 2021-11-16 23:02:11.993 UTC [1] LOG: starting PostgreSQL 12.1 (Debian 12.1-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
postgres_1 | 2021-11-16 23:02:11.994 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
postgres_1 | 2021-11-16 23:02:11.994 UTC [1] LOG: listening on IPv6 address "::", port 5432
postgres_1 | 2021-11-16 23:02:11.995 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres_1 | 2021-11-16 23:02:12.009 UTC [26] LOG: database system was interrupted; last known up at 2021-11-16 23:00:37 UTC
postgres_1 | 2021-11-16 23:02:12.164 UTC [26] LOG: database system was not properly shut down; automatic recovery in progress
postgres_1 | 2021-11-16 23:02:12.166 UTC [26] LOG: redo starts at 0/1745C50
postgres_1 | 2021-11-16 23:02:12.166 UTC [26] LOG: invalid record length at 0/1745D38: wanted 24, got 0
postgres_1 | 2021-11-16 23:02:12.166 UTC [26] LOG: redo done at 0/1745D00
postgres_1 | 2021-11-16 23:02:12.180 UTC [1] LOG: database system is ready to accept connections
postgres_1 | 2021-11-16 23:02:12.471 UTC [33] ERROR: duplicate key value violates unique constraint "lock_name_key"
postgres_1 | 2021-11-16 23:02:12.471 UTC [33] DETAIL: Key (name)=(updater) already exists.
postgres_1 | 2021-11-16 23:02:12.471 UTC [33] STATEMENT: INSERT INTO Lock(name, owner, until) VALUES($1, $2, $3)
clair_clair_1 exited with code 2
clair_1 | flag provided but not defined: -log-level
clair_1 | Usage of /bin/clair:
clair_1 | -conf value
clair_1 | The file system path to Clair's config file.
clair_1 | -mode value
clair_1 | The operation mode for this server. (default combo)
clair_1 | flag provided but not defined: -log-level
clair_1 | Usage of /bin/clair:
clair_1 | -conf value
clair_1 | The file system path to Clair's config file.
clair_1 | -mode value
clair_1 | The operation mode for this server. (default combo)
clair_clair_1 exited with code 2
clair_1 | flag provided but not defined: -log-level
clair_1 | Usage of /bin/clair:
clair_1 | -conf value
clair_1 | The file system path to Clair's config file.
clair_1 | -mode value
clair_1 | The operation mode for this server. (default combo)
clair_clair_1 exited with code 2
clair_1 | flag provided but not defined: -log-level
clair_1 | Usage of /bin/clair:
clair_1 | -conf value
clair_1 | The file system path to Clair's config file.
clair_1 | -mode value
clair_1 | The operation mode for this server. (default combo)
clair_clair_1 exited with code 2
clair_1 | flag provided but not defined: -log-level
clair_1 | Usage of /bin/clair:
clair_1 | -conf value
clair_1 | The file system path to Clair's config file.
clair_1 | -mode value
clair_1 | The operation mode for this server. (default combo)
clair_clair_1 exited with code 2
clair_1 | flag provided but not defined: -log-level
clair_1 | Usage of /bin/clair:
clair_1 | -conf value
clair_1 | The file system path to Clair's config file.
clair_1 | -mode value
clair_1 | The operation mode for this server. (default combo)
Installing a bad container:
docker pull imiell/bad-dockerfile
docker-compose exec clairctl clairctl analyze -l imiell/bad-dockerfile
client quit unexpectedly
2021-11-16 23:05:19.221606 C | cmd: pushing image "imiell/bad-dockerfile:latest": pushing layer to clair: Post http://clair:6060/v1/layers: dial tcp: lookup clair: Try again
I don't understand what it doesn't like about the analysis.
I just solved this yesterday: version 4.3.4 of Clair only supports two command-line options, mode and conf. Your output bears this out:
clair_1 | flag provided but not defined: -log-level
clair_1 | Usage of /bin/clair:
clair_1 | -conf value
clair_1 | The file system path to Clair's config file.
clair_1 | -mode value
clair_1 | The operation mode for this server. (default combo)
Change the command line to only specify your configuration file (line 23 of your docker-compose.yml) and place your debug directive in the configuration file.
command: [--conf, /config/config.yml]
This should get Clair running.
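As a sketch, the debug directive then moves into the config file, roughly like this (field names per the Clair v4 config format, so verify against your version):

```yaml
# config/config.yml (fragment)
log_level: debug
http_listen_addr: ":6060"
```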
I think you are using the old clairctl with the new Clair v4. You should be using clairctl from here: https://github.com/quay/clair/releases/tag/v4.3.5.

Spring security OAuth redirect endpoint not found

I have added Spring Security to an existing JEE application to add OAuth to the application.
The security configuration is set to protect the REST API, and that part seems to work fine.
When the UI requests a protected URL, the response contains a redirect to 'oauth2/authorize/keycloak'.
But that's where the story ends, since the request to 'oauth2/authorize/keycloak' itself returns a 404.
I am pretty out of date with Spring Security (I last used it, in Spring applications, about eight years ago), and I have no idea where to find the implementation of the endpoint 'oauth2/authorize/keycloak' in order to figure out what is missing or wrong in my setup.
The relevant part of my dependency tree looks as follows:
[INFO] | +- com.mycompany.auth:authentication-sso-configuration:jar:1.0.0-SNAPSHOT:compile
[INFO] | | +- org.reactivestreams:reactive-streams:jar:1.0.3:compile
[INFO] | | +- org.springframework.security:spring-security-oauth2-client:jar:5.3.3.RELEASE:compile
[INFO] | | | +- com.nimbusds:oauth2-oidc-sdk:jar:7.5:compile
[INFO] | | | | +- com.github.stephenc.jcip:jcip-annotations:jar:1.0-1:compile
[INFO] | | | | +- com.nimbusds:content-type:jar:2.0:compile
[INFO] | | | | +- net.minidev:json-smart:jar:2.3:compile (version selected from constraint [1.3.1,2.3])
[INFO] | | | | | \- net.minidev:accessors-smart:jar:1.2:compile
[INFO] | | | | | \- org.ow2.asm:asm:jar:5.0.4:compile
[INFO] | | | | \- com.nimbusds:lang-tag:jar:1.4.4:compile
[INFO] | | | +- org.springframework.security:spring-security-oauth2-core:jar:5.3.3.RELEASE:compile
[INFO] | | | \- org.springframework:spring-core:jar:5.2.6.RELEASE:compile
[INFO] | | | \- org.springframework:spring-jcl:jar:5.2.6.RELEASE:compile
[INFO] | | +- org.springframework.security:spring-security-oauth2-jose:jar:5.3.3.RELEASE:compile
[INFO] | | | \- com.nimbusds:nimbus-jose-jwt:jar:8.18.1:compile
[INFO] | | +- org.springframework.security:spring-security-oauth2-resource-server:jar:5.3.3.RELEASE:compile
[INFO] | | +- org.springframework.security:spring-security-core:jar:5.3.3.RELEASE:compile
[INFO] | | | +- org.springframework:spring-aop:jar:5.2.6.RELEASE:compile
[INFO] | | | +- org.springframework:spring-beans:jar:5.2.6.RELEASE:compile
[INFO] | | | +- org.springframework:spring-context:jar:5.2.6.RELEASE:compile
[INFO] | | | \- org.springframework:spring-expression:jar:5.2.6.RELEASE:compile
[INFO] | | +- org.springframework.security:spring-security-web:jar:5.3.3.RELEASE:compile
[INFO] | | | \- org.springframework:spring-web:jar:5.2.6.RELEASE:compile
[INFO] | | +- org.springframework.security:spring-security-config:jar:5.3.3.RELEASE:compile
[INFO] | | +- org.springframework.security:spring-security-saml2-service-provider:jar:5.3.3.RELEASE:compile
[INFO] | | | +- org.opensaml:opensaml-core:jar:3.4.5:compile
[INFO] | | | | +- io.dropwizard.metrics:metrics-core:jar:3.1.2:compile
[INFO] | | | | \- net.shibboleth.utilities:java-support:jar:7.5.1:compile
[INFO] | | | +- org.opensaml:opensaml-saml-api:jar:3.4.5:compile
[INFO] | | | | +- org.opensaml:opensaml-xmlsec-api:jar:3.4.5:compile
[INFO] | | | | | \- org.opensaml:opensaml-security-api:jar:3.4.5:compile
[INFO] | | | | +- org.opensaml:opensaml-soap-api:jar:3.4.5:compile
[INFO] | | | | +- org.opensaml:opensaml-messaging-api:jar:3.4.5:compile
[INFO] | | | | +- org.opensaml:opensaml-profile-api:jar:3.4.5:compile
[INFO] | | | | \- org.opensaml:opensaml-storage-api:jar:3.4.5:compile
[INFO] | | | \- org.opensaml:opensaml-saml-impl:jar:3.4.5:compile
[INFO] | | | +- org.opensaml:opensaml-security-impl:jar:3.4.5:compile
[INFO] | | | +- org.opensaml:opensaml-xmlsec-impl:jar:3.4.5:compile
[INFO] | | | | \- org.apache.santuario:xmlsec:jar:2.0.10:compile
[INFO] | | | | \- com.fasterxml.woodstox:woodstox-core:jar:5.0.3:compile
[INFO] | | | | \- org.codehaus.woodstox:stax2-api:jar:3.1.4:compile
[INFO] | | | +- org.opensaml:opensaml-soap-impl:jar:3.4.5:compile
[INFO] | | | \- org.apache.velocity:velocity:jar:1.7:compile
[INFO] | | +- org.apache.logging.log4j:log4j-api:jar:2.13.3:compile
[INFO] | | +- org.apache.logging.log4j:log4j-core:jar:2.13.3:compile
[INFO] | | +- org.yaml:snakeyaml:jar:1.26:compile
[INFO] | | +- commons-collections:commons-collections:jar:3.2.2:compile
[INFO] | | +- org.bouncycastle:bcprov-jdk15on:jar:1.66:compile
[INFO] | | +- org.cryptacular:cryptacular:jar:1.2.4:compile
[INFO] | | \- org.apache.commons:commons-configuration2:jar:2.7:compile
[INFO] | | \- org.apache.commons:commons-text:jar:1.8:compile
And this is the configuration for OAuth
# OAuth2 login manifest
oauth2Login:
  authorizationCode:
    authorizationUri: "http://localhost:8180/auth/realms/master/protocol/openid-connect/auth"
    scope:
      - "openid"
      - "finx"
    redirectUriTemplate: "{baseUrl}/login/oauth2/code/{registrationId}"
    tokenUri: "http://localhost:8180/auth/realms/master/protocol/openid-connect/token"
    userInfoUri: "http://localhost:8180/auth/realms/master/protocol/openid-connect/userinfo"
    jwkSetKeyUri: "http://localhost:8180/auth/realms/master/protocol/openid-connect/certs"
    registrationId: "keycloak"
    clientId: "finx_oauth2"
    clientSecret:
      vaultType: PLAIN_TEXT
      secret: "my-secret"
    clientName: "FinX"
  entryPoints:
    - pathMatcher: "/ledger-api/**"
    - pathMatcher: "/ledger-api-internal/**"
    - pathMatcher: "/ledger-api-ui/**"
# OAuth2 resource server
oauth2ResourceServer:
  keySetUri: "http://localhost:8180/auth/realms/master/protocol/openid-connect/certs"
  pathMatchers:
    - "/api/**"
    - "/orchestration-api/**"
I have been digging through the Spring source code to find the implementation of the endpoint 'oauth2/authorize/keycloak', but this is not an easy task.
So I am looking for pointers on what could be missing or wrong in my configuration.
By default, the OAuth 2.0 login page is auto-generated by the DefaultLoginPageGeneratingFilter.
The login URL for a client defaults to OAuth2AuthorizationRequestRedirectFilter.DEFAULT_AUTHORIZATION_REQUEST_BASE_URI + "/{registrationId}". Given your configuration (registrationId: "keycloak"), this resolves to /oauth2/authorization/keycloak (note: authorization, not authorize).
Please check your WebSecurityConfigurerAdapter configuration. Try overriding the default login page by configuring oauth2Login().loginPage() and (optionally) oauth2Login().authorizationEndpoint().baseUri().
The following listing shows an example:
@Override
protected void configure(HttpSecurity http) throws Exception {
    http
        .oauth2Login()
            .loginPage("/login/oauth2")
            ...
            .authorizationEndpoint()
                .baseUri("/login/oauth2/authorization")
            ...
}
Please check OAuth 2.0 Login - Advanced Configuration for more information.

Application templates and instances manager for docker deployment?

I'm looking into application deployment with Docker containers for production on some servers (not hundreds).
I can see deployment managers like docker-compose, which deploy according to a YAML service description file.
Official docker-compose.yml example file:
web:
  build: .
  ports:
    - "5000:5000"
  volumes:
    - .:/code
  links:
    - redis
redis:
  image: redis
I'm looking for a solution to manage and produce these YAML files and communicate with deployment managers like docker-compose.
This solution should permit managing application templates, their deployed instances, their configuration, and so on.
An illustration:
                    docker-compose.yml                 Docker
+----------------+                      +------------------------------+
| APP manager    |      +---------+     |          containers          |
|                |----->| Mysql_a |     |  +---------+  +---------+    |
| Templates:     |      | Mysql_b |-----+->| MySQL_a |  | Mysql_b |    |
|   MySQL tpl    |      | Mysql_c |     |  +---------+  +---------+    |
|   Wordpress tpl|      | Wp_a    |     |  +---------+  +---------+    |
|                |      +---------+     |  | Mysql_c |  | Wp_a    |    |
| Instances:     |    docker-compose    |  +---------+  +---------+    |
|   Mysql_a      |                      |                              |
|   Mysql_b      |                      |                              |
|   Mysql_c      |                      |                              |
|   Wp_a         |                      |                              |
+----------------+                      +------------------------------+
My first thought is Panamax, but is it appropriate? What other open-source solutions exist?
