DropWizard can't find field database (v 1.0.5) - dropwizard

Dropwizard isn't picking up my configuration, and I don't understand why.
AppConfiguration class:
package se.test.app;

import com.fasterxml.jackson.annotation.JsonProperty;
import io.dropwizard.Configuration;
import io.dropwizard.db.DataSourceFactory;
import javax.validation.Valid;
import javax.validation.constraints.NotNull;

public class AppConfiguration extends Configuration {

    @Valid
    @NotNull
    private DataSourceFactory database = new DataSourceFactory();

    @JsonProperty("database")
    public DataSourceFactory getDataSourceFactory() {
        return database;
    }
}
And in app.yaml
database:
  driverClass: com.mysql.jdbc.Driver
  user: db_user
  password: db_password
  url: jdbc:mysql://localhost/my_db
  properties:
    charSet: UTF-8
    hibernate.dialect: org.hibernate.dialect.MySQLDialect
But when running the app, the terminal shows:
* Unrecognized field at: database
Did you mean?:
- metrics
- server
- logging

Include dropwizard-jdbi in your pom.xml.
<dependency>
    <groupId>io.dropwizard</groupId>
    <artifactId>dropwizard-jdbi</artifactId>
    <version>1.0.5</version>
</dependency>
See the documentation of Dropwizard JDBI.
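Once the dependency is on the classpath, the DataSourceFactory from the configuration is typically handed to a DBIFactory in the application's run method. A minimal sketch, assuming Dropwizard 1.0.x and an application class named App (the class name and the "mysql" pool name are placeholders, not taken from the question):

import io.dropwizard.Application;
import io.dropwizard.jdbi.DBIFactory;
import io.dropwizard.setup.Environment;
import org.skife.jdbi.v2.DBI;

public class App extends Application<AppConfiguration> {

    @Override
    public void run(AppConfiguration configuration, Environment environment) {
        // Build a managed DBI instance from the "database" section of app.yaml
        final DBIFactory factory = new DBIFactory();
        final DBI jdbi = factory.build(environment, configuration.getDataSourceFactory(), "mysql");
        // Register resources/DAOs that use jdbi here
    }

    public static void main(String[] args) throws Exception {
        new App().run(args);
    }
}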

In Eclipse:
Run > Run Configurations > Arguments :
"server {name of your yaml file}.yml"

Related

NestJS Neo4j : Cannot find module 'neo4j-driver/types/integer' or its corresponding type declarations

I'm trying to run my NestJS app, but I have a problem in node_modules:
node_modules/nest-neo4j/dist/neo4j.service.d.ts:10:32 - error TS2307: Cannot find module 'neo4j-driver/types/integer' or its corresponding type declarations.
10 int(value: number): import("neo4j-driver/types/integer").default;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
node_modules/nest-neo4j/dist/neo4j.service.d.ts:12:47 - error TS2307: Cannot find module 'neo4j-driver/types/session' or its corresponding type declarations.
12 getReadSession(database?: string): import("neo4j-driver/types/session").default;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
node_modules/nest-neo4j/dist/neo4j.service.d.ts:13:48 - error TS2307: Cannot find module 'neo4j-driver/types/session' or its corresponding type declarations.
13 getWriteSession(database?: string): import("neo4j-driver/types/session").default;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Found 3 error(s).
This is my app.module.ts
import { Module } from '@nestjs/common';
import { AppController } from './app.controller';
import { AppService } from './app.service';
import { Neo4jModule } from 'nest-neo4j';

@Module({
  imports: [
    Neo4jModule.forRoot({
      scheme: 'neo4j',
      host: 'localhost',
      port: 7687,
      username: 'neo4j',
      password: 'ingrid-ticket-capital-spirit-reform-6035'
    })
  ],
  controllers: [AppController],
  providers: [AppService],
})
export class AppModule {}
Has anyone run into the same error?
Thanks for helping me!
You should either downgrade the neo4j-driver to 4.1.0 (that's what nest-neo4j 0.1.4 has been tested with), or remove neo4j-driver entirely from your direct dependencies, since nest-neo4j already depends on it. That way, nest-neo4j is always responsible for the driver version it works with, avoiding conflicts like this.

Spring Boot Admin with Eureka and Docker Swarm

EDIT/SOLUTION:
I've got it, partly thanks to @anemyte's comment. Although the eureka.hostname property was not the issue at play (though it did warrant correction), looking more closely got me to the true cause of the problem: the network interface in use, port forwarding, and (bad) luck.
The services that I chose for this prototypical implementation were those that have port-forwardings in a production setting (I must have unfortunately forgotten to add a port-forwarding to the example service below - dumb, though I do not know if this would have helped).
When a Docker Swarm service has a port forwarding, the container has an additional bridge interface in addition to the overlay interface which is used for internal container-to-container communication.
Unfortunately, the client services were choosing to register with Eureka with their bridge interface IP as the advertised IP instead of the internal swarm IP - possibly because that is what InetAddress.getLocalHost() (which is internally used by Spring Cloud) would return in that case.
This led me to erroneously believe that Spring Boot Admin could reach these services - as I externally could, when it in fact could not as the wrong IP was being advertised. Using cURL to verify this only compounded the confusion as I was using the overlay IP to check whether the services can communicate, which is not the one that was being registered with Eureka.
The (provisional) solution to the issue was setting the spring.cloud.inetutils.preferred-networks setting to 10.0, which is the default IP address pool (more specifically: 10.0.0.0/8) for the internal swarm networks (documentation here). There is also a blacklist approach using spring.cloud.inetutils.ignored-networks, however I prefer not to use it.
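For reference, a minimal sketch of how that setting might look in a client's application.yml (assuming the Spring Cloud Commons property format; this is not copied from the project's actual config):

spring:
  cloud:
    inetutils:
      # Only advertise addresses from the swarm overlay pool (10.0.0.0/8)
      preferred-networks:
        - 10.0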
In this case, client applications advertised their actual swarm overlay IP to Eureka, and SBA was able to reach them.
I do find it a bit odd that I did not get any error messages from SBA, and will be opening an issue on their tracker. Perhaps I was simply doing something wrong.
(original question follows)
I have the following setup:
Service discovery using Eureka (with eureka.client.fetch-registry=true and eureka.instance.preferIpAddress=true)
Spring Boot Admin running in the same application as Eureka, with spring.boot.admin.context-path=/admin
Keycloak integration, such that:
SBA itself uses a service account to poll the various /actuator endpoints of my client applications.
The SBA UI itself is protected via a login page which expects an administrative login.
Locally, this setup works. When I start my eureka-server application together with client applications, I see the following correct behaviour:
Eureka running on e.g. localhost:8761
Client applications successfully registering with Eureka via IP registration (eureka.instance.preferIpAddress=true)
SBA running at e.g. localhost:8761/admin and discovering my services
localhost:8761/admin correctly redirects to my Keycloak login page, and login correctly provides a session for the SBA UI
SBA itself successfully polling the /actuator endpoints of any registered applications.
However, I have issues replicating this setup inside a Docker Swarm.
I have two Docker Services, let's say eureka-server and client-api - both are created using the same network and the containers can reach each other via this network (via e.g. curl). eureka-server correctly starts and client-api registers with Eureka right away.
Attempting to navigate to eureka_url/admin correctly shows the Keycloak login page and redirects back to the Spring Boot Admin UI after a successful login. However, no applications are registered and I cannot figure out why.
I've attempted to enable more debug/trace logging, but I see absolutely no logs; it's as if SBA is simply not fetching the Eureka registry.
Does anyone know of a way to troubleshoot this behaviour? Has anyone had this issue?
EDIT:
I'm not quite sure which settings may be pertinent to the issue, but here are some of my configuration files (as code snippets since they're not that small, I hope that's OK):
application.yaml
(Includes base eureka properties, SBA properties, and Keycloak properties for SBA)
---
eureka:
  hostname: localhost
  port: 8761
  client:
    register-with-eureka: false
    # Registry must be fetched so that Spring Boot Admin knows that there are registered applications
    fetch-registry: true
    serviceUrl:
      defaultZone: http://${eureka.hostname}:${eureka.port}/eureka/
  instance:
    lease-renewal-interval-in-seconds: 10
    lease-expiration-duration-in-seconds: 30
    environment: eureka-test-${user.name}
  server:
    enable-self-preservation: false # Intentionally disabled for non-production

spring:
  application:
    name: eureka-server
  boot:
    admin:
      client:
        prefer-ip: true
      # Since we are running in Eureka, "/" is already the root path for Eureka itself
      # Register SBA under the "/admin" path
      context-path: /admin
  cloud:
    config:
      enabled: false
  main:
    allow-bean-definition-overriding: true

keycloak:
  realm: ${realm}
  auth-server-url: ${auth_url}
  # Client ID
  resource: spring-boot-admin-automated
  # Client secret used for service account grant
  credentials:
    secret: ${client_secret}
  ssl-required: external
  autodetect-bearer-only: true
  use-resource-role-mappings: false
  token-minimum-time-to-live: 90
  principal-attribute: preferred_username
build.gradle
// Versioning / Spring parent POMs
apply from: new File(project(':buildscripts').projectDir, '/dm-versions.gradle')

configurations {
    all*.exclude module: 'spring-boot-starter-tomcat'
}

ext {
    springBootAdminVersion = '2.3.1'
    keycloakVersion = '11.0.2'
}

dependencies {
    compileOnly 'org.projectlombok:lombok'
    implementation 'org.springframework.cloud:spring-cloud-starter-netflix-eureka-server'
    implementation "de.codecentric:spring-boot-admin-starter-server:${springBootAdminVersion}"
    implementation 'org.keycloak:keycloak-spring-boot-starter'
    implementation 'org.springframework.boot:spring-boot-starter-security'
    compile "org.keycloak:keycloak-admin-client:${keycloakVersion}"
    testCompileOnly 'org.projectlombok:lombok'
}

dependencyManagement {
    imports {
        mavenBom "org.keycloak.bom:keycloak-adapter-bom:${keycloakVersion}"
    }
}
The actual application code:
package com.app.eureka;
import de.codecentric.boot.admin.server.config.EnableAdminServer;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;
@EnableAdminServer
@EnableEurekaServer
@SpringBootApplication
public class EurekaServer {

    public static void main(String[] args) {
        SpringApplication.run(EurekaServer.class, args);
    }
}
Keycloak configuration:
package com.app.eureka.keycloak.config;
import de.codecentric.boot.admin.server.web.client.HttpHeadersProvider;
import org.keycloak.KeycloakPrincipal;
import org.keycloak.KeycloakSecurityContext;
import org.keycloak.OAuth2Constants;
import org.keycloak.adapters.springboot.KeycloakSpringBootProperties;
import org.keycloak.adapters.springsecurity.KeycloakConfiguration;
import org.keycloak.adapters.springsecurity.authentication.KeycloakAuthenticationProvider;
import org.keycloak.adapters.springsecurity.config.KeycloakWebSecurityConfigurerAdapter;
import org.keycloak.adapters.springsecurity.token.KeycloakAuthenticationToken;
import org.keycloak.admin.client.Keycloak;
import org.keycloak.admin.client.KeycloakBuilder;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Scope;
import org.springframework.context.annotation.ScopedProxyMode;
import org.springframework.http.HttpHeaders;
import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.core.authority.mapping.SimpleAuthorityMapper;
import org.springframework.security.core.session.SessionRegistry;
import org.springframework.security.core.session.SessionRegistryImpl;
import org.springframework.security.web.authentication.session.RegisterSessionAuthenticationStrategy;
import org.springframework.security.web.authentication.session.SessionAuthenticationStrategy;
import org.springframework.web.context.WebApplicationContext;
import org.springframework.web.context.request.RequestContextHolder;
import org.springframework.web.context.request.ServletRequestAttributes;
import java.security.Principal;
import java.util.Objects;
@KeycloakConfiguration
@EnableConfigurationProperties(KeycloakSpringBootProperties.class)
class KeycloakConfig extends KeycloakWebSecurityConfigurerAdapter {

    private static final String X_API_KEY = System.getProperty("sba_api_key");

    @Value("${keycloak.token-minimum-time-to-live:60}")
    private int tokenMinimumTimeToLive;

    /**
     * {@link HttpHeadersProvider} used to populate the {@link HttpHeaders} for
     * accessing the state of the discovered clients.
     *
     * @param keycloak
     * @return
     */
    @Bean
    public HttpHeadersProvider keycloakBearerAuthHeaderProvider(final Keycloak keycloak) {
        return provider -> {
            String accessToken = keycloak.tokenManager().getAccessTokenString();
            HttpHeaders headers = new HttpHeaders();
            headers.add("X-Api-Key", X_API_KEY);
            headers.add("X-Authorization-Token", "keycloak-bearer " + accessToken);
            return headers;
        };
    }

    /**
     * The Keycloak Admin client that provides the service-account Access-Token
     *
     * @param props
     * @return keycloakClient the prepared admin client
     */
    @Bean
    public Keycloak keycloak(KeycloakSpringBootProperties props) {
        final String secretString = "secret";
        Keycloak keycloakAdminClient = KeycloakBuilder.builder()
                .serverUrl(props.getAuthServerUrl())
                .realm(props.getRealm())
                .grantType(OAuth2Constants.CLIENT_CREDENTIALS)
                .clientId(props.getResource())
                .clientSecret((String) props.getCredentials().get(secretString))
                .build();
        keycloakAdminClient.tokenManager().setMinTokenValidity(tokenMinimumTimeToLive);
        return keycloakAdminClient;
    }

    /**
     * Put the SBA UI behind a Keycloak-secured login page.
     *
     * @param http
     */
    @Override
    protected void configure(HttpSecurity http) throws Exception {
        super.configure(http);
        http
            .csrf().disable()
            .authorizeRequests()
            .antMatchers("/**/*.css", "/admin/img/**", "/admin/third-party/**").permitAll()
            .antMatchers("/admin/**").hasRole("ADMIN")
            .anyRequest().permitAll();
    }

    @Autowired
    public void configureGlobal(final AuthenticationManagerBuilder auth) {
        SimpleAuthorityMapper grantedAuthorityMapper = new SimpleAuthorityMapper();
        grantedAuthorityMapper.setPrefix("ROLE_");
        grantedAuthorityMapper.setConvertToUpperCase(true);
        KeycloakAuthenticationProvider keycloakAuthenticationProvider = keycloakAuthenticationProvider();
        keycloakAuthenticationProvider.setGrantedAuthoritiesMapper(grantedAuthorityMapper);
        auth.authenticationProvider(keycloakAuthenticationProvider);
    }

    @Bean
    @Override
    protected SessionAuthenticationStrategy sessionAuthenticationStrategy() {
        return new RegisterSessionAuthenticationStrategy(buildSessionRegistry());
    }

    @Bean
    protected SessionRegistry buildSessionRegistry() {
        return new SessionRegistryImpl();
    }

    /**
     * Allows injecting a request-scoped wrapper for {@link KeycloakSecurityContext}.
     * <p>
     * Returns the {@link KeycloakSecurityContext} from the Spring
     * {@link ServletRequestAttributes}'s {@link Principal}.
     * <p>
     * The principal must support retrieval of the KeycloakSecurityContext, so at
     * this point, only {@link KeycloakPrincipal} values and
     * {@link KeycloakAuthenticationToken} are supported.
     *
     * @return the current <code>KeycloakSecurityContext</code>
     */
    @Bean
    @Scope(scopeName = WebApplicationContext.SCOPE_REQUEST, proxyMode = ScopedProxyMode.TARGET_CLASS)
    public KeycloakSecurityContext provideKeycloakSecurityContext() {
        ServletRequestAttributes attributes = (ServletRequestAttributes) RequestContextHolder.getRequestAttributes();
        Principal principal = Objects.requireNonNull(attributes).getRequest().getUserPrincipal();
        if (principal == null) {
            return null;
        }
        if (principal instanceof KeycloakAuthenticationToken) {
            principal = (Principal) ((KeycloakAuthenticationToken) principal).getPrincipal();
        }
        if (principal instanceof KeycloakPrincipal<?>) {
            return ((KeycloakPrincipal<?>) principal).getKeycloakSecurityContext();
        }
        return null;
    }
}
KeycloakConfigurationResolver
(separate class to prevent circular bean dependency that happens for some reason)
package com.app.eureka.keycloak.config;
import org.keycloak.adapters.springboot.KeycloakSpringBootConfigResolver;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
@Configuration
public class KeycloakConfigurationResolver {

    /**
     * Load Keycloak configuration from application.properties or application.yml
     *
     * @return
     */
    @Bean
    public KeycloakSpringBootConfigResolver keycloakConfigResolver() {
        return new KeycloakSpringBootConfigResolver();
    }
}
Logout controller
package com.app.eureka.keycloak.config;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.PostMapping;
import javax.servlet.http.HttpServletRequest;
@Controller
class LogoutController {

    /**
     * Logs the current user out, preventing access to the SBA UI
     *
     * @param request
     * @return
     * @throws Exception
     */
    @PostMapping("/admin/logout")
    public String logout(final HttpServletRequest request) throws Exception {
        request.logout();
        return "redirect:/admin";
    }
}
I unfortunately do not have a docker-compose.yaml as our deployment is done mostly through Ansible, and anonymizing those scripts is rather difficult.
The services are ultimately created as follows (using docker service create):
(some of these networks may not be relevant as this is a local swarm running on my personal node, of note are the swarm networks)
dev@ws:~$ docker network ls
NETWORK ID      NAME              DRIVER    SCOPE
3ba4a65c319f    bridge            bridge    local
21065811cbff    docker_gwbridge   bridge    local
ti1ksbdxlouo    services          overlay   swarm
c59778b105b5    host              host      local
379lzdi0ljp4    ingress           overlay   swarm
dd92d2f75a31    none              null      local
eureka-server Dockerfile:
FROM registry/image:latest
MAINTAINER "dev@com.app"
COPY eureka-server.jar /home/myuser/eureka-server.jar
USER myuser
WORKDIR /home/myuser
CMD /usr/bin/java -jar \
-Xmx523351K -Xss1M -XX:ReservedCodeCacheSize=240M \
-XX:MaxMetaspaceSize=115625K \
-Djava.security.egd=file:/dev/urandom eureka-server.jar \
--server.port=8761; sh
Eureka/SBA app Docker swarm service:
dev@ws:~$ docker service create --name eureka-server -p 8080:8761 --replicas 1 --network services --hostname eureka-server --limit-cpu 1 --limit-memory 768m eureka-server
Client applications are then started as follows:
Dockerfile
FROM registry/image:latest
MAINTAINER "dev@com.app"
COPY client-api.jar /home/myuser/client-api.jar
USER myuser
WORKDIR /home/myuser
CMD /usr/bin/java -jar \
-Xmx523351K -Xss1M -XX:ReservedCodeCacheSize=240M \
-XX:MaxMetaspaceSize=115625K \
-Djava.security.egd=file:/dev/urandom -Deureka.instance.hostname=client-api client-api.jar \
--eureka.zone=http://eureka-server:8761/eureka --server.port=0; sh
And then created as Swarm services as follows:
dev@ws:~$ docker service create --name client-api --replicas 1 --network services --hostname client-api --limit-cpu 1 --limit-memory 768m client-api
On the client side, of note are the following eureka.client settings:
eureka:
  name: ${spring.application.name}
  instance:
    leaseRenewalIntervalInSeconds: 10
    instanceId: ${spring.cloud.client.hostname}:${spring.application.name}:${spring.application.instanceId:${random.int}}
    preferIpAddress: true
  client:
    registryFetchIntervalSeconds: 5
That's all I can think of right now. The created docker services are running in the same network and can ping each other by IP as well as by hostname (cannot show output right now as I am not actively working on this at the moment, unfortunately).
In the Eureka UI I can, in fact, see my client applications registered and running - it's only SBA which does not seem to notice that there are any.
I found nothing wrong with the configuration you have presented. The only weak lead I see is eureka.hostname=localhost from application.yml: localhost and loopback IPs are two things that are better avoided with Swarm. I think you should check whether it isn't something network-related.

The KafkaContainer in TestContainers hangs till timeout with "Timed out waiting for container port to open"

I am trying to start a Kafka container with Testcontainers.
My code looks like this:
import java.io.File;
import java.time.Duration;
import org.junit.jupiter.api.Test;
import org.testcontainers.containers.DockerComposeContainer;
import org.testcontainers.containers.KafkaContainer;
import org.testcontainers.containers.PostgreSQLContainer;
import org.testcontainers.containers.wait.strategy.Wait;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
@Testcontainers
public class FirstTest {

    @Container
    public static DockerComposeContainer environment =
        new DockerComposeContainer(new File("src/test/resources/compose-test.yml"))
            .withExposedService(
                "redis_1", 6379, Wait.forListeningPort().withStartupTimeout(Duration.ofSeconds(60)));

    @Container
    public static KafkaContainer kafka =
        new KafkaContainer()
            .waitingFor(Wait.forListeningPort().withStartupTimeout(Duration.ofSeconds(360)));

    @Container public PostgreSQLContainer postgresContainer = new PostgreSQLContainer();

    @Test
    void integrationTest() {
        String redisUrl =
            environment.getServiceHost("redis_1", 6379)
                + ":"
                + environment.getServicePort("redis_1", 6379);
        String jdbcUrl = postgresContainer.getJdbcUrl();
        String username = postgresContainer.getUsername();
        String password = postgresContainer.getPassword();
        String url = kafka.getBootstrapServers();
    }
}
When I run this code the thread hangs in running state until I receive a timeout exception:
Caused by: org.testcontainers.containers.ContainerLaunchException: Timed out waiting for container port to open (localhost ports: [32937, 32939] should be listening)
I want to mention that without the KafkaContainer everything works as expected; I am able to start the Redis and Postgres containers successfully.
This is the Testcontainers Kafka module version I use:
<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>kafka</artifactId>
    <version>1.14.3</version>
    <scope>test</scope>
</dependency>
From what I see in the source code, waitingFor(Wait.forListeningPort()) will only work if you previously exposed some ports. (I'm not 100% sure, though.)
What if you just create a Kafka container without the waitingFor() call?
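For illustration, a minimal sketch of that suggestion, keeping the field from the question but dropping the explicit wait strategy so KafkaContainer's built-in startup check applies:

// Rely on KafkaContainer's default wait strategy instead of Wait.forListeningPort()
@Container
public static KafkaContainer kafka = new KafkaContainer();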

External properties file in grails 3

I need to read configuration from an external properties file in Grails 3. In Grails 2.x I linked the file with:
grails.config.locations = ["classpath:config.properties"]
in Config.groovy, but that file does not exist in Grails 3.
Does anyone have an idea how to solve this?
Because Grails 3 is built on Spring Boot, you can use the Spring Boot mechanisms for externalized properties, namely the spring.config.location command line parameter or the SPRING_CONFIG_LOCATION environment variable. Here's the Spring documentation page on it.
The example the documentation provides for the command line parameter is this:
$ java -jar myproject.jar --spring.config.location=classpath:/default.properties,classpath:/override.properties
The way I have been using it is by setting an environment variable, like this:
export SPRING_CONFIG_LOCATION="/home/user/application-name/application.yml"
One of the nice features is that you can leave some properties in the properties file that is bundled in the app, but if there are some properties you do not want to include (such as passwords), those can be set specifically in the external config file.
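As a hypothetical illustration (the file path and property are my own, not taken from the answer), the external file can override just the sensitive values while everything else stays in the bundled configuration:

# /home/user/application-name/application.yml
dataSource:
  password: 'changeit'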
Take a look at https://gist.github.com/reduardo7/d14ea1cd09108425e0f5ecc8d3d7fca0
External configuration in Grails 3
Working on Grails 3 I realized that I no longer can specify external configuration using the standard grails.config.locations property in Config.groovy file.
Reason is obvious! There is no Config.groovy now in Grails 3. Instead we now use application.yml to configure the properties. However, you can't specify the external configuration using this file too!
What the hack?
Now Grails 3 uses Spring's property source concept. To enable external config file to work we need to do something extra now.
Suppose I want to override some properties in application.yml file with my external configuration file.
E.g., from this:
dataSource:
  username: sa
  password:
  driverClassName: "org.h2.Driver"
To this:
dataSource:
  username: root
  password: mysql
  driverClassName: "com.mysql.jdbc.Driver"
First I need to place this file in application's root. E.g., I've following structure of my Grails 3 application with external configuration file app-config.yml in place:
[human@machine tmp]$ tree -L 1 myapp
myapp
├── app-config.yml // <---- external configuration file!
├── build.gradle
├── gradle
├── gradle.properties
├── gradlew
├── gradlew.bat
├── grails-app
└── src
Now, to enable reading this file just add following:
To your build.gradle file
bootRun {
    // local.config.location is just a random name. You can use yours.
    jvmArgs = ['-Dlocal.config.location=app-config.yml']
}
To your Application.groovy file
package com.mycompany
import grails.boot.GrailsApp
import grails.boot.config.GrailsAutoConfiguration
import org.springframework.beans.factory.config.YamlPropertiesFactoryBean
import org.springframework.context.EnvironmentAware
import org.springframework.core.env.Environment
import org.springframework.core.env.PropertiesPropertySource
import org.springframework.core.io.FileSystemResource
import org.springframework.core.io.Resource
class Application extends GrailsAutoConfiguration implements EnvironmentAware {

    public final static String LOCAL_CONFIG_LOCATION = "local.config.location"

    static void main(String[] args) {
        GrailsApp.run(Application, args)
    }

    @Override
    void setEnvironment(Environment environment) {
        String configPath = System.properties[LOCAL_CONFIG_LOCATION]
        if (!configPath) {
            throw new RuntimeException("Local configuration location variable is required: " + LOCAL_CONFIG_LOCATION)
        }
        File configFile = new File(configPath)
        if (!configFile.exists()) {
            throw new RuntimeException("Configuration file is required: " + configPath)
        }
        Resource resourceConfig = new FileSystemResource(configPath)
        YamlPropertiesFactoryBean ypfb = new YamlPropertiesFactoryBean()
        ypfb.setResources([resourceConfig] as Resource[])
        ypfb.afterPropertiesSet()
        Properties properties = ypfb.getObject()
        environment.propertySources.addFirst(new PropertiesPropertySource(LOCAL_CONFIG_LOCATION, properties))
    }
}
Notice that the Application class implements the EnvironmentAware interface and overrides its setEnvironment(Environment environment) method.
This is all you need to re-enable an external config file in a Grails 3 application.
Code and guidance is taken from following links with little modification:
http://grails.1312388.n4.nabble.com/Grails-3-External-config-td4658823.html
https://groups.google.com/forum/#!topic/grails-dev-discuss/_5VtFz4SpDY
Source: https://gist.github.com/ManvendraSK/8b166b47514ca817d36e
I was having the same problem reading a properties file from an external location in Grails 3. I found this plugin, which helped me read properties from an external location; it can read .yml and .groovy files as well.
Follow the steps mentioned in the documentation and it will work.
The steps are like:
Add dependency in build.gradle:
dependencies {
    compile 'org.grails.plugins:external-config:1.2.2'
}
Add this in application.yml:
grails:
  config:
    locations:
      - file:///opt/abc/application.yml
Create file at external location. In my case /opt/abc/application.yml.
Build the application and run.
You can use
def cfg = new ConfigSlurper().parse(getClass().classLoader.getResource('path/myExternalConfigfile.groovy'))
When running from a .jar file, I found that Spring Boot looks in the current directory for an application.yml file.
java -jar app-0.3.jar
In the current directory, I created an application.yml file with one line:
org.xyz.app.title: 'XYZZY'
I also used this file to specify the database and other such info.
Seems like there is no externalised config out of the box: http://grails.1312388.n4.nabble.com/Grails-3-External-config-td4658823.html

How to import 'org.apache.log4j.jdbcplus.JDBCAppender' jar to Grails project?

I'm trying to use a custom JDBCAppender in my Grails project. I downloaded the jar, added it to the lib folder and refreshed the dependencies. When I used the custom appender I got this error:
No such property: URL for class: org.apache.log4j.jdbcplus.JDBCAppender
In this code:
appender new org.apache.log4j.jdbcplus.JDBCAppender(
    name: "stacktrace",
    URL: "jdbc:postgresql://localhost:5432/test",
    user: "test",
    password: "test",
    dbclass: "org.postgresql.Driver",
    sql: "INSERT INTO audit VALUES('#MSG#','#THROWABLE#');"
)
Is the error in the jar import or in the appender configuration?
Best Regards,
André Cruz.
org.apache.log4j.jdbc.JDBCAppender (which is the standard JDBCAppender class in the Log4j jar) has a setURL method, but you're using org.apache.log4j.jdbcplus.JDBCAppender which has a setUrl method, so that line should be
url: "jdbc:postgresql://localhost:5432/test",
