The Problem: a Spring Batch job fails to execute via CommandLineJobRunner, where the application defines its own data source and Hibernate configuration.
Error Message (extracted):
DatabaseLookup org.springframework.boot.autoconfigure.orm.jpa.DatabaseLookup getDatabase org.springframework.jdbc.support.MetaDataAccessException: Could not get Connection for extracting meta-data; nested exception is org.springframework.jdbc.CannotGetJdbcConnectionException: Failed to obtain JDBC Connection; nested exception is org.apache.commons.dbcp.SQLNestedException: Cannot create PoolableConnectionFactory
...
Caused by: org.hibernate.HibernateException: Access to DialectResolutionInfo cannot be null when 'hibernate.dialect' not set
A bit about the batch job:
- SCDF is run using the docker-compose.yml downloaded from the Spring web site.
- a number of properties files under /config are built into the jar, including a Hibernate configuration file defining "hibernate.dialect=org.hibernate.dialect.MySQLDialect"
- the application defines its own data source using the properties below:
qre.data.driverClassName=org.mariadb.jdbc.Driver
qre.data.url=jdbc:mysql://127.0.0.1:3306/dataflow
qre.data.username=root
qre.data.password=rootpw
- an XML configuration with a property-placeholder definition imports these properties into the data source, as sketched below
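Something along these lines (a simplified sketch; the commons-dbcp BasicDataSource is inferred from the stack trace, and the bean id and properties file name are illustrative):

<context:property-placeholder location="classpath:config/qre-data.properties"/>

<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource">
    <property name="driverClassName" value="${qre.data.driverClassName}"/>
    <property name="url" value="${qre.data.url}"/>
    <property name="username" value="${qre.data.username}"/>
    <property name="password" value="${qre.data.password}"/>
</bean>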
- the jar file is built using the spring-boot-maven-plugin, defining a subclass of org.springframework.batch.core.launch.support.CommandLineJobRunner as mainClass:
<plugin>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-maven-plugin</artifactId>
    <configuration>
        <mainClass>a.b.c.MyCommandLineJobRunner</mainClass>
    </configuration>
</plugin>
- MyCommandLineJobRunner extends Spring's CommandLineJobRunner and passes the job name and configuration as name/value pairs:
job.name=MYJOB
- the jar runs successfully locally with "java -jar application.jar job.name=MYJOB"
- the app is registered on SCDF and a task is created; the task is run with the argument "job.name=MYJOB"
The task execution fails with the error above.
I have searched the SCDF reference guide, but have been unable to find anything useful yet.
Any help is appreciated.
I am not sure why your application tries to override the Hibernate dialect property, as the batch application still needs to use SCDF's data source. You can override the Hibernate dialect for the SCDF server using the property spring.jpa.properties.hibernate.dialect. You can see some of these examples here in the documentation.
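For example, as a plain property:

spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQLDialect

or, equivalently, as an environment entry in the SCDF server's docker-compose.yml (Spring Boot's relaxed binding maps the environment variable name to the property; the placement shown here is illustrative):

environment:
  - SPRING_JPA_PROPERTIES_HIBERNATE_DIALECT=org.hibernate.dialect.MySQLDialect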
After a few configuration changes, I found that the JDBC URL had not been configured correctly.
I changed it to the same values as in docker-compose.yml:
- SPRING_DATASOURCE_URL=jdbc:mysql://mysql:3306/dataflow
- SPRING_DATASOURCE_DRIVER_CLASS_NAME=org.mariadb.jdbc.Driver
The job can then be run successfully.
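Presumably this is because, inside the Compose network, the MySQL container is reachable by its service name mysql, while 127.0.0.1 inside the task container refers to the task container itself. For completeness, the relevant entries under the application's environment would look something like this (the username/password values mirror the question and are illustrative):

environment:
  - SPRING_DATASOURCE_URL=jdbc:mysql://mysql:3306/dataflow
  - SPRING_DATASOURCE_USERNAME=root
  - SPRING_DATASOURCE_PASSWORD=rootpw
  - SPRING_DATASOURCE_DRIVER_CLASS_NAME=org.mariadb.jdbc.Driver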
Related
I'm configuring Talend ESB (Open Source) and I want to be able to redirect logging, such as from the Camel Log components, to a database.
I've tried editing the org.ops4j.pax.logging.cfg file to add a JDBCAppender, but when Karaf imports this file I get the message "Unable to invoke factory method in class class org.apache.logging.log4j.core.appender.db.jdbc.JdbcAppender for element JDBC... No factory method found for class org.apache.logging.log4j.core.appender.db.jdbc.JdbcAppender"
Is this likely to be because the right appender classes aren't registered in Karaf, in which case can someone point me where I can find the feature/bundle I need, or is there something more subtle going on that I am missing?
Thanks!
Note: this is how to find your bundle and install it in Karaf:
feature:list|grep -i camel
The above command shows you which Camel components are not installed yet; you can then install the JDBC component in Karaf using feature:install camel-jdbc. Try this.
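A typical session might look like this (the prompt is illustrative, and the feature:list output is elided):

karaf@root()> feature:list | grep -i camel
... (lists each camel feature and whether it is Installed or Uninstalled)
karaf@root()> feature:install camel-jdbc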
I am building a Dropwizard service which will connect to multiple data sources, including MySQL and Elasticsearch. All the MySQL settings can be defined in the yaml config file, which gets read in when running from the command line.
But what about other settings that I need to read in for other data sources that I will connect with myself, for example Elasticsearch? Where can I define those settings?
I thought I could add another command-line Command, which I tried, but I can only run a single command at a time, so I can't run both the 'server' command and my custom command, 'custom', followed by my own config file for Elasticsearch.
How can I introduce settings, either individually or from a file, that are defined at run time (not hard-coded)?
Thanks
Anton
Check out the Dropwizard Core documentation on adding custom configuration.
You'd create an ElasticSearchFactory class similar to the MessageQueueFactory in the example, reference it in your Configuration (which is in turn referenced in your Application), and then the options you need can be added to your main yaml configuration.
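A minimal sketch of that pattern, assuming Dropwizard 1.x (the class and field names ElasticSearchFactory, clusterName and hosts are illustrative, not part of Dropwizard):

import java.util.List;
import javax.validation.Valid;
import javax.validation.constraints.NotNull;
import com.fasterxml.jackson.annotation.JsonProperty;
import io.dropwizard.Configuration;

// Holds the Elasticsearch-specific settings read from the main yaml file.
// (In a real project this would live in its own source file.)
class ElasticSearchFactory {
    @NotNull
    private String clusterName;

    @NotNull
    private List<String> hosts;

    @JsonProperty
    public String getClusterName() { return clusterName; }

    @JsonProperty
    public void setClusterName(String clusterName) { this.clusterName = clusterName; }

    @JsonProperty
    public List<String> getHosts() { return hosts; }

    @JsonProperty
    public void setHosts(List<String> hosts) { this.hosts = hosts; }
}

// The service's main configuration class, extended with the new factory.
class MyServiceConfiguration extends Configuration {
    @Valid
    @NotNull
    private ElasticSearchFactory elasticSearch = new ElasticSearchFactory();

    @JsonProperty("elasticSearch")
    public ElasticSearchFactory getElasticSearchFactory() { return elasticSearch; }

    @JsonProperty("elasticSearch")
    public void setElasticSearchFactory(ElasticSearchFactory factory) { this.elasticSearch = factory; }
}

The corresponding block in the main yaml file would then be something like:

elasticSearch:
  clusterName: my-cluster
  hosts:
    - localhost:9300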
I am running Grails 3.2.1 and want to create Swagger documentation for my REST endpoints. I added the SwaggyDoc plugin to the dependencies in my build.gradle script by adding:
compile "org.grails.plugins:swaggydoc-grails3:0.28.0"
and configured it as described at https://rahulsom.github.io/swaggydoc/ .
In IntelliJ I see the Swaggydoc dependency added to my list of libraries.
After starting my Grails application via the grails run-app command and opening http://localhost:8080/api/ I get a swagger-ui index.html, but I find errors in the console log (see image).
console log errors
And this exception in Grails:
ERROR org.apache.catalina.core.ContainerBase.[Tomcat].[localhost].[/].[grailsDispatcherServlet] - Servlet.service() for servlet [grailsDispatcherServlet] in context with path [] threw exception
org.codehaus.groovy.runtime.typehandling.GroovyCastException: Cannot cast object '[Digest Input Stream] MD5 Message Digest from SUN
Answer
When we create a Grails application inside IntelliJ we get the following dependency inside the build.gradle file:
runtime "com.bertramlabs.plugins:asset-pipeline-grails:2.11.6"
We need to replace it with the following:
runtime "org.grails.plugins:asset-pipeline"
This resolves the error.
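In build.gradle the change would look something like this (other entries omitted):

dependencies {
    // replaces: runtime "com.bertramlabs.plugins:asset-pipeline-grails:2.11.6"
    runtime "org.grails.plugins:asset-pipeline"
}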
I am new to Spring Data and Neo4j, so if there is something painfully bad about my code please say so.
I want to deploy my application to Heroku, however I cannot get remote Neo4j connections to work. I have tried:
- Remote connection to a local docker image (which works in the browser)
- Local server
- Heroku add-on (Graphstory)
Sadly, they all give the same exception, the most important being:
Factory method 'typeRepresentationStrategyFactory' threw exception; nested exception is java.lang.RuntimeException: Error reading as JSON ''
You can see my Neo4j Spring Configuration here (unit profile is for unit testing) and part of my pom is here.
Neo4j docker image version: 2.3.3
Neo4j local version: 2.3.3
GraphEntity model (Comment) here
As it turns out, I think my dependencies are incompatible with the Neo4j server version.
Furthermore, the only compatible version is in Neo4j's private Maven repository.
Flavorwocky by luanne (not me) is a good reference for how it should look.
Also have a look at the Neo4j OGM documentation if you end up here.
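For reference, declaring an extra repository in the pom looks like this (the id and URL are illustrative placeholders; point it at whatever repository actually hosts the compatible artifact):

<repositories>
    <repository>
        <id>neo4j-releases</id>
        <url>https://m2.neo4j.org/content/repositories/releases</url>
    </repository>
</repositories>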
Because of memory constraints I am trying to build a Grails app with a smaller memory footprint. I built the war with the "--nojars" argument, which creates a war file without all the jars, but when I deploy it in GlassFish I encounter this error:
Exception while loading the app : java.lang.Exception: java.lang.IllegalStateException: ContainerBase.addChild: start: org.apache.catalina.LifecycleException: java.lang.IllegalArgumentException: java.lang.ClassNotFoundException: org.codehaus.groovy.grails.web.util.Log4jConfigListener
It seems the application fails to find the jar files.
I had already indicated the path to the libraries before deploying the application in GlassFish.
Did I miss out something?
It is commonly recommended to use GlassFish's Common Classloader. That means putting the shared JARs into the $domain-dir/lib folder (but not into a subfolder of it).
You're probably trying to use the Application Classloader with the asadmin deploy --libraries command. This is more complicated and error-prone. If you don't need different versions of the same JARs with different web applications, you should definitely go for the Common Classloader as specified above.
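For completeness, the Application Classloader route looks something like this (the jar paths and war name are illustrative):

asadmin deploy --libraries /path/to/groovy-all.jar,/path/to/log4j.jar myapp.war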
Also see The Classloader Hierarchy for a reference.
EDIT Updated as per the questioner's comment:
The domains/domain1/lib folder definitely works (I've tested that). To validate this, put log4j.jar into that folder and add a test.jsp to domain1/applications/$applicationName that just contains:

<% out.println(org.apache.log4j.Logger.getLogger(this.getClass())); %>
If that works but your other code does not, there may be another point to consider: are you using Log4J's Logger.getLogger(..) or Apache Commons' LogFactory.getInstance(..) in your code?
See the article Taxonomy of class loader problems encountered when using Jakarta Commons Logging for related issues. I'd also advise you to post your complete stack trace.