How to disable database initialization with Spring-Cloud-Skipper

I'm trying to disable the initialization of tables in Spring-Cloud-Skipper. Is there a property much like the spring.cloud.dataflow.rdbms.initialize.enable=false in Spring-Cloud-Dataflow that I can set? If not, how do I disable the initialization of the tables?

Spring Cloud Skipper uses Spring Boot's Hibernate/JPA configuration to set up database initialization and Flyway properties to perform database migrations.
Hence, you can pass the following properties to disable database initialization during Spring Cloud Skipper server startup:
--spring.jpa.hibernate.ddl-auto=none --spring.flyway.enabled=false
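For example, if you run the Skipper server as a standalone jar, the properties can be passed on the command line like this (the jar name and version are placeholders; adjust them to your installation):

java -jar spring-cloud-skipper-server-&lt;version&gt;.jar \
  --spring.jpa.hibernate.ddl-auto=none \
  --spring.flyway.enabled=false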

Related

Spring Cloud Dataflow Task database configuration

I'm trying to understand the expected behavior when running Batch tasks via Spring Cloud Dataflow with respect to datasource configuration.
Is the idea that the Spring Batch database tables (BATCH_JOB_EXECUTION, etc.) would be in the SCDF database itself? There appears to be some magic happening when launching a task via SCDF where it creates those tables in the SCDF database and appears to use them. It seems to be injecting the SCDF datasource into my application?
I'm currently running the local server, version 2.0.1. Streams are working as expected; they use the datasource configured in application.properties.
Is the idea that the Spring Batch database tables (BATCH_JOB_EXECUTION, etc.) would be in the SCDF database itself?
Correct. Spring Batch, Spring Cloud Task, and SCDF must share a common datasource if you are interested in tracking and managing the lifecycle of batch jobs using the SCDF Shell/Dashboard.
If you include a batch job in the Task application, it is the application itself that creates the Batch and Task schemas when it starts. SCDF doesn't inject datasource credentials unless you intentionally ask it to do so when it launches the tasks.
SCDF simply shares the same datasource, so it can in turn query the execution/status tables and show them in the Dashboard.
There's some more background in the reference guide.
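For example (a rough sketch - the task name and connection values below are placeholders), you can hand the usual Spring Boot datasource properties to the task application when launching it from the SCDF shell, so the task records its Batch/Task executions in the same database SCDF reads from:

dataflow:> task launch my-batch-task --properties "app.my-batch-task.spring.datasource.url=jdbc:mysql://localhost:3306/scdf,app.my-batch-task.spring.datasource.username=scdf_user,app.my-batch-task.spring.datasource.password=secret,app.my-batch-task.spring.datasource.driver-class-name=org.mariadb.jdbc.Driver"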

Spring Cloud data flow does not show Spring cloud task execution details

The Spring cloud dataflow documentation mentions
When executing tasks externally (i.e. command line) and you wish for Spring Cloud Data Flow to show the TaskExecutions in its UI, be sure that common datasource settings are shared among the both. By default Spring Cloud Task will use a local H2 instance and the execution will not be recorded to the database used by Spring Cloud Data Flow.
I am new to Spring Cloud Data Flow and Spring Cloud Task. Can somebody help me set up a common datasource for both? For development purposes I'm using the embedded H2 database. Can I use the embedded one to see task execution details in Spring Flo/the Dashboard?
A common datasource must be shared between Spring Cloud Data Flow (SCDF) and your Spring Cloud Task (SCT) applications in order to track and monitor task executions. If the datasource is not shared, SCDF and the SCT applications each default to their own H2 database, and because they are on different databases, SCDF has no visibility into the execution history of the independent SCT microservice applications.
Make sure to supply common DB properties to both. In your case, you can supply the same H2 DB properties. It is as simple as Spring Boot DB property overrides.
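As a rough illustration (the URL, port, and credentials are placeholders), you would point both the SCDF server and the task application at the same database using the standard Spring Boot datasource properties, e.g. for the H2 instance that the local SCDF server exposes over TCP:

spring.datasource.url=jdbc:h2:tcp://localhost:19092/mem:dataflow
spring.datasource.username=sa
spring.datasource.password=
spring.datasource.driver-class-name=org.h2.Driver

Note that a purely in-memory H2 instance is not reachable from a separate JVM; the task application can only share it if it connects over TCP to the H2 server that SCDF starts.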

Spring Session GemFireSession management on gfsh created Gemfire server

I set up a Spring Session GemFire client application connecting to a gfsh-created GemFire server.
However, unless I load ALL the JAR files that contain the class definitions used by attributes attached to the GemFireSession onto the GemFire server (via the classpath option when creating the server with gfsh), session persistence fails (ClassNotFoundExceptions, etc.).
Is there any way to configure Spring Session GemFire so that it does not require the client classes to be available on the server side?
No. To properly handle Session replication, Spring Session Data GemFire currently uses the GemFire DataSerialization framework, not PDX, which is the same approach taken by GemFire's own HTTP Session Management module.
I do have an open issue to revisit this, but at the time PDX had several limitations (compared to the pure GemFire DataSerialization framework) that prevented it from supporting all the functionality required by Spring Session, such as deltas.
Minimally, you just need the application domain objects that are stored in the Session to be on the GemFire servers' classpath when starting them from gfsh.
Ideally, you could configure your GemFire servers with Spring, as a Spring Boot application that already has all your dependencies on its classpath, and you could even configure the Spring Session portion of the GemFire server with Spring Session's @EnableGemFireHttpSession. That will in effect set up the required Region (which stores Session state) along with any expiration policies.
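For example, here is a minimal sketch of such a Spring Boot-configured GemFire server (class name and timeout are illustrative; it assumes Spring Data GemFire and Spring Session Data GemFire are on the classpath):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.data.gemfire.config.annotation.CacheServerApplication;
import org.springframework.session.data.gemfire.config.annotation.web.http.EnableGemFireHttpSession;

// Bootstraps a GemFire CacheServer with Spring Boot; the application's own
// dependencies (including your session attribute classes) are naturally on its classpath.
@SpringBootApplication
@CacheServerApplication
@EnableGemFireHttpSession(maxInactiveIntervalInSeconds = 1800) // creates the Session Region and expiration policy
public class SessionServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(SessionServerApplication.class, args);
    }
}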
-John

Spring session with back end api's instead of JDBC

I have been exploring the spring-session framework for session management in our application, and we want to store sessions in a database. I understand that Spring provides an implementation backed by JDBC and that we can configure our own DataSource. The problem I'm facing is that we don't have direct access to the DB and need to make web service calls for any sort of CRUD operations.
So, is there a way to integrate spring-session so that it consumes web services for session-related CRUD operations against the DB?
Another question: can we change the schema for the session-related tables? I know that we can change the table names, but is it possible to add or remove columns in the given tables?
You can employ your custom session repository fairly easily - use @EnableSpringHttpSession (which imports SpringHttpSessionConfiguration) to configure the common Spring Session components and register your SessionRepository implementation as a @Bean.
Regarding more advanced customization of the schema used by JdbcOperationsSessionRepository, this was considered during the implementation of the JDBC support, but the decision was made not to provide it initially. If you need this feature, please consider creating a feature request in the Spring Session issue tracker.
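As a rough sketch of the first suggestion (method names follow Spring Session 2.x, and SessionApiClient is a hypothetical client for your session web service, shown only to illustrate the delegation):

import org.springframework.session.MapSession;
import org.springframework.session.SessionRepository;

// Hypothetical client wrapping the remote session CRUD endpoints.
interface SessionApiClient {
    void put(String id, MapSession session);
    MapSession get(String id);
    void delete(String id);
}

// SessionRepository implementation that delegates persistence to the web service
// instead of JDBC; register it as a @Bean alongside @EnableSpringHttpSession.
public class WebServiceSessionRepository implements SessionRepository<MapSession> {

    private final SessionApiClient client;

    public WebServiceSessionRepository(SessionApiClient client) {
        this.client = client;
    }

    @Override
    public MapSession createSession() {
        return new MapSession();
    }

    @Override
    public void save(MapSession session) {
        client.put(session.getId(), session);   // e.g. HTTP PUT /sessions/{id}
    }

    @Override
    public MapSession findById(String id) {
        return client.get(id);                  // e.g. HTTP GET /sessions/{id}
    }

    @Override
    public void deleteById(String id) {
        client.delete(id);                      // e.g. HTTP DELETE /sessions/{id}
    }
}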

Using spring security plugin with alternative database schema

I want to use Spring Security and map my model to a simple database schema.
According to this: http://docs.spring.io/spring-security/site/docs/3.0.x/reference/springsecurity-single.html#db_schema_users_authorities (the user schema section), I can have a schema that matches the one I already have.
Note that this schema is the basic schema Tomcat uses for its database realm.
But I don't know how to tell Spring which schema I'm using, which tables to use, and with which fields.
If you're using the Grails Spring Security plugin you're looking in the wrong place - you don't configure it using standard Spring Security but within the plugin itself.
This is a common enough customization that it has its own chapter in the plugin docs - see https://grails-plugins.github.io/grails-spring-security-core/guide/userDetailsService.html
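For instance, that chapter boils down to registering your own UserDetailsService bean, which can load users and roles from whatever tables and columns you have. A minimal, hypothetical registration (com.example.MyUserDetailsService is your own class, shown only for illustration) could look like:

// grails-app/conf/spring/resources.groovy
// com.example.MyUserDetailsService is your own implementation that reads
// users and authorities from your existing schema.
beans = {
    userDetailsService(com.example.MyUserDetailsService)
}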
