I'd like to use Grails to build an application that is a front-end to a complex data search and analysis system already written as a set of stored procedures in an RDBMS (here, Oracle).
I'd like to instantiate objects of a class I've written as a wrapper around rows of a dataset returned from a stored procedure. This class doesn't need any GORM mapping at all; I want to execute the query manually and instantiate these objects from its rows. The objects will never be changed or written back, as the whole DB should be read-only; only session information can be stored.
Hibernate can do all this, and it has "Immutable" and read-only entities for exactly this purpose, but when I tried to mark this entity as Immutable I didn't have much success.
Is it possible to create such a fake mapping, and should it be created at all?
What are other possible ways to do this?
I don't think you can do that in GORM, but you can use plain Hibernate with Grails.
Add your classes in the src/groovy folder. For example, in src/groovy/example:
package example

import java.io.Serializable
import javax.persistence.Id
import javax.persistence.Entity
import javax.persistence.Table
import javax.persistence.Column
import org.hibernate.annotations.Immutable

@Entity
@Table(name = "first_result")
@Immutable
class FirstResult implements Serializable {
    private static final long serialVersionUID = 1L

    @Id
    Integer id

    @Column(name = "name")
    String resultName
}
(You can also write POJOs in src/java, but then you need to write constructors, getters, and setters.)
And in the grails-app/conf/hibernate/hibernate.cfg.xml add:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE hibernate-configuration PUBLIC "-//Hibernate/Hibernate Configuration DTD 3.0//EN" "http://www.hibernate.org/dtd/hibernate-configuration-3.0.dtd">
<hibernate-configuration>
    <session-factory>
        <mapping class="example.FirstResult" />
    </session-factory>
</hibernate-configuration>
Then, in your controllers and services you can use:
example.FirstResult.findAllById(1)
or any other method that works on domain classes in Grails.
I have a Groovy class definition which looks like the following:
package packageC

import packageA.ClassA1
import packageA.ClassA2
import packageB.ClassB1

class BuildConfig implements Serializable {
    String projectId
    String dockerComposeFile = "docker-compose.yml"
    // dozens of other properties elided...

    def prepare() {
        // performs some setter-like post-processing
    }
}
As I understand it, GroovyDoc uses Javadoc-style comments to generate documentation. However, the @param tag only applies to methods. Since there is no explicitly written constructor for this class, I'm not sure how to document the properties.
Question: what is the best way to document Groovy class properties?
I should add that this project/repository is not currently connected to a GroovyDoc generation service (neither CLI nor Apache Ant). However, if I am writing comments, then I might as well do it "the right way" (and easily allow GroovyDoc usage in the future).
I inherited some Jenkins pipeline code and am attempting to document it. I have minimal experience with Java and even less with Groovy. Please excuse any naivety.
As an example, I remember that in Hadoop I could either make classes serializable or supply a path to the jars needed for my job; I had either option. I am wondering whether the same is true in a Dataflow job, so that I can package all the clients we have in jar files for all the workers.
In our case, we have MicroserviceApi, a generated client, etc., and would prefer to send output to that downstream microservice without having to make it serializable.
Is there a method to do this?
First, let me clarify serialization
When you add implements Serializable to a class in Java, you make it possible for object instances of that class to be serialized (not the class itself). The destination JVM needs access to the class to be able to understand the serialized instances that you send to it. So, in fact, you always need to provide the JAR for it.
Beam has code to automatically find all JARs in your classpath and upload them to Dataflow or whatever runner you're using, so if the JAR is in your classpath, then you don't need to worry about it (if you're using Maven/Gradle and specifying it as a dependency, then you're most likely fine).
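To make that concrete, here is a minimal, framework-free sketch of what happens when an instance travels between JVMs: the serialized bytes carry only the field values, and the receiving side needs the class on its classpath to rebuild the object. JobConfig and its endpoint field are hypothetical stand-ins for anything you'd ship to workers.

```java
import java.io.*;

// Hypothetical config class standing in for anything you'd ship to workers.
class JobConfig implements Serializable {
    private static final long serialVersionUID = 1L;
    final String endpoint;
    JobConfig(String endpoint) { this.endpoint = endpoint; }
}

public class SerializationDemo {
    // Serialize an instance to bytes, then rebuild it from those bytes.
    // This is what happens when the runner ships your objects to a worker JVM:
    // the worker can only do the second half if JobConfig is on its classpath.
    static JobConfig roundTrip(JobConfig in) throws IOException, ClassNotFoundException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(in);
        }
        try (ObjectInputStream back =
                 new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()))) {
            return (JobConfig) back.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        JobConfig copy = roundTrip(new JobConfig("https://example.invalid/api"));
        System.out.println(copy.endpoint); // the field value survived the trip
    }
}
```

The bytes only describe the instance; the class definition itself never travels, which is why the JAR must be available on the worker.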
Now, how can I use a class in Beam if it's not serializable?
In Beam, the more important thing is to figure out where and when the different parts of the pipeline code will execute. Some things execute at pipeline construction time and some things execute at pipeline running time.
Things that run at construction time
Constructors for all your classes (DoFns, PTransforms, etc.)
The expand method of your PTransforms
Things that run at execution time
For your DoFns: the ProcessElement, StartBundle, FinishBundle, Setup, and TearDown methods.
If your class does not implement Serializable, but you want to access it at execution time, then you need to create it at execution time. So, suppose that you have a DoFn:
class MyDoFnImplementation extends DoFn<String, String> {
    // All members of the object need to be serializable. String is easily serializable.
    String config;

    // Your MicroserviceApi is *not* serializable, so you can mark it as transient.
    // The transient keyword ensures that Java will ignore the member when serializing.
    transient MicroserviceApi client;

    public MyDoFnImplementation(String configuration) {
        // This code runs at *construction time*.
        // Anything you create here needs to be serialized and sent to the runner.
        this.config = configuration;
    }

    @ProcessElement
    public void process(ProcessContext c) {
        // This code runs at execution time. You can create your object here.
        // Add a null check to ensure it's only created once.
        // You can also create it at @Setup or @StartBundle.
        if (client == null) client = new MicroserviceApi(this.config);
    }
}
By ensuring that objects are created at execution time, you can avoid the need to make them serializable - but your configuration needs to be serializable.
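Outside of Beam, the same transient-plus-lazy-initialization pattern can be demonstrated with plain Java serialization. FakeClient and Worker below are hypothetical stand-ins for MicroserviceApi and the DoFn, just to show that the transient member is dropped in transit and rebuilt from the serialized config.

```java
import java.io.*;

// Hypothetical stand-in for a non-serializable client like MicroserviceApi.
class FakeClient {
    final String config;
    FakeClient(String config) { this.config = config; }
}

// Mirrors the DoFn pattern: serializable config, transient client rebuilt on demand.
class Worker implements Serializable {
    private static final long serialVersionUID = 1L;
    final String config;         // serializable state, shipped to workers
    transient FakeClient client; // skipped by serialization; null after deserializing

    Worker(String config) { this.config = config; }

    FakeClient client() {
        // Lazy re-creation at "execution time", guarded by a null check.
        if (client == null) client = new FakeClient(config);
        return client;
    }
}

public class TransientDemo {
    static Worker roundTrip(Worker w) throws IOException, ClassNotFoundException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(w);
        }
        try (ObjectInputStream in =
                 new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()))) {
            return (Worker) in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        Worker w = new Worker("endpoint=https://example.invalid");
        w.client();                             // created on the "driver": will not survive the trip
        Worker shipped = roundTrip(w);
        System.out.println(shipped.client == null);  // true: transient field was dropped
        System.out.println(shipped.client().config); // rebuilt lazily from the serialized config
    }
}
```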
In GORM, it is possible for someone to specify the default id generator in config.groovy by doing:
grails.gorm.default.mapping = {
    id generator: 'uuid2', type: 'pg-uuid'
}
However, I have a class in a plugin which expects the id to be a Long, so it falls over. I could change the plugin, but I'm wondering if I have any other options here.
Thanks
You could try implementing an AST Transformation on the domain classes of your project which would convert any Long fields to String fields.
Grails scans the package org.codehaus.groovy.grails.compiler for any classes which implement grails.compiler.ast.GrailsArtefactClassInjector. Create a class in this package which scans domain classes, removes any id properties of type Long, and replaces them with a property of type String (or whatever type you need).
I use the following approach to access a logger instance from classes in a Grails app:
In Grails artefacts (controllers, services, domain classes, etc.) I simply use the logger that is added by Grails, e.g.
class MyController {
    def someAction() {
        log.debug "something"
    }
}
For classes under src/groovy I annotate them with @groovy.util.logging.Slf4j, e.g.
@Slf4j
class Foo {
    Foo() {
        log.debug "log it"
    }
}
The logger seems to behave properly in both cases, but it slightly bothers me that the class of the loggers differs. When I use the annotation, the class of the logger is org.slf4j.impl.GrailsLog4jLoggerAdapter, but when I use the logger that's automatically added to Grails artefacts the class is org.apache.commons.logging.impl.SLF4JLog.
Is there a recommended (or better) approach to adding loggers to Grails classes?
I don't see any problem with what you described. SLF4J isn't a logging framework, it's a logging framework wrapper. Aside from some Grails-specific hooks in the Grails class, both loggers implement the same interface and eventually delegate to the same loggers/appenders/etc. in the real implementation library, typically Log4j.
What I'm pretty sure is different, though, is the log category/name, because you need to configure the underlying library based on what the logger names become. With annotations, the logger name is the full class name including package. With the logger Grails adds, there's an extra prefix based on the artifact type. I always forget the naming convention, but a quick way to discover the logger name is to log it; add this in your class where it will be accessed at runtime:
println log.name
and it will print the full logger name (using println instead of a log method avoids potential misconfiguration issues that could keep the message from being logged).
I like to keep things simple and consistent and know what's being used, so I skip the wrapper libraries and use Log4j directly. Accessing the logger is easy. Import the class
import org.apache.log4j.Logger
and then add this as a class field:
Logger log = Logger.getLogger(getClass().name)
This can be copy/pasted to other classes since there's no hard-coded names. It won't work in static scope, so for that I'd add
static Logger LOG = Logger.getLogger(this.name)
which also avoids hard-coding by using Groovy's support for "this" in static scope to refer to the class.
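The same copy/paste-safe naming trick can be sketched in plain Java using the JDK's built-in java.util.logging, purely to illustrate how the name is derived (it works the same way with Log4j's Logger.getLogger). Note that Java has no equivalent of Groovy's "this" in static scope, so the static variant has to name the class explicitly.

```java
import java.util.logging.Logger;

public class NamingDemo {
    // Instance logger named after the runtime class -- safe to copy/paste,
    // since there is no hard-coded class name.
    final Logger log = Logger.getLogger(getClass().getName());

    // Static logger: Java has no "this" in static scope, so the class
    // literal is spelled out here.
    static final Logger LOG = Logger.getLogger(NamingDemo.class.getName());

    public static void main(String[] args) {
        // Both loggers end up with the full class name as their category.
        System.out.println(new NamingDemo().log.getName()); // NamingDemo
        System.out.println(LOG.getName());                  // NamingDemo
    }
}
```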
Have you tried the @Log4j annotation (for Log4j) instead?
How can I use 'log' inside a src/groovy/ class
Assuming I have a data access object that I've already written, I'd like to be able to use CDI to inject that into say, a service class. Furthermore, I have two implementations of that DAO.
My understanding of CDI is that I'd have to annotate my DAO implementation class so that CDI would know which implementation to inject.
The problem is, the DAO is in a .jar file. By annotating it with CDI annotations, I'm using JavaEE imports in a non-JavaEE class.
For example, let's say I have the following class
public class BusinessService {
    @Inject @SomeMybatisQualifier AccountDAO accountDao;
    ...
}
The @Inject annotation comes from javax.inject.Inject. Now, this service class is dependent on a Java EE environment.
Can someone please explain to me what I'm missing? How do I inject a non-annotated class into another non-annotated class? This is fairly simple with Spring.
I agree with LightGuard if there are enough classes. But for a couple, why not just produce them with @Produces?
Here's a decent example of implementing your own producer:
Dependency inject request parameter with CDI and JSF2
You should be able to write return new MyObject(); and you can add whatever qualifiers you want
Not sure what's unclear, but here's the gist of things: for CDI to scan a jar for beans, it must have a beans.xml. Otherwise it will not be scanned and thus not available for injects. A String is not available either. If you try to inject a String, say:
@Inject
String myString;
CDI will have no clue what to give you, just like with your jar. But I know what String I want (a request parameter) and I can let CDI know as well. How? Well, I supply a qualifier @RequestParam to my producer (see the example again), and now when I want to use it in client code I do it like this:
@Inject
@RequestParam
String myString;
You can do the same thing. Have a producer and just create a new instance of whatever you need and then return it. Now CDI will know just how to dependency inject that particular bean.
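Stripped of the CDI API (so this sketch compiles and runs without the javax.enterprise jar), a producer is just a method that constructs and returns the instance; CDI then calls it whenever a matching injection point needs a value. AccountDAOProducer and the "jdbc/mainDS" datasource name below are hypothetical, and the comments mark where @Produces and your qualifier would go in a real bean archive.

```java
// Hypothetical DAO from the jar; note it carries no CDI annotations itself.
class AccountDAO {
    final String datasource;
    AccountDAO(String datasource) { this.datasource = datasource; }
}

public class AccountDAOProducer {
    // In a real bean archive this method would be annotated with @Produces
    // plus your qualifier (e.g. @SomeMybatisQualifier) from the CDI API.
    // The annotations are omitted here so the sketch has no CDI dependency.
    public AccountDAO produceAccountDAO() {
        // Construct the instance however you like; CDI only cares about
        // the return value of the producer method.
        return new AccountDAO("jdbc/mainDS");
    }

    public static void main(String[] args) {
        AccountDAO dao = new AccountDAOProducer().produceAccountDAO();
        System.out.println(dao.datasource);
    }
}
```

With the annotations in place, an @Inject @SomeMybatisQualifier AccountDAO field would be satisfied by this method, even though AccountDAO itself lives in an unannotated jar.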
Now say you have 40 classes. Then it gets messy to produce them all, and you want to make sure they get scanned instead. Then you write your own little extension: observe when CDI is about to scan, and instruct it to scan additional jars. Such an extension is probably easy to write, but I don't know the details because I have not written any extensions like it.
By far the easiest thing would be to create a CDI extension to add the classes in the jar (because there's no beans.xml in that jar, it won't be picked up by CDI) and add additional qualifiers to the metadata.