I'm not a programming savvy person, so please bear with me.
I've read blog entries and docs about command objects. I've never used one and was wondering if I should. (I probably should...)
My project requires parsing, sorting, calculating, and saving results into the database when users upload files.
So, according to one of the blog entries I read and its corresponding GitHub code:
1) SERVICE should receive file uploads, parse uploaded files (mainly docs and pdfs), sort parsed data using RegEx, and calculate data,
2) COMMAND OBJECT should call SERVICE, collect results and send results back to controller, and save results into the database,
3) CONTROLLER should receive request from VIEW, get results from COMMAND OBJECT, and send results back to VIEW.
Did I understand correctly?
Thanks.
I found this to be the best setup. Here is an example that I use in production:
Command Object (to carry data and ensure its validity):
@grails.validation.Validateable
class SearchCommand implements Serializable {
    // search query
    String s
    // page
    Integer page

    static constraints = {
        s nullable: true
        page nullable: true
    }
}
Controller (directs a request to a Service and then gets a response back from the Service and directs this response to a view):
class SomeController {
    // inject service
    def someService

    def search(SearchCommand cmd) {
        def result = someService.search(cmd)
        // can access result in .gsp as ${result} or in other forms
        render(view: "someView", model: [result: result])
    }
}
Service (handles business logic and grabs data from Domain(s)):
class SomeService {
    def search(SearchCommand cmd) {
        if (cmd.hasErrors()) {
            // errors found in cmd.errors
            return
        }
        // do some logic, for example calc offset from cmd.page
        def result = Stuff.searchAll(cmd.s, offset, max)
        return result
    }
}
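As an aside, the offset and max used above aren't defined in the original example; one way to derive them from the command object (my assumption, not part of the original code) would be:

int max = 10
int offset = ((cmd.page ?: 1) - 1) * max
def result = Stuff.searchAll(cmd.s, offset, max)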
Domain (all database queries are handled here):
class Stuff {
    String name

    static constraints = {
        name nullable: false, blank: false, size: 1..30
    }

    static searchAll(String searchQuery, int offset, int max) {
        return Stuff.executeQuery("select s.name from Stuff s where s.name = :searchQuery", [searchQuery: searchQuery, offset: offset, max: max])
    }
}
Yes, you understood it correctly except for one thing: the command object shouldn't save the data to the DB - let the service do that. The other advantage of a command object is data binding and validation of data coming from the client. Read more about command objects in the grails command object docs
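For example, a minimal sketch of that split (the class and property names below are made up for illustration): the command object only binds and validates the upload, while the service does the work and the save:

@grails.validation.Validateable
class UploadCommand {
    org.springframework.web.multipart.MultipartFile file

    static constraints = {
        file nullable: false
    }
}

class UploadService {
    def processUpload(UploadCommand cmd) {
        if (cmd.hasErrors()) return null
        def text = new String(cmd.file.bytes)          // hand off to your doc/pdf parser here
        def result = new UploadResult(content: text)   // hypothetical domain class
        result.save(failOnError: true)                 // persistence stays in the service
        return result
    }
}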
You can also find helpful information regarding your question in this article: grails best practices
I guess not. It's not really about whether the save is done in a service - you should always carry out complex logic, and specifically DB work, in a service, so that applies regardless. I tend not to use command objects, but I've got hooked on helper classes, aka beans, that sit in src/main/groovy and do all of the validation and formatting. I just did a form that has feedback and reason fields.
Initially I thought I would get away with
def someAction(String feedback, String reason) {
someService.doSomething(feedback,reason)
}
But then I looked closer: my form was firstly a textarea, and the selection values were bytes, so the above was not working. To fix it simply, without adding that complexity to my controller/service, I did this:
package some.package

import grails.validation.Validateable

class SomeBean implements Validateable {
    User user
    byte reason
    String feedback

    static constraints = {
        user(nullable: true)
        reason(nullable: true, inList: UsersRemoved.REASONS)
        feedback(nullable: true)
    }

    void setReason(String t) {
        reason = t as byte
    }

    void setFeedback(String t) {
        feedback = t?.trim()
    }
}
Now my controller
class SomeController {
    def userService
    def someService

    def doSomething(SomeBean bean) {
        bean.user = userService.currentUser
        if (!bean.validate()) {
            flash.message = bean.errors.allErrors.collect { g.message(error: it) }
            render view: '/someTemplate', model: [instance: bean, template: '/some/template']
            return
        }
        someService.doSomeThing(bean)
    }
}
Now my service
class SomeService {
    def doSomeThing(SomeBean bean) {
        if (bean.user == 'A') {
            .....
        }
    }
}
All of that validation would still have had to be done somewhere. You say no validation, but in a good model you should do validation and store things in proper structures to avoid bloating your DB over time. It is difficult to explain, but in short I am talking about your domain class objects: don't just declare String something, String somethingElse without even defining their lengths etc. Be strict and validate.
If you have a textarea, its contents will be stored in the back end, so you will need to trim it as above, and you will need to ensure the input does not exceed the maximum length of the actual DB column, which, if not defined, will probably be 255.
and by doing

static constraints = {
    user(nullable: true)
    reason(min: 1, max: 255, nullable: true, inList: UsersRemoved.REASONS)
}

the bean.validate() in the controller has already invalidated it if the user somehow got past my front-end checks and put in more than 255 characters.
This stuff takes time, be patient.
Edited to finally add: that example byte is one to be careful of.
When adding any String or whatever, I have started to define the specific SQL type like this; in the case of a byte, if it is a boolean true/false that's fine, if not then define it as a tinyint:
static mapping = {
    // since there is more than 1 type in this case
    reason(sqlType: 'tinyint(1)')
    feedback(sqlType: 'varchar(1000)')
    // name(sqlType:'varchar(70)')
}
If you then look at the tables created in the DB, you should find they have been created as per the definition rather than the standard varchar(255), which I think is the default for a declared String.
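As a side note (not from the original answer): for plain Strings you can often get the same effect through constraints alone, since Grails sizes the column from maxSize, for example:

static constraints = {
    feedback(nullable: true, maxSize: 1000)   // generates varchar(1000) without a custom sqlType
}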
I have a pipeline that successfully outputs an Avro file as follows:
@DefaultCoder(AvroCoder.class)
class MyOutput_T_S {
    T foo;
    S bar;
    Boolean baz;
    public MyOutput_T_S() {}
}

@DefaultCoder(AvroCoder.class)
class T {
    String id;
    public T() {}
}

@DefaultCoder(AvroCoder.class)
class S {
    String id;
    public S() {}
}
...
PCollection<MyOutput_T_S> output = input.apply(myTransform);
output.apply(AvroIO.Write.to("/out").withSchema(MyOutput_T_S.class));
How can I reproduce this exact behavior, except with a parameterized output MyOutput<T, S> (where T and S are both Avro-codable using reflection)?
The main issue is that Avro reflection doesn't work for parameterized types. So based on these responses:
Setting Custom Coders & Handling Parameterized types
Using Avrocoder for Custom Types with Generics
1) I think I need to write a custom CoderFactory, but I am having difficulty figuring out exactly how this works (I'm having trouble finding examples). Oddly enough, a completely naive coder factory appears to let me run the pipeline and inspect proper output using DataflowAssert:
cr.registerCoder(MyOutput.class, new CoderFactory() {
    @Override
    public Coder<?> create(List<? extends Coder<?>> componentCoders) {
        Schema schema = new Schema.Parser().parse("{\"type\":\"record\","
            + "\"name\":\"MyOutput\","
            + "\"namespace\":\"mypackage\","
            + "\"fields\":[]}");
        return AvroCoder.of(MyOutput.class, schema);
    }

    @Override
    public List<Object> getInstanceComponents(Object value) {
        MyOutput<Object, Object> myOutput = (MyOutput<Object, Object>) value;
        List components = new ArrayList();
        return components;
    }
});
While I can successfully assert against the output now, I expect this will not cut it for writing to a file. I haven't figured out how I'm supposed to use the provided componentCoders to generate the correct schema and if I try to just shove the schema of T or S into fields I get:
java.lang.IllegalArgumentException: Unable to get field id from class null
2) Assuming I figure out how to encode MyOutput. What do I pass to AvroIO.Write.withSchema? If I pass either MyOutput.class or the schema I get type mismatch errors.
I think there are two questions (correct me if I am wrong):
How do I enable the coder registry to provide coders for various parameterizations of MyOutput<T, S>?
How do I write values of MyOutput<T, S> to a file using AvroIO.Write?
The first question is to be solved by registering a CoderFactory as in the linked question you found.
Your naive coder is probably allowing you to run the pipeline without issues because serialization is being optimized away. Certainly an Avro schema with no fields will result in those fields being dropped in a serialization+deserialization round trip.
But assuming you fill in the schema with the fields, your approach to CoderFactory#create looks right. I don't know the exact cause of the message java.lang.IllegalArgumentException: Unable to get field id from class null, but the call to AvroCoder.of(MyOutput.class, schema) should work for an appropriately assembled schema. If there is an issue with this, more details (such as the rest of the stack trace) would be helpful.
However, your override of CoderFactory#getInstanceComponents should return a list of values, one per type parameter of MyOutput. Like so:
@Override
public List<Object> getInstanceComponents(Object value) {
    MyOutput<Object, Object> myOutput = (MyOutput<Object, Object>) value;
    return ImmutableList.of(myOutput.foo, myOutput.bar);
}
The second question can be answered using some of the same support code as the first, but otherwise is independent. AvroIO.Write.withSchema always explicitly uses the provided schema. It does use AvroCoder under the hood, but this is actually an implementation detail. Providing a compatible schema is all that is necessary - such a schema will have to be composed for each value of T and S for which you want to output MyOutput<T, S>.
I'm having a weird issue with the configureMetadataStore.
My model:
class SourceMaterial {
List<Job> Jobs {get; set;}
}
class Job {
public SourceMaterial SourceMaterial {get; set;}
}
class JobEditing : Job {}
class JobTranslation: Job {}
Module for configuring Job entities:
angular.module('cdt.request.model').factory('jobModel', ['breeze', 'dataService', 'entityService', modelFunc]);
function modelFunc(breeze, dataService, entityService) {
function Ctor() {
}
Ctor.extend = function (modelCtor) {
modelCtor.prototype = new Ctor();
modelCtor.prototype.constructor = modelCtor;
};
Ctor.prototype._configureMetadataStore = _configureMetadataStore;
return Ctor;
// constructor
function jobCtor() {
this.isScreenDeleted = null;
}
function _configureMetadataStore(entityName, metadataStore) {
metadataStore.registerEntityTypeCtor(entityName, jobCtor, jobInitializer);
}
function jobInitializer(job) { /* do stuff here */ }
}
Module for configuring JobEditing entities:
angular.module('cdt.request.model').factory('jobEditingModel', ['jobModel', modelFunc]);
function modelFunc(jobModel) {
function Ctor() {
this.configureMetadataStore = configureMetadataStore;
}
jobModel.extend(Ctor);
return Ctor;
function configureMetadataStore(metadataStore) {
return this._configureMetadataStore('JobEditing', metadataStore)
}
}
Module for configuring JobTranslation entities:
angular.module('cdt.request.model').factory('jobTranslationModel', ['jobModel', modelFunc]);
function modelFunc(jobModel) {
function Ctor() {
this.configureMetadataStore = configureMetadataStore;
}
jobModel.extend(Ctor);
return Ctor;
function configureMetadataStore(metadataStore) {
return this._configureMetadataStore('JobTranslation', metadataStore)
}
}
The models are then configured like this:
JobEditingModel.configureMetadataStore(dataService.manager.metadataStore);
JobTranslationModel.configureMetadataStore(dataService.manager.metadataStore);
Now when I call createEntity for a JobEditing, the instance is created and at some point, breeze calls setNpValue and adds the newly created Job to the np SourceMaterial.
That's all fine, except that it is added twice!
It happens when rawAccessorFn(newValue); is called. In fact it is called twice.
And if I add a new type of job (hence I register a new type with the metadataStore), then the new Job is added three times to the np.
I can't see what I'm doing wrong. Can anyone help?
EDIT
I've noticed that if I change:
metadataStore.registerEntityTypeCtor(entityName, jobCtor, jobInitializer);
to
metadataStore.registerEntityTypeCtor(entityName, null, jobInitializer);
Then everything works fine again! So the problem is registering the same jobCtor function. Should that not be possible?
Our Bad
Let's start with a Breeze bug, recently discovered, in the Breeze "backingStore" model library adapter.
There's a part of that adapter which is responsible for rewriting the data properties of the entity constructor so that they become observable and self-validating; it kicks in when you register a type with registerEntityTypeCtor.
It tries to keep track of which properties it has rewritten. The bug is that it records the fact of the rewrite on the EntityType rather than on the constructor function. Consequently, every time you registered a new type, it failed to realize that it had already rewritten the properties of the base Job type and re-wrapped them again.
This was happening to you. Every derived type that you registered re-wrapped/re-wrote the properties of the base type (and of its base type, etc).
In your example, a base class Job property would be re-written 3 times and its inner logic executed 3 times if you registered three of its sub-types. And the problem disappeared when you stopped registering constructors of sub-types.
We're working on a revised Breeze "backingStore" model library adapter that won't have this problem and, coincidentally, will behave better in test scenarios (that's how we found the bug in the first place).
Your Bad?
Wow that's some hairy code you've got there. Why so complicated? In particular, why are you adding a one-time MetadataStore configuration to the prototypes of entity constructor functions?
I must be missing something. The code to register types is usually much smaller and simpler. I get that you want to put each type in its own file and have it self-register. The cost of that (as you've written it) is enormous bulk and complexity. Please reconsider your approach. Take a look at other Breeze samples, Zza-Node-Mongo for example.
Thanks for reporting the issue. Hang in there with us. A fix should be arriving soon ... I hope in the next release.
Setting up some integration tests, I'm having issues with domain class equality. The equality works as expected during normal execution, but when testing the Service methods through an integration test, the test for equality is coming back false.
One service (called in the setUp() of the Test Case) puts a Domain object into the session
class SomeService {
    def setSessionVehicle(String name) {
        Vehicle vehicle = Vehicle.findByName(name)
        session.setAttribute("SessionVehicle", vehicle)
    }

    def getSessionVehicle() {
        return session.getAttribute("SessionVehicle")
    }
}
Elsewhere in another service, I load an object and make sure the associated attribute object matches the session value:
class OtherService {
    def getEngine(long id) {
        Vehicle current = session.getAttribute("SessionVehicle")
        Engine engine = Engine.get(id)
        if (!engine.vehicle.equals(current)) throw new Exception("Blah blah")
    }
}
This works as expected during normal operation, preventing from loading the wrong engine (Ok, I sanitized the class names, pretend it makes sense). But in the integration test, that .equals() fails when it should succeed:
Vehicle testVehicle

void setUp() {
    Vehicle v = new Vehicle(name: "Default")
    v.save()
    someService.setSessionVehicle("Default")
    testVehicle = someService.getSessionVehicle()
}

void testGetEngine() {
    List<Engine> engines = Engine.findAllByVehicle(testVehicle)
    // some assertions and checks
    Engine e = otherService.getEngine(engines.get(0).id)
}
The findAll() call is correctly returning the list of all Engines associated with the vehicle in the session, but when I try to look up an individual Engine by ID, the equality check for session Vehicle vs Vehicle on the found Engine fails. Only a single vehicle has been created at this point and the Exception message displays that the session Vehicle and the Engine.Vehicle exist and are the same value.
If I attempt this equality check in the test case itself, it fails. I can change the test case to check if(vehicle.id == sessionVehicle.id), which succeeds, but I'm not keen on changing my production code just to satisfy an integration test.
Am I doing something wrong when setting up these domain objects in my test case that I should be doing differently?
First of all, the equality check you are doing is just comparing references. You should not use the default equals method for your check; better to override the equals method in the domain class.
There are two ways you can override the equals method:
1) You can use your IDE to auto-generate code for the equals method (a lot of null checking etc.).
2) Preferred way: you can use the EqualsBuilder and HashCodeBuilder classes from the Apache Commons project. The library should already be available to your application; otherwise, download the JAR file and place it in lib. Here is sample code using EqualsBuilder:
boolean equals(o) {
if ( !(o instanceof Vehicle) ) {
return false
}
def eb = new EqualsBuilder()
eb.append(id, o.id)
eb.append(name, o.name)
eb.append(otherProperties, o.otherProperties)
....
return eb.isEquals()
}
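Since equals and hashCode should always be overridden together, a matching hashCode using HashCodeBuilder would look roughly like this (append the same properties you use in equals):

int hashCode() {
    def hcb = new HashCodeBuilder()
    hcb.append(id)
    hcb.append(name)
    // append the other properties used in equals()
    return hcb.toHashCode()
}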
Another point: how are you getting the session in a service? From RequestContextHolder? It's good practice not to access the session directly from a service; rather, pass the value as a method parameter to the service.
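For example, a rough sketch (reusing the classes from the question) where the caller looks the vehicle up from the session and hands it to the service, so the service itself never touches the session:

class OtherService {
    def getEngine(long id, Vehicle current) {
        Engine engine = Engine.get(id)
        if (!engine.vehicle.equals(current)) throw new Exception("Blah blah")
        return engine
    }
}

// in the controller (or test), where the session is available:
// def engine = otherService.getEngine(engineId, session.getAttribute("SessionVehicle"))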
I'm writing a service that shouldn't have to save anything. It gets some values, looks up a few things in the database, and then coughs back a response. Is there anything I should do to make the service faster / lower-overhead? Also, what's the best way to pass it something? I usually pass the id and fetch the object again; is that good/bad/dumb?
example
class DoStuffController {
def ExampleProcessingService
def yesDoIt = {
def lookup = "findme"
def theObject = ExampleThing.findByLookie(lookup)
def lolMap = ExampleProcessingService.doYourThing(theObject.id)
if(lolMap["successBool"]){
theObject.imaString = "Stuff"
theObject.save()
}
[]
}
}
service
class ExampleProcessingService{
static transactional = true //???????? false? not-a?
def doYourThing = {theID ->
def returnMap = [:]
def myInstance = ExampleThing.get(theID)
if(myInstance.something)returnMap.put "successBool", true
else returnMap.put "successBool", false
return returnMap
}
}
domain object
class ExampleThing {
String imaString
String lookie
static constraints = {
imaString(nullable:true)
}
def getSomething() {
return true
}
}
bootstrap
import learngrails.*
class BootStrap {
def init = { servletContext ->
def newThing = new ExampleThing(lookie:"findme")
newThing.save()
}
def destroy = {
}
}
Is there an advantage, disadvantage, or standard for passing the ID and doing a get() vs. passing the object itself? Does this change given that in my case I'm not going to save anything in the service? Is there something I'm doing glaringly wrong? Do you have a better suggestion for the title?
You've asked a lot of questions, and should split this into several individual questions. But I'll address the overall issue - this approach is fine in general.
There's not a lot of overhead in starting and committing a transaction that doesn't do any database persistence, but it is wasteful, so you should add
static transactional = false
In this case you're using the class as an easily injected singleton helper class. It's convenient to do transactional work in services because they're automatically transactional, but it's far from a requirement.
One thing though - do not use closures in services. They're required in controllers and taglibs (until 2.0 anyway) but should always be avoided in services and other classes. If you're not using the fact that it's a closure - i.e. passing it as an object to a method as a parameter, setting its delegate, etc. - then you're just being way too Groovy. If you're calling it like a method, make it a method. The real downside to closures in services is that when you want them to be transactional, they cannot be, because Spring interceptors intercept method calls, not closure calls that Groovy pretends are method calls. So there won't be any interception for transactions, security, etc.
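Applied to the service from the question, that advice boils down to something like this - same logic, just a real method and no transaction:

class ExampleProcessingService {

    static transactional = false

    Map doYourThing(long theID) {
        def myInstance = ExampleThing.get(theID)
        // a plain method can be intercepted by Spring; a closure property cannot
        return [successBool: myInstance.something ? true : false]
    }
}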
Is it possible to explicitly set the id of a domain object in Grails' Bootstrap.groovy (or anywhere, for that matter)?
I've tried the following:
new Foo(id: 1234, name: "My Foo").save()
and:
def foo = new Foo()
foo.id = 1234
foo.name = "My Foo"
foo.save()
But in both cases, when I print out the results of Foo.list() at runtime, I see that my object has been given an id of 1, or whatever the next id in the sequence is.
Edit:
This is in Grails 1.0.3, and when I'm running my application in 'dev' with the built-in HSQL database.
Edit:
chanwit has provided one good solution below. However, I was actually looking for a way to set the id without changing my domain's id generation method. This is primarily for testing: I'd like to be able to set certain things to known id values either in my test bootstrap or setUp(), but still be able to use auto_increment or a sequence in production.
Yes, with manual GORM mapping:
class Foo {
String name
static mapping = {
id generator:'assigned'
}
}
and your second snippet (not the first one) will do the job (the id won't be assigned when passed through the constructor).
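If you only want assigned ids during testing but the default generator in production, one possible approach (a rough, untested sketch; grails.util.Environment exists in newer Grails releases, older ones expose GrailsUtil.environment instead) is to switch the generator per environment inside the mapping block:

import grails.util.Environment

class Foo {
    String name
    static mapping = {
        // only use assigned ids when running tests
        if (Environment.current == Environment.TEST) {
            id generator: 'assigned'
        }
    }
}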
What I ended up using as a workaround was to not try and retrieve objects by their id. So for the example given in the question, I changed my domain object:
class Foo {
short code /* new field */
String name
static constraints = {
code(unique: true)
name()
}
}
I then used an enum to hold all of the possible values for code (which are static), and would retrieve Foo objects by doing a Foo.findByCode() with the appropriate enum value (instead of using Foo.get() with the id like I wanted to do previously).
It's not the most elegant solution, but it worked for me.
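For illustration, a sketch of what that lookup ends up looking like (the enum name and values are made up):

enum FooCode {
    ALPHA((short) 1),
    BETA((short) 2)

    short code
    FooCode(short code) { this.code = code }
}

// instead of Foo.get(someId):
def foo = Foo.findByCode(FooCode.ALPHA.code)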
As an alternative, assuming that you're importing data or migrating data from an existing app, for test purposes you could use local maps within the Bootstrap file. Think of it like an import.sql with benefits ;-)
Using this approach:
you wouldn't need to change your domain constraints just for testing,
you'll have a tested migration path from existing data, and
you'll have a good data slice (or full slice) for future integration tests.
Cheers!
def init = { servletContext ->
addFoos()
addBars()
}
def foosByImportId = [:]
private addFoos(){
def pattern = ~/.*\{FooID=(.*), FooCode=(.*), FooName=(.*)}/
new File("import/Foos.txt").eachLine {
def matcher = pattern.matcher(it)
if (!matcher.matches()){
return;
}
String fooId = StringUtils.trimToNull(matcher.group(1))
String fooCode = StringUtils.trimToNull(matcher.group(2))
String fooName = StringUtils.trimToNull(matcher.group(3))
def foo = Foo.findByFooName(fooName) ?: new Foo(fooCode: fooCode, fooName: fooName).save(failOnError: true)
foosByImportId.putAt(Long.valueOf(fooId), foo) // ids could differ
}
}
private addBars(){
...
String fooId = StringUtils.trimToNull(matcher.group(5))
def foo = foosByImportId[Long.valueOf(fooId)]
...
}