I'm using the ValidationRulesServlet to generate valdr JSON for my APIs. Currently the generated JSON looks like this:
{
  "Person" : {
    "firstName" : {
      "size" : {
        "min" : 2,
        "message" : "{javax.validation.constraints.Size.message}",
        "max" : 2147483647
      },
      "required" : {
        "message" : "{javax.validation.constraints.NotNull.message}"
      }
    },
    "lastName" : {
      "size" : {
        "min" : 2,
        "message" : "{javax.validation.constraints.Size.message}",
        "max" : 20
      },
      "required" : {
        "message" : "{javax.validation.constraints.NotNull.message}"
      }
    }
  }
}
I'm using Jersey for my REST services and I want the messages in the above JSON to be replaced with values from ValidationMessages.properties.
My ValidationMessages.properties is located on the classpath (src/main/resources) and is used correctly by Jackson; this can be confirmed by calling a REST endpoint with an invalid value. Here is an example response:
[
  {
    "message": "Must be between 2 and 2147483647 characters",
    "messageTemplate": "{javax.validation.constraints.Size.message}",
    "path": "PersonServiceImpl.updatePerson.arg0.firstName",
    "invalidValue": ""
  }
]
The corresponding message in my ValidationMessages.properties is
javax.validation.constraints.Size.message = Must be between {min} and {max} characters
How can I get the valdr JSON to output messages from ValidationMessages.properties rather than e.g. {javax.validation.constraints.Size.message}?
You can't, at least not out of the box. The reason is pretty obvious if you check the Bean Validation spec or look at the Constraint Javadoc:
Each constraint annotation must host the following attributes:
String message() default [...]; which should default to an error message key made of the fully-qualified class name of the constraint
followed by .message. For example
{com.acme.constraints.NotSafe.message}
So, in essence "message" : "{javax.validation.constraints.Size.message}" in the valdr constraint JSON signifies the message key rather than the actual validation message. It'd IMO be more sensible to call the JSON property messageKey to make this very clear but we wanted to stick with the Bean Validation lingo. In fact, all the properties in the JSON are extracted 1-by-1 from the Bean Validation Constraint.
So, you need a way to display "Must be between 2 and 2147483647 characters" in your AngularJS front-end if the Person.firstName.size constraint is violated. valdr achieves that by integrating well with angular-translate.
All you need to do is make your ValidationMessages.properties available to the front-end and initialize angular-translate with the messages from that file, for example along these lines:
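A minimal sketch of the angular-translate side. How the .properties content becomes a JavaScript object (a build step, a small servlet, etc.) is left open here, and the exact key format valdr looks up is an assumption in this sketch:

// Sketch: register the Bean Validation message keys with angular-translate.
// Assumption: keys are registered without the surrounding braces, and valdr
// supplies the constraint attributes (min, max) for {{...}} interpolation.
angular.module('myApp', ['valdr', 'pascalprecht.translate'])
  .config(function ($translateProvider) {
    $translateProvider.translations('en', {
      'javax.validation.constraints.Size.message':
        'Must be between {{min}} and {{max}} characters',
      'javax.validation.constraints.NotNull.message':
        'This field is required'
    });
    $translateProvider.preferredLanguage('en');
  });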
I want to create dummy data in the Realtime Database with this structure:
{
  "0" : {
    "0c1592ca-0fa5-43b9-88d2-c9cd77b30611" : {
      "token" : "0cu9CJPb_DIUfbr-Ay8vh6:-KQXn....",
      "member_id" : "123456789102",
      "update_at" : "2021/06/14 08:08:08"
    },
    "<uid>" : {
      "token" : "167 random characters",
      "member_id" : "12 random numbers",
      "update_at" : "YYYY/mm/DD HH:mm:ss"
    }
  },
  "1" : {
    ....
  },
  "2" : {
    ....
  },
  .....
  "9" : {
    ....
  }
}
A record is like this:
"36 random characters" : {
"token" : "167 random characters",
"member_id" : "12 random numbers",
"update_at" : "YYYY/mm/DD HH:mm:ss"
}
I've tried importing JSON files from the Firebase console, with a million records per node, but the import crashes from the second node onward, as in the image below. I can no longer import as easily as before.
Is there another way to create 10 million dummy child nodes like the ones above that is faster and more stable?
https://i.stack.imgur.com/nuBoM.png
The error message says that the JSON is invalid, so you might want to pass it through a JSON validator like https://jsonlint.com/.
Aside from that, I can imagine that your browser, the console, or the server runs into memory problems with this number of nodes in a single write (see the documented limits). I recommend using the API instead: read the JSON file locally and add it to Firebase in chunks, as sketched below, or use a tool like https://github.com/FirebaseExtended/firebase-import.
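For instance, a minimal sketch with the Node.js Admin SDK; the file name, chunk size, and database URL are placeholders:

const admin = require('firebase-admin');

// Assumed local file of the form { "<uid>": { token, member_id, update_at }, ... }
const records = require('./dummy-data.json');

admin.initializeApp({
  credential: admin.credential.applicationDefault(),
  databaseURL: 'https://<your-project>.firebaseio.com'
});

const db = admin.database();
const entries = Object.entries(records);
const CHUNK_SIZE = 10000;

async function upload() {
  for (let i = 0; i < entries.length; i += CHUNK_SIZE) {
    // update() merges the chunk's children into the node instead of replacing it
    const chunk = Object.fromEntries(entries.slice(i, i + CHUNK_SIZE));
    await db.ref('/0').update(chunk); // repeat per top-level node "0".."9"
    console.log(`uploaded ${Math.min(i + CHUNK_SIZE, entries.length)} of ${entries.length}`);
  }
}

upload().then(() => process.exit(0)).catch(console.error);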
Also see https://www.google.com/search?q=firebase+realtime+database+upload+large+JSON
I am currently trying to figure out if CouchDB is suitable for my use-case and if so, how. I have a situation similar to the following:
First set of documents (let's call them companies):
{ "_id" : 1, "name" : "Foo" }
{ "_id" : 2, "name" : "Bar" }
{ "_id" : 3, "name" : "Baz" }
Second set of documents (let's call them projects):
{ "_id" : 4, "name" : "FooProject1", "company" : 1 }
{ "_id" : 5, "name" : "FooProject2", "company" : 1 }
...
{ "_id" : 100, "name" : "BazProject2", "company" : 3 }
Third set of documents (let's call them incidents):
{ "_id" : "300", "project" : 4, "description" : "...", "cost" : 200 }
{ "_id" : "301", "project" : 4, "description" : "...", "cost" : 400 }
{ "_id" : "302", "project" : 4, "description" : "...", "cost" : 500 }
...
So in short, every company has multiple projects, and every project can have multiple incidents. One caveat: I model the data this way mainly because I come from a SQL background, so the modelling may be completely unsuitable. The second consideration is that I would like to add new incidents very easily, just by using the REST API provided by CouchDB, so the incidents have to be individual documents.
However, I now would like a view that lets me calculate the total cost for each company. Using map/reduce and linked documents I can easily define a view that gets me the total amount per project, but once I am at the project level I cannot aggregate any further up to the company level.
Is this possible at all using CouchDB? This kind of summarising sounds like a perfect use case for map/reduce. In SQL I would just do a three-table join, but it seems that in CouchDB the best I can get is a two-table join.
As mentioned, you cannot do joins in CouchDB, but this isn't a limitation; it is an invitation to think about your problem and approach it differently. The idiomatic way to do this in CouchDB is to denormalize: embed a small reference structure, for example an IncidentReference, composed of:
the project id
and the company id
That way your data would look like this:
{
  "_id" : "301",
  "project" : 4,
  "description" : "...",
  "cost" : 400,
  "reference" : {
    "projectId" : 4,
    "companyId" : 1
  }
}
This is just fine. Once you have that, you can use map/reduce to achieve what you want quite easily, as sketched below. Generally speaking, you need to think ahead about the way you are going to query your data.
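For example, a single map/reduce view over the incidents can aggregate straight to the company level; the design document and view names below are made up:

{
  "_id" : "_design/reports",
  "views" : {
    "cost_by_company" : {
      "map" : "function (doc) { if (doc.cost && doc.reference) { emit(doc.reference.companyId, doc.cost); } }",
      "reduce" : "_sum"
    }
  }
}

Querying /yourdb/_design/reports/_view/cost_by_company?group=true then returns one row per companyId with the summed cost, with no join required.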
I've installed SwiftMongoDB using CocoaPods and added 2 documents to the collection. When I try to retrieve them using the .find() method, it only returns one document.
func all() -> [MongoDocument] {
    let usersCollection = MongoCollection(name: "users")
    mongodb?.mongodb.registerCollection(usersCollection)

    // Query once and reuse the result instead of calling find() repeatedly
    let documents = usersCollection.find().successValue!

    for (index, value) in documents.enumerate() {
        debugPrint("\(index): \(value)")
    }

    // documents.count returns 1 even though two documents were inserted
    return documents
}
My collection looks like:
{ "_id" : ObjectId("56bb29ca42b9b41900000000"), "address" : "US", "given" : "User", "birthDate" : "1985-08-01", "family" : "UserFam", "identifier" : "E3826", "date" : "10.2.2016 at 14:15:6" }{ "_id" : ObjectId("56bb29ca42b9b41900000000"), "address" : "US", "given" : "User2", "birthDate" : "1985-08-01", "family" : "UserFam2", "identifier" : "E3826", "date" : "10.2.2016 at 14:15:6" }
Is there another way of getting all the documents? Am I doing something wrong?
I have never used SwiftMongoDB, but I have used Swift for iOS development and MongoDB with Java. First of all, here are your first and second objects' ids side by side:
1st: 56bb29ca42b9b41900000000
2nd: 56bb29ca42b9b41900000000
As you can see, they are the same, so I strongly believe your issue arises from that. Have you defined that property as a primary key?
This is a bug. Maybe the package is still in its early versions...
I am new to Elasticsearch; I just set it up and tried the default search. I am using the elasticsearch-rails gem. I need to write a custom query with priority search (some fields in the table are more important than others, e.g. title, or updated_at within the last 6 months). I have tried to find an explanation or tutorial for how to do this, but nothing seems understandable. Can anyone help me with this? The sooner the better.
I've never used the Ruby/Elasticsearch integration, but it doesn't seem too hard. The docs here show that you'd want to do something like this:
client.search index: 'my-index', body: { query: { match: { title: 'test' } } }
to do a basic search.
The ES documentation here shows how to do a field-boosted query:
{
  "multi_match" : {
    "query" : "this is a test",
    "fields" : [ "subject^3", "message" ]
  }
}
Putting it all together, you'd do something like this:
client.search index: 'my-index', body: {
  query: {
    multi_match: {
      query: "this is a test",
      fields: ["subject^3", "message"]
    }
  }
}
That will allow you to search with field boosts; in the above case, the subject field is given three times the weight of the message field.
There is a very good blog post about how to do advanced scoring. Part of it shows an example of adjusting the score based on a date:
...
"filter": {
  "exists": {
    "field": "date"
  }
},
"script": "(0.08 / ((3.16*pow(10,-11)) * abs(now - doc['date'].date.getMillis()) + 0.05)) + 1.0"
...
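If you also need the "updated_at within the last 6 months" part of the question, a function_score query with a date decay function avoids scripting altogether. A minimal sketch, assuming an updated_at date field (the field names and decay parameters are illustrative):

{
  "function_score": {
    "query": {
      "multi_match": {
        "query": "this is a test",
        "fields": ["title^3", "message"]
      }
    },
    "functions": [
      {
        "gauss": {
          "updated_at": { "origin": "now", "scale": "90d", "decay": 0.5 }
        }
      }
    ],
    "boost_mode": "multiply"
  }
}

Recently updated documents keep their full multi_match score, while older documents are progressively demoted.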
I have done this in PHP and have never used the gem from Ruby on Rails, but either way you can give priority to fields using the caret (^) notation.
Example: suppose we have the fields name, email, message, and address in a table, and priority should be given to name and message. Then you can write the following:
> { "multi_match" : {
> "query" : "this is a test",
> "fields" : [ "name^3", "message^2".... ] } }
Here name has three times higher priority than the unboosted fields, and message has two times higher priority.
I have this simple model:
abstract class Info {
    ObjectId id
    Date dateCreated
    Date lastUpdated
}

class Question extends Info {
    String title
    String content
    List<Answer> answers = []
    static embedded = ['answers']
}

class Answer {
    String content
}
Written this way, answers are embedded in the question (and no id is maintained for an answer). I want to maintain the id, dateCreated, and lastUpdated fields for every answer, so I try the following:
class Answer extends Info {
    String content
}
When I run a simple test case (save a question with 1 answer), I get the following:
> db.question.find()
{
  "_id" : ObjectId("4ed81d47e4b0777d795ce3c4"),
  "answers" : [
    { "content" : "its very cool", "dateCreated" : null, "lastUpdated" : null, "version" : null }
  ],
  "content" : "whats up with mongodb?",
  "dateCreated" : ISODate("2011-12-02T00:35:19.303Z"),
  "lastUpdated" : ISODate("2011-12-02T00:35:19.303Z"),
  "title" : "first question",
  "version" : 0
}
I notice here that the dateCreated and lastUpdated fields are not auto-maintained by Grails. The version field was added but has a null value as well, and interestingly no _id field was created (even though I defined id in the Info class).
In a second scenario, I try this:
class Answer {
    ObjectId id
    String content
}
and I get the following output:
> db.question.find()
{
  "_id" : ObjectId("4ed81c30e4b076cb80ec947d"),
  "answers" : [
    { "content" : "its very cool" }
  ],
  "content" : "whats up with mongodb?",
  "dateCreated" : ISODate("2011-12-02T00:30:40.233Z"),
  "lastUpdated" : ISODate("2011-12-02T00:30:40.233Z"),
  "title" : "first question",
  "version" : 0
}
This time, no id is created for the embedded document either. Is there any explanation for these scenarios? Why is there no id property, and why are dateCreated, lastUpdated, and version null? Is it intended to work this way, or is it a bug?
Thank you,
This is probably due to how the Grails framework does the conversion (the GORM module).
You may get quicker/better answers on the Grails forum.
Basically, it seems that some of the automatic behavior (filling in the dates and the ObjectId) is only applied to the root object, not to embedded sub-objects.
You can also check out an alternative ORM based on Morphia:
http://www.grails.org/plugin/mongodb-morphia