We are adding CORS support to our Swagger API, which includes defining an options operation per path. Since this is boilerplate code, we want to define the options operation once in the definitions section, like so:
"definitions":{
"CORS":{ .. }
}
And then reference the operation in our paths, like so:
"paths":{
"/system/info":{
"options" : {
"$ref": "#/definitions/CORS"
}
}
}
This does not seem to work when we upload the Swagger definition. What is the proper way to accomplish our goal of defining a path operation once and then reusing it across paths?
You can point an entire path at an external location:
"paths": {
"/system/info": {
"$ref": "cors.json"
}
}
but not an individual HTTP method. In addition, the spec doesn't allow a relative reference for a path, so you'll have to put it in a separate document.
See the Swagger specification for information on the Path Item Object and the top-level Swagger object.
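For illustration, cors.json would then hold the entire Path Item object for that path; a minimal sketch, in which the summaries, response codes, and headers are illustrative, not prescribed by the spec:

{
  "options": {
    "summary": "CORS preflight",
    "responses": {
      "200": {
        "description": "CORS headers",
        "headers": {
          "Access-Control-Allow-Origin": { "type": "string" },
          "Access-Control-Allow-Methods": { "type": "string" }
        }
      }
    }
  },
  "get": {
    "responses": {
      "200": { "description": "System info" }
    }
  }
}

Note that the other operations on /system/info (like the get above) also have to live in this external file, since the $ref replaces the whole path item, not just the options operation.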
I have a situation where, once I get PagingData<T: UIModel>, I need to get additional data from a different API. The second API requires arguments that are in the first API's response. Currently I am collecting in the UI layer in lifecycleScope, as follows:
lifecycleScope.launch {
    loadResults().collectLatest {
        PagingResultAdapter.submitData(lifecycle, it)
        // Extracting the data inside PagingData and setting it in the viewmodel.
        it.map { uiModel ->
            Timber.e("Getting data inside map function..")
            viewModel.setFinalResults(uiModel)
        }
    }
}
But the problem is that the map {} function on the PagingData won't run during data fetching. The list is populated and the UI is showing the items in the RecyclerView, but the map function is not running (I am not able to see the log).
The UI-layer loadResults() function in turn calls viewModel.loadResults() with UI-level variables. In terms of paging everything is working fine, but I cannot transform the PagingData into UIModel in any layer.
The official site suggests using the map{} function only:
https://developer.android.com/topic/libraries/architecture/paging/v3-transform#basic-transformations
But I am not getting at which layer I should apply map{}, or whether to apply it before or after collecting. Any help is appreciated.
PagingData.map is a lazy transformation that runs during collection, which happens when you call .submitData(pagingData). Since you are only submitting the original, untransformed PagingData, your .map transform will never run.
You should apply the .map to the PagingData you will actually end up submitting in order to have it run. Usually this is done in the ViewModel, so that the results are also cached in case of a config change or a cached scenario, such as navigating between fragments.
You didn't share your ViewModel or the place where you create your Pager, but assuming this happens in a different layer, you would have something like:
MyViewModel.kt
fun loadResults() = Pager(...) { ... }
    .flow
    .map { pagingData ->
        // needs: import androidx.paging.map (for PagingData.map)
        pagingData.map { uiModel ->
            Timber.e("Getting data inside map function..")
            setFinalResults(uiModel)
            uiModel
        }
    }
    .cachedIn(viewModelScope)
MyUi.kt
viewModel.loadResults().collectLatest {
    pagingDataAdapter.submitData(it)
}
NOTE: You should use the suspending version of .submitData since you are using Flow / coroutines, because it is able to propagate cancellation directly instead of relying on a launched job plus eager cancellation, as the non-suspending version does. There shouldn't be any visible impact, but it is more performant.
Try with:
import androidx.paging.map

.flow.map { pagingData ->
    pagingData.map { it.yourTransformation() }
}
As per the Grails documentation:
Grails also lets you write your domain model in Java or reuse an existing one that already has Hibernate mapping files. Simply place the mapping files into grails-app/conf/hibernate and either put the Java files in src/java or the classes in the project's lib directory if the domain model is packaged as a JAR. You still need the hibernate.cfg.xml though!
So this is exactly what I did.
I have used a Java domain model and a hibernate.cfg.xml file for mapping. I also use {DomainName}Constraints.groovy for adding Grails constraints, and I have also been adding functions to {DomainName}Constraints. For example, below is the content of my EmployeeConstraints.groovy:
Employee.metaClass.static.findByDepartment = { depCode ->
    createCriteria().list {
        department {
            inList('code', depCode)
        }
    }
}
Now this works fine. But when I add a projection to it (code below), just to get the employee code:
Employee.metaClass.static.findByDepartment = { depCode ->
    createCriteria().list {
        projections { property('empCode', 'empCode') }
        department { inList('code', depCode) }
    }
}
I get the following error:
" No signature of method: com.package.script142113.projections() is applicable for argument types.. "
Can someone point me to what's wrong with the code?
Thanks!
The property projection is used to return a subset of an object's properties. For example, to return just the foo and bar properties use:
projections {
property('foo')
property('bar')
}
You're getting an error because you've called the property method with two arguments instead of one.
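Applied to your query, a minimal corrected sketch (keeping the rest of your criteria unchanged) would be:

Employee.metaClass.static.findByDepartment = { depCode ->
    createCriteria().list {
        projections {
            property('empCode') // one argument: the name of the property to return
        }
        department {
            inList('code', depCode)
        }
    }
}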
By the way, I see another potential problem with your code. Grails will automatically create a dynamic finder findByDepartment that has the same name as the method you're trying to add via the meta-class. I have no idea which one will take precedence, but I would suggest you avoid this potential problem, and simplify your code, by adding this query using Grails' named query support and calling it something like getByDepartment, so that the name doesn't clash with a dynamic finder.
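A minimal sketch of what that named query might look like, assuming the same department/code association as in your criteria:

class Employee {
    static namedQueries = {
        getByDepartment { depCode ->
            department {
                inList('code', depCode)
            }
        }
    }
}

You would then call it as Employee.getByDepartment(depCodes).list().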
The answer by Dónal should be the correct one, but I found some strange behavior with Grails 3.1. I got the same message using this call:
Announcement.createCriteria().list {
    projections {
        property('id')
        property('title')
    }
}.collect { [id: it['id'], title: it['title']] } // it['id'] not found
I fixed it by removing the projections closure:
Announcement.createCriteria().list {
    property('id')
    property('title')
}.collect { [id: it['id'], title: it['title']] } // got the it['id']
Hope this helps.
My current understanding:
Elasticsearch creates the mapping for an index the first time it receives the JSON datasets.
This mapping cannot be changed, but the datasets can be re-mapped.
Question:
Forget re-mapping. Is there any way to tell ES to behave by default as:
"Consider everything that is not a date to be of string type"?
Also, will I be losing out on much if I do this?
Update:
I added the file config/mappings/_default/mapping.json with the following contents:
{
  "dynamic_templates": [
    {
      "template_1": {
        "match": "*",
        "match_mapping_type": "int",
        "mapping": {
          "type": "string"
        }
      },
      "template_2": {
        "match": "*",
        "match_mapping_type": "long",
        "mapping": {
          "type": "string"
        }
      }
    }
  ]
}
I also tried placing the following at config/default_mapping.json:
{
  "_default_": {
    "match": "*",
    "match_mapping_type": "int",
    "mapping": {
      "type": "string"
    }
  }
}
My 'motive' is to get rid of the errors that crop up when int and long types change to string. Will this map all int and long values as strings across all indices that are created in the future? Do I need to nest this dynamic_templates key within _all?
Update II:
Adding this mapping file causes Elasticsearch to cough up:
[2014-02-04 10:48:34,396][DEBUG][action.admin.indices.create] [Her] [logstash-2014.02.04] failed to create
org.elasticsearch.index.mapper.MapperParsingException: mapping [mapping.json]
at org.elasticsearch.cluster.metadata.MetaDataCreateIndexService$2.execute(MetaDataCreateIndexService.java:312)
at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:298)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:135)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:701)
Caused by: java.lang.ClassCastException: java.util.ArrayList cannot be cast to java.util.Map
at org.elasticsearch.index.mapper.DocumentMapperParser.extractMapping(DocumentMapperParser.java:268)
at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:155)
at org.elasticsearch.index.mapper.MapperService.parse(MapperService.java:314)
at org.elasticsearch.index.mapper.MapperService.merge(MapperService.java:193)
at org.elasticsearch.cluster.metadata.MetaDataCreateIndexService$2.execute(MetaDataCreateIndexService.java:309)
... 5 more
2014-02-04 10:48:34 +0000 [warn]: temporarily failed to flush the buffer. next_retry=2014-02-04 10:48:33 +0000 error_class="Net::HTTPServerException" error="400 \"Bad Request\"" instance=17509700
When you start from scratch, thus without a mapping, you rely on the defaults. Every time you send a document, the fields that weren't mapped yet are automatically mapped based on their JSON type (and on conventions for dates). That said, if you send a field in your first document as a number and that same field becomes a string in your second document, the index operation for the second document will return an error.
There are APIs to manage mappings, which doesn't mean that you have to declare all your fields; you can specify just the ones that you want to behave differently from the default. You can specify mappings while creating an index, use the put mapping API if the index already exists, or even include them in index templates, for indices that have yet to be created.
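For instance, a put mapping call could look roughly like this, where the index, type, and field names are made up for illustration:

curl -XPUT 'http://localhost:9200/my_index/my_type/_mapping' -d '{
  "my_type": {
    "properties": {
      "created_at": { "type": "date" },
      "note": { "type": "string" }
    }
  }
}'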
Changing the mappings is possible, but only backwards-compatible changes can be applied. You can always add new fields, but you can't change the type or the analyzer of an existing field. What you could do in that case is try to make the change backwards compatible by using multi-fields; otherwise you need to reindex against the updated mappings.
As for your last question: if you index everything as a string, you lose what you can usually do with numbers, e.g. range queries. Whether this is feasible or not depends on your data and what you need to do with it.
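As an aside, the ClassCastException in your update is what you typically get when the mapping file lacks the root type name: dynamic_templates must sit under a type (here _default_), and each entry of the array is usually a single-key object holding one template. A minimal sketch of that shape, with an illustrative template name and match rules:

{
  "_default_": {
    "dynamic_templates": [
      {
        "longs_as_strings": {
          "match": "*",
          "match_mapping_type": "long",
          "mapping": {
            "type": "string"
          }
        }
      }
    ]
  }
}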
I have a problem with cross-referencing terminals that are only locally unique (in their block/scope), but not globally. I found tutorials that describe how to use fully qualified names or package declarations, but my case is syntactically a little bit different from the examples, and I cannot change the DSL to support something like explicit fully qualified names or package declarations.
In my DSL I have two types of structured JSON resources:
The instance that contains my data.
A meta model, containing type information etc. for my data.
I can easily parse those two, and get an EMF model with the following Java snippet:
new MyDSLStandaloneSetup().createInjectorAndDoEMFRegistration();
ResourceSet rs = new ResourceSetImpl();
rs.getResource(URI.createPlatformResourceURI("/Foo/meta.json", true), true);
Resource instanceResource= rs.getResource(URI.createPlatformResourceURI("/Bar/instance.json", true), true);
EObject eobject = instanceResource.getContents().get(0);
Simplified example:
meta.json
{
  "toplevel_1": {
    "sublevels": {
      "sublevel_1": {
        "type": "int"
      },
      "sublevel_2": {
        "type": "long"
      }
    }
  },
  "toplevel_2": {
    "sublevels": {
      "sublevel_1": {
        "type": "float"
      },
      "sublevel_2": {
        "type": "double"
      }
    }
  }
}
instance.json
{
  "toplevel_1": {
    "sublevel_1": "1",
    "sublevel_2": "2"
  },
  "toplevel_2": {
    "sublevel_1": "3",
    "sublevel_2": "4"
  }
}
From this I want to infer that:
toplevel_1:sublevel_1 has type int and value 1
toplevel_1:sublevel_2 has type long and value 2
toplevel_2:sublevel_1 has type float and value 3
toplevel_2:sublevel_2 has type double and value 4
I was able to cross-reference the unique toplevel elements and iterate over all sublevels until I found the ones I was looking for, but for my use case that is quite inefficient and complicated. Also, I can't get the generated editor to link between the sublevels this way.
I played around with linking and scoping, but I'm unsure as to what I really need, and whether I have to extend the provider classes AbstractDeclarativeScopeProvider and/or DefaultDeclarativeQualifiedNameProvider.
What's the best way to go?
See also:
Xtext cross reference using custom terminal rule
http://www.eclipse.org/Xtext/documentation.html#scoping
http://www.eclipse.org/Xtext/documentation.html#linking
After some trial and error I solved my problem with a ScopeProvider.
The main issue was that I didn't really understand what a scope is in Xtext terms, and what I have to provide it to.
Looking at the signature from the documentation:
IScope scope_<RefDeclaringEClass>_<Reference>(<ContextType> ctx, EReference ref)
In my example language:
RefDeclaringEClass would refer to the Sublevel from instance.json,
Reference to the cross-reference to the Sublevel from meta.json, and
ContextType would match the RefDeclaringEClass.
Using the eContainer of ctx I can get the Toplevel from instance.json.
This Toplevel already has a cross-reference to the matching Toplevel from meta.json, which I can use to get the Sublevels from meta.json. This collection of Sublevels is basically the scope within which the current Sublevel should be unique.
To get the IScope I used Scopes#scopeFor(Iterable).
I didn't post any code here because the actual grammar is bigger/different, and therefore wouldn't really help the explanation.
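Still, a minimal sketch of the provider method described above could look like the following; all EClass and feature names (Sublevel, Toplevel, MetaToplevel, getMeta, getSublevels) are hypothetical stand-ins for the real grammar:

import org.eclipse.emf.ecore.EReference;
import org.eclipse.xtext.scoping.IScope;
import org.eclipse.xtext.scoping.Scopes;
import org.eclipse.xtext.scoping.impl.AbstractDeclarativeScopeProvider;

public class MyDslScopeProvider extends AbstractDeclarativeScopeProvider {

    // Scope the Sublevel -> meta-Sublevel cross-reference: the candidates are
    // the sublevels of the meta Toplevel that the containing instance
    // Toplevel already references.
    IScope scope_Sublevel_meta(Sublevel ctx, EReference ref) {
        Toplevel instanceToplevel = (Toplevel) ctx.eContainer();
        MetaToplevel metaToplevel = instanceToplevel.getMeta();
        return Scopes.scopeFor(metaToplevel.getSublevels());
    }
}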
I messed around with this a bit yesterday and failed miserably. I want to convert:
"/$controller/$action?/$id?"
To
# in pseudocode
"/$controller/$id?/$action?"
# ideal regex
"\/(\w+)(\/\d+)?(\/\w+)?"
The most obvious way, "/$controller/$action?/$id?", failed.
I can write the regexes to do it, but I am having trouble finding a way to use true regexes (I found RegexUrlMapping but could not find out how to use it), and I also can't find documentation on how to assign a group to a variable.
My question has two parts:
How do I define a URL resource with a true regex?
How do I bind a "group" to a variable? In other words, if I define a regex, how do I bind it to a variable like $controller, $id, or $action?
I would also like to be able to support the .json notation, /user/id.json.
Other things I have tried, which I thought would work:
"/$controller$id?$action?"{
constraints {
controller(matches:/\w+/)
id(matches:/\/\d+/)
action(matches:/\/\w+/)
}
}
also tried:
"/$controller/$id?/$action?"{
constraints {
controller(matches:/\w+/)
id(matches:/\d+/)
action(matches:/\w+/)
}
}
The Grails way to deal with this is to set
grails.mime.file.extensions = true
in Config.groovy. This will cause Grails to strip off the file extension before applying the URL mappings, but make it available for use by withFormat:
def someAction() {
    withFormat {
        json {
            render([message: "hello"] as JSON)
        }
        xml {
            render(contentType: 'text/xml') {
                //...
            }
        }
    }
}
For this you'd just need a URL mapping of "$controller/$id?/$action?"
I'm not aware of any way to use regular expressions the way you want in the URL mappings, but you could get a forward mapping working using the fact that you can specify closures for parameter values, which get evaluated at runtime with access to the other params:
"$controller/$a?/$b?" {
action = { params.b ?: params.a }
id = { params.b ? params.a : null }
}
which says "if b is set then use that as the action and a as the id, otherwise use a as the action and set id to null". But this wouldn't give you a nice reverse mapping; i.e., createLink(controller:'foo', action:'bar', id:1) wouldn't generate anything sensible, and you'd have to use createLink(controller:'foo', params:[a:1, b:'bar']) instead.
Edit
A third possibility you could try is to combine the
"/$controller/$id/$action"{
constraints {
controller(matches:/\w+/)
id(matches:/\d+/)
action(matches:/\w+/)
}
}
mapping with a complementary
"/$controller/$action?"{
constraints {
controller(matches:/\w+/)
action(matches:/(?!\d+$)\w+/)
}
}
using negative lookahead to ensure the two mappings are disjoint.