Unable to find a model that matches key - swagger

Upgraded Springfox from 2.9.2 to 3.0.0 based on the steps below:
https://springfox.github.io/springfox/docs/snapshot/#migrating-from-existing-2-x-version
In the new version, though the functionality works fine, the following ERROR is printed in the logs when hitting "/swagger-ui/index.html".
Using springfox-boot-starter 3.0.0. Any input would be appreciated.
[ReferenceModelSpecificationToPropertyConverter] Unable to find a
model that matches key
ModelKey{qualifiedModelName=ModelName{namespace='java.time',
name='LocalDate'}, viewDiscriminator=null,
validationGroupDiscriminators=[], isResponse=true}

As per the Springfox documentation
http://springfox.github.io/springfox/docs/current/#answers-to-common-questions-and-problems
the way to correctly map the "Date" and "DateTime" types to their corresponding swagger types is to:
Substitute "Date" types (java.time.LocalDate, org.joda.time.LocalDate) with java.sql.Date.
Substitute "DateTime" types (java.time.ZonedDateTime, org.joda.time.LocalDateTime, …) with java.util.Date.
docket
    .directModelSubstitute(LocalDate.class, java.sql.Date.class)
    .directModelSubstitute(LocalDateTime.class, java.util.Date.class)
Example of a Docket bean:
@Bean
public Docket api() {
    Docket docket = new Docket(DocumentationType.SWAGGER_2)
            .select()
            .apis(RequestHandlerSelectors.basePackage("com.test"))
            .paths(PathSelectors.regex("/api/.*"))
            .build()
            .apiInfo(apiInfo())
            .pathMapping("/")
            .forCodeGeneration(true)
            .genericModelSubstitutes(ResponseEntity.class)
            .directModelSubstitute(LocalDate.class, java.sql.Date.class)
            .directModelSubstitute(LocalDateTime.class, java.util.Date.class)
            .useDefaultResponseMessages(false);
    return docket;
}

Not sure if this is your case, but for me this problem arose because I had a JsonSubTypes-based generic inheritance where one of the child classes had List<LocalDate> and List<LocalDateTime> as parameters. The fix for me was to include extra rules for those types as follows:
myDocket.alternateTypeRules(
    AlternateTypeRules.newRule(
        typeResolver.resolve(List.class, LocalDate.class),
        typeResolver.resolve(List.class, java.sql.Date.class)),
    AlternateTypeRules.newRule(
        typeResolver.resolve(List.class, LocalDateTime.class),
        typeResolver.resolve(List.class, java.util.Date.class)))
The replacement classes are the ones recommended on the official site.
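Putting the two suggestions together, here is a minimal sketch of a Docket bean that registers both the direct substitutions and the extra list rules. It assumes springfox-boot-starter 3.0.0 exposes a com.fasterxml.classmate.TypeResolver bean (it does by default); the package and path values are carried over from the example above, so adjust them to your project:
@Bean
public Docket api(TypeResolver typeResolver) {
    return new Docket(DocumentationType.SWAGGER_2)
            // Map the date types themselves
            .directModelSubstitute(LocalDate.class, java.sql.Date.class)
            .directModelSubstitute(LocalDateTime.class, java.util.Date.class)
            // Also map List<LocalDate> / List<LocalDateTime> occurrences
            .alternateTypeRules(
                AlternateTypeRules.newRule(
                    typeResolver.resolve(List.class, LocalDate.class),
                    typeResolver.resolve(List.class, java.sql.Date.class)),
                AlternateTypeRules.newRule(
                    typeResolver.resolve(List.class, LocalDateTime.class),
                    typeResolver.resolve(List.class, java.util.Date.class)))
            .select()
            .apis(RequestHandlerSelectors.basePackage("com.test"))
            .paths(PathSelectors.regex("/api/.*"))
            .build();
}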

Related

How to customize an existing Grails plugin functionality, modifying behavior of doWithSpring method

I am new to Grails, and while working with the Spring Security LDAP plugin I found that it accepts the LDAP server password in plain text only. The task at hand is to pass an encrypted password which is decrypted before it is consumed by the plugin during its initialization phase.
I have already searched all the blogs and Stack Overflow questions I could find, but could not find a way to extend the main plugin class and simply override the doWithSpring() method so that I can add the required decryption logic for the LDAP server password. Any help here will be appreciated.
I have already seen and tried the Jasypt plugin, but it does not work well if the password is stored in an external file rather than application.yml. So I am looking for a solution to extend the Spring Security plugin's main class, add the required behavior, and register the custom class.
EDIT
Adding the snippet from the Grails LDAP Security plugin which I am trying to override. If I can successfully update the value of the securityConfig object before the plugin loads, the purpose is solved.
Some snippet from the plugin:
def conf = SpringSecurityUtils.securityConfig
...
...
contextSource(DefaultSpringSecurityContextSource, conf.ldap.context.server) { // 'ldap://localhost:389'
    authenticationSource = ref('ldapAuthenticationSource')
    authenticationStrategy = ref('authenticationStrategy')
    userDn = conf.ldap.context.managerDn // 'cn=admin,dc=example,dc=com'
    password = conf.ldap.context.managerPassword // 'secret'
    contextFactory = contextFactoryClass
    dirObjectFactory = dirObjectFactoryClass
    baseEnvironmentProperties = conf.ldap.context.baseEnvironmentProperties // none
    cacheEnvironmentProperties = conf.ldap.context.cacheEnvironmentProperties // true
    anonymousReadOnly = conf.ldap.context.anonymousReadOnly // false
    referral = conf.ldap.context.referral // null
}
ldapAuthenticationSource(SimpleAuthenticationSource) {
    principal = conf.ldap.context.managerDn // 'cn=admin,dc=example,dc=com'
    credentials = conf.ldap.context.managerPassword // 'secret'
}
You don't need to override the doWithSpring() method in the existing plugin. You can provide your own plugin which loads after the one you want to affect and have its doWithSpring() add whatever you want to the context. If you add beans with the same names as the ones added by the other plugin, yours will replace those beans as long as you configure your plugin to load after the other one. Similarly, you could do the same thing in resources.groovy of the app if you don't want to write a plugin for this.
You have other options too. You could write a bean post processor or bean definition post processor that affects the beans created by the other plugin. Depending on the particulars, that might be a better idea.
EDIT:
After seeing your comment below I created a simple example that shows how you might use a definition post processor. See the project at https://github.com/jeffbrown/postprocessordemo.
The interesting bits:
https://github.com/jeffbrown/postprocessordemo/blob/master/src/main/groovy/demo/SomeBean.groovy
package demo

class SomeBean {
    String someValue
}
https://github.com/jeffbrown/postprocessordemo/blob/master/src/main/groovy/demo/SomePostProcessor.groovy
package demo

import org.springframework.beans.BeansException
import org.springframework.beans.MutablePropertyValues
import org.springframework.beans.PropertyValue
import org.springframework.beans.factory.config.BeanDefinition
import org.springframework.beans.factory.config.ConfigurableListableBeanFactory
import org.springframework.beans.factory.support.BeanDefinitionRegistry
import org.springframework.beans.factory.support.BeanDefinitionRegistryPostProcessor

class SomePostProcessor implements BeanDefinitionRegistryPostProcessor {

    @Override
    void postProcessBeanDefinitionRegistry(BeanDefinitionRegistry registry) throws BeansException {
        BeanDefinition definition = registry.getBeanDefinition('someBean')
        MutablePropertyValues values = definition.getPropertyValues()
        PropertyValue value = values.getPropertyValue('someValue')
        def originalValue = value.getValue()
        // this is where you could do your decrypting...
        values.addPropertyValue('someValue', "MODIFIED: ${originalValue}".toString())
    }

    @Override
    void postProcessBeanFactory(ConfigurableListableBeanFactory beanFactory) throws BeansException {
    }
}
https://github.com/jeffbrown/postprocessordemo/blob/master/grails-app/conf/spring/resources.groovy
beans = {
    someBean(demo.SomeBean) {
        someValue = 'Some Value'
    }
    somePostProcessor demo.SomePostProcessor
}
https://github.com/jeffbrown/postprocessordemo/blob/master/grails-app/init/postprocessordemo/BootStrap.groovy
package postprocessordemo

import demo.SomeBean

class BootStrap {

    SomeBean someBean

    def init = { servletContext ->
        log.info "The Value: ${someBean.someValue}"
    }
    def destroy = {
    }
}
At application startup you will see log output that looks something like this...
2017-10-23 19:04:54.356 INFO --- [ main] postprocessordemo.BootStrap : The Value: MODIFIED: Some Value
The "MODIFIED" there is evidence that the bean definition post processor modified the property value in the bean. In my example I am simply prepending some text to the string. In your implementation you could decrypt a password or do whatever you want to do there.
I hope that helps.
After trying the Jasypt plugin and BeanPostProcessor solutions unsuccessfully for my use case, I found the solution below to work perfectly.
To describe the problem statement again:
a) we had to keep the passwords in an encrypted format inside properties files
b) and given we were packaging as a WAR file, the properties must not be kept inside the WAR, so that automated deployment scripts can update the encrypted passwords depending on the environment
The Jasypt plugin was a perfect solution for use case a), but it was not able to cover scenario b).
Moreover, the Grails LDAP Security plugin was getting loaded quite early, so bean post processors were not helping here either.
Solution:
Created a new class implementing the SpringApplicationRunListener interface, overrode its methods, and parsed the properties file using YamlPropertySourceLoader.
Sample code:
YamlPropertySourceLoader loader = new YamlPropertySourceLoader();
PropertySource<?> applicationYamlPropertySource = loader.load(
        "application.yml", new ClassPathResource("application.yml"), "default");
return applicationYamlPropertySource;
Once the properties were loaded into the MapPropertySource object, I parsed them for the encrypted values and applied the decryption logic.
This whole implementation executes before any plugins are initialized during the Grails boot process, which solves the problem.
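For illustration, a rough sketch of the same idea using an ApplicationListener for the environment-prepared event, a close cousin of SpringApplicationRunListener whose signature is stable across the Boot 1.x versions used by Grails 3.x. DecryptingEnvironmentListener and decrypt() are hypothetical names, and the listener would be registered in META-INF/spring.factories under org.springframework.context.ApplicationListener:
import java.util.LinkedHashMap;
import java.util.Map;
import org.springframework.boot.context.event.ApplicationEnvironmentPreparedEvent;
import org.springframework.boot.env.YamlPropertySourceLoader;
import org.springframework.context.ApplicationListener;
import org.springframework.core.env.MapPropertySource;
import org.springframework.core.env.PropertySource;
import org.springframework.core.io.ClassPathResource;

public class DecryptingEnvironmentListener
        implements ApplicationListener<ApplicationEnvironmentPreparedEvent> {

    @Override
    public void onApplicationEvent(ApplicationEnvironmentPreparedEvent event) {
        try {
            // Load application.yml ourselves, as in the snippet above
            YamlPropertySourceLoader loader = new YamlPropertySourceLoader();
            PropertySource<?> yaml = loader.load(
                    "application.yml", new ClassPathResource("application.yml"), "default");

            // Decrypt any encrypted values before plugins read them
            Map<String, Object> decrypted = new LinkedHashMap<>();
            MapPropertySource source = (MapPropertySource) yaml;
            for (String name : source.getPropertyNames()) {
                decrypted.put(name, decrypt(source.getProperty(name)));
            }

            // Register with highest precedence so the plugin sees the decrypted values
            event.getEnvironment().getPropertySources().addFirst(
                    new MapPropertySource("decrypted-application-yml", decrypted));
        } catch (Exception e) {
            throw new IllegalStateException("Could not load application.yml", e);
        }
    }

    private Object decrypt(Object value) {
        // Placeholder: apply your real decryption logic to encrypted values here
        return value;
    }
}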
Hope it will help others.

How to specify coder for KV<Boolean, Map<String, Object>> in google Beam/dataflow

I have a table that is described in a JSON file, and based on that I want to create a collection to use as a sideInput later on.
PCollection<KV<Boolean, Map<String, Object>>> pC = p_jsonstring
    .apply("create ...", MapElements.via((String input) -> {
        try {
            ObjectMapper mapper = new ObjectMapper();
            Map<String, Object> mytable =
                mapper.readValue(input, new TypeReference<Map<String, Object>>() {});
            Boolean key = (Boolean) mytable.get("mykey");
            return KV.of(key, mytable);
        } catch (IOException e) {
            e.printStackTrace();
            return null;
        }
    }).withOutputType(new TypeDescriptor<KV<Boolean, Map<String, Object>>>() {}));
When running it, I get the following error messages:
SEVERE: Unable to return a default Coder for create KV../Map.out [PCollection]. Correct one of the following root causes:
No Coder has been manually specified; you may do so using .setCoder().
Inferring a Coder from the CoderRegistry failed: Unable to provide a default Coder for org.apache.beam.sdk.values.KV>. Correct one of the following root causes:
Building a Coder using a registered CoderFactory failed: Cannot provide coder for parameterized type org.apache.beam.sdk.values.KV>: Unable to provide a default Coder for java.util.Map. Correct one of the following root causes:
Building a Coder using a registered CoderFactory failed: Cannot provide coder for parameterized type java.util.Map: Unable to provide a default Coder for java.lang.Object. Correct one of the following root causes:
Building a Coder using a registered CoderFactory failed: Cannot provide coder based on value with class java.lang.Object: No CoderFactory has been registered for the class.
Building a Coder from the @DefaultCoder annotation failed: Class java.lang.Object does not have a @DefaultCoder annotation.
I think the issue is mainly related to Object in Map<String, Object>, but in my case the mapped value is only determined at runtime when reading the JSON string from the file. The Object type can be a string, number, or boolean.
Any suggestions?
I think the canned TypeDescriptors.kvs should work here as your output type, and you could consider keeping your input String as a String in the Map values, deserializing it only when you actually want to process the object. If you do want to deserialize here, consider creating a Schema for the deserialized object and using a Row as your value class. You can generate a Coder from that Schema.
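For example, a minimal sketch of the first suggestion, reusing the code from the question but keeping the raw JSON string as the value so a default coder can be inferred (this assumes the TypeDescriptors class from the Beam SDK; the field name "mykey" is taken from the question):
PCollection<KV<Boolean, String>> pC = p_jsonstring
    .apply("create ...", MapElements.via((String input) -> {
        try {
            Map<String, Object> mytable = new ObjectMapper()
                    .readValue(input, new TypeReference<Map<String, Object>>() {});
            // Keep the raw JSON string as the value so a default coder
            // (KvCoder of BooleanCoder and StringUtf8Coder) can be inferred.
            return KV.of((Boolean) mytable.get("mykey"), input);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }).withOutputType(TypeDescriptors.kvs(TypeDescriptors.booleans(), TypeDescriptors.strings())));
If you genuinely need Map<String, Object> values, you would have to call .setCoder(...) on the resulting PCollection with an explicit coder, but there is no stock coder for java.lang.Object, which is why keeping the value as a String (or as a Row with a Schema) is the simpler route.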

Dataflow output parameterized type to avro file

I have a pipeline that successfully outputs an Avro file as follows:
@DefaultCoder(AvroCoder.class)
class MyOutput_T_S {
    T foo;
    S bar;
    Boolean baz;
    public MyOutput_T_S() {}
}

@DefaultCoder(AvroCoder.class)
class T {
    String id;
    public T() {}
}

@DefaultCoder(AvroCoder.class)
class S {
    String id;
    public S() {}
}
...
PCollection<MyOutput_T_S> output = input.apply(myTransform);
output.apply(AvroIO.Write.to("/out").withSchema(MyOutput_T_S.class));
How can I reproduce this exact behavior except with a parameterized output MyOutput<T, S> (where T and S are both Avro code-able using reflection)?
The main issue is that Avro reflection doesn't work for parameterized types. So based on these responses:
Setting Custom Coders & Handling Parameterized types
Using Avrocoder for Custom Types with Generics
1) I think I need to write a custom CoderFactory, but I am having difficulty figuring out exactly how this works (I'm having trouble finding examples). Oddly enough, a completely naive coder factory appears to let me run the pipeline and inspect proper output using DataflowAssert:
cr.registerCoder(MyOutput.class, new CoderFactory() {
    @Override
    public Coder<?> create(List<? extends Coder<?>> componentCoders) {
        Schema schema = new Schema.Parser().parse("{\"type\":\"record\","
            + "\"name\":\"MyOutput\","
            + "\"namespace\":\"mypackage\","
            + "\"fields\":[]}");
        return AvroCoder.of(MyOutput.class, schema);
    }

    @Override
    public List<Object> getInstanceComponents(Object value) {
        MyOutput<Object, Object> myOutput = (MyOutput<Object, Object>) value;
        List components = new ArrayList();
        return components;
    }
});
While I can successfully assert against the output now, I expect this will not cut it for writing to a file. I haven't figured out how I'm supposed to use the provided componentCoders to generate the correct schema and if I try to just shove the schema of T or S into fields I get:
java.lang.IllegalArgumentException: Unable to get field id from class null
2) Assuming I figure out how to encode MyOutput, what do I pass to AvroIO.Write.withSchema? If I pass either MyOutput.class or the schema, I get type mismatch errors.
I think there are two questions (correct me if I am wrong):
How do I enable the coder registry to provide coders for various parameterizations of MyOutput<T, S>?
How do I write values of MyOutput<T, S> to a file using AvroIO.Write?
The first question is to be solved by registering a CoderFactory as in the linked question you found.
Your naive coder is probably allowing you to run the pipeline without issues because serialization is being optimized away. Certainly an Avro schema with no fields will result in those fields being dropped in a serialization+deserialization round trip.
But assuming you fill in the schema with the fields, your approach to CoderFactory#create looks right. I don't know the exact cause of the message java.lang.IllegalArgumentException: Unable to get field id from class null, but the call to AvroCoder.of(MyOutput.class, schema) should work for an appropriately assembled schema. If there is an issue with this, more details (such as the rest of the stack trace) would be helpful.
However, your override of CoderFactory#getInstanceComponents should return a list of values, one per type parameter of MyOutput. Like so:
@Override
public List<Object> getInstanceComponents(Object value) {
    MyOutput<Object, Object> myOutput = (MyOutput<Object, Object>) value;
    return ImmutableList.of(myOutput.foo, myOutput.bar);
}
The second question can be answered using some of the same support code as the first, but otherwise is independent. AvroIO.Write.withSchema always explicitly uses the provided schema. It does use AvroCoder under the hood, but this is actually an implementation detail. Providing a compatible schema is all that is necessary - such a schema will have to be composed for each value of T and S for which you want to output MyOutput<T, S>.
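As a rough sketch of how CoderFactory#create could assemble such a schema, assuming both component coders are themselves AvroCoders (Avro's SchemaBuilder is used here, and the field names foo, bar, baz mirror the class in the question):
@Override
public Coder<?> create(List<? extends Coder<?>> componentCoders) {
    // Assumes both component coders are AvroCoders, so their schemas can be reused.
    Schema fooSchema = ((AvroCoder<?>) componentCoders.get(0)).getSchema();
    Schema barSchema = ((AvroCoder<?>) componentCoders.get(1)).getSchema();
    Schema schema = SchemaBuilder.record("MyOutput").namespace("mypackage")
        .fields()
        .name("foo").type(fooSchema).noDefault()
        .name("bar").type(barSchema).noDefault()
        .name("baz").type().booleanType().noDefault()
        .endRecord();
    return AvroCoder.of(MyOutput.class, schema);
}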

Spring Boot 1.3 WebSocket JSON converter Produces Invalid JSON

After upgrading to Spring Boot 1.3 (via Grails 3.1), the JSON output is rendered incorrectly. I believe it is because of the new auto-configured WebSocket JSON converter.
For example, with previous versions of Spring Boot (via Grails 3.0), using the following code:
@MessageMapping("/chat")
@SendTo("/sub/chat")
protected String chatMessage() {
    def builder = new groovy.json.JsonBuilder()
    builder {
        type("message")
        text("foobar")
    }
    builder.toString()
}
This would produce:
{"type": "message", "text": "foobar"}
With Spring Boot 1.3 (via Grails 3.1), that web socket produces the following:
"{\"type\":\"message\",\"text\":\"foobar\"}"
This is not valid JSON. How can I get rid of this new behavior and have it render the JSON as it was before? Please let me know if you have any suggestions.
I tried overriding the new configureMessageConverters method, but it did not have any effect.
Looks like you are right. The referenced commit shows questionable autoconfiguration, IMHO.
Especially because, in the past, the converter ordering was intentionally changed so that StringMessageConverter takes precedence over MappingJackson2MessageConverter: https://github.com/spring-projects/spring-framework/commit/670c216d3838807fef46cd28cc82165f9abaeb45
For now, you can either disable that autoconfiguration:
@EnableAutoConfiguration(exclude = [WebSocketMessagingAutoConfiguration])
class Application extends GrailsAutoConfiguration { ... }
Or you can add another StringMessageConverter to the top of the configured converters (perhaps because you do want the Boot autoconfiguration behavior, since it uses the Jackson ObjectMapper bean instead of creating a new one):
@Configuration
@EnableWebSocketMessageBroker
class WebSocketConfig extends AbstractWebSocketMessageBrokerConfigurer {

    @Override
    boolean configureMessageConverters(List<MessageConverter> messageConverters) {
        messageConverters.add 0, new StringMessageConverter()
        return true
    }
    ...
}
Hope that helps.
I don't know how to do it in Grails, but in Java you now have to pass the object itself instead of an object of the String class. I believe the old behavior was actually incorrect: it was returning a string as an object, so there was no way to return a String that had JSON inside it as a plain String. So create an object with the structure you want, return it, and you should be fine. I went through the same issue when upgrading from 1.2.x to 1.3.x. I am not exactly sure what change caused this, but I think in the long run it is the correct thing to do.
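For example, a minimal Java sketch of returning a structure instead of a pre-serialized string (ChatController is a hypothetical name; the destinations mirror the question):
import java.util.LinkedHashMap;
import java.util.Map;
import org.springframework.messaging.handler.annotation.MessageMapping;
import org.springframework.messaging.handler.annotation.SendTo;
import org.springframework.stereotype.Controller;

@Controller
public class ChatController {

    @MessageMapping("/chat")
    @SendTo("/sub/chat")
    public Map<String, String> chatMessage() {
        // Return the structure itself; the Jackson message converter serializes it
        // to {"type":"message","text":"foobar"} instead of double-encoding a String.
        Map<String, String> message = new LinkedHashMap<>();
        message.put("type", "message");
        message.put("text", "foobar");
        return message;
    }
}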

How to override Grails' default (binding) conversion of parameter values with commas to String array?

I'm using Grails 2.3.7 and have a controller method which binds the request to a command class. One of the fields in the command class is a String array, as it expects multiple params with the same name to be specified - e.g. ?foo=1&foo=2&foo=3&bar=0.
This works fine most of the time. The one case where it fails is if the param value contains a comma - e.g. ?foo=val1,val2&foo=val3,val4. In this case I'm getting an array with 4 values: ["val1","val2","val3","val4"], not the intended output of ["val1,val2","val3,val4"]. URL escaping/encoding the comma does not help, and in my case the param value is surrounded by quote characters as well (foo=%22a+string+with+commas,+etc%22).
I had a similar problem with Spring 3.x which I was able to solve: How to prevent parameter binding from interpreting commas in Spring 3.0.5?. I've attempted one of the answers by adding the following method to my controller:
@InitBinder
public void initBinder(WebDataBinder binder) {
    binder.registerCustomEditor(String[].class, new StringArrayPropertyEditor(null));
}
However this didn't work.
I also tried specifying a custom conversion service, based on the comment in https://jira.grails.org/browse/GRAILS-8997.
Config/spring/resources.groovy:
beans = {
    conversionService(org.springframework.context.support.ConversionServiceFactoryBean) {
        converters = [new CustomStringToArrayConverter()]
    }
}
and
import org.springframework.core.convert.converter.Converter
import org.springframework.util.StringUtils

class CustomStringToArrayConverter implements Converter<String, String[]> {
    @Override
    public String[] convert(String source) {
        return StringUtils.delimitedListToStringArray(source, ";");
    }
}
but I couldn't get this to work either.
For now, I've come up with a workaround. My controller method has an extra line to set the troublesome field explicitly with:
commandObj.foo = params.list('foo') as String[]
I'm still open to suggestions on how to configure Grails to not split on a comma...

Resources