Output Spring-WS Generated WSDL Location - spring-ws

This seems like a simple question to me:
I have a project where I automatically generate a Spring-WS WSDL, something like this:
<sws:dynamic-wsdl id="service"
    portTypeName="Service"
    locationUri="/Service/"
    targetNamespace="http://location.com/Service/schemas/Mos">
    <sws:xsd location="classpath:/META-INF/Service.xsd"/>
</sws:dynamic-wsdl>
Is there a way, on application context startup, to output the generated address of the WSDL, including context, location, etc.? This would be handy: if our integration tests start to fail, we could check whether the location of the WSDL has changed.

As far as I know, you can find the WSDL at http://yourHost/yourServletContext/beanId.wsdl. In your case, beanId is 'service'.
Check out 3.7. Publishing the WSDL in the Spring-WS documentation for more information about this subject.
If you plan to expose your XSDs as well, the beanId.xsd format (or, in my case, the method name in the @Configuration class) will be used. For instance:
private ClassPathResource exampleXsdResource = new ClassPathResource("example.xsd");

@Bean
public SimpleXsdSchema example() {
    return new SimpleXsdSchema(exampleXsdResource);
}
This exposes an XSD at http://yourHost/yourServletContext/example.xsd.
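To answer the original question about printing the location on startup: below is a minimal sketch of one way to do it, assuming a standard web application context (the class name and log wording are mine, not from the question). It looks up every WsdlDefinition bean when the context starts and prints the context-relative path under which Spring-WS will serve it; the host and port are not known at this point, so only the path is printed.
import java.util.Map;
import javax.servlet.ServletContext;
import org.springframework.context.ApplicationListener;
import org.springframework.context.event.ContextRefreshedEvent;
import org.springframework.web.context.WebApplicationContext;
import org.springframework.ws.wsdl.WsdlDefinition;

public class WsdlLocationLogger implements ApplicationListener<ContextRefreshedEvent> {

    @Override
    public void onApplicationEvent(ContextRefreshedEvent event) {
        if (!(event.getApplicationContext() instanceof WebApplicationContext)) {
            return;
        }
        WebApplicationContext ctx = (WebApplicationContext) event.getApplicationContext();
        ServletContext servletContext = ctx.getServletContext();
        // Spring-WS exposes each WsdlDefinition bean as <beanId>.wsdl beneath the
        // MessageDispatcherServlet mapping; only the servlet-context part is known here.
        Map<String, WsdlDefinition> wsdls = ctx.getBeansOfType(WsdlDefinition.class);
        for (String beanId : wsdls.keySet()) {
            System.out.println("WSDL exposed at " + servletContext.getContextPath()
                    + "/" + beanId + ".wsdl (relative to the MessageDispatcherServlet mapping)");
        }
    }
}
Register it as a bean in the same context as the dynamic-wsdl definition and the line is printed once on startup.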

Related

How to override logging in dataflow with my logback.xml file?

We are trying to use the logback.xml we use in GCP Cloud Run, which has amazing filtering features. Our logback.xml contains this for Cloud Run:
<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder">
        <layout class="com.orderlyhealth.api.logging.logback.GCPCloudLoggingJSONLayout">
            <pattern>${CONSOLE_PATTERN}</pattern>
        </layout>
    </encoder>
</appender>
Our GCPCloudLoggingJSONLayout does a great job of setting all the things we need, like clientId, customerRequestId, etc., and we can filter across many microservices by customer or by customer request. We lose this in Dataflow at the moment. We tried adding logback.xml to src/main/resources and deploying; the project seems to use it in the shell, like so:
{"message":"[main][-][:] o.a.b.r.d.DataflowRunner Template successfully created.\n",
"logger":"org.apache.beam.runners.dataflow.DataflowRunner",
"transactionId":null,"socket":null,"clntSocket":null,
"version":null,
"timestamp":{"seconds":1619694798,"nanos":4000000},
"thread":"main",
"severity":"INFO",
"instanceId":null,
"headers":{},
"messageInfo":{"message":"Message short enough. Displayed top level"}
}
Currently we see this instead, which is not nearly as useful for tracing a customer request through our systems.
Thanks for any ideas on modifying Dataflow logging.
I don't think you can change how Dataflow logs to Cloud logging.
Instead, you can change how and what you log and let Dataflow pass it through to Cloud Logging; see Logging pipeline messages.
Or you can use the Cloud Logging client libraries in your pipeline directly: https://cloud.google.com/logging/docs/reference/libraries.
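As a rough illustration of the first suggestion (the class and field names below are illustrative, not from the original post): anything logged through SLF4J inside a DoFn is forwarded by Dataflow to Cloud Logging, so tracing details such as a customer request id have to be baked into the message text itself.
import org.apache.beam.sdk.transforms.DoFn;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class TraceableDoFn extends DoFn<String, String> {

    private static final Logger LOG = LoggerFactory.getLogger(TraceableDoFn.class);

    @ProcessElement
    public void processElement(ProcessContext c) {
        // Dataflow controls the overall log format, so put the tracing details
        // (customer id, request id, ...) into the message text itself.
        LOG.info("customerRequestId={} processing element", c.element());
        c.output(c.element());
    }
}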
Please take a look at How to override Google DataFlow logging with logback? for the latest version of this answer
I've copied the current answer here to make it easier for folks who want to look:
Dataflow relies on java.util.logging (aka JUL) as the logging backend for SLF4J and adds various bridges to ensure that logs from other libraries are output as well. With this kind of setup, we are limited to adding any additional details to the log message itself.
This also applies to any runner executing a portable job, since the container with the SDK harness has a similar logging configuration; Dataflow Runner V2, for example.
To do this, we want to create a custom formatter to apply to the root JUL logger. For example:
import java.util.logging.LogRecord;
import java.util.logging.SimpleFormatter;

public class CustomFormatter extends SimpleFormatter {
    @Override
    public String formatMessage(LogRecord record) {
        // Implement whatever logic is needed to add details to the
        // message portion of the log statement.
        return super.formatMessage(record);
    }
}
And then, during start-up of the worker, we need to update the root logger to use this formatter. We can achieve this using a JvmInitializer, implementing the beforeProcessing method like so:
import com.google.auto.service.AutoService;
import java.util.logging.Handler;
import java.util.logging.LogManager;
import java.util.logging.Logger;
import org.apache.beam.sdk.harness.JvmInitializer;
import org.apache.beam.sdk.options.PipelineOptions;

@AutoService(JvmInitializer.class)
public class LoggerInitializer implements JvmInitializer {

    @Override
    public void beforeProcessing(PipelineOptions options) {
        LogManager logManager = LogManager.getLogManager();
        Logger rootLogger = logManager.getLogger("");
        for (Handler handler : rootLogger.getHandlers()) {
            handler.setFormatter(new CustomFormatter());
        }
    }
}

Generate Swagger 2.0 Specification from JAX-RS 1.0 annotation via Java Main class

I have a legacy project that uses JAX-RS 1.x and Ant builds.
I would like to generate a Swagger specification by scanning the annotations, but I don't want to require people to have a running instance of my web application. Instead, I would like to do it via an Ant task that (perhaps) just invokes a Java main method, which runs the scanner and writes the specification to an output directory.
I have found lots of documentation on how to generate a Swagger Spec in the context of a running web application, but NOT from a Java main application running context.
I do recognize that the URI of the endpoints is not well-defined outside the context of a running web app, but that doesn't concern us because we are mainly interested in generating documentation.
Here's some code that works, except that I can't seem to get the info into the actual generated JSON:
final Info info = new Info();
info.setTitle(API_TITLE);
info.setDescription(API_DESCRIPTION);
info.setVersion(API_VERSION);
BeanConfig beanConfig = new BeanConfig();
// TODO Some of these do not seem to end up in the JSON file:
beanConfig.setTitle(API_TITLE);
beanConfig.setDescription(API_DESCRIPTION);
beanConfig.setVersion(API_VERSION);
beanConfig.setSchemes(new String[]{"https"});
beanConfig.setHost(HOST);
beanConfig.setBasePath(BASE_PATH);
beanConfig.setResourcePackage(RESOURCE_PACKAGE);
beanConfig.setScan(true);
beanConfig.setInfo(info); // TODO - This has no effect
Swagger swagger = beanConfig.getSwagger();
swagger.setInfo(info); // TODO - This has no effect
ObjectMapper objectMapper = new ObjectMapper();
File outputFile = new File(SWAGGER_OUTPUT_FILE_PATH + SWAGGER_OUTPUT_FILENAME);
// Force the file and directory to be created if they don't exist; otherwise an
// error will be thrown when we pass the file to ObjectMapper.
outputFile.getParentFile().mkdirs();
outputFile.createNewFile();
objectMapper.writerWithDefaultPrettyPrinter().writeValue(outputFile, swagger);

F# WsdlService type provider proxy

I am following the MSDN tutorial for the WsdlService type provider found here. When I run it at home, it works as expected. When I write the same code at work, I am getting a design time exception:
The type provider
'Microsoft.FSharp.Data.TypeProviders.DesignTime.DataProviders'
reported an error: Error: Cannot obtain Metadata from
http://msrmaps.com/TerraService2.asmx?WSDL
Work does use a proxy, and I have to alter the web.config to use a default proxy when consuming a WSDL from a C# project in VS2012. When I looked at the parameters for the type provider, I didn't see any mention of a proxy. Does anyone have any suggestions?
Thanks in advance.
Expanding on Tomas's answer...
This is a common pattern in the built-in type providers today:
1. At design time, if you need any kind of non-default configuration (e.g. credentials, proxy config, ...), the type provider will not work. You need to download some schema file locally (e.g. a DB schema file, an OData $metadata file, a WSDL schema file, ...) and point the type provider at that, usually by passing LocalSchemaFile="...", ForceUpdate=false in the static constructor. This feeds the TP all the info it needs to generate the types.
2. Then, you set all of your non-default config programmatically on the objects that are created for you, so that everything works at runtime.
Here's another example of essentially the same issue, where this pattern is used to set credentials.
In the case of WSDL, below is the programmatic approach to set the proxy after-the-fact (i.e. step #2). Cribbed entirely from this answer, which is exactly what you want, in C#. You'll probably need to play with this a bit to make it work for you.
#r "System.ServiceModel.dll"
#r "FSharp.Data.TypeProviders.dll"
open Microsoft.FSharp.Data.TypeProviders
type Terra = WsdlService< ServiceUri="N/A", ForceUpdate = false,
LocalSchemaFile = #"C:\temp\terra.wsdlschema">
let terra = Terra.GetTerraServiceSoap()
let binding = terra.DataContext.Endpoint.Binding :?> System.ServiceModel.BasicHttpBinding
binding.ProxyAddress <- System.Uri("http://127.0.0.1:8888")
binding.BypassProxyOnLocal <- false
binding.UseDefaultWebProxy <- false
terra.GetPlaceList("New York", 1, false)
I'm not connecting through a proxy, so I have no way of actually testing this, but I think you should be able to use a local WSDL file to load the type provider in the designer.
Try downloading the WSDL schema (from http://msrmaps.com/TerraService2.asmx?WSDL) and saving that to a local file (such as C:\temp\terra.wsdlschema). Then you should be able to write:
#r "System.ServiceModel.dll"
#r "FSharp.Data.TypeProviders.dll"
open Microsoft.FSharp.Data.TypeProviders
type Terra = WsdlService< ServiceUri="N/A", ForceUpdate = false,
LocalSchemaFile = #"C:\temp\terra.wsdlschema">
let terra = Terra.GetTerraServiceSoap()
terra.GetPlaceList("New York", 1, false)
The ServiceUri parameter seems to be required, but it should be ignored if you add ForceUpdate=false. It should only require the cached WSDL file. I'm not entirely sure how to configure the runtime to use your config file setting, but I'm sure this can be done in some way (either it just works or you can pass something to the GetTerraServiceSoap method).
Sadly, the type provider does not statically know (at design time) where to look for the config file, so it ignores it.

Accessing Enterprise Beans through a Remote Interface with @EJB

I have an interface like this:
@Remote
public interface ClientDataAccessRemote
And the EJB implements it:
@Stateless
public class ClientDataAccess implements ClientDataAccessRemote
And in the remote client I can access the EJB with this:
@EJB
private static ClientDataAccessRemote clientDataAccess;
This is everything I did, and it works. The client and the EJB reside on the same server. Would it still work if they were separated? And how would the container find the EJB with that interface? I implemented this with NetBeans and I didn't have to specify any locations or anything like that. How does this work?
Unfortunately, the @EJB annotation works for local (single-JVM) injection only. For separate hosts you need to fall back to a plain JNDI lookup.
AFAIK there are some proprietary non-portable solutions to perform remote dependency injections, like for WebLogic server (here), but I wouldn't go that way.
JNDI lookup works but is overly complicated and quite ugly:
you need to know the server vendor and add its client libraries to your app's dependencies,
you pollute the application with:
cryptic vendor-specific URI formats,
the vendor-specific naming service port number (often defaults to 1099, but who knows for sure...),
the vendor-specific JNDI-name pattern.
Here is an example lookup for a bean hosted on a remote JBoss 4.x instance:
Properties properties = new Properties();
properties.put(Context.INITIAL_CONTEXT_FACTORY,
        "org.jnp.interfaces.NamingContextFactory");
properties.put(Context.URL_PKG_PREFIXES,
        "org.jboss.naming:org.jnp.interfaces");
properties.setProperty(Context.PROVIDER_URL, "localhost:1099");

InitialContext context = null;
ClientDataAccessRemote cl = null;
try {
    context = new InitialContext(properties);
    cl = (ClientDataAccessRemote) context.lookup("ClientDataAccess/remote");
} catch (NamingException e) {
    // TODO Auto-generated catch block
    e.printStackTrace();
}
If your EJB is part of an EAR, you need to prefix the bean name with the name of the EAR:
cl = (ClientDataAccessRemote) context.lookup("MyEAR/ClientDataAccess/remote");
The above example is JBoss-specific; I'm not even sure whether it will work with the JBoss 5.x series without modifications.
Apparently the EJB 3.1 specification brings some unification to JNDI naming, but I haven't had the pleasure of working with it yet.
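For reference, EJB 3.1 defines portable names of the form java:global/<app-name>/<module-name>/<bean-name>!<fully-qualified-interface>. A small sketch of what the lookup above could then look like (the application, module, and package names here are made up):
// Portable (EJB 3.1) JNDI name; "MyEAR", "MyEjbModule" and the package are hypothetical.
ClientDataAccessRemote cl = (ClientDataAccessRemote) new InitialContext().lookup(
        "java:global/MyEAR/MyEjbModule/ClientDataAccess!com.example.ClientDataAccessRemote");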
If the JNDI example scared you a little, maybe a better solution would be exposing your EJB as a web service (SOAP or REST style).
That brings problems of its own, but at least it is portable.
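A minimal sketch of that alternative with JAX-WS, reusing the bean from the question (the wrapper class, service name, and ping operation are placeholders of mine): annotating a stateless bean with @WebService makes the container publish it as a SOAP endpoint, so remote clients only need the generated WSDL.
import javax.ejb.Stateless;
import javax.jws.WebMethod;
import javax.jws.WebService;

@Stateless
@WebService(serviceName = "ClientDataAccessService")
public class ClientDataAccessWs {

    // The container typically publishes this at
    // http://host:port/<module>/ClientDataAccessService?wsdl
    @WebMethod
    public String ping() {
        return "ok";
    }
}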

How do I deal with WS-Security when all I have is a wsdl?

I'm trying to develop a stand-alone client app that uses web services in a GlassFish container (Metro). About all I have to work from is a WSDL for the services I'm trying to use. The WSDL is rife with all kinds of 'wsp:Policy' tags. It looks like IssuedToken, Trust13, and encryption are all utilized.
So I generated some code from netbeans and JAX-WS. Everything went well, but when trying to run the client I get:
'WST0029:STS location could not be obtained from either IssuedToken or from client configuration for accessing the service http://localhost:8080/ ....'
That's when it occurred to me that I know nothing about WSS. It doesn't look like any code was generated to deal with security. So, I'll have to go from scratch.
So where to start? Books? Tutorials?
TIA
Metro applies the policy at runtime from either the WSDL or the wsit-client.xml config file. That's why no code related to policies is generated. According to this post, it is not possible to do it programmatically at the moment.
This tutorial explains pretty well some of the things you can do with WSS, and though not everything there probably applies in this case, it's still a good read.
The simplest way I've found of generating a client with WSS support is by using the wsimport script from Metro:
cd metro/bin/
mkdir src target
./wsimport.sh -s src -d target -extension -Xendorsed -verbose YourService.wsdl
Then install Metro into your application server (copy the libs to the correct places or run the ant script):
ant -f metro-on-glassfish.xml
Then put your local WSDL file in your classpath (e.g. your resource folder), so Metro can get it at runtime to apply the policies from your generated YourService class:
private static final URL YOURSERVICE_WSDL_LOCATION;

// This is enough; you don't need the wsdlLocation attribute
// on the @WebServiceClient annotation if you have this.
static {
    YOURSERVICE_WSDL_LOCATION =
            CustomerService.class.getClassLoader().getResource("YourService.wsdl");
}

public YourService() {
    super(YOURSERVICE_WSDL_LOCATION,
            new QName("http://tempuri.org/", "YourService"));
}
And if you want WS-Addressing you might need to add the feature manually to your binding method (Metro has never generated it for me, so I always have to add it myself).
@WebEndpoint(name = "WSHttpBinding_IYourService")
public IYourService getWSHttpBindingIYourService() {
    WebServiceFeature wsAddressing = new AddressingFeature(true);
    IYourService service =
            super.getPort(new QName("http://xmlns.example.com/services/Your",
                    "WSHttpBinding_IYourService"), IYourService.class,
                    wsAddressing);
    return service;
}
