I'm playing around with the idea of a very large Vaadin application, which consists of a skeleton (providing the UI framework) and hundreds of functional units (providing the specific Vaadin views).
The main architectural point is to keep the skeleton agnostic about the functional units, so not a single Java dependency on a functional unit should be introduced into the skeleton. Every single functional unit has to live in its own distinct JAR.
The Vaadin container is started by the skeleton module. It is perfectly possible to build a navigation with all the necessary routes to the functional units (/routeToView001.../routeToView999) without having further details about them.
However, I don't see how the running Vaadin Spring Boot container would load the Java classes from the independent JARs when navigation occurs in the browser. Practical attempts have failed. Any ideas?
By default, the Spring Boot integration looks for @Route classes within the Java package that contains the @SpringBootApplication class. This can be further configured by passing package names to the @EnableVaadin annotation.
I haven't tested this in practice, but it might be possible to have multiple @EnableVaadin annotations, so that there would be one in each module and, through that, multiple additional locations to look for @Route classes in. In that case, the @EnableVaadin class in each module would also have to register itself in the same way as any other @Configuration, i.e. by using the regular Spring Boot auto-configuration mechanism.
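Such a per-module configuration might look roughly like this (equally untested; the package and class names are assumptions):

// Untested sketch of a per-module configuration; names are illustrative only.
package com.example.unit001;

import com.vaadin.flow.spring.annotation.EnableVaadin;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableVaadin("com.example.unit001.views") // additional package to scan for @Route classes
public class Unit001AutoConfiguration {
}

The class would then be listed in the module's META-INF/spring.factories under org.springframework.boot.autoconfigure.EnableAutoConfiguration, so Spring Boot picks it up without the skeleton referencing the module.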
Another alternative is to register route classes manually in the application's route registry (accessed using ApplicationRouteRegistry.getInstance(new VaadinServletContext(servletContext))). In that case, you might still need at least a dummy @Route in the base module, since Vaadin might not automatically enable itself in a Spring Boot environment unless at least one @Route class is discovered in the regular way.
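A rough sketch of that manual registration (View001 and the way you obtain the ServletContext are assumptions):

// Sketch only: registering a view class from an independent JAR by hand.
import java.util.Collections;
import javax.servlet.ServletContext;

import com.vaadin.flow.router.RouteConfiguration;
import com.vaadin.flow.server.VaadinServletContext;
import com.vaadin.flow.server.startup.ApplicationRouteRegistry;

public class FunctionalUnitRouteRegistrar {

    public void registerRoutes(ServletContext servletContext) {
        ApplicationRouteRegistry registry = ApplicationRouteRegistry
                .getInstance(new VaadinServletContext(servletContext));

        // View001 would be loaded from one of the functional-unit JARs.
        RouteConfiguration.forRegistry(registry)
                .setRoute("routeToView001", View001.class, Collections.emptyList());
    }
}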
I use the Vaadin @JavaScript annotation to load JavaScript files for my application. It works great, but I would need different JavaScript loaded for different builds.
The idea is to have something like this:
@JavaScript("url.from.properties.or.pom")
So for DEV I would get @JavaScript("https://example.com/test/js/embed.js") and for PROD @JavaScript("https://example.com/production/js/embed.js"). The script URL value should be taken from application.properties or pom.xml.
I cannot figure out how to do it. I use Vaadin 8 with Maven and Spring Boot. Thank you in advance.
There's no direct support for what you want to do, but I can come up with three different solutions that you could consider.
Register a DependencyFilter that dynamically rewrites the dependency URL from the annotation depending on the situation (see the sketch after this list).
Create separate Java classes for each case (with all the actual functionality in a shared super class). You can then have either runtime logic or use e.g. different Spring configurations to choose exactly which class to use.
Remove the @JavaScript annotation and instead call JavaScript.eval from onAttach to somehow dynamically inject the script you want.
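For the first option, a minimal Vaadin 8 sketch, assuming the real URL is resolved from a property named embed.script.url (the property name and the placeholder check are assumptions):

// Sketch: swap the placeholder URL from the annotation for the environment-specific one.
// Register this listener via META-INF/services/com.vaadin.server.VaadinServiceInitListener.
import java.util.ArrayList;
import java.util.List;

import com.vaadin.server.ServiceInitEvent;
import com.vaadin.server.VaadinServiceInitListener;
import com.vaadin.ui.Dependency;

public class ScriptUrlInitListener implements VaadinServiceInitListener {

    @Override
    public void serviceInit(ServiceInitEvent event) {
        String scriptUrl = System.getProperty("embed.script.url",
                "https://example.com/test/js/embed.js");

        event.addDependencyFilter((dependencies, filterContext) -> {
            List<Dependency> filtered = new ArrayList<>();
            for (Dependency dependency : dependencies) {
                if (dependency.getType() == Dependency.Type.JAVASCRIPT
                        && dependency.getUrl().contains("url.from.properties.or.pom")) {
                    // Replace the placeholder dependency with the real script URL.
                    filtered.add(new Dependency(Dependency.Type.JAVASCRIPT, scriptUrl));
                } else {
                    filtered.add(dependency);
                }
            }
            return filtered;
        });
    }
}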
I'm creating a web API using ASP.NET Core, and I'm using SimpleInjector as my DI framework. I understand the basics of how to use SI with ASP.NET Core; my problem is more an architectural one.
I have a project with integration tests for the API project, in order to test the raw API endpoints and responses. Naturally, the test server (set up using Microsoft.AspNetCore.TestHost) should use the API project's real Startup class.
The problem lies in where to register mocks for the controllers' dependencies, because I don't want all the production implementations registered when testing: firstly, most of them are, of course, dependencies used by the production implementations of the controller dependencies I'll be mocking in the first place; and secondly, if I update my controllers and forget to register mocks for the new dependencies, I want my code to fail (container verification) instead of silently using production dependencies that are present in the container.
Thus, the dependencies can't be registered in the Startup class. That's fine by me – I think I'd rather keep the composition root in a separate assembly referencing all other assemblies, anyway. AFAICS the ASP.NET Core project would need to reference this project, which exposes a single method that returns a pre-registered container that can be used in the Startup class, where it's needed to register e.g. the controller activator (and will undergo final validation).
But this raises the question: how do I get the container – already registered with all my application components (whether production implementations from the composition root project, or mocks from the integration test project) – into my Startup class?
My initial solution is to simply have a static property on the Startup class called e.g. Container, and assign that before using WebHostBuilder. This seems "pragmatically robust": the application will fail fast (NullReferenceException) if it's not set before the Startup class is run, and the property is only used during setup, so I don't really need to guard against it being set multiple times or being set to null or any such thing – if it's assigned before startup, it works; if not, it won't start.
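For concreteness, a sketch of that wiring under the ASP.NET Core + Simple Injector integration; CompositionRoot.BuildContainer() is a hypothetical method exposed by the composition-root assembly (an integration test would assign a container built by its own test composition root instead):

// Sketch only; assumes the SimpleInjector.Integration.AspNetCore.Mvc package.
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Mvc.Controllers;
using Microsoft.Extensions.DependencyInjection;
using SimpleInjector;
using SimpleInjector.Integration.AspNetCore.Mvc;

public class Startup
{
    // Assigned by whichever host builds the container, before Startup runs.
    public static Container Container { get; set; }

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();
        // Throws at startup if Container was never assigned.
        services.AddSingleton<IControllerActivator>(
            new SimpleInjectorControllerActivator(Container));
    }

    public void Configure(IApplicationBuilder app)
    {
        Container.Verify(); // final validation of whatever was registered
        app.UseMvc();
    }
}

public static class Program
{
    public static void Main(string[] args)
    {
        // Production entry point; the test project would instead assign a container
        // with mocks before starting its TestServer with the same Startup class.
        Startup.Container = CompositionRoot.BuildContainer();

        new WebHostBuilder()
            .UseKestrel()
            .UseStartup<Startup>()
            .Build()
            .Run();
    }
}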
Does this seem like a solid solution, or am I oblivious to any obvious ways this will come back to bite me later on? Are there any better solutions?
I have a rather monolithic Grails 2 application that I am attempting to upgrade to Grails 3 (specifically 3.2.7) and refactor into a set of plugins. The current app uses various services on Amazon AWS, and I want to refactor it so I can more easily switch to Microsoft Azure or OpenStack.
At present, the big app uses various plugins including Spring Security UI, and the app overrides some of the GSP views from the plugin with its own. In the refactored scenario I have the main app (which will implement the AWS-specific bits) depending on a "core" plugin (with the cloud-agnostic functions), which in turn depends on spring-security-ui. The problem I'm having is that when I put my custom auth.gsp view in the "core" plugin rather than in the top-level app, it no longer overrides the s2ui version of the same view. If I copy the "core" plugin's auth.gsp to the same location in the top-level app, it overrides correctly.
In general, if I have app depends-on plugin1 depends-on plugin2, is there a way to ensure that when I run the app, views provided by plugin1 take precedence over the same views provided by plugin2?
The core plugin will need to specify that it should be loaded after Spring Security UI. You can do that with:
def loadAfter = ['springSecurityUi']
This is documented here: http://docs.grails.org/latest/guide/plugins.html#understandingPluginLoadOrder
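In a Grails 3 plugin that setting lives in the plugin descriptor class; a minimal sketch (the class name simply follows from a plugin called "core"):

// Sketch of the core plugin's descriptor; adjust names to your actual plugin.
import grails.plugins.Plugin

class CoreGrailsPlugin extends Plugin {

    def grailsVersion = "3.2.7 > *"

    // Load after Spring Security UI so this plugin's GSPs (e.g. auth.gsp)
    // take precedence over the ones shipped with that plugin.
    def loadAfter = ['springSecurityUi']
}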
I have an ASP.NET MVC application using Ninject 3 (installed via NuGet). The solution contains:
an MVC project (composition root);
a Domain Model project;
a Data Layer project;
a scheduler project (running scheduled jobs within a windows service and holding an alternative composition root);
some other projects.
I'm following the approach of having many small modules, spread across the projects, that define the bindings. The two composition roots use exactly the same bindings.
I cannot figure out how to configure scope for the modules within the class libraries. For example, given these bindings:
Bind<IDomainService1>()
.To<Service1Impl>()
.InSingletonScope(); //This should always be a singleton
Bind<IDomainService2>()
.To<Service2Impl>(); //No scope specified
I would always want a single instance of Service1Impl, whereas the scope for Service2Impl should depend on the composition root used. The MVC project should use InRequestScope() for Service2Impl (and for all other bindings with unspecified scope). The scheduler project, which does not run within an HTTP context, should use InThreadScope().
Is this approach correct? If yes, what is the right way of configuring this behaviour?
In Ninject, not specifying the scope means InTransientScope().
Your choices are to either duplicate the bindings or create a custom InScope() scoping rule for the binding.
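For the second of those options, a rough sketch of a shared module whose default scope is supplied by each composition root; the type names are taken from the question or invented, and the HttpContext-based callback only approximates what InRequestScope() really does:

// Sketch only: one module, scope decided by the composition root that loads it.
using System;
using Ninject;
using Ninject.Activation;
using Ninject.Modules;

public class DomainModule : NinjectModule
{
    private readonly Func<IContext, object> defaultScope;

    public DomainModule(Func<IContext, object> defaultScope)
    {
        this.defaultScope = defaultScope;
    }

    public override void Load()
    {
        Bind<IDomainService1>().To<Service1Impl>().InSingletonScope(); // always a singleton
        Bind<IDomainService2>().To<Service2Impl>().InScope(defaultScope); // root decides
    }
}

// MVC composition root (roughly per-request; InRequestScope() is the real mechanism):
//   var kernel = new StandardKernel(new DomainModule(ctx => HttpContext.Current));
// Scheduler composition root (the same callback InThreadScope() uses):
//   var kernel = new StandardKernel(new DomainModule(ctx => Thread.CurrentThread));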
The cleanest solution (especially given that MVC is already in play) is for you to create a plugin that slots into the InRequestScope() mechanism.
There is a CreateScope() method, currently with minimal documentation in the ninject.extensions.namedscope README, which is used like this. It requires you to select 'Include Prerelease' in NuGet. (And I should be writing a wiki article on it, but I have too many other things on my plate...)
I'm writing a Grails app which I'd like 3rd parties to augment at runtime. Ideally they would be able to add a JAR/WAR to the webapp directory which contains new domain, controller and service classes, new views, and other content.
Is there a simple way to do this within Grails? Would it be simplest to create a startup script which copies the new classes etc. into the relevant directories and then updates grails.xml and web.xml?
You will be able to do this in version 2 of Grails, in which plugins will also be OSGi plugins: http://jira.codehaus.org/browse/GRAILS/fixforversion/15421
It seems that Grails plugins will actually fit this quite well: http://www.grails.org/Understanding+Plugins
A plugin can do just about anything... One thing a plugin cannot do, though, is modify the web-app/WEB-INF/web.xml or web-app/WEB-INF/applicationContext.xml files. A plugin can participate in web.xml generation, but not modify the file or provide a replacement. A plugin can NEVER change the applicationContext.xml file, but it can provide runtime bean definitions.
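As an illustration of that last point, a plugin descriptor can contribute bean definitions at runtime through doWithSpring; a minimal sketch with invented names:

// Grails 2-style plugin descriptor; class and bean names are illustrative only.
class ContentModuleGrailsPlugin {

    def version = "0.1"
    def grailsVersion = "2.0 > *"

    // Contributes Spring beans at runtime instead of touching applicationContext.xml.
    def doWithSpring = {
        reportService(com.example.content.ReportService) {
            grailsApplication = ref('grailsApplication')
        }
    }
}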