I've been using Spring.NET with XML configuration for some time, and I just saw that the Spring team released CodeConfig a month ago.
What I like about the XML config is that if I have a problem on the live server, I can easily enable specific debugging settings or disable a specific component simply by changing the XML configuration.
What is the advantage of using a code configuration instead of an XML configuration, other than compile-time checking?
With code config, possible benefits you could get are:
Better refactoring support, e.g. renaming an injected property
More compact configuration, compared to XML
Developers can use code, in which they are often more at home than in XML
A benefit of the last point is also that developers new to the framework face a significantly less steep learning curve than with the XML config.
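For illustration, here is a minimal sketch of what such a configuration class can look like (class and interface names are made up; as far as I recall, CodeConfig expects the [Definition] methods to be virtual so the container can proxy them):

[Configuration]
public class AppConfig
{
    [Definition]
    public virtual IMovieFinder MovieFinder()
    {
        return new ColonDelimitedMovieFinder();
    }

    [Definition]
    public virtual MovieLister MovieLister()
    {
        // A typed method call instead of a string reference --
        // this is what makes renames refactor-safe.
        return new MovieLister(MovieFinder());
    }
}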
From the docs:
While there are several positive aspects to expressing configuration metadata in XML files, there are also many problems with this approach, including the verbosity of XML and its heavy dependence on string literals, which are both prone to typing errors and unusually resistant to most modern refactoring tools in use today. The CodeConfig approach removes these problems by providing a type-safe, code-based approach to dependency injection. It keeps the configuration metadata external to your class so your class can be a POCO, free of any DI-related annotations.
Just to highlight one thing: you can mix and match configuration styles. From within a CodeConfig class you can refer to XML config files using the [ImportResource] attribute (see here), and in the XML you can use the namespace (see here).
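To sketch what that can look like (the resource URI, file name and types are just placeholders):

[Configuration]
[ImportResource("assembly://MyApp/MyApp.Config/legacy-objects.xml")]
public class MixedConfig
{
    [Definition]
    public virtual INotificationService NotificationService()
    {
        // Objects defined in legacy-objects.xml end up in the same container
        // and can be wired together with the code-defined ones.
        return new EmailNotificationService();
    }
}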
Cheers,
Mark
[Disclaimer: I'm a long-time desktop developer slowly learning web and Blazor, so this might be a noob question.]
How come, when you try to find the best practice for doing localization in Blazor, the official MS docs (https://learn.microsoft.com/en-us/aspnet/core/blazor/globalization-localization?view=aspnetcore-5.0&pivots=webassembly) and various blogs tell you to do the following:
Add NuGet package: Microsoft.Extensions.Localization
Register localization: builder.Services.AddLocalization();
Add your .resx files
Inject an IStringLocalizer (@inject IStringLocalizer Loc)
And finally use the following in your Razor pages: @Loc["Greeting"]
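Put together, the documented approach looks roughly like this (a minimal sketch; the component, resource and key names are placeholders):

// Program.cs (once, at startup)
builder.Services.AddLocalization();

// Pages/Greeting.razor, with Greeting.resx / Greeting.da.resx beside it
@inject IStringLocalizer<Greeting> Loc
<h1>@Loc["Greeting"]</h1>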
Sure, the above works, but to a desktop developer this feels like a massive step back in quality and "refactor-safeness": the new way uses "magic strings" to reference the translations.
I've tested the "old way" on a Blazor page, which is just:
Adding a MyResource.resx
Let it use the custom tool "PublicResXFileCodeGenerator" to make the .designer file
Simply reference the translation using MyResource.MyTranslationKey;
It works, it is refactor-safe, and it needs no injection or NuGet packages... it just works. Yet despite that, it is not the recommended way. My question is: why not? What is the drawback? (All the blogs and documentation fail to say why the new way is better.)
I think there are a number of disadvantages to using PublicResXFileCodeGenerator, which may have led to the current recommendations on how to support i18n capabilities in [Blazor] apps.
Note that this is just a list of reasons I personally came up with as possible causes for the current recommendations:
A: Visual Studio exclusiveness
The way PublicResXFileCodeGenerator generates files seems to be Visual Studio exclusive, at least from my perception over the last couple of years. Today's teams tend to use a variety of IDEs/editors to build software (e.g. VS, VS Code, Rider, WebStorm, etc.).
Using IStringLocalizer works with every editor, even Notepad or Vim.
B: no default fallback
With the recommended way of accessing a translation there is always a useful fallback, which is provided right in the markup. That is not the case when using the generated types to access translation units.
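For example (hypothetical key): if "PageTitle" has no entry in any .resx, IStringLocalizer simply renders the key text instead of failing:

@inject IStringLocalizer<DemoPage> L

<!-- Renders the literal "PageTitle" as a fallback when no translation is found. -->
<h1>@L["PageTitle"]</h1>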
C: no built-in support for interpolation
Using IStringLocalizer, there is a built-in, lightweight and formalized way of using strings with placeholders. It even encourages using such strings instead of manually concatenating values, which is considered bad practice when translating software.
DO:

@inject IStringLocalizer<DemoPage> L

<h1>@L["Greetings, {0}", userName]</h1> <!-- Greetings, Arthur -->

@code {
    string userName = "Arthur";
}

DON'T:

<h1>@DemoPageRessources.Greeting @userName</h1> <!-- Greetings Arthur -->

@code {
    string userName = "Arthur";
}
This dictates the order of strings, which might be OK for one language but not for another. Achieving the same with the generated type is a bit more verbose and, I guess, may even lead to runtime exceptions when there is no actual translation.
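For comparison, a correctly ordered version using the generated type would look roughly like this (my sketch, assuming the resource value itself contains the {0} placeholder; if the Greeting entry is missing, the property returns null and string.Format throws at runtime):

<h1>@string.Format(DemoPageRessources.Greeting, userName)</h1>

@code {
    string userName = "Arthur";
}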
I would like to define my objects in a JSON file, and then instantiate them using Typhoon. Is this currently possible with Typhoon? I've downloaded the code from github, and looked through the code and docs, but I don't see a way.
Thanks in advance!
Since Typhoon 2.0 we only support the native format (recommended) along with auto-wiring macros. The main benefits are:
IDE refactoring and code-completion works without any additional plugins
No "magic strings" when wiring by reference.
Components can be resolved using the assembly interface. Since version 2.0 this includes supplying runtime arguments along with static dependencies.
In version 1.x we supported XML along with the above styles; however, it was not at all a popular feature. The main (valid) criticism was that XML doesn't support the above benefits of the native style. This, along with some maintenance overhead, led us to decide to discontinue support in version 2.0.
There was one benefit - the ability to define assemblies at runtime. The closest thing that we have at the moment is Typhoon Config, which allows defining configurations in a text file.
Proceeding with JSON:
It would be quite simple to define a JSON parser.
Create a class similar to v1.8.2's XML parser.
Register the components from the parser, either manually or by creating a TyphoonComponentFactory subclass.
Unless you have a strong reason for using JSON, we recommend the native style.
I understand how to implement a StructureMap registry, my question concerns the fact that every project that contains a StructureMap registry requires a static reference to the StructureMap assembly. Is there a best practice for how to structure the configuration for a large number of projects (30+) without forcing each project to take this dependency?
The alternative, I suppose, would be to create a bootstrapper assembly that could be referenced by the host process. The bootstrapper would perform all wire-up. In this scenario, the bootstrap assembly, instead, would have references to all of the projects. This has the upside of centralizing the reference to StructureMap so that all of the projects are unaware of StructureMap.
Using XML-based configuration is not an option for me.
Are there any other options for configuration that minimize the number of static references the projects in the solution must take? I'm guessing that there isn't, but thought I'd solicit some other opinions.
Technically, you only need a single project to reference the container framework, and that is the top-level application project. It references all the other projects and specifies the configuration of the components.
This puts the entire graph configuration out of the hands of each project, opting instead to define graphs only where they are used. This gives each application the complete freedom to configure components, rather than assuming the components will be used in the same way every time (as is implied by the registries which are inherent to each project).
An aside that may or may not be useful: in quantum physics, when we observe a particle, we collapse it from every possible state into a particular one. Frameworks are similar, in that they don't exist in a single state until they are observed, which here means "put to use in an application." This frames the application as the observer, which is the context in which the framework collapses into a single form.
Now, I generally wouldn't want the application to be responsible both for being a running application and for configuring that runtime. For this reason, I tend to have a Composition project which references the other projects as well as the container framework. The actual application project can then reference the Composition project. This externalizes the registries from each project, including the application project, producing a cohesive assembly whose sole purpose is to define the composition of a particular application.
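A rough sketch of that layout, with hypothetical project and type names (StructureMap registration API as I remember it; adjust to your version):

// Composition project -- the only project that references StructureMap.
using StructureMap;
using MyCompany.Orders;        // domain projects, free of any StructureMap reference
using MyCompany.Persistence;

public static class CompositionRoot
{
    public static IContainer Build()
    {
        return new Container(cfg =>
        {
            cfg.For<IOrderRepository>().Use<SqlOrderRepository>();
            cfg.For<IOrderService>().Use<OrderService>();
        });
    }
}

// Application project -- references only the Composition project:
// var container = CompositionRoot.Build();
// var service = container.GetInstance<IOrderService>();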
When building ASP.NET projects there is a certain amount of boilerplate, or plumbing that needs to be done, which is often identical across projects. This is especially the case with MVC and ALT.NET approaches. [I'm thinking of things such as: IoC, ORM, Solution structure (projects), Session Management, User Management, I18n etc.]
I would like to know what approach you find best for 'reusing' this plumbing across projects?
Have a 'master solution' which you duplicate and rename somehow? (I'm using this to a degree at the moment, but it's fairly messy. I'd be interested in how people do this 'better'.)
Mainly rely on Shared Library projects? (I find this appropriate for some things, but too restrictive for things that have to be customised)
Code generation tools, such as T4? (Similar to the approach used by SharpArchitecture - have not tried this myself)
Something else?
Visual Studio supports Custom Templates.
I definitely (mostly!) go for T4 templates in conjunction with a modified version of SubSonic 3. I kind of use the database to model my domain and then use the T4 templates to generate the model and the associated controllers and views. It takes about 50-60% of the effort out and keeps everything consistent.
I then work on overrides (partials) of the classes along with filters and extension methods to 'make the app'. Now that I'm familiar with the environment and what I'm doing, I can have a basic model with good plumbing in place in a very short space of time. More importantly, because I create a set of partial class files, I can regenerate all I want without losing any of my 'custom' coding.
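The partial-class split that makes this safe looks roughly like this (names are made up; the generated file is overwritten on every T4 run, the hand-written one never is):

// Product.generated.cs -- produced by the T4 template, regenerated at will.
public partial class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
}

// Product.cs -- hand-written extensions; survives regeneration because it is a separate file.
public partial class Product
{
    public bool IsOnSale
    {
        get { return Price < 10m; }
    }
}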
It works for me anyway :)
You could do it the bearded, t-shirted, agile way: create a nice template and put it in source control. Then when you need a new project, you just check out the template.
For insanely fast MVC site setup, I use modified T4 templates (created with T4 Editor), with a lot of help from Oleg Sych's blogs for page generation (your typical add/edit/index pages), combined with an awesome implementation of automated create-update-delete called MVCCrud (if LINQ to SQL is your preferred data access method).
Using modified T4 templates and MVCCrud, you can create fully functional entities (Create/Edit/List/Delete) with error handling and intuitive error messages in about 4 minutes each.
I create a new project using the new project wizard so that I get unique project GUIDs assigned. Then I would use "Add Existing Item" to copy items from similar projects if it made sense to do so.
I sometimes use a file diff tool to copy references from one project to another, otherwise I just add the references by hand. A file diff tool can also be used to include similar source files, but the underlying files have to be copied anyway, so I prefer "Add Existing Item".
I've used T4 to generate solution and project files, but that definitely seems like an edge case and not something that would normally be necessary. In that case, I'd probably wrap the T4 in a PowerShell-like script to create and populate the rest of the directory structure.
I use "shared libraries" pretty aggressively in general, but not specifically due to this scenario.
In general, I don't find myself reusing plumbing between projects much. It's probably more often that I hack away in one "prototype" project, then abandon it, and rebuild the project from scratch following the above approach and only bring over the "non-hacky" code.
I'm creating an MVC2 application template at http://erictopia.com. It will contain all the basic items I think should be in an MVC project. These include BDD specifications, an ORM (NHibernate and possibly Lightspeed), T4 templates, custom providers, ELMAH support, a CSS/JavaScript minifier, etc.
In Java we have a wonderful tool named CheckStyle that enforces all our corporate naming conventions. Wonderful tool. I would like to do the same with our XSDs and WSDLs.
Is there a tool that I could use to enforce conventions and make sure all coders and analysts respect them, for example:
<wsdl:operation name="XX"> - all operation names must start with getXX, setXX or deleteXX.
Is the solution to my problem to create an XSD to validate my WSDL?
Look at this tool for defining rules and running them on WSDLs, mainly for naming conventions:
Rule Engine Based Wsdl Auditor
The development of Wsdl Auditor, as mentioned by Saikiran Daripelli, seems to have stopped. The last commit in the Subversion repository is from January 2011.
After doing some research and installing and testing several tools, I decided to use Oracle Code Compliance Inspector (CCI), which is available as an extension for the JDeveloper IDE and as a command-line utility (which also allows integration with e.g. Ant).
Testing naming conventions does not seem to be the primary goal of Oracle CCI, but it works quite well using XPath + regular expressions. As mentioned on their website, the primary objective is to enforce design consistency and good coding and documentation practices.
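If you just want a quick automated check alongside (or instead of) CCI, the XPath + regular expression idea is easy to script yourself. A rough sketch in C# (the naming rule comes from the question; the file-path handling and everything else is made up):

// Checks that every wsdl:operation name starts with get, set or delete.
using System;
using System.Linq;
using System.Text.RegularExpressions;
using System.Xml.Linq;

class WsdlNamingCheck
{
    static int Main(string[] args)
    {
        XNamespace wsdl = "http://schemas.xmlsoap.org/wsdl/";
        var doc = XDocument.Load(args[0]);   // path to the .wsdl file

        var violations = doc.Descendants(wsdl + "operation")
            .Select(op => (string)op.Attribute("name"))
            .Where(name => name != null && !Regex.IsMatch(name, "^(get|set|delete)"))
            .Distinct()
            .ToList();

        foreach (var name in violations)
            Console.WriteLine("Naming violation: " + name);

        return violations.Count == 0 ? 0 : 1;   // non-zero exit code can fail the build
    }
}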