I am currently trying to create Swagger clients using the Swagger-Codegen for multiple languages. Everything is fine for a single language, but when doing it for multiple languages the complexity seems to increase unnecessarily.
Some of the options in my config.json file are not supported/available in other languages, so I would have to create a config file for every language, which makes something as simple as a version bump ridiculously complex. My current approach would be to write a script that calls Swagger-Codegen several times, passing different config parameters per language. However, this seems more like a hack than a solution, so I was wondering whether I missed anything in the docs or whether any of you came up with a better solution?
I have an application that can be extended with defmethod calls. The application should be extended at runtime by adding new namespaces to the classpath that contain additional defmethod calls.
I am looking for a dependency injection solution. The question is: how will my application know what namespaces it should require so that the defmethod calls can take effect?
One solution is to have a central configuration file that contains the names of the namespaces that can be required. A drawback is that I need to edit the configurations by hand when I want to enable a plugin.
Another way is to somehow dynamically scan the classpath for additional namespaces and require them based on a predicate (for example, a namespace name prefix).
These are the only two solutions I have found, but I wonder what other ways there are to do runtime dependency injection in Clojure, and what libraries are commonly used for this purpose.
Thank you in advance.
There are 3 dependency injection frameworks commonly used in Clojure land:
Component
Mount
Integrant
Of these, Integrant will probably fit your way of thinking best. However, in the past I've thought that I had the problem you are describing, and gone with the approach of scanning for namespaces that need to be required. But in the fullness of time I realised that I was structuring my code in a sub-optimal way, and thinking about it differently made the code easier to follow and fixed this backwards dependency problem at the same time. Your situation may well be different. The searching for namespaces to load does work though ;)
I have these relatively big log files which are generated from a machine via a serial connection.
This log isn't structured and I need to check various different things. I wonder if there is some kind of existing language or tool which is specialized for this kind of thing?
Languages I currently know:
c and c++
python
some java
various scripting languages
I hope some of you have a good recommendation!
Going with what you already know, just use regular expressions in Python.
XML seems to be the language of the day, but it's not type-safe (without an external tool to detect problems) and you end up doing logic in XML. Why not just do it in the same language as the rest of the project? If it's Java you could just build a config jar and put it on the classpath.
I must be missing something deep.
The main downside to configuring DI in code is that you force a recompilation in order to change your configuration. By using external files, reconfiguring becomes a runtime change. XML files also provide extra separation between your code and configuration, which many people value highly.
This can make it easier for testing, maintainability, updating on remote systems, etc. However, with many languages, you can use dynamic loading of the code in question and avoid some of the downsides, in which case the advantages diminish.
Martin Fowler covered this decision pretty well here:
http://martinfowler.com/articles/injection.html
Avoiding the urge to plagiarize... just go read the section "Code or configuration files".
There's nothing intrinsically wrong with doing the configuration in code, it's just that the tendency is to use XML to provide some separation.
There's a widespread belief that somehow having your configuration in XML protects you from having to rebuild after a change. The reality in my experience is that you need to repackage and redeploy the application to deliver the changed XML files (in the case of web development anyway), so you could just as easily change some Java "configuration" files instead. You could just drop the XML files onto the web server and refresh, but in the environment I work in, audit would have a fit if we did.
The main thing that using XML configuration achieves, in my opinion, is forcing developers to think about dependency injection and separation of concerns. In Spring (amongst others), it also provides a convenient hook to hang your AOP proxies and suchlike. Both of these can be achieved in Java configuration; it is just less obvious where the lines are drawn, and the tendency may be to reintroduce direct dependencies and spaghetti code.
For reference, there is a Spring project that allows you to do the configuration in code.
The Spring Java Configuration project (JavaConfig for short) provides a type-safe, pure-Java option for configuring the Spring IoC container. While XML is a widely-used configuration approach, Spring's versatility and metadata-based internal handling of bean definitions means alternatives to XML config are easy to implement.
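For illustration, here is a minimal sketch of that style, assuming Spring 3 or later; the Greeter class and the bean names are invented for this example and are not part of Spring:

import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Invented service class, used only to have something to wire up.
class Greeter {
    private final String greeting;
    Greeter(String greeting) { this.greeting = greeting; }
    String greet(String name) { return greeting + ", " + name; }
}

// Bean definitions live in Java instead of XML; the wiring is plain method calls.
@Configuration
public class AppConfig {

    @Bean
    public String greeting() {
        return "Hello";
    }

    @Bean
    public Greeter greeter() {
        // A typo here is a compile error, not a runtime XML parsing failure.
        return new Greeter(greeting());
    }

    public static void main(String[] args) {
        AnnotationConfigApplicationContext ctx =
                new AnnotationConfigApplicationContext(AppConfig.class);
        System.out.println(ctx.getBean(Greeter.class).greet("world"));
        ctx.close();
    }
}

The clean separation between configuration and logic is still there - the bean definitions just live in a dedicated @Configuration class instead of an XML file.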
In my experience, close communication between the development team and the infrastructure team can be fostered by releasing frequently. The more you release, the more you actually know what the variability between your environments is. This also allows you to remove unnecessary configurability.
A corollary to Conway's law applies here: your config files will resemble the variety of environments your app is deployed to (planned or actual).
When I have a team deploying internal applications, I tend to drive towards config in code for all architectural concerns (connection pools, etc.), and config in files for all environmental config (usernames, connection strings, IP addresses). If there are different architectural concerns across different environments, then I'll encapsulate those into one class, and make that class name part of the config files - e.g.
container.config=FastInMemoryConfigurationForTesting
container.config=ProductionSizedConfiguration
Each one of these will use some common configuration, but will override/replace those parts of the architecture that need replacing.
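For example, a hedged sketch of how that can be wired up in plain Java (the ContainerConfig interface and the loader class are invented for illustration, not taken from any particular framework):

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

// Invented interface: each environment supplies its own architectural configuration.
interface ContainerConfig {
    int connectionPoolSize();
}

class FastInMemoryConfigurationForTesting implements ContainerConfig {
    public int connectionPoolSize() { return 1; }
}

class ProductionSizedConfiguration implements ContainerConfig {
    public int connectionPoolSize() { return 50; }
}

public class ConfigLoader {

    // Reads container.config from the environment-specific properties file
    // and instantiates that class reflectively.
    static ContainerConfig load(String path) throws IOException, ReflectiveOperationException {
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream(path)) {
            props.load(in);
        }
        String className = props.getProperty("container.config");
        return (ContainerConfig) Class.forName(className)
                .getDeclaredConstructor()
                .newInstance();
    }

    public static void main(String[] args) throws Exception {
        ContainerConfig config = load("container.properties");
        System.out.println("pool size = " + config.connectionPoolSize());
    }
}

Each configuration class can extend a shared base for the common parts and override only what differs per environment, which matches the idea above.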
This is not always appropriate however. There are several things that will affect your choice:
1) How long it takes after releasing a new drop before it is deployed successfully in each production environment and you receive feedback on that environment (cycle time)
2) The variability in deployed environments
3) The accuracy of feedback garnered from the production environments
So, when you have a customer who distributes your app to their dev teams for deployment, you are going to have to make your app much more configurable than if you push it live yourself. You could still rely on config in code, but that requires the target audience to understand your code. If you use a common configuration approach (e.g. Spring), you make it easier for the end users to adapt and work around issues in their production environments.
But a rubric is: configurability is a substitute for communication.
XML is not meant to have logic, and it's far from being a programming language.
XML is used to store DATA in a way that is easy to understand and modify.
As you say, it's often used to store definitions, not business logic.
You mentioned Spring in a comment to your question, so that suggests you may be interested in the fact that Spring 3 lets you express your application contexts in Java rather than XML.
It's a bit of a brain-bender, but the definition of your beans, and their inter-dependencies, can be done in Java. It still keeps a clean separation between configuration and logic, but the line becomes that bit more blurred.
XML is mostly a data (noun) format. Code is mostly a processing (verb) format. From the design perspective, it makes sense to have your configuration in XML if it's mostly nouns (addresses, value settings, etc) and code if it's mostly verbs (processing flags, handler configurations, etc).
It's bad because it makes testing harder.
If you're writing code and using methods like getApplicationContext() to obtain the dependencies, you're throwing away some of the benefits of dependency injection.
When your objects and services don't need to know how to create or acquire the resources on which they depend, they're more loosely coupled to those dependencies.
Loose coupling means easier unit testing. It's hard to get something into a JUnit test if you need to instantiate all its dependencies. When a class omits assumptions about its dependencies, it's easy to use mock objects in place of real ones for the purpose of testing.
Also, if you can resist the urge to use getApplicationContext() and other code-based DI techniques, then you can (sometimes) rely on Spring autowiring, which means even less configuration work. Configuration work, whether in code or in XML, is tedious, right?
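To make the testing point concrete, here is a small sketch with invented names (a main method stands in for a JUnit test so the snippet stays self-contained): because the dependency arrives through the constructor, a stub can stand in for the real collaborator and no application context is needed at all.

// Invented interface and classes, purely for illustration.
interface PaymentGateway {
    boolean charge(String account, long cents);
}

// The service never looks anything up; its one dependency is handed to it.
class OrderService {
    private final PaymentGateway gateway;

    OrderService(PaymentGateway gateway) {
        this.gateway = gateway;
    }

    boolean placeOrder(String account, long cents) {
        return gateway.charge(account, cents);
    }
}

public class OrderServiceTest {
    public static void main(String[] args) {
        // No container, no XML, no getApplicationContext(): a lambda stands in
        // for the real gateway, which is exactly what loose coupling buys you.
        PaymentGateway alwaysApproves = (account, cents) -> true;
        OrderService service = new OrderService(alwaysApproves);
        System.out.println(service.placeOrder("acct-1", 100));  // prints: true
    }
}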
I have a freeware scientific app that is used by thousands of people in nearly 100 countries. Many have offered to translate for free. Now that D2009 makes this easier (with integrated and external localization tools, plus native Unicode support) I'd like to make this happen for a few languages and steadily add as many as user energy will support.
I'm thinking that I'll distribute a spreadsheet with a list of strings (dozens but not hundreds) to be translated, have them return it, and compare submissions in the same language from 2-3 users then work to resolve discrepancies by consensus. Then I'll incorporate the localizations using the Integrated Translation Environment, and distribute localized updates.
Has anyone delegated translation to users? Any gotchas, D2009-specific or otherwise?
EDIT: Has anyone compared the localization support built into D2009 versus dxgettext?
I have never been a friend of proprietary localization tools for Freeware or Open Source applications. Using dxgettext, the Delphi port of GNU gettext, looks like a much better option to me:
Integration into the program (even much later than its development) is easy.
Extraction of translatable strings can be done by command line programs and is therefore easily introduced into an automated build.
A new translation can be added simply by creating a new directory with the correct structure, copying the empty translation file into it, and starting to translate the strings. This is something each user can do for themselves; there's no need to involve the original author in the creation of a new translation. There is also instant gratification with this process - once the program is restarted the new translations are shown immediately.
Changing an existing translation is even easier than creating a new one. Thus if a user finds spelling or other errors or needs for improvement in the translation they can correct them easily and send the changes to the author.
New program versions work with old translations, the system degrades very gracefully - new and untranslated strings are simply shown unmodified.
Translations can be made using only notepad, but there are several free tools for creating and managing translation files too; see the links on the dxgettext page. They are localized themselves, and have some advantages over a spreadsheet as well:
The location of the strings in the source code can be shown (makes sense only for Open Source apps, of course).
The percentage of translated strings is shown.
Modifications to already translated strings are highlighted too.
The whole system is mature and future-proof - I have used dxgettext for Delphi 4 programs, and there should be no changes necessary for Delphi 2009 even - translation files have always been UTF-8 encoded.
Using a spreadsheet for the translation doesn't seem like a workable solution to me once you have more than a few languages. Suppose a new program version adds 2 new strings and changes 10 strings only slightly - wouldn't you need to add the new strings to, and highlight the changed strings in, all of the several dozen spreadsheet files and send them again to your translators? Using dxgettext you just mail the changed po file to all of them.
Edit:
There is an interesting comment about the problems there may be with dxgettext and libraries. I never experienced this, as I have stopped using resource strings altogether. The biggest part of our programs is in German, and only a few are in English or translated into several languages.
Our internal libraries use "_(...)" around all translatable strings. There are defines ENGLISH and USEGETTEXT that are set on a per-project basis. If ENGLISH or USEGETTEXT are defined, then the English texts are compiled into the DCUs, else the German text is compiled into the DCUs. If USEGETTEXT is not defined "_()" is compiled as a function that returns its parameter as-is, else the dxgettext translation lookup is used.
I have... There can be some challenges.
a string does not mean much in itself; it needs a context.
corollary: the same string may need to have more than one translation.
screen real estate: beware of varying length depending on the language, for instance, French tends to be more verbose than English.
unless you are proficient in a given language, you won't be able to evaluate the discrepancies.
I've used TsiLang Translation Suite for enabling end users to translate. I modified the code to allow encryption so that if someone does a really good job they can protect their name against a translation file, but in general the idea is that people can share their translations, and add/edit any small part they wish to. Given that it all happens within the app, and with instant visibility, it works really nicely.
As you have mentioned, D2009 comes with localization tools. Why not simply use them? AFAIK you can distribute the external translation manager (etm.exe). Do you need anything else?
Also, localization is more than just translating text. ETM also supports translation of .dfm resources.
For completeness, here is another Delphi localization tool called Delphi Localizer I recently found that looks to be well designed and polished. The tool is free for commercial use with the exception of Government projects (not exactly sure why the exception).
FWIW I have used TsiLang Translation Suite in the past and am currently working on another project using the localization tools shipped with DevExpress VCL. The latter integrates nicely with their components as well as third-party components.
I believe that most IoC containers allow you to wire dependencies with an XML configuration file. What are the pros and cons of using a configuration file vs. registering dependencies in code?
These pros and cons are based on my work with Spring. It may be slightly different for other containers.
XML
pro
flexible
more powerful than annotations in some areas
very explicit modelling of the dependencies of your classes
con
verbose
difficulties with refactoring
switching between several files
Annotations
pro
less file switching
auto configuration for simple wirings
less verbose than XML
con
more deployment-specific data in your code (usually you can override this with XML configs)
XML is almost(?) always needed, at least to set up the annotation-based config
annotation magic may lead to confusion when searching for the class that is used as a dependency
Code
pro
Can take advantage of strongly-typed languages (e.g. C#, Java)
Some compile-time checking (can't statically check dependencies, though)
Can take advantage of DSLs (e.g. Binsor, Fluent interfaces)
Less verbose than XML (e.g. you don't need to always specify the whole assembly qualified name (when talking .net))
con
wiring via code may lead to complex wirings
hard dependencies on the IoC container in the codebase
I am using a mix of XML + annotations. Some things, especially regarding database access, are always configured via XML, while things like controllers or services are mostly configured via annotations in the code.
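As a rough sketch of that split (the CustomerService class here is invented for illustration): infrastructure such as the DataSource stays in the XML file, while the service layer is picked up by component scanning and annotated in the code.

import javax.sql.DataSource;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

// Found by <context:component-scan base-package="..."/> declared in the XML;
// the DataSource itself is still defined as a <bean> in that same XML file.
@Service
public class CustomerService {

    private final DataSource dataSource;

    @Autowired
    public CustomerService(DataSource dataSource) {
        // Environment-specific details (URL, credentials) stay in the XML and
        // properties files; this class only states that it needs a DataSource.
        this.dataSource = dataSource;
    }
}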
[EDIT: I have borrowed Mausch's code PROs]
XML pros:
Can change wiring and parameters without recompiling. Sometimes this is nice to have when switching environments (e.g. you can switch a fake email sender used in dev to the real email sender in production)
Code pros:
Can take advantage of strongly-typed languages (e.g. C#, Java)
Some compile-time checking (can't statically check dependencies, though)
Refactorable using regular refactoring tools.
Can take advantage of DSLs (e.g. Binsor, Fluent interfaces)
Less verbose than XML (e.g. you don't need to always specify the whole assembly qualified name (when talking .net))
I concur. I have found IoC containers to give me very little; however, they can very easily make it harder to do something. I can solve most of the problems I face just by using my programming language of choice, which has always turned out to be simpler, easier to maintain, and easier to navigate.
I'm assuming that by "registering dependencies in code" you mean "use new".
'new' is an extraordinarily powerful dependency injection framework. It allows you to "inject" your "dependencies" at the time of object creation - meaning no forgotten parameters, or half-constructed objects.
The other major potential benefit is that when you use refactoring tools (say in ReSharper, or IntelliJ), the calls to new change too.
Otherwise you can use some XML nonsense and refactor with XSL.
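In that spirit, a small sketch (all names invented) of what wiring with plain new looks like when it is gathered into one place - the compiler checks it and ordinary rename refactorings follow it:

// Invented types, for illustration only.
interface Mailer {
    void send(String to, String body);
}

class SmtpMailer implements Mailer {
    public void send(String to, String body) {
        System.out.println("SMTP -> " + to + ": " + body);
    }
}

class SignupService {
    private final Mailer mailer;

    SignupService(Mailer mailer) {
        this.mailer = mailer;   // a forgotten dependency is a compile error
    }

    void register(String email) {
        mailer.send(email, "Welcome!");
    }
}

public class Main {
    public static void main(String[] args) {
        // The composition root: all wiring happens here, with plain constructors.
        SignupService signup = new SignupService(new SmtpMailer());
        signup.register("user@example.com");
    }
}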