I'm looking for some pros and cons of creating one operation per WSDL versus bundling all operations in a single WSDL.
Small example below:
<operation name="Divide">
    <input message="y:DivideMessage"/>
    <output message="y:DivideResponseMessage"/>
</operation>
Instead of Divide, imagine these were more complex operations: what are the pros and cons of having one WSDL per operation?
I'm not sure I understand the question... But if you are asking about putting all operations in a single WSDL versus one WSDL per operation...
I think it is better to keep everything in one single WSDL if the operations are on the same endpoint, as many tools will let you generate a web service client from the WSDL: with a single WSDL you can generate one client that is able to call all the operations. If the operations are in separate WSDLs, you will have to generate several clients, one per operation, which will be uncomfortable to use...
My proposal would be to use one WSDL per object (example: customerManagement) or aspect, which then includes several operations (for instance CRUD: Create, Read, Update and Delete).
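As a sketch of what that could look like, here is one possible portType for such a customerManagement WSDL (the message names here are hypothetical):
<portType name="CustomerManagement">
    <operation name="CreateCustomer">
        <input message="tns:CreateCustomerRequest"/>
        <output message="tns:CreateCustomerResponse"/>
    </operation>
    <operation name="ReadCustomer">
        <input message="tns:ReadCustomerRequest"/>
        <output message="tns:ReadCustomerResponse"/>
    </operation>
    <operation name="UpdateCustomer">
        <input message="tns:UpdateCustomerRequest"/>
        <output message="tns:UpdateCustomerResponse"/>
    </operation>
    <operation name="DeleteCustomer">
        <input message="tns:DeleteCustomerRequest"/>
        <output message="tns:DeleteCustomerResponse"/>
    </operation>
</portType>
One generated client can then call all four operations against the same endpoint.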
When thinking about WSDL design, it's not only about the WSDL operations.
You should also think about choosing correct portType and targetNamespace values in your WSDL(s).
The relation between your WSDL and the code generated from it is as follows:
- WSDL:targetNamespace -> package
- WSDL:portType -> class
- WSDL:operation -> method
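To make that mapping concrete, a hedged example (the exact output depends on your generator, e.g. a JAX-WS-style tool; the names below are illustrative):
<definitions targetNamespace="http://example.com/calculator" ...>
    <portType name="Calculator">
        <operation name="Divide">...</operation>
    </portType>
</definitions>
would typically generate something like a com.example.calculator package containing a Calculator class with a divide(...) method.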
Keeping everything together:
Pros: one file to synchronize, no dependencies.
Cons: one big file, probably harder to get a clear picture of what goes where.
Separate files:
Pros: smaller files, easier to maintain and extend.
Cons: possibly harder to debug cross-reference dependencies or to find duplicate entries.
Suggestion:
WSDL files can be considered similar to contracts. Therefore you should keep the 'common' definitions together and specialise only where the current application needs it. I would suggest keeping the shared 'objects' in a single 'lexicon' file and some basic (common) operations in a second-tier file that imports the first. Then, for each specialisation, I would create a third-tier file that defines only the operations unique to the current application's needs, or even split those operations across multiple files.
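As a hedged sketch of the third tier (file names and namespaces here are hypothetical), the application WSDL imports the shared definitions rather than repeating them:
<definitions targetNamespace="urn:example:orders-app"
             xmlns="http://schemas.xmlsoap.org/wsdl/">
    <!-- pull in the shared 'lexicon' types and common operations -->
    <import namespace="urn:example:common"
            location="common-operations.wsdl"/>
    <!-- only the operations unique to this application go here -->
    ...
</definitions>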
I'm building an ontology in Protégé for data-sharing management in IoT environments. For this purpose I want to create a package that contains the observation (raw data), information on its provenance, and the licence that the data consumer will need to accept and respect in order to use the resource. The aim of our project is to make this "package" the entity that circulates between users, rather than just the raw data, so that the data owner/producer does not completely lose ownership once the data is shared.
In order to do this, I rather spontaneously created a class named "Package" composed of three disjoint classes: the observation, its provenance information, and the generated licence. However, I realized that this does not mean "a package is composed of those three elements", but rather "each one of those three elements is a package", which is not at all what I'm seeking.
Is there a way to express the composition without (for example) having to create an object property named "isComposedOf"?
Thank you in advance for your time. Please don't hesitate to ask if you need more details.
I am working on automating the translation workflow and improving the localization process of a Rails website as a whole. I am using SimpleBackend, so only YAML files are used for storing translations.
The current locales directory consists of folders, then sub-folders (in some cases), with those sub-folders containing .yml files. I am considering integrating the project with a third-party tool like Transifex for translation management, so perhaps using a single YAML file for each language would be good for workflow management.
If someone can highlight the pros and cons of both structures, it would really help me decide whether I should switch from the nested file structure to the single-file pattern or not. Also, the project is an open-source project with active contributors, so I am thinking of a long-term solution.
Thanks!
I think whatever tools you are using to make the process flow smoothly factor a lot in this decision. You should explore how exactly Transifex wants its output structured, try to keep your current input structure, and give that a shot before making a decision.
However, in my opinion, for a large app with a lot of translatable text, my preference would be to allow multiple YAML files in your default locale, and one or two consolidated YAML files for each foreign translation. If there isn't a lot of translatable text in your app, maybe a single file is fine for you; but given it's already split up, there's a good chance that's the better choice. On a team with many contributors, a single file can become a very high-churn file (with a lot of merge conflicts) that everyone changes all the time.
Splitting into separate files lets you logically separate text to match a domain in your app, like a separate YAML file for mailers (or even each mailer), and one for each domain (or controller). Either way, it puts you in control of your organization strategy.
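As a hedged sketch of that layout (file and folder names are hypothetical), the default locale could be split by domain while each foreign locale stays consolidated:
config/locales/
├── en/
│   ├── mailers.en.yml    # subject lines and bodies for each mailer
│   ├── orders.en.yml     # text for the orders domain/controller
│   └── users.en.yml      # text for the users domain/controller
├── de.yml                # consolidated German file, generated by the sync
└── fr.yml                # consolidated French file, generated by the sync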
However, there isn't a lot of value, IMO, in separating your foreign translations to mirror that structure. The systems I have experience with (not Transifex) generate your foreign translation files for you, so you just need to sync with the web interface and commit the results.
Typically I have started new projects with a solution containing:
Web project: contains the ASP.NET MVC or Web API controllers, JavaScript code, etc.; makes calls to the class libraries.
Class library 1: contains the DbContext, the EF data model, a class with CRUD methods that interfaces with the DB via the DbContext, and various "utility" methods.
Class library 2: contains only POCO classes. This library is referenced by both the web project and library 1.
OK, that works well, but when the amount of "business logic" starts to increase, this gets kind of messy, since I start putting in more rules that the business gives me. It makes me think there needs to be another "layer" or library where we put "business logic" that goes above and beyond just returning data as a filtered list of POCO objects - things such as checking attributes of orders against rules defined by some group within the business.
My question then is: would you force every call from the client layer to go through the business library (see image below, case #2), even for simple cases where you just need a list of lookup values of some sort?
This question is likely to attract opinionated answers. My take on it is: yes, I would force everything to go through the business library.
This is for consistency more than anything else, really; this way you can be sure that:
A new member of your team is not left trying to understand why some DB operations happen through a different layer than others.
When you (or another developer) add or remove functionality that interacts with the DB, its location is well known.
When there's a problem with the DB layer / access / queries, it is simpler to locate.
If you are testing that layer and its methods, we find it more convenient to have everything in the same place (testability definitely increases). We still split the code across files.
We use dependency injection, so if you need DB access you just inject the interface that sets up the connection for you, and you're done (see the sketch after this list).
Depending on your setup, if you're logging DB-related activity separately (monitoring the QoS of queries, for example), this also ensures you don't end up adding that custom logging all over the code for those simple lookups.
It makes the dependency chain more manageable.
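A minimal sketch of that injection pattern, assuming a hypothetical ILookupRepository interface and ASP.NET Core-style constructor injection:
using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;

// Hypothetical POCO and interface living in the business library.
public class LookupValue { public int Id { get; set; } public string Name { get; set; } }

public interface ILookupRepository
{
    IReadOnlyList<LookupValue> GetLookupValues(string category);
}

// The controller never touches the DbContext directly; it only sees
// the interface, which the DI container wires up at startup.
public class LookupController : Controller
{
    private readonly ILookupRepository _lookups;

    public LookupController(ILookupRepository lookups)
    {
        _lookups = lookups;
    }

    public IActionResult Index(string category)
    {
        return Json(_lookups.GetLookupValues(category));
    }
}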
Now, this is not to say that it doesn't get complicated - it does. However, there are further ways to split things up; you don't necessarily need one gigantic DbContext class handling N different queries. Depending on the design, we might split it with partial classes so that different functionality ends up in different files, and their tests map to different files too; we think this improves overall maintainability.
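A hedged sketch of that partial-class split (entity and file names are hypothetical):
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public class Customer { public int Id { get; set; } }
public class Order { public int Id { get; set; } }

// OrdersDbContext.Customers.cs -- customer-related queries only
public partial class OrdersDbContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }

    public Task<Customer> FindCustomerAsync(int id) =>
        Customers.FirstOrDefaultAsync(c => c.Id == id);
}

// OrdersDbContext.Orders.cs -- order-related queries only
public partial class OrdersDbContext
{
    public DbSet<Order> Orders { get; set; }

    public Task<Order> FindOrderAsync(int id) =>
        Orders.FirstOrDefaultAsync(o => o.Id == id);
}
Each partial file (and its matching test file) stays small and focused, while the compiler still sees a single OrdersDbContext class.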
I would like to know if there are better ways to initialize a large collection of same-type instances. This problem is not limited to Swift, but I am using Swift in this case.
Take, for example, a large list of API endpoints. Suppose I have 100 endpoints in this API, and each of them shares some common functionality, such as headers, parameter lists, parsing formats, etc., albeit with different values for each of these "options".
I could think of a few different ways to express 100 endpoints:
Create a resource file with all of the values and read them in from the file on app launch. The problem with this is that it becomes stringly typed, with potential for typos and/or lots of copy/pasted key values. This would include plist files, JSON files, SQLite tables, CSV files, etc. It centralizes and condenses the data, but it doesn't seem maintenance-friendly or Swift-like. Furthermore, resource files seem harder to obfuscate should the details be somewhat private.
Create a giant enum-ish function with all of the API endpoint initialization code blobbed together in the same area/function/file. This would be the equivalent of a giant switch statement, or a collection literal with all the instantiation happening in one spot. The advantage here is that it can be strongly typed, and it is also contained in one area, similar to a resource file. However, it will be a BIG file with lots of scrolling. Maybe too big?
Create a separate file/module/instance/subtype for each endpoint and, more or less, hardcode computed properties inside it. This might mean creating an extension and/or subclass for each endpoint and putting each in a separate Swift file. This limits the visual scope for each endpoint, but it also just turns your project's file list into the blob of data instead.
I'm wondering if there are philosophical arguments for any of these options. Or are there other options I have not thought of? Is it preference? Are there best practices for initializing a large collection of what amounts to a bunch of complex literals?
If you have lots of this static data, or machine-generated classes, consider the advice in WWDC 2016's Optimizing App Startup Time. It's a great talk. The loader has to initialize and fix up all your static object instances and classes; if you have a lot, your app's load time will be adversely affected.
For static data, one piece of advice is to use Swift, which you've already done, as Swift knows to defer the instantiations until run time.
Swift doesn't help with mass-produced classes, though you can switch to structs instead.
Even ignoring the startup-time issue, I'd err on the side of being data-driven: option 1. Less code to maintain. IMHO there's nothing wrong with stringly typed here; this code is unlikely to change much, and adding endpoints will be trivial. It's cool to see new functionality appear when you didn't even write new code!
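A minimal sketch of that data-driven approach, assuming a hypothetical endpoints.json bundled with the app; the struct keeps the surface strongly typed while the values stay in data:
import Foundation

// Value type for one endpoint; structs avoid the class fix-up cost
// mentioned in the startup-time talk.
struct Endpoint: Decodable {
    let name: String
    let path: String
    let headers: [String: String]
    let parameters: [String]
}

// Load every endpoint once at launch from a bundled resource
// (the file name "endpoints.json" is an assumption in this sketch).
func loadEndpoints() throws -> [String: Endpoint] {
    guard let url = Bundle.main.url(forResource: "endpoints",
                                    withExtension: "json") else {
        throw CocoaError(.fileNoSuchFile)
    }
    let data = try Data(contentsOf: url)
    let endpoints = try JSONDecoder().decode([Endpoint].self, from: data)
    // Index by name so lookups are cheap and duplicate names fail fast.
    return Dictionary(uniqueKeysWithValues: endpoints.map { ($0.name, $0) })
}
Adding a new endpoint is then a data edit, not a code change.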
What are common reasons to split a development project (e.g. an ASP.NET MVC application) into multiple projects? Code organization can be done via folders just as well. Multiple projects tend to generate circular-reference conflicts and increase complexity through having to manage and resolve them.
So, why?
Some reasons are:
Encapsulation - by packaging a set of routines into another library, either as a static library or a set of DLLs, it becomes a black box. For it to be a good black box, all you need to do is make sure you give the right inputs and get the right outputs. It helps when you re-use that library. It also enforces certain rules and prevents programming by hacks ('hmm... I'll just make that member function public for now').
Reduces compile time - the library is already compiled; you don't have to rebuild it at compile time, just link to it (assuming you are doing C++).
Decoupling - by encasing your classes in standalone libraries, you reduce coupling and can reuse the library for other purposes. Likewise, as long as the interface of the library does not change, you can make changes to the library all you like, and others who link to it or refer to it do not need to change their code at all. DLLs are useful in this respect because no recompilation is required, but they can be tricky to work with if many applications install different versions of the same DLL. You can update libraries without impacting the client's code. While you can do the same with just folders, there is no explicit mechanism to force this behaviour.
Also, by practising the discipline of having different libraries, you can make sure what you have written is generic and decoupled from its implementation.
Licensing/Commercialization - Well, I think this is quite obvious.
One possibility is to have a system that a given group (or single developer) can work on independently of the rest of the code. Another is to factor out common utility code that the rest of the system needs -- things like error handling, logging, and common utilities come to mind.
Of course, just as when thinking about what goes in a particular function / class / file, where the boundaries lie is a matter of art, not science.
One example I can think of is that you might find in developing one project that you end up developing a library which may be of more general use and which deserves to be its own project. For instance maybe you're working on a video game, and you end up writing an audio library that's in no way tied specifically to the game project.
Code reuse. Let's say you have project A and you start a new project B which has many of the same functions as project A. It makes sense to pull out the shared parts of A and make them into a library which can be used by both A and B. This allows you to have the code in both without having to maintain the same code in two places.
Code reuse, inverted. Let's say you have a project which works on one platform. Now you want it to work on two platforms. If you can separate out the platform-dependent code, you can start different projects for each platform-dependent library and then compile your central codebase with different libraries for different platforms.
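A hedged sketch of that platform split (names here are hypothetical): the central codebase depends only on an interface, and each platform-specific project supplies its own implementation:
// Core project: platform-neutral contract the central codebase uses.
public interface IClipboard
{
    void Copy(string text);
}

// Windows project: one platform-dependent implementation.
public class WindowsClipboard : IClipboard
{
    public void Copy(string text)
    {
        // call the Windows clipboard API here
    }
}

// Linux project: another platform-dependent implementation.
public class LinuxClipboard : IClipboard
{
    public void Copy(string text)
    {
        // shell out to xclip (or similar) here
    }
}
The central codebase compiles against IClipboard alone and is linked with whichever platform library the build targets.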
Some tips about splitting your project into multiple projects:
One reason for separating a project into multiple class libraries is re-usability. I've yet to see the BLL or DAL part of an application re-used in another application. This is what textbooks from the 90s used to tell us! But most, if not all, modern applications are too specific; even in the same enterprise, I've never seen the same BLL or DAL parts re-used across multiple applications. Most of the time, what you have in those class libraries exists purely to serve what the user sees in that particular application, and it's not something that can easily be re-used (if at all).
Another reason for separating a project into multiple class libraries is deployability. If you want to independently version and deploy these pieces, it does make sense to go down this path. But this is often a use case for frameworks, not enterprise applications. Entity Framework is a good example. It's composed of multiple assemblies, each focusing on a different area of functionality. We have one core assembly which includes the main artifacts, another assembly for talking to a SQL Server database, another one for SQLite, and so on. With this modular architecture, we can reference and download only the parts that we need.
Imagine if Entity Framework were only one assembly! It would be one gigantic assembly with lots of code that we don't need. Also, every time the support team added a new feature or fixed a bug, the entire monolithic assembly would have to be compiled and deployed. That would make the assembly very fragile. If we're using Entity Framework on top of SQL Server, why should an upgrade caused by a bug fix for SQLite impact our application? It shouldn't! That's why it's designed in a modular way.
In most web applications out there, we version and deploy all these assemblies (Web, BLL and DAL) together. So separating a project into three projects does not add any value.
Layers are conceptual. They don't have a physical representation in code. Having a folder or an assembly called BLL or DAL doesn't mean you have properly layered your application, nor does it mean you have improved maintainability. Maintainability is about clean code, small methods, small classes each having a single responsibility, and limited coupling between these classes. Splitting a project with fat classes and fat methods into BLL/DAL projects doesn't improve the maintainability of your software. Assemblies are units of versioning and deployment. Split a project into multiple projects if you want to re-use certain parts of it in other projects, or if you want to independently version and deploy each project.
Source: https://programmingwithmosh.com/csharp/should-you-split-your-asp-net-mvc-project-into-multiple-projects/
Ownership, for one thing. If you have developers responsible for different parts of the code base, then splitting the project up is the natural thing to do. One would also split projects by functionality. This reduces conflicts and complexity. If they increase, that just means a lack of communication, and you are simply doing it wrong.
Instead of questioning the value of code in multiple assemblies, question the value of clumping all of your code in one place.
Would you put everything in your kitchen in a single cabinet?
Circular references are circular references, whether they happen between assemblies or within them. The design of the offending components is most likely sub-optimal; eschewing organization via assemblies ironically prevents the compiler from detecting the situation for you.
I don't understand the statement that you can organize code just as well with folders as with projects. If that were true, our operating systems wouldn't have the concept of separate drives; they would just have one giant folder structure. Higher-order organizational patterns express a different kind of intent than simple folders.
Projects say "These concepts are closely related, and only peripherally related to other concepts."
There are some good answers here so I'll try not to repeat.
One benefit of splitting code out into its own project is the ability to reuse the assembly across multiple applications.
I liked the functional approach mentioned as well (e.g. Inventory, Shipping, etc. could each get their own projects). Another idea is to consider the deployment model. Code shared between layers, tiers, or servers should probably be in its own common project (or set of projects, if finer control is desired). Code earmarked for a certain tier may go in its own project; e.g. if you had a separate web server and application server, you wouldn't want to deploy the UI code on the application server.
Another reason to split may be to allow small incremental deploys once the application is in production. Let's say you get an emergency production bug that needs to be fixed. If the small change requires a rebuild of the entire (one-project) application, you might have a hard time justifying a small test cycle to QA. It's an easier sell if you are deploying only one assembly with a smaller set of functionality.