The question title says it all.
The reason why I ask:
"Maven is more of a platform than a tool; while you could consider Maven an alternative to Ant, you are comparing apples to oranges. Maven includes more than just a build tool."
What the heck does this mean?
Different tools make apps for the same platform ... The quotation you brought up just claims that Maven does more than Ant, so it's not fair to compare the two.
You build "things" on top of a platform. You build those "things" with tools.
To use an analogy, think of a city as a platform. The city (platform) provides basic services such as:
Power
Communications channels
Sewage lines
Streets
Land/lots to build on
Etc.
You can build all kinds of "things" within a city - e.g. retail stores, office buildings, homes, movie theaters, etc. These "things" are all built using a wide variety of tools - e.g. hammers, saws, power drills, cranes, etc.
What all of these "things" have in common is that many/most of them make use of the various services provided by the city (platform). This allows you to build those "things" much more quickly and efficiently since you don't have to re-invent these services for each and every "thing" that you build.
It's also possible, due to economies of scale, that the services will be cheaper when they're provided as part of a platform vs. creating them for each use case. For example, you wouldn't want to create an electrical power plant for each home that you build.
In some cases, the "things" that get built can become part of the overall platform. For example, building a new electrical substation at the edge of a city allows new things to be built with easier, more efficient access to electricity, increasing the overall capabilities of the platform.
We are writing our first microservices in Docker containers on Amazon Fargate, and we have many implementation-level doubts. We are using Spring Boot.
We will have multiple microservices in the project. Is it good practice to run all of the microservices in a single container, or should I create a separate Docker container for each microservice? A single container would be more cost effective, but will that cause problems for our project structure in the future?
We are planning to deploy the application on AWS Fargate, and the application is expected to grow a lot in the future, to around 100 to 150 different microservices. In that case, is it still cost effective to deploy each of these microservices in its own container?
The most important thing to remember with microservices is that they're not primarily about solving technical problems but organisational problems. So when we look at whether an organisation should be using microservices, and how those services are deployed, we need to look at whether the org has the problems that the microservices style solves.
The answer to your question about your architecture, then, will mostly depend on the size of your technology team, the organisational structure, the age of your product, your current deployment practices, and how those are likely to change over the medium term.
As an example, if your organisation:
has fewer than 25 tech staff,
organised into 1 or 2 teams,
each of which works on any part of the product,
which is less than 12 months old,
and is deployed all at once on a regular basis (e.g. daily, weekly, monthly),
and the org isn't about to grow rapidly,
then you almost definitely want to forget about microservices for now. In a situation like this, the team is still learning about the domain, so it likely doesn't yet know enough to really understand what would be a great way to split the system up into a distributed architecture. That means that if they split it up now, they'll probably want to change the boundaries later, and that becomes very expensive when you already have a distributed system, while being far simpler in a monolith. What's more, with only a small team who can all work on (and support) any part of the system, there's little reason to invest in building a platform where individual teams can deploy and maintain individual services. An organisation at this stage will typically be far more concerned with finding customers and iterating the product quickly, perhaps even pivoting the product, as opposed to making teams autonomous and building a high-scaling, resilient architecture. A monolithic architecture makes sense at this point, but it should be a well-designed monolith, with clear component boundaries enforced by APIs and encapsulated data access, making it easy to pull services out into separate processes later.
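To make "clear component boundaries enforced by APIs" concrete, here is a minimal Java sketch (all names are hypothetical) of a monolith component whose data access is hidden behind a small interface, so it could later be extracted into its own service without its callers changing:

    // BillingApi is the only surface other components in the monolith may call.
    // In a real codebase it would be a public type in its own package.
    interface BillingApi {
        Invoice invoiceFor(String orderId);
    }

    record Invoice(String orderId, long amountCents) { }

    // Package-private implementation: other components cannot reach the
    // repository directly, only the BillingApi above. Extracting billing into
    // a separate process later means swapping this implementation for an HTTP
    // client while the interface stays the same.
    class BillingComponent implements BillingApi {
        private final InvoiceRepository repository = new InvoiceRepository();

        public Invoice invoiceFor(String orderId) {
            return repository.findByOrderId(orderId);
        }
    }

    // Encapsulated data access: no other component touches billing's tables.
    class InvoiceRepository {
        Invoice findByOrderId(String orderId) {
            return new Invoice(orderId, 0L); // stand-in for real persistence
        }
    }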
Let's look a little further on and consider an organisation that...
has over 50 tech staff,
organised into 7 teams,
each of which works only on specific areas of the product,
which is 3 years old,
and has teams wanting to deploy their work independently of what other teams are doing.
Such an organisation should definitely be building a distributed architecture. If they don't, and have all these teams working in a monolith instead, they will run into all kinds of organisational problems, with teams needing to coordinate their work, releases being delayed while one team finishes QA on their new feature, and patch deploys being a big hassle for staff and customers. What's more, with a mature product, the organisation should know enough about the domain to be able to sensibly split both the domain and the teams (in that order; see Conway's Law) into sensible, autonomous units that can make progress while minimising coordination.
You seem to have chosen microservices already. Depending on where you sit on the scales above, maybe you want to revisit that decision.
If you want to keep developing with microservices but deploying them all in one container, know that there's nothing wrong with that if it suits the way your organisation works at the moment. Will it cause problems for your project structure in the future? Well, if you're successful and your organisation grows, there will probably come a time when this single-container deployment is no longer the best fit, in particular when teams start owning services and want to deploy just their service without deploying the whole application. But that autonomy will come at the cost of extra work and complexity, and it may give you no benefit at this point in time. Just because it won't be the right approach for your system in the future doesn't mean that it isn't the right approach for today. The trick is in keeping an eye on it and knowing when to make the extra investment.
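For illustration only, here is a toy Java sketch of that single-unit style: two "services" served from one process, and hence one container. Everything here is made up (the ports and service names included), and real services would be separate Spring Boot applications; the point is just that one deployable unit can host several logical services until the organisation needs to split them:

    import com.sun.net.httpserver.HttpServer;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;

    public class SingleContainerLauncher {
        // Starts a trivial HTTP endpoint answering with a fixed body.
        static void serve(int port, String body) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
            server.createContext("/", exchange -> {
                byte[] bytes = body.getBytes();
                exchange.sendResponseHeaders(200, bytes.length);
                try (OutputStream os = exchange.getResponseBody()) {
                    os.write(bytes);
                }
            });
            server.start();
        }

        public static void main(String[] args) throws Exception {
            // Two logical services, one process, one container image.
            serve(8081, "orders service");
            serve(8082, "inventory service");
        }
    }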
There is no problem with using a single container for your microservices, but the main goal of microservices is to maintain each service separately: each service should be loosely coupled, and each service should have its own database (if you want to achieve a database-per-service architecture).
So try to achieve this: run your services in separate containers and orchestrate those services with Docker Swarm or Kubernetes.
I know cost matters, but if you do it the right way you will see the power of the microservices architecture.
I am doing a project on designing and implementing an M2M application using OM2M. From the documentation I found on the internet, I know that OM2M is based on the ETSI M2M and oneM2M standards. The similarity between these two standards confuses me a bit. Can anyone tell me what the difference is between the ETSI M2M standard and the oneM2M standard?
Thank you so much!
I will try to help from the standards perspective, not in terms of the implementation.
ETSI M2M was developed starting from 2009, and two releases were completed. In the meantime, the need to globalize the solution was identified, so ETSI and its members approached other companies and other standards organizations to build a common project, which is today oneM2M.
It is worth remembering that oneM2M is not a new standards organization; it is simply a shared partnership project among existing organizations, merging their efforts and expertise to provide better specifications.
Technically speaking, the principles are the same: the key resources are still Applications, Containers and Access Rights (ACP in oneM2M), and the principle of separating the semantic treatment from the platform is unchanged.
So, de facto, Release 1 of oneM2M is a sort of "Release 3" of ETSI M2M. But be careful: they are not backward compatible.
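If you are experimenting with Eclipse OM2M (which implements oneM2M), the resource tree described above is exercised through a plain REST binding. Below is a minimal Java sketch that creates a Container under the IN-CSE; the address, the admin:admin originator and the rn value are the defaults used in OM2M's own examples, so treat them as assumptions that may differ in your installation:

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public class CreateContainer {
        public static void main(String[] args) throws Exception {
            URL url = new URL("http://127.0.0.1:8080/~/in-cse"); // assumed local IN-CSE
            HttpURLConnection con = (HttpURLConnection) url.openConnection();
            con.setRequestMethod("POST");
            con.setRequestProperty("X-M2M-Origin", "admin:admin");           // originator
            con.setRequestProperty("Content-Type", "application/xml;ty=3");  // ty=3 = Container
            con.setDoOutput(true);

            String body = "<m2m:cnt xmlns:m2m=\"http://www.onem2m.org/xml/protocols\" rn=\"DATA\"/>";
            try (OutputStream os = con.getOutputStream()) {
                os.write(body.getBytes(StandardCharsets.UTF_8));
            }
            System.out.println("HTTP " + con.getResponseCode()); // expect 201 Created
        }
    }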
To be practical, I would suggest you look directly at Releases 1 and 2 of oneM2M. The different partners have added a lot of improvements, making it easier to use.
In particular, Release 2 finalizes the semantic interworking framework built around the platform, providing inter-technology interworking and data sharing.
I hope this was useful.
Enrico Scarrone,
Telecom Italia - TIM
ETSI SmartM2M Chairman,
oneM2M Steering Committee Vice Chair
The traditional categorization of processes distinguishes integration-centric, human-centric and document-centric processes, with the last being a good candidate for placement inside the DMS (provided, of course, that the DMS has built-in support for BPM).
But I was unable to find a concrete, more detailed explanation of the distinction between those options.
Imagine a company that has an enterprise BPM solution and also a DMS with quite good support for BPM (e.g. FileNet).
In both systems you can create user screens as well as workflows (process logic).
Also, most processes that work with documents are quite "human-centric".
I am perfectly aware that choosing the target platform always depends on the requirements and specific circumstances, but I wonder if there are general rules or principles that could help me decide where to put the process layer of the whole solution.
Additional clarification:
I don't want to implement any new platform. As I indicated in the previous post, we already have a BPM platform (Oracle) and a DMS as well (FileNet with BPM support - Case Foundation). So the question is not about choosing a new platform, but more about setting the rules for using the existing products/platforms. There are a lot of new projects in the queue, and for some of them (those touching the area of working with documents) we need to decide the target platform(s). For example, when you have a simple process with a few steps, every step involves work with an existing document (the document, or at least its original version, is also an input to the process), and the requirements on the front end are not very complicated, it would be simpler to build the whole solution on the FileNet platform (mostly because of the cost). But I am wondering if there are some general rules along the lines of "consider this or that when you want to use only the DMS platform, or both platforms", etc. You could call these rules principles for development, reference architectures or something like that - something that guides you when designing the target architecture(s).
Thank you
I'm reposting the answer because I don't see a reason for the deletion (by @Bohemian).
I think it adds value to anyone asking the same question. @Bohemian could have at least specified why he deleted the post.
Here it goes:
You gave us a rather small amount of information. And what exactly is the question? What do you mean by "where to put the process layer"?
You shouldn't constrain yourself to only those DM systems that claim to have BPM built in. That's marketing speak behind which often lie two half-baked products. You should instead ask which standards-based integration points the system has, so you can integrate effortlessly, and then invest in the best-of-breed DM and the best BPM separately. All-in-one solutions are often too closed, difficult to extend and, above all, they bring free vendor lock-in with them.
What are your business requirements, i.e. what do you have to do? Are you implementing BPM inside an organization that already has DM, or not? Do you have a BPM platform already? Do you have any constraints/requirements when choosing either of those (vendor, technology foundation, Gartner quadrant...)?
What are the options you're considering for DM, and which options are you evaluating (if any) as a BPM platform? Have you already settled on IBM, or can you go elsewhere? Is open source an option?
What is your role/responsibility in this project?
EDIT - after the author's clarifications:
I have not worked with Oracle's BPM, but I can tell you that, although Case Foundation is more suited to Case Management, you can develop a complete Process Management solution with it (workflows, tasks, roles, deadlines, in-baskets, etc.).
If you go down that path and later come across the business need to allow business users to define their own case templates, take a look at IBM Case Manager: it builds on top of Case Foundation but also brings additional web UI features (built on IBM Content Navigator) suitable for business users (although, more often than not, it turns out that IT does that job).
A few IBM Redbooks about Case & Content Management that might help you make an informed decision:
Introducing IBM FileNet Business Process Manager - this is the former name for Case Foundation - the same product, new version.
Advanced Case Management with IBM Case Manager
Customizing and Extending IBM Content Navigator - you'll need this one for customizations, if you decide to go with CF (instead of Oracle).
Building IBM Enterprise Content Management Solutions From End to End - from ingestion to case/process management (contains Case Manager).
I agree with @Robert regarding integration; after all, before version 5.2, FileNet Content Platform Engine was FN Content Engine + FN Process Engine.
The word of advice I can give you is to first document all the features the business requires from BPM. Then do due diligence on both products, noting down which of those features each product supports. The answer, if not laid out in front of you, will then at least be much easier to reach.
You also have to take into account that IBM is oriented towards IBM BPM (formerly Lombardi) where process management is concerned. The former FN BPM is now pushed more towards Case Management (but the two are very similar paradigms).
You should definitely post back about your experience, whichever option you choose.
Good "luck" :)
I wanted to know if there are any open source tools for load testing web applications.
Is LoadRunner the perfect tool for this purpose from an enterprise perspective?
Could you clarify your question a bit? Are you looking to take the queries generated by the web application and then to reproduce them with a performance testing tool directly against the database or are you looking to exercise the web app and then analyze the database?
As far as what is best, this is a very subjective item and it comes back to that most dangerous of concepts, "requirements." The requirements of one organization may point the way to one tool over another, depending upon the technical needs of the application, the available skills within the existing/planned performance testing team, and the budget. Mercury certainly made the case for the ROI of LoadRunner at the enterprise level long before it became part of HP's software offerings, with the market responding by giving it the largest overall market share. However, as evidenced by its non-monopoly position, the requirements of other organizations have led to the adoption of different tools.
Build your requirements (technical, required skills and business), then evaluate the various market offerings to see which one works for you. The more interfaces you add, the more compelling a commercial tool becomes over an open source one. The greater the skills depth of your performance team, the more flexibility you have in using an open source tool, as you will need to build out some of the analytical pieces that a commercial tool includes by default. ...
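At its core, a load test (whether done with LoadRunner, an open source tool, or a script) is just many concurrent virtual users timing their requests. The following toy Java sketch shows the idea; the URL and the user/request counts are made up, and a real tool adds scripting, correlation, ramp-up and proper reporting on top of this:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.time.Duration;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicLong;

    public class MiniLoadTest {
        public static void main(String[] args) throws Exception {
            int users = 20, requestsPerUser = 50;  // hypothetical workload
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(URI.create("http://localhost:8080/"))
                    .timeout(Duration.ofSeconds(10))
                    .build();
            AtomicLong totalMillis = new AtomicLong();

            // Each pool thread plays one virtual user issuing sequential requests.
            ExecutorService pool = Executors.newFixedThreadPool(users);
            for (int u = 0; u < users; u++) {
                pool.submit(() -> {
                    for (int i = 0; i < requestsPerUser; i++) {
                        long start = System.nanoTime();
                        try {
                            client.send(request, HttpResponse.BodyHandlers.discarding());
                        } catch (Exception e) {
                            // a real tool would count and report errors
                        }
                        totalMillis.addAndGet((System.nanoTime() - start) / 1_000_000);
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(5, TimeUnit.MINUTES);
            System.out.println("mean latency: "
                    + totalMillis.get() / ((long) users * requestsPerUser) + " ms");
        }
    }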
We are currently evolving our development processes in an effort to become CMMI compliant (we will start with level 2 and move up from there). We are trying to locate an inexpensive (or free) tool that will allow us to develop requirements in the spirit of CMMI. In other words, we need to be able to enter our requirements, track changes to them, alert individuals when requirements change, perform traceability, etc. Our projects are typically small (3 to 7 developers and a tester or two).
We have looked at many of the commercial tools, but they cost more than we are able to afford. We looked at a few on SourceForge (OSRM and others) but could not find anything that was sufficiently mature that also had the features that we needed.
We are looking for suggestions for a tool that meets the above requirements.
INCOSE is an excellent resource for this sort of question. They maintain a Tools Database that indexes COTS and GOTS systems engineering tools. Some of the tools that perform requirements management also have high-level systems engineering functionality (CORE, for example), whereas others are more narrowly focused (e.g. RequisitePro).
Most of these tools cost money, though some provide limited free functionality (Workspace.com, for example). I would recommend against rolling your own solution, or adapting a tool that is not specifically intended for requirements management, because the hidden cost of getting it going, as well as its inefficiency at the intended task, could become burdensome.
If you absolutely can't afford to spend any money on a requirements tool, it would be better to use the free functionality from a commercial tool. But don't do that... pony up the cash for RequisitePro and sleep better knowing that you're getting the right tool for the job.
How about starting off with a wiki? We use TWiki, but there are many others available. The wiki we use:
sends an email when any pages change
stores the history of changes to each page
lets you create a hierarchy of requirements using the auto-linking of wiki pages
This seems to cover most of your items. Wikis like TWiki have plugins which may also help you.
If you only have 3 to 7 developers on a project, using one of the big commercial tools may be far too complex for what you need.
We're heavily into CMMI at our company, but all of our tools are developed in-house.
All I can recommend is to develop your own tools. You will at least have the advantage that they reflect your business process.
In general, for a new tool, we start off with a tool developed on a project, which is then shared with the rest of the company if it has been successful. Don't be afraid to use Excel to trace your requirements along with a status; together with a good change control system, such as Subversion, that gives you a lot of traceability.
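As a sketch of how little tooling that takes: suppose the Excel matrix is exported as a CSV with one row per requirement (id, status, linked test case). A few lines of Java (the file name and column layout are invented for this example) can then flag requirements that lack a test, which is exactly the kind of traceability report CMMI asks for:

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.List;

    public class TraceabilityCheck {
        public static void main(String[] args) throws Exception {
            // Hypothetical export: requirement id, status, linked test id
            List<String> rows = Files.readAllLines(Path.of("requirements.csv"));
            for (String row : rows.subList(1, rows.size())) {   // skip header row
                String[] cols = row.split(",", -1);             // keep empty cells
                String reqId = cols[0], status = cols[1], testId = cols[2];
                if (testId.isBlank()) {
                    System.out.println(reqId + " (" + status + ") has no linked test case");
                }
            }
        }
    }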
A team in the company I used to work for was working on customizing Visual Studio Team System work item templates to handle requirements tracking. One goal, which you should consider as well, was to enable traceability from requirements through to developer work items and then defects. This enables some powerful analysis of which requirements are tied to the most defects.