I keep seeing different implementations on aws-cdk, and I want to know if there is a preferred convention.
Single stack. This makes sense if all the resources are related (e.g., a Lambda function and its IAM role belong in the same stack, not in different ones). However, it runs into the 200-resource limit per stack, which becomes a problem once you start creating alarms for each table and so on.
Multiple stacks. This makes sense because it compartmentalizes the application neatly (e.g., all DDB in one stack, all IAM roles in another). However, I've had a bunch of issues with change set versions when there are dependencies between stacks (my Lambda stack depends on my IAM stack, but the Lambda stack used the old change set from the IAM stack, so now I can't update my stuff). I ended up taking the single-stack approach because of constant issues with this.
I would like to know the opinions on this; I would have expected to find a rule of thumb here, but so far I haven't come across one.
Thanks!
CloudFormation stacks aim to be updated atomically. That is, either all updates to the stack complete successfully or all updates are rolled back. I use the following checklist as a rule of thumb when grouping infrastructure components into stacks:
If the stack update fails, would I want to roll back all of its resources?
Am I likely to use some of this stack's resources in another stack?
Are the resources I'm about to add functionally related, e.g. do they refer to the same business capability?
If the answer to any of the above is yes, I tend to extract the resources into separate stacks.
So let's say I have an app with two resources, User and Message. Currently these resources are in their own zomes, where each zome has only one entry. But now I am thinking about moving everything into one zome (to reduce code and logic repetition), so one zome with two entries (User and Message). Is this a bad way of structuring an hApp? How should I decide whether an entry deserves its own zome?
Zomes are the lowest level of composition in the Holochain ecosystem. You want to break things into zomes so you can re-use them in other places. So the answer to your question is: put them together if your idea of a user is tightly bound to your idea of a message. Put them in separate zomes if you think you would use your message or user zome in another project.
I've watched some videos and read some blogs about it. SO has many questions and answers on the subject, but I can't find an exact answer to my question anywhere.
Almost every question and answer lacks usage context.
I have one mid-sized ASP.NET MVC monolith application running in one process on IIS. I want to (refactor and) go all the way with DDD (and CQRS, without separate storage for reads and writes for now), but to me it looks like an impossible mission without breaking some rules/guidelines/etc.
Bounded Context
For example, I have more than one BC. Each should not cross its boundaries, which means they should not share storage. Right?
That is not possible if you use the well-known solution (scattered everywhere over the web) of working with the NHibernate session and UoW.
Aggregate Root
Only one AR should be modified per transaction. When other ARs are involved, you should introduce eventual consistency (if I remember correctly, those are Eric Evans' words).
I try to do it, but it is not easy in an app like that. Pub/Sub does not work as desired, because when an event is published, all subscribers take their action within the same transaction (that's how NSB/MT do it).
If event handlers want to modify other ARs, they should be executed in separate transactions, right?
Is it possible to deal with this in a monolithic application (an application where all the code runs in one process)?
That is not possible if you use the well-known solution (scattered everywhere over the web)
[...]
when an event is published, all subscribers take their action within the same transaction
I think you're setting yourself useless and harmful constraints by trying to stick to some "state of the art".
Migrating an entire application to DDD + CQRS is a massive undertaking. Some areas of it don't have well-documented beaten paths yet, and you'll probably have a fair bit of exploration to do. My best advice would be to stay at a reasonable distance from "the way people do things" - both in traditional ASP.NET web apps, because mainstream practices often don't match the way DDD+CQRS works, and in CQRS itself, because the case studies out there are few and far between and most probably very domain specific, with a tendency to advocate heavy tools which may not make sense in your context.
You may need to think out of the box, adopt things incrementally and refrain from goldplating everything. You'll be better off starting with very simple implementations that suit your needs exactly than throwing a ton of tools and frameworks at your codebase.
For instance, do you really need a service bus, or could a simple Observer pattern suffice?
Regarding NHibernate, most implementations out there use a (single) Session Per Request approach, but just because it's the most popular doesn't mean it's the only one. Have you tried using multiple ISessions (one for each BC) managed at a more programmable level, such as per-action, or entirely manually? Conversely, have you considered sharing a database between Bounded Contexts at first and seeing for yourself whether that's actually bad?
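One way the Observer route can address the "everything runs in one transaction" concern is to buffer events while the aggregate is being saved and only dispatch them after the commit, so each handler is free to open its own unit of work. A minimal sketch of that idea (shown in Ruby; all names here are invented):

# A tiny in-process publisher: handlers are plain callables, and events
# raised during a transaction are buffered until the commit succeeds.
class DomainEvents
  def initialize
    @handlers = Hash.new { |h, k| h[k] = [] }
    @pending  = []
  end

  def subscribe(event_name, &handler)
    @handlers[event_name] << handler
  end

  def raise_event(event_name, payload)
    @pending << [event_name, payload]    # buffered, not dispatched yet
  end

  def after_commit
    events, @pending = @pending, []
    events.each do |name, payload|
      @handlers[name].each { |handler| handler.call(payload) }
    end
  end
end

events = DomainEvents.new
events.subscribe(:order_placed) { |order_id| puts "update read model for order #{order_id}" }

events.raise_event(:order_placed, 42)  # raised while the first AR's transaction is open
# ... commit the first AR's transaction here ...
events.after_commit                    # handlers now run afterwards, each free to
                                       # start its own transaction / unit of work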
I am currently working on a Rails 3 project that is divided up into four parts:
The public facing website
The administration website/backend
The models
The API for third party data access
As the models are shared between the other three components, I want to keep them out of any one main project; each part needs access to the models, but I don't want to repeat the code and end up with different versions everywhere.
Currently I have the model code in a gem, and in each project's Gemfile I am referencing them with the following line:
gem "my_models", :path => "../my_models/"
However, when I deploy to our test servers so my co-workers can evaluate the system, I need to pull the models from an external repository, so I swap the above line for the following:
gem "my_models", :git => "git@private.repository.com:username/my_models.git"
This in itself works well, but it's quite clunky in terms of 'versions': I need to bump the version every time I want to deploy changes to the test servers, switch the line over to use git instead of the local path, and make sure I've pushed the files properly.
Previously I was using a shared git submodule, but this was just as awkward.
I would rather not build everything into one mega-project, as those tend to become monstrous and difficult to maintain, and I would also like to separate concerns where possible, so that any changes I make to the administration site don't have much of a chance of impacting the other components - obviously the models have the potential to cause issues, but that is a risk I have considered and understand.
What would people out there suggest when it comes to something like this? Or, am I going about it completely the wrong way?
Some additional background:
This app is a rewrite of an existing website which followed the model of 'lump everything into the one project' - unfortunately there are two issues here:
The app was badly developed - I inherited this project and when I first picked it up the load times were ~2 minutes per page with a single user - this has since been reduced but still has issues throughout
We are currently at our capacity limit of the current site and we anticipate that we will need to take on more load in the next 6 months - however scaling out with an 'all in one' app means we'll be wasting resources on scaling out the back end of the site which doesn't need it.
Essentially there are two things I want to separate - the Front end (being the public website and the API) and the back end - everything I know about software development tells me that combining all this together is not an ideal solution (and past history shows me that splitting these two is a good move in terms of ensuring front end performance).
Perhaps I need to look at this from another angle - keep the models in each project, and instead of sharing them between projects have a cut-down subset of functionality for each functional area (i.e. the backend needs to know who created a post, but the front end doesn't really care about that, so omit that logic when reading in the model).
Drop the models project (put the models into one of the other parts; I'd suggest whichever you consider "more important"), put all the projects into a single repository (with separate project folders), and make symlinks to the models/libs/APIs/whatever (see the sketch below).
This makes sense because your code is highly coupled and you often need to make changes to a few projects at once (like updating models and updating the APIs that use them, etc.).
One nice thing about the single-repo-symlink setup is that your commits will be less fragmented and will usually represent a full feature implementation - easier to track bugs, read history and maintain the codebase.
Also, when you deploy you don't need to read from many repositories - one less point of failure right there.
The release process is also simpler with such a model, as a branch now holds the scope of all the projects.
There are some drawbacks - symlinks don't work that well on Windows, and whatnot - but for me it works perfectly.
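A rough sketch of how those links could be created with a Rake task rather than by hand (the folder layout here is hypothetical):

# Rakefile at the repository root; assumes sibling folders
# my_models/, public_site/, admin_site/ and api/
require 'fileutils'

namespace :links do
  desc 'Symlink the shared models into each app'
  task :create do
    shared = File.expand_path("../my_models/app/models", __FILE__)
    %w[public_site admin_site api].each do |app|
      target = File.expand_path("../#{app}/app/models/shared", __FILE__)
      FileUtils.rm_f(target)          # drop a stale link if one exists
      FileUtils.ln_s(shared, target)  # each app now sees the shared models
      puts "#{target} -> #{shared}"
    end
  end
end

Run once per checkout with: rake links:create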
You can create a mountable engine that contains the shared models and turn it into a gem. This handles the namespacing issues elegantly. Another nice aspect is that you also get to share your assets.
Watch this railscast for more details.
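For a rough idea of the shape (gem and model names are invented; this is roughly what "rails plugin new my_models --mountable" generates, plus one shared model):

# my_models/lib/my_models/engine.rb (generated by the plugin generator)
module MyModels
  class Engine < ::Rails::Engine
    isolate_namespace MyModels   # keeps models, helpers and routes namespaced
  end
end

# my_models/app/models/my_models/user.rb -- a shared model
module MyModels
  class User < ActiveRecord::Base
    self.table_name = 'users'    # reuse the existing table (Rails 3.2+ setter syntax)
  end
end

# In each host app's Gemfile (same as before, just pointing at the engine gem):
#   gem "my_models", :path => "../my_models"
# and then anywhere in that app:
#   MyModels::User.first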
You'll still have to manage the 'versions' by pushing changes that need testing to a remote repo, but you can use the new local git repo config in Bundler 1.2:
http://gembundler.com/man/bundle-config.1.html#LOCAL-GIT-REPOS
This way it will pick up your local commits and you won't have to keep changing your Gemfile upon deployment.
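For reference, a sketch of that setup (repo URL as in your example; note that Bundler's local override requires a :branch on the git source and expects your local checkout to be on that branch):

# Gemfile -- always points at the remote repo, pinned to a branch
gem "my_models", :git => "git@private.repository.com:username/my_models.git",
                 :branch => "master"

# One-off, on your development machine only (not on the test servers):
#   bundle config local.my_models ../my_models     # or an absolute path
#
# Bundler will now use the local checkout (including unpushed commits),
# while deployments keep fetching from the remote repository.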
I know this is not a solution to your particular problem, but I really suggest you merge all the projects into one. It is very common to have all these parts in one application, and there is no overhead. I don't think there is a non-awkward solution to this problem.
Take a look at Git subtree.
This may work for you:
http://igor-alexandrov.github.io/blog/2013/03/28/using-git-subtree-to-share-code-between-rails-applications/
OR
You can write a Rake task.
Example:
namespace :sync do
  desc 'Copy common models and tests from Master'
  task :copy do
    source_path = '/home/project/src-path'
    dest_path   = '/home/project/dest-path'

    # Copy all models & tests
    %x{cp #{source_path}/app/models/*.rb #{dest_path}/app/models/}
    %x{cp #{source_path}/spec/models/*_spec.rb #{dest_path}/spec/models/}

    # Database YML
    %x{cp #{source_path}/config/database.yml #{dest_path}/config/database.yml}
  end
end
See the link below.
http://hiltmon.com/blog/2013/10/14/rails-tricks-sharing-the-model/
Does your project have enough code coverage? If it does, I would try to separate the logic where it makes sense, and if a model is used in different projects, just pick one that fits best and write an API on top of that.
Then you could use that API to access those models (preferably using something like ActiveModel) in the other projects. You would still have simple CRUD, but all the core model logic would be handled externally.
Be sure to think it through before splitting them up, though. You want to keep the domain tight in each app you carve out of the behemoth you want to tear apart.
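As a rough sketch of that direction, using Rails 3's ActiveResource as one possible ActiveModel-style client (the host name and model are invented):

# In a consuming app: a thin client for a model exposed by the API app,
# which just needs standard resourceful JSON controllers for it.
class User < ActiveResource::Base
  self.site   = "http://api.internal.example.com"
  self.format = :json
end

user = User.find(1)   # issues GET /users/1.json against the API app
user.name             # attributes come straight from the JSON payload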
Regarding engines:
I have used engines for the same issue and they do help, but I still had to change my Gemfile to point to a local path while developing, then push the gem and pull it into the current project, which is the behaviour you're not fond of.
We have a very large Rails app that has two distinct sections: the front end and the CMS/Admin. We would like to break the app up into two pieces (for maintenance, as we have distinct teams that work on the front end vs. the back end, and they could have different release cycles).
One thought was to start a new Admin 2.0 app that has access to the models/schema from the original application, but has its own controllers/views and its own models that extend the original models until it is safe to fully decouple. Is this advisable? If not, what would be an appropriate plan to migrate away from one monolithic codebase?
Warning: this is a bit ranty and does not really go anywhere.
Having worked on a very large app that operates in the manner you describe (for scalability reasons), I still have mixed opinions (and no conclusive answers).
Currently we operate 3 major apps (+ one or two smaller ones that use a fragment of the schema).
RVW (our admin app): This is the only app that writes, runs on a single server, and is responsible for maintaining the schema.
reevoo.com: e-commerce, price comparison, stuff like that. This (for historic reasons) runs on a slightly different schema, on a read-only slave of RVW, with database views to map the schemas. All writes are done by putting things on queues that RVW picks up and acts on. This works very well, although the number of random DB-related issues (mostly related to the views) is a problem. The main problem with this app is the difficulty of sharing code (gems work well; I've often dreamed of bringing the schemas into line and sharing the core models in a gem!). We share code between apps using Ruby gems, and we test using lots of integration tests that cross app boundaries (using drunit; a presentation on this is available).
reevoomark: a very high-load B2B app. This has many servers, each with a full stack (one DB server and one app server per node). Their databases are populated by a DB export/import batch job. This works very well in the short term - the sheer flexibility of it is just ace - but integration testing between the apps is very hard.
My advice would be to avoid splitting the apps at all costs; keeping things DRY quickly becomes a major challenge otherwise. Stick with one app and two sets of routes (selected at startup by environment variables).
This gives you all the advantages of the other solutions while making code sharing implicit. Splitting your test packs out would make your test cycles shorter and things more manageable for the two teams. I would avoid working on different code bases, as doing so promotes the apps drifting apart and makes code sharing tricky (as in .com).
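A minimal sketch of the environment-variable idea, assuming Rails 3-style routing (the app name, the APP_ROLE variable and the resources are all invented):

# config/routes.rb -- one codebase, two route sets
MyApp::Application.routes.draw do
  if ENV['APP_ROLE'] == 'admin'
    # routes served only by the admin deployment
    namespace :admin do
      resources :posts
      resources :users
    end
  else
    # routes served only by the public front end
    resources :posts, :only => [:index, :show]
    root :to => 'posts#index'
  end
end

Each deployment then just sets APP_ROLE (or leaves it unset) in its environment.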
If you decide to split, have a good set of high-level cross-app tests. Custom (per-app) extensions to a core set of models sound like a good plan, although with distinct code bases and teams you may still end up with duplicated code. Rails engines should be a good way of sharing the models, but be prepared for model reloading to become a little schizophrenic.
Good luck!
Have you namespaced your admin controllers? That would be a relatively easy point of subdivision and would also avoid many of the negative side effects of forking your code into two apps.
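A minimal sketch of that namespacing (controller and resource names are invented; assumes an app/views/layouts/admin layout exists):

# config/routes.rb (inside the routes.draw block)
namespace :admin do
  resources :posts            # maps to Admin::PostsController under /admin/posts
end

# app/controllers/admin/base_controller.rb
class Admin::BaseController < ApplicationController
  # admin-only concerns (authentication filters, layout, etc.) live here
  layout 'admin'
end

# app/controllers/admin/posts_controller.rb
class Admin::PostsController < Admin::BaseController
  def index
    @posts = Post.all
  end
end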
Have you considered looking at Rails Engines? Added to Rails in 2.3.
I've been following Daniel Cazzulino's series about building a DI container using TDD. In part five of the series, he adds support for container hierarchies without commenting on what makes this feature useful. I've seen mention of support for hierarchies in many of the DI frameworks, but I'm having trouble understanding when they'd be used, and why. Can someone offer some insight?
I left a comment on kzu's blog asking the same question. It's a shame he didn't clarify the use-case for such a feature before coding it.
The only thing I could think of is if you wanted to have different types resolved from your container in different parts of your app. For example, if you had an order-entry system with two separate sections, and each section was identical except that they needed to present a different product list, you could create a child container for each section, and "override" the registration of your product repository in each. Whenever a section tried to resolve a product repository (or anything that depended on one) it would get the instance you set up in the child container rather than the parent. Sort of like overriding a virtual method.
This might be way off base, but it's the best I could come up with.
Here's a sample that uses child containers in a scenario similar to the one Matt describes. It uses child containers to select between different database configurations.
The key here is that most of the configuration is shared between the child containers; that shared part belongs in the parent container.
There is good reason to have child containers if dependency injection is fully embraced by the project. Let's imagine an application that processes messages from two different, but similar, systems. Most of the processing is the same, but there are variations to support compatibility with those systems. Our aim is to re-use the code we can, while writing different code where the requirements differ.
In OO programming, we wire together a series of classes that will collaborate to meet the system requirements. The DI container takes this responsibility. When a message arrives from a system, we want to build a set of collaborating classes suitable for processing a message from that particular system.
We have a top-level container which holds the items that do not vary between the two systems. Then we have child containers that do vary between systems. When a message arrives, we ask the appropriate child DI container for a messageProcessor. Based on the configuration of that container (falling back to the parent container as necessary), the DI framework can return a messageProcessor (an object backed by the appropriate collaborators) for the system in question.
Please leave a comment if this is not a clear answer. Also, you can search for "robot legs problem". Each leg is identical but one needs a left foot and the other needs a right foot. We could have a child DI container for each leg.
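The mechanics are similar in most DI frameworks; here is a deliberately tiny, hand-rolled sketch (in Ruby, all names invented) showing how resolution can fall back from a child container to its parent while overrides stay local to each child:

# Registrations are factory blocks; resolution falls back to the parent,
# but dependencies keep resolving against the container that was asked first,
# so child-level overrides win (the "robot legs" behaviour).
class Container
  def initialize(parent = nil)
    @parent = parent
    @registrations = {}
  end

  def register(key, &factory)
    @registrations[key] = factory
  end

  def resolve(key, origin = self)
    factory = @registrations[key]
    return factory.call(origin) if factory
    raise "#{key} is not registered" unless @parent
    @parent.resolve(key, origin)
  end

  def child
    Container.new(self)
  end
end

# Stand-in collaborators for the sketch
MessageParser    = Class.new
MessageProcessor = Struct.new(:parser, :gateway)
SystemAGateway   = Class.new
SystemBGateway   = Class.new

# Shared wiring lives in the parent...
root = Container.new
root.register(:parser)    { |c| MessageParser.new }
root.register(:processor) { |c| MessageProcessor.new(c.resolve(:parser), c.resolve(:gateway)) }

# ...and each child overrides only what differs per system.
system_a = root.child
system_a.register(:gateway) { |c| SystemAGateway.new }

system_b = root.child
system_b.register(:gateway) { |c| SystemBGateway.new }

system_a.resolve(:processor)   # wired with SystemAGateway
system_b.resolve(:processor)   # wired with SystemBGateway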
The best example that I'm aware of for nested containers is a windowing system. Purely for separation of concerns, it's very nice to have each tab/window get its own container, independent of the other tabs/windows, with all the window containers inheriting global dependencies from a parent container.
This is especially needed if you can have duplicate tabs/windows, since in many cases you want distinct instances of various classes for each duplicate tab/window.