How do I properly organize my Terraform configuration?

I'm trying to implement a couple of services using Terraform and I'm not quite sure how to handle variables efficiently (ideally the proper Terraform way).
Let's say I want to spin up a couple of VMs across several datacenters, one per datacenter, and every datacenter differs slightly (think AWS regions, VPC IDs, security group IDs, etc.).
Currently (in Ansible) I have a dict that contains one dict per region with the configuration specific to that region.
I would like to be able to deploy each datacenter on its own.
I have read through a lot of documentation and came up with a couple of ways I could realise this.
1. use vars-files
Have one vars-file per datacenter containing exactly that datacenter's config, and call terraform -var-file=${file}.
That somehow doesn't seem that elegant, but I'd rethink it if there were a way to dynamically load the vars-file according to the datacenter name I set.
2. use maps
Have loads of maps in an auto-loaded vars-file and reference them by datacenter name.
I've looked at this, and it doesn't look like it will stay readable in the future. It could work out if I created separate workspaces per datacenter, but since maps are string -> string only, I can't use lists.
3. use an external source
Somehow that sounds good, but since the docs already label the external data source as an 'escape hatch for exceptional situations', it's probably not what I'm looking for.
4. use modules and vars in .tf-files
Set up a module that does the work, one directory per datacenter, and one .tf-file per datacenter directory that contains the appropriate variables and uses the module.
This seems the most elegant, but then I don't have one central config, just lots of them to keep track of.
Which way is the 'proper' way to tackle this?

To at least provide an answer to anyone else that's got the same problem:
I went ahead with option 4.
That means I've set up modules that take care of orchestrating the services, with every variable defaulting to the testing environment (as in: if you don't specify anything extra you're setting up testing; if you want anything else you have to override the defaults).
Then I set up three 'branches' in my directory tree, testing, staging and production, and added subdirectories for every datacenter/region.
Every region directory contains a main.tf that sources the modules; all but testing contain a terraform.tfvars that defines the overrides. I also have a backend.tf in each of those directories that defines the backend for state storage and locking.
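To illustrate, a rough sketch of that layout (directory names and variables are examples, not my exact ones):

modules/
  service/              # reusable module, defaults reflect testing
testing/
  eu-west-1/
    main.tf             # sources the module, no overrides needed
production/
  eu-west-1/
    main.tf             # sources the module
    terraform.tfvars    # per-region overrides
    backend.tf          # state storage and locking

A region's main.tf then looks roughly like this:

module "service" {
  source        = "../../modules/service"
  region        = "eu-west-1"
  vpc_id        = "vpc-12345678"
  instance_type = "m4.large"
}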
I initially thought that doing it this way was a bit too complex and that I might be overengineering the problem, but it turned out that this solution is easier to understand and maintain.

Related

What are the differences in lambdas created by aws-cdk in the different packages?

In aws_cdk there are three different classes for creating lambdas:
aws_cdk.aws_lambda.CfnFunction
aws_cdk.aws_lambda.Function
aws_cdk.aws_sam.CfnFunction
What are the differences and reasons for choosing one over the other?
For every AWS resource that is supported by CloudFormation there are always two classes. The Cfn version maps directly to what you would normally write in YAML. The other version, in this case aws_lambda.Function, is a higher-level class that already sets useful default values to get up and running faster, e.g. memorySize and timeout.
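To make the difference concrete, a minimal sketch in Python (CDK v1; the stack name and asset path are made up):

from aws_cdk import aws_lambda, core

class SketchStack(core.Stack):
    def __init__(self, scope, id, **kwargs):
        super().__init__(scope, id, **kwargs)
        # High-level construct: CDK creates the execution role for you
        # and applies sensible defaults such as memorySize and timeout.
        aws_lambda.Function(
            self, "HighLevelFn",
            runtime=aws_lambda.Runtime.PYTHON_3_8,
            handler="index.handler",
            code=aws_lambda.Code.from_asset("lambda"),
        )

With aws_lambda.CfnFunction you would instead have to spell out everything yourself, including the IAM role ARN, exactly as you would in the CloudFormation YAML.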
I'm not completely sure about the aws_sam thing, but I wouldn't recommend using it, as the library is not stable and you can achieve the same thing without it.
e.g. https://github.com/aws-samples/aws-cdk-examples/tree/master/typescript/api-cors-lambda-crud-dynamodb
Our documentation deals with this subject: https://docs.aws.amazon.com/cdk/latest/guide/constructs.html#constructs_lib

Saving different sets of values of variables with a changing structure

I have several sets of values (factory settings, user settings...) for a structure of variables, and these values are saved in a binary file. So when I want to apply a certain setting I just load the specific file containing the desired values, and they are applied to the variables according to the structure. This works fine as long as the structure of the variables doesn't change.
I can't figure out how to do it when I add a variable but need to retain the values of the rest (when a structure in the program changes, I need to change the files so that they contain the new values according to the new structure while keeping the old ones).
I'm using a PLC system written in the ST language, but I'm looking for a general approach to solving this issue.
Thank you.
Providing a solution that is generic and works across different PLC platforms is not an easy task. There are many ways to accomplish this depending on the system/interface you actually want to use, e.g. PLC source code, OPC, ADS, Modbus, or special functions and add-ins from the vendor, and there are further possibilities such as language features on the PLC side. I have written three solutions to this with C#/ST (with OOP extensions) and ADS/OPC communication: one that parses the PLC source code first in C#, one with automatic generation on the PLC side, and one with an automatic registration system for the parameters backed by an Entity Framework compatible database as the parameter store. If you don't want to invest too much time in this, you should try the parameter management systems provided by your PLC vendor and live with their restrictions.
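The common thread in all of these is to stop dumping the raw structure as binary and instead persist self-describing name/value pairs, so a newly added variable simply falls back to its default on load and removed ones are ignored. A minimal C# sketch of the idea (all names hypothetical):

using System.Collections.Generic;

class ParameterStore
{
    // Persist this dictionary (e.g. as JSON or length-prefixed pairs)
    // instead of the raw structure.
    private readonly Dictionary<string, string> values = new Dictionary<string, string>();

    public void Set(string name, string value) => values[name] = value;

    public string Get(string name, string defaultValue)
    {
        // Variables added after the file was written are missing here
        // and therefore fall back to their default.
        return values.TryGetValue(name, out var v) ? v : defaultValue;
    }
}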

Ruby on Rails object reporting

I am currently developing a ruby application that has a large number of different objects. As part of this application, I would like to add a reporting engine that allows a user to create custom reports on virtually any variable within the application - for example, they could create a report that shows what percentage of customers have a telephone number, or the absolute number of suppliers whose street name starts with an E. The point is, they should be able to create any report on the data in the app, regardless of how obscure, without needing to rely on it having been created in the application already.
My question is: how do I start creating a structure that allows this to happen? Will it be necessary to specify all possible variables that could be used as part of a report (e.g. I would need to specify that customers.count, customers.email_address and suppliers.addresses.street_name are all variables available to the reporting engine for the example above), or could these somehow be made available automatically?
If it is necessary to specify the variables, what would be the best way to do this?
I have searched for some resources on this, but have not yet found any - if anyone can recommend a source, it would also be appreciated.
Thanks!
Consider yourself warned that this likely violates YAGNI. I would highly recommend first building reports for the most common kinds your users will want, so that you can make them usable and pretty. Doing this at the abstract level is an order of magnitude more complex, is error-prone, may lead to security issues if you're not careful, and will make it hard to produce pretty reports rather than generic-looking ones.
That said, take a look at something like ActiveAdmin, which provides custom filters and data exports. You should be able to add custom scopes to make it do what you want, and if it still doesn't, looking at the implementation should give you a good idea of what's involved.
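As a rough sketch of what that looks like (assuming a Customer model with email and telephone columns; all names are illustrative):

ActiveAdmin.register Customer do
  # sidebar filters, backed by Ransack predicates
  filter :email
  filter :telephone

  # a named scope acts as a canned report tab
  scope("Has phone") { |customers| customers.where.not(telephone: nil) }
end

Each filter and scope still has to be declared, which also answers your second question: you do end up specifying the reportable fields, but only one line per field.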

Shared models between two Rails apps - what is the ideal solution for Workflow?

I am currently working on a Rails 3 project that is divided up into four parts:
The public facing website
The administration website/backend
The models
The API for third party data access
As the models are shared between the other three components, I want to keep them out of one main project; each part needs access to the models, but I don't want to repeat the code and have different versions everywhere.
Currently I have the model code in a gem, and in each project's Gemfile I am referencing them with the following line:
gem "my_models", :path => "../my_models/"
However when I deploy to our test servers for my co-workers to evaluate the system on I need to pull the models from an external repository, so I swap out the above line with the following:
gem "my_models", :git => "git#private.repository.com:username/my_models.git"
This in itself works well, but it's quite clunky in terms of 'versions' (i.e. I need to bump the version every time I wish to deploy changes to the test servers), switch the line over to use git instead of the local path, and make sure that I'm pushing the files properly.
Previously I was using a shared git submodule, but this was just as awkward.
I would rather not build everything into one mega-project, as these tend to become monstrous and difficult to maintain, and I would also like to separate concerns if possible, so any changes I make to the administration site don't have much of a chance to impact the other components. Obviously the models have the potential to cause issues, but that is a risk I have considered and understand.
What would people out there suggest when it comes to something like this? Or, am I going about it completely the wrong way?
Some additional background:
This app is a rewrite of an existing website which followed the model of 'lump everything into the one project' - unfortunately there are two issues here:
The app was badly developed - I inherited this project, and when I first picked it up the load times were ~2 minutes per page with a single user - this has since been reduced, but the app still has issues throughout
We are currently at our capacity limit of the current site and we anticipate that we will need to take on more load in the next 6 months - however scaling out with an 'all in one' app means we'll be wasting resources on scaling out the back end of the site which doesn't need it.
Essentially there are two things I want to separate - the Front end (being the public website and the API) and the back end - everything I know about software development tells me that combining all this together is not an ideal solution (and past history shows me that splitting these two is a good move in terms of ensuring front end performance).
Perhaps I need to look at this from another angle - keep the models in each project, and instead of sharing them between projects have a cut-down subset of functionality for each functional area (i.e. the backend needs to know who created a post, but the front end doesn't really care about that, so omit that logic when reading in the model).
Drop the models project (put the models into one of the other parts; I'd suggest whichever you consider 'more important'), put all projects into a single repository (with separate project folders) and create symlinks to the models/libs/APIs/whatever.
Your code is highly coupled, and you often need to make changes to a few projects at once (like updating the models and then the APIs that use them, etc.).
One nice thing about the single-repo-symlink setup is that your commits will be less fragmented and will usually represent a full feature implementation - easier to track bugs, read history and maintain the codebase.
Also, when you deploy you don't need to read from many repositories - one less point of failure right there.
The release process is also simpler with such a model, as a branch will now hold the scope of all projects.
There are some drawbacks, like symlinks not working that well on Windows and whatnot, but for me it works perfectly.
You can create a mountable engine that contains the shared models and build a gem out of it. This handles the namespacing issues elegantly. Another nice aspect is that you get to share your assets as well.
Watch this railscast for more details.
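The skeleton of such an engine gem is small; roughly (gem name hypothetical):

# my_models/lib/my_models/engine.rb
module MyModels
  class Engine < ::Rails::Engine
    isolate_namespace MyModels
  end
end

The shared models then live under my_models/app/models/my_models/, and every app that bundles the gem picks them up automatically.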
You'll still have to manage the 'versions' by pushing changes that need to be tested to a remote repo, but you can use the new local config of Bundler 1.2
http://gembundler.com/man/bundle-config.1.html#LOCAL-GIT-REPOS
This way it will pick up your local commits and you won't have to keep changing your Gemfile upon deployment.
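In practice that's a one-off setting per development machine, something like (path hypothetical):

bundle config local.my_models /path/to/my_models

Bundler then uses your local checkout while the Gemfile keeps pointing at the git repository (with a :branch specified).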
I know that this is not a solution for your particular problem, but I really suggest you merge all the projects into one. It is very usual to have all these parts in one application, and there is no overhead. I think there is no non-awkward solution for this problem.
Take a look at Git subtree.
This may work for you:
http://igor-alexandrov.github.io/blog/2013/03/28/using-git-subtree-to-share-code-between-rails-applications/
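The basic flow looks like this (URL and prefix are placeholders):

git subtree add --prefix=app/shared git@example.com:username/my_models.git master
git subtree pull --prefix=app/shared git@example.com:username/my_models.git master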
OR
You can write a Rake task.
Example:
namespace :sync do
  desc 'Copy common models and tests from Master'
  task :copy do
    source_path = '/home/project/src-path'
    dest_path = '/home/project/dest-path'

    # Copy all models & tests from the master app into this one
    %x{cp #{source_path}/app/models/*.rb #{dest_path}/app/models/}
    %x{cp #{source_path}/spec/models/*_spec.rb #{dest_path}/spec/models/}

    # Copy the database config as well
    %x{cp #{source_path}/config/database.yml #{dest_path}/config/database.yml}
  end
end
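You would then run rake sync:copy in the destination project whenever the shared files change.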
See the below link.
http://hiltmon.com/blog/2013/10/14/rails-tricks-sharing-the-model/
Does your project have enough code coverage? If it does, I would try to separate the logic where it makes sense, and if a model is used in different projects, pick the one that fits best and write an API on top of it.
Then you could use that API to access those models (preferably using something like ActiveModel) on the other project. You would still have a simple CRUD, but all the core model logic would be handled externally.
Be sure to think it through before splitting them up, though. You want to keep the domain tight in each app you carve out of the Behemoth you want to tear apart.
Regarding engines:
I have used engines for the same issue, and they do help, but I still had to change my Gemfile to either point to a local path when developing, or push the gem and pull it into the current project - which is the behaviour you're not fond of.

Can I maintain two versions of one application with Git?

I’m writing an application with Ruby on Rails. This application will be delivered to a minimum of two different customer types. The base is always the same but some of the views differ. That’s mostly it. For now.
Is there any way, for example using branches, to use the same code base and separate only the views? I read the Git manual on branching but am still not sure whether this is the best way to achieve what I need.
Another idea would be forking. But is that clever? If I change something in the code of fork A, is it easy to merge those changes into fork B?
Branching and forking in Git are not bad at all, as the merge support is great (possibly the best of all VCSs).
Personally, I don't like the idea of branching or forking a project to provide different customizations, as it can very quickly become really difficult to manage, e.g. what are you going to do when you have 15 different deployments?
I think a better approach is to build the application so it behaves differently depending on some parameters. I'm well aware that sometimes the implementations are very different, so this might not always be enough.
Another approach is to build the core of your app as a gem which acts as a service to the application, so that the only things you customize per client are the views. Of course, the gem should be generic enough to provide all the services you need.
Please share with us what you decided, as there's no best answer for your question.
It would probably be better to make your product select between the types at either build time or runtime; that way you can use a single set of source files.
Otherwise it is possible with branches and merging, but you'll have more difficulty managing things. Forking is basically branching at this level.
I agree with @Augusto. You could run your app in two different environments, i.e. production_A and production_B. From there, you could use SettingsLogic to define configurations based on Rails.env, then reference those settings in your app when selecting which view to use, for example.
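A minimal sketch of that setup (file names and the setting key are hypothetical):

# app/models/settings.rb
class Settings < Settingslogic
  source "#{Rails.root}/config/application.yml"
  namespace Rails.env
end

# config/application.yml would then contain one block per environment,
# e.g. a views_variant key under production_A and production_B, and the
# app can branch on Settings.views_variant when rendering.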
