How do I reference an existing Timestream table in CDK? - aws-cdk

I am using AWS CDK (Python) to manage infrastructure, including Amazon Timestream databases and tables.
Suppose I have an existing Timestream table that I want to set permissions on using CDK.
The only way I have seen to get hold of a Timestream table is the CfnTable construct, a so-called level 1 construct, since Timestream does not expose level 2 constructs yet. With that construct, however, I am creating a table as part of my stack, not referencing an existing resource.
For level 2 constructs, such as Function for Lambda, it is possible to reference an existing resource, for example with the Function.from_function_name() method. I have not found any way to do something similar for level 1 constructs.
Is it possible? If so, how?

Just use the table's ARN directly as a resource in an IAM identity-based policy, then attach the policy to the appropriate role, e.g. a Lambda execution role.

Related

What's the best way to migrate DynamoDB tables data when using CDK?

We already have our infrastructure deployed on AWS and are now introducing IaC with AWS CDK. One goal of this migration is to handle changes to our database (DynamoDB) with zero downtime, or as little as possible, while preserving the data from the previous version of the infrastructure.
As mentioned on the CDK best practices page, we should "use generated resource names, not physical names", but doesn't that imply destroying and recreating the tables whenever something needs to change? If so, how can we migrate the data from the old table to the new one reliably and with minimal (preferably zero) downtime?
DMS (Database Migration Service) looks like the best approach, but I'm wondering how others are handling this requirement.
Reference
https://aws.amazon.com/blogs/devops/best-practices-for-developing-cloud-applications-with-aws-cdk/
One approach is to import the DynamoDB table into CloudFormation using the CloudFormation resource import feature, and then retroactively model it in CDK, making sure the logical ID of the resource matches the one CloudFormation assigned during the import. You can override the logical ID in CDK using the overrideLogicalId method of the CfnResource construct.
This way the table stays in place and no data migration is needed.

Rails app with test & live data access similar to stripe

I have a Rails 4 app that exposes an API to external users. The users also get access to a web dashboard where they can see and manage data related to their API calls, similar to Stripe. The Stripe dashboard also lets you switch between live and test data, and I want to replicate that behavior. Are there any design recommendations or a Rails way to do this? Should I use separate databases (db_live vs. db_test), or separate tables inside db_live with a *_test naming convention for the test data?
What's the Rails/ActiveRecord way to do this? I am using Postgres as the database.
One potential solution is to simply add a live (or test) boolean column to the appropriate tables and use scopes to apply the desired where condition. An index on the column would also help performance.
How practical this is depends on exactly how test data is generated and how much of it you expect per user/account.
I was searching for the same answer. So far, the best option I can think of is a multi-tenant setup: set a session variable to test or live and, based on it, connect to different databases or, in the case of Postgres, to different schemas. That way the code stays DRY and all the switching logic between the test and live systems lives in one place.
Here's a basic idea on multi-tenant systems:
http://jerodsanto.net/2011/07/building-multi-tenant-rails-apps-with-postgresql-schemas/

One Rails model with two database choices, chosen on instantiation

My Rails app (let's call it "Mira") will be interfacing with an existing app (let's call it "Jira"). Mira will store information about Jira and will be able to directly manipulate its database (because Jira, we'll say, has an incomplete API).
Since I want to directly manipulate Jira's database, it makes sense to have models representing each of Jira's tables in my Mira app. That way I can use ActiveRecord to manipulate it.
But in fact! There are two Jiras: a staging instance and a production instance.
So now I want my model that was happily interfacing with one instance of Jira to be able to use a different database.
It would be super sweet if I could do this when I instantiate my model, perhaps like this:
Jira::CustomField.new(:staging)
or something like that.
Thoughts? Better ways to accomplish this? Is my goal as stated even possible?
As the documentation for ActiveRecord::Base discusses, it is easy to have different Rails model classes connect to different databases using the establish_connection method.
However, if you want the same class to connect to multiple databases based on configuration, that will be kind of a pain. Do you need to use ActiveRecord here, or could you use DataMapper? I think that would work better in this scenario. Check out "What ORM to use in one process multiple db connections sinatra application?" for an example.

Store Selected User Info in Database

Using Symfony 1.4.x (with Propel), I've been given a task that requires me to share specific user info with multiple external systems. This is currently stored as a session (in memory) attribute, but I need to get it into a database so that I can create an API that will provide that info to authorized consumers.
I'd rather not overhaul the system to store all session data in the database (unless it's trivial and can handle namespaces), but I can't find any information on a recommended way for the myUser class to write data to the database. Is that possible, given that the class doesn't have a model per se (that I'm aware of)? Are there any recommended solutions or best practices for this?
Thanks.
UPDATE
If I were to boil this all the way down to its bare essentials, I guess the key question is this: What's the "best" way to read from/write to a database from the myUser class? Or, alternatively, is there another path that's recommended to accomplish the same end result?
Will storing the result of json_encode'ing or serialize'ing
$myUserInstance->getAttributeHolder()->getAll()
do the job?
In the absence of a good means of accessing a database from the myUser class, I opted for a slightly different path. I installed memcached in a location accessible by both apps and extended PHP's Memcached class to apply a few customizations. The apps can now share information by writing specially formatted keys to the memcached instance.
I opted not to overhaul my existing cache storage mechanism (why upset the apple cart?) and am reading from/writing to memcached selectively for information that truly needs to be shared.

Allow mass assignment in certain contexts

I have several Rails models that I'm trying to expose via a REST API. I'm looking for a simple way to allow mass assignment in certain contexts (through the API or the admin interface) but to disallow it when populating from user-facing forms.
There are a few catches. First, I'm populating a bunch of child objects using accepts_nested_attributes_for. Second, I'm using the resource_controller plugin, which automatically applies params for you in the standard update case. The API controllers are in their own "namespace", so I'm open to something DRY implemented in a base controller.
Several solutions come to mind, but no clean one presents itself at the moment. Any suggestions?
One option is a mixed solution: tweak ActiveRecord so that mass assignment is allowed or denied depending on the user's role.
You can implement the mechanism yourself or use an existing plugin such as safe_mass_assignment.
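The core idea, stripped of any plugin, is a per-context attribute whitelist applied before assignment. A plain-Ruby sketch (context names and attribute lists are illustrative, not from the original app):

```ruby
# Each context gets its own whitelist of assignable attributes.
ALLOWED_ATTRIBUTES = {
  form:  [:name],
  api:   [:name, :price],
  admin: [:name, :price, :internal_notes],
}.freeze

# Filter incoming params down to what the current context may assign.
def permitted_attributes(params, context)
  allowed = ALLOWED_ATTRIBUTES.fetch(context)
  params.select { |key, _value| allowed.include?(key) }
end
```

A base controller in the API namespace could call this with :api while the public controllers use :form, which keeps the switching logic in one place.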
