Right now I have a container for an API that I am looking to push to AWS Fargate. The API uses a connection string for a database on a privately hosted server. For testing, this has been stored as a string in my Go program, but I don't really want to push that, even with the program already compiled.
I have looked into using the AWS SDK for Go with Secrets Manager, but I am not sure if that is the best way to go, or if it will even work the way I am hoping. What is the best way to handle this?
Hardcoding things into the program is obviously never the best choice, so I share your pain and the need for something better. That could be:
Define the connection string in an environment variable. This does not keep the information "secret", so if it is something you do not want to be readable in any way, try the next option.
Define the connection string in Secrets Manager and refer to it in the environment variable definition.
Doing this with CloudFormation, in the first case we would have:
...
Environment:
  - Name: CONNECTION_STRING
    Value: 'YOUR VALUE'
...
While in the second case we would have:
...
Environment:
  - Name: CONNECTION_STRING
    Value: '{{resolve:secretsmanager:MySecret:SecretString:connection_string}}'
...
For my work, we are trying to spin up a Docker Swarm cluster with Puppet. We use puppetlabs-docker for this, which provides a docker::swarm defined type. It allows you to instantiate a Docker Swarm manager on your master node. This works so far.
On the Docker workers you can join the swarm manager with exported resources:
node 'manager' {
  @@docker::swarm { 'cluster_worker':
    join           => true,
    advertise_addr => '192.168.1.2',
    listen_addr    => '192.168.1.2',
    manager_ip     => '192.168.1.1',
    token          => 'your_join_token',
    tag            => 'docker-join',
  }
}
However, your_join_token needs to be retrieved from the Docker Swarm manager with docker swarm join-token worker -q. This is possible with an Exec.
My question is: is there a way (without breaking Puppet's philosophy of idempotence and convergence) to get the output of the join-token Exec and pass it along to the exported resource, so that my workers can join the master?
No, because the properties of resource declarations, exported or otherwise, are determined when the target node's catalog is built (on the master), whereas the command of an Exec resource is run only later, when the fully-built catalog is applied to the target node.
I'm uncertain about the detailed requirements for token generation, but possibly you could use Puppet's generate() function to obtain one at need, during catalog building on the master.
Update
Another alternative would be an external (or custom) fact. This is the conventional mechanism for gathering information from a node to be used during catalog building for that node, and as such, it might be more suited to your particular needs. There are some potential issues with this, but I'm unsure how many actually apply:
The fact has to know for which nodes to generate join tokens. This might be easier / more robust or trickier / more brittle depending on factors including
whether join tokens are node-specific (harder if they are)
whether it is important to avoid generating multiple join tokens for the same node (over multiple Puppet runs; harder if this is important)
notwithstanding the preceding, whether there is ever a need to generate a new join token for a node for which one was already generated (harder if this is a requirement)
If implemented as a dynamically-generated external fact -- which seems a reasonable option -- then when a new node is added to the list, the fact implementation will be updated on the manager's next puppet run, but the data will not be available until the following one. This is not necessarily a big deal, however, as it is a one-time delay with respect to each new node, and you can always manually perform a catalog run on the manager node to move things along more quickly.
It has more moving parts, with more complex relationships among them, hence there is a larger cross-section for bugs and unexpected behavior.
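To make the external-fact option concrete, here is a minimal sketch of an executable external fact that could be dropped onto the manager node. The path and file name are illustrative, and it assumes the manager is the only node where docker swarm join-token succeeds:

#!/usr/bin/env ruby
# /etc/puppetlabs/facter/facts.d/worker_join_token.rb (hypothetical location)
# Executable external facts must print key=value pairs on stdout.
token = `docker swarm join-token worker -q 2>/dev/null`.strip
puts "worker_join_token=#{token}" unless token.empty?

Facter would then report worker_join_token on the manager's next run, making it available during catalog building.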
Thanks to @John Bollinger I seem to have fixed my issue. In the end it was a bit more work than I envisioned, but this is the idea:
My puppet setup now uses PuppetDB for storing facts and sharing exported resources.
I have added an additional custom fact to the code base of Docker (in ./lib/facter/docker.rb).
The bare minimum site.pp file now contains:
node 'manager' {
  docker::swarm { 'cluster_manager':
    init           => true,
    advertise_addr => "${::ipaddress}",
    listen_addr    => "${::ipaddress}",
    require        => Class['docker'],
  }

  @@docker::swarm { 'cluster_worker':
    join       => true,
    manager_ip => "${::ipaddress}",
    token      => "${worker_join_token}",
    tag        => "cluster_join_command",
    require    => Class['docker'],
  }
}
node 'worker' {
  Docker::Swarm<<| tag == 'cluster_join_command' |>> {
    advertise_addr => "${::ipaddress}",
    listen_addr    => "${::ipaddress}",
  }
}
Do keep in mind that for this to work, puppet agent -t has to be run twice on the manager node, and once (after this) on the worker node. The first run on the manager will start the cluster_manager, while the second one will fetch the worker_join_token and upload it to PuppetDB. After this fact is set, the manifest for the worker can be properly compiled and run.
In the case of a different module, you have to add a custom fact yourself. When I was researching how to do this, I added the custom fact to Ruby's LOAD_PATH, but was unable to find it in my PuppetDB. After some browsing I found that facts from a module are uploaded to PuppetDB, which is why I tweaked the upstream Docker module.
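For reference, a custom fact along these lines could look roughly like the following. The fact name matches the one used in the manifest above, but this is only a sketch; the exact code in the tweaked module may differ:

Facter.add(:worker_join_token) do
  confine { Facter::Core::Execution.which('docker') }
  setcode do
    # Only a swarm manager can hand out join tokens; on any other node the
    # command produces no output and the fact is simply not set.
    token = Facter::Core::Execution.exec('docker swarm join-token worker -q')
    token unless token.nil? || token.empty?
  end
end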
In one of my old apps, I'm using several API connectors, like AWS or Mandrill for example.
For some reason (maybe I saw it somewhere, I don't remember), I use class constants to initialize these objects at the init stage of the application.
For example:
/initializers/mandrill.rb:
require 'mandrill'
MANDRILL = Mandrill::API.new ENV['MANDRILL_APIKEY']
Now I can access the MANDRILL constant anywhere in my application and use it (full path MyApplication::Application::MANDRILL, or just MANDRILL). It all works fine, for example:
def update_mandrill
  result = MANDRILL.inbound.update_route id, pattern, url
end
The question is: is it good practice to use such class constants? Or is it better to create a new instance in every method that uses it, like in this example:
def update_mandrill
  require 'mandrill'
  mandrill = Mandrill::API.new ENV['MANDRILL_APIKEY']
  result = mandrill.inbound.update_route id, pattern, url
end
Interesting question.
It's a very handy approach, but it may have cons in some scenarios.
Imagine you have a constant that either takes a lot of time to initialize or loads a lot of data into memory. When its initialization takes long, you essentially degrade app boot time (which may or may not be a problem; usually it will be in development).
If it loads a lot of data into memory, it may turn out to be a problem when running rake tasks, for example, which load the entire environment. You may hit memory limits in use cases in which you do not need this data at all.
I know one application that loads a lot of data during boot, and it's done very deliberately. Sure, the use case is a bit uncommon, but still.
Another thing to consider: imagine you're trying to establish a connection to an external service like Mongo or anything else. If this service is unavailable (which happens), your application won't be able to boot. Maybe this service is essential for the app to work, and without it the app would be "useless" anyway, but it's also possible that you stop everything just because the storage in which you keep logs does not work.
I'm not saying you shouldn't use the approach you suggested - I do it in my apps too - but you should be aware of the potential drawbacks.
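If boot time or an unreachable service does become a problem, one common mitigation is to build the client lazily on first use instead of in an initializer. A minimal sketch, where EmailClients is a module name invented for this example:

require 'mandrill'

module EmailClients
  # Builds the Mandrill client the first time it is requested, so a slow or
  # unreachable service no longer affects application boot.
  def self.mandrill
    @mandrill ||= Mandrill::API.new(ENV['MANDRILL_APIKEY'])
  end
end

Callers would then use EmailClients.mandrill.inbound.update_route id, pattern, url and pay the initialization cost only on the first call.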
Yes, pre-creating a pseudo-constant object (like that API client) is usually a good idea. However, there are, approximately, a thousand ways to go about it, and the constant is not at the top of my personal list.
These days I usually go with setting it in the env files.
# config/environments/production.rb
config.email_client = Mandrill::API.new ENV['MANDRILL_APIKEY'] # the real thing
# config/environments/test.rb
config.email_client = a_null_object # something that conforms to the same api, but does absolutely nothing
# config/environments/development.rb
config.email_client = a_dev_object # post to local smtp, or something
Then you refer to the client like this:
Rails.application.configuration.email_client
And the correct behaviour will be picked up in each env.
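The a_null_object placeholder above is left abstract; one hedged way to build such a stand-in is a "black hole" object (NullEmailClient is a made-up name):

# config/environments/test.rb (sketch)
class NullEmailClient
  # Swallows any message the real client would receive and does nothing,
  # so tests never talk to the Mandrill API.
  def method_missing(*_args, &_block)
    self
  end

  def respond_to_missing?(*_args)
    true
  end
end

config.email_client = NullEmailClient.new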
If I don't need this per-env variation, then I either use some kind of singleton object (EmailClient.get) or a global variable in the initializer ($email_client). It can be argued that a constant is better than a global variable, semantically and because it raises a warning when you try to reassign it. But I like that a global variable stands out more; you see right away that it's something special. (And then again, it's only #3 on the list, so I don't do it very often.)
Here is my scenario: I'm using Resque to queue a job in Redis, the usual way it's done in Rails. The format of my key looks something like this (as per my namespace convention):
"resque:lock:Jobs::XYZ::SomeCreator-{:my_ids=>[101]}"
The job runs successfully to completion, but the key still exists in Redis. For a certain flow, I need to queue and execute the job again with the same parameters (the key will essentially be the same), but it seems the job does not get queued.
My guess is that since the key already exists in Redis, it does not queue the job again.
Questions:
Is this behavior of Resque normal (not removing the key after successful completion)?
If yes, how should I tackle this scenario (as per best practices)?
If no, can you help me understand what is going wrong?
After a couple of hours of debugging, this is the observed behavior:
I was creating the job and passing the options (parameters) with symbolized keys, which created the Redis key for the job with the symbolized params embedded in it.
Example:
Jobs::Abc::SomeJobCreator.create({:some_ids => [101]}) would create the Redis key "resque:lock:Jobs::Abc::SomeJobCreator-{:some_ids=>[101]}" (notice the symbolized param inside the key).
Now when the after_perform hook executes, it tries to remove the Redis key, but it looks it up with stringified params: "resque:lock:Jobs::Abc::SomeJobCreator-{\"some_ids\"=>[101]}". That key obviously won't be found, as the key in Redis has symbolized params in it.
To fix this issue I had to change the job-creation calls in the code and use stringified params like this: Jobs::Abc::SomeJobCreator.create({'some_ids' => [101]}). This works fine.
Not sure if this has anything to do with the version of Resque; since it's an old codebase I haven't yet updated it. It's currently at Resque v1.25.2.
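For illustration, here is why the two spellings produce different Redis keys. The lock key is built from the job class plus the stringified arguments (as in resque-lock style plugins; exact details depend on the plugin in use), and enqueued arguments make a round trip through JSON before the hooks see them:

# What ends up in the lock key at enqueue time:
{ :some_ids => [101] }.to_s    #=> "{:some_ids=>[101]}"
{ 'some_ids' => [101] }.to_s   #=> "{\"some_ids\"=>[101]}"

# Resque serializes job arguments as JSON, and JSON has no symbols, so the
# after_perform hook only ever sees string keys:
require 'json'
JSON.parse({ :some_ids => [101] }.to_json)  #=> {"some_ids"=>[101]}

# Hence a lock written with symbol keys is never found (and never deleted)
# by the hook, while enqueuing with string keys keeps both sides consistent.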
I am using Django's multiple-database router concept, with multiple sites backed by different databases. Users from the base database log in to all the other sub-sites.
When I run syncdb on the base site it works properly (every time), but running syncdb on the other sites works only the first time; from the next run onwards it throws an integrity error like the one below:
django.db.utils.IntegrityError: (1062, "Duplicate entry
'22-add_somesame' for key 'content_type_id'")
Once I remove the multiple DB router settings from the project, syncdb works properly (every time).
So is this related to the multiple DB router, or something else?
Can anyone advise on this? Thanks.
The problem here is with the DB router and Django system objects. I've experienced the same issue with multiple DBs and routers. As I remember, the problem is with the auth.permission content types, which get mixed up between databases: the syncdb script tries to create them in all databases, and then it creates a permission content type for some object whose id is already reserved for a local model.
I have the following:
BASE_DB_TYPES = (
    'auth.user',
    'auth.group',
    'auth.permission',
    'sessions.session',
)
and then in the db router:
def db_for_read(self, model, **hints):
    if hasattr(model, '_meta') and str(model._meta) in BASE_DB_TYPES:
        return 'base_db'  # the alias of the base db that stores users
    return None  # the default database, or some custom mapping
EDIT:
Also, the exception might say that you're declaring a permission 'add_somesame' for your model 'somesame', while Django automatically creates add_, change_ and delete_ permissions for all objects.
I'm accessing Solr in a Ruby on Rails application by using rsolr (not Sunspot). I create the local solr object that I use to send requests like this:
solr = RSolr.connect(:url => "http://localhost:8983/solr")
As far as I understand, this is not really a connection but just an object that will issue requests on demand, so it shouldn't be expensive to keep it initialized, and it should never disconnect. Given that, it should be OK to have one global solr object, create it at start time, and forget about it. Right? But maybe it's not thread safe?
When should I create the solr connection?
All that the RSolr.connect method really does is sanitize and save the options that you're using. You can see that method here. It's passed a new connection object (which, notably, doesn't have an initialize method, so it's not doing anything when created) and the options that you pass to RSolr.connect.
So yes, you're right -- there is no harm at all in "connecting" once and leaving the object hanging around in a variable somewhere. (For example, I memoize the result of RSolr.connect in my Solr/Rails app.)
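For completeness, a minimal version of that memoization might look like this (the module and method names are just one way to organize it):

require 'rsolr'

module SolrClient
  # RSolr.connect only stores the sanitized options; no socket is opened here,
  # so it is cheap to build this object once and reuse it for the life of the
  # process.
  def self.connection
    @connection ||= RSolr.connect(:url => 'http://localhost:8983/solr')
  end
end

# Example usage elsewhere in the app:
# response = SolrClient.connection.get('select', :params => { :q => '*:*' })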