I am a newbie in CDK and need to understand the difference between higher-level constructs and lower-level constructs.
Can someone explain in simple words? :)
Warm regards.
CDK was built to make it easy for anyone to create components with preconfigured defaults. This means that, using higher-level constructs, you can create a resource with default values approved by AWS (and the community in general).
For example, if you were to create an S3 bucket using lower-level constructs, you would have to define the resource exactly as you would in a CloudFormation template, meaning you pick the options and you set the values for those options.
This is what the CDK code and the resulting CloudFormation template look like with a lower-level (L1) construct:
CDK Python (L1 Construct):
s3.CfnBucket(self, 's3_bucket_l1')
CloudFormation Template:
s3bucketl1:
  Type: AWS::S3::Bucket
If you were to create the same resource using the higher-level constructs (L2), CDK adds some defaults to the resources it creates. Here's the same example using L2 resources:
CDK Python (L2 Construct):
s3.Bucket(self, 's3_bucket_l2')
CloudFormation Template:
s3bucketl249651147:
  Type: AWS::S3::Bucket
  UpdateReplacePolicy: Retain
  DeletionPolicy: Retain
Expanding on the same example, if I were to add a PublicAccessBlockConfiguration to these S3 buckets, the L2 construct does the same thing in fewer lines of code:
CDK Python (L1 Construct):
s3.CfnBucket(
    self,
    's3_bucket_l1',
    public_access_block_configuration=s3.CfnBucket.PublicAccessBlockConfigurationProperty(
        block_public_acls=True,
        block_public_policy=True,
        ignore_public_acls=True,
        restrict_public_buckets=True
    )
)
CloudFormation Template:
s3bucketl1:
  Type: AWS::S3::Bucket
  Properties:
    PublicAccessBlockConfiguration:
      BlockPublicAcls: true
      BlockPublicPolicy: true
      IgnorePublicAcls: true
      RestrictPublicBuckets: true
CDK Python (L2 Construct):
s3.Bucket(
    self,
    's3_bucket_l2',
    block_public_access=s3.BlockPublicAccess.BLOCK_ALL
)
CloudFormation Template:
s3bucketl249651147:
  Type: AWS::S3::Bucket
  Properties:
    PublicAccessBlockConfiguration:
      BlockPublicAcls: true
      BlockPublicPolicy: true
      IgnorePublicAcls: true
      RestrictPublicBuckets: true
  UpdateReplacePolicy: Retain
  DeletionPolicy: Retain
This is just one example using S3 as a resource. In the case of Lambda functions, things get really interesting. Using L2 constructs, with only a few lines of code, you can create a Lambda function, and CDK will also generate an IAM role with default permissions. If you were to do the same thing with L1 constructs, you would have to define both the Lambda construct as well as the IAM role construct to generate the CloudFormation template.
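To make the difference concrete, here is a minimal sketch of the L2 case (the runtime, handler, and asset path are placeholder values of my choosing):
CDK Python (L2 Construct):
from aws_cdk import aws_lambda as lambda_

lambda_.Function(
    self,
    'my_function_l2',
    runtime=lambda_.Runtime.PYTHON_3_9,
    handler='index.handler',
    code=lambda_.Code.from_asset('lambda')
)
These few lines synthesize both an AWS::Lambda::Function and an AWS::IAM::Role with the basic execution policy attached. With L1 constructs you would have to declare the lambda_.CfnFunction and an iam.CfnRole (plus its policy document) yourself.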
In essence, both kinds of constructs produce CloudFormation templates, but the higher-level constructs give the user powerful ways to create AWS resources with preconfigured defaults and less code.
I have pointed MassTransit at my AWS account and given it a "scope". This creates one SNS topic per scope. Here's my config with the scope set to "development":
container.AddMassTransit(config =>
{
    config.AddConsumer<FooConsumer>(typeof(FooConsumerDefinition));
    config.UsingAmazonSqs((amazonContext, amazonConfig) =>
    {
        amazonConfig.Host(
            new UriBuilder("amazonsqs://host")
            {
                Host = "eu-north-1"
            }.Uri, h =>
            {
                h.AccessKey("my-access-key-is-very-secret");
                h.SecretKey("my-secret-key-also-secret");
                h.Scope("development", true);
            });
        amazonConfig.ConfigureEndpoints(amazonContext);
    });
});
// Somewhere else:
public class FooConsumerDefinition : ConsumerDefinition<FooConsumer>
{
    public FooConsumerDefinition()
    {
        ConcurrentMessageLimit = 1;
        // I used to set EndpointName, but I stopped doing that to test scopes
    }
}
If I change the scope and run it again, I get more SNS topics and subscriptions, which are prefixed with my scope. Something like:
development_Namespace_ObjectThatFooRecieves
However, the SQS queues aren't prefixed and won't increase in number.
Foo
The more scopes you run, the more SNS subscriptions 'Foo' will get. On top of that, if I start a consuming application that is configured to say "development", it will start consuming all the messages for all the different scopes. As a consequence, I'm not getting any environment separation.
Is there any way to have these different topics feed different queues? Is there any way to neatly prefix my queues alongside my topics?
In fact, what's the point of the "Scope" configuration if it only separates out topics, which then all go to the same queue to be processed indiscriminately?
NB. I don't think the solution here is to just use a separate subscription. That's significant overhead, and I feel like "Scope" should just work.
MassTransit 7.0.4 introduced new options on the endpoint name formatter, including the option to include the namespace, as well as add a prefix in front of the queue name. This should cover most of your requirements.
In your example above, you could use:
services.AddSingleton<IEndpointNameFormatter>(provider =>
    new DefaultEndpointNameFormatter("development", true));
That should give you what you want. You will still need to specify the scope as you have already done, since that is what is applied to SNS topic names.
SQS support for MassTransit has a long history, and teams hate breaking changes. That is why these aren't the new defaults: they'd break existing applications.
For my work, we are trying to spin up a Docker Swarm cluster with Puppet. We use puppetlabs-docker for this, which has a docker::swarm module. This module allows you to instantiate a Docker Swarm manager on your master node. This works so far.
On the Docker workers, you can join the Docker Swarm manager with exported resources:
node 'manager' {
  @@docker::swarm {'cluster_worker':
    join           => true,
    advertise_addr => '192.168.1.2',
    listen_addr    => '192.168.1.2',
    manager_ip     => '192.168.1.1',
    token          => 'your_join_token',
    tag            => 'docker-join',
  }
}
However, your_join_token needs to be retrieved from the Docker Swarm manager with docker swarm join-token worker -q. This is possible with an Exec.
My question is: is there a way (without breaking Puppet's philosophy of idempotence and convergence) to get the output of the join-token Exec and pass it along to the exported resource, so that my workers can join the master?
My question is: is there a way (without breaking Puppet's philosophy of idempotence and convergence) to get the output of the join-token Exec and pass it along to the exported resource, so that my workers can join the master?
No, because the properties of resource declarations, exported or otherwise, are determined when the target node's catalog is built (on the master), whereas the command of an Exec resource is run only later, when the fully-built catalog is applied to the target node.
I'm uncertain about the detailed requirements for token generation, but possibly you could use Puppet's generate() function to obtain one as needed, during catalog building on the master.
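For example, something like this (hypothetical; generate() requires a fully qualified command path and runs during catalog compilation on the master, so it can only work if the master node itself is a swarm manager):
$worker_join_token = generate('/usr/bin/docker', 'swarm', 'join-token', 'worker', '-q')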
Update
Another alternative would be an external (or custom) fact. This is the conventional mechanism for gathering information from a node to be used during catalog building for that node, and as such, it might be more suited to your particular needs. There are some potential issues with this, but I'm unsure how many actually apply:
The fact has to know for which nodes to generate join tokens. This might be easier / more robust or trickier / more brittle depending on factors including
whether join tokens are node-specific (harder if they are)
whether it is important to avoid generating multiple join tokens for the same node (over multiple Puppet runs; harder if this is important)
notwithstanding the preceding, whether there is ever a need to generate a new join token for a node for which one was already generated (harder if this is a requirement)
If implemented as a dynamically-generated external fact -- which seems a reasonable option -- then when a new node is added to the list, the fact implementation will be updated on the manager's next puppet run, but the data will not be available until the following one. This is not necessarily a big deal, however, as it is a one-time delay with respect to each new node, and you can always manually perform a catalog run on the manager node to move things along more quickly.
It has more moving parts, with more complex relationships among them, hence there is a larger cross-section for bugs and unexpected behavior.
Thanks to @John Bollinger I seem to have fixed my issue. In the end, it was a bit more work than I envisioned, but this is the idea:
My Puppet setup now uses PuppetDB for storing facts and sharing exported resources.
I have added an additional custom fact to the code base of the Docker module (in its ./lib/facter/docker.rb).
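For illustration, here is roughly what such a fact looks like (a sketch: the fact name matches the worker_join_token consumed below, and the docker guard is my own addition):
Facter.add(:worker_join_token) do
  setcode do
    if Facter::Core::Execution.which('docker')
      # Only a node that is already a swarm manager prints a token here;
      # elsewhere the command produces nothing and the fact stays empty.
      Facter::Core::Execution.execute('docker swarm join-token worker -q')
    end
  end
end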
The bare minimum site.pp now contains:
node 'manager' {
  docker::swarm {'cluster_manager':
    init           => true,
    advertise_addr => "${::ipaddress}",
    listen_addr    => "${::ipaddress}",
    require        => Class['docker'],
  }

  @@docker::swarm {'cluster_worker':
    join       => true,
    manager_ip => "${::ipaddress}",
    token      => "${worker_join_token}",
    tag        => "cluster_join_command",
    require    => Class['docker'],
  }
}

node 'worker' {
  Docker::Swarm<<| tag == 'cluster_join_command' |>> {
    advertise_addr => "${::ipaddress}",
    listen_addr    => "${::ipaddress}",
  }
}
Do keep in mind that for this to work, puppet agent -t has to be run twice on the manager node, and once (after this) on the worker node. The first run on the manager will start the cluster_manager, while the second one will fetch the worker_join_token and upload it to PuppetDB. After this fact is set, the manifest for the worker can be properly compiled and run.
In the case of a different module, you have to add the custom fact yourself. When I was researching how to do this, I added the custom fact to Ruby's LOAD_PATH, but was unable to find it in my PuppetDB. After some browsing, I found that facts from a module are uploaded to PuppetDB, which is the reason I tweaked the upstream Docker module.
I have an assignment to make an AI agent that will learn to play a video game using ML. I want to create a new environment using OpenAI Gym, because I don't want to use an existing one. How can I create a new, custom environment?
Also, is there any other way to start developing an AI agent to play a specific video game without the help of OpenAI Gym?
See my banana-gym for an extremely small environment.
Create new environments
See the main page of the repository:
https://github.com/openai/gym/blob/master/docs/creating_environments.md
The steps are:
Create a new repository with a PIP-package structure
It should look like this:
gym-foo/
  README.md
  setup.py
  gym_foo/
    __init__.py
    envs/
      __init__.py
      foo_env.py
      foo_extrahard_env.py
For the contents, follow the link above. Details which are not mentioned there are especially how some functions in foo_env.py should look. Looking at examples and at gym.openai.com/docs/ helps. Here is an example (the calls to self.env and hfo_py in _step are placeholders, apparently borrowed from the gym-soccer environment):
class FooEnv(gym.Env):
    metadata = {'render.modes': ['human']}

    def __init__(self):
        pass

    def _step(self, action):
        """
        Parameters
        ----------
        action :

        Returns
        -------
        ob, reward, episode_over, info : tuple
            ob (object) :
                an environment-specific object representing your observation of
                the environment.
            reward (float) :
                amount of reward achieved by the previous action. The scale
                varies between environments, but the goal is always to increase
                your total reward.
            episode_over (bool) :
                whether it's time to reset the environment again. Most (but not
                all) tasks are divided up into well-defined episodes, and done
                being True indicates the episode has terminated. (For example,
                perhaps the pole tipped too far, or you lost your last life.)
            info (dict) :
                diagnostic information useful for debugging. It can sometimes
                be useful for learning (for example, it might contain the raw
                probabilities behind the environment's last state change).
                However, official evaluations of your agent are not allowed to
                use this for learning.
        """
        self._take_action(action)
        self.status = self.env.step()
        reward = self._get_reward()
        ob = self.env.getState()
        episode_over = self.status != hfo_py.IN_GAME
        return ob, reward, episode_over, {}

    def _reset(self):
        pass

    def _render(self, mode='human', close=False):
        pass

    def _take_action(self, action):
        pass

    def _get_reward(self):
        """ Reward is given for XY. """
        if self.status == FOOBAR:
            return 1
        elif self.status == ABC:
            return self.somestate ** 2
        else:
            return 0
Use your environment
import gym
import gym_foo
env = gym.make('MyEnv-v0')  # the ID must match one registered by gym_foo (see below)
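The ID passed to gym.make has to match one that gym_foo registers at import time. A minimal sketch of gym_foo/__init__.py, assuming the layout above (the IDs here are my choice):
from gym.envs.registration import register

register(
    id='foo-v0',  # then: env = gym.make('foo-v0')
    entry_point='gym_foo.envs:FooEnv',
)
register(
    id='foo-extrahard-v0',
    entry_point='gym_foo.envs:FooExtraHardEnv',
)
This assumes gym_foo/envs/__init__.py imports FooEnv and FooExtraHardEnv from their modules.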
Examples
https://github.com/openai/gym-soccer
https://github.com/openai/gym-wikinav
https://github.com/alibaba/gym-starcraft
https://github.com/endgameinc/gym-malware
https://github.com/hackthemarket/gym-trading
https://github.com/tambetm/gym-minecraft
https://github.com/ppaquette/gym-doom
https://github.com/ppaquette/gym-super-mario
https://github.com/tuzzer/gym-maze
It's definitely possible. They say so in the documentation page, close to the end:
https://gym.openai.com/docs
As for how to do it, you should look at the source code of the existing environments for inspiration. It's available on GitHub:
https://github.com/openai/gym#installation
They did not implement most of their environments from scratch, but rather created wrappers around existing environments, giving them all an interface that is convenient for reinforcement learning.
If you want to make your own, you should probably go in this direction and try to adapt something that already exists to the Gym interface, though there is a good chance this will be very time-consuming; see the sketch below.
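As a rough sketch of that direction, here is what such an adapter could look like; every self.game call (send_input, screenshot, score_delta, is_over, restart) is a made-up stand-in for whatever API the existing game actually exposes:
import gym
import numpy as np
from gym import spaces

class GameWrapperEnv(gym.Env):
    """Adapts a pre-existing game object to the gym interface."""

    def __init__(self, game):
        self.game = game  # hypothetical game engine
        self.action_space = spaces.Discrete(4)  # e.g. up/down/left/right
        self.observation_space = spaces.Box(low=0, high=255,
                                            shape=(84, 84, 3), dtype=np.uint8)

    def step(self, action):
        self.game.send_input(action)      # assumed API
        ob = self.game.screenshot()       # assumed API
        reward = self.game.score_delta()  # assumed API
        done = self.game.is_over()        # assumed API
        return ob, reward, done, {}

    def reset(self):
        self.game.restart()               # assumed API
        return self.game.screenshot()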
There is another option that may be interesting for your purpose: OpenAI's Universe
https://universe.openai.com/
It can integrate with websites so that you train your models on Kongregate games, for example. But Universe is not as easy to use as Gym.
If you are a beginner, my recommendation is that you start with a vanilla implementation on a standard environment. After you get past the problems with the basics, go on to increment...
With delayed_job, I was able to do simple operations like this:
@foo.delay.increment!(:myfield)
Is it possible to do the same with Rails' new ActiveJob? (without creating a whole bunch of job classes that do these small operations)
ActiveJob is merely an abstraction on top of various background job processors, so many capabilities depend on which provider you're actually using. But I'll try not to depend on any particular backend.
Typically, a job provider consists of a persistence mechanism and runners. When offloading a job, you write it into the persistence mechanism in some way, and later one of the runners retrieves it and runs it. So the question is: can you express your job data in a format compatible with any action you need?
That will be tricky.
Let's define what a job definition is, then. For instance, it could be a single method call. Assume this syntax:
Model.find(42).delay.foo(1, 2)
We can use the following format:
{
  class: 'Model',
  id: '42', # whatever
  method: 'foo',
  args: [
    1, 2
  ]
}
Now how do we build such a hash from a given call and enqueue it to a job queue?
First of all, it appears we'll need to define a class with a method_missing to catch the called method name:
class JobMacro
  attr_accessor :data

  def initialize(record = nil)
    self.data = {}
    if record.present?
      self.data[:class] = record.class.to_s
      self.data[:id] = record.id
    end
  end

  def method_missing(action, *args)
    self.data[:method] = action.to_s
    self.data[:args] = args
    GenericJob.perform_later(data)
  end
end
The job itself will have to reconstruct that expression like so:
data[:class].constantize.find(data[:id]).public_send(data[:method], *data[:args])
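GenericJob itself can then be almost trivial (a sketch; depending on the queue backend, the hash may come back with string keys, hence the symbolize_keys):
class GenericJob < ActiveJob::Base
  queue_as :default

  def perform(data)
    data = data.symbolize_keys
    data[:class].constantize.find(data[:id]).public_send(data[:method], *data[:args])
  end
end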
Of course, you'll have to define the delay macro on your model. It may be best to factor it out into a module, since the definition is quite generic:
def delay
  JobMacro.new(self)
end
It does have some limitations:
Only supports running jobs on persisted ActiveRecord models. A job needs a way to reconstruct the callee in order to call the method on it; I've picked the most probable one. You could also use marshalling, but I consider that unreliable: the unmarshalled object may be invalid by the time the job gets to execute. The same goes for GlobalID.
It uses Ruby's reflection. It's a tempting solution to many problems, but it isn't fast and is a bit risky in terms of security. So use this approach cautiously.
Only one method call. No procs (you could probably do that with the ruby2ruby gem). It relies on the job provider to serialize arguments properly; if it fails to, help it with your own code. For instance, que uses JSON internally, so whatever works in JSON works in que. Symbols, for instance, don't.
Things will break in spectacular ways at first.
So make sure to set up your debugging tools before starting off.
An example of this is Sidekiq's backward (Delayed::Job) compatibility extension for ActiveRecord.
As far as I know, this is currently not supported. You can easily simulate this feature using a custom-defined proxy-job that accepts a model or instance, a method to be performed and a list of arguments.
However, for the sake of code testing and maintainability, this shortcut is not a good approach. It's more effective (even if you need to write a little more code) to have a specific job for everything you want to enqueue. It forces you to think more about the design of your app.
I wrote a gem that can help you with that: https://github.com/cristianbica/activejob-perform_later. But be aware that I believe that having methods all around your code that might be executed in workers is the perfect recipe for disaster if not handled carefully :)
I was reading a text describing Ruby and it said the following:
Ruby is considered a “reflective” language because it’s possible for a Ruby program to analyze itself (in terms of its make-up), make adjustments to the way it works, and even overwrite its own code with other code.
I'm confused by this term 'reflective' - is this mainly talking about the way Ruby can look at a variable and figure out whether it's an Integer or a String (duck typing), e.g.:
x = 3
x = "three" # Ruby reassigns x to a String type
To say Ruby is "reflective" means that you can, for instance, find out at runtime what methods a class has:
>> Array.methods
=> ["inspect", "private_class_method", "const_missing",
[ ... and many more ... ]
(You can do the same thing with an object of the class.)
Or you can find out what class a given object is...
>> arr = Array.new
=> []
>> arr.class
=> Array
And find out what it is within the class hierarchy...
>> arr.kind_of? Array
=> true
>> arr.kind_of? String
=> false
In the quote where they say "it’s possible for a Ruby program to analyze itself" that's what they're talking about.
Other languages such as Java do that too, but with Ruby it's easier, more convenient, and more of an everyday part of using the language. Hence, Ruby is "reflective."
No, it means that you can issue a Ruby command to get information about, well, just about anything. For example, you can type File.methods to get a listing of all methods belonging to the File class. You can do similar things with other classes and objects: listing methods, variables, and so on.
Class reopening is a good example of this. Here's a simple example:
class Integer
  def moxy
    if self.zero?
      self - 2
    elsif self.nonzero?
      self + 2
    end
  end
end

puts 10.moxy
By reopening a standard Ruby class, Integer, and defining a new method within it called moxy, we can perform a newly defined operation directly on a number. In this case, I've defined this made-up moxy method to subtract 2 from the integer if it's zero and add 2 if it's nonzero. This makes the moxy method available to all objects of class Integer in Ruby. (Here we use the self keyword to refer to the integer object itself.)
As you can see, it's a very powerful feature of Ruby.
EDIT: Some commenters have questioned whether this is really reflection. In the English language, the word reflection refers to looking in on your own thoughts, and that's certainly an important aspect of reflection in programming too: using Ruby methods like is_a?, kind_of?, and instance_of? to perform runtime self-inspection. But reflection also refers to the ability of a program to modify its own behavior at runtime. Reopening classes is one of the key examples of this; it's also called monkey patching. It's not without its risks, but all I am doing here is describing it in the context of reflection, of which it is an example.
It refers mainly to how easy it is to inspect and modify internal representations at run time in Ruby programs, such as classes, constants, and methods.
Most modern languages offer some kind of reflective capabilities (even statically typed ones such as Java), but in Ruby it is so easy and natural to use these capabilities that it makes a real difference when you need them.
It makes metaprogramming, for example, an almost trivial task, which is not true at all in other languages, even dynamic ones.
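For instance, generating a family of methods at load time is a one-liner per method, and reflection then lets you inspect and poke at the result from outside:
class Product
  # Define draft?, published? and archived? dynamically
  %w[draft published archived].each do |status|
    define_method("#{status}?") { @status == status }
  end
end

p = Product.new
p.instance_variable_set(:@status, 'draft')  # reflection: set internal state directly
puts p.draft?      # => true
puts p.published?  # => false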