AWS CDK Get Pinpoint Project/Application ID - aws-cdk

In the AWS CDK, it's straightforward to create a Pinpoint service. But how do you get the Project ID (also referred to as the Pinpoint App ID or Application ID) for use in subsequent CDK code?
Create a Pinpoint project:
const pinpointProject = new pinpoint.CfnApp(this, 'PinpointNotificationProject', {
  name: 'myProject',
});
In the AWS CloudFormation docs it says:
"When you pass the logical ID of this resource to the intrinsic Ref function, Ref returns the unique identifier (ApplicationId) for the Amazon Pinpoint application."
However, the following CDK code returns the project name, not the ID (the value of logicalId here is myProject).
cdk.Fn.ref(pinpointProject.logicalId); // This returns 'myProject'
pinpointProject.ref; // This also returns 'myProject'

This is confirmed fixed as of CDK version 1.130.0: the ref property now returns the Pinpoint project ID.
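Assuming a CDK version of 1.130.0 or later, that value can then be read directly (a minimal sketch using the construct from the question):
// With CDK >= 1.130.0, ref on a pinpoint.CfnApp resolves to the project ID.
const projectId = pinpointProject.ref;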

The problem you are running into is that Pinpoint is not a finished module. You can see this in that all the constructs within it are prefixed with Cfn (CloudFormation). This means they are bare-bones and not tied into all the interface hooks that the rest of the CDK makes use of to pass things around.
First, the logical ID is NOT the project name. The logical ID is part of the CloudFormation template that is generated for any resource CloudFormation is going to stand up. It links the given resource to the stack, so that any changes under the same logical ID are applied to the same deployed resource. It is only referenced internally within the CloudFormation stack and is never known outside it. The CDK uses the logical ID to generate the name of the resource if you do not specify one.
Second, a look at the documentation shows that CfnApp has the following property: attrArn. Meaning in your code, you would reference this as pinpointProject.attrArn. The ARN of a Pinpoint resource is something like arn:aws:mobiletargeting:region:accountId:apps/projectId, with, as you guessed, the projectId as the last value. You can split the string and get that value out, or use the ARN manipulation methods provided as part of the core module to extract what you need.
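For example, a minimal sketch of the split approach using the deploy-time intrinsics from the core module (the index assumes the ARN format shown above):
// The ARN looks like arn:aws:mobiletargeting:region:accountId:apps/projectId.
// Splitting on '/' yields two parts; the second one is the project ID.
// Fn.split/Fn.select work on deploy-time tokens, unlike plain string methods.
const projectId = cdk.Fn.select(1, cdk.Fn.split('/', pinpointProject.attrArn));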
Finally, even though the Pinpoint module is pretty much bare-bones, it may still be possible to pass the variable storing your Pinpoint construct object to whatever other resource requires it. I say may because, as mentioned, most of the Cfn-prefixed constructs do not have the proper hooks to do this well - but some do, and I've never worked with Pinpoint directly.
I recommend spending some time understanding how the CDK documentation is laid out. It's bare-bones in places, but once you understand how it's structured, these kinds of questions are readily answered within it.

Related

CFBundleGetFunctionPointerForName and dlsym return NULL for exported function

I have a fork of the JavaScriptCore framework, where I have added a function of my own, which is exported. The framework compiles just fine. Running nm on the framework reveals that the function (JSContextCreateBacktrace_unsafe) is indeed exported:
Leo-Natans-Wix-MPB:JavaScriptCore.framework lnatan$ nm -gU JavaScriptCore.framework/JavaScriptCore | grep JSContextCreateBacktrace
00000000004cb860 T _JSContextCreateBacktrace
00000000004cba10 T _JSContextCreateBacktrace_unsafe
However, I am unable to obtain the pointer of that function using CFBundleGetFunctionPointerForName or dlsym; both return NULL. At first, I used dlopen to open my framework, then tried using CFBundleCreate and then CFBundleGetFunctionPointerForName, but that also returns NULL.
What could cause this?
Update
Something fishy is going on. I renamed one of the JSC functions, and nm reflects this. However, dlsym is still able to find the function with the original name rather than the renamed one.
It's hard to track this down since it's highly dependent on your specific environment and circumstances, but it is very likely you're running into this issue because the system image has already been loaded and you haven't changed the name of the framework.
If you look at the source code for dlopen in dyld/dyldAPIS.cpp:1458, you'll notice the context passed to dyld is configured with matchByInstallName = true. This context is then passed to load which executes the various stages necessary for image loading. There are a few phases worth noting:
loadPhase2 in dyld/dyld.cpp:2896 extracts the ending of the framework path and searches for it in the search path
loadPhase5check in dyld/dyld.cpp:2712 iterates over all loaded images and determines whether any of them have a matching install name; if one does, it returns that image instead of loading a new one.
loadPhase5load in dyld/dyld.cpp:2601 finally loads the image if it wasn't loaded/found by any earlier step. (It's worth noting loadPhase5check is executed first, since image loading is a two-pass process.)
Given all of the above, I'd try renaming your framework to something besides JavaScriptCore.framework. Depending on the install name of both the system framework and your framework, I'd also recommend changing the install name. (There are plenty of blog articles and StackOverflow posts that document how to do this using install_name_tool -id.)
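If you want to confirm which loaded image a symbol is actually being resolved from, a small diagnostic sketch like the following can help (the framework path is hypothetical; the symbol name is taken from the nm output above):
#include <dlfcn.h>
#include <stdio.h>

// Resolve the symbol, then ask dyld which image it came from. If dli_fname
// points at the system JavaScriptCore rather than your fork, dyld matched an
// already-loaded image by install name instead of loading yours.
int main(void) {
    void *handle = dlopen("/path/to/your/JavaScriptCore.framework/JavaScriptCore", RTLD_NOW);
    if (handle == NULL) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }
    void *sym = dlsym(handle, "JSContextCreateBacktrace_unsafe");
    Dl_info info;
    if (sym != NULL && dladdr(sym, &info) != 0) {
        printf("resolved from image: %s\n", info.dli_fname);
    }
    return 0;
}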

Dataflow/Beam Templates, Productionization, Initialization, and ValueProviders

I have an Apache Beam job running on Google Cloud Dataflow, and as part of its initialization it needs to run some basic sanity/availability checks on services, pub/sub subscriptions, GCS blobs, etc. It's a streaming pipeline intended to run ad infinitum that processes hundreds of thousands of pub/sub messages.
Currently it needs a whole heap of required, variable parameters: which Google Cloud project it runs in, which bucket and directory prefix it stores files in, which pub/sub subscriptions it reads from, and so on. It does some work with these parameters before pipeline.run is called - validation, string splitting, and the like. In its current form, in order to start a job we've been passing these parameters to a PipelineOptionsFactory and issuing a new compile every single time, but it seems like there should be a better way. I've set up the parameters to be ValueProvider objects, but because they're being called outside of pipeline.run, Maven complains at compile time that ValueProvider.get() is being called outside of a runtime context (which, yes, it is).
I've tried using NestedValueProviders as in the Google "Creating Templates" document, but my IDE complains if I try to use NestedValueProvider.of to return a string as shown in the document. The only way I've been able to get NestedValueProviders to compile is as follows:
NestedValueProvider<String, String> pid = NestedValueProvider.of(
    pipelineOptions.getDataflowProjectId(),
    (SerializableFunction<String, String>) s -> s
);
(String pid = NestedValueProvider.of(...) results in the following error: "incompatible types: no instance(s) of type variable(s) T,X exist so that org.apache.beam.sdk.options.ValueProvider.NestedValueProvider conforms to java.lang.String")
I have the following in my pipelineOptions:
ValueProvider<String> getDataflowProjectId();
void setDataflowProjectId(ValueProvider<String> value);
Because of the volume of messages we're going to be processing, adding these checks at the front of the pipeline for every message that comes through isn't really practical; we'll hit daily account administrative limits on some of these calls pretty quickly.
Are templates the right approach for what I want to do? How do I go about actually productionizing this? Should (can?) I compile with maven into a jar, then just run the jar on a local dev/qa/prod box with my parameters and just not bother with ValueProviders at all? Or is it possible to provide a default to a ValueProvider and override it as part of the options passed to the template?
Any advice on how to proceed would be most appreciated. Thanks!
The way templates are currently implemented, there is no point at which to perform initialization/validation that is "post-template creation" but "pre-pipeline start".
All of the existing validation executes during template creation. If the validation detects that the values aren't available (because they are ValueProviders), the validation is skipped.
In some cases it is possible to approximate validation by adding runtime checks, either as part of the initial splitting of a custom source or as part of the @Setup method of a DoFn. In the latter case, the @Setup method will run once for each instance of the DoFn that is created. If the pipeline is batch, after 4 failures for a specific instance it will fail the pipeline.
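As a rough sketch of the @Setup approach (the DoFn, its parameter, and the specific check are assumptions, not from the original pipeline):
import org.apache.beam.sdk.options.ValueProvider;
import org.apache.beam.sdk.transforms.DoFn;

// Validation deferred to @Setup: it runs once per DoFn instance at execution
// time, where ValueProvider.get() is legal, instead of once per element.
class ValidatingFn extends DoFn<String, String> {
    private final ValueProvider<String> subscription;

    ValidatingFn(ValueProvider<String> subscription) {
        this.subscription = subscription;
    }

    @Setup
    public void setup() {
        String sub = subscription.get();
        if (sub == null || sub.isEmpty()) {
            throw new IllegalArgumentException("subscription must be provided");
        }
        // Further checks (subscription reachable, GCS blob readable, ...) go here.
    }

    @ProcessElement
    public void processElement(ProcessContext c) {
        c.output(c.element());
    }
}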
Another option for productionizing pipelines is to build the JAR that runs the pipeline, and have a production process that runs that JAR to initiate the pipeline.
Regarding the compile error you received -- the NestedValueProvider returns a ValueProvider -- it isn't possible to get a String out of that. You could, however, put the validation code into the SerializableFunction that is run within the NestedValueProvider.
Although I believe this will currently re-run the validation every time the value is accessed, it wouldn't be unreasonable to have the NestedValueProvider cache the translated value.
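For example, a sketch of that idea against the options interface shown above (the check itself is an assumed placeholder):
import org.apache.beam.sdk.options.ValueProvider;
import org.apache.beam.sdk.options.ValueProvider.NestedValueProvider;
import org.apache.beam.sdk.transforms.SerializableFunction;

// The translation function runs at access time, so the validation happens at
// runtime; the result is still a ValueProvider, not a String.
ValueProvider<String> projectId = NestedValueProvider.of(
    pipelineOptions.getDataflowProjectId(),
    (SerializableFunction<String, String>) id -> {
        if (id == null || id.isEmpty()) {
            throw new IllegalArgumentException("dataflowProjectId must be set");
        }
        return id;
    });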

Dropwizard: customize health check address and format

Is it possible to customize Dropwizard's healthcheck output so that, e.g., /health is used for healthchecks instead of /healthcheck, with output like {"status": 200}?
I realise I could simply write a new resource that does what ever I need, I was just wondering if there is a more standard way to do this.
From what I have read of the 0.7.1 source code, it's not possible to change the resource URI for healthchecks, unfortunately, and I highly doubt you can change the healthcheck format. I also remember people complaining about not being able to add REST resources to the admin page, only servlets. Maybe in 0.8.0?
Here are the details of what I've tracked so far on the source code. Maybe I have misread or misunderstood something, so somebody could fix it.
Metrics has actually written AdminServlet to add the healthcheck servlet in a way that checks the servlet config for whether the URI is defined:
this.healthcheckUri = getParam(config.getInitParameter(HEALTHCHECK_URI_PARAM_KEY), DEFAULT_HEALTHCHECK_URI);
But Dropwizard doesn't provide any way to inject this configuration on AbstractServerFactory:
handler.addServlet(new NonblockingServletHolder(new AdminServlet()), "/*");
NonblockingServletHolder is the one providing the config to AdminServlet, but it is created by AbstractServerFactory with an empty constructor and provides no way to change the config.
I've thought of and tried accessing the ServletHolder from the Environment object in the Application.run method, but the admin servlets are not created until after the run method has run:
environment.getAdminContext().getServletHandler().getServlets()[0].setInitParameter("healthcheck-uri", "/health");
Something like this in your run() function will help you control the URI of your healthchecks:
environment.servlets().addServlet(
    "HealthCheckServlet",
    new HealthCheckServlet(environment.healthChecks())
).addMapping("/health");
If you want to actually control what's returned, you need to write your own resource. Fetch all the healthchecks from the registry, run them, and return whatever aggregated value you want based on their results.
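A rough sketch of such a resource (the class name and path are assumptions):
import com.codahale.metrics.health.HealthCheck;
import com.codahale.metrics.health.HealthCheckRegistry;
import java.util.Collections;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

// Runs every registered health check and returns {"status": 200} or
// {"status": 500} depending on the aggregate result.
@Path("/health")
@Produces(MediaType.APPLICATION_JSON)
public class HealthResource {
    private final HealthCheckRegistry registry;

    public HealthResource(HealthCheckRegistry registry) {
        this.registry = registry;
    }

    @GET
    public Response health() {
        boolean healthy = true;
        for (HealthCheck.Result result : registry.runHealthChecks().values()) {
            if (!result.isHealthy()) {
                healthy = false;
                break;
            }
        }
        int status = healthy ? 200 : 500;
        return Response.status(status)
                .entity(Collections.singletonMap("status", status))
                .build();
    }
}
You would register it in run() with environment.jersey().register(new HealthResource(environment.healthChecks())).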

Similar function like getScriptPath from ZF1 in ZF2

I'm currently porting my ZF1 applications to ZF2 and was wondering if there is a function similar to $this->view->getScriptPath() from ZF1 in ZF2? I've already spent half a day on this but didn't find a good solution. It would also be fine to get at least the basePath of the module or the template folder of the module.
Based on the follow-up questions, what you are really looking for is the path to a given template file. This is actually relatively easy, assuming you're using the default PhpRenderer: you grab the resolver, and resolve the path.
If you're inside a view script already, the following should work:
$path = $this->resolver($templateName);
If you're elsewhere, you need access to either the PhpRenderer, or the ViewResolver. If you have access to the service manager, pull the ViewResolver service, and call resolve() on it:
$resolver = $services->get('ViewResolver');
$path = $resolver->resolve($templateName);
This is superior to knowing where the module lives, as the developer may have chosen to override the template within the application; the resolver will know where even the overridden template lives.

How does TFS's convertworkspaceitem work?

I'm trying to follow the instructions for deploying a database via TFS build listed here:
http://www.mytechfinds.com/articles/software-testing/6-test-automation/64-db-deployment-tfs
The instructions include notes about how to configure a ConvertWorkspaceItem element. I've followed the directions, but TFS remains unhappy with my setting for 'Result' and 'Workspace'. For now, I simply entered the text from the directions ('dbproj' and 'Workspace', respectively). TFS complains about my values:
Compiler error(s) encountered processing expression "dbproj". 'dbproj' is not declared. It may be inaccessible due to its protection level.
I'm trying to find basic tutorial information on the ConvertWorkspaceItem element, but other than the MSDN reference page there isn't a lot of info. Does anyone know much about configuring this element?
You need to specify valid variable names for both of these properties. There should already be a variable declared in the workflow called workspace. You will need to declare a variable of type String to receive the result of this activity and specify its name as the Result property. It looks like, in your linked article, the author must have already created a variable called dbproj. At the bottom of the workflow designer there is a Variables tab where you can define your own variables.
