How can I delete target group targets in CDK while the targets are in a separate stack that depends on the target group stack - aws-cdk

We have an issue removing an ECS service target from a target group, and then removing the target group itself (no longer needed) via CDK, because CDK complains that the target group is still in use by the ECS service in the ecs_service stack. We have our target group in one stack, ecs_load_balancer, and our ECS services in a different stack, ecs_service. The ecs_service stack depends on ecs_load_balancer.
When we remove the line that creates the dependency in the ecs_service stack, props.targetGroup.addTarget(this.service);, CDK tries to delete the output of the target group and modify the target group resource (as we can see in the cdk diff). Because CDK deploys the ecs_load_balancer stack first, it considers the target group still in use and throws an error when it tries to delete/modify it:
Export Personal-Dev-us-east-1-ACSCalling-EcsLoadBalancer:ExportsOutputRefLoadBalancerListenerTargetGroupGroup27D2B0EED93AD008 cannot be deleted as it is in use by Personal-Dev-us-east-1-ACSCalling-EcsService and Personal-Dev-us-east-1-ACSCalling-OnePod-EcsService
We can delete the ECS target from the target group in the AWS console, but not via CDK, because of this cross-stack reference problem.
I've tried manually adding back the output that gets deleted in ecs_load_balancer, but I still can't get rid of the following change, which still causes a deployment failure since the target group can't be modified:
[~] AWS::ElasticLoadBalancingV2::TargetGroup LoadBalancer/Listener/TargetGroupGroup LoadBalancerListenerTargetGroupGroup27D2B0EE replace
└─ [-] TargetType (requires replacement)
└─ ip
We couldn't find a way to get around this error from CDK. How can we remove this target group that we no longer need?

Forcibly export the value from the producer stack in the same deployment that removes the reference from the consumer. Then, in a separate deployment, remove the resource and its forced export from the producer. See the CDK docs on Stack.exportValue() for this procedure.
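A minimal sketch of the two deployments, in TypeScript inside the ecs_load_balancer stack (names are placeholders taken from the question; the exported attribute must match the one the consumers imported, here the target group's Ref/ARN):

// Deployment 1: force the export so the producer's template keeps the
// output, while props.targetGroup.addTarget(this.service) is removed
// from the ecs_service stack in the same deploy.
this.exportValue(this.targetGroup.targetGroupArn);

// Deployment 2: once ecs_service no longer imports the value, delete the
// exportValue() call above together with the target group construct, and
// deploy again.

Because the forced export keeps the output in the producer's template, the first deployment succeeds even though the consumers still reference it; the second deployment can then delete the target group cleanly.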

Related

How to set dependency-mapping binding in gradle bootBuildImage (Spring-boot 2.7.1, native)

I am using Spring Boot 2.7.1 with native configuration, following the guide in the link:
Spring native official doc
My problem is that when running bootBuildImage, the buildpack ["gcr.io/paketo-buildpacks/java-native-image:7.19.0"] tries to download the external dependency paketo-buildpacks/bellsoft-liberica from https://download.bell-sw.com/vm/22.3.0/bellsoft-liberica-vm-core-openjdk17.0.5+8-22.3.0+2-linux-amd64.tar.gz, which is not allowed by the company firewall.
I then found that you can configure dependency-mapping bindings for these dependencies within the required buildpack, at least using the pack CLI, per this pack CLI guide.
But when using the pack CLI alone, the Gradle bootBuildImage task becomes somewhat redundant, and I would have to use some external tool to finish the native Docker container and image. I would like to use only bootBuildImage to map these dependency bindings.
I found this binding function in the Gradle bootBuildImage docs, but I am not sure what string it expects, or whether the path should be similar to the pack CLI config; I can't find any relevant info.
The bootBuildImage config:
bootBuildImage {
    builder = 'docker.io/paketobuildpacks/builder:tiny'
    runImage = 'docker.io/paketobuildpacks/run:tiny-cnb'
    buildpacks = ['gcr.io/paketo-buildpacks/java-native-image']
    binding("bindnings/bellsoft-jre-config:/platform/bindings/bellsoft-jre-config")
    environment = [
        "BP_NATIVE_IMAGE" : "true",
    ]
}
The dependency-mapping config contains 2 files:
The type file contains "dependency-mapping":

echo "dependency-mapping" >> type

The file named after the sha256 of the bellsoft-liberica artifact, 3dea0f7a9312c738d22b5e399b6ce9abe13b45b2bc2c04346beb941a94e8a932, contains the URI to map:

echo "https://download.bell-sw.com/vm/22.3.0/bellsoft-liberica-vm-core-openjdk17.0.5+8-22.3.0+2-linux-amd64.tar.gz" >> 3dea0f7a9312c738d22b5e399b6ce9abe13b45b2bc2c04346beb941a94e8a932
And yes, I'm aware that this is the exact same URL; it is just to test that the binding config is set up correctly. If it is, the download should instead fail on an untrusted certificate.
Currently the build fails with:
Caused by: org.springframework.boot.buildpack.platform.docker.transport.DockerEngineException: Docker API call to 'localhost/v1.24/containers/create' failed with status code 400 "Bad Request"
at org.springframework.boot.buildpack.platform.docker.transport.HttpClientTransport.execute(HttpClientTransport.java:156)
at org.springframework.boot.buildpack.platform.docker.transport.HttpClientTransport.execute(HttpClientTransport.java:136)
at org.springframework.boot.buildpack.platform.docker.transport.HttpClientTransport.post(HttpClientTransport.java:108)
at org.springframework.boot.buildpack.platform.docker.DockerApi$ContainerApi.createContainer(DockerApi.java:340)
at org.springframework.boot.buildpack.platform.docker.DockerApi$ContainerApi.create(DockerApi.java:331)
at org.springframework.boot.buildpack.platform.build.Lifecycle.createContainer(Lifecycle.java:237)
at org.springframework.boot.buildpack.platform.build.Lifecycle.run(Lifecycle.java:217)
at org.springframework.boot.buildpack.platform.build.Lifecycle.execute(Lifecycle.java:151)
at org.springframework.boot.buildpack.platform.build.Builder.executeLifecycle(Builder.java:157)
at org.springframework.boot.buildpack.platform.build.Builder.build(Builder.java:115)
at org.springframework.boot.gradle.tasks.bundling.BootBuildImage.buildImage(BootBuildImage.java:521)
I assume this is caused by an invalid binding config, but I can't find what it should be.
Paketo configuration (binding)
Dependency mapping bindings can be tricky. There are a number of things that have to be just right, or the buildpacks won't pick up the binding and won't map dependencies.
While there is talk of changing this in buildpacks to make swapping out dependencies easier, the short-term solution is to use binding-tool.
You can run bt dm -b paketo-buildpacks/bellsoft-liberica and it will go download the dependencies from the specified buildpack and generate the binding files for you.
It will by default download dependencies and write the bindings to $PWD/bindings but you can change that. For example, I like to put my dependencies in my home directory so I can share them across apps. Ex: SERVICE_BINDING_ROOT=~/.bt/bindings bt dm ..., or export SERVICE_BINDING_ROOT=~/.bt/bindings (or whatever command you run to set an env variable in your shell).
Once you have the bindings created, you just need to point your app to them. How you set the property differs between Maven & Gradle, but the value of the property is the same. It should be <local-path>:<container-path>.
The local path should be the full or relative path to where you created the bindings with bt dm. The container path should almost always be /platform/bindings. This maps your full set of bindings locally to the full set of bindings that the buildpacks will consume. In other words, put all of your bindings into the same directory locally, map that to /platform/bindings and the buildpacks will see everything.
For example with Gradle: binding("bindings/:/platform/bindings").
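Applied to the question's build script, that might look like the following (a sketch; it assumes the bindings generated by bt dm live in a bindings/ directory at the project root):

bootBuildImage {
    builder = 'docker.io/paketobuildpacks/builder:tiny'
    runImage = 'docker.io/paketobuildpacks/run:tiny-cnb'
    buildpacks = ['gcr.io/paketo-buildpacks/java-native-image']
    // map the whole local bindings directory to the container's binding root
    binding("${project.projectDir}/bindings:/platform/bindings")
    environment = [
        "BP_NATIVE_IMAGE" : "true",
    ]
}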
You can adjust the container path by setting SERVICE_BINDING_ROOT in the container as well, but it doesn't offer a lot of advantage.
You can also set multiple entries for bindings, so long as the paths are unique. So you could set binding("/home/user/.bt/bindings/foo:/platform/bindings/foo") and also binding("bindings/bar:/platform/bindings/bar"). That would let you take bindings from two different locations locally and map them into the /platform/bindings directory so both would be visible to buildpacks. This gives you more fine-grained control but as you can see becomes pretty verbose.
Details on configuring Maven and configuring Gradle for buildpacks can be found at those links.

Could not detect supported target files in 'project directory': see documentation for supported languages and target files, and make sure you are in the right directory

I am new to Snyk. I have installed snyk-cli and ran snyk monitor in the root directory of my project, which contains two apps: client (ReactJS) and server (Python/Django). I have authenticated VS Code to connect to my Snyk account, but I got this error:
Could not detect supported target files in /Users/yusuf/projects/project_name.
Please see our documentation for supported languages and target files: https://snyk.co/udVgQ and make sure you are in the right directory.
I have looked at the official documentation, and it says this error can occur either because the language is not supported or because you are in the wrong directory. However, I have checked both: as mentioned, I am running Node (React) and Python (Django), both of which are supported by Snyk, and I am definitely in the correct directory. I have also cd'd into server to scan only one language at a time (in this case /Users/yusuf/projects/project_name/server), but I still get the same error as above.
I have some information that might help.
The CLI looks for a manifest in the current directory. Its default behavior is to find a manifest and perform an open source scan using snyk test.
You can have Snyk look for multiple manifests in a monorepo using --all-projects.
The first step in identifying the issue is to locate the manifests (package.json, Pipfile/pipenv files, and so on) relative to that root. Is each one in the root, and is it a supported manifest? If a manifest is not in the root, you can always point Snyk at it directly.
The second step is either to run with --all-projects or to target each manifest with snyk test --file=..., in which case you will likely have to specify the package manager. The docs say:
--file=
Specify a package file. When testing locally or monitoring a project, you can specify the file that Snyk should inspect for package information. When the file is not specified, Snyk tries to detect the appropriate file for your project.

--package-manager=<PACKAGE_MANAGER_NAME>
Specify the name of the package manager when the filename specified with the --file= option is not standard. This allows Snyk to find the file.

Example: $ snyk test --file=req.txt --package-manager=pip
The commands are documented here: https://docs.snyk.io/snyk-cli/commands/test
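For example, from the question's project root (the manifest file names are assumptions based on a typical React + Django layout):

$ cd /Users/yusuf/projects/project_name
$ snyk monitor --all-projects
$ snyk test --file=server/requirements.txt --package-manager=pip
$ snyk test --file=client/package.json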

How can I configure Node.js rollup in a Bazel workspace with multiple npm packages?

I am trying to sort out how to correctly use rollup_bundle() in bazel in a workspace using multiple languages, including JavaScript, and defining multiple npm packages in different directories.
It appears that the rollup Bazel rules assume a more centralized setup than I would like: in the context of this workspace, I would like each npm package to manage its own node_modules (see WORKSPACE.bazel in the reproducible example).
Small reproducible (toy example):
https://github.com/mdittmer/bazel-nodejs-rollup-multi-package/commit/a14408c083836ae024719e540ee1115e61eabc97
$ bazel build //js/alpha
Starting local Bazel server and connecting to it...
ERROR: /path/to/bazel-nodejs-rollup-multi-package/js/alpha/BUILD.bazel:6:14: every rule of type rollup_bundle implicitly depends upon the target '//@bazel/rollup/bin:rollup-worker', but this target could not be found because of: no such package '@bazel/rollup/bin': BUILD file not found in any of the following directories. Add a BUILD file to a directory to mark it as a package.
- /path/to/bazel-nodejs-rollup-multi-package/@bazel/rollup/bin
Documentation for implicit attribute rollup_worker_bin of rules of type rollup_bundle:
Internal use only
ERROR: Analysis of target '//js/alpha:alpha' failed; build aborted: Analysis failed
INFO: Elapsed time: 2.363s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (4 packages loaded, 5 targets configured)
currently loading: @rules_nodejs//nodejs/stamp
This error seems to suggest that I may need to specify rollup_bin and/or rollup_worker_bin, but I have had no luck attempting to refer to sources in //js/alpha/node_modules/@bazel/... or //js/alpha/node_modules/rollup/.... The npm documentation for @bazel/rollup suggests that because I am using npm_install() to set up my in-tree npm packages, I should not need to set rollup_bin (the related instructions begin with "If you didn't use the yarn_install or npm_install rule, [...]").
How do I keep my multiple-npm-packages-in-same-workspace setup while using bazel rules to run rollup?
I posted a viable solution here:
https://github.com/mdittmer/bazel-nodejs-rollup-multi-package/commit/07c0810a7e2d4185311f4bab4f36555648585b97
The issue appears to stem from the @build_bazel_rules_nodejs rules hard-coding @npm//... dependencies, i.e., they assume that the correct repository for loading npm module dependencies must be named @npm. To get multiple notions of @npm (theoretically--I only bothered to remap one such repository), I declared a local repository in the root //WORKSPACE.bazel file:
//WORKSPACE.bazel:

npm_install(
    name = "npm_alpha",
    package_json = "@alpha//:package.json",
    package_lock_json = "@alpha//:package-lock.json",
)

local_repository(
    name = "alpha",
    path = "js/alpha",
    repo_mapping = {
        "@npm": "@npm_alpha",
    },
)
The repo_mapping maps my choice of repository to the name @npm in the @alpha repository. Presumably, repeating this with similarly configured @beta and @npm_beta repositories (as sketched below) will achieve the desired configuration across multiple npm modules. In this setup the root //WORKSPACE.bazel does most of the "heavy lifting" and //js/alpha/WORKSPACE.bazel is practically empty.
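For a hypothetical second package beta, the repeated configuration might look like this (untested; it just follows the same pattern):

npm_install(
    name = "npm_beta",
    package_json = "@beta//:package.json",
    package_lock_json = "@beta//:package-lock.json",
)

local_repository(
    name = "beta",
    path = "js/beta",
    repo_mapping = {
        "@npm": "@npm_beta",
    },
)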
Aside: It appears that re-defining @npm in //js/alpha/WORKSPACE.bazel, with the configuration up to and including the npm_install() (with appropriately renamed name, package_json, and package_lock_json), does not interfere with the root repository working as intended, and has the side benefit that the //js/alpha repository also works correctly as a root repository if desired.
One unresolved issue I was surprised to encounter was this:
https://github.com/mdittmer/bazel-nodejs-rollup-multi-package/commit/c2bde0fc02bce8f4b1b42dc5881a4f9bc26186db
switching from npm_install() to yarn_install() did not work (see commit message in linked commit for details).

Share common code between 2 iOS apps: Swift Package Manager? Cocoapods? Monorepo?

I am about to start working on two new iOS apps, that will share a bunch of business logic, like the whole API layer plus the models. Instead of copying the shared code in both projects and trying to keep that in sync, I want to use some way to share the code.
My first thought was that I'd use the Swift Package Manager. It's really easy to create a new package in Xcode, but I don't really know how to use the local version of the shared package in both apps while I am working on it, while also making it work for other devs (or CI!) checking out the apps. I can't really commit the apps having a local path dependency, because that would only work for me, right?
So, can I use a local version of that core package while working on the 2 apps and that core package, but still have it all pushed to GitHub and that it compiles on CI and for other devs too? Or would it only work on my machine since it would reference a local path?
Would using Cocoapods with a private pod make things easier or would I run into the exact same problem of working on a local path dependency on my computer, but wanting to have it work for other devs too?
Or should I just go with a monorepo containing both apps and the shared code, and just have everything use relative paths? Would a SPM package inside of the monorepo be helpful in this situation, or just use relative paths in both apps?
I would recommend creating a (private) CocoaPod for your business logic. Both your apps can then refer to that pod via a release, via a commit or branch in the repo, or as a development pod, as needed.
In order to avoid editing Podfile all the time, you can control your pod's source via external files and/or environment variables. Here's what I do in a couple of my projects:
def library_pod
  src = ENV["POD_SOURCE"]
  if src == nil && File.file?(".pod_source")
    src = File.foreach(".pod_source").first.chomp
  end
  src = (src || "").split(":")
  case src[0]
  when "dev"
    # local dev pod, editable
    pod 'MyLibrary', :path => '../mylibrary', :inhibit_warnings => false
  when "head"
    pod 'MyLibrary', :git => 'https://github.com/mycompany/MyLibrary'
  when "branch"
    pod 'MyLibrary', :git => 'https://github.com/mycompany/MyLibrary', :branch => src[1]
  else
    # default: use the release version
    pod 'MyLibrary'
  end
end

target 'MyApp' do
  pod 'Pod1'
  pod 'Pod2'
  library_pod
end
target 'MyApp' do
pod 'Pod1'
pod 'Pod2'
library_pod
end
Usually I have POD_SOURCE=dev in the environment on my dev machine. The .pod_source file contains release on master, and whatever branch name is appropriate on feature branches, so that my CI can do the right thing.
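For example, to build against a feature branch without editing the Podfile (the branch name is a placeholder):

POD_SOURCE=branch:my-feature pod install

or commit the choice for the whole branch with echo "branch:my-feature" > .pod_source.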

iOS Google Tag Manager Integration: How to add multiple containers per App environment?

I completed the integration of the latest Google Tag Manager (v5) for iOS together with Firebase (https://developers.google.com/tag-manager/ios/v5/).
The big change here is that the default container file is not binary anymore, it is plain JSON.
The integration requires that you have a folder (not group!) with the name "container" inside your app workspace. Within this folder the container file should be located. This raises my issue: We have two different GTM Containers, one for the testing/development app and one for production.
Because it is a folder, it is not possible for me to add a different container file and set target references.
I cannot create an additional folder, since GTM requires the folder to be at root level with the exact name "container".
Does anybody have an idea how this can be solved?
Thanks,
Fahim
You should be able to configure an Xcode "run script" build phase that clears the container directory and copies the correct container into place.
Sample Run Script (if somebody has the same issue):
rm -vf "${SRCROOT}/root_folder/container/"*
cp "${SRCROOT}/root_folder/target/test/GTM-XXXXX.json" "${SRCROOT}/root_folder/container/"
It is important that this copy step runs first within Build Phases; otherwise some of GTM's precompile steps do not recognize the container.
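If both containers are kept in the repo, the same script can branch on the build configuration so each scheme copies its own container (a sketch; the production path and the GTM-PROD1 ID are placeholders):

rm -vf "${SRCROOT}/root_folder/container/"*
if [ "${CONFIGURATION}" = "Release" ]; then
    # production container
    cp "${SRCROOT}/root_folder/target/production/GTM-PROD1.json" "${SRCROOT}/root_folder/container/"
else
    # testing/development container
    cp "${SRCROOT}/root_folder/target/test/GTM-XXXXX.json" "${SRCROOT}/root_folder/container/"
fi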
