Could not detect supported target files in 'project directory', see documentation for supported languages and target files, make sure you are in the right directory - monitor

I am new to Snyk. I installed the Snyk CLI and ran snyk monitor on the root directory of my project, which contains two apps:
client == ReactJS, server == Python (Django). I have authenticated VS Code to connect to my Snyk account, but I got this error:
Could not detect supported target files in /Users/yusuf/projects/project_name.
Please see our documentation for supported languages and target files: https://snyk.co/udVgQ and make sure you are in the right directory.
I have looked at the official documentation, which says you might get this error either because the language is not supported or because you are in the wrong directory. However, I have checked both: as mentioned, I am running Node (React) and Python (Django), both of which are supported by Snyk, and I am definitely in the correct directory. I have also cd'd into server to scan only one language at a time (in this case /Users/yusuf/projects/project_name/server), but I still get the same error.

I have some information that might help.
The CLI looks for a manifest in the current directory. Its default behavior is to find a manifest and perform an open source scan via "snyk test".
You can have Snyk look for multiple manifests in a monorepo using --all-projects.
The first step in identifying the issue is to check for the package.json, Pipfile, or pip requirements files in that root. Not only must the manifest be in the root, it must also be a supported one. If it's not in the root, we can always point Snyk at those files, as shown below.
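A quick way to check, assuming the two-app layout from the question (the exact file names are guesses):
# list the manifests Snyk would need to find; adjust names to your project
ls client/package.json server/requirements.txt server/Pipfile 2>/dev/null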
The second step is to either run it with --all-projects, or to target each manifest with "snyk test --file=..", in which case you will likely also have to specify the package manager. The docs describe this:
--file=
Specify a package file.
When testing locally or monitoring a project, you can specify the file that Snyk should inspect for package information. When the file is not specified, Snyk tries to detect the appropriate file for your project.
--package-manager=<PACKAGE_MANAGER_NAME>
Specify the name of the package manager when the filename specified with the --file= option is not standard. This allows Snyk to find the file.
Example: $ snyk test --file=req.txt --package-manager=pip
These commands are documented here: https://docs.snyk.io/snyk-cli/commands/test
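Putting it together, a minimal sketch for the layout described in the question (the manifest file names are assumptions; substitute whatever actually exists):
cd /Users/yusuf/projects/project_name
# scan every supported manifest found under the current directory
snyk monitor --all-projects
# or target each app explicitly
snyk test --file=client/package.json
snyk test --file=server/requirements.txt --package-manager=pip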

Related

How to set dependency-mapping binding in Gradle bootBuildImage (Spring Boot 2.7.1, native)

I am using Spring Boot 2.7.1 with native configuration, following the guide linked below.
Spring Native official doc
My problem is that when running bootBuildImage, the buildpack ["gcr.io/paketo-buildpacks/java-native-image:7.19.0"] tries to download the external dependency paketo-buildpacks/bellsoft-liberica from https://download.bell-sw.com/vm/22.3.0/bellsoft-liberica-vm-core-openjdk17.0.5+8-22.3.0+2-linux-amd64.tar.gz, which is not allowed by the company firewall.
I then read that you can configure dependency-mapping bindings for these dependencies within the required buildpack, at least using this pack CLI guide.
But when using the pack CLI alone, the Gradle bootBuildImage task becomes somewhat redundant, and I would have to use an external tool to finish the native Docker container and image. I would like to use only bootBuildImage to map these dependency bindings.
I found this binding function in the Gradle bootBuildImage docs, but I am not sure what string it expects, or whether the path should be similar to the pack CLI config; I can't find any relevant info.
The bootBuildImage config:
bootBuildImage {
    builder = 'docker.io/paketobuildpacks/builder:tiny'
    runImage = 'docker.io/paketobuildpacks/run:tiny-cnb'
    buildpacks = ['gcr.io/paketo-buildpacks/java-native-image']
    binding("bindnings/bellsoft-jre-config:/platform/bindings/bellsoft-jre-config")
    environment = [
        "BP_NATIVE_IMAGE" : "true",
    ]
}
The dependency-mapping config contains 2 files:
The type file contains:
echo "dependency-mapping" >> type
The sha256 file, named 3dea0f7a9312c738d22b5e399b6ce9abe13b45b2bc2c04346beb941a94e8a932 (the SHA-256 of the bellsoft-liberica artifact), contains:
echo "https://download.bell-sw.com/vm/22.3.0/bellsoft-liberica-vm-core-openjdk17.0.5+8-22.3.0+2-linux-amd64.tar.gz" >> 3dea0f7a9312c738d22b5e399b6ce9abe13b45b2bc2c04346beb941a94e8a932
And yes, I'm aware that this is the exact same URL, but this is just to test that the binding config is set up correctly. If it is, the build should instead fail on an untrusted certificate when downloading.
Currently the build fails with:
Caused by: org.springframework.boot.buildpack.platform.docker.transport.DockerEngineException: Docker API call to 'localhost/v1.24/containers/create' failed with status code 400 "Bad Request"
    at org.springframework.boot.buildpack.platform.docker.transport.HttpClientTransport.execute(HttpClientTransport.java:156)
    at org.springframework.boot.buildpack.platform.docker.transport.HttpClientTransport.execute(HttpClientTransport.java:136)
    at org.springframework.boot.buildpack.platform.docker.transport.HttpClientTransport.post(HttpClientTransport.java:108)
    at org.springframework.boot.buildpack.platform.docker.DockerApi$ContainerApi.createContainer(DockerApi.java:340)
    at org.springframework.boot.buildpack.platform.docker.DockerApi$ContainerApi.create(DockerApi.java:331)
    at org.springframework.boot.buildpack.platform.build.Lifecycle.createContainer(Lifecycle.java:237)
    at org.springframework.boot.buildpack.platform.build.Lifecycle.run(Lifecycle.java:217)
    at org.springframework.boot.buildpack.platform.build.Lifecycle.execute(Lifecycle.java:151)
    at org.springframework.boot.buildpack.platform.build.Builder.executeLifecycle(Builder.java:157)
    at org.springframework.boot.buildpack.platform.build.Builder.build(Builder.java:115)
    at org.springframework.boot.gradle.tasks.bundling.BootBuildImage.buildImage(BootBuildImage.java:521)
I assume this is caused by an invalid binding config, but I can't find what it should be.
Paketo configuration (binding)
Dependency mapping bindings can be tricky. There are a number of things that have to be just right, or the buildpacks won't pick up the binding and won't map dependencies.
While there are talks of how we can change this in buildpacks to make swapping out dependencies easier, the short-term solution is to use binding-tool.
You can run bt dm -b paketo-buildpacks/bellsoft-liberica and it will download the dependencies for the specified buildpack and generate the binding files for you.
By default it downloads the dependencies and writes the bindings to $PWD/bindings, but you can change that. For example, I like to put my dependencies in my home directory so I can share them across apps: SERVICE_BINDING_ROOT=~/.bt/bindings bt dm ..., or export SERVICE_BINDING_ROOT=~/.bt/bindings first (or whatever command sets an env variable in your shell).
Once you have the bindings created, you just need to point your app to them. How you set the property differs between Maven & Gradle, but the value of the property is the same. It should be <local-path>:<container-path>.
The local path should be the full or relative path to where you created the bindings with bt dm. The container path should almost always be /platform/bindings. This maps your full set of bindings locally to the full set of bindings that the buildpacks will consume. In other words, put all of your bindings into the same directory locally, map that to /platform/bindings and the buildpacks will see everything.
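For reference, a hedged sketch of what such a binding directory looks like when assembled by hand; the mirror URL here is a placeholder, and the file name is the SHA-256 of the artifact, as in the question:
# the type file tells the buildpack what kind of binding this is
mkdir -p bindings/bellsoft-jre-config
echo "dependency-mapping" > bindings/bellsoft-jre-config/type
# one file per dependency: name = SHA-256 of the artifact, content = URL to fetch instead
echo "https://internal-mirror.example.com/bellsoft-liberica-vm-core-openjdk17.tar.gz" > bindings/bellsoft-jre-config/3dea0f7a9312c738d22b5e399b6ce9abe13b45b2bc2c04346beb941a94e8a932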
For example with Gradle: binding("bindings/:/platform/bindings").
You can adjust the container path by setting SERVICE_BINDING_ROOT in the container as well, but it doesn't offer a lot of advantage.
You can also set multiple entries for bindings, so long as the paths are unique. So you could set binding("/home/user/.bt/bindings/foo:/platform/bindings/foo") and also binding("bindings/bar:/platform/bindings/bar"). That would let you take bindings from two different locations locally and map them into the /platform/bindings directory so both would be visible to buildpacks. This gives you more fine-grained control but as you can see becomes pretty verbose.
Details on configuring Maven and configuring Gradle for buildpacks can be found at those links.

How does the AOSP build system produce .rsp files and how to get them?

According to How does AOSP 9.0 build system link the executable? and What does # mean in this clang command in AOSP build log?, when linking a module, AOSP seems to produce a .rsp file that contains all the object files the module needs, and passes the file name as a parameter to the link command, for example:
prebuilts/clang/host/linux-x86/clang-4691093/bin/clang++ /OpenSource/Build/Android/9.0.0_r30/soong/.intermediates/bionic/libc/crtbegin_so/android_x86_64_core/crtbegin_so.o #/OpenSource/Build/Android/9.0.0_r30/soong/.intermediates/frameworks/base/libs/hwui/libhwui/android_x86_64_core_shared/libhwui.so.rsp ......
But the .rsp files seem to be removed after the build.
The question is: how are these files generated, and how can I get them? This may require learning and modifying the build scripts, which is out of reach for me.
The answer may be in the Ninja build manual, which mentions .rsp files:
https://ninja-build.org/manual.html
The following is the relevant info copied out:
rspfile, rspfile_content
if present (both), Ninja will use a response file for the given command, i.e. write the selected string (rspfile_content) to the given file (rspfile) before calling the command and delete the file after successful execution of the command.
This is particularly useful on Windows OS, where the maximal length of a command line is limited and response files must be used instead.
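Two practical consequences follow from that. First, since the rspfile_content string is part of the generated build statements, it can be read straight out of the .ninja files without rerunning the build (a hedged sketch; the exact paths vary by AOSP release and product):
# build statements declare rspfile / rspfile_content per module
grep -n 'rspfile' out/soong/build.ninja | head
# show the full build statement for the module from the example above
grep -B 2 -A 6 'libhwui.so.rsp' out/soong/build.ninja
Second, Ninja only deletes the file after successful execution, so an interrupted or failing link should leave the .rsp file behind in the output tree.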

Finding the lib directory during common test

My question is: how should my Erlang app reliably find a binary in the priv directory, not only in production when installed properly, but also during common test?
I realised today, when I added a Travis CI configuration to an old Erlang app and pushed it to GitHub, that the process by which it works locally for me is more fragile than I thought. The Travis CI build failed because it, not unreasonably, checked out my repo into a directory named after the repo, which is of the form erlang-APP. Locally, my app is in a directory called APP-VSN.
The result is that a call to code:lib_dir(APP) returns a correct result during the common test run locally, but if I rename my current directory to erlang-APP instead of APP-VSN (just APP works too), my local build fails exactly as it does on Travis CI, because code:lib_dir(APP) returns {error,bad_name}. The behaviour is as though .. were added to the library path for rebar ct.
Renaming my GitHub repo from erlang-APP to APP resolves the Travis CI build failure, but knowing that the tests only pass depending on the name of the directory the repo is checked out into doesn't sit right with me.
One way could be to use a soft link (either in the repo under version control, or created when initializing the tests), and make your Erlang code path go via the link. E.g., "./APP" -> ".", or "./lib/APP" -> "..".
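A hedged sketch of the second variant, with APP standing in for the real application name; using ERL_LIBS is one way, among others, to make the code path go via the link:
# create lib/APP -> .. so code:lib_dir(APP) resolves regardless of the checkout dir name
mkdir -p lib
ln -sfn .. lib/APP
# put the link on the library path for the test run
ERL_LIBS=$PWD/lib rebar ct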

iOS Google Tag Manager Integration: How to add multiple containers per App environment?

I completed the integration of the latest Google Tag Manager (v5) for iOS together with Firebase (https://developers.google.com/tag-manager/ios/v5/).
The big change here is that the default container file is not binary anymore, it is plain JSON.
The integration requires that you have a folder (not group!) with the name "container" inside your app workspace. Within this folder the container file should be located. This raises my issue: We have two different GTM Containers, one for the testing/development app and one for production.
Because it is a folder, I cannot add a different container file and set target references.
I cannot create an additional folder either, since GTM requires the folder at root level with the exact name "container".
Does anybody have an idea how this can be solved?
Thanks,
Fahim
You should be able to configure an Xcode "run script" build phase that clears the container directory and copies the correct container into place.
Sample run script (if somebody has the same issue):
# remove whatever container is currently in place, then copy the right one in
rm -vf "${SRCROOT}/root_folder/container/"*
cp "${SRCROOT}/root_folder/target/test/GTM-XXXXX.json" "${SRCROOT}/root_folder/container/"
It is important that this copy step runs first within Build Phases; otherwise some of GTM's precompile steps will not recognize the container.
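To cover both environments, a hedged extension of the same idea: switch on the build configuration. The file names GTM-TEST.json and GTM-PROD.json are hypothetical; CONFIGURATION is a standard Xcode build setting:
# pick the container that matches the build configuration
if [ "${CONFIGURATION}" = "Release" ]; then
  CONTAINER="${SRCROOT}/root_folder/target/prod/GTM-PROD.json"
else
  CONTAINER="${SRCROOT}/root_folder/target/test/GTM-TEST.json"
fi
rm -vf "${SRCROOT}/root_folder/container/"*
cp "${CONTAINER}" "${SRCROOT}/root_folder/container/"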

Physical Path in Beanstalk

I'm a total newbie with Beanstalk. I'm developing a web application in which a sealed, black-box plugin is used. That plugin needs a physical path, with full permissions, to use as a cache.
Any solution?
You can use .ebextensions files in the main project to, for example, create a directory and change the access rights on it. It is not clear from your question how you install the plugin (e.g. is it a service that is loaded after the web application is installed, or is it part of the web application?).
Execute a command in the .ebextensions file such as in:
How to grant permission to users for a directory using command line in Windows?
You'll find an introduction to container customization at
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-windows-ec2.html
Be careful about the format of the files (i.e. spaces, not tabs; it is best to edit them in a separate text editor). Experiment with simple commands first, so that you get the hang of how the commands are executed.
Note: The .ebextensions commands are executed on every deployment, so your script should check whether the directory already exists and only create it if it doesn't. Otherwise execution will fail when you try to create a directory that already exists. In a second step you can add the permissions.
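A hedged sketch of what such a config could look like for a Windows environment; the file name, the cache directory path, and the IIS_IUSRS grant are all assumptions to adapt to your setup:
# .ebextensions/01-plugin-cache.config
commands:
  01_create_cache_dir:
    # idempotent: only create the directory if it does not already exist
    command: if not exist "C:\plugin-cache" mkdir "C:\plugin-cache"
  02_grant_full_access:
    # give the IIS worker accounts full control so the plugin can write its cache
    command: icacls "C:\plugin-cache" /grant "IIS_IUSRS:(OI)(CI)F"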
