How to set a dependency-mapping binding in Gradle bootBuildImage (Spring Boot 2.7.1, native) - docker

I am using Spring Boot 2.7.1 with native configuration, following the guide in the link below.
Spring Native official doc
My problem is that when running bootBuildImage, the buildpack ["gcr.io/paketo-buildpacks/java-native-image:7.19.0"] tries to download the external dependency paketo-buildpacks/bellsoft-liberica from https://download.bell-sw.com/vm/22.3.0/bellsoft-liberica-vm-core-openjdk17.0.5+8-22.3.0+2-linux-amd64.tar.gz, which is not allowed by the company firewall.
I then found that you can configure dependency-mapping bindings for these dependencies within the required buildpack, at least according to this pack CLI guide.
But when using the pack CLI directly, the Gradle bootBuildImage task becomes somewhat redundant, and I would then have to use some external tool to produce the native Docker container and image. I would like to use only bootBuildImage to map these dependency bindings.
I found this binding function in the Gradle bootBuildImage docs, but I am not sure what string it expects, or whether the path should be similar to the pack CLI config; I can't find any relevant info.
Here is the bootBuildImage config:
bootBuildImage {
    builder = 'docker.io/paketobuildpacks/builder:tiny'
    runImage = 'docker.io/paketobuildpacks/run:tiny-cnb'
    buildpacks = ['gcr.io/paketo-buildpacks/java-native-image']
    binding("bindings/bellsoft-jre-config:/platform/bindings/bellsoft-jre-config")
    environment = [
        "BP_NATIVE_IMAGE" : "true",
    ]
}
The dependency-mapping binding directory contains two files:
The type file contains the string dependency-mapping.
The file named after the BellSoft Liberica artifact's sha256, 3dea0f7a9312c738d22b5e399b6ce9abe13b45b2bc2c04346beb941a94e8a932, contains the download URI https://download.bell-sw.com/vm/22.3.0/bellsoft-liberica-vm-core-openjdk17.0.5+8-22.3.0+2-linux-amd64.tar.gz.
And yes, I'm aware that this is the exact same URL; it is only there to verify that the binding config is set up correctly. If it is, the build should instead fail with an untrusted-certificate error when downloading.
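To be explicit, the binding directory was created like this:

mkdir -p bindings/bellsoft-jre-config
echo "dependency-mapping" > bindings/bellsoft-jre-config/type
echo "https://download.bell-sw.com/vm/22.3.0/bellsoft-liberica-vm-core-openjdk17.0.5+8-22.3.0+2-linux-amd64.tar.gz" > bindings/bellsoft-jre-config/3dea0f7a9312c738d22b5e399b6ce9abe13b45b2bc2c04346beb941a94e8a932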
Currently the build fails with:
Caused by: org.springframework.boot.buildpack.platform.docker.transport.DockerEngineException: Docker API call to 'localhost/v1.24/containers/create' failed with status code 400 "Bad Request"
at org.springframework.boot.buildpack.platform.docker.transport.HttpClientTransport.execute(HttpClientTransport.java:156)
at org.springframework.boot.buildpack.platform.docker.transport.HttpClientTransport.execute(HttpClientTransport.java:136)
at org.springframework.boot.buildpack.platform.docker.transport.HttpClientTransport.post(HttpClientTransport.java:108)
at org.springframework.boot.buildpack.platform.docker.DockerApi$ContainerApi.createContainer(DockerApi.java:340)
at org.springframework.boot.buildpack.platform.docker.DockerApi$ContainerApi.create(DockerApi.java:331)
at org.springframework.boot.buildpack.platform.build.Lifecycle.createContainer(Lifecycle.java:237)
at org.springframework.boot.buildpack.platform.build.Lifecycle.run(Lifecycle.java:217)
at org.springframework.boot.buildpack.platform.build.Lifecycle.execute(Lifecycle.java:151)
at org.springframework.boot.buildpack.platform.build.Builder.executeLifecycle(Builder.java:157)
at org.springframework.boot.buildpack.platform.build.Builder.build(Builder.java:115)
at org.springframework.boot.gradle.tasks.bundling.BootBuildImage.buildImage(BootBuildImage.java:521)
I assume this is caused by an invalid binding config, but I can't find what it should be.
Paketo configuration (binding)

Dependency mapping bindings can be tricky. There are a number of things that have to be just right, or the buildpacks won't pick up the binding and won't map dependencies.
While there has been talk of changing this in buildpacks to make swapping out dependencies easier, the short-term solution is to use binding-tool.
You can run bt dm -b paketo-buildpacks/bellsoft-liberica and it will download the dependencies for the given buildpack and generate the binding files for you.
By default it downloads the dependencies and writes the bindings to $PWD/bindings, but you can change that. For example, I like to put my dependencies in my home directory so I can share them across apps. Ex: SERVICE_BINDING_ROOT=~/.bt/bindings bt dm ..., or export SERVICE_BINDING_ROOT=~/.bt/bindings (or whatever command you use to set an environment variable in your shell).
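For example:

# write bindings to a shared location instead of $PWD/bindings
export SERVICE_BINDING_ROOT=~/.bt/bindings
# download the buildpack's dependencies and generate the binding files
bt dm -b paketo-buildpacks/bellsoft-liberica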
Once you have the bindings created, you just need to point your app to them. How you set the property differs between Maven & Gradle, but the value of the property is the same. It should be <local-path>:<container-path>.
The local path should be the full or relative path to where you created the bindings with bt dm. The container path should almost always be /platform/bindings. This maps your full set of bindings locally to the full set of bindings that the buildpacks will consume. In other words, put all of your bindings into the same directory locally, map that to /platform/bindings and the buildpacks will see everything.
For example with Gradle: binding("bindings/:/platform/bindings").
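A sketch applying this to the build file from the question (builder, runImage, and environment are unchanged; only the binding line differs):

bootBuildImage {
    builder = 'docker.io/paketobuildpacks/builder:tiny'
    runImage = 'docker.io/paketobuildpacks/run:tiny-cnb'
    buildpacks = ['gcr.io/paketo-buildpacks/java-native-image']
    // map the whole local bindings directory into the buildpack bindings directory
    binding("bindings/:/platform/bindings")
    environment = [
        "BP_NATIVE_IMAGE" : "true",
    ]
}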
You can adjust the container path by setting SERVICE_BINDING_ROOT in the container as well, but it doesn't offer a lot of advantage.
You can also set multiple entries for bindings, so long as the paths are unique. So you could set binding("/home/user/.bt/bindings/foo:/platform/bindings/foo") and also binding("bindings/bar:/platform/bindings/bar"). That would let you take bindings from two different locations locally and map them into the /platform/bindings directory so both would be visible to buildpacks. This gives you more fine-grained control but as you can see becomes pretty verbose.
Details on configuring Maven and configuring Gradle for buildpacks can be found at those links.

Related

Could not detect supported target files in 'project directory': see the documentation for supported languages and target files, and make sure you are in the right directory

I am new to Snyk. I have installed snyk-cli and ran the command snyk monitor on the root directory of my project, which contains two apps:
client (ReactJS) and server (Python/Django). I have authenticated VS Code to connect to my Snyk account, but I got this error:
Could not detect supported target files in /Users/yusuf/projects/project_name.
Please see our documentation for supported languages and target files: https://snyk.co/udVgQ and make sure you are in the right directory.
I have looked at the official documentation, and it says you might get this error when either the language is not supported or you are in the wrong directory. However, I have checked both: as mentioned, I am running Node (React) and Python (Django), both of which are supported by Snyk, and I am definitely in the correct directory. I have also cd'd into server to scan only one language at a time (in this case /Users/yusuf/projects/project_name/server), but I still get the same error.
I have some information that might help.
The CLI looks for the manifest in the current directory. Its default behavior is to find a manifest and perform an open-source scan using "snyk test".
You can have Snyk look for multiple manifests in a mono repo using --all-projects
The first step in identifying the issue is to determine which manifest files (package.json, Pipenv/pip files) exist in that root. It is not only a question of whether a file is in the root, but whether it is a supported manifest. If the manifests are not in the root, you can always point Snyk at them directly.
The second step is to either run with --all-projects or to target each manifest with "snyk test --file=...", in which case you will likely also have to specify the package manager. The docs describe it like this:
--file=<FILE>
Specify a package file.
When testing locally or monitoring a project, you can specify the file that Snyk should inspect for package information. When the file is not specified, Snyk tries to detect the appropriate file for your project.
--package-manager=<PACKAGE_MANAGER_NAME>
Specify the name of the package manager when the filename specified with the --file= option is not standard. This allows Snyk to find the file.
Example: $ snyk test --file=req.txt --package-manager=pip
The commands are documented here: https://docs.snyk.io/snyk-cli/commands/test
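For the layout in the question, that could look like this (the manifest file names are assumptions about the project):

# scan every manifest in the mono repo in one pass
snyk monitor --all-projects

# or target each app's manifest explicitly
snyk test --file=client/package.json
snyk test --file=server/requirements.txt --package-manager=pip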

How to change *.properties after ant production

I have two .properties files for my project on Hybris.
The first one is used for the CI process; as a result I get four zip files containing my already-built platform (after ant production).
On my prod instance I need to switch to the other properties file, because it holds all my connections to external services such as MySQL, Solr, etc.
How can I do that without running all the ant steps?
. ./setantenv.sh && sync && ant config -Denv=my_new_properties
followed by ./hybrisserver.sh start doesn't work.
There is no information on the wiki: https://cxwiki.sap.com/display/release5/ant+production+improvements
Check whether Updating Configuration Settings at Runtime will be useful for you. You will need to use the FileBasedConfigLoader class and the runtime.config.file.path property.
Other best practices include using system variables for secure settings like the DB URL. See the "Using Environment Variables instead of Files for Secure Settings" section in Configuring the Behavior of SAP Commerce.
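A minimal sketch of the property side (the path is illustrative; FileBasedConfigLoader and the reload semantics are described in the linked documentation):

# local.properties: point the platform at a properties file that can be
# swapped per environment and picked up at runtime
runtime.config.file.path=/opt/hybris/config/runtime.properties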
Another option you can look at is having different config folders for different environments (e.g. config-dev, config-prd) and passing the folder to ant, e.g. -Denv=config-dev.
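A sketch of that flow (assuming a config-prd folder holding the production properties):

. ./setantenv.sh
# regenerate the active configuration from the chosen folder
ant config -Denv=config-prd
./hybrisserver.sh start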

How can I unblacklist 'libnvomx.so', in order to resolve "no such element factory 'omxh264enc'!"?

(Background: In a docker container on an NVidia Jetson TX2 board I have decompressed NVidia's Linux For Tegra tarball, which contains lots of drivers and shared object files, some of which provide GStreamer element factories that produce elements I use in my GStreamer pipeline. I am trying to run the pipeline in the docker container.)
However, there is an element in my GStreamer pipeline (on this Tegra board), called 'omxh264enc', which I haven't been able to create.
I've put the corresponding libnvomx.so in my drivers folder, which is on the exported paths GST_PLUGIN_PATH and LD_LIBRARY_PATH.
ldd -r does not show any missing libraries for libnvomx.so.
However, when I try to run the pipeline, the output includes:
WARN omx gstomx.c:2826:plugin_init: Failed to load configuration file: Valid key file could not be found in search dirs (searched in: /root/.config:/etc/xdg as per GST_OMX_CONFIG_DIR environment variable, the xdg user config directory (or XDG_CONFIG_HOME) and the system config directory (or XDG_CONFIG_DIRS)
INFO omx gstomx.c:2831:plugin_init: Using default configuration
ERROR omx gstomx.c:2894:plugin_init: Core '/usr/lib/aarch64-linux-gnu/tegra/libnvomx.so' does not exist for element 'omxh264enc'
WARN GST_PLUGIN_LOADING gstplugin.c:526:gst_plugin_register_func: plugin "/gst_1.8.3/libs/gstreamer-1.0/libnvomx.so" failed to initialise
and when I run GST_DEBUG=3 gst-inspect-1.0 libnvomx.so, libnvomx.so is reported as blacklisted:
Plugin Details:
  Name                     libnvomx.so
  Description              Plugin for blacklisted file
  Filename                 /gst_1.8.3/libs/gstreamer-1.0/libnvomx.so
  Version                  0.0.0
  License                  BLACKLIST
  Source module            BLACKLIST
  Binary package           BLACKLIST
  Origin URL               BLACKLIST
I have copied libnvomx.so into /usr/lib/aarch64-linux-gnu/tegra but this did not make a difference (probably because libnvomx is blacklisted).
I don't know where to find the gstomx.conf file, in which maybe I could change the path /usr/lib/aarch64-linux-gnu/tegra/libnvomx.so to my designated drivers folder (/gst_1.8.3/libs/gstreamer-1.0/). I have used find on /etc and some other folders but didn't find it (I didn't actually find a .config folder on the system either).
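From gst-omx examples, I gather the config contains an entry roughly like the following, where I could point core-name at my drivers folder (the component-name below is a guess on my part):

# gstomx.conf (its directory is given by GST_OMX_CONFIG_DIR)
[omxh264enc]
type-name=GstOMXH264Enc
core-name=/gst_1.8.3/libs/gstreamer-1.0/libnvomx.so
component-name=OMX.Nvidia.h264.encoder
rank=257
in-port-index=0
out-port-index=1

If that is right, I would place the file in a directory pointed to by GST_OMX_CONFIG_DIR (per the warning above) and delete ~/.cache/gstreamer-1.0 so the blacklist entry is re-evaluated on the next plugin scan.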
(There are also other plugins with missing symbols: nvidia_drv.so with undefined symbol TimerSet, and libglx.so with undefined symbol serverClient. I would like to find out what is supposed to provide these symbols, but these are not (direct) dependencies of libnvomx.so.)
So how can I initialise/unblacklist libnvomx.so so that I can use 'omxh264enc'? Do I need to find or create a gstomx configuration file, or can I make it work with the default configuration? I read somewhere there may be a solution using a symlink, but at the moment I'm not familiar with what those are or how they work.
Let me know if you need more info, thanks.

How to read in other config information into a dropwizard service

I am building a Dropwizard service which will connect to multiple data sources, including MySQL and Elasticsearch. All the MySQL settings can be defined in the YAML config file, which gets read in after running from the command line.
But what about settings for the other data sources that I will connect to myself, for example Elasticsearch? Where can I define those?
I thought I could add another command-line Command, which I tried, but I can only run a single command at a time, so I can't run both the 'server' command and my custom command 'custom', which takes my own config file for Elasticsearch.
How can I introduce settings, either individually or from a file, that are defined at run time (not hard-coded)?
Thanks
Anton
Check out the Dropwizard Core documentation on adding custom configuration.
You'd create an ElasticSearchFactory class similar to the MessageQueueFactory in the example, reference it in your Configuration class (which is in turn referenced in your Application), and the options you need can then be added to your main YAML configuration.
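A minimal sketch of that pattern (the class name, fields, and the elasticSearch YAML key are illustrative, adapted from the MessageQueueFactory example in the docs):

import javax.validation.Valid;
import javax.validation.constraints.NotEmpty;
import javax.validation.constraints.NotNull;
import com.fasterxml.jackson.annotation.JsonProperty;
import io.dropwizard.Configuration;

// Holds the custom Elasticsearch settings read from the main YAML file.
public class ElasticSearchFactory {
    @NotEmpty
    private String host;
    private int port = 9200;

    @JsonProperty
    public String getHost() { return host; }
    @JsonProperty
    public void setHost(String host) { this.host = host; }
    @JsonProperty
    public int getPort() { return port; }
    @JsonProperty
    public void setPort(int port) { this.port = port; }
}

// The application's Configuration class references the factory, so the
// settings come from the same YAML file passed to the 'server' command.
class MyServiceConfiguration extends Configuration {
    @Valid
    @NotNull
    private ElasticSearchFactory elasticSearch = new ElasticSearchFactory();

    @JsonProperty("elasticSearch")
    public ElasticSearchFactory getElasticSearchFactory() { return elasticSearch; }
    @JsonProperty("elasticSearch")
    public void setElasticSearchFactory(ElasticSearchFactory factory) { this.elasticSearch = factory; }
}

The corresponding block in the main YAML configuration would then be:

elasticSearch:
  host: localhost
  port: 9200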

Getting version name from git-tag when deploying from travis-ci to bintray

I would like to deploy some JS files to Bintray. Reading the instructions at http://docs.travis-ci.com/user/deployment/bintray/, I see that the name of the version is configured in the descriptor file. I would like it to use the git tag instead, so that to make a release I only need to push a tag, without having to modify a configuration file on each release. Is this possible?
You need to use the built-in environment variables. The one you are looking for is TRAVIS_TAG. The full list is here.
You have to generate the descriptor file as part of your build; there you can read the environment variable using e.g. Python or a shell script.
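A minimal sketch in shell (the descriptor fields are illustrative; adjust package, repo, subject, and patterns to your Bintray setup):

# generate the Bintray descriptor during the build, substituting the git tag
cat > descriptor.json <<EOF
{
  "package": { "name": "my-js-package", "repo": "my-repo", "subject": "my-user" },
  "version": { "name": "${TRAVIS_TAG}" },
  "files": [ { "includePattern": "dist/(.*)", "uploadPattern": "\$1" } ],
  "publish": true
}
EOF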
