I use OpenEJB to run unit tests for applications ultimately deployed to WebSphere Application Server. My problem is an (unavoidable) dependency on the WAS runtime jar. I've added an expression to the exclude property (I've also tried the physical path):
p.put("openejb.deployments.classpath.exclude", ".*?runtime-6.1.*?");
// p.put("openejb.deployments.classpath.exclude", "C:/Users/user/.m3/repository/was/runtime/6.1/runtime-6.1.jar");
p.put("openejb.exclude-include.order", "include-exclude");
This value is confirmed in the logs:
OpenEJB.options-2014-08-07-main--INFO -OpenEJB.options:Using 'openejb.exclude-include.order=include-exclude'
OpenEJB.options-2014-08-07-main--INFO -OpenEJB.options:Using 'openejb.deployments.classpath.include=.*eed-jar.*'
OpenEJB.options-2014-08-07-main--INFO -OpenEJB.options:Using 'openejb.deployments.classpath.exclude=.*?runtime-6.1.*?'
but the jar is still being inspected for loadable modules:
OpenEJB.startup.config-2014-08-07-main--INFO -OpenEJB.startup.config:Found EjbModule in classpath: C:\Users\user\.m3\repository\was\runtime\6.1\runtime-6.1.jar
The result is startup failure for OpenEJB:
org.apache.openejb.OpenEjbContainer$InvalidApplicationException: org.apache.openejb.config.ValidationFailedException: Module failed validation. AppModule(name=)
and
WARNING: can't load com.ibm.ws.management.j2ee.ManagementBean
org.apache.openejb.OpenEJBRuntimeException: Management
As a workaround, I've been renaming runtime-6.1.jar and creating a 0-length dummy file in its place to run the tests in Eclipse, but I need to restore the jar to run the Maven build (which Eclipse then binds to again, necessitating a restart before I can rename the jar).
What's the right way to exclude this jar?
You might need to set openejb.deployments.classpath.filter.descriptors to true as well. See http://tomee.apache.org/application-discovery-via-the-classpath.html
Otherwise, filters will not be applied to resources that contain a descriptor file (ejb-jar.xml).
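For example, the properties from the question plus the descriptor filter, in an embedded-container bootstrap (a sketch; the include/exclude values are taken from the question's logs):

import java.util.Properties;
import javax.ejb.embeddable.EJBContainer;

Properties p = new Properties();
p.put("openejb.deployments.classpath.include", ".*eed-jar.*");
p.put("openejb.deployments.classpath.exclude", ".*?runtime-6.1.*?");
p.put("openejb.exclude-include.order", "include-exclude");
// Without this, archives that contain a descriptor (ejb-jar.xml) bypass the include/exclude filters
p.put("openejb.deployments.classpath.filter.descriptors", "true");
EJBContainer container = EJBContainer.createEJBContainer(p);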
If you still have problems, you can debug through org.apache.openejb.config.DeploymentsResolver and see why your resources aren't excluded.
I am using Spring Boot 2.7.1 with native configuration, following the guide in the link below.
Spring Native official doc
My problem is that when running bootBuildImage, the buildpack ["gcr.io/paketo-buildpacks/java-native-image:7.19.0"] tries to download the external dependency paketo-buildpacks/bellsoft-liberica from https://download.bell-sw.com/vm/22.3.0/bellsoft-liberica-vm-core-openjdk17.0.5+8-22.3.0+2-linux-amd64.tar.gz, which is not allowed by the company firewall.
I then read that you can configure dependency-mapping bindings for these dependencies within the required buildpack, at least when using the pack CLI, per this pack CLI guide.
But when using the pack CLI alone, the Gradle bootBuildImage task becomes somewhat redundant, and I would have to use some external tool to finish the native Docker container and image. I would like to use only bootBuildImage to map these dependency bindings.
I found the binding function in the Gradle bootBuildImage docs, but I am not sure what string it expects, or whether the path should be similar to the pack CLI config; I can't find any relevant info.
Here is the bootBuildImage config:
bootBuildImage {
    builder = 'docker.io/paketobuildpacks/builder:tiny'
    runImage = 'docker.io/paketobuildpacks/run:tiny-cnb'
    buildpacks = ['gcr.io/paketo-buildpacks/java-native-image']
    binding("bindnings/bellsoft-jre-config:/platform/bindings/bellsoft-jre-config")
    environment = [
        "BP_NATIVE_IMAGE" : "true",
    ]
}
The dependency-mapping binding contains 2 files:
The type file, created with:
echo "dependency-mapping" >> type
And a file named with the sha256 of the bellsoft-liberica dependency, created with:
echo "https://download.bell-sw.com/vm/22.3.0/bellsoft-liberica-vm-core-openjdk17.0.5+8-22.3.0+2-linux-amd64.tar.gz" >> 3dea0f7a9312c738d22b5e399b6ce9abe13b45b2bc2c04346beb941a94e8a932
And yes, I'm aware that this is the exact same URL, but this is just to test that the binding config is correctly set up: if it is, the build should instead fail on an untrusted certificate when downloading.
Currently the build fails with:
Caused by: org.springframework.boot.buildpack.platform.docker.transport.DockerEngineException: Docker API call to 'localhost/v1.24/containers/create' failed with status code 400 "Bad Request"
at org.springframework.boot.buildpack.platform.docker.transport.HttpClientTransport.execute(HttpClientTransport.java:156)
at org.springframework.boot.buildpack.platform.docker.transport.HttpClientTransport.execute(HttpClientTransport.java:136)
at org.springframework.boot.buildpack.platform.docker.transport.HttpClientTransport.post(HttpClientTransport.java:108)
at org.springframework.boot.buildpack.platform.docker.DockerApi$ContainerApi.createContainer(DockerApi.java:340)
at org.springframework.boot.buildpack.platform.docker.DockerApi$ContainerApi.create(DockerApi.java:331)
at org.springframework.boot.buildpack.platform.build.Lifecycle.createContainer(Lifecycle.java:237)
at org.springframework.boot.buildpack.platform.build.Lifecycle.run(Lifecycle.java:217)
at org.springframework.boot.buildpack.platform.build.Lifecycle.execute(Lifecycle.java:151)
at org.springframework.boot.buildpack.platform.build.Builder.executeLifecycle(Builder.java:157)
at org.springframework.boot.buildpack.platform.build.Builder.build(Builder.java:115)
at org.springframework.boot.gradle.tasks.bundling.BootBuildImage.buildImage(BootBuildImage.java:521)
I assume this is caused by an invalid binding config, but I can't find what it should be.
Paketo configuration (binding)
Dependency mapping bindings can be tricky. There are a number of things that have to be just right, or the buildpacks won't pick up the binding and won't map dependencies.
While there are talks of how we can change this in buildpacks to make swapping out dependencies easier, the short-term solution is to use binding-tool.
You can run bt dm -b paketo-buildpacks/bellsoft-liberica and it will go download the dependencies from the specified buildpack and generate the binding files for you.
It will by default download dependencies and write the bindings to $PWD/bindings but you can change that. For example, I like to put my dependencies in my home directory so I can share them across apps. Ex: SERVICE_BINDING_ROOT=~/.bt/bindings bt dm ..., or export SERVICE_BINDING_ROOT=~/.bt/bindings (or whatever command you run to set an env variable in your shell).
Once you have the bindings created, you just need to point your app to them. How you set the property differs between Maven & Gradle, but the value of the property is the same. It should be <local-path>:<container-path>.
The local path should be the full or relative path to where you created the bindings with bt dm. The container path should almost always be /platform/bindings. This maps your full set of bindings locally to the full set of bindings that the buildpacks will consume. In other words, put all of your bindings into the same directory locally, map that to /platform/bindings and the buildpacks will see everything.
For example with Gradle: binding("bindings/:/platform/bindings").
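Putting that together with the task from the question, a minimal sketch (the local bindings directory name is an assumption; the absolute path via project.projectDir just avoids ambiguity about what a relative path resolves against):

bootBuildImage {
    builder = 'docker.io/paketobuildpacks/builder:tiny'
    runImage = 'docker.io/paketobuildpacks/run:tiny-cnb'
    buildpacks = ['gcr.io/paketo-buildpacks/java-native-image']
    // <local-path>:<container-path>
    binding("${project.projectDir}/bindings:/platform/bindings")
    environment = ["BP_NATIVE_IMAGE": "true"]
}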
You can adjust the container path by setting SERVICE_BINDING_ROOT in the container as well, but it doesn't offer a lot of advantage.
You can also set multiple entries for bindings, so long as the paths are unique. So you could set binding("/home/user/.bt/bindings/foo:/platform/bindings/foo") and also binding("bindings/bar:/platform/bindings/bar"). That would let you take bindings from two different locations locally and map them into the /platform/bindings directory so both would be visible to buildpacks. This gives you more fine-grained control but as you can see becomes pretty verbose.
Details on configuring Maven and configuring Gradle for buildpacks can be found at those links.
I am trying to sort out how to correctly use rollup_bundle() in bazel in a workspace using multiple languages, including JavaScript, and defining multiple npm packages in different directories.
It appears that the rollup bazel rules assume a more centralized setup than I would like: in the context of this workspace I would like each npm package to manage its own node_modules (see WORKSPACE.bazel in the reproducible example).
Small reproducible (toy example):
https://github.com/mdittmer/bazel-nodejs-rollup-multi-package/commit/a14408c083836ae024719e540ee1115e61eabc97
$ bazel build //js/alpha
Starting local Bazel server and connecting to it...
ERROR: /path/to/bazel-nodejs-rollup-multi-package/js/alpha/BUILD.bazel:6:14: every rule of type rollup_bundle implicitly depends upon the target '//@bazel/rollup/bin:rollup-worker', but this target could not be found because of: no such package '@bazel/rollup/bin': BUILD file not found in any of the following directories. Add a BUILD file to a directory to mark it as a package.
- /path/to/bazel-nodejs-rollup-multi-package/@bazel/rollup/bin
Documentation for implicit attribute rollup_worker_bin of rules of type rollup_bundle:
Internal use only
ERROR: Analysis of target '//js/alpha:alpha' failed; build aborted: Analysis failed
INFO: Elapsed time: 2.363s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (4 packages loaded, 5 targets configured)
currently loading: @rules_nodejs//nodejs/stamp
This error seems to suggest that I may need to specify rollup_bin and/or rollup_worker_bin, but I have had no luck attempting to refer to sources in //js/alpha/node_modules/@bazel/... or //js/alpha/node_modules/rollup/.... The npm documentation for @bazel/rollup suggests that because I am using npm_install() to set up my in-tree npm packages I should not need to set rollup_bin (because related instructions begin with "If you didn't use the yarn_install or npm_install rule, [...]").
How do I keep my multiple-npm-packages-in-same-workspace setup while using bazel rules to run rollup?
I posted a viable solution here:
https://github.com/mdittmer/bazel-nodejs-rollup-multi-package/commit/07c0810a7e2d4185311f4bab4f36555648585b97
The issue appears to stem from the @build_bazel_rules_nodejs rules hard-coding @npm//... dependencies, i.e., they assume that the correct repository for loading npm module dependencies must be named @npm. To get multiple notions of @npm (in theory; I only bothered to remap one such repository), I declared a local repository in the root //WORKSPACE.bazel file:
//WORKSPACE.bazel:
npm_install(
    name = "npm_alpha",
    package_json = "@alpha//:package.json",
    package_lock_json = "@alpha//:package-lock.json",
)

local_repository(
    name = "alpha",
    path = "js/alpha",
    repo_mapping = {
        "@npm": "@npm_alpha",
    },
)
The repo_mapping maps my choice of repository to the name @npm in the @alpha repository. Presumably, repeating this with similarly configured @beta and @npm_beta repositories will achieve the desired configuration across multiple npm modules. In this setup the root //WORKSPACE.bazel does most of the "heavy lifting" and //js/alpha/WORKSPACE.bazel is practically empty.
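For example, a hypothetical second package could be wired up the same way (a sketch mirroring the alpha setup; the beta names and path are assumptions):

npm_install(
    name = "npm_beta",
    package_json = "@beta//:package.json",
    package_lock_json = "@beta//:package-lock.json",
)

local_repository(
    name = "beta",
    path = "js/beta",
    repo_mapping = {
        "@npm": "@npm_beta",
    },
)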
Aside: It appears that re-defining @npm with the configuration up to and including the npm_install() with appropriately renamed name, package_json, and package_lock_json in //js/alpha/WORKSPACE.bazel does not interfere with the root repository working as intended, and has the side benefit of the //js/alpha repository working correctly as a root repository if desired.
One unresolved issue I was surprised to encounter was this:
https://github.com/mdittmer/bazel-nodejs-rollup-multi-package/commit/c2bde0fc02bce8f4b1b42dc5881a4f9bc26186db
Switching from npm_install() to yarn_install() did not work (see the commit message in the linked commit for details).
I have a working version of my project in Eclipse.
I exported the project as a runnable jar.
I extracted it (after converting it to .zip) and tried to compile a particular Java file from the command prompt.
(I'm doing it this way because of a project requirement: an input parameter inside that particular file can be modified and recompiled/run by users who won't have Eclipse.)
I have used some external libraries (e.g. json-simple, gson, etc.). They aren't getting recognized during compilation.
But if I run the class file (from the Eclipse-compiled version), it executes properly.
a) I tried to compile from the root folder (using the package name):
javac packageName.javaFileName.java
b) I also went inside the package and compiled directly:
javac javaFileName.java
Part a) didn't compile at all, saying classNotFound. Part b) started compiling but threw an error where none of the external libraries got recognized (error: cannot find symbol wherever the code/imports of the external libs are used).
a) Tried to compile from root folder (using package name): javac packageName.javaFileName.java
b) and went inside the package and compiled directly: javac javaFileName.java
The a) part didn't compile at all saying classNotFound.
Yes. javac requires you to specify a filesystem path to the (first) source(s) to compile. You appear instead to have tacked .java onto the end of the desired fully-qualified class name. Probably you want to compile from the root of the unpacked jar, specifying a correct path:
javac [options] package/name/className.java
for class package.name.className. (You can also compile from a different working directory if you specify an appropriate option, as discussed below.)
The b) part started compiling but threw an error where none of the external libraries got recognized (error: cannot find symbol wherever the code/imports of the external libs are used).
If the class you're compiling depends on others that also need to be compiled then javac would likely make a similar complaint about them. Either compile from the root (as in (a)), or specify the path to the source root via the -sourcepath option. Either way, there's no reason to descend into the source tree to compile.
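For example, compiling from outside the source root might look like this (the paths are placeholders):

javac -sourcepath /path/to/unpacked /path/to/unpacked/package/name/className.java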
But the external libs are actually a separate, albeit related, question. You don't need to compile these, but you do need to tell javac to use them as sources of classes. You would do that via the -classpath option, which you can abbreviate to -cp. If those were packaged in the jar itself (i.e. a "fat jar") then that should be fairly easy, something along these lines:
javac -cp .:lib/dependency1.jar:lib/dependency2.jar package/name/className.java
The "lib" part may vary, and the separator definitely differs depending on OS (on Windows it is ;, whereas on Mac / Linux / Solaris is is :, as shown).
If the external libs were not packaged into the main jar then the procedure is the same, but you might have a bigger challenge finding the needed jars. Also, such a jar is probably not runnable if you move it to a different machine. Nevertheless, you should probably look in META-INF/MANIFEST.MF, as it should contain the needed information.
Suppose I have a Grails application named myCoolApp.
Furthermore, suppose I have defined some basic properties in grails-app/conf/Config.groovy and grails-app/conf/DataSource.groovy under the test and production environments.
Config.groovy is set to merge in the following external .groovy configuration file, if found:
grails.config.locations = ["file:${userHome}/.grails/${appName}-config.groovy"]
Finally, on an external testing server, I have also created such a file under a local *NIX user path:
/home/appServerTestUser/.grails/myCoolApp-config.groovy
Sadly, I find that for some reason, the external .groovy file's properties are not being merged when a Tomcat instance starts up.
What options do I have to make Grails tell me: "Oh, yes, I found your external config file at: /home/appServerTestUser/.grails/myCoolApp-config.groovy" and these are the properties I merged into Config.groovy?
Thanks!
File-based paths don't make much sense in a deployed app. They could work when deploying on your dev machine for testing, but will likely fail on any other machine. And they'll probably fail locally even if configured with the correct path, since the process will likely run as a different low-privilege user.
You can use absolute paths, e.g. /etc/myapp/path/to/file, but this tends to couple deployment to filesystem structure, so if you develop in Windows and deploy on Linux it'd be tricky to get them both working.
So the best bet for Tomcat deployment is to use the classpath syntax. You can specify multiple files and it will load all that it finds, so I usually have one entry for local dev and one for the deployed app:
grails.config.locations = [
    "classpath:${appName}-config.groovy",
    "file:./${appName}-config.groovy"
]
I delete the entries for .properties files because Groovy syntax is so much more flexible, but use them if you like. I also keep the dev files in the project root (and exclude from source control).
So when deploying, name the file correctly (different apps will have different files, so no clashing there) and put them in Tomcat's lib directory. This is in the classpath, and so the classpath: entry will see it.
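As for making Grails tell you what it found: Config.groovy is a plain Groovy script, so you can log the candidates yourself at startup. A minimal sketch, relying on the userHome and appName bindings Grails provides in Config.groovy:

// Sanity check: report whether each candidate external config is visible
def externalConfig = new File("${userHome}/.grails/${appName}-config.groovy")
if (externalConfig.exists()) {
    println "Found external config: ${externalConfig.absolutePath}"
} else {
    println "No external config at: ${externalConfig.absolutePath}"
}
// For the classpath: entry, ask the classloader instead
def onClasspath = getClass().classLoader.getResource("${appName}-config.groovy".toString())
println "External config on classpath: ${onClasspath}"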
When managing a grails plugin in Intellij Idea (I'm using version 10.5.2 on OSX, if it matters), I can't get it to add the *GrailsPlugin.groovy file to the classpath in a way that it can be recognized by unit tests run from the IDE. The tests run fine from the command line using grails test-app; I just want IDEA to be able to run them too.
The problem is that the *GrailsPlugin.groovy file is at the top level directory in the plugin, which is not marked as a source directory in IDEA. I definitely don't want to make it a source folder, since that will screw up lots of things such as the package path to all of my regular source files (amongst many other issues).
I've tried adding *GrailsPlugin.groovy to my Settings->Compiler Resource Patterns, but to no avail. Since the file isn't in a source folder, it's ignored.
I tried creating a source folder that has a symlink to the *GrailsPlugin.groovy file, but that introduces all kinds of synchronization issues.
Anyone have any ideas?