At the moment I'm trying to link my Rust library into my Java project via JNI.
When I build my Rust library with cargo build, there are no errors. My code works!
My problem is the interface between Java and Rust. When I run my Java project from IntelliJ, I get this error message:
OpenJDK 64-Bit Server VM warning: You have loaded library /home/.../Encoding/libs/poc which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
Exception in thread "main" java.lang.UnsatisfiedLinkError: cannot read file data: Is a directory
at java.base/java.lang.ClassLoader$NativeLibrary.load0(Native Method)
at java.base/java.lang.ClassLoader$NativeLibrary.load(ClassLoader.java:2445)
at java.base/java.lang.ClassLoader$NativeLibrary.loadLibrary(ClassLoader.java:2501)
at java.base/java.lang.ClassLoader.loadLibrary0(ClassLoader.java:2700)
at java.base/java.lang.ClassLoader.loadLibrary(ClassLoader.java:2630)
at java.base/java.lang.Runtime.load0(Runtime.java:768)
at java.base/java.lang.System.load(System.java:1837)
at com.example.Main.<clinit>(Main.java:16)
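As a side note, the UnsatisfiedLinkError text "cannot read file data: Is a directory" suggests that System.load was given the crate directory libs/poc rather than the compiled shared object inside it. A hypothetical corrected call, assuming the crate is named poc and was built in debug mode (so the artifact is target/debug/libpoc.so):
System.load("/home/.../Encoding/libs/poc/target/debug/libpoc.so");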
My next step was to try the recommended noexecstack linker flag when compiling my Rust library, using the following command:
cargo build --verbose -Z build-std=core,alloc -c link-args=-znoexecstack
Another attempt was to add these arguments to my Cargo.toml:
[lib]
link-args = ["-z", "noexecstack"]
When I try to build my Rust library now, these arguments are ignored:
$ cargo build
warning: unused manifest key: lib.link-args
Compiling poc v0.1.0 (/home/.../Encoding/libs/poc)
Finished dev [unoptimized + debuginfo] target(s) in 1.50s
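As a side note, Cargo has no lib.link-args manifest key (hence the warning above), and -c is not a cargo flag; the supported way to pass extra linker flags is rustflags, for example in a .cargo/config.toml next to the crate. A minimal sketch, assuming the library links through the default cc driver on Linux:
# .cargo/config.toml
[build]
rustflags = ["-C", "link-arg=-Wl,-z,noexecstack"]
The same flag can also be passed one-off via the environment:
RUSTFLAGS="-C link-arg=-Wl,-z,noexecstack" cargo build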
I hope one of you can help me with my problem. I would be very thankful!
Maybe this is also interesting: my project structure:
Encoding
|-- libs
|   |-- poc
|       |-- src
|       |   |-- lib.rs
|       |-- Cargo.toml
|-- src
|   |-- main/java/org/example
|       |-- Main.java
|-- target
|   |-- ...
|-- pom.xml
As described, I tried the following commands:
cargo build --verbose -Z build-std=core,alloc -c link-args=-znoexecstack
cargo build with [lib] -> link-args = ["-z", "noexecstack"]
And I need this to run my Java project without the risk of an executable stack!
I've been working on a small project trying to teach myself how to use Bazel. The goal is to download an http_archive that contains a Python interpreter. Once the Python interpreter has been added to the environment, I want to run the command `python.exe --version` and write its output into a file.
The issues I have the most difficulty with at the moment are the following:
I am not confident that I am correctly injecting the hermetic Python BUILD file into the hermetic Python package (I keep getting the message "BUILD file not found in any of the following directories. Add a BUILD file to a directory to mark it as a package").
I'm pretty sure that when I pass python_compiler = ["@hermetic_python"] in the BUILD file, I'm just getting a string and not a reference to the files in the package.
Here is an overview of my project and the code files. Any help would be appreciated! :D
Project structure:
|-- WORKSPACE
|-- BUILD
|-- custom_rules.bzl
|-- main.py
|-- custom-rules/
|   |-- BUILD.custom_python
|   |-- custom_python_rules.bzl
WORKSPACE
load("#bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
http_archive(
name = "hermetic_python",
urls = ["https://www.python.org/ftp/python/3.9.10/python-3.9.10-embed-amd64.zip"],
sha256 = "67161cab713a52f6658b76274f8dbe0cd2f256aab1e84f94cd557d4a906fa5d2",
build_file = "#//:custom-rules/BUILD.custom_python"
)
BUILD File:
load("//:custom_rules.bzl","build_with_custom_python")
build_with_custom_python(
name = "write-to-file",
python_compiler = ["#hermetic_python"]
)
custom_rules.bzl
def _build_with_custom_python_impl(ctx):
    out_file = ctx.actions.declare_file("file_with_python_version.txt")
    ctx.actions.run(
        outputs = [out_file],
        # This is a label_list, not an executable -- see the issues above.
        executable = ctx.attr.python_compiler,
        arguments = ["--version"],
    )
    return [DefaultInfo(files = depset([out_file]))]

build_with_custom_python = rule(
    implementation = _build_with_custom_python_impl,
    attrs = {
        "python_compiler": attr.label_list(allow_files = True),
    },
)
BUILD.custom_python
load("//:custom_python_rules.bzl","run_me")
run_me(
name="my_py_run",
python_files = glob(["**"]),
visibility = ["//visibility:public"],
)
custom_python_rules.bzl
def _run_me_impl(ctx):
    pass

run_me = rule(
    implementation = _run_me_impl,
    attrs = {
        "python_files": attr.label_list(allow_files = True),
    },
)
I've spent some more time on this and have managed to mostly do what I intended. Here is an overview of what I've learned that fixes the problems in the original question.
1 - Mark the custom build file / custom rule folder as a package
Just by adding an empty BUILD file into the custom-rules folder, I had marked it as a Bazel package. That allowed me to reference the files:
|-- WORKSPACE
|-- BUILD
|-- custom_rules.bzl
|-- main.py
|-- custom-rules/
|   |-- BUILD  # NEW
|   |-- BUILD.custom_python
|   |-- custom_python_rules.bzl
2 - The WORKSPACE file references a BUILD file, and that BUILD file needs to reference a custom rules file
The WORKSPACE should just reference the BUILD file directly: // means the workspace root, and then it's just the path to the BUILD file.
http_archive(
    # build_file = "@//:custom-rules/BUILD.custom_python"  # WRONG
    build_file = "//custom-rules/BUILD.custom_python",     # RIGHT
)
3 - The BUILD file cannot reference the custom rules (.bzl file) the usual way
This is the one that took the longest to figure out! The BUILD file that is loaded into the package (hermetic_python from the WORKSPACE file) NEEDS TO reference the current workspace in order to resolve the load:
# BUILD.custom_python
load("@root_workspace//custom-rules:custom_python_rules.bzl", "execute_python_file")
Notice that this starts with @root_workspace, which tells the BUILD file to look in that workspace. The WORKSPACE file in the project now contains this line:
# WORKSPACE
workspace(name = "root_workspace")
4 - Passing in a build step as the dependency
build_with_custom_python(
    name = "write-to-file",
    # python_compiler = ["@hermetic_python"]  # WRONG
    python_compiler = ["@hermetic_python:my_py_run"],  # RIGHT
)
The key here is that I need to reference a build action, which then makes the files from that build action available through the dependency. In this case @hermetic_python is the package and :my_py_run is the build action.
I still need to figure out how to properly use the files in the dependency, but that's outside the scope of this question.
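For completeness, a minimal sketch of how the rule implementation might consume the files exposed by @hermetic_python:my_py_run; this is untested, and locating python.exe by basename is an assumption about the embedded distribution's layout:
def _build_with_custom_python_impl(ctx):
    out_file = ctx.actions.declare_file("file_with_python_version.txt")
    # All files provided by the targets in python_compiler.
    py_files = ctx.files.python_compiler
    # Assumption: the embedded distribution ships python.exe at its root.
    interpreter = [f for f in py_files if f.basename == "python.exe"][0]
    ctx.actions.run_shell(
        inputs = py_files,
        outputs = [out_file],
        command = "{py} --version > {out} 2>&1".format(
            py = interpreter.path,
            out = out_file.path,
        ),
    )
    return [DefaultInfo(files = depset([out_file]))]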
We are following this tutorial to be able to use the IRIS Data Platform:
https://github.com/es-comunidad-intersystems/webinar-gestion-apis
We have found an issue: it looks like the IRIS version requested in the tutorial is no longer available on the download page.
We have downloaded the closest version, which is:
InterSystems IRIS
2019.4
Then we have tried to follow the steps:
docker load -i iris-2019.4.0.383.0-docker.tar.gz
It outputs:
Loaded image: intersystems/iris:2019.4.0.383.0
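As a quick check, the image tags now available locally can be listed with:
docker images intersystems/iris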
Then we have downloaded the webinar code:
git clone https://github.com/es-comunidad-intersystems/webinar-gestion-apis.git
After that we tried to build the Docker image as follows:
docker build . --tag webinar-gestion-apis:stable --no-cache
And we have seen the output:
Sending build context to Docker daemon 754.2kB
Step 1/9 : FROM intersystems/iris:2019.3.0.302.0
pull access denied for intersystems/iris, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
So we thought that this issue was related to the Dockerfile, because it had the following command:
# building from the InterSystems IRIS
FROM intersystems/iris:2019.3.0.302.0
To adjust it to the version we had downloaded, we wrote:
# building from the InterSystems IRIS
FROM intersystems/iris:2019.4.0.383.0
After that, we ran:
docker build . --tag webinar-gestion-apis:stable --no-cache
And it built the image correctly, with the following output:
Sending build context to Docker daemon 758.8kB
Step 1/9 : FROM intersystems/iris:2019.4.0.383.0
---> 46e2532c2583
Step 2/9 : USER root
---> Running in 3de765837aa5
Removing intermediate container 3de765837aa5
---> 35a7d04b1f5a
Step 3/9 : RUN mkdir -p /opt/webinar/install
---> Running in 1b1690dff84f
Removing intermediate container 1b1690dff84f
---> 64d42f352bb9
Step 4/9 : COPY install /opt/webinar/install
---> 2710ae3d8265
Step 5/9 : RUN mkdir -p /opt/webinar/src
---> Running in 12ccd30d880b
Removing intermediate container 12ccd30d880b
---> c2e5d7dff819
Step 6/9 : COPY src /opt/webinar/src/
---> 943d888243a9
Step 7/9 : RUN chown -R ${ISC_PACKAGE_MGRUSER}:${ISC_PACKAGE_IRISGROUP} /opt/webinar
---> Running in 57a5b34bbf70
Removing intermediate container 57a5b34bbf70
---> a8629b4948a0
Step 8/9 : USER irisowner
---> Running in 93d6814d7452
Removing intermediate container 93d6814d7452
---> 4e9faf862ebe
Step 9/9 : RUN iris start iris && printf 'zn "USER" \n do $system.OBJ.Load("/opt/webinar/src/Webinar/Installer.cls","c")\n do ##class(Webinar.Installer).Run()\n zn "%%SYS"\n do ##class(SYS.Container).QuiesceForBundling()\n h\n' | irissession IRIS && iris stop iris quietly
---> Running in 2e28a60b29b4
Using 'iris.cpf' configuration file
This copy of InterSystems IRIS has been licensed for use exclusively by:
Local license key file not found.
Copyright (c) 1986-2019 by InterSystems Corporation
Any other use is a violation of your license agreement
1 alert(s) during startup. See messages.log for details.
Starting IRIS
Node: 2e28a60b29b4, Instance: IRIS
USER>
USER>
Load started on 06/13/2020 08:18:52
Loading file /opt/webinar/src/Webinar/Installer.cls as udl
Compiling class Webinar.Installer
Compiling routine Webinar.Installer.1
Load finished successfully.
USER>
START INSTALLER
2020-06-13 08:18:58 0 Webinar.Installer: Installation starting at 2020-06-13 08:18:58, LogLevel=0
2020-06-13 08:18:58 0 : Creating namespace WEBINAR
Load of directory started on 06/13/2020 08:19:08
Loading file /opt/webinar/src/Webinar/Installer.cls as udl
Loading file /opt/webinar/src/Webinar/API/Leaderboard/v1/impl.cls as udl
Loading file /opt/webinar/src/Webinar/API/Leaderboard/v1/spec.cls as udl
Loading file /opt/webinar/src/Webinar/Data/Player.cls as udl
Loading file /opt/webinar/src/Webinar/Data/Team.cls as udl
Compilation started on 06/13/2020 08:19:08 with qualifiers 'cuk'
Compiling 5 classes, using 2 worker jobs
Compiling class Webinar.API.Leaderboard.v1.impl
Compiling class Webinar.API.Leaderboard.v1.spec
Compiling class Webinar.Data.Player
Compiling class Webinar.Installer
Compiling class Webinar.Data.Team
Compiling table Webinar_Data.Player
Compiling table Webinar_Data.Team
Compiling routine Webinar.API.Leaderboard.v1.impl.1
Compiling routine Webinar.Data.Team.1
Compiling routine Webinar.Installer.1
Compiling routine Webinar.Data.Player.1
Compiling class Webinar.API.Leaderboard.v1.impl
Compiling class Webinar.API.Leaderboard.v1.disp
Compiling routine Webinar.API.Leaderboard.v1.impl.1
Compiling routine Webinar.API.Leaderboard.v1.disp.1
Compilation finished successfully in 0.470s.
Load finished successfully.
Load started on 06/13/2020 08:19:09
Loading file /opt/webinar/install/WebTerminal-v4.9.0.xml as xml
Imported class: WebTerminal.Analytics
Imported class: WebTerminal.Autocomplete
Imported class: WebTerminal.Common
Imported class: WebTerminal.Core
Imported class: WebTerminal.Engine
Imported class: WebTerminal.ErrorDecomposer
Imported class: WebTerminal.Handlers
Imported class: WebTerminal.Installer
Imported class: WebTerminal.Router
Imported class: WebTerminal.StaticContent
Imported class: WebTerminal.Trace
Imported class: WebTerminal.Updater
Compiling 12 classes, using 2 worker jobs
Compiling class WebTerminal.Analytics
Compiling class WebTerminal.ErrorDecomposer
Compiling class WebTerminal.Common
Compiling class WebTerminal.StaticContent
Compiling class WebTerminal.Handlers
Compiling class WebTerminal.Updater
Compiling class WebTerminal.Autocomplete
Compiling class WebTerminal.Core
Compiling class WebTerminal.Trace
Compiling class WebTerminal.Router
Compiling class WebTerminal.Engine
Compiling routine WebTerminal.Analytics.1
Compiling routine WebTerminal.ErrorDecomposer.1
Compiling routine WebTerminal.Common.1
Compiling routine WebTerminal.StaticContent.1
Compiling routine WebTerminal.Updater.1
Compiling routine WebTerminal.Handlers.1
Compiling routine WebTerminal.Core.1
Compiling routine WebTerminal.Router.1
Compiling routine WebTerminal.Trace.1
Compiling routine WebTerminal.Autocomplete.1
Compiling routine WebTerminal.Engine.1
Compiling class WebTerminal.Installer
Compiling routine WebTerminal.Installer.1
Installing WebTerminal application to WEBINAR
Creating WEB application "/terminal"...
WEB application "/terminal" is created.
Assigning role %DB_IRISSYS to a web application; resulting roles: :%DB_IRISSYS:%DB_USER
Creating WEB application "/terminalsocket"...
WEB application "/terminalsocket" is created.
%All namespace is created.
Mapping %WebTerminal package into all namespaces: %All
WebTerminal package successfully mapped into all namespaces.
Load finished successfully.
2020-06-13 08:19:09 0 Webinar.Installer: Installation succeeded at 2020-06-13 08:19:09
2020-06-13 08:19:09 0 %Installer: Elapsed time 11.29037s
INSTALLER SUCCESS
USER>
%SYS>
%SYS>
Removing intermediate container 2e28a60b29b4
---> e23ab1a58cd2
Successfully built e23ab1a58cd2
Successfully tagged webinar-gestion-apis:stable
SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host. All files and directories added to build context will have '-rwxr-xr-x' permissions. It is recommended to double check and reset permissions for sensitive files and directories.
Now comes the difficulty: when we try to run the container, it says "unhealthy".
docker-compose up -d
The output is:
Starting iris-2019.4 ... done
This is the docker-compose.yml (we have kept the original GitHub repo file, only changing container_name from iris-2019.3 to iris-2019.4):
version: '3.2'
services:
  iris:
    image: webinar-gestion-apis:stable
    container_name: iris-2019.4
    ports:
      - "51773:51773"
      - "52773:52773"
    volumes:
      - ./config/iris.key:/usr/irissys/mgr/iris.key
      - ./shared:/shared
When we try to use:
docker-compose ps
We observe:
Name Command State Ports
----------------------------------------------------------------------------------------------
iris-2019.4 /iris-main Up (unhealthy) 0.0.0.0:51773->51773/tcp, 0.0.0.0:52773->52773/tcp
And if we try to debug it and see the health logs, we run:
docker inspect --format "{{json .State.Health }}" iris-2019.4
And it shows:
{"Status":"unhealthy","FailingStreak":4,"Log":[{"Start":"2020-06-13T08:30:56.232804406Z","End":"2020-06-13T08:30:56.328718067Z","ExitCode":1,"Output":""},{"Start":"2020-06-13T08:31:56.332937629Z","End":"2020-06-13T08:31:56.427169416Z","ExitCode":1,"Output":""},{"Start":"2020-06-13T08:32:56.43026636Z","End":"2020-06-13T08:32:56.5141952Z","ExitCode":1,"Output":""},{"Start":"2020-06-13T08:33:56.520060854Z","End":"2020-06-13T08:33:56.605017629Z","ExitCode":1,"Output":""}]}
The result is that we cannot connect to:
http://localhost:52773/csp/sys/UtilHome.csp
How could we debug a Docker container whose status is unhealthy?
Maybe a little late, but that error was probably caused by the IAM version plus the license file you were using.
You can try a newer version which runs on IRIS 2021 and IAM 2.3.3:
https://openexchange.intersystems.com/package/workshop-rest-iam
I've had several 'unhealthy' and 'warn' container startups, some of them without access to the Management Portal. The solution was to enter the container via the command line by clicking the console button (the first one)
And then write
bash
cat /usr/irissys/mgr/messages.log
This will show the startup log, where you will hopefully be able to see the error.
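If the console button is not available, the same log can also be read from the host with a standard Docker command (using the container name from the compose file above):
docker exec -it iris-2019.4 cat /usr/irissys/mgr/messages.log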
The path to my project is /project/
My build file structure is src/main/package/subpackage/Class.java
My test file structure is src/test/package/subpackage/Test.java
I would like my compiled code to be in bin/main/package/subpackage/Class.class
Compiled test code in bin/test/package/subpackage/Test.class
My pom.xml has the entry
<build>
<sourceDirectory>${project.basedir}/src/main</sourceDirectory>
<testSourceDirectory>${project.basedir}/src/test</testSourceDirectory>
<outputDirectory>${project.basedir}/bin/main</outputDirectory>
<testOutputDirectory>${project.basedir}/bin/test</testOutputDirectory>
<finalName>${project.artifactId}-${project.version}</finalName>
</build>
Running mvn clean install fails with the following:
Failed to execute goal org.apache.maven.plugins:maven-resources-plugin:2.6:resources (default-resources) on project Hello-Maven: Error loading property file '/project/': /project (Is a directory) -> [Help 1]
...
[Help 1] https://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
Now I've tried the link, but it suggests that it's an issue with the plugins.
However, commenting this block out and running mvn clean install again produces an almost empty jar file at /project/target/Hello-Maven-1.0-SNAPSHOT.jar, containing only the pom and the manifest. Additionally, there aren't any plugins configured, only dependencies: junit and javafx.
EDIT: I realize the [Help 1] link is about Maven plugins, but finding information on why the error happens during 'install' is difficult at best.
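For what it's worth, two standard Maven commands that can help narrow down where the failure comes from are a debug-level build and a dump of the effective POM (no project-specific assumptions here):
mvn clean install -X -e
mvn help:effective-pom
The effective POM in particular shows how the custom <build> directories are resolved and what configuration the maven-resources-plugin actually receives.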
I'm setting up an Angular 4 SPA with automatic testing in Jenkins CI. The SPA is part of a larger, Maven-managed project, so the build is also Maven-managed. So far I've:
Installed the NodeJS plugin on Jenkins, using install from nodejs.org with version 8.6.0
Configured "Global npm packages to install" = "karma-cli phantomjs-prebuilt jasmine-core karma-jasmine karma-phantomjs-launcher karma-junit-reporter karma-coverage"
Added the "maven-karma-plugin" in pom.xml with browsers=PhantomJS / singleRun=true / reporters=dots,junit
Enabled "Provide Node & npm bin/ folder to PATH" on the Jenkins job configuration
The build process starts up quite ok, but eventually I get:
[INFO] --- maven-karma-plugin:1.6:start (default) @ webclient ---
[INFO] Executing Karma Test Suite ...
/var/lib/jenkins/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/Node.js_8.6.0/bin/karma start /var/lib/jenkins/workspace/funnel_build/webclient/karma.conf.js --browsers PhantomJS --reporters dots,junit --single-run
07 10 2017 17:07:52.801:ERROR [config]: Error in config file!
{ Error: Cannot find module 'karma-jasmine'
at Function.Module._resolveFilename (module.js:527:15)
at Function.Module._load (module.js:476:23)
at Module.require (module.js:568:17)
at require (internal/module.js:11:18)
at module.exports (/var/lib/jenkins/workspace/funnel_build/webclient/karma.conf.js:9:7)
at Object.parseConfig (/var/lib/jenkins/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/Node.js_8.6.0/lib/node_modules/karma/lib/config.js:410:5)
The npm install at the very beginning of the build logs the following:
$ /var/lib/jenkins/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/Node.js_8.6.0/bin/npm install -g karma-cli phantomjs-prebuilt jasmine-core karma-jasmine karma-phantomjs-launcher karma-junit-reporter karma-coverage
/var/lib/jenkins/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/Node.js_8.6.0/bin/karma -> /var/lib/jenkins/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/Node.js_8.6.0/lib/node_modules/karma-cli/bin/karma
/var/lib/jenkins/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/Node.js_8.6.0/bin/phantomjs -> /var/lib/jenkins/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/Node.js_8.6.0/lib/node_modules/phantomjs-prebuilt/bin/phantomjs
> phantomjs-prebuilt@2.1.15 install /var/lib/jenkins/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/Node.js_8.6.0/lib/node_modules/phantomjs-prebuilt
> node install.js
Considering PhantomJS found at /var/lib/jenkins/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/Node.js_8.6.0/bin/phantomjs
Looks like an `npm install -g`
Could not link global install, skipping...
Download already available at /tmp/phantomjs/phantomjs-2.1.1-linux-x86_64.tar.bz2
Verified checksum of previously downloaded file
Extracting tar contents (via spawned process)
Removing /var/lib/jenkins/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/Node.js_8.6.0/lib/node_modules/phantomjs-prebuilt/lib/phantom
Copying extracted folder /tmp/phantomjs/phantomjs-2.1.1-linux-x86_64.tar.bz2-extract-1507388835905/phantomjs-2.1.1-linux-x86_64 -> /var/lib/jenkins/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/Node.js_8.6.0/lib/node_modules/phantomjs-prebuilt/lib/phantom
Writing location.js file
Done. Phantomjs binary available at /var/lib/jenkins/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/Node.js_8.6.0/lib/node_modules/phantomjs-prebuilt/lib/phantom/bin/phantomjs
npm WARN karma-jasmine@1.1.0 requires a peer of karma@* but none was installed.
npm WARN karma-junit-reporter@1.2.0 requires a peer of karma@>=0.9 but none was installed.
npm WARN karma-phantomjs-launcher@1.0.4 requires a peer of karma@>=0.9 but none was installed.
+ karma-phantomjs-launcher@1.0.4
+ karma-coverage@1.1.1
+ karma-jasmine@1.1.0
+ karma-cli@1.0.1
+ karma-junit-reporter@1.2.0
+ jasmine-core@2.8.0
+ phantomjs-prebuilt@2.1.15
updated 7 packages in 10.553s
(The reason the package 'karma' is currently not on the list is that I read somewhere that karma-cli should be used in place of karma. Adding the 'karma' package doesn't change anything, however.)
Any idea why "Cannot find module 'karma-jasmine'" pops up? In (2) you'll see that the karma-jasmine package is listed, and I can find it on the server, but it's still not found by the NodeJS plugin.
Thanks, Simon
I managed to get it to work by running "npm install" as part of the build process, and then running everything with local npm packages.
The entire setup is described here: https://funneltravel.wordpress.com/2017/10/16/running-karma-with-maven-on-jenkins-ci/
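For reference, one common way to run "npm install" as part of a Maven build is the frontend-maven-plugin; a minimal sketch (the plugin version, node version, and webclient working directory are assumptions, and the linked post may wire things up differently):
<plugin>
    <groupId>com.github.eirslett</groupId>
    <artifactId>frontend-maven-plugin</artifactId>
    <version>1.6</version>
    <configuration>
        <workingDirectory>webclient</workingDirectory>
    </configuration>
    <executions>
        <execution>
            <id>install-node-and-npm</id>
            <goals><goal>install-node-and-npm</goal></goals>
            <configuration><nodeVersion>v8.6.0</nodeVersion></configuration>
        </execution>
        <execution>
            <id>npm-install</id>
            <goals><goal>npm</goal></goals>
            <configuration><arguments>install</arguments></configuration>
        </execution>
    </executions>
</plugin>
This way the modules required by karma.conf.js resolve against the project-local node_modules instead of the global Jenkins tool installation.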
Part of my build process is to create a tar file of an input directory, located at src/bundle/bundle. In src/bundle/SConscript:
Import('*')
bundleDir = Dir("bundle")
jsontar = Command("bundle.tar", bundleDir,
"/home/dbender/bin/mkvgconf $SOURCE $TARGET")
in my SConstruct:
SConscript(Split('src/bundle/SConscript'),
exports='bin_env lib_env', build_dir='tmp/bundle')
When attempting to build:
scons: Reading SConscript files ...
scons: done reading SConscript files.
scons: Building targets ...
/home/dbender/bin/mkvgconf tmp/bundle/bundle tmp/bundle/bundle.tar
Input directory tmp/bundle/bundle not found!
scons: *** [tmp/bundle/bundle.tar] Error 1
scons: building terminated because of errors.
Clearly scons is not copying the src/bundle/bundle to tmp/bundle/bundle, but I am stumped as to why.
Footnotes:
Using an absolute pathname for mkvgconf is bad practice, but it's just a stopgap until I have this problem solved.
SCons doesn't know anything about the contents of your input src/bundle/bundle - only the program mkvgconf knows what it does with that directory.
One solution is to add an explicit dependency in the SConscript:
import os
Depends('bundle.tar', Glob(str(bundleDir) + os.path.sep + '*'))
That also means that when you update the contents of the bundle directory, the mkvgconf script will be rerun.
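Putting it together, the SConscript from the question might end up looking like this (a sketch based on the snippets above):
Import('*')
import os

bundleDir = Dir("bundle")
jsontar = Command("bundle.tar", bundleDir,
                  "/home/dbender/bin/mkvgconf $SOURCE $TARGET")
# Make the tar depend on every file inside the directory, so the sources
# are copied into the variant dir and content changes trigger a rebuild.
Depends(jsontar, Glob(str(bundleDir) + os.path.sep + '*'))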
PS. you might want to change the build_dir argument name to variant_dir, as the former is deprecated in favor of the latter in recent SCons releases.