I created the following Docker image containing a Spring Boot application that performs OpenCV and Tesseract operations:
FROM openjdk:17-alpine
RUN apk update && \
apk upgrade && \
apk add --no-cache tesseract-ocr opencv gcompat
COPY myapp-1.0.0.jar myapp-1.0.0.jar
ENTRYPOINT ["java","-jar","/myapp-1.0.0.jar"]
When I try to run it on a Linux machine (2 different VMs or Google Cloud), it starts and other endpoints work, but when I call the endpoint that starts the OpenCV operations, I get the following error:
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00000000001e1766, pid=1, tid=124
#
# JRE version: OpenJDK Runtime Environment (17.0+14) (build 17-ea+14)
# Java VM: OpenJDK 64-Bit Server VM (17-ea+14, mixed mode, sharing, tiered, compressed oops, compressed class ptrs, g1 gc, linux-amd64)
# Problematic frame:
# C 0x00000000001e1766
#
# Core dump will be written. Default location: Core dumps may be processed with "/usr/libexec/abrt-hook-ccpp %s %c %p %u %g %t e %P %I %h" (or dumping to //core.1)
#
# An error report file with more information is saved as:
# //hs_err_pid1.log
[thread 125 also had an error][thread 127 also had an error][thread 128 also had an error]
Can anybody help me analyze that? I understand it's a native C issue. Probably OpenCV or a dependency? But why?
I think it starts with the following OpenCV function:
Mat map = Imgcodecs.imdecode(new MatOfByte(byteArray), Imgcodecs.IMREAD_UNCHANGED);
complete error log here
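For analysis, the full JVM crash report referenced above can be copied out of the stopped container; a minimal sketch (the container id is whatever docker ps -a reports for the crashed run):
docker ps -a                                  # find the id of the crashed container
docker cp <container-id>:/hs_err_pid1.log .   # report path taken from the log output above
less hs_err_pid1.log                          # the "Problematic frame" and dynamic-library sections narrow down the faulting .so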
I've been trying to install cv2 on my machine (Linux, Ubuntu 20.04) with the environment manager conda.
My Python and conda information is pasted below.
Conda info:
active environment : base
active env location : /home/anonymous/anaconda3
shell level : 1
user config file : /home/anonymous/.condarc
populated config files : /home/anonymous/.condarc
conda version : 22.9.0
conda-build version : 3.22.0
python version : 3.9.13.final.0
virtual packages : __linux=5.15.0=0
__glibc=2.31=0
__unix=0=0
__archspec=1=x86_64
base environment : /home/anonymous/anaconda3 (writable)
conda av data dir : /home/anonymous/anaconda3/etc/conda
conda av metadata url : None
channel URLs : https://repo.anaconda.com/pkgs/main/linux-64
https://repo.anaconda.com/pkgs/main/noarch
https://repo.anaconda.com/pkgs/r/linux-64
https://repo.anaconda.com/pkgs/r/noarch
package cache : /home/anonymous/anaconda3/pkgs
/home/anonymous/.conda/pkgs
envs directories : /home/anonymous/anaconda3/envs
/home/anonymous/.conda/envs
platform : linux-64
user-agent : conda/22.9.0 requests/2.28.1 CPython/3.9.13 Linux/5.15.0-52-generic ubuntu/20.04.5 glibc/2.31
UID:GID : 1000:1000
netrc file : None
offline mode : False
# conda environments:
#
base * /home/anonymous/anaconda3
sys.version: 3.9.13 (main, Oct 13 2022, 21:15:33)
[G...
sys.prefix: /home/anonymous/anaconda3
sys.executable: /home/anonymous/anaconda3/bin/python
conda location: /home/anonymous/anaconda3/lib/python3.9/site-packages/conda
conda-build: /home/anonymous/anaconda3/bin/conda-build
conda-content-trust: /home/anonymous/anaconda3/bin/conda-content-trust
conda-convert: /home/anonymous/anaconda3/bin/conda-convert
conda-debug: /home/anonymous/anaconda3/bin/conda-debug
conda-develop: /home/anonymous/anaconda3/bin/conda-develop
conda-env: /home/anonymous/anaconda3/bin/conda-env
conda-index: /home/anonymous/anaconda3/bin/conda-index
conda-inspect: /home/anonymous/anaconda3/bin/conda-inspect
conda-metapackage: /home/anonymous/anaconda3/bin/conda-metapackage
conda-pack: /home/anonymous/anaconda3/bin/conda-pack
conda-render: /home/anonymous/anaconda3/bin/conda-render
conda-repo: /home/anonymous/anaconda3/bin/conda-repo
conda-server: /home/anonymous/anaconda3/bin/conda-server
conda-skeleton: /home/anonymous/anaconda3/bin/conda-skeleton
conda-token: /home/anonymous/anaconda3/bin/conda-token
conda-verify: /home/anonymous/anaconda3/bin/conda-verify
user site dirs:
CIO_TEST: <not set>
CONDA_DEFAULT_ENV: base
CONDA_EXE: /home/anonymous/anaconda3/bin/conda
CONDA_PREFIX: /home/anonymous/anaconda3
CONDA_PROMPT_MODIFIER: (base)
CONDA_PYTHON_EXE: /home/anonymous/anaconda3/bin/python
CONDA_ROOT: /home/anonymous/anaconda3
I've been trying for several days to install opencv (cv2) in the conda base environment with these commands:
conda install -c conda-forge opencv
conda install -c "conda-forge/label/broken" opencv
conda install -c "conda-forge/label/cf201901" opencv
conda install -c "conda-forge/label/cf202003" opencv
conda install -c "conda-forge/label/gcc7" opencv
I had also tried
conda install -c menpo opencv
too, but it didn't work for me.
As a last try I ran this command:
conda install --channel https://conda.anaconda.org/menpo opencv3
and I received this message:
Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: |
Found conflicts! Looking for incompatible packages.
This can take several minutes. Press CTRL-C to abort.
failed
UnsatisfiableError: The following specifications were found
to be incompatible with the existing python installation in your environment:
Specifications:
- opencv3 -> python[version='2.7.*|3.4.*|3.5.*|3.6.*']
Your python: python=3.9
If python is on the left-most side of the chain, that's the version you've asked for.
When python appears to the right, that indicates that the thing on the left is somehow
not available for the python version you are constrained to. Note that conda will not
change your python version to a different minor version unless you explicitly specify
that.
The following specifications were found to be incompatible with your system:
- feature:/linux-64::__glibc==2.31=0
- feature:|#/linux-64::__glibc==2.31=0
Your installed version is: 2.31
Any help is appreciated.
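Based on the solver output above, the conflict comes from opencv3 (menpo) only supporting Python 2.7-3.6 while the environment has Python 3.9. A possible workaround, sketched under the assumption that a dedicated environment (rather than base) is acceptable, is to let conda-forge resolve opencv against the current Python in a fresh env:
conda create -n cv python=3.9
conda activate cv
conda install -c conda-forge opencv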
I am new to EDK2.
For porting edk2 firmware to a new ARM64 platform, it would be good to first get a minimal edk2 port which can at least run the UEFI Shell; improvements can be added gradually from there.
It seems that the first step is rather steep, e.g., how do I determine a minimal set of "items" in the .dsc and .fdf files for a platform? In my case, I would like to build the .fd for my platform and treat it as BL33 of TF-A; effectively I would like to build an edk2 firmware to replace u-boot.
It seems that such a guide is hard to find on the web. I found an old version of edk2 which contains some instructions, but apparently they are obsolete (they do not exist in the latest master branch, but can be found in UDK branches such as UDK2014), and I am not sure why those documents were removed from the master branch.
Currently I can build the .fd for FVP (edk2-platforms/Platform/ARM/VExpressPkg/ArmVExpress-FVP-AArch64.dsc), and it seems that the build output FVP_AARCH64_EFI.fd is supposed to be treated as BL33. Theoretically this could be a prototype for my new ARM64 platform, but to me it's too complex to start with: the firmware is about 2.5 MiB in size (compared to 500 K for u-boot), so I guess it's far from a "minimum" version, and it's hard to figure out which features to remove (and how).
I am wondering if there is a detailed guide on this topic...
After 1 month of trial and error, today I managed to bring my ARM64 platform into a UEFI Shell environment. I treat it as my 1st milestone on the EDK2 journey. Below I will try to summarize the steps I took so far, as a tentative answer to my question above. Guidance/corrections/comments are welcome.
Get familiar with the UEFI/PI specs and the EDK2 implementation by reading books/specs/articles. Well, the UEFI/PI specs are thousands of pages long... how to start? My main reading list is:
"Beyond Bios--Developing with the Unified Extensible Firmware Interface", 3rd ed, by Vincent Zimmer, et al. As the authors explained, the book is a kind of high level summary of the thousands-paged specs. And I find that the book is well organized for a new comer to get familiar with various UEFI related concepts. The main purpose of the 1st read (before playing with edk2 code base) is to get familiar with concepts and architectural ideas, not the details yet. Related sections need to be consulted later when reading EDK2 implementations.
EDK2 specs, including:
EDKII User Manual
EDKII Build Specification
EDKII DSC/FDF/DEC/INF File Specification
Various articles on the web...
Get a reference platform which can correctly boot an FD image built from the latest EDK2 source, and play with the boot manager and Shell environment a bit. In my case, I chose the RPi4B. For me, this is very important, as the reference platform serves as a handrail during the whole process: whenever I encounter bugs or have doubts, I check the source/log of the reference platform. This solved most of the problems I encountered. Btw, always generate a "build log" and a "build report" for both the reference platform and the target platform, as the two files contain very detailed information for comparison and checking. Consult the EDK2 build spec on how to generate these two files during the build.
I use the following script to build for RPi4B platform:
#!/bin/bash
# https://github.com/tianocore/edk2-platforms#how-to-build-linux-environment
export WORKSPACE=/home/bruin/work/tianocore
export PACKAGES_PATH=$WORKSPACE/edk2:$WORKSPACE/edk2-platforms:$WORKSPACE/edk2-non-osi
pushd $WORKSPACE
rm -rf ./Build/RPi4
source edk2/edksetup.sh
echo "Building BaseTools..."
make -C edk2/BaseTools all
#sudo apt install acpica-tools # iasl
# pip install antlr4-python3-runtime # -Y EXECUTION_ORDER
echo "Building firmware for Pi4B..."
GCC5_AARCH64_PREFIX=aarch64-none-linux-gnu- build \
-n 4 \
-a AARCH64 \
-p Platform/RaspberryPi/RPi4/RPi4.dsc \
-t GCC5 \
-b NOOPT \
-v -d 9 -j RPi4-build.log \
-y RPi4-build-report.txt \
-Y PCD \
-Y LIBRARY \
-Y DEPEX \
-Y HASH \
-Y BUILD_FLAGS \
-Y FLASH \
-Y FIXED_ADDRESS \
-Y EXECUTION_ORDER \
all
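After a successful build, the image should land under the usual Build/<platform>/<TARGET>_<TOOLCHAIN>/FV directory; with the NOOPT/GCC5 settings above that would be something like:
ls -l Build/RPi4/NOOPT_GCC5/FV/RPI_EFI.fd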
For how to use the build result RPI_EFI.fd on the RPi4B, consult the following:
edk2-platforms/Platform/RaspberryPi/RPi4/Readme.md
readme.md inside https://github.com/pftf/RPi4/releases/download/v1.17/RPi4_UEFI_Firmware_v1.32.zip. Btw, I needed to replace the original start4.elf and fixup4.dat with the ones in the zip file, otherwise the boot of the RPi4 will fail, complaining with something like:
RpiFirmwareGetClockRate: Get Clock Rate return: ClockRate=0 ClockId=C
ASSERT [ArasanMMCHost] /home/bruin/work/tianocore/edk2-platforms/Platform/RaspberryPi/
Drivers/ArasanMmcHostDxe/ArasanMmcHostDxe.c(263): BaseFrequency != 0
It's worth analyzing the RPI_EFI.fd content to some extent, using some UEFI utilities. I mainly use the GUI version of UEFITool (sudo apt install uefitool uefitool-cli). Other tools are also available. The anatomy of RPI_EFI.fd is helpful when reading the EDK2 build specs, to check your understanding of the concepts.
One special aspect of RPI_EFI.fd is that the first 128K is the bl31.bin binary from ATF. I guess this is due to the special boot configuration method of the RPi. For my platform, I don't need that kind of packaging; I only need to build the UEFI image MY.fd, which is treated as the BL33 image and packaged into fip.bin together with the BL2 and BL31 images by the ATF build script.
Another aspect to notice is the "reset vector" at the beginning of the .fd file. This relates to the entry point of the UEFI image (and the entry point of each EDK2 module), as well as to interpreting the BL instruction for AArch64. Basically, it can be summarized as below:
The first [Components] entry for RPI_EFI.fd is ArmPlatformPkg/PrePi/PeiUniCore.inf, which has MODULE_TYPE = SEC.
What is this component? It is the first (and only) SEC (Security) module in RPi4. And what do the names PrePi and Pei imply?
... the PI spec is not tied to edk2 PEIMs, and I don't see where EDKII PEI modules are currently the only "acknowledged" silicon init environment. The edk2 tree itself seems to contain platforms that don't use the edk2 PEI module set at all, but (IIRC) jump from SEC to DXE. I believe "ArmPlatformPkg/PrePi" and "ArmVirtPkg/PrePi" are related to this.
--- https://listman.redhat.com/archives/edk2-devel-archive/2020-November/msg00021.html
Its entry point: all UEFI components have the same entry point (_ModuleEntryPoint).
By "component", it means either a UEFI driver and UEFI app, both are PE32 executables, usually with suffix .efi.
The .efis are converted from ELF executables (.dll) by GenFw tool: modifying the file headers.
To verify that "all components' entry point is _ModuleEntryPoint":
Check the .dll generation command line in the build report (build -y <BUILD_REPORT_FILE>); it contains two relevant flags: "aarch64-none-linux-gnu-gcc" -o xxx.dll -u _ModuleEntryPoint -Wl,-e,_ModuleEntryPoint ...
-u: gcc --help -v | grep "undefined SYMBOL" gives -u SYMBOL --undefined SYMBOL: Start with undefined reference to SYMBOL.
-Wl,-e: ld --help | grep "entry" gives -e ADDRESS, --entry ADDRESS: Set start address.
Check that in all .dll files the Entry point address == _ModuleEntryPoint: find . -type f -name "*.dll" -exec sh -c "readelf -a {} |grep -E 'Entry point address|_ModuleEntryPoint'" \;
Its entry point is the entry point of the whole UEFI FD image (i.e., from bl33_base_addr we jump to this _ModuleEntryPoint):
Topology of the UEFI Firmware File
A UEFI Firmware File (actually a UEFI Firmware Device - FD file) is a collection of UEFI binaries encapsulated into a single image. The format of this image is defined by the Platform Initialization Specification Volume 3. A Vector Table is located at the base of this file. A 'BL' branch instruction at the base of the firmware (location of the Reset Entry into the Vector Table) will jump to the first 'SEC' module of the UEFI Firmware Image.
--- https://github.com/lzeng14/tianocore/wiki/ArmPkg-Debugging
To verify the statements above:
Disassemble the reset vector (i.e., the 1st word) of the generated .FD (we get offset=0x360):
$ xxd -l 4 -e TEST.fd <== dump 4 bytes in little endian
00000000: 140000d8 <== BL {PC}+(0xd8<<2); offset=0x360
Check the entry point in the .dll (we get offset=0x240):
$ aarch64-none-elf-objdump -t ArmPlatformPrePiUniCore.dll|grep _ModuleEntryPoint
0000000000000240 g F .text 0000000000000000 _ModuleEntryPoint
$ readelf -h ArmPlatformPrePiUniCore.dll|grep Entry
Entry point address: 0x240
Compare the contents of the two files at their respective offsets (we get identical content):
$ xxd -s 0x360 -l 64 TEST.fd <== skip 0x360 bytes, dump 64 bytes
00000360: 901e 0094 050a 0094 ea03 00aa a1cd 0a58 ...............X
00000370: 0200 e0d2 2200 c0f2 0240 a0f2 0200 80f2 ...."....#......
00000380: c303 a0d2 e3ff 9ff2 6304 00d1 6300 028b ........c...c...
00000390: 0400 a1d2 0400 80f2 2000 03eb 8400 0054 ........ ......T
$ xxd -s 0x240 -l 64 ArmPlatformPrePiUniCore.dll <== skip 0x240 bytes
00000240: 901e 0094 050a 0094 ea03 00aa a1cd 0a58 ...............X
00000250: 0200 e0d2 2200 c0f2 0240 a0f2 0200 80f2 ...."....#......
00000260: c303 a0d2 e3ff 9ff2 6304 00d1 6300 028b ........c...c...
00000270: 0400 a1d2 0400 80f2 2000 03eb 8400 0054 ........ ......T
Prepare an empty pkg, and make it build OK. The main purpose is to exercise the EDK2 build system a bit, and to use the empty pkg as the starting point for the new platform.
Make a copy of RaspberryPi.dec, change all gRaspberry to gMyPlatform.
Make a copy of RPi4.dsc and RPi4.fdf, and comment out all stuff in DSC and FDF file.
Replace all GUIDs in the DSC/FDF/DEC files, generating new ones using an online GUID generator (or locally, as sketched after this list).
Note that PCDs are declared in DEC files, and DEC files are referred to by modules (INF files). As the empty package contains no modules, no PCD definitions will be available in the FDF. So for a successful build of the empty package, we need to comment out all PCD references in the FDF.
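If an online generator is inconvenient, new GUIDs can also be produced locally, e.g.:
uuidgen   # prints a fresh GUID to paste into the DEC/DSC/FDF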
The NOOPT build command for MyPlatform is as below:
#!/bin/bash
export WORKSPACE=/home/bruin/work/tianocore
export PACKAGES_PATH=$WORKSPACE/edk2:$WORKSPACE/edk2-platforms:$WORKSPACE/edk2-non-osi
pushd $WORKSPACE
source edk2/edksetup.sh
echo "Building BaseTools..."
make -C edk2/BaseTools all
echo "Building UEFI firmware for MyPlatform..."
GCC5_AARCH64_PREFIX=aarch64-none-linux-gnu- build \
-n 4 \
-a AARCH64 \
-p Platform/MyCorp/MyPlatform/MyPlatform.dsc \
-t GCC5 \
-b NOOPT \
-v -d 9 -j MyPlatform-build.log \
-y MyPlatform-build-report.txt \
-Y EXECUTION_ORDER \
-Y PCD \
-Y LIBRARY \
-Y DEPEX \
-Y HASH \
-Y BUILD_FLAGS \
-Y FLASH \
-Y FIXED_ADDRESS \
all
popd
Add the 1st component, ArmPlatformPrePiUniCore. This component prepares the HOBs for the DXE phase. The main purpose is to get the serial port working and the memory config correct. Another purpose of this step is to get familiar with the steps for adding a component/module/lib. Below is a brief summary of the steps:
Uncomment the module's INF in both the DSC ([Components] section) and the FDF ([FV.FVMAIN_COMPACT]).
Rebuild the pkg, and resolve all reported Instance of library class [xxxLib] is not found errors by updating the [LibraryClasses] sections of the DSC.
This step repeats dozens of times.
Some lib-classes have multiple lib-instances; make sure to choose the appropriate lib-instance (ref. the build report of RPi4).
If you encounter ModuleEntryPoint.iiii:31: Error: immediate out of range: enable gArmTokenSpaceGuid.PcdFdBaseAddress and gArmTokenSpaceGuid.PcdFdSize in the FDF.
If you encounter undefined reference to _gPcd_BinaryPatch_PcdSerialClockRate: set PcdSerialClockRate in the [PcdsPatchableInModule] section of the DSC. FIXME: why? ref.
Check the PCDs listed in the build log: inspect any abnormal PCD values, and supply correct values.
Customize platform-specific drivers or libraries (a search command for locating existing library instances is sketched after this list).
SerialPortLib: locate the lib-class header file (MdePkg/Include/Library/SerialPortLib.h) with find edk2 -type f -name "*.dec" -exec grep -Hn SerialPortLib {} \;. The following functions are required:
SerialPortInitialize()
SerialPortWrite()
SerialPortRead()
SerialPortPoll()
SerialPortSetControl(): RETURN_UNSUPPORTED
SerialPortGetControl(): RETURN_UNSUPPORTED
SerialPortSetAttributes(): RETURN_UNSUPPORTED
ArmPlatformLib: interface header at Include/Library/ArmPlatformLib.h. The following functions are required:
ArmPlatformGetCorePosition(): returns the CPU index within the cluster, given the MPIDR value. This function is used in _ModuleEntryPoint for setting up the stacks of the secondary cores. Assuming one cluster for now.
ArmPlatformIsPrimaryCore()
ArmPlatformGetPrimaryCoreMpId()
ArmPlatformGetBootMode()
ArmPlatformPeiBootAction()
ArmPlatformInitialize()
ArmPlatformGetVirtualMemoryMap()
ArmPlatformGetPlatformPpiList()
etc...
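To locate existing instances of these library classes to use as templates, a search over the workspace (run from $WORKSPACE, directory names as in the build scripts above) is usually enough:
find edk2 edk2-platforms -type f -name "*.inf" \( -path "*SerialPortLib*" -o -path "*ArmPlatformLib*" \)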
Uncomment more modules in the DSC/FDF, module by module... For drivers/libs which are RPi-platform specific, we can:
either search edk2/edk2-platforms for similar driver or lib instances, or
copy the RPi4 implementation and comment out most of the content; make the pkg build successfully first, and then fix bugs.
Debugging: my current main debugging method is adding "printf()"-style output, i.e., the edk2 macro DEBUG((DEBUG_INFO, ...)). One needs to set gEfiMdePkgTokenSpaceGuid.PcdDebugPrintErrorLevel to an appropriate value to see more debug output.
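To confirm which PcdDebugPrintErrorLevel value was actually applied at build time, it can be grepped out of the build log produced by the -j option (log file name from the MyPlatform build script above):
grep -n PcdDebugPrintErrorLevel MyPlatform-build.log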
When attempting to run a Jupyter notebook that runs export_png via nbconvert in a Docker container, I get the following error.
Neither firefox and geckodriver nor a variant of chromium browser and chromedriver are available on system PATH. You can install the former with 'conda install -c conda-forge firefox geckodriver'.
I have a Linux vm configured with similar dependencies and it all works fine. I see there is a bug around finding firefox if the path points to a script, but mine does not, as shown in the last lines of output below. I'm starting to think the runtime error produced is a red herring and it's actually throwing a different error, but hardcoded to that message.
Are there any known issues with running bokeh export_png within a Docker image? Any necessary config? Is anyone doing this successfully with versions of firefox, geckodriver, selenium, and bokeh you could share? Installing chromium instead is not easy, as this is all behind a company conda mirror without those packages.
/opt/conda/lib/python3.7/site-packages/bokeh/io/webdriver.py in get(self)
116 if not self.reuse or self.current is None:
117 self.reset()
--> 118 self.current = self.create()
119 return self.current
120
/opt/conda/lib/python3.7/site-packages/bokeh/io/webdriver.py in create(self, kind)
120
121 def create(self, kind: Optional[DriverKind] = None) -> WebDriver:
--> 122 driver = self._create(kind)
123 self._drivers.add(driver)
124 return driver
/opt/conda/lib/python3.7/site-packages/bokeh/io/webdriver.py in _create(self, kind)
138 return driver
139
--> 140 raise RuntimeError("Neither firefox and geckodriver nor a variant of chromium browser and " \
141 "chromedriver are available on system PATH. You can install the former " \
142 "with 'conda install -c conda-forge firefox geckodriver'.")
RuntimeError: Neither firefox and geckodriver nor a variant of chromium browser and chromedriver are available on system PATH. You can install the former with 'conda install -c conda-forge firefox geckodriver'.
[g_de_auto#0ff0d47b4a0f data_science_foundation]$ which firefox
/opt/conda/bin/FirefoxApp/firefox
[g_de_auto#0ff0d47b4a0f data_science_foundation]$ which geckodriver
/opt/conda/bin/geckodriver
[g_de_auto#0ff0d47b4a0f data_science_foundation]$ cat /opt/conda/bin/FirefoxApp/firefox
ELF>##1#8
######h###### # #-=#=#-=#=#0##DDPtd` ` #` #<<QtdRtd-=#=#PP/lib64/ld-linux-x86-64.so.2GNU GNUB
m5Jzq
ms- libpthread.so.0__errno_locationlibdl.so.2__gmon_start__libstdc++.so.6libm.so.6libgcc_s.so.1libc.so.6execv__stack_chk_fail_]`##D]fD`##UH`##HHHH?HHtHt ##(##0##8#####H#H/Ht5/%/#%/h%/h%/h%/h%/h%/h%/h%R/f1I^HHPTI#Hp#H#fDU`##H=`##HtHt
17dH%(H;tf.DAWAAVIAUIATL%(+UH-(+SL)1H]HtLLDAHH9u[]A\A]A^A_ff./proc/self/exeCouldn't find the application directory.
-binExec failed with error: %s
Update
The underlying error I get out of the _try_create_firefox_webdriver function is SessionNotCreatedException('Unable to find a matching set of capabilities')
I chose to use Xvfb. I looked into Kaleido, but I did not see an easy integration with Bokeh. The steps were:
Install the following dependencies.
yum install gtk3-devel xorg-x11-server-Xvfb.x86_64 selenium geckodriver=0.26.0 firefox=74.0
Tell firefox which display to use (the virtual one).
export DISPLAY=:0
Launch Xvfb along with my command.
Xvfb :0 -screen 0 1920x1080x24 & script_that_turns_charts_to_images
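Putting the pieces together, a full run might look like the sketch below (the notebook name and the nbconvert invocation are placeholders for whatever triggers export_png):
Xvfb :0 -screen 0 1920x1080x24 &    # start a virtual display
export DISPLAY=:0                   # point firefox/geckodriver at it
jupyter nbconvert --to notebook --execute my_notebook.ipynb   # placeholder command that calls export_png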
How can I rebuild the jna-4.1.0.jar file to include the linux-s390x specific libjnidispatch.so file?
This is needed by one of my applications, which is failing on the dependency on this libjnidispatch.so file.
I did try to follow this question: How to use JNAerator with multiple dynamic libraries under one header?
Syntax used:
java -jar jnaerator-0.11-shaded.jar \
  -arch linux-s390x linux-s390x/libjnidispatch.so \
  -mode jna-3.3.0-jenkins-3.jar \
  -jar jna-3.3.0-jenkins-3_updated.jar
I get the error below:
ERROR: JNAeration failed !
#
# Error parsing arguments :
# -arch linux-s390x linux-s390x/libjnidispatch.so -mode jna-3.3.0-jenkins-3.jar -jar jna-3.3.0-jenkins-3_updated.jar : com.ochafik.lang.jnaerator.JNAerator$CommandLineException: Argument 'linux_s390x' is not one of the expected values :
# linux_x64,
# linux_x86,
# armeabi,
# sunos_x86,
# sunos_sparc,
# darwin_universal,
# win32,
# win64
# Please use -h for help on the command-line options available.
Clone the git repository.
Make sure you have a JDK installed, with Apache ant and native build tools (make, gcc, grep, etc).
Then just run ant native; ant jar.
Note that s390x may not be recognized out of the box, but if you look through the build files to see how the other platforms are handled, it should be straightforward to add a switch for s390x (see build.xml and native/Makefile).
If you have a package distribution for a previous JNA version, that package may include the necessary native files; if so, consider submitting them to the JNA project as a GitHub pull request to have them incorporated into the project proper.
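For reference, the basic flow described above might look like this (the repository URL is the upstream JNA project; s390x will likely still need the build-file tweaks mentioned above):
git clone https://github.com/java-native-access/jna.git
cd jna
ant native   # builds libjnidispatch.so for the host platform
ant jar      # builds the jar, which should pick up the freshly built native library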
I am attempting to rebuild my development/test environment on a new laptop running Windows 7 32-bit. ImageMagick is one of the modules I had installed in an (ancient) version on my old laptop. I have downloaded and installed Perl from dwimperl, which is v5.14.2, and a couple of modules installed successfully, while several did not:
cpanm Image::Magick - failed
cpanm DB_File - failed
cpanm Time::HiRes - failed
I searched around and found a discussion on magick failing to install on v5.12, but couldn't tell if that was supposed to have been fixed, or if I need to attempt to recreate the fix discussed for 5.12
Should I try installing Perl from Strawberry? It looks like it is a little newer, v5.18?
Is there something I can tweak before re-running the cpan installs?
I installed the binaries from ImageMagick, from Link first, then ran cpan again, and cpan still fails....
This is the top part of the build log, down to where it really goes south and starts kicking out errors:
cpanm (App::cpanminus) 1.6941 on perl 5.014002 built for MSWin32-x86-multi-thread
Work directory is C:\Users\dtbaker/.cpanm/work/1377281741.8420
You have make C:\Dwimperl\c\bin\dmake.exe
You have LWP 6.03
Falling back to Archive::Tar 1.80
Searching Image::Magick on cpanmetadb ...
--> Working on Image::Magick
Fetching http://www.cpan.org/authors/id/J/JC/JCRISTY/PerlMagick-6.86.tar.gz
-> OK
Unpacking PerlMagick-6.86.tar.gz
Entering PerlMagick-6.86
META.yml/json not found. Creating skelton for it.
Configuring PerlMagick-6.86
Running Makefile.PL
################################### WARNING! ###################################
# It seems that you are trying to install Perl::Magick on a MS Windows box with
# perl + gcc compiler (e.g. strawberry perl), however we cannot find ImageMagick
# binaries installed on your system.
#
# Please check the following prerequisites:
#
# 1) You need to have installed ImageMagick Windows binaries from
# http://www.imagemagick.org/script/binary-releases.php#windows
#
# 2) We only support dynamic (DLL) ImageMagick binaries
# note: it is not possible to mix 32/64-bit binaries of perl and ImageMagick
#
# 3) During installation select that you want to install ImageMagick's
# development files (libraries+headers)
#
# 4) You also need to have ImageMagick's directory in your PATH
# note: we are checking the presence of convert.exe and/or identify.exe tools
#
# 5) You might need Visual C++ Redistributable Package installed on your system
# see instructions on ImageMagick's Binary Release webpage
#
# We are gonna continue, but chances for successful build are very low!
################################################################################
Checking if your kit is complete...
Looks good
Note (probably harmless): No library found for -lMagickCore-6.Q16
Writing Makefile for Image::Magick
Writing MYMETA.yml and MYMETA.json
-> OK
Checking dependencies from MYMETA.json ...
Checking if you have ExtUtils::MakeMaker 0 ... Yes (6.62)
Building and testing Image-Magick-6.86
cp Magick.pm blib\lib\Image\Magick.pm
AutoSplitting blib\lib\Image\Magick.pm (blib\lib\auto\Image\Magick)
C:\Dwimperl\perl\bin\perl.exe C:\Dwimperl\perl\lib\ExtUtils\xsubpp -typemap C:\Dwimperl\perl\lib\ExtUtils\typemap -typemap typemap Magick.xs > Magick.xsc && C:\Dwimperl\perl\bin\perl.exe -MExtUtils::Command -e mv -- Magick.xsc Magick.c
gcc -c -s -O2 -DWIN32 -DPERL_TEXTMODE_SCRIPTS -DUSE_SITECUSTOMIZE -DPERL_IMPLICIT_CONTEXT -DPERL_IMPLICIT_SYS -fno-strict-aliasing -mms-bitfields -s -O2 -DVERSION=\"6.86\" -DXS_VERSION=\"6.86\" "-IC:\Dwimperl\perl\lib\CORE" -D_LARGE_FILES=1 -DHAVE_CONFIG_H Magick.c
Magick.xs:60:31: error: magick/MagickCore.h: No such file or directory
Magick.xs:167: error: expected specifier-qualifier-list before 'MagickRealType'
Magick.xs:188: error: expected specifier-qualifier-list before 'ImageInfo'
Magick.xs:210: error: 'MagickNoiseOptions' undeclared here (not in a function)
"however we cannot find ImageMagick binaries installed on your system."
Is ImageMagick in your path, as recommended in point 4)? Open cmd.exe and type convert -v or convert.exe -v. If you don't see information about ImageMagick (Windows has a built-in convert command), it is very likely you have to add it to your path.
You also have to check points 1), 3), and 5). After that, try to run the installation process through cpanm again.
Trying to install for Citrus Perl, I discovered while debugging through the Perl part of the install that the mingw64 install had not included 'pexports.exe'. Downloading that from https://sourceforge.net/projects/mingw/files/MinGW/Extension/pexports/ and placing it in the mingw64 directory was necessary to solve the problem of a long list of library export symbols not being found.
Prior to that I had also set the environment variables CPATH, C_INCLUDE_PATH, and CPLUS_INCLUDE_PATH to point to the "include" directory of the ImageMagick installation in C:\Program Files (x86). (When you install ImageMagick you should check the box to also install for Strawberry Perl.)