How to change a system variable in GeoServer

I'm on Linux, using PostgreSQL, GeoServer, and OpenLayers.
I want to display a shapefile with GeoServer. I stored it in PostgreSQL and imported the table into GeoServer. The shapefile is 2.2 GB.
When I try to display my shapefile with the OpenLayers viewer (in GeoServer), I get a white screen and this error in the logs:
ERROR [geoserver.ows] org.geoserver.platform.ServiceException: Rendering process failed ....
Caused by: java.lang.RuntimeException: org.postgresql.util.PSQLException: ERROR: could not write to tuplestore temporary file: No space left on device where: SQL function "st_force_2d" statement 2
I saw here: https://docs.geoserver.org/stable/en/user/services/wfs/outputformats.html that the size limit for shapefile output is 2 GB, but that this limit can be modified by setting the system variable GS_SHP_MAX_SIZE.
How can I do that? I searched the Internet but couldn't find a solution.

The link you mentioned says:
it’s possible to modify those limits by setting the GS_SHP_MAX_SIZE and GS_DBF_MAX_SIZE system variables to a different value.
So I think it's similar to the GEOSERVER_DATA_DIR configuration.
For a binary installation: you should set an OS environment variable. I'm not sure, but the command is something like this:
$ export GS_SHP_MAX_SIZE=<limit of .shp size in bytes>
$ export GS_SHP_MAX_SIZE=3000000000
If that doesn't work, look up how to set environment variables on your Linux distribution.
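For example, a minimal sketch of setting the variable before launching GeoServer (the install path below is an assumption; adjust it to your setup):
# Hypothetical GeoServer install location -- adjust to your setup
export GEOSERVER_HOME=/opt/geoserver
# Raise the shapefile output limit to ~3 GB (value is in bytes)
export GS_SHP_MAX_SIZE=3000000000
# Start GeoServer so the process inherits the variable
$GEOSERVER_HOME/bin/startup.sh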
For a web archive (WAR) installation: you should change the web server or GeoServer configuration. There are two ways of doing it:
Context parameter: find and edit web.xml in the WEB-INF folder, then add this context parameter under the root element (the <web-app> tag):
<context-param>
  <param-name>GS_SHP_MAX_SIZE</param-name>
  <param-value>Limit of .shp size in bytes</param-value>
</context-param>
Java system property: this is very similar to the binary installation, except you set the variable for the web server. If you are using Tomcat, add this to your environment variables (note the -D prefix, which marks a Java system property):
$ export CATALINA_OPTS="-DGS_SHP_MAX_SIZE=<limit of .shp size in bytes>"
$ export CATALINA_OPTS="-DGS_SHP_MAX_SIZE=3000000000"
Be careful when changing Java system properties! They affect the whole Apache Tomcat instance and might cause problems in other web apps installed on it.
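To make the setting persistent for Tomcat alone, rather than for your whole shell, a common approach is a setenv.sh file; a minimal sketch, assuming a standard Tomcat layout:
# $CATALINA_BASE/bin/setenv.sh -- picked up automatically by catalina.sh
# Append the GeoServer shapefile limit as a JVM system property
export CATALINA_OPTS="$CATALINA_OPTS -DGS_SHP_MAX_SIZE=3000000000"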


How to set dependency-mapping binding in gradle bootBuildImage (Spring-boot 2.7.1, native)

I am using Spring Boot 2.7.1 with native configuration, following the guide in the link below.
Spring native official doc
My problem is that when running bootBuildImage, the buildpack ["gcr.io/paketo-buildpacks/java-native-image:7.19.0"] tries to download the external dependency paketo-buildpacks/bellsoft-liberica from https://download.bell-sw.com/vm/22.3.0/bellsoft-liberica-vm-core-openjdk17.0.5+8-22.3.0+2-linux-amd64.tar.gz, which is not allowed by the company firewall.
I then found that you can configure dependency-mapping bindings for these dependencies within the required buildpack, at least using this pack CLI guide.
But when using pack CLI alone, the Gradle bootBuildImage task becomes somewhat redundant, and I would have to use an external tool to finish the native Docker container and image. I would like to use only bootBuildImage to map these dependency bindings.
I found the binding function in the Gradle bootBuildImage docs, but I am not sure what string it expects, or whether the path should be similar to the pack CLI config; I can't find any relevant info.
Here is my bootBuildImage config:
bootBuildImage {
    builder = 'docker.io/paketobuildpacks/builder:tiny'
    runImage = 'docker.io/paketobuildpacks/run:tiny-cnb'
    buildpacks = ['gcr.io/paketo-buildpacks/java-native-image']
    binding("bindnings/bellsoft-jre-config:/platform/bindings/bellsoft-jre-config")
    environment = [
        "BP_NATIVE_IMAGE" : "true",
    ]
}
The dependency-mapping binding contains two files:
The type file, created with:
echo "dependency-mapping" >> type
The SHA-256 file for bellsoft-liberica, named 3dea0f7a9312c738d22b5e399b6ce9abe13b45b2bc2c04346beb941a94e8a932, created with:
echo "https://download.bell-sw.com/vm/22.3.0/bellsoft-liberica-vm-core-openjdk17.0.5+8-22.3.0+2-linux-amd64.tar.gz" >> 3dea0f7a9312c738d22b5e399b6ce9abe13b45b2bc2c04346beb941a94e8a932
And yes, I'm aware that this is the exact same URL; it's just to test that the binding config is set up correctly. If it is, the build should instead fail with an untrusted-certificate error when downloading.
Currently the build fails with:
Caused by: org.springframework.boot.buildpack.platform.docker.transport.DockerEngineException: Docker API call to 'localhost/v1.24/containers/create' failed with status code 400 "Bad Request"
at org.springframework.boot.buildpack.platform.docker.transport.HttpClientTransport.execute(HttpClientTransport.java:156)
at org.springframework.boot.buildpack.platform.docker.transport.HttpClientTransport.execute(HttpClientTransport.java:136)
at org.springframework.boot.buildpack.platform.docker.transport.HttpClientTransport.post(HttpClientTransport.java:108)
at org.springframework.boot.buildpack.platform.docker.DockerApi$ContainerApi.createContainer(DockerApi.java:340)
at org.springframework.boot.buildpack.platform.docker.DockerApi$ContainerApi.create(DockerApi.java:331)
at org.springframework.boot.buildpack.platform.build.Lifecycle.createContainer(Lifecycle.java:237)
at org.springframework.boot.buildpack.platform.build.Lifecycle.run(Lifecycle.java:217)
at org.springframework.boot.buildpack.platform.build.Lifecycle.execute(Lifecycle.java:151)
at org.springframework.boot.buildpack.platform.build.Builder.executeLifecycle(Builder.java:157)
at org.springframework.boot.buildpack.platform.build.Builder.build(Builder.java:115)
at org.springframework.boot.gradle.tasks.bundling.BootBuildImage.buildImage(BootBuildImage.java:521)
I assume this is caused by an invalid binding config, but I can't find what it should be.
Paketo configuration (binding)
Dependency mapping bindings can be tricky. There are a number of things that have to be just right, or the buildpacks won't pick up the binding and won't map dependencies.
While there is talk of changing this in buildpacks to make swapping out dependencies easier, the short-term solution is to use binding-tool.
You can run bt dm -b paketo-buildpacks/bellsoft-liberica and it will go download the dependencies from the specified buildpack and generate the binding files for you.
It will by default download dependencies and write the bindings to $PWD/bindings but you can change that. For example, I like to put my dependencies in my home directory so I can share them across apps. Ex: SERVICE_BINDING_ROOT=~/.bt/bindings bt dm ..., or export SERVICE_BINDING_ROOT=~/.bt/bindings (or whatever command you run to set an env variable in your shell).
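For example, a quick sketch based on the commands above:
# Download dependencies and generate the binding files (defaults to $PWD/bindings)
bt dm -b paketo-buildpacks/bellsoft-liberica
# Or keep the bindings in your home directory so they can be shared across apps
export SERVICE_BINDING_ROOT=~/.bt/bindings
bt dm -b paketo-buildpacks/bellsoft-liberica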
Once you have the bindings created, you just need to point your app to them. How you set the property differs between Maven & Gradle, but the value of the property is the same. It should be <local-path>:<container-path>.
The local path should be the full or relative path to where you created the bindings with bt dm. The container path should almost always be /platform/bindings. This maps your full set of bindings locally to the full set of bindings that the buildpacks will consume. In other words, put all of your bindings into the same directory locally, map that to /platform/bindings and the buildpacks will see everything.
For example with Gradle: binding("bindings/:/platform/bindings").
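Applied to the configuration from the question, that would look something like this sketch (builder, buildpack, and environment values are copied from the question; not a definitive setup):
bootBuildImage {
    builder = 'docker.io/paketobuildpacks/builder:tiny'
    buildpacks = ['gcr.io/paketo-buildpacks/java-native-image']
    // Map the whole local bindings directory to the buildpack bindings root
    binding("bindings/:/platform/bindings")
    environment = ["BP_NATIVE_IMAGE" : "true"]
}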
You can adjust the container path by setting SERVICE_BINDING_ROOT in the container as well, but it doesn't offer a lot of advantage.
You can also set multiple entries for bindings, so long as the paths are unique. So you could set binding("/home/user/.bt/bindings/foo:/platform/bindings/foo") and also binding("bindings/bar:/platform/bindings/bar"). That would let you take bindings from two different locations locally and map them into the /platform/bindings directory so both would be visible to buildpacks. This gives you more fine-grained control but as you can see becomes pretty verbose.
Details on configuring Maven and configuring Gradle for buildpacks can be found at those links.

Can't create datasets and load images in COCO annotator

I'm trying to annotate images with COCO key points for pose estimation using https://github.com/jsbroks/coco-annotator. As described in the Installation section, I cloned the repo and installed Docker and Docker Compose. Following this, I started the containers with docker-compose up, and they are running.
I am now on the website https://annotator.justinbrooks.ca/. I created one user and some datasets, but they do not appear in the repo's datasets/ folder. I tried to create them manually and load images into them, but they do not appear in the website's graphical interface.
I tried to scan, reload the web page, and create other datasets, but it does not work. The container seems to work properly: it detects when I put an image in the datasets/ folder, but it throws some errors.
Here are the last lines (I can post the whole log):
annotator_webclient | [File Watcher] File /datasets/haricot.jpg for created
annotator_webclient | [File Watcher] Adding new file to database: /datasets/image
annotator_message_q | 2019-05-16 13:01:08.841 [error] <0.461.0> closing AMQP connection <0.461.0> (172.18.0.4:42614 -> 172.18.0.2:5672):
annotator_message_q | missed heartbeats from client, timeout: 60s
Am I missing something fundamental, or is there a bug?
I'm using Safari on macOS, and I also tried Firefox on Ubuntu 18. I'm not behind a proxy, but maybe some ports are not open or something like that.
Creator of COCO Annotator here.
I think you are missing a fundamental concept: the demo runs on a VM and has nothing to do with your local instance.
Once you have docker-compose running, you can access your local instance at http://localhost:5000/.
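In other words (a minimal sketch; the port comes from the project's default docker-compose setup):
# from the cloned coco-annotator directory
docker-compose up
# then browse to your local instance, not the demo site
# (open is macOS; on other systems just paste the URL into a browser)
open http://localhost:5000/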

How can I unblacklist 'libnvomx.so', in order to resolve "no such element factory 'omxh264enc'!"?

(Background: in a Docker container on an NVIDIA Jetson TX2 board, I have decompressed NVIDIA's Linux for Tegra tarball, which contains lots of drivers and shared object files, some of which provide GStreamer element factories that produce elements I use in my GStreamer pipeline. I am trying to run the pipeline in the Docker container.)
However, there is an element in my GStreamer pipeline (on this Tegra board), called 'omxh264enc', which I haven't been able to create.
I've put the corresponding 'libnvomx.so' in my drivers folder, which is in the exported paths GST_PLUGIN_PATH and LD_LIBRARY_PATH.
ldd -r does not show any missing libraries for libnvomx.so.
However, when I try to run the pipeline, the output includes:
WARN omx gstomx.c:2826:plugin_init: Failed to load configuration file: Valid key file could not be found in search dirs (searched in: /root/.config:/etc/xdg as per GST_OMX_CONFIG_DIR environment variable, the xdg user config directory (or XDG_CONFIG_HOME) and the system config directory (or XDG_CONFIG_DIRS)
INFO omx gstomx.c:2831:plugin_init: Using default configuration
ERROR omx gstomx.c:2894:plugin_init: Core '/usr/lib/aarch64-linux-gnu/tegra/libnvomx.so' does not exist for element 'omxh264enc'
WARN GST_PLUGIN_LOADING gstplugin.c:526:gst_plugin_register_func: plugin "/gst_1.8.3/libs/gstreamer-1.0/libnvomx.so" failed to initialise
and when I use GST_DEBUG=3 gst-inspect-1.0 libnvomx.so, I see that libnvomx.so is blacklisted:
Plugin Details:
  Name                     libnvomx.so
  Description              Plugin for blacklisted file
  Filename                 /gst_1.8.3/libs/gstreamer-1.0/libnvomx.so
  Version                  0.0.0
  License                  BLACKLIST
  Source module            BLACKLIST
  Binary package           BLACKLIST
  Origin URL               BLACKLIST
I have copied libnvomx.so into /usr/lib/aarch64-linux-gnu/tegra but this did not make a difference (probably because libnvomx is blacklisted).
I don't know where to find the gstomx.conf file, in which I could maybe change the path /usr/lib/aarch64-linux-gnu/tegra/libnvomx.so to my designated drivers folder (/gst_1.8.3/libs/gstreamer-1.0/). I have used 'find' on /etc and some other folders but didn't find it (I didn't actually find a .config folder on the system).
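Based on the search directories in the log above, I assume I could create one at /etc/xdg/gstomx.conf (or point GST_OMX_CONFIG_DIR somewhere). A sketch of what I guess an entry would look like; the type-name, component-name, and port indices below are my assumptions, not values I have verified:
# /etc/xdg/gstomx.conf (hypothetical entry)
[omxh264enc]
type-name=GstOMXH264Enc
core-name=/gst_1.8.3/libs/gstreamer-1.0/libnvomx.so
component-name=OMX.Nvidia.h264.encoder
rank=257
in-port-index=0
out-port-index=1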
(There are also other plugins with missing symbols: nvidia_drv.so with undefined symbol TimerSet, and libglx.so with undefined symbol serverClient. I would like to find out what is supposed to provide these symbols, but these are not (direct) dependencies of libnvomx.so.)
So how can I initialise / unblacklist libnvomx.so so I can use 'omxh264enc'? Do I need to find or make a gstomx configuration file, or can I make it work with the default configuration? I read somewhere there may be a solution using a 'symlink', but at the moment I'm not familiar with what these are or how they work.
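If the blacklist verdict is cached (I believe GStreamer keeps a binary registry under ~/.cache/gstreamer-1.0/, but I'm not certain), maybe deleting it would force a rescan once the underlying problem is fixed, e.g.:
# Remove the cached registry so all plugins are re-inspected on the next run (assumption)
rm -f ~/.cache/gstreamer-1.0/registry.*.bin
GST_DEBUG=3 gst-inspect-1.0 libnvomx.so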
Let me know if you need more info, thanks.

Neo4j Windows installation

I downloaded the Neo4j community version, the file named "neo4j-desktop-offline-1.1.13-setup.exe". Running the binary installed it successfully. Launching the application takes me to the database location configuration ("Please choose path where you want to store application data"), but regardless of what path I choose, clicking "Confirm" does nothing at all and it remains on the same screen.
I am using Windows 10.
I had the same problem. I fixed it by adding a new entry to the PATH environment variable:
Advanced System Settings > Environment Variables > System variables > Path > New: C:\Windows\System32\wbem
After setting it, reopen the Neo4j application and it will let you select the path where you want to save application data.
REASON
In the developer console, you will see the following error:
Uncaught (in promise) Error: Command failed: wmic logicaldisk get size,freespace,caption
'wmic' is not recognized as an internal or external command, operable program or batch file.
The actual WMIC binary is located at C:\Windows\System32\wbem\WMIC.exe; you need to add that directory to the Path if it is not yet there.
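To verify the fix, you can run the same command Neo4j Desktop uses (taken from the error above) in a freshly opened Command Prompt; it should list your drives instead of the "not recognized" error:
wmic logicaldisk get size,freespace,caption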
Try opening http://127.0.0.1:7477 or http://localhost:7474 in your browser.
I had the same problem using Windows 7, and after trying the solutions others have suggested around fixing the Path environment variable to no avail, I found upgrading PowerShell solved the problem. To do this, I upgraded my Windows Management Framework (to 5.1 at the time of this writing) by following the appropriate link for my version of Windows from here:
https://learn.microsoft.com/en-us/powershell/scripting/windows-powershell/install/installing-windows-powershell
For Windows 7, I retrieved Win7AndW2K8R2-KB3191566-x64.zip from the Download link found here: https://www.microsoft.com/en-us/download/details.aspx?id=54616.
I unzipped and installed it, and after a reboot it worked (for Neo4j Desktop 1.4.15). Upgrades to WMF for other versions of Windows are on the same page.

Not able to start Informix server

I am new to Informix. I installed Informix in the folder /opt/Informix_Software_Bundle.
I am using the following commands to start the Informix server:
cd /opt/Informix_Software_Bundle
sudo ./oninit -i
And I am getting the following error message:
Unable to read $INFORMIXDIR (/usr/informix).
So I tried to set the environment variable INFORMIXDIR in ~/.profile as given below:
export INFORMIXDIR="/opt/Informix_Software_Bundle"
I also tried the following command:
INFORMIXDIR=/opt/Informix_Software_Bundle;export INFORMIXDIR
I tried logging out and back in, but nothing helped.
I would greatly appreciate it if somebody could help me with this. Thanks for reading.
System - Mac OS Sierra
Informix version - 12.10
There are a number of environment variables used by the Informix server which include:
INFORMIXDIR : points to the directory where the product is installed. Add $INFORMIXDIR/bin to $PATH.
ONCONFIG : indicates the name of the main server configuration file in $INFORMIXDIR/etc. If not specified this will default to "onconfig."
INFORMIXSERVER : indicates the name of the Informix instance to connect to and refers to the DBSERVERNAME or a DBSERVERALIAS entry in the onconfig file referenced by $ONCONFIG.
INFORMIXSQLHOSTS : indicates the full path name of the communications configuration file used by Informix clients and servers. This will include an entry for the name specified by $INFORMIXSERVER. If this environment variable is not specified it defaults to $INFORMIXDIR/etc/sqlhosts.
Just having some or all of these environment variables set is not sufficient for starting an Informix server from scratch. There are setup tasks such as editing the onconfig file to specify the location of the root chunk - this is the initial location where the server stores its data. For more details you might like to refer to the Informix online documentation:
Getting started with IBM Informix
Overview of database server configuration and administration
One of the most straightforward ways to get started is to use the demonstration server instance that may be created during installation. I'm not directly familiar with the Mac OS installation but for all other platforms the Informix installation process includes an option to configure and initialize a demonstration server, and I suspect Mac OS will be the same. This will create an environment file in $INFORMIXDIR which will include settings for the variables named above.
One word of caution: the oninit command is used to both start the Informix server and initialize the chunks where the data is stored. The -i argument will perform the complete disk initialization which is precisely what you want for the first time the Informix server is started. On subsequent restarts you would not normally use this argument or else all the data stored by the server may be lost.
As sudo by default resets environment variables, it is usual to have:
A) A script to set the required environment variables for a given server.
B) Another wrapper script to start/stop the server, as sketched below.
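A minimal sketch of that pattern; every path, file name, and server name here is an assumption based on the question, not a verified layout:
# informix.env -- environment for this instance (hypothetical values)
export INFORMIXDIR=/opt/Informix_Software_Bundle
export ONCONFIG=onconfig.demo                      # file in $INFORMIXDIR/etc
export INFORMIXSERVER=demo_on                      # DBSERVERNAME in that onconfig
export INFORMIXSQLHOSTS=$INFORMIXDIR/etc/sqlhosts
export PATH=$INFORMIXDIR/bin:$PATH
And a wrapper to start the server with that environment:
#!/bin/sh
# start-informix.sh -- load the environment, then start the server.
# Plain "oninit" for normal restarts; "oninit -i" only for first-time disk
# initialization, since it wipes existing data (see the caution above).
. /opt/Informix_Software_Bundle/informix.env
# sudo -E asks sudo to preserve the environment; whether it is honoured
# depends on your sudoers configuration.
sudo -E "$INFORMIXDIR/bin/oninit"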
