Upload file to eXist-db running in a Docker container

I'm running eXist-db in a Docker container: installing Java on Ubuntu, installing eXist from the headless installation JAR, and mounting a data volume (Azure Files) to store the physical files and the database data files.
I need to upload files to eXist-db automatically after I generate a new file and save it to the volume (using C#).
According to the eXist documentation on uploading files, there are several ways to upload files to eXist, but none of them works for me:
Dashboard or eXide - not relevant, since these are GUI applications.
Java Admin Client - not working, because there is no GUI -> I'm getting this failure: 'No X11 DISPLAY variable was set, but this program performed an operation which requires it...'
REST or WebDAV via a web client (from a browser or from code) - I can run XQuery queries, but how do I store new files?
So, the solution I found is to write an XQuery file that uses the xmldb:store function.
The query stores the posted file under the specified name and location (in the volume), and the stored file can then be retrieved via REST or WebDAV.
But I feel that there must be a simpler solution...
Can anyone help?
BTW, here is the xmldb:store XQuery:
xquery version "3.1";

declare function local:upload() {
    let $filename := request:get-uploaded-file-name("file")
    let $log-in := xmldb:login("/db", "Admin", "admin")
    let $file := "file:///usr/new_file_location.xml"
    let $record := doc($file)
    let $store := xmldb:store("/db/akn", "new_file_name.xml", $record)
    return
        <results>
            <message>File {$file} has been stored.</message>
        </results>
};

local:upload()

When starting eXist as described in the eXist Docker documentation - with it listening on port 8080 - you can access all of eXist's standard endpoints:
http://localhost:8080/exist/webdav/db for WebDAV
http://localhost:8080/exist/rest/db for REST
http://localhost:8080/exist/xmlrpc/db for XML-RPC
http://localhost:8080/exist/apps for apps installed in /db/apps.
Of course if you've configured Docker's eXist to listen on a different port, just switch the port.
Thus, to upload files to a Dockerized eXist programmatically, the methods outlined in the documentation article you referenced, Uploading files, should all work: WebDAV, client.sh, Ant, or even curl. For WebDAV, if you haven't configured users and passwords, you'd just connect with the URL http://localhost:8080/exist/webdav/db, username "admin", and a blank password. For Ant, see the Ant tasks documentation. For curl, you would perform an HTTP PUT request to the REST interface:
curl -s -f -H 'Content-Type: application/xml' \
-T <filename> \
--user <user>:<password> \
"http://localhost:8080/exist/rest/db/apps/<collection>/<filename>"

This is also possible:
echo put /home/files/<FILEPATH>/main.xml | /usr/local/eXist-db/bin/client.sh -s
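And since the original question mentions generating the files from C#, the same REST PUT can be issued straight from code. Here is a minimal sketch using HttpClient; the port, collection, file path, and credentials are placeholders based on the examples above, not values from a real setup:

using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class ExistUpload
{
    static async Task Main()
    {
        // Placeholder values - adjust to your container's port, collection, and credentials.
        const string localPath = "/usr/new_file_location.xml";
        const string url = "http://localhost:8080/exist/rest/db/akn/new_file_name.xml";

        using var client = new HttpClient();

        // Basic authentication, equivalent to curl's --user <user>:<password>.
        var token = Convert.ToBase64String(Encoding.ASCII.GetBytes("admin:password"));
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", token);

        // PUT the XML body; eXist stores the document under the requested path.
        using var content = new ByteArrayContent(await File.ReadAllBytesAsync(localPath));
        content.Headers.ContentType = new MediaTypeHeaderValue("application/xml");

        var response = await client.PutAsync(url, content);
        Console.WriteLine($"{(int)response.StatusCode} {response.ReasonPhrase}");
    }
}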

Related

Nitrogen - File upload directly to database

In the Nitrogen Web framework, uploaded files always end up in the ./scratch/ directory when using #upload{}. From there you are supposed to manage the uploaded files yourself, for example by copying them to their final destination directory.
However, when the destination is a database, is there a way to upload these files straight to the database? The use case is Riak KV.
You can upload a file to Riak KV using an HTTP POST request. You can see the details in the Creating Objects documentation, which shows how to do it using curl.
To send the contents of a file instead of a value, something like this should work:
curl -XPOST http://127.0.0.1:8098/types/default/buckets/scratch/keys/newkey \
-d @path/to/scratch.file \
-H "Content-Type: application/octet-stream"

Is there a way to get the config.pbtxt file from triton inferencing server

Recently I came across the Triton flag "--strict-model-config=false" for running the inference server. With it, Triton generates its own config file while loading the model from the model repository.
sudo docker run --rm --net=host -p 8000:8000 -p 8001:8001 -p 8002:8002 \
-v /home/rajesh/custom_repository:/models nvcr.io/nvidia/tritonserver:22.06-py3 \
tritonserver --model-repository=/models --strict-model-config=false
I would like to get the generated config file from the Triton inference server, since we can play around with the batch config and other parameters. Is there a way to get the auto-generated config.pbtxt file for the models I have loaded in the server, so that I can play around with the batch size and other parameters?
The answer above, which uses the curl command, returns a JSON response.
If the results should be in protobuf format, try loading the model in the Triton inference server with strict model config set to false, and fetch the results with the Python script below, which returns them in the required protobuf format. Use this to get the model's config format and edit it as needed in the config.pbtxt file, instead of converting the JSON results to protobuf.
import tritonclient.grpc as grpcclient
triton_client = grpcclient.InferenceServerClient(url=<triton_server_url>)
model_config = triton_client.get_model_config(model_name=<model_name>, model_version=<model_version>)
As per Triton docs (source), the loaded model configuration can be found by curl'ing the /config endpoint:
Command:
curl localhost:8000/v2/models/<model_name>/config

How to use PEM passphrase/TrustedRoot/TLS Mutual Auth Cert/Private Key in a .netCore 3.1 Ubuntu container

I am trying to write a .NET Core 3.1 API in an Ubuntu Linux container that runs the equivalent of this curl command.
WORKING LINUX CONTAINER CURL COMMAND:
curl --cacert /etc/root/trustedroot.crt --cert /etc/mutualauth/tls.crt --key /etc/mutualauth/tls.key \
--header "SOAPAction:actionName" --data @test.xml https://this.is.the/instance --verbose
Enter PEM pass phrase: *****
<Success...>
We use Windows development laptops so everything starts with Windows.
So far, I have the following HttpClientHandler that my HttpClient is using on a Windows development machine. This code works on Windows, with the cert in my local machine and current user personal stores, but does not work in Linux:
WORKING WINDOWS HTTPCLIENTHANDLER CODE:
X509Store store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
store.Open(OpenFlags.ReadOnly);
try
{
    var cert = store.Certificates.Find(X509FindType.FindByThumbprint, "<<cert thumbprint here>>", true);
    var handler = new HttpClientHandler
    {
        ClientCertificateOptions = ClientCertificateOption.Manual,
        SslProtocols = SslProtocols.Tls12,
        AllowAutoRedirect = false,
        AutomaticDecompression = DecompressionMethods.Deflate | DecompressionMethods.GZip
    };
    handler.ClientCertificates.Add(cert[0]);
}
catch (Exception e)
{
    // Handle errors
}
finally
{
    store.Close();
}
The cert I imported was in .PFX format, so as I understand it, the password went in at the time of import and the Windows code doesn't need to be concerned with it.
The curl command mentioned above works from the container. So by that logic, if coded or configured properly, the code should be able to do the same thing. As I see it, the curl command contains four elements that I need to account for in my HttpClientHandler somehow:
The trusted root (CA) certificate: /etc/root/trustedroot.crt
The TLS certificate: /etc/mutualauth/tls.crt
The private key: /etc/mutualauth/tls.key
The PEM passphrase
I have been reading into this for a couple of months now and have seen various articles and Stack Overflow posts, but there is a staggering number of variables and details involved with SSL, and I can't find anything that directly addresses this in a way that makes sense to me with my limited understanding.
I also have the option of running a Linux script at the time of deployment to add different/other formats of certs/keys to the stores/filesystem in the container. This is how I get the certs and keys into the container in the first place, so I have some control over what I can make happen here as well:
LINUX CONFIG SCRIPT:
cp /etc/root/trustedroot.crt /usr/share/ca-certificates
cp /etc/mutualauth/tls.crt /usr/share/ca-certificates
cp /etc/mutualauth/tls.key /etc/ssl/private
echo "trustedroot.crt" >> /etc/ca-certificates.conf
echo "tls.crt" >> /etc/ca-certificates.conf
update-ca-certificates
dotnet wsdltest.dll --environment=Production --server.urls http://*:80
I do not believe I can get the binary .PFX file into the container due to security policies and limitations, but I definitely can get its string encoded cert and key formats into the container.
...so if there is a way of using other cert formats that I can extract from the .PFX, or of specifying the password and cert when the server spins up so that my code doesn't need a password, that would work too - I might just be missing something basic in the Linux config.
Would anyone be so kind as to point me in the proper direction to find out how I can uplift my HttpClientHandler code OR Linux config to be able to make this API call? Any ideas are welcome at this point, this has been a thorn in my side for a long time now... Thank you so much!
This was not the right approach.
The correct approach was an NGINX reverse proxy terminating the mutual-auth TLS so that .NET Core doesn't have to.
Save yourself some time and go with NGINX! :D
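For illustration, once NGINX handles the mutual-auth TLS and presents the client cert, the application side reduces to a plain HTTP call. A minimal sketch, assuming a proxy listening at the hypothetical address localhost:8080 and forwarding to the SOAP endpoint; the header name and payload file come from the curl command in the question:

using System;
using System.IO;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class SoapCaller
{
    static async Task Main()
    {
        // NGINX terminates the mutual-auth TLS; the app only talks plain HTTP to the proxy.
        const string proxyUrl = "http://localhost:8080/instance";   // hypothetical proxy address

        using var client = new HttpClient();

        var request = new HttpRequestMessage(HttpMethod.Post, proxyUrl)
        {
            // Same payload and header as the curl command in the question.
            Content = new StringContent(await File.ReadAllTextAsync("test.xml"), Encoding.UTF8, "text/xml")
        };
        request.Headers.Add("SOAPAction", "actionName");

        var response = await client.SendAsync(request);
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}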

Configuring a Data Virt resource adapter to handle an F5 redirect

How do I configure the resource adapter and/or the VDB for a URL that sits behind an F5? Suppose that my resource adapter and VDB are configured to read data from
https://foo.org/data?cat='pricing'&page=1&rows=20
If this is a direct hostname then Data Virt reads the data correctly. If it is an F5 then I get an ArrayIndexOutOfBoundsException because the InputStream size is zero.
I verified that the authentication configuration works correctly, so it's not authentication-related.
If I curl the above URL (when behind the F5) then I get a failed 302 and no results. If I curl -L then I get a static HTML error page (generated, apparently, because the server did not receive the required parameters). If I curl -L -b cookies.txt then I get the expected data. So basically, my challenge is to apply the equivalent of curl's -L and -b cookies.txt options to a Data Virt resource adapter and/or VDB.
The web services translator does not directly support 302 (redirection); however, it uses CXF underneath to make the connections. So, configure a CXF configuration file on the web service data source as shown in the examples at [1] (see "Configuring HTTPS"), then add the redirect configuration to this file as described at [2]:
<http:client AutoRedirect="true" Connection="Keep-Alive"/>
[1] http://teiid.github.io/teiid-documents/master/content/admin/Web_Service_Data_Sources.html
[2] http://cxf.apache.org/docs/client-http-transport-including-ssl-support.html

Upload XML file to server on another machine

How can I upload an XML file that resides on one computer (let's call it workstation) onto a BaseX server that runs on another computer (server)?
To upload an XML file to the BaseX server, on workstation I use
basexclient -n localhost -d -w -c "CREATE DATABASE ${db_name} ${file}"
When the hostname is changed from localhost to server, this command fails with
org.basex.core.BaseXException: Resource "[complete FILE path]" not found.
IIUC, the error happens because this command does not upload the XML file itself, but just asks the server to read it from the path ${file}. The command then fails because ${file} is not available on server but only on workstation.
What command should I use to upload the XML file to the remote server?
(Obviously without copying the file to the server and then executing the command locally on the server.)
Assuming that -n means what you seem to be using it to mean, and that a local client can in fact communicate with a remote server, and assuming also that your XML document is a standalone document, I'd try something like the following (not tested), with $server, $dbname, $file, and $baseurl defined as environment variables:
(echo CREATE DATABASE ${dbname};
echo ADD TO ${baseurl};
cat ${file};
echo EXIT ) | basexclient -n myserver -d -w
But otherwise I'd use the BaseX HTTP server and use curl or wget to send a PUT request with the file to the address http://myserver.example.org:8984/webdav/mydb/myfile.xml (and of course, if necessary, I'd use curl multiple times to make the database and then add data to it).
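If you'd rather do that WebDAV PUT from code on workstation, here is a hedged sketch; the host and database names are the hypothetical ones above, and the default admin/admin account is assumed purely for illustration:

using System;
using System.IO;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

class BaseXWebDavUpload
{
    static async Task Main()
    {
        // Hypothetical values from the answer above; adjust host, database, and credentials.
        const string url = "http://myserver.example.org:8984/webdav/mydb/myfile.xml";

        var handler = new HttpClientHandler
        {
            Credentials = new NetworkCredential("admin", "admin")   // assumed default BaseX account
        };
        using var client = new HttpClient(handler);

        // Stream the local file to the remote WebDAV path.
        using var file = File.OpenRead("myfile.xml");
        var response = await client.PutAsync(url, new StreamContent(file));
        Console.WriteLine($"{(int)response.StatusCode} {response.ReasonPhrase}");
    }
}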
