Configuring a Data Virt resource adapter to handle an F5 redirect and session cookies

How do I configure the resource adapter and/or the VDB for a URL that sits behind an F5? Suppose that my resource adapter and VDB are configured to read data from
https://foo.org/data?cat='pricing'&page=1&rows=20
If this is a direct hostname, Data Virt reads the data correctly. If it sits behind an F5, I get an ArrayIndexOutOfBoundsException because the InputStream size is zero.
I verified that the authentication configuration works correctly, so it's not authentication-related.
If I curl the above URL (when behind the F5), I get a 302 and no results. If I use curl -L, I get a static HTML error page (apparently generated because the server did not receive the required parameters). If I use curl -L -b cookies.txt, I get the expected data. So basically, my challenge is to apply the equivalent of curl's -L and -b cookies.txt options to a Data Virt resource adapter and/or VDB.

The web services translator does not directly support 302 redirection; however, it uses CXF underneath to make the connections. So, configure a CXF configuration file on the web service data source as shown in the examples at [1] (look at "Configuring HTTPS"), then add the redirect configuration to this file as described at [2]:
<http:client AutoRedirect="true" Connection="Keep-Alive"/>
[1] http://teiid.github.io/teiid-documents/master/content/admin/Web_Service_Data_Sources.html
[2] http://cxf.apache.org/docs/client-http-transport-including-ssl-support.html
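A minimal sketch of what that CXF configuration file could look like, assuming a Spring-style CXF config; the wildcard conduit name is an assumption for illustration, and a real file would name the conduit after the data source's configuration, as described in [1]:

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:http="http://cxf.apache.org/transports/http/configuration"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://cxf.apache.org/transports/http/configuration http://cxf.apache.org/schemas/configuration/http-conf.xsd http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd">

    <!-- "*.http-conduit" matches every conduit; AutoRedirect tells CXF to follow the F5's 302 -->
    <http:conduit name="*.http-conduit">
        <http:client AutoRedirect="true" Connection="Keep-Alive"/>
    </http:conduit>

</beans>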

Related

mlflow static_prefix url in set_tracking_uri is not working

I am starting mlflow with the command below:
mlflow server --static_prefix=/myprefix --backend-store-uri postgresql://psql_user_name:psql_password@localhost/mlflow_db --default-artifact-root s3://my-mlflow-bucket/ --host 0.0.0.0 -p 8000
Everything works fine, and I can see the mlflow UI when I open the URL http://localhost:8000/myprefix.
But when I use mlflow.set_tracking_uri(), I have to give the URL as "http://localhost:8000/".
Why can't I use the full URL with the static prefix, "http://localhost:8000/myprefix"?
If I use the full URL, the request to the API endpoint fails; the experiments/list API returns "error 404 != 200".
Is there any way to use a URL with a static prefix in set_tracking_uri?
FYI, it appears from https://github.com/mlflow/mlflow/issues/4484#issue-925407532 that this isn't supported. The --static-prefix flag only affects the UI (e.g. all paths under /ajax-api/2.0), while the REST API lives under /api and is not affected by --static-prefix.
It looks like the way around this is to use some sort of load balancer / frontend that can do path rewrites (an example with nginx is given in that issue).
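To make the failure concrete, here is a quick check with curl (a hedged sketch: it assumes the server command above is running and an mlflow version that still serves the experiments/list endpoint mentioned in the question):

# the UI is served under the prefix
curl -s -o /dev/null -w '%{http_code}\n' 'http://localhost:8000/myprefix'
# -> 200
# the REST API under the prefix (what set_tracking_uri would hit) is not found
curl -s -o /dev/null -w '%{http_code}\n' 'http://localhost:8000/myprefix/api/2.0/mlflow/experiments/list'
# -> 404
# the REST API is only mounted at the root
curl -s -o /dev/null -w '%{http_code}\n' 'http://localhost:8000/api/2.0/mlflow/experiments/list'
# -> 200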

ActiveMQ Artemis: Obtain list of acceptors via JMX

How can I retrieve the list of configured acceptors in ActiveMQ Artemis via Jolokia/JMX (and curl)? I need to reload the acceptors after a TLS certificate update, but it looks like passing the acceptor name is mandatory. Unfortunately, I cannot just pass a static name, because we use several different acceptors, all using TLS, and I don't want to change the reloading code just because the acceptor configuration changed.
curl -s -f -u username:password -H 'Origin: localhost' 'http://127.0.0.1:8161/console/jolokia/read/org.apache.activemq.artemis:broker="borker-primary-0"'
shows the connectors, but not the acceptors.
This question is related to a change introduced in v2.18.0; see the related question on TLS certificate reload.
There is a getConnectors method on the main ActiveMQServerControl MBean, which is why Jolokia's read command returns those values. There is no corresponding getAcceptors method, but you can use Jolokia's list command to get effectively the same information. Use something like this:
curl -s -f -u username:password -H 'Origin: localhost' 'http://127.0.0.1:8161/console/jolokia/list/org.apache.activemq.artemis:broker="borker-primary-0"'
Then look through the results for component=acceptors and you'll be able to find all the acceptors with their respective names.
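For scripting the reload, something like the following could pull out just the acceptor names (a hedged sketch, assuming jq is installed; the exact JSON nesting of the list response can vary by Jolokia version, so the jq path may need adjusting):

# list the whole Artemis domain, keep only MBeans under component=acceptors,
# then strip everything but the value of their name="..." key property
curl -s -f -u username:password -H 'Origin: localhost' \
  'http://127.0.0.1:8161/console/jolokia/list/org.apache.activemq.artemis' \
  | jq -r '.value | keys[] | select(contains("component=acceptors"))' \
  | sed -n 's/.*name="\([^"]*\)".*/\1/p'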
This is a bit of a hack, but a necessary one at this point, given the lack of a management method to get the acceptors. I've opened ARTEMIS-3601 and sent a PR to deal with this use-case, so in future versions this won't be necessary: you'll just be able to invoke getAcceptors or inspect the acceptors in the output of Jolokia's read command.

upload file to eXist-db running on Docker container

I'm running eXist-db in a Docker container - installing Java on Ubuntu, installing the eXist headless installation JAR, and adding a data volume (Azure Files) to store all the physical files and the database data files.
I need to automatically upload files to eXist-db after I generate a new file (using C#) and save it to the volume drive.
According to the eXist documentation on uploading files there are several methods to upload files to eXist, but none of them works for me:
Dashboard or eXide - not relevant, since these are GUI applications.
Java Admin Client - not working, because I have no GUI; I'm getting this failure: 'No X11 DISPLAY variable was set, but this program performed an operation which requires it...'
REST or WebDAV via a web client (from a browser or from code) - I can run XQuery queries, but how do I store new files?
So, the solution I found is to write an XQuery file that uses the xmldb:store function.
This query saves the posted file under the specified name and location (in the volume), and the stored file can then be retrieved via REST or WebDAV.
But I feel that there must be a simpler solution...
Can anyone help?
BTW, here is the xmldb:store XQuery:
xquery version "3.1";

declare function local:upload() {
    (: name of the file posted in the multipart "file" field :)
    let $filename := request:get-uploaded-file-name("file")
    (: authenticate against /db before writing :)
    let $log-in := xmldb:login("/db", "Admin", "admin")
    let $file := "file:///usr/new_file_location.xml"
    let $record := doc($file)
    (: store the document into the /db/akn collection under a fixed name :)
    let $store := xmldb:store("/db/akn", "new_file_name.xml", $record)
    return
        <results>
            <message>File {$file} has been stored.</message>
        </results>
};

local:upload()
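For reference, a hedged sketch of how such a query could be invoked once stored in the database - the /db/upload.xq location is an assumption, and the "file" form field matches the request:get-uploaded-file-name("file") call above:

# post the file as multipart form data to the stored query via the REST interface
curl -u admin: -F 'file=@/usr/new_file_location.xml' 'http://localhost:8080/exist/rest/db/upload.xq'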
When starting eXist as described in the eXist Docker documentation - with it listening on port 8080 - you can access all of eXist's standard endpoints:
http://localhost:8080/exist/webdav/db for WebDAV
http://localhost:8080/exist/rest/db for REST
http://localhost:8080/exist/xmlrpc/db for XML-RPC
http://localhost:8080/exist/apps for apps installed in /db/apps.
Of course if you've configured Docker's eXist to listen on a different port, just switch the port.
Thus, to upload files to a Dockerized eXist programmatically, the methods outlined in the documentation article you referenced, Uploading files, should all work: WebDAV, client.sh, Ant, or even curl. For WebDAV, if you haven't configured users and passwords, you'd just connect with the URL http://localhost:8080/exist/webdav/db, username "admin", and a blank password. For Ant, see the Ant tasks documentation. For curl, you would perform an HTTP PUT request to the REST interface:
curl -s -f -H 'Content-Type: application/xml' \
-T <filename> \
--user <user>:<password> \
"http://localhost:8080/exist/rest/db/apps/<collection>/<filename>"
This is also possible:
echo put /home/files/<FILEPATH>/main.xml | /usr/local/eXist-db/bin/client.sh -s

lost logout functionality for grails app using spring security

I have a Grails app that moved to a new subnet, with a change to the DNS. As a result, the logout functionality stopped working. When I inspect the network using Chrome, I get this message under request headers: CAUTION: Provisional headers are shown.
This means the request to retrieve that resource was never made, so the headers being shown are not the real thing.
The logout function executes this action:
package edu.example.performanceevaluations

import org.codehaus.groovy.grails.plugins.springsecurity.SpringSecurityUtils

class LogoutController {
    def index = {
        // Put any pre-logout code here
        redirect uri: SpringSecurityUtils.securityConfig.logout.filterProcessesUrl // '/j_spring_security_logout'
    }
}
Would greatly appreciate a direction to look towards.
As suggested by that link, run chrome://net-internals and see if you get anywhere.
If you are still lost, I would suggest two-way debugging. If you have Linux, find something related to your traffic and run something like tcpdump, or, if that's too complex, install and run ngrep -W byline -d any port 8080 -q and look for the pattern to see what is going on.
With ngrep/tcpdump, look for the old IP or subnet in the entire traffic and see if anything is still trying to get through (this is all best done on the Grails app server, of course; the port is possibly 8080 or whatever other clear-text port your app runs on).
Look for your IP in the Apache logs: does the request hit the actual server when you log out?
Has the application been restarted since the subnet change? It could have cached the old endpoint in the running Java process:
pgrep java|awk '{print "netstat -plant | grep "$1}'|/bin/sh
or
pgrep java|awk '{print " lsof -p "$1" |grep -i listen"}'|/bin/sh
I personally think something somewhere needs to be restarted, since it's holding on to a cache of something.
Also check the hosts files of any end machines involved and ensure nothing has the previous subnet physically configured in there.
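One more quick check (the host, port, and context path here are hypothetical): hit the logout URL directly with curl and confirm that the request reaches the server at all, independent of the browser:

# -v shows headers, -L follows the post-logout redirect,
# -c/-b reuse a cookie jar that holds an authenticated session
curl -v -L -c cookies.txt -b cookies.txt 'http://app-host:8080/yourapp/j_spring_security_logout'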

Curl not downloading XML file as expected

When I enter the URL into a web browser, I get the usual prompt to open and view the XML file. However, when I use the same URL in a curl batch file, it only appears to download the login aspx page.
//stuff/stuff/Report.aspx?Report=All_Nodes_IP_Report&DataFormat=XML&AccountID=<UID>&Password=<password>
My batch file looks like this:
curl -L "//stuff/stuff/Report.aspx?Report=All_Nodes_IP_Report&DataFormat=XML&AccountID=<UID>&Password=<Password>" -o "local.xml" -v
pause
What am I doing wrong? There's no proxy server between me and the report URL. The web site is https, but I can't include that, as the validation checker keeps moaning at me :)
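An aside, echoing the first section on this page: when a browser works but curl lands on the login page, the report usually requires an authenticated session cookie. A hedged sketch for the batch file, with an entirely hypothetical Login.aspx endpoint:

rem log in once, saving the session cookie to a jar
curl -c cookies.txt -d "AccountID=<UID>&Password=<Password>" "https://stuff/stuff/Login.aspx"
rem fetch the report with the saved cookie, following redirects
curl -L -b cookies.txt -o local.xml "https://stuff/stuff/Report.aspx?Report=All_Nodes_IP_Report&DataFormat=XML"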
Why use curl when you can use an application called MGET that I created?
Download link:
http://bit.ly/1i1FpGE
Syntax of the command:
MGET //stuff/stuff/Report.aspx?Report=All_Nodes_IP_Report&DataFormat=XML&AccountID=<UID>&Password=<Password> local.xml
And if you want to use HTTPS, do it; for the best experience, use HTTP.
