Web service URL for ComIbmWSRequestNode - messagebroker

Any reason why executing this shows the web service URL for ComIbmSOAPRequestNode but not for ComIbmWSRequestNode?
mqsireportproperties MYBROKER -o AllMessageFlows -e default -r
I'm using MB 6.1 and I have a requirement to extract all the hardcoded URLs of HTTP nodes that vary between two environments.

You could raise a requirement here to try to get the property added:
http://www.ibm.com/developerworks/rfe/?PROD_ID=532
It is almost certainly just an oversight, but given that 6.1 is almost out of support you may find that an RFE will not be accepted, or that even if it is, it will be targeted at a future release.
The embedded EG-level listener outputs its URI registrations with the following command (use HTTPSConnector for the HTTPS listener):
mqsireportproperties MYBROKER -e default -o HTTPConnector -r
So if you could temporarily switch the HTTP nodes to use the EG-level listener using:
mqsichangeproperties MYBROKER -e default -o ExecutionGroup -n httpNodesUseEmbeddedListener -v true
then mqsireportproperties will give you what you need.
Otherwise I think your best bet is writing a small CMP (Configuration Manager Proxy) app that examines the deployed flows, looks for SOAP and HTTP input nodes, and extracts the property that way.
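Alternatively, if the flow sources (or an extracted BAR file) are available on disk, a plain grep can approximate what the CMP app would do. A minimal sketch, assuming the request nodes serialize their URL in a URLSpecifier="..." attribute in the .msgflow XML (check one flow file first to confirm the attribute name for your node and toolkit version):
# List the unique hardcoded URLs across all flow sources.
# The URLSpecifier attribute name is an assumption -- verify it in a sample .msgflow.
grep -rhoE 'URLSpecifier="[^"]*"' /path/to/workspace --include='*.msgflow' | sort -u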

How do I specify multiple node values in a buildWithParameters call to Jenkins?

I have a webhook service which kicks off a buildWithParameters Jenkins job, and I want to be able to specify which buildservers are being used.
This is easy enough in the job configuration - I've added a Node parameter which lets me specify which nodes are valid, and when starting the job manually in the Jenkins web UI, I can select which nodes I want.
I'm able to kick off the job via curl using the buildWithParameters Jenkins feature:
curl -vvv 'https://webhook:examplepassword@jenkins.example.com/job/build-sideboard-plugin/buildWithParameters?token=exampletoken&GIT_REPO=example/repo&YUM_REPO=example&BUILDSERVER=sideboard.build.dev.xr'
However, I can't figure out how to specify multiple parameters. I expected that I'd simply be able to add a second &BUILDSERVER=xxx value and have that work, but running this:
curl -vvv 'https://webhook:examplepassword@jenkins.example.com/job/build-sideboard-plugin/buildWithParameters?token=exampletoken&GIT_REPO=example/repo&YUM_REPO=example&BUILDSERVER=sideboard.build.dev.xr&BUILDSERVER=sideboard.rocky8.build.dev.xr'
This returns a 500 error. I also tried providing a single value with a comma separating the two values, i.e.
curl -vvv 'https://webhook:examplepassword@jenkins.example.com/job/build-sideboard-plugin/buildWithParameters?token=exampletoken&GIT_REPO=example/repo&YUM_REPO=example&BUILDSERVER=sideboard.build.dev.xr,sideboard.rocky8.build.dev.xr'
but Jenkins interpreted that as a single Node value which didn't match any node since there's no node named sideboard.build.dev.xr,sideboard.rocky8.build.dev.xr. I got the same result when submitting the two values separated by a space.
Is there any way to get Jenkins to do this while still using the buildWithParameters functionality? I'd hate to have to restructure our build triggering or switch to Jenkins Pipeline. Even making two different curl calls would be somewhat of a pain given how our webhooks are structured, so I'd love to be able to provide both parameters just like I can in the Jenkins web UI.
I don't think it is possible using the query parameters as you have tried, due to the fact that the plugin actually triggers two different builds.
What you can do is pass the parameters with the submit command as JSON data, which will simulate the trigger of the build with multiple servers selected.
The general syntax will be something like:
curl -u USER:PASSWORD --show-error \
--data 'json={"parameter":[{"name":"PARAMNAME","value":["node1","node2"]}]}' \
http://localhost:8080/job/remote/build?token=TOKEN
or in your case:
curl -u webhook:examplepassword --show-error \
--data 'json={"parameter":[{"name":"BUILDSERVER","value":["sideboard.build.dev.xr","sideboard.rocky8.build.dev.xr"]}]}' \
https://jenkins.example.com/job/build-sideboard-plugin/build?token=exampletoken
You can of course pass all other needed parameters alongside the BUILDSERVER in the JSON data:
curl -u webhook:examplepassword --show-error \
--data 'json={"parameter":[{"name":"BUILDSERVER","value":["sideboard.build.dev.xr","sideboard.rocky8.build.dev.xr"]},{"name":"YUM_REPO","value":"example"},{"name":"GIT_REPO","value":"example/repo"}]}' \
https://jenkins.example.com/job/build-sideboard-plugin/build?token=exampletoken
In addition, it is probably better to use --data-urlencode instead of the --data flag in the curl commands, to avoid encoding issues in case your parameter values contain special characters.
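For example, the final command above with --data-urlencode (same values as before):
curl -u webhook:examplepassword --show-error \
--data-urlencode 'json={"parameter":[{"name":"BUILDSERVER","value":["sideboard.build.dev.xr","sideboard.rocky8.build.dev.xr"]},{"name":"YUM_REPO","value":"example"},{"name":"GIT_REPO","value":"example/repo"}]}' \
https://jenkins.example.com/job/build-sideboard-plugin/build?token=exampletoken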
More info on submitting jobs via the Remote Access API is available in the Jenkins Remote Access API documentation.

ActiveMQ Artemis: Obtain list of acceptors via JMX

How can I retrieve the list of configured acceptors in ActiveMQ Artemis via Jolokia/JMX (and curl)? I need to reload the acceptors after a TLS certificate update, but it looks like passing the acceptor name is mandatory. Unfortunately, I cannot just pass a static name because we use multiple acceptors, all using TLS, and I don't want to change the reloading code just because the acceptor config changed.
curl -s -f -u username:password -H 'Origin: localhost' 'http://127.0.0.1:8161/console/jolokia/read/org.apache.activemq.artemis:broker="borker-primary-0"'
shows the connectors, but not the acceptors.
This question is related to a change introduced in v2.18.0, see question on TLS certificate reload.
There is a getConnectors method on the main ActiveMQServerControl MBean, which is why Jolokia's read command returns those values. There is no corresponding getAcceptors method, but you can use Jolokia's list command to get effectively the same information. Use something like this:
curl -s -f -u username:password -H 'Origin: localhost' 'http://127.0.0.1:8161/console/jolokia/list/org.apache.activemq.artemis:broker="borker-primary-0"'
Then look through the results for component=acceptors and you'll be able to find all the acceptors with their respective names.
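If you want to automate that lookup, here is a minimal sketch using jq and sed, assuming (as in current Artemis versions) that the keys in the list response contain component=acceptors,name="..." segments:
# Extract acceptor names from the Jolokia list output.
curl -s -f -u username:password -H 'Origin: localhost' 'http://127.0.0.1:8161/console/jolokia/list/org.apache.activemq.artemis' \
| jq -r '.value | keys[] | select(test("component=acceptors"))' \
| sed -n 's/.*name="\([^"]*\)".*/\1/p'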
This is a bit of a hack, but a necessary one at this point given the lack of a management method to get the acceptors. I've opened ARTEMIS-3601 and sent a PR to deal with this use-case, so in future versions this won't be necessary. You'll just be able to invoke getAcceptors or inspect the acceptors in the output of Jolokia's read command.

How to pass zap session files to dockerized zap scanner?

How do I properly pass session files (.session, .session.data, .session.properties, .session.script) and the context file to the following command before the scan is executed?
docker run --rm -v $(pwd):/zap/wrk/:rw -t owasp/zap2docker-stable zap-baseline.py \
-t https://www.example.com -r testreport.html
Why do you want to?
It is (probably) possible, but it won't do anything useful. The ZAP baseline scan spiders the target you specify, so supplying an existing ZAP session file won't help you. Unless I'm missing something ;)
Use the core/action/loadSession/ API endpoint. Something like this:
import time

from zapv2 import ZAPv2 as zap

apikey = 'apikey12345'  # Change this to match your setup
z = zap(apikey=apikey, proxies={'http': 'http://127.0.0.1:9999', 'https': 'http://127.0.0.1:9999'})
time.sleep(5)
print('start..')
z.core.load_session('/root/Download/zaptmp/test.session')  # Obviously this needs to be your session path
sites = z.core.sites
# Check that the session loaded... I'm printing, you could check count not zero, whatever
print('Listing sites in loaded session:')
for site in sites:
    print(site)
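To use that script against a dockerized ZAP, you'd run the container in daemon mode with the API exposed and your session files mounted. A minimal sketch; the API key and port are assumptions and must match the script above, and the api.addrs settings lift ZAP's default localhost-only API restriction:
docker run --rm -v $(pwd):/zap/wrk/:rw -p 9999:9999 -t owasp/zap2docker-stable \
zap.sh -daemon -host 0.0.0.0 -port 9999 \
-config api.key=apikey12345 \
-config api.addrs.addr.name=.* -config api.addrs.addr.regex=true
The session path in the script would then be /zap/wrk/test.session inside the container.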

lost logout functionality for grails app using spring security

I have a grails app that moved to a new subnet with a change to the DNS. As a result, the logout functionality stopped working. When I inspect the network using Chrome, I get this message under request headers: "CAUTION: Provisional headers are shown."
This means the request to retrieve that resource was never made, so the headers being shown are not the real thing.
The logout function executes this action:
package edu.example.performanceevaluations

import org.codehaus.groovy.grails.plugins.springsecurity.SpringSecurityUtils

class LogoutController {
    def index = {
        // Put any pre-logout code here
        redirect uri: SpringSecurityUtils.securityConfig.logout.filterProcessesUrl // '/j_spring_security_logout'
    }
}
Would greatly appreciate a direction to look towards.
As suggested by that link, run chrome://net-internals and see if you get anywhere.
If you are still lost, I would suggest two-way debugging. If you are on Linux, trace the traffic with tcpdump or, if that's too complex, install and run ngrep -W byline -d any port 8080 -q, and look for patterns to see what is going on.
With ngrep/tcpdump, look for the old IP or subnet across the entire traffic and see if anything is still trying to get through (this is all best done on the Grails app server, of course, on port 8080 or whatever other clear-text port your app may be running on).
Look for your IP in the Apache logs: does the request actually hit the server when you log out?
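You can also take the browser out of the equation and hit the logout URL directly with curl. A minimal sketch, assuming the default Spring Security logout path and placeholder host/app names:
curl -v -b cookies.txt -c cookies.txt 'http://yourserver:8080/yourapp/j_spring_security_logout'
If this returns a redirect but the browser's request never leaves, that points at a client-side or DNS issue rather than the app.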
Has the application been restarted since the subnet change? It could have cached the old address inside the running Java process:
pgrep java | awk '{print "netstat -plant | grep "$1}' | /bin/sh
or
pgrep java | awk '{print "lsof -p "$1" | grep -i listen"}' | /bin/sh
I personally think something somewhere needs to be restarted, since it is holding on to a cache of something.
Also check the hosts files of any machines involved and ensure nothing has the previous subnet configured in there.

how to swap solr core from shell

I have a Solr setup with two cores. I want to schedule a core (core1, backend) for a full import frequently (e.g. every 5 minutes), then swap it with the live core (core0, serving) via a shell command run from a scheduler.
For the full-import command, I am using the following shell command:
wget -O - -q -t 1 http://localhost:8080/solr/core1/dataimport?command=full-import
This works fine. If I do a core swap from the browser by hitting
http://localhost:8080/solr/admin/cores?action=SWAP&core=core1&other=core0, I get the latest updates instantly on search. But if I schedule this URL as a shell command, the same way as the dataimport one, it doesn't do the swap.
Did you try with
curl "http://localhost:8080/solr/admin/cores?action=SWAP&core=core1&other=core0"
from shell? Note the quotes around the URL: without them the shell interprets each & as "run in background", so everything after the first & never reaches Solr, which would explain why the scheduled command doesn't swap.
There is a catch with SWAPs, though:
Apache Solr allows you to swap two cores around for non-Cloud configurations. They take each other's name, so it is a good way to push an updated core into production without downtime.
But an interesting question is how this is achieved. Normally, a core's name is its directory name too. So, does Solr rename the directory on the filesystem as well?
Not really! Instead, the name property in the core.properties file is updated to use the name of the other core. Usually that property is used to give an alternative name to a core when the directory naming conventions are not suitable.
The gotcha is, of course, that you still have two directories with right-looking names for the cores you see in the Admin UI. So it is very easy to forget that extra redirection/rename step when troubleshooting somebody else's, or even your own old, setup.
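Putting it together as a scheduled job, here is a minimal sketch of a cron-driven script. It assumes your DataImportHandler version honors the synchronous=true parameter so the import blocks until it finishes; otherwise, poll dataimport?command=status before swapping:
#!/bin/sh
# Rebuild the backend core, then swap it with the live core.
curl -s 'http://localhost:8080/solr/core1/dataimport?command=full-import&synchronous=true'
curl -s 'http://localhost:8080/solr/admin/cores?action=SWAP&core=core1&other=core0'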
