Since ejabberd no longer supports the iqdisc option, I am trying to create my own custom queue to handle all the IQ packets with a custom namespace and return the result back to the client.
I am able to spawn my own process, run the functions, and obtain the result, but the request fails in the end with the following error:
09:07:42.850 [error] Failed to route packet:
#iq{id = <<"5b660017-c52c-4fe8-8a56-213c1a86bc12-3">>,type = get,
lang = <<"en">>,
from =
#jid{
user = <<"check">>,server = <<"localhost">>,
resource = <<"64195176449188261912514">>,luser = <<"check">>,
lserver = <<"localhost">>,
lresource = <<"64195176449188261912514">>},
to =
#jid{
user = <<>>,server = <<"localhost">>,resource = <<>>,luser = <<>>,
lserver = <<"localhost">>,lresource = <<>>},
sub_els =
[#xmlel{
name = <<"....
......
.......
exception error: no try clause matching ok
in function gen_iq_handler:process_iq/4 (src/gen_iq_handler.erl, line 110)
in call from ejabberd_router:do_route/1 (src/ejabberd_router.erl, line 399)
in call from ejabberd_router:route/1 (src/ejabberd_router.erl, line 92)
in call from ejabberd_c2s:check_privacy_then_route/2 (src/ejabberd_c2s.erl, line 842)
in call from xmpp_stream_in:process_authenticated_packet/2 (src/xmpp_stream_in.erl, line 697)
in call from xmpp_stream_in:handle_info/2 (src/xmpp_stream_in.erl, line 392)
in call from p1_server:handle_msg/8 (src/p1_server.erl, line 696)
in call from proc_lib:init_p_do_apply/3 (proc_lib.erl, line 249)
The packet shown above goes from the client to the server, and it looks fine to me.
I am using the latest ejabberd version: 19.09.1
I guess I am having trouble routing the IQ response back to the client? Am I missing something else here? The packet seems fine to me. What is the ideal way to dedicate a queue to processing all the IQ packets for a custom namespace and then return the responses to the client, without blocking the main queue in the meantime, so that ejabberd can still handle other incoming packets?
Any ideas/pointers would be really appreciated. Thanks!
I figured this out.
The spawned process was not returning the desired result, and hence the request failed: a handler registered via gen_iq_handler must return either an #iq{} reply or ignore, and mine was returning ok, which is exactly what the no try clause matching ok exception complains about.
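For reference, here is a minimal sketch of the asynchronous pattern, not my exact module: the namespace <<"urn:custom:ns">> and the worker run_query/1 are placeholders. The handler spawns the real work and returns ignore, and the worker routes the reply itself.

-module(mod_custom_iq).
-behaviour(gen_mod).

-export([start/2, stop/1, depends/2, mod_options/1, process_iq/1]).

-include_lib("xmpp/include/xmpp.hrl").

start(Host, _Opts) ->
    gen_iq_handler:add_iq_handler(ejabberd_local, Host, <<"urn:custom:ns">>,
                                  ?MODULE, process_iq).

stop(Host) ->
    gen_iq_handler:remove_iq_handler(ejabberd_local, Host, <<"urn:custom:ns">>).

depends(_Host, _Opts) -> [].

mod_options(_Host) -> [].

process_iq(#iq{type = get} = IQ) ->
    %% Run the slow work in its own process so the handler never blocks.
    %% Returning ignore (instead of ok) tells gen_iq_handler that we will
    %% route the response ourselves.
    spawn(fun() ->
                  Result = run_query(IQ), % placeholder for the real work
                  ejabberd_router:route(xmpp:make_iq_result(IQ, Result))
          end),
    ignore;
process_iq(IQ) ->
    xmpp:make_error(IQ, xmpp:err_not_allowed()).

run_query(_IQ) ->
    %% Placeholder: build whatever sub-element the client expects.
    #xmlel{name = <<"result">>}.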
I am still trying to figure out how to process multiple packets (targeted at different namespaces) at once in ejabberd, so that all packets for my custom namespace run on a single queue.
Related
I tried to post data to an API with khttp, but there is an error.
This is my code:
val payload = mapOf("review" to addReviewET.text, "rating" to quantity.toString(), "id_user" to "1", "id_movie" to id)
val r = post(localhost.insertReview(), data = payload)
println(r.text)
But it doesn't work; it throws an error like this:
FATAL EXCEPTION: main
Process: com.mqa.android.moviereview, PID: 23984
android.os.NetworkOnMainThreadException
at android.os.StrictMode$AndroidBlockGuardPolicy.onNetwork(StrictMode.java:1448)
at java.net.SocketInputStream.read(SocketInputStream.java:169)
at java.net.SocketInputStream.read(SocketInputStream.java:139)
at com.android.okhttp.okio.Okio$2.read(Okio.java:136)
at com.android.okhttp.okio.AsyncTimeout$2.read(AsyncTimeout.java:211)
at com.android.okhttp.okio.RealBufferedSource.exhausted(RealBufferedSource.java:60)
at com.android.okhttp.internal.io.RealConnection.isHealthy(RealConnection.java:361)
at com.android.okhttp.internal.http.StreamAllocation.findHealthyConnection(StreamAllocation.java:137)
at com.android.okhttp.internal.http.StreamAllocation.newStream(StreamAllocation.java:95)
at com.android.okhttp.internal.http.HttpEngine.connect(HttpEngine.java:281)
at com.android.okhttp.internal.http.HttpEngine.sendRequest(HttpEngine.java:224)
at com.android.okhttp.internal.huc.HttpURLConnectionImpl.execute(HttpURLConnectionImpl.java:461)
at com.android.okhttp.internal.huc.HttpURLConnectionImpl.connect(HttpURLConnectionImpl.java:127)
at com.android.okhttp.internal.huc.HttpURLConnectionImpl.getOutputStream(HttpURLConnectionImpl.java:258)
at khttp.responses.GenericResponse$Companion$defaultEndInitializers$1.invoke(GenericResponse.kt:90)
at khttp.responses.GenericResponse$Companion$defaultEndInitializers$1.invoke(GenericResponse.kt:32)
at khttp.responses.GenericResponse$connection$2.invoke(GenericResponse.kt:164)
at khttp.responses.GenericResponse$connection$2.invoke(GenericResponse.kt:30)
at khttp.responses.GenericResponse.openRedirectingConnection$khttp(GenericResponse.kt:124)
at khttp.responses.GenericResponse.getConnection(GenericResponse.kt:163)
at khttp.responses.GenericResponse.getRaw(GenericResponse.kt:207)
at khttp.responses.GenericResponse.getContent(GenericResponse.kt:216)
at khttp.responses.GenericResponse.init$khttp(GenericResponse.kt:350)
at khttp.KHttp.request(KHttp.kt:59)
at khttp.KHttp.post(KHttp.kt:48)
at khttp.KHttp.post$default(KHttp.kt:47)
at com.mqa.android.moviereview.module.activity.AddActivity$showMovieList$3.invokeSuspend(AddActivity.kt:90)
at com.mqa.android.moviereview.module.activity.AddActivity$showMovieList$3.invoke(Unknown Source:40)
at org.jetbrains.anko.sdk27.coroutines.Sdk27CoroutinesListenersWithCoroutinesKt$onClick$1$1.invokeSuspend(ListenersWithCoroutines.kt:300)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:32)
at kotlinx.coroutines.DispatchedTask$DefaultImpls.run(Dispatched.kt:235)
at kotlinx.coroutines.DispatchedContinuation.run(Dispatched.kt:81)
at android.os.Handler.handleCallback(Handler.java:789)
at android.os.Handler.dispatchMessage(Handler.java:98)
at android.os.Looper.loop(Looper.java:164)
at android.app.ActivityThread.main(ActivityThread.java:6541)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.Zygote$MethodAndArgsCaller.run(Zygote.java:240)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:767)
What is this error exactly, and how do I make it right? Or if you have another way, please let me know. Thank you.
I got this approach from here.
The issue is that you are trying to run the HTTP request on the main thread.
There are two ways to avoid this error:
Disable strict mode, which means you can run the request on the main thread.
Make the request with AsyncTask (or another background mechanism). This is the recommended way; however, it involves some extra work to run the request and collect the result while keeping the app state under control. A sketch of this idea follows.
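Since your stack trace shows the click handler already runs in a coroutine (Anko's onClick), a minimal sketch of this idea, assuming kotlinx-coroutines is on the classpath (sendReview is a made-up name; the URL and payload mirror the question), is to shift just the blocking khttp call onto a background dispatcher:

import khttp.post
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.withContext

// Runs the blocking khttp call on the IO dispatcher so that the main
// thread (which NetworkOnMainThreadException protects) never does I/O.
suspend fun sendReview(url: String, payload: Map<String, String>) {
    val r = withContext(Dispatchers.IO) {
        post(url, data = payload)
    }
    println(r.text) // back on the caller's context with the finished response
}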
The ejabberd module I'm using, mod_pottymouth, is not filtering messages as expected. After adding logging, I see the generic handler clause being called instead of the one that does the actual filtering. The problem is that I am not able to pattern-match the ejabberd message so that the proper function clause is called. Can anyone help?
on_filter_packet({_From, _To, {xmlel, <<"message">>, _Attrs, Els} = _Packet} = _Msg) ->
    % This is what should be called to filter messages, but it is never called
    FilteredEls = filterMessageBodyElements(Els, []),
    {_From, _To, {xmlel, <<"message">>, _Attrs, FilteredEls}};
on_filter_packet(Msg) ->
    % This is what actually gets called
    Msg.
This is using ejabberd 17.01
Starting from 16.12, ejabberd doesn't route raw xmlel elements; you should process the new-style records instead: message, presence, or iq.
Please read https://docs.ejabberd.im/developer/guide/#ejabberd-router
and https://github.com/processone/xmpp/blob/master/README.md
So, basically, your code should look like this:
on_filter_packet(#message{body = Body} = Msg) ->
    NewBody = filterMessageBody(Body),
    Msg#message{body = NewBody};
on_filter_packet(Stanza) ->
    Stanza.
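filterMessageBody/1 stands for whatever your filtering logic becomes. As a purely hypothetical sketch, with xmpp.hrl included the body field is a list of #text{} records, so an assumed censor/1 helper that rewrites one binary would be mapped over it like this:

filterMessageBody(Body) ->
    %% Body is a list of #text{} records; censor/1 (assumed) rewrites one binary
    [T#text{data = censor(Data)} || #text{data = Data} = T <- Body].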
Have you tried using xmlel as a record instead of a tuple?
I cannot wrap my head around this PROMELA problem: I have N processes ("pc") which may both send and receive messages over a channel ("to_pc"). Each process has its own channel over which it receives messages.
For a process to be able to receive, I have to keep it in a loop which checks the channel for incoming messages. As a second loop option, the process sends a message to all other channels.
However, in simulation mode this always causes a timeout, without anything being sent at all. My theory so far is that I have created a deadlock where all processes want to send at once, leaving them all unable to receive (since they are stuck in the "send" part of the code).
So far I have been unable to resolve this problem. I have tried to use a global variable as a semaphore to "forbid" sending, so that only one process may send at a time. However, this did not change the results. My only other idea is to use a timeout as the trigger for sending, but that does not seem right to me at all.
Any ideas? Thanks in advance!
#define N 4

mtype = {request, reply};

typedef message {
    mtype type;
    byte target;
    byte sender;
};

chan to_pc[N] = [0] of {message};

inline send() {
    byte j = 0;
    for (j : 0 .. N-1) {
        if
        :: j != address ->
            to_pc[j]!msg;
        :: else;
        fi
    }
}

active [N] proctype pc() {
    byte address = _pid;
    message msg;
    do
    :: to_pc[address]?msg ->    /* Here I am receiving a message. */
        if
        :: msg.type == request ->
            if
            :: msg.target == address ->
                d_step {
                    msg.target = msg.sender;
                    msg.sender = address;
                    msg.type = reply;
                }
                send();
            :: else
            fi
        :: msg.type == reply;
        :: else;
        fi
    :: /* Here I want to send a message! */
        d_step {
            msg.target = (address + 1) % N;
            msg.sender = address;
            msg.type = request;
        }
        send();
    od
}
I can write a full-fledged working version of your source code if you want, but perhaps it is sufficient to highlight the source of the issue you are dealing with and let you have fun solving it.
Branching Rules
any branch with an executable condition can be taken, non-deterministically
if there is no branch with an executable condition, the else branch is taken
if there is no branch with an executable condition and no else branch, then the process blocks until one of the conditions becomes true
Consider this
1: if
2: :: in?stuff -> ...
3: :: out!stuff -> ...
4: fi
where in and out are both synchronous channels (size is 0).
Then
if someone is sending on the other end of in then in?stuff is executable, otherwise it is not
if someone is receiving on the other end of out then out!stuff is executable, otherwise it is not
the process blocks at line 1: until at least one of the two conditions is executable.
Compare that code to this
1: if
2: :: true -> in?stuff; ...
3: :: true -> out!stuff; ...
4: fi
where in and out are again synchronous channels (size is 0).
Then
both branches have an executable condition (true)
the process immediately commits itself to either send or receive something, by non-deterministically choosing to execute a branch either at line 2: or 3:
if the process chooses 2: then it blocks if in?stuff is not executable, even when out!stuff would be executable
if the process chooses 3: then it blocks if out!stuff is not executable, even when in?stuff would be executable
Your code falls into the latter situation: all the instructions within d_step { } are always executable, so your process commits to sending way too early.
To sum up: in order to fix your model, you should refactor your code so that it is always possible to jump from send to receive mode and vice versa. Hint: get rid of that inline code and separate the decision to send from the actual sending, as in the sketch below.
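For example, here is a minimal sketch along those lines, reusing N, message and to_pc from your model; it sends to a single neighbour for brevity (fanning out to all the others needs the same guard treatment per target) and elides the message handling:

active [N] proctype pc() {
    byte address = _pid;
    message msg, out;
    d_step {
        out.target = (address + 1) % N;
        out.sender = address;
        out.type = request;
    }
    do
    :: to_pc[address]?msg ->
        skip    /* handle the received message here */
    :: to_pc[(address + 1) % N]!out    /* executable only when that neighbour is at its receive */
    od
}

Because the send statement is now itself the guard of its branch, a process that cannot hand its message over stays free to take the receive branch instead, so the all-want-to-send deadlock cannot occur.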
Two Jenkins jobs: A and B.
A triggers B as a blocking build step ("Block until the triggered projects finish their builds"). Is there a way to include B's console output in A's console output?
Motivation: when using Jenkins in the browser, A's console output contains a link to B's console output, which is fine. But when using Jenkins via command-line tools (jenkins-cli), there is no quick and easy way to see B's console output.
Any ideas?
Interesting. I'd try something like this.
From http://jenkinsurl/job/jobname/lastBuild/api/
Accessing Progressive Console Output
You can retrieve in-progress console output by making repeated GET requests with a start parameter. You'll basically send a GET request to the build's logText/progressiveText URL (or logText/progressiveHtml if you want HTML that can be put into a tag). The start parameter controls the byte offset of where you start.
The response will contain a chunk of the console output, as well as an X-Text-Size header that represents the byte offset (of the raw log file). This is the number you want to use as the start parameter for the next call.
If the response also contains the X-More-Data: true header, the server is indicating that the build is in progress, and you need to repeat the request after some delay. The Jenkins UI waits 5 seconds before making the next call. When this header is not present, you know that you've retrieved all the data and the build is complete.
So you can trigger a downstream job, but don't "block until downstream completes". Instead, add an extra step (execute shell, probably) and write a script that reads the console output of the other job as indicated above and prints it into the console output of the current job. You can detect when the child job has finished by watching for the X-More-Data: true header to disappear, as detailed above.
I know this is an old question, but I had to do this myself recently, so I figure this might help someone else looking to do the same. Here's a Groovy script that will read a given job's progressiveText URL. The code is written in such a way that it should be plug-and-play; just make sure to set jenkinsBase and jobName first. The approach is no different from what has already been mentioned.
Here's a short set of instructions on how to use it: (1) Configure the downstream job so that anonymous users have Read and ViewStatus rights. (2) In the upstream job, create a "Trigger/call builds on other projects" step that calls the downstream job. (3) Do not check "Block until the triggered projects finish their builds". (4) Right after that step, create an "Execute Groovy script" step and paste the following code:
def jenkinsBase = '' // Set to the Jenkins base URL here
def jobName = '' // Set to the Jenkins job name
def jobNumber = 'lastBuild' // Tail the last build
def address = null
def response = null
def start = 0 // Start at offset 0
def cont = true // This flag mirrors the X-More-Data header value

try {
    while (cont) { // Loop while X-More-Data is true
        address = "${jenkinsBase}/job/${jobName}/${jobNumber}/logText/progressiveText?start=${start}"
        def urlInfo = address.toURL()
        response = urlInfo.openConnection()
        if (response.getResponseCode() != 200) {
            throw new Exception("Unable to connect to " + address) // Throw an exception to get out of the loop if the response is anything but 200
        }
        if (start != response.getHeaderField('X-Text-Size')) { // Print content only if the starting offset differs from X-Text-Size
            response.getInputStream().getText().eachLine { line ->
                println(line)
            }
        }
        start = response.getHeaderField('X-Text-Size') // Set the new start offset to the next byte
        cont = (response.getHeaderField('X-More-Data') == 'true') // The header is a string (or absent), so compare explicitly; the loop ends when the build completes
        sleep(3000) // Wait for 3 seconds before polling again
    }
}
catch (Exception ex) {
    println(ex.getMessage())
}
This script can be further improved by programmatically getting the downstream job number.
There is also a Python version of this approach here.
A basic RabbitMQ install with the user guest/guest.
Given the following simple Erlang test code for RabbitMQ (Erlang client), I am getting the error below. The queue TEST_DIRECT_QUEUE exists and has 7 messages in it, and the RabbitMQ server is up and running.
If I try to create a new queue with a queue.declare API command, I also get a similar error.
Overall, the error appears during any amqp_channel:call command.
Any thoughts? Thanks.
=ERROR REPORT==== 16-Feb-2013::10:39:42 ===
Connection (<0.38.0>) closing: internal error in channel (<0.50.0>): shutdown
** exception exit: {shutdown,{gen_server,call,
[<0.50.0>,
{call,{'queue.declare',0,"TEST_DIRECT_QUEUE",false,false,
false,false,false,[]},
none,<0.31.0>},
infinity]}}
in function gen_server:call/3 (gen_server.erl, line 188)
in call from test:test_message/0 (test.erl, line 12)
==============================================
-module(test).
-export([test_message/0]).
-include_lib("amqp_client/include/amqp_client.hrl").

-record(state, {channel}).

test_message() ->
    {ok, Connection} = amqp_connection:start(#amqp_params_network{}),
    {ok, Channel} = amqp_connection:open_channel(Connection),
    Get = #'basic.get'{queue = "TEST_DIRECT_QUEUE"},
    {#'basic.get_ok'{}, Content} = amqp_channel:call(Channel, Get), %% <=== error here
    #'basic.get_empty'{} = amqp_channel:call(Channel, Get),
    amqp_channel:call(Channel, #'channel.close'{}).
I identified the issue myself after some frustrating hours. Overall, let me confess to being upset with the vague tutorials and documentation about RabbitMQ... Anyway, here is what the problem was:
1) Queue names are supposed to be binaries, i.e. wrapped in << and >>. For example: <<"my queue name">> (quotes included as well).
2) In a different scenario, where I was trying to create the queue with queue.declare, the fact that the queue already existed was not a problem, but the fact that the existing queue was durable while my queue.declare did not specify that parameter caused the server to raise an error and interrupt execution. This is unfortunate behavior: developers would normally expect the queue matching to be done simply by name. So to fix it I had to specify durable = true.
Here is a simple working version of the code:
-module(test).
-export([test/0]).
-include_lib("amqp_client/include/amqp_client.hrl").

test() ->
    {ok, Connection} = amqp_connection:start(#amqp_params_network{}),
    {ok, Channel} = amqp_connection:open_channel(Connection),
    %% The existing queue is durable, so queue.declare must say so too,
    %% and the queue name must be a binary (<<"...">>), not a string.
    Declare = #'queue.declare'{queue = <<"TEST_DIRECT_QUEUE">>, durable = true},
    #'queue.declare_ok'{} = amqp_channel:call(Channel, Declare),
    Get = #'basic.get'{queue = <<"TEST_DIRECT_QUEUE">>, no_ack = true},
    {#'basic.get_ok'{}, Content} = amqp_channel:call(Channel, Get),
    #amqp_msg{payload = Payload} = Content,
    Payload.