Notifications Configuration in S/4HANA 1610 (SAP Fiori)

I have configured notifications as per the blog in our S/4HANA 1610 (Embedded Architecture) landscape.
But when checking the bgRFC monitor (transaction sbgrfcmon), it shows "Data Not Available", although I have:
1. Created an RFC destination named IWNGW_BGRFC with the transfer protocol "Classic with bgRFC".
2. Created an RFC destination IWNGW_BEP_OUT_BGRFC for the background RFC queue, with the queue prefix set to Q.
3. Registered the IWNGW_BEP_OUT_BGRFC destination for background processing by creating a queue.
4. The supervisor destination BGRFC_SUPERVISOR already existed in the system.
Please guide.
Regards,
Rehan Sayed

In transaction SM59 you can run a “Connection Test” on these RFC destinations.
Does the test succeed?
When you call transaction “sbgrfcmon”, which parameters do you provide at the selection screen?
Maybe a screenshot could help.
I think “sbgrfcmon” is a monitor in which you will only see entries that are in an error state or queued up waiting for processing.
If you have little or no activity and no errors, you will be lucky to see any entries there.
Please see also bgRFC Monitor
What happens if you produce some activity in the development system (maybe with errors) and keep refreshing “sbgrfcmon”?

The issue is resolved. Our system's SAP_BASIS component was on SP000, which does not detect a workflow maintained in transaction SWF_PUSH_NOTIF1 with the Customizing flag set to 'X'.
Hence I maintained the workflow in SWF_PUSH_NOTIF1 without the Customizing flag, and the notification worked.
Regards,
Sayed

Related

Time to send SDO

I am working on a CANopen architecture and have three questions:
1- When the 'synchronous window' has closed and the next SYNC message has not yet arrived, should we send SDO messages? Or can we not send any messages during this period?
2- Is it possible not to send a PDO message during the synchronous window?
3- What is the answer that the slaves give in the SYNC message?
Disclaimer: I don't have exact answers but I just wanted to share my assumptions & thoughts.
CiA 301 doesn't mention the relation between synchronous window and SDOs. In normal operation after the initial configuration, one may assume that SDOs aren't present on the system, or at least they are rare compared to PDOs. Although not strictly necessary, SDOs are generally initiated by a device which has the master role, and that device also produces the SYNC messages (again, it's not strictly necessary but it's the usual/common implementation). So, the master device may adjust the timing of SDO requests according to the synchronous window.
Here is a quote from CiA 301:
If the synchronous window length expires all synchronous TPDOs may be
discarded and an EMCY message may be transmitted; all synchronous
RPDOs may be discarded until the next SYNC message is received.
Synchronous RPDO processing is resumed with the next SYNC message.
CiA 301 uses the word "may" (see the quote above). So I'm not sure if it's mandatory or not. In my opinion, it makes sense to follow the advice and abort synchronous TPDO transmissions after the synchronous window and send an EMCY message. Event-driven (non-synchronous) TPDOs can be sent within or after the synchronous window.
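To make that concrete, here is a minimal sketch of how a node could apply that rule. It is purely illustrative: the window length, COB-IDs and the send_frame helper are placeholders, not parts of any particular CANopen stack.

import time

SYNC_WINDOW_LENGTH = 0.010   # synchronous window length (object 1007h) in seconds; example value

def send_frame(cob_id, data):
    # Placeholder for the bus-access call of whatever CAN stack is actually used.
    print("TX %03X: %s" % (cob_id, data.hex()))

class SyncWindowGuard:
    # Drops synchronous TPDOs once the synchronous window has expired, as the quote suggests.
    def __init__(self):
        self.window_closes_at = None

    def on_sync(self):
        # Each SYNC reception opens a new synchronous window.
        self.window_closes_at = time.monotonic() + SYNC_WINDOW_LENGTH

    def send_sync_tpdo(self, cob_id, data):
        if self.window_closes_at is not None and time.monotonic() < self.window_closes_at:
            send_frame(cob_id, data)
        else:
            # Window expired: discard the TPDO and (optionally, "may") report it via EMCY.
            send_frame(0x081, bytes([0x00, 0xFF, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00]))  # example EMCY frame for node 1

guard = SyncWindowGuard()
guard.on_sync()
guard.send_sync_tpdo(0x181, bytes([0x01, 0x02]))   # transmitted only while the window is still open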
There is no direct response to SYNC messages. On SYNC reception, SYNC consumers (slaves) sample their inputs, drive their outputs according to the previous RPDOs, and start transmitting their TPDOs containing the previous samples (or the current ones? I'm not sure about this).
Synchronous windows are for specific PDO synchronization only. For hard real-time systems, data might be required to arrive within certain fixed time intervals, not too early, not too late. That is, the window acts as a real-time deadline. If such features are enabled, you need to take that into consideration when doing the specific CANopen bus implementation.
For example, if some SDO communication occupied the bus so that a PDO couldn't meet its time window, that would be a problem. But this is easily solved by giving the PDO a lower COB-ID than the SDO, which should already be the case in most default device profile setups like "DS401 GPIO module". Other than that, you would have to make sure there are no ridiculous bus loads and that nodes don't hang up or get busy doing other things.
In systems with hard real-time requirements you probably don't want to allow any SDO communication during operational mode to begin with.
What is the answer that the slaves give in the SYNC message?
That question doesn't make any sense. You need to study what the SYNC message does and what it is for.

How to handle errors during asynchronous response delivery from the SM-SR back to the SM-DP/Operator?

In many scenarios the response with the result of the operation execution is delivered asynchronously to the operation initiator (SMDP or Operator). For example step (13) in 3.3.1 of SGP.02 v4.2:
(13) The SM-SR SHALL return the response to the “ES3.EnableProfile” function to SM-DP, indicating that the Profile has been enabled
It is not clear how the SM-SR should act if the call that delivers the result of the operation fails. Should the SM-SR keep retrying such a call, or is it OK to try just once and give up after that? Does this depend on the type of error that occurred during the call?
I am concerned about the cases where the result is sent and may have been processed by the initiator, but the acknowledgement of that was not properly delivered back to the SM-SR. If the SM-SR is required to retry, the initiator must be ready to receive the same operation result status again and process it accordingly, that is, ignore it and just acknowledge.
But I can't see anything in SGP.02 v4.2 that specifies what the behaviour of the SM-SR and the SM-DP should be in this case. Any pointers to documentation specifying this are much appreciated.
In general it is not clear how the rollback to a valid known state should happen in this situation. Who is responsible for that (the SM-SR or the SM-DP in this profile-enabling example)?
I'm not aware of any part of the specification defining this, neither in SGP.02, SGP.01, nor in the test specification SGP.11. There are operational requirements in the SAS certification for a continuous service, but this is not technically defined.
I have experience in implementing the specification. The approach was a message queue with Kafka and a retry policy. The specification says SHALL, which means try very hard. Any implementation dropping the message after a single try is not very quality oriented. The common sense in distributed (micro service) based systems is that there are failures which have to be handled, so this assumption was taken without being expressed in the SGP specification.
The example of a profile status should be idempotent: sending the same message twice should not be harmful. The MessageID and RelatesTo headers are also useful here. I assume the requests/responses are recorded in your system for auditing anyway.
In case you are sitting at the other end and are facing a badly implemented SM-SR and no status message arrives, ES3.GetEIS can be called by the SM-DP later to get the current status.
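As an illustration only, the retry-plus-idempotent-receiver behaviour described above could look roughly like this; none of the names come from SGP.02, they are just placeholders.

import time

processed = set()   # MessageID / RelatesTo values the initiator has already handled

def apply_result(payload):
    # Hypothetical business logic, e.g. mark the profile as enabled in the SM-DP database.
    print("applying", payload)

def handle_result(relates_to, payload):
    # Idempotent receiver (SM-DP side): a duplicate delivery is simply acknowledged again.
    if relates_to in processed:
        return "ACK"
    processed.add(relates_to)
    apply_result(payload)
    return "ACK"

def deliver_with_retry(send, message, max_attempts=5, backoff_s=2.0):
    # Sender (SM-SR side): "SHALL return the response" read as retry until acknowledged.
    for attempt in range(max_attempts):
        try:
            if send(message) == "ACK":
                return True
        except IOError:
            pass                                  # transient transport error, try again
        time.sleep(backoff_s * (attempt + 1))
    return False                                  # park the message for manual follow-up / auditing

# Example: duplicate deliveries of the same result are harmless.
send = lambda msg: handle_result(msg["RelatesTo"], msg["payload"])
deliver_with_retry(send, {"RelatesTo": "req-42", "payload": "Profile enabled"})
deliver_with_retry(send, {"RelatesTo": "req-42", "payload": "Profile enabled"})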
I have already contacted the authors directly. At the end of the document the email is mentioned:
It is our intention to provide a quality product for your use. If you
find any errors or omissions, please contact us with your comments.
You may notify us at prd#gsma.com

Way to get current editor id (doc id) in HiveServerClient in Hue source code

When multiple Hue pages run Tez applications at the same time, Hue sometimes assigns the same session to two different tasks, which causes one of them to receive a KILL signal while the other complains that the current app master is in use and keeps retrying. I looked into the code of HiveServerClient._get_tez_session and I think the problem lies in the way busy_sessions is retrieved, which is not thread-safe. So there is a chance that two queries submitted at virtually the same time will be allocated the same session.
I'd like to know whether there is any way to get the current editor id (doc_id) from the HiveServerClient._get_tez_session method, so I could do some hacking for a quick solution now. Thanks.
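For reference, the kind of serialization I had in mind as a workaround looks roughly like this; it is only an illustration with simplified names, not the actual Hue code.

import threading

_tez_session_lock = threading.Lock()

def get_free_session(open_sessions, busy_sessions):
    # Simplified stand-in for HiveServerClient._get_tez_session: read busy_sessions and
    # pick a free session atomically, so two concurrent queries can't grab the same one.
    with _tez_session_lock:
        for session in open_sessions:
            if session not in busy_sessions:
                busy_sessions.add(session)
                return session
        return None   # caller would then open a new session

# Example: two "queries" arriving at the same time get different sessions.
open_sessions = ["session-1", "session-2"]
busy = set()
print(get_free_session(open_sessions, busy))   # session-1
print(get_free_session(open_sessions, busy))   # session-2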
You can solve this by disabling Tez session mode
set tez.am.mode.session=false;
Session mode is more aggressive in reserving execution resources and
is typically used for interactive applications where multiple DAGs are
submitted in quick succession by the same user. For long running
applications, one-off executions, batch jobs etc non-session mode is
recommended. If session mode is enabled then container reuse is
recommended.
Also try to disable container reuse:
set tez.am.container.reuse.enabled=false;
See all Tez configuration settings here.
Also read this thread about Tez session naming.
I did not test it myself, but maybe you can use the hive.session.id property for getting/setting session IDs.

Effects of setting PersistMessages to N and FileStorePath issues in QuickFIX/J

I am running into out-of-memory issues after a certain amount of time when I run my QuickFIX/J app. After a little investigation I found out that this was being caused by the messages that QuickFIX/J caches for resending when a resend request is received.
So for testing I set the PersistMessages flag to N on a particular session. After that my memory problems completely disappeared. But I do not understand why QuickFIX/J keeps these messages in memory when I have properly set the FileStorePath property. These messages should be stored in a file, but they are not. I do see some files in the directory I set in FileStorePath, but none of them seems to be storing messages; I can only see sequence numbers in them. Do I need to set other flags besides this one to make it work?
I do not plan to use the PersistMessages flag outside testing. I would prefer the FileStoreMaxCachedMsgs flag with a reasonable figure. I also need to know what will happen if my app receives a resend request when I have set PersistMessages to N. Will QuickFIX/J send gap fills instead, or will it crash with some exception?
Thanks
I've found that QuickFIX/J sends gap fills when it cannot find the messages. Also, the config flag FileStoreMaxCachedMsgs tells QuickFIX/J how many messages it should keep in its in-memory cache before pushing them down to files. So this flag, in my opinion, is the one to adjust in order to get your app to work without running out of memory due to message caching.
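For illustration, a session configuration along these lines (values are examples only, not recommendations) keeps message persistence on disk while bounding the in-memory cache:

[default]
ConnectionType=initiator
FileStorePath=data/fixstore
FileStoreMaxCachedMsgs=1000
PersistMessages=Y

[session]
BeginString=FIX.4.4
SenderCompID=MYAPP
TargetCompID=COUNTERPARTY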
Hope it'll be helpful for somebody. Thanks

iOS job queue similar to Path's android priority job queue

Does anyone have an iOS job queue similar to Path's Android Priority Job Queue that they don't mind sharing with the community? I am very new to iOS, so I am not sure if the platform itself provides such a solution. On Android the platform provides no such thing, so I had to use the library that Path has generously made available. If iOS itself or Xcode has such a solution/API, please point me to it. If not, please share yours if you don't mind. Thanks.
Basically I am looking for a job queue that allows users to send data to the server even when there is no network: the queue would hold on to the data even if the user turns off the iPhone, and then, at some later time, when the system detects a network connection, push the data to the server.
There is a similar question on SO already, so I am including it for more detail: How to queue up data for server dispatch on android. The difference is that my question is for iOS and theirs is for Android.
Use case
Imagine a user boarding a train in a subway (no network) who decides to send an email, then closes the app, or even turns off the phone. An hour later, after the user turns the phone back on and a network is detected, the app sends the email.
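To make the behaviour concrete, here is a rough platform-agnostic sketch of the pattern I'm after (written in Python only for brevity; on iOS it would of course be Objective-C/Swift, and the file would live in the app's sandbox):

import json, os

QUEUE_FILE = "pending_jobs.json"   # stand-in for a file in the app's documents directory

def load_jobs():
    # Jobs survive app restarts (and phone power-off) because they are persisted to disk.
    if os.path.exists(QUEUE_FILE):
        with open(QUEUE_FILE) as f:
            return json.load(f)
    return []

def save_jobs(jobs):
    with open(QUEUE_FILE, "w") as f:
        json.dump(jobs, f)

def enqueue(job):
    jobs = load_jobs()
    jobs.append(job)
    save_jobs(jobs)

def flush(network_available, send):
    # Called whenever the system reports that connectivity is back.
    if not network_available:
        return
    remaining = []
    for job in load_jobs():
        try:
            send(job)                  # e.g. POST the queued email to the server
        except IOError:
            remaining.append(job)      # keep it for the next attempt
    save_jobs(remaining)

# Example: queue an email offline, deliver it later when the network returns.
enqueue({"type": "email", "to": "someone@example.com", "body": "written on the train"})
flush(network_available=True, send=lambda job: print("sending", job))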
https://github.com/thisandagain/queue is quite promising. It has the ability to retry and is persistent.
AFNetworking's request operations and request operation manager could be modified to do this with not too much work.
Needed modifications:
When an AFHTTPRequestOperation fails due to no connectivity, make a copy of the operation and store it (in an NSArray, for example)
Use the built-in reachability manager, and retry the operations in the array when reachability returns
Remove the operations from the array if they're successful
Note that the completion blocks aren't copied when an operation is copied. From the documentation:
-copy and -copyWithZone: return a new operation with the NSURLRequest of the original. So rather than an exact copy of the operation at that particular instant, the copying mechanism returns a completely new instance, which can be useful for retrying operations.
A copy of an operation will not include the outputStream of the original.
Operation copies do not include completionBlock, as it often strongly captures a reference to self, which would otherwise have the unintuitive side-effect of pointing to the original operation when copied.
I don't know of any open source library that has already implemented these modifications, or I'd point you there.
I found a very similar lib to Path's priority job queue:
https://github.com/lucas34/SwiftQueue/wiki
Hope this helps someone. This is a very old question, but it might help someone who is still searching, like me :)
