I want to know whether it is possible to obtain data from one module inside another. I am using ejabberd server 15.10, and I implemented my modules in Erlang.
Here is the case:
I have a module that filters messages: mod_filter
I have another module that makes some calculations while the server is running: mod_calculate
Is it possible to get fresh data from mod_calculate every time the ejabberd server filters a message in mod_filter?
Data isn't stored in modules but in variables, and you won't have access to the internal variables that one module's code operates on unless that module somehow exposes them to the outside world.
The module may have some functions already exported. Check with:
rp(mod_calculate:module_info()).
This will show you all functions exported in the module. Some of those functions may expose the variables from the module to other modules. If not, then you would need to add such functions and call them from mod_filter.
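For example, if mod_calculate keeps its results in a public ETS table (a common pattern for sharing state in ejabberd modules), it could export a small accessor that mod_filter calls directly. This is only a sketch; the table and function names are hypothetical:

```erlang
%% In mod_calculate: create the table at startup and export a getter.
-module(mod_calculate).
-export([start/2, get_result/1]).

start(_Host, _Opts) ->
    ets:new(calc_results, [named_table, public, set]),
    ok.

%% Called from mod_filter as mod_calculate:get_result(Key).
get_result(Key) ->
    case ets:lookup(calc_results, Key) of
        [{Key, Value}] -> {ok, Value};
        [] -> not_found
    end.
```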
What @Amiramix states is accurate, but it's not the whole picture.
There's a low-coupling mechanism to communicate events between modules in ejabberd - it's the hooks and handlers concept. The link points at MongooseIM documentation, but this mechanism is more or less the same in both codebases.
In general, one module can call a hook, which is like a function call, but depending on the registered handlers it may or may not result in some action(s) being carried out. Other modules can register handlers for hooks they choose. If you're authoring the modules in question, this mechanism might give you the required communication channel.
To make things more concrete - each time mod_filter needs some information that only mod_calculate has access to, it can run ejabberd_hooks:run_fold/4 with a custom hook name. If mod_calculate registers a handler for that hook (usually in its start function), it can return some data relevant to mod_filter. However, different modules can implement a handler for the hook, so mod_filter and mod_calculate aren't coupled as they would be if you used a direct function call (like mod_calculate:some_function(...)).
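A minimal sketch of that arrangement, assuming a custom hook named calculate_data (the hook name and helper function are hypothetical; check the hooks API of your ejabberd version):

```erlang
%% In mod_calculate: register a run_fold handler when the module starts.
start(Host, _Opts) ->
    ejabberd_hooks:add(calculate_data, Host, ?MODULE, calculate_data, 50),
    ok.

stop(Host) ->
    ejabberd_hooks:delete(calculate_data, Host, ?MODULE, calculate_data, 50),
    ok.

%% run_fold handler: receives the accumulator, returns the (possibly updated) value.
calculate_data(_Acc) ->
    current_result().  %% hypothetical internal function

%% In mod_filter: run the hook. If no handler is registered,
%% run_fold simply returns the default value.
Result = ejabberd_hooks:run_fold(calculate_data, Host, default_value, []),
```

Note that mod_filter only knows the hook name, not which module (if any) answers it.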
Related
TL;DR
If OTP application A makes calls to a globally registered gen_server in application B, and I don't want to install all of app B on nodes that don't run it, how do I handle the gen_server's client code?
Background (slightly simplified)
I have a system using distributed Erlang: two nodes with distinct purposes, running mostly different code. So far, I have been using handmade Makefiles and installing all software on both nodes. Some of the code runs as OTP applications with supervisors, but this has not been done systematically, so not all modules are listed in app files or are part of proper supervision trees.
The dependencies of the code running at each node are different enough that I want to divide it into OTP applications (one per node), build releases, and install them separately. I hope this would let me ditch my handmade Makefiles and switch to rebar3.
One node runs a central server written entirely in Erlang; it has dependencies (cowboy) which are not relevant to the other node. The other node runs a client program that uses the server, but also uses various port programs and GUI libs which are not needed on the server node.
Problem
The client interacts with the server by making regular function calls to the client API of a globally registered gen_server. I.e. the gen_server, which runs on the server node, has its client functions in the same module. This means that this gen_server's beam file needs to be present on both nodes, but it should only be part of a supervision tree in one of the applications.
The server-side code in this gen_server uses other modules that are only needed on the server node, and thus there is test code for the gen_server that also depends on those other modules. (I realise this could be solved by proper mocking in the tests.)
What solutions have I considered?
Put it in a library application
I could put the gen_server's code in a library app which both the others depend on. It would be strange for a few reasons:
The gen_server module would not be part of the same app as the other modules it depends on (and the app-level dependency would be reversed compared to the actual dependency in the code).
Test code would either need to stay in the server app (not the same app as the code it tests) or be reworked to not depend on surrounding modules (which would be good but time consuming).
Include the server app in both releases
I could include the server app on both nodes and have the supervisor code check whether it should actually start anything, based on init arguments or node name. But that would kind of defeat the purpose of what I'm trying to do.
Include the gen_server module in both apps
I could use a symlink or something to include the gen_server module in the client app as well. I guess it would work, but it feels dirty.
Split the gen_server module into a client- and a server-module
Then the client module could be put in the client app (or in a lib, if some part of the server also uses it). It would diverge a lot from the way gen_servers are usually written.
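A sketch of that split, with hypothetical module names. The client module holds only the API wrappers around gen_server:call/2 against the global name, so it can live in the client app (or a shared lib) without dragging in any server-side code:

```erlang
%% my_server_client.erl -- shipped with the client app; no server deps.
-module(my_server_client).
-export([get_status/0, do_work/1]).

%% The server side registers itself as {global, my_server}.
get_status() ->
    gen_server:call({global, my_server}, get_status).

do_work(Args) ->
    gen_server:call({global, my_server}, {do_work, Args}).

%% my_server.erl -- shipped only with the server app. It starts with
%% gen_server:start_link({global, my_server}, ?MODULE, [], []) and its
%% handle_call/3 clauses implement get_status and {do_work, Args}.
```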
I am trying to implement a simple client and server using standalone Asio (non-Boost). I saw on this page (in the documentation):
http://think-async.com/Asio/asio-1.12.2/doc/asio/net_ts.html
that Asio currently implements the networking interface that is expected to be supported in the C++20 standard. I would like to use that interface for my application so that, when the new standard library becomes available, I will just have to change the headers and still have a working application. So my questions are:
1) Do you think it is possible to use Asio with only the interfaces reported on that page?
2) If yes, could you show me simple code samples for DNS resolution, connect (client), accept (server), and read/write operations (no asynchronous stuff, only blocking)? Please also indicate the namespaces you used.
As a reference for the operations I want to perform (look under the echo section and the nonblocking client and server):
http://think-async.com/Asio/asio-1.12.2/doc/asio/examples/cpp11_examples.html
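A minimal sketch of the blocking client side (DNS resolution, connect, write, read) using standalone Asio 1.12.x. This is a sketch only: compile with -DASIO_STANDALONE against the Asio headers, and "example.com"/"7" are placeholders for your own host and service; error handling via exceptions is omitted for brevity. The server side is symmetric, using asio::ip::tcp::acceptor and accept().

```cpp
// Blocking TCP echo client using standalone Asio (no Boost).
#include <asio.hpp>
#include <iostream>
#include <string>

int main() {
    asio::io_context io;

    // DNS resolution: returns a range of endpoints for the host/service.
    asio::ip::tcp::resolver resolver(io);
    auto endpoints = resolver.resolve("example.com", "7");

    // Connect: tries each resolved endpoint in turn until one succeeds.
    asio::ip::tcp::socket socket(io);
    asio::connect(socket, endpoints);

    // Blocking write, then blocking read.
    std::string request = "hello\n";
    asio::write(socket, asio::buffer(request));

    char reply[128];
    std::size_t n = socket.read_some(asio::buffer(reply));
    std::cout.write(reply, n);
    return 0;
}
```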
The standard way of setting up a network of applications communicating over the ACE/TAO CORBA framework has always been:
run the naming service
run the event channel
run your applications
I'd like to spare my end-users from having to spawn multiple background services by hand and am looking for a clean solution. I'd also like my networks to be as plug 'n' play as possible. That means we're synchronizing various hardware components with the help of a central controller instance. Each of these pairings makes up an (isolated) network, so we can have several of these in one environment and don't want any interference between them.
My idea was to spawn a naming service and an event service on the controller's initialization, but I haven't found a nice way yet to spawn both processes (tao_cosnaming, tao_rtevent) as child processes so that they are really tied to the controller instance and don't keep running if, e.g., the controller crashes. Is there already a mechanism inside TAO that allows this?
The Implementation Repository could do this for you. Another option is to just link the Naming Service and Event Channel into your controller, just one process that also delivers these services.
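If you do still want them as separate child processes rather than linked-in services, a sketch using ACE's portable process spawning (the endpoint and command-line flags are placeholders; check the tao_cosnaming options for your TAO version, and note that a hard crash of the controller will not reap the child by itself):

```cpp
// Spawn tao_cosnaming as a child of the controller via ACE_Process,
// and terminate it on the controller's shutdown path.
#include <ace/Process.h>

ACE_Process naming_service;

bool spawn_naming_service() {
    ACE_Process_Options opts;
    opts.command_line("tao_cosnaming -ORBEndpoint iiop://localhost:12345");
    return naming_service.spawn(opts) != ACE_INVALID_PID;
}

void stop_naming_service() {
    naming_service.terminate();  // kill the child when the controller exits
    naming_service.wait();
}
```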
The PCs in my area at work have Avaya one-X Agent 2.5.7.6.
I am writing a program to automate some commonly used call functions. I have the Avaya one-X Agent 2.5 API guide and, using it, I have managed to interface with one-X and perform some of the functions I need (dialing numbers, answering/releasing calls, putting them on hold).
Nevertheless, there are some additional things I need to do that the guide doesn't mention. Specifically, I need to be able to :
query and set the work state (auto-in/ready, ACW, and some of the aux modes)
transfer the current call to one of several commonly dialed numbers.
Can you point to me to any documentation or links where I can find information about these operations?
The one-X Agent HTTP API does not support these methods.
This functionality can be implemented using AVAYA AES server-side API.
You could use DMCC (which has bindings for different languages and also a language-agnostic XML interface), which implements the CSTA ECMA-269 industry standard.
Specifically, you'll need Get/Set Agent State methods to control work state and Single Step Transfer Call method to redirect a call. You'll need DMCC 6.x version for Agent State functionality.
I'm in the planning phase of our new site - it's an extension of some mobile apps we've built. We want to provide our users with a central point for communication and also provide features for users who don't want to/can't use the mobile apps. One of the features we're looking at adding is a reputation system similar in nature to the SO badge system. We're designing the system to use SOA.
I don't want to have to code all of this logic into the main app as discrete chunks. I'm thinking of creating a means to accomplish this which will allow us to define new thresholds and rules for gaining reputation and have them injected into some service. The two ways I've thought of doing this so far are:
Look for certain traits in a user's actions and respond. This would mean having a service running that can run through the 'plugged in' award definitions, check for thresholds that have been met, and respond appropriately.
Fire events when the user performs actions, then listen for those events and respond appropriately. Because the services carrying out these actions run in separate app domains, potentially on separate servers, the only way I can see to have a central message bus listening and responding to these events is by using something like MassTransit, NServiceBus or Rhino.Esb.
I know that a service bus can easily be designed into an application that simply doesn't need it, and most times - unless you're integrating disparate, heterogeneous systems - you most likely won't need one when designing a new system. But I'm a bit lost for options as to the best way to do this. I don't like the idea of having a service hammer the DB all the time in the background, even though that does sound a lot simpler early on - later on, I dread to think!
Has anyone here designed a system like this? How did you accomplish this? We're designing for high throughput as we expect there will be times when the system will need to be able to cope with bursts of users.
I've designed a system that had similar requirements. To achieve this the key elements were:
Plugins
Event messaging - using Emesary
The basic concept is that the core is not aware of exactly which module will perform any given task.
Messages are defined, and at points within the system they are dispatched. The sender is not aware whether the message is required by anyone. This effectively decouples vast chunks of the system.
So, to perform a job, some code is plugged in that registers with the event messaging bus and will receive messages. When it receives a message that it needs to process, it processes it.
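The register/receive cycle above can be sketched with a minimal notifier/recipient pair. This is a generic illustration of the pattern, not the actual Emesary API; all class and message names are made up:

```python
class Transmitter:
    """Central bus: recipients register; every notification is offered to all of them."""
    def __init__(self):
        self.recipients = []

    def register(self, recipient):
        self.recipients.append(recipient)

    def notify(self, message, payload):
        # The sender does not know (or care) who, if anyone, handles this.
        for recipient in self.recipients:
            recipient.receive(message, payload)


class BadgeAwarder:
    """A plugged-in module that only reacts to the messages it understands."""
    def __init__(self):
        self.awarded = []

    def receive(self, message, payload):
        if message == "answer_accepted":
            self.awarded.append(("first-accept", payload["user"]))


bus = Transmitter()
bus.register(BadgeAwarder())
bus.notify("answer_accepted", {"user": "alice"})  # handled by BadgeAwarder
bus.notify("question_viewed", {"user": "bob"})    # silently ignored
```

The core only ever talks to the bus, so new award rules are added by registering new recipients, not by touching the dispatch code.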
The Emesary code is extremely small and efficient. In the first instance I've called it Emesary, and you're free to use it (it is also available from the Emesary project on CodePlex).
As the system becomes more complex, lots of events may be flying about. If you get more than 20k a second, it was always in my design to add filtering and routing (implemented by extending the recipient interface to allow a recipient to specify, during registration, which messages it wants to receive). I've never needed to add this filtering, because Emesary is efficient enough that processing the messages is what takes the time.
I've built a version of Emesary that bridges two Notifiers across disparate systems using WCF, CORBA and TCP/IP. I investigated using RabbitMQ and decided it would be possible to use it underneath Emesary if needed.
Base Class Diagram
Scalable server.
This is a fairly complex example; however, it shows where Emesary fits in. In this diagram, anything with a drop shadow can have multiple instances, and that is managed outside of what I'm trying to explain here.