I have been trying to implement DI for Azure Functions where the function is triggered by ServiceBus (topics/subscriptions in this case):
[Singleton]
[FunctionName("Alert")]
public static async Task Alert(
    [ServiceBusTrigger(Topic.Alert, Subscription.PowerBi, Connection = "servicebusconnectionstring")] Message message,
    [Inject] IPowerBiService powerBiService,
    [Inject] IQueueService queueService)
I have read about Azure Functions and DI on the following sites:
https://mcguirev10.com/2018/04/03/service-locator-azure-functions-v2.html
https://blog.wille-zone.de/post/azure-functions-proper-dependency-injection/
https://github.com/introtocomputerscience/azure-function-autofac-dependency-injection
All the examples work fine using an HTTP trigger; I assume the IIS host is up and running and holds the services. But using a ServiceBus trigger, I can't get it to work. I have implemented the solutions mentioned above, and a few more, but they all hit the same issue: the code works, but the services are created anew for every message/trigger.
Has anyone out there managed to do this, or is it not possible?
NOTE (update):
I got some more information that I haven't had time to verify yet, but I have been using a Consumption plan for my Azure Functions. It may be the case that you need an App Service plan instead (I have been using Consumption since that price model is more convenient). Does anyone know more about this?
I will look into this later this week.
I just want to confirm that it works fine now using an App Service plan instead of a Consumption plan. The difference is the "cold start" of the Consumption plan versus a "warm" host.
I guess all the different DI implementations should work fine.
I have been using the following: https://github.com/MV10/Azure.FunctionsV2.Service.Locator
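For context, most of these libraries boil down to a service-locator pattern: a service provider that is built lazily once per host instance and then shared across invocations while the host stays warm. A minimal sketch using Microsoft.Extensions.DependencyInjection (the ServiceLocator class and the concrete PowerBiService/QueueService types are illustrative, not the actual API of the library above):

using System;
using Microsoft.Extensions.DependencyInjection;

// Illustrative only: a provider built once per host instance. Registrations
// survive between invocations only while the host stays warm.
public static class ServiceLocator
{
    private static readonly Lazy<IServiceProvider> Provider =
        new Lazy<IServiceProvider>(BuildProvider);

    public static T Get<T>() => Provider.Value.GetRequiredService<T>();

    private static IServiceProvider BuildProvider()
    {
        var services = new ServiceCollection();
        // PowerBiService and QueueService are hypothetical implementations
        // of the interfaces named in the question.
        services.AddSingleton<IPowerBiService, PowerBiService>();
        services.AddSingleton<IQueueService, QueueService>();
        return services.BuildServiceProvider();
    }
}

On a Consumption plan, every cold start builds a fresh provider, which looks like "services created per message" under load; on an always-on App Service plan the static state survives between messages, matching the behaviour described above.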
I have a function app with a durable task running on Azure. What is the best way of changing the 'MaxConcurrentActivityFunctions' setting (i.e. in host.json) once it's been deployed/published?
"extensions": {
"durableTask": {
"MaxConcurrentActivityFunctions": 4, // ensures scale-out for mappers
}
}
I want to change it to 1 or 4 depending on the type of service plan I use.
I haven't been able to find a way to do it with the Azure SDK or with a REST API. The only way I can think of is to stop the function app, download the host.json file, modify it, upload it, and restart the function app.
Seems like overkill to me. Am I missing something?
host.json uses the ASP.NET Core configuration system.
You can set an app setting (environment variable) AzureFunctionsJobHost__extensions__durableTask__MaxConcurrentActivityFunctions=1 through the portal (or the Azure ARM REST APIs), and it should take precedence over the value in host.json.
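For example, if you prefer scripting it over the portal, the same app setting can be applied with the Azure CLI (the function app and resource group names are placeholders):

az functionapp config appsettings set \
  --name <function-app-name> \
  --resource-group <resource-group> \
  --settings "AzureFunctionsJobHost__extensions__durableTask__MaxConcurrentActivityFunctions=1"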
I have spent my afternoon getting very excited about the container-native serverless platform 'fn project' - http://fnproject.io/.
I love the idea of the FaaS model but have no intention of locking myself into a particular cloud vendor for most of the lifetime of an app - and several other reasons including the desire to spin up the entire app on a small server anywhere if I choose.
fn project seems great for my needs, until I finish perusing the documentation and all the relevant blog posts and suddenly think: "What? Wait... what??? Where are the HTTP operations?"
I cannot find a single reference anywhere that states whether it is even possible to have HTTP triggers for different HTTP operations (i.e. POST, PUT, PATCH, DELETE), let alone how I would do it.
I want to build REST APIs (or certainly at the very least JSON-serving HTTP-based RPC APIs - if it doesn't have hypermedia links it isn't REST ;) but let's not get into that one in this thread).
Am I missing something here (certainly the correct bit of documentation)??
Can anybody please enlighten me as to how I would do this, or even tell me if I have totally misunderstood what I should use this for?
My excitement has deflated for now, but I'm hoping that will change with the right information.
It feels odd that I can't find anyone else complaining about this, so I think that indicates my misunderstanding perhaps.
Other solutions such as OpenFaaS look interesting, but I don't want to have to learn how to deploy Kubernetes and Docker swarms if I can avoid it :)
I'm not an expert, but as of now it seems it is not possible to specify the HTTP method inside the trigger. Check the latest trigger spec: as you can see, there is no notion of an HTTP method there.
However, handling different HTTP methods can be done inside the function itself.
For example, in Java (with fdk-java v1.0.80), you can use com.fnproject.fn.api.httpgateway.HTTPGatewayContext as the first parameter of the function, as described in the section "Accessing HTTP Information From Functions" of the documentation:
In Fn for Java, when your function is being served by an HTTP trigger (or another compatible HTTP gateway) you can get access to both the incoming request headers for your function by adding a 'com.fnproject.fn.api.httpgateway.HTTPGatewayContext' parameter to your function's parameters.
Using this allows you to:
...
Access the method and request URL for the trigger
...
You can then retrieve the HTTP method by calling getMethod() on the HTTPGatewayContext passed as parameter.
In other languages (with the other FDKs), it's possible to do the same:
in Go : example calling RequestMethod() on context
in Ruby : class HTTPContext
in Python : class HTTPGatewayContext
in Node : class HTTPGatewayContext
From these different contexts, you'll then be able to get the method parameter passed with fn invoke --method=[GET|POST|...] (via the fn-http-method header).
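For example (assuming an app named myapp and a function named myfunc; the --method flag is the one mentioned above):

fn invoke myapp myfunc --method=PUT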
The main drawback here is that all HTTP methods should be handled in the same function. Nonetheless, you can structure your code to have only one class per method.
After some further thought it seems fairly clear now what my actual misunderstanding was....
When I have built Serverless framework services in the past (or built and deployed Lambda functions using Terraform), I have been deploying to AWS and so have been using AWS's API Gateway offering (their product is actually called API Gateway, but it's important to recognise that API gateway is also a distributed-systems / micro-services design pattern).
An API gateway makes it possible to route specific HTTP request types, including the method (GET, POST, PUT, DELETE), to the desired functions.
Platforms such as Fn project and OpenFaaS do not provide an out-of-the-box API gateway solution, and it seems we would need to take care of this ourselves.
These platforms are about deploying functions; we find the other bits via our product of choice.
Hi, I have googled all day long but I can't find an answer.
I have to write a web app which talks to Asterisk.
It should be able to do click-to-call operations.
Can you guys recommend something?
I came across a few projects but I'm still not sure.
I just want to connect to Asterisk and make calls from the web app.
Thanks
If you're a Ruby programmer, the best way for you to hook into Asterisk is Adhearsion. It wraps up Asterisk's AGI and Manager (AMI) APIs for you.
Also have a look at "SIP, asterisk, adhearson and VoIP", and in particular Adam Kalsey's answer. He works for Tropo, which sponsors the Adhearsion project.
First you need to know that the protocol Asterisk uses is SIP; you can learn more on Wikipedia.
Since you want a Rails application, you may want to use Ruby as well; there's a Ruby implementation named OverSIP. You can check their API and see if it fits your requirements.
If you are aiming at web calls, you'll need WebRTC, Flash, or a Java applet. For WebRTC you can check sipML5 for an open-source solution.
You can also opt for an interface that starts a call from one number to another using your phone: when the first call is picked up, the server starts ringing the destination.
Also you could make use of cloud communications providers like Twilio, Tropo, etc.
Try this Google search:
rails asterisk manager interface
I saw some interesting things right off. I am not trying to be one of those "use Google" type people; I just didn't want to paste in all the links that I found from this Google search.
Check it out, hope it helps.
There are several ways to do this, but the three easiest ones are:
1. Generate a call file on the Asterisk server
These files should be written to the directory
/var/spool/asterisk/outgoing
Asterisk will then pick up the file, process it, and delete it.
It's pretty aggressive when doing this, so it's recommended to write the file into a temporary directory first and then move it to the spool directory for processing.
A tutorial on the file format is here:
https://www.voip-info.org/asterisk-auto-dial-out/
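For illustration, a minimal call file might look like this (the channel, extension, and context are placeholders for your own dialplan):

Channel: SIP/1000
Callerid: "Click2Call" <1000>
MaxRetries: 2
RetryTime: 60
WaitTime: 30
Context: outgoing
Extension: 5551234
Priority: 1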
(I personally feel this is a bit "hacky", and prefer doing it with an API call)
2. Generate the call via the AMI (Asterisk Manager Interface)
Use the Originate action of the AMI to generate the call. It's pretty easy to set up: just configure the manager.conf file, which sets up a TCP service on port 5038 from which you can call the API.
https://www.voip-info.org/asterisk-config-managerconf/
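For illustration, AMI is a plain key-value protocol over the socket; a login followed by an Originate looks roughly like this (credentials, channel, and extension are placeholders, and each action ends with a blank line):

Action: Login
Username: my_username
Secret: my_password

Action: Originate
Channel: SIP/1000
Context: outgoing
Exten: 5551234
Priority: 1
Callerid: "Click2Call" <1000>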
3. Set up the call using the ARI API
First you need to set up ari.conf; this is enough for now:
[general]
enabled = yes
pretty=yes
allowed_origins=http://ari.asterisk.org
[my_username]
type = user
read_only = no
password = my_password
password_format = plain
This is a little bit more complicated to set up, but it really isn't that hard if you just get past the technical geek-speak. Just set up two channels, set up a mixing bridge, and add both channels to the bridge.
To set up a click2call you don't even need to do that...
This is the call we use (Ruby):
HTTParty.post("http://sipserver.com/ari/channels?endpoint=SIP/#{sip_id}&extension=#{number}&context=outgoing&priority=1&timeout=30&api_key=#{USERNAME}:#{PASSWORD}")
where
#{sip_id} is your registered SIP username,
#{number} is the extension that is sent to the dialplan, and
#{USERNAME} and #{PASSWORD} are from ari.conf.
(Note that if you need to send variables to the originate command, they go in the variables parameter as a separate JSON body.)
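As a sketch, such a body might look like this (the variable name and value are placeholders):

{ "variables": { "MY_VAR": "my value" } }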
A really useful tool for understanding how this works is the Swagger UI at http://ari.asterisk.org. We already allowed this origin in ari.conf, so it should be ready to go. Remember to open your ports in firewalls, etc.
Set up your server IP and port there; the api_key is in this format: my_username:my_password
I've been doing some research on BPEL for about two weeks now and still don't quite get it.
I have deployed the HelloWorld sample in ODE and have also managed to deploy this other one.
My intention was to do something like the second example but with my own real WS deployed and working.
I'm now at the point of having a process with no errors and correctly deployed in ODE with the following structure:
I have started the project from a service definition, importing my Multiply.wsdl. The Designer has composed the import tag into the MuktiplyProcessArtifacts.wsdl next to the PartnerLinkTypes, all automagically, so I assume all namespaces, etc. are OK.
There are a few concepts I misunderstand that keep me from making all of this work:
In my original Multiply.wsdl I have
soap:address
location="http://localhost:8080/WS-multiply/multiply"
but ODE tells me my soap:address must have the form host:port/ode/processes...
This doesn't sound reasonable to me since my WS could be implemented anywhere outside my ODE_HOME.
The second example I mentioned before explains how the Designer supposedly creates a "Caller.wsdl", which in fact has the function I desire: to implement a "wrapper" WSDL, providing the BPEL process with entry and exit points. The issue is that the Designer does not generate that interface. Am I supposed to create it myself? Do I have to create it at all?
If that third WSDL is really needed, is it the one I would have to call if I wanted to test the whole process?
It looks like your partner WSDL is associated with a myRole of a partnerLink. PartnerLinks and partnerLinkTypes are a concept in BPEL used to define dual interfaces, in the sense that if a partner A wants to communicate with a BPEL process as a buyer, it needs to provide a certain set of functionality that the process can use for further communication (i.e. sending a shipment confirmation to the buyer). Thus, a partnerLink maintains two roles: the myRole is the portType (aka interface) that the process itself provides, while the partnerRole refers to a portType the process expects to be implemented by the partner. MyRoles must of course be implemented by the BPEL process and thus need an endpoint that is exposed by the BPEL engine. PartnerRoles can be bound to arbitrary endpoints. This happens in the deployment descriptor, which is the deploy.xml in ODE.
I guess you can fix your process by assigning your partner WSDL to a partner role.
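As a sketch of what that looks like in ODE's deploy.xml (the service and port names are placeholders for the ones in your own WSDLs): myRole port types go under provide, partner-role port types under invoke.

<deploy xmlns="http://www.apache.org/ode/schemas/dd/2007/03"
        xmlns:pns="http://example.com/multiply/process"
        xmlns:wns="http://example.com/multiply/ws">
  <process name="pns:MultiplyProcess">
    <active>true</active>
    <!-- myRole: the entry point the engine exposes under /ode/processes -->
    <provide partnerLink="client">
      <service name="pns:MultiplyProcessService" port="MultiplyProcessPort"/>
    </provide>
    <!-- partnerRole: your real Multiply web service, hosted anywhere -->
    <invoke partnerLink="multiplyPartner">
      <service name="wns:MultiplyService" port="MultiplyPort"/>
    </invoke>
  </process>
</deploy>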
I hope http://thiliniishaka.blogspot.com/2012/10/develop-ws-bpel-process-using-wso2.html
and http://thiliniishaka.blogspot.com/2012/10/part-2-developing-ws-bpel-process-using.html may help you resolve the aforementioned queries.
Thanks,
Thilini
It's mandatory to have ode.war deployed on the Tomcat server. Tomcat creates a path like the one below; you need to configure your endpoint with the complete path /ode/processes:
c:\apache-tomcat-7.0.55\webapps\ode\WEB-INF\processes\BPEL_WS\
I have created a WCF service using the NetMsmq binding, for which I created a private queue on my machine and executed the project. This works fine as such: my WCF service starts and accesses messages from the queue in the debugging environment. Now I wanted to host the service in a Windows service, and for that I created a new project and a Windows installer as well (this service runs under the Local System account). Then I tried installing the Windows service using the InstallUtil command from the command prompt. During installation, when the service host opens, I get an exception saying:
There was an error opening the queue. Ensure that MSMQ is installed and running, the queue exists and has proper authorization to be read from. The inner exception may contain additional information.
Inner Exception System.ServiceModel.MsmqException: An error occurred while opening the queue:Access is denied. (-1072824283, 0xc00e0025). The message cannot be sent or received from the queue. Ensure that MSMQ is installed and running. Also ensure that the queue is available to open with the required access mode and authorization.
at System.ServiceModel.Channels.MsmqQueue.OpenQueue()
at System.ServiceModel.Channels.MsmqQueue.GetHandle()
at System.ServiceModel.Channels.MsmqQueue.SupportsAccessMode(String formatName, Int32 accessType, MsmqException& msmqException)
Could anyone suggest a possible solution for the above issue? Am I missing any permissions that need to be set for the queue or the Windows service? If so, could you suggest where these permissions should be added?
Tom Hollander had a great three-part blog series on using MSMQ from WCF - well worth checking out!
MSMQ, WCF and IIS: Getting them to play nice (Part 1)
MSMQ, WCF and IIS: Getting them to play nice (Part 2)
MSMQ, WCF and IIS: Getting them to play nice (Part 3)
Maybe you'll find the solution to your problem mentioned somewhere!
Yes, it looks like a permissions issue.
Right-click on your private queue from the Server Manager and select Properties. Go to the Security tab, and make sure you have the right permissions in there for your Local System account.
This is also confirmed in Nicholas Allen's article: Diagnosing Common Queue Errors, where the author defines the error code 0xC00E0025 as a permissions problem.
I ran into the same problem; here is the solution.
Right-click "My Computer" --> Manage. In the Computer Management window go to "Services and Applications --> Message Queuing --> your queue", select your queue and open its Properties. Add the user running your WCF application and grant full access. This should solve the issue.
It can simply be that the service can't find its queue.
The queue name must exactly match the endpoint address.
Example:
net.msmq://localhost/private/wf.listener_srv/service.svc
points to local queue
private$\wf.listener_srv\service.svc
If the queue name and the endpoint match, then it is most likely that the credentials defined on the IIS pool don't grant access to the queue.
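For example, a matching service endpoint in configuration might look like this (the contract name is a placeholder; the address mirrors the queue path above):

<endpoint address="net.msmq://localhost/private/wf.listener_srv/service.svc"
          binding="netMsmqBinding"
          contract="IListenerService" />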