I'm trying to send different message cards to multiple Teams channels.
I have already created a webhook (telekom/webhook) for this, which gives me the right variables via JSON.
There are four department receiver channels (telekom/rest-api-component), which are also configured to send pre-formatted Teams message cards with the submitted variables.
Currently this happens on all channels at the same time. In between I need an "action" in which I can decide which of the channels is served, based on the input values. Unfortunately I can't find anything suitable among the variety of APIs. Do you know how I could realize this? Something like: if department = Backoffice, then trigger the Teams "Account Management" action.
In order to talk to the different applications from Office 365 I wanted to use the Microsoft Graph API, which has been available for some time now. I couldn't find it in Flowground. Are you planning to include this module?
For implementing Office 365 flows this would be absolutely necessary for me.
I want to come back to this question: The CBR is a good choice for executing decisions indeed. But is this the best solution in every situation? I do not think so.
Assume the following task:
Depending on an input parameter test, you want to fire a request to different web services (WS1: google.de and WS2: bing.de).
Solution 1: You realize the requests with dedicated connectors for WS1 and WS2.
In this case you need the CBR in front of the WS1 connector and the WS2 connector to decide which connector has to be used next.
Solution 2: You are able to realize both requests with the REST API connector. In this case you can use a JSONATA expression as the URL mapping, e.g.
(test="google") ? "http://google.de" : "http://bing.de"
By using JSONATA expressions every connector has (limited) capability for executing decisions.
Solution 2 has a big advantage when you are using realtime flows. In this case you are able to reduce the number of connectors that are needed for running the flow and (very important from a cost perspective) the number of tokens permanently claimed by this flow.
For reducing the complexity of JSONATA expressions (e.g. when you add further search engines) and for separation of individual configuration items you can use the configuration connector (we can discuss this in a separate thread if needed).
Solution 1 is the choice without alternative when you have to decide between different structures/connectors that need to be executed within a flow.
Please try the Content-Based-Router: https://doc.flowground.net/guides/content-based-router.html, it is available on the Connector Catalog.
We are trying to implement the oneM2M standard and have a question regarding the communication process between a Remote CSE and the IN-CSE. I wrote down what I understood from the documentation, step by step, below. Some of the issues are not so clear to us, so before doing any implementation I need to make sure everything is crystal clear.
I am going to ask the question before telling you everything we understand from the documentation. Then I am going to write, step by step, the solution we have in mind. The question is: when a request sent by an IN-AE is meant for the MN-CSE, should the IN-CSE redirect the request to the MN-CSE, or should it handle it itself?
Before anything else, we have two absolutely separated CSEs. One is IN-CSE, the other one is MN-CSE almost like below.
IN-CSE has a resource tree
/in-cse61
/in-cse61/csr-34
/in-cse61/ae-1234
MN-CSE has a resource tree
/mn-cse34
/mn-cse34/csr-61
/mn-cse34/ae-123456
/mn-cse34/cnt-1
/mn-cse34/cin-01
/mn-cse34/cin-02
/mn-cse34/cin-03
/mn-cse34/cnt-2
We skipped any security concerns for now. Let's say the IN-AE wants to communicate with the MN-CSE, as described in the question above.
1- The IN-AE should send a discovery or retrieve request to the IN-CSE asking for all of its remoteCSE child resources.
2- What is the exact difference between sending a discovery request and sending a retrieve request? We thought that a discovery request returns just resource URIs, while a retrieve request returns the whole data of the exact resource. Is this approach correct?
3- After getting all the remoteCSEs, I know their IDs. Then I can send a discovery request to the MN-CSE to get the AEs in it. We see two options:
a. ~/in-cse61/csr-34?fu=1&rty=2
b. ~/mn-cse34?fu=1&rty=2
Option a: If the IN-AE only wants to make a discovery request for the IN-CSE's resource tree, the IN-CSE should take care of it without redirecting it to the MN-CSE, because the IN-CSE already knows that /in-cse61/csr-34 is a valid remoteCSE resource for it, and since the request path starts with ~/in-cse61, it should be handled by the IN-CSE.
Option b: If the IN-AE wants to make a discovery request for the MN-CSE's resource tree, then the IN-CSE can understand that it is related to a remoteCSE by looking at the /mn-cse34 part of the request path, because it doesn't start with the IN-CSE's resource ID.
So the IN-AE (e.g. a smartphone) somehow has to decide which CSE should handle the request? Is there anything wrong in our thinking?
---------------------EDITED--------------------------------------
I have inspected the architecture of the Application Developer Guide TR-0025: http://www.onem2m.org/application-developer-guide/architecture
According to this sample, a smartphone (IN-AE) can control Light#1 (ADN-AE-1) through the IN-CSE.
After the registration and initial resource creation processes are completed, the system is ready to discover and then control the lights.
GET /~/mn-cse/home_gateway?fu=1&rty=3&drt=2 HTTP/1.1
Host: in.provider.com:8080
Although the Middle Node CSE-ID and the Middle Node CSEBase name are used in the HTTP request URL, the host addresses the IN-CSE. This means the discovery request sent from the IN-AE is handled by the IN-CSE first, which then redirects it to the mn-cse. However, you told me the opposite by saying “The retrieval or discovery normally is only limited to the resources of the hosting CSE, and does not traverse to the remote CSEs automatically.”.
In TR-0025 the given example is presented as a common scenario.
And also in TR-0034 the request is actually traversed, as you can see in the diagram.
There are many points in your question that need to be addressed.
First of all, there is no special entity in oneM2M named "IN-AE". This is just the name that is used for the AE that connects to the IN-CSE in oneM2M's TR-0025: Light control using HTTP binding developer guide. An Application Entity can actually be connected to an IN-CSE or an MN-CSE via the same interface (Mca), though there might be AEs that are especially designed to work with one particular CSE.
Regarding your point 2, the difference between a retrieve and a discovery request:
The retrieve request is targeted at a resource to retrieve it. For example, a retrieve request sent to the Container resource /mn-cse34/cnt-1 (from your example) would retrieve the Container resource itself and its attributes.
A discovery request is also targeted at a resource, and technically it looks very much like a normal retrieve request. But in addition you provide filter criteria and a discovery result type. For example, a discovery request sent to the same Container resource /mn-cse34/cnt-1 might return all the references to the ContentInstances of that Container resource. Depending on the filter and result type you can either get the full resources or only references to them.
Please have a look at oneM2M's specification TS-0001 Functional Architecture, sections 10.2.6 Discovery and 8.1.2 Request for a full explanation and the list of possible parameters for the discovery request.
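For illustration, here is a minimal sketch of the difference over the oneM2M HTTP binding, using the hypothetical resource tree from the question (host, originator and request identifiers are made-up values; header and parameter names follow the HTTP binding as I understand it):

import requests

CSE_HOST = "http://in.provider.com:8080"      # hypothetical address of the hosting CSE
HEADERS = {
    "X-M2M-Origin": "C-myAE",                 # hypothetical originator (AE-ID)
    "X-M2M-RI": "req-0001",                   # request identifier
    "Accept": "application/json",
}

# Plain retrieve: returns the Container resource /mn-cse34/cnt-1 itself, with its attributes
retrieve = requests.get(CSE_HOST + "/~/mn-cse34/cnt-1", headers=HEADERS)

# Discovery: same target, but filterUsage (fu=1) plus a resourceType filter
# (rty=4, contentInstance) turn it into a discovery request that returns
# references to the matching ContentInstances instead of the Container itself
discovery = requests.get(
    CSE_HOST + "/~/mn-cse34/cnt-1",
    headers={**HEADERS, "X-M2M-RI": "req-0002"},
    params={"fu": 1, "rty": 4},
)

print(retrieve.status_code, retrieve.json())
print(discovery.status_code, discovery.json())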
Regarding points 1 and 3 of your question: I don't know what your AE wants to solve, but it should have a notion of the data structure built in. It is a good idea to organise the data in a structured and uniform way, e.g. by using Containers, FlexContainers, Groups etc. This way the application doesn't need to browse the whole resource tree of a CSE, which could become really big over time. Of course, it might be a general application that needs to traverse a bigger and previously unknown structure. In that case the application could use a discovery request to get the relevant resources. Please note that you can also do discovery over the meta-data of resources, e.g. labels, date and time etc. This might be helpful to reduce the result set.
The retrieval or discovery normally is only limited to the resources of the hosting CSE, and does not traverse to the remote CSEs automatically. An exception is announced resources. Those resources are announced to a remote CSE where they get a kind of "shadow" counterpart, and they provide your application with some information about the state of the resources as well as how to retrieve them (via a link attribute). But if you really want to access a remote CSE and your application has permission to do so, the pointOfAccess attribute provides you with the address of the remote CSE.
But as said before, in general your application (AE) is connected to a single CSE. On that CSE all the resources of the AE, or the resources the AE has access to, are hosted. Also keep in mind that the AE needs to have permission (via an AccessControlPolicy) on the CSE to access the resource.
Update
Perhaps I need to elaborate a bit more on how to work with a remote CSE. Ignoring announced resources for now, there are two possibilities for your "IN-AE" to access a resource on the remote CSE:
You can send requests such as retrieve, update etc. to the remote CSE resource in the IN-CSE. These requests are then forwarded to the real "mn-cse" instance over the Mcc connection between the IN-CSE and the MN-CSE. This has the advantage that the "IN-AE" doesn't need to care about how to connect to the MN-CSE "mn-cse" directly (e.g. there might be firewalls etc. in place to protect the MN-CSE).
You can see this if you look at the HTTP Request in the example of TR-0025 (http://www.onem2m.org/application-developer-guide/implementation/content-instance-retrieve)
GET /~/mn-cse/home_gateway/light_ae1/light/la HTTP/1.1
The receiver of the HTTP request is the IN-CSE. But, as you can see, it targets a ContentInstance on the remote CSE mn-cse.
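A minimal Python sketch of that first possibility (host and originator are placeholder values): the request is addressed to the IN-CSE, but the SP-relative path targets a resource hosted on mn-cse, and the IN-CSE forwards it over Mcc:

import requests

resp = requests.get(
    "http://in.provider.com:8080/~/mn-cse/home_gateway/light_ae1/light/la",
    headers={"X-M2M-Origin": "C-myAE", "X-M2M-RI": "req-0003", "Accept": "application/json"},
)
print(resp.json())   # latest ContentInstance of the 'light' Container, answered via the IN-CSE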
If you really need to access the remote CSE directly, for example for performance reasons, then your "IN-AE" can retrieve the pointOfAccess attribute and directly access the remote CSE "mn-cse". In that case the "IN-AE" actually becomes an AE of the remote CSE "mn-cse" and needs to know how to connect to it.
I am planning to make a browser extension which uses the YouTube Data API v3. Since the code is public to the user, I am unable to use my API key in the code. What is the correct way to use the API in such a scenario? Also, since the API call will be made from the user's browser, is there any other way to fetch data without using an API key at all?
TL;DR
On the API screen of Google Cloud Console, create a new key or edit an existing one to have no restriction. This will enable anyone to use this key to make requests the moment you publish it. There is no way to use the YouTube API without a key (or token respectively, when using OAuth). Your clients are allowed to consume up to 50,000,000 quota units per day, after which your app will essentially break for the rest of the day unless you buy more quota.
However, I have to disagree with the statement that you cannot (or "shouldn't") publish your API key; in certain scenarios, this may very well be desired.
Detailed Explanation
Web application keys used to be organized in two groups: server keys and browser keys, the former of which were to be kept secret on the server of the web application, while the latter were sent to the client for use in JavaScript. Server keys could be configured to only be accepted from certain IP addresses. That way, even if someone got hold of your key, they wouldn't be able to use it. Browser keys could be restricted to a specified referrer, i.e. the domain (as in DNS) of your web application, so they wouldn't work on other sites besides your own either.
Nowadays, there is no distinction between server and browser keys anymore, they are simply called "API keys". This union makes perfect sense to me, since the only difference between the two types was how they were restricted. With the new API keys, one can still choose how to restrict its usage - or choose to not restrict the key at all.
This is where we get back to your case: It is, of course, possible to publish a key and at the same time not restrict it. Depending on how many users are using your app (and will be using it in the future) and how many are using your key for their own app (which you have no control over), the 50 million quota limit will work out for you or it will not.
And then there's responsibility as well. You are responsible for the queries that are made with your API key. This is probably one of the reasons why YouTube doesn't allow requests without a valid key: they need to stay in control of their service and, naturally, want to protect it from DoS attacks. If someone does mischief with your key, you are the one who gets punished for it, usually by deactivation of the key.
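To see why the key inevitably ends up public in a purely client-side app, here is a minimal sketch of the kind of request the extension would make (Python just for illustration; the endpoint is the YouTube Data API v3 search method, and the key travels as a plain query parameter either way):

import requests

API_KEY = "YOUR_PUBLISHED_KEY"   # visible to anyone who inspects the extension or its traffic

resp = requests.get(
    "https://www.googleapis.com/youtube/v3/search",
    params={"part": "snippet", "q": "cat videos", "maxResults": 5, "key": API_KEY},
)
for item in resp.json().get("items", []):
    print(item["snippet"]["title"])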
I am just reading up on OData from here.
http://msopentech.com/odataorg/introduction/
Sorry, I am getting a bit impatient.
I just have a simple question for now before I go through the rest of the material. Which of the two options describe OData?
I understand it provides a protocol (much like SOAP or XML/JSON over HTTP, or XML-RPC) to transfer data from services over the web to clients. What I am intrigued by is that it also helps query that data, which is a great problem to solve, as it helps reduce the payloads that you usually encounter when querying large data sets with XML/SOAP web services or other means (XML over HTTP, JSON over HTTP, RPC responses, you name it).
Option A
Does OData get all the data to the client, use some client-side storage (like HTML5 local storage for desktop browsers) to store it, and then query the data on the client using an in-process API?
Or
Option B
Does it provide an XML-based syntax for translating LINQ-like expressions and getting only the relevant result sets (filtered, ordered, whatever else) from the server?
It's funny how when you type your thoughts, you end up solving your own problems. I think just typing the question has given me the answer. Option A sounds preposterous for so many reasons:
1) If it's a data-centric protocol, it must not care about what type of client or consumer will want the data, so it cannot have any affinity to the client or to the capabilities (e.g. client-side caching) of the client.
2) It is a data-centric protocol and hence does not prescribe how data must be read or offer any tools on the client or server sides. It merely prescribes a data format, I would imagine.
It has to be Option B. Still, I just want a confirmation or correction.
Yes, it is Option B.
You could obviously write a terrible implementation of a client that would download ALL the data and then filter and show data based on client-side logic. But that would be rather silly.
The way you "write" your queries is quite well detailed in OData.org's "URL Conventions" page, typically something along the lines of: http://someserver/odata.svc/Customers(Location eq 'New York')
Hi, I have googled all day long but I can't find an answer.
I have to write a web app which talks to Asterisk.
It should be able to do click-to-call operations.
Can you guys recommend something?
I came across a few projects but I'm still not sure.
I just want to connect to Asterisk and make calls from the web app.
Thanks
If you're a Ruby programmer, the best way for you to hook into Asterisk is Adhearsion. It wraps up Asterisk's AGI and Manager (AMI) APIs for you.
Also have a look at "SIP, asterisk, adhearson and VoIP" and in particular Adam Kalsey's answer. He works for Tropo, which sponsors the Adhearsion project.
First you need to know that the protocol Asterisk uses is SIP; you can learn more at Wikipedia.
Since you want to use a Rails application, you may want to use Ruby as well; there's a Ruby implementation named OverSIP, so you can check their API and see if it fits your requirements.
If you are aiming at web calls, you'll need WebRTC, Flash or a Java applet. For WebRTC you can check sipML5 for an open-source solution.
You can also opt for an interface that starts a call from one number to another using your phone: when the first call is picked up, the server starts ringing the destination.
Also, you could make use of cloud communications providers like Twilio, Tropo, etc.
Try this Google search:
rails asterisk manager interface
I saw some interesting things right off. I am not trying to be one of those "use Google" type people, I just didn't want to paste in all the links that I found from this Google search.
Check it out, hope it helps.
There are several ways to do this, but the three easiest ones are:
1. Generate a call file on the Asterisk server
These files should be written to the dir
/var/spool/asterisk/outgoing
Asterisk will then pick up the file, process it and delete it.
It's pretty aggressive when doing this, so it's recommended to write the file into a temporary directory and then move it to the spool dir for processing.
A tutorial on the file format is here:
https://www.voip-info.org/asterisk-auto-dial-out/
(I personally feel this is a bit "hacky", and prefer doing it with an API call)
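Still, for completeness, here is a minimal sketch of the call-file approach (channel, context, extension and caller ID are placeholders for your own dialplan, and the temp directory is an assumption; run it as the asterisk user so Asterisk can read and delete the file):

import os
import tempfile

CALL_FILE = """Channel: SIP/1000
Context: outgoing
Extension: 0123456789
Priority: 1
CallerID: Click2Call <1000>
MaxRetries: 1
RetryTime: 60
WaitTime: 30
"""

SPOOL_DIR = "/var/spool/asterisk/outgoing"
TMP_DIR = "/var/spool/asterisk/tmp"   # assumed to exist; any dir on the same filesystem works

# Write the file somewhere Asterisk does not watch, then rename it into the
# spool dir so Asterisk never picks up a half-written file.
fd, tmp_path = tempfile.mkstemp(dir=TMP_DIR, suffix=".call")
with os.fdopen(fd, "w") as f:
    f.write(CALL_FILE)
os.rename(tmp_path, os.path.join(SPOOL_DIR, "click2call.call"))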
2. Generate the call by the AMI API interface.
Use the Originate action of the AMI to generate the call. It's pretty easy to set this up: just configure the manager.conf file, which sets up the AMI service on TCP port 5038, from which you can call the API.
https://www.voip-info.org/asterisk-config-managerconf/
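For reference, a minimal sketch of the Originate action over the plain AMI TCP socket (the username/secret must match a section you define in manager.conf; host, credentials, channel and extension here are placeholders):

import socket

def ami_send(sock, action):
    # An AMI action is a block of "Key: Value" lines terminated by an empty line
    sock.sendall(("".join(f"{k}: {v}\r\n" for k, v in action.items()) + "\r\n").encode())

with socket.create_connection(("asterisk.example.com", 5038)) as s:
    ami_send(s, {"Action": "Login", "Username": "click2call", "Secret": "my_secret"})
    ami_send(s, {
        "Action": "Originate",
        "Channel": "SIP/1000",         # the agent's phone rings first
        "Context": "outgoing",         # when answered, the call enters the dialplan here...
        "Exten": "0123456789",         # ...and dials the destination
        "Priority": "1",
        "CallerID": "Click2Call <1000>",
        "Timeout": "30000",
        "Async": "true",
    })
    print(s.recv(4096).decode())       # banner and responses (simplified; a real client parses events)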
3. Set up the call using the ARI API
First you need to set up ari.conf; this is enough for now:
[general]
enabled = yes
pretty=yes
allowed_origins=http://ari.asterisk.org
[my_username]
type = user
read_only = no
password = my_password
password_format = plain
This is a little bit more complicated to set up, but it really isn't that hard if you just get past the technical geek-speak. Just set up two channels, set up a mixing bridge and add both channels to the bridge.
To set up a click2call you don't even need to do that...
This is the call we use (Ruby):
HTTParty.post("http://sipserver.com/ari/channels?endpoint=SIP/#{sip_id}&extension=#{number}&context=outgoing&priority=1&timeout=30&api_key=#{USERNAME}:#{PASSWORD}")
where
#{sip_id} is your registered SIP username,
#{number} is the extension that is sent to the dialplan, and
#{USERNAME}:#{PASSWORD} are the username and password from ari.conf.
(Note that if you need to send channel variables, they have to go in the "variables" parameter as a separate JSON body of the originate command.)
A really useful tool for understanding how this works is the Swagger UI at
http://ari.asterisk.org. We already allowed this origin in ari.conf, so it should be ready to go. Remember to open your ports in firewalls etc.
Set up your server IP and port; the API_KEY is in this format: my_username:my_password
I'm interested in creating a challenge / response type process in Delphi. The scenario is this...we have 2 computers...1 belongs to the user and 1 belongs to a support technician.
The user is locked out of a certain program, and in order to gain 1 time access, I want:
The user to be presented with a challenge phrase, such as "28394LDJA9281DHQ" or some type of reasonably unique value
The user will call support staff and read this challenge (after the support staff has validated their identity)
The support person will type this challenge value into a program on their system, which will generate a response that is equally as unique as the challenge, such as "9232KLSDF92SD"
The user types in the response and the program determines whether or not this is a valid response.
If it is, the user is granted 1 time access to the application.
Now, my question is how to do this. I will have 2 applications that will not have networked access to one another. Is there any functionality within Windows that can help me with this task?
I believe that I can use some functionality within CryptoAPI, but I really am not certain where to begin. I'd appreciate any help you could offer.
I would implement an MD5-based challenge-response authentication.
From Wikipedia (http://en.wikipedia.org/wiki/CRAM-MD5):
Protocol
Challenge: In CRAM-MD5 authentication, the server first sends a challenge string to the client.
Response: The client responds with a username followed by a space character and then a 16-byte digest in hexadecimal notation. The digest is the output of HMAC-MD5 with the user's password as the secret key, and the server's original challenge as the message.
Comparison: The server uses the same method to compute the expected response. If the given response and the expected response match, then authentication was successful.
This provides three important types of security.
First, others cannot duplicate the hash without knowing the password. This provides authentication.
Second, others cannot replay the hash: it is dependent on the unpredictable challenge. This is variously called freshness or replay prevention.
Third, observers do not learn the password. This is called secrecy.
The two important features of this protocol that provide these three security benefits are the one-way hash and the fresh random challenge.
Additionally, you may add some application-identification into the challenge string, for a double check on the sender of the challenge.
Important: it has some weaknesses; evaluate carefully how they may affect you.
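To make the flow concrete, here is a minimal sketch of such a challenge/response pair (the shared secret and the challenge format are placeholders; in your scenario the "transport" is simply the user reading the strings over the phone):

import hmac
import hashlib
import secrets

SHARED_SECRET = b"support-tool-secret"       # known only to the locked program and the support tool

def make_challenge() -> str:
    # a reasonably unique string to show the user, e.g. "28394LDJA9281DHQ"-style
    return secrets.token_hex(8).upper()

def make_response(challenge: str) -> str:
    # HMAC-MD5 over the challenge, keyed with the shared secret (as in CRAM-MD5)
    return hmac.new(SHARED_SECRET, challenge.encode(), hashlib.md5).hexdigest().upper()

def verify(challenge: str, response: str) -> bool:
    return hmac.compare_digest(make_response(challenge), response.strip().upper())

challenge = make_challenge()                  # shown by the locked-out application
print("Read this to support:", challenge)

answer = make_response(challenge)             # computed by the support technician's program
print("Support reads back:", answer)

print("Access granted:", verify(challenge, answer))   # checked by the application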
Regarding the verbal challenge/response strategy: we used this approach to license a niche application on five thousand workstations world-wide for more than ten years. Our support team called it the "Missile Launch Codes" because of its similarity to the classic missile launch authentication process seen in old movies.
This is an extremely time-consuming way to protect your program. It consumed enormous amounts of our staff's and customers' time reading the codes to and from users. They all hated it.
Your situation/context may be different. Perhaps you won't be using it nearly as frequently as we did. But here are some suggestions:
Carefully consider the length and contents of the code: most users (and support staff) resent typing lots of characters, and many users are bad typists. Consider whether a long string, punctuation marks and case sensitivity unduly burden them compared to the amount of security added.
After years of using a verbal challenge/response implementation, we left it in place (as a fall-back) but added a simple automated system. We chose to use FTP rather than a more sophisticated web approach so that we didn't have to have any software running on our in-house server (or deal with our IT staff!)
Basically, we use FTP files to do the exchange that was previously done over the phone. A file containing the challenge phrase is placed on the FTP server; the file's name is the customer's name. Our support staff have a program that automatically creates this file on our FTP site.
The customer is instructed by our staff to hit a hot key that reads the FTP file, authenticates it, and places a response file back on the server.
Our support staff's software has been polling, waiting for the customer's software to create the response file. When it sees the file, it downloads it, confirms its contents, and deletes it from the server.
You can of course have this exchange happen as many times and in either direction as you need in a given session in order to accomplish your goals.
The data in the files can have the same MD5 keys that you would use verbally, so that it is as secure as you'd like.
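For what it's worth, the customer-side half of that exchange can be as small as this sketch (host, credentials and file-name convention are placeholders, and compute_response() is a hypothetical helper standing in for whatever digest you use, e.g. the HMAC shown earlier):

from ftplib import FTP
import io

CUSTOMER = "AcmeCorp"                                     # file names are the customer's name

with FTP("ftp.example.com") as ftp:                       # hypothetical FTP host
    ftp.login("licensing", "secret")                      # hypothetical credentials

    # download the challenge file that the support tool placed on the server
    buf = io.BytesIO()
    ftp.retrbinary("RETR " + CUSTOMER + ".challenge", buf.write)
    challenge = buf.getvalue().decode().strip()

    response = compute_response(challenge)                # hypothetical helper: same digest as the verbal scheme

    # upload the response file for the support tool's polling loop to pick up
    ftp.storbinary("STOR " + CUSTOMER + ".response", io.BytesIO(response.encode()))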
A weakness in this system is that the user has to have FTP access. We've found that the majority of our users (all businesses) have FTP access available. (Of course, your customer base may not...) If our application in the field is unable to access our FTP site, it clearly announces the problem so that our customer can go to their IT staff to request that they open the access. Meanwhile, we just fall back to the verbal codes.
We used the plain vanilla Indy FTP tools with no problem.
No doubt there are some weaknesses in this approach (probably including some that we haven't thought of.) But, for our needs, it has been fantastic. Our support staff and customers love it.
Sorry if none of this is relevant to you. Hope this helps you some.