Communication with resources on a remote CSE - OneM2M - iot

We are trying to implement the oneM2M standard and have a question regarding the communication process between a remote CSE (MN-CSE) and the IN-CSE. Below I have written down, step by step, what I understood from the documentation. Some of the issues are not entirely clear to us, so before doing any implementation I need to make sure everything is crystal clear.
I am going to ask the question first, before explaining everything we understand from the documentation, and then write out, step by step, the solution we have in mind. The question is: when an IN-AE sends a request that targets the MN-CSE, should the IN-CSE forward that request to the MN-CSE, or should it handle the request itself?
Before anything else, we have two completely separate CSEs: one is the IN-CSE, the other is the MN-CSE, roughly as shown below.
IN-CSE has a resource tree
/in-cse61
/in-cse61/csr-34
/in-cse61/ae-1234
MN-CSE has a resource tree
/mn-cse34
/mn-cse34/csr-61
/mn-cse34/ae-123456
/mn-cse34/cnt-1
/mn-cse34/cin-01
/mn-cse34/cin-02
/mn-cse34/cin-03
/mn-cse34/cnt-2
We skipped any security concerns for now. Let's say the IN-AE wants to communicate with the MN-CSE, as described in the question above.
1- The IN-AE should send a discovery or retrieve request to the IN-CSE asking for all child resources of type remoteCSE.
2- What is the exact difference between sending a discovery request and sending a retrieve request? We thought that a discovery request returns only resource URIs, while a retrieve request returns the full data of the targeted resource. Is this understanding correct?
3- After getting all the remoteCSEs, I now know their IDs. Then I can send a discovery request to the MN-CSE to get the AEs in it. We see two options:
a. ~/in-cse61/csr-34?fu=1&rty=2
b. ~/mn-cse34?fu=1&rty=2
Option a: If the IN-AE only wants to make a discovery request against the IN-CSE's resource tree, the IN-CSE should take care of it without forwarding it to the MN-CSE. The IN-CSE already knows that /in-cse61/csr-34 is a valid remoteCSE resource, but since the request path starts with ~/in-cse61, the request should be handled by the IN-CSE itself.
Option b: If the IN-AE wants to make a discovery request against the MN-CSE's resource tree, the IN-CSE can tell that the request relates to a remoteCSE by looking at the /mn-cse34 part of the request path, because it does not start with the IN-CSE's resource ID.
So the IN-AE (e.g. a smartphone) somehow has to decide which CSE should handle the request? Is there anything wrong in our reasoning?
---------------------EDITED--------------------------------------
I have inspected the architecture described in the Application Developer Guide TR-0025: http://www.onem2m.org/application-developer-guide/architecture
According to this sample, a smartphone (IN-AE) can control Light#1 (ADN-AE-1) through the IN-CSE.
After the registration and initial resource creation processes are completed, the system is ready to discover and then control the lights.
GET /~/mn-cse/home_gateway?fu=1&rty=3&drt=2 HTTP/1.1
Host: in.provider.com:8080
Although the Middle Node CSE-ID and the Middle Node CSEBase name are used in the HTTP request URL, the host addressed is the IN-CSE. This means the discovery request sent from the IN-AE is handled by the IN-CSE first, which then forwards it to mn-cse. However, you told me the opposite by saying "The retrieval or discovery normally is only limited to the resources of the hosting CSE, and does not traverse to the remote CSEs automatically.".
In TR-0025 the given example is presented as a common scenario.
And also in TR-0034 the request is actually forwarded, as you can see in the diagram.

There are many points in your question that need to be addressed.
First of all, there is no special entity in oneM2M named "IN-AE". This is just the name used for the AE that connects to the IN-CSE in oneM2M's TR-0025 : Light control using HTTP binding developer guide. An Application Entity can actually connect to an IN-CSE or an MN-CSE via the same reference point (Mca), though there might be AEs that are specifically designed to work with one particular CSE.
Regarding your point 2, the difference between a retrieve and a discovery request:
The retrieve request targets a resource in order to retrieve it. For example, a retrieve request sent to the Container resource /mn-cse34/cnt-1 (from your example) would return the Container resource itself and its attributes.
A discovery request is also targeted at a resource, and technically it looks very much like a normal retrieve request, but in addition you provide filter criteria and a discovery result type. For example, a discovery request sent to the same Container resource /mn-cse34/cnt-1 might return the references to all ContentInstances under that Container. Depending on the filter criteria and the result type, you can get either the full resources or only references to them.
Please have a look at oneM2M's specification TS-0001 Functional Architecture, sections 10.2.6 Discovery and 8.1.2 Request for a full explanation and the list of possible parameters for the discovery request.
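To make the difference concrete, here is a minimal sketch of both request types over the HTTP binding, using Python's requests library. The host, originator, request identifiers and the exact addressing style are placeholders based on the examples in this thread, not values mandated by the specification:

import requests

CSE_HOST = "http://in.provider.com:8080"   # hypothetical address of the hosting CSE
HEADERS = {
    "X-M2M-Origin": "C-myAE",              # originator (AE-ID), placeholder value
    "X-M2M-RI": "retrieve-0001",           # request identifier
    "Accept": "application/json",
}

# Plain retrieve: returns the Container resource itself with its attributes.
container = requests.get(f"{CSE_HOST}/~/mn-cse34/cnt-1", headers=HEADERS)

# Discovery: same target, but with filter usage (fu=1) and a resource type filter
# (rty=4, i.e. contentInstance); returns references to the matching ContentInstances.
discovery = requests.get(f"{CSE_HOST}/~/mn-cse34/cnt-1",
                         params={"fu": 1, "rty": 4},
                         headers={**HEADERS, "X-M2M-RI": "discovery-0001"})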
Regarding points 1 and 3 of your question: I don't know what problem your AE is trying to solve, but it should have a notion of the data structure built in. It is a good idea to organise the data in a structured and uniform way, e.g. by using Containers, FlexContainers, Groups etc. This way the application doesn't need to browse the whole resource tree of a CSE, which could become really big over time. Of course, it might be a general-purpose application that needs to traverse a larger, previously unknown structure. In that case the application can use a discovery request to get the relevant resources. Please note that you can also do discovery over the meta-data of resources, e.g. labels, date and time etc. This can help to reduce the result set.
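For example, a discovery restricted by a label keeps the result set small. A short sketch (the host, originator and label value are made up; lbl is the labels filter criterion):

import requests

# Discovery over meta-data: only return resources carrying a specific label.
labelled = requests.get(
    "http://in.provider.com:8080/~/in-cse61",      # hypothetical hosting CSE
    params={"fu": 1, "lbl": "room:kitchen"},       # filter usage + labels criterion
    headers={"X-M2M-Origin": "C-myAE", "X-M2M-RI": "disc-lbl-0001",
             "Accept": "application/json"},
)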
Retrieval or discovery is normally limited to the resources of the hosting CSE and does not traverse to the remote CSEs automatically. An exception is announced resources. Those resources are announced to a remote CSE, where they get a kind of "shadow" counterpart, and they provide your application with some information about the state of the original resources as well as how to retrieve them (via a link attribute). But if you really want to access a remote CSE and your application has permission to do so, the pointOfAccess attribute provides you with the address of the remote CSE.
But as said before, in general your application (AE) is connected to a single CSE. On that CSE all the resources of the AE, or the resources the AE has access to, are hosted. Also keep in mind that the AE needs permission (via an AccessControlPolicy) on the CSE to access a resource.
Update
Perhaps I need to elaborate a bit more on how to work with a remote CSE. Ignoring announced resources for now, there are two possibilities for your "IN-AE" to access a resource on the remote CSE:
You can send requests such as retrieve, update etc. that target resources on the remote CSE to the IN-CSE. These requests are then forwarded to the real "mn-cse" instance over the Mcc connection between the IN-CSE and the MN-CSE. This has the advantage that the "IN-AE" doesn't need to care about how to connect to the MN-CSE "mn-cse" directly (e.g. there might be firewalls etc. in place to protect the MN-CSE).
You can see this in the HTTP request from the example in TR-0025 (http://www.onem2m.org/application-developer-guide/implementation/content-instance-retrieve):
GET /~/mn-cse/home_gateway/light_ae1/light/la HTTP/1.1
The receiver of this HTTP request is the IN-CSE but, as you can see, it targets a ContentInstance on the remote CSE mn-cse.
If you really need to access the remote CSE directly, for example for performance reasons, then your "IN-AE" can retrieve the pointOfAccess attribute and access the remote CSE "mn-cse" directly. In that case the "IN-AE" actually becomes an AE of the remote CSE "mn-cse" and needs to know how to connect to it.
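A short sketch of this second possibility, assuming JSON serialization, the resource short names m2m:csr and poa, and placeholder addresses; the exact addressing depends on the deployment:

import requests

HEADERS = {"X-M2M-Origin": "C-myAE", "X-M2M-RI": "poa-0001",
           "Accept": "application/json"}

# 1. Read the <remoteCSE> resource registered at the IN-CSE ...
csr = requests.get("http://in.provider.com:8080/~/in-cse61/csr-34",
                   headers=HEADERS).json()

# 2. ... extract its pointOfAccess attribute (short name "poa", a list of addresses) ...
poa = csr["m2m:csr"]["poa"][0]   # e.g. "http://mn.provider.com:8282"

# 3. ... and talk to the MN-CSE directly; the AE now acts as an AE of that CSE.
cnt = requests.get(f"{poa}/~/mn-cse34/cnt-1", headers=HEADERS)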

Related

I need some kind of decision module in flowground

I'm trying to send different message cards to multiple Teams channels.
I have already created a webhook (telekom/webhook) for this, which gives me the right variables via JSON.
There are four department receiver channels (telekom/rest-api-component), which are also configured to send pre-formatted Teams message cards with the variables submitted to them.
Currently this happens for all channels at the same time. In between, I would need an "action" in which I can decide which of the channels is served based on the input values. Unfortunately I can't find anything suitable among the variety of APIs. Do you know how I could realize this? Something like: if value department = Backoffice, then the (Teams "Account Management") action.
In order to talk to the different applications from Office 365 I wanted to use the Microsoft Graph API, which has been available for some time now. I couldn't find it in Flowground. Are you planning to include this module?
For the implementation of Office 365 flows this would be absolutely necessary for me.
I want to come back to this question: the CBR is indeed a good choice for executing decisions. But is it the best solution in every situation? I do not think so.
Assume the following task:
Depending on an input parameter test, you want to fire a request to one of two different web services (WS1: google.de and WS2: bing.de).
Solution 1: You realize the requests with dedicated connectors for WS1 and WS2.
In this case you need the CBR in front of the WS1 connector and the WS2 connector to decide which connector has to be used next.
Solution 2: You are able to realize both requests with the REST-API connector. In this case you can use a JSONATA expression as the URL mapping, e.g.
(test = "google") ? "http://google.de" : "http://bing.de"
By using JSONATA expressions, every connector has a (limited) capability for executing decisions.
Solution 2 has a big advantage when you are using realtime flows: you reduce the number of connectors needed to run the flow and (very important from a cost perspective) the number of tokens permanently claimed by the flow.
To reduce the complexity of the JSONATA expressions (e.g. when you add further search engines) and to keep individual configuration items separate, you can use the configuration connector (we can discuss this in a separate thread if needed).
Solution 1 is the only choice when you have to decide between different structures/connectors that need to be executed within a flow.
Please try the Content-Based-Router: https://doc.flowground.net/guides/content-based-router.html, it is available on the Connector Catalog.

Controlling IIS BITS uploads

I'm running an IIS web site (built using ASP.NET/MVC) that, among other things, collects files from multiple agents that anonymously upload them via BITS.
I need to make sure that only files uploaded from known sources, and matching certain predefined file name patterns, are accepted by IIS. All other BITS upload attempts must be cancelled.
As I understand it, BITS uses an ad hoc protocol over HTTP 1.1 with a "BITS_POST" verb. So, ideally, I'd like to hook into IIS, analyze a BITS_POST request and, if it does not satisfy my pre-conditions, drop the request.
I've tried to create and register a filter implementing IActionFilter.OnActionExecuting, but it seems that my filter does not receive BITS_POST requests.
I'd be glad to hear whether somebody has implemented a similar BITS-related solution and how it was done. Other ideas are welcome too.
Regards,
Natan
I have never worked with BITS; frankly, I don't know what it is.
What I usually do in such situations is implement an HTTP module. In its BeginRequest event you can inspect the incoming HTTP request data and decide to stop processing the request if the data does not comply with your requirements. You have full access to the HttpContext.Current.Request object from the HTTP module code.
With HTTP modules, your .NET code runs early in the request pipeline, before the request reaches your MVC actions.
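This answer is .NET-specific; purely as an illustration of the same early-rejection idea in another stack, here is a minimal WSGI middleware sketch in Python. The verb allow-list and the file name pattern are hypothetical, not taken from the question:

import re

ALLOWED_VERBS = {"BITS_POST"}                       # hypothetical: only filter BITS uploads
FILENAME_PATTERN = re.compile(r"^agent_\d+\.dat$")  # hypothetical file name rule

class UploadFilterMiddleware:
    """Reject non-conforming upload requests before they reach the application."""
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        verb = environ.get("REQUEST_METHOD", "")
        filename = environ.get("PATH_INFO", "").rsplit("/", 1)[-1]
        if verb in ALLOWED_VERBS and not FILENAME_PATTERN.match(filename):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Upload rejected"]
        return self.app(environ, start_response)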

HTTPS POST Security level

I've searched for this a bit on Stack Overflow, but I cannot find a definitive answer for HTTPS, only solutions that somehow involve HTTP or unencrypted parameters, which are not present in my situation.
I have developed an iOS application that communicates with MySQL via Apache HTTPS POSTs and PHP.
Now, the server runs with a valid certificate, is only open for traffic on port 443, and all POSTs are done to https://thedomain.net/obscurefolder/obscurefile.php
If someone knew the correct parameters to post, anyone from anywhere in the world could mess up the database completely, so the question is: is this method secure? Let it be known that nobody has access to the source code and none of the iPads that run this software are jailbroken or otherwise compromised.
Edit in response to answers:
There are several PHP files which each support only one specific operation and depend on very strict input formatting and a correct license key (retrieved via SQL on every query). They do not respond to input at all unless it is 100% correct and includes a proper license (e.g. password). There is no actual website, only PHP files that respond to POSTs, given the correct input, as mentioned above. The webserver has been scanned by a third-party security company and contains no known vulnerabilities.
Encryption is necessary but not sufficient for security. There are many other considerations beyond encrypting the connection. With server-side certificates you can confirm the identity of the server, but you can't (as you are discovering) confirm the identity of the clients (at least not without client-side certificates, which are very difficult to protect by virtue of them being on the client).
It sounds like you need to take additional measures to prevent abuse (a minimal sketch follows the list below), such as:
Only supporting a sane, limited, well-defined set of operations on the database (not passing arbitrary SQL input to your database but instead having a clear, small list of URL handlers that perform specific, reasonable operations on the database).
Validating that the inputs to your handler are reasonable and within allowable parameters.
Authenticating client applications to the best you are able (e.g. with client IDs or other tokens) to restrict the capabilities on a per-client basis and detect anomalous usage patterns for a given client.
Authenticating users to ensure that only authorized users can make the appropriate modifications.
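The server side here is PHP, but as a language-neutral illustration of the validation and client-token points, here is a minimal Python sketch; the token store, parameter name and format rule are hypothetical:

import hmac
import re

# Hypothetical per-client tokens issued out of band; store these securely in practice.
CLIENT_TOKENS = {"ipad-001": "s3cr3t-token-1", "ipad-002": "s3cr3t-token-2"}
AMOUNT_RE = re.compile(r"^\d{1,6}(\.\d{2})?$")   # hypothetical allowed input format

def is_request_allowed(client_id: str, token: str, amount: str) -> bool:
    expected = CLIENT_TOKENS.get(client_id)
    if expected is None:
        return False                                            # unknown client
    if not hmac.compare_digest(token.encode(), expected.encode()):
        return False                                            # wrong token (constant-time compare)
    return bool(AMOUNT_RE.match(amount))                        # validate the input parameter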
You should also probably get a security expert to review your code and/or hire someone to perform penetration testing on your website to see what vulnerabilities they can uncover.
Sending POST requests alone is not a secure way of communicating with a server. Even without access to the code or to valid devices, it still leaves an open way to access the database and manipulate it once the link is discovered.
I would not suggest relying on POST alone. You can try other communication approaches if you want to send or fetch data from the server. Encrypting the parameters can also be helpful here, though it would increase the code a bit due to the encryption/decryption logic.
It's good that your app goes through HTTPS. Make sure the app checks the server certificates during its communication phase.
You can also make use of tokens (not device tokens) during transactions. This is a bit more complex, but offers more safety.
The possible solutions and approaches here are broad, and every one of them cannot be covered. You might want to try out a few yourself to get an idea, though I suggest going for some encryption/decryption at a basic level.
Hope this helps.

Glimpse Security

I am convinced that I want to use Glimpse for my project, but I would like to learn a bit more about the security model.
From what I can tell, when you turn Glimpse on, it simply writes a set of cookies to the client. When Glimpse receives these cookies, Glimpse begins to record information for the request and then sends it to the client.
It seems like I could just set those cookies for a site I know uses Glimpse, and I would then be able to see its information.
I highly doubt this is how it works, so I would like to know what features are in place to prevent exposing server information.
Glimpse uses a collection of configurable Runtime Policies (http://getglimpse.com/Help/Custom-Runtime-Policy) that dictate how Glimpse responds to any given HTTP request.
Glimpse already ships with some Runtime Policies out of the box that filter requests based on content types, HTTP status codes, remote or local access, URIs, and so on.
You can also build your own by implementing IRuntimePolicy and checking, for instance, whether a user is authenticated and a member of a specific group, and based on that allow Glimpse to gather and return data or not. Such an example can be found at the link above.

Client Server API pattern in REST (unreliable network use case)

Let's assume we have a client/server interaction happening over an unreliable network (packet drop). The client is calling the server's RESTful API (HTTP over TCP):
issuing a POST to http://server.com/products
server is creating an object of "product" resource (persists it to a database, etc)
server is returning 201 Created with a Location header of "http://server.com/products/12345"
! The TCP packet containing the HTTP response gets dropped, and eventually this leads to a TCP connection reset
I see the following problem: the client will never get the ID of the newly created resource, yet the server will have created the resource.
Questions: Is this application-level behavior or should the framework take care of it? How should a web framework (Rails in particular) handle a situation like that? Are there any articles/whitepapers on REST covering this topic?
The client will receive an error when the server does not respond to the POST. The client would then normally re-issue the request, on the assumption that it failed. Off the top of my head I can think of two approaches to this problem.
One is that the client can generate some kind of request identifier, such as a GUID, which it includes in the request. If the server receives a POST request with a duplicate GUID, it can refuse it.
The other approach is to use PUT instead of POST for creation. If you cannot get the client to generate the URI, then you can ask the server to provide a new URI with a GET and then do a PUT to that URI.
If you search for something like "make POST idempotent" you will probably find a bunch of other suggestions on how to do this.
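A minimal client-side sketch of the first approach; the endpoint is the one from the question, while the Idempotency-Key header name and the retry policy are illustrative assumptions, not part of any standard cited here:

import uuid
import requests

def create_product(payload, retries=3):
    # One key per logical creation, reused across retries so the server
    # can detect and deduplicate repeated attempts.
    key = str(uuid.uuid4())
    headers = {"Idempotency-Key": key}
    for attempt in range(retries):
        try:
            resp = requests.post("http://server.com/products",
                                 json=payload, headers=headers, timeout=5)
            return resp.headers.get("Location")   # URI of the created resource
        except (requests.ConnectionError, requests.Timeout):
            continue   # response lost; retry with the same key
    raise RuntimeError("could not confirm creation")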
If it isn't reasonable for duplicate resources to be created (e.g. products with identical titles, descriptions, etc.), then unique identifiers can be generated on the server which can be tracked against created resources to prevent duplicate requests from being processed. Unlike Darrel's suggestion of generating unique IDs on the client, this would also prevent separate users from creating duplicate resources (which you may or may not find desirable). Clients will be able to distinguish between "created" responses and "duplicate" responses by their response codes (201 and 303 respectively, in my example below).
Pseudocode for generating such an identifier (sketched here as runnable Python), in this case a hash of a canonical representation of the request:
import hashlib

# http303, http201 and create_product_from are placeholder helpers, as in the answer.
def product_post(request, products):
    # The canonical representation need not contain every field in
    # the request, just those which contribute to its "identity".
    tags = ",".join(sorted(request.tags))
    canonical = "|".join([request.name, request.maker, tags, request.desc])
    ident = hashlib.sha256(canonical.encode()).hexdigest()

    if ident in products:
        # Duplicate request: point the client at the already-created resource.
        return http303(products[ident])
    else:
        products[ident] = create_product_from(request)
        return http201(products[ident])
This ID may or may not be part of the created resources' URIs. Personally, I'd be inclined to track them separately — at the cost of an extra lookup table — if the URIs were going to be exposed to users, as hashes tend to be ugly and difficult for humans to remember.
In many cases, it also makes sense to "expire" these unique hashes after some time. For example, if you were to make a money transfer API, a user transferring the same amount of money to the same person a few minutes apart probably indicates that the client never received the "success" response. If a user transfers the same amount of money to the same person once a month, on the other hand, they're probably paying their rent. ;-)
The problem as you describe it boils down to avoiding so-called double-adds. As mentioned by others, you need to make your POSTs idempotent.
This can be implemented at the framework level. The framework can keep a cache of completed responses. Each request has to carry a unique request identifier so that any retries are treated as retries, and not as new requests.
If the successful response gets lost on its way to the client, the client will retry with the same request identifier, and the server will then respond with its cached response.
You are left with the durability of the cache, how long to keep responses, etc. One approach is to remove responses from the server cache after a given period of time; this will depend on your application domain and traffic, and can be left as a configurable step in the framework piece. Another approach is to force the client to send acknowledgements. The acks can be sent either as separate requests (note that these could be lost too) or as extra data piggy-backed on real requests.
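A minimal server-side sketch of such a response cache, assuming an in-memory store, a time-based expiry as described above, and a request identifier supplied by the client (the names are illustrative):

import time

RESPONSE_CACHE = {}        # request id -> (timestamp, response)
CACHE_TTL_SECONDS = 3600   # hypothetical retention period

def handle_with_idempotency(request_id, handler, request):
    now = time.time()
    # Drop expired entries.
    for rid, (ts, _) in list(RESPONSE_CACHE.items()):
        if now - ts > CACHE_TTL_SECONDS:
            del RESPONSE_CACHE[rid]
    # Replay the cached response for a retried request.
    if request_id in RESPONSE_CACHE:
        return RESPONSE_CACHE[request_id][1]
    response = handler(request)               # perform the real work exactly once
    RESPONSE_CACHE[request_id] = (now, response)
    return response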
Although what I suggest is similar to what others suggest, I strongly encourage you to keep this layer of network resiliency focused on exactly that: dealing with dropped requests and responses. Don't let it also deal with duplicate resources created by separate requests, which is an application-level task. Merging both pieces mixes up the functionality and does not leave you with a clear separation of responsibilities.
It is not an easy problem, but if you keep it clean you can make your app much more resilient to bad networks without introducing too much complexity.
And for some related experiences by others go here.
Good luck.
As the other responders have pointed out, the basic problem here is that the standard HTTP POST method is not idempotent like the other methods. There is an effort underway to establish a standard for an idempotent POST method known as Post-Once-Exactly, or POE.
Now I'm not saying that this is a perfect solution for everybody in the situation you describe, but if it is the case that you are writing both the server and the client, you may be able to leverage some of the ideas from POE. The draft is here: https://datatracker.ietf.org/doc/html/draft-nottingham-http-poe-00
It isn't a perfect solution, which is probably why it hasn't really taken off in the six years since the draft was submitted. Some of the problems, and some clever alternate options are discussed here:
http://tech.groups.yahoo.com/group/rest-discuss/message/7646
HTTP is a stateless, client-initiated request/response protocol: the server can't open a connection to the client on its own; all connections are initiated by the client. So you can't solve such an error on the server side.
The only solution I can think of: if you know which client created the product, you can supply it with the products it created when it pulls that information. If the client never contacts you again, you won't be able to transmit information about the new product.
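A small client-side sketch of that idea, assuming a hypothetical query endpoint that lists the resources previously created by a given client:

import requests

def confirm_creation(client_id, expected_name):
    # Hypothetical endpoint: list products previously created by this client.
    resp = requests.get("http://server.com/products",
                        params={"created_by": client_id}, timeout=5)
    resp.raise_for_status()
    # Check whether the product we tried to create actually exists.
    return any(p.get("name") == expected_name for p in resp.json())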
