SAP SuccessFactors Non-Standard OCN Provider Integration - OData

I'm working on an integration project connecting SAP SuccessFactors with a non-standard OCN provider. I'm using the OData API to push course listings into the SuccessFactors LMS. After APM Sync, the OCN items are visible in SF, but the open content sessions for each item show as
"There are no content network sessions for this item"
Course schedule information is also pushed along with the payload parameters of the OData API:
"schedule": [
{
"startDate": 1572393600,
"endDate": 1572480000,
"active": true,
"duration": "2 days"
}
]
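For reference, startDate and endDate are Unix epoch seconds: 1572393600 is 2019-10-30 00:00 UTC and 1572480000 is 2019-10-31 00:00 UTC. A small helper along these lines could build that fragment (make_schedule is illustrative, not part of the OCN OData API):

from datetime import datetime, timezone

def make_schedule(start_dt, end_dt, duration):
    # Illustrative helper: converts timezone-aware datetimes into the
    # epoch-second fields shown in the payload above.
    return [{
        "startDate": int(start_dt.timestamp()),
        "endDate": int(end_dt.timestamp()),
        "active": True,
        "duration": duration,
    }]

schedule = make_schedule(
    datetime(2019, 10, 30, tzinfo=timezone.utc),
    datetime(2019, 10, 31, tzinfo=timezone.utc),
    "2 days",
)
# -> [{'startDate': 1572393600, 'endDate': 1572480000, 'active': True, 'duration': '2 days'}]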

Open Content Network (OCN) is a group of Massive Open Online Course (MOOC) partners who can provide content to the SAP SuccessFactors LMS. MOOC courses do not have a specific start time and end time, since they are online courses and differ from the instructor-led standard offerings. To convert OCN items into standard offerings, the item type (classification) of these items must be changed to "Instructor-led". After that, these courses can be added to scheduled offerings, and the date/time segments will start appearing.

Related

MQTT topic name design to handle multiple "same" things

In my current IoT project, I will be using multiple ESP8266s (3) to send data and receive actions. Each MCU will be in charge of monitoring a different aquarium around the house. I have thought of structuring my topics like the following:
"Data" topics will follow the same structure, for example to retrieve temperature data:
esp8266/aquarium/aquarium_id/temperature/dht11
"Action topics", the topics the MCU subscribes to receive commands, for example:
aquariumcontroller/aquarium_id/action/water
The topic which the aquarium subscribes to in order to update the MCU's params:
aquariumcontroller/aquarium/aquarium_id/params
The aquariumcontroller is the MQTT client written in Python. This is the entity that will be sending actions and handling received messages. I have two questions: are my topics correctly named and structured to handle multiple aquariums? Also, since I will have a controller, isn't it better to also have a database containing each aquarium's info, like the topics for that specific aquarium and its params, or will I run into problems if I persist topics and then change the aquarium ID?
Thank you
What you mentioned will work.
You can also have a common topic that everyone subscribes to and use JSON format to send data. In the JSON payload you can include the necessary identifiers and actions.
{
    "aquarium_id": "xxx",
    "operation": "temperature_read",
    "value": "24.5"
}
For every aquarium_id you can have an individual topic. The advantage of this scheme is that the aquarium controller will only receive messages that are associated with it; however, there will be added complexity whereby you have to pre-populate aquarium_id on your server.
{
    "operation": "temperature_read",
    "value": "24.5"
}
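To make the per-aquarium variant concrete, here is a minimal paho-mqtt sketch (paho-mqtt 1.x style; the broker host and aquarium ID are assumptions):

import json
import paho.mqtt.client as mqtt

AQUARIUM_ID = "aq1"  # hypothetical ID

client = mqtt.Client(client_id=f"controller-{AQUARIUM_ID}")

def on_message(client, userdata, msg):
    # Payloads carry the operation and value as JSON, as suggested above.
    data = json.loads(msg.payload)
    print(msg.topic, data["operation"], data["value"])

client.on_message = on_message
client.connect("localhost", 1883)

# Per-aquarium topic: this controller only hears about its own aquarium.
client.subscribe(f"aquariumcontroller/{AQUARIUM_ID}/params")

# An ESP8266 would publish readings along these lines:
client.publish(
    f"esp8266/aquarium/{AQUARIUM_ID}/temperature/dht11",
    json.dumps({"operation": "temperature_read", "value": "24.5"}),
)
client.loop_forever()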
You should ideally have a database to store each aquarium_id and its relevant properties (something like a master record). With a database you can also store your readings and actions to get a historical view of your data. You can use Postgres as the DBMS.

Change Eve Python DOMAIN collection schema dynamically during runtime

I am using the Eve Python API Framework for MongoDB. I am writing a feature that allows my users to edit metadata sections for the documents that they are writing.
Example JSON:
{
    "metadata": {
        "document_type": ["story"],
        "keywords": ["fantasy", "thriller"]
    }
}
We have a CMS for the document editor that admins can use to do things like add new metadata fields for the authors (normal users), so they can add more information about their posts. For example, the site admin may want to add a field called "additional_authors" which is a list of strings. If they add this section to the frontend, we would like some way to add it to the Eve schema on the backend in real time, without restarting the server. It is very important that this be real time, not part of a code change in Eve, and not require Eve to restart.
Our current hard-coded metadata schema looks like this for the document collection:
{
    "metadata": {
        "type": "dict",
        "schema": {
            "document_type": {"type": "list", "required": True},
            "keywords": {"type": "list", "required": True}
        }
    }
}
I understand that we can go with a non-strict approach when writing the "metadata" type dict, so that it does not care what is inside. But from my understanding, this means we would not be able to use "projection" properly, meaning that if I only wanted to return "metadata.additional_authors" for all documents through my API call, I would not be able to do so. Also, this would mean we would have to handle the required checks ourselves using hooks, instead of the built-in Eve schema validation.
Is there any way around this issue, essentially by having a dynamic schema based on a MongoDB document in which we can store the entire collection configuration dict and retrieve it without restarting the server for it to take effect? Even if this means adding a hook to the new schema_dict collection and calling some internal Eve function, I am all ears.
Thank you ahead of time for your help.
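No answer is recorded here, but as a hedged sketch of one possible approach: Eve exposes register_resource() for adding endpoints at runtime, so a hook on the collection holding the schemas could rebuild the DOMAIN entry. The resource names below are hypothetical, and whether re-registering an existing resource cleanly refreshes its Cerberus validator should be verified against your Eve version:

from eve import Eve

app = Eve()

def reload_documents_schema(items):
    # 'documents' and 'schema_dict' are hypothetical names for this sketch.
    # Each inserted item is assumed to carry the new metadata sub-schema.
    for item in items:
        settings = app.config['DOMAIN']['documents']
        settings['schema']['metadata']['schema'] = item['metadata_schema']
        # register_resource() is Eve's documented way to add resources at
        # runtime; re-registering an existing one to refresh its validator
        # is an assumption to verify against your Eve version.
        app.register_resource('documents', settings)

# Fire whenever a new schema document lands in the schema_dict collection.
app.on_inserted_schema_dict += reload_documents_schema

if __name__ == '__main__':
    app.run()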

How to list down all aliases of Office365 account using Microsoft Graph API?

How can I list all aliases of an Office365 account using the Microsoft Graph API? Is there a separate permission level I need to grant, or a different parameter for the API https://graph.microsoft.com/v1.0/me/, to get the list of all aliases? Any help is appreciated.
The default alias is stored in the mailNickname property. The full list of assigned aliases/addresses is stored in the proxyAddresses collection. These include additional aliases and domains, so you may need to do a little processing to split each address at @ and dedupe the first element.
To retrieve these you would need to specifically request the properties you want using the $select query parameter:
/v1.0/me?$select=id,userPrincipalName,displayName,mailNickname,proxyAddresses
Using Graph Explorer's demo data, you would see this result:
{
    "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#users(id,userPrincipalName,displayName,mailNickname,proxyAddresses)/$entity",
    "id": "48d31887-5fad-4d73-a9f5-3c356e68a038",
    "displayName": "Megan Bowen",
    "mailNickname": "MeganB",
    "proxyAddresses": [
        "SMTP:MeganB@M365x214355.onmicrosoft.com"
    ],
    "userPrincipalName": "MeganB@M365x214355.onmicrosoft.com"
}
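For completeness, a minimal Python sketch (token acquisition is assumed to happen elsewhere, e.g. via MSAL) that requests these properties and reduces proxyAddresses to bare aliases:

import requests

token = "<access-token>"  # assumed: obtained via your auth flow

resp = requests.get(
    "https://graph.microsoft.com/v1.0/me",
    params={"$select": "id,userPrincipalName,displayName,mailNickname,proxyAddresses"},
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()
user = resp.json()

# Entries look like "SMTP:alias@domain"; the upper-case "SMTP" prefix marks
# the primary address. Strip the prefix, keep the local part, and dedupe.
aliases = sorted({
    addr.split(":", 1)[1].split("@", 1)[0]
    for addr in user.get("proxyAddresses", [])
})
print(aliases)  # e.g. ['MeganB']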

Good way to design MQTT topics?

I am very new to MQTT design.
As I see from some tutorials on the internet, a common MQTT topic has this format: /home/room/device_type/device_id
I cannot see the benefit of doing that, and I have no idea how to use this kind of design.
From my point of view, the device (dev) might subscribe (sub) to a control topic and publish (pub) to a status topic, like this:
pub: clients/dev/devid/stat
sub: clients/dev/devid/ctrl
In this way, the sub/pub logic seems very simple for both clients and devices.
Could someone please suggest some good ways to design MQTT topics?
(!) Please do not start topics with '/' (this has been recommended by the HiveMQ team)
EDIT:
I just figured out that, whatever the design, the model must at least support:
Individual control: send a control command to a particular device.
Group control: send a control command to a group of devices (by type, or by a defined group).
Receiving the status of a device.
Thank you very much
I found that the following topic split scheme works very well in multiple applications
protocol_prefix / src_id / dest_id / message_id / extra_properties
protocol_prefix is used to differentiate between different protocols / applications that can be used at the same time.
src_id is the ID of the MQTT client that publishes the message. It is expected to be the same as the "client ID" used to connect to the MQTT broker. It allows quick ACL control to check whether the client is allowed to publish a specific topic.
dest_id is the client ID of the "destination" unit, i.e. whom the message is intended for. It also allows quick ACL control on the broker of whether a client is allowed to subscribe to a particular topic. There can be reserved "destination" strings to specify that the message is broadcast to anyone who is interested, for example all.
message_id is the actual ID of the message within the protocol used. I usually use a numeric value (as a string, of course), because the IoT or other embedded system connected to the MQTT broker may have other I/O links, and I would like to use the same protocol (but with different transport framing) to control the device over those links. I usually use numeric message IDs in such communication links.
extra_properties is an optional subtopic which can be used to communicate other MQTT-specific extra information (comma-separated key=value pairs, for example). A good example would be reporting the timestamp at which the message was actually sent by the client. In the case of "retained" messages it can help to identify the relevance of the received message. With the MQTTv5 protocol, which is expected to arrive soon, the need for this subtopic may disappear because there will be another way to communicate extra properties.
Hope it helps.
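To illustrate that scheme (my sketch, not the answerer's code), topics could be built and parsed like this:

def build_topic(prefix, src_id, dest_id, message_id, extra=None):
    # extra is an optional dict rendered as comma-separated key=value pairs.
    parts = [prefix, src_id, dest_id, str(message_id)]
    if extra:
        parts.append(",".join(f"{k}={v}" for k, v in extra.items()))
    return "/".join(parts)

def parse_topic(topic):
    prefix, src_id, dest_id, message_id, *rest = topic.split("/")
    extra = dict(kv.split("=", 1) for kv in rest[0].split(",")) if rest else {}
    return prefix, src_id, dest_id, int(message_id), extra

topic = build_topic("myproto", "sensor-7", "all", 42, {"ts": "1572393600"})
# -> "myproto/sensor-7/all/42/ts=1572393600"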
We have done some work on that for the domain of manufacturing (Industrial IoT, not consumer IoT!).
In our scenario there are lots of server-side apps of different companies communicating through MQTT, so we needed some overall structure. We call this the "Manufacturing Message Stack".
The bottom layer is MQTT, then there is the "Messaging Layer". It consists mainly of
basic topic specs
basic payload specs
On top of the messaging layer there are domain message layers covering various domain-specific topics, such as system messages, alerting, physical device / digital twin messages, or other manufacturing-related messages.
Topics
The topics are roughly defined as <senderapp>/<app-id>/<message-name>/<args> e.g. pacman/pacman-1/gameover (this is a sample for illustration only!)
The developer of an app which publishes MQTT messages defines <message-name> and <args> depending on the semantics of the payload.
The <senderapp> and <app-id> refer to the sending app and allow messages from a defined origin (publisher) to be selected quickly. We deploy apps in a microservice environment built with Docker, Rancher and, soon, Kubernetes.
Payload
The payload is specified in JSON format. Each message carries a JSON schema reference URL in its $schema field, pointing to an API of the publishing app which holds further information (e.g. the JSON schema) about the message sent. Thus a subscriber can get metadata for an MQTT message on demand. Static metadata is not sent with the message, to reduce the payload size.
Payload sample:
{
    "$schema": "http://app/api/messages/message1.json",
    "score": 1234,
    "highscore": false
}
Publisher's message metadata
The publishing app holds an index of all messages which can be sent in an API at
http://<app>/api/messages/index.json:
{
    "message1": "message1.json",
    ...
}
Each message is described by its JSON schema message1.json:
{
    "$schema": "http://json-schema.org/draft-06/schema#",
    "title": "Pacman end of game",
    "properties": {
        "score": {
            "description": "Players score at the end of game",
            "type": "integer"
        },
        ...
    }
}
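As a sketch of how a subscriber might use the $schema reference on demand (the jsonschema package here is my assumption; the stack described above may validate differently):

import json
import requests
from jsonschema import validate  # pip install jsonschema

def handle_payload(raw):
    # Fetch the schema the message points to and validate against it.
    # A real subscriber would cache schemas instead of refetching per message.
    payload = json.loads(raw)
    schema = requests.get(payload["$schema"]).json()
    validate(instance=payload, schema=schema)
    return payload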
Unfortunately we have not published our Manufacturing Message Stack yet. Publishing is planned in the coming months. Feedback is welcome.
I think if topics are to reflect the physical world, we should look at something like Signal K.
EDIT:
That spec is also still maturing, but it includes concepts like "self" for the server/broker, and a tree that can start at the current vessel/home, but easily extends upwards to other vessels/aircraft/things.
My two cents:
All topics are read-only unless they end in "/set"
Ideally, topics are reasonably normalized and granular. I can understand grouping values into a group topic; IMHO, this kind of decision should be application-specific.
Payloads should be strings, to avoid endianness issues
Here's one suggested tree:
broker = information of this specific broker
broker/clients
broker/clients/count
broker/clients/0/name or broker/clients[0]/name
broker/topics
home = this current location (could also be "here" or something)
home/kitchen/temperature "19C"
home/kitchen/temperature/hardware/type "ESP8266"
home/garage/maindoor/set "closed"
locations = list of all known locations
locations/0/uuid
locations/0/name
locations/0/address
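A minimal paho-mqtt sketch of the read-only-unless-/set convention (broker host and topics are assumptions):

import paho.mqtt.client as mqtt

# State is published as retained plain strings; commands arrive on .../set.
state = {"home/garage/maindoor": "closed"}

def on_message(client, userdata, msg):
    if not msg.topic.endswith("/set"):
        return  # anything without /set is read-only state, ignore writes
    base = msg.topic[: -len("/set")]
    state[base] = msg.payload.decode()  # payloads are plain strings
    client.publish(base, state[base], retain=True)  # reflect the new state

client = mqtt.Client()
client.on_message = on_message
client.connect("localhost", 1883)
client.subscribe("home/#")
client.loop_forever()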

(Ruby on Rails) Write spec for Facebook ID and validate a FB page or group

I am a newbie trying to write a Ruby on Rails spec to make sure the user's input is a Facebook ID, and I can't find anything on how to validate a user's FB ID and direct-link to it once clicked. I found https://graph.facebook.com/ but am still unsure how to use it.
{
    "id": "40796308305",
    "name": "Coca-Cola",
    "picture": "http://profile.ak.fbcdn.net/hprofile-ak-snc4/hs236.ash2/50516_40796308305_7651_s.jpg",
    "link": "http://www.facebook.com/coca-cola",
    "category": "Consumer_products",
    "website": "http://www.coca-cola.com",
    "username": "coca-cola",
    "products": "Coca-Cola is the most popular and biggest-selling soft drink in history, as well as the best-known product in the world.\n\nCreated in Atlanta, Georgia, by Dr. John S. Pemberton, Coca-Cola was first offered as a fountain beverage by mixing Coca-Cola syrup with carbonated water. Coca-Cola was introduced in 1886, patented in 1887, registered as a trademark in 1893 and by 1895 it was being sold in every state and territory in the United States. In 1899, The Coca-Cola Company began franchised bottling operations in the United States.\n\nCoca-Cola might owe its origins to the United States, but its popularity has made it truly universal. Today, you can find Coca-Cola in virtually every part of the world.",
    "fan_count": 17367199
}
You may use introspection in the Graph API by adding the metadata field when requesting object information, to discover the object's type:
http://graph.facebook.com/40796308305?metadata=1
This will return additional object metadata, including a type property which, depending on the object, will be one of: user, page, group, status, photo, etc.
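As an illustrative sketch (Python rather than Rails, and facebook_object_type is a made-up helper), the metadata=1 call can be wrapped to check whether an ID refers to a page or group:

import requests

def facebook_object_type(object_id, access_token=None):
    # Ask Graph API for the object's metadata; most objects require a token.
    params = {"metadata": "1"}
    if access_token:
        params["access_token"] = access_token
    resp = requests.get(f"https://graph.facebook.com/{object_id}", params=params)
    resp.raise_for_status()
    return resp.json().get("metadata", {}).get("type")

# e.g. facebook_object_type("40796308305") == "page"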
