Unable to set the displayName on a relationship - azure-digital-twins

I am trying to understand and set the displayName on a relationship.
My model is a car that has multiple tires:
{
  "@id": "dtmi:kevinsay:vehicle;1",
  "@type": "Interface",
  "displayName": "Vehicle",
  "contents": [
    {
      "@type": "Relationship",
      "name": "tires",
      "maxMultiplicity": 5,
      "displayName": "string"
    },
    {
      "@type": "Property",
      "name": "Model",
      "schema": "string"
    }
  ],
  "@context": "dtmi:dtdl:context;2"
}
and I have a tire:
{
  "@id": "dtmi:kevinsay:Tire;1",
  "@type": "Interface",
  "displayName": "Tire",
  "contents": [
    {
      "@type": "Relationship",
      "name": "tpms"
    },
    {
      "@type": "Property",
      "name": "Manufacturer",
      "schema": "string"
    }
  ],
  "@context": "dtmi:dtdl:context;2"
}
I easily create a car and a tire:
az dt twin create --dt-name kevinsay --dtmi "dtmi:kevinsay:vehicle;1" --twin-id "Lincoln" --properties '{"Model":"Continental"}'
az dt twin create --dt-name kevinsay --dtmi "dtmi:kevinsay:Tire;1" --twin-id "lincoln1" --properties '{"Manufacturer": "Goodyear"}'
The challenge is creating the relationship from the car to the tire while trying to specify the displayName.
This works:
az dt twin relationship create --dt-name kevinsay --relationship-id leftFront --relationship tires --twin-id Lincoln --target lincoln1
but this will not:
az dt twin relationship create --dt-name kevinsay --relationship-id leftFront --relationship tires --twin-id Lincoln --target lincoln1 -p '{"displayName": "leftFront"}'
I get an error message, and Azure Digital Twins Explorer does show a displayName on the tires relationship.
Any help would be appreciated.

After much research, I found that I was asking the wrong question. Instead of trying to set a displayName, something the relationship object does not have, I had to set the relationship ID, which is controlled by the --relationship-id flag.
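If you do need extra data on a relationship beyond its ID, DTDL v2 lets a Relationship declare its own properties. The sketch below is an assumption on my part (the position property is not part of the original model); it shows how the tires relationship could be redefined so that a value can be passed at creation time:

```json
{
  "@type": "Relationship",
  "name": "tires",
  "maxMultiplicity": 5,
  "properties": [
    {
      "@type": "Property",
      "name": "position",
      "schema": "string"
    }
  ]
}
```

After uploading the revised model, a command like `az dt twin relationship create --dt-name kevinsay --relationship-id leftFront --relationship tires --twin-id Lincoln --target lincoln1 -p '{"position": "leftFront"}'` should then be accepted, since position is a declared relationship property.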

Related

DTDL - How to create object for location (lat,long)?

I have a DTDL model similar to the one below. I can use JSON as a string and store lat/long values, but how can I store an array of lat/long locations using objects?
{
  "@id": "dtmi:DigitalTwins:BasicInfra;2",
  "@type": "Interface",
  "displayName": "BasicInfra interface model",
  "@context": "dtmi:dtdl:context;2",
  "contents": [
    {
      "@type": "Property",
      "name": "name",
      "schema": "string"
    },
    {
      "@type": "Property",
      "name": "location",
      "description": "Polygon/PolyLine Format Location",
      "schema": {
        "@type": "Object",
        "fields": [
          {
            "name": "x",
            "schema": "double"
          },
          {
            "name": "y",
            "schema": "double"
          }
        ]
      }
    },
    {
      "@type": "Relationship",
      "name": "contains"
    }
  ]
}
Thanks for posting this question.
We got a response from Microsoft's product team: at the moment, arrays are not supported in properties.
Please follow the Azure Digital Twins updates, blogs, and announcements, as well as the Product Updates page, for the latest information on upcoming features.
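As a possible workaround (my suggestion, not part of the product team's reply), DTDL v2 does support the Map schema, so a set of points keyed by an identifier can be modeled without arrays:

```json
{
  "@type": "Property",
  "name": "location",
  "schema": {
    "@type": "Map",
    "mapKey": { "name": "pointId", "schema": "string" },
    "mapValue": {
      "name": "point",
      "schema": {
        "@type": "Object",
        "fields": [
          { "name": "x", "schema": "double" },
          { "name": "y", "schema": "double" }
        ]
      }
    }
  }
}
```

The map keys here (such as "p1", "p2") would carry the ordering of the polygon/polyline points, which is up to the application to interpret.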

How to find documents in CouchDB based on properties of other documents in a single query?

There is an existing CouchDB database that was created from records in a MySQL database.
I have a set of documents like this:
[
  {
    "_id": "lf_event_users_1247537_11434",
    "_rev": "1-19e90d3f19e9da7cc5adab44ebbe3894",
    "TS_create": "2018-12-17T10:29:20",
    "emm_id": 204662,
    "eu_user_id": 201848611,
    "type": "lf_event_users",
    "uid": 1247537,
    "vendor_id": 11434
  },
  {
    "_id": "lf_event_users_1247538_11434",
    "_rev": "1-0d0d1e9f1fb5aad9bafd4c53a6cada17",
    "TS_create": "2018-12-17T10:29:20",
    "emm_id": 204661,
    "eu_user_id": 201848611,
    "type": "lf_event_users",
    "uid": 1247538,
    "vendor_id": 11434
  },
  {
    "_id": "lf_event_users_1247539_11434",
    "_rev": "1-09bc2bfc709ee9c6e6cac9cb34964ac4",
    "TS_create": "2018-12-17T10:29:20",
    "emm_id": 204660,
    "eu_user_id": 201848611,
    "type": "lf_event_users",
    "uid": 1247539,
    "vendor_id": 11434
  }
]
As you can see, all of them are for the same "eu_user_id" = 201848611, and each one has a different "emm_id".
Now, I have another set of documents like this in the same CouchDB database:
[
  {
    "_id": "lf_event_management_master_204660_11434",
    "_rev": "2-320111a3814a3efd6838baa0fb5412bb",
    "emm_disabled": "n",
    "emm_title": "Scanned for local delivery",
    "settings": {
      "event_view": "ScannedForLocalDeliveryEvent",
      "sort_weight": 0
    },
    "type": "lf_event_management_master",
    "uid": 204660,
    "vendor_id": 11434
  },
  {
    "_id": "lf_event_management_master_204661_11434",
    "_rev": "2-e6d6ebbd4dc4ca473a376d3d16a58e93",
    "emm_disabled": "n",
    "emm_title": "Local Delivery Cancelled",
    "settings": {
      "event_view": "CancelDeliveryEvent",
      "sort_weight": 4
    },
    "type": "lf_event_management_master",
    "uid": 204661,
    "vendor_id": 11434
  },
  {
    "_id": "lf_event_management_master_204662_11434",
    "_rev": "2-53cb3d3eba80704e87ea5ff8d5c269df",
    "emm_disabled": "n",
    "emm_title": "Local Delivery Exception",
    "settings": {
      "event_view": "DeliveryExceptionEvent",
      "sort_weight": 3
    },
    "type": "lf_event_management_master",
    "uid": 204662,
    "vendor_id": 11434
  }
]
As you can see, each document in this last set has a "uid" matching the "emm_id" in the previous set of documents. Basically this means:
A "user" has many allowed "events".
You can also see that the documents of type "lf_event_management_master" have no "eu_user_id" value or any other key referencing the user.
My question is:
How can I get all documents of type "lf_event_management_master" allowed for user "201848611" in a single query?
In my case, I only have the user ID (201848611) available at the point where I need to get the allowed events. Currently what happens is:
1. I get all the "lf_event_users" records for this user.
2. I loop over the results of the previous query and build a new query to find all the "lf_event_management_master" documents whose "uid" matches any of the "emm_id" values found by the first query.
Thank you in advance.
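For reference, the two-step process described in the question boils down to the following join logic. This is a plain-Python sketch over abbreviated copies of the sample documents, not a CouchDB query; Mango's _find endpoint cannot express this cross-document join in one request, which is why the question arises at all:

```python
# Abbreviated versions of the documents from the question.
event_users = [
    {"type": "lf_event_users", "eu_user_id": 201848611, "emm_id": 204662},
    {"type": "lf_event_users", "eu_user_id": 201848611, "emm_id": 204661},
    {"type": "lf_event_users", "eu_user_id": 201848611, "emm_id": 204660},
]
masters = [
    {"type": "lf_event_management_master", "uid": 204660, "emm_title": "Scanned for local delivery"},
    {"type": "lf_event_management_master", "uid": 204661, "emm_title": "Local Delivery Cancelled"},
    {"type": "lf_event_management_master", "uid": 204662, "emm_title": "Local Delivery Exception"},
]

def allowed_events(user_id, event_users, masters):
    # Step 1: collect the emm_id values for this user.
    emm_ids = {d["emm_id"] for d in event_users
               if d["type"] == "lf_event_users" and d["eu_user_id"] == user_id}
    # Step 2: keep the master documents whose uid is in that set.
    return [d for d in masters
            if d["type"] == "lf_event_management_master" and d["uid"] in emm_ids]

print(sorted(d["uid"] for d in allowed_events(201848611, event_users, masters)))
# → [204660, 204661, 204662]
```

To collapse this into one round trip server-side, the usual CouchDB approach is a map/reduce view that emits the linked document's _id so it can be fetched with include_docs=true, rather than a single Mango query.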

Reservation type that is not defined is shown in the Highlight section

I have recently added JSON-LD data to an email so that the email client shows a highlight section for a train reservation, but the email also contains an advertising section for a car hire company, and this shows up in the highlight too.
Is there a way to disable the RentalCar type in the email, or to tell the client to ignore a specific section of the HTML?
JSON-LD example
{
  "reservationNumber": "XXXXX",
  "reservationFor": {
    "trainCompany": { "name": "Company", "@type": "Organization" },
    "departureStation": { "name": "Departure station", "@type": "TrainStation" },
    "arrivalStation": {
      "name": "Arrival station",
      "@type": "TrainStation"
    },
    "arrivalTime": "2018-08-15T12:47:00",
    "@type": "TrainTrip"
  },
  "price": "10.00",
  "reservationStatus": "http://schema.org/ReservationConfirmed",
  "checkinUrl": "https://checkin-url.com/uk-en",
  "modifyReservationUrl": "https://checkin-url.com/uk-en",
  "@context": "http://schema.org",
  "@type": "TrainReservation"
}
Thanks
Damien

How do I use the swagger models section?

Inside the Swagger API documentation, the JSON contains a "models" object entry beside the "apis" array, but there is no documentation about it. How can I use this "models" part?
{
  "apiVersion": "0.2",
  "swaggerVersion": "1.1",
  "basePath": "http://petstore.swagger.wordnik.com/api",
  "resourcePath": "/pet.{format}",
  ...
  "apis": [...],
  "models": {...}
}
Models are like your POJO classes in Java, which have variables and properties. In the models section you can define your own custom class and refer to it as a data type.
Consider the example below:
{
  "path": "/pet.{format}",
  "description": "Operations about pets",
  "operations": [
    {
      "httpMethod": "POST",
      "summary": "Add a new pet to the store",
      "responseClass": "void",
      "nickname": "addPet",
      "parameters": [
        {
          "description": "Pet object that needs to be added to the store",
          "paramType": "body",
          "required": true,
          "allowMultiple": false,
          "dataType": "Pet"
        }
      ],
      "errorResponses": [
        {
          "code": 405,
          "reason": "Invalid input"
        }
      ]
    }
  ]
}
Here, in the parameters section, there is one parameter whose dataType is Pet, and Pet is defined in models as below:
{
  "models": {
    "Pet": {
      "id": "Pet",
      "properties": {
        "id": {
          "type": "long"
        },
        "status": {
          "allowableValues": {
            "valueType": "LIST",
            "values": [
              "available",
              "pending",
              "sold"
            ]
          },
          "description": "pet status in the store",
          "type": "string"
        },
        "name": {
          "type": "string"
        },
        "photoUrls": {
          "items": {
            "type": "string"
          },
          "type": "Array"
        }
      }
    }
  }
}
You can have nested models; for more information, see the Swagger Petstore example.
So models are essentially just classes.
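For instance, a nested model can be expressed by using one model's id as the type of another model's property. This is a hypothetical sketch (the Category model is not part of the snippet above), following the same Swagger 1.1 conventions:

```json
{
  "models": {
    "Pet": {
      "id": "Pet",
      "properties": {
        "id": { "type": "long" },
        "name": { "type": "string" },
        "category": { "type": "Category" }
      }
    },
    "Category": {
      "id": "Category",
      "properties": {
        "id": { "type": "long" },
        "name": { "type": "string" }
      }
    }
  }
}
```

A response or parameter declaring dataType "Pet" would then carry the nested Category object as well.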

Avro schema evolution

I have two questions:
Is it possible to use the same reader to parse records that were written with two compatible schemas? E.g. Schema V2 only has an additional optional field compared to Schema V1, and I want the reader to understand both. I think the answer here is no, but if yes, how do I do that?
I have tried writing a record with Schema V1 and reading it with Schema V2, but I get the following error:
org.apache.avro.AvroTypeException: Found foo, expecting foo
I used avro-1.7.3 and:
writer = new GenericDatumWriter<GenericData.Record>(SchemaV1);
reader = new GenericDatumReader<GenericData.Record>(SchemaV2, SchemaV1);
Here are examples of the two schemas (I have tried adding a namespace as well, but no luck).
Schema V1:
{
  "name": "foo",
  "type": "record",
  "fields": [{
    "name": "products",
    "type": {
      "type": "array",
      "items": {
        "name": "product",
        "type": "record",
        "fields": [
          { "name": "a1", "type": "string" },
          { "name": "a2", "type": {"type": "fixed", "name": "a3", "size": 1} },
          { "name": "a4", "type": "int" },
          { "name": "a5", "type": "int" }
        ]
      }
    }
  }]
}
Schema V2:
{
  "name": "foo",
  "type": "record",
  "fields": [{
    "name": "products",
    "type": {
      "type": "array",
      "items": {
        "name": "product",
        "type": "record",
        "fields": [
          { "name": "a1", "type": "string" },
          { "name": "a2", "type": {"type": "fixed", "name": "a3", "size": 1} },
          { "name": "a4", "type": "int" },
          { "name": "a5", "type": "int" }
        ]
      }
    }
  },
  {
    "name": "purchases",
    "type": ["null", {
      "type": "array",
      "items": {
        "name": "purchase",
        "type": "record",
        "fields": [
          { "name": "a1", "type": "int" },
          { "name": "a2", "type": "int" }
        ]
      }
    }]
  }]
}
Thanks in advance.
I encountered the same issue. It might be a bug in Avro, but you can probably work around it by adding "default": null to the "purchases" field.
Check my blog for details: http://ben-tech.blogspot.com/2013/05/avro-schema-evolution.html
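Concretely, the "purchases" field in Schema V2 would become the following; the only change from the original is the added "default" (legal here because "null" is the first branch of the union):

```json
{
  "name": "purchases",
  "type": ["null", {
    "type": "array",
    "items": {
      "name": "purchase",
      "type": "record",
      "fields": [
        { "name": "a1", "type": "int" },
        { "name": "a2", "type": "int" }
      ]
    }
  }],
  "default": null
}
```

With the default in place, Avro's schema resolution can fill in the missing field when reading V1 data with the V2 reader schema.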
You can also do the opposite: write data with Schema V2 and read it with Schema V1. At read time, extra fields that the reader's schema does not declare are simply skipped. But if the writer's schema has fewer fields than the reader expects, the reader cannot resolve the missing field unless it has a default, so it will give an error.
The best approach is to maintain a schema mapping, like the Confluent Avro Schema Registry does.
Key takeaways:
1. Unlike Thrift, Avro-serialized objects do not carry their schema.
2. Since no schema is stored in the serialized byte array, the reader must be given the schema the data was written with.
3. Confluent Schema Registry provides a service to maintain schema versions.
4. Confluent provides a cached schema client, which checks its cache before sending a request over the network.
5. The JSON schema in an ".avsc" file is distinct from the schema held inside an Avro object.
6. All Avro objects extend GenericRecord.
7. During serialization, a schema ID is requested from the Confluent Schema Registry based on the Avro object's schema.
8. The schema ID, an integer, is encoded as 4 bytes and, together with a leading magic byte, prepended to the serialized Avro object.
9. During deserialization, the 5-byte header is stripped from the byte array and the 4 ID bytes are converted back to an integer (the schema ID).
10. The schema is requested from the Confluent Schema Registry, and the byte array is deserialized using that schema.
http://bytepadding.com/big-data/spark/avro/avro-serialization-de-serialization-using-confluent-schema-registry/
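The framing described in points 8-10 can be sketched in a few lines of Python. This is an illustration of the wire format as described above, not the Confluent client itself:

```python
import struct

MAGIC_BYTE = 0  # Confluent wire format starts with a zero magic byte

def frame(schema_id: int, avro_payload: bytes) -> bytes:
    """Prepend the magic byte and the 4-byte big-endian schema ID."""
    return struct.pack(">bI", MAGIC_BYTE, schema_id) + avro_payload

def unframe(message: bytes):
    """Strip the 5-byte header and recover the schema ID and payload."""
    magic, schema_id = struct.unpack(">bI", message[:5])
    if magic != MAGIC_BYTE:
        raise ValueError("not a Confluent-framed message")
    return schema_id, message[5:]

framed = frame(42, b"serialized-avro-bytes")
schema_id, payload = unframe(framed)
print(schema_id, payload)  # → 42 b'serialized-avro-bytes'
```

The deserializer would then look up schema_id in the registry (or its local cache) and hand the resulting schema to the Avro decoder along with the payload.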
