I have a location added to a space that sits two levels above the sensor, but I can't find any way with the current client reference operations to get that location. I want to enrich the telemetry with the space's location information.
I have used the following:
getSpaceMetadata
getSpaceExtendedProperty(spaceId, propertyName) // not applicable, since location is not an extended property
I need functionality similar to this:
https://urlofdigitaltwin/management/api/v1.0/spaces/633a40d6-790d-4bd5-92c5-1cc8b1a86141/?includes=location
Please let me know if there is a way I can do this, even if it means reading these separately from some other Azure service.
space (with location)
  device
    sensor
      - matcher
      - udf
Thanks for the great question! Azure Digital Twins is undergoing continuous improvement, and I hope you'll find the documentation significantly improved.
Assuming you have extracted the location ID from the sensor or device, you can find the associated parentSpaceId:
{
"id": "aa000aaa-a0a0-0000-a0aa-00000a000aa0",
"name": "Example Room",
"typeId": 14,
"parentSpaceId": "1b1b1111-b1b1-1111-111b-1b1b11b11111",
"subtypeId": 13,
"statusId": 12
}
From there you can call the top-level space directly. You can combine that operation with several API query parameters, such as traverse, minLevel, and maxLevel, which should allow you to fetch everything you need in a single call.
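As a rough sketch only (Python with requests; the base URL, token, and IDs below are placeholders taken from this thread, and the exact query-parameter semantics are covered in the resources linked below), you could walk up from the sensor's space until you reach one with a location:

import requests

# Placeholder values from this thread -- substitute your own instance URL and token.
BASE = "https://urlofdigitaltwin/management/api/v1.0"
HEADERS = {"Authorization": "Bearer <your-oauth-token>"}

def get_space(space_id, **params):
    # GET a single space; params can include includes=location, and the docs
    # below describe traverse/minLevel/maxLevel for fetching more in one call.
    resp = requests.get(f"{BASE}/spaces/{space_id}", headers=HEADERS, params=params)
    resp.raise_for_status()
    return resp.json()

# Start from the space referenced by the sensor/device and follow parentSpaceId.
space = get_space("aa000aaa-a0a0-0000-a0aa-00000a000aa0", includes="location")
while not space.get("location") and space.get("parentSpaceId"):
    space = get_space(space["parentSpaceId"], includes="location")

print(space.get("location"))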
Two new resources that describe those API operations are now available:
https://learn.microsoft.com/azure/digital-twins/how-to-navigate-apis
https://learn.microsoft.com/azure/digital-twins/how-to-query-common-apis
Thanks!
I'm working with a sample dataset of airports as I continue exploring Slate features for my team. I copied the default airport dataset into my files, so this is a version that I fully own (so presumably no permission issues there). The dataset is properly available in my Slate application since I'm also using it to display and filter data via Phonograph2 queries.
Based on the Phonograph2 docs, I created a new query to add a new airport to the dataset. I'm using the "Table Storage Service" and the "Post Event" endpoint. As a test, I configured my tableEditedEventPostRequest as:
{
  "primaryKey": {
    "airport": "ABC"
  },
  "payload": {
    "type": "rowAdded",
    "rowAdded": {
      "columns": {
        "display_name": "[ABC] My New Airport"
      }
    }
  }
}
(Once I get this working I'd switch the values out with dynamic values from widgets.)
When I run a test of this query, I get this error response:
{
  "errorCode": "INVALID_ARGUMENT",
  "errorName": "Phonograph2:ReadOnlyTables",
  "errorInstanceId": "17ec990d-5d58-479d-a1b6-5ad033c8c808",
  "parameters": {
    "tableRids": "[ri.phonograph2.main.table.f3f33f6e-801a-4454-98e9-f2df5f170559]",
    "dataInputLocatorRids": "[ri.foundry.main.dataset.6add7c46-d3c9-4056-89b6-a19dbe461ed4]"
  }
}
I'm not finding anything about this error, or anything in the docs (so far) about the target dataset being configured as read-only. There aren't any settings I can find on the dataset to make it more permissive, and I'm already the owner of the dataset. I'd appreciate any insights or tips to get past this roadblock.
For a Phonograph table to be "editable", it needs to be associated with a writeback dataset. If you created the sync through the Ontology, which it seems like you did not, you would do this on the "Datasources" configuration tab.
Since it sounds like you created the sync directly from the Dataset Details view (or maybe through the Slate Datasets tab), you should have an option in that configuration to create a new dataset for writeback. All you should need to do is provide a dataset name and folder location.
I'm trying to build a CLI app that shows a list of Dart versions, lets the user select one to install, and then switches between them.
Note: there is a Flutter tool (fvm) that can switch between Flutter versions (and the embedded Dart tools), but this tool is specifically for Dart and needs versions as well as channels.
The fvm tool uses the following endpoint:
https://storage.googleapis.com/flutter_infra/releases/releases_linux.json
But I can't find an equivalent.
Is there any method of obtaining a list of versions for each of the Dart channels?
I've found:
https://storage.googleapis.com/dart-archive/channels
but you need to know the full URL, as I can't find any endpoint that provides a list.
I'm hoping to avoid screen scraping.
You can see how the Dart archive page retrieves all the information and use the same endpoints. The endpoint returns data in a format such as:
{
  "kind": "storage#objects",
  "prefixes": [
    "channels/<stable|beta|dev>/release/1.11.0/",
    ...,
    "channels/<stable|beta|dev>/release/2.9.3/",
    "channels/<stable|beta|dev>/release/29803/",  // you might need to filter out results such as this
    ...,
    "channels/<stable|beta|dev>/release/latest/"
  ]
}
Note: the results are not ordered in any way.
URL:
https://www.googleapis.com/storage/v1/b/dart-archive/o?delimiter=%2F&prefix=channels%2F<stable|beta|dev>%2Frelease%2F&alt=json
Replace <stable|beta|dev> with the channel you want the info for.
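For instance, here's a quick sketch (Python with requests, purely to illustrate the call; a real Dart CLI would make the same HTTP request) that lists the stable-channel versions from that endpoint and filters out entries like latest and the raw build numbers mentioned above:

import re
import requests

# Example for the stable channel; swap "stable" for "beta" or "dev".
LIST_URL = ("https://www.googleapis.com/storage/v1/b/dart-archive/o"
            "?delimiter=%2F&prefix=channels%2Fstable%2Frelease%2F&alt=json")

resp = requests.get(LIST_URL)
resp.raise_for_status()

versions = []
for prefix in resp.json().get("prefixes", []):
    # Prefixes look like "channels/stable/release/2.10.4/".
    name = prefix.rstrip("/").rsplit("/", 1)[-1]
    # Keep only semver-looking entries; drops "latest" and bare build numbers.
    if re.fullmatch(r"\d+\.\d+\.\d+(-[\w.]+)?", name):
        versions.append(name)

# Note: the listing may be paginated; follow "nextPageToken" for a complete set.
# The results are unordered (see the note above), so sort by the numeric parts.
versions.sort(key=lambda v: [int(x) for x in v.split("-")[0].split(".")])
print(versions)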
If you need to collect info about a specific version, you can use:
https://storage.googleapis.com/dart-archive/channels/<stable|beta|dev>/release/<VERSION NUMBER | latest>/VERSION
which returns a JSON object like:
{
  "date": "2020-11-11",
  "version": "2.10.4",
  "revision": "7c148d029de32590a8d0d332bf807d25929f080e"
}
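Continuing the same sketch (still Python; version_info is just a hypothetical helper name), resolving the details for a specific version or for latest looks like this:

import requests

def version_info(channel="stable", version="latest"):
    # Fetch the VERSION file for a given channel and version (or "latest").
    url = ("https://storage.googleapis.com/dart-archive/"
           f"channels/{channel}/release/{version}/VERSION")
    resp = requests.get(url)
    resp.raise_for_status()
    return resp.json()

info = version_info("stable", "latest")
print(info["version"], info["date"])  # e.g. 2.10.4 2020-11-11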
The tags on the GitHub repository for the SDK (https://github.com/dart-lang/sdk/tags) appear to have the releases tagged reasonably usefully. The downside is that it weighs in at 1.3 GB, and there's no easy way to get a workable shallow clone of it.
I'm currently creating a PCollectionView by reading filtering information from a GCS bucket and passing it as a side input to different stages of my pipeline in order to filter the output. If the file in the GCS bucket changes, I want the currently running pipeline to use the new filter info. Is there a way to update this PCollectionView on each new window of data if my filter changes? I thought I could do it in startBundle, but I can't figure out how, or whether it's possible. Could you give an example if it is possible?
PCollectionView<Map<String, TagObject>> tagMapView =
    pipeline.apply(TextIO.Read.named("TagListTextRead")
                .from("gs://tag-list-bucket/tag-list.json"))
            .apply(ParDo.named("TagsToTagMap").of(new Tags.BuildTagListMapFn()))
            .apply("MakeTagMapView", View.asSingleton());

PCollection<String> windowedData =
    pipeline.apply(PubsubIO.Read.topic("myTopic"))
            .apply(Window.<String>into(
                SlidingWindows.of(Duration.standardMinutes(15))
                    .every(Duration.standardSeconds(31))));

PCollection<MY_DATA> lineData =
    windowedData.apply(ParDo.named("ExtractJsonObject")
                    .withSideInputs(tagMapView)
                    .of(new ExtractJsonObjectFn()));
You probably want something like "use an at-most-1-minute-old version of the filter as a side input" (since in theory the file can change frequently, unpredictably, and independently from your pipeline, so there's really no way to completely synchronize changes of the file with the behavior of the pipeline).
Here's a (granted, rather clumsy) solution I was able to come up with. It relies on the fact that side inputs are implicitly also keyed by window. In this solution we're going to create a side input windowed into 1-minute fixed windows, where each window will contain a single value of the tag map, derived from the filter file as of some moment inside that window.
PCollection<Long> ticks = p
    // Produce 1 "tick" per second
    .apply(CountingInput.unbounded().withRate(1, Duration.standardSeconds(1)))
    // Window the ticks into 1-minute windows
    .apply(Window.into(FixedWindows.of(Duration.standardMinutes(1))))
    // Use an arbitrary per-window combiner to reduce to 1 element per window
    .apply(Count.globally());

// Produce a collection of tag maps, 1 per each 1-minute window
PCollectionView<TagMap> tagMapView = ticks
    .apply(MapElements.via((Long ignored) -> {
        ... manually read the json file as a TagMap ...
    }))
    .apply(View.asSingleton());
This pattern (joining against slowly changing external data as a side input) comes up repeatedly, and the solution I'm proposing here is far from perfect; I wish we had better support for this in the programming model. I've filed a BEAM JIRA issue to track this.
I need a function to get the physical sector size for all kinds of system drives, on Windows 7 or higher.
This is the code that I've used until today, when I found out that it doesn't work with my external USB HDD (exFAT file system) or with my USB MP3 player (FAT16). In these cases the DeviceIoControl function fails and I get the exception "System Error. Code 50. The request is not supported.", but it works very well with NTFS volumes.
function GetSectorSize(Drive: Char): DWORD;
var
  h: THandle;
  junk: DWORD;
  Query: STORAGE_PROPERTY_QUERY;
  Alignment: STORAGE_ACCESS_ALIGNMENT_DESCRIPTOR;
begin
  Result := 0;
  h := CreateFileW(PWideChar('\\.\' + UpperCase(Drive) + ':'), 0,
    FILE_SHARE_READ or FILE_SHARE_WRITE, nil, OPEN_EXISTING, 0, 0);
  if h = INVALID_HANDLE_VALUE then RaiseLastOSError;
  try
    FillChar(Query, SizeOf(Query), 0);
    Query.PropertyId := StorageAccessAlignmentProperty;
    Query.QueryType := PropertyStandardQuery;
    if not DeviceIoControl(h, IOCTL_STORAGE_QUERY_PROPERTY, @Query, SizeOf(Query),
      @Alignment, SizeOf(Alignment), junk, nil) then RaiseLastOSError;
    Result := Alignment.BytesPerPhysicalSector;
  finally
    CloseHandle(h);
  end;
end;
According to MSDN:
File Buffering
Most current Windows APIs, such as IOCTL_DISK_GET_DRIVE_GEOMETRY and GetDiskFreeSpace, will return the logical sector size, but the physical sector size can be retrieved through the IOCTL_STORAGE_QUERY_PROPERTY control code, with the relevant information contained in the BytesPerPhysicalSector member in the STORAGE_ACCESS_ALIGNMENT_DESCRIPTOR structure. For an example, see the sample code at STORAGE_ACCESS_ALIGNMENT_DESCRIPTOR. Microsoft strongly recommends that developers align unbuffered I/O to the physical sector size as reported by the IOCTL_STORAGE_QUERY_PROPERTY control code to help ensure their applications are prepared for this sector size transition.
This same quote also appears in the following MSDN document:
Advanced format (4K) disk compatibility update
Which includes the following additional information:
The below list summarizes the new features delivered as part of Windows 8 and Windows Server 2012 to help improve customer and developer experience with large sector disks. More detailed description for each item follow.
...
•Provides a new API to query for physical sector size (FileFsSectorSizeInformation)
...
Here’s how you can query for the physical sector size:
Preferred method for Windows 8
With Windows 8, Microsoft has introduced a new API that enables developers to easily integrate 4K support within their apps. This new API supports even greater numbers of scenarios than the legacy method for Windows Vista and Windows 7 discussed below. This API enables these calling scenarios:
•Calling from an unprivileged app
•Calling to any valid file handle
•Calling to a file handle on a remote volume over SMB2
•Simplified programming model
The API is in the form of a new info class, FileFsSectorSizeInformation, with associated structure FILE_FS_SECTOR_SIZE_INFORMATION
FILE_FS_SECTOR_SIZE_INFORMATION structure
This information can be queried in either of the following ways:
•Call FltQueryVolumeInformation or ZwQueryVolumeInformationFile, passing FileFsSectorSizeInformation as the value of FileInformationClass and passing a caller-allocated, FILE_FS_SECTOR_SIZE_INFORMATION-structured buffer as the value of FileInformation.
•Create an IRP with major function code IRP_MJ_QUERY_VOLUME_INFORMATION.
•Call FsRtlGetSectorSizeInformation with a pointer to a FILE_FS_SECTOR_SIZE_INFORMATION-structured buffer. The FileSystemEffectivePhysicalBytesPerSectorForAtomicity member will not have a value initialized by the file system when this structure is returned from FsRtlGetSectorSizeInformation. A file system driver will typically call this function and then set its own value for FileSystemEffectivePhysicalBytesPerSectorForAtomicity.
Your principal error is that you try to get the physical sector size from a volume handle rather than from a handle to the underlying physical device (\\.\PhysicalDriveX). A device's physical sector size doesn't depend on the file system and shouldn't be confused with the logical sector size defined by the file system's properties.
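To illustrate the handle change only, here is a rough sketch of the same IOCTL against the physical drive (in Python with ctypes rather than Delphi; the constants and structure layout are transcribed from winioctl.h and are worth double-checking, and opening the device may require sufficient privileges on some systems):

import ctypes
from ctypes import wintypes

# Values transcribed from winioctl.h (verify against your SDK headers).
IOCTL_STORAGE_QUERY_PROPERTY = 0x002D1400
StorageAccessAlignmentProperty = 6   # STORAGE_PROPERTY_ID
PropertyStandardQuery = 0            # STORAGE_QUERY_TYPE

FILE_SHARE_READ = 0x00000001
FILE_SHARE_WRITE = 0x00000002
OPEN_EXISTING = 3
INVALID_HANDLE_VALUE = ctypes.c_void_p(-1).value

class STORAGE_PROPERTY_QUERY(ctypes.Structure):
    _fields_ = [("PropertyId", wintypes.DWORD),
                ("QueryType", wintypes.DWORD),
                ("AdditionalParameters", ctypes.c_byte * 1)]

class STORAGE_ACCESS_ALIGNMENT_DESCRIPTOR(ctypes.Structure):
    _fields_ = [("Version", wintypes.DWORD),
                ("Size", wintypes.DWORD),
                ("BytesPerCacheLine", wintypes.DWORD),
                ("BytesOffsetForCacheAlignment", wintypes.DWORD),
                ("BytesPerLogicalSector", wintypes.DWORD),
                ("BytesPerPhysicalSector", wintypes.DWORD),
                ("BytesOffsetForSectorAlignment", wintypes.DWORD)]

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.CreateFileW.restype = wintypes.HANDLE
kernel32.DeviceIoControl.argtypes = [
    wintypes.HANDLE, wintypes.DWORD,
    wintypes.LPVOID, wintypes.DWORD,
    wintypes.LPVOID, wintypes.DWORD,
    ctypes.POINTER(wintypes.DWORD), wintypes.LPVOID]
kernel32.CloseHandle.argtypes = [wintypes.HANDLE]

def physical_sector_size(drive_number=0):
    # Key point: open the physical device, not the logical volume.
    h = kernel32.CreateFileW(r"\\.\PhysicalDrive%d" % drive_number, 0,
                             FILE_SHARE_READ | FILE_SHARE_WRITE,
                             None, OPEN_EXISTING, 0, None)
    if h == INVALID_HANDLE_VALUE:
        raise ctypes.WinError(ctypes.get_last_error())
    try:
        query = STORAGE_PROPERTY_QUERY(StorageAccessAlignmentProperty,
                                       PropertyStandardQuery)
        desc = STORAGE_ACCESS_ALIGNMENT_DESCRIPTOR()
        returned = wintypes.DWORD(0)
        if not kernel32.DeviceIoControl(h, IOCTL_STORAGE_QUERY_PROPERTY,
                                        ctypes.byref(query), ctypes.sizeof(query),
                                        ctypes.byref(desc), ctypes.sizeof(desc),
                                        ctypes.byref(returned), None):
            raise ctypes.WinError(ctypes.get_last_error())
        return desc.BytesPerPhysicalSector
    finally:
        kernel32.CloseHandle(h)

print(physical_sector_size(0))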
Google provides a nice example of getting a TextAd via the API: https://code.google.com/p/google-api-adwords-php/source/browse/examples/v201209/BasicOperations/GetTextAds.php
I expected that getting a DynamicSearchAd would be as easy as modifying line 54 to:
$selector->predicates[] = new Predicate('AdType', 'IN', array('TEXT_AD', 'DYNAMIC_SAERCH_AD'));
However, for a campaign with a bunch of negative keywords, zero positive keywords, and a bunch of ads [visible in the interface], my result is a bunch of negative keywords and 0 ads, as if they didn't exist. I have googled for quite a long time already, but the most recent post about keywordless ads is from 2012, and since then I believe DynamicSearchAds have come out of beta and are now available to everyone.
I played around a bit with the sample, changing fields [like removing Headline and leaving only Id, etc.], without success.
So my question is: how should I modify this example to obtain DSAs?
You've got a misprint in the constant in your Predicate:
'DYNAMIC_SAERCH_AD' must be 'DYNAMIC_SEARCH_AD'.