The size property of FileAttachment is larger than the actual size of the binary content.
{
  "@odata.type": "#microsoft.graph.fileAttachment",
  "@odata.mediaContentType": "application/octet-stream",
  "id": "<the attachment id>",
  "lastModifiedDateTime": "2020-10-26T09:57:36Z",
  "name": "test.bin",
  "contentType": "application/octet-stream",
  "size": 245,
  "isInline": false,
  "contentId": null,
  "contentLocation": null,
  "contentBytes": "aGVsbG8gd29ybGQ="
}
Here is a piece of Java code that creates the above JSON data:
IGraphServiceClient graphClient = GraphServiceClient.builder()
        .authenticationProvider(provider)
        .buildClient();

FileAttachment attachment = new FileAttachment();
attachment.oDataType = "#microsoft.graph.fileAttachment";
attachment.name = "test.bin";
attachment.contentBytes = "hello world".getBytes();

Attachment att = graphClient.users().byId(userId)
        .messages().byId(mailId)
        .attachments()
        .buildRequest()
        .post(attachment);
System.out.println(att.size);
The attachment size in Graph should just be the PidTagAttachSize property (https://learn.microsoft.com/en-us/openspecs/exchange_server_protocols/ms-oxprops/917d8d18-adee-4f14-9f2a-9a1d37fff41e). The size returned is made up of the size of all the attachment properties in the Attachment object, not just the underlying attachment content, so it will always be bigger than the actual attachment size. There is another description at https://learn.microsoft.com/en-us/office/client-developer/outlook/mapi/pidtagattachsize-canonical-property:
This property can be used to check the approximate size of the attachment before performing a remote transfer by modem and to display progress indicators when saving the attachment to disk. It is particularly useful with attached OLE objects.
Given that it talks about modems, it's a little dated.
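You can see the gap concretely by decoding contentBytes and comparing it against the reported size; a quick sketch using the values from the JSON above:

```python
import base64

# decode the contentBytes value from the attachment JSON above
content = base64.b64decode("aGVsbG8gd29ybGQ=")
print(content)             # b'hello world'
print(len(content))        # 11 bytes of actual binary payload
print(245 - len(content))  # 234 bytes of per-property overhead counted in "size"
```

The reported size of 245 is more than twenty times the 11-byte payload here, which matches the PidTagAttachSize behavior described above.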
I want to write a Telegraf config file which will:
Use the openweathermap input or a custom HTTP request result:
{
  "fields": {
    ...
    "humidity": 97,
    "temperature": -11.34,
    ...
  },
  "name": "weather",
  "tags": {...},
  "timestamp": 1675786146
}
Split the result into two similar JSONs:
{
  "sensorID": "owm",
  "timestamp": 1675786146,
  "value": 97,
  "type": "humidity"
}
and
{
  "sensorID": "owm",
  "timestamp": 1675786146,
  "value": -11.34,
  "type": "temperature"
}
Send these JSONs to an MQTT queue
Is this possible, or must I create two different configs and make two API calls?
I found the following configuration, which solves my problem:
[[outputs.mqtt]]
  servers = ["${MQTT_URL}"]
  topic_prefix = "owm/data"
  data_format = "json"
  json_transformation = '{"sensorID": "owm", "type": "temperature", "value": fields.main_temp, "timestamp": timestamp}'

[[outputs.mqtt]]
  servers = ["${MQTT_URL}"]
  topic_prefix = "owm/data"
  data_format = "json"
  json_transformation = '{"sensorID": "owm", "type": "humidity", "value": fields.main_humidity, "timestamp": timestamp}'

[[inputs.http]]
  urls = [
    "https://api.openweathermap.org/data/2.5/weather?lat=${LAT}&lon=${LON}&appid=${API_KEY}&units=metric"
  ]
  data_format = "json"
Here we:
Retrieve data from OWM in the input plugin.
Transform the received data structure into the needed structure in two different output plugins. We use the JSONata language (https://jsonata.org/) for this.
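To make the two transformations concrete, here is a small Python sketch that applies the same mapping outside Telegraf. The field names (main_temp, main_humidity) are taken from the config above; the sample metric shape is an assumption based on Telegraf's JSON-parsed OWM response:

```python
import json

def split_metric(metric):
    # mirror the two json_transformation expressions: one output document
    # per measurement type, each picking a single field from the metric
    docs = []
    for mtype, field in (("temperature", "main_temp"), ("humidity", "main_humidity")):
        docs.append({
            "sensorID": "owm",
            "type": mtype,
            "value": metric["fields"][field],
            "timestamp": metric["timestamp"],
        })
    return docs

# sample metric shaped like the Telegraf-parsed OWM response
metric = {
    "name": "weather",
    "fields": {"main_temp": -11.34, "main_humidity": 97},
    "tags": {},
    "timestamp": 1675786146,
}
for doc in split_metric(metric):
    print(json.dumps(doc))
```

Each [[outputs.mqtt]] block performs one of these mappings, which is why a single input and two outputs are enough: no second API call is needed.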
In our application we are using Datadog for monitoring. We track memory usage and send alert notifications to a Slack channel. The monitoring config file is below; I have added the formula for numerator_metric, but for denominator_metric the value should be 1700. I can't add the value directly there, so instead I need to create a metric that always returns 1700.
{
  "type": "metric_ratio",
  "comparison": ">",
  "numerator_metric": "aws.lambda.enhanced.max_memory_used{functionname:sample-function}",
  "denominator_metric": "",
  "warning_threshold": 70,
  "target_threshold": 75,
  "description": "Increased memory usage [prod]",
  "monitor_timeframe": "1h",
  "slo_timeframe": "7d",
  "name": "Memory Usage",
  "notify": [
    { "platform": "slack", "reference": "slack-channel" }
  ]
}
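One common way to get a series with a constant value is to emit a gauge at that value on a schedule (e.g. from a cron job or the Lambda itself) via the local Datadog agent's DogStatsD port. This is only a sketch under that assumption; the metric name lambda.memory_limit_mb is made up and the scheduling is up to you:

```python
import socket

def dogstatsd_gauge(name, value, tags=()):
    # build a DogStatsD plain-text datagram: "<name>:<value>|g|#tag1:v1,tag2:v2"
    payload = f"{name}:{value}|g"
    if tags:
        payload += "|#" + ",".join(tags)
    return payload

# hypothetical constant metric that always reports 1700
payload = dogstatsd_gauge("lambda.memory_limit_mb", 1700, ("functionname:sample-function",))
print(payload)  # lambda.memory_limit_mb:1700|g|#functionname:sample-function

# send to the local agent's DogStatsD port (8125 by default)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(payload.encode("ascii"), ("127.0.0.1", 8125))
sock.close()
```

Once the agent has ingested a few points, the metric name can be used in denominator_metric like any other metric.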
iTunes/Apple Music allows adding multiple cover artworks to an MP3 file. The first image is in a column named "Album Artwork" and the remaining ones are in the column "Other Artwork", as shown in the image below:
I'm trying to retrieve all of these artworks in Dart, but when reading the ID3 tags I can only see the first artwork. I'm certain that the other artworks are stored in the MP3 file, as its file size increases when I add additional artworks.
How can I retrieve the other artworks from an MP3 file?
Below is my code and JSON showing all available ID3 tags data.
import 'dart:convert';
import 'dart:io' as io;

import 'package:id3/id3.dart';

void main(List<String> args) {
  var mp3 = io.File("./lib/song2.mp3");
  MP3Instance mp3instance = MP3Instance(mp3.readAsBytesSync());
  mp3instance.parseTagsSync();

  Map<String, dynamic>? id3Tags = mp3instance.getMetaTags();
  print(id3Tags!['APIC'].keys);
  print(id3Tags['APIC']['mime']);

  io.File outputFile = io.File("lib/id3TagsOutput.txt");
  // use the sync variant so the write completes before main returns
  outputFile.writeAsStringSync(jsonEncode(id3Tags));
}
Output:
{
  "Version": "v2.3.0",
  "Album": "God of War (PlayStation Soundtrack)",
  "Accompaniment": "Bear McCreary",
  "Artist": "Bear McCreary",
  "AdditionalInfo": "WWW",
  "Composer": "Bear McCreary",
  "Year": "2018",
  "TPOS": "1",
  "Genre": "Game Soundtrack",
  "Conductor": "London Session Orchestra, Schola Cantorum Choir, London Voices",
  "Title": "God of War",
  "Track": "1",
  "Settings": "Lavf57.56.100",
  "APIC": {
    "mime": "image/png",
    "textEncoding": "0",
    "picType": "FrontCover",
    "description": "",
    "base64": "//VERY LONG BASE 64 DATA"
  }
}
I've found a solution to this issue.
ID3 tags are stored in different frames (information blocks). Image frames are named "APIC", and when an MP3 file has multiple images, an APIC frame is added for each image.
The package I'm using, package:id3/id3.dart, stores the decoded tag information in a map. The map has a key "APIC" for storing image data, and every time the parser comes across an "APIC" frame it overwrites the data from the previously found image.
I've amended the source code of the library to store a list under the "APIC" key:
MP3Instance(List<int> mp3Bytes) {
  this.mp3Bytes = mp3Bytes;
  metaTags['APIC'] = [];
}
and in the function parseTagsSync I've changed the line metaTags['APIC'] = apic; to metaTags['APIC'].add(apic);. This change allows the library to store all APIC data.
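The underlying idea is independent of the Dart package: an ID3v2 tag is a header followed by a flat list of frames, and a parser has to collect every APIC frame rather than keep only the last one. A minimal stdlib-only sketch in Python, assuming a well-formed ID3v2.3 tag (the helper names and the synthetic test tag are mine):

```python
import struct

def syncsafe(n):
    # encode a 28-bit length as four 7-bit bytes (ID3v2 syncsafe integer)
    return bytes([(n >> 21) & 0x7F, (n >> 14) & 0x7F, (n >> 7) & 0x7F, n & 0x7F])

def make_frame(frame_id, payload):
    # ID3v2.3 frame: 4-byte ID, 4-byte big-endian size, 2 flag bytes, payload
    return frame_id.encode("latin-1") + struct.pack(">I", len(payload)) + b"\x00\x00" + payload

def make_tag(frames):
    body = b"".join(frames)
    # ID3v2.3 header: "ID3", version 3.0, flags 0, syncsafe body size
    return b"ID3\x03\x00\x00" + syncsafe(len(body)) + body

def all_frames(data, frame_id):
    # walk the frame list and collect every frame with the given ID,
    # instead of overwriting with the last one seen
    assert data[:3] == b"ID3"
    size = (data[6] << 21) | (data[7] << 14) | (data[8] << 7) | data[9]
    pos, end, found = 10, 10 + size, []
    while pos + 10 <= end:
        fid = data[pos:pos + 4].decode("latin-1")
        fsize = struct.unpack(">I", data[pos + 4:pos + 8])[0]
        if fsize == 0:
            break  # reached zero-padding at the end of the tag
        if fid == frame_id:
            found.append(data[pos + 10:pos + 10 + fsize])
        pos += 10 + fsize
    return found

# a tag with two cover images: both APIC frames must survive parsing
tag = make_tag([make_frame("APIC", b"\x00image/png\x00\x03\x00PNG1"),
                make_frame("APIC", b"\x00image/png\x00\x00\x00PNG2")])
print(len(all_frames(tag, "APIC")))  # 2
```

This mirrors the fix above: the map-with-one-key design is the only thing that loses data, not the file format.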
I want to set an example using an image file from my local machine.
Here is my code:
@ApiImplicitParams({
    @ApiImplicitParam(
        name = "username",
        required = true,
        paramType = "form",
        dataType = "String",
        value = "Username Customer Owner",
        example = "![alt text](C:\\Users\\user\\Postman\\files\\KTPHD.jpeg)"
    ),
As you can see, I'm trying to use my local JPEG image as the example value.
I'm lost here.
Swagger UI does not support example values for file uploads.
I'm trying to use OCR to extract only the base dimensions of a CAD model, but there are other associative dimensions that I don't need (like angles, length from baseline to hole, etc). Here is an example of a technical drawing. (The numbers in red circles are the base dimensions, the rest in purple highlights are the ones to ignore.) How can I tell my program to extract only the base dimensions (the height, length, and width of a block before it goes through the CNC)?
The issue is that the drawings I get are not in a specific format, so I can't tell the OCR where the dimensions are. It has to figure it out on its own, contextually.
Should I train the program through machine learning by running several iterations and correcting it? If so, what methods are there? The only thing I can think of is OpenCV cascade classifiers.
Or are there other methods to solving this problem?
Sorry for the long post. Thanks.
I feel you... it's a very tricky problem, and we spent the last three years finding a solution for it. Forgive me for mentioning our own solution, but it will certainly solve your problem: pip install werk24
from werk24 import Hook, W24AskVariantMeasures
from werk24.models.techread import W24TechreadMessage
from werk24.utils import w24_read_sync

from . import get_drawing_bytes  # define your own


def recv_measures(message: W24TechreadMessage) -> None:
    for cur_measure in message.payload_dict.get('measures'):
        print(cur_measure)


if __name__ == "__main__":
    # define what information you want to receive from the API
    # and what shall be done when the info is available
    hooks = [Hook(ask=W24AskVariantMeasures(), function=recv_measures)]

    # submit the request to the Werk24 API
    w24_read_sync(get_drawing_bytes(), hooks)
For your example, it will return measures such as the following:
{
  "position": <STRIPPED>,
  "label": {
    "blurb": "ø30 H7 +0.0210/0",
    "quantity": 1,
    "size": {
      "blurb": "30",
      "size_type": "DIAMETER",
      "nominal_size": "30.0"
    },
    "unit": "MILLIMETER",
    "size_tolerance": {
      "toleration_type": "FIT_SIZE_ISO",
      "blurb": "H7",
      "deviation_lower": "0.0",
      "deviation_upper": "0.0210",
      "fundamental_deviation": "H",
      "tolerance_grade": {
        "grade": 7,
        "warnings": []
      }
    },
    "thread": null,
    "chamfer": null,
    "depth": null,
    "test_dimension": null
  },
  "warnings": [],
  "confidence": 0.98810
}
or for a GD&T
{
  "position": <STRIPPED>,
  "frame": {
    "blurb": "[⟂|0.05|A]",
    "characteristic": "⟂",
    "zone_shape": null,
    "zone_value": {
      "blurb": "0.05",
      "width_min": 0.05,
      "width_max": null,
      "extend_quantity": null,
      "extend_shape": null,
      "extend": null,
      "extend_angle": null
    },
    "zone_combinations": [],
    "zone_offset": null,
    "zone_constraint": null,
    "feature_filter": null,
    "feature_associated": null,
    "feature_derived": null,
    "reference_association": null,
    "reference_parameter": null,
    "material_condition": null,
    "state": null,
    "data": [
      {
        "blurb": "A"
      }
    ]
  }
}
Check the documentation on Werk24 for details.
Although it is a managed offering, Mixpeek has a free option:
pip install mixpeek
from mixpeek import Mixpeek

mix = Mixpeek(api_key="my-api-key")

mix.upload(file_name="design_spec.dwg", file_path="s3://design_spec_1.dwg")
This /upload endpoint will extract the contents of your DWG file, then when you search for terms it will include the file_path so you can render it in your HTML.
Behind the scenes it uses the open source LibreDWG library to run a number of AutoCAD native commands such as DATAEXTRACTION.
Now you can search for a term and the relevant DWG file (in addition to the context in which it exists) will be returned:
mix.search(query="retainer", include_context=True)
[
  {
    "file_id": "6377c98b3c4f239f17663d79",
    "filename": "design_spec.dwg",
    "context": [
      {
        "texts": [
          {
            "type": "text",
            "value": "DV-34-"
          },
          {
            "type": "hit",
            "value": "RETAINER"
          },
          {
            "type": "text",
            "value": "."
          }
        ]
      }
    ],
    "importance": "100%",
    "static_file_url": "s3://design_spec_1.dwg"
  }
]
More documentation here: https://docs.mixpeek.com/