FormulaLengthExceedsLimit error in Microsoft Graph API

I'm using the Microsoft Graph API to add a row to an Excel table.
It's a POST to: https://graph.microsoft.com/v1.0/drives/xx-DriveId-xx/items/xx-File-id/workbook/worksheets/xx-WorkSheetid-xx/tables/xx-Tableid-xx/rows/add
With a payload like this:
{"values":[["=HYPERLINK(\"https://www.nu.nl/\",\"NU - Het laatste nieuws het eerst op NU.nl\")","","","Nieuw","","","","","","","09-30-2022","","","","","","","","","https://www.nu.nl/","","",""]],"index":null}
When I post this payload I'm getting this error:
"code": "FormulaLengthExceedsLimit",
"message": "The byte code of the applied formula exceeds the maximum length limit. For Office on 32-bit computers, the bytecode length limit is 16384 characters. On 64-bit computers, the bytecode length limit is 32768 characters. This error occurs in Excel on the web and on the desktop.",
"innerError": {
"code": "formulaLengthExceedsLimit",
"message": "The byte code of the applied formula exceeds the maximum length limit. For Office on 32-bit computers, the bytecode length limit is 16384 characters. On 64-bit computers, the bytecode length limit is 32768 characters. This error occurs in Excel on the web and on the desktop.",
"date": "2022-09-30T10:38:11",
"request-id": "8d111ca9-f7a7-4a99-9417-3dad672e82cd",
"client-request-id": "8d111ca9-f7a7-4a99-9417-3dad672e82cd"
But if another user posts exactly the same thing, the post succeeds and there will be an extra row.
It has worked for me in the past, but it doesn't anymore.
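For reference, the call is issued roughly like the sketch below (Python with the requests library; the access token, the drive/item/worksheet/table IDs, and the trimmed values list are placeholders, not taken from the question, and the list must match the table's actual column count):

import requests

# Placeholder values -- substitute a real access token and real IDs.
token = "xx-AccessToken-xx"
url = ("https://graph.microsoft.com/v1.0/drives/xx-DriveId-xx/items/xx-File-id"
       "/workbook/worksheets/xx-WorkSheetid-xx/tables/xx-Tableid-xx/rows/add")

payload = {
    "values": [[
        '=HYPERLINK("https://www.nu.nl/","NU - Het laatste nieuws het eerst op NU.nl")',
        "", "", "Nieuw", "09-30-2022", "https://www.nu.nl/"
    ]],
    "index": None,
}

# The JSON body is posted as-is; Graph evaluates the first cell as a formula.
response = requests.post(
    url,
    json=payload,
    headers={"Authorization": f"Bearer {token}"},
)
print(response.status_code, response.json())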

Related

Push notification not delivered if character limit exceeds 80 characters

For sending push notifications, we have used Amazon Simple Notification Service (Amazon SNS). When I test a push notification with around 80 characters, I receive it, but when the text exceeds 80 or 85 characters, the notification is not delivered.
We have a limit of 256 bytes for the payload, but I don't think that many characters should exceed that limit. At the very least, the message should be truncated.
I have found that:
Prior to iOS 7 the alerts display limit was 107 characters. Bigger messages were truncated and you would get a "..." at the end of the displayed message. With iOS 7 the limit seems to be increased to 235 characters. If you go over 8 lines your message will also get truncated.
But in my case, I don't even get the notification. Is it something related to Amazon SNS? Am I missing something I should check?
EDIT 1:
I am not attaching image or anything with text message. I just send plain text message.
EDIT 2:
In iOS 8 and later, the maximum size allowed for a notification payload is 2 kilobytes; Apple Push Notification service refuses any notification that exceeds this limit. (Prior to iOS 8 and in OS X, the maximum payload size is 256 bytes.)
I have a device with iOS 9 installed, so for that device the 2-kilobyte limit is far more room than 80-85 characters, even including the rest of the payload.
I am really desperate to find out what I am missing.
You should remember that the 256-byte limit applies to the entire payload, not only your message: the payload is JSON, so the keys and all special characters also count toward the limit.
This is the minimal payload required by Apple to be considered correct:
{
    "aps" : {
        "alert" : "your text"
    }
}
So we already "loose" 19 bytes, to send a simple notification. If we want to have also a custom title :
{
    "aps" : {
        "alert" : {
            "title" : "your title",
            "body" : "your text"
        }
    }
}
This adds up to 40 "lost" bytes (about 15%). Adding custom sounds and badges will also decrease the count left for the actual message.
Now, these bytes are lost only due to the required keys, and there is not much you can do about it. I haven't used Amazon SNS, but they may be adding some custom fields for their own purposes, leaving you with less space for the message. You can inspect this in your didReceiveRemoteNotification method by examining the userInfo dictionary. A simple NSLog(@"userInfo -> %@", userInfo) should dump all contents to the console. This representation won't be 1:1 with the JSON in terms of extra characters, but it will give you an idea of what else, if anything, apart from the required fields is sent.
Another thing worth mentioning is that non-ASCII characters take more than one byte each, so you can effectively use fewer characters for your message.
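If you want to sanity-check how much of the 256-byte (or, on iOS 8+, 2 KB) budget a given message consumes, a minimal sketch like the following works (Python; the limits and sample strings are just illustrative):

import json

LIMIT = 256  # pre-iOS 8 limit; use 2048 for iOS 8 and later

def payload_size(message, title=None):
    """Build a bare APNs payload and return its size in UTF-8 bytes."""
    alert = message if title is None else {"title": title, "body": message}
    payload = {"aps": {"alert": alert}}
    # Compact separators mimic an encoding without extra whitespace.
    encoded = json.dumps(payload, separators=(",", ":"), ensure_ascii=False)
    return len(encoded.encode("utf-8"))

print(payload_size("your text"))                      # 29 bytes with compact separators
print(payload_size("your text", title="your title"))
print(payload_size("a" * 240) <= LIMIT)               # False: the JSON envelope pushes 240 characters over 256 bytes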

Twilio queue overflow error: how large is the queue?

Twilio's Message resource has a "status" property that indicates whether an SMS message is "queued", "sending", "failed", etc. If a Message instance failed to deliver, one possible error message is "Queue overflow". In the Twilio documentation, the description for this error case is: "You tried to send too many messages too quickly and your message queue overflowed. Try sending your message again after waiting some time."
Is the queue referenced in error code 30001 an instance of this resource? https://www.twilio.com/docs/api/rest/queue
Or is the queue (in the case of a 30001 error code) something that Twilio maintains on their end? If Twilio does throttling behind the scenes (queueing SMS messages per sending phone number), what is the size of that queue? How much would we have to exceed the rate limit (per phone number) before the queue overflow referenced in error code 30001 occurs?
Emily, the message queue is not related to the Queue resource you linked to above; it is something maintained on our end.
Twilio can queue up to 4 hours of SMS. Since we send out 1 SMS per second, this means the queue holds at most 14,400 messages; any messages queued beyond that will fail with the 30001 queue overflow error and will not be sent. This info is for long code numbers. The link above explains processing for other scenarios.
A few suggestions to avoid the error:
Keep messages to at most 160 characters if possible. If that's not possible, calculate how many SMS segments each message will use (if you are not sure, you can always send one test message and see how much you are charged for it).
Based on the assumption that your messages are 160 characters, throttle the sending rate to 3,600 messages per hour (1 message/sec * 60 sec/min * 60 min/hr); a small throttling sketch follows below.
Please let me know if you've got any other questions.
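For illustration only (not part of Twilio's answer), a minimal throttling sketch in Python that keeps the outgoing rate at roughly one message per second; send_sms is a placeholder for whatever client call you actually use:

import time

def send_throttled(messages, send_sms, per_second=1):
    """Send messages at a fixed rate so the carrier-side queue never overflows."""
    interval = 1.0 / per_second
    for body in messages:
        started = time.monotonic()
        send_sms(body)  # placeholder: e.g. a Twilio REST client call
        # Sleep off whatever time remains in this message's one-second slot.
        elapsed = time.monotonic() - started
        if elapsed < interval:
            time.sleep(interval - elapsed)

# Example usage with a dummy sender:
send_throttled(["hello", "world"], send_sms=print)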
Each of the Twilio phone numbers (senders) has a separate queue in which 14,400 (4 hr x 60 min x 60 sec) message segments can be queued. One segment is sent per second.
What is a message segment?
A message segment is not a complete message but a part of a message. Normally SMS is sent in terms of message segments and all message segments are combined on the user's mobile to create the actual SMS.
Twilio message segment configuration:
1 character = 8 bits (1 byte)
GSM encoding = 7 bits per character
UCS-2 encoding = 16 bits per character
Data header = 6 bytes per segment
Summary: each character takes 8 bits by default. If GSM encoding is used, each character takes 7 bits; if UCS-2 encoding is used, each character takes 16 bits. In the case of multiple segments, 6 bytes per segment are used for data headers (responsible for combining all segments of the same SMS on the user's mobile).
Characters per message segment:
GSM encoding, single segment = (140 bytes x 8 bits) / 7 bits = 160 characters
UCS-2 encoding, single segment = (140 bytes x 8 bits) / 16 bits = 70 characters
GSM encoding, multiple segments = ((140 bytes - 6 header bytes) x 8 bits) / 7 bits = 153 characters
UCS-2 encoding, multiple segments = ((140 bytes - 6 header bytes) x 8 bits) / 16 bits = 67 characters
Based on which encoding is used for your message (check via Twilio Admin), you can calculate how many SMS messages can be in the queue at a time.
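As a rough illustration of that arithmetic, here is a sketch that estimates segment counts using the GSM-7 vs UCS-2 rules above (Python; the basic character set is approximate, and extended GSM characters that occupy two septets are ignored):

GSM_BASIC = set(
    "@£$¥èéùìòÇ\nØø\rÅåΔ_ΦΓΛΩΠΨΣΘΞÆæßÉ !\"#¤%&'()*+,-./0123456789:;<=>?"
    "¡ABCDEFGHIJKLMNOPQRSTUVWXYZÄÖÑÜ§¿abcdefghijklmnopqrstuvwxyzäöñüà"
)

def segment_count(text):
    """Estimate how many SMS segments a message body will use."""
    if all(ch in GSM_BASIC for ch in text):
        single, multi = 160, 153   # GSM-7: 160 chars, or 153 per segment with headers
    else:
        single, multi = 70, 67     # UCS-2: 70 chars, or 67 per segment with headers
    if len(text) <= single:
        return 1
    return -(-len(text) // multi)  # ceiling division

print(segment_count("hello"))         # 1
print(segment_count("a" * 161))       # 2 (GSM-7, multi-segment)
print(segment_count("héllo ✈" * 30))  # UCS-2 because of the airplane character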
References:
https://support.twilio.com/hc/en-us/articles/115002943027-Understanding-Twilio-Rate-Limits-and-Message-Queues
https://www.twilio.com/blog/2017/03/what-the-heck-is-a-segment.html

Error in file.read() return above 2 GB on 64-bit Python

I have several ~50 GB text files that I need to parse for specific contents. My files' contents are organized in 4-line blocks. To perform this analysis I read in subsections of the file using file.read(chunk_size), split them into blocks of 4 lines, and analyze them.
Because I run this script often, I've been optimizing it and have tried varying the chunk size. I run 64-bit Python 2.7.1 on OS X Lion on a computer with 16 GB of RAM, and I noticed that when I load chunks >= 2^31, instead of the expected text I get large amounts of '\x00' repeated. This continues, as far as my testing has shown, all the way up to and including 2^32, after which I once again get text. However, it seems that it's only returning as many characters as bytes have been added to the buffer above 4 GB.
My test code:
for i in range((2**31)-3, (2**31)+3) + range((2**32)-3, (2**32)+10):
    with open('mybigtextfile.txt', 'rU') as inf:
        print '%s\t%r' % (i, inf.read(i)[0:10])
My output:
2147483645 '#HWI-ST550'
2147483646 '#HWI-ST550'
2147483647 '#HWI-ST550'
2147483648 '\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
2147483649 '\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
2147483650 '\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
4294967293 '\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
4294967294 '\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
4294967295 '\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
4294967296 '\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
4294967297 '#\x00\x00\x00\x00\x00\x00\x00\x00\x00'
4294967298 '#H\x00\x00\x00\x00\x00\x00\x00\x00'
4294967299 '#HW\x00\x00\x00\x00\x00\x00\x00'
4294967300 '#HWI\x00\x00\x00\x00\x00\x00'
4294967301 '#HWI-\x00\x00\x00\x00\x00'
4294967302 '#HWI-S\x00\x00\x00\x00'
4294967303 '#HWI-ST\x00\x00\x00'
4294967304 '#HWI-ST5\x00\x00'
4294967305 '#HWI-ST55\x00'
What exactly is going on?
Yes, this is a known issue according to a comment in CPython's source code. You can check it in Modules/_io/fileio.c. The code adds a workaround on 64-bit Microsoft Windows only.
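A common workaround (not from the original answer, just a sketch) is to cap each individual read() well below 2^31 bytes and accumulate the pieces yourself:

ONE_GIB = 2 ** 30  # stay far below the 2**31 threshold that triggers the bug

def read_big_chunk(infile, chunk_size):
    """Read chunk_size bytes via multiple smaller read() calls."""
    pieces = []
    remaining = chunk_size
    while remaining > 0:
        piece = infile.read(min(remaining, ONE_GIB))
        if not piece:          # EOF reached before chunk_size bytes
            break
        pieces.append(piece)
        remaining -= len(piece)
    return ''.join(pieces)

with open('mybigtextfile.txt', 'rU') as inf:
    chunk = read_big_chunk(inf, 2 ** 31)
    print(chunk[0:10])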

websocket client packet unframe/unmask

I am trying to implement the latest WebSocket spec. However, I am unable to get through the unmasking step after a successful handshake.
I receive the following WebSocket frame:
<<129,254,1,120,37,93,40,60,25,63,71,88,92,125,80,81,73,
51,91,1,2,53,92,72,85,103,7,19,79,60,74,94,64,47,6,83,
87,58,7,76,87,50,92,83,70,50,68,19,77,41,92,76,71,52,
70,88,2,125,90,85,65,96,15,14,20,107,31,14,28,100,27,9,
17,122,8,72,74,96,15,86,68,37,68,18,76,48,15,28,93,48,
68,6,73,60,70,91,24,122,77,82,2,125,80,81,85,45,18,74,
64,47,91,85,74,51,21,27,20,115,24,27,5,37,69,80,75,46,
18,68,72,45,88,1,2,40,90,82,31,37,69,76,85,103,80,94,
74,46,64,27,5,60,75,87,24,122,25,27,5,47,71,73,81,56,
21,27,93,48,88,76,31,57,77,74,11,55,73,68,73,115,65,81,
31,104,26,14,23,122,8,75,68,52,92,1,2,110,24,27,5,53,
71,80,65,96,15,13,2,125,75,83,75,41,77,82,81,96,15,72,
64,37,92,19,93,48,68,7,5,62,64,93,87,46,77,72,24,40,92,
90,8,101,15,28,83,56,90,1,2,108,6,13,21,122,8,82,64,42,
67,89,92,96,15,93,19,56,28,8,65,101,31,94,16,105,28,10,
20,56,30,14,65,56,27,93,71,106,16,11,17,63,25,4,17,57,
73,89,17,59,29,88,29,106,24,27,5,46,65,72,64,54,77,69,
24,122,66,93,93,49,5,12,8,109,15,28,76,59,90,93,72,56,
76,1,2,41,90,73,64,122,8,89,85,50,75,84,24,122,25,15,
23,105,25,5,19,106,26,14,20,111,25,27,5,53,77,85,66,53,
92,1,2,110,26,13,2,125,95,85,65,41,64,1,2,108,27,10,19,
122,7,2>>
As per the base framing protocol defined here (https://datatracker.ietf.org/doc/html/draft-ietf-hybi-thewebsocketprotocol-17#section-5.2) I have:
fin:1, rsv:0, opcode:1, mask:1, length:126
Masked application+payload data comes out to be:
<<87,58,7,76,87,50,92,83,70,50,68,19,77,41,92,76,71,52,70,88,2,125,90,85,65,96,
15,14,20,107,31,14,28,100,27,9,17,122,8,72,74,96,15,86,68,37,68,18,76,48,15,
28,93,48,68,6,73,60,70,91,24,122,77,82,2,125,80,81,85,45,18,74,64,47,91,85,
74,51,21,27,20,115,24,27,5,37,69,80,75,46,18,68,72,45,88,1,2,40,90,82,31,37,
69,76,85,103,80,94,74,46,64,27,5,60,75,87,24,122,25,27,5,47,71,73,81,56,21,
27,93,48,88,76,31,57,77,74,11,55,73,68,73,115,65,81,31,104,26,14,23,122,8,75,
68,52,92,1,2,110,24,27,5,53,71,80,65,96,15,13,2,125,75,83,75,41,77,82,81,96,
15,72,64,37,92,19,93,48,68,7,5,62,64,93,87,46,77,72,24,40,92,90,8,101,15,28,
83,56,90,1,2,108,6,13,21,122,8,82,64,42,67,89,92,96,15,93,19,56,28,8,65,101,
31,94,16,105,28,10,20,56,30,14,65,56,27,93,71,106,16,11,17,63,25,4,17,57,73,
89,17,59,29,88,29,106,24,27,5,46,65,72,64,54,77,69,24,122,66,93,93,49,5,12,8,
109,15,28,76,59,90,93,72,56,76,1,2,41,90,73,64,122,8,89,85,50,75,84,24,122,
25,15,23,105,25,5,19,106,26,14,20,111,25,27,5,53,77,85,66,53,92,1,2,110,26,
13,2,125,95,85,65,41,64,1,2,108,27,10,19,122,7,2>>
While the 32-bit masking key is:
<<37,93,40,60,25,63,71,88,92,125,80,81,73,51,91,1,2,53,92,72,85,103,7,19,79,60,
74,94,64,47,6,83>>
As per https://datatracker.ietf.org/doc/html/draft-ietf-hybi-thewebsocketprotocol-17#section-5.2 :
j = i MOD 4
transformed-octet-i = original-octet-i XOR masking-key-octet-j
however, I don't seem to get back the original octets sent from the client side, which is basically an XML packet. Any direction, corrections, or suggestions are greatly appreciated.
I think you've mis-read the data framing section of the protocol spec.
Your interpretation of the first byte (129) is correct - fin + opcode 1 - final (and first) fragment of a text message.
The next byte (254) implies that the body of the message is masked and that the following 2 bytes provide its length (lengths of 126 or 127 imply longer messages whose lengths can't be represented in 7 bits: 126 means that the following 2 bytes hold the length; 127 means that it's the following 8 bytes).
The following 2 bytes - 1, 120 - imply a message length of 376 bytes.
The following 4 bytes - 37,93,40,60 - are your mask.
The remaining data is your message, which should be transformed as you describe, giving the message:
<body xmlns='http://jabber.org/protocol/httpbind' rid='2167299354' to='jaxl.im' xml:lang='en' xmpp:version='1.0' xmlns:xmpp='urn:xmpp:xbosh' ack='1' route='xmpp:dev.jaxl.im:5222' wait='30' hold='1' content='text/xml; charset=utf-8' ver='1.10' newkey='a6e44d87b54461e62de3ab7874b184dae4f5d870' sitekey='jaxl-0-0' iframed='true' epoch='1324196722121' height='321' width='1366'/>
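A minimal sketch of that parsing and unmasking logic (Python 3; it assumes an unfragmented frame with the 2-byte extended length form, as in the frame above, and skips all validation):

def unmask_frame(frame):
    """Parse a single masked WebSocket frame that uses a 2-byte extended length."""
    fin_opcode = frame[0]               # 129 -> FIN set, opcode 1 (text)
    length_field = frame[1] & 0x7F      # mask bit stripped; 126 here
    assert length_field == 126, "sketch only handles the 2-byte length form"
    payload_len = (frame[2] << 8) | frame[3]   # e.g. 1,120 -> 376
    mask = frame[4:8]                   # the 4-byte masking key
    masked = frame[8:8 + payload_len]
    # transformed-octet-i = original-octet-i XOR masking-key-octet-(i mod 4)
    return bytes(b ^ mask[i % 4] for i, b in enumerate(masked))

# Example with the first bytes from the question (payload truncated):
frame = bytes([129, 254, 1, 120, 37, 93, 40, 60, 25, 63, 71, 88, 92, 125])
print(unmask_frame(frame)[:6])  # b'<body ' -- the start of the BOSH <body .../> element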

Verify that a '*.map' file match a Delphi application

For my program delphi-code-coverage-wizard, I need to verify that a (detailed) mapping file (.map) matches a Delphi application (.exe).
Of course, this verification should be realized with Delphi.
Is there a way to check it? Maybe by verifying some information from the EXE?
I think quite a simple heuristic would be to check that the various sections in the PE file start and finish at the same place as in the map file.
For example, here's the top of a map file:
Start Length Name Class
0001:00401000 000A4938H .text CODE
0002:004A6000 00000C9CH .itext ICODE
0003:004A7000 000022B8H .data DATA
0004:004AA000 000052ACH .bss BSS
0005:00000000 0000003CH .tls TLS
I also looked at what dumpbin /headers had to say about these sections:
SECTION HEADER #1
.text name
A4938 virtual size
1000 virtual address (00401000 to 004A5937)
A4A00 size of raw data
400 file pointer to raw data (00000400 to 000A4DFF)
0 file pointer to relocation table
0 file pointer to line numbers
0 number of relocations
0 number of line numbers
60000020 flags
Code
Execute Read
SECTION HEADER #2
.itext name
C9C virtual size
A6000 virtual address (004A6000 to 004A6C9B)
E00 size of raw data
A4E00 file pointer to raw data (000A4E00 to 000A5BFF)
0 file pointer to relocation table
0 file pointer to line numbers
0 number of relocations
0 number of line numbers
60000020 flags
Code
Execute Read
...truncated
Look at the .text section. According to dumpbin it starts at 00401000 and finishes at 004A5937 which is a length of 000A4938, exactly as in the .map file. Naturally you'd read the PE file directly rather than running dumpbin, but this illustrates the point.
I'd expect a vanishingly small number of false positives with this approach.
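To make the heuristic concrete, here is an illustration only, in Python with the pefile library rather than Delphi; the map parsing assumes the simple "Start Length Name Class" column layout shown above, and real .map files may need more careful handling:

import re
import pefile

def map_section_lengths(map_path):
    """Extract {section name: length} from the map file's segment table."""
    lengths = {}
    row = re.compile(r"^\s*\d{4}:[0-9A-F]{8}\s+([0-9A-F]{8})H\s+(\S+)", re.I)
    with open(map_path) as f:
        for line in f:
            match = row.match(line)
            if match:
                lengths[match.group(2)] = int(match.group(1), 16)
    return lengths

def exe_section_lengths(exe_path):
    """Extract {section name: virtual size} from the PE headers."""
    pe = pefile.PE(exe_path)
    return {
        section.Name.rstrip(b"\x00").decode("ascii"): section.Misc_VirtualSize
        for section in pe.sections
    }

def map_matches_exe(map_path, exe_path):
    """True if every section named in the map has the same length in the EXE."""
    from_map = map_section_lengths(map_path)
    from_exe = exe_section_lengths(exe_path)
    return all(from_exe.get(name) == size for name, size in from_map.items())

print(map_matches_exe("project.map", "project.exe"))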

Resources