Serilog event not saved

I'm trying to log to file/Seq an event that is an API response from a web service. I know it's not best practice, but under some circumstances I need to do so.
The JSON saved on disk is around 400 KB. To be honest, I could exclude two parts of it (images returned as base64); I think I should use destructuring for that, is that right?
I've tried increasing the Seq limit to 1 MB, but the content is not saved even to the log file, so I don't think that's the problem... I use Microsoft Logging (the ILogger interface) with Serilog.AspNetCore.
Is there a way I can handle such a scenario?
Thanks in advance

You can log a serialized value by using the @ destructuring operator on the property name. For example,
Log.Information("Created {@User} on {Created}", exampleUser, DateTime.Now);
As you've noted, it tends to be a bad idea unless you are certain that the value being serialized will always be small and simple.
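If you do destructure the response, you can also keep the base64 images out of the event entirely. Serilog's Destructure.ByTransforming lets you replace a value with a slimmer projection before it is serialized, so the heavy fields never reach the file or Seq sinks. A minimal sketch, assuming a hypothetical ApiResponse type and the Serilog.Sinks.File package:

using Serilog;

public class ApiResponse
{
    public string Status { get; set; }
    public string ImageFrontBase64 { get; set; }  // large base64 payload to drop
    public string ImageBackBase64 { get; set; }   // large base64 payload to drop
}

public static class LoggingSetup
{
    public static Serilog.ILogger Create() =>
        new LoggerConfiguration()
            // Swap ApiResponse for a slim anonymous projection before serialization
            .Destructure.ByTransforming<ApiResponse>(r => new
            {
                r.Status,
                FrontImageLength = r.ImageFrontBase64?.Length ?? 0,
                BackImageLength = r.ImageBackBase64?.Length ?? 0
            })
            .WriteTo.File("log.txt")
            .CreateLogger();
}

With this in place, Log.Information("Received {@Response}", response) logs only the slim shape, regardless of where the logger is called.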


Differences in Umbraco cache structure?

Ok, so I have just spent the last 6-8 weeks in the weeds of Umbraco and have made some fixes/improvements to our site and environments. I have spent a lot of that time trying to correct lower-level Umbraco caching issues. Now, reflecting on my experience, I still don't have a clue what the conceptual differences are between the following:
Examine indexes
umbraco.config
cached XML file in memory (supposedly similar to umbraco.config)
CMSContentXML Table
Thanks Again,
Devin
Examine indexes are indexes of Umbraco content.
Whenever you create/update/delete content, the current content information is indexed.
These indexes are used for searching - under the hood, they are Lucene indexes.
The Umbraco back office uses these indexes for searching.
You can create your own indexes if you want.
For more info, check out Overview & Explanation - "Examining Examine" by Peter Gregory.
umbraco.config and the cached XML in memory are really the same thing.
The front-end UmbracoHelper API gets content from the cache, not the database - the cache is loaded from umbraco.config.
CMSContentXML contains each content item's information as XML,
so essentially this XML represents all the information of a content node.
So in a nutshell, they are really three things:
Examine is used for searching.
umbraco.config caches data - saving a round trip to the DB.
CMSContentXML stores the full information of a content item.
Edit: to include better clarification from Robert Foster's comment, and on UmbracoHelper vs ExamineManager.
For the umbraco.config and CMSContentXML table, @robert-foster commented:
umbraco.config stores the most recent version of all published content only; the in-memory cache is a cached version of this file; and the cmscontentxml table stores a representation of all content and is used primarily for preview mode - it is updated every time a content item is saved. IIRC it also stores a representation of other content types
Regarding UmbracoHelper vs ExamineManager:
The UmbracoHelper API mostly gets its content from the memory cache - IMO it works best when locating direct content, such as when you already know the id of the content you want: you just call Umbraco.TypedContent(id).
But where do you get that id in the first place? Put another way, if you want to find all content whose Title property contains the word "Test", you would use Examine to search for it. Because Examine is really a Lucene wrapper, it is fast and efficient.
Although you can traverse the tree with methods such as Umbraco.TypedContent(id).Children and then use LINQ to filter the result, I think this is done in memory using LINQ-to-objects, so it is not as efficient and performant as Lucene.
So personally I think:
use Examine when you are searching for (locating) content - because you get the capability of a proper search engine, Lucene;
once you have the ids from the search result, use UmbracoHelper to get the full published content representation of each content id as a strongly typed model and work with the data (see the sketch after this list);
one thing @robert-foster mentions in the comments which I did not know is that UmbracoHelper provides a Search method which is a wrapper around Examine, so use that if you are more familiar with that API.
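To make that concrete, here is a minimal sketch of the Examine-then-UmbracoHelper flow, assuming Umbraco 7's Examine API, the default "ExternalSearcher", and a hypothetical "title" field; run it where the Umbraco helper is available (a view or SurfaceController), with using Examine; and using System.Linq; in scope:

// Search with Examine (Lucene) for content whose title contains "Test"
var searcher = ExamineManager.Instance.SearchProviderCollection["ExternalSearcher"];
var criteria = searcher.CreateSearchCriteria();
var query = criteria.Field("title", "Test").Compile();
var results = searcher.Search(query);

// Hydrate the hits into strongly typed IPublishedContent via UmbracoHelper
var pages = results
    .Select(r => Umbraco.TypedContent(r.Id))
    .Where(c => c != null);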
Lastly, if any of the above statements are wrong or not quite correct, help me clarify them so that anyone looking at this later will not get it wrong. Thanks all.

Avoiding repeated type definition for each result when projecting in Breeze.js

I'm currently querying an Entity using projections to avoid returning the entire object.
It works flawlessly, however, when looking at the actual response from the server, I'm seeing the same type definition repeated for every single element.
For example:
["$type":"_IB_4NdB_p8LiaC3WlWHHQ_pZzrAC_plF4[[System.Int32, mscorlib],[System.String, mscorlib],[System.String, mscorlib],[System.String, mscorlib],[System.Nullable`1[[System.Int32, mscorlib]], mscorlib],[System.Int32, mscorlib],[System.Single, mscorlib]], _IB_4NdB_p8LiaC3WlWHHQ_pZzrAC_plF4_IdeaBlade"
Now, given that every item in the result shares the same projection for that query, is there a way to have Breeze define the type description ONCE instead of for every element?
It may not seem like a big deal, but as result size increases those bytes do start to add up. At the moment there is little difference between returning the projected values and the entire entity itself, due to this overhead.
NOTE: As it turns out, since we use dynamic compression of JSON in our real environments, this is actually a minor issue: 200 KB responses turn into less than 20 KB of traffic after gzip compression. I will probably close this question, unless someone has something to add that could be of use to others.
Update 18 September 2014
I decided to "cure" the problem of the long ugly $type names in serialized data for both dynamic types from projection queries and anonymous types created for an endpoint such as "Lookups".
There's a new Breeze Labs NuGet package, "Breeze.DynamicTypeRenaming" (search for "Breeze Dynamic Type Renaming"). This adds two files to your Web API project's "Controllers" folder. One is a CustomBreezeConfig which replaces Breeze's default config and resets the Json.NET "Binder" setting with the new DynamicTypeRenamingSerializationBinder; this binder does the type-name magic.
Just install the NuGet package in your Web API project and it should "just work". In your case, the $type value would become "_IB_4NdB_p8LiaC3WlWHHQ_pZzrAC_plF4, Dynamic".
See an example of it in the "DocCode" sample.
As always, this is a Breeze Labs product, not part of the core Breeze product. It is offered "as is" with no promise of support. I'm pretty sure it's good and has no adverse side effects. No guarantees. I'm sure you'll let me know if there's a problem.
That IS atrocious, isn't it! That's the C#-generated anonymous type. You can get rid of it by casting into a custom DTO type, as sketched below.
I don't know if it is actually harmful. I hate looking at it in any case.
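For instance, a minimal sketch of the DTO approach (UserSummaryDto, _contextProvider, and the entity names are hypothetical; needs System.Linq and Web API in scope):

// Named DTO that replaces the compiler-generated anonymous type
public class UserSummaryDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Web API action projecting into the named type, so the serialized
// $type becomes a readable name instead of the generated one
[HttpGet]
public IQueryable<UserSummaryDto> UserSummaries()
{
    return _contextProvider.Context.Users
        .Select(u => new UserSummaryDto { Id = u.Id, Name = u.Name });
}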
Lately I've been thinking about adding a Json.NET IContractResolver that detects such uglies and turns them into shorter uglies. It wouldn't be hard. I just haven't had the time.
Why not write that yourself and contribute to the community? We'd be grateful! :-)
Using Dynamic Compression of JSON output has turned this into a non-issue, at least for now, since all that repeated content is heavily compressed server-side.

Decoding YouTube's error 500 page

This is YouTube's 500 page. Can anyone help decode this information?
500 Internal Server Error
Sorry, something went wrong.
A team of highly trained monkeys has been dispatched to deal with this situation.
If you see them, show them this information:
AB38WENgKfeJmWntJ8Y0Ckbfab0Fl4qsDIZv_fMhQIwCCsncG2kQVxdnam35
TrYeV6KqrEUJ4wyRq_JW8uFD_Tqp-jEO82tkDfwmnZwZUeIf1xBdHS_bYDi7
6Qh09C567MH_nUR0v93TVBYYKv9tHiJpRfbxMwXTUidN9m9q3sRoDI559_Uw
FVzGhjH5-Rd1GPLDrEkjcIaN_C3xZW80hy0VbJM3UI5EKohX35gZNK2aNi_8
Toi9z3L8lzpFTvz5GyHygFFBFEJpoRRJSu3CbH5S2OxXEVo4HgaaBTV7Dx_1
Zs1HZvPqhPIvXg9ifd4KZJiUJDFS8grPLE7bypFsRamyZw-OCVyUHsGQKBwu
77pTtRwpF3hOxYLxM4KnAyiY1N6yrASSWyaeumRDENAoEEe8i8MRxzifqHuR
leatvNMiwsg1pbSl7IIiaKljZaD9UkRms4Kvz1uYUNk4AwXnJ9-Wq44ufMPl
syiHp_LwaeqyuxXykJMl-SA9p05VrJc4kCETUW3Ybp0yTYvVrqggo56A0ofC
OiyAmifQA9pdYVGeumrQtbFlFyDyG9VKNpzn5lqutxFZPsS8xjiILfF3bETD
H4aUb5fT4iERFsEL7S-ClsXiA4yAJdAcNH-OhGg9ipAaIxRRTOR5P1MYx6s6
-OrqgpT5VEaEx2hMpS1afaMd2_F21sxvcz2d8sCpEceHHSfsntTth6talYeD
4l63aUTbbCKV1lHxKWxdUjACFKRobeAvIpcJPcdHSN3CNQI-LlIWIx9jeyBU
tDcL6S6GpRG_Z2of9fmw0LHpVU5hKlQ3lCPd4pVP6J02yrsBi0S9OLoE9jmM
T2FfCvU1sWUCsrZu4-UPflXMyRnFK8aN8DYiwWWE8OvnLQ-LIaRDhjp29u9a
LT6Lh4KxEmWF5XeZTwrzJxtuDLVomxVD5mpwFvK0YSoaz9dnPGXb0Fm2txSL
BvGssSrWBJ4FeR6eEEkd_UkQ-aUnPv2W-POox17n54wzTwLugYjslRenMzmk
I4_jlXcx9NpKmUg7Pa0qJuaElt-ZymPv6h0cXRUlyZtS0iT9-CQOHWLYMi3I
kKrYa6bKUCAj058JEderSnbXqGEMvwBeZ_xgJpAjJiSgMOxJPokhbS6ezIv4
1JNr_dvQyvu4vh-YQNZ37fNTqQcoDZtYflBsJjuGrJlmIcqBYufB9g6nUaOE
xPAKjPdvZ_z1Rn_8sWVf8NHNBBKGe5lgDgBxypsV0kIwVa9QOlehivOaieBI
tmqHNdQIfdob0XUTEBPSeLj9hmw3Bqplc3gqUfFhIvpHml6dOTbjBhfkq0TE
5yCRHL2VSe2Xt9_i8SPQA2yCtJVO8HP6pnohmxqlBWSTE8Xj87PI6quX7f9i
0W6PdtkMYaGJsd_Ly_4Ag-KmGNHN585tF9eC5HeQ8Gz-vHZWOUiM4OQAG9UA
31ENOAjHtYb--ketbUcdX_FdjGiPtI_GxYeBqEShICotcd-S-E3bEGO-77M2
CuUUdB1AUYVDZR81XejVG5kSWsrz-p1qZ-6sSpSHCp114C6PheQPCwRHEr_1
AS-DkZfIuZ-w8XAo6pHIwvnv0dORSo-hPFgw1rw2VE4aKsgeMc7ZoPUxby1d
Zr-o-0X4ZMgxoQHw_Ub27rTTHxS5Czt_vgBPq7k5OK5dm6b7JCs6Dbn2dsIA
AakPL26t4smr8IiPAnqNC2sn7vxSiAe9mTJ670eNc6C9dCSGwqzqSURiLHmT
kFyLhNSOdipttECmSSA1qh_E0K4LUhiOq7MFDEzg9CLD8kuJrqpEGgltYpD-
8lk7KEpyjMqbWFs-qeD8uJpsVfY2ac1C67OmyGzkERVoC245-YXuNCP8KUZH
LGzRm9jXwUP_piDETX0N5xj34VOCfUTffT1WlWHmB9WRPhwjIsYYy_kgR-uT
kIEDQ23NVUEGgDoryl-ymysIfwifjq-lPB3e85dz1PajNxawsCrKNeR_4hhq
zE_E4ete1EgXeAYoeH4UIgrPGXDD-KfoNoB6viNs0GzNU9czD9Avr-tDtARO
HBLSLIVRYq8caMA-jvpplTOMoDdmUMUWytf4Y_F5tKTpNtLPaAe1py1IgZBl
lfAGY9L_k5slelh_9gUBEkURxS2oMGf2gdSeDdRBxKKx5tF1b-cuMLK6JYZJ
vbGFYSsSENOkHrHEo9NdTwTi7NON9ZgRJgh7OaENK4TFCXrhKc4C6cyJs-V_
HZ0Q-B8XDyjL0qudg_0rJbjTNpNZajT_1WGsnhsTTAgMCGtTsj1T8vNx2LuX
lPQV30nUKpukdCP3zuiE9_aeJQ-nzf3dMQ-KnZU5APmGcIP_u2be6blieMWH
qVax1asKmuIjslh49ceM6lRt3Ia2bHUB8b1TMSjU4I79KPqc3clDnD8quNnU
cRkgfJ_8LCEoH7jml_2TNV0fLuH_9IOXF3jKjhT9K5f-e5N06GmPQLzdqzeQ
MnEtHuDcf4IizyKnB5GUXoNfQxbScQEzztQ_nHMYfF-E8KqoxxlK-Z0wfEDv
dJpL3mcNfFu_vz-_LJ0oI4dE0-vthsxbpTxVQkdI0E5XSi4nYfLqhXompk4j
gpxcHBjsXVbWcnelWhhQP15gCApj6Gz_ddRtk_uxiyiqZ44oUUDcl1KeWMTf
yhKDj7jgGNzTOkUsXZRPb9M77-ZYPuL2wR68E3b9PC_mS6HBHiUxQ7pXvkwS
Bi2CoFgd9SqBXk2O5I_BPaEoA8Aorazw4OvDrmTQrCk4OkGPKRukE4Ci2RMq
TZIYbBz-v3QxmOKHJoMXPNOfj93TRWpmlAd6iHCH6BVlSdfgfjdbHeD0b0ct
qXC_-S5fr1XFBuaZwaUTrBPxU-3IxWLp-dx7wpKcFykqKnByYpkzR3twKEXc
z--CZV79Qk3ZTMY9ATia4HbyhoAqY_hV9GKAHQdU_C-9qwYt0rliNUcizlBc
RHcwzoMyx30ciwbE8e9QsEH_AMa3E2ezuhjTqQlAG33_Gy5Bwe7fj6zNR0ud
jjpcNVf-wprWHEYxMcKwjCQvEHBtv6TnCHkgi_AOtPzzm3aYkMc_ysdNAnI7
DE_T9S5Mkcs6VdT2DWgUN_UC-oAw27xej0aTIn0GckXPDcLBgvrUhUPU3FRn
lW65syvFvxFmBOiCAEHD1q6Is1XhIf8vE6y0FMdNEWSMUW5rQG8f3KP_pjqG
XlUHnGrQPQysylrczHOj4E3WTT918xg2vrXVraDJbCYnJpaWp8m94iqZw2gJ
I_0UWOAZJMCYWz5jYf2DanCOBaGeZIO-UsWorP6YV3yHehirZ_Lc6KtaUorb
bm_BnnCqGVZypL4k6cDy-4GyO2GNohXzN-VWqbAIUQIat9w6RsDpzpS2DIap
96aMBDg24D73RhFTEgCSunPpbbGrDVU3GFkuTGFFBQWNfAU_F22XtoPr_ZB0
1zZPrBVXrEhebvrzp0Z31_sIT8zLop_oaSRvykbyJKKxucARfPee-d0xlgWN
WwKKtl49WVMhhu0OfDScH3knAVdv0LDAyt1fo3WF8jxdp7J9Hn3OWF3rcn0p
zw0gt6YV_6FRy_UbZmpLBvEhZQKUfKuxp6LK-SfHOilT29ERg9LJZhnyluTV
HELRtkJIcHzvphXupCIaIgZispYxHNSmAfze2cshWBYizGTBSKXWgrJeo7Q6
kEjim72yKaJ8JaLzMQFPxtQxhtvHRw94dCuXcajg3nE_r_9t7D8RicqF-CVV
tvp-rHMPhhizlgfixHWXrPB7reTtftT64pOSl5vUop8gTlbeW5Kg7WQAPNfp
zUH8YcAo0xDLJHA-FgTM4mYGih41rKaKKteWRFGU-fIyEzeO1s35tbGzlZ7R
btUG_fCpIbaJmucMZK9OzVBSfBgTBtFSesqKq6hIc8HctGcj5LPUfP9DRqqe
CrBi6bPjTlzrjaxJoU6oRq4ZtiBG38skOAaCUk61tpjilkq1fmWe2ByvLXhp
O2furZoiwNrizYmUmAW3ak3iSneScA64M-9apdZwhhEgpqyw5mUMYNT5SOOf
xZePlgXxhlL81t3KlofdbzT0w6tlbbT0NSbj9Q_zNkeZ8ar5aeMgTR-pJACg
baB20YVezziX-yboCF-uIptCTFNV
(Source: this post on HN)
The debug information contained in the (URL-safe) base64 blob is likely encrypted.
Think about it from Google's perspective: you would want to display a stack trace, relevant headers of the HTTP request, and possibly some internal state of the user session to help a developer debug the situation. On the other hand, all that information might contain sensitive data that you don't want the general public to see, or that might endanger the user if they copy-and-paste it into a public support forum.
If I were to guess at the format, I would imagine:
A public identifier of the key used for encryption (so their servers can use different keys)
The debug data, encrypted using an authenticated encryption scheme
Additional data for error correction when OCR has to be used
For statistical analysis of the format, it would be interesting to sample a lot of these error messages and see whether some parts of the message are less random than you would expect from encrypted data (symmetrically encrypted data should follow a uniform distribution).
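As a starting point for that kind of analysis, here is a small sketch that decodes the URL-safe base64 blob and computes the Shannon entropy of the resulting bytes; values close to 8 bits/byte are what you would expect from encrypted (or well-compressed) data. The blob below is just the first line of the one above; paste the whole thing for a real sample:

using System;
using System.Linq;

class BlobEntropy
{
    static void Main()
    {
        string blob = "AB38WENgKfeJmWntJ8Y0Ckbfab0Fl4qsDIZv_fMhQIwCCsncG2kQVxdnam35";

        // Undo the URL-safe alphabet and restore padding before decoding
        string b64 = blob.Replace('-', '+').Replace('_', '/');
        b64 = b64.PadRight(b64.Length + (4 - b64.Length % 4) % 4, '=');
        byte[] data = Convert.FromBase64String(b64);

        // Shannon entropy in bits per byte; ~8.0 suggests uniformly random bytes
        double entropy = data
            .GroupBy(b => b)
            .Select(g => (double)g.Count() / data.Length)
            .Sum(p => -p * Math.Log(p, 2));

        Console.WriteLine($"{data.Length} bytes, entropy = {entropy:F3} bits/byte");
    }
}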
It looks like you are not the only one looking for secret messages in the YouTube error page. It seems that you can decode it using base64.
Here is how:
http://www.cambus.net/decoding-youtube-http-error-500-message/
In a nutshell:
Sadly, contrary to my expectations, there doesn't seem to be any hidden message… Screw you, highly trained monkeys!
I guess it is just another Easter egg, similar to the 'Goats Teleported' performance counter Google Chrome had:
https://plus.google.com/+RobertPitt/posts/PrqAX3kVapn
But I guess unless you look like this, you can't be 100% sure.
It's entirely possible that this is random padding to avoid the "friendly" IE error pages, which show when your error page contains fewer than 512 bytes of HTML. If it were simply random bytes, they would be base64 encoded, just as we see here.
IMHO this is all about customer care.
Strictly speaking, there would be no need to send the error/debug message to the customer because, I guess, it's already handled internally.
So:
why do we see this?
and why do they encrypt it?
and is there really no hidden message for us?
Although the error might be handled and resolved internally, that does not necessarily satisfy a customer who is not able to use the product. And they encrypt it for good reason, as this debug message might reveal more than a typical admin is used to seeing.
Also, there is no need to hide a message for us. Why? Because we NEVER stop until we find something.
I think:
internally, the error is dealt with;
external users have something in hand to give a technician if necessary, and in return can get an idea of the ongoing problem.
All in all, nothing special about it, and I think linking it, e.g., to the infinite monkey theorem is a bit over-speculated...
Error 500 means Google has a problem it cannot resolve. When reporting a bug, the most important thing is to provide reproduction steps, so I tried to find an answer to the question "When does this happen?"
I found this post on Reddit: https://www.reddit.com/r/youtube/comments/40k858/is_youtube_giving_you_500_internal_server_errors/?utm_source=amp&utm_medium=comment_list
In summary:
It happens on the desktop site (www...); the mobile version (m...) works fine.
It happens for authenticated users; for anonymous users it works fine.
The problem goes away after cookies are cleared.
So I would suggest a direction: try to find the key in the session cookie. I hope my 2 cents help.

Parsing a CSV for Database Insertion when Formatted Incorrectly

I recently wrote a mailing platform for one of our employees to use. The system runs great, scales well, and is fun to use. However, it is currently inoperable due to a bug that I can't figure out how to fix (I'm a fairly inexperienced developer).
The process goes something like this...
Upload a CSV file to a specific FTP directory.
Go to the import_mailing_list page.
Choose a CSV file within the FTP directory.
Name and describe what the list contains.
Associate file headings with database columns.
Then, the back-end loops over each line of the file, associating the values with a heading, and importing these values into a database.
This all works wonderfully, except in one specific case: when a raw CSV is not correctly formatted. For example...
fname, lname, email
Bob, Schlumberger, bob@bob.com
Bobbette, Schlumberger
Another, Record, goeshere@email.com
As you can see, the comma and email value are missing from line two. This causes an error when attempting to pull "valArray[3]" (or valArray[2], in the case of every language but mine).
I am looking for the most efficient solution to keep this error from happening. Perhaps I should check the array length, and compare it to the index we're about to pull, before pulling it. But doing that for each and every value seems inefficient. Anybody have another idea?
Our stack is ColdFusion 8/9 and MySQL 5.1. This is why I refer to the array index as [3]: ColdFusion arrays are 1-based.
There's ArrayIsDefined(array, elementIndex), or ArrayLen(array).
Seems inefficient?
You've got to code what you need to code; forget about inefficiency. Get it right before you get it fast (when needed).
I suppose if you are looking for another way of doing this (instead of checking the array length each time, although that really doesn't sound that bad to me), you could wrap each line's insert attempt in a try/catch block. If it fails, stuff the failed row into a buffer (including the line number and error message) that you can display to the user after the batch has completed, so they can see each of the failed lines and why they failed. This has the advantages of 1) not having to explicitly check the array length each time, and 2) catching other errors that you might not have anticipated beforehand (maybe a value is too long for your field, for example). A sketch of the idea follows.
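Your stack is ColdFusion, but the buffering pattern is language-agnostic; here is a minimal sketch in C# (the three-field layout and the InsertRow helper are hypothetical):

using System;
using System.Collections.Generic;

class CsvImporter
{
    record Failure(int LineNumber, string Line, string Reason);

    static void Import(IEnumerable<string> lines)
    {
        var failures = new List<Failure>();
        int lineNumber = 0;

        foreach (var line in lines)
        {
            lineNumber++;
            try
            {
                var fields = line.Split(',');
                if (fields.Length < 3)
                    throw new FormatException($"expected 3 fields, got {fields.Length}");
                InsertRow(fields[0].Trim(), fields[1].Trim(), fields[2].Trim());
            }
            catch (Exception ex)
            {
                // Buffer the failure instead of aborting the whole batch
                failures.Add(new Failure(lineNumber, line, ex.Message));
            }
        }

        // Report every rejected line to the user after the batch completes
        foreach (var f in failures)
            Console.WriteLine($"Line {f.LineNumber}: {f.Reason} -- {f.Line}");
    }

    static void InsertRow(string fname, string lname, string email)
    {
        // hypothetical database insert
    }
}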

CAB file API clarification

Since I'm not really seeing any content anywhere that doesn't point back to the original Microsoft documents on this matter, or source code that really doesn't seem to answer the questions I'm having, I thought I might ask a few things here. (The Delphi tag is there because that's the dev environment for the code I'm writing from this.)
That said, I have a few questions the API document doesn't answer. First one: fdi_notify messages. What is "my responsibility" when coding handlers for fdintCABINET_INFO, fdintPARTIAL_FILE, fdintNEXT_CABINET, and fdintENUMERATE? I'll illustrate what I mean by an example: for fdintCLOSE_FILE_INFO, "my responsibility" is to close the file for the handle given to me, and to set the file's date and time according to the data passed in fdi_notify.
I figure I'm missing something, since my code isn't handling extraction of spanned CAB files... any thoughts on how to do this?
What you're most likely running into is that FDICopy only reads the cab you passed in. It will use fdintNEXT_CABINET to get spanned data for any files you extract in response to fdintCOPY_FILE, but it only calls fdintCOPY_FILE for files that start in that first cab.
To get a directory listing for the entire set, you need to call FDICopy in a loop. Every time you get an fdintCABINET_INFO event, save off the psz1 parameter (the next cabinet's name). When FDICopy returns, check it: if it's an empty string you're done; if not, call FDICopy again with the next cab as the new path.
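The question is tagged Delphi, but the loop is the same in any language; here is a minimal C# sketch of that control flow, where FdiCopyOnce is a hypothetical wrapper around FDICopy and its callbacks, not a real cabinet.dll export:

using System;

class CabSetWalker
{
    static void Main()
    {
        string path = @"C:\cabs\";
        string cab = "set1.cab"; // first cabinet of the spanned set

        // Keep calling FDICopy until a pass reports no continuation cabinet
        while (!string.IsNullOrEmpty(cab))
        {
            Console.WriteLine($"Listing {cab}");
            cab = FdiCopyOnce(path, cab);
        }
    }

    // Hypothetical wrapper: calls FDICreate/FDICopy/FDIDestroy via P/Invoke,
    // records the psz1 (next-cabinet name) seen in fdintCABINET_INFO, and
    // returns it, or "" when the set has no continuation cabinet.
    static string FdiCopyOnce(string path, string cab)
    {
        // ... FDICopy call and notification callback elided ...
        return "";
    }
}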
fdintCABINET_INFO: The only responsibility here is returning 0 to continue processing. You can use the information provided (the name of the next cabinet, the next disk, the path name, and the set ID), but you don't need to.
fdintPARTIAL_FILE: Depending on how you're processing your cabs, you can probably ignore this. You'll only see it for the second and later cabinets in a set, and it tells you that the particular entry is continued from a previous cab. If you started at the first cab in the set, you'll have already seen an fdintCOPY_FILE for the file. If you're processing random .cabs, you won't really be able to use it either, since you won't have the start of the file to extract.
fdintNEXT_CABINET: You can use this to prompt the user for a new directory for the next cabinet, but for simple spanning support just return 0 if the passed-in filename is valid, or -1 if it isn't. If you return 0 and the cab isn't valid, or is the wrong one, this will get called again. The easiest approach (if you don't request a new disk/directory) is just to check pfdin^.fdie: if it's FDIError_None, this is the first call for the requested cab, so return 0; if it's anything else, FDI has already tried to open the requested cab at least once, so return -1 as an error (see the sketch after this list).
fdintENUMERATE: I think you can ignore this. It isn't covered in the documentation, and the two cab libraries I've looked at don't use it. It may be a leftover from a previous API version.
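As a concrete illustration of the fdintNEXT_CABINET rule above, here is a tiny C# sketch; FdiError and FdiNotification are stand-ins mirroring the Win32 FDIERROR and FDINOTIFICATION definitions (in Delphi you would test the same field on the notification record):

// Mirrors FDIERROR from fdi.h (only the value tested here is defined)
enum FdiError { None = 0 }

// Stand-in for the FDINOTIFICATION struct passed to the callback
class FdiNotification { public FdiError fdie; }

static class NextCabinetHandler
{
    // Return 0 to accept the supplied cabinet path, -1 to abort
    public static int OnNextCabinet(FdiNotification fdin)
    {
        // FDIError_None means this is the first request for this cabinet,
        // so accept it; anything else means FDI already failed to open or
        // validate this cab at least once, so give up instead of looping.
        return fdin.fdie == FdiError.None ? 0 : -1;
    }
}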
