Why isn't the Solace .NET API targeting AnyCPU? I've compared the SolaceSystems.Solclient.Messaging.dll and SolaceSystems.Solclient.Messaging_64.dll assemblies in Reflector and they are absolutely identical except for one class, SolaceNativeAPI, which points to libsolclient.dll and libsolclient_64.dll respectively. It seems it would be very easy to make this class non-static, expose a common interface ISolaceNativeAPI to be used in place of SolaceNativeAPI, and create a factory that checks the current process architecture and returns the 32- or 64-bit implementation.
I'd be happy to submit a pull request if the Solace code repository were public, because it would make my current work easier.
The Solace .NET API does not currently ship an 'Any CPU' .NET binary because doing so was found in the past to degrade performance slightly. The Solace .NET API makes use of a native adapter layer that wraps a native library. To achieve run-time selection of the correct native library (32- or 64-bit) we need to introduce an indirection layer between the .NET API and the native adapter. This was tested in the past, but it was found to degrade API performance by 5% to 10%, which was deemed too high a cost to release.
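The run-time selection the question proposes can be sketched in a language-neutral way. Here is a minimal Python sketch of the idea (the library names are taken from the question; in a real .NET implementation the check would live behind the ISolaceNativeAPI factory the question describes):

```python
import struct

def pointer_size_bits():
    # Size of a native pointer in the current process:
    # 4 bytes -> 32-bit process, 8 bytes -> 64-bit process.
    return struct.calcsize("P") * 8

def select_native_library():
    # Library names taken from the question; real code would load the
    # chosen name (e.g. via ctypes.CDLL / LoadLibrary) instead of
    # returning it as a string.
    if pointer_size_bits() == 64:
        return "libsolclient_64.dll"
    return "libsolclient.dll"
```

The answer's point is that in .NET this factory sits on every call path into the native adapter, and that extra level of virtual dispatch is where the measured 5-10% cost came from.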
Related
I'm building an API using .NET 6 and plan to use Serilog as the logger. According to https://onloupe.com/blog/serilog-vs-mel/ there are two possibilities:
Use Serilog exclusively
Use Microsoft.Extensions.Logging as logging API + Serilog as logging framework
On the one hand, using Serilog exclusively has the drawback that everything in my codebase would take a dependency on Serilog, so using it in combination with Microsoft.Extensions.Logging should provide more flexibility.
On the other hand, https://github.com/serilog/serilog-extensions-logging clearly recommends using https://github.com/serilog/serilog-aspnetcore for .NET Core projects:
ASP.NET Core applications should prefer Serilog.AspNetCore and UseSerilog() instead.
Does anybody know the reason for this recommendation, which seems contradictory to me in terms of flexibility?
I think the recommendation:
ASP.NET Core applications should prefer Serilog.AspNetCore and UseSerilog() instead.
is not really talking about which ILogger abstraction to use in your application code -- Microsoft or Serilog. I think it's more about choosing between the low-level Serilog.Extensions.Logging package or the higher level Serilog.AspNetCore package (which depends on the former).
The Serilog.AspNetCore package brings in some useful features like request logging and direct integration with the generic host -- things you're likely to want in an ASP.NET Core application. The Serilog.Extensions.Logging package is better suited to scenarios where you're not using a generic host, e.g. a console application.
Regarding whether to prefer using Microsoft's ILogger abstraction or Serilog's: It depends. In library code that you expect to share across diverse projects (or publicly), I'd probably stick to Microsoft's abstractions for maximum compatibility and to not force an unwelcome dependency on consumers. Similarly, if you think there's any chance you'll someday want to swap out logging frameworks, you'll probably have an easier time if you stick to using the Microsoft APIs.
On the other hand, you might prefer the ergonomics of Serilog's API over Microsoft's. For example, maybe you like the convenience of the static Log and LogContext APIs without all the ceremony of an injected logger. And the IDiagnosticContext abstraction (along with the above-mentioned ASP.NET Core request logger) is a really powerful logging pattern. Those would be good reasons to go "all-in" with Serilog.
It looks like I have a requirement to access a SignalR hub from a Delphi client.
How do I implement a basic SignalR client in Delphi?
More generally, where can I find an up to date description of the protocol?
Traffic load will not be high, so it doesn't have to be extremely clever or anything.
[Edited to make it less of a "recommendation" question.]
Partial answer to my own question: I found a useful, low-level version of the documentation in a legacy zip of version 1.3.
It has since vanished from the signalr git repository, and I've been unable to find a newer version online anywhere.
I have popped it here for convenience.
http://www.mithril.com.au/SignalR%20Protocol.docx
This document describes the protocol at a suitably low-level for me to feel confident I can construct a partial implementation of SignalR using existing components, sufficient for my purposes anyway.
If a more up to date version of this exists anywhere, I'd appreciate a link.
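For what it's worth, the newer ASP.NET Core flavour of SignalR (a different protocol from the classic 1.x one the document above describes) does have a public specification: the hub protocol document in the dotnet/aspnetcore repository. Its framing is simple enough to sketch: each message is a JSON object terminated by the ASCII record separator 0x1E, starting with a handshake. A Python sketch of that framing (the "Send" hub method is hypothetical):

```python
import json

RECORD_SEPARATOR = b"\x1e"  # every frame ends with ASCII 0x1E

def encode_frame(message):
    # Serialize one hub-protocol message as UTF-8 JSON plus the separator.
    return json.dumps(message).encode("utf-8") + RECORD_SEPARATOR

# Handshake the client sends right after the transport connects.
handshake = encode_frame({"protocol": "json", "version": 1})

# A hub method invocation (message type 1); "Send" is a hypothetical hub method.
invocation = encode_frame({"type": 1, "target": "Send", "arguments": ["hello"]})
```

Only relevant if the hub you need to reach is the ASP.NET Core variant rather than classic SignalR, but in that case it is a much easier target for a hand-rolled Delphi client.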
How do AppDynamics and similar products retrieve data from apps? I read somewhere here on SO that it is based on bytecode injection, but is there an official or reliable source for this information?
Data retrieval by APM tools is done in several ways, each with its pros and cons:
Bytecode injection (for both Java and .NET) is one technique; it is somewhat intrusive, but lets you get data from places the application owner (or even 3rd-party frameworks) did not intend to expose.
Native function interception is similar to bytecode injection, but allows you to intercept unmanaged code.
Application plugins - some applications (e.g. Apache, IIS) give access to monitoring and application information via well-documented APIs and a plugin architecture.
Network sniffing allows you to see all the communication to/from the monitored machine.
OS-specific documented (and undocumented) APIs - just like application plugins, but for the Windows/*nix operating system itself.
Disclaimer: I work for Correlsense, provider of the APM software SharePath, which uses all of the above methods to give you complete end-to-end transaction visibility.
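The bytecode-injection technique in the first bullet can be illustrated in any language that lets you wrap code at load time: the agent rewrites each interesting method so that every call records timing data, without the application's source changing. A hypothetical Python sketch, with a decorator standing in for the bytecode rewriting an APM agent performs:

```python
import functools
import time

def instrument(fn):
    # Wrap a function so each call records a count and elapsed time --
    # roughly what an APM agent's bytecode injection does to methods.
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            wrapper.calls += 1
            wrapper.total_seconds += time.perf_counter() - start
    wrapper.calls = 0
    wrapper.total_seconds = 0.0
    return wrapper

@instrument
def handle_request(payload):
    # Stand-in for the application code being monitored.
    return payload.upper()
```

A real agent does this at class-load time (via java.lang.instrument in Java or the CLR Profiling API in .NET) rather than with source-level decorators, which is why it works on applications it has never seen the source of.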
I did some research before asking this, but I couldn't really understand the differences between the terms I'm asking about. In-depth information would be much appreciated. Thanks in advance.
API - a set of functions and structures (classes?) for performing a selected task (e.g. libcurl API for network requests)
A framework is something you build upon. Usually it is complete (or almost complete) to the point that it can be started out of the box (though it probably wouldn't do anything useful by itself) and provides APIs to override some of its functionality.
A toolkit is a set of utilities/tools you can use for some task (e.g. Kali Linux is a network-penetration toolkit).
An SDK (Software Development Kit) is a toolkit (usually official) that can be used to interact with/program some device or platform. It may also provide APIs and frameworks internally. (E.g. the Android SDK allows you to develop, build, test and deploy applications for, well, Android; it describes the APIs accessible from different OS versions.)
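The API/framework distinction above is mostly about who calls whom (inversion of control): with an API you call the library; with a framework, the framework calls your code. A toy Python sketch (ToyFramework and its route decorator are invented purely for illustration):

```python
# With an API, your code calls the library:
import urllib.parse
parts = urllib.parse.urlsplit("https://example.com/path")

# With a framework, the framework calls your code (inversion of control).
class ToyFramework:
    def __init__(self):
        self.handlers = {}

    def route(self, path):
        def register(fn):
            self.handlers[path] = fn  # the framework remembers your handler
            return fn
        return register

    def dispatch(self, path):
        # The framework, not you, decides when your code runs.
        return self.handlers[path]()

app = ToyFramework()

@app.route("/hello")
def hello():
    return "hi"
```

An SDK then typically bundles both: libraries exposing APIs, possibly a framework, plus the build/test/deploy tooling around them.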
I read this question but was somehow not satisfied with the answers.
I also quickly read (as suggested in that question) the last chapter of Marco Cantù's 2010 Handbook, from which I quote the following (I think I can quote such a short text):
I [Marco Cantù] do have a lot of investment in server side web and REST applications written in Delphi, and in the recent years I've started playing with and introducing at conferences a Delphi Web Application REST Framework (that is, DWARF), which at this time is still not publicly available... simply because it is too sketchy and unfinished to be published. I've seen other ongoing efforts to clone Rails in Delphi and offer other REST server architectures. I think that if you want to build a very large REST application architecture you should roll out your own technology or use one of these prototypical architectures.
Considering that I own Delphi XE Professional (which does not include DataSnap) and that I would like to be able to write large applications too, according to the above comments it seems DataSnap is not an option.
Is there even a commercial solution for this? I don't want to roll my own implementation of REST; I would like to create a webserver that uses some of my datamodules with the DAC of my choice (Devart in this case).
Final note: my goal is to write the backend for a large web application. On the client I would like to use Ext JS 4.0, doing all the client work in JavaScript to take full advantage of Ext JS, so basically I need a webserver just for the data and for tracking state, not for serving webpages.
To create your REST services, try our open-source mORMot project. It is now a well-known and stable project, used worldwide in production.
You can use any DAC with the current state of the framework by implementing a custom TSQLRestServerStatic class (similar to the TSQLRestServerStaticInMemory class, but calling your DAC): you'll benefit from the ORM and the JSON RESTful architecture, together with the high-speed http.sys kernel-mode server.
The SQLite3 engine is NOT mandatory with our framework, even if it was designed to work better with it.
If you are starting an application from scratch, I think mORMot is a good option if Delphi is your only choice. If you choose DataSnap, you'll have to live with its performance and stability problems.
I wrote an article on my blog talking about performance and stability with DataSnap (and mORMot) in large applications, you can see it on the following link:
DataSnap analysis based on Speed & Stability tests
I think you should have a look at kbmMW; it offers a way to implement a basic REST server based on an event-driven HTTP server.
Check the news.components4developers.com newsgroups; there you will find a lot of documentation.
FireHttp is a high-performance web server based on the Delphi/Object Pascal language. It supports HTTP 1.1, HTTPS (SSL/TLS), WebSocket, GZip, Deflate, IOCP, and EPOLL. It adopts a multi-process, multi-threaded model, offers good stability and concurrency performance, and provides SDK source code. Developers can use the SDK to quickly build high-performance cross-platform web applications.