How does AppDynamics (and similar programs) retrieve information - monitoring

How do AppDynamics and similar programs retrieve data from apps? I read somewhere here on SO that it is based on bytecode injection, but is there an official or reliable source for this information?

Data retrieval by APM tools is done in several ways, each with its own pros and cons:
Bytecode injection (for both Java and .NET) is one technique; it is somewhat intrusive, but it lets you get data from places the application owner (or even third-party frameworks) did not intend to expose (see the sketch at the end of this answer).
Native function interception is similar to bytecode injection, but lets you intercept unmanaged code.
Application plugins - some applications (e.g. Apache, IIS) give access to monitoring and application information via well-documented APIs and a plugin architecture.
Network sniffing lets you see all the communication to/from the monitored machine.
OS-specific documented (and undocumented) APIs - just like application plugins, but at the Windows/*nix operating-system level.
Disclaimer: I work for Correlsense, provider of the APM software SharePath, which uses all of the above methods to give you complete end-to-end transaction visibility.
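To give a rough feel for the interception idea mentioned above (not AppDynamics' actual implementation), here is a small Python analogue: instead of rewriting bytecode, the same effect is achieved by wrapping a function at runtime to record call timings. All class and method names below are made up.

```python
import functools
import time

def instrument(func):
    """Wrap a callable so every call's duration is recorded,
    roughly what a bytecode-injecting agent does in Java/.NET."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            # A real agent would buffer measurements and ship them to a collector.
            print(f"{func.__qualname__} took {elapsed_ms:.2f} ms")
    return wrapper

# Hypothetical application code that the "agent" patches at runtime:
class OrderService:
    def place_order(self, order_id):
        time.sleep(0.05)  # stand-in for real work
        return f"order {order_id} placed"

# Monkey-patch the method without touching the application's source.
OrderService.place_order = instrument(OrderService.place_order)
print(OrderService().place_order(42))
```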

Electron with C++ backend - secure?

I have written a UI in Electron and I would like to connect it with my C++ code. However, I will be selling this product and so I would like to know if this makes it easier for people to crack my C++ code? Obviously I know compiled C++ can be cracked anyway, but does this affect it in any way?
Additionally, what is the best way to go about this while preserving maximum possible security?
Thanks.
EDIT: How about this? Is it possible to use c++ as back-end for Electron.js?
EDIT2: To clarify, my Electron app will be showing the status of operations being performed in the C++ program. As such, I will need to send lists, dictionaries, strings etc. from C++ to JS which will then render it. Additionally, buttons on my Electron app need to trigger actions in the C++ code, such as stopping or starting certain parts of the program.
I have written a UI in Electron and I would like to connect it with my C++ code ...
I would like to know if this makes it easier for people to crack my C++ code?
Using Electron does not make any meaningful difference for protecting the C++ code (your intellectual property).
The JavaScript code running in Electron will be very easy to reverse engineer, though, which gives users a head start on experimenting with your C++ binary. Using minification and obfuscation tools can at least make that harder.
For the C++ side, connecting C++ to Electron can be done in at least these two ways:
By dynamically linking to a shared library (Node.js C++ Addons)
In this case your C++ API would be functions that get exported by the shared library. There are many tools to inspect shared libraries (DLLs) and view these functions.
By communicating with another process using some form of inter-process communication (IPC).
In this case your API would depend on the IPC method used. If it were TCP/UDP messages, you could use Wireshark to inspect the packets exchanged between the processes; there are ways to inspect messages going over any type of IPC (a minimal sketch of this approach appears below).
Either way, your application must be delivered to the end-user with a compiled binary. Preventing reverse engineering of the binary itself is impossible if you actually give the binary to your users.
You should also expect that a savvy end-user will have access to other tools that can inspect the API and implement third-party code that talks to that API.
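To make the IPC option concrete, here is a minimal, hypothetical sketch of the "separate process speaking JSON over a local socket" approach, written in Python as a stand-in for the backend process (the real one would be your C++ program). The port, message format, and commands are all made up.

```python
import json
import socketserver

# Stand-in for the C++ backend: listens on localhost and answers
# newline-delimited JSON commands sent by the Electron UI.
STATE = {"running": False, "jobs": []}

class BackendHandler(socketserver.StreamRequestHandler):
    def handle(self):
        for line in self.rfile:                   # one JSON object per line
            if not line.strip():
                continue
            request = json.loads(line)
            if request.get("cmd") == "start":
                STATE["running"] = True
            elif request.get("cmd") == "stop":
                STATE["running"] = False
            reply = {"ok": True, "state": STATE}  # lists/dicts serialize naturally
            self.wfile.write((json.dumps(reply) + "\n").encode())

if __name__ == "__main__":
    # The Electron side would connect with net.createConnection(5555, "127.0.0.1")
    # and exchange the same newline-delimited JSON messages.
    with socketserver.TCPServer(("127.0.0.1", 5555), BackendHandler) as server:
        server.serve_forever()
```

Note that everything in this exchange can be observed with Wireshark or a local proxy, which is exactly the inspection risk described above.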
Additionally, what is the best way to go about this while preserving maximum possible security?
By "maximum possible security", I will assume you are referring to preventing unauthorized use of the C++ code with other applications.
You would need a licensing system that can authenticate the application that is using your C++ binary's API. Explaining exactly what that would look like is probably too large an answer for Stack Overflow, and you will have to do some research on how licensing systems are implemented.
It may be theoretically impossible to develop a perfect licensing system, though. Look at the gaming industry: for every new game that is released, it takes only a matter of days for the licensing scheme to be circumvented. The only software architecture that crackers haven't completely conquered is cloud-based applications, which never deliver the compiled code containing their business logic to the end-user's computer.
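As a toy illustration only (not a recommendation of a specific scheme), a simple license check might verify an HMAC-signed license string before the backend accepts API calls. Note that the signing secret is embedded in the shipped binary, which is precisely the weakness a cracker would target; all names here are hypothetical.

```python
import hashlib
import hmac

SECRET = b"embedded-signing-key"   # hypothetical; extractable from a shipped binary

def make_license(customer_id: str) -> str:
    """Issue a license string of the form '<customer>:<signature>'."""
    sig = hmac.new(SECRET, customer_id.encode(), hashlib.sha256).hexdigest()
    return f"{customer_id}:{sig}"

def check_license(license_str: str) -> bool:
    """Verify the signature before allowing API access."""
    customer_id, _, sig = license_str.partition(":")
    expected = hmac.new(SECRET, customer_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)   # constant-time comparison

token = make_license("acme-corp")
print(check_license(token))               # True
print(check_license("acme-corp:forged"))  # False
```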

What is meant by the phrase adapter/connector?

This is a basic question. I want to apply to an entry-level Java developer position with the following requirement:
Familiarity with the Sailpoint Identity IQ standard adapters/connectors
By standard connectors, do they basically mean how SailPoint exchanges data with third-party tools? And by adapter, do they mean that the adapter pattern would be used? Thanks
This is probably going to appear well after your interview, but to answer the question:
1) Standard adapters/connectors:
SailPoint ships with a "standard" set of connectors that are included in the purchase price; there are also connectors (e.g. EPIC) that do not ship as part of the standard product and must be enabled separately. To give you a deeper view into connectors:
Connectivity Methods:
Direct Connectivity - This is where a connector communicates directly with a system using APIs or data sources. Some advantages of direct connectivity are that you don't have to generate or transmit files, and you can be more efficient by processing only things that have changed. Some disadvantages are that direct connections are subject to availability and downtime concerns, like any connected system, and they are also subject to whatever advantages and disadvantages the APIs themselves impose.
Some people also refer to this as an 'online' method of connectivity.
File-Based Connectivity - This is where a connector reads from a snapshot of data presented in a file, rather than connecting directly to the system. Some advantages of using a file are that files are portable, easily inspected for data issues, and not typically subject to availability concerns. Some disadvantages are that files are usually processed in their entirety, and may require processing or transformation in order to work effectively.
Some people also refer to this as a 'decoupled' or 'offline' method of connectivity.
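As an illustration only (Python, with a made-up endpoint and file layout, not SailPoint code), the two connectivity methods boil down to something like this:

```python
import csv
import json
import urllib.request

# Direct ("online") connectivity: ask the target system's API for accounts.
def read_accounts_direct(base_url: str):
    # Hypothetical endpoint; a real connector would also handle auth and paging,
    # and could request only records changed since the last aggregation.
    with urllib.request.urlopen(f"{base_url}/api/accounts?changedSince=last-run") as resp:
        return json.load(resp)

# File-based ("offline"/decoupled) connectivity: read a snapshot export.
def read_accounts_file(path: str):
    # The whole file is processed each time; the format is whatever the
    # source system exports, e.g. a delimited file.
    with open(path, newline="") as f:
        return list(csv.DictReader(f, delimiter=";"))
```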
Connector Implementations
Source-Specific Implementation - These are connectors built with a specific target system in mind. They typically use specific APIs targeted at the system they integrate with. Because the systems and APIs are known, these typically require less configuration to get working.
Examples of these are Active Directory, Workday, Salesforce, SAP, etc.
General Implementation - These are general-purpose connectors which can be used to connect to a variety of sources or systems. These tend to be more flexible in general, but typically do require a bit more setup and configuration to meet needs.
Examples of these are Web Services, SCIM, JDBC, Delimited Files, etc.
Custom Implementation - These are completely custom connectors, tailored to the system and API of your choice. This approach offers the most flexibility of all the connector options; however, building custom connectors is definitely a development-level activity and is not to be taken lightly. The code written for a custom connector is maintained and supported by the customer who owns it.
Examples of these are custom in-house applications, etc.
Understanding these connector implementations is important, because if a source-specific implementation isn't available, another general or custom connector implementation may be used instead.

AWS SWF vs Flow Framework

I am familiar with the concept of Amazon SWF. I can see many SDKs in different languages for using the SWF service. Also, the Amazon Flow Framework is a set of libraries for implementing distributed applications. Currently the Flow Framework is available only in Java and Ruby. How, then, can we write distributed applications using SWF in other languages such as Python, PHP, etc.? Does this mean Amazon provides the framework only for Java and Ruby, and for the rest of the languages there are other vendors' libraries? Please explain.
You are right that AWS currently only provides high-level frameworks for Ruby and Java ("Flow" frameworks). Low-level access to SWF is available in most (all?) official SDKs though: boto2/3 for Python, go-sdk, etc.
When using SWF, you'll find yourself implementing mainly two types of programs: "activity workers" and "deciders" (http://docs.aws.amazon.com/amazonswf/latest/developerguide/swf-dev-actors.html).
Using the Flow framework is not mandatory, but it helps with implementing deciders by providing high-level abstractions for describing synchronisation points, defining which tasks can be run in parallel, retries, etc. There are also non-official libraries (I'm personally maintaining one for my company, "simpleflow").
If you want to use other languages for deciders, I recommend you try to use an existing framework first, then see if you want to implement this yourself (it's not trivial from my experience).
If you want to implement activities in other languages, I recommend you start using the Flow framework end-to-end, and then you can either 1) fork and run your favorite language as a subprocess of the Ruby/Java Flow workers, or 2) mimic the serialisation logic of the Flow framework and implement workers yourself directly against the low-level APIs (which is simple: poll for an activity, do the work, then respond to SWF with the result).
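For example, a minimal activity worker written against the low-level API with Python's boto3 might look like the sketch below; the domain, task list, and the actual work are placeholders.

```python
import boto3

# Minimal SWF activity worker using the low-level API (no Flow framework).
swf = boto3.client("swf", region_name="us-east-1")

def do_work(task_input: str) -> str:
    return task_input.upper()          # stand-in for the real activity logic

while True:
    # Long-polls for up to 60 s; returns an empty taskToken on timeout.
    task = swf.poll_for_activity_task(
        domain="my-domain",
        taskList={"name": "my-task-list"},
        identity="worker-1",
    )
    token = task.get("taskToken")
    if not token:
        continue                       # poll timed out, poll again
    try:
        result = do_work(task.get("input", ""))
        swf.respond_activity_task_completed(taskToken=token, result=result)
    except Exception as exc:
        swf.respond_activity_task_failed(taskToken=token, reason=str(exc)[:256])
```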

How to do a REST webserver with Delphi as a backend for a big web application?

I read this question but was somehow not satisfied with the answers.
I also quickly read (as suggested in that question) the last chapter of Marco Cantù's 2010 Handbook, from which I quote the following (I think I can quote such a short text):
I [Marco Cantù] do have a lot of investment in server side web and REST applications written in Delphi, and in the recent years I've started playing with and introducing at conferences a Delphi Web Application REST Framework (that is, DWARF), which at this time is still not publicly available... simply because it is too sketchy and unfinished to be published. I've seen other ongoing efforts to clone Rails in Delphi and offer other REST server architectures. I think that if you want to build a very large REST application architecture you should roll out your own technology or use one of these prototypical architectures.
Considering that I own Delphi XE Professional (which does not include DataSnap) and that I would like to be able to write large applications too, according to the quote above it seems DataSnap is not an option.
Is there even a commercial solution for this? I don't want to consider "my own implementation of REST"; I would like to create a webserver that uses some of my datamodules with the DAC I chose (Devart, in this case).
Final note: my goal is to write the backend for a large web application. On the client I would like to use Ext JS 4.0, and I want to do all the client work in JavaScript to take full advantage of Ext JS, so basically I need a webserver just for the data and for tracking state, not for serving webpages.
To create your REST services, try our open source mORMot project. It is now a well-known and stable project, used worldwide in production.
You can use any DAC with the current state of the framework by implementing a custom TSQLRestServerStatic class (similar to the TSQLRestServerStaticInMemory class, but calling your DAC): you'll benefit from the ORM and the JSON RESTful architecture, together with the high-speed http.sys kernel-mode server.
The SQLite3 engine is NOT mandatory with our framework, even if it was designed to work better with it.
If you are starting an application from scratch, I think mORMot is a good option if Delphi is your only choice. If you choose DataSnap you'll have to live with its performance and stability problems.
I wrote an article on my blog talking about performance and stability with DataSnap (and mORMot) in large applications, you can see it on the following link:
DataSnap analysis based on Speed & Stability tests
I think you should have a look at kbmMW; there is a way to implement a basic REST server based on its event-driven HTTP server.
Check the news.components4developers.com newsgroups; there you will find a lot of documentation.
FireHttp is a high-performance web server based on the Delphi/Object Pascal language. It supports HTTP 1.1, HTTPS (SSL/TLS), WebSocket, GZip, Deflate, IOCP, and EPOLL. It adopts a multi-process + multi-threaded model, has good stability and concurrency performance, and provides SDK source code. Developers can use the SDK to quickly build high-performance cross-platform web applications.

What types of API do you offer with your Delphi application?

We have a 3-tier Delphi application written using RemObjects DataAbstract. Many of our customers are asking for an API so they can interact with it using their own applications.
The API must allow the clients to call methods with various parameters and return results ranging from simple parameters to whole datasets.
What types of API can you recommend and how difficult are they to implement?
Since you've written your application using RemObjects DataAbstract, you've got just about everything you need already waiting for you in your application.
RemObjects DataAbstract includes the RemObjects SDK, which is one of the most flexible and easy ways available to build an API. The RemObjects SDK lets you expose methods to your customers in a multitude of ways: from native binary RemObjects calls, to XML-RPC, to JSON, to SOAP, to a local DLL, to Windows Messages, to Named Pipes... even via SMTP/POP.
The beauty is that you'll be able to design one API and then easily expose it to your customers via any or all of these different mechanisms. Just design your API methods, then ask your customers how they'd like to consume them; chances are RemObjects has a message/channel combination that matches their request.
Publish the API as functions in a DLL. Easy enough to code, but limited by what a DLL can express (only plain functions, etc.) and not easy to call from scripts, for example.
Publish the API as COM objects. A bit more complex to implement (especially if you have never used COM before), but very flexible; COM objects can easily be called from scripts if needed (see the script sketch after this list).
Use a standard generic RPC mechanism like SOAP or REST. Better suited for servers; not difficult to implement, but it requires a "listener" to be active to receive the calls.
Use your own protocol to communicate. Takes longer to implement and can be faster than SOAP or REST, but it also requires more work on the customer's side.
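As a hypothetical customer-side illustration of the COM option's scriptability (the ProgID and method name are invented; this requires Windows and the pywin32 package):

```python
# Hypothetical customer script consuming a COM API exposed by the Delphi app.
import win32com.client

app = win32com.client.Dispatch("MyDelphiApp.API")   # hypothetical ProgID
count = app.GetOrderCount("2024-01-01")             # hypothetical method
print(count)
```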
Besides the plain business-logic API, I think it would also be a big advantage if the application offered APIs for generic tasks like:
logging / audit trails
monitoring (performance, statistics)
rights administration
basic administration (shutdown / go to maintenance mode)
messaging (send notifications to users or applications)
