Using Microsoft-Windows-WFP/Pef-WFP-MessageProvider

I'd like to monitor network activity at the transport level; there is no need for Ethernet frames and other low-level details. It looks as though the Microsoft-Windows-WFP and Microsoft-Pef-WFP-MessageProvider providers actually do the task. However, I am not sure that the PEF provider (part of Message Analyzer) is suitable for use by third-party software. Also, I could neither find a manifest for the Microsoft-Windows-WFP provider nor get any traffic from it.
The questions are:
Are these providers intended for public use?
Is it possible to use the PEF provider directly, without involving the OPN and PEF infrastructure?
If the providers above are not intended for public use, are there any other ETW providers that log network traffic?
TIA.

Related

Authentication (Passport) enough for security with a Node.js backend server?

Is PassportJS using Facebook authentication enough for an iOS backend with Node.js?
I have the toobusy package as well, to decline requests when things get too busy (I'm guessing it would be good against DDoSes).
I'm thinking of using nginx as a reverse proxy to my Node.js server as well.
What are some more security measures that can scale? Any advice and tips? Anything security-related that I should be concerned about that PassportJS's authenticated session can't handle?
It’s a bit hard to cram all security-related best practices into one post, but for what it’s worth, here’s my take on the issue.
Providing authentication and securing it are two separate things. PassportJS will be able to handle everything related to authentication, but preventing it from being fooled or overwhelmed is an entirely different matter.
One (big) reason for putting PassportJS behind a reverse proxy (RP) is that you’ll be able to provide a first line of defense for anything related to HTTP: header/body lengths and data, allowed methods, duplicate/unwanted headers, etc.
Nginx/Apache/HAProxy all provide excellent facilities to handle these cases, and on the upside you get a nice separation of concerns as well: let the reverse proxy handle security and let PassportJS handle authentication. Architecture-wise it will also make more sense, because you’ll be able to hide the number and infrastructure of your PassportJS nodes. Basically, you want to make it appear as if there is only one entry point for your clients. Scaling out will also be easier with this architecture. As always, make sure that your RP(s) keep as little state as possible, preferably none, in order to scale linearly.
In order to configure your RP properly, you need to really understand how PassportJS’ protocols work (in case you want to provide more authentication methods than just Facebook’s). Knowing this, you can set up your RP(s) to do the following (a minimal sketch of equivalent checks follows the list):
Reject any disallowed HTTP request method (TRACE, OPTIONS, PUT, DELETE, etc.).
Reject requests/headers/payload larger than a known size.
Load-balance your PassportJS nodes.
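These checks belong in the RP itself, but to make the list concrete, here is a minimal, hypothetical sketch of the first two checks as Express/TypeScript middleware (Express, the allowed methods, and the size cap are my assumptions, not anything mandated by PassportJS):

```typescript
// Hypothetical sketch: the request filtering an RP would normally do,
// expressed as Express middleware. Allowed methods and the size cap are
// placeholders; tune them to your actual PassportJS routes.
import express, { Request, Response, NextFunction } from "express";

const app = express();

const ALLOWED_METHODS = new Set(["GET", "POST", "HEAD"]);
const MAX_BODY_BYTES = 16 * 1024; // reject payloads larger than a known size

app.use((req: Request, res: Response, next: NextFunction) => {
  // Reject any disallowed HTTP method outright (405 Method Not Allowed).
  if (!ALLOWED_METHODS.has(req.method)) {
    res.status(405).set("Allow", [...ALLOWED_METHODS].join(", ")).end();
    return;
  }
  // Reject requests that declare a body larger than we accept (413).
  const declared = Number(req.headers["content-length"] ?? 0);
  if (declared > MAX_BODY_BYTES) {
    res.status(413).end();
    return;
  }
  next();
});

// The body parser can enforce the same cap while actually reading the body.
app.use(express.json({ limit: MAX_BODY_BYTES }));
```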
One thing to be on the lookout for in terms of load balancing is sticky sessions. Some authenticators store all their state in an encrypted cookie; others issue a simple session handle that can only be understood by the node that created the session. So unless you have session sharing enabled for the latter type (if you need PassportJS resilience), you need to configure your RP to handle sticky sessions. This should be the maximum amount of state the RPs handle. Configured correctly, this may even keep working if you need to restart an RP.
As you diligently pointed out, toobusy (or an equivalent) should be in place to handle throttling. In my experience, HAProxy is a bit easier to work with than the other RPs with regard to throttling, but toobusy should work fine too, especially if you are already familiar with it.
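For reference, wiring toobusy in as load-shedding middleware looks roughly like this (a sketch assuming the toobusy-js package and Express; the lag threshold is a placeholder):

```typescript
// Load-shedding sketch with toobusy-js: respond 503 as soon as the
// Node event loop lags, instead of letting requests pile up.
import toobusy from "toobusy-js";
import express, { Request, Response, NextFunction } from "express";

const app = express();

// How much event-loop lag counts as "too busy" (the package default is 70 ms).
toobusy.maxLag(70);

app.use((req: Request, res: Response, next: NextFunction) => {
  if (toobusy()) {
    res.status(503).send("Server is too busy right now, try again shortly.");
    return;
  }
  next();
});

process.on("SIGINT", () => {
  toobusy.shutdown(); // stop the internal lag-polling timer so the process can exit
  process.exit();
});
```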
Another thing that may or may not be in your control is network partitioning. Obviously the RPs need to be accessible, but they should act as relays for your PassportJS nodes. Best practice, if possible, is to put your authentication nodes on a separate network/DMZ from your backend servers, so that they cannot be reached directly other than through the RP. If compromised, they shouldn’t be usable as stepping stones to the backend/internal network.
As per the Passport documentation:
"support authentication using a username and password, Facebook, Twitter, and more."
It is middleware that makes it possible to integrate multiple kinds of authentication strategies with Node.js.
You should consider the purpose of the application: does it only support Facebook authentication, or also a custom register/login process? If it provides the second option, then it is better not to rely on the auth token of any social networking site like Facebook, Twitter, or any other.
The better option is to create your own token, such as a JWT, and bind it to the user across the multiple platforms. It will help you extend the scope of your project to integrate other social networking sites.
Here is a link on integrating JWT in Node.js:
https://scotch.io/tutorials/authenticate-a-node-js-api-with-json-web-tokens
Similarly, there are many other blogs and tutorials available that cover integrating JWT with Node.js.
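To make the idea concrete, issuing and verifying your own token looks roughly like this (a sketch assuming the jsonwebtoken package; the secret, payload shape, and lifetime are placeholders):

```typescript
// Sketch: issue our own JWT after PassportJS confirms the social login,
// and verify it on later requests instead of trusting the provider's token.
import jwt from "jsonwebtoken";

const SECRET = process.env.JWT_SECRET ?? "change-me"; // keep out of source control

// Bind the token to our internal user id, whatever platform they logged in from.
function issueToken(userId: string): string {
  return jwt.sign({ sub: userId }, SECRET, { expiresIn: "1h" });
}

// Returns the user id if the token is valid, or null if expired/tampered.
function verifyToken(token: string): string | null {
  try {
    const payload = jwt.verify(token, SECRET) as jwt.JwtPayload;
    return payload.sub ?? null;
  } catch {
    return null;
  }
}
```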

Best choice for a robust self-hosted server: WCF vs. ASP.NET Web API

We currently have a .NET 4 application that consists of a Windows service running in the background and local or remote clients (normally only 1-3).
The clients have a WPF GUI and need some data from the Windows service. Therefore, we use WCF with the NamedPipe binding for a local client and the NetTcp binding for remote clients. This works, but we often have problems with endpoints that are not reachable (channel faulted or not found, etc.). We already try to rebuild faulted connections, but it all seems to be pretty fragile...
Now enter Web API: it looks like an HTTP-based stack might be more robust (no channels, no endpoints, and it can be self-hosted in a Windows service as well). There seem to be no problems with broken channels, because each request is handled individually. So if something fails, you just repeat the request. (And we have experience with ASP.NET MVC from other apps, so this is not new to us.)
Now we are wondering what our best bet might be. Is it better to "harden" our existing WCF service (one service interface with about 15 operations) or to move the interface to Web API and run it as HTTP requests (with JSON data)? Performance is not our main issue here...
Any ideas?
Hartmut
I recommend you stick with WCF (SOAP) services for your WPF application rather than moving to Web API. There are a number of reasons for this. First, I think we need to consider what the new Web API is trying to address: namely, to provide a framework for supporting RESTful/HTTP/hypermedia services. This is likely to be a good fit for building applications that make heavy use of HTTP, such as web, mobile, and JavaScript applications, where you want to maximise the "reach" or interoperability of your services (irrespective of platform). This is not to say that you can't use it for WPF clients, but in your case, where all traffic is local to your domain, it makes more sense to stick with your current implementation.
The binding choices you have made for your services/clients sound OK to me. I would focus on why your channels are faulting and address those issues. You may also want to consider hosting your services in IIS and using WAS to expose your non-HTTP endpoints. I have had much success with this in the past, and for the most part it has been pretty stable. It also takes away a few of the headaches of managing your own host. If you are concerned about the TCP binding faults, then just create a new HTTP or wsHttp endpoint and use that instead. This will give you exactly the same transport Web API uses, without having to change your programming model.
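On the question's "just repeat the request" point: because each HTTP request is independent, client-side retry is trivial compared with rebuilding a faulted WCF channel. A hypothetical TypeScript sketch (the URL, retry count, and backoff are placeholders, and nothing here is specific to either stack):

```typescript
// Retry-per-request sketch: no channel state to rebuild, just call again.
async function getWithRetry<T>(url: string, attempts = 3): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      const res = await fetch(url);
      if (!res.ok) throw new Error(`HTTP ${res.status}`);
      return (await res.json()) as T;
    } catch (err) {
      lastError = err;
      // Brief linear backoff before the next attempt.
      await new Promise((resolve) => setTimeout(resolve, 200 * (i + 1)));
    }
  }
  throw lastError;
}

// Usage (hypothetical endpoint): await getWithRetry("http://localhost:8080/api/items");
```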

Need my apps to talk to each other

Some prospective clients have just found it interesting to be able to share data and information with each other in a Delphi 2007 application that I am developing.
They all have the same application.
All have independent databases.
But all have the same installed application, and there are some data types that they might want to share (replicate) between their databases.
How can I enable them to share data with other users of the same application program, but not with everybody on the whole internet?
I would like this to be as automatic as possible; I have already considered approaches that involve manually sending emails.
I know DataSnap is an option; is there any other?
UPDATE:
The idea is to enable companies that have the same application to share data.
They should be able to select which partner to send to and what to send.
I have been investigating DataSnap, but I would like to know if there is another way to do this.
Another standard way to connect distributed applications and share data and information is through message-oriented middleware (MOM). There are many open source middleware products (message brokers) available that can be used via Delphi client libraries, even in multithreaded Delphi server applications. (Disclaimer: I am the author of message broker client libraries for Delphi and Free Pascal.)
There are many essential differences between web services and message brokers, like peer-to-peer and publish/subscribe communication models. They also play a key role in enterprise application integration patterns.
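For illustration, here is roughly what the publish/subscribe model looks like from a broker client. This is a sketch using the amqplib package against a RabbitMQ broker; the question is about Delphi, where the flow with a broker client library is analogous, and the exchange name and broker URL are placeholders:

```typescript
// Publish/subscribe sketch over a message broker (RabbitMQ via amqplib).
// Every subscribed application instance receives each published record.
import amqp from "amqplib";

async function main(): Promise<void> {
  const conn = await amqp.connect("amqp://localhost"); // placeholder broker URL
  const ch = await conn.createChannel();

  // A fanout exchange copies each message to every bound queue.
  await ch.assertExchange("shared-data", "fanout", { durable: false });

  // Subscriber side: each instance gets its own exclusive queue.
  const q = await ch.assertQueue("", { exclusive: true });
  await ch.bindQueue(q.queue, "shared-data", "");
  await ch.consume(
    q.queue,
    (msg) => {
      if (msg) console.log("received:", msg.content.toString());
    },
    { noAck: true }
  );

  // Publisher side: any instance can broadcast a record to all partners.
  ch.publish("shared-data", "", Buffer.from(JSON.stringify({ hello: "partner" })));
}

main().catch(console.error);
```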
One standard way to connect applications to other applications is to create a web service, and then a client that consumes that web service, called a web client. Technologies like SOAP and REST refer to such web services and web clients.
Your question is vague, perhaps because English is not your first language, but you should probably edit it and be more specific.
If all your applications are going to talk directly to each other, that is called "peer-to-peer networking", and there are huge problems with enabling that kind of communication directly over the internet. It is much easier if you build a server that all these applications connect to.
As an example, consider the IRC chat service: consider writing a web service that acts as the chat server, with all your clients as "chat clients". Sharing data could follow the same idea as creating "rooms" or "channels" on a chat server.
I get the idea that you want something like a peer-to-peer data replication service. I think the closest you're going to get to that is something like RSS feeds (used by blog syndication services): you subscribe via a simple web service and pull down the new content on some periodic basis. Since that data has to be published to a central server, a truly peer-to-peer approach is out of the question. If you don't have your own web server running on a hosting service or in the cloud, and you need a truly peer-to-peer solution, I am not aware of any way to do that, at least not without an incredible custom engineering effort.
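The pull model described above is language-agnostic. As a hypothetical sketch (shown in TypeScript only for brevity, with the feed URL, item shape, and polling interval all placeholders), each client would poll a central feed and pull down anything newer than what it last saw:

```typescript
// Pull-based sharing sketch: poll a central feed endpoint periodically
// and fetch only the items published since the last successful poll.
interface FeedItem {
  id: string;
  updatedAt: string; // ISO timestamp of the change
  payload: unknown;  // the shared record itself
}

let lastSeen = "1970-01-01T00:00:00Z";

async function pollFeed(): Promise<void> {
  const res = await fetch(
    `https://example.com/feed?since=${encodeURIComponent(lastSeen)}` // placeholder URL
  );
  const items: FeedItem[] = await res.json();
  for (const item of items) {
    // Upsert item.payload into the local database here.
    if (item.updatedAt > lastSeen) lastSeen = item.updatedAt;
  }
}

// Pull new content on some periodic basis, as suggested above.
setInterval(() => pollFeed().catch(console.error), 60_000);
```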

Designing intranet applications: any special programmatic considerations to take care of?

I mostly work on CMSs, where there is no concern about who can or cannot enter the site because it is for public use. Now I have been assigned to work on an intranet app, and I would like to know whether, on the programmatic side, I must take care of extra security features, or whether all that "intranet security" is handled by the network hardware and configuration (routers, switches, hubs, etc.).
First of all, I'm sorry for my poor English; it is not my native language.
So... in my company we deny all traffic from external interfaces to the intranet server (with a hardware-based firewall).
This should be obvious. Additionally, we only allow traffic from the internal IP address range.
The application itself is just a well-known CMS. The MySQL server only allows traffic from/to the Apache server, and we closed all ports except HTTPS (on Apache) and a non-default port for SSH. This little setup, on a different network range, also sits behind a proxy/firewall (Untangle) which takes care of extra protection, logging, and NAT.
Hope this enlightens you a little more.
EDIT: The CMS itself allows you to install a module/plug-in to manage permissions based on user level/category.

What protocol does a Kindle use to phone home?

What protocol does a Kindle 3 WiFi use to communicate with Amazon and download files on your kindle.com/free.kindle.com account?
I know I can use some network sniffing software to find it out, but I just need a quick answer.
It's a REST-based protocol that is not public and is hardened to prevent non-Kindle devices from using it. The actual HTTP traffic is encrypted and requires multiple forms of authentication. Only official Kindles can make the calls; everyone else is blocked using multiple techniques.
