Hi, I'm relatively new to kernel programming (I've got a lot of C++ development experience though) and have a goal that I want to achieve:
Detecting and conditionally blocking attempts from userland programs to read or write specific memory addresses located in my own userland process. This has to be done from a driver.
I've set up a development environment (virtual machine running the latest Windows 10 + VirtualKD + WinDbg) and already successfully deployed a small KMDF test driver via the Visual Studio integration (over LAN).
So my question is now:
How do I detect/intercept ReadProcessMemory/WriteProcessMemory calls to my ring3 application? Simply blocking handles isn't enough here.
It would be nice if someone could point me in the right direction, either by linking a (non-outdated) example or just by telling me how to do this.
Update:
I've read a lot about filter drivers and hooking Windows APIs from kernel mode, but I really don't want to mess with PatchGuard, and I don't really know how to filter ReadProcessMemory/WriteProcessMemory calls coming from userland. It's not important to protect my program from drivers, only from ring3 applications.
Thank you :)
This code from here should do the trick.
OB_PREOP_CALLBACK_STATUS PreCallback(PVOID RegistrationContext,
    POB_PRE_OPERATION_INFORMATION OperationInformation)
{
    UNREFERENCED_PARAMETER(RegistrationContext);

    PEPROCESS OpenedProcess = (PEPROCESS)OperationInformation->Object,
              CurrentProcess = PsGetCurrentProcess();

    // Getting the PEPROCESS pointers from the PIDs. In real code, do these lookups
    // once (e.g. when the PIDs become known), cache the results, and
    // ObDereferenceObject them later, instead of repeating them on every callback.
    PsLookupProcessByProcessId(ProtectedProcess, &ProtectedProcessProcess);
    PsLookupProcessByProcessId(Lsass, &LsassProcess);
    PsLookupProcessByProcessId(Csrss1, &Csrss1Process);
    PsLookupProcessByProcessId(Csrss2, &Csrss2Process);

    if (OpenedProcess == Csrss1Process) // Making sure not to strip csrss's handles, that would cause a BSOD
        return OB_PREOP_SUCCESS;
    if (OpenedProcess == Csrss2Process) // Making sure not to strip csrss's handles, that would cause a BSOD
        return OB_PREOP_SUCCESS;
    if (OpenedProcess == LsassProcess) // Same for lsass
        return OB_PREOP_SUCCESS;
    if (OpenedProcess == CurrentProcess) // Don't strip the caller's handles to itself (second safety check)
        return OB_PREOP_SUCCESS;
    if (CurrentProcess == ProtectedProcessProcess) // Making sure the game can open a process handle to itself
        return OB_PREOP_SUCCESS;
    if (OperationInformation->KernelHandle) // Allow drivers to get a handle
        return OB_PREOP_SUCCESS;

    // PsGetProcessId((PEPROCESS)OperationInformation->Object) is the PID of the process the handle
    // is being opened to; if it matches the protected process's PID, strip the handle's access rights.
    if (PsGetProcessId((PEPROCESS)OperationInformation->Object) == ProtectedProcess)
    {
        if (OperationInformation->Operation == OB_OPERATION_HANDLE_CREATE) // stripping a new handle
        {
            OperationInformation->Parameters->CreateHandleInformation.DesiredAccess = (SYNCHRONIZE | PROCESS_QUERY_LIMITED_INFORMATION);
        }
        else // OB_OPERATION_HANDLE_DUPLICATE
        {
            OperationInformation->Parameters->DuplicateHandleInformation.DesiredAccess = (SYNCHRONIZE | PROCESS_QUERY_LIMITED_INFORMATION);
        }
        return OB_PREOP_SUCCESS;
    }

    return OB_PREOP_SUCCESS;
}
This code, once registered with ObRegisterCallbacks, will detect when a new handle to your protected process is created or duplicated and will strip its access rights unless the request comes from csrss, lsass, the process itself, or kernel mode. The exemptions are there to prevent blue screens from critical processes being denied a handle to your application.
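For reference, here is a minimal sketch of how such a callback might be registered from DriverEntry. The altitude string, gObCallbackHandle, and the helper name are placeholders, and error handling is trimmed; also note that ObRegisterCallbacks requires the driver to be linked with /INTEGRITYCHECK, otherwise it fails with STATUS_ACCESS_DENIED.
#include <ntddk.h>

PVOID gObCallbackHandle = NULL;

NTSTATUS RegisterProtectionCallback(VOID)
{
    OB_OPERATION_REGISTRATION operation = { 0 };
    OB_CALLBACK_REGISTRATION registration = { 0 };

    operation.ObjectType = PsProcessType;                 // filter process handle operations
    operation.Operations = OB_OPERATION_HANDLE_CREATE |
                           OB_OPERATION_HANDLE_DUPLICATE;
    operation.PreOperation = PreCallback;                 // the callback shown above
    operation.PostOperation = NULL;

    registration.Version = OB_FLT_REGISTRATION_VERSION;
    registration.OperationRegistrationCount = 1;
    registration.OperationRegistration = &operation;
    registration.RegistrationContext = NULL;
    RtlInitUnicodeString(&registration.Altitude, L"321000"); // arbitrary example altitude

    return ObRegisterCallbacks(&registration, &gObCallbackHandle);
}

// In the unload routine:
//     if (gObCallbackHandle) ObUnRegisterCallbacks(gObCallbackHandle);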
I have a bit of code that creates a salix webapp and runs it from an IDE popup menu by making use of util::Webserver. In order to allow the command to be used multiple times, I try to shut down any existing webserver at that address first, but it doesn't seem to be working. No matter what, it always comes up with an illegal argument error stating that "shutdown" is not possible.
void run_game(Tree t, loc s){
  t = annotate(t);
  PSGAME g = ps_implode(t);
  Checker c = check_game(g);
  Engine engine = compile(c);
  loc host = |http://localhost:9050/|;
  try { util::Webserver::shutdown(host); } catch: ;
  util::Webserver::serve(host, load_app(engine)().callback, asDaemon = true);
  println("Serving content at <host>");
}
What I expect to happen is that the first time this function is run, shutdown throws an error that is silenced because no webserver exists, and then serve starts the webserver. If the user runs the function again, shutdown successfully runs, clearing the address binding, and serve binds successfully to the address.
What actually happens the second time is that shutdown still errors, the error is silenced, and then serve complains that the address is already in use.
I'm looking for any solution that would allow me to start a salix app through the IDE's popup menu (previously registered) at the same address.
PS_contributions =
  {
    PS_style,
    popup(
      menu(
        "PuzzleScript",
        [
          action("Run Game", run_game)
        ]
      )
    )
  };
registerContributions(PS_NAME, PS_contributions);
Right; we ran into similar issues and decided to special-case actions that run web apps. So we added this:
data Menu = interaction(str label, Content ((&T <: Tree) tree, loc selection) server)
See https://github.com/usethesource/rascal-eclipse/blob/bb70b0f6e8fa6f8c227e117f9d3567a0c2599a54/rascal-eclipse/src/org/rascalmpl/eclipse/library/util/IDE.rsc#L119
Content comes from the Content module, which basically wraps any Response(Request) servlet.
So you can wrap your salix web app in a Content and return it, given the current selection and the current tree.
The IDE will take care of starting and also shutting down the server. It does that every time an interaction with the same label is created, or after 30 minutes of silence on the given HTTP port.
I'm trying to pipe the output (logs) of a program to a Go program which aggregates/compresses the output and uploads it to S3. The command to run the program is "/program1 | /logShipper". The logShipper is written in Go and it simply reads from os.Stdin and writes to a local file. The local file is processed by another goroutine and uploaded to S3 periodically. There are some existing Docker log drivers, but we are running the container on a fully managed provider and the log processing charge is pretty expensive, so we want to bypass the existing solution and just upload to S3.
The main logic of the logShipper is simply to read from os.Stdin and write to a file. It works correctly when running on the local machine, but when running in Docker the goroutine blocks at reader.ReadString('\n') and never returns.
go func() {
    reader := bufio.NewReader(os.Stdin)
    mu.Lock()
    output = openOrCreateOutputFile(&uploadQueue, workPath)
    mu.Unlock()
    for {
        text, _ := reader.ReadString('\n')
        now := time.Now().Format("2006-01-02T15:04:05.000000000Z")
        mu.Lock()
        output.file.Write([]byte(fmt.Sprintf("%s %s", now, text)))
        mu.Unlock()
    }
}()
I did some research online but couldn't find out why it's not working. One possibility I'm thinking of is that Docker might redirect stdout somewhere, so the pipe doesn't work the same way as it does on a plain Linux box? (It looks like the logShipper can't read anything from program1.) Any help or suggestion on why it's not working is welcome. Thanks.
Edit:
After doing more research I realized it's bad practice to handle the logs this way. I should rely more on Docker's log drivers to handle log aggregation and shipping. However, I'm still interested in finding out why it doesn't read anything from the pipe's source program.
I'm not sure about the way Docker handles output, but I suggest that you extract the file descriptor with os.Stdin.Fd() and then resort to using the golang.org/x/sys/unix package as follows:
// Long way; for the short way, jump
// straight down to it.
//
// Retrieve the file descriptor and
// cast it to int, because the Fd method
// returns a uintptr.
fd := int(os.Stdin.Fd())

// Extract the file descriptor's flags.
// It's safe to drop the error: if it's there
// and it's not nil, you won't be able to read from
// Stdin anyway, unless it's a notice
// to try again, which mostly should not be
// the case.
flags, _ := unix.FcntlInt(uintptr(fd), unix.F_GETFL, 0)

// Check whether nonblocking reading is enabled.
nb := flags&unix.O_NONBLOCK != 0

// If it isn't, you can enable it with
// unix.SetNonblock, which is also the
// -- SHORT WAY HERE --
err := unix.SetNonblock(fd, true)
The difference between the long and the short way is that the long way will definitely tell you whether the problem is the absence of the nonblocking flag or not.
If this is not the case, then I have no other ideas personally.
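If it helps, here is a minimal, self-contained sketch of the "long way" used purely as a diagnostic: it reports whether stdin is in nonblocking mode and then tries the plain blocking read loop, so you can see which part misbehaves inside the container. Only the /program1 pipeline comes from your question; the rest is an assumption about how you might wire it up (e.g. running it in place of /logShipper).
package main

import (
    "bufio"
    "fmt"
    "os"

    "golang.org/x/sys/unix"
)

func main() {
    fd := int(os.Stdin.Fd())

    // Report whether stdin currently has O_NONBLOCK set.
    flags, err := unix.FcntlInt(uintptr(fd), unix.F_GETFL, 0)
    if err != nil {
        fmt.Fprintln(os.Stderr, "F_GETFL failed:", err)
    } else {
        fmt.Fprintln(os.Stderr, "stdin O_NONBLOCK set:", flags&unix.O_NONBLOCK != 0)
    }

    // Then try the plain blocking read loop and see where it stops.
    reader := bufio.NewReader(os.Stdin)
    for {
        text, err := reader.ReadString('\n')
        if len(text) > 0 {
            fmt.Print("got: ", text)
        }
        if err != nil { // io.EOF when the writing end closes
            fmt.Fprintln(os.Stderr, "read ended:", err)
            return
        }
    }
}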
I created a WebExtension for Firefox (currently using Nightly 52) that uses native messaging to launch a Java program on Linux (Ubuntu 14, 32-bit).
The WebExtension loads, reads the .json file, and reads the path, which points to a script that starts the Java program. The JSON and the path are correct, as when I use:
var native = browser.runtime.connectNative("passwordmanager");
console.log("native.name" + native.name); //outputs passwordmanager.
native.onDisconnect.addListener(function(m) { console.log("Disconnected"); });
The above code prints the name of the native port and also prints "Disconnected". So I'm guessing the native app is terminating for some reason.
The application is only a skeleton right now that just writes to stdout and reads from stdin, and it works correctly if I launch it directly through the shell script.
While debugging the WebExtension, I am not able to step into the call to connectNative, as it just steps over that call instead of stepping in, so I'm kind of out of options as to what's going wrong.
Please let me know if anyone has been able to create a native messaging app based on an FF WebExtension, and any pointers on what I might be doing wrong would be appreciated.
Thanks
This solution here shows you how to detect onConnect and onFail. It should help you figure out your real problem.
So I don't think you can do proper error handling with connectNative from the JS side alone. You can do some error handling if you get the exe side involved, but you can't get a string for the "error reason" when an error occurs. The error is only logged to the console.
First make sure to set your developer prefs, so messages show in your browser console. You can use this addon - https://addons.mozilla.org/en-US/firefox/addon/devprefs/ - or read that addon's description; it gives you the MDN page with the prefs to set.
Then this is how you can do some sort of error handling (without an error reason) (pseudo-code - I might need a .bind in the callbacks):
function connectNative(aAppName, onConnect, onFail) {
    var connected = false;

    var listener = function(payload) {
        if (!connected) {
            connected = true;
            port.onDisconnect.removeListener(failedConnect);
            onConnect();
        } else {
            // process messages
        }
    };

    var failedConnect = function() {
        onFail('failed for an unobtainable reason - however, see the browser console, as it got logged there');
    };

    var port = chrome.runtime.connectNative(aAppName);
    port.onMessage.addListener(listener);
    port.onDisconnect.addListener(failedConnect);
    return port;
}
Now in your exe, as soon as it starts up, make it write something to stdout. That will trigger the onConnect.
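Usage would then look something like this (the "passwordmanager" name comes from your manifest; the handler bodies are just placeholders):
var port = connectNative(
    "passwordmanager",
    function onConnect() {
        console.log("native app is up - it sent its first message");
        port.postMessage({ cmd: "ping" }); // now safe to talk to it
    },
    function onFail(reason) {
        console.error("could not start native app:", reason);
    }
);
Note that the wrapper above consumes the first incoming message purely as the "connected" signal, so the exe's start-up write isn't delivered to your own message handling.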
Can we use the graph database Neo4j with React.js? If not, is there any alternative option for including a graph database in React.js?
Easily, all you need is neo4j-driver: https://www.npmjs.com/package/neo4j-driver
Here is the simplest usage:
neo4j.js
//import { v1 as neo4j } from 'neo4j-driver'
const neo4j = require('neo4j-driver').v1
const driver = neo4j.driver('bolt://localhost', neo4j.auth.basic('username', 'password'))
const session = driver.session()
session
  .run(`
    MATCH (n:Node)
    RETURN n AS someName
  `)
  .then((results) => {
    results.records.forEach((record) => console.log(record.get('someName')))
    session.close()
    driver.close()
  })
It is best practice to always close the session after you get the data. It is inexpensive and lightweight.
It is best practice to only close the driver once your program is done (like with MongoDB). You will see extreme errors if you close the driver at a bad time, which is incredibly important to note if you are a beginner. You will see errors like 'connection to server closed', etc. In async code, for example, if you run a query and close the driver before the results are parsed, you will have a bad time.
You can see in my example that I close the driver afterwards, but only to illustrate proper cleanup. If you run this code in a standalone JS file to test, you will see that Node.js hangs after the query and you need to press CTRL + C to exit. Adding driver.close() fixes that. Normally, the driver is not closed until the program exits/crashes, which is never in a backend API, and not until the user logs out in the frontend.
Knowing this now, you are off to a great start.
Remember: call session.close() immediately every time, and be careful with driver.close().
You could put this code in a React component or action creator easily and render the data.
You will find it no different than hooking up and working with Axios.
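For illustration, here is a rough sketch of what that could look like in a React component. The component, query, and property names are made up, and it assumes a module like the neo4j.js above exports the driver:
import React, { useEffect, useState } from 'react'
import { driver } from './neo4j' // hypothetical module exporting the driver created above

function NodeList() {
    const [names, setNames] = useState([])

    useEffect(() => {
        const session = driver.session()
        session
            .run('MATCH (n:Node) RETURN n.name AS name')
            .then((results) => {
                setNames(results.records.map((record) => record.get('name')))
                session.close() // close the session right away, keep the driver open
            })
            .catch((err) => {
                console.error(err)
                session.close()
            })
    }, [])

    return (
        <ul>
            {names.map((name) => <li key={name}>{name}</li>)}
        </ul>
    )
}

export default NodeList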
You can also run statements in a transaction, which is beneficial for write-locking affected nodes. You should research that thoroughly first, but the transaction flow is like this:
const session = driver.session()
const tx = session.beginTransaction()
tx
.run(query)
.then(// same as normal)
.catch(// errors)
// the difference is you can chain multiple transactions:
const tx1 = await tx.run().then()
// use results
const tx2 = await tx.run().then()
// then, once you are ready to commit the changes:
if (results.good !== true) {
    tx.rollback()
    session.close()
    throw error
}
await tx.commit()
session.close()
const finalResults = { tx1, tx2 }
return finalResults
// in my experience, you have to await tx.commit
// in async/await syntax conditions, otherwise it may not commit properly
// that operation is not instant
tl;dr;
Yes, you can!
You are mixing two different technologies together. Neo4j is a graph database and React.js is a front-end framework.
You can connect to Neo4j from JavaScript - http://neo4j.com/developer/javascript/
Interesting topic. I am using the driver in a React app and recently experienced some issues. I am closing the session every time a lifecycle hook completes, like in your example. When there were more intensive queries I would see a timeout error. Going back to my setup, I decided to experiment by closing the driver after some more expensive queries, and it looks like (still needs more testing) the crashes are gone.
If you are deploying a real-world application, I would urge you to think about authentication and authorization when using a DB-to-React-only setup, as you would have to store the username/password of the Neo4j server in the client. I am looking into options for having the Neo4j server issue a token and receive it for authorization, but the best practice is for sure to have a Node.js server in the middle with something like Passport to handle authentication.
So, all in all, maybe the best scenario is to only use the driver in Node and have the browser always communicate with the Node server using axios...
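A rough sketch of that middle tier, assuming an Express server (the /api/nodes route, the query, and the port are made up for illustration):
// server.js - hypothetical Node.js middle tier
const express = require('express')
const neo4j = require('neo4j-driver').v1

const driver = neo4j.driver('bolt://localhost', neo4j.auth.basic('username', 'password'))
const app = express()

app.get('/api/nodes', (req, res) => {
    const session = driver.session()
    session
        .run('MATCH (n:Node) RETURN n.name AS name')
        .then((results) => {
            session.close()
            res.json(results.records.map((record) => record.get('name')))
        })
        .catch((err) => {
            session.close()
            res.status(500).json({ error: err.message })
        })
})

app.listen(3000)

// In the React app the browser then only talks to this server, e.g.:
//   axios.get('/api/nodes').then((res) => setNames(res.data))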
I have a really odd and hard-to-diagnose issue with MSBuild / TFS. I have a solution that contains about 12 different build configurations. When running on the build server, it takes maybe 30 minutes to build the lot; it has worked fine for weeks now but is occasionally failing.
Most of the time, when it fails it'll be an error like this:
19:25:45.037 2>TestPlanDocument.cpp(1): fatal error C1093: API call 'GetAssemblyRefHash' failed '0x8007000e' : ErrorMessage: Not enough storage is available to complete this operation. [C:\Builds\1\ICCSim Card Test Controller\ICCSimCTC Release\src\CardTestController\CardTestController.vcxproj]
The error will sometimes happen on a different file. It won't happen for every build configuration either; it's very inconsistent, and occasionally all of them even build successfully. There's not much difference between the build configurations either; mostly it's just a few string changes, and of course they all build fine locally.
The API call in question is usually GetAssemblyRefHash, but not always. I don't think this is the issue, as Googling for GetAssemblyRefHash specifically brings up next to nothing. I suspect there's some kind of resource issue at play here, but I'm at a loss as to what: there's plenty of HDD space (hundreds of GBs) and plenty of RAM (the machine originally had a 4GB minimum allocated, but it's dynamic as it's a Hyper-V VM - it never pushed above 2.5GB. I upped this to an 8GB minimum just in case and there's been no change).
I've set the build verbosity to diagnostic and it doesn't really show anything else that's helpful, just the same error.
For reference, the build server is fully up to date on all patches. It's running Windows Server 2012 R2, has TFS 2013 and VS 2013 installed, both are on Update 4.
I'm really at a loss at this point and would appreciate any help or pointers.
EDIT: Just to keep people up to date: the compiler toolchain was in 32-bit mode; however, even after switching to 64-bit, the issue persists.
I think I found the source, but I still don't know the reason.
Browsing through the Microsoft Shared Source, we can find the source for GetAssemblyRefHash():
HRESULT CAsmLink::GetAssemblyRefHash(mdToken FileToken, const void** ppvHash, DWORD* pcbHash)
{
    if (TypeFromToken(FileToken) != mdtAssemblyRef) {
        VSFAIL( "You can only get AssemblyRef hashes for assemblies!");
        return E_INVALIDARG;
    }
    HRESULT hr;
    CAssembly *file = NULL;
    if (FAILED(hr = m_pImports->GetFile( FileToken, (CFile**)&file)))
        return hr;
    return file->GetHash(ppvHash, pcbHash);
}
There are only two places here to investigate: the call to m_pImports->GetFile(), where m_pImports is CAssembly *m_pImports;, and the call to file->GetHash().
m_pImports->GetFile() is here, and is a dead end:
HRESULT CAssembly::GetFile(DWORD index, CFile** file)
{
    if (!file)
        return E_POINTER;
    if (RidFromToken(index) < m_Files.Count()) {
        if ((*file = m_Files.GetAt(RidFromToken(index))))
            return S_OK;
    }
    return ReportError(E_INVALIDARG);
}
file->GetHash(), which is here:
HRESULT CAssembly::GetHash(const void ** ppvHash, DWORD *pcbHash)
{
    ASSERT( ppvHash && pcbHash);
    if (IsInMemory()) {
        // We can't hash an InMemory file
        *ppvHash = NULL;
        *pcbHash = 0;
        return S_FALSE;
    }

    if (!m_bDoHash || (m_cbHash && m_pbHash != NULL)) {
        *ppvHash = m_pbHash;
        *pcbHash = m_cbHash;
        return S_OK;
    }

    DWORD cchSize = 0, result;
    // AssemblyRefs ALWAYS use CALG_SHA1
    ALG_ID alg = CALG_SHA1;
    if (StrongNameHashSize( alg, &cchSize) == FALSE)
        return ReportError(StrongNameErrorInfo());

    if ((m_pbHash = new BYTE[cchSize]) == NULL)
        return ReportError(E_OUTOFMEMORY);
    m_cbHash = cchSize;

    if ((result = GetHashFromAssemblyFileW(m_Path, &alg, (BYTE*)m_pbHash, cchSize, &m_cbHash)) != 0) {
        delete [] m_pbHash;
        m_pbHash = 0;
        m_cbHash = 0;
    }

    *ppvHash = m_pbHash;
    *pcbHash = m_cbHash;
    return result == 0 ? S_OK : ReportError(HRESULT_FROM_WIN32(result));
}
We can see that about halfway down, it tries to allocate room to store the byte[] result, and when that fails, it returns E_OUTOFMEMORY (0x8007000e), which is the error code you're seeing:
if ((m_pbHash = new BYTE[cchSize]) == NULL)
    return ReportError(E_OUTOFMEMORY);
m_cbHash = cchSize;
There's other paths to consider, but this seems like the most obvious source. So it looks like the problem is that a plain memory allocation is failing.
What could cause this?
Lack of free physical memory pages / swap
Memory fragmentation in the process.
Inability to reserve commit space for this in the swap file
Lack of address space
At this point, my best guess would be memory fragmentation. Have you triple-checked that the Microsoft C++ compiler is running in 64-bit mode? Perhaps see if you can debug the compiler (the Microsoft symbol servers may be able to help you here), set a breakpoint on that line, and dump the heap when it happens.
Some specifics on diagnosing heap fragmentation: fire up Sysinternals' VMMap when the compiler breaks and look at the free list - you need three chunks of at least 64 kB free to perform an allocation; anything less than 64 kB won't get used, and two 64 kB chunks are reserved.
Okay, I have an update to this! I opened a support ticket with Microsoft and have been busy working with them to figure out the issue.
They went down the same paths as outlined above and came to the same conclusion - it's not a resources issue.
To cut a long story short, Microsoft has now acknowledged that this is likely a bug in the VC++ compiler, almost certainly caused by a race condition (though this is unconfirmed). There's no word on whether they'll fix it in a future release.
There is a workaround: use the /MP flag at the project level to limit the number of compiler processes opened by MSBuild, without disabling multiple instances entirely (which for me doubled build times).
To do this, go to your project properties and under Configuration Properties -> C/C++ -> Command Line, you need to specify the /MP flag and then a number to limit the number of processes.
My build server has 8 virtual CPUs, and the normal behaviour is equivalent to /MP8, but this causes the bug to sometimes appear. For me, using /MP4 seems to be enough to limit the bug without causing build times to increase too much. If you're seeing a bug similar to this, you may need to experiment with other numbers such as /MP6 or /MP2.
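If you'd rather set it in the project file than through the IDE dialog, the same thing might look roughly like this in the .vcxproj (the /MP4 value is just the example from above; adjust it to your build server):
<!-- inside the relevant configuration's <ItemDefinitionGroup> -->
<ClCompile>
  <!-- cap cl.exe at 4 parallel compiler processes -->
  <AdditionalOptions>/MP4 %(AdditionalOptions)</AdditionalOptions>
</ClCompile>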