swapbuffers minifilter problems - driver

I implemented a minifilter driver based on the swapbuffers example. I made two changes:
attach only to \Device\HarddiskVolume3
"encrypt" by XORing every byte with 0xFF
Encryption works, but volume3 (which on my system is E:) does not: E: is not recognized as a file system, and chkdsk E: reports that all boot sectors are corrupted.
After investigating (using procmon.exe), I found that chkdsk.exe creates a shadow copy of the volume. If the driver attaches to the shadow copy too, chkdsk E: is OK and reports the file system as perfect. But E: remains unrecognized.
Any idea what I should change?
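For reference, the XOR step described in the question amounts to this transform over each I/O buffer (a minimal sketch; the function name is made up, not from the sample):

```c
#include <stddef.h>

/* XOR every byte with 0xFF, as described above. The transform is its own
 * inverse: applying it twice restores the original bytes. Anything that
 * bypasses the filter (raw volume access, the boot sectors chkdsk looks
 * at) sees only the XORed bytes, which is consistent with the symptoms. */
static void xor_transform(unsigned char *buf, size_t len)
{
    for (size_t i = 0; i < len; i++)
        buf[i] ^= 0xFF;
}
```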

Assuming no simple mistake was made (that is, the volume was unmounted, you added the filter, and remounted), the mount/filesystem is evidently not going through your filter.
I noticed a comment in the example code about "not for kernel mode drivers".
What you want to research is "whole disk encryption". A Google search on: windows whole disk encryption will help.
In particular, TrueCrypt does what you want. Since it is open source and available on sourceforge.net, you could download the source and figure out how to hook your stuff in by learning how TrueCrypt does it.
Just one problem: TrueCrypt has security gaps, so the sourceforge.net page is now just migration info to BitLocker. But the code still exists, and other pages have been created where you can get it. Notably, VeraCrypt is a fork of TrueCrypt.
Just one of the pages in the search is: http://www.howtogeek.com/203708/3-alternatives-to-the-now-defunct-truecrypt-for-your-encryption-needs/
UPDATE
Note: After I wrote this update, I realized that there may be hope ... So, keep reading.
A minifilter appears to be for filesystems, not the underlying storage. It may still work; you just need to find a lower-level hook. What about the filter stack altitude? Here's a link: https://msdn.microsoft.com/en-us/library/windows/hardware/ff540402%28v=vs.85%29.aspx It also has documentation on fltmc and the !fltkd debugger extension.
In this [short] blog: http://blogs.msdn.com/b/erick/archive/2006/03/27/562257.aspx it says:
The Filter Manager was meant to create a simple mechanism for drivers to filter file system operations: file system minifilter drivers. File system minifilter drivers are located between the I/O manager and the base filesystem, not between the filesystem and the storage driver(s) like legacy file system filter drivers.
Figuring out what that means will help. Is the hook point between the FS and the I/O manager [which I don't know about] sufficient? Or do you need to hook between the filesystem and the storage drivers [implying a legacy filter]?
My suspicion is that a "legacy" filter driver may be what you need, if the minifilter does not have something that can do the same.
Since your hooks need to work on unmounted storage so that chkdsk will work, this may imply a legacy filter. On the other hand, you mentioned that you were able to hook the shadow copy and it worked for chkdsk. That implies the minifilter has the right stuff.
Here's a link that I think is a bit more informative: http://blogs.msdn.com/b/ntdebugging/archive/2013/03/25/understanding-file-system-minifilter-and-legacy-filter-load-order.aspx It has a direct example about the altitude of an encryption filter. You may just need more hook points and a lower altitude for your minifilter.
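For what it's worth, a minifilter's altitude is assigned in its INF registry section. A hypothetical fragment (the instance name and exact value are placeholders, though 140000-149999 is the documented FSFilter Encryption range):

```
[MiniFilter.AddRegistry]
HKR,"Instances\SwapBuffers Instance","Altitude",0x00000000,"141000"
HKR,"Instances\SwapBuffers Instance","Flags",0x00010001,0x0
```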
UPDATE #2
Swapbuffers just hooks a few things: IRP_MJ_READ, IRP_MJ_WRITE, IRP_MJ_DIRECTORY_CONTROL. These are file-I/O related, not device-I/O related. The example is fine, just not necessarily for your purposes.
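From memory, the sample's callback registration table looks roughly like this (an abridged sketch, not compilable outside a WDK driver project; callback names are as I recall them from the sample), which makes it clear that only those three file-I/O majors are intercepted:

```
const FLT_OPERATION_REGISTRATION Callbacks[] = {
    { IRP_MJ_READ,              0, SwapPreReadBuffers,    SwapPostReadBuffers },
    { IRP_MJ_WRITE,             0, SwapPreWriteBuffers,   SwapPostWriteBuffers },
    { IRP_MJ_DIRECTORY_CONTROL, 0, SwapPreDirCtrlBuffers, SwapPostDirCtrlBuffers },
    { IRP_MJ_OPERATION_END }
};
```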
The link I gave you to fltmc is one page in MS's entire reference for filters. If you meander around it, you'll find more interesting things like IoGetDeviceAttachmentBaseRef and IoGetDiskDeviceObject. You need to find the object for the device and filter its I/O operations.
I think that you'll have to read the reference material in addition to examples. As I've said previously, your filter needs to hook more or different things.
In the VeraCrypt source, the Driver subdirectory is an example of the kinds of things you may need to do. In DriveFilter.c, it uses IRP_MJ_READ but also IRP_MN_START_DEVICE [a hook for when the device is started].
Seriously, this may be more work than you imagine. Is this just for fun, or is this just a test case for a much larger project?

Related

Reversing symbols in file name

This is not really about programming, but I don't know where else to ask it. I've just downloaded a torrent with one file in it; the formal name of the file should be "123.avi.exe" (which is typical for viruses and trojans). Now, the interesting thing is that the name is encoded in UTF-16LE as the following bytes:
FFFE3100320033002E002D202E202D202E206900760061002E00650078006500
which gives strange, partially reversed text around ".exe" (try to move the cursor left-to-right through it and you will be surprised):
123.‭‮‭‮iva.exe
But the bad part of it all is that uTorrent shows the non-suspicious ".avi" extension, while when you double-click it in the GUI it runs as an ".exe".
You can test it yourself by creating a dummy file with the name I wrote above. How can I protect myself from running files like that at the system level?
P.S. I've started a similar thread on the uTorrent tracker (not yet approved by a moderator).
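The rename trick above can be detected mechanically. A sketch in C that flags Unicode bidi-override code units in a UTF-16 name (the function is hypothetical, not from any tool; the test data below is decoded from the bytes in the question, minus the BOM):

```c
#include <stdint.h>
#include <stddef.h>

/* Return 1 if a UTF-16 file name contains direction-override characters
 * (U+202D LRO, U+202E RLO) or direction marks (U+200E, U+200F), which
 * have no business appearing in an honest file name. */
static int has_bidi_override(const uint16_t *units, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (units[i] == 0x202D || units[i] == 0x202E ||
            units[i] == 0x200E || units[i] == 0x200F)
            return 1;
    return 0;
}
```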
You have possibly found an active attack using a remote code execution vulnerability in uTorrent and other torrent clients. There have been similar vulnerabilities before: http://www.zerodayinitiative.com/advisories/ZDI-16-674/
It's probably a good idea to contact uTorrent and make them aware of the exploit.
What version of uTorrent are you using?
In general, the best protection is to use the newest stable version of a program.
Even if newer uTorrent versions are infested with annoying advertisements, those can be deactivated.
This question may fit better at https://security.stackexchange.com/

What is the proper way for a program to open and write to a mapped drive without allowing the computer user to do so?

I am working with a program designed to record and display user-input data for tracking courses in a training process. One of the requirements was that we be able to keep a copy of each course's itinerary (in .pdf format) to display alongside the course. This program is being written in Delphi 7, expected to run on Windows 7 machines.
I've managed to get a remote location set up on the customer's main database (running CentOS 6), as a samba share, to store the files. However, I'm now running into a usability issue with the handling of the files in question.
The client doesn't want the process to use a mapped drive; they've had problems in the past with individual users treating a mapped drive that another set of programs requires as personal drive space. Without that, however, the only method I could come up with for saving/reading back the .pdf files was a direct path to the share (that is, setting the program to copy to/read from \\server\share\ directly), which is garnering complaints that it takes too long.
What is the proper way to handle this? I've had several thoughts on the issue, but I can't determine which path would be the best to follow:
I know I could map the drive at the beginning of the program execution, then unmap it at the end, but that leaves it available for the end user to save to while the program is up, or if the program were to crash.
The direct 'write-to-share' method, bypassing the need for a mapped drive, as I've said, is considered too slow (probably because it's consistently a bit sluggish to display the files).
I don't have the ability to set a group policy on these machines, so I can't hide a drive that way - and I really don't think it's a wise idea for my program to attempt to change the registry on the user's machine, which also lets that out.
I considered trying to have the drive opened as a different user, but I'm not sure that helps. After looking into it, I think (perhaps inaccurately) that it wouldn't be any defense: the end user would still have access to the drive as opened during the window of use.
Given that these four options seem to be less than usable, what is the correct way to handle these requirements?
I don't think it will work with a samba share.
However, you could think about using (secure) FTP or, if there is a database, just uploading the files as BLOBs.
That way you don't have to expose credentials to the user.

What changes in a jailbroken kernel?

In this question on protecting your app from being cracked, the top answerer mentioned something about being able to tell if a device is jailbroken by some internal imbalance in the kernel. Having looked into it a bit more, I discovered the Kernel Architecture Overview guide, and I know ways to interact with the Mach-BSD kernel. All I need to know is: What am I looking for? Is there some kind of key or internal state that changes, in the context of the kernel, when the device is jailbroken?
To be clear, I'm not looking for code (I know how to do these things myself); I'm looking for what to look for, as weird as that sounds. I've seen the answers in the linked questions, and I know that they work, but I'm wondering about an all-kernel route, which seems a more generic and efficient way to check than searching for directories that might change or plist keys that might have different names.
I also don't intend to disable any functionality on the part of the app because of piracy (just show a message or something based on a condition).
All the "modern" kernel patches are based on comex's patches.
The main things that are patched are:
security.mac.proc_enforce
cs_enforcement_disable (kernel and AMFI)
PE_i_can_has_debugger
vm_map_enter
vm_map_protect
…
Oh, and there are sandbox patches too. If you want to read more about all these patches, I suggest you take a look at the iOS Hacker's Handbook.
Edit:
I just came up with a simple idea to check if the device is jailbroken, but I'm not sure if Apple allows the use of these functions:
allocate some memory using mach_vm_allocate()
change the protection of that page via mach_vm_protect() to VM_PROT_READ | VM_PROT_EXECUTE | VM_PROT_COPY
Since stock iOS doesn't allow VM_PROT_EXECUTE from inside your app, the mach_vm_protect() call will fail when the device is not jailbroken (check its return value), but succeed if it is.
About a year ago, saurik wrote a comment on Hacker News with a list of the "'best practice' patches that jailbreaks install by default". I'd suggest reading that comment for all the details, but here is a preview of what he says (with lots of explanation that I snipped out):
AFC2: allows you to access, over USB, all of / as root instead of just /var/mobile/Media as mobile.
fstab / rw: makes / be mounted read-write.
fstab /var suid dev: allows setuid executables and device nodes on the user data partition.
codesign: allow code that has not been signed by anyone to execute.
codehash: allow processes with "corrupt" pages of code to execute.
rw->rx: supports changing a page of memory from writable to executable.
rwx: allows memory to be marked for write and execute at the same time.
sandbox: allow processes to access files that are outside of their sandbox based on Unix permissions rather than the normal sandbox rules.
crazeles: a ludicrously complicated hack by planetbeing that neuters the FairPlay DRM checks that cause iBooks to refuse to operate correctly on jailbroken devices.

Understanding file mapping

I'm trying to understand mmap and got the following link to read:
http://duartes.org/gustavo/blog/post/page-cache-the-affair-between-memory-and-files
I understand the text in general and it makes sense to me. But at the end there is a paragraph which I don't really understand, or which doesn't fit my understanding.
The read-only page table entries shown above do not mean the mapping is read only, they’re merely a kernel trick to share physical memory until the last possible moment. You can see how ‘private’ is a bit of a misnomer until you remember it only applies to updates. A consequence of this design is that a virtual page that maps a file privately sees changes done to the file by other programs as long as the page has only been read from. Once copy-on-write is done, changes by others are no longer seen. This behavior is not guaranteed by the kernel, but it’s what you get in x86 and makes sense from an API perspective. By contrast, a shared mapping is simply mapped onto the page cache and that’s it. Updates are visible to other processes and end up in the disk. Finally, if the mapping above were read-only, page faults would trigger a segmentation fault instead of copy on write.
The following two lines don't add up for me; I see no sense in them.
A consequence of this design is that a virtual page that maps a file privately sees changes done to the file by other programs as long as the page has only been read from.
It is private, so it can't see changes by others!
Finally, if the mapping above were read-only, page faults would trigger a segmentation fault instead of copy on write.
I don't know what the author means by this. Is there a flag "MAP_READ_ONLY"? Until a write occurs, every pointer from the program's virtual pages to the page-table entries in the page cache is read-only.
Can you help me understand these two lines?
Thanks
Update
It seems I got it, with some help.
A consequence of this design is that a virtual page that maps a file privately sees changes done to the file by other programs as long as the page has only been read from.
Although the mapping is private, the virtual page really can see changes by others, until the process itself modifies a page. That modification becomes private and is only visible to the writing program.
Finally, if the mapping above were read-only, page faults would trigger a segmentation fault instead of copy on write.
I'm told that pages themselves can also have permissions (read/write/execute).
Tell me if I'm wrong.
This fragment:
A consequence of this design is that a virtual page that maps a file privately sees changes done to the file by other programs as long as the page has only been read from.
is telling you that the kernel cheats a little bit in the name of optimization. Even though you've asked for a private mapping, the kernel will actually give you a shared one at first. Then, if you write to the page, it becomes private.
Observe that this "cheating" makes no difference if all processes accessing the file map it with MAP_PRIVATE, because no actual changes to the file will ever occur in that case. Different processes' mappings will simply be upgraded from "fake cheating MAP_PRIVATE" to true MAP_PRIVATE at different times, whenever each process first writes to its mapping. This is probably a common scenario. It only makes a difference if the file is being concurrently updated by other means (MAP_SHARED with PROT_WRITE, or regular non-mmap I/O operations).
I'm told that pages themselves can also have permissions (read/write/execute).
Sure, they can. In fact, you have to ask for the permissions you want when you initially map the file: the third argument to mmap is a combination of PROT_READ, PROT_WRITE, PROT_EXEC, and PROT_NONE.
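The quoted behavior can be observed directly. A small Linux sketch (the function and path are made up for illustration; the "sees changes before copy-on-write" half is the x86/Linux behavior the article describes, not a POSIX guarantee): it maps a file MAP_PRIVATE, lets an external writer (pwrite on the same descriptor) change the file before and after the mapping's first write, and records what the mapping sees at each step:

```c
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

/* out[0]: byte as initially mapped; out[1]: what the private mapping sees
 * after an external write but before copy-on-write; out[2]: what it sees
 * after COW, when a further external write is no longer visible.
 * Returns 0 on success, -1 on any failure. */
static int private_mapping_demo(const char *path, char out[3])
{
    int fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0600);
    if (fd < 0) return -1;
    if (write(fd, "AAAA", 4) != 4) return -1;

    char *priv = mmap(NULL, 4, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
    if (priv == MAP_FAILED) return -1;

    out[0] = priv[0];                     /* 'A' */

    if (pwrite(fd, "B", 1, 0) != 1) return -1;
    out[1] = priv[0];                     /* 'B': still shares the page cache */

    priv[1] = 'X';                        /* first write: copy-on-write */
    if (pwrite(fd, "C", 1, 0) != 1) return -1;
    out[2] = priv[0];                     /* still 'B': we now have a private copy */

    munmap(priv, 4);
    close(fd);
    return 0;
}
```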

Delphi logging with multiple sinks and delayed classification?

Imagine I want to parse a binary blob of data. If all goes okay, then all the log lines are INFO and the user by default does not even see them. If there is an error, the user is presented with the error and can view the log to see the exact reason (I don't like programs that just say "file is invalid, for some reason, and you do not want to know it").
Probably most log libraries are aimed at quickly emitting, classifying and keeping many, many log lines per second, which by itself is questionable, as there is no comfortable lazy evaluation and there are no closures in Delphi. Envy Scala :-)
However, that requires every line to be pre-classified.
Imagine this hypothetical flow:
Got object FOO [ok]
1.1. found property BAR [ok]
1.1.1. parsed data for BAR [ok]
1.2 found property BAZ [ok]
1.2.1 successfully parsed data for BAZ [ok]
1.2.2 matching data: checked dependency between BAR and BAZ [fail]
...
So, what can be desired features?
1) Nested logging (indentation, subordination) is desired, then.
Something like highlighted in TraceTool - see TraceNode.Send Method at http://www.codeproject.com/KB/trace/tracetool.aspx#premain0
2) The 1, 1.1, 1.1.1, 1.2, 1.2.1 lines are sent as they happen to an info sink (TMemo, OutputDebugString, EventLog and so on), so the user can see and report at least which steps completed before the error.
3) 1, 1.2, 1.2.2 are retroactively marked as error (or warning, or whatever), inheriting from the most specific line. Obviously, warning supersedes info, error supersedes warning and info, etc.
4) 1 + 1.2 + 1.2.2 can be easily combined, like with LogMessage('1.2.2').FullText, to be shown to the user or converted to an Exception, carrying the full story to a human.
4.1) Optionally, with the relevant setup, it would not only be converted to an Exception, but the latter would even be auto-raised. This would probably require some kind of context with a supplied exception class or a supplied exception-constructing callback.
5) Multi-sink: info can just be appended to a collapsible panel with a TMemo on the main form or the currently active form. The error state could additionally open such a panel, or prompt the user to do so. At the same time, some file or network server could, for example, receive warning- and error-grade messages but not info-grade ones.
6) Extra associated data might be nice too. Say, if rendered with a TreeView rather than a TMemo, it could have a "1.1.1. parsed data for BAR [ok]" item with a mouse tooltip like "Foo's dimensions are told to be 2x4x3.2 metres".
Being a free library is nice, especially free with sources. Tracking down and fixing a bug relying solely on DCUs is sometimes much harder.
Not requiring an extra executable. It could offer an extra, more advanced viewer, but one should not be required for basic functionality.
Not being stalled/abandoned.
The ability to work and show at least something before the GUI is initialized would be nice too. Class constructors are cool, yet they are executed as part of unit initialization, when the VCL is not booted yet. If any assertion/exception is thrown from there, the user would only see Runtime error 217, with all the detail lost. At least OutputDebugString can be used, if nothing more...
Stack tracing is not required; if needed, I can do it and add it with the Jedi Code Library. But it is rarely needed.
External configuration is not required. It might be good for a big application to reconfigure on the fly, but to me simplicity is much more important, and configuration in code, by calling constructors and such, is what really matters. An extra XML file, like for Log4J, would only make things more fragile and complex.
I glanced at a few of the libraries mentioned here.
TraceTool has a great presentation; the link is above. Yet it has no info grade, only 3 predefined grades (Debug/Error/Warning) and nothing more, though maybe Debug would suit as an Info replacement... It seems like a black box, only saving data into its own file and using an external tool to view it, not giving the stream of events back to me. But its message nesting and call chaining seem cool. Also cool is attaching objects/collections to messages.
Log4D and Log4Delphi seem to be in stasis, with last releases in 2007 and 2009, and Delphi 7 as the last targeted version. They lack documentation (probably okay for a log4j guy, but not for me :-) ). Log4Delphi even has a test folder, but those tests do not compile in Delphi XE2 Update 1. A pity: in another thread here, Log4Delphi was hailed for how simple it is to create a custom log appender (sink)...
BTW, the very fact that LOG4J alone was forked into two independent Delphi ports raises the question of which one is better, and suggests both lack something, if they had to remain split.
The mORMot logging part is hardly separated from the rest of the library. The demo application required UAC elevation to use its embedded SQLite3 engine, and froze (no window opened, yet the process never exits normally) when Admin rights were refused. Another demo just started an infinite stream of AV exceptions, trying to unwind the stack. So it is probably not ready yet for the latest Delphi. Its list of message grades is extensive, though, maybe even a bit too long.
Thank you.
mORMot is stable, even with latest XE2 version of Delphi.
What you tried starting were the regression tests. Among its 6,000,000 tests, it includes the HTTP/1.1 client-server part of the ORM. Without Admin rights, the http.sys server is not able to register the URI, so you got errors. Which makes perfect sense: it's a Vista/Seven restriction, not a mORMot restriction.
The logging part can be used completely separately from the ORM part. Logging is implemented in SynCommons.pas (and SynLZ.pas for the fast compression algorithm used for archival and .map embedding). I use the TSynLog class without any problem to log existing applications (even Delphi 5 and Delphi 6 applications) that have existed for years. The SQLite3 / ORM classes are implemented in other units.
It supports nesting of events, with an auto-leave feature, just as you expect. That is, you can write:
procedure TMyClass.MyMethod(const Params: integer);
begin
  TSynLog.Enter;
  // ... my method code
end;
And this TSynLog.Enter call will be logged with indentation corresponding to the recursion level. IMHO this may meet your requirements. It declares an ISynLog interface on the stack, which will be freed by Delphi at the "end;" code line, so it implements an auto-leave feature. And the exact unit name, method name and source code line number will be written into the log (as MyUnit.TMyClass.MyMethod (123)) if you generated a .map file at compilation (which may be compressed and appended to the .exe, so that your customers' logs will contain the source line numbers). There are methods at the ISynLog interface level to add custom logging, including parameters and custom state (you can log object properties as JSON if you need to, or write your own custom logging data).
The exact timing of each method is tracked, so you are able to profile your application from the data supplied by your customer.
If you think the logs are too verbose, you have several levels of logging, to be customized on the client side. See the blog articles and the corresponding part of the framework documentation (in the SynCommons part). You have, for instance, "Fail" events and some custom kinds of events. And it is totally VCL-independent, so you can use it without a GUI, or before any GUI is started.
You have at hand a log viewer, which allows client-side profiling and a nested Enter/Leave view (if you click on a "Leave" line, you'll go back to the corresponding "Enter").
If this log viewer is not enough, you have its source code to make it fulfill your requirements, and all the needed classes to parse and process the .log file on your own, if you wish. Logs are textual by default, but can be compressed into binary on request, to save disk space (the log viewer is able to read those compressed binary files). Stack tracing and exception interception are both implemented, and can be activated on request.
You could easily add a numeration like "1.2.1" to the logs, if you wish to. You've got the whole source code of the logging unit. Feel free to ask any question in our forum.
Log4D supports nested diagnostic contexts in the TLogNDC class; they can be used to group together all steps related to one compound action (instead of a 'session'-based grouping of log events). Multi-sinks are called Appenders in Log4D and Log4Delphi, so you could write a TLogMemoAppender in around twenty-five lines of code, and use it at the same time as an ODSAppender, a RollingFileAppender, or a SocketAppender, configurable at run time (no external config file required).
