I am developing an application for which it would be a great advantage to monitor all of the activity on a hard drive. I am using Diskmon to trace the activity and IOMeter to make particular requests to the drive. All is well, except that Diskmon only recognizes actual hard drives, and unfortunately I only have one physical drive in the computers I have available. This drive happens to have one partition for Windows, so whenever applications or anything else in Windows makes a request to the drive, it appears as extraneous data in the Diskmon log file.
As such, I am curious to know if there is any way to create a "virtual hard drive" that is, for all intents and purposes, a normal hard drive as far as Windows is concerned. I have tried creating a virtual hard disk (VHD) as supported by Windows 7. On the face of it, it does appear as a hard drive: it shows up in "My Computer" as a new disk, and even IOMeter picks up on the VHD. However, Diskmon does not differentiate between the VHD and the true disk on which it resides, so the VHD feature does nothing to accomplish my goal. My assumption is that Diskmon uses lower-level Windows APIs at which the difference between regular data on a disk and data within a virtual disk disappears.
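For reference, a minimal sketch of creating and attaching a VHD with Windows 7's built-in diskpart (the file path, size, and drive letter below are placeholders):

diskpart
create vdisk file="C:\test.vhd" maximum=1024 type=fixed
select vdisk file="C:\test.vhd"
attach vdisk
create partition primary
format fs=ntfs quick
assign letter=V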
Is it possible for me to create a true virtual disk that even the Diskmon logging utility will be able to identify as a genuine hard drive? Ideally, I would like to create such a virtual disk on a USB key, but from what I am seeing currently, the only option may be to buy an external hard drive.
Any help is greatly appreciated! Thanks
tl;dr
I've currently got a PostgreSQL database with about 10 GB of data. This data is "archived" data, so it won't ever change, but I do need it to be queryable/searchable/available for reading from my Rails app in the cheapest way possible.
Details:
I'm running a Digital Ocean server, but this is a non-profit project, so keeping costs low is essential. I'm currently using a low-end droplet: 4 GB memory / 40 GB disk / SFO2 / Ubuntu 16.04.1 x64.
Querying this data and loading the pages it's used on can occasionally take a significant amount of time. Some pages time out because they take over a minute to load. (Granted, those are very large pages, but still.)
I've been looking at moving the database over to Amazon Redshift, but the base prices seem high, as it's aimed at MUCH larger projects than mine.
Is my best bet to keep investing in making the queries smaller and rendering only small pieces at a time? Even basic pages have a long query time because the server is slowed down so much. Or is there a service similar to Redshift that will let me query the data quickly while storing it externally for a reasonable price?
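Before moving the data anywhere, it is worth checking whether the slow queries are simply missing indexes. A minimal PostgreSQL sketch (the table and column names are hypothetical):

-- Show where the time goes in a slow query:
EXPLAIN ANALYZE SELECT * FROM archived_records WHERE year = 2015;
-- If the plan shows a sequential scan on the filter column, try an index:
CREATE INDEX idx_archived_records_year ON archived_records (year);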
You can try Amazon S3 and Amazon Athena. S3 is a super simple storage service where you can dump your data in text files, and Athena is a service that provides a SQL-like interface to data stored on S3. S3 is super cheap, and Athena charges per query, based on the amount of data scanned. Since you said your data isn't going to change and is going to be queried rarely, it's a good solution. Check this out: 9 Things to Consider When Choosing Amazon Athena
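As a sketch, assuming the data has been exported to CSV files in an S3 bucket (the bucket, table, and column names below are all placeholders), you declare an external table over the files and then query it with plain SQL:

CREATE EXTERNAL TABLE archived_records (
  id INT,
  created_at STRING,
  payload STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION 's3://my-archive-bucket/records/';

SELECT COUNT(*) FROM archived_records WHERE created_at LIKE '2015%';

You pay only for the data each query scans, so compressing or partitioning the files keeps the per-query cost down.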
I have a pcap file which contains an attack on a local server environment I set up. The attack was carried out with the Metasploit Framework from another Kali Linux machine, and the traffic was captured with Wireshark using port mirroring on the router. I was able to exploit the system and get the local password.
The question is: how do I know which exploit I used just by looking at the pcap file? I would like to hand that file over for forensic analysis.
Is there any way to find the exploit name in the pcap file?
Best regards
After further investigation, I was able to figure out which exploit was used in the attack. I managed to configure Snort, an IDS, on my Kali Linux machine and feed the *.pcap file to it.
Snort analyzes the *.pcap file looking for traffic that matches its rules. If any traffic matches a rule, Snort prints an alert.
Taking this into account, I was able to get the exploit name from a match against one of the rules in exploit.rules in Snort's rules folder.
Snort ships with plenty of rules by default in its rules folder, so you just need to run the following command against the *.pcap file and pray for a match ;) (the -c flag points Snort at the configuration file that loads its rules; without it, Snort just replays the packets without matching anything):
snort -c /etc/snort/snort.conf -r <your-pcap-file>
I hope this helps anyone who is trying to find which exploit was used in an attack captured with tcpdump or Wireshark.
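If Snort comes up empty, it can also help to inspect the capture directly. A sketch using tshark (Wireshark's command-line tool), under the assumption that the attacker used Metasploit's default handler port 4444 (the port is only a guess and the filename is a placeholder):

tshark -r <your-pcap-file> -Y "tcp.port == 4444"

Traffic to an unusual port right around the time of the compromise is often the payload session, which narrows down which exploit or rule to look at.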
When I compile a new project in the Delphi 7 IDE and then scan it using http://www.virustotal.com, my Delphi project is identified as infected with 8 viruses:
I reinstalled my Windows OS and checked my PC's hard disk from a dual-boot Ubuntu Linux, but I can't find any virus on my PC.
Please help me :'(
I looked up the name of one of the reported viruses in your screenshot and came up with this description from McAfee:
This software is not a virus or a Trojan. It is detected as a "potentially unwanted program" (PUP). PUPs are any piece of software that a reasonably security- or privacy-minded computer user may want to be informed of and, in some cases, remove. PUPs are often made by a legitimate corporate entity for some beneficial purpose, but they alter the security state of the computer on which they are installed, or the privacy posture of the user of the system, such that most users will want to be aware of them.
It lists aliases for this from other virus detectors, and the list includes "PUA.Win32.Packer.BorlandDelphi" from ClamAV. I think that may be the answer. Are you compressing your EXE? Regardless, this has to do with some characteristic of the Delphi-generated EXE file and not an actual virus or Trojan.
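You can reproduce that particular ClamAV detection locally to confirm it; a sketch (the executable name is a placeholder, and --detect-pua enables the PUA signatures that produce this kind of match):

clamscan --detect-pua=yes Project1.exe

If the only hits are PUA.* signatures, you are looking at a packer/compiler heuristic rather than an infection.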
Is your program using the Indy library? Some virus scanners had signatures including Indy code because there were Trojans that used it.
I see two possibilities:
1. This is a false positive. Your program is doing something that looks like virus behaviour. Only you can tell what your program does.
2. This is a re-infection of the executable you just compiled.
If you have anti-virus software on your computer and other executables are clean, it must be case 1.
What surprises me is why you would upload your program to VirusTotal. What's the reason for that? Did something happen that you have not told us?
I need online file storage that supports WebDAV so I can mount the storage as a local drive. The priority is speedy reads and writes. Can you give me some recommendations?
You can create your own using the IT Hit WebDAV Server Engine for .NET. You just need to click through a wizard in Visual Studio. You will get the best performance if you select the "store files and metadata in file system" option.
I have some of my SkyDrive folders mapped as local drives on my machine, and it works pretty well. I have to admit that I haven't tested it for speed, so I'm not sure if it's up to the performance that you're looking for.
Some blogs to help you with the steps:
Connect Your SkyDrive To Windows Explorer
Access SkyDrive from Windows Explorer through WebDAV
http://www.nirmaltv.com/2010/02/02/how-to-map-skydrive-as-network-drive-in-windows/
Hope this helps!
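Once you have a WebDAV URL, Windows can also map it as a drive letter from the command line; a sketch (the URL and drive letter are placeholders, and * makes Windows prompt for the password):

net use Z: https://example.com/webdav * /user:yourname

This goes through the Windows WebClient service, the same mechanism the Explorer-based mapping in the links above uses.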
Apache Jackrabbit (http://jackrabbit.apache.org/) will do what you want. It has a very simple standalone deployment option that you can just double-click, and you can then access the file system via WebDAV.
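For reference, the standalone jar can also be launched from a terminal; a sketch (the version number is a placeholder):

java -jar jackrabbit-standalone-2.20.0.jar

By default the standalone server listens on port 8080 and exposes the repository over WebDAV, so you can point a WebDAV client or a drive mapping at http://localhost:8080/.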
I have a program that reads about a million rows and groups them; the client computer is not stressed at all: no more than 5% CPU usage, and the network card is used at about 10% or less.
If I run four copies of the program on the same client machine, the usage grows at the same rate: with four programs running, I get about 20% CPU usage and about 40% network usage. That makes me think I could improve performance by using threads to read from the database, but I don't want to introduce that complexity if a configuration change could achieve the same thing.
Client: Windows 7, CSDK 3.50.TC7
Server: AIX 5.3, IBM Informix Dynamic Server Version 11.50.FC3
There are a few tweaks you can try, most notably setting the fetch buffer size. The environment variable FET_BUF_SIZE can be set to a value such as 32767. This may help you get closer to saturating the client and the network.
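For example, on the Windows client it can be set in the environment before the program starts; a sketch (the program name is a placeholder, and 32767 is just the value suggested above):

set FET_BUF_SIZE=32767
myprogram.exe

A larger fetch buffer lets each network round trip carry more rows, which is usually what limits throughput on a fast client like this one.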
Multiple threads sharing a single connection will not help. Multiple threads using multiple connections might help - they'd each be running a separate query, of course.
If the client program is grouping the rows, we have to ask "why?". It is generally best to leave the server (DBMS) to do that. That said, if the server is compute bound and the client PC is wallowing in idle cycles, it may make sense to do the grunt work on the client instead of the server. Just make sure you minimize the data to be relayed over the network.
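As a sketch of pushing the grouping to the server (the table and column names are hypothetical):

-- Instead of fetching a million detail rows and grouping on the client,
-- let the server return the already-grouped result:
SELECT item_id, SUM(quantity) AS total_qty
FROM order_lines
GROUP BY item_id;

The network then carries one row per group instead of one row per detail record.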