I'm attempting to simulate a 32-bit computer under a very scuffed architecture I have come up with on my own. I am probably doing everything wrong, but it's just a fun thing I'm doing to teach myself C.
I am encountering a slight issue where I have no idea how many bytes of a number I should save to memory.
At the moment I have an instruction that looks like this: CODE, (addressing info), add-a, add-b, add-c.
The opcode and addressing info are 4 bits long each, and the addresses are 8 bits long. If I add two 32-bit numbers (at addresses b and c), the result gets saved at address a. The issue arises when I have a number that is less than 32 bits. For example, if I have an array of 1-byte chars and for whatever reason I want to add 1 to one of them, then when I save the 1-byte char back to that array, it would be written as a 32-bit number, thus overwriting the 3 subsequent chars.
I'm not really sure the best way to tackle this issue but I have a few ideas.
Idea 1:
Just do everything in 32-bit chunks and let the programmer deal with the issue themselves (do some funky bitwise manipulation to fit the 1-byte char back into the array, maybe with a mask; see the sketch after Idea 3).
I don't want to do this as it would make the code messy.
Idea 2:
Only allow addresses every 32 bits. If every number is 32 bits long, then no number will be overwritten.
This sucks because, as far as I can tell, nothing does this. It would also make saving smaller numbers take up four times more memory than they need.
Idea 3:
Stop working with 32-bit numbers. Only ever add, subtract, store, and get 8-bit numbers. This would work and probably be less messy, but it would also be very annoying. Adding 32-bit numbers would suddenly take at least 4 lines of code, and the programs would run slower. Moving lines of code around would also take at least 4 lines of code each, as each line of code is 4 bytes long.
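For reference, here is a rough C sketch of what I mean by the masking in Idea 1; the names are made up and it assumes the simulated memory is an array of 32-bit words with little-endian byte order inside each word:

#include <stdint.h>

#define MEM_WORDS 1024
static uint32_t mem[MEM_WORDS];          /* simulated memory, one 32-bit word per slot */

/* Store a single byte at a byte address without disturbing its neighbours. */
static void store_byte(uint32_t byte_addr, uint8_t value)
{
    uint32_t word_index = byte_addr / 4;            /* which 32-bit word holds the byte */
    uint32_t shift      = (byte_addr % 4) * 8;      /* bit position of the byte in that word */
    uint32_t mask       = (uint32_t)0xFF << shift;  /* covers only the target byte */

    mem[word_index] = (mem[word_index] & ~mask)     /* clear the old byte */
                    | ((uint32_t)value << shift);   /* insert the new one */
}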
Basically I have no idea what I'm doing, and I can't find anyone online talking about this. I'm sure either there is something glaringly obvious that I'm missing, or I'm doing something stupid and will need to redesign the whole system...
Also, side note: I'm not sure if this is the correct place to ask this kind of question, but if it isn't I would love to know where is.
All the ideas you mention seem to share a common concept, which is to limit what the hardware does and have software make up the rest of its requirements by (a) assembling larger items from multiple smaller storage units, and (b) packing smaller items into larger storage units.
Generally speaking, this is how computation works anyway: the hardware provides only limited capabilities, and software makes up any shortfall. The limited capabilities, ideally, are well matched to common software patterns, such as strings, integers of various sizes, floats, etc.
Where the line is drawn between hardware-built-in capabilities and software compensation has been changed many times by many processors over the years.
Software generally has to do both of these with any machine organization existing today. If you want an array of Boolean values, then you would probably want to pack them into bytes (or words) and set/extract bits from them, which is (b). On the other hand if you want long strings or multiword numeric data, then software assembles some larger number of storage units into a whole, which is (a).
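As a concrete illustration of (b), packing Booleans might look roughly like this in C (a minimal sketch; the names are illustrative):

#include <stdint.h>
#include <stdbool.h>

#define NUM_FLAGS 1000
static uint32_t flags[(NUM_FLAGS + 31) / 32];    /* 1000 Booleans packed into 32-bit words */

static void set_flag(uint32_t i, bool v)
{
    if (v)  flags[i / 32] |=  (1u << (i % 32));  /* set the bit */
    else    flags[i / 32] &= ~(1u << (i % 32));  /* clear the bit */
}

static bool get_flag(uint32_t i)
{
    return (flags[i / 32] >> (i % 32)) & 1u;     /* extract the bit */
}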
Modern 64-bit hardware offers at least 1-byte, 2-byte, 4-byte and 8-byte data (modulo vectors). By offering these data sizes, we mean that it provides for instructions that directly operate on these sizes, i.e. single instructions that do useful things with them.
However, there are no modern bit-addressable machines, so if you want smaller than a byte (quite reasonable sometimes) you have to handle that with software.
Further, if you want 3-, 5-, 6-, or 7-byte data, the hardware doesn't necessarily provide that directly, though support for misaligned loads helps, since with that you can load a larger size and mask off the unwanted pieces; stores are similar, using read-modify-write.
If you want 9-byte or larger data, you'll have to use multiple load and store instructions, though again misaligned capabilities in the hardware help with odd sizes.
Some instruction sets have drawn a more limited line by removing byte load & store instructions (while remaining byte addressable), but providing dedicated instructions to extract the proper byte from a word in a register. That still gives some hardware acceleration for byte operations on hardware that doesn't have misaligned loads, since without byte loads, misaligned capabilities, or special helper instructions, extracting the proper byte from a word can take multiple instructions and/or repeated loading of the same memory word during sequential access.
I advocate the load/store model. That means rich load & store instructions: load signed byte, load unsigned byte, signed half, unsigned half, word (32), double. Arithmetic in such a model is register to register, so you don't need smaller-than-word-sized addition. No mainstream programming language demands byte addition, and byte arithmetic doesn't even offer an optimization in a load/store architecture.
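As an illustration, a simulator could implement such byte loads on top of word-organized memory roughly like this (a minimal C sketch; the memory layout and names are assumptions, not part of your design):

#include <stdint.h>

static uint32_t mem[1024];   /* simulated memory: 32-bit words, little-endian bytes within a word */

/* Load an unsigned byte: zero-extend into a 32-bit register value. */
static uint32_t load_u8(uint32_t byte_addr)
{
    uint32_t word  = mem[byte_addr / 4];
    uint32_t shift = (byte_addr % 4) * 8;
    return (word >> shift) & 0xFFu;
}

/* Load a signed byte: sign-extend bit 7 into the upper 24 bits. */
static uint32_t load_s8(uint32_t byte_addr)
{
    uint32_t b = load_u8(byte_addr);
    return (b & 0x80u) ? (b | 0xFFFFFF00u) : b;
}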
However, you will want to take the architecture as a whole into account in designing the individual instructions.
I am a mathematician and not a programmer. I have a notion of the basics of programming and am a quite advanced power user on both Linux and Windows.
I know some C and some Python, but nothing much.
I would like to make an overlay so that when I start a game it can get info about AMD and Nvidia GPUs, like frame time and FPS. I am quite certain the current system benchmarks use to compare two GPUs is flawed, because small instances and scenes that bump up the FPS momentarily (but are totally irrelevant in terms of user experience) result in a higher average FPS number and mislead the market, either unintentionally or intentionally. For example, I can't remember the name of the game (probably COD), but there was a highly tessellated entity on the map that wasn't even visible to the player, which led AMD GPUs to seemingly underperform when roaming through that area, leading to a lower average FPS count.
I have an idea of how to calculate GPU performance in theory, but I don't know how to harvest the data from the GPU. Could you refer me to API manuals or references to help me make such an overlay possible?
I would like to study as little as possible (by that I mean I would like to learn only what I absolutely have to in order to get the job done; I don't intend to become a coder).
I thank you in advance.
This is generally what the Vulkan Layer system is for, which allows you to intercept API commands and inject your own. But it is nontrivial to code it yourself. Here are some pre-existing open-source options for you:
To get timing info and draw your custom overlay you can use (and modify) a tool like OCAT. It supports Direct3D 11, Direct3D 12, and Vulkan apps.
To just get the timing (and other interesting info) as a CSV you can use a command-line tool like PresentMon. It should work with D3D apps, and I have been using it with Vulkan apps too and it seems to accept them.
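Once you have per-frame timings (for example, the milliseconds-between-presents column that PresentMon writes to its CSV), computing something more robust than an average FPS is straightforward. Here is a minimal C sketch that reads one frame time in milliseconds per line and prints the average plus a 99th-percentile figure; the input format is an assumption, the statistics are the point:

#include <stdio.h>
#include <stdlib.h>

/* qsort comparator: ascending order of frame times */
static int cmp_double(const void *a, const void *b)
{
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

int main(void)
{
    double *ms = NULL, v, sum = 0.0;
    size_t n = 0, cap = 0;

    /* Read one frame time (ms) per line from stdin,
       e.g. a column extracted from a PresentMon CSV. */
    while (scanf("%lf", &v) == 1) {
        if (n == cap) {
            cap = cap ? cap * 2 : 1024;
            ms = realloc(ms, cap * sizeof *ms);
            if (!ms) return 1;
        }
        ms[n++] = v;
        sum += v;
    }
    if (n == 0) return 1;

    qsort(ms, n, sizeof *ms, cmp_double);

    double avg_ms = sum / (double)n;
    double p99_ms = ms[(size_t)((double)(n - 1) * 0.99)];  /* 99th-percentile frame time */

    printf("average FPS: %.1f\n", 1000.0 / avg_ms);
    printf("99th percentile frame time: %.2f ms (~'1%% low' FPS %.1f)\n",
           p99_ms, 1000.0 / p99_ms);

    free(ms);
    return 0;
}

A metric like the 99th-percentile frame time (or "1% low" FPS) is much less sensitive to the brief FPS spikes you describe than a plain average.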
I'm new to using VHDL and have run into an issue with my project. I'm trying to make an FPGA design that converts from one communication protocol to another, and for this purpose it would be useful to be able to store (hopefully multiple) packets before converting.
Previously I tried to store this data in arrays, but it quickly became apparent that this takes up far too much space on the FPGA. Therefore, I have been searching for a way to store the data in the DDR3 RAM on the SP605 board (http://www.xilinx.com/support/documentation/boards_and_kits/xtp067_sp605_schematics.pdf, page 9). However, I cannot find instructions on how to write or read data from it. I'm trying to store one 8-bit std_logic_vector per clock cycle, to be accessed later.
Can anyone advise me on how to proceed?
Xilinx offers an IP Core Generator. Its IP catalog contains a Memory Interface Generator (MIG), which generates an IP core to access different memory types. Configure this core for DDR3.
Writing a DDR3 controller in VHDL is not a project for a beginner, nor even for an experienced designer.
The state machine is simple and well known, but the calibration logic is very costly.
You should consider a caching or burst read/write technique, because DDR memory cannot be accessed on every cycle.
I am looking for some advice on memory usage on mobile devices, BlackBerry in particular. Using some profiling tools we have calculated a working set size in RAM of 525 KB. The problem is we don't really know whether this is acceptable or too high.
Can anyone give any insight into their own experience with memory usage on BlackBerry? What sort of number should we be aiming for?
I am also wondering what sort of things we should be looking out for in particular to reduce memory usage.
512 KB is perfectly acceptable on the current generation of BlackBerry devices. You can take a look at JBenchmark to see the exact JVM heap you can expect for each model, but none of the current devices out there go below 20 MB of heap. Most are much larger than that.
On JBenchmark you can choose the device you are interested in from a drop-down on the right side of the page. Then navigate to the JVM tab for the device.
When it comes to reducing memory usage, I wouldn't worry about the total bytes used by this application if you are truly in line with 525 KB; worry instead about how often allocation/reallocation is required. Try to pool/reuse objects as much as possible, avoiding any unneeded allocation. For instance, use the StringBuffer class to concatenate strings instead of the operator: multiple String objects will be created for each concatenation using the operator, whereas a StringBuffer will just put the characters in an array and only expand when needed. Google is a good way to find more tips.
Finally, relying on profiling tools, which the BlackBerry JDE has, is a very important part of understanding exactly how you can optimize heap memory usage.
If I'm not mistaken, BlackBerry apps are written in Java, which is a managed environment, which means really the only surefire way to use less memory is to create fewer objects. There's not a whole lot you can do about your working set, I think, since it's managed by the runtime (which is actually probably the point of using Java on devices like this).
I am looking for a text editor that will be able to load a 4+ gigabyte file into it. TextPad doesn't work. I own a copy of it and have been to its support site; it just doesn't do it. Maybe I need new hardware, but that's a different question. The editor needs to be free OR, if it's going to cost me, then no more than $30. For Windows.
glogg could also be considered, for a different usage:
Caveat (reported by Simon Tewsi in the comments, Feb. 2013)
One caveat: it has two search functions, Main Search and Quick Find.
The lower one, which I assume is Quick Find, is at least an order of magnitude slower than the upper one, which is fast.
I've had to look at monster (runaway) log files (20+ GB). I used the hexedit FREE version, which can work with files of any size. It is also open source. It is a Windows executable.
Jeff Atwood has a post on this here: http://www.codinghorror.com/blog/archives/000229.html
He eventually went with Edit Pad Pro, because "Based on my prior usage history, I felt that EditPad Pro was the best fit: it's quite fast on large text files, has best-of-breed regex support, and it doesn't pretend to be an IDE."
Instead of loading a gigantic log file in an editor, I use Unix command-line tools like grep, tail, gawk, etc. to filter the interesting parts into a much smaller file, and then I open that.
On Windows, try Cygwin.
Have you tried context editor? It is small and fast.
I stumbled on this post many times, as I often need to handle huge files (10+ GB).
After getting tired of buggy and pretty limited freeware, and not being willing to pay for costly editors after the trial expired (not worth the money after all), I just used VIM for Windows with great success and satisfaction.
It is simply PERFECT for this need: fully customizable, with ALL the features one can think of when dealing with text files (searching, replacing, reading, etc., you name it).
I am very surprised nobody answered that (except a previous answer, but for macOS)...
For the record, I stumbled on it in this blog post, which wisely advised it.
It's really tough to handle a 4 GB file as such. I used to handle larger text files, but I never used to load them into my editor. I mostly used UltraEdit in my previous company; now I use Notepad++, but I would get just those parts which I needed to edit. (In most cases, the files never needed an edit.)
Why do you want to load such a big file into an editor? When I handled files of this size, I used GNU Core Utils. The most common operations I performed on those files were head (to get the top 250k lines, etc.), tail, split, sort, shuf, uniq, and so on. It's really powerful.
There's a lot you can do with GNU Core Utils. I would definitely recommend those, instead of a new editor.
Sorry to post on such an old thread, but I tried several of the tips here, and none of them worked for me.
It's slightly different from a text editor, but I found that Beyond Compare could handle an extremely large (3.6 GB) file on my Vista 32-bit machine.
This is a file that Emacs, Large Text File Viewer, HexEdit, and Notepad++ all choked on.
My favourite after trying a few to read a 6GB mysqldump file:
PilotEdit Lite http://www.pilotedit.com/
Because:
Memory usage has (somehow?!) never gone above 25MB, so basically no impact on the rest of my system - though it took several minutes to open.
There was an accurate progress bar during that time so I knew how it was getting on.
Once open, simple searching and browsing through the file all worked as well as with a small Notepad file.
It's free.
Others I tried...
EmEditor Pro trial was very impressive, the file opened almost instantly, but unfortunately too expensive for my requirements.
EditPad Pro loaded the whole 6GB file into memory and slowed everything to a crawl.
For Windows, Unix, or Mac? On the Mac or *nix you can use command-line or GUI versions of Emacs or Vim.
For the Mac: TextWrangler handles big files well. I'm not versed enough in the Windows landscape to help out there.
If you just want to view a large file rather than edit it, there are a couple of freeware programs that read files a chunk at a time rather than trying to load the entire file into memory. I use these when I need to read through large (> 5 GB) files.
Large Text File Viewer by swiftgear http://www.swiftgear.com/ltfviewer/features.html
Big File Viewer by Team Walrus.
You'll have to find the link yourself for that last one, because being a newbie I can only post a maximum of one hyperlink.
When I'm faced with an enormous log file, I don't try to look at the whole thing; I use Free File Splitter.
Admittedly this is a workaround rather than a solution, and there are times when you would need the whole file. But often I only need to see a few lines from a larger file and that seems to be your problem too. If not, maybe others would find that utility useful.
A viewer that lets you see enormous text files isn't much help if you are trying to get it loaded into Excel to use the Autofilter, for example. Since we all spend the day breaking down problems into smaller parts to be able to solve them, applying the same principle to a large file didn't strike me as contentious.
HxD -- it's a hex editor, but it allows in-place edits and doesn't barf on large files.
Tweak is a hex editor which can handle edits to very large files, including inserts and deletes.
EmEditor should handle this. As their site claims:
EmEditor is now able to open files even larger than 248 GB (or 2.1 billion lines) by opening a portion of the file with the new custom bar, the Large File Controller. The Large File Controller allows you to specify the beginning point, end point, and range of the file to be opened. It also allows you to stop the opening of the file and monitor the real size of the file and the size of the temporary disk space available.
Not free, though.
I found that FAR Commander could open large files (I tried a 4.2 GB XML file).
It does not load the entire file into memory and works fast.
Opened a 5 GB file (quickly) with:
1) Hex Editor Neo
2) 010 editor
TextPad also works well at opening files that size. I have done it many times when having to deal with extremely large log files in the 3-5 GB range. Also, using grep to pull out the worthwhile lines and then looking at those works great.
The question would need more details.
Do you want just to look at a file (eg. a log file) or to edit it?
Do you have more memory than the size of the file you want to load or less?
For example, TheGun, a very small text editor written in assembly language, claims to "not have an effective file size limit and the maximum size that can be loaded into it is determined by available memory and loading speed of the file. [...] It has been speed optimised for both file load and save."
To get around the memory limit, I suppose one can use mapped memory. But then, if you need to edit the file, some clever method should be used, like storing the local changes in memory and applying them chunk by chunk when saving. That might be inefficient in some cases (a big search/replace, for example).
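As a rough illustration of that mapped-memory idea (read-only viewing, not editing), something along these lines maps one window of a huge file at a time on Windows; error handling is minimal and the file name and chunk size are just placeholders:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    const wchar_t *path = L"huge.log";            /* file to view; name is only an example */
    const DWORD chunk = 64 * 1024 * 1024;         /* map 64 MB at a time */

    HANDLE file = CreateFileW(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                              OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (file == INVALID_HANDLE_VALUE) return 1;

    LARGE_INTEGER size;
    GetFileSizeEx(file, &size);

    HANDLE mapping = CreateFileMappingW(file, NULL, PAGE_READONLY, 0, 0, NULL);
    if (!mapping) { CloseHandle(file); return 1; }

    for (LONGLONG offset = 0; offset < size.QuadPart; offset += chunk) {
        DWORD len = (DWORD)((size.QuadPart - offset < chunk) ? (size.QuadPart - offset) : chunk);

        /* Offsets passed to MapViewOfFile must be multiples of the allocation
           granularity (64 KB); 64 MB chunks satisfy that. */
        const char *view = MapViewOfFile(mapping, FILE_MAP_READ,
                                         (DWORD)(offset >> 32), (DWORD)offset, len);
        if (!view) break;

        /* ... scan or display this window of the file here ... */
        printf("mapped %lu bytes at offset %lld\n", (unsigned long)len, (long long)offset);

        UnmapViewOfFile(view);
    }

    CloseHandle(mapping);
    CloseHandle(file);
    return 0;
}

This is essentially what the chunk-at-a-time viewers mentioned above do, so the OS only keeps the currently mapped window resident.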
I have had problems with TextPad on 4G files too. Notepad++ works nicely.
Emacs can handle huge file sizes and you can use it on Windows or *nix.
What OS and CPU are you using? If you are using a 32-bit OS, then a process on your system physically cannot address more than 4 GB of memory. Since most text editors try to load the entire file into memory, I doubt you'll find one that will do what you want. It would have to be a very fancy text editor that can do out-of-core processing, i.e. load a chunk of the file at a time.
You may be able to load such a huge file if you use a 64-bit text editor on a computer with a 64-bit CPU and a 64-bit operating system. And you have to make sure that you have enough space in your swap partition or your swap file.
Why do you want to load a 4+ GB file into memory? Even if you find a text editor that can do that, does your machine have 4 GB of memory? And unless it has a lot more than 4 GB in physical memory, your machine will slow down a lot and go swap file crazy.
So why do you want to load a 4+ GB file? If you want to transform it, or do a search and replace, you may be better off writing a small, quick program to do it.
I also like notepad++.