Automatic saving after making a certain number of changes

Is it possible to set something in my .vimrc so that Vim saves the file every time I change (insert, delete, modify) a certain number of characters (not after a time interval)?
I have a bad habit of pressing :w every time I make some small edit, and I'd like to get rid of it.
Thank you

In general, you don't have to save with Vim until you're actually done editing or need some external program to be able to see the current state of the file. Vim periodically (both based on time and how much has changed) saves a swap file for the buffer you're editing. So, if Vim or your computer crashes and you haven't saved, you'll still be able to recover a relatively recent version of your file by restoring from the swap file.
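That said, if you still want something like this, here is a minimal sketch of one way to approximate it in a .vimrc. It counts change events via the TextChanged and TextChangedI autocommands (Vim 7.4+) rather than exact characters, and the threshold and variable names are arbitrary:

```vim
" Rough sketch: write the buffer after every N change events.
" TextChanged / TextChangedI fire once per change in normal / insert mode,
" so this approximates (rather than exactly counts) changed characters.
let g:autosave_threshold = 20
let g:autosave_counter = 0

function! s:AutoSaveMaybe() abort
  let g:autosave_counter += 1
  if g:autosave_counter >= g:autosave_threshold
    let g:autosave_counter = 0
    " :update only writes when the buffer is modified; skip unnamed buffers
    if &modified && !empty(expand('%'))
      silent update
    endif
  endif
endfunction

augroup autosave_sketch
  autocmd!
  autocmd TextChanged,TextChangedI * call <SID>AutoSaveMaybe()
augroup END
```

Using :update rather than :write means nothing happens when the buffer has no unsaved changes.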

Related

Keeping a Google Doc (Sheet, etc.) last update time

I need to put (among other things) the time of the last update of a Google Document (Doc, Sheet, etc.) in a footer, so that if the document is printed it is possible to distinguish which printed copy is the latest. This needs to be a script so it is done automatically for users without requiring them to do anything special. I have a script that does this (at least for Docs), but there are some issues:
In a Doc there is no onEdit trigger, so I can't determine automatically when the document is updated in order to update the footer. (I am aware that I would need to prevent my own update from triggering this by ignoring the change I make)
onOpen can only update the document if the user has edit access to the document. So if somebody opens the document, edits it, and closes the window, and then another user opens it without edit access, they would see the next-to-last update time instead of the last update time.
The current version of my script must be manually bound to each document when it is created. Is there some way to have it automatically get bound when new documents are created? Would an add-on work?
Is it possible to use the "Detect Changes" API somehow? I'm not sure this is even going to do what I want, and it seems like it might be complicated to do it this way.
Would a time-driven trigger make sense? The only problem with this is that time-driven triggers don't seem to be able to run more than once per minute (unless in an add-on, which can only run once per hour), so the last update time could be off by up to that much. This probably wouldn't be a big problem, but could occasionally cause some issues. Also, would running code every minute cause any kind of quota issues?
I tried using ClockTriggerBuilder with a delay of 1000ms, but it wasn't updating. Then I noticed that the after function says it will run the trigger after the specified number of milliseconds plus or minus 15 minutes!
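For what it's worth, a rough sketch of the time-driven-trigger idea discussed above might look like the following Apps Script. The document ID and footer text format are placeholders, and the write is skipped when the footer already shows the current value so that the script's own edit doesn't keep bumping the modification time:

```javascript
// Sketch only: stamp Drive's last-updated time into the document footer.
var DOC_ID = 'YOUR_DOCUMENT_ID';   // placeholder

function installTrigger() {
  ScriptApp.newTrigger('updateFooterTimestamp')
      .timeBased()
      .everyMinutes(1)             // smallest interval for time-driven triggers
      .create();
}

function updateFooterTimestamp() {
  var doc = DocumentApp.openById(DOC_ID);
  var lastUpdated = DriveApp.getFileById(DOC_ID).getLastUpdated();
  var text = 'Last updated: ' + lastUpdated;
  var footer = doc.getFooter() || doc.addFooter();
  if (footer.getText() !== text) {   // avoid rewriting (and re-touching) the doc
    footer.setText(text);
  }
}
```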

Read huge file from disk

I am developing an iOS app and I have a text file with one city name per line. There are about 3 million cities in that file. In order to be able to perform searches and operations on it I am using a B-Tree, but this tree takes a long time to be created. It is not good for the user experience to make the user wait for this every time they use the app. All of this is without using Core Data!
Any tips on how I can speed up this process?
Thank you
My recommendation is that you use SQLite with an index on the fields you want to query (or some other type of permanent, indexed storage) so that the user only has to wait the first time the app is opened, and then you can query the database, which will be much faster. I am also fairly certain that you can install a SQLite database from a pre-generated file, so you might be able to generate this index offline, bundle it with your application, and then the user has no wait time at all. I'm not 100% sure about this option, though, so you should investigate.
Either way, there is no magic solution here. If the data you want is on line 2 million of the file, you will have to read 2 million lines of text in order to get to that line. I would recommend finding a way to make the UX of your app acceptable so that the user feels better about waiting for the data to load. If you display some sort of pretty screen with a progress bar while the data is being indexed, the user will be more forgiving of the wait.
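To illustrate the "generate the index offline" idea, here is a rough Python sketch of the one-time build and the runtime lookup. The file names, table layout, and prefix search are my own assumptions; on iOS the query side would go through the SQLite C API or a wrapper rather than Python:

```python
import sqlite3

def build_city_db(text_path="cities.txt", db_path="cities.sqlite"):
    """One-time, offline: turn the 3-million-line text file into an indexed DB."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS cities (name TEXT NOT NULL)")
    with open(text_path, encoding="utf-8") as f:
        conn.executemany(
            "INSERT INTO cities (name) VALUES (?)",
            ((line.strip(),) for line in f if line.strip()),
        )
    # The index is what makes lookups fast; build it once, ship the .sqlite file.
    conn.execute("CREATE INDEX IF NOT EXISTS idx_cities_name ON cities (name)")
    conn.commit()
    conn.close()

def search_prefix(db_path, prefix, limit=20):
    """Runtime: an index-friendly prefix search (range scan instead of LIKE)."""
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT name FROM cities WHERE name >= ? AND name < ? ORDER BY name LIMIT ?",
        (prefix, prefix + "\uffff", limit),
    ).fetchall()
    conn.close()
    return [r[0] for r in rows]
```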
The B-Tree will obviously take some time to be created. If you don't want to use a database but would rather stick with your own B-Tree implementation, you could dump the tree data to a separate file and load that when the program starts instead of recreating it every time. However, you will have to update the cached tree every time the source data is modified.
In Python the pickle module can help you, but most programming languages will have a serialisation module.
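As a sketch of that "build once, then load the dump" idea in Python: build_tree below is just a stand-in for whatever currently constructs the B-Tree, and the cache file name is arbitrary.

```python
import os
import pickle

CACHE_PATH = "cities.btree"   # arbitrary cache file name

def load_or_build_tree(text_path="cities.txt"):
    # Fast path: reuse the tree serialised on a previous run.
    if os.path.exists(CACHE_PATH):
        with open(CACHE_PATH, "rb") as f:
            return pickle.load(f)
    # Slow path (first run, or after the source file changed): rebuild and cache.
    tree = build_tree(text_path)   # placeholder for the existing B-Tree builder
    with open(CACHE_PATH, "wb") as f:
        pickle.dump(tree, f, protocol=pickle.HIGHEST_PROTOCOL)
    return tree
```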
Does this file come with the application? If it does, then you could process the file into an SQLite database before you ship the app containing the database. You can then use SELECT statements to search the data using indexed fields (like the city name).
If the file changes, still ship with a database and just send the amendments as a file, which would edit the database to bring it back up to date. You may need to add a command to each line of the file, such as REPLACE, NEW, or DELETE.
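A rough Python sketch of applying such an amendment file with sqlite3; the one-command-per-line format (NEW|name, DELETE|name, REPLACE|old|new) is invented purely for illustration:

```python
import sqlite3

def apply_amendments(db_path, amendments_path):
    conn = sqlite3.connect(db_path)
    with open(amendments_path, encoding="utf-8") as f:
        for line in f:
            action, _, rest = line.strip().partition("|")
            if action == "NEW":
                conn.execute("INSERT INTO cities (name) VALUES (?)", (rest,))
            elif action == "DELETE":
                conn.execute("DELETE FROM cities WHERE name = ?", (rest,))
            elif action == "REPLACE":
                old, _, new = rest.partition("|")
                conn.execute("UPDATE cities SET name = ? WHERE name = ?", (new, old))
    conn.commit()
    conn.close()
```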

Lock file for editing in Dropbox

I am building an iOS app that saves data in Dropbox. Multiple devices can use the same data, and while doing this, two devices may sometimes overwrite the same file. To avoid this situation, is there anything like a lock file for writing?
Any alternative workaround solutions are also welcome.
While I don't know the Dropbox API, I would always be careful with a locking mechanism. I know from some systems that locks lead to problems if, for example, the app crashes or quits and the lock does not get released.
A very simple approach, though, would be to store the modification date when you have read the file. Then, before saving changes, compare your stored value with the most current one. If they are different, the file was modified. Next, ask your users how to proceed and either commit the changes, cancel, or create a new file with the same name and some suffix. That is how some sync clients I use deal with this problem.
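As a sketch of that compare-before-write flow in Python: the metadata, upload, and prompt helpers are placeholders for whatever the Dropbox SDK and your UI provide; only the control flow is the point.

```python
def save_with_conflict_check(path, content, last_seen_rev,
                             fetch_remote_rev, upload, ask_user):
    """Optimistic check: only overwrite if the file hasn't changed since we read it."""
    current_rev = fetch_remote_rev(path)          # e.g. modification date or rev id
    if current_rev != last_seen_rev:
        # Someone else modified the file after we read it.
        choice = ask_user("File changed remotely: overwrite, cancel, or keep both?")
        if choice == "cancel":
            return None
        if choice == "keep both":
            path = path + ".conflict"             # same name plus a suffix
    upload(path, content)
    return fetch_remote_rev(path)                 # remember this for the next save
```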

How to make a program kill itself in Delphi?

A year ago I found a post about how to make a program kill itself. It suggested writing some value to the registry, the Windows directory, or some other location on disk the first time the program runs. When it tries to run a second time, the program just checks the value in that location and, if it doesn't match, terminates itself.
This is simple and a little naive, as any real-time anti-virus application could easily watch what value your program wrote to disk and where. And in a true sense, that method does not 'kill' the program: it just lies there intact and complete, sleeping only for lack of a trigger.
Is there a method that truly kills the program, such as deleting itself permanently, disembowelling itself, disrupting its own classes or functions, or fragmenting itself?
Thank you.
+1 to this question.
It is unfortunate that people often tend to vote down questions about tricky ways of doing things! There is nothing illegal here, although at times this question may sound to other people as if the method is unnecessary. But there are situations where one wants a program to delete itself once it has been executed.
To be clear: it is possible to delete the exe itself once it has been executed.
(1) As indicated in the earlier answer, it is not possible for an exe to be deleted while it is executing from disk, because the OS simply doesn't allow that.
(2) However, to achieve this, what we need to do is just execute the EXE in memory! It is pretty easy, and the same EXE can then be deleted from disk once it is running in memory.
Read more on this unconventional technique here:
execute exe in memory
Please follow the above post to see how you can execute an exe from a memory stream, or you can even Google it and find yet another way. There are numerous examples that show how to execute an exe in memory. Once it is running, you can safely delete it from disk.
Hope this throws some light into your question.
An application cannot delete itself off the disk directly, because while the application is running the disk file is 'open' - hence it cannot be deleted.
See if MoveFileEx with the MOVEFILE_DELAY_UNTIL_REBOOT flag fits your requirement.
If you can't wait for a reboot, you'll have to write a second application (or batch file) that runs when the first application closes, waits for the first application to finish closing, and then deletes it.
It's chicken and egg though - how do you delete the second application/batch file? It can't delete itself. But you could put it in the %temp% directory and then use MoveFileEx() to delete it next time the machine is rebooted.
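For reference, a minimal Delphi sketch of the MoveFileEx suggestion above. Note that MOVEFILE_DELAY_UNTIL_REBOOT records the pending delete in the registry, which typically requires administrator rights:

```delphi
uses
  Windows, SysUtils;

procedure ScheduleSelfDeleteOnReboot;
begin
  // A nil destination plus MOVEFILE_DELAY_UNTIL_REBOOT asks Windows to delete
  // the file (here: the running exe) at the next restart.
  if not MoveFileEx(PChar(ParamStr(0)), nil, MOVEFILE_DELAY_UNTIL_REBOOT) then
    RaiseLastOSError;
end;
```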

Reconstructing sms.db

Backstory
This afternoon, I replied to a text from my girlfriend, then apparently neglected to sleep my phone before putting it back in my pocket. When I pulled it back out a few minutes later, my phone had decided to hit "Edit->Clear All" on the conversation, vaporizing two years and two phones worth of SMS history with her. While I have a backup of the phone, it's close to three weeks old at this point, and there's enough solid discussion that I'd like to reconstruct; I've already grabbed a copy of sms.db, but I think the method I used vacuumed the file, so there are no soft-deleted texts in it.
Meat of the Question
I have a three-week-old backup of my sms.db, and have access to an up-to-date copy of her sms.db. I'd like to
export the texts she has but I don't (easy, at least to CSV)
change the "perspective" info (the address field and the sent/received/deleted/unknown field), keeping the timestamp and text
import/merge these new entries into my old sms.db backup
merge this updated backup with my current sms.db (optional/there seems to be an online utility for that)
I don't really know SQL but would be willing to learn; the problem I have is that from what I understand, the tables within sms.db have become more interdependent over the OS's lifespan, and the triggers now call C functions that don't exist outside the phone, so it's not a simple matter of calling a single trigger on multiple entries. Does anyone know of any ways to work around this complexity, or even better, any utilities that have already figured out how to import individual entries into sms.db?
Edit:
I've been examining sms.db, and from what I can tell, the relationships are pretty straightforward:
for message, I mostly need to make sure that the ROWIDs of any added messages are higher than the current highest ROWID (a rough sketch of this merge appears after this list)
msg_group holds the message:ROWID of the last message for each contact; I can look up the correct address within group_member; group_member:group_id corresponds with msg_group:ROWID
msg_group has a hash column; this will probably be the hardest thing to update, since I'm not immediately sure what it's hashing, or what hash to use
sqlite_sequence doesn't seem like it's quite up-to-date; its entries seem to all be smaller than the actual ROWIDs, but I assume this means I won't have to mess with it very much.
I'm not really sure that I'll be able to change msg_pieces at all: it's the table in charge of handling the multiple parts of an MMS message.
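A very rough sqlite3 sketch of the ROWID-offset merge mentioned in the first point above. The column names (address, date, text, flags, group_id) and the flag values for sent/received are assumptions based on the description here, not a verified sms.db schema, and the msg_group bookkeeping (last-message ROWID, hash) would still need to be fixed up afterwards:

```python
import sqlite3

conn = sqlite3.connect("my_backup_sms.db")          # the three-week-old backup
conn.execute("ATTACH DATABASE 'her_sms.db' AS theirs")

# Keep every imported ROWID above the backup's current highest ROWID.
offset = conn.execute("SELECT IFNULL(MAX(ROWID), 0) FROM message").fetchone()[0]

conn.execute(
    """
    INSERT INTO message (ROWID, address, date, text, flags, group_id)
    SELECT ROWID + :offset,
           :her_address,                  -- in my copy, 'address' is her number
           date,
           text,
           CASE flags WHEN 2 THEN 3       -- guess: 2 = received, 3 = sent;
                      WHEN 3 THEN 2       -- swap them to flip the perspective
                      ELSE flags END,
           :group_id                      -- her contact's group in MY msg_group
    FROM theirs.message
    WHERE date > :cutoff                  -- only the span missing from the backup
    """,
    {"offset": offset, "her_address": "+15551234567",   # placeholder number
     "group_id": 1, "cutoff": 0},                       # placeholder values
)
conn.commit()
```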
Hey, did you get this sorted out? If you haven't, I suggest taking a look at http://smsmerge.homedns.org/
I have been in a similar position to yours, but I was lucky and had a more recent backup than that.
Let me know if you need a hand with it

Resources