Neo4j 3.5.12
I ran a delete of 9.5M nodes.
Before the delete, the total size of all files in the /var/lib/neo4j/data/databases/graph.db folder was 9.1 GB.
After the delete it is 16 GB (i.e. bigger).
I understand this article, but my aim is to reduce the memory that the database uses. According to these calculations, dbms.memory.pagecache.size should be set to the sum of the sizes of the $NEO4J_HOME/data/graph.db/neostore.*.db files, plus e.g. 20% for growth. My neostore.*.db files now total 6.4 GB (annoyingly, I didn't measure them before the delete, so I am not sure whether this went down).
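For reference, one way to get that total on Linux is:
du -ch /var/lib/neo4j/data/databases/graph.db/neostore.*.db | tail -n 1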
Am I correct in thinking that even though the total size of all files in /var/lib/neo4j/data/databases/graph.db has got bigger, the RAM required will have got smaller, because the neostore.*.db files inside it will have got smaller?
Also, I note that the dbms.tx_log.rotation.retention_policy setting for my database is 1 day. I assume this means that after one day any neostore.transaction.db files created by the delete operations would be removed and the total size of all files in the /var/lib/neo4j/data/databases/graph.db would therefore go back down again. Is that correct?
This is probably due to the transaction log files. Look at the logging configuration section of the neo4j.conf file and limit the size of the logs; see the documentation for more information.
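For example, in neo4j.conf you can cap the transaction logs and size the page cache along these lines (the values here are only illustrative, not recommendations):
dbms.tx_log.rotation.retention_policy=2G size
dbms.memory.pagecache.size=8g
With a size-based retention policy the oldest transaction logs are pruned once their total exceeds the given amount, instead of being kept for a full day.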
I was looking at my disk with DaisyDisk and I have a 30GB something called Docker.qcow2. More specifically, DaisyDisk puts it under ~/Library/Containers/com.docker.docker/Data/vms/0/Docker.qcow2. Is it some kind of cache? Can I delete it? I have a lot of images from old projects that I won't ever use and I'm trying to clear up my disk.
The .qcow2 file is exposed to the VM as a block device with a maximum size of 64GiB by default. As new files are created in the filesystem by containers, new sectors are written to the block device. These new sectors are appended to the .qcow2 file causing it to grow in size, until it eventually becomes fully allocated. It stops growing when it hits this maximum size.
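If you want to see this for yourself, qemu-img (if you have it installed, e.g. via Homebrew) reports both the virtual size and the space the file actually uses:
qemu-img info ~/Library/Containers/com.docker.docker/Data/vms/0/Docker.qcow2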
You can stop Docker and delete this file; however, deleting it will also remove all your containers and images, and Docker will recreate the file on the next start.
If you stumbled upon this, you're probably not stoked about a 64 GB file. If you open up Docker > Preferences, you can tone it down quite a bit to a more reasonable size. Doing this will delete the old .qcow2 file, and that will delete your containers, so be careful.
I've had the same issue. Instead of deleting the file or adjusting the size using the settings simply use the following commands:
docker images
This will show all of the images on your system and the size of each image (you'd be surprised how quickly this can get out of hand).
docker image rm IMAGEID
This will remove the image with the ID that you can get from the images command above.
I use this method and it frees up a lot of disk space.
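On newer Docker versions you can also see what is taking up space and clean up dangling images in one step (review what will be removed before confirming):
docker system df
docker image prune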
Various users of our application have started to complain that it uses a lot of space on the phone. We added a data collector that reports the files inside the application folder and their sizes, and found the following for a small number of users:
Preferences:{
files:{
"{bundle_identifier}.plist":"23.97479057312012",
"{bundle_identifier}.plist.0BTeiJo":"22.25380897521973",
"{bundle_identifier}.plist.1lT9kMO":0,
"{bundle_identifier}.plist.2HHwLSb":0,
"{bundle_identifier}.plist.2L9bkJR":0,
"{bundle_identifier}.plist.2xAnoy5":0,
"{bundle_identifier}.plist.3Qgyplk":0,
"{bundle_identifier}.plist.4SBpAox":"23.95059013366699",
"{bundle_identifier}.plist.4Xm8NvI":0,
"{bundle_identifier}.plist.5sPZPIi":0,
"{bundle_identifier}.plist.6GOkP57":0,
"{bundle_identifier}.plist.6SYZ1VF":"21.67253875732422",
"{bundle_identifier}.plist.6TJMV5r":"21.67211151123047",
"{bundle_identifier}.plist.6oNMJ0b":0,
"{bundle_identifier}.plist.7C1Kuvm":0,
"{bundle_identifier}.plist.7E3pmr4":0,
"{bundle_identifier}.plist.7ExLAx0":"21.70229721069336",
"{bundle_identifier}.plist.7GOPE3W":"18.70771026611328",
...
},
size:"960.2354183197021"
Can someone explain why these files (*.plist.*) appeared, how I can safely remove them, and how to make sure they won't appear again?
P.S. I found logic in the project where we store dictionaries in NSUserDefaults (I know this is bad practice), but there is not much data.
UPDATE:
I have discovered that these files (*.plist.*) are generated after a backup. Sometimes their size is 0, and sometimes it matches the size of the original *.plist at backup time.
Now I need to know: is it safe to remove them?
I am using Docker to have a versioned database in my local dev environment (e.g. to be able to snapshot/revert DB state). I need this due to the nature of my work; I cannot use transactions to achieve what I want (one reason being that some of the statements are DDL).
So I have a Docker container with one large file (a MySQL InnoDB file).
If I change this file a little bit (say, update a row in a table) and then commit the container, a new layer is created, and the size of this layer is the size of this huge file, even if only a couple of bytes in the file changed.
I understand this happens because, to Docker, a file is an 'atomic' unit: if a file is modified, a copy of it is created in the new layer, and that layer is later included in the image.
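For what it's worth, the per-layer sizes are easy to see with docker history (the image name here is just a placeholder):
docker history my-mysql-snapshot
Every commit after touching the InnoDB file shows up as a layer roughly the size of the whole file.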
Is there a way to change this behaviour and make Docker store diffs at the file-content level, e.g. if 10 bytes of a 10 GiB file were changed, create a layer smaller than 10 GiB?
Maybe I can use some other storage driver? (Which one?)
I am also not very tied to Docker, so I could even switch to rkt. The question is: do you think that would help? (Maybe its image format is different and can store diffs at the file-content level.)
In my app I want the user to be able to download offline map content.
So I moved all my tiles into a zip file (I used 0 compression, i.e. store only).
The structure is like this: z/x/y.jpg
+0
+-0
+--0.jpg
+1
+-1
+--0.jpg
+2
+-2
+--1.jpg
So basically there are going to be many, many files for zoom levels 0-15 (about 120,000 tiles for my test region).
I am using https://github.com/mattconnolly/ZipArchive now, but I also tried https://github.com/soffes/ssziparchive before, and both are pretty slow. It takes about 5(!) minutes on my iPhone 5S to unzip the files.
Is there any way I can speed this up? What other possibilities are there besides downloading the tiles in one big zip file?
Edit:
How can I download the content of the whole folder to my iPhone quickly, without needing to unzip anything?
Any help is appreciated!
JPEGs rarely compress at all with zip, because they are already compressed. What you should do is create your own binary file format, and put whatever metadata you need into it along with the images (which you should encode with a really low quality setting, to get their size down).
When you download those files, you can open them, quickly read them into memory, and extract data or images as needed.
This will be really fast and have virtually no overhead if your extra data is binary (not text).
PS: I just stumbled upon a PHP Plist class.
If anyone is wondering how I ended up solving this:
For my use case (map tiles) I am now using MBTiles instead of zipped images. It's one big database file and super easy to read with FMDB. No unpacking whatsoever needed...
Even when I placed all the images in one binary file without any compression, the "extracting" still took forever!
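In case it helps anyone: an MBTiles file is just a SQLite database, so you can inspect it with the sqlite3 command line (the file name is only an example):
sqlite3 region.mbtiles "SELECT zoom_level, COUNT(*) FROM tiles GROUP BY zoom_level;"
Each tile is a row in the tiles table (zoom_level, tile_column, tile_row, tile_data), so there is nothing to unpack.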
I have gone through several links, and though they explain what an incremental backup is, how are the differences since the last backup identified? Is it by the size of the files or by the data? Can anyone explain, or else point me to a reference for this?
I found the following information at the reference specified below. Hope it helps someone.
The client considers a file changed if any of the following attributes changed since the last backup:
File size
Date or time of last modification
Extended Attributes
Access Control List
If only the following items change, they are updated without causing the entire file to be backed up to the server:
File owner
File permissions
Last access time
Inode
Group ID
Reference: http://publib.boulder.ibm.com/infocenter/tsminfo/v6/index.jsp?topic=%2Fcom.ibm.itsm.client.doc%2Fc_bac_fullpart.html
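For context, running an incremental backup with the TSM Backup-Archive client looks something like this (the path and option are only illustrative); the client then applies the rules above to decide which files actually get sent:
dsmc incremental /data -subdir=yes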