I have a script file for parsing through a SQLite database. I now need to create a cron job that will download and unzip said database from a third party (already hosted as SQLite). I understand this can be done using wget and unzip, but given Heroku's read-only file system, is this possible entirely in memory? Thanks.
Heroku's file system is read-only, but you can use the tmp and log directories within your application folder.
From Heroku's doc:
There are two directories that are writeable: ./tmp and ./log (under your application root). If you wish to drop a file temporarily for the duration of the request, you can write to a filename like #{RAILS_ROOT}/tmp/myfile_#{Process.pid}. There is no guarantee that this file will be there on subsequent requests (although it might be), so this should not be used for any kind of permanent storage.
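The download itself is no problem; only the write has to target one of those writable paths. A minimal sketch of the cron job's command, assuming a made-up archive URL and file names, and assuming funzip from the Info-ZIP package is available on the dyno, run from the application root:

# stream the archive through funzip so only the extracted database touches disk,
# landing in the writable ./tmp directory
wget -q -O - https://example.com/data.zip | funzip > tmp/data_$$.sqlite

If funzip isn't available, the fallback is to save the zip into ./tmp first and run unzip -o tmp/data.zip -d tmp/; either way, nothing is written outside ./tmp.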
I don't understand the need for/use of the git unpack-objects command.
If I have a pack file outside of my repository and run git unpack-objects on it, the pack file will be "decompressed" and all the object files will be placed in .git/objects. But what is the need for this? If I just place the pack and index files in .git/objects, I can still see all the commits, I have a functional repo, and my .git takes up less space since the pack file is compact.
So why would anyone need to run this command?
A pack file uses the same format that is used for normal transfer over the network. So I can think of two main reasons to use the manual command instead of the network:
1. having a similar update workflow in an environment with no network connection between the machines, or where the network cannot be used for other reasons
2. debugging/inspecting the contents of the transfer
For 1), you could just use a disk or any kind of removable media for your files; it could be encrypted, for instance.
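For example, a rough sketch of that offline workflow (branch name and removable-media path are assumptions, and the placeholders need filling in):

# on the source machine: build a pack of everything reachable from master
echo master | git pack-objects --revs /media/usb/transfer

# on the target machine, inside the receiving repository:
git unpack-objects < /media/usb/transfer-<sha1>.pack
git update-ref refs/heads/master <commit-id-you-transferred>

In practice git bundle wraps much the same idea in a friendlier interface; unpack-objects is the lower-level piece you would reach for when scripting or inspecting the transfer yourself.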
I'm running my development environment inside an Ubuntu VM hosted on Windows, so the VM works off a Windows-hosted NFS share. I've been having problems lately with 'too-quick' file access (Sprockets tries to unlink a file and fails, though I can delete it manually seconds later). This frequent problem shows up as: Permission denied @ unlink_internal - /home/vagrant/rails/dev.website/tmp/cache/assets/development/sprockets/v3.0/[some-random-string]. It crops up with different asset references every time, so I know it's not a problem with the files themselves.
My stop-gap solution was to use memcached as Sprockets' cache store (instead of the file store).
This works; however, when I want to debug rendering time/iterations in my logs, I don't want memcached running. Ideally I'd like to point the entire app's temp directory at the VM's local file system instead of the NFS-mounted folder my Rails app resides in - unless someone has a better solution.
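For reference, the kind of thing I have in mind (just a sketch; the local_tmp path is made up, the rest comes from the error above) is to keep the app on the NFS share but replace tmp/ with a symlink to a directory on the VM's local disk:

mkdir -p /home/vagrant/local_tmp/dev.website
rm -rf /home/vagrant/rails/dev.website/tmp
ln -s /home/vagrant/local_tmp/dev.website /home/vagrant/rails/dev.website/tmp

That way Sprockets (and anything else writing under tmp/) would hit the local filesystem instead of NFS.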
I created a Neo4j database on a PC, with many relationships, nodes, etc.
How do I move/copy the database from this PC to another?
Thanks for the help.
francesco
Update 1: I have tried to find conf/neo4j-server.properties, but I don't have it...
This is a screenshot of my Neo4j folder (it is in the Windows Documents folder):
http://s12.postimg.org/vn4e22s3x/fold.jpg
Neo4j databases live in your filesystem; you can simply make a copy of the folder in which your Neo4j data is stored. If you are running the standalone server, this folder is configured in conf/neo4j-server.properties, and the line will look something like this:
org.neo4j.server.database.location=data/graph.db
Copy the contents of that folder to the graph database folder on your other machine. I'd recommend that your databases not be running when you do this.
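A minimal sketch of that copy on a standalone Linux install (paths are made up; on Windows the same thing can be done in Explorer or with xcopy):

bin/neo4j stop                                   # stop the server on the source machine
cp -r /path/to/neo4j/data/graph.db /backup/graph.db
# copy /backup/graph.db into data/graph.db on the target machine, then start Neo4j there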
I believe you're looking for the dump shell command, which you can use to export a database into a single Cypher CREATE statement; you'd "dump" the database and then import it on your new machine.
Information on using the command is outlined here: Neo4j docs
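For the older 2.x releases, a sketch of what that looks like with the bundled neo4j-shell (file names assumed):

bin/neo4j-shell -c dump > full-db.cypher     # export the whole graph as Cypher
bin/neo4j-shell -file full-db.cypher         # replay it against the new, empty database

For large graphs this can be slow, so the neo4j-admin dump/load approach in the next answer may be a better fit.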
A Neo4j database can be dumped and loaded using the following commands:
neo4j-admin dump --database=<database> --to=<destination-path>
neo4j-admin load --from=<archive-path> --database=<database> [--force]
Limitations
The database should be shut down before running the dump and load commands.
https://neo4j.com/docs/operations-manual/current/tools/dump-load/
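A concrete run might look like this (the database name, paths, and the use of --force to overwrite an existing target are all assumptions):

neo4j-admin dump --database=neo4j --to=/backups/neo4j.dump
# copy /backups/neo4j.dump to the other machine, then on that machine:
neo4j-admin load --from=/backups/neo4j.dump --database=neo4j --force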
I used the above solution, but the file name was different.
In the Neo4j data folder, look for a folder called conf and, inside it, the configuration file called neo4j.conf.
Inside this file you will see a line that points to the folder containing the data; it's called "graph.db".
Replace that folder with the same folder from the backup of the DB that you want to clone.
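For reference, in a 3.x neo4j.conf the relevant entries look roughly like this (often commented out, and the exact keys depend on your version):

#dbms.active_database=graph.db
#dbms.directories.data=data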
As the title says (and as may be apparent, I am still a beginner): in my Rails app, I have implemented an MVC for support pages.
I wanted to show the pages I created to my mentor, so I committed and pushed to GitHub, but I noticed that only the images were pushed to GitHub! (I use CKEditor to handle images.)
Now, I am sure that the pages (which consist of Title and Contents fields) exist, because when I execute db.support_pages.find() in the Mongo shell, it gives me back a list of the pages with their titles and contents. But when I open those pages (on localhost) and edit the content, I see that Git is not even tracking them!
I don't know what more information I should post, so here is my .gitignore file:
*.rbc
*.sassc
*~
.sass-cache
.project
capybara-*.html
.rspec
/.bundle
/vendor/bundle
/log/*
/tmp/*
/public/assets/*
/db/*.sqlite3
/public/system/*
/coverage/
/spec/tmp/*
/spec/coverage/*
**.orig
rerun.txt
pickle-email-*.html
# Ignore all logfiles and tempfiles.
/log/*.log
/tmp
.idea
/attic/*
Any tips, leads, or advice (or even requests to post more info regarding this issue) are welcome. :)
Thanks in advance.
Your MongoDB database is composed of multiple data files containing the data and indexes.
If you want to commit the contents of the database to version control, you will want to export the data and index definitions using mongodump.
If you want to share your database with your mentor, I would suggest using mongodump to get a full copy of the database, then compress and add that dump into git.
For example (on Linux), assuming a database called mydatabase:
cd ~/backup
mongodump -d mydatabase
tar -czvf mydatabase.tgz dump/
git add mydatabase.tgz
Your mentor would need to have MongoDB installed, and could extract the tgz file (tar xzvf mydatabase.tgz) and use mongorestore to load the data. I expect your application might require further configuration, which you would document in a README.
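Roughly, the mentor's side would look like this (assuming the same database name as above):

tar xzvf mydatabase.tgz
mongorestore -d mydatabase dump/mydatabase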
Git tracks changes made in its directory. The pages you're talking about are stored in the database, which lives somewhere else on your computer. We would need more information to give you advice on where to dig.
Hi. Always uploading finished websites and projects, I want to do the following:
make a zip file,
upload that one file,
and then extract it with default chmod permissions, let's say 755 for folders and 664 for files.
With cPanel hosting it's OK; I can do it via the file manager... But on hosting without cPanel I can't.
Maybe someone can give a hint how...?
I use a PHP unzipper. Here is a quick tutorial on it: Tutorial
The FTP protocol doesn't allow for such a thing.
Sometimes I keep a locked-down directory where I drop compressed files, and I have a little PHP script that unzips them by doing glob("*.zip") to get all the files and executing unzip on them.
My "solution" does require the ability to execute commands, but if you're in a more restricted environment you can use PHP's zip_ functions or even a PEAR package.