I'm writing an app using PyGTK and I was wondering how my app would be able to save data on the user's computer. I plan to distribute this across Windows and Unix. What would be the best way to go about this?
You could use the pickle module.
It serializes data so that you can retrieve it in its native Python form later.
It works with ordinary file objects, so it's cross-platform, and it can handle basically any object,
including instances of custom classes. About the only thing I know it cannot serialize is a function.
A short usage example:
import pickle

# Create an object
array = [1, "foo", Exception()]

# Serialize it (pickle data is binary, so open the file in "wb" mode)
with open("settings.dat", "wb") as f:
    pickle.dump(array, f)

# Unserialize it
with open("settings.dat", "rb") as f:
    array = pickle.load(f)
It really depends on what you want to do. I personally prefer using SQLite3, which is a very easy-to-use database with Python bindings. (It also offers you the freedom to save with your own file extension.) The disadvantage is that the database file can be opened in a plain text editor like Notepad (albeit cryptically).
If we have a small table which contains relatively static data, is it possible to have Active Record load this in on startup of the app and never have to hit the database for this data?
Note that ideally I would like this data to be join-able from other models which have relationships to it.
An example might be a list of countries with their telephone number prefixes - this list is unlikely to change, and if it did, it would be changed by an admin. Other tables might have relationships with this (e.g. given a User who has a reference to a country, we might want to look up that country's telephone prefix).
I saw a similar question here, but it's 6 years old and refers to Rails 2, while I am using Rails 5 and maybe something has been introduced since then.
Preferred solutions would be:
Built-in Rails / ActiveRecord functionality to load a table once on startup, and if other records are subsequently loaded which have relationships with the cached table, link to the cached objects automatically (i.e. manually caching MyModel.all somewhere is not sufficient, as relationships would still be loaded by querying the database).
Maintained library which does the above.
If neither is available, I suppose an alternative would be to define the static dataset as an in-memory enum/hash or similar, persist the hash key on records which have a relationship to this data, and define methods on those models to look up the object in the hash using the key persisted in the database. This seems quite manual though...
[EDIT]
One other thing to consider with potential solutions - the manual solution (3) would also require custom controllers and routes for the data to be accessible over an API. Ideally it would be nice to have a solution where the data could be offered up via a RESTful API (read-only - just GET) using standard Rails mechanisms like scaffolding, without too much manual intervention.
I think you may be discounting the "easy" / "manual" approach too quickly.
Writing the data to a Ruby hash / array isn't that bad an idea.
And if you want to use a CRUD scaffold, why not just use the standard Rails model / controller generator? Is it really so bad to store some static data in the database?
A third option would be to store your data in a file in some serialized format and then, when your app loads, read it in and construct ActiveRecord objects. Let me show an example:
data.yml
---
- a: '1'
  b: 1
- a: '2'
  b: 2
This is a YAML file containing an array of hashes; you can construct such a file with:
require 'yaml'

File.open("data.yml", "w") do |f|
  data = [
    { "a" => "1", "b" => 1 },
    { "a" => "2", "b" => 2 }
  ]
  f.write(YAML.dump(data))
end
Then to load the data, you might create a file in config/initializers/ (everything in that directory is run automatically when Rails boots):
config/initializers/static_data.rb
require 'yaml'

# define a constant that can be used by the rest of the app
# (using Rails.root so the path works regardless of the working directory)
StaticData = YAML.load(File.read(Rails.root.join("data.yml"))).map do |object|
  MyObjectClass.new(object)
end
To avoid having to write database migrations for MyObjectClass (when it's not actually being stored in the db) you can use attr_accessor definitions for your attributes:
class MyObjectClass < ActiveRecord::Base
  # say these are your two columns
  attr_accessor :a, :b
end
Just make sure not to call methods like save, delete, or update on this model (unless you monkey-patch them).
If you want to have REST / CRUD endpoints, you'd need to write them from scratch because the way to change data is different now.
You'd basically need to do any update in a three-step process (sketched below):
load the data from YAML into a Ruby object list
change the Ruby object list
serialize everything to YAML and save it.
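As a rough sketch of that read-modify-write cycle (using the data.yml from above):

require 'yaml'

# 1. load the data from YAML into a Ruby object list
data = YAML.load(File.read("data.yml"))

# 2. change the Ruby object list
data << { "a" => "3", "b" => 3 }

# 3. serialize everything back to YAML and save it
File.write("data.yml", YAML.dump(data))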
So you can see you're not really doing incremental updates here. You could use JSON instead of YAML and you'd have the same problem. With Ruby's built-in storage system PStore you would be able to update objects on an individual basis, but using SQL for a production web app is a much better idea and will honestly make things simpler.
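For illustration, here is what the PStore route looks like (PStore ships with Ruby's standard library; the file name and keys here are made up):

require 'pstore'

store = PStore.new("static_data.pstore")

# each transaction updates individual keys in place, with rollback on exceptions
store.transaction do
  store["country_prefixes"] ||= {}
  store["country_prefixes"]["US"] = "+1"
end

store.transaction(true) do  # read-only transaction
  puts store["country_prefixes"]["US"]
end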
Moving beyond these "serialized data" options, there are key-value storage servers that keep data in memory, such as Memcached and Redis.
But to go back to my earlier point, unless you have a good reason not to use SQL you're only making things more difficult.
It sounds like FrozenRecord would be a good match for what you are looking for.
Active Record-like interface for read only access to static data files of reasonable size.
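For example, a minimal setup might look like this (the file location and attribute names are my own assumptions, not prescribed by the gem):

# app/models/country.rb
class Country < FrozenRecord::Base
  # records live in a YAML file: config/data/countries.yml
  self.base_path = Rails.root.join("config/data")
end

# config/data/countries.yml:
# - id: 1
#   name: "France"
#   phone_prefix: "+33"

Country.find(1)                    # Active Record-like finders work
Country.find_by(name: "France")
Country.where(phone_prefix: "+33")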
I have a Rails application and a JSON file which contains the data. Instead of creating a table for a model, I want to point my model at that JSON file, and I should be able to treat the file like a table. Please suggest.
If I understand you correctly, you want a model that uses a JSON file as its "backend DB" instead of a normal DB?
In order to get a Rails model to point to a JSON file, you would need a JSON DB adapter, and I'm not sure there is one for Rails.
Rails uses what are called "adapters", where the same code:
Model.find(1)
will work on any DB (PostgreSQL, MySQL, SQLite3, etc...) because the Model.find() method has been implemented by each adapter.
This allows the interface for the developer to remain the same as long as the adapter implements it.
It avoids the problem where every DB creator implements a different interface, and now everyone has to learn those particular methods (convention for the win!).
All that said, I can't find a JSON DB adapter, so if you want that functionality you'll have to read the JSON file yourself and search against it.
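As a sketch of what that could look like (the file path, class, and attributes here are hypothetical):

require 'json'

class Country
  # parse the file once and memoize the result
  def self.data
    @data ||= JSON.parse(File.read(Rails.root.join("config/data/countries.json")))
  end

  def self.all
    data
  end

  # emulate a simple find_by against the parsed array of hashes
  def self.find_by(attrs)
    data.find { |row| attrs.all? { |k, v| row[k.to_s] == v } }
  end
end

Country.find_by(name: "France")  # => {"name" => "France", "prefix" => "+33"}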
However, if you're talking about using client-side storage with a JSON file, that isn't possible, because the client side only understands JavaScript and a Ruby model (class) lives on the backend server. They don't talk to each other directly. In that case, you'd have to implement a JavaScript model that maps to the JSON data.
MySQL as in-memory-database
Rails with in memory database shows a way to use MySQL in-memory; then you load your data from the JSON file at startup and dump it back out at the end (or after commits).
In-memory DB adapter
https://github.com/maccman/supermodel exists but looks dead (4 years old). Maybe you find others?
Rolling it yourself with Nulldb
Use https://github.com/nulldb/nulldb to throw away all SQL statements and register some hooks (after_save etc.) to store the records in a hash. You then load that hash into memory at startup and dump it out to JSON later.
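A rough sketch of that idea (the model, file paths, and the in-memory hash are my own assumptions; nulldb only guarantees that the SQL goes nowhere):

# Gemfile: gem 'activerecord-nulldb-adapter'
require 'json'

ActiveRecord::Base.establish_connection(adapter: :nulldb, schema: 'db/schema.rb')

class Stuff < ActiveRecord::Base
  STORE = {}  # hypothetical in-memory table, keyed by id

  # capture every "saved" record instead of letting it vanish
  after_save { STORE[id] = attributes }

  def self.load_json(path)
    JSON.parse(File.read(path)).each { |attrs| STORE[attrs["id"]] = attrs }
  end

  def self.dump_json(path)
    File.write(path, JSON.pretty_generate(STORE.values))
  end
end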
Separating concerns
My favourite approach, maybe too late if you have lots of working code already:
Separate your active-record code from your actual domain model. That means, if you today have a model class Stuff < ActiveRecord::Base, then separate that into class StuffAR < ActiveRecord::Base and class Stuff.
Stuff then contains an instance of StuffAR.
Using the proper Ruby mechanisms (i.e., BasicObject#method_missing), you can delegate all calls from Stuff to StuffAR by default.
You now have complete control over your data and can do whatever with it.
(This is only one way to do it, depending on how dynamic/flexible you want to be; if you need a real DB part-time, you can do it differently - e.g. class Stuff < StuffAR at one extreme, or explicitly coded methods which call StuffAR instead of a generic method_missing. Stuff is a PORO (plain old Ruby object) now, and you use StuffAR just for the DB contact.)
In this approach, be careful not to use Stuff like an AR object. I.e., do not call Stuff.where(name: 'xyz') from outside, but create domain methods for that (e.g., in this example, Stuff.find_by_name(...)).
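A minimal sketch of that separation (class and column names are made up):

class StuffAR < ActiveRecord::Base
  self.table_name = "stuffs"
end

class Stuff
  def initialize(record = StuffAR.new)
    @record = record
  end

  # domain-level finder instead of exposing AR's query interface
  def self.find_by_name(name)
    new(StuffAR.find_by!(name: name))
  end

  # delegate everything else to the AR object by default
  def method_missing(name, *args, &block)
    @record.send(name, *args, &block)
  end

  def respond_to_missing?(name, include_private = false)
    @record.respond_to?(name, include_private) || super
  end
end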
Yes, this is coding overhead, but it does wonders to improve your code when your models become big and unwieldy after a time.
Don't need AR at all?
If you only ever want to use JSON and never a real DB, then do the same as before, just leave StuffAR out. They're just POROs then.
I think you have to import your JSON file into a database (e.g. SQLite3) to handle it as a table.
The other workaround would be:
Create a JSON importer for your model which reads the users from the JSON file into an array of User objects.
If you do that, you'll have to write all the searching/ordering yourself in plain Ruby.
I don't know what your current circumstances are, but if you would like to change or add data, I suggest using a simple and lightweight database like SQLite3.
I have a Rails application that serves as an interface to a hybrid of data. Most of the information I require is retrieved from a command-line program using XML-RPC. Aside from this, I require some additional bits of data which I have no option but to store in a database. For this reason, I am having trouble figuring out the best way to design the application.
I have overridden self.all and self.find(id) such that they rely on calls to super and then "enrich" the object by defining its instance variables to the appropriate data retrieved from the program using XML-RPC.
This all seems pretty convoluted, though. For example, I imagine I have lost the ability to use the magic finders (find_by_x), and I don't know what else will break as a result.
My question is if there is a more logical and sensible way of going on about doing this. That is, designing an application that depends on XML-RPC for its data for the most part, but also some data stored in a database.
I did read about after_find. Using this callback, I could implement the "object enriching" process and have it run any time a record is found. However, the way I retrieve data associated with a single item is different from the way I retrieve all item data. The way I do it for all item data (self.all) is much more efficient, but unfortunately not applicable to retrieving a single item's data (self.find). after_find would work well if there were a way to make the callback not apply to self.all calls.
In my experience, you shouldn't mess with ActiveRecord's finders - there is a lot of magic that they rely on.
after_find is a great direction to start with, but if you're having issues with batching, then what I'd recommend is twofold - use a caching layer, and use alias_method_chain to implement a version of #all that performs your batched XML-RPC find, caches the results, and then passes the call through to the unaliased original all. Your after_find would then check the cache first, and only perform the remote find if the data isn't there. This lets you batch data for all finds while still utilizing the callback (a sketch of one way to do this follows).
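For instance, one way to realize this without alias_method_chain (which was deprecated in Rails 5 in favor of Module#prepend) is to override all, pre-warm a cache with one batched call, and look the remote data up lazily per record; XmlRpc here stands in for a hypothetical client:

class Item < ActiveRecord::Base
  # batched path: one XML-RPC call for everything, results cached per id
  def self.all
    records = super.to_a
    XmlRpc.fetch_all_items.each do |id, data|    # hypothetical batched call
      Rails.cache.write(["item_remote", id], data)
    end
    records
  end

  # lazy path: check the cache first, only fall back to a single remote find
  def remote_data
    @remote_data ||= Rails.cache.fetch(["item_remote", id]) do
      XmlRpc.fetch_item(id)                      # hypothetical single-item call
    end
  end
end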
That said, there is probably an easier way to do this. I would just use models that don't descend from ActiveRecord::Base, but rather, which descend from some XMLRPC base interface, and then have faux associations on them that point to AR instances with your database information. Thus, you might have something like:
class XmlRpcModelBase
  # ... XML-RPC plumbing ...

  def self.find(...)
  end

  def self.all(...)
  end

  def extra_data
    @extra_data ||= SomeActiveRecordModel.find(...)
  end
end

class Foo < XmlRpcModelBase
end
It's not ideal, and honestly it's going to depend a lot on how much of this is read and how much is read/write, but I would try to stay out of ActiveRecord's way where possible, and instead just bolt on the AR-related pieces as necessary.
I am implementing an application that manipulates XML documents using Ruby on Rails. I want to have models that encapsulate all the logic and are converted to the corresponding XML document on save. Although I do not need database persistence for my models, I want them to have a validation mechanism like ActiveRecord's. Also, for converting between XML and Ruby objects, an XML mapping library is preferred to make things easier.
Although there are quite a few solutions which allow using ActiveRecord without a table, the XML mapping libraries (e.g. ROXML, XML Mapping) do not seem to play well with ActiveRecord's fields. In other words, it does not look like they can be used together, due to the conflict in their syntax.
Therefore, I would like to know the preferred solution in this case: one which allows the use of an XML binding library with tableless models that still have validation functionality.
For example, one solution is to have two separate models: one a tableless ActiveRecord, and the other plain Ruby objects with XML binding (like what is described in this post). The ActiveRecord models are for validation. To convert them to XML, they first need to be copied to the XML binding models. Although this solution does work, it is not elegant.
You say that you don't need "database persistence", but then you say that you want to save your objects to an XML file and read them back from it. So you do need persistence - on a file. The difference between that and "database persistence" is only linguistic, since we have SQLite, which stores a whole database in a single file.
What you want to do could be done by implementing your own adapter. It would not be complicated, but it would take some time.
I might have made the question look more complicated than it actually is. Essentially what I wanted was a subset of ActiveRecord's features (validations, creating objects from hashes) plus XML binding. In this case, it is probably better to use standard Ruby objects with libraries that provide comparable functionality to ActiveRecord's. For validation, validatable is the right fit. Creating objects from hashes can also be easily implemented.
In summary, a Ruby class + ROXML + the validatable gem + a custom hash-to-object implementation is the way to go for me.
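A rough sketch of how those pieces could fit together (class and attribute names are made up, and the hash-to-object helper is the custom part):

require 'roxml'
require 'validatable'

class Book
  include ROXML        # XML binding
  include Validatable  # ActiveRecord-style validations

  xml_accessor :title
  xml_accessor :author

  validates_presence_of :title

  # custom hash-to-object implementation
  def self.from_hash(attrs)
    new.tap do |obj|
      attrs.each { |k, v| obj.send("#{k}=", v) }
    end
  end
end

book = Book.from_hash("title" => "Example", "author" => "Someone")
book.valid?   # validation via the validatable gem
book.to_xml   # XML serialization via ROXML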