I have a method that gets data out of a database in a Propel-based symfony 1.1 project.
Now a use case has come up where an integer is stored in a varchar field, which results in the wrong sort order, e.g. {1, 17, 5}, and not the numeric order I was expecting, i.e. {1, 5, 17}.
I know that one way would be to redesign my schema.yml, but this is not an option. I was wondering if there is a way to cast said varchar field as an integer without harming the Propel approach.
This is the sorting function:
public static function getFooData($column = 'FooPeer::ID', $orderBy = 'asc') {
    // FIXME: Sort varchar fields as integer, needed for FooPeer::REQUESTS
    $c = new Criteria();
    if ($orderBy == 'asc') {
        $c->addAscendingOrderByColumn($column);
    } else {
        $c->addDescendingOrderByColumn($column);
    }
    return FooPeer::doSelect($c);
}
What about:
$c->addAscendingOrderByColumn('CAST('.$column.' AS UNSIGNED)');
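Applied to the method above, that could look something like this — just a sketch, assuming MySQL's CAST ... AS UNSIGNED syntax and that only FooPeer::REQUESTS (the column from the FIXME comment) needs the cast:

public static function getFooData($column = 'FooPeer::ID', $orderBy = 'asc') {
    $c = new Criteria();
    // Hypothetical: wrap only the varchar column that actually holds numbers in a CAST,
    // so the database sorts it numerically instead of lexically.
    if ($column == FooPeer::REQUESTS) {
        $column = 'CAST('.$column.' AS UNSIGNED)';
    }
    if ($orderBy == 'asc') {
        $c->addAscendingOrderByColumn($column);
    } else {
        $c->addDescendingOrderByColumn($column);
    }
    return FooPeer::doSelect($c);
}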
Just for interest, you could also have written a view for this, and built your model on top of the view rather than the table. Assuming you're writing to the table with Propel, this solution requires the platform to support writable views (I'm not sure they all do, but perhaps that assumption is out of date).
This is often a good/quick technique where you're not sure how to do something in Propel, or where it is really awkward. It's saved me a few times, even though it's not every purist's cup of tea.
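A minimal sketch of such a view, assuming MySQL and a table named foo whose numeric data sits in a requests varchar column (both names are guesses):

CREATE VIEW foo_sortable AS
SELECT id, requests, CAST(requests AS UNSIGNED) AS requests_int
FROM foo;

A model built on the view can then order by requests_int to get the numeric order.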
Reading the comments injected into the code snippet below should give enough context.
--| Table |--
QuestData = {
    ["QuestName"] = {
        ["Quest Descrip"] = {8, 1686192712},
        ["Quest Descrip"] = {32, 1686193248},
        ["Quest Descrip"] = {0, 2965579272},
    },
}
--| Code Snippet |--
--| gets QuestName then does below |--
if QuestName then
    -- (k = QuestName) and (v = the 3 entries below it in the table)
    for k, v in pairs(QuestData) do
        -- Checks to make sure the QuestName obtained by the external function matches what is in the table before continuing
        if strlower(k) == strlower(QuestName) then
            local index = 0
            -- Iterates over the first two pairs - Quest Descrip key and values
            for kk, vv in pairs(v) do
                index = index + 1
            end
            -- Iterates over the second two pairs of values
            if index == 1 then
                for kk, vv in pairs(v) do
                    -- Sends the 10 digit hash number to the function
                    Quest:Function(vv[2])
                end
            end
        end
    end
end
The issue I'm running into is that Lua will only pick up one of the numbers and ignore the rest. I need all the possible hash numbers regardless of duplicates. The QuestData table ("database") has well over 10,000 entries. I'm not going to go through all of them and remove the duplicates. Besides, the duplicates are there because the same quest can be picked up in more than one location in the game. It's not a duplicate quest but it has a different hash number.
Keys are always unique. That is the whole point of a key: each key points to one value, and you cannot have two keys with the same name pointing to different values. That is how Lua tables are defined.
It is like wanting two variables with the same name but different contents. It does not make sense ...
The table type implements associative arrays. [...]
Like global variables, table fields evaluate to nil if they are not initialized. Also like global variables, you can assign nil to a table field to delete it. That is not a coincidence: Lua stores global variables in ordinary tables.
Quote from Lua Tables
Hashing in Lua
Based on the comments, I am updating the answer to give some idea about hashing.
Hashing is something you usually do yourself in low-level languages like C. In Lua, associative arrays are already hashed in the background, so doing it yourself is overkill (especially with SHA or the like).
Instead of the linked lists commonly used in C, you should just construct more levels of tables to handle collisions (there is nothing "better" in Lua).
And if you want to make it fancy, set up some metatables to make it more or less transparent. But from your question it is really not clear what your data look like and what you really want.
Basically you don't need more than this:
QuestData = {
    ["QuestName"] = {
        ["Quest Descrip"] = {
            {8, 1686192712},
            {32, 1686193248},
            {0, 2965579272},
        },
    },
}
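With that layout every hash survives, and the loop from the question becomes a plain walk over the inner list — a sketch, reusing strlower and Quest:Function from the original snippet:

if QuestName then
    for name, descrips in pairs(QuestData) do
        if strlower(name) == strlower(QuestName) then
            for descrip, entries in pairs(descrips) do
                -- entries is now a plain array, so every {offset, hash} pair is kept
                for _, entry in ipairs(entries) do
                    Quest:Function(entry[2]) -- the 10-digit hash number
                end
            end
        end
    end
end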
As Jakuje already mentioned, table keys are unique.
But you can store both as table members, like:
QuestData = {
    -- "QuestName" must be unique! Of course you can put it into a table member as well
    ["QuestName"] = {
        { hash = "Quest Descrip", values = {8, 1686192712} },
        { hash = "Quest Descrip", values = {32, 1686193248} },
        { hash = "Quest Descrip", values = {0, 2965579272} },
    },
}
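The entries can then be read back with ipairs — a short sketch:

for _, entry in ipairs(QuestData["QuestName"]) do
    print(entry.hash, entry.values[1], entry.values[2])
end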
I'm sure you can organize this in a better way. It looks like a rather confusing concept to me.
You've said you can't "rewrite the database", but the problem is the QuestData table doesn't hold what you think it holds.
Here's your table:
QuestData = {
    ["QuestName"] = {
        ["Quest Descrip"] = {8, 1686192712},
        ["Quest Descrip"] = {32, 1686193248},
        ["Quest Descrip"] = {0, 2965579272},
    },
}
But, this is actually like writing...
QuestData["Quest Descrip"] = {8,1686192712}
QuestData["Quest Descrip"] = {32,1686193248}
QuestData["Quest Descrip"] = {0,2965579272}
So the second (and then the third) value overwrites the first. The problem is not that you can't access the table, but that the table doesn't contain the values any more.
You need to find a different way of representing your data.
I have three Rails objects: User, DemoUser and Stats. Both the User and the DemoUser have many stats associated with them. The User and Stats tables are stored in PostgreSQL (using ActiveRecord). The DemoUser is stored in Redis. The id for the DemoUser is a (random) string. The id for the User is a (standard Rails) incrementing integer.
The stats table has a user_id column that can contain either the User id or the DemoUser id. For that reason, the user_id column is a string, rather than an integer.
There isn't an easy way to translate from the random string to an integer, but there's a very easy way to translate the integer id to a string (42 -> "42"). The ids are guaranteed not to overlap (there won't be a User instance with the same id as a DemoUser, ever).
I have some code that manages those stats. I'd like to be able to pass around a some_user instance (which can be either a DemoUser or a User) and then use its id to fetch stats, update them, etc. It would also be nice to define a has_many on the User model, so I can do things like user.stats.
However, operations like user.stats would create a query like
SELECT "stats".* FROM "stats" WHERE "stats"."user_id" = 42
which then breaks with PG::UndefinedFunction: ERROR: operator does not exist: character varying = integer
Is there a way to either let the database (Postgresql), or Rails do auto-translation of the ids on JOIN? (the translation from integer to string should be simple, e.g. 42 -> "42")
EDIT: updated the question to try to make things as clear as possible. Happy to accept edits or answer questions to clarify anything.
You can't define a foreign key between two types that don't have built-in equality operators.
The correct solution is to change the string column to be an integer.
In your case you could create a user-defined = operator for varchar = integer, but that would have messy side effects elsewhere in the database; for example, it would allow bogus code like:
SELECT 2014-01-02 = '2014-01-02'
to run without an error. So I'm not going to give you the code to do that. If you truly feel it's the only solution (which I don't think is likely to be correct) then see CREATE OPERATOR and CREATE FUNCTION.
One option would be to have separate user_id and demo_user_id columns in your stats table. The user_id would be an integer that you could use as a foreign key to the users table in PostgreSQL and the demo_user_id would be a string that would link to your Redis database. If you wanted to treat the database properly, you'd use a real FK to link stats.user_id to users.id to ensure referential integrity and you'd include a CHECK constraint to ensure that exactly one of stats.user_id and stats.demo_user_id was NULL:
check ((user_id is null) <> (demo_user_id is null))
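As a sketch, the stats table could then look something like this (the stats/users table and column names come from the question, the rest is assumed):

CREATE TABLE stats (
    id serial PRIMARY KEY,
    user_id integer REFERENCES users (id),  -- real FK into the users table
    demo_user_id varchar,                   -- links to the DemoUser key in Redis, checked by hand
    -- exactly one of the two ids must be set
    CHECK ((user_id IS NULL) <> (demo_user_id IS NULL))
);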
You'll have to fight ActiveRecord a bit to properly constrain your database, of course; AR doesn't believe in fancy things like FKs and CHECKs, even though they are necessary for data integrity. You'd have to keep demo_user_id under control by hand, though; some sort of periodic scan to make sure the values link up with entries in Redis would be a good idea.
Now your User can look up stats using a standard association to the stats.user_id column and your DemoUser can use stats.demo_user_id.
For the time being, my 'solution' is not to use a has_many in Rails, but I can define some helper methods in the models if necessary, e.g.
class User < ActiveRecord::Base
  # ...
  def stats
    Stats.where(user_id: self.id.to_s)
  end
  # ...
end
Also, I would define some helper scopes to help enforce the to_s translation:
class Stats < ActiveRecord::Base
  scope :for_user_id, ->(id) { where(user_id: id.to_s) }
  # ...
end
This should allow calls like user.stats and Stats.for_user_id(user.id).
I think I misunderstood a detail of your issue before because it was buried in the comments.
(I strongly suggest editing your question to clarify points when comments show that there's something confusing/incomplete in the question).
You seem to want a foreign key from an integer column to a string column because the string column might be an integer, or might be some unrelated string. That's why you can't make it an integer column - it's not necessarily a valid number value, it might be a textual key from a different system.
The typical solution in this case would be to have a synthetic primary key and two UNIQUE constraints instead, one for keys from each system, plus a CHECK constraint preventing both from being set. E.g.
CREATE TABLE my_referenced_table (
    id serial,
    system1_key integer,
    system2_key varchar,
    CONSTRAINT exactly_one_key_must_be_set
        CHECK ((system1_key IS NULL) != (system2_key IS NULL)),
    UNIQUE (system1_key),
    UNIQUE (system2_key),
    PRIMARY KEY (id),
    ... other values ...
);
You can then have a foreign key referencing system1_key from your integer-keyed table.
It's not perfect, as it doesn't prevent the same value appearing in two different rows, one for system1_key and one for system2_key.
So an alternative might be:
CREATE TABLE my_referenced_table (
    the_key varchar PRIMARY KEY,
    the_key_ifinteger integer,
    CONSTRAINT integerkey_must_equal_key_if_set
        CHECK (the_key_ifinteger IS NULL OR (the_key_ifinteger::varchar = the_key)),
    UNIQUE (the_key_ifinteger),
    ... other values ...
);

CREATE OR REPLACE FUNCTION my_referenced_table_copy_int_key()
RETURNS trigger LANGUAGE plpgsql STRICT
AS $$
BEGIN
    IF NEW.the_key ~ '^[\d]+$' THEN
        NEW.the_key_ifinteger := CAST(NEW.the_key AS integer);
    END IF;
    RETURN NEW;
END;
$$;

CREATE TRIGGER copy_int_key
BEFORE INSERT OR UPDATE ON my_referenced_table
FOR EACH ROW EXECUTE PROCEDURE my_referenced_table_copy_int_key();
which copies the integer value if it's an integer, so you can reference it.
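The integer-keyed table can then point at the copied column with an ordinary foreign key — a sketch (my_referencing_table and ref_key are made-up names):

CREATE TABLE my_referencing_table (
    id serial PRIMARY KEY,
    ref_key integer REFERENCES my_referenced_table (the_key_ifinteger)
);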
All in all though I think the whole idea is a bit iffy.
I think I may have a solution for your problem, but maybe not a massively better one:
class User < ActiveRecord::Base
  has_many :stats, primary_key: "id_s"

  def id_s
    read_attribute(:id).to_s
  end
end
It still uses a second, virtual column, but it may be handier to use with Rails associations, and it is database agnostic.
My database has "spine numbers" and I want to sort by them.
@films = Film.all.sort{ |a, b| a.id <=> b.id }
That is in my controller, but the spines go 1, 2, 3 ... 100, 101 etc. instead of 001, 002, 003..., so the sorting is out of whack. There's probably an easy way to do this, something like:
@films = Film.all.sort{ |a, b| a.id.abs <=> b.id.abs }
But I don't know it. Thanks for the help.
PS also, why has the rails wiki been down so often recently?
You should use the Film.order("id DESC") (or "ASC") method, which applies an SQL ORDER BY clause to the query.
By default, records are sorted by the primary key column, at least in MySQL.
If this hasn't answered your question, please provide some more information on your database.
Edited
Yes, I do see. The only thing that comes to mind is that you're using some kind of string datatype for the spine numbers column. In this case, this kind of sorting makes sense, because values are compared alphabetically, character by character, like this:
1 |   |
0 | 5 | 4
2 | 5 |
1 | 4 | 3
which will return
054
1
143
25
while numeric values, such as integers or floats, are compared by their actual value and not byte by byte.
So you should create a migration to change the datatype of your spine number column to integer.
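A minimal sketch of such a migration, assuming the table is called films and the column spine_number (both names are guesses from the question):

class ChangeSpineNumberToInteger < ActiveRecord::Migration
  def up
    # Hypothetical table/column names: convert the string column to an integer one
    change_column :films, :spine_number, :integer
  end

  def down
    change_column :films, :spine_number, :string
  end
end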
I have a table with many fields plus several additional boolean fields (e.g. BField1, BField2, BField3, etc.).
I need to build a select query which selects all fields except the boolean ones, plus a new virtual field (e.g. FirstTrueBool) whose value equals the name of the first TRUE boolean field.
For example, say I have BField1 = False, BField2 = True, BField3 = True, BField4 = False; in that case the SQL query should set [FirstTrueBool] to "BField2". Is that possible?
Thank you in advance.
P.S. I use Microsoft Access (MDB) Database and Jet Engine.
If you want to keep the current architecture (a mix of 'x' non-null status fields and 'y' non-status fields), then (as far as I can see now) your only option is to use IIF:
Select MyNonStatusField1, /* other non-status fields here */
       IIF([BField1], "BField1",
       IIF([BField2], "BField2",
       ...
       IIF([BFieldLast], "BFieldLast", "#No Flag#")
       ))))) -- put as many parentheses as needed to close the nested IIFs
From MyTable
Of course you can add any Where clause you like.
EDIT:
Alternatively you can use the following trick:
Set the fields to null when the flag is false and put the order number (in other words, "1" for BField1, "2" for BField2, etc.) when the flag is true. Make sure the status fields are strings (i.e. Varchar(2) or, better, Char(2) in SQL terminology).
Then you can use the COALESCE function to return the first non-null value from the status fields, which will be the index number as a string. Then you can add any text you like in front of this string (for example "BField"). You will end up with something like:
Select "BField" || Coalesce(BField1, BField2, BField3, BField4) /*etc. (add as many fields you like) */
From MyTable
Much clearer IMHO.
HTH
You would be better off using a single 'int' column as a bitset (provided you have no more than 32 boolean columns) to represent the columns.
e.g. see SQL Server: Updating Integer Status Columns (it's SQL Server, but the same technique applies equally well to MS Access)
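For illustration, in SQL Server syntax (as in the linked article; Access syntax differs slightly), with bit 1 standing for BField1, bit 2 for BField2, and so on — the Status column name is an assumption:

-- Mark "BField2" as true on a row (set bit 2)
UPDATE MyTable SET Status = Status | 2 WHERE ID = 1;

-- Find rows where "BField2" is set
SELECT * FROM MyTable WHERE Status & 2 <> 0;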
I know of:
http://lua-users.org/wiki/SimpleLuaApiExample
It shows me how to build up a table one (key, value) pair at a time.
Suppose instead I want to build a gigantic table (say, a 1000-entry table where both keys and values are strings); is there a fast way to do this in Lua, rather than 4 function calls per entry:
push
key
value
rawset
What you have written is the fast way to solve this problem. Lua tables are brilliantly engineered, and fast enough that there is no need for some kind of bogus "hint" to say "I expect this table to grow to contain 1000 elements."
For string keys, you can use lua_setfield.
Unfortunately, for associative tables (string keys, non-consecutive-integer keys), no, there is not.
For array-type tables (where the regular 1...N integer indexing is being used), there are some performance-optimized functions, lua_rawgeti and lua_rawseti: http://www.lua.org/pil/27.1.html
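For example, filling an array-style table with lua_rawseti might look like this — a sketch, assuming an open lua_State named L:

lua_createtable(L, 1000, 0);       /* pre-size the array part for 1000 entries */
for (int i = 1; i <= 1000; i++) {
    lua_pushinteger(L, i * i);     /* value to store */
    lua_rawseti(L, -2, i);         /* t[i] = value, bypassing metamethods */
}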
You can use lua_createtable to create a table that already has the required number of slots pre-allocated. However, after that, there is no way to do it faster other than:
for (int i = 0; i < 1000; i++) {
    lua_push... // key
    lua_push... // value
    lua_rawset(L, tableindex);
}
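Filled in with lua_pushstring for both sides, a concrete sketch might look like this (keys and values are hypothetical arrays of 1000 C strings, and L is an open lua_State):

/* Sketch: bulk-fill a string -> string table through the C API */
lua_createtable(L, 0, 1000);          /* pre-size the hash part for 1000 entries */
int tableindex = lua_gettop(L);

for (int i = 0; i < 1000; i++) {
    lua_pushstring(L, keys[i]);       /* key */
    lua_pushstring(L, values[i]);     /* value */
    lua_rawset(L, tableindex);        /* table[key] = value, no metamethods */
}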