I have this query:
insert into orders (customers_id, customers_name) values ('51064', 'Šample Šample')
If I execute this query from PHP, my database record becomes
[51064, '?ample ?ample'] (if executed with MySQLi) or
[51064, 'Šample Šample'] (with mysql_query).
I also noticed that if I use the value from $_GET:
insert into orders (customers_id, customers_name) values ('51064', {$_GET['name']})
it becomes [51064, 'Åample Åample']
BUT if I insert the query manually using software like Navicat, it saves the record with the correct character (so I think that the charset is the right one).
I need to save the character Š (and many others) in the right way from PHP.
Set the charset to be used when sending data back and forth from the database server.
$mysqli->set_charset("utf8");
This is preferred over
// Will not affect $mysqli->real_escape_string();
$mysqli->query("SET NAMES utf8");
See http://www.php.net/manual/en/mysqli.set-charset.php and http://www.php.net/manual/en/mysqlinfo.concepts.charset.php
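As a minimal sketch (the connection details are placeholders; the table and values come from the question), the charset is set right after connecting, before anything else runs:
<?php
// Placeholder credentials - replace with your own.
$mysqli = new mysqli('localhost', 'user', 'password', 'shop');

// Send and receive data as UTF-8; this also keeps real_escape_string() consistent.
$mysqli->set_charset('utf8');

// A prepared statement takes care of quoting the bound values.
$id   = '51064';
$name = 'Šample Šample';
$stmt = $mysqli->prepare('INSERT INTO orders (customers_id, customers_name) VALUES (?, ?)');
$stmt->bind_param('ss', $id, $name);
$stmt->execute();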
Set the MySQL encoding before using the DB.
mysql_query("SET NAMES 'utf8'");
Or use the utf8_encode() and utf8_decode() functions.
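A rough sketch for the old mysql_* API (placeholder credentials again); note that mysql_set_charset() is the safer equivalent of SET NAMES, because it also updates the charset used by mysql_real_escape_string():
<?php
// Placeholder credentials - replace with your own.
$link = mysql_connect('localhost', 'user', 'password');
mysql_select_db('shop', $link);

// Make the connection use UTF-8 before any queries run.
mysql_set_charset('utf8', $link);
// Same idea, but does not affect mysql_real_escape_string():
// mysql_query("SET NAMES 'utf8'", $link);

$name = mysql_real_escape_string('Šample Šample', $link);
mysql_query("INSERT INTO orders (customers_id, customers_name) VALUES ('51064', '$name')", $link);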
Related
When I use Hive SQL with a Chinese comment to create a table on the web UI (HUE), it pops up 'ascii codec can't encode characters in position'. I then tried to change the Python default encoding to utf8 to fix it, but the exception changed into a semantic exception.
I double-checked on Hive with a Beeline connection; it can execute DDL and DQL with Chinese successfully, such as
'select *, 'chinese中文' as test from tb_name',
'create table tb_name (id string comment 'chinese中文') comment 'chinese中文' row format ....'
'desc formatted tb_name'.
I also changed the corresponding column to utf8 in desktop_document2, which is in Hue's metastore, to make sure Chinese can show on the UI.
So far I'm guessing the encoding goes wrong somewhere between typing on the UI and the execution stage. Need help, thank you so much.
BTW, Hue is 4.10 and Hive is 2.3.7. I have the same problem when querying a MySQL table, but there I added the JDBC option '?characterEncoding=UTF-8' to fix it; hope that can provide inspiration.
Windows 10, Access 2016
I am moving a very small database (14 tables and 40-50 stored procedures) from SQL Server to Access. I have tried to recreate the stored procedures from code using an OLEDB command object. This is a sample of a CommandText…
CREATE PROCEDURE DeleteOrderDetailByOrderID
([#ID] int)
AS
DELETE FROM OrderDetails
WHERE (OrderID = #ID);
I get an error message that the Data Type of #ID is incorrect. It is not. When I remove the brackets from #ID, all is forgiven and the code runs. However, Access strips the # from #ID in the parameter section (not in the WHERE clause), and I have had to go into Access and manually correct this. I do not like the idea of going through almost 5000 lines of code in my program to correct parameter names. I thought I could take the direct approach by pasting the SQL straight into Access, but that route gives an error saying "syntax error in CREATE TABLE" and highlights the word PROCEDURE. This leads me to believe that you cannot use CREATE PROCEDURE directly in Access. Is this true? Is there another approach that I am missing?
You are missing that SQL Server's T-SQL is not Access SQL.
Access has UDFs (user-defined functions) that can also be used in queries, but those are VBA code.
If you just need a single-user, file-based database to hold your data, you may get away with SQL Server Compact Edition, which supports a subset of T-SQL.
I had a project in Redmine with more than 600 issues. I moved all the issues to a different project. I had no idea that the move deletes all the data for the custom fields!
So all the custom field values are now lost. I did not back up the database before this action, as I really did not think I was going to do any harm by moving issues; moving is a native function in the UI.
What I did notice, though, is that production.log contains events for every creation and update. All my 600 issues are in order in the production log. How can I use these log statements to repeat the actions? If I can replay all the logged actions, I can migrate the custom field values they write back to the original Redmine instance and restore my data.
Entries look like this:
Processing IssuesController#update (for XX.XX.XX.X at 2013-02-07 11:19:54) [PUT]
Parameters: {"_method"=>"put", "authenticity_token"=>"nWNSSRYjHhN0BGb+Ya8M4pYWPPgsfdM=", "issue"=>{"assigned_to_id"=>"", "custom_field_values"=>{"10"=>"", "5"=>"Not translated", "1"=>"fi", "8"=>"http://screencast.com/t/ODknR8K", "9"=>"", "3"=>"", "4"=>""}, "done_ratio"=>"0", "due_date"=>"", "priority_id"=>"4", "estimated_hours"=>"", "start_date"=>"2013-02-07", "subject"=>"1\tInstallation in English", "tracker_id"=>"1", "lock_version"=>"0", "description"=>"Steps:\r\nOpen Nitro\r\n\r\nProblem:\r\nNot localized"}, "controller"=>"issues", "time_entry"=>{"hours"=>"", "activity_id"=>"", "comments"=>""}, "attachments"=>{"1"=>{"description"=>""}}, "id"=>"3876", "action"=>"update", "commit"=>"Submit", "notes"=>""}
I am really hoping that there is a way; any help will be greatly appreciated.
You could use a decent text editor and/or a spreadsheet application, do a massive find and replace, construct a series of UPDATE SQL commands, and run them directly on the database (TEST FIRST!!); there is a sketch of the idea after the steps below.
Extract from log
Remove unnecessary information
Copy into spreadsheet
Split text into columns
Add in columns with the necessary SQL commands ("UPDATE ... SET ..." etc.) and copy them into all rows of this column
Join columns to make one text command per row
Export joined data to a text file
Run against a test database as SQL
If all goes well, run against the production database as SQL
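Not part of the answer above, but here is a rough sketch of the same idea as a small PHP script instead of a spreadsheet. The custom_values table and its columns are an assumption about the Redmine schema, so verify them against your database and run the generated SQL on a test copy first:
<?php
// Rough sketch: turn "Parameters:" lines from production.log into UPDATE statements.
// The custom_values table/columns below are assumptions - check your Redmine schema.
foreach (file('production.log') as $line) {
    // Only lines carrying an issue id and custom field values are interesting.
    if (strpos($line, 'custom_field_values') === false) {
        continue;
    }
    if (!preg_match('/"id"=>"(\d+)"/', $line, $issue)) {
        continue;
    }
    if (!preg_match('/"custom_field_values"=>\{([^}]*)\}/', $line, $cfv)) {
        continue;
    }
    // Each entry looks like "5"=>"Not translated".
    preg_match_all('/"(\d+)"=>"([^"]*)"/', $cfv[1], $pairs, PREG_SET_ORDER);
    foreach ($pairs as $pair) {
        printf("UPDATE custom_values SET value = '%s' WHERE customized_type = 'Issue' AND customized_id = %d AND custom_field_id = %d;\n",
            str_replace("'", "''", $pair[2]), $issue[1], $pair[1]);
    }
}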
The log entry, following "Parameters:", looks like a regular Ruby hash definition. I'd parse that out and eval it back into a hash variable.
From there you will need to peel off elements and insert them into a database. I'd do that using Sequel, but use what works for you.
Talk to the RedMine support people and get the schema for their tables so you can figure out what data goes where and the database driver needed.
Is librdf_model_add writing the statements into the hash-storage?
I am having problems running a SPARQL query to retrieve them. The DB files are probably being populated, as their file size keeps increasing, but when I attempt a SPARQL query against them I don't seem to get any results. Do I need to load the statements from the storage into the model manually before issuing a query?
The statement that issues the query:
$query = librdf_new_query(
$world,
'sparql',
NULL,
<<<SPARQL
PREFIX sensei: <http://coolsilon.com/flickr_schema/>
SELECT ?a ?c
WHERE {?a ?b ?c}
SPARQL
,
NULL
);
$result = librdf_query_execute($query, $model);
var_dump(librdf_query_results_get_count($result)); // returns 0
I am using the PHP (5.3.5) language bindings, and my Redland version is 1.0.12, running under Ubuntu Natty.
P.S.: I checked again with the PostgreSQL storage, and the above code works :/
This is better asked on Semantic Overflow or the redland-dev list.
The most likely thing is the model has no data.
Use some of the librdf functions to print out the model or use a serializer.
Try test.php in https://github.com/dajobe/redland-bindings/tree/master/php for pointers.
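Not from the answer itself, but a small sketch of that kind of check; the function names assume the SWIG-generated PHP bindings expose the usual Redland C API (compare with the test.php linked above):
<?php
// Quick sanity checks on the $world/$model from the question.
// How many statements does the model think it holds? (-1 means unknown)
var_dump(librdf_model_size($model));

// Dump the whole model as N-Triples to see what is actually stored.
$serializer = librdf_new_serializer($world, 'ntriples', NULL, NULL);
echo librdf_serializer_serialize_model_to_string($serializer, NULL, $model);
librdf_free_serializer($serializer);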
I have a rails application and I am trying to load data into it via PostgreSQL's COPY command. The file is CSV. The table maps to a Rails model and I need to retain the primary-key ID column so I can use model.for_each to play with the data--the table will have 5M+ rows.
My question is, how can I structure my CSV so that I can load data into the table and still allow the ID column to be there? It's a bummer because the only reason I need it is for the for_each method.
Regardless, I tried sending NULLs, as in:
NULL,col1,col2,col3,col4, etc.
But it doesn't work.
Thanks.
Passing in NULL for the primary key will never work, no matter what options you have set for the null string. You would be telling the backend to set the primary key to NULL, which it will never allow, no matter what the insert command might be.
I really have no idea what you mean by retaining the primary key. That is something that is going to be retained no matter what you do. If you mean letting the DB pick the value for you, and the primary key is a serial (auto-increment), then explicitly name all the columns except the primary key:
COPY country (colA, colB, colC) FROM '/usr1/proj/bray/sql/country_data'; -- leave out pkey
It also might be quicker to read the documentation on which null-string options you would like to use, instead of guessing possible values:
http://www.postgresql.org/docs/9.0/static/sql-copy.html
The default when using WITH CSV is an unquoted empty string, such as:
,col1,col2,col3,,col5
Which would create a record that looks like:
NULL,'col1','col2','col3',NULL,'col5'