Problems with UNLOAD and the DELIMITER in Informix

I am using UNLOAD in Informix with DELIMITER "|", but I have a problem: some fields contain the character | as part of the data, so I get more fields than expected. I tried other separators, but the result is the same.
Is there a better way to export the data than UNLOAD, or is there a way to export each field enclosed in double quotes and delimited by "|"?
I am using the following:
UNLOAD TO '/ruta/al/archivo/nombre_tabla.txt' DELIMITER '|'
SELECT * FROM nombre_tabla;
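UNLOAD itself has no quoting option, but as a sketch of the "every field enclosed in double quotes" idea from the question, here is how a quoted, pipe-delimited file round-trips safely in Python's csv module (the rows below are made-up stand-ins for the table's data, not from the original post):

```python
import csv
import io

# Sample rows standing in for the result of SELECT * FROM nombre_tabla;
# the second row contains the delimiter "|" inside a field.
rows = [
    ["1", "plain value", "100"],
    ["2", "value|with|pipes", "200"],
]

# QUOTE_ALL wraps every field in double quotes, so embedded pipes
# no longer split fields on the way back in.
buf = io.StringIO()
writer = csv.writer(buf, delimiter="|", quotechar='"', quoting=csv.QUOTE_ALL)
writer.writerows(rows)
data = buf.getvalue()

reader = csv.reader(io.StringIO(data), delimiter="|", quotechar='"')
parsed = list(reader)
assert parsed == rows
```

The same quoting discipline applies whatever tool writes the file: as long as every field is quoted and embedded quotes are escaped, the delimiter can appear freely inside the data.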

Related

Adding special characters in sqlplus

In SQL*Plus, how do I add a string that contains a special character to my database? The special character I am trying to use is 'é'.
I've tried Aim(Alt+130), which should give Aimé, but it returns 'Aim?'.
I copied your character and did this to find the value 233:
select dump('é') from dual;
So you should be able to do this and get it back (of course you can INSERT it too):
select 'Aim' || chr(233) from dual;
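As a quick cross-check (my addition, not part of the original answer), 233 is indeed the Latin-1/Unicode code point for 'é', which is why chr(233) reproduces the character:

```python
# Code point 233 (0xE9) is 'é' in both Unicode and Latin-1, so Oracle's
# chr(233) yields the character in a Latin-1 style database charset.
assert ord("é") == 233
assert chr(233) == "é"
assert bytes([233]).decode("latin-1") == "é"
```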

Neo4j - how to import csv having special characters

I am trying to import a CSV file which has some special characters like ', ", and \. When I run this query -
LOAD CSV WITH HEADERS FROM "http://192.168.11.121/movie-reco-db/movie_node.csv" as row
CREATE (:Movie_new {movieId: row.movie_id, title: row.title, runtime: row.runtime, metaScore: row.meta_score, imdbRating: row.IMDB_rating, imdbVotes: row.IMDB_votes , released: row.released , description: row.description , watchCount: row.watch_count , country: row.country ,
category: row.category , award: row.award , plot: row.plot , timestamp: row.timestamp})
It throws an error -
At http://192.168.11.121/movie-reco-db/movie_node.csv:35 - there's a field starting with a quote and whereas it ends that quote there seems to be characters in that field after that ending quote. That isn't supported. This is what I read: ' Awesome skills performed by skillful players! Football Circus! If you like my work, become a fan on facebook",0,USA,Football,N/A,N/A,1466776986
260,T.J. Miller: No Real Reason,0,0,7.4,70,2011-11-15," '
I debugged the file and removed the \ from the line causing the problem, then re-ran the above query, and it worked well. Is there any better way to do this in the query, or do I have to find and remove such characters each time?
It doesn't appear that the quote character is configurable for LOAD CSV (like the FIELDTERMINATOR is):
http://neo4j.com/docs/developer-manual/current/cypher/#csv-file-format
If you're creating a new database, you could look into using the neo4j-import tool, which does appear to have an option to configure the quote character; you could then set it to a character you know won't be in your csv files (say, for example, a pipe symbol |):
http://neo4j.com/docs/operations-manual/current/deployment/#import-tool
Otherwise, yes, you're going to need to run a cleansing process on your input files prior to loading.
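If you do end up cleansing before loading, the fix the asker applied by hand can be scripted. This is a minimal sketch; the sample line is adapted from the error message, and the "strip all backslashes" rule assumes backslashes are never meaningful in your data:

```python
def cleanse(text: str) -> str:
    # Drop backslashes so stray \" sequences cannot break LOAD CSV's
    # quote handling; adjust the rule if backslashes carry meaning.
    return text.replace("\\", "")

# A line in the spirit of the one that broke the import:
raw = '260,T.J. Miller: No Real Reason,0,0,7.4,70,2011-11-15,"plot with a stray \\" quote"\n'
cleaned = cleanse(raw)
assert "\\" not in cleaned
```

In practice you would read movie_node.csv, run every line through cleanse, and write the result to a new file that LOAD CSV points at.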

Quote field name and table name in Advantage database SQL

I am working on a legacy system developed with Advantage Database 8.1, and I need to run SQL like:
"SELECT CODETEST.DESC FROM CODETEST"
This SQL won't run under .NET because "CODETEST.DESC" contains the SQL reserved keyword "DESC". I thought of quoting "CODETEST.DESC"; for example, in MySQL we can use `CODETEST`.`DESC` to avoid that.
I read the Advantage Database Help but can't find how to do so. It's a legacy system, and the database structure won't be changed. So is there any way to quote table and field names in Advantage SQL?
Found the answer here, under Use of Non-Standard Characters in Names; it's simply [CODETEST].[DESC]:
Double quotes and [] (brackets) are used to delimit identifiers that contain non-alphanumeric characters or that start with numbers. For example, if a database contains a table name or column name that starts with a number, contains spaces, or has non-alphanumeric characters, the application must enclose the name in double quotes or [] (brackets) (e.g., "3D", "Contact Date", "l/c", [Full Name]). Also, full path names or table names that include extensions must be enclosed in double quotes or [] (brackets) (e.g., "x:\pathname\table", "\server\volume\path\table", "table.abc", "..\otherdir\table").

PostgreSQL query statement having \r between the attributes

Suppose we have a textarea in which we put an example string. The textbox contains:
Earth is revolving around the Sun.
But at the time of saving, I pressed the Enter key before "the Sun". Now the text in the textbox is:
Earth is revolving around
the Sun
Now, where Enter was pressed, a \r is stored in the database. I am trying to fetch the data but am unable to, because my query looks like this:
SELECT * FROM "volume_factors" WHERE lower(volume_notes) like E'atest\\r\\n 100'
Actual data stored in database field
atest\r
100
Please suggest how to solve the issue. I have tried gsub to replace it, but with no change:
search_text_array[1] = search_text_array[1].gsub('\\r\\n','\r\n')
Thanks in advance!
Try this:
update volume_factors set volume_notes = regexp_replace(volume_notes, '\r\n', ' ');
That replaces CRLF with one space for data that is already in the database; you can run it with PostgreSQL's psql.
To prevent new data containing CRLF from entering the database, you should handle it in the application. If you use Ruby's gsub, do not use single quotes; use double quotes so that \n is recognized, like this:
thestring.gsub("\n", " ")
Here we can replace \r\n with % to fetch the data. The search query will look like this:
SELECT * FROM "volume_factors" WHERE lower(volume_notes) like E'atest% 100'
The gsub call:
search_text_array[1] = search_text_array[1].gsub('\\r\\n','%')
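A third option (a sketch of mine, not from the thread) is to normalize whitespace in the application before saving, so stored values and search terms always agree and LIKE patterns never have to guess between \r\n and \n:

```python
def normalize_newlines(text: str) -> str:
    # split() with no arguments breaks on any run of whitespace,
    # including \r and \n, so CRLF collapses to a single space.
    return " ".join(text.split())

saved = "atest\r\n 100"          # what the textarea submitted
assert normalize_newlines(saved) == "atest 100"
```

If both the stored column and the search input pass through the same normalization, a plain equality or LIKE comparison works without escape sequences.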

Detecting regional settings (List Separator) from web

After having the unpleasant surprise that Comma-Separated Value (CSV) files are not necessarily comma-separated, I'm trying to find out if there is any way to detect the regional list separator setting on the client machine from an HTTP request.
Scenario is as follows: A user can download some data in CSV format from web site (RoR, if it matters). That CSV file is generated on the fly, sent to the user, and most of the time double-clicked and opened in MS Excel on Windows machine at the destination. Now, if the user has ',' set as the list separator, the data is properly arranged in columns, but if any other separator (';' is widely used here) is set, it all just gets thrown into a single column. So, is there any way to detect what separator is used on the client machine, and generate the file accordingly?
I have a sinking feeling that it is not, but I'd like to be sure before I pass the 'can't be done, sorry' line to the customer :)
Here's a JavaScript solution that I just wrote based on the method shown here:
function getListSeparator() {
    var list = ['a', 'b'], str;
    if (list.toLocaleString) {
        str = list.toLocaleString();
        if (str.indexOf(';') > 0 && str.indexOf(',') == -1) {
            return ';';
        }
    }
    return ',';
}
The key is in the toLocaleString() method that uses the system list separator.
You could use JavaScript to get the list separator and set it in a cookie which you could then detect from your server.
I checked all the Windows Locales, and it seems that the default list separator is virtually always either ',' or ';'. For some locales the drop-down list in the Control Panel offers both options; for others it offers just ','. One locale, Divehi, has a strange character that I've not seen before as the list separator, and, for any locale, it is possible for the user to enter any string they want as the list separator.
Putting random strings as the separator in a CSV file sounds like trouble to me, so my function above will only return either a ';' or a ',', and it will only return a ';' if it can't find a ',' in the Array.toLocaleString string. I'm not entirely sure whether array.toLocaleString has a format that's guaranteed across browsers, hence the indexOf checks rather than picking out a character at a specific index.
Using Array.toLocaleString to get the list separator works on IE6, IE7, and IE8, but unfortunately it doesn't seem to work on Firefox, Safari, Opera, and Chrome (or at least the versions of those browsers on my computer): they all seem to separate array items with a comma, irrespective of the Windows "list separator" setting.
Also worth noting that by default Excel seems to use the system "decimal separator" when it's parsing numbers out of CSV files. Yuk. So, if you're localizing the list separator you might want to localize the decimal separator too.
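The cookie-based suggestion above could be wired up on the server roughly like this; make_csv and its parameters are hypothetical names for illustration, and the separators would come from the client-side detection:

```python
import csv
import io

def make_csv(rows, list_sep=",", decimal_sep="."):
    # Write the CSV with the client's list separator and rewrite
    # float decimal points to match the client's decimal separator.
    buf = io.StringIO()
    writer = csv.writer(buf, delimiter=list_sep)
    for row in rows:
        writer.writerow(
            [str(v).replace(".", decimal_sep) if isinstance(v, float) else v
             for v in row]
        )
    return buf.getvalue()

# German-style output: ';' as list separator, ',' as decimal separator.
out = make_csv([["price", 3.5]], list_sep=";", decimal_sep=",")
assert out.strip() == "price;3,5"
```

The csv writer also takes care of quoting any field that happens to contain the chosen separator, so localized output stays parseable.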
I think everyone should use Calc from OpenOffice - when you open a file, it asks about encoding, column separators, and other settings. I don't know the answer to your question, but maybe you can try to send the data as HTML tables or as XML; Excel should read both of them correctly. From my experience it isn't easy to export data to Excel. A few weeks ago I had a problem with it, and after a few hours of work I asked the person who couldn't open my CSV file in Excel about their version. It was Excel 98...
Take a look at the HTML and XML examples.
A simpler version of the getListSeparator function, enabling any character to be a separator:
function getListSeparator_bis() {
    var list = ['a', 'b'];
    return list.toLocaleString().charAt(1);
}
Just set any character (e.g. '#') as the list separator in your OS and try the code above. The appropriate character (i.e. '#' if set as suggested) is returned.
Could you just have the users with non-comma separators set a profile option, and then generate CSVs based on user settings, with the default being commas?
Toms, as far as I'm aware there is no way of achieving what you're after. The most you can do is try and detect the user locale and map it against a database of locales/list separators, altering the list separator in the .CSV file as a result.
