Export a table to a CSV file in DBeaver with the psql \copy command

I am using the following command to export the data:
\copy (SELECT * FROM table_name) TO 'my_path' WITH CSV DELIMITER ',' HEADER;
but I get the error syntax error at or near "\". If I use
COPY (SELECT * FROM table_name) TO 'my_path' WITH CSV DELIMITER ',' HEADER;
I get the error COPY TO instructs the PostgreSQL server process to write a file. You may want a client-side facility such as psql's \copy.
I really do not know which command I should run. Thanks in advance for your help!
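\copy is a meta-command of the psql client rather than SQL, which is why DBeaver's SQL editor rejects the backslash, while plain COPY ... TO asks the server process to write the file on the server's filesystem, exactly as the second error message explains. One way out is to run the export from a terminal with psql itself; a minimal sketch, with placeholder host, user, database, and output path:

```shell
psql -h localhost -U myuser -d mydb \
     -c "\copy (SELECT * FROM table_name) TO '/tmp/table_name.csv' WITH CSV DELIMITER ',' HEADER"
```

Because \copy is executed by the client, the CSV lands on the machine where psql runs. Inside DBeaver, the GUI route is its data export wizard on the table or result set.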

Related

Starting Gerrit failed after I migrated from H2 to PostgreSQL because schema version downgrade is not supported

I created a new Gerrit (version 2.13.11) using a PostgreSQL database, then successfully migrated the H2 data of my old Gerrit (also version 2.13.11) to it. I also executed the reindex command, but when I try to start Gerrit, it fails.
Here is the detail from error_log:
[2018-05-17 15:19:10,550] [main] WARN com.google.gerrit.server.config.GitwebCgiConfig : gitweb not installed (no /usr/lib/cgi-bin/gitweb.cgi found)
[2018-05-17 15:19:10,740] [main] INFO org.eclipse.jetty.util.log : Logging initialized #5032ms
[2018-05-17 15:19:10,783] [main] INFO com.google.gerrit.server.git.LocalDiskRepositoryManager : Defaulting core.streamFileThreshold to 438m
[2018-05-17 15:19:10,797] [main] ERROR com.google.gerrit.pgm.Daemon : Unable to start daemon
com.google.inject.ProvisionException: Unable to provision, see the following errors:
1) Unsupported schema version 161; expected schema version 129. Downgrade is not supported.
1 error
at com.google.gerrit.server.schema.SchemaVersionCheck.start(SchemaVersionCheck.java:68)
at com.google.gerrit.lifecycle.LifecycleManager.start(LifecycleManager.java:89)
at com.google.gerrit.pgm.Daemon.start(Daemon.java:313)
at com.google.gerrit.pgm.Daemon.run(Daemon.java:214)
at com.google.gerrit.pgm.util.AbstractProgram.main(AbstractProgram.java:64)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.google.gerrit.launcher.GerritLauncher.invokeProgram(GerritLauncher.java:163)
at com.google.gerrit.launcher.GerritLauncher.mainImpl(GerritLauncher.java:104)
at com.google.gerrit.launcher.GerritLauncher.main(GerritLauncher.java:59)
at Main.main(Main.java:25)
Both Gerrit servers are running on JRE 7. I have no idea what I should do. Could you please give me some advice? Thank you very much!
Here are the commands I used to export data from the H2 database:
CALL CSVWRITE('/home/sarah/migrate213/ACCOUNTS', 'SELECT * FROM ACCOUNTS');
CALL CSVWRITE('/home/sarah/migrate213/ACCOUNT_EXTERNAL_IDS', 'SELECT * FROM ACCOUNT_EXTERNAL_IDS');
CALL CSVWRITE('/home/sarah/migrate213/ACCOUNT_GROUPS', 'SELECT * FROM ACCOUNT_GROUPS');
CALL CSVWRITE('/home/sarah/migrate213/ACCOUNT_GROUP_BY_ID', 'SELECT * FROM ACCOUNT_GROUP_BY_ID');
CALL CSVWRITE('/home/sarah/migrate213/ACCOUNT_GROUP_BY_ID_AUD', 'SELECT * FROM ACCOUNT_GROUP_BY_ID_AUD');
CALL CSVWRITE('/home/sarah/migrate213/ACCOUNT_GROUP_MEMBERS', 'SELECT * FROM ACCOUNT_GROUP_MEMBERS');
CALL CSVWRITE('/home/sarah/migrate213/ACCOUNT_GROUP_MEMBERS_AUDIT', 'SELECT * FROM ACCOUNT_GROUP_MEMBERS_AUDIT');
CALL CSVWRITE('/home/sarah/migrate213/ACCOUNT_GROUP_NAMES', 'SELECT * FROM ACCOUNT_GROUP_NAMES');
CALL CSVWRITE('/home/sarah/migrate213/ACCOUNT_PROJECT_WATCHES', 'SELECT * FROM ACCOUNT_PROJECT_WATCHES');
CALL CSVWRITE('/home/sarah/migrate213/CHANGES', 'SELECT * FROM CHANGES');
CALL CSVWRITE('/home/sarah/migrate213/CHANGE_MESSAGES', 'SELECT * FROM CHANGE_MESSAGES');
CALL CSVWRITE('/home/sarah/migrate213/PATCH_COMMENTS', 'SELECT * FROM PATCH_COMMENTS');
CALL CSVWRITE('/home/sarah/migrate213/PATCH_SETS', 'SELECT * FROM PATCH_SETS');
CALL CSVWRITE('/home/sarah/migrate213/PATCH_SET_APPROVALS', 'SELECT * FROM PATCH_SET_APPROVALS');
CALL CSVWRITE('/home/sarah/migrate213/SCHEMA_VERSION', 'SELECT * FROM SCHEMA_VERSION');
CALL CSVWRITE('/home/sarah/migrate213/SYSTEM_CONFIG', 'SELECT * FROM SYSTEM_CONFIG');
SELECT currval('change_message_id');
SELECT currval('change_id');
SELECT currval('account_id');
SELECT currval('account_group_id');
These are the commands I used to import the exported data into the PostgreSQL database of the new Gerrit:
DELETE FROM ACCOUNTS;
COPY ACCOUNTS(registered_on,full_name,preferred_email,inactive,account_id) FROM '/home/sarah/migrate213/ACCOUNTS' DELIMITER ',' CSV HEADER;
DELETE FROM ACCOUNT_EXTERNAL_IDS;
COPY ACCOUNT_EXTERNAL_IDS(account_id,email_address,password,external_id) FROM '/home/sarah/migrate213/ACCOUNT_EXTERNAL_IDS' DELIMITER ',' CSV HEADER;
DELETE FROM ACCOUNT_GROUPS;
COPY ACCOUNT_GROUPS(name,description,visible_to_all,group_uuid,owner_group_uuid,group_id) FROM '/home/sarah/migrate213/ACCOUNT_GROUPS' DELIMITER ',' CSV HEADER;
DELETE FROM ACCOUNT_GROUP_BY_ID;
COPY ACCOUNT_GROUP_BY_ID(group_id,include_uuid) FROM '/home/sarah/migrate213/ACCOUNT_GROUP_BY_ID' DELIMITER ',' CSV HEADER;
DELETE FROM ACCOUNT_GROUP_BY_ID_AUD;
COPY ACCOUNT_GROUP_BY_ID_AUD(added_by,removed_by,removed_on,group_id,include_uuid,added_on) FROM '/home/sarah/migrate213/ACCOUNT_GROUP_BY_ID_AUD' DELIMITER ',' CSV HEADER;
DELETE FROM ACCOUNT_GROUP_MEMBERS;
COPY ACCOUNT_GROUP_MEMBERS(account_id,group_id) FROM '/home/sarah/migrate213/ACCOUNT_GROUP_MEMBERS' DELIMITER ',' CSV HEADER;
DELETE FROM ACCOUNT_GROUP_MEMBERS_AUDIT;
COPY ACCOUNT_GROUP_MEMBERS_AUDIT(added_by,removed_by,removed_on,account_id,group_id,added_on) FROM '/home/sarah/migrate213/ACCOUNT_GROUP_MEMBERS_AUDIT' DELIMITER ',' CSV HEADER;
DELETE FROM ACCOUNT_GROUP_NAMES;
COPY ACCOUNT_GROUP_NAMES(group_id,name) FROM '/home/sarah/migrate213/ACCOUNT_GROUP_NAMES' DELIMITER ',' CSV HEADER;
DELETE FROM ACCOUNT_PROJECT_WATCHES;
COPY ACCOUNT_PROJECT_WATCHES(notify_new_changes,notify_all_comments,notify_submitted_changes,notify_new_patch_sets,notify_abandoned_changes,account_id,project_name,filter) FROM '/home/sarah/migrate213/ACCOUNT_PROJECT_WATCHES' DELIMITER ',' CSV HEADER;
DELETE FROM CHANGES;
COPY CHANGES(change_key,created_on,last_updated_on,owner_account_id,dest_project_name,dest_branch_name,status,current_patch_set_id,subject,topic,row_version,change_id,original_subject,submission_id,note_db_state) FROM '/home/sarah/migrate213/CHANGES' DELIMITER ',' CSV HEADER;
DELETE FROM CHANGE_MESSAGES;
COPY CHANGE_MESSAGES(author_id,written_on,message,patchset_change_id,patchset_patch_set_id,change_id,uuid,tag) FROM '/home/sarah/migrate213/CHANGE_MESSAGES' DELIMITER ',' CSV HEADER;
DELETE FROM PATCH_COMMENTS;
COPY PATCH_COMMENTS(line_nbr,author_id,written_on,status,side,message,parent_uuid,range_start_line,range_start_character,range_end_line,range_end_character,change_id,patch_set_id,file_name,uuid,tag) FROM '/home/sarah/migrate213/PATCH_COMMENTS' DELIMITER ',' CSV HEADER;
DELETE FROM PATCH_SETS;
COPY PATCH_SETS(revision,uploader_account_id,created_on,draft,change_id,patch_set_id,groups,push_certificate) FROM '/home/sarah/migrate213/PATCH_SETS' DELIMITER ',' CSV HEADER;
DELETE FROM PATCH_SET_APPROVALS;
COPY PATCH_SET_APPROVALS(value,granted,change_id,patch_set_id,account_id,category_id,tag) FROM '/home/sarah/migrate213/PATCH_SET_APPROVALS' DELIMITER ',' CSV HEADER;
DELETE FROM SCHEMA_VERSION;
COPY SCHEMA_VERSION(version_nbr,singleton) FROM '/home/sarah/migrate215/SCHEMA_VERSION' DELIMITER ',' CSV HEADER;
DELETE FROM SYSTEM_CONFIG;
COPY SYSTEM_CONFIG(register_email_private_key,site_path,admin_group_id,anonymous_group_id,registered_group_id,wild_project_name,batch_users_group_id,owner_group_id,admin_group_uuid,batch_users_group_uuid,singleton) FROM '/home/sarah/migrate213/SYSTEM_CONFIG' DELIMITER ',' CSV HEADER;
SELECT setval('change_message_id', 1);
SELECT setval('change_id', 2);
SELECT setval('account_id', 1000001);
SELECT setval('account_group_id', 2);
I also copied the git directory from the old Gerrit to the new Gerrit, and the reindex completed successfully.
But it failed when I executed this command: bin/gerrit.sh start

WARNING: Invalid input 'S': expected 'n/N' USING PERIODIC COMMIT

I'm trying to import a CSV via neo4j-shell on Neo4j 3.0.4.
This script worked on an older version of Neo4j.
Here's my script:
USING PERIODIC COMMIT 5000
LOAD CSV WITH HEADERS FROM
"file:///my_file.csv"
AS line FIELDTERMINATOR ';'
WITH coalesce(line.VAR1,"") as var1
MERGE (i:myObject {var1: var1})
But I get this error :
WARNING: Invalid input 'S': expected 'n/N' (line 9, column 2 (offset: 247))
"USING PERIODIC COMMIT 5000"
Any idea? I installed the zip distribution, with one PowerShell window running invoke-neo4j console and another running invoke-neo4jshell -file "my_script.cypher".
Thanks.
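For what it's worth, USING PERIODIC COMMIT is only accepted as the very first clause of a statement, and the reported offset (247, beyond the end of the script as shown) together with the "expected 'n/N'" complaint (mid-query, the only keywords starting with U are UNION and UNWIND) both suggest the parser saw earlier content in the file, for example a preceding statement missing its terminating semicolon. A sketch of the same query as a cleanly terminated standalone statement, under that assumption:

```cypher
// USING PERIODIC COMMIT must open the statement; anything before it in the
// .cypher file needs its own terminating semicolon.
USING PERIODIC COMMIT 5000
LOAD CSV WITH HEADERS FROM "file:///my_file.csv"
AS line FIELDTERMINATOR ';'
WITH coalesce(line.VAR1, "") AS var1
MERGE (i:myObject {var1: var1});
```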

Search for a pattern in a Unix directory

I want to find out whether any script in a Unix directory uses a particular table via "select *", i.e. I want to find "select * from tablename", where any amount of whitespace or any number of newlines may appear between the tokens: "select <whitespace> * <whitespace> from <whitespace> tablename".
Let's take this as the test file:
$ cat file
select
*
from
tablename
Using grep:
$ grep -z 'select\s*[*]\s*from\s*tablename' -- file
select
*
from
tablename
-z tells grep to treat the input as NUL-separated. Since no sensible text file contains NUL characters, this has the effect of reading in the whole file at once, which allows us to search across multiple lines. (If the file is too big for memory, we would want to think about another approach.) To protect against file names that begin with -, the -- tells grep to stop option processing.
To obtain just the names of the matching files in the current directory:
grep -lz 'select\s*[*]\s*from\s*tablename' -- *
The * makes the shell pass every file in the directory to grep. -l tells grep to print just the names of matching files, not the matching text.
More on the need for --
Let's consider a directory with one file:
$ ls
-l-
Now, let's run a grep command:
$ grep 'regex' *
grep: invalid option -- '-'
Usage: grep [OPTION]... PATTERN [FILE]...
Try 'grep --help' for more information.
The problem here is that grep interprets the file name -l- as two options: l and -. Since the second is not a legal option, it reports an error. To protect against this, we need to use --. The following runs without error:
grep 'regex' -- *
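The same approach extends to a whole directory tree. A sketch, assuming GNU grep (-r and -z are GNU extensions) and an illustrative directory layout:

```shell
# Build a small test tree, then search it recursively:
# -r recurses into directories, -l prints only matching file names,
# -z reads each file as one NUL-separated record so the regex can span newlines.
mkdir -p scripts/sub
printf 'select\n  *\nfrom\n  tablename\n' > scripts/sub/report.sql
printf 'select id from other_table\n' > scripts/no_match.sql
grep -rlz 'select\s*[*]\s*from\s*tablename' -- scripts
```

This prints scripts/sub/report.sql, the only file whose contents match the multi-line pattern.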

Export to CSV from Neo4j using import-cypher

I found this Neo4j data export tool (https://github.com/jexp/neo4j-shell-tools#cypher-import) and it worked perfectly on my macOS computer. I followed the same steps to export data from an Ubuntu server, and the following error message was generated without further explanation.
Has anyone used this tool on Ubuntu, and any idea what the error message may indicate? Also, is there another way to export large (~100M rows) Neo4j data into a CSV file?
neo4j-sh (?)$ import-cypher -d"," -o test.csv match (p:Product)-[s:SIMILAR_TO]-(q:Product) return p.Id,q.Id limit 10
Query: match (p:Product)-[s:SIMILAR_TO]-(q:Product) return p.Id,q.Id limit 10 infile (none) delim ',' quoted false outfile test.csv batch-size 1000
Error occurred in server thread; nested exception is:
java.lang.NoSuchMethodError: org.neo4j.graphdb.GraphDatabaseService.execute(Ljava/lang/String;)Lorg/neo4j/graphdb/Result;
I just added a new way of exporting data as cypher statements.
https://github.com/jexp/neo4j-shell-tools#cypher-export
(Note this is for Neo4j 2.2.5)
But for 100M rows I think import-cypher -o is still a good approach.
Otherwise check out: http://neo4j.com/blog/export-csv-from-neo4j-curl-cypher-jq/

Informix: How to get the table contents and column names using dbaccess?

Suppose I have:
an Informix database named "my_database"
a table named "my_table" with the columns "col_1", "col_2" and "col_3"
I can extract the contents of the table by creating a my_table.sql script like:
unload to "my_table.txt"
select * from my_table;
and invoking dbaccess from the command line:
dbaccess my_database my_table.sql
This will produce the my_table.txt file with contents like:
value_a1|value_a2|value_a3
value_b1|value_b2|value_b3
Now, what do I have to do if I want to obtain the column names in the my_table.txt? Like:
col_1|col_2|col_3
value_a1|value_a2|value_a3
value_b1|value_b2|value_b3
Why don't you use dbschema?
To get the schema of one table (without the -t parameter it dumps the whole database):
dbschema -d [DBName] -t [DBTable] > file.sql
To get the schema of one stored procedure:
dbschema -d [DBName] -f [SPName] > file.sql
None of the standard Informix tools put the column names at the top of the output as you want.
The program SQLCMD (not the Microsoft newcomer - the original one, available from the IIUG Software Archive) has the ability to do that; use the -H option for the column headings (and -T to get the column types).
sqlcmd -U -d my_database -t my_table -HT -o my_table.txt
sqlunload -d my_database -t my_table -HT -o my_table.txt
SQLCMD can also produce CSV output if that's what you need (though there is a bug: it doesn't format the column-name or column-type lines correctly).
I found an easier solution. Place the headers in one file, say header.txt (it will contain the single line "col_1|col_2|col_3"), then combine the header file and your output file by running:
cat header.txt my_table.txt > my_table_wth_head.txt
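If you would rather not hard-code the header, the column names can come from Informix's syscolumns catalog (unloaded by dbaccess, one name per line) and be joined into a single pipe-delimited row with standard Unix tools. A sketch; the dbaccess step is replaced here by a stand-in file, and all file names are illustrative:

```shell
# Stand-in for a dbaccess unload of (assuming the output has been reduced
# to one bare column name per line):
#   select colname from syscolumns c, systables t
#   where c.tabid = t.tabid and t.tabname = 'my_table' order by colno;
printf 'col_1\ncol_2\ncol_3\n' > header_raw.txt
printf 'value_a1|value_a2|value_a3\n' > my_table.txt

# Join the names into one pipe-delimited header row, then prepend it.
paste -sd'|' header_raw.txt > header.txt
cat header.txt my_table.txt > my_table_with_header.txt
```

The resulting my_table_with_header.txt starts with col_1|col_2|col_3 followed by the unloaded data rows.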
