Is there a way to configure the neo4j-shell to echo commands

I have some ETL code that sometimes takes a while to run. I'd just like to know what's running. When I run
neo4j-shell -file mycode.cql
Is there a way to see either
An "echo" of the cypher being run as it is loaded, or
Just some random text, without doing something, er, hacky, e.g.
MATCH () RETURN "Frobnibbles loaded!" LIMIT 1;

No echo unfortunately, but you can just use return (you don't need the MATCH or LIMIT bit):
return "Hello, world!";
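If you control the file, you can sprinkle those RETURN statements between your ETL steps to get rough progress markers. A minimal sketch (the file name and the statements are just placeholders):

# Hypothetical mycode.cql with progress markers between steps
cat > mycode.cql <<'EOF'
RETURN "Loading people...";
CREATE (:Person {name: "Alice"});
RETURN "People loaded.";
EOF
neo4j-shell -file mycode.cql

Each RETURN prints its string as the shell reaches that point in the file, so you can tell roughly which step is running.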

Related

Grep Individual Commands not working when combined in Multi Pattern grep command

I need to perform multiple grep matches as part of the same grep command. When I run them individually, they work fine, but not together. I hope someone can either show me a solution or help me find a workaround. Here is a sample stream:
(string start..) RollUp:"V" Enzyme:"ENZA ENZB ENZD ENZE" (..string end)
In the first command I need to isolate all RollUp substrings. The value is always A or V:
grep -o "RollUp:\"[AV]\""
In the second command I need to isolate all combinations of Enzyme values (1-20 total, separated by spaces, value names unknown). This command works:
grep -oE 'Enzyme:[[:space:]]*"[^"]+"'
However, I need to match both patterns as part of same stream. When I try:
grep -oE "RollUp:\"[AV]\""\|Enzyme:[[:space:]]*"[^"]+""
, nothing is returned. I would be grateful for any ideas for getting this double grep pattern match to work. Thank you!
The regex something[^"]+ matches the literal string something followed by everything up to, but not including, the next "; the + means at least one character must be present. Both patterns can be combined with | inside a single quoted expression:
grep -oE 'RollUp:"[^"]+|Enzyme:[[:space:]]*"[^"]+"' file
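The original attempt failed because the mix of escaped and unescaped quotes mangled the pattern before grep ever saw it. Keeping the whole alternation inside one set of single quotes works; a quick check against the sample line (piping it in rather than reading a file):

printf '%s\n' '(string start..) RollUp:"V" Enzyme:"ENZA ENZB ENZD ENZE" (..string end)' |
grep -oE 'RollUp:"[AV]"|Enzyme:[[:space:]]*"[^"]+"'
# RollUp:"V"
# Enzyme:"ENZA ENZB ENZD ENZE"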

"spool off; /" significance of '/' post spool off

Please help me with a concern I am facing.
I have some existing shell scripts that run daily at regular intervals to spool some data into a text file and send it to another system.
Now I have made some changes to those scripts, and the spooling that used to take 6 hours is now taking more than 8 hours.
I have read that "/" in a script usually executes the previous SQL statement.
So in the code below, is the SQL query being called twice?
I am new to this, so sorry if I am being naive; any help is appreciated.
Thanks in advance.
#!/bin/ksh
ORACLE_HOME=/pprodi1/oracle/9.2.0; export ORACLE_HOME;
Script_Path=<path>
dt=`date '+%y%m%d%H%M'`
find $Script_Path/testing_spool* -mtime +3 | xargs rm -f
cd $Script_Path
sqlplus -s uname/pwd@db_name <<EOF1 >/dev/null
set echo off
set head off
set pages 0
set feedback off
set pause off
set colsep " "
set verify off
set termout off
set linesize 3000
set trimspool on
spool $Script_Path/testing_spool.dat
SELECT column_name
FROM table_name
WHERE created_date > SYSDATE - 1
AND col1 = '126'
AND col2 = 'N'
AND col3 = 6;
spool off;
/
EOF1
cat testing_spool.dat > testing_spool_$dt.txt
Yes, your query will be executed twice, once while the spool is active and then again after it has been turned off.
As you mentioned, the / executes whatever is currently in the SQL buffer, which will still contain your query. The spool off is a client command and does not affect the SQL buffer.
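You can see the same double execution with a trivial query (a sketch; the connect string is a placeholder):

sqlplus -s uname/pwd@db_name <<'EOF'
SELECT COUNT(*) FROM user_tables;
/
EOF
# The count is printed twice: once for the statement itself,
# and once more when / re-runs the SQL buffer.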
If you run your script without the output redirect >/dev/null on the sqlplus line, or redirect to a file instead if you expect a lot of output, then you will see the query results repeated.
Incidentally, set termout off isn't doing anything in your script because you're feeding the statements in as a heredoc. If you had the statements in a script file and ran that with start or @ then it would suppress the output, but as the documentation says:
TERMOUT OFF does not affect output from commands you enter interactively or redirect to SQL*Plus from the operating system.
You could potentially create a .sql file, run that, and then delete it - all within your shell script. You may not see much benefit, but it would mean you didn't need to hide all output with that redirect, which would make it harder to diagnose failures.
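A sketch of that variant, keeping the shell script's variables (the details here are assumptions, not tested against your system):

# Write the SQL*Plus commands to a temporary script...
tmp_sql=$Script_Path/testing_spool_$$.sql
cat > "$tmp_sql" <<EOF2
set echo off head off pages 0 feedback off termout off
spool $Script_Path/testing_spool.dat
SELECT column_name FROM table_name
WHERE created_date > SYSDATE - 1;
spool off
exit
EOF2
# ...run it with @ so termout off can actually suppress screen output...
sqlplus -s uname/pwd@db_name @"$tmp_sql"
# ...and clean up. Note there is no trailing /, so the query runs once.
rm -f "$tmp_sql"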

How can I find files that match a two-line pattern using grep?

I created a test file with the following:
<cert>
</cert>
I'm now trying to find this with grep and the following command, but it takes forever to run.
How can I search quickly for files that contain adjacent lines like these?
tr -d '\n' | grep '<cert></cert>' test.test
So, from the comments, you're trying to get the filenames that contain an empty <cert>..</cert> element. You're using several tools wrong. As @iiSeymour pointed out, tr only reads from standard input, so if you want to use it to select from lots of filenames, you'll need a loop. grep prints matching lines, not filenames, though you can use grep -l to print the filenames instead.
But you're only joining lines because grep works one line at a time; so let's use a better tool. Here's how to search with awk:
awk '/<cert>/ { started=1; }
/<\/cert>/ { if (started) { print FILENAME; nextfile;} }
!/<cert>/ { started = 0; }' file1 file2 *.txt
It checks each line and keeps track of whether the previous line matched <cert>. (!/pattern/ sets the flag back to zero on lines not matching /pattern/.) Call it with all your files (or with a wildcard like *.txt).
And a friendly suggestion: next time, try each command separately so you can see what each one actually does, and have a quick look at the manual for the tools you want to use. Unix tools are usually too complex for simple trial and error.
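If your grep is GNU grep built with PCRE support, you can also match across the line boundary directly (a sketch; -z makes grep treat each whole file as one record, -P allows \n in the pattern, -l prints filenames):

grep -Pzl '<cert>\s*\n\s*</cert>' file1 file2 *.txt

The awk version is more portable, though; -P and -z are GNU extensions.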

What tools deal with spaces in columnar data well?

Let's start with an example that I ran into recently:
C:\>net user
User accounts for \\SOMESYSTEM
-------------------------------------------------------------------------------
ASPNET                   user1                    AnotherUser123
Guest                    IUSR_SOMESYSTEM          IWAM_SOMESYSTEM
SUPPORT_12345678         test userrrrrrrrrrrr     test_userrrrrrrrrrrr
The command completed successfully.
In the third row, second column there is a login with a space. This causes many of the tools that separate fields based on white space to treat this field as two fields.
How would you deal with data formatted this way using today's tools?
Here is an example in pure** Windows batch language on the command prompt that I would like to have replicated in other modern cross-platform text processing tool sets:
C:\>cmd /v:on
Microsoft Windows [Version 5.2.3790]
(C) Copyright 1985-2003 Microsoft Corp.
C:\>echo off
for /f "skip=4 tokens=*" %g in ('net user ^| findstr /v /c:"The command completed successfully."') do (
More? set record=%g
More? echo !record:~0,20!
More? echo !record:~25,20!
More? echo !record:~50,20!
More? )
ASPNET
user1
AnotherUser123
Guest
IUSR_SOMESYSTEM
IWAM_SOMESYSTEM
SUPPORT_12345678
test userrrrrrrrrrrr
test_userrrrrrrrrrrr
echo on
C:\>
** Using delayed variable expansion (cmd /v:on or setlocal enabledelayedexpansion in a batch file), the for /f command output parser, and variable substring syntax, none of which are well documented except at the wonderful website http://ss64.com/nt/syntax.html
Looking into AWK, I didn't see a way to deal with the 'test userrrrrrrrrrrr' login field without using substr() in much the same way as the variable substring syntax above. Is there another language that makes text wrangling easy and is not write-only like sed?
PowerShell:
Native user list example, no text matching needed
Get-WmiObject Win32_UserAccount | Format-Table -Property Caption -HideTableHeaders
Or, if you want to use "NET USER":
$out = net user # Send stdout to $out
$out = $out[4..($out.Length-3)] # Skip header/tail
[regex]::split($out, "\s{2}") | where { $_.Length -ne 0 }
# Split on double-space and skip empty lines
Just do a direct query for user accounts using VBScript (or PowerShell if your system supports it):
strComputer = "."
Set objWMIService = GetObject("winmgmts:\\" & strComputer & "\root\cimv2")
Set colItems = objWMIService.ExecQuery("Select * from Win32_UserAccount",,48)
For Each objItem in colItems
Wscript.Echo objItem.Name
Next
This will show you a list of users, one per line. If your objective is just to show user names, there is no need to use other tools to process the data.
Awk isn't so great for that problem because awk is focused on lines as records with a recognizable field separator, while the example file uses fixed-width fields. You could, e.g., try to use a regular expression for the field separator, but that can go wrong. The right way would be to use that fixed width to clean the file up into something easier to work with; awk can do this, but it is inelegant.
Essentially, the example is difficult because it doesn't follow any clear rules. The best approach is a quite general one: write data to files in a well-defined format with a library function, and read it back with a complementary library function. The specific language doesn't matter much with this strategy. Not that that helps when you already have a file like the example.
TEST
printf "
User accounts for \\SOMESYSTEM
-------------------------------------------------------------------------------
ASPNET                   user1                    AnotherUser123
Guest                    IUSR_SOMESYSTEM          IWAM_SOMESYSTEM
SUPPORT_12345678         test userrrrrrrrrrrr     test_userrrrrrrrrrrr
The command completed successfully.
\n" | awk 'BEGIN{
colWidth=25
}
/-----/ {next}
/^[[:space:]]*$/{next}
/^User accounts/{next}
/^The command completed/{next}
{
col1=substr($0,1,colWidth)
col2=substr($0,1+colWidth,colWidth)
col3=substr($0,1+(colWidth*2),colWidth)
printf("%s\n%s\n%s\n", col1, col2, col3)
}'
There's probably a better way than the 1+(colWidth*2), but I'm out of time right now.
If you execute the code as is, make sure no leading spaces creep in at the front of each line inside the printf string, since substr counts columns from the start of each line.
I hope this helps.
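Building on the substr approach above, a one-time cleanup pass can also convert the 25-character columns into tab-separated fields, after which the usual tools cope with embedded spaces (a sketch; users.txt is a hypothetical file holding just the three data rows):

# Split each line into three 25-char fields, strip trailing blanks, emit TSV
awk '{ for (i = 0; i < 3; i++) {
         f = substr($0, 1 + i*25, 25)
         gsub(/ +$/, "", f)
         printf "%s%s", f, (i < 2 ? "\t" : "\n")
     } }' users.txt > users.tsv
cut -f2 users.tsv   # second column; 'test userrrrrrrrrrrr' stays intact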
For this part:
set record=%g
More? echo !record:~0,20!
More? echo !record:~25,20!
More? echo !record:~50,20!
I would use:
for /f "tokens=1-26 delims= " %a in ("%g") do (
if not "%a" == "" echo %a
if not "%b" == "" echo %b
if not "%c" == "" echo %c
rem ... and so on...
if not "%y" == "" echo %y
if not "%z" == "" echo %z
)
That is if I had to do this using batch. But I wouldn't dare to call this "modern" as per your question.
perl is really the best choice for your case, and millions of others. It is very common, and the web is rife with examples and documentation. Yes, it is cross-platform and extremely stable, and it is nearly perfectly consistent across platforms. I say nearly because nothing is perfect, but I doubt you will encounter an inconsistency in your lifetime.
It is a language interpreter but supports a rich command-line interface as well.
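For instance, the fixed-width split from the question is a one-liner with unpack, whose A25 fields strip trailing blanks automatically (a sketch; users.txt is a hypothetical file holding just the three data rows):

perl -ne 'for (unpack "A25 A25 A25", $_) { print "$_\n" if length }' users.txt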

Find stored procedures not referenced in source code

I am trying to clean up a legacy database by dropping all procedures that are not used by the application. Using grep, I have been able to determine that a single procedure does not occur in the source code. Is there a way to do this for all of the procedures at once?
UPDATE: While using -E "proc1|proc2" produces all lines in all files that match either pattern, this is not very useful; the legacy database has 2000+ procedures.
I tried to use the -o option thinking that I could use its output as the pattern for an inverse search on the original pattern. However, I found that there is no output when you use the -o option with more than one pattern.
Any other ideas?
UPDATE: After further experimenting, I found that it is the combination of the -i and -o options which are preventing the output. Unfortunately, I need a case insensitive search in this context.
feed the list of stored procedures to egrep separated by "|"
or:
for stored_proc in $stored_procs
do
    grep "$stored_proc" "$source_file"
done
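For the first suggestion, you can build that alternation from a file of names rather than typing it out (a sketch; proc_names.txt is hypothetical, one procedure name per line):

pattern=$(paste -sd'|' proc_names.txt)   # proc1|proc2|...
grep -Ei "$pattern" "$source_file"

With 2000+ names the pattern gets long, but grep usually copes; grep -Eif proc_names.txt avoids building the alternation at all, since -f takes one pattern per line.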
I've had to do this in the past as well. Don't forget about any procs that may be called from other procs.
If you are using SQL Server you can use this:
SELECT name,
text
FROM sysobjects A
JOIN syscomments B
ON A.id = B.id
WHERE xtype = 'P'
AND text LIKE '%< sproc name >%'
I get output under the circumstances described in your edit:
$ echo "aaaproc1bbb" | grep -Eo 'proc1|proc2'
proc1
$ echo $?
0
$ echo "aaabbb" | grep -Eo 'proc1|proc2'
$ echo $?
1
The exit code shows whether there was a match (0) or not (1).
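That exit status is enough to drive the whole job as a loop over the procedure list (a sketch; proc_names.txt is hypothetical, one name per line, and src/ holds the source code):

while read -r proc; do
    grep -riq "$proc" src/ || echo "$proc appears unused"
done < proc_names.txt

grep -q stays quiet and just sets the exit status, so only the unreferenced names are printed.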
You might also find these options to grep useful (-L may be specific to GNU grep):
-c, --count
Suppress normal output; instead print a count of matching lines
for each input file. With the -v, --invert-match option (see
below), count non-matching lines. (-c is specified by POSIX.)
-L, --files-without-match
Suppress normal output; instead print the name of each input
file from which no output would normally have been printed. The
scanning will stop on the first match.
-l, --files-with-matches
Suppress normal output; instead print the name of each input
file from which output would normally have been printed. The
scanning will stop on the first match. (-l is specified by
POSIX.)
-q, --quiet, --silent
Quiet; do not write anything to standard output. Exit
immediately with zero status if any match is found, even if an
error was detected. Also see the -s or --no-messages option.
(-q is specified by POSIX.)
Sorry for quoting the man page at you, but sometimes it helps to screen things a bit.
Edit:
For a list of filenames that do not contain any of the procedures (case insensitive):
grep -EiL 'proc1|proc2' *
For a list of filenames that contain any of the procedures (case insensitive):
grep -Eil 'proc1|proc2' *
To list the files and show the match (case insensitive):
grep -Eio 'proc1|proc2' *
Start with your list of procedure names. For easy re-use later, sort them and make them lowercase, like so:
tr "[:upper:]" "[:lower:]" < list_of_procedures | sort > sorted_list_o_procs
... now you have a sorted list of the procedure names. It sounds like you're already using GNU grep, so you've got the -o option.
fgrep -o -i -f sorted_list_o_procs source1 source2 ... > list_of_used_procs
Note the use of fgrep: these aren't regexps, really, so why treat them as such. Hopefully you will also find that this magically corrects your output issues ;). Now you have an ugly list of the used procedures. Let's clean it up as we did the original list above.
tr "[:upper:]" "[:lower:]" < list_of_used_procs | sort -u > short_list
Now you have a short list of the used procedures. Let's find the ones in the original list that aren't in the short list.
fgrep -v -f short_list sorted_list_o_procs
... and there they are.
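One caveat worth noting: fgrep -f matches substrings, so an unused procedure whose name contains a used procedure's name would be hidden from the final list. Forcing whole-line matches avoids that:

fgrep -v -x -f short_list sorted_list_o_procs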
