What is the correct csh syntax to store the output of a SQL*Plus block?
sqlplus -s / <<SQL
set feedback off
set linesize 100
set lines 150
set pages 0
set head off
set serveroutput on size 10000
select 1 from dual;
SQL
In this example, I'd like to be able to assign the value '1' to a variable in the csh script. Using another shell variant is not an option.
You need to write the variables in SQL*Plus into a file that you can then source once control has returned to your CSH program.
Note that I'm also showing you how to store the output of a SQL*Plus column value in a SQL*Plus variable using the column statement. You can skip this step if your code is simple, but I thought this was a worthwhile addition.
sqlplus -s apps/apps@VIS <<SQL
set feedback off
set linesize 100
set lines 150
set pages 0
set head off
set serveroutput on size 10000
column result new_value result
select 1 as result from dual;
prompt Variable result = &result.
spool output.csh
prompt set result = &result.
spool off
SQL
source output.csh
echo "Back in CSH and result = $result "
Related
I was wondering if someone could help me with the error message I am getting from Snowflake. I am trying to create a stored procedure that will loop through 125 files in S3 and copy them into the corresponding tables in Snowflake. The table names are the same as the csv file names. In the example I only have 2 file names set up (if someone knows a better way than having to list all 125, that would be extremely helpful).
The error message I am getting is the following:
syntax error line 5 at position 11 unexpected '1'.
syntax error line 6 at position 22 unexpected '='. (line 4)
CREATE OR REPLACE PROCEDURE load_data_S3(file_name VARCHAR,table_name VARCHAR)
RETURNS VARCHAR
LANGUAGE SQL
AS
$$
BEGIN
FOR i IN 1 to 2 LOOP
CASE i
WHEN 1 THEN
SET file_name = 'file1.csv';
SET table_name = 'FILE1';
WHEN 2 THEN
SET file_name = 'file2.csv';
SET table_name = 'FILE2';
--WILL LIST THE REMAINING 123 WHEN STATEMENTS
ELSE
-- Do nothing
END CASE;
COPY INTO table_name
FROM @externalstg/file_name
FILE_FORMAT = (type='csv');
END LOOP;
RETURN 'Data loaded successfully';
END;
$$;
There are various ways to list the files in a stage (see the post here). You can loop through the resultset and run COPY INTO on each record.
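Since the question states that each table name is just the csv file name without the extension, you can derive the table name instead of hard-coding 125 WHEN branches. Here is a minimal sketch of that mapping; `table_for_file` and `copy_statements` are names I've invented for illustration, and the `@externalstg` stage name is taken from the question's code:

```python
def table_for_file(file_name):
    """Derive the target table from a csv file name, per the question's
    convention (table name == file name minus extension, uppercased):
    'file1.csv' -> 'FILE1'."""
    base = file_name.rsplit(".", 1)[0]
    return base.upper()

def copy_statements(file_names, stage="@externalstg"):
    """Build one COPY INTO statement per file, instead of a 125-branch CASE.
    The file list would come from listing the stage (e.g. LIST @externalstg)."""
    return [
        f"COPY INTO {table_for_file(f)} FROM {stage}/{f} FILE_FORMAT = (type='csv')"
        for f in file_names
    ]
```

In Snowflake itself the same idea applies: loop over the LIST resultset and build each COPY INTO from the file name, rather than enumerating every file in the procedure body.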
Is it possible to do something like this inside an Informix stored procedure :
DEFINE my_data VARCHAR(255);
LET meta = (select count(*), something from tab11);
SYSTEM 'echo '|| meta;
You capture the output using an INTO clause and wrap the SELECT in a FOREACH. This fetches one row of data at a time, and you need separate variables for each column that you select. You can then manipulate those into a bigger string.
You can then use SYSTEM.
However, the output of echo will be sent to /dev/null (or NUL:). If that's what you want, fine — but why? If not, you'll need to organize redirection to somewhere else for yourself.
CREATE PROCEDURE echo(str VARCHAR(200) DEFAULT 'hello world');
DEFINE cmd VARCHAR(255);
LET cmd = "echo " || str || " >>/Users/jleffler/tmp/arcana.out";
SYSTEM cmd;
END PROCEDURE;
EXECUTE PROCEDURE echo();
EXECUTE PROCEDURE echo("The world is your oyster");
DROP PROCEDURE echo;
You'll need to adjust the file name to suit your purposes — the chances are high that you don't have my home directory on your machine.
Example output file:
hello world
The world is your oyster
Permissions on file and directories leading to file:
2 drwxr-xr-x root wheel 2017-05-24 17:17:16 /
169236 drwxr-xr-x root admin 2016-09-20 12:46:37 /Users
609973 drwxr-xr-x jleffler staff 2017-05-24 17:18:45 /Users/jleffler
1670154 drwxr-xr-x jleffler staff 2017-05-24 17:19:02 /Users/jleffler/tmp
63140467 -rw-r--r-- jleffler staff 2017-05-24 17:19:02 /Users/jleffler/tmp/arcana.out
Agree with everything @Jonathan Leffler mentioned above. Here is another example, where the select statement returns a single integer value similar to what you have shown in your question.
create procedure test();
DEFINE my_data int;
LET my_data = (select count(*) from systables);
system 'echo ' || my_data || ' > /tmp/my_data';
end procedure;
execute procedure test();
In my test system, the output of the select statement is:
select count(*) from systables;
(count(*))
113
1 row(s) retrieved.
When I execute the procedure, the result of the system statement is the file /tmp/my_data.
cat /tmp/my_data
113
In short, it is certainly possible to achieve what you are looking to do. However, depending on the result set of the select statement, you may need more complex handling inside the stored procedure.
I successfully wrote an intersection of text search and other criteria using Redis. To achieve that I'm using a Lua script. The issue is that I'm not only reading, but also writing values from that script. From Redis 3.2 it's possible to achieve that by calling redis.replicate_commands(), but not before 3.2.
Below is how I'm storing the values.
Names
> HSET product:name 'Cool product' 1
> HSET product:name 'Nice product' 2
Price
> ZADD product:price 49.90 1
> ZADD product:price 54.90 2
Then, to get all products that match 'ice', for example, I call:
> HSCAN product:name 0 MATCH *ice*
However, since HSCAN uses a cursor, I have to call it multiple times to fetch all results. This is where I'm using a Lua script:
local cursor = 0
local fields = {}
local ids = {}
local key = 'product:name'
local value = '*' .. ARGV[1] .. '*'
repeat
    local result = redis.call('HSCAN', key, cursor, 'MATCH', value)
    cursor = tonumber(result[1])
    fields = result[2]
    for i, id in ipairs(fields) do
        if i % 2 == 0 then
            ids[#ids + 1] = id
        end
    end
until cursor == 0
return ids
Since it's not possible to use the result of a script in another call (like SADD key EVAL(SHA) ...), and it's also not possible to use global variables within scripts, I've changed the part inside the fields loop to store the IDs in a key that can be accessed outside the script:
if i % 2 == 0 then
    ids[#ids + 1] = id
    redis.call('SADD', KEYS[1], id)
end
I had to add redis.replicate_commands() as the first line. With this change I can get all IDs from the key I passed when calling the script (see KEYS[1]).
And, finally, to get a list of 100 product IDs priced between 40 and 50 where the name contains "ice", I do the following:
> ZUNIONSTORE tmp:price 1 product:price WEIGHTS 1
> ZREMRANGEBYSCORE tmp:price 0 40
> ZREMRANGEBYSCORE tmp:price 50 +INF
> EVALSHA b81c2b... 1 tmp:name ice
> ZINTERSTORE tmp:result 2 tmp:price tmp:name
> ZCOUNT tmp:result -INF +INF
> ZRANGE tmp:result 0 100
I use the ZCOUNT call to know in advance how many result pages I'll have, doing count / 100.
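Note that count / 100 needs to be a ceiling division to count pages correctly (101 results is 2 pages, not 1). A tiny sketch of that calculation, with `page_count` being an illustrative name of my own:

```python
def page_count(total, page_size=100):
    """Number of result pages for `total` items, rounding up:
    0 -> 0 pages, 100 -> 1 page, 101 -> 2 pages."""
    return (total + page_size - 1) // page_size
```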
As I said before, this works nicely with Redis 3.2. But when I tried to run the code on AWS, which only supports Redis up to 2.8, I couldn't make it work anymore. I'm not sure how to iterate with the HSCAN cursor without using a script, or how to avoid writing from the script. Is there a way to make it work on Redis 2.8?
Some considerations:
I know I can do part of the processing outside Redis (like iterating the cursor or intersecting the matches), but it'll affect the application's overall performance.
I don't want to deploy a Redis instance on my own just to use version 3.2.
The criteria above (price range and name) are just an example to keep things simple here. I have other fields and types of matches, not only those.
I'm not sure if the way I'm storing the data is the best way. I'm willing to hear suggestions about it.
The only problem I found here is storing the values inside a Lua script. So instead of storing them inside Lua, return the values from the script (as an array of strings), store them in a set with a separate SADD key members... call, and then proceed with the intersection and return the results.
> ZUNIONSTORE tmp:price 1 product:price WEIGHTS 1
> ZREMRANGEBYSCORE tmp:price 0 40
> ZREMRANGEBYSCORE tmp:price 50 +INF
> nameSet[] = EVALSHA b81c2b... 0 ice
> SADD tmp:name nameSet
> ZINTERSTORE tmp:result 2 tmp:price tmp:name
> ZCOUNT tmp:result -INF +INF
> ZRANGE tmp:result 0 100
IMO your design is the most optimal one. One piece of advice would be to use pipelining wherever possible, as it processes everything in one go.
Hope this helps
UPDATE
There is no such thing as an array ([ ]) in Lua; you have to use a Lua table to achieve it. In your script you are returning ids, right? That itself is an array, and you can use it in a separate call to achieve the SADD.
String[] nameSet = (String[]) evalsha b81c2b... 0 ice -> This is Java pseudocode
SADD tmp:name nameSet
And the corresponding lua script is the same as that of your 1st one.
local cursor = 0
local fields = {}
local ids = {}
local key = 'product:name'
local value = '*' .. ARGV[1] .. '*'
repeat
    local result = redis.call('HSCAN', key, cursor, 'MATCH', value)
    cursor = tonumber(result[1])
    fields = result[2]
    for i, id in ipairs(fields) do
        if i % 2 == 0 then
            ids[#ids + 1] = id
        end
    end
until cursor == 0
return ids
The problem isn't that you're writing to the database, it's that you're doing a write after a HSCAN, which is a non-deterministic command.
In my opinion there's rarely a good reason to use a SCAN command in a Lua script. The main purpose of the command is to allow you to do things in small batches so you don't lock up the server processing a huge key space (or hash key space). Since scripts are atomic, though, using HSCAN doesn't help—you're still locking up the server until the whole thing's done.
Here are the options I can see:
If you can't risk locking up the server with a lengthy command:
Use HSCAN on the client. This is the safest option, but also the slowest.
If you want to do as much processing in a single atomic Lua command as possible:
Use Redis 3.2 and script effects replication.
Do the scanning in the script, but return the values to the client and initiate the write from there. (That is, Karthikeyan Gopall's answer.)
Instead of HSCAN, do an HKEYS in the script and filter the results using Lua's pattern matching. Since HKEYS is deterministic you won't have a problem with the subsequent write. The downside, of course, is that you have to read in all of the keys first, regardless of whether they match your pattern. (Though HSCAN is also O(N) in the size of the hash.)
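The first option, driving HSCAN from the client, reduces to a simple cursor loop. Here is a minimal sketch; the `scan` callable is a stand-in I've invented for whatever client call returns the raw (cursor, flat field/value list) HSCAN reply, and the value extraction mirrors the `i % 2 == 0` test in the 1-based Lua loop above:

```python
def ids_from_hscan_reply(flat_pairs):
    """HSCAN on a hash replies with a flat [field1, value1, field2, value2, ...]
    list; the product IDs live in the value slots, i.e. every second element."""
    return flat_pairs[1::2]

def scan_matching_ids(scan, match):
    """Drive the HSCAN cursor from the client until it wraps to 0.
    `scan(cursor, match)` must return (next_cursor, flat_pairs)."""
    cursor, ids = 0, []
    while True:
        cursor, pairs = scan(cursor, match)
        ids.extend(ids_from_hscan_reply(pairs))
        if int(cursor) == 0:
            return ids
```

Once the IDs are back on the client, the follow-up SADD (and the rest of the intersection pipeline) can be issued as ordinary deterministic commands, which is what makes this safe on pre-3.2 replication.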
I am using a shell script to extract the data from the 'extr' table. The extr table is a very big table with 410 columns and 61047 rows of data. The size of one record is around 5KB.
The script is as follows:
#!/usr/bin/ksh
sqlplus -s \/ << rbb
set pages 0
set head on
set feed off
set num 20
set linesize 32767
set colsep |
set trimspool on
spool extr.csv
select * from extr;
/
spool off
rbb
#-------- END ---------
One fine day the extr.csv file had 2 records with an incorrect number of columns (i.e. one record with more columns and the other with fewer). Upon investigation I found that two duplicate records were repeated in the file. The primary key should ideally be unique in the file, but in this case 2 records were repeated. Also, the shift in the columns was abrupt.
Small example of the output file:
5001|A1A|AAB|190.00|105|A
5002|A2A|ABB|180.00|200|F
5003|A3A|AAB|153.33|205|R
5004|A4A|ABB|261.50|269|F
5005|A5A|AAB|243.00|258|G
5006|A6A|ABB|147.89|154|H
5003|A7A|AAB|249.67|AAB|153.33|205|R
5004|A8A|269|F
5009|A9A|AAB|368.00|358|S
5010|AAA|ABB|245.71|215|F
Here the primary key records for 5003 and 5004 have reappeared in place of 5007 and 5008. Also, the duplicate records have shifted the records of 5007 and 5008 by appending/cutting down their columns.
Need your help in analysing why this happened. Why were the 2 rows extracted multiple times? Why were the other 2 rows missing from the file? And why were the records shifted?
Note: This script has been working fine for the last two years and has never failed except for this one time (mentioned above). It ran successfully during the next run. Recently we added one more program which accesses the extr table with a cursor (select only).
I reproduced a similar behaviour.
;-> cat input
5001|A1A|AAB|190.00|105|A
5002|A2A|ABB|180.00|200|F
5003|A3A|AAB|153.33|205|R
5004|A4A|ABB|261.50|269|F
5005|A5A|AAB|243.00|258|G
5006|A6A|ABB|147.89|154|H
5009|A9A|AAB|368.00|358|S
5010|AAA|ABB|245.71|215|F
Think of the input file as your database.
Now I write a script that accesses "the database" and adds some random delays.
;-> cat writeout.sh
# Start this script twice
while IFS=\| read a b c d e f; do
# echo with \c would suppress the newline, but I do it differently this time
echo "$a|$b|$c|$d|" | tr -d "\n"
(( sleeptime = RANDOM % 5 ))
sleep ${sleeptime}
echo "$e|$f"
done < input >> output
EDIT: Removed cat input | in script above, replaced by < input
Start this script twice in the background
;-> ./writeout.sh &
;-> ./writeout.sh &
Wait until both jobs are finished and see the result
;-> cat output
5001|A1A|AAB|190.00|105|A
5002|A2A|ABB|180.00|200|F
5003|A3A|AAB|153.33|5001|A1A|AAB|190.00|105|A
5002|A2A|ABB|180.00|205|R
5004|A4A|ABB|261.50|269|F
5005|A5A|AAB|243.00|200|F
5003|A3A|AAB|153.33|258|G
5006|A6A|ABB|147.89|154|H
5009|A9A|AAB|368.00|358|S
5010|AAA|ABB|245.71|205|R
5004|A4A|ABB|261.50|269|F
5005|A5A|AAB|243.00|258|G
5006|A6A|ABB|147.89|215|F
154|H
5009|A9A|AAB|368.00|358|S
5010|AAA|ABB|245.71|215|F
When I change the last line of writeout.sh to done > output I do not see the problem, but that might be due to buffering and the small amount of data.
I still don't know exactly what happened in your case, but it really looks like 2 programs writing simultaneously to the same file.
A job in TWS could have been restarted manually, 2 scripts in your master script might write to the same file, or something else.
Preventing this in the future can be done using some locking / checks (e.g. when the output file already exists, quit and return an error code to TWS).
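The same "quit if someone else is already writing" check can also be done with an OS-level advisory lock rather than testing for the file's existence, which avoids a race between the test and the write. A minimal sketch of the idea in Python using flock (the lock-file path and the `try_lock` name are illustrative; from ksh the equivalent would be a lock file plus a test at the top of the script):

```python
import fcntl

def try_lock(path):
    """Try to take an exclusive, non-blocking advisory lock on `path`.
    Returns the open file handle on success (keep it open for the whole
    job; the lock is released when it is closed), or None if another
    process already holds the lock."""
    handle = open(path, "w")
    try:
        fcntl.flock(handle, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return handle
    except BlockingIOError:
        handle.close()
        return None
```

If `try_lock` returns None, exit with a non-zero status so the scheduler can see the job was skipped instead of letting two extracts interleave their output.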
In an Informix stored procedure I have a condition which goes like this:
If val1 > 0 // 1st If
Select count(*) of value from a table and stored it in a Variable say VALUE
If VALUE > 0 // 2nd If
perform UPDATE
ELSE // Intended ELSE for 2nd IF
Perform Insert
END IF
ELSE // Intended ELSE for 1st IF
perform Operation X
END IF
Somehow my execution always goes into the ELSE intended for the 1st IF, and this is creating a problem for me. Can someone let me know how I can correct this, or where I am going wrong?
Regards
The explicit keyword END IF means that the nesting of IF statements in SPL is unambiguous. Translating and indenting your code yields:
IF val1 > 0 THEN
SELECT COUNT(*) INTO value FROM SomeTable;
IF VALUE > 0 THEN
Perform UPDATE
ELSE
Perform INSERT
END IF
ELSE
Perform Operation X
END IF
There is no way for there to be any ambiguity; there is no 'dangling else' problem because of the explicit END IF notation.
If the wrong code is being executed, then maybe you're being caught by 3-valued logic and the behaviour of comparisons when one of the comparands is NULL. For example, if val1 is NULL, then Operation X will always be performed: val1 > 0 becomes NULL > 0, which evaluates to NULL, which is not TRUE, so the ELSE clause is taken.
As noted by ceinmart, you can use SET DEBUG FILE and TRACE ON to debug what is happening as you execute the stored procedure.
Include the commands below before the IF.
set debug file to '/tmp/trace.out';
trace on ;
....
trace "Value of val1 ="||val1;
trace "Value of VALUE = "||VALUE;
Run the procedure and check the output in the /tmp/trace.out file on the server where the database is.
For command reference, use the online manual: TRACE, SET DEBUG FILE