Is it possible to use literal data as a stream source in Sumo Logic?

Is it possible for a Sumo Logic user to define data source values inside a query and use them in a subquery condition?
For example, in SQL one can use literal data as a source table.
-- example in MySQL
SELECT * FROM (
SELECT 1 as `id`, 'Alice' as `name`
UNION ALL
SELECT 2 as `id`, 'Bob' as `name`
-- ...
) as literal_table
I wonder whether Sumo Logic also has this kind of functionality.
I believe combining such literals with subqueries would make users' lives easier.

I believe the equivalent in a Sumo Logic query would be to use the save operator to create a lookup table in a subquery: https://help.sumologic.com/05Search/Subqueries#Reference_data_from_child_query_using_save_and_lookup
Basically something like this:
_sourceCategory=katta
[subquery:(_sourceCategory=stream explainJSONPlan.ETT) error
| where !(statusmessage="Finished successfully" or statusmessage="Query canceled" or isNull(statusMessage))
| count by sessionId, statusMessage
| fields -_count
| save /explainPlan/neededSessions
| compose sessionId keywords]
| parse "[sessionId=*]" as sessionId
| lookup statusMessage from /explainPlan/neededSessions on sessionid=sessionid
Where /explainPlan/neededSessions is your literal data table that you select from later on in the query (using lookup).

You can define a lookup table from a static map/dictionary that you update only occasionally (you can even point to a file hosted on the internet if you change the mapping often).
Then you can use the | lookup operator. Nothing about this is specific to subqueries.
Disclaimer: I am currently employed by Sumo Logic.

Related

Left join table on multiple tables in SAS

I've got multiple master tables in the same format with the same variables. I now want to left join another variable but I can't combine the master tables due to limited storage on my computer. Is there a way that I can left join a variable onto multiple master tables within one PROC SQL? Maybe with the help of a macro?
The LEFT JOIN code looks like this for one join, but I'm looking for an alternative to copying and pasting this five times:
PROC SQL;
CREATE TABLE New AS
SELECT a.*, b.Value
FROM Old a LEFT JOIN Additional b
ON a.ID = b.ID;
QUIT;
You can't do it in one create table statement, as it only creates one table at a time. But you can do a few things, depending on what your actual limiting factor is (you mention a few).
If you simply want to avoid writing the same code five times, but otherwise don't care how it executes, then just wrap the code in a macro, as you suggest.
%macro update_table(old=, new=);
PROC SQL;
CREATE TABLE &new. AS
SELECT a.*, b.Value
FROM &old. a LEFT JOIN Additional b
ON a.ID = b.ID;
QUIT;
%mend update_table;
%update_table(old=old1, new=new1)
%update_table(old=old2, new=new2)
%update_table(old=old3, new=new3)
Of course, if the names of the five tables are in a pattern, you can perhaps automate this further based on that pattern, but you don't give sufficient information to figure that out.
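If, purely for illustration, the five tables really were named old1 through old5 and new1 through new5 (matching the example calls above), a simple %do loop could generate the calls:
%macro update_all;
%* illustration only - assumes tables named old1-old5 and new1-new5;
%do i = 1 %to 5;
%update_table(old=old&i., new=new&i.)
%end;
%mend update_all;
%update_all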
If, on the other hand, you need this to run more efficiently than executing the SQL query five times, it can be done a number of ways depending on the specifics of your Additional table and your constraints. This looks like a good use case for a format lookup; see, for example, Jenine Eason's paper, Proc Format, a Speedy Alternative to Sort/Merge. If you're just merging on ID, this is very easy.
data for_format;
set additional;
start = ID;
label = value;
fmtname='AdditionalF'; *or '$AdditionalF' if ID is character-valued;
output;
if _n_=1 then do; *creating an "other" option so it returns missing if not found;
hlo='o';
label = ' ';
output;
end;
run;
And then you just have five data steps with a PUT statement adding the value; or you could simply attach that format to the ID variable, so it shows that value in most PROCs (useful if this is something like a classifier that you don't truly need "in" the data).
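A rough sketch of that last step, assuming the numeric-ID case and the old1/new1 table names used in the macro calls above: for_format is a cntlin-style control data set, so it first has to be turned into an actual format with PROC FORMAT's cntlin= option, and then put() can look values up through it.
* turn the control data set into a real format;
proc format cntlin=for_format;
run;
* one of the five data steps, adding value via the format lookup;
data new1;
set old1;
value = put(ID, AdditionalF.);
run;
Note that put() always returns a character result; wrap it in input(..., best32.) if value needs to stay numeric.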
You can do this in a single pass through the data in a Data Step, using a hash table to look up values.
data new1 new2 new3;
set old1(in=a) old2(in=b) old3(in=c);
format value best.;
if _n_=1 then do;
%create_hash(lk,id,value,"Additional");
end;
value = .;
rc = lk.find();
drop rc;
if a then
output new1;
else if b then
output new2;
else if c then
output new3;
run;
%create_hash() macro available here.
You could, alternatively, use Joe's format with the same Data Step syntax.
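For instance (a sketch, assuming the AdditionalF format has already been created with proc format cntlin= as above), the same split-output data step would become:
data new1 new2 new3;
set old1(in=a) old2(in=b) old3(in=c);
* value is character here, since put() returns the format label as text;
value = put(ID, AdditionalF.);
if a then output new1;
else if b then output new2;
else if c then output new3;
run;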

How to show the same columns in a DBGrid with different criteria

I need your help to finish my Delphi homework.
I use an MS Access database and show all the data in one DBGrid using SQL. I want to show the same columns more than once, side by side, with different criteria (50 records per set of columns).
I want a select query to produce output like:
No | Name | No | Name |
1 | A | 51 | AA |
2 | B | 52 | BB |
3~50 | | 53~100| |
Is it possible?
I can foresee issues if you choose to return a dataset with duplicate column names. To fix this, you must change your query to enforce strictly unique column names, using as. For example...
select A.No as No, A.Name as Name, B.No as No2, B.Name as Name2 from TableA A
join TableB B on B.Something = A.Something
Just as a note, if you're using a TDBGrid, you can customize the column titles. Right-click on the grid control at design time and select Columns Editor..., and a Collection window will appear. When adding a column, link it to a FieldName and then assign a value to Title.Caption. This also requires that you set up all the columns; when you don't define any columns here, the grid automatically shows all columns in the query.
On the other hand, a SQL query may contain duplicate field names in the output, depending on how you structure the query. I know this is possible in SQL Server, but I'm not sure about MS Access. In any case, I recommend always returning a dataset with unique column names and then customizing the DB Grid's column titles. After all, it is also possible to connect to an Excel spreadsheet, which can very likely have identical column names. The problem arises when you try to read from one of those columns for another use.

Joining two tables based on observations in a separate table (in EG)?

In Enterprise Guide, I have a table (called COUNTRIES) containing the names of some countries of the world in one column, and the currency of each country in a second column.
E.g.
CTRY | CRNCY
------------------------
UK | GBP
US | USD
FR | EUR
AU | AUD
etc
This table is only a small subset of all the countries in the world, and ranges anywhere between 10 and 20 observations depending on preference. The number of entries in this table can change at any time.
For each country specified in COUNTRIES, I have a table containing information about that country (e.g. for the example above, I have tables called CTRY_UK, CTRY_US, CTRY_FR, CTRY_AU, etc.), and the same goes for their currencies (so I also have CRNCY_GBP, CRNCY_EUR, etc.).
Now for each observation in COUNTRIES, for example (UK and GBP), I want to join the CTRY_UK table with the CRNCY_GBP table, but I don't know a way of doing so in SAS.
In other words, I want to join two tables together based on the entries given in a separate table. How can this be done?
You can read the data values into macro variables using the open function and the call set routine, and then write whatever code you need using those macro variables.
%macro Combine;
** open Countries data in input mode;
%let dsid = %sysfunc(open(Countries, i));
** set up reading of values into macro variables of the same name;
%syscall set(dsid);
** read first observation;
%let rc = %sysfunc(fetch(&dsid));
%do %while (&rc = 0);
** merge data sets using the auto-filled &CTRY and &CRNCY macro variables;
data merged_&CTRY;
merge CTRY_&CTRY CRNCY_&CRNCY;
by ID;
run;
** read next observation;
%let rc = %sysfunc(fetch(&dsid));
%end;
** close data set;
%let rc = %sysfunc(close(&dsid));
%mend;
** actual macro call;
%Combine
The best approach here is likely to generate the code from the initial table, and then run that as macro calls.
So imagine the call is something like
%macro join_my_Tables(country=,currency=);
proc sql;
create table &country. as
select whatever stuff from ctry_&country., crncy_&currency.
...
;
quit;
%mend join_my_Tables;
Then you would create calls to that:
proc sql;
select cats('%join_my_Tables(country=',ctry,',currency=',crncy,')')
into :calllist separated by ' '
from countries;
quit;
*not technically needing to be a separate proc sql here, just to show it is doing something else;
proc sql;
&calllist.
quit;
That would do what you want, I suspect. You might need to modify it somewhat if your various tables differ in structure (why are they separate, anyway? That's an awkward way to store data, unless the columns are very different and you really don't want a vertical structure for some reason).
If you have very different sets of columns, and don't want to rely on select *, then you may need to create a dataset that stores this information and pull it out during the macro execution.
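A rough sketch of that idea (the control data set COUNTRY_COLS and its COLLIST column are made up for illustration): store each country's select list in a data set and read it into a macro variable inside %join_my_Tables before building the query.
* hypothetical control data set COUNTRY_COLS with columns CTRY and COLLIST;
* where COLLIST holds the select list for that country, e.g. c.id, c.gdp, x.rate;
proc sql noprint;
select collist into :collist trimmed
from country_cols
where ctry = "&country.";
quit;
* then the generated query uses that column list (the join key below is only a placeholder);
proc sql;
create table &country. as
select &collist.
from ctry_&country. c, crncy_&currency. x
where c.id = x.id;
quit;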

How to get the output of SQL queries in FitNesse + DbFit?

I am trying to get SQL query output in DbFit using, for example, !|Execute|select * from abc| but don't know how it will be displayed in DbFit.
I think that you are looking for the Inspect Query table (you can find reference docs about it here).
!|Inspect Query|select * from abc|
When executed, this will print the resultset of the query.
First, the execute fixture is typically used for actions that do not return data, e.g.:
!|Execute|insert into tablename values (…)|
or
!|Execute|update tablename set ... where ...|
However, even some non-data actions have more specific commands. The above update can be done, for example, with:
!|Update|tablename |
|field_to_change=|field_to_select|
|new value |matching value |
For returning data, use the query fixture
!|query|select Id, BatchNum from tablename|
|Id |BatchNum? |
|1 |>>Bat1 |
|2 |<<Bat1 |
As shown, just put your field names in the row below the fixture, then your data rows below that.

Complex queries using Rails query language

I have a query used for statistical purposes. It breaks down the number of users that have logged in a given number of times. User has_many installations and installation has a login_count.
select total_login as 'logins', count(*) as `users`
from (select u.user_id, sum(login_count) as total_login
from user u
inner join installation i on u.user_id = i.user_id
group by u.user_id) g
group by total_login;
+--------+-------+
| logins | users |
+--------+-------+
| 2 | 3 |
| 6 | 7 |
| 10 | 2 |
| 19 | 1 |
+--------+-------+
Is there some elegant ActiveRecord style find to obtain this same information? Ideally as a hash collection of logins and users: { 2=>3, 6=>7, ...
I know I can use SQL directly but wanted to know how this could be solved in Rails 3.
# Our relation variables(RelVars)
U =Table(:user, :as => 'U')
I =Table(:installation, :as => 'I')
# perform operations on relations
G =U.join(I) #(implicit) will reference final joined relationship
#(explicit) predicate = Arel::Predicates::Equality.new U[:user_id], I[:user_id]
G =U.join(I).on( U[:user_id].eq(I[:user_id]) )
# Keep in mind you MUST PROJECT for this to make sense
G.project(U[:user_id], I[:login_count].sum.as('total_login'))
# Now you can group
G=G.group(U[:user_id])
#from this group you can project and group again (or group and project)
# for the final relation
TL=G.project(G[:total_login].as('logins'), G[:id].count.as('users')).group(G[:total_login])
Keep in mind this is VERY verbose because I wanted to show you the order of operations, not just "here is the code". It can actually be written in about half as many lines.
The hairy part is Count()
As a rule, any attribute in the SELECT that is not used in an aggregate should appear in the GROUP BY, so be careful with count().
Why would you group by the total_login count?
At the end of the day I would simply ask why you don't just count the total logins across all installations, since the user information is made irrelevant by the outermost count grouping.
I don't think you'll find anything as efficient as having the db do the work. Remember that you don't want to retrieve the rows from the db; you want the db itself to compute the answer by grouping the data.
If you want to push the SQL further into the database, you can create the query as a view in the database and then use a Rails ActiveRecord class to retrieve the results.
In the end, IMO, the SQL syntax is way more readable. This Arel stuff just slows me down whenever I need just a tiny bit more complexity. It's another syntax you have to learn, and not worth it in my opinion. I'd stick to SQL in these cases.
