ADS does not support table variables?

For example, in Transact-SQL, I can declare a table variable like this:
DECLARE @MyTableVar TABLE (
    EmpID INT NOT NULL,
    OldVacationHours INT,
    NewVacationHours INT,
    ModifiedDate DATETIME
);
I'm using ADS 11. It looks like this version does not support table variables.
UPDATE
I also looked at ADS 12; that version does not support table variables either.
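For context, the closest ADS construct appears to be a local temporary table. This is only a sketch (assuming ADS's #-prefixed local temporary tables, which live for the duration of the connection), not a drop-in replacement for a table variable:

```sql
-- Sketch: a local temporary table as a substitute for a table variable.
-- In ADS, tables whose names start with # are connection-local temp tables.
CREATE TABLE #MyTableVar (
    EmpID INTEGER NOT NULL,
    OldVacationHours INTEGER,
    NewVacationHours INTEGER,
    ModifiedDate TIMESTAMP
);
```

Unlike a table variable, a temp table must be dropped (or the connection closed) to dispose of it, and it is visible to any statement on the same connection.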

How to achieve conditional searching via Spotlight indexing?

Currently, I am porting a SQLite-based app to a Core Data-based app.
This is my Core Data entity:
Entity Note
-----------
var type: Int
var locked: Bool
var title: String?
var searched_string: String?
var body: String?
One CoreData feature which I would like to adopt is spotlight indexing.
In the legacy SQLite-based app, we perform the following conditional search:
SELECT * FROM Note WHERE
(title LIKE :searchedStringWithPercentageSigns ESCAPE '\') OR
(type = 1 AND locked = 0 AND searched_string LIKE :searchedStringWithPercentageSigns ESCAPE '\') OR
(type = 0 AND locked = 0 AND body LIKE :searchedStringWithPercentageSigns ESCAPE '\')
ORDER BY "order" ASC
This is interpreted as:
First, search the title property (column).
If no match, search the searched_string property, but only if type is 1 and locked is 0.
If no match, search the body property, but only if type is 0 and locked is 0.
Otherwise (locked is 1), do not search anything.
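For reference, one way the conditional logic above might map onto Core Data's Spotlight integration is to override attributeSet(for:) on NSCoreDataCoreSpotlightDelegate and decide per object which fields to expose. This is only a sketch; NoteSpotlightDelegate and the exact field mapping are assumptions, not a confirmed solution:

```swift
import CoreData
import CoreSpotlight
import UniformTypeIdentifiers

// Sketch: expose only the fields the legacy SQL would have searched.
// "Note" and its attributes follow the entity above.
class NoteSpotlightDelegate: NSCoreDataCoreSpotlightDelegate {
    override func attributeSet(for object: NSManagedObject)
        -> CSSearchableItemAttributeSet? {
        guard let note = object as? Note else { return nil }

        let attrs = CSSearchableItemAttributeSet(contentType: .text)
        attrs.title = note.title  // title is searched unconditionally

        if !note.locked {
            // Mirror the SQL: type 1 -> searched_string, type 0 -> body.
            attrs.textContent = note.type == 1 ? note.searched_string
                                               : note.body
        }
        // Returning nil here instead would exclude the object from the
        // index entirely (e.g. for locked notes, if that is the intent).
        return attrs
    }
}
```

With this approach, type and locked never need to be indexed themselves; they only steer what gets written into the attribute set.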
After watching https://developer.apple.com/videos/play/wwdc2021/10098/, I still do not have a clear idea of how to implement such conditional searching behavior for:
Full text search inside the app via CSSearchQuery
Search outside the app via Spotlight search
I know that in the Data Model editor I need to select Index in Spotlight for title, searched_string, and body. But do I need to do the same for the properties (type, locked) that are used only for condition checking?
What is the way to achieve conditional searching via Spotlight indexing?
Thanks.

Apollo ios codegen generates optional values

I'm trying the latest version of apollo-ios, but I'd like to solve one lingering problem: I keep getting optional values (see image below).
Here's what I've explored (but still can't find why):
When I created the table, Nullable was false. Then I created a view for the public to access it.
With the apollo schema:download command, here's the generated JSON: schema.json
With the graphqurl command, here's the generated schema.graphql: schema.graphql. Here's the snippet:
"""
columns and relationships of "schedule"
"""
type schedule {
activity: String
end_at: timestamptz
id: Int
"""An array relationship"""
speakers(
"""distinct select on columns"""
distinct_on: [talk_speakers_view_select_column!]
"""limit the number of rows returned"""
limit: Int
"""skip the first n rows. Use only with order_by"""
offset: Int
"""sort the rows by one or more columns"""
order_by: [talk_speakers_view_order_by!]
"""filter the rows returned"""
where: talk_speakers_view_bool_exp
): [talk_speakers_view!]!
start_at: timestamptz
talk_description: String
talk_type: String
title: String
}
I suspect that id: Int missing the ! in the schema is the cause of codegen interpreting it as optional. But I could be wrong. Here's the repo for complete reference: https://github.com/vinamelody/MyApolloTest/tree/test
It's because Postgres marks view columns as explicitly nullable, regardless of the underlying column's nullability, for some unknown reason.
Vamshi (a core Hasura server dev) explains it in this issue:
https://github.com/hasura/graphql-engine/issues/1965
You don't need that view though -- it's the same as doing a query:
query {
  talks(
    where: { activity: { _like: "iosconfig21%" } },
    order_by: { start_at: asc }
  ) {
    id
    title
    start_at
    <rest of fields>
  }
}
Except now you have a view you need to manage in your Hasura metadata and create permissions for, like a regular table, on top of the table it's selecting from. My $0.02 anyways.
You can even use a GraphQL alias if you really insist on it being called "schedule" in the JSON response
https://graphql.org/learn/queries/
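For instance, aliasing the talks root field from the query above (a sketch; field names assumed from the schema snippet) would make the response key "schedule":

```graphql
query {
  # The alias renames the field only in the JSON response;
  # the server still resolves it as "talks".
  schedule: talks(where: { activity: { _like: "iosconfig21%" } }) {
    id
    title
    start_at
  }
}
```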

PXDatabase should accept PXDbType.Udt in Acumatica ERP

How can I call a stored procedure in Acumatica via PXDatabase when it has a user-defined type as an input parameter?
For example, I have the following type:
CREATE TYPE [dbo].[string_list_tblType] AS TABLE(
[RefNbr] [nvarchar](10) NOT NULL,
PRIMARY KEY CLUSTERED
(
[RefNbr] ASC
)WITH (IGNORE_DUP_KEY = OFF)
)
GO
I have the following stored procedure:
CREATE PROCEDURE [dbo].[GetListOfAPInvoices]
    @APInvoices string_list_tblType READONLY
AS
BEGIN
    SELECT * FROM APInvoice a WHERE a.RefNbr IN (SELECT RefNbr FROM @APInvoices)
END
and following fragment of C# code:
var par = new SqlParameter("APInvoices", dt);
par.SqlDbType = SqlDbType.Structured;
par.TypeName = "dbo.string_list_tblType";
par.UdtTypeName = "dbo.string_list_tblType";
par.ParameterName = "APInvoices";
PXSPParameter p1 = new PXSPInParameter("@APInvoices", PXDbType.Udt, par);
var pars = new List<PXSPParameter> { p1 };
var results = PXDatabase.Execute(sqlCommand, pars.ToArray());
but when I execute my C# code I receive the error message:
UdtTypeName property must be set for UDT parameters
When I debugged the PXSqlDatabaseProvider class with a reflector, in the method
public override object[] Execute(string procedureName, params PXSPParameter[] pars)
I noticed that in
using (new PXLongOperation.PXUntouchedScope(Thread.CurrentThread))
{
    command.ExecuteNonQuery();
}
command.Parameters.Items contains my method parameters, but the item related to the Udt type is null. I need to know how to pass a user-defined table type. Has anybody tried this approach?
Unfortunately UDT parameters are not supported in Acumatica's PXDatabase.Execute(..) method and there is no way to pass one to a stored procedure using the built-in functionality of the platform.
Besides, when writing data-retrieval procedures like the one in your example, you should be aware that BQL-based data-retrieval facilities do a lot of work to match company masks, filter out records marked as DeletedDatabaseRecord, and apply other internal logic. If you fetch data with a plain SELECT wrapped in a stored procedure, you bypass all of this functionality. This is hardly something you want to achieve.
If you absolutely want to use a stored procedure to get some records from the database but don't want the above side effect, one option is to create an auxiliary table in the DB and select records into it using the procedure. Then, in the application, you add a DAC mapped to this new table and use it to read the data by means of PXSelect or a similar facility.
Coming back to your particular example of fetching APInvoices by a list of their numbers, you could try using dynamic BQL composition to achieve this with Acumatica's data access facilities.
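As a rough sketch of that last suggestion (the graph reference and the availability of the In<> BQL operator in your Acumatica version are assumptions), fetching invoices by a list of reference numbers might look like:

```csharp
// Sketch: fetch APInvoice records by a list of RefNbr values through BQL,
// so company masks and DeletedDatabaseRecord filtering still apply,
// instead of calling a stored procedure with a UDT parameter.
string[] refNbrs = { "000001", "000002", "000003" };

foreach (APInvoice invoice in PXSelect<APInvoice,
    Where<APInvoice.refNbr, In<Required<APInvoice.refNbr>>>>
    .Select(graph, refNbrs))
{
    // process each invoice here
}
```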

Impala create external table, stored by Hive

Since yesterday I have been trying to figure out why my table creation is not working. Since I can't link Impala to HBase, I can't run queries on my Twitter stream. :/
Do I need a special JAR for the SerDe properties, like Hive does?
Here is my command:
CREATE EXTERNAL TABLE HB_IMPALA_TWEETS (
id int,
id_str string,
text string,
created_at timestamp,
geo_latitude double,
geo_longitude double,
user_screen_name string,
user_location string,
user_followers_count string,
user_profile_image_url string
)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES (
"hbase.columns.mapping" =
":key,tweet:id_str,tweet:text,tweet:created_at,tweet:geo_latitude,tweet:geo_longitude, user:screen_name,user:location,user:followers_count,user:profile_image_url"
)
TBLPROPERTIES("hbase.table.name" = "tweets");
But I got an error on the STORED BY clause:
Query: create EXTERNAL TABLE HB_IMPALA_TWEETS ( id int, id_str string, text string, created_at timestamp, geo_latitude double, geo_longitude double, user_screen_name string, user_location string, user_followers_count string, user_profile_image_url string ) STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' WITH SERDEPROPERTIES ( "hbase.columns.mapping" = ":key,tweet:id_str,tweet:text,tweet:created_at,tweet:geo_latitude,tweet:geo_longitude, user:screen_name,user:location,user:followers_count,user:profile_image_url" ) TBLPROPERTIES("hbase.table.name" = "tweets")
ERROR: AnalysisException: Syntax error in line 1:
...image_url string ) STORED BY 'org.apache.hadoop.hive.h...
Encountered: BY
Expected: AS
CAUSED BY: Exception: Syntax error
For info, I followed this page:
https://github.com/AronMacDonald/Twitter_Hbase_Impala/blob/master/README.md
Thanks for helping me :)
Well, it seems that Impala still does not support custom SerDes (serialization/deserialization).
"You create the tables on the Impala side using the Hive shell,
because the Impala CREATE TABLE statement currently does not support
custom SerDes and some other syntax needed for these tables: You
designate it as an HBase table using the STORED BY
'org.apache.hadoop.hive.hbase.HBaseStorageHandler' clause on the Hive
CREATE TABLE statement."
So just run the command in the Hive shell (or Hue's Hive editor); then, in Impala, run INVALIDATE METADATA, and you can see your table with SHOW TABLES.
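From a terminal, the sequence might look like this (a sketch; the binary names and the .hql file are assumptions about your setup, with create_hb_impala_tweets.hql holding the CREATE EXTERNAL TABLE statement from the question):

```shell
# 1. Create the HBase-backed table from Hive, since Impala's
#    CREATE TABLE rejects the STORED BY clause:
hive -f create_hb_impala_tweets.hql

# 2. Tell Impala to pick up the new metadata, then verify:
impala-shell -q "INVALIDATE METADATA"
impala-shell -q "SHOW TABLES"
```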
So for this part the problem seems solved.

Restriction when using boss_db?

I am trying to use boss_db to access PostgreSQL. The table must have a column named id, and it must be the primary key.
Is it right that the id type can only be uuid or serial?
I want id to be varchar(20), with its value decided by the program rather than assigned automatically by the DBMS. Is that possible?
create table operators(
id serial primary key, /* I want id to be varchar(20); is that possible? */
tag_id varchar(20),
name text,
barcode varchar(20),
tel varchar(12),
mobile varchar(20),
email text,
ldap_user_id varchar(20),
operator_barcode varchar(20)
);
A = operator:new(id, "0102030405060708",
    "operator_01", "B001", "12345678",
    "13812345678",
    "p001@gmail.com",
    "ldap001",
    "PB001"),
The following code is from the boss_sql_lib.erl file:
%% Split an ID string such as "operator-01" into the record type
%% ("operator") and the table ID ("01"), then convert the table ID
%% according to the key type: kept as a string for uuid keys,
%% converted to an integer for serial keys.
infer_type_from_id(Id) when is_list(Id) ->
    [Type, TableId] = re:split(Id, "-", [{return, list}, {parts, 2}]),
    TypeAtom = list_to_atom(Type),
    IdColumn = proplists:get_value(id, boss_record_lib:database_columns(TypeAtom)),
    IdValue = case keytype(Type) of
        uuid -> TableId;
        serial -> list_to_integer(TableId)
    end,
    {TypeAtom, boss_record_lib:database_table(TypeAtom), IdColumn, IdValue}.
Assign an ID when you create the BossRecord by using your own ID rather than the atom id:
> AutoId = operator:new(id,
      "0102030405060708",
      "operator_01", "B001", "12345678",
      "13812345678",
      "p001@gmail.com",
      "ldap001",
      "PB001"),
> ManualId = operator:new("operator-01",
      "0102030405060708",
      "operator_01", "B001", "12345678",
      "13812345678",
      "p001@gmail.com",
      "ldap001",
      "PB001"),
> AutoId:save(),  % will just work
> ManualId:save(). % good luck
Record IDs are typically created by the framework, and they must be globally unique. I recommend allowing the framework to assign its own IDs rather than creating them manually, as you will probably encounter bugs. However, there is nothing stopping you from doing whatever you want.
