How to create a connection string to connect to the AX database in X++ (AX 7)

I wrote code to connect to the AX database, using the SysSQLSystemInfo class to build the connection string, but it only works in the development environment, not in production. How can I create a database connection string that works on both the development and the production server? My code is below:
public str getConnectionString()
{
    var sqlInfo = SysSQLSystemInfo::construct();
    var loginServer = sqlInfo.getLoginServer();
    var loginDatabase = sqlInfo.getLogInDataBase();

    return strFmt('Server=%1;Database=%2;Trusted_Connection=true', loginServer, loginDatabase);
}
Thanks,
Sandy

Short answer: You cannot connect to the production DB in the manner you appear to be attempting.
Long answer: Based on your comments on the question itself, it appears that you are trying to execute raw SQL code or commands from X++. You will not be able to do it using the technique you have shown in the OP, and regarding your comment to "add condition to check it work with Windows credential or Sql server credential" - this will never work in production.
If my understanding is correct, and you are indeed attempting to execute raw SQL from X++, I would look at the class SRSStatementQuery in the AOT for an example of an out-of-the-box class that does this. The following is a simplified version of it:
public static void main(Args _args)
{
    Connection connection;
    Statement statement;
    SqlStatementExecutePermission permission;
    ResultSet resultSet;
    str sqlStatement = @"SELECT * FROM [%1].CUSTTABLE";

    sqlStatement = strFmt(sqlStatement, xSession::getDbSchema());

    connection = new Connection();
    statement = connection.createStatement();

    permission = new SqlStatementExecutePermission(sqlStatement);
    permission.assert();

    // BP Deviation Documented - performing assert above
    resultSet = statement.executeQuery(sqlStatement);

    while (resultSet.next())
    {
        // process the results
    }
}

Related

How to get GraphDatabaseService instance from Bolt Driver

I previously wrote code that used the Java API of GraphDatabaseService to access an embedded database. Now I want to switch to a remote database, so I have to use the Driver and Session classes to write and run Cypher queries. I don't like Cypher and I don't want to change the old code.
So I'm looking for a way to get a GraphDatabaseService from a Driver, but I haven't found one.
I think a possible approach is to write a GraphDatabaseService delegate that wraps a Driver and converts API calls to Cypher. Is that feasible? Are there already libraries that can do this?
You won't get a direct equivalent of GraphDatabaseService since that's the embedded API.
You can however run Cypher queries like this:
var driver = GraphDatabase.driver(
    "some://address",
    AuthTokens.basic("username", "password")
);
// [...]
try (var session = driver.session()) {
    return session.writeTransaction(tx -> {
        Result result = tx.run(
            "CYPHER QUERY",
            Map.of(/* query parameters */)
        );
        // do something with result
    });
}
// [...]
driver.close(); // (driver is usually a singleton - closes when the application shuts down)
I tried to make a project that solves this problem by wrapping a Driver as a GraphDatabaseService.

Running init script on oracle test container with system privileges

I am struggling with org.testcontainers:oracle-xe:1.14.3.
I am trying to run a test intended to verify schema creation and migration; however, I'm getting stuck at the init script when trying to initialize the users for the test with the user 'sys as sysdba'.
@Before
public void setUp() {
    oracleContainer = new OracleContainer("oracleinanutshell/oracle-xe-11g")
        .withUsername("sys as sysdba")
        .withInitScript("oracle-initscript.sql");
    oracleContainer.start();
}
The above seems to be able to connect, but execution of the init script fails with
ORA-01109: database not open
Using the 'system' user in the above does not give the init script connection sysdba privileges, but it does result in an open database.
I'm looking for a solution that will allow me to initialize multiple users prior to a test. This initialization has grants that require sysdba privileges. The test, in which some SQL scripts are executed, requires that both users are created in the database and can connect to it.
In my case I'm using
oracleContainer = new OracleContainer("gvenzl/oracle-xe:18.4.0-slim")
    .withUsername("test")
    .withPassword("test")
    .withEnv("ORACLE_PASSWORD", "s") // Sys password is required
    .withCopyFileToContainer(MountableFile.forHostPath("oracle-initscript.sql"), "/container-entrypoint-initdb.d/init.sql");
gvenzl/oracle-xe is the default image used by the org.testcontainers.oracle-xe library.
The documentation for this image describes how to call initialization SQL on DB start and it works great.
Hard to say what the issue is, but here are some tips:
maybe "sys as sysdba" is not valid in your code; the documentation is not clear about its usage
maybe withLogConsumer can provide some clues about what's wrong
I recommend the image gvenzl/oracle-xe
in some cases withInitScript may not work properly
it is useful to test the init script on a container started manually
In the end I finished with this approach (as sys admin, I created two different schemas/users):
@SpringBootTest(classes = Main.class)
@Import(DbConfiguration.class)
@Testcontainers
public class ServiceIntegrationTest {

    @Container
    public static final OracleContainer oracleContainer =
        new OracleContainer("gvenzl/oracle-xe:21-slim-faststart");
}
import static com.integrationtests.local_test.service.IntegrationTest.oracleContainer;

@TestConfiguration
public class DbConfiguration {

    static final String DEFAULT_SYS_USER = "sys as sysdba";
    private static final String ENTITY_MANAGER_FACTORY = "entityManagerFactory";

    @Bean
    public DataSource getDataSource() {
        DataSourceBuilder<?> dataSourceBuilder = DataSourceBuilder.create();
        dataSourceBuilder.driverClassName("oracle.jdbc.OracleDriver");
        dataSourceBuilder.url(oracleContainer.getJdbcUrl());
        dataSourceBuilder.username(DEFAULT_SYS_USER);
        dataSourceBuilder.password(oracleContainer.getPassword());
        return dataSourceBuilder.build();
    }
}
Also, configure the init scripts in application.yaml:
spring:
  datasource:
    initialization-mode: always
    schema:
      - classpath:/sql/init_schemas/USER_ONE.sql
      - classpath:/sql/init_schemas/USER_TWOT.sql

EF migration Seeding with large dataset

I am using EF Code First Migrations in an ASP.NET MVC 5 project. I want to seed the database with about 5000 records. The examples on the asp.net website and blogs all have the seeding inside the Configuration class's Seed method.
internal sealed class Configuration : DbMigrationsConfiguration<foo.ApplicationDbContext>
{
    public Configuration()
    {
        AutomaticMigrationsEnabled = false;
        MigrationsDirectory = @"Migrations\ApplicationDbContext";
    }

    protected override void Seed(foo.ApplicationDbContext context)
    {
        SeedProducts(context);
        SeedFoods(context);

        // This method will be called after migrating to the latest version.
        // You can use the DbSet<T>.AddOrUpdate() helper extension method
        // to avoid creating duplicate seed data. E.g.
        //
        //   context.People.AddOrUpdate(
        //     p => p.FullName,
        //     new Person { FullName = "Andrew Peters" },
        //     new Person { FullName = "Brice Lambson" },
        //     new Person { FullName = "Rowan Miller" }
        //   );
        //
    }
}
What are some options to seed large data sets when using EF Migrations?
Currently the Configuration file is over 5000 lines long and is difficult to manage.
I would prefer to store the data in another database or an Excel spreadsheet and then import it using the Seed function. I am not sure how to go about importing data from external sources within the Seed method.
I also tried to break up the data set into several files but when I try to call the function
SeedProducts(context);
SeedFoods(context);
outside of the Configuration class I get a build error: "The name does not exist in the current context" (I am not sure what this means).
You can store the SQL files in the base directory and use them. You can create multiple SQL files. I used the following code to execute all SQL files stored in the base directory.
protected override void Seed(CRM.Data.DbContexts.CRMContext context)
{
    var sqlfiles = Directory.GetFiles(AppDomain.CurrentDomain.BaseDirectory + "\\initialdata", "*.sql");
    sqlfiles.ToList().ForEach(x => context.Database.ExecuteSqlCommand(File.ReadAllText(x)));
}
Why do you need seed data for 5000 records? If you go that route it will also take a lot of manual work, and it isn't required here.
Instead, you can generate scripts and execute them against your database, from code as well as manually.
Database scripts can be generated for the entire database as well as per table or stored procedure, so you get exactly the records you need.
Follow the steps from this link OR MSDN.
Note: after creating the database script, you can read the file from the Seed function and execute the query from the function itself, or you can run it manually whenever you need it.
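For illustration, a minimal sketch of that approach, assuming EF6 and System.IO; the script name and its location in the build output directory are assumptions, not something from the question:
protected override void Seed(foo.ApplicationDbContext context)
{
    // "SeedData.sql" is a hypothetical script generated beforehand and copied to the output directory
    string scriptPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "SeedData.sql");
    context.Database.ExecuteSqlCommand(File.ReadAllText(scriptPath));
}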
I ended up using a CSV (comma-delimited) file, storing it as an embedded resource, then reading the CSV file and adding database records:
I am able to seed the database using the EF Migrations Seed method and a CSV file, as defined below in the Migration.cs file. Note: the CSV file in the Visual Studio project has its Build Action set to Embedded Resource.
public void SeedData(WebApp.Models.ApplicationDbContext Context)
{
    Assembly assembly = Assembly.GetExecutingAssembly();
    string resourceName = "WebApp.SeedData.Name.csv";

    using (Stream stream = assembly.GetManifestResourceStream(resourceName))
    {
        using (StreamReader reader = new StreamReader(stream, Encoding.UTF8))
        {
            CsvReader csvReader = new CsvReader(reader);
            csvReader.Configuration.WillThrowOnMissingField = false;
            var records = csvReader.GetRecords<Product>().ToArray();
            foreach (Product record in records)
            {
                Context.Products.AddOrUpdate(record);
            }
        }
    }
    Context.SaveChanges();
}

How to force only one transaction within multiple DbContext classes?

Background:
From another question here at SO I have a Winforms solution (Finance) with many projects (fixed projects for the solution).
Now one of my customers asked me to "upgrade" the solution and add projects/modules that will come from another Winforms solution (HR).
I really don't want to keep these projects as fixed projects on the existing finance solution. For that I'm trying to create plugins that will load GUI, business logic and the data layer all using MEF.
Question:
I have a context (a DbContext built to implement the Generic Repository Pattern) with a list of external contexts (loaded using MEF - these contexts represent the contexts from each plugin, also using the Generic Repository Pattern).
Let's say I have this:
public class MainContext : DbContext
{
    public List<IPluginContext> ExternalContexts { get; set; }

    // other stuff here
}
and
public class PluginContext_A : DbContext, IPluginContext
{ /* Items from this context */ }
public class PluginContext_B : DbContext, IPluginContext
{ /* Items from this context */ }
and within the MainContext class, already loaded, I have both external contexts (from plugins).
With that in mind, let's say I have a transaction that will impact both the MainContext and the PluginContext_B.
How can I perform updates/inserts/deletes on both contexts within one transaction (unit of work)?
Using IUnityOfWork I can leave SaveChanges() to the last item, but as far as I know I must have a single context for it to work as a single transaction.
There's a way using MSDTC (TransactionScope), but this approach is terrible and I'd rather not use it at all (also because I would need to enable MSDTC on the clients and the server, and I've had crashes and leaks all the time).
Update:
Systems are using SQL Server 2008 R2. Never below.
If it's possible to use TransactionScope in a way that won't escalate to MSDTC, that's fine, but I've never achieved that; every time I've used TransactionScope it goes into MSDTC. According to another post on SO, there are some cases where TS will not go into MSDTC: check here. But I'd really prefer some way other than TransactionScope...
If you are using multiple contexts, each with its own connection, and you want to save data to those contexts in a single transaction, you must use TransactionScope with a distributed transaction (MSDTC).
Your linked question is not that case, because in that scenario the first connection does not modify data, so it can be closed prior to starting the connection where data are modified. In your case data are modified concurrently on multiple connections, which requires a two-phase commit and MSDTC.
You can try to solve it by sharing a single connection among multiple contexts, but that can be quite tricky. I'm not sure how reliable the following sample is, but you can give it a try:
using (var connection = new SqlConnection(connectionString))
{
    var c1 = new Context(connection);
    var c2 = new Context(connection);

    c1.MyEntities.Add(new MyEntity() { Name = "A" });
    c2.MyEntities.Add(new MyEntity() { Name = "B" });

    connection.Open();

    using (var scope = new TransactionScope())
    {
        // This is necessary because DbContext doesn't contain the necessary methods
        ObjectContext obj1 = ((IObjectContextAdapter)c1).ObjectContext;
        obj1.SaveChanges(SaveOptions.DetectChangesBeforeSave);

        ObjectContext obj2 = ((IObjectContextAdapter)c2).ObjectContext;
        obj2.SaveChanges(SaveOptions.DetectChangesBeforeSave);

        scope.Complete();

        // Only after a successful commit of both save operations can we accept changes;
        // otherwise, in a rollback caused by the second context, the changes from the first
        // context would already be accepted = lost
        obj1.AcceptAllChanges();
        obj2.AcceptAllChanges();
    }
}
The Context constructor is defined as:
public Context(DbConnection connection) : base(connection, false) { }
The sample itself worked for me, but it has multiple problems:
The first usage of the contexts must happen with a closed connection. That is why I'm adding entities prior to opening the connection.
I prefer to open the connection manually outside of the transaction, but perhaps it is not needed.
Both SaveChanges calls run successfully, and Transaction.Current has an empty distributed transaction ID, so it should still be local.
The saving is much more complicated, and you must use ObjectContext because DbContext doesn't have all the necessary methods.
It doesn't have to work in every scenario. Even MSDN claims this:
Promotion of a transaction to a DTC may occur when a connection is closed and reopened within a single transaction. Because the Entity Framework opens and closes the connection automatically, you should consider manually opening and closing the connection to avoid transaction promotion.
The problem with the DbContext API is that it closes and reopens the connection even if you open it manually, so it is an open question whether the API always correctly identifies that it is running in the context of a transaction and does not close the connection.
@Ladislav Mrnka
You were right from the start: I have to use MSDTC.
I've tried multiple things here, including the sample code I provided.
I've tested it many times with changes here and there, but it won't work. The problem goes deep into how EF and DbContext work, and to change that I would end up writing my very own ORM tool, which is not the point.
I've also talked to a friend (an MVP) who knows a lot about EF.
We tested some other things here, but it won't work the way I want. I'll end up with multiple isolated transactions (I was trying to tie them together with my sample code), and with this approach I don't have any way to enforce a full rollback automatically; I'd have to write a lot of generic/custom code to manually roll back the changes, and then comes another question: what if this sort of rollback fails (it's not a rollback, just an update)?
So the only way we found is to use MSDTC and build some tools to help debug/test whether DTC is enabled, whether the client/server firewalls are OK, and all that stuff.
Thanks anyway.
=)
So, any chance this has changed by October 19th? All over the intertubes, people suggest the following code, and it doesn't work:
(_contextA as IObjectContextAdapter).ObjectContext.Connection.Open();
(_contextB as IObjectContextAdapter).ObjectContext.Connection.Open();
using (var transaction = new TransactionScope(TransactionScopeOption.Required, new TransactionOptions { IsolationLevel = IsolationLevel.ReadUncommitted, Timeout = TimeSpan.MaxValue }))
{
    _contextA.SaveChanges();
    _contextB.SaveChanges();

    // Commit the transaction
    transaction.Complete();
}
// Close open connections
(_contextA as IObjectContextAdapter).ObjectContext.Connection.Close();
(_contextB as IObjectContextAdapter).ObjectContext.Connection.Close();
This is a serious drag for implementing a single Unit of Work class across repositories. Any new way around this?
To avoid using MSDTC (distributed transaction):
This should force you to use one connection within the transaction as well as just one transaction. It should throw an exception otherwise.
Note: At least EF6 is required
class TransactionsExample
{
    static void UsingExternalTransaction()
    {
        using (var conn = new SqlConnection("..."))
        {
            conn.Open();

            using (var sqlTxn = conn.BeginTransaction(System.Data.IsolationLevel.Snapshot))
            {
                try
                {
                    var sqlCommand = new SqlCommand();
                    sqlCommand.Connection = conn;
                    sqlCommand.Transaction = sqlTxn;
                    sqlCommand.CommandText =
                        @"UPDATE Blogs SET Rating = 5" +
                        " WHERE Name LIKE '%Entity Framework%'";
                    sqlCommand.ExecuteNonQuery();

                    using (var context =
                        new BloggingContext(conn, contextOwnsConnection: false))
                    {
                        context.Database.UseTransaction(sqlTxn);

                        var query = context.Posts.Where(p => p.Blog.Rating >= 5);
                        foreach (var post in query)
                        {
                            post.Title += "[Cool Blog]";
                        }
                        context.SaveChanges();
                    }

                    sqlTxn.Commit();
                }
                catch (Exception)
                {
                    sqlTxn.Rollback();
                }
            }
        }
    }
}
Source:
http://msdn.microsoft.com/en-us/data/dn456843.aspx#existing
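Applied to the original question, a minimal sketch of the same idea with two contexts sharing one connection and one transaction; this assumes EF6, that both contexts map to the same database, and that MainContext/PluginContext_B expose constructors taking an existing DbConnection (those constructors and connectionString are assumptions, not shown in the question):
using (var connection = new SqlConnection(connectionString))
{
    connection.Open();

    using (var transaction = connection.BeginTransaction())
    {
        // Both contexts reuse the already open connection (contextOwnsConnection: false)
        using (var main = new MainContext(connection, contextOwnsConnection: false))
        using (var plugin = new PluginContext_B(connection, contextOwnsConnection: false))
        {
            main.Database.UseTransaction(transaction);
            plugin.Database.UseTransaction(transaction);

            // ... make changes on both contexts ...

            main.SaveChanges();
            plugin.SaveChanges();
        }

        // One local SQL transaction covers both SaveChanges calls, so there is no MSDTC escalation
        transaction.Commit();
    }
}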

BLToolkit Oracle SP support

Does BLToolkit support Oracle stored procedures? I've tried the numerous methods described below to get it to work, but no luck. The stored procedure updates a table with several values. This is the stored procedure, a small test procedure:
DROP PROCEDURE BETA_AUTO_UPDATE;
/
CREATE OR REPLACE PROCEDURE BETA_AUTO_UPDATE
(
    AutoId IN NUMBER,
    Rule   IN NVARCHAR2,
    Nam    IN NVARCHAR2,
    Loc    IN NVARCHAR2
)
IS
BEGIN
    UPDATE Beta_Auto
    SET RuleGuid = Rule,
        Name = Nam,
        Location = Loc
    WHERE Id = AutoId;
EXCEPTION
    WHEN OTHERS THEN
        RAISE_APPLICATION_ERROR(-20001, 'ERROR OCCURED DURING UPDATE');
END BETA_AUTO_UPDATE;
/
Tried the following
DbManager.AddDataProvider(new OdpDataProvider());
DbManager OracleDb = new DbManager("BetaOracleDBConn");
Beta_Auto Betar = new Beta_Auto();
Betar.ID = 1;
Betar.Name = "Jim";
Betar.RuleGuid = "jlDKDKDKDKDKDKp";
Betar.Location = "LocDLDLDLDLDtor";
OracleDb.SetSpCommand("Beta_Auto_UPDATE",
OracleDb.CreateParameters(Betar)).ExecuteNonQuery();
That didn't work.
Tried this
[ActionName("UPDATE")]
public abstract void Update(Beta_Auto Auto);
That didn't work.
Tried this:
[SprocName("Beta_Auto_Update")]
public abstract void UpdateByParam(
    [Direction.InputOutput("ID", "RuleGuid", "Name", "Location")] Beta_Auto Auto);
That didn't work.
[SprocName("Beta_Auto_Update")]
public abstract void UpdateByParam(int Id, string RuleGuid, string Name, string Location);
Also tried this:
[ActionName("Update")]
public abstract void UpdateByParam(int Id, string RuleGuid, string Name, string Location);
That didn't work.
Set the trace level on ODP.NET to 7. Saw that the call was being made, but couldn't see any parameters. Swapped out XE for enterprise Oracle (thought it might have been a licensing problem, as the DB was bigger than 5 GB). Didn't work.
Created a new user, datafile, and tablespace, and assigned all roles and privileges, including Execute Any Procedure, to the user. Didn't work.
I wrote a standard ADO.NET version (very long-winded) to call the stored procedure via OracleCommand, and it ran perfectly and did the update.
I am stumped. All of the above work for SQL Server.
Thanks.
scope_creep
I'm doing it like this:
var parameters = OracleDb.GetSpParameters("BETA_AUTO_UPDATE", true, true);
parameters.SetParamValue("pParam1", param1);
parameters.SetParamValue("pParam2", param2);
...
OracleDb.SetSpCommand("BETA_AUTO_UPDATE", parameters).ExecuteNonQuery();
It's an extra round trip, but since I'm only using stored procedures for a couple of big batch updates it doesn't really matter (normal/simple updates are done with LINQ DML operations).
