I have the Oracle SQL Developer client. I tried running my stored procedure and it completed successfully.
Now I need to schedule the stored procedure. I followed the steps to schedule its execution, but it is not executed at the scheduled time.
I suspect the permissions available to my client connection to the database. Can a stored procedure be scheduled with client access? If so, is there any way to check whether the existing client has the permissions to perform scheduling?
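As a starting point, and assuming the scheduling was done through DBMS_SCHEDULER (the scheduler that SQL Developer's UI manages), a query along these lines should show whether the connected account currently has the relevant system privileges:

-- Privileges currently in effect for this session, including those granted through roles.
-- CREATE JOB is the system privilege normally required to create DBMS_SCHEDULER jobs in
-- your own schema; MANAGE SCHEDULER is only needed for scheduler administration.
SELECT privilege
FROM   session_privs
WHERE  privilege IN ('CREATE JOB', 'CREATE ANY JOB', 'MANAGE SCHEDULER');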
I'm using Firebase for my mobile app's entire backend, and I was wondering which of the following two approaches is a more reliable way of creating a user and then a batch of documents.
1. Create the Firebase user (using Firebase Auth) on the client and, if that succeeds, perform a Firestore batch write, also on the client, to create the documents.
2. Call a Firebase Cloud Function from the client to perform the above task.
My reliability concern has to do with the network. If the user is created on the client but the client is then unable to create the documents, well, that's not good. But if the client is able to invoke a Cloud Function, then I feel like network reliability is not an issue in the cloud environment. If the task fails in the cloud, it will be because of an error that I have control over (e.g. bad email format, weak password). Which of the two is more reliable for my purposes?
If the user has a spotty connection, the call to the Cloud Function is just as likely to fail as the call that writes the documents directly to Firestore. The main difference is that the Firestore SDKs will automatically retry the write operation in such cases, while with a Cloud Function you'd have to implement that retry yourself.
I would definitely recommend option 1. You should create the user with Firebase Auth and then create a collection called "users" and add a document with the user's UID, which is auto-generated. This should occur after you ensure that there is no error in the Firebase Auth process; if there is, you should just display the error. If you need more specific info, feel free to respond.
I am connecting to an on-premises SQL Server and an Azure SQL database that is in a pool, both from the same SSMS instance.
When I right-click on a stored procedure in any database on the on-premises server, I see the Properties option and can add/modify permissions. But the Azure SQL connection does not show the Properties option for stored procedures, so I must GRANT in T-SQL.
There is a lot of info on the Properties General and Permissions pages that I'd like to be able to see easily in SSMS. Is there a setting in Azure SQL that enables / disables Properties?
Thanks for any help!
Azure SQL Database doesn't support all of the Properties settings in SSMS.
For example, SSMS does support the database-level Properties settings for an Azure SQL database.
But, as you found, it doesn't support the Properties settings for stored procedures.
Is there a setting in Azure SQL that enables / disables properties?
No, there is no such setting in the Portal or in SSMS, no matter which account (admin or db_owner) you use to log in to the Azure SQL database.
The only way is to grant the permissions in T-SQL.
Hope this helps.
This isn't an Azure SQL Database issue, but a database permissions issue.
Are you connecting in the same way - i.e. both using a SQL login?
You will need to ensure that you grant the same permissions for the user on both databases. One way might be to script out the permissions from the on-premises database and run the resulting script on the Azure one. You should still do this at the lowest level possible so that you don't grant excessive permissions; I suggest the schema level.
The minimum permission that you need is:
GRANT VIEW DEFINITION ON SCHEMA::<my schema name> TO <my user>
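If it helps, a rough sketch for scripting out the existing schema-level grants from the on-premises database, so they can be replayed on the Azure database, could look like this (it only covers schema-scoped permissions, so extend it if you need object-level ones as well):

-- Generate GRANT statements for all schema-scoped permissions in the current database.
-- Run this on the on-premises database, then execute the generated statements on Azure.
SELECT 'GRANT ' + dp.permission_name
     + ' ON SCHEMA::' + QUOTENAME(s.name)
     + ' TO ' + QUOTENAME(pr.name) + ';'
FROM sys.database_permissions AS dp
JOIN sys.database_principals AS pr
    ON dp.grantee_principal_id = pr.principal_id
JOIN sys.schemas AS s
    ON dp.class_desc = 'SCHEMA'
   AND dp.major_id = s.schema_id
WHERE dp.state = 'G';   -- plain GRANTs only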
I want to create an MVC project that works in both online mode and offline mode. For example, when a user works offline and no internet connectivity is available, all data is stored on the local machine; when internet connectivity becomes available, all the data is pushed to the server.
Please help me understand how I can do this.
Thanks
For that, you have to use a third-party sync service, SQL sync, the Microsoft Sync Framework, or something similar.
You have to create another project that runs at a certain time and executes your sync process from the local database to the live server database, and vice versa.
You should use a GUID as your unique (PK) value, because at sync time a live server table receives incoming data from any of the local databases, so the PKs generated in your local db tables are no longer guaranteed to be unique in the live server db tables.
Note: for this type of offline/online sync process, your PK column should be of type VARCHAR(36) and store a GUID value.
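A minimal sketch of what that can look like in SQL Server is below; the table and columns are hypothetical, and the same definition would be used in the local databases and on the live server:

-- Hypothetical table: the PK is a GUID stored as VARCHAR(36), so rows created offline
-- on different machines cannot collide when they are later pushed to the live server.
CREATE TABLE dbo.Orders
(
    OrderId      VARCHAR(36)   NOT NULL
        CONSTRAINT PK_Orders PRIMARY KEY
        CONSTRAINT DF_Orders_OrderId DEFAULT (CONVERT(VARCHAR(36), NEWID())),
    CustomerName NVARCHAR(100) NOT NULL,
    CreatedAtUtc DATETIME2     NOT NULL
        CONSTRAINT DF_Orders_CreatedAtUtc DEFAULT (SYSUTCDATETIME()),
    IsSynced     BIT           NOT NULL
        CONSTRAINT DF_Orders_IsSynced DEFAULT (0)   -- set to 1 once the row has been pushed to the live server
);

The IsSynced flag is just one simple way to track which rows still need to be pushed; a sync framework will typically come with its own change tracking if you go that route.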
I am using the Entity Framework provider and the SqlClient provider, targeting the same SQL Server, in a single transaction scope. I am getting the error below:
Network access for Distributed Transaction Manager (MSDTC) has been disabled. Please enable DTC for network access in the security configuration for MSDTC using the Component Services Administrative tool.
I don't want to escalate to MSDTC, as only one SQL Server is being used. Please suggest how to avoid this.
A distributed transaction is required if more than one SqlConnection is used, regardless of the number of servers and databases. This is because each connection has its own SQL session that can be independently committed and rolled back. If you have more than one connection then a distributed transaction coordinator is needed to coordinate the two separate transactions.
If you don't want to escalate then you can only use one connection in the transaction.
UPDATED 2010-11-25
A legacy stand-alone application (A1) is being re-created as a web application (A2).
A1 is written in Delphi 7 and uses a MS Access database to store the data. A1 has been distributed to ~1000 active users that we have no control over during the build of A2.
The database has ~50 tables, some of which contain user data and some of which contain template data (which does not need to be copied); 3-4 of the user tables are larger (<5000 records), and the rest are small (<100).
Once A2 is 'live', users of A1 should be able to migrate to A2. I'm looking for a comparison of scenarios for doing so.
One option is to develop a stand-alone 'update' tool for these users, and have this update tool talk to the A2 database through web services.
Another option is to allow users to upload their Access database (~15 MB) to our server, run some kind of SSIS package (overnight, perhaps) to get this into A2 for that user, and delete the Access database afterward.
Am I missing options? Which option is 'best'? (I understand this may be somewhat subjective, but hopefully the pros and cons of the scenarios can at least be made clear.)
I'll gladly make this a community wiki if so demanded.
UPDATE 2010-11-23: it has been suggested that a variant of scenario 1 would be to have the update tool/application talk directly to the production database. Is this feasible?
UPDATE 2011-11: By now, this has been taken into production. Users upload the .zip file that the .mdb is in, which is unpacked and placed in a secure location. A nightly scheduled SSIS job comes along and moves the data into staging tables, which are then moved into production through stored procedures.
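A heavily simplified, hypothetical sketch of what such a staging-to-production step might look like (the actual procedures, tables, and columns differ; @UploadId is just an illustrative way to identify which user's upload is being processed):

-- Hypothetical staging-to-production move for one uploaded Access database.
CREATE PROCEDURE dbo.MoveStagedCustomers
    @UploadId INT
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRANSACTION;

    -- Copy the staged rows into the production table, keeping track of which upload they came from.
    INSERT INTO dbo.Customers (UploadId, CustomerName, City)
    SELECT s.UploadId, s.CustomerName, s.City
    FROM staging.Customers AS s
    WHERE s.UploadId = @UploadId;

    -- Remove the staged rows that have just been moved.
    DELETE FROM staging.Customers
    WHERE UploadId = @UploadId;

    COMMIT TRANSACTION;
END;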
I would lean toward uploading the complete database and running the conversion on the server.
In either case you need to write a conversion program. The real question is how much of the conversion you deploy and run on the customers' computers. I would keep that part as simple as possible, i.e. just the upload. That way, if you find any bugs or unexpected data during the conversion, you can simply update the server and not need to re-deploy your conversion program.
The total amount of data you are talking about is not too large to upload, and it sounds like the majority of it would need to be uploaded in any case.
If you install a conversion program locally it would need a way to recover from a conversion that stopped part way through. That can be a lot more complicated than simply restarting an upload of the access database.
Also, you don't indicate that there would be any need for the web services after the conversions are done. The effort to put those services together, and to keep them running and secure during the conversions, would be far more than that of a simple upload application or web form.
Another factor is how quickly your customers would convert. If some of them will run the current application for some time period you may need to update your conversion application as the server database changes over time. If you upload the database and run the conversion on the server then only the server conversion program would need to be updated. There would not be any risk of a customer downloading the conversion program but not running it until after the server databases were updated.
We have a similar case where we chose to run the conversion on the server. We built a web page for the user to upload their files. In that case there is nothing to deploy for the new application. The only downside we found is getting the user to select the correct file. If you use a web form for the upload, you can't pre-select the file name for the user because of security restrictions. In our case we knew where the file was located, but the customers did not. We provide directions on the upload page to help the users out. You could avoid this by writing a small desktop application to perform the upload for the users.
The only downside I see to writing a server-based conversion is that some of your template data will be uploaded even though it is not needed. That is a small amount of data anyway.
Server Pros:
- No need to re-deploy the conversion due to bugs, unexpected data, or changes to the server database
- Easier to secure (possibly); there is only one access point: the upload. Of course, you are accepting customer data in the form of an Access database, so you still can't trust anything in it.
Server Cons:
- Uploads unneeded template data
Desktop Pros:
- ? I'm having trouble coming up with any
Desktop Cons:
- May need multiple versions deployed
As for talking to a server database directly: I have one application that talks to a hosted database directly to avoid creating web services. It works OK, but given the chance I would not take that route again. The internet connection drops on a regular basis, and the SQL providers do not recover very well; we have trained our clients to just try again when that happens. We just reference the IP address in the server connection string. There is an entire list of security reasons not to take this route - we were comfortable with our security setup and the possible risks. In the end, the trade-off of using the desktop application with no modifications was not worth having an unstable product.
Since the new database server is likely to be one of the standard database engines in the industry, why not consider linking the Access application to this database server? That way you can simply send your data up to SQL Server.
I'm not really sure why you'd even consider suggesting a set of web services in front of a database engine when Access supports an ODBC link to that database engine. One potential upgrade path would be to simply issue a new Access application that is placed in the same directory as the users' current data file (and application). On startup, this application can re-link all of its tables to the existing local database, and it can also ship with a pre-linked set of tables pointing to the database server. This is far less work than building up some type of web services approach. I suppose part of this centers on where the database server is going to be hosted, but in most cases, during the migration period, you would have the database server running somewhere everyone can access it. A good many web providers allow external links to their database now.
It's also not clear whether, on the database server, you're going to create separate databases for each user or whether, as you suggest in your title, it's all going to be placed into one database. Since it is going to be placed into one database, then during the upsizing an additional column that identifies the user, the location, or however else you plan to distinguish each source database will have to be added to distinguish each user's set of data.
How easy this type of migration will be depends on the schema and database layout that the developers are using for the new system. Hopefully, and obviously, it has provisions for each user or location, or however you plan to distinguish each individual user of the system. So I don't suggest web services, but I do suggest linking tables from the Access application to the instance of SQL Server (or whatever server you run).
How best to do this will depend on the referential integrity and business rules that must be enforced, if there are any. For example, is there the possibility of duplicates when the databases are merged? I gather they are being merged from your somewhat cryptic statement: "And yes, one database for all, aspnet membership for user id's".
If you have no control of the 1000+ users of A1, how are you going to get them all to convert to A2?
Have you considered giving them an SQL Server Express DB to upgrade to, and letting them host the Web App on their own servers?