Background: I have TFS 2015 Update 3 backed by a two-node database cluster. The databases are in an AlwaysOn availability group. Or at least they should be.
The precipitating event: Recently we created a new database by restoring a backup of a detached collection from another TFS server. It all worked as expected until the cluster failed over from node A to node B. Then we noticed that the new database had not been added to AlwaysOn, so it was not present on node B.
The solution: Simple: switch the cluster back to node A and set up AlwaysOn for the new database. Everything worked as expected EXCEPT....
The puzzle: While the database was primary on node B (due to the unexpected switch from node A), the list of users in the Administration Console Users section of the TFS Administration Console was not correct: some users were missing. When we switched back to node A, the list of Administration Console users was complete and correct. Since I know the tfs_Configuration database IS set up with AlwaysOn, and since I presume that is where the Administration Console users are stored, how can this be? I inspected the database separately on each DB server and both contained the complete list of databases. We are certain the updates to the Administration Console users predate adding the new collection databases.
So how is it possible that the list of Administration Console users was incorrect on node B while the list of databases was correct? Is my presumption wrong, and are those users stored somewhere else?
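One way to verify which databases actually joined the availability group is to query the AlwaysOn catalog views on each node. This is a sketch using standard SQL Server 2012+ catalog views only (no TFS-specific objects assumed):

```sql
-- Lists every database and, if it belongs to an AlwaysOn availability
-- group, the group's name; NULL means the database is NOT in any AG.
SELECT d.name  AS database_name,
       ag.name AS availability_group
FROM sys.databases AS d
LEFT JOIN sys.availability_replicas AS ar
       ON d.replica_id = ar.replica_id
LEFT JOIN sys.availability_groups AS ag
       ON ar.group_id = ag.group_id
ORDER BY d.name;
```

Running this on both nodes shows whether tfs_Configuration (and the restored collection database) is really replicated, independent of what the TFS Administration Console reports.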
Is it possible to change your active/default database connection in SQL Workbench/J while still under a single connection profile? There are times I am connected to a database server with multiple databases and I would like to switch my active database without having to use a USE statement, specify the full 3 part naming convention, or switch connection profiles entirely. In SSMS, there is a simple drop-down menu to easily switch between different databases. Just wondering if there is something similar in SQL Workbench/J that I'm just missing.
There is an experimental feature to enable a dropdown with the available databases in the main window.
If you run
WbSetDbConfig gui.enable.dbswitcher=true;
in a SQL editor tab when connected to a SQL Server database, then you should have a dropdown to switch the current database after restarting SQL Workbench/J.
It will essentially issue a USE in the background for the current connection when using SQL Server.
Looking for some clarification on how incremental sync works. I recently configured Ranger/AD sync with incremental sync off and the user search filter blank. This resulted in all users from AD being added to Ranger.
This was just intended as a base-case test, but when adding a new user search filter to the Ranger AD configs in Ambari and restarting the Ranger service, no changes appear to have been made (which is what I had expected with incremental sync set to off), and ALL of the AD users are still visible, not just the ones matched by the filter. At this point I have some questions:
If I were to go into the Users and Groups menu in the Ranger UI, manually delete all of the AD users and groups, then add the user search filter to the Ranger configs and restart Ranger, would that wipe the rest of the users from Ranger's user DB and leave only the AD users matched by the search filter? Is there any other way to get this desired result?
What would happen if I accidentally deleted a Unix user from the Users and Groups menu in the Ranger UI? Would they repopulate once Ranger was restarted, or would I need to do something else to fix the mistake?
From the Apache email list...
Once users and groups are sync'd to the Ranger DB, deleting them is an admin-only manual operation. Ranger doesn't delete users and groups automatically based on search filter changes. But once you clean up the users and groups and restart Ranger Usersync, it should pull in only the users and groups matched by the configured filter.
Just an FYI: for testing purposes, Ranger Usersync supports a config property, ranger.usersync.policymanager.mockrun, which can be set to true so that the sync'd users and groups are not written to the Ranger DB. https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.4/installing-ranger/content/ranger_install_configure_ranger_user_sync.html
If a user/group is deleted from the Ranger UI, then once Ranger is restarted those users/groups are sync'd to the Ranger DB again, based on the sync configuration.
So from these points, note that if the default HDP/Hadoop users (e.g. hdfs, yarn, livy) were created as Unix users on each machine in the cluster (which is what HDP 3.1.0 does by default) and you manually delete these users from the Ranger UI, they will not reappear once Ranger is restarted. That is, Ranger will not look at both AD and local Unix users (perhaps there is a way to change this in the configs?). What you would need to do, then, is switch Ranger user sync to "unix", restart Ranger to let the local Unix users sync, then switch the configs back to AD and restart again to get the AD users into Ranger (the previously synced Unix users should still be there, since Ranger does not delete users based on the user sync configs).
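For reference, the Usersync properties discussed above look roughly like the following in Ambari. The property names follow the HDP documentation linked above; the filter value itself is purely an illustration (the group DN is made up):

```properties
# Pull only members of one AD group (illustrative filter, not a real DN)
ranger.usersync.ldap.user.searchfilter=(memberOf=CN=hadoop-users,OU=Groups,DC=example,DC=com)

# Incremental (delta) sync off for this base-case test
ranger.usersync.enabled=true
ranger.usersync.ldap.deltasync=false

# Dry run: compute the sync but do not write users/groups to the Ranger DB
ranger.usersync.policymanager.mockrun=true
```

With mockrun=true you can check the Usersync log to see what a filter would pull in before letting it touch the Ranger DB.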
I am in the process of switching the LDAP backend that we use to authenticate access to Gerrit.
When a user logs in via LDAP, a local account is created within Gerrit. We are running Gerrit 2.15, so our local user accounts have been migrated from the SQL DB into NoteDb.
The changes in our infrastructure mean that once the LDAP backend has been switched, user logins will appear to Gerrit as new users, and therefore new local accounts will be generated. As a result we will need to perform a number of administrative tasks on the existing local accounts before and after migration.
The REST API exposes some of the functionality that we need, however two key elements appear to be missing:
There appears to be no way to retrieve a list of all local accounts through the API (such that I could then iterate through them to perform the administrative tasks I need to complete). The /accounts/ endpoint insists on a query filter being specified, and there does not appear to be a way to simply specify 'all' or '*'. Instead I am having to try to think of a search filter that will reliably return all accounts, and I haven't succeeded yet.
There appears to be no way to delete an account. Once the migration is complete, I need to remove the old accounts, but no API call or other method for removing old accounts is documented.
Has anybody found a solution to either of these tasks that they could share?
I came to the conclusion that the answers to my questions were:
('/a/' in the examples below accesses the administrative endpoint, so basic auth is required and the user must have appropriate permissions)
Retrieving all accounts
There is no way to do this in a single query, however combining the results of:
GET /a/accounts?q=is:active&n=<number larger than the number of users>
GET /a/accounts?q=is:inactive&n=<number larger than the number of users>
will give effectively the same thing.
Deleting an account
Seems that this simply is not supported. The only option appears to be to set an account inactive:
DELETE /a/accounts/<account_id>/active
I have a remote Firebird 3.0 server with a database containing a big table. Clients query this table very often during their work. There are many clients and the internet connection is poor, so working with this table is painful. I made a local copy of this table via IBExpert into a temporary database, which is distributed with the client application.
But now some values in this table need to change (new values added and some old ones edited). So I need some kind of synchronization: copying the modified remote table to the client's local database.
The client application was made with Delphi Berlin 10.1, so the synchronization should be done in Delphi code.
Can you give me an idea of how to correctly synchronize such a big table, please?
You could fire POST_EVENT in triggers on the master database (for insert, update and delete) to notify client applications that there are changes.
Then your client would need to run a procedure (on the local DB) to do the sync. This could be done with EXECUTE STATEMENT ... ON EXTERNAL, fetching only rows modified since the last sync:
FOR EXECUTE STATEMENT ('SELECT ... FROM tablename WHERE tablename.modifiedon >= ?') (:last_sync)
    ON EXTERNAL 'SERVER/PORT:DBPATH'
You should store the date of each insert/modify/delete in the master DB, e.g. a modifiedon timestamp column maintained by triggers (deletes need separate handling, such as a tombstone table, since a deleted row carries no timestamp).
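A minimal sketch of the master-side pieces described above. The table name big_table, the event name big_table_changed, and the modifiedon column are assumptions for illustration; SET TERM is needed when running this in isql:

```sql
SET TERM ^ ;

-- Keep the modification timestamp current on every insert/update.
CREATE TRIGGER big_table_stamp FOR big_table
ACTIVE BEFORE INSERT OR UPDATE POSITION 0
AS
BEGIN
  NEW.modifiedon = CURRENT_TIMESTAMP;
END^

-- Notify interested clients that something changed.
CREATE TRIGGER big_table_notify FOR big_table
ACTIVE AFTER INSERT OR UPDATE OR DELETE POSITION 0
AS
BEGIN
  POST_EVENT 'big_table_changed';
END^

SET TERM ; ^
```

The Delphi client can register for the event (e.g. via the Firebird client library's event API) and, when it fires, pull rows where modifiedon is newer than its locally stored last-sync timestamp.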
I am trying to run a stored procedure from a limited-permission login that has been granted execute permission on said stored procedure. The stored procedure accesses 2 databases that exist on the same server. When I execute the stored procedure I receive an error that states:
The server principal "LimitedUser" is not able to access the database "Database2" under the current security context.
Some background:
I have recently been tasked with consolidating our 2 different database servers onto a single server. I have backed up and exported the necessary databases and restored them on the new server. The older databases are MS SQL Server 2000 (for Database2) and MS SQL Server 2005 (for Database1, where the aforementioned stored proc is located).
I have found some leads suggesting that because I imported the databases, the owners were different and that would cause a problem. So I ran "exec sp_changedbowner 'sa'" on the 2 databases to ensure they had the same owner. I still got the same error when running the stored proc as LimitedUser. A lot of other examples on various forum sites deal with databases on different servers and having to use OPENQUERY; I do not believe that is necessary here.
When I run it as a user with more admin permissions, the stored proc runs just fine. So my question is: what permissions should I be setting to allow this action for LimitedUser?
Thanks!
LimitedUser needs permissions on Database2 to do whatever the stored procedure is doing in that database. Ownership chaining only works within a single database (unless you enable the server option Cross Database Ownership Chaining, which I don't recommend, as it breaks down the database container as a security boundary).
So, for example: you have db1 and db2, and a stored proc in db1 that executes SELECT * FROM db2.dbo.table1.
For this you need LimitedUser to have:
execute permissions in the db1 database for the procedure
select permissions on table1 in db2
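These two grants might look like the following. This is a sketch under the example names above; dbo.MyProc is a placeholder for the actual procedure, and the CREATE USER step is only needed if the login is not yet mapped to a user in db2:

```sql
-- In db1: allow the login to execute the procedure itself
USE db1;
GRANT EXECUTE ON dbo.MyProc TO LimitedUser;

-- In db2: map the login to a database user (skip if it already exists)
USE db2;
CREATE USER LimitedUser FOR LOGIN LimitedUser;

-- Grant only what the procedure needs in db2
GRANT SELECT ON dbo.table1 TO LimitedUser;
```

Because ownership chaining stops at the database boundary, the SELECT permission in db2 must be granted explicitly even though LimitedUser never queries table1 directly.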