The following best practice checks fail when Cassandra's PasswordAuthenticator is enabled:
Search nodes enabled with bad autocommit
Search nodes enabled with query result cache
Search nodes with bad filter cache
My values comply with the recommended values, and I have confirmed that the checks do pass when I disable authentication in Cassandra. What's odd is that there are six checks under the "Solr Advisor" category of the Best Practice Service, yet only these three fail when authentication is enabled.
Is this a known bug in OpsCenter? I'm using v5.0.1, but I've seen this since v5.0.0.
Where can I file bug reports like this? Does DataStax have a public bug tracker?
PS:
I actually feel that this question is more appropriate on ServerFault, but I don't have enough reputation on that site to create the tags "datastax" and "datastax-enterprise". Can somebody please do so and move this question?
When Cassandra is using the PasswordAuthenticator, the HTTP routes that the OpsCenter agent uses to determine the Solr schema settings also become password protected; however, the agent does not supply the password properly. This is a bug in the OpsCenter agent and can be referenced as OPSC-3605.
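If you want to confirm the symptom yourself, you can hit one of the Solr HTTP routes directly, once without and once with credentials, and compare the status codes. The Node sketch below is only an illustration: the host, port, path, and credentials are placeholders, so substitute whichever route your agent logs show it requesting.

var http = require('http');

// Probe a Solr HTTP route, optionally with HTTP basic-auth credentials.
function probe(auth) {
  http.get({
    host: 'solr-node.example.com',            // placeholder search node
    port: 8983,                               // default Solr HTTP port
    path: '/solr/admin/cores?action=STATUS',  // illustrative route only
    auth: auth                                // e.g. 'cassandra:cassandra'
  }, function (res) {
    console.log((auth ? 'with auth:    ' : 'without auth: ') + res.statusCode);
  });
}

probe();                       // expect 401 once PasswordAuthenticator is enabled
probe('cassandra:cassandra');  // expect 200 with valid credentials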
Unfortunately, DataStax Enterprise does not have a public bug tracker. If you're a DSE customer, the best route is probably to raise the issue through DSE support.
Related
If I execute a query from VS Code that fails, why does Azure raise a security threat alert for this? How can I prevent it from happening (other than never submitting a query that fails, lol)? Is this a permissions issue? Can I have the admin suppress this type of 'threat' from my machine/login? This also happens if there is a failed query from Python/SQLAlchemy. It doesn't happen often, but it is rather annoying to have to explain.
Threat: Potential SQL Injection
This seems like a bug. I've reported it to someone on the product team to check. Can you send an example of this with a screenshot to me at AzCommunity@microsoft.com so that I can bubble this up?
I am facing a problem with logging into TFS. I get the following error:
Exception Message: TF246017: Team Foundation Server could not connect to the database. Verify that the server that is hosting the database is operational, and that network problems are not blocking communication with the server. (type SoapException)
SoapException Details:
Hi, the steps below worked for me:
Select Application Tier in the TFS Administration Console.
Find the Application Tier Summary, which contains the Service Account details.
Click Reapply Account.
I know this is old, but here was my situation:
We have 11 collections on our instance; 2 were failing with this error, which showed me it wasn't an access/connection issue. Checking Event Viewer (as @Andy Li-MSFT suggests) showed it was:
A timeout occurred while waiting for memory resources to execute the query in resource pool 'default' (2). Rerun the query.
Checking Task Manager showed the culprit: Elasticsearch was using well over 2 GB of memory. I killed the service, and the collections applied the patch quickly without issue.
Looks like I need to ask our server admins to give us a bit more memory....
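If you'd rather watch Elasticsearch's memory use from its own stats API instead of Task Manager, a small Node script like the one below works. The localhost:9200 address is an assumption (the usual default for the search Elasticsearch instance), so point it at whatever host and port your search service actually uses.

var http = require('http');

// Print JVM heap usage for each Elasticsearch node via the node-stats API.
http.get({ host: 'localhost', port: 9200, path: '/_nodes/stats/jvm' }, function (res) {
  var body = '';
  res.on('data', function (chunk) { body += chunk; });
  res.on('end', function () {
    var stats = JSON.parse(body);
    Object.keys(stats.nodes).forEach(function (id) {
      var node = stats.nodes[id];
      var usedMb = Math.round(node.jvm.mem.heap_used_in_bytes / (1024 * 1024));
      console.log(node.name + ': ' + usedMb + ' MB heap used (' +
        node.jvm.mem.heap_used_percent + '%)');
    });
  });
});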
Please check the things below to narrow down the issue:
Make sure you are a member of the Administration Console Users; otherwise you cannot access the Admin Console.
Make sure SQL Server is started and available, and that network connectivity is OK.
Check the Service Account, and make sure the Service Account has been added in SQL Server.
You can also refer to the solution in the link below to fix the issue:
https://www.ganshani.com/alm/tfs/visual%20studio/solved-tf246017-team-foundation-server-could-not-connect-to-the-database/
If the above solution does not resolve the problem, please check the event log. The Windows Event Log is a good place to look for the potential cause.
In my case, I solved the issue by changing the database's recovery model from Simple to Full.
Please refer to: https://www.mssqltips.com/sqlservertutorial/3/sql-server-full-recovery-model/
My GRAPHENEDB_URL is obtained from Heroku to access my Neo4j database online. It is correct, but when I initiate the DB connection it returns error 403, which is a forbidden request.
I'm the founder & CEO of GrapheneDB. philippkueng/node-neo4j supports authentication via the URL.
According to the project's readme, the snippet should look like this. I've adjusted it to load the connection URI from the env variable:
var neo4j = require('node-neo4j');
var db = new neo4j(process.env.GRAPHENEDB_URL);
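As a quick smoke test once the connection is created, you can run a trivial Cypher query. The cypherQuery call below is taken from the project's readme, so verify it against the driver version you actually have installed:

// Smoke test: run a trivial query through the same db handle.
db.cypherQuery('MATCH (n) RETURN count(n) AS total', function (err, result) {
  if (err) throw err;
  console.log(result.data);     // rows returned by the query
  console.log(result.columns);  // column names, here ['total']
});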
Attention: the latest release of the driver is 9 months old, so it might not be compatible with the latest versions of Neo4j. This is not related to your authentication issue, though.
For an up-to-date Node.js driver, I'd recommend thingdom/node-neo4j.
Can you describe what you've tried?
Perhaps you need the username and password? Your driver might not support credentials as part of the URL, and you might need to specify them separately (keep in mind there are two node-neo4j drivers when looking at documentation); see the sketch after this comment.
Also, ideally you should be using the Heroku environment variable rather than hardcoding the URL.
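For illustration, here is a hedged sketch of both patterns. The thingdom driver's auth option comes from that project's readme, and NEO4J_USER / NEO4J_PASSWORD are hypothetical variable names, so adjust both to the driver and configuration you actually have:

// Pattern 1: credentials embedded in the connection URL
// (philippkueng/node-neo4j; the GRAPHENEDB_URL set by Heroku already
// carries user:pass).
var Neo4jDb = require('node-neo4j');
var db = new Neo4jDb(process.env.GRAPHENEDB_URL);

// Pattern 2: credentials passed separately (thingdom/node-neo4j v2).
// The auth option is taken from that project's readme; NEO4J_USER and
// NEO4J_PASSWORD are hypothetical variables used only for this sketch.
var neo4j = require('neo4j');
var db2 = new neo4j.GraphDatabase({
  url: process.env.GRAPHENEDB_URL,
  auth: {
    username: process.env.NEO4J_USER,
    password: process.env.NEO4J_PASSWORD
  }
});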
I have had an issue with setting up my Gerrit server. The machine has Ubuntu 12.04 LTS Server 64-bit installed on it. I am setting up Git and Gerrit as a way to manage source code and code review.
I require internal and external access to it. I set up a DNS entry that would work externally. However, during the initial setup, I left canonicalWebUrl at its default value, which takes the machine's hostname (in this case, vmserver).
The issue I was running into is exactly as explained here: https://stackoverflow.com/questions/14702198/the-requested-url-openid-was-not-found-on-this-server, where, after trying to sign in/register an account with OpenID, it said the URL was not found.
For some reason, it was changing the URL in the address bar from the DNS name I set up to the canonicalWebUrl.
I tried to change the canonical web URL in the gerrit.config file found in the etc directory of the Gerrit site. After restarting the server, the Git project files were present as they should be, but the administrator account seemed to no longer be registered and none of the projects were visible through Gerrit.
Is there a special procedure for changing the canonical web URL in Gerrit without disrupting access to the server?
Any help or information on canonical URLs would be much appreciated, as I cannot find much information on them.
Edit:
Looking deeper, I found some information that is way over my head regarding "submodules". I do not understand whether this is what I am looking for or not:
https://gerrit-review.googlesource.com/#/c/36190/
The canonical web URL must be set, and it sounds like you have done that correctly.
I suspect the issue you are seeing is caused by changing the canonical web URL: some OpenID providers (Google being the big one) will return a different user ID based on the URL of the request. This is a privacy measure and cannot be changed, so previous users will now show up as new users and won't be in their old groups (the Administrators group in this case).
If you don't have many users, it might be easiest to migrate them by hand. You can modify the database to map the new user ID to the old user account.
I am completely new to Ruby, and I inherited a Ruby system for a product catalogue. Most of my users are able to view everything as they should, but overseas users (specifically in Mexico) cannot contact the server once logged in. They are active users. I'm sorry I cannot be more specific; the system is private, so I cannot grant access.
Has anyone had any issues similar to this before? Is it a user-end issue or a system error?
Speaking as somebody who regularly ends up on your users' side of the fence, the number one culprit for this symptom is "clueless administrator". There are many, many sites which generically block either large blocks of IP space or which geolocate and carve out big portions of the world.
For example, a surprising number of American blogs block Asian countries (including Japan) out of a misplaced effort to avoid DDoS attacks (which actually probably originated in Russia or China but, hey, this species of administrator isn't very good at fine-tuning solutions). I have to hop over to my American proxy server to access those sites.
So the first thing I'd do to diagnose your problems is to see whether your Mexican users are making it to the server at all, or whether they're being blocked somewhere earlier (router? firewall? etc). Then, to determine whether the problem is on your end or their end, I'd try to replicate the issue with you proxying your connection through a Mexican proxy and repeating the actions they took to cause the issue.
The fact that they get blocked after logging in could indicate that you have HTTPS issues, for example with an HTTPS accelerator installed [1], or it could be that your frontend server is properly serving up the static content but doing the access checking on dynamic requests only.
[1] We've seen some really weird bugs at work caused by a malfunctioning HTTPS accelerator.
If it's working for everyone else, then it would appear that the problem is not with Ruby or Rails themselves, since they are working...
My first thought would be to check for a network issue: are the Mexican users all behind the same proxy server and/or firewall?
Is login handled within the Rails application or via some other resource? Can you see any evidence that requests from Mexican users are reaching your web server at all?
Login is handled by the Rails app. I'm currently trying to hunt down the logs; it's taking some time since, again, I am new to this system.
Cheers guys
Maybe INS is cracking down on cyber-immigration.