What should I do in OpsCenter after deleting the cassandra superuser? - datastax-enterprise

After building a DSE cluster with OpsCenter, I deleted the cassandra user and now I cannot configure anything.
I see an error indicating that OpsCenter is still trying to log in to the DSE cluster as the cassandra user.
What do I need to do in OpsCenter after deleting the cassandra user?

You haven't provided a lot of information, so I don't know exactly what the problem is, but I'm going to assume that you enabled authentication on your DSE cluster.
To disable or drop the default cassandra superuser role, you need another superuser role, so I take it you've already created one.
Using the new superuser role, create separate logins for users and applications, including a dedicated service account for OpsCenter. Once you've done that, update the cluster connection properties in clusters/cluster_name.conf on the OpsCenter server with the new credentials:
[cassandra]
username = opsc_svc_account
password = opsc_svc_account_password
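The role setup described above might look like this in cqlsh; the superuser name and both passwords are placeholders, not DSE defaults, and you should only drop the cassandra role once nothing depends on it:

```shell
# Connect with the replacement superuser role (names/passwords are examples)
cqlsh -u new_superuser -p 'new_superuser_password' -e "
  CREATE ROLE opsc_svc_account WITH PASSWORD = 'opsc_svc_account_password' AND LOGIN = true;
  DROP ROLE cassandra;"
```

Depending on what OpsCenter manages, the service account may also need GRANTs on specific keyspaces; check the OpsCenter documentation for the permissions your version requires.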

Related

Running a pod as a service account to connect to a database with Integrated Security

I have a .NET Core service running on Azure Kubernetes Service and a Linux Docker image. It needs to connect to an on-premise database with Integrated Security. One of the service accounts in my on-premise AD has access to this database.
My question is: is it possible to run a pod under a specific service account so the service can connect to the database? (The other approach I tried was to impersonate the call with WindowsIdentity.RunImpersonated, but that requires advapi32.dll, and I couldn't find a way to deploy it to the Linux container and make it run.)
A pod can run with the permissions of an Azure Active Directory service account if you install and implement AAD Pod Identity components in your cluster.
You'll need to set up an AzureIdentity and an AzureIdentityBinding resource in your cluster then add a label to the pod(s) that will use permissions associated with the service account.
Please note that this approach relies on the managed identity or service principal associated with your cluster having the "Managed Identity Operator" role granted on the identity used to access SQL Server (that identity must exist in Azure Active Directory).
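A minimal sketch of the two resources mentioned above; the names, selector, and resource/client IDs are placeholders you'd replace with your own:

```yaml
apiVersion: aadpodidentity.k8s.io/v1
kind: AzureIdentity
metadata:
  name: sql-access-identity            # placeholder name
spec:
  type: 0                              # 0 = user-assigned managed identity
  resourceID: /subscriptions/<sub-id>/resourcegroups/<rg>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity-name>
  clientID: <client-id>
---
apiVersion: aadpodidentity.k8s.io/v1
kind: AzureIdentityBinding
metadata:
  name: sql-access-binding
spec:
  azureIdentity: sql-access-identity
  selector: sql-access                 # pods labeled aadpodidbinding: sql-access get this identity
```

Pods that should use the identity then carry the label `aadpodidbinding: sql-access` in their metadata.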
I suspect you may actually need the pods to take on the identity of a "group managed service account" (gMSA) that exists only in your on-premises AD. I don't think that is supported in Linux containers (Windows nodes recently gained GA support for gMSAs).

Which roles should I add to my service account utilised by CircleCI?

I'm running tests and pushing my Docker images from CircleCI to Google Container Registry. At least I'm trying to.
Which roles does my service account require to be able to pull and push images to GCR?
Even as an account with the role "Project Owner", I get this error:
gcloud --quiet container clusters get-credentials $PROJECT_ID
Fetching cluster endpoint and auth data.
ERROR: (gcloud.container.clusters.get-credentials)
ResponseError: code=403,
message=Required "container.clusters.get" permission(s)
for "projects/$PROJECT_ID/locations/europe-west1/clusters/$CLUSTER".
According to this doc, you will need the storage.admin role to push (read & write), and storage.objectViewer to pull (read only) from Google Container Registry.
On the topic of not being able to get credentials as owner: you are likely using the machine's service account instead of your owner account. Check which account is active with the command:
gcloud auth list
You can change the service account the machine uses through the UI by first stopping the instance and then editing its service account. Alternatively, authenticate with your own Google credentials using the command:
gcloud auth login
Hope this helps
When you get a Required "___ANYTHING___" permission message:
1. Go to Console -> IAM -> Roles -> Create new custom role [ROLE_NAME].
2. Add container.clusters.get and/or whatever other permissions you need to get the whole thing going (I needed some extra rights for kubectl, for example).
3. Assign that role (Console -> IAM -> Add+) to your service account.
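The console steps above can also be done with gcloud; the custom role ID and service-account address here are examples:

```shell
# Create a custom role carrying the missing permission(s)
gcloud iam roles create circleci_deployer --project "$PROJECT_ID" \
  --permissions container.clusters.get

# Bind the custom role to the CI service account
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member "serviceAccount:circleci@$PROJECT_ID.iam.gserviceaccount.com" \
  --role "projects/$PROJECT_ID/roles/circleci_deployer"
```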

InfluxDB can create the first user only when auth is turned off

Is there a way of creating a user in InfluxDB with authentication enabled? Disclaimer: I am an InfluxDB novice.
I created a Docker container running InfluxDB with authentication enabled by setting auth-enabled = true in the http section of the influxdb.conf file.
[http]
...
# Determines whether user authentication is enabled over HTTP/HTTPS.
auth-enabled = true
...
As there are no users, I tried to create one using the following command:
docker exec influxdb influx -execute "create user admin with password 'blabla' with all privileges"
However, this fails with
"stdout": "ERR: error authorizing query: no user provided
So it is kind of a chicken-and-egg problem. You cannot create a user, because this requires logging in as a user in the first place.
It works when authentication is disabled. So I can do the following:
Create config with authentication disabled.
Start InfluxDB
Create users
Change config so authentication is now enabled.
Restart InfluxDB
but in that case I have to store the config in a specific Docker volume, and it still leaves a window during which anybody could log in without authentication. So it can be automated, but it is not an elegant solution.
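Scripted, the disable-then-enable workaround above might look like this (image tag, paths, and credentials are examples):

```shell
# 1-2: start InfluxDB with auth-enabled = false in the mounted config
docker run -d --name influxdb \
  -v "$PWD/influxdb.conf":/etc/influxdb/influxdb.conf:ro influxdb:1.8

# 3: create the admin user while auth is still off
docker exec influxdb influx -execute \
  "CREATE USER admin WITH PASSWORD 'blabla' WITH ALL PRIVILEGES"

# 4-5: flip the flag in the config and restart
sed -i 's/auth-enabled = false/auth-enabled = true/' influxdb.conf
docker restart influxdb
```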
Is there an elegant solution for this problem?
Most DB images provide a way to configure an admin user and admin password via environment variables. InfluxDB does this too:
https://hub.docker.com/_/influxdb/
Set the environment variables INFLUXDB_ADMIN_USER and INFLUXDB_ADMIN_PASSWORD in your container to create the admin user with the given password. You can also enable auth via the environment variable INFLUXDB_HTTP_AUTH_ENABLED.
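For example, with the official 1.x image (credentials and tag are placeholders):

```shell
docker run -d --name influxdb \
  -e INFLUXDB_HTTP_AUTH_ENABLED=true \
  -e INFLUXDB_ADMIN_USER=admin \
  -e INFLUXDB_ADMIN_PASSWORD=blabla \
  influxdb:1.8
```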
2021 update: apparently there might be some caveats/edge cases when it comes to automatic admin/user creation in InfluxDB in Docker; see here: https://github.com/influxdata/influxdata-docker/issues/232
If you stumble upon the message "create admin user first or disable authentication" even though you set the environment variables as suggested by @adebasi, the link above might help you tackle the problem.
I've just checked the latest official InfluxDB Docker image and it works; however, as stated in the link above, if a meta directory is present (even if empty) under /var/lib/influxdb, the user won't be created.
There's also another case: the unofficial InfluxDB image for the Raspberry Pi Zero (https://hub.docker.com/r/mendhak/arm32v6-influxdb) doesn't have this user-creation functionality, or at least it didn't work for me (I checked the image and saw no code to create users).

Opsworks : Rails Layer connect to Elasticache : Redis

I am attempting to connect my Rails Application running in Opsworks to an Elasticache Redis Layer.
I just can't get it to work.
My current configuration:
1 Stack (2 instances)
Layers
- Rails App Server
- MySQL
The rails app is in the AWS-OpsWorks-Rails-App-Server Security Group.
1 ElastiCache cluster
The ElastiCache cluster is in the default sg-ff58559a (VPC)(active) security group.
I am using the 'Primary Endpoint' to attempt to connect.
This value is visible from the
ElastiCache>Replication Groups
dashboard.
It looks similar to this:
<name>.oveuui.ng.0001.use1.cache.amazonaws.com:6379
In my rails console (after SSH into the rails layer) I try:
>r = Redis.new(:url => 'redis://<name>.oveuui.ng.0001.use1.cache.amazonaws.com:6379')
>r.connected?
The result is:
Redis::CannotConnectError: Timed out connecting to Redis on...
If you launched your cluster into an Amazon Virtual Private Cloud (Amazon VPC), you can connect to your ElastiCache cluster only from an Amazon EC2 instance that is running in the same Amazon VPC. In this case, you will need to grant network ingress to the cluster.
To grant network ingress from an Amazon VPC security group to a cluster:
1.Sign in to the AWS Management Console and open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
2.In the left navigation pane, under Network & Security, click Security Groups.
3.In the list of security groups, click the security group for your Amazon VPC. If you are a new ElastiCache user, this security group will be named default.
4.Click Inbound tab, and then do the following:
a. Click Edit.
b. Click Add rule.
c. In the Type column, select Custom TCP rule.
d. In the Port range box, type the port number for your cache cluster node. This number must be the same one that you specified when you launched the cluster. The default ports are as follows:
Memcached: port 11211
Redis: port 6379
e. In the Source box, select Anywhere (0.0.0.0/0) so that any Amazon EC2 instance that you launch within your Amazon VPC can connect to your ElastiCache nodes.
f. Click Save.
http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/GettingStarted.AuthorizeAccess.html
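Console steps 1-4 above collapse into a single AWS CLI call; the group ID here is the one from the question, and you may want a tighter source CIDR than the 0.0.0.0/0 used in step (e):

```shell
# Open the Redis port on the VPC default security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-ff58559a \
  --protocol tcp --port 6379 \
  --cidr 0.0.0.0/0
```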
Amazon only lets servers talk to your ElastiCache cluster if they are in a security group that the cluster's security group allows.
This blog post walks you through the process of adding your Rails Server layer to the right security group: http://aws.amazon.com/blogs/aws/using-aws-elasticache-for-redis-with-aws-opsworks/. It assumes that when you created your ElastiCache cluster you chose the "default" security group, which seems to be the case. If so, go to OpsWorks -> (select the right Stack) -> Layers, and click on Security for your Rails App Server layer to see the layer's security-group settings.
You want to ensure that you've added the "default" security group and then restart your instances.
Note that when I did this, it still didn't work. I looked at the details of my instance in the EC2 console (instead of the OpsWorks console) and found that the new "default" security group I had added to the layer had not actually propagated to my instance. I don't know why that happened, so I deleted the instance and created a whole new one, and the new instance had both the "AWS-OpsWorks-Rails-App-Server" and "default" security groups applied successfully. Keep that in mind if things don't work right away: click on the instance to see its settings and confirm that both security groups are displayed.
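You can confirm from the command line that both security groups actually reached the instance (the instance ID here is an example):

```shell
aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
  --query 'Reservations[].Instances[].SecurityGroups[].GroupName'
```

Both "AWS-OpsWorks-Rails-App-Server" and "default" should appear in the output.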
Let me know if this works for you.

Hive "Creating Hive Metastore Database Tables" command fails on installation 'Path A' using Cloudera Manager

I am installing Cloudera Manager on an EC2 instance, and I added only a single additional EC2 instance to the cluster.
The installation succeeded, but when the manager initiates the cluster services (step 9 of 21) I get the
following error:
[2013-07-12 18:44:35,906]ERROR 63227[main]
com.cloudera.enterprise.dbutil.SqlRunner.open(SqlRunner.java:111)
- Error connecting to db with user 'hive' and jdbcUrl 'jdbc:postgresql://ip-xx-xxx-
xx-x.ec2.internal:7432/hive'
I manually opened port 7432 on the EC2 instance created by Cloudera because it did not appear to be open; I'm not sure whether that was a bad idea. The Cloudera Manager docs claim that the PostgreSQL database is created automatically on installation, so I don't think that is the problem either.
I've been getting this error more and more lately.
Check the private DNS of the created instance in the EC2 console and compare it to the JDBC URL from the error. I've found that the private DNS is incorrect when I get this error, though I have no clue how to get around it.
I had the same issue. It turned out that the manager instance was in a different security group than the instances launched by the manager. I gave those security groups full access to each other's instances, and that fixed it.
It looks like this can be caused by stopping/starting the Cloudera Manager instance, if it comes back up with a new IP address.
I fixed it by doing the following:
In the Cloudera Manager interface, click the "hive1" service.
Click Configuration / View and Edit.
Expand "Service-Wide" and click "Hive Metastore Database".
Check the "Hive Metastore Database Host" setting - it is probably pointing at an old address you don't have control of anymore.
Replace that with the Manager instance's current private DNS name, obtained from the EC2 console.
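Before restarting the service, you can check from the manager host that the metastore database is actually reachable; the hostname below is the redacted one from the error, and the database/user names match the error message:

```shell
# Is the PostgreSQL port open at all?
nc -zv ip-xx-xxx-xx-x.ec2.internal 7432

# Can we actually reach the hive database as the hive user?
psql "host=ip-xx-xxx-xx-x.ec2.internal port=7432 dbname=hive user=hive" -c '\conninfo'
```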
