OCI CLI create_backup_from_onprem script error - oracle-call-interface

I encountered an issue while creating a database backup using the create_backup_from_onprem script in the OCI CLI. I noticed that the Object Storage namespace is not correct while executing the backup script.
[oracle#oracledev oci-cli-scripts]$ ./create_backup_from_onprem --config-file /home/oracle/.oci/config --display-name testimport01 --availability-domain $AD --edition STANDARD_EDITION --opc-installer-dir /home/oracle/migrate --tmp-dir /home/oracle/migrate/onprem_upload --compartment-id $C --rman-password *****
oci._vendor.requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://swiftobjectstorage.ap-mumbai-1.oraclecloud.com/v1/dbbackupbom/iF0ydees7V0yWxyuAYtF/parameter.log
Either the bucket named 'iF0ydees7V0yWxyuAYtF' does not exist in the namespace 'dbbackupbom' or you are not authorized to access it
My correct namespace is bmnoo8fd7ute
[oracle#oracledev oci-cli-scripts]$ oci os ns get
{
"data": "bmnoo8fd7ute"
}
I am not sure how to correct the Object Storage namespace in the CLI. Could you please help me with this?

Adding a cross-reference to a GitHub issue on OCI CLI in case the OCI database team can answer: https://github.com/oracle/oci-cli/issues/201.

You have to change the tenancy OCID in the OCI config file, whose default location is ~/.oci/config. You can do it manually or by using the oci setup config command. You can overwrite the current values, or you can create a new profile, which you can then refer to in oci calls.
For more information, please see the CLI config file documentation.
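For illustration, a minimal ~/.oci/config might look like the sketch below (all values are placeholders); the tenancy entry is what determines which Object Storage namespace the CLI resolves:
[DEFAULT]
user=ocid1.user.oc1..<unique_user_id>
fingerprint=aa:bb:cc:dd:ee:ff:00:11:22:33:44:55:66:77:88:99
key_file=~/.oci/oci_api_key.pem
tenancy=ocid1.tenancy.oc1..<unique_tenancy_id>
region=ap-mumbai-1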
In case the OCI config file already contains the correct value, you need to re-run the oci_install installer, specifying the right tenancy OCID for -tOCID (in this case, the OCID of the tenancy whose namespace is bmnoo8fd7ute).
java -jar oci_install.jar -host swiftobjectstorage.ap-mumbai-1.oraclecloud.com -pvtKeyFile oci_private_key -pubFingerPrint oci_public_fingerprint -uOCID user_ocid -tOCID tenancy_ocid -walletDir /wallet_directory -libDir /library_directory
Update:
As dbbackupbom is an internal resource ID, you cannot change it by reinstalling oci_install. Rather, this looks like an authorization issue.
Please check whether you have the right policies in place. If not, create a policy like this:
Name of the policy: ObjectStorageAccess
Add below statements:
Allow group ObjectAdmins to manage buckets in tenancy
Allow group ObjectAdmins to manage objects in tenancy
Finally, add your user to ObjectAdmins, or use a different group that you are already a member of.
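If you prefer the CLI over the console for this, a rough sketch would be the following (the group and user OCIDs are placeholders; the console works just as well):
oci iam policy create --compartment-id <tenancy_ocid> --name ObjectStorageAccess \
  --description "Object Storage access for database backups" \
  --statements '["Allow group ObjectAdmins to manage buckets in tenancy","Allow group ObjectAdmins to manage objects in tenancy"]'
oci iam group add-user --group-id <objectadmins_group_ocid> --user-id <your_user_ocid>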

Finally, I found the correct answer and resolution for this issue.
Cause: the error occurred because the newly launched region had an internal bug. I used the Mumbai region, which had been launched only recently.
Resolution: choose another, more stable region. The Ashburn region worked for me.

Related

Problem enabling Keycloak read-only user attributes

I've attempted to enable Read-only user attributes in Keycloak as per the docs: https://www.keycloak.org/docs/latest/server_admin/
However, the documented configuration does not actually prevent a user from changing their attributes.
I am using Keycloak 15.0.0 with the regular Docker image from Docker Hub.
I made a .cli file and added it to my Docker image, built from:
FROM jboss/keycloak:15.0.0
ADD RESTRICT_USER_ATTRIBUTES.cli /opt/jboss/startup-scripts/
With contents of RESTRICT_USER_ATTRIBUTES.cli:
embed-server --server-config=standalone-ha.xml --std-out=echo
batch
/subsystem=keycloak-server/spi=userProfile/:add
/subsystem=keycloak-server/spi=userProfile/provider=legacy-user-profile/:add(properties={},enabled=true)
/subsystem=keycloak-server/spi=userProfile/provider=legacy-user-profile/:map-put(name=properties,key=read-only-attributes,value=[myUserAttribute])
run-batch
stop-embedded-server
The .cli file is processed according to the log, and I can exec into the Docker container and check the configuration using jboss-cli.sh.
But the end user can still freely edit myUserAttribute using Postman or another tool.
What am I doing wrong here?
I just had this issue, and it seems the documentation is out-of-date.
They changed the provider name, probably in 15.0.0.
Try changing your cli script to:
# ...
/subsystem=keycloak-server/spi=userProfile/:add
/subsystem=keycloak-server/spi=userProfile/provider=declarative-user-profile/:add(properties={},enabled=true)
/subsystem=keycloak-server/spi=userProfile/provider=declarative-user-profile/:map-put(name=properties,key=read-only-attributes,value=[myUserAttribute])
# ...
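To confirm the change actually took effect, one option (assuming the standard jboss/keycloak image layout) is to exec into the container and read the SPI configuration back:
/opt/jboss/keycloak/bin/jboss-cli.sh --connect \
  --command="/subsystem=keycloak-server/spi=userProfile:read-resource(recursive=true)"
The output should list the declarative-user-profile provider with your read-only-attributes entry.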

Default credentials can not be used to assume new style deployment roles

Following the pipelines README to set up a deployment pipeline, I ran
$ env CDK_NEW_BOOTSTRAP=1 npx cdk bootstrap \
--cloudformation-execution-policies arn:aws:iam::aws:policy/AdministratorAccess \
aws://[ACCOUNT_ID]/us-west-2
to create the necessary roles. I assumed the roles would automatically be granted sts:AssumeRole permissions for my account principal. However, when I run cdk deploy I get the following warning:
current credentials could not be used to assume
'arn:aws:iam::[ACCOUNT_ID]:role/cdk-hnb659fds-file-publishing-role-[ACCOUNT_ID]-us-west-2',
but are for the right account. Proceeding anyway.
I have root credentials in ~/.aws/credentials.
Looking at the deploy role policy, I don't see any sts permissions. What am I missing?
You will need to add permission to assume those roles to the credentials from which you are trying to execute cdk deploy:
{
"Sid": "assumerole",
"Effect": "Allow",
"Action": [
"sts:AssumeRole",
"iam:PassRole"
],
"Resource": [
"arn:aws-cn:iam::*:role/cdk-readOnlyRole",
"arn:aws-cn:iam::*:role/cdk-hnb659fds-deploy-role-*",
"arn:aws-cn:iam::*:role/cdk-hnb659fds-file-publishing-*"
]
}
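Note that the arn:aws-cn: partition in this example is for the China regions; in standard AWS regions the ARNs start with arn:aws:. One way to attach such a policy (the user name, policy name, and file name are hypothetical) is as an inline policy on your IAM user:
aws iam put-user-policy --user-name <your-iam-user> \
  --policy-name CdkAssumeRoles \
  --policy-document file://cdk-assume-roles.json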
The first thing you need to do is enable verbose mode to see what is actually happening.
cdk deploy --verbose
If you see a message similar to the one below, continue with step 2. Otherwise, you need to address the problem by understanding the error message.
Could not assume role in target account using current credentials User: arn:aws:iam::XXX068599XXX:user/cdk-access is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::XXX068599XXX:role/cdk-hnb659fds-deploy-role-XXX068599XXX-us-east-2. Please make sure that this role exists in the account. If it doesn't exist, (re)-bootstrap the environment with the right '--trust', using the latest version of the CDK CLI.
Check the S3 buckets related to CDK and the CloudFormation stacks in the AWS Console. Delete them manually.
Enable the new-style bootstrapping by one of the methods mentioned here.
Bootstrap the stack using the command below; it should then create all the required roles automatically.
cdk bootstrap --trust=ACCOUNT_ID --cloudformation-execution-policies=arn:aws:iam::aws:policy/AdministratorAccess --verbose
NOTE: If you are working with Docker image assets, make sure you have set up your repository before you deploy. New-style bootstrapping does not create the repos automatically for you, as mentioned in this comment.
This may be of use to somebody... The issue could be a mismatch of regions. I spotted it in verbose mode: the roles were created for us-east-1, but I had specified eu-west-2 in the bootstrap. For some reason it had not worked. The solution was to set the region by adding AWS_REGION=eu-west-2 before the cdk deploy command.
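For example:
AWS_REGION=eu-west-2 cdk deploy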
I ran into a similar error. The critical part of my error was
failed: Error: SSM parameter /cdk-bootstrap/<>/version not found.
I had to re-run the bootstrap using the new bootstrap method, which creates the SSM parameter. To use the new bootstrap method, first set CDK_NEW_BOOTSTRAP via export CDK_NEW_BOOTSTRAP=1.
Don't forget to run cdk bootstrap with those credentials against your account [ACCOUNT_ID].
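Putting those two steps together, the re-bootstrap from the question would look roughly like this:
export CDK_NEW_BOOTSTRAP=1
npx cdk bootstrap aws://[ACCOUNT_ID]/us-west-2 \
  --cloudformation-execution-policies arn:aws:iam::aws:policy/AdministratorAccess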
For me, the problem was expired credentials: I was trying to use temporary credentials from AWS SSO that had already expired. The error message is misleading, because it says
current credentials could not be used to assume 'arn:aws:iam::123456789012:role/cdk-xxx999xxx-deploy-role-123456789012-us-east-1', but are for the right account. Proceeding anyway.
(To get rid of this warning, please upgrade to bootstrap version >= 8)
However, applying the --verbose flag as suggested above showed the real problem:
Assuming role 'arn:aws:iam::123456789012:role/cdk-xxx999xxx-deploy-role-123456789012-us-east-1'.
Assuming role failed: The security token included in the request is expired
Could not assume role in target account using current credentials The security token included in the request is expired . Please make sure that this role exists in the account. If it doesn't exist, (re)-bootstrap the environment with the right '--trust', using the latest version of the CDK CLI.
Getting the latest SSO credentials fixed the problem.
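If the credentials come from an AWS CLI v2 SSO profile, refreshing them is typically just (the profile name is yours):
aws sso login --profile <your-profile>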
After deploying with --verbose I could see it was a clock issue in my case:
Assuming role failed: Signature expired: 20220428T191847Z is now earlier than 20220428T192528Z (20220428T194028Z - 15 min.)
I resolved the clock issue on Ubuntu using:
sudo ntpdate ntp.ubuntu.com
which then resolved the CDK issue.

Deploying code on lambda failed using serverless

I was trying to deploy code to Lambda using serverless deploy and got the error below. I tried multiple solutions available online, but they didn't work.
Error -
Serverless: Packaging service...
Serverless Error ---------------------------------------
The specified bucket does not exist
Get Support --------------------------------------------
Docs: docs.serverless.com
Bugs: github.com/serverless/serverless/issues
Issues: forum.serverless.com
Your Environment Information -----------------------------
OS: darwin
Node Version: 8.12.0
Serverless Version: 1.31.0
When you deploy your Serverless application, it uses the service attribute (defined in your serverless.yaml) as a unique identifier of your application in CloudFormation.
That said, you may get a conflict if you change the name of the bucket without removing the stack. For example:
You deploy your application with the bucket called myBucket.
The CloudFormation stack is created with this info.
You change this name to myBucketPlus and try to deploy.
Serverless tries to clean up myBucketPlus based on the last deploy before pushing the new one.
But wait! myBucketPlus does not exist.
As you did not describe exactly what you did, I tried to give an example, but it could be something else.
You could also try removing the stack and deploying again, as shown below.
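Note that sls remove deletes the whole CloudFormation stack for the service, so only do this if that is acceptable:
sls remove
sls deploy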
The best way to resolve this issue is:
Execute the command below to see the Lambda information, which also provides the S3 bucket name, region, endpoint info, etc.; you only need the bucket name and region in this case.
sls info -v
Create the bucket in the intended region.
Done.
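For example, assuming the AWS CLI is configured for the same account, the bucket reported by sls info -v can be created with (the name and region are whatever that output shows):
aws s3 mb s3://<bucket-name-from-sls-info> --region <region-from-sls-info>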

InfluxDB influx-enterprise.key.json no such file or directory

I am trying to install InfluxDB Enterprise Edition using this documentation: https://docs.influxdata.com/enterprise/v1.2/production_installation/. The requirements suggest using either license-key or license-path; I am using the license key.
In Step 2, after installing, configuring, and starting the data nodes, I try to join the data nodes to the cluster. But executing influxd-ctl add-data enterprise-data-01:8088 gives me the error:
add-data: operation exited with error: open /tmp/influx-enterprise.key.json: no such file or directory
although I configured it to use license-key rather than the license-key JSON file.
I also have the JSON file, so I tried it with license-path, but I am still getting the same error. Has anybody else encountered this issue?
EDIT
The issue has been resolved: I had to restart the data nodes after I changed the configuration to use license-path (facepalm). I ran into this problem because I had previously entered an old license key.
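For reference, on a data node these settings typically live in the [enterprise] section of /etc/influxdb/influxdb.conf (exact file and key names per the Enterprise docs; set exactly one of the two):
[enterprise]
  license-key = "<your-license-key>"
  # license-path = "/tmp/influx-enterprise.key.json"
and then restart each data node, e.g. with sudo systemctl restart influxdb, for the change to take effect.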

Cassandra fails to initialize with error "Cannot add table 'role_members' to non existing keyspace 'system_auth'"

I am running a Cassandra cluster in Docker containers, using fleet for management. I am able to get the cluster up and running, but if I bring the units down with fleet and then back up again, the containers fail. The Cassandra log has this entry on the second start:
Cannot add table 'role_members' to non existing keyspace 'system_auth'.
Fatal configuration error; unable to start server. See log for stacktrace.
INFO 20:59:34 InetAddress /172.17.8.102 is now DOWN
ERROR 20:59:34 Fatal configuration error
org.apache.cassandra.exceptions.ConfigurationException: Cannot add table 'role_members' to non existing keyspace 'system_auth'.
at org.apache.cassandra.service.MigrationManager.announceNewColumnFamily(MigrationManager.java:284) ~[apache-cassandra-2.2.0.jar:2.2.0]
at org.apache.cassandra.service.MigrationManager.announceNewColumnFamily(MigrationManager.java:275) ~[apache-cassandra-2.2.0.jar:2.2.0]
at org.apache.cassandra.service.StorageService.maybeAddTable(StorageService.java:1046) ~[apache-cassandra-2.2.0.jar:2.2.0]
at org.apache.cassandra.service.StorageService.doAuthSetup(StorageService.java:1034) ~[apache-cassandra-2.2.0.jar:2.2.0]
at org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:967) ~[apache-cassandra-2.2.0.jar:2.2.0]
at org.apache.cassandra.service.StorageService.initServer(StorageService.java:698) ~[apache-cassandra-2.2.0.jar:2.2.0]
at org.apache.cassandra.service.StorageService.initServer(StorageService.java:581) ~[apache-cassandra-2.2.0.jar:2.2.0]
at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:291) [apache-cassandra-2.2.0.jar:2.2.0]
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:481) [apache-cassandra-2.2.0.jar:2.2.0]
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:588) [apache-cassandra-2.2.0.jar:2.2.0]
I can't find any information on this particular error, and I really have no idea why it's happening. The closest information I can find is that the system_auth keyspace needs to be configured specially if you are not using the default AllowAllAuthenticator, but I am using the default. I haven't changed it in the cassandra.yaml file.
Does anyone know why this might be happening?
Is it possible that you are using CassandraAuthorizer without using PasswordAuthenticator? I think that might not work and cause this particular error.
system_auth is not applicable to AllowAllAuthenticator; you need to use PasswordAuthenticator instead. If you configure cassandra.yaml in the following way:
authenticator: PasswordAuthenticator
authorizer: CassandraAuthorizer
and then restart Cassandra, the system_auth keyspace should be created for you. If you don't want to set up authorization, you can always use AllowAllAuthorizer instead. More information can be found here.
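Keep in mind that once PasswordAuthenticator is enabled, clients must authenticate; the default superuser credentials are cassandra/cassandra, so a quick sanity check after the restart is:
cqlsh -u cassandra -p cassandra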
This turned out to be a rather unique configuration issue on my end. I was mapping /var/lib/cassandra on the host to /var/lib/cassandra inside my Docker container, but I was also inadvertently mapping /var/lib/cassandra/data to an auto-generated Docker directory on the host. As a result, when I stopped and restarted the containers, the data directory would disappear, and Cassandra would fail as it tried to recreate data from the commitlog directory.
I ran into the problem just by following the DataStax "Initializing a multiple node cluster (single data center)" tutorial.
I solved the same problem by deleting the whole content of /var/lib/cassandra, not only the content of /var/lib/cassandra/system/, as shown below.
Why?
I think Kris found the real source of the problem: when restarting, the Cassandra service found the commit log full and recovered by trying to replay the commits found there, failing because of a different configuration and a different table structure...
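In concrete terms (only do this on a node whose data you can afford to lose), that means wiping the data directory before restarting the service:
sudo rm -rf /var/lib/cassandra/*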
