I am trying to configure RabbitMQ and create an admin user with a JSON definitions file. The users are created, but I get the following error when logging in with valid credentials from the web management console:
2022-07-22 08:15:56.342071+00:00 [warning] <0.847.0> HTTP access denied: user 'admin' - invalid credentials
My configuration and Docker files are as follows.
rabbitmq.config
[
  {rabbit, [
    {loopback_users, [admin]}
  ]},
  {rabbitmq_management, [
    {load_definitions, "/etc/rabbitmq/definitions.json"}
  ]}
].
definitions.json
{
  "users": [
    {
      "name": "guest",
      "password_hash": "abcd",
      "hashing_algorithm": "rabbit_password_hashing_sha256",
      "tags": ""
    },
    {
      "name": "admin",
      "password_hash": "admin123",
      "hashing_algorithm": "rabbit_password_hashing_sha256",
      "tags": "administrator"
    }
  ],
  "vhosts": [
    {
      "name": "/"
    }
  ],
  "permissions": [
    {
      "user": "admin",
      "vhost": "/",
      "configure": ".*",
      "write": ".*",
      "read": ".*"
    }
  ]
}
FROM rabbitmq:3.9-management
COPY conf/rabbitmq.config /etc/rabbitmq/
COPY conf/definitions.json /etc/rabbitmq/
RUN chown rabbitmq:rabbitmq /etc/rabbitmq/rabbitmq.config /etc/rabbitmq/definitions.json
CMD ["rabbitmq-server"]
I also tried to authenticate with rabbitmqctl:
>>rabbitmqctl authenticate_user admin admin123
Authenticating user "admin" ...
Error:
Error: failed to authenticate user "admin"
user 'admin' - invalid credentials
When the password is changed with rabbitmqctl change_password admin admin123, everything works fine.
The only warning in the log on rabbitmq startup is
2022-07-22 08:15:30.218099+00:00 [warning] <0.652.0> Message store "628WB79CIFDYO9LJI6DKMI09L/msg_store_persistent": rebuilding indices from scratch
Could someone please tell me the possible cause and solution? If I've missed anything or over- or under-emphasized a specific point, please let me know in the comments. Thank you in advance for your time.
Your definitions file must contain the HASH of the password, not the password itself. Generally what I do is set a user's password via change_password like you have, then export the current definitions. You'll notice that they contain the hashed password.
You can also generate the hash yourself. See this:
How to generate password_hash for RabbitMQ Management HTTP API
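For example, modern rabbitmqctl versions can do both of these directly; a sketch, assuming RabbitMQ 3.8+ (check rabbitmqctl help for the commands available in your version):
# Compute the salted SHA-256 value expected in the password_hash field
rabbitmqctl hash_password 'admin123'
# Or set the password first, then export definitions that already contain the hash
rabbitmqctl change_password admin admin123
rabbitmqctl export_definitions /tmp/definitions.json
Paste the resulting hash into password_hash in definitions.json instead of the plain-text password.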
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
Related
I have an Elasticsearch deployment on Kubernetes (AKS). I'm using the official Elastic docker images for deployments. Logs are being stored on a persistent Azure Disk. How can I migrate some of these logs to another cluster with a similar setup? Only those logs that match a filter condition based on the datetime of the logs need to be migrated.
You can use the Reindex API to achieve this:
POST _reindex
{
  "source": {
    "remote": {
      "host": "http://oldhost:9200",
      "username": "user",
      "password": "pass"
    },
    "index": "source",
    "query": {
      "match": {
        "test": "data"
      }
    }
  },
  "dest": {
    "index": "dest"
  }
}
Note:
Run the above command on your target instance.
Make sure the source instance is whitelisted in elasticsearch.yml:
reindex.remote.whitelist: oldhost:9200
To run the process asynchronously, use the query param below:
POST _reindex?wait_for_completion=false
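Since the goal is to migrate only logs matching a datetime filter, the match query above can be swapped for a range query; a sketch assuming the timestamp lives in a @timestamp field (adjust the field name and bounds to your mapping):
POST _reindex
{
  "source": {
    "remote": {
      "host": "http://oldhost:9200",
      "username": "user",
      "password": "pass"
    },
    "index": "source",
    "query": {
      "range": {
        "@timestamp": {
          "gte": "2022-07-01T00:00:00Z",
          "lt": "2022-07-22T00:00:00Z"
        }
      }
    }
  },
  "dest": {
    "index": "dest"
  }
}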
I have tried following the documentation at these links:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker.container.console.html#docker-images-private
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker.container.console.html#docker-alami
My .dockercfg file looks something like this:
{
  "https://index.docker.io/v1/": {
    "auth": "username:pwd [base 64 enc]",
    "email": "email_id"
  }
}
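For reference, the auth value must be the base64 encoding of the literal string username:password; with placeholder credentials it can be generated like this:
# -n keeps a trailing newline out of the encoded value
echo -n 'username:pwd' | base64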
I'm trying to pull a private image from Docker Hub, and my Dockerrun.aws.json looks something like this:
{
  "AWSEBDockerrunVersion": "1",
  "Authentication": {
    "Bucket": "my-bucket",
    "Key": ".dockercfg"
  },
  "Image": {
    "Name": "dishvy/imgname:tag",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "80"
    }
  ],
  "Volumes": [
    {
      "HostDirectory": "/var/app/mydb",
      "ContainerDirectory": "/etc/mysql"
    }
  ],
  "Logging": "/var/log/nginx"
}
I have added the .dockercfg file at the root of the bucket. And when I'm trying to deploy this to AWS Elastic Beanstalk, I get this error:
Error response from daemon: pull access denied for dishvy/imgname:tag, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
This essentially means my authentication file on S3 isn't correct, but I followed the steps given in the documentation and can't figure out where I went wrong. Can anybody help me out here?
I have used a method mentioned here to add credentials to Jenkins programmatically. It worked successfully for adding secret texts and secret files, but it throws an exception when adding SSH private keys. Below is the curl command I used.
curl -X POST 'http://localhost:8080/jenkins/credentials/store/system/domain/_/createCredentials' \
--data-urlencode 'json={
  "": "0",
  "credentials": {
    "scope": "GLOBAL",
    "id": "temp",
    "username": "temp",
    "privateKeySource": {
      "stapler-class": "com.cloudbees.jenkins.plugins.sshcredentials.impl.BasicSSHUserPrivateKey$FileOnMasterPrivateKeySource",
      "privateKeyFile": "/home/udhan/private-key.pem"
    },
    "stapler-class": "com.cloudbees.jenkins.plugins.sshcredentials.impl.BasicSSHUserPrivateKey"
  }
}'
Here is the exception I get.
A problem occurred while processing the request.
Please check our bug tracker to see if a similar problem has already been reported.
If it is already reported, please vote and put a comment on it to let us gauge the impact of the problem.
If you think this is a new issue, please file a new issue.
When you file an issue, make sure to add the entire stack trace, along with the version of Jenkins and relevant plugins.
The users list might also be useful in understanding what has happened.
Stack trace
org.kohsuke.stapler.NoStaplerConstructorException: There's no @DataBoundConstructor on any constructor of class com.cloudbees.jenkins.plugins.sshcredentials.impl.BasicSSHUserPrivateKey$FileOnMasterPrivateKeySource
at org.kohsuke.stapler.ClassDescriptor.loadConstructorParamNames(ClassDescriptor.java:265)
at org.kohsuke.stapler.RequestImpl.instantiate(RequestImpl.java:765)
at org.kohsuke.stapler.RequestImpl.access$200(RequestImpl.java:83)
at org.kohsuke.stapler.RequestImpl$TypePair.convertJSON(RequestImpl.java:678)
I've just bumped into this same problem right now. Rather than using a pem file, I ended up putting the SSH pem's value into a variable and passing it that way instead (note the quoting below: the single-quoted JSON is closed around $SSH_KEY so the shell actually expands it).
CRUMB=$(curl -s 'https://{{jenkins_admin_username}}:{{jenkins_admin_password}}@localhost:8080/crumbIssuer/api/xml?xpath=concat(//crumbRequestField,":",//crumb)')
SSH_KEY="$(cat /your/ssh/pem)"
curl -H "$CRUMB" -X POST 'https://{{jenkins_admin_username}}:{{jenkins_admin_password}}@localhost:8080/credentials/store/system/domain/_/createCredentials' --data-urlencode 'json={
  "": "0",
  "credentials": {
    "scope": "GLOBAL",
    "id": "test-jenkins-id",
    "username": "test-username",
    "password": "",
    "privateKeySource": {
      "stapler-class": "com.cloudbees.jenkins.plugins.sshcredentials.impl.BasicSSHUserPrivateKey$DirectEntryPrivateKeySource",
      "privateKey": "'"$SSH_KEY"'"
    },
    "description": "test-jenkins-ssh description",
    "stapler-class": "com.cloudbees.jenkins.plugins.sshcredentials.impl.BasicSSHUserPrivateKey"
  }
}'
Note that I went with https instead of http here, as we're passing things that should be secured.
I hope this helps.
So I have a Consul check that watches over a container and is designed to go critical when the container is stopped. I want to create a Consul watch that will run a script after the check has gone critical, or after several critical responses (for example, if my check sends 5 critical responses, I want it to run a script).
Here is the JSON for my working check and my guess as to what my watch might look like:
{
  // this check works
  "checks": [
    {
      "id": "docker_stuff",
      "name": "curl test",
      "notes": "curls the docker container",
      "script": "/scripts/docker.py",
      "interval": "1s"
    }
  ],
  // this watch doesn't work
  "watches": [
    {
      "Node": "client2",
      "CheckID": "docker-stuff",
      "Name": "docker-stuff-watch",
      "Status": "critical",
      "Status_amt": "5",
      "handler": "/scripts/new-docker.sh",
      "Output": "container relaunched"
    }
  ]
}
What do I need to change in my watch to get it working?
Would I also need to use a Consul event to watch my health check and then trigger a Consul watch (of the event type) that runs my /scripts/new-docker.sh script? If so, how would I create a Consul event that watches my health check? For example, if this were my Consul check, watch, and event, what would I need to change to get it working?
{
  "checks": [
    {
      "id": "docker_stuff",
      "name": "curl test",
      "notes": "curls the docker container",
      "script": "/scripts/docker.py",
      "interval": "1s"
    }
  ],
  "watches": [
    {
      "type": "event",
      "name": "docker-stuff-watch",
      "handler": "/scripts/new-docker.sh"
    }
  ],
  "events": [
    {
      "Node": "client2",
      "CheckID": "docker-stuff",
      "Name": "docker-stuff-event",
      "Status": "critical",
      "Status_amt": "5",
      "Output": "container relaunched"
    }
  ]
}
What do I need to change in my watch to get it working?
Are there any errors? Make sure your watch handler /scripts/new-docker.sh consumes the STDIN that Consul sends it, even if it just throws it away to /dev/null; otherwise the process will wait forever for it to be consumed.
Something like:
while read -r -t 0; do read -r; done
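As for the watch definition itself: the fields used above (Node, CheckID, Status, Status_amt, Output) are not part of Consul's watch specification. The closest supported shape is the built-in checks watch type filtered by state; a sketch (counting five consecutive critical results is not supported natively, so the handler script would have to track that itself):
{
  "watches": [
    {
      "type": "checks",
      "state": "critical",
      "handler": "/scripts/new-docker.sh"
    }
  ]
}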
I would recommend considering an upgrade to the upcoming Docker 1.12 (a release candidate at the moment). The new concept of services can be used to state the desired number of containers to run.
https://docs.docker.com/engine/swarm/swarm-tutorial/deploy-service/
There's also a new HEALTHCHECK directive in the Dockerfile that lets you bundle a check script with the container image.
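A minimal sketch of the directive (the probe command and timings here are assumptions, not part of the original setup):
# Fail the container's health status after 5 consecutive failed probes
HEALTHCHECK --interval=5s --timeout=3s --retries=5 \
  CMD curl -f http://localhost/ || exit 1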
These new features might enable you to replace the functionality you've had to implement using Consul.
I am using the rails-server-template available here (https://github.com/TalkingQuickly/rails-server-template) to provision a Rails server (Ubuntu 12.04) using Chef. When setting up a new server, I copy my public SSH key with ssh-copy-id -i ~/.ssh/id_rsa.pub ubuntu@my-server.amazonaws.com and am able to enter the server fine.
But after I downloaded a new copy of this template and updated the nodes/my-server.json file to this:
{
  "environment": "production",
  "authorization": {
    "sudo": {
      "users": ["deploy", "vagrant"]
    }
  },
  "run_list": [
    "role[server]",
    "role[postgres-server]"
  ],
  "automatic": {
    "ipaddress": "my-server.amazonaws.com"
  },
  "postgresql": {
    "password": {
      "postgres": "password"
    }
  }
}
And also updated the deploy.json user in data_bags/users:
{
  "id": "deploy",
  // generate this with: openssl passwd -1 "plaintextpassword"
  "password": "password",
  "ssh_keys": [ "ssh-rsa my-public-key from ~/.ssh/id_rsa.pub" ],
  "groups": [ "sysadmin" ],
  "shell": "/bin/bash"
}
For some weird reason, after provisioning the server with bundle exec knife solo bootstrap ubuntu@my-server.com, I get a Permission denied (publickey) error. When trying to log in using ssh, I get asked for the password for the ubuntu user, which I don't know. I can't even log in with my key pair .pem file from Amazon EC2 anymore.
Am I missing something? I didn't change the server.json role, and I can't seem to figure out what is going on. Has something changed my ssh configuration during provisioning?
Turns out that when I was trying to SSH into my server, the user I was using was ubuntu, whereas in the data_bags I set up a new user with the id deploy. I needed to SSH in as the deploy user.
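That is (hostname reused from the question):
ssh deploy@my-server.amazonaws.com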