I set up Neo4j on an EC2 instance using this
http://www.neo4j.org/develop/ec2
I have the SSH key so I can SSH into the instance, but I don't remember the password I set up for the web interface. I believe this is a Jetty basicauth equivalent, but I'm not sure, nor could I find the config files that might lead me to the right place. How can I reset this password?
`neo4j-server.properties` has a setting for the auth extension used by the puppet script:
org.neo4j.server.credentials=<user>:<pass>
I'm not sure where neo4j-server.properties is located on your machine, check /etc/neo4j or use find / -name neo4j-server.properties.
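Once the file is located, the credentials line can be pulled out quickly with grep — a minimal sketch using a throwaway copy of the file (path and values hypothetical, the real file usually lives under /etc/neo4j):

```shell
# Stand-in properties file to illustrate; substitute the real path on your instance
printf 'org.neo4j.server.credentials=neo4j:oldpass\n' > /tmp/neo4j-server.properties

# Extract the user:pass value from the credentials setting
creds=$(grep '^org.neo4j.server.credentials=' /tmp/neo4j-server.properties | cut -d= -f2)
echo "$creds"
```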
You can reset the neo4j web interface password by following these steps, provided you have SSH access to the EC2 instance:
Log in to the EC2 instance from your local console:
ssh -i [your-key] ubuntu@[ec2-instance-ip]
Log in as superuser: sudo su
Remove the auth file from /var/lib/neo4j/data/dbms:
rm -f /var/lib/neo4j/data/dbms/auth
Reset the password by running
neo4j-admin set-initial-password secret
Restart neo4j: systemctl restart neo4j
You can then access the neo4j web interface from the browser with the username neo4j and the new password.
Related
I have a script that lets a user change his LDAP password. The user enters his password and the script sends the command to the LDAP server. I can't do it any other way; only this way does the LDAP server create proper passwords. The command is:
ldappasswd -x -D "uid=userwhocanchangepassword,cn=users,dc=example,dc=org" -w "userpass" -h ldap.host -S 'uid=usertochange,cn=users,dc=example,dc=org' -s 'passwordTochange'
userwhocanchangepassword is a user that has permission to change other users' passwords.
Outside the container it works perfectly (the password is changed), but when I try to run the same command in the Docker container, instead of a password change I get the ldappasswd help:
Change password of an LDAP user
usage: ldappasswd [options] [user]
user: the authentication identity, commonly a DN
It's strange, but it works well if I delete the -s parameter with the password. But if I do that, the command prompts for the password.
My dev machine and the Docker container have the same version of ldappasswd. The container is an Ubuntu image with ldap-utils installed.
Is there any other way to modify this command, or has anyone had a similar problem?
Thanks for any help.
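Not a confirmed diagnosis, but one common cause of this exact symptom is an extra layer of word splitting: if the command reaches the container through `docker exec sh -c ...` or a similar wrapper without careful quoting, the password value gets split into extra arguments, and ldappasswd responds with its usage text. A minimal shell sketch of the effect (no LDAP involved, password hypothetical):

```shell
# Simulate "-s 'pass word'" passing through an unquoted expansion: by then the
# single quotes are literal data, and the value splits into two words.
opts="-s 'pass word'"
set -- $opts            # unquoted expansion re-splits the string
unquoted_count=$#       # 3 words: -s, 'pass, word'

# Passing the arguments directly keeps the password intact as one word.
set -- -s 'pass word'
quoted_count=$#         # 2 words: -s and the password
echo "$unquoted_count $quoted_count"
```

If this is the cause, wrapping the whole inner command in one properly quoted string for the container's shell should restore the behavior seen on the host.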
Background:
I am running a Google Compute Engine VM, called host.
There is a Docker container running on the machine called container.
I connect to the VM using an account called user@gmail.com.
I need to connect through ssh from the container to the host, without being prompted for the user password.
Problem:
Minutes after successfully connecting from the container to the host, the user's ~/.ssh/authorized_keys gets "modified" by some process from Google itself. As far as I understand, this process appends the SSH keys needed to connect to the VM. In my case, though, the process seems to overwrite the key that I generated from the container.
Setup:
I connect to host using Google Compute Engine GUI, pressing on the SSH button.
Then I follow the steps described in this answer on AskUbuntu.
I set the password for user on host:
user@host:~$ sudo passwd user
I set PasswordAuthentication to yes in sshd_config, and I restart sshd:
user@host:~$ sudo nano /etc/ssh/sshd_config
user@host:~$ sudo systemctl restart sshd
I enter the Docker container using bash, generate the key, and copy it to the host:
user@host:~$ docker exec -it container /bin/bash
(base) root@container-id:# ssh-keygen
(base) root@container-id:# ssh-copy-id user@host
The key is successfully copied to the host, the host is added to the known_hosts file, and I am able to connect from the container to the host without being prompted for the password (as I gave it during the ssh-copy-id execution).
Now, if I detach from the host, let some time pass, and attach again, I find that the user's ~/.ssh/authorized_keys file contains some keys generated by Google, but there is no trace of my key (the one that allows the container to connect to the host).
What puzzles me most is that we have consistently used this process before and never had such a problem. Some accounts on this same host still have keys from containers that no longer exist!
Does anyone have any idea about this behavior? Do you know of any solution that would let me keep the key for as long as it is needed?
It looks like the accounts daemon is doing this task. You could refer to this discussion thread for more details.
You might find the OS Login API an easier management option. Once enabled, you can use a single gcloud command or API call to add SSH keys.
In case anyone has trouble with this even AFTER adding SSH keys to the GCE metadata:
Make sure your username is in the SSH key description section!
For example, if your SSH key is
ssh-rsa AAAA...zzzz
and your login is ubuntu, make sure you actually enter
ssh-rsa AAAA...zzzz ubuntu
since it appears Google copies the key to the authorized_keys of the user specified inside the key.
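To illustrate the point, the comment is simply the third whitespace-separated field of the public key line, which can be checked with awk (key material hypothetical):

```shell
# A public key line: type, base64 key material, comment (used by GCE as the username)
key='ssh-rsa AAAAB3NzaC1yc2EAAAADAQAB ubuntu'
user=$(printf '%s\n' "$key" | awk '{print $3}')
echo "$user"
```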
In case anyone is still looking for a solution to this: I solved the issue by storing the SSH keys in Compute Engine metadata: https://cloud.google.com/compute/docs/instances/adding-removing-ssh-keys
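For reference, the value of the ssh-keys metadata entry is one key per line, each prefixed with the target username and a colon — a sketch with a hypothetical user and a truncated key:

```
ubuntu:ssh-rsa AAAA...zzzz ubuntu
```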
How do I configure SSH connections in jenkins, when I have an intermediate bastion with its own user and key like this:
Host jump
User user1
HostName jumpdns
IdentityFile /Users/myname/.ssh/jumpkey.pem
Host server
User user2
HostName serverdns
IdentityFile /Users/myname/.ssh/serverkey.pem
ForwardAgent yes
ProxyJump jump
This works on the CLI as ssh server. But I don't know how to encode that into my Jenkins, which runs locally on my laptop under my own user rather than a separate jenkins user, i.e. JENKINS_HOME=/Users/myname/.jenkins.
I looked into the Publish over SSH plugin; it does provide a jumpdns option, but not the jump host's own user and key. And it seems like others have been looking for this without a solution.
What is the best way to configure Jenkins for my SSH setup?
Assuming you are on Jenkins version 2.303.2 (the latest as of this writing):
If your master has an SSH version that supports the jump-host option (OpenSSH_7.4p1, for example), you can try this:
- Select Launch method 'Launch agent via execution of command on controller'
- Launch command: ssh -tt -J user@jump_host_name user@destination_host
https://www.tecmint.com/access-linux-server-using-a-jump-host/
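If the SSH client that Jenkins invokes does not read your per-user ~/.ssh/config (for example because it runs with a different HOME), the jump host's user and key can be made explicit in a dedicated config file passed with -F — a sketch reusing the hosts and key paths from the question, with a hypothetical file name:

```
# Hypothetical file /Users/myname/.ssh/jenkins_config, used in the launch command as:
#   ssh -F /Users/myname/.ssh/jenkins_config -tt server
Host jump
    User user1
    HostName jumpdns
    IdentityFile /Users/myname/.ssh/jumpkey.pem

Host server
    User user2
    HostName serverdns
    IdentityFile /Users/myname/.ssh/serverkey.pem
    ProxyJump jump
```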
[Help]
Description of Problem
jenkins-cli not authenticating with provided ssh private key
Observed
When running the jenkins-cli command:
java -jar ~/jenkins-cli.jar -s http://localhost:8080 -i ~/.ssh/ccdevops who-am-i
The console output is:
Authenticated as: anonymous
Authorities:
Desired
Jenkins should authenticate as the user with the matching public key in their profile
Relevant Information
jenkins v 2.46.3 and using the correct cli jar for the version
Ubuntu 16.04.2 LTS (GNU/Linux 4.4.0-78-generic x86_64)
only using recommended plugins
running on azure cloud in china east datacenter
azure network security group for the vm is configured to allow traffic
the SSH key being used was created on the Ubuntu machine Jenkins is running on, and the public key is in the user's entry in the Jenkins user database
the key was created using the instructions on the GitHub site
jenkins-cli is running on the server and not from a remote host
Steps Tried Already
tried different keys with and without passphrases
tried the web address with localhost and the ip address
tried other jenkins-cli commands with same result
tried creating other users and putting a public SSH key in their profiles (no duplicate keys between users)
tried moving the jenkins-cli jar from the server root to the Jenkins home directory
You should also specify the SSH method and the user on the command line with -ssh and -user USER_NAME respectively. After that, your command would look like this:
java -jar ~/jenkins-cli.jar -s http://localhost:8080 -i ~/.ssh/ccdevops who-am-i -ssh -user USER_NAME
Also note that you'll need to be able to access the server via SSH as well.
I am a newbie to Mesos. I have installed a DCOS cluster locally on one system (CentOS 7).
Everything came up properly and I can access the DCOS GUI, but when I try to connect through the CLI, it asks me for a password.
I have not been prompted for any kind of password during local installation through vagrant.
But when I issue the following command:
[root@blade7 dcos-vagrant]# dcos node ssh --master-proxy --leader
Running `ssh -A -t core#192.168.65.90 ssh -A -t core#192.168.65.90 `
core#192.168.65.90's password:
Permission denied, please try again.
core#192.168.65.90's password:
I don't know the password to be given.
Kindly help me resolve this issue.
Since the local installation is based on Vagrant, you can use the following convenient workaround: log directly into the virtual machines using Vagrant's ssh.
open a terminal and enter vagrant global-status to see a list of all running vagrant environments (name/id)
switch into your dcos installation directory (e.g., cd ~/dcos-vagrant), which contains the file Vagrantfile
run vagrant ssh <name or (partial) id> in order to ssh into the virtual machine. For example, vagrant ssh m1 connects to the master/leader node, which gives you essentially the same shell as dcos node ssh --master-proxy --leader would do.
Two more tips:
within the virtual machine, the directory /vagrant is mounted to the current directory of the host machine, which is nice for transferring files into/from the VM
you may try to find out the correct ssh credentials of the default vagrant user and then add these (rather than the pem file retrieved from a cloud service provider) via ssh-add to your host machine. This should give you the ability to login via dcos node ssh --master-proxy --leader --user=vagrant without a password
The command shows that you are trying to log in to the server using the user ID "core". If you do not know the password of user "core", I suggest resetting it and trying again.