How can we automatically run ssh-keygen? We need to generate a key automatically and copy it from one GCP instance to another.
For that we need to SSH into the instance and edit the sshd_config file, changing the parameters to PermitRootLogin yes and PasswordAuthentication yes.
How can we make these changes automatically while creating a GCP instance?
You can use startup scripts to generate SSH keys.
If you want to simplify things you can create a script like this:
#! /bin/bash
ssh-keygen -b 2048 -t rsa -f /tmp/sshkey -q -N ""
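If you also want the startup script to apply the sshd_config changes mentioned in the question, a hedged sketch you could append to it might look like this (assumes a standard OpenSSH /etc/ssh/sshd_config; the SSH service name varies by distro):
# Enable root login and password authentication, then restart the SSH daemon.
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication yes/' /etc/ssh/sshd_config
systemctl restart sshd || systemctl restart ssh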
Upload your script into a storage bucket (create a new one or use an existing one) and change the file permissions so that it is readable by everyone: click "Edit permissions", then "Add a new item", and fill in the rest accordingly.
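If you prefer the command line over the console UI, a rough equivalent (using the cfb1 bucket and my.script object names from the example below; this assumes the bucket does not use uniform bucket-level access) could be:
# Upload the startup script and make the object publicly readable.
gsutil cp my.script gs://cfb1/my.script
gsutil acl ch -u AllUsers:R gs://cfb1/my.script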
Then you can create a VM instance with your startup script:
gcloud compute instances create test-instance1 --scopes storage-ro --metadata startup-script-url=https://storage.cloud.google.com/cfb1/my.script
In the same way, you can also run commands that copy your newly created SSH keys onto other GCP machines.
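For instance, a sketch that pushes the freshly generated public key to a second VM (host and user names are placeholders, and it assumes password authentication is enabled on the target):
# Copy the public key over, then log in with the new key.
ssh-copy-id -i /tmp/sshkey.pub myuser@my-vm2
ssh -i /tmp/sshkey myuser@my-vm2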
But if all the VM instances are in the same VPC and region (both conditions have to apply), you can log in from one to another without creating any additional SSH keys or doing anything else. Compute Engine takes care of everything: you can just SSH into one instance, type ssh my-vm2, and you're in.
Unless you have something else in mind (like VMs in different VPCs).
I have seen some similar questions, but none of them appear to solve my problem. I want to add a user to a docker container and in my Dockerfile, I define the username with:
ARG USERNAME="some_user"
Instead, I want the username to be the current user's computer username, as obtained by running the command whoami in the local terminal.
So what I would like to have is something like:
ARG USERNAME=$(whoami)
This $(whoami) should be obtained from the local system environment, and not from the docker container.
Is there a way to do this for Dockerfiles? I have thought of .env and docker-compose solutions, but as far as I know these also require each user to set their own username.
There is no integrated way to execute arbitrary commands on the host directly outside of a container using just docker build / docker-compose build.
So, to execute an arbitrary command to get/generate the required information, you'll need to provide a custom script / use another build system to call docker/docker-compose with the respective flags, or maybe generate the .env file from a template / interactively.
If you only need the current user name you may want to use the $USER / $LOGNAME environment variables that are set by the system in many default configurations. But since these are just normal environment variables their values may be incorrect / empty / manually changed by the user, see this question.
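For example, a minimal wrapper script (the image name is a placeholder, and it assumes your Dockerfile declares ARG USERNAME) could look like this:
#!/bin/sh
# Forward the host username into the image build as a build arg.
docker build --build-arg USERNAME="$(whoami)" -t my-image .
# Or rely on the environment variable instead:
# docker build --build-arg USERNAME="$USER" -t my-image .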
I am trying to make an instance of GraphDB on Docker. After creating the instance, I need to make a repository to import the data into the instance. However, when I make a repository, it says that the repository does not exist. When I use the loadrdf command to import data, I receive an error saying that the repository does not exist.
dist/bin/loadrdf -f -i repo-test -m parallel /opt/graphdb/home/data/*.ttl
The default data location of GraphDB is the data sub-directory of GraphDB's home directory, which in turn defaults to the distribution directory.
For the docker image this is /opt/graphdb/dist, so the default data directory is /opt/graphdb/dist/data.
But also in the docker image the default home is changed to /opt/graphdb/home, so the data directory becomes /opt/graphdb/home/data. This is done by passing the -Dgraphdb.home=/opt/graphdb/home java option when starting GraphDB.
So, when you created your repository it was created at /opt/graphdb/home/data/repositories/repo-test.
Your problem is that the loadrdf tool doesn't know about the changed home directory.
To overcome this try exporting the GDB_JAVA_OPTS variable with value -Dgraphdb.home=/opt/graphdb/home before running loadrdf, or as a one-liner:
GDB_JAVA_OPTS='-Dgraphdb.home=/opt/graphdb/home' ./dist/bin/loadrdf -f -i repo-test -m parallel /opt/graphdb/home/data/*.ttl
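The same thing split into two steps:
# Equivalent to the one-liner above.
export GDB_JAVA_OPTS='-Dgraphdb.home=/opt/graphdb/home'
./dist/bin/loadrdf -f -i repo-test -m parallel /opt/graphdb/home/data/*.ttl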
I'm using Jenkins X for microservice build / deployment. In each environment there are shared secrets used across microservices (client keys etc.) which are injected into deployment.yaml as environment variables using valueFrom and secretKeyRef. This works well in Production and Staging, where the namespaces are well known, but since preview generates a new namespace each time, these secrets will not exist. Is there a way to copy secrets from another, known namespace, or is there a better approach?
You can create another namespace called jx-preview to store preview specific secrets, and add this line after the jx preview command in your Jenkinsfile
sh "kubectl get secret {secret_name} --namespace={from_namespace} --export -o yaml | kubectl apply --namespace=jx-$ORG-$PREVIEW_NAMESPACE -f -"
Not sure if this is the best way though
We've got a command to link services from one namespace to another - such as linking services from staging to your preview environment via jx step link services.
It would be nice to add a similar command to copy secrets from a namespace in the same way. I've raised an issue to track this new feature.
Another option is to create your own Job in charts/preview/templates/myjob.yaml and in that job create whatever Secrets you need however you want, and then annotate it so that it's triggered as a post-install hook of your Preview chart.
When my jenkins slave starts on its node, the command is run locally from that server:
/bin/java -jar /usr/local/jenkins/slave.jar \
-jnlpUrl https://example.com/computer/foo/slave-agent.jnlp \
-secret <big long hex id>
The "big long hex id" found its way into a git repository and is now compromised. How do I tell my jenkins master to change it?
If you delete the Slave from Jenkins, then re-add it, it should have a new ID assigned to it.
I am guessing that this is using the JNLP protocol and not JNLP4. The classes that generate the secrets (JnlpSlaveAgentProtocol / JnlpAgentReceiver) use an HMAC that takes the hostname as one input and a secret key as the other. The secret input is fetched from the DefaultConfidentialStore, which generates and stores a file in $JENKINS_HOME/secrets/. The name of the file in this case is probably: $JENKINS_HOME/secrets/jenkins.slaves.JnlpSlaveAgentProtocol.secret
To get a different result you either need to change the hostname or remove that file (a new one will be auto generated).
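A sketch of the second option, assuming the default $JENKINS_HOME layout and the file name guessed above (back the file up rather than deleting it outright):
# Move the per-protocol secret aside so Jenkins regenerates it on restart.
mv "$JENKINS_HOME/secrets/jenkins.slaves.JnlpSlaveAgentProtocol.secret" /tmp/jnlp-secret.bak
# Restart Jenkins, then grab the new per-agent secret from each agent's JNLP page.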
Links:
https://javadoc.jenkins-ci.org/jenkins/security/DefaultConfidentialStore.html
https://javadoc.jenkins-ci.org/jenkins/slaves/JnlpAgentReceiver.html
I am using the credentials plugin in Jenkins to manage credentials for git and database access for my team's builds. I would like to copy the credentials from one jenkins instance to another, independent jenkins instance. How would I go about doing this?
UPDATE: TL;DR: Follow the link provided below in a comment by Filip Stachowiak; it is the easiest way to do it. In case it doesn't work for you, go on reading.
Copying the $HUDSON_HOME/credentials.xml is not the solution, because Jenkins encrypts passwords and these can't be decrypted by another instance unless both share a common key.
So, either you use the same encryption keys in both Jenkins instances (Where's the encryption key stored in Jenkins?) or what you can do is:
Create the same user/password you need to share in the 2nd Jenkins instance, so that a valid password is generated
What is really important is that the user ids in both credentials.xml files are the same. For that (see the credentials.xml example below), for the user jenkins the identifier <id>c4855f57-5107-4b69-97fd-298e56a9977d</id> must be the same in both credentials.xml files
<com.cloudbees.plugins.credentials.SystemCredentialsProvider plugin="credentials#1.22">
<domainCredentialsMap class="hudson.util.CopyOnWriteMap$Hash">
<entry>
<com.cloudbees.plugins.credentials.domains.Domain>
<specifications/>
</com.cloudbees.plugins.credentials.domains.Domain>
<java.util.concurrent.CopyOnWriteArrayList>
<com.cloudbees.plugins.credentials.impl.UsernamePasswordCredentialsImpl>
<scope>GLOBAL</scope>
<id>c4855f57-5107-4b69-97fd-298e56a9977d</id>
<description>Para SVN</description>
<username>jenkins</username>
<password>J1ztA2vSXHbm60k5PjLl5jg70ZooSFKF+kRAo08UVts=
</password>
</com.cloudbees.plugins.credentials.impl.UsernamePasswordCredentialsImpl>
</java.util.concurrent.CopyOnWriteArrayList>
</entry>
</domainCredentialsMap>
</com.cloudbees.plugins.credentials.SystemCredentialsProvider>
I was also facing the same problem. What worked for me: I copied credentials.xml, config.xml and the secrets folder from the existing Jenkins to the new instance. After restarting Jenkins, things worked fine.
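A rough sketch of that copy (the host name and the /var/lib/jenkins home directory are placeholders for your own setup):
# Copy the credential store, main config and encryption material, then restart the new instance.
scp /var/lib/jenkins/credentials.xml /var/lib/jenkins/config.xml user@new-server:/var/lib/jenkins/
scp -r /var/lib/jenkins/secrets user@new-server:/var/lib/jenkins/
ssh user@new-server 'systemctl restart jenkins'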
This is what worked for me.
Create a job in Jenkins that takes the credentials and writes them to output. If Jenkins replaces the password in the output with ****, just obfuscate it first (add a space between each character, reverse the characters, base64 encode it, etc.)
I used a Powershell job to base64 encode it:
[convert]::ToBase64String([text.encoding]::Default.GetBytes($mysecret))
And then used Powershell to convert the base64 string back to a regular string:
[text.encoding]::Default.GetString([convert]::FromBase64String("bXlzZWNyZXQ="))
After trying quite a few things for several days, this is the best solution I found for migrating my secrets from a Jenkins 2.176 to a new, clean Jenkins 2.249.1: jenkins-cli was the best approach for me.
The process is quite simple: just dump the credentials from the old instance to a local machine (or a Docker pod with Java installed) as an XML file (unencrypted) and then upload it to the new instance.
Before starting you should verify the following:
Access to the credentials section on both Jenkins instances
Download the jenkins-cli.jar from one of the instances (https://www.your-jenkins-url.com/cli/)
Have User and Password/Token at hand.
Notice: In case your Jenkins uses an OAuth service you will need to create a token for your user. Once logged into Jenkins, click your profile at the top right to verify your username and generate a token.
Now for the special sauce, you have to execute both parts from the same machine/pod:
Notice: If your instances are using valid Certificates and you want to
secure your connection you must remove the -noCertificateCheck
flag from both commands.
# OLD JENKINS DUMP #
export USER=madox@example.com
export TOKEN=f561banana6ead83b587a4a8799c12c307
export SERVER=https://old-jenkins-url.com/
java -jar jenkins-cli.jar -noCertificateCheck -s $SERVER -auth $USER:$TOKEN list-credentials-as-xml "system::system::jenkins" > /tmp/jenkins_credentials.xml
# NEW JENKINS IMPORT #
export USER=admin
export TOKEN=admin
export SERVER=https://new-jenkins-url.com/
java -jar jenkins-cli.jar -noCertificateCheck -s $SERVER -auth $USER:$TOKEN import-credentials-as-xml "system::system::jenkins" < /tmp/jenkins_credentials.xml
If you have the credentials.xml available and the old Jenkins instance still running, there is a way to decrypt individual credentials so you can enter them in the new Jenkins instance via the UI.
The approach is described over at the DevOps stackexchange by kenorb.
This does not convert all the credentials for an easy, automated migration, but helps when you have only few credentials to migrate (manually).
To summarize, you visit the /script page over at the old Jenkins instance, and use the encrypted credential from the credentials.xml file in the following line:
println(hudson.util.Secret.decrypt("{EncryptedCredentialFromCredentialsXml=}"))
To migrate all credentials to a new server, from Jenkins: Migrating credentials:
Stop Jenkins on new server.
new-server # /etc/init.d/jenkins stop
Remove the identity.key.enc file on new server:
new-server # rm identity.key.enc
Copy secret* and credentials.xml to new server.
current-server # cd /var/lib/jenkins
current-server # tar czvf /tmp/credentials.tgz secret* credentials.xml
current-server # scp credentials.tgz $user@$new-server:/tmp/
new-server # cd /var/lib/jenkins
new-server # tar xzvf /tmp/credentials.tgz -C ./
Start Jenkins.
new-server # /etc/init.d/jenkins start
Migrating users from one Jenkins instance to another Jenkins on a new server:
I tried following https://stackoverflow.com/a/35603191, which led to https://itsecureadmin.com/2018/03/26/jenkins-migrating-credentials/. However, I did not succeed in following these steps.
Further, I experimented with exporting the /var/lib/jenkins/users (or {JENKINS_HOME}/users) directory to the new instance on the new server. After restarting Jenkins on the new server, it looks like all the user credentials are available there.
Additionally, I cross-checked if the users can log in to the new Jenkins instance. It works for now.
PS: This code is for Red Hat servers
Old server:
cd /var/lib/jenkins
or cd into wherever your Jenkins home is
tar cvzf users.tgz ./users
New server:
cd /var/lib/jenkins
scp <user>@<oldserver>:/var/lib/jenkins/users.tgz /var/lib/jenkins/.
sudo tar xvzf users.tgz
systemctl restart jenkins
Did you try to copy the $JENKINS_HOME/users folder and the $JENKINS_HOME/credentials.xml file to the other Jenkins instance?