How to add a stream in Coverity Analysis

I am new to Coverity Analysis. I need to add a stream in Coverity; how can I achieve this?
Below is my script
-solution:'nameofsolution.sln' -targets:"Rebuild" -configuration:"Release" -platform:"x64" -coverityHost:"%system.CoverityHost%" -coverityPort:%system.CoverityPort% -coverityUser:"%system.CoverityUser%" -coverityPassword:"%system.CoverityPassword%" -coverityStream:"TEST" -coverityOutputDir:"%env.CoverityWorkFolder%" -triggerType:'%teamcity.build.triggeredBy%' %ForceCoverity%.
Now, where and how can I add the stream "TEST" in Coverity? Thanks for your help!

cov-manage-im --host "<YOUR_HOST>" --user="<USER_NAME>" --password="<PASSWORD>" --mode streams --add --set "name:<STREAM_NAME>"
For future reference, pass the "--help" flag to cov-manage-im to see if it has what you need.

I am not sure whether your script is right, what your workflow is, or what kind of script it is.
To create a new stream, just navigate your browser to Coverity Connect and create one. Make sure you actually have permission to add streams to your project.
In Coverity Connect there is a Configuration option in the top-right corner. There you can find the projects and streams that have already been created. Add a stream (+ Stream), give it a name, and it will be added.
You can use that stream in your script.
Example:
cov-commit-defects --dir /Users/admin/Coverity_intermediate_file_directory --host 192.178.196.125 --port 8080 --user admin --password admin123 --stream stream_name
Replace stream_name with the name of the stream you created in Coverity Connect.
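Once the stream exists, the usual capture/analyze/commit round trip looks roughly like the sketch below. This is only a minimal, hedged example: the intermediate directory, host, port, credentials and the MSBuild invocation are placeholders, not values taken from your TeamCity step.
# Minimal sketch, assuming the "TEST" stream already exists in Coverity Connect.
cov-build --dir /tmp/coverity-idir msbuild nameofsolution.sln /t:Rebuild /p:Configuration=Release /p:Platform=x64
cov-analyze --dir /tmp/coverity-idir
cov-commit-defects --dir /tmp/coverity-idir --host "<YOUR_HOST>" --port 8080 --user "<USER_NAME>" --password "<PASSWORD>" --stream TEST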

Related

Powershell Custom Cmdlet

I've got the following task:
Create a script (cmdlet) to output network adapter information and properties. It must include a -Name switch to immediately output info on a particular network adapter and a -File switch to save the info to a file. Also add help that can be called with the -? switch.
Thanks in advance.

Failing to Connect to a Running Phoenix Application with IEx Remote Shell in Docker

I deploy a Phoenix app using Docker and run the --remsh command from within the same container.
But it returns "could not contact remote node".
Does anybody know the solution?
Here is the snapshot
You seem to start the application as :nonode@nohost. To connect to it, you should have it started with either a short or fully qualified name.
mix release.init creates a rel folder with two template files in it. Check env.sh.eex and make sure you start the release with a short name. This should work:
export RELEASE_DISTRIBUTION=sname
export RELEASE_NODE=<%= @release.name %>
Sidenote: please post everything as plain text, not as images.
There is a problem in your command: please use --cookie instead of -cookie.
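As a hedged illustration (the container and release names below are made up, not from the question), attaching from inside the running container then looks roughly like this, either via the release's remote command or with iex directly:
# Hypothetical names: my_phx_container / my_app. Adjust to your own release.
docker exec -it my_phx_container bin/my_app remote
# or attach manually, matching the node's short name and cookie:
docker exec -it my_phx_container sh -c 'iex --sname debug --cookie "<RELEASE_COOKIE>" --remsh "my_app@$(hostname -s)"'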

Updating IOS images via SCP in bash with expect

Good day. I am attempting to create/run a script that will allow me to send an updated IOS from a server to my network devices. The following code works when I put in a manual IP address right before the ":flash" command.
#!/usr/bin/expect
set IOSroot "/xxxxx/xxx/c3750e-universalk9-mz.150-2.SE10a.bin"
set pw xxxxxxxxxxxxxxxxxxx
spawn scp $IOSroot 1.1.1.1:flash:/c3750e-universalk9-mz.150-2.SE10a.bin
expect "TACACS Password:"
send "$pw\r"
interact
The code there works great and as expected. The issue arises when I try to use a file called "ioshost" with a list of IP's and use that within this script to get some automation. I have tried various things to get this to work. Some of them are as follows:
Setting variables:
IPHosts=$(cat ioshost)
set IPHost 'cat ioshost'
Along with trying to use the read/do command...
while read line; do
spawn scp $IOSroot $line:flash:/c3750e-universalk9-mz.150-2.SE10a.bin
done < ioshost
None of these seem to work and I am looking for guidance. Please note I understand that setting a password is not best practice, but setting RSA keys as mentioned in other articles is not allowed, so I am forced to do it this way.
Thank you for your time.
You can use one Expect script and one Bash script.
First update your Expect script a bit:
#!/usr/bin/expect
set IOSroot "/xxxxx/xxx/c3750e-universalk9-mz.150-2.SE10a.bin"
set pw xxxxxxxxxxxxxxxxxxx
spawn scp $IOSroot [lindex $argv 0]:flash:/c3750e-universalk9-mz.150-2.SE10a.bin
# ^^^^^^^^^^^^^^^^
expect "TACACS Password:"
send "$pw\r"
interact
Then write a simple Bash for loop:
for host in $(<ioshost); do
expect /your/script.exp $host
done
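Assuming ioshost simply lists one device IP per line (an assumption, since the file format isn't shown in the question), the whole run could look like this:
# hypothetical ioshost contents, one IP per line:
#   10.10.10.1
#   10.10.10.2
chmod +x /your/script.exp
for host in $(<ioshost); do
  expect /your/script.exp "$host"   # each IP becomes [lindex $argv 0] in the Expect script
done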

How to export credentials from one jenkins instance to another?

I am using the credentials plugin in Jenkins to manage credentials for git and database access for my team's builds. I would like to copy the credentials from one jenkins instance to another, independent jenkins instance. How would I go about doing this?
UPDATE: TL;DR: follow the link provided below in a comment by Filip Stachowiak; it is the easiest way to do it. In case it doesn't work for you, read on.
Copying the $HUDSON_HOME/credentials.xml is not the solution, because Jenkins encrypts passwords and these can't be decrypted by another instance unless both share a common key.
So either you use the same encryption keys in both Jenkins instances (Where's the encryption key stored in Jenkins?) or you can do the following:
Create the same user/password you need to share in the 2nd Jenkins instance, so that a valid password is generated.
What is really important is that the user ids in both credentials.xml files are the same. For that (see the credentials.xml example below), for the user jenkins the identifier <id>c4855f57-5107-4b69-97fd-298e56a9977d</id> must be the same in both credentials.xml files; a quick way to check this is sketched after the XML example.
<com.cloudbees.plugins.credentials.SystemCredentialsProvider plugin="credentials#1.22">
<domainCredentialsMap class="hudson.util.CopyOnWriteMap$Hash">
<entry>
<com.cloudbees.plugins.credentials.domains.Domain>
<specifications/>
</com.cloudbees.plugins.credentials.domains.Domain>
<java.util.concurrent.CopyOnWriteArrayList>
<com.cloudbees.plugins.credentials.impl.UsernamePasswordCredentialsImpl>
<scope>GLOBAL</scope>
<id>c4855f57-5107-4b69-97fd-298e56a9977d</id>
<description>Para SVN</description>
<username>jenkins</username>
<password>J1ztA2vSXHbm60k5PjLl5jg70ZooSFKF+kRAo08UVts=
</password>
</com.cloudbees.plugins.credentials.impl.UsernamePasswordCredentialsImpl>
</java.util.concurrent.CopyOnWriteArrayList>
</entry>
</domainCredentialsMap>
</com.cloudbees.plugins.credentials.SystemCredentialsProvider>
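A hedged way to compare the ids on both machines (the path assumes a default $JENKINS_HOME; adjust as needed):
# Print every credential id in the file; run this on both instances and compare.
grep -o '<id>[^<]*</id>' "$JENKINS_HOME/credentials.xml"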
I was also facing the same problem. What worked for me: I copied credentials.xml, config.xml and the secrets folder from the existing Jenkins to the new instance. After restarting Jenkins, things worked fine.
This is what worked for me.
Create a job in Jenkins that takes the credentials and writes them to the output. If Jenkins replaces the password in the output with ****, just obfuscate it first (add a space between each character, reverse the characters, base64-encode it, etc.).
I used a PowerShell job to base64-encode it:
[convert]::ToBase64String([text.encoding]::Default.GetBytes($mysecret))
And then used PowerShell to convert the base64 string back to a regular string:
[text.encoding]::Default.GetString([convert]::FromBase64String("bXlzZWNyZXQ="))
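If the job runs on a Linux agent instead, the same round trip can be done with plain shell (a hedged equivalent; the secret value is obviously a placeholder):
echo -n 'mysecret' | base64        # prints bXlzZWNyZXQ=, which Jenkins will not mask
echo 'bXlzZWNyZXQ=' | base64 -d    # decode it back on your own machine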
After trying quite a few things for several days, this is the best solution I found for migrating my secrets from a Jenkins 2.176 to a new, clean Jenkins 2.249.1: jenkins-cli was the best approach for me.
The process is quite simple: dump the credentials from the old instance to a local machine (or a Docker pod with Java installed) as an XML file (unencrypted), and then upload it to the new instance.
Before starting you should verify the following:
Access to the credentials section on both Jenkins instances
Download the jenkins-cli.jar from one of the instances (https://www.your-jenkins-url.com/cli/)
Have User and Password/Token at hand.
Notice: in case your Jenkins uses an OAuth service you will need to
create a token for your user. Once logged into Jenkins, click your
profile at the top right to verify your username and generate a
password/token.
Now for the special sauce, you have to execute both parts from the same machine/pod:
Notice: if your instances are using valid certificates and you want to
secure your connection, you must remove the -noCertificateCheck
flag from both commands.
# OLD JENKINS DUMP # 
export USER=madox@example.com
export TOKEN=f561banana6ead83b587a4a8799c12c307
export SERVER=https://old-jenkins-url.com/
java -jar jenkins-cli.jar -noCertificateCheck -s $SERVER -auth $USER:$TOKEN list-credentials-as-xml "system::system::jenkins" > /tmp/jenkins_credentials.xml
# NEW JENKINS IMPORT # 
export USER=admin
export TOKEN=admin
export SERVER=https://new-jenkins-url.com/
java -jar jenkins-cli.jar -noCertificateCheck -s $SERVER -auth $USER:$TOKEN import-credentials-as-xml "system::system::jenkins" < /tmp/jenkins_credentials.xml
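Optionally, a quick sanity check after the import (same placeholders as above; hedged) is to list what landed on the new instance:
# Should print the ids/descriptions of the credentials that were just imported.
java -jar jenkins-cli.jar -noCertificateCheck -s $SERVER -auth $USER:$TOKEN list-credentials "system::system::jenkins"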
If you have the credentials.xml available and the old Jenkins instance still running, there is a way to decrypt individual credentials so you can enter them in the new Jenkins instance via the UI.
The approach is described over at the DevOps stackexchange by kenorb.
This does not convert all the credentials for an easy, automated migration, but helps when you have only few credentials to migrate (manually).
To summarize, you visit the /script page over at the old Jenkins instance, and use the encrypted credential from the credentials.xml file in the following line:
println(hudson.util.Secret.decrypt("{EncryptedCredentialFromCredentialsXml=}"))
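If you prefer to script this instead of pasting into the UI, the same one-liner can be submitted to the script console's HTTP endpoint; a hedged sketch, reusing the $USER/$TOKEN/$SERVER placeholders from the jenkins-cli answer above:
# Decrypt one credential on the old instance without opening the /script page.
curl -u "$USER:$TOKEN" --data-urlencode "script=println(hudson.util.Secret.decrypt('{EncryptedCredentialFromCredentialsXml=}'))" "$SERVER/scriptText"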
To migrate all credentials to a new server, from Jenkins: Migrating credentials:
Stop Jenkins on new server.
new-server # /etc/init.d/jenkins stop
Remove the identity.key.enc file on new server:
new-server # rm identity.key.enc
Copy secret* and credentials.xml to new server.
current-server # cd /var/lib/jenkins
current-server # tar czvf /tmp/credentials.tgz secret* credentials.xml
current-server # scp /tmp/credentials.tgz $user@$new-server:/tmp/
new-server # cd /var/lib/jenkins
new-server # tar xzvf /tmp/credentials.tgz -C ./
Start Jenkins.
new-server # /etc/init.d/jenkins start
Migrating users from one Jenkins instance to another Jenkins on a new server:
I tried following https://stackoverflow.com/a/35603191, which led to https://itsecureadmin.com/2018/03/26/jenkins-migrating-credentials/. However, I did not succeed in following these steps.
Further, I experimented with exporting the /var/lib/jenkins/users (or {JENKINS_HOME}/users) directory to the new instance on the new server. After restarting Jenkins on the new server, it looks like all the user credentials are available there.
Additionally, I cross-checked if the users can log in to the new Jenkins instance. It works for now.
PS: This code is for Red Hat servers.
Old server:
cd /var/lib/jenkins
or cd into wherever your Jenkins home is
tar cvzf users.tgz ./users
New server:
cd /var/lib/jenkins
scp <user>@<oldserver>:/var/lib/jenkins/users.tgz .
sudo tar xvzf users.tgz
systemctl restart jenkins
Did you try to copy the $JENKINS_HOME/users folder and the $JENKINS_HOME/credentials.xml file to the other Jenkins instance?

interactive docker build from dockerfile?

I want to use a Dockerfile to build an image. However, some commands need user input as they run. Currently, the build is not successful because docker exits on user input. I know I can use the -i -t options on the docker run command, but I want to do that in a Dockerfile. How is that possible?
You can try with expect or a similar tool.
The easiest way to configure it is using the autoexpect tool, which lets you run the commands interactively and creates an expect script for you.
I couldn't get the rvmsudo stuff working (I haven't used it and didn't want to spend too much time with it), so I decided to demonstrate with vi instead. First run autoexpect:
$ autoexpect vi test
This will open vi and you can create or edit the file and save it. After exiting vi you'll see your file test as well as an expect script, script.exp.
You can then remove the test file and execute script.exp. It will recreate the same file using the same steps.
The autoexpect tool is great, but you may have to create a script from scratch if you need more control over what happens, e.g. if you don't want the script to depend on the exact recorded input.
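As a hedged sketch of how the recorded script ends up inside a build (the installer name and the use of apt-get are assumptions, not taken from the question), the pieces fit together roughly like this:
# 1. On your machine, record the interactive session once:
autoexpect -f install.exp ./interactive-installer.sh
# 2. In the Dockerfile, copy the recording in and replay it non-interactively:
#      COPY install.exp /tmp/install.exp
#      RUN apt-get update && apt-get install -y expect && expect /tmp/install.exp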
