I am trying to use the ArtifactDeployer plugin to copy artifacts from the WORKSPACE/jobs/ directory into a remote directory on a Windows 7 machine. The Jenkins machine runs Linux.
However, Jenkins never succeeds, throwing errors like:
[ArtifactDeployer] - Starting deployment from the post-action ...
[ArtifactDeployer] - [ERROR] - Failed to deploy. Can't create the directory ...
Build step '[ArtifactDeployer] - Deploy artifacts from workspace to remote directories' changed build result to FAILURE
I am not sure how to use the Remote Directory parameter.
Here is how I am trying to specify the remote directory:
remote Directory - \ip address of that machine\users\public
Is it possible to copy artifacts that are on a Linux machine to a Windows 7 machine?
Please let me know how to specify the remote directory.
Reading the plugin page isn't very helpful when it comes to configuring it. The text seems to hint that you need local access (from the node where the job is running) to the (remote) folder you want to deploy to. For a first test, use a local directory (on your Linux box) to see if you can get it to work. Second, the correct way to address a Windows share is \\servername\sharename\subdirs. Remember that you might need to log in to the share.
You might need to install Samba or CIFS utilities to connect to the Windows share from your Linux system. There is also a setting in Windows that determines whether your Windows box will accept connections via aliases. If it doesn't, you need to use the hostname to access the share; the IP and any alias for the server will not work then.
e.g.:
hostname: RTS3524
alias: JENKINSREPO
ip: 192.168.15.33
share: temp
For the example above, only \\RTS3524\temp will work; \\192.168.15.33\temp will not.
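For example, a minimal sketch of mounting that share from the Linux side with cifs-utils (the username and password are placeholders):

sudo mkdir -p /mnt/jenkinsrepo
sudo mount -t cifs //RTS3524/temp /mnt/jenkinsrepo -o username=winuser,password=secret
# if the mount works, point ArtifactDeployer's Remote Directory at /mnt/jenkinsrepo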
Related
I am trying to deploy an app in a Kubernetes cluster following these instructions:
https://cloud.ibm.com/docs/containers?topic=containers-cs_apps_tutorial#cs_apps_tutorial
Then I make a build following the instructions with ibmcloud cr build -t registry.<region>.bluemix.net/<namespace>/hello-world:1 .
The output looks good except for a security warning:
SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host. All files and directories added to build context will have '-rwxr-xr-x' permissions. It is recommended to double check and reset permissions for sensitive files and directories.
But as this was just a test I did not worry.
At the next stage, following the instructions, I run this command:
kubectl run hello-world-deployment --image=registry.<region>.bluemix.net/<namespace>/hello-world:1
I get the following error
error: failed to discover supported resources: Get http://localhost:8080/apis/apps/v1?timeout=32s: dial tcp [::1]:8080: connectex: No connection could be made because the target machine actively refused it.
As you can see in the message, it looks like it is trying to talk to my local PC rather than IBM Cloud. What have I missed?
As @N Fritze mentioned in the comments, in order to gain access to the Kubernetes cluster you may need to set the KUBECONFIG environment variable, which holds a list of kubeconfig files providing the information needed to authenticate to the API server.
You can find more information about managing the IBM Cloud Kubernetes Service in the official IBM Cloud documentation. As the issue has already been solved, this answer is composed for future readers' research.
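For those readers, a rough sketch of pointing kubectl at the cluster with the IBM Cloud CLI (the cluster name is a placeholder; the subcommand spelling has varied across CLI versions):

ibmcloud login
ibmcloud ks cluster config --cluster mycluster   # older CLIs: ibmcloud ks cluster-config mycluster
kubectl config current-context                   # should now name the IBM Cloud cluster
kubectl get nodes                                # no longer dials localhost:8080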
I am using Docker for macOS / Windows.
I connect to external servers via ssh from a shell in a Docker container.
For now, I generate an ssh key in the Docker shell and manually send it to the servers.
However, with this method, the ssh key is deleted every time I rebuild the container.
So I want to set up an initial ssh key when I build the image.
I have 2 ideas:
1. Mount the .ssh folder from my macOS into the Docker container and persist it.
(Permission control might be difficult and complex....)
2. Write a script that generates the ssh key and sends it to the servers, in docker-compose.yml or the Dockerfile.
(Every time I build, a new key is sent...??)
Which is the best practice? Or do you have any idea how to set the ssh key automatically?
Best practice is typically not to make outbound ssh connections from containers at all. If what you're trying to add to your container is a binary or application code, manage your source control setup outside Docker and COPY the data into the image. If it's data your application needs to run, again, fetch it externally and use docker run -v to inject it into the container.
As you say, managing this key material securely, and obeying ssh's Unix permission requirements, is incredibly tricky. If I really didn't have a choice but to do this, I'd write an ENTRYPOINT script that copies the private key from a bind-mounted volume into my container user's .ssh directory. But my first choice would be to redesign my application flow so it doesn't need this at all.
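To illustrate, a minimal sketch of such an ENTRYPOINT script; the /run/secrets mount point and key name are my assumptions, not anything Docker mandates:

#!/bin/sh
# entrypoint.sh: copy a bind-mounted private key into place so the
# container user owns it and ssh accepts its permissions
set -e
if [ -f /run/secrets/id_rsa ]; then
  mkdir -p "$HOME/.ssh"
  cp /run/secrets/id_rsa "$HOME/.ssh/id_rsa"
  chmod 700 "$HOME/.ssh"
  chmod 600 "$HOME/.ssh/id_rsa"
fi
exec "$@"

You would then start the container with something like docker run -v $PWD/keys:/run/secrets:ro myimage, so the key never gets baked into the image itself.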
After reading the "I'm a windows user .." comment, I'm thinking you are solving the wrong problem: you are looking for easy (sane) shell access to your servers. There are two simpler solutions.
1. Windows Subsystem for Linux -- https://en.wikipedia.org/wiki/Windows_Subsystem_for_Linux (not my choice)
2. Cygwin -- http://www.cygwin.com -- for that comfy Linux feel to your cmd :-)
Here is how I install it:
Download and install it. Be careful to pick only the features beyond the base that you need; there is a LOT and most of it you will not need -- like the compilers and X. Make sure that SSH is selected. Don't worry, you can rerun the setup as many times as you want (I do that occasionally to update what I use).
Start the bash shell (there will be a link after the installation)
a. Run 'cygpath -wp $PATH' (see the sketch after these steps).
b. Look at the results -- there will be a couple of folders at the beginning of the path that look like "C:\cygwin\bin;C:\cygwin\usr\local\bin;...", simply all the paths that start with "C:\cygwin", provided you installed Cygwin into the "C:\cygwin" directory.
c. Add these paths to your system path.
d. Start a new instance of CMD and run 'ls'; it should now work directly under the Windows shell.
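For step (a), roughly what you would run and see, assuming Cygwin was installed into C:\cygwin:

cygpath -wp "$PATH"
# typical output:
# C:\cygwin\bin;C:\cygwin\usr\local\bin;C:\Windows\system32;...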
Extra credit.
a. Move all the ".xxx" files that were created during the first launch of the shell in your C:\cygwin\home\<username> directory to your Windows home directory (C:\Users\<username>).
b. exit any bash shells you have running
c. delete c:\cygwin\home directory
d. Use the Windows mklink utility (from an Administrator shell) to create a link named home under Cygwin pointing to C:\Users: 'mklink /J C:\Cygwin\home C:\Users'
This will make your windows home directory the same as your cygwin home.
After that, follow the normal ssh setup under Cygwin bash and you will be able to generate the keys and distribute them normally to servers.
NOTE: you will have to sever the inheritance of permissions from Windows to your <home>/.ssh folder (in the folder's security settings), leaving just your user id, then set permissions on the folder and the various key files underneath appropriately for SSH using 'chmod', as sketched below.
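The permissions ssh typically insists on look like this (file names assume the default key names):

chmod 700 ~/.ssh
chmod 600 ~/.ssh/id_rsa ~/.ssh/config
chmod 644 ~/.ssh/id_rsa.pub ~/.ssh/known_hosts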
Enjoy -- some days I have to squint to remember I'm on a windows box ...
I am trying to create a VM to run a few tests and destroy it once done. I am using Jenkins' 'Boot up Vagrant VM' option to boot up a VM, and Chef to install the required packages and run the tests in it. When testing is completed in this VM, is there any way it (the VM) can communicate the results back to the Jenkins job that triggered it?
I am stuck with this part.
I have implemented booting up the VM from a custom Vagrant box which has all the essential packages and software required to run the tests.
First of all, thanks to Markus; had he left an answer, I'd surely have accepted it.
I edited the Vagrantfile to add a synced folder:
config.vm.synced_folder "host/", "/guest"
This creates a guest folder in the VM, and the host folder we created on the host system is mirrored into the VM as well.
All I did then, as Markus suggested, was set up polling from Jenkins (using the Files Found trigger plugin) on a folder, watching for the specific file the VM is expected to communicate.
In the VM, whenever the testing is done, I simply put the result in the guest folder; it automatically shows up on my local machine in the folder Jenkins is polling, the watching job gets built, and ta dahhh ....!
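For clarity, the flow from inside the VM looks roughly like this (file names beyond the host/guest folders are hypothetical):

# inside the VM, after the tests finish:
cp /tmp/test-results.xml /guest/      # lands in host/ on the host machine
# Jenkins' Files Found trigger polls the host/ folder for test-results.xml
# and kicks off the downstream job once the file appears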
These two parts are together because I think they're related to each other.
I'm running Jenkins' latest LTS war file (v1.596.2) directly from the command line. I'm using an Openshift DIY cartridge to do this.
I have set my "JENKINS_HOME" environment variable to "$OPENSHIFT_DATA_DIR/jenkins".
export JENKINS_HOME=$OPENSHIFT_DATA_DIR/jenkins
Part 1: where is my config.xml file?
This works fine and most files seem to have been stored there, but I can't find the config.xml file... I'm probably overlooking something, but it's nowhere to be seen!
Part 2: Boot up error
I also have this error when I boot up my server using:
java -jar jenkins.war --httpListenAddress=$OPENSHIFT_DIY_IP --ajp13Port=-1
It shows this error message in the console:
http://pastebin.com/30eBBHN5
The server does boot, but it just shows this screen:
http://i.imgur.com/PKVydeP.png
I know OpenShift only allows you to bind to port 8080; otherwise you have to bind to a private port in the range 15000 - 35530 (see this). However, I couldn't find any documentation on which ports Jenkins tries to bind or how to change the bindings, other than the main HTTP (8080) and HTTPS (not used) ports.
(my jenkins cartridge URL - may not be running)
Any ideas as to what I should try?
According to the Administering Jenkins page (https://wiki.jenkins-ci.org/display/JENKINS/Administering+Jenkins), config.xml is stored in the JENKINS_HOME location; you need to create it there (along with any other configuration files that you need). You should be set, having pointed JENKINS_HOME at a folder in your OPENSHIFT_DATA_DIR.
As for the port issue: you might check out the current Jenkins cartridge that OpenShift provides (https://github.com/openshift/origin-server/tree/master/cartridges/openshift-origin-cartridge-jenkins), look at some of the configuration files they are using and their startup commands, and see if that information helps you get yours running.
Also, don't use export JENKINS_HOME=$OPENSHIFT_DATA_DIR/jenkins;
use "rhc set-env" instead, it's much safer than exporting...
VMWare Player and Workstation have the ability to easily create a shared folder directly to the host:
http://www.vmware.com/support/ws5/doc/ws_running_shared_folders.html
This feature seems to be missing, or has been moved, in vSphere. How do you set it up in vSphere?
Thanks.
Actually, we can't have shared folders using ESXi, but we can work around it by creating a folder in the host datastore and copying files from/to it using the scp protocol. Of course, you need administrative privileges on the host for that.
This link explains how to set up SSH Server and Shell Access on ESXi:
http://pubs.vmware.com/vsphere-50/index.jsp?topic=%2Fcom.vmware.vcli.migration.doc_50%2Fcos_upgrade_technote.1.4.html
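Once SSH is enabled, the copy itself is plain scp; the host and datastore names below are hypothetical, with datastores living under /vmfs/volumes on the ESXi host:

scp ./build-artifacts.zip root@esxi-host:/vmfs/volumes/datastore1/shared/
scp root@esxi-host:/vmfs/volumes/datastore1/shared/results.log .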
This feature doesn't make sense with vSphere, which is why you can't find it.
Workstation, Player, and Server all run on top of a "host OS", while ESX (vSphere-managed) runs on bare metal. You're not supposed to have access to the native file system on the host, so there is no option to do so.