Provision Travis CI PHP VM (trusty, sudo required) locally - travis-ci

Goal
I am trying to find out why a Laravel Dusk test fails on Travis CI, which is why I want to reproduce the Travis CI environment locally.
Setting
In my .travis.yml I have
sudo: required
dist: trusty
since Laravel Dusk requires this.
This is why I am trying to reproduce a "full VM environment" locally (not a Docker based environment).
Current findings
So far I have found that Travis provisions the full VMs with Chef, via Packer templates that eventually wrap the travis-cookbooks.
Question
How can I provision the "Travis full VM trusty sudo required" locally on Mac OS X?

An answer from Travis CI support states that it is not possible to do what I want:
Regarding your question about instructions for provisioning these
images from OS X, unfortunately, this is not possible at the moment
being so we don't have further instructions for that
Theoretically, the following must be done: the travis_ci_sugilite cookbook needs to be provisioned locally; the best starting point I could find is the Travis CI Packer Templates README.
In my case I could solve the failing build by using the new debug job feature.
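For reference, a debug job is restarted through the Travis CI API; a rough sketch of the call, with the API token and job id as placeholders you have to supply yourself (use api.travis-ci.org if the repository is still on the old open-source platform):
curl -s -X POST \
  -H "Content-Type: application/json" \
  -H "Travis-API-Version: 3" \
  -H "Authorization: token <your-api-token>" \
  -d '{ "quiet": true }' \
  https://api.travis-ci.com/job/<job-id>/debug
The restarted job then prints an SSH command in its log that lets you log in to the running build VM and poke around.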

Try setting LD_LIBRARY_PATH, which is like PATH but for libraries. For example:
LD_LIBRARY_PATH=$HOME/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH
More detailed information about library path variables is here.
The environment variables that specifically influence how the configure script passes arguments to the compilation are LIBS and LDFLAGS; bash ./configure --help mentions them.
And, as you mention in the comments, LIBRARY_PATH also needs to be set. See LD_LIBRARY_PATH vs LIBRARY_PATH for an explanation of the difference.
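A minimal sketch of how the pieces could fit together, assuming the libraries were installed under $HOME/lib and the library is called foo (both are placeholders, adjust to your layout):
export LIBRARY_PATH=$HOME/lib:$LIBRARY_PATH
export LD_LIBRARY_PATH=$HOME/lib:$LD_LIBRARY_PATH
./configure LDFLAGS="-L$HOME/lib" LIBS="-lfoo"
LIBRARY_PATH and LDFLAGS matter at link time, while LD_LIBRARY_PATH is consulted at run time when the built binaries are executed.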

Related

Travis CLI: Encrypting with "travis encrypt" returns — resource not found ({"file":"not found"})

I am trying to encrypt a token in the .travis.yml file using the command travis encrypt 123 --add deploy.api_key --pro, following the Travis CI docs.
Instead, the console returns resource not found ({"file":"not found"}).
Prerequisites:
I have installed the Travis CLI on my machine. I also successfully logged in to Travis from the CLI with travis login --github-token {tokenHere} --com. GitHub is connected with Travis CI.
Result with --debug:
[Screenshot: command result with --debug]
If I go manually to https://api.travis-ci.com/repos/{Nickname}/HerokuTest I receive an XML file with info about the project.
Can you tell me how to see what is wrong here, or where else I should look?
Resource not found basically means that Travis isn't able to find your project. As far as I know, secrets are associated with a project, so Travis needs to know which project you are generating your secrets for. All that said, you're probably missing the -r flag. Try something like this:
travis encrypt aws_secret_access_key=very_secret_string -r mygithubusername/reponame --com
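Applied to the command from the question, that would look roughly like this ({Nickname}/HerokuTest is taken from the API URL above; --com because the repository lives on travis-ci.com):
travis encrypt 123 --add deploy.api_key -r {Nickname}/HerokuTest --com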

Jenkins neo.sh not found error while using SAP Project Piper Library

I am setting up a CI/CD pipeline for the SCP Neo environment based on the prebuilt pipeline from Project Piper. When I execute the prebuilt Project Piper library for Jenkins I get the following error. The error says that neo.sh is not found, but I downloaded the Neo SDK and placed it in the neo-sdk folder; neo.sh is available inside the /opt/sap/neo-sdk/neo-java-web-sdk-3.39.10/tools folder on Linux.
[Screenshot: error in Jenkins]
[Screenshot: .pipeline/config file where that location is referenced]
Docker is not used; I set up Jenkins on Ubuntu inside a VMware virtual machine. If Docker is not available, the library is capable of running locally on the Jenkins server.
I am keeping the neo-sdk tool in a local folder that contains neo.sh, which is used to deploy applications to SAP Cloud Platform. I am not writing any scripts of my own, as everything comes prebuilt from Project Piper.
As already stated in the GitHub issue, you should extend your PATH environment variable so that it also covers /opt/sap/neo-sdk/neo-java-web-sdk-3.39.10/tools.
You do this by executing export PATH=$PATH:/opt/sap/neo-sdk/neo-java-web-sdk-3.39.10/tools.
An even better way would be to symlink neo.sh into a folder that is already on the PATH.
With echo $PATH you can display the variable and see which directories are already included.
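A sketch of the symlink variant, assuming /usr/local/bin is on the PATH of the user running Jenkins (adjust both paths to your installation):
sudo ln -s /opt/sap/neo-sdk/neo-java-web-sdk-3.39.10/tools/neo.sh /usr/local/bin/neo.sh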
The issue is solved, thanks to both of you. I used the Environment Injector (EnvInject) plugin in Jenkins: go to Manage Jenkins -> Configure System -> Environment variables and set the PATH there.
For more detail, see the comment from XP84 in this Stack Overflow link.

Jenkins drops a letter from file paths

We have a Code Composer Studio (Eclipse) project that uses CMake to generate makefiles and build. The project compiles as expected when it is manually imported onto the Jenkins slave (Win10 x64) and executed from the command line, but fails when the build is handled by Jenkins. The failure always follows the same pattern: a single letter is dropped from the path of an object file. For example, [Repo directory]/Cockpit_Scaling_and_Exceedance_data.dir becomes [Repo directory]/Cockpit_Scaling_and_Exceedance_ata.dir and linking fails because it cannot find the referenced object file.
I made sure that there are no differences between the account environment variables and the system environment variables and have also configured the Jenkins Service to use the admin account on the slave instead of SYSTEM in order to get rid of as many differences between Jenkins and the command line as possible.
The project will build successfully using one of our other Jenkins slaves (also Win10 x64), so we know that it's not a Windows 10 issue or a problem with our Jenkins configuration. Since I can't find any differences between the configuration of the two slave machines, I was hoping that someone might be able to suggest somewhere to look for this path issue.
I never found out why the paths to object files were being mangled, but I did get the project to build successfully on the slave via Jenkins. All I did was change all of my system environment variables into user environment variables. I copy-pasted, so I know that the variables themselves did not change.
I have no idea why this corrected this issue as I had inserted a whoami call at the beginning of the build to confirm that Jenkins is indeed running as a user and not System. I guess from this point on all of my environment variables will be specific to a user and not SYSTEM...
EDIT: The problem has returned. I have made no further progress in tracking down the cause behind this issue, but I have found that I do not see this symptom when running the scripts in a bash environment instead of a Windows command prompt. Fortunately for me the scripts have all been written in such a way that they can be run in both environments, so I have had my coworkers use bash instead for them.

Continuous Deployment with Codeship doesn't recognize environment variables

Recently I started to use Codeship as CI/CD tool for a small website that I am maintaining. I set up my Codeship project to deploy via sftp as described in their guide here.
The part where it fails is in the production script. I created a deploy folder and a production.sh script which contains the line:
put -rp "${HOME}/clone/build/*" /path/to/remote/dir
However when running the build I get the following error:
sftp> put -rp "${HOME}/clone/build/*" /path/to/remote/dir
stat ${HOME}/clone/build/*: No such file or directory
Echoing $HOME in a test script directly in Codeship gives me my home directory, so the environment variable works. However, at the moment the batch script is run, the environment variable is unrecognized.
How can I fix this? I'd rather not hardcode the path in my deployment script. It also seems unlikely that this happens because I added the .sh suffix to production.sh, whereas the docs only use a production script without an extension.
With no answer coming from the people at Codeship, I resorted to writing out the absolute path to the ${HOME} directory. I've been doing this for a while now with a few different projects and it all seems to work.
Replace ${HOME}/clone with ~/clone. This worked for me.
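For completeness, a one-line sketch of the adjusted command in production.sh, based on the answer above (the remote path is still a placeholder):
put -rp ~/clone/build/* /path/to/remote/dir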

Why is Jenkins ignoring the %PATH% variable when using MSBuild?

I am trying to use Jenkins to compile my MSBuild project created with Delphi. I have the MSBuild plugin installed into Jenkins and configured. I'm choosing the specific configuration for my build job.
I have set all the environmental variables in Jenkins that are required by the Delphi compiler (from rsvars.bat for you Delphi types.)
The project compiles just fine on the command line. If I do it on the command line, MSBuild reports a nice big fat PATH (the correct one) as part of the command line it uses to call the Delphi compiler.
However, when I try to use it with Jenkins, the result is quite different:
C:\Program Files (x86)\Embarcadero\RAD Studio\8.0\bin\dcc32.exe -$D- -$L- -$Y- --no-config -B -Q -AWinTypes=Windows;WinProcs=Windows;DbiTypes=BDE;DbiProcs=BDE;DbiErrs=BDE -DRELEASE -K00400000 HTMLWriterTestApp.dpr
Embarcadero Delphi for Win32 compiler version 22.0
Copyright (c) 1983,2010 Embarcadero Technologies, Inc.
Note the complete lack of a path, or any other information about where to find what the compiler needs. This information is there when I run from the command line.
Can anyone think of any reason why Jenkins is failing to get the correct PATH information?
Depending on how you run Jenkins, it may not have the full PATH that you are used to seeing. For example, if you run Jenkins as a Windows service and have your USER PATH variable populated, it won't necessarily be populated for the SYSTEM user. In this case, modify the Logon Account used by the service to be your account, rather than a system one.
I have Jenkins running inside GlassFish as the local system account (as it was installed, following a derivation of this blog post), and I was able to get it to work by setting the relevant variables (BDS, BDSCOMMONDIR, FrameworkDir, FrameworkSDKDir, etc.) in the "system configuration" via the Jenkins Environment Injector plugin.
Then the trick for Delphi to pick up the appropriate path is to send the command line parameter "Win32LibraryPath" to MSBuild. Make sure to escape your double quotes in this parameter in Jenkins or else you will pull out your hair.
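As a rough sketch, the extra argument passed to MSBuild in the Jenkins build step could look like the following; the library folders are placeholders for your own component paths, and the double quotes may need to be escaped as \" in the Jenkins field, as noted above:
/p:Win32LibraryPath=\"C:\Components\Lib;C:\ThirdParty\Lib\"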
I had Jenkins started as a Windows service and it could not find the SVN command even though I had SVN\bin in the PATH variable for the SYSTEM user.
It seems that the service only uses the environment variables that were available at start-up time.
So if you later add more environment variables for the Windows SYSTEM user, they will not be available to the service.
All you have to do is restart the Windows service and it will pick up the new environment variables.
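From an elevated command prompt that restart could look like this (assuming the service is registered under the name jenkins; check services.msc for the actual service name):
net stop jenkins
net start jenkins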
Git commands (git pull, where git, etc.) that fail to execute from Jenkins usually fail because of a PATH issue in the Windows environment variables.
Check the PATH in the environment variables.
Check whether the same command executes from a Windows command prompt.
If it does, and Jenkins is running as a Windows slave service, restart the slave service from services.msc.
Log out of Jenkins and log back in.

Resources