Is there any way to use the p4 grep command line to run syntactic searches against a remote Perforce repository? I understand p4 grep can be used after cloning a repo to local disk, but that would require hundreds of GB of disk space, not to mention the time required to sync that much data. I'd prefer a way of doing this without having to clone the source code locally.
There is absolutely no requirement to clone a local copy to run any Perforce command as long as you can connect to the server.
p4 set P4PORT=remote.p4.server:1666
p4 set P4USER=your_username
p4 login
p4 grep -e EXPR //depot/path/...
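For example, a case-insensitive search that also prints line numbers might look like this (the pattern and depot path are placeholders; check p4 help grep for the flags your server version supports):
p4 grep -i -n -e "connectionPool" //depot/main/src/...
Note that the server caps how many file revisions a single grep may scan (the dm.grep.maxrevs configurable), so very broad depot paths may need to be narrowed.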
Is there an AWS CLI command to simply load my image tar to AWS? (I do not have Docker on my computer at all, nor do I want to increase the image size of our CD pipeline; I prefer to keep it very, very small since image size adds to the start-up time in CircleCI/GitHub CI.)
Is there a way to do this?
If not, what is the way to do this with Docker so that I do not have to load the image into a local Docker registry first and can instead push it directly to AWS ECR?
Context:
Our CI job builds on the PR and writes a version/build number onto the image, and into a file in CI on the PR as well, but none of this should be deployed until it is on the master branch. A PR cannot merge into master until it is up to date with master, and a merge of master into the PR triggers CI again, so the PR is guaranteed to be current when it lands on master. This CI job produces the artifact that CD can deploy (i.e. there is no need to rebuild it all over again, which sometimes takes a while).
Our CD job triggers on merge to master, reads the artifacts, and needs to deploy them to AWS ECR. The image for our CD is very, very small, containing just the AWS tooling right now (no need for Java, no need for Gradle, etc.).
I'm sorry, I don't fully understand your requirements, but maybe you could use a tool like skopeo.
Among other things, it lets you copy your container tar to different registries, including AWS ECR, without needing to install Docker; something like this:
skopeo login -u AWS -p $(aws ecr get-login-password) 111111111.dkr.ecr.us-east-1.amazonaws.com
skopeo copy docker-archive:build/your-jib-image.tar docker://111111111.dkr.ecr.us-east-1.amazonaws.com/myimage:latest
This approach is also documented as a possible solution in a GitHub issue in the GoogleContainerTools repository.
In that issue they recommend another tool named crane, which looks similar in purpose to skopeo, although I have never tested it.
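For completeness, a rough crane equivalent might look like this (untested; the account ID, region, and image names are the same placeholders as above):
crane auth login -u AWS -p $(aws ecr get-login-password) 111111111.dkr.ecr.us-east-1.amazonaws.com
crane push build/your-jib-image.tar 111111111.dkr.ecr.us-east-1.amazonaws.com/myimage:latest
Like skopeo, crane pushes the tarball straight to the registry without a local Docker daemon.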
I'm setting up Jenkins to build and then send changed files to a remote server using SSH. However, using the Publish over SSH plugin, I can only find an option to specify files to send over. I only want to send over files that have changed on GitHub. Is there a way to achieve this?
What you want to do might be outside of the scope of the Publish Over SSH plugin, but it is doable as a shell script.
You can run a command like this to get the files changed between the last build's commit and the current one:
git diff --name-only $GIT_PREVIOUS_COMMIT $GIT_COMMIT
Then, using the results of that, you can run a shell scp command.
You can do this in a pipeline or in an execute-script post-build action.
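A minimal sketch of that idea (REMOTE_USER, REMOTE_HOST, and REMOTE_DIR are placeholders you would set yourself; rsync is used instead of scp here because it recreates the directory structure on the remote side):
# Send only the files changed since the last build; rsync reads the list from stdin.
git diff --name-only $GIT_PREVIOUS_COMMIT $GIT_COMMIT | rsync -av --files-from=- . "$REMOTE_USER@$REMOTE_HOST:$REMOTE_DIR"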
Whenever I create a new job in Jenkins, it creates two workspaces on the Perforce server (with the suffixes -0 and -1).
Is it possible to tell Jenkins to only create a local workspace on the machine and not create the workspace on the Perforce server?
As I have many Jenkins jobs, the Perforce server will soon be cluttered with workspaces.
Perforce is a centralized system where the central server is the source of truth for the state of each workspace. It's technically possible to pull files from Perforce without creating a tracked workspace, and you could technically rewrite the Jenkins plugin to do this, but practically speaking that's like surfing Wikipedia via curl because you don't want to clutter your desktop by installing a browser.
My recommendation is to configure Jenkins to prefix all of its workspaces with a conveniently filterable string like ~jenkins~ so you can easily ignore them, and then to move on with your life.
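For example, in the Jenkins P4 plugin the workspace behaviour has a name-format field (its usual default is jenkins-${NODE_NAME}-${JOB_NAME}; the exact field name may differ by plugin version), which you could set to something like:
~jenkins~-${NODE_NAME}-${JOB_NAME}
You can then list or exclude those workspaces on the server side with a name filter, e.g.:
p4 clients -e '~jenkins~*'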
I want to take a scheduled backup of Jenkins jobs from the Jenkins server to some remote machine. I tried several Jenkins plugins for this, but none of them were able to back up to a remote machine.
I can successfully back up the Jenkins workspace using a shell script, but in that case it's hard to restore the backup.
Any suggestions?
If I can suggest a different approach: if you're using any kind of source control, it's better to back up your files and configuration there. For example, if you work with Git, you can open a repository for your Jenkins configuration.
Back up your:
jobs folder
nodes folder
parent folder files (config.xml, all plugin configurations, etc.)
Then it is only a matter of running a scheduled job from Jenkins every 12 hours running:
cd $JENKINS_HOME
git add --all
git commit -m "Automated backup commit: $BUILD_TIMESTAMP"
git push
* Make sure you have the right permissions to run those commands on the master
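As an example trigger for that job, a 'Build periodically' cron spec like the one below runs it roughly every 12 hours (a sketch; the H lets Jenkins spread such jobs across different minutes):
H */12 * * *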
This will enable you to:
Keep backups of your Jenkins configuration
Manage versions of backups
View history of changes you made to your configurations
Hope it helps.
My repository URL was changed, so I updated the hgrc file with the new URL. I also updated the new URL in the Jenkins job.
Now when I build the job, it hangs with the following output
-----------------Console Output-------------------
Started by user user123
Building in workspace D:\jenkins\jobs\api\workspace
[workspace] $ "C:\Program Files\TortoiseHg\hg.exe" showconfig paths.default
[workspace] $ "C:\Program Files\TortoiseHg\hg.exe" pull --rev branch
And it never moves forward. If I run the same command in cmd
"C:\Program Files\TortoiseHg\hg.exe" pull --rev branch
it works fine with the following output
pulling from ssh://repos-url/repos-name
no changes found
But Jenkins hangs on this command. I need some help to move forward.
Thank you
It sounds to me more like a Jenkins configuration question than a Mercurial one :)
Are you talking about an identical clone of the repository? Does the Jenkins user have read permissions on the repository it pulls from? Is it configured to pull via SSH too, and does it have the necessary SSH credentials? Or, if pulling via HTTP, is hgweb running on the repo, or another web server set up to serve hg?
Also, unless your project is called 'api', the path looks strange to me: Jenkins (by default) keeps its clones in /jenkins-home-directory/jobs/projectname/workspace
As I mentioned in my question, I recently changed the repository URL. The issue was that the new server's host key was not cached in the registry on the machine where Jenkins was hosted.
Resolution:
I logged in to the administrator account (the same account used by Jenkins) on my server through RDP, and on the other side I started building the job in Jenkins. When the console output reached this line
[workspace] $ "C:\Program Files\TortoiseHg\hg.exe" pull --rev branch
the RDP window showed me an alert:
PuTTY Security Alert
I clicked Yes, and I saw the Jenkins console progressing; the build was successful after that.
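For reference, one way to pre-cache the host key without watching for the dialog is to connect once with PuTTY's plink under the same account (a sketch; user and repos-url are placeholders):
echo y | plink -ssh user@repos-url exit
plink prompts to store the key in the cache on first connect, and the piped y accepts it.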