I have a monorepo managed with Rush, in which each project can be a frontend dependency, a backend dependency, or both. Some projects are used by both the frontend and the backend, while others are used by only one of them. Thus, I want to install only the specific projects from the monorepo that each consumer actually needs.
I first tried to install one of them on the frontend using a Git clone URL, something like this:
https://bitbucket.myprivateorg.mycountry/scm/#.git/<monorepo_project_subdirectory>.
I also tried some other combinations, but of course they did not work. I have looked it up and noticed that npm lacks support for installing a subdirectory of a repository. One solution I did find is GitPkg, but it only works for GitHub, and all my repos are strictly on Bitbucket.
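For reference, npm can install a dependency straight from a Git URL, but only when the package.json sits at the repository root; a minimal sketch, with a hypothetical Bitbucket URL and branch:
# Works: the whole repository is the package (package.json at its root)
npm install git+ssh://git@bitbucket.myprivateorg.mycountry/scm/proj/repo.git#my-branch
# Does not work: npm has no syntax for pointing at a subdirectory of the repository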
Is there any equivalent of GitPkg for Bitbucket?
Related
What is the difference between GitHub Project and Git from Source Code Management in Jenkins?
The GitHub option is dedicated to GitHub services and unlocks GitHub-specific features.
The other will work with any Git SCM server, including external vendors (GitHub, Bitbucket, your own infrastructure, Team Foundation Server, etc.). This means it supports only standard Git features, nothing that is vendor-specific.
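In other words, the generic section just drives plain Git commands against whatever server you point it at; a rough sketch of what effectively happens on the agent (the URL is hypothetical):
# Vendor-agnostic: the same commands work for GitHub, Bitbucket, or a self-hosted server
git clone https://bitbucket.example.com/scm/proj/repo.git
cd repo
git checkout master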
You can configure both without problem.
If you install the Bitbucket plugin, you will have a similar third section.
We have successfully created a Bower package, and it is working great with Subversion and private-bower.
The issue I am facing now is that the generated files also need to be committed (to Subversion or Git) to work properly for
bower install
or
bower update
Now every build creates a conflict in the local copy of the repository.
My question is: can I tell Bower to run a post-install or post-update command to execute a build?
In my case it should run a Grunt task to build the files locally.
Or is Bower simply not capable of such steps to avoid conflicts in the Git/SVN repository?
Or what is the suggested way to avoid merge conflicts?
There are postinstall hooks in Bower (https://github.com/bower/bower/blob/master/HOOKS.md), but you can't rely on them as a package provider (they're designed to be used by the developers who install your package).
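For completeness, a minimal sketch of such a hook, declared in the .bowerrc of the project that runs bower install (the Grunt task name is just an example):
# .bowerrc of the consuming project; the postinstall script runs after `bower install`
cat > .bowerrc <<'EOF'
{
  "scripts": {
    "postinstall": "grunt build"
  }
}
EOF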
For your situation, teams that provide Bower packages that require a build step have two main workflows:
The repo tied to the Bower registry is the source repo. Sources and build files are in it (like Bootstrap). So when you bower install, you retrieve the whole repo with all the sources, build routines, etc., which can be quite big. That's your case currently.
The repo tied to the Bower registry and the repo holding the sources are two different repos (like Angular):
Your build directory is in fact the repo tied to Bower.
Each time you make a new release, you build and then commit from that repo.
If you're having versioning problems, maybe you should switch to the second workflow (which will also let you drop all the unnecessary files like build routines); a sketch of a release under that workflow follows.
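A minimal sketch of what a release looks like in that second workflow (repo paths, task names, and the version number are all hypothetical):
# In the source repo: build the distributable files
grunt build
# In the separate repo that Bower points to: copy, commit, and tag the release
cd ../my-package-dist
cp -r ../my-package/dist/. .
git add -A
git commit -m "Release 1.2.3"
git tag 1.2.3
git push origin master --tags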
I have looked into Jenkins tutorials, and almost all of them mention that we should provide the URL of the Git repo.
Fine.
But once Jenkins has access to the Git repo, what part of the project does it look into to figure out which tests should be run, or whether to run them at all? Is it some configuration file in the repo?
I guess that depends on what kind of project your repo holds, if I understand the question correctly. The provided URL gives Jenkins the information it needs to do a git clone <url>, which checks out the project into the Jenkins workspace.
Then, according to the type, let's say it's a Maven project, you fill in the goals you'd like Jenkins to run locally, usually clean test. They are then run at the top level, the root of the project, on the assumption that it will find a pom.xml there. If not, you'll have to tell it where to look.
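Roughly, what such a job ends up doing in its workspace is something like this (the URL is hypothetical):
# Jenkins checks the project out into its workspace...
git clone https://bitbucket.example.com/scm/proj/app.git .
# ...then runs the configured goals from the project root, where it expects a pom.xml
mvn clean test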
A clearer answer would be easier to give if you told us what kind of project you'd like to build.
I have this confusion, and perhaps it is a basic question. I am planning to work on a Rails project with a friend who lives in a different location.
We have identified Heroku as our deployment platform and Bitbucket for SCM related activities.
Both my friend and I are new to Rails, but we are familiar with web development in general.
I'm working on a Windows box while he is on a Mac. We both have the same Rails version, including the gems. However, I'm not really sure how we should manage the source code and code integration. The reason I say this is that, when we commit the entire codebase from our systems, a few platform-specific Rails files get uploaded to the server, thereby rendering the deployment useless.
So my question is: if I am on Windows and my friend is on a Mac, what is the recommended way of working together on a single Rails project and deploying it to a common platform to get the same desired functionality?
Yes, by using the source control management (SCM) you selected when you set up your repository.
For instance, if you use Git, you would copy your repository using git clone (the command is provided via the Bitbucket interface by clicking on Clone), make your changes, and then git push your changes back into the repository.
The next time you want to code, execute git pull to get the latest repo changes, then work, and git push your changes back to the repo; a minimal sketch of that cycle is shown below.
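A minimal sketch of that workflow (the repository URL is hypothetical):
# One-time setup on each machine
git clone https://bitbucket.org/yourteam/yourproject.git
cd yourproject
# Each working session
git pull                        # grab your partner's latest changes first
# ...edit code...
git add -A
git commit -m "Describe the change"
git push                        # publish your changes back to Bitbucket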
For examples, see Bitbucket's fantastic tutorial.
As a side note, Bitbucket also supports Mercurial, although I haven't used it.
As for your actual issue, each person will need to make sure the platform-dependent files are excluded from your repository. If you're using Git, see the Git book, specifically the sections on .gitignore and git rm; a rough example is sketched below.
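For instance, something along these lines keeps platform-specific files out of the repo (the file names below are only examples; adjust them to whatever is actually breaking your deploys):
# Add the platform-specific files to .gitignore so they never get committed
printf '%s\n' '.DS_Store' 'Thumbs.db' '/log/*' '/tmp/*' >> .gitignore
# If a file was already committed, untrack it without deleting your local copy
git rm --cached config/some_platform_specific_file.yml
git commit -m "Stop tracking platform-specific files"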
I am developing some school grading software and decided to use GitHub to host the project. After building some code on my Ubuntu box I pushed it to GitHub and then cloned it down to my MacBook Pro. After editing the code on the MBP I pushed it back to GitHub. The next morning I tried to update my repo on the Ubuntu box with a git pull and it gave me all kinds of trouble.
What's the best way to work in this situation? I don't want to fork my own repo and I don't really want to send myself emails or pull requests. Why can't I just treat GitHub like a master and push/pull from it to all of my personal repos on different computers?
I'll assume your problem was that the machine on which you first created the repo crapped out when you tried to issue the git pull command.
When you clone an existing Git repository (like you did on your 2nd machine, the MacBook Pro), you're automatically set up so that your git pull commands will merge the remote changes with your local ones.
However, when you initially create a repo and then share it on a remote repository, you have to issue a few commands to make things as automated as on a cloned repo.
# GitHub gives you that instruction, you've already done that
# git remote add origin git@github.com:user_name/repo_name.git
# GitHub doesn't specify the following instructions
git config branch.master.remote origin
git config branch.master.merge refs/heads/master
These last two instructions configure Git so that future git pulls from this repo will merge all remote changes automatically.
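On newer Git versions the same effect can be had in a single step; a hedged sketch:
# Pushing with -u records origin/master as the upstream of master,
# which sets both config values shown above
git push -u origin master
# Or, if the branch already exists on the remote:
git branch --set-upstream-to=origin/master master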
The following is a bit of shameless self-promotion. If you use Ruby, I have created a Ruby-based tool that lets you deal with all these kinds of things with git remote branches. The tool is called, unsurprisingly, git_remote_branch :-)
If you don't use Ruby, my tool is probably gonna be too much of a hassle to install. What you can do is look at an old post on my blog, where most of the stuff grb can do for you was explicitly shown. Whip out your git notes file :-)
You can also add multiple SSH public keys to your GitHub account, one per machine.
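For instance, a key can be generated on each machine and its public half registered with GitHub (the key type and comment here are just examples):
# On each machine: generate a key and add the public part to your GitHub account
ssh-keygen -t ed25519 -C "ubuntu-box"
cat ~/.ssh/id_ed25519.pub    # paste this into GitHub -> Settings -> SSH keys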