The Bower docs say:
N.B. If you aren't authoring a package that is intended to be consumed by others (e.g., you're building a web app), you should always check installed packages into source control.
Does anyone have a good answer to why?
If I am making a web app, I don't want my repo cluttered with every file change that comes with each new version of library X.
I just want to update bower.json dependencies. I would think most projects will have a build step or similar, for instance with grunt. The build step would make sure to call bower install/update before building, so that those files are present for concat/minification etc. Or even a plain copy to some dist folder.
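For instance, the kind of build step I have in mind is little more than this (a rough sketch; the script name, the "build" task and the dist folder are just how I'd lay it out):

    # build.sh -- rough sketch of the build step described above
    bower install      # fetch the dependencies listed in bower.json
    grunt build        # assumed "build" task: concat/minify into dist/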
Am I missing something?
It's to lock down your dependencies, so that a bad dependency can't break your app and a remote being down can't block a deployment. This can happen even though you have a build step, since you probably don't thoroughly test every build, and automated tests don't catch everything, especially not visual regressions. Also, multiple developers might end up with different versions of a dependency; by committing the dependencies you ensure everyone stays on the same version. I also find that viewing the diff is a good way to verify nothing malicious was introduced into the dependency tree.
In the Node world, npm shrinkwrap partially solves this, but it doesn't yet do checksum matching. Bower currently has an open ticket to implement the same.
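For reference, the npm side of that looks roughly like this (assuming an existing package.json):

    npm install         # install the dependency tree from package.json
    npm shrinkwrap      # writes npm-shrinkwrap.json, locking the exact version
                        # of every transitive dependency -- but no checksums yet
    git add package.json npm-shrinkwrap.json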
You can read more about it in this blog post: Checking in front-end dependencies
This answer is non-technical, but it is a practical reason not to check in bower components.
I'd rather recommend locking the packages down in bower.json than checking them in. Trust me, you don't want thousands of files being downloaded, unpacked and tracked on a developer's machine; slower machines in particular struggle with very large and deep file paths. And in this age of the internet, I believe it's always easier to download the packages than to carry them around in the repo.
It is just a matter of preference, and it comes from experience: I checked a project with its bower components into GitHub, and uploading and downloading it was noticeably worse, even on a relatively new Mac.
The last time I deployed the project, the build worked perfectly.
In the meantime I have changed nothing that would affect the pip requirements, yet I get an error when building:
Could not find a version that matches Django<1.10,<1.10.999,<1.11,
<1.12,<1.9.999,<2,<2.0,==1.9.13,>1.3,>=1.11,>=1.3,>=1.4,>=1.4.10,
>=1.4.2,>=1.5,>=1.6,>=1.7,>=1.8
I get the same error when building the project locally with docker-compose build web.
What could be the problem?
The problem here is that although you may not have modified any requirements, the dependencies of a project can sometimes change on their own.
You may even have pinned all of your own requirements (which is generally a good idea) but that still won't help if one of them itself has an unpinned dependency.
Anywhere an unpinned dependency exists, you can run into this.
Here's an example. Suppose your requirements.in contains super-django==1.2.4. That's better than simply specifying super-django, as you won't be taken by surprise if a new, incompatible version of the Super Django package is released.
But suppose that in turn Super Django 1.2.4, in its requirements, lists:
Django==1.11
django-super-admin
If a new version of Django Super Admin is released that requires, say, Django>=2.0, your next build will fail because of the mutually exclusive requirements.
To track down the culprit when you run into such a failure, you need to examine the build logs. There you'll see something like:
Could not find a version that matches Django==1.11,>=2.0 [etc].
So now you know to look back through the logs to find what is wanting to install Django>=2.0, and you'll find:
adding Django>=2.0
from django-super-admin==1.7.0
So now you know that it's django-super-admin==1.7.0 that is the key. Since you can't trust super-django to pin the correct version of django-super-admin, you'll have to do it yourself, by adding django-super-admin<1.7.0 to the requirements.in of your project.
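In this hypothetical example, the project's requirements.in would then contain something like:

    super-django==1.2.4
    django-super-admin<1.7.0    # pinned here because super-django does not pin it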
There's more information about this at How to identify and resolve a dependency conflict.
You can also Pin all of your project’s Python dependencies to ensure this never happens again with any other dependency, though you sacrifice some flexibility for the guarantee.
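One way to do that (just a sketch, assuming you use pip-tools) is to keep your loose requirements in requirements.in and compile a fully pinned requirements.txt from it:

    pip install pip-tools
    pip-compile requirements.in       # writes requirements.txt with every
                                      # dependency, transitive ones included, pinned
    pip install -r requirements.txt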
Note: I am a member of the Divio team. This question is one that we see quite regularly via our support channels.
It seems there is a breaking change to the name of the bower package (https://github.com/Breeze/breeze.js.labs/commit/193e79443918c3836699aa06a0545e101f44d54d).
It looks like the name of the bower package was changed from breeze.js.labs to breeze-client-labs. I didn't see anything documented in any of the recent release notes.
This is causing my build to fail since the bower.json file still references breeze.js.labs and not breeze-client-labs.
I would expect the old package to remain, to avoid breaking people's builds.
Thanks
You're catching us in a release transition. We haven't prepared release notes yet; they're just days away.
However, the breeze.js.labs github repo never served a bower package before. There was a "breeze.js.labs" bower package served out of a different github repo. It hadn't been updated in months and frankly I don't know what it held. It was not official. It was not ours. We had no responsibility for it. I asked the owner to give it up and he did (for which we are grateful).
I also note that "breeze labs" warns you that it is never to be regarded as stable or permanent. That's not a license to break you for the fun of it, but it does give us more leeway than we have in breeze core.
The net of it is that I'm sorry for the inconvenience... and that's as far as that goes.
Moving forward, there are 2 packages to consider: breeze-client and breeze-client-labs. Please adjust accordingly.
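In bower.json that means swapping the old name for the new ones, roughly like this (the version ranges are only illustrative):

    {
      "dependencies": {
        "breeze-client": "~1.5.0",
        "breeze-client-labs": "~1.5.0"
      }
    }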
We're using bower to manage all of our front-end dependencies for our project. I've run into an issue that I think is solvable, but I'm not familiar enough with bower to understand how to do it.
In our project we have one particular dependency that needs to be modified slightly, as in probably 3 lines of code, to meet our project's needs. Obviously it's not kosher to edit the file in bower_components directly, but this change needs to be made. What's the best way to go about doing this and maintaining the dependency tree, without having to commit bower_components?
Understandably we'd have to make a sort of "local copy" of the dependency that we grabbed from Github initially. Any tips?
You could fork it and point to your fork instead, or just clone it somewhere locally and point to the local repo.
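In bower.json that could look something like this (the package name, fork URL and branch are placeholders):

    {
      "dependencies": {
        "some-library": "https://github.com/your-org/some-library.git#your-patched-branch"
      }
    }

For a local clone, the value can instead be a relative path, e.g. "../forks/some-library".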
I am using bower in a client-side project. Not all devs will have bower on their machine, so we need to include all bower dependencies in our source repo. But we are only allowed to check in the parts of the bower_components directory that are directly used by the project (i.e. only check in the css/js/html files, and avoid checking in tests, docs, etc.).
Is there an existing script that can help with this, or do I need to manually go through and delete all unwanted pieces of bower components?
Unfortunately that's the only safe way you can do it, for now. The real solution is to encourage package authors to add unneeded files to the ignore property in their bower.json.
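For package authors, that ignore property looks something like this (the file names are just typical examples):

    {
      "name": "some-library",
      "main": "dist/some-library.js",
      "ignore": [
        "test",
        "docs",
        "*.md",
        ".*"
      ]
    }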
Some time ago I asked a question about how to integrate an application using dependencies on a build server, and I got quite satisfying answers. Today I am facing a different case. For a project I have to use non-redistributable dependencies (the RDL object model for SSRS). It means that, out of the box, these assemblies are not meant to be deployed for development purposes. But somehow, I need to...
My first guess was to publish them in the GAC. Fine, it worked and the build server was able to compile the project smoothly. But then I realised that it broke some applications like the Report Server and the Report Builder (probably it would also break BIDS). So publishing in the GAC is definitely not a decent solution.
My second guess was to check the assemblies into source control. Well, it could work if I had only 2 assemblies amounting to about 1 MB. But here it is 23 assemblies and 29 MB I have to check in, so it is definitely not suitable either.
I don't know much about MSBuild targets and maybe they could be a solution, but I really have no idea how to use them. I have been scratching my head hard and now I have to choose between breaking my builds or breaking my services!
As some people stated in the comments, we finally decided to source control the assemblies.
But as we work in an environment where we sometimes need to move a lot, which means not always being in the office and having to work remotely over an occasionally unreliable Internet connection, we put some strict conditions on whether we source control the assemblies or deploy them on the build server and development machines.
Assemblies will be source controlled if all these criteria are met:
The assemblies/framework are not deployable/redistributable
Deploying the assemblies/framework may interfere with the stability of local machine services
The total size of the deployed assemblies in the project collection does not exceed 100 MB
You could try using a different repository just for these assemblies, and do a checkout/update during the build job.
Also, if you want to keep it in the main repo as well, you could use svn:externals (http://svnbook.red-bean.com/en/1.0/ch07s03.html) to automatically update the DLLs when you update your working copy.
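Setting that up could look roughly like this (the repository URL and folder name are placeholders):

    # pull the shared assemblies into a libs/ folder via an external definition
    svn propset svn:externals "libs http://svn.example.com/repos/shared-assemblies/trunk" .
    svn commit -m "Add svn:externals definition for shared assemblies"
    svn update      # fetches/updates the external into libs/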