I have a small but growing Angular application that runs with a Rails backend.
Deployment right now takes a very long time because all of the devDependencies have to be installed and then everything has to be compiled.
What I'd like to do is create a distribution of my application and deploy from that. I think I'm going to want this anyhow, since the application is going to be downloaded by users, and I'd rather they not have to deal with installing the mass of npm modules and gems that should only be needed for development.
Is Jenkins going to fit the bill here? The tasks I see that need to be accomplished to create a new distribution are (see the sketch after this list):
Lint the code with JSHint, Rubocop, Brakeman, ...
Compile and/or compress the JavaScript, Sass, and images
Run the Karma and RSpec tests
Rev the files
Clean up any temporary or unnecessary files
Create git tag
Commit build
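Whatever CI server you pick will mostly just run a script over these steps, so it helps to have them runnable locally first. A rough sketch of the kind of build script a Jenkins job could execute (paths, config file names, and the dist checkout location are assumptions for illustration):

#!/bin/bash
set -e  # stop at the first failing step

# Lint
jshint app/assets/javascripts
rubocop
brakeman

# Test
karma start karma.conf.js --single-run
bundle exec rspec

# Compile/compress assets; the Rails asset pipeline also fingerprints (revs) them
RAILS_ENV=production bundle exec rake assets:precompile

# Tag, then commit the build output to a separate dist checkout (hypothetical path)
VERSION=$(date +%Y%m%d%H%M%S)
git tag -a "build-$VERSION" -m "Build $VERSION"
cp -R public/assets ../myapp-dist/assets
(cd ../myapp-dist && git add -A && git commit -m "Build $VERSION")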
Also, is it odd to want to commit the builds to a {original}-dist repository?
Is it possible to run bentoml build without importing the services.py file during the process?
I'm trying to put the bento build and containerize steps in our CI/CD server. Our model depends on some OS packages and some Python packages being installed. I thought I could run bentoml build to package the model code and binaries that are present, and leave the dependency specification to the containerize step.
To my surprise, the bentoml build process tried to import the service file during packaging, and the build failed since I didn't have the dependencies installed on my CI/CD machine.
Can I prevent this import while building/packaging the model? Maybe I should skip bentoml containerize, create my bento container by hand, and just execute bentoml serve inside.
I feel that having to install the dependencies by hand duplicates the effort of specifying them in bentofile.yaml and hurts the reproducibility of my environment.
This is not currently possible. The community is working on an environment management feature, such that an environment with the necessary dependencies will be automatically created during build.
There are a few things done in monorepos/monobuilds (you can have a monorepo with no monobuild) that make things very nice, but I don't see how Yarn workspaces solves them just yet. One of the main ones: I do not see how Yarn workspaces can do this part of a monobuild process (very typical at scale):
git status to figure out which files changed
map those files to projects that have changed
build those projects, plus the projects that depend on them, and so on transitively
I am a little confused there. As a monobuild scales up, we really want the build time for a server change to stay under 3 minutes, while a change to a library that may affect all projects would take a long time as it rebuilds the entire repo (unless we split the work across different machines, which brings the build time way down again).
I don't think there is necessarily one answer here, but there are a number of things to consider in the context of your project:
If your project is really humongously large, consider something like Bazel, which is a bit complex but allows for incremental building and testing.
There are some specific tools to help with building large projects quickly. For instance, for JavaScript, there are Turborepo and Nx.
Yarn workspaces or npm workspaces can generally help enable better monorepo build processes by allowing us to run build scripts for only a subset of workspaces. They won't solve the problem of figuring out what to build when, though; they just provide the basic building block of running scripts selectively.
Finally, a bit of Bash/Git/Makefile magic will probably be required. The following git command, for instance, can help us determine whether files in particular paths have changed since the last commit: git diff --quiet HEAD~1 HEAD -- [paths]. Note though that this can create a few annoying edge cases, especially if builds fail and we risk missing out on building projects that we should build.
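As a rough illustration of the three steps the question lists, the glue script tends to look something like this (the one-directory-per-project layout and the @myorg naming scheme are assumptions; a real monobuild would also have to build transitive dependents):

#!/bin/bash
# 1. Figure out which top-level directories changed in the last commit
changed=$(git diff --name-only HEAD~1 HEAD | cut -d/ -f1 | sort -u)

# 2./3. Map directories to workspaces and build them
for dir in $changed; do
  if [ -f "$dir/package.json" ]; then
    yarn workspace "@myorg/$dir" run build
  fi
done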
There are plugins for some CI/CD platforms that wrap the Git commands in a somewhat easier-to-use way. For instance, I have used the has-changed-path GitHub Action, and I think there was a plugin for Buildkite too, but I cannot find the link to it.
Generally, I think it will be challenging to have a monorepo setup that avoids installing dependencies for all modules/workspaces and compiling all the code. But I think it is possible to scale up to a few hundred thousand lines of code and hundreds of dependencies while keeping install and compile times under 2-3 minutes using TypeScript and Yarn, when making good use of TypeScript project references and something like Yarn Zero-Installs.
I have an existing Symfony 5 project with a MySQL database and an nginx web server. I want to dockerize this project, but on the web I have found different opinions on how to do that.
My plan is to write a multi-stage Dockerfile with at least a dev and a prod stage and build it with docker swarm. In my opinion, it is useful to install the complete code during the build and to have multiple composer.json files (one for every stage). On the web I have found opinions recommending not to reinstall the app on every build but to copy the vendor and var folders into the container. Another opinion was to start the installation after the container's build process is finished, but I think then the service is not ready when the app is deployed.
What do you think is the best practice here?
Build exactly the same image for all environments
Do not build two different images for prod and dev. One of the main Docker benefits is that you can provide exactly the same environment for production and dev.
You should control your environment with ENV vars. For example, you can enable Xdebug for dev and disable it for prod.
Composer has an option to install dev and production packages separately. You should use this feature.
If you decide to install some packages only for dev, still try to use the same Dockerfile for both environments. Do not use Dockerfile.prod and Dockerfile.dev; it will introduce some mess in the future.
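For example, a sketch of one image driven purely by ENV vars (the image name is made up, and XDEBUG_MODE assumes Xdebug 3 is installed in the image):

docker build -t myapp:1.0 .
docker run -e APP_ENV=dev -e XDEBUG_MODE=debug myapp:1.0    # dev: Xdebug on
docker run -e APP_ENV=prod -e XDEBUG_MODE=off myapp:1.0     # prod: same image, Xdebug off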
Multistage build
You can do a multistage build, described in the official Docker documentation, if your build environment requires many more dependencies than your runtime.
A typical example is compiling a program: during compilation you need a lot of libraries, but you produce a single binary, so your runtime does not need all the dev libraries.
The compilation happens in the first stage; in the second stage you just copy the binary, and that's it.
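A minimal sketch of what that could look like for a PHP app (image tags, paths, and Composer flags are assumptions to adapt): the first stage has the full Composer tooling, and the runtime stage only receives the application code and the vendor folder.

# Stage 1: install dependencies with the full Composer tooling
FROM composer:2 AS vendor
WORKDIR /app
COPY composer.json composer.lock ./
RUN composer install --no-dev --optimize-autoloader --no-scripts

# Stage 2: slim runtime image that only copies what the app needs
FROM php:8.2-fpm-alpine
WORKDIR /var/www/app
COPY . .
COPY --from=vendor /app/vendor ./vendor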
Build all packages into the docker image
You should build your application while the Docker image is being built. All libraries and packages should be copied into the image; you should not install them when the application is starting. Reasons:
Application starts faster when everything is installed
Some of the libraries can change or be removed in the future. You will be in trouble and will probably spend a lot of time debugging.
Implement health checks
You should implement a health check. Applications require external dependencies, like passwords, API keys, and some non-sensitive data. Usually, we inject this data with environment variables.
You should check that all required variables are passed and have the right format before your application starts. You can implement this as a health check, or you can check it in the entrypoint.
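A sketch of such an entrypoint check (the variable names are made up for illustration):

#!/bin/sh
# Fail fast if a required variable is missing, before the app starts
for var in DATABASE_URL API_KEY; do
  if [ -z "$(printenv "$var")" ]; then
    echo "Missing required environment variable: $var" >&2
    exit 1
  fi
done
exec "$@"   # hand over to the real application command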
Test your solution before it is released
You should implement a mechanism for testing your images. For example, in CI (sketched after the list):
Run your unit tests before the image is built
Build the Docker image
Start the new application image with dummy data; if you require a PostgreSQL DB, you can start another container
Run the integration tests
Publish the new version of the image only if all tests pass
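Sketched as shell steps (image names, credentials, and the integration test script are assumptions):

set -e
vendor/bin/phpunit                     # unit tests before building
docker build -t myapp:candidate .
docker network create ci-net
docker run -d --network ci-net --name db -e POSTGRES_PASSWORD=dummy postgres:15
docker run -d --network ci-net --name app \
  -e DATABASE_URL=postgres://postgres:dummy@db/postgres myapp:candidate
./run-integration-tests.sh             # hypothetical integration test script
docker tag myapp:candidate myapp:latest
docker push myapp:latest               # only reached if every step above passed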
I am working on a research project where I am going to analyze several of the commits in the version control history for Ruby on Rails. I would like to see which commits pass or fail Rails' test scripts. If it is possible, I would also like to see which commits can be built.
For example, if I were analyzing a Java project, I would try to compile the project at each commit. If the compilation process failed, then I would know that the commit broke the build. Then, on a successful build, I could run the test scripts.
I know that most Ruby projects do not have a build process in the same sense that Java projects do because Ruby is interpreted. So an error that a compiler could check, like a syntax error or type error, will always be found at runtime. Because of this, I'm not sure whether Rails has a build process and if it does, what that process is. Does Rails have a build process or a process that is analogous to compiling a project in a compiled language?
Yes and no. There are definitely some setup steps before you can run any tests.
Like all good open source projects, Rails has a Contributing section if you scroll down on the README page at the link you included in your question. That links to everything you need to know to set up Rails and start either developing or doing your test research project.
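For the research side, a rough sketch of walking the history and recording which commits pass might look like this (COMMIT_A/COMMIT_B are placeholders, and the setup steps can differ across older commits):

git clone https://github.com/rails/rails && cd rails
for sha in $(git rev-list --reverse COMMIT_A..COMMIT_B); do
  git checkout -q "$sha"
  if ! bundle install >/dev/null 2>&1; then
    echo "$sha setup-failed"
    continue
  fi
  # Run one component's suite, e.g. Active Support
  (cd activesupport && bundle exec rake test >/dev/null 2>&1) \
    && echo "$sha pass" || echo "$sha fail"
done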
I'm working on my first Rails app and am struggling to find an efficient and clean solution for doing automated checkouts and deployments.
So far I've looked at both CruiseControl.rb (having been familiar with CruiseControl.NET) and Capistrano. Unfortunately, unless I'm missing something, each one of them only does about half of what I want (with each one doing a different half).
From what I've seen so far:
CruiseControl
Strengths
Automated builds on repository checkouts upon commit
Also runs unit/functional tests and reports back
Weaknesses
No built-in deployment mechanism (the best I can find so far is writing your own bash scripts)
Capistrano
Strengths
Built for deployments
Weaknesses
Has to be kicked off via a command (i.e. doesn't do automated checkouts upon commit)
I've found ways that I can string the two together -- i.e. have CruiseControl ping the repository for changes, do a checkout upon commit, run the tests, etc. and then make a call to Capistrano when finished to do the deployment (even though Capistrano is also going to do a repository checkout).
Basically, when all is said and done, I'd like to have three projects set up:
Dev: Checkout/Deployment is entirely no touch. When someone commits a file, something checks it out, runs the tests, deploys the changes, and reports back
Stage: Checkout/Deployment requires a button click
Prod: Button click does either a tagged check out or moves the files from stage
I have this working with a combination of CruiseControl.NET and MSBuild in the .NET world, and it was fairly straightforward. I would guess this is also a common pattern in the ruby deployment world, but I could easily be mistaken.
I would give Hudson a try (free and open source). I started off using CruiseControl but got sick of having to relearn the XML configuration every time I needed to change a setting or add a project. Then I started using Hudson and never looked back. Hudson is more or less completely configurable over the web. It was initially a continuous integration tool for Java but has plugins for other development stacks such as .NET and Ruby on Rails. There's a Rake plugin. If that doesn't work, you can configure it to execute any arbitrary command line after running your Rake builds/tests.
I should also add it's extremely easy to get Hudson going:
java -jar hudson.war
Or you can drop the war in any servlet container.
I would use two systems to build and deploy anyway, for at least two reasons: you should be able to run them separately, and you should have two config files, one for deploy and one for build. But you can easily glue the two systems together.
Just create a simple Capistrano task that tests and reports back to you. You can use the "run" command to do anything you want.
If you don't want a command line tool, there was Webistrano two years ago.
You could use something like http://github.com/benschwarz/gitnotify/tree/master to trigger the build and deploy if you use git as your repository.
At least for automated development deployments, check out the hook scripts available in Git:
http://git-scm.com/docs/githooks
I think you'll want to focus on the post-receive hook script, since this runs after a push to a remote server.
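A bare-bones post-receive hook might look like this (the branch name and deploy command are assumptions):

#!/bin/sh
# hooks/post-receive on the remote: redeploy when master is pushed
while read oldrev newrev ref; do
  if [ "$ref" = "refs/heads/master" ]; then
    cap deploy    # or whatever your deploy command is
  fi
done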
Also worth checking out is Mislav's git-deploy on GitHub. It makes managing deployments pretty clean.
http://github.com/mislav/git-deploy