Yarn 2.0 zero-installs: production vs development deployment? - docker

So we are in the process of moving from Yarn 1.x to Yarn 2 (specifically 3.1.1), and I'm a little confused about how to configure Yarn in my CI/CD config. Right now our pipeline does the following to deploy to our Kubernetes cluster:
On a branch PR:
Obtain the branch repo in the GitLab runner
Lint
Run Jest
Build with environment variables, dependencies, and devDependencies
Publish the image to the container registry with tag test
If successful, allow merge to main
Kubernetes watches for updates to test and deploys a test pod to the cluster
On merge to main:
Obtain the main repo in the GitLab runner
Lint
Run Jest
Build with environment variables and dependencies (no devDependencies)
Publish the image to the container registry with tag latest
Kubernetes watches for updates to latest and deploys a staging pod to the cluster
(NOTE: For full-blown production releases we will be using the release feature to manually deploy releases to the production server)
The issue is that we are using Yarn 2 with zero-installs, and in the past we were able to prevent the production environment from using any devDependencies by running yarn install --production. In Yarn 2 this command is deprecated.
Is there an ideal way to prevent devDependencies from being installed in production? I've seen some posts mention using workspaces, but that seems to be tailored more towards monorepos with more than one application.
Thanks in advance for any help!

I had the same question and came to the same conclusion as you: I could not find an easy way to perform a production-only install with Yarn 2. The yarn workspaces focus command comes closest, but I did find the paragraph below in its documentation:
Note that this command is only very moderately useful when using zero-installs, since the cache will contain all the packages anyway - meaning that the only difference between a full install and a focused install would just be a few extra lines in the .pnp.cjs file, at the cost of introducing an extra complexity.
From: https://yarnpkg.com/cli/workspaces/focus#options-production
Does that mean there essentially is no production install? It would be nice if that were officially addressed somewhere, but this was the closest I could find.
Personally, I am using Next.js and upgraded my project to Yarn 2. The Yarn 2 features seem to work (no node_modules folder), and I can still run Next.js's yarn build to create a production build with output in the .next folder.
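
For what it's worth, below is a minimal sketch of what a focused production install can look like in a Dockerfile. The base image is a placeholder, and it assumes Yarn 3.x with the workspace-tools plugin imported and committed to the repo:

    # Sketch only: focused production install with Yarn 3
    FROM node:16-alpine
    WORKDIR /app
    COPY . .
    # workspace-tools ships as a plugin; import it once and commit the result
    RUN yarn plugin import workspace-tools
    # Install runtime dependencies only. --all covers every workspace;
    # in a monorepo you would name a single workspace instead.
    RUN yarn workspaces focus --all --production

With zero-installs this mostly just rewrites .pnp.cjs, as the quoted documentation says, since the committed cache already contains every package.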

Related

BentoML: how to build a model without importing the services.py file?

Is it possible to run bentoml build without importing the services.py file during the process?
I'm trying to put the bento build and containerize steps on our CI/CD server. Our model depends on some installed OS packages and some Python packages. I thought I could run bentoml build to package the model code and binaries that are present, and leave the dependency specification to the containerize step.
To my surprise, the bentoml build process tried to import the service file during packaging, and the build failed since the dependencies weren't installed on my CI/CD machine.
Can I prevent this import while building/packaging the model? Maybe I should skip bentoml containerize, create my bento container by hand, and just run bentoml serve inside it.
I feel that having to install the dependencies by hand duplicates the effort of specifying them in bentofile.yaml and hurts the reproducibility of my environment.
This is not currently possible. The community is working on an environment management feature so that an environment with the necessary dependencies is created automatically during the build.

Best practice for running Symfony 5 project with Docker and Docker-Swarm

I have an existing Symfony 5 project with a MySQL database and an nginx web server. I want to dockerize this project, but on the web I have found different opinions on how to do that.
My plan is to write a multi-stage Dockerfile with at least a dev and a prod stage and build it with docker swarm. In my opinion it makes sense to install the complete application during the build and to keep multiple composer.json files (one per stage). On the web I have found opinions recommending not to reinstall the app on every build but to copy the vendor and var folders into the container instead. Another opinion was to run the installation only after the container build has finished, but then the service is not ready the moment the app is deployed.
What do you think is the best practice here?
Build exactly the same image for all environments
Do not build two different images for prod and dev. One of the main Docker benefits is that you can provide exactly the same environment in production and in development.
You should control your environment with env vars; for example, you can enable Xdebug for dev and disable it for prod.
Composer has an option to skip development packages (composer install --no-dev); you should use this feature.
If you decide to install some packages only for dev, try to use the same Dockerfile for both environments. Do not use Dockerfile.prod and Dockerfile.dev; it will introduce a mess in the future. A sketch of this single-Dockerfile approach follows below.
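
A minimal sketch of that idea, assuming a PHP-FPM base image; the APP_ENV build arg and the Xdebug/Composer toggles are illustrative choices, not the answer's exact setup:

    # Sketch: one Dockerfile for every environment, toggled by a build arg
    FROM php:8.1-fpm
    COPY --from=composer:2 /usr/bin/composer /usr/bin/composer
    # Composer needs these to fetch dists
    RUN apt-get update && apt-get install -y git unzip
    ARG APP_ENV=prod
    WORKDIR /var/www/app
    COPY . .
    # Dev builds get Xdebug and dev packages; prod builds get neither
    RUN if [ "$APP_ENV" = "dev" ]; then \
            pecl install xdebug && docker-php-ext-enable xdebug && \
            composer install; \
        else \
            composer install --no-dev --optimize-autoloader; \
        fi

Build with docker build --build-arg APP_ENV=dev . for development and with the default for production; the Dockerfile stays the same for both.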
Multistage build
You can do a multi-stage build, described in the official Docker documentation, if your build environment needs many more dependencies than your runtime.
The classic example is compiling a program: during compilation you need a lot of libraries, but you produce a single binary, so your runtime does not need all the dev libraries.
The first stage does the build; in the second stage you just copy the artifact over, and that's it. A sketch follows below.
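
In the context of the question, a two-stage build could look like the sketch below (image tags and paths are assumptions; --ignore-platform-reqs papers over PHP extensions the composer image may lack):

    # Stage 1: resolve Composer dependencies in a throwaway image
    FROM composer:2 AS vendor
    WORKDIR /build
    COPY composer.json composer.lock ./
    RUN composer install --no-dev --no-scripts --prefer-dist --ignore-platform-reqs

    # Stage 2: the runtime image gets the app code plus the built vendor dir
    FROM php:8.1-fpm
    WORKDIR /var/www/app
    COPY . .
    COPY --from=vendor /build/vendor ./vendor

This also bakes everything into the image at build time, which is exactly the point of the next section.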
Build all packages into the Docker image
You should build your application while the Docker image is being built. All libraries and packages should be copied into the image; you should not install them when the application is starting. Reasons:
The application starts faster when everything is already installed.
Libraries can change or be removed upstream in the future; you would be in trouble and would probably spend a lot of time debugging.
Implement health checks
You should implement a health check. Applications require external configuration, like passwords, API keys, and some non-sensitive data; usually we inject this data with environment variables.
You should check that all required variables are present and well formed before your application starts. You can do this in a health check or in the entrypoint, as sketched below.
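
A minimal entrypoint sketch of that check; the variable names are examples only:

    #!/bin/sh
    # Fail fast if required configuration was not injected
    set -e
    for var in DATABASE_URL APP_SECRET; do
        eval "val=\${$var:-}"
        if [ -z "$val" ]; then
            echo "Missing required env var: $var" >&2
            exit 1
        fi
    done
    # Hand off to the real process (e.g. php-fpm)
    exec "$@"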
Test your solution before it is released
You should implement a mechanism for testing your images. For example, in CI (a shell sketch follows after this list):
Run your unit tests before the image is built.
Build the Docker image.
Start the new application image with dummy data; if you require a PostgreSQL DB, you can start it as another container.
Run the integration tests.
Publish the new version of the image only if all tests pass.
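
Those steps as plain shell; the image names, test commands, and registry are placeholders:

    # 1. Unit tests before the build
    ./vendor/bin/phpunit
    # 2. Build a candidate image
    docker build -t myapp:candidate .
    # 3. Start the candidate with dummy data and a throwaway database
    docker network create ci-net
    docker run -d --name db --network ci-net \
        -e POSTGRES_PASSWORD=dummy postgres:14
    docker run -d --name app --network ci-net \
        -e DATABASE_URL=postgres://postgres:dummy@db:5432/postgres \
        myapp:candidate
    # 4. Integration tests against the running container
    ./run-integration-tests.sh
    # 5. Publish only when everything passed
    docker tag myapp:candidate registry.example.com/myapp:latest
    docker push registry.example.com/myapp:latest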

Front-End assets build using gulp in Jenkins CI server

I use Jenkins to deploy a PHP web app which uses gulp to build its assets (JavaScript, styles, image optimization...).
I have a package.json and a gulpfile.js in my repo root folder which are used for development, but I thought it would be a good idea to use them on my Jenkins server as well, creating a gulp build task to build all my assets.
So this would be the Jenkins deployment process:
Download git repo
Run tests
Create assets from source files in repo
Do changes in code for production environment
Upload files to production server
In step 3 I have to run npm install, which takes ages. Is there any way to tell Jenkins to run npm install only if package.json has changed since the last commit?
Any ideas?
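
One common approach, sketched as plain shell rather than anything Jenkins-specific; it assumes the workspace persists between builds, and the checksum file name is arbitrary:

    # Reinstall only when package.json differs from the cached checksum
    if [ ! -f .package.json.sha1 ] || ! sha1sum -c .package.json.sha1 >/dev/null 2>&1; then
        npm install
        sha1sum package.json > .package.json.sha1
    fi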

Build Rails+SPA as a distribution prior to deployment

I have a small but growing Angular application that runs with a Rails backend.
Deployment currently takes a very long time because all of the devDependencies have to be installed and then everything has to be compiled.
What I'd like is to have a distribution created for my application, and then deploy from that. I think I'm going to want this anyhow, since the application is going to be downloaded by users, and I'd rather they not have to deal with installing the mass of npm modules and gems that are only needed for development.
Is Jenkins going to fit the bill here? The tasks I see that need to be accomplished to create a new distribution are (see the rough script after the list):
Lint the code with JSHint, Rubocop, Brakeman, ...
Compile and/or compress the JavaScript, Sass, and images
Run the Karma, Rspec tests
Rev the files
Clean up any temporary or unnecessary files
Create git tag
Commit build
Also, is it odd to want to commit the builds to a {original}-dist repository?
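
For reference, a rough shell sketch of those tasks as a single build step; every tool invocation and the -dist path are assumptions about a typical Rails+Angular setup, not a tested recipe:

    # Lint
    jshint app/assets/javascripts
    rubocop
    brakeman -q
    # Compile, compress, and fingerprint (rev) the assets
    RAILS_ENV=production bundle exec rake assets:precompile
    # Tests
    karma start --single-run
    bundle exec rspec
    # Clean up, tag, and commit the build to the -dist repo
    rm -rf tmp/cache
    git tag "dist-$(date -u +%Y%m%d%H%M%S)"
    git push origin --tags
    rsync -a --delete public/ ../myapp-dist/public/
    (cd ../myapp-dist && git add -A && git commit -m "build $(date -u +%F)" && git push)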

Is there a way to install the ios-universal-framework on a cloudbees spawned Jenkins slave?

Using a master Jenkins on premises and the CloudBees plugin, I am able to kick off iOS builds to a CloudBees instance. It's pretty sweet.
My iOS developers require cloning the ios-universal-framework repo and then running an install.sh script contained within that repo. Everything works fine until the script issues a "sudo" command to copy files into the directory
"/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/Library/Xcode/Specifications"
The cp command needs sudo privileges. I'm thinking this is not possible, but since I'm on the free trial plan, this is where I can find support. Thanks to all for reading.
Tony
