We have a Cloud Composer environment running a version of Cloud Composer that is no longer current, and we would like to upgrade it. I found no documentation on how to do that. Does anyone have recommendations on how to upgrade without creating a new environment and losing the run history?
The only current way to update a Composer Environment is to create a new one and migrate all of the data.
This script should be useful for recreating the environment while maintaining your DAG run history and settings:
Script to create a copy of an existing Cloud Composer Environment.
Creates a clone of a Composer Environment, copying Environment
Configurations, DAGs/data/plugins/logs, and DAG run history. This
script can be useful when migrating to new Cloud Composer releases.
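If that script is not an option, a rough manual equivalent for the files is to copy the old environment's GCS bucket contents into the new environment's bucket (the DAG run history itself lives in the Airflow metadata database, which is what the script above migrates). A minimal sketch, assuming placeholder environment names and us-central1 as the location:

    # Find the GCS buckets backing the old and new environments.
    OLD_PREFIX=$(gcloud composer environments describe old-env \
        --location us-central1 --format="value(config.dagGcsPrefix)")
    NEW_PREFIX=$(gcloud composer environments describe new-env \
        --location us-central1 --format="value(config.dagGcsPrefix)")

    # dagGcsPrefix ends in /dags; strip it to get the bucket root.
    OLD_ROOT=${OLD_PREFIX%/dags}
    NEW_ROOT=${NEW_PREFIX%/dags}

    # Copy DAGs, data, and plugins into the new environment's bucket.
    gsutil -m rsync -r "${OLD_ROOT}/dags"    "${NEW_ROOT}/dags"
    gsutil -m rsync -r "${OLD_ROOT}/data"    "${NEW_ROOT}/data"
    gsutil -m rsync -r "${OLD_ROOT}/plugins" "${NEW_ROOT}/plugins"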
This might be a silly question, but I can't find a better solution, so I'm posting it here.
There is an existing, old Rails project with the following details:
Repository is on GitLab
Using Drone CI & Capistrano (already functional and doing deployments through CI)
Target server is our own server
We want to dockerize our Rails app, so we added a Dockerfile and docker-compose files to the project, using the database from the host machine. The project is working fine locally.
We want to run this dockerized Rails app in CI and deploy it to the target environments (Staging, UAT, Production, etc.).
Note: we have our own server to store Docker images, and we don't want to use sh scripts for the deployment.
I think that with Docker and Drone CI, Capistrano can be removed.
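One way to do this without shell scripts is to let Drone build and push the image using its Docker plugin, so the pipeline definition replaces the Capistrano deploy. A rough .drone.yml sketch with placeholder registry and image names (not your actual configuration):

    kind: pipeline
    type: docker
    name: build-and-push

    steps:
      - name: docker
        image: plugins/docker            # Drone's Docker plugin: builds the Dockerfile and pushes the image
        settings:
          registry: registry.example.com         # placeholder: your own registry server
          repo: registry.example.com/rails-app   # placeholder image name
          tags:
            - latest
            - ${DRONE_COMMIT_SHA}
          username:
            from_secret: registry_user            # stored as Drone secrets, not in the repo
          password:
            from_secret: registry_password

From there each target server (Staging, UAT, Production) only needs to pull the new tag and restart the container, which can also be driven from a Drone step.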
So we are in the process of moving from Yarn 1.x to Yarn 2 (specifically yarn 3.1.1) and I'm getting a little confused about how to configure yarn in my CI/CD config. As of right now our pipeline does the following to deploy to our Kubernetes cluster:
On branch PR:
Obtain branch repo in gitlab runner
Lint
Run jest
Build with environment variables, dependencies, and devDependencies
Publish image to container registry with tag test
a. If successful, allow merge to main
Kubernetes watches for updates to test and deploys a test pod to cluster
On merge to main:
Obtain main repo in gitlab runner
Lint
Run jest
Build with environment variables and dependencies
Publish image to container registry with tag latest
Kubernetes watches for updates to latest and deploys a staging pod to cluster
(NOTE: For full-blown production releases we will be using the release feature to manually deploy releases to the production server)
The issue is that we are using Yarn 2 with zero-installs, and in the past we have been able to prevent the production environment from using any dev dependencies by running yarn install --production. In Yarn 2 this command is deprecated.
Is there an ideal solution to prevent dev dependencies from being installed in production? I've seen some posts mention using workspaces, but that seems to be more tailored towards mono-repos where there is more than one application.
Thanks in advance for any help!
I had the same question and came to the same conclusion as you. I could not find an easy way to perform a production-only install on Yarn 2. yarn workspaces focus comes closest, but I did find the paragraph below in the documentation:
Note that this command is only very moderately useful when using zero-installs, since the cache will contain all the packages anyway - meaning that the only difference between a full install and a focused install would just be a few extra lines in the .pnp.cjs file, at the cost of introducing an extra complexity.
From: https://yarnpkg.com/cli/workspaces/focus#options-production
Does that mean that there essentially is no production install? It would be nice if that was officially addressed somewhere but this was the closest I could find.
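For what it's worth, the closest replacement I'm aware of is yarn workspaces focus --production, which also works in a single-package repo because the root project is itself a workspace. A minimal sketch of the production install step (with the caveat from the quoted paragraph: under zero-installs the cache still contains every package):

    # The focus command lives in the workspace-tools plugin (one-time setup, committed to the repo).
    yarn plugin import workspace-tools

    # Install only runtime dependencies for the current workspace, skipping devDependencies -
    # the rough equivalent of the old `yarn install --production`.
    yarn workspaces focus --production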
Personally, I am using NextJS and upgraded my project to Yarn 2. The features of Yarn 2 seem to work (no node_modules folder) but I can still use yarn build from NextJS to create a production build with output in the .next folder.
I have an existing Symfony 5 project with a MySQL database and an nginx web server. I want to dockerize this project, but on the web I have found different opinions on how to do that.
My plan is to write a multi-stage Dockerfile with at least a dev and a prod stage and to build it with docker swarm. In my opinion it makes sense to install the complete code during the build and to have multiple composer.json files (one for every stage). On the web I have found opinions recommending not to install the app fresh on every build but to copy the vendor and var folders into the container. Another opinion was to start the installation after the build process of the container is finished, but I think with that approach the service is not ready when the app is deployed.
What do you think is the best practice here?
Build exactly the same image for all environments
Do not build two different images for prod and dev. One of the main Docker benefits is that you can provide exactly the same environment for production and dev.
You should control your environment with ENV vars. For example, you can enable Xdebug for dev and disable it for prod.
Composer has an option to install with or without dev packages. You should use this feature.
If you decide to install some packages only for dev, try to use the same Dockerfile for both environments. Do not use Dockerfile.prod and Dockerfile.dev; it will introduce a mess in the future.
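For example, the dev/prod difference can usually be reduced to a single build argument in the shared Dockerfile that toggles Composer's dev packages (a sketch; the argument name is arbitrary):

    # One Dockerfile for both environments; the build arg decides whether dev packages are installed.
    ARG APP_ENV=prod

    # --no-dev skips require-dev packages for production builds.
    RUN if [ "$APP_ENV" = "prod" ]; then \
            composer install --no-dev --optimize-autoloader --no-interaction; \
        else \
            composer install --no-interaction; \
        fi

A dev image is then built with docker build --build-arg APP_ENV=dev . while the default produces the production variant.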
Multistage build
You can do a multistage build, described in the official Docker documentation, if your build environment requires many more dependencies than your runtime.
An example is compiling a program: during compilation you need a lot of libraries, but you produce a single binary, so your runtime does not need all the dev libraries.
The first stage does the build; in the second stage you just copy the binary, and that is it.
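Applied to a PHP/Symfony app, a minimal multistage sketch could look like the following (image tags and paths are examples, not a recommendation for your exact setup):

    # Stage 1: build stage with Composer available for dependency installation.
    FROM composer:2 AS build
    WORKDIR /app
    COPY composer.json composer.lock ./
    # --no-scripts because the Symfony post-install scripts need the full source, copied below.
    RUN composer install --no-dev --no-interaction --prefer-dist --no-scripts
    COPY . .
    RUN composer dump-autoload --optimize

    # Stage 2: slim runtime image; only the built application is copied in.
    FROM php:8.1-fpm-alpine
    WORKDIR /var/www/app
    COPY --from=build /app /var/www/app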
Build all packages into the docker image
You should build your application while the Docker image is being built. All libraries and packages should be copied into the image; you should not install them when the application is starting. Reasons:
The application starts faster when everything is already installed
Some of the libraries may change or be removed in the future. You would be in trouble and would probably spend a lot of time debugging.
Implement health checks
You should implement a health check. Applications require external dependencies, like passwords, API keys, and some non-sensitive data. Usually, we inject this data with environment variables.
You should check that all required variables are passed and have the correct format before your application starts. You can implement this as a health check, or you can check it in the entrypoint.
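A minimal entrypoint sketch that fails fast when required variables are missing (the variable names here are only examples):

    #!/bin/sh
    # Abort the container start if required configuration is missing.
    set -eu

    for var in DATABASE_URL APP_SECRET; do
        if [ -z "$(printenv "$var" || true)" ]; then
            echo "Missing required environment variable: $var" >&2
            exit 1
        fi
    done

    # Hand over to the main process (e.g. php-fpm) once the checks pass.
    exec "$@"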
Test your solution before it is released
You should implement a mechanism for testing your images. For example, in CI (a rough sketch follows this list):
Run your unit tests before the image is built
Build the Docker image
Start the new application image with dummy data. If you require a PostgreSQL DB, you can start it in another container.
Run integration tests.
Publish the new version of the image only if all tests pass.
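A rough shell sketch of such a CI job (image names, credentials, and test commands are placeholders):

    # 1. Unit tests before the image is built.
    vendor/bin/phpunit --testsuite unit

    # 2. Build a candidate image.
    docker build -t myapp:candidate .

    # 3. Start the database and the candidate image with dummy data.
    docker network create ci-net
    docker run -d --name db  --network ci-net -e POSTGRES_PASSWORD=test postgres:15
    docker run -d --name app --network ci-net \
        -e DATABASE_URL="postgresql://postgres:test@db/postgres" myapp:candidate

    # 4. Integration tests against the running container.
    ./run-integration-tests.sh    # placeholder for your integration test entry point

    # 5. Publish only if everything above succeeded.
    docker tag myapp:candidate registry.example.com/myapp:latest
    docker push registry.example.com/myapp:latest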
I am trying to set up a fairly simple CI/CD toolchain in Travis CI for a PHP project using Composer libraries, resulting in deployment to a bare-metal server via rsync.
Steps are:
Getting the code from the GitHub repo upon git push.
Run composer install to get the dependencies.
(Perform unit tests / integration tests) - not set up yet
Lint and code-quality steps
Deploy the code to a remote Apache server via rsync, using SSH keys.
The toolchain works OK so far, but I can't seem to wrap my head around how the SQL migrations (in Doctrine or Phinx) can be executed automatically on the remote server.
Is the strategy of executing doctrine:migrations:migrate via SSH as the last step in the deploy section of Travis CI the best choice, or is there a better option? How do you deploy your migrations?
Thanks a lot
I once deployed to Heroku using Travis.
It was for a project using Laravel.
Because Heroku is sophisticated, I was able to tell it (from its configuration) to migrate the database after deploying.
However, with a classic rsync server you would need to connect to it from Travis using SSH in order to migrate (if you are as lazy as me and want to automate everything).
According to this doc you can add an after_deploy or after_success step. From this step you would run your SSH commands and migrate your database.
Apparently you can even run commands or a script via SSH, so it might not be that hard. Look at the following: https://www.shellhacks.com/ssh-execute-remote-command-script-linux/
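A hedged sketch of what that could look like in .travis.yml for a Doctrine project (host, user, and path are placeholders; the SSH key is the same one already used for the rsync deploy):

    after_deploy:
      # Run pending migrations on the target server over SSH; --no-interaction avoids the confirmation prompt.
      - ssh deploy@example.com "cd /var/www/myapp && php bin/console doctrine:migrations:migrate --no-interaction"

With Phinx the remote command would be vendor/bin/phinx migrate -e production instead.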
You have to pay EXTRA attention to what you put in your GitHub repo in order to avoid security trouble with your rsync server.
Whether you use this way to provide credentials to your Travis job, or that way.
Using a master Jenkins on premises and the CloudBees plugin, I am able to kick off iOS builds to a CloudBees instance. It's pretty sweet.
My iOS developers require cloning the ios-universal-framework repo and then running an install.sh script contained within that repo. Everything works fine until the script issues a "sudo" command to copy files into the directory
"/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/Library/Xcode/Specifications"
The cp command needs sudo privileges. I'm thinking this is not possible, but since I'm on the free trial plan, this is the only place where I can ask for support. Thanks to all for reading.
Tony