I've set up a config to tell CircleCI what to build and how to build it.
After the build I want to send all the built files to my FTP server, which is on a shared host (HostGator).
Can I instruct CircleCI to do so?
There are two separate things here. If the built files that you want to upload are your application itself, then this is considered a deploy. You can do this in the deployment phase in circle.yml. More info can be found here: https://circleci.com/docs/configuration/#deployment
If the build output is "other" files that you want to upload for record keeping, debugging, or as a deployment kept around for some point in the future, you can use what are called build artifacts: https://circleci.com/docs/build-artifacts/
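As a rough sketch of both routes in circle.yml (the build output folder build/, the hostname, the remote folder, and the FTP_USER/FTP_PASS environment variables are all assumptions to adapt, and lftp is just one way to push files over FTP):

general:
  artifacts:
    - "build"

deployment:
  production:
    branch: master
    commands:
      - sudo apt-get install -y lftp
      - lftp -u "$FTP_USER,$FTP_PASS" ftp.example.com -e "mirror -R build/ public_html/; quit"

The artifacts entry keeps a copy of the build output attached to each CircleCI build; the deployment section runs the FTP upload only for builds of the master branch.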
I ran into this situation because the documentation was not clear. The gcloud builds submit --tag gcr.io/[PROJECT-ID]/helloworld command will
archive the contents of my source folder and then run the Docker build on the Google Cloud Build server.
Also, it only looks at the .gitignore file to decide which contents to archive. Since it is a Docker build, it should honor the .dockerignore file.
Also, there is no word about how to compile the application. If it is not a precompiled application, it has to be compiled before it is dockerized.
The quickstart assumes that the application is precompiled and that all the contents of the folder (minus whatever .gitignore excludes) are required to run the application. People new to the technology will not be aware of all that; I have just figured it out by myself.
So the alternative is either to include the build steps in the Dockerfile (which will make my image heavy), or to create a Docker image locally (manually), push the image to the repository (manually), and then publish it to Cloud Run (using the second documented command, or manually).
Is there anything I am missing here?
Cloud Build respects .dockerignore. It will upload all files that are not in .gitignore, but once uploaded, it will respect .dockerignore regarding which files to use for the build.
Compiling your application is usually done at the same time as "containerizing" it. For example, for a Node.js app, the Dockerfile must run npm install --production. I recommend looking at the many examples in the quickstart.
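As a minimal sketch of that idea for a Node.js app (not the exact quickstart file; the base image, folder layout, and start command are assumptions):

# Dockerfile - the install/compile step happens while the image is built
FROM node:12-slim
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --production
COPY . ./
CMD ["npm", "start"]

For a compiled language the RUN step would invoke the compiler instead of npm, but the principle is the same: the build happens inside the docker build.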
I think you've got it, essentially your options are:
Building using Cloud Build
Building locally and pushing using Docker
Generally, if you need additional build steps, I would recommend including them in your Dockerfile. Ideally you should be able to go from source + Dockerfile to a complete image in either case.
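For reference, the two routes boil down to roughly the following commands (the service name and region are placeholders, and pushing to gcr.io locally assumes you have run gcloud auth configure-docker):

# Option 1: build remotely with Cloud Build, then deploy to Cloud Run
gcloud builds submit --tag gcr.io/[PROJECT-ID]/helloworld
gcloud run deploy helloworld --image gcr.io/[PROJECT-ID]/helloworld --platform managed --region us-central1

# Option 2: build and push locally with Docker, then deploy to Cloud Run
docker build -t gcr.io/[PROJECT-ID]/helloworld .
docker push gcr.io/[PROJECT-ID]/helloworld
gcloud run deploy helloworld --image gcr.io/[PROJECT-ID]/helloworld --platform managed --region us-central1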
We have a PoC on deploying a file to an old mainframe. There are many types of deployments that we do but this question focuses on individual files. We are able to SSH into the mainframe and we have a deployment pipeline with the steps needed to get one file into the correct location.
The problem is we have over 54,000 of these individual files. During a release we may deploy as few as 1-5 files, or a large deployment may be 250 files. Each of them will have a different source and target destination. Some of them may be sourced from the same folder and deployed to the same folder, but that is not guaranteed.
We can make the assumption that the files are immutable. There are issues on both build and release to consider:
Build - What is the artifact? Do we use one artifact for each release that could contain 1-250 files? We don't want to have 250 build scripts for a release, that much we know.
Release - How do we use the pipelines? If you batch the files together, is it a one-click deploy to that environment? How would you determine whether someone added a file to the release? I guess we would need a new build that would create a new pipeline?
There are a few other things that come up, like needing to check the status in our change management system to confirm that the ticket for that file is in an approvable status. That is currently a deployment step.
I'm not sure whether this is the "answer" or not, but this is our take on it so far:
The Artifact
We are going to create a "release" data file. In this file there will be a list of the files going out with each deployment. We will organize the files by product line and create a branch of all files for a specific product. Then the build will read the data file and create the artifact from the list of files related to that release. We will also include the data file itself in the artifact. A sample of the data file is sketched below.
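As a purely hypothetical sketch of what a few rows of that data file might look like (the column names, member names, dataset names, and ticket numbers are invented for illustration):

source_path,target_path,deploy_time,change_ticket
src/claims/MEMBER1.jcl,PROD.APP.JCLLIB(MEMBER1),13:00,CHG0012345
src/claims/MEMBER2.cbl,PROD.APP.SRCLIB(MEMBER2),19:00,CHG0012346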
Deployment
We will create a Parent/Child release process. The Parent script will loop through the data file and call the child script. The Child script will deploy an individual file, which is represented by a row in the data file. To deploy to Production, only the Parent will be run; the child will never be deployed individually. A rough sketch of the split is below.
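A rough PowerShell sketch of that split, assuming a data file like the one above and that the child wraps whatever per-file SSH/copy steps the existing pipeline already performs (all names here are hypothetical):

# parent.ps1 - loop over the release data file and hand each row to the child
$rows = Import-Csv "release-data.csv"
foreach ($row in $rows) {
    & "$PSScriptRoot\child.ps1" -Source $row.source_path -Target $row.target_path -Ticket $row.change_ticket
}

# child.ps1 - deploy one file to the mainframe over SSH
param([string]$Source, [string]$Target, [string]$Ticket)
scp $Source "deployuser@mainframe.example.com:$Target"   # placeholder for the existing per-file deployment steps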
Multiple Deployment Times/Dependencies
We have a requirement to deploy certain files at certain times. One production file deployment may be at 1 PM and another at 7 PM in the same release. To accommodate this, we will include the deployment time in the data file. After each file is deployed, we will somehow keep track that this file has been deployed.
Change Management
We will do our change management system check in each child script to make sure the file is ready to deploy. If an individual file is not approved, we will not stop processing; we will finish the deployment for any other files in the list that are approved, and then, as the last step in the deployment, we will fail the deploy. We need to make the "tracking" available to the teams so they can see what caused the deploy to fail. A sketch of that deferred-failure pattern follows.
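Continuing the hypothetical parent.ps1 sketch above, the deferred-failure pattern could look roughly like this (Test-TicketApproved stands in for whatever call our change management system actually exposes):

# deploy approved rows, remember the rest, and fail only at the very end
$skipped = @()
foreach ($row in $rows) {
    if (-not (Test-TicketApproved $row.change_ticket)) {   # hypothetical change management check
        $skipped += $row
        continue
    }
    & "$PSScriptRoot\child.ps1" -Source $row.source_path -Target $row.target_path -Ticket $row.change_ticket
}
if ($skipped.Count -gt 0) {
    $skipped | Export-Csv "skipped-files.csv" -NoTypeInformation   # the "tracking" the teams can look at
    Write-Error "Some files were not approved for release; see skipped-files.csv"
    exit 1
}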
Making some assumptions here, and this is the happy path, but perhaps this will help get you to the ultimate solution.
Have a master branch that has a products folder. This folder would then have subfolders for each product, which hold the files:
master/
products/
productA
productB
productN
The dev team would work on files in separate fix branches, then merge into master via pull requests. You can set up branch policies and gates for auditing.
Create a build pipeline with a PowerShell script task that checks for deltas in master and copies/publishes only those changes to an artifact destination folder with the same product subfolder layout (a possible example is sketched after these steps).
Create a release pipeline that has a stage for each product and/or destination path on the mainframe. Each stage would have a custom task that copies the files from the appropriate product folder to the destination via SSH (a minimal copy command is also sketched after these steps). You could even create a task group that gets reused and just use variables for folder paths, etc. NOTE: There will be quite a few stages, but that's what release pipelines are for :)
Schedule the release pipeline to run at the desired times. You can set up notifications on failures so someone or some process can investigate, retry, etc.
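A hedged sketch of the delta check in PowerShell, assuming the build checks out master with enough history for a git diff and stages files into the standard artifact staging directory:

# stage only the files changed by the latest commit under products/
$changed = git diff --name-only HEAD~1 HEAD -- products/
foreach ($file in $changed) {
    $dest = Join-Path $env:BUILD_ARTIFACTSTAGINGDIRECTORY $file
    New-Item -ItemType Directory -Force -Path (Split-Path $dest) | Out-Null
    Copy-Item $file -Destination $dest
}

And the copy inside each release stage can be as simple as an scp against the published artifact (the artifact alias "drop", the host, and the target path are placeholders):

scp -r "$(System.DefaultWorkingDirectory)/drop/products/productA" deployuser@mainframe.example.com:/target/path/for/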
I am working on a website that will be deployed to various environments - Dev, UAT and Production - and each of them has different config settings defined through the use of config files.
The deployment process consists of two steps:
Publish the latest build output
Copy and replace the default config files with the ones specific to the environment where the deployment is being done (these files are currently under source control)
I am trying to automate the deployment process using VSTS and Azure App Services but I couldn't find any task or option that would let me copy files into an App Service.
What is the best way to implement this deployment process?
You can make this much easier on yourself by using config transforms for your web.config file.
Basically, make sure that you've defined a Build Configuration for each environment. Debug and Release are defined out of the box for Visual Studio MVC projects. You can add as many configurations as you want, such as a UAT configuration.
Once you have your configurations defined, make sure there's a web.[your build config].config file nested beneath your web.config in the Visual Studio solution explorer. Within each of these build-configuration-specific transform files, you can override settings as needed.
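For illustration, a hypothetical web.UAT.config that overrides a single app setting could look like this (the key and value are placeholders):

<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <appSettings>
    <!-- replaces the ApiBaseUrl value from web.config when publishing with the UAT configuration -->
    <add key="ApiBaseUrl" value="https://uat.example.com/api"
         xdt:Transform="SetAttributes" xdt:Locator="Match(key)" />
  </appSettings>
</configuration>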
To close the loop, you can specify a build configuration to target when creating a build in VSTS. This will automatically execute the transform for the build configuration you've selected.
More details on build configs and web.config transforms here.
Alternatively, you could specify your app settings and connection strings directly in the Application Settings of your Azure Web App. These override anything in your deployed web.config file. What I like about this approach is that you don't have to expose sensitive information like connection strings to other developers on your team, and it removes the minor complexity of web.config transforms.
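If you go that route, the settings don't have to be entered by hand either; a release step can push them with the Azure CLI, along these lines (the app name, resource group, and key are placeholders, and connection strings can be set similarly with az webapp config connection-string set):

az webapp config appsettings set --name my-app-uat --resource-group my-rg --settings ApiBaseUrl=https://uat.example.com/api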
The Kudu API gives you the ability to upload and download files from an Azure web app, with overwrite.
The GitHub wiki:
https://github.com/projectkudu/kudu/wiki/REST-API
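For example, a single file can be pushed with curl against the VFS endpoint (a sketch; the app name, file, and path are placeholders, and $DEPLOY_USER/$DEPLOY_PASS are the site's deployment credentials):

curl -u $DEPLOY_USER:$DEPLOY_PASS -X PUT -H "If-Match: *" --data-binary @Web.UAT.config "https://my-app.scm.azurewebsites.net/api/vfs/site/wwwroot/web.config"

The If-Match: * header is what allows overwriting a file that already exists.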
Not sure if VSTS has this ability.
I recently did what you describe with Jenkins. Now I'm trying to integrate Jenkins with VSTS.
Hope this gives you an answer.
I'm trying to understand what the "Archive the artifacts" step in Jenkins does. Currently, this is the value I see in its file pattern: dist/*.tgz
From what I understand, our grunt script makes a tgz file. However, I don't know what Jenkins does with it.
I got an error when I didn't specify any pattern:
ERROR: No artifacts are configured for archiving.
You probably forgot to set the file pattern, so please go back to the configuration and specify it.
If you really did mean to archive all the files in the workspace, please specify "**"
Build step 'Archive the artifacts' changed build result to FAILURE
Most importantly, it allows you to archive items from your job's workspace in a persistent and accessible way, linked to the specific build number.
For example, if you have a job Build that compiles your sources into program.exe, archiving it linked to the build it was produced by, and keeping it accessible for developers or other jobs, can come in very handy.
Additionally, archived artifacts are transferred to your Jenkins master, so your job can run on any slave, but your archived files will always be accessible, even when that particular slave is offline.
Also, with the right configuration and plugins, other projects can access archived artifacts from other projects. For example, a job Deploy that uploads your program.exe to some location is as trivial as copying the archived artifact of the last successful build into its workspace for the upload.
There's quite a bit of information on SO already, e.g. here.
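For reference, in a freestyle job this is the "Archive the artifacts" post-build action with a file pattern such as dist/*.tgz; in a Pipeline job the equivalent would be roughly:

// keep every .tgz the grunt build left in dist/, tied to this build number
archiveArtifacts artifacts: 'dist/*.tgz', fingerprint: true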
I'm using Jenkins for Continuous Integration.
Right now I have a Jenkins job with this command in the command-line arguments for a build step:
"%WORKSPACE%\OEVizion\ITVizion.OEVizion.Web\ITVizion.OEVizion.Web.csproj" /p:DeployOnBuild=true /p:PublishProfile="IT Vizion - Web Deploy Package for a given domain.pubxml"
It works just fine, that is, the web deploy package (.zip) is created in the specified folder defined in the .pubxml file.
However, what I'd like to do is to generate a .zip web deploy package for each of the .pubxml files that I have (right now 3) for this specific ITVizion.OEVizion.Web.csproj in a single shot/command.
With this I'd have multiple .zip packages with different settings ready to be deployed to different servers every time a commit is pushed to the repository and the project builds successfully.
Is this possible? How should I approach this?
The best practice is to build only once and then publish multiple times from the same build.
After posting the question I saw the way to go about this: add multiple build steps, one for each .pubxml file. That way the build process runs 3 times for the 3 publish profiles, and you end up with 3 web deploy .zip packages at the end of the job execution in Jenkins. Nice.
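Concretely, the three build steps just repeat the same arguments with a different profile, something like this (the second and third profile names here are hypothetical):

"%WORKSPACE%\OEVizion\ITVizion.OEVizion.Web\ITVizion.OEVizion.Web.csproj" /p:DeployOnBuild=true /p:PublishProfile="IT Vizion - Web Deploy Package for a given domain.pubxml"
"%WORKSPACE%\OEVizion\ITVizion.OEVizion.Web\ITVizion.OEVizion.Web.csproj" /p:DeployOnBuild=true /p:PublishProfile="Domain B.pubxml"
"%WORKSPACE%\OEVizion\ITVizion.OEVizion.Web\ITVizion.OEVizion.Web.csproj" /p:DeployOnBuild=true /p:PublishProfile="Domain C.pubxml"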