I'm trying to deploy my MVC4 app to Elastic Beanstalk. The project has several post-build steps which pull together dependencies. The AWS SDK publish wizard therefore does not do the trick - it builds a Web Deploy package behind the scenes, which does not run those post-build steps or preserve the resulting directory structure.
So I downloaded the command-line EB tools and got a git repository working, but I can't work out the next step: what do I push to the server with git aws.push? If it's just the resulting files, then I can't specify the "Enable 32-bit Applications" flag (which I require), etc. Do I push a Web Deploy package from my repository instead?
I presume so, but if so, how do I include the files copied into the output folder during "normal" builds by my post-build steps?
Here we go. This seems to conflict with what Jim Flanagan was saying - below it's a zip file, but Jim says it's the contents of it.
@Jim Flanagan - perhaps you could comment if you have some time. Thanks.
Hi, thanks for contacting AWS Premium Support.
Communication from the Elastic Beanstalk Engineering Team:
When you aws.push an ASP.NET/MVC app you do not push the Web Deploy archive; rather, you push the artifacts as you want them deployed on the machine. From the customer's Stack Overflow question it seems they have already found the local git repo that the VS deployment wizard created, and looking there should give them a good indication of what is needed in the git repository.
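In practice that means committing the deployable output itself - including whatever the post-build steps copy into the output folder - and pushing it. A rough sketch, run from the folder laid out exactly as it should land on the server:
git add -A
git commit -m "New deployable build output, including post-build artifacts"
git aws.push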
There isn't a nice way through aws.push to specify what the "Enable 32-bit Applications" app pool setting should be (or any other configuration setting). If you need a specific configuration setting, I would suggest creating the environment (via the console or using the eb command-line tool), which allows you to specify the configuration, and then using git aws.push to deploy to that environment; git aws.push will just use the configuration that is already present on the environment.
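For reference, once the environment exists the app pool setting can also be changed outside of aws.push. A hedged sketch using the modern AWS CLI - the environment name is a placeholder, and the namespace/option name should be verified against the current .NET platform options:
aws elasticbeanstalk update-environment --environment-name my-env \
    --option-settings '[{"Namespace":"aws:elasticbeanstalk:container:dotnet:apppool","OptionName":"Enable 32-bit Applications","Value":"true"}]'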
The last question, about the push still being incremental, is not really valid since you are not pushing just one big zip file. But if you were, it could still be incremental depending on what changed in the zip file; it might just send a diff between the two versions of the zip file. As the question implies, though, that use case is not really what incremental deployments were designed to help with.
What does it mean to deploy code from dev to prod environments using Jenkins? Can anyone please help? I currently have the source code in my GitLab, and I need to deploy this code from the dev environment to the prod environment.
Thanks in advance.
The source code present in GitLab is just the set of files needed to create a WAR/EAR/JAR to run the application.
It's the environment files, if present, which make the application behave slightly differently in each environment, i.e. DEV/PROD. The data that you see on DEV will not be the same as what you see on PROD (where the application is live), as developers tend to test and modify code/data to ensure that the application works as expected. This is fine on DEV but is a big no-no on PROD, as it will impact the business.
Deploying code from dev to prod environments just means building the application with the right environment files, e.g. DEV points to the xyz DB but PROD points to the abc DB.
All of this can be achieved with Jenkins, and if your project uses Maven/Gradle then a single command can achieve the above (a bit of googling will help you here).
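For example, if the project defines one Maven profile per environment (the profile names below are assumptions), the Jenkins build step can be a single command:
mvn clean package -Pprod    # package with the prod environment settings
mvn clean package -Pdev     # or, for a dev build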
If your project doesn't use Maven/Gradle, then you will have to replace the environment file each time a build happens, based on a parameter that can be passed in from Jenkins.
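A rough sketch of that approach, assuming a parameterized Jenkins job with an ENV parameter and a config/ folder holding one properties file per environment:
# ENV is passed in by Jenkins, e.g. dev or prod
cp "config/${ENV}.properties" src/main/resources/application.properties
mvn clean package    # or whatever your build command is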
This whole process is part of DevOps culture. In simple terms it looks like this:
Developer pushes changes to source control (e.g. GitLab).
Build server (e.g. Jenkins) automatically downloads the latest changes and builds the application (e.g. creates setup files or just the binaries). Usually you also run tests (unit, integration, automation tests etc.). If something fails, developers get notified about it. This whole process is called continuous integration.
If everything went right then you can deploy your application to production. This part of the process is called continuous deployment.
It's a common strategy for web apps. For larger projects a QA team tests the software, and the software gets deployed once the QA team approves it.
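Put together, a simple Jenkins job for this flow often boils down to a handful of shell steps; everything below (URLs, names, paths) is illustrative only:
# 1. Check out the latest code
git clone https://gitlab.example.com/myteam/myapp.git && cd myapp
# 2. Build and run the tests; the job fails (and notifies developers) if these fail
mvn clean verify
# 3. Package with the prod environment files and hand the artifact to the prod server
mvn package -Pprod
scp target/myapp.war deploy@prod-server:/opt/tomcat/webapps/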
A web application typically consists of code, config and data. Code can often be made open source on GitHub, but per-instance config and data may contain secrets and are therefore inappropriate to store in GitHub. Data can be imported into persistent storage, so disregard it for now.
Assuming the configs are file-based and are saved in another private, secured SVN repo: in order to deploy the web app to OpenShift and implement CI, I need to merge the config files with the code prior to running the build scripts. In addition, the build strategy should support GitHub webhooks for automated builds.
My questions are, to be more specific:
Does the OpenShift BuildConfig support multiple data sources, especially from SVN?
If not, how do I deploy such a web app to OpenShift?
The solution I came up with so far:
Instead of relying on OpenShift for CI, use Jenkins instead.
Merge the config files with the code using Jenkins.
Instead of using the Git source type in the BuildConfig, use the binary source type instead.
Let Jenkins run
oc start-build <buildconfig> --from-dir=<directory>
where <directory> contains the merged code/config.
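A rough sketch of what that Jenkins job could run - the repository URLs, paths and BuildConfig name are all placeholders:
# 1. Get the code from GitHub and the config from the private SVN repo
git clone https://github.com/myorg/mywebapp.git app
svn export https://svn.example.com/config/prod app/config
# 2. Kick off an OpenShift binary build from the merged directory
oc start-build mywebapp --from-dir=app --follow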
I've been trying for a week to deploy a web role to Azure Cloud Services without quite getting there.
Here is my setup:
I've got a cloud solution with a cloud project and an MVC application (standard, no changes to the template yet). It's under source control in Visual Studio Online.
I'm using OctoPack to try to generate the NuGet package.
I'm using the built-in NuGet repo from Octopus.
The Octopus server and Tentacle are hosted on a VM in Azure.
I've created a step-template for my deployment step (see this article)
My plan:
I'd like to have a CI build to a dev-service and a separate build to push my project to the staging environment and roll it onto the production environment using Octopus.
My problem:
The packages that are produced by OctoPack seem not to contain what they should, and I've tried to play around with the nuspec file included in my web role to get it just right. Something ends up missing whichever way I try.
Has anyone gotten this to work? I'd appreciate any tips pointing me in the right direction, as I've slowly been running out of ideas. So I turn to you, my fellow nerdlings, for some much needed help.
Regards
ZiGGstern
Correct me if I'm wrong, but it looks like you're in need of octo.exe to automate deployments to your target environments after the build within Visual Studio/TFS Online.
I'm trying to focus on this statement:
I'd like to have a CI build to a dev-service and a separate build to push my project to the staging environment and roll it onto the production environment using Octopus.
Within your build template, you can use the "Post-Deploy Script Path" to point at a PowerShell script that calls Octo.exe (with an API key) and fires off a deployment to your desired environment(s). You can customize this per build if you so choose. I've used this method by creating a folder within the root of my solution (I call it 'Tools', but the name doesn't matter). Within that Tools folder, I add a PowerShell script AND octo.exe. The PS script fires Octo.exe, which makes a call to my Octopus Server and, with the "create release" option, automatically deploys to whatever environment AFTER my build finishes within TFS. Make sure to always include those files (right-click in VS and in file properties select 'always copy').
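The heart of that PowerShell script is a single call to Octo.exe; a hedged sketch, where the server URL, project and environment names are placeholders and the API key should come from a secured variable rather than being hard-coded:
# create a release in Octopus and deploy it straight to the chosen environment
.\Tools\Octo.exe create-release --project "MyWebRole" --server "https://my-octopus-server" --apiKey "API-XXXXXXXX" --deployto "Staging"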
I'm not quite sure why your NuGet packages would not be configured correctly, but that should be remedied first. Your question is asking for two things and it's not clear which is more important to you: the NuGet package or the deployment from the CI build. Having said that, I think you need to give more details on why you think your NuGet package is inadequate or not working correctly for your Azure services.
Please note, the site you supplied is using a custom PowerShell script in the form of a step template. It may be best to try the default Azure step within Octopus first before using a customized script. Just a thought.
Read more about the Octo.exe here: http://docs.octopusdeploy.com/pages/viewpage.action?pageId=360596
I am using Jenkins to continuously build the website front-end code from a GitHub repository, package it up into a tar archive, and post it to an S3 bucket. The Jenkins build creates files named like FrontEnd-122.tgz, where 122 is the build number.
I am using the following recipe to deploy the app onto the server:
# Build number to deploy, taken from a node attribute
deploy_version = node['my-app']['build-number']
deploy_from = "http://mybucket.com/FrontEnd-#{deploy_version}.tgz"

# Download the archive and unpack it into the site directory,
# stripping the top-level folder from the tarball
tar_extract "#{deploy_from}" do
  target_dir '/usr/local/site/FrontEnd'
  creates '/usr/local/site/FrontEnd/index.php'
  tar_flags [ '--strip-components 1' ]
end
This all works great; however, I have to manually update the node attribute my-app/build-number, which is fine for QA and production deployments.
What I would like to do is have a snapshot deployment VM where the latest code gets deployed, for further testing with Selenium and friends. However, to do that I need a way for the above cookbook to figure out what the latest build number is and deploy from there. Do you have any suggestions?
Tricky one, because you need a mechanism for Chef to determine the latest revision stored in S3.
Presumably you store the code in a revision control system like Subversion or Git? Would it be feasible to use the Chef deploy resource instead? Let Chef pull the website code from your trunk or master branch, for testing purposes.
Another option would be to use a binary repository manager that understands the concept of "Snapshots". Take a look at products like Nexus, Artifactory and Archiva. You could then use S3 for both backup and a distribution area for approved and released copies of your site.
So, I used the dumb way to solve this issue. Besides putting the versioned archive into the S3 bucket, I also push the same archive under a name like 'FrontEnd-Latest'. I also modified the cookbooks to take a version parameter. The staging server has the version parameter set to 'Latest' and the production server has the parameter set to whatever version is considered stable.
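Concretely, the Jenkins job just uploads the same archive twice; a sketch assuming the AWS CLI is available on the build machine (the bucket name is a placeholder):
aws s3 cp FrontEnd-${BUILD_NUMBER}.tgz s3://mybucket/FrontEnd-${BUILD_NUMBER}.tgz
aws s3 cp FrontEnd-${BUILD_NUMBER}.tgz s3://mybucket/FrontEnd-Latest.tgz
The staging node then sets the version attribute to 'Latest', while production pins a specific build number.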
I have this confusion, and perhaps it may be a basic question. I am planning to work on a Rails project along with a friend who is based in a different location.
We have identified Heroku as our deployment platform and Bitbucket for SCM related activities.
Both my friend and I are new to Rails, but we are familiar with web development in general.
I'm working on a Windows box while he is on a Mac. We both have the same Rails version, including the gems. However, I'm not really sure how we should manage the source code and code integration. The reason I say this is that when we try to commit the entire code from our systems, a few platform-specific Rails files get uploaded to the server, thereby rendering the deployment useless.
So my question is: if I am on Windows and my friend is on a Mac, what's the recommended way of working together on a single Rails project and deploying it to a common platform so we get the same desired functionality?
Yes, by using the source control management (SCM) you selected when you set up your repository.
For instance, if you use Git, you would copy your repository using git clone (the command is provided via the Bitbucket interface by clicking on Clone), make your changes, and then git push your changes back into the repository.
When you want to code next, execute a git pull command to get the latest repo changes, then work and git push your changes back to the repo.
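Concretely, the day-to-day loop looks like this (the repository URL is a placeholder - Bitbucket shows you the real one on the Clone button):
git clone https://bitbucket.org/yourteam/yourapp.git    # one-time: get your own copy
cd yourapp
git pull                                   # grab your friend's latest changes first
# ...edit code...
git add .
git commit -m "Describe what you changed"
git push                                   # publish your changes back to Bitbucket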
For examples, see Bitbucket's fantastic tutorial.
As a side note, Bitbucket also supports Mercurial, although I haven't used it.
As far as your actual issue goes, each person will need to make sure the platform-dependent files are excluded from your repository. If you're using Git, see the Git book, specifically the sections on .gitignore and git rm.
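For example, to stop tracking files that only one platform generates (the file names below are just common examples of Windows/Mac noise - add whichever Rails files are actually causing you trouble):
echo "Thumbs.db" >> .gitignore    # Windows Explorer thumbnails
echo ".DS_Store" >> .gitignore    # macOS Finder metadata
git rm --cached --ignore-unmatch Thumbs.db .DS_Store    # untrack already-committed copies without deleting them locally
git add .gitignore
git commit -m "Stop tracking platform-specific files"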