A web application typically consists of code, config, and data. The code can often be open-sourced on GitHub, but per-instance config and data may contain secrets and are therefore inappropriate to store on GitHub. Data can be imported into persistent storage, so disregard it for now.
Assuming the configs are file-based and stored in a separate private, secured SVN repository, then in order to deploy the web app to OpenShift and implement CI, I need to merge the config files with the code before running the build scripts. In addition, the build strategy should support GitHub webhooks for automated builds.
My questions are, to be more specific:
Does OpenShift's BuildConfig support multiple data sources, especially SVN?
If not, how can such a web app be deployed to OpenShift?
The solution I came up with so far:
Instead of relying on OpenShift for CI, use Jenkins.
Merge the config files with the code in Jenkins.
Instead of using the Git source type in the BuildConfig, use the Binary source type.
Have Jenkins run
oc start-build <buildconfig> --from-dir=<directory>
where <directory> contains the merged code and config.
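For illustration, here is a minimal sketch of what the Jenkins build step could run; the BuildConfig name (myapp), the repository URLs, and the directory layout are all hypothetical:

# Jenkins shell build step, triggered by a GitHub webhook configured on the Jenkins side
git clone https://github.com/example/myapp.git src
svn export --force https://svn.example.com/myapp-config/trunk src/config   # overlay config onto the code
oc start-build myapp --from-dir=src --follow   # binary build from the merged directory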
I've been using AWS amplify to build my iOS app's backend.
I have created 4 DTAP environments in the backend, with 4 different configurations, and use a run-script to switch in the correct versions of awsconfiguration.json and amplifyconfiguration.json at compile-time based on the selected scheme.
Since these auto-generated config files contain a number of secrets and API keys, I am keeping them out of source control via my .gitignore, as committing them would be a point of failure, and I don't want to expose my entire backend this way.
This works fine locally, but when I run my CI on Bitrise, the build fails since these config files aren't present. I need to find a way to get these AWS and Amplify config files into the CI to be able to create my test builds.
If I am being overly cautious, and the config files are actually fine to keep in source control (i.e. not secret), please let me know. I really don't want to set up secrets as individual environment variables, since Amplify will have several secrets and endpoints for each environment I need, and it feels too messy and complicated to have a script building these config files as a CI stage.
Things I've tried:
Creating mock config files with fake secrets that are copied in at compile time - this fails because the compile-time script still tries to copy the non-existent config files for the real environment
Using individual environment variables as secrets in Bitrise - this is likely to work, but would be a monumental effort for my one-dev startup to maintain
Touching a fake config file to copy over - this works, but it means the actual AWS infrastructure doesn't work in the test builds
I'll be grateful for any thoughts, suggestions or experience anyone has.
Thanks
Jacob
I would recommend using Bitrise's Generic File Storage and the related step to download the files. This will inject them into your build, and you will be able to put them where they need to be before the project is compiled.
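A minimal sketch of that approach, assuming the two configs were uploaded to Generic File Storage so that their download URLs are exposed as environment variables (the variable names and target paths here are hypothetical):

# Script step in the Bitrise workflow, before the Xcode archive step
curl -fsSL "$BITRISEIO_AWSCONFIGURATION_URL" -o "$BITRISE_SOURCE_DIR/MyApp/awsconfiguration.json"
curl -fsSL "$BITRISEIO_AMPLIFYCONFIGURATION_URL" -o "$BITRISE_SOURCE_DIR/MyApp/amplifyconfiguration.json"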
I have a console application and I need some ideas on how to build/release the config part of the application. When running locally in VS the config file is called app.config; after a build, the file becomes <assembly name>.exe.config. We are using XDT transformation to build the config file for the different environments. What would be the smartest way to ensure the naming convention is correct when releasing the build to a server?
It seems you want to use TFS Build and deploy to multiple environments via Release Management.
For handling configuration in Release Management, there are two generally used techniques: Config Per Environment and Tokenization.
If you prefer a clean separation between build and deploy, I recommend tokenizing the configuration.
For more details, please take a look at this blog: Config Per Environment vs Tokenization in Release Management.
Environment-specific application settings configured in app.config are tokenized: the blog's method essentially inserts tokens into setting values during the build process, and on deployment the tokens are replaced with the matching Release definition configuration values.
Besides, for an example of a separate build and release solution, you could also take a look at this blog: Using web.config transforms and Release Manager – TFS 2017/Team Services edition (it applies to app.config in the same way).
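As a rough illustration of the tokenization idea (not the blogs' exact mechanics): the build leaves placeholder tokens in the config, and the release step substitutes per-environment values. The token name and value are hypothetical, and sed is used only to show the substitution; a TFS release would normally do this with a token-replacement task or a PowerShell step:

# The built MyApp.exe.config contains a placeholder such as:
#   <add key="ServiceUrl" value="__ServiceUrl__" />
# A release task then substitutes each environment's value, e.g.:
sed -i 's|__ServiceUrl__|https://test.example.com/api|g' MyApp.exe.config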
I am new to Kubernetes, so I'm wondering what the best practices are for putting your app's source code into a container run in Kubernetes or a similar environment.
My app is PHP, so I have PHP-FPM and Nginx containers (running on Google Container Engine).
At first I had a git volume, but there was no way of changing app versions like this, so I switched to emptyDir, keeping my source code in a zip archive in one of the images and unzipping it into this volume upon start. Now I have the source code separately in both images via git, with a separate git directory, so I have /app and /app-git.
This is good because I do not need to share or configure volumes (fewer resources and less configuration), the app's layer is reused in both images so there is no impact on space, and since it is git, the "base" is built in, so I can simply adjust the command at the end of my Dockerfile and easily switch to a different branch or tag.
I also wanted to download an archive of the source code directly from the repository, providing credentials as arguments during the build process, but that did not work: my host, Bitbucket, creates archives with the last commit ID appended to the directory name, so there was no way of knowing what unpacking the archive would produce. So I am stuck with git itself.
What are your ways of handling the source code?
Ideally, you would use continuous delivery patterns, which means using Travis CI, Bitbucket Pipelines, or Jenkins to build the image on every code change.
That is, every time your code changes, your automated build gets triggered and builds a new Docker image containing your source code. Then you can trigger a Deployment rolling update to switch the Pods to the new image.
If you have dynamic content, you would likely put it in persistent storage, which is re-mounted on Pod updates.
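A minimal sketch of that rolling update, with hypothetical Deployment, container, and image names:

# Point the Deployment at the freshly built image; Kubernetes replaces the Pods gradually
kubectl set image deployment/myapp php=registry.example.com/myapp:1.3
kubectl rollout status deployment/myapp   # wait until the rollout completes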
What we've done traditionally with PHP is an overlay at runtime. Basically, the container has a volume mounted to it with deploy keys for your git repo, which allows it to perform git pull operations.
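A rough sketch of that runtime overlay, assuming the deploy key is mounted at a hypothetical path and the repository URL is made up:

# entrypoint.sh inside the container: refresh the code before starting PHP-FPM
export GIT_SSH_COMMAND="ssh -i /secrets/deploy_key -o StrictHostKeyChecking=no"
if [ -d /app/.git ]; then
  git -C /app pull origin master
else
  git clone git@bitbucket.org:example/myapp.git /app
fi
exec php-fpm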
The more buttoned-up approach is to have custom, tagged images of your code, extended from fpm or whatever base image you're using. That way you would run version 1.3 of YourImage, where YourImage contains version 1.3 of your application.
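A minimal sketch of building such a tagged image; the base image tag and registry are placeholders:

# Dockerfile baking application code version 1.3 into the image
cat > Dockerfile <<'EOF'
FROM php:8.2-fpm
COPY . /app
EOF
docker build -t registry.example.com/myapp:1.3 .   # image tag matches the code version
docker push registry.example.com/myapp:1.3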
Try to leverage continuous integration and continuous deployment. You can use Jenkins as the CI/CD server and create jobs for building, pushing, and deploying the image.
I recommend putting your source code into the Docker image instead of a git repo. You can also extract the configuration files from the Docker image: Kubernetes v1.2 introduced the ConfigMap feature, so you can put configuration files in a ConfigMap, and they will be mounted automatically when the pod runs. It's very convenient.
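A minimal sketch of the ConfigMap part, with hypothetical names (the Pod spec must reference the ConfigMap as a volume for the automatic mount to happen):

# Create a ConfigMap from a config file kept outside the image
kubectl create configmap myapp-config --from-file=config.php
# In the Pod spec, declare a volume of type configMap named myapp-config
# and mount it at, e.g., /etc/myapp; config.php then appears there at runtime.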
I'm trying to deploy my MVC4 app to Elastic Beanstalk. The project has several post-build steps which pull together dependencies. The AWS SDK publish wizard therefore does not do the trick: it builds a Web Deploy package behind the scenes, which neither runs those post-build steps nor preserves the resulting directory structure.
So I downloaded the command-line EB tools and got a git repository working, but I can't work out the next step: what do I push to the server with git aws.push? If it's just the resulting files, then I can't specify the "Enable 32-bit Applications" flag (required), etc. Do I push a Web Deploy package from my repository?
I presume so, but if so, how do I include the files that my post-build steps copy into the output folder during "normal" builds?
Here we go. This seems to be in conflict with what Jim Flanagan was saying - below it's a zip file, but Jim says it's the contents of it.
@Jim Flanagan - perhaps you could comment if you have some time. Thanks.
Hi, thanks for contacting AWS Premium Support.
Communication from the Elastic Beanstalk Engineering Team:
When you aws.push an ASP.NET/MVC app you do not push the Web Deploy archive; rather, you push the artifacts as you want them deployed on the machine. From the customer's Stack Overflow question it seems they have already found the local git repo that the VS deployment wizard created, and looking there should give them a good indication of what is needed in the git repository.
There isn't a nice way through aws.push to specify the "Enable 32-bit Applications" app pool setting (or any other configuration setting). If you need a specific configuration setting, I would suggest creating the environment via the console or the eb command-line tool, both of which allow you to specify the configuration, and then using git aws.push to deploy to that environment; git aws.push will just use the configuration that is already present on the environment.
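For completeness, option settings can also be committed alongside the pushed source in an .ebextensions config file; the namespace and option name below are my recollection of the Elastic Beanstalk .NET platform docs, so treat them as an assumption to verify:

# Create .ebextensions/apppool.config (YAML) in the repository root
mkdir -p .ebextensions
cat > .ebextensions/apppool.config <<'EOF'
option_settings:
  - namespace: aws:elasticbeanstalk:container:dotnet:apppool
    option_name: Enable 32-bit Applications
    value: true
EOF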
The last question, about deployments still being incremental, is not really valid, since you are not pushing just one big zip file. But even if you were, it could still be incremental depending on what changed in the zip file: it might send just a diff between the two versions. As the question implies, though, that use case is not really what incremental deployments were designed to help with.
We are using Maven in our development process. Maven provides a nice feature for configuring repositories; using it, I have created a remote internal repository from which I can download dependencies.
The development machines point to this remote internal repository. Each development machine has its own local repository (~/.m2/repository/), and hence the project's dependencies are downloaded from the remote internal repository to the local repository (~/.m2/repository/) on each developer machine.
Is there any way that the local repository (~/.m2/repository/) on the developer machines can be set to the internal remote repository that we have created and which is used for downloading the dependencies?
If you take a look at Maven's Introduction to Repositories, the first paragraph says:
There are strictly only two types of repositories: local and remote.
There is no way to change this behavior.
If it were handled differently it would cause many problems; e.g., builds would take much longer because every file would be downloaded over the network, IDEs would not work properly (project dependencies would not be stored locally), etc.
May I suggest another approach to sharing dependencies and artifacts: in our projects we use Nexus as a proxy and repository for our artifacts. It works well with no issues; I have already posted a basic configuration here.
Once Nexus is running, you could also set up continuous integration using Jenkins and enjoy a fully automated environment.
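For reference, a minimal settings.xml fragment that routes all of a developer's dependency downloads through such a Nexus instance (the URL is a placeholder for your internal host):

<mirrors>
  <mirror>
    <id>internal-nexus</id>
    <mirrorOf>*</mirrorOf>
    <url>http://nexus.example.com/repository/maven-public/</url>
  </mirror>
</mirrors>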
Is your requirement to avoid each developer having to download all dependencies to their local repository?
Assuming your remote internal repository has the same format as a Maven local repository, you can achieve this by adding the following line to the settings.xml of each of your developers:
<localRepository>shared-drive-location-of-remote-repository</localRepository>