Include common tests in Erlang release

I have a project written in Erlang (and releases generated by rebar) and I want to do integration testing in an environment which is as close as possible to the deployment environment.
The project pulls in a few other Erlang applications as dependencies. One of these applications has common tests in test/. It is these tests I wish to run in the release.
Is there maybe a way to have the common tests included in a generated release, and somehow run them on the target instance?
I don't want to run these tests on the application in deps/, but on the actual release itself.
Thanks!

Leave the tests out of the release. Build the release, then start it from a CT run (test_server has a nice way to start slave nodes). Now you can call into the other node to execute test cases.
I find that this method is often easier to get working.
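A minimal sketch of that setup, assuming the release has already been built and started as a distributed node (the node name, cookie, and application name below are hypothetical, and CT itself must run as a distributed node, e.g. ct_run -name ct@localhost). Instead of having test_server start the node, this suite simply connects to the running release and drives it over rpc:

-module(release_SUITE).
-include_lib("common_test/include/ct.hrl").

-export([all/0, init_per_suite/1, end_per_suite/1, calls_release_node/1]).

all() -> [calls_release_node].

init_per_suite(Config) ->
    ReleaseNode = 'myapp@localhost',             %% hypothetical release node name
    true = erlang:set_cookie(node(), mycookie),  %% hypothetical cookie
    pong = net_adm:ping(ReleaseNode),            %% fail fast if the release is not up
    [{release_node, ReleaseNode} | Config].

end_per_suite(_Config) ->
    ok.

calls_release_node(Config) ->
    Node = ?config(release_node, Config),
    %% Execute the actual check on the release node via rpc.
    Apps = rpc:call(Node, application, which_applications, []),
    true = lists:keymember(myapp, 1, Apps).      %% hypothetical application name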

Related

In TFS Releases, is there a way to execute a PowerShell script before downloading artifacts on the same agent?

Problem:
We are running Selenium tests using release pipelines. If the environment deployment that runs those tests is cancelled, the drivers might not be killed, which leaves the working folder locked. When the deployment happens again on the same environment within the release definition (whether it is a new release or a redeployment), the release agent throws an error that the working folder is locked.
We do have a PowerShell task with an inline script that does the cleanup (it is inline so it has no dependencies), but unfortunately the TFS release pipeline tries to download the artifacts into the locked folder before running that script.
Is there a way to execute an inline PowerShell script before the release pipeline downloads the artifacts?
We have a partial solution that uses multiple phases, but this only works as long as the deployment queue is not busy, and it soon will be. When the queue is busy, TFS might pick different agents for different phases of a given environment deployment, so this approach breaks down. Bonus question: alternatively, is it possible to lock the agent for a specific environment deployment so that the agent does not change between phases?
I searched for both and it looks like there are no out-of-the-box solutions, or did I miss one? If not, is there some creative way to achieve either of these?
You're approaching this from the wrong end. If the process failed, it needs to clean up. Thus, add a task at the end of the release with a condition of canceled() (or perhaps ne(succeeded())) to perform your cleanup operations.
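If you keep the cleanup in a task, the inline script can stay tiny; a sketch (the driver process names are assumptions, adjust to the browsers you actually run):

# Kill any leftover Selenium driver processes that may be holding a lock on the working folder.
Get-Process chromedriver, geckodriver, IEDriverServer -ErrorAction SilentlyContinue | Stop-Process -Force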
Also, you didn't specify what language you're doing your Selenium testing in, but in C# you can wrap your webdriver creation in a using block to ensure it properly cleans up the driver. There are ostensibly similar constructs or patterns in other languages. Basically, "if the web driver goes out of scope, clean it up, period".
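For reference, a minimal C# sketch of that pattern (test class and URL are hypothetical); IWebDriver implements IDisposable, so the browser and driver process are torn down even when the test throws:

using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

public class LoginSmokeTest
{
    public void CanReachLoginPage()
    {
        // Dispose() quits the browser and shuts down the driver process,
        // even if navigation or an assertion throws.
        using (IWebDriver driver = new ChromeDriver())
        {
            driver.Navigate().GoToUrl("https://example.test/login"); // hypothetical URL
            // ... assertions against the page ...
        }
    }
}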
I had a similar issue with downloading artifacts. You can disable this step: click on the Environment name, expand Additional options, and then select "Skip artifact download".

Using a build system to run tests or interact with clusters

What is the purpose of projects like these below that use Bazel for things other than building software?
https://github.com/bazelbuild/rules_webtesting
https://github.com/bazelbuild/rules_k8s
Are they just conveniently providing an environment for the run command (as opposed to building portable executables), or am I missing something?
The best I can tell is that Bazel could be used to run only a subset of E2E tests based on knowledge of what changed.
Disclaimer: I have only cursory knowledge about k8s and docker.
Bazel isn't just used for building and testing, it can also deploy, as you've discovered with the rules in those projects.
The best I can tell is that Bazel could be used to run only a subset of E2E tests based on knowledge of what changed.
Correct, but this extends beyond tests to deployments. If you've only changed a single string in your Go binary that's injected into an image, Bazel can use rules_k8s, rules_docker, and rules_go to (see the sketch after this list):
incrementally and reproducibly rebuild the minimum set of files needed to build the new Go executable
create a new image layer containing the Go executable (without using Docker)
push the image to the registry
redeploy the changed pod(s) into the cluster
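A minimal BUILD sketch of that chain (target names, template, cluster, and registry path are all hypothetical):

load("@io_bazel_rules_go//go:def.bzl", "go_binary")
load("@io_bazel_rules_docker//go:image.bzl", "go_image")
load("@io_bazel_rules_k8s//k8s:object.bzl", "k8s_object")

go_binary(
    name = "server",
    srcs = ["main.go"],
)

# Builds an image layer containing the Go binary, without invoking Docker.
go_image(
    name = "server_image",
    binary = ":server",
)

# `bazel run :deploy.apply` pushes the image and (re)deploys the template,
# substituting the image's digest into deployment.yaml.
k8s_object(
    name = "deploy",
    kind = "deployment",
    cluster = "my-cluster",  # hypothetical cluster/context name
    template = ":deployment.yaml",
    images = {
        "gcr.io/my-project/server:latest": ":server_image",  # hypothetical registry path
    },
)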
The notable thing is that if you didn't change the source file, Bazel will always create an image with the same digest due to its reproducibility. That means that you can now trust the deployment workflow to not redeploy/restart a pod even if you do a bazel run twice or more.
For more, check out this BazelCon 2017 talk: Using Bazel for Fast, Correct Docker Deployments w/ Databricks
Fun fact: you can also use bazel run to start a REPL for your build target, from 0.15.0 onwards. Haskell and Scala rules use this.

TFS build fails when test projects reference the MS CRM Organization Service URL as localhost

I have a VM where MS CRM is installed and can be accessed using http://localhost:5555/Orgname/main.aspx.
I have created unit test cases in my VM by referencing the organization URL as
http://localhost:5555/XRMServices/2011/OrganizationService.svc?wsdl
When I build the test project it connects to CRM and executes the test methods without any error.
Whereas when I do a check-in, the build fails due to the reference to the URL "localhost".
For builds we have a separate build server.
Can anyone let me know how to solve this?
Your tests are being executed on the build server, and it looks like some of them are integration tests rather than unit tests. As such, they look for a configured CRM instance on that server (localhost resolves to the host itself on every machine) and can't find one. That means you have a few options:
Install CRM on the build server, extend the build process to deploy CRM to the build server during build in order to run your tests. A build process like the one developed by Wael Hamze can be extremely helpful for such a solution.
Do not include a localhost address, but actually check in a location that points to a shared dev environment the build server can connect to. This is not ideal, as the build may depend on specific data being present and concurrent builds may break due to strange interactions. If you configure the build agent to run only one build at a time, it may work.
Disable the tests that depend on an installed version of CRM. You could put a [TestCategory("Integration")] on these tests and then set a filter condition on the build to exclude this test category.
Or you could try to improve your tests by making them independent of your configured instance, using Fakes or any other mocking framework. There are several testing frameworks specifically made for CRM workflow activities and other parts specific to CRM.
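For the TestCategory route, a minimal MSTest sketch (test class and method are hypothetical; the filter shown uses the vstest test case filter syntax):

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class CrmOrganizationServiceTests
{
    [TestMethod]
    [TestCategory("Integration")] // excluded on the build server via the filter below
    public void CanConnectToOrganizationService()
    {
        // ... talks to http://localhost:5555/XRMServices/2011/OrganizationService.svc?wsdl ...
    }
}

Then set a test case filter on the build, for example TestCategory!=Integration, so these tests are skipped during automated builds.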
You need to remove the Integration Test from the list of Unit Tests that are executed as part of the build. I recommend creating a new project called [projectundertest].IntegrationTests and adding all your integration tests there. Then configure the build to only execute UnitTests...
Your build server is trying to execute all of the tests in your solution. Unfortunately you have an integration test masquerading as a unit test. When it is executed, the test tries to access CRM and fails. This is correct behavior.

TFS2010 build - how to run debug with unit tests before release build

We are working on fine tuning the automated build process using TFS 2010. During development, we use a special configuration to run our unit tests. During the build, does it make sense/possible to do the following:
Compile the application in UnitTest configuration and run unit tests. If all pass, run the build in release configuration and deploy.
The reasoning behind the above suggestion is that we are using config file transformation for some settings. However, I can make the build server match those without the need for a different setup. I also wonder if the above approach is supported in TFS build, i.e., how do you run two compilations in different configurations?
Or is the following approach better:
Compile the application in Release configuration and run unit tests. If all pass, deploy.
If you edit your build definition in Team Explorer and navigate to the Process page, you'll see an "Items to Build" parameter in the Required section. If you expand that parameter, you'll see a child parameter called "Configurations to Build." Clicking the ellipsis button for that property invokes a dialog where you can specify the platforms and configuration you want to build. By default, TFS will build the default platform and configuration only. You can, however, specify as many configurations as you like.
Regarding which configuration to test and deploy, I would personally lean towards the release version. Make sure you're still generating symbols for that configuration and you should still get full stack-trace information for any test failures. If you intend to deploy the release build, then that's probably the configuration you should be running your tests against.

Automated Deployment in Rails

I'm working on my first rails app and am struggling trying to find an efficient and clean solution for doing automated checkouts and deployments.
So far I've looked at both CruiseControl.rb (having been familiar with CruiseControl.NET) and Capistrano. Unfortunately, unless I'm missing something, each one of them only does about half of what I want (with each one doing a different half).
From what I've seen so far:
CruiseControl
Strengths
Automated builds on repository checkouts upon commit
Also runs unit/functional tests and reports back
Weaknesses
No built-in deployment mechanisms (best I can find so far is writing your own bash scripts)
Capistrano
Strengths
Built for deployments
Weaknesses
Has to be kicked off via a command (i.e. doesn't do automated checkouts upon commit)
I've found ways that I can string the two together -- i.e. have CruiseControl ping the repository for changes, do a checkout upon commit, run the tests, etc. and then make a call to Capistrano when finished to do the deployment (even though Capistrano is also going to do a repository checkout).
Basically, when all is said and done, I'd like to have three projects set up:
Dev: Checkout/Deployment is entirely no touch. When someone commits a file, something checks it out, runs the tests, deploys the changes, and reports back
Stage: Checkout/Deployment requires a button click
Prod: Button click does either a tagged check out or moves the files from stage
I have this working with a combination of CruiseControl.NET and MSBuild in the .NET world, and it was fairly straightforward. I would guess this is also a common pattern in the ruby deployment world, but I could easily be mistaken.
I would give Hudson a try (free and open source). I started off using CruiseControl but got sick of having to relearn the XML configuration every time I needed to change a setting or add a project. Then I started using Hudson and never looked back. Hudson is more or less completely configurable over the web. It was initially a continuous integration tool for Java but has plugins for other development stacks such as .NET and Ruby on Rails. There's a Rake plugin. If that doesn't work, you can configure it to execute any arbitrary command line after running your Rake builds/tests.
I should also add it's extremely easy to get Hudson going:
java -jar hudson.war
Or you can drop the war in any servlet container.
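For example, an "Execute shell" build step could be as small as this sketch (the rake and cap targets are assumptions about your project):

rake test && cap deploy   # only hand off to Capistrano when the tests pass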
I would use two systems to build and deploy anyway, for at least two reasons: you should be able to run them separately, and you should have two config files, one for deployment and one for building. But you can easily glue the two systems together.
Just create a simple Capistrano task that tests and reports back to you. You can use the "run" command to do anything you want.
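A minimal Capistrano 2 sketch of such a task (namespace, role, and commands are hypothetical):

namespace :deploy do
  desc "Run the test suite on the app server and stream the output back"
  task :smoke_test, :roles => :app do
    # "run" executes an arbitrary command on the remote host(s).
    run "cd #{current_path} && rake test" do |channel, stream, data|
      puts data  # report the remote output back to the local console
    end
  end
end

after "deploy:update", "deploy:smoke_test"  # optional: hook it into the normal deploy flow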
If you don't want any command-line tool, there was Webistrano two years ago.
You could use something like http://github.com/benschwarz/gitnotify/tree/master to trigger the build/deploy if you use git as your repository.
At least for automated development deployments, check out the hook scripts available in git:
http://git-scm.com/docs/githooks
I think you'll want to focus on the post-receive hook script, since this runs after a push to a remote server.
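A minimal post-receive sketch (paths and restart mechanism are hypothetical); it lives at hooks/post-receive in the repository you push to on the server and must be executable:

#!/bin/sh
# Check the pushed code out into the app's working tree...
GIT_WORK_TREE=/var/www/myapp git checkout -f
# ...then run the tests and restart the app (Passenger-style touch restart).
cd /var/www/myapp && rake test && touch tmp/restart.txt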
Also worth checking out Mislav's git-deploy on github. Makes managing deployments pretty clean.
http://github.com/mislav/git-deploy
