Protractor e2e tests with Bitbucket Pipelines - Docker

I already have all the e2e tests written, and they run successfully on my local machine and on Codeship.
I want to move our CI from Codeship to Bitbucket Pipelines, so I created my own Docker image with the testing environment.
When I run the Docker container in my local workspace the tests work fine, but when the build runs in Bitbucket Pipelines all the tests fail with a timeout saying that Angular can't be found on the page.
The server is definitely up and running in the container and the tests also start, but the problem is with opening the pages.
Does anyone have any ideas about it?
If any code is required, I'll post everything that is needed.

So I managed to solve my issue.
Not sure if it will be useful to anyone else, but the problem turned out to be in my environment setup.
I forgot to start the webpack service that generates some required server files and the minified files of the server sources.
So the server started successfully, but requesting the routes from the browser failed because it couldn't find the requested files.
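For reference, here is a minimal bitbucket-pipelines.yml sketch of the kind of setup that ended up working; the image name and npm script names are assumptions and will differ per project:

```yaml
# bitbucket-pipelines.yml - sketch, not the exact file used
image: myorg/protractor-e2e:latest      # hypothetical custom image with Node and a browser preinstalled

pipelines:
  default:
    - step:
        script:
          - npm install
          # Build the server bundles first - skipping this step was what caused the timeouts
          - npm run webpack:build       # hypothetical script that generates the server/minified files
          - npm run server &            # start the app server in the background
          - sleep 10                    # crude wait for the server to come up
          - npm run e2e                 # hypothetical script that runs the Protractor suite
```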

Related

GitLab CI: How to configure Cypress e2e tests with multiple server instances?

My goal is to run a bunch of e2e tests every night to check if the code changes made the day before break core features of our app.
Our platform is an Angular app which calls 3 separate Node.js backends (auth-backend, old- and new-backend). We also use MongoDB as the database.
Let's assume each of the 4 projects has a branch called develop, which is the only branch that should be tested.
My approach would be the following:
I run every backend plus the database in a separate Docker container.
Therefore I need to either get the latest build of each project from GitLab using SSH,
or clone the repo into the Docker container and run a build inside it.
After all projects are running on the right ports (which I'd specify somewhere), I start the npm script that runs the Cypress e2e tests.
All of that should be defined in some file. Is that even possible?
I do not have experience with GitLab CI, but I know that other CI systems provide the ability to run e.g. bash scripts.
So I guess you can do the following:
Write a local bash script that pulls all the repos (since GitLab can provide secret keys, you can use these to authenticate against your GitLab repos).
After all of these repos have been pulled, run the build commands for your different repos.
Since some of your repos depend on each other, you may have to add a build command for exactly this use case, so that you always have a production-like state, or whatever you need.
After you have pulled and built your repos, start the servers for your backends.
I guess your Angular app uses some kind of environment variables to define which servers to send requests to, so you also have to set those in the build command/script for your app.
Then you should be able to run your tests (see the sketch after this list).
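Roughly, those steps could be expressed in a .gitlab-ci.yml like the sketch below; the repo names, ports, and npm scripts are assumptions, not details from the question, and the job is assumed to live in the Angular app's repo, which GitLab checks out automatically:

```yaml
# .gitlab-ci.yml - nightly/triggered e2e job, sketched from the steps above
e2e:
  image: cypress/base:latest            # assumed image with Node and the Cypress dependencies
  services:
    - mongo:latest                      # MongoDB made available as a service container
  script:
    # Pull the develop branch of each backend (access via CI token or deploy keys is assumed)
    - git clone --branch develop https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.example.com/group/auth-backend.git
    - git clone --branch develop https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.example.com/group/old-backend.git
    - git clone --branch develop https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.example.com/group/new-backend.git
    # Build and start each backend on its own port (scripts and ports are hypothetical)
    - (cd auth-backend && npm ci && npm run build && PORT=3001 npm start &)
    - (cd old-backend && npm ci && npm run build && PORT=3002 npm start &)
    - (cd new-backend && npm ci && npm run build && PORT=3003 npm start &)
    # Tell the Angular app where the backends live, then run the Cypress suite
    - AUTH_API=http://localhost:3001 OLD_API=http://localhost:3002 NEW_API=http://localhost:3003 npm run e2e
  only:
    - schedules
```
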
Personally I think that Docker is kind of overkill for this use case. Possibly you should define and run a pipeline that always creates a new develop state of your backends and pushes the Docker image to your server. Then you should be able to create your test pipeline, which first starts the Docker container on your own server (so you do not have an "in-pipeline" server). This should then have started all your backends, so that your test pipeline can run your e2e tests against those backend servers.
I would also advise that you do not run this pipeline every night, but instead whenever the develop state of one of those linked repos changes.
If you need help setting this up, feel free to contact me.

How to make JUnit tests access a remote server folder when run through a Jenkins job?

I'm running my JUnit tests using a Jenkins job; some of the tests require access to a remote server folder in order to create a text file in that folder. How do I configure the Jenkins job with a remote server so that the JUnit tests can create a text file on the server?
I'm using a Maven project which has logic to create a text file on the server. It all works fine when I run the JUnit tests locally in my IDE because I have access to localhost.
Thanks for reading! I'd greatly appreciate it if someone could guide me on this.
Make that remote server a slave node of Jenkins, then execute the job that creates the folder on that slave, so that it will create the text file on the remote server as you intended.
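As a rough illustration, a declarative pipeline could pin the test stage to that node; the label remote-server is a hypothetical name you would give the slave when adding it:

```groovy
// Jenkinsfile - sketch: run the Maven tests on the agent that lives on the remote server
pipeline {
    agent { label 'remote-server' }   // hypothetical label of the slave node
    stages {
        stage('Test') {
            steps {
                // The JUnit tests now execute on the remote machine, so writing a
                // text file to a local folder there needs no remote access at all
                sh 'mvn test'         // use bat instead of sh if the node is a Windows machine
            }
        }
    }
}
```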

Disabling all builds after migrating Jenkins

I am in the process of migrating a Jenkins server from an internal resource to AWS EC2. I have finished copying all the files in /var/lib/jenkins. However, when I start Jenkins it immediately wants to run builds, and they all fail because I still need to make some changes. The devs don't like the flood of emails.
How do I start Jenkins with all jobs/builds disabled by default, so I can test and configure things before cutting over to the new server installation?
A Groovy init script can take care of this; it needs to be placed in $JENKINS_HOME/init.groovy so Jenkins runs it at startup.
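A common sketch of such a script, assuming you simply want every project disabled until you re-enable it by hand:

```groovy
// $JENKINS_HOME/init.groovy - executed automatically when Jenkins starts
import jenkins.model.Jenkins
import hudson.model.AbstractProject

// Disable every project so nothing builds (or sends emails) until the migration is verified
Jenkins.instance.getAllItems(AbstractProject.class).each { job ->
    job.makeDisabled(true)
}
```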

When a Selenium test is run by Jenkins and NUnit, the browser doesn't come up, however there are valid results

I would say that my problem is more a lack of information and a need for confirmation than a real problem. It seems somebody else had a similar question.
I put together a machine (Windows Server 2012 R2) for POC reasons where Jenkins is installed and executes Selenium UI tests using NUnit. The NUnit tests are generated by SpecFlow.
I could do:
install Jenkins
Jenkins is run by a valid user, not by a service account
set up Jenkins properly
it can pull the source code from TFS-GIT
it can compile the C# solution
it can execute the test project
the test results are correct
The Selenium plugin is installed on Jenkins, but I don't think it is used in this case because the test execution just runs NUnit and that deals with everything else.
At the moment I don't need the capability to delegate test execution to other Jenkins slaves or machines, because Jenkins has only one compile task. Compiling, executing and running the tests can go in parallel; the machine is able to deal with it.
But when I log in to the server where Jenkins runs and watch what happens during the CI build (compile and test execution), I can't see the browser (Firefox) start; however, the test results and the logs show that a browser was executed.
What I did so far:
Jenkins runs as a service, and the account is an existing account
If I remote to the machine with the account which is set up for the service, I still can't see the browser being executed; however, the log shows that something happened.
My question is: what the heck is happening when my tests are executed by Jenkins? If I execute the command used by Jenkins from a console on the same machine, I can see that Firefox starts, does what is programmed in the tests, and the results end up in result.xml. Can I accept the result as a valid result? Can I somehow set up Jenkins so that the browser is really visible when it is executed (I'll believe it when I see it :) )?
I think this is because you run Jenkins as a service. Services do not show up on the desktop. The workaround is to run Jenkins or the slave agent from CMD.
Jenkins windows slave service does not interact with desktop
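For reference, launching a JNLP slave agent interactively from CMD (so the browser is visible in your session) typically looks something like the line below; the Jenkins URL, node name, and secret are placeholders you get from the node's configuration page:

```
java -jar slave.jar -jnlpUrl http://your-jenkins:8080/computer/ui-test-node/slave-agent.jnlp -secret <node-secret>
```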

How to deploy a successful build using Travis CI and Scalr

We're currently evaluating CI servers and Travis CI caught our eye since it is a hosted solution. I haven't been able to find any information about it being able to deploy to Scalr though. Has anyone had any luck setting this up? I found information about using Jenkins to deploy to Scalr but I'd rather not go with Jenkins.
Thanks.
Deploying an application upon a Travis CI build success is functionally similar to deploying one upon a Jenkins success. All you need to do is hook into Scalr through its API when your build succeeds.
Using Travis CI, you can't really run arbitrary post-build shell scripts (unlike Jenkins). This makes integration a bit more complicated than using Jenkins (with Jenkins you just use the Scalr Command Line Tools to call the Scalr API), but it remains feasible.
All you need to do is have Travis CI send a notification to a webhook endpoint on a webapp you control (host it on your cloud infrastructure, or on e.g. Heroku), and have that webapp call the Scalr API.
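On the Travis CI side that amounts to a few lines of configuration; the endpoint URL below is a hypothetical webapp you control, and that webapp is what actually calls the Scalr API:

```yaml
# .travis.yml - notify your own endpoint after the build
notifications:
  webhooks:
    urls:
      - https://deployer.example.com/travis-hook   # hypothetical endpoint that calls the Scalr API
    on_success: always                             # only a successful build should trigger the deployment
    on_failure: never
```
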
Disclaimer: I work at Scalr.
