I have created a development environment with Docker. This dev environment includes an Ubuntu 14.04 container (with Apache and Node dependencies installed), a MySQL container (official image), and a phpMyAdmin container. How can I ensure with Travis-CI that all containers are working fine and that all dependency software is installed in the relevant containers?
I could not find a dedicated tool for testing Docker containers. But to check whether the containers are up and running, I wrote some CLI command test cases (functional tests) using a testing framework. I suggest you try the Codeception PHP testing framework, which is built by extending PHPUnit; it makes writing shell command test cases very easy.
We can then simply automate the Codeception test cases in Travis-CI, as sketched below.
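A minimal sketch of what the .travis.yml could look like, assuming the containers are wired together with a docker-compose.yml and the Codeception suite is named cli (both are assumptions):

    sudo: required
    services:
      - docker

    before_script:
      # Start the ubuntu/apache, mysql and phpmyadmin containers
      - docker-compose up -d

    script:
      # Sanity check: the containers should be up and running
      - docker ps
      # Run the Codeception shell-command test cases against them
      - php vendor/bin/codecept run cli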
This might be a silly question, but I can't find a better solution, so I'm posting it here.
There is an existing old Rails project with the following details:
Repository is on GitLab
Using Drone CI & Capistrano (already functional and doing deployments with CI)
Target server is our own server
We want to dockerize our Rails app, so we added a Dockerfile and docker-compose files to the project, using the DB from the host machine. The project works fine locally.
We want to run this dockerized Rails app in CI and deploy it to the target environments (Staging, UAT, Production, etc.).
Note: we have our own server to store Docker images, and we don't want to use shell scripts for the deployment.
I think that with Docker and Drone CI, Capistrano can be removed.
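For what it's worth, a rough sketch of how such a Drone pipeline could look, pushing the image to your own registry and deploying over SSH (Drone 1.x syntax; the plugin names, registry URL, and secret names are assumptions to adapt):

    kind: pipeline
    name: default

    steps:
      # Build the image and push it to your own registry
      # (registry credentials would go into Drone secrets)
      - name: build-and-push
        image: plugins/docker
        settings:
          registry: registry.example.com
          repo: registry.example.com/myorg/rails-app
          tags: ${DRONE_COMMIT_SHA}

      # Pull and restart the container on the target server, no Capistrano
      - name: deploy
        image: appleboy/drone-ssh
        settings:
          host: staging.example.com
          username: deploy
          key:
            from_secret: ssh_key
          script:
            - docker pull registry.example.com/myorg/rails-app:${DRONE_COMMIT_SHA}
            - docker rm -f rails-app || true
            - docker run -d --name rails-app registry.example.com/myorg/rails-app:${DRONE_COMMIT_SHA}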
I'm trying to start two containers (each from a different image) using Jenkins shell commands. I tried installing the Docker extension in Jenkins and/or configuring Docker under global tool configuration. I am doing all of this in a pipeline. After executing docker run... I get a docker: not found error in the Jenkins console output.
I am also having a hard time finding a guide on the internet that describes exactly what I wish to accomplish. If it is of any importance, I'm trying to start a Selenium Grid and a Selenium Chrome node and then, using Maven (which is configured and works correctly), send a test suite to that node.
If you have any experience with something similar to what I wish to accomplish, please share your thoughts on the best approach to this situation.
Cheers.
That's because the Docker images you probably create within your pipeline cannot also run (become containers) there, because the pipeline environment isn't designed to host applications.
You need to find a hosting provider for your Docker images (e.g. Azure or GCP). Once you set up the hosting part, add a step to your pipeline to upload/push the image to that provider's Docker registry, or to the free public Docker Hub. Then, finally, add a step to your pipeline that tells your hosting to download the image from whichever registry you chose and launch it into a container (this download-and-launch part is what docker run covers). Only at that point do you have a running app.
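As a rough shell sketch of those steps (the registry and image names here are hypothetical):

    # Build and tag the image in the pipeline
    docker build -t registry.example.com/myteam/my-app:latest .
    # Push it to your chosen registry
    docker push registry.example.com/myteam/my-app:latest

    # On the hosting environment: download the image and launch a container
    docker pull registry.example.com/myteam/my-app:latest
    docker run -d --name my-app registry.example.com/myteam/my-app:latest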
Good luck.
Somewhat relevant (maybe it'll help you understand how some of those things work):
The docker build command is comparable to the process of producing an installer package, such as an MSI.
A Docker image is comparable to an installation package (e.g. an MSI).
The docker run command is comparable to running an installer with the goal of installing an app. So, using the same analogy, running an MSI installs the app.
A container is comparable to an installed application. Just like an app, a Docker container can be running or stopped. This depends on the environment, which I referred to as "hosting" above.
Just like you can build an MSI package on one machine and run it on other machines, you build Docker images on one machine (the pipeline host, in your case), but you need to host them in environments that support running them.
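And for the Selenium part of the question specifically: once you have an environment that can host containers, starting a grid hub and a Chrome node with plain docker commands typically looks something like this (a sketch using the selenium project's Docker Hub images; HUB_HOST is the Selenium 3-era way for the node to find the hub):

    # Create a network so the node can reach the hub by name
    docker network create grid
    # Start the Selenium Grid hub, exposing port 4444 for the Maven test run
    docker run -d --name selenium-hub --net grid -p 4444:4444 selenium/hub
    # Start a Chrome node and point it at the hub
    docker run -d --net grid -e HUB_HOST=selenium-hub selenium/node-chrome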
Basically, I want to run my SoapUI tests (ReadyAPI is downloaded on my computer) with TeamCity, which is running in a Docker container.
So I started my TeamCity server and agent in Docker containers, like this: https://strangeway.org/2017/12/%D0%BF%D1%80%D0%BE%D0%B1%D1%83%D0%B5%D0%BC-teamcity-%D1%81-docker/
After that I installed the SoapUI Pro plugin (SoapUI Pro Functional Testing) on my TeamCity server.
To start a build in TeamCity using SoapUI Pro Functional Testing, it is mandatory to give the TeamCity server a path to the testrunner shell script.
My question is: how do I give the path of a shell script that is located on my computer to a TeamCity server that is running inside a Docker container?
The hard way:
Map your local directory into your TeamCity agent container.
Use this volume (inside the container) in your test build step, as in the sketch below.
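A sketch of that mapping, assuming the official jetbrains/teamcity-agent image and a hypothetical local ReadyAPI install path:

    # Recreate the agent container with your local ReadyAPI directory mounted
    docker run -d --name teamcity-agent \
      -e SERVER_URL="http://teamcity-server:8111" \
      -v /home/me/SmartBear/ReadyAPI:/opt/readyapi \
      jetbrains/teamcity-agent

    # In the build step, point SoapUI Pro Functional Testing at:
    #   /opt/readyapi/bin/testrunner.sh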
The best way:
Store your tests in a VCS like Git.
Add a VCS root to the TeamCity server.
Attach the VCS root to your build.
Use the files in the build steps.
I have some dockerized F# tests written with the Expecto and FsCheck frameworks, as a standalone .NET Core executable.
How do I configure continuous integration on Team Foundation Server to run them and get the report after each run?
If you mean you already have a Docker image with the proper SDK and environment, then you just need to run the image in TFS with a CI build (Configure continuous integration).
To run the image, you need to install the Docker Integration extension; please refer to Build, push and run Docker images with Visual Studio Team Services for details.
Other articles may help:
Running Selenium Tests in Docker using VSTS Release Management
How to run .NET unit tests in a docker container
Configure Expecto to output NUnit-equivalent XML files with https://www.nuget.org/packages/Expecto.TestResults/
The docs are here: https://github.com/haf/expecto/#testresults-file
Then just run the executable and have the CI server pick up the XML file it outputs, for example:
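A minimal sketch of those two steps in a YAML build definition (this assumes a newer TFS/Azure DevOps version with YAML builds; the executable path and results file name are assumptions):

    steps:
      # Run the Expecto executable; with Expecto.TestResults configured it
      # writes an NUnit-style XML summary
      - script: dotnet bin/Release/netcoreapp2.0/MyTests.dll
        displayName: Run Expecto tests
      # Have the CI server pick up the generated results file
      - task: PublishTestResults@2
        inputs:
          testResultsFormat: NUnit
          testResultsFiles: '**/TestResults.xml'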
I am new to DevOps. Recently I practiced some Docker examples. I have one use case in my current project. Here it is:
It is a Java project containing a WAR project that depends on a JAR project. The build tool is Maven.
We are using the JBoss WildFly server.
The database is MySQL 5.7.
We are using the TestNG framework for the unit test cases.
SonarQube for code analysis, Selenium for testing.
So on any Linux box the infrastructure we need is Java 8, the WildFly server, and MySQL 5.7.
Consider we have two boxes: one is dev and the other is test. Developers work on their local Windows machines.
Sometimes configuration changes are required in the JBoss folder. Suppose we changed one XML file in the WildFly configuration folder in this release. How do we get the same change onto the dev and test boxes as on the local machine? Another case: the dev box needs a different XML configuration than the test box (SSL information in standalone.xml and user properties). How do we handle this?
I would suggest running the whole required stack (WildFly and MySQL) via docker-compose. You can also extend the official WildFly image to provide your custom configuration, so you would build two different images, one tagged 'dev' and one tagged 'test', as sketched below.
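A sketch of what that could look like; the file names and config layout are assumptions to adapt:

    # Dockerfile.dev -- extends the official WildFly image with dev config
    FROM jboss/wildfly
    COPY config/dev/standalone.xml /opt/jboss/wildfly/standalone/configuration/standalone.xml
    COPY target/myapp.war /opt/jboss/wildfly/standalone/deployments/

    # docker-compose.yml -- wires the app image to MySQL 5.7
    version: "3"
    services:
      app:
        image: myapp:dev   # or myapp:test, built from the matching Dockerfile
        ports:
          - "8080:8080"
        depends_on:
          - db
      db:
        image: mysql:5.7
        environment:
          MYSQL_ROOT_PASSWORD: example

This way the dev and test images differ only in which standalone.xml gets copied in, so the SSL and user-properties differences live in version control instead of being hand-edited on each box.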