We have a Spring Batch (3.0.8) application using DB2 for data persistence.
We have built a Docker image of the application and are trying to figure out how to test it using a Jenkins pipeline. We launch the application using CommandLineJobRunner in a format similar to this:
$ docker run -v /home/bluecost/config:/home/bluecost/config -v /home/bluecost/data:/home/bluecost/data -v /home/bluecost/logs:/home/bluecost/logs bluecost com.mycomp.cloud.cost.LoadBMSData CommandLineJobRunner load-bms-job-xml LoadBMSJob ../data/input/CSVMapping-Mar2018.csv
The results of the job are recorded in the DB2 database table. I'm having trouble figuring out how to test this containerized application that doesn't expose a RESTful interface to the outside world.
The goal of the testing is user acceptance. The test scenarios are written using Cucumber (Features > Scenarios > Tests). The testing must check the results of the job run (the DB table) against the expected outcomes.
Question: Do we have to write an integration layer around the jobs so we can launch them using REST and retrieve the results using REST or is there some other way?!
I've been given a small node.js website to test.
Initially, I tried to keep it all JavaScript and even managed to write a few tests and weave those into a CI YAML file that instructs GitLab to deploy the container, build the site, run the tests...
However, my tests are getting more and more complicated and I must resort to my Java skills.
Now, the problem is that I don't know how to form the CI tasks: there is no one container that has all the needed technology (nor is that what containers are for, anyway).
On the other hand, I don't know how to get more than one image into the task.
Somehow, my mind imagines I could deploy it like this: one container has the Node.js stuff and builds and runs the site, exposing an endpoint.
Another container has the Java-Maven-Chrome stuff and builds and runs the tests, which access the site via the exposed endpoint.
Or maybe I have the whole concept wrong?
I would appreciate learning what the professional solution is here. Surely I am not the first Java QA guy trying to test a Node.js website!
I would really appreciate some example of the YAML file, because I can only imagine it as having one "image" field at the beginning - and then that's where my container goes, with no room for another.
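To make it concrete, here is a rough sketch of the kind of .gitlab-ci.yml I imagine; the image names, the "site" alias, the port and the Maven property are all made-up placeholders, and I have no idea whether this is even the right approach:

ui-tests:
  stage: test
  image: registry.example.com/java-maven-chrome:latest     # custom image with the Java/Maven/Chrome tooling (placeholder)
  services:
    - name: registry.example.com/node-site:latest          # prebuilt image that starts the Node.js site (placeholder)
      alias: site                                           # the tests would reach it at http://site:3000
  variables:
    SITE_URL: "http://site:3000"                            # base URL handed to the tests
  script:
    - mvn test -Dsite.url=$SITE_URL

In other words, the job's own image would run the tests, while the site runs alongside it as a service container reachable by hostname.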
I have a test framework running on my local machine (and in Git) that is based on the TestCafe-Cucumber (Node.js) example: https://github.com/rquellh/testcafe-cucumber, and it works really well.
Now, I am trying to use this framework in the deployment (post-deployment) cycle by hosting it as a service or creating a Docker container.
The framework executes through a CLI command (npm test) with a few parameters.
I know the easiest way is to call the Git repo directly as and when required by adding a Jenkins step; however, that is not the solution I am looking for.
So far, I have successfully built the Docker image, and the container now runs on port 8085 of my localhost as http://0.0.0.0:8085 (although I just get a DNS server error, as it's not an app - please correct me if I am wrong here).
The concern here is: how can I make it work like a hosted app, so that once the deployment completes, Jenkins/Octopus could call it as a service through the URL (http://0.0.0.0:8085) along with a few parameters that the framework uses to execute the test cases?
I request all experts to provide a solution if there are any.
I guess there is no production-ready application or service to solve this task.
However, you can use a REST framework to handle network requests and subprocesses to start test sessions. If you like Node.js, you can start with the Express framework and the execa module.
This way you can build a basic service that can start your tests. If you need a more flexible solution, you can take a look at gherkin-testcafe, which provides access to TestCafe's API. You can use it instead of starting TestCafe as a subprocess, since this way you will have more options to manage your test sessions.
We deployed a rails app in Google Cloud Run using their managed platform. The app is working fine and it is able to serve requests.
Now we want to get access to the rails console of the deployed app. Can anyone suggest a way to achieve this?
I'm aware that, currently, Cloud Run supports only HTTP requests. If no other way is possible, I'll have to consider something like the Rails web console.
I think you cannot.
I'm familiar with Cloud Run but I'm not familiar with rails.
I assume you'd need to be able to shell into a container in order to be able to run IRB. Generally, you'd do this by asking the runtime (Docker Engine, Kubernetes, Cloud Run) to connect you to the container so that you could do this.
Cloud Run does not appear to permit this. I think it's a potentially useful feature request for the service. For those containers that contain shells, this would be the equivalent of GCE's gcloud compute ssh.
Importantly, your app may be serviced by multiple, load-balanced containers and so you'd want to be able to console into any of these.
However, you may wish to consider alternative mechanisms for managing your app: monitoring, logging, tracing etc. These mechanisms should provide you with sufficient insight into your app's state. Errant container instances should be terminated.
This follows the concept of "pets vs. cattle": instead of nurturing individual containers (is one failing?), you nurture the containers holistically (is the service comprising many containers failing?).
For completeness, if you think that there's an issue with a container image that you're unable to resolve through other means, you could run the image elsewhere (e.g. locally) where you can use IRB. Since the same container image will behave consistently wherever it's run, you should be able to observe the issue using IRB locally too.
I would like a Jenkins master and slave setup for running specs on standard Rails apps (PostgreSQL, Sidekiq/Redis, RSpec, capybara-webkit, a common Rails stack), using Docker so it can be put on other machines as well. I've got a few good stationary machines collecting dust.
Can anybody share an executable docker jenkins rails stack example?
What prevents that from being done?
Preferably with a master-slave setup too.
Preface:
After days online, following several tutorials with no success, I am about to abandon the project. I have a basic understanding of Docker, docker-machine, docker-compose and volumes, and I have a Docker registry with a few simple apps.
I know next to nothing about Jenkins, but I've used Docker pretty extensively on other CI platforms. So I'll just write about that. The level of difficulty is going to vary a lot based on your app's dependencies and quirks. I'll try and give an outline that's pretty generally useful, and leave handling application quirks up to you.
I don't think the problem you describe should require you to mess about with docker-machine. docker build and docker-compose should be sufficient.
First, you'll need to build an image for your application. If your application has a comprehensive Gemfile, and not too many dependencies relating to infrastructure etc (e.g. files living in particular places that the application doesn't set up for itself), then you'll have a pretty easy time. If not, then setting up those dependencies will get complicated. Here's a guide from the Docker folks for a simple Rails app that will help get you started.
Once the image is built, push it to a repository such as Docker Hub. Log in to Docker Hub and create a repo, then use docker login and docker push <image-name> to make the image accessible to other machines. This will be important if you want to build the image on one machine and test it on others.
It's probably worth spinning off a job to run your app's unit tests inside the image once the image is built and pushed. That'll let you fail early and avoid wasting precious execution time on a buggy revision :)
Next you'll need to satisfy the app's external dependencies, such as Redis and postgres. This is where the Docker Compose file comes in. Use it to specify all the services your app needs, and the environment variables etc that you'll set in order to run the application for testing (e.g. RAILS_ENV).
You might find it useful to provide fakes of some non-essential services such as in-memory caches, or just leave them out entirely. This will reduce the complexity of your setup, and be less demanding on your CI system.
The guide from the link above also has an example compose file, but you'll need to expand on it. The most important thing to note is that the name you give a service (e.g. db in the example from the guide) is used as a hostname in the image. As @tomwj suggested, you can search on Docker Hub for common images like postgres and Redis and find them pretty easily. You'll probably need to configure a new Rails environment with new hostnames and so on in order to get all the service hostnames configured correctly.
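To make that concrete, here is a rough sketch of what such a compose file might look like. The image name, the throwaway credentials and the final test command are placeholders rather than anything taken from the guide:

# Hypothetical docker-compose.yml for running the suite against throwaway services.
version: "3"
services:
  db:
    image: postgres:11                 # the service name "db" becomes the hostname
    environment:
      POSTGRES_PASSWORD: password      # throwaway credentials, CI only
  redis:
    image: redis:5                     # the service name "redis" becomes the hostname
  app:
    image: myorg/myapp:latest          # the image you built and pushed earlier (placeholder name)
    environment:
      RAILS_ENV: test
      DATABASE_URL: postgres://postgres:password@db:5432/myapp_test
      REDIS_URL: redis://redis:6379/0
    depends_on:
      - db
      - redis
    command: bash -c "bundle exec rake db:setup && bundle exec rspec"   # set up the data stores, then run the specs

Something like docker-compose up --exit-code-from app then makes the CI step's exit status follow the outcome of the test run.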
You're starting all your services from scratch here, including your database, so you'll need to migrate and seed it (and any other data stores) on every run. Because you're starting from an empty postgres instance, expect that to take some time. As a shortcut, you could restore a backup from a previous version before migrating. In any case, you'll need to do some work to get your data stores into shape, so that your test results give you useful information.
One of the tricky bits will be getting Capybara to run inside your application Docker image, which won't have any X displays by default. xvfb (X Virtual Frame Buffer) can help with this. I haven't tried it, but building on top of an image like this one may be of some help.
Best of luck with this. If you have the time to persist with it, it will really help you learn about what your application really depends on in order to work. It certainly did for me and my team!
There's quite a lot to unpack in that question; this is a guide on how to get started and where to look for help.
In short, there's nothing preventing it, although it's reasonably complex and bespoke to set up, hence there is no off-the-shelf solution.
Assuming your aim is to have Jenkins build, deploy to Docker, and then test a Rails application in a Dockerised environment:
Provision the stationary machines; I'd suggest using Ansible Galaxy roles.
Install Jenkins
Install Docker
Set up a local Docker registry (a rough compose sketch for this step follows this list of steps).
Set up the Docker environment; the way to bring up multiple containers is to use docker-compose, which will allow you to bring up the DB, Redis, Rails etc. using the public Docker Hub images.
Create a Jenkins pipeline
Build the Rails app Docker image; this will contain the Rails app.
Deploy the application; this updates the application in the Docker swarm from the local Docker registry.
Test: run the tests against the application that is now running.
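As a rough sketch of the registry step mentioned above (the port, the volume name and the compose approach itself are assumptions, not the only way to do it):

# Hypothetical docker-compose.yml for the local registry step.
version: "3"
services:
  registry:
    image: registry:2                        # the official registry image from Docker Hub
    ports:
      - "5000:5000"                          # push/pull via localhost:5000/<image-name>
    volumes:
      - registry-data:/var/lib/registry      # keep pushed images across restarts
volumes:
  registry-data:

You'd then tag and push the Rails image as localhost:5000/<image-name> so the deploy step can pull it.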
I've left out the Jenkins master/slave config because if you're only running on one machine you can increase the number of executors. E.g. the master can execute more jobs at the expense of speed.
My infrastructure is based on AWS: 3 EC2 instances for the Rails app server, 1 RDS instance (MongoDB), and 1 EC2 instance as a Redis server.
Will Travis CI launch similar services (e.g. MongoDB, Redis) to pass the RSpec tests?
If not, what's the logic behind Travis CI?
Would it be more practical to run the tests on my real infrastructure rather than in Travis CI?
Yes! Travis CI fully supports Ruby on Rails and can launch the same services you need for the tests, so I expect you'd be all set. When you go to create your .travis.yml file, you'll be able to set the configuration for your build environment, including setting up services such as MongoDB and/or Redis. Here's a sample of how that looks:
services:
  - mongodb
  - redis
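For a bit more context, a fuller .travis.yml along those lines might look roughly like this; the Ruby version and the script line are placeholders you'd adapt to your app:

# Hypothetical .travis.yml sketch for a Rails app that needs MongoDB and Redis.
language: ruby
rvm:
  - 2.6                        # placeholder Ruby version
services:
  - mongodb
  - redis
script:
  - bundle exec rspec          # placeholder test command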
From a practical standpoint, using a separate environment makes it easier to ensure test integrity, though you do have to do the additional software setup. The main benefit though is that you get a clean slate at each build for all your tests and it's well away from your production code in case there's a problem.