E2E testing with Docker Compose + React + Spring Boot

I have a project with a Docker Compose file that includes a React frontend and a Spring Boot backend with a PostgreSQL database.
I need something that starts the app and then runs the E2E tests, but I haven't found a way to do this yet. I haven't written the E2E tests, so they can be in any language or framework.
I also want to be able to do this in a CI environment, so it has to happen automatically.
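One common pattern (a sketch, assuming hypothetical service names and a Cypress-based runner) is to add a test-runner service to the compose file that depends on the app services and exits with the test result, so CI can run everything with one command:

```yaml
# docker-compose.e2e.yml -- service and path names are assumptions
services:
  e2e:
    image: cypress/included:13.6.0
    depends_on:
      - frontend
      - backend
    environment:
      - CYPRESS_baseUrl=http://frontend:3000
    volumes:
      - ./e2e:/e2e
    working_dir: /e2e
```

CI can then run `docker compose -f docker-compose.yml -f docker-compose.e2e.yml up --exit-code-from e2e`, which propagates the test runner's exit code to the pipeline.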

Related

How to configure Cypress in Docker-Compose project

Context
I am trying to configure Cypress as an E2E test runner in my current employer's codebase, and will utilize this for Snapshot tests at a later point (TBD based on our experience w/ Cypress). We currently utilize docker-compose to manage our Frontend (FE), Backend (BE) and Database (DB) services (images).
FE tech-stack
NextJS and React, Recoil, and the yarn package manager
Problem
I am having a difficult time configuring Cypress, here is a list of things that are hindering this effort:
Am I supposed to run my E2E tests in its own Docker Service/Image separate from the FE Image?
I am having a tough time getting Cypress to run in my Docker container on my M1 Mac due to CPU architecture issues (the Docker image uses the Linux x64 architecture, which fails when I try to run Cypress on my Mac, but works fine when I run it in the cloud on a Debian box). This is a known issue with Cypress.
There is a workaround, as Cypress works when installed globally on the local machine itself, outside the container. So instead of calling the tests inside the container (which would be ideal), I'm having to call them locally from the FE directory root.
If I need to run snapshot tests with Cypress, do I need to configure that separately from my E2E tests and place that suite of tests within my FE image, since the FE components need to be mounted for Cypress to test them?
Goal
The goal here is to configure Cypress in a way that works INSIDE the Docker container, both in the cloud (CI/CD and Production/Staging) and on local M1 Mac machines. Furthermore (this is a good-to-have, not necessary), have Cypress live in a place where it can be used for both Snapshot and E2E tests, within Docker Compose.
Any help, advice or links are appreciated, I'm a bit out of my depth here. Thanks!
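Regarding the M1 architecture issue: recent Docker versions let you force a service to run under amd64 emulation with the `platform` key in the compose file. It is slower under emulation, but it is one possible workaround (a sketch; the image tag is an assumption):

```yaml
services:
  cypress:
    image: cypress/included:13.6.0
    platform: linux/amd64
```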

Running whole app thru kubernetes while one part is externally deployed

My question concerns a Kubernetes setup for development purposes. The app consists of 4 services (React + Express backend, nginx for routing, Neo4j as a DB). The Neo4j DB is deployed in Google Cloud (not by me, and it is not maintained by me either), but all the other services are currently running locally as I'm developing the app. What I want to achieve is to start up and run all those services at once, together, with a simple command, as is possible in the Docker Compose world (through docker-compose up).

Gitlab CI with docker+machine - Using multiple containers to test app

I'm using Gitlab CI, configured with a docker+machine executor, to build and test my app on spot instances.
My main app requires a few microservices to be available in production as well as in the test step. All of these microservices are built and tested on the same GitLab CI server (each in its own pipeline). The output of each microservice is a Docker image that is pushed to the GitLab Docker Registry.
The test step I'm trying to build:
1. Provision a spot instance (if there's no idle one), installed with the microservice docker
2. Test step
2.1. Provision a spot instance (if there's no idle one), installed with the app docker
2.2. Testing script
2.3. Stop the app container, release the spot instance
3. Stop the microservice container, release the spot instance
I've got 2.1, 2.2 and 2.3 to work by following the instructions here, but I'm not sure how to achieve the rest. I can run docker-machine explicitly in the YAML, but I'd like to use GitLab's docker+machine executor as it's configured with the credentials, limitations, off-peak settings, etc.
Is this possible with GitLab's executor? How?
What's the "correct" way to go about doing something like this? I'm sure I'm not the first one testing with microservices but I couldn't find any info of how to do so.
You are probably looking for the CI Services functionality. They have a couple of examples of how to use a service (MySQL, PostgreSQL, Redis); if you are using another Docker image, the service will be reachable under hostnames derived from the image name (e.g., tutum/wordpress will have the DNS hostnames tutum-wordpress and tutum__wordpress; for more info, refer to the details about hostnames).
There are also details about running PostgreSQL in the shell executor if you were so inclined, and there is a presentation on Testing things with GitLab CI and Docker.
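As a sketch (the image names, alias and variables below are illustrative, not from the original setup), a test job that pulls a previously built microservice image as a CI Service might look like:

```yaml
test:
  image: node:18
  services:
    - postgres:15
    - name: registry.gitlab.com/mygroup/my-microservice:latest
      alias: my-microservice
  variables:
    POSTGRES_PASSWORD: secret
  script:
    - npm ci
    - npm test
```

The job's containers can then reach the microservice at the hostname `my-microservice`, with GitLab's executor handling provisioning and teardown.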

How to run micro services using docker

I'm a newbie to Spring Boot. I need to create microservices and run them with Docker. I have attached my project structure here. The problem is that I have to bring the microservices up manually every time. For example, I have 4 microservices and I just start these services by hand. But all microservices should start by themselves when deployed into Docker. How can I achieve this?
I'm also using a Cassandra database.
I don't know if it is the best solution, but it is the one I used:
First, tell the Spring Boot Maven plugin to create an executable jar:
<plugin>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-maven-plugin</artifactId>
    <configuration>
        <executable>true</executable>
    </configuration>
</plugin>
After that, you can add your application as a service in init.d and make it start when the container starts.
You can find a better explanation here: http://www.baeldung.com/spring-boot-app-as-a-service
Please have a look at the numerous tutorials that exist for Spring Boot and dockerizing such an application.
Here is one which explains every step that is necessary:
Build Application as Jar File
Create your docker image with Dockerfile
In this Dockerfile you create an environment as if you had a freshly set up Linux server, and you define what software you need to run your application, like Java. Have a look at existing images like anapsix/alpine-java.
Now think of what you need to do to start your app in this environment: java [some options] -jar location-of-your-jar.jar
Make sure your app is reachable by exposing the Docker port, so that you can see that it runs.
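The steps above can be sketched in a minimal Dockerfile (the jar name and port are assumptions, not from the original project):

```dockerfile
# environment: a small Linux image with Java installed
FROM anapsix/alpine-java
# package: copy the jar built by Maven into the image
COPY target/my-service.jar /app/my-service.jar
# make the app reachable from outside the container
EXPOSE 8080
# start the app the same way you would on a server
ENTRYPOINT ["java", "-jar", "/app/my-service.jar"]
```

Build and run it with `docker build -t my-service .` followed by `docker run -p 8080:8080 my-service`.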
As I said, if these instructions are not helpful for you, then please read tutorials about Docker and dockerizing Spring Boot applications.
You should use docker-compose. The best way to manage releases/versions and builds is to host your own repository for dedicated Docker images (Nexus is an example).
In docker-compose you can describe all your infrastructure: create services and networks, and connect services so they can communicate with each other. So I think you should go this way to create a nice development and production build flow for your microservice application.
For Cassandra and other well-known services you can find preferred images on https://hub.docker.com.
Each microservice should have its own Dockerfile; then, in the main directory of your solution, you can create a docker-compose.yml file with the service definitions.
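A minimal docker-compose.yml for such a setup might look like this (service names, ports and build paths are assumptions; each build path points to a directory containing a Dockerfile):

```yaml
version: "3"
services:
  cassandra:
    image: cassandra:3.11
    ports:
      - "9042:9042"
  service-one:
    build: ./service-one
    depends_on:
      - cassandra
    ports:
      - "8081:8080"
  service-two:
    build: ./service-two
    depends_on:
      - cassandra
    ports:
      - "8082:8080"
```

With this in place, a single `docker-compose up -d` starts the database and all microservices together instead of starting each one manually.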
You can build your microservices in a Docker container too. Search for "Java application build flow with Docker" for more details.
All about docker compose you can find here: https://docs.docker.com/compose/
All about docker swarm you can find here: https://docs.docker.com/engine/swarm/

How to integrate Capistrano with Docker for deployment?

I am not sure my question is relevant as I may try to mix tools (Capistrano and Docker) that should not be mixed.
I have recently dockerized an application that is deployed with Capistrano. Docker compose is used both for development and staging environments.
This is what my project looks like (the application files are not shown):
Capfile
docker-compose.yml
docker-compose.staging.yml
config/
deploy.rb
deploy
staging.rb
The Docker Compose files create all the necessary containers (Nginx, PHP, MongoDB, Elasticsearch, etc.) to run the app in the development or staging environment (hence some specific parameters defined in docker-compose.staging.yml).
The app is deployed to the staging environment with this command:
cap staging deploy
The folder architecture on the server is the one of Capistrano:
current
releases
20160912150720
20160912151003
20160912153905
shared
The following command has been run in the current directory of the staging server to instantiate all the necessary containers to run the app:
docker-compose -f docker-compose.yml -f docker-compose.staging.yml up -d
So far so good. Things get more complicated on the next deploy: the current symlink will point to a new directory of the releases directory:
If deploy.rb defines commands that need to be executed inside containers (like docker-compose exec php composer install for PHP), Docker tells that the containers don't exist yet (because the existing ones were created on the previous release folder).
If a docker-compose up -d command is executed in the Capistrano deployment process, I get some errors because of port conflicts (the previous containers still exist).
Do you have an idea on how to solve this issue? Should I move away from Capistrano and do something different?
The idea would be to keep the (near) zero-downtime deployment that Capistrano offers with the flexibility of Docker containers (providing several PHP versions for various apps on the same server for instance).
As far as I understood, you are using Capistrano on the host to redeploy the whole application stack, meaning the containers. So you are using Capistrano to orchestrate building, container creation and thus deployment.
While you do so, when running cap deploy you basically:
build the app (based on the current base you pulled on the host) - probably even including gulp/grunt/build tasks
then you "package" it into your image using volume mounts
during that you start / replace the containers
You do so to get a 'nearly' zero-downtime deployment.
If you really care about the downtime and about formalising your deployment process that much, you should do it right by using a proper pipeline implementation for
packaging / ci
deployment / distribution
I do not think Capistrano can/should be one of the tools you use in this strategy. Capistrano is meant for deploying an application directly onto a server using SSH and git as transport. Using cap to build whole images on the target server and then start those as containers is really over the top, IMHO.
packaging / building
Either use a CI/CD server like Jenkins/Bamboo/GoCD to build a release image for your application. Assuming only the app is customised in terms of 'release' - let's say you have db and app as containers/services - app will include your source code and will regularly change during releases.
Thus it's a CI/CD process to build a new app image (release) offsite on your CI server: pulling the source code of your application and packaging it into your image using COPY, and then any RUN statements to compile your assets (npm / gulp / grunt, whatever). That all happens not on the production server, but on the CI/CD agent. Using multistage builds for slim images is encouraged.
Then you push this release image, let's call it yourregistry.com/yourapp, into your private registry as a new 'version' for deployment.
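The multistage build mentioned above could be sketched like this (base images and paths are assumptions; adjust to your actual stack):

```dockerfile
# build stage: install dependencies and compile assets on the CI agent
FROM node:18 AS build
WORKDIR /src
COPY package.json yarn.lock ./
RUN yarn install
COPY . .
RUN yarn build

# runtime stage: only the built artifacts end up in the slim release image
FROM nginx:alpine
COPY --from=build /src/dist /usr/share/nginx/html
```

Everything installed in the build stage (node_modules, compilers) is discarded; only the final `COPY --from=build` output ships in the release image.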
deployment
with downtime (easy)
To deploy into your production or staging server WITH downtime, you would simply do docker-compose pull && docker-compose up -d - this pulls the newer image and then starts it in your stack - your app is upgraded. Using tagged images in the release stage would require changing the docker-compose.yml.
The server should of course be able to pull from your private repository.
Without downtime (more effort)
To achieve a zero-downtime deployment, you should use the blue-green deployment concept. You add a proxy to your setup and no longer expose the public port of the app, but rather use the proxy's public port. Your current live system might be running on a random port 21231, with the proxy forwarding from 443 to 21231.
We are using random ports to avoid the conflict during deploying the "second" system, covering one of the issue you mentioned.
When redeploying, you start a "new" container based on the new app image in addition to the old one; it gets a new random port 12312. If you like, run your integration tests against 12312 directly (do not use the proxy). If you are done and happy, reconfigure the proxy to forward to 12312, then remove the old container (21231).
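The proxy switch itself can be as simple as rewriting an nginx upstream and reloading (ports and names follow the example above; this is a sketch, certificate directives and the full server config are omitted):

```nginx
# /etc/nginx/conf.d/app.conf -- "live" points at the current release
upstream live {
    server 127.0.0.1:21231;  # switch to 12312 once the new container passes tests
}
server {
    listen 443 ssl;
    location / {
        proxy_pass http://live;
    }
}
```

After editing the upstream, `nginx -s reload` applies the change without dropping in-flight connections, which is what makes the cutover effectively zero-downtime.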
If you want to automate the proxy reconfiguration, which in detail is out of scope for this question, you can use service discovery and a registrator, which makes random ports much more practical and makes it easy to reconfigure your proxy (be it nginx/haproxy) while it is running. Tools would be, for example:
consul
consul watch + consul-template or tiller on the proxy to update the proxy-config
Registrator for centralized registration, or consul agent client mode with a service-configuration.json (depends on your choice)
I don't think Capistrano is the right tool for the job. This was recently discussed in a PR for SSHKit, which underlies Capistrano.
https://github.com/capistrano/sshkit/pull/368
@EugenMayer does a better job of explaining a "normal" way of using Docker.
