I have a test framework running on my local machine (and in Git) that is based on the TestCafe-Cucumber (Node.js) example: https://github.com/rquellh/testcafe-cucumber, and it works really well.
Now, I am trying to use this framework in the deployment (post-deployment) cycle by hosting it as a service or creating a docker container.
The framework executes through a CLI command (npm test) with a few parameters.
I know the easiest way is to call the Git repo directly as and when required by adding a Jenkins step; however, that is not the solution I am looking for.
So far, I have successfully built the Docker image, and the container now runs on my localhost port 8085 as http://0.0.0.0:8085 (although I only get a DNS/server error there, as it's not a web app - please correct me if I am wrong here).
The concern here is: how can I make it work like a hosted app, so that once the deployment completes, Jenkins/Octopus could call it as a service through the URL (http://0.0.0.0:8085) along with the few parameters that the framework uses to execute the test cases?
I'd appreciate it if the experts here could suggest a solution, if there is one.
I guess there is no production-ready application or service to solve this task.
However, you can use a REST framework to handle network requests and subprocesses to start test sessions. If you like Node.js, you can start with the Express framework and the execa module.
This way you can build a basic service that can start your tests. If you need a more flexible solution, you can take a look at gherkin-testcafe, which provides access to TestCafe's API. You can use it instead of starting TestCafe as a subprocess, since this way you will have more options for managing your test sessions.
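For illustration, here is a minimal sketch of that idea, assuming Express and execa v5 are installed and that the framework is triggered by npm test; the endpoint name, port, and parameter names are made up and would need to match your framework's actual CLI options.

// Minimal sketch: an HTTP endpoint that triggers the CLI test run.
// Assumes Express and execa v5 (CommonJS); paths and parameter names are illustrative.
const express = require('express');
const execa = require('execa');

const app = express();
app.use(express.json());

// e.g. POST http://0.0.0.0:8085/run-tests with body { "browser": "chrome", "tags": "@smoke" }
app.post('/run-tests', async (req, res) => {
  const { browser = 'chrome', tags = '' } = req.body || {};
  try {
    // Runs "npm test" in the framework's folder, passing the parameters through to the CLI.
    const { stdout } = await execa(
      'npm',
      ['test', '--', `--browser=${browser}`, `--tags=${tags}`],
      { cwd: '/path/to/testcafe-cucumber' } // adjust to where the framework lives in the image
    );
    res.status(200).send(stdout);
  } catch (err) {
    // A non-zero exit code (failed tests) ends up here; return the output for reporting.
    res.status(500).send(err.stdout || err.message);
  }
});

app.listen(8085, '0.0.0.0', () => console.log('Test service listening on port 8085'));

Jenkins or Octopus could then trigger a run after deployment with a plain HTTP POST to that URL and read the test output from the response.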
I've been given a small node.js website to test.
Initially, I tried to keep it all JavaScript and even managed to write a few tests and weave those into a CI YAML file that instructs GitLab to deploy the container, build the site, run the tests...
However, my tests are getting more and more complicated and I must resort to my Java skills.
Now, the problem is that I don't know how to form the CI tasks: there is no one container that has all the needed technology (nor is that what containers are for, anyway).
On the other hand, I don't know how to get more than one image into a task.
Somehow, my mind imagines I could deploy as: one container has the node.js stuff and builds and runs the site, exposing an endpoint.
Another container has the Java-Maven-Chrome stuff and builds and runs the tests, which access the site via the exposed endpoint.
Or maybe I have the whole concept wrong?
I would appreciate learning what the professional solution is here. Surely I am not the first Java QA guy trying to test a Node.js website!
I would really appreciate an example of the YAML file, because I can only imagine it as having a single "image" field at the beginning - that's where my container goes, with no room for another.
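For what it's worth, here is a hedged sketch of the two-container shape described above: GitLab CI lets every job declare its own image:, and a job can attach extra containers via services:, so the Node.js site could be packaged into its own image in one job and then run as a side container next to the Java/Maven test job. The image names, the site port, and the -Dsite.baseUrl property are all invented here.

# .gitlab-ci.yml sketch - names, images and ports are illustrative
stages:
  - build
  - test

build-site:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE/site:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE/site:$CI_COMMIT_SHORT_SHA"

java-e2e-tests:
  stage: test
  image: maven:3.9-eclipse-temurin-17      # Java + Maven for the tests
  services:
    # the Node.js site built above runs as a side container, reachable via its alias
    - name: "$CI_REGISTRY_IMAGE/site:$CI_COMMIT_SHORT_SHA"
      alias: site
  script:
    # the tests point at the exposed endpoint rather than localhost
    - mvn -B test -Dsite.baseUrl=http://site:3000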
I'm using React for the frontend with an Nginx load balancer, and Laravel for the backend with MongoDB.
In the old architecture design, the code is uploaded to GitHub in separate frontend and backend repos.
We still don't use Docker and Kubernetes, and I want to implement them in the new architecture design. I'm using a private cloud server, so I'm restricted from deploying on AWS/Azure/GCP/etc...
Please share your architecture plan and implementation for a better approach to microservices!
As per my thinking (a rough command sketch follows this list):
first, make a Dockerfile for the React project and one for the Laravel project
then upload the images to a private Docker registry [Docker Hub]
install Docker and Kubernetes on the VM
deploy containers 1 = React and 2 = Laravel from those images
also deploy containers 3 = Nginx and 4 = MongoDB from the default marketplace images
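A hedged sketch of what the first two steps might look like on the command line; the registry and image names are placeholders:

# build an image from each project's Dockerfile
docker build -t registry.example.com/shop/react-frontend:v1 ./frontend
docker build -t registry.example.com/shop/laravel-backend:v1 ./backend

# push them to the private registry (Docker Hub or self-hosted)
docker login registry.example.com
docker push registry.example.com/shop/react-frontend:v1
docker push registry.example.com/shop/laravel-backend:v1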
Some of my questions:
How do I make the connections between the containers?
How do I get a new image version into a container when a new release is made?
How do I create replicas for a disaster recovery plan?
How do I monitor errors and performance?
How do I build the pipeline? (most important)
How do I set up dev, staging, and production environments?
This is more of a planning question. Most of these tasks can be automated by the developers/DevOps, except for a few administrative tasks like monitoring and environment creation.
Still, this can be a shared responsibility, or split across a team if one is available to manage the product/services.
You can use GitLab, which can attach directly to a Kubernetes provider and can reduce the number of separate build steps.
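As a rough, hedged illustration of that last point (assuming the cluster is already connected to GitLab and the image was built and pushed earlier in the pipeline; all names are placeholders), a deploy job can be little more than:

deploy-staging:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    # roll the Deployment to the image built earlier in the pipeline
    - kubectl set image deployment/react-frontend react-frontend=registry.example.com/shop/react-frontend:$CI_COMMIT_SHORT_SHA -n staging
    - kubectl rollout status deployment/react-frontend -n staging
  environment:
    name: staging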
Debugging Docker containers is very easy on my local PC. Say I have this scenario:
1) A web application project
2) A Docker-Compose project
I set the Docker-Compose project as the startup project and then debug it. Any breakpoints I add to my web application project work, i.e. the code stops there.
I have now enabled Kubernetes in Docker for Desktop, and I have created a very simple app and deployed it. However, it seems to be very complex to set up a debugging environment - for example as described here: https://medium.com/@pavel.agarkov/debugging-asp-net-core-app-running-in-kubernetes-minikube-from-visual-studio-2017-on-windows-6671ddc23d93 - which makes me think that I am doing something wrong. Is there a simple way to debug Kubernetes when it is installed locally, like there is when debugging local Docker containers?
I was hoping that I would be able to just launch Visual Studio and it would start debugging Kubernetes containers - like with Docker. Is this possible?
Kubernetes is a tool designed to run multiple copies of a packaged application Somewhere Else. It is not designed as a live-development tool.
Imagine that you built a desktop application, packaged it up somehow, and sent it off to me. I'm running it on my desktop (Somewhere Else) and have a problem with it. I can report that problem to you, but you're not going to be able to attach your IDE to my desktop system. Instead, you need to reproduce my issue on your own development system, write a test case for it, and fix it; once you've done that you can release an update that I can run again.
Kubernetes is much more focused on this "run released software" model than a live-development environment. You can easily roll a Deployment object back to the previous version of the software that has been released, for example, assuming you have a scheme to tag distinct releases. You need to do a lot of hacking to try to get a local development tree to run inside a container, though.
The other important corollary to this is that, when you "attach your IDE to a Docker container", you are not running the code in your image! A typical setup for this starts a Docker container from an image but then overwrites all of the application code (via a bind mount) with whatever content you have on your local system. Aside from perhaps encapsulating some hard-to-install dependencies, this approach on the one hand keeps the inconveniences of using Docker at all (you must have root-equivalent permissions, you can't locally run support tools, ...) and on the other hand hides the code that is actually in the image (so you'll need to repeat whatever tests you ran when you want to deploy that image to production).
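For illustration only, the kind of setup being described usually looks something like this hypothetical docker-compose override (the image name, paths, and command are invented):

# hypothetical docker-compose.override.yml showing the bind-mount pattern described above
services:
  web:
    image: myapp:dev            # the built image...
    volumes:
      - ./src:/app              # ...whose application code is hidden by the local source tree
    command: dotnet watch run   # a live-reload dev command instead of the packaged entrypoint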
I'd recommend using a local development environment for local development, and not trying to simulate it using other tools. Kubernetes isn't well-suited to live development at all, and I wouldn't try to incorporate it into your day-to-day workflow other than for pre-deployment testing once other tests have passed.
Telepresence is a useful tool for debugging pods in Kubernetes. Telepresence works by running your code locally, as a normal local process, and then forwarding requests to/from the Kubernetes cluster. This means development is fast: you only have to change your code and restart your process. Many web frameworks also do automatic code reloading, in which case you won't even need to restart.
https://www.telepresence.io/tutorials/kubernetes
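A hedged example of that workflow (the service name and port are placeholders, and the exact commands depend on the Telepresence version you install):

telepresence connect                           # bridge your machine into the cluster network
telepresence intercept my-service --port 8080  # route the service's cluster traffic to localhost:8080
# now run your app locally on port 8080 under your IDE/debugger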
You're right, it is more complicated than it needs to be. I wrote an open source framework called Robusta to solve this.
I do some tricks with code-injection to inject debuggers into already running pods. This lets you bypass the typically complex work of setting up a debug-friendly environment in advance.
You can debug any python pod in the cluster like this:
robusta playbooks trigger python_debugger name=myapp namespace=default
This will set up the debugger. All that remains is to run kubectl port-forward into the cluster and connect Visual Studio Code.
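For example (the pod name is a placeholder, and the port shown is debugpy's usual default - check the Robusta docs for the one it actually uses):

kubectl port-forward pod/myapp-6d4cf56db6-abcde 5678:5678 -n default
# then attach VS Code's Python "remote attach" configuration to localhost:5678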
I don't know what language you're using, but if it isn't Python it should still be easy to set up. (Feel free to comment and I'll help you.)
We deployed a rails app in Google Cloud Run using their managed platform. The app is working fine and it is able to serve requests.
Now we want to get access to the rails console of the deployed app. Can anyone suggest a way to achieve this?
I'm aware that currently Cloud Run supports only HTTP requests. If no other way is possible, I'll have to consider something like the Rails web console.
I think you cannot.
I'm familiar with Cloud Run but I'm not familiar with rails.
I assume you'd need to be able to shell into a container in order to be able to run IRB. Generally, you'd do this by asking the runtime (Docker Engine, Kubernetes, Cloud Run) to connect you to the container so that you could do this.
Cloud Run does not appear to permit this. I think it's a potentially useful feature request for the service. For those containers that contain shells, this would be the equivalent of GCE's gcloud compute ssh.
Importantly, your app may be serviced by multiple, load-balanced containers and so you'd want to be able to console into any of these.
However, you may wish to consider alternatives mechanisms for managing your app: monitoring, logging, trace etc. These mechanisms should provide you with sufficient insight into your app's state. Errant container instances should be terminated.
This follows the concept of "pets vs. cattle" whereby, instead of nurturing individual containers (is one failing?), you nurture the containers holistically (is the service comprising many containers failing?)
For completeness, if you think that there's an issue with a container image that you're unable to resolve through other means, you could run the image elsewhere (e.g. locally) where you can use IRB. Since the same container image will behave consistently wherever it's run, you should be able to observe the issue using IRB locally too.
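Roughly, and assuming the image contains a shell and the usual Rails tooling (the image name below is a placeholder):

# pull and run the same image Cloud Run uses, but with a shell as the entrypoint
docker run -it --entrypoint /bin/bash gcr.io/my-project/my-rails-app:latest
# inside the container:
bundle exec rails console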
I am learning microservice architecture, but now there is some confusion.
situation
there are 4 projects written in golang
orderService
userService
tools
web (forwards HTTP requests)
orderService, userService, and web communicate via gRPC
all API requests go through web, which forwards them to orderService or userService
orderService and userService have their own independent database
they are all in docker containers
confusion
when I want to test a request, I have to do the following steps:
cd orderService
govendor update +vendor
go build
cd userService
govendor update +vendor
go build
cd web
govendor update +vendor
go build
docker-compose build
docker-compose up
When I change some code, I have to do these steps again.
I think this is unscientific and abnormal. I want to know whether all of these steps are necessary to integration-test four microservices in Docker.
If you change Go code you have to recompile; that's not abnormal. Go code compiles quickly so that's usually not a big deal.
I'm not sure why you are running govendor repeatedly. If you are working on a dependency it is often easier to unvendor it temporarily.
To speed things up, don't use Docker during development; at least not for the Go code. Since Go doesn't have any dependencies besides itself, there's no advantage to doing this. You can still run backing services (e.g. a database) in Docker, but you don't have to rebuild and restart them all the time.
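A hedged sketch of that split (the database image, ports, and service paths are invented - use whatever your services actually depend on): keep only the backing services in Compose and run the Go code directly on the host.

# docker-compose.dev.yml - only the backing services stay in containers
services:
  order-db:
    image: postgres:15
    ports:
      - "5433:5432"
  user-db:
    image: postgres:15
    ports:
      - "5434:5432"

# then run each Go service locally against the mapped ports, e.g.:
#   go run ./orderService
#   go run ./userService
#   go run ./web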
You did not explain what kind of test you want to do. The way you are doing it, it seems you are running instances with a database, so I treat it more like an integration test.
I assume you have written unit tests before doing this.
If you wish to test the services independently, I suggest making use of http://onsi.github.io/ginkgo/
It will help you test most of the APIs and business scenarios, and test the flows of your orderService and userService individually.
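If it helps, installing and running it looks roughly like this (assuming Ginkgo v2 and Go modules; adjust to the version you pick):

go install github.com/onsi/ginkgo/v2/ginkgo@latest   # install the CLI
ginkgo bootstrap                                     # generate a test suite file in the current package
ginkgo -r                                            # run all suites recursively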
First, I don't think repeating govendor is necessary unless a dependency has changed, which should not happen often.
Second, Golang does require you to build/compile, so if you make changes you will have to re-build the service. However, most of the time you should be changing only one service at a time to make your life easier; it will make testing and debugging easier during development.
Lastly, there are a few ways to speed up development with Docker. I would suggest you keep using Docker, as it will make your DevOps life easier, although you don't have to. For simplicity, you can move the build step into the Docker container: copy the directory, including the vendored dependencies, into the container and build inside it (or just go run inside the container). This is OK for development. If you would like to have one set of container definitions for both development and deployment, you can write a script that compiles each service, copies the binaries into one directory, and has Docker copy the binaries from that directory; a few lines of Python script should be enough. This has the added benefit of working with docker-compose, where you can stand up the whole set of services with a single command. Then you can also write your integration-test launcher as another Docker container that runs your tests once and exits.
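One common way to realise the "build inside the container" idea is a multi-stage Dockerfile, sketched below; the Go version, paths, and service name are placeholders:

# build stage: compile inside the container so the host only needs Docker
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /out/orderservice ./orderService

# runtime stage: copy only the binary into a small image
FROM alpine:3.19
COPY --from=build /out/orderservice /usr/local/bin/orderservice
ENTRYPOINT ["/usr/local/bin/orderservice"]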