Context
I am trying to configure Cypress as an E2E test runner in my current employer's codebase, and plan to use it for snapshot tests at a later point (TBD based on our experience with Cypress). We currently use docker-compose to manage our Frontend (FE), Backend (BE), and Database (DB) services (images).
FE tech-stack
Next.js and React, Recoil for state management, and yarn as the package manager.
Problem
I am having a difficult time configuring Cypress; here is a list of things hindering the effort:
Am I supposed to run my E2E tests in their own Docker service/image, separate from the FE image?
I am having a tough time getting Cypress to run in my Docker container on my M1 Mac due to CPU architecture issues: the Docker image is built for linux/amd64, which falls over when I try to run Cypress on my Mac but works fine when I run it in the cloud on a Debian box. This is a known issue with Cypress.
There is a workaround: Cypress works when installed globally on the local machine itself, outside the container. So instead of invoking the tests inside the container (the ideal), I have to run them locally from the FE directory root (see the sketch after this list).
If I need to run snapshot tests with Cypress, do I need to configure those separately from my E2E tests and place that suite within my FE image? I ask because Cypress will need the FE components mounted in order to test them.
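For reference, the workaround above currently looks roughly like this; the port assumes a Next.js server on 3000 and the directory names are from our setup, so treat them as placeholders:

    # start the FE/BE/DB stack as usual
    docker-compose up -d

    # run Cypress on the host (M1) from the FE root, pointing it at the containerized FE
    cd frontend
    cypress run --config baseUrl=http://localhost:3000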
Goal
The goal here is to configure Cypress in a way that works INSIDE the Docker container, both in the cloud (CI/CD and Production/Staging) and on local M1 Mac machines. Furthermore (a good-to-have, not a necessity), have Cypress live in a place where it can be used for both snapshot and E2E tests, within docker-compose.
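For what it's worth, the shape I'm aiming for is something like the compose service below. This is only a sketch: the image tag, service name, and port are placeholders, and whether the Cypress image runs natively on arm64 is exactly the open question above.

    # docker-compose.override.yml (sketch)
    services:
      e2e:
        image: cypress/included:12.17.4      # placeholder tag; bundles Cypress + browsers
        depends_on:
          - fe
        environment:
          - CYPRESS_baseUrl=http://fe:3000   # reach the FE service over the compose network
        working_dir: /e2e
        volumes:
          - ./frontend:/e2e                  # specs and cypress.config.* come from the FE repo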
Any help, advice or links are appreciated, I'm a bit out of my depth here. Thanks!
Related
What is the "best practices" workflow for developing and testing an image (locally, I guess) that is going to be deployed into a K8s cluster whose hardware differs from my laptop's?
To give some context: I'm running deep learning code that needs GPUs, and my laptop doesn't have any, so I launch a "training job" into the K8s cluster (K8s is probably not meant to be used this way, but it is the way we use it where I work). I'm not sure how I should be developing and testing my Docker images.
At the moment I create a container that has the desired GPU and manually run a bunch of commands until the code works. Then, once the code is running, I copy the commands from my shell history into a local Dockerfile on my computer, build it, and push it to Docker Hub; the image is pulled the next time I launch a training job into the cluster, which creates a container from it and trains the model.
The problem with this approach is that if there's a bug in the image, I only find out that my Dockerfile is wrong once it has been deployed to a container, and then I have to start the process all over again to fix it. Also, digging for bugs in the output of kubectl logs is very cumbersome.
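To make the pain concrete, this is roughly the loop I'd like to be able to run locally before anything reaches the cluster (image and module names are placeholders, and the smoke test is deliberately CPU-only since my laptop has no GPU):

    # build locally; fails fast on Dockerfile mistakes
    docker build -t myorg/trainer:dev .

    # cheap smoke test that needs no GPU: does the training code at least import?
    docker run --rm myorg/trainer:dev python -c "import train"

    # push only once the smoke test passes; the cluster pulls this tag for the next job
    docker push myorg/trainer:dev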
Is there a better way to do this?
I was thinking of installing Docker inside the container and using IntelliJ (or any other IDE) to attach to the container via SSH so I could develop and test the image remotely, but I've read in many places that this is not a good idea.
What would you recommend then instead?
Many thanks!!
So I have a Dockerfile which compiles and installs gtsam with its Cython wrapper. The Docker setup works fine on the local machine, but it runs out of memory when building on Docker Hub's automated build.
I believe I can reduce the memory usage by switching to make -j1, but I'd still like faster builds when building locally.
I tried reading /sys/fs/cgroup/memory/memory.limit_in_bytes, which reports 9223372036854771712, way more than the 2 GB limit on Docker Hub's servers.
Is there a way to detect whether the build is taking place through the automated build and adjust the -j flag accordingly?
Docker Hub sets environment variables that are available at build time.
For example, you can test if the build process happens on Docker Hub by checking if SOURCE_BRANCH is set.
The full list of env variables can be found here.
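As a sketch of how that check could be wired into the build (assuming the usual pattern of a custom hooks/build script in the repo to pass the variable through as a build arg):

    #!/bin/bash
    # hooks/build -- custom Docker Hub build hook
    docker build --build-arg SOURCE_BRANCH=$SOURCE_BRANCH -t $IMAGE_NAME .

    # Dockerfile (relevant part)
    ARG SOURCE_BRANCH=""
    # SOURCE_BRANCH is only set on Docker Hub, so limit parallelism there and use all cores locally
    RUN if [ -n "$SOURCE_BRANCH" ]; then JOBS=1; else JOBS=$(nproc); fi \
        && make -j$JOBS && make install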
I'd like to pull down a standard Docker image and then issue it a command that will read and execute a .jmx test file from the current folder (or a specified path) and drop the results into the same folder (or another specified path/filename). Bonus points if the stdout from JMeter's console app comes through from the docker run command.
I've been looking into this for quite some time, and the solutions I've found are way more complex than I'd like. Some require that I create my own Dockerfile and build my own image. Others require that I first set up a Docker volume on my machine and then use it as part of the command. Still others rely on fairly lengthy bash shell scripts. I'm running on Windows and would prefer something that just works with the standard Docker CLI in any Windows prompt (it should work from cmd, PowerShell, or bash, not just one of them).
My end goal is to test some APIs using JMeter tests that already exist. The APIs are running in another locally running container that I can reach at a known path and port. I want to be able to run these tests from any machine without first having to install Java and JMeter.
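Ideally something along these lines would be enough (a sketch assuming one of the community JMeter images such as justb4/jmeter, whose entrypoint wraps the jmeter binary; the only shell-specific part is the ${PWD} vs %cd% spelling of the current folder):

    # from PowerShell or bash; use %cd% instead of ${PWD} in cmd
    docker run --rm -v "${PWD}:/tests" justb4/jmeter -n -t /tests/my-api-test.jmx -l /tests/results.jtl -j /tests/jmeter.log

The one wrinkle is that, since the APIs live in another container, the test plan would need to target host.docker.internal (or a shared Docker network) rather than localhost.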
I need to create a Docker image that describes only which tools need to be installed.
Then I want to start a container from that image, run the tests (which are based on Robot Framework), and get the logs on my local machine. Is this possible? If so, how?
If it is not possible, how do I create the image so that I don't need to rebuild it after every change to the code?
For example, I have a test suite with 10 tests; if I build the image, I can't just add a test to the suite, I need to rebuild the image. How do I make Docker "watch" for any changes?
Long story short: I need to run tests (the number of tests will only increase) from an isolated environment (a Docker container). How can I do it?
This is my first experience with Docker.
I think the right solution is to build the image without the tests and then use a mount point to make the tests available inside the container. That way the image won't actually contain any of the tests and won't need to be rebuilt.
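A minimal sketch of that, assuming the image already has Robot Framework installed and the tests live in ./tests on the host (image name and paths are placeholders):

    # mount the tests and an output directory into the container at run time
    docker run --rm \
        -v "$(pwd)/tests:/opt/tests" \
        -v "$(pwd)/results:/opt/results" \
        my-robot-image \
        robot --outputdir /opt/results /opt/tests

Adding a new test is then just a change on the host: the next docker run picks it up without rebuilding the image, and log.html/report.html land in ./results.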
I have tests that I run locally using a docker-compose environment.
I would like to implement these tests as part of our CI using Jenkins with Kubernetes on Google Cloud (following this setup).
I have been unsuccessful because docker-in-docker does not work.
It seems that right now there is no solution for this use-case. I have found other questions related to this issue, here and here.
I am looking for solutions that will let me run docker-compose. I have found solutions for running docker, but not for running docker-compose.
I am hoping someone else has had this use-case and found a solution.
Edit: Let me clarify my use-case:
1. When I detect a valid trigger (i.e. a push to the repo) I need to start a new job.
2. I need to set up an environment with multiple containers/instances (docker-compose).
3. The instances in this environment need access to code from git (mount volumes / create new images with the data).
4. I need to run tests in this environment.
5. I then need to retrieve results from these instances (JUnit test results for Jenkins to parse).
The problems I am having are with 2 and 3.
For 2, there is a problem running this in parallel (more than one job) since the Docker context is shared (docker-in-docker issues). If this is running on more than one node, I get clashes because of shared resources (ports, for example). My workaround is to limit it to one running instance and queue the rest (not ideal for CI).
For 3, there is a problem mounting volumes since the Docker context is shared (docker-in-docker issues). I cannot mount the code that I check out in the job because it is not present on the host that is responsible for running the Docker instances I trigger. My workaround is to build a new image from my template and copy the code into the new image, then use that for the test (this works, but it means I need docker cp tricks to get data back out, which is also not ideal).
I think the better way is to use pure Kubernetes resources and run the tests directly on Kubernetes, not via docker-compose.
You can convert your docker-compose files into Kubernetes resources using the kompose utility.
You will probably need to adapt the conversion result somewhat, or you may prefer to convert your docker-compose objects into Kubernetes objects manually. Possibly you can just use Jobs with multiple containers instead of a combination of Deployments and Services.
In any case, I definitely recommend using Kubernetes abstractions instead of running tools like docker-compose inside Kubernetes.
Moreover, you will still be able to run the tests locally, using Minikube to spawn a small all-in-one cluster right on your PC.
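For completeness, the kompose route looks roughly like this (a sketch; as noted, the generated manifests usually need some hand-tuning before you apply them):

    # convert the compose file into Kubernetes manifests
    kompose convert -f docker-compose.yml -o k8s/

    # review/adjust the generated YAML, then apply it (the same works against Minikube)
    kubectl apply -f k8s/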