How to get an XML file via HTTP when Jenkins runs Codeception - jenkins

Our developers develop in Virtualbox VMs which have a web server configured to point at the codebase they are working on. When they run Codeception tests, the code calls the local web server requesting a data file from within the codebase and proceeds with the test. Works well.
The problem I'm running into is running the tests on our Jenkins server (using a Declarative Pipeline, if that matters). Jenkins checks the code out into a temporary directory (/var/lib/jenkins/workspace/). Obviously I can't point the web server on the Jenkins machine at that temporary directory (especially when different branches are built in different temporary directories), so the tests fail when trying to fetch the data file.
The reason we fetch the data file via HTTP is that the code we are testing does precisely that in production: it fetches a file from another web server not under our control, and we want to mimic that behaviour in our tests.
How should this be handled? Copy the data files to a common directory outside of Jenkins that the web server points to? If so, how do I handle different developers using the same data file with different values? Or do I just tell them "Don't do that?"
Any suggestions?
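One approach that keeps the HTTP fetch intact, sketched below under assumptions (the port, the DATA_BASE_URL variable and the codecept invocation are placeholders, not your actual setup): have the pipeline start a throwaway PHP built-in web server rooted at the Jenkins workspace for the duration of the test stage, so the data file is served over HTTP from whichever temporary directory the branch was checked out into, and no shared web server configuration is needed.

```groovy
// Minimal Declarative Pipeline sketch, not a drop-in solution: the port,
// the DATA_BASE_URL variable and the codecept command are assumptions.
pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                sh '''
                    # Serve the checked-out workspace over HTTP in the background
                    php -S 127.0.0.1:8000 -t "$WORKSPACE" &
                    SERVER_PID=$!
                    trap "kill $SERVER_PID" EXIT

                    # Point the suite at the local server (e.g. via an env var
                    # read by a Jenkins-specific codeception.yml) and run it
                    DATA_BASE_URL=http://127.0.0.1:8000 vendor/bin/codecept run
                '''
            }
        }
    }
}
```

With something like this, each developer's VM keeps serving its own checkout as before, and on Jenkins every branch build serves its own copy of the data file, so different developers wanting different values in the same file stops being a conflict.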

Related

What is the correct way to use GitLab CI pipelines when testing a Node.js website with a Java WebDriver project?

I've been given a small node.js website to test.
Initially, I tried to keep it all JavaScript and even managed to write a few tests and weave those into a CI YAML file that instructs GitLab to deploy the container, build the site, run the tests...
However, my tests are getting more and more complicated and I must resort to my Java skills.
Now, the problem is that I don't know how to structure the CI tasks: there is no single container that has all the needed technology (nor is that what containers are for, anyway).
On the other hand, I don't know how to get more than one image into a task.
Somehow, my mind imagines I could deploy it like this: one container has the Node.js stack and builds and runs the site, exposing an endpoint.
Another container has the Java-Maven-Chrome stuff and builds and runs the tests, which access the site via the exposed endpoint.
Or maybe I have the whole concept wrong?
I would appreciate learning what the professional solution is here. Surely I am not the first Java QA engineer trying to test a Node.js website!
I would really appreciate an example of the YAML file, because I can only imagine it having a single "image" field at the beginning, and that's where my container goes, with no room for another.
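One way the YAML could look, sketched below under assumptions, uses GitLab CI's services: keyword so the job gets two containers: the main image carries Java, Maven and Chrome and runs the tests, while a service container runs the Node.js site and is reachable under an alias. The image names, the "web" alias, the port and the Maven property are all placeholders to be replaced with your own.

```yaml
# Hypothetical .gitlab-ci.yml sketch; the image names, the "web" alias,
# the port and the Maven property are placeholders.
test:
  stage: test
  image: my-registry/maven-chrome:latest      # main container: Java, Maven and Chrome for the tests
  services:
    - name: my-registry/my-node-site:latest   # service container: builds and starts the Node.js site
      alias: web                              # the site is reachable from the job as http://web:3000
  variables:
    SITE_URL: "http://web:3000"
  script:
    - mvn -B test -Dsite.url=$SITE_URL
```

The mental model of "one container builds and runs the site, another runs the tests against an exposed endpoint" maps directly onto image: (the test container) plus services: (the site container), so the single image: field is not the limit it appears to be.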

Jenkins + OpenLiteSpeed NodeJS integration

I have a problem integrating Jenkins and OpenLiteSpeed. The setup is this: Jenkins is connected to GitLab, and when a change is pushed to GitLab it pulls the changes and builds the project. The project consists of two parts, frontend and backend (they are separate projects on GitLab), which run on different ports (83 and 84) that I set up in OpenLiteSpeed with 2 virtual hosts and 2 listeners. I followed instructions I found on the internet and set everything up as described. Jenkins builds the app and reports that everything is fine, but it ends up serving an old version of the app without the changes I made. When I look at the file system I can see the changes are there, but in the browser they are not. I thought it might be the cache, but the same thing happens in incognito mode.

Also, I don't know how to make my backend start the server on localhost:3000 after the build. I am not sure what the best way is to have the backend and frontend on the same server. Is using OpenLiteSpeed with different ports the wrong way to do this, and is there a better way? Oh, the project is built with NodeJS and React. Also, the frontend connects successfully to the backend on my PC, but on my Linux server it won't. Thank you in advance.
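For the "start the backend on localhost:3000 after the build" part, one common pattern (a sketch under assumptions, not specific to OpenLiteSpeed) is to let the Jenkins build step build both projects, copy the React build output into the document root of the frontend virtual host, and keep the Node backend running under a process manager such as pm2 so it survives the end of the Jenkins job. All paths and names below are hypothetical.

```bash
# Hypothetical Jenkins "Execute shell" build step; paths, the vhost docroot
# and the pm2 ecosystem file are placeholders.

# Build the React frontend and sync it into the folder the OpenLiteSpeed
# virtual host actually serves (stale files here are one common reason the
# browser keeps showing an old build).
cd "$WORKSPACE/frontend"
npm ci
npm run build
rsync -a --delete build/ /usr/local/lsws/sites/frontend/html/

# (Re)start the Node backend on port 3000 under pm2 so it keeps running
# after the Jenkins job finishes.
cd "$WORKSPACE/backend"
npm ci
pm2 startOrRestart ecosystem.config.js
```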

How to use TestCafe-Cucumber Node.js project in DevOps deployments

I have a test framework running locally (and in Git) that is based on the TestCafe-Cucumber (Node.js) example: https://github.com/rquellh/testcafe-cucumber, and it works really well.
Now, I am trying to use this framework in the deployment (post-deployment) cycle by hosting it as a service or creating a docker container.
The framework is executed through a CLI command (npm test) with a few parameters.
I know the easiest way is to call the git repo directly as and when required by adding a Jenkins step; however, that is not the solution I am looking for.
So far, I have successfully built the Docker image, and the container now runs on my localhost on port 8085 as http://0.0.0.0:8085 (although the browser only shows a DNS/server error, since it's not a web app; please correct me if I am wrong here).
The concern here is: how can I make it work like a hosted app, so that once the deployment completes, Jenkins/Octopus can call it as a service through the URL (http://0.0.0.0:8085), passing the few parameters the framework uses to execute the test cases?
I would be grateful if the experts here could suggest a solution, if there is one.
I guess there is no production-ready application or service that solves this task out of the box.
However, you can use a REST framework to handle network requests and subprocesses to start test sessions. If you like Node.js, you can start with the Express framework and the execa module.
This way you can build a basic service that can start your tests. If you need a more flexible solution, you can take a look at gherkin-testcafe, which provides access to TestCafe's API. You can use it instead of starting TestCafe as a subprocess, since that will give you more options for managing your test sessions.
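A minimal sketch of the Express + execa idea, assuming the service simply shells out to the existing npm test entry point; the endpoint path, the parameters and the project location are hypothetical and need to match what the framework actually accepts:

```javascript
// Hypothetical test-runner service: the endpoint name, parameters and the
// project path are placeholders.
const express = require('express');
const execa = require('execa');

const app = express();
app.use(express.json());

app.post('/run-tests', async (req, res) => {
  const { browser = 'chrome:headless', tags = '' } = req.body || {};
  try {
    // Spawn the existing CLI entry point as a subprocess.
    const result = await execa(
      'npm',
      ['test', '--', `--browser=${browser}`, `--tags=${tags}`],
      { cwd: '/opt/testcafe-cucumber-project' } // placeholder project location
    );
    res.json({ status: 'passed', output: result.stdout });
  } catch (err) {
    // execa rejects on a non-zero exit code, i.e. failing tests.
    res.status(500).json({ status: 'failed', output: err.stdout || err.message });
  }
});

app.listen(8085, () => console.log('Test runner service listening on 8085'));
```

Jenkins or Octopus could then POST to http://<host>:8085/run-tests with the desired parameters once a deployment finishes.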

Transforming Web.Config in a Windows Docker container

I have a .NET app that uses Octopus to deploy to the server. In this process, Octopus updates some of the values in the Web.Config (API keys, database connection string etc). I'm moving this app into a container on the same server, and the image has been built before Octopus gets anywhere near it.
How do I update the Web.Config based on the environment in which the docker run command is being triggered?
I've found this blog post, which seems to necessitate a config transformation file.
I could also pass everything in as environment variables to the container, but then I'd have to change how the app accesses them, which I don't want to do because there are lots of other apps that would need the same change.
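One hedged option that avoids changing how the app reads its settings: pass the values in as environment variables at docker run (Octopus can supply them per environment), but have a small PowerShell entrypoint write them into Web.config when the container starts, so the application itself still reads Web.config exactly as before. The setting names and the ServiceMonitor path below are assumptions based on the standard ASP.NET Windows images and will need adapting.

```powershell
# entrypoint.ps1 (hypothetical sketch): copy environment variables into
# Web.config at container start, then hand off to the image's usual entrypoint.
# Setting names and paths are placeholders.
$configPath = 'C:\inetpub\wwwroot\Web.config'
[xml]$config = Get-Content $configPath

if ($env:CONNECTION_STRING) {
    $node = $config.configuration.connectionStrings.add |
        Where-Object { $_.name -eq 'DefaultConnection' }
    $node.connectionString = $env:CONNECTION_STRING
}

if ($env:API_KEY) {
    $node = $config.configuration.appSettings.add |
        Where-Object { $_.key -eq 'ApiKey' }
    $node.value = $env:API_KEY
}

$config.Save($configPath)

# Standard entrypoint of the microsoft/aspnet Windows images.
C:\ServiceMonitor.exe w3svc
```

The Dockerfile would then set this script as the ENTRYPOINT, so the transformation happens in whichever environment the container is started.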

Serving the folder produced by pub build with the same server that serves the API (complete app)

I would like to join my 2 apps written in Dart:
a backend that serves the data (API)
a frontend with AngularDart (the build folder generated by pub build)
Pretty simple question, I know, but I don't usually do this, so I was wondering:
Do I need to serve the whole build folder?
Can a simple static file server do that?
What package do you usually use?
Any interesting links?
If your backend is written such that you have a script in your bin folder that uses HttpRequest, then you need to host it in the Dart VM (by running dart bin\script.dart). You cannot host it as static files like the frontend, because the Dart files that should be executed would just be served up to the browser.
Depending on your operating system, you'd likely want to run this as some sort of service to avoid needing a user logged on to run it. The Dart Docker images might be a good way to do this.
If you want to serve the frontend code using the same server, then your backend will need to be able to serve up the static files when handling the request. You might be able to do this easily with the shelf package.
You can see an example of full stack Dart application (backend + frontend) here: https://github.com/luizmineo/io_2014_contacts_demo
The server (backend) is configurable, so it can serve the client from any folder (usually, web or build/web). The example also includes a build script, which can be used to produce a deployable build (server + compiled client).
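As a concrete illustration of the shelf suggestion above, here is a minimal sketch that serves the compiled client from build/web and handles API routes in the same process. It assumes the shelf and shelf_static packages (shelf_static is an assumption, it is not named above), and the _apiHandler logic is a placeholder:

```dart
import 'package:shelf/shelf.dart';
import 'package:shelf/shelf_io.dart' as io;
import 'package:shelf_static/shelf_static.dart';

// Placeholder API handler: answer /api/... requests and return 404 for
// everything else so the cascade falls through to the static handler.
Response _apiHandler(Request request) {
  if (request.url.path.startsWith('api/')) {
    return Response.ok('{"hello": "world"}',
        headers: {'content-type': 'application/json'});
  }
  return Response.notFound('not found');
}

void main() async {
  // Serve the compiled frontend (the pub build output) as static files.
  final staticHandler =
      createStaticHandler('build/web', defaultDocument: 'index.html');

  // Try the API first, fall back to static files on 404.
  final handler = Cascade().add(_apiHandler).add(staticHandler).handler;

  final server = await io.serve(handler, 'localhost', 8080);
  print('Serving on http://${server.address.host}:${server.port}');
}
```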
