I have different projects with integration tests, each starting an embedded WildFly to run the tests.
To be able to run them in parallel I want to use a port offset, but it seems that Arquillian doesn't recognize the offset.
Is there any way to use the offset? I don't want to hardcode the management port in arquillian.xml, because it should be configurable as a CLI parameter.
Thanks for any hints.
I have a test framework running locally (and in Git) that is based on the TestCafe-Cucumber (Node.js) example https://github.com/rquellh/testcafe-cucumber, and it works really well.
Now I am trying to use this framework in the deployment (post-deployment) cycle by hosting it as a service or creating a Docker container.
The framework is executed through a CLI command (npm test) with a few parameters.
I know the easiest way is to call the Git repo directly as and when required by adding a Jenkins step, however, that is not the solution I am looking for.
So far, I have successfully built the Docker image, and the container now runs on my localhost on port 8085 as http://0.0.0.0:8085 (although all I get there is a DNS/server error, as it's not a web app - please correct me if I am wrong here).
The question is: how can I make it behave like a hosted app, so that once the deployment completes, Jenkins/Octopus can call it as a service through the URL (http://0.0.0.0:8085), passing the few parameters the framework uses to execute the test cases?
Any solutions or pointers would be much appreciated.
I guess there is no production-ready application or service to solve this task.
However, you can use a REST framework to handle network requests and subprocesses to start test sessions. If you like Node.js, you can start with the Express framework and the execa module.
This way you can build a basic service that can start your tests. If you need a more flexible solution, you can take a look at gherkin-testcafe, which provides access to TestCafe's API. You can use it instead of starting TestCafe as a subprocess, since this way you will have more options to manage your test sessions.
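For illustration, here is a very rough sketch of such a service, written in Python with only the standard library so it stays self-contained; an Express/execa version would have exactly the same shape. The port, framework path and response format below are assumptions, not part of the original framework:

import json
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

class TestRunHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read (and for this sketch, ignore) any parameters Jenkins/Octopus sends in the body
        length = int(self.headers.get('Content-Length', 0))
        self.rfile.read(length)
        # Run the TestCafe-Cucumber suite as a subprocess, exactly as you would on the CLI
        result = subprocess.run(['npm', 'test'],
                                cwd='/opt/testcafe-cucumber',  # assumed install path
                                capture_output=True, text=True)
        body = json.dumps({'exitCode': result.returncode,
                           'output': result.stdout + result.stderr}).encode()
        self.send_response(200)
        self.send_header('Content-Type', 'application/json')
        self.end_headers()
        self.wfile.write(body)

HTTPServer(('0.0.0.0', 8085), TestRunHandler).serve_forever()

Jenkins/Octopus would then simply POST to http://<host>:8085/ after the deployment step and read the exit code from the response.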
Currently my web application is running on a server where all the services (nginx, PHP, etc.) are installed directly on the host system. Now I want to use Docker to separate these different services into specific containers. Nginx and php-fpm are working fine. But the web application can also generate PDFs, which is done using wkhtmltopdf, and as I want to follow the single-service-per-container pattern, I want to add an additional container which houses wkhtmltopdf and takes care of this specific service.
The problem is: how can I do that? How can I call the wkhtmltopdf binary from the php-fpm container?
One solution is to share the Docker socket, but that is a big security flaw, so I really don't like to do it.
So, is there any other way to achieve this? And isn't this kind of "microservice separation" one of the main purposes/goals of Docker?
Thanks for your help!
You can't directly call binaries from one container to another. ("Filesystem isolation" is also a main goal of Docker.)
In this particular case, you might consider "generate a PDF" as an action your service takes and not a separate service in itself, so executing the binary as a subprocess is a means to an end. This doesn't even raise any complications, since presumably wkhtmltopdf isn't a long-running process: you'll launch it once per request and not respond until the subprocess runs to completion. I'd install or include it in the Dockerfile that packages your PHP application, and be architecturally content with that.
Otherwise the main communication between containers is via network I/O and so you'd have to wrap this process in a simple network protocol, probably a minimal HTTP service in your choice of language/framework. That's probably not worth it for this, but it's how you'd turn this binary into "a separate service" that you'd package and run as a separate container.
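If you did go that route, a minimal sketch of such a wrapper could look like the following (Python and Flask are an arbitrary choice here, and the endpoint name and port are assumptions); the php-fpm container would then POST the HTML to this service over the Docker network instead of executing the binary itself:

from flask import Flask, request, Response
import subprocess

app = Flask(__name__)

@app.route('/pdf', methods=['POST'])
def render_pdf():
    # Read the HTML to convert from the request body,
    # feed it to wkhtmltopdf on stdin and return the PDF bytes.
    html = request.get_data()
    proc = subprocess.run(
        ['wkhtmltopdf', '-', '-'],  # '-' '-': read HTML from stdin, write PDF to stdout
        input=html, capture_output=True)
    if proc.returncode != 0:
        return Response(proc.stderr, status=500)
    return Response(proc.stdout, mimetype='application/pdf')

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)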
Docker is a wonderful tool for running/deploying your application in a well-defined, controlled environment, and is well supported by, e.g., GitLab CI or MS Azure.
We would like to use it also in the development phase, so that all developers have the same environment available. Of course, we want to keep the image as light as possible and we do not want e.g. any IDE or other development tool inside of it.
So the actual development takes place outside of docker.
Running our (python) application inside of docker is no problem, but debugging it is not trivial: I do not know of a way to attach a debugger to an application running inside docker. In theory this should be possible, but how does one do it?
Additional info: we use Visual Studio Code, which does have a Docker plugin, but nothing of this sort is mentioned there.
Turns out that this is possible, following the same steps needed for remote debugging.
The IP address of the Docker container can be retrieved with:
docker inspect <container_id> | grep -i ip
Just be sure to add this at the beginning of your application:
import ptvsd
# Allow other computers to attach to ptvsd at this IP address and port, using the secret
ptvsd.enable_attach(secret=None, address = ('0.0.0.0', 3000))
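# Block execution here until a debugger actually attaches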
ptvsd.wait_for_attach()
'0.0.0.0' means on all interfaces.
For VS Code, the last step consists of adapting the "Python: Attach" launch configuration, specifying the address and the remote and local roots for your script.
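For reference, a minimal sketch of such an attach configuration in launch.json might look roughly like this (the container IP, debug port and remoteRoot are assumptions that depend on your setup, and the port must of course be reachable from the host, e.g. published via docker run -p 3000:3000):

{
    "name": "Python: Attach",
    "type": "python",
    "request": "attach",
    "host": "172.17.0.2",               // IP reported by docker inspect (assumed value)
    "port": 3000,
    "localRoot": "${workspaceFolder}",
    "remoteRoot": "/app"                // path of your code inside the container (assumed)
}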
However, for some mysterious reason my breakpoints are ignored.
I have multiple Docker containers. Locally they talk to each other using a Docker network and compose.yml. I have pushed my containers to PCF, but I don't know how to make them talk to each other. Can anyone help me out?
My understanding is that you would utilize Cloud Foundry's container to container networking to do this.
https://docs.cloudfoundry.org/concepts/understand-cf-networking.html
By default, no connections are allowed between containers, but you can use the cf CLI to allow connections on specific ports between your applications. Your applications just need to be configured to start and listen on the ports that you allow.
While it's not Docker specific, there's a good example here:
https://github.com/cloudfoundry/cf-networking-examples/blob/master/docs/c2c-no-service-discovery.md
Using Docker should be only minimally different. You'll obviously need to create your own Docker images and make sure they listen on the correct ports; otherwise it's just an adjustment to how you push the application (i.e. pass the Docker image name to cf push). The cf add-network-policy commands should be the same.
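For example, with the v6 cf CLI the policy step looks something like the following (app names and port are placeholders, and the exact flags vary a little between CLI versions):

cf add-network-policy frontend-app --destination-app backend-app --protocol tcp --port 8080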
Hope that helps!
** UPDATE **
If you are looking for docker-compose-like behavior, i.e. running one command to deploy multiple apps, you can achieve this with cf push and a manifest.yml file.
The manifest.yml file allows you to define multiple applications. Thus you can use it to deploy a series of applications that work together, like you often see with docker-compose.
https://docs.cloudfoundry.org/devguide/deploy-apps/manifest-attributes.html
You have quite a bit of flexibility with manifest.yml: you can deploy buildpack-based apps or Docker-image-based apps. You can configure routes, bound services, health checks, memory/disk quotas, and, if you're on a new enough version, even processes and sidecars. That said, it can't do 100% of what you can with the cf CLI; for example, you cannot control the above-mentioned network policy using a manifest.yml. If you need to control something not exposed through manifest.yml, the other option would be to script the deployment.
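For example, a manifest.yml that pushes two Docker-image-based apps in one cf push might look roughly like this (app names, image names and memory sizes are placeholders):

applications:
- name: web
  docker:
    image: myorg/web:latest   # placeholder image
  memory: 512M
- name: api
  docker:
    image: myorg/api:latest   # placeholder image
  memory: 256M

Running cf push against this manifest deploys both apps together; the network policy between them would still be added afterwards with cf add-network-policy, since that isn't exposed through the manifest.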
I'm trying to develop an application that has two main containers: a Java/Tomcat webserver and a Python and Lua one for machine learning scripts.
So here is the issue: I need to run a command in the Python/Lua container whenever the Java one receives a certain request. I know that if the webserver wasn't a container I could simply use docker exec, but wouldn't having the Java part of my application as a non-container break the whole security idea of Docker?
Thanks a lot, and sorry for my poor English!
(+1 for @larsks) Set up a REST API that allows one container to trigger actions on the other container.
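A minimal sketch of the receiving side on the Python/Lua container might look like this (the route, port and script path are all hypothetical); the Java webserver then makes an HTTP call to this endpoint over the Docker network instead of needing docker exec:

from flask import Flask, request, jsonify
import subprocess

app = Flask(__name__)

@app.route('/run-script', methods=['POST'])
def run_script():
    # Arguments forwarded from the Java webserver's request (format is hypothetical)
    args = request.get_json(silent=True) or {}
    # Run the machine-learning script inside this container and capture its output
    result = subprocess.run(
        ['lua', '/scripts/predict.lua', args.get('input', '')],  # assumed script path
        capture_output=True, text=True)
    return jsonify({'exitCode': result.returncode, 'output': result.stdout})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)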
You can set up container communication using links. Docs here: https://docs.docker.com/engine/userguide/networking/default_network/dockerlinks/
After that, container A can call container B using B:<port>/<your API>.