How to handle Keystores / Deployment with Docker

I'm trying to use Ballerina to build a REST API that uses JWT authentication and is deployed with Docker.
I managed to build a simple service with a few endpoints and to deploy a Docker image.
Now I want to add JWT authentication.
I tried using this example: https://ballerina.io/learn/by-example/secured-service-with-jwt-auth.html
(v1.2 and Swan Lake)
However, when I try to run the example I get:
"error: KeyStore File \bre\security\ballerinaKeystore.p12 not found" (I'm using Windows).
(I probably have to set my own keystore here for it to work, but the example does not say anything about that.)
EDIT: Never mind... I'm an idiot. I forgot to pass --b7a.home=
But that still leaves my questions about deployment with Docker.
Also: I think I understand what a keystore is and why I need it, but how do I handle keystores during development or when deploying? Pushing the keystore file to a repo seems like a bad idea. Where do I save it, and how do I deploy it? Or did I get something completely wrong here?

You could refer to the Sample with Docker and Sample with Kubernetes examples for how to deploy HTTPS services using the annotations.
To use it without annotations, you need to copy the keystores/truststores into the Docker image and give that path to the configuration of the HTTP service's listener. In production you will most probably have your own keystores and truststores, so it is always better to copy these into the image when you build it.
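To address the "should I push the keystore to a repo" concern: a common pattern is to keep the keystore out of version control entirely and mount it into the container at deploy time instead of baking it into the image. A sketch, where the paths, port, and image name are illustrative:

```shell
# Keep the keystore out of git (add it to .gitignore) and store it in a
# secret store or a protected location on the deploy host instead.

# Mount the keystore into the container at runtime, so the image you push
# to a registry contains no secrets:
docker run -d \
  -p 9090:9090 \
  -v /etc/myapp/secrets/ballerinaKeystore.p12:/home/ballerina/security/ballerinaKeystore.p12:ro \
  my-ballerina-service:latest
```

The listener configuration then points at /home/ballerina/security/ballerinaKeystore.p12 inside the container, and the keystore password can be supplied the same way, e.g. via an environment variable or a mounted config file.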

Related

I can't find jetty-https.xml in Nexus 3

I installed an instance of Nexus Repository Manager 3 in Rancher, and I'm trying to use an HTTPS port for a Docker hosted repository. This means I need to create a self-signed certificate to make it work. After a lot of research I came down to one problem: I can't find jetty-https.xml in /etc. The question is: does this file exist, or do I need to create it?
Source:
https://support.sonatype.com/hc/en-us/articles/217542177?_ga=2.62350444.1144825414.1623920039-1845083682.1622816513
https://help.sonatype.com/repomanager3/system-configuration/configuring-ssl#ConfiguringSSL-HowtoEnabletheHTTPSConnector
After modifying the nexus.properties file in /nexus-data/etc/, uncommenting the nexus-args line, and restarting the container, jetty-https.xml appeared in $install-dir/etc/jetty/. If you check the logs, you can see the exact location of the Jetty config folder.
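The change described above can be sketched as follows (the property values follow the Sonatype SSL guide linked in the question; verify them against your Nexus version):

```shell
# /nexus-data/etc/nexus.properties -- uncomment/extend the nexus-args line
# so the HTTPS Jetty config gets loaded, and set the HTTPS port:
#
#   application-port-ssl=8443
#   nexus-args=${jetty.etc}/jetty.xml,${jetty.etc}/jetty-http.xml,${jetty.etc}/jetty-https.xml,${jetty.etc}/jetty-requestlog.xml

# Restart the container so Nexus regenerates its Jetty configuration:
docker restart nexus

# The generated file should then appear under the install dir:
ls $install-dir/etc/jetty/jetty-https.xml
```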

How to use TestCafe-Cucumber Node.js project in DevOps deployments

I have a test framework running locally (and in git) based on the TestCafe-Cucumber (Node.js) example: https://github.com/rquellh/testcafe-cucumber, and it works really well.
Now I am trying to use this framework in the (post-)deployment cycle by hosting it as a service or creating a Docker container.
The framework is executed through a CLI command (npm test) with a few parameters.
I know the easiest way is to call the git repo directly as and when required by adding a Jenkins step; however, that is not the solution I am looking for.
So far, I have successfully built the Docker image, and the container now runs on my localhost port 8085 as http://0.0.0.0:8085 (although I get a DNS error, as it's not a web app; please correct me if I am wrong here).
The question is: how can I host it like an app, so that once the deployment completes, Jenkins/Octopus could call it as a service through the URL (http://0.0.0.0:8085), along with the few parameters the framework uses to execute the test cases?
I would appreciate any suggestions.
I guess there is no production-ready application or service that solves this task out of the box.
However, you can use a REST framework to handle network requests and subprocesses to start test sessions. If you like Node.js, you can start with the Express framework and the execa module.
This way you can build a basic service that starts your tests. If you need a more flexible solution, you can take a look at gherkin-testcafe, which provides access to TestCafe's API. You can use it instead of starting TestCafe as a subprocess, since this way you will have more options for managing your test sessions.
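Both approaches can be sketched from the caller's side like this (the container name, endpoint, and parameters here are illustrative, not part of any existing tool):

```shell
# Option 1: have Jenkins run the framework's CLI entry point directly
# inside the already-running container; arguments after `--` are passed
# through to the npm "test" script:
docker exec testcafe-runner npm test -- --env=staging --tags=@smoke

# Option 2: once you wrap the runner in a small HTTP service (e.g. built
# with Express + execa as described above), Jenkins/Octopus can trigger
# a run with a plain HTTP call:
curl -X POST "http://0.0.0.0:8085/run-tests?env=staging&tags=@smoke"
```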

How do I start the MinIO server with a custom config in newer versions?

I'm trying to get up and running with the MinIO server. I've read their "server config guide" here, but there's one thing I don't get.
The guide says that previously you could put a config.json in the MinIO home dir you specify, but that this is now deprecated. You're instead supposed to use their client (mc) to update the config via admin commands.
This seems very cumbersome to me, although I understand that you can pass an entire config.json file in via the mc client.
However, what if you have a Docker container and want to start it with a custom config? I don't understand how you'd do that, and their "docker run" documentation only covers how to start it with environment variables for a custom username/password.
To me it makes more sense to still have a config.json in the MinIO home dir; I don't fully get why they removed it.
If someone could help me understand the config better, I'd be a happier MinIO camper.
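For the Docker case specifically, newer MinIO releases are configured through environment variables at startup, with mc admin commands for anything you want to change at runtime. A sketch (the variable and command names follow the MinIO docs, but verify them against your release):

```shell
# Configure the server via environment variables at `docker run` time
# instead of a config.json in the home dir:
docker run -d \
  -p 9000:9000 \
  -e MINIO_ROOT_USER=admin \
  -e MINIO_ROOT_PASSWORD=changeme123 \
  -v /mnt/data:/data \
  minio/minio server /data

# Runtime settings can then be inspected and changed with the mc client:
mc alias set local http://127.0.0.1:9000 admin changeme123
mc admin config get local region
mc admin config set local region name=us-east-1
mc admin service restart local
```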

Dockerizing a meteor app

So, the idea is to dockerize an existing Meteor app from 2015. The app is divided into two parts (backend and frontend). I already wrote a huge bash script to handle all the older dependencies (software dependencies, etc.); I just need to run the script and the app is up. But now the idea is to create a Docker image for the project. How should I achieve this? Should I start from an empty Docker image and run my script there? Thanks; I'm new to Docker.
A bit more info about the stack, the script, and the dependencies would be helpful.
Assuming this app is not in active development, you can simply use e.g. an nginx image and give it the frontend files to serve.
For the backend there is a huge variety of options, such as php and node images.
The Dockerfile of your backend image should contain the installation and setup of its dependencies (except for other services like the database; there are separate images for those).
To keep things simple, try out docker-compose to configure your containers to act as a single service (and save yourself some configuration).
Later, to scale things up, you could look at orchestration tools such as Kubernetes. But I assume this service is not there yet (based on your question). :)
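A minimal docker-compose sketch of the frontend/backend split described above (service names, paths, and the database choice are illustrative):

```yaml
# docker-compose.yml -- illustrative two-service layout plus a database
services:
  frontend:
    image: nginx:alpine
    volumes:
      - ./frontend/dist:/usr/share/nginx/html:ro   # built frontend assets
    ports:
      - "8080:80"
  backend:
    build: ./backend          # this Dockerfile installs the app's dependencies
    environment:
      - MONGO_URL=mongodb://db:27017/app
    depends_on:
      - db
  db:
    image: mongo:4.4          # Meteor apps typically use MongoDB
```

With this in place, `docker-compose up` starts the whole stack as one unit, which is usually enough before reaching for an orchestrator.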

Docker and SSH for development with phpStorm

I am trying to set up a small development environment using Docker. The PhpStorm team is working hard on getting Docker integrated as a remote interpreter, and therefore for debugging, but sadly it is not working yet (see here). The only way I have to add such debugging capabilities is to create and enable SSH access to the container, which works like a charm.
Now, I have read a lot about this, and some people, like the author of this post, say it is not recommended. Others suggest having a dedicated SSH Docker container, which I don't see how to fit into this environment.
I am already creating a docker-user account (check the repo here) for certain tasks, like running composer without root permissions. That user could easily be reused for SSH by adding a default password to it.
How would you handle this under such circumstances?
I too have implemented the SSH server workaround when using JetBrains IDEs.
Usually what I do is add a public SSH key to the ~/.ssh/authorized_keys file for the SSH user in the target container/system, and enable passwordless sudo.
One solution I have thought of, but not yet had the time to implement, is an SSH service that acts as a gateway to a docker exec command. That would potentially allow at least some functionality without having to modify your images for this dev requirement.
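The key-based setup described above can be sketched like this (the user, container name, and mapped port are illustrative):

```shell
# Add your public key to the container user's authorized_keys and enable
# passwordless sudo for that user:
docker cp ~/.ssh/id_rsa.pub dev-container:/tmp/id_rsa.pub
docker exec -u root dev-container sh -c '
  mkdir -p /home/docker-user/.ssh &&
  cat /tmp/id_rsa.pub >> /home/docker-user/.ssh/authorized_keys &&
  chown -R docker-user /home/docker-user/.ssh &&
  chmod 700 /home/docker-user/.ssh &&
  chmod 600 /home/docker-user/.ssh/authorized_keys &&
  echo "docker-user ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/docker-user
'

# PhpStorm can then use a key-based SSH remote interpreter against the
# container's mapped SSH port:
ssh -p 2222 docker-user@localhost
```

Key-based auth avoids shipping a default password in the image, which addresses part of the "is SSH in a container a bad idea" concern.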
