How to make a Docker architecture with Traefik and Snort3

I'm trying to work out how to build the following architecture with Docker, Traefik and Snort3.
I need to understand what is possible with these two systems in order to make the setup secure.
In the left image Snort3 sits in the middle of the architecture; in the right image it runs as a sidecar next to Traefik.
I don't know what is possible or what the best practices are, and I'm also looking for material I could use for a demo.
For example, a Compose file with a shared network that illustrates this architecture.
The API in this case is NodeJS, but the most important part is how to place a Snort3 container, with Docker, between the Traefik container and the API container.
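For what it's worth, here is a rough sketch of the right-hand variant (Snort3 as a sidecar that sniffs Traefik's traffic) in docker-compose. The image names my-snort3 and my-node-api, the Snort command line and the router label are assumptions for illustration, not a tested setup:

version: "3.8"
services:
  traefik:
    image: traefik:v2.10
    command:
      - "--providers.docker=true"
      - "--entrypoints.web.address=:80"
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - edge
  snort:
    image: my-snort3:latest              # hypothetical image; build or pick your own Snort3 image
    network_mode: "service:traefik"      # share Traefik's network namespace (tap/IDS mode)
    cap_add:
      - NET_ADMIN
      - NET_RAW
    command: snort -c /etc/snort/snort.lua -i eth0 -A alert_fast
    volumes:
      - ./snort:/etc/snort:ro
  api:
    image: my-node-api:latest            # hypothetical NodeJS API image
    networks:
      - edge
    labels:
      - "traefik.http.routers.api.rule=Host(`api.localhost`)"
networks:
  edge:
    driver: bridge

Because the Snort3 container shares Traefik's network namespace, it sees all traffic entering and leaving the proxy without being in the data path. Putting Snort3 truly inline (the left-hand picture) is harder with plain Compose networking; it usually means attaching Snort3 to two networks and routing traffic through it, or running it in inline/IPS mode on the host.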

Related

How to use Dapr to communicate in Docker Compose

I'm trying to learn Dapr and Docker Compose at the same time, though I am running into some problems. I have a very basic docker-compose.yaml, shown below
version: "3.7"
services:
  python-service:
    image: python-image
  java-service:
    image: java-image
My goal is to make these able to communicate over Dapr (currently they are simple hello-world programs, but I'm trying to get the connection working first).
My goal architecture would be something like:
[python-service][Dapr-sidecar]
[java-service][Dapr-sidecar]
Having the services talk to their sidecars, and the sidecars talk to each other over a network. I'm quite stumped on how to achieve this, and I can't seem to find any guides online that fit my exact case.
I tried to follow the guide here: https://stackoverflow.com/a/66402611/17494949. However, it gave me a bunch of errors, seemed to use some other system, and didn't explain how to run it. Any help would be appreciated.
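Not a tested answer, but a sketch of the usual pattern (one daprd sidecar per service, sharing that service's network namespace); the ports, app-ids and daprd flags shown here are assumptions you would adjust to your apps:

version: "3.7"
services:
  python-service:
    image: python-image
  python-dapr:
    image: "daprio/daprd:latest"
    command: ["./daprd",
              "-app-id", "python-service",
              "-app-port", "5000",                 # assumes the Python app listens on 5000
              "-dapr-http-port", "3500"]
    network_mode: "service:python-service"         # sidecar shares the app's network namespace
    depends_on:
      - python-service
  java-service:
    image: java-image
  java-dapr:
    image: "daprio/daprd:latest"
    command: ["./daprd",
              "-app-id", "java-service",
              "-app-port", "8080",                 # assumes the Java app listens on 8080
              "-dapr-http-port", "3500"]
    network_mode: "service:java-service"
    depends_on:
      - java-service

Each app then calls its own sidecar on localhost:3500 and addresses the other service by its app-id through Dapr's service invocation API. Depending on which building blocks you use (actors, pub/sub, non-mDNS name resolution) you may also need the Dapr placement service and a components folder; the Dapr docs have a docker-compose example for self-hosted mode.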

How to create a single project out of multiple docker images

I have been working on a project where I have had several docker containers:
Three OSRM routing servers
Nominatim server
Container where the webpage code is with all the needed dependencies
So, now I want to prepare a version that a user could download and run. What is the best practice to do such a thing?
Firstly, I thought about joining everything into one container, but I have read that it is not recommended to run several processes in one container. Secondly, I thought about wrapping everything up into a VM, but that is not really a "program" that a user can launch. My third idea was to write a script that would download each image from Docker Hub separately and launch the webpage. But I am not sure whether that is best practice, or whether there are better ideas.
When you need to deploy a full project composed of several containers, you can use a specialized tool.
A well-known one for single-server usage is docker-compose:
Compose is a tool for defining and running multi-container Docker applications
https://docs.docker.com/compose/
You could provide your users with:
a docker-compose file
your application's Docker images (e.g. through Docker Hub); a rough sketch follows below.
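As a sketch only (image tags, commands, volumes and ports are placeholders you would adapt to your actual setup), the compose file for the project described above could look like:

version: "3.8"
services:
  osrm-car:
    image: osrm/osrm-backend
    command: osrm-routed /data/car.osrm
    volumes:
      - ./osrm-data:/data
  osrm-bike:
    image: osrm/osrm-backend
    command: osrm-routed /data/bike.osrm
    volumes:
      - ./osrm-data:/data
  osrm-foot:
    image: osrm/osrm-backend
    command: osrm-routed /data/foot.osrm
    volumes:
      - ./osrm-data:/data
  nominatim:
    image: mediagis/nominatim:4.2          # assumption; use whatever Nominatim image you built
    volumes:
      - nominatim-data:/var/lib/postgresql/14/main
  web:
    image: yourname/your-webapp            # hypothetical: the container holding the webpage code
    ports:
      - "8080:80"
    depends_on:
      - osrm-car
      - osrm-bike
      - osrm-foot
      - nominatim
volumes:
  nominatim-data:

A user would then only need this file plus access to the images (Docker Hub or another registry) and could start everything with docker compose up -d.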
For clusters/cloud you would instead look at orchestrators such as Docker Swarm, Kubernetes or Nomad.
Kubernetes's documentation is here:
https://kubernetes.io/

Does docker-compose have something similar to service accounts and kubernetes-client library?

By creating service accounts for pods, it is possible to access the Kubernetes API of the whole cluster from any pod. Kubernetes client libraries implemented in different languages make it possible to have a pod in the cluster that serves this purpose.
Does docker-compose have something similar to this? My requirement is to control the life cycle (create, list, scale, destroy, restart, etc.) of all the services defined in a compose file. As far as I've searched, no such feature is available for Compose.
Or does docker-swarm provide any such features?
Docker provides an API which can be used to interact with the daemon; in fact, that is exactly what docker-compose uses to achieve its functionality.
Docker does not provide fine-grained access control like Kubernetes does, though. But you can mount the Docker socket into a container and make use of the API. A good example of that is Portainer, which provides a web-based UI for Docker.
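A minimal sketch of that pattern (a compose service with the Docker socket mounted; Portainer is just one example, your own controller service would mount the socket the same way):

version: "3.8"
services:
  portainer:
    image: portainer/portainer-ce
    ports:
      - "9000:9000"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock   # gives this container full access to the local Docker API

From any container that has the socket mounted you can talk to the daemon directly, e.g. curl --unix-socket /var/run/docker.sock http://localhost/containers/json, or use one of the Docker SDKs. Be aware that access to the socket is effectively root on the host, so there is nothing comparable to Kubernetes RBAC.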

Can I isolate pair of services in docker swarm?

I am building a web app using docker swarm.
The manager machine will have the database and the load balancer.
Next I have two pieces of software: a Tornado server, which acts as a middle layer between the user and a Node server. They should always be deployed together, and one Tornado server should always talk to exactly one Node server.
I want the containers to be as isolated as possible (in order to preserve scalability), but how do I ensure that kind of communication?
Right now my approach is to build two separate images (one for Tornado and one for Node) and then combine them into a single container image via a multi-stage build. This does not feel optimal, as I have to run two start commands in CMD.
What is the preferable solution? Can you force Docker to couple containers (e.g. without specifying IPs)?
There is a links feature in Compose files: https://docs.docker.com/compose/compose-file/#links. However, Docker has marked it as deprecated and suggests using user-defined networks instead: https://docs.docker.com/network/.
P.S: Also pay attention to the notes:
- If you define both links and networks, services with links between them must share at least one network in common to communicate.
- This option is ignored when deploying a stack in swarm mode with a (version 3) Compose file.
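A minimal sketch of the user-defined-network approach for the Tornado/Node pair (image names are placeholders, and the overlay driver is only needed if you deploy to a swarm):

version: "3.8"
services:
  tornado:
    image: my-tornado:latest       # hypothetical image
    networks:
      - pair
  node:
    image: my-node:latest          # hypothetical image
    networks:
      - pair
networks:
  pair:
    driver: overlay

On the pair network the Tornado container can reach its Node counterpart as http://node:<port> without hard-coded IPs, and each image keeps a single process and a single CMD. Note that if you scale the services in swarm mode, the name node resolves to a VIP that load-balances across all Node replicas, so a strict one-to-one pairing is easier to guarantee by deploying one Tornado+Node pair per stack.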

How to link multiple docker swarm services?

I'm a huge fan of the docker philosophy (or at least I think I am). Even so, I'm still quite novice in the sense that I don't seem to grasp the intended way of using docker.
As I see it currently, there are two ways of using docker.
Create a container with everything I need for the app in it.
For example, I would like something like a Drupal site. I would then put nginx, php, mysql and the code into one container. I could run this as a service in swarm mode and scale it as needed. If I need another Drupal site, I would then run a second container/service that holds nginx, php, mysql and (slightly) different code. I would now need two images to run containers or services from.
Pros - Easy, everything I need is in a single container.
Cons - Cannot run each container on port 80 (so I need a reverse proxy or something). (Not sure, but I could imagine that) server load is higher, since there are multiple containers/services each running nginx, php and mysql.
Create 4 separate containers: 1 nginx container, 1 php container, 1 mysql container and 1 code/data container.
For example, I would like the same Drupal site. I could now run them all as a separate service and scale them across my servers as the amount of code containers (Drupal sites or other sites) increases. I would only need 1 image per container/service instead of a separate image for each site.
Pros - Modular, single responsibility per service (1 for the database, 1 for the webserver, etc.), easy to scale only the area that needs scaling (scale the database if requests increase, nginx if traffic increases, etc.).
Cons - I don't know how to make this work :).
Personally I would opt to make a setup according to the second option. Have a database container/service, nginx container/service etc. This seems much more flexible to me and makes more sense.
I am struggling, however, with how to make this work. How would I make the nginx service talk to the php service, and point the nginx config to the code folder in the data service, etc.? I have read some things about overlay networks, but that does not make clear to me how nginx would find php in a separate container/service.
I therefore have 2 (and a half) questions:
How is docker meant to be used (option 1 or 2 above or totally different)?
How can I link services together (make nginx look for php in a different service)?
(half) I know I am a beginner trying to grasp the concept, but setting up a simple webserver and running websites seems like a basic task (at least it is for me the conventional way), yet I can't seem to find my answers online anywhere. Am I totally off base in the way I think I would like to use Docker, or have I just not been looking well enough?
How is docker meant to be used (option 1 or 2 above or totally different)?
Up to you. I prefer option #2, but I have at times also used a mix of option #1 and option #2, so it all depends on the use case and which option looks better for it. At one of our clients we needed SSH, Nginx and PHP all in the same container, so we mixed #1 and #2: MySQL and Redis each in their own container, and the app in one container.
How can I link services together (make nginx look for php in a different service)?
Use docker-compose to define your services and docker stack to deploy them. You won't have to worry about wiring the services together; they can reach each other by their service names.
version: '3'
services:
  web:
    image: nginx
  db:
    image: mysql
    environment:
      - "MYSQL_ROOT_PASSWORD=root"
Now deploy using
docker stack deploy --compose-file docker-compose.yml myapp
In your nginx container you can reach MySQL by using its service name, db. Linking happens automatically, so you need not worry.
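Applied to the nginx/php split from the question, a rough extension of the same idea (paths, image tags and the mounted nginx config are assumptions) could be:

version: '3'
services:
  web:
    image: nginx
    ports:
      - "80:80"
    volumes:
      - ./code:/var/www/html:ro
      - ./default.conf:/etc/nginx/conf.d/default.conf:ro   # contains e.g. fastcgi_pass php:9000;
  php:
    image: php:8.2-fpm
    volumes:
      - ./code:/var/www/html
  db:
    image: mysql
    environment:
      - "MYSQL_ROOT_PASSWORD=root"

Because all three services share the stack's network, nginx resolves php and db by their service names, and the shared ./code volume plays the role of the separate code/data container.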
I know I am a beginner trying to grasp the concept, but setting up a simple webserver and running websites seems like a basic task (at least it is for me the conventional way), yet I can't seem to find my answers online anywhere. Am I totally off base in the way I think I would like to use Docker, or have I just not been looking well enough?
There are lots of good resources available in the form of articles; you just need to look.
