How to use Dapr to communicate between services in Docker Compose

I'm trying to learn Dapr and Docker Compose at the same time, but I'm running into some problems. I have a very basic docker-compose.yaml, shown below:
version: "3.7"
services:
  python-service:
    image: python-image
  java-service:
    image: java-image
My goal is to get these services communicating over Dapr (currently they are simple hello-world programs; I'm trying to get the connection working first).
My goal architecture would be something like:
[python-service][Dapr-sidecar]
[java-service][Dapr-sidecar]
The services would talk to their sidecars, and the sidecars would talk to each other over a network. I'm quite stumped on how to achieve this, and I can't seem to find any guides online that fit my exact case.
I tried to follow the guide here: https://stackoverflow.com/a/66402611/17494949. However, it gave me a bunch of errors, seemed to use some other system, and didn't explain how to run it. Any help would be appreciated.
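For reference, the official dapr/samples repository has a hello-docker-compose example that follows roughly this shape: one daprd sidecar container per app, attached to that app's network namespace via network_mode. A minimal sketch, assuming the Python app listens on port 5000 and the Java app on 8080 (adjust -app-port to whatever your apps actually use):

```yaml
version: "3.7"
services:
  python-service:
    image: python-image
  python-service-dapr:
    image: "daprio/daprd:latest"
    command: ["./daprd", "-app-id", "python-service", "-app-port", "5000"]
    # Share the app container's network namespace so the sidecar
    # can reach the app on localhost.
    network_mode: "service:python-service"
    depends_on:
      - python-service
  java-service:
    image: java-image
  java-service-dapr:
    image: "daprio/daprd:latest"
    command: ["./daprd", "-app-id", "java-service", "-app-port", "8080"]
    network_mode: "service:java-service"
    depends_on:
      - java-service
```

Each app then talks to its own sidecar on localhost (Dapr's default HTTP port is 3500); for example, the Python service could invoke the Java service with a GET to http://localhost:3500/v1.0/invoke/java-service/method/hello (the method name here is hypothetical). If you later use actors or state stores, the official sample also adds a daprio/placement service that the sidecars are pointed at.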

Related

How to make a Docker architecture with Traefik and Snort3

I'm trying to find a way to do the following architecture with Docker, Traefik and Snort3.
I need to understand what is possible with these two systems to make this setup secure.
In the left image, Snort3 sits in the middle of the architecture; in the right image it runs alongside Traefik.
I don't know what is possible or what the best practices are here, and I'm also looking for demo material,
like an example docker-compose file with a shared network that illustrates this architecture.
The API in this case is NodeJS, but the most important part is how to place Snort3, with Docker, between a Traefik container and the API container.
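Docker has no built-in way to chain containers through an IPS, but one possible sketch (image names, labels and the Snort command line are all assumptions, not a tested setup) is to run Snort3 in IDS mode sharing the API container's network namespace, so it sees every packet Traefik forwards to the API:

```yaml
version: "3.7"
services:
  traefik:
    image: "traefik:v2.10"
    command: ["--providers.docker=true", "--entrypoints.web.address=:80"]
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
  api:
    image: node:18            # placeholder for your NodeJS API image
    labels:
      - "traefik.http.routers.api.rule=PathPrefix(`/`)"
  snort:
    image: ciscotalos/snort3  # assumed image name; you may need to build your own
    network_mode: "service:api"   # share the API's network namespace to sniff its traffic
    cap_add:
      - NET_ADMIN
    command: ["snort", "-i", "eth0", "-A", "alert_fast"]
```

This gives detection (IDS), not inline blocking; true inline IPS between the two containers would need something like a dedicated bridge network and Snort running in afpacket inline mode, which is considerably more involved.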

Dockerizing composer-playground with deployed (embedded) business network archive

I found out there is hyperledger/composer-playground as a docker image. It's easily startable using
docker run --name composer-playground --publish 8080:8080 --detach hyperledger/composer-playground
Now I want to make a Dockerfile out of it that can serve an existing Business Network Definition as demo application. It should be embedded, so no real Fabric network is required. What possibilities do I have to accomplish that?
First idea: Card file structures could be copied into /home/composer/.composer/cards, but as far as I understand, those cards would need the embedded connection type; otherwise a real Fabric network is required.
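The first idea could be sketched as a small Dockerfile layered on the existing image (the local cards/ directory is hypothetical, and the cards it contains would need to use the embedded connection type):

```dockerfile
# Sketch: bake pre-exported business network cards into the playground image.
FROM hyperledger/composer-playground
# Assumes you exported cards (with embedded connection profiles) to ./cards locally.
COPY cards/ /home/composer/.composer/cards/
EXPOSE 8080
```

Whether the playground actually picks these up without a running Fabric is exactly the open question below.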
Second idea: Is there some API endpoint that could be queried to create an embedded network for a .bna file?
Interesting idea, and with the direction of Composer Playground cropping up a bit recently, it would be a good one to discuss on a Composer community call.
As for how things are now, I think you'll have to set everything up with a real Fabric. I haven't seen a Dockerfile that does that, but it seems doable. The hosted playground does everything in local storage and PouchDB (IndexedDB), so I don't think you would be able to get a demo .bna in there without changes to the playground.
One thing I had pondered in the past was making it possible to configure where the playground looks for sample networks, and that could even include the primary 'get started' network.
Might that help in this case? It could be worth opening a GitHub issue to explore the use cases if that does sound useful (pull requests gratefully accepted!)

How to link multiple docker swarm services?

I'm a huge fan of the docker philosophy (or at least I think I am). Even so, I'm still quite novice in the sense that I don't seem to grasp the intended way of using docker.
As I see it currently, there are two ways of using docker.
Option 1: Create a container with everything I need for the app in it.
For example, I would like something like a Drupal site. I would then put nginx, php, mysql and code into a container. I could run this as a service in swarm mode and scale it as needed. If I need another Drupal site, I would then run a second container/service that holds nginx, php and mysql and (slightly) different code. I would now need 2 images to run a container or service off.
Pros - Easy; everything I need is in a single container.
Cons - Cannot run each container on port 80 (so I'd need a reverse proxy or something). (Not sure, but I could imagine) server load is higher, since multiple containers/services are each running nginx, php and mysql.
Option 2: Create 4 separate containers: 1 nginx container, 1 php container, 1 mysql container and 1 code/data container.
For example, I would like the same Drupal site. I could now run them all as a separate service and scale them across my servers as the amount of code containers (Drupal sites or other sites) increases. I would only need 1 image per container/service instead of a separate image for each site.
Pros - Modular, single responsibility per service (1 for the database, 1 for the webserver, etc.), and easy to scale only the area that needs scaling (the database if requests increase, nginx if traffic increases, etc.).
Cons - I don't know how to make this work :).
Personally I would opt to make a setup according to the second option. Have a database container/service, nginx container/service etc. This seems much more flexible to me and makes more sense.
I am struggling, however, with how to make this work. How would I make the nginx service look at the php service, point the nginx config to the code folder in the data service, etc.? I have read some things about an overlay network, but that does not make clear to me how nginx would find php in a separate container/service.
I therefore have 2 (and a half) questions:
How is docker meant to be used (option 1 or 2 above or totally different)?
How can I link services together (make nginx look for php in a different service)?
(half) I know I am a beginner trying to grasp the concept, but setting up a simple webserver and running websites seems like a basic task (at least, it is for me the conventional way), yet I can't seem to find answers anywhere online. Am I totally off base in the way I want to use docker, or have I not been looking hard enough?
How is docker meant to be used (option 1 or 2 above or totally different)?
Up to you. I prefer Option 2, but I have at times also used a mix of Option 1 and Option 2, so it all depends on the use case and which option fits it better. At one of our clients we needed SSH, nginx and PHP all in the same container, so we mixed #1 and #2: MySQL and Redis in their own containers, and the app in one container.
How can I link services together (make nginx look for php in a different service)?
Use docker-compose to define your services and docker stack to deploy them. You won't have to worry about wiring services together by name; they can reach each other via their service names automatically.
version: '3'
services:
  web:
    image: nginx
  db:
    image: mysql
    environment:
      - "MYSQL_ROOT_PASSWORD=root"
Now deploy using
docker stack deploy --compose-file docker-compose.yml myapp
In your nginx container you can reach MySQL by using its service name, db. Linking happens automatically, so you need not worry.
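The same name resolution answers the original nginx-and-php question: if a service named php runs php-fpm on its default port 9000, a minimal nginx vhost (the service name and code path here are assumptions, e.g. a volume shared by both services) can pass PHP requests to it by service name:

```nginx
# Hypothetical vhost: "php" is the stack service running php-fpm;
# /var/www/html is an assumed code path shared with the php container.
server {
    listen 80;
    root /var/www/html;
    index index.php;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass php:9000;   # resolved via the overlay network's built-in DNS
    }
}
```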
I know I am a beginner trying to grasp the concept, but setting up a simple webserver and running websites seems like a basic task (at least, it is for me the conventional way), yet I can't seem to find answers anywhere online. Am I totally off base in the way I want to use docker, or have I not been looking hard enough?
There are a lot of good resources available in the form of articles; you just need to look.

Setup commands for Mesos and Kubernetes on Docker?

When trying to move a web container (Tomcat) to the latest technologies for better growth and support, I came across this blog. This part seems ideal for my needs:
... we are also incorporating Kubernetes into Mesos to manage the deployment of Docker workloads. Together, we provide customers with a commercial-grade, highly-available and production-ready compute fabric.
Now, how do I set up a local test environment to try this out? All these technologies seem interchangeable! I can run Docker on Mesos, Mesos on Docker, etc. Prepackaged instances let me run on other clouds. Other videos also make this seem great! Running in the cloud is not a viable (allowed) option for me. Unfortunately, I cannot find 'instructions' on how to set up the configuration described/marketed/advertised.
If I am new to these technologies, and know there will be a learning curve, is there a way to get initialized for doing such a "simple task": running a tomcat container on a Docker machine that is running Mesos/Kubernetes? That is, without spending days trying to learn and figure out each individual part! This is the picture from the blog site referenced:
Assuming that I "only" know how to create docker containers (say, for centos-7): what commands, in what order (i.e. the secret 'code'), do I need to configure a small (2 or 3 node) local environment to try running Tomcat?
Although I searched quite a bit, apparently not enough! Someone pointed me to this:
https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/mesos-docker.md
which is pretty close to exactly what I was looking for.

Kubernetes Guestbook Example Not Loading Page

New question:
I've followed the guestbook tutorial here: https://github.com/GoogleCloudPlatform/kubernetes/blob/master/examples/guestbook/README.md
And the output of my commands match their outputs exactly. When I try to access the guestbook web server, the page does not load.
Specifically, I have the frontend on port 80, I have enabled http/s connections on the console for all instances, I have run the command:
gcloud compute firewall-rules create --allow=tcp:<PortNumberHere> --target-tags=TagNameHere TagNameHere-<PortNumberHere>
and also
cluster/kubectl.sh get services guestbook -o template --template='{{(index .status.loadBalancer.ingress 0).ip}}'
But when I run curl -v http://:, the connection simply times out.
What am I missing?
Old Question - Ignore:
Edit: Specifically, I have 3 separate docker images. How can I tell kubernetes to run these three images?
I have 3 docker images, each of which use each other to perform their tasks. One is influxdb, the other is a web app, and the third is an engine that does data processing.
I have managed to get them working locally on my machine with docker-compose, and now I want to deploy them on Google Compute Engine so that I can access them over the web. I also want to be able to scale the software. I am completely, 100% new to cloud computing and have never used GCE before.
I have looked at Kubernetes, and followed the docs, but I cannot get it to work on a gce instance. What am I missing/not understanding? I have searched and read all the docs I could find, but I still don't feel any closer to getting it than before.
https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/gce.md
To get best results on SO you need to ask specific questions.
But, to answer a general question with a general answer, Google's Cloud Platform Kubernetes wrapper is Container Engine. I suggest you run through the Container Engine tutorials, paying careful attention to the configuration files, before you attempt to implement your own solution.
See the guestbook to get started: https://cloud.google.com/container-engine/docs/tutorials/guestbook
To echo what rdc said, you should definitely go through the tutorial, which will help you understand the system better. But the short answer to your question is that you want to create a ReplicationController and specify the containers' information in the pod template.
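That last suggestion could be sketched roughly as follows (the name, label, image and port are all hypothetical placeholders for one of the three containers; ReplicationController was the replication primitive of that era, since superseded by Deployments):

```yaml
# Sketch: run 3 replicas of a hypothetical web app container.
apiVersion: v1
kind: ReplicationController
metadata:
  name: webapp
spec:
  replicas: 3
  selector:
    app: webapp
  template:                # pod template: container info goes here
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: my-webapp:latest   # placeholder for your own image
        ports:
        - containerPort: 8080
```

Each of the three images would get its own controller like this, plus a Service so they can find each other by name.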
