How to access non-container app data/logs from an Azure IoT Edge container app on the gateway/server? - azure-iot-edge

In Azure IoT Edge, how can you access non-container app data/logs from an Azure IoT Edge container app on the gateway/server and push that data to the Azure IoT Hub cloud?

At the end of the day, an edge module is just a Docker container running on a machine, and inside that container you can run services that listen for data posted to them. Some possible solutions:
An HTTP server running in your edge module container, with producers posting data to the RESTful API it exposes. Once the data is received, push it to IoT Hub using routes in edgeHub.
You can also run a consumer in the container that listens to a message broker and passes that data to IoT Hub using edgeHub routes.
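For illustration, a minimal sketch of the edgeHub route that would forward such messages upstream to IoT Hub, assuming (hypothetically) that the HTTP-server module is named httpserver and writes received data to an output called output1; this fragment belongs in the $edgeHub section of the deployment manifest:

{
  "$edgeHub": {
    "properties.desired": {
      "routes": {
        "httpserverToCloud": "FROM /messages/modules/httpserver/outputs/output1 INTO $upstream"
      }
    }
  }
}

The actual module and output names depend on how your module code is written.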

Related

Communicate between 2 different Docker networks

I am trying to communicate between two different Docker networks, but they cannot seem to reach each other.
I have one Docker network for a group mail server container.
I have another Docker network for some websites: a container running nginx, a container running php-fpm, and a container running mariadb.
I am trying to connect to the mail server container from a PHP script (in the php-fpm container) to send an email, using the domain pointing to the mail server's external IP (smtp.example.com). The mail server is running fine and can be reached and used from external nodes; it only fails when reached from a container on a different Docker network.
How can I access the mail server network from a different Docker network? I don't want to consolidate the two networks into one.
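Roughly, my web stack looks like this (a simplified sketch; real names and images differ), with the mail server defined in a separate compose project on its own network:

services:
  nginx:
    image: nginx
    networks:
      - web_net
  php-fpm:
    image: php:fpm
    networks:
      - web_net
  mariadb:
    image: mariadb
    networks:
      - web_net
networks:
  web_net:
    driver: bridge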

JHipster - unable to use a gateway app when deploying everything on the Docker host except the gateway itself (mixed Docker and local deployment)

I have some JHipster Spring microservice and gateway projects. I deployed all of them on one host using Docker, except the gateway, which I started on another host.
I use Keycloak for OAuth authentication.
Everything works fine when I deploy all of the microservices, databases, and gateways as Docker containers on a Docker network using docker-compose.
But it doesn't work when I deploy everything on Docker except the gateway, i.e. when the gateway resides outside the Docker-created network. The motivation for this is that I want my UI programmer to be able to run the gateway on his own PC and use the microservices deployed on the server host. Just for ease of UI development, I need to be able to run this single gateway with gradle bootRun -Pprod.
I used a technique to assign a separate IP to each container on my Docker network, called Docker Macvlan networking, so that every container on the host has its own IP address on the physical network and each of these containers is visible to other hosts on the network.
The problem is that in the normal Docker deployment (when the gateway is deployed on a Docker network on the same host) everything works fine, but in my scenario, after a successful login, every microservice returns error 401.
The microservice logs this error:
o.s.s.oauth2.client.OAuth2RestTemplate : Setting request Accept header to [application/json, application/x-jackson-smile, application/cbor, application/*+json]
o.s.s.oauth2.client.OAuth2RestTemplate : GET request for "http://keycloak:9080/auth/realms/jhipster/protocol/openid-connect/userinfo" resulted in 401 (Unauthorized); invoking error handler
n.m.b.s.o.CachedUserInfoTokenServices : Could not fetch user details: class org.springframework.security.oauth2.client.resource.OAuth2AccessDeniedException, Unable to obtain a new access token for resource 'null'. The provider manager is not configured to support it.
p.a.OAuth2AuthenticationProcessingFilter : Authentication request failed: error="invalid_token", error_description="token string here"
It says that the token is invalid, yet the same mechanism works when everything is deployed on the same host in Docker. Is it Keycloak that prevents the token from being validated for external hosts? I personally doubt that, because it didn't prevent me from logging into the gateway successfully, and I just checked Keycloak: it is up, started with -b 0.0.0.0.
Please help me get the gateway up and running with just gradle bootRun -Pprod.
In summary, I could rephrase my question as: I just want the UI developer to be able to test his Angular/Spring gateway project on his own PC while the other services are deployed on a powerful server using Docker (authentication via Keycloak), and it is not possible to deploy those other services on the UI developer's own PC. How can I do this in JHipster?
Add server.use-forward-headers=true to your config when using the gateway.
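In a JHipster gateway this would typically go in the prod YAML config (assuming the default src/main/resources/config/application-prod.yml location); a minimal sketch of the YAML form of that property:

server:
  use-forward-headers: true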

Nodejs Docker Development microservices

I'm building an application with a microservices architecture.
Basically, my app looks like this:
API GATEWAY (port 3000) => USERS-SERVICE (port 9090), AUTH-SERVICE (port 8080), SEND-SMS-SERVICE (port 7070).
Everything works fine so far.
Now I am trying to introduce Docker into my project. I built an image for each service
and run a container instance of each on my local machine.
Now I want to develop a new service, Customer-Service, and this service runs on http://localhost:3030.
Question:
1) How can I request http://localhost:3030 from the API gateway if, in development, I run the api-gateway in a container?
You must understand the network concept: when you start independent Docker containers and do not define a network, they will be unreachable from each other.
There is another thing: you CAN'T reach a microservice hosted in one Docker container from another container using localhost. localhost is 127.0.0.1, a call to the local machine itself. The concept of Docker is like "different machines running on the same machine" - similar to a virtual machine, except that Docker shares the host machine's kernel.
You can reach another container in two ways:
Configure it on the host network, which I do not recommend.
Create a network, add every container instance to this network, and call the other microservices using the container name, e.g. http://my-service-1:3400/api/v1/post (see the sketch below).
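A minimal sketch of the second option with the plain Docker CLI (the network, container, and image names are just placeholders):

# create a user-defined bridge network
docker network create app-net
# start each service on that network; the container name becomes its DNS name
docker run -d --name my-service-1 --network app-net my-service-1-image
docker run -d --name api-gateway --network app-net -p 3000:3000 api-gateway-image
# from inside api-gateway, the other service is now reachable as http://my-service-1:3400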
I recommend you use docker-compose.
This is one of my repositories; I created it to share a Node app using JWT, and the project uses Docker and docker-compose:
https://github.com/camiloperezv/jwt-template
As you can see, I define a networks attribute in the docker-compose.yml and use this network in all of my services.
In the services section you put all your microservices, and in the code you make the HTTP requests using the container name instead of localhost or an IP address.
In my services I use build: . - this is for development purposes; in production you should use a pre-built Docker image instead of building it on the production server.
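A minimal docker-compose.yml sketch along those lines (service names, ports, and images are hypothetical and not taken from the repository above):

version: "3"
services:
  api-gateway:
    build: .
    ports:
      - "3000:3000"
    networks:
      - backend
  users-service:
    image: users-service:latest
    networks:
      - backend
networks:
  backend:
    driver: bridge

Inside api-gateway, the users service is then reachable as http://users-service:9090 instead of http://localhost:9090.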
Feel free to use my github code.
Regards
As far as I understand from the question, the new service Customer-Service runs on http://localhost:3030 on the host machine.
If so, the api-gateway Docker container should be started on the host network:
docker run --network host -d <api-gateway_image_name>
After this, Customer-Service will be reachable at localhost:3030 from the api-gateway container.

Running Client program and API in same Docker Machine

I have an API running as one container and a client program running as another container. The API is accessible on the remote machine, but the client container is not communicating with the API; it fails with a socket error.

What is the Docker Engine?

When people talk about the 'Docker Engine', do they mean both the Client and the Daemon? Or is it something else entirely?
As I see it, there is a Docker Client and a Docker Daemon. The Client runs locally and connects to the Daemon, which does the actual running of the containers. The Client can also connect to a remote Daemon. Are these two together the Engine? Thanks.
The Docker Engine is the Docker Daemon running on a single host, installed with the Docker Client CLI. Here are the docs that answer this specific question.
On top of that, you can have a Swarm running that joins multiple hosts to horizontally scale and provide fault tolerance. And there are numerous other projects from Docker, like their Registry, Docker Cloud, and Universal Control Plane, that are each separate from the engine.
Docker Engine is a client-server application which comprises 3 components:
1. Client: the Docker CLI, the command-line tool we use to interact with Docker.
2. REST API: the client communicates with the server via a REST API; the commands issued by the client are sent to the server as REST calls, which is why the server can be on either the local or a remote machine.
3. Server: the local or remote host machine running a daemon process, which receives the commands and creates, manages, and destroys Docker objects such as images, containers, and volumes.
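To make the client/daemon split concrete, a small sketch of pointing the same client at a local and then a remote daemon (the remote address is hypothetical, and exposing the daemon over plain TCP without TLS is insecure):

# talk to the local daemon (the default)
docker version
# point the same docker client at a remote daemon
export DOCKER_HOST=tcp://remote-host:2375
docker ps

docker version prints separate Client and Server (Engine) sections, which is exactly the split described above.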
