How to pass environment variables to a front-end web application in nginx? - docker

I am using docker-compose with an image made by someone else, and I would like to use environment variables to configure the backend host and port dynamically.
docker-compose.yml
version: "3.7"
services:
  appfronted2:
    image: trafex/alpine-nginx-php7
    container_name: fronted2
    ports:
      - "80:8080"
    volumes:
      - ./fronted2:/var/www/html
    environment:
      - HOST_BACKEND=172.99.0.11
      - PORT_BACKEND=4000
    networks:
      tesis:
        ipv4_address: 172.99.0.13
and this is my JavaScript, where I would like to read those variables, but I can't get them:
api.js
const HOST = process.env.HOST_BACKEND || "127.0.0.1"
const PORT = process.env.PORT_BACKEND || "4000"
const URL_API = `http://${HOST}:${PORT}/api`

You are using an nginx web server container to serve your HTML and JS files. The web server serves these files to the browser as they are. This is different from using npm start, where the Node engine serves the HTML and JS files dynamically.
When your JS file runs in the client's browser, there is no variable called process.env.
Going over the comments on the following issue in Create React App might help you understand more:
https://github.com/facebook/create-react-app/issues/2353
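One workaround discussed in those comments is to have the container generate a small config file at startup and load it in index.html before your bundle, so the values end up on a global object instead of process.env. A rough sketch, assuming a hypothetical config.js and window.APP_CONFIG (you would still need something like envsubst in the container entrypoint to actually write the file from HOST_BACKEND and PORT_BACKEND):
config.js
// generated at container start; the values below are whatever the container wrote
window.APP_CONFIG = {
  HOST_BACKEND: "172.99.0.11",
  PORT_BACKEND: "4000"
};
api.js would then read the global instead of process.env:
const { HOST_BACKEND = "127.0.0.1", PORT_BACKEND = "4000" } = window.APP_CONFIG || {};
const URL_API = `http://${HOST_BACKEND}:${PORT_BACKEND}/api`;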
If you don't have more environment variables, the simplest solution is to use window.location.hostname and prepare or select the API URL accordingly.
app-config.js
let backendHost;
const hostname = window && window.location && window.location.hostname;
if (hostname === 'whatsgoodonmenu.com') {
  backendHost = 'https://api.whatsgoodonmenu.com';
} else {
  backendHost = 'http://localhost:8080';
}
export const API_ROOT = `${backendHost}`;
Using in component
import React from "react"
import { API_ROOT } from './app-config'

export default class UserCount extends React.Component {
  constructor(props) {
    super(props);
    this.state = {
      data: null,
    };
  }

  componentDidMount() {
    fetch(`${API_ROOT}/count`)
      .then(response => response.json())
      .then(data => this.setState({ data }));
  }

  render() {
    return (
      <label>Total visits: {this.state.data}</label>
    );
  }
}

Related

grpc client in docker can't reach server on host

I have a Node gRPC server running on localhost and my gRPC client is a Python Flask server. If the client also runs directly on localhost, everything works as intended. But once I host the client (Flask server) in a Docker container, it is unable to reach the gRPC server.
The error simply states:
RPC Target is unavaiable
I can call the Flask API from the host without issues. I also changed the server address from 'localhost' to 'host.docker.internal', which is resolved correctly. Not sure if I am doing something wrong or this just doesn't work. I greatly appreciate any help or suggestions. Thanks!
Code snippets of the server, client and docker-compose:
server.js (Node)
...
const port = 9090;
const url = `0.0.0.0:${port}`;

// gRPC Credentials
import { readFileSync } from 'fs';
let credentials = ServerCredentials.createSsl(
  readFileSync('./certs/ca.crt'),
  [{
    cert_chain: readFileSync('./certs/server.crt'),
    private_key: readFileSync('./certs/server.key')
  }],
  false
)
...
const server = new Server({
  "grpc.keepalive_permit_without_calls": 1,
  "grpc.keepalive_time_ms": 10000,
});
...
server.bindAsync(
  url,
  credentials,
  (err, port) => {
    if (err) logger.error(err);
    server.start();
  }
);
grpc_call.py (status_update is called by app.py)
import os
import logging as logger
from os.path import dirname, join

import config.base_pb2 as base_pb2
import config.base_pb2_grpc as base_pb2_grpc
import grpc

# Read in ssl files
def _load_credential_from_file(filepath):
    real_path = join(dirname(dirname(__file__)), filepath)
    with open(real_path, "rb") as f:
        return f.read()

# -----------------------------------------------------------------------------
def status_update(info, status, message=""):
    SERVER_CERTIFICATE = _load_credential_from_file("config/certs/ca.crt")
    SERVER_CERTIFICATE_KEY = _load_credential_from_file("config/certs/client.key")
    ROOT_CERTIFICATE = _load_credential_from_file("config/certs/client.crt")
    credential = grpc.ssl_channel_credentials(
        root_certificates=SERVER_CERTIFICATE,
        private_key=SERVER_CERTIFICATE_KEY,
        certificate_chain=ROOT_CERTIFICATE,
    )
    # grpcAddress = "http://localhost"
    grpcAddress = "http://host.docker.internal"
    grpcFull = grpcAddress + ":9090"
    with grpc.secure_channel(grpcFull, credential) as channel:
        stub = base_pb2_grpc.ProjectStub(channel)
        request = base_pb2.ContainerId(id=int(info), status=status)
        try:
            response = stub.ContainerStatus(request)
        except grpc.RpcError as rpc_error:
            logger.error("Error #STATUS_UPDATE")
            if rpc_error.code() == grpc.StatusCode.CANCELLED:
                logger.error("RPC Request got cancelled")
            elif rpc_error.code() == grpc.StatusCode.UNAVAILABLE:
                logger.error("RPC Target is unavaiable")
            else:
                logger.error(
                    f"Unknown RPC error: code={rpc_error.code()} message={rpc_error.details()}"
                )
            raise ConnectionError(rpc_error.code())
        else:
            logger.info(f"Received message: {response.message}")
    return
Docker-compose.yaml
version: "3.9"
services:
  test-flask:
    image: me/test-flask
    container_name: test-flask
    restart: "no"
    env_file: .env
    ports:
      - 0.0.0.0:8010:8010
    command: python3 -m flask run --host=0.0.0.0 --port=8010

OpenId with IdentityServer4

I am working on a project where we would like to use IdentityServer4 as a token server and have other services authenticate against it. I have a dev environment on Windows using Docker and Linux containers. I configured IdentityServer and it's working, and I configured the API client and it's working, but when I configured the MVC client to authenticate, it fails to reach the token server through Docker. I realized that Docker distinguishes between external and internal ports, so I configured the API and MVC clients this way.
MVC Client
services.AddAuthentication(opts =>
{
    opts.DefaultScheme = "Cookies";
    opts.DefaultChallengeScheme = "oidc";
})
.AddCookie("Cookies", opts =>
{
    opts.SessionStore = new MemoryCacheTicketStore(
        configuration.GetValue<int>("AppSettings:SessionTimeout"));
})
.AddOpenIdConnect("oidc", opts =>
{
    opts.ResponseType = "code id_token";
    opts.SignInScheme = CookieAuthenticationDefaults.AuthenticationScheme;
    opts.ClientId = "Mvc.Application";
    opts.ClientSecret = "Secret.Mvc.Application";
    opts.Authority = "http://authorization.server/";
    //opts.Authority = "http://localhost:5001/";
    //opts.MetadataAddress = "http://authorization.server/";
    opts.UsePkce = true;
    opts.SaveTokens = true;
    opts.RequireHttpsMetadata = false;
    opts.GetClaimsFromUserInfoEndpoint = true;
    opts.Scope.Add("offline_access");
    opts.Scope.Add("Services.Business");
    opts.ClaimActions.MapJsonKey("website", "website");
});
This part is working, because document discovery works. However, it fails to access the http://authorization.server URL, because it's an internal container address, not externally accessible through the web browser. So I tried to set two different URLs: MetadataAddress, from which the discovery document of the OpenID server should be fetched, and Authority, where all unauthorized requests are redirected. However, when I set both MetadataAddress and Authority in OpenIdConnectOptions when calling AddOpenIdConnect, it uses MetadataAddress instead of Authority. I checked the logs: discovery of the document is successful, because I'm hitting http://authorization.server/.well-known..., but it also initiates the authentication request to IdentityServer with the same URL, http://authorization.server/connect...
Api Client
services.AddAuthorization()
    .AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddIdentityServerAuthentication(opts =>
    {
        opts.RequireHttpsMetadata = false;
        opts.ApiName = "Api.Services.Business";
        opts.ApiSecret = "Secret.Api.Services.Business";
        opts.Authority = "http://authorization.server/";
    });
This is working fine using the internal container address.
IdentityServer configuration
services.AddIdentityServer(opt =>
{
    opt.IssuerUri = "http://authorization.server/";
})
    .AddAspNetIdentity<User>()
    .AddSigningCredential(Certificate.Get())
    .AddProfileService<IdentityProfileService>()
    .AddInMemoryApiResources(Configuration.ApiResources())
    .AddInMemoryIdentityResources(Configuration.IdentityResources())
    .AddInMemoryClients(Configuration.Clients());
Configuration.cs
public static IEnumerable<Client> Clients(string redirectUri, string allowedCorsOrigins)
{
    return new List<Client>
    {
        new Client
        {
            ClientId = "Services.Business",
            ClientName = "Api Business",
            AllowedGrantTypes = GrantTypes.ResourceOwnerPassword,
            AllowedScopes =
            {
                "Services.Business"
            },
            ClientSecrets =
            {
                new Secret("Secret.Services.Business".Sha256())
            }
        },
        new Client
        {
            ClientId = "Mvc.Application",
            ClientName = "Mvc Application",
            RequireConsent = false,
            AllowOfflineAccess = true,
            AllowedGrantTypes = GrantTypes.Hybrid,
            AllowedScopes =
            {
                "Services.Business",
                IdentityServerConstants.StandardScopes.OpenId,
                IdentityServerConstants.StandardScopes.Profile
            },
            ClientSecrets =
            {
                new Secret("Secret.Mvc.Application".Sha256())
            },
            RedirectUris =
            {
                $"{redirectUri}/signin-oidc"
            },
            PostLogoutRedirectUris =
            {
                $"{redirectUri}/signout-callback-oidc"
            }
        }
    };
}
Docker-compose.yml
version: '3.4'
networks:
  fmnetwork:
    driver: bridge
services:
  authorization.server:
    image: authorization.server
    container_name: svc.authorization.server
    build:
      context: .
      dockerfile: Authorization.Server/Dockerfile
    ports:
      - "5000:80"
      - "5100:443"
    environment:
      ASPNETCORE_HTTPS_PORT: 5100
      ASPNETCORE_ENVIRONMENT: Staging
      ASPNETCORE_URLS: "https://+;http://+"
      ASPNETCORE_Kestrel__Certificates__Default__Password: "devcertaspnet"
      ASPNETCORE_Kestrel__Certificates__Default__Path: /root/.dotnet/https/aspnetapp.pfx
    depends_on:
      - sql.server
    volumes:
      - D:\Docker\Data\Fm:/root/.dotnet/https
      - D:\Docker\Data\Fm\Logs:/Fm.Logs
    networks:
      - fmnetwork
  services.business:
    image: services.business
    container_name: api.services.business
    build:
      context: .
      dockerfile: Services.Business/Dockerfile
    ports:
      - "5001:80"
      - "5101:443"
    environment:
      ASPNETCORE_ENVIRONMENT: Staging
      ASPNETCORE_HTTPS_PORT: 5101
      ASPNETCORE_URLS: "https://+;http://+"
      ASPNETCORE_Kestrel__Certificates__Default__Password: "devcertaspnet"
      ASPNETCORE_Kestrel__Certificates__Default__Path: /root/.dotnet/https/aspnetapp.pfx
    depends_on:
      - sql.server
    volumes:
      - D:\Docker\Data\Fm:/root/.dotnet/https
      - D:\Docker\Data\Fm\Logs:/Fm.Logs
    networks:
      - fmnetwork
  mvc.application:
    image: mvc.application
    container_name: svc.mvc.application
    build:
      context: .
      dockerfile: Mvc.Application/Dockerfile
    ports:
      - "5002:80"
      - "5102:443"
    environment:
      ASPNETCORE_ENVIRONMENT: Staging
      ASPNETCORE_HTTPS_PORT: 5102
      ASPNETCORE_URLS: "https://+;http://+"
      ASPNETCORE_Kestrel__Certificates__Default__Password: "devcertaspnet"
      ASPNETCORE_Kestrel__Certificates__Default__Path: /root/.dotnet/https/aspnetapp.pfx
    volumes:
      - D:\Docker\Data\Fm:/root/.dotnet/https
      - D:\Docker\Data\Fm\Logs:/Fm.Logs
    networks:
      - fmnetwork
I just faced this same issue and was able to solve it as follows.
Some things to keep in mind:
This is not an issue with Identity Server itself but with the mismatch between the internal Docker URL (http://authorization.server) that your container sees and the local host URL (http://localhost:5001) that your browser sees.
You should keep using the local URL for Identity Server (http://localhost:5001) and add a special case to handle the container-to-container communication.
The following fix is only for development when working with Docker (Docker Compose, Kubernetes), so ideally you should check for the environment (IsDevelopment extension method) so the code is not used in production.
IdentityServer configuration
services.AddIdentityServer(opt =>
{
    if (Environment.IsDevelopment())
    {
        // It is not advisable to override this in production
        opt.IssuerUri = "http://localhost:5001";
    }
})
MVC Client
services.AddAuthentication(... /*Omitted for brevity*/)
    .AddOpenIdConnect("oidc", opts =>
    {
        // Your working, production ready configuration goes here
        // It is important this matches the local URL of your identity server, not the Docker internal URL
        opts.Authority = "http://localhost:5001";
        if (Environment.IsDevelopment())
        {
            // This will allow the container to reach the discovery endpoint
            opts.MetadataAddress = "http://authorization.server/.well-known/openid-configuration";
            opts.RequireHttpsMetadata = false;
            opts.Events.OnRedirectToIdentityProvider = context =>
            {
                // Intercept the redirection so the browser navigates to the right URL in your host
                context.ProtocolMessage.IssuerAddress = "http://localhost:5001/connect/authorize";
                return Task.CompletedTask;
            };
        }
    })
You can tweak the code a little bit by passing said URLs via configuration.

Accessing chrome devtools protocol in docker grid

My tests run against a Docker grid with Selenium Docker images for the hub and Chrome. What I am trying to do is access the Chrome DevTools Protocol in the Chrome node so that I can access/intercept a request. Any help is appreciated.
I was able to get it working locally without Docker, but could not figure out a way to connect to DevTools in the Chrome node of the Docker grid. Below are my docker-compose and code.
docker compose
version: "3.7"
services:
  selenium_hub_ix:
    container_name: selenium_hub_ix
    image: selenium/hub:latest
    environment:
      SE_OPTS: "-port 4445"
    ports:
      - 4445:4445
  chrome_ix:
    image: selenium/node-chrome-debug:latest
    container_name: chrome_node_ix
    depends_on:
      - selenium_hub_ix
    ports:
      - 5905:5900
      - 5903:5555
      - 9222:9222
    environment:
      - no_proxy=localhost
      - HUB_PORT_4444_TCP_ADDR=selenium_hub_ix
      - HUB_PORT_4444_TCP_PORT=4445
      - NODE_MAX_INSTANCES=5
      - NODE_MAX_SESSION=5
      - TZ=America/Chicago
    volumes:
      - /dev/shm:/dev/shm
Here is sample code showing how I got it working locally without the grid (ChromeDriver on my Mac):
const CDP = require('chrome-remote-interface');
let webDriver = require("selenium-webdriver");

module.exports = {
  async openBrowser() {
    this.driver = await new webDriver.Builder().forBrowser("chrome").build();
    let session = await this.driver.session_;
    let debuggerAddress = await session.caps_.map_.get("goog:chromeOptions").debuggerAddress;
    let AddressString = debuggerAddress.split(":");
    console.log(AddressString);
    try {
      const protocol = await CDP({
        port: AddressString[1]
      });
      const { Network, Fetch } = protocol;
      await Fetch.enable({
        patterns: [{
          urlPattern: "*",
        }]
      });
      await Fetch.requestPaused(async ({
        interceptionId,
        request
      }) => {
        console.log(request);
      });
    } catch (err) {
      console.log(err.message);
    }
    return this.driver;
  },
}
When it is the grid, I just change the way I build the driver to the following:
this.driver = await new webDriver.Builder().usingServer(process.env.SELENIUM_HUB_IP).withCapabilities(webDriver.Capabilities.chrome()).build();
With that I am getting the port number, but I could not create a CDP session and I get a connection refused error.

Masstransit in docker using Request/Response model, Request Consumer exception, host not found while responding

I'm quite new to MassTransit/RabbitMQ and I encountered a problem I cannot deal with.
I have a RabbitMQ server running in Docker, and also a small microservice in a Docker container which consumes an event. Besides this, I run a Windows service on the host machine whose task is to send the event to the microservice via the MassTransit request/response model. The interesting thing is that the event arrives at the consumer as expected, but when I try to respond with context.RespondAsync from the Consume method I get an exception:
R-FAULT rabbitmq://autbus/exi_bus 80c60000-eca5-3065-0093-08d62a09d168 HwExi.Extensions.Events.ReservationCreateOrUpdateEvent HwExi.Api.Consumers.ReservationCrateOrUpdateConsumer(00:00:07.8902444) The host was not found for the specified address: rabbitmq://127.0.0.1/bus-SI-GEPE-HwService.Api-oddyyy8cwwagkoscbdmnwncfrg?durable=false&autodelete=true, MassTransit.EndpointNotFoundException: The host was not found for the specified address: rabbitmq://127.0.0.1/bus-SI-GEPE-HwService.Api-oddyyy8cwwagkoscbdmnwncfrg?durable=false&autodelete=true
I'm using this model for messaging between microservices without any problem, and it works properly on another queue.
Here is the YAML of the microservice / bus:
exiapi:
  image: exiapi
  build:
    context: .
    dockerfile: Service/HwExi.Api/Dockerfile
  ports:
    - "54542:80"
  environment:
    "BUS_USERNAME": "guest"
    "BUS_PASSWORD": "guest"
    "BUS_HOST": "rabbitmq://autbus"
    "BUS_URL": "exi_bus"
autbus:
  image: rabbitmq:3-management
  hostname: autbus
  ports:
    - "15672:15672"
    - "5672:5672"
    - "5671:5671"
  volumes:
    - ~/rabbitmq:/var/lib/rabbitmq/mnesia
The config of the Windows service:
"Bus": {
"Username": "guest",
"Password": "guest",
"Host": "rabbitmq://127.0.0.1",
"Url": "exi_bus"
},
The windows service connects like this:
var builder = new ContainerBuilder();
builder.Register(context =>
{
    return Bus.Factory.CreateUsingRabbitMq(rmq =>
    {
        var host = rmq.Host(new Uri(options.Value.Bus.Host), "/", h =>
        {
            h.Username(options.Value.Bus.Username);
            h.Password(options.Value.Bus.Password);
        });
        rmq.ExchangeType = ExchangeType.Fanout;
    });
}).As<IBusControl>().As<IBus>().As<IPublishEndpoint>().SingleInstance();
The microservice inside container connects like this
public static class BusExtension
{
    public static void InitializeBus(this ContainerBuilder builder, Assembly assembly)
    {
        builder.Register(context =>
        {
            return Bus.Factory.CreateUsingRabbitMq(rmq =>
            {
                var host = rmq.Host(new Uri(Constants.Bus.Host), "/", h =>
                {
                    h.Username(Constants.Bus.UserName);
                    h.Password(Constants.Bus.Password);
                });
                rmq.ExchangeType = ExchangeType.Fanout;
                rmq.ReceiveEndpoint(host, Constants.Bus.Url, configurator =>
                {
                    configurator.LoadFrom(context);
                });
            });
        }).As<IBusControl>().As<IBus>().As<IPublishEndpoint>().SingleInstance();
        builder.RegisterConsumers(assembly);
    }

    public static void StartBus(this IContainer container, IApplicationLifetime lifeTime)
    {
        var bus = container.Resolve<IBusControl>();
        var busHandler = TaskUtil.Await(() => bus.StartAsync());
        lifeTime.ApplicationStopped.Register(() => busHandler.Stop());
    }
}
Then the Windows service fires the event like this:
var reservation = ReservationRepository.Get(message.KeyId, message.KeySource);
var operation = await ReservationCreateOrUpdateClient.Request(new ReservationCreateOrUpdateEvent { Reservation = reservation });
if (!operation.Success)
{
    Logger.LogError("Fatal error while sending reservation create or update message to exi web service");
    return;
}
Finally, the microservice consumes the event like this:
public class ReservationCrateOrUpdateConsumer : IConsumer<ReservationCreateOrUpdateEvent>
{
    public async Task Consume(ConsumeContext<ReservationCreateOrUpdateEvent> context)
    {
        await context.RespondAsync(new MessageOperationResult<bool>
        {
            Result = true,
            Success = true
        });
    }
}
I'm using Autofac to register the request client in the Windows service:
Timeout = TimeSpan.FromSeconds(20);
ServiceAddress = new Uri($"{Configurarion.Bus.Host}/{Configurarion.Bus.Url}");
builder.Register(c => new MessageRequestClient<ReservationCreateOrUpdateEvent, MessageOperationResult<bool>>(c.Resolve<IBus>(), ServiceAddress, Timeout))
    .As<IRequestClient<ReservationCreateOrUpdateEvent, MessageOperationResult<bool>>>().SingleInstance();
Can anybody help me debug this? Also, please share your opinion on whether this structure is a proper one; maybe I should use HTTPS to send the message from the client machine to my microservice environment and convert it to the bus via a gateway, or would a similar approach be more suitable? Thanks

Grunt Livereload + Grunt Connect Proxy

I am using Rails for my API and AngularJS on the front end, and I am having some issues getting livereload / grunt-connect-proxy to work properly.
Here is the snippet from my gruntfile:
connect: {
  options: {
    port: 9000,
    // Change this to '0.0.0.0' to access the server from outside.
    hostname: 'localhost',
    livereload: 35729
  },
  proxies: [
    {
      context: '/api',
      host: 'localhost',
      port: 3000
    }
  ],
  livereload: {
    options: {
      open: true,
      base: [
        '.tmp',
        '<%= yeoman.app %>'
      ],
      middleware: function (connect, options) {
        var middlewares = [];
        var directory = options.directory || options.base[options.base.length - 1];
        // enable Angular's HTML5 mode
        middlewares.push(modRewrite(['!\\.html|\\.js|\\.svg|\\.css|\\.png$ /index.html [L]']));
        if (!Array.isArray(options.base)) {
          options.base = [options.base];
        }
        options.base.forEach(function (base) {
          // Serve static files.
          middlewares.push(connect.static(base));
        });
        // Make directory browse-able.
        middlewares.push(connect.directory(directory));
        return middlewares;
      }
    }
  },
  test: {
    options: {
      port: 9001,
      base: [
        '.tmp',
        'test',
        '<%= yeoman.app %>'
      ]
    }
  },
  dist: {
    options: {
      base: '<%= yeoman.dist %>'
    }
  }
}
If I run 'grunt build' everything works perfectly - off localhost:3000.
However, if I run 'grunt serve' it opens a window at 127.0.0.1:9000 and I get 404s on all my API calls.
Also, under serve it is mangling the background images from a CSS file; I get this warning:
Resource interpreted as Image but transferred with MIME type text/html: "http://127.0.0.1:9000/images/RBP_BG.jpg"
I haven't done this before - so chances are I am doing it all wrong.
I don't like having that much code in your connect.livereload.middleware configuration.
Is it all necessary?
Take a look at this commit - chore(yeoman-gruntfile-update): configured grunt-connect-proxy in some of my projects.
The backend is Django.
Ports: frontend 9000, backend 8000.
generator-angular was at v0.6.0 when the project was generated.
My connect.livereload.middleware configuration was based on: https://stackoverflow.com/a/19403176/1432478
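For reference in case that link goes stale, the essential part of such a setup is pushing the proxy request handler from grunt-connect-proxy into the middleware stack; a minimal sketch (assuming the proxyRequest helper that grunt-connect-proxy exposes under lib/utils) looks roughly like this:
var proxySnippet = require('grunt-connect-proxy/lib/utils').proxyRequest;
// inside connect.livereload.options:
middleware: function (connect, options) {
  var middlewares = [];
  // forward /api calls to the backend defined under connect.proxies
  middlewares.push(proxySnippet);
  // then serve the static files as before
  options.base.forEach(function (base) {
    middlewares.push(connect.static(base));
  });
  return middlewares;
}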
This is an old post, but please make sure that you actually initialize the proxy in the grunt serve task by calling configureProxies before livereload.
grunt.task.run([
  'clean:server',
  'bower-install',
  'concurrent:server',
  'autoprefixer',
  'configureProxies',
  'connect:livereload',
  'watch'
]);
Should work fine afterwards.
I had a similar problem to yours, but I don't use Yeoman.
My solution was to add the 'configureProxies' task.
These are my tasks:
grunt.registerTask('serve', ['connect:livereload', 'configureProxies',
  'open:server', 'watch']);
As for 'connect:livereload' and 'configureProxies': after my testing, the order of these two tasks does not affect the result.
GitHub: grunt-connect-proxy
