docker-compose with rabbitmq - docker

I'm trying to set up a docker-compose script to start a dummy website, API, gateway and RabbitMQ (microservice approach).
Request pipeline:
Web >> Gateway >> API >> RabbitMQ
My docker-compose looks like this:
version: "3.4"
services:
web:
image: webclient
build:
context: ./WebClient
dockerfile: dockerfile
ports:
- "4000:4000"
depends_on:
- gateway
gateway:
image: gatewayapi
build:
context: ./GateWayApi
dockerfile: dockerfile
ports:
- "5000:5000"
depends_on:
- ordersapi
ordersapi:
image: ordersapi
build:
context: ./ExampleOrders
dockerfile: dockerfile
ports:
- "6002:6002"
depends_on:
- rabbitmq
rabbitmq:
image: rabbitmq:3.7-management
container_name: rabbitmq
hostname: rabbitmq
volumes:
- rabbitmqdata:/var/lib/rabbitmq
ports:
- "7000:15672"
- "7001:5672"
environment:
- RABBITMQ_DEFAULT_USER=rabbitmquser
- RABBITMQ_DEFAULT_PASS=some_password
This part of the pipeline works:
Web >> Gateway >> API
I get a response from the API on the website.
But when I try to push a message to rabbitmq from the API, I get the following error:
System.AggregateException: One or more errors occurred. (Connection failed) ---> RabbitMQ.Client.Exceptions.ConnectFailureException: Connection failed ---> System.Net.Internals.SocketExceptionFactory+ExtendedSocketException: Connection refused 127.0.0.1:7001
The RabbitMQ management GUI still works on the defined port 7000.
Requests to port 7001 do not.
However, if I start the API and RabbitMQ manually, it works like a charm. I simply start the API with the debugger (.NET Core + IIS, default settings, hitting F5 in VS), and this is the command I use to start the RabbitMQ image manually:
docker run -p 7001:5672 -p 7000:15672 --hostname localhost -e RABBITMQ_DEFAULT_USER=rabbitmquser -e RABBITMQ_DEFAULT_PASS=some_password rabbitmq:3.7-management
Update
This is how I inject the config in the .NET Core pipeline.
startup.cs
public void ConfigureServices(IServiceCollection services)
{
    // setup RabbitMQ
    var configSection = Configuration.GetSection("RabbitMQ");
    string host = configSection["Host"];
    int.TryParse(configSection["Port"], out int port);
    string userName = configSection["UserName"];
    string password = configSection["Password"];

    services.AddTransient<IConnectionFactory>(_ => new ConnectionFactory()
    {
        HostName = host,
        Port = port,
        UserName = userName,
        Password = password
    });

    services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_1);
}
controller.cs
private readonly IConnectionFactory _rabbitFactory;

public ValuesController(IConnectionFactory rabbitFactory)
{
    _rabbitFactory = rabbitFactory;
}

public void PublishMessage()
{
    try
    {
        using (var connection = _rabbitFactory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            string exchangeName = "ExampleApiController";
            string routingKey = "MyCustomRoutingKey";

            channel.ExchangeDeclare(exchange: exchangeName, type: "direct", durable: true);

            SendMessage("Payload to queue 1", channel, exchangeName, routingKey);
        }
    }
    catch (Exception e)
    {
        Console.WriteLine(e.InnerException);
    }
}

private static void SendMessage(string message, IModel channel, string exchangeName, string routingKey)
{
    byte[] body = Encoding.UTF8.GetBytes(message);

    channel.BasicPublish(exchange: exchangeName,
                         routingKey: routingKey,
                         basicProperties: null,
                         body: body);

    Console.WriteLine($" Sending --> Exchange: { exchangeName } Queue: { routingKey } Message: {message}");
}

I imagine that in your caller you set the RabbitMQ URL to localhost:7001. However, the caller is in a container, which does not have anything running on port 7001; RabbitMQ is listening on port 7001 on your host.
You need to change the URL to rabbitmq:5672 to use the internal network, or use host.docker.internal:7001 if you are on Windows or Mac with Docker 18.03+.
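For illustration, here is a minimal sketch (not the original code) of the factory registration using the compose-network address; it assumes the service name rabbitmq and the credentials from the compose file above:

// Sketch only: inside the compose network, target the service name and the
// container-internal AMQP port rather than the host mapping.
services.AddTransient<IConnectionFactory>(_ => new ConnectionFactory()
{
    HostName = "rabbitmq",   // compose service name, resolved by Docker's internal DNS
    Port = 5672,             // port RabbitMQ listens on inside the network (7001 is only the host mapping)
    UserName = "rabbitmquser",
    Password = "some_password"
});
// When the API runs outside Docker (e.g. F5 in Visual Studio), keep localhost:7001;
// from a container on Docker for Windows/Mac 18.03+, host.docker.internal:7001 also works.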

Related

Docker container can't reach another container using container name

I have two Docker containers running in the same network, and I want one of them to call the other via Spring WebClient.
I'm sure they are both in the same network; docker network inspect <network_ID> proves this.
AFAIK I can check that one container can talk to the other by running docker exec -ti attachment-loader-prim ping attachment-loader-sec.
When I run this, I see responses from attachment-loader-sec like 64 bytes from 172.21.0.5: seq=0 ttl=64 time=0.220 ms, which means they can communicate.
When I send a Postman request to attachment-loader-prim on its exposed port (localhost:8085), I expect it to call attachment-loader-sec via WebClient after some business logic, but at that step I get a 500 error with this message:
"finishConnect(..) failed: Connection refused:
attachment-loader-sec/172.21.0.5:80; nested exception is
io.netty.channel.AbstractChannel$AnnotatedConnectException:
finishConnect(..) failed: Connection refused:
attachment-loader-sec/172.21.0.5:80"
Both attachment-loader-prim and attachment-loader-sec can be accessed separately via Postman, and both respond without problems.
This is my docker-compose:
version: '3'
services:
  attachment-loader-prim:
    container_name: attachment-loader-prim
    build:
      context: ""
    restart: always
    image: attachment-loader:latest
    environment:
      SERVER_PORT: 8085
    networks:
      - loader_network
    expose:
      - 8085
    ports:
      - 8005:8005
      - 8085:8085
  attachment-loader-sec:
    container_name: attachment-loader-sec
    build:
      context: ""
    restart: always
    image: attachment-loader:latest
    environment:
      SERVER_PORT: 8086
    networks:
      - loader_network
    expose:
      - 8086
    ports:
      - 8006:8005
      - 8086:8086
networks:
  loader_network:
    driver: bridge
And this is the WebClient code that makes the call:
class RemoteServiceCaller(private val fetcherWebClientBuilder: WebClient.Builder) {
    suspend fun getAttachmentsFromRemote(id: String, params: List<Param>, username: String): Result? {
        val client = fetcherWebClientBuilder.build()
        val awaitExchange = client.post()
            .uri("/{id}/attachment", id)
            .contentType(MediaType.APPLICATION_JSON)
            .bodyValue(params)
            .header(usernameHeader, username)
            .accept(MediaType.APPLICATION_OCTET_STREAM)
            .awaitExchange {
                if (it.statusCode().is2xxSuccessful) {
                    handleSucessCode(it)
                } else it.createExceptionAndAwait().run {
                    LOG.error(this.responseBodyAsString, this)
                    throw ProcessingException(this)
                }
            }
        return awaitExchange
    }

    private suspend fun handleSucessCode(response: ClientResponse) {
        // some not important logic
    }
}
P.S. The base URI for the WebClient is defined as a config bean, like http://attachment-loader-sec/list.
All my investigation pointed to problems such as:
calling a container using localhost instead of the container name
containers not being in the same network
Neither of these seems relevant in my case.
Any ideas will be really appreciated.
The problem was calling the service without its port. The URL is now http://attachment-loader-sec:8086/list, which is correct. In my case I now get a 404, which means my URL path is not quite right, but that is outside the scope of this question.

Dapr golang Docker Compose - running into an "errorCode":"ERR_DIRECT_INVOKE","message":"invoke API is not ready" error

I am trying out Dapr for the first time, referring to the Dapr Go SDK at https://github.com/dapr/go-sdk.
I am trying to host a Dapr service written in Golang with Docker Compose on my Windows 10 machine (using VSCode), and I am running into an issue connecting to the service.
I have the Docker Compose file set up with a simple configuration, as follows, and I am trying to connect to the service via the Dapr API using curl:
golang service (taskapi service) => Dapr sidecar (taskapidapr)
I based it off of the example from https://github.com/dapr/go-sdk/blob/main/example/Makefile, but using Docker Compose.
When I try to connect to the service using
curl -d "ping" -H "Content-type: text/plain;charset=UTF-8"
"http://localhost:8300/v1.0/invoke/taskapi/method/echo"
I am running into the following error.
{"errorCode":"ERR_DIRECT_INVOKE","message":"invoke API is not ready"}
The Dapr logs in Docker show 'no mDNS apps to refresh.'; I am not sure whether this is the cause or how to handle it.
If anyone can point me to what I am missing, I would greatly appreciate it.
Thank you,
Athadu
golang package
package main

import (
	"context"
	"errors"
	"fmt"
	"log"
	"net/http"

	"github.com/dapr/go-sdk/service/common"
	daprd "github.com/dapr/go-sdk/service/http"
)

func main() {
	port := "8085"
	address := fmt.Sprintf(":%s", port)
	log.Printf("Creating New service at %v port", address)
	log.Println()

	// create a Dapr service (e.g. ":8080", "0.0.0.0:8080", "10.1.1.1:8080" )
	s := daprd.NewService(address)

	// add a service to service invocation handler
	if err := s.AddServiceInvocationHandler("/echo", echoHandler); err != nil {
		log.Fatalf("error adding invocation handler: %v", err)
	}

	if err := s.Start(); err != nil && err != http.ErrServerClosed {
		log.Fatalf("error listening: %v", err)
	}
}

func echoHandler(ctx context.Context, in *common.InvocationEvent) (out *common.Content, err error) {
	if in == nil {
		err = errors.New("invocation parameter required")
		return
	}
	log.Printf(
		"echo - ContentType:%s, Verb:%s, QueryString:%s, %s",
		in.ContentType, in.Verb, in.QueryString, in.Data,
	)
	out = &common.Content{
		Data:        in.Data,
		ContentType: in.ContentType,
		DataTypeURL: in.DataTypeURL,
	}
	return
}
docker-compose.yml
version: "3"
services:
taskapi:
image: golang:1.16
volumes:
- ..:/go/src/lekha
working_dir: /go/src/lekha/uploader
command: go run main.go
ports:
- "8085:8085"
environment:
aaa: 80
my: I am THE variable value
networks:
- lekha
taskapidapr:
image: "daprio/daprd:edge"
command: [
"./daprd",
"-app-id", "taskapi",
"-app-protocol", "http",
"-app-port", "8085",
"-dapr-http-port", "8300",
"-placement-host-address", "placement:50006",
"-log-level", "debug",
"-components-path", "/components"
]
volumes:
- "../dapr-components/:/components" # Mount our components folder for the dapr runtime to use
depends_on:
- taskapi
ports:
- "8300:8300"
networks:
- lekha
#network_mode: "service:taskapi" # Attach the task-api-dapr service to the task-api network namespace
############################
# Dapr placement service
############################
placement:
image: "daprio/dapr"
command: ["./placement", "-port", "50006"]
ports:
- "50006:50006"
networks:
- lekha
networks:
lekha:
Daprd shows these mDNS messages in the logs; I am not sure if this is the cause:
time="2021-05-24T01:06:13.6629303Z" level=debug msg="Refreshing all
mDNS addresses." app_id=taskapi instance=442e04c9e8a6
scope=dapr.contrib type=log ver=edge
time="2021-05-24T01:06:13.6630421Z" level=debug msg="no mDNS apps to
refresh." app_id=taskapi instance=442e04c9e8a6 scope=dapr.contrib
Additionally, I can see the containers running fine on the expected ports in Docker Desktop.
{
"errorCode": "ERR_DIRECT_INVOKE",
"message": "invoke API is not ready"
}
I am getting the same error as yours.

Connect to docker container using hostname on local environment in kubernetes

I have a Kubernetes docker-compose setup that contains:
frontend - a web app running on port 80
backend - a node server for API running on port 80
database - mongodb
Ideally, I would like to access the frontend via a hostname such as http://frontend:80, and for the browser to be able to access the backend via a hostname such as http://backend:80, which is required by the web app on the client side.
How can I go about making my containers accessible via those hostnames in my localhost environment (Windows)?
docker-compose.yml
version: "3.8"
services:
frontend:
build: frontend
hostname: framework
ports:
- "80:80"
- "443:443"
- "33440:33440"
backend:
build: backend
hostname: backend
database:
image: 'mongo'
environment:
- MONGO_INITDB_DATABASE=framework-database
volumes:
- ./mongo/mongo-volume:/data/database
- ./mongo/init-mongo.js:/docker-entrypoint-initdb.d/init-mongo.js:ro
ports:
- '27017-27019:27017-27019'
I was able to figure it out: using docker-compose aliases and networks, I connected every container to the same development network.
There were three main components:
container-mapping Node DNS server - grabs the aliases via docker ps and creates a DNS server that redirects those requests to 127.0.0.1 (localhost)
nginx reverse proxy container - maps the hosts to the containers via their aliases in the virtual network
projects - each project is a docker-compose.yml that may have an unlimited number of containers running on port 80
docker-compose.yml for clientA
version: "3.8"
services:
frontend:
build: frontend
container_name: clienta-frontend
networks:
default:
aliases:
- clienta.test
backend:
build: backend
container_name: clienta-backend
networks:
default:
aliases:
- api.clienta.test
networks:
default:
external: true # connect to external network (see below for more)
name: 'development' # name of external network
nginx proxy docker-compose.yml
version: '3'
services:
  parent:
    image: nginx:alpine
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    ports:
      - "80:80" # map port 80 to localhost
    networks:
      - development
networks:
  development: # create network called development
    name: 'development'
    driver: bridge
DNS Server
import dns from 'native-dns'
import { exec } from 'child_process'

const { createServer, Request } = dns
const authority = { address: '8.8.8.8', port: 53, type: 'udp' }
const hosts = {}

let server = createServer()

function command (cmd) {
    return new Promise((resolve, reject) => {
        exec(cmd, (err, stdout, stderr) => stdout ? resolve(stdout) : reject(stderr ?? err))
    })
}

async function getDockerHostnames(){
    let containersText = await command('docker ps --format "{{.ID}}"')
    let containers = containersText.split('\n')
    containers.pop()
    await Promise.all(containers.map(async containerID => {
        let json = JSON.parse(await command(`docker inspect ${containerID}`))?.[0]
        let aliases = json?.NetworkSettings?.Networks?.development?.Aliases || []
        aliases.map(alias => hosts[alias] = {
            domain: `^${alias}*`,
            records: [
                { type: 'A', address: '127.0.0.1', ttl: 100 }
            ]
        })
    }))
}

await getDockerHostnames()
setInterval(getDockerHostnames, 8000)

function proxy(question, response, cb) {
    var request = Request({
        question: question, // forwarding the question
        server: authority,  // this is the DNS server we are asking
        timeout: 1000
    })

    // when we get answers, append them to the response
    request.on('message', (err, msg) => {
        msg.answer.map(a => response.answer.push(a))
    });

    request.on('end', cb)
    request.send()
}

server.on('close', () => console.log('server closed', server.address()))
server.on('error', (err, buff, req, res) => console.error(err.stack))
server.on('socketError', (err, socket) => console.error(err))

server.on('request', async function handleRequest(request, response) {
    await Promise.all(request.question.map(question => {
        console.log(question.name)
        let entry = Object.values(hosts).find(r => new RegExp(r.domain, 'i').test(question.name))
        if (entry) {
            entry.records.map(record => {
                record.name = question.name;
                record.ttl = record.ttl ?? 600;
                return response.answer.push(dns[record.type](record));
            })
        } else {
            return new Promise(resolve => proxy(question, response, resolve))
        }
    }))
    response.send()
});

server.serve(53, '127.0.0.1');
Don't forget to update your computer's network settings to use 127.0.0.1 as the DNS server.
Git repository for the DNS server + nginx proxy, in case you want to see the implementation: https://github.com/framework-tools/dockerdnsproxy

How can I use IdentityServer4 from inside and outside a docker machine?

I want to be able to authenticate against an Identity Server (STS) from outside and inside a docker machine.
I am having trouble with setting the correct authority that works both inside and outside the container. If I set the authority to the internal name mcoidentityserver:5000 then the API can authenticate but the client cannot get a token as the client lies outside of the docker network. If I set the authority to the external name localhost:5000 then the client can get a token but the API doesn't recognise the authority name (because localhost in this case is host machine).
What should I set the Authority to? Or perhaps I need to adjust the docker networking?
Diagram
The red arrow is the part that I'm having trouble with.
Detail
I am setting up a Windows 10 docker development environment that uses an ASP.NET Core API (on Linux), Identity Server 4 (ASP.NET Core on Linux) and a PostgreSQL database. PostgreSQL isn't a problem, included in the diagram for completeness. It's mapped to 9876 because I also have a PostgreSQL instance running on the host for now. mco is a shortened name of our company.
I have been following the Identity Server 4 instructions to get up and running.
Code
I'm not including the docker-compose.debug.yml because it has run commands pertinent only to running in Visual Studio.
docker-compose.yml
version: '2'
services:
  mcodatabase:
    image: mcodatabase
    build:
      context: ./Data
      dockerfile: Dockerfile
    restart: always
    ports:
      - 9876:5432
    environment:
      POSTGRES_USER: mcodevuser
      POSTGRES_PASSWORD: password
      POSTGRES_DB: mcodev
    volumes:
      - postgresdata:/var/lib/postgresql/data
    networks:
      - mconetwork
  mcoidentityserver:
    image: mcoidentityserver
    build:
      context: ./Mco.IdentityServer
      dockerfile: Dockerfile
    ports:
      - 5000:5000
    networks:
      - mconetwork
  mcoapi:
    image: mcoapi
    build:
      context: ./Mco.Api
      dockerfile: Dockerfile
    ports:
      - 56107:80
    links:
      - mcodatabase
    depends_on:
      - "mcodatabase"
      - "mcoidentityserver"
    networks:
      - mconetwork
volumes:
  postgresdata:
networks:
  mconetwork:
    driver: bridge
docker-compose.override.yml
This is created by the Visual Studio plugin to inject extra values.
version: '2'
services:
  mcoapi:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
    ports:
      - "80"
  mcoidentityserver:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
    ports:
      - "5000"
API Dockerfile
FROM microsoft/aspnetcore:1.1
ARG source
WORKDIR /app
EXPOSE 80
COPY ${source:-obj/Docker/publish} .
ENTRYPOINT ["dotnet", "Mco.Api.dll"]
Identity Server Dockerfile
FROM microsoft/aspnetcore:1.1
ARG source
WORKDIR /app
COPY ${source:-obj/Docker/publish} .
EXPOSE 5000
ENV ASPNETCORE_URLS http://*:5000
ENTRYPOINT ["dotnet", "Mco.IdentityServer.dll"]
API Startup.cs
Where we tell the API to use the Identity Server and set the Authority.
public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    loggerFactory.AddConsole(Configuration.GetSection("Logging"));
    loggerFactory.AddDebug();

    app.UseIdentityServerAuthentication(new IdentityServerAuthenticationOptions
    {
        // This can't work because we're running in docker and it doesn't understand what localhost:5000 is!
        Authority = "http://localhost:5000",
        RequireHttpsMetadata = false,
        ApiName = "api1"
    });

    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
    }
    else
    {
        app.UseExceptionHandler("/Home/Error");
    }

    app.UseMvc(routes =>
    {
        routes.MapRoute(
            name: "default",
            template: "{controller=Home}/{action=Index}/{id?}");
    });
}
Identity Server Startup.cs
public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddIdentityServer()
            .AddTemporarySigningCredential()
            .AddInMemoryApiResources(Config.GetApiResources())
            .AddInMemoryClients(Config.GetClients());
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
    {
        loggerFactory.AddConsole();

        if (env.IsDevelopment())
        {
            app.UseDeveloperExceptionPage();
        }

        app.UseIdentityServer();

        app.Run(async (context) =>
        {
            await context.Response.WriteAsync("Hello World!");
        });
    }
}
Identity Server Config.cs
public class Config
{
    public static IEnumerable<ApiResource> GetApiResources()
    {
        return new List<ApiResource>
        {
            new ApiResource("api1", "My API")
        };
    }

    public static IEnumerable<Client> GetClients()
    {
        return new List<Client>
        {
            new Client
            {
                ClientId = "client",

                // no interactive user, use the clientid/secret for authentication
                AllowedGrantTypes = GrantTypes.ClientCredentials,

                // secret for authentication
                ClientSecrets =
                {
                    new Secret("secret".Sha256())
                },

                // scopes that client has access to
                AllowedScopes = { "api1" }
            }
        };
    }
}
Client
Running in a console app.
var discovery = DiscoveryClient.GetAsync("localhost:5000").Result;
var tokenClient = new TokenClient(discovery.TokenEndpoint, "client", "secret");
var tokenResponse = tokenClient.RequestClientCredentialsAsync("api1").Result;

if (tokenResponse.IsError)
{
    Console.WriteLine(tokenResponse.Error);
    return 1;
}

var client = new HttpClient();
client.SetBearerToken(tokenResponse.AccessToken);

var response = client.GetAsync("http://localhost:56107/test").Result;
if (!response.IsSuccessStatusCode)
{
    Console.WriteLine(response.StatusCode);
}
else
{
    var content = response.Content.ReadAsStringAsync().Result;
    Console.WriteLine(JArray.Parse(content));
}
Thanks in advance.
Ensure IssuerUri is set to an explicit constant. We had similar issues with accessing an Identity Server instance by IP/hostname and resolved it this way:
services.AddIdentityServer(x =>
{
    x.IssuerUri = "my_auth";
})
P.S. Why don't you unify the authority URL to hostname:5000 (see the sketch below)? It is possible for the client and the API to both call the same URL hostname:5000 if:
port 5000 is exposed (I see that it is)
DNS resolves inside the docker container
you have access to hostname:5000 (check firewalls, network topology, etc.)
DNS is the trickiest part. If you have any trouble with it, I recommend trying to reach Identity Server by its exposed IP instead of resolving the hostname.
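As a rough sketch of that unified-URL idea (assuming a placeholder docker-host address such as 10.0.75.1 that is reachable from both the console client and the containers; substitute your own hostname or IP):

// Sketch only: both sides use the same externally reachable authority URL.
// API Startup.cs
app.UseIdentityServerAuthentication(new IdentityServerAuthenticationOptions
{
    Authority = "http://10.0.75.1:5000",   // placeholder docker-host address
    RequireHttpsMetadata = false,
    ApiName = "api1"
});

// Console client
var discovery = DiscoveryClient.GetAsync("http://10.0.75.1:5000").Result;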
To make this work I needed to pass two environment variables in the docker-compose.yml and set up CORS on the Identity Server instance so that the API was allowed to call it. Setting up CORS is outside the remit of this question; this question covers it well.
Docker-Compose changes
The identity server needs IDENTITY_ISSUER, which is the name that the identity server will give itself. In this case I've used the IP of the docker host and the port of the identity server.
mcoidentityserver:
  image: mcoidentityserver
  build:
    context: ./Mco.IdentityServer
    dockerfile: Dockerfile
  environment:
    IDENTITY_ISSUER: "http://10.0.75.1:5000"
  ports:
    - 5000:5000
  networks:
    - mconetwork
The API needs to know where the authority is. We can use the docker network name for the authority because the call never needs to leave the docker network; the API only calls the identity server to check the token.
mcoapi:
  image: mcoapi
  build:
    context: ./Mco.Api
    dockerfile: Dockerfile
  environment:
    IDENTITY_AUTHORITY: "http://mcoidentityserver:5000"
  ports:
    - 56107:80
  links:
    - mcodatabase
    - mcoidentityserver
  depends_on:
    - "mcodatabase"
    - "mcoidentityserver"
  networks:
    - mconetwork
Using these values in C#
Identity Server.cs
You set the Identity Issuer name in ConfigureServices:
public void ConfigureServices(IServiceCollection services)
{
    var sqlConnectionString = Configuration.GetConnectionString("DefaultConnection");

    services
        .AddSingleton(Configuration)
        .AddMcoCore(sqlConnectionString)
        .AddIdentityServer(x => x.IssuerUri = Configuration["IDENTITY_ISSUER"])
        .AddTemporarySigningCredential()
        .AddInMemoryApiResources(Config.GetApiResources())
        .AddInMemoryClients(Config.GetClients())
        .AddCorsPolicyService<InMemoryCorsPolicyService>()
        .AddAspNetIdentity<User>();
}
API Startup.cs
We can now set the Authority to the environment variable.
app.UseIdentityServerAuthentication(new IdentityServerAuthenticationOptions
{
    Authority = Configuration["IDENTITY_AUTHORITY"],
    RequireHttpsMetadata = false,
    ApiName = "api1"
});
Drawbacks
As shown here, this docker-compose would not be fit for production, because the hard-coded identity issuer is a local IP. Instead you would need a proper DNS entry that maps to the docker instance running the identity server. To do this I would create a docker-compose override file and build production with the overridden value.
Thanks to ilya-chumakov for his help.
Edit
Further to this, I have written up the entire process of building a Linux docker + ASP.NET Core 2 + OAuth with Identity Server on my blog.
If you are running your docker containers in the same network, you can do the following:
Add IssuerUri in your identity server:
services.AddIdentityServer(x =>
{
    x.IssuerUri = "http://<your_identity_container_name>";
})
This will set your identity server's URI, so your other web API services can use this URI to reach your identity server.
Add the Authority in any web API that has to use the identity server:
services.AddAuthentication(options =>
{
    options.DefaultAuthenticateScheme = JwtBearerDefaults.AuthenticationScheme;
    options.DefaultChallengeScheme = JwtBearerDefaults.AuthenticationScheme;
}).AddJwtBearer(o =>
{
    o.Authority = "http://<your_identity_container_name>";
    o.Audience = "api1"; // APi Resource Name
    o.RequireHttpsMetadata = false;
    o.IncludeErrorDetails = true;
});

psql: could not translate host name "somePostgres" to address: Name or service not known

I am building a Java Spring MVC application in Docker, and the Dockerfile build involves interacting with a Postgres container. Whenever I run docker-compose up, the step in the Dockerfile that interacts with Postgres sometimes fails with an exception:
psql: could not translate host name "somePostgres" to address: Name or service not known
FAILED
FAILURE: Build failed with an exception.
docker-compose file:
abcdweb:
  links:
    - abcdpostgres
  build: .
  ports:
    - "8080:8080"
  volumes:
    - .:/abcd-myproj
  container_name: someWeb
abcdpostgres:
  image: postgres
  environment:
    - POSTGRES_PASSWORD=postgres
    - POSTGRES_USER=postgres
  container_name: somePostgres
The somePostgres container seems to start very quickly, and there is no problem with the Postgres container loading late. Currently I am running this in a VirtualBox VM created by docker-machine. I am unable to capture the error reliably as it is not persistent.
PS: Added Dockerfile
FROM java:7
RUN apt-get update && apt-get install -y postgresql-client-9.4
ADD . ./abcd-myproj
WORKDIR /abcd-myproj
RUN ./gradlew build -x test
RUN sh db/importdata.sh
CMD ./gradlew jettyRun
Basically, what this error means is that psql was unable to resolve the host name; try using the IP address instead.
https://github.com/postgres/postgres/blob/313f56ce2d1b9dfd3483e4f39611baa27852835a/src/interfaces/libpq/fe-connect.c#L2275-L2285
case CHT_HOST_NAME:
    ret = pg_getaddrinfo_all(ch->host, portstr, &hint,
                             &conn->addrlist);
    if (ret || !conn->addrlist)
    {
        appendPQExpBuffer(&conn->errorMessage,
                          libpq_gettext("could not translate host name \"%s\" to address: %s\n"),
                          ch->host, gai_strerror(ret));
        goto keep_going;
    }
    break;
https://github.com/postgres/postgres/blob/8255c7a5eeba8f1a38b7a431c04909bde4f5e67d/src/common/ip.c#L57-L75
int
pg_getaddrinfo_all(const char *hostname, const char *servname,
                   const struct addrinfo *hintp, struct addrinfo **result)
{
    int rc;

    /* not all versions of getaddrinfo() zero *result on failure */
    *result = NULL;

#ifdef HAVE_UNIX_SOCKETS
    if (hintp->ai_family == AF_UNIX)
        return getaddrinfo_unix(servname, hintp, result);
#endif

    /* NULL has special meaning to getaddrinfo(). */
    rc = getaddrinfo((!hostname || hostname[0] == '\0') ? NULL : hostname,
                     servname, hintp, result);

    return rc;
}
I think links are discouraged these days.
But if you want the services to communicate over a network explicitly, here is the config.
You need to define a network and attach both services to it. It is something like:
networks:
  network:
    external: true
abcdweb:
  links:
    - abcdpostgres
  build: .
  ports:
    - "8080:8080"
  volumes:
    - .:/abcd-myproj
  container_name: someWeb
  networks:
    network: null
abcdpostgres:
  image: postgres
  environment:
    - POSTGRES_PASSWORD=postgres
    - POSTGRES_USER=postgres
  container_name: somePostgres
  networks:
    network: null
This way the services will communicate via the network, using the service names as the address.
I had to set my secret_key_base in secrets.yml.
With the incorrect key, my app did not have permission to resolve the database domain.
I'm running a Rails app in Docker that makes use of secret_key_base. The problem is that I was running the app against the production database using the development environment, and the development environment entailed the development secret_key_base. Once I began using the correct key, I could connect to the database.
The error showed up in my Rails container logs as
Raven 2.13.0 configured not to capture errors: No host specified, no public_key specified, no project_id specified
See this question for how to set the secret_key_base in secrets.yml.
