Hi, I am creating a little Go binary that reports custom health information from a Virtuoso DB (this all runs in K8s, but I am testing locally with docker-compose). I am installing the isql binary in an Alpine container (and copying my Go binary to it), but I am unable to connect to the database (even when on the same network). I am using the following command to test: isql localhost:1111 dba dba, but I always get this error: [ISQL]ERROR: Could not SQLConnect.
What am I doing wrong? Here is my docker-compose file:
version: "3.9"
services:
virtuoso:
image: openlink/virtuoso-opensource-7
environment:
- DBA_PASSWORD=dba
ports:
- "1111:1111"
- "8890:8890"
networks:
- virtuosonet
health:
build:
context: .
dockerfile: Dockerfile
environment:
- VIRT_USERNAME=dba
- VIRT_PASSWORD=dba
entrypoint: sh -c "sleep 300"
networks:
- virtuosonet
networks:
virtuosonet:
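(For reference: inside the health container, localhost resolves to the health container itself, not to the Virtuoso service. With both services on virtuosonet, the test would presumably need to target the Compose service name instead:)
isql virtuoso:1111 dba dba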
Related
Hello, I have multiple projects that each have their own Dockerfiles and docker-compose.yml files. I am not too familiar with how I would set up the networking between these projects so they could share the same databases and talk to one another. Does anyone have suggestions?
Right now, in one of the projects, I am just pulling all the Dockerfiles into a single docker-compose.yml and setting up all the services I need from all the other projects in this one file. I do not think this is ideal, and it creates a high level of coupling between the services.
version: "3"
services:
db:
image: mysql/mysql-server
ports:
- 3306:3306
mongo:
image: mongo
restart: always
rails_app:
build:
context: ${RAILS_APP_PATH}
dockerfile: Dockerfile
volumes:
- ${RAILS_APP_PATH}:/application
ports:
- 4000:4000
depends_on:
- db
- mongo
links:
- db
- mongo
frontend:
build:
context: ${FRONTEND_PATH}
ports:
- ${EXPOSED_PORT}:${EXPOSED_PORT}
depends_on:
- go_services
links:
- go_services
go_services:
build:
context: .
dockerfile: Dockerfile
ports:
- "8080:8080"
depends_on:
- db
- mongo
- rails_app
links:
- db
- mongo
- rails_app
The trick is to use an external Docker network.
Set up the network, and the containers can talk to each other by their service names.
Set up the network on the host
docker network create my-net
First compose file
version: '3.9'
services:
mymongo:
image: mongo:latest
restart: unless-stopped
container_name: mongo
environment:
MONGO_INITDB_DATABASE: mymongo
MONGO_INITDB_ROOT_USERNAME: root
MONGO_INITDB_ROOT_PASSWORD: password
volumes:
- ./database:/data/db
ports:
- "27017:27017"
networks:
default:
external: true
name: my-net
Second compose file
version: '3.9'
services:
ui:
build:
context: ./build
dockerfile: Dockerfile_ui
image: ui
restart: "no"
container_name: ui
ports:
- "8005:3000"
command: ["npm", "start"]
networks:
default:
external: true
name: my-net
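(As a quick sanity check, assuming both stacks are up and the ui image ships getent, you can verify name resolution across the shared network:)
docker exec ui getent hosts mongo   # should print the mongo container's address on my-net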
You can do this without any special Compose setup, if:
each project is self-contained (they do not share databases)
the service locations are configurable via environment variables
you don't mind communicating via the host
If you're thinking about scaling up this project at all, this approach can look attractive. It will work even if you're running each Compose file on a different host, and it translates well into clustered environments like Kubernetes.
Go ahead and break up your Compose file into several independent ones:
# rails/docker-compose.yml
version: '3.8'
services:
db:
image: mysql/mysql-server
app:
build: .
ports: ['4000:4000']
depends_on: [db]
# go/docker-compose.yml
services:
mongo:
image: mongo
service:
build: .
ports: ['8080:8080']
depends_on: [mongo]
environment:
- RAILS_APP_URL
The very last line here passes the RAILS_APP_URL environment variable from the host environment into the container.
You can start the Rails application independently:
docker-compose -f ./rails/docker-compose.yml up -d
You need to find some hostname where the container can call back to the host. On macOS and Windows hosts, Docker provides a special hostname host.docker.internal for this. You can then connect the client container to the published port of its server:
export RAILS_APP_URL=http://host.docker.internal:4000
docker-compose -f ./go/docker-compose.yml up
If you're doing development, you can run the service you're working on locally, run its dependencies in containers, and point the environment variable at the container:
go build -o ./server ./cmd/server
export RAILS_APP_URL=http://localhost:4000
./server
If you want to run this setup on multiple hosts but without using a dedicated cluster manager like Docker Swarm or Kubernetes, set the environment variable to point at the DNS name of the host running the service. If you did want to translate this to Kubernetes, a Helm "chart" would be analogous, containing the Deployment, Service, etc. and dependencies for a single component, and you could configure the other service's URL through Helm values.
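(A rough sketch of the Helm analogue — the chart layout and the railsAppUrl value name here are illustrative, not taken from any real chart:)
# go-service/values.yaml
railsAppUrl: http://rails-app.default.svc.cluster.local:4000
# go-service/templates/deployment.yaml -- env fragment
env:
  - name: RAILS_APP_URL
    value: {{ .Values.railsAppUrl | quote }}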
I am launching containers via docker-compose, but 2 out of 3 containers are failing with the error: exec user process caused "exec format error".
The error occurs while executing a file placed at /opt/whatsapp/bin/wait_on_postgres.sh; I need to add #!/bin/bash at the top of this file.
The problem is that the container exits almost immediately, so how can I access this file to make the necessary changes?
Below is the docker-compose.yml I am using:
version: '3'
volumes:
whatsappMedia:
driver: local
postgresData:
driver: local
services:
db:
image: postgres:10.6
command: "-p 3306 -N 500"
restart: always
environment:
POSTGRES_PASSWORD: testpass
POSTGRES_USER: root
expose:
- "33060"
ports:
- "33060:3306"
volumes:
- postgresData:/var/lib/postgresql/data
network_mode: bridge
wacore:
image: docker.whatsapp.biz/coreapp:v${WA_API_VERSION:?Run docker-compose with env var WA_API_VERSION (ex. WA_API_VERSION=2.31.4 docker-compose <command> <options>)}
command: ["/opt/whatsapp/bin/wait_on_postgres.sh", "/opt/whatsapp/bin/launch_within_docker.sh"]
volumes:
- whatsappMedia:/usr/local/wamedia
env_file:
- db.env
environment:
# This is the version of the docker templates being used to run WhatsApp Business API
WA_RUNNING_ENV_VERSION: v2.2.3
ORCHESTRATION: DOCKER-COMPOSE
depends_on:
- "db"
network_mode: bridge
links:
- db
waweb:
image: docker.whatsapp.biz/web:v${WA_API_VERSION:?Run docker-compose with env var WA_API_VERSION (ex. WA_API_VERSION=2.31.4 docker-compose <command> <options>)}
command: ["/opt/whatsapp/bin/wait_on_postgres.sh", "/opt/whatsapp/bin/launch_within_docker.sh"]
ports:
- "9090:443"
volumes:
- whatsappMedia:/usr/local/wamedia
env_file:
- db.env
environment:
WACORE_HOSTNAME: wacore
# This is the version of the docker templates being used to run WhatsApp Business API
WA_RUNNING_ENV_VERSION: v2.2.3
ORCHESTRATION: DOCKER-COMPOSE
depends_on:
- "db"
- "wacore"
links:
- db
- wacore
network_mode: bridge
The problem was resolved by using a 64-bit guest OS image.
I was running this container on 32-bit CentOS, which was causing the error.
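(A quick way to spot this kind of mismatch is to compare the host architecture with the image's, e.g. using the version tag from the error message above:)
uname -m                      # host architecture, e.g. x86_64 vs. i686
docker image inspect --format '{{.Os}}/{{.Architecture}}' docker.whatsapp.biz/coreapp:v2.31.4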
I have a Docker image in a GitLab registry.
When I run (after logging in on the target machine)
docker run -d -p 8081:8080/tcp gitlab.somedomain.com:5050/root/app
the Laravel app is up, running, and reachable. Things like php artisan config:clear work, and when I enter the container everything looks fine.
But I don't have any supporting services running. So I had the idea to create a YAML file for docker-compose to set things up, in docker-compose-gitlab.yml:
version: '3'
services:
mysql:
image: mysql:5.7
container_name: my-mysql
environment:
- MYSQL_ROOT_PASSWORD=***
- MYSQL_DATABASE=dbname
- MYSQL_USER=username
- MYSQL_PASSWORD=***
volumes:
- ./data/mysql:/var/lib/mysql
ports:
- "3307:3306"
application:
image: gitlab.somedomain.com:5050/root/app:latest
build:
context: .
dockerfile: ./Dockerfile
container_name: my-app
ports:
- "8081:8080"
volumes:
- .:/application
env_file: .env.docker
working_dir: /application
depends_on:
- mysql
links:
- mysql
Calling docker-compose --verbose -f docker-compose-gitlab.yml up shows me that the mysql service is created and working; the app also seems to be created, but then it fails, exiting with code 0 and no further message.
If I add commands to my YAML, like php artisan config:clear, the error gets even less clear to me: it says it cannot find artisan, and it seems as if the command is executed outside the container, exiting with code 1. (artisan is a helper executed via PHP.)
When I call docker-compose with -d and then run docker ps, I can only see mysql running, but not the app.
With either strategy, the problem is that the two containers do not share a common network and so cannot work together.
What did I miss? Is this the wrong strategy?
The problem is that I had left over a volume directive, which overwrites my entire application with an empty directory.
You can just leave that out.
version: '3'
services:
mysql:
image: mysql:5.7
container_name: my-mysql
environment:
- MYSQL_ROOT_PASSWORD=***
- MYSQL_DATABASE=dbname
- MYSQL_USER=username
- MYSQL_PASSWORD=***
volumes:
- ./data/mysql:/var/lib/mysql
ports:
- "3307:3306"
application:
image: gitlab.somedomain.com:5050/root/app:latest
build:
context: .
dockerfile: ./Dockerfile
container_name: my-app
ports:
- "8081:8080"
## volumes:
## - .:/application ## this would overwrite the app
env_file: .env.docker
working_dir: /application
depends_on:
- mysql
links:
- mysql
You can debug the containers' networking by listing the networks with docker network ls,
then, when the list is shown, inspecting the Compose network with docker inspect <ComposeNetworkID>.
Once you are sure that your services are not in the same network, remove your containers and recreate them with docker-compose -f docker-compose-gitlab.yml up.
If you notice they are in the same network, try using the container name instead of localhost to reach each other.
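(Put together, and assuming Compose named the default network after the project directory, e.g. docker_default, the debugging session might look like this:)
docker network ls
docker network inspect docker_default      # the "Containers" section lists which containers are attached
docker-compose -f docker-compose-gitlab.yml down
docker-compose -f docker-compose-gitlab.yml up -d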
Docker noob here.
I have two files, docker-compose.build.yml and docker-compose.up.yml, in my docker folder. Following are the contents of both files:
docker-compose.build.yml
version: "3"
services:
base:
build:
context: ../
dockerfile: ./docker/Dockerfile.base
args:
DEBUG: "true"
image: ottertune-base
labels:
NAME: "ottertune-base"
web:
build:
context: ../
dockerfile: ./docker/Dockerfile.web
image: ottertune-web
depends_on:
- base
labels:
NAME: "ottertune-web"
volumes:
- ../server:/app
driver:
build:
context: ../
dockerfile: ./docker/Dockerfile.driver
image: ottertune-driver
depends_on:
- base
labels:
NAME: "ottertune-driver"
docker-compose.up.yml
version: "3"
services:
web:
image: ottertune-web
container_name: web
expose:
- "8000"
ports:
- "8000:8000"
links:
- backend
- rabbitmq
depends_on:
- backend
- rabbitmq
environment:
DEBUG: 'true'
ADMIN_PASSWORD: 'changeme'
BACKEND: 'postgresql'
DB_NAME: 'ottertune'
DB_USER: 'postgres'
DB_PASSWORD: 'ottertune'
DB_HOST: 'backend'
DB_PORT: '5432'
DB_OPTS: '{}'
MAX_DB_CONN_ATTEMPTS: 30
RABBITMQ_HOST: 'rabbitmq'
working_dir: /app/website
entrypoint: ./start.sh
labels:
NAME: "ottertune-web"
networks:
- ottertune-net
driver:
image: ottertune-driver
container_name: driver
depends_on:
- web
environment:
DEBUG: 'true'
working_dir: /app/driver
labels:
NAME: "ottertune-driver"
networks:
- ottertune-net
rabbitmq:
image: "rabbitmq:3-management"
container_name: rabbitmq
restart: always
hostname: "rabbitmq"
environment:
RABBITMQ_DEFAULT_USER: "guest"
RABBITMQ_DEFAULT_PASS: "guest"
RABBITMQ_DEFAULT_VHOST: "/"
expose:
- "15672"
- "5672"
ports:
- "15673:15672"
- "5673:5672"
labels:
NAME: "rabbitmq"
networks:
- ottertune-net
backend:
container_name: backend
restart: always
image: postgres:9.6
environment:
POSTGRES_USER: 'postgres'
POSTGRES_PASSWORD: 'ottertune'
POSTGRES_DB: 'ottertune'
expose:
- "5432"
ports:
- "5432:5432"
labels:
NAME: "ottertune-backend"
networks:
- ottertune-net
networks:
ottertune-net:
driver: bridge
Nothing is wrong with the Dockerfiles; I just have a few doubts about this approach.
What purpose does having multiple files serve instead of just one docker-compose.yml?
How does docker-compose work when used with multiple files?
When I do docker-compose -f docker-compose.build.yml build --no-cache
Building base
Step 1/1 : FROM ubuntu:18.04
---> 775349758637
[Warning] One or more build-args [DEBUG] were not consumed
Successfully built 775349758637
Successfully tagged ottertune-base:latest
Building web
Step 1/1 : FROM ottertune-base
---> 775349758637
Successfully built 775349758637
Successfully tagged ottertune-web:latest
Building driver
Step 1/1 : FROM ottertune-base
---> 775349758637
Successfully built 775349758637
Successfully tagged ottertune-driver:latest
and then docker-compose up, I get the error:
rabbitmq is up-to-date
backend is up-to-date
Starting web ... error
ERROR: for web Cannot start service web: OCI runtime create failed: container_linux.go:346:
starting container process caused "exec: \"./start.sh\": stat ./start.sh: no such file or
directory": unknown
ERROR: for web Cannot start service web: OCI runtime create failed: container_linux.go:346:
starting container process caused "exec: \"./start.sh\": stat ./start.sh: no such file or
directory": unknown
ERROR: Encountered errors while bringing up the project.
This entrypoint start.sh is defined in the docker-compose.up.yml file, which I didn't pass as an argument to
docker-compose build
So why is docker-compose up trying to run an entrypoint from a YAML file that was not even passed during the build? I'm really confused by this and didn't find much about it on Google or Stack Overflow.
If you docker-compose -f a.yml -f b.yml ..., Docker Compose merges the two YAML files. If you look at the two files you've posted, one has all of the run-time settings (ports:, environment:, ...), and if you happened to have the images already it would be enough to run the application. The second only has build-time settings (build:), but requires the source tree checked out locally to be able to run.
You probably need to specify both files on every docker-compose invocation:
docker-compose -f docker-compose.build.yml -f docker-compose.up.yml up --build
It does seem like the author of these files intended for them to be run separately:
docker-compose -f docker-compose.build.yml build
docker-compose -f docker-compose.up.yml up
but note that some of the run-time options in the build file, like volumes: to hide the application built into the image, will never take effect.
(You should be able to delete a large number of settings in the "up" YAML file that either duplicate what's in the image or that Docker Compose can provide for you: container_name:, expose:, links:, working_dir:, entrypoint:, networks:, and (probably) labels: are all unnecessary and can be deleted.)
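(For example, assuming the ottertune-web image already sets its working directory and entrypoint, the web service in the "up" file could presumably shrink to something like:)
services:
  web:
    image: ottertune-web
    ports:
      - "8000:8000"
    depends_on:
      - backend
      - rabbitmq
    environment:
      DEBUG: 'true'
      ADMIN_PASSWORD: 'changeme'
      # ...remaining DB_* / RABBITMQ_* settings as above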
What purpose does having multiple files serve instead of just one docker-compose.yml?
You can share configuration across environments. For example, I keep the common configuration such as the network and server in a docker-compose.yml. I keep my development environment specifics such as a server with automatic reload and debugging enabled in a docker-compose.override.yml. I keep the production-specific configs in a docker-compose.prod.yml. Then I can run docker-compose up --build for my development environment (Docker Compose uses docker-compose.yml and docker-compose.override.yml by default). And I can run my prod environment with docker-compose -f docker-compose.yml -f docker-compose.prod.yml up --build. You can read about this in the dedicated docs page.
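(A minimal sketch of that layout, with made-up service details:)
# docker-compose.yml -- shared base
services:
  web:
    build: .
    ports:
      - "8000:8000"
# docker-compose.override.yml -- picked up automatically for development
services:
  web:
    volumes:
      - .:/app              # live-reload from the source tree
    environment:
      DEBUG: 'true'
# docker-compose.prod.yml -- only used when passed explicitly with -f
services:
  web:
    restart: always
    environment:
      DEBUG: 'false'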
How does docker-compose work when used with multiple files?
It takes the first file as the base file, and adds or replaces configs from subsequent files to the base file. See the relevant docs.
When I do docker-compose -f docker-compose.build.yml build --no-cache ...
As for your last question, I can't really tell from what I've seen. But unlike Dockerfiles, which need two commands (docker build and docker run), docker-compose only needs one. So when you do docker-compose up, it looks for a file named docker-compose.yml (and also docker-compose.override.yml if it's present).
I'm building 2 docker containers, "app" and "db", via a docker-compose file.
The app server just installs Java/Tomcat via a Dockerfile, which is what docker-compose uses to build.
The db server uses an MS SQL image.
When I run:
docker-compose up
I follow that with a build process for the software I need to load, which deploys a WAR to the Tomcat directory on the app server and builds the database on the database server.
My problem is: the build process can reference localhost:8080 to install/patch the software on the app server, and localhost:1433 to install/patch the database portion on the database server. However, when I start Tomcat, the system doesn't come online, because the app server can't connect to the database server via "localhost:1433"; I have to jump in and update the properties file after the build to the Docker-internal IP address, and THEN it works.
My question is: how am I able to get my localhost and my app container to reference the DB in the same manner in a database URL?
Dockerfile for app server:
FROM centos:centos7
COPY apache-tomcat-9.0.20.tar.gz /tmp/
WORKDIR /tmp/
RUN yum -y update
RUN yum -y install java-11-openjdk-devel
RUN tar -xf apache-tomcat-9.0.20.tar.gz
RUN mv apache-tomcat-9.0.20 /opt/tomcat/
RUN export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.212.b04-0.el7_6.x86_64/
RUN export PATH=$PATH:$JAVA_HOME/jre/bin
RUN export CATALINA_HOME=/opt/tomcat/
RUN export PATH=$PATH:$CATALINA_HOME/bin
WORKDIR /opt/tomcat/webapps
RUN mkdir testapp
Docker-Compose File:
version: '3.3'
services:
db:
image: "mcr.microsoft.com/mssql/server:2017-latest"
restart: always
volumes:
- db_data:/var/lib/mssql
environment:
- ACCEPT_EULA=Y
- SA_PASSWORD=Test123
network_mode: bridge
hostname: db
ports:
- "1433:1433"
app:
build: './testapp'
volumes:
- './system/build:/opt/tomcat/webapps/testapp/'
ports:
- "8080:8080"
- "8009:8009"
network_mode: bridge
tty: true
depends_on:
- db
volumes:
db_data:
Bring your services onto the same network and target each service by its service name. For that you need to define a Docker network, as below. In the following example, I can access the DB at mongo:27017.
services:
  mongo:
    image: mongo:latest
    ports:
      - "27017:27017"
    volumes:
      - ./data/db:/data/db
    networks:
      - my-net
  spring:
    depends_on:
      - mongo
    image: docker-spring-http-alpine
    ports:
      - "8080:8080"
    networks:
      - my-net
networks:
  my-net:
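(On the application side, assuming Spring Data MongoDB, the service would then presumably point at the service name rather than localhost, e.g. spring.data.mongodb.uri=mongodb://mongo:27017/<db>, and the same hostname works both in and between containers on the shared network.)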