How to run Kong API Gateway using Docker containers?

I am very new to Kong API Gateway, and am currently attempting to run a Kong container with PostgreSQL as my database container.
How can I achieve this?

1. Start your database:
$ docker run -d --name kong-database \
-p 5432:5432 \
-e "POSTGRES_USER=kong" \
-e "POSTGRES_DB=kong" \
postgres:9.4
2. Start Kong:
Start a Kong container and link it to your database container, setting the KONG_DATABASE environment variable to postgres.
$ docker run -d --name kong \
--link kong-database:kong-database \
-e "KONG_DATABASE=postgres" \
-e "KONG_PG_HOST=kong-database" \
-p 8000:8000 \
-p 8443:8443 \
-p 8001:8001 \
-p 7946:7946 \
-p 7946:7946/udp \
kong
3. Verify Kong is running:
$ curl http://127.0.0.1:8001
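Once the Admin API answers on port 8001, you can start routing traffic through the proxy port. A minimal sketch, assuming a Kong version with the services/routes Admin API (0.13 or newer) and that httpbin.org is reachable from the container; the service name and path are only examples:
# Register an upstream service
curl -i -X POST http://127.0.0.1:8001/services \
--data "name=example-service" \
--data "url=http://httpbin.org"
# Attach a route to that service
curl -i -X POST http://127.0.0.1:8001/services/example-service/routes \
--data "paths[]=/example"
# Requests on the proxy port (8000) under /example are now forwarded to httpbin.org
curl -i http://127.0.0.1:8000/example/get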

You can follow the Kong installation guide. It worked for me as expected.
Step 1: Start Postgres container
docker run -d --name kong-database \
-p 5432:5432 \
-e "POSTGRES_USER=kong" \
-e "POSTGRES_DB=kong" \
postgres:9.5
Step 2: Migrate the database
docker run --rm \
--link kong-database:kong-database \
-e "KONG_DATABASE=postgres" \
-e "KONG_PG_HOST=kong-database" \
-e "KONG_CASSANDRA_CONTACT_POINTS=kong-database" \
kong:latest kong migrations up
Step 3: Start Kong
docker run -d --name kong \
--link kong-database:kong-database \
-e "KONG_DATABASE=postgres" \
-e "KONG_PG_HOST=kong-database" \
-e "KONG_CASSANDRA_CONTACT_POINTS=kong-database" \
-e "KONG_PROXY_ACCESS_LOG=/dev/stdout" \
-e "KONG_ADMIN_ACCESS_LOG=/dev/stdout" \
-e "KONG_PROXY_ERROR_LOG=/dev/stderr" \
-e "KONG_ADMIN_ERROR_LOG=/dev/stderr" \
-e "KONG_ADMIN_LISTEN=0.0.0.0:8001" \
-e "KONG_ADMIN_LISTEN_SSL=0.0.0.0:8444" \
-p 8000:8000 \
-p 8443:8443 \
-p 8001:8001 \
-p 8444:8444 \
kong:latest
Step 4: Verify
curl -i http://localhost:8001/
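If the verification fails or the container keeps restarting, checking the migration state and the container logs is usually the quickest diagnosis. A sketch, assuming the container names used above:
# Show which migrations have been applied, using a one-off container against the same database
docker run --rm \
--link kong-database:kong-database \
-e "KONG_DATABASE=postgres" \
-e "KONG_PG_HOST=kong-database" \
kong:latest kong migrations list
# Kong logs to stdout/stderr, so docker logs shows the proxy and admin output
docker logs kong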

Answering @StefanWalther's question, here's an example of how to make it work with docker-compose:
version: "2.1"
services:
  kong:
    image: kong:latest
    depends_on:
      kong-database:
        condition: service_healthy
    healthcheck:
      test:
        - CMD
        - nc
        - -z
        - localhost
        - "8443"
      retries: 10
    links:
      - kong-database:kong-database
    command:
      - "kong"
      - "start"
      - "--vv"
    environment:
      - KONG_DATABASE=cassandra
      - KONG_CASSANDRA_CONTACT_POINTS=kong-database
      - KONG_ADMIN_LISTEN=0.0.0.0:8001
      - KONG_ADMIN_LISTEN_SSL=0.0.0.0:8444
      - KONG_NGINX_DAEMON=off
    ports:
      - "443:8443"
      - "8001:8001"
    restart: always
    network_mode: "bridge"
  kong-database:
    image: cassandra:3
    healthcheck:
      test:
        - "CMD-SHELL"
        - "[ $$(nodetool statusgossip) = running ]"
    volumes:
      - ~/kong-database/cassandra:/var/lib/cassandra
    expose:
      - "9042"
    restart: always
    network_mode: "bridge"
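To bring the stack up and watch Kong start (assuming the file is saved as docker-compose.yml):
docker-compose up -d
docker-compose ps            # the kong service should report "healthy" once nc can reach port 8443
docker-compose logs -f kong  # follow Kong's startup output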
And, as an extra, you can add Kongfig to reconfigure the instance:
  kong-configurer:
    image: mashupmill/kongfig
    depends_on:
      kong:
        condition: service_healthy
    links:
      - kong:kong
    volumes:
      - ~/config.yml:/config.yml:ro
    command: --path /config.yml --host kong:8001
    network_mode: "bridge"
You can dump the configuration to use in this last container with:
kongfig dump --host localhost:8001 > ~/config.yml
More info on Kongfig, here.
Cheers.

Did you check the following repo?
https://github.com/Mashape/docker-kong

Here's my own docker-compose file, and it works perfectly (it is based on Kong's Docker project on GitHub; I used kong-oidc, but you can choose whatever version you like).
kong:
  image: kong:1.3.0-alpine-oidc
  container_name: kong
  depends_on:
    - kong-db
  healthcheck:
    test: ["CMD", "kong", "health"]
    interval: 10s
    timeout: 10s
    retries: 10
  restart: on-failure
  ports:
    - "8000:8000" # Listener
    - "8001:8001" # Admin API
    - "8443:8443" # Listener (SSL)
    - "8444:8444" # Admin API (SSL)
  environment:
    KONG_DATABASE: postgres
    KONG_PG_HOST: kong-db
    KONG_PG_PORT: 5432
    KONG_PG_DATABASE: api-gw
    KONG_PROXY_ACCESS_LOG: /dev/stdout
    KONG_ADMIN_ACCESS_LOG: /dev/stdout
    KONG_PROXY_ERROR_LOG: /dev/stderr
    KONG_ADMIN_ERROR_LOG: /dev/stderr
    KONG_PROXY_LISTEN: 0.0.0.0:8000, 0.0.0.0:8443 ssl
    KONG_ADMIN_LISTEN: 0.0.0.0:8001, 0.0.0.0:8444 ssl
    KONG_PLUGINS: bundled,oidc
    KONG_LOG_LEVEL: debug
kong-migrations:
  image: kong:1.3.0-alpine-oidc
  command: kong migrations bootstrap
  container_name: kong-migrations
  depends_on:
    - kong-db
  environment:
    KONG_DATABASE: postgres
    KONG_PG_DATABASE: api-gw
    KONG_PG_HOST: kong-db
    KONG_PG_PASSWORD:
    KONG_PG_USER: kong
  links:
    - kong-db:kong-db
  restart: on-failure
kong-migrations-up:
  image: kong:1.3.0-alpine-oidc
  container_name: kong-migrations-up
  command: kong migrations up && kong migrations finish
  depends_on:
    - kong-db
  environment:
    KONG_DATABASE: postgres
    KONG_PG_DATABASE: api-gw
    KONG_PG_HOST: kong-db
    KONG_PG_PASSWORD:
    KONG_PG_USER: kong
  links:
    - kong-db:kong-db
  restart: on-failure
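The compose fragment above references a kong-db service that is not shown. A minimal sketch of what it could look like; the image tag, trust authentication and bind-mount path are assumptions, and the database name and user must match the KONG_PG_* settings above:
kong-db:
  image: postgres:9.6-alpine
  container_name: kong-db
  environment:
    POSTGRES_DB: api-gw
    POSTGRES_USER: kong
    POSTGRES_HOST_AUTH_METHOD: trust  # assumption: passwordless access, matching the empty KONG_PG_PASSWORD above
  healthcheck:
    test: ["CMD", "pg_isready", "-U", "kong"]
    interval: 10s
    timeout: 5s
    retries: 10
  volumes:
    - ./kong-db-data:/var/lib/postgresql/data
  restart: on-failure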

Update 2020
Create a bridge network so the containers can reach each other:
docker network create my-net
Start the kong-database container:
docker run -d --name kong-database --network my-net -p 5432:5432 -e "POSTGRES_USER=kong" -e "POSTGRES_HOST_AUTH_METHOD=trust" -e "POSTGRES_DB=kong" postgres:alpine
Run a temporary container to bootstrap the database schema for kong-database:
docker run --rm \
--network my-net \
-e "KONG_DATABASE=postgres" \
-e "KONG_PG_HOST=kong-database" \
kong:latest kong migrations bootstrap
Run the Kong container:
docker run -d --name kong \
--network my-net \
-e "KONG_DATABASE=postgres" \
-e "KONG_PG_HOST=kong-database" \
-e "KONG_PROXY_ACCESS_LOG=/dev/stdout" \
-e "KONG_ADMIN_ACCESS_LOG=/dev/stdout" \
-e "KONG_PROXY_ERROR_LOG=/dev/stderr" \
-e "KONG_ADMIN_ERROR_LOG=/dev/stderr" \
-e "KONG_ADMIN_LISTEN=0.0.0.0:8001" \
-e "KONG_ADMIN_LISTEN_SSL=0.0.0.0:8444" \
-p 8000:8000 \
-p 8443:8443 \
-p 8001:8001 \
-p 8444:8444 \
kong:latest
Verify
curl http://127.0.0.1:8001
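To confirm that both containers joined the network, or to tear everything down again (container and network names as used above):
docker network inspect my-net    # kong and kong-database should both appear under "Containers"
docker stop kong kong-database
docker rm kong kong-database
docker network rm my-net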

Related

docker; invalid reference format

I tried to create a mongo-express container using the command below:
docker run -d \
-p 8081:8081 \
-e ME_CONFIG_MONGODB_ADMINUSERNAME =mongoadmin \
-e ME_CONFIG_MONGODB_ADMINPASSWORD =secret \
--net mongo-network \
--name mongo-express \
-e ME_CONFIG_MONGODB_SERVER =mongodb \
mongo-express
And got the error:
docker: invalid reference format.
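A likely cause is the stray space before each = sign: docker then reads =mongoadmin as a positional argument, i.e. as the image reference, which is not a valid reference. A sketch of the corrected command with the spaces removed, keeping the same names and network as above:
docker run -d \
-p 8081:8081 \
-e ME_CONFIG_MONGODB_ADMINUSERNAME=mongoadmin \
-e ME_CONFIG_MONGODB_ADMINPASSWORD=secret \
-e ME_CONFIG_MONGODB_SERVER=mongodb \
--net mongo-network \
--name mongo-express \
mongo-express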

Backup and restore Zookeeper and Kafka data

zookeeper cluster
sudo docker run -itd --name zookeeper-1 --restart=always -v /data/zookeeper/zoo.cfg:/opt/bitnami/zookeeper/conf/zoo.cfg -v /data/zookeeper/data:/bitnami --network sevenchats -e ALLOW_ANONYMOUS_LOGIN=yes -e ZOO_SERVER_ID=1 -e ZOO_SERVERS=0.0.0.0:2888:3888,ip2:2888:3888,ip3:2888:3888 -p 2181:2181 -p 2888:2888 -p 3888:3888 bitnami/zookeeper:3.8.0
sudo docker run -itd --name zookeeper-2 --restart=always --network sevenchats -e ALLOW_ANONYMOUS_LOGIN=yes -e ZOO_SERVER_ID=2 -e ZOO_SERVERS=ip1:2888:3888,0.0.0.0:2888:3888,ip3:2888:3888 -v /data/zookeeper/zoo.cfg:/opt/bitnami/zookeeper/conf/zoo.cfg -v /data/zookeeper/data:/bitnami -p 2181:2181 -p 2888:2888 -p 3888:3888 bitnami/zookeeper:3.8.0
sudo docker run -itd --name zookeeper-3 --restart=always --network sevenchats -e ALLOW_ANONYMOUS_LOGIN=yes -e ZOO_SERVER_ID=3 -e ZOO_SERVERS=ip1:2888:3888,ip2:2888:3888,0.0.0.0:2888:3888 -v /data/zookeeper/zoo.cfg:/opt/bitnami/zookeeper/conf/zoo.cfg -v /data/zookeeper/data:/bitnami -p 2181:2181 -p 2888:2888 -p 3888:3888 bitnami/zookeeper:3.8.0
kafka cluster
sudo docker run -itd --name kafka-1 --network sevenchats -v /data/kafka/data:/bitnami --restart=always -v /data/kafka/server.properties:/bitnami/kafka/config/server.properties -e KAFKA_CFG_ZOOKEEPER_CONNECT=ip1:2181,ip2:2181,ip3:2181 -e ALLOW_PLAINTEXT_LISTENER=yes -p 9092:9092 bitnami/kafka:3.1.0
sudo docker run -itd --name kafka-2 --network sevenchats -v /data/kafka/data:/bitnami --restart=always -v /data/kafka/server.properties:/bitnami/kafka/config/server.properties -e KAFKA_CFG_ZOOKEEPER_CONNECT=ip1:2181,ip2:2181,ip3:2181 -e ALLOW_PLAINTEXT_LISTENER=yes -p 9092:9092 bitnami/kafka:3.1.0
sudo docker run -itd --name kafka-3 --network sevenchats -v /data/kafka/data:/bitnami --restart=always -v /data/kafka/server.properties:/bitnami/kafka/config/server.properties -e KAFKA_CFG_ZOOKEEPER_CONNECT=ip1:2181,ip2:2181,ip3:2181 -e ALLOW_PLAINTEXT_LISTENER=yes -p 9092:9092 bitnami/kafka:3.1.0
After running these commands my cluster works fine.
I want to restore my Kafka data (topics) to a new cluster, so I took a backup of the Docker volumes on my host under the /data/kafka/ folder using tar:
tar cvzf data_kafka1.tar.gz data
When I check the size of the folder on my existing cluster, the /data/kafka folder is around 200MB.
When I copy the backup to the new cluster and extract it there with
tar xvzf data_kafka1.tar.gz
and then check the size of the data folder with
du -hs data
it is 20GB on the new cluster.
I want the backup to be roughly the same size on both clusters; a small difference is fine, but this is a huge difference.
What mistake am I making?
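One plausible explanation (an assumption, since it cannot be confirmed from the question alone) is sparse files: Kafka preallocates its index files, so they have a large apparent size while occupying few disk blocks, and a plain tar extraction writes them back out at full size. Comparing disk usage with the apparent size, and creating the archive with GNU tar's sparse handling, would confirm and avoid this:
# Blocks actually used vs. apparent (logical) size of the files
du -hs data
du -hs --apparent-size data
# Re-create the archive preserving sparse regions (GNU tar)
tar --sparse -czvf data_kafka1.tar.gz data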

translate docker run command (tensorflow-serving) into docker-compose

Here is the docker run command:
docker run -p 8501:8501 \
--name tfserving_classifier \
-e MODEL_NAME=img_classifier \
-t tensorflow/serving
Here is what I tried, but I am not able to get MODEL_NAME to work:
tensorflow-servings:
  container_name: tfserving_classifier
  ports:
    - 8501:8501
  command:
    - -e MODEL_NAME=img_classifier
    - -t tensorflow/serving

tensorflow-servings:
  container_name: tfserving_classifier
  image: tensorflow/serving
  environment:
    - MODEL_NAME=img_classifier
  ports:
    - 8501:8501
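In a compose file, the image and the -e variables belong under image: and environment: rather than command:, as in the second snippet. Once the container is up, the model status can be checked over TensorFlow Serving's REST API; this assumes the img_classifier model has actually been made available to the container (for example baked into the image or mounted as a volume, which the original run command does not show):
curl http://localhost:8501/v1/models/img_classifier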

What have I missed when linking the tor container to microblog?

From the beginning I was trying to link the microblog from here using the following commands:
sudo docker build -t microblog:latest .
sudo docker run --name mysql -d -e MYSQL_RANDOM_ROOT_PASSWORD=yes \
-e MYSQL_DATABASE=microblog -e MYSQL_USER=microblog \
-e MYSQL_PASSWORD=onion~12 \
mysql/mysql-server:5.7
sudo docker run --name elasticsearch -d -p 9200:9200 -p 9300:9300 \
-e "discovery.type=single-node" \
docker.elastic.co/elasticsearch/elasticsearch-oss:7.6.2
sudo docker run --name microblog -d -p 8000:5000 -e SECRET_KEY=my-secret-key \
-e MAIL_SERVER=smtp.googlemail.com -e MAIL_PORT=587 -e MAIL_USE_TLS=true \
-e MAIL_USERNAME=admin_onion123@gmail.com -e MAIL_PASSWORD=123456780 \
--link mysql:dbserver \
-e DATABASE_URL=mysql+pymysql://microblog:onion~12@dbserver/microblog \
--link elasticsearch:elasticsearch \
-e ELASTICSEARCH_URL=http://elasticsearch:9200 \
microblog:latest
Up to here everything is perfect!
Then I got the docker container with Tor Hidden-Service from here using the following command:
sudo docker run -itd --link microblog goldy/tor-hidden-service
The idea is that when I use the command:
sudo docker logs {container of tor}
it shows me myrandomoninonaddress.onion:5000.
What did I do wrong, or what have I missed, and why is it listening on port 5000 instead of 8000?
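Note that --link connects containers over the container's internal port, not the host-published one: -p 8000:5000 maps port 8000 only on the host, while microblog still listens on 5000 inside the Docker network, which is what the tor container sees and reports. A quick way to check, using a throwaway busybox container linked the same way (container name microblog as above):
docker run --rm --link microblog busybox wget -qO- http://microblog:5000   # reaches the app: it listens on 5000 in-container
docker run --rm --link microblog busybox wget -qO- http://microblog:8000   # fails: 8000 exists only on the host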

ACME certbot: Dry run is successful although no connection open

I have a question about certbot:
At the moment, I test it using the following command:
docker run -t --rm \
-v nginx-docker_certs:/etc/letsencrypt \
certbot/certbot \
certonly --dry-run --standalone \
-d mydomain.tld
When I run it, it fails because it cannot connect. So I add -p 80:80 -p 443:443:
docker run -t --rm \
-v nginx-docker_certs:/etc/letsencrypt \
-v nginx-docker_certs-data:/data/letsencrypt \
-p 80:80 -p 443:443 \
certbot/certbot \
certonly --dry-run --standalone \
-d mydomain.tld
Now it works.
When I remove -p 80:80 -p 443:443 and do another dry run, it is still successful. But why?
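A likely explanation (an assumption, since the full setup is not shown) is that the ACME staging server still holds a valid cached authorization for the domain from the earlier successful run, so the repeat dry run does not perform a new HTTP-01 challenge and never needs to reach port 80. It is also worth checking whether another container, such as the nginx that the nginx-docker_certs volume hints at, is already bound to ports 80/443 on the host:
# Containers publishing port 80 or 443 on this host
docker ps --filter "publish=80" --filter "publish=443"
# What actually answers on the domain from outside
curl -sI http://mydomain.tld | head -n 5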
