Setup of CyberArk Conjur server - Docker

I've created a Node.js project to store and fetch credentials from CyberArk Conjur (using its REST API).
But to test the application I'm struggling to set up a Conjur server.
The server runs fine inside its Docker container, but how do I access it from outside (the host machine)? Port mapping is not working.
Alternatively, is there a Conjur server hosted on the Internet for public use?
All I want is to test API calls.

As of this writing, the Conjur Node.js API is not actively supported. Here are some suggestions for testing the APIs.
Can I see the command you're using to start Docker, or your docker-compose file?
Method 1
If you're using the setup from the Conjur Quickstart Guide, your docker-compose.yml file should look something like:
...
  conjur:
    image: cyberark/conjur
    container_name: conjur_server
    command: server
    environment:
      DATABASE_URL: postgres://postgres@database/postgres
      CONJUR_DATA_KEY:
      CONJUR_AUTHENTICATORS:
    depends_on:
      - database
    restart: on-failure

  proxy:
    image: nginx:1.13.6-alpine
    container_name: nginx_proxy
    ports:
      - "8443:443"
    volumes:
      - ./conf/:/etc/nginx/conf.d/:ro
      - ./conf/tls/:/etc/nginx/tls/:ro
    depends_on:
      - conjur
      - openssl
    restart: on-failure
...
This means Conjur is running behind an NGINX proxy that terminates SSL; the Conjur container itself does not expose a port outside the Docker network it runs on. With this setup you can reach the Conjur server at https://localhost:8443 on your local machine (through the proxy).
Note: you will need the SSL certificate located in ./conf/tls/. Since this is a demo environment, it is made readily available for testing like this.
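For a quick end-to-end check of that HTTPS endpoint from Node.js, you can point your HTTP client at the demo certificate. This is only a sketch: axios and the nginx.crt filename are assumptions (the cert name in your checkout may differ), and the request simply confirms the proxy answers over TLS.
// check-conjur-tls.js -- sketch only, not part of the quickstart.
// Assumes: axios is installed (npm i axios) and the demo cert is at ./conf/tls/nginx.crt.
const fs = require("fs");
const https = require("https");
const axios = require("axios");

const agent = new https.Agent({
  // Trust the demo CA instead of disabling certificate checks.
  ca: fs.readFileSync("./conf/tls/nginx.crt"),
});

// If the demo certificate was not issued for "localhost", Node will also complain about a
// hostname mismatch; in that case add a hosts-file entry for the name on the certificate
// (or relax checkServerIdentity for this throwaway test).
axios
  .get("https://localhost:8443/", { httpsAgent: agent })
  .then((res) => console.log("Conjur proxy reachable, HTTP", res.status))
  .catch((err) => console.error("Request failed:", err.message));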
Method 2
If you do not care about security and are purely testing the REST API endpoints, you can cut out the SSL and modify the docker-compose.yml to expose the Conjur server's port to your local machine like this:
...
  conjur:
    image: cyberark/conjur
    container_name: conjur_server
    command: server
    environment:
      DATABASE_URL: postgres://postgres@database/postgres
      CONJUR_DATA_KEY:
      CONJUR_AUTHENTICATORS:
    ports:
      - "8080:80"
    depends_on:
      - database
    restart: on-failure
Now you'll be able to talk to the Conjur Server on your local machine through http://localhost:8080.
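Once that port is published, the REST calls your Node.js project makes can be exercised directly against http://localhost:8080. Below is a minimal sketch of the two calls involved (authenticate, then read a variable); the account name, login, and variable ID are assumptions modelled on the quickstart-style setup, so substitute your own.
// conjur-smoke-test.js -- sketch of the Conjur REST flow over plain HTTP.
// Assumptions: Node 18+ (global fetch), account "myConjurAccount", login "admin",
// and a variable "BotApp/secretVar" loaded by your policy. Adjust to your setup.
const CONJUR_URL = "http://localhost:8080";
const ACCOUNT = "myConjurAccount";
const LOGIN = "admin";
const API_KEY = process.env.CONJUR_API_KEY; // printed when the account is created

async function getSecret(variableId) {
  // 1. Exchange the API key for a short-lived access token.
  const authRes = await fetch(
    `${CONJUR_URL}/authn/${ACCOUNT}/${encodeURIComponent(LOGIN)}/authenticate`,
    { method: "POST", body: API_KEY }
  );
  if (!authRes.ok) throw new Error(`authenticate failed: ${authRes.status}`);
  // The token is sent base64-encoded in the Authorization header.
  const token = Buffer.from(await authRes.text()).toString("base64");

  // 2. Use the token to read the secret value.
  const secretRes = await fetch(
    `${CONJUR_URL}/secrets/${ACCOUNT}/variable/${encodeURIComponent(variableId)}`,
    { headers: { Authorization: `Token token="${token}"` } }
  );
  if (!secretRes.ok) throw new Error(`secret fetch failed: ${secretRes.status}`);
  return secretRes.text();
}

getSecret("BotApp/secretVar").then(console.log).catch(console.error);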
For more info: Networking in Docker Compose docs

Related

How to simulate AWS S3 in docker-compose using MinIO?

I have an application server that must fetch data from AWS S3, e.g. https://my-bucket.s3.us-east-1.amazonaws.com/assets/images/557a84a8-bd4b-7a8e-81c9-d445228187c0.png
I want to test this application server using docker-compose.
I can spin up a MinIO server quite easily, but how do I configure things so that my application accesses the local MinIO server as if it were the AWS one?
I am using the standard .NET AWS SDK and I do not want to change my application code for testing (this would defeat the point of the tests).
What I have so far:
version: '3.9'
services:
  s3:
    image: quay.io/minio/minio:RELEASE.2022-08-13T21-54-44Z
    command: minio server /data
    ports:
      - "9000:9000"
    environment:
      MINIO_ROOT_USER: minio
      MINIO_ROOT_PASSWORD: minio123
      MINIO_ACCESS_KEY: minio_access_key
      MINIO_SECRET_KEY: minio_secret_key
    restart: always
  server:
    image: server:latest
    ports:
      - "8080:8080"
    environment:
      AWS_ACCESS_KEY_ID: minio_access_key
      AWS_SECRET_ACCESS_KEY: minio_secret_key
    depends_on:
      s3:
        condition: service_started
You can set a network alias on your s3 container (https://docs.docker.com/compose/compose-file/compose-file-v3/#networks) to make it available as my-bucket.s3.us-east-1.amazonaws.com.
You can tell the MinIO server to recognize name-based buckets rooted at s3.us-east-1.amazonaws.com by setting the MINIO_DOMAIN environment variable (see the MinIO Server Configuration Guide).
You can change the port on which MinIO listens by setting the --address command line option (or by putting a proxy in front of it).
That gets you:
services:
  s3:
    image: quay.io/minio/minio:RELEASE.2022-08-13T21-54-44Z
    command: minio server /data
    ports:
      - "9000:9000"
    environment:
      MINIO_ROOT_USER: minio
      MINIO_ROOT_PASSWORD: minio123
      MINIO_ACCESS_KEY: minio_access_key
      MINIO_SECRET_KEY: minio_secret_key
      MINIO_DOMAIN: s3.us-east-1.amazonaws.com
    restart: always
    networks:
      default:
        aliases:
          - my-bucket.s3.us-east-1.amazonaws.com
This will almost work: your bucket would be available at http://my-bucket.s3.us-east-1.amazonaws.com:9000. If you want to make it available at https://my-bucket.s3.us-east-1.amazonaws.com, you would need to set up an SSL-terminating proxy in front of it (something like Traefik, Nginx, etc.), and you would need to create and install the necessary certificates so that your client trusts the server.
Hopefully this is enough to point you in the right direction!

Can't log MLflow artifacts to S3 with docker-based tracking server

I'm trying to set up a simple MLflow tracking server with Docker that uses a MySQL backend store and an S3 bucket for artifact storage. I'm using a simple docker-compose file to set this up on a server and supplying all of the credentials through a .env file.
When I try to run the sklearn_elasticnet_wine example from the mlflow repo (https://github.com/mlflow/mlflow/tree/master/examples/sklearn_elasticnet_wine) using TRACKING_URI = "http://localhost:5005" from the machine hosting my tracking server, the run fails with the following error: botocore.exceptions.NoCredentialsError: Unable to locate credentials.
I've verified that my environment variables are correct and available in my mlflow_server container. The runs show up in my backend store, so the run only seems to be failing at the artifact logging step. I'm not sure why this isn't working.
I've seen examples of how to set up a tracking server online, including https://towardsdatascience.com/deploy-mlflow-with-docker-compose-8059f16b6039. Some also use MinIO, but others just specify their S3 location as I have. I'm not sure what I'm doing wrong at this point. Do I need to explicitly set the ARTIFACT_URI as well? Should I be using MinIO?
Eventually, I'll be logging runs to the server from another machine, hence the nginx container. I'm pretty new to all of this, so I'm hoping it's something really obvious and easy to fix, but so far Google has failed me. TIA.
version: '3'
services:
  app:
    restart: always
    build: ./mlflow
    image: mlflow_server
    container_name: mlflow_server
    expose:
      - 5001
    ports:
      - "5001:5001"
    networks:
      - internal
    environment:
      - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
      - AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}
      - AWS_S3_BUCKET=${AWS_S3_BUCKET}
      - DB_USER=${DB_USER}
      - DB_PASSWORD=${DB_PASSWORD}
      - DB_PORT=${DB_PORT}
      - DB_NAME=${DB_NAME}
    command: >
      mlflow server
      --backend-store-uri mysql+pymysql://${DB_USER}:${DB_PASSWORD}@${DB_HOST}:${DB_PORT}/${DB_NAME}
      --default-artifact-root s3://${AWS_S3_BUCKET}/mlruns/
      --host 0.0.0.0
      --port 5001
  nginx:
    restart: always
    build: ./nginx
    image: mlflow_nginx
    container_name: mlflow_nginx
    ports:
      - "5005:80"
    networks:
      - internal
    depends_on:
      - app
networks:
  internal:
    driver: bridge
Finally figured this out. I didn't realize that the client also needed access to the AWS credentials for S3 storage: with this setup the MLflow client uploads artifacts to S3 directly, so AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY must also be set in the environment where the runs are launched, not just in the mlflow_server container.

How can I get a LAMP stack application in docker to send an HTTP POST to another service running on my local host?

I have a simple web application that uses HTML and PHP to capture information via an HTML form. It takes this information and uses an HTTP POST to send it, as XML, to a service also running on my local host on port 8400. I have this application running in a LAMP stack on macOS and it works perfectly. The XML gets to the service without any issues.
When I moved this app into a containerized LAMP stack using Docker, the application runs, but when my PHP tries to post it to the other service running on port 8400, it cannot get there.
I am confident that this is an issue with Docker networking, but I am not sure what the problem is. Here is my docker-compose.yml file:
version: "1.0"
services:
php:
build: "./php/"
networks:
- backend
- frontend
volumes:
- ./public_html/:/var/www/html/
apache:
build: "./apache/"
depends_on:
- php
- mysql
networks:
- frontend
- backend
ports:
- "8080:80"
volumes:
- ./public_html/:/var/www/html/
mysql:
image: mysql:5.6.40
networks:
- backend
environment:
- MYSQL_ROOT_PASSWORD=rootpassword
networks:
frontend:
backend:
I think the answer probably lies in allowing the container to reach my localhost network, but being relatively new to Docker, I am unsure.
How can I configure Docker networking to allow posting to services outside of the Docker network on specific ports?

How to connect to rabbitmq container from the application server container

I am new to Docker and I am trying to dockerize an application I have written in Golang. It is a simple web server that interacts with RabbitMQ and MongoDB.
It takes the credentials from a TOML file and loads them into a config struct before starting the application server on port 3000. These are the credentials:
mongo_server = "localhost"
database = "collect_db"
rabbitmq_server = "amqp://guest:guest@localhost:5672/"
If it can't connect to these URLs it fails with an error. Following is my docker-compose.yml:
version: '3'
services:
  rabbitmq:
    image: rabbitmq
    ports:
      - 5672:5672
  mongodb:
    image: mongo
    ports:
      - 27017:27017
  web:
    build: .
    image: palash2504/collect
    container_name: collect_service
    ports:
      - 3000:3000
    depends_on:
      - rabbitmq
      - mongodb
    links: [rabbitmq, mongodb]
But it fails to connect to rabbitmq at the URL used for local development, i.e. amqp://guest:guest@localhost:5672/.
I realise that the rabbitmq container might be running at a different address than the one provided in the config file.
I would like to know the correct way of setting any env credentials to be able to connect my app to rabbitmq.
Also, what approach would be best to change my application code for initializing connections to external services? I was thinking about ditching the config.toml file and using os.Getenv and os.Setenv to get the URLs for connections.
Localhost addresses are resolved, well, locally. They thus will not work inside containers, since they will look for a local address (i.e. inside the container).
Services can access each other by using service names as an address. So in the web container you can target mongodb for example.
You might give this a shot:
mongo_server = "mongodb"
database = "collect_db"
rabbitmq_server = "amqp://guest:guest@rabbitmq/"
It is advisable to set service target environment variables in the compose file itself:
#docker-compose.yml
#...other stuff...
  web:
    #...other stuff...
    environment:
      RABBITMQ_SERVER: rabbitmq
      MONGO_SERVER: mongodb
    depends_on:
      - rabbitmq
      - mongodb
This gives you a single place to make adjustments to the configuration.
As a side note, it seems to me that links: [rabbitmq, mongodb] can be removed. I would also advise against altering the container name (remove container_name: collect_service unless it is necessary).

Docker swarm containers connection issues

I am trying to use Docker Swarm to create a simple Node.js service that sits behind HAProxy and connects to MySQL. So, I created this docker-compose file:
And I have several issues:
The backend service can't connect to the database using localhost or 127.0.0.1, so I managed to connect to the database using the private IP (10.0.1.4) of the database container.
The backend tries to connect to the database too soon even though it depends on it.
The application can't be reached from outside.
version: '3'
services:
  db:
    image: test_db:01
    ports:
      - 3306
    networks:
      - db
  test:
    image: test-back:01
    ports:
      - 3000
    environment:
      - SERVICE_PORTS=3000
      - DATABASE_HOST=localhost
      - NODE_ENV=development
    deploy:
      replicas: 1
      update_config:
        parallelism: 1
        delay: 5s
      restart_policy:
        condition: on-failure
        max_attempts: 3
        window: 60s
    networks:
      - web
      - db
    depends_on:
      - db
    extra_hosts:
      - db:10.0.1.4
  proxy:
    image: dockercloud/haproxy
    depends_on:
      - test
    environment:
      - BALANCE=leastconn
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - 80:80
    networks:
      - web
    deploy:
      placement:
        constraints: [node.role == manager]
networks:
  web:
    driver: overlay
  db:
    driver: bridge
I am running the following:
docker stack deploy --compose-file=docker-compose.yml prod
All the services are running.
curl http://localhost/api/test <-- Not working
But, as mentioned above, I still have those issues.
Docker version 18.03.1-ce, build 9ee9f40
docker-compose version 1.18.0, build 8dd22a9
What am I missing?
The backend service can't connect to the database using localhost or 127.0.0.1, so I managed to connect to the database using the private IP (10.0.1.4) of the database container.
Don't use IP addresses for the connection; use the DNS name instead.
So you must change the connection to DATABASE_HOST=db, because this is the service name you've defined.
localhost is wrong, because the service is running in a different container than your test service.
The backend tries to connect to the database too soon even though it depends on it.
depends_on does not work as you expected. Please read https://docs.docker.com/compose/compose-file/#depends_on and the info box "There are several things to be aware of when using depends_on:"
TL;DR: depends_on option is ignored when deploying a stack in swarm mode with a version 3 Compose file.
The application can't be reached from outside.
Where is your HAProxy configuration that makes it forward requests for /api/test to http://test:3000?
For DATABASE_HOST=localhost: here localhost means "my own container". You need to use the name of the service where the database is hosted. localhost is a special DNS name that always points to the host of the application; when using containers, that is the container itself. In containerized development you need to forget about using localhost (it will point to the container) or IPs (they can change every time you run the container, and you lose load balancing), and simply use service names.
As for readiness: Docker has no way of knowing whether the application you started in a container is ready. You need to make the service tolerate the database being unavailable and implement some polling/fault-tolerance mechanism, as sketched below.
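Here is a minimal sketch of such a connect-with-retry loop for the Node.js backend. The mysql2 driver and the extra environment variable names are assumptions about your app; only DATABASE_HOST comes from your compose file.
// db.js -- connect-with-retry sketch; driver (mysql2) and env var names are assumptions.
const mysql = require("mysql2/promise");

async function connectWithRetry(retries = 10, delayMs = 3000) {
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      // DATABASE_HOST should be the service name ("db"), not localhost or an IP.
      return await mysql.createConnection({
        host: process.env.DATABASE_HOST || "db",
        user: process.env.DATABASE_USER,
        password: process.env.DATABASE_PASSWORD,
        database: process.env.DATABASE_NAME,
      });
    } catch (err) {
      console.log(`Database not ready (attempt ${attempt}/${retries}): ${err.code || err.message}`);
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw new Error("Could not reach the database, giving up");
}

module.exports = { connectWithRetry };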
Markus is correct, so follow his advice.
Here is a compose/stack file that should work, assuming your app listens on port 3000 in the container and db is set up with the proper password, database, etc. (you usually set these things as environment vars in compose based on the image's Docker Hub readme).
Your app should be designed to crash/restart/wait if it can't find the DB. That's the nature of all distributed computing... that anything "remote" (another container, host, etc.) can't be assumed to always be available. If your app just crashes, that's fine and a normal process for Docker, which will re-create the Swarm Service task each time.
If you could attempt to make this with public Docker Hub images, I can try to test for you.
Note that in Swarm, it's likely easier to use Traefik for the proxy (Traefik on Swarm Mode Guide), which will autoupdate and route incoming requests to the correct container based on the hostname you give the labels... But note that you should test first just the app and db, then after you know that works, try adding in a proxy layer.
Also, in Swarm, all your networks should be overlay, and you don't need to specify that, as it is the default in stacks.
Below is a sample using traefik with your above settings. I didn't give the test service a specific traefik hostname so it should accept all traffic coming in on 80 and forward to 3000 on the test service.
version: '3'
services:
  db:
    image: test_db:01
    networks:
      - db
  test:
    image: test-back:01
    environment:
      - SERVICE_PORTS=3000
      - DATABASE_HOST=db
      - NODE_ENV=development
    networks:
      - web
      - db
    deploy:
      labels:
        - traefik.port=3000
        - traefik.docker.network=web
  proxy:
    image: traefik
    networks:
      - web
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - "80:80"
      - "8080:8080" # traefik dashboard
    command:
      - --docker
      - --docker.swarmMode
      - --docker.domain=traefik
      - --docker.watch
      - --api
    deploy:
      placement:
        constraints: [node.role == manager]
networks:
  web:
  db:
