Change amazon/dynamodb-local Port - docker

How can I change the port DynamoDB listens on when using the Amazon Docker image?
According to this answer, the -port option can be passed when executing the DynamoDB jar file.
However, when running the image with docker run -p 8000:8000 amazon/dynamodb-local, I can only choose the host-to-container port mapping, not the port DynamoDB itself listens on.
Would I have to build my own Dockerfile, choosing a base OS and installing DynamoDB myself, just so I can run the java command with my own port?

You don't need to rebuild the image. Doing
docker inspect amazon/dynamodb-local
shows that the Entrypoint is set as
"Entrypoint": [
"java"
],
So running the below command gives an error:
$ docker run -p 8001:8001 amazon/dynamodb-local -port 8001
Unrecognized option: -port
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
because we are trying to pass the -port argument to java itself, when it needs to go to DynamoDBLocal.jar.
Once we know that, we can put the jar invocation into the docker run arguments, and the following works:
$ docker run -p 8001:8001 amazon/dynamodb-local -jar DynamoDBLocal.jar -port 8001
Initializing DynamoDB Local with the following configuration:
Port: 8001
InMemory: false
DbPath: null
SharedDb: false
shouldDelayTransientStatuses: false
CorsParams: *
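To confirm DynamoDB Local is actually reachable on the remapped port, you can list tables against the local endpoint (a quick check, assuming the AWS CLI is installed and configured with dummy credentials and a region):
aws dynamodb list-tables --endpoint-url http://localhost:8001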
I would raise this as a bug, but https://hub.docker.com/r/amazon/dynamodb-local/ doesn't mention a public GitHub repo where the issue could be raised.

For some reason docker-compose does not map ports to the host for dynamodb. The following configuration will not work for the dynamodb-local image:
ports:
  - "9000:8000"
The workaround is to put the port directly into the command: -port 9000 makes DynamoDB Local listen on port 9000 inside the container, which behaves like ports: - "9000:9000" in docker-compose.
version: "3.8"
services:
service-dynamodb:
image: amazon/dynamodb-local
image: "amazon/dynamodb-local:latest"
working_dir: /home/dynamodblocal
# -port {number} is the same as
# ports: - {number}:{number} in Docker
command: "-jar DynamoDBLocal.jar -port 9000 -sharedDb -optimizeDbBeforeStartup -dbPath ./data"
volumes:
- ./data:/home/dynamodblocal/data
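If you also need to reach the service from the host, rather than only from other containers on the same compose network, you can still add a matching mapping alongside the command above (a sketch, assuming you keep -port 9000):
    ports:
      - "9000:9000"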

I tried overriding the entrypoint on the official image, but hit an unknown error. You can, however, go with this approach instead:
Create a new Docker image using amazon/dynamodb-local as the base image.
Build it:
docker build -t mydb .
and run it:
docker run -it --rm -p 8001:8001 mydb
Below is the Dockerfile:
FROM amazon/dynamodb-local
WORKDIR /home/dynamodblocal
ENTRYPOINT ["java", "-jar", "DynamoDBLocal.jar", "-port", "8001"]
On startup you will see the configured port in the output.

Related

Docker ps doesn't show containers created/running with docker-compose

I'm trying to understand why I can't see containers created with docker-compose up -d when I run docker ps. If I go to the folder where the docker-compose.yaml is located and run docker-compose ps, I can see the container running. I tried the same on Windows (I normally use Ubuntu) and there it works as expected: I can see the container just by running docker ps. Could anyone give me a hint about this behavior, please? Thanks in advance.
Environment:
Docker version 20.10.17, build 100c701
docker-compose version 1.25.0, build unknown
Ubuntu 20.04.4 LTS
In my terminal I see this output:
/GIT/project$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
/GIT/project$ cd scripts/
/GIT/project/scripts$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
/GIT/project/scripts$ docker-compose ps
Name Command State Ports
-----------------------------------------------------------------------------------------------------
scripts_db_1 docker-entrypoint.sh --def ... Up 0.0.0.0:3306->3306/tcp,:::3306->3306/tcp,
33060/tcp
/GIT/project/scripts$
docker-compose.yaml
version: '3.3'
services:
  db:
    image: mysql:5.7
    # NOTE: use of "mysql_native_password" is not recommended: https://dev.mysql.com/doc/refman/8.0/en/upgrading-from-previous-series.html#upgrade-caching-sha2-password
    # (this is just an example, not intended to be a production configuration)
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    ports:
      # <Port exposed> : <MySQL port running inside container>
      - 3306:3306
    expose:
      # Opens port 3306 on the container
      - 3306
    # Where our data will be persisted
    volumes:
      - treip:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: changeit
      MYSQL_DATABASE: treip
volumes:
  treip:
I executed the command with sudo and the problem was solved: the container now appears in docker ps. So instead of docker-compose up I ran sudo docker-compose up. Sorry, my bad.
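A likely explanation (my assumption; the thread does not confirm it) is that docker and docker-compose were talking to different daemons, for example a rootless daemon for your user versus the system daemon that sudo reaches. You can compare the two:
docker context ls    # shows which endpoint your docker CLI targets
docker ps            # containers visible to your user's endpoint
sudo docker ps       # containers on the system daemon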

Celery connection refused with RabbitMQ running in Docker container

I am trying to test an API that sends long-running jobs to a queue processed by Celery workers. I am using RabbitMQ running in a Docker container as the message queue. However, when sending a message to the queue I get the following error: Error: [Errno 111] Connection refused
Steps to reproduce:
Start the RabbitMQ container: docker run -d -p 5672:5672 rabbitmq
Start the Celery worker: celery -A celery worker --loglevel=INFO
Build the Docker image: docker build -t fastapi .
Run the container: docker run -it -p 8000:8000 fastapi
Dockerfile:
FROM python:3.9
WORKDIR /
COPY . .
RUN pip install --no-cache-dir --upgrade -r ./requirements.txt
EXPOSE 8000
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8000"]
requirements.txt:
anyio==3.6.1
asgiref==3.5.2
celery==5.2.7
click==8.1.3
colorama==0.4.4
fastapi==0.78.0
h11==0.13.0
httptools==0.4.0
idna==3.3
pydantic==1.9.1
python-dotenv==0.20.0
PyYAML==6.0
sniffio==1.2.0
starlette==0.19.1
typing_extensions==4.2.0
uvicorn==0.17.6
watchgod==0.8.2
websockets==10.3
app.py:
from fastapi import FastAPI
import tasks

# the app instance that uvicorn app:app refers to
app = FastAPI()

@app.get("/{num}")
async def root(num):
    tasks.update_db.delay(num)
    return {"success": True}
tasks.py:
from celery import Celery
import time

celery = Celery('tasks', broker='amqp://')

@celery.task(name='update_db')
def update_db(num: int) -> None:
    time.sleep(30)
    return
You can't connect to rabbitmq on localhost; it's not running in the same container as your Python app. Since you've exposed rabbit on your host, you can connect to it using the address of your host. One way of doing that is starting the app container like this:
docker run -it -p 8000:8000 --add-host host.docker.internal:host-gateway fastapi
And then modify your code like this:
celery = Celery('tasks', broker='amqp://host.docker.internal')
With that code in place, let's re-run your example:
$ docker run -d -p 5672:5672 rabbitmq
$ docker run -d -p 8000:8000 --add-host host.docker.internal:host-gateway fastapi
$ curl http://localhost:8000/1
{"success":true}
There's no reason to publish the rabbitmq ports on your host if you only need to access it from within a container. When building an application with multiple containers, using something like docker-compose can make your life easier.
If you used the following docker-compose.yaml:
version: "3"
services:
rabbitmq:
image: rabbitmq
app:
build:
context: .
ports:
- "8000:8000"
And modified your code to connect to rabbitmq:
celery = Celery('tasks', broker='amqp://rabbitmq')
You could then run docker-compose up to bring up both containers. Your app would be exposed on host port 8000, but rabbitmq would only be available to your app container.
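One wrinkle worth noting (my addition, not part of the original answer): the rabbitmq container can take a few seconds before it accepts connections, so the app may fail on its first attempt. If your Compose version supports depends_on conditions (the newer compose spec does; the legacy v3 schema does not), a sketch of a healthcheck-based startup order looks like this:
services:
  rabbitmq:
    image: rabbitmq
    healthcheck:
      # rabbitmq-diagnostics ships with the rabbitmq image
      test: ["CMD", "rabbitmq-diagnostics", "-q", "ping"]
      interval: 10s
      retries: 5
  app:
    build:
      context: .
    depends_on:
      rabbitmq:
        condition: service_healthy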
Incidentally, rather than hardcoding the broker uri in your code, you might want to get that from an environment variable instead:
import os

celery = Celery('tasks', broker=os.getenv('APP_BROKER_URI'))
That allows you to use different connection strings without needing to rebuild your image every time. We'd need to modify the docker-compose.yaml to include the appropriate variable:
version: "3"
services:
rabbitmq:
image: rabbitmq
app:
build:
context: .
environment:
APP_BROKER_URI: "amqp://rabbitmq"
ports:
- "8000:8000"
Update tasks.py:
from celery import Celery
import time

celery = Celery('tasks', broker='amqp://user:pass@host:port//')

@celery.task(name='update_db')
def update_db(num: int) -> None:
    time.sleep(30)
    return

How to set up alertmanager.service for running in docker container

I am running prometheus in a docker container, and I want to configure an AlertManager to send me an email when the service is down. I created alert_rules.yml and prometheus.yml, and I run everything with the following command, mounting both yml files into the container at /etc/prometheus:
docker run -d -p 9090:9090 --add-host host.docker.internal:host-gateway -v "$PWD/prometheus.yml":/etc/prometheus/prometheus.yml -v "$PWD/alert_rules.yml":/etc/prometheus/alert_rules.yml prom/prometheus
Now, I also want prometheus to send me an email when an alert comes up, and that's where I encounter some problems. I configured my alertmanager.yml as follows:
route:
  group_by: ['alertname']
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 1h
  receiver: email-me
receivers:
  - name: 'email-me'
    email_configs:
      - to: 'my_email@gmail.com'
        from: 'askonlinetraining@gmail.com'
        smarthost: smtp.gmail.com:587
        auth_username: 'my_email@gmail.com'
        auth_identity: 'my_email@gmail.com'
        auth_password: 'the_password'
I actually don't know if the smarthost parameter is configured correctly, since I can't find any documentation about it or which values it should contain.
I also created an alertmanager.service file:
[Unit]
Description=AlertManager Server Service
Wants=network-online.target
After=network-online.target

[Service]
User=root
Group=root
Type=simple
ExecStart=/usr/local/bin/alertmanager \
  --config.file /etc/alertmanager.yml

[Install]
WantedBy=multi-user.target
I think something here is messed up: the first parameter I pass to ExecStart is probably a path that doesn't exist in the container, but I have no idea what to replace it with.
I tried mounting the last two files into the docker container in the same directory where I mount the first two yml files by using the following command:
docker run -d -p 9090:9090 --add-host host.docker.internal:host-gateway -v "$PWD/prometheus.yml":/etc/prometheus/prometheus.yml -v "$PWD/alert_rules.yml":/etc/prometheus/alert_rules.yml -v "$PWD/alertmanager.yml":/etc/prometheus/alertmanager.yml -v "$PWD/alertmanager.service":/etc/prometheus/alertmanager.service prom/prometheus
But the mail alert is not working, and I don't know how to fix the configuration to run all of this smoothly in a Docker container. As I said, I suppose the main problem lies in the ExecStart command in alertmanager.service, but maybe I'm wrong. I can't find anything helpful online, so I would really appreciate some help.
The best practice with containers is to aim to run a single process per container.
In your case, this suggests one container for prom/prometheus and another for prom/alertmanager.
You can run these using docker as:
docker run \
  --detach \
  --name=prometheus \
  --volume=${PWD}/prometheus.yml:/etc/prometheus/prometheus.yml \
  --volume=${PWD}/rules.yml:/etc/alertmanager/rules.yml \
  --publish=9090:9090 \
  prom/prometheus:v2.26.0 \
  --config.file=/etc/prometheus/prometheus.yml
docker run \
  --detach \
  --name=alertmanager \
  --volume=${PWD}/alertmanager.yml:/etc/alertmanager/alertmanager.yml \
  --publish=9093:9093 \
  prom/alertmanager:v0.21.0
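Note (my addition): with plain docker run, the two containers also need a shared user-defined network, otherwise Prometheus cannot reach Alertmanager by name:
docker network create monitoring
Then add --network=monitoring to both docker run commands above; containers on a user-defined bridge resolve each other by their --name.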
A good tool when you run multiple containers is Docker Compose, in which case your docker-compose.yml could be:
version: "3"
services:
  prometheus:
    restart: always
    image: prom/prometheus:v2.26.0
    container_name: prometheus
    command:
      - --config.file=/etc/prometheus/prometheus.yml
    volumes:
      - ${PWD}/prometheus.yml:/etc/prometheus/prometheus.yml
      - ${PWD}/rules.yml:/etc/alertmanager/rules.yml
    expose:
      - "9090"
    ports:
      - 9090:9090
  alertmanager:
    restart: always
    depends_on:
      - prometheus
    image: prom/alertmanager:v0.21.0
    container_name: alertmanager
    volumes:
      - ${PWD}/alertmanager.yml:/etc/alertmanager/alertmanager.yml
    expose:
      - "9093"
    ports:
      - 9093:9093
and you could:
docker-compose up
In either case, you can then browse:
Prometheus on the host's port 9090, i.e. localhost:9090
Alertmanager on the host's port 9093, i.e. localhost:9093
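For the email alerts to flow, prometheus.yml also has to point at the Alertmanager. A minimal sketch of the relevant sections, assuming the service is named alertmanager as above and the rules file is mounted at /etc/alertmanager/rules.yml:
alerting:
  alertmanagers:
    - static_configs:
        - targets: ['alertmanager:9093']
rule_files:
  - /etc/alertmanager/rules.yml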

Ports not picked up by ansible docker module

I am using ansible (2.0) docker module to start a jboss docker container. My playbook looks as follows:
- name: Pull application jboss container
  docker:
    name: jboss
    image: jboss/wildfly
    state: started
    pull: always
    ports:
      - "9990:9990"
      - "8080:8080"
    command: "/opt/jboss/wildfly/bin/standalone.sh -b 0.0.0.0 -bmanagement 0.0.0.0"
I want to mimic the command shown in the docs:
docker run -p 8080:8080 -p 9990:9990 -it jboss/wildfly /opt/jboss/wildfly/bin/standalone.sh -b 0.0.0.0 -bmanagement 0.0.0.0
If I execute the playbook, and run docker ps, my ports are not bound to 9990, only 8080:
0.0.0.0:8080->8080/tcp
If I do not use the playbook, and only run my docker container using the aforementioned command that I want to mimic, I can see both ports:
0.0.0.0:8080->8080/tcp, 0.0.0.0:9990->9990/tcp
How would I use the docker module to bind both 8080 and 9990 ports?
I ended up manually exposing both ports via the expose option, which made this work:
- name: Pull application jboss container
  docker:
    name: jboss
    image: jboss/wildfly
    state: started
    pull: always
    expose:
      - 9990
      - 8080
    ports:
      - "9990:9990"
      - "8080:8080"
    command: "/opt/jboss/wildfly/bin/standalone.sh -b 0.0.0.0 -bmanagement 0.0.0.0"
I am not sure if that is the best answer, but for now, it is solving the problem of the ports not being exposed.
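For what it's worth (my addition, not part of the original answer): in later Ansible releases the old docker module was removed in favor of community.docker.docker_container. A rough equivalent, assuming that collection is installed:
- name: Pull application jboss container
  community.docker.docker_container:
    name: jboss
    image: jboss/wildfly
    state: started
    pull: true
    published_ports:
      - "9990:9990"
      - "8080:8080"
    command: "/opt/jboss/wildfly/bin/standalone.sh -b 0.0.0.0 -bmanagement 0.0.0.0"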

Putting docker run parameters into Dockerfile

I have a working docker command:
docker run -p 3001:8080 -p 50000:50000 -v /Users/thomas/Desktop/digital-ocean-jenkins/jenkins:/var/jenkins_home jenkins/jenkins:lts
I'd like to put these config variables in a Dockerfile:
FROM jenkins/jenkins:lts
EXPOSE 3001 8080
EXPOSE 50000 50000
VOLUME jenkins:var/jenkins_home
However, it's not picking up any of these configuration variables. How can I pass the parameters I am passing to docker run as part of the build?
I built and ran using this:
docker build -t treggi-jenkins .
docker run treggi-jenkins
I think you'd need to use docker-compose for something like that.
See docker-compose docs
The docker-compose file could look something like this:
version: '3'
services:
  jenkins:
    image: jenkins/jenkins:lts
    ports:
      - "3001:8080"
      - "50000:50000"
    volumes:
      - jenkins:/var/jenkins_home
volumes:
  jenkins:
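You can then bring it up with:
docker-compose up -d
which gives you the same port mappings and named volume as the original docker run command, with the Jenkins UI reachable on the host at http://localhost:3001.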
