Map Docker Compose volume with CloudWatch Logs - docker

My Docker Compose service mounts the host path /test/app as a volume, and the application logs are stored in /test/app. I then want to view those logs through the AWS CloudWatch console. But I am facing the error below when starting the services with docker-compose.
version: '3.9'
services:
  test-media:
    image: test-media:test
    container_name: test
    restart: "no"
    volumes:
      - /test/app:/app/test-media/logs
    ports:
      - 3000:3000
    networks:
      - test-network
networks:
  test-network:
"Cannot start service test-media: error while creating mount source path '/test/app': mkdir /test: read-only file system"
Please suggest how to map the Docker volume path /test/app to a CloudWatch Logs group.
I followed the steps below for the CloudWatch configuration.
To install and configure CloudWatch Logs on an existing Ubuntu Server
curl https://s3.amazonaws.com/aws-cloudwatch/downloads/latest/awslogs-agent-setup.py -O
sudo python ./awslogs-agent-setup.py --region ap-south-1
Step 1 of 5: Installing pip ...libyaml-dev does not exist in system python-dev does not exist in system DONE
Step 2 of 5: Downloading the latest CloudWatch Logs agent bits ... DONE
Step 3 of 5: Configuring AWS CLI ...
AWS Access Key ID [****************UY56]:
AWS Secret Access Key [****************67BR]:
Default region name [ap-south-1]:
Default output format [json]:
Step 4 of 5: Configuring the CloudWatch Logs Agent ...
Path of log file to upload [/var/log/syslog]: /test/app/test-media.log
Destination Log Group name [/test/app/test-media.log]: /test/app/test-media.log
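For reference, the wizard's answers end up in the agent's awslogs.conf. A minimal sketch of the resulting stanza is shown below; the log_stream_name and datetime_format values are assumptions and should be adjusted to the application's log format.

# Hypothetical stanza in /var/awslogs/etc/awslogs.conf matching the wizard answers above.
# log_stream_name and datetime_format are assumptions, not values from the wizard.
[/test/app/test-media.log]
file = /test/app/test-media.log
log_group_name = /test/app/test-media.log
log_stream_name = {instance_id}
datetime_format = %Y-%m-%d %H:%M:%S
initial_position = start_of_file
buffer_duration = 5000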

Related

Running a Multi-service docker compose script in Google Compute Engine VM

I have a docker-compose.yml file which specifies two services, aaa and bbb, as follows:
version: "3.4"
services:
aaa:
platform: linux/amd64
build: .
image: aaa
environment:
- ENV_VAR=1
volumes:
- ./data:/root/data
ports:
- 5900:5900
restart: on-failure
bbb:
image: bbb
build: ./service_directory
platform: linux/amd64
environment:
- PYTHONUNBUFFERED=1
volumes:
- ./data:/root/data
ports:
- 5901:5901
restart: on-failure
depends_on:
- aaa
I'm hoping to run both of the above services simultaneously on a Google Cloud Compute Engine VM via cloudbuild.yaml, which reads:
steps:
  - name: 'gcr.io/$PROJECT_ID/docker-compose'
    args: ['up']
tags: ['cloud-builders-community']
My deployment script looks like
#!/bin/bash
container=mycontainer       # container name
pid=my-nginx-363907         # project id
zone=us-west4-b
instance=instance-${zone}   # instance name

gcloud builds submit \
  --tag gcr.io/${pid}/${container} \
  --project=${pid}

gcloud compute instances create-with-container ${instance} \
  --zone=${zone} \
  --tags=http-server,https-server \
  --machine-type=e2-micro \
  --container-image gcr.io/${pid}/${container} \
  --project=${pid}

gcloud compute instances list --project=${pid}
Here's my directory structure:
project
├── cloudbuild.yaml
├── docker-compose.yml
├── Dockerfile
└── service_directory
    └── Dockerfile
The docker compose up command does kick in, but it appears to build only service aaa, and not bbb. What's worse, service aaa does not actually appear to run or to have been installed on the VM instance. This is despite messages of apparent success:
ID CREATE_TIME DURATION SOURCE IMAGES STATUS
ab4785dd-7c4e-413d-acf6-1fdc64308387 2022-09-29T11:25:28+00:00 6M17S gs://my-nginx-363907_cloudbuild/source/1664450585.644241-04488e692b644a6186d922270dfbe667.tgz gcr.io/my-nginx-363907/aaa (+1 more) SUCCESS
Can someone please explain how to run both the services on Google Cloud Compute Engine VM as specified by the docker-compose.yml file?
You probably need not use Cloud Build if you just want to run Docker Compose.
Cloud Build is often used (but not limited to) building container images.
You could (but need not) use Cloud Build to build the 2 container images that your docker-compose.yaml uses (aaa, bbb) but you would need to revise the cloudbuild.yaml to just perform the build and push steps and then you'd need to revise docker-compose.yaml to consume the images produced by Cloud Build.
I think you should create a Compute Engine VM, ensure that Docker, Docker Compose, and your build content (.) are available on it, and then run docker-compose up as you would on any Linux machine.
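For the Cloud Build route, a minimal sketch of a build-and-push-only cloudbuild.yaml is shown below; the gcr.io image names are assumptions, and docker-compose.yml would then reference those pushed images instead of building locally.

# Hedged sketch: build and push both images; the image names are assumptions.
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/aaa', '.']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/bbb', './service_directory']
images:
  - 'gcr.io/$PROJECT_ID/aaa'
  - 'gcr.io/$PROJECT_ID/bbb'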

Docker - all-spark-notebook Communications link failure

I'm new to using Docker and Spark.
My docker-compose.yml file is:
volumes:
  shared-workspace:
services:
  notebook:
    image: docker.io/jupyter/all-spark-notebook:latest
    build:
      context: .
      dockerfile: Dockerfile-jupyter-jars
    ports:
      - 8888:8888
    volumes:
      - shared-workspace:/opt/workspace
And the Dockerfile-jupyter-jars is:
FROM docker.io/jupyter/all-spark-notebook:latest
USER root
RUN wget https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.28/mysql-connector-java-8.0.28.jar
RUN mv mysql-connector-java-8.0.28.jar /usr/local/spark/jars/
USER jovyan
To start it up I run
docker-compose up --build
The server is up and running and I'm interested in using Spark SQL, but it throws an error when trying to connect to the MySQL server:
com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
I can see mysql-connector-java-8.0.28.jar in the "jars" folder, and the same SQL statement works in a non-Docker Apache Spark installation.
The MySQL server is also reachable from the host on which I'm running Docker.
Do I need to enable something to allow external connections? Any ideas?
Reference: https://hub.docker.com/r/jupyter/all-spark-notebook
The docker-compose.yml and Dockerfile-jupyter-jars files were correct. Since I was using mysql-connector-java-8.0.28.jar, the connection requires SSL unless it is explicitly disabled.
jdbc:mysql://user:password@xx.xx.xx.xx:3306/inventory?useSSL=FALSE&nullCatalogMeansCurrent=true
I'm going to leave this example here for reference: Docker - all-spark-notebook with MySQL dataset
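For reference, a minimal PySpark read using that kind of connection string might look like the sketch below; the host, database, table, and credentials are placeholders, not values from the original post.

# Hedged PySpark sketch; host, database, table, and credentials are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("mysql-example").getOrCreate()

df = (
    spark.read.format("jdbc")
    .option("url", "jdbc:mysql://xx.xx.xx.xx:3306/inventory?useSSL=false&nullCatalogMeansCurrent=true")
    .option("driver", "com.mysql.cj.jdbc.Driver")
    .option("dbtable", "some_table")
    .option("user", "user")
    .option("password", "password")
    .load()
)
df.show()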

Boto3 timeout connecting to local dynamodb but can curl

I have been trying to follow various instructions and troubleshooting steps to get one Docker container to connect, via boto3, to another Docker container running local DynamoDB. References/troubleshooting so far:
- In Compose, you can just use automatic linking. Incidentally, I tried explicitly specifying a shared network but could not get that to work.
- Make sure to use the resource name from Compose, not localhost.
- A GitHub repo with a template.
Dockerfile (docker build -t min-example:latest .):
FROM python:3.8
RUN pip install boto3
RUN mkdir /app
COPY min_example.py /app
WORKDIR /app
Docker compose (min-example.yml):
version: "3.3"
services:
db:
container_name: db
image: amazon/dynamodb-local
ports:
- "8000:8000"
app:
image: min-example:latest
container_name: app
depends_on:
- db
min_example.py
import boto3

if __name__ == '__main__':
    ddb = boto3.resource('dynamodb',
                         endpoint_url='http://db:8000',
                         region_name='dummy',
                         aws_access_key_id='dummy',
                         aws_secret_access_key='dummy')

    existing_tables = [t.name for t in list(ddb.tables.all())]
    print(existing_tables)

    existing_tables = [t.name for t in list(ddb.tables.all())]
    print(existing_tables)
Run with
docker-compose -f min-example.yml run app python min_example.py
It hangs on the ddb.tables.all() call, and times out with the error:
botocore.exceptions.ReadTimeoutError: Read timeout on endpoint URL: "http://db:8000/"
Interestingly, I can curl:
docker-compose -f min-example.yml run app curl http://db:8000/
{"__type":"com.amazonaws.dynamodb.v20120810#MissingAuthenticationToken","message":"Request must contain either a valid (registered) AWS access key ID or X.509 certificate."}
Which suggests the containers can communicate.
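For what it's worth, while debugging this kind of hang it can help to shorten the client timeouts so the failure surfaces quickly; a sketch using botocore's Config is below (illustrative values, not a fix for the underlying issue).

# Hedged sketch: fail fast instead of waiting for the default read timeout.
import boto3
from botocore.config import Config

cfg = Config(connect_timeout=3, read_timeout=3, retries={'max_attempts': 1})

ddb = boto3.resource('dynamodb',
                     endpoint_url='http://db:8000',
                     region_name='dummy',
                     aws_access_key_id='dummy',
                     aws_secret_access_key='dummy',
                     config=cfg)
print([t.name for t in ddb.tables.all()])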

ERROR Unable to push image 'library/web:latest' to registry 'docker.io'. Error: denied: requested access to the resource is denied - Kompose up

I have a simple docker-compose.yml file defined this way:
version: '3'
services:
  db:
    image: postgres:10.5-alpine
    ports:
      - 5432:5432
    volumes:
      - ./tmp/postgres_data:/var/lib/postgresql/data
  web:
    build:
      context: .
      dockerfile: Dockerfile
    command: /bin/bash -c "rm -f /tmp/server.pid && bundle exec rails server -b 0.0.0.0 -P /tmp/server.pid"
    ports:
      - 3000:3000
    depends_on:
      - db
    volumes:
      - .:/app
I'm using Kompose (https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/#kompose-up) to convert my docker-compose.yml to Kubernetes.
When I do kompose convert, everything looks fine. This is the output:
✗ kompose convert
INFO Kubernetes file "db-service.yaml" created
INFO Kubernetes file "web-service.yaml" created
INFO Kubernetes file "db-deployment.yaml" created
INFO Kubernetes file "db-claim0-persistentvolumeclaim.yaml" created
INFO Kubernetes file "web-deployment.yaml" created
INFO Kubernetes file "web-claim0-persistentvolumeclaim.yaml" created
My issue is that when I do kompose up I get the following errors:
✗ kompose up
WARN Volume mount on the host "/Users/salo/Desktop/ibm-watson-ruby/tmp/postgres_data" isn't supported - ignoring path on the host
INFO Build key detected. Attempting to build image 'web'
INFO Building image 'web' from directory 'ibm-watson-ruby'
INFO Image 'web' from directory 'ibm-watson-ruby' built successfully
INFO Push image enabled. Attempting to push image 'web'
INFO Pushing image 'library/web:latest' to registry 'docker.io'
WARN Unable to retrieve .docker/config.json authentication details. Check that 'docker login' works successfully on the command line.: Failed to read authentication from dockercfg
INFO Authentication credentials are not detected. Will try push without authentication.
INFO Attempting authentication credentials 'docker.io
ERRO Unable to push image 'library/web:latest' to registry 'docker.io'. Error: denied: requested access to the resource is denied
FATA Error while deploying application: k.Transform failed: Unable to push Docker image for service web: unable to push docker image(s). Check that `docker login` works successfully on the command line
As a note, I'm currently logged in to my Docker Hub account. I did docker login
Thanks in advance! :)
Kubernetes basically requires a Docker registry to be able to work; it cannot build local images. You mention you have a Docker Hub account, so you need to add a reference to that as the container's image: in the docker-compose.yml file.
version: '3'
services:
  web:
    build: .
    image: myname/web   # <-- add this line
    ports:
      - 3000:3000
    depends_on:
      - db
    # command: is in the image
    # volumes: overwrite code in the image and don't work in k8s
When Kompose tries to push the image, it will use the image: name. You should be able to separately run docker-compose push, which will do the same thing.
Note that I deleted the volumes: that bind-mounts your application code into your container. This setup does not work in Kubernetes: it doesn't have access to your local system and you can't predict which node will actually be running your application. It's worth double-checking in plain Docker that your built image works the way you expect, without overwriting its code. (Debugging this in Docker will be easier than debugging it in Kubernetes.)
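As a quick check of that push path, something along these lines should exercise the same credentials and image name outside of Kompose (a sketch, assuming the image: value from the snippet above):

# Hedged sketch: verify registry credentials, then build and push the tagged image manually.
docker login
docker-compose build web
docker-compose push web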

How to deploy container using docker-compose to google cloud?

I'm quite new to GCP and have mostly been using AWS. I am currently trying to play around with GCP and want to deploy a container using docker-compose.
I set up a very basic docker-compose.yml file as follows:
# docker-compose.yml
version: '3.3'
services:
  git:
    image: alpine/git
    volumes:
      - ${PWD}:/git
    command: "clone https://github.com/PHP-DI/demo.git"
  composer:
    image: composer
    volumes:
      - ${PWD}/demo:/app
    command: "composer install"
    depends_on:
      - git
  web:
    image: php:7.4-apache
    ports:
      - "8080:${PORT:-80}"
      - "8000:${PORT:-8000}"
    volumes:
      - ${PWD}/demo:/var/www/html
    command: php -S 0.0.0.0:8000 -t /var/www/html
    depends_on:
      - composer
So the container will get the code from git, then install the dependencies using composer and finally be available on port 8000.
On my machine, running docker-compose up does everything. However, how can I push this docker-compose setup to Google Cloud?
I have tried building a container using the docker/compose image and a Dockerfile as follows:
FROM docker/compose
WORKDIR /opt
COPY docker-compose.yml .
WORKDIR /app
CMD docker-compose -f /opt/docker-compose.yml up web
Then I pushed the container to the registry. From there I tried deploying to:
- Cloud Run - did not work, as I could not find a way to specify a mounted volume for /var/run/docker.sock
- Kubernetes - I mounted docker.sock but I keep getting an error in the logs that /app from the git service is read-only
- Compute Engine - same error as above
I don't want to make a container by copying all the local files into it and then uploading it, as the dependencies could be really big, making the container heavy to push.
I have a working docker-compose and just want to use it on GCP. What's the easiest way?
This can be done by creating a cloudbuild.yaml file in your project root directory.
Add the following step to cloudbuild.yaml:
steps:
  # running docker-compose
  - name: 'docker/compose:1.26.2'
    args: ['up', '-d']
On Google Cloud Platform > Cloud Build: configure the file type of your build configuration as Cloud Build configuration file (yaml or json) and enter the file location cloudbuild.yaml.
If the repository event that invokes the trigger is set to "push to a branch", then Cloud Build will run your docker-compose.yml to build your containers.
Take a look at Kompose. It can help you convert the docker-compose instructions into Kubernetes-specific deployments and services. You can then apply the Kubernetes files against your GKE clusters. Note that you will have to build the containers and store them in Container Registry first, and update the image tags in the service definitions accordingly.
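A minimal sketch of that workflow, with hypothetical project, image, and manifest names, might look like this:

# Hedged sketch: build and push one image, convert the compose file, then apply the generated manifests.
docker build -t gcr.io/my-project/web:latest .
docker push gcr.io/my-project/web:latest
kompose convert -f docker-compose.yml
kubectl apply -f web-deployment.yaml -f web-service.yaml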
If you are trying to set up the same environment as an on-premises VM, you can create a GCE instance, install Docker and Docker Compose on it, and run the compose file there. Ref: https://dev.to/calvinqc/the-easiest-docker-docker-compose-setup-on-compute-engine-1op1
