docker service logs empty (?) when deployed in stack on AWS - docker

I've got a docker-compose.yml which, when deployed locally with either docker stack or docker-compose, yields 3 services (parse-server, mongodb, web-app in nginx). I can get logs from those services using docker service logs <id>.
Using the same docker-compose.yml to deploy the stack to Amazon EC2, docker service logs <id> calls against the running services return nothing, as if I were cat'ing an empty file.
Does anybody know what could cause this and/or how I can fix it?

When you deploy a swarm to AWS using the buttons from the Docker docs or via Docker Cloud, I believe it usually pipes all container output to CloudWatch, organized by individual container. This is only helpful if that is how you created your swarm.
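If your swarm was created that way, one way to confirm is to look for the corresponding log groups and streams with the AWS CLI. A minimal sketch, assuming the AWS CLI is configured; the group and stream names below are placeholders and depend on how the template named them:
# List CloudWatch log groups and find the one created for the swarm
aws logs describe-log-groups
# List the per-container log streams in that group (group name is a placeholder)
aws logs describe-log-streams --log-group-name my-swarm-log-group
# Fetch events from one container's stream (stream name is a placeholder)
aws logs get-log-events --log-group-name my-swarm-log-group --log-stream-name my-service.1.abc123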

Related

Running multiple docker containers in the same environment

I can't seem to connect the API running on my localhost with the web app that's also running locally; they are containerized in 2 different containers, and API calls don't seem to reach the endpoints of my application.
What's the way to run them both in the same environment?
I've used Maven to create a .jar of my app and did docker build and docker run on a specified port on localhost. I'm currently searching for how to run docker-compose for these two so that the endpoints can be reached. (Note the API is on port 1026 and the app on port 1028.)
Any ideas?
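One common approach is to put both containers on the same user-defined Docker network so they can reach each other by container name rather than localhost. A minimal sketch, with the image and container names as placeholder assumptions:
# Create a shared bridge network (name is arbitrary)
docker network create app-net
# Run the API and the web app on that network; inside it, the app can call http://api:1026
docker run -d --name api --network app-net -p 1026:1026 my-api-image
docker run -d --name webapp --network app-net -p 1028:1028 my-webapp-image
docker-compose does the equivalent automatically: all services defined in one compose file share a default network and can resolve each other by service name.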

Kubernetes on Docker for Windows -> AKS/EKS

With the Kubernetes orchestrator now available in the stable version of Docker Desktop for Win/Mac, I've been playing around with running an existing compose stack on Kubernetes locally.
This works fine, e.g., docker stack deploy -c .\docker-compose.yml myapp.
Now I want to go to the next step of running this same application in a production environment using the likes of Amazon EKS or Azure AKS. These services expect proper Kubernetes YAML files.
My question is: what's the best way to get these files? More specifically:
Presumably, docker stack is performing some conversion from Compose YAML to Kubernetes YAML 'under the hood'. Are there documentation/source code links as to what is going on here, and can that converted YAML be exported?
Or should I just be using Kompose?
It seems that running the above docker stack deploy command against a remote context (e.g., AKS/EKS) is not possible and that one must deploy with kubectl instead. Can anyone confirm?
docker stack deploy with a Compose file to Kube only works on Docker's Kubernetes distributions - Docker Desktop and Docker Enterprise.
With the recent federation announcement you'll be able to manage AKS and EKS with Docker Enterprise, but using them directly means you'll have to use Kubernetes manifest files and kubectl.
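If you go the Kompose route, the conversion itself is straightforward. A minimal sketch, assuming Kompose is installed and your kubectl context points at the AKS/EKS cluster:
# Convert the compose file into Kubernetes manifests (roughly one Deployment and Service per compose service)
kompose convert -f docker-compose.yml
# Apply the generated manifests to the remote cluster
kubectl apply -f .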

Deploying multiple docker containers to AWS ECS

I have created Docker containers using docker-compose. In my local environment, I am able to bring up my application without any issues.
Now I want to deploy all my Docker containers to AWS EC2 (ECS). After going over the ECS documentation, I found out that we can make use of the same docker-compose to deploy to ECS using the ECS-CLI. But the ECS-CLI is not available for Windows instances as of now. So now I am not sure how to use my docker-compose to build all my images with a single command and deploy them to ECS from a Windows instance.
It seems like I have to deploy my Docker containers to ECS one by one, with steps like these:
From the ECS Control Panel, create a Docker Image Repository.
Connect your local Docker client with your Docker credentials in ECS:
Copy and paste the Docker login command from the previous step. This will log you in for 12 hours
Tag your image locally ready to push to your ECS repository – use the repo URI from the first step
Push the image to your ECS repository
Create tasks with the web UI, or manually as a JSON file
Create a cluster, using the web UI.
Run your task specifying the EC2 cluster to run on
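Roughly, the login, tag, and push steps above map to commands like the following. This is a sketch only; the account ID, region, and repository name are placeholders:
# Authenticate the local Docker client against the ECR registry backing ECS
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
# Tag the locally built image with the repository URI and push it
docker tag myapp:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest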
Is there any other way of running the Docker containers in ECS?
Docker Compose is the wrong tool here when you're using ECS.
You can configure multiple containers within a task definition, as seen here in the CloudFormation docs:
ContainerDefinitions is a property of the AWS::ECS::TaskDefinition resource that describes the configuration of an Amazon EC2 Container Service (Amazon ECS) container
Type: "AWS::ECS::TaskDefinition"
Properties:
Volumes:
- Volume Definition
Family: String
NetworkMode: String
PlacementConstraints:
- TaskDefinitionPlacementConstraint
TaskRoleArn: String
ContainerDefinitions:
- Container Definition
Just list multiple containers there and all will be launched together on the same machine.
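The same idea also works outside CloudFormation, straight from the AWS CLI. A minimal sketch; the family name, image URIs, and memory values are placeholder assumptions:
# Register one task definition containing two containers that will be scheduled together
aws ecs register-task-definition \
  --family myapp \
  --container-definitions '[
    {"name": "web", "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/web:latest", "memory": 256, "essential": true},
    {"name": "api", "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/api:latest", "memory": 256, "essential": true}
  ]'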
I was in the same situation as you. One way to resolve this was using Terraform to deploy our containers as task definitions on AWS ECS.
So we use the docker-compose.yml to run locally, and the Terraform configuration is a kind of mirror of our docker-compose setup on AWS.
Another option is Kubernetes, where you can translate the docker-compose file into Kubernetes resources.

How to collect Docker Swarm logs to separate files, one log file per service?

I have Docker Swarm enabled on my server (Docker version is 17.03.0-ce).
Docker Swarm has services which are located on different nodes.
Each service writes logs to files.
Is it possible to use some Docker logging driver to collect logs from all services and store them centrally, in a separate file for each service? For example, service-1.log, service-2.log.
Is it possible to use a custom format for the log file names? For example, 10-07-17-service-1.log, 10-07-17-service-2.log?
I have read about using the ELK stack, but I want to implement a simpler approach (writing logs to files).
You can create a sidecar container that will execute a command like this:
docker service logs -f serviceName >> service-1-"`date +"%d-%m-%Y"`".log
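A minimal sketch of such a sidecar, assuming it runs on a manager node (docker service logs needs the manager API) with the Docker socket and a host log directory mounted in; the image, service name, and paths are placeholders:
# Sidecar with access to the host's Docker socket; log files land in /var/log/swarm on the host
docker run -d --name log-collector \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/log/swarm:/logs \
  docker sh -c 'docker service logs -f serviceName >> /logs/serviceName-"$(date +%d-%m-%y)".log'
One such container per service (or a small loop over docker service ls inside the script) gives you one dated file per service.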

How to deploy docker container to Cloud Foundry?

My application is composed of two separate Docker containers: one a Grails-based web application and the second a RESTful Python Flask application. Both Docker images are sitting on my local computer. They are not hosted on Docker Hub. They are proprietary and I don't want to host them publicly.
I would like to try Cloud Foundry to deploy these Docker containers and see how it works. However, from the documentation I get the sense that Cloud Foundry doesn't support deploying Docker containers that sit on a local machine.
Question
Is there a way to deploy Docker containers sitting on a local computer to Cloud Foundry? If not, what is a way to securely host the containers somewhere from which CF can fetch them?
Is CloudFoundry capable of running a docker container that is a Python Flask application?
One option you have is to not use Docker images at all and just push your code directly, which is one of the nice features of CF. PCF comes with a Python buildpack that should automatically detect your Flask app.
Another option would be to run your own trusted Docker registry, push your images there, and then, when you push your app, tell it to grab the images from your registry. If you google "cloud foundry docker registry" you get the following useful results you should check out:
https://github.com/cloudfoundry-community/docker-registry-boshrelease
http://docs.pivotal.io/pivotalcf/1-8/adminguide/docker.html#caveats
https://docs.pivotal.io/pivotalcf/1-7/opsguide/docker-registry.html
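Once the images are reachable from a registry, the push itself is a single command per app. A sketch, assuming Docker support is enabled on the CF installation; the app name, registry host, and credentials are placeholders:
# One-time (admin): allow Docker-image apps to run
cf enable-feature-flag diego_docker
# Push the Flask app straight from a (private) registry image
CF_DOCKER_PASSWORD=mysecret cf push flask-api --docker-image registry.example.com/flask-api:latest --docker-username myuser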
