What I want to accomplish is the equivalent of:
docker run -v /var/run/docker.sock:/var/run/docker.sock <image>
EDITED:
I followed this: Does ECS task definition support volume mapping syntax?
But then the task definition fails to save, because this type of bind mount is not available on Fargate.
Is there another way to accomplish this on Fargate?
You are trying to access the Docker socket from within a container managed by AWS Fargate, and as far as I know that is not available: Fargate does not expose the host or its Docker daemon. Instead, you need to make API calls to AWS to launch new containers (for example ECS RunTask), or probably rethink the whole thing.
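A minimal sketch of launching another container through the ECS API with the AWS CLI instead of the Docker socket; the task role would need ecs:RunTask permission, and the cluster, task definition, subnet, and security group names below are placeholders, not values from the question:

# Ask ECS to start another Fargate task rather than talking to a Docker daemon
aws ecs run-task \
  --cluster my-cluster \
  --launch-type FARGATE \
  --task-definition my-task:1 \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],securityGroups=[sg-0123456789abcdef0],assignPublicIp=ENABLED}'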
I'm going to move what I previously ran on EC2 over to ECS.
On traditional EC2, the -v /home/ubuntu:/data option allowed the volume to be set.
First, I added a volume through "Add volume" in the task definition and proceeded with the mount as before.
However, this did not produce the expected result.
So I have some concerns.
First, on Ubuntu the host path is /home/ubuntu, but I'm not sure how the path is laid out on ECS Fargate.
Secondly, I am wondering whether appending :/data as the container path is the right approach.
(Screenshots: the defined volume; the volume as set on the existing EC2 instance, written in JSON; the mount points in ECS.)
With Fargate you would need to use an EFS volume for this. You don't have access to host volumes with Fargate.
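A minimal sketch of the relevant task-definition pieces, assuming an existing EFS file system with mount targets reachable from the task's subnets (EFS on Fargate requires platform version 1.4.0 or later); the file system ID, volume name, and paths are placeholders:

{
  "volumes": [
    {
      "name": "data",
      "efsVolumeConfiguration": {
        "fileSystemId": "fs-0123456789abcdef0",
        "rootDirectory": "/"
      }
    }
  ],
  "containerDefinitions": [
    {
      "name": "app",
      "mountPoints": [
        { "sourceVolume": "data", "containerPath": "/data" }
      ]
    }
  ]
}

This fragment would be merged into the full task definition JSON; everything else (image, CPU, memory, networking) stays as usual.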
My task is to deploy a third-party OSRM service on Amazon ECS Fargate.
The OSRM Docker container needs to be given a file containing geodata at startup.
The problem is that Amazon ECS Fargate does not provide access to the host file system, and does not provide a way to attach files and folders when a container is deployed.
Therefore, I would like to create an intermediate image that bakes the geodata file in at build time, so that the container can use it as its data volume at startup.
Thanks!
As I understand it, Amazon ECS is a plain container orchestrator and does not implement docker swarm, so things like docker configs are off the cards.
However, you should be able to do something like this:
# Create (but do not start) a container from the base image
ID=$(docker create --name my-osrm osrm-base-image)
# Copy the geodata file into the stopped container's filesystem
docker cp ./file.ext "$ID":/path/in/container
# Start the container with the file already in place
docker start "$ID"
The solution turned out to be quite simple.
For this Dockerfile, I created an image on my local machine and hosted it on DockerHub:
FROM osrm/osrm-backend:latest
COPY data /data
ENTRYPOINT ["osrm-routed","--algorithm","mld","/data/andorra-latest.osm.pbf"]
After that, I launched this image in AWS ECS without any extra settings or volumes.
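For reference, a minimal sketch of how such an image would be built and published; the repository name is a placeholder:

# Build the image next to the Dockerfile and the data/ directory, then push it to DockerHub
docker build -t myuser/osrm-andorra:latest .
docker push myuser/osrm-andorra:latest

The ECS task definition then simply points at myuser/osrm-andorra:latest and needs no volume configuration.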
I am pretty new to Docker, and this looks like a very simple question that I cannot seem to find an answer for, unless I am missing something obvious. I am creating a service using the following command:
docker service create --env-file host.env ...
The swarm consists of Windows nodes only, but Linux nodes may join in the future. Where do I put the host.env file so I do not have to hardcode the path?
I would recommend using Docker Compose. But generally, host.env should be in the directory from which you run docker service create.
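A minimal sketch, assuming the command is run from a project directory that contains host.env; the --env-file path is read by the Docker CLI on the machine where you run the command, not on the individual swarm nodes, so a relative path sidesteps OS-specific absolute paths (directory, service, and image names are placeholders):

cd ./my-project                                      # hypothetical directory containing host.env
docker service create --name my-service --env-file ./host.env my-image:latest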
I have a monolithic application that I am trying to containerize. The folder structure is like this:
--app
  |
  |- file.py       <- has a variable foo that is passed in
--configs
  |
  |- variables.py  <- contains the foo variable
Right now, I have the app in one container and the configs in another container. When I try to start up the app container, it fails because of a dependency on the variable from the config container.
What am I doing wrong, and how should I approach this issue? Should the app and config be in one big container for now?
I was thinking docker-compose could solve this issue. Thoughts?
The variables.py file could live in a volume that the app container accesses: declare it as a volume in the config container and import it with the --volumes-from config option to docker run. With Docker Compose you would use the volumes_from directive.
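A minimal sketch with docker run, assuming hypothetical images config-image and app-image, and that config-image declares /configs (holding variables.py) as a VOLUME:

# Start the container that owns the config volume
docker run -d --name config config-image
# Start the app with all of config's volumes mounted, so /configs/variables.py is visible
docker run -d --name app --volumes-from config app-image

In a Compose file using the v2 format, the app service would list volumes_from: [config] for the same effect.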
A less recommended way: run the config container first, then mount the host's Docker socket into the app container via -v /var/run/docker.sock:/var/run/docker.sock. That lets the app container talk to the Docker daemon and reach the config container, but I think it will need privileged access. This is similar to the Docker-in-Docker concept.
You can also consider a design change to your application: serve that foo variable over HTTP, which results in a much simpler solution. A simple web server on the config side and the urllib3 module in Python on the app side can exchange the variable over internal Docker networking.
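A rough sketch of the idea, using busybox httpd and curl as stand-ins for the Python web server and urllib3 client mentioned above; the network name, port, and paths are illustrative:

# A user-defined network gives containers DNS names (so the app can reach http://config:8080)
docker network create appnet
# Serve the configs directory over HTTP from a throwaway web server
docker run -d --name config --network appnet -v "$PWD/configs":/www busybox httpd -f -p 8080 -h /www
# The app (here just curl) fetches the variable definition over the internal network
docker run --rm --network appnet curlimages/curl -s http://config:8080/variables.py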
I started playing with Docker Cloud and am trying to deploy a tomav/docker-mailserver container to an EC2 instance. The EC2 instance and the dockercloud-agent seem to work fine for container deployment.
The docker-compose.yml uses the hostname and domainname parameters, which are required to configure it properly, but I can't find their equivalent in Docker Cloud's interface.
By default, one of them takes the container's auto-generated name, which I need to override.
Does anybody know if I am missing something? Or is it not possible yet?
Thank you for your help!
What you want is a stack file, which is roughly Docker Cloud's equivalent of docker-compose.yml.
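A minimal sketch of such a stack file, assuming the Docker Cloud stackfile format accepts the compose-style hostname and domainname keys (worth confirming against the stackfile reference); the domain is a placeholder:

mail:
  image: tomav/docker-mailserver:latest
  hostname: mail
  domainname: example.com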