ECS Container Environment Configuration - docker

I have a recently-Dockerized web app that I would like to get running on AWS ECS, and a few fundamental concepts (which I don't see explained in the AWS docs) are throwing me off.
First, when you Edit/configure a new container, it asks you to specify the image to use, but then also has an Environment section:
The Entry point, Command and Working directory fields look suspiciously similar to the commands I already specified when creating my Docker image (here's my Dockerfile):
FROM openjdk:8
RUN mkdir /opt/myapp
ADD build/libs/myapp.jar /opt/myapp
WORKDIR /opt/myapp
EXPOSE 9200
ENTRYPOINT ["java", "-Dspring.config=.", "-jar", "myapp.jar"]
So if ECS is asking me for an image (that's already been built using this Dockerfile), why in tarnation do I need to re-specify the exact same values for WORKDIR, EXPOSE, ENTRYPOINT, CMD, etc.?!?
Also outside of ECS I run my container like so:
docker run -it -p 9200:9200 -d --net="host" --env-file ~/myapp-local.env --name myapp myapp
Notice how I specify the env file? Does ECS support env files, or do I really have to enter each and every env var from my env file into this UI here?
Also I see there is a Docker Labels section near the bottom:
Are these different than env vars, or are they interchangeable?

Yes, you need to add the environment variables either through the UI or through the CLI.
For the CLI you pass them as part of a JSON task definition template.
If you have already specified these values in your Dockerfile, you don't need to pass them again.
Any value that is passed externally will overwrite the internal/default value from the Dockerfile.
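For illustration, a minimal sketch of a container definition inside an ECS task definition JSON might look like the following (the image URI and variable names are placeholders; entryPoint, command and workingDirectory are simply omitted so the Dockerfile defaults apply):
{
  "containerDefinitions": [
    {
      "name": "myapp",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest",
      "portMappings": [
        { "containerPort": 9200, "hostPort": 9200 }
      ],
      "environment": [
        { "name": "SPRING_PROFILES_ACTIVE", "value": "local" },
        { "name": "DB_HOST", "value": "db.example.com" }
      ]
    }
  ]
}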

Related

How to set environment variables in a docker container after it starts

I need to set some environment variables in a Docker container after it starts. When the container starts, the env var X gets a value; I then want to set the env var Y to the first part of X's value, using this command:
Y=$(echo $X | cut -d'#' -f 1)
Is there any way to do this?
I tried ENTRYPOINT and CMD in the Dockerfile, but it doesn't work.
The container will be deployed on a Kubernetes cluster, and I also tried setting the variables in the config.yaml file, but that doesn't work either.
You are on the right track: you would have to handle this in either CMD or ENTRYPOINT, because you want the value to be dynamic and derived from existing data. The specifics depend on your container and use case, though.
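As a minimal sketch (the script name, its location and the final command are placeholders for whatever your image actually runs), a small wrapper entrypoint could derive Y before handing off to the main process:
#!/bin/sh
# entrypoint.sh - derive Y from X at container start, then hand off to the main process
export Y="$(echo "$X" | cut -d'#' -f 1)"
exec "$@"
And in the Dockerfile:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["your-main-command"]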
For static values, you can use the ENV instruction in your Dockerfile like below:
ENV PORT 8080
Source and more info - https://vsupalov.com/docker-build-time-env-values/

Docker containers out of same image don't work as expected

I have a docker-compose.yml set up like this:
app:
  build:
    dockerfile: ./docker/app/Dockerfile.dev
  image: test/test:${ENV}-test-app
  ...
Dockerfile called here has this line present:
...
RUN ln -s ../overrides/${ENV}/plugins ../plugins
...
There is also a script I run to bring the whole environment up (it depends on several containers, so I have omitted the irrelevant parts).
It is a bash script that runs the following:
ENV=$1 docker-compose -p $1 up -d --force-recreate --build app
What I wanted to achieve is being able to run two app containers at the same time, which works as follows:
sh initializer.sh foo -> creates foo-test-app container
sh initializer.sh bar -> creates bar-test-app container
Now the issue I'm having is that even with the --force-recreate flag present, the two images created are actually seen as the same image with two different tags.
When I inspect the containers, both have a symbolic link to:
overrides/foo/plugins
Creating the new container doesn't redo that part. How can I fix it?
Also, if I sh into one container and change the symbolic link, it is automatically changed in the other container as well.
The $ENV in your Dockerfile is not the same as the one in your compose file.
When you run docker-compose up, it can roughly be seen as a docker build followed by a docker run. Docker builds the image layer by layer, and at that stage there is no env var called ENV; only at docker run time will $ENV be used.
Environment variables can be used at the build stage, though: they are passed in via ARG.
# compose.yml
build:
  context: frontend
  args:
    - BUILD_ENV=${BUILD_ENV}

# Dockerfile
ARG BUILD_ENV
RUN ./node_modules/.bin/ng build --$BUILD_ENV
You can do this to solve your problem; however, it will create one image per project, which you may not want. Alternatively, you can do it in an entrypoint script.
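Applied to your setup, a rough sketch could look like this (assuming a compose file format that supports build args, and with the build context assumed to be the project root):
# docker-compose.yml
app:
  build:
    context: .
    dockerfile: ./docker/app/Dockerfile.dev
    args:
      - ENV=${ENV}
  image: test/test:${ENV}-test-app

# Dockerfile.dev
ARG ENV
RUN ln -s ../overrides/${ENV}/plugins ../plugins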
I have found the answer to be in the project flag when creating my containers. So this is what I did:
docker-compose -p foo up -d
docker-compose -p bar up -d
This brings the containers up as two separate projects.
Link to documentation

Configuring cassandra.yaml for password auth inside docker

Can someone tell me how to change cassandra.yaml inside a docker container?
I want to enable password authentication inside docker for cassandra access.
If you're using the official Cassandra Docker image, you'll already have the docker-entrypoint.sh. See: https://github.com/docker-library/cassandra/blob/master/docker-entrypoint.sh for some of the variables already defined, as examples.
To have these included when your container starts, you could:
1. fork and edit the docker-entrypoint.sh starting at (currently) line 51 to add your own variables, like this:
for yaml in \
    broadcast_address \
    broadcast_rpc_address \
    [your_selected_yaml_variable] \
    ...
2. include the values you want to override in docker-compose.yml, like this:
environment:
  - CASSANDRA_SEEDS=DC1C1,DC1C2,DC2C1,DC2C2
  - CASSANDRA_CLUSTER_NAME=Dev_Cluster
  - CASSANDRA_ENDPOINT_SNITCH=GossipingPropertyFileSnitch
  - CASSANDRA_[YOUR_SELECTED_YAML_VARIABLE]
You could create a Docker entrypoint (basically, a script file that you instruct Docker to copy into the container and that you define as the entrypoint).
COPY docker-entrypoint.sh /docker-entrypoint.sh
ENTRYPOINT ["bin/sh", "/docker-entrypoint.sh"]
In that file you can make whatever changes you like to the cassandra.yaml file using sed.
sed -ri '/^# data_file_directories:/{n;s/^#.*/'" - $CASSANDRA_DATA_DIRECTORY"'/}' "$CASSANDRA_CONFIG/cassandra.yaml"
Note that $CASSANDRA_DATA_DIRECTORY and $CASSANDRA_CONFIG are some variables defined in advance.
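For the password-authentication case specifically, a minimal sketch of such an entrypoint might look like this (it assumes $CASSANDRA_CONFIG points at the config directory and that the container is meant to run Cassandra in the foreground):
#!/bin/sh
# docker-entrypoint.sh - enable password auth, then start Cassandra in the foreground
set -e

CASSANDRA_CONFIG=${CASSANDRA_CONFIG:-/etc/cassandra}

# Swap the default AllowAllAuthenticator for PasswordAuthenticator
sed -ri 's/^authenticator:.*/authenticator: PasswordAuthenticator/' "$CASSANDRA_CONFIG/cassandra.yaml"

exec cassandra -f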

Linode/lamp + docker-compose

I want to install the linode/lamp container to work on a WordPress project locally without messing up my machine with all the LAMP dependencies.
I followed this tutorial which worked great (it's actually super simple).
Now I'd like to use docker-compose because I find it more convenient to simply having to type docker-compose up and being good to go.
Here what I have done:
Dockerfile:
FROM linode/lamp
RUN service apache2 start
RUN service mysql start
docker-compose.yml:
web:
  build: .
  ports:
    - "80:80"
  volumes:
    - .:/var/www/example.com/public_html/
When I do docker-compose up, I get:
▶ docker-compose up
Recreating gitewordpress_web_1...
Attaching to gitewordpress_web_1
gitewordpress_web_1 exited with code 0
Gracefully stopping... (press Ctrl+C again to force)
I'm guessing I need a command argument in my docker-compose.yml but I have no idea what I should set.
Any idea what I am doing wrong?
You cannot start those two processes in the Dockerfile.
RUN commands in a Dockerfile are executed while building the image, not when a container starts.
In fact, many base images such as the Debian ones are specifically designed not to allow starting any services during the build.
What you can do is create a file called run.sh in the same folder that contains your Dockerfile.
Put this inside:
#!/usr/bin/env bash
service apache2 start
service mysql start
tail -f /dev/null
This script just starts both services and forces the console to stay open.
You need to put it inside your container though, which you do via two lines in the Dockerfile. Overall, I'd use this Dockerfile:
FROM linode/lamp
COPY run.sh /run.sh
RUN chmod +x /run.sh
CMD ["/bin/bash", "-lc", "/run.sh"]
This ensures that the file is properly run when the container is fired up, so that the container stays running and those services actually get started.
What you should also look out for is that port 80 is actually available on your host machine. If anything is already bound to it, this compose file will not work.
Should this be the case for you (or you're not sure), try changing the port mapping to something like 81:80 and try again.
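That only means adjusting the ports section of the docker-compose.yml above, for example:
ports:
  - "81:80"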
I would also like to point you to another resource where a LAMP server is already configured for you; you might find it handy for your local development environment:
https://github.com/sprintcube/docker-compose-lamp

Docker Compose and execute command on starting container

I am trying to get my head around the command option in Docker Compose. In my current docker-compose.yml I start the prosody Docker image (https://github.com/prosody/prosody-docker) and I want to create a list of users when the container is actually started.
The documentation of the container states that a user can be made using environment options LOCAL, DOMAIN, and PASSWORD, but this is a single user. I need a list of users.
When reading some stuff around the internet, it seemed that using the command option I should be able to execute commands in a starting or running container.
xmpp:
  image: prosody/prosody
  command: prosodyctl register testuser localhost testpassword
  ports:
    - "5222:5222"
    - "127.0.0.1:5347:5347"
But this does not seem to work; I checked the running container using docker exec -it <containerid> bash, but the user is not created.
Is it possible to execute a command on a started container using docker-compose or are there other options?
The COMMAND instruction is exactly the same as what is passed at the end of a docker run command, for example echo "hello world" in:
docker run debian echo "hello world"
If the image defines an ENTRYPOINT, the command is interpreted as arguments to it; otherwise it simply replaces the default CMD (in debian's case, bash). In the case of your image, it gets passed to this script. Looking at that script, your command will just get passed to the shell. I would have expected any command you pass to run successfully, but the container will exit once your command completes. Note that the default command is set in the Dockerfile to CMD ["prosodyctl", "start"], which is presumably a long-running process that starts the server.
I'm not sure how Prosody works (or even what it is), but I think you probably want to either map in a config file which holds your users, or set up a data container to persist your configuration. The first solution would mean adding something like:
volumes:
  - ./my_prosody_config:/etc/prosody
To the docker-compose file, where my_prosody_config is a directory holding the config files.
The second solution could involve first creating a data container like:
docker run -v /etc/prosody -v /var/log/prosody --name prosody-data prosody-docker echo "Prosody Data Container"
(The echo should complete, leaving you with a stopped container which has volumes set up for the config and logs. Just make sure you don't docker rm this container by accident!)
Then in the docker-compose file add:
volumes_from:
  - prosody-data
Hopefully you can then add users by running docker exec as you did before, then running prosodyctl register at the command line. But this is dependent on how prosody and the image behave.
CMD is directly related to ENTRYPOINT in Docker (see this question for an explanation). So when changing one of them, you also have to check how this affects the other. If you look at the Dockerfile, you will see that the default command is to start Prosody through CMD ["prosodyctl", "start"]. entrypoint.sh just passes this command through, as Adrian mentioned. However, your command overrides the default command, so your Prosody daemon is never started. Maybe you want to try something like
xmpp:
  image: prosody/prosody
  command: sh -c "prosodyctl register testuser localhost testpassword && prosodyctl start"
  ports:
    - "5222:5222"
    - "127.0.0.1:5347:5347"
instead. More elegant, and seemingly what the creator intended (judging from the entrypoint.sh script), would be something like
xmpp:
  image: prosody/prosody
  environment:
    - LOCAL=testuser
    - DOMAIN=localhost
    - PASSWORD=testpassword
  ports:
    - "5222:5222"
    - "127.0.0.1:5347:5347"
To answer your final question: no, it is not possible (as of now) to execute commands on a running container via docker-compose. However, you can easily do this with docker:
docker exec -i prosody_container_name prosodyctl register testuser localhost testpassword
where prosody_container_name is the name of your running container (use docker ps to list running containers).
