Running shell script against Localstack in docker container - docker

I've been using localstack to develop a service against, locally. I've just been running their docker image via docker run --rm -p 4567-4583:4567-4583 -p 8080:8080 localstack/localstack
And then I manually run a small script to set up my S3 buckets, SQS queues, etc.
Now, I'd like to make this easier for others, so I thought I'd just add a Dockerfile and a docker-compose.yml file. Unfortunately, when I try to get this up and running with docker-compose up, I get an error: the command from my setup script can't connect to the localstack services.
make_bucket failed: s3://localbucket Could not connect to the endpoint URL: "http://localhost:4572/localbucket"
Dockerfile:
FROM localstack/localstack
# since this is just local dev set up, localstack doesn't require
# anything specific here.
ENV AWS_DEFAULT_REGION='[useast1]'
ENV AWS_ACCESS_KEY_ID='[lloyd]'
ENV AWS_SECRET_ACCESS_KEY='[christmas]'
COPY bin/localSetup.sh /localSetup.sh
COPY fixtures/notifications.json /notifications.json
RUN ["chmod", "+x", "/localSetup.sh"]
RUN pip install awscli
# expose service & web dashboard ports
EXPOSE 4567-4582 8080
ENTRYPOINT ["/localSetup.sh"]
docker-compose.yml
version: '3'
services:
  localstack:
    build: .
    ports:
      - "8080:8080"
      - "4567-4582:4567-4582"
localSetup.sh
#!/bin/bash
aws --endpoint-url=http://localhost:4572 s3 mb s3://localbucket
#additional similar calls but left off for brevity
I've tried switching localhost to 127.0.0.1 in my script commands, but I wind up with the same error. I'm probably missing something silly here.

There is another way to create your custom AWS resources when localstack freshly starts up: instead of overriding the image's ENTRYPOINT with your setup script (which means LocalStack itself never starts, hence the connection errors), volume mount the script into LocalStack's init hook directory. On current images that is /etc/localstack/init/ready.d/ (older versions used /docker-entrypoint-initaws.d/); scripts placed there run once the services are ready. Since you already have a bash script for your resources, you can simply mount it there.
So my docker-compose file would be:
localstack:
  image: localstack/localstack:latest
  container_name: localstack_aws
  ports:
    - '4566:4566'
  volumes:
    - './localSetup.sh:/etc/localstack/init/ready.d/init-aws.sh'
Also, I would prefer awslocal over aws --endpoint-url in the script, since it wires up the endpoint and dummy credentials for you.
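For reference, a minimal init script using awslocal might look like the sketch below. The bucket name comes from the question; the queue name is purely illustrative:
#!/bin/bash
# Scripts mounted into /etc/localstack/init/ready.d/ run automatically
# once LocalStack reports ready, so no manual step is needed.
awslocal s3 mb s3://localbucket
# hypothetical queue name, shown only as the SQS equivalent
awslocal sqs create-queue --queue-name local-queue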

Try adding a hostname to the docker-compose file and editing your entrypoint script to reference that hostname.
docker-compose.yml
version: '3'
services:
  localstack:
    build: .
    hostname: localstack
    ports:
      - "8080:8080"
      - "4567-4582:4567-4582"
localSetup.sh
#!/bin/bash
aws --endpoint-url=http://localstack:4572 s3 mb s3://localbucket
This is the docker-compose-dev.yaml I used for testing an app against localstack. I brought it up with docker-compose -f docker-compose-dev.yaml up and used the same localSetup.sh you did.
version: '3'
services:
  localstack:
    image: localstack/localstack
    hostname: localstack
    ports:
      - "4567-4584:4567-4584"
      - "${PORT_WEB_UI-8082}:${PORT_WEB_UI-8082}"
    environment:
      - SERVICES=s3
      - DEBUG=1
      - DATA_DIR=${DATA_DIR- }
      - PORT_WEB_UI=${PORT_WEB_UI- }
      - DOCKER_HOST=unix:///var/run/docker.sock
    volumes:
      - "${TMPDIR:-/tmp/localstack}:/tmp/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
    networks:
      - backend
  sample-app:
    image: "sample-app/sample-app:latest"
    networks:
      - backend
    links:
      - localstack
    depends_on:
      - "localstack"
networks:
  backend:
    driver: 'bridge'
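To sanity-check the hostname wiring, you can exec into the app container and hit LocalStack's S3 endpoint by service name (this assumes the aws CLI is installed in the sample-app image):
# bring everything up in the background
docker-compose -f docker-compose-dev.yaml up -d
# from inside the app container, S3 resolves via the localstack hostname
docker-compose -f docker-compose-dev.yaml exec sample-app \
  aws --endpoint-url=http://localstack:4572 s3 ls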

Related

How to export Yii2 migrations into docker container

I have successfully containerized my basic Yii2 application with Docker and it runs on localhost:8000. However, I cannot use the app effectively as most of its data is stored in migration files. Is there a way I could export the migrations into Docker after running it (or during execution)?
This is my docker-compose file:
version: '2'
services:
  php:
    image: yiisoftware/yii2-php:7.1-apache
    volumes:
      - ~/.composer-docker/cache:/root/.composer/cache:delegated
      - ./:/app:delegated
    ports:
      - '8000:80'
    networks:
      - my-network
  db:
    image: mysql:5.7
    restart: always
    environment:
      - MYSQL_DATABASE=my-db
      - MYSQL_PASSWORD=password
      - MYSQL_ROOT_PASSWORD=password
    ports:
      - '3306:3306'
    expose:
      - '3306'
    volumes:
      - mydb:/var/lib/mysql
    networks:
      - my-network
  memcached:
    container_name: memcached
    image: memcached:latest
    ports:
      - "0.0.0.0:11211:11211"
volumes:
  mydb:
networks:
  my-network:
    driver: bridge
and my Dockerfile
FROM alpine:3.4
ADD . /
COPY ./config/web.php ./config/web.php
COPY . /var/www/html
# Let docker create a volume for the session dir.
# This keeps the session files even if the container is rebuilt.
VOLUME /var/www/html/var/sessions
It is possible to run yii commands in Docker. First, let the yii2 container run in the background or in another terminal tab. You can then run yii commands via docker exec, which lets you interact with the running container:
sudo docker exec -i <container-ID> php yii migrate/up
You can get the container ID using
sudo docker ps
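Putting the two together, a one-liner might look like this (the filter uses the image name from the compose file above; --interactive=0 skips the confirmation prompt):
# find the running php container by its image and apply the migrations
sudo docker exec -i $(sudo docker ps -q --filter "ancestor=yiisoftware/yii2-php:7.1-apache") \
  php yii migrate/up --interactive=0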

Dockerimage working on pull but not on pull image directive in yml file?

I have a Docker image in a GitLab registry.
When I run (after logging in on a target machine)
docker run -d -p 8081:8080/tcp gitlab.somedomain.com:5050/root/app
the Laravel app is available, running, and reachable. Things like php artisan config:clear work, and when I enter the container everything looks fine.
But I don't have any supporting services running, so I had the idea to create a YAML file for docker-compose to set things up, in docker-compose-gitlab.yml:
version: '3'
services:
  mysql:
    image: mysql:5.7
    container_name: my-mysql
    environment:
      - MYSQL_ROOT_PASSWORD=***
      - MYSQL_DATABASE=dbname
      - MYSQL_USER=username
      - MYSQL_PASSWORD=***
    volumes:
      - ./data/mysql:/var/lib/mysql
    ports:
      - "3307:3306"
  application:
    image: gitlab.somedomain.com:5050/root/app:latest
    build:
      context: .
      dockerfile: ./Dockerfile
    container_name: my-app
    ports:
      - "8081:8080"
    volumes:
      - .:/application
    env_file: .env.docker
    working_dir: /application
    depends_on:
      - mysql
    links:
      - mysql
Calling docker-compose --verbose -f docker-compose-gitlab.yml up shows me that the mysql service is created and working; the app also seems to be created, but then fails, exiting with code 0 and no further message.
If I add commands to my yml, such as php artisan config:clear, the error gets even less clear: it says it cannot find artisan, as if the command were executed outside the container, and exits with code 1. (artisan is a helper executed via php.)
When I run docker-compose with -d and then do docker ps, I can only see mysql running, but not the app.
When I use both strategies, the two containers do not share a common network and so cannot work together.
What did I miss? Is this the wrong strategy?
The problem was a leftover volumes directive, which overwrote my entire application with an empty directory.
You can just leave it out:
version: '3'
services:
  mysql:
    image: mysql:5.7
    container_name: my-mysql
    environment:
      - MYSQL_ROOT_PASSWORD=***
      - MYSQL_DATABASE=dbname
      - MYSQL_USER=username
      - MYSQL_PASSWORD=***
    volumes:
      - ./data/mysql:/var/lib/mysql
    ports:
      - "3307:3306"
  application:
    image: gitlab.somedomain.com:5050/root/app:latest
    build:
      context: .
      dockerfile: ./Dockerfile
    container_name: my-app
    ports:
      - "8081:8080"
    ## volumes:
    ##   - .:/application  ## this would overwrite the app
    env_file: .env.docker
    working_dir: /application
    depends_on:
      - mysql
    links:
      - mysql
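A quick way to confirm the fix, using the container name from the compose file above:
# bring the stack up in the background
docker-compose -f docker-compose-gitlab.yml up -d
# the app container should now stay running...
docker ps --filter "name=my-app"
# ...and the application code should be present inside it
docker exec my-app ls /application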
You can debug the containers' networking by listing the networks with docker network ls,
then inspecting the compose network with docker network inspect <ComposeNetworkID>.
Once you are sure that your services are not in the same network, remove your containers and recreate them with docker-compose -f docker-compose-gitlab.yml up.
If you notice they are in the same network, try using the container name instead of localhost to reach each other.
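As a concrete sketch of that workflow (the network name is hypothetical; compose derives it from the project directory name):
# list all networks; compose networks are usually named <project>_default
docker network ls
# inspect a network to see which containers are attached to it
docker network inspect someproject_default   # hypothetical name
# if the services are not on the same network, recreate the stack
docker-compose -f docker-compose-gitlab.yml down
docker-compose -f docker-compose-gitlab.yml up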

OSRM Server setup using docker-compose - container exited

I'm trying to set up an OSRM server on top of Docker. I need to configure it using docker-compose.yml inside my microservices project using .NET Core.
version: '3.4'
services:
  test_web:
    image: ${DOCKER_REGISTRY-}osrmweb
    build:
      context: .
      dockerfile: OSRM_WEB/Dockerfile
    ports:
      - 1111
  osrm-data:
    image: irony/osrm5
    volumes:
      - /data
  osrm:
    image: irony/osrm5
    volumes:
      - osrm-data
    ports:
      - 5000:5000
    command: ./start.sh Sweden http://download.geofabrik.de/europe/sweden-latest.osm.pbf
This is my docker-compose.yml file. When I run docker-compose up there is no error, but the container is Exited.
I can't see any response at this URL:
https://localhost:5000/route/v1/driving/13.388860,52.517037;13.397634,52.529407;13.428555,52.523219?overview=false
Is there anything else I need to do in the configuration? Any suggestions?
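A first diagnostic step for an exited container (standard Compose commands, nothing OSRM-specific) is to check exit codes and logs:
# show containers including exited ones, with their exit codes
docker-compose ps
# print each service's output to see why it stopped
docker-compose logs osrm
docker-compose logs osrm-data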

Host to Docker Container to Docker Container communication

I'm building 2 docker containers, "app" and "db", via a docker-compose file.
The app server just installs Java/Tomcat via a Dockerfile, which is what docker-compose uses to build.
The db server uses an MS SQL image.
When I run:
docker-compose up
I follow that with a build process of software I need to load which deploys a war to the tomcat directory in the app server and builds the database in the database server.
My problem is: the build process can reference localhost:8080 to install/patch the software on the app server, and localhost:1433 to install/patch the database portion on the database server. However, when I start Tomcat, the system doesn't come online because the app server can't connect to the database server via "localhost:1433", so I have to jump in and update the properties file after the build to the Docker-internal IP address, and THEN it works.
My question is: How am I able to get my localhost and my app container to reference the DB in the same manner in a database url?
Dockerfile for app server:
FROM centos:centos7
COPY apache-tomcat-9.0.20.tar.gz /tmp/
WORKDIR /tmp/
RUN yum -y update
RUN yum -y install java-11-openjdk-devel
RUN tar -xf apache-tomcat-9.0.20.tar.gz
RUN mv apache-tomcat-9.0.20 /opt/tomcat/
RUN export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.212.b04-0.el7_6.x86_64/
RUN export PATH=$PATH:$JAVA_HOME/jre/bin
RUN export CATALINA_HOME=/opt/tomcat/
RUN export PATH=$PATH:$CATALINA_HOME/bin
WORKDIR /opt/tomcat/webapps
RUN mkdir testapp
Docker-Compose File:
version: '3.3'
services:
  db:
    image: "mcr.microsoft.com/mssql/server:2017-latest"
    restart: always
    volumes:
      - db_data:/var/lib/mssql
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=Test123
    network_mode: bridge
    hostname: db
    ports:
      - "1433:1433"
  app:
    build: './testapp'
    volumes:
      - './system/build:/opt/tomcat/webapps/testapp/'
    ports:
      - "8080:8080"
      - "8009:8009"
    network_mode: bridge
    tty: true
    depends_on:
      - db
volumes:
  db_data:
Bring your services onto the same network and target each service by its service name. For that you need to define a Docker network, as below. With the following example, I can reach the DB at http://mongo:27017.
mongo:
  image: mongo:latest
  ports:
    - "27017:27017"
  volumes:
    - ./data/db:/data/db
  networks:
    - my-net
spring:
  depends_on:
    - mongo
  image: docker-spring-http-alpine
  ports:
    - "8080:8080"
  networks:
    - my-net
networks:
  my-net:
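Applied to the question above, once the app and db services share a user-defined network like this (the question's network_mode: bridge would need to be replaced by a named network, since only user-defined networks provide DNS-based service discovery), the app container can address the database by service name rather than localhost:
# verify that the db service resolves by name from inside the app container
docker-compose exec app getent hosts db
# the database URL in the properties file can then use the service name,
# e.g. (placeholder database name):
#   jdbc:sqlserver://db:1433;databaseName=mydb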

Prevent publishing ports defined in compose file

I have a Docker Compose file that defines a service that runs my application and a service that the application depends on:
services:
  frontend:
    build:
      context: .
    volumes:
      - "../.:/opt/app"
    ports:
      - "8080:8080"
    links:
      - redis
    image: node
    command: ['yarn', 'start']
  redis:
    image: redis
    expose:
      - "6379"
For development, this compose file exposes 8080 so that I can access the running code from a browser.
In Jenkins, however, I can't expose that port, as two jobs running simultaneously would conflict trying to bind to the same port on the Jenkins host.
Is there a way to prevent docker-compose from binding service ports? Like an inverse of the --service-ports flag?
For context:
In jenkins I run tests using docker-compose run frontend yarn test which won't map ports and so isn't a problem.
The issue presents when I try to run end to end browser tests against the application. I use a container to run CodeceptJS tests against a running instance of the app. In that case I need the frontend to start before I run the tests, as they will fail if the app is not up.
Q. Is there a way to prevent docker-compose from binding service ports?
It makes no sense to prevent something you are explicitly asking for: docker-compose starts things exactly as the docker-compose.yml file indicates.
Instead, I propose duplicating the frontend service using extends::
version: "2"
services:
frontend-base:
build:
context: .
volumes:
- "../.:/opt/app"
image: node
command: ['yarn', 'start']
frontend:
extends: frontend-base
links:
- redis
ports:
- "8080:8080"
frontend-test:
extends: frontend-base
links:
- redis
command: ['yarn', 'test']
redis:
image: redis
expose:
- "6379"
So use it as this:
docker-compose run frontend # in dev environment
docker-compose run frontend-test # in jenkins
Note that extends: is not available in version: "3", but they will bring it back again in the future.
To avoid binding to a fixed host port (and thus conflicting with parallel jobs), specify only the container-side port in the ports section; Docker then publishes it on a random ephemeral host port.
Instead of this:
ports:
  - 8080:8080
use this:
ports:
  - 8080
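With the short form, parallel Jenkins jobs won't collide over port 8080. If something on the host still needs to reach the service, you can look up which ephemeral port was assigned (frontend is the service name from the compose file above):
# show which host port was bound to container port 8080
docker-compose port frontend 8080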
