This is my docker-compose.yml:
version: '3.7'
services:
  app:
    command: run
    build:
      context: .
      dockerfile: Dockerfile.dev
    image: test/testapi-configuration
    ports:
      - '5005:5005'
      - '8080:8080'
    volumes:
      - './:/source:rw'
      - '~/.vault_token:/root/.vault_token'
  localstack:
    image: localstack/localstack
    ports:
      - '4566:4566'
    environment:
      - SERVICES=dynamodb
      - DEBUG=1
      - DATA_DIR=/tmp/localstack/data
      - DEFAULT_REGION=us-east-2
    volumes:
      - './.localstack:/tmp/localstack'
      - '/var/run/docker.sock:/var/run/docker.sock'
My Dockerfile.dev:
WORKDIR /source
ENTRYPOINT ["./gradlew"]
EXPOSE 5005
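(For reference, that snippet has no FROM line; a complete Dockerfile.dev along these lines might look like the sketch below, where the JDK base image is an assumption rather than something taken from the question.)
# Assumed base image; the original snippet omits its FROM line
FROM openjdk:11-jdk
WORKDIR /source
ENTRYPOINT ["./gradlew"]
# 5005 matches the port published in the compose file above (commonly the JVM remote-debug port)
EXPOSE 5005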
I set up my app by running:
localstack start
Then I run my API in the IntelliJ IDE and create the table in DynamoDB with:
aws dynamodb --endpoint-url=http://localhost:4566 --region=us-east-2 create-table --cli-input-json file://file_example.json
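(To double-check that the table actually landed in LocalStack rather than somewhere else, listing the tables against the same endpoint and region is a quick sanity check:)
aws dynamodb list-tables --endpoint-url=http://localhost:4566 --region us-east-2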
But I can't get the dynamodb-admin tool working.
Docs here: https://www.npmjs.com/package/dynamodb-admin
I understand that I have to execute:
DYNAMO_ENDPOINT=http://localhost:8080 dynamodb-admin -p 4566
But I've got the following error:
UnknownError: 405
at Request.extractError (/usr/local/lib/node_modules/dynamodb-admin/node_modules/aws-sdk/lib/protocol/json.js:52:27)
at Request.callListeners (/usr/local/lib/node_modules/dynamodb-admin/node_modules/aws-sdk/lib/sequential_executor.js:106:20)
at Request.emit (/usr/local/lib/node_modules/dynamodb-admin/node_modules/aws-sdk/lib/sequential_executor.js:78:10)
at Request.emit (/usr/local/lib/node_modules/dynamodb-admin/node_modules/aws-sdk/lib/request.js:686:14)
at Request.transition (/usr/local/lib/node_modules/dynamodb-admin/node_modules/aws-sdk/lib/request.js:22:10)
at AcceptorStateMachine.runTo (/usr/local/lib/node_modules/dynamodb-admin/node_modules/aws-sdk/lib/state_machine.js:14:12)
at /usr/local/lib/node_modules/dynamodb-admin/node_modules/aws-sdk/lib/state_machine.js:26:10
at Request.<anonymous> (/usr/local/lib/node_modules/dynamodb-admin/node_modules/aws-sdk/lib/request.js:38:9)
at Request.<anonymous> (/usr/local/lib/node_modules/dynamodb-admin/node_modules/aws-sdk/lib/request.js:688:12)
at Request.callListeners (/usr/local/lib/node_modules/dynamodb-admin/node_modules/aws-sdk/lib/sequential_executor.js:116:18)
The right way to do it was to bring LocalStack up through Compose instead:
docker-compose up localstack
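With LocalStack listening on 4566, dynamodb-admin's DYNAMO_ENDPOINT has to point at that port; the -p flag only changes the port the admin UI itself listens on. A hedged example (the UI then defaults to port 8001):
DYNAMO_ENDPOINT=http://localhost:4566 dynamodb-admin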
Related
I have an issue I discovered while trying to run PHPUnit tests that use cURL commands. I always get:
Connecting to `www.example.local` (www.example.local)|127.0.0.1|:80... failed: Connection refused.
I then tried to run wget and curl commands from the container's command line; same problem. My Docker setup is as follows.
In my computer’s /etc/hosts
127.0.0.1 www.example.local
127.0.0.1 api.example.local
and so forth, and it works when accessing the sites in my browser. In my docker-compose.yml, I have:
version: "3.4"
volumes:
  postgres_database:
    external: false
  mysql_data: {}
  schemas:
    external: false
services:
  php:
    build:
      context: ./
      dockerfile: Dockerfile
      network: host
    volumes:
      - ./:/code
      - ./uploads.ini:/usr/local/etc/php/conf.d/uploads.ini
    depends_on: ["postgres"]
    ports:
      - "9000:9000"
    expose:
      - "9000"
    container_name: binge_php
  web:
    image: nginx:latest
    build:
      context: ./
      dockerfile: Dockerfile_Nginx
      network: host
    ports:
      - "80:80"
    expose:
      - "0"
    volumes:
      - ./:/code
      - ./site.conf:/etc/nginx/conf.d/site.conf
      - ./nginx_custom_settings.conf:/etc/nginx/conf.d/nginx_custom_settings.conf
    links:
      - php
    depends_on:
      - php
    container_name: binge_nginx
What might I be doing wrong that is preventing me from running curl commands from inside the PHP container?
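One hedged sketch of a fix, assuming the sites are served by the nginx container: the host's /etc/hosts is not visible inside containers, so the php container needs those names resolved on the Compose network, for example via network aliases on the web service (the alias list below just mirrors the hosts file above):
  web:
    networks:
      default:
        aliases:
          - www.example.local
          - api.example.local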
Hi, I have been trying to run Apache NiFi on Docker using the Dockerfile, but I got an error just like the one attached in the picture. Are there any settings missing in the Dockerfile?
My docker-compose file:
version: '3.5'
services:
  nifi:
    build:
      context: .
      dockerfile: Dockerfile
      args:
        NIFI_VERSION: 1.8.0
        http_proxy: ${http_proxy}
        https_proxy: ${https_proxy}
    ports:
      - "8080:8080"
    volumes:
      - ./config/nifi/conf:/opt/nifi/nifi-current/conf
      - ./config/nifi/data:/home/nifi/data/data
      - ./config/nifi/script:/home/nifi/data/script
      - ./config/nifi/utils:/home/nifi/data/utils
  nifi-registry:
    build:
      context: .
      dockerfile: Dockerfile.registry
      args:
        NIFI-REGISTRY-VERSION: 0.3.0
        http_proxy: ${http_proxy}
        https_proxy: ${https_proxy}
    ports:
      - "18080:18080"
    volumes:
      - ./config/nifi-registry/conf:/opt/nifi-registry/nifi-registry-0.3.0/conf
      - ./config/nifi-registry/flow_storage:/opt/nifi-registry/nifi-registry-0.3.0/flow_storage
      - ./config/nifi-registry/database/:/opt/nifi-registry/nifi-registry-0.3.0/database
[screenshot of the error]
(sorry, can't comment yet, so I'm popping an answer)
Your error says the log file '/opt/nifi/nifi-current/logs/nifi-app.log' can't be opened. Could it be a problem with your configuration files?
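A hedged thing to check along those lines, since /opt/nifi/nifi-current/conf is bind-mounted from ./config/nifi/conf: if that host directory is empty or owned by root, NiFi (which runs as a non-root user inside the container) may not be able to read its configuration or create its log files. For example:
# Check what the container will actually see in its conf directory
ls -la ./config/nifi/conf
# Loosening ownership on the host is one thing to try
# (UID/GID 1000 for the nifi user is an assumption, not something shown in the question)
sudo chown -R 1000:1000 ./config/nifi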
I want to use Docker to run my project (React + Node.js + MongoDB).
Dockerfile:
FROM node:8.9-alpine
ENV NODE_ENV production
WORKDIR /usr/src/app
COPY ["package.json", "package-lock.json*", "npm-shrinkwrap.json*", "./"]
RUN npm install --production --silent && mv node_modules ../
COPY . .
CMD nohup sh -c 'npm start && node ./server/server.js'
docker-compose.yml:
version: '2.1'
services:
  chat:
    image: chat
    container_name: chat
    build: .
    environment:
      NODE_ENV: production
    ports:
      - "3000:3000"
      - "8080:8080"
    volumes:
      - ./:/usr/src/app
    links:
      - mongo
  mongo:
    container_name: mongo
    image: mongo
    ports:
      - "27017:27017"
When I run docker-compose up --build, port 3000 works, but port 8080 is dead:
localhost:3000
localhost:8080
I would suggest creating a container for the server and keeping it separate from the "chat" container. It's best to have each container do one thing and one thing only (much like the philosophy behind Unix commands).
In any case, here are some modifications that I would make to the compose file.
version: '2.1'
services:
  chat:
    image: chat
    container_name: chat
    build: .
    environment:
      NODE_ENV: production
    ports:
      - "3000:3000"
      - "8080:8080"
    volumes:
      - ./:/usr/src/app
    links:
      - mongo
  mongo:
    container_name: mongo
    image: mongo
    # You don't need to expose this port to the outside world. Because you linked the two containers,
    # the chat app will be able to connect to MongoDB using the hostname "mongo" inside the container network.
    # ports:
    #   - "27017:27017"
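A hedged sketch of that "separate container" idea: with the current CMD (npm start && node ./server/server.js), the second process only starts after npm start exits, which would explain why port 8080 never answers. Splitting the server into its own service under services:, reusing the same image and overriding the command (the service name and the port split are assumptions), could look like:
  server:
    image: chat
    container_name: chat-server
    build: .
    command: node ./server/server.js   # run only the backend here; chat would then publish only 3000
    environment:
      NODE_ENV: production
    ports:
      - "8080:8080"
    volumes:
      - ./:/usr/src/app
    links:
      - mongo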
By the way, what happens if you run:
$ docker-compose down
and then
$ docker-compose up
$ docker ps
Can you see the ports exposed in the docker ps output?
Your chat service depends on mongo, so you also need to have this in your chat service:
depends_on:
  - mongo
This docker-compose file works for me. Note that I am saving the data from the database to a local directory; you should add this directory to .gitignore.
version: "3.2"
services:
  mongo:
    container_name: mongo
    image: mongo:latest
    environment:
      - MONGO_INITDB_ROOT_USERNAME=root
      - MONGO_INITDB_ROOT_PASSWORD=password
      - NODE_ENV=production
    ports:
      - "28017:27017"
    expose:
      - 28017 # you can connect to this mongodb with studio3t
    volumes:
      - ./mongodb-data:/data/db
    restart: always
    networks:
      - docker-network
  express:
    container_name: express
    environment:
      - NODE_ENV=development
    restart: always
    build:
      context: .
      args:
        buildno: 1
    expose:
      - 3000
    ports:
      - "3000:3000"
    links:
      - mongo # link this service to the database service
    depends_on:
      - mongo
    command: "npm start" # override the default command to use nodemon in dev
    networks:
      - docker-network
networks:
  docker-network:
    driver: bridge
You may also find that, with Node, you have to wait for the MongoDB container to be ready before you can connect to the database.
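One hedged way to handle that at the Compose level, with file format 2.1 where depends_on still supports conditions (the interval and retry numbers below are arbitrary), is a healthcheck on the mongo service:
version: '2.1'
services:
  mongo:
    image: mongo:latest
    healthcheck:
      # older mongo images ship the 'mongo' shell; newer ones use 'mongosh' instead
      test: ["CMD", "mongo", "--eval", "db.adminCommand('ping')"]
      interval: 5s
      timeout: 5s
      retries: 12
  express:
    depends_on:
      mongo:
        condition: service_healthy   # start express only once mongo answers the ping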
I am trying to execute a wait-for-it.sh script in my docker-compose.yaml file using "command:". I even tried to execute the ls command as well. Both resulted in "command not found". However, if I go to the command line, I am able to run both commands.
Here is the docker-compose.yaml file:
rabbitmq:
  container_name: "myapp_rabbitmq"
  tty: true
  image: rabbitmq:management
  ports:
    - 15672:15672
    - 15671:15671
    - 5672:5672
  volumes:
    - /rabbitmq/lib:/var/lib/rabbitmq
    - /rabbitmq/log:/var/log/rabbitmq
    - /rabbitmq/conf:/etc/rabbitmq/
service1:
  container_name: "service1"
  build:
    context: .
    dockerfile: ./service1.dockerfile
  links:
    - mongo
    - rabbitmq
  depends_on:
    - mongo
    - rabbitmq
  command: ["./wait-for-it.sh", "rabbitmq:5672", "-t", "90"]
service2:
  container_name: "service2"
  build:
    context: .
    dockerfile: ./service2.dockerfile
  links:
    - mongo
    - rabbitmq
  depends_on:
    - mongo
    - rabbitmq
  command: ["./wait-for-it.sh", "rabbitmq:5672", "-t", "90"]
What could be causing this, given that the commands work from the command line but not from the docker-compose file? I am using "docker-compose up -d" to start the containers, if that helps any.
If wait-for-it.sh is not found at runtime, then I suspect that wait-for-it.sh is not inside your Docker image.
You can add this file to the image using the ADD instruction in your Dockerfile(s):
ADD wait-for-it.sh /wait-for-it.sh
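Two follow-on details worth checking (hedged, since the services' real start commands are not shown here): the script has to be executable inside the image, and wait-for-it.sh simply exits after the wait unless the actual command is chained after --. A sketch:
# In the Dockerfile(s): copy the script and make sure it is executable
ADD wait-for-it.sh /wait-for-it.sh
RUN chmod +x /wait-for-it.sh
# In docker-compose.yaml: wait for RabbitMQ, then hand off to the service's real command
# ("npm", "start" is a placeholder; substitute whatever service1 actually runs)
command: ["/wait-for-it.sh", "rabbitmq:5672", "-t", "90", "--", "npm", "start"]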
Please help: I have the docker-compose file below, and I want to write an Ansible playbook that runs it on localhost and on a remote target.
version: '2.0'
services:
  weather-backend:
    build: ./backend
    volumes: # map backend dir and package inside container
      - './backend/:/usr/src/'
      - './backend/package.json:/usr/src/package.json'
    #ports:
    #  - "9000:9000" # expose backend port - Host:container
    ports:
      - "9000:9000"
    command: npm start
  weather-frontend:
    build: ./frontend
    depends_on:
      - weather-backend
    volumes:
      - './frontend/:/usr/src/'
      - '/usr/src/node_modules'
    ports:
      - "8000:8000" # expose ports - Host:container
    environment:
      NODE_ENV: "development"
Since Ansible 2.1, you can have a look at the docker_compose module, which can read a docker-compose.yml directly.
Playbook task:
- name: run the service defined in my_project's docker-compose.yml
  docker_compose:
    project_src: /path/to/my_project
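To cover both localhost and a remote target, as asked, one hedged sketch of a complete playbook (the weather_servers inventory group and the project path are assumptions):
# site.yml (hypothetical file name)
- hosts: localhost:weather_servers   # weather_servers is an assumed inventory group for the remote machine
  become: true
  tasks:
    - name: run the services defined in my_project's docker-compose.yml
      docker_compose:
        project_src: /path/to/my_project
        state: present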