I have a UI running on 127.0.0.1:80 in a Docker container. I tested it via Chrome and the Postman client app, and it works well.
When I export the GET test case from the Postman client app and run it in a separate container, I receive this error:
GET 127.0.0.1:80 [errored]
connect ECONNREFUSED 127.0.0.1:80
here is my docker compose file:
version: "3.7"
services:
web:
build: ui
ports:
- 80:80
depends_on:
- api
api:
build: app
environment:
- PORT=80
ports:
- 8020:80
test:
container_name: restful_booker_checks
build: test
image: postman_checks
depends_on:
- api
- web
here is my docker file for test cases:
FROM node:10-alpine3.11
RUN apk update && apk upgrade && \
apk --no-cache --update add gcc musl-dev ca-certificates curl && \
rm -rf /var/cache/apk/*
RUN npm install -g newman
COPY ./ /test
WORKDIR /test
CMD ["newman", "run", "OCR_POC_v2_tests.json"]
here is the exported json file from Postman Client app:
{
  "info": {
    "_postman_id": "5e702694-fa82-4b9f-8b4e-49afd11330cc",
    "name": "OCR_POC_v2_tests",
    "schema": "https://schema.getpostman.com/json/collection/v2.1.0/collection.json",
    "_exporter_id": "21096887"
  },
  "item": [
    {
      "name": "get",
      "request": {
        "method": "GET",
        "header": [],
        "url": {
          "raw": "127.0.0.1:80",
          "host": [
            "127",
            "0",
            "0",
            "1"
          ],
          "port": "80"
        }
      },
      "response": []
    }
  ]
}
When I run the following command, it also works well:
newman run OCR_POC_v2_tests.json
So it's only inside Docker that the connection cannot get through.
docker-compose creates a bridge network and connects the containers to it. Each container can be addressed by its service name on that network.
You're trying to connect to 127.0.0.1 (localhost), which in this context is the test container itself.
You need to connect to web instead, so your URL becomes http://web:80/. I don't know how exactly you need to change your exported Postman file to accomplish that, but hopefully you can figure it out.
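For example, the url object in the exported collection could be edited along these lines (a sketch based on the export format shown above; Postman will not rewrite this for you):
{
  "raw": "http://web:80/",
  "protocol": "http",
  "host": [
    "web"
  ],
  "port": "80"
}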
Also note that on the bridge network you connect using the container ports. I.e. in your case, that would be port 80 on both web and api. If you only need to access the containers from other containers on the docker network, you don't need to map the ports.
Related
I have been trying to find a solution for a long time: getting nodemon to reload inside Docker when updating e.g. index.js. I'm on Windows 10.
I have a Node project with Docker:
proj/backend/src/index.js:
const express = require('express')
const app = express()

app.get('/', (req, res) => {
  res.send('Hello world.')
})

const port = process.env.PORT || 3001
app.listen(port, () => {
  console.log(`Example app listening at http://localhost:${port}`)
})
proj/backend/package.json:
{
  "scripts": {
    "start": "node ./bin/www",
    "start:legacy": "nodemon --legacy-watch -L --watch src src/index.js"
  },
  "dependencies": {
    "express": "^4.17.2"
  },
  "devDependencies": {
    "nodemon": "^2.0.15"
  }
}
proj/backend/dev.Dockerfile:
FROM node:lts-alpine
RUN npm install --global nodemon
WORKDIR /usr/src/app
COPY . .
RUN npm ci
EXPOSE 3001
ENV DEBUG=playground:*
CMD npm run start:legacy
proj/docker-compose.dev.yml:
version: '3.8'
services:
  backend:
    image: backend-img
    build:
      context: ./backend
      dockerfile: ./dev.Dockerfile
    ports:
      - 3001:3001
    environment:
      - PORT=3001
If I am not wrong, a Docker container is designed to exit when its main process ends. When using nodemon (and updating the code), the process will be stopped and restarted, and the container will stop. You could make npm start not be the main process, but this is not good practice.
It's probably already late, but I'll write this anyway.
There are misconceptions in your configuration.
In the "start:legacy" command you should use only one of the legacy watch options, --legacy-watch or -L, not both, because they are equivalent. According to the nodemon docs: Via the CLI, use either --legacy-watch or -L for short.
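So the script could simply become:
"start:legacy": "nodemon --legacy-watch --watch src src/index.js"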
Your Dockerfile configuration seems fine. But to synchronize your local files and directories with the Docker container, you should use volumes in docker-compose. Your docker-compose file will then look something like this:
version: '3.8'
services:
  backend:
    image: backend-img
    build:
      context: ./backend
      dockerfile: ./dev.Dockerfile
    volumes:
      - ./your_project_dir:/usr/src/app
      - /usr/src/app/node_modules
    ports:
      - 3001:3001
    environment:
      - PORT=3001
I believe that if you define the volumes this way, you will be able to make changes locally and the container will see them as well.
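With that in place, you can rebuild and start the dev environment with something like this (assuming the file is still named docker-compose.dev.yml as in the question):
docker-compose -f docker-compose.dev.yml up --build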
I have a docker-compose setup used for production which I'm hoping to incorporate into VS Code's dockerized development environment.
./docker-compose.yml
version: "3.6"
services:
django: &django-base
build:
context: .
dockerfile: backend/Dockerfile_local
restart: on-failure
volumes:
- .:/website
depends_on:
- memcached
- postgres
- redis
networks:
- main
ports:
- 8000:8000 # HTTP port
- 3000:3000 # ptvsd debugging port
expose:
- "3000"
env_file:
- variables/django_local.env
...
Note how I'm both forwarding and exposing port 3000 here. This is a result of me playing around to get what I need working; I'm not sure if I need one or the other or both.
My .devcontainer directory then looks like the following:
.devcontainer/devcontainer.json
{
  "name": "Dev Container",
  "dockerComposeFile": ["../docker-compose.yml", "docker-compose.extend.yml"],
  "service": "dev",
  "workspaceFolder": "/workspace",
  "shutdownAction": "stopCompose",
  "settings": {
    "terminal.integrated.shell.linux": null,
    "python.linting.pylintEnabled": true,
    "python.pythonPath": "/usr/local/bin/python3.8"
  },
  "extensions": [
    "ms-python.python"
  ]
}
.devcontainer/docker-compose.extended.yml
version: '3.6'
services:
  dev:
    build:
      context: .
      dockerfile: ./Dockerfile
    external_links:
      - django
    volumes:
      - .:/workspace:cached
    command: /bin/sh -c "while sleep 1000; do :; done"
The idea is that I want to be able to run VS Code attached to the dev service, and from there run the debugger attached to the django service using the following launch.json config:
{
  "name": "WP",
  "type": "python",
  "request": "attach",
  "port": 3000,
  "host": "localhost",
  "pathMappings": [
    {
      "localRoot": "${workspaceFolder}",
      "remoteRoot": "/website"
    }
  ]
},
I get an error when doing this though, where VS Code says connect ECONNREFUSED 127.0.0.1:3000. How can I get the ports mapped so this will work? Is it even possible?
Edit
Why not just attach directly to the django service?
The dev container simply contains Python and Node runtimes for linting and IntelliSense purposes while using VS Code. The idea behind creating a new service devoted specifically to debugging in the dev environment is that ./docker-compose.yml contains more than a few services that some of the devs on my team like to turn off at times to keep resource consumption low. Creating a container specifically for dev also makes it easier to set up .devcontainer/devcontainer.json to add things like extensions to one container, without needing to add them after attaching to the running "non-dev" container. If this were to work, VS Code would be running within the dev container (see this).
I was able to solve this by changing the host in the launch.json from localhost to host.docker.internal. The resulting launch.json configuration then looks like this:
{
  "name": "WP",
  "type": "python",
  "request": "attach",
  "port": 3000,
  "host": "host.docker.internal",
  "pathMappings": [
    {
      "localRoot": "${workspaceFolder}",
      "remoteRoot": "/web-portal"
    }
  ]
},
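Note that host.docker.internal resolves out of the box on Docker Desktop (Windows/macOS). On Linux you may need to add the mapping yourself; with a reasonably recent Docker engine, something like this on the dev service is one way to do it (a sketch, not part of the original setup):
extra_hosts:
  - "host.docker.internal:host-gateway"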
I have a tiny socket server in a Docker container. The server looks like this:
var app = require('express')();
var server = require('http').Server(app);
var io = require('socket.io')(server, {origins: 'localhost:*'});
io.on('connection', function (socket) {
  console.log('Connected');
});
const PORT = 8081;
const HOST = '0.0.0.0';
server.listen(PORT, HOST);
and the Dockerfile is:
FROM keymetrics/pm2-docker-alpine:latest
WORKDIR /root
RUN apk update && \
apk upgrade && \
apk add git
ENV HOME /root
COPY socket.js ./
COPY package.json ./
RUN npm install
COPY pm2.json ./
EXPOSE 8081
CMD [ "pm2-docker", "start", "pm2.json" ]
pm2.json looks like
{
  "apps": [{
    "name": "socket-server",
    "script": "socket.js",
    "exec_mode": "cluster",
    "instances": 2,
    "env": {
      "production": true
    }
  }]
}
package.json
{
  "name": "socket-server",
  "version": "1.0.0",
  "description": "",
  "main": "socket.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC",
  "dependencies": {
    "express": "^4.15.3",
    "socket.io": "^2.0.3"
  }
}
It all runs just fine with
docker run -d -p 8081:8081 socket-server
until I try to connect to it from a website running in another container. The website connects like this:
<script src="socket.io.js"></script>
<script>
  var socket = io.connect('http://localhost:8081');
  socket.on('connect', function(data) {
    console.log('Connected Client')
  });
</script>
and in the console, it shows that it polls just fine with
Request URL:http://localhost:8081/socket.io/?
EIO=3&transport=polling&t=LthQCgI&sid=93sOyTiSOe5RVOdEAAAL
Request Method:POST
Status Code:200 OK
but fails to get a socket connection
Request URL:ws://localhost:8081/socket.io/?
EIO=3&transport=websocket&sid=93sOyTiSOe5RVOdEAAAL
Request Method:GET
Status Code:400 Bad Request
Now, if I run the socket server outside the Docker container, it's fine and the socket connects.
I have tried getting the IP of the container that the socket server is running in and using that in the connection script, but with that configuration even the polling doesn't work.
I really need this to run inside a Docker container.
Any help is much appreciated.
Although this is an old question, I figured I would elaborate on it a bit for anyone else wondering how to connect containers, since the previous answer was a bit slim.
Using swarm in this case would be overkill, especially for something like running the containers locally in a way that lets them talk to each other. Instead, you simply want to put the containers on the same Docker network.
version: '3.5'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    command: app
    ports:
      - "4000:4000"
    volumes:
      - .:/app
    networks:
      - app-network
  pgsql:
    image: postgres:latest
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data
    networks:
      - app-network
networks:
  app-network:
    driver: bridge
volumes:
  postgres_data:
    driver: local
In the docker-compose.yml file example above you can see that I am defining a network via:
networks:
  app-network:
    driver: bridge
Then I assign both the app and pgsql containers to that network via:
networks:
  - app-network
This allows me to access the containers from one another via the service "name". So in my code I am now able to use pgsql:5432 and communicate with the Postgres service. The app container is also reachable from the pgsql container via app:4000.
While this can get much more complex than the above example, I figured I'd leave a working docker-compose.yml example above. You can find out more about Docker networks at https://docs.docker.com/compose/networking/
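For instance, inside the app container a Postgres connection string could look something like this (the user, password and database name are placeholders, not values defined in the compose file above):
postgres://myuser:mypassword@pgsql:5432/mydb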
Maybe you should try creating a Docker swarm and letting the containers join the same network:
....
version: '3.5'
services:
  myserver:
    image: 'mydocker-image'
    networks:
      - mynetwork
....
and access the server like this http://myserver:8081
I have two Docker containers. One is a Java server running on port 8080 with a REST API at /drivers. The other container is a simple Node.js server with an index.html page, where an AJAX call is performed to save a new driver.
The URL in the JS file is: const URL = "http://storage:8080/drivers";
When I run them with plain docker and a network I created for them, communication between them works fine. But when I run both containers using docker-compose, I get the status "(failed) net::ERR_NAME_NOT_RESOLVED".
When I open a bash shell in either of these containers and run 'ping storage', I receive packets normally.
What am I missing?
Dockerfile for the Java server:
FROM java:8
VOLUME /tmp
ADD target/docker-project-1.0-SNAPSHOT.jar app.jar
EXPOSE 8080
RUN sh -c 'touch /app.jar'
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
Dockerfile for the Node.js server:
FROM node:argon
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
RUN npm install connect serve-static
COPY . /usr/src/app
EXPOSE 8081
CMD ["node", "server.js"]
docker-compose.yml file:
version: '2.1'
services:
  client:
    image: glasierr/js-client
    networks:
      default:
        aliases:
          - "client"
    links:
      - "storage"
    ports:
      - "8081:8081"
  storage:
    image: glasierr/drivers-storage
    networks:
      default:
        aliases:
          - "storage"
    ports:
      - "8080:8080"
    expose:
      - "8080"
JS script:
const URL = "http://storage:8080/drivers";
$.ajax({
  headers: {
    'Content-Type': 'application/json'
  },
  type: "POST",
  url: URL,
  data: JSON.stringify({
    licenceId: licenceId,
    name: name,
    surname: surname,
    email: email
  }),
  dataType: "json"
});
I cannot see any mistake; I tried it and it worked with compose file version 2, so maybe it is not working with the experimental version. But you can simplify your compose file to this:
version: '2'
services:
  client:
    image: glasierr/js-client
    ports:
      - 8081:8081
  storage:
    image: glasierr/drivers-storage
    ports:
      - 8080:8080
All containers are automatically attached to the compose default network and are reachable by their service name (your alias is the same anyway), so you can remove the networks section. links adds nothing beyond what the default network and depends_on already provide and is not needed in this example. You also only need expose if you don't publish port 8080.
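To double-check name resolution and connectivity from inside the client container, you could run something like this (assuming curl is available in that image):
docker-compose exec client curl http://storage:8080/drivers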
I have a Docker container that is part of a big project configured and launched by docker-compose. This container needs to be configured after it starts, by sending an HTTP POST request to its REST API.
Basically, what I do now is run docker-compose up and wait a bit until everything seems to have started, in particular the given container. Then I send my HTTP POST with a curl command.
Is there a way to modify the Dockerfile so that it launches the Docker container, waits for its REST API to be up, and then POSTs a given request?
My question is a generic one but if you need more details, here is my current very simple Dockerfile:
FROM 1ambda/kafka-connect:latest
COPY my-project/target/*.jar $KAFKA_HOME/libs/
And the corresponding line in the docker-compose.yml:
connect:
  build: kafka-connect
  links:
    - kafka
  ports:
    - "8083:8083"
  environment:
    CONNECT_BOOTSTRAP_SERVERS: kafka:9092
    CONNECT_GROUP_ID: connect-cluster-A
And finally the HTTP POST I do once the container is up:
curl -H "Content-Type: application/json" -X POST -d '
{
  "name": "my-project",
  "config": {
    "name": "my-project",
    "topics": "my-topic",
    "tasks.max": 2,
    "connector.class": "the.package.to.my.connector.class"
  }
}
' http://localhost:8083/connectors
You can create another container that waits for the connect container to expose its REST API and then performs the actions you want (sending the HTTP POST in your case).
Here is an example of the docker-compose file:
connect:
  build: kafka-connect
  links:
    - kafka
  ports:
    - "8083:8083"
  environment:
    CONNECT_BOOTSTRAP_SERVERS: kafka:9092
    CONNECT_GROUP_ID: connect-cluster-A

connect-init:
  build:
    context: .
  depends_on:
    - connect
  links:
    - connect
  command: ["/wait-for-it.sh", "-t", "300", "connect:8083", "--", "/init.sh"]
Here is the Dockerfile for the init container. You can use this wait-for-it.sh file:
FROM bash
RUN apk add --no-cache curl
ADD init.sh wait-for-it.sh /
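Depending on how the scripts are stored in your build context, you may also need to make them executable, for example by adding a line like this to that Dockerfile:
RUN chmod +x /init.sh /wait-for-it.sh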
And the init.sh file contains your HTTP POST (you can add other commands to this file if you need to):
curl -X POST \
  -H "Content-Type: application/json" \
  http://connect:8083/connectors \
  -d '{"name":"my-project","config":{"name":"my-project","topics":"my-topic","tasks.max":2,"connector.class":"the.package.to.my.connector.class"}}'