I keep getting a CORS error while using the Docker remote API.
As the Docker documentation mentions, I did set the flag:
"api-cors-header" : "*",
I still do not see the Access-Control-Allow-Origin: * header in the response.
I am using Docker 1.13 experimental.
Docker-Experimental: true
Server: Docker/1.13.0-rc3 (linux)
Here is my API version:
{
"Version": "1.13.0-rc3",
"ApiVersion": "1.25",
"MinAPIVersion": "1.12",
"GitCommit": "4d92237",
"GoVersion": "go1.7.3",
"Os": "linux",
"Arch": "amd64",
"KernelVersion": "4.8.12-moby",
"Experimental": true,
"BuildTime": "2016-12-06T01:15:44.725283878+00:00"
}
Am I missing anything here?
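For reference, a minimal sketch of where this flag normally lives on a Linux host, assuming the default /etc/docker/daemon.json path and that the remote API is listening on TCP port 2375 (both are assumptions here), with a restart so the setting is picked up:
{
  "api-cors-header": "*"
}
# restart the daemon, then check that the header is actually returned
sudo systemctl restart docker
curl -i http://localhost:2375/version | grep -i access-control-allow-origin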
Based on https://docs.browserless.io/docs/docker.html#enable-cors:
Enable CORS
You can enable cross-origin-resource-sharing with browserless by setting the ENABLE_CORS=true variable. This defaults to false:
$ docker run -e "ENABLE_CORS=true" -p 3000:3000 --restart always -d --name browserless browserless/chrome
It is also possible that the problem is at another level, not Docker itself. In my case I had a Python Flask application wrapped in a Docker container, and I found the following solution for CORS:
....
from flask import Flask
from flask_cors import CORS
....
app = Flask(__name__)
CORS(app)  # adds Access-Control-Allow-Origin to responses on all routes
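Once the container is rebuilt with that change, a quick way to confirm the header is present (port 5000 is just Flask's default and an assumption here):
curl -i http://localhost:5000/ | grep -i access-control-allow-origin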
Related
I am deploying the xxl-job application in Kubernetes (v1.15.2). The application deploys successfully, but the registry client service fails. If I deployed it in Docker, it would look like this:
docker run -e PARAMS="--spring.datasource.url=jdbc:mysql://mysql-service.example.com/xxl-job?Unicode=true&characterEncoding=UTF-8 --spring.datasource.username=root --spring.datasource.password=<mysql-password>" -p 8180:8080 -v /tmp:/data/applogs --name xxl-job-admin -d xuxueli/xxl-job-admin:2.0.2
When the application starts, the server side gives me this warning:
22:33:21.078 logback [http-nio-8080-exec-7] WARN o.s.web.servlet.PageNotFound - No mapping found for HTTP request with URI [/xxl-job-admin/api/registry] in DispatcherServlet with name 'dispatcherServlet'
I searched the project's issues and found that the problem may be that I could not pass the project name (which in Docker becomes part of its URL), which is why I get that warning. The client side gives this error:
23:19:18.262 logback [xxl-job, executor ExecutorRegistryThread] INFO c.x.j.c.t.ExecutorRegistryThread - >>>>>>>>>>> xxl-job registry fail, registryParam:RegistryParam{registryGroup='EXECUTOR', registryKey='job-schedule-executor', registryValue='172.30.184.4:9997'}, registryResult:ReturnT [code=500, msg=xxl-rpc remoting fail, StatusCode(404) invalid. for url : http://xxl-job-service.dabai-fat.svc.cluster.local:8080/xxl-job-admin/api/registry, content=null]
So to solve the problem, I should run the command in Kubernetes as closely as possible to the way it runs in Docker. The question is: how do I pass the Docker --name option to the Kubernetes environment? I have already tried this:
"env": [
{
"name": "name",
"value": "xxl-job-admin"
}
],
and also tried this:
"containers": [
{
"name": "xxl-job-admin",
"image": "xuxueli/xxl-job-admin:2.0.2",
}
]
Neither worked.
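For comparison, here is a rough JSON sketch of how those docker run options usually map onto the containers section of a Kubernetes pod spec; the volume name "applogs" is an illustrative assumption, and note that the containers name field only names the container inside the pod, it does not become part of the application's URL path:
"containers": [
  {
    "name": "xxl-job-admin",
    "image": "xuxueli/xxl-job-admin:2.0.2",
    "env": [
      {
        "name": "PARAMS",
        "value": "--spring.datasource.url=jdbc:mysql://mysql-service.example.com/xxl-job?Unicode=true&characterEncoding=UTF-8 --spring.datasource.username=root --spring.datasource.password=<mysql-password>"
      }
    ],
    "ports": [
      { "containerPort": 8080 }
    ],
    "volumeMounts": [
      { "name": "applogs", "mountPath": "/data/applogs" }
    ]
  }
],
"volumes": [
  {
    "name": "applogs",
    "hostPath": { "path": "/tmp" }
  }
]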
When setting up on a brand-new EC2 server as a test, I run the following and it all works fine.
/vault/config/local.json
{
"listener": [{
"tcp": {
"address": "0.0.0.0:8200",
"tls_disable": 1
}
}],
"storage": {
"file": {
"path": "/vault/data"
}
},
"max_lease_ttl": "10h",
"default_lease_ttl": "10h",
"ui": true
}
docker run -d -p 8200:8200 -v /home/ec2-user/vault:/vault --cap-add=IPC_LOCK vault server
export VAULT_ADDR='http://0.0.0.0:8200'
vault operator init
I unseal and login fine.
On one of our corporate test servers, using 0.0.0.0 I get a "web server busy, sorry" page on the init. However, if I export 127.0.0.1 the init works fine. I cannot access the container from the server's command line with curl using either 0.0.0.0 or 127.0.0.1. I'm unsure why the behaviours are different.
I understand that 127.0.0.1 should not work, but why do I get "server busy" on 0.0.0.0 on one server and not on another, in the actual container?
Thanks Mark
The listener works fine in the container with 0.0.0.0. To access the container externally you need to set VAULT_ADDR to an address the server understands, not the container.
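For example, if the Docker host's address were 10.0.0.5 (a placeholder), this is the kind of address VAULT_ADDR should point at from outside the container:
export VAULT_ADDR='http://10.0.0.5:8200'
vault status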
I recently set up Traefik v.1.7.14 in a Docker container, on a Docker Swarm enabled cluster. As a test, I created a simple service:
docker service create --name demo-nginx \
--network traefik-net \
--label traefik.app.port=80 \
--label traefik.app.frontend.auth.basic="test:$$apr1$$LG8ly.Y1$$1J9m2sDXimLGaCSlO8.T20" \
--label traefik.app.frontend.rule="Host:t.myurl.com" \
nginx
As the code above shows, I am simply running nginx at my URL, on the subdomain t specified.
When this code runs, the service gets created successfully. Traefik also shows the service in the traefik api, as well as within the traefik administrator.
In the traefik api, the frontend for this service is reported as follows:
"frontend-Host-t-myurl-com-0": {
"entryPoints": [
"http",
"https"
],
"backend": "backend-demo-nginx",
"routes": {
"route-frontend-Host-t-myurl-com-0": {
"rule": "Host:t.myurl.com"
}
},
"passHostHeader": true,
"priority": 0,
"basicAuth": null,
"auth": {
"basic": {}
}
}
When I go to visit t.myurl.com, I get the authentication prompt, as expected.
However, when I type in my username/password (test:test, in this case), the login prompt just prompts me again and doesn't authenticate me.
I have checked to ensure that I am escaping the characters in the docker label by using:
echo $(htpasswd -nb test test) | sed -e s/\\$/\\$\\$/g
To generate the password.
As part of my testing, I also tried turning off the https entryPoint, as I wanted to see if this cycle was somehow being triggered by ssl. That didn't seem to have any impact on resolving this (rule: --label traefik.app.frontend.entryPoints=http). Traefik did properly respond on http upon doing this, but the password authentication still fell into the same prompting loop as before.
When I remove the traefik.app.frontend.auth.basic label, I can access my site at my url (t.myurl.com). So this issue appears to be isolated within the basic authentication functionality.
My DNS provider is Cloudflare.
If anyone has any ideas, I'd appreciate it.
Maybe you can try this:
echo $(htpasswd -nb your-user your-password);
Because you don't need the doubled $$ when the label is passed directly on the shell command line: the shell expands $$ to its own process ID and corrupts the hash. The $$ escaping is only needed inside docker-compose files, where $$ is an escape for a literal $.
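A minimal sketch of the same idea: let the shell substitute the htpasswd output directly, so the single $ characters in the hash reach Docker untouched (test:test is just a placeholder credential):
docker service create --name demo-nginx \
  --network traefik-net \
  --label traefik.app.port=80 \
  --label traefik.app.frontend.auth.basic="$(htpasswd -nb test test)" \
  --label traefik.app.frontend.rule="Host:t.myurl.com" \
  nginx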
How would you go about changing the default port of swagger-ui dist version?
By default it listens to requests on port 8080. I want it to listen to some other port. The use case is that we want to have a couple of dists running on our host but listening on different ports.
Is this possible or do you actually need to do some more complicated setup?
We run it via Node.js with a default package.json:
{
"name": "dist",
"version": "1.0.0",
"description": "",
"main": "swagger-ui-bundle.js",
"scripts": {
"start": "http-server"
},
"keywords": [],
"author": "",
"license": "ISC"
}
The simplest solution that I know of is just to use Docker and map the port with -p 80:8080:
https://hub.docker.com/r/swaggerapi/swagger-ui/
docker run -p 80:8080 -e API_URL=http://generator.swagger.io/api/swagger.json swaggerapi/swagger-ui
In case you do not use API_URL, here is the Dockerfile for the image above; you can use the SWAGGER_JSON "/app/swagger.json" setting to map the path to a swagger.json on your local machine (using the docker --volume parameter).
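If you would rather keep the Node.js setup from the question instead of Docker, http-server itself accepts a port flag, so something like this in each dist's package.json should let them listen on different ports (9090 is just an example value):
"scripts": {
  "start": "http-server -p 9090"
}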
I have a docker container running a java process that I am trying to connect to rabbitmq running on my localhost.
Here are the steps I've done so far:
On my Local machine (macbook running Docker version 1.13.0-rc3, build 4d92237 with firewall turned off)
I've updated my rabbitmq_env.conf file to remove RABBITMQ_NODE_IP_ADDRESS so I am not tied to connecting via localhost, and I have an admin RabbitMQ user (not trying with the guest user).
I tested this via telnet on my local machine and have no issues: telnet <local-ip> 5672
Inside my docker container
I am able to ping local-ip and curl the RabbitMQ admin API:
curl -i -u username:password http://local-ip:15672/api/vhosts returns successfully
[{"name":"/","tracing":false}]
When I try to telnet from inside the container I get:
"Connection closed by foreign host"
Looking at the RabbitMQ logs:
=ERROR REPORT====
closing AMQP connection <0.30526.1> (local-ip:53349 -> local-ip:5672):
{handshake_timeout,handshake}
My Java stack trace, in case it's helpful:
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at com.rabbitmq.client.impl.FrameHandlerFactory.create(FrameHandlerFactory.java:32)
at com.rabbitmq.client.impl.recovery.RecoveryAwareAMQConnectionFactory.newConnection(RecoveryAwareAMQConnectionFactory.java:35)
docker network inspect bridge
[
{
"Name": "bridge",
"Id": "716f935f19a107225650a95d06eb83d4c973b7943b1924815034d469164affe5",
"Created": "2016-12-11T15:34:41.950148125Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Containers": {
"9722a49c4e99ca5a7fabe56eb9e1c71b117a1e661e6c3e078d9fb54d7d276c6c": {
"Name": "testing",
"EndpointID": "eedf2822384a5ebc01e5a2066533f714b6045f661e24080a89d04574e654d841",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
What am I missing?
For me this works fine!
I installed the image with docker pull rabbitmq:3-management
and ran:
docker run -d --hostname haroldjcastillo --name rabbit-server -e RABBITMQ_DEFAULT_USER=admin -e RABBITMQ_DEFAULT_PASS=admin2017 -p 5672:5672 -p 15672:15672 rabbitmq:3-management
The most important part is to add the connection and management ports: -p 5672:5672 -p 15672:15672
See your host in Docker:
docker-machine ip
which returns, in my case:
192.168.99.100
Go to the management UI at http://192.168.99.100:15672
For Spring Boot you can configure it like this (similar settings work for other connections):
spring.rabbitmq.host=192.168.99.100
spring.rabbitmq.username=admin
spring.rabbitmq.password=admin2017
spring.rabbitmq.port=5672
Best wishes
For anyone else searching for this error: I'm using Spring Boot and RabbitMQ in Docker containers, started with Docker Compose. I kept getting org.springframework.amqp.AmqpConnectException: java.net.ConnectException: Connection refused from the Spring app.
The rabbitmq hostname was incorrect. To fix this, I'm using the container names in the spring app configuration. Either put spring.rabbitmq.host=my-rabbit in spring's application.properties (or yml file), or in docker-compose.yaml add environment: SPRING_RABBITMQ_HOST: my-rabbit to the spring service. Of course, "my-rabbit" is the rabbitmq container name described in the docker-compose.yaml
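A minimal docker-compose.yaml sketch of that layout; the app image name my-spring-app and the published ports are illustrative assumptions:
version: "3"
services:
  my-rabbit:                      # this service name is what the Spring app resolves as the host
    image: rabbitmq:3-management
    ports:
      - "5672:5672"
      - "15672:15672"
  app:
    image: my-spring-app
    environment:
      SPRING_RABBITMQ_HOST: my-rabbit
    depends_on:
      - my-rabbit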
I am using Docker with Linux containers with rabbitmq:3-management and have created a .NET Core based Web API. While calling from a Web API action method I faced the same issue and changed the hostname value to "host.docker.internal".
The following scenarios worked for me:
"localhost" on IIS Express
"localhost" on Docker build from Visual Studio
"host.docker.internal" on Docker build from Visual Studio
"Messaging": {
"Hostname": "host.docker.internal",
"OrderQueue": "ProductQueue",
"UserName": "someuser",
"Password": "somepassword" },
But I faced the same issue when the container was created via the docker build command, though not when the container was created using Visual Studio's F5 command.
I then found the solution; there are two ways to do it.
By default all containers get added to the "bridge" network, so go through these steps:
Case 1: If you already have containers (rabbitmq and api) running in Docker, then first check their IP / hostname:
docker network ls
docker network inspect bridge # from this step you'll get to know what containers are associated with this
Find the rabbitmq container and its internal IP, use that container name or IP in your application's configuration, and then run your application; it will work both from Visual Studio and from the docker build and run commands.
Case 2: If you have no containers running, you may want to create your own network in Docker; follow these steps:
docker network create givenetworknamehere
Add your container to the network either while using the "docker run" command or afterwards.
Step 2.1: If using the docker run command for your container:
docker run --network givenetworknamehere -d -p yourport:80 --name givecontainername giveyourimagename
Step 2.2: If adding the newly created network after container creation, then use the command below:
docker network connect givenetworknamehere givecontainername
With these steps you bring your containers into the same newly created network so they can communicate.
Note: by default the "bridge" network is the one that gets created.
After a restart, all was working. I don't think Rabbit was respecting the .config changes.