Is it possible to connect to SQL Server Express (localdb) from a container instance?
docker-compose sample connection string:
foo-api:
  image: foo-api
  environment:
    - ASPNETCORE_ENVIRONMENT=Development
    - ConnectionStrings:DefaultConnection=Server=host.docker.internal\\mssqllocaldb;Database=FooDB;MultipleActiveResultSets=true;User Id=foo;Password=Your_password123;
  build:
    context: ../ms-foo/src/
    dockerfile: Dockerfile
  ports:
    - "6004:6004"
Running the app on localhost with this config works just fine:
"ConnectionStrings": {
"DefaultConnection": "Server=(localdb)\\mssqllocaldb;Database=FooDB;MultipleActiveResultSets=true;User Id=foo;Password=Your_password123;"
},
Is there some DNS resolution I'm missing, or does the instance need to be shared somehow?
https://learn.microsoft.com/en-us/sql/database-engine/configure-windows/sql-server-express-localdb?view=sql-server-ver15#shared-instances-of-localdb
Update
Including the error message.
{
  "error": "14 UNAVAILABLE: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 25 - Connection string is not valid)"
}
You can reach the parent host on the container's network gateway.
Let's say your container address is 172.21.0.2. In that case, you can most likely find the SQL server at 172.21.0.1 (and the corresponding listening port).
To find out the container's IP, you can try to jump into it and print its network config, i.e. docker exec <container name> ip a, but often the ip and ifconfig commands are not installed, so I'd suggest looking at the bottom of docker inspect <container name>. The keys IPAddress and Gateway hold the discussed values.
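docker inspect can also print just those two fields directly (a small convenience sketch; substitute your own container name):

# Print the container's IP and its gateway (the parent host) in one go.
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{.Gateway}}{{end}}' <container name>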
You can then build your container with a custom /etc/hosts, or point it to DNS server that can resolve this, so you can use nice names instead of IP addresses.
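In docker-compose this is usually done with extra_hosts rather than baking a custom /etc/hosts into the image. A minimal sketch (the host-gateway value requires Docker 20.10 or newer; on Docker Desktop, host.docker.internal already resolves without it):

foo-api:
  extra_hosts:
    # Map a friendly name to the bridge gateway, i.e. the parent host.
    - "host.docker.internal:host-gateway"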
I think your error is related to the environment variables defined in your docker-compose file; each variable must be of the form:
variable_name="value"
So in your case it should look like this:
DB_CONNECTION_STRING="ConnectionStrings:DefaultConnection"
DB_SERVER="host.docker.internal\\mssqllocaldb"
DB_NAME="FooDB"
DB_MultipleActiveResultSets=true
DB_USER_ID="foo"
DB_PASSWORD="Your_password123"
After that you can use these environment variables to define your connection:
"ConnectionStrings": {
"DefaultConnection": "Server=(localdb)\\mssqllocaldb;Database=FooDB;MultipleActiveResultSets=true;User Id=foo;Password=Your_password123;"
},
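Note that if you keep a single ConnectionStrings:DefaultConnection variable instead, ASP.NET Core expects the colon to be written as a double underscore when the key is passed as an environment variable, since : is not portable in variable names. A minimal sketch of that form:

environment:
  # __ is ASP.NET Core's separator for nested configuration keys.
  - ConnectionStrings__DefaultConnection=Server=host.docker.internal\\mssqllocaldb;Database=FooDB;MultipleActiveResultSets=true;User Id=foo;Password=Your_password123;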
I hope that helps you resolve your issue.
What I'm trying to do
Host a Taskwarrior Server on an AWS EC2 instance, and connect to it via a subdomain (e.g. task.mydomain.dev).
Taskwarrior server operates on port 53589.
Tech involved
AWS EC2: the server (Ubuntu)
Caddy Server: for creating a reverse proxy for each app on the EC2 instance
Docker (docker-compose): for launching apps, including the Caddy Server and the Taskwarrior server
Cloudflare: DNS hosting and SSL certificates
How I've tried to do this
I have:
allowed incoming connections for ports 22, 80, 443 and 53589 in the instance's security policy
given the EC2 instance an elastic IP
set up the DNS records (task.mydomain.dev is CNAME'd to mydomain.dev, mydomain.dev has an A record pointing to the elastic IP)
used Caddy Server to set up a reverse proxy on port 53589 for task.mydomain.dev
set up the Taskwarrior server as per the instructions (i.e. certificates created; user and organisation created; taskrc file updated with cert, auth and server info; etc)
Config files
/opt/task/docker-compose.yml
version: '3.3'
services:
  taskd:
    image: connectical/taskd
    restart: always
    volumes:
      - /opt/task:/var/taskd
    ports:
      - 53589:53589
networks:
  default:
    external:
      name: caddy_net
/opt/caddy/docker-compose.yml
version: "3.4"
services:
caddy:
build:
context: .
dockerfile: Dockerfile
container_name: caddy
restart: always
ports:
- 80:80
- 443:443
volumes:
- ./config:/config
- ./data:/data
- ./Caddyfile:/etc/caddy/Caddyfile
networks:
default:
external:
name: caddy_net
/opt/caddy/Caddyfile:
task.mydomain.dev:53589 {
    reverse_proxy taskd:53589
    tls {
        dns cloudflare myCloudflareAPIkey
    }
}
What's actually happening
I'm unable to connect to port 53589 on task.mydomain.dev
Running telnet task.mydomain.dev 53589 times out
I'm unable to connect to port 53589 on mydomain.dev
Running telnet mydomain.dev 53589 times out
I'm able to connect to port 53589 at 127.0.0.1 by ssh'ing into the EC2 instance
Running telnet 127.0.0.1 53589 from the EC2 instance successfully connects
I'm able to connect to port 80 on task.mydomain.dev, but unable to sync with the Taskwarrior server
Running task sync init returns:
c: 1 Received record packet of unknown type 72
Syncing with task.mydomain.dev:80
Cannot perform this action while handshake is in progress.
Sync failed. Could not connect to the Taskserver.
I'm able to connect to port 443 on task.mydomain.dev, but unable to sync with the Taskwarrior server
Running task sync init returns:
Syncing with task.mydomain.dev:443
Malformed message
Sync failed. Could not connect to the Taskserver.
What I've tried to fix it
Changing the Caddyfile's first line to:
task.mydomain.dev { and task.mydomain.dev:80 {, then connecting to port 80
Running task sync init returns:
c: 1 Received record packet of unknown type 72
Syncing with task.mydomain.dev:80
Cannot perform this action while handshake is in progress.
Sync failed. Could not connect to the Taskserver.
task.mydomain.dev { and task.mydomain.dev:443 {, then connecting to port 443
Running task sync init returns:
Syncing with task.mydomain.dev:443
Malformed message
Sync failed. Could not connect to the Taskserver.
Changing Caddyfile's second line to reverse_proxy 127.0.0.1:53589, reverse_proxy 0.0.0.0:53589 and reverse_proxy localhost:53589. Same errors occur.
Removing the CNAME records for the subdomain. Same errors occur.
Does anyone have any idea what's happening or could point me in the right direction?
If you are attempting to proxy HTTPS traffic on Cloudflare on a port not on the standard list, you will need to follow one of these options:
Set it up as a Cloudflare HTTPS Spectrum app on the required port 53589
Set up the record in the Cloudflare DNS tab as Grey cloud (in other words, it will only perform the DNS resolution - meaning you will need to manage the certificates on your side)
Change your service so that it listens on one of the standard HTTPS ports listed in the documentation in point (1)
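A quick way to check which mode the record is currently in (using the hostname from the question):

# Proxied (orange-cloud) records resolve to Cloudflare edge IPs;
# DNS-only (grey-cloud) records resolve straight to the origin's elastic IP.
dig +short task.mydomain.dev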
Hello, I am having trouble with my Vagrant setup.
I am trying to ping a serverless API which runs on http://localhost:3000/ (outside the Vagrant project).
Right now my Vagrant project runs on https://localhost:4443/.
Overall, I'm trying to make a curl request from the Vagrant project to the other serverless project.
Tried to use http://localhost:3000/ in the curl request, but I get Failed to connect to localhost port 3000: Connection refused
Tried to use the VM IP address 10.0.2.15; same result.
Tried to do port forwarding in the Vagrantfile (config.vm.network :forwarded_port, guest: 3000, host: 3000) and use the machine IP address 192.168.0.16; I get an empty response from the server, and when I try telnet 192.168.0.16 3000 I get:
Trying 192.168.0.16...
Connected to 192.168.0.16.
Escape character is '^]'.
Connection closed by foreign host.
Any idea what to try?
I had to use the gateway IP address of the VM's NAT network, something like:
curl -X GET http://10.0.2.2:3000
(Under VirtualBox's default NAT networking, 10.0.2.2 is the address through which the guest reaches the host machine.)
These errors may be caused by the following reasons; ensure the steps below are followed to connect the local host with the local virtual machine (host). Here, I'm connecting http://localhost:3001/ to http://abc.test. Steps to be followed:
1. We have to allow CORS; placing Access-Control-Allow-Origin in the request header may not work. Install a browser extension which enables CORS requests.
2. Make sure the credentials you provide in the request are valid.
3. Make sure the Vagrant box has been provisioned. Try vagrant up --provision; this makes the localhost connect to the database of the Homestead.
4. Try changing the content type of the header:
header: { 'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8;application/json' }
This point is very important.
I am starting a WebLogic 12.2.1.4 admin server in docker from my docker-compose.yml file.
I use different port mapping, not the default 7001.
My docker port mapping is this: 7101:7001
Everything works fine, except this: I constantly get the following exception when I click on the Deployment menu on the web console:
<Feb 12, 2021 5:11:21,002 PM UTC> <Notice> <JMX> <BEA-149535> <JMX Resiliency Activity Server=All Servers : Resolving connection list DomainRuntimeServiceMBean>
javax.ws.rs.ProcessingException: java.net.ConnectException: Tried all: '1' addresses, but could not connect over HTTP to server: 'localhost', port: '7101'
failed reasons:
[0] address:'localhost/127.0.0.1',port:'7101' : java.net.ConnectException: Connection refused
The WL admin server tries to use the public Docker port 7101 inside the container, but WL is actually listening on the default port 7001 there. Port 7101 is only used from the host machine, and of course WL is not listening on port 7101 inside the container.
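One way to sidestep this kind of mismatch (a sketch, not a verified WebLogic fix) is to publish the container port under the same number on the host, so the console's self-referencing URLs are valid from both sides:

services:
  admin-server:      # hypothetical service name
    ports:
      - 7001:7001    # host port matches the port WL actually listens on

Alternatively, WebLogic's frontend host/port settings exist for telling the server which externally visible address to advertise, though I have not verified that they silence this particular exception.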
My workaround is the following:
Check the IP address of the admin-server container with docker inspect <container-name>
Open the WL console using the container private IP address, e.g.: http://172.19.0.2:7001/console
In this case, the exception does not appear
But if I open the WL console from http://localhost:7101/console which is the mapped port to the host machine by docker, then the exception appears
Maybe this is a WL user interface issue, but I am not sure.
Any idea why this is happening?
I'm currently running a spring boot application in a docker container on an EC2. My docker-compose file looks like this (with some values replaced):
version: '3.8'
services:
  my-app:
    image: ${ecr-repo}/my-app:0.0.1-SNAPSHOT
    ports:
      - "8893:8839/tcp"
networks:
  default:
The docker container deploys and comes up as healthy with the healthcheck command being:
wget --spider -q -Y off http://localhost:8893/my-app/v1/actuator/health
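For reference, this is roughly how that command would be wired into the compose file (a hypothetical placement; the interval and retry values are made up):

healthcheck:
  test: ["CMD", "wget", "--spider", "-q", "-Y", "off", "http://localhost:8893/my-app/v1/actuator/health"]
  interval: 30s
  retries: 3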
If I do a docker ps -a I can see for the ports:
0.0.0.0:8893->8893
My Alb healthcheck however is returning a 502 so I've temporarily allowed connections from my IP directly to the EC2 in the security group. The rules are:
Allow Ingress on 8893 from my Alb security group
Allow Ingress on 8893 from my IP
Allow Egress to anywhere (0.0.0.0)
When I try to hit the healthcheck endpoint of my app using the public DNS of the EC2 on port 8893 using Postman, I get Error: connect ECONNREFUSED
If I take my docker container down and then simulate a webserver using the command from https://fabianlee.org/2016/09/26/ubuntu-simulating-a-web-server-using-netcat/ which is:
while true; do { echo -e "HTTP/1.1 200 OK\r\n$(date)\r\n\r\n<h1>hello world from $(hostname) on $(date)</h1>" | nc -vl 8080; } done
I get a 200 response with the expected body which indicates it's not a problem with the security groups.
The actuator endpoint for Spring Boot is definitely enabled: if I run the app through IntelliJ and hit the endpoint, it returns a 200 and status UP.
Any suggestions for what I might be missing here or how I could debug this further? It seems like docker isn't picking up connections to the port for some reason.
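A few checks from the EC2 host itself can narrow this down (assumed commands; the container name is a placeholder and the path comes from the healthcheck above):

# Does the published port answer locally, bypassing the ALB and security groups?
curl -v http://localhost:8893/my-app/v1/actuator/health
# Does the host-to-container port mapping match the port the app listens on?
docker ps --format '{{.Names}}\t{{.Ports}}'
# Inside the container: is the app bound to 0.0.0.0 or only to 127.0.0.1?
docker exec <container> ss -tlnp || docker exec <container> netstat -tlnp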
I have an HTTP health check in my service, exposed on localhost:35000/health. At the moment it always returns 200 OK. The configuration for the health check is done programmatically via the HTTP API rather than with a service config, but in essence it is:
set id: service-id
set name: health check
set notes: consul does a GET to '/health' every 30 seconds
set http: http://127.0.0.1:35000/health
set interval: 30s
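As a rough equivalent against Consul's /v1/agent/check/register endpoint, the registration would look something like this (a sketch; the field values are taken from the list above, and whether id lands in ID or ServiceID depends on how the check is attached):

curl -X PUT http://127.0.0.1:8500/v1/agent/check/register \
  -d '{
        "ID": "service-id",
        "Name": "health check",
        "Notes": "consul does a GET to /health every 30 seconds",
        "HTTP": "http://127.0.0.1:35000/health",
        "Interval": "30s"
      }'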
When I run consul in dev mode (consul agent -dev -ui) on my host machine directly the health check passes without any problem. However, when I run consul in a docker container, the health check fails with:
2017/07/08 09:33:28 [WARN] agent: http request failed 'http://127.0.0.1:35000/health': Get http://127.0.0.1:35000/health: dial tcp 127.0.0.1:35000: getsockopt: connection refused
The docker container launches consul, as far as I am aware, in exactly the same state as the host version:
version: '2'
services:
  consul-dev:
    image: "consul:latest"
    container_name: "net-sci_discovery-service_consul-dev"
    hostname: "consul-dev"
    ports:
      - "8400:8400"
      - "8500:8500"
      - "8600:8600"
    volumes:
      - ./etc/consul.d:/etc/consul.d
    command: "agent -dev -ui -client=0.0.0.0 -config-dir=/etc/consul.d"
I'm guessing the problem is that consul is trying to do the GET request to the container's loopback interface rather than what I am intending, which is the loopback interface of the host. Is that a correct assumption? More importantly, what do I need to do to correct the problem?
So it transpires that there was a bug in some previous versions of macOS that prevented use of the docker0 network. Whilst the bug is fixed in newer versions, Docker support extends to older versions and so Docker for Mac doesn't currently support docker0. See this discussion for details.
The workaround is to create an alias to the loopback interface on the host machine, set the service to listen on either that alias or 0.0.0.0, and configure Consul to send the health check GET request to the alias.
To set the alias (choose a private IP address that's not being used for anything else; I chose a class A address but that's irrelevant):
sudo ifconfig lo0 alias 10.200.10.1/24
To remove the alias:
sudo ifconfig lo0 -alias 10.200.10.1
From the service definition above, the HTTP line should now read:
set http: http://10.200.10.1:35000/health
And the HTTP server listening for the health check requests also needs to be listening on either 10.200.10.1 or 0.0.0.0. This latter option is suggested in the discussion, but I've only tried it with the alias.
I've updated the title of the question to more accurately reflect the problem, now I know the solution. Hope it helps somebody else too.