AWS EC2 - need to create VPC and subnets before EC2 instance? - docker

I am trying to create a basic EC2 instance on which I will run a Docker container that runs a Spring Boot web app.
When I go to create the instance I see the screen below.
Do I need to create a VPC and subnets before I can create an EC2 instance? And is this a new feature of AWS?
I want my instance and Docker container to be accessible via HTTP and HTTPS on the public internet, since Spring Boot exposes a REST API.

If you don't already have one, you can create your own VPC (or use the default one), then create a public subnet (with auto-assigned public addresses) in that VPC.
I would recommend creating your own VPC directly.
Since you want your instance to be reachable over HTTP and HTTPS, you want to create a security group that allows connections on ports 80 and 443, and allows connections on port 22 from your personal IP address only.
Port 22 will let you SSH into the instance to set up your Docker container.
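As a rough sketch, the equivalent AWS CLI calls would look something like this (the VPC ID, security group ID and personal IP are placeholders to adapt to your setup):
# create the security group in your VPC
aws ec2 create-security-group --group-name web-sg --description "HTTP, HTTPS and restricted SSH" --vpc-id vpc-xxxxxxxx
# open 80 and 443 to the world
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 443 --cidr 0.0.0.0/0
# open 22 only to your own IP
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 22 --cidr 203.0.113.10/32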
Hope it helped!

Related

Can we use a DNS name for a service running on a Docker Swarm on my local system?

Say I have a Swarm of 3 nodes on my local system, and I create a service, say Drupal, with a replication factor of 3 in this swarm. Each node then has one container running Drupal. When I want to access this in my browser I have to use the IP address of one of the nodes, <IP Address>:8080, to access Drupal.
Is there a way I can set a DNS name for this service and access it using the DNS name instead of having to use an IP address and port number?
You need to configure the DNS server that you use on the host making the query. So if your laptop queries the public DNS, you need to create a public DNS entry that would resolve from the internet (on a domain you own). This should resolve to the docker host IPs running the containers, or an LB in front of those hosts. And then you publish the port on the host to the container you want to access.
You should not be trying to talk directly to the container IP; these are not routable from outside of the docker host. And the docker DNS used for service discovery is for container-to-container communication. This is separate from communication outside of docker, which goes through a published port.
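For example (a sketch only — the service name, the published port and drupal.example.com are placeholders for your own setup):
# publish port 8080 on every swarm node, routed to port 80 inside the Drupal containers
docker service create --name drupal --replicas 3 --publish published=8080,target=80 drupal
# then, at your DNS provider, create an A record such as
#   drupal.example.com -> <public IP of a swarm node, or of a load balancer in front of the nodes>
# and access the service at http://drupal.example.com:8080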

Cannot reach container in ECS Fargate cluster that looks properly configured

I have an ECS Fargate cluster that I configured after reading instructions in another Stack Overflow post. I have several containers that I've pushed into ECR repositories and can successfully launch the containers. But going to http://PUBLIC-IP-ADDRESS does not access the service exposed by the container.
In my most recent test I simply used the httpd container from Docker Hub because it is simple and provides a default web page. Still no luck.
The VPC has two subnets - public and private - and was constructed per the instructions in the above-linked post. I am attaching the containers -- as an ECS Service -- to the public subnet and also configuring the Service to assign a public IP address.
Public subnet (CIDR 10.0.1.0/24) has this route table:
Destination | Target
10.0.0.0/16 | local
0.0.0.0/0 | igw-0ad0671cc2924857e
Network ACL inbound rules
Rule # | Type | Protocol | Port range | Source | Allow/Deny
100 | ALL Traffic | ALL | ALL | 0.0.0.0/0 | ALLOW
* | ALL Traffic | ALL | ALL | 0.0.0.0/0 | DENY
Network ACL outbound rules
100 | ALL Traffic | ALL | ALL | 0.0.0.0/0 | ALLOW
* | ALL Traffic | ALL | ALL | 0.0.0.0/0 | DENY
(These are the default rules)
The private subnet (CIDR 10.0.2.0/24) has the same configuration, but its route table instead points to a NAT gateway. The NAT gateway is homed in the public subnet.
The only thing I did differently from the VPC configuration instructions is the security group. When creating the services, I configure the service with the default security group that came with the VPC. This security group allows all traffic both inbound and outbound.
For the task definition, I created an httpd task definition using the awsvpc network mode (it's a Fargate cluster), 0.5 GB memory, 0.25 vCPU, exposing port 80 on the container.
For the Service, I attached it to the VPC, gave it the name httpd, attached it to the public subnet, and said to use a public IP address.
The Service and the contained Task launch correctly, and the Task shows a public IP address. Accessing that IP address results in a long wait and eventually the web browser gives up. (times out)
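For reference, a service configured this way corresponds roughly to the following CLI call (a sketch — the cluster, subnet and security group IDs are placeholders):
aws ecs create-service --cluster my-cluster --service-name httpd --task-definition httpd --desired-count 1 --launch-type FARGATE --network-configuration "awsvpcConfiguration={subnets=[subnet-aaaa1111],securityGroups=[sg-bbbb2222],assignPublicIp=ENABLED}"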
UPDATE --
I was not aware of the need to have a load balancer. I have attempted to add a load balancer. But it made no difference.
To add the load balancer required adding more public subnets configured as above. The Application Load Balancer is attached to the VPC and to the public subnets. It is listening to HTTP (port 80).
I then re-created the service for the httpd container. During creation of the service, I did my best to configure it for the load balancer and then the service description gives this summary:
Target Group Name | Container Name | Container Port
ecs-ecs-go-httpd | httpd | 80
An ELB shouldn't be needed if you are trying to hit a specific container. Run a TCP test against port 80 and see what the response is.
PowerShell: Test-NetConnection <IP address> -Port 80
Bash: nc -zv <IP address> 80
If you cannot connect over 80 then move back through the pieces:
Open up the task
Click on the link to the ENI
Double check the security group has 80 open
Open up the task definition
Open up the container definition
Ensure that the host port and container port are both 80
If it is not an issue with either the security group or the task definition that leaves only the VPC configuration and the container image.
Test the container image by launching it locally and hitting it on localhost:80 (see the sketch after these steps)
Test the VPC configuration by launching another EC2 instance in the same subnet as your container with the same security group
SSH in
If you can SSH in then the issue is most likely not a routing issue. If you cannot SSH in double check the route table, the VPC ACL, and the subnet ACL.
From the new EC2 instance
Ensure 80 is wide open and try to hit your container on its local IP
Install a web server
Try to hit this from outside on port 80
If you can hit 80 on your EC2 instance with the same security group in the same subnet then you have eliminated any possible issues with networking and can focus on what might be wrong with the ECS configuration.
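A minimal sketch of the local-image check and the same-subnet test described above (assuming the stock httpd image; the task's private IP is a placeholder):
# 1. verify the image itself serves on port 80
docker run -d --name httpd-test -p 80:80 httpd
curl -v http://localhost:80      # should return httpd's default "It works!" page
docker rm -f httpd-test
# 2. from a test EC2 instance launched in the same subnet with the same security group,
#    hit the task's private IP directly
curl -v http://<task private IP>:80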
Lastly, don't forget to lock your security group and ACLs back down if you opened anything up wider.

Exposing A Containerized Web App on a Public Domain

I am trying to expose my containerized web app to the Internet over a public domain, but all the articles out there seem to teach how to play around with Docker's local network, for example how to run a containerized DNS server or how to run a DNS server in Docker. Even if I set up a DNS server that resolves an IP, e.g. 172.20.0.3, to a domain like example.com, that DNS service will translate example.com to 172.20.0.3, which is obviously only local to the Docker network and not accessible from the outside.
The scenario seems easy. I have a Docker host with a public static IP, let's say 64.233.191.255, and I have multiple domains on it. Each domain is mapped to a web server and will serve a (containerized) web application. Each application has its own network defined in docker-compose.yml under the networks section, on which all the other services related to the web app (e.g. mariadb, redis, etc.) communicate. Should I have a DNS server inside every container I create? How do I translate local addresses to the static public IP address so as to make the web apps available on their respective domains on port 80?
I found a service called ngrok that exposes a container over a public domain name like xxxx.ngrok.io, but that is not what I want. I would like to serve my website on my own domain.
This has proved to be anything but trivial for me. Also, there's no explicit tutorial in Docker's documentation on how to do this. I suppose this is not how it is supposed to be done in the real world, as people probably do it via Kubernetes or OpenShift.
Should I have a bind9 configuration on the host or a containerized bind9 to manage DNS queries? Do I need iptables rules for this scenario?
You have to map both domains to the public IP via DNS and then use a reverse proxy to forward the requests to the correct Apache server.
So basically 3 vhosts inside the docker host.
Vhost 1 (the reverse proxy) gets the request and maps the domain to the Vhost 2 or Vhost 3 address.
https://httpd.apache.org/docs/2.4/howto/reverse_proxy.html
You can use a reverse proxy with Nginx for each application. For example, say you're running two apps on ports 3000 and 3001. Assign a proper DNS name to each application,
e.g. localhost:3000 maps to example1.com
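One common way to wire this up is sketched below (it assumes the community nginxproxy/nginx-proxy image and that example1.com / example2.com already point to the host's public IP via DNS; my-app-1 / my-app-2 are placeholder images):
# reverse proxy bound to the host's port 80
docker run -d --name proxy -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro nginxproxy/nginx-proxy
# each app container declares which domain it answers for and on which internal port
docker run -d -e VIRTUAL_HOST=example1.com -e VIRTUAL_PORT=3000 my-app-1
docker run -d -e VIRTUAL_HOST=example2.com -e VIRTUAL_PORT=3001 my-app-2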

Rancher 2.0 networking in project namespace

Can I ping one workload from another workload by workload name?
I was used to Rancher 1.0, where if I created a stack with several containers I could ping one container from another by name.
For example: I have an api and a database, and I need the api to communicate with the database. When I click "Execute Shell" on the api and run "ping database", it does not work.
I put the database connection string in the api's environment variables.
And yes, I can create the database, take its IP and write it to an ENV variable, but this IP will change after each restart.
Is it possible to refer to it by some non-generated name?
Thanks.
EDIT:
Service discovery: (screenshot of the service discovery entries omitted)
Shell: (screenshot of the shell output omitted)
As you can see, resolving the database name works; only pinging the database container does not.
To communicate between services you can use either the cluster IP or the service name.
Using the service name is easier.
Service discovery adds a DNS entry for each of your services. So if you have api, app and database, you will have a DNS entry for each of those services.
So within your services, you can refer to each other directly by DNS name.
Example: To connect in JDBC to a schema name test in your database, you would do something like this:
jdbc:mysql://database/test
see:
https://rancher.com/docs/rancher/v2.x/en/k8s-in-rancher/service-discovery/
If you want to know the ClusterIP of your services you can run this command: kubectl get services --all-namespaces
Edit 1: Adding ClusterIP as a way to communicate with a service.
The Kubernetes Service IP is implemented using iptables rules on the Linux hosts that are part of the cluster. If you examine those rules closely, ONLY the port specified as part of the Service is exposed, not ICMP, which means you cannot ping the Service IP addresses by default. But you can still communicate with the Service on the designated port.
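So instead of ping, test the port the Service actually exposes. A sketch (the service names database and my-web-service, and port 3306, are placeholders for your own services):
# from a shell inside the api workload
nc -zv database 3306        # TCP check against the service DNS name
# or, for an HTTP service:
curl -v http://my-web-service:80/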

Rancher external subdomains

I need to set up subdomains for apps in Docker containers, not on the internal Rancher network but for public use. I have a domain delegated to the Rancher server. There is a host property in almost all stacks from the catalog, but it doesn't work. I guess I need to delegate the domain using some Rancher DNS or set up nginx to proxy traffic to some Rancher server, but I can't find any.
What you need is to add a load balancer service, which then forwards ports 80/443 on the host to the container (app/nginx/whatever).
So navigate to your stack and click Add Service -> Load Balancer. Then you can choose which domain to trigger on (or catch all, which I would do for now) and then which target. There you select your app container and the port on which the container has its app / httpd server running, and that's basically it.
