VS Code: debug FastAPI in container on macOS - docker

I am setting up debugging of a FastAPI app running in a container with VS Code. When I launch the debugger, the FastAPI app runs in the container, but when I access the webpage from the host there is no response from the server.
However, if I start the container from the command line with the following command, I can access the webpage from the host:
docker run -p 8001:80/tcp with-batch:v2 uvicorn main:app --host 0.0.0.0 --port 80
Here is the tasks.json file:
{
// See https://go.microsoft.com/fwlink/?LinkId=733558
// for the documentation about the tasks.json format
"version": "2.0.0",
"tasks": [
{
"type": "docker-run",
"label": "docker-run: debug",
"dockerRun": {
"image": "with-batch:v2",
"volumes": [
{
"containerPath": "/app",
"localPath": "${workspaceFolder}/app"
}
],
"ports": [
{
"containerPort": 80,
"hostPort": 8001,
"protocol": "tcp"
}
]
},
"python": {
"args": [
"main:app",
"--port",
"80"
],
"module": "uvicorn"
}
},
{
"type": "docker-build",
"label": "docker-build",
"platform": "python",
"dockerBuild": {
"tag": "with-batch:v2"
}
}
]
}
Here is the launch.json file:
{
// Use IntelliSense to learn about possible attributes.
// Hover to view descriptions of existing attributes.
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
"version": "0.2.0",
"configurations": [
{
"name": "Debug Flask App",
"type": "docker",
"request": "launch",
"preLaunchTask": "docker-run: debug",
"python": {
"pathMappings": [
{
"localRoot": "${workspaceFolder}/app",
"remoteRoot": "/app"
}
],
"projectType": "fastapi"
}
}
]
}
Here is the debug console output:
Here is the docker-run: debug terminal output:
Here is the Python Debug Console terminal output:

Explanation
The reason you are not able to access your container at that port is that VS Code runs your image with a random, unique localhost port mapped to the running container.
You can see this by running docker container inspect {container_name}, which prints a JSON representation of the running container. In your case you would write docker container inspect withbatch-dev.
The JSON is an array of objects (in this case just the one object) with a "NetworkSettings" key, which in turn contains a "Ports" object that looks similar to:
"Ports": {
"80/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "55016"
}
]
}
That port, 55016, is the port you can connect to at localhost:55016.
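If you just want the port, you can pull it out of the inspect output programmatically. A sketch in Python, using the JSON shape shown above as sample data (the container name and port values are the ones from this example):

```python
import json

# In practice you would capture the real output of
#   docker container inspect withbatch-dev
# Here we reuse the JSON shape from above as sample data.
raw = '''[{"NetworkSettings": {"Ports": {
    "80/tcp": [{"HostIp": "0.0.0.0", "HostPort": "55016"}]}}}]'''

containers = json.loads(raw)
host_port = containers[0]["NetworkSettings"]["Ports"]["80/tcp"][0]["HostPort"]
print(host_port)  # 55016
```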
Solution
With some tinkering and documentation, it seems "projectType": "fastapi" should be launching your browser for you at that specific port. Additionally, your debug console output shows Uvicorn running on http://127.0.0.1:80. 127.0.0.1 is localhost (also known as the loopback interface), which means the process in your docker container is only listening for internal connections. Think of docker containers as being in their own subnetwork relative to your computer (there are exceptions to this, but that's not important here). If a container wants to accept outside connections (from your computer or from other containers), it needs to tell its virtual network interface to do so. In the context of a server, you use the address 0.0.0.0 to indicate you want to listen on all IPv4 addresses on that interface.
That got a little deep, but suffice it to say: add --host 0.0.0.0 to your run arguments and you will be able to connect. Add it in tasks.json, in the docker-run task, where your other python args are specified:
{
"type": "docker-run",
"label": "docker-run: debug",
"dockerRun": {
"image": "with-batch:v2",
"volumes": [
{
"containerPath": "/app",
"localPath": "${workspaceFolder}/app"
}
],
"ports": [
{
"containerPort": 80,
"hostPort": 8001,
"protocol": "tcp"
}
]
},
"python": {
"args": [
"main:app",
"--host",
"0.0.0.0",
"--port",
"80"
],
"module": "uvicorn"
}
},
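The loopback point above can be demonstrated outside Docker with a couple of plain sockets; a minimal sketch (only the bind addresses matter here):

```python
import socket

# Binding to 127.0.0.1 accepts connections only from inside the same
# network namespace - inside a container, that means the container itself.
loopback_only = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
loopback_only.bind(("127.0.0.1", 0))

# Binding to 0.0.0.0 listens on all IPv4 interfaces, which is what the
# container needs for Docker's port mapping to reach the server.
all_interfaces = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
all_interfaces.bind(("0.0.0.0", 0))

print(loopback_only.getsockname()[0])   # 127.0.0.1
print(all_interfaces.getsockname()[0])  # 0.0.0.0
```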

Related

How to keep logs for ECS container?

I'm running ECS tasks, and recently the service CPU hit 100% and the service went down.
I waited for the instance to settle down and SSHed in.
I was looking for logs, but it seems the docker container restarted and the logs are all gone (the logs from when the CPU was high).
Next time, how do I make sure I can at least see the logs to diagnose the problem?
I have the following, hoping to see some logs somewhere (mounted on the host machine):
"mountPoints": [
{
"readOnly": null,
"containerPath": "/var/log/uwsgi",
"sourceVolume": "logs"
}
],
But there's no /var/log/uwsgi on the host machine.
And I probably need syslog and such as well.
As far as your current configuration goes, where the logs end up depends entirely on the path you define in the volumes section.
"mountPoints": [
{
"readOnly": null,
"containerPath": "/var/log/uwsgi",
"sourceVolume": "logs"
}
],
The source path is whatever the logs volume defines, not /var/log/uwsgi: you are mounting
/var/log/uwsgi (container path) -> logs volume (host path), so you will find these logs at the path defined in the logs volume. It is better to set something like
{
"readOnly": null,
"containerPath": "/var/log/uwsgi",
"sourceVolume": "volume_name"
}
then volume config
"volumes": [
{
"name": "logs",
"host": {
"sourcePath": "/home/ec2-user/logs"
}
}
]
From the documentation:
In the task definition volumes section, define a bind mount with name
and sourcePath values.
"volumes": [
{
"name": "webdata",
"host": {
"sourcePath": "/ecs/webdata"
}
}
]
In the containerDefinitions section, define a container with
mountPoints values that reference the name of the defined bind mount
and the containerPath value to mount the bind mount at on the
container.
"containerDefinitions": [
{
"name": "web",
"image": "nginx",
"cpu": 99,
"memory": 100,
"portMappings": [
{
"containerPort": 80,
"hostPort": 80
}
],
"essential": true,
"mountPoints": [
{
"sourceVolume": "webdata",
"containerPath": "/usr/share/nginx/html"
}
]
}
]
bind-mounts-ECS
Now for my own suggestion: I would go with the awslogs log driver.
Working in AWS, the best approach is to push all logs to CloudWatch, though note that the awslogs driver only pushes the container's stdout and stderr to CloudWatch.
Using the awslogs driver, you do not need to worry about the instance or the container; you log straight to CloudWatch, and you can stream those logs on to ELK as well.
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "awslogs-wordpress",
"awslogs-region": "us-west-2",
"awslogs-stream-prefix": "awslogs-example"
}
}
using_awslogs
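For completeness, the logConfiguration block sits inside each container definition in the task definition; a minimal sketch of the placement (the image, group, and prefix names are illustrative):

```json
"containerDefinitions": [
  {
    "name": "web",
    "image": "nginx",
    "essential": true,
    "logConfiguration": {
      "logDriver": "awslogs",
      "options": {
        "awslogs-group": "awslogs-wordpress",
        "awslogs-region": "us-west-2",
        "awslogs-stream-prefix": "awslogs-example"
      }
    }
  }
]
```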

On vscode debugger restart, postDebugTask is run but not preLaunchTask

I'm running unit tests using googletest on embedded C software, and I'm using a docker container to be able to run them easily on any platform. Now I would like to debug these unit tests from VS Code, connecting to my docker container and running gdb in it.
I managed to configure launch.json and tasks.json to start and run the debug session.
launch.json :
{
"version": "0.2.0",
"configurations": [
{
"name": "tests debug",
"type": "cppdbg",
"request": "launch",
"program": "/project/build/tests/bin/tests",
"args": [],
"cwd": "/project",
"environment": [],
"sourceFileMap": {
"/usr/include/": "/usr/src/"
},
"preLaunchTask": "start debugger",
"postDebugTask": "stop debugger",
"pipeTransport": {
"debuggerPath": "/usr/bin/gdb",
"pipeProgram": "docker",
"pipeArgs": ["exec", "-i", "debug", "sh", "-c"],
"pipeCwd": "${workspaceRoot}"
},
"setupCommands": [
{
"description": "Enable pretty-printing for gdb",
"text": "-enable-pretty-printing",
"ignoreFailures": true
}
]
}
]
}
tasks.json
{
"version": "2.0.0",
"tasks": [
{
"label": "start debugger",
"type": "shell",
"command": "docker run --privileged -v /path/to/my/project:/project --name debug -it --rm gtest-cmock",
"isBackground": true,
"problemMatcher": {
"pattern": [
{
"regexp": ".",
"file": 1,
"location": 2,
"message": 3
}
],
"background": {
"activeOnStart": true,
"beginsPattern": ".",
"endsPattern": "."
}
}
},
{
"label": "stop debugger",
"type": "shell",
"command": "docker stop -t 0 debug",
}
]
}
When I hit the debugger restart button, the stop debugger task runs and the docker container stops, but start debugger is not run. The debug session hangs and I have to close VS Code to be able to run another debug session.
I'm looking for a way to either run both tasks on debugger restart, or to run neither (if I start my container from another terminal and deactivate both tasks, restart works with no problem).

Can you tell me the solution to the changing service IP in the Mesos + Marathon combination?

I am currently deploying a docker service with the Mesos + Marathon combination.
This means that the IP address of the docker container is constantly changing.
For example, if you put mongodb on Marathon, you would use the code below.
The port mapping can specify the port exposed on the host. After a day, the service automatically shuts down and restarts, and the IP changes.
While looking into a method called Mesos-DNS, I was studying the docker command and learned how to find the IP of a service by an alias name, by specifying a network alias in docker.
I thought it would be easier to access the service this way, without using Mesos-DNS.
However, in Marathon the docker service is defined in JSON format like below.
I am asking because I do not know the option, keyword, or method for specifying the docker network alias.
{
"id": "mongodbTest",
"instances": 1,
"cpus": 2,
"mem": 2048.0,
"container": {
"type": "DOCKER",
"docker": {
"image": "mongo:latest",
"network": "BRIDGE",
"portMappings": [
{
"containerPort": 27017,
"hostPort": 0,
"servicePort": 0,
"protocol": "tcp"
}
]
},
"volumes": [
{
"containerPath": "/etc/mesos-mg",
"hostPath": "/var/data/mesos-mg",
"mode": "RW"
}
]
}
}

Docker Swarm Windows Server container not accessible from browser when run in a service

I am attempting to run a prototype of a microservice deployment (at this point just for R&D purposes). I have made a very basic API endpoint and used docker-compose in Visual Studio to create the container. The API code is as follows:
[RoutePrefix("api/test")]
public class TestController : ApiController
{
[HttpGet]
[Route("")]
public HttpResponseMessage Get()
{
HttpResponseMessage response = Request.CreateResponse(HttpStatusCode.OK, "Microservice Test Successful 2");
return response;
}
}
And the deployed container has the following details (second one on the list):
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
31d4d7f15d3e testmicroservice:latest "C:\\ServiceMonitor.e…" 16 hours ago Up 16 hours testservice.1.ttt25efcq418xbsu7vqksl94p
816c526ef9f3 testmicroservice "cmd /c 'start /B C:…" 16 hours ago Up 16 hours 0.0.0.0:8785->80/tcp dockercompose2417227251495589316_gdms.testmicroservice_1
I can quite merrily access this in my browser on the published port and get back the expected result from the API.
I have attempted to deploy this same image as a docker service (with swarm mode active in docker) and have mapped a different port to access it:
ID NAME MODE REPLICAS IMAGE PORTS
m3wfea6n9anl testservice replicated 1/1 testmicroservice:latest *:5050->80/tcp
The service appears to be running correctly, and the container for this task also appears to be running fine (it is the first of the two containers in the snippet above).
For some reason, when I attempt to access the same endpoint with the new port (again on localhost in my browser), I get "unable to connect".
The full docker service details are below.
[
{
"ID": "m3wfea6n9anligjrrihbi03vt",
"Version": {
"Index": 500
},
"CreatedAt": "2018-10-04T16:19:17.2891599Z",
"UpdatedAt": "2018-10-04T16:19:17.2921526Z",
"Spec": {
"Name": "testservice",
"Labels": {},
"TaskTemplate": {
"ContainerSpec": {
"Image": "testmicroservice:latest",
"Init": false,
"StopGracePeriod": 10000000000,
"DNSConfig": {},
"Isolation": "default"
},
"Resources": {
"Limits": {},
"Reservations": {}
},
"RestartPolicy": {
"Condition": "any",
"Delay": 5000000000,
"MaxAttempts": 0
},
"Placement": {},
"ForceUpdate": 0,
"Runtime": "container"
},
"Mode": {
"Replicated": {
"Replicas": 1
}
},
"UpdateConfig": {
"Parallelism": 1,
"FailureAction": "pause",
"Monitor": 5000000000,
"MaxFailureRatio": 0,
"Order": "stop-first"
},
"RollbackConfig": {
"Parallelism": 1,
"FailureAction": "pause",
"Monitor": 5000000000,
"MaxFailureRatio": 0,
"Order": "stop-first"
},
"EndpointSpec": {
"Mode": "vip",
"Ports": [
{
"Protocol": "tcp",
"TargetPort": 80,
"PublishedPort": 5050,
"PublishMode": "ingress"
}
]
}
},
"Endpoint": {
"Spec": {
"Mode": "vip",
"Ports": [
{
"Protocol": "tcp",
"TargetPort": 80,
"PublishedPort": 5050,
"PublishMode": "ingress"
}
]
},
"Ports": [
{
"Protocol": "tcp",
"TargetPort": 80,
"PublishedPort": 5050,
"PublishMode": "ingress"
}
],
"VirtualIPs": [
{
"NetworkID": "ylj65tghq4ek1ewmtysaitmcx",
"Addr": "10.255.0.31/16"
}
]
}
}
]
The dockerfile for the image is this:
FROM microsoft/aspnet:4.7.2-windowsservercore-1709
ARG source
WORKDIR /inetpub/wwwroot
COPY ${source:-obj/Docker/publish} .
Any suggestions as to what I may have missed would be great.
I have managed to get this working now and thought I would share what I did in case anyone else faces the same.
I believe the issue I was facing was due to a couple of factors:
1) The routing mesh not being fully supported for Windows Server containers/hosts on Docker CE.
2) A mismatch between the operating system kernels of the container and the host, which again caused issues with the ingress network and publishing ports (although no warning/error was thrown to indicate this).
How I got it working:
1) Created an 1803 Windows Server VM and installed Docker EE Basic (which comes free with Windows Server 2016 and above; I believe running a 1709 VM would also have worked). If you have a VM running Windows Server 2016+, you can install the EE Basic version of Docker following the instructions on the Docker website (make sure to update it to the latest version).
2) Changed the Dockerfile I was using to create the image to use the 1803 version of microsoft/aspnet:
FROM microsoft/aspnet:4.7.2-windowsservercore-1803
ARG source
WORKDIR /inetpub/wwwroot
COPY ${source:-obj/Docker/publish} .
If you are using a 1709 Windows Server VM, it would instead be:
FROM microsoft/aspnet:4.7.2-windowsservercore-1709
ARG source
WORKDIR /inetpub/wwwroot
COPY ${source:-obj/Docker/publish} .
3) Deployed the new image to the VM.
4) Set up swarm mode on the server:
docker swarm init
5) Deployed the image as a Service:
docker service create --name testservice --publish 3000:80 testmicroservice
Following these steps, I now have a fully functional swarm running on a multi-node cluster with a number of different replicated services.
Hope this helps.

Mounted volume using volumes-from is empty

So here's what I'm trying to do:
Nginx container linked to -> Rails container running Puma
Using docker-compose, this solution works great. I'm able to start both containers and the NGINX container has access to the volume in the linked container using volumes_from.
First, the relevant bits of the Dockerfile for Rails:
ENV RAILS_ROOT /www/apps/myapp
RUN mkdir -p $RAILS_ROOT
WORKDIR $RAILS_ROOT
.... lots of files get put in their proper places ....
EXPOSE 3000
VOLUME [/www/apps/myapp/]
CMD puma -C config/puma.rb
The Nginx image is pretty basic; the relevant parts of its Dockerfile are here:
ENV RAILS_ROOT /www/apps/myapp
# Set our working directory inside the image
WORKDIR $RAILS_ROOT
EXPOSE 80
EXPOSE 443
Again, this all works great in docker-compose. However, in ECS, I'm trying to use the following task definition:
{
"family": "myapp",
"containerDefinitions": [
{
"name": "web",
"image": "%REPOSITORY_URI%:nginx-staging",
"cpu": 512,
"memory": 512,
"portMappings": [
{
"containerPort": 80,
"protocol": "tcp"
},
{
"containerPort": 443,
"protocol": "tcp"
}
],
"links": [
"myapp"
],
"volumesFrom": [
{
"sourceContainer": "myapp",
"readOnly": false
}
],
"essential": true,
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "awslogs-myapp-staging",
"awslogs-region": "us-west-2",
"awslogs-stream-prefix": "awslogs-myapp-nginx"
}
}
},
{
"image": "%REPOSITORY_URI%:v_%BUILD_NUMBER%",
"name": "myapp",
"cpu": 2048,
"memory": 2056,
"essential": true,
...bunch of environment variables, etc.
}
The task starts in ECS as expected, and the myapp container looks perfect. However, when I check out the nginx container on the EC2 instance host with
docker exec -it <container> bash
I land in /www/apps/myapp, but the directory is empty. I've tried mounting drives and several other things, and I'm at a loss here... does anyone have any ideas how to make the files from the linked container usable in my nginx container?
And of course, right after I posted this I found the solution. So no one else has to feel this pain, here it is. The broken instruction:
VOLUME [/www/apps/myapp/]
and the fix, quoting the path:
VOLUME ["/www/apps/myapp/"]
sigh
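The reason the quotes matter: Docker tries to parse the instruction's argument as a JSON array, and when that parse fails it falls back to treating the argument as a plain string, so the volume ends up at a literally-bracketed path instead of the intended one. A quick illustration of that parse in Python (the fallback behavior itself is Docker's, as I understand it):

```python
import json

# The quoted form is a valid JSON array - Docker sees one volume path.
print(json.loads('["/www/apps/myapp/"]'))  # ['/www/apps/myapp/']

# The unquoted form is not valid JSON, so Docker falls back to
# treating the whole bracketed string as a literal path.
try:
    json.loads('[/www/apps/myapp/]')
except json.JSONDecodeError:
    print("not JSON - Docker falls back to the string form")
```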
