VSCode recognizes only the last docker-run task when multiple are defined - docker

I have 2 tasks in Visual Studio Code to run 2 different images as containers. Only the last docker-run task is recognized by VSCode.
This is my tasks.json file
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "docker-build-1",
      "type": "docker-build",
      "platform": "python",
      "dockerBuild": {
        "tag": "image1:latest",
        "dockerfile": "${workspaceFolder}/app1/dev.Dockerfile",
        "context": "${workspaceFolder}/",
        "pull": true
      }
    },
    {
      "label": "docker-build-2",
      "type": "docker-build",
      "platform": "python",
      "dockerBuild": {
        "tag": "image2:latest",
        "dockerfile": "${workspaceFolder}/app2/dev.Dockerfile",
        "context": "${workspaceFolder}/",
        "pull": true
      }
    },
    {
      "label": "docker-run-1",
      "type": "docker-run",
      "dependsOn": [
        "docker-build-1"
      ],
      "python": {
        "module": "app.main"
      },
      "dockerRun": {
        "network": "mynetwork"
      }
    },
    {
      "label": "docker-run-2",
      "type": "docker-run",
      "dependsOn": [
        "docker-build-2"
      ],
      "python": {
        "module": "app.main"
      },
      "dockerRun": {
        "network": "mynetwork"
      }
    }
  ]
}
When VSCode shows the menu for running a task, only the task docker-run-2 is shown.
In fact, only the last docker-run task in tasks.json is shown. If I change the order in the list of tasks, then VSCode only recognizes docker-run-1. I searched the documentation and it doesn't say anything about this behaviour. Any idea why this is happening? The goal is to set up 2 debug configurations in VSCode for the 2 apps, but running the debug config for the app that is not last produces an error in VSCode.

I came across this same issue today. It seems that the "dockerRun" attribute has to differ between the run tasks. In my case I just added a test environment variable to one of the tasks, and then both started to appear in the task list.
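As a sketch of that workaround (the env entry with APP_NAME is an arbitrary example of mine, not from the original tasks.json), docker-run-1 could become:

```json
{
  "label": "docker-run-1",
  "type": "docker-run",
  "dependsOn": [
    "docker-build-1"
  ],
  "python": {
    "module": "app.main"
  },
  "dockerRun": {
    "network": "mynetwork",
    "env": {
      "APP_NAME": "app1"
    }
  }
}
```

With docker-run-2 left unchanged (or given a different env value), the two dockerRun sections differ and both tasks should appear in the list.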

Related

How to run a task on stop event in debugging?

I tried the postDebugTask but it doesn't seem to run the task at all. I've skimmed this as well to no avail.
./.vscode/launch.json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Launch Node.js in Docker",
      "type": "docker",
      "request": "launch",
      "preLaunchTask": "run",
      "platform": "node",
      "removeContainerAfterDebug": true,
      "postDebugTask": "stop"
    }
  ]
}
./.vscode/tasks.json
{
  "version": "2.0.0",
  "tasks": [
    {
      "type": "shell",
      "label": "run",
      "command": "docker-compose up serve ; docker-compose rm -fsv serve"
    },
    {
      "type": "shell",
      "label": "stop",
      "command": "echo test | docker-compose rm -fsv serve"
    }
  ]
}
How can I make it so that docker-compose rm -fsv serve runs in parallel when the stop event occurs during debugging?

Production build of electron still has toggle developer tools in menu

I am trying to build a production build of Electron for Mac; it still has the default menu items:
File, Edit, View, Window, Help.
And the worst thing is that Toggle Developer Tools is still there.
I have set the menu to null, but I have the same problem in a different way: now the developer tools option is visible but greyed out.
const { app, BrowserWindow, ipcMain, shell, Menu } = require("electron");
Menu.setApplicationMenu(null);
Electron version: 9.3.2
Is there a way to disable these default options? I am using electron-builder.
package.json "mac" config:
"mac": {
  "publish": [
    {
      "provider": "spaces",
      "name": "releases",
      "region": "sfo2",
      "channel": "latest",
      "path": "/mac",
      "acl": "public-read"
    }
  ],
  "category": "public.app-category.developer-tools",
  "icon": "build-resources/app.iconset",
  "target": [
    {
      "target": "dmg",
      "arch": [
        "x64"
      ]
    },
    {
      "target": "zip",
      "arch": [
        "x64"
      ]
    }
  ],
  "darkModeSupport": false,
  "type": "distribution"
},

Could not create review app. Postdeploy exit code was not 0

I have below app.json setup to create review apps in Heroku.
{
  "name": "Small Sharp Tool",
  "description": "This app does one little thing, and does it well.",
  "keywords": [
    "rails",
    "ruby",
    "angular"
  ],
  "scripts": {
    "postdeploy": "bash script/bootstrap.sh"
  },
  "env": {
    "SECRET_TOKEN": {
      "description": "A secret key for verifying the integrity of signed cookies.",
      "generator": "secret"
    },
    "WEB_CONCURRENCY": {
      "description": "The number of processes to run.",
      "value": "5"
    },
    "LANG": {
      "value": "en_US.UTF-8"
    },
    "RAILS_LOG_TO_STDOUT": {
      "value": "enabled"
    },
    "S3_KEY": {
      "required": true
    },
    "S3_SECRET": {
      "required": true
    },
    "RAILS_SERVE_STATIC_FILES": {
      "value": "true"
    }
  },
  "formation": {
    "web": {
      "quantity": 1
    },
    "sidekiq": {
      "quantity": 1
    }
  },
  "addons": [
    {
      "plan": "heroku-redis:hobby-dev",
      "as": "Redis"
    },
    {
      "plan": "heroku-postgresql:hobby-dev",
      "as": "postgresql",
      "options": {
        "version": "12"
      }
    }
  ],
  "buildpacks": [
    {
      "url": "heroku/ruby"
    },
    {
      "url": "heroku/nodejs"
    },
    {
      "url": "https://github.com/simplefractal/heroku-buildpack-wkhtmltopdf.git"
    }
  ],
  "environments": {
    "test": {
      "scripts": {
        "test": "bundle exec rake test"
      }
    }
  },
  "stack": "heroku-16"
}
and the bootstrap.sh file for now contains only a pg_restore command; the restore itself seems to have gone fine according to the log.
The shell script contains:
#!/bin/bash
echo $HEROKU_APP_NAME
curl https://s3-bucket-url | pg_restore --verbose --clean --no-acl --no-owner --dbname $POSTGRESQL_URL
But I am getting the error Could not create review app. Postdeploy exit code was not 0. What am I missing here?
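One possibility (an assumption on my part, not confirmed in the thread) is that pg_restore --clean exits non-zero on harmless "does not exist" errors, which makes the postdeploy script itself fail. A minimal sketch of guarding the script's exit status; restore_cmd is a hypothetical stand-in for the real curl | pg_restore pipeline:

```shell
#!/bin/bash
# restore_cmd stands in for:
#   curl <s3-url> | pg_restore --clean ... --dbname "$POSTGRESQL_URL"
# Here it simulates pg_restore's harmless non-zero exit status.
restore_cmd() {
  return 1
}

status=0
restore_cmd || status=$?
echo "pg_restore exited with $status (ignored so postdeploy itself succeeds)"
```

Ending the script this way keeps postdeploy's own exit code at 0 even when pg_restore reports warnings, so Heroku does not abort the review app.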
I had the same issue and found another solution: I used a pre-made interface, and that solved the problem for me. However, if you want a different database management system (e.g. I wanted PostgreSQL) or any other add-on, you will have to customize the code. Here is mine, but beware: it still has the issue you were complaining about. If you find out why my code isn't working, I'll be glad if you let me know.

Orphaned Tasks in Docker Swarm after removal of failed node

Last week I had to remove a failed node from my Docker Swarm cluster, leaving some tasks that ran on that node in the desired state "Remove".
Even after deleting the stack and recreating it with the same name, docker stack ps stackname still shows them.
Interestingly enough, after recreating the stack the tasks are still there, but with no node assigned.
Here's what I tried so far to clean up the stack:
Recreating the stack with the same name
docker container prune
docker volume prune
docker system prune
Is there a way to remove a specific task?
Here's the output for docker inspect fkgz0oihexzs, the first task in the list:
[
  {
    "ID": "fkgz0oihexzsjqwv4ju0szorh",
    "Version": {
      "Index": 14422171
    },
    "CreatedAt": "2018-11-05T16:15:31.528933998Z",
    "UpdatedAt": "2018-11-05T16:27:07.422368364Z",
    "Labels": {},
    "Spec": {
      "ContainerSpec": {
        "Image": "redacted",
        "Labels": {
          "com.docker.stack.namespace": "redacted"
        },
        "Env": [
          "redacted"
        ],
        "Privileges": {
          "CredentialSpec": null,
          "SELinuxContext": null
        },
        "Isolation": "default"
      },
      "Resources": {},
      "Placement": {
        "Platforms": [
          {
            "Architecture": "amd64",
            "OS": "linux"
          }
        ]
      },
      "Networks": [
        {
          "Target": "3i998stqemnevzgiqw3ndik4f",
          "Aliases": [
            "redacted"
          ]
        }
      ],
      "ForceUpdate": 0
    },
    "ServiceID": "g3vk9tgfibmcigmf67ik7uhj6",
    "Slot": 1,
    "Status": {
      "Timestamp": "2018-11-05T16:15:31.528892467Z",
      "State": "new",
      "Message": "created",
      "PortStatus": {}
    },
    "DesiredState": "remove"
  }
]
I had the same problem. I resolved it by following these instructions:
docker run --rm -v /var/run/docker/swarm/control.sock:/var/run/swarmd.sock dperny/tasknuke <taskid>
Be sure to use the full long task ID or it will not work (fkgz0oihexzsjqwv4ju0szorh in your case).
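To collect those full IDs in the first place, something like the following could work. The sample variable here is canned stand-in output, since the real docker invocation needs a live swarm:

```shell
# Filter task listings for desired state "Remove" and print the full task ID
# that tasknuke needs. `sample` stands in for the output of:
#   docker stack ps stackname --no-trunc --format '{{.ID}} {{.DesiredState}}'
sample='fkgz0oihexzsjqwv4ju0szorh Remove
iy1rb7dxla4o0v1309zadhyqo Running'

stuck=$(printf '%s\n' "$sample" | awk '$2 == "Remove" {print $1}')
echo "$stuck"   # -> fkgz0oihexzsjqwv4ju0szorh
```

The --no-trunc flag matters: without it, docker prints shortened task IDs that tasknuke rejects.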

How to use volumes-from in marathon

I've been working with Mesos + Marathon + Docker for quite a while, but now I'm stuck. At the moment I'm trying to deal with persistent containers and I've played around with the "volumes-from" parameter, but I can't make it work because I have no clue how to figure out the name of the data box to put as the value in the JSON. I tried it with the example from here:
{
  "id": "privileged-job",
  "container": {
    "docker": {
      "image": "mesosphere/inky",
      "privileged": true,
      "parameters": [
        { "key": "hostname", "value": "a.corp.org" },
        { "key": "volumes-from", "value": "another-container" },
        { "key": "lxc-conf", "value": "..." }
      ]
    },
    "type": "DOCKER",
    "volumes": []
  },
  "args": ["hello"],
  "cpus": 0.2,
  "mem": 32.0,
  "instances": 1
}
I would really appreciate any kind of help :-)
From what I know:
docker --volumes-from takes the ID or the name of a container.
Since your data container is launched with Marathon too, it gets an ID (not sure how to get this ID from Marathon) and a name of the form mesos-0fb2e432-7330-4bfe-bbce-4f77cf382bb4, which is related neither to the task ID in Mesos nor to the Docker ID.
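One way to see those generated names is to filter docker ps output for the mesos- prefix; sketched here against canned output, since the real command needs a running slave:

```shell
# `sample` stands in for the output of: docker ps --format '{{.Names}}'
sample='mesos-0fb2e432-7330-4bfe-bbce-4f77cf382bb4
nginx-proxy'

# Keep only the container names the Mesos slave generated.
names=$(printf '%s\n' "$sample" | grep '^mesos-')
echo "$names"
```

On a live slave you would run docker ps directly, but you still cannot predict the name before the task is scheduled, which is the core of the problem described below.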
The solution would be to write something like this for your web-ubuntu application :
"parameters": [
{ "key": "volumes-from", "value": "mesos-0fb2e432-7330-4bfe-bbce-4f77cf382bb4" }
]
Since this Docker ID is unknown to Marathon, it is not practical to use data containers that are started with Marathon.
You can try to start a data container directly with Docker (without using Marathon) and use it as before, but since you don't know in advance where web-ubuntu will be scheduled (unless you add a constraint to force it), that is not practical either.
{
  "id": "data-container",
  "container": {
    "docker": {
      "image": "mesosphere/inky"
    },
    "type": "DOCKER",
    "volumes": [
      {
        "containerPath": "/data",
        "hostPath": "/var/data/a",
        "mode": "RW"
      }
    ]
  },
  "args": ["data-only"],
  "cpus": 0.2,
  "mem": 32.0,
  "instances": 1
}
{
  "id": "privileged-job",
  "container": {
    "docker": {
      "image": "mesosphere/inky",
      "privileged": true,
      "parameters": [
        { "key": "hostname", "value": "a.corp.org" },
        { "key": "volumes-from", "value": "data-container" },
        { "key": "lxc-conf", "value": "..." }
      ]
    },
    "type": "DOCKER",
    "volumes": []
  },
  "args": ["hello"],
  "cpus": 0.2,
  "mem": 32.0,
  "instances": 1
}
Something like that maybe?
Mesos supports passing volume-plugin parameters using "key" & "value". The issue is how to pass the volume name: Mesos expects it to be either an absolute path, or, if no absolute path is passed, it merges the name with the slave's container sandbox folder. It does that primarily to support checkpointing, in case the slave goes down accidentally.
The only option, until the above gets enhanced, is to use another key/value pair parameter, e.g. in the above case:
{ "key": "volumes-from", "value": "databox" },
{ "key": "volume", "value": "datebox_volume" }
I have tested the above with a plugin and it works.
Another approach is to write a custom Mesos framework capable of running the docker command you want. To know which offers to accept and where each task is placed, you can use Marathon information from /v2/apps (under the tasks key).
A good starting point for writing a new Mesos framework is: https://github.com/mesosphere/RENDLER
