I'm trying to set up a copy tool that moves the compiled PDF to a folder other than the build output folder.
The idea behind this is to keep the main .tex files in a cloud-based folder (OneDrive) while avoiding generating all the auxiliary files there (because OneDrive syncs the generated files...).
So I tried to make a new tool, but unfortunately it isn't working. Can someone help me with this?
I'm on Windows 10.
I first tried copy (but it isn't a known command). Then I tried with xcopy: it finds the command, but it complains that the number of parameters is wrong.
{
"name": "copyPDF",
"command": "xcopy",
"args": [
"%TMPDIR%/%DOCFILE%.pdf",
"%DIR%/PDF/%DOCFILE%.pdf",
"/y",
]
}
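In case it helps with the xcopy error: xcopy tends to interpret forward slashes as switch markers, which is a common cause of "Invalid number of parameters". A minimal sketch with backslashes instead (untested, keeping your %TMPDIR% source as-is, and assuming the PDF subfolder already exists) would be:
{
  "name": "copyPDF",
  "command": "xcopy",
  "args": [
    "%TMPDIR%\\%DOCFILE%.pdf",
    "%DIR%\\PDF",
    "/y"
  ]
}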
I've just tried to do the exact same thing.
It took me longer than I'd like to admit, and it may be too late for your request, but it might help future visitors.
This is my settings.json:
"latex-workshop.latex.recipes": [
{
"name": "latexmk ➞ copyPDF",
"tools": ["latexmk", "copyPDF"]
}
],
"latex-workshop.latex.tools": [
{
"name": "latexmk",
"command": "latexmk",
"args": [
"-synctex=1",
"-interaction=nonstopmode",
"-file-line-error",
"-pdf",
"-outdir=%OUTDIR%",
"%DOC%"
],
"env": {}
},
{
"name": "copyPDF",
"command": "cmd.exe",
"args": [
"/c",
"copy",
"%OUTDIR%\\%DOCFILE%.pdf",
"%DIR%",
],
"env": {}
},
],
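To drop the PDF into a separate subfolder like the question asked for, the destination argument can presumably be pointed there instead; note that cmd's copy won't create the folder, so it has to exist already:
"args": [
  "/c",
  "copy",
  "%OUTDIR%\\%DOCFILE%.pdf",
  "%DIR%\\PDF"
],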
Had some problems with rheinert.leon's answer, mainly because of spaces in the filenames. Here's a PowerShell tool version that accounts for that:
"latex-workshop.latex.tools": [
{
"name": "copyPDF",
"command": "powershell.exe",
"args": [
"copy '%OUTDIR%\\%DOCFILE%.pdf' %DIR%"
],
"env": {}
}
...
],
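If the destination directory can contain spaces as well, quoting it the same way should handle that too; a small variation of the snippet above, just a sketch:
{
  "name": "copyPDF",
  "command": "powershell.exe",
  "args": [
    "Copy-Item -LiteralPath '%OUTDIR%\\%DOCFILE%.pdf' -Destination '%DIR%'"
  ],
  "env": {}
}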
Since Rails 5.1, it's possible to run the Rails server alongside webpack-dev-server. I have configured a debugger in launch.json to run the Rails server. When I start the Rails server through VS Code, I want it to automatically run ./bin/webpack-dev-server in the background as another process to auto-compile JavaScript changes, but I can't figure out how to achieve this.
I have created a task in tasks.json to run Webpacker, but I can't figure out how to combine it with launch.json.
Here is my launch.json:
{
"version": "0.2.0",
"configurations": [
{
"preLaunchTask": "webpack-dev-server",
"name": "Rails server",
"type": "Ruby",
"request": "launch",
"program": "${workspaceRoot}/bin/rails",
"args": [
"server"
]
}
]
}
And here is tasks.json:
{
"version": "2.0.0",
"tasks": [
{
"label": "webpack-dev-server",
"type": "shell",
"command": "${workspaceRoot}/webpack-dev-server",
"isBackground": true,
}
]
}
When I run the debugger and the task separately, everything works as expected, but running them automatically together when starting a debug session does not work.
Things I've tried:
Run webpack-dev-server with "preLaunchTask" - the problem with this is that "preLaunchTask" waits until webpack-dev-server stops running and only then starts debugging. I need them to run simultaneously, next to each other.
Specify webpack-dev-server as another launch configuration and combine these two launches through compounds in launch.json - this doesn't work, because VS Code requires a launch type and shell isn't supported.
Run the task with & at the end to suppress waiting for the process to finish - not working.
If anybody has solved this or knows how to run both processes simultaneously with one click, it would be helpful if you shared that knowledge.
Thank you.
So I found the solution thanks to https://stackoverflow.com/a/54017304/3442759.
In tasks.json, a problemMatcher needs to be specified even when it isn't actually used. Without a problemMatcher, the task will not run in the background even when isBackground is set to true.
I've created gist with setup steps. https://gist.github.com/tomkra/b1d67a7ae96af34cba78935f15b755b6
So the final configuration is:
launch.json
{
"version": "0.2.0",
"configurations": [
{
"name": "Rails server",
"type": "Ruby",
"request": "launch",
"program": "${workspaceRoot}/bin/rails",
"args": [
"server"
],
"preLaunchTask": "webpack-dev-server"
}
]
}
tasks.json
{
"version": "2.0.0",
"tasks": [
{
"label": "webpack-dev-server",
"type": "shell",
"isBackground": true,
"command": "./bin/webpack-dev-server",
"problemMatcher": [
{
"pattern": [
{
"regexp": ".",
"file": 1,
"location": 2,
"message": 3
}
],
"background": {
"activeOnStart": true,
"beginsPattern": ".",
"endsPattern": ".",
}
}
]
}
]
}
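As far as I understand it, VS Code treats a background preLaunchTask as ready once the background matcher's endsPattern fires, instead of waiting for the task process to exit; with "." as both patterns, the first line of output already unblocks the debugger. If you would rather wait for the first compile to finish, something along these lines should also work (assuming your webpack-dev-server output actually contains these phrases):
"background": {
  "activeOnStart": true,
  "beginsPattern": "Compiling",
  "endsPattern": "Compiled successfully|Failed to compile"
}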
In electron.manifest.json, I have the following:
"extraResources": [
{
"from": "./bin",
"to": "bin",
"filter": ["**/*"]
}
],
However, the Electron build is missing a few needed files. These files come from a NuGet package that the application uses and are located in the following folder:
bin\Debug\netcoreapp3.1\runtimes\win7-x64\native
I am wondering if it may be because these are Windows-specific files. The pipeline specifies a Windows build. Here is the build command in azure-pipelines.yml (if it matters):
electronize build /target win /package-json package.json /dotnet-configuration $(buildConfiguration)
I have tried to specify the folder and the file in several different ways (a few shown below), but I can't get it to work. There are only 3 missing DLLs, so I don't mind adding them individually if needed. And I could add the entire folder, since there isn't anything else in it besides the 3 DLLs.
Attempt 1 – adding a FROM folder:
"extraResources": [
{
"from": [ "./bin", "./bin/Debug/netcoreapp3.1/runtimes/win7-x64/native" ],
"to": "bin",
"filter": [ "**/*" ]
}
],
Attempt 2 – specifying the file:
"extraResources": [
{
"from": [ "./bin", "./bin/Debug/netcoreapp3.1/runtimes/win7-x64/native/abcdef.dll" ],
"to": "bin",
"filter": [ "**/*" ]
}
],
Attempt 3 – using a windows tag and a specific file:
"extraResources": [
{
"from": "./bin",
"to": "bin",
"filter": [ "**/*" ]
}
],
"win":
{
"extraResources": [
{
"from": "./bin/Debug/netcoreapp3.1/runtimes/win7-x64/native/abcdef.dll",
"to": "bin"
"filter": [ "**/*" ]
}
]
},
How can I add DLL libraries to my Electron build? And can I do that by passing in the build configuration (something like $(buildConfiguration))?
Thanks in advance for any help you can give me.
Andrew
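One detail that might matter here: as far as I remember from the electron-builder documentation (which Electron.NET passes these settings to), the from field of an extraResources entry is a single string rather than an array, so the array form in attempts 1 and 2 is probably not picked up. Splitting the sources into two entries might be worth a try; a sketch, with the paths taken from the question and otherwise unverified:
"extraResources": [
  {
    "from": "./bin",
    "to": "bin",
    "filter": [ "**/*" ]
  },
  {
    "from": "./bin/Debug/netcoreapp3.1/runtimes/win7-x64/native",
    "to": "bin",
    "filter": [ "*.dll" ]
  }
],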
I'm writing a simple program using VS Code, MinGW, and the OpenCV library. I downloaded a prebuilt OpenCV package from here and followed the instructions on this page for building the code. I can build the program successfully with no errors, but there is a problem: when I call an OpenCV function (like cv::imread), a segmentation fault occurs. Any kind of help would be appreciated.
tasks.json
{
"version": "2.0.0",
"tasks": [
{
"type": "shell",
"label": "C/C++: gcc.exe build active file",
"command": "C:\\mingw\\mingw64\\bin\\g++.exe",
"args": [
"-g",
"${file}",
"${workspaceFolder}/utils.cpp",
"-o",
"${fileDirname}\\${fileBasenameNoExtension}.exe",
"-IC:\\OpenCV\\include",
"-LC:\\OpenCV\\x64\\mingw\\bin",
"-llibopencv_calib3d341",
"-llibopencv_core341",
"-llibopencv_dnn341",
"-llibopencv_features2d341",
"-llibopencv_flann341",
"-llibopencv_highgui341",
"-llibopencv_imgcodecs341",
"-llibopencv_imgproc341",
"-llibopencv_ml341",
"-llibopencv_objdetect341",
"-llibopencv_photo341",
"-llibopencv_shape341",
"-llibopencv_stitching341",
"-llibopencv_superres341",
"-llibopencv_video341",
"-llibopencv_videoio341",
"-llibopencv_videostab341"
],
"options": {
"cwd": "C:\\mingw\\mingw64\\bin"
},
"problemMatcher": [
"$gcc"
],
"group": "build"
},
{
"type": "shell",
"label": "g++.exe build active file",
"command": "C:\\mingw\\mingw64\\bin\\g++.exe",
"args": [
"-g",
"${file}",
"-o",
"${fileDirname}\\${fileBasenameNoExtension}.exe"
],
"options": {
"cwd": "C:\\mingw\\mingw64\\bin"
}
}
]
}
c_cpp_properties.json
{
"configurations": [{
"name": "Win32",
"includePath": [
"${workspaceFolder}/**",
"C:/OpenCV/include/**"
],
"defines": [
"_DEBUG",
"UNICODE",
"_UNICODE"
],
"windowsSdkVersion": "8.1",
"compilerPath": "C:\\mingw\\mingw64\\bin\\g++.exe",
"cStandard": "c11",
"cppStandard": "c++17",
"intelliSenseMode": "gcc-x64"
}],
"version": 4
}
Is this on Windows? If so, I don't think the issue is with your VS Code files if you're only facing problems after the build. You may want to check your OpenCV or mingw-w64 installation: did you build and install with CMake? For MinGW-w64, was your install configuration correct for your machine?
Assuming, of course, that everything has also been added to your Windows PATH environment variable, I tested your c_cpp_properties.json and tasks.json setup with my own OpenCV Windows VS Code environment. The only things I did differently were to get rid of include errors by adding the following:
"problemMatcher": {
"base": "$gcc",
"fileLocation": [
"absolute"
]
},
"group": {
"kind":"build",
"isDefault": true
}
to my tasks.json problemMatcher and group statements, so that the system could properly find the OpenCV library. I also don't know what the "${workspaceFolder}/utils.cpp" line is doing in tasks.json, but regardless, if you're able to build fine, it seems to me there's more likely an underlying problem with either MinGW or OpenCV.
On my Linux VM I have set up a Docker container to build and debug my VS Code C++ project via an SSH connection. Building works inside the container, as does running and debugging with breakpoints. I am stuck on how to redirect stdout to the Output and Problems tabs so I can see warnings generated by the build and then navigate to the affected files. Instead, it just outputs the build to a terminal window.
The project is located in a docker volume in the location:
/var/snap/docker/common/var-lib-docker/volumes/vol-tom-2/_data/My-Project
And inside the container it is located in:
/home/buildmaster/workspace/My-Project
For debugging I have modified the launch.json file so that, when setting breakpoints, it matches up the files in the project with the ones in the container, by adding this line:
"sourceFileMap": {
"/home/user/workspace": "/var/snap/docker/common/var-lib-docker/volumes/vol-tom-2/_data/"
},
I would like to find something similar for tasks.json, so that it can sync up my local VS Code project with the warnings and errors generated by the build inside the container.
Below is my tasks.json file. Thanks in advance if anyone has any idea how to solve this!
{
"version": "2.0.0",
"command": "/bin/sh",
"args": ["-c"],
"reveal": "always",
"tasks": [
{
"args": [
"user#localhost",
"-p",
"32772",
"/home/build-scripts/build-script.sh"
],
"label": "build",
"command": "ssh",
"problemMatcher": {
"owner": "cpp",
"fileLocation": ["relative", "${workspaceRoot}"],
"pattern": {
"regexp": "^\/host\/(.*):(\\d+):(\\d+):\\s+(warning|error):\\s+(.*)$",
"file": 1,
"line": 2,
"column": 3,
"severity": 4,
"message": 5
}
},
}
]
}
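A side note on the matcher itself: the regexp anchors on ^/host/, while the compiler inside the container presumably prints paths under /home/buildmaster/workspace/My-Project, so the pattern may simply never match. One thing that might be worth trying (an assumption on my part, not something from the post) is letting the regexp swallow that container prefix and declaring the remainder relative to the workspace, roughly:
"problemMatcher": {
  "owner": "cpp",
  "fileLocation": ["relative", "${workspaceRoot}"],
  "pattern": {
    "regexp": "^/home/buildmaster/workspace/My-Project/(.*):(\\d+):(\\d+):\\s+(warning|error):\\s+(.*)$",
    "file": 1,
    "line": 2,
    "column": 3,
    "severity": 4,
    "message": 5
  }
}
The build output showing up in the terminal is expected for version 2.0.0 shell tasks; it is the problem matcher that feeds the Problems tab from that output.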
Hi, I didn't really know whether my question was better suited for Server Fault or here; I hope the DevOps folks won't mind me posting it here.
I am working on a stack with Mesos/Marathon/Docker/GlusterFS, and I'm getting tired of the lack of documentation.
I am looking for a sample Marathon deployment file for deploying with the GlusterFS volume driver.
The author says that we should create the volume beforehand, but he doesn't say anything about mounting it.
"container": {
"type": "DOCKER",
"docker": {
"image": "kylemanna/openvpn:latest",
"parameters": [
{
"key": "volume-driver",
"value": "glusterfs"
},
{
"key": "cap-add",
"value": "NET_ADMIN"
}
],
"network": "BRIDGE",
"portMappings": [
{
"containerPort": 1194
}
]
},
"volumes": [
{
"containerPath": "/etc/openvpn",
"hostPath": "openvpn-data",
"mode": "RW"
}
]
}
My container keeps restarting in Marathon, and the logs say: /usr/local/bin/ovpn_run: line 16: /etc/openvpn/ovpn_env.sh: No such file or directory
On my Gluster file server, the file is present at /data/openvpn-data/ovpn_env.sh.
I don't see any mount point in /mnt; I guess Marathon did the mount itself, but because the container keeps restarting, I don't see it.
I did a docker inspect to check where the filesystem was stored and found that it is in /var/lib/docker-volumes/_glusterfs/openvpn-data.
So here are my questions:
Is my Marathon JSON file correct?
Will the container wait for all the data to be downloaded, and should I configure something for that?
Is the data erased when deleting a container in Marathon?
Should I have my ovpn_env.sh in /data/myvolume/ovpn_env.sh or /data/myvolume/etc/openvpn/ovpn_env.sh?
Have a look at the following issue
https://github.com/mesosphere/marathon/issues/2493#issuecomment-196743212
and the docs at
https://github.com/mesosphere/marathon/blob/bd076173b662b12d18e5dd568629a286b242ba91/docs/docs/persistent-volumes.md
Quote:
Docker volumes with plugin drivers is not available right now.
You'll have to create the volume/mount before you start the container, and map the host folder when you launch the app via Marathon (you do this already). I guess that's why it's currently called "persistent local volumes"...
Define it in the "parameters" part, like this:
"parameters": [
{
"key": "volume-driver",
"value": "glusterfs"
},
{
"key": "volume",
"value": "openvpn-data:/etc/openvpn"
}
]
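Merged into the container definition from the question, that would presumably look like the following; the top-level "volumes"/"hostPath" block goes away because the mount is now handled entirely by the Docker volume parameter:
"container": {
  "type": "DOCKER",
  "docker": {
    "image": "kylemanna/openvpn:latest",
    "parameters": [
      {
        "key": "volume-driver",
        "value": "glusterfs"
      },
      {
        "key": "volume",
        "value": "openvpn-data:/etc/openvpn"
      },
      {
        "key": "cap-add",
        "value": "NET_ADMIN"
      }
    ],
    "network": "BRIDGE",
    "portMappings": [
      {
        "containerPort": 1194
      }
    ]
  }
}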