Defining a ProblemMatcher in VSCode tasks -- schema disagrees with docs?

In VSCode I'm trying to create a ProblemMatcher to parse errors on a custom script of mine which I run (markdown file -> pandoc -> PDFs if you're interested).
The pretty good VSCode ProblemMatcher documentation has an example task which appears (to me) to run a command ("command": "gcc") and define a problem matcher ("problemMatcher": {...}).
When I try this in my tasks.json file with both, I get a 'the description can't be converted into a problem matcher' error, which isn't terribly helpful. I checked the tasks.json schema and it clearly says:
The problem matcher to be used if a global command is executed (e.g. no tasks are defined). A tasks.json file can either contain a global problemMatcher property or a tasks property but not both.
Is the schema wrong? In which case I'll raise an issue.
Or is my code wrong? In which case, please point me in the right direction. Code in full (minus comments):
{
    "version": "2.0.0",
    "tasks": [
        {
            "label": "md2pdf",
            "type": "shell",
            "command": "md2pdf",
            "group": {
                "kind": "build",
                "isDefault": true
            },
            "presentation": {
                "reveal": "always",
                "panel": "shared",
                "showReuseMessage": false
            },
            "problemMatcher": {
                "owner": "Markdown",
                "fileLocation": ["absolute", "/tmp/md2pdf.log"],
                "pattern": [
                    {
                        // Regular expression to match the filename (on an earlier line than the actual warnings)
                        "regexp": "^Converting:\\s+(.*)$",
                        "kind": "location",
                        "file": 1
                    },
                    {
                        // Regular expression to match e.g. "l.45 \msg_fatal:nn {fontspec} {cannot-use-pdftex}", with a preceding line giving "Converting <filename>:"
                        "regexp": "l.(\\d+)\\s(.*):(.*)$",
                        "line": 1,
                        "severity": 2,
                        "message": 3
                    }
                ]
            }
        }
    ]
}

I've since spent more time figuring this out, and have corresponded with the VS Code team, which has led to improvements in the documentation.
The two changes needed to get something simple working were:
Need to have "command": "/full/path/to/executable", not just the executable's name.
The "fileLocation" isn't about the location of the file to be matched, but about how file paths mentioned in the task output should be treated. The file to be matched can't be specified, as it's implicitly the file or folder open in the editor at the time the task runs. The setting wasn't important in my case. Both fixes are shown in the sketch below.

If you, like me, have come here because of "the description can't be converted into a problem matcher", here is what I learned:
If your problem matcher says something like "base": "$gcc", then I assume you are using the Microsoft C/C++ plugin. If you are using some other base which is not listed on the official docs webpage (search for "Tasks in Visual Studio Code"), then assume it is probably supplied by a plugin.
So, this error could mean that you are missing a plugin. In my case I was trying to run this task remotely in WSL/Ubuntu using VS Code's awesome WSL integration. I installed the C/C++ plugin inside WSL and the error was fixed (go to the Extensions panel and click "Install in WSL: <Distro name>").
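For context, a matcher that extends a predefined one looks something like this in tasks.json (a sketch; the fileLocation value is just an example):
"problemMatcher": {
    // "$gcc" here is resolved by whichever extension contributes that matcher
    "base": "$gcc",
    "fileLocation": ["relative", "${workspaceFolder}"]
}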

Just a hunch, but I bet your fileLocation is wrong. Try something like
"fileLocation": "absolute",

Related

dask - read_json into dataframe ValueError

A minimal example here: I have a json file xaa.json whose contents looks like this (two rows from stackoverflow archive):
[
{"Id": 11, "Body": "<p>Given a specific <code>DateTime</code> value", "Title": "Calculate relative time in C#", "Comments": "There is the .net package https://github.com/NickStrupat/TimeAgo which pretty much does what is being asked."},
{"Id": 7888, "Body": "<p>You need to use an <code>ifstream</code> if you just want to read (use an <code>ofstream</code> to write, or an <code>fstream</code> for both).</p>
<p>To open a file in text mode, do the following:</p>
<pre><code>ifstream in(\\"filename.ext\\", ios_base::in); // the in flag is optional
</code></pre>
<p>To open a file in binary mode, you just need to add the \\"binary\\" flag.</p>
<pre><code>ifstream in2(\\"filename2.ext\\", ios_base::in | ios_base::binary );
</code></pre>
<p>Use the <code>ifstream.read()</code> function to read a block of characters (in binary or text mode). Use the <code>getline()</code> function (it's global) to read an entire line.</p>
", "Title": null, "Comments": "+1 for noting that the global getline() function is to be used instead of the member function."}
]
I want to load such json files into a dask dataframe. I use:
so_posts_df = dd.read_json('./xaa.json', orient='columns').compute()
I get this error:
ValueError: Unexpected character found when decoding object value
After looking into the contents, I figured that the \\" escape sequences were causing it. When I removed them (the editor, IntelliJ, said the result was clean, nice-looking JSON) and ran the same read_json, it was able to read the data into a df and display it nicely.
So, I have 2 questions: (a) what are the accepted values for read_json's "errors" argument? (b) How can I properly preprocess the json file before reading it into a dask dataframe? The presence of double quotes and the double escaping seems to be causing the issue.
[This may not be a dask issue at all...]
This also fails with pandas.read_json. I recommend first trying to get things to work well with Pandas, and then trying the same workload with dask dataframe. You will likely get much better support when asking Pandas questions.
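For a quick check, the standard library's json module will tell you whether the file itself is valid before any dataframe library gets involved; a minimal sketch, assuming the file is ./xaa.json as above:
import json

# If this raises json.JSONDecodeError, the file itself is malformed JSON,
# and the dask/pandas errors are downstream symptoms of that.
with open("xaa.json") as f:
    records = json.load(f)
print("parsed", len(records), "records")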

Custom build task not showing in YAML task list

We wrote a custom Azure DevOps build task, but we can't find it in the YAML editor task list. It doesn't even show up in search.
This is my task.json:
{
    "id": "17813657-13c6-4cd8-b245-8d8b3b0cf210",
    "name": "ApplitoolsBuildTask",
    "friendlyName": "Applitools Build Task",
    "description": "Add the Applitools dashboard as a tab in the Azure DevOps build results page.",
    "categories": [ "Build" ],
    "category": "Build",
    "author": "Applitools",
    "version": {
        "Major": 0,
        "Minor": 44,
        "Patch": 0
    },
    "instanceNameFormat": "Applitools Build Task $(version)",
    "execution": {
        "Node": {
            "target": "dist/index.js"
        }
    }
}
I also tried with only the categories property, and it still didn't show up in the search.
I then downloaded Augurk locally and examined its content (also available on GitHub: https://github.com/Augurk/vsts-extension/tree/master/src), and I saw that AugurkCLI doesn't even have a categories property, as it has a typo: categorues, and for some reason it still shows up. This leads me to think there's no relation between that property and the task list.
I also examined the XML file and saw it has a <PublisherDetails> section, which my .vsix file doesn't have. What should I put in my vss-extension.json file to get it? And will it help my extension show up in the Task List?
Note that in the Classic editor (the one with the UI) I see it just fine, in the right categories (if I have the "category" property), and if I don't have it, it still shows up when I search. The problem is getting my build task to show up in the YAML editing Task List.
Indeed, our team is fixing this issue now. The issue is caused by the YAML assistant panel, which doesn't allow tasks without input parameters; the Classic editor is not affected.
Until the fixed release is deployed, you can use this workaround to make your custom task appear in the YAML editor task list:
Change your task to accept an input parameter; it will then appear in the YAML editor task list, as in the sketch below.
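For example, a dummy input in your task.json could look something like this (a sketch; the name and label are placeholders, any input parameter should do):
"inputs": [
    {
        "name": "placeholderInput",
        "type": "string",
        "label": "Placeholder input",
        "defaultValue": "",
        "required": false,
        "helpMarkDown": "Dummy input so the task appears in the YAML editor task list."
    }
]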
You can reference this ticket we received recently. We will update you here once the fixed release is deployed and the issue is resolved.
Well, it seems it's a bug on Microsoft's side. I don't have any input fields in my build task, and the Azure DevOps YAML editor task list filters out any task that doesn't have input fields.
They told me they had fixed it:
https://developercommunity.visualstudio.com/content/problem/576169/our-custom-azure-devops-build-task-doesnt-show-in.html
The fix should be available in a few weeks.

Global evaluation is not supported in VS Code when using shift+r (Flutter)

When I use hot reload, my debug console says "global evaluation not supported". How do I fix this problem?
I read this global evaluation question before asking, but it did not help me.
(Screenshot of the error omitted.)
Add "console": "terminal" to your configuration in launch.json.
Example:
{
    // Use IntelliSense to learn about possible attributes.
    // Hover to view descriptions of existing attributes.
    // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
    "version": "0.2.0",
    "configurations": [
        {
            "name": "dart_application_1",
            "request": "launch",
            "type": "dart",
            "program": "bin/dart_application_1.dart",
            "console": "terminal"
        }
    ]
}
More details available here
The VS Code debugger for Dart/Flutter does not currently support global evaluation (there's an open issue for this here). Add a 👍 to that issue if it's something you'd like to see.
The debug console can be used for evaluation when execution is paused (e.g. you've stopped at a breakpoint or similar) - this is evaluation in the context of the current frame.
Just remember to change "name" to your project name, and then update the path in the "program" setting after the bin/ to match ;)

How do I figure out what found-elasticsearch version is running on heroku?

Heroku says I am running Elasticsearch version 2.2.0, but I think they are wrong, and here is why...
Locally on 2.2.0, my mappings include the payloads: true option as defined here, and they work just fine. However, on Heroku I get empty responses. If I remove this option and construct the mapping in accordance with the "2.x" docs, then it works on Heroku but responses are empty locally. What does 2.x mean exactly? How can I find the real version running on Heroku?
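For reference, the option in question sits on a completion field in my mapping, roughly like this (a sketch; the field and type names are placeholders):
{
    "mappings": {
        "mytype": {
            "properties": {
                "suggest": {
                    "type": "completion",
                    "payloads": true
                }
            }
        }
    }
}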
Thank you.
Found-ElasticSearch:
{
    "name": "instance-x",
    "cluster_name": "x",
    "version": {
        "number": "2.2.0",
        "build_hash": "1b182b4497d4bba7602085ebd2e59a8a555ad368",
        "build_timestamp": "2016-01-14T13:42:27Z",
        "build_snapshot": true,
        "lucene_version": "5.4.0"
    },
    "tagline": "You Know, for Search"
}
Local:
{
    "name": "Power Princess",
    "cluster_name": "elasticsearch_brew",
    "version": {
        "number": "2.2.0",
        "build_hash": "8ff36d139e16f8720f2947ef62c8167a888992fe",
        "build_timestamp": "2016-01-27T13:32:39Z",
        "build_snapshot": false,
        "lucene_version": "5.4.1"
    },
    "tagline": "You Know, for Search"
}
I notice lucene_version and build_snapshot are different. The Lucene version only has bugfixes that have nothing to do with payloads. So what is build_snapshot, and could that be affecting it?
You can use the build_hash values to figure out the difference between the two builds. The one deployed on Found (your Heroku instance) dates from January 14th, 2016, and your local one from January 27th, 2016, i.e. 13 days later. According to build_snapshot, the one on Found is not a release artifact, but a snapshot artifact.
So let's look at the differences between both artifacts on Github using the build hashes above:
13 days' worth of commits
49 commits
191 changed files
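(To reproduce the comparison, GitHub's compare view accepts two commit hashes directly; a sketch of the URL, using the two build_hash values above against the upstream elastic/elasticsearch repository:)
https://github.com/elastic/elasticsearch/compare/1b182b4497d4bba7602085ebd2e59a8a555ad368...8ff36d139e16f8720f2947ef62c8167a888992fe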
And somewhere in there we find commit db409c99, which includes changes to the CompletionFieldMapper where the payloads field was added.
Glancing through the commits, we can also see that they had to revert the new completion suggester changes because they were breaking other parts; the feature will be re-introduced in a major version (3.0).
So, to sum up, the local version you have includes the payloads field, while the one you have on Found doesn't, which explains the behavior you're seeing.

Generating an avro schema with optional values

I am trying to write a very simple Avro schema (simple because I am just pointing out my current issue) to write an Avro data file based on data stored in JSON format. The trick is that one field is optional, and either avrotools or I am not doing it right.
The goal is not to write my own serialiser; the end goal is to have this in Flume. I am in the early stages.
The data (works), in a file named so.log:
{
    "valid": {"boolean": true}
  , "source": {"bytes": "live"}
}
The schema, in a file named so.avsc:
{
    "type": "record",
    "name": "Event",
    "fields": [
        {"name": "valid", "type": ["null", "boolean"], "default": null}
      , {"name": "source", "type": ["null", "bytes"], "default": null}
    ]
}
I can easily generate an avro file with the following command:
java -jar avro-tools-1.7.6.jar fromjson --schema-file so.avsc so.log
So far so good. The thing is that "source" is optional, so I would expect the following data to be valid as well:
{
    "valid": {"boolean": true}
}
But running the same command gives me the error:
Exception in thread "main" org.apache.avro.AvroTypeException: Expected start-union. Got END_OBJECT
    at org.apache.avro.io.JsonDecoder.error(JsonDecoder.java:697)
    at org.apache.avro.io.JsonDecoder.readIndex(JsonDecoder.java:441)
    at org.apache.avro.io.ResolvingDecoder.doAction(ResolvingDecoder.java:229)
    at org.apache.avro.io.parsing.Parser.advance(Parser.java:88)
    at org.apache.avro.io.ResolvingDecoder.readIndex(ResolvingDecoder.java:206)
    at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:155)
    at org.apache.avro.generic.GenericDatumReader.readField(GenericDatumReader.java:193)
    at org.apache.avro.generic.GenericDatumReader.readRecord(GenericDatumReader.java:183)
    at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:151)
    at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:142)
    at org.apache.avro.tool.DataFileWriteTool.run(DataFileWriteTool.java:99)
    at org.apache.avro.tool.Main.run(Main.java:84)
    at org.apache.avro.tool.Main.main(Main.java:73)
I did try a lot of variations in the schema, even things that do not follow the avro spec. The schema I show here is, as far as I know, what the spec says it should be.
Would anybody know what I am doing wrong, and how I can actually have optional elements without writing my own serialiser?
Thanks,
According to the documentation of the Java API:
using a builder requires setting all fields, even if they are null
The Python API, on the other hand, seems to allow null fields to be really optional:
Since the field favorite_color has type ["string", "null"], we are not required to specify this field
In short, as most tools are written in Java, null fields must usually be given explicitly.
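So with avro-tools, the optional field still has to be spelled out in the JSON input; under Avro's JSON encoding, a missing source is written as an explicit null rather than omitted (using the schema above):
{
    "valid": {"boolean": true}
  , "source": null
}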
