To allow users to upload documents on my website, I am trying to add form validation to a Symfony2 application. According to this doc: http://symfony.com/doc/current/reference/constraints/File.html, I should create a validation.yml file with this syntax:
# src/Acme/BlogBundle/Resources/config/validation.yml
Acme\BlogBundle\Entity\Author
  properties:
    bioFile:
      - File:
          maxSize: 1024k
          mimeTypes: [application/pdf, application/x-pdf]
          mimeTypesMessage: Please upload a valid PDF
I have tried typing/editing this file in many ways, yet I always get a parsing error when the file is loaded:
Unable to parse in "\/***\/***\/dev\/***\/src\/***\/***Bundle\/Resources\/config\/validation.yml" at line 1 (near "***\***\Entity\Author").
I tried to test this code with this online YAML parsing tool: http://yaml-online-parser.appspot.com/, and it says the colon on line 3, just after "properties", is wrong:
Output
ERROR:
mapping values are not allowed here
  in "<unicode string>", line 3, column 13:
      properties:
                ^
What am I missing here? Why is the YAML syntax used in the Symfony documentation not accepted by this online parser? Note that I am aware of the tab vs. space indentation issue for .yml files.
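For reference, the snippet in the linked doc ends the class-name line with a colon, which makes the fully qualified class name a YAML mapping key; without that colon a YAML parser reads the class name as a plain scalar and then rejects the properties: line that follows. The doc's version, which the online parser accepts, looks like this:

# src/Acme/BlogBundle/Resources/config/validation.yml
Acme\BlogBundle\Entity\Author:
  properties:
    bioFile:
      - File:
          maxSize: 1024k
          mimeTypes: [application/pdf, application/x-pdf]
          mimeTypesMessage: Please upload a valid PDF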
I'm trying to load a CSV file into a bucket using InfluxDB v2.1.
Attempting to insert a simple example file results in the following error:
error in csv.from(): failed to read metadata: failed to read annotations: expected annotation datatype
The CSV file that I am trying to write is as follows.
#datatype measurement,tag,double,dateTime:RFC3339
m,host,used_percent,time
mem,host1,64.23,2020-01-01T00:00:00Z
mem,host2,72.01,2020-01-01T00:00:00Z
mem,host1,62.61,2020-01-01T00:00:10Z
mem,host2,72.98,2020-01-01T00:00:10Z
mem,host1,63.40,2020-01-01T00:00:20Z
mem,host2,73.77,2020-01-01T00:00:20Z
This is the example data from the official InfluxData documentation.
If you look at the first line of the example, you can see that the datatype annotation is present, so why does the error occur?
How should I modify it?
This looks like invalid annotated CSV.
In the csv.from function documentation, you can find examples (as string literals) of both annotated and raw CSV that csv.from supports.
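As a rough sketch (adapted from the data above, not copied from the docs), the annotated CSV that csv.from reads carries #datatype, #group, and #default rows, a leading annotation column, and result/table columns, roughly like this:

import "csv"

csvData = "#datatype,string,long,dateTime:RFC3339,string,string,double
#group,false,false,false,true,true,false
#default,_result,,,,,
,result,table,_time,_measurement,host,_value
,,0,2020-01-01T00:00:00Z,mem,host1,64.23
,,1,2020-01-01T00:00:00Z,mem,host2,72.01
"

csv.from(csv: csvData)

The #datatype measurement,tag,double,dateTime:RFC3339 header in the question appears to be the extended annotated CSV understood by the influx write command, which is a different syntax from the annotated CSV that csv.from expects.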
I have a custom connector that writes Neo4j commands from a file to Kafka, and I would like to debug it. So I downloaded Confluent v3.3.0 and took time to familiarize myself with it; however, I find myself stuck trying to load the connector. When I try to load the connector with its .properties file, I get the following error:
parse error: Invalid string: control characters from U+0000 through U+001F must be escaped at line 1, column 124
parse error: Invalid numeric literal at line 2, column 0
I have an inkling that it is trying to parse the file as JSON, because before this error I got the following when trying to load the connector:
Warning: Install 'jq' to add support for parsing JSON
So I installed jq via brew, and now I have been getting the parse errors above.
I would like this file to be parsed as the Java properties format, which I thought would be implied by the .properties extension, but do I need to be explicit in a setting somewhere?
Update:
I converted the .properties file to JSON as suggested by @Konstantine Karantasis, but I get the same error as before, minus the first line:
parse error: Invalid numeric literal at line 2, column 0
I triple-checked my formatting and did some searching on the error, but have come up short. Please let me know if I made an error in my formatting or if there is a nuance to using JSON files with Kafka Connect that I don't know about.
Java properties:
name=neo4k-file-source
connector.class=neo4k.filestream.source.Neo4jFileStreamSourceConnector
tasks.max=1
file=Neo4jCommands.txt
topic=neo4j-commands
Converted to JSON:
[{
    "name": "neo4k-file-source",
    "connector": {
        "class": "neo4k.filestream.source.Neo4jFileStreamSourceConnector"
    },
    "tasks": {
        "max": 1
    },
    "file": "Neo4jCommands.txt",
    "topic": "neo4j-commands"
}]
Check out https://www.confluent.io/blog/simplest-useful-kafka-connect-data-pipeline-world-thereabouts-part-1/ for an example of a valid JSON file being loaded using the Confluent CLI.
In your example, try this:
{
    "name": "neo4k-file-source",
    "config": {
        "connector.class": "neo4k.filestream.source.Neo4jFileStreamSourceConnector",
        "tasks.max": 1,
        "file": "Neo4jCommands.txt",
        "topic": "neo4j-commands"
    }
}
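Assuming that file is saved as, say, neo4k-file-source.json (the file name here is made up), it can then be loaded with something along the lines of:

confluent load neo4k-file-source -d neo4k-file-source.json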
The Confluent CLI, which you are using to start your connector, tries to be smart about figuring out the type of your properties file. It doesn't depend on the extension (.properties) but calls file on the input file and matches the result against the string "ASCII".
This complies with the current definition of a Java properties file (https://en.wikipedia.org/wiki/.properties), but it should be extended to match files encoded in UTF-8 or files that contain escaped Unicode characters.
You have two options:
1. Transform your properties to JSON format instead.
2. Edit the CLI to match the file type returned when running file <yourconf.properties>.
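For what it's worth, the check in question behaves roughly like this (the file names are made up, and the exact wording of file's output varies by platform):

$ file ascii.properties
ascii.properties: ASCII text
$ file utf8.properties
utf8.properties: UTF-8 Unicode text

A properties file containing any non-ASCII byte falls into the second case and is no longer matched as ASCII.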
I am trying to upload a CSV file of products in PrestaShop. Below are the errors that I am getting:
No Name (ID: 61,1,Orous Women's A Line Dress,Home,1399,IN Reduced Rate (4%),0,0,,,,,D2_Yellow,,,,,,,,,,,2,,,,,,"Fabric: Crepe A-line
Exquisite style patterns Gentle machine wash, dry clean, do not
bleach",,,,,,,,,1,,,,http://www.spademark.com/1154X1500/orous/D2_Yellow-_1.jpg,,,,New,,,,,,,,)
cannot be saved
and
Property Product->name is empty
What am I doing wrong?
Docs: http://doc.prestashop.com/display/PS16/CSV+Import+Parameters
Please use this structure for a successful upload:
"Enabled";"Name";"Categories";"Price";"Tax rule ID";"Buying price";"On sale";"Reference";"Weight";"Quantity";"Short desc.";"Long desc";"Images URL"
1;"Test";"1,2,3";130;1;75;0;"PROD-TEST";"0.500";10;"'Tis a short desc.";"This is a long description.";"http://www.myprestashop/images/product1.gif"
I am getting the following error when trying to load a large RDF/XML document into Fuseki:
> Code: 4/UNWISE_CHARACTER in PATH: The character matches no grammar rules of URIs/IRIs. These characters are permitted in RDF URI References, XML system identifiers, and XML Schema anyURIs.
How do I find out what line contains the offending error?
I have tried turning up the output in log4j.properties, and I also tried validating the RDF/XML file using the Jena command-line rdfxml tool (as well as utf8 and riot); it validates with no errors reported. But I'm new to this toolset.
(Which version are you using?)
Check the ""-strings in your RDF/XML data for undesiravle URIs - especially spaces in URIs.
Best to validate before loading : try riot YourFile and send stderr and stdout to a file. The errors should be approximately in the position of the parser output (N-triples) at the time.
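A minimal sketch of that (YourFile.rdf is a placeholder name):

riot YourFile.rdf > output.nt 2> errors.log

The last triples written to output.nt before an error appears in errors.log indicate roughly where the offending URI sits in the input.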
In R there is the source() function, with which you can source an R script from another R script.
I want to be able to do the same in SPSS.
How can I source an SPSS syntax file from another SPSS syntax file?
Update: following @AndyW's comments.
There are the INSERT and INCLUDE commands. INSERT is newer and more versatile than INCLUDE.
See documentation on INSERT here.
The following is the basic syntax template:
INSERT FILE='file specification'
  [SYNTAX = {INTERACTIVE*}]
            {BATCH       }
  [ERROR = {CONTINUE*}]
           {STOP     }
  [CD = {NO*}]
        {YES}
  [ENCODING = 'encoding specification']
Thus, the following command can be placed in an SPSS syntax file
INSERT FILE='foo.sps'.
and it would insert the foo.sps syntax file at that point.
By default, syntax must follow the rules of interactive mode, and the code won't stop on an error.
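To override those defaults, the options from the template above can be spelled out explicitly; a sketch:

INSERT FILE='foo.sps' SYNTAX=BATCH ERROR=STOP CD=YES.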
To avoid having to specify the full path to the file, the working directory can be specified as an argument in the INSERT statement or with a separate CD command.
E.g.,
CD '/user/jimbo/long/path/to/project'.
Another option is to use FILE HANDLE.
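For example (a sketch; the handle name is made up), a handle pointing at the project directory can then be used in the INSERT path:

FILE HANDLE project /NAME='/user/jimbo/long/path/to/project'.
INSERT FILE='project/foo.sps'.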
For more information see the SPSS Syntax Reference (available here as a large PDF file).