Access a single object in the model using Metalsmith and swig

I have a JSON data file with multiple named objects in it:
{
  "berlin" : {
    "location": "Berlin",
    "folder": "berlin-2016"
  },
  "seattle" : {
    "location": "Seattle",
    "folder": "seattle-2016"
  }
}
In my content file I would like to specify which object in the model to use and then refer to that in swig. Something like this:
---
model:
  conference: conferences['berlin']
---
{{ model.conference.location }}
Is this possible?

That's definitely possible with Metalsmith. I don't have a complete picture of your build process, but for this solution you'll have to use the Metalsmith JavaScript API:
./data.json
{
  "berlin" : {
    "location": "Berlin",
    "folder": "berlin-2016"
  },
  "seattle" : {
    "location": "Seattle",
    "folder": "seattle-2016"
  }
}
./build.js
// Dependencies
var metalsmith = require('metalsmith');
var layouts = require('metalsmith-layouts');

// Import metadata
var metadata = require('./data.json');

// Build
metalsmith(__dirname)
  // Make data available
  .metadata(metadata)
  // Process templates
  .use(layouts('swig'))
  // Build site
  .build(function(err){
    if (err) throw err;
  });
Then run node build.js in your root project folder to build. In your templates the data from data.json would then be available as {{ berlin.location }}.
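If you also want to keep the front-matter pointer from your question, one option (just a sketch, assuming you nest the imported data under a conferences key, and that the front matter stores the plain key "berlin" rather than the expression conferences['berlin']) would be:
// build.js (excerpt) – nest the imported data under a "conferences" key
.metadata({ conferences: metadata })
and then in the content file and template something like:
---
model:
  conference: berlin
---
{{ conferences[model.conference].location }}
Front matter values are plain strings, so storing the key and doing the lookup in the template is usually simpler than trying to evaluate an expression in YAML.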
You can also do this without the JavaScript API (which I don't recommend, because you lose some flexibility), in which case you would use a plugin (for example: metalsmith-json).

Related

Artifactory and Jenkins - get file with newest/biggest custom property

I have a generic repository "my_repo". I uploaded files there from Jenkins to paths like my_repo/branch_buildNumber/package.tar.gz, with a custom property "tag" like "1.9.0", "1.10.0", etc. I want to get the item/file with the latest/newest tag.
I tried to modify Example 2 from this link ...
https://www.jfrog.com/confluence/display/JFROG/Using+File+Specs#UsingFileSpecs-Examples
... and add sorting and limit the way it was done here ...
https://www.jfrog.com/confluence/display/JFROG/Artifactory+Query+Language#ArtifactoryQueryLanguage-limitDisplayLimitsandPagination
But I'm getting an "unknown property desc" error.
The Jenkins Artifactory Plugin, like most of the JFrog clients, supports File Specs for downloading and uploading generic files.
The File Specs schema is described here. When creating a File Spec for downloading files, you have the option of using the "pattern" property, which can include wildcards. For example, the following spec downloads all the zip files from the my-local-repo repository into the local froggy directory:
{
  "files": [
    {
      "pattern": "my-local-repo/*.zip",
      "target": "froggy/"
    }
  ]
}
Alternatively, you can use "aql" instead of "pattern". The following spec provides the same result as the previous one:
{
  "files": [
    {
      "aql": {
        "items.find": {
          "repo": "my-local-repo",
          "$or": [
            {
              "$and": [
                {
                  "path": {
                    "$match": "*"
                  },
                  "name": {
                    "$match": "*.zip"
                  }
                }
              ]
            }
          ]
        }
      },
      "target": "froggy/"
    }
  ]
}
The allowed AQL syntax inside File Specs does not include everything the Artifactory Query Language allows. For example, you can't use the "include" or "sort" clauses. These limitations were put in place to make the response structure known and constant.
Sorting, however, is still available with File Specs, regardless of whether you choose to use "pattern" or "aql". It is supported through the "sortBy", "sortOrder", "limit" and "offset" File Spec properties.
For example, the following File Spec will download only the 3 largest zip files:
{
  "files": [
    {
      "aql": {
        "items.find": {
          "repo": "my-local-repo",
          "$or": [
            {
              "$and": [
                {
                  "path": {
                    "$match": "*"
                  },
                  "name": {
                    "$match": "*.zip"
                  }
                }
              ]
            }
          ]
        }
      },
      "sortBy": ["size"],
      "sortOrder": "desc",
      "limit": 3,
      "target": "froggy/"
    }
  ]
}
And you can do the same with "pattern", instead of "aql":
{
  "files": [
    {
      "pattern": "my-local-repo/*.zip",
      "sortBy": ["size"],
      "sortOrder": "desc",
      "limit": 3,
      "target": "local/output/"
    }
  ]
}
You can read more about File Specs here.
(After answering this question here, we also updated the File Specs documentation with these examples).
After a lot of testing and experimenting I found that there are many ways of solving my main problem (getting the latest version of a package), but each of them requires some function that is only available in the paid version, like sort() in AQL or [RELEASE] in the REST API. However, I found that I can still get JSON with the full list of files and their properties, and I can also download each single file. This led me to a solution based on a simple Python script. I can't publish the whole thing, but here is the core, which should be fairly obvious:
import requests, argparse
from packaging import version
...
query = """
items.find({
    "type" : "file",
    "$and":[{
        "repo" : {"$match" : \"""" + args.repository + """\"},
        "path" : {"$match" : \"""" + args.path + """\"}
    }]
}).include("name","repo","path","size","property.*")
"""
auth = (args.username, args.password)

def clearVersion(ver: str):
    # keep only digits and dots so the tag can be parsed as a version
    new = ''
    for letter in ver:
        if letter.isnumeric() or letter == ".":
            new += letter
    return new

def latestArtifact(response: requests.Response):
    # walk all results and remember the item with the highest "tag" property
    response = response.json()
    latestVer = "0.0.0"
    currentItemIndex = 0
    chosenItemIndex = 0
    for results in response["results"]:
        for prop in results['properties']:
            if prop["key"] == "tag":
                if version.parse(clearVersion(prop["value"])) > version.parse(clearVersion(latestVer)):
                    latestVer = prop["value"]
                    chosenItemIndex = currentItemIndex
        currentItemIndex += 1
    return response["results"][chosenItemIndex]

req = requests.post(url, data=query, auth=auth)
if args.verbose:
    print(req.text)
latest = latestArtifact(req)
...
I just want to point out that THIS IS NOT a permanent solution. We just didn't want to buy a license yet because of one single problem, but if more such problems come up, we will definitely buy the PRO subscription.

Error Creating Folder in SharePoint using Graph API

I am attempting to create a child folder in SharePoint and I am receiving errors. I can successfully create two levels of folders, but upon creating the third level, I receive the error below:
"The request URI is not valid. The bound function binding to 'microsoft.graph.driveItem' does not support the escape function annotation."
I am using Postman and taking the steps here to create folders:
1) Top Level: (Successful) - Folder1 Created
https://graph.microsoft.com/v1.0/sites/<SITE_ID>/drive/root/children
{
"name": "Folder1",
"folder": { },
"#microsoft.graph.conflictBehavior": "replace"
}
2) First Child (Successful - Folder2 Created within Folder1)
https://graph.microsoft.com/v1.0/sites/<SITE_ID>/drive/root:/Folder1:/children
{
"name": "Folder2",
"folder": { },
"#microsoft.graph.conflictBehavior": "replace"
}
3) Second Child (Fails)
https://graph.microsoft.com/v1.0/sites/<SITE_ID>/drive/root:/Folder1:/Folder2:/children
{
"name": "Folder3",
"folder": { },
"#microsoft.graph.conflictBehavior": "replace"
}
Any feedback how to properly create the Folder3 folder would be much appreciated.
In the last example, the error occurs because an invalid URL format is specified for accessing Folder2 in the provided endpoint:
https://graph.microsoft.com/v1.0/sites/<SITE_ID>/drive/root:/Folder1:/Folder2:/children
The segment :/Folder1:/Folder2: is invalid path syntax for accessing Folder2.
To create folder under Folder2 sub folder, the Url format should be as follows:
https://graph.microsoft.com/v1.0/sites/<SITE_ID>/drive/root:/Folder1/Folder2:/children
Refer to Addressing resources in a drive on OneDrive for more details.
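For completeness, a sketch of the corrected request for step 3 (same request body as in the question; only the path syntax changes):
POST https://graph.microsoft.com/v1.0/sites/<SITE_ID>/drive/root:/Folder1/Folder2:/children
{
  "name": "Folder3",
  "folder": { },
  "#microsoft.graph.conflictBehavior": "replace"
}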
Try using the code below, where parentDriveId is the root drive ID and driveId is the ID of the folder under which we are trying to create the new folder.
var driveItem = new DriveItem
{
    Name = driveName,
    Folder = new Folder { },
    AdditionalData = new Dictionary<string, object>()
    {
        { "#microsoft.graph.conflictBehavior", "rename" }
    }
};

createdDriveId = (await graphclient.Drives[parentDriveId].Items[driveId].Children.Request().AddAsync(driveItem)).Id;

How to create multiple output paths in Webpack config

Does anyone know how to create multiple output paths in a webpack.config.js file? I'm using bootstrap-sass, which comes with a few different font files, etc. For webpack to process these I've included file-loader, which is working correctly; however, the files it outputs are being saved to the output path I specified for the rest of my files:
output: {
  path: __dirname + "/js",
  filename: "scripts.min.js"
}
I'd like to achieve something where I can maybe look at the extension types of whatever webpack is outputting, and for things ending in .woff, .eot, etc., have them diverted to a different output path. Is this possible?
I did a little googling and came across this issue on GitHub, where a couple of solutions are offered. Edit: it looks as if you need to know the entry point in order to specify an output using the hash method, e.g.:
var entryPointsPathPrefix = './src/javascripts/pages';
var WebpackConfig = {
  entry: {
    a: entryPointsPathPrefix + '/a.jsx',
    b: entryPointsPathPrefix + '/b.jsx',
    c: entryPointsPathPrefix + '/c.jsx',
    d: entryPointsPathPrefix + '/d.jsx'
  },
  // send to distribution
  output: {
    path: './dist/js',
    filename: '[name].js'
  }
}
https://github.com/webpack/webpack/issues/1189
However, in my case, as far as the font files are concerned, the input process is kind of abstracted away and all I know is the output. In the case of my other files undergoing transformations, there's a known point where I'm requiring them in, to then be handled by my loaders. If there was a way of finding out where this step happens, I could use the hash method to customize output paths, but I don't know where these files are being required in.
Webpack does support multiple output paths.
Set the output paths as the entry keys, and use [name] in the output filename template.
webpack config:
entry: {
  'module/a/index': 'module/a/index.js',
  'module/b/index': 'module/b/index.js',
},
output: {
  path: path.resolve(__dirname, 'dist'),
  filename: '[name].js'
}
generated:
└── module
    ├── a
    │   └── index.js
    └── b
        └── index.js
I'm not sure if we have the same problem, since webpack only supports one output path per configuration as of June 2016. I guess you have already seen the issue on GitHub.
But you can separate the output paths by using the multi-compiler (i.e. exporting an array of configuration objects from webpack.config.js):
var config = {
  // TODO: Add common Configuration
  module: {},
};

var fooConfig = Object.assign({}, config, {
  name: "a",
  entry: "./a/app",
  output: {
    path: "./a",
    filename: "bundle.js"
  },
});
var barConfig = Object.assign({}, config, {
  name: "b",
  entry: "./b/app",
  output: {
    path: "./b",
    filename: "bundle.js"
  },
});

// Return Array of Configurations
module.exports = [
  fooConfig, barConfig,
];
If you have common configuration among them, you could use the extend library, Object.assign in ES6, or the {...} object spread syntax.
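A minimal sketch of the spread variant (assuming a Node version that supports object spread; the entry/output values are just the placeholders from the example above):
const common = {
  // shared configuration (loaders, resolve, etc.) goes here
  module: {},
};

module.exports = [
  { ...common, name: "a", entry: "./a/app", output: { path: "./a", filename: "bundle.js" } },
  { ...common, name: "b", entry: "./b/app", output: { path: "./b", filename: "bundle.js" } },
];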
You can now (as of webpack v5.0.0) specify a unique output path for each entry using the new "descriptor" syntax (https://webpack.js.org/configuration/entry-context/#entry-descriptor):
module.exports = {
  entry: {
    home: { import: './home.js', filename: 'unique/path/1/[name][ext]' },
    about: { import: './about.js', filename: 'unique/path/2/[name][ext]' }
  }
};
If you can live with multiple output paths having the same level of depth and folder structure, there is a way to do this in webpack 2 (I have yet to test it with webpack 1.x).
Basically you don't follow the doc rules and you provide a path as part of the filename.
module.exports = {
  entry: {
    foo: 'foo.js',
    bar: 'bar.js'
  },
  output: {
    path: path.join(__dirname, 'components'),
    // Hacky way to force webpack to have multiple output folders vs multiple files per one path
    filename: '[name]/dist/[name].bundle.js'
  }
};
That will take this folder structure
/-
  foo.js
  bar.js
And turn it into
/-
  foo.js
  bar.js
  components/foo/dist/foo.js
  components/bar/dist/bar.js
Please don't use any workaround because it will impact build performance.
Webpack File Manager Plugin
It's easy to install. Copy this line to the top of webpack.config.js:
const FileManagerPlugin = require('filemanager-webpack-plugin');
Install
npm install filemanager-webpack-plugin --save-dev
Add the plugin
module.exports = {
  plugins: [
    new FileManagerPlugin({
      onEnd: {
        copy: [
          {source: 'www', destination: './vinod test 1/'},
          {source: 'www', destination: './vinod testing 2/'},
          {source: 'www', destination: './vinod testing 3/'},
        ],
      },
    }),
  ],
};
If it's not obvious after all the answers: you can also output to completely different directories (for example, a directory outside your standard dist folder). You can do that by using your root as the path (because you only have one path) and by moving the full "directory part" of each path into the entry option (because you can have multiple entries):
entry: {
  'dist/main': './src/index.js',
  'docs/main': './src/index.js'
},
output: {
  filename: '[name].js',
  path: path.resolve(__dirname, './'),
}
This config results in the ./dist/main.js and ./docs/main.js being created.
In my case I had this scenario
const config = {
  entry: {
    moduleA: './modules/moduleA/index.js',
    moduleB: './modules/moduleB/index.js',
    moduleC: './modules/moduleB/v1/index.js',
    moduleC: './modules/moduleB/v2/index.js',
  },
}
And I solved it like this (webpack 4):
const config = {
  entry: {
    moduleA: './modules/moduleA/index.js',
    moduleB: './modules/moduleB/index.js',
    'moduleC/v1/moduleC': './modules/moduleB/v1/index.js',
    'moduleC/v2/moduleC': './modules/moduleB/v2/index.js',
  },
}
You definitely can return an array of configurations from your webpack.config file. But it's not an optimal solution if you just want a copy of the artifacts in your project's documentation folder, since it makes webpack build your code twice, doubling the overall build time.
In this case I'd recommend using the FileManagerWebpackPlugin plugin instead:
const FileManagerPlugin = require('filemanager-webpack-plugin');
// ...
plugins: [
  // ...
  new FileManagerPlugin({
    onEnd: {
      copy: [{
        source: './dist/*.*',
        destination: './public/',
      }],
    },
  }),
],
You can only have one output path.
From the docs (https://github.com/webpack/docs/wiki/configuration#output):
Options affecting the output of the compilation. output options tell Webpack how to write the compiled files to disk. Note, that while there can be multiple entry points, only one output configuration is specified.
If you use any hashing ([hash] or [chunkhash]) make sure to have a consistent ordering of modules. Use the OccurenceOrderPlugin or recordsPath.
I wrote a plugin that can hopefully do what you want, you can specify known or unknown entry points (using glob) and specify exact outputs or dynamically generate them using the entry file path and name. https://www.npmjs.com/package/webpack-entry-plus
I actually wound up just going into index.js in the file-loader module and changing where the contents are emitted to. This is probably not the optimal solution, but until there's some other way, it's fine, since I know exactly what's being handled by this loader, which is just fonts.
//index.js
var loaderUtils = require("loader-utils");

module.exports = function(content) {
  this.cacheable && this.cacheable();
  if(!this.emitFile) throw new Error("emitFile is required from module system");
  var query = loaderUtils.parseQuery(this.query);
  var url = loaderUtils.interpolateName(this, query.name || "[hash].[ext]", {
    context: query.context || this.options.context,
    content: content,
    regExp: query.regExp
  });
  this.emitFile("fonts/" + url, content); // changed path to emit contents to "fonts" folder rather than project root
  return "module.exports = __webpack_public_path__ + " + JSON.stringify(url) + ";";
}
module.exports.raw = true;
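A less invasive alternative (just a sketch, assuming a file-loader version that supports the name query parameter) is to prefix a folder in the loader's name template instead of patching the module:
module: {
  loaders: [
    {
      test: /\.(woff|woff2|eot|ttf|svg)$/,
      loader: 'file-loader?name=fonts/[name].[ext]'
    }
  ]
}
This keeps the emitted font files under a fonts/ subfolder of the output directory, while the rest of the bundle still goes to the configured output.path.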
You can do it like this:
var config = {
  // TODO: Add common Configuration
  module: {},
};

var x = Object.assign({}, config, {
  name: "x",
  entry: "./public/x/js/x.js",
  output: {
    path: __dirname + "/public/x/jsbuild",
    filename: "xbundle.js"
  },
});
var y = Object.assign({}, config, {
  name: "y",
  entry: "./public/y/js/FBRscript.js",
  output: {
    path: __dirname + "/public/fbr/jsbuild",
    filename: "ybundle.js"
  },
});

// Export the whole list so webpack runs a multi-compiler build
module.exports = [x, y];
The problem is already in the language:
entry (an object (key/value) used to define the inputs)
output (an object (key/value) used to define the outputs)
The idea of differentiating the output based on a limited placeholder like '[name]' imposes limitations.
I like the core functionality of webpack, but the usage requires a rewrite with abstract definitions which are based on logic and simplicity... the hardest thing in software development... logic and simplicity.
All this could be solved by just providing a list of input/output definitions... A LIST OF INPUT/OUTPUT DEFINITIONS.
Vinod Kumar's good workaround is:
module.exports = {
  plugins: [
    new FileManagerPlugin({
      events: {
        onEnd: {
          copy: [
            {source: 'www', destination: './vinod test 1/'},
            {source: 'www', destination: './vinod testing 2/'},
            {source: 'www', destination: './vinod testing 3/'},
          ],
        },
      }
    }),
  ],
};

How can one split API documentation into multiple files using Swagger 2.0

According to the Swagger 2.0 spec, it might be possible to do this. I am referencing a PathObject using $ref, which points to another file. We used to be able to do this nicely using Swagger 1.2, but Swagger-UI does not seem to be able to read the referenced PathObject in another file.
Is this part of the spec too new and not yet supported? Is there a way to split each path's documentation into another file?
{
  "swagger": "2.0",
  "basePath": "/rest/json",
  "schemes": [
    "http",
    "https"
  ],
  "info": {
    "title": "REST APIs",
    "description": "desc",
    "version": "1.0"
  },
  "paths": {
    "/time": {
      "$ref": "anotherfile.json"
    }
  }
}
To support multiple files, your libraries have to support dereferencing the $ref field. But I would not recommend delivering the swagger file with unresolved references. Our swagger definition has around 30-40 files, and delivering them via HTTP/1.1 could slow down any reading application.
Since we are building JavaScript libs too, we already had a Node.js-based build system using gulp. For the node package manager (npm) you can find some libraries that support dereferencing, which we use to build one big swagger file.
Our base file looks like this (shortened):
swagger: '2.0'
info:
  version: 2.0.0
  title: App
  description: Example
basePath: /api/2
paths:
  $ref: "routes.json"
definitions:
  example:
    $ref: "schema/example.json"
The routes.json is generated from our routing file. For this we use a gulp task implementing swagger-jsdoc, like this:
var gulp = require('gulp');
var fs = require('fs');
var gutil = require('gulp-util');
var swaggerJSDoc = require('swagger-jsdoc');

gulp.task('routes-swagger', [], function (done) {
  var options = {
    swaggerDefinition: {
      info: {
        title: 'Routes only, do not use, only for reference',
        version: '1.0.0',
      },
    },
    apis: ['./routing.php'], // Path to the API docs
  };
  var swaggerSpec = swaggerJSDoc(options);
  fs.writeFile('public/doc/routes.json', JSON.stringify(swaggerSpec.paths, null, "\t"), function (error) {
    if (error) {
      gutil.log(gutil.colors.red(error));
    } else {
      gutil.log(gutil.colors.green("Succesfully generated routes include."));
      done();
    }
  });
});
And for generating the swagger file, we use a build task implementing SwaggerParser like this:
var gulp = require('gulp');
var bootprint = require('bootprint');
var bootprintSwagger = require('bootprint-swagger');
var SwaggerParser = require('swagger-parser');
var gutil = require('gulp-util');
var fs = require('fs');

gulp.task('swagger', ['routes-swagger'], function () {
  SwaggerParser.bundle('public/doc/swagger.yaml', {
    "cache": {
      "fs": false
    }
  })
  .then(function(api) {
    fs.writeFile('public/doc/swagger.json', JSON.stringify(api, null, "\t"), function (error) {
      if (error) {
        gutil.log(gutil.colors.red(error));
      } else {
        gutil.log("Bundled API %s, Version: %s", gutil.colors.magenta(api.info.title), api.info.version);
      }
    });
  })
  .catch(function(err) {
    gutil.log(gutil.colors.red.bold(err));
  });
});
With this implementation we can maintain a rather large swagger specification, and we are not restricted to a specific programming language or framework implementation, since we define the paths in comments next to the real routing definitions. (Note: the gulp tasks are split into multiple files too.)
While it would theoretically be possible to do that in the future, the solution is still not fully baked into the supporting tools so for now I'd highly recommend keeping it in one file.
If you're looking for a way to manage and navigate the Swagger definition, I'd recommend using the YAML format of the spec, where you can add comments and that may ease up navigation and splitting of a large definition.
You can also use the JSON Refs library to resolve such a multi-file Swagger spec.
I've written about it in this blog post.
There is also this GitHub repo demonstrating how all of this works.
My solution to this problem is to use the package below to resolve the references:
https://www.npmjs.com/package/json-schema-ref-parser
Here is the code snippet for generating the Swagger UI using that library. I was using Express.js for my Node server.
import express from 'express';
import * as path from 'path';
import refParser from '@apidevtools/json-schema-ref-parser';
import swaggerUi from 'swagger-ui-express';

const port = 3100;
const app = express();

app.get('/', async (req, res) => {
  res.redirect('/api-docs')
});

app.use(
  '/api-docs',
  async function (req: express.Request, res: express.Response, next: express.NextFunction) {
    const schemaFilePath = path.join(__dirname, 'schema', 'openapi.yml');
    try {
      // Resolve $ref in schema
      const swaggerDocument = await refParser.dereference(schemaFilePath);
      (req as any).swaggerDoc = swaggerDocument;
      next();
    } catch (err) {
      console.error(err);
      next(err);
    }
  },
  swaggerUi.serve,
  swaggerUi.setup()
);

app.listen(port, () => console.log(`Local web server listening on port ${port}!`));
Take a look at my Github repository to see how it works

How to make elasticsearch add the timestamp field to every document in all indices?

Elasticsearch experts,
I have been unable to find a simple way to just tell ElasticSearch to insert the _timestamp field for all the documents that are added in all the indices (and all document types).
I see an example for specific types:
http://www.elasticsearch.org/guide/reference/mapping/timestamp-field/
and also see an example for all indices for a specific type (using _all):
http://www.elasticsearch.org/guide/reference/api/admin-indices-put-mapping/
but I am unable to find any documentation on adding it by default for all documents that get added irrespective of the index and type.
Elasticsearch used to support automatically adding timestamps to documents being indexed, but deprecated this feature in 2.0.0
From the version 5.5 documentation:
The _timestamp and _ttl fields were deprecated and are now removed. As a replacement for _timestamp, you should populate a regular date field with the current timestamp on application side.
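For example (a minimal sketch, with an assumed index name), the application itself would include the timestamp in each document it indexes:
PUT my-index/_doc/1
{
  "message": "some document",
  "created_at": "2021-04-01T12:00:00Z"
}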
You can do this by providing it when creating your index.
$ curl -XPOST localhost:9200/test -d '{
  "settings" : {
    "number_of_shards" : 1
  },
  "mappings" : {
    "_default_" : {
      "_timestamp" : {
        "enabled" : true,
        "store" : true
      }
    }
  }
}'
That will then automatically create a _timestamp for everything that you put into the index.
Then, after indexing something, the _timestamp field will be returned when you request it.
Adding another way to get an indexing timestamp; hope this may help someone.
An ingest pipeline can be used to add a timestamp when a document is indexed. Here is a sample example:
PUT _ingest/pipeline/indexed_at
{
  "description": "Adds indexed_at timestamp to documents",
  "processors": [
    {
      "set": {
        "field": "_source.indexed_at",
        "value": "{{_ingest.timestamp}}"
      }
    }
  ]
}
Earlier, Elasticsearch used named pipelines, which meant the 'pipeline' parameter had to be specified on the Elasticsearch endpoint used to write/index documents (Ref: link). This was a bit troublesome, as it required changes to the endpoints on the application side.
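For example (a sketch, with an assumed index name), each index request would have to name the pipeline explicitly via the pipeline query parameter:
PUT my-index/_doc/1?pipeline=indexed_at
{
  "message": "some document"
}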
With Elasticsearch version >= 6.5, you can now specify a default pipeline for an index using the index.default_pipeline setting. (Refer to the link for details.)
Here is how to set the default pipeline:
PUT ms-test/_settings
{
  "index.default_pipeline": "indexed_at"
}
I haven't tried it out yet, as I haven't upgraded to ES 6.5, but the above command should work.
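To verify (a sketch, reusing the ms-test index from above), index a document without naming the pipeline and fetch it back; the returned _source should then contain the indexed_at field added by the pipeline:
PUT ms-test/_doc/1
{
  "message": "hello"
}

GET ms-test/_doc/1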
You can make use of default index pipelines, leverage the script processor, and thus emulate the auto_now_add functionality you may know from Django and DEFAULT GETDATE() from SQL.
The process of adding a default yyyy-MM-dd HH:mm:ss date goes like this:
1. Create the pipeline and specify which indices it'll be allowed to run on:
PUT _ingest/pipeline/auto_now_add
{
  "description": "Assigns the current date if not yet present and if the index name is whitelisted",
  "processors": [
    {
      "script": {
        "source": """
          // skip if not whitelisted
          if (![ "myindex",
                 "logs-index",
                 "..."
               ].contains(ctx['_index'])) { return; }

          // don't overwrite if present
          if (ctx['created_at'] != null) { return; }

          ctx['created_at'] = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").format(new Date());
        """
      }
    }
  ]
}
Side note: the ingest processor's Painless script context is documented here.
2. Update the default_pipeline setting in all of your indices:
PUT _all/_settings
{
  "index": {
    "default_pipeline": "auto_now_add"
  }
}
Side note: you can restrict the target indices using the multi-target syntax:
PUT myindex,logs-2021-*/_settings?allow_no_indices=true
{
  "index": {
    "default_pipeline": "auto_now_add"
  }
}
3. Ingest a document to one of the configured indices:
PUT myindex/_doc/1
{
  "abc": "def"
}
4. Verify that the date string has been added:
GET myindex/_search
An example for ElasticSearch 6.6.2 in Python 3:
from elasticsearch import Elasticsearch

es = Elasticsearch(hosts=["localhost"])

timestamp_pipeline_setting = {
    "description": "insert timestamp field for all documents",
    "processors": [
        {
            "set": {
                "field": "ingest_timestamp",
                "value": "{{_ingest.timestamp}}"
            }
        }
    ]
}
es.ingest.put_pipeline("timestamp_pipeline", timestamp_pipeline_setting)

conf = {
    "settings": {
        "number_of_shards": 2,
        "number_of_replicas": 1,
        "default_pipeline": "timestamp_pipeline"
    },
    "mappings": {
        "articles": {
            "dynamic": "false",
            "_source": {"enabled": "true"},
            "properties": {
                "title": {
                    "type": "text",
                },
                "content": {
                    "type": "text",
                },
            }
        }
    }
}
response = es.indices.create(
    index="articles_index",
    body=conf,
    ignore=400  # ignore 400 already exists code
)
print('\nresponse:', response)

doc = {
    'title': 'automatically adding a timestamp to documents',
    'content': 'prior to version 5 of Elasticsearch, documents had a metadata field called _timestamp. When enabled, this _timestamp was automatically added to every document. It would tell you the exact time a document had been indexed.',
}
res = es.index(index="articles_index", doc_type="articles", id=100001, body=doc)
print(res)

res = es.get(index="articles_index", doc_type="articles", id=100001)
print(res)
As for ES 7.x, the example should work after removing the doc_type-related parameters, since mapping types are no longer supported.
First create the index and its properties, such as fields and datatypes, and then insert the data using the REST API.
Below is the way to create an index with the field properties. Execute the following in the Kibana console:
PUT /vfq-jenkins
{
  "mappings": {
    "properties": {
      "BUILD_NUMBER": { "type" : "double" },
      "BUILD_ID" : { "type" : "double" },
      "JOB_NAME" : { "type" : "text" },
      "JOB_STATUS" : { "type" : "keyword" },
      "time" : { "type" : "date" }
    }
  }
}
The next step is to insert the data into that index:
curl -u elastic:changeme -X POST 'http://elasticsearch:9200/vfq-jenkins/_doc/?pretty' \
  -H 'Content-Type: application/json' -d '{
    "BUILD_NUMBER": "83", "BUILD_ID": "83", "JOB_NAME": "OMS_LOG_ANA", "JOB_STATUS": "SUCCESS",
    "time": "2019-09-08T12:39:00" }'
