Drupal page call first hook - path

When a page is called, I want to check the path and, depending on it, redirect the user to the front page with some parameters. I will use these parameters in a block to show extra information to the visitor.
Which hook should I use so that Drupal has to do the least unnecessary work?

1) The template_preprocess_page function is the appropriate hook here.
2) An alternative option is to use the Rules module.
Event: Drupal is initializing (using hook_init)
Condition: Execute custom PHP code (check the path argument)
Actions: Page redirect, other Rules actions (e.g. a message)
I would suggest showing a Drupal message to the user instead of a block, unless the user is logged in and the parameters shown in the block exist in the database, in which case you can use the Views module to create that block.
Here is an export of a rule that redirects if the taxonomy term page being displayed belongs to vocabulary '4'. Import it into your rules to see the results.
{ "rules_taxonomy_redirect_business" : {
"LABEL" : "Taxonomy redirect - Business",
"PLUGIN" : "reaction rule",
"TAGS" : [ "redirect", "taxonomy" ],
"REQUIRES" : [ "php", "rules" ],
"ON" : [ "init" ],
"IF" : [
{ "php_eval" : { "code" : "$check1 = (arg(0)==\u0027taxonomy\u0027)\u0026\u0026(arg(1)==\u0027term\u0027);\r\n$check2 = (arg(3)!=\u0027edit\u0027);\r\n\r\nif (arg(2)) {\r\n$tid = arg(2);\r\n$vid = db_query(\u0027SELECT vid FROM {taxonomy_term_data} WHERE tid = :tid\u0027, array(\u0027:tid\u0027 =\u003E $tid))-\u003EfetchField();\r\n$check3 = ($vid == \u00274\u0027);\r\n}\r\n\r\nreturn ($check1)\u0026\u0026($check2)\u0026\u0026($check3);" } }
],
"DO" : [
{ "redirect" : { "url" : "\u003C?php\r\n$tid = arg(2);\r\nreturn \u0027business?cat%5B%5D=\u0027 . $tid;\r\n?\u003E" } }
]
}
}
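If you prefer a tiny custom module over Rules, the same checks can go straight into hook_init. A Drupal 7 sketch only (mymodule is a placeholder; the vocabulary id and redirect path are taken from the rule above):
function mymodule_init() {
  // Only act on taxonomy term pages that are not being edited.
  if (arg(0) == 'taxonomy' && arg(1) == 'term' && arg(2) && arg(3) != 'edit') {
    $tid = arg(2);
    $vid = db_query('SELECT vid FROM {taxonomy_term_data} WHERE tid = :tid',
      array(':tid' => $tid))->fetchField();
    if ($vid == '4') {
      // Send the visitor to the 'business' page with the term id as a query parameter.
      drupal_goto('business', array('query' => array('cat' => array($tid))));
    }
  }
}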


Show description or comments for variables in FastAPI autodocs (Swagger UI)

I'm making a function, and classes for it, with the POST method.
Since I use FastAPI, it automatically generates API docs (using the OpenAPI specification and Swagger UI), where I can see the function's description and example data.
My classes and function are like below:
from fastapi import FastAPI
from pydantic import BaseModel, Field
from typing import List

app = FastAPI()

class User(BaseModel):
    name: str
    state: List[str]

    class Config:
        schema_extra = {
            "example": {
                "name": "Mike",
                "state": ["Texas", "Arizona"]
            }
        }

class Item(BaseModel):
    _id: int = Field(..., example=3, description="item id")

@app.post("/user/item")
def func1(args1: User, args2: Item):
    ...
Through schema_extra and the example attribute of Field, I can see the example values in the request body of the function's description.
It shows up like this:
{
  "args1": {
    "name": "Mike",
    "state": ["Texas", "Arizona"]  # state user visits. <-- I'd like to add this here or somewhere else.
  },
  "args2": {
    "_id": 3  # <-- Here I can't add the description 'item id'
  }
}
However, I'd like to add a description or comment to the example values, like # state user visits above.
I've tried the description attribute of pydantic's Field, but I think it only shows up for parameters of GET methods.
Is there any way to do this? Any help will be appreciated.
You are trying to pass "comments" inside the actual JSON payload that will be sent to the server, so that approach won't work. The way to add descriptions to the fields is shown below. Users/you can see the descriptions/comments, as well as the examples provided, by expanding the corresponding JSON schema of a Pydantic model (e.g., "User") under "Schemas" (at the bottom of the page) when visiting the OpenAPI docs at http://127.0.0.1:8000/docs, for instance, or by clicking on "Schema", next to "Example Value", above the example given in the "Request Body".
class User(BaseModel):
    name: str = Field(..., description="Add user name")
    state: List[str] = Field(..., description="State user visits")

    class Config:
        schema_extra = {
            "example": {
                "name": "Mike",
                "state": ["Texas", "Arizona"]
            }
        }
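To try this out locally, a minimal sketch (assuming the models and app live in a file named main.py, which is a placeholder, and that uvicorn is installed):
uvicorn main:app --reload
Then open http://127.0.0.1:8000/docs and expand the "User" entry under "Schemas" to see the field descriptions.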
Alternatively, you could use the Body field in your endpoint, which allows you to add a description that is shown under the example in the "Request body". As per the documentation:
But when you use example or examples with any of the other utilities (Query(), Body(), etc.) those examples are not added to the JSON Schema that describes that data (not even to OpenAPI's own version of JSON Schema), they are added directly to the path operation declaration in OpenAPI (outside the parts of OpenAPI that use JSON Schema).
You could add multiple examples (with their associated descriptions), as described in the documentation. Example below:
from fastapi import Body

@app.post("/user/item")
async def update_item(
    user: User = Body(
        ...,
        examples={
            "normal": {
                "summary": "A normal example",
                "description": "**name**: Add user name. **state**: State user visits.",
                "value": {
                    "name": "Mike",
                    "state": ["Texas", "Arizona"]
                },
            }
        }
    ),
):
    return {"user": user}

Google Actions Account Linking: doesn't work in testing

I have to develop a Google Action with a mandatory account linking phase, which I have configured with an OAuth2 server. I'm using the online console at https://console.actions.google.com/ to develop the action.
I have set up the Start scene, where the condition is user.validationStatus != "VERIFIED". Based on the result of the condition, I will go to two different scenes.
Here is the screen of the Start scene where the account linking status is checked.
Here is the Start_AccountLinking scene.
But when I go to the "Test" section of the console and open the action with the invocation, it doesn't pass either of the conditions and stays in the Start scene. In the log on the right, I can see that both conditions failed.
{
  "conditionsEvaluated": {
    "failedConditions": [
      {
        "expression": "user.validationStatus != \"VERIFIED\"",
        "nextSceneId": "Start_AccountLinking"
      },
      {
        "expression": "user.validationStauts == \"VERIFIED\"",
        "nextSceneId": "AuthenticatedScene"
      }
    ]
  },
  "responses": [
    {
      "firstSimple": {
        "speech": "Benvenuto in Semiperdo",
        "text": "Benvenuto in Semiperdo"
      }
    }
  ]
}
Instead of user.validationStatus, use user.accountLinkingStatus. It will work!
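For example, the two scene conditions become something like this (a sketch; check the exact status values, such as "LINKED", against the Actions Builder documentation):
user.accountLinkingStatus != "LINKED"   // go to Start_AccountLinking
user.accountLinkingStatus == "LINKED"   // go to AuthenticatedScene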

Zapier - add data to JSON response (App development)

We are creating a Zapier app to expose our APIs to the public, so anyone can use them. The main endpoint that people are using returns a very large and complex JSON object. Zapier, it seems, has a really difficult time parsing nested complex JSON, but it does wonderfully with a very simple response object such as
{ "field": "value" }
The data being returned has the structure below, and we want to move some of the fields to the root of the response so it's easily parsed by Zapier.
"networkSections": [
{
"identifier": "Deductible",
"label": "Deductible",
"inNetworkParameters": [
{
"key": "Annual",
"value": " 600.00",
"message": null,
"otherInfo": null
},
{
"key": "Remaining",
"value": " 600.00",
"message": null,
"otherInfo": null
}
],
"outNetworkParameters": null
},
So, can we do something to return, for example, the remaining deductible?
I got this far (adding outputFields), but it returns an array of values, and I'm not sure how to parse through that array either in the Zap or in the app.
{key: 'networkSections[]inNetworkParameters[]key', label: 'xNetworkSectionsKey', type: 'string'},
i.e., this returns an array of "Annual", "Remaining", etc.
Great question. In this case, there's a lot going on, and outputFields can't quite handle it all. :(
In your example, inNetworkParameters contains an array of objects. Throughout our documentation, we refer to these as line items. These line items can be passed to other actions, but the different expected structures present a bit of a problem. The way we've handled this is by letting users map line items from one step's output to another step's input per field. So if step 1 returns
{
  "some_array": [
    {
      "some_key": "some_value"
    }
  ]
}
and the next step needs to send
{
  "data": [
    {
      "some_other_key": "some_value"
    }
  ]
}
users can accomplish that by mapping some_array.some_key to data.some_other_key.
All of that being said, if you want to always return a Remaining Deductible object, you'll have to do it by modifying the result object itself. As long as this data always arrives in the same order, you can do something akin to
var data = z.JSON.parse(bundle.response.content);
data["Remaining Deductible"] = data.networkSections[0].inNetworkParameters[1].value;
return data;
If the order differs, you'll have to implement some sort of search to find the objects you'd like to return.
I hope that all helps!
Caleb got me where I wanted to go. For completeness, this is the solution.
In the creates directory I have a JS file for the actual call. The perform part is below.
perform: (z, bundle) => {
  const promise = z.request({
    url: 'https://api.example.com/API/Example/' + bundle.inputData.elgRequestID,
    method: 'GET',
    headers: {
      'content-type': 'application/json',
    }
  });
  return promise.then(function(result) {
    var data = JSON.parse(result.content);
    for (var i = 0; i < data.networkSections.length; i++) {
      for (var j = 0; j < data.networkSections[i].inNetworkParameters.length; j++) {
        // Copy the annual deductible up to the root so Zapier can map it easily.
        if (data.networkSections[i].identifier == "Deductible" &&
            data.networkSections[i].inNetworkParameters[j].key == "Annual") {
          data["zAnnual Deductible"] = data.networkSections[i].inNetworkParameters[j].value;
        }
      } // inner for
    } // outer for
    return data;
  });
}

Return data & reference from falcor router

I've got a route that returns details about the features on a user's account:
// games[{keys:games}].features[{integers:indices}]
{
  $type : "atom",
  value : {
    id: "6",
    count: "1",
    ...
  }
}
There's also a route that returns generic details about specific features:
// features[{integers:features}]
{
  $type : "atom",
  value : {
    name : "fooga",
    max : 10,
    ...
  }
}
I don't want to merge the generic feature data into the user-specific data, because that would duplicate a lot of data, but I also want to be able to get it all in a single request.
What's a smart way to structure my routes/returned data so that games[{keys:games}].features[{integers:indices}] can return a useful reference to features[{integers:features}]?
I tried splitting them up like this:
// games[{keys:games}].features[{integers:indices}].details
{
  $type : "atom",
  value : {
    id: "6",
    count: "1",
    ...
  }
}
// games[{keys:games}].features[{integers:indices}].meta
{
  $type : "ref",
  value : [
    "features",
    "15"
  ]
}
but I couldn't figure out a way to resolve the .meta reference without writing redundant-seeming paths like ...features.0.meta.[name,max,...]. Ideally the ref would just return an atom, because it's a small amount of data.
I ended up structuring it like this:
games[{keys:games}].features[{integers:indices}].details
games[{keys:games}].features[{integers:indices}].feature
features[{keys:games}][{integers:features}].details
Ugly paths, but ¯\_(ツ)_/¯
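For completeness, a rough falcor-router sketch of the ref route (featureIdFor is a hypothetical lookup into the user's account data; the value shapes follow the examples above):
const Router = require('falcor-router');

const router = new Router([
  {
    // Resolve each user feature slot to a ref pointing at the generic feature.
    route: 'games[{keys:games}].features[{integers:indices}].feature',
    get(pathSet) {
      const results = [];
      pathSet.games.forEach(function(game) {
        pathSet.indices.forEach(function(index) {
          const featureId = featureIdFor(game, index); // hypothetical lookup
          results.push({
            path: ['games', game, 'features', index, 'feature'],
            value: { $type: 'ref', value: ['features', featureId] }
          });
        });
      });
      return results;
    }
  }
]);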

How to make elasticsearch add the timestamp field to every document in all indices?

Elasticsearch experts,
I have been unable to find a simple way to just tell Elasticsearch to insert the _timestamp field for all the documents that are added in all the indices (and all document types).
I see an example for specific types:
http://www.elasticsearch.org/guide/reference/mapping/timestamp-field/
and also an example for all indices for a specific type (using _all):
http://www.elasticsearch.org/guide/reference/api/admin-indices-put-mapping/
but I am unable to find any documentation on adding it by default for all documents that get added, irrespective of the index and type.
Elasticsearch used to support automatically adding timestamps to documents being indexed, but deprecated this feature in 2.0.0
From the version 5.5 documentation:
The _timestamp and _ttl fields were deprecated and are now removed. As a replacement for _timestamp, you should populate a regular date field with the current timestamp on application side.
You can do this by providing it when creating your index.
$ curl -XPOST localhost:9200/test -d '{
  "settings" : {
    "number_of_shards" : 1
  },
  "mappings" : {
    "_default_" : {
      "_timestamp" : {
        "enabled" : true,
        "store" : true
      }
    }
  }
}'
That will then automatically create a _timestamp for everything that you put in the index.
Then, after indexing something, you can request the _timestamp field and it will be returned.
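For example, a retrieval sketch against the old pre-2.0 API that this answer targets, asking for the stored _timestamp alongside the source:
curl -XGET 'localhost:9200/test/_search?pretty' -d '{
  "query": { "match_all": {} },
  "fields": [ "_timestamp", "_source" ]
}'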
Adding another way to get an indexing timestamp; hope this helps someone.
An ingest pipeline can be used to add a timestamp when a document is indexed. Here is a sample:
PUT _ingest/pipeline/indexed_at
{
  "description": "Adds indexed_at timestamp to documents",
  "processors": [
    {
      "set": {
        "field": "_source.indexed_at",
        "value": "{{_ingest.timestamp}}"
      }
    }
  ]
}
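You can sanity-check the pipeline with the simulate API before using it (the sample document here is made up):
POST _ingest/pipeline/indexed_at/_simulate
{
  "docs": [
    { "_source": { "some_field": "some_value" } }
  ]
}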
Earlier, Elasticsearch used named pipelines: the 'pipeline' parameter had to be specified on the Elasticsearch endpoint used to write/index documents. (Ref: link) This was a bit troublesome, as you would need to make changes to the endpoints on the application side.
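For illustration, writing a document through the pipeline explicitly looks like this (my-index and the document body are placeholders):
PUT my-index/_doc/1?pipeline=indexed_at
{
  "some_field": "some_value"
}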
With Elasticsearch version >= 6.5, you can now specify a default pipeline for an index using the index.default_pipeline setting. (Refer to the link for details.)
Here is the command to set the default pipeline:
PUT ms-test/_settings
{
  "index.default_pipeline": "indexed_at"
}
I haven't tried it out yet, as I haven't upgraded to ES 6.5, but the above command should work.
You can make use of default index pipelines, leverage the script processor, and thus emulate the auto_now_add functionality you may know from Django and DEFAULT GETDATE() from SQL.
The process of adding a default yyyy-MM-dd HH:mm:ss date goes like this:
1. Create the pipeline and specify which indices it'll be allowed to run on:
PUT _ingest/pipeline/auto_now_add
{
  "description": "Assigns the current date if not yet present and if the index name is whitelisted",
  "processors": [
    {
      "script": {
        "source": """
          // skip if not whitelisted
          if (![ "myindex",
                 "logs-index",
                 "..."
               ].contains(ctx['_index'])) { return; }

          // don't overwrite if present
          if (ctx['created_at'] != null) { return; }

          ctx['created_at'] = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").format(new Date());
        """
      }
    }
  ]
}
Side note: the ingest processor's Painless script context is documented here.
2. Update the default_pipeline setting in all of your indices:
PUT _all/_settings
{
  "index": {
    "default_pipeline": "auto_now_add"
  }
}
Side note: you can restrict the target indices using the multi-target syntax:
PUT myindex,logs-2021-*/_settings?allow_no_indices=true
{
  "index": {
    "default_pipeline": "auto_now_add"
  }
}
3. Ingest a document to one of the configured indices:
PUT myindex/_doc/1
{
  "abc": "def"
}
4. Verify that the date string has been added:
GET myindex/_search
An example for Elasticsearch 6.6.2 in Python 3:
from elasticsearch import Elasticsearch

es = Elasticsearch(hosts=["localhost"])

timestamp_pipeline_setting = {
    "description": "insert timestamp field for all documents",
    "processors": [
        {
            "set": {
                "field": "ingest_timestamp",
                "value": "{{_ingest.timestamp}}"
            }
        }
    ]
}
es.ingest.put_pipeline("timestamp_pipeline", timestamp_pipeline_setting)

conf = {
    "settings": {
        "number_of_shards": 2,
        "number_of_replicas": 1,
        "default_pipeline": "timestamp_pipeline"
    },
    "mappings": {
        "articles": {
            "dynamic": "false",
            "_source": {"enabled": "true"},
            "properties": {
                "title": {"type": "text"},
                "content": {"type": "text"}
            }
        }
    }
}

response = es.indices.create(
    index="articles_index",
    body=conf,
    ignore=400  # ignore 400 (index already exists) errors
)
print('\nresponse:', response)

doc = {
    'title': 'automatically adding a timestamp to documents',
    'content': 'prior to version 5 of Elasticsearch, documents had a metadata field called _timestamp. When enabled, this _timestamp was automatically added to every document. It would tell you the exact time a document had been indexed.',
}
res = es.index(index="articles_index", doc_type="articles", id=100001, body=doc)
print(res)

res = es.get(index="articles_index", doc_type="articles", id=100001)
print(res)
For ES 7.x, the example should work after removing the doc_type-related parameters, as mapping types are no longer supported.
First create the index and its field properties (names and datatypes), then insert the data using the REST API.
Below is the way to create an index with field properties; execute the following in the Kibana console:
PUT /vfq-jenkins
{
  "mappings": {
    "properties": {
      "BUILD_NUMBER": { "type": "double" },
      "BUILD_ID": { "type": "double" },
      "JOB_NAME": { "type": "text" },
      "JOB_STATUS": { "type": "keyword" },
      "time": { "type": "date" }
    }
  }
}
The next step is to insert the data into that index:
curl -u elastic:changeme -X POST 'http://elasticsearch:9200/vfq-jenkins/_doc/?pretty' \
  -H 'Content-Type: application/json' \
  -d '{
    "BUILD_NUMBER": "83",
    "BUILD_ID": "83",
    "JOB_NAME": "OMS_LOG_ANA",
    "JOB_STATUS": "SUCCESS",
    "time": "2019-09-08T12:39:00"
  }'
