I use Kapacitor's auto-load directory to deliver TICKscripts to all environments: https://docs.influxdata.com/kapacitor/v1.4/guides/load_directory/
One requirement: you need to set "dbrp", otherwise you get this error:
failed to create task: must specify dbrp
At the same time I want to debug/modify this alert (and see its log) in the Chronograf web interface (http://****:8888/sources/1/tickscript/),
but I cannot, because of this error message:
cannot specify dbrp in implicitly and explicitly
since Chronograf provides one more "select database" control.
Does anyone know whether it is possible to debug a pre-loaded TICKscript in the Chronograf UI?
In https://docs.influxdata.com/kapacitor/v1.5/tick/syntax/#declarations
the following paragraph is instructive:
A database declaration begins with the keyword dbrp and is followed by two strings separated by a period. The first string declares the default database, with which the script will be used. The second string declares its retention policy. Note that the database and retention policy can also be declared using the flag -dbrp when defining the task with the command kapacitor define on the command-line, so this statement is optional. ...
Since it is optional in the TICKscript, the database declaration can be set from the command line when you load the script, e.g.
kapacitor define load_1 -tick ~/tick/telegraf-autogen/load_1.tick -dbrp "telegraf"."autogen"
Defined this way, the dbrp is considered implicitly set, since it's not defined in the TICKscript. If you define it in the TICKscript, then it is explicitly set. This small detail unlocks the conundrum: define the dbrp in the load script and not in the TICKscript.
Coded this way, if you later save the TICKscript in the Chronograf TICKscript editor, you won't get this error, since the dbrp is not explicitly set in the TICKscript.
Yes, you have to track two pieces of code: the TICKscript and the command line you use to load it into Kapacitor. As a suggestion, adding a comment in the TICKscript helps reduce confusion about the intended dbrp. Also, grouping TICKscripts into subdirectories by dbrp (as shown above), along with the load script for that directory, keeps things clean.
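For illustration, such a per-dbrp load script could look like the sketch below; the directory layout and the dbrp are assumptions taken from the example path above, not a prescribed convention:
#!/bin/bash
# load.sh - define every TICKscript in this directory against telegraf.autogen.
# The path and dbrp are illustrative; adjust them per subdirectory.
for f in ~/tick/telegraf-autogen/*.tick; do
    name=$(basename "$f" .tick)
    kapacitor define "$name" -tick "$f" -dbrp "telegraf"."autogen"
done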
I'm creating a custom Fluent Bit image and I want a "generic" configuration file that can work in multiple cases, i.e. it should sometimes work with a forward input and sometimes with a tail input.
I thought about using environment variables so that I only have one input, but it seems variables can only be used on the value side, not in the key part (see the following code).
I set the corresponding environment variables in a docker-entrypoint file with the appropriate conditions:
export INPUT_PATH="/myLogPath"
export INPUT_PATH_TYPE="path"
export INPUT_NAME="tail"
[INPUT]
Name ${INPUT_NAME}
${INPUT_PATH_TYPE} ${INPUT_PATH}
This is the error message I got
[error] [config] tail: unknown configuration property '${INPUT_PATH_TYPE}'. The following properties are allowed: path, exclude_path, key, read_from_head, refresh_interval, watcher_interval, rotate_wait, docker_mode, docker_mode_flush, docker_mode_parser, path_key, ignore_older, buffer_chunk_size, buffer_max_size, skip_long_lines, exit_on_eof, parser, tag_regex, db, db.sync, db.locking, multiline, multiline_flush, parser_firstline, and parser_.
I'm looking for a way to make this dynamic: either a single file with dynamic configuration, or multiple files which can be included dynamically (@INCLUDE requires a static file path from what I've seen).
EDIT: the only option I see is to have multiple input files (one per use case) and pick one dynamically when starting Fluent Bit in the docker-entrypoint file.
I use a docker-entrypoint script and split the inputs and filters into different files; then, depending on the environment variables, the entrypoint creates a symbolic link to the corresponding file.
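A minimal sketch of that entrypoint, assuming illustrative file names and the default paths of the Fluent Bit image (adjust to your own layout):
#!/bin/sh
# docker-entrypoint.sh - link the input snippet for this environment to the
# path that the main fluent-bit.conf pulls in via an @INCLUDE of input.conf.
case "$INPUT_NAME" in
  forward) ln -sf /fluent-bit/etc/inputs/forward.conf /fluent-bit/etc/input.conf ;;
  *)       ln -sf /fluent-bit/etc/inputs/tail.conf    /fluent-bit/etc/input.conf ;;
esac
exec /fluent-bit/bin/fluent-bit -c /fluent-bit/etc/fluent-bit.conf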
I have an Apache Beam job running on Google Cloud Dataflow, and as part of its initialization it needs to run some basic sanity/availability checks on services, pub/sub subscriptions, GCS blobs, etc. It's a streaming pipeline intended to run ad infinitum that processes hundreds of thousands of pub/sub messages.
Currently it needs a whole heap of required, variable parameters: which Google Cloud project it needs to run in, which bucket and directory prefix it's going to be storing files in, which pub/sub subscriptions it needs to read from, and so on. It does some work with these parameters before pipeline.run is called - validation, string splitting, and the like. In its current form, in order to start a job we've been passing these parameters to a PipelineOptionsFactory and issuing a new compile every single time, but it seems like there should be a better way. I've set up the parameters to be ValueProvider objects, but because they're being called outside of pipeline.run, Maven complains at compile time that ValueProvider.get() is being called outside of a runtime context (which, yes, it is.)
I've tried using NestedValueProviders as in the Google "Creating Templates" document, but my IDE complains if I try to use NestedValueProvider.of to return a string as shown in the document. The only way I've been able to get NestedValueProviders to compile is as follows:
NestedValueProvider<String, String> pid = NestedValueProvider.of(
pipelineOptions.getDataflowProjectId(),
(SerializableFunction<String, String>) s -> s
);
(String pid = NestedValueProvider.of(...) results in the following error: "incompatible types: no instance(s) of type variable(s) T,X exist so that org.apache.beam.sdk.options.ValueProvider.NestedValueProvider conforms to java.lang.String")
I have the following in my pipelineOptions:
ValueProvider<String> getDataflowProjectId();
void setDataflowProjectId(ValueProvider<String> value);
Because of the volume of messages we're going to be processing, adding these checks at the front of the pipeline for every message that comes through isn't really practical; we'll hit daily account administrative limits on some of these calls pretty quickly.
Are templates the right approach for what I want to do? How do I go about actually productionizing this? Should (can?) I compile with maven into a jar, then just run the jar on a local dev/qa/prod box with my parameters and just not bother with ValueProviders at all? Or is it possible to provide a default to a ValueProvider and override it as part of the options passed to the template?
Any advice on how to proceed would be most appreciated. Thanks!
The way templates are currently implemented, there is no point at which you can perform initialization/validation that is "post-template-creation" but "pre-pipeline-start".
All of the existing validation executes during template creation. If the validation detects that the values aren't available (because they are ValueProviders), the validation is skipped.
In some cases it is possible to approximate validation by adding runtime checks, either as part of the initial splitting of a custom source or as part of the @Setup method of a DoFn. In the latter case, the @Setup method will run once for each instance of the DoFn that is created. If the pipeline is batch, after 4 failures for a specific instance it will fail the pipeline.
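A rough sketch of the @Setup approach (the option, element type, and the exact check are illustrative, not the asker's code):
import org.apache.beam.sdk.options.ValueProvider;
import org.apache.beam.sdk.transforms.DoFn;

// Runs a sanity check once per DoFn instance, at execution time, where
// calling ValueProvider.get() is legal.
class ValidatingFn extends DoFn<String, String> {
  private final ValueProvider<String> subscription;

  ValidatingFn(ValueProvider<String> subscription) {
    this.subscription = subscription;
  }

  @Setup
  public void setup() {
    String sub = subscription.get();
    if (sub == null || sub.isEmpty()) {
      // Replace with a real availability check (e.g. against the Pub/Sub API).
      throw new IllegalStateException("Pub/Sub subscription is not set");
    }
  }

  @ProcessElement
  public void processElement(ProcessContext c) {
    c.output(c.element());
  }
}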
Another option for productionizing pipelines is to build the JAR that runs the pipeline, and have a production process that runs that JAR to initiate the pipeline.
Regarding the compile error you received -- the NestedValueProvider returns a ValueProvider -- it isn't possible to get a String out of that. You could, however, put the validation code into the SerializableFunction that is run within the NestedValueProvider.
Although I believe this will currently re-run the validation every time the value is accessed, it wouldn't be unreasonable to have the NestedValueProvider cache the translated value.
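For example, reusing the getDataflowProjectId() option from the question (the specific check is only an illustration):
ValueProvider<String> projectId = NestedValueProvider.of(
    pipelineOptions.getDataflowProjectId(),
    (SerializableFunction<String, String>) s -> {
      // This runs when the value is first accessed at execution time,
      // so failing here fails fast on a bad or missing parameter.
      if (s == null || s.isEmpty()) {
        throw new IllegalArgumentException("dataflowProjectId must be provided");
      }
      return s;
    });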
I've created a Task that I converted to a Task Group (and why TFS won't allow you JUST to create a Task group is still beyond me, but I digress).
All the parameters in this task have default values. However, the one I really care about is the third one (highlighted).
My understanding was that I could leave that blank when I consume the task in a build definition. However, this is what I get when I leave it blank:
In addition, I'm unable to save this build definition until I've entered a value. It's not a show-stopper by any means, and it's really easy to enter the same value again. I'm just perplexed as to why it's doing this. Have I missed a new definition of the word Default?
Check the first item about how the task group is created:
Ensure that all of the tasks you want to include in a task group have their parameters defined as variables, such as $(MyVariable), where you want to be able to configure these parameters when you use the task group. Variables used in the tasks are automatically extracted and converted into parameters for the task group. Values of these configuration variables will be converted into default values for the task group.
If you specify a value (instead of a variable) for a parameter, that value becomes a fixed parameter value and cannot be exposed as a parameter to the task group. Parameters of the encapsulated tasks for which you specified a value (instead of a variable), or you didn't provide a value for, are not configurable in the task group when added to a build or release definition.
In Informatica mapping design there must be a target table, but in my design I only use Informatica to call stored procedures, and once they have been called all the work is done, so I don't need a target table to be inserted or updated.
I used a non-existent table as the target table, and one dummy field as the input port (because there must be at least one input port!), then unchecked all the options (insert, update, delete) in the session configuration so that Informatica would not generate DML SQL statements, avoiding "no table" errors.
But then Informatica treated the input row as a rejected row and tried to write it into a bad file. And because I unchecked the insert option, the session log showed an error that it couldn't be inserted into the bad file!
Strangely, this error never showed up in the monitor, and all sessions ran successfully! It only appeared in Informatica's metadata tables.
Is there a better way to avoid this problem, even though it has no effect on my results? Is it possible to use a non-existent table and do nothing with it (including not rejecting the input rows)?
Use a Filter transformation just before the target and set the filter condition to FALSE.
No rows will go to the target.
I ran into this same issue when I wanted to just execute a stored procedure and nothing else.
I solved this by creating a dummy source object that had one port and a dummy target with one port of the same datatype. In the source qualifier I added a SQL statement select 1 from dual (since it's Oracle).
I then added a filter object that was set to false. Then I connected the single port from the source/qualifier through the filter and finally to the target.
When the mapping is run, the source qualifier will return one row with one value; this will pass into the filter, but nothing will come out of the filter because the filter condition is false. This mapping will always be successful and valid because all ports are connected and nothing makes it to the "dummy" target, so there are no bad file logs, failures, etc.
Let me know if you need any clarification and I can update this answer.
No, you always need a target for the mapping to be valid. But I would rather work with a flat file target instead of a database table; you'll have much less work to do.
If you're on Linux / Unix, you can even route the file to /dev/null (use folder:/dev/, file:null) so the file is not actually written to the filesystem.
And using one dummy port is the right way. As you have said, you need at least one port, even if you don't really use it.
As odd as this may sound (on Unix systems): neither the source nor the target needs to exist.
Source (flat file): /dev/null, column DUMMY
Target (flat file): /dev/null, column DUMMY
And you don't need to use any databases for the session to succeed, nor use any filters. It runs.
I'm trying to follow the instructions for deploying a database via TFS build listed here:
http://www.mytechfinds.com/articles/software-testing/6-test-automation/64-db-deployment-tfs
The instructions include notes about how to configure a ConvertWorkspaceItem element. I've followed the directions, but TFS remains unhappy with my setting for 'Result' and 'Workspace'. For now, I simply entered the text from the directions ('dbproj' and 'Workspace', respectively). TFS complains about my values:
Compiler error(s) encountered processing expression "dbproj". 'dbproj' is not declared. It may be inaccessible due to its protection level.
I'm trying to find basic tutorial information on the ConvertWorkspaceItem element, but other than the MSDN reference page there isn't a lot of info. Does anyone know much about configuring this element?
You need to specify valid variable names for both of these properties. There should already be a variable declared in the workflow called Workspace. You will need to declare a variable of type String to receive the result of this activity, and specify its name as the Result property. It looks like the author of the linked article must have already created a variable called dbproj. At the bottom of the workflow designer there is a Variables tab where you can define your own variables.