I have a requirement for two rollover triggers (TimeBasedTriggeringPolicy and SizeBasedTriggeringPolicy), and each trigger should create files with a different naming scheme:
When the file size reaches 50MB: appname.log.1 -> appname.log.2 and so on (overwriting them if required)
When the day changes: appname.log.2021-07-05
Currently I'm using the following properties:
appender.R.type=RollingFile
appender.R.name=customRolling
appender.R.fileName=<pathtofile>/appname.log
appender.R.filePattern=<pathtofile>/appname.log.%d{yyyy-MM-dd-HH-mm}.%i
appender.R.policies.type=Policies
appender.R.policies.size.type=SizeBasedTriggeringPolicy
appender.R.policies.size.size=20KB
appender.R.policies.time.type=TimeBasedTriggeringPolicy
appender.R.policies.time.interval=1
appender.R.policies.time.modulate=true
appender.R.strategy.type=DefaultRolloverStrategy
appender.R.strategy.min=1
appender.R.strategy.max=10
appender.R.strategy.fileIndex=min
appender.R.append=true
appender.R.layout.type=PatternLayout
appender.R.layout.charset=UTF-8
appender.R.layout.pattern=%d{yyyy:MM:dd:HH:mm:ss.SSS} %h %NL %c{1} %t %ms%n
But this creates files like:
alertsender.log.2022-07-19-09-05.1
alertsender.log.2022-07-19-09-05.2
Which is not what's required.
How do I achieve this?
The Bazel changelog at https://bazel.googlesource.com/bazel/+show/master/CHANGELOG.md mentions that there are cpu tags. The question for me now is where else these tags are taken into account.
Posting the commit message here as I think it answers the question perfectly:
TLDR: You can increase the CPU reservation for tests by adding a "cpu:" (e.g. "cpu:4" for four cores) tag to their rule in a BUILD file. This can be used if tests would otherwise overwhelm your system if there's too much parallelism.
This lets users specify that their test needs a minimum number of CPU cores
to run and not be flaky. Example for a reservation of 4 CPUs:
sh_test(
    name = "test",
    size = "large",
    srcs = ["test.sh"],
    tags = ["cpu:4"],
)
This could also be used by remote execution strategies to tune their
resource adjustment.
As of 2017-06-21 the following alternative options are possible:
genrule: Set tags the same as in sh_test.
Example:
genrule(
    name = "foo",
    srcs = [],
    outs = ["foo.h"],
    cmd = "./$(location create_foo.pl) > \"$@\"",
    tools = ["create_foo.pl"],
    tags = ["cpu:4"],
)
Skylark rules: This can work as long as you do NOT use workers.
For Skylark rules, the CPU reservation can be set manually for each created action individually, by setting execution_requirements.
Example:
ctx.action(
    execution_requirements = {
        "cpu:4": "",  # This is no mistake - you really encode the value in the dict key and an empty string in dict value
    },
)
I'm working with an application that has three INI files in a somewhat irritating custom format. I'm trying to compile these into a 'standard' INI file.
I'm hoping for some inspiration in the form of pseudocode to help me write some sort of 'compiler'.
Here's an example of one of these INI files. The angle brackets indicate a redirect to another section in the file. These redirects can be recursive, i.e. one redirect resolves to another redirect. A redirect can also point to an external file (three values are present in that case). Comments start with a # symbol.
[PrimaryServer]
name = DEMO1
baseUrl = http://demo1.awesome.com
[SecondaryServer]
name = DEMO2
baseUrl = http://demo2.awesome.com
[LoginUrl]
# This is a standard redirect
baseLoginUrl = <PrimaryServer:baseUrl>
# This is a redirect appended with extra information
fullLoginUrl = <PrimaryServer:baseUrl>/login.php
# Here's a redirect that points to another redirect
enableSSL = <SSLConfiguration:enableSSL>
# This is a key that has multiple comma-separated values, some of which are redirects.
serverNames = <PrimaryServer:name>,<SecondaryServer:name>,AdditionalRandomServerName
# This one is particularly nasty. It's a redirect to another file...
authenticationMechanism = <Authenication.ini:Mechanisms:PrimaryMechanism>
[SSLConfiguration]
enableSSL = <SSLCertificates:isCertificateInstalled>
[SSLCertificates]
isCertificateInstalled = true
Here's an example of what I'm trying to achieve. I've removed the comments for readability.
[PrimaryServer]
name = DEMO1
baseUrl = http://demo1.awesome.com
[SecondaryServer]
name = DEMO2
baseUrl = http://demo2.awesome.com
[LoginUrl]
baseLoginUrl = http://demo1.awesome.com
fullLoginUrl = http://demo1.awesome.com/login.php
enableSSL = true
serverNames = DEMO1,DEMO2,AdditionalRandomServerName
authenticationMechanism = valueFromExternalFile
[SSLConfiguration]
enableSSL = <SSLCertificates:isCertificateInstalled>
[SSLCertificates]
isCertificateInstalled = true
I'm looking at using ini4j (Java) to achieve this, but am by no means fixed on using that language.
My main questions are:
1) How can I handle the recursive redirects?
2) How am I best to handle the redirects that have an additional string appended, e.g. serverNames?
3) Bonus points for any suggestions on how to handle the external redirects. No big deal if that part isn't working just yet.
So far, I'm able to parse and tidy up the file, but I'm struggling with these redirects.
Once again, I'm only hoping for pseudocode; perhaps I need more coffee, but I'm really puzzled by this one. My rough attempt so far is sketched below.
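For what it's worth, here's the kind of Python-ish pseudocode I have in mind for the recursive part (resolve_value, load_external and compile_ini are just names I made up, and I haven't really thought the external-file case through):

import re
import configparser

REDIRECT = re.compile(r"<([^<>]+)>")  # matches <Section:key> or <File.ini:Section:key>

def load_external(file_name):
    # Placeholder: the external file would need the same custom parsing/resolution as the main one.
    parser = configparser.ConfigParser()
    parser.read(file_name)
    return parser

def resolve_value(config, raw_value, seen=frozenset()):
    # Expand every <...> redirect inside raw_value, recursing until only literals remain.
    def expand(match):
        ref = match.group(1)
        if ref in seen:
            raise ValueError("circular redirect: " + ref)
        parts = ref.split(":")
        if len(parts) == 3:          # external redirect: <File.ini:Section:key>
            file_name, section, key = parts
            target = load_external(file_name)[section][key]
        else:                        # local redirect: <Section:key>
            section, key = parts
            target = config[section][key]
        # The target may itself contain redirects, so resolve it recursively.
        return resolve_value(config, target, seen | {ref})
    # Because only the <...> spans are substituted, appended text and
    # comma-separated lists (fullLoginUrl, serverNames) come out untouched.
    return REDIRECT.sub(expand, raw_value)

def compile_ini(config):
    # config is a dict of dicts: {section: {key: raw value}}
    return {section: {key: resolve_value(config, value) for key, value in keys.items()}
            for section, keys in config.items()}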
Thanks in advance for any suggestions.
I am not sure why I am getting the error below, but I suppose it's something that I am doing wrong.
First, you can grab my dataset by downloading the file dataset.r from this link and loading it into your session with dget("dataset.r").
In my case, I would do dat = dget("dataset.r").
The code below is what I am using to load the data into Neo4j.
library(RNeo4j)
graph = startGraph("http://localhost:7474/db/data/")
graph$version
# make sure that the graph is clean -- you should back up first!!!
clear(graph, input = FALSE)
## ensure the constraints
addConstraint(graph, "School", "unitid")
addConstraint(graph, "Topic", "topic_id")
## create the query
## BE CAREFUL OF WHITESPACE between KEY:VALUE pairs for parameters!!!
query = "
MERGE (s:School {unitid:{unitid},
                 instnm:{instnm},
                 obereg:{obereg},
                 carnegie:{carnegie},
                 applefeeu:{applfeeu},
                 enrlft:{enrlft},
                 applcn:{applcn},
                 admssn:{admssn},
                 admit_rate:{admit_rate},
                 ape:{ape},
                 sat25:{sat25},
                 sat75:{sat75} })
MERGE (t:Topic {topic_id:{topic_id},
                topic:{topic} })
MERGE (s)-[:HAS_TOPIC {score:{score} }]->(t)
"
for (i in 1:nrow(dat)) {
  ## status
  cat("starting row ", i, "\n")
  ## run the query
  cypher(graph,
         query,
         unitid = dat$unitid[i],
         instnm = dat$instnm[i],
         obereg = dat$obereg[i],
         carnegie = dat$carnegie[i],
         applfeeu = dat$applfeeu[i],
         enrlft = dat$enrlt[i],
         applcn = dat$applcn[i],
         admssn = dat$admssn[i],
         admit_rate = dat$admit_rate[i],
         ape = dat$apps_per_enroll[i],
         sat25 = dat$sat25[i],
         sat75 = dat$sat75[i],
         topic_id = dat$topic_id[i],
         topic = dat$topic[i],
         score = dat$score[i])
} #endfor
I can successfully load the first 49 records of my dataframe dat, but it errors out on the 50th row.
This is the error that I receive:
starting row 50
Error: 400 Bad Request
{"message":"Node 1477 already exists with label School and property \"unitid\"=[110680]","exception":"CypherExecutionException","fullname":"org.neo4j.cypher.CypherExecutionException","stacktrace":["org.neo4j.cypher.internal.compiler.v2_1.spi.ExceptionTranslatingQueryContext.org$neo4j$cypher$internal$compiler$v2_1$spi$ExceptionTranslatingQueryContext$$translateException(ExceptionTranslatingQueryContext.scala:154)","org.neo4j.cypher.internal.compiler.v2_1.spi.ExceptionTranslatingQueryContext$ExceptionTranslatingOperations.setProperty(ExceptionTranslatingQueryContext.scala:121)","org.neo4j.cypher.internal.compiler.v2_1.spi.UpdateCountingQueryContext$CountingOps.setProperty(UpdateCountingQueryContext.scala:130)","org.neo4j.cypher.internal.compiler.v2_1.mutation.PropertySetAction.exec(PropertySetAction.scala:51)","org.neo4j.cypher.internal.compiler.v2_1.mutation.MergeNodeAction$$anonfun$exec$1.apply(MergeNodeAction.scala:80)","org.neo4j.cypher.internal.compiler.v2_1
Here is my session info:
> sessionInfo()
R version 3.1.0 (2014-04-10)
Platform: x86_64-apple-darwin13.1.0 (64-bit)
locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] RNeo4j_1.2.0
loaded via a namespace (and not attached):
[1] RCurl_1.95-4.1 RJSONIO_1.2-0.2 tools_3.1.0
And it's worth noting that I am using Neo4j 2.1.3.
Thanks for any help in advance.
This is an issue with how MERGE works. By setting the score property within the MERGE clause itself here...
MERGE (s)-[:HAS_TOPIC {score:{score} }]->(t)
...MERGE tries to create the entire pattern, and thus your uniqueness constraint is violated. Instead, do this:
MERGE (s)-[r:HAS_TOPIC]->(t)
SET r.score = {score}
I was able to import all of your data after making this change.
I'm trying to get zipline working with non-US, intraday data, that I've loaded into a pandas DataFrame:
BARC HSBA LLOY STAN
Date
2014-07-01 08:30:00 321.250 894.55 112.105 1777.25
2014-07-01 08:32:00 321.150 894.70 112.095 1777.00
2014-07-01 08:34:00 321.075 894.80 112.140 1776.50
2014-07-01 08:36:00 321.725 894.80 112.255 1777.00
2014-07-01 08:38:00 321.675 894.70 112.290 1777.00
I've followed the moving-averages tutorial here, replacing "AAPL" with my own symbol code, and the history calls with "1m" data instead of "1d".
Then I do the final call using algo_obj.run(DataFrameSource(mydf)), where mydf is the dataframe above.
However, all sorts of problems arise related to TradingEnvironment. According to the source code:
# This module maintains a global variable, environment, which is
# subsequently referenced directly by zipline financial
# components. To set the environment, you can set the property on
# the module directly:
# from zipline.finance import trading
# trading.environment = TradingEnvironment()
#
# or if you want to switch the environment for a limited context
# you can use a TradingEnvironment in a with clause:
# lse = TradingEnvironment(bm_index="^FTSE", exchange_tz="Europe/London")
# with lse:
# the code here will have lse as the global trading.environment
# algo.run(start, end)
However, using the context doesn't seem to fully work. I still get errors, for example stating that my timestamps are before the market open (and indeed, looking at trading.environment.open_and_closes, the times are for the US market).
My question is, has anybody managed to use zipline with non-US, intra-day data? Could you point me to a resource and ideally example code on how to do this?
N.B. I've seen there are some tests on GitHub that seem related to the trading calendars (tradingcalendar_lse.py, tradingcalendar_tse.py, etc.), but these appear to only handle data at the daily level. I would need to fix:
open/close times
reference data for the benchmark
and probably more ...
I've got this working after fiddling around with the tutorial notebook. Code sample below. It uses the DataFrame mid, as described in the original question. A few points bear mentioning:
Trading calendar: I create one manually and assign it to trading.environment, using non_working_days from tradingcalendar_lse.py. Alternatively, you could create one that fits your data exactly (although that could be a problem for out-of-sample data). There are two fields that you need to define: trading_days and open_and_closes.
sim_params: There is a problem with the default start/end values because they aren't timezone-aware, so you must create a sim_params object and pass start/end parameters with a timezone.
Also, run() must be called with the argument overwrite_sim_params=False, as calculate_first_open/close otherwise raise timestamp errors.
I should mention that it's also possible to pass pandas Panel data, with the fields open, high, low, close, price and volume in the minor_axis. But in that case those fields are mandatory, otherwise errors are raised.
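For illustration, a Panel of the right shape could be built from mid (and run with the algo_obj defined in the full code below) along these lines. This is an untested sketch: the OHLC/price fields are just the mid price duplicated, and volume is faked as zero since my data doesn't have it.

import pandas as pd

def to_ohlcv(series):
    # Duplicate the mid price into the fields zipline expects; there is no real volume in my data.
    df = pd.DataFrame({f: series for f in ["open", "high", "low", "close", "price"]})
    df["volume"] = 0
    return df

# items = sids, major_axis = timestamps, minor_axis = fields
panel = pd.Panel({sym: to_ohlcv(mid[sym]) for sym in mid.columns})
perf_panel = algo_obj.run(panel, overwrite_sim_params=False)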
Note that this code only produces a daily summary of the performance. I'm sure there must be a way to get the results at minute resolution (I thought this was set by emission_rate, but apparently it's not). If anybody knows, please comment and I'll update the code.
Also, I'm not sure what the API call is to invoke 'analyze' (i.e. when using the %%zipline magic in IPython, as in the tutorial, the analyze() method gets called automatically. How do I do this manually?).
import pytz
import pandas as pd
from datetime import datetime
from zipline.algorithm import TradingAlgorithm
from zipline.utils import tradingcalendar
from zipline.utils import tradingcalendar_lse
from zipline.finance.trading import TradingEnvironment
from zipline.api import order_target, record, symbol, history, add_history
from zipline.finance import trading
def initialize(context):
    # Register 2 histories that track minute prices,
    # one with a 10-minute window and one with a 30-minute window
    add_history(10, '1m', 'price')
    add_history(30, '1m', 'price')
    context.i = 0

def handle_data(context, data):
    # Skip first 30 mins to get full windows
    context.i += 1
    if context.i < 30:
        return
    # Compute averages
    # history() has to be called with the same params
    # from above and returns a pandas dataframe.
    short_mavg = history(10, '1m', 'price').mean()
    long_mavg = history(30, '1m', 'price').mean()
    sym = symbol('BARC')
    # Trading logic
    if short_mavg[sym] > long_mavg[sym]:
        # order_target orders as many shares as needed to
        # achieve the desired number of shares.
        order_target(sym, 100)
    elif short_mavg[sym] < long_mavg[sym]:
        order_target(sym, 0)
    # Save values for later inspection
    record(BARC=data[sym].price,
           short_mavg=short_mavg[sym],
           long_mavg=long_mavg[sym])

def analyze(context, perf):
    perf["pnl"].plot(title="Strategy P&L")
# Create algorithm object passing in initialize and
# handle_data functions
# This is needed to handle the correct calendar. Assume that market data has the right index for tradeable days.
# Passing in env_trading_calendar=tradingcalendar_lse doesn't appear to work, as it doesn't implement open_and_closes
from zipline.utils import tradingcalendar_lse
trading.environment = TradingEnvironment(bm_symbol='^FTSE', exchange_tz='Europe/London')
#trading.environment.trading_days = mid.index.normalize().unique()
trading.environment.trading_days = pd.date_range(
    start=mid.index.normalize()[0],
    end=mid.index.normalize()[-1],
    freq=pd.tseries.offsets.CDay(holidays=tradingcalendar_lse.non_trading_days))
trading.environment.open_and_closes = pd.DataFrame(
    index=trading.environment.trading_days,
    columns=["market_open", "market_close"])
trading.environment.open_and_closes.market_open = (
    trading.environment.open_and_closes.index + pd.to_timedelta(60*7, unit="T")).to_pydatetime()
trading.environment.open_and_closes.market_close = (
    trading.environment.open_and_closes.index + pd.to_timedelta(60*15+30, unit="T")).to_pydatetime()
from zipline.utils.factory import create_simulation_parameters
sim_params = create_simulation_parameters(
    # Bug in code doesn't set tz if these are not specified (finance/trading.py:SimulationParameters.calculate_first_open[close])
    start = pd.to_datetime("2014-07-01 08:30:00").tz_localize("Europe/London").tz_convert("UTC"),
    end = pd.to_datetime("2014-07-24 16:30:00").tz_localize("Europe/London").tz_convert("UTC"),
    data_frequency = "minute",
    emission_rate = "minute",
    sids = ["BARC"])
algo_obj = TradingAlgorithm(initialize=initialize,
                            handle_data=handle_data,
                            sim_params=sim_params)
# Run algorithm
perf_manual = algo_obj.run(mid,overwrite_sim_params=False) # overwrite == True calls calculate_first_open[close] (see above)
#Luciano
You can add analyze(None, perf_manual) at the end of your code to run the analyze step automatically.
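That is, at the very end of the script above, right after the run() call:

# run the backtest, then call the analyze() hook manually on the daily performance frame it returns
perf_manual = algo_obj.run(mid, overwrite_sim_params=False)
analyze(None, perf_manual)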
Background
I'm writing a photo gallery; it can create both previews and thumbnails.
I want the user to be able to customize how these are created (resolution, etc.).
What I have so far
config/gallery.yml - Stores the actual settings, a block for each env
config/initializers/gallery.rb - Loads the YAML file into Gallery::Application
app/models/image.rb - The model that reads the configuration and creates the images
My question
Where is the best place to merge the settings with the default values?
I could either do it in gallery.rb or image.rb where it's actually used.
Otherwise, is this a sane approach or have I gone about this all wrong?
Any feedback is greatly appreciated.
Files
config/gallery.yml
production:
  create_previews: true
  create_thumbnails: true
development:
  create_thumbnails: true
  create_previews: true
test:
  create_thumbnails: false
  create_previews: false
config/initializers/gallery.rb
Gallery::Application.config.image_directory = File.join(Rails.root, 'images')
Gallery::Application.config.thumbnail_directory = File.join(Rails.root, 'thumbnails')
Gallery::Application.config.preview_directory = File.join(Rails.root, 'previews')
# DO NOT MODIFY THE LINES BELOW
EXTERNAL_CONFIG_FILE_PATH = "#{Rails.root}/config/gallery.yml"
if FileTest.exists? EXTERNAL_CONFIG_FILE_PATH
  vals = YAML.load_file(EXTERNAL_CONFIG_FILE_PATH)[::Rails.env]
else
  vals = {}
end
Gallery::Application.config.create_previews = vals['create_previews']
Gallery::Application.config.preview_settings = vals['preview_settings']
Gallery::Application.config.create_thumbnails = vals['create_thumbnails']
Gallery::Application.config.thumbnail_settings = vals['thumbnail_settings']
I also agree that config/initializers is a good place for this.
Another option is to read/write the settings from a database table via a GalleryConfig ActiveRecord model (e.g. backed by MySQL),
because that makes them easy to change from test cases or the rails console.