I've been asked to remove any images duplicated in the WAR file our Grails app is packaged in. The documentation suggests this is possible via the grails.assets.excludes property in Config.groovy, but it doesn't clearly state how this property is supposed to work.
Here's what the documentation says:
Optionally, assets can be excluded from processing if included by your require tree. This can dramatically reduce compile time for your assets. To do so simply leverage the excludes configuration option:
grails.assets.excludes = ["tiny_mce/src/*.js"]
The example is totally unclear to me. I've tried several permutations of this expression with no success; image assets continue to be preprocessed, causing duplicates of all of them in the resulting WAR file. Here are a few settings I've tried:
grails.assets.excludes = ["tiny_mce/src/*.jpg", "tiny_mce/src/*.jpg"]
grails.assets.excludes = ["<app_name>/src/*.jpg", "<app_name>/src/*.jpg"]
grails.assets.excludes = ["/images/*.jpg", "/images/*.png"]
grails.assets.excludes = ["**/*.jpg", "/images/**"]
What am I missing? How do I tell the asset pipeline to skip precompiling images?
This works for me:
grails.assets.excludes = ["**/*.jpg","**/*.png"]
I can't get any rule based on images/ to work...
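For what it's worth, my reading (an assumption on my part, not something the docs spell out) is that the patterns are matched against paths relative to each asset source folder (grails-app/assets/images, grails-app/assets/javascripts, and so on), so a prefix like images/ never appears in the path being matched. That would explain why only the extension-only globs work:

// Config.groovy -- a sketch, assuming patterns are relative to each assets subfolder
grails.assets.excludes = ["**/*.jpg", "**/*.png", "**/*.gif"]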
I'm building a Docker image with a call like this:
load(
    "@io_bazel_rules_docker//container:container.bzl",
    "container_image",
)

container_image(
    name = "image",
    base = "@nginx//image",
    files = ["nginx.conf"],
    symlinks = {
        "/etc/nginx/nginx.conf": "/nginx.conf",
    },
    tars = [":build_tar"],
    tags = ["catalog"],
)
As you can see, the image is supposed to contain the content of the tar archive available at target :build_tar, as well as the nginx.conf file.
The problem is that all the timestamps of the contents of the produced image (both the files taken from the tar archive and the nginx.conf file) are set to zero (i.e. midnight, Jan 1st 1970).
It looks like the container_image rule is not preserving timestamps. How can I fix this?
Adding the enable_mtime_preservation = True parameter to the container_image call fixes the timestamps.
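For concreteness, this is the call from the question with the flag added; the parameter name comes straight from the answer above, and everything else is unchanged:

container_image(
    name = "image",
    base = "@nginx//image",
    files = ["nginx.conf"],
    symlinks = {
        "/etc/nginx/nginx.conf": "/nginx.conf",
    },
    tars = [":build_tar"],
    tags = ["catalog"],
    # Keep the mtimes of files copied into the image instead of zeroing them.
    enable_mtime_preservation = True,
)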
A way to fix it would be to take the image you want to base your container off of, deploy it, copy the files into it while it is running, and finally repack it for deployment.
I understand that is not ideal, but from the searching I've done, that's how everyone recommends doing it.
I'll just add that I don't completely understand what the image is for, so I may be completely wrong on how to fix it.
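If you do go that manual route, it would look roughly like this (a sketch only; the container and image names are placeholders):

# Start a throwaway container from the base image, copy the file in
# while it runs, then snapshot the result as a new image.
docker run -d --name scratch-nginx nginx
docker cp nginx.conf scratch-nginx:/etc/nginx/nginx.conf
docker commit scratch-nginx my-nginx:latest
docker rm -f scratch-nginx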
Bazel has been working great for me recently, but I've stumbled upon a question for which I have yet to find a satisfactory answer:
How can one collect all files bearing a certain extension from the workspace?
Another way of phrasing the question: how could one obtain the functional equivalent of doing a glob() across a complete Bazel workspace?
Background
The goal in this particular case is to collect all markdown files to run some checks and generate a static site from them.
At first glance, glob() sounds like a good idea, but it stops as soon as it runs into a BUILD file, since BUILD files mark package boundaries and glob() cannot cross them.
Current Approaches
The current approach is to run the collection/generation logic outside of the sandbox, but this is a bit dirty, and I'm wondering if there is a way that is both "proper" and easy (i.e., not requiring that each BUILD file explicitly expose its markdown files).
Is there any way to specify, in the workspace, some default rules that will be added to all BUILD files?
You could write an aspect for this to aggregate markdown files in a bottom-up manner and create actions on those files. There is an example of a file_collector aspect here. I modified the aspect's extensions for your use case. This aspect aggregates all .md and .markdown files across targets on the deps attribute edges.
FileCollector = provider(
    fields = {"files": "collected files"},
)

def _file_collector_aspect_impl(target, ctx):
    # This function is executed for each dependency the aspect visits.

    # Collect files from the srcs
    direct = [
        f
        for f in ctx.rule.files.srcs
        if ctx.attr.extension == f.extension
    ]

    # Combine direct files with the files from the dependencies.
    files = depset(
        direct = direct,
        transitive = [dep[FileCollector].files for dep in ctx.rule.attr.deps],
    )

    return [FileCollector(files = files)]

markdown_file_collector_aspect = aspect(
    implementation = _file_collector_aspect_impl,
    attr_aspects = ["deps"],
    attrs = {
        "extension": attr.string(values = ["md", "markdown"]),
    },
)
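To actually run the aspect you can attach it to the deps of a small companion rule, mirroring the upstream file_collector example. The rule below is a sketch and its names are my own; note that the aspect's extension parameter is filled in from the rule attribute of the same name:

def _file_collector_rule_impl(ctx):
    # Merge the files collected from every direct dependency and write
    # their paths to a single output file for later processing.
    files = depset(
        transitive = [dep[FileCollector].files for dep in ctx.attr.deps],
    )
    out = ctx.actions.declare_file(ctx.label.name + ".files")
    ctx.actions.write(out, "\n".join([f.short_path for f in files.to_list()]))
    return [DefaultInfo(files = depset([out]))]

file_collector = rule(
    implementation = _file_collector_rule_impl,
    attrs = {
        # Must be one of the values the aspect's attribute allows.
        "extension": attr.string(default = "md"),
        "deps": attr.label_list(aspects = [markdown_file_collector_aspect]),
    },
)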
Another way is to do a query on file targets (input and output files known to the Bazel action graph), and process these files separately. Here's an example querying for .bzl files in the rules_jvm_external repo:
$ bazel query //...:* | grep -e ".bzl$"
//migration:maven_jar_migrator_deps.bzl
//third_party/bazel_json/lib:json_parser.bzl
//settings:stamp_manifest.bzl
//private/rules:jvm_import.bzl
//private/rules:jetifier_maven_map.bzl
//private/rules:jetifier.bzl
//:specs.bzl
//:private/versions.bzl
//:private/proxy.bzl
//:private/dependency_tree_parser.bzl
//:private/coursier_utilities.bzl
//:coursier.bzl
//:defs.bzl
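The query language can also do the filtering itself, if you'd rather not pipe through grep; filter and kind are standard query functions, and the markdown pattern is just for illustration:

$ bazel query 'filter("\.md$", kind("source file", //...:*))'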
The Bazel Starlark API does strange things with files in external repositories. I have the following Starlark snippet:
print(ctx.genfiles_dir)
print(ctx.genfiles_dir.path)
print(output_filename)
ret = ctx.new_file(ctx.genfiles_dir, output_filename)
print(ret.path)
It is creating the following output:
DEBUG: build_defs.bzl:292:5: <derived root>
DEBUG: build_defs.bzl:293:5: bazel-out/k8-fastbuild/genfiles
DEBUG: build_defs.bzl:294:5: google/protobuf/descriptor.upb.c
DEBUG: build_defs.bzl:296:5: bazel-out/k8-fastbuild/genfiles/external/com_google_protobuf/google/protobuf/descriptor.upb.c
That extra external/com_google_protobuf comes seemingly out of nowhere, and it makes my rule fail:
I tell protoc to generate into ctx.genfiles_dir.path (which is bazel-out/k8-fastbuild/genfiles).
So protoc generates bazel-out/k8-fastbuild/genfiles/google/protobuf/descriptor.upb.c
Bazel fails because I didn't generate bazel-out/k8-fastbuild/genfiles/external/com_google_protobuf/google/protobuf/descriptor.upb.c
Likewise, when I try to call file.short_path on a source file from an external repository, I get a result like ../com_google_protobuf/google/protobuf/descriptor.proto. This seems quite unhelpful, so I just wrote some manual code to strip off the leading ../com_google_protobuf/.
Am I missing something? How can I write this rule in a way that doesn't feel like I'm fighting Bazel the whole time?
Am I missing something?
The basic problem, as you already realized, is that you have two path "namespaces": the one that protoc sees (i.e. import paths) and the one that Bazel sees (i.e. the path you pass to declare_file()).
Two things to note:
1) All paths declared with declare_file() get the path <bin dir>/<package path incl. workspace>/<path you passed to declare_file()>.
2) All actions are executed from <bin dir> (unless output_to_genfiles = True, in which case this switches to <gen dir>, as in your example).
Trying to solve the exact same problem you encountered, I resorted to stripping the known path from the output_file's path to determine which directory to pass to --cpp_out:
# This code is run from the context of the external protobuf dependency
proto_path = "google/a/b.proto"
output_file = ctx.actions.declare_file(proto_path)
# output_file.path would be `<gen_dir>/external/protobuf/google/a/b.proto`
# Strip the known proto_path from output_file.path
protoc_prefix = output_file.path[:-len(proto_path)]
print(protoc_prefix) # Prints: <gen_dir>/external/protobuf
command = "{protoc} {proto_paths} {cpp_out} {plugin} {plugin_options} {proto_file}".format(
    ...
    cpp_out = "--cpp_out=" + protoc_prefix,
    ...
)
Alternatives
You may also be able to construct the same path with ctx.bin_dir, ctx.label.workspace_name, ctx.label.package, and ctx.label.name.
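Something along these lines should work (a sketch; it relies on Label.workspace_root being empty for the main repository and "external/<repo>" for external ones):

# Rebuild the prefix from the rule's context instead of string-stripping.
protoc_prefix = "/".join([
    part
    for part in [
        ctx.bin_dir.path,          # or ctx.genfiles_dir.path
        ctx.label.workspace_root,  # "" in the main repo, "external/<repo>" otherwise
        ctx.label.package,
    ]
    if part
])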
Misc.
proto_library recently gained a strip_import_prefix attribute. When it is used, the above is not correct: all dependent files are symlinked into a new directory, under which they have the relative paths declared with strip_import_prefix.
The path format is:
<bin dir>/<repo>/<package>/_virtual_imports/<label name>/<path `import`ed in .proto files>
i.e.
<bin dir>/external/protobuf/_virtual_imports/b_proto/google/a/b.proto
This assumes you are building an external repo called protobuf, which contains a BUILD file at its root with a target named b_proto that wraps google/a/b.proto in a proto_library and uses the strip_import_prefix attribute.
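For illustration, such a target might look like this (a hypothetical BUILD file; the src/ prefix is made up so that strip_import_prefix has something to strip):

proto_library(
    name = "b_proto",
    srcs = ["src/google/a/b.proto"],
    # Imports resolve as "google/a/b.proto" once the prefix is stripped.
    strip_import_prefix = "src",
)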
When using the Karma JavaScript test library (née Testacular) together with Rails, where should test files and mocked data be placed?
It seems weird to have them in /assets/ because we don’t actually want to serve them to users. (But I guess if they are simply never precompiled, then that’s not an actual problem, right?)
Via this post: https://groups.google.com/forum/#!topic/angular/Mg8YjKWbEJ8
I'm experimenting with something that looks like this:
// list of files / patterns to load in the browser
files: [
    'http://localhost:3000/assets/application.js',
    'spec/javascripts/*_spec.coffee',
    {
        pattern: 'app/assets/javascripts/*.{js,coffee}',
        watched: true,
        included: false,
        served: false
    }
],
It watches the app's js files but doesn't include or serve them, instead including the application.js served by Rails and Sprockets.
I've also been fiddling with https://github.com/lucaong/sprockets-chain, but haven't found a way to use RequireJS to include js files from within gems (such as jquery-rails or angularjs-rails).
We ended up putting tests and mocked data under the Rails app’s spec folder and configuring Karma to import them as well as our tested code from app/assets.
Works for us. Other thoughts are welcome.
Our config/karma.conf.js file:
basePath = '../';
files = [
    JASMINE,
    JASMINE_ADAPTER,
    // libs
    'vendor/assets/javascripts/angular/angular.js',
    'vendor/assets/javascripts/angular/angular-*.js',
    'vendor/assets/javascripts/jquery-1.9.1.min.js',
    'vendor/assets/javascripts/underscore-min.js',
    'vendor/assets/javascripts/angular-strap/angular-strap.min.js',
    'vendor/assets/javascripts/angular-ui/angular-ui.js',
    'vendor/assets/javascripts/angular-bootstrap/ui-bootstrap-0.2.0.min.js',
    // our app!
    'app/assets/javascripts/<our-mini-app>/**',
    // and our tests
    'spec/javascripts/<our-mini-app>/lib/angular/angular-mocks.js',
    'spec/javascripts/<our-mini-app>/unit/*.coffee',
    // mocked data
    'spec/javascripts/<our-mini-app>/mocked-data/<data-file>.js.coffee',
];
autoWatch = true;
browsers = 'PhantomJS'.split(' ')
preprocessors = {
    '**/*.coffee': 'coffee'
};
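With that in place, the suite runs via the Karma CLI; assuming a Karma version matching this older global-variable config style, that's simply:

karma start config/karma.conf.js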
I found this project helpful as a starting point: https://github.com/monterail/rails-angular-karma-example. It's an example Rails app with angular.js and the Karma test runner, and the authors explain it on their blog.
I have this file app/assets/stylesheets/config.rb with the following content:
http_path = "/"
css_dir = "."
sass_dir = "."
images_dir = "img"
javascripts_dir = "js"
output_style = :compressed
relative_assets=true
line_comments = false
What is it for?
To answer your question "what is it for": it's configuration for the Compass SCSS compiler. The reference in the other answer is handy, but for up-to-date details without speculation, check out the Compass configuration reference here:
http://compass-style.org/help/tutorials/configuration-reference/
I had originally posted to also provide an answer for where the file came from (if you didn't create it yourself). I answer this because it was 'magically' appearing in my Rails project, and a Google search for generation of config.rb in Rails ends up here. I first suspected sass-rails, but someone from that project confirmed they don't generate that file. Then I figured out that auto-generation of this file was being caused by Sublime Text with the LiveReload package. See the related github issue.
I'm guessing that it defines how to find assets from the stylesheets folder, and some output options on what happens to the assets before being sent to the client browser.
http_path = "/"             <= probably the root path to assets, so if you had
                               application.css, the URL path to it would be
                               http://example.com/application.css

css_dir = "."               <= the directory to look for css files. "." in linux
                               means "the current directory", so this would set your
                               CSS path to app/assets/stylesheets

sass_dir = "."              <= same as above, except for .sass/.scss files

images_dir = "img"          <= images go in app/assets/stylesheets/img

javascripts_dir = "js"      <= javascript goes in app/assets/stylesheets/js
                               (seems like a strange place to put them)

output_style = :compressed  <= either to produce .gzip'ed versions of files
                               or to minify them, maybe both - consult docs on this

relative_assets = true      <= relative, as opposed to absolute, paths to assets, I think

line_comments = false       <= I think this turns on/off the ability to include comments,
                               or maybe whether the comments are included in the
                               compressed versions of the file. Turn it to "true" and
                               compare the assets, as served to the browser, and see
                               what the difference is
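For what it's worth, the Compass reference linked in the earlier answer pins down the options guessed at above; here is the same file annotated from those docs (comments are mine):

http_path = "/"        # root of the site when served over HTTP, used to build asset URLs
css_dir = "."          # where compiled CSS is written, relative to this file
sass_dir = "."         # where the .sass/.scss sources live
images_dir = "img"     # where helpers like image-url() look for images
javascripts_dir = "js" # where javascript helpers resolve paths
output_style = :compressed  # CSS output style: :nested, :expanded, :compact, or :compressed
relative_assets = true      # generate relative (not absolute) asset URLs in the CSS
line_comments = false       # don't emit "/* line N, source file */" comments above each rule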
You can fix it by removing the LiveReload plugin from Sublime Text:
Sublime Text > Preferences > Package Control > Remove Package > LiveReload
For me it's a good solution because I wasn't using LiveReload.
If you are using LiveReload, please follow the issue on github.