In Java/Groovy, as far as I know, a package has to be defined in the corresponding folder. As a result, every class file stored in /a/b/c starts with the line package a.b.c. Is this still necessary? With regard to convention over configuration, this isn't DRY...
What kind of problems would arise if this package declaration were optional?
While it is conventional for the directory structure to match the package structure, and certain problems arise if they don't match, it is in fact not a requirement that they match. This is also true of Java (though a lot of folks don't realize that).
Below is an example which demonstrates this.
groovydemo $ mkdir classes
groovydemo $
groovydemo $ cat src/groovy/com/demo/SomeClass.groovy
package com.somethingotherthandemo
class SomeClass {}
groovydemo $
groovydemo $ groovyc -d classes/ src/groovy/com/demo/SomeClass.groovy
groovydemo $ find classes -type f
classes/com/somethingotherthandemo/SomeClass.class
The reasons for using packages in Groovy (and Grails) are largely the same as the reasons for using them in Java:
Packages serve to organize classes into logical namespaces, typically by grouping collaborating classes together.
They help avoid naming conflicts with other classes (either Java or Groovy).
In any non-trivial system where you have hundreds or thousands of classes, packages provide a very useful mechanism for organization and structure.
I think what you're saying is that the package name is implied by the directory the class is in, so why do you need to state it explicitly? This is only true in some cases (like Grails) where there's a convention that establishes the root of the source files (e.g. src/groovy).
But imagine I'm writing a Groovy app and have a file at /a/b/c/D.groovy. How can we tell whether the root of the source files is /a (making the package name b.c) or /a/b (making the package name just c)? As far as I can see, we can't, so the package name needs to be stated explicitly in the source file.
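To make the ambiguity concrete, here is a hypothetical session (the paths are invented) showing that the compiler lays out its output by the declared package, not by where the source file happens to live:
$ cat /a/b/c/D.groovy
package b.c
class D {}
$ groovyc -d classes /a/b/c/D.groovy
$ find classes -type f
classes/b/c/D.class
Had D.groovy declared package c instead, the output would have been classes/c/D.class; nothing in the path /a/b/c/D.groovy alone tells the compiler which was intended.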
Related
I am attempting to use Bazel to compile a dhall program based on dhall-kubernetes to generate a Kubernetes YAML file.
The basic dhall compilation without dhall-kubernetes, using a simple Bazel macro, works OK.
I have made an example using dhall's dependency resolution to download dhall-kubernetes - see here. This also works, but it is very slow (I think because dhall downloads each remote file separately) and introduces a network dependency into the Bazel rule execution, which I would prefer to avoid.
My preferred approach is to use Bazel to download an archive release version of dhall-kubernetes, then have the rule access it locally (see here). My solution requires a relative path in Prelude.dhall and package.dhall for the examples/k8s package to reference dhall-kubernetes. While it works, I am concerned that this is subverting the Bazel sandbox by requiring special knowledge of the folder structure used internally by Bazel. Is there a better way?
Prelude.dhall:
../../external/dhall-kubernetes/1.17/Prelude.dhall
WORKSPACE:
workspace(name = "dhall")
load("#bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
DHALL_KUBERNETES_VERSION = "4.0.0"
http_archive(
    name = "dhall-kubernetes",
    sha256 = "0bc2b5d2735ca60ae26d388640a4790bd945abf326da52f7f28a66159e56220d",
    url = "https://github.com/dhall-lang/dhall-kubernetes/archive/v%s.zip" % DHALL_KUBERNETES_VERSION,
    strip_prefix = "dhall-kubernetes-4.0.0",
    build_file = "@//:BUILD.dhall-kubernetes",
)
BUILD.dhall-kubernetes:
package(default_visibility=['//visibility:public'])
filegroup(
    name = "dhall-k8s-1.17",
    srcs = glob([
        "1.17/**/*",
    ]),
)
examples/k8s/BUILD:
package(default_visibility = ["//visibility:public"])
genrule(
    name = "special_ingress",
    srcs = [
        "ingress.dhall",
        "Prelude.dhall",
        "package.dhall",
        "@dhall-kubernetes//:dhall-k8s-1.17",
    ],
    outs = ["ingress.yaml"],
    cmd = "dhall-to-yaml --file $(location ingress.dhall) > $@",
    visibility = ["//visibility:public"],
)
There is a way to instrument dhall to do "offline" builds, meaning that the package manager fetches all Dhall dependencies instead of Dhall fetching them.
In fact, I implemented exactly this for Nixpkgs, and you may be able to translate it to Bazel:
Add Nixpkgs support for Dhall
High-level explanation
The basic trick is to take advantage of a feature of Dhall's import system, which is that if a package protected by a semantic integrity check (i.e. a "semantic hash") is cached then Dhall will use the cache instead of fetching the package. You can build upon this trick to have the package manager bypass Dhall's remote imports by injecting dependencies in this way.
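As a tiny illustration of the trick (the hash is the Prelude hash from the example below; assuming the corresponding cache entry is present, the import never touches the network):
$ cat example.dhall
https://prelude.dhall-lang.org/package.dhall
  sha256:26b0ef498663d269e4dc6a82b0ee289ec565d683ef4c00d0ebdd25333a5a3c98
$ XDG_CACHE_HOME=$PWD/.cache dhall --file example.dhall   # served from .cache/dhall/122026b0ef49…, no HTTP request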
You can find the Nix-related logic for this here:
Nix function for building a Dhall package
... but I will try to explain how it works in a package-manager-independent way.
Package structure
First, the final product of a Dhall "package" built using Nix is a directory with the following structure:
$ nix-build --attr 'dhallPackages.Prelude'
…
$ tree -a ./result
./result
├── .cache
│   └── dhall
│       └── 122026b0ef498663d269e4dc6a82b0ee289ec565d683ef4c00d0ebdd25333a5a3c98
└── binary.dhall
2 directories, 2 files
The contents of this directory are:
./.cache/dhall/1220XXX…XXX
A valid cache directory for Dhall containing a single build product: the binary encoding of the interpreted Dhall expression.
You can create such a binary file using dhall encode, and you can compute the file name by replacing the XXX…XXX above with the sha256 hash of the expression, which you can obtain using the dhall hash command.
./binary.dhall
A convenient Dhall file containing the expression missing sha256:XXX…XXX. Interpreting this expression only succeeds if an expression matching the hash sha256:XXX…XXX is already cached.
The file is called binary.dhall because this is the Dhall equivalent of a "binary" package distribution, meaning that the import can only be obtained from a binary cache and cannot be fetched and interpreted from source.
Optional: ./source.dhall
This is a file containing a fully αβ-normalized expression equivalent to the expression that was cached. By default, this should be omitted for all packages except perhaps the top-level package, since it contains the same expression that is stored inside ./.cache/dhall/1220XXX…XXX, albeit less efficiently (since the binary encoding is more compact).
This file is called ./source.dhall because this is the Dhall equivalent of a "source" package distribution, which contains valid source code to produce the same result.
User interface
The function for building a package takes four arguments:
The package name
This is not material to the build. It's just to name things since every Nix package has to have a human-readable name.
The dependencies for the build
Each of these dependencies is a build product that produces a directory tree just like the one I described above (i.e. a ./.cache directory, a ./binary.dhall file, and an optional ./source.dhall file)
A Dhall expression
This can be arbitrary Dhall source code, with only one caveat: all remote imports transitively referenced by the expression must be protected by integrity checks AND those imports must match one of the dependencies of this Dhall package (so that the import can be satisfied via the cache instead of the Dhall runtime fetching the URL)
A boolean option specifying whether to keep the ./source.dhall file, which is False by default
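For reference, a hypothetical invocation of the Nix function linked above could look like the following sketch (the argument names name, dependencies, code and source come from my reading of the linked build-dhall-package.nix; treat them as illustrative rather than authoritative):
$ cat example.nix
# hypothetical usage sketch of the four arguments described above
{ dhallPackages }:
dhallPackages.buildDhallPackage {
  name = "example";                          # 1. the package name
  dependencies = [ dhallPackages.Prelude ];  # 2. packages built the same way
  code = ''
    https://prelude.dhall-lang.org/package.dhall
      sha256:26b0ef498663d269e4dc6a82b0ee289ec565d683ef4c00d0ebdd25333a5a3c98
  '';                                        # 3. Dhall source, imports hash-protected
  source = false;                            # 4. omit ./source.dhall (the default)
}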
Implementation
The way that the Dhall package builder works is:
First, build the Haskell Dhall package with the -f-with-http flag
This flag compiles out support for HTTP remote imports, that way if the user forgets to supply a dependency for a remote import they will get an error message saying Import resolution is disabled
We'll be using this executable for all of the subsequent steps
Create a cache directory within the current working directory named .cache/dhall
... and populate the cache directory with the binary files stored inside each dependency's ./cache/ directory
Configure the interpreter to use the cache directory we created
... by setting XDG_CACHE_HOME to point to the .cache directory we just created in our current working directory
Interpret and α-normalize the Dhall source code for our package
... using the dhall --alpha command. Save the result to $out/source.dhall where $out is the directory that will store the final build product
Obtain the expression's hash
... using the dhall hash command. We will need this hash for the following two steps.
Create the corresponding binary cache file
... using the dhall encode command and save the file to $out/.cache/dhall/1220${HASH}
Create the ./binary.dhall file
... by just writing out a text file to $out/binary.dhall containing missing sha256:${HASH}
Optional: Delete the ./source.dhall file
... if the user did not request to keep the file. Omitting this file by default helps conserve space within the package store by not storing the same expression twice (as both a binary file and source code).
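Translated into a rough shell sketch (the deps/ layout and the $out output directory are placeholders for whatever your package manager provides; the dhall commands themselves are real):
# assumes a dhall binary built with -f-with-http (the first step above)
mkdir -p .cache/dhall                                                     # create the cache directory
cp deps/*/.cache/dhall/1220* .cache/dhall/                                # populate it from each dependency
export XDG_CACHE_HOME="$PWD/.cache"                                       # point the interpreter at it
dhall --alpha --file ./package.dhall > "$out/source.dhall"                # interpret and α-normalize
HASH=$(dhall hash --file "$out/source.dhall" | sed 's/^sha256://')        # obtain the expression's hash
mkdir -p "$out/.cache/dhall"
dhall encode --file "$out/source.dhall" > "$out/.cache/dhall/1220$HASH"   # the binary cache file
echo "missing sha256:$HASH" > "$out/binary.dhall"                         # the ./binary.dhall file
# optionally: rm "$out/source.dhall" if the user did not ask to keep it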
Packaging conventions
Once you have this function, there are a few conventions that can help simplify doing things "in the large":
By default, a package should build a project's ./package.dhall file
Make it easy to override the package version
Make it easy to override the file built within the package
In other words, if a user prefers to import individual files like https://prelude.dhall-lang.org/List/map instead of the top-level ./package.dhall file there should be a way for them to specify a dependency like Prelude.override { file = "./List/map"; } to obtain a package that builds and caches that individual file.
Conclusion
I hope that helps! If you have more questions about how to do this you can either ask them here or discuss further on our Discourse forum, especially on the thread where this idiom first originated:
Dhall Discourse - Offline use of Prelude
In Haskell: given an existing directory tree (with subdirectories) of source files, is there a way to have a .cabal or stack configuration file created automatically, with all the necessary dependencies (references to the modules imported inside the source files) already filled in, with no need to edit the file manually?
In other words, can I get a build file that works "straight out of the box", without the regular stack new / stack build etc. commands?
cabal init will create a file that lists all the modules in your source directory for you. But you will still need to provide the package dependencies yourself. This is because a module Foo.Bar.Baz may come from multiple packages -- hence the package you intend to import from must be explicitly specified.
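For example (the exact flag spelling varies between cabal-install versions, so check cabal init --help; the module names below stand in for whatever cabal init finds under src/):
$ cabal init --non-interactive --is-library --source-dir=src
$ grep -A2 'exposed-modules' *.cabal
    exposed-modules:  Foo.Bar.Baz
                      Foo.Quux
    build-depends:    base
The build-depends line starts out with little more than base; the packages providing your other imports still have to be added by hand.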
I am new to NixOS; this is my understanding of its configuration:
Configuration files created by the installer
/etc/nixos/configuration.nix :: the central point of system description, used by nixos-rebuild
/etc/nixos/hardware-configuration.nix :: to be included in the configuration.nix above
Configuration files for packages
<package>.nix on the nixpkgs GitHub :: configuration for each module (options are searchable on the NixOS packages page)
These are what I do not fully understand:
default.nix (anywhere in the filesystem) :: for nix-shell, like a .bashrc
~/.nixpkgs/config.nix :: per-user configuration overrides for nix-env
~/.config/<various>.nix :: ?? no idea
Am I understanding this right?
Where can I find more information on these configuration files?
Not all of these files would be called configuration files. The <package>.nix files, for example, are rather called derivations. What all these files share is the language in which they are written.
/etc/nixos/configuration.nix is indeed where you configure your system and ~/.nixpkgs/config.nix where you configure nix-env.
default.nix doesn't mean anything in particular; it is simply the default file selected by the commands nix-build and nix-shell when you give them a directory as an argument instead of a specific file. Note e.g. that the nixpkgs collection (on GitHub, as you noticed already) contains a lot of such default.nix files.
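For example, given a directory containing only the default.nix below, nix-build with no file argument picks it up automatically (runCommand just produces a trivial build product here; the store hash is elided):
$ cat default.nix
with import <nixpkgs> {};

# nix-build and nix-shell look for this file when given a directory (or nothing)
runCommand "hello" {} ''
  echo hello > $out
''
$ nix-build
/nix/store/…-hello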
To understand all of this better I advise you to read the Nix Pills (a long series, but worth it) and of course the NixOS, Nix and nixpkgs manuals.
I would like to move from one Salesforce dev org to another dev org using the ANT Migration Tool. I would like to auto-generate a package.xml file that covers all custom fields, custom objects and other custom components, so that I can move easily from the source org to the target org without facing dependency issues.
There are a lot of answers to this; I would like to rank them here.
The simplest way, I think, is to use Ben Edwards's Heroku service: https://packagebuilder.herokuapp.com/
Another option is to use the npm module provided by Matthias Rolke.
To grab a full package.xml use force-dev-tool, see: https://github.com/amtrack/force-dev-tool.
npm install --global force-dev-tool
force-dev-tool remote add mydev user pass --default
force-dev-tool fetch --progress
force-dev-tool package -a
You will now have a full src/package.xml.
JAR file provided by Kim Galant
Here's a ready-made Java JAR that you point at an org (through properties files) and tell which metadata types to look for; it then goes off, inventories your org, and builds a package.xml for you based on the types you've specified. It even has a handy feature allowing you to skip certain things based on a regular expression, so you can easily e.g. exclude managed packages, or certain custom namespaces (say you prefix a bunch of things that belong together with CRM_), from the generated package.
So a command line like this:
java -jar PackageBuilder.jar [-o <parameter file1>,<parameterfile2>,...] [-u <SF username>] [-p <SF password>] [-s <SF url>] [-a <apiversion>] [-mi <metadataType1>,<metadataType2>,...] [-sp <pattern1>,<pattern2>,...] [-d <destinationpath>] [-v]
will spit out a nice up-to-date package.xml for your ANT pleasure.
Another way is to use Ant: https://www.jitendrazaa.com/blog/salesforce/auto-generate-package-xml-using-ant-complete-source-code-and-video/
I had an idea to create a service competing with the ones mentioned above, but I dropped that project (I didn't finish the part that retrieves all the parts from reports and dashboards).
There is an extension for VS Code that allows you to choose components and generate a package.xml file using point and click:
Salesforce Package.xml generator for VS Code
https://marketplace.visualstudio.com/items?itemName=VignaeshRamA.sfdx-package-xml-generator
I am affiliated with this free VS Code extension as its developer.
I have an Erlang application I have been writing which uses the erldis library for communicating with Redis.
Being a bit of a newbie at actually deploying Erlang applications to production, I wanted to know if there is any way to 'bundle' these external libraries with the application rather than installing them into my system-wide /usr/lib/erlang/lib/ folder.
Currently my directory structure looks like...
\
--\conf
--\ebin
--\src
I have a basic Makefile that I stole from a friend's project, but I am unsure how to write Makefiles properly.
I suspect the answer could involve telling me how to write my Makefile properly rather than just which directory to plonk some external library code into.
You should really try to avoid project nesting whenever possible. It can lead to all sorts of problems because of how module/application versioning is structured within Erlang.
In my development environment, I do a few things to simplify dependencies and working on multiple projects. Specifically, I keep most of my projects sourced in a dev directory and create symlinks into an elib dir that is set in the ERL_LIBS environment variable.
~/dev/ngerakines-etap
~/dev/jacobvorreuter-log_roller
~/dev/elib/etap -> ~/dev/ngerakines-etap
~/dev/elib/log_roller -> ~/dev/jacobvorreuter-log_roller
For projects that are deployed, I've had package-rpm or package-apt make targets that create individual packages per project. Applications get boot scripts and init.d scripts for easy start/stop control, but libraries and dependency projects just get listed as package dependencies.
I use a mochiweb-inspired style. To see an example of this, get your copy of mochiweb:
svn checkout http://mochiweb.googlecode.com/svn/trunk/ mochiweb
and use
path/to/mochiweb/scripts/new_mochiweb.erl new_project_name
to create a sample project with this structure (feel free to delete everything inside src afterwards and use it for your own project).
It looks like this:
/
/ebin/
/deps/
/src/
/include/
/support/
/support/include.mk
Makefile
start.sh
ebin contains *.beam files
src contains *.erl files and local *.hrl files
include contains global *.hrl files
deps contains symlinks to root directories of dependencies
Makefile and include.mk take care of including the appropriate paths when the project is built.
start.sh takes care of including the appropriate paths when the project is run.
So by using symlinks in the deps directory you are able to fine-tune the versions of the libraries you use for every project. It is advisable to use relative paths, so that afterwards it is enough to rsync this structure to the production server and run it.
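As an illustration, a minimal start.sh in this layout could be as small as the following sketch (the application name my_project is invented):
#!/bin/sh
# prepend our own ebin plus every dependency's ebin to the code path
exec erl -pa ebin deps/*/ebin -boot start_sasl -s my_project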
On a more global scale I use the following structure:
~/code/erlang/libs/*/
~/code/category/project/*/
~/code/category/project/*/deps/*/
Here every symlink in deps points to a library in ~/code/erlang/libs/ or to another project in the same category.
The simplest way to do this would be to create a folder named erldir, put the beams you need into it, and then in your start script pass the -pa flag to the Erlang runtime to point out where it should fetch the beams.
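For instance (the paths and the app module my_app are made up):
$ mkdir erldir
$ cp /path/to/erldis/ebin/*.beam erldir/
$ erl -pa erldir -pa ebin -s my_app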
The correct way (at least if you buy into the OTP distribution model) would be to create a release using reltool (http://www.erlang.org/doc/man/reltool.html) or systools (http://www.erlang.org/doc/man/systools.html) which includes both your application and erldis.
Add the external libraries that you need, anywhere you want them, and add them to your ERL_LIBS environment variable. Separate the paths with a colon on Unix or a semicolon on DOS.
Erlang will add the "ebin"-named subdirs to its code loading path.
Have your *.app file point out the other applications it depends on.
This is a good halfway-there approach for setting up larger applications.
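A quick sketch of that setup (paths invented):
$ export ERL_LIBS="$HOME/dev/elib:/opt/erlang/libs"   # colon-separated on unix
$ erl
1> code:lib_dir(erldis).   % each lib's ebin subdir is now on the code path
"/opt/erlang/libs/erldis"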
Another way is to put your lib paths in ~/.erlang:
code:add_pathz("/Users/brucexin/sources/mochiweb/ebin").
code:add_pathz("/Users/brucexin/sources/webnesia/ebin").
code:add_pathz("./ebin").
code:add_pathz("/Users/brucexin/sources/erlang-history/ebin/2.15.2").