In Haskell: given an existing directory tree (with sub-directories) of source files.
Is there a way to have a .cabal or stack.yaml file created automatically, with all the necessary dependencies (derived from the import statements inside the source files) already filled in, so that no manual editing of the build file is needed?
In other words, can I get a build file that works "straight out of the box", without the usual stack new / stack build setup steps?
cabal init will create a .cabal file that lists all the modules in your source directory for you. But you will still need to provide the package dependencies yourself. This is because a module Foo.Bar.Baz may be provided by more than one package -- hence the package you intend to import from must be specified explicitly.
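For illustration, a generated .cabal file might look roughly like the sketch below (the project, module and package names are made up); the exposed-modules list is what cabal init can discover, while build-depends is the part you still have to fill in by hand:

name:                myproject
version:             0.1.0.0
build-type:          Simple
cabal-version:       >=1.10

library
  hs-source-dirs:      src
  exposed-modules:     Foo.Bar.Baz
                       Foo.Quux
  -- cabal init can enumerate the modules above, but it cannot know
  -- which packages provide their imports, so these are listed by hand:
  build-depends:       base >=4 && <5
                     , containers
  default-language:    Haskell2010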
Compiling my Agda code results in a src/MAlonzo directory being created. (Where src/MyProject is where my Agda code lives.) It contains a bunch of .hs (Haskell) and .o (object) files.
Is there anything in this directory that I should commit, or do people typically add /src/MAlonzo to their .gitignore?
I'm asking because I'm surprised that build artifacts are being put in the src directory instead of the _build directory. I wonder if there's a reason for that.
Yes, add it to your .gitignore. MAlonzo is the name of the GHC backend used for compiling and running Agda programs, and everything in that directory is automatically generated from your Agda source files.
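For example, a line like this in your .gitignore (adjust the path to your layout) keeps the generated files out of version control:

src/MAlonzo/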
I've already used devtools to create my package skeleton, then added a bunch of R code, metadata, documentation, etc. I would like to use rstan within this package. I understand that rstan::rstan.package.skeleton creates a package skeleton to facilitate this. So what is the best practice for augmenting an existing package with the structure necessary to use rstan from that package? Thank you.
I would say to use rstan.package.skeleton to create the skeleton in a temporary directory and then copy the relevant stuff it creates into the package you created with devtools. This would include:
cleanup and cleanup.win in the root of the directory
the tools directory
the exec directory
the inst/chunks subdirectory
the src directory
the R/stanmodels.R file
the DESCRIPTION file in the root of the directory
For the DESCRIPTION file, you may just have to combine it by hand with whatever DESCRIPTION file you have currently.
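That copy step might look roughly like this shell sketch (the skeleton name, temporary path and package path are all placeholders, and the exact rstan.package.skeleton arguments may differ by rstan version):

# generate the skeleton somewhere disposable
Rscript -e 'rstan::rstan.package.skeleton(name = "tmpstan", path = "/tmp")'

# copy the rstan-specific pieces into the existing devtools package
cd /path/to/mypackage
cp /tmp/tmpstan/cleanup /tmp/tmpstan/cleanup.win .
cp -r /tmp/tmpstan/tools /tmp/tmpstan/exec /tmp/tmpstan/src .
mkdir -p inst && cp -r /tmp/tmpstan/inst/chunks inst/
cp /tmp/tmpstan/R/stanmodels.R R/
# finally, merge /tmp/tmpstan/DESCRIPTION into your own DESCRIPTION by hand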
In Java/Groovy, as far as I know, a package has to be defined in the corresponding folder. This means that all class files stored in /a/b/c start with the line package a.b.c. Is this still necessary? With regard to convention over configuration, this isn't DRY...
What kind of problems would arise if this package declaration were optional?
While it is conventional for the directory structure to match the package structure, and certain problems arise if they don't match, it is in fact not a requirement that they match. This is also true of Java (though a lot of folks don't realize that).
Below is an example which demonstrates this.
groovydemo $ mkdir classes
groovydemo $
groovydemo $ cat src/groovy/com/demo/SomeClass.groovy
package com.somethingotherthandemo
class SomeClass {}
groovydemo $
groovydemo $ groovyc -d classes/ src/groovy/com/demo/SomeClass.groovy
groovydemo $ find classes -type f
classes/com/somethingotherthandemo/SomeClass.class
The reasons for using packages in Groovy (and Grails) are some of the same reasons they are used in Java.
Packages serve to organize classes into logical namespaces, typically by grouping collaborating classes together.
It helps avoid naming conflicts with other classes (either Java or Groovy).
In any non-trivial system where you have hundreds or thousands of classes, packages provide a very useful mechanism for organization and structure.
I think what you're saying is that the package name is implied by the directory the class is in, so why do you need to state it explicitly? This is only true in some cases (like Grails) where there's a convention that establishes the root of the source files (e.g. src/groovy).
But imagine I'm writing a Groovy app and have a file at /a/b/c/D.groovy: how can we tell whether the root of the source files is /a (and thus the package name is b.c) or the root is /a/b (and therefore the package name is just c)? As far as I can see, we can't, so the package name needs to be stated explicitly in the source file.
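A quick transcript in the same style as the example above illustrates the ambiguity (the path and class name are made up); declaring package c in the same file would compile just as happily, which is exactly why the compiler cannot infer the package from the path alone:

ambiguity $ cat /a/b/c/D.groovy
package b.c   // assumes the source root is /a
class D {}
ambiguity $
ambiguity $ groovyc -d classes /a/b/c/D.groovy
ambiguity $ find classes -type f
classes/b/c/D.class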
I am (very) new to qmake, but I would like
to use qmake to build the whole project automatically,
i.e. recursively check all subdirectories and build every file.
I also have a pch (precompiled header) file.
Is there a way to do this?
Thanks in advance!
The simplest way is to let qmake generate the project (.pro) file for you.
After making a backup copy of any existing *.pro files you may need to reference, go to the top level of your directory structure and issue the command qmake -project. This tells qmake to recurse the tree, locate everything it needs to build, and create a qmake project file from it.
Next, edit the generated .pro file. You will at least need to change the TEMPLATE line to be "lib" instead of "app". You will also want to specify the name of the TARGET. There may also be some other things you wish to change.
Now that you have a .pro file, you need to generate a Makefile. Run qmake again, but this time just say qmake without any arguments.
Finally, you should be able to just run make and have things build. [For future readers running Windows with the MingW tools, make should be replaced with mingw32-make]
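Put together, the sequence looks roughly like this (the TARGET name is a placeholder, and the last two .pro lines show qmake's precompile_header support, which is how you would handle the pch file mentioned in the question):

# from the top of the source tree
qmake -project          # recurse the tree and generate a .pro file

# edit the generated .pro file, for example:
#   TEMPLATE = lib
#   TARGET   = myproject
#   CONFIG  += precompile_header
#   PRECOMPILED_HEADER = stable.h

qmake                   # generate the Makefile from the .pro file
make                    # or mingw32-make on Windows with MinGW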
I have an Erlang application I have been writing which uses the erldis library for communicating with Redis.
Being a bit of a newbie at actually deploying Erlang applications to production, I wanted to know if there was any way to 'bundle' these external libraries with the application rather than installing them into my system-wide /usr/lib/erlang/lib/ folder.
Currently my directory structure looks like...
\
--\conf
--\ebin
--\src
I have a basic Makefile that I stole from a friend's project, but I am unsure how to write one properly.
I suspect this answer could involve telling me how to write my Makefile properly rather than just which directory to plonk some external library code into.
You should really try to avoid project nesting whenever possible. It can lead to all sorts of problems because of how module/application versioning is structured within Erlang.
In my development environment, I do a few things to simplify dependencies and working on multiple projects. Specifically, I keep most of my projects sourced in a dev directory and create symlinks into an elib dir that is listed in the ERL_LIBS environment variable.
~/dev/ngerakines-etap
~/dev/jacobvorreuter-log_roller
~/dev/elib/etap -> ~/dev/ngerakines-etap
~/dev/elib/log_roller -> ~/dev/jacobvorreuter-log_roller
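Setting the environment variable for that layout might look like this (the path is specific to this example); Erlang then adds the ebin directory of every application found under it to the code path:

export ERL_LIBS=$HOME/dev/elib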
For projects that are deployed, I've either had package-rpm or package-apt make targets that create individual packages per project. Applications get boot scripts and init.d scripts for easy start/stop controls, but libraries and dependency projects just get listed as package dependencies.
I use a mochiweb-inspired style. To see an example of this, get your copy of mochiweb:
svn checkout http://mochiweb.googlecode.com/svn/trunk/ mochiweb
and use
path/to/mochiweb/scripts/new_mochiweb.erl new_project_name
to create a sample project with this structure (feel free to delete everything inside src afterwards and use it for your own project).
It looks like this:
/
/ebin/
/deps/
/src/
/include/
/support/
/support/include.mk
Makefile
start.sh
ebin contains *.beam files
src contains *.erl files and local *.hrl files
include contains global *.hrl files
deps contains symlinks to root directories of dependencies
The Makefile and include.mk take care of including the appropriate paths when the project is built.
start.sh takes care of including the appropriate paths when the project is run.
So by using symlinks in the deps directory you are able to fine-tune the versions of the libraries you use for every project. It is advisable to use relative paths, so that afterwards it is enough to rsync this structure to the production server and run it.
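A start.sh for this layout typically boils down to something like the following sketch (the module and function in the -s flag are placeholders for your own application's entry point):

#!/bin/sh
# put the project's own ebin plus every dependency's ebin on the code path
exec erl -pa ebin -pa deps/*/ebin -boot start_sasl -s my_project start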
On a more global scale I use the following structure:
~/code/erlang/libs/*/
~/code/category/project/*/
~/code/category/project/*/deps/*/
Where every symlink in deps points to the library in ~/code/erlang/libs/ or to another project in the same category.
The simplest way to do this would be to create a folder named erldir, put the beams you need into it, and then use the -pa flag in your start script to tell the Erlang runtime where it should fetch the beams from.
The correct way (at least if you buy into the OTP distribution model) would be to create a release using reltool (http://www.erlang.org/doc/man/reltool.html) or systools (http://www.erlang.org/doc/man/systools.html) which includes both your application and erldis.
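A minimal reltool.config for that approach might look roughly like this (the release name, version and lib_dirs path are assumptions; see the reltool documentation linked above for the full set of options):

{sys, [
  {lib_dirs, ["/path/to/your/deps"]},
  {rel, "myapp", "1.0.0",
   [kernel, stdlib, sasl, erldis, myapp]},
  {boot_rel, "myapp"},
  {app, myapp,  [{incl_cond, include}]},
  {app, erldis, [{incl_cond, include}]}
]}.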
Add the external libraries that you need, anywhere you want them, and add them to your ERL_LIBS environment variable. Separate the paths with a colon on Unix or a semicolon on Windows.
Erlang will add the "ebin"-named subdirs to its code loading path.
Have your *.app file point out the other applications it depends on.
This is a good halfway-there approach for setting up larger applications.
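The relevant part of the *.app file is the applications entry; for example (the application name, version and callback module are placeholders):

{application, myapp, [
  {description, "My application"},
  {vsn, "1.0.0"},
  {applications, [kernel, stdlib, erldis]},
  {mod, {myapp_app, []}}
]}.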
Another way is to put your lib paths in your ~/.erlang file.
code:add_pathz("/Users/brucexin/sources/mochiweb/ebin").
code:add_pathz("/Users/brucexin/sources/webnesia/ebin").
code:add_pathz("./ebin").
code:add_pathz("/Users/brucexin/sources/erlang-history/ebin/2.15.2").