I'm trying to get started with a simple Hello World derivation using the Nix manual.
But it's not clear to me how to go about building it.
Is there somewhere I can download the source files from so I don't
have to copy them line-by-line?
Is there a way I can nix-build it
without having to modify anything global (e.g. pkgs/top-level/all-packages.nix)?
Where is pkgs/top-level/all-packages.nix?
One option is to clone the nixpkgs repository and then build the hello package recipe provided in that repository:
git clone https://github.com/NixOS/nixpkgs
cd nixpkgs
nix-build -A hello
Doing it this way, you don't have to modify all-packages.nix, because it already has an entry for hello. If you do want to modify all-packages.nix, you can find it in the repository you just cloned: take its path (e.g. ~/nixpkgs) and append pkgs/top-level/all-packages.nix. You can see a copy of that file here:
https://github.com/NixOS/nixpkgs/blob/master/pkgs/top-level/all-packages.nix
When you start building your own software that is not part of nixpkgs, you might choose to write your own default.nix file in your own repository and put a line like this in it to import nixpkgs, using the NIX_PATH environment variable:
let
nixpkgs = import <nixpkgs> { };
...
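As a rough sketch (assuming you have completed that default.nix with a call to stdenv.mkDerivation or similar, and that it installs a hello binary), you can then build and run it locally without touching anything global:

# Build the derivation described by ./default.nix; the output appears as a ./result symlink
nix-build

# Or point the <nixpkgs> lookup at a specific checkout instead of relying on NIX_PATH
nix-build -I nixpkgs=$HOME/nixpkgs

# Run the program the derivation installed (the exact path depends on your derivation)
./result/bin/hello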
Note that I am running NixOS; I'm not sure whether my answer is valid for other (non-Linux) systems.
Is there somewhere I can download the source files from so I don't have to copy them line-by-line?
You could browse nixpkgs; all the source files are there.
Is there a way I can nix-build it without having to modify anything global (eg pkgs/top-level/all-packages.nix)?
David Grayson already gave an excellent answer; I would just like to add some information.
nix-build will look for default.nix in the current directory, and the build result will be exposed as a symlink named result in the current working directory.
Another way to test whether a Nix expression builds is nix-shell, which also looks for default.nix or shell.nix in the current directory. If the build succeeds, you get a shell prompt with your packages available.
Both nix-build and nix-shell have a -I argument that you can point at any Nix repository, including a remote one.
For example, if I use nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/master.tar.gz -p hello, Nix will fetch hello from the binary cache if it exists (or build it using the current master branch expression) and give me a shell in which hello is available.
$ nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/master.tar.gz -p hello
downloading ‘https://github.com/NixOS/nixpkgs/archive/master.tar.gz’... [12850/0 KiB, 1404.5 KiB/s]
unpacking ‘https://github.com/NixOS/nixpkgs/archive/master.tar.gz’...
these paths will be fetched (0.04 MiB download, 0.19 MiB unpacked):
/nix/store/s3vlmp0k8b07h0r81bn7lh24q2mqcai8-hello-2.10
fetching path ‘/nix/store/s3vlmp0k8b07h0r81bn7lh24q2mqcai8-hello-2.10’...
*** Downloading ‘https://cache.nixos.org/nar/1ax9cr6qqqqrb4pdm1mpqn7whm6alwp56dvsh5hpgs5g8rrpnjxd.nar.xz’ (signed by ‘cache.nixos.org-1’) to ‘/nix/store/s3vlmp0k8b07h0r81bn7lh24q2mqcai8-hello-2.10’...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 40364 100 40364 0 0 40364 0 0:00:01 0:00:01 --:--:-- 37099
[nix-shell:~]$ which hello
/nix/store/s3vlmp0k8b07h0r81bn7lh24q2mqcai8-hello-2.10/bin/hello
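The same -I override works with nix-build too; a sketch, using the same hello attribute and tarball URL purely as an illustration:

nix-build -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/master.tar.gz '<nixpkgs>' -A hello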
Where is pkgs/top-level/all-packages.nix?
There is the NIX_PATH environment variable. The nixpkgs portion of it will point you to your current repository:
$ echo $NIX_PATH
nixpkgs=/nix/var/nix/profiles/per-user/root/channels/nixos/nixpkgs:nixos-config=/etc/nixos/configuration.nix:/nix/var/nix/profiles/per-user/root/channels
My all-packages.nix is located at /nix/var/nix/profiles/per-user/root/channels/nixos/nixpkgs/pkgs/top-level/all-packages.nix
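If you would rather not parse NIX_PATH by eye, you can also ask Nix to resolve the nixpkgs entry for you; a small sketch (the resulting path will of course differ from system to system):

nix-instantiate --find-file nixpkgs
ls "$(nix-instantiate --find-file nixpkgs)/pkgs/top-level/all-packages.nix"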
I have a container that I want to export as a .tar file. I have been using podman run with a tar --exclude=/dir1 --exclude=/dir2 … command that outputs to a file located on a bind-mounted host directory. But recently this has been giving me some tar: .: file changed as we read it errors, which podman/docker export would avoid. Besides, I suppose the export is more efficient. So I'm trying to migrate to using the export, but the major obstacle is that I can't seem to find a way to exclude paths from the tar stream.
If possible, I'd like to avoid modifying a tar archive already saved on disk, and instead modify the stream before it gets saved to a file.
I've been banging my head for multiple hours, trying useless advice from ChatGPT, looking at cpio, and attempting to pipe the podman export into a tar --exclude … command. With the last one I did have some small success at one point, but I couldn't make tar save the result to a specifically named file.
Any suggestions?
(Note: I make no distinction between docker and podman here, as their export commands are exactly the same, and mentioning both is useful for searchability.)
I'm running Julia on the Raspberry Pi 4. For what I'm doing, I need Julia 1.5, and thankfully there is a Docker image of it here: https://github.com/Julia-Embedded/jlcross
My challenge is that, because this is a work-in-progress development I find myself adding packages here and there as I work. What is the best way to persistently save the updated environment?
Here are my problems:
I'm having a hard time wrapping my mind around volumes that will save packages from Julia's package manager and keep them around the next time I run the container
It seems kludgy to commit my docker container somehow every time I install a package.
Is there a consensus on the best way or maybe there's another way to do what I'm trying to do?
You can persist the state of downloaded & precompiled packages by mounting a dedicated volume into /home/your_user/.julia inside the container:
$ docker run --mount source=dot-julia,target=/home/your_user/.julia [OTHER_OPTIONS]
Depending on how (and by which user) julia is run inside the container, you might have to adjust the target path above to point to the first entry in Julia's DEPOT_PATH.
You can control this path by setting it yourself via the JULIA_DEPOT_PATH environment variable. Alternatively, you can check whether it is in a nonstandard location by running the following command in a Julia REPL in the container:
julia> println(first(DEPOT_PATH))
/home/francois/.julia
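If you would rather not guess that location, another option (a sketch: the /depot path and the volume name are arbitrary choices, not anything the image requires) is to pin the depot explicitly when starting the container:

$ docker run -e JULIA_DEPOT_PATH=/depot --mount source=dot-julia,target=/depot [OTHER_OPTIONS]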
You can manage the packages and their versions via a Julia Project.toml file.
This file keeps both the list of your dependencies and, through its [compat] section, their version constraints.
Here is a sample Julia session:
julia> using Pkg
julia> pkg"generate MyProject"
Generating project MyProject:
MyProject\Project.toml
MyProject\src/MyProject.jl
julia> cd("MyProject")
julia> pkg"activate ."
Activating environment at `C:\Users\pszufe\myp\MyProject\Project.toml`
julia> pkg"add DataFrames"
Now the last step is to add package version information to your Project.toml file. We start by checking the version number that currently works:
julia> pkg"st DataFrames"
Project MyProject v0.1.0
Status `C:\Users\pszufe\myp\MyProject\Project.toml`
[a93c6f00] DataFrames v0.21.7
Now you want to edit the [compat] section of the Project.toml file to pin that version number to exactly v0.21.7:
name = "MyProject"
uuid = "5fe874ab-e862-465c-89f9-b6882972cba7"
authors = ["pszufe <pszufe#******.com>"]
version = "0.1.0"
[deps]
DataFrames = "a93c6f00-e57d-5684-b7b6-d8193f3e46c0"
[compat]
DataFrames = "= 0.21.7"
Note that in the last line the equals sign appears twice: the one inside the quotes pins the exact version number. See also https://julialang.github.io/Pkg.jl/v1/compatibility/.
Now, in order to reuse that structure (e.g. in a different Docker container, when moving between systems, etc.), all you do is:
cd("MyProject")
using Pkg
pkg"activate ."
pkg"instantiate"
Additional note
Also have a look at the JULIA_DEPOT_PATH variable (https://docs.julialang.org/en/v1/manual/environment-variables/).
When moving installations between Docker containers it can also be convenient to control where all your packages are actually installed. For example, you might want to copy the JULIA_DEPOT_PATH folder between two containers that have the same Julia installation to avoid the time spent installing packages, or you could be building the Docker image without an internet connection, etc.
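As a rough sketch of that idea (the container names and paths below are made up for illustration), you can move an existing depot from one container to another with docker cp:

# Copy the depot out of a container that already has the packages installed
docker cp julia-old:/root/.julia ./julia-depot
# ...and copy its contents into the depot location of another container with the same Julia version
docker cp ./julia-depot/. julia-new:/root/.julia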
In my Dockerfile I simply install the packages just like you would do with pip:
FROM jupyter/datascience-notebook
RUN julia -e 'using Pkg; Pkg.add.(["CSV", "DataFrames", "DataFramesMeta", "Gadfly"])'
Here I start with a base data science notebook image, which includes Julia, and then call Julia from the command line, instructing it to execute the code needed to install the packages. The only downside for now is that package precompilation is triggered each time I load the container in VS Code.
If I need new packages, I simply add them to the list.
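To use it, build and run the image as usual; a sketch (the image tag is arbitrary, and 8888 is the default Jupyter port for this base image):

docker build -t my-julia-notebook .
docker run -p 8888:8888 my-julia-notebook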
I'm sort of new to programming (not really, but I'm still learning - aren't we all?). Although I know Java and Python and sort of know C, C++, JS, C#, HTML, CSS, etc. (and I can navigate pretty well in the terminal), I am not familiar with what $PATH is in the terminal.
I've been using the Linux terminal and Mac terminal much more frequently than I used to (if I even did at all two years ago), and I know that for Python it wants you to "export" its path like PATH=/path/to/python/bin:${PATH}; export PATH. However, I don't even know what that does. I tried to find out, but all I could find were people saying "export this path and export that one."
So, what is it and why use it? I understand that (if you do it for Python), it basically makes 'python' (or 'python2' or 'python3') a variable, but I just don't understand the concept of what it is.
man bash describes it as:
PATH
The search path for commands. It is a colon-separated list of
directories in which the shell looks for commands (see COMMAND
EXECUTION below). A zero-length (null) directory name in the
value of PATH indicates the current directory. A null directory
name may appear as two adjacent colons, or as an initial or
trailing colon. The default path is system-dependent, and is
set by the administrator who installs bash. A common value is
'/usr/gnu/bin:/usr/local/bin:/usr/ucb:/bin:/usr/bin'.
When you run a command, like python, the operating system tries to find the python program in the list of directories stored in PATH.
Suppose your PATH is /usr/local/bin:/foo:/bar:/baz:/usr/bin. When you try to run the python command, the operating system will look for an executable named python in those directories, in order. On Linux, you can watch it do this with the strace command:
$ PATH=/usr/local/bin:/foo:/bar:/baz:/usr/bin strace -f /bin/bash -c 'python --version' 2>&1 | grep 'stat.*python'
stat("/usr/local/bin/python", 0x7fff98b63d00) = -1 ENOENT (No such file or directory)
stat("/foo/python", 0x7fff98b63d00) = -1 ENOENT (No such file or directory)
stat("/bar/python", 0x7fff98b63d00) = -1 ENOENT (No such file or directory)
stat("/baz/python", 0x7fff98b63d00) = -1 ENOENT (No such file or directory)
stat("/usr/bin/python", {st_mode=S_IFREG|0755, st_size=4864, ...}) = 0
As soon as python is found in /usr/bin/python, the search stops, and the program runs.
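This is also all the export line from the question does: it prepends a directory to that search list so executables there are found first. A minimal sketch (the directory is just an example):

# Prepend a directory to PATH for the current shell session
export PATH="/path/to/python/bin:$PATH"

# Check which executable the shell will now pick first
which python

# To make it permanent, the same export line typically goes into ~/.bashrc or ~/.profile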
I followed the instructions for installing the documentation from the source build (by compiling the documentation scheme in CorePlotExamples) but it fails when trying to compile the documentation with the following errors.
3068: protocol_c_p_t_bar_plot_data_source-p.html
3069: protocol_c_p_t_scatter_plot_data_source-p.html
3070: _c_p_t_utilities_8m.html#a794f89cd14d4cfb21bf8c050b2df8237
3071: category_c_p_t_layer_07_c_p_t_platform_specific_layer_extensions_08.html
3072: interface_c_p_t_line_style.html#a4013bcb6c2e1af2e37cfabd7d8222320
3073: _c_p_t_utilities_8h.html#ae826ae8e3f55a0aa794ac2e699254cad
Loading symbols from /Users/GeoffCoopeMP/Downloads/core-plot-master-3/framework/CorePlotDocs.docset/html/com.CorePlot.Framework.docset/Contents/Resources/Tokens.xml
1000 tokens processed...
2000 tokens processed...
3000 tokens processed...
4000 tokens processed...
5000 tokens processed...
* 5145 tokens processed ( 1.8 sec)
* 20 tokens ignored
Linking up related token references
Sorting tokens
rm -f com.CorePlot.Framework.docset/Contents/Resources/Documents/Nodes.xml
rm -f com.CorePlot.Framework.docset/Contents/Resources/Documents/Info.plist
rm -f com.CorePlot.Framework.docset/Contents/Resources/Documents/Makefile
rm -f com.CorePlot.Framework.docset/Contents/Resources/Nodes.xml
rm -f com.CorePlot.Framework.docset/Contents/Resources/Tokens.xml
mkdir -p ~/Library/Developer/Shared/Documentation/DocSets
cp -R com.CorePlot.Framework.docset ~/Library/Developer/Shared/Documentation/DocSets
cp: /Users/GeoffCoopeMP/Library/Developer/Shared/Documentation/DocSets/com.CorePlot.Framework.docset: Not a directory
make: *** [install] Error 1
find: /Users/GeoffCoopeMP/Library/Developer/Shared/Documentation/DocSets/com.CorePlot.Framework.docset/Contents/: Not a directory
false
Showing first 200 notices only
Command /bin/sh emitted errors but did not return a nonzero exit code to indicate failure
I found the com.CorePlot.Framework.docset files (7 KB) but noticed the Kind is "Unix executable file" rather than the expected "Documentation Set" like other Xcode help files.
The docset files are also 7 KB in the zip file download, under the documentation folder, and the Kind is shown as "Unix executable file" there too.
Under my user Library folder I can see the docsets.
I also noticed that the docsets can live within the Xcode.app contents, but placing these files there didn't work either.
So, is this 7 KB file the right one? Should its Kind be "Documentation Set" rather than "Unix executable file"? Why does the documentation not compile in Xcode yet still generate the files?
I am using Xcode 5.1.1, Doxygen 1.8.7, Graphviz 2.36, and the Core Plot 2.0 source from GitHub.
Any help would be much appreciated as I am trying to learn how to use this excellent SDK.
The Core Plot docsets should each be around 70 MB in size. A "docset" is a package, which is a special type of folder treated as a single file in the Finder. When building the Core Plot documentation, Doxygen creates the docset folder inside the Core Plot "framework" folder and copies it to your library from there.
Did the docset get built in the "framework" folder? Are there any aliases or file links in the path to the Core Plot folder that might be confusing Doxygen or the cp command?
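Since cp complained that the destination is "Not a directory", it may be that a stray regular file is sitting where the docset folder should go. A diagnostic sketch (the paths come from the log above; double-check before deleting anything):

# See what is actually at the destination path
file ~/Library/Developer/Shared/Documentation/DocSets/com.CorePlot.Framework.docset
ls -la ~/Library/Developer/Shared/Documentation/DocSets/
# If that name turns out to be a small regular file rather than a folder, remove it and
# rebuild the documentation so that cp -R can recreate it as a proper directory
rm ~/Library/Developer/Shared/Documentation/DocSets/com.CorePlot.Framework.docset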
We are using Windows as a development system and Ant to create the platform-specific bundles. For the macOS specific bundle (.tar.gz file) we are using the tar task. I want to create a symbolic link in the output .tar.gz file which points to another file in the same .tar.gz file. Can this be done using Ant?
To the best of my knowledge stock Ant can't do this.
A bit late, but a suggestion: you can create tar archives containing symlinks from the Windows command line using Cygwin.
$ ln -s abc def
$ tar cf test.tar abc def
$ tar tvf test.tar
-rw-r--r-- shegny/Domain Users 0 2013-04-26 10:25 abc
lrwxrwxrwx shegny/Domain Users 0 2013-04-26 10:25 def -> abc
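If you need to drive this from the Ant build on the Windows side, the same commands can be run non-interactively through Cygwin's bash (for example from an Ant <exec> task); a sketch in which the Cygwin install path, directories, and archive name are all assumptions:

C:\cygwin64\bin\bash -lc "cd /cygdrive/c/build/bundle && ln -s app/bin/tool tool && tar czf ../bundle.tar.gz ."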
You cannot create a symbolic link on Windows. Therefore, the Symlink Ant task would not work properly. One option would be to run a continuous build server on a Mac machine. Then, you could run that task at regular intervals and the Symlink Ant task would work.