Nix user packages vs system packages

I use Vagrant + VirtualBox and a Debian 8 / Ubuntu image for my development environments.
I want to use Nix for provisioning, but I can't get NixOS to work completely right (i.e. with networking, provisioning, ...), though I'd be more than happy to switch. So I'm falling back to Nix + Debian 8 or Nix + Ubuntu to mimic the production server.
I can't quite grasp the difference between user packages and system packages, and which is preferable in my context so that I can reuse the setup on a production server.
Should I install packages with nix-env -i, or edit the configuration.nix file?

Whether you should install packages into the system profile or into a user profile depends on what you'd like to achieve. There is no single right answer. Instead, both types of installing packages have different pros and cons.
System packages are defined in a global configuration file that typically resides in /etc/nixos/configuration.nix. Users other than root typically cannot edit that file and thus cannot add or remove packages from the list. Packages defined in there will be installed as part of nixos-rebuild, and they'll be available in $PATH to every user of your system. If any of those packages don't compile for some reason, however, then you cannot rebuild your system, i.e. you may find yourself in a situation where you cannot update your kernel because, say, nano doesn't compile. People who follow the master branch of the Nixpkgs git repository may experience this problem frequently, but it doesn't happen so much for users of the unstable or the release-x.y channels.
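For illustration, declaring system packages in configuration.nix looks roughly like this (a minimal sketch; the package names are just examples):

# /etc/nixos/configuration.nix (fragment)
environment.systemPackages = with pkgs; [
  git
  wget
  nano
];

After the next nixos-rebuild switch, those packages appear in every user's $PATH.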
User packages can be installed, updated, and de-installed by any user on your system without the need for super-user privileges. One user's packages will not be visible to another user (unless she makes a conscious effort to import them into $PATH). Thus, it's possible for two users to install two different versions of the same package.
If a user does not regularly update her profile, old packages that cannot be garbage collected will accumulate in your Nix store. This may be a problem or a feature, depending on how you look at it. System packages, on the other hand, are all updated every time you update the system, so those tend to have a smaller disk space footprint.
Personally, I prefer to keep my system profile as small as possible; I install packages mostly into user profiles with nix-env. I know many people, however, who do the exact opposite and we all get along fine. :-)
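For reference, the user-profile workflow looks roughly like this (a sketch; the exact attribute path depends on your channel name, e.g. nixos.git on NixOS):

nix-env -iA nixpkgs.git   # install into the calling user's profile, no root needed
nix-env -q                # list what this profile contains
nix-env -u                # upgrade everything in this profile
nix-env -e git            # uninstall
nix-collect-garbage -d    # optionally delete old generations afterwards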

Related

Is there a way to install just Mosquitto Pub?

I'm working on a Linux system (an OpenWRT-based version) that has very little storage (<3 MB) and no active internet connection, but I need to be able to publish some outputs to an MQTT broker. Is there a way for me to install just the publisher part of Mosquitto to save space, or another way to solve this issue?
Without a LOT more information about the system this question is basically impossible to answer, e.g. we have no idea what OS is being used...
But for a system with such tightly constrained storage, your best option will probably be to build the components you need from scratch; that way you have complete control over what gets installed.
You could build the Mosquitto tools and then strip them, before copying just the binary you want (and any required libraries) to the system.
If you install from pretty much any Linux package management system, you are likely to get all the tools plus man pages, which will inflate the install footprint.
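A rough sketch of that approach (hedged; the exact make options, and whether you need the OpenWRT cross-compilation toolchain, depend on the Mosquitto version and the target):

git clone https://github.com/eclipse/mosquitto.git
cd mosquitto
make WITH_TLS=no WITH_DOCS=no     # skip TLS and man pages to shrink the build
strip client/mosquitto_pub
# copy client/mosquitto_pub (plus libmosquitto.so.* unless statically linked)
# to the target device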
EDIT -
But all that said, a quick look at the available packages for OpenWRT suggests that the existing packages, which include both the broker and the command line tools, would use about 129 KB when installed (99 KB + 30 KB; less if you don't need SSL: 85 KB + 28 KB), assuming the prerequisites are already installed:
https://openwrt.org/packages/table/start?dataflt%5BDescription_wiki*%7E%5D=mosquitto
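If those sizes are acceptable, installing the pre-built client package is straightforward (package names as listed in the table above; check your OpenWRT release):

opkg update
opkg install mosquitto-client-nossl   # or mosquitto-client-ssl if you need TLS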

Docker query on containerizing

Our requirement is to create a container for a legacy app on Docker.
We don't have the operating system support or application server support available, nor do we have the knowledge to build them from scratch.
But we do have a physical instance of the legacy app running in our farm.
We could get an ISO image from our server team if required. Our question is: if we get this ISO image, can we export it as a Docker image?
If yes, please let me know if there is any specific procedure or steps associated with it.
If no, please tell me why, and the possible workarounds.
if we get this ISO image can we export this as a docker image?
I don't think there is an easy way (like push-the-export-button) to do this. Explanation follows...
You are describing a procedure taking place in the Virtual Machine world. You take a snapshot of a server, move the .iso file somewhere else and create a new VM that will run on a Hypervisor.
Containers are not VMs. They "contain" all the bytes that a service needs to run but not a whole operating system. They are supposed to run as processes on the host.
Workarounds:
You will have to get your hands dirty. This means that you will have to find out what the legacy app uses (for example Apache + PHP + MySql + app code) and build it from scratch with Docker.
Some thoughts:
Containers are supposed to be lightweight. For example, one might use one container for the database, another one for Apache, and so on. Your case looks like you are moving towards a fat container that has everything inside.
Depending on what the legacy technology is, you might hit a wall... For example, if we are talking about something running on old PHP and MySQL, you might find ready-to-use images on hub.docker.com. But if the legacy app is a financial system written in COBOL, I don't know what your starting point might be...
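To make the one-container-per-service idea concrete, here is a hedged docker-compose sketch, assuming the stack really is something like Apache + PHP + MySQL (the image names and settings are illustrative only):

# docker-compose.yml
version: "3"
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder
  web:
    build: .                         # a Dockerfile containing Apache/PHP and the app code
    ports:
      - "8080:80"
    depends_on:
      - db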
You will need to reverse engineer the application dependencies from the artifacts that you have access to. This means recovering the language-specific dependencies (whether Python, Java, PHP, Node, etc.) and any operating-system-level packages/dependencies that are required.
Essentially you are rebuilding the contents of that ISO image inside your Dockerfile, using OS package installation tools like apt, language-level tools like pip, PECL, PEAR, Composer, or Maven, and finally the files that make up the app code.
So, for example: a PHP application might depend on having build-essential and php-mysql installed in the OS. Then the app may depend on packages like Twig and Monolog loaded through Composer. If you are using SASS, you may need to install Ruby as well.
Your job is to track all these down and create a Dockerfile that reproduces the ISO image. If you are using a common stack, like a J2EE app in Tomcat or a PHP app fronted by Apache or nginx, there will be base Docker images that will get you most of the way to where you need to go.
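For instance, a hedged Dockerfile sketch for the PHP example above might look like the following (the base image, extensions, and packages are assumptions to be adjusted to the real app):

# Dockerfile
FROM php:7.4-apache                      # common base image: Apache + PHP
RUN apt-get update \
    && apt-get install -y --no-install-recommends libzip-dev \
    && docker-php-ext-install mysqli pdo_mysql zip \
    && rm -rf /var/lib/apt/lists/*
COPY --from=composer:2 /usr/bin/composer /usr/local/bin/composer
COPY composer.json composer.lock /var/www/html/
RUN composer install --no-dev --working-dir=/var/www/html
COPY . /var/www/html/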
It does look like there are some tools that can do this for you automatically: Dependency Walker equivalent for Linux?. I can't vouch for any of them. But you can also use command line tools. For example, this will give you a list of all the user-installed packages on a Fedora system:
sudo dnf history userinstalled
When an app is using a dependency manager like composer or pip, there is usually a file that lists all the language specific dependencies.
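A few more command-line examples of the same kind of dependency archaeology (which ones apply depends on the distro and the language stack on the original machine):

dpkg --get-selections              # Debian/Ubuntu: every installed package
pip freeze > requirements.txt      # Python: pin the installed libraries
composer show --direct             # PHP: direct Composer dependencies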
At the end of the process you'll have a portable legacy app that can be easily deployed anywhere with a minimal footprint.
As one of the comments rightly points out, creating a VM from the ISO image is another way forward that will be much easier to accomplish. The application dependencies won't be explicit, but maybe that's ok for your use case.

SaltStack Caches installing package files and reproduce installation later

When we use a YAML file to install packages, like the one below:
nginx:
  pkg.installed
Is there any convenient way to cache the installing binary files used by SaltStack, and reuse them with Saltstack later? With these caches, we can ask SaltStack to precisely reproduce the installing (provisioning) process on another machine (with the same Linux distribution installed) at the binary level.
We know that there are other tools that provide this kind of binary-level reliability in different ways. For example, docker images and AWS AMIs; Nix manages packages by storing both binary packages and hash values over them.
The pkg.installed state uses the operating system's package manager to check if the package is installed and install the package if needed. So there's nothing to cache unless your operating system's package manager provides that type of functionality.
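If the goal is simply repeatability at the package level (rather than caching the binaries themselves), one partial workaround is to pin an explicit version in the state, for example (the version string here is illustrative):

nginx:
  pkg.installed:
    - version: 1.14.0-0ubuntu1.11

The package manager will then install exactly that version on every minion, provided the distribution's repositories still serve it.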

How can I preinstall software on travis-ci?

We use travis-ci for continuous integration. I'm troubled by the fact that our build process takes too long (~30 minutes). We depend on several Ubuntu packages which we fetch using apt-get, among others python-pandas.
We also have some of our own debs which we fetch over HTTPS and install with dpkg. Finally, we have several pip/PyPI requirements, such as Django, Flask, Werkzeug, numpy, pycrypto, selenium.
It would be nice to be able to at least pre-package some of these requirements. Does Travis support something like this? How can I prepackage some of these requirements? Is it possible to build a custom Travis base VM and start the build from there (perhaps using Docker)? The apt-get requirements from the default Ubuntu precise repository, as well as the pip requirements, in particular should be easy to include.
So while this question is already answered, it doesn't actually provide a solution path. You can use cache directives in Travis to cache your built packages for future Travis runs.
cache:
  directories:
    - $HOME/.pip-cache/
    - $HOME/virtualenv/python2.7
install:
  - pip install -r requirements.txt --download-cache "$HOME/.pip-cache"
Now your package content is saved for your next travis build. You can similarly store slow-to-retrieve resources in other directories and cache them.
Currently Travis-CI doesn't support such a feature. There are related issues currently open, though, such as custom VMs, running Docker in an OpenVZ container (Spotify seems to have a somewhat working example; links are in that issue), using Linux Containers (LXC), and using KVM.
Some of those have workarounds mentioned in the issues, I'd give those a try until something more substantial is supported by Travis-CI. I'd also suggest reaching out to Travis-CI support and see if they have any suggestions (maybe there's something coming out soon that could help).

Distribute CouchDB as part of a Rails app?

I am working on a Rails project and the Architect has asked me to investigate bundling CouchDB into the application so that it can be deployed by Capistrano across multiple platforms and managed by Rake.
My expectation was that I could set up the Erlang VM on the various environments and then distribute the CouchDB application with Capistrano. However, I can't find any option to download CouchDB without the Erlang runtime. I can, however, see an option to build CouchDB from source, which I assume is platform dependent.
I am new to Erlang and CouchDB; am I missing something? Is there a way to bundle CouchDB into a Rails app and distribute it across multiple platforms?
Have a look at some of the tools for provisioning Rails services (such as passenger_stack). Passenger Stack will download, make and install the ancillary services for your Rails app ... might be something you can adapt or use as a base to install Erlang and CouchDB.
There are a bunch of alternatives to this as well. Deprec contains recipes for provisioning with Capistrano. Essential idea is the same though.
I think you will not find a silver bullet. Distributing Erlang is similar to distributing Ruby; however Ruby has the advantage of being included in many default OS installs.
I know ejabberd has pre-built binaries for many distros. You might investigate how they do it.
The correct solution probably depends on how many "multiple platforms" you are targeting. If it's "Ubuntu 8.04 plus Ubuntu 10.04" that is different from several Linux distros, plus OSX, plus FreeBSD. Typically only open source projects support those many platforms and ideally you can get patches from the community. For internal projects, I have seen teams standardize on a Linux build and use virtualization on Mac/Windows.
But back to your question:
Building from source is a reasonable option. You could build when you deploy, or pre-build for all platforms and then deploy the binaries. Both Erlang and CouchDB use Autoconf, which means you can --prefix them to a dedicated location (more-or-less standalone apps). It will take some trial and error, but your build script can:
1. Do the platform-specific dependency setup: gcc, make, autoconf, everything you need. apt-get on Ubuntu, yum on RHEL, MacPorts, whatever you need to get a common platform on your development and deployment systems.
2. Compile and install the rest using the tools from step 1. Use configure --prefix=/opt/my_software to keep it all in one place. (You can totally uninstall with rm -rf.)
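As a rough illustration of those two steps (a hedged sketch; the dependency package names, archive names, and configure flags are assumptions that vary by platform and by Erlang/CouchDB version):

PREFIX=/opt/my_software

# Step 1: platform-specific build dependencies (Ubuntu example)
sudo apt-get install -y build-essential autoconf libncurses5-dev libssl-dev \
    libicu-dev libcurl4-openssl-dev libmozjs185-dev

# Step 2: build Erlang, then CouchDB, both confined to $PREFIX
(cd otp_src_* && ./configure --prefix="$PREFIX" && make && make install)
export PATH="$PREFIX/bin:$PATH"
(cd apache-couchdb-* && ./configure --prefix="$PREFIX" && make && make install)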
This is a medium-level challenge, mostly trial and error. If possible, work within a build framework such as Rake or Toby's suggestion, passenger_stack. Good luck!
