How do I install Jenkins on a Solaris server? I found articles saying this cannot be done because Jenkins has discontinued support for Solaris.
Even though the official IPS repositories for Solaris are discontinued, you can still run Jenkins on Solaris via the Jenkins web application (jenkins.war); a minimal launch command follows the quote below. To quote from the Jenkins installation documentation:
Solaris, OmniOS, SmartOS, and other siblings
Generally it should suffice to install Java 8 and download the jenkins.war and run it as a standalone process or under an application server such as Apache Tomcat.
Some caveats apply:
Headless JVM and fonts: For OpenJDK builds on minimalized-footprint systems, there may be issues running the headless JVM, because Jenkins needs some fonts to render certain pages.
ZFS-related JVM crashes: When Jenkins runs on a system detected as SunOS, it tries to load integration for advanced ZFS features using the bundled libzfs.jar, which maps calls from Java to the native libzfs.so routines provided by the host OS. Unfortunately, that library was made for the binary utilities built and bundled by the OS along with it at the same time, and was never intended as a stable interface exposed to consumers. As the forks of the Solaris legacy (including ZFS, and later the OpenZFS initiative) evolved, different host operating systems came to provide many different binary function signatures, and when Jenkins's libzfs.jar invoked the wrong signature, the whole JVM process crashed.
A solution was proposed and integrated into jenkins.war as of weekly release 2.55 (not yet in any LTS to date), which enables the administrator to configure which signature should be used for each function known to have different variants, apply that to their application server's initialization options, and then run and update the generic jenkins.war without further workarounds. See the libzfs4j Git repository for more details, including a script that tries to "lock-pick" the configuration needed for your particular distribution (in particular, if your kernel updates bring a new, incompatible libzfs.so).
Also note that forks of the OpenZFS initiative may provide ZFS on various BSD, Linux, and macOS distributions. Once Jenkins supports detecting ZFS capabilities, rather than relying on the SunOS check, the above caveats for ZFS integration with Jenkins should be considered.
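A minimal sketch of the standalone option, assuming Java 8 is on the PATH and jenkins.war has already been downloaded to the current directory:

```sh
# Start Jenkins as a standalone process; 8080 is the default HTTP port.
java -jar jenkins.war --httpPort=8080
```

Once started, the web UI is served on http://localhost:8080; the same WAR can instead be deployed under an application server such as Apache Tomcat.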
Are containers specific to a particular host OS? For instance, if a container is created on Windows with particular dependencies (e.g., DLL files), can it run in a setup in which the host OS is Linux? I initially assumed that a container must be specific to a particular host OS.
But the following two excerpts seem to suggest that I may not have understood the mechanics correctly. So my question is: are containers built on top of the Docker engine, so that when dependencies are included they are relative to the Docker engine, and the underlying host OS does not matter?
(1) From IBM:
Containerization allows developers to create and deploy applications faster and more securely. With traditional methods, code is developed in a specific computing environment which, when transferred to a new location, often results in bugs and errors. For example, when a developer transfers code from a desktop computer to a virtual machine (VM) or from a Linux to a Windows operating system. Containerization eliminates this problem by bundling the application code together with the related configuration files, libraries, and dependencies required for it to run. This single package of software or “container” is abstracted away from the host operating system, and hence, it stands alone and becomes portable—able to run across any platform or cloud, free of issues. [https://www.ibm.com/cloud/learn/containerization]
(2) From Docker:
Does Docker run on Linux, macOS, and Windows?
You can run both Linux and Windows programs and executables in Docker containers. The Docker platform runs natively on Linux (on x86-64, ARM and many other CPU architectures) and on Windows (x86-64).
Docker Inc. builds products that let you build and run containers on Linux, Windows and macOS.
What does Docker technology add to just plain LXC?
Docker technology is not a replacement for LXC. “LXC” refers to capabilities of the Linux kernel (specifically namespaces and control groups) which allow sandboxing processes from one another, and controlling their resource allocations. On top of this low-level foundation of kernel features, Docker offers a high-level tool with several powerful functionalities:
Portable deployment across machines. Docker defines a format for bundling an application and all its dependencies into a single object called a container. This container can be transferred to any Docker-enabled machine. The container can be executed there with the guarantee that the execution environment exposed to the application is the same in development, testing, and production. LXC implements process sandboxing, which is an important pre-requisite for portable deployment, but is not sufficient for portable deployment. If you sent me a copy of your application installed in a custom LXC configuration, it would almost certainly not run on my machine the way it does on yours. The app you sent me is tied to your machine’s specific configuration: networking, storage, logging, etc. Docker defines an abstraction for these machine-specific settings. The exact same Docker container can run - unchanged - on many different machines, with many different configurations.
The host OS, or more precisely the kernel it provides, still matters. That's why you can't run Windows containers on Linux. You can run Linux containers on Windows thanks to Hyper-V and WSL 2, and on macOS via a hypervisor, but that's it. If the provided kernel is compatible (it doesn't have to be identical; usually a similar version and the same architecture, remember there are x86-64, ARM64, etc.), or virtualization/emulation is available (x86-64 containers can run on an M1, which is ARM64), then you can just run the container. There is no need to worry about DLLs, because they're supposed to be included either in one of the base images you start from or in the image you build.
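For example, on an Apple Silicon (ARM64) host, Docker Desktop can run an x86-64 Linux image through emulation. A minimal sketch (the public alpine image is used purely as an illustration):

```sh
# --platform selects the image architecture; Docker Desktop supplies the
# Linux kernel through a lightweight VM, and emulates x86-64 on ARM64.
docker run --rm --platform linux/amd64 alpine uname -m   # prints: x86_64
```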
I was wondering if anyone could help me understand the difference between a Python virtual environment and a Docker container.
I would like to have an environment for each tool, isolated from the others, to avoid dependency conflicts; for example, two tools needing different versions of the same dependency, where one tool needs the older version and the other requires the newer one, causing an error in one of them.
I've tested out Python venv, but I'm not sure whether it's the right choice for the issue I just explained, or whether Docker is something I should be using for my situation.
Particularly for day-to-day development, prefer a virtual environment if it's practical.
| Virtual environment | Docker |
| --- | --- |
| Works with native tools; can just run python myscript.py | Requires Docker-specific setup |
| Every IDE and editor works fine with it | Requires Docker-specific IDE support |
| Can just open() data files with no special setup | Can't access data files without a Docker-specific mount setup |
| Immediately re-run code after editing it | Requires re-running docker build, or a Docker-specific mount setup |
| Uses the Python installation from the host | Can use any single specific version of Python |
| Isolated Python library tree | Isolated Python library tree |
| Uses host versions of C library dependencies | Isolated C library dependencies |
A virtual environment acts like a normal Python installation in an alternate path. You don't need to do special things to make your local code or data files available; you can just run your script directly or via your IDE. The one downside is that you're limited to what your host OS's package manager makes available for Python versions and C library dependencies.
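A minimal sketch of that workflow, assuming the project has a requirements.txt:

```sh
# Create an isolated environment; the interpreter comes from the host,
# but installed libraries stay inside .venv.
python -m venv .venv
. .venv/bin/activate              # POSIX shells; Windows uses .venv\Scripts\activate
pip install -r requirements.txt
python myscript.py                # runs against the isolated library tree
```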
A Docker container contains the filesystem of a complete OS, including a completely isolated Python installation. It can be a good match if you need a very specific version of Python or if you need host OS dependencies that are tricky to install. It can also be a good match if you're looking for a production-oriented deployment setup that doesn't specifically depend on installing things on to the target system. But, Docker by design makes it hard to access your host files; it is not a great match for a live development environment or especially for one-off scripts that read and write host files.
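As a hedged illustration of that mount setup, a bind mount is what makes host files visible inside a container at all (python:3.12-slim is just an example image):

```sh
# Mount the current directory into the container and run the script there;
# without -v, the container cannot see any host files.
docker run --rm -v "$PWD":/app -w /app python:3.12-slim python myscript.py
```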
The other consideration here is that, if you use the standard Python packaging tools, it's straightforward to run your program in a virtual environment, and converting that to a Docker image is almost boilerplate. Starting from Docker can make it tricky to go back the other way, and I see some setups around SO that can only be run via Docker; if they were restructured to use a standard setup.cfg/requirements.txt installation, they would not require Docker but could still be used with it.
I have trouble understanding this concept. I know a little bit about how Docker works and what the benefits are, and while I understand running web servers, databases and development environments in containers, I don't understand the point of running an OS like Ubuntu in Docker.
Can someone explain why you would want to do that and also the benefits of an entire OS in a container?
The OS is essentially the runtime environment required to run your app. If your app is compiled to run on Linux, it relies on Linux libraries (libc, glib, and so on) that must be present in the executing environment, regardless of its type. Docker is no exception to this.
So an Ubuntu application requires an Ubuntu image in order to run correctly.
Note that a Docker container does not include or run an entire OS, only the minimum set of libraries that allow your app to run. In particular, it never contains or executes a kernel; it runs under the host's kernel.
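You can check this yourself on a Linux host with Docker installed: the kernel reported inside an Ubuntu container is the host's, even though the userland comes from the image.

```sh
# Both commands print the same kernel release (on a native Linux host),
# because the container runs under the host kernel.
docker run --rm ubuntu uname -r
uname -r
```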
Docker doesn't have its own OS; it is installed on a machine, and this allows it to share the host operating system's resources. There will be only one OS, and all the containers will be using that OS.
Most applications are meaningless without an OS, since it is required for I/O, hardware calls, etc.
Each Docker container may have different packages (Java, Python, JBoss, etc.) and applications installed.
I develop websites using Python with the Django framework, and I like to get things done fast.
I used to use a virtual machine, or just the local host machine; recently I moved to Vagrant. I am not sure whether there are other technologies that would help keep the process fast.
I could use some tips and pointers.
- Docker
It is great at building and sharing disk images with others through the Docker Index
Docker is a manager for infrastructure (today's bindings are for Linux containers, but future bindings may include KVM, Hyper-V, Xen, etc.)
Docker is a great image distribution model for server templates built with configuration managers (like Chef, Puppet, SaltStack, etc.)
Docker uses btrfs (a copy-on-write filesystem) to keep track of filesystem diffs, which can be committed and collaborated on with other users (like git)
Docker has a central repository of disk images (public and private) that allows you to easily run different operating systems (Ubuntu, CentOS, Fedora, even Gentoo)
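For instance, that last point in practice (ubuntu and fedora are images from the public registry; the "Docker Index" mentioned above is known today as Docker Hub):

```sh
# Run the userlands of two different distributions side by side
# on the same host kernel.
docker run --rm ubuntu cat /etc/os-release
docker run --rm fedora cat /etc/os-release
```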
- virtualenv
It isolates the Python interpreter and the Python dependencies on one machine so you can install multiple Python projects alongside each other with their own dependencies. But for the rest of the machine the virtualenv doesn't do anything:
you still have global dependencies/packages that are installed using your Mac OS X / Linux package manager, and these are shared between the virtualenvs.
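A sketch of that isolation (the package and version numbers are only examples):

```sh
# Two projects, each with its own copy of the same library at a
# different version; neither installation affects the other.
python -m venv proj-a/.venv && proj-a/.venv/bin/pip install 'requests==2.25.0'
python -m venv proj-b/.venv && proj-b/.venv/bin/pip install 'requests==2.31.0'
```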
- A virtual machine (VM)
It is a software program or operating system that not only exhibits the behavior of a separate computer, but is also capable of performing tasks such as running applications and programs as a separate computer would.
A virtual machine, usually known as a guest, is created within another computing environment referred to as a "host."
Multiple virtual machines can exist within a single host at one time.
- Vagrant
often used to programmatically configure virtual machines
specifies the whole machine: it allows you to specify the Linux distribution, packages to be installed, and actions to be taken to install the project.
So if you want to launch a Vagrant box with multiple Python projects on that machine you'd still use virtualenv to keep the Python dependencies separate.
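A minimal Vagrant workflow sketch (hashicorp/bionic64 is just an example box name):

```sh
# Write a Vagrantfile for the chosen box, then create and enter the VM.
vagrant init hashicorp/bionic64
vagrant up
vagrant ssh
```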
I would like to use the Spark-GraphX packages available to Neo4j through Mazerunner; however, I am an analyst, not a software person. I am running Windows 7 on my laptop and Neo4j 2.3.0, and would like a step-by-step guide explaining how I can set up Mazerunner for both Community & Enterprise. There's a lot of mention of Docker and containers, and I have no idea what these are or how to set them up. Simple instructions would be of sooo much help! :)
Docker is primarily an operating-system-level virtualization technology designed to run on Unix-based systems (Linux, Mac, FreeBSD). Luckily, Docker provides a Windows version that does much the same thing on Windows.
What happens is, after you have installed Docker, it allows you to run what they call containers, which behave somewhat like lightweight virtual machines on top of your host (Windows 7 running Docker). This allows you to run services like Neo4j in an isolated environment. Docker also allows you to download and install pre-configured, pre-compiled images of operating systems that usually provide some sort of service or have some software pre-installed.
In your case, I believe all you have to do is:
1. First, install Docker.
2. Use "Docker Compose" to download and install the images.
3. Continue reading the tutorial, as you have now installed the required Docker images.
Note: some of the operations, like the one in Step 2, will require command-line access, and also the creation of a "docker-compose.yml", so be sure to visit all the links I have provided. Spend a little time going through them and you should be all right.
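A hedged sketch of what Step 2 typically looks like, assuming the tutorial's docker-compose.yml is in the current directory:

```sh
# Download the images and start the containers defined in docker-compose.yml.
docker-compose up -d
# Verify that the services are running.
docker-compose ps
```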
PS: great blog. definitely bookmarking it!