I have a pipeline which uses a global singularity image and rule-based conda wrappers.
However, some of the tools don't have wrappers (e.g., htslib's bgzip and tabix).
Now I need to learn how to run jobs in containers.
The official documentation says:
"Allowed image urls entail everything supported by singularity (e.g., shub:// and docker://)."
Now I've tried the following image from Singularity Hub, but I get an error.
Minimal reproducible example:
config.yaml
# Files
REF_GENOME: "c_elegans.PRJNA13758.WS265.genomic.fa"
GENOME_ANNOTATION: "c_elegans.PRJNA13758.WS265.annotations.gff3"
Snakefile
# Directories------------------------------------------------------------------
configfile: "config.yaml"
# Setting the names of all directories
dir_list = ["REF_DIR", "LOG_DIR", "BENCHMARK_DIR", "QC_DIR", "TRIM_DIR", "ALIGN_DIR", "MARKDUP_DIR", "CALLING_DIR", "ANNOT_DIR"]
dir_names = ["refs", "logs", "benchmarks", "qc", "trimming", "alignment", "mark_duplicates", "variant_calling", "annotation"]
dirs_dict = dict(zip(dir_list, dir_names))
GENOME_INDEX=config["REF_GENOME"]+".fai"
VEP_ANNOT=config["GENOME_ANNOTATION"]+".gz"
VEP_ANNOT_INDEX=config["GENOME_ANNOTATION"]+".gz.tbi"
# Singularity with conda wrappers
singularity: "docker://continuumio/miniconda3:4.5.11"
# Rules -----------------------------------------------------------------------
rule all:
    input:
        expand('{REF_DIR}/{GENOME_ANNOTATION}{ext}', REF_DIR=dirs_dict["REF_DIR"], GENOME_ANNOTATION=config["GENOME_ANNOTATION"], ext=['', '.gz', '.gz.tbi']),
        expand('{REF_DIR}/{REF_GENOME}{ext}', REF_DIR=dirs_dict["REF_DIR"], REF_GENOME=config["REF_GENOME"], ext=['', '.fai']),
rule download_references:
    params:
        ref_genome=config["REF_GENOME"],
        genome_annotation=config["GENOME_ANNOTATION"],
        ref_dir=dirs_dict["REF_DIR"]
    output:
        os.path.join(dirs_dict["REF_DIR"], config["REF_GENOME"]),
        os.path.join(dirs_dict["REF_DIR"], config["GENOME_ANNOTATION"]),
        os.path.join(dirs_dict["REF_DIR"], VEP_ANNOT),
        os.path.join(dirs_dict["REF_DIR"], VEP_ANNOT_INDEX)
    resources:
        mem=80000,
        time=45
    log:
        os.path.join(dirs_dict["LOG_DIR"], "references", "download.log")
    singularity:
        "shub://biocontainers/tabix"
    shell: """
        cd {params.ref_dir}
        wget ftp://ftp.wormbase.org/pub/wormbase/releases/WS265/species/c_elegans/PRJNA13758/c_elegans.PRJNA13758.WS265.genomic.fa.gz
        bgzip -d {params.ref_genome}.gz
        wget ftp://ftp.wormbase.org/pub/wormbase/releases/WS265/species/c_elegans/PRJNA13758/c_elegans.PRJNA13758.WS265.annotations.gff3.gz
        bgzip -d {params.genome_annotation}.gz
        grep -v "#" {params.genome_annotation} | sort -k1,1 -k4,4n -k5,5n -t$'\t' | bgzip -c > {params.genome_annotation}.gz
        tabix -p gff {params.genome_annotation}.gz
        """
rule index_reference:
    input:
        os.path.join(dirs_dict["REF_DIR"], config["REF_GENOME"])
    output:
        os.path.join(dirs_dict["REF_DIR"], GENOME_INDEX)
    resources:
        mem=2000,
        time=30,
    log:
        os.path.join(dirs_dict["LOG_DIR"], "references", "faidx_index.log")
    wrapper:
        "0.64.0/bio/samtools/faidx"
Error
Building DAG of jobs...
Pulling singularity image shub://biocontainers/tabix.
WorkflowError:
Failed to pull singularity image from shub://biocontainers/tabix:
FATAL: While pulling shub image: failed to get manifest for: shub://biocontainers/tabix: the requested manifest was not found in singularity hub
File "/home/moldach/anaconda3/envs/snakemake/lib/python3.7/site-packages/snakemake/deployment/singularity.py", line 88, in pull
It appears this is a problem with the container itself:
(snakemake) [moldach@arc CONTAINER_TROUBLESHOOT]$ singularity pull shub://biocontainers/tabix
FATAL: While pulling shub image: failed to get manifest for: shub://biocontainers/tabix: the requested manifest was not found in singularity hub
In fact, I experience this problem with other biocontainers images.
For example, I also need a container for bowtie2 indexing, and this is the error I get from biocontainers/bowtie2 versus another developer's container of the same tool, comics/bowtie2:
(snakemake) [moldach@arc CONTAINER_TROUBLESHOOT]$ singularity pull docker://biocontainers/bowtie2
FATAL: While making image from oci registry: failed to get checksum for docker://biocontainers/bowtie2: Error reading manifest latest in docker.io/biocontainers/bowtie2: manifest unknown: manifest unknown
(snakemake) [moldach@arc CONTAINER_TROUBLESHOOT]$ singularity pull docker://comics/bowtie2
INFO: Converting OCI blobs to SIF format
INFO: Starting build...
Getting image source signatures
Copying blob a02a4930cb5d done
Does anyone know why?
BioContainers does not allow latest as a tag for their containers, so you will need to specify the tag to be used.
From their doc:
The BioContainers community had decided to remove the latest tag. Then, the following command docker pull biocontainers/crux will fail. Read more about this decision in Getting started with Docker
When no tag is specified, it defaults to the latest tag, which of course is not allowed here. See here for bowtie2's available tags. Usage like this will work:
singularity pull docker://biocontainers/bowtie2:v2.4.1_cv1
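As a quick sanity check, you can confirm the pinned image actually runs before wiring it into the Snakefile (the .sif filename below assumes singularity's default <name>_<tag>.sif naming):
singularity pull docker://biocontainers/bowtie2:v2.4.1_cv1
singularity exec bowtie2_v2.4.1_cv1.sif bowtie2 --version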
Using another container solves the issue; however, the fact that I'm getting errors from biocontainers is troubling, given that these containers are very common and are used as examples in the literature, so I will award the top answer to whoever can solve that specific issue.
As it were, using stackleader/bgzip-utility solved the issue of actually running this rule in a container:
container:
"docker://stackleader/bgzip-utility"
Once again, for those coming to this post, it's probably best to test any container before running Snakemake, e.g. singularity pull docker://stackleader/bgzip-utility.
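A minimal smoke test along those lines, assuming the default latest tag and singularity's standard <name>_<tag>.sif output name:
singularity pull docker://stackleader/bgzip-utility
singularity exec bgzip-utility_latest.sif bgzip --version
singularity exec bgzip-utility_latest.sif tabix --version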
I am configuring Jenkins to automatically deploy my successful builds to my Kubernetes cluster. I have manually set up the KUBECONFIG file in /var/lib/jenkins/.kube/config.
But my Jenkins job keeps giving the same error:
+ kubectl config --kubeconfig=/var/lib/jenkins/.kube/config view
error: error loading config file "/var/lib/jenkins/.kube/config": v1.Config.Contexts: \
[]v1.NamedContext: Clusters: []v1.NamedCluster: v1.NamedCluster.Name: Cluster: v1.Cluster.Server: \
CertificateAuthorityData: decode base64: illegal base64 data at input byte 47, error found in #10 byte of ... \
|ASXY9gkN$","server":|..., bigger context ...|"LS3tGS1PR0dJTiBDRVJUMLPAR0FURS0tLS0tCk1JSUV5gkN$", \
"server":"https://clsx-cloud-d734ef-0b|...
I copied the kube config file manually from my SSH-accessible account, i.e.
cat home/username/.kube/config
You most likely copied the wrong screen output from the terminal, or copied it while viewing the file in an editor, e.g. nano.
The $ characters are illegal; they are an artifact of truncated line display in the terminal. Make sure you copy the real file data correctly.
For example:
xclip -sel clip < home/username/.kube/config
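Alternatively, copy the file over SSH so the terminal never renders it at all (host and user names here are placeholders):
scp username@remote-host:~/.kube/config /var/lib/jenkins/.kube/config
kubectl config --kubeconfig=/var/lib/jenkins/.kube/config view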
I have fixed this error by removing the files in this directory:
/var/lib/jenkins/.kube/
Note: take a backup of this directory first.
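For example, a minimal sketch of that backup-then-remove sequence (paths taken from the question):
cp -a /var/lib/jenkins/.kube /var/lib/jenkins/.kube.bak
rm /var/lib/jenkins/.kube/config
Once a clean config is in place, re-run the kubectl view command above to confirm it parses.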
I am new to Fabric 2.0 and recently installed all the samples, and I was able to run test-network without an issue with 2 orgs. Then I followed the addOrg3 directory to add a 3rd organization and join the channel I created earlier.
Now the fun part came when I wanted to add a 4th organization. What I did was copy the addOrg3 folder and rename almost everything in each file to represent the 4th organization. I even assigned a new port for this organization. However, I am seeing the following error.
I've also added the following in Scripts/envVar.sh
export PEER0_ORG4_CA=${PWD}/organizations/peerOrganizations/org4.example.com/peers/peer0.org4.example.com/tls/ca.crt
And added the following in envVarCLI.sh
elif [ $ORG -eq 4 ]; then
  CORE_PEER_LOCALMSPID="Org4MSP"
  CORE_PEER_TLS_ROOTCERT_FILE=$PEER0_ORG4_CA
  CORE_PEER_ADDRESS=peer0.org4.example.com:12051
  CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/organizations/peerOrganizations/org4.example.com/users/Admin@.../msp
I have also added step1Org4.sh and step2Org4.sh, basically following addOrg3's structure.
What steps do you follow to add additional organizations? Please help.
"No such container: Org4cli"
Sorry for the formatting, since I wasn't able to put it into code style, but here is the output from running the command ./addOrg4.sh up:
Add Org4 to channel 'mychannel' with '10' seconds and CLI delay of '3' seconds and using database 'leveldb'
Desktop/blockchain/BSI/fabric-samples/test-network/addOrg4/../../bin/cryptogen
##########################################################
##### Generate certificates using cryptogen tool #########
##########################################################
##########################################################
############ Create Org4 Identities ######################
##########################################################
+ cryptogen generate --config=org4-crypto.yaml --output=../organizations
org4.example.com
+ res=0
+ set +x
Generate CCP files for Org4
Desktop/blockchain/BSI/fabric-samples/test-network/addOrg4/../../bin/configtxgen
##########################################################
####### Generating Org4 organization definition #########
##########################################################
+ configtxgen -printOrg Org4MSP
2020-05-29 13:33:04.609 EDT [common.tools.configtxgen] main -> INFO 001 Loading configuration
2020-05-29 13:33:04.617 EDT [common.tools.configtxgen.localconfig] LoadTopLevel -> INFO 002 Loaded configuration: /Desktop/blockchain/BSI/fabric-samples/test-network/addOrg4/configtx.yaml
+ res=0
+ set +x
###############################################################
####### Generate and submit config tx to add Org4 #############
###############################################################
Error: No such container: Org4cli
ERROR !!!! Unable to create config tx
In your addOrg4.sh there is a condition check like this:
CONTAINER_IDS=$(docker ps -a | awk '($2 ~ /fabric-tools/) {print $1}')
if [ -z "$CONTAINER_IDS" -o "$CONTAINER_IDS" == " " ]; then
  echo "Bringing up network"
  Org4Up
fi
If you have already run addOrg3.sh up, CONTAINER_IDS will always have a value (for example: 51b4ad60d812); it is the container ID of Org3cli, so the function Org4Up will never be called. A simple fix is to comment out the check like this:
# CONTAINER_IDS=$(docker ps -a | awk '($2 ~ /fabric-tools/) {print $1}')
# if [ -z "$CONTAINER_IDS" -o "$CONTAINER_IDS" == " " ]; then
echo "Bringing up network"
Org4Up
# fi
That will bring up the Org4cli container you are missing.
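You can then confirm the CLI container is actually present (the container name comes from the error message):
docker ps -a --filter name=Org4cli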
First check whether the container is up or not; if it is up, then the CLI where the command is executed is probably not bootstrapped with the Org4 details.
I have added a 4th organization to the three-org Hyperledger Fabric network. First, you have to create the Org4 artifacts (the crypto yaml and the Org4 docker file, including the Org4cli), and then follow the manual (step-by-step) process for adding a new organization from the official documentation:
https://hyperledger-fabric.readthedocs.io/en/release-2.0/channel_update_tutorial.html
Skip the part about editing the scripts (step1Org3.sh, ...), because the workflow for adding a 4th or new org is slightly different, so you would spend a lot of time just modifying the scripts. A sketch of the manual flow is shown below.
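For reference, the core of that manual flow, adapted from the tutorial with Org3 renamed to Org4 (the channel name, orderer address, and file names are the tutorial's examples, not values from your setup):
configtxgen -printOrg Org4MSP > org4.json
peer channel fetch config config_block.pb -o orderer.example.com:7050 -c mychannel --tls --cafile $ORDERER_CA
configtxlator proto_decode --input config_block.pb --type common.Block | jq .data.data[0].payload.data.config > config.json
jq -s '.[0] * {"channel_group":{"groups":{"Application":{"groups":{"Org4MSP":.[1]}}}}}' config.json org4.json > modified_config.json
configtxlator proto_encode --input config.json --type common.Config --output config.pb
configtxlator proto_encode --input modified_config.json --type common.Config --output modified_config.pb
configtxlator compute_update --channel_id mychannel --original config.pb --updated modified_config.pb --output org4_update.pb
The update then has to be wrapped in an envelope, signed by the existing org admins, and submitted with peer channel update, exactly as the tutorial shows for Org3.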
I will write an article about adding a new (4th) org on Medium, and will paste the link here too.
I'm following the Getting Started tutorial for Sawtooth Seth (https://sawtooth.hyperledger.org/docs/seth/releases/latest/getting_started.html), but I'm not able to set up an account on the network. The network is running properly in another shell. (I've removed the -aes128 flag for simplicity.)
lorenzo@MacBook-Air-di-Lorenzo ~ $ cd gitHubRepo/sawtooth-seth/
lorenzo@MacBook-Air-di-Lorenzo ~/gitHubRepo/sawtooth-seth $ docker exec -it seth-cli bash
root@a6caea38a1a8:/project/sawtooth-seth# openssl ecparam -genkey -name secp256k1 | openssl ec -out key-file.pem
read EC key
writing EC key
root@a6caea38a1a8:/project/sawtooth-seth# seth account import key-file.pem myalias
error: Found argument 'myalias' which wasn't expected, or isn't valid in this context
USAGE:
seth account import <key-file> --pass-file <pass-file>
For more information try --help
root@a6caea38a1a8:/project/sawtooth-seth# seth account import key-file.pem --pass-file myalias
Error: No such file or directory (os error 2)
root@a6caea38a1a8:/project/sawtooth-seth# ls
bin cli common contracts key-file.pem protos tests
You ran the seth command in the seth-cli Docker container.
You need to run it in the seth-cli-go container until PR https://github.com/hyperledger/sawtooth-seth/pull/72 is merged.
EDIT: This was merged 2018-11-13.
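Putting the two pieces together, a sketch of the corrected invocation (run inside seth-cli-go; the passphrase file name is a hypothetical example, and key-file.pem has to be accessible in that container):
docker exec -it seth-cli-go bash
echo "my-passphrase" > pass.txt
seth account import key-file.pem --pass-file pass.txt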
A good resource for questions like this is also the seth chat channel at
https://chat.hyperledger.org/channel/sawtooth-seth
Deploying an OVF/OVA file to a remote ESXi server
I am trying to deploy an OVF/OVA file to a remote ESXi server.
I want to do this from the command line.
I have written a simple batch file which deploys the ovf using
ovftool.exe.
Here is my batch file:
@echo off
CLS
set OVF_COMMAND="C:\Program Files (x86)\VMware\VMwareWorkstation\OVFTool\ovftool.exe"
set OVF_DEPLOY_OFF=ovftool
IF NOT EXIST %OVF_COMMAND% (
    @echo ovftool.exe does not exist at:
    @echo %OVF_COMMAND%
    pause
)
@echo START OF THE BATCH SCRIPT
@echo #######################################################################
%OVF_DEPLOY_OFF% --noSSLVerify --disableVerification --skipManifestGeneration C:\Newfolder\vAppTS2\vAppTS2.ovf vi://administrator:jim@141.192.91.124/nrtms-training/host/141.192.91.9/
@echo #######################################################################
This works fine, but it is too slow. The OVF file currently comprises one vApp with one VM; when everything is done, the vApp will contain about 9 VMs.
It takes about 20 minutes to deploy the current vApp, which contains only one VM. I cannot imagine how long it will take to deploy a vApp with 9 VMs.
Is there a way to make it faster?
Cheers.
I have managed to find a work-around.
Instead of importing an OVF file from a remote location, I chose to clone the vApp from a predefined resource pool.
So at the beginning I created a resource pool into which I uploaded a vApp.
# connect to the server
Connect-VIServer -Server $args[2].ToString() -User $args[3] -Password $args[4]
# search for the vApp to move into the new resource pool
# The name of the vApp is given as an argument to the PowerCLI script
# It must be one of the existing vApps
foreach ($vApps in (Get-VApp))
{
    if ($vApps.Name -eq $args[0])
    {
        # define source and destination hosts
        $vmHost_dest = Get-VMHost -Name "100.106.37.10"
        $vmHost_source = Get-VMHost -Name "100.106.37.9"
        # create a resource pool on the destination host
        $myDestinationRP = New-ResourcePool -Name "datastore-13" -Location $vmHost_dest
        New-VApp -Name MyVApp2 -VApp $vApps -Location $myDestinationRP
    }
}
So I can build a custom vApp and store it in a specific resource pool, from where I can clone it later as I please.
If I want to remove the newly cloned vApp I can do as follows:
Get-VApp $vApps | Remove-VApp -Confirm:$false
Hope this helps.