How can I create an OpenEBS cStor pool? - storage

Setup -- OpenEBS 0.7
K8S -- 1.10, GKE
I have a 3-node cluster with 2 disks per node. Can I use these disks for cStor pool creation? How do I do that? Do I have to select the disks manually?

Yes, you can use the disks attached to the nodes to create a cStor storage pool with OpenEBS. The main prerequisite is that the disks must be unmounted if they are currently in use.
With the OpenEBS 0.7 release, the following disk types/paths are excluded by the Node Disk Manager (NDM), the OpenEBS data-plane component that identifies the disks used to create cStor pools on the nodes.
loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/dm-
You can also customize this by excluding additional disks attached to your nodes, for example disks that are already in use or otherwise unwanted. This must be done in the 'openebs-operator-0.7.0.yaml' file that you downloaded before installation: add the device path to openebs-ndm-config under ConfigMap in that file as follows.
"exclude":"loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/dm-"
Example:
{
  "key": "path-filter",
  "name": "path filter",
  "state": "true",
  "include":"",
  "exclude":"loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/dm-"
}
So just install the openebs-operator.yaml as described on docs.openebs.io, and after installation NDM will detect the disks. Follow the instructions given in the doc. You can create the pool either by manually selecting the disks or automatically.
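If you go the manual route, pool creation in 0.7 is driven by a StoragePoolClaim that lists the detected disks. Below is a rough sketch based on the 0.7/0.8 docs; the claim name, poolType, maxPools and the disk names (taken from kubectl get disks) are placeholders you must replace, and field names may differ slightly between releases.
apiVersion: openebs.io/v1alpha1
kind: StoragePoolClaim
metadata:
  name: cstor-disk-pool            # placeholder name
spec:
  name: cstor-disk-pool
  type: disk
  maxPools: 3                      # e.g. one pool per node on a 3-node cluster
  poolSpec:
    poolType: striped              # or mirrored
  disks:
    diskList:                      # disk resources reported by "kubectl get disks"
    - disk-<id-from-node-1>
    - disk-<id-from-node-2>
    - disk-<id-from-node-3>
Apply it with kubectl apply -f and OpenEBS should create one cStor pool per node from the listed disks; for the automatic mode you omit diskList and let NDM pick the disks.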

Related

What IP address ranges are available to Docker when creating gateways, for example when using Compose files

I have a naive question, but I noticed by playing with some Compose files that Docker creates gateway addresses of the form 172.x.0.1 for all the networks of my projects, x normally being incremented (unless the Docker service is restarted), starting from 18 (because 17 is used by the default bridge network) up to... some number that I cannot figure out from the documentation. After that, Docker jumps to gateways of the form 192.168.y.1, and here again I'm not able to figure out what range of y values is available to Docker, nor what its strategy is for choosing gateway addresses in all those ranges.
I have the strong impression that it only chooses private IP addresses, but I've not seen (yet) addresses such as 10.a.b.c.
Can anybody explain to me, ideally with some official resources, how Docker actually chooses gateway addresses when creating bridge networks (especially in the case of Compose files), what pools of addresses are available to Docker, and whether it's possible to manually define or constrain these ranges?
Some of the pages I consulted without much success:
https://docs.docker.com/network/
https://docs.docker.com/network/bridge/
https://docs.docker.com/network/network-tutorial-standalone/
https://docs.docker.com/compose/networking/
https://github.com/compose-spec/compose-spec/blob/master/spec.md
It seems that some "explanation" hides in that tiny piece of code:
var (
    // PredefinedLocalScopeDefaultNetworks contains a list of 31 IPv4 private networks with host size 16 and 12
    // (172.17-31.x.x/16, 192.168.x.x/20) which do not overlap with the networks in `PredefinedGlobalScopeDefaultNetworks`
    PredefinedLocalScopeDefaultNetworks []*net.IPNet
    // PredefinedGlobalScopeDefaultNetworks contains a list of 64K IPv4 private networks with host size 8
    // (10.x.x.x/24) which do not overlap with the networks in `PredefinedLocalScopeDefaultNetworks`
    PredefinedGlobalScopeDefaultNetworks []*net.IPNet
    mutex                                sync.Mutex
    localScopeDefaultNetworks            = []*NetworkToSplit{{"172.17.0.0/16", 16}, {"172.18.0.0/16", 16}, {"172.19.0.0/16", 16},
        {"172.20.0.0/14", 16}, {"172.24.0.0/14", 16}, {"172.28.0.0/14", 16},
        {"192.168.0.0/16", 20}}
    globalScopeDefaultNetworks = []*NetworkToSplit{{"10.0.0.0/8", 24}}
)
source: https://github.com/moby/libnetwork/blob/a79d3687931697244b8e03485bf7b2042f8ec6b6/ipamutils/utils.go#L10-L22
This is the best I could come up with, as I still haven't found any official documentation about this...
It also seems possible to force Docker to use a specific range of subnets by creating an /etc/docker/daemon.json file with, e.g., content such as:
{
  "default-address-pools": [
    {"base": "172.16.0.0/16", "size": 24}
  ]
}
One can also specify multiple address pools:
{
  "default-address-pools": [
    {"base": "172.16.0.0/16", "size": 24},
    {"base": "xxx.xxx.xxx.xxx/yy", "size": zz}  // <- additional pools can be stacked, if needed
  ]
}
Don't forget to restart the docker service once you're done:
$ sudo service docker restart
More on this can be found here:
https://capstonec.com/2019/10/18/configure-custom-cidr-ranges-in-docker-ee/
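If you only need to pin the subnet (and therefore the gateway) for a single Compose project rather than for the daemon as a whole, you can also declare it per network in the Compose file itself. A minimal sketch, where the network name backend and the 172.30.0.0/24 range are placeholders:
networks:
  backend:
    driver: bridge
    ipam:
      config:
        - subnet: 172.30.0.0/24
          gateway: 172.30.0.1
After docker compose up you can check what was actually assigned with docker network inspect <project>_backend.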

GitLab: How to list registry containers with size

I have a self-hosted GitLab CE Omnibus installation (version 11.5.2) running, including the container registry.
Now, the disk space needed to host all those containers is increasing quite fast.
As an admin, I want to list all Docker images in this registry including their size, so I can maybe have some of them deleted.
Maybe I haven't looked hard enough, but I couldn't find anything for this in GitLab's Admin Area. Before I go to the trouble of writing a script that untangles the weird linking between the repositories and blobs directories in /var/opt/gitlab/gitlab-rails/shared/registry/docker/registry/v2 and then aggregates the sizes per repository, I wanted to ask:
Is there some CLI command or even a curl call to the registry to get the information I want?
Update: This answer is deprecated by now. Please see the accepted answer for a solution built into GitLab's Rails console directly.
Original Post:
Thanks to a great comment from #Rekovni, my problem is somewhat solved.
First: the huge amount of disk space used by Docker images was due to a bug in the GitLab/Docker registry. Follow the link in Rekovni's comment below my question.
Second: that link also points to an experimental tool being developed by GitLab, which lists and optionally deletes those old unused Docker layers (related to the bug).
Third: if anyone wants to do their own thing, I hacked together a pretty ugly script which lists the image size for every repo:
#!/usr/bin/env python3
# coding: utf-8
import os
from os.path import join, getsize
import subprocess

def get_human_readable_size(size, precision=2):
    suffixes = ['B', 'KB', 'MB', 'GB', 'TB']
    suffixIndex = 0
    while size > 1024 and suffixIndex < 4:
        suffixIndex += 1
        size = size / 1024.0
    return "%.*f%s" % (precision, size, suffixes[suffixIndex])

registry_path = '/var/opt/gitlab/gitlab-rails/shared/registry/docker/registry/v2/'
repos = []
for repo in os.listdir(registry_path + 'repositories'):
    images = os.listdir(registry_path + 'repositories/' + repo)
    for image in images:
        try:
            layers = os.listdir(registry_path + 'repositories/{}/{}/_layers/sha256'.format(repo, image))
            imagesize = 0
            # get image size
            for layer in layers:
                # get size of layer
                for root, dirs, files in os.walk("{}/blobs/sha256/{}/{}".format(registry_path, layer[:2], layer)):
                    imagesize += (sum(getsize(join(root, name)) for name in files))
            repos.append({'group': repo, 'image': image, 'size': imagesize})
        # if the folder doesn't exist, just skip it
        except FileNotFoundError:
            pass

repos.sort(key=lambda k: k['size'], reverse=True)
for repo in repos:
    print("{}/{}: {}".format(repo['group'], repo['image'], get_human_readable_size(repo['size'])))
But please do note: it's really static, it doesn't list specific tags for an image, and it doesn't take into account that some layers may be shared by other images. It will give you a rough estimate, though, in case you don't want to use GitLab's tool mentioned above. You may use the ugly script as you like, but I take no liability whatsoever.
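Alternatively, instead of walking the filesystem you can get a similar estimate from the registry's HTTP API by summing the layer sizes reported in each image manifest. A rough Python sketch follows (requires the requests package); the registry URL and bearer token are placeholders, the token needs catalog and pull access, and shared layers are again counted once per tag:
#!/usr/bin/env python3
# Rough sketch: estimate per-tag image size via the Docker Registry v2 HTTP API.
# REGISTRY_URL and TOKEN are placeholders you must supply yourself.
import requests

REGISTRY_URL = "https://registry.example.com"
TOKEN = "<bearer token with catalog/pull scope>"
AUTH = {"Authorization": "Bearer " + TOKEN}
MANIFEST_V2 = {"Accept": "application/vnd.docker.distribution.manifest.v2+json"}

repos = requests.get(REGISTRY_URL + "/v2/_catalog", headers=AUTH).json().get("repositories", [])
for repo in repos:
    tags = requests.get("{}/v2/{}/tags/list".format(REGISTRY_URL, repo), headers=AUTH).json().get("tags") or []
    for tag in tags:
        manifest = requests.get("{}/v2/{}/manifests/{}".format(REGISTRY_URL, repo, tag),
                                headers={**AUTH, **MANIFEST_V2}).json()
        # sum the compressed blob sizes listed in the manifest
        size = sum(layer["size"] for layer in manifest.get("layers", []))
        print("{}:{} {:.1f} MB".format(repo, tag, size / 1024 / 1024))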
The current answer should now be marked as deprecated.
As posted in the comments, if your repositories are nested, you will miss projects. Additionally, from experience, it seems to under-count the disk space used by the repositories it does find, and it skips repositories created with GitLab 14 and up.
I was made aware of this by using the GitLab Rails console that is now available: https://docs.gitlab.com/ee/administration/troubleshooting/gitlab_rails_cheat_sheet.html#registry-disk-space-usage-by-project
You can adapt that command to increase the number of projects it inspects, as it only looks at the last 100 projects.

Neo4j Java VM Tuning (v2.3 Community)

From what I can tell, I'm having an issue with my Neo4j v2.3 Community Java VM adding items to the Old Gen heap and never being able to garbage collect them.
Here is a detailed outline of the situation.
I have a PHP file which calls the Dropbox Delta API and writes out the file structure to my Neo4j database. Each call to Delta returns a data set of 2000 items, from which I pull out the information I need. The following is an example of what that query looks like with just one item; usually I send in full batches of 2000 items, as that gave me the best results.
***Following is an example Query***
MERGE (c:Cloud { type:'Dropbox', id_user:'15', id_account:''})
WITH c
UNWIND [
{ parent_shared_folder_id:488417928, rev:'15e1d1caa88',.......}
]
AS items MERGE (i:Item { id:items.path, id_account:'', id_user:'15', type:'Dropbox' })
ON Create SET i = { id:items.path, id_account:'', id_user:'15', is_dir:items.is_dir, name:items.name, description:items.description, size:items.size, created_at:items.created_at, modified:items.modified, processed:1446769779, type:'Dropbox'}
ON Match SET i+= { id:items.path, id_account:'', id_user:'15', is_dir:items.is_dir, name:items.name, description:items.description, size:items.size, created_at:items.created_at, modified:items.modified, processed:1446769779, type:'Dropbox'}
MERGE (p:Item {id_user:'15', id:items.parentPath, id_account:'', type:'Dropbox'})
MERGE (p)-[:Contains]->(i)
MERGE (c)-[:Owns]->(i)
***The query is sent via Everyman***
static function makeQuery($client, $qry) {
return new Everyman\Neo4j\Cypher\Query($client, $qry);
}
This works fine and generally takes 8-10 seconds to run from start to finish.
The Dropbox account I am accessing contains around 35,000 items, and it takes around 18 runs of my PHP script to populate my Neo4j database with the folder/file structure of the Dropbox account.
With every run of this PHP script, around 50 MB of items is added to the Neo4j JVM Old Gen heap, and 30 MB of that is not removed by GC.
The end result, obviously, is that the VM runs out of memory and gets stuck in a constant state of GC throttling.
I have tried a range of Neo4j VM settings, as well as an update from Neo4j v2.2.5 to v2.3, which actually appears to have made the problem worse.
My current settings are as follows,
-server
-Xms4096m
-Xmx4096m
-XX:NewSize=3072m
-XX:MaxNewSize=3072m
-XX:SurvivorRatio=1
I am testing on a Windows 10 PC with 8 GB of RAM and an i5 2.5 GHz quad core, running Java 1.8.0_60.
Any info on how to solve this issue would be greatly appreciated.
Cheers, Jack.
Reduce the new size to 1024m
Change your settings to:
-server
-Xms4096m
-Xmx4096m
-XX:NewSize=1024m
It is most likely that the size of your transactions grows too large.
I recommend sending each of the parents in separately: instead of one big UNWIND, send one statement per parent.
Make sure to use the new transactional HTTP endpoint; I recommend going with neoclient instead of Neo4jPHP.
You should also use parameters instead of literal values!
And don't repeat the user-id, type, etc. properties on every item.
Are you sure you want to connect everything to c and not just the root of the directory structure? I would do the latter.
MERGE (c:Cloud:Dropbox { id_user:{userId}})
MERGE (p:Item:Dropbox {id:{parentPath}})
// owning the parent should be good enough
MERGE (c)-[:Owns]->(p)
WITH c
UNWIND {items} as item
MERGE (i:Item:Dropbox { id:item.path})
ON CREATE SET i += { is_dir:item.is_dir, name:item.name, created_at:item.created_at }
SET i += { description:item.description, size:item.size, modified:item.modified, processed:timestamp()}
MERGE (p)-[:Contains]->(i);
Make sure to use 2.3.0 for the best MERGE performance on relationships.
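For reference, a parameterized statement against the transactional HTTP endpoint looks roughly like the sketch below, shown with Python's requests just to illustrate the payload shape; the URL, credentials and the single-item batch are placeholders, and a PHP client such as neoclient would build the same kind of request for you:
# Minimal sketch: send one parameterized statement per parent to Neo4j 2.x's
# transactional HTTP endpoint. URL, credentials and the items list are placeholders.
import requests

payload = {
    "statements": [{
        "statement": ("MERGE (p:Item:Dropbox {id: {parentPath}}) "
                      "WITH p UNWIND {items} AS item "
                      "MERGE (i:Item:Dropbox {id: item.path}) "
                      "SET i += {name: item.name, size: item.size} "
                      "MERGE (p)-[:Contains]->(i)"),
        "parameters": {
            "parentPath": "/example/folder",
            "items": [{"path": "/example/folder/file.txt", "name": "file.txt", "size": 1024}],
        },
    }]
}
resp = requests.post("http://localhost:7474/db/data/transaction/commit",
                     json=payload, auth=("neo4j", "<password>"))
print(resp.json())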

cleaning up remains of Datastax Enterprise Hadoop install

We have removed the Analytics datacenter, but I am seeing lots of stuff hanging around. For instance, keyspaces:
select * from schema_keyspaces;
HiveMetaStore | True | org.apache.cassandra.locator.SimpleStrategy | {"replication_factor":"1"}
I also have references to CFSCompactionStrategy left in the log files. I want to clean up our ring completely: no weird keyspaces, no CFSCompactionStrategy references. Any ideas?
Edited with some more info from the recommended solution:
UPDATE schema_keyspaces set strategy_options = '{"Cassandra":"2"}' where keyspace_name in ('keyspace1', 'keyspace2');
drop keyspace cfs_archive;
drop keyspace dse_security;
drop keyspace cfs;
DROP KEYSPACE "HiveMetaStore";
Then clean out the folders...
This may be needed as well:
DELETE from system.schema_columnfamilies where keyspace_name = 'cfs';
delete from system.schema_columns where keyspace_name in ('cfs', 'cfs_archive');
You can simply drop the HiveMetaStore (and cfs and cfs_archive). The keyspaces are created the first time an analytics node is started and behave exactly like standard Cassandra keyspaces.
At this point you only have the metadata for them; the data shouldn't be replicated on the other nodes unless you changed the replication strategy for those keyspaces at some point.
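To confirm what is left once the drops have run, you can query the schema tables again; a quick check, using the column names as they appear in Cassandra 2.0's system keyspace:
SELECT keyspace_name, strategy_class, strategy_options FROM system.schema_keyspaces;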

How do I get my UEFI EDK2 based BIOS to automatically load a driver located in its own firmware volume?

I am using the UEFI EDK2 to create a BIOS. I have modified the FDF to move a driver (both UEFI and legacy versions) from the main firmware volume into a separate firmware volume (FV) that I created strictly to hold the driver.
Before I moved the driver from the main FV, I would see the legacy OROM sign-on during POST. However, since I have moved the driver to the new FV, I no longer see the legacy OROM sign-on. It would seem the legacy OROM is no longer being loaded.
It seems that EDK2 "automatically" loads only certain FVs and then dispatches their drivers, but I can't figure out how these particular FVs are identified in EDK2.
I have searched the EDK2 code for several hours trying to find out where/how the FV HOB is created/initialized, but I cannot find this code. I'm guessing I need to add the new FV's GUID to some list or data structure, but I'm really guessing at this point.
Any pointers would be greatly appreciated.
I found the location in the BIOS where the firmware volume HOBs are created (in a proprietary file). I added code there to create a FV HOB for my new firmware volume.
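If the platform's PEI code uses the standard EDK2 HobLib, that step essentially boils down to a BuildFvHob() call; here is a minimal sketch, where NEW_FV_BASE_ADDRESS and NEW_FV_SIZE are placeholders for the values from the FDF layout:
#include <PiPei.h>
#include <Library/HobLib.h>

// Sketch: publish the new firmware volume to the DXE core via an FV HOB.
// NEW_FV_BASE_ADDRESS and NEW_FV_SIZE are placeholders for the FDF layout values.
VOID
PublishNewFvHob (
  VOID
  )
{
  BuildFvHob ((EFI_PHYSICAL_ADDRESS) NEW_FV_BASE_ADDRESS, (UINT64) NEW_FV_SIZE);
}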
After that, I had to install a PPI that could process the new firmware volume. Here is the PPI creation code:
static EFI_PEI_FIRMWARE_VOLUME_INFO_PPI mNewFvPpiInfo = {
  EFI_FIRMWARE_FILE_SYSTEM2_GUID,
  (VOID*) <Starting address of new FV in the ROM>,
  <size of the new FV in the ROM>,
  NULL,
  NULL
};

static EFI_PEI_PPI_DESCRIPTOR mNewFvPpi = {
  (EFI_PEI_PPI_DESCRIPTOR_PPI | EFI_PEI_PPI_DESCRIPTOR_TERMINATE_LIST),
  &gEfiPeiFirmwareVolumeInfoPpiGuid,
  &mNewFvPpiInfo
};
Here is the code that installs the PPI (placed after the new FV HOB is added to the FV HOB list):
(*ppPeiServices)->InstallPpi(ppPeiServices, &mNewFvPpi);
