InfluxDB backup command is throwing an error - influxdb

Running influx backup on my local machine is not working as expected.
My Environment:
InfluxDB version: v2.0.9
OS: Windows 11
I have installed InfluxDB (v2.0.9) on my local machine (Windows 11) and have been using it. Whenever I run the influx backup command it throws errors.
The backup command I am using is:
influx backup -b myDB -t <<my-token>> --host http://localhost:8086 -o myOrg bucket_backup.gz
The error I am getting when running the above command is:
2023/01/27 15:53:58 INFO: Downloading legacy KV snapshot
2023/01/27 15:53:58 INFO: Extracting bucket manifest from legacy KV snapshot
Error: failed to backup metadata: failed to extract bucket metadata from downloaded KV snapshot: unable to open boltdb: CreateFileMapping: Not enough memory resources are available to process this command.
Am I missing anything?

Related

Running a Chainlink Node - Can't connect to database

Using docker-desktop on macOS.
I'm trying to run a node following the instructions on this page.
The database name is node, which is the same as the username: node. The user has access to the database and can log in using psql client.
Connection strings I've tried in the .env file:
postgresql://node#localhost/node
postgresql://node:password#localhost/node
postgresql://node:password#localhost:5432/node
postgresql://node:password#127.0.0.1:5432/node
postgresql://node:password#127.0.0.1/node
When I run the start command: cd ~/.chainlink-kovan && docker run -p 6688:6688 -v ~/.chainlink-kovan:/chainlink -it --env-file=.env smartcontract/chainlink local n, using docker-desktop on macOS, I get the following stack trace:
2020-09-15T14:24:41Z [INFO] Starting Chainlink Node 0.8.15 at commit a904730bd62c7174b80a2c4ccf885de3e78e3971 cmd/local_client.go:50
2020-09-15T14:24:41Z [INFO] SGX enclave *NOT* loaded cmd/enclave.go:11
2020-09-15T14:24:41Z [INFO] This version of chainlink was not built with support for SGX tasks cmd/enclave.go:12
2020-09-15T14:24:41Z [INFO] Locking postgres for exclusive access with 500ms timeout orm/orm.go:69
2020-09-15T14:24:41Z [ERROR] unable to lock ORM: dial tcp 127.0.0.1:5432: connect: connection refused logger/default.go:139 stacktrace=github.com/smartcontractkit/chainlink/core/logger.Error
/chainlink/core/logger/default.go:117
...
Does anyone know how I can resolve this?
The problem is probably caused by the fact that your chainlink database has been locked with an exclusive lock, and that lock was never removed before the node stopped.
What works in this situation (and what worked for me) is to use the pgAdmin UI or a similar tool to find all locks, locate the exclusive lock(s) held on the chainlink database, and note down the process id (or ids, if there are multiple exclusive locks on the chainlink DB).
Log in with your pg client and run SELECT pg_terminate_backend(<pid>); or SELECT pg_cancel_backend(<pid>); with the PID of each of those locks, and keep refreshing pgAdmin to see whether those processes have stopped. Once they have stopped, rerun your chainlink node.
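If you prefer the command line to pgAdmin, a minimal sketch of the same steps using psql (assuming the node database and user from the question; adjust credentials to your setup):
# List sessions holding locks on the "node" database, with their PIDs
psql -U node -d node -c "SELECT pid, locktype, mode, granted, query FROM pg_locks JOIN pg_stat_activity USING (pid) WHERE pg_stat_activity.datname = 'node';"
# Terminate a blocking session by PID (12345 is a placeholder; use a PID reported above)
psql -U node -d node -c "SELECT pg_terminate_backend(12345);"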
The problem is with docker networking.
Add --network host to the docker run command (it must come before the image name, otherwise it is passed to the chainlink binary instead of docker) so that it is:
cd ~/.chainlink-kovan && docker run --network host -p 6688:6688 -v ~/.chainlink-kovan:/chainlink -it --env-file=.env smartcontract/chainlink local n
This fixes the issue.
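Note that host networking has historically not been supported by Docker Desktop for macOS, so if the node still cannot reach Postgres with --network host, a commonly used alternative is to keep the default bridge network and point the connection string at host.docker.internal, which Docker Desktop resolves to the host machine. A sketch of the .env entry, assuming the connection string lives in DATABASE_URL as in Chainlink's setup docs and using the node database/user from the question:
DATABASE_URL=postgresql://node:password@host.docker.internal:5432/node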

Can I run k8s master INSIDE a docker container? Getting errors about k8s looking for host's kernel details

In a docker container I want to run k8s.
When I run kubeadm join ... or kubeadm init commands I sometimes see errors like
\"modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could
not open moddep file
'/lib/modules/3.10.0-1062.1.2.el7.x86_64/modules.dep.bin'.
nmodprobe:
FATAL: Module configs not found in directory
/lib/modules/3.10.0-1062.1.2.el7.x86_64",
err: exit status 1
because (I think) my container does not have the expected kernel header files.
I realise that the container reports its kernel based on the host that is running the container; and looking at k8s code I see
// getKernelConfigReader search kernel config file in a predefined list. Once the kernel config
// file is found it will read the configurations into a byte buffer and return. If the kernel
// config file is not found, it will try to load kernel config module and retry again.
func (k *KernelValidator) getKernelConfigReader() (io.Reader, error) {
    possibePaths := []string{
        "/proc/config.gz",
        "/boot/config-" + k.kernelRelease,
        "/usr/src/linux-" + k.kernelRelease + "/.config",
        "/usr/src/linux/.config",
    }
so I am a bit confused about the simplest way to run k8s inside a container such that it consistently gets past this kernel-info check.
I note that running docker run -it solita/centos-systemd:7 /bin/bash on a macOS host I see:
# uname -r
4.9.184-linuxkit
# ls -l /proc/config.gz
-r--r--r-- 1 root root 23834 Nov 20 16:40 /proc/config.gz
but running the exact same on an Ubuntu VM I see:
# uname -r
4.4.0-142-generic
# ls -l /proc/config.gz
ls: cannot access /proc/config.gz
[Weirdly I don't see this FATAL: Module configs not found in directory error every time, but I guess that is a separate question!]
UPDATE 22/November/2019. I see now that k8s DOES run okay in a container. The real problem was weird/misleading logs. I have added an answer to clarify.
I do not believe that is possible given the nature of containers.
You should instead test your app in a docker container then deploy that image to k8s either in the cloud or locally using minikube.
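For the local minikube route, a minimal sketch (the myapp image name, the Dockerfile, and port 8080 are placeholders, not from the question):
# Start a local single-node cluster
minikube start
# Build the app image inside minikube's Docker daemon so the cluster can find it locally
eval $(minikube docker-env)
docker build -t myapp:local .
# Deploy and expose it
kubectl create deployment myapp --image=myapp:local
kubectl expose deployment myapp --port=8080 --type=NodePort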
Another solution is to run it under kind, which uses the Docker driver instead of VirtualBox:
https://kind.sigs.k8s.io/docs/user/quick-start/
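A minimal sketch of the kind route from that quick-start (assumes Docker and the kind binary are installed; the cluster name test is arbitrary):
# Create a single-node Kubernetes cluster that runs inside a Docker container
kind create cluster --name test
# Verify the control plane came up
kubectl cluster-info --context kind-test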
It seems the FATAL error part was a bit misleading.
It was badly formatted by my test environment (all on one line).
When k8s was failing I saw the FATAL and assumed (incorrectly) that it was the root cause.
When I format the logs nicely I see ...
kubeadm join 172.17.0.2:6443 --token 21e8ab.1e1666a25fd37338 --discovery-token-unsafe-skip-ca-verification --experimental-control-plane --ignore-preflight-errors=all --node-name 172.17.0.3
[preflight] Running pre-flight checks
[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.4.0-142-generic
DOCKER_VERSION: 18.09.3
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.3. Latest validated version: 18.06
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could not open moddep file '/lib/modules/4.4.0-142-generic/modules.dep.bin'\nmodprobe: FATAL: Module configs not found in directory /lib/modules/4.4.0-142-generic\n", err: exit status 1
[discovery] Trying to connect to API Server "172.17.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://172.17.0.2:6443"
[discovery] Failed to request cluster info, will try again: [the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps cluster-info)]
There are other errors later, which I originally thought were a side-effect of the nasty-looking FATAL error, e.g. .... "[util/etcd] Attempt timed out"]}, but I now think the root cause is that the etcd part sometimes times out.
Adding this answer in case someone else is puzzled like I was.

Hyperledger fabric first network error executing ./byfn.sh: line 175: docker-compose: command not found

I have the following prerequisites: node v10.16.3, go v1.8.1 linux/amd64, Docker Podman v1.0.5, Docker-compose 1.24.1. All are included in the path. When I execute "sudo ./byfn.sh up" I'm able to generate the crypto certificates without issues. However, the error "docker-compose: command not found" comes at line 175 when docker-compose is executed in NetworkUp. I tried uninstalling and reinstalling Docker and couldn't find anything relevant on Stack Overflow either - can someone please bail me out?
error
./byfn.sh: line 175: docker-compose: command not found
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
Additional info
host:
  BuildahVersion: 1.6-dev
  Conmon:
    package: podman-1.0.5-1.gitf604175.module+el8.0.0+4017+bbba319f.x86_64
    path: /usr/libexec/podman/conmon
    version: 'conmon version 1.14.0-dev, commit: 6ee5ad0285ca12e6c8d0b663d7a8db5323812ef6-dirty'
  Distribution:
    distribution: '"rhel"'
    version: "8.0"
  kernel: 4.18.0-80.4.2.el8_0.x86_64
  os: linux
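Since byfn.sh is run with sudo, one common cause of this exact error is that sudo resets PATH to its secure_path, so a docker-compose installed in a user-local location (for example via pip install --user) is not visible to the root shell even though it is on your own PATH. A quick check, as a sketch (the symlink target is an assumption; pick a directory that is actually on sudo's PATH):
# Where my user finds docker-compose
command -v docker-compose
# Where root finds it (empty output means it is not on sudo's PATH)
sudo sh -c 'command -v docker-compose'
# If root cannot see it, symlink it into a directory on sudo's secure_path
sudo ln -s "$(command -v docker-compose)" /usr/local/bin/docker-compose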

Permission error when starting workspace on Laradock Docker on Windows 10

Each time I start docker-compose up on Laradock, I receive a permission error for the workspace container on Windows 10. I have tried restarting but it's not working.
PS D:\projects\sites\laradock> docker-compose up -d nginx mysql phpmyadmin redis workspace
laradock_redis_1 is up-to-date
laradock_mysql_1 is up-to-date
laradock_docker-in-docker_1 is up-to-date
laradock_phpmyadmin_1 is up-to-date
Starting laradock_workspace_1 ... error
ERROR: for laradock_workspace_1 Cannot start service workspace: driver failed programming external connectivity on endpoint laradock_workspace_1 (cc289cd2758bb1c99d3f8aab7243220a94078e19b2a1f4a08e4b0b01fbba64a3): Error starting userland proxy: Bind for 0.0.0.0:2222: unexpected error Permission denied
ERROR: for workspace Cannot start service workspace: driver failed programming external connectivity on endpoint laradock_workspace_1 (cc289cd2758bb1c99d3f8aab7243220a94078e19b2a1f4a08e4b0b01fbba64a3): Error starting userland proxy: Bind for 0.0.0.0:2222: unexpected error Permission denied
ERROR: Encountered errors while bringing up the project.
I think you can find your answer here. In short, you should execute the command netcfg -d; this will clean up all networking devices, and requires a reboot.
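The "Bind for 0.0.0.0:2222 ... Permission denied" failure on Windows is often caused by the port falling inside a TCP port range that Windows/Hyper-V has reserved, which is likely what the netcfg -d reset works around. A quick way to check from an elevated prompt before resorting to the reset, as a sketch:
# List the TCP port ranges Windows has excluded from use
netsh interface ipv4 show excludedportrange protocol=tcp
# If 2222 falls inside one of the listed ranges, either change the host port in
# Laradock's .env / docker-compose.yml or clear the reservations and reboot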

startNodeManager.sh not found

I have been trying to run Oracle WebLogic in Docker containers and I am facing trouble starting the NodeManager. I ran the following command:
docker run -d --name MS1 --link wlsadmin:wlsadmin -p 8001:8001 -e ADMIN_PASSWORD=#123 \
-e MS_NAME=MS1 --volumes-from wlsadmin a5e55 createServer.sh
Under normal circumstances it is expected to start the NodeManager.
I am able to access the WebLogic console and start the Managed Server, which then returns the error:
-- Warning For server MS1, the Node Manager associated with machine Machine_MS1 is not reachable
This is the part of the log file that is returned on executing the above "docker run" command:
Domain Home: /u01/oracle/user_projects/domains/base_domain
Managed Server Name: MS1
NodeManager Name:
----> 'weblogic' admin password: ctebs#123
Waiting for WebLogic Admin Server on wlsadmin:7001 to become available...
WebLogic Admin Server is now available. Proceeding...
Setting NodeManager
----> No NodeManager Name set
Node Manager Name: Machine_MS1
Node Manager Home for Container: /u01/oracle/user_projects/domains/base_domain/Machine_MS1
cp: cannot stat '/u01/oracle/user_projects/domains/base_domain /bin/startNodeManager.sh': No such file or directory
cp: cannot stat '/u01/oracle/user_projects/domains/base_domain/nodemanager/*': No such file or directory
NODEMGR_HOME_STR: NODEMGR_HOME="/u01/oracle/user_projects/domains/base_domain/Machine_MS1"
NODEMGRHOME_STR: NodeManagerHome=/u01/oracle/user_projects/domains/base_domain/Machine_MS1
DOMAINSFILE_STR: DomainsFile=/u01/oracle/user_projects/domains/base_domain/Machine_MS1/nodemanager.domains
LOGFILE_STR: LogFile=/u01/oracle/user_projects/domains/base_domain/Machine_MS1/nodemanager.log
sed: can't read /u01/oracle/user_projects/domains/base_domain/Machine_MS1/startNodeManager.sh: No such file or directory
sed: can't read /u01/oracle/user_projects/domains/base_domain/Machine_MS1/nodemanager.properties: No such file or directory
sed: can't read /u01/oracle/user_projects/domains/base_domain/Machine_MS1/nodemanager.properties: No such file or directory
sed: can't read /u01/oracle/user_projects/domains/base_domain/Machine_MS1/nodemanager.properties: No such file or directory
Starting NodeManager in background...
NodeManager started.
Connection refused (Connection refused). Could not connect to NodeManager. Check that it is running at /172.17.0.3:5556.
Starting server MS1 ...No stack trace available.
This Exception occurred at Tue Dec 12 03:38:06 GMT 2017.
weblogic.management.scripting.ScriptException: Error occurred while performing start : Server with name MS1 failed to be started
No stack trace available.
How can I get past this error message?
You can try and follow this OracleWebLogic workshop intro which points out:
The ~/docker-images/OracleWebLogic/samples/1221-domain/container-scripts has useful Bash and WLST scripts that provide three possible modes to run WebLogic Managed Servers on a Docker container. Make sure you have an AdminServer container running before starting a ManagedServer container.
The sample scripts will, by default, attempt to find the AdminServer running at t3://wlsadmin:8001. You can change this.
But most importantly, the AdminServer container has to be linked with Docker's --link parameter.
Below are the three suggestions for running a ManagedServer container within the sample 12c-domain:
Start NodeManager (Manually):
docker run -d --link wlsadmin:wlsadmin startNodeManager.sh
Start NodeManager and Create a Machine Automatically:
docker run -d --link wlsadmin:wlsadmin createMachine.sh
Start NodeManager, Create a Machine, and Create a ManagedServer Automatically
docker run -d --link wlsadmin:wlsadmin createServer.sh
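Note that, as quoted, those commands omit the image name: docker run expects the image before the in-container command, otherwise startNodeManager.sh itself would be treated as the image. Each invocation would therefore look more like the sketch below (the 1221-domain tag is an assumption based on the sample directory name; substitute whatever name or ID your image built from samples/1221-domain has, e.g. the a5e55 ID used in the question):
# Start NodeManager manually against an image built from samples/1221-domain
docker run -d --link wlsadmin:wlsadmin 1221-domain startNodeManager.sh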
See more at "Example of Image with WLS Domain", removed in commit e49bb4d in Apr. 2019, two years later, since Oracle no longer supports those WebLogic versions.
