I'm new to Redis Cache and I just created a local database into which I'm trying to restore an export from an RDB file.
I'm using Redis Insight as a client, running via Docker, with volumes.
I identified the bulk_operation folder, copied the .rdb file in there and now from the Redis Insight interface I'm trying to restore the RDB using bulk actions.
However, the path I'm providing results in "File not found" and I can't figure out the expected format.
./db/bulk_operation/export-7b85c642-3bfc-4904-b0c0-81c84aae6748.rdb
The /db directory is mounted to my local persistence folder.
Any advice is appreciated.
My attempt wasn't far from the truth; the expected format was /db/bulk_operation/export-7b85c642-3bfc-4904-b0c0-81c84aae6748.rdb - note that there is no dot at the beginning.
The fact that RedisInsight also intermittently throws errors did not help: on the first two tries it failed, then it worked.
So don't give up on the first fail :)
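For context, the reason the leading ./ fails is that the path is resolved inside the RedisInsight container, not on your host. A minimal sketch of the kind of setup this assumes (v1 RedisInsight image, where /db is the container's persistence directory; the host path is a placeholder):

docker run -d --name redisinsight -p 8001:8001 -v /path/on/host/persistence:/db redislabs/redisinsight:latest

With a mapping like that, the bulk action dialog wants the container-side path under /db/bulk_operation/, not a host path and not a relative one.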
I recently got into using ddev to develop TYPO3 pages, but I run into the same issue every once in a while. Sometimes (I don't really know what's causing it) the page just stops loading and after a while this error message appears:
PHP Warning
Core: Error handler (BE): PHP Warning: rename(/var/www/html/var/cache/code/cache_core/5d5a7572dd900787722599.temp,/var/www/html/var/cache/code/cache_core/site-configuration.php): No such file or directory in /var/www/html/public/typo3/sysext/core/Classes/Cache/Backend/SimpleFileBackend.php line 234
I know that this error appears when TYPO3 has no permission to write its cache, but I don't know what I can do to prevent it. Restarting Docker fixes it for a short while, but eventually it happens again, and restarting Docker every 10 to 20 minutes really costs a lot of time. Does anybody know what kind of configuration I need to do to prevent this issue?
Btw, I'm using Docker on Windows with TYPO3 9.5.8
As there is no officially accepted answer yet, I'll elaborate on what has been said:
The issue can be resolved by following Susi's example in the comments on the initial post:
Create a docker-compose.tempfs.yaml in the .ddev directory (watch the space indentation carefully!):
version: '3.6'
services:
  web:
    volumes:
      - type: tmpfs
        target: /var/www/html/var
        tmpfs:
          size: 268435456
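After saving the file, the override only takes effect once ddev has been restarted:

ddev restart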
Combining this with the setup for NFS described in https://ddev.readthedocs.io/en/stable/users/performance/ also increases performance.
Note: The most feasible approach for using NFS seems to be creating your own package directory inside the ddev project directory (which already gets mounted) and including it via a Composer "path" repository (e.g. /projectname/Packages/Vendor.MyPackage).
Mounting directories above the ddev directory is complicated and prone to error when using symlinks.
I have the same problem and tried it with the YAML file, but after I created the file and started ddev I got the error:
Uncaught RuntimeException: Could not create directory "/var/www/html/var/log/"!
Does someone have a hint? I also deleted the var folder; after deletion the page runs without issue, but after restarting ddev the error reappears.
I'm on Mac.
I'm trying to install InfluxDB Enterprise Edition using this documentation: https://docs.influxdata.com/enterprise/v1.2/production_installation/. The requirements suggest using either a license-key or a license-path, and I'm doing it with the license key.
In Step 2, after installing, configuring and starting the data nodes, I try to join the data nodes to the cluster. But executing influxd-ctl add-data enterprise-data-01:8088 gives me the error:
add-data: operation exited with error: open /tmp/influx-enterprise.key.json: no such file or directory
although I configured it to use the license-key rather than the license JSON file.
I also have the JSON file, so I tried it with the license-path, but I'm still getting the same error. Has somebody else encountered the same issue?
EDIT
The issue has been resolved: I had to restart the data nodes after I changed the configuration to use the license-path (facepalm). I ran into this problem because I had previously entered an old license key.
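For anyone hitting the same thing, roughly what the fix looked like (a sketch only - the config location, license file path and service name are assumptions based on a standard Enterprise data node install):

# in /etc/influxdb/influxdb.conf on each data node, point the [enterprise] section at the file:
#   [enterprise]
#     license-key = ""
#     license-path = "/etc/influxdb/influx-enterprise.key.json"
sudo systemctl restart influxdb                 # restart every data node so the change is picked up
influxd-ctl add-data enterprise-data-01:8088    # then re-run the join from the meta node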
This one is quite strange.
I am running a very typical Docker container that holds a Rails API. Inside this API, I have an endpoint which takes an upload of a CSV and does some things and stuff.
Here is the exact flow:
vim spec/fixtures/bid_update.csv
# fill it with some data
# now we call the spec that uses this fixture
docker-compose run --rm web bundle exec rspec spec/requests/bids_spec.rb
# and now the csv is loaded and I can see it as plaintext
However, after creating this, I decided to change the content of the CSV. So I do this, adding a column and a corresponding value for each row.
Now, however, when we run the spec again after saving this, it has the old version of the CSV - the one originally used at the breakpoint in the spec.
cat'ing out the CSV shows it clearly should have the new content.
Restarting the VM does nothing. The only solution I've found is to docker-machine rm dev and build a new machine (my main one for this is called dev).
I am entirely perplexed as to what could cause this or a simple means to fix it (building with all those images takes a while).
Ideas? Inform me I'm an idiot and I just had to press 0 for an operator and they would have fixed it?
Any help appreciated :)
I think it could be an issue with how VirtualBox shares folders with your environment. More information here: https://github.com/mitchellh/vagrant/issues/351#issuecomment-1339640
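If it is the shared-folder layer, one way to check (machine name dev taken from the question, path is a placeholder) is to compare what the VM sees with what your host editor saved:

docker-machine ssh dev cat /path/inside/vm/spec/fixtures/bid_update.csv

If that output is stale while cat on the host shows the new content, it's the vboxsf mount caching the file rather than Docker itself.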
I am running a Cassandra cluster in Docker containers using fleet for management. I am able to get the cluster up and running, but if I bring the units down with fleet and then back up again, the containers fail. The Cassandra log has this entry on the second start:
Cannot add table 'role_members' to non existing keyspace 'system_auth'.
Fatal configuration error; unable to start server. See log for stacktrace.
INFO 20:59:34 InetAddress /172.17.8.102 is now DOWN
ERROR 20:59:34 Fatal configuration error
org.apache.cassandra.exceptions.ConfigurationException: Cannot add table 'role_members' to non existing keyspace 'system_auth'.
at org.apache.cassandra.service.MigrationManager.announceNewColumnFamily(MigrationManager.java:284) ~[apache-cassandra-2.2.0.jar:2.2.0]
at org.apache.cassandra.service.MigrationManager.announceNewColumnFamily(MigrationManager.java:275) ~[apache-cassandra-2.2.0.jar:2.2.0]
at org.apache.cassandra.service.StorageService.maybeAddTable(StorageService.java:1046) ~[apache-cassandra-2.2.0.jar:2.2.0]
at org.apache.cassandra.service.StorageService.doAuthSetup(StorageService.java:1034) ~[apache-cassandra-2.2.0.jar:2.2.0]
at org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:967) ~[apache-cassandra-2.2.0.jar:2.2.0]
at org.apache.cassandra.service.StorageService.initServer(StorageService.java:698) ~[apache-cassandra-2.2.0.jar:2.2.0]
at org.apache.cassandra.service.StorageService.initServer(StorageService.java:581) ~[apache-cassandra-2.2.0.jar:2.2.0]
at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:291) [apache-cassandra-2.2.0.jar:2.2.0]
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:481) [apache-cassandra-2.2.0.jar:2.2.0]
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:588) [apache-cassandra-2.2.0.jar:2.2.0]
I can't find any information on this particular error, and I really have no idea why it's happening. The closest information I can find is that the system_auth keyspace needs to be configured specially if you are not using the default AllowAllAuthenticator, but I am using the default. I haven't changed it in the cassandra.yaml file.
Does anyone know why this might be happening?
Is it possible that you are using CassandraAuthorizer without using PasswordAuthenticator? I think that might not work and cause this particular error.
system_auth is not applicable to AllowAllAuthenticator; you need to use PasswordAuthenticator instead. If you configure cassandra.yaml in the following way:
authenticator: PasswordAuthenticator
authorizer: CassandraAuthorizer
And then restart Cassandra, it should create the system_auth keyspace for you. If you don't want to set up authorization, you can always use AllowAllAuthorizer instead. More information can be found here.
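A quick way to verify the keyspace was created after the restart (assuming the default cassandra/cassandra superuser is still in place):

cqlsh -u cassandra -p cassandra -e "DESCRIBE KEYSPACE system_auth;"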
This turned out to be a rather unique configuration issue I had. I was mapping /var/lib/cassandra on the host to /var/lib/cassandra inside my Docker container. But I was also inadvertently mapping /var/lib/cassandra/data to an auto-generated Docker directory on the host. As such, when I stopped and restarted the containers, the data directory would disappear, and Cassandra would fail as it tried to recreate data from the commitlog directory.
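In other words, the container should get a single mount for the whole Cassandra data root and no extra mount for the data subdirectory. A rough sketch (the official cassandra image is used here only for illustration; my actual unit files differ):

docker run -d --name cassandra-node \
  -v /var/lib/cassandra:/var/lib/cassandra \
  cassandra:2.2

That way data/, commitlog/ and saved_caches/ all live on the same host path and survive a container restart together.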
I ran into the problem just by following the DataStax "Initializing a multiple node cluster (single data center)" tutorial.
I solved the same problem by deleting the whole content of /var/lib/cassandra, not only the content of /var/lib/cassandra/system/.
Why?
I think Kris identified the real source of the problem: when restarting, the C* service found the commit log full and tried to recover by replaying the commits found there, failing due to a different configuration and a different table structure...
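Concretely, with the node (or container) stopped, that clean-up amounts to something like:

sudo rm -rf /var/lib/cassandra/*

which throws away data/, commitlog/ and saved_caches/ as well as system/, so the node bootstraps from a clean state on the next start.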
I am getting this:
Windows could not start the SphinxSearch service on Local Computer.
Error 1067: The process terminated unexpectedly.
I got the installation instructions from this:
http://blog.robbsnet.com/2011/07/how-to-install-and-implement-sphinx.html
The build process is complete, but when I start the SphinxSearch service I get the error above.
Try running searchd manually from Command Prompt. Maybe it will give you a useful error message.
Try also looking in searchd.log
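Something along these lines (the install path is a placeholder - use wherever your sphinx.conf actually lives):

C:\sphinx\bin\searchd.exe --console --config C:\sphinx\sphinx.conf

--console keeps searchd in the foreground so any configuration or index errors are printed straight to the terminal instead of being swallowed by the service manager.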
For anyone who is still having problems with the Windows service:
Make sure your configuration is correct for both the database and the paths, and that the needed folders and files are created.
Make sure that the service "command" matches the correct paths and files. To do so:
Open Administrative Tools, click on Local Services, then find SphinxSearch. Open its properties and read the command line the service is trying to execute. If the configuration path doesn't match the service start command, it will fail to start.
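Alternatively, from an elevated Command Prompt you can print the exact command line the service uses (assuming the service really is named SphinxSearch):

sc qc SphinxSearch

The BINARY_PATH_NAME it reports should point at your searchd.exe and the sphinx.conf you expect.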
FYI, this can also happen because your search data files are corrupt. An easy fix for that is to delete the index files and rebuild them.