INSERT INTO ... in MariaDB on Ubuntu under Windows WSL2 results in corrupted data in some columns - docker

I am migrating a MariaDB database into a Linux docker container.
I am using mariadb:latest on Ubuntu 20.04 LTS under Windows 10 WSL2, working through VS Code Remote - WSL.
I have copied the SQL dump into the container and imported it into the InnoDB database, which has DEFAULT CHARACTER SET utf8. It does not report any errors:
> source /test.sql
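(For reference, I copied the dump in and ran the import roughly like this; the container name is illustrative:
docker cp test.sql mariadb:/test.sql
docker exec -it mariadb mysql -uroot -p
and then ran source /test.sql at the MySQL prompt as above.)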
That file does this (actual data truncated for this post):
USE `mydb`;
DROP TABLE IF EXISTS `opsitemtest`;
CREATE TABLE `opsitemtest` (
`opId` int(11) NOT NULL AUTO_INCREMENT,
`opKey` varchar(50) DEFAULT NULL,
`opName` varchar(200) DEFAULT NULL,
`opDetails` longtext,
PRIMARY KEY (`opId`),
KEY `token` (`opKey`)
) ENGINE=InnoDB AUTO_INCREMENT=4784 DEFAULT CHARSET=latin1;
insert into `opsitemtest`(`opId`,`opKey`,`opName`,`opDetails`) values
(4773,'8vlte0755dj','VTools addin for MSAccess','<p>There is a super helpful ...'),
(4774,'8vttlcr2fTA','BAS OLD QB','<ol>\n<li><a href=\"https://www.anz.com/inetbank/bankmain.asp\" ...'),
(4783,'9c7id5rmxGK','STP - Single Touch Payrol','<h1>Gather data</h1>\n<ol style=\"list-style-type: decimal;\"> ...');
If I source a subset of 12 records of the table in question, all the columns are correctly populated.
If I source the full set of data for the same table (4700 rows), where everything else is the same, many of the opDetails longtext fields show a length in SQLyog but no data is visible. If I run a SELECT on that column there are no errors, but some of the opDetails fields are "empty" (meaning: you can't see any data), and when I serialize that field, the opDetails column of some records (not all) contains
"opDetails" : "\u0000\u0000\u0000\u0000\u0000\u0000\",
( and many more \u0000 ).
The opDetails field contains HTML fragments. I am guessing it is something to do with that content and possibly the CHARSET, although that doesn't explain why the error shows up only when a large number of rows is imported. The same row imported as part of a 12-row subset works correctly.
The same test with the full set of data on a Windows box, with MariaDB running directly on that host (i.e. no Ubuntu, WSL, etc.), works perfectly.
I tried setting the table charset to utf8 to match the database default, but that had no effect. I assume it is some kind of Windows WSL issue, but I am running the source command in the container, all within the Ubuntu host.
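For reference, the charset change was done with something along these lines (a sketch, using the utf8 database default):
ALTER TABLE `opsitemtest` CONVERT TO CHARACTER SET utf8;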
The MariaDB data folder is mapped using a volume, again all inside the Ubuntu host:
volumes:
- ../flowt-docker-volumes/mariadb-data:/var/lib/mysql
Can anyone offer any suggestions while I go through and try manually removing content until it works? I am really in the dark here.
EDIT: I just ran the same import process on a Mac, into a MariaDB container on the macOS host, to check whether it was actually related to Windows/WSL, and the macOS database has the same issue. So maybe it is a MariaDB Docker issue?
EDIT 2: It looks like it has nothing to do with the actual content of opDetails. For a given row that is showing the symptoms, whether or not the data gets imported correctly seems to depend on how many rows I am importing! For a small number of rows, all is well. For a large number there is missing data, but always the same rows and opDetails field. I will try importing in small chunks but overall the table isn't THAT big!
EDIT 3: I tried a docker-compose without a volume and imported the data directly into the MariaDB container. Same problem. I was wondering whether it was a file system incompatibility or some kind of speed issue. Yes, grasping at straws!
Thanks,
Murray

OK. I got it working. :-)
One piece of info I neglected to mention, and it might not be relevant anyway, is that I was importing from an SQL dump taken from 10.1.48-MariaDB-0ubuntu0.18.04.1, because I was migrating a legacy app.
So, with my docker-compose:
Version          Result
mysql:latest     data imported correctly
mariadb:latest   failed as per this issue
mariadb:10.7.4   failed as per this issue
mariadb:10.7     failed as per this issue
mariadb:10.6     data imported correctly
mariadb:10.5     data imported correctly
mariadb:10.2     data imported correctly
Important: remember to completely remove the external volume mount folder content between tests!
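For anyone repeating this, a minimal sketch of the compose service I settled on (service name, password, and volume path are illustrative; only the pinned image tag matters):
services:
  mariadb:
    image: mariadb:10.6
    environment:
      - MARIADB_ROOT_PASSWORD=example
    volumes:
      - ../flowt-docker-volumes/mariadb-data:/var/lib/mysql
and between tests I cleared the mount folder with something like:
docker-compose down
rm -rf ../flowt-docker-volumes/mariadb-data/*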
So, now I am not sure whether the issue was some kind of SQL incompatibility that I need to be aware of, or whether it is a bug that was introduced between v10.6 and v10.7. Therefore I have not logged a bug report. If others with more expertise think this is a bug, I am happy to make a report.
For now I am happy to use 10.6 so I can progress the migration (the deadline is looming!).
So, this is sort of "solved".
Thanks for all your help. If I discover anything further I will post back here.
Murray

Related

How do I change the encoding of my BBjServices?

My BBjServices have a different encoding, which causes data from the database to be displayed incorrectly.
Where can I change the encoding of the services?
In your BBj installation, open BBj.properties in the cfg folder and search for basis.java.args.BBjServices.
This already has a lot of values set. You want to add or change
-Dfile.encoding="Your encoding".
Remember to shut down the services first and restart them afterward.
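For example, to switch the services to UTF-8, the line might end up looking something like this (the -Xmx value is just a placeholder for whatever arguments are already there):
basis.java.args.BBjServices=-Xmx512m -Dfile.encoding=UTF-8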

Is there a way to configure the filename for a Neo4j Desktop database dump file to exclude timestamp?

I'm a first time user of Neo4j and following a training course to install and learn the basics.
I've installed Neo4j Desktop on a Windows machine and can see that it comes with a demo DB called "Movie DBMS". I'm trying to follow steps to dump the database, by stopping the database, clicking on "..." and then "Dump".
The dump errors with the following error in the log file:
[2022-01-31 12:54:36.022] [error] Selecting JVM - Version:11.0.8+10-LTS, Name:OpenJDK 64-Bit Server VM, Vendor:Azul Systems, Inc.
java.nio.file.InvalidPathException: Illegal char <:> at index 128: C:\Users\<me>\.Neo4jDesktop\relate-data\projects\<my project name>\movie-dbms-neo4j-31-Jan-2022-12:54:31.dump
It would appear that the automatic configuration for the dump file is adding a timestamp with includes colons (hh:mm:ss). How can I configure the file name to either exclude the timestamp or avoid using ":"?
Thanks.
I had no responses, but I've figured it out myself.
The answer was to use the command line to dump the database manually. That way I can specify my own "--to=" filename which doesn't include a ":".
Details in this section of the manual: https://neo4j.com/docs/operations-manual/current/backup-restore/offline-backup/#offline-backup
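For example, the manual dump can look roughly like this when run from the DBMS's bin folder (database name and target path are illustrative):
neo4j-admin dump --database=neo4j --to="C:\backups\movie-dbms-31-Jan-2022.dump"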

FireDAC (FDQuery) - database with dot in its name

I have got this problem with FireDAC -> FDQuery component when it tries to select data from a database with '.' (dot) in its name.
The database name is TEST_2.0 and the error on Opening the dataset says:
Could not find server 'TEST_2' in sys.servers [...]
I have tried {TEST_2.0} (curly brackets) and [TEST_2.0] (square brackets). Setting the QuotedIdentifiers (Format Options) property to True does not fix the problem either. In the SQL query I can add 'SET QUOTED_IDENTIFIER ON;', but this breaks inserts into the dataset.
The FDConnection component can connect to that server and that database using the MSSQL driver without problems. It seems it is the dataset that doesn't handle it. UniDAC seems to handle everything without any problems.
I am using RAD Studio 10.2.
Has anyone found any solution to this? Thanks in advance for any replies.
I got a response from Embarcadero and it works for me:
"The problem is not in FireDAC, but in SQL Server ODBC driver
SQLPrimaryKeys function. It fails to work with a catalog name
containing a dot. FireDAC uses this function to get primary key fields
for a result set, when fiMeta is included into FetchOptions.Items. So,
as a workaround / solution, please exclude fiMeta from
FetchOptions.Items."
What is wrong?
I was able to reproduce what you've described here. I ended up at a metainformation command, specifically the SQLPrimaryKeys ODBC function call. I used the SQL Server Native Client 11.0 driver connected to Microsoft SQL Server Express 12.0.2000.8, a local database server instance.
When I tried to execute the following SQL command (with the TEST_2.0 database created) through a TFDQuery component instance with default settings (the linked connection object was left with an empty database connection parameter) in a Delphi Tokyo application:
SELECT * FROM [TEST_2.0].INFORMATION_SCHEMA.TABLES
I got this exception raised when the SQLPrimaryKeys function was called with the CatalogName parameter set to TEST_2.0 (from within the metainformation statement method Execute):
[FireDAC][Phys][ODBC][Microsoft][SQL Server Native Client 11.0][SQL
Server]Could not find server 'TEST_2' in sys.servers. Verify that the
correct server name was specified. If necessary, execute the stored
procedure sp_addlinkedserver to add the server to sys.servers.'.
My next attempt was naturally modifying that CatalogName parameter value to [TEST_2.0] whilst debugging, but even that failed for a similar reason (it just failed for the name [TEST_2), so it seems that the SQLPrimaryKeys ODBC function implementation in the driver I used cannot properly handle dotted CatalogName parameter values (it appears to ignore everything after the dot).
What can I do?
The only solution seems to be fixing the ODBC drivers. The workaround I would suggest is not using dots in database names (as discussed e.g. in this thread). Another is preventing FireDAC from fetching dataset object metadata (by excluding the fiMeta option from the FetchOptions.Items set). That puts the responsibility of supplying dataset object metadata on you (at this time only the primary key definition), as in the sketch below.
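A rough sketch of that second workaround (component and key field names are assumed, not taken from the question):
// uses FireDAC.Stan.Option (for fiMeta)
// Stop FireDAC from calling SQLPrimaryKeys to fetch result set metadata.
FDQuery1.FetchOptions.Items := FDQuery1.FetchOptions.Items - [fiMeta];
// With metadata retrieval off, supply the primary key yourself.
FDQuery1.UpdateOptions.KeyFields := 'Id';
FDQuery1.Open;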

How can I import a db (archive) into neo4j?

I have an archive (tar.gz) which was dumped from a production Neo4j server. Now I want to get this db locally on my computer. I did it in several steps:
1. I ran this command: neo4j-admin load --from=<archive-path> --database=<database> [--force] (I did this correctly: I set the path to my archive and gave a name for the database).
2. After running the previous command, a folder appeared in data/databases. That's cool, I thought.
3. Next I changed the active db by changing this value:
dbms.active_database=graph.db.test
When everything should have worked, I typed "./neo4j console". It started on localhost:7474 and I can see that the db is mine (for example the "test" db), but it is all empty: no node labels, no relationships, and the simple query "MATCH (n) RETURN n" returns no records. But I am certain it should not be empty.
Question:
What did I do incorrectly, and what do I need to do to make it work?
I believe you are having problems with the --database parameter and the value of the dbms.active_database property.
Try using the same value for both:
bin/neo4j-admin load --from=/backups/graph.db/2016-10-02.dump --database=graph.test.db --force
and
dbms.active_database=graph.test.db
after it, restart Neo4j.
You may try refreshing and re-entering your credentials. Also, it may be the case that your file might not be unzipped properly.

Thinking Sphinx : Error while indexing

indexing index 'users_core'...
ERROR: index 'users_core': sql_range_query: Incorrect key file for table '/tmp/#sql_ff2_0.MYI'; try to repair it (DSN=mysql://root:*#localhost:3306/myname)
What does this mean?
I can't find the file '/tmp/#sql_ff2_0.MYI'.
How do I repair it?
This actually has nothing to do with Ruby/Rails; I just ran into this myself and had quite a bit of trouble finding a real answer.
The issue is that the Sphinx indexer script is trying to create a temporary MySQL table while building the index. In my case MySQL ran out of disk space for the temporary table (the default location is /tmp, and my partition was only 2GB).
As data is added to this temporary table and the disk fills up, the table becomes corrupt because the last bit of data written to it is truncated.
The solution is to ensure the drive that MySQL writes temporary tables to has enough space on it. I ended up changing the temp directory in my.cnf to a different, larger partition. The default location for the config file on Debian is /etc/mysql/my.cnf.
Add:
tmpdir = /var/lib/mysql/tmp
Best place to put it is next to the datadir setting in the [mysqld] section.
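A sketch of the relevant my.cnf section (datadir is whatever you already have; the tmp path is only an example):
[mysqld]
datadir = /var/lib/mysql
tmpdir  = /var/lib/mysql/tmp
The directory has to exist and be writable by the mysql user, e.g.:
mkdir -p /var/lib/mysql/tmp
chown mysql:mysql /var/lib/mysql/tmp
Then restart MySQL for the change to take effect.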
I can't find the file
'/tmp/#sql_ff2_0.MYI'
This is perhaps some temporary table MySQL creates during the query.
Did you try to repair the main table you use in the sql_range_query?
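For example, assuming a default Thinking Sphinx setup where the users_core index reads from the users table, a repair attempt could be as simple as:
REPAIR TABLE users;
(REPAIR TABLE only applies to MyISAM/ARCHIVE/CSV tables; the table name here is an assumption based on the index name.)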
