Reindex error when upgrading to Elasticsearch 7 - ruby-on-rails

We are currently using the searchkick gem and it works great. Recently I tried upgrading Elasticsearch to 7 in my local development environment. I got it up and running using Homebrew (after researching that I needed to run rm -fr /usr/local/var/lib/elasticsearch). When I went to reindex one of my models, I got the following mapping error:
Elasticsearch::Transport::Transport::Errors::BadRequest: [400] {"error":{"root_cause":[{"type":"mapper_parsing_exception","reason":"Root mapping definition has unsupported parameters: [cosmetics/products : {properties={product={type=keyword}}}]"}],"type":"mapper_parsing_exception","reason":"Failed to parse mapping [_doc]: Root mapping definition has unsupported parameters: [cosmetics/products : {properties={product={type=keyword}}}]","caused_by":{"type":"mapper_parsing_exception","reason":"Root mapping definition has unsupported parameters: [cosmetics/products : {properties={product={type=keyword}}}]"}},"status":400}
This error does not occur when using Elasticsearch 6.8.4. Can anyone point me to a resource for resolving this issue?

Mapping types are not supported in version 7.
To solve this, remove all mapping types (_doc, for example); see the sketch below.
Indices created in Elasticsearch 6.0.0 or later may only contain a single mapping type. Indices created in 5.x with multiple mapping types will continue to function as before in Elasticsearch 6.x. Types will be deprecated in APIs in Elasticsearch 7.0.0, and completely removed in 8.0.0.
Check this out
And this
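As a concrete illustration of what a typeless mapping looks like, here is a minimal sketch using the Elasticsearch 7.x Java high-level REST client (not the searchkick/Rails stack from the question; the index name, field, host, and port are assumptions based on the error message above). The point is simply that no type name appears anywhere in the mapping body:
import org.apache.http.HttpHost;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.client.indices.CreateIndexRequest;
import org.elasticsearch.common.xcontent.XContentType;

public class TypelessMappingExample {
  public static void main(String[] args) throws Exception {
    // Connects to a local node; host and port are assumptions for this sketch.
    try (RestHighLevelClient client = new RestHighLevelClient(
        RestClient.builder(new HttpHost("localhost", 9200, "http")))) {
      CreateIndexRequest request = new CreateIndexRequest("cosmetics");
      // A 6.x-style wrapper such as "products": { "properties": ... } is what triggers
      // the mapper_parsing_exception; in 7.x the mapping starts directly at "properties".
      request.mapping(
          "{\"properties\": {\"product\": {\"type\": \"keyword\"}}}",
          XContentType.JSON);
      client.indices().create(request, RequestOptions.DEFAULT);
    }
  }
}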

In addition to the excellent answer by Assael Azran and the great link he shared, here is some additional information:
If you have indices created in 5.x or before, you'll need to re-index them while you are on 6.8, BEFORE upgrading to 7.x (a re-index sketch follows below).
If you have indices with multiple types, you'll need to re-index them per document type.
Custom type names like products in your case should be replaced with _doc or doc. Ideally, the type name should not appear at all when defining mappings. See this.
Any snapshots taken on 5.x or earlier will not work on 7.x, so you'll need to restore the indices from those snapshots while you are on 6.8, re-index them, and then take new snapshots. After that you can delete the restored indices and the older snapshots.
Have a look at this Upgrade link.
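As an illustration of the re-index step mentioned above, here is a minimal sketch using the Elasticsearch Java high-level REST client (again, not the searchkick/Rails stack from the question; the index names and connection details are placeholders, the client version is assumed to match the 6.8 cluster, and the destination index is assumed to have been created with a typeless mapping first):
import org.apache.http.HttpHost;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.index.reindex.BulkByScrollResponse;
import org.elasticsearch.index.reindex.ReindexRequest;

public class ReindexBeforeUpgrade {
  public static void main(String[] args) throws Exception {
    try (RestHighLevelClient client = new RestHighLevelClient(
        RestClient.builder(new HttpHost("localhost", 9200, "http")))) {
      // Copy the old (typed) index into a new index that carries the typeless mapping.
      ReindexRequest request = new ReindexRequest();
      request.setSourceIndices("cosmetics");
      request.setDestIndex("cosmetics_v7");
      BulkByScrollResponse response = client.reindex(request, RequestOptions.DEFAULT);
      System.out.println("Reindexed documents: " + response.getCreated());
    }
  }
}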
Hope this helps

Related

INSERT INTO ... in MariaDB in Ubuntu under Windows WSL2 results in corrupted data in some columns

I am migrating a MariaDB database into a Linux Docker container.
I am using mariadb:latest in Ubuntu 20 LTS via Windows 10 WSL2 via VSCode Remote WSL.
I have copied the SQL dump into the container and imported it into the InnoDB database, which has DEFAULT CHARACTER SET utf8. It does not report any errors:
> source /test.sql
That file does this (actual data truncated for this post):
USE `mydb`;
DROP TABLE IF EXISTS `opsitemtest`;
CREATE TABLE `opsitemtest` (
`opId` int(11) NOT NULL AUTO_INCREMENT,
`opKey` varchar(50) DEFAULT NULL,
`opName` varchar(200) DEFAULT NULL,
`opDetails` longtext,
PRIMARY KEY (`opId`),
KEY `token` (`opKey`)
) ENGINE=InnoDB AUTO_INCREMENT=4784 DEFAULT CHARSET=latin1;
insert into `opsitemtest`(`opId`,`opKey`,`opName`,`opDetails`) values
(4773,'8vlte0755dj','VTools addin for MSAccess','<p>There is a super helpful ...'),
(4774,'8vttlcr2fTA','BAS OLD QB','<ol>\n<li><a href=\"https://www.anz.com/inetbank/bankmain.asp\" ...'),
(4783,'9c7id5rmxGK','STP - Single Touch Payrol','<h1>Gather data</h1>\n<ol style=\"list-style-type: decimal;\"> ...');
If I source a subset of 12 records of the table in question, all the columns are correctly populated.
If I source the full set of data for the same table (4700 rows), where everything else is the same, many of the opDetails long text fields have a length showing in SQLyog but no data is visible. If I run a SELECT on that column there are no errors, but some of the opDetails fields are "empty" (meaning: you can't see any data), and when I serialize that field, the opDetails column of some records (not all) has
"opDetails" : "\u0000\u0000\u0000\u0000\u0000\u0000\",
( and many more \u0000 ).
The opDetails field contains HTML fragments. I am guessing it is something to do with that content and possibly the CHARSET, although that doesn't explain why the error shows up only when there are a large number of rows imported. The same row imported via a set of 12 rows works correctly.
The same test of the full set of data on a Windows box with MariaDB running on that host (ie no Ubuntu or WSL etc) all works perfectly.
I tried setting the table charset to utf8 to match the database default, but that had no effect. I assume it is some kind of Windows WSL issue, but I am running the source command in the container, all within the Ubuntu host.
The MariaDB data folder is mapped using a volume, again all inside the Ubuntu container:
volumes:
- ../flowt-docker-volumes/mariadb-data:/var/lib/mysql
Can anyone offer any suggestions while I go through and try manually removing content until it works? I am really in the dark here.
EDIT: I just ran the same import process on a Mac to a MariaDB container on the OSX host to check whether it was actually related to Windows WSL etc and the OSX database has the same issue. So maybe it is a MariaDB docker issue?
EDIT 2: It looks like it has nothing to do with the actual content of opDetails. For a given row that is showing the symptoms, whether or not the data gets imported correctly seems to depend on how many rows I am importing! For a small number of rows, all is well. For a large number there is missing data, but always the same rows and opDetails field. I will try importing in small chunks but overall the table isn't THAT big!
EDIT 3: I tried a docker-compose without a volume and imported the data directly into the MariaDB container. Same problem. I was wondering whether it was a file system incompatibility or some kind of speed issue. Yes, grasping at straws!
Thanks,
Murray
OK. I got it working. :-)
One piece of info I neglected to mention, and it might not be relevant anyway, is that I was importing from an SQL dump from 10.1.48-MariaDB-0ubuntu0.18.04.1 because I was migrating a legacy app.
So, with my docker-compose:
Version            Result
mysql:latest       data imported correctly
mariadb:latest     failed as per this issue
mariadb:10.7.4     failed as per this issue
mariadb:10.7       failed as per this issue
mariadb:10.6       data imported correctly
mariadb:10.5       data imported correctly
mariadb:10.2       data imported correctly
Important: remember to completely remove the external volume mount folder content between tests!
So now I am not sure whether the issue was some kind of SQL incompatibility that I need to be aware of, or whether it is a bug that was introduced between 10.6 and 10.7. Therefore I have not logged a bug report. If others with more expertise think this is a bug, I am happy to make a report.
For now I am happy to use 10.6 so I can progress the migration; the deadline is looming!
So, this is sort of "solved".
Thanks for all your help. If I discover anything further I will post back here.
Murray

Delete Bigtable row in Apache Beam 2.2.0

In Dataflow 1.x versions, we could use CloudBigtableIO.writeToTable(TABLE_ID) to create, update, and delete Bigtable rows. As long as a DoFn was configured to output a Mutation object, it could output either a Put or a Delete, and CloudBigtableIO.writeToTable() successfully created, updated, or deleted a row for the given RowID.
It seems that the new Beam 2.2.0 API uses the BigtableIO.write() function, which works with KV<RowID, Iterable<Mutation>>, where the Iterable contains a set of row-level operations. I have figured out how to use that to work on cell-level data, so it's OK to create new rows and create/delete columns, but how do we delete rows now, given an existing RowID?
Any help appreciated!
Some further clarification:
From this document: https://cloud.google.com/bigtable/docs/dataflow-hbase I understand that changing the dependency artifactId from bigtable-hbase-dataflow to bigtable-hbase-beam should be compatible with Beam version 2.2.0, and the article suggests doing Bigtable writes (and hence deletes) in the old way by using CloudBigtableIO.writeToTable(). However, that requires imports from the com.google.cloud.bigtable.dataflow family of dependencies, which the release notes suggest is deprecated and shouldn't be used (and indeed it seems incompatible with the new Configuration classes, etc.).
Further update:
It looks like my pom.xml didn't refresh properly after the change from the bigtable-hbase-dataflow to the bigtable-hbase-beam artifactId. Once the project got updated, I am able to import from the com.google.cloud.bigtable.beam.* package, which seems to be working at least for a minimal test.
HOWEVER: It looks like there are now two different Mutation classes:
com.google.bigtable.v2.Mutation and
org.apache.hadoop.hbase.client.Mutation?
And in order to get everything to work together, do I need to be careful about which Mutation is used for which operation?
Is there a better way to do this?
Unfortunately, Apache Beam 2.2.0 doesn't provide a native interface for deleting an entire row (including the row key) in Bigtable. The only full solution would be to continue using the CloudBigtableIO class as you already mentioned.
A different solution would be to just delete all the cells from the row. This way, you can fully move forward with using the BigtableIO class. However, this solution does NOT delete the row key itself, so the cost of storing the row key remains. If your application requires deleting many rows, this solution may not be ideal.
import com.google.bigtable.v2.Mutation;
import com.google.bigtable.v2.Mutation.DeleteFromRow;

// mutation to delete all cells from a row
Mutation deleteAllCells = Mutation.newBuilder()
    .setDeleteFromRow(DeleteFromRow.getDefaultInstance())
    .build();
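For context, here is a minimal, hedged sketch of how such a mutation could be fed to BigtableIO.write(), which consumes KV<ByteString, Iterable<Mutation>> elements. The project, instance, and table IDs and the input PCollection of row keys are placeholders, not something from the original answer:
import com.google.bigtable.v2.Mutation;
import com.google.protobuf.ByteString;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.values.KV;
import java.util.Collections;

class RowKeyToDeleteFromRow extends DoFn<String, KV<ByteString, Iterable<Mutation>>> {
  @ProcessElement
  public void processElement(ProcessContext c) {
    // Build the "delete all cells" mutation for this row key.
    Mutation deleteAllCells = Mutation.newBuilder()
        .setDeleteFromRow(Mutation.DeleteFromRow.getDefaultInstance())
        .build();
    Iterable<Mutation> mutations = Collections.singletonList(deleteAllCells);
    c.output(KV.of(ByteString.copyFromUtf8(c.element()), mutations));
  }
}

// Usage in the pipeline (IDs are placeholders; configuration methods vary slightly across Beam versions):
// BigtableOptions options = new BigtableOptions.Builder()
//     .setProjectId("my-project")
//     .setInstanceId("my-instance")
//     .build();
// rowKeys.apply(ParDo.of(new RowKeyToDeleteFromRow()))
//        .apply(BigtableIO.write()
//            .withBigtableOptions(options)
//            .withTableId("my-table"));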
I would suggest continuing to use CloudBigtableIO and bigtable-hbase-beam. It shouldn't be too different from CloudBigtableIO in bigtable-hbase-dataflow.
CloudBigtableIO uses the HBase org.apache.hadoop.hbase.client.Mutation classes and translates them into the Bigtable equivalents under the covers.
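To make that concrete, here is a minimal, hedged sketch of that approach with bigtable-hbase-beam: a DoFn emits an HBase Delete (a subclass of the HBase Mutation) for each row key, and CloudBigtableIO.writeToTable() applies it, removing the entire row, key included. The project, instance, and table IDs are placeholders:
import com.google.cloud.bigtable.beam.CloudBigtableIO;
import com.google.cloud.bigtable.beam.CloudBigtableTableConfiguration;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Mutation;
import org.apache.hadoop.hbase.util.Bytes;

class RowKeyToHBaseDelete extends DoFn<String, Mutation> {
  @ProcessElement
  public void processElement(ProcessContext c) {
    // An HBase Delete constructed with only the row key deletes the whole row.
    c.output(new Delete(Bytes.toBytes(c.element())));
  }
}

// Usage in the pipeline (IDs are placeholders):
// CloudBigtableTableConfiguration config = new CloudBigtableTableConfiguration.Builder()
//     .withProjectId("my-project")
//     .withInstanceId("my-instance")
//     .withTableId("my-table")
//     .build();
// rowKeys.apply(ParDo.of(new RowKeyToHBaseDelete()))
//        .apply(CloudBigtableIO.writeToTable(config));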

FreeRADIUS error - Unknown module "X-Ascend-Session-Svr-Key"

I am using FreeRADIUS 2.1.12 on Ubuntu Server 14.04 (installed through apt directly from the OS apt repos).
I am getting the following error on every accounting request:
WARNING: Unknown module "X-Ascend-Session-Svr-Key" in string expansion "%')"
This causes an SQL error when inserting the accounting records into the database.
I have tracked this down to the accounting_start_query in dialup.conf, where it tries to insert '%{X-Ascend-Session-Svr-Key}'.
My searches turned up very little on why this could happen.
How can I solve this issue, or debug it to find out why it's happening?
X-Ascend-Session-Svr-Key has not been defined in the dictionaries. It may have been removed due to compatibility issues; I know Ascend overloaded some of the standard attribute space with their own VSAs (that weren't really VSAs).
It's safe to modify the default queries and remove the X-Ascend-Session-Svr-Key references. That column has been stripped from the default SQL queries and schemas in >= 3.0.0.

Laravel 5.1: Class 'Doctrine\DBAL\Driver\PDOSqlite\Driver' not found

I'm using Laravel 5.1. When I try to run migrate:refresh, I get an error:
Class 'Doctrine\DBAL\Driver\PDOSqlite\Driver' not found in
../vendor/laravel/framework/src/Illuminate/Database/SQLiteConnection.php
[Symfony\Component\Debug\Exception\FatalErrorException] Class
'Doctrine\DBAL\Driver\PDOSqlite\Driver' not found
Doctrine/dbal is already required in my composer.json
"require": {
"php": ">=5.5.9",
"laravel/framework": "5.1.*",
"Doctrine/dbal": "^2.5"
}
So I want to ask what is wrong in my laravel project.
From the official docs:
Before modifying a column, be sure to add the doctrine/dbal dependency
to your composer.json file. The Doctrine DBAL library is used to
determine the current state of the column and create the SQL queries
needed to make the specified adjustments to the column:
composer require doctrine/dbal
What worked for me is to delete the database.sqlite file and create an empty one.
I know this is not the best solution, but it fixed the issue in my use case.
Just a heads up: on Laravel 5.4 (paired with the doctrine/dbal ^2.5 package), using the Blueprint::dropColumn() method works like a charm on SQLite databases. No Class 'Doctrine\DBAL\Driver\PDOSqlite\Driver' not found errors are thrown.
doctrine/dbal version 3 doesn't have the Doctrine\DBAL\Driver\PDOSqlite\Driver class.
Make sure you are using doctrine/dbal version 2.
If you have version 3, you should remove it:
composer remove doctrine/dbal
and then install version 2:
composer require doctrine/dbal:2.13
Visit this link to see all available versions: https://packagist.org/packages/doctrine/dbal

Domain.GetDomainsById not working in Umbraco 6

I'm trying to get the language from the current node but am unable to get this working.
umbraco.cms.businesslogic.web.Domain.GetDomainsById(
umbraco.uQuery.GetCurrentNode().Id
).Id
This will return 0 at all times. Any advice on where to start looking, or are there other methods to acquire the current language id? Thanks!
After some extensive digging in the well-functioning dictionary classes, I found the UmbracoCultureDictionary library, which contains useful stuff like this:
new umbraco.MacroEngines.UmbracoCultureDictionary().Language.id
It is currently obsoleted, and the referenced class Umbraco.Web.Dictionary.DefaultCultureDictionary is internal, so the following approach is probably the most compatible at the moment:
umbraco.cms.businesslogic.language.Language.GetByCultureCode(
System.Threading.Thread.CurrentThread.CurrentUICulture.Name
).id
umbraco.cms.businesslogic.language.Language.GetByCultureCode(
System.Threading.Thread.CurrentThread.CurrentUICulture.Name
).FriendlyName
umbraco.cms.businesslogic.language.Language.GetByCultureCode(
System.Threading.Thread.CurrentThread.CurrentUICulture.Name
).CultureAlias
