I am using FreeRADIUS 2.1.12 on Ubuntu Server 14.04 (installed via apt directly from the OS repos).
I am getting the following error on every accounting request:
WARNING: Unknown module "X-Ascend-Session-Svr-Key" in string expansion "%')"
This causes an SQL error when inserting the accounting records into the database.
I have tracked this to the accounting_start_query in dialup.conf, where it tries to insert '%{X-Ascend-Session-Svr-Key}'.
My searches turned up very little on why this could happen.
How can I solve this issue, or debug it to find out why it's happening?
X-Ascend-Session-Svr-Key has not been defined in the dictionaries. It may have been removed due to compatibility issues; I know Ascend overloaded some of the standard attribute space with their own VSAs (that weren't really VSAs).
It's safe to modify the default queries and remove the X-Ascend-Session-Svr-Key references. That column has been stripped from the default SQL queries and schemas in FreeRADIUS >= 3.0.0.
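Concretely, that means editing the accounting queries in dialup.conf. A heavily trimmed sketch of the kind of edit (the real default query lists many more columns; XAscendSessionSvrKey is the column name from the old default schema):

-- Before (trimmed): the query references the Ascend attribute
INSERT INTO radacct (AcctSessionId, UserName, XAscendSessionSvrKey)
  VALUES ('%{Acct-Session-Id}', '%{SQL-User-Name}', '%{X-Ascend-Session-Svr-Key}');

-- After: drop both the column and its value (the radacct column itself
-- can also be dropped, since nothing will write to it any more)
INSERT INTO radacct (AcctSessionId, UserName)
  VALUES ('%{Acct-Session-Id}', '%{SQL-User-Name}');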
I am migrating a MariaDB database into a Linux docker container.
I am using mariadb:latest on Ubuntu 20.04 LTS under Windows 10 WSL2, via VS Code Remote WSL.
I have copied the SQL dump into the container and imported it into the InnoDB database, which has DEFAULT CHARACTER SET utf8. It does not report any errors:
> source /test.sql
That file does this (actual data truncated for this post):
USE `mydb`;
DROP TABLE IF EXISTS `opsitemtest`;
CREATE TABLE `opsitemtest` (
`opId` int(11) NOT NULL AUTO_INCREMENT,
`opKey` varchar(50) DEFAULT NULL,
`opName` varchar(200) DEFAULT NULL,
`opDetails` longtext,
PRIMARY KEY (`opId`),
KEY `token` (`opKey`)
) ENGINE=InnoDB AUTO_INCREMENT=4784 DEFAULT CHARSET=latin1;
insert into `opsitemtest`(`opId`,`opKey`,`opName`,`opDetails`) values
(4773,'8vlte0755dj','VTools addin for MSAccess','<p>There is a super helpful ...'),
(4774,'8vttlcr2fTA','BAS OLD QB','<ol>\n<li><a href=\"https://www.anz.com/inetbank/bankmain.asp\" ...'),
(4783,'9c7id5rmxGK','STP - Single Touch Payrol','<h1>Gather data</h1>\n<ol style=\"list-style-type: decimal;\"> ...');
If I source a subset of 12 records of the table in question, all the columns are correctly populated.
If I source the full set of data for the same table (4700 rows), where everything else is the same, many of the opDetails longtext fields show a length in SQLyog but no visible data. If I run a SELECT on that column there are no errors, but some of the opDetails fields are "empty" (meaning: you can't see any data), and when I serialize that field, the opDetails column of some records (not all) has
"opDetails" : "\u0000\u0000\u0000\u0000\u0000\u0000\",
( and many more \u0000 ).
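One way to check for that padding directly in SQL, rather than by serializing (a sketch; names from the table above):

-- NUL-padded values show runs of "00" at the start of the hex dump
SELECT opId,
       LENGTH(opDetails)        AS len,
       HEX(LEFT(opDetails, 12)) AS first_bytes
FROM opsitemtest
ORDER BY opId;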
The opDetails field contains HTML fragments. I am guessing it is something to do with that content and possibly the CHARSET, although that doesn't explain why the problem shows up only when a large number of rows is imported. The same row imported as part of a set of 12 rows works correctly.
The same test with the full set of data on a Windows box, with MariaDB running on that host (i.e. no Ubuntu or WSL), works perfectly.
I tried setting the table charset to utf8 to match the database default, but that had no effect. I assume it is some kind of Windows WSL issue, but I am running the source command in the container, all within the Ubuntu host.
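(The statement I mean is along these lines:)

ALTER TABLE opsitemtest CONVERT TO CHARACTER SET utf8;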
The MariaDB data folder is mapped using a volume, again all inside the Ubuntu environment:
volumes:
- ../flowt-docker-volumes/mariadb-data:/var/lib/mysql
Can anyone offer any suggestions while I go through and try manually removing content until it works? I am really in the dark here.
EDIT: I just ran the same import process on a Mac to a MariaDB container on the OSX host to check whether it was actually related to Windows WSL etc and the OSX database has the same issue. So maybe it is a MariaDB docker issue?
EDIT 2: It looks like it has nothing to do with the actual content of opDetails. For a given row that is showing the symptoms, whether or not the data gets imported correctly seems to depend on how many rows I am importing! For a small number of rows, all is well. For a large number there is missing data, but always the same rows and opDetails field. I will try importing in small chunks but overall the table isn't THAT big!
EDIT 3: I tried a docker-compose without a volume and imported the data directly into the MariaDB container. Same problem. I was wondering whether it was a file system incompatibility or some kind of speed issue. Yes, grasping at straws!
Thanks,
Murray
OK. I got it working. :-)
One piece of info I neglected to mention, and it might not be relevant anyway, is that I was importing from an SQL dump taken from 10.1.48-MariaDB-0ubuntu0.18.04.1, because I was migrating a legacy app.
So, with my docker-compose:
Version           Result
mysql:latest      data imported correctly
mariadb:latest    failed as per this issue
mariadb:10.7.4    failed as per this issue
mariadb:10.7      failed as per this issue
mariadb:10.6      data imported correctly
mariadb:10.5      data imported correctly
mariadb:10.2      data imported correctly
Important: remember to completely remove the external volume mount folder content between tests!
So, now I am not sure whether the issue was some kind of SQL incompatibility that I need to be aware of, or whether it is a bug that was introduced between 10.6 and 10.7. Therefore I have not logged a bug report. If others with more expertise think this is a bug, I am happy to make a report.
For now I am happy to use 10.6 so I can progress the migration; the deadline is looming!
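For anyone following along, pinning the working version in docker-compose is just (the service name here is a placeholder; the volume path is from my setup above):

services:
  mariadb:
    image: mariadb:10.6
    volumes:
      - ../flowt-docker-volumes/mariadb-data:/var/lib/mysql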
So, this is sort of "solved".
Thanks for all your help. If I discover anything further I will post back here.
Murray
I'm a first time user of Neo4j and following a training course to install and learn the basics.
I've installed Neo4j Desktop on a Windows machine and can see that it comes with a demo DB called "Movie DBMS". I'm trying to follow steps to dump the database, by stopping the database, clicking on "..." and then "Dump".
The dump errors with the following error in the log file:
[2022-01-31 12:54:36.022] [error] Selecting JVM - Version:11.0.8+10-LTS, Name:OpenJDK 64-Bit Server VM, Vendor:Azul Systems, Inc.
java.nio.file.InvalidPathException: Illegal char <:> at index 128: C:\Users\<me>\.Neo4jDesktop\relate-data\projects\<my project name>\movie-dbms-neo4j-31-Jan-2022-12:54:31.dump
It would appear that the automatic configuration for the dump file is adding a timestamp which includes colons (hh:mm:ss). How can I configure the file name to either exclude the timestamp or avoid using ":"?
Thanks.
I had no responses, but I've figured it out myself.
The answer was to use the command line to dump the database manually. At that point I can specify my own "--to=" filename which doesn't include a ":".
Details in this section of the manual: https://neo4j.com/docs/operations-manual/current/backup-restore/offline-backup/#offline-backup
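For example (a sketch; adjust the database name and output path, and note the exact invocation varies between Neo4j versions):

neo4j-admin dump --database=neo4j --to=/backups/movie-dbms-2022-01-31.dump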
We are currently using the searchkick gem and it works great. Recently I tried upgrading Elasticsearch to 7 in my local development environment. I got it up and running using Homebrew (after researching, I found I needed to run rm -fr /usr/local/var/lib/elasticsearch). When I went to reindex one of my models, I got the following mapping error:
Elasticsearch::Transport::Transport::Errors::BadRequest: [400] {"error":{"root_cause":[{"type":"mapper_parsing_exception","reason":"Root mapping definition has unsupported parameters: [cosmetics/products : {properties={product={type=keyword}}}]"}],"type":"mapper_parsing_exception","reason":"Failed to parse mapping [_doc]: Root mapping definition has unsupported parameters: [cosmetics/products : {properties={product={type=keyword}}}]","caused_by":{"type":"mapper_parsing_exception","reason":"Root mapping definition has unsupported parameters: [cosmetics/products : {properties={product={type=keyword}}}]"}},"status":400}
This error does not occur when using elasticsearch 6.8.4. Can anyone point to me to a resource for resolving this issue?
Mapping types are not supported in version 7.
To solve this, remove all mapping types (_doc, for example).
Indices created in Elasticsearch 6.0.0 or later may only contain a single mapping type. Indices created in 5.x with multiple mapping types will continue to function as before in Elasticsearch 6.x. Types will be deprecated in APIs in Elasticsearch 7.0.0, and completely removed in 8.0.0.
Check this out
And this
In addition to the excellent answer by @Assael Azran and the great link he shared, here's some additional information:
If you have indices created in 5.x or before, you'll need to re-index them when you are in 6.8 BEFORE upgrading to 7.x
If you have indices with multiple types, you'll need to re-index them per document type.
Custom type names like products in your case should be replaced with _doc or doc. Ideally, the type name should not be there at all when defining mappings; see this, and the sketch after this list.
All your 5.x or earlier snapshots, if any, will not work on 7.x. So you'll need to restore the indices from those snapshots while you are on 6.8, re-index them, then snapshot again. You can then delete the indices and also delete the older snapshots.
Have a look at this Upgrade link.
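A minimal sketch of the mapping difference (hypothetical index name; the mapping searchkick actually generates will contain more than this):

# 6.x style: a custom type name ("products") nested under "mappings"; rejected by 7.x
PUT /cosmetics
{
  "mappings": {
    "products": {
      "properties": {
        "product": { "type": "keyword" }
      }
    }
  }
}

# 7.x style: no type name; "properties" sits directly under "mappings"
PUT /cosmetics
{
  "mappings": {
    "properties": {
      "product": { "type": "keyword" }
    }
  }
}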
Hope this helps
I have got this problem with the FireDAC FDQuery component when it tries to select data from a database with a '.' (dot) in its name.
The database name is TEST_2.0 and the error on opening the dataset says:
Could not find server 'TEST_2' in sys.servers [...]
I have tried {TEST_2.0} (curly brackets) and [TEST_2.0] (square brackets). Also, setting the QuotedIdentifiers property (Format Options) to True does not seem to fix the problem. In the SQL query I can add 'SET QUOTED_IDENTIFIER ON;', but this breaks inserts into the dataset.
The FDConnection component can connect to that server and that database using the MSSQL driver without problems. It seems it is the dataset that doesn't handle it. UniDAC seems to handle everything without any problems.
I am using RAD Studio 10.2.
Has anyone found any solution to this? Thanks in advance for any replies.
I got a response from Embarcadero and it works for me:
"The problem is not in FireDAC, but in SQL Server ODBC driver
SQLPrimaryKeys function. It fails to work with a catalog name
containing a dot. FireDAC uses this function to get primary key fields
for a result set, when fiMeta is included into FetchOptions.Items. So,
as a workaround / solution, please exclude fiMeta from
FetchOptions.Items."
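In code, that workaround looks something like this (a sketch; FDQuery1 stands in for your dataset component):

// Exclude fiMeta so FireDAC skips the SQLPrimaryKeys metadata call
FDQuery1.FetchOptions.Items := FDQuery1.FetchOptions.Items - [fiMeta];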
What is wrong?
I was able to reproduce what you've described here. I ended up at the metainformation command, specifically the SQLPrimaryKeys ODBC function call. I used the SQL Server Native Client 11.0 driver connected to Microsoft SQL Server Express 12.0.2000.8, a local database server instance.
When I tried to execute the following SQL command (with the TEST_2.0 database created) through a TFDQuery component instance with default settings (the linked connection object was left with an empty database connection parameter) in a Delphi Tokyo application:
SELECT * FROM [TEST_2.0].INFORMATION_SCHEMA.TABLES
I got this exception raised when the SQLPrimaryKeys function was called with the CatalogName parameter set to TEST_2.0 (from within the metainformation statement method Execute):
[FireDAC][Phys][ODBC][Microsoft][SQL Server Native Client 11.0][SQL
Server]Could not find server 'TEST_2' in sys.servers. Verify that the
correct server name was specified. If necessary, execute the stored
procedure sp_addlinkedserver to add the server to sys.servers.'.
My next attempt was naturally modifying that CatalogName parameter value to [TEST_2.0] whilst debugging, but even that failed for a similar reason (it just failed for the name [TEST_2), so it seems that the SQLPrimaryKeys ODBC function implementation in the driver I used cannot properly handle dotted CatalogName parameter values (it seems to ignore everything after the dot).
What can I do?
The only real solution seems to be fixing the ODBC driver. The workaround I would suggest is not using dots in database names (as discussed e.g. in this thread). Another is preventing FireDAC from getting dataset object metadata (by excluding the fiMeta option from the Items option set). That leaves you the responsibility of supplying dataset object metadata yourself (at this time, only the primary key definition).
I am completely new to Firebird; I have been given a Firebird 2.5 database (by our client): XYZ.fdb
I have registered this XYZ.fdb database in IB Expert.
I am able to successfully run some views and stored procedures. However, for some other views or stored procedures, I get the following error:
can't format message 13:896 - message file C:\Windows\firebird.msg not found;
invalid request BLR at offset 623; function LTRIM is not defined; module name or entrypoint could not be found; Error while parsing procedure XXXXXXX (stored procedure name);
Error Message:
Access violation at address 00DCA0E5 in module 'IBExpert.exe'. Read of address 00000000.
It was working fine last week. I have tried restarting the system and reinstalling Firebird and IB Expert over and over again; I still get the above error for a few stored procedures and views, but the other views and stored procedures are working fine.
Since I did not have this issue last week, and in between I reinstalled Firebird and IB Expert a couple of times, I think it is some configuration or registration issue.
Can you provide a step-by-step approach to fix this issue, such that I can access all database objects in the Firebird DB using IB Expert?
The resulting error might surface as an access violation from IBExpert, but the reason is definitely a missing UDF library, for example a .dll file called rfunc.dll or freeadhocudf.dll or whatever it is called.
To find the name of the missing .dll, check the UDFs used in the database by clicking on the UDF folder in the IBExpert database registration.
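You can also read the same information from the system tables (a sketch; works on Firebird 2.5):

-- List declared UDFs with the module (.dll) and entry point each one maps to
SELECT RDB$FUNCTION_NAME, RDB$MODULE_NAME, RDB$ENTRYPOINT
FROM RDB$FUNCTIONS;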