Some databases in my InfluxDB instance have names that contain spaces.
I can't select those databases with the CLI command use <database name>, and as a result I can't see any of their series either.
I have tried other approaches, such as escaping with special symbols like \, [, etc., but none of them worked.
Please provide some solutions.
You can't include it in InfluxDB's format: the space is effectively dropped (" " becomes ""), because InfluxDB requires exact syntax and whitespace acts as a separator between keys and values.
See https://github.com/tihomir-kit/InfluxData.Net/issues/49
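One way to sidestep the CLI parsing entirely is to query over the HTTP API, where the database name is passed as a URL-encoded parameter instead of being parsed as syntax. A minimal sketch, assuming InfluxDB 1.x on the default port and a database literally named my database:
curl -G 'http://localhost:8086/query' \
    --data-urlencode 'db=my database' \
    --data-urlencode 'q=SHOW SERIES'
Here --data-urlencode encodes the space for you, so the name reaches the server intact.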
I'm not a big fan of the command-line readline keyboard shortcuts, so I'm hoping to remap C-Right/C-Left to navigate one word forward/back and C-BS/C-Del to delete one word back/forward. However, after reading the documentation and forums, I haven't been able to figure out how to do this.
Currently, when C-Left/C-Right are typed on the command line, the cursor doesn't move; instead, keycodes are inserted (C-Left = [1;5D, C-Right = [1;5C). I've tried many remappings, but the mappings I would expect to work best are:
cmap <C-right> <A-f>
cmap <C-left> <A-b>
I was able to figure out how to delete one word back using the following mappings (on further review, there is documentation in the VIFM manual about mapping BS):
cnoremap <BS> <C-w>
cnoremap <C-h> <C-w>
However, I'm still unsure how to map deleting one word forward to C-Del. When I use the following remapping for C-Del, the result is that one character to the left of the cursor is deleted. Note that when I use other C-* combinations for delete-word-forward, the remappings actually work, which makes me think it may not be possible to remap C-Del:
cmap <C-Del> <A-d>
I'm using VIFM version 0.12 on Arch Linux. Any suggestions?
The list of keys supported by the angle-bracket notation is available in the documentation, and combinations of Ctrl with arrow keys are not among them.
See this GitHub issue for a discussion of why and an example of how to work around it:
" ctrl-right
cnoremap <esc>[1;5C <a-f>
" ctrl-left
cnoremap <esc>[1;5D <a-b>
" ctrl-del
cnoremap <esc>[3;5~ <a-d>
At this point I'm still not sure that these sequences are common enough across different terminal types to be hard-coded without causing trouble.
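If these don't match what your terminal sends, you can check the raw sequences yourself: run cat with no arguments in a shell and press the keys; most terminals will echo the escape sequence:
$ cat
^[[1;5C     (typed Ctrl-Right; ^[ is the escape character, i.e. <esc> in vifm notation)
^[[3;5~     (typed Ctrl-Del)
Whatever appears after ^[ is what goes after <esc> in the cnoremap.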
I am exporting data from SAP Hybris.
The data I am exporting also contains semicolons (;) within field values.
In the exported data the delimiter is also ;, which prevents me from splitting the data and doing my work. Is there a way to change this delimiter to something else?
I understand this can be achieved by changing the csv.fieldseparator property, but that would affect exports everywhere, and I can't afford that in production. Any suggestions would be appreciated.
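For reference, the global change would be a one-line edit to local.properties, which is exactly what I can't afford to do in production (the pipe here is just an example replacement):
# local.properties -- affects every CSV export platform-wide
csv.fieldseparator=|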
Go to the Backoffice.
Search for Export.
In the advanced configuration, set your new delimiter. By default, it is a semicolon (;).
I want to make a list of all major towns and cities in the UK.
Geonames seems like a good place to start, although I need to use it locally (as opposed to the API), as I will be working offline while using the information.
Due to the large size of the geonames "allcountries.txt" file, it won't open in Notepad, Notepad++, or Sublime. I've tried opening it in Excel (including the data-modelling feature), but the file has more than a million rows, so this won't work either.
Is it possible to open this file, extract the UK-only cities, and manipulate them in Excel and/or some other software? I am only after place name, lat, long, country name, and continent.
@dedek's suggestion (in the comments) to use GB.txt is definitely the best answer for your particular case.
I've added another answer because this technique is much more flexible and will allow you to filter by country or any other column, i.e. you can adapt this solution to filter by language, region in the UK, population, etc., or apply it to the cities5000.txt file, for example.
Solution:
Use grep to find data that matches a particular pattern. In essence, the command below says: find all rows where the 9th column is exactly "GB".
grep -P "^[^\t]*\t[^\t]*\t[^\t]*\t[^\t]*\t[^\t]*\t[^\t]*\t[^\t]*\t[^\t]*\tGB\t" allCountries.txt > UK.txt
(grep comes standard with most Unix systems but there are definitely tools out there that can do it on Windows too.)
Details:
grep: The command being executed.
^: Anchors the match to the start of the line, so the column count begins at column 1 (without it, GB appearing as a later field could also match).
\t: Shorthand for the TAB character.
-P: Tells grep to use a Perl-style regular expression (grep might not recognize \t as a TAB character otherwise). (This might be a bit different if you are using another version of grep.)
[^\t]*: Zero or more non-tab characters, i.e. an optional column value.
> UK.txt: Writes the output of the command to a file called "UK.txt".
Again, you could adapt this example to filter on any column in any file.
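If you'd rather think in column numbers than in regular expressions, awk does the same job and also lets you keep only the columns you need. A sketch, assuming the standard allCountries.txt layout (name in column 2, latitude 5, longitude 6, country code 9; note the file carries the ISO country code rather than a country name):
awk -F'\t' -v OFS='\t' '$9 == "GB" { print $2, $5, $6, $9 }' allCountries.txt > UK.txt
-F'\t' sets the input separator to TAB, and the condition/action pair filters the rows and projects the columns in one pass.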
We are facing a problem with the Impala column naming convention, which seems unclear to us.
The CDH Impala documentation (http://www.cloudera.com/documentation/archive/impala/2-x/2-0-x/topics/impala_identifiers.html) says in its 3rd bullet point: An identifier must start with an alphabetic character. The remainder can contain any combination of alphanumeric characters and underscores. Quoting the identifier with backticks has no effect on the allowed characters in the name.
Now, due to a dependency on the upstream SAP systems, we had to name a column starting with a zero (0), i.e. with a numeric character. When defining the table and extracting records from it, Impala does not raise any semantic error. But when connecting Impala to SAP HANA through SDA (Smart Data Access), the extraction fails for this particular column starting with a leading zero (0), while it is fine for the rest of the columns, which start with an alphabetic character. The error shows as "... ^ Encountered: DECIMAL LITERAL".
I have two points.
If the documentation says an identifier cannot start with anything other than an alphabetic character, how does the Impala query run without any issues?
Why is the error raised only when the data is extracted through SAP HANA?
Any insight will be highly appreciated.
Ok, I can only say something about the SAP HANA side here, so you will have to check the Impala side somehow.
The error message you get while accessing an external table via SDA typically comes from the 3rd party client software, in this case the ODBC driver you use to connect to Impala.
So, SAP HANA tries to access the table through the Impala ODBC driver and that driver returns the error message.
I assume that the object-name check for Impala is implemented in the client in this case. I'm not sure whether the way you run the query in Impala also goes through that driver.
But even if Impala has this naming limitation in place, I fail to see why it would force you to use that name in SAP HANA as well. If the upstream data access requires the leading 0, just create a view on top of the table and you're good to go.
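As a sketch of that idea (all names here are hypothetical): keep a plain identifier everywhere that talks to Impala, and re-expose the leading-zero name that SAP expects through a view, since quoted identifiers in SAP HANA may start with a digit:
-- impala_sales_vt is the SDA virtual table, with a plainly named column
CREATE VIEW sales_for_sap AS
  SELECT calmonth AS "0CALMONTH"
  FROM impala_sales_vt;
Everything on the Impala side only ever sees calmonth, while the upstream consumers read "0CALMONTH" from the view.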
I'm doing a backup replication of some phpBB forums from one server to another using mysqldump with some basic options:
mysqldump -h[server] --create-options --add-drop-database -R -E -B [database]
While doing so, I got an error like this:
ERROR 1062 (23000) at line 9322: Duplicate entry '?????' for key 'wrd_txt'
In phpBB forums that is a UNIQUE key on a table in which every word posted is registered and counted. The problem seems to be this one:
When mysqldump dumps a DOUBLE value, it uses insufficient precision to
distinguish between some close values (and, presumably, insufficient
precision to recreate the exact values from the original database). If
the DOUBLE value is a primary key or part of a unique index, restoring
the database from this output fails with a duplicate key error.
It is caused by some posts in the Cyrillic alphabet on our forums; mysqldump seems to treat Cyrillic characters as a simple value and truncate them, so every Cyrillic character comes out the same (represented as ? in this case). That results in repeated values, for strings of the same length, in a UNIQUE key column.
Is there any way to perform a dump with sufficient precision using other options or another tool? Or a way to avoid this problem in the dump?
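One avenue worth trying first (hedged, as I haven't confirmed it is the cause here): the ? characters can also come from a character-set mismatch on the dump connection rather than from the DOUBLE-precision bug, so forcing utf8 on the dump may preserve the Cyrillic text:
mysqldump -h[server] --default-character-set=utf8 --create-options --add-drop-database -R -E -B [database]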
Just for the record: since the Cyrillic words in that table were only there due to spam, and we were only interested in Latin characters, I got rid of them using this command (maybe it will be useful for someone). It deletes every row whose column contains a byte of 0xD0 or above, which is where the UTF-8 Cyrillic lead bytes live:
delete from [table] where NOT HEX([column]) REGEXP '^([0-C][0-9A-F])*$';
Thanks a lot in advance!