I want to edit the headers in my FASTA file by adding pipes, but I am unable to do so. The header looks like this:
KX035646.1 Name:NADH domain
ATGCGGGGCTGC..
I want it like
sp|KX035646.1| Name:NADH domain
The accession number is different for each sequence.
Can you please help me do it? Thanks!
You can try a simple sed one-liner:
cat test.fasta
>KX035646.1 Name:NADH domain ATGCGGGGCTGC..
ACGT
CTTT
>KX035646.2 Name:NADH domain ATGCGGGGCTGC..43214
GCAT
sed 's/^>\([a-zA-Z0-9.]\+\)\(.*\)/>sp|\1|\2/' test.fasta
>sp|KX035646.1| Name:NADH domain ATGCGGGGCTGC..
ACGT
CTTT
>sp|KX035646.2| Name:NADH domain ATGCGGGGCTGC..43214
GCAT
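One caveat: `\+` inside a basic regular expression is a GNU sed extension, so the one-liner above may fail on macOS/BSD sed. An extended-regex variant should behave the same on both (a sketch, assuming the same header format as the test file above):

```shell
# Same rewrite using extended regular expressions (-E), which both
# GNU and BSD sed support: capture the accession up to the first
# space, then wrap it as sp|ACCESSION|.
sed -E 's/^>([a-zA-Z0-9.]+)(.*)/>sp|\1|\2/' test.fasta
```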
Related
I'm looking for a script (or if there isn't one, I guess I'll have to write my own).
I wanted to ask if anyone here knows of a script that can take a txt file with n links (let's say 200). I need to extract only the links that contain particular characters, let's say only links that contain "/r/learnprogramming", and write them to another txt file.
Edit: Here is what helped me: grep -i "/r/learnprogramming" 1.txt >2.txt
You can use AJAX to read a .txt file using jQuery:
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<script>
jQuery(function($) {
    console.log("start");
    $.get("https://ayulayol.imfast.io/ajaxads/ajaxads.txt", function(wholeTextFile) {
        var lines = wholeTextFile.split(/\n/),
            randomIndex = Math.floor(Math.random() * lines.length),
            randomLine = lines[randomIndex];
        console.log(randomIndex, randomLine);
        $("#ajax").html(randomLine.replace(/#/g, "<br>"));
    });
});
</script>
<div id="ajax"></div>
If you are using Linux or macOS, you can use cat and grep to output the links:
cat in.txt | grep "/r/learnprogramming" > out.txt
Solution provided by OP:
grep -i "/r/learnprogramming" 1.txt >2.txt
Since you did not provide the exact format of the document, I assume the links are separated by newline characters. In that case the task is straightforward in Python or awk, since you can iterate over file.readlines() and print only the lines that match your pattern (either with a simple `pattern in line` check or with a regex if the pattern is more complex). To store the links in a new file, simply redirect stdout to a new file like this:
python script.py > links.txt
The solution above works even if the links are separated by an arbitrary symbol s: first read the file into a single string, then split it on s. I hope this helps.
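As a sketch of the awk side of that approach (assuming one link per line and the literal pattern from the question):

```shell
# Print only the lines containing the literal substring
# "/r/learnprogramming"; index() does a plain substring search,
# so the slashes need no regex escaping.
awk 'index($0, "/r/learnprogramming")' 1.txt > 2.txt
```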
How does bb translation work?
When I used bb b -l 1 it worked fine, but all the strings still need to be rewritten for the other languages.
bb t -a adds a new language, e.g. "cs-CZ", and creates a JSON file with the language code.
The question is: how can I export/import all the strings into a JSON file for translation?
For bb t -e - is fileName a json or js file in dist? Export doesn't work in my case; no strings are exported.
bb t -e filename.txt -l cs-CZ is the correct way to export untranslated strings to a text file with a very simple structure. After it comes back from the translation agency, you can import it with bb t -i filename.txt -l cs-CZ.
Before exporting, always update the translation files with bb b -l 1 -u 1, as you already found out. The actual JSON files in the translations directory contain an array of arrays of 3 or 4 items: [original, hint, 0/1 - with/without parameters, translation]. So you can translate them directly if you create some editor for these...
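For illustration, an entry in one of those translation JSON files could look roughly like this (the strings and hints here are made up; only the [original, hint, flag, translation] shape is taken from the description above):

```json
[
  ["Hello {name}!", "greeting on the home page", 1, "Ahoj {name}!"],
  ["Save", "button label", 0, "Uložit"]
]
```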
Also, please update bobril-build to 0.56.1; I just fixed a wrong error message in export even though everything was OK. Maybe that confused you into having to ask, sorry for that.
I have a large text file full of websites visited by hosts. This is the format:
Host : Url
A lot of the urls look like this:
http://google.com/?aslkdfjasldkfjaskldfjalskdjfalksdfjalksdjfa;sdlkfjas;dklfjasdklfjasdklfjasdklfjJUSTABUNCHOFRANDOMSTUFFaslkdjfaslkdfjaklsdfjaklsdjfasdkfjasdfklj
And it is hard to see what the original website is. How can I use grep to only show this:
Host : http://google.com
I've been looking everywhere for a way to cut a line after the delimiter ".com" and can't find a solution. Thank you for your help!
Bonus: I forgot about .net, .org, and the other extensions. This might be a more difficult problem than I thought
Try this:
grep -oP 'Host : http://[^/]+'
                        ^^^^^
(all characters that are not slashes)
Or, if you want to specify .com:
grep -oP 'Host : http://.*?\.com'
Another solution :
cut -d'/' -f1-3
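To see what the two approaches do side by side, here is a quick demo on a shortened, made-up version of the URL from the question (note that grep -o and -P need GNU grep with PCRE support; the cut variant works everywhere):

```shell
line='Host : http://google.com/?aslkdfjRANDOMSTUFF'
# Keep everything from "Host :" up to the first slash after http://
echo "$line" | grep -oP 'Host : http://[^/]+'
# Or split on "/" and keep the first three fields
echo "$line" | cut -d'/' -f1-3
```

Both print `Host : http://google.com`.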
I am using PowerShell 4 and trying to parse an archived event log into a CSV file that includes all of the data and has headers associated with it. The closest I have been able to come is the following command:
Get-WinEvent -Path .\Security.evtx |Select-Object TimeCreated, ProviderName, Id, Message, Level, Keyword, UserID, Data, Subject, SubjectUserSid, SubjectUserName, SubjectLogonId, ComputerName | Export-Csv .\Logging.csv
This gives me the header information for all of the fields in the CSV file, but the only fields that contain data are TimeCreated, ProviderName, Id, Level, and Message. I am trying to get the missing data into columns as well, but am not succeeding. So what am I doing wrong here?
This was copied from an edit to the question itself, and should be credited to the original question author
OK, I finally figured it out... at least for what I need to accomplish. Hopefully this will help someone.
Get-WinEvent -Path .\Security.evtx | Select-Object TimeCreated, ProviderName, Id, @{n='Message';e={$_.Message -replace '\s+', " "}} | Export-Csv .\Logging.csv
This code allows you to export the archived eventlog into csv with headers and puts the whole message body into one cell, which allows import into a database with ease when you have no tools to work with.
Fairly regularly, I need to replace a local URL with a live one in large WordPress databases. I can do it in TextMate, but it often takes 10+ minutes to complete.
Basically, I have a 10MB+ .sql file and I want to:
Find: http://localhost:8888/mywebsite
and
Replace with: http://mywebsite.com
After that, I'll save the file and do a mysql import to the local/live servers. I do this at least 3-4 times a week and waiting for Textmate has been a pain. Is there an easier/faster way to do this with grep/sed/awk?
Thanks!
Terry
sed 's/http:\/\/localhost:8888\/mywebsite/http:\/\/mywebsite.com/g' FileToReadFrom > FileToWriteTo
This runs the substitute command (s/) globally (/g), replacing the first URL with the second. The forward slashes are escaped with backslashes.
kent$ echo "foobar||http://localhost:8888/mywebsite||fooooobaaaaaaar"|sed 's#http://localhost:8888/mywebsite#http://mywebsite.com#g'
foobar||http://mywebsite.com||fooooobaaaaaaar
If you want to do the replacement in place (changing your original file):
sed -i 's#http://.....#http://mysite#g' input.sql
You don't need to replace the http:// part at all:
sed "s/localhost:8888/www.my-awesome-page.com/g" input.sql > output.sql
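Whichever sed variant you pick, a quick sanity check before the mysql import can confirm nothing was missed (a sketch, assuming the file names used above):

```shell
# Count remaining occurrences of the old host; this should print 0.
# grep -c exits non-zero when there are no matches, hence the || true.
grep -c 'localhost:8888' output.sql || true
```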