Linux - Newbie. This Linux forum is for members that are new to Linux.
Just starting out and have a question?
If it is not in the man pages or the how-to's, this is the place!
Not homework, but here's the solution:
Code:
#!/bin/bash
# Read in the bvecs file
BVECFILE=$1
BVECNEWFILE="bvecs_new"
# Store each line (x, y and z components) as an array element
readarray BVEC < "$BVECFILE"
BVECNUM=${#BVEC[@]}
# Check the number of directions (columns in the first line)
GRADNUM=$(echo ${BVEC[0]} | wc -w)
if [ -f "$BVECNEWFILE" ]; then
    echo "$BVECNEWFILE already exists. Removing and replacing..."
    rm "$BVECNEWFILE"
fi
echo "New bvecs are generated in $BVECNEWFILE"
# Start at field 2 so we drop the first volume (vol0, which awk sees as field 1)
for ((i=2; i<=GRADNUM; i++)); do
    gx=$(echo "${BVEC[0]}" | awk -v x=$i '{print $x}')
    gy=$(echo "${BVEC[1]}" | awk -v x=$i '{print $x}')
    gz=$(echo "${BVEC[2]}" | awk -v x=$i '{print $x}')
    echo "$gx $gy $gz" >> "$BVECNEWFILE"
done
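To make the behavior concrete, here is a self-contained usage sketch. The filename transpose_bvecs.sh and the sample values are made up for illustration: a toy bvecs file with 3 rows (x, y, z) and 4 volumes, where the first column is the b0 volume that gets dropped.

```shell
# Usage sketch for the script above (written to a hypothetical
# transpose_bvecs.sh here so the example runs on its own).
cat > transpose_bvecs.sh <<'EOF'
#!/bin/bash
BVECFILE=$1
BVECNEWFILE="bvecs_new"
readarray BVEC < "$BVECFILE"
GRADNUM=$(echo ${BVEC[0]} | wc -w)
[ -f "$BVECNEWFILE" ] && rm "$BVECNEWFILE"
for ((i=2; i<=GRADNUM; i++)); do
    gx=$(echo "${BVEC[0]}" | awk -v x=$i '{print $x}')
    gy=$(echo "${BVEC[1]}" | awk -v x=$i '{print $x}')
    gz=$(echo "${BVEC[2]}" | awk -v x=$i '{print $x}')
    echo "$gx $gy $gz" >> "$BVECNEWFILE"
done
EOF

# A toy bvecs file: 3 rows (x, y, z), 4 volumes; column 1 is the b0 volume.
printf '0 1 0 0\n0 0 1 0\n0 0 0 1\n' > bvecs
bash transpose_bvecs.sh bvecs
cat bvecs_new
# 1 0 0
# 0 1 0
# 0 0 1
```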
^ Nice! I never seem to have the time to learn awk properly. I found this to be an entertaining problem, so I did my own variant in Python. Looks like this:
Code:
#!/usr/bin/env python3
# Declare our matrix: 6 rows of 3 entries each
matrix = [[0 for i in range(3)] for j in range(6)]
x = 0
y = 0
count = 0
# Open the original file
with open("f1.txt") as f1:
    txt = f1.read()
# Read the file, char by char.
# Insert chars at the correct place in the matrix.
for char in txt:
    if count > 5:
        count = 0
        x = 0
        y += 1
    if char != "\n":
        matrix[x][y] = char
        x += 1
        count += 1
# Now let's open a file to write the result to (created if it does not
# already exist), closing it when done
with open("f2.txt", "w") as outfile:
    x = 0
    y = 0
    while True:
        outfile.write(matrix[x][y])
        y += 1
        if y > 2:
            outfile.write("\n")
            y = 0
            x += 1
            if x > 5:
                break
Also works for any width of field, alpha or numeric, and any number of columns and lines, so long as all rows have the same number of space-separated columns!
Gotta love awk!
And thanks for the exercise as well!
Last edited by astrogeek; 07-31-2015 at 07:48 PM.
Reason: Fixed percent signs, additions, and more...
Not minimalist, but mine was similarly done (in awk) but using an array of arrays - I find them easy to "walk" as you can use "for i in array" followed by "for j in array[i]" ...
No need to know the bounds in advance.
Edit: - not relevant here as the prints needed to be ordered - I did similar to @astrogeek for the printing.
Just a nit - hanging whitespace on lines makes regex using the "$" anchor more complex than necessary.
HAHA! Yes I was aware of the trailing space but was out of time - hoped no one would notice!
Here is one without trailing whitespace and two lines shorter!
Thanks! These are fun when you have the time and the right mindset, fortunate superposition today!
EDIT ****
Although I use a linear array, the associative keys are effectively compound, producing the same logic as an array of arrays and allowing the same use of (i,j), or (n,f) in my own case.
Last edited by astrogeek; 07-31-2015 at 11:12 PM.
Reason: typos, addition, pesky percent signs!
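The compound-key trick described above is worth a sketch (illustrative, not the exact script from the thread): any POSIX awk joins the indices of a[n, f] with SUBSEP into a single string key, so one linear associative array behaves like a two-dimensional one.

```shell
# Hedged sketch of the linear-array variant: cell[n, f] is a single
# associative array whose keys are n SUBSEP f, giving the same (n, f)
# addressing as an array of arrays in any POSIX awk.
printf 'a b c\nd e f\n' | awk '
    { for (f = 1; f <= NF; f++) cell[NR, f] = $f }
    END {
        for (f = 1; f <= NF; f++)
            for (n = 1; n <= NR; n++) {
                sep = (n == NR) ? "\n" : " "
                printf "%s%s", cell[n, f], sep
            }
    }'
# a d
# b e
# c f
```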
Cute - I did it as follows, but it needs an extra print statement for the newline. I might steal that idea ...
Code:
printf(n==NR?"%d" : "%d ",_[n][m])
Thanks for the warm fuzzy feeling of satisfaction!
When my son produced his first cut, he had a proper BEGIN block and used print(...) statements.
He also gave us the linear array idea which is good! (My own first attempt used multiple arrays as well.)
In order to get the newlines right with print he had to define ORS="" to override the default (\n), among other things.
I wanted to rely on defaults without a BEGIN block to keep it minimal so I changed to use of printf(...), and hastily introduced the trailing space because I was somewhat rushed, but knew a ternary would fix it - just didn't do it.
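The ORS override described above might look roughly like this (a reconstruction under the same transpose setup, not the actual first cut):

```shell
# Hedged reconstruction of the print-based variant: ORS is set to "" in
# a BEGIN block so print appends nothing of its own, and the newline at
# the end of each output row is emitted explicitly.
printf 'a b c\nd e f\n' | awk '
    BEGIN { ORS = "" }
    { for (f = 1; f <= NF; f++) cell[NR, f] = $f }
    END {
        for (f = 1; f <= NF; f++)
            for (n = 1; n <= NR; n++) {
                sep = (n == NR) ? "\n" : " "
                print cell[n, f] sep
            }
    }'
# a d
# b e
# c f
```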
No need to "steal" the idea, you would have thought of it yourself in another minute or two!
Anyway, sharing and imitation are both good things - and I have FREELY shared the thought! I like to never miss an opportunity to make that point, so let's make it "official" with a universal statement of copy rights...
Code:
# Acknowledgment of right to use, copy, modify and distribute:
#
# You already have the right to use, modify and distribute this
# or any other thought or idea, and need no license or other
# permission from anyone to do so!
#
# Exercise it freely and never concede it to anyone!
I toyed with only using loops for a while, but found it easier to, eh, "visualize" using a matrix. As I keep saying, one of these days, I might find the time and energy to learn awk properly.
Last edited by HMW; 08-01-2015 at 01:41 AM.
Reason: Clarification