Programming
This forum is for all programming questions. The question does not have to be directly related to Linux, and any language is fair game.
I wanted to find the "temp.txt" files recursively in directories and subdirectories, and then display the contents of each temp.txt as well.
#!/bin/bash
filename='temp.txt'
n=1
while read line; do
# reading each line
echo "$line"
n=$((n+1))
done < $filename
Quote:
Originally Posted by 1s440
Hi all,
I wanted to find the "temp.txt" files recursively in directories and subdirectories, and then display the contents of each temp.txt as well.
#!/bin/bash
filename='temp.txt'
n=1
while read line; do
# reading each line
echo "$line"
n=$((n+1))
done < $filename
I am not sure if I can just add a find command to the above script.
Your script appears to read a single file line by line, counting each line (but never using that result for anything). I'm not sure how you could insert a recursive find into that script either.
Code:
find . -type f -name temp.txt -exec cat {} \;
will display all the files named 'temp.txt' in and beneath the current directory.
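A small variant of that same command (a sketch, same matching behavior): using `+` instead of `\;` batches many file names into each cat invocation, and find passes the paths directly as arguments, so file names containing spaces are handled safely.

```shell
# Concatenate every temp.txt in and beneath the current directory.
# '+' batches paths into as few cat calls as possible.
find . -type f -name temp.txt -exec cat {} +
```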
If I am right that you were trying to count all the lines in the files you find, you could use:
Code:
wc -l $( find . -type f -name temp.txt ) | grep total | awk '{print $1}'
Instead of using 'find' twice, use it once to capture the list of files in an array:
Code:
declare -a FILES
FILES=( $( find . -type f -name temp.txt ) )
While it's not strictly necessary to declare the array, it's not a bad idea to document what that variable is used for (when you come back to make changes to your script next year).
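On bash 4.4 and later, a null-delimited find with mapfile (a sketch) fills the same array without word-splitting, so it keeps working even when a matched path contains spaces:

```shell
declare -a FILES   # list of matching file paths
# -print0 emits NUL-terminated paths; mapfile -d '' splits on NUL,
# and -t drops the trailing delimiter from each element.
mapfile -t -d '' FILES < <(find . -type f -name temp.txt -print0)
```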
Once you have an array of filepaths, you can use it in other commands:
Code:
for F in "${FILES[@]}"
do
cat "${F}"
done
echo "$( wc -l "${FILES[@]}" | grep total ) lines."
Putting it all together (and simplifying it a bit), you'd get:
Code:
declare -a FILES
FILES=( $( find . -type f -name temp.txt ) )
cat "${FILES[@]}"
echo "$( wc -l "${FILES[@]}" | grep total ) lines."
Quote:
Originally Posted by 1s440
If I need to find files across all the servers, then I think a bash script won't help, will it?
Hmm... that information might have been nice to know up front.
You could accomplish what I think you want to do using a utility like Ansible (or something similar) assuming you have an inventory of "all the servers". But it doesn't make much sense to propose solutions when the problem hasn't been fully presented.
At this point, however, I think the discussion would benefit greatly from a clear description of the problem you are trying to solve. What other information can you provide? It appears that locating text files may only be a small part of what you are attempting to do.
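For illustration only, an Ansible ad-hoc command along these lines could run the find on every host at once. The inventory file name (`hosts`) and the /app/tmp path are assumptions taken from later in this thread; SSH access to each host must already be configured.

```shell
# Hypothetical sketch: on every host in the inventory, find and cat
# all temp.txt files under /app/tmp, printing results per host.
ansible all -i hosts -m shell -a 'find /app/tmp -type f -name temp.txt -exec cat {} +'
```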
Yes, I also thought about using Ansible to fetch the text files, but I am new to the environment and find it a little complicated to understand.
First login: user@server:
Second login: ssh server2
Third login: an alias is used to connect to the server where the temp.txt files are stored.
Code:
user@server:~$ su - s1
ssh server
[s1@ts1/server ~]$ switchuser
switchuser@server ----------->
This is how we connect, and from switchuser@server the temp.txt files are under /app/tmp/.
In the same way, we have many users, say s2, s3, s4...
For example, to find the second temp.txt via s2 (but this is manual):
Code:
user@server:~$ su - s2
ssh server
[s2@ts1/server ~]$ switchuser
switchuser@server ----------->
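A rough sketch of automating those hops with a loop. Every name here (the user list, the `server` host, the /app/tmp path) is a placeholder taken from the description above, and the `switchuser` step is omitted because its exact invocation depends on your environment; the remote command would need adjusting to match it.

```shell
# Hypothetical sketch: for each local account s1..s4, become that user,
# ssh to the server, and cat the temp.txt files under /app/tmp.
for u in s1 s2 s3 s4; do
    # su runs the ssh as user $u; the single-quoted command runs remotely.
    su - "$u" -c "ssh server 'find /app/tmp -type f -name temp.txt -exec cat {} +'"
done
```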