How do I create multiple variables from a list in Bash?
How can we tweak this so the variables from the list can have spaces, like:
Charlie Brown
John Stuart Mill
Steve Martin
and so on?
The way it is now, it reads each word into the variable; ideally it would read an entire line.
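The usual fix is to let read take the whole line: empty out IFS for the read so spaces stop splitting, and add -r so backslashes aren't mangled. A minimal sketch, with the name list fed in via printf just for illustration:

```shell
#!/bin/sh
# Each iteration gets one full line, spaces and all, because IFS is
# emptied for the duration of the read.
printf '%s\n' 'Charlie Brown' 'John Stuart Mill' 'Steve Martin' |
while IFS= read -r line; do
    echo "got: $line"
done
```

Quoting "$line" wherever it is used afterwards is just as important as the IFS trick, otherwise the shell re-splits the name at the point of use.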
(Thanks in advance... if I get this working it will help *a lot* with undeleting (via ext3grep) the tons of files I've accidentally deleted, and it could also improve a screensaver-like wallpaper-changer script I have a bit.)
I have a list of files I can potentially recover with ext3grep, which is somewhat like:
/dir/file.ext
/dir2/file with spaces in its name.ext
...
and with a script similar to this one I could replace the "echo" with something like ext3grep --restore-file /dir2/file with spaces in its name.ext
But that would require the script to somehow read whole lines, not just words, as variables. Or maybe I could replace all the spaces with double underscores or some other unlikely character and have the script somehow "decode" them afterwards, which would probably be even harder, I think.
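No encoding trick should be needed: the same IFS= read -r loop handles the paths as-is, as long as "$file" is quoted. A sketch only: the list filename here is made up, and the echo is a dry-run placeholder; swap it for the real ext3grep invocation once the printed lines look right:

```shell
#!/bin/sh
# Create a sample list just so this sketch is self-contained; in practice
# this would be the list produced by ext3grep.
cat > recover-list.txt <<'EOF'
/dir/file.ext
/dir2/file with spaces in its name.ext
EOF

while IFS= read -r file; do
    # Quoting "$file" keeps a name with spaces as one single argument.
    echo ext3grep --restore-file "$file"
done < recover-list.txt
```

Dropping the echo would then run one restore command per line of the list.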
This "undelete" program can theoretically "restore all", which would be ideal when you have tons of files to recover and not just a few, but it's not working perfectly for me: it restores for a little while, and then some obscure, highly technical error stops the process. I can still recover the files it left behind individually, though, so a script issuing an individual command for each file would automate that for tons of files.
And an OT warning about the quick file-listing filter in Konqueror. If you filter a folder for the filename "thrash", so that various thrash1, thrash2, thrash3, [...] files appear, then select them all with Ctrl+A and delete them, you will not be deleting just the files you see selected, but all the files in the folder and its subfolders, which Ctrl+A also selected even though they were "hidden" from your sight. If it's a large number of files you wanted to delete, you may not even notice that the number/list in the confirmation dialog is too large... I didn't...
If it's not too much to ask for more: would anyone know how to do something with this same basic principle, but instead of loading one item from the list at a time, loading, say, three lines/items, assigning each to a different variable? So instead of "do something with x", the command would be "do something with x, do something with y, do something with z".
I'll explain why, roughly. It's ext3grep, the program I'm using. Before it can recover a file, it first has to do some sort of "scan" of the partition and save a kind of "report" file for its own use, which is then loaded whenever something is actually restored. That file is loaded once per command line, and it takes some time: not terribly slow for a single run, a few seconds, but if we need to do it for many, many files it quickly adds up to days, for something that could otherwise be finished in hours :/
I think I have to add another "layer of loop" to the second script: it would do more or less what it already does, but cycling through some variables, and, after having done so N times...
In some basic-like language it would be something like:
0 n=1
1 some_command_that_outputs_list | while read x; do
a(n)=x
n=n+1
if n<=5 then goto 1
else
command action a(1) action a(2) action a(3) action a(4) action a(5)
goto 0
It's quite frustrating to know more or less the logic of what has to be done, and to be able to lay it out in rough script form, but at the same time not to know the proper syntax or the substitutes for things like "goto" and line numbers :/
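In bash the goto-and-line-number structure maps fairly naturally onto an array that gets flushed every five lines, with no goto needed. A sketch only: echo stands in for the real combined command, and some_command_that_outputs_list is a printf stand-in for the real list producer:

```shell
#!/bin/bash
# Accumulate lines into an array; every 5 lines, run one combined command.
batch=()

flush() {
    [ ${#batch[@]} -eq 0 ] && return
    echo command "${batch[@]}"      # placeholder for the real 5-item call
    batch=()
}

some_command_that_outputs_list() {  # stand-in for the real list producer
    printf '%s\n' one two three four five six seven
}

while IFS= read -r line; do
    batch+=("$line")
    [ ${#batch[@]} -ge 5 ] && flush
done < <(some_command_that_outputs_list)
flush   # handle a final batch smaller than 5
```

Feeding the loop with < <(...) instead of a pipe matters here: a pipe would run the while loop in a subshell, and the final flush would no longer see the leftover items in the array.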
Unfortunately it will not be directly useful to me for this task anymore, since, contrary to what I said before, even if we give multiple files to the same instance, it still reads the report file once per action/file to recover.
But maybe I'm just using an older version of ext3grep, since it was (I think) the author of the program himself who gave me this tip.
Besides adding this to my personal list of scripting how-tos, I'll post a link to this discussion on the ext3grep list, so maybe it will be useful to people with a newer version, or to future versions, until the program is finally "GUI-fied" or "ncursed".
Actually, I've found a way to use the last script that should speed things up somewhat. It's not as fast as it would be if we could restore many files while loading the "stage2" file only once, as "restore-all" does, but it's still something that could be useful until nothing better is implemented.
It's just the same script, but instead of giving the command multiple files to restore, it launches multiple independent commands.
Theoretically it should be twice as fast with two commands/instances, but at some point hardware/processing limits should make it slower, I think, so there shouldn't be too many of them. I'm using about 7 or 8, I think.
I didn't really measure the time gained carefully or anything; I just have this impression from some simple, maybe imprecise tests: two simpler scripts running against a single script, with their results saved to independent logs. The combined logs of script 1a and script 1b were larger than script 2's log. This logic may be fundamentally flawed in some dull way that will make me ashamed of having posted it, but I think it could well be just like that.
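One way to get that effect without hand-splitting the list into 7 or 8 separate scripts is to background each command with & and wait after every batch. A sketch under assumptions: the echo is a placeholder for the real ext3grep call, the sample file list is made up, and the batch size N=4 is a guess to be tuned against whatever the disk and CPU tolerate:

```shell
#!/bin/bash
# Run the per-file commands in parallel, N at a time; "wait" blocks until
# the whole batch of background jobs has finished before starting the next.
N=4
i=0
while IFS= read -r file; do
    echo "restoring: $file" &   # placeholder for the real per-file command
    i=$((i + 1))
    if [ $((i % N)) -eq 0 ]; then
        wait                    # let the current batch of N jobs drain
    fi
done < <(printf '%s\n' /dir/a.ext /dir/b.ext '/dir2/name with spaces.ext' /dir/c.ext /dir/d.ext)
wait    # catch the final, possibly smaller batch
```

Note that the output lines of parallel jobs can interleave in any order, which also explains why comparing log sizes, as above, is a rough measure at best.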