[SOLVED] Failed miserably to execute bash script via PATH variable after FS migration.
Hi,
I just wiped my old big NTFS partition, where I stored all kinds of user-important stuff such as /home/. I had started getting strange errors, like the system freezing to a halt every now and then, and input/output errors when writing to my ~/.mozilla directory.
Hoping it was not my Intel 80GB SSD already starting to fail, I backed everything up to an external magnetic drive, deleted the NTFS partition and formatted it with XFS. (I heard it might be good for big files, but whatever.)
After copying everything back and setting things in order, changing permissions and ownership and so on, I can't execute bash scripts in my bash script folder ~/bin/.
~/bin/ is in the PATH variable.
What's weird (and what I suppose justifies posting on LQ) is that I can execute a script ~/bin/script fine by typing
Code:
$ bash ~/bin/script
All files in bin have 'executable' permissions for everyone, but most are run using sudo anyway. And when I run a script called wifil using sudo, I get a segmentation fault with a long backtrace.
The script runs *perfectly* whenever I bypass the PATH variable. Please help me, I think I'm going crazy. That backtrace is too much for a humble shellscript programmer. x)
Check the output for "which sudo" and "ldd sudo", maybe you have some leftover from the previous installation. You should also check the contents of $PATH with "echo $PATH", for the very same reason.
It seems like either sudo is broken or you are using a different version of glibc than the one that was used to compile your sudo binary. It could also be a problem in glibc, but then your whole system would probably be unusable.
I'm sorry, but I really can't say what I did to avoid the backtrace output. But from what I understand (segmentation fault), something is wrong with a binary and not the script? Sudo and bash are the only binaries involved.
Quote:
Check the output for "which sudo" and "ldd sudo", maybe you have some leftover from the previous installation. You should also check the contents of $PATH with "echo $PATH", for the very same reason.
Code:
[user@computer /]$ which sudo
/usr/bin/sudo
[user@computer /]$
It does seem like I have duplicate entries in PATH. Is this a problem? I'm also not sure why they are there; ~/bin and ~/bin/scriptscript are added in the .bashrc file at logon.
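Duplicate PATH entries are usually harmless, but they are easy to avoid. A minimal sketch for .bashrc, assuming the duplicates come from an unconditional `PATH=...` line being re-run on every login shell, only adds ~/bin when it is missing:

```shell
# Add ~/bin to PATH only if it is not already there, so re-sourcing
# .bashrc never creates duplicate entries.
case ":$PATH:" in
  *":$HOME/bin:"*) ;;                 # already present: do nothing
  *) PATH="$HOME/bin:$PATH" ;;
esac
export PATH
```

Running this any number of times leaves exactly one ~/bin entry in PATH.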
Quote:
It seems like either sudo is broken or you are using a different version of glibc than the one that was used to compile your sudo binary. It could also be a problem in glibc, but then your whole system would probably be unusable.
The system doesn't exactly feel rock solid with those random halts/kernel panics/whatever [EDIT: those haven't occurred since the XFS migration], but it is certainly usable... Also, sudo works flawlessly in all other contexts. Actually,
$ sudo ~/bin/wifil [EDIT: *need* to use 'sudo bash (path)'. Sorry for this typo.]
connected me so I could post this.
Note that all I ever did was:
- moving ~/ (let's call it "DIR") from the "storage partition" sda2 to an external disk
- deleting sda2
- creating a 70GB XFS filesystem on the new sda2
- moving DIR back to sda2
- creating a new symlink /home/user --> [sda2_mount_point]/DIR
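The symlink step is the one most easily gotten wrong after a migration, and it can be rehearsed safely in a scratch directory first. The paths below are stand-ins for the real [sda2_mount_point]/DIR and /home/user:

```shell
# Rehearse the symlink recreation in a throwaway directory before
# touching the real /home. "ln -sfn" replaces an existing link in place.
tmp=$(mktemp -d)
mkdir -p "$tmp/storage/DIR"              # stand-in for [sda2_mount_point]/DIR
ln -sfn "$tmp/storage/DIR" "$tmp/user"   # stand-in for /home/user
readlink "$tmp/user"                     # prints the link target
rm -rf "$tmp"
```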
The user's home directory is /home/user/, where /home/user is a symlink to a directory on the "storage" partition, which has in effect had a change of filesystem.
I should mention that there was definitely corrupted data on the old filesystem, some of which resided in ~/. However, now I can fsck /dev/sda2 (which I couldn't with NTFS), and the checks show no errors.
I find it hard to believe that particular programs are really causing the problem, because they reside on their own 10GB ext4 partition. EDIT: As do all shared libraries.
Quote:
I can execute a script ~/bin/script fine, either by typing
Code:
$ bash ~/bin/script
And then:
Code:
$ sudo ~/bin/wifil
connected me so I could post this.
So the problem only occurs sometimes?
I'm beginning to believe you have a somewhat corrupted system; you say yourself it isn't rock solid, with frequent halts/kernel panics/whatever.
How about free space?
What mount options for your /home?
Try moving the scripts to some location outside this new filesystem and see if that works better.
I hoped to fix stuff by reformatting the affected partition, but there might be some physical damage on the disk.
I don't know what to do. I'm terribly keen on keeping the data in ~/.
Fs corruption or a hardware problem could also be causing this. If the problem is in a script, you should run it with '#!/bin/bash -x' so you get verbose output and can tell what exactly inside the script is causing the problem.
If the fs got corrupted, you should fsck it first and then reinstall all the packages. Your package manager should provide an easy way to do that.
Last edited by i92guboj; 08-18-2010 at 05:34 AM.
Reason: added -x
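To illustrate the -x suggestion: with tracing on, bash prints every command (prefixed with '+') before running it, which pinpoints the exact line that fails. A tiny throwaway example:

```shell
# Write a trivial script, then run it under "bash -x" to see the trace.
cat > /tmp/trace-demo.sh <<'EOF'
#!/bin/bash
msg="hello"
echo "$msg"
EOF
bash -x /tmp/trace-demo.sh    # trace lines go to stderr, output to stdout
rm -f /tmp/trace-demo.sh
```

The same effect comes from putting -x on the script's own shebang line, as suggested above.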
Now wait a minute: Is it so, that you cannot execute any script unless you explicitly run it with bash - that is, typing "bash scriptfile" works but not "scriptfile" alone?
If so, what is your default shell, is it really bash?
(You can check that with
Code:
# grep user /etc/passwd
You will see what the default shell is; mine here is bash.)
That shouldn't matter as long as the script has a correct header, as it should. It's worth checking though. It's also worth checking what /usr/bin/sh points to, with ls -l.
Quote:
Try moving the scripts to some location outside this new filesystem and see if that works better.
Eureka!!!
Code:
[user@computer /]$ sudo cp ~/bin/wifil /mnt/3
Password:
[user@computer /]$ sudo ~/bin/wifil
Password:
sudo: unable to execute ~/bin/wifil: Permission denied
Segmentation fault
[user@computer /]$ sudo /mnt/3/wifil
Password:
/mnt/3/wifil: line 14: [: ==: unary operator expected
Local wifi connection script: START
Make sure you are root, I won't check this for you
killing all client apps...
wpa_supplicant: no process found
dhcpcd: no process found
DONE
NIC down...
DOWN
NIC up...
UP
The script is currently configured for:
PG
Security: wpa
WPA security enabled. If this is wrong,
kill the script, it's useless.
ioctl[SIOCSIWAP]: Operation not permitted
ioctl[SIOCSIWESSID]: Operation not permitted
WPS-AP-AVAILABLE
Trying to associate with 00:11:6b:44:97:70 (SSID='bono' freq=2462 MHz)
Associated with 00:11:6b:44:97:70
WPA: Key negotiation completed with 00:11:6b:44:97:70 [PTK=CCMP GTK=CCMP]
CTRL-EVENT-CONNECTED - Connection to 00:11:6b:44:97:70 completed (auth) [id=0 id_str=]
dhcpcd[12452]: version 5.2.6 starting
dhcpcd[12452]: wlan0: rebinding lease of 193.11.239.42
dhcpcd[12452]: wlan0: acknowledged 193.11.239.42 from 193.11.239.1
dhcpcd[12452]: wlan0: checking for 193.11.239.42
dhcpcd[12452]: wlan0: leased 193.11.239.42 for 300 seconds
dhcpcd[12452]: forked to background, child pid 12493
Call script itest:
/mnt/3/wifil: line 47: ~/bin/itest: Permission denied
Local wifi connection script: END
[user@computer /]$
What on earth?? I would never have thought of that. THANK YOU!
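As an aside, the "[: ==: unary operator expected" warning at line 14 is the classic sign of an unquoted variable expanding to nothing inside a test. Without seeing wifil, the variable name below is hypothetical, but the pattern and the fix look like this:

```shell
# Hypothetical reconstruction of the wifil warning:
# broken:  [ $mode == wpa ]   -> with mode empty this expands to
#          [ == wpa ], giving "[: ==: unary operator expected".
# Quoting the expansion (and using the portable single =) fixes it.
mode=""
if [ "$mode" = "wpa" ]; then
  echo "WPA enabled"
else
  echo "WPA disabled"
fi
```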
Quote:
That shouldn't matter as long as the script has a correct header, as it should. It's worth checking though. It's also worth checking what /usr/bin/sh points to, with ls -l.
The header reads as follows:
# ! /bin/bash
I changed it from /bin/sh after the errors started to appear.
Code:
[user@computer /]$ ls -l /bin/ | grep "sh \-"
lrwxrwxrwx 1 root root 4 May 17 02:40 sh -> bash
[user@computer /]$
My thought was that you might have /home mounted with options/flags not allowing you to execute.
But then, it shouldn't work with "bash scriptfile" either, should it?
I have never used this possibility myself, but maybe it's worth digging into?
It could be that different filesystems have different default options.
Quote:
My thought was that you might have /home mounted with options/flags not allowing you to execute.
But then, it shouldn't work with "bash scriptfile" either, should it?
He should check the output of "mount" without arguments and make sure that neither "noexec" nor "users" is among the mount options ("users" implies "noexec" by default).
When you open a script by doing "bash <filename>", there's virtually no difference between that and doing "oowriter my_file.doc". Bash will launch a new session and start parsing the file. You can quickly check by setting -x on any random script and then launching it with bash or sh.
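Following up on both points, here is a sketch: first check the mount options of /home via /proc/mounts, then reproduce the read-versus-execute distinction with a file that is readable but not executable (analogous to a script sitting on a noexec filesystem):

```shell
# 1) Fields in /proc/mounts: device, mountpoint, fstype, options.
#    A "noexec" among the options would explain the Permission denied.
awk '$2 == "/home" { print $4 }' /proc/mounts

# 2) bash only *reads* the script, so "bash file" works even where the
#    kernel refuses a direct execve (no +x bit, or a noexec mount).
f=$(mktemp)
printf '#!/bin/bash\necho ok\n' > "$f"
chmod 600 "$f"            # readable, but not executable
bash "$f"                 # prints: ok
"$f" || echo "direct execution refused, as expected"
rm -f "$f"
```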