Linux - Server
This forum is for the discussion of Linux software used in a server-related context.
Interesting problem. I think you will have to be much more specific before anyone attempts an answer:
Which server/client/versions/distros/etc.? How is the server started? Xinetd/inetd/something else?
Is it dark net/encrypted/firewalled/redirected in any way that might affect behaviour?
How come you can send tcp and udp packets to the same server?
Well, business_kid is definitely correct; we need a lot more info on this. However, I would suggest it might have something to do with TCP actually making a connection and UDP being connectionless. When whatever service you are running notices the connection ending, it stops listening. UDP probably does not send a close signal to the service on exit, so it stays open.
Well, the server program is my program. I set up the socket as a stream socket, do a bind(), and pend on accept(). The client then connects to the server socket, and the server program starts doing send() at 120 Hz (120 sends per second). The client receives the data and processes it fine. But as soon as I kill the client program, it kills the server program too.
My Linux kernel version is 3.9.0, from the git tree.
My HW is an embedded target, so I start two separate programs doing TCP and UDP, and the same thing on the client side.
All proprietary stuff, eh? Embedded, eh? There is no valid reason why a 'broken pipe' type error should result in a server crash. It might result in a server instance crash. I would check what signals are being received/sent.
How does your server react to a packet that starts but never finishes? If, on the other hand, you set the client to send 20 packets and then stop, it will complete the 20 packets and then close the connection.
Thanks, business_kid, for the insight. What I also experience is: let's say I open an ssh connection from the target which is the server to a target which is the client, and if I do cat on a very large file (a client file), I get a broken pipe too. Can this be a driver-type issue? The client and server targets talk to each other over a Marvell switch.
At this distance, it could be anything. Take nothing for granted; don't go swapping stuff randomly. Let facts and debug info lead you. To solve this, someone is going to have to be there with the equipment. At the moment, it's you.
Client is connected to the server on a TCP socket. If I do the same experiment with a UDP socket, the server does not exit when the client is killed.
UDP is effectively connectionless (though perhaps "sessionless" would be a better adjective).
There is no facility to ask for a re-transmit (ARQ) or even packet accounting, which is why it's normally used for streams where the next packet is always more important than the last (stock market tickers, live feeds, etc).
TCP carries the extra accounting overhead, and at the adapter level that accounting means things like ACK, SYN, and ARQ traffic passing back and forth. The ARQ has a timeout value: either the data is acknowledged by <time> or the session closes. UDP has none of that.
It's a WAG, but I'd start there.
I think it might have something to do with, for example, the ssh daemon forking a new process for each new connection, whereas a daemon-less service might establish one connection and then exit when the client hangs up.