Quote:
Originally Posted by FreezEy
Hello All,
Currently stuck in a little situation and I am leaning towards trying to use GFS to get my way out of it. Here's the situation
...
central location, and when one Tomcat instance fails and gets pushed to another, the other will be able to continue receiving the data that's being uploaded. If I am wrong, please somebody correct me...
From the filesystem point of view this will work as described.
Quote:
Originally Posted by FreezEy
If this is possible, I was looking at CentOS's release of GFS 6.1 and am thinking this is the route to take... I was also looking at iSCSI (using a backend switch), but the only problem with that is, when one of the Tomcat instances is writing to that certain folder, I am pretty sure it locks it? Or if it doesn't, then the other machines won't be able to see the data till it's remounted? With this, again, I am not totally sure, but I had a feeling it's not the ideal solution, also because I get a transfer cap at, I think, 50 MB, and if I have over 100 people uploading data (it comes in 64 KB chunks) then I can run into a problem...
Don't mix up iSCSI and GFS. iSCSI is a block-based storage protocol that sits underneath any filesystem, GFS included. When going the shared-storage, block-based way (this includes using GFS or the like), you first need to think about providing a shared storage infrastructure for all four nodes. This means either iSCSI, Fibre Channel, parallel SCSI (which will be a little hard with more than two nodes), or cheaper options like SAS.
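To sketch the block-storage half (assuming the open-iscsi initiator tools on CentOS; the target IP and IQN below are placeholders, substitute your own), each of the four nodes would log in to the same iSCSI target:

```shell
# Discover the targets exported by the storage box (placeholder portal IP)
iscsiadm -m discovery -t sendtargets -p 192.168.1.10

# Log in to the discovered target (placeholder IQN)
iscsiadm -m node -T iqn.2008-01.com.example:storage.disk1 \
         -p 192.168.1.10 --login

# The LUN now shows up as a local block device (e.g. /dev/sdb) on every
# node that logged in. Do NOT put ext3 on it at this point: a plain
# local filesystem mounted on four nodes at once will corrupt itself.
```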
Once this is settled, bring in the means to share filesystem data between the nodes. This implies a cluster filesystem like GFS.
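For the GFS layer on top, something along these lines (the cluster name `webfarm`, filesystem name `tomcatdata`, and device path are hypothetical; `-j 4` creates one journal per node for your four Tomcat boxes):

```shell
# Create a GFS filesystem on the shared device, using the distributed
# lock manager (DLM) and 4 journals -- one per cluster node
gfs_mkfs -p lock_dlm -t webfarm:tomcatdata -j 4 /dev/vg01/lvol0

# Mount it on every node; all four then see the same namespace,
# with locking coordinated cluster-wide
mount -t gfs /dev/vg01/lvol0 /data/uploads
```

On newer releases the equivalent tool is `mkfs.gfs2`, but the thread mentions GFS 6.1, which ships `gfs_mkfs`.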
On scalability with GFS (which implies a little bit of locking):
Normally a symmetric cluster filesystem like GFS should scale linearly even under concurrent access, because locking is distributed across the nodes. But with many files in one directory you might not see linear scalability, since all nodes then contend for the locks on that single directory.