Linux - Enterprise: This forum is for all items relating to using Linux in the Enterprise.
I was just curious whether anyone here on this forum has the setup I mentioned in the subject line. Can Oracle RAC and ASM be set up for redundancy and a standby DB?
We run RHAS 3, Oracle 10g, and Oracle RAC, but use Oracle Cluster Filesystem (OCFS) for our data filesystems instead of ASM raw devices. Even with that we had to use ASM to set up raw devices for Oracle Cluster Ready Services (CRS). One reason we opted for OCFS over ASM was our misperception that we could do filesystem backups. Since OCFS is a clustered filesystem, standard tools (tar, cp, etc.) don't work on it, though Oracle does provide OCFS-enabled RPMs that help.
However, for backup the only way Oracle and NetBackup support it is with RMAN. It was an attempt to avoid RMAN that led us to OCFS in the first place, so if I had it to do over again I'd go the ASM route instead. You of course have to use RMAN for that as well.
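For anyone resigned to the RMAN route, a minimal RMAN command file for a tape backup through a media manager like NetBackup might look roughly like the sketch below. This is only an illustration, not our exact setup: the channel type and the idea of running it via "rman target / cmdfile=backup.rman" assume the NetBackup media-management library is already linked in.

```text
# backup.rman -- illustrative sketch only; channel settings depend on your MML setup
RUN {
  ALLOCATE CHANNEL ch1 DEVICE TYPE 'SBT_TAPE';
  BACKUP DATABASE PLUS ARCHIVELOG;
  RELEASE CHANNEL ch1;
}
```

BACKUP DATABASE PLUS ARCHIVELOG sweeps the datafiles and any archived redo in one pass, which is about the simplest sane starting point before you add retention policies.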
P.S. I missed this post until today, when I saw it as one of the "similar" posts to a more recent thread.
Last edited by MensaWater; 01-04-2007 at 01:38 PM.
We're using both methods at the moment and they seem to be working fine. I often wonder what the best practices would be for setting up an OCFS partition, though. Do you know the answer?
Thanks,
Bob
Quote:
Originally Posted by jlightner
We run RHAS 3, Oracle 10g, Oracle RAC but use Oracle Cluster Filesystem (OCFS) for our data filesystems instead of ASM raw devices. [...] It was an attempt to avoid RMAN that led us to the use of OCFS so if I had it to do over again I'd go the ASM route instead.
The link on the left has information regarding OCFS. As I mentioned, there are bundles for coreutils and tar that are intended to allow use of tar, mv, etc. with the OCFS filesystem. They do work to a certain extent (at least better than what you get by default).
Note that the OCFS you can use with the 2.4 kernel (the one RHEL AS 3 has) is not the latest one. From a presentation I saw some months back, I gather there are significant improvements in the later OCFS version (OCFS2) that runs on 2.6 kernels (RHEL AS 4 and higher). I haven't used that.
We do OCFS on a RAID 5 built on a Clariion CX700 using fibre drives over a SAN for our production systems.
I did actually set up OCFS on a standalone server that had a PERC controller covering its internal disks in a RAID 5.
Recently we had to migrate the Production environment from the original CX700 to a replacement CX700. We came up with a procedure for that, but it is rather specific to the fact we were using QLogic fibre cards, a CX700 (and EMC Navisphere on the host to talk to the array), and EMC PowerPath to handle multipathing across the two fibre cards and two SPs in use.
We used OCFS and raw devices to share the mount point across multiple nodes. And yes, we're currently using RHAS 3.0 in our production and test environments. We also set up ASM, and by doing so we can present the ASM storage to a 3rd server for our production standby. We're also using Oracle RAC, not OS clustering, at this time.
However, we used mainly RAID 10 for production and RAID 0 for the test environment. Would you know the syntax to set up an OCFS partition?
We use Oracle RAC as well. (More properly we're using Oracle Cluster Ready Services [CRS]). Didn't mean to imply we were doing OS clustering - we aren't.
On our install we got mkfs.ocfs which has a man page.
We also got ocfstool which is a graphic tool for managing ocfs.
I'm not in my office at the moment or I could give you the exact syntax I used for setting up the filesystems. One of the flags is -u (uid), and you do have to set that to the oracle user to allow the Oracle DB to use the filesystem - I recall early on I tried to set it to root and it didn't like that at all.
Basically you have to use ASM for the CRS voting disk and 2 other raw devices. You can then use either ASM for raw data devices or OCFS for filesystem data devices shared between the nodes.
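For reference, on RHEL AS 3 raw device bindings are usually made persistent in /etc/sysconfig/rawdevices. The device names below are hypothetical placeholders, not our actual layout; substitute your own shared LUN partitions:

```text
# /etc/sysconfig/rawdevices -- example bindings (device names are hypothetical)
/dev/raw/raw1   /dev/emcpowerc1
/dev/raw/raw2   /dev/emcpowerd1
```

After editing, "service rawdevices restart" activates the bindings. The raw devices also need the ownership the CRS install guide specifies (the voting disk device is owned by the oracle user) or the installer can't open them.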
Packages we have installed with ocfs in the name:
ocfs-2.4.21-EL-1.0.11-1
setupOCFS-1.0.0-2
ocfs-support-1.0.10-1
ocfs-tools-1.0.10-1
ocfs-2.4.21-EL-smp-1.0.11-1
As mentioned we also downloaded and installed replacement tar and coreutils from the link I'd sent earlier at Oracle.
Sounds like we're taking the same approach but with different hardware. As for ocfstool (the graphic tool), where do I get hold of it? Is it a shareware tool or something you have to buy?
I'd greatly appreciate it if you have the time to reply back with the OCFS syntax and the OCFS graphical tool. I love learning about new tools and such.
I believe both mkfs.ocfs and ocfstool are free in the bundles from the Oracle link I mentioned above. The packages I show installed are the ones that would contain them and should be available at that site. We actually installed them from a Dell Deployment CD, as our RAC was originally installed from that. The documentation I got with that CD gave me the specific syntax I used.
You can get the syntax for mkfs.ocfs with "man mkfs.ocfs".
You use the ocfstool GUI to tell it which nodes are sharing the OCFS filesystems you create.
The actual command syntax used was like the following:
mkfs.ocfs -F -b 128 -L u01 -m /database -u 500 -g 500 -p 0775 /dev/emcpowera1
mkfs.ocfs -F -b 128 -L u02 -m /database/archive -u 500 -g 500 -p 0775 /dev/emcpowerb1
The /dev/emcpower devices were the PowerPath pseudodevices. You'd have to use whatever your shared storage was instead. (So for example on the stand alone node we created for testing I used /dev/sda13 which was not shared with another host.)
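To make filesystems like these mount at boot, the corresponding /etc/fstab entries would look roughly like the fragment below. The _netdev option (which delays mounting until networking/SAN paths are up) is an assumption based on typical SAN setups; check the OCFS documentation for the options your version actually supports.

```text
# /etc/fstab fragment -- illustrative; devices match the mkfs commands above
/dev/emcpowera1   /database           ocfs   _netdev   0 0
/dev/emcpowerb1   /database/archive   ocfs   _netdev   0 0
```

Note the ordering matters here: /database has to mount before /database/archive since the second is submounted under the first.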
The meaning of the above flags:
-F = Force format
-b 128 = Block size of 128 K.
-L u01 = Allow mount via volume label (u01 being the label for the first line). This isn't really necessary - in fact I don't mount by volume label - but it doesn't hurt to put it there either.
-m /database = Mount point directory (/database for that line - notice it is /database/archive for the second line, which is submounted under the first one).
-u 500 = Set UID to 500 (the UID for oracle in /etc/passwd)
-g 500 = Set GID to 500 (the GID for oinstall in /etc/group)
-p 0775 = Permissions to set (rwxrwxr-x).
I actually have 5 filesystems but the above should be enough to give you the idea.
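Incidentally, if you ever want to sanity-check what a given octal mode means, a throwaway file on any Linux box will show you. This quick sketch (using GNU stat) confirms that 0775 corresponds to rwxrwxr-x:

```shell
#!/bin/sh
# Create a scratch file, apply mode 0775, and print the symbolic permission string.
tmp=$(mktemp -d)
touch "$tmp/demo"
chmod 0775 "$tmp/demo"
stat -c '%A' "$tmp/demo"   # prints -rwxrwxr-x
rm -rf "$tmp"
```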
As for the 128 K block size (-b 128): do you normally use 128 K, or does it depend on the RAID configuration? What would be the best practice, I wonder?
I used 128 K because that was what was specified in the deployment guide. At the time we did this, OCFS was brand new to us. We've not seen I/O performance as an issue so much as limited shared memory. If we saw I/O performance become an issue, we'd likely look more closely at the block size. On our HP-UX systems, where we run much larger Oracle (non-RAC) databases, we do use a different block size (8192) for the Veritas Filesystems (VxFS).
This year I'll be building a second machine for the test environment so I can do some more in-depth testing (it's only taken me a year and a half to convince the powers that be that the test system for a cluster environment should itself be a cluster). I couldn't just load another system, because I need to get SAN storage and fibre cards for both nodes.
It appears a hugemem kernel would help increase the available shared memory (and therefore allow a larger SGA in Oracle), but I wasn't willing to test it on the live Production system.
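Before going to a hugemem kernel, it's worth checking the kernel's shared-memory ceiling, since a single SGA segment has to fit within kernel.shmmax. A quick, read-only check (works on any Linux; the value will of course differ per system):

```shell
#!/bin/sh
# Report the maximum shared memory segment size the kernel will allow.
# An Oracle SGA larger than this cannot be allocated as one segment.
shmmax=$(cat /proc/sys/kernel/shmmax)
echo "kernel.shmmax = ${shmmax} bytes"
```

If the value is smaller than the SGA you want, raising kernel.shmmax via sysctl is the usual first step before resorting to a different kernel.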