[SOLVED] pacemaker - iscsi: how to set up iscsi targets/logical units?
Hi!
I have a cluster configuration I'm using for tests (everything on virtual machines, even the SAN).
Until now my Pacemaker configuration hasn't managed the target/LUs: the initiator logs in at boot time and the configuration simply assumes the devices will be there (which has worked fairly well so far), but now I'd like to add the target/LU configuration to Pacemaker as well.
The configuration is committed, but I can't get the Pacemaker target resource to start.
I'd like to know what's going on when it tries to start the san resource, but syslog doesn't provide much information. When I try to start it (crm resource san start), this is what I get:
Code:
Dec 28 15:48:38 cluster1 cibadmin: [3671]: info: Invoked: cibadmin -Ql -o resources
Dec 28 15:48:38 cluster1 cibadmin: [3673]: info: Invoked: cibadmin -p -R -o resources
Dec 28 15:48:38 cluster1 cib: [819]: info: log_data_element: cib:diff: - <cib admin_epoch="0" epoch="182" num_updates="2" >
Dec 28 15:48:38 cluster1 cib: [819]: info: log_data_element: cib:diff: - <configuration >
Dec 28 15:48:38 cluster1 cib: [819]: info: log_data_element: cib:diff: - <resources >
Dec 28 15:48:38 cluster1 cib: [819]: info: log_data_element: cib:diff: - <group id="sanos" >
Dec 28 15:48:38 cluster1 cib: [819]: info: log_data_element: cib:diff: - <primitive id="san" >
Dec 28 15:48:38 cluster1 cib: [819]: info: log_data_element: cib:diff: - <meta_attributes id="san-meta_attributes" >
Dec 28 15:48:38 cluster1 cib: [819]: info: log_data_element: cib:diff: - <nvpair value="Stopped" id="san-meta_attributes-target-role" />
Dec 28 15:48:38 cluster1 cib: [819]: info: log_data_element: cib:diff: - </meta_attributes>
Dec 28 15:48:38 cluster1 cib: [819]: info: log_data_element: cib:diff: - </primitive>
Dec 28 15:48:38 cluster1 cib: [819]: info: log_data_element: cib:diff: - </group>
Dec 28 15:48:38 cluster1 cib: [819]: info: log_data_element: cib:diff: - </resources>
Dec 28 15:48:38 cluster1 cib: [819]: info: log_data_element: cib:diff: - </configuration>
Dec 28 15:48:38 cluster1 crmd: [823]: info: abort_transition_graph: need_abort:59 - Triggered transition abort (complete=1) : Non-status change
Dec 28 15:48:38 cluster1 cib: [819]: info: log_data_element: cib:diff: - </cib>
Dec 28 15:48:38 cluster1 crmd: [823]: info: need_abort: Aborting on change to admin_epoch
Dec 28 15:48:38 cluster1 cib: [819]: info: log_data_element: cib:diff: + <cib admin_epoch="0" epoch="183" num_updates="1" >
Dec 28 15:48:38 cluster1 crmd: [823]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Dec 28 15:48:38 cluster1 cib: [819]: info: log_data_element: cib:diff: + <configuration >
Dec 28 15:48:38 cluster1 crmd: [823]: info: do_state_transition: All 1 cluster nodes are eligible to run resources.
Dec 28 15:48:38 cluster1 cib: [819]: info: log_data_element: cib:diff: + <resources >
Dec 28 15:48:38 cluster1 crmd: [823]: info: do_pe_invoke: Query 235: Requesting the current CIB: S_POLICY_ENGINE
Dec 28 15:48:38 cluster1 cib: [819]: info: log_data_element: cib:diff: + <group id="sanos" >
Dec 28 15:48:38 cluster1 cib: [819]: info: log_data_element: cib:diff: + <primitive id="san" >
Dec 28 15:48:38 cluster1 cib: [819]: info: log_data_element: cib:diff: + <meta_attributes id="san-meta_attributes" >
Dec 28 15:48:38 cluster1 cib: [819]: info: log_data_element: cib:diff: + <nvpair value="Started" id="san-meta_attributes-target-role" />
Dec 28 15:48:38 cluster1 cib: [819]: info: log_data_element: cib:diff: + </meta_attributes>
Dec 28 15:48:38 cluster1 cib: [819]: info: log_data_element: cib:diff: + </primitive>
Dec 28 15:48:38 cluster1 cib: [819]: info: log_data_element: cib:diff: + </group>
Dec 28 15:48:38 cluster1 cib: [819]: info: log_data_element: cib:diff: + </resources>
Dec 28 15:48:38 cluster1 cib: [819]: info: log_data_element: cib:diff: + </configuration>
Dec 28 15:48:38 cluster1 cib: [819]: info: log_data_element: cib:diff: + </cib>
Dec 28 15:48:38 cluster1 cib: [819]: info: cib_process_request: Operation complete: op cib_replace for section resources (origin=local/cibadmin/2, version=0.183.1): ok (rc=0)
Dec 28 15:48:38 cluster1 crmd: [823]: info: do_pe_invoke_callback: Invoking the PE: query=235, ref=pe_calc-dc-1356725918-121, seq=192, quorate=0
Dec 28 15:48:38 cluster1 pengine: [822]: notice: unpack_config: On loss of CCM Quorum: Ignore
Dec 28 15:48:38 cluster1 pengine: [822]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Dec 28 15:48:38 cluster1 pengine: [822]: info: determine_online_status: Node cluster1 is online
Dec 28 15:48:38 cluster1 pengine: [822]: ERROR: unpack_rsc_op: Hard error - san:0_monitor_0 failed with rc=6: Preventing san:0 from re-starting anywhere in the cluster
Dec 28 15:48:38 cluster1 pengine: [822]: ERROR: unpack_rsc_op: Hard error - sanwwwsesion_monitor_0 failed with rc=6: Preventing sanwwwsesion from re-starting anywhere in the cluster
Dec 28 15:48:38 cluster1 pengine: [822]: ERROR: unpack_rsc_op: Hard error - sandatapostgres_monitor_0 failed with rc=6: Preventing sandatapostgres from re-starting anywhere in the cluster
Dec 28 15:48:38 cluster1 pengine: [822]: WARN: unpack_rsc_op: Processing failed op pgbouncer_monitor_0 on cluster1: unknown error (1)
Dec 28 15:48:38 cluster1 pengine: [822]: ERROR: unpack_rsc_op: Hard error - sanwwwsanos_monitor_0 failed with rc=6: Preventing sanwwwsanos from re-starting anywhere in the cluster
Dec 28 15:48:38 cluster1 pengine: [822]: ERROR: unpack_rsc_op: Hard error - san_monitor_0 failed with rc=6: Preventing san from re-starting anywhere in the cluster
Dec 28 15:48:38 cluster1 pengine: [822]: notice: group_print: Resource Group: sanos
Dec 28 15:48:38 cluster1 pengine: [822]: notice: native_print: ip_flotante#011(ocf::heartbeat:IPaddr2):#011Started cluster1
Dec 28 15:48:38 cluster1 pengine: [822]: notice: native_print: san#011(ocf::heartbeat:iSCSITarget):#011Stopped
Dec 28 15:48:38 cluster1 pengine: [822]: notice: native_print: sanwwwsanos#011(ocf::heartbeat:iSCSILogicalUnit):#011Stopped
Dec 28 15:48:38 cluster1 pengine: [822]: notice: native_print: sandatapostgres#011(ocf::heartbeat:iSCSILogicalUnit):#011Stopped
Dec 28 15:48:38 cluster1 pengine: [822]: notice: native_print: sanwwwsesion#011(ocf::heartbeat:iSCSILogicalUnit):#011Stopped
Dec 28 15:48:38 cluster1 pengine: [822]: notice: native_print: datapostgres#011(ocf::heartbeat:Filesystem):#011Stopped
Dec 28 15:48:38 cluster1 pengine: [822]: notice: native_print: wwwsanos#011(ocf::heartbeat:Filesystem):#011Stopped
Dec 28 15:48:38 cluster1 pengine: [822]: notice: native_print: wwwsesion#011(ocf::heartbeat:Filesystem):#011Stopped
Dec 28 15:48:38 cluster1 pengine: [822]: notice: native_print: postgres#011(lsb:postgresql-8.4):#011Stopped
Dec 28 15:48:38 cluster1 pengine: [822]: notice: native_print: pgbouncer#011(lsb:pgbouncer):#011Stopped
Dec 28 15:48:38 cluster1 pengine: [822]: notice: native_print: apache#011(lsb:apache2):#011Stopped
Dec 28 15:48:38 cluster1 pengine: [822]: info: native_merge_weights: ip_flotante: Rolling back scores from san
Dec 28 15:48:38 cluster1 pengine: [822]: info: native_merge_weights: san: Rolling back scores from sanwwwsanos
Dec 28 15:48:38 cluster1 pengine: [822]: info: native_color: Resource san cannot run anywhere
Dec 28 15:48:38 cluster1 pengine: [822]: info: native_merge_weights: sanwwwsanos: Rolling back scores from sandatapostgres
Dec 28 15:48:38 cluster1 pengine: [822]: info: native_color: Resource sanwwwsanos cannot run anywhere
Dec 28 15:48:38 cluster1 pengine: [822]: info: native_merge_weights: sandatapostgres: Rolling back scores from sanwwwsesion
Dec 28 15:48:38 cluster1 pengine: [822]: info: native_color: Resource sandatapostgres cannot run anywhere
Dec 28 15:48:38 cluster1 pengine: [822]: info: native_merge_weights: sanwwwsesion: Rolling back scores from datapostgres
Dec 28 15:48:38 cluster1 pengine: [822]: info: native_color: Resource sanwwwsesion cannot run anywhere
Dec 28 15:48:38 cluster1 pengine: [822]: info: native_merge_weights: datapostgres: Rolling back scores from wwwsanos
Dec 28 15:48:38 cluster1 pengine: [822]: info: native_color: Resource datapostgres cannot run anywhere
Dec 28 15:48:38 cluster1 pengine: [822]: info: native_merge_weights: wwwsanos: Rolling back scores from wwwsesion
Dec 28 15:48:38 cluster1 pengine: [822]: info: native_color: Resource wwwsanos cannot run anywhere
Dec 28 15:48:38 cluster1 pengine: [822]: info: native_merge_weights: wwwsesion: Rolling back scores from postgres
Dec 28 15:48:38 cluster1 pengine: [822]: info: native_color: Resource wwwsesion cannot run anywhere
Dec 28 15:48:38 cluster1 pengine: [822]: info: native_merge_weights: postgres: Rolling back scores from pgbouncer
Dec 28 15:48:38 cluster1 pengine: [822]: info: native_color: Resource postgres cannot run anywhere
Dec 28 15:48:38 cluster1 pengine: [822]: info: native_merge_weights: pgbouncer: Rolling back scores from apache
Dec 28 15:48:38 cluster1 pengine: [822]: info: native_color: Resource pgbouncer cannot run anywhere
Dec 28 15:48:38 cluster1 pengine: [822]: info: native_color: Resource apache cannot run anywhere
Dec 28 15:48:38 cluster1 pengine: [822]: notice: LogActions: Leave resource ip_flotante#011(Started cluster1)
Dec 28 15:48:38 cluster1 pengine: [822]: notice: LogActions: Leave resource san#011(Stopped)
Dec 28 15:48:38 cluster1 pengine: [822]: notice: LogActions: Leave resource sanwwwsanos#011(Stopped)
Dec 28 15:48:38 cluster1 pengine: [822]: notice: LogActions: Leave resource sandatapostgres#011(Stopped)
Dec 28 15:48:38 cluster1 pengine: [822]: notice: LogActions: Leave resource sanwwwsesion#011(Stopped)
Dec 28 15:48:38 cluster1 pengine: [822]: notice: LogActions: Leave resource datapostgres#011(Stopped)
Dec 28 15:48:38 cluster1 pengine: [822]: notice: LogActions: Leave resource wwwsanos#011(Stopped)
Dec 28 15:48:38 cluster1 pengine: [822]: notice: LogActions: Leave resource wwwsesion#011(Stopped)
Dec 28 15:48:38 cluster1 pengine: [822]: notice: LogActions: Leave resource postgres#011(Stopped)
Dec 28 15:48:38 cluster1 pengine: [822]: notice: LogActions: Leave resource pgbouncer#011(Stopped)
Dec 28 15:48:38 cluster1 pengine: [822]: notice: LogActions: Leave resource apache#011(Stopped)
Dec 28 15:48:38 cluster1 crmd: [823]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Dec 28 15:48:38 cluster1 crmd: [823]: info: unpack_graph: Unpacked transition 40: 0 actions in 0 synapses
Dec 28 15:48:38 cluster1 crmd: [823]: info: do_te_invoke: Processing graph 40 (ref=pe_calc-dc-1356725918-121) derived from /var/lib/pengine/pe-input-327.bz2
Dec 28 15:48:38 cluster1 crmd: [823]: info: run_graph: ====================================================
Dec 28 15:48:38 cluster1 crmd: [823]: notice: run_graph: Transition 40 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-327.bz2): Complete
Dec 28 15:48:38 cluster1 crmd: [823]: info: te_graph_trigger: Transition 40 is now complete
Dec 28 15:48:38 cluster1 crmd: [823]: info: notify_crmd: Transition 40 status: done - <null>
Dec 28 15:48:38 cluster1 crmd: [823]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Dec 28 15:48:38 cluster1 crmd: [823]: info: do_state_transition: Starting PEngine Recheck Timer
Dec 28 15:48:38 cluster1 cib: [3674]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-47.raw
Dec 28 15:48:38 cluster1 pengine: [822]: info: process_pe_message: Transition 40: PEngine Input stored in: /var/lib/pengine/pe-input-327.bz2
Dec 28 15:48:38 cluster1 cib: [3674]: info: write_cib_contents: Wrote version 0.183.0 of the CIB to disk (digest: 0364d6b6e5a2b2b40c5d9f0eddd87737)
Dec 28 15:48:38 cluster1 cib: [3674]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.V9P0Mi (digest: /var/lib/heartbeat/crm/cib.qKbFRh)
I can see this message:
Code:
san:0_monitor_0 failed with rc=6: Preventing san:0 from re-starting anywhere in the cluster
But what does that rc=6 mean for an ocf:heartbeat:iSCSITarget resource?
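If I'm reading the OCF resource agent conventions correctly, these are the standard return codes (as defined in the shell includes shipped with resource-agents), which would make rc=6 OCF_ERR_CONFIGURED, i.e. a fatal resource configuration error:
Code:
OCF_SUCCESS=0            # action completed successfully
OCF_ERR_GENERIC=1        # generic or unspecified error
OCF_ERR_ARGS=2           # invalid or excess arguments
OCF_ERR_UNIMPLEMENTED=3  # requested action not implemented
OCF_ERR_PERM=4           # insufficient permissions
OCF_ERR_INSTALLED=5      # required program/component not installed
OCF_ERR_CONFIGURED=6     # fatal resource configuration error
OCF_NOT_RUNNING=7        # resource is not running (monitor only)
So the monitor operation is apparently reporting a configuration problem rather than a missing target.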
This is the definition of the resource:
Code:
primitive san ocf:heartbeat:iSCSITarget \
params iqn="iqn.2012-12.san:disk1" portals="192.168.55.11" \
meta target-role="Started"
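The logical units in the group are defined along the same lines, roughly like this (the device path here is just a placeholder, not my actual setup):
Code:
primitive sanwwwsanos ocf:heartbeat:iSCSILogicalUnit \
        params target_iqn="iqn.2012-12.san:disk1" lun="1" path="/dev/vg0/wwwsanos" \
        meta target-role="Started"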
I ran tcpdump to check for traffic to the SAN and there was none, which makes me think that I'm either missing something in the configuration or that there's a problem with the iSCSITarget script (or something along those lines).
I hate it when things are hidden from me. Is it possible to call iSCSITarget (/usr/lib/ocf/resource.d/heartbeat/iSCSITarget) manually so that I can see what's going on?
When I call it directly without any parameters, this is what I get:
Code:
/usr/lib/ocf/resource.d/heartbeat/iSCSITarget: line 32: /resource.d/heartbeat/.ocf-shellfuncs: No such file or directory
/usr/lib/ocf/resource.d/heartbeat/iSCSITarget: line 38: have_binary: command not found
/usr/lib/ocf/resource.d/heartbeat/iSCSITarget: line 40: have_binary: command not found
/usr/lib/ocf/resource.d/heartbeat/iSCSITarget: line 42: have_binary: command not found
/usr/lib/ocf/resource.d/heartbeat/iSCSITarget: line 506: ocf_log: command not found
It looks like some initialization is missing, but then what do I have to do to set the parameters and so on?
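From what I understand of the OCF conventions (so treat this as a guess, not something from the docs): the agent sources .ocf-shellfuncs relative to OCF_ROOT and reads its parameters from OCF_RESKEY_* environment variables, so invoking it by hand should look something like:
Code:
# tell the agent where the OCF shell includes live
export OCF_ROOT=/usr/lib/ocf
# pass the resource parameters as OCF_RESKEY_* variables
export OCF_RESKEY_iqn="iqn.2012-12.san:disk1"
export OCF_RESKEY_portals="192.168.55.11"
# call the agent with the action as its single argument
/usr/lib/ocf/resource.d/heartbeat/iSCSITarget start
echo $?   # OCF return code, e.g. 6 = OCF_ERR_CONFIGURED
The resource-agents package also ships ocf-tester, which appears to do this setup for you: ocf-tester -n san -o iqn="iqn.2012-12.san:disk1" -o portals="192.168.55.11" /usr/lib/ocf/resource.d/heartbeat/iSCSITarget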