
I recently purchased a DroboPro array/backup robot thingy and had a hard time initially setting it up under Linux. I did finally manage to get it working with the following settings.


Configuration

Open iSCSI Initiator

You need to install and configure the Open iSCSI Initiator. This package is typically included (but not installed by default) with all major Linux distributions. In my case I used Ubuntu.
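On Ubuntu, for example, the initiator can be installed from the standard repositories and started like this (the package and init-script names assume Ubuntu; other distributions may differ):

sudo apt-get install open-iscsi
sudo /etc/init.d/open-iscsi start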

Configuring Your Drobo

You can use the Windows/Mac Drobo Dashboard to configure the DroboPro first, enabling iSCSI and setting a static IP address for the Drobo.
You should also upgrade to the latest firmware for your Drobo (1.1.1 as of this article).

Connecting to your DroboPro

If you have already performed discovery, the iSCSI node configuration file should be located under /etc/iscsi/nodes/....

Mine is in: /etc/iscsi/nodes/iqn.2005-06.com.datarobotics\:drobopro.tdb092940167.node0/10.0.0.11\,3260\,0/default
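If you have not run discovery yet, a sendtargets discovery against the Drobo's IP address (10.0.0.11 in my case; substitute your own) should create that node file:

iscsiadm -m discovery -t sendtargets -p 10.0.0.11:3260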

Open the file in an editor (say vi) and change the lines:

node.session.cmds_max = 128
node.session.queue_depth = 64
node.conn[0].tcp.window_size = 524288

to:

node.session.cmds_max = 16
node.session.queue_depth = 16
node.conn[0].tcp.window_size = 65535
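Alternatively, the same values can be set with iscsiadm's update operation instead of editing the file by hand. A sketch using my target name and portal from above (substitute your own):

iscsiadm -m node -T iqn.2005-06.com.datarobotics:drobopro.tdb092940167.node0 -p 10.0.0.11 -o update -n node.session.cmds_max -v 16
iscsiadm -m node -T iqn.2005-06.com.datarobotics:drobopro.tdb092940167.node0 -p 10.0.0.11 -o update -n node.session.queue_depth -v 16
iscsiadm -m node -T iqn.2005-06.com.datarobotics:drobopro.tdb092940167.node0 -p 10.0.0.11 -o update -n "node.conn[0].tcp.window_size" -v 65535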

If you already have an iSCSI session logged in, log it out first:

iscsiadm -m node --logout

Then shut down and restart your Drobo.

When the Drobo comes back up, log the iSCSI session back in:

iscsiadm -m node --login
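To confirm the session came back, you can list the active sessions:

iscsiadm -m session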

Formatting the DroboPro Volumes

To format the DroboPro volumes you need to use the tools from the drobo-utils project.
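At the time of writing drobo-utils was packaged for Debian/Ubuntu; if it is not available for your distribution, grab it from the project page. A rough sketch, assuming the drobom command-line tool (the exact subcommands for formatting vary by version, so check drobom help first):

sudo apt-get install drobo-utils
sudo drobom status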

Storage Sharing

This section goes over the various storage-sharing strategies when using the DroboPro over iSCSI (NFS/Samba, SAN configuration, VMware, etc.).

File Sharing

NFS

SAN Configuration

The DroboPro can be accessed by multiple hosts over iSCSI at the same time; that is, you can have more than one PC connected to and using the DroboPro storage at any one time. This is considered a SAN (Storage Area Network) configuration. By default, however, the DroboPro will only serve a single PC. This is mostly a safety precaution, since improperly setting up the Pro in a SAN configuration can lead to silent and fatal data corruption.

Precautions

When accessing the Drobo in a SAN configuration, there are a couple of things to keep in mind:

  • Most file systems (ext2, ext3, NTFS, HFS+) are not designed with a SAN configuration in mind. These file systems assume a single writer; that is, they assume they have exclusive ownership of the storage. In a SAN configuration the storage can be accessed by more than one PC at a time, which will result in fatal data corruption when using one of these file systems.
  • It is still possible to use the DroboPro in a SAN configuration with the above file systems, but only as long as no two PCs (hosts) access the same filesystem at any one time (this includes simply having the FS mounted). We can, however, split a Drobo volume into partitions and mount each one on a different PC. We can also use the DroboPro's native LUN slicing to create two different LUNs and mount each LUN on a different PC.
  • The best way to use this configuration, however, is to set up a shared-storage/cluster-aware filesystem like GFS/GFS2 or OCFS; see the sketch after this list.
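As a rough sketch of the cluster-filesystem route, assuming a two-node cluster with DLM locking already configured (the cluster name drobocluster, filesystem name drobofs, and mount point /mnt/drobo are hypothetical), you would create a GFS2 filesystem with one journal per host on one node and then mount it on both:

mkfs.gfs2 -p lock_dlm -t drobocluster:drobofs -j 2 /dev/sdc1
mount -t gfs2 /dev/sdc1 /mnt/drobo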

Initial Setup

To set up a SAN configuration, follow the steps for the initial iSCSI initiator configuration above, but use iSCSI port 3261 instead of the default (3260).
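For example, discovery from the second host then looks like this (same Drobo IP address as in my example above):

iscsiadm -m discovery -t sendtargets -p 10.0.0.11:3261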

Non-Clustered Sharing

This configuration lets you share the DroboPro storage between one or more hosts; however, each host can only use its "own" partition. This can be done by taking a DroboPro iSCSI volume and partitioning it.

For instance:
   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1                1       60788   488279578+  83  Linux
/dev/sdc2            60789      121576   488279610   83  Linux

In the above example I created two partitions on a single Drobo iSCSI LUN. I can then format them both as ext3 and mount each partition on a different PC (host).

For example: /dev/sdc1 on server-1, /dev/sdc2 on server-2.

Note: Each server can only mount its own partition. This means we cannot mount /dev/sdc1 on BOTH server-1 and server-2, as that would result in serious data corruption.
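For example, once both partitions have been formatted as ext3, each host mounts only its own partition (the mount points below are just examples):

# on server-1
mount /dev/sdc1 /mnt/drobo1

# on server-2
mount /dev/sdc2 /mnt/drobo2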

GFS

OCFS

VMware (ESXi)
