Friday, May 4, 2012

NetApp Overview

The following documentation is a guide to using and configuring NetApp servers, and it includes a command-line cheat sheet. I have tried to keep this section brief while still covering a broad range of information about the NetApp product; for everything else I point you to the official NetApp web site, which contains all the documentation you will ever need.
Introduction
Introduction
History
Filer
Backups

Architecture
Hardware
Software
Storage Terminology
NetApp Terminology

System Administration
Accessing NetApp
System Administration
Licensing
NTP setup

Disk Administration
Storage
Disks
Aggregates
Volumes
FlexCache Volumes
FlexClone Volume
Space Saving
QTrees
CIFS Oplocks
Quotas

Block Access Management
Introduction
Block Based Access
iSCSI Introduction
FC Introduction
Getting the Storage Ready
LUNs, igroups, LUN Maps and iSCSI
Snapshots and Cloning
Disk Space Management

File Access Management
Introduction
NFS
CIFS
FTP
HTTP

Network Management
Interface Configuration
Routing
Hosts and DNS
VLANs
Interface Groups
Diagnostic Tools


Network Appliance (NetApp)
This section is a short introduction to Network Appliance (NetApp), a company that creates storage systems and the management software associated with company data. It offers products that cater for small, medium and large companies, and can provide support.
Other main storage vendors are
  • EMC
  • Hitachi Data Systems
  • HP
  • IBM
The NetApp filer, also known as NetApp Fabric-Attached Storage (FAS), is a type of disk storage device which owns and controls a filesystem and presents files and directories over the network; it uses an operating system called Data ONTAP (based on FreeBSD).
NetApp Filers can offer the following
  • Supports SAN, NAS, FC, SATA, iSCSI, FCoE and Ethernet all on the same platform
  • Supports SATA, FC and SAS disk drives
  • Supports block protocols such as iSCSI, Fibre Channel and AoE
  • Supports file protocols such as NFS, CIFS , FTP, TFTP and HTTP
  • High availability
  • Easy Management
  • Scalable
History
NetApp was created in 1992 by David Hitz, James Lau and Michael Malcolm; the company went public in 1995 and grew rapidly in the dot-com boom. The company's headquarters are in Sunnyvale, California, US. NetApp has acquired a number of companies that helped in the development of various products. The first NetApp network appliance, known as a filer, shipped in 1993; this product was a new beginning in data storage architecture. The device did one task and it did it extremely well, and NetApp made sure that the device used industry-standard hardware rather than specialized hardware. Today's NetApp products cater for small, medium and large corporations and can be found in many blue-chip companies.
NetApp Filer
The NetApp Filer, also known as NetApp Fabric-Attached Storage (FAS), is a data storage device. It can act as a SAN or as a NAS, serving storage over a network using either file-based or block-based protocols
File-Based Protocol NFS, CIFS, FTP, TFTP, HTTP
Block-Based Protocol Fibre Channel (FC), Fibre channel over Ethernet (FCoE), Internet SCSI (iSCSI)
The most common NetApp configuration consists of a filer (also known as a controller or head node) and disk enclosures (also known as shelves). The disk enclosures are connected by FC or parallel/serial ATA, and the filer is then accessed by other Linux, Unix or Windows servers via a network (Ethernet or FC). An example setup would be like the one in the diagram below

The filers run NetApp's own adapted operating system (based on FreeBSD) called Data ONTAP; it is highly tuned for storage-serving purposes.
All filers have a battery-backed NVRAM, which allows them to commit writes to stable storage quickly, without waiting on the disks.
It is also possible to cluster filers to create a high-availability cluster with a private high-speed link using either FC or InfiniBand; clusters can then be grouped together under a single namespace when running in the cluster mode of the Data ONTAP 8 operating system.
The filer will be an Intel or AMD processor-based computer using PCI; each filer has a battery-backed NVRAM adapter to log all writes for performance and to replay them in the event of a server crash. The Data ONTAP operating system implements a single proprietary file system called WAFL (Write Anywhere File Layout).
WAFL is not a filesystem in the traditional sense, but a file layout that supports very large high-performance RAID arrays (up to 100TB); it provides mechanisms that enable a variety of filesystems and technologies to access disk blocks. WAFL also offers
  • snapshots (up to 255 per volume can be made)
  • snapmirror (disk replication)
  • syncmirror (mirror RAID arrays for extra resilience, can be mirrored up to 100km away)
  • snaplock (Write once read many, data cannot be deleted until its retention period has been reached)
  • read-only copies of the file system
  • read-write snapshots called FlexClone
  • ACLs
  • quick defragmentation
Filers offer two RAID options (see below); you can also create very large RAID arrays of up to 28 disks, depending on the type of filer.
RAID 4 offers single parity on a dedicated disk (unlike RAID 5)
RAID 6 is the same as RAID 5 but offers double parity (more resilience); two disks in the RAID group can fail
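As a taste of what is covered later in the disk administration section, the sketch below creates a small RAID-DP aggregate; the aggregate and disk names (8a.x) are illustrative and will differ on your system.
## create a RAID-DP aggregate from four named disks
aggr create aggr_dp -t raid_dp -d 8a.16 8a.17 8a.18 8a.19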
NetApp Backups
The last point to touch on is backups; NetApp offers two types
Dump
  • backs up files and directories
  • supports level-0, incremental and differential backups
  • supports single file restore
  • capable of backing up only the base snapshot copy
SMTape
  • Backs up blocks of data to tape
  • Supports only level-0 backup
  • does not support single file restore
  • capable of backing up multiple snapshot copies in a volume
  • does not support remote tape backups and restores
The filer supports either SCSI or Fibre Channel (FC) tape drives and can have a maximum of 64 mixed tape devices attached to a single storage system.
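Below is a hedged sketch of both backup types run against a volume; the tape device name (rst0a) and the volume path are illustrative, and the smtape command assumes a Data ONTAP 8 release.
## level-0 dump of a volume to a locally attached tape drive
dump 0f rst0a /vol/vol0/
## block-level SMTape backup of the same volume
smtape backup /vol/vol0 rst0a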
Network Data Management Protocol (NDMP) is a standardized protocol for controlling backup, recovery and other transfers of data between primary and secondary storage devices, such as storage systems and tape libraries. It removes the need to transport the data through the backup server itself, thus enhancing speed and removing load from the backup server. Enabling NDMP support allows the storage system to communicate with NDMP-enabled commercial network-attached backup applications; it also provides low-level control of tape devices and medium changers. The advantages of NDMP are listed below (a short example of enabling it follows the list)
  • provide sophisticated scheduling of data protection across multiple storage systems
  • provide media management and tape inventory management services to eliminate tape handling during data protection operations
  • support data cataloging services that simplify the process of locating specific recovery data
  • supports multiple topology configurations, allowing sharing of secondary storage (tape library) resources through the use of three-way network data connections
  • supports security features to prevent or monitor unauthorized use of NDMP connections
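Enabling NDMP on the filer itself is a one-liner; a minimal sketch is below (the backup application side is vendor specific).
## enable the NDMP daemon and confirm it is running
ndmpd on
ndmpd status
## display the NDMP protocol version in use
ndmpd version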

NetApp Architecture
The NetApp architecture consists of hardware, the Data ONTAP operating system and the network. I have already shown you a diagram of a common NetApp setup, but now I will go into more detail.
Hardware
NetApp has a number of filers to fit any company size and budget; the filer itself may have the following
  • can be an Intel or AMD server (up to 8 dual-core processors)
  • can have dual power supplies
  • can handle up to 64GB RAM and 4GB NVRAM (non-volatile RAM)
  • can manage up to 1176TB storage
  • has a maximum limit of 1176 disk drives
  • can connect the disk shelves via a FC loop for redundancy
  • can support FCP, SATA and SAS disk drives
  • has a maximum 5 PCI and 3 PCI-express slots
  • has 4/8/10GbE support
  • 64bit support
The filer can be attached to a number of disk enclosures (shelves) which expand the storage allocation; these disk enclosures are attached via FC. As mentioned above, the disk enclosures can support the following disks
FCP These are fibre channel disks, they are very fast but expensive
SAS Serial Attached SCSI disks are again very fast but expensive, due to replace the FC disks
SATA Serial ATA are slow disks but are cheaper, ideal for QA and DEV environments
One note to remember is that the filer that connects to the top module of a shelf controls (owns) the disks in that shelf under normal circumstances (i.e. non-failover).
The filers can make use of VIFs (Virtual Interfaces), which come in two flavors (a configuration sketch follows the list)
Single-mode VIF
  • 1 active link, others are passive, standby links
  • Failover when link is down
  • No configuration on switches
Multi-mode VIF
  • Multiple links are active at the same time
  • Loadbalancing and failover
  • Loadbalancing based on IP address, MAC address or round robin
  • Requires support & configuration on switches
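The sketch below creates one of each VIF type; the VIF and interface names (e0a, e0b, etc.) are illustrative, and note that in Data ONTAP 8 the vif command was renamed ifgrp.
## multi-mode VIF load balancing on IP address across two links
vif create multi vif1 -b ip e0a e0b
## single-mode VIF with one active and one standby link
vif create single vif2 e0c e0d
## check the result
vif status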
Software
I have already touched on the operating system, Data ONTAP; the latest version is currently version 8, which fully supports grid technology (GX in version 7). It is fully compatible with Intel and AMD architectures, supports 64-bit, and borrows ideas from FreeBSD.
All additional NetApp products are activated via licenses, some require the filer to be rebooted so check the documentation.
Management of the filer can be accessed via any of the following
  • Telnet or SSH
  • Filerview (HTTP GUI)
  • System Manager (client software GUI)
  • Console cable
  • snmp and ndmp
Storage Terminology
When talking about storage you will probably come across two solutions
NAS
(Network Attached Storage)
NAS storage speaks to a file, so the protocol is a file-based one. Data is made to be shared; examples are
  • NFS (Unix)
  • CIFS or SMB (Windows)
  • FTP, HTTP, WebDAV, DAFS
SAN
(Storage Area Network)
SAN storage speaks to a LUN (Logical Unit Number) and accesses it via data blocks; sharing is difficult. Examples are
  • SCSI
  • iSCSI
  • FCAL/FCP
There are a number of terms associated with the above solutions; I have already discussed some of them in my EMC section
Terminology Solution Description
share/export NAS CIFS servers make data available via shares; a Unix server makes data available via exports
Drive mapping/mounting NAS CIFS clients typically map a network drive to access data stored on a storage server; Unix clients typically mount the remote resource
LUN SAN Logical Unit Number, basically a disk presented by a SAN to a host; when attached it looks like a locally attached disk
Target SAN The machine that offers a disk (LUN) to another machine, in other words the SAN
Initiator SAN The machine that expects to see the disk (LUN), i.e. the host OS; appropriate initiator software will be required
Fabric SAN One or more fibre switches with targets and initiators connected to them are referred to as a fabric. Cisco, McData and Brocade are well-known fabric switch makers. See my EMC architecture section for more details
HBA SAN Host Bus Adapter, the hardware that connects the server or SAN to the fabric switches. There are also iSCSI HBAs
Multipathing (MPIO) SAN The use of redundant storage network components responsible for the transfer of data between the server and the storage (cabling, adapters, switches and software)
Zoning SAN The partitioning of a fabric into smaller subsets to restrict interference, add security and simplify management; it is like VLANs in networking. See my EMC zoning section for more details
Below is a typical SAN setup using NetApp hardware
  
NetApp Terminology
Now that we know how a NetApp is configured from a hardware point of view, we need to know how to present the storage to the outside world; first, some NetApp terminology explained
Disk This is the physical disk itself; normally the disk will reside in a disk enclosure, and the disk will have a pathname like 2a.17
  • 2a = SCSI adapter
  • 17 = disk SCSI ID
Any disks that are classed as spare will be used in any group to replace failed disks.
Disks are assigned to a specific pool; parity disks do not contain any user data.
Raid Group (Pool) Normally there are three pools: 0, 1 and spare
  • 0 = normal pool
  • 1 = mirror pool (if syncMirror is enabled)
  • spare = spare disks that can be used for growth and replacement of failed disks
Aggregate A collection of disks that can have either of the RAID levels below; the aggregate can contain up to 1176 disks, and you can have many aggregates with different RAID levels. An aggregate can contain many volumes (see volumes below).
  • RAID-4
  • RAID-DP (RAID-6) better fault tolerance
One point to remember is that an aggregate can grow but cannot shrink. The disadvantage of RAID 4 is that a bottleneck can occur on the dedicated parity disk, which is normally the first disk to fail due to it being used the most; however, the NVRAM helps out by only writing to disks every 10 seconds or when the NVRAM is 50% full.
Plex When an aggregate is mirrored it will have two plexes; when thinking of plexes, think of mirroring. A mirrored aggregate can be split into two plexes.
Volume (Flexible) This is more or less like a traditional volume in other LVMs; it is a logical space within an aggregate that contains the actual data, and it can be grown or shrunk as needed
LUN The Logical Unit Number is what is presented to the host to allow access to the volume.
WAFL Write Anywhere File Layout is the filesystem used; it uses inodes just like Unix. Disks are not formatted, they are zeroed.
By default WAFL reserves 10% of disk space (unreclaimable)
Snapshot A frozen read-only image of a volume or aggregate that reflects the state of the file system at the time the snapshot was created; snapshot features are
  • Up to 255 snapshots per volume
  • can be scheduled
  • Maximum space occupied can be specified (default 20%)
  • File permissions are handled
Snapshots in the NetApp world are very fast; basically a snapshot records all the blocks that are associated with the files, and this data is never actually changed. If a block is changed, a new block is created and the snapshot still points to the old block. NetApp has two products, SnapDrive and SnapManager, that deal with consistency problems where data has not actually been written to disk but is cached in memory buffers; you might want to take a look at these products. A short sketch of the basic snapshot commands follows.
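The sketch below shows the basic snapshot commands; the volume and snapshot names are illustrative, and snap restore requires the SnapRestore license.
## list snapshots on a volume and create one manually
snap list vol1
snap create vol1 before_upgrade
## schedule 0 weekly, 2 nightly and 6 hourly snapshots (at 8:00, 12:00, 16:00 and 20:00)
snap sched vol1 0 2 6@8,12,16,20
## revert the volume to a snapshot (SnapRestore license required)
snap restore -t vol -s before_upgrade vol1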

There are three additional replication products that you can use; a short SnapMirror sketch follows the list.
SyncMirror
  • real time replication of data
  • maximum distance of up to 35km
  • Fibre Channel or DWDM protocol
  • Synchronous
is used primarily for data redundancy
SnapMirror
  • long distance DR data consolidation
  • no limit on distance and uses
  • IP protocol (WAN/LAN)
  • ASync Mirror (> 1 minute)
is used primarily for disaster recovery
SnapVault
  • disk-to-disk backup, restore HSM
  • no limit on distance
  • IP protocol (WAN/LAN)
  • ASync Mirror (> 1 hour)
is used primarily for backup/restore
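As an example of the most commonly used of the three, the sketch below sets up an asynchronous SnapMirror relationship; the filer and volume names are illustrative, and the destination volume must already exist and be restricted.
## on the destination filer - perform the initial (baseline) transfer
snapmirror initialize -S filer1:vol1 filer2:vol1_mirror
## check progress and perform later incremental updates
snapmirror status
snapmirror update -S filer1:vol1 filer2:vol1_mirror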

NetApp System Administration
In this section I will be talking about NetApp system administration; I will cover disk administration in another topic. Basically the NetApp filer is a Unix server highly tuned to deliver large amounts of storage; the hardware is very similar to the computer that you have at home but has extra redundancy features.
As you know, the operating system is called Data ONTAP, which is based on FreeBSD. You don't need to know a great deal about Unix in order to manage and set up a NetApp filer, as it comes with two excellent GUI tools, one of which is web based; however, it is worthwhile getting to know Unix for the more difficult problems, as you will need to use the command line.
Generally the NetApp filer will be set up when you receive it; it should have the latest Data ONTAP installed and be ready to go, so I am not going to go into much detail regarding the operating system install.
Accessing NetApp
Once you have your NetApp filer powered up and on the network, you can access it by any of the following common methods
telnet/SSH
Web Access GUI (http)
System Manager (GUI)
I will only be using telnet (commandline) and the system manager in my examples.
There are a number of common session-related parameters that you may wish to tweak; there are many more than the ones below, so take a peek at the documentation
Help ontap1> options ?
Telnet ontap1> options telnet
telnet.access legacy
telnet.distinct.enable on
telnet.enable off

## Enabling telnet access
ontap1> options telnet.enable on
SSH ontap1> options ssh
ssh.access *
ssh.enable on
ssh.idle.timeout 0
ssh.passwd_auth.enable on
ssh.port 22
ssh.pubkey_auth.enable on
ssh1.enable off
ssh2.enable on

## change the idle timeout to 5 minutes
ontap1> options ssh.idle.timeout 300
## You can also use the secureadmin command to set up SSH/SSL

secureadmin [setup|addcert|enable|disable|status]

## You can also use System Manager


HTTP ontap1> options http
httpd.access legacy
httpd.admin.access legacy
httpd.admin.enable on
httpd.admin.hostsequiv.enable off
httpd.admin.max_connections 512
httpd.admin.ssl.enable on
httpd.admin.top-page.authentication on
httpd.autoindex.enable off
httpd.bypass_traverse_checking off
httpd.enable off
httpd.log.format common
httpd.method.trace.enable off
httpd.rootdir XXX
httpd.timeout 300
httpd.timewait.enable off

## Enabling HTTP administration access
ontap1> options httpd.admin.enable on
Session timeout ontap1> options autologout
autologout.console.enable on
autologout.console.timeout 300
autologout.telnet.enable on
autologout.telnet.timeout 300
## Change the timeout values
ontap1> options autologout.telnet.timeout 300
Security ontap1> options trusted
trusted.hosts *

## Only allow specific hosts to administer the NetApp filer
ontap1> options trusted.hosts <host1>,<host2>
System Configuration and Administration
NetApp filers have two privilege modes; the advanced privilege allows you to access more advanced and potentially dangerous features
  • Administrative (default)
  • Advanced
To set the privilege
Privilege priv set [-q] [admin | advanced]
Note: by default you are in administrative mode

-q = quiet suppresses warning messages
You can use the normal shutdown or reboot commands to halt or restart the NetApp filer; if your filer has an RLM or BMC you can also start the filer in different modes
startup modes
  • boot_ontap - boots the current Data ONTAP software release stored on the boot device
  • boot_primary - boots the Data ONTAP release stored on the boot device as the primary kernel
  • boot_backup - boots the backup Data ONTAP release from the boot device
  • boot_diags - boots a Data ONTAP diagnostic kernel
Note: there are other options but NetApp will provide these as when necessary
shutdown
halt [-t <mins>] [-f]
-t = shutdown after minutes specified
-f = used with HA clustering, means that the partner filer does not take over
restart reboot [-t <mins>] [-s] [-r] [-f]

-t = reboot in specified minutes
-s = clean reboot but also power cycle the filer (like pushing the off button)
-r = bypasses the shutdown (not clean) and power cycles the filer
-f = used with HA clustering, means that the partner filer does not take over
When the filer boots you have a chance to enter the boot menu [Ctrl-C], which gives you a number of options that allow you to change the system password, put the filer into maintenance mode, wipe all disks, etc.
Boot Menu 1) Normal Boot.
2) Boot without /etc/rc.
3) Change password.
4) Clean configuration and initialize all disks.
5) Maintenance mode boot.
6) Update flash from backup config.
7) Install new software first.
8) Reboot node.
Selection (1-8)?
  • Normal Boot - continue with the normal boot operation
  • Boot without /etc/rc - boot with only default options and disable some services
  • Change Password - change the storage systems password
  • Clean configuration and initialize all disks - wipes all disks and resets the filer to factory default settings
  • Maintenance mode boot - file system operations are disabled, limited set of commands
  • Update flash from backup config - restore the configuration information if corrupted on the boot device
  • Install new software first - use this if the filer does not include support for the storage array
  • Reboot node - restart the filer
To check what versions of Data ONTAP you have use the version command
Data ONTAP version version [-b]

-b = include name and version information for the primary, secondary and diagnostic kernels and the firmware
I am not going to talk much about users, groups and roles, as they work much as they do in the Unix world; the commands and options that you should be aware of are the following (a short useradmin sketch follows the lists)
Users you can perform the following using the useradmin command
  • add
  • modify
  • delete
  • list
Groups you can perform the following using the useradmin command
  • add
  • modify
  • delete
  • list
Roles you can perform the following using the useradmin command
  • add
  • modify
  • delete
  • list
Domainuser you can perform the following using the useradmin command
  • add
  • delete
  • list
  • load
Diaguser you can perform the following using the useradmin command
  • lock
  • unlock
  • list
  • load
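A minimal sketch of useradmin usage is shown below; the user name is illustrative and Administrators is one of the predefined groups.
## create a user in the Administrators group, list users, then remove the user
useradmin user add jbloggs -g Administrators
useradmin user list
useradmin user delete jbloggs
## list the available groups and roles
useradmin group list
useradmin role list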
User password options security.passwd.firstlogin.enable off
security.passwd.lockout.numtries 4294967295
security.passwd.rootaccess.enable on
security.passwd.rules.enable on
security.passwd.rules.everyone on
security.passwd.rules.history 6
security.passwd.rules.maximum 256
security.passwd.rules.minimum 8
security.passwd.rules.minimum.alphabetic 2
security.passwd.rules.minimum.digit 1
security.passwd.rules.minimum.symbol 0
System Manager GUI
The system manager can help with user and groups

Change a users password passwd

Note: the passwd command will prompt you for the user to change
When you first log in to a filer you are placed into an administrative shell that only allows a limited number of commands (type help to display the commands you can access); you can obtain more commands by using the advanced privilege. On occasion, however, you need a normal Unix shell prompt that allows you to access the standard Unix commands; this is called the systemshell and can be accessed by the diag user
Access the systemshell ## First obtain the advanced privileges
priv set advanced

## Then unlock and reset the diag users password
useradmin diaguser unlock
useradmin diaguser password

## Now you should be able to access the systemshell and use all the standard Unix
## commands
systemshell
login: diag
password: ********
There are a number of commands to get system configuration information and statistics
System Configuration
General information sysconfig
sysconfig -v
sysconfig -a (detailed)
Configuration errors sysconfig -c
Display disk devices sysconfig -d
sysconfig -A
Display Raid group information sysconfig -V
Display aggregates and plexes sysconfig -r
Display tape devices sysconfig -t
Display tape libraries sysconfig -m
Environment Information
General information environment status
Disk enclosures (shelves) environment shelf [adapter]
environment shelf_power_status
Chassis environment chassis all
environment chassis list-sensors
environment chassis Fans
environment chassis CPU_Fans
environment chassis Power
environment chassis Temperature
environment chassis [PS1|PS2]
Fibre Channel Information
Fibre Channel stats fcstat link_stats
fcstat fcal_stats
fcstat device_map
SAS Adapter and Expander Information
Shelf information sasstat shelf
Expander information sasstat expander
sasstat expander_map
sasstat expander_phy_state
Disk information sasstat dev_stats
Adapter information sasstat adapter_state
Statistical Information
All stats stats show
System stats show system
Processor stats show processor
Disk stats show disk
Volume stats show volume
LUN stats show lun
Aggregate stats show aggregate
FC stats show fcp
iSCSI stats show iscsi
CIFS stats show cifs
Network stats show ifnet
Licensing
The NetApp extra features can be enabled by licensing the product; you can perform this either via the command line or the System Manager GUI
licenses (commandline) ## display licenses
license

## Adding a license
license add <code1> <code2>
## Removing a license
license delete <service>
licenses (GUI)
NTP setup
One very important configuration item is the NTP service; this must be set up, as it is important for snapshots.
NTP setup (commandline) ontap1> options timed
timed.enable off
timed.log off
timed.max_skew 30m
timed.min_skew 0
timed.proto ntp
timed.sched hourly
timed.servers
timed.window 0s

ontap1> options timed.servers <ntp server>
ontap1> options timed.enable on
NTP setup (GUI)

NetApp Disk Administration
In this section I will cover disk administration; I will create another section for common disk and system problems. Here I will cover the basics of the following:
  • Storage
  • Disks
  • Aggregates (RAID options)
  • Volumes (FlexVol and Traditional)
  • FlexCache
  • FlexClone
  • Deduplication
  • QTrees
  • CIFS Oplocks
  • Security styles
  • Quotas
I have tried to cover as much as possible in as little space as I can (I like to keep things short and sweet) and have only briefly touched on some subjects; for more detail on these subjects I point you to the NetApp documentation. As I get more experienced with the NetApp products I will come back and update this section.
Storage
The storage command can configure and administer a disk enclosure; the main storage commands are below
Display storage show adapter
storage show disk [-a|-x|-p|-T]
storage show expander
storage show fabric
storage show fault
storage show hub
storage show initiators
storage show mc
storage show port
storage show shelf
storage show switch
storage show tape [supported]
storage show acp

storage array show
storage array show-ports
storage array show-luns
storage array show-config
Enable storage enable adapter
Disable storage disable adapter
Rename switch storage rename <oldname> <newname>
Remove port storage array remove-port <array_name> -p <WWPN>
Load Balance storage load balance
Power Cycle storage power_cycle shelf -h
storage power_cycle shelf start -c <channel name>
storage power_cycle shelf completed
Disks
Your NetApp filer will have a number of disks attached that can be used; when attached, a disk will have the following device name
Disk name This is the physical disk itself; normally the disk will reside in a disk enclosure and will have a pathname like 2a.17, depending on the type of disk enclosure
  • 2a = SCSI adapter
  • 17 = disk SCSI ID
Any disks that are classed as spare will be used in any group to replace failed disks. They can also be assigned to any aggregate. Disks are assigned to a specific pool.
There are only four types of disks in Data ONTAP; I will discuss RAID in the aggregate section.
Data holds data stored within the RAID group
Spare Does not hold usable data but is available to be added to a RAID group in an aggregate, also known as a hot spare
Parity Stores data reconstruction information within the RAID group
dParity Stores double-parity information within the RAID group, if RAID-DP is enabled
There are a number of disk commands that you can use
Display disk show
disk show <disk_name>

disk_list

sysconfig -r
sysconfig -d
## list all unassigned/assigned disks
disk show -n
disk show -a

Adding (assigning) ## Add a specific disk to pool 1, the mirror pool
disk assign <disk_name> -p 1

## Assign all disks to pool 0; by default they are assigned to pool 0 if the "-p"
## option is not specified
disk assign all -p 0
Remove (spin down disk) disk remove <disk_name>
Reassign disk reassign -d <new_sysid>
Replace disk replace start <disk_name> <spare_disk_name>
disk replace stop <disk_name>

Note: uses Rapid RAID Recovery to copy data from the specified file system disk to the specified spare disk; you can stop this process using the stop command
Zero spare disks disk zero spares
fail a disk disk fail <disk_name>
Scrub a disk disk scrub start
disk scrub stop
Sanitize disk sanitize start <disk list>
disk sanitize abort <disk_list>
disk sanitize status
disk sanitize release <disk_list>

Note: the release modifies the state of the disk from sanitize to spare. Sanitize requires a license.
Maintenance disk maint start -d <disk_list>
disk maint abort <disk_list>
disk maint list
disk maint status

Note: you can test disks using maintenance mode
swap a disk disk swap
disk unswap

Note: it stalls all SCSI I/O until you physically replace or add a disk; can be used on SCSI disks only.
Statistics disk_stat <disk_name>
Simulate a pulled disk disk simpull <disk_name>
Simulate a pushed disk disk simpush -l
disk simpush <complete path of disk obtained from above command>

## Example
ontap1> disk simpush -l
The following pulled disks are available for pushing:
                         v0.16:NETAPP__:VD-1000MB-FZ-520:14161400:2104448

ontap1> disk simpush v0.16:NETAPP__:VD-1000MB-FZ-520:14161400:2104448
Aggregates
Disks are grouped together in aggregates; these aggregates provide storage to the volume or volumes that they contain. Each aggregate has its own RAID configuration, plex structure and set of assigned disks or array LUNs. You can create traditional volumes or NetApp's FlexVol volumes (see the section on volumes below). There are two types of aggregate
  • 32bit - Maximum 16TB
  • 64bit - Maximum 100TB
An aggregate has only one plex (pool 0); if you use SyncMirror (a licensed product) you can mirror the aggregate, in which case it will have two plexes (pool 0 and pool 1). Disks can be assigned to different pools, which will be used for hot spares or for extending aggregates in those pools. The plexes are updated simultaneously when mirroring aggregates and need to be resynchronized if you have problems with one of the plexes. You can see how mirroring works in the diagram below

When using RAID 4 or RAID-DP the largest disks will be used as the parity disk(s); if you add a new larger disk to the aggregate, it will be reassigned as the parity disk(s).
An aggregate can be in one of three states
Online Read and write access to volumes is allowed
Restricted Some operations, such as parity reconstruction are allowed, but data access is not allowed
Offline No access to the aggregate is allowed
The aggregate can have a number of different status values
32-bit This aggregate is a 32-bit aggregate
64-bit This aggregate is a 64-bit aggregate
aggr This aggregate is capable of containing FlexVol volumes
copying This aggregate is currently the target aggregate of an active copy operation
degraded This aggregate contains at least one RAID group with a single-disk failure that is not being reconstructed
double degraded This aggregate contains at least one RAID group with a double-disk failure that is not being reconstructed (RAID-DP aggregates only)
foreign Disks that the aggregate contains were moved to the current storage system from another storage system
growing Disks are in the process of being added to the aggregate
initializing The aggregate is in the process of being initialized
invalid The aggregate contains no volumes and none can be added. Typically this happens only after an aborted "aggr copy" operation
ironing A WAFL consistency check is being performed on the aggregate
mirror degraded The aggregate is mirrored and one of its plexes is offline or resynchronizing
mirrored The aggregate is mirrored
needs check WAFL consistency check needs to be performed on the aggregate
normal The aggregate is unmirrored and all of its RAID groups are functional
out-of-date The aggregate is mirrored and needs to be resynchronized
partial At least one disk was found for the aggregate, but two or more disks are missing
raid0 The aggregate consists of RAID 0 (no parity) RAID groups
raid4 The aggregate consists of RAID 4 RAID groups
raid_dp The aggregate consists of RAID-DP RAID groups
reconstruct At least one RAID group in the aggregate is being reconstructed
redirect Aggregate reallocation or file reallocation with the "-p" option has been started on the aggregate, read performance will be degraded
resyncing One of the mirror aggregates plexes is being resynchronized
snapmirror The aggregate is a SnapMirror replica of another aggregate (traditional volumes only)
trad The aggregate is a traditional volume and cannot contain FlexVol volumes.
verifying A mirror operation is currently running on the aggregate
wafl inconsistent The aggregate has been marked corrupted; contact technical support
You can mix disk speeds and different disk types within the aggregate; to do so, make sure you change the options below
Mixed disk speeds and types ## to allow mixed speeds
options raid.rpm.fcal.enable on
options raid.rpm.ata.enable on

## to allow mixed disk types (SAS, SATA, FC, ATA)
options raid.disktype.enable off
Now I am only going to detail the common commands that you use with aggregates; I will update this section and the cheat sheet as I get more experienced with the NetApp product.
Displaying aggr status
aggr status -r
aggr status <aggregate> [-v]
Check you have spare disks aggr status -s
Adding (creating) ## Syntax - if no option is specified then the default is used
aggr create <aggr_name> [-f] [-m] [-n] [-t {raid0 |raid4 |raid_dp}] [-r raid_size] [-T disk_type] [-R rpm>] [-L] [-B {32|64}] <disk_list>

## create an aggregate called newaggr with a maximum of 8 disks per RAID group
aggr create newaggr -r 8 -d 8a.16 8a.17 8a.18 8a.19
## create an aggregate called newfastaggr using 20 x 15000rpm disks
aggr create newfastaggr -R 15000 20

## create an aggregate called newFCALaggr (note: SAS and FC disks may be used)
aggr create newFCALaggr -T FCAL 15
Note:
-f = overrides the default behavior that does not permit disks in a plex to belong to different disk pools
-m = specifies the optional creation of a SyncMirror
-n = displays the results of the command but does not execute it
-r = maximum size (number of disks) of the RAID groups for this aggregate
-T = disk type ATA, SATA, SAS, BSAS, FCAL or LUN
-R = rpm which include 5400, 7200, 10000 and 15000
Remove(destroying) aggr offline <aggregate>
aggr destroy <aggregate>
Unremoving(undestroying) aggr undestroy <aggregate>
Rename aggr rename <old name> <new name>
Increase size ## Syntax
aggr add <aggr_name> [-f] [-n] [-g {raid_group_name | new |all}] <disk_list>

## add an additional disk to aggregate pfvAggr, use "aggr status" to get the group name
aggr status pfvAggr -r
aggr add pfvAggr -g rg0 -d v5.25

## Add 4 300GB disk to aggregate aggr1
aggr add aggr1 4@300
offline aggr offline <aggregate>
online aggr online <aggregate>
restricted state aggr restrict <aggregate>
Change an aggregate's options ## to display an aggregate's options
aggr options <aggregate>

## change an aggregate's raid type
aggr options <aggregate> raidtype raid_dp
aggr options <aggregate> raidtype raid4

## change an aggregate's raid size
aggr options <aggregate> raidsize 4
show space usage aggr show_space <aggregate>
Mirror aggr mirror <aggregate>
Split mirror aggr split <aggregate/plex> <new_aggregate>
Copy from one aggregate to another ## Obtain the status
aggr copy status

## Start a copy
aggr copy start <aggregate source> <aggregate destination>

## Abort a copy - obtain the operation number by using "aggr copy status"
aggr copy abort <operation number>

## Throttle the copy 10=full speed, 1=one-tenth full speed
aggr copy throttle <operation number> <throttle speed>
Scrubbing (parity) ## Media scrub status
aggr media_scrub status
aggr scrub status

## start a scrub operation
aggr scrub start [ aggrname | plexname | groupname ]

## stop a scrub operation
aggr scrub stop [ aggrname | plexname | groupname ]

## suspend a scrub operation
aggr scrub suspend [ aggrname | plexname | groupname ]

## resume a scrub operation
aggr scrub resume [ aggrname | plexname | groupname ]
Note: Starts parity scrubbing on the named online aggregate. Parity scrubbing compares the data disks to the
parity disk(s) in their RAID group, correcting the parity disk’s contents as necessary. If no name is
given, parity scrubbing is started on all online aggregates. If an aggregate name is given, scrubbing is
started on all RAID groups contained in the aggregate. If a plex name is given, scrubbing is started on
all RAID groups contained in the plex.
Look at the following system options:
raid.scrub.duration 360
raid.scrub.enable on
raid.scrub.perf_impact low
raid.scrub.schedule
Verify (mirroring) ## verify status
aggr verify status

## start a verify operation
aggr verify start [ aggrname ]

## stop a verify operation
aggr verify stop [ aggrname ]

## suspend a verify operation
aggr verify suspend [ aggrname ]

## resume a verify operation
aggr verify resume [ aggrname ]
Note: Starts RAID mirror verification on the named online mirrored aggregate. If no name is given, then
RAID mirror verification is started on all online mirrored aggregates. Verification compares the data in
both plexes of a mirrored aggregate. In the default case, all blocks that differ are logged, but no changes
are made.
Media Scrub aggr media_scrub status

Note: Prints the media scrubbing status of the named aggregate, plex, or group. If no name is given, then
status is printed for all RAID groups currently running a media scrub. The status includes a
percent-complete and whether it is suspended.
Look at the following system options:

raid.media_scrub.enable on
raid.media_scrub.rate 600
raid.media_scrub.spares.enable on
Volumes
Volumes contain file systems that hold user data that is accessible using one or more of the access protocols supported by Data ONTAP, including NFS, CIFS, HTTP, FTP, FC, and iSCSI.

Each volume depends on its containing aggregate for all its physical storage, that is, for all storage in the aggregate’s disks and RAID groups.
A FlexVol volume is a volume that is loosely coupled to its containing aggregate. A FlexVol volume can share its containing aggregate with other FlexVol volumes. Thus, a single aggregate can be the shared source of all the storage used by all the FlexVol volumes contained by that aggregate.

Because a FlexVol volume is managed separately from the aggregate, you can create small FlexVol volumes (20 MB or larger), and you can increase or decrease the size of FlexVol volumes in increments as small as 4 KB.
When a FlexVol volume is created, it reserves a small amount of extra space (approximately 0.5 percent of its nominal size) from the free space of its containing aggregate. This space is used to store the volume's metadata. Therefore, upon creation, a FlexVol volume with a space guarantee of volume uses free space from the aggregate equal to its size × 1.005. A newly-created FlexVol volume with a space guarantee of none or file uses free space equal to .005 × its nominal size.
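For example, creating a 100GB FlexVol volume with a space guarantee of volume immediately consumes 100GB × 1.005 = 100.5GB of the aggregate's free space, whereas the same volume created with a guarantee of none or file initially consumes only 0.005 × 100GB = 0.5GB (the metadata).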
There are two types of FlexVolume
  • 32-bit
  • 64-bit
If you want to use Data ONTAP to move data between a 32-bit volume and a 64-bit volume, you must use ndmpcopy or qtree SnapMirror. You cannot use the vol copy command or volume SnapMirror between a 32-bit volume and a 64-bit volume.
A traditional volume is a volume that is contained by a single, dedicated, aggregate. It is tightly coupled with its containing aggregate. No other volumes can get their storage from this containing aggregate.

The only way to increase the size of a traditional volume is to add entire disks to its containing aggregate. You cannot decrease the size of a traditional volume. The smallest possible traditional volume uses all the space on two disks (for RAID4) or three disks (for RAID-DP).

Traditional volumes and their containing aggregates are always of type 32-bit. You cannot grow a traditional volume larger than 16 TB.
You can change many attributes on a volume
  • The name of the volume
  • The size of the volume (assigned only for FlexVol volumes; the size of traditional volumes is determined by the size and number of their disks or array LUNs)
  • A security style, which determines whether a volume can contain files that use UNIX security, files that use NT file system (NTFS) file security, or both types of files
  • Whether the volume uses CIFS oplocks (opportunistic locks)
  • The language of the volume
  • The level of space guarantees (for FlexVol volumes only)
  • Disk space and file limits (quotas, optional)
  • A Snapshot copy schedule (optional)
  • Whether the volume is a root volume
Every volume has a language. The language of the volume determines the character set Data ONTAP uses to display file names and data for that volume. Changing the language of an existing volume can cause some files to become inaccessible.
The language of the root volume has special significance, because it affects or determines the following items:
  • Default language for all volumes
  • System name
  • Domain name
  • Console commands and command output
  • NFS user and group names
  • CIFS share names
  • CIFS user account names
  • Access from CIFS clients that don't support Unicode
  • How configuration files in /etc are read
  • How the home directory definition file is read
  • Qtrees
  • Snapshot copies
  • Volumes
  • Aggregates
The following table displays the possible states for volumes.
Online Read and write access to this volume is allowed.
Restricted Some operations, such as parity reconstruction, are allowed, but data access is not allowed.
Offline No access to the volume is allowed.
There are number of possible status values for volumes
access denied The origin system is not allowing access. (FlexCache volumes only.)
active redirect The volume's containing aggregate is undergoing reallocation (with the -p option specified). Read performance may be reduced while the volume is in this state.
connecting The caching system is trying to connect to the origin system. (FlexCache volumes only.)
copying The volume is currently the target of an active vol copy or snapmirror operation.
degraded The volume's containing aggregate contains at least one degraded RAID group that is not being reconstructed after single disk failure.
double degraded The volume's containing aggregate contains at least one degraded RAID-DP group that is not being reconstructed after double disk failure.
flex The volume is a FlexVol volume.
flexcache The volume is a FlexCache volume.
foreign Disks used by the volume's containing aggregate were moved to the current storage system from another storage system.
growing Disks are being added to the volume's containing aggregate.
initializing The volume's containing aggregate is being initialized.
invalid The volume does not contain a valid file system.
ironing A WAFL consistency check is being performed on the volume's containing aggregate.
lang mismatch The language setting of the origin volume was changed since the caching volume was created. (FlexCache volumes only.)
mirror degraded The volume's containing aggregate is mirrored and one of its plexes is offline or resynchronizing.
mirrored The volume's containing aggregate is mirrored.
needs check A WAFL consistency check needs to be performed on the volume's containing aggregate.
out-of-date The volume's containing aggregate is mirrored and needs to be resynchronized.
partial At least one disk was found for the volume's containing aggregate, but two or more disks are missing.
raid0 The volume's containing aggregate consists of RAID0 (no parity) groups (array LUNs only).
raid4 The volume's containing aggregate consists of RAID4 groups.
raid_dp The volume's containing aggregate consists of RAID-DP groups.
reconstruct At least one RAID group in the volume's containing aggregate is being reconstructed.
redirect The volume's containing aggregate is undergoing aggregate reallocation or file reallocation with the -p option. Read performance to volumes in the aggregate might be degraded.
rem vol changed The origin volume was deleted and re-created with the same name. Re-create the FlexCache volume to reenable the FlexCache relationship. (FlexCache volumes only.)
rem vol unavail The origin volume is offline or has been deleted. (FlexCache volumes only.)
remote nvram err The origin system is experiencing problems with its NVRAM. (FlexCache volumes only.)
resyncing One of the plexes of the volume's containing mirrored aggregate is being resynchronized.
snapmirrored The volume is in a SnapMirror relationship with another volume.
trad The volume is a traditional volume.
unrecoverable The volume is a FlexVol volume that has been marked unrecoverable; contact technical support.
unsup remote vol The origin system is running a version of Data ONTAP that does not support FlexCache volumes or is not compatible with the version running on the caching system. (FlexCache volumes only.)
verifying RAID mirror verification is running on the volume's containing aggregate.
wafl inconsistent The volume or its containing aggregate has been marked corrupted; contact technical support.
Usually, you should leave CIFS oplocks on for all volumes and qtrees. This is the default setting. However, you might turn CIFS oplocks off under certain circumstances.

CIFS oplocks (opportunistic locks) enable the redirector on a CIFS client in certain file-sharing scenarios to perform client-side caching of read-ahead, write-behind, and lock information. A client can then work with a file (read or write it) without regularly reminding the server that it needs access to the file. This improves performance by reducing network traffic.
You might turn CIFS oplocks off on a volume or a qtree under either of the following circumstances:
  • You are using a database application whose documentation recommends that CIFS oplocks be turned off.
  • You are handling critical data and cannot afford even the slightest data loss.
Otherwise, you can leave CIFS oplocks on. I will discuss CIFS and the other file access protocols in detail in another topic.
CIFS oplock options cifs.oplocks.enable on
cifs.oplocks.opendelta 0
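For example, to honour a database vendor's recommendation you could turn oplocks off on just the qtree holding its files; the path below is illustrative.
## disable CIFS oplocks on a single qtree, then re-enable them later
qtree oplocks /vol/vol1/db_qtree disable
qtree oplocks /vol/vol1/db_qtree enable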
Every qtree and volume has a security style setting—NTFS, UNIX, or mixed. The setting determines whether files use Windows NT or UNIX (NFS) security. How you set up security styles depends on what protocols are licensed on your storage system.

Although security styles can be applied to volumes, they are not shown as a volume attribute, and are managed for both volumes and qtrees using the qtree command. The security style for a volume applies only to files and directories in that volume that are not contained in any qtree. The volume security style does not affect the security style for any qtrees in that volume.

The following table describes the three security styles and the effects of changing them.
Security Style Description Effect of changing to this style
NTFS For CIFS clients, security is handled using Windows NTFS ACLs.

For NFS clients, the NFS UID (user id) is mapped to a Windows SID (security identifier) and its associated groups. Those mapped credentials are used to determine file access, based on the NTFS ACL.

Note: To use NTFS security, the storage system must be licensed for CIFS. You cannot use an NFS client to change file or directory permissions on qtrees with the NTFS security style.
If the change is from a mixed qtree, Windows NT permissions determine file access for a file that had Windows NT permissions. Otherwise, UNIX-style (NFS) permission bits determine file access for files created before the change.

Note: If the change is from a CIFS storage system to a multiprotocol storage system, and the /etc directory is a qtree, its security style changes to NTFS.
UNIX Files and directories have UNIX permissions. The storage system disregards any Windows NT permissions established previously and uses the UNIX permissions exclusively.
Mixed Both NTFS and UNIX security are allowed: A file or directory can have either Windows NT permissions or UNIX permissions.

The default security style of a file is the style most recently used to set permissions on that file.
If NTFS permissions on a file are changed, the storage system recomputes UNIX permissions on that file.

If UNIX permissions or ownership on a file are changed, the storage system deletes any NTFS permissions on that file.
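Security styles are set with the qtree command, even for volumes; a short sketch follows, with illustrative paths.
## display current security styles and oplock settings
qtree status
## set the volume itself to UNIX security and one of its qtrees to NTFS
qtree security /vol/vol1 unix
qtree security /vol/vol1/winshare ntfs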
Finally we get to the commands that are used to create and control volumes
General Volume Operations (Traditional and FlexVol)
Displaying vol status
vol status -v (verbose)
vol status -l (display language)
Remove (destroying) vol offline <vol_name>
vol destroy <vol_name>
Rename vol rename <old_name> <new_name>
online vol online <vol_name>
offline vol offline <vol_name>
restrict vol restrict <vol_name>
decompress vol decompress status
vol decompress start <vol_name>
vol decompress stop <vol_name>
Mirroring vol mirror volname [-n][-v victim_volname][-f][-d <disk_list>]
Note:
Mirrors the currently-unmirrored traditional volume volname, either with the specified set of disks or with the contents of another unmirrored traditional volume victim_volname, which will be destroyed in the process.

The vol mirror command fails if either the chosen volname or victim_volname are flexible volumes. Flexible volumes require that any operations having directly to do with their containing aggregates be handled via the new aggr command suite.
Change language vol lang <vol_name> <language>
Change maximum number of files ## Display maximum number of files
maxfiles <vol_name>

## Change maximum number of files
maxfiles <vol_name> <max_num_files>
Change root volume vol options <vol_name> root
Media Scrub vol media_scrub status [volname|plexname|groupname -s disk-name][-v]

Note: Prints the media scrubbing status of the named aggregate, volume, plex, or group. If no name is given, then
status is printed for all RAID groups currently running a media scrub. The status includes a
percent-complete and whether it is suspended.
Look at the following system options:

raid.media_scrub.enable on
raid.media_scrub.rate 600
raid.media_scrub.spares.enable on
FlexVol Volume Operations (only)
Adding (creating) ## Syntax
vol create vol_name [-l language_code] [-s {volume|file|none}] <aggr_name> size{k|m|g|t}
## Create a 200MB volume using the English character set
vol create newvol -l en aggr1 200M

## Create 50GB flexvol volume
vol create vol1 aggr1 50g
additional disks # First find the aggregate the volume uses
vol container flexvol1

## add an additional disk to aggregate aggr1, use "aggr status" to get group name
aggr status aggr1 -r
aggr add aggr1 -g rg0 -d v5.25
Resizing vol size <vol_name> [+|-] n{k|m|g|t}

## Increase flexvol1 volume by 100MB
vol size flexvol1 +100m
Automatically resizing vol autosize vol_name [-m size {k|m|g|t}] [-I size {k|m|g|t}] on

## automatically grow by 10MB increments to a max of 500MB
vol autosize flexvol1 -m 500m -I 10m on
Determine free space and Inodes df -Ah
df -L
df -i
Determine size vol size <vol_name>
automatic free space preservation vol options <vol_name> try_first [volume_grow|snap_delete]
Note:
If you specify volume_grow, Data ONTAP attempts to increase the volume's size before deleting any Snapshot copies. Data ONTAP increases the volume size based on specifications you provided using the vol autosize command.

If you specify snap_delete, Data ONTAP attempts to create more free space by deleting Snapshot copies, before increasing the size of the volume. Data ONTAP deletes Snapshot copies based on the specifications you provided using the snap autodelete command.
display a FlexVol volume's containing aggregate vol container <vol_name>
Cloning vol clone create clone_vol [-s none|file|volume] -b parent_vol [parent_snap]

vol clone split start
vol clone split stop
vol clone split estimate
vol clone split status
Note: The vol clone create command creates a flexible volume named clone_vol on the local filer that is a clone of a "backing" flexible volume named parent_vol. A clone is a volume that is a writable snapshot of another volume. Initially, the clone and its parent share the same storage; more storage space is consumed only as one volume or the other changes.
Copying vol copy start [-S|-s snapshot] <vol_source> <vol_destination>
vol copy status

vol copy abort <operation number>
vol copy throttle <operation_number> <throttle value 10-1>
## Example - Copies the nightly snapshot named nightly.1 on volume vol0 on the local filer to the volume vol0 on the remote filer named toaster1.
vol copy start -s nightly.1 vol0 toaster1:vol0
Note: Copies all data, including snapshots, from one volume to another. If the -S flag is used, the command copies all snapshots in the source volume to the destination volume. To specify a particular snapshot to copy, use the -s flag followed by the name of the snapshot. If neither the -S nor -s flag is used in the command, the filer automatically creates a distinctively-named snapshot at the time the vol copy start command is executed and copies only that snapshot to the destination volume.

The source and destination volumes must either both be traditional volumes or both be flexible volumes. The vol copy command will abort if an attempt is made to copy between different volume types.

The source and destination volumes can be on the same filer or on different filers. If the source or destination volume is on a filer other than the one on which the vol copy start command was entered, specify the volume name in the filer_name:volume_name format.
Traditional Volume Operations (only)
adding (creating) vol|aggr create vol_name -v [-l language_code] [-f] [-m] [-n] [-v] [-t {raid4|raid_dp}] [-r raidsize] [-T disk-type] -R rpm] [-L] disk-list

## create traditional volume using vol command
vol create tradvol1 -l en -t raid4 -d v5.26 v5.27

## Create traditional volume using 20 disks, each RAID group can have 10 disks
vol create vol1 -r 10 20
additional disks vol add volname[-f][-n][-g <raidgroup>]{ ndisks[@size]|-d <disk_list> }

## add another disk to the already existing traditional volume
vol add tradvol1 -d v5.28
splitting aggr split <volname/plexname> <new_volname>
Scrubbing (parity) ## The newer "aggr scrub" command is preferred

vol scrub status [volname|plexname|groupname][-v]

vol scrub start [volname|plexname|groupname][-v]
vol scrub stop [volname|plexname|groupname][-v]

vol scrub suspend [volname|plexname|groupname][-v]
vol scrub resume [volname|plexname|groupname][-v]

Note: Print the status of parity scrubbing on the named traditional volume, plex or RAID group. If no name is provided, the status is given on all RAID groups currently undergoing parity scrubbing. The status includes a percent-complete as well as the scrub’s suspended status (if any).
Verify (mirroring) ## The newer "aggr verify" command is preferred

## verify status
vol verify status

## start a verify operation
vol verify start [ aggrname ]

## stop a verify operation
vol verify stop [ aggrname ]

## suspend a verify operation
vol verify suspend [ aggrname ]

## resume a verify operation
vol verify resume [ aggrname ]
Note: Starts RAID mirror verification on the named online mirrored aggregate. If no name is given, then
RAID mirror verification is started on all online mirrored aggregates. Verification compares the data in
both plexes of a mirrored aggregate. In the default case, all blocks that differ are logged, but no changes
are made.
FlexCache Volumes
A FlexCache volume is a sparsely-populated volume on a local storage system that is backed by a volume on a different, possibly remote, storage system; this volume provides access to data in the remote volume without requiring that all the data be in the sparse volume. This speeds up access to remote data; however, because cached data must be ejected when the data is changed, FlexCache volumes work best for data that does not change often.
When a client requests data from the FlexCache volume, the data is read from the origin system and cached on the FlexCache volume; subsequent requests for that data are then served directly from the FlexCache volume. This increases performance, as the data no longer needs to come across the wire (network). Sometimes a picture best describes things

In order to use FlexCache volumes there are some requirements (a setup sketch follows the list):
  • Data ONTAP version 7.0.5 or later (caching server)
  • A valid FlexCache license (caching server)
  • A valid NFS license with NFS enabled (caching server)
  • Data ONTAP version 7.0.5 or later (origin server)
  • The flexcache.access option set to allow access to FlexCache volumes (origin server)
  • The flexcache.enable option set to on (origin server)
  • The FlexCache volume must be a FlexVol volume, the origin volume can be a FlexVol or a traditional volume.
  • The FlexCache volume and origin volume can be either 32-bit or 64-bit
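A minimal setup sketch is shown below, assuming a caching filer named cache1 and an origin filer named origin1 serving a volume vol1; all names are illustrative.
## on the origin filer - allow the caching filer and enable FlexCache serving
options flexcache.access host=cache1
options flexcache.enable on
## on the caching filer - create the FlexCache volume backed by the origin volume
vol create flexcache1 aggr1 -S origin1:vol1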
You can have a maximum of 100 FlexCache volumes on a storage system. In addition, there are certain features of Data ONTAP that are not available on FlexCache volumes, and others that are not available on volumes that are backing FlexCache volumes.

You cannot use the following Data ONTAP capabilities on FlexCache volumes (these limitations do not apply to the origin volumes):
  • Client access using any protocol other than NFSv2 or NFSv3
  • Client access using IPv6
  • Snapshot copy creation
  • SnapRestore
  • SnapMirror (qtree or volume)
  • SnapVault
  • FlexClone volume creation
  • The ndmp command
  • Quotas
  • Qtrees
  • Volume copy
  • Deduplication
  • Creation of FlexCache volumes in any vFiler unit other than vFiler0
  • Creation of FlexCache volumes in the same aggregate as their origin volume
  • Mounting the FlexCache volume as a read-only volume
As mentioned above, the FlexCache volume must be a FlexVol volume; the origin volume can be a FlexVol or a traditional volume. Most FlexCache volumes are set up to automatically grow, which achieves the best performance. FlexCache volumes reserve 100MB of space by default; this can be changed with the option below, but it is advised to leave it at its default value.
FlexCache default reserve space vol options flexcache_min_reserved
When you put multiple FlexCache volumes in the same aggregate, each FlexCache volume reserves only a small amount of space (as specified by the flexcache_min_reserved volume option). The rest of the space is allocated as needed. This means that a “hot” FlexCache volume (one that is being accessed heavily) is permitted to take up more space, while a FlexCache volume that is not being accessed as often will gradually be reduced in size. When an aggregate containing FlexCache volumes runs out of free space, Data ONTAP randomly selects a FlexCache volume in that aggregate to be truncated. Truncation means that files are ejected from the FlexCache volume until the size of the volume is decreased to a predetermined percentage of its former size.

If you have regular FlexVol volumes in the same aggregate as your FlexCache volumes, and you start to fill up the aggregate, the FlexCache volumes can lose some of their unreserved space (if they are not currently using it). In this case, when the FlexCache volume needs to fetch a new data block and it does not have enough free space to accommodate it, a data block is ejected from one of the FlexCache volumes to make room for the new data block.
You can control how the FlexCache volume functions when connectivity between the caching and origin systems is lost by using the disconnected_mode and acdisconnected volume options. The disconnected_mode volume option and the acdisconnected timeout, combined with the regular TTL timeouts (acregmax, acdirmax, acsymmax, and actimeo), enable you to control the behavior of the FlexCache volume when contact with the origin volume is lost.
Disconnect options disconnected_mode
acdisconnected

## To list all options of a FlexCache volume
vol options <flexcache_name>
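For example, the disconnect behaviour might be tuned like this (a sketch only; flexcache1 is an example volume name, and the exact values should be checked against the documentation):

## keep serving cached data when the origin is unreachable
vol options flexcache1 disconnected_mode soft
## how long cached attributes stay valid while disconnected
vol options flexcache1 acdisconnected 5m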
A file is the basic object in a FlexCache volume, but sometimes only some of a file's data is cached. If the data is cached and valid, a read request for that data is fulfilled without access to the origin volume. When a data block from a specific file is requested from a FlexCache volume, then the attributes of that file are cached, and that file is considered to be cached, even if not all of its data blocks are
present. If any part of a file is changed, the entire file is invalidated and ejected from the cache. For this reason, data sets consisting of one large file that is frequently updated might not be good candidates for a FlexCache implementation.

Cache consistency for FlexCache volumes is achieved by using three techniques:
Delegations You can think of a delegation as a contract between the origin system and the caching volume; as long as the caching volume has the delegation, the file has not changed. Delegations are used only in certain situations.

When data from a file is retrieved from the origin volume, the origin system can give a delegation for that file to the caching volume. Before that file is modified on the origin volume, whether due to a request from another caching volume or due to direct client access, the origin system revokes the delegation for that file from all caching volumes that have that delegation.
Attribute cache timeouts When data is retrieved from the origin volume, the file that contains that data is considered valid in the FlexCache volume as long as a delegation exists for that file. If no delegation exists, the file is considered valid for a certain length of time, specified by the attribute cache timeout.

If a client requests data from a file for which there are no delegations, and the attribute cache timeout has been exceeded, the FlexCache volume compares the file attributes of the cached file with the attributes of the file on the origin system.
Write operation proxy If a client modifies a file that is cached, that operation is passed back, or proxied through, to the origin system, and the file is ejected from the cache.

When the write is proxied, the attributes of the file on the origin volume are changed. This means that when another client requests data from that file, any other FlexCache volume that has that data cached will re-request the data after the attribute cache timeout is reached.
I have only touched lightly on cache consistency and suggest that you check the documentation and the options that are available.
The following table lists the status messages you might see for a FlexCache volume
access denied The origin system is not allowing FlexCache access. Check the setting of the flexcache.access option on the origin system.
connecting The caching system is trying to connect to the origin system.
lang mismatch The language setting of the origin volume was changed since the FlexCache volume was created.
rem vol changed The origin volume was deleted and re-created with the same name. Re-create the FlexCache volume to reenable the FlexCache relationship.
rem vol unavail The origin volume is offline or has been deleted.
remote nvram err The origin system is experiencing problems with its NVRAM.
unsup remote vol The origin system is running a version of Data ONTAP that either does not support FlexCache volumes or is not compatible with the version running on the caching system.
Now for the commands
Display vol status
vol status -v <flexcache_name>

## How to display the options available and what they are set to
vol help options
vol options <flexcache_name>
Display free space df -L
Adding (Create) ## Syntax
vol create <flexcache_name> <aggr> [size{k|m|g|t}] -S origin:source_vol

## Create a FlexCache volume called flexcache1 with autogrow in aggr1 aggregate with the source volume vol1
## on storage netapp1 server
vol create flexcache1 aggr1 -S netapp1:vol1
Removing (destroy) vol offline <flexcache_name>
vol destroy <flexcache_name>
Automatically resizing vol options <flexcache_name> flexcache_autogrow [on|off]
Eject file from cache flexcache eject <path> [-f]
Statistics ## Client stats
flexcache stats -C <flexcache_name>

## Server stats
flexcache stats -S <volume_name> -c <client>

## File stats
flexcache fstat <path>
FlexClone Volumes
FlexClone volumes are writable, point-in-time copies of a parent FlexVol volume. Often, you can manage them as you would a regular FlexVol volume, but they also have some extra capabilities and restrictions.
The following list outlines some key facts about FlexClone volumes:
  • A FlexClone volume is a point-in-time, writable copy of the parent volume. Changes made to the parent volume after the FlexClone volume is created are not reflected in the FlexClone volume.
  • FlexClone volumes are fully functional volumes; you manage them using the vol command, just as you do the parent volume.
  • FlexClone volumes always exist in the same aggregate as their parent volumes.
  • Traditional volumes cannot be used as parent volumes for FlexClone volumes. To create a copy of a traditional volume, you must use the vol copy command, which creates a distinct copy that uses additional storage space equivalent to the amount of storage space used by the volume you copied.
  • FlexClone volumes can themselves be cloned to create another FlexClone volume.
  • FlexClone volumes and their parent volumes share the same disk space for any common data. This means that creating a FlexClone volume is instantaneous and requires no additional disk space (until changes are made to the FlexClone volume or its parent).
  • A FlexClone volume is created with the same space guarantee as its parent. The space guarantee setting is enforced for the new FlexClone volume only if there is enough space in the containing aggregate.
  • A FlexClone volume is created with the same space reservation and fractional reserve settings as its parent.
  • While a FlexClone volume exists, some operations on its parent are not allowed.
  • You can sever the connection between the parent volume and the FlexClone volume. This is called splitting the FlexClone volume. Splitting removes all restrictions on the parent volume and causes the FlexClone to use its own additional disk space rather than sharing space with its parent.
  • Quotas applied to the parent volume are not automatically applied to the FlexClone volume.
  • When a FlexClone volume is created, any LUNs present in the parent volume are present in the FlexClone volume but are unmapped and offline.
The following restrictions apply to parent volumes or their clones:
  • You cannot delete the base Snapshot copy in a parent volume while a FlexClone volume using that Snapshot copy exists. The base Snapshot copy is the Snapshot copy that was used to create the FlexClone volume, and is marked busy, vclone in the parent volume.
  • You cannot perform a volume SnapRestore operation on the parent volume using a Snapshot copy that was taken before the base Snapshot copy was taken.
  • You cannot destroy a parent volume if any clone of that volume exists.
  • You cannot create a FlexClone volume from a parent volume that has been taken offline, although you can take the parent volume offline after it has been cloned.
  • You cannot perform a vol copy command using a FlexClone volume or its parent as the destination volume.
  • If the parent volume is a SnapLock Compliance volume, the FlexClone volume inherits the expiration date of the parent volume at the time of the creation of the FlexClone volume. The FlexClone volume cannot be deleted before its expiration date.
  • There are some limitations on how you use SnapMirror with FlexClone volumes.
A FlexClone volume inherits its initial space guarantee from its parent volume. For example, if you create a FlexClone volume from a parent volume with a space guarantee of volume, then the FlexClone volume's initial space guarantee will be volume also. You can change the FlexClone volume's space guarantee.

For example, suppose that you have a 100-MB FlexVol volume with a space guarantee of volume, with 70 MB used and 30 MB free, and you use that FlexVol volume as a parent volume for a new FlexClone volume. The new FlexClone volume has an initial space guarantee of volume, but it does not require a full 100 MB of space from the aggregate, as it would if you had copied the volume. Instead, the aggregate needs to allocate only 30 MB (100 MB minus 70 MB) of free space to the clone.
If you have multiple clones with the same parent volume and a space guarantee of volume, they all share the same parent space with each other, so the space savings are even greater.
You can identify a shared Snapshot copy by listing the Snapshot copies in the parent volume with the snap list command. Any Snapshot copy that appears as busy, vclone in the parent volume and is also present in the FlexClone volume is a shared Snapshot copy.
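For example, assuming the parent volume is called flexvol1:

## list Snapshot copies in the parent volume; the base Snapshot copy of a
## clone shows up flagged as busy,vclone
snap list flexvol1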
Splitting a FlexClone volume from its parent removes any space optimizations that are currently employed by the FlexClone volume. After the split, both the FlexClone volume and the parent volume require the full space allocation determined by their space guarantees. The FlexClone volume becomes a normal FlexVol volume.
Creating FlexClone files or FlexClone LUNs is highly space-efficient and time-efficient because the cloning operation does not involve physically copying any data. You can create a clone of a file that is present in a FlexVol volume in a NAS environment, and you
can also clone a complete LUN without the need of a backing Snapshot copy in a SAN environment. The cloned copies initially share the same physical data blocks with their parents and occupy negligible extra space in the storage system for their initial metadata.
Display vol status
vol status <flexclone_name> -v

df -Lh
adding (create) ## Syntax
vol clone create clone_name [-s {volume|file|none}] -b parent_name [parent_snap]

## create a flexclone called flexclone1 from the parent flexvol1
vol clone create flexclone1 -b flexvol1
Removing (destroy) vol offline <flexclone_name>
vol destroy <flexclone_name>
splitting ## Determine the free space required to perform the split
vol clone split estimate <flexclone_name>

## Double check you have the space
df -Ah

## Perform the split
vol clone split start <flexclone_name>

## Check up on its status
vol clone split status <flexclone_name>

## Stop the split
vol clone split stop <flexclone_name>
log file /etc/log/clone

The clone log file records the following information:
• Cloning operation ID
• The name of the volume in which the cloning operation was performed
• Start time of the cloning operation
• End time of the cloning operation
• Parent file/LUN and clone file/LUN names
• Parent file/LUN ID
• Status of the clone operation: successful, unsuccessful, or stopped and some other details
I have only briefly touched on FlexCloning so I advise you to take a peek at the documentation for a full description, including the FlexClone file, FlexClone LUN and rapid cloning utility for VMWare.
Space Saving
Data ONTAP has an additional feature called deduplication; it improves physical storage-space utilization by eliminating duplicate data blocks within a FlexVol volume.

Deduplication works at the block level on the active file system, and uses the WAFL block-sharing mechanism. Each block of data has a digital signature that is compared with all other signatures in a data volume. If an exact block match exists, the duplicate block is discarded and its disk space is reclaimed.

You can configure deduplication operations to run automatically or on a schedule. You can deduplicate new and existing data, or only new data, on a FlexVol volume. You do require a license to enable deduplication.

Data ONTAP writes all data to a storage system in 4-KB blocks. When deduplication runs for the first time on a FlexVol volume with existing data, it scans all the blocks in the FlexVol volume and creates a digital fingerprint for each of the blocks. Each of the fingerprints is compared to all other fingerprints within the FlexVol volume. If two fingerprints are found to be identical, a byte-for-byte comparison is done of all data within the block. If the byte-for-byte comparison confirms that the blocks are identical, the pointer to the data block is updated, and the duplicate block is freed.

Deduplication runs on the active file system. Therefore, as additional data is written to the deduplicated volume, fingerprints are created for each new block and written to a change log file. For subsequent deduplication operations, the change log is sorted and merged with the fingerprint file, and the deduplication operation continues with fingerprint comparisons as previously described.
start/restart deduplication operation sis start -s <path>

sis start -s /vol/flexvol1

## Use previous checkpoint
sis start -sp <path>
stop deduplication operation sis stop <path>
schedule deduplication sis config -s <schedule> <path>

sis config -s mon-fri@23 /vol/flexvol1

Note: schedule lists the days and hours of the day when deduplication runs. The schedule can be of the following forms:
  • day_list[@hour_list]
    If hour_list is not specified, deduplication runs at midnight on each scheduled day.
  • hour_list[@day_list]
    If day_list is not specified, deduplication runs every day at the specified hours.
  • -
    A hyphen (-) disables deduplication operations for the specified FlexVol volume.
enabling sis on <path>
disabling sis off <path>
status sis status -l <path>
Display saved space df -s <path>
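Putting the above together, a minimal end-to-end sketch for an example volume /vol/flexvol1 (assuming the deduplication license is already installed):

## enable deduplication on the volume
sis on /vol/flexvol1
## scan and deduplicate the existing data (-s), not just new writes
sis start -s /vol/flexvol1
## check progress
sis status -l /vol/flexvol1
## display the space saved
df -s /vol/flexvol1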
Again, I have only briefly touched on this subject; for more details check out the documentation.
QTrees

Qtrees enable you to partition your volumes into smaller segments that you can manage individually. You can set a qtree's size or security style, back it up, and restore it.
You use qtrees to partition your data. You might create qtrees to organize your data, or to manage one or more of the following factors: quotas, backup strategy, security style, and CIFS oplocks setting.

The following list describes examples of qtree usage strategies:
  • Quotas - You can limit the size of the data used by a particular project, by placing all of that project's files into a qtree and applying a tree quota to the qtree.
  • Backups -You can use qtrees to keep your backups more modular, to add flexibility to backup schedules, or to limit the size of each backup to one tape.
  • Security style -If you have a project that needs to use NTFS-style security, because the members of the project use Windows files and applications, you can group the data for that project in a qtree and set its security style to NTFS, without requiring that other projects also use the same security style.
  • CIFS oplocks settings - If you have a project using a database that requires CIFS oplocks to be off, you can set CIFS
    oplocks to Off for that project's qtree, while allowing other projects to retain CIFS oplocks.
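As a quick sketch tying these strategies together (proj1 and /vol/vol1 are example names; the individual commands are covered below and in the quotas section):

## create a qtree for the project
qtree create /vol/vol1/proj1
## give it NTFS-style security without affecting the rest of the volume
qtree security /vol/vol1/proj1 ntfs
## turn CIFS oplocks off for just this qtree
qtree oplocks /vol/vol1/proj1 disable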
The table below compares qtrees with FlexVol and traditional volumes

Functionality                                            | QTree                                                        | FlexVol Volume | Traditional Volume
Enables organizing user data                             | Yes                                                          | Yes            | Yes
Enables grouping users with similar needs                | Yes                                                          | Yes            | Yes
Accepts a security style                                 | Yes                                                          | Yes            | Yes
Accepts oplocks configuration                            | Yes                                                          | Yes            | Yes
Can be backed up and restored as a unit using SnapMirror | Yes                                                          | Yes            | Yes
Can be backed up and restored as a unit using SnapVault  | Yes                                                          | No             | No
Can be resized                                           | Yes (using quota limits)                                     | Yes            | Yes
Supports Snapshot copies                                 | No (qtree data can be extracted from volume Snapshot copies) | Yes            | Yes
Supports quotas                                          | Yes                                                          | Yes            | Yes
Can be cloned                                            | No (except as part of a FlexVol volume)                      | Yes            | No
Maximum number allowed                                   | 4,995 per volume                                             | 500 per system | 100 per system
Now for the commands
Display qtree status [-i] [-v]

Note:
The -i option includes the qtree ID number in the display.
The -v option includes the owning vFiler unit, if the MultiStore license is enabled.
adding (create) ## Syntax - by default wafl.default_qtree_mode option is used
qtree create path [-m mode]

## create a qtree named news in the /vol/users volume using 770 as permissions
qtree create /vol/users/news -m 770
Remove rm -Rf <directory>
Rename mv <old_name> <new_name>
convert a directory into a qtree directory ## Move the directory to a different directory
mv /n/joel/vol1/dir1 /n/joel/vol1/olddir

## Create the qtree
qtree create /n/joel/vol1/dir1

## Move the contents of the old directory back into the new QTree
mv /n/joel/vol1/olddir/* /n/joel/vol1/dir1

## Remove the old directory name
rmdir /n/joel/vol1/olddir
stats qtree stats [-z] [vol_name]

Note:
-z = zero stats
CIFS Oplocks
CIFS oplocks reduce network traffic and improve storage system performance. However, in some situations, you might need to disable them. You can disable CIFS oplocks for the entire storage system or for a specific volume or qtree.
Usually, you should leave CIFS oplocks on for all volumes and qtrees. This is the default setting. However, you might turn CIFS oplocks off under certain circumstances. CIFS oplocks (opportunistic locks) enable the redirector on a CIFS client in certain file-sharing scenarios to perform client-side caching of read-ahead, write-behind, and lock information. A client can then work with a file (read or write it) without regularly reminding the server that it needs access to the file. This improves performance by reducing network traffic.
You might turn CIFS oplocks off on a volume or a qtree under either of the following circumstances:
  • You are using a database application whose documentation recommends that CIFS oplocks be turned off.
  • You are handling critical data and cannot afford even the slightest data loss
Otherwise, you can leave CIFS oplocks on.
Enabling/Disabling for entire storage options cifs.oplocks.enable on
options cifs.oplocks.enable off
Enabling/Disabling for qtrees qtree oplocks /vol/vol2/proj enable
qtree oplocks /vol/vol2/proj disable
Security Styles
You might need to change the security style of a new volume or qtree. Additionally, you might need to accommodate other users; for example, if you had an NTFS qtree and subsequently needed to include UNIX files and users, you could change the security style of that qtree from NTFS to mixed.
Make sure there are no CIFS users connected to shares on the qtree whose security style you want to change. If there are, you cannot change UNIX security style to mixed or NTFS, and you cannot change NTFS or mixed security style to UNIX.
Change the security style ## Syntax
qtree security path {unix | ntfs | mixed}
## Change the security style of /vol/users/docs to mixed
qtree security /vol/users/docs mixed
Also see volumes above for more information about security styles
Quotas
Quotas provide a way to restrict or track the disk space and number of files used by a user, group, or qtree. You specify quotas using the /etc/quotas file. Quotas are applied to a specific volume or qtree.
You can use quotas to limit resource usage, to provide notification when resource usage reaches specific levels, or simply to track resource usage.

You specify a quota for the following reasons:
  • To limit the amount of disk space or the number of files that can be used by a user or group, or that can be contained by a qtree
  • To track the amount of disk space or the number of files used by a user, group, or qtree, without imposing a limit
  • To warn users when their disk usage or file usage is high
Quotas can cause Data ONTAP to send a notification (soft quota) or to prevent a write operation from succeeding (hard quota) when quotas are exceeded. When Data ONTAP receives a request to write to a volume, it checks to see whether quotas are activated for that volume. If so, Data ONTAP determines whether any quota for that volume (and, if the write is to a qtree, for that qtree) would be exceeded by performing the write operation. If any hard quota would be exceeded, the write operation fails, and a quota notification is sent. If any soft quota would be exceeded, the write operation succeeds, and a quota notification is sent.
Quotas configuration file /etc/quotas
Example quota file
##                                           hard limit | thres |soft limit
##Quota Target       type                    disk  files| hold  |disk  file
##-------------      -----                   ----  -----  ----- ----- ----
*                    tree@/vol/vol0           -     -      -     -     -     # monitor usage on all qtrees in vol0
/vol/vol2/qtree      tree                    1024K  75k    -     -     -     # enforce qtree quota using kb
tinh                 user@/vol/vol2/qtree1   100M   -      -     -     -     # enforce users quota in specified qtree
dba                  group@/vol/ora/qtree1   100M   -      -     -     -     # enforce group quota in specified qtree

# * = default user/group/qtree 
# - = placeholder, no limit enforced, just enable stats collection

Note: you have lots of permutations, so checkout the documentation    
Displaying quota report [<path>]
Activating quota on [-w] <vol_name>

Note:
-w = return only after the entire quotas file has been scanned
Deactivating quota off [-w] <vol_name>
Reinitializing quota off [-w] <vol_name>
quota on [-w] <vol_name>
Resizing quota resize <vol_name>

Note: this command rereads the quota file
Deleting edit the quota file

quota resize <vol_name>
log messaging quota logmsg
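As a worked sketch using the example quota file above (vol2 is the volume containing the qtree): after editing /etc/quotas, activate or resize quotas on the volume:

## activate quotas after adding new quota targets
quota on vol2
## after changing limits on existing targets, a resize is sufficient
quota resize vol2
## report current usage against the limits
quota report /vol/vol2/qtree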

Block Access Management
In my NetApp introduction section I spoke about two ways of accessing the NetApp filer: either file-based access or block-based access
File-Based Protocol NFS, CIFS, FTP, TFTP, HTTP
Block-Based Protocol Fibre Channel (FC), Fibre channel over Ethernet (FCoE), Internet SCSI (iSCSI)
In this section I will cover the following common block-based protocols; if others are not covered here, please check the documentation
  • iSCSI
  • FC
I have another web page to cover File Access : NFS, CIFS, FTP, HTTP
Block Based Access
In iSCSI and FC networks, storage systems are targets that have storage target devices, which are referred to as LUNs, or logical units. Using the Data ONTAP operating system, you configure the storage by creating LUNs. The LUNs are accessed by hosts, which are initiators in the storage network. To connect to iSCSI networks, hosts can use standard Ethernet network adapters (NICs), TCP offload engine (TOE) cards with software initiators, or dedicated iSCSI HBAs. To connect to FC networks, hosts require Fibre Channel host bus adapters (HBAs).
Data ONTAP 7.2 added support for the Asymmetric Logical Unit Access (ALUA) features of SCSI, also known as SCSI Target Port Groups or Target Port Group Support. ALUA defines a standard set of SCSI commands for discovering and managing multiple paths to LUNs on Fibre Channel and iSCSI SANs. ALUA allows the initiator to query the target about path attributes, such as primary path and secondary path. It also allows the target to communicate events back to the initiator. As a result, multipathing software can be developed to support any array. Proprietary SCSI commands are no longer required as long as the host supports the ALUA standard. For iSCSI SANs, ALUA is supported only with Solaris hosts running the iSCSI Solaris Host Utilities 3.0 for Native OS.
iSCSI Introduction
The iSCSI protocol is a licensed service on the storage system that enables you to transfer block data to hosts using the SCSI protocol over TCP/IP. The iSCSI protocol standard is defined by RFC 3720. In an iSCSI network, storage systems are targets that have storage target devices, which are referred to as LUNs (logical units). A host with an iSCSI host bus adapter (HBA), or running iSCSI initiator software, uses the iSCSI protocol to access LUNs on a storage system. The iSCSI protocol is implemented over the storage system’s standard gigabit Ethernet interfaces using a software driver. The connection between the initiator and target uses a standard TCP/IP network. No special network configuration is needed to support iSCSI traffic. The network can be a dedicated TCP/IP network, or it can be your regular public network. The storage system listens for iSCSI connections on TCP port 3260.
In an iSCSI network, there are two types of nodes: targets and initiators
Targets Storage Systems (NetApp, EMC)
Initiators Hosts (Unix, Linux, Windows)
Storage systems and hosts can be direct-attached or connected through Ethernet switches. Both direct-attached and switched configurations use Ethernet cable and a TCP/IP network for connectivity. You can of course use existing networks but if possible try to make this a dedicated network for the storage system, as it will increase performance.
Every iSCSI node must have a node name. The two formats, or type designators, for iSCSI node names are iqn and eui. The storage system always uses the iqn-type designator. The initiator can use either the iqn-type or eui-type designator.
iqn The iqn-type designator is a logical name that is not linked to an IP address.

It is based on the following components:
  • The type designator itself, iqn, followed by a period (.)
  • The date when the naming authority acquired the domain name, followed by a period
  • The name of the naming authority, optionally followed by a colon (:)
  • A unique device name
The format is:
               iqn.yyyy-mm.backward-naming-authority:unique-device-name

Note:
yyyy-mm = the year and month in which the naming authority acquired the domain name.
backward-naming-authority = the reverse domain name of the entity responsible for naming this device.
unique-device-name = a free-format unique name for this device assigned by the naming authority.
eui The eui-type designator is based on the type designator, eui, followed by a period, followed by sixteen hexadecimal digits.

The format is:
                                     eui.0123456789abcdef
Storage system node name Each storage system has a default node name based on a reverse domain name and the serial number of the storage system's non-volatile RAM (NVRAM) card.

The node name is displayed in the following format:
                                  
                                     iqn.1992-08.com.netapp:sn.serial-number

The following example shows the default node name for a storage system with the serial number 12345678:

                                     iqn.1992-08.com.netapp:sn.12345678
The storage system checks the format of the initiator node name at session login time. If the initiator node name does not comply with storage system node name requirements, the storage system rejects the session.
A target portal group is a set of network portals within an iSCSI node over which an iSCSI session is conducted. In a target, a network portal is identified by its IP address and listening TCP port. For storage systems, each network interface can have one or more IP addresses and therefore one or more network portals. A network interface can be an Ethernet port, virtual local area network (VLAN), or virtual interface (vif).

The assignment of target portals to portal groups is important for two reasons:
  • The iSCSI protocol allows only one session between a specific iSCSI initiator port and a single portal group on the target.
  • All connections within an iSCSI session must use target portals that belong to the same portal group.
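You can inspect the target portal groups from the console; as a sketch (assuming the 7-mode iscsi tpgroup subcommand is available; check the documentation for your Data ONTAP release):

## list the target portal groups and the interfaces they contain
iscsi tpgroup show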
The Internet Storage Name Service (iSNS) is a protocol that enables automated discovery and management of iSCSI devices on a TCP/IP storage network. An iSNS server maintains information about active iSCSI devices on the network, including their IP addresses, iSCSI node names, and portal groups. You obtain an iSNS server from a third-party vendor. If you have an iSNS server on your network, and it is configured and enabled for use by both the initiator and the storage system, the storage system automatically registers its IP address, node name, and portal groups with the iSNS server when the iSNS service is started. The iSCSI initiator can query the iSNS server to discover the storage system as a target device. If you do not have an iSNS server on your network, you must manually configure each target to be visible to the host.
The Challenge Handshake Authentication Protocol (CHAP) enables authenticated communication between iSCSI initiators and targets. When you use CHAP authentication, you define CHAP user names and passwords on both the initiator and the storage system. During the initial stage of an iSCSI session, the initiator sends a login request to the storage system to begin the session. The login request includes the initiator’s CHAP user name and CHAP algorithm. The storage system responds with a CHAP challenge. The initiator provides a CHAP response. The storage system verifies the response and authenticates the initiator. The CHAP password is used to compute the response.
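As a sketch of setting this up (the initiator name and password are examples, and you should verify the exact iscsi security syntax in the documentation):

## require CHAP for a specific initiator (example initiator name and inbound password/name)
iscsi security add -i iqn.1991-05.com.microsoft:xblade -s CHAP -p inpassword -n inname
## view the current authentication settings
iscsi security show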
During an iSCSI session, the initiator and the target communicate over their standard Ethernet interfaces, unless the host has an iSCSI HBA. The storage system appears as a single iSCSI target node with one iSCSI node name. For storage systems with a MultiStore license enabled, each vFiler unit is a target with a different node name. On the storage system, the interface can be an Ethernet port, virtual network interface (vif), or a virtual LAN (VLAN) interface. Each interface on the target belongs to its own portal group by default. This enables an initiator port to conduct simultaneous iSCSI sessions on the target, with one session for each portal group. The storage system supports up to 1,024 simultaneous sessions, depending on its memory capacity. To determine whether your host’s initiator software or HBA can have multiple sessions with one storage system, see your host OS or initiator documentation. You can change the assignment of target portals to portal groups as needed to support multiconnection sessions, multiple sessions, and multipath I/O. Each session has an Initiator Session ID (ISID), a number that is determined by the initiator.
FC Introduction
FC is a licensed service on the storage system that enables you to export LUNs and transfer block data to hosts using the SCSI protocol over a Fibre Channel fabric. In a FC network, nodes include targets, initiators, and switches. Nodes register with the Fabric Name Server when they are connected to a FC switch.
Targets Storage Systems (NetApp, EMC)
Initiators Hosts (Unix, Linux, Windows)
Storage systems and hosts have adapters so they can be directly connected to each other or to FC switches with optical cable. For switch or storage system management, they might be connected to each other or to TCP/IP switches with Ethernet cable. When a node is connected to the FC SAN, it registers each of its ports with the switch’s Fabric Name Server service, using a unique identifier. Each FC node is identified by a worldwide node name (WWNN) and a worldwide port name (WWPN). WWPNs identify each port on an adapter.

WWPNs are used for the following purposes:
  • Creating an initiator group - The WWPNs of the host’s HBAs are used to create an initiator group (igroup). An igroup is used to control host access to specific LUNs. You create an igroup by specifying a collection of WWPNs of initiators in an FC network. When you map a LUN on a storage system to an igroup, you grant all the initiators in that group access to that LUN. If a host’s WWPN is not in an igroup that is mapped to a LUN, that host does not have access to the LUN. This means that the LUNs do not appear as disks on that host. You can also create port sets to make a LUN visible only on specific target ports. A port set consists of a group of FC target ports. You bind a port set to an igroup. Any host in the igroup can access the LUNs only by connecting to the target ports in the port set.
  • Uniquely identifying a storage system’s HBA target ports -The storage system’s WWPNs uniquely identify each target port on the system. The host operating system uses the combination of the WWNN and WWPN to identify storage system adapters and host target IDs. Some operating systems require persistent binding to ensure that the LUN appears at the same target ID on the host.
When the FCP service is first initialized, it assigns a WWNN to a storage system based on the serial number of its NVRAM adapter. The WWNN is stored on disk. Each target port on the HBAs installed in the storage system has a unique WWPN. Both the WWNN and the WWPN are a 64-bit address represented in the following format: nn:nn:nn:nn:nn:nn:nn:nn, where n represents a hexadecimal value. The storage system also has a unique system serial number that you can view by using the sysconfig command. The system serial number is a unique seven-digit identifier that is assigned when the storage system is manufactured. You cannot modify this serial number. Some multipathing software products use the system serial number together with the LUN serial number to identify a LUN.
You use the fcp show initiator command to see all of the WWPNs, and any associated aliases, of the FC initiators that have logged on to the storage system. Data ONTAP displays the WWPN as Portname. To know which WWPNs are associated with a specific host, see the FC Host Utilities documentation for your host. These documents describe commands supplied by the Host Utilities or the vendor of the initiator, or methods that show the mapping between the host and its WWPN. For example, for Windows hosts, use the lputilnt, HBAnywhere, or SANsurfer applications, and for UNIX hosts, use the sanlun command.
Getting the Storage Ready
I have discussed in detail how to create the following in my disk administration section:
  • Aggregates
  • Plexes
  • FlexVol and Traditional Volumes
  • QTrees
  • Files
  • LUNs
Here's a quick recap
  • A plex is a collection of one or more RAID groups that together provide the storage for one or more Write Anywhere File Layout (WAFL) file system volumes. Data ONTAP uses plexes as the unit of RAID-level mirroring when the SyncMirror software is enabled.
  • An aggregate is a collection of one or two plexes, depending on whether you want to take advantage of RAID-level mirroring. If the aggregate is unmirrored, it contains a single plex. Aggregates provide the underlying physical storage for traditional and FlexVol volumes.
  • A traditional volume is directly tied to the underlying aggregate and its properties. When you create a traditional volume, Data ONTAP creates the underlying aggregate based on the properties you assign with the vol create command, such as the disks assigned to the RAID group and RAID-level protection.
  • A FlexVol volume is a volume that is loosely coupled to its containing aggregate. A FlexVol volume can share its containing aggregate with other FlexVol volumes. Thus, a single aggregate can be the shared source of all the storage used by all the FlexVol volumes contained by that aggregate.
Once you set up the underlying aggregate, you can create, clone, or resize FlexVol volumes without regard to the underlying physical storage. You do not have to manipulate the aggregate frequently. You use either traditional or FlexVol volumes to organize and manage system and user data. A volume can hold qtrees and LUNs. A qtree is a subdirectory of the root directory of a volume. You can use qtrees to subdivide a volume in order to group LUNs. You create LUNs in the root of a volume (traditional or flexible) or in the root of a qtree, with the exception of the root volume. Do not create LUNs in the root volume because it is used by Data ONTAP for system administration. The default root volume is /vol/vol0.
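As a quick sketch of that hierarchy (all names and sizes below are examples):

## aggregate -> FlexVol volume -> qtree -> LUN
aggr create aggr1 14
vol create dbvol aggr1 100g
qtree create /vol/dbvol/oracle
lun create -s 20g -t linux /vol/dbvol/oracle/lun0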
Autodelete is a volume-level option that allows you to define a policy for automatically deleting Snapshot copies based on a definable threshold. Using autodelete is recommended in most SAN configurations.

You can set that threshold, or trigger, to automatically delete Snapshot copies when:
  • The volume is nearly full
  • The snap reserve space is nearly full
  • The overwrite reserved space is full
Two other things that you need to be aware of are Space Reservation and Fractional Reserve
Space Reservation When space reservation is enabled for one or more LUNs, Data ONTAP reserves enough space in the volume (traditional or FlexVol) so that writes to those LUNs do not fail because of a lack of disk space.
Fractional Reserve Fractional reserve is a volume option that enables you to determine how much space Data ONTAP reserves for Snapshot copy overwrites for LUNs, as well as for space-reserved files when all other space in the volume is used.
When provisioning storage in a SAN environment, there are several best practices to consider. Selecting and following the best practice that is most appropriate for you is critical to ensuring your systems run smoothly.

There are generally two ways to provision storage in a SAN environment:
  • Using the autodelete feature
  • Using fractional reserve
In Data ONTAP, fractional reserve is set to 100 percent and autodelete is disabled by default. However, in a SAN environment, it usually makes more sense to use autodelete (and sometimes autosize).
When using fractional reserve, you need to reserve enough space for the data inside the LUN, fractional reserve, and snapshot data, or: X + X + Delta. For example, you might need to reserve 50 GB for the LUN, 50 GB when fractional reserve is set to 100%, and 50 GB for snapshot data, or a volume of 150 GB. If fractional reserve is set to a percentage other than 100%, then the calculation becomes more complex.

In contrast, when using autodelete, you need only calculate the amount of space required for the LUN and snapshot data, or X + Delta. Since you can configure the autodelete setting to automatically delete older snapshots when space is required for data, you need not worry about running out of space for data.

For example, if you have a 100 GB volume, you might allocate 50 GB for a LUN, and the remaining 50 GB is used for snapshot data. Or in that same 100 GB volume, you might reserve 30 GB for the LUN, and 70 GB is then allocated for snapshots. In both cases, you can configure snapshots to be automatically deleted to free up space for data, so fractional reserve is unnecessary.
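A sketch of the autodelete approach (vol1 is an example volume name; verify the option names and values for your release):

## automatically delete old Snapshot copies when the volume is nearly full
snap autodelete vol1 trigger volume
snap autodelete vol1 on
## optionally let the volume grow into free aggregate space as well
vol autosize vol1 on
## with autodelete protecting overwrites, fractional reserve can be reduced
vol options vol1 fractional_reserve 0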
LUN's, iGroups, LUN maps
When you create a LUN, there are a number of items you need to know:
  • Path name
  • Name
  • Multiprotocol type
  • Size
  • Description
  • Identification number
  • Space reservation setting
The path name of a LUN must be at the root level of the qtree or volume in which the LUN is located, for example /vol/database/lun1. Do not create LUNs in the root volume; the default root volume is /vol/vol0.
The name of the LUN is case-sensitive and can contain 1 to 256 characters. You cannot use spaces. LUN names must use only specific letters and characters. LUN names can contain only the letters A through Z, a through z, numbers 0 through 9, hyphen (“-”), underscore (“_”), left brace (“{”), right brace (“}”), and period (“.”).
The LUN Multiprotocol Type, or operating system type, specifies the OS of the host accessing the LUN. It also determines the layout of data on the LUN, the geometry used to access that data, and the minimum and maximum size of the LUN. The LUN Multiprotocol Type values are solaris, solaris_efi, windows, windows_gpt, windows_2008 , hpux, aix, linux, netware, xen, hyper_v, and vmware. When you create a LUN, you must specify the LUN type. Once the LUN is created, you cannot modify the LUN host operating system type.
You specify the size of a LUN in bytes or by using specific multiplier suffixes (k, m, g, t).
The LUN description is an optional attribute you use to specify additional information about the LUN.
A LUN must have a unique identification number (ID) so that the host can identify and access the LUN. You map the LUN ID to an igroup so that all the hosts in that igroup can access the LUN. If you do not specify a LUN ID, Data ONTAP automatically assigns one.
When you create a LUN by using the lun setup command or FilerView, you specify whether you want to enable space reservations. When you create a LUN using the lun create command, space reservation is automatically turned on.
Initiator groups (igroups) are tables of FCP host WWPNs or iSCSI host nodenames. You define igroups and map them to LUNs to control which initiators have access to LUNs. Typically, you want all of the host’s HBAs or software initiators to have access to a LUN. If you are using multipathing software or have clustered hosts, each HBA or software initiator of each clustered host needs redundant paths to the same LUN. You can create igroups that specify which initiators have access to the LUNs either before or after you create LUNs, but you must create igroups before you can map a LUN to an igroup. Initiator groups can have multiple initiators, and multiple igroups can have the same initiator. However, you cannot map a LUN to multiple igroups that have the same initiator.
Host with HBA WWPNs                                                            | igroup       | WWPNs added to igroup                             | LUNs mapped to igroup
Linux1, single-path (one HBA): 10:00:00:00:c9:2b:7c:8f                         | linux-group0 | 10:00:00:00:c9:2b:7c:8f                           | /vol/vol2/lun0
Linux2, multipath (two HBAs): 10:00:00:00:c9:2b:3e:3c, 10:00:00:00:c9:2b:09:3c | linux-group1 | 10:00:00:00:c9:2b:3e:3c, 10:00:00:00:c9:2b:09:3c  | /vol/vol2/lun1
The igroup name is a case-sensitive name that must satisfy several requirements. Contains 1 to 96 characters. Spaces are not allowed. Can contain the letters A through Z, a through z, numbers 0 through 9, hyphen (“-”), underscore (“_”), colon (“:”), and period (“.”). Must start with a letter or number.
The igroup type can be either -i for iSCSI or -f for FC.
The ostype indicates the type of host operating system used by all of the initiators in the igroup. All initiators in an igroup must be of the same ostype. The ostypes of initiators are solaris, windows, hpux, aix, netware, xen, hyper_v, vmware, and linux. You must select an ostype for the igroup.
Finally we get to LUN mapping which is the process of associating a LUN with an igroup. When you map the LUN to the igroup, you grant the initiators in the igroup access to the LUN. You must map a LUN to an igroup to make the LUN accessible to the host. Data ONTAP maintains a separate LUN map for each igroup to support a large number of hosts and to enforce access control. Specify the path name of the LUN to be mapped. Specify the name of the igroup that contains the hosts that will access the LUN.
Assign a number for the LUN ID, or accept the default LUN ID. Typically, the default LUN ID begins with 0 and increments by 1 for each additional LUN as it is created. The host associates the LUN ID with the location and path name of the LUN. The range of valid LUN ID numbers depends on the host.
There are two ways to set up a LUN
LUN setup command ontap1> lun setup
Note: the "lun setup" will display prompts that lead you through the setup process
Good old fashioned commandline # Create the LUN
lun create -s 100m -t windows /vol/tradvol1/lun1
# Create the igroup; you must obtain the node's identifier (my home pc is: iqn.1991-05.com.microsoft:xblade)
igroup create -i -t windows win_hosts_group1 iqn.1991-05.com.microsoft:xblade
# Map the LUN to the igroup
lun map /vol/tradvol1/lun1 win_hosts_group1 0
The full set of commands for both lun and igroup are below
LUN configuration
Display lun show
lun show -m
lun show -v
Initialize/Configure LUNs, mapping lun setup
Note: follow the prompts to create and configure LUN's
Create lun create -s 100m -t windows /vol/tradvol1/lun1
Destroy lun destroy [-f] /vol/tradvol1/lun1
Note: the "-f" will force the destroy
Resize lun resize <lun path> <size>
lun resize /vol/tradvol1/lun1 75m
Restart block protocol access lun online /vol/tradvol1/lun1
Stop block protocol access lun offline /vol/tradvol1/lun1
Map a LUN to an initiator group lun map /vol/tradvol1/lun1 win_hosts_group1 0
lun map -f /vol/tradvol1/lun2 linux_host_group1 1

lun show -m
Note: use "-f" to force the mapping
Remove LUN mapping lun show -m
lun offline /vol/tradvol1
lun unmap /vol/tradvol1/lun1 win_hosts_group1 0
Displays or zeros read/write statistics for LUN lun stats /vol/tradvol1/lun1
Comments lun comment /vol/tradvol1/lun1 "10GB for payroll records"
Check all lun/igroup/fcp settings for correctness lun config_check -v
Manage LUN cloning # Create a Snapshot copy of the volume containing the LUN to be cloned by entering the following command
snap create tradvol1 tradvol1_snapshot_08122010
# Create the LUN clone by entering the following command
lun clone create /vol/tradvol1/clone_lun1 -b /vol/tradvol1/lun1 tradvol1_snapshot_08122010
Show the maximum possible size of a LUN on a given volume or qtree lun maxsize /vol/tradvol1
Move (rename) LUN lun move /vol/tradvol1/lun1 /vol/tradvol1/windows_lun1
Display/change LUN serial number lun serial -x /vol/tradvol1/lun1
Manage LUN properties lun set reservation /vol/tradvol1/hpux/lun0
Configure NAS file-sharing properties lun share <lun_path> { none | read | write | all }
Manage LUN and snapshot interactions lun snap usage -s <volume> <snapshot>
igroup configuration
display igroup show
igroup show -v
igroup show iqn.1991-05.com.microsoft:xblade
create (iSCSI) igroup create -i -t windows win_hosts_group1 iqn.1991-05.com.microsoft:xblade
create (FC) igroup create -f -t windows win_hosts_group1 10:00:00:00:c9:2b:7c:8f
destroy igroup destroy win_hosts_group1
add initiators to an igroup igroup add win_hosts_group1 iqn.1991-05.com.microsoft:laptop
remove initiators from an igroup igroup remove win_hosts_group1 iqn.1991-05.com.microsoft:laptop
rename igroup rename win_hosts_group1 win_hosts_group2
set O/S type igroup set win_hosts_group1 ostype windows
Enabling ALUA igroup set win_hosts_group1 alua yes

Note: ALUA defines a standard set of SCSI commands for discovering and managing multiple paths to LUNs on Fibre Channel and iSCSI SANs. ALUA enables the initiator to query the target about path attributes, such as primary path and secondary path. It also enables the target to communicate events back to the initiator. As long as the host supports the ALUA standard, multipathing software can be developed to support any array. Proprietary SCSI commands are no longer required.
There are a number of iSCSI commands that you can use. I am not going to discuss iSCSI security (CHAP or RADIUS); I will leave you to look at the documentation on this advanced topic.
display iscsi initiator show
iscsi session show [-t]
iscsi connection show -v
iscsi security show
status iscsi status
start iscsi start
stop iscsi stop
stats iscsi stats
nodename iscsi nodename

# to change the name
iscsi nodename <new name>
interfaces iscsi interface show

iscsi interface enable e0b
iscsi interface disable e0b
portals iscsi portal show

Note: Use the iscsi portal show command to display the target IP addresses of the storage system. The storage system's target IP addresses are the addresses of the interfaces used for the iSCSI protocol
accesslists iscsi interface accesslist show

Note: you can add or remove interfaces from the list
We have discussed how to set up a server using iSCSI, but what if the server uses FC to connect to the NetApp?
A port set consists of a group of FC target ports. You bind a port set to an igroup, to make the LUN available only on a subset of the storage system's target ports. Any host in the igroup can access the LUNs only by connecting to the target ports in the port set. If an igroup is not bound to a port set, the LUNs mapped to the igroup are available on all of the storage system’s FC target ports. The igroup controls which initiators LUNs are exported to. The port set limits the target ports on which those initiators have access. You use port sets for LUNs that are accessed by FC hosts only. You cannot use port sets for LUNs accessed by iSCSI hosts.
All ports on both systems in the HA pairs are visible to the hosts. You use port sets to fine-tune which ports are available to specific hosts and limit the amount of paths to the LUNs to comply with the limitations of your multipathing software. When using port sets, make sure your port set definitions and igroup bindings align with the cabling and zoning requirements of your configuration
Port Sets
display portset show
portset show portset1
igroup show linux-igroup1
create portset create -f portset1 SystemA:4b
destroy igroup unbind linux-igroup1 portset1
portset destroy portset1
add portset add portset1 SystemB:4b
remove portset remove portset1 SystemB:4b
binding igroup bind linux-igroup1 portset1
igroup unbind linux-igroup1 portset1
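Putting it together, a short sketch that restricts an igroup's LUNs to a single target port (SystemA:4b and linux-igroup1 are the example names used above):

## create an FC port set containing one target port
portset create -f portset1 SystemA:4b
## bind the igroup to the port set; its LUNs are now visible only on that port
igroup bind linux-igroup1 portset1
## verify the binding
portset show portset1
igroup show linux-igroup1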
FCP service
display fcp show adapter -v
daemon status fcp status
start fcp start
stop fcp stop
stats fcp stats -i interval [-c count] [-a | adapter]
fcp stats -i 1
target expansion adapters fcp config <adapter> [down|up]

fcp config 4a down
target adapter speed fcp config <adapter> speed [auto|1|2|4|8]
fcp config 4a speed 8
set WWPN # fcp portname set [-f] adapter wwpn
fcp portname set -f 1b 50:0a:09:85:87:09:68:ad
swap WWPN # fcp portname swap [-f] adapter1 adapter2
fcp portname swap -f 1a 1b
change WWNN # display nodename
fcp nodename

fcp nodename [-f] nodename
fcp nodename 50:0a:09:80:82:02:8d:ff
Note: The WWNN of a storage system is generated by a serial number in its NVRAM, but it is stored on disk. If you ever replace a storage system chassis and reuse it in the same Fibre Channel SAN, it is possible, although extremely rare, that the WWNN of the replaced storage system is duplicated. In this unlikely event, you can change the WWNN of the storage system.
WWPN Aliases - display fcp wwpn-alias show
fcp wwpn-alias show -a my_alias_1
fcp wwpn-alias show -w 10:00:00:00:c9:30:80:2f
WWPN Aliases - create fcp wwpn-alias set [-f] alias wwpn

fcp wwpn-alias set my_alias_1 10:00:00:00:c9:30:80:2f
WWPN Aliases - remove fcp wwpn-alias remove [-a alias ... | -w wwpn]
fcp wwpn-alias remove -a my_alias_1
fcp wwpn-alias remove -w 10:00:00:00:c9:30:80:2f
Snapshots and Cloning
Data ONTAP provides a variety of methods for protecting data in an iSCSI or Fibre Channel SAN. These methods are based on Snapshot technology in Data ONTAP, which enables you to maintain multiple read-only versions of LUNs online per volume. Snapshot copies are a standard feature of Data ONTAP. A Snapshot copy is a frozen, read-only image of the entire Data ONTAP file system, or WAFL (Write Anywhere File Layout) volume, that reflects the state of the LUN or the file system at the time the Snapshot copy is created. The other data protection methods listed in the table below rely on Snapshot copies or create, use, and destroy Snapshot copies, as required.
The following table describes the various methods for protecting your data with Data ONTAP
Snapshot copy Make point-in-time copies of a volume.
SnapRestore
  • Restore a LUN or file system to an earlier preserved state in less than a minute without rebooting the storage system, regardless of the size of the LUN or volume being restored.
  • Recover from a corrupted database or a damaged application, a file system, a LUN, or a volume by using an existing Snapshot copy.
SnapMirror
  • Replicate data or asynchronously mirror data from one storage system to another over local or wide area networks (LANs or WANs).
  • Transfer Snapshot copies taken at specific points in time to other storage systems or near-line systems. These replication targets can be in the same data center through a LAN or distributed across the globe connected through metropolitan area networks (MANs) or WANs. Because SnapMirror operates at the changed block level instead of transferring entire files or file systems, it generally reduces bandwidth and transfer time requirements for replication.
SnapVault
  • Back up data by using Snapshot copies on the storage system and transferring them on a scheduled basis to a destination storage system.
  • Store these Snapshot copies on the destination storage system for weeks or months, allowing recovery operations to occur nearly instantaneously from the destination storage system to the original storage system.
SnapDrive for Windows or UNIX
  • Manage storage system Snapshot copies directly from a Windows or UNIX host.
  • Manage storage (LUNs) directly from a host.
  • Configure access to storage directly from a host. SnapDrive for Windows supports Windows 2000 Server and Windows Server 2003. SnapDrive for UNIX supports a number of UNIX environments.
Native tape backup and recovery Store and retrieve data on tape.
NDMP
(Network Data Management Protocol)
Control native backup and recovery facilities in storage systems and other file servers. Backup application vendors provide a common interface between backup applications and file servers.
A LUN clone is a point-in-time, writable copy of a LUN in a Snapshot copy. Changes made to the parent LUN after the clone is created are not reflected in the Snapshot copy. A LUN clone shares space with the LUN in the backing Snapshot copy. When you clone a LUN, and new data is written to the LUN, the LUN clone still depends on data in the backing Snapshot copy. The clone does not require additional disk space until changes are made to it. You cannot delete the backing Snapshot copy until you split the clone from it. When you split the clone from the backing Snapshot copy, the data is copied from the Snapshot copy to the clone, thereby removing any dependence on the Snapshot copy. After the splitting operation, both the backing Snapshot copy and the clone occupy their own space.
Use LUN clones to create multiple read/write copies of a LUN. You might want to do this for the following reasons:
  • You need to create a temporary copy of a LUN for testing purposes.
  • You need to make a copy of your data available to additional users without giving them access to the production data.
  • You want to create a clone of a database for manipulation and projection operations, while preserving the original data in unaltered form.
  • You want to access a specific subset of a LUN's data (a specific logical volume or file system in a volume group, or a specific file or set of files in a file system) and copy it to the original LUN, without restoring the rest of the data in the original LUN. This works on operating systems that support mounting a LUN and a clone of the LUN at the same time. SnapDrive for UNIX allows this with the snap connect command.
Display clones snap list
create clone # Create a LUN by entering the following command
lun create -s 10g -t solaris /vol/tradvol1/lun1
# Create a Snapshot copy of the volume containing the LUN to be cloned by entering the following command
snap create tradvol1 tradvol1_snapshot_08122010
# Create the LUN clone by entering the following command
lun clone create /vol/tradvol1/clone_lun1 -b /vol/tradvol1/lun1 tradvol1_snapshot_08122010
destroy clone # display the snapshot copies
lun snap usage tradvol1 tradvol1_snapshot_08122010
# Delete all the LUNs in the active file system that are displayed by the lun snap usage command by entering the following command
lun destroy /vol/tradvol1/clone_lun1
# Delete all the Snapshot copies that are displayed by the lun snap usage command in the order they appear
snap delete tradvol1 tradvol1_snapshot_08122010
clone dependency vol options <vol_name> snapshot_clone_dependency on
vol options <vol_name> snapshot_clone_dependency off
Note: Prior to Data ONTAP 7.3, the system automatically locked all backing Snapshot copies when Snapshot copies of LUN clones were taken. Starting with Data ONTAP 7.3, you can enable the system to only lock backing Snapshot copies for the active LUN clone. If you do this, when you delete the active LUN clone, you can delete the base Snapshot copy without having to first delete all of the more recent backing Snapshot copies.

This behavior is not enabled by default; use the snapshot_clone_dependency volume option to enable it. If this option is set to off, you are still required to delete all subsequent Snapshot copies before deleting the base Snapshot copy. If you enable this option, you are not required to rediscover the LUNs. If you perform a subsequent volume snap restore operation, the system restores whichever value was present at the time the Snapshot copy was taken.
Restoring snapshot
snap restore -s payroll_lun_backup.2 -t vol /vol/payroll_lun
splitting the clone lun clone split start lun_path

lun clone split status lun_path
stop clone splitting lun clone split stop lun_path
delete snapshot copy snap delete vol-name snapshot-name

snap delete -a -f <vol-name>
disk space usage lun snap usage tradvol1 mysnap
Use volume copy to copy LUNs vol copy start -S source:source_volume dest:dest_volume

vol copy start -S /vol/vol0 filerB:/vol/vol1
Disk Space Management
There are a number of commands that let you see and manage disk space usage.
Disk space usage for aggregates aggr show_space
Disk space usage for volumes or aggregates df
The estimated rate of change of data between Snapshot copies in a volume snap delta

snap delta /vol/tradvol1 tradvol1_snapshot_08122010
The estimated amount of space freed if you delete the specified Snapshot copies snap reclaimable
snap reclaimable /vol/tradvol1 tradvol1_snapshot_08122010

File Access Management
Having covered Block Access Management, I will now discuss File Access Management, covering:
  • NFS
  • CIFS
  • FTP
  • HTTP
Data ONTAP controls access to files according to the authentication-based and file-based restrictions that you specify. With authentication-based restrictions, you can specify which client machines and which users can connect to the entire storage system or a vFiler unit. Data ONTAP supports Kerberos authentication from both UNIX and Windows servers.
With file-based restrictions, you can specify which users can access which files. When a user creates a file, Data ONTAP generates a list of access permissions for the file. While the form of the permissions list varies with each protocol, it always includes common permissions, such as reading and writing permissions. When a user tries to access a file, Data ONTAP uses the permissions list to determine whether to grant access. Data ONTAP grants or denies access according to the operation that the user is performing, such as reading or writing, and the following factors:
  • User account
  • User group or netgroup
  • Client protocol
  • Client IP address
  • File type
As part of the verification process, Data ONTAP maps host names to IP addresses using the lookup service you specify—Lightweight Directory Access Protocol (LDAP), Network Information Service (NIS), or local storage system information.
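For example, the lookup order comes from /etc/nsswitch.conf on the storage system; a typical hosts line (viewed with rdfile, and shown here only as an illustrative example) might look like this:
rdfile /etc/nsswitch.conf
# hosts: files nis dns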
File Access using NFS
You can export and unexport file system paths on your storage system, making them available or unavailable, respectively, for mounting by NFS clients, including PC-NFS and WebNFS clients.
Export Options
actual=<path> Specifies the actual file system path corresponding to the exported file system path.
anon=<uid>|<name> Specifies the effective user ID (or name) of all anonymous or root NFS client users that access the file system path.
nosuid Disables setuid and setgid executables and mknod commands on the file system path.
ro | ro=clientid Specifies which NFS clients have read-only access to the file system path.
rw | rw=clientid Specifies which NFS clients have read-write access to the file system path.
root=clientid Specifies which NFS clients have root access to the file system path. If you specify the root= option, you must specify at least one NFS client identifier. To exclude NFS clients from the list, prepend the NFS client identifiers with a minus sign (-).
sec=sectype Specifies the security types that an NFS client must support to access the file system path. To apply the security types to all types of access, specify the sec= option once. To apply the security types to specific types of access (anonymous, non-super user, read-only, read-write, or root), specify the sec= option at least twice, once before each access type to which it applies (anon, nosuid, ro, rw, or root, respectively).
The security type can be one of the following:
none No security. Data ONTAP treats all of the NFS client's users as anonymous users.
sys Standard UNIX (AUTH_SYS) authentication. Data ONTAP checks the NFS credentials of all of the NFS client's users, applying the file access permissions specified for those users in the NFS server's /etc/passwd file. This is the default security type.
krb5 Kerberos(tm) Version 5 authentication. Data ONTAP uses data encryption standard (DES) key encryption to authenticate the NFS client's users.
krb5i Kerberos(tm) Version 5 integrity. In addition to authenticating the NFS client's users, Data ONTAP uses message authentication codes (MACs) to verify the integrity of the NFS client's remote procedure requests and responses, thus preventing "man-in-the-middle" tampering.
krb5p Kerberos(tm) Version 5 privacy. In addition to authenticating the NFS client's users and verifying data integrity, Data ONTAP encrypts NFS arguments and results to provide privacy.

Examples rw=10.45.67.0/24
ro,root=@trusted,rw=@friendly
rw,root=192.168.0.80,nosuid
Export Commands
Displaying exportfs
exportfs -q <path>
create # create export in memory and write to /etc/exports (use default options)
exportfs -p /vol/nfs1
# create export in memory and write to /etc/exports (use specific options)
exportfs -p sec=none,rw,root=192.168.0.80,nosuid /vol/nfs1

# create export in memory only using specific options
exportfs -io sec=none,rw,root=192.168.0.80,nosuid /vol/nfs1
remove # Memory only
exportfs -u <path>

# Memory and /etc/exports
exportfs -z <path>
export all exportfs -a
check access exportfs -c 192.168.0.80 /vol/nfs1
flush exportfs -f
exportfs -f <path>
reload exportfs -r
storage path exportfs -s <path>
Write export to a file exportfs -w <path/export_file>
fencing # Suppose /vol/vol0 is exported with the following export options:

   -rw=pig:horse:cat:dog,ro=duck,anon=0

# The following command enables fencing of cat from /vol/vol0
exportfs -b enable save cat /vol/vol0

# cat moves to the front of the ro= list for /vol/vol0:

   -rw=pig:horse:dog,ro=cat:duck,anon=0
stats nfsstat

File Access using CIFS

NetApp supports a number of Windows versions when it comes to CIFS; it is a licensed product. Before you begin, you need to set up the CIFS server by running the cifs setup command below. I am not going to go into detail, but here are the basic commands that you need. If you are familiar with Samba then you will have no trouble with this.
Useful CIFS options
change the security style options wafl.default_security_style {ntfs | unix | mixed}
timeout options cifs.idle_timeout time
Performance options cifs.oplocks.enable on

Note: Under some circumstances, if a process has an exclusive oplock on a file and a second process attempts to open the file, the first process must invalidate cached data and flush writes and locks. The client must then relinquish the oplock and access to the file. If there is a network failure during this flush, cached write data might be lost.
CIFS Commands
useful files
/etc/cifsconfig_setup.cfg
/etc/usermap.cfg
/etc/passwd
/etc/cifsconfig_share.cfg


Note: use "rdfile" to read the file
CIFS setup cifs setup

Note: you will be prompted to answer a number of questions based on what requirements you need.
start cifs restart
stop cifs terminate

# terminate a specific client
cifs terminate <client_name>|<IP Address>
sessions cifs sessions
cifs sessions <user>
cifs sessions <IP Address>

# Authentication
cifs sessions -t

# Changes
cifs sessions -c

# Security Info
cifs sessions -s
Broadcast message cifs broadcast * "message"
cifs broadcast <client_name> "message"
permissions cifs access <share> <user|group> <permission>

# Examples
cifs access sysadmins -g wheel "Full Control"
cifs access -delete releases ENGINEERING\mary
Note: rights can be Unix-style combinations of r w x - or NT-style "No Access", "Read", "Change", and "Full Control"
stats cifs stat <interval>
cifs stat <user>
cifs stat <IP Address>
create a share # create a volume in the normal way
# then using qtrees set the style of the volume {ntfs | unix | mixed}
# Now you can create your share
cifs shares -add TEST /vol/flexvol1/TEST -comment "Test Share " -forcegroup workgroup -maxusers 100
change share characteristics cifs shares -change sharename {-browse | -nobrowse} {-comment desc | -nocomment} {-maxusers userlimit | -nomaxusers} {-forcegroup groupname | -noforcegroup} {-widelink | -nowidelink} {-symlink_strict_security | -nosymlink_strict_security} {-vscan | -novscan} {-vscanread | -novscanread} {-umask mask | -noumask} {-no_caching | -manual_caching | -auto_document_caching | -auto_program_caching}

# example
cifs shares -change <sharename> -novscan
home directories # Display home directories
cifs homedir

# Add a home directory
wrfile -a /etc/cifs_homedir.cfg /vol/TEST

# check it
rdfile /etc/cifs_homedir.cfg

# Display for a Windows Server
net view \\<Filer IP Address>

# Connect
net use * \\192.168.0.75\TEST

Note: make sure the directory exists
domain controller # add a domain controller
cifs prefdc add lab 10.10.10.10 10.10.10.11
# delete a domain controller
cifs prefdc delete lab

# List domain information
cifs domaininfo
# List the preferred controllers
cifs prefdc print

# Re-establish the domain controller connection
cifs resetdc
change the filer's domain password cifs changefilerpwd
Tracing permission problems sectrace add [-ip ip_address] [-ntuser nt_username] [-unixuser unix_username] [-path path_prefix] [-a]

#Examples
sectrace add -ip 192.168.10.23
sectrace add -unixuser foo -path /vol/vol0/home4 -a
# To remove
sectrace delete all
sectrace delete <index>

# Display tracing
sectrace show

# Display error code status
sectrace print-status <status_code>
sectrace print-status 1:51544850432:32:78
File Access using FTP

You can enable and configure the File Transfer Protocol (FTP) server to let users of Windows and UNIX FTP clients access the files on your storage system. There is not much to say about FTP, so I will keep this short and sweet.
Useful Options
Enable options ftpd.enable on
Disable options ftpd.enable off
File Locking options ftpd.locking delete
options ftpd.locking none

Note: To prevent users from modifying files while the FTP server is transferring them, you can enable FTP file locking. Otherwise, you can disable FTP file locking. By default, FTP file locking is disabled.
Authentication style options ftpd.auth_style {unix | ntlm | mixed}
bypassing of FTP traverse checking options ftpd.bypass_traverse_checking on
options ftpd.bypass_traverse_checking off

Note: If the ftpd.bypass_traverse_checking option is set to off, when a user attempts to access a file using FTP, Data ONTAP checks the traverse (execute) permission for all directories in the path to the file. If any of the intermediate directories does not have the "X" (traverse permission), Data ONTAP denies access to the file. If the ftpd.bypass_traverse_checking option is set to on, when a user attempts to access a file, Data ONTAP does not check the traverse permission for the intermediate directories when determining whether to grant or deny access to the file.
Restricting FTP users to a specific directory options ftpd.dir.restriction on
options ftpd.dir.restriction off
Restricting FTP users to their home directories or a default directory options ftpd.dir.override ""
Maximum number of connections options ftpd.max_connections n
options ftpd.max_connections_threshold n
idle timeout value options ftpd.idle_timeout n s | m | h
anonymous logins options ftpd.anonymous.enable on
options ftpd.anonymous.enable off

# specify the name for the anonymous login
options ftpd.anonymous.name username

# create the directory for the anonymous login
options ftpd.anonymous.home_dir homedir
FTP Commands
Log files /etc/log/ftp.cmd
/etc/log/ftp.xfer

# specify the max number of logfiles (default is 6) and size
options ftpd.log.nfiles 10
options ftpd.log.filesize 1G

Note: use rdfile to view
Restricting access /etc/ftpusers

Note: using rdfile and wrfile to access /etc/ftpusers
stats ftp stat

# to reset
ftp stat -z
File Access using HTTP

To let HTTP clients (web browsers) access the files on your storage system, you can enable and configure Data ONTAP's built-in HyperText Transfer Protocol (HTTP) server. Alternatively, you can purchase and connect a third-party HTTP server to your storage system.
HTTP Options
enable options httpd.enable on
disable options httpd.enable off
Enabling or disabling the bypassing of HTTP traverse checking options httpd.bypass_traverse_checking on
options httpd.bypass_traverse_checking off

Note: this is similar to the FTP version
root directory options httpd.rootdir /vol0/home/users/pages
Host access options httpd.access host=Host1 AND if=e3
options httpd.admin.access host!=Host1
HTTP Commands
Log files /etc/log/httpd.log

# use the below to change the logfile format
options httpd.log.format alt1

Note: use rdfile to view
redirects redirect /cgi-bin/* http://cgi-host/*
pass rule pass /image-bin/*
fail rule fail /usr/forbidden/*
mime types /etc/httpd.mimetypes

Note: use rdfile and wrfile to edit
interface firewall ifconfig f0 untrusted
stats httpstat [-dersta]

# reset the stats
httpstat -z[derta]

Network Management
Your storage system supports physical network interfaces, such as Ethernet and Gigabit Ethernet interfaces, and virtual network interfaces, such as interface group and virtual local area network (VLAN). Each of these network interface types has its own naming convention.
Your storage system supports the following types of physical network interfaces:
  • 10/100/1000 Ethernet
  • Gigabit Ethernet (GbE)
  • 10 Gigabit Ethernet
In addition, some storage system models include a physical network interface named e0M. The e0M interface is used only for Data ONTAP management activities, such as running a Telnet, SSH, or RSH session. The following table lists interface types, interface name formats, and examples of names that use these identifiers.
Interface Type | Interface Name Format | Example
Physical interface on a single-port adapter or slot | e<slot_number> | e0, e1
Physical interface on a multiple-port adapter or slot | e<slot_number><port_letter> | e0a, e0b, e1a, e1b
Interface group | Any user-specified string that meets certain criteria | web_ifgrp, ifgrp1
VLAN | <physical_interface_name>-<vlan_ID> or <ifgrp_name>-<vlan_ID> | e8-2, ifgrp1-3
Beginning with Data ONTAP 7.3, storage systems can accommodate from 256 to 1,024 network interfaces per system, depending on the storage system model, system memory, and whether they are in an HA pair. Each storage system can support up to 16 interface groups. The maximum number of VLANs that can be supported equals the maximum number of network interfaces for the platform minus the total number of physical interfaces, interface groups, vh, and loopback interfaces supported by the storage system.
You can manage your storage system locally from an Ethernet connection by using any network interface. However, to manage your storage system remotely, the system should have a Remote LAN Module (RLM) or Baseboard Management Controller (BMC). These provide remote platform management capabilities, including remote access, monitoring, troubleshooting, and alerting features.
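If your model has an RLM, it is configured and checked from the Data ONTAP prompt; a minimal sketch (the RLM receives its own network settings during setup):
rlm setup     # interactive setup of the RLM's network configuration
rlm status    # display the RLM's firmware version and network settings
rlm reboot    # reboot the RLM itself, without affecting Data ONTAP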
Jumbo frames are larger than standard frames, so the same amount of data can be carried in fewer frames; this reduces the CPU processing overhead on your network interfaces. In particular, using jumbo frames on a Gigabit or 10 Gigabit Ethernet infrastructure can significantly improve performance, depending on the network traffic. Jumbo frames are packets that are longer than the standard Ethernet (IEEE 802.3) frame size of 1,518 bytes. The frame size definition for jumbo frames is vendor-specific because jumbo frames are not part of the IEEE standard. The most commonly used jumbo frame size is 9,018 bytes. Jumbo frames can be used for all Gigabit and 10 Gigabit Ethernet interfaces that are supported on your storage system. The interfaces must be operating at or above 1,000 Mbps. You can set up jumbo frames on your storage system in the following two ways:
  • During initial setup, the setup command prompts you to configure jumbo frames if you have an interface that supports jumbo frames on your storage system.
  • If your system is already running, you can enable jumbo frames by setting the MTU size on an interface.
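For example, on a running system you might enable jumbo frames as shown below; the interface name and MTU are examples, and the connected switch ports and hosts must be configured with a matching MTU:
ifconfig e0a mtusize 9000

# make it persistent by adding mtusize to the interface's line in /etc/rc
# (edit with wrfile/rdfile), e.g.
# ifconfig e0a 192.168.0.10 netmask 255.255.255.0 mtusize 9000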
You can configure IP addresses for your network interface during system setup. To configure the IP addresses later, you should use the ifconfig command.
Display ifconfig -a
ifconfig <interface>
IP address ifconfig e0 <IP Address>
ifconfig e0a <IP Address>

# Remove an IP address
ifconfig e3 0
subnet mask ifconfig e0a netmask <subnet mask address>
broadcast ifconfig e0a broadcast <broadcast address>
media type ifconfig e0a mediatype 100tx-fd
maximum transmission unit (MTU) ifconfig e8 mtusize 9000
Flow control ifconfig <interface_name> flowcontrol <value>

# example
ifconfig e8 flowcontrol none
Note: value is the flow control type. You can specify the following values for the flowcontrol option:

none    - No flow control
receive - Able to receive flow control frames
send    - Able to send flow control frames
full    - Able to send and receive flow control frames

The default flowcontrol type is full.
trusted ifconfig e8 untrusted

Note: You can specify whether a network interface is trustworthy or untrustworthy. When you specify an interface as untrusted (untrustworthy), any packets received on the interface are likely to be dropped.
HA Pair ifconfig e8 partner <IP Address>

## You must enable takeover on interface failures by entering the following commands:
options cf.takeover.on_network_interface_failure enable
ifconfig interface_name {nfo|-nfo}
nfo   — Enables negotiated failover
-nfo  — Disables negotiated failover
Note: In an HA pair, you can assign a partner IP address to a network interface. The network interface takes over this IP address when a failover occurs
Alias # Create alias
ifconfig e0 alias 192.0.2.30

# Remove alias
ifconfig e0 -alias 192.0.2.30
Block/Unblock protocols # Block
options interface.blocked.cifs e9
options interface.blocked.cifs e0a,e0b

# Unblock
options interface.blocked.cifs ""
Stats ifstat
netstat

Note: there are many options to both of these commands, so I will leave you to the man pages
bring up/down an interface ifconfig <interface> up
ifconfig <interface> down
Routing
You can have Data ONTAP route its own outbound packets to network interfaces. Although your storage system can have multiple network interfaces, it does not function as a router. However, it can route its outbound packets.

Data ONTAP uses two routing mechanisms:
  • Fast path Data ONTAP uses this mechanism to route NFS packets over UDP and to route all TCP traffic.
  • Routing table To route IP traffic that does not use fast path, Data ONTAP uses the information available in the local routing table. The routing table contains the routes that have been established and are currently in use, as well as the default route specification.
Fast path is an alternative routing mechanism to the routing table, in which the responses to incoming network traffic are sent back by using the same interface as the incoming traffic. It provides advantages such as load balancing between multiple network interfaces and improved storage system performance. Fast path is enabled automatically on your storage system; however, you can disable it. Using fast path provides the following advantages:
  • Load balancing between multiple network interfaces on the same subnet. Load balancing is achieved by sending responses on the same interface of your storage system that receives the incoming requests.
  • Increased storage system performance by skipping routing table lookups.
You can manage the routing table automatically by using the routed daemon, or manually by using the route command. The routed daemon performs the following functions by default:
  • Deletes redirected routes after a specified period
  • Performs router discovery with the ICMP Router Discovery Protocol (IRDP). This is useful only if there is no static default route.
  • Listens for Routing Information Protocol (RIP) packets
  • Migrates routes to alternate interfaces when multiple interfaces are available on the same subnet
The routed daemon can also be configured to perform the following functions:
  • Control RIP and IRDP behavior
  • Generate RIP response messages that update a host route on your storage system
  • Recognize distant gateways identified in the /etc/gateways file
If you are familiar with Unix routing then you should have no trouble with the following routing commands:
default route # using wrfile and rdfile edit the /etc/rc file with the below
route add default 192.168.0.254 1

# the full /etc/rc file will look like something below
hostname netapp1
ifconfig e0 192.168.0.10 netmask 255.255.255.0 mediatype 100tx-fd
route add default 192.168.0.254 1
routed on
enable/disable fast path options ip.fastpath.enable {on|off}

Note:
on   — Enables fast path
off  — Disables fast path
enable/disable routing daemon routed {on|off}

Note:
on   — Turns on the routed daemon
off  — Turns off the routed daemon
Display routing table netstat -rn
route -s
routed status
Add to routing table route add 192.168.0.15 gateway.com 1
Hosts and DNS
Hosts and DNS work much the same as on Unix, but here is a quick table just to jog your memory
Hosts # use wrfile and rdfile to read and edit the /etc/hosts file; it basically uses the same rules as a Unix
# hosts file
nsswitch file # use wrfile and rdfile to read and edit the /etc/nsswitch.conf file; it basically uses the same rules as a
# Unix nsswitch.conf file
DNS # use wrfile and rdfile to read and edit the /etc/resolv.conf file; it basically uses the same rules as a
# Unix resolv.conf file

options dns.enable {on|off}

Note:
on   — Enables DNS
off  — Disables DNS
Domain Name options dns.domainname <domain>
DNS cache options dns.cache.enable on
options dns.cache.enable off

# To flush the DNS cache
dns flush

# To see dns cache information
dns info
DNS updates options dns.update.enable {on|off|secure}
Note:
on     — Enables dynamic DNS updates
off    — Disables dynamic DNS updates
secure — Enables secure dynamic DNS updates
time-to-live (TTL) options dns.update.ttl <time>
# Example
options dns.update.ttl 2h

Note: time can be set in seconds (s), minutes (m), or hours (h), with a minimum value of 600 seconds and a maximum value of 24 hours
I will leave you to read the documentation regarding how to configure NIS.
VLAN
This section is a brief introduction to VLANs. VLANs provide logical segmentation of networks by creating separate broadcast domains. A VLAN can span multiple physical network segments. The end-stations belonging to a VLAN are related by function or application. For example, end-stations in a VLAN might be grouped by departments, such as engineering and accounting, or by projects, such as release1 and release2. Because physical proximity of the end-stations is not essential in a VLAN, you can disperse the end-stations geographically and still contain the broadcast domain in a switched network.
An end-station must become a member of a VLAN before it can share the broadcast domain with other end-stations on that VLAN. The switch ports can be configured to belong to one or more VLANs (static registration), or end-stations can register their VLAN membership dynamically, with VLAN-aware switches. VLAN membership can be based on one of the following:
  • Switch ports
  • End-station MAC addresses
  • Protocol
In Data ONTAP, VLAN membership is based on switch ports. With port-based VLANs, ports on the same or different switches can be grouped to create a VLAN. As a result, multiple VLANs can exist on a single switch.
Any broadcast or multicast packets originating from a member of a VLAN are confined only among the members of that VLAN. Communication between VLANs, therefore, must go through a router. The following figure illustrates how communication occurs between geographically dispersed VLAN members.

In this figure, VLAN 10 (Engineering), VLAN 20 (Marketing), and VLAN 30 (Finance) span three floors of a building. If a member of VLAN 10 on Floor 1 wants to communicate with a member of VLAN 10 on Floor 3, the communication occurs without going through the router, and packet flooding is limited to port 1 of Switch 2 and Switch 3 even if the destination MAC address to Switch 2 and Switch 3 is not known.
GARP VLAN Registration Protocol (GVRP) uses Generic Attribute Registration Protocol (GARP) to allow end-stations on a network to dynamically register their VLAN membership with GVRP-aware switches. Similarly, these switches dynamically register with other GVRP-aware switches on the network, thus creating a VLAN topology across the network. GVRP provides dynamic registration of VLAN membership; therefore, members can be added or removed from a VLAN at any time, saving the overhead of maintaining static VLAN configuration on switch ports. Additionally, VLAN membership information stays current, limiting the broadcast domain of a VLAN only to the active members of that VLAN.
By default, GVRP is disabled on all VLAN interfaces in Data ONTAP; however, you can enable it. After you enable GVRP on an interface, the VLAN interface informs the connecting switch about the VLANs it supports. This information (dynamic registration) is updated periodically. This information is also sent every time an interface comes up after being in the down state or whenever there is a change in the VLAN configuration of the interface.
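For example, GVRP can be enabled when a VLAN is created, or toggled later with vlan modify (the interface and VLAN ID here are examples):
vlan create -g on e4 10    # create VLAN e4-10 with GVRP enabled
vlan modify -g off e4      # later, disable GVRP on the VLANs of interface e4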
A VLAN tag is a unique identifier that indicates the VLAN to which a frame belongs. Generally, a VLAN tag is included in the header of every frame sent by an end-station on a VLAN. On receiving a tagged frame, the switch inspects the frame header and, based on the VLAN tag, identifies the VLAN. The switch then forwards the frame to the destination in the identified VLAN. If the destination MAC address is unknown, the switch limits the flooding of the frame to ports that belong to the identified VLAN.
VLANs provide a number of advantages such as ease of administration, confinement of broadcast domains, reduced network traffic, and enforcement of security policies.
Create vlan create [-g {on|off}] ifname vlanid

# Create VLANs with identifiers 10, 20, and 30 on the interface e4 of a storage system by using the following command:
vlan create e4 10 20 30
# Configure the VLAN interface e4-10 by using the following command
ifconfig e4-10 192.168.0.11 netmask 255.255.255.0
Add vlan add e4 40 50
Delete # Delete specific VLAN
vlan delete e4 30

# Delete All VLANs on a interface
vlan delete e4
Enable/Disable GVRP on VLAN vlan modify -g {on|off} ifname
Stat vlan stat <interface_name> <vlan_id>

# Examples
vlan stat e4
vlan stat e4 10
Interface Groups
An interface group is a feature in Data ONTAP that implements link aggregation on your storage system. Interface groups provide a mechanism to group together multiple network interfaces (links) into one logical interface (aggregate). After an interface group is created, it is indistinguishable from a physical network interface.
Interface groups provide several advantages over individual network interfaces:
  • Higher throughput Multiple interfaces work as one interface.
  • Fault tolerance If one interface in an interface group goes down, your storage system stays connected to the network by using the other interfaces.
  • No single point of failure If the physical interfaces in an interface group are connected to multiple switches and a switch goes down, your storage system stays connected to the network through the other switches.
You can create three different types of interface groups on your storage system: single-mode interface groups, static multimode interface groups, and dynamic multimode interface groups. Each interface group provides different levels of fault tolerance. Multimode interface groups provide methods for load balancing network traffic.
In a single-mode interface group, only one of the interfaces in the interface group is active. The other interfaces are on standby, ready to take over if the active interface fails. All interfaces in a single-mode interface group share a common MAC address. There can be more than one interface on standby in a single-mode interface group. If an active interface fails, your storage system randomly picks one of the standby interfaces to be the next active link. The active link is monitored and link failover is controlled by the storage system; therefore, a single-mode interface group does not require any switch configuration. Single-mode interface groups also do not require a switch that supports link aggregation.
Dynamic multimode interface groups can detect not only the loss of link status (as do static multimode interface groups), but also a loss of data flow. This feature makes dynamic multimode interface groups compatible with high-availability environments. The dynamic multimode interface group implementation in Data ONTAP is in compliance with IEEE 802.3ad (dynamic), also known as Link Aggregation Control Protocol (LACP). Dynamic multimode interface groups have some special requirements. They include the following:
  • Dynamic multimode interface groups must be connected to a switch that supports LACP.
  • Dynamic multimode interface groups must be configured as first-level interface groups.
  • Dynamic multimode interface groups should be configured to use the IP-based load-balancing method.
In a dynamic multimode interface group, all interfaces in the interface group are active and share a single MAC address. This logical aggregation of interfaces provides higher throughput than a single-mode interface group. A dynamic multimode interface group requires a switch that supports link aggregation over multiple switch ports. The switch is configured so that all ports to which links of an interface group are connected are part of a single logical port. For information about configuring the switch, see your switch vendor's documentation. Some switches might not support link aggregation of ports configured for jumbo frames.
The load-balancing method for a multimode interface group can be specified only when the interface group is created. If no method is specified, the IP address based load-balancing method is used.
Create (single-mode) # To create a single-mode interface group, enter the following command:
ifgrp create single SingleTrunk1 e0 e1 e2 e3
# To configure an IP address of 192.168.0.10 and a netmask of 255.255.255.0 on the single-mode interface group SingleTrunk1
ifconfig SingleTrunk1 192.168.0.10 netmask 255.255.255.0
# To specify the interface e1 as preferred
ifgrp favor e1
Create (multi-mode) # To create a static multimode interface group, comprising interfaces e0, e1, e2, and e3 and using MAC
# address load balancing
ifgrp create multi MultiTrunk1 -b mac e0 e1 e2 e3
# To create a dynamic multimode interface group, comprising interfaces e0, e1, e2, and e3 and using IP
# address based load balancing
ifgrp create lacp MultiTrunk1 -b ip e0 e1 e2 e3
Create second-level interface group # To create two interface groups and a second-level interface group. In this example, IP address load
# balancing is used for the multimode interface groups.
ifgrp create multi Firstlev1 e0 e1
ifgrp create multi Firstlev2 e2 e3
ifgrp create single Secondlev Firstlev1 Firstlev2
# To enable failover to a multimode interface group with higher aggregate bandwidth when one or more of
# the links in the active multimode interface group fail
options ifgrp.failover.link_degraded on
Note: You can create a second-level interface group by using two multimode interface groups. Secondlevel interface groups enable you to provide a standby multimode interface group in case the primary multimode interface group fails.
Create second-level interface group in an HA pair # Use the following commands to create a second-level interface group in an HA pair. In this example,
# IP-based load balancing is used for the multimode interface groups.

# On StorageSystem1:
ifgrp create multi Firstlev1 e1 e2
ifgrp create multi Firstlev2 e3 e4
ifgrp create single Secondlev1 Firstlev1 Firstlev2

# On StorageSystem2 :
ifgrp create multi Firstlev3 e5 e6
ifgrp create multi Firstlev4 e7 e8
ifgrp create single Secondlev2 Firstlev3 Firstlev4

# On StorageSystem1:
ifconfig Secondlev1 partner Secondlev2

# On StorageSystem2 :
ifconfig Secondlev2 partner Secondlev1
Favoured/non-favoured interface # select the favoured interface
ifgrp favor e3
# select a non-favoured interface
ifgrp nofavor e3
Add ifgrp add MultiTrunk1 e4
Delete ifconfig MultiTrunk1 down
ifgrp delete MultiTrunk1 e4

Note: You must configure the interface group to the down state before you can delete a network interface
from the interface group
Destroy ifconfig ifgrp_name down
ifgrp destroy ifgrp_name
Note: You must configure the interface group to the down state before you can destroy the interface group
Enable/disable a interface group ifconfig ifgrp_name up
ifconfig ifgrp_name down
Status ifgrp status [ifgrp_name]
Stat ifgrp stat [ifgrp_name] [interval]
Diagnostic Tools
There are a number of tools and options that you can use to help with network-related problems
Useful options
Ping throttling # Throttle ping
options ip.ping_throttle.drop_level <packets_per_second>

# Disable ping throttling
options ip.ping_throttle.drop_level 0
Forged ICMP redirect attacks options ip.icmp_ignore_redirect.enable on

Note: You can disable ICMP redirect messages to protect your storage system against forged ICMP redirect attacks.
Useful Commands
netdiag The netdiag command continuously gathers and analyzes statistics, and performs diagnostic tests. These diagnostic tests identify and report problems with your physical network or transport layers and suggest remedial action.
ping You can use the ping command to test whether your storage system can reach other hosts on your network.
pktt You can use the pktt command to trace the packets sent and received in the storage system's network.
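As a quick example of using pktt (the interface and directory are examples; the resulting trace file can be opened with tcpdump or Wireshark):
pktt start e0a -d /etc/log   # begin tracing packets on e0a, writing the trace into /etc/log
pktt status                  # show which interfaces are currently being traced
pktt stop e0a                # stop the trace, leaving an e0a_<date>.trc file for analysis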
