Thursday, 11 July 2013

NetApp Disks and RAID Groups Best Practices


Parity Disks and RAID Group Best Practices

  • NetApp always recommends the use of RAID-DP, whether on SAS or SATA disks.
  • We recommend a RAID group size of 12 (10+2) to 20 (18+2).
  • Due to the high reliability of SAS disks, it may make sense to define a RAID group size of 24 (22+2) to adapt it to the shelf layout.
  • The default RAID group size for SSD is 23 (21+2) and the maximum is 28. For SSD RAID groups and aggregates, NetApp recommends a RAID group size in the range of 20 (18+2) to 28 (26+2).

Default and Maximum RAID Group Sizes


Tuesday, 23 October 2012

NetApp Partition Alignment


Partition Alignment

Virtual machines store their data on virtual disks. As with physical disks, these virtual disks contain storage partitions and file systems, which are created by the VM’s guest operating system. To ensure optimal disk I/O within the VM, you must align the partitions of the virtual disks to the block boundaries of VMFS and the block boundaries of the storage array. Failure to align all three of these items results in a dramatic increase in I/O load on the storage array and negatively impacts the performance of every virtual machine served by the array.

The impact of misaligned partitions


Failure to properly align the file systems within virtual machines has a negative impact on many aspects of a virtual infrastructure. Customers may first notice the impact of misalignment with virtual machines running high-performance applications. The reason is that every I/O operation executed within the VM requires multiple I/O operations on the storage array.

In addition to the performance impact, the storage savings achieved with NetApp data deduplication will be reduced. Finally, storage arrays will be overtaxed, and as the virtual infrastructure grows the storage array will require hardware upgrades to meet the additional I/O load generated by this misalignment. Simply put, you can save your company a significant amount of money by optimizing the I/O of your VMs.

Verifying partition alignment with Windows operating systems


To verify the starting partition offset for a Windows-based virtual machine, log on to the VM and run the System Information utility (msinfo32), where the value is reported as "Partition Starting Offset". To run msinfo32, select Start > All Programs > Accessories > System Tools > System Information.
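As a quick sanity check on the value msinfo32 reports, the starting offset should divide evenly by the array's 4 KiB block size; the classic misaligned value on older Windows versions is 32256 bytes (63 sectors of 512 bytes). Below is a minimal sketch of that check (the offset values are illustrative):

```python
# Check whether a partition starting offset (in bytes) is aligned to a
# 4 KiB block boundary, which is the WAFL block size on NetApp arrays.
BLOCK_SIZE = 4096  # bytes

def is_aligned(starting_offset: int, block_size: int = BLOCK_SIZE) -> bool:
    """Return True if the offset falls exactly on a block boundary."""
    return starting_offset % block_size == 0

# Offsets commonly reported by msinfo32 (illustrative values):
print(is_aligned(32256))    # legacy 63-sector offset: misaligned
print(is_aligned(1048576))  # 1 MiB offset used by newer Windows: aligned
```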




NetApp Volume Language


Volume Language

Every volume has a language. The storage system uses a character set appropriate to the language for the following items on that volume:
  •  File names
  • File access

The language of the root volume is used for the following items:
  •  System name
  •  CIFS share names
  •  NFS user and group names
  •  CIFS user account names
  •  Domain name
  •  Console commands and command output
  •  Access from CIFS clients that don’t support Unicode
  •  Reading the following files:
      •  /etc/quotas
      •  /etc/usermap.cfg
      •  the home directory definition file

You are strongly advised to set all volumes to have the same language as the root volume, and to set the volume language at volume creation time. Changing the language of an existing volume can cause some files to become inaccessible.

NetApp Root Volume

Recommended Root Volume (vol0) Sizes


Root Volume

The storage system's root volume contains special directories and configuration files that help you administer your storage system. The root volume is created when the storage system is initially set up at the factory. The root volume can be contained in its own aggregate, which is the best practice for performance and scalability; if you are constrained by the number of disks, the root volume can also exist in a larger aggregate. If the root volume exists in a larger aggregate, then, because it is a flexible volume, you can resize it to reclaim usable space for other volumes. Below is a table showing the Data ONTAP recommended root FlexVol volume size by platform.

Platform            Root Volume Size (7.3.x)   Root Volume Size (8.0.x)
FAS2040             16GiB                      160GiB
FAS/V3040           16GiB                      160GiB
FAS/V3140           16GiB                      160GiB
FAS/V3070           23GiB                      230GiB
FAS/V3160           23GiB                      230GiB
FAS/V3170           25GiB                      250GiB
FAS/V3210           10GiB                      100GiB
FAS/V3240           15GiB                      150GiB
FAS/V3270           30GiB                      300GiB
FAS/V6000 series    25GiB                      250GiB
FAS/V6200 series    Unsupported                300GiB

NetApp Best Practices


Below is a compiled list of the top recommendations and best practices when deploying a NetApp filer.

  • Use RAID-DP™, the NetApp high-performance implementation of RAID 6, for better data protection.
  • Use multipath HA with active-active storage configurations to improve overall system availability as well as promote higher performance consistency.
  • Use the default RAID group size when creating aggregates or traditional volumes.
  • Allow Data ONTAP to select disks automatically when creating aggregates or volumes.
  • Use the latest Data ONTAP general deployment release available on the NOW™ (NetApp on the Web) site.
  • Use the latest storage controller, shelf, and disk firmware available on the NOW site.
  • Maintain at least one hot spare for each type of disk drive in the storage system. Drives differ by type (FC, SAS, SATA), disk size, and rotational speed (RPM).
  • Maintain two hot spares for each type of disk drive in the storage system to take advantage of Maintenance Center.
  • Reserve space for at least one week of Snapshot™ backups.
  • Do not put user data into the root volume.
  • Replicate data with SnapMirror® or SnapVault® for disaster recovery (DR) protection.
  • Replicate to remote locations to increase data protection levels.
  • Use an active-active storage controller configuration (clustered failover) to eliminate single points of failure (SPOFs).
  • Deploy SyncMirror® and RAID-DP for the highest level of storage resiliency. 


Tuesday, 16 October 2012

Export NFS Shares From Command Line

Exporting NFS shares through the command line can be painful, as the syntax has to be 100% correct, and I find it altogether confusing. So here is what I think is the simplest way to export NFS shares via the command line.

Run rdfile /vol/vol0/etc/exports

The output should look something like this:


#Auto-generated by setup Wed Oct 14 09:35:11 GMT 2012
/vol/vol0 -sec=sys,rw,anon=0,nosuid
/vol/vol0/home -sec=sys,rw,nosuid

The easiest way to add an export is to write it directly to the exports file and then run exportfs -a.

So, to add an NFS share for /vol/nfs_vol1 with read, write and root access for host1, host2 and host3, add the following line to the exports file (one rule line per path; the root= option lists the hosts granted root access):

/vol/nfs_vol1      -sec=sys,rw,root=host1:host2:host3

Now you need to write this to the exports file. You can do this on the command line or by opening the exports file with WordPad. To do it from the command line, copy the old and new exports contents and run wrfile /vol/vol0/etc/exports, as below:

wrfile /vol/vol0/etc/exports

#Auto-generated by setup Wed Oct 14 09:35:11 GMT 2012
/vol/vol0 -sec=sys,rw,anon=0,nosuid
/vol/vol0/home -sec=sys,rw,nosuid
/vol/nfs_vol1      -sec=sys,rw,root=host1:host2:host3

Hit Enter, then Ctrl-C to terminate wrfile.



The file will now have been updated, so running rdfile /vol/vol0/etc/exports will show:


#Auto-generated by setup Wed Oct 14 09:35:11 GMT 2012
/vol/vol0 -sec=sys,rw,anon=0,nosuid
/vol/vol0/home -sec=sys,rw,nosuid
/vol/nfs_vol1      -sec=sys,rw,root=host1:host2:host3


Now all you have to do is run exportfs -a; the share will be exported and you will be able to mount it on the host.

Tuesday, 9 October 2012

How Does SnapManager for Virtual Infrastructure Work?

SMVI is now integrated into VSC, which is good, as most of the VMware-integrated products are now centralised; hopefully the rest of the SnapManager suite will soon be added. Here is how SnapManager for Virtual Infrastructure works:


  1. VSC initiates a backup
  2. VSC triggers VMware snapshots of VMs
  3. vCenter creates VM snapshots and captures active transactions in delta files
  4. VSC triggers instant NetApp snapshot
  5. VSC triggers VMware snapshot removal
  6. vCenter removes the VM snapshots and commits the delta files back to the base disks

Friday, 5 October 2012

SnapManager for SQL Sizing

SnapManager for SQL Server provides extended capabilities for SQL backups, including backing up and restoring SQL Server databases with minimal disruption to the database and minimised backup and restore times. These capabilities are provided by the underlying NetApp Snapshot technology, which quiesces the database and snapshots the data at the volume level, so no data is actually copied and the process completes quickly.

SnapManager for SQL Server includes a client-based graphical user interface (GUI) program to perform various activities on SQL Server. These activities include:
  •  Setting options for SnapManager for SQL Server operations
  •  Backing up, restoring and cloning databases and database components
  •  Monitoring and reporting on SnapManager for SQL Server operations

Limitations & Known Issues


All SQL Server 2000, 2005 and 2008 editions are supported, except Microsoft SQL Server Desktop Engine (MSDE) 2000 and all versions of MySQL.

Pre-Requisites


  • System database files cannot reside on the same LUN as user databases
  • Each user database's log files and data files need to be on separate LUNs
  • Tempdb must have its own LUN or stay on Local disk
  • A separate LUN needs to be created to store the Snapshots
  • SQL must be installed on the server before installing Snapdrive and SMSQL
  • ONLY database files can exist on the LUN. You’ll need to put all application binaries and non-database files on different LUNs.
  • The client machine must be registered in DNS and be part of a domain.
  • A reboot of the client will be required, so a downtime slot must be arranged. This does not have to be at the same time as the installation, if necessary.
  • Verify that the following requirements are met for the database software:
  • SQL Server is installed on Windows 2000 or later.
  • SnapDrive must be installed prior to installing SnapManager for SQL

For further requirements please refer to the new SnapManager for SQL server 5.2 best practices - http://media.netapp.com/documents/tr-4003.pdf

Drive layouts


Mount points will be used instead of drive letters as using drive letters gives you a limit of 26 LUNs per server.

Each SQL server will be provisioned with the following LUN and volume layout:

System Databases will be migrated onto a separate LUN contained in a single volume

LUN Name –  systemdb_”server name”.lun
Volume Name – v_systemdb_”server name”
Qtree Name – q_systemdb_”server name”

Tempdb will be migrated onto a separate LUN contained in a single volume

LUN Name – tempdb_”server name”.lun
Volume Name – v_tempdb_”server name”
Qtree name – q_tempdb_”server name”

Each database that needs granular restores, or is intensively used, will need to be placed on its own data LUN along with its own log LUN; both of these LUNs will be contained in separate volumes.

Database
LUN Name – “database name”_”server name”.lun
Volume Name – v_”database name”_”server name”
Qtree Name – q_”database name”_”server name”

Logs
LUN Name – “log name”_”server name”.lun
Volume Name – v_”log name”_”server name”
Qtree Name – q_”log name”_”server name”

Small databases that are not heavily utilised and do not need granular restores can be grouped together and will be backed up together. NOTE – when a restore is performed on a single database in this configuration, all databases in the group will be restored, so a restore should only be performed in a DR scenario.

Database
LUN Name – “db1”_”server name”.lun
Volume Name – v_”db1”_”server name”
Qtree Name – q_”db1”_”server name”

Logs
LUN Name – “log1”_”server name”.lun
Volume Name – v_”log1”_”server name”
Qtree Name – q_”log1”_”server name”

A snapinfo LUN will also need to be created, as it will hold a copy of the transaction logs that have been backed up. Sizing for this LUN is shown in the Snapinfo LUN Sizing section below.
LUN Name – snapinfo_”servername”.lun
Volume Name – v_snapinfo_”server name”
Qtree Name – q_snapinfo_”server name”
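The naming scheme above lends itself to a small helper. Below is a hypothetical sketch that builds the LUN, volume and qtree names from a base name and server name, following the pattern in the examples (the function and the sample server name are illustrations, not part of SnapManager):

```python
def sql_layout_names(name: str, server: str) -> dict:
    """Build LUN, volume and qtree names following the convention above.

    name: 'systemdb', 'tempdb', 'snapinfo', or a database/log name.
    """
    base = f"{name}_{server}"
    return {
        "lun": f"{base}.lun",      # e.g. tempdb_sql01.lun
        "volume": f"v_{base}",     # e.g. v_tempdb_sql01
        "qtree": f"q_{base}",      # e.g. q_tempdb_sql01
    }

print(sql_layout_names("tempdb", "sql01"))
```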

Database Volume Sizing


To work out what the size of the database volume should be, follow the procedure below:

Database LUN size = Check with application owner or look on server

Database LUN change rate = 5%

Number of online snaps = 3

Database LUN Size x 5% = N

LUN Size + (N x 3) = volume size


For example (note: this is an illustration only and will not ascertain the size of the volume you are creating):

Database LUN size = 100GB

100GB x 5% = 5GB = N

5GB x 3 = 15GB

Volume Size = 100GB + (5GB x 3) = 115GB
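The same arithmetic can be expressed as a short function. This is a sketch assuming the 5% change rate and three online Snapshot copies used in the example; the log volume sizing in the next section uses the identical formula, so the same helper applies there:

```python
def volume_size_gb(lun_size_gb: float, change_rate: float = 0.05,
                   online_snaps: int = 3) -> float:
    """Volume size = LUN size + (LUN size * change rate * number of online snaps)."""
    n = lun_size_gb * change_rate        # N: space one Snapshot copy may consume
    return lun_size_gb + n * online_snaps

print(volume_size_gb(100))  # 115.0, matching the worked example above
```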


Log Volume Sizing


To work out what the size of the log volume should be, follow the procedure below:

Log LUN size = Check with application owner or look on server

Log LUN change rate = 5%

Number of online snaps = 3

Log LUN Size x 5% = N

LUN Size + (N x 3) = volume size


For example (note: this is an illustration only and will not ascertain the size of the volume you are creating):

Log LUN size = 100GB

100GB x 5% = 5GB = N

5GB x 3 = 15GB

Volume Size = 100GB + (5GB x 3) = 115GB

Snapinfo LUN Sizing


To make sure that we have adequate space available for our SQL backups, we must size the snapinfo volume and LUN. To do this, run through the following steps for each SQL server:

Step 1

Run the script below on the database that is being moved:

-- Create a temp table to accept the results of DBCC SQLPERF(LOGSPACE)
create table #output (dbname sysname, log_size real, usage real, status int)
-- Execute the command, putting the results in the table
insert into #output
exec ('dbcc sqlperf(logspace)')
-- Display the results
select *
from #output
go
-- Open a cursor over the log sizes
declare output_cur cursor read_only for
select log_size
from #output
-- Sum the log sizes across all databases
declare @v1 as real
declare @base_val as real
set @base_val = 0
open output_cur
FETCH NEXT FROM output_cur
INTO @v1
WHILE @@FETCH_STATUS = 0
BEGIN
set @base_val = @base_val + @v1
FETCH NEXT FROM output_cur
INTO @v1
END
-- Add 15 MB of headroom
set @base_val = @base_val + 15
PRINT 'BASE_VAL = ' + cast(@base_val as nvarchar) + ' MB'
-- Clean up
close output_cur
deallocate output_cur
drop table #output

BASE_VAL =  21.46906 MB

Step 2

Total number of transaction log files to be kept in the snapinfo LUN = 3

LUN Size = 21.46906 * 3 ≈ 64MB

Snapinfo Volume Sizing


To calculate the size of the snapinfo volume, follow the procedure below:

Volume size = LUN size + (largest transaction log * 3)

64MB + (21.46906 * 3) ≈ 128MB
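The snapinfo arithmetic can be sketched the same way; here base_val_mb is the BASE_VAL figure produced by the script in Step 1, and keeping three transaction logs matches the example above (the helper itself is an illustration, not part of SnapManager):

```python
def snapinfo_sizes_mb(base_val_mb: float, logs_retained: int = 3):
    """Return (LUN size, volume size) in MB for the snapinfo LUN and volume."""
    lun_size = round(base_val_mb * logs_retained)                # Step 2 above
    volume_size = round(lun_size + base_val_mb * logs_retained)  # volume formula
    return lun_size, volume_size

print(snapinfo_sizes_mb(21.46906))  # (64, 128), matching the example above
```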