Monday, December 5, 2016

Tutorial: Move all volumes from one aggregate to another

Whether it's for a migration or a non-disruptive operation, every once in a while you might need to empty an entire aggregate and move all its volumes to one or more other aggregates.  Those volume moves can take a while, and you don't want them all running at the same time; maybe you want only 1 at a time, or maybe 2.  I've created a workflow that does exactly that, using an Excel file as input and a datasource to collect the volume move information.  In this tutorial we will go over all the bits and pieces, including reservations, custom datasources, importing Excel files and repeat rows.

Custom Scheme and Dictionaries

In WFA, there is no information available about volume moves, so we need to create our own datasource to collect that information.  Before we can create the datasource, we first need a new scheme and dictionary.

Originally, I used a single dictionary called "volume".  But then I started thinking about the primary keys of such a volume move.  When collecting volume move information, I noticed that finished moves are still reported as well.  So if I were to move the same volume twice, I would need more than just the volume name as a primary key.  I added "start_timestamp" as an additional primary key; you can't move the same volume twice at the same time, right?

Watch out for long primary keys !!!

But now I would have 4 primary keys :
  • volume name
  • vserver name
  • cluster name
  • timestamp
Primary keys are not unlimited in WFA.  There is a maximum combined key length.  To make sure we don't go over the limit, I have split the dictionary into 3 dictionaries :
  • cluster
  • vserver
  • volume
This way the volume dictionary has only "name", "start_timestamp" and "vserver_id" as its primary key.  By reducing the vserver & cluster parts to foreign keys, I shouldn't hit the limit.  Unfortunately, I figured this one out the hard way :(.
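To see how those keys chain together, here is a tiny standalone PowerShell sketch.  The values are made up and the variable names are mine; only the field names mirror the dictionaries above:

```powershell
# Sketch: how the dictionary keys chain together (illustrative values only).
$clusterPrimaryAddress = ""
$vserverName           = "svm1"
$volumeName            = "vol_data01"
$startTimestamp        = 1480932000        # seconds since epoch

# vserver id = cluster key + "_" + vserver name
$vserverId = $clusterPrimaryAddress + "_" + $vserverName

# a volume move record is uniquely identified by (name, vserver_id, start_timestamp)
$volumeKey = @{
    name            = $volumeName
    vserver_id      = $vserverId
    start_timestamp = $startTimestamp

"vserver_id = $vserverId"
"volume key = $($, $($volumeKey.vserver_id), $($volumeKey.start_timestamp)"
```

Because the timestamp is part of the volume key, moving the same volume a second time simply produces a second record instead of a key collision.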

Custom Datasource

As mentioned, WFA has no built-in information about volume moves, so we need our own datasource to collect it.  I just used my Generic Datasource Template that is available on this blog and changed the "custom code" part.

# ==============================================================================================
# ======================================== START CUSTOM CODE ===================================
# ==============================================================================================

$cluster_table = @()
$volume_table = @()
$vserver_table = @()

# get all clusters
$rs = Invoke-MySqlQuery -query "select primary_address,name from cm_storage.cluster"

$cluster_count = $rs[0]
if($cluster_count -gt 0){
    for($x=1;$x -le $cluster_count;$x++){
        $cluster_primaryaddress = $rs[$x].primary_address
        $cluster_name = $rs[$x].name

        logDebug "Processing cluster $cluster_name"

        # add cluster in table
        $hc = @{}
        $ = $cluster_primaryaddress
        $hc.primary_address = $cluster_primaryaddress
        $ = $cluster_name
        $oc = New-Object -TypeName psobject -Property $hc
        $cluster_table += $oc

        try{
            $controller = Connect-WfaCluster $cluster_primaryaddress -EA Stop

            # get vservers
            $vservers = Get-NcVserver -Controller $controller
            foreach($vs in $vservers){
                # store vserver
                $hvs = @{}
                $ = $ + "_" + $vs.Vserver
                $ = $vs.Vserver
                $hvs.cluster_id = $
                $ovs = New-Object -TypeName psobject -Property $hvs
                $vserver_table += $ovs

                logDebug "Processing vserver $($vs.Vserver)"
            }

            # get volmoves (cluster-wide; every record carries its own vserver)
            $volmoves = Get-NcVolMove -Controller $controller

            # loop volmoves
            foreach($vm in $volmoves){

                $hv                                = @{}
                $                           = $vm.Volume
                $hv.phase                          = $vm.Phase
                $hv.state                          = $vm.State
                $hv.source_aggregate               = $vm.SourceAggregate
                $hv.source_node                    = $vm.SourceNode
                $hv.destination_aggregate          = $vm.DestinationAggregate
                $hv.destination_node               = $vm.DestinationNode
                $hv.start_timestamp                = [double]$vm.StartTimestamp
                $hv.actual_completion_timestamp    = [double]$vm.ActualCompletionTimestamp
                $hv.actual_duration                = [double]$vm.ActualDuration
                $hv.bytes_remaining                = [double]$vm.BytesRemaining
                $hv.bytes_sent                     = [double]$vm.BytesSent
                $hv.completion_code                = [int]$vm.CompletionCode
                $hv.completion_status              = $vm.CompletionStatus
                $hv.cutover_action                 = $vm.CutoverAction
                $hv.cutover_attempted_count        = [int]$vm.CutoverAttemptedCount
                $hv.cutover_attempts               = [int]$vm.CutoverAttempts
                $hv.cutover_hard_deferred_count    = [int]$vm.CutoverHardDeferredCount
                $hv.cutovers_soft_deferred_count   = [int]$vm.CutoversSoftDeferredCount
                $hv.cutover_trigger_timestamp      = [double]$vm.CutoverTriggerTimestamp
                $hv.cutover_window                 = [int]$vm.CutoverWindow
                $hv.details                        = $vm.Details
                $hv.estimated_completion_time      = $vm.EstimatedCompletionTime
                $hv.estimated_remaining_duration   = $vm.EstimatedRemainingDuration
                $hv.execution_progress             = $vm.ExecutionProgress
                $hv.internal_state                 = $vm.InternalState
                $hv.jobid                          = [int]$vm.Jobid
                $hv.job_uuid                       = $vm.JobUuid
                $hv.last_cutover_trigger_timestamp = [double]$vm.LastCutoverTriggerTimestamp
                $hv.managing_node                  = $vm.ManagingNode
                $hv.percent_complete               = [int]$vm.PercentComplete
                $hv.prior_issues                   = $vm.PriorIssues
                $hv.replication_throughput         = $vm.ReplicationThroughput
                $hv.vserver_id                     = $cluster_primaryaddress + "_" + $vm.Vserver
                $ov = New-Object -TypeName psobject -Property $hv
                # store in table
                $volume_table += $ov
            }
        }catch{
            LogWarn $_.Exception.Message
        }
    }
}

CheckDictionary -dictionary 'Cluster' -table $cluster_table
CheckDictionary -dictionary 'Vserver' -table $vserver_table
CheckDictionary -dictionary 'Volume' -table $volume_table

# ==============================================================================================
# ======================================== END CUSTOM CODE   ===================================
# ==============================================================================================

Custom Command for Volume Move Reservations

After a volume move is executed, we want WFA to know about it right away; we can't wait for our datasource to pick up the change.  We need a reservation, but the built-in volume move command only registers the aggregate swap, so I created my own command to register a volume move against my custom dictionary.  I'm passing my volume and destination aggregate (just as in the vol move command) and using the following MySQL for the reservation

# Create a new volume move
# note: "volmove" below is an assumed name for the custom scheme -- use your own scheme name
        NULL AS id,
        '${volume}' AS name,
        'initializing' AS phase,
        NULL AS state,
        '${sourceaggregate}' AS source_aggregate,
        NULL AS source_node,
        '${destinationaggregate}' AS destination_aggregate,
        NULL AS destination_node,
        UNIX_TIMESTAMP(NOW()) AS start_timestamp,
        NULL AS actual_completion_timestamp,
        NULL AS actual_duration,
        NULL AS bytes_remaining,
        NULL AS bytes_sent,
        NULL AS completion_code,
        NULL AS completion_status,
        NULL AS cutover_action,
        NULL AS cutover_attempted_count,
        NULL AS cutover_attempts,
        NULL AS cutover_hard_deferred_count,
        NULL AS cutovers_soft_deferred_count,
        NULL AS cutover_trigger_timestamp,
        NULL AS cutover_window,
        NULL AS details,
        NULL AS estimated_completion_time,
        NULL AS estimated_remaining_duration,
        NULL AS execution_progress,
        NULL AS internal_state,
        NULL AS jobid,
        NULL AS job_uuid,
        NULL AS last_cutover_trigger_timestamp,
        NULL AS managing_node,
        NULL AS percent_complete,
        NULL AS prior_issues,
        NULL AS replication_throughput,
 AS vserver_id
        volmove.vserver vserver,
        volmove.cluster cluster
        vserver.cluster_id =
        AND = '${vserver}'
        AND (
            cluster.primary_address = '${cluster}'
            OR = '${cluster}'

And the following code for the verification

# note: "volmove" is an assumed name for the custom scheme -- use your own scheme name
    volmove.volume volume,
    volmove.vserver vserver,
    volmove.cluster cluster
    volume.vserver_id =
    AND vserver.cluster_id =
    AND = '${volume}'
    AND = '${vserver}'
    AND (
        cluster.primary_address = '${cluster}'
        OR = '${cluster}'
    AND volume.source_aggregate = '${sourceaggregate}'
    AND volume.destination_aggregate = '${destinationaggregate}'

I do realize that the timestamp is not part of the verification, but the chance is pretty low that you will move the same volume multiple times from the same source aggregate to the same destination aggregate.

Custom Excel Datasource

To make sure we can schedule this workflow, the user input has to be abstracted from the workflow, so we use an Excel file.  We install the EDM package (shown in another tutorial on this blog) and clone the datasource for our own custom dictionary.
Dictionary fields :
  • cluster
  • vserver regex
  • volume regex
  • source aggregate
  • destination aggregate
We import this info from an Excel file, and with this simple model we should get pretty far.  The regular expressions give us quite some flexibility.
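As an illustration, a sheet for emptying an aggregate might look like this (all values here are made up; the column names simply follow the dictionary fields listed above):

```text
cluster      vserver_regex  volume_regex  source_aggregate  destination_aggregate     svm.*          vol_.*        aggr_old          aggr_new     svm_cifs.*     .*            aggr_old          aggr_sata01
```

One row per source aggregate is usually enough; the regexes decide which vservers and volumes on that aggregate are in scope.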

Repeat row

To list all the volumes matching our Excel file, we prepare the following filter, returning plain cm_storage.volume objects.

 AS '',
 AS '',
    cluster.primary_address AS 'vserver.cluster.primary_address'
    cm_storage.volume volume,
    cm_storage.vserver vserver,
    cm_storage.cluster cluster,
    cm_storage.aggregate aggregate,
    cm_storage.node node,
    aggr_to_empty.aggregate ate
# the "ate" column names follow the Excel dictionary fields listed above
    volume.vserver_id =
    AND volume.aggregate_id =
    AND vserver.cluster_id =
    AND aggregate.node_id =
    AND node.cluster_id =
    AND = ate.source_aggregate
    AND (
        ate.cluster =
        OR ate.cluster = cluster.primary_address
    AND REGEXP ate.volume_regex
    AND REGEXP ate.vserver_regex
    volume.size_mb DESC

Using this filter, we create a repeat row.

"Too many volumes moving" Filter

Of course we don't want ALL volumes to start moving at the same time, so we use another filter to check how many volume moves are already running.  We want the limit to be a variable though, so we store it in a constant called "MAX_TRANSFERS".

 AS '',
 AS '',
    cluster.primary_address AS 'vserver.cluster.primary_address'
# "volmove" is an assumed name for the custom scheme -- use your own scheme name
    volmove.volume volume,
    volmove.vserver vserver,
    volmove.cluster cluster
    volume.vserver_id =
    AND vserver.cluster_id =
    AND (
 = '${clusterName}'
        OR cluster.primary_address = '${clusterName}'
    )
    AND (
        volume.phase = 'replicating'
        OR volume.phase = 'finishing'
        OR volume.phase = 'initializing'
    )
With a simple conditional check on the move & reservation command, we can now limit the number of volume moves to a custom number.
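The throttling idea itself is simple.  As a standalone PowerShell sketch (the variable names and sample rows are made up; this is not the WFA conditional syntax itself):

```powershell
# Sketch: only start a new move when fewer than MAX_TRANSFERS moves are active.
$MAX_TRANSFERS = 2

# phases that count as "still moving" (matches the filter above)
$activePhases = @('initializing','replicating','finishing')

# pretend these rows came back from the "too many volumes moving" filter
$currentMoves = @(
    @{ name = 'vol_a'; phase = 'replicating' },
    @{ name = 'vol_b'; phase = 'completed'   }

$activeCount = @($currentMoves | Where-Object { $activePhases -contains $_.phase }).Count
$canStartNewMove = $activeCount -lt $MAX_TRANSFERS

"active moves: $activeCount / $MAX_TRANSFERS -> start new move: $canStartNewMove"
```

In the workflow, the same comparison lives in the conditional check on the move & reservation command, with MAX_TRANSFERS supplied by the constant.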

I know this is a lot to take in.  Watch the tutorial, if something isn't clear, let me know.

Download dar-file


  1. Just changed this to version 1.0.1. Bugfix.

  2. Workflow works well but after I run it for a while I start to get an error "Duplicate entry '6' for key 'PRIMARY' on the "Get CDOT Volume Move" data source. I tried to reset the schema but the error still occurs.

  3. I was able to get this to work initially against a test lab cluster, moving two volumes to a new aggregate, then switching the aggregates in the spreadsheet, moving the volumes back to the original aggregate. After that, I'm not able to get any volumes to appear in the workflow preview whether using the same exact Excel file or working with another cluster. Looking at the logs, it seems it is reading the Excel file successfully but I'm not sure where to troubleshoot from there. Any ideas?

  4. This comment has been removed by the author.

  5. Hi Wfaguy, i was at Netapp insight and had the privilege of attending your session.

    Can i know if you have the section where the volume move sends an email notification for the successful volume move.

    It would be great if you can write up that for us or give us an hint. Thanks, Vinay