Visio 2010 Workaround: Application Hangs when opening ‘Database Model Diagram’ template

I received the following error today when attempting to use Visio’s ‘Database Model Diagram’ template under category ‘Software and Database’.

The program VISIO.EXE version 14.0.6122.5000 stopped interacting with Windows and was closed. To see if more information about the problem is available, check the problem history in the Action Center control panel.  Process ID: 3fa8  Start Time: 01cdad7f36f7bba9  Termination Time: 0  Application Path: D:\Program Files\Microsoft Office\Office14\VISIO.EXE  Report Id: 9c9eef0a-1972-11e2-9a12-005056995c65

Sure enough, per the seemingly improbable suggestion made in the post below, Visio did not hang after changing the default printer to 'Microsoft XPS Document Writer'. Don't make the mistake I did and set it to 'Microsoft XPS Document Writer (redirected 7)'.

http://answers.microsoft.com/en-us/office/forum/office_2010-visio/visio-2010-not-responding-when-opening-database/64b2b582-53f0-41f2-bb47-d3a3f36aeb82

As it turns out, Remote Desktop (RDP) sessions map remotely-mounted printers, which can cause issues with Visio's templates.

Below is similar to what you should see once you’ve assigned the correct default printer.

Parallel Data Warehouse (PDW) Performance Tip: CTAS vs INSERT SELECT

When loading tables in Microsoft's Parallel Data Warehouse (PDW) via T-SQL, one will soon encounter the need to load data into a new table. Whether the developer needs to perform an initial data load of a table or go through the PDW-specific steps to perform a 'trunc and load' of a table, there are two table load methods to consider.

PDW T-SQL Load Methods:

  • INSERT SELECT
  • CREATE TABLE AS SELECT (CTAS)

To make a direct comparison between the two methods, I have provided the actual PDW query and execution plans generated by each method using identical SELECTs.

Load Method: INSERT SELECT  

Usage
Appends one or more new rows to a table. Add one row by listing the row values or add multiple rows by inserting the results of a SELECT statement.

Example
INSERT INTO TableA SELECT only, use, this, method, to, append FROM TableB

Generated Query Plan


Generated Execution Plan

Load Method: Create Table as Select (CTAS)

Usage
Creates a new table in SQL Server PDW that is populated with the results from a SELECT statement.

Example 
-- CTAS infers the column definitions from the SELECT list
CREATE TABLE myTable
WITH
( CLUSTERED INDEX (yearId) )
AS SELECT yearId, some, stuff
FROM TableB;

Generated Query Plan

Generated Execution Plan

*When comparing to INSERT SELECT, notice the absence of the Index Scan and Parallelism steps above. For this particular SELECT, CTAS results in an Index Seek and also does away with the Parallelism steps.

The Verdict – Use CREATE TABLE AS SELECT (CTAS) when loading empty tables!

Performance Results
CTAS loaded the test table more than twice as fast as INSERT SELECT.

  • CTAS runtime: 1min  44sec
  • INSERT SELECT runtime: 4min  2sec 

Simple Explanation
On the PDW, INSERT SELECT is a fully logged transaction and runs serially within each distribution. CTAS, however, is minimally logged and runs in parallel across all nodes.
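
For reference, here is a minimal sketch of how a CTAS-based 'trunc and load' of an existing table might look; the table and column names below are made up for illustration, and the drop/rename pattern mirrors the CTAS syntax covered in my PDW table-creation post.

-- Hypothetical 'trunc and load' of an existing table via CTAS:
-- build the replacement table in parallel, then swap names.
CREATE TABLE dbo.FactSales_New
WITH ( DISTRIBUTION = HASH (AcctCorp),
       CLUSTERED INDEX (CutoverDate) )
AS
SELECT AcctCorp, CutoverDate, SalesAmt
FROM dbo.FactSales_Staging;

DROP TABLE dbo.FactSales;
RENAME OBJECT [FactSales_New] TO [FactSales];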

NOTE: When performing CTAS or re-creating tables in any way, you will need to create statistics on those tables upon loading.

Ideally, you should create stats on all join, GROUP BY, ORDER BY, and restriction (WHERE) columns. SQL Server PDW does not automatically create and update statistics on the Control node every time SQL Server creates or updates statistics on the Compute nodes:

-- This will create stats for all columns on all objects

select 'create statistics ' + b.name + ' on dbo.' + a.name + ' (' + b.name + ')'
from sys.tables a, sys.columns b
where a.object_id = b.object_id
and not exists (
    select null from sys.stats_columns
    where object_id in (select object_id from sys.stats_columns group by object_id having count(*) >= 1)
    and object_id = b.object_id and column_id = b.column_id)
order by a.name, b.column_id;

Parallel Data Warehouse (PDW) How-To: Avoid ShuffleMove and PartitionMove Operations

There are a couple of process-hungry operations that can be avoided when developing or migrating T-SQL queries into Microsoft's Parallel Data Warehouse (PDW) environment. With proper planning and attention to detail, one can side-step two operations more commonly known as the Data Movement Service (DMS) ShuffleMove and PartitionMove operations. To shed a bit of light on what DMS is, it's basically a service agent that moves data between the appliance compute nodes within the PDW 'shared-nothing' architecture.

In an attempt to set a hook of interest in ya, I'll share a recent project breakthrough associated with bypassing these two DMS operations. Recently my PDW implementation team experienced a 600% performance boost in query runtimes after reformulating a few steps in a query that previously utilized the PartitionMove and ShuffleMove DMS operations… I would assume I have your attention now!

So let's get to it… the idea of a large table or the concept of a subquery should be familiar to data developers, DBAs, and the like. When I refer to table distributions this may sound foreign; however, I am simply referring to the method by which the developer has decided to distribute the data across the PDW nodes via a CREATE TABLE statement. There are only two methods of distribution, and one must be chosen when creating a table:

Replicated Tables – Replicated tables are an ideal method for storing dimensional tables, lookup tables, or other small reference tables that are commonly used in PDW joins. The developer should consider creating tables via the replicated method when they would like to have a local copy of a table on each node. Since every node has a local copy, joins to the replicated table should be quick and efficient as long as the DMS doesn’t have to arrange for datasets to be distributed on other nodes (see incompatible joins toward the bottom of this post) and as long as the replicated table is relatively small, say less than 5mil records (this is just an estimate, there is a science behind choosing distribution modes and I’ll dive further into this on a future post).

Sample:

CREATE TABLE [testpdwdb].[dbo].[replicatedTable]
(
   [AcctCorp] INT NULL,
   [CutoverDate] DATE NULL
)
WITH (DISTRIBUTION = REPLICATE);

Distributed Tables – Distributed tables are best suited for fact tables, very large dimension tables, or larger tables that contain a column with many distinct values used in table join predicates (columns that meet these criteria are considered candidate distribution columns). Finding the best distribution column can be a time-consuming process, as you really have to understand the queries that run (or may run) against the table. After an appropriate column has been chosen, PDW's hash distribution algorithm takes care of evenly distributing the data across the appliance nodes.

Note: Even data distribution across the appliance compute nodes prevents queries from encountering performance-draining 'data skew' scenarios. Data skew occurs when the developer makes a poor choice of distribution column, which can result in the hash algorithm 'clumping' together data that share the same distribution column values. Ultimately, this results in some nodes working overtime while other nodes are vacationing… it's tough to call this a parallel data warehouse if all nodes are not working at the same time under similar workloads!
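
As a quick sanity check before committing to a distribution column, a simple row count per candidate value will expose obvious clumping. This is just a sketch against the sample table defined below; the column name is illustrative.

-- Rough skew check on a candidate distribution column (names are illustrative):
-- a handful of values owning most of the rows means the hash will 'clump'
-- data onto a few distributions.
SELECT TOP 20 [AcctCorp], COUNT(*) AS row_cnt
FROM [testpdwdb].[dbo].[distributedTable]
GROUP BY [AcctCorp]
ORDER BY row_cnt DESC;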

Sample:

CREATE TABLE [testpdwdb].[dbo].[distributedTable]
(
   [AcctCorp] INT NULL,
   [CutoverDate] DATE NULL
)
WITH ( CLUSTERED INDEX ([CutoverDate]),
       DISTRIBUTION = HASH ([AcctCorp]) );

With that mini-lesson out of the way, let me formally introduce the ShuffleMove and the PartitionMove DMS operations, as these performance-draining operations kick in when tables are not distributed correctly.

The ShuffleMove: 

Let's start with the ShuffleMove. A ShuffleMove occurs when the PDW must move data between nodes to fulfill a query request. These occur when a query joins two or more tables that are not distributed on the same column (also known as an incompatible join). To create a compatible join, PDW must create a temp table on every node for the incompatible table, redistribute the data from the incompatible table on a compatible column across the nodes, join the temp table back to the other table, and stream the results to the client.

As you can see in the following diagram, there is a lot of data movement going on in a ShuffleMove to perform a simple query task. Records in fact_dealer_sales must be 'shuffled' to align with the make_id hash of table fact_manufacturer_sales.
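
To make that concrete, here is a rough sketch using the two tables from the diagram; the column names are my own assumptions, not actual schema.

-- Illustrative only: if fact_dealer_sales is distributed on dealer_id and
-- fact_manufacturer_sales is distributed on make_id, this join is incompatible
-- and will trigger a ShuffleMove of fact_dealer_sales on make_id.
SELECT d.make_id,
       SUM(d.sale_amt) AS dealer_sales,
       SUM(m.sale_amt) AS manufacturer_sales
FROM fact_dealer_sales d
JOIN fact_manufacturer_sales m
  ON d.make_id = m.make_id
GROUP BY d.make_id;

-- Had both tables been created WITH (DISTRIBUTION = HASH (make_id)),
-- the same join would be distribution-compatible and require no data movement.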


The Partition Move:
A PartitionMove is the most expensive DMS operation and involves moving large amounts of data to the Control Node and across all of the appliance distributions on each node (8 per node). Partition moves are typically the result of a GROUP BY that does not include the distribution column. When a query attempts a GROUP BY on a non-distribution column, the aggregation takes place locally on each node's distributions (10 nodes * 8 distributions per node = 80 distributions), then the results are sent up to the Control Node for final aggregation.

Note: the column fact_dreamcar_sales.make would not normally reside in the fact table as a text field (it would more likely be make_id) and was included just for kicks. BTW, Lamborghini makes one heck of a supercar; however, it would've made the 'make' column quite wide. Additionally, Zonda is a 'model' and not a 'make', although the Zonda did 'make' a name for its manufacturer – Pagani (it just occurred to me that I've been watching too much of BBC's Top Gear lately).
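
For illustration, the kind of GROUP BY that triggers a PartitionMove might look like the sketch below; fact_dreamcar_sales and its columns are the made-up names from the diagram, so treat this purely as an example.

-- GROUP BY on a non-distribution column: if fact_dreamcar_sales is distributed
-- on, say, dealer_id, each of the 80 distributions aggregates its local rows by
-- 'make', then the partial results are shipped to the Control Node for the final rollup.
SELECT make, SUM(sale_amt) AS total_sales
FROM fact_dreamcar_sales
GROUP BY make;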

Here are some steps that can be taken to avoid PDW ShuffleMove and PartitionMove operations:

  • CTAS, CTAS, CTAS… when you see a table that has a distribution method that doesn't work for the majority of queries run against it, use CTAS to reassign the distribution mode or the distribution column (for syntax on how to do this, see https://saldeloera.wordpress.com/2012/07/12/creating-tables-on-sql-server-parallel-data-warehouse-pdw/)
  • Research the size (row count, width)  and common joins to every table that you create in the PDW. It is critical to align every table to a proper distribution method and column.
  • Keep joins between non-compatible data sets to a minimum.
  • Try to use a distributed column as the join predicate when joining two distributed tables.
  • If necessary, explicitly create temp tables via CTAS to store non-compatible datasets and align distributions in the WITH clause (see the sketch after this list).
  • Avoid using GROUP BY on non-distributed columns.
  • If a GROUP BY is necessary on a non-distributed column, use the DISTRIBUTED_AGG query hint (also shown after this list). This will force dynamic data redistribution on the table before the query is processed.
  • Cost-based optimization is the execution engine's ability to derive the most efficient query plan based on table size, statistics, and cardinality. As of AU2, there is no cost-based query optimizer, so additional steps may be required (see the CTAS hint above) to get adequate performance out of your queries.
  • Thoroughly examine the steps created in the DMS D-SQL (Distributed SQL) plans to identify ShuffleMove or PartitionMove operations.
  • Left Outer Joins between a replicated (Left table) and distributed table (Right table) are considered incompatible joins, use sparingly.
  • Right Outer Joins between a distributed(Left table) and replicated table (Right table) are considered incompatible joins, use sparingly.
  • Full Outer Joins are incompatible joins, use sparingly.
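
To tie a couple of those bullets together, below is a rough sketch of the two techniques we leaned on the most: CTAS-ing an incompatible dataset into a temp table that is distributed on the join column, and the DISTRIBUTED_AGG hint for a GROUP BY on a non-distribution column. All object and column names here are invented for the example, and the temp-table WITH clause follows the temp-table syntax from my PDW table-creation post.

-- Sketch 1: realign an incompatible dataset before the join by CTAS-ing it into a
-- temp table distributed on the join column (names are made up).
CREATE TABLE #alignedOrders
WITH ( LOCATION = USER_DB,
       DISTRIBUTION = HASH (AcctCorp) )
AS
SELECT AcctCorp, OrderDate, OrderAmt
FROM testpdwdb.dbo.someOrderSnapshotTable;

-- The join on AcctCorp is now distribution-compatible:
SELECT o.AcctCorp, SUM(o.OrderAmt) AS total_amt
FROM #alignedOrders o
JOIN testpdwdb.dbo.distributedTable d
  ON o.AcctCorp = d.AcctCorp
GROUP BY o.AcctCorp;

-- Sketch 2: if you must GROUP BY a non-distribution column, the DISTRIBUTED_AGG
-- hint forces the table to be redistributed on that column before aggregation.
SELECT CutoverDate, COUNT(*) AS acct_cnt
FROM testpdwdb.dbo.distributedTable
GROUP BY CutoverDate WITH (DISTRIBUTED_AGG);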

If interested, below I included the T-SQL skeleton and the DMS Query Plan for the query that was running for 4+ hours on the PDW. Note the T-SQL was pulled directly off of the SMP (where it runs in 40 min) and was modified slightly to run on the PDW.

As you look at the T-SQL and the DMS Query Plan below, hopefully a couple of burning questions come to mind after reading this post – maybe something along the lines of "Are all of those joins to subqueries aligned?", or "Is the distribution key included in those GROUP BYs?", or maybe even "WTH is this?"… we asked ourselves these questions, made some code changes (CTAS'd the subqueries into temp tables and aligned distribution columns before joining), and managed to get it down to a 30-minute runtime on the PDW… not bad considering all of the indexes the SMP required to get it to run in 40 min!

Declare @EndDate as date
Set @EndDate = '7/4/2012'

Declare @StartDate as date
Set @StartDate = '7/4/2012'

Delete From testpdwdb.dbo.someFinancialMasterTable where fiscal_date = @EndDate
insert into testpdwdb.dbo.someFinancialMasterTable
SELECT
FROM
(
   SELECT
   FROM
  (
    SELECT
    FROM
(
      SELECT
      FROM
(
         SELECT
         FROM testpdwdb.dbo.someOrderSnapshotTable
         WHERE someDatecol = DATEADD(D, -1, @StartDate)
)c
      JOIN
      (
         SELECT
         FROM testpdwdb.dbo.someProductRevenueTable
         WHERE Fiscal_date = DATEADD(D, -1, @StartDate)
)r
      ON c.sys_oci = r.sys
      AND c.prin_oci = r.prin
      AND c.sub_acct_no_oci = r.sub_acct_no
      AND R.some_other_date_col = C.some_date_col
      JOIN
      (
        SELECT
        FROM testpdwdb.dbo.someMarketTable
      )M
      ON R.sys = M.sys
      and r.prin = m.prin
    LEFT JOIN
    (
        SELECT
        FROM testpdwdb.dbo.someHierarchyTable
     )h
    ON r.gl_acct_id = h.gl_acct_id
    ) m
GROUP BY
) a
FULL OUTER JOIN
(
SELECT
FROM
(
    SELECT
    FROM
    (
        SELECT
        FROM testpdwdb.dbo.someItemTable a
        full outer JOIN
        (
            SELECT
            FROM testpdwdb.dbo.someAccountItemTable b
            left join testpdwdb.dbo.someActivityTable c
            on b.sys = c.sys
            and b.prin = c.prin
            and b.SUB_ACCT_NO = c.sub_acct_no
            and b.date = c.date
            WHERE b.date BETWEEN @StartDate AND @EndDate
            and c.date BETWEEN @StartDate AND @EndDate
        ) b
    ON a.sys = b.sys
    AND a.prin = b.prin
    AND a.acct = b.sub_acct_no
    AND UPPER(a.product) = UPPER(b.product)
    AND a.date = b.date
    GROUP BY
) z
JOIN
(
    SELECT
    FROM
    testpdwdb.dbo.someMarketTable
)M
ON z.sys = M.sys
and z.prin = m.prin
GROUP BY
) X
FULL OUTER JOIN
(
    SELECT
    FROM
    (
        SELECT
        FROM testpdwdb.dbo.someFinancialModelTable
WHERE DATE BETWEEN @StartDate AND @EndDate AND gl_acct_id IS NOT NULL
    )pm
    JOIN
    (
        SELECT
        FROM testpdwdb.dbo.someMarketTable
    )M
    ON pm.sys = M.sys
    and pm.prin = m.prin
    GROUP BY
) PM
ON X.some_entity_col = PM.some_entity_col
AND UPPER(X.some_customer_col) = UPPER(PM.some_customer_col)
AND UPPER(X.some_product_col) = UPPER(PM.some_product_col)
AND X.gl_acct_id = PM.gl_acct_id
) b
ON a.some_entity_col = b.some_entity_col
AND UPPER(a.some_customer_col) = UPPER(b.some_customer_col)
AND UPPER(a.some_product_col) = UPPER(b.some_product_col)
AND a.gl_acct_id = b.gl_acct_id
GROUP BY
) x
GROUP BY

…and the Parallel Data Warehouse DMS plan only a mother could love:

T-SQL Tip: Highlight Code Between Any Two Parenthesis

Where has this been all my working life? If you frequently find yourself scouring complex (and sometimes needlessly complex) T-SQL code, simply place the cursor on the inside or outside of a parenthesis and press CTRL+SHIFT+]. SQL Server Management Studio will then highlight the code for that particular SELECT block. There is some complex T-SQL out there in the wild (future consultants beware!); this will save you loads of scrolling and searching time and help you wrangle in pesky blocks of extended code.

Crunch that code!
– Sal

Parallel Data Warehouse (PDW) Basics: CREATE TABLE and CTAS Syntax

After forgetting the syntax for temp tables a couple of times, I decided to write a brief overview of the table creation syntax for Microsoft’s Parallel Data Warehouse (PDW) architecture.

One thing worth noting up-front: this post does not include the considerations that should be made in determining whether a table should be distributed vs. replicated.

TEMPORARY TABLE BASICS

- Temp tables will reside in tempdb (no surprise there); however, the ## global prefix as we see it on SQL Server SMP is not available.
- Temp tables can be either distributed or replicated
- Temp tables cannot have partitions, views, or indexes
- Table permissions cannot be changed
- Visible in only the current session
- CONTROL, INSERT, SELECT and UPDATE permissions are granted to the temp table creator.

Example: Distributed temporary table

CREATE TABLE testDB.dbo.#TableA (
   here int NOT NULL,
   we varchar(50),
   go varchar(50))
WITH
   ( LOCATION = USER_DB,
    DISTRIBUTION = HASH (here) );

NON-TEMP TABLE BASICS
- Indexes cannot be added (or dropped) after a table has been created (you must use the 'CREATE TABLE AS SELECT', aka 'CTAS', syntax if you want to preserve the underlying data).
- All tables are created with page compression; there is no configurable option for this.
- When creating a distributed table, keep in mind that partitions will exist in each of the 8 distributions per compute node. So if you want the math on that: a database with 10 yearly partitions on an SMP server translates in PDW to 10 (partitions) * 8 (distributions) = 80 partitions on each compute node. If you have a Parallel Data Warehouse (PDW) 'Full Rack', there will be a total of 800 partitions (80 partitions per node * 10 nodes)… interesting nonetheless!
- The default collation for the PDW appliance is Latin1_General_100_CI_AS_KS_WS. This is configurable; however, use caution if you plan on modifying a column collation, as it can lead to collation errors when comparing strings.
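
For what it's worth, pinning a column to a specific collation at create time looks like the sketch below (the table and column names are made up); explicitly stating the appliance default is a low-risk way to document intent.

CREATE TABLE dbo.collationExample (
   custName varchar(50) COLLATE Latin1_General_100_CI_AS_KS_WS NOT NULL,
   custNote varchar(50) NULL )
WITH ( DISTRIBUTION = REPLICATE );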

Example: Replicated Table with Index

CREATE TABLE myTable (
   yearId int NOT NULL,
   some varchar(50),
   stuff varchar(50))
WITH 
    ( CLUSTERED INDEX (yearId) );

Example: Distributed Table with Multiple Partitions

CREATE TABLE TableA (
   yearId int NOT NULL,
   some varchar(50),
   stuff varchar(50))
WITH ( DISTRIBUTION = HASH (yearId),
       PARTITION ( yearId RANGE RIGHT FOR VALUES
       ( 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012 ) ) );

Example:  Adding a new distribution column and clustered index to an existing table using ‘Create Table as Select’ aka ‘CTAS’

CREATE TABLE TableTemp WITH ( 
    DISTRIBUTION = HASH (newDistributionColumn),
    CLUSTERED INDEX (id ASC) )
AS SELECT * FROM TableOriginal;
DROP TABLE TableOriginal
RENAME OBJECT [TableTemp] TO [TableOriginal]

Below is the base syntax for creating tables as provided by Microsoft:

  CREATE TABLE [ database_name . [ dbo ] . | dbo. ] table_name
(
{ column_name <data_type>
[ COLLATE Windows_collation_name ]
[ NULL | NOT NULL ] }
[ ,...n ]
)[ WITH ( <table_option> [ ,...n ] ) ]

Create a new temporary table.

CREATE TABLE [ database_name . [ dbo ] . | dbo. ] #table_name
(
{ column_name <data_type>
[ COLLATE Windows_collation_name ]
[ NULL | NOT NULL ] }
[ ,...n ]
)
WITH ( LOCATION = USER_DB [, <table_option>
[ ,...n ] ] ) [;]
<data_type> ::=
datetimeoffset [ ( n ) ]
| datetime2 [ ( n ) ]
| datetime
| smalldatetime
| date
| time [ ( n ) ]
| float [ ( n ) ]
| real [ ( n ) ]
| decimal [ ( precision [ , scale ] ) ]
| money
| smallmoney
| bigint
| int
| smallint
| tinyint
| bit
| nvarchar [ ( n ) ]
| nchar [ ( n ) ]
| varchar [ ( n ) ]
| char [ ( n ) ]
| varbinary [ ( n ) ]
| binary [ ( n ) ]
<table_option> ::=
{
    [ CLUSTERED INDEX ( { index_column_name [ ASC | DESC ] } [ ,...n ] ) ]
  | [ DISTRIBUTION = { HASH( distribution_column_name ) | REPLICATE } ]
  | [ PARTITION( partition_column_name RANGE [ LEFT | RIGHT ]
        FOR VALUES ( [ boundary_value [,...n] ] ) ) ]
}

NOTE: When performing CTAS or re-creating tables in any way, you will need to create statistics on those tables upon loading.

Ideally, you should create stats on all join, GROUP BY, ORDER BY, and restriction (WHERE) columns. SQL Server PDW does not automatically create and update statistics on the Control node every time SQL Server creates or updates statistics on the Compute nodes:

-- This will create stats for all columns on all objects

select 'create statistics ' + b.name + ' on dbo.' + a.name + ' (' + b.name + ')'
from sys.tables a, sys.columns b
where a.object_id = b.object_id
and not exists (
    select null from sys.stats_columns
    where object_id in (select object_id from sys.stats_columns group by object_id having count(*) >= 1)
    and object_id = b.object_id and column_id = b.column_id)
order by a.name, b.column_id;

Dynamic Multi-Threaded SSIS using C#

Here is a snippet of code that will assist in the dynamic 'spin-up' of duplicate SSIS packages. To put it simply, I used this approach in a C# service that monitored a nightly FTP file-drop folder and inserted records into a queue table (one record per file) for SSIS processing. The code below essentially looks at the queue table on a timer interval, gathers the count of files ready to be processed, compares that count to the number of available dtexec threads (the configurable number of SSIS threads allowed minus the number of SSIS packages currently running), and fires off dtexec calls as needed. Per Microsoft's recommendation, the maximum number of concurrently running executables is equal to the total number of processors on the computer executing the package, plus two. In my case I configured this routine to have 18 max available processing threads, as the dtexec server contained 16 cores. Happy coding!
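
As a companion to the C# routine, here is a rough T-SQL sketch of the kind of queue check it performs on each timer tick; the queue table, columns, and status values are invented for illustration.

-- Hypothetical queue check run on each timer tick (all names invented):
-- compares files waiting to be processed against the dtexec slots still available.
DECLARE @maxThreads int = 18;   -- 16 cores + 2, per the guideline above

SELECT
    (SELECT COUNT(*) FROM dbo.SsisFileQueue WHERE status = 'READY')      AS files_ready,
    @maxThreads
  - (SELECT COUNT(*) FROM dbo.SsisFileQueue WHERE status = 'PROCESSING') AS threads_available;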

Parallel Data Warehouse (PDW) Lesson 1: Basic Architecture Overview

I’ll begin my coverage of Microsoft’s Parallel Data Warehouse (PDW) architecture / appliance with a brief overview of the Massively Parallel Processing (MPP) architecture and the specific hardware components of the PDW appliance.

For starters, think of PDW as a single, pre-configured server and network environment, comprised of all the hardware and software necessary to perform a very specific task – process and store massive amounts of data and retrieve it as quickly and reliably as possible. Overall, this goal is achieved by limiting user workloads on the PDW to reading vs. writing, parallel load balancing, and multi-level system redundancies.

The Parallel Data Warehouse appliance architecture is based on MPP technology, which has been in existence for a number of years going back to the early '80s; however, MPP was not widely adopted at that time due to high hardware costs. The primary difference between the traditional SQL Server symmetric multiprocessing (SMP) systems that we are all currently working with and the re-emerging MPP architecture lies in the method of processing, storage, and hardware configuration.

Scale-Up vs. Scale-Out
With SMP systems, as the size of and demands on the data grow, many server administrators choose to "throw more hardware" at emerging performance issues or increasing storage requirements. Whether it is adding more CPUs, RAM, or disks to the storage area network (SAN) of an existing SMP implementation, this approach is referred to as 'scaling up'. There are many issues with scaling up, and I will briefly summarize the heavy hitters below:
– Performance Bottlenecks: Increasing the number of disks managed by a single storage controller negatively impacts performance. In an SMP environment, the single system bus cannot efficiently manage multiple user requests against an excessive number of disks.
– System Availability: Maintenance tasks and server outages typically impact all users.
– Hardware Limitations: Eventually the system architecture/hardware will not support any more CPUs or memory.
– System Compatibility: Typically the SMP is a multivendor solution, which leads to technical challenges and incompatibility.
– Total Cost of Ownership: Ongoing maintenance and the cost of changing server and network capacity lead to significant ownership costs over time.

The other school of thought is to 'scale out', which offers a modular approach to scaling the server's capabilities. When scaling out, storage units or 'racks' are added to an MPP appliance as needed. Scaling out not only addresses the scale-up issues from above but also offers parallel processing of user requests via MPP-specific distribution and partitioning methods (this will be covered in Lesson 3: Distribution and Replication Tables).

Performance Bottlenecks RESOLVED:
Parallel Data Warehouse addresses performance bottlenecks via parallel processing, dual infiniband networks, multi-core CPUs, ample amounts of RAM, and multiple system buses.

System Availability RESOLVED:
Parallel Data Warehouse includes multiple levels of redundancy, including hot-swap drives, "warm" failover servers, mirrored drives, dual cooling fans, dual networks, dual power supplies, and failover clusters.

Hardware Limitations RESOLVED
Parallel Data Warehouse is expandable up to four full data racks

System Compatibility RESOLVED
Parallel Data Warehouse is a single vendor turn-key appliance

Total Cost of Ownership RESOLVED
Parallel Data Warehouse is known for ease of maintenance and effortless scalability

Microsoft has paired up with hardware vendors, namely HP and Dell, to provide a few configuration options that fit the specific storage needs of any mid- to large-size customer (more on that later).

The base configuration for a “full rack” is comprised of two physical racks – the control rack and the data rack as shown below.


Control Rack
The control rack essentially hosts the management hardware/software and acts as the liaison between user requests and the control nodes. As seen in the diagram above, the management node and the control node have backup servers to provide redundancy. I will provide detail for each of the servers found in the Control Rack.

Management Node – Think of this as the nerve center of the PDW. The management node contains node images for reimaging activities, deploys software updates to all appliance nodes, monitors system health, serves as the domain controller while handling non-login authentication, and performs general hardware and software management functions.

Control Node – Serves as  the central point of control for the appliance, point of client connection and user query processing, system and database metadata storage, and supports a centralized hardware monitoring scheme.

Landing Zone – Primarily serves as the point of entry for incoming data streams as data is cleansed and stored here.  Authorized NT users are able to access the LZ as it is visible to the corporate network. Third party tools such as the Nexus Query editor by Coffing Data Warehousing are also installed here.

Backup Node – Acts as a central backup and restore location for data. As with any backup location, files can be copied from here and stored in a separate backup archive.

Data Rack
On the other hand, the data rack handles the parallel execution of user queries, and PDW can support up to 4 data racks. By definition, 4 data racks is the currently published scale-out limit of the PDW architecture, with each rack containing 10 compute nodes (effectively a 40-compute-node limit).

Compute Node – Think of a compute node as a single standalone SQL Server instance with dedicated storage and memory. Current base specs for each compute node include 96 GB of memory, hex-core CPUs, and local tempdb workspace. Also, each node has a dedicated disk array which is managed via a SAN component and interconnected to other disk arrays via a dual Fibre Channel data bus. The data bus supports high-speed disk I/O and plays an integral role in disk redundancy. Additionally, the primary data is stored as RAID1 on the SANs.

So to make a more direct connection to the DBA and/or developer that may be reading this, let’s walk through a specific scenario where a user wants to execute a query against the PDW.  The interaction with these system entities is as follows: The user connects to the control node via a query editing tool. The control node gets the request from the tool and creates a distributable query plan to execute on the compute nodes.  The compute nodes work on their subsets of data and return the result sets to the control node.  The control node compiles the result sets into one collection and outputs the results to the query editor/user.  Pretty simple stuff.

Dual Infiniband Network

The compute nodes are connected together using a dual InfiniBand network, which enables high-speed data sharing and computation between nodes, called data shuffling. Compute nodes are also connected to the control and admin nodes to support high-speed backups/restores, table copies, data loads, and query execution.

Dual Fibre Channel

Essentially serving as the data bus, the dual fibre channel ensures super-fast I/O processing and failover redundancy.

General PDW Base Features:

Strong relational engine
Parallel Database Copy
Microsoft product stack integration
Physical separation of server workloads
Separation of IO patterns
Parallel loading – 1.5TB per hour on 1 rack
High speed scanning – 20 to 35GBps per rack

The “Half-Rack”

I think it is important to note here that the current PDW offerings best serve customers that have between 10 TB and 500 TB of projected storage requirements over the next few years.

To support those customers that do not have hundreds of terabytes worth of data but still want to take advantage of the features of PDW, HP and Microsoft agreed on an additional topology called the Half Data Rack. The Half Data Rack provides around 40% of the performance of a full rack at around 65% of the price.

Below is a breakdown of the current offerings:

Configuration                          Servers   Processors   Cores   Space (TB)
HP PDW Full Rack                          17         22        132    125
HP PDW Full Rack with 4 Data Racks        47         82        492    500
HP PDW Half Rack                          11          8         48    15-60 (optional disk sizes available)

That wraps up this initial discussion on the basics of PDW hardware architecture. I hope you have found this post useful and have a better understanding of the hardware components that make up the PDW architecture.
