Data Warehousing Tip: Using T-SQL vs SSIS for Slowly Changing Dimension (SCD) Types 1 and 2

Here’s an example of using T-SQL to process a Slowly Changing Type 1 & 2 Dimension. Note: SSIS has an SCD transform; however, it does not process large dimensional datasets very quickly.
…eh, I’m putting it too nicely: the SSIS SCD transform is painfully slow! This T-SQL performs 20X quicker than the SSIS SCD transform.
USE [SomeDB]
GO
/****** Object: StoredProcedure [dbo].[sp_UpsertDimGuest] Script Date: 9/11/2013 6:09:04 PM ******/
CREATE PROCEDURE [dbo].[sp_UpsertDimGuest]
AS
BEGIN
    --DROP constraints here first if your load strategy requires it

    --SCD 2 Historical Attribute Tracking via Upsert:
    --the MERGE expires the current row when a tracked attribute changes;
    --its OUTPUT feeds the outer INSERT, which adds the new current row.
    --(EffStartDt is assumed alongside the EffEndDt column shown above.)
    INSERT INTO [dbo].[DimGuest]
        ( [GuestContactSrgKey], [GuestIDAlternateKey]
         ,[GuestFirstName], [GuestLastName]
         ,[IsCurrent], [EffStartDt], [EffEndDt] )
    SELECT [GuestContactSrgKey], [GuestIDAlternateKey]
          ,[GuestFirstName], [GuestLastName]
          ,1, GETDATE(), NULL
    FROM
    (
        MERGE GTDW.dbo.DimGuest AS [Target]
        USING GTDW_Staging.dbo.DimGuestStagingFinal AS [Source]
           ON Target.GuestIDAlternateKey = Source.GuestIDAlternateKey
          AND Target.IsCurrent = 1
        WHEN MATCHED AND
             (   Target.[GuestFirstName] <> Source.[GuestFirstName]
              OR Target.[GuestLastName] <> Source.[GuestLastName] )
        THEN UPDATE SET
              IsCurrent = 0
             ,EffEndDt = GETDATE()
        WHEN NOT MATCHED BY TARGET THEN --brand-new guests land directly
            INSERT ( [GuestContactSrgKey], [GuestIDAlternateKey]
                    ,[GuestFirstName], [GuestLastName]
                    ,[IsCurrent], [EffStartDt], [EffEndDt] )
            VALUES ( Source.[GuestContactSrgKey], Source.[GuestIDAlternateKey]
                    ,Source.[GuestFirstName], Source.[GuestLastName]
                    ,1, GETDATE(), NULL )
        OUTPUT $action AS Action
              ,Source.[GuestContactSrgKey]
              ,Source.[GuestIDAlternateKey]
              ,Source.[GuestFirstName]
              ,Source.[GuestLastName]
    ) AS MergeOutput
    WHERE MergeOutput.Action = 'UPDATE'
      AND GuestIDAlternateKey IS NOT NULL;

    --SCD 1 Changing Attributes - history is not recorded
    UPDATE DimG
       SET [GuestContactSrgKey] = DSF.[GuestContactSrgKey]
          ,[GuestAcct] = DSF.[GuestAcct]
          ,[ETLLoadID] = DSF.[ETLLoadID]
      FROM DimGuest DimG
     INNER JOIN [GTDW_Staging].[dbo].[DimGuestStagingFinal] DSF
        ON DimG.GuestIDAlternateKey = DSF.GuestIDAlternateKey
       AND DimG.IsCurrent = 1 --Optional
     WHERE DimG.[GuestAcct] <> DSF.[GuestAcct];
END

Data Warehousing Tip: T-SQL Script to Create DEFAULT Constraints by Data Type

There is an ongoing discussion/debate regarding the handling of NULLs in a data warehouse; however, Michelle Poolet's article on the subject summarizes it beautifully: "consider the data warehouse business requirements."

Whatever your stance, there are some data types that should get the DEFAULT treatment (bit, char, nchar, nvarchar, and varchar), and the following script can assist in creating these defaults across every applicable column in your database.

SELECT Object_schema_name(T.[object_id], Db_id()) AS [Schema], 
       T.[name]  AS [table_name], 
       AC.[name] AS [column_name], 
       TY.[name] AS system_data_type, 
       'ALTER TABLE dbo.' + T.[name] 
       + ' ADD CONSTRAINT DF_' + T.[name] + '_' + AC.[name] --table-qualified name avoids duplicate constraint names
       + ' DEFAULT(' 
       + CASE TY.[name] 
            --WHEN 'int' THEN '0' 
          --WHEN 'bigint' THEN '0' 
          WHEN 'varchar' THEN '' 
          WHEN 'bit' THEN '' 
          WHEN 'char' THEN '' 
          WHEN 'smalldatetime' THEN '' 
          WHEN 'datetime' THEN ''
          --WHEN 'tinyint' THEN '0'
          --WHEN 'smallint' THEN '0' 
          WHEN 'nvarchar' THEN '' 
          --WHEN 'money' THEN '0.00'
          ELSE '' 
       END + ') FOR ' + AC.[name] AS sqlstring --append the target column so the generated DDL is complete
   FROM   sys.[tables] AS T 
       INNER JOIN sys.[all_columns] AC 
               ON T.[object_id] = AC.[object_id] 
       INNER JOIN sys.[types] TY 
               ON AC.[system_type_id] = TY.[system_type_id] 
                  AND AC.[user_type_id] = TY.[user_type_id] 
WHERE  T.[is_ms_shipped] = 0 
       AND T.[name] LIKE ('DIM%') AND AC.[name] NOT LIKE ( '%SrgKey%' )  AND 
TY.[name] in ('varchar', 'bit' , 'char', 'smalldatetime', 'datetime' , 'nvarchar')

I’ve taken the approach of defaulting values for all columns (less those affected by the WHERE clause) in my STAGING Database.  The thought is that the non-NULL values inserted into my staging tables will persist to the TARGET database and I won’t experience one of a variety of issues (OLAP, T-SQL, or otherwise) that a development and/or user team will encounter due to having NULL values in the data warehouse.
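For instance, run against a hypothetical staging table DimCustomer, the generated [sqlstring] column contains statements like the following (the table, column, and constraint names here are purely illustrative), ready to be copied out and executed:

```sql
-- Illustrative rows from the generated [sqlstring] column:
ALTER TABLE dbo.DimCustomer ADD CONSTRAINT DF_DimCustomer_CustomerName DEFAULT('') FOR CustomerName
ALTER TABLE dbo.DimCustomer ADD CONSTRAINT DF_DimCustomer_IsPreferred DEFAULT('') FOR IsPreferred
```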


SSAS How-To: Create Dynamic Time Calculations To Reuse Over Any Measure

Rather than making individual time calculations for each measure in an SSAS cube (e.g. YTD, prior-year YTD, YTD variances), one can use a more dynamic approach: time calcs applicable to all measures, saving a great amount of dev time and overhead in the process.

1. Create Named Query in DSV:

SELECT 1 AS DateCalculationKey, 'Selected Date' AS DateCalculation
UNION ALL SELECT 2, 'YTD'
UNION ALL SELECT 3, 'PY YTD'
UNION ALL SELECT 4, 'YTD Chg'
UNION ALL SELECT 5, 'YTD Chg %'

2. Create the dimension and add it to the Dimension Usage tab of the cube (no relationship to the fact table will exist)

3. Add the MDX script calculations (the listing below reconstructs the truncated original; the AGGREGATE/PERIODSTODATE/PARALLELPERIOD pattern is the standard approach for this kind of shell dimension):

SCOPE ([Date].[Calendar Hierarchy].MEMBERS);

    [Date Calculations].[Date Calculations].[YTD] =
        AGGREGATE(
            {[Date Calculations].[Date Calculations].[Selected Date]} *
            PERIODSTODATE(
                [Date].[Calendar Hierarchy].[Calendar Year],
                [Date].[Calendar Hierarchy].CURRENTMEMBER));

    [Date Calculations].[Date Calculations].[PY YTD] =
        AGGREGATE(
            {[Date Calculations].[Date Calculations].[Selected Date]} *
            PERIODSTODATE(
                [Date].[Calendar Hierarchy].[Calendar Year],
                PARALLELPERIOD(
                    [Date].[Calendar Hierarchy].[Calendar Year],
                    1,
                    [Date].[Calendar Hierarchy].CURRENTMEMBER)));

    SCOPE ([Date].[Calendar Hierarchy].[All]);
        [Date Calculations].[Date Calculations].[YTD] = NULL;
        [Date Calculations].[Date Calculations].[PY YTD] = NULL;
    END SCOPE;

END SCOPE;

[Date Calculations].[Date Calculations].[YTD Chg] =
    [Date Calculations].[Date Calculations].[YTD] -
    [Date Calculations].[Date Calculations].[PY YTD];

[Date Calculations].[Date Calculations].[YTD Chg %] =
    IIF([Date Calculations].[Date Calculations].[PY YTD] = 0,
        NULL,
        ([Date Calculations].[Date Calculations].[YTD] -
         [Date Calculations].[Date Calculations].[PY YTD]) /
        [Date Calculations].[Date Calculations].[PY YTD]);

FORMAT_STRING([Date Calculations].[Date Calculations].[YTD Chg %]) = "#,0.0%";

//Create current year named set in scripts:
CREATE SET CURRENTCUBE.[Current Year]
    AS {(STRTOMEMBER("[Date].[Calendar Hierarchy].[Calendar Year].[" + CSTR(YEAR(NOW())) + "]"))},
    DISPLAY_FOLDER = 'Named Sets';
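Once deployed, the calculations can be sanity-checked with a query along these lines. The cube name [Sales] is an assumption; [Month Name Year] and [Fact Sales Count] come from the cube above:

```mdx
// Cross the shell-dimension members with months for a single measure
SELECT
    [Date Calculations].[Date Calculations].MEMBERS ON COLUMNS,
    [Date].[Calendar Hierarchy].[Month Name Year].MEMBERS ON ROWS
FROM [Sales]   // assumed cube name
WHERE [Measures].[Fact Sales Count];
```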

The result!


SSRS How-To: Simple Method to Pass MDX Multi-Select Parameters to MDX Datasets


I recently ran across a poorly covered topic regarding SSRS development with MDX parameters and datasets.  After some trial and error and visits to a number of online resources, I found the approach below to be the most straightforward.

To pass a multivalued MDX parameter to an MDX based dataset, perform the following routine:

1. Create the parameter dataset (Right click DataSet >> Add Dataset…) – Note: the MEMBER_CAPTION and UNIQUENAME properties enable the cleanest approach to assigning values to the parameter (Step 2). Also, we will be filtering out unwanted parameter values.

Step 1.1 – click ‘use dataset embedded in report’ 
Step 1.2 – select datasource 
Step 1.3 – click ‘Query Designer’ 
Step 1.4 – Click ‘Design Mode’ icon 
Step 1.5 – Enter query

WITH
    MEMBER [Measures].[ParameterCaption] AS
        [Fixed Asset Book].[Book ID].CURRENTMEMBER.MEMBER_CAPTION
    MEMBER [Measures].[ParameterValue] AS
        [Fixed Asset Book].[Book ID].CURRENTMEMBER.UNIQUENAME
    SET [Book ID] AS
        [Fixed Asset Book].[Book ID].ALLMEMBERS
SELECT
    {[Measures].[ParameterCaption], [Measures].[ParameterValue]} ON COLUMNS,
    FILTER([Book ID],
           [Measures].[ParameterCaption] <> "All"
       AND [Measures].[ParameterCaption] <> ""
       AND [Measures].[ParameterCaption] <> "N/A"
       AND [Measures].[ParameterCaption] <> "Unknown") ON ROWS
FROM [FixedAssets]


Notice the full unique names in the ‘ParameterValue’ column…these values will enable the StrToSet function later in our primary MDX dataset.

2. Create the parameter (on the left pane, right click ‘Parameters’, ‘Add Parameter…’)

 2.1 – From the General tab, check ‘Allow multiple values’ if needed

 2.2 – Assign ParameterCaption as the label (displayed to the user on the report) and ParameterValue as the value (interpreted by SSRS when filtering report data)


3. Create the dataset (Right click DataSet >> Add Dataset…)

Step 3.1 – click ‘use dataset embedded in report’  
Step 3.2 – select datasource  
Step 3.3 – click ‘Query Designer’  
Step 3.4 – Click ‘Design Mode’ icon  
Step 3.5 – Enter a query that includes the WHERE (StrToSet(@BookID, CONSTRAINED)) clause – this enables the MDX to interpret the @BookID multi-select parameter (created in step 2) as a dynamic member set.  
Step 3.6 – Click the ‘Query Parameters’ icon  
Step 3.7 – Create the BookID parameter assignment
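For reference, the main dataset query ends up shaped roughly like this; the measure and row attribute shown are placeholders, and only the WHERE clause pattern is the point:

```mdx
SELECT
    {[Measures].[Net Book Value]} ON COLUMNS,        // placeholder measure
    [Fixed Asset Book].[Book ID].ALLMEMBERS ON ROWS  // placeholder row attribute
FROM [FixedAssets]
WHERE (STRTOSET(@BookID, CONSTRAINED));              // @BookID holds the multi-select unique names
```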



4. You’re all set!




Parallel Data Warehouse (PDW) Tip: Proof of Concept (POC) Guidelines

As with any other project, our objective is to drive PDW POC success!  To clear and cultivate the playing field for successful POC engagements, we must, at a minimum, consider the following guidelines.

Guideline 1: Identify Existing Roadblocks and De-scope
Major POC challenges can manifest themselves in many different forms, e.g. inability to seed data, dealing with sub-par data warehouse architectures, wrangling poorly conceived T-SQL/ETL architectures, etc.  If these challenges exist in the current production environment and are the primary driver behind the PDW POC initiative, approach with caution, as these core issues can take far more time to address than minor issues such as the lack of an SSIS framework, data delivery issues, or an inability to generate very large data volumes.  If show-stoppers exist, try either de-scoping the issues/components from the PDW POC altogether or finding a sustainable workaround.

Guideline 2: Gather Accurate Business and Technical Requirements
This should be fairly standard: the client will have a core dataset and/or processes that they would like to see run on the PDW appliance, coupled with an ‘ideal’ runtime. Keep the deliverables manageable and apply the SMART criteria (Specific, Measurable, Achievable, Realistic, Timescale). Be sure to gather base requirements for documentation, data volumes, environmental requirements, etc.  For example, the client may want to stage both a dev and a test environment, they may only want a schema's worth of tables/data brought over from an existing SMP database (Note: PDW does not support schemas), or they may want to bring over 2 years of history rather than 10.  Keep historical data requirements to a minimum, as it may be difficult to find extraction windows on existing source systems. Similarly, data loads into the PDW via the dwloader command or the SSIS destination (slower than dwloader) can take a significant amount of time and may run over the course of X number of days depending on source system availability and data volumes.  Maybe most importantly, assess the current environment, data volumes, and processing windows to manage realistic expectations! Although there have been instances where specific loading routines resulted in a 100X improvement in load times, this is not always the case and that figure should not be used as a baseline.

Guideline 3: Streamline Testing Requirements
When addressing the testing phase, do not overload the POC activities with extraneous testing/recon requirements; keep the scope of testing and data validation to a minimum. Simplify the testing criteria to critical dataset load times vs. data volumes.  The success criteria should be clear, e.g. Company GECKO has defined success as loading 2TB of prior-day policy activity data in under 30 minutes each day for 7 consecutive days, or Company STERNXM defines success as loading 20 regional subscriber datasets concurrently via flat file in under 2 hours once a week.  Also, the client may be interested in testing ad-hoc queries and data loads while the appliance is actively processing large data load requests. At a minimum, the testing process will address the following:

  • Data Load
  • Data Expansion
  • Query Performance
  • Query Concurrency
  • Mixed Workload Concurrency

Guideline 4:  Resource Planning and Communication
While keeping project activities within scope, ensure that the team has at least the minimum number of resources dedicated to the project.  In-house members of a POC delivery team usually include, at minimum, a DBA, a Business Analyst, and a Data Developer/Engineer. Keep in mind that most streamlined POC engagements wrap up within a 2-3 week timeframe; if the delivery requirements are modified (heaven forbid) to, say, include more complex PDW processing routines or additional table loads, always remain communicative and provide updates/recommendations on resource budgets and overall POC timelines.  Additionally, provide daily email updates and try to incorporate simple graphical representations of project progress (e.g. RAG traffic lights). While never underestimating, pad for the unexpected and ensure that time is added to the plan to account for contingencies.

Guideline 5: Partnership with Microsoft PDW COE
In a recent PDW implementation, I was involved with a few PDW-specific challenges applicable to query optimization, data loading, and appliance software upgrades.  As a result of both the client's and the consulting company's partnership with the Microsoft COE, the downtime associated with these challenges was minimized. The partnership led to a smoother and more efficient PDW POC delivery, as the COE was ready and willing to recommend and demonstrate solutions that aligned to MS COE best practices. Ultimately, the COE's recommendations and actions directly benefited system usability and sustainability and ensured a successful POC delivery.

I would like to build a collection of guidelines within this post to benefit future PDW POC implementations!  If you have an idea or want to expand on one of these guidelines, feel free to share your personal experiences and/or workarounds to common POC challenges.

I want to thank the following contributors of the PDW community for providing insight that led to the formulation of the POC guidelines above:

Matt Goswell
James Herring
Ralph Kemperdick
Derek Comingore
Matthew Winter

Parallel Data Warehouse (PDW) Performance Tip: CTAS vs INSERT SELECT

When loading tables in Microsoft’s Parallel Data Warehouse (PDW) via T-SQL, one will soon encounter the need to load data into a new table. Whether performing an initial data load of a table or going through the PDW-specific steps of a ‘trunc and load’, there are two table load methods to consider.

PDW T-SQL Load Methods:

  • INSERT SELECT
  • CREATE TABLE AS SELECT (CTAS)


To make a direct comparison between the two methods, I have provided the actual PDW query and execution plans generated by each method, using identical SELECTs.

Load Method: INSERT SELECT  

Appends one or more new rows to a table. Add one row by listing the row values or add multiple rows by inserting the results of a SELECT statement.

INSERT INTO TABLEA
SELECT only, use, this, method, to, append
FROM TABLEB

Generated Query Plan


Generated Execution Plan

Load Method: Create Table as Select (CTAS)

Creates a new table in SQL Server PDW that is populated with the results from a SELECT statement.

CREATE TABLE TABLEA
WITH (DISTRIBUTION = HASH(yearId))  --a distribution clause is required for CTAS on PDW; HASH(yearId) is assumed here
AS
SELECT yearId,  --int NOT NULL
       some,    --varchar(50)
       stuff    --varchar(50)
FROM TABLEB

Generated Query Plan

Generated Execution Plan

*When comparing to INSERT SELECT, notice the absence of the Index Scan and Parallelism steps above.  CTAS results in an Index Seek for this particular SELECT and also does away with the Parallelism steps.

The Verdict – Use CREATE TABLE AS SELECT (CTAS) when loading empty tables!

Performance Results
CTAS loaded the test table more than twice as fast as INSERT SELECT.

  • CTAS runtime: 1min  44sec
  • INSERT SELECT runtime: 4min  2sec 

Simple Explanation
On the PDW, INSERT SELECT is a fully logged transaction and runs serially within each distribution. CTAS, however, is minimally logged and runs in parallel across all nodes.
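This is also why the PDW ‘trunc and load’ mentioned earlier is usually implemented as a CTAS into a new table followed by a name swap, rather than TRUNCATE plus INSERT SELECT. A sketch of that pattern (table and column names are illustrative; RENAME OBJECT is the PDW statement for the swap):

```sql
-- 1. Build the replacement table in parallel with a minimally logged CTAS
CREATE TABLE dbo.FactSales_New
WITH (DISTRIBUTION = HASH(SalesKey))
AS
SELECT SalesKey, yearId, amount
FROM   stage.FactSales;

-- 2. Swap the new table in, then drop the old one
RENAME OBJECT dbo.FactSales     TO FactSales_Old;
RENAME OBJECT dbo.FactSales_New TO FactSales;
DROP TABLE dbo.FactSales_Old;
```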

NOTE: When performing a CTAS or re-creating tables in any way, you will need to re-create statistics on those tables after loading.

Ideally, you should create stats on all join, GROUP BY, ORDER BY, and restriction (WHERE) columns. SQL Server PDW does not automatically create or update statistics on the Control node whenever SQL Server creates or updates statistics on the Compute nodes:

-- This will generate a CREATE STATISTICS statement for all columns on all objects
SELECT 'CREATE STATISTICS [' + b.[name] + '] ON dbo.[' + a.[name] + '] ([' + b.[name] + '])'
FROM sys.tables a, sys.columns b
WHERE a.[object_id] = b.[object_id]
  AND NOT EXISTS (
      SELECT NULL
      FROM sys.stats_columns
      WHERE [object_id] IN (SELECT [object_id]
                            FROM sys.stats_columns
                            GROUP BY [object_id]
                            HAVING COUNT(*) >= 1)
        AND [object_id] = b.[object_id]
        AND [column_id] = b.[column_id])

T-SQL Tip: Highlight Code Between Any Two Parenthesis

Where has this been all my working life? If you frequently find yourself scouring complex (and sometimes needlessly complex) T-SQL code, simply place the cursor on the inside or outside of a parenthesis and press CTRL+SHIFT+]. SQL Server Management Studio will then highlight the code between the matching parentheses. There is some complex T-SQL out there in the wild (future consultants beware!); this will save you loads of scrolling and searching time and help you wrangle pesky blocks of extended code.

Crunch that code!
– Sal
