Wednesday, December 16, 2015

Business Intelligence - real time example

Did anyone ever think of applying business intelligence techniques to a dosa shop?

If so, we are like-minded.

One of the easiest things humans can do is imagine.

Let's assume a Dosa Corner guy, a friend of mine, has 56 dosa shops in Bangalore.

He'd like to sustain his dosa business, at least in Bangalore. Of course, he has a vision statement to expand the business worldwide.

He came to me and asked for a BI solution.

Here is how it all started with Dosa Intelligence.

I convinced him that, with our tools and capabilities, we can help him improve his business in a much better way.

Since I designed the transaction application for my friend, here is a very simple overview of his shop's transactional DB.


Table     Field             Possible Data
Item      Item_ID           1, 2, 3...
          Item_Name         Idly, Dosa, Vada, Chapati, etc.
          Item_Description  Idly, Dosa, Vada, Chapati, etc.
          Item_Price        10, 20, 20, 25

Shop      Shop_ID           1, 2, 3...
          Shop_Location     Madiwala, Indra Nagar, Majestic
          Address

Purchase  Purchase_ID       1, 2, 3...
          Description       Groceries, Vegetables, Transport Charges...

Sales     Item_ID           1, 2, 3
          Quantity          10, 15, 1, 2
          Amount            10, 30, 3
          User_ID           OO7
          Shop_ID           1, 2, 3...
          TimeStamp         04/12/15 03:08

User      User_ID           OO1, OO7
          PIN               asdf, asdf
          Role              Admin, Cashier

Expenses  Expense_ID        1, 2, 3...
          Purchase_ID       1, 2, 3...
          Amount            1000, 3000, 250
          User_ID           OO7
          Shop_ID           1, 2, 3
          TimeStamp         04/12/15 03:08

The few tables above definitely give a superficial view of what kind of data we are dealing with in these dosa shops on a daily basis.

Sum(Amount) from Sales minus Sum(Amount) from Expenses is his profit.
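
Concretely, that calculation looks something like this (a minimal sketch against the tables above; the join keys follow the schema, everything else is illustrative):

-- profit per shop: total sales minus total expenses
SELECT sh.Shop_ID,
       sh.Shop_Location,
       ISNULL(sa.Total_Sales, 0) - ISNULL(ex.Total_Expenses, 0) AS Profit
FROM   Shop sh
       LEFT JOIN (SELECT Shop_ID, SUM(Amount) AS Total_Sales
                  FROM   Sales
                  GROUP  BY Shop_ID) sa
              ON sa.Shop_ID = sh.Shop_ID
       LEFT JOIN (SELECT Shop_ID, SUM(Amount) AS Total_Expenses
                  FROM   Expenses
                  GROUP  BY Shop_ID) ex
              ON ex.Shop_ID = sh.Shop_ID;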

Now, we have to apply all our data warehousing and business intelligence logic to help my friend improve his business.

I'm going to create a data warehouse and a data mart for this application.
With prediction techniques, we will help him make proper decisions.

Watch this space for more; whenever I get time, I will write more about it.
Cheers.




Thursday, September 11, 2014

PK, FK Rename and duplicate keys removal.

Well, here is the frustration with SSDT schema compare: it doesn't have an option to ignore column order.
Say, for instance, you have a table Tab (Col1, Col2) in one DB and Tab (Col2, Col1) in another DB; when you compare the two, schema compare will report a difference.

It has also given some more pain with user-defined data types and PK/FK ordering.
Still, I have decided to live with my VS and SSDT.

I have written the script below to rename primary keys and foreign keys.
E.g., Vendor_Details is the table name, so I'll rename the primary key to Vendor_Details_PK and the foreign keys to Vendor_Details_FK01, Vendor_Details_FK02, etc.



DECLARE @TBL_NME   AS VARCHAR(max),
        @CONST_NME AS VARCHAR(max),
        @Qry       AS NVARCHAR(max);

-- Renaming PKs: every primary key becomes <TableName>_PK.
-- (Assumes the tables live in the dbo schema; for other schemas,
-- sp_rename needs a schema-qualified constraint name.)

DECLARE cur_pk CURSOR FOR
  SELECT constraint_name,
         table_name
  FROM   information_schema.table_constraints
  WHERE  constraint_type = 'PRIMARY KEY'
  ORDER  BY constraint_name

OPEN cur_pk

FETCH next FROM cur_pk INTO @CONST_NME, @TBL_NME

WHILE @@FETCH_STATUS = 0
  BEGIN
      SET @Qry = 'sp_rename ''' + @CONST_NME + ''', '''
                 + @TBL_NME + '_PK'';'

      EXEC sp_executesql @Qry;

      FETCH next FROM cur_pk INTO @CONST_NME, @TBL_NME
  END

CLOSE cur_pk

DEALLOCATE cur_pk

-- Renaming FKs: foreign keys become <TableName>_FK01, <TableName>_FK02, ...
-- numbered per table in constraint-name order.

DECLARE curtab CURSOR FOR
  SELECT table_name
  FROM   information_schema.tables
  WHERE  table_type = 'BASE TABLE'  -- skip views

OPEN curtab

FETCH next FROM curtab INTO @TBL_NME

WHILE @@FETCH_STATUS = 0
  BEGIN
      -- the counter re-initializes here for every table
      DECLARE @INC AS INT = 1;

      DECLARE cur_fk CURSOR local fast_forward FOR
        SELECT constraint_name
        FROM   information_schema.table_constraints
        WHERE  table_name = @TBL_NME
               AND constraint_type = 'FOREIGN KEY'
        ORDER  BY constraint_name

      OPEN cur_fk

      FETCH next FROM cur_fk INTO @CONST_NME

      WHILE @@FETCH_STATUS = 0
        BEGIN
            -- zero-padded suffix: _FK01, _FK02, ... up to _FK99
            SET @Qry = 'sp_rename ''' + @CONST_NME + ''', '''
                       + @TBL_NME + '_FK' + RIGHT('0' + CONVERT(VARCHAR(2), @INC), 2)
                       + ''';';

            EXEC sp_executesql @Qry;

            SET @INC = @INC + 1;

            FETCH next FROM cur_fk INTO @CONST_NME
        END

      CLOSE cur_fk

      DEALLOCATE cur_fk

      FETCH next FROM curtab INTO @TBL_NME
  END

CLOSE curtab

DEALLOCATE curtab


/*

-- There is a chance that you have duplicate FKs; you can fetch and delete them with the query below. Evaluate the output and run it manually.

with CTE as (
SELECT RC.CONSTRAINT_NAME FK_Name
, KF.TABLE_NAME FK_Table
, KF.COLUMN_NAME FK_Column
, RC.UNIQUE_CONSTRAINT_NAME PK_Name
, KP.TABLE_NAME PK_Table
, KP.COLUMN_NAME PK_Column
,ROW_NUMBER() over (partition by KF.TABLE_NAME, KF.COLUMN_NAME order by RC.CONSTRAINT_NAME) as RN
FROM INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS RC
JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE KF ON RC.CONSTRAINT_NAME = KF.CONSTRAINT_NAME
JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE KP ON RC.UNIQUE_CONSTRAINT_NAME = KP.CONSTRAINT_NAME
)
select distinct 'alter table ' + FK_Table + ' drop constraint ' + FK_Name + ';' from CTE
where RN>1


*/


Once you are happy with the keys, you can continue with SSDT schema compare. You will still have some manual work, but it's minimal.

;-)
happy Querying.

Thursday, May 29, 2014

Physical Data Modeling with System Architect work around solution for easy handling

This is a 100% third-party-component/exe-free solution, so all you need is SSMS or any other SQL editor.
I bet most physical data modelers would have come across enough happy moments with the System Architect tool (SA).

So, when we build / convert a physical data model, we come across a few challenges:
1. Names of tables, columns, and relationships are not in the proper format
2. Constraints
3. Indexes
4. Maintaining associative relationships from LDM to PDM

So, the solution is: instead of modifying names, relationships, and constraints in SA, we can simply do it in a SQL editor and reverse engineer back to SA.

Here are the few steps involved in making this happen.

  • Open the LDM
  • Click Dictionary --> Create Data Model --> Physical Data Model
  • Open the PDM
  • Click Tools --> DB Schema Generation
  • Choose any available SQL Server version as the target DB --> Next
  • Give a local path for storing the SQL DDL, etc., and finish the wizard to generate the DDL
  • Open the DDL in any editor and modify table names and column names as per the requirement (see the sketch after this list)
  • Check constraints for all entities
  • Execute the SQL in SSMS to ensure the script is bug free
  • Reverse engineer back into SA in order to replace the PDM
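
To illustrate the "modify names in the editor" step: if you prefer to execute the raw DDL first and then rename inside SSMS, a pass like the one below works just as well. The names here are purely illustrative, not from a real model.

-- illustrative cleanup pass after executing the generated DDL
EXEC sp_rename 'VENDOR_DTLS', 'Vendor_Details';                    -- table name
EXEC sp_rename 'Vendor_Details.VNDR_NM', 'Vendor_Name', 'COLUMN';  -- column name
GO
-- add a check constraint that was missed in the LDM-to-PDM conversion
-- (Vendor_Status is a made-up column for the example)
ALTER TABLE Vendor_Details
  ADD CONSTRAINT Vendor_Details_CHK01 CHECK (Vendor_Status IN ('A', 'I'));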

Wednesday, March 12, 2014

To Delete All Default constraints from all tables of one Database

-- Generates DROP statements for default constraints defined as (newid());
-- evaluate the output and run it manually. Remove the SM.text filter to
-- target every default constraint.
SELECT 'Alter Table ' + SO.name + ' Drop Constraint '
       + So1.name
FROM   dbo.sysobjects SO
       INNER JOIN dbo.syscolumns SC
               ON SO.id = SC.id
       LEFT JOIN dbo.syscomments SM
              ON SC.cdefault = SM.id
       JOIN sys.objects so1
         ON so1.object_id = SC.cdefault  -- the default constraint itself
            AND so1.type_desc = 'DEFAULT_CONSTRAINT'
WHERE  SM.text = '(newid())'
ORDER  BY SO.[name],
          SC.colid
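
For what it's worth, the same script can be written against the newer catalog views, which pair each constraint with its own definition directly (again, remove the definition filter to script every default constraint):

SELECT 'ALTER TABLE ' + QUOTENAME(SCHEMA_NAME(t.schema_id)) + '.'
       + QUOTENAME(t.name) + ' DROP CONSTRAINT ' + QUOTENAME(dc.name) + ';'
FROM   sys.default_constraints dc
       JOIN sys.tables t
         ON dc.parent_object_id = t.object_id
WHERE  dc.definition = '(newid())';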

Monday, February 10, 2014

PIVOT UNPIVOT HIERARCHY repetition of levels issue

I happened to solve one business case at work.
Here is my source table:

Source
Name                 Parent_key  Child_key  Level
CEO                  1           1          0
Director Finance     1           2          1
Director Marketting  1           3          1
Project Manager      2           21         3
Project Manager      3           31         3
Team Lead            21          211        4
Team Lead            21          212        4
Team Lead            21          213        4
Victim (me)          213         2131       5

I'd like a T-SQL query to get the output as follows.

Expected Output
Name                 Level0  Level1  Level2  Level3  Level4  Level5
CEO                  1       1       1       1       1       1
Director Finance     1       2       2       2       2       2
Director Marketting  1       3       3       3       3       3
Project Manager      1       2       21      21      21      21
Project Manager      1       3       31      31      31      31
Team Lead            1       2       21      211     211     211
Team Lead            1       2       21      212     212     212
Team Lead            1       2       21      213     213     213
Victim (me)          1       2       21      213     2131    2131

I got the output at different levels by using a CTE & PIVOT:

Name                 Level0  Level1  Level2  Level3  Level4  Level5
CEO                  1
Director Finance             2
Director Marketting          3
Project Manager                      21
Project Manager                      31
Team Lead                                    211
Team Lead                                    212
Team Lead                                    213
Victim (me)                                          2131

But I struggled a lot to repeat the parent-level values all the way from the topmost parent, and also to duplicate the bottommost child across the rest of the levels.

Hence I ended up writing the output to a temporary table and updating it repeatedly (though I hate it, I had to do it).

If you have any better solution, kindly let me know.
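
For reference, one possible cursor-free alternative is a recursive CTE that carries the ancestor keys down the tree and repeats each node's own key from its depth onwards. A minimal sketch (it assumes integer keys and that the root row is its own parent, as in the source above; depth is computed during the recursion rather than trusting the Level column):

-- sample data as in the source table above
CREATE TABLE #Source (Name varchar(50), Parent_key int, Child_key int);

INSERT INTO #Source VALUES
('CEO', 1, 1), ('Director Finance', 1, 2), ('Director Marketting', 1, 3),
('Project Manager', 2, 21), ('Project Manager', 3, 31),
('Team Lead', 21, 211), ('Team Lead', 21, 212), ('Team Lead', 21, 213),
('Victim (me)', 213, 2131);

WITH H AS
(
    -- anchor: the root (its own parent); every level is its own key
    SELECT Name, Child_key, 0 AS Depth,
           Child_key AS Level0, Child_key AS Level1, Child_key AS Level2,
           Child_key AS Level3, Child_key AS Level4, Child_key AS Level5
    FROM   #Source
    WHERE  Parent_key = Child_key
    UNION ALL
    -- recursion: keep the parent's keys above the current depth and
    -- repeat the node's own key from its depth downwards
    SELECT s.Name, s.Child_key, h.Depth + 1,
           h.Level0,
           CASE WHEN h.Depth + 1 <= 1 THEN s.Child_key ELSE h.Level1 END,
           CASE WHEN h.Depth + 1 <= 2 THEN s.Child_key ELSE h.Level2 END,
           CASE WHEN h.Depth + 1 <= 3 THEN s.Child_key ELSE h.Level3 END,
           CASE WHEN h.Depth + 1 <= 4 THEN s.Child_key ELSE h.Level4 END,
           s.Child_key  -- Level5: with depth <= 5 this is always the node's own key
    FROM   #Source s
           JOIN H h
             ON s.Parent_key = h.Child_key
    WHERE  s.Parent_key <> s.Child_key
)
SELECT Name, Level0, Level1, Level2, Level3, Level4, Level5
FROM   H
ORDER  BY Level0, Level1, Level2, Level3, Level4, Level5;

DROP TABLE #Source;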

Thursday, January 9, 2014

SSIS Package Design Solution from Normalized tables to De-normalized Data Mart Layer / SSIS Columns to Rows

I had a problem while building one package.
I had to read rows like these:
Contact Association  Contact Type  Device Type  Value
Business             Telephone     Cellphone    12312
Business             Telephone     landline     1231231
Business             Telephone     Fax          1342343
Home                 Telephone     Cellphone    12312
Home                 Telephone     landline     1231231
Home                 Telephone     Fax          1342343

These had to be inserted into a single wide row:

Some Junk Columns  Business Cell No  Business Tele No  Bus Fax  Home Cell No  Home Tele No  Home Fax
Junkies..          12312             1231231           1342343  12312         1231231       1342343
Like this, there are 92 columns in my warehouse table.
I was a little pissed off with the approach suggested in my design document:
do a lookup for business entities and store blah blah.. then Pivot..

Pivot is not only meant for transposing rows to columns; it is at its best when aggregating, like any operation you play with in Excel.

Here is my design:
Do the lookups and get all the data in one set.
Distribute the required sets of rows with a Conditional Split component.
Add a Derived Columns component for every conditional split flow and rename the columns accordingly; finally, join the flows back together on the business key to form the wide row.

I could not provide my actual solution samples since that would not be compliant. I will try to mock the scenario with a dummy package and post it later.
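
In the meantime, here is a quick set-based mock of the same reshaping in T-SQL, using conditional aggregation. Contact_ID and the Contact_Devices table are hypothetical names, just to make the example self-contained:

-- rows-to-columns with conditional aggregation
SELECT Contact_ID,
       MAX(CASE WHEN Contact_Association = 'Business' AND Device_Type = 'Cellphone' THEN Value END) AS Business_Cell_No,
       MAX(CASE WHEN Contact_Association = 'Business' AND Device_Type = 'landline'  THEN Value END) AS Business_Tele_No,
       MAX(CASE WHEN Contact_Association = 'Business' AND Device_Type = 'Fax'       THEN Value END) AS Bus_Fax,
       MAX(CASE WHEN Contact_Association = 'Home'     AND Device_Type = 'Cellphone' THEN Value END) AS Home_Cell_No,
       MAX(CASE WHEN Contact_Association = 'Home'     AND Device_Type = 'landline'  THEN Value END) AS Home_Tele_No,
       MAX(CASE WHEN Contact_Association = 'Home'     AND Device_Type = 'Fax'       THEN Value END) AS Home_Fax
FROM   Contact_Devices
GROUP  BY Contact_ID;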


Tuesday, December 31, 2013

Generate GUID / Global Unique Identifier simple approach

DECLARE @CNT AS INT = 1;

CREATE TABLE #tmpguid
  (
     id UNIQUEIDENTIFIER
  )

WHILE ( @CNT < 100 )
  BEGIN
      INSERT INTO #tmpguid (id)
      VALUES      (NEWID())

      SET @CNT = @CNT + 1
  END

SELECT * FROM #tmpguid
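
For what it's worth, the same thing can be done set-based with a single INSERT; any sufficiently large row source works, and sys.all_objects is just a convenient one:

-- set-based alternative: one INSERT instead of 99 singleton inserts
INSERT INTO #tmpguid (id)
SELECT TOP (99) NEWID()
FROM   sys.all_objects;  -- any table with at least 99 rows will do

SELECT * FROM #tmpguid;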