Tuesday, May 18, 2010

Alternative way to get a table's row count


To get the total row count in a table, we usually use the following select statement:

SELECT count(*) FROM table_name

This query performs a full table scan to get the row count; you can verify this with SET SHOWPLAN_TEXT ON (or SET SHOWPLAN_ALL ON) in SQL Server. So, if the table is very big, it can take a lot of time. In this example, the tbTest table is created and 10,000 rows are inserted into it:

CREATE TABLE tbTest (
id int identity primary key,
Name char(10)
)
GO
DECLARE @i int
SELECT @i = 1
WHILE @i <= 10000 BEGIN
    INSERT INTO tbTest VALUES (LTRIM(STR(@i)))
    SELECT @i = @i + 1
END
GO
There is another way to determine the total row count in a table: the sysindexes system table. Its rows column contains the total row count for each table in your database. So you can use the following select statement instead of the one above:

SELECT rows FROM sysindexes
WHERE id = OBJECT_ID('table_name') AND indid < 2
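On SQL Server 2005 and later, the same row counts are exposed through the sys.partitions catalog view, which replaces sysindexes. A minimal sketch against the tbTest table created above:

SELECT SUM(rows) AS total_rows
FROM sys.partitions
WHERE object_id = OBJECT_ID('tbTest') AND index_id IN (0, 1)  -- 0 = heap, 1 = clustered index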

There are physical read and logical read operations. A logical read occurs when the page is already in the cache; if it is not, a physical read is performed to bring the page into the cache. To see how many logical and physical reads were made, you can use the SET STATISTICS IO ON command.


SET STATISTICS IO ON
GO
SELECT count(1) FROM tbTest
GO
SELECT rows FROM sys.sysindexes WHERE id = OBJECT_ID('tbTest') AND indid < 2
GO
SET STATISTICS IO OFF
GO




Index Optimization Tips

Consider creating index on column(s) frequently used in the WHERE, ORDER BY, and GROUP BY clauses.


These columns are the best candidates for indexing. Analyze your queries carefully to avoid creating indexes that will not be used.
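For illustration, a minimal sketch assuming a hypothetical Orders table that is frequently filtered and sorted by OrderDate:

-- OrderDate appears in the WHERE and ORDER BY clauses of frequent queries
CREATE INDEX IX_Orders_OrderDate ON Orders (OrderDate)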

-----------------------------------------------------------------------------------------------------------

Keep your indexes as narrow as possible.
Because each index takes up disk space, try to minimize the index key's size to avoid using superfluous disk space. This reduces the number of reads required to read the index and boosts overall index performance.

-----------------------------------------------------------------------------------------------------------

Drop indexes that are not used.
Because each index takes up disk space and slows the adding, deleting, and updating of rows, you should drop indexes that are not used. You can use the Index Tuning Wizard to identify indexes that are not used by your queries.
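Dropping an unused index is a one-line operation; the index name IX_tbTest_Name below is hypothetical:

-- SQL Server 2000 syntax; on 2005 and later you can also write DROP INDEX IX_tbTest_Name ON tbTest
DROP INDEX tbTest.IX_tbTest_Name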

-----------------------------------------------------------------------------------------------------------

Try to create indexes on columns that have integer values rather than character values.
Because integer values are usually smaller than character values (the int data type is 4 bytes, the bigint data type is 8 bytes), you can reduce the number of index pages used to store the index keys. This reduces the number of reads required to read the index and boosts overall index performance.

-----------------------------------------------------------------------------------------------------------

Limit the number of indexes if your application updates data very frequently.
Because each index takes up disk space and slows the adding, deleting, and updating of rows, you should create new indexes only after analyzing how the data is used, the types and frequencies of queries performed, and how your queries will use the new indexes. In many cases, the speed advantages of creating new indexes outweigh the disadvantages of additional space used and slower row modification. However, avoid redundant indexes; create them only when necessary. For a read-only table, the number of indexes can be increased.

-----------------------------------------------------------------------------------------------------------

Check that the index you are about to create does not already exist.
Keep in mind that when you create a primary key constraint or unique key constraint, SQL Server automatically creates an index on the column(s) participating in the constraint. If you specify another index name, you can accidentally create the same index on the same column(s) again and again.
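To see which indexes already exist on a table before creating a new one, you can use the sp_helpindex system procedure, shown here against the tbTest table from the first example:

EXEC sp_helpindex 'tbTest'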

-----------------------------------------------------------------------------------------------------------

Create a clustered index instead of a nonclustered one to increase the performance of queries that return a range of values and queries that contain GROUP BY or ORDER BY clauses and return sorted results.
Because every table can have only one clustered index, you should choose the column(s) for this index very carefully. Analyze all your queries, choose the most frequently used ones, and include in the clustered index only those column(s) that provide the most performance benefit from clustering.

-----------------------------------------------------------------------------------------------------------

Create nonclustered indexes to increase the performance of queries that return few rows and where the index has good selectivity.
In contrast to the clustered index, of which there can be only one per table, each table can have as many as 249 nonclustered indexes. However, you should consider nonclustered index creation as carefully as clustered index creation, because each index takes up disk space and drags on data modification.

-----------------------------------------------------------------------------------------------------------

Create the clustered index on column(s) that are not updated very frequently.
Because the leaf node of a nonclustered index contains the clustered index key if the table has a clustered index, every time a column used for the clustered index is modified, all of the nonclustered indexes must also be modified.

-----------------------------------------------------------------------------------------------------------

Create the clustered index on a single column that is as narrow as possible.
Because nonclustered indexes contain the clustered index key within their leaf nodes and use it to locate data rows, creating the clustered index on a single, narrow column reduces not only the size of the clustered index, but also the size of every nonclustered index on the table.

-----------------------------------------------------------------------------------------------------------

Avoid creating a clustered index based on an incrementing key.
For example, if a table has a surrogate integer primary key declared as IDENTITY and the clustered index was created on this column, then every time data is inserted into the table, the rows are added to the end of the table. When many rows are added, a "hot spot" can occur. A "hot spot" occurs when many queries try to read or write data in the same area at the same time, resulting in an I/O bottleneck.
Note: by default, SQL Server creates a clustered index for the primary key constraint. So, in this case, you should explicitly specify the NONCLUSTERED keyword to indicate that a nonclustered index is created for the primary key constraint.
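A minimal sketch of this pattern, using a hypothetical tbOrders table: the identity primary key is declared NONCLUSTERED so the clustered index can be placed on a column that benefits range queries:

CREATE TABLE tbOrders (
    OrderID int IDENTITY CONSTRAINT PK_tbOrders PRIMARY KEY NONCLUSTERED,
    OrderDate datetime
)
-- the clustered index goes on the range-queried column instead of the incrementing key
CREATE CLUSTERED INDEX CIX_tbOrders_OrderDate ON tbOrders (OrderDate)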

-----------------------------------------------------------------------------------------------------------

Create a clustered index for each table.
If you create a table without a clustered index, the data rows are not stored in any particular order. This structure is called a heap. Every time data is inserted into such a table, the row is added to the end of the table. When many rows are added, a "hot spot" can occur. To avoid hot spots and improve concurrency, you should create a clustered index for each table.

-----------------------------------------------------------------------------------------------------------

Don't create indexes on column(s) whose values have low selectivity.
For example, don't create an index on a column with many duplicate values, such as a "Sex" column (which has only "Male" and "Female" values), because in this case the disadvantages of additional space used and slower row modification outweigh the speed advantage of the new index.

-----------------------------------------------------------------------------------------------------------

If you create a composite (multi-column) index, order the columns in the key to enhance selectivity, with the most selective columns leftmost in the key.
The order of the columns in a composite (multi-column) index is very important; a good order increases the chance that the index will be used.

-----------------------------------------------------------------------------------------------------------

If you create a composite (multi-column) index, order the columns in the key so that the WHERE clauses of frequently used queries match the column(s) that are leftmost in the index.
The order of the columns in a composite (multi-column) index is very important. The index will be used to evaluate a query only if the leftmost index key column is specified in the WHERE clause of the query. For example, if you create a composite index on (Name, Age), then a query with the WHERE clause "WHERE Name = 'Alex'" will use the index, but a query with the WHERE clause "WHERE Age = 28" will not.
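A sketch, assuming a hypothetical Person table with Name and Age columns:

CREATE INDEX IX_Person_Name_Age ON Person (Name, Age)
-- can seek on the index: the leftmost key column (Name) is in the WHERE clause
SELECT * FROM Person WHERE Name = 'Alex'
-- cannot seek on the index: the leftmost key column is missing from the WHERE clause
SELECT * FROM Person WHERE Age = 28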

-----------------------------------------------------------------------------------------------------------

If you join several tables very frequently, consider creating indexes on the joined columns.
This can significantly improve the performance of queries against the joined tables.

-----------------------------------------------------------------------------------------------------------

Consider creating a surrogate integer primary key (an identity column, for example).
Every table must have a primary key (a unique identifier for a row within a database table). A surrogate primary key is a field that has a unique value but no actual meaning to the record itself, so users should never see or change it. Some developers use surrogate primary keys; others use data fields themselves as the primary key. If a primary key consists of many data fields and is large, consider creating a surrogate integer primary key instead. This can improve the performance of your queries.

-----------------------------------------------------------------------------------------------------------

Consider creating indexes on all the columns referenced in the WHERE clauses of your most frequently used queries when those clauses contain the OR operator.
If the WHERE clause of a query contains an OR operator and any of the columns referenced in the OR clause is not indexed, a table or clustered index scan will be made. In this case, creating indexes on all such columns can significantly improve query performance.

-----------------------------------------------------------------------------------------------------------

If your application performs the same query over and over on the same table, consider creating a covering index that includes the columns from this query.
A covering index is an index that includes all of the columns referenced in the query. Creating a covering index can improve performance because all the data for the query is contained within the index itself; only the index pages, not the data pages, are used to retrieve the data. Covering indexes can give a query a big performance boost, because they can save a huge number of I/O operations.
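A sketch, assuming a hypothetical Orders table that is queried over and over by CustomerID. On SQL Server 2005 and later, the INCLUDE clause adds the remaining columns to the index leaf level without widening the key:

CREATE INDEX IX_Orders_CustomerID ON Orders (CustomerID) INCLUDE (OrderDate, Amount)
-- covered query: every referenced column is contained in the index itself
SELECT CustomerID, OrderDate, Amount FROM Orders WHERE CustomerID = 42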

-----------------------------------------------------------------------------------------------------------

Use the DBCC DBREINDEX statement to rebuild all the indexes on all the tables in your database periodically (for example, once a week on Sunday) to reduce fragmentation.
Because fragmented data can cause SQL Server to perform unnecessary data reads, and query performance against a heavily fragmented table can be very poor, you should periodically rebuild all indexes to reduce fragmentation. Try to schedule the DBCC DBREINDEX statement during CPU idle time and slow production periods.
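For a single table the statement looks like this (an empty index name means all indexes on the table; a fill factor of 0 keeps the original setting):

DBCC DBREINDEX ('tbTest', '', 0)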

-----------------------------------------------------------------------------------------------------------

Consider using the SORT_IN_TEMPDB option when you create an index and tempdb is on a different set of disks than the user database.
The SORT_IN_TEMPDB option is a SQL Server 2000 feature that was not supported in previous versions. When you create an index with this option, SQL Server uses the tempdb database, instead of the current database, to sort data during index creation. Using the option can reduce the time it takes to create the index, but it increases the amount of disk space used during creation.
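A minimal example against the tbTest table from the first section:

CREATE INDEX IX_tbTest_Name ON tbTest (Name) WITH SORT_IN_TEMPDB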

-----------------------------------------------------------------------------------------------------------

Use the SQL Server Profiler Create Trace Wizard with the "Identify Scans of Large Tables" trace to determine which tables in your database may need indexes.
This trace shows which tables are being scanned by queries instead of being accessed through an index.

-----------------------------------------------------------------------------------------------------------

Sunday, June 28, 2009

Piecemeal Restoration in SQL Server 2005

How to Perform a Piecemeal Restore

SQL Server 2005 Enterprise Edition supports piecemeal restore operations, which enable you to recover one or more entire filegroups while the remaining filegroups stay online. Piecemeal restore works with all recovery models, but is more flexible for the full and bulk-logged models than for the simple model. Piecemeal restore allows filegroups to be restored after an initial, partial restore of the primary and some of the secondary filegroups. Filegroups that are not restored are marked offline and are not accessible; they can, however, be restored later by performing a file restore. To allow the entire database to be restored in stages at different times, piecemeal restore maintains checks to ensure that the database is consistent in the end.

Performing a Piecemeal Restore
A piecemeal restore sequence restores and recovers a database in stages at the filegroup level, beginning with the primary filegroup and all read/write secondary filegroups.

--start the example by connecting to SQL Server
USE [master]
Go

--create a database for the demo; make sure the database contains multiple filegroups
CREATE DATABASE [TestDB] ON PRIMARY
( NAME = N'TestDB', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\TestDB.mdf' , SIZE = 4048KB , MAXSIZE = UNLIMITED, FILEGROWTH = 1024KB ),
FILEGROUP [fg1]
( NAME = N'fg1', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\fg1.ndf' , SIZE = 2048KB , MAXSIZE = UNLIMITED, FILEGROWTH = 1024KB ),
FILEGROUP [fg2]
( NAME = N'fg2', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\fg2.ndf' , SIZE = 2048KB , MAXSIZE = UNLIMITED, FILEGROWTH = 1024KB ),
FILEGROUP [fg3]
( NAME = N'fg3', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\fg3.ndf' , SIZE = 2048KB , MAXSIZE = UNLIMITED, FILEGROWTH = 1024KB )
LOG ON
( NAME = N'TestDB_log', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\TestDB_log.ldf' , SIZE = 1024KB , MAXSIZE = 2048GB , FILEGROWTH = 10%)
Go
/* create multiple backup devices for the database, filegroup, and log backups */
--device for filegroup1 backup
EXEC master.dbo.sp_addumpdevice @devtype = N'disk', @logicalname = N'fg1', @physicalname = N'C:\fg1.bak'
Go
--device for filegroup2 backup
EXEC master.dbo.sp_addumpdevice @devtype = N'disk', @logicalname = N'fg2', @physicalname = N'C:\fg2.bak'
Go
--device for filegroup3 backup
EXEC master.dbo.sp_addumpdevice @devtype = N'disk', @logicalname = N'fg3', @physicalname = N'C:\fg3.bak'
Go

--device for Primary backup
EXEC master.dbo.sp_addumpdevice @devtype = N'disk', @logicalname = N'prm', @physicalname = N'C:\prm.bak'
Go
--device for TransactionLog1 backup
EXEC master.dbo.sp_addumpdevice @devtype = N'disk', @logicalname = N't1', @physicalname = N'C:\t1.bak'
Go
--device for TransactionLog2 backup
EXEC master.dbo.sp_addumpdevice @devtype = N'disk', @logicalname = N't2', @physicalname = N'C:\t2.bak'
Go
--device for TransactionLog3 backup
EXEC master.dbo.sp_addumpdevice @devtype = N'disk', @logicalname = N't3', @physicalname = N'C:\t3.bak'
Go
--device for TransactionLog Tail backup
EXEC master.dbo.sp_addumpdevice @devtype = N'disk', @logicalname = N'tail', @physicalname = N'C:\tail.bak'
Go
--device for Full database backup
EXEC master.dbo.sp_addumpdevice @devtype = N'disk', @logicalname = N'full', @physicalname = N'C:\full.bak'
Go

-- use database
USE testdb
Go
--create a table on each filegroup and insert one row into each
--Table for filegroup1

CREATE TABLE T1
(ID INT)
ON FG1
Go
INSERT INTO T1 VALUES(1)
Go
--Table for filegroup2
CREATE TABLE T2
(ID INT)
ON FG2
Go
INSERT INTO T2 VALUES(1)
Go
--Table for filegroup3
CREATE TABLE T3
(ID INT)
ON FG3
Go
INSERT INTO T3 VALUES(1)
Go
--Table for Primary FileGroup
CREATE TABLE T4
(ID INT)
ON [PRIMARY]
Go
INSERT INTO T4 VALUES(1)
Go
--take a full backup and filegroup backups of the database
BACKUP DATABASE [TestDB] TO [full] WITH NOFORMAT, NOINIT, NAME = N'TestDB-Full Database Backup', SKIP, NOREWIND, NOUNLOAD, STATS = 10
Go
BACKUP DATABASE [TestDB] FILEGROUP = N'PRIMARY' TO [prm] WITH NOFORMAT, NOINIT, NAME = N'TestDB-Full Filegroup Backup', SKIP, NOREWIND, NOUNLOAD, STATS = 10
Go
BACKUP DATABASE [TestDB] FILEGROUP = N'fg1' TO [fg1] WITH NOFORMAT, NOINIT, NAME = N'TestDB-Full Filegroup Backup', SKIP, NOREWIND, NOUNLOAD, STATS = 10
Go
BACKUP DATABASE [TestDB] FILEGROUP = N'fg2' TO [fg2] WITH NOFORMAT, NOINIT, NAME = N'TestDB-Full Filegroup Backup1', SKIP, NOREWIND, NOUNLOAD, STATS = 10
Go
BACKUP DATABASE [TestDB] FILEGROUP = N'fg3' TO [fg3] WITH NOFORMAT, NOINIT, NAME = N'TestDB-Full Filegroup Backup2', SKIP, NOREWIND, NOUNLOAD, STATS = 10
--insert a few more rows before the transaction log backup (optional)
--before transaction log 1

INSERT INTO T1 VALUES(2)
INSERT INTO T2 VALUES(2)
INSERT INTO T3 VALUES(2)
INSERT INTO T4 VALUES(2)
--take transaction log backup 1
--transaction log backups are taken on the normal schedule

BACKUP LOG [TestDB] TO [t1] WITH NOFORMAT, NOINIT, NAME = N'TestDB-Transaction Log Backup', SKIP, NOREWIND, NOUNLOAD, STATS = 10
Go
--insert a few more rows before the next transaction log backup (optional)
--before transaction log 2

INSERT INTO T1 VALUES(3)
INSERT INTO T2 VALUES(3)
INSERT INTO T3 VALUES(3)
INSERT INTO T4 VALUES(3)
Go
--take transaction log backup 2
--transaction log backups are taken on the normal schedule
BACKUP LOG [TestDB] TO [t2] WITH NOFORMAT, NOINIT, NAME = N'TestDB-Transaction Log Backup', SKIP, NOREWIND, NOUNLOAD, STATS = 10
Go
--insert a few more rows before the next transaction log backup (optional)
--before transaction log 3

INSERT INTO T1 VALUES(4)
INSERT INTO T2 VALUES(4)
INSERT INTO T3 VALUES(4)
INSERT INTO T4 VALUES(4)
Go
--transaction log backups are taken on the normal schedule
--take transaction log backup 3

BACKUP LOG [TestDB] TO [t3] WITH NOFORMAT, NOINIT, NAME = N'TestDB-Transaction Log Backup', SKIP, NOREWIND, NOUNLOAD, STATS = 10
Go
--insert rows before the tail-log backup
INSERT INTO T1 VALUES(5)
INSERT INTO T2 VALUES(5)
INSERT INTO T3 VALUES(5)
INSERT INTO T4 VALUES(5)
Go
--now suppose your database has crashed and you manage to take a tail-log backup
--tail-log backup before the restoration process
--TAKE TAIL LOG BACKUP

Use master
go
BACKUP LOG [TestDB] TO [tail] WITH NO_TRUNCATE , NOFORMAT, NOINIT, NAME = N'TestDB-Transaction Log Backup', SKIP, NOREWIND, NOUNLOAD, NORECOVERY , STATS = 10
Go


--now the restoration process starts; after each phase of restoration
--you will see that the respective filegroups come online for processing while the others are still inaccessible
-----RESTORATION PROCESS for primary and filegroup1
--this always starts with the primary filegroup, followed by the others

RESTORE DATABASE TestDB FILEGROUP='Primary' FROM prm
WITH PARTIAL, NORECOVERY
Go
RESTORE DATABASE TestDB FILEGROUP='fg1' FROM fg1
WITH NORECOVERY
Go
RESTORE LOG TestDB FROM t1 WITH NORECOVERY
Go
RESTORE LOG TestDB FROM t2 WITH NORECOVERY
Go
RESTORE LOG TestDB FROM t3 WITH NORECOVERY
Go
RESTORE LOG TestDB FROM tail WITH RECOVERY
Go
--check the status of filegroup accessibility
--you will see that the tables belonging to primary and filegroup1 are accessible and the others are not
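--optionally, you can also check file state from the catalog (SQL Server 2005 and later);
--restored files show ONLINE, while unrestored ones typically show RECOVERY_PENDING
SELECT name, state_desc FROM TestDB.sys.database_files
Go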

use TestDB
Go
--accessible: filegroup1
select * from t1
Go
--not accessible: filegroup2 (returns an error because the table exists on a filegroup that is not online)
Select * from t2
Go
--not accessible: filegroup3 (returns an error because the table exists on a filegroup that is not online)
select * from t3
Go
--accessible: primary filegroup
select * from t4
Go
--after some time you can start the filegroup2 restoration
--RESTORATION PROCESS for filegroup2

use master
Go
RESTORE DATABASE TestDB FILEGROUP='fg2' FROM fg2
WITH NORECOVERY
Go
RESTORE LOG TestDB FROM t1 WITH NORECOVERY
Go
RESTORE LOG TestDB FROM t2 WITH NORECOVERY
Go
RESTORE LOG TestDB FROM t3 WITH NORECOVERY
Go
RESTORE LOG TestDB FROM tail WITH RECOVERY
Go

--check the accessibility now
use TestDB
Go
--accessible: filegroup1
select * from t1
Go
--accessible: filegroup2
select * from t2
Go
--not accessible: filegroup3 (returns an error because the table exists on a filegroup that is not online)
select * from t3
Go
--accessible: primary filegroup
select * from t4
Go
--at the end, restore filegroup3 and the full database will be online
--restoration for filegroup3

Use master
Go
RESTORE DATABASE TestDB FILEGROUP='fg3' FROM fg3
WITH NORECOVERY
Go
RESTORE LOG TestDB FROM t1 WITH NORECOVERY
Go
RESTORE LOG TestDB FROM t2 WITH NORECOVERY
Go
RESTORE LOG TestDB FROM t3 WITH NORECOVERY
Go
RESTORE LOG TestDB FROM tail WITH RECOVERY
Go
--check the accessibility now
use TestDB
Go
--accessible: filegroup1
select * from t1
Go
--accessible: filegroup2
select * from t2
Go
--accessible: filegroup3
select * from t3
Go
--accessible: primary filegroup
select * from t4
Go
--clean up your server
use master
go
drop database [TestDB]
Go
EXEC sp_dropdevice 'fg1'
EXEC sp_dropdevice 'fg2'
EXEC sp_dropdevice 'fg3'
EXEC sp_dropdevice 't1'
EXEC sp_dropdevice 't2'
EXEC sp_dropdevice 't3'
EXEC sp_dropdevice 'prm'
EXEC sp_dropdevice 'tail'
EXEC sp_dropdevice 'full'
Go


Thursday, June 25, 2009

Designing a Partitioned Table for Automatic Archiving


Partitioned Table

A partitioned table is a table in which the data is separated horizontally into multiple physical locations based on a range of values for a specific column. The physical locations for partitions are filegroups. For example, you could use a partitioned table to store sales orders and then separate the order records into different filegroups based on
the order date.

Benefits of Partitioned Tables
• The ability to implement separate backup strategies.
• Control over storage media.
• Index management benefits.


Partition Function
A partition function specifies the data type of the key used to partition the data and the
boundary values for each partition. The number of partitions defined by a partition
function is always one more than the number of boundary values that the function defines.
For example, a partition function that defines an integer partitioning key with the
boundary values 1, 2, and 3 will result in four partitions: <1, 1-2, 2-3, and >3.
Example:-
CREATE PARTITION FUNCTION pf_OrderDate (datetime)
AS RANGE RIGHT
FOR VALUES ('01/01/2003', '01/01/2004', '01/01/2005')


Partition Scheme
A partition scheme maps the partitions that are defined in a partition function to the
filegroups where the partitions will be physically stored. You can map all partitions to the
same filegroup, or you can map some or all of the partitions to different filegroups,
depending on your needs.
Example:-
CREATE PARTITION SCHEME ps_OrderDate
AS PARTITION pf_OrderDate
TO (fg1, fg2, fg3, fg4, fg5)
---------------------------------------------------------------------------------
CREATE PARTITION SCHEME ps_OrderDate
AS PARTITION pf_OrderDate
ALL TO ([PRIMARY])
fg5 is the NEXT USED filegroup; it will be used in the future when a partition of the partitioned table is split.
Example Partitioned Table
CREATE TABLE TESTEMPLOYEE
(EMPID INT IDENTITY, EMPNAME CHAR(40), RETDATE DATETIME, RETMONTH AS CONVERT(TINYINT, DATEPART(MM, RETDATE)) PERSISTED,
CONSTRAINT PKKEY PRIMARY KEY (EMPID, RETMONTH))
ON MyPartitionScheme(RETMONTH)


Switching Partitions
You can swap a populated table or partition with an empty table or partition by using the
SWITCH clause of the ALTER TABLE statement. This technique is commonly used to
archive data from a partitioned table or to bulk-insert new data from a staging table.
ALTER TABLE dbo.PartitionedTransactions
SWITCH PARTITION 1
TO dbo.TransactionArchive

Merging Partitions
You can use the ALTER PARTITION FUNCTION statement to merge a partition. When
you perform a merge operation, the partition for the boundary value specified in the
ALTER PARTITION FUNCTION statement is removed, and the data is merged into the
adjacent partition.

ALTER PARTITION FUNCTION pf_OrderDate()
MERGE RANGE ('01/01/2003')

Splitting Partitions
Like a merge, a split operation is performed by using the ALTER PARTITION
FUNCTION statement. This creates a new partition and reassigns the data accordingly.
The new partition is created on the filegroup designated as the next filegroup in each
partition scheme based on the partition function. If a partition scheme has no next
filegroup defined when the partition is split, an error occurs. This is where fg5, designated earlier as the NEXT USED filegroup, comes into play.
ALTER PARTITION FUNCTION pf_OrderDate()
SPLIT RANGE ('01/01/2006')


Designing a Partitioned Table to Archive Data to Another Database at the End of Each Month Automatically

--create a database for the partitioned table, with 12 filegroups and a data file associated with each
CREATE DATABASE [PartitionDB] ON PRIMARY
( NAME = N'PartitionDB', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\PartitionDB.mdf' , SIZE = 4048KB , FILEGROWTH = 1024KB ),
FILEGROUP [filegroup1]
( NAME = N'PartitionDB1', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\PartitionDB1.ndf' , SIZE = 2048KB , FILEGROWTH = 1024KB ),
FILEGROUP [filegroup10]
( NAME = N'PartitionDB10', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\PartitionDB10.ndf' , SIZE = 2048KB , FILEGROWTH = 1024KB ),
FILEGROUP [filegroup11]
( NAME = N'PartitionDB11', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\PartitionDB11.ndf' , SIZE = 2048KB , FILEGROWTH = 1024KB ),
FILEGROUP [filegroup12]
( NAME = N'PartitionDB12', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\PartitionDB12.ndf' , SIZE = 2048KB , FILEGROWTH = 1024KB ),
FILEGROUP [filegroup2]
( NAME = N'PartitionDB2', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\PartitionDB2.ndf' , SIZE = 2048KB , FILEGROWTH = 1024KB ),
FILEGROUP [filegroup3]
( NAME = N'PartitionDB3', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\PartitionDB3.ndf' , SIZE = 2048KB , FILEGROWTH = 1024KB ),
FILEGROUP [filegroup4]
( NAME = N'PartitionDB4', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\PartitionDB4.ndf' , SIZE = 2048KB , FILEGROWTH = 1024KB ),
FILEGROUP [filegroup5]
( NAME = N'PartitionDB5', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\PartitionDB5.ndf' , SIZE = 2048KB , FILEGROWTH = 1024KB ),
FILEGROUP [filegroup6]
( NAME = N'PartitionDB6', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\PartitionDB6.ndf' , SIZE = 2048KB , FILEGROWTH = 1024KB ),
FILEGROUP [filegroup7]
( NAME = N'PartitionDB7', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\PartitionDB7.ndf' , SIZE = 2048KB , FILEGROWTH = 1024KB ),
FILEGROUP [filegroup8]
( NAME = N'PartitionDB8', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\PartitionDB8.ndf' , SIZE = 2048KB , FILEGROWTH = 1024KB ),
FILEGROUP [filegroup9]
( NAME = N'PartitionDB9', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\PartitionDB9.ndf' , SIZE = 2048KB , FILEGROWTH = 1024KB )
LOG ON
( NAME = N'PartitionDB_log', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\PartitionDB_log.ldf' , SIZE = 1024KB , FILEGROWTH = 10%)
GO
--Use Database
use PartitionDB
go
--create a partition function to define the logical boundary for each month
CREATE PARTITION FUNCTION MyPartitionRange (TINYINT)
AS RANGE LEFT FOR VALUES (1,2,3,4,5,6,7,8,9,10,11)

GO
--create a partition scheme to associate each logical boundary with a filegroup
CREATE PARTITION SCHEME MyPartitionScheme AS
PARTITION MyPartitionRange
To([filegroup1],[filegroup2],[filegroup3],[filegroup4],[filegroup5],[filegroup6],[filegroup7],[filegroup8],[filegroup9],[filegroup10],[filegroup11],[filegroup12])
GO
--now create a table that stores data using the partition scheme and partition function
--note the computed column RETMONTH, which identifies the month of the employee's retirement and is passed to the partition scheme; a computed column used with a partition scheme must be PERSISTED
CREATE TABLE EMPLOYEE
(EMPID INT IDENTITY, EMPNAME CHAR(40), RETDATE DATETIME, RETMONTH AS CONVERT(TINYINT, DATEPART(MM, RETDATE)) PERSISTED,
CONSTRAINT PKKEY PRIMARY KEY (EMPID, RETMONTH))
ON MyPartitionScheme(RETMONTH)
Go
--populate the partition table with data (the AdventureWorks sample database is required for the [Person].[Contact] table)
INSERT INTO EMPLOYEE
SELECT [FirstName]
,[ModifiedDate]
FROM [AdventureWorks].[Person].[Contact]
go
--verify the number of records in each partition using the sys.partitions catalog view
SELECT partition_number,rows
FROM sys.partitions
WHERE OBJECT_ID = OBJECT_ID('EMPLOYEE')
go
--verify which partition each row belongs to, using the $PARTITION function with the partition function
SELECT *, $PARTITION.MyPartitionRange(RETMONTH) AS PartitionNo FROM EMPLOYEE
go

--ARCHIVE destination
--create a database for the archived table
Create database ArchivedDB
Go
Use ArchivedDB
go
--create a table to store archived data
CREATE TABLE ArchivedData
(EMPID INT ,EMPNAME CHAR(40),RETDATE DATETIME )
Go
--verify there are no records in ArchivedData
SELECT * FROM ArchivedData
go
--we create a staging table (TempEmployee) to switch data: stored procedures transfer data from the source to the staging table and then to the archive table
--Considerations
--1. SWITCH can transfer data to an empty table or an empty partition only.
--2. The destination (empty table or partition) must exist on the same filegroup from which the data is extracted.
--3. You cannot use a temp table, because it lives in tempdb.
--4. The staging table must have the same schema as the source table, down to the character lengths of the columns.
--we use dynamic SQL to create the staging table on the same filegroup from which the data is extracted, passing the filegroup as a parameter
GO
USE PartitionDB
GO
CREATE PROC SWITCHTABLE @FG CHAR(20)
AS
BEGIN
exec ('CREATE TABLE TempEmployee
(EMPID INT IDENTITY, EMPNAME CHAR(40), RETDATE DATETIME, RETMONTH AS CONVERT(TINYINT, DATEPART(MM, RETDATE)) PERSISTED,
CONSTRAINT PKKEYTEMP PRIMARY KEY (EMPID, RETMONTH))
ON ' + @FG)
END
Go
--create another stored procedure to switch data from the source into the staging table created above; the partition number from which data is extracted is passed as a parameter
create proc SWAPPARTITION @ID INT
AS
BEGIN
DECLARE @sql NVARCHAR(200)
-- the partition number must be converted to a string before concatenation
SET @sql = 'ALTER TABLE EMPLOYEE SWITCH PARTITION ' + CAST(@ID AS NVARCHAR(10)) + ' TO TempEmployee'
EXEC (@sql)
END
go
--another procedure to transfer data from the staging table to the destination archive table in ArchivedDB
CREATE PROC ARCHIVEDATA
AS
BEGIN
INSERT INTO ArchivedDB.dbo.ArchivedData SELECT EMPID,EMPNAME,RETDATE FROM TempEmployee
END
go
--create a single stored procedure to call all three procedures (create table, switch partition, and transfer data)
--once you have the partition number, you can find the filegroup associated with it, so we use a query to look up the archived partition's filegroup and pass it to the first procedure to create the staging table; after archiving, the staging table must be dropped so the next run can recreate it

CREATE PROC SWAPEMPLOYEE @ID INT
AS
BEGIN
Declare @fg Char(20)
select @fg=fg.Name from sys.indexes i inner join sys.partition_schemes ps on ps.data_space_id = i.data_space_id
inner join sys.destination_data_spaces dds on dds.partition_scheme_id = ps.data_space_id
inner join sys.filegroups fg on fg.data_space_id = dds.data_space_id
left outer join sys.partition_range_values prv on prv.boundary_id = dds.destination_id
where i.[object_id] = object_id('Employee') and destination_id =@id
EXEC SWITCHTABLE @FG
EXEC SWAPPARTITION @ID
EXEC ARCHIVEDATA
DROP TABLE TempEmployee
END
Go
--create a T-SQL job scheduled at the end of each month with the given code
go
use PartitionDB
go
DECLARE @PNO INT
SET @PNO=DATEPART(MM,GETDATE())
EXEC SWAPEMPLOYEE @PNO
go


2. Give the job a name and select the Steps tab.
3. Click New Step.
4. Fill in the step with the T-SQL code given above.
5. Create a schedule so that the job runs at the end of each month.
6. To test, change the system date to the end of the month, set the time to 11:57 PM, and wait for two minutes.
--check the previous month's data in the employee table
go
SELECT * FROM EMPLOYEE WHERE RETMONTH=datepart(mm,getdate())
--7. now check the data in the archived table
go
select * from ArchivedDB.dbo.ArchivedData
--clean up your system
go
use master
go
drop database ArchivedDB,PartitionDB