Thursday, December 23, 2010

[SQL Server] Paging with Total Record Count in SQL Server 2005


In business applications, developers routinely need to query a database and display the results. With huge datasets, fetching everything at once degrades application performance. One solution is paging, which lets the user fetch small chunks of data and traverse the entire dataset through forward/backward navigation.

While implementing paging, developers often make a big mistake: they fetch the entire dataset and filter out the required chunk at the application level. This works well enough with small datasets, but not with huge ones. The better approach is to request only the required chunk of data from the database, which improves response time. This is how we'll do it:

Java Code:

private static final int DEFAULT_PAGE_SIZE = 10;
int totalRecords = -1, recordShowing = 0, pageIndex = 0, pageSize = DEFAULT_PAGE_SIZE;

String action = request.getParameter("action");
pageIndex = StringUtils.isBlank( request.getParameter("pageIndex") ) ? 0 : Integer.valueOf(request.getParameter("pageIndex")) ;   
pageIndex = (pageIndex <= 0) ? 0 : pageIndex;

if ( "showNone".equalsIgnoreCase(action) ) {   // null-safe: "action" may be absent from the request
    // Some code to display default page, when no action is set (if required)...
}
else {
    // Set page Index and page Size, based on user action
    if ( "showAll".equalsIgnoreCase(action) ) {    // for Show All records
        pageSize = -1;
        pageIndex = 0;
    }
    else if ( "showPrev".equalsIgnoreCase(action) ) { // for Backward navigation
        pageIndex = (pageIndex <= 0) ? 0 : (pageIndex - 1);   
    }
    else if ( "showNext".equalsIgnoreCase(action) ) { // for Forward Navigation
        pageIndex = pageIndex < 0 ? 0 : (pageIndex + 1);
    }
   
    int startIndex = (pageSize < 0) ? -1 : (pageIndex * pageSize) + 1;
    int endIndex = (pageSize < 0) ? -1 : (pageIndex * pageSize) + pageSize;

    /**
     * Call your DAO method to fetch the required result set...
     * Here I'm assuming that the result set is returned in the following fashion:
     *
     *     Map<Object, Object> searchResult = new HashMap<Object, Object>();
     *     searchResult.put("totalRecords", totalRecords);
     *     searchResult.put("resultList", resultList);
     */
   
    // Set variables based on the result set
    // to display proper pagination info to the user
    totalRecords = Integer.valueOf( searchResult.get("totalRecords").toString() );
    List<?> resultList = (List<?>) searchResult.get("resultList");
    pageSize = (pageSize == -1) ? totalRecords : pageSize;
   
    if ( "showAll".equalsIgnoreCase(action) ) {
        recordShowing = resultList.size();
    }
    else {
        recordShowing = resultList.size() < pageSize ? totalRecords : (pageIndex + 1) * pageSize;
    }   
}

 
SQL Query:

In SQL Server, we can achieve this through temporary tables or CTEs (Common Table Expressions) as well. But what if those are not options for us? Below is how we can do it with a single SELECT query.

SELECT *  FROM (
    SELECT *, ROW_NUMBER() OVER (ORDER BY <column-name>) AS ROWNUM
    , COUNT(1) OVER () AS TOTALRECORDS
    FROM <table-name>
    WHERE <conditions>
) recSet
WHERE ((@startIndex = -1 AND @endIndex = -1) OR (ROWNUM BETWEEN @startIndex AND @endIndex ))
ORDER BY <column-name>
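For illustration, here is how the query might be wired up for page index 1 with a page size of 10 (so rows 11 to 20), against a hypothetical Employees table; the table name, the LastName column, and the IsActive filter are assumptions for the sketch:

```sql
DECLARE @startIndex INT, @endIndex INT
SET @startIndex = 11    -- (pageIndex * pageSize) + 1
SET @endIndex   = 20    -- (pageIndex * pageSize) + pageSize

SELECT * FROM (
    SELECT E.*, ROW_NUMBER() OVER (ORDER BY E.LastName) AS ROWNUM
    , COUNT(1) OVER () AS TOTALRECORDS
    FROM Employees E
    WHERE E.IsActive = 1
) recSet
WHERE ((@startIndex = -1 AND @endIndex = -1) OR (ROWNUM BETWEEN @startIndex AND @endIndex))
ORDER BY ROWNUM
```

Passing -1 for both parameters (the "Show All" case in the Java code above) returns every row, and every returned row carries the total count in the TOTALRECORDS column.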

Friday, December 17, 2010

[Java] Error: Method getBindStatus threw exception for reference $springMacroRequestContext

Recently, I came across a problem while working with the Spring framework and Velocity; the exception was "ERROR [velocity.app.VelocityEngine] Method getBindStatus threw exception for reference $springMacroRequestContext in template <xyz.vm>". Apparently, Velocity was unable to call the macro or unable to bind some variable in the template.

Actually, I had used a macro to display a tree structure, defined in two different Velocity templates; both macros had the same name, but each bound different variables according to its template's requirements. As a result, the first page visited loaded properly, but on visiting the next page the same macro (from the first page) was called on the back end, which threw the exception above. The exception occurred because Velocity registers macros globally by name, so when loading the second template it could not find the variables used in the first template's macro.

The solution to this problem is quite simple: use a different macro name in each template if the macro has any variables or form controls specific to that particular template.

Another useful tip: changes made to Velocity macros do not appear until Tomcat is restarted. To have macro changes show up in the browser without restarting Tomcat, just change the macro name and refresh the browser. (In development, setting the Velocity property velocimacro.library.autoreload to true achieves the same effect.)

Enjoy working with Velocity templates. :)

Thursday, November 25, 2010

[SQL Server] Search Text in Database Objects

We can easily search the data stored in our database tables through queries, but what if we want to search the database objects themselves, such as stored procedures, triggers, views, and user-defined functions? For instance, we may want to know whether a specific table is used in any database object, or search for a special comment mentioned in any of these DB objects. Here is how to get it done:

SELECT O.[name] AS [ObjectName]
, O.[type_desc] AS [ObjectType]
, O.[modify_date] AS [ModifiedDate]
, C.text
FROM sys.syscomments C
INNER JOIN sys.objects O ON C.[id] = O.[object_id]
WHERE C.[text] LIKE '%<text-to-search>%'
AND O.[type] in ('V', 'FN', 'TR', 'P')
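One caveat: sys.syscomments stores long definitions in 4000-character chunks, so a search string that straddles a chunk boundary can be missed. On SQL Server 2005 the same search can be run against the sys.sql_modules catalog view, which exposes each object's full definition:

```sql
SELECT O.[name] AS [ObjectName]
, O.[type_desc] AS [ObjectType]
, O.[modify_date] AS [ModifiedDate]
, M.[definition]
FROM sys.sql_modules M
INNER JOIN sys.objects O ON M.[object_id] = O.[object_id]
WHERE M.[definition] LIKE '%<text-to-search>%'
AND O.[type] IN ('V', 'FN', 'TR', 'P')
```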

The solution is just like Aladdin's Lamp...! Enjoy.

[SQL Server] Search Table using Column Name

In a high-pressure professional environment, we sometimes need to know which tables contain some xyz column, and it's quite tedious to search the entire database by hand. Here is the magic-wand query for this problem:

SELECT * FROM sys.tables WHERE object_id IN
(SELECT object_id FROM sys.columns WHERE name LIKE '%<column-name>%')
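A variant of the same idea, joining the two catalog views so the result shows both the table and the matching column name:

```sql
SELECT T.[name] AS [TableName], C.[name] AS [ColumnName]
FROM sys.tables T
INNER JOIN sys.columns C ON C.[object_id] = T.[object_id]
WHERE C.[name] LIKE '%<column-name>%'
ORDER BY T.[name], C.[name]
```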

Amazing... isn't it! :)

[T-SQL] Alter Table - Add/Drop Constraint Query

It is good practice to identify and apply constraints while creating a table, but in real scenarios we often need to add a constraint to an existing column, or while adding a new column to an existing table. A simple and quick way is to do it through a query. Below is how you achieve it:

Add Constraint on Existing Column:

ALTER TABLE <table-name>
ADD CONSTRAINT <constraint-name> 
PRIMARY KEY (<column-name>)

where "<constraint-name>" is a user-defined name, so the constraint can be referenced properly later.

The above syntax can also be used for UNIQUE and CHECK constraints (with a check expression in place of the column list). A DEFAULT constraint uses ADD CONSTRAINT <constraint-name> DEFAULT <value> FOR <column-name>, and a FOREIGN KEY constraint is added as follows:

ALTER TABLE <target-table-name>
ADD CONSTRAINT <constraint-name>
FOREIGN KEY ( <target-column-name> )
REFERENCES <source-table-name> ( <source-column-name> )

Add Constraint on New Column:

ALTER TABLE <table-name>
ADD <column-name> BIGINT IDENTITY
CONSTRAINT <constraint-name> PRIMARY KEY

where "<constraint-name>" is a user-defined constraint name, and "IDENTITY" is optional; include it if you want to make the column an identity column with the default seed and increment of 1.


The above syntax could be used for UNIQUE, DEFAULT and CHECK constraints, but for FOREIGN KEY constraint, the query would be as follows:

ALTER TABLE <target-table-name>
ADD <target-column-name> BIGINT
CONSTRAINT <constraint-name>
FOREIGN KEY ( <target-column-name> )
REFERENCES <source-table-name> ( <source-column-name> )

Drop Constraint from Existing Column:

Dropping a constraint is very simple:

ALTER TABLE <table-name> 
DROP CONSTRAINT <constraint-name>

Enjoy!

Wednesday, November 24, 2010

[T-SQL] Table Constraints

Table constraints define rules regarding the values allowed in columns and are the standard mechanism for enforcing data integrity.

Types of Constraints:
  1. NOT NULL prevents the column from accepting NULL values.
  2. UNIQUE ensures that all values in the column are unique (SQL Server allows one NULL).
  3. PRIMARY KEY uniquely identifies each record in the table and does not allow NULL.
  4. FOREIGN KEY restricts inserted values to those present in the referenced table's column.
  5. CHECK limits the range of values that can be placed in the column.
  6. DEFAULT inserts a default value into new records when no value is specified.
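As a sketch, here is a hypothetical table that uses every constraint type above (it assumes a Department table with a DepartmentId primary key already exists):

```sql
CREATE TABLE Employee (
    EmployeeId   INT          NOT NULL PRIMARY KEY,                              -- NOT NULL + PRIMARY KEY
    Email        VARCHAR(100) NOT NULL UNIQUE,                                   -- UNIQUE
    DepartmentId INT          FOREIGN KEY REFERENCES Department (DepartmentId),  -- FOREIGN KEY
    Salary       MONEY        CHECK (Salary >= 0),                               -- CHECK
    HireDate     DATETIME     DEFAULT GETDATE()                                  -- DEFAULT
)
```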

[SQL Server] SQL Server Isolation level

An isolation level determines the degree of isolation of data between concurrent transactions. The default SQL Server isolation level is Read Committed. A lower isolation level increases concurrency, but at the expense of data correctness. Conversely, a higher isolation level ensures that data is correct, but can affect concurrency negatively. The isolation level required by an application determines the locking behavior SQL Server uses. SQL-92 defines the following isolation levels, all of which are supported by SQL Server:
  1. Read uncommitted (the lowest level where transactions are isolated only enough to ensure that physically corrupt data is not read).
  2. Read committed (SQL Server default level). 
  3. Repeatable read.
  4. Serializable (the highest level, where transactions are completely isolated from one another).
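The level applies per session and is set before the transaction begins; note that SQL Server 2005 also adds a SNAPSHOT level on top of the four SQL-92 levels. A sketch, where the Account table is an assumption:

```sql
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ

BEGIN TRANSACTION
    -- Under REPEATABLE READ, rows read here stay locked, so a second
    -- read within the same transaction sees the same values
    SELECT Balance FROM Account WHERE AccountId = 1
COMMIT TRANSACTION
```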

[SQL Server] Retrieving last generated Identity value

@@IDENTITY:
  1. Returns the last IDENTITY value generated implicitly (via trigger or user defined function) or explicitly (by user) in current session.
  2. It returns the value regardless of the scope of the statement and the table.
SCOPE_IDENTITY():
  1. Returns the last IDENTITY value generated in the current session and in the same scope as the statement.
  2. It returns the value regardless of the table, but ignores any identity value created by a trigger or a user-defined function fired from that scope.
IDENT_CURRENT(<table-name>):
  1. Returns the last IDENTITY value generated in a table.
  2. It returns the value regardless of the session and the scope of the statement.
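To see the three side by side, assume a hypothetical Orders table whose OrderId is an identity column:

```sql
INSERT INTO Orders (CustomerName) VALUES ('Acme')

SELECT SCOPE_IDENTITY()        AS LastIdSameScope,    -- ignores triggers/UDFs
       @@IDENTITY              AS LastIdSameSession,  -- includes trigger-generated values
       IDENT_CURRENT('Orders') AS LastIdInTable       -- any session, any scope
```

If an INSERT trigger on Orders writes to an audit table that has its own identity column, @@IDENTITY returns the audit table's value, while SCOPE_IDENTITY() still returns the value generated for Orders.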

[T-SQL] Referential Integrity

Referential integrity refers to the consistency that must be maintained between primary and foreign keys, i.e. every foreign key value must have a corresponding primary key value.

[T-SQL] Database Transaction and ACID properties

Transaction:
A transaction is a sequence of operations performed as a single logical unit of work. This logical unit of work must exhibit four properties, called the ACID (Atomicity, Consistency, Isolation, and Durability) properties, to qualify as a transaction.

  1. Atomicity: A transaction is atomic; if one part of the transaction fails, the entire transaction fails and the database state is left unchanged.
  2. Consistency: If a transaction would violate the database's consistency rules, the entire transaction is rolled back to the pre-transaction state. If the DBMS allows fields of a record to reference another record, consistency implies the DBMS must enforce referential integrity: by the time any transaction ends, every reference in the database must be valid.
  3. Durability: Once the user has been notified of a transaction's success, the DBMS guarantees the transaction will not be lost; its data changes survive system failure, and all integrity constraints have been satisfied, so the DBMS will not need to reverse it.
  4. Isolation: Other operations cannot access data that has been modified by a transaction that has not yet completed.
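Atomicity can be sketched with SQL Server 2005's TRY/CATCH blocks, using a hypothetical Account table: either both updates persist, or neither does.

```sql
BEGIN TRANSACTION
BEGIN TRY
    UPDATE Account SET Balance = Balance - 100 WHERE AccountId = 1
    UPDATE Account SET Balance = Balance + 100 WHERE AccountId = 2
    COMMIT TRANSACTION          -- both updates become durable together
END TRY
BEGIN CATCH
    ROLLBACK TRANSACTION        -- database state is left unchanged
END CATCH
```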

[T-SQL] Using a Specific Index in a SELECT Statement

You can tell the optimizer to use a specific index by adding a table hint after the table name in the FROM clause:

SELECT <column-names>
FROM <table-name> WITH ( INDEX ( <index-name> ) )

This scheme is also called an index hint. (The older ( INDEX = <index-name> ) form still works, but the WITH syntax is preferred.) Use it sparingly; the optimizer usually chooses a good index on its own.

[T-SQL] Difference between Primary Key and Unique Key

Primary Key:
  1. Enforces uniqueness of the column on which it is defined.
  2. By default, creates a clustered index on the column.
  3. Doesn't allow NULL value.
  4. A table can have only one primary key.

Unique Key:
  1. Enforces uniqueness of the column on which it is defined.
  2. By default, creates a non-clustered index on the column.
  3. Allows one NULL value.
  4. A table can have multiple unique keys.
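The NULL behavior is easy to verify with a small hypothetical table:

```sql
CREATE TABLE Demo (
    Id   INT         PRIMARY KEY,   -- clustered index, NULL not allowed
    Code VARCHAR(10) UNIQUE         -- non-clustered index, one NULL allowed
)

INSERT INTO Demo (Id, Code) VALUES (1, NULL)   -- succeeds
INSERT INTO Demo (Id, Code) VALUES (2, NULL)   -- fails: duplicate key in the unique index
```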

Tuesday, November 23, 2010

My First Post

This is my first post. I'm so happy to finally have it up. I hope the knowledge shared through this blog will be as useful for others as it was for me. :)