sp_helpdb: How big is your database log file?

All SQL Server data modifications (updates, inserts, and deletes) are initially recorded in the database log file. Depending on the frequency of modification and your backup and recovery model, the size of the database log file can grow to large proportions in a short amount of time.

Use sp_helpdb to determine, among other things, the size of both data and log files separately. The stored procedure returns two separate recordsets.

Syntax

EXECUTE sp_helpdb MyDatabase
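If you only need log sizes, a complementary command (separate from sp_helpdb) is DBCC SQLPERF with the LOGSPACE option. It returns one row per database with the current log size in MB and the percentage of log space in use:

-- Log size (MB) and log space used (%) for every database on the instance
DBCC SQLPERF (LOGSPACE)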

INSERT (SQL Statement)

Description

Adds a new row to a table or a view.

 

Syntax

INSERT [INTO] { table_name | view_name } [ ( column_list ) ]
VALUES ( expression [ ,...n ] )

OR

INSERT [INTO] { table_name | view_name } [ ( column_list ) ]
select_statement

Example

INSERT table1 (c1)
VALUES (9)

OR

INSERT table1 (c1)
SELECT c1
FROM table2

Detail

INSERT appends new rows to a table. To replace data in a table, the DELETE or TRUNCATE TABLE statements must be used to clear existing data before loading new data with INSERT. To modify column values in existing rows, use UPDATE. To create a new table and load it with data in one step, use the INTO option of the SELECT statement.
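As an example of the last option, the following statement (table1_copy is a hypothetical name) creates a new table and loads it from table1 in a single step:

-- SELECT ... INTO creates table1_copy and populates it from table1
SELECT c1
INTO table1_copy
FROM table1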

INSERT permissions default to members of the sysadmin fixed server role, the db_owner and db_datawriter fixed database roles, and the table owner. Members of the sysadmin, db_owner, and db_securityadmin roles, and the table owner, can transfer permissions to other users.

Understanding SQL Server Memory Internals & Cache

Like all database servers, Microsoft SQL Server thrives on memory. Each instance of Microsoft SQL Server has its own memory address space. The SQL Server address space (the virtual memory allocated by Windows) consists of two main components: executable code and the memory pool. Let's examine each of these components in more detail.

SQL Server executable code

Executable code is basically SQL Server's own EXEs and DLLs loaded into the address space. The following is a breakdown of the various components:

  • SQL Server Engine
  • Open Data Services
  • Server Net-Libraries
  • Distributed Queries
  • Extended Stored Procedures (DLLs)
  • OLE Automation Stored Procedures

SQL Server memory pool

The memory pool is the unit of memory that an instance of SQL Server uses to create and manage data structures relating to client requests. The following data structures are allocated in the memory pool of an instance of SQL Server:

  • Buffer Cache: This is the pool of memory pages into which data pages are read. An important indicator of the performance of the buffer cache is the Buffer Cache Hit Ratio performance counter. It indicates the percentage of data pages found in the buffer cache as opposed to disk. A value of 95% indicates that pages were found in memory 95% of the time. The other 5% required physical disk access. A consistent value below 90% indicates that more physical memory is needed on the server.
  • Procedure Cache: This is the pool of memory pages containing the execution plans for all Transact-SQL statements currently executing in the instance. An important indicator of the performance of the procedure cache is the Procedure Cache Hit Ratio performance counter. It indicates the percentage of execution plan pages found in memory as opposed to disk.
  • Log Caches: This is the pool of memory used to read and write log pages. Each log has a set of cache pages. The log caches are managed separately from the buffer cache to reduce the synchronization between log and data buffers.
  • Connection Context: Each connection has a set of data structures that record the current state of the connection. These data structures hold items such as parameter values for stored procedures, cursor positioning information, and tables currently being referenced.
  • System-level Data Structures: These are data structures that hold data global to the instance, such as database descriptors and the lock table.

The buffer cache, procedure cache, and log caches are the only memory elements whose size is controlled by SQL Server.

A very important aspect to watch is whether SQL Server is using the maximum memory available on the system (assuming the system is dedicated to SQL Server). A system with fully utilized memory may be prone to performance bottlenecks when competition for resources increases. Prepared Transact-SQL statements, for example, may suffer when the procedure cache is unable to expand because the buffer cache is fully utilized.
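On SQL Server 2005 and later you can read the Buffer Cache Hit Ratio counter from Transact-SQL through the sys.dm_os_performance_counters view. The following is a minimal sketch; it assumes the standard counter names and uses a LIKE pattern because the object_name prefix differs for named instances:

-- Buffer cache hit ratio computed from its value and base counters
SELECT (a.cntr_value * 100.0) / b.cntr_value AS buffer_cache_hit_ratio_pct
FROM sys.dm_os_performance_counters AS a
JOIN sys.dm_os_performance_counters AS b
  ON a.object_name = b.object_name
WHERE a.object_name LIKE '%Buffer Manager%'
  AND a.counter_name = 'Buffer cache hit ratio'
  AND b.counter_name = 'Buffer cache hit ratio base'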

 

Related Articles

The 4 Pillars of SQL Server Performance: CPU, Memory, Disk I/O, & Cache Hit Ratio

xp_fixeddrives: Monitor SQL Server Disk Space Usage

Monitoring free disk space on SQL Server servers becomes crucial if your database growth is relatively fast. If you don’t have direct access to your server’s hard disks, xp_fixeddrives provides an easy way to monitor free disk space remotely.

The xp_fixeddrives system stored procedure returns a list of physical hard drives and the amount of free space on each one.

Syntax

EXEC master.dbo.xp_fixeddrives

xp_fixeddrives is an undocumented system stored procedure. Microsoft may not include it in future releases of SQL Server 2000 or MSDE.
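If you want to keep a history of free space or build alerts on it, you can capture the output with INSERT ... EXECUTE. The following is a minimal sketch that assumes the procedure's usual two-column result set (drive letter and free megabytes):

-- Capture the drive list into a temporary table for further processing
CREATE TABLE #fixed_drives (drive char(1), mb_free int)
INSERT INTO #fixed_drives (drive, mb_free)
EXECUTE master.dbo.xp_fixeddrives
SELECT drive, mb_free FROM #fixed_drives ORDER BY mb_free
DROP TABLE #fixed_drives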

DBCC CLEANTABLE: Reclaim space for dropped columns

DBCC CLEANTABLE reclaims table space after a variable-length column is dropped from a table using the ALTER TABLE DROP COLUMN statement. DBCC CLEANTABLE works for variable-length columns only. Variable-length columns are columns of type varchar, text, nvarchar, ntext, varbinary, and image.

 

Syntax

DBCC CLEANTABLE ('MyDatabase', 'MyTable')

You need sysadmin or db_owner privileges to run DBCC CLEANTABLE.
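A typical sequence, using hypothetical object names, is to drop the variable-length column and then reclaim the space it occupied:

-- notes is a hypothetical variable-length (varchar) column on MyTable
ALTER TABLE MyTable DROP COLUMN notes
-- Reclaim the space occupied by the dropped column
DBCC CLEANTABLE ('MyDatabase', 'MyTable')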

SSAS Tutorial: Building SQL Server Analysis Services Cubes

SQL Server 2005 Analysis Services (SSAS) projects are developed and deployed using the SQL Server Business Intelligence Development Studio, which is installed as part of the SQL Server installation. SQL Server Management Studio, on the other hand, can be used to manage the Analysis Services databases instantiated from these projects.

In this tutorial from Microsoft, you will learn how to build a SQL Server 2005 Analysis Services cube using the SQL Server Business Intelligence Development Studio. You will build the sample cube and deploy it on an instance of Analysis Services.

During the course of the tutorial, you will learn how to define data sources, data source views, cubes, dimensions, hierarchies and attributes. You will also learn how to view cube and dimension data by processing the deployed objects and populating them with data from the underlying data source.

Lesson 1: Defining a Data Source View within an Analysis Services project
Lesson 2: Defining and Deploying a Cube
Lesson 3: Modifying Measures, Attributes and Hierarchies
Lesson 4: Defining Advanced Attribute and Dimension Properties
Lesson 5: Defining Relationships Between Dimensions and Measure Groups
Lesson 6: Defining Calculations
Lesson 7: Defining Key Performance Indicators (KPIs)
Lesson 8: Defining Actions
Lesson 9: Defining Perspectives and Translations
Lesson 10: Defining Administrative Roles

The 4 Pillars of SQL Server Performance: CPU, Memory, Disk I/O, & Cache Hit Ratios

The most important performance counters to watch when tuning SQL Server performance are CPU usage, memory usage, cache hit ratios, and disk I/O. Even though CPU capacity and available memory are crucial, disk I/O is the most frequent cause of bottlenecks, so paying extra attention to disk I/O and cached disk access is worthwhile.

SQL Server CPU utilization

Measuring the amount of time the CPU spends running non-idle SQL Server threads is an important aspect of SQL Server performance monitoring and tuning. A consistently high CPU usage rate (around 80 to 90 percent or more) may indicate the need for a CPU upgrade or the addition of processors; alternatively, it may indicate a poorly tuned application or inefficient SQL code, in which case optimizing the application can lower CPU utilization. When you examine processor usage, consider the type of work the instance of SQL Server is performing. If SQL Server is performing a lot of calculations, such as queries involving aggregates or memory-bound queries that require no disk I/O, 100 percent processor utilization is not unusual. If this causes the performance of other applications to suffer, try changing the timing of the workload.

SQL Server memory utilization

By default, SQL Server changes its memory requirements dynamically based on available system resources. If SQL Server needs more memory, it queries the operating system to determine whether free physical memory is available and then uses the memory available. If SQL Server does not need the memory currently allocated to it, it releases the memory to the operating system. However, the option to dynamically use memory can be overridden using the min server memory, max server memory, and working set size server configuration options.

The more memory your server has, the more cached disk reads and writes it can perform. A lack of system memory can cause a high rate of non-cached disk reads and writes. Adding memory to your server can help lower physical disk access.
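For example, you can cap or raise the memory SQL Server uses with the max server memory option through sp_configure. The 2048 MB value below is only an illustration; choose a value appropriate for your server:

-- max server memory is an advanced option, so expose advanced options first
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'max server memory (MB)', 2048
RECONFIGURE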

SQL Server Cache Hit Ratios

SQL Server cache hit ratios indicate the percentage of data pages read from the SQL Server cache (memory) vs. those read from the physical disk. The SQL Server cache consists of the buffer cache and the procedure cache.

The buffer cache is the memory pool used to store SQL Server data pages. The buffer cache hit ratio indicates the percentage of data pages read from the buffer cache vs. physical disk. A 90% cache hit ratio indicates that pages were retrieved from memory 90% of the time; the other 10% were read from disk. A consistent value below 90% indicates that more physical memory is needed on the server.

The procedure cache is the memory pool used to store SQL Server execution plans. The procedure cache hit ratio indicates the percentage of execution plan pages read from the procedure cache vs. disk.

SQL Server disk I/O

Disk I/O is frequently the bottleneck in SQL Server systems. The I/O subsystem includes disks, disk controller cards, and the system bus. If disk I/O is consistently high, consider the following:

  • Move some database files to an additional disk or server.
  • Use a faster disk drive or a redundant array of inexpensive disks (RAID) device.
  • Add disks to a RAID array, if one is already being used.
  • Tune your application or database to reduce disk access operations. Consider index coverage, better indexes, and/or normalization.

Microsoft SQL Server uses Microsoft Windows I/O calls to perform disk reads and writes. SQL Server manages when and how disk I/O is performed, but the Windows operating system performs the underlying I/O operations. Applications and systems that are I/O-bound may keep the disk constantly active.
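On SQL Server 2005 and later, the sys.dm_io_virtual_file_stats function gives a per-file view of read/write activity and I/O stalls, which helps identify the database files that drive the disk load. A minimal sketch:

-- Cumulative I/O and stall time per database file since the last restart
SELECT DB_NAME(vfs.database_id) AS database_name,
       vfs.file_id,
       vfs.num_of_reads,
       vfs.num_of_writes,
       vfs.io_stall
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
ORDER BY vfs.io_stall DESC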

Different disk controllers and drivers use different amounts of CPU time to perform disk I/O. Efficient controllers and drivers use less time, leaving more processing time available for user applications and increasing overall throughput.

 

Related Articles

Understanding SQL Server Memory Internals & Cache

Tuning tempdb For Better SQL Server Performance

One of the most important areas of SQL Server performance is the tempdb database. It holds temporary user tables and the intermediate results used while preparing and optimizing queries. tempdb is in essence very similar to a user database, but it is not intended to persist data: it is replaced with a copy of the model database every time SQL Server starts. The architecture of tempdb in SQL Server 2005 did not change much from that of SQL Server 2000; however, its usage has been optimized and expanded in SQL Server 2005.

Optimizations to tempdb introduced since SQL Server 2005

SQL Server 2005 has a plethora of new features, among which internal optimizations tend to get lost. These are some of the enhancements made to tempdb in SQL Server 2005:

  • Internal worktables reused by execution plans across multiple executions are truncated down to 9 pages to save space.
  • Local temporary tables and table-valued functions are all cached when used in a stored procedure, function or trigger. Items that are used frequently stay in cache while least-used items are dropped to save cache space.
  • Logging in tempdb has been lowered for certain changes to reduce the amount of I/O generated. While UPDATE queries still log the original data and the new data (updated values), INSERT queries only log the new data and DELETE queries only log what was deleted.
  • Physical file contention in the tempdb database has been minimized with proportional fill. Each tempdb file is filled at the same rate, so each physical file on a separate spindle benefits from parallel I/O.
  • Temporary tables that are dropped from tempdb are handled in the background to reduce waiting by the host application.

Monitoring the tempdb database using performance counters

SQL Server 2005 provides a number of performance counters that can be used to monitor space usage in tempdb.

  • The Database: Log File(s) Size(KB) performance counter returns the cumulative size of all the log files in the database. This size can grow if you have not set a maximum size for the log in tempdb.
  • The Database: Log File(s) Used (KB) performance counter returns the cumulative used size of all log files in the database. A large active portion of the log in tempdb can be a warning sign that a long-running transaction is preventing log cleanup.
  • The Free Space in tempdb (KB) (SQL Server 2005) performance counter tracks free space in tempdb in kilobytes. Administrators use this counter to determine if tempdb is running low on free space so they can take appropriate corrective action. This counter accounts for space allocated by all three types of objects in tempdb.
  • The Version Store Size (KB) (SQL Server 2005) performance counter monitors the size, in KB, of both version stores. If a version store is not shrinking, it implies that a long-running transaction is preventing version store cleanup.
  • The Version Generation Rate (KB/s) (SQL Server 2005) performance counter monitors the version generation rate in kilobytes per second (KBps) in both version stores.
  • The Version Cleanup Rate (KB/s) (SQL Server 2005) performance counter monitors the version cleanup rate in KBps in all version stores. If the version cleanup rate is lower than the version generation rate, the version store will use more and more space in tempdb. However, if the version cleanup rate is 0 but the version generation rate is not, there is probably a long-running transaction that is preventing the version store cleanup.
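On SQL Server 2005 these counters can also be read from Transact-SQL through sys.dm_os_performance_counters. The following is a sketch that matches the counter names with LIKE patterns, because the exact object_name prefix varies between default and named instances:

-- tempdb space and version store counters
SELECT object_name, counter_name, instance_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name LIKE 'Free Space in tempdb%'
   OR counter_name LIKE 'Version Store Size%'
   OR counter_name LIKE 'Version Generation rate%'
   OR counter_name LIKE 'Version Cleanup rate%'
   OR (object_name LIKE '%Databases%'
       AND instance_name = 'tempdb'
       AND counter_name LIKE 'Log File(s)%')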

Monitoring the tempdb database using dynamic management views

SQL Server 2005 provides a number of dynamic management views (DMVs) that can be used to monitor space usage in tempdb. The DMVs provide rich features to track tempdb space usage. You can track tempdb space usage at the instance level, the session level, or by individual task (a task is also known as a batch).

sys.dm_db_file_space_usage

This DMV returns space allocation information for the files associated with tempdb. The allocation information is grouped by object category (user, internal, and version store) for the instance. The following code is an example.

SELECT
SUM (user_object_reserved_page_count)*8 as usr_obj_kb,
SUM (internal_object_reserved_page_count)*8 as internal_obj_kb,
SUM (version_store_reserved_page_count)*8 as version_store_kb,
SUM (unallocated_extent_page_count)*8 as freespace_kb,
SUM (mixed_extent_page_count)*8 as mixedextent_kb
FROM sys.dm_db_file_space_usage

The following sample output was generated when a query was executed with a hash-join that created a work table.

usr_obj_kb  internal_obj_kb  version_store_kb  freespace_kb  mixedextent_kb
8192        4608             0                 3840          1024

You can use the output to learn how tempdb space is being used. The following points will help you analyze the output.

A higher % allocation for user objects implies that objects created by applications (for example, global and local temporary tables and variables) are the major consumers of tempdb. This is not necessarily a cause for concern.

A higher % allocation for internal objects implies that the query plans make heavy use of tempdb. This is not necessarily a problem, but you may want to look at the query plans to see if alternate query plans can be generated by creating indexes or by re-formulating the queries so as to minimize tempdb space usage.

A higher % allocation for the version store implies that version store cleanup cannot keep pace with version generation. See if a long-running transaction is preventing version store cleanup. Or, a high transaction throughput might be generating a large number of versions per minute. The background task cleans up versions every minute.

sys.dm_db_session_space_usage

This DMV tracks the historical allocation and deallocation of pages in tempdb for active sessions. A session is established when a user connects to the database and remains active until the connection is terminated. During the course of the session, the user submits one or more batches. This DMV tracks tempdb space usage only for completed batches. The following code example shows the top five sessions that have allocated the most space for user objects and internal objects in tempdb. Because it covers only batches that have already completed, it reflects past activity, but it identifies the sessions with the heaviest tempdb use, which are a good place to look first if you want to reduce tempdb consumption. Note that this tempdb space usage does not take into account the impact of these sessions on the version store space.

SELECT top 5 *
FROM sys.dm_db_session_space_usage
ORDER BY (user_objects_alloc_page_count +
internal_objects_alloc_page_count) DESC

sys.dm_db_task_space_usage

This DMV tracks the allocation and deallocation of tempdb pages by the currently executing tasks (also called batches). This is extremely useful when you are running out of space in tempdb: you can identify tasks with heavy tempdb space use and optionally kill them, and then analyze why these tasks require so much tempdb space and take corrective action. You can also join this DMV with other DMVs to identify the SQL statement and its corresponding query plan for deeper analysis (see the sketch after the query below). The following query shows the top five currently executing tasks that are consuming the most tempdb space. The tempdb space usage returned does not account for space consumed by the version store.

SELECT top 5 *
FROM sys.dm_db_task_space_usage
ORDER BY (user_objects_alloc_page_count +
internal_objects_alloc_page_count) DESC
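As mentioned above, you can join this DMV with other DMVs to see the statement behind each heavy task. The following sketch joins it to sys.dm_exec_requests and sys.dm_exec_sql_text to show the text of the currently executing statements allocating the most tempdb space:

-- Top tempdb-consuming tasks together with the text of their statements
SELECT TOP 5
    tsu.session_id,
    tsu.user_objects_alloc_page_count,
    tsu.internal_objects_alloc_page_count,
    st.text AS sql_text
FROM sys.dm_db_task_space_usage AS tsu
JOIN sys.dm_exec_requests AS r
    ON tsu.session_id = r.session_id
   AND tsu.request_id = r.request_id
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS st
ORDER BY (tsu.user_objects_alloc_page_count +
          tsu.internal_objects_alloc_page_count) DESC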

Connecting to SQL Server 2005 and SQL Server Express

Protocol configuration and firewalls play an important role when you are trying to establish a connection to any database server. SQL Server 2005 and SQL Server 2005 Express are no exception. The greater the need to protect databases against attacks, the harder it is to troubleshoot failing connection attempts.

Firewalls

The first thing that can block a connection to SQL Server is a firewall. If you have any firewalls, make sure they are configured to allow connections to SQL Server. The default TCP port that SQL Server uses is 1433. Common firewalls include McAfee, Norton, Windows Firewall (which ships with Windows XP SP2), and Internet Connection Firewall (ICF), which ships with earlier versions of Windows XP and with Windows Server 2003.

Service verification

Before you can connect to SQL Server 2005, you need to verify that SQL Server is running. By default, the SQL Server Express edition is installed as a named instance (SQLEXPRESS). This means that you need to access it by using (local)\SQLEXPRESS from the local machine; SQLEXPRESS by itself, without the (local)\ prefix, will not work. You can also use the 127.0.0.1 IP address on the local machine (127.0.0.1\SQLEXPRESS) to avoid DNS-related problems.

To verify that the service is running, type sqlcmd -S(local)\SQLEXPRESS at the command prompt. If you see the "1>" prompt, you have connected successfully. Type exit to leave the sqlcmd program.

Connection protocols

SQL Server 2005 supports a number of protocols for connecting client applications with the database server. These protocols are TCP, Named Pipes (NP), Shared Memory (SM), VIA, and HTTP. Only TCP, NP, and SM are supported in SQL Server Express.

By default, only SM is accessible for SQL Server Express on the local machine. This means that connections from a remote machine to SQL Server Express will fail unless TCP and/or NP is enabled. If you want to use TCP and NP, you must explicitly turn them on. Use SQL Server Configuration Manager to explicitly enable TCP and NP for server and client. After enabling TCP and NP, you need to start the SQL Browser service (See SQL Browser below).

If you are connecting remotely, you need to substitute “(local)” with the IP address of the server. You can also use the server name instead of the IP address if DNS can resolve the name.
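For example, assuming a hypothetical server at 192.168.1.25 (or named MYSERVER) running the SQLEXPRESS instance, a remote connection with Windows authentication would look like this:

sqlcmd -S 192.168.1.25\SQLEXPRESS -E

or, if DNS can resolve the server name:

sqlcmd -S MYSERVER\SQLEXPRESS -E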

If you are connecting via a specific IP address, make sure you enable the connection for it. In SQL Configuration Manager, expand the SQL Server 2005 Network Configuration node then select TCP/IP Properties from the pane on the right. Select the IP Addresses tab and change Enabled to Yes for the specific IP address.

SQL Server Configuration Manager

The SQL Server Configuration Manager in SQL Server 2005 and SQL Server Express replaces both the Client Network Utility and the Server Network Utility. It allows you to configure the protocols that SQL Server listens on as well as the protocols that ADO.NET 2.0 applications can use. However, to configure client protocols for applications that use ADO instead of ADO.NET 2.0, you still need to use the Client Network Utility. The Client Network Utility ships with ADO and is part of Windows 2000, Windows XP, and Windows 2003.

To connect to SQL Server Express remotely, make sure that the server can listen to TCP connections. From the SQL Server Configuration Manager, expand “SQL Server 2005 Network Configuration” and navigate to “Protocols for SQL Server Express” then enable TCP. You need to restart the server for the change to take effect.

If you are using Teratrax Database Manager, you can configure client protocols by clicking on the “Client Network Utility” button in the connection dialog. Make sure that you meet the operating system requirement for Teratrax Database Manager (Windows 2000, Windows XP, or Windows 2003).

SQL Server Browser

SQL Browser is a new service in SQL Server 2005 that is used to identify the ports that named instances listen on. The SM protocol does not use this service. This service is turned off in SQL Server Express by default, which means you have to start it before remote access can work. You can start the SQL Browser service from the Service Manager or by typing "NET START SQLBROWSER" at the command line.

SQL Browser listens on UDP port 1434. However, if pre-SP3 instances of SQL Server 2000 or MSDE are holding UDP port 1434, SQL Browser name resolution may fail because those versions may refuse to give up the port. The workaround is to upgrade all SQL Server 2000/MSDE instances on the machine to SP3 or later.