making it easier to find your app's SQL in the Profiler's output

This is a trick I’ve used in the past when the database I was running the profiler against had a lot going on and I just wanted to filter it based on a particular app.  This post describes it nicely!

SQL Profiler and “Application Name=MyApp;”

Here’s a great trick that can be really handy if you’re a fan of the SQL Profiler. In your application connection strings add the “Application Name” keyword/value.
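For example (hypothetical server/database/app names – the only part that matters here is the last key/value pair):

    Data Source=myserver;Initial Catalog=mydb;Integrated Security=SSPI;Application Name=MyApp;

With that in place, your app’s statements show up under that name in Profiler’s ApplicationName column, so a simple column filter on ApplicationName narrows the trace down to just your app’s traffic.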


want to help make excel 12 faster for *your* data?

Ok, now admittedly it’s tempting to use some of the O12 features and whip up something silly like a Mandelbrot generator, but still – I’m sure someone out there has a spreadsheet they’re hoping could get quicker in Excel 2007! 🙂

https://i2.wp.com/www.math.binghamton.edu/gifs/mandel.gif

Help us make Excel 2007 faster …

Given the near-infinite variety of things we see people build in Excel, we are always looking for good examples of workbooks that are calculation-intensive to help us compare Excel 2007’s calculation performance with previous versions’ performance on real-world files that matter to customers. At some point last week it dawned on me that some of the Excel 12 blog readers might be in a position to help (given the number of comments and emails I have had from folks interested in or concerned about calculation and function speed). So I’d like to appeal to folks to send in calculation-intensive workbooks they would like us to use as part of our performance tuning exercise.


SQL Server's row versioning – half of a conversation

Below are my parts of an email thread I was involved in – I haven’t heard back from the other party, so I’m not posting his side of the thread since I don’t have his permission (yet).  Admittedly, this means that without sufficient context, some of this will be confusing, but hopefully you’ll get some value out of it.

[His part deleted]

Agreed – IMHO, the SQL Server team should be shouting about this from the rooftops – the reason they may not be doing so is because the response from the Oracle camp would (correctly) be: “Hey, we’ve had that for well over a decade – congrats on finally catching up with that feature!  Tell us when you can do a real grid!”

I did do a post about the feature.  I’m not sure if it properly covers the issues below, but give it a read and let me know 🙂

Also, I’ll try to answer things in this email as well – feel free to post it or tell me to if you think there’s enough additional value to have it posted.

http://blog.sublogic.com/2006/02/oracles-multiversioning-now-available-in-sql-server/

Also, there’s a *great* (IMHO) TechNote about it.

http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnsql90/html/sql2k5snapshotisol.asp

[His part deleted]

You don’t need A_S_I (ALLOW_SNAPSHOT_ISOLATION) turned on to get this – once RCS (READ_COMMITTED_SNAPSHOT) is on, writes to the database go to 2 places – the actual table and a copy in tempdb with the versioning info (see the technote for a good description of the linked list that’s created).
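For reference, turning it on is a one-liner – a minimal sketch, assuming a hypothetical database named MyDb (the option can only be switched while yours is the only active connection in the database):

    -- hypothetical database name
    ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON;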

The versioning information isn’t actually kept around for the statement/transaction that performs the write – it’s kept for the other statements/transactions going on in the system.  After all, whoever wrote the data wants to see the post-write data.  It’s everyone else in the system that doesn’t want to see it – they want to see the pre-write state instead.

When 1) a read committed statement starts (with RCS on) or 2) a snapshot isolation transaction starts (with A_S_I also on and snapshot isolation requested), it takes note of the current “transaction sequence number” (just a global counter incremented by committed transactions).  That TSN is kept along with the query for the duration of the statement/transaction.

As queries happen during the statement/transaction, they evaluate based on that point in time of the database (the TSN is a point in time, much like a changeset in a version control system).  If a query needs a particular row that’s since been modified/deleted/whatever, it can’t get it from the current contents of the table, but that’s not a problem – it goes over to tempdb and finds the right version of the row there (following the linked list from the technote).
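To make that concrete, here’s a minimal sketch with two connections (hypothetical MyDb/Products names, and assuming ALLOW_SNAPSHOT_ISOLATION has been enabled for the database):

    -- one-time setup (hypothetical database name)
    ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;

    -- connection 1: start a snapshot transaction and read a row
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
    BEGIN TRAN;
    SELECT Price FROM dbo.Products WHERE ProductId = 1;  -- say this returns 10

    -- connection 2 (meanwhile): change that row and commit
    UPDATE dbo.Products SET Price = 20 WHERE ProductId = 1;

    -- connection 1: read it again - still sees 10, served from the row version in tempdb
    SELECT Price FROM dbo.Products WHERE ProductId = 1;
    COMMIT;

Connection 1 keeps seeing the database as of its TSN for the life of the transaction, no matter what gets committed around it.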

[His part deleted]

Couple issues here.

How long versions are kept
First, as described above, the writers cause the versioning information to be stored, but it’s all the other things going on in the system that determine how long it’s kept around.  For instance, if I kept a snapshot isolation transaction running for 10 minutes, that’s how long all the other writes would have to be kept around in tempdb.  This is because at any point during my snapshot isolation transaction, I may query any table I want, and I have to see the table in whatever state it was in 10 minutes ago when I started my transaction.

This is why snapshot isolation gets its own A_S_I setting and explicitly has to be requested – it doesn’t change the number of writes to tempdb (those are fixed once row versioning starts getting used), but it does have a significant effect on how long the versioning information is required.  When just RCS is on (and no snapshot isolation is in effect), you don’t typically have to keep around the row versions nearly as long since they’re only needed to evaluate currently running statements.

Once snapshot isolation is in effect, the row versions have to be kept around (potentially) much longer (based on how long the transactions run, of course).  For instance, if all your transactions were a single statement, they would have equivalent load on the tempdb, but that’s rarely the case, of course 🙂
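If you want to watch this happening, SQL Server 2005 exposes the version store through a couple of DMVs – a rough sketch:

    -- roughly how much row-version data is sitting in tempdb right now
    SELECT COUNT(*) AS version_rows
    FROM sys.dm_tran_version_store;

    -- the snapshot transactions currently forcing those versions to be retained
    SELECT transaction_id, transaction_sequence_num, elapsed_time_seconds
    FROM sys.dm_tran_active_snapshot_database_transactions;

Leave a snapshot transaction open, do some writes from another connection, and you can watch the first number grow until that transaction commits or rolls back.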

Implicit transactions
Each statement commits as its own transaction by default, but it’s a bad idea (IMHO) to call this implicit transactions (I’ve seen it called “autocommit mode” a bit).  Admittedly, this is more trivia than anything else, but you can turn on IMPLICIT_TRANSACTIONS for a connection, and when one of the “state of the data matters to me” commands executes, an implicit transaction starts (and does *not* commit as soon as the statement is done).  These implicit transactions span multiple statements, and you need to commit or roll them back like any other transaction – it just saves you from doing BEGIN TRAN.

[His part deleted]

I’m not sure I’m parsing the above sentence correctly, so I’ll just re-state the above – once a snapshot isolation transaction starts, all writes done by *other* transactions have to be kept around in tempdb until that transaction ends.  This is necessary since the snapshot transaction has to see the entire database as it was at the point in time when it started, with only its own changes applied on top.

Row Versioning goodness

SQL Server 2005 provides non-locking, non-blocking read consistency to your users via snapshot isolation. Find out when to use it to improve performance and reduce latency in your applications.


SQL Server semantics: autocommit (every statement is a transaction) vs. implicit transactions

I’ve seen these get confused, so I wanted to give a quick definition:

1) autocommit mode: each statement is its own transaction (selects don’t need to be committed, of course)
2) implicit transaction: any “the state of the data matters to me” statement implicitly starts a transaction (that does *not* commit automatically at the end of the statement).  All it does is save you from having to “BEGIN TRAN” explicitly – you still need to commit or roll back (see the sketch below).
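To make the difference concrete, a minimal sketch (hypothetical table name):

    -- autocommit (the default): the statement is its own transaction
    UPDATE dbo.Accounts SET Balance = Balance - 10 WHERE AccountId = 1;
    SELECT @@TRANCOUNT;  -- 0: nothing left open

    -- implicit transactions: the UPDATE quietly starts a transaction...
    SET IMPLICIT_TRANSACTIONS ON;
    UPDATE dbo.Accounts SET Balance = Balance - 10 WHERE AccountId = 1;
    SELECT @@TRANCOUNT;  -- 1: still open
    COMMIT;              -- ...and it's on you to end it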

A couple of msdn pages that may help:

connection string for SQL Server Express from asp.net 2.0

One problem I hit that I didn’t expect, while playing around with Visual Web Designer Express and SQL Server 2005 Express, was the Web Site Administration Tool (WSAT) having trouble connecting to said SQL Server Express instance – even after aspnet_regsql had run fine, and I could see the aspnetdb database and the requisite tables (through SQL Server Management Studio Express).

Since SSMS Express saw things fine, I pulled up my machine.config (in the \WINDOWS\Microsoft.NET\Framework\v2.0.50727\CONFIG dir).  First, I verified that the providers were set correctly (AspNetSqlMembershipProvider, AspNetSqlProfileProvider, etc) and then checked the connection string.  It was still the default, which was:

    <add name="LocalSqlServer" connectionString="data source=.\SQLEXPRESS;Integrated Security=SSPI;AttachDBFilename=|DataDirectory|aspnetdb.mdf;User Instance=true" providerName="System.Data.SqlClient" />

Looking at ScottGu’s blog post about changing the configuration to point to a “real” SQL Server instance, I changed it to this, which worked!

    <add name="LocalSqlServer" connectionString="data source=jmanning-p4\SQLEXPRESS;Integrated Security=SSPI;Initial Catalog=aspnetdb" providerName="System.Data.SqlClient" />

It’s still a SQL Server Express instance (as you can see from the “\SQLEXPRESS” part above, which would be dropped if I were pointing at a full SQL Server instance).  My database was correctly at Microsoft SQL Server\MSSQL.1\MSSQL\Data\aspnetdb.mdf, so I’m not sure what was wrong with the first form, but I’d rather use the latter form anyway, so I’m happy with the change.

dropping the asp.net 2.0 tables (membership, roles, etc.)

I had accidentally run aspnet_regsql on the wrong database (I meant to let it go ahead with the default aspnetdb) and I ended up with the asp.net tables in my actual database.  In case someone else hits this, here are the lines to run.  Note that you’ll likely need to run it at least 2 times, maybe 3, due to FK relationships and me being too lazy to order them properly.

DROP TABLE [dbo].[aspnet_Applications]
DROP TABLE [dbo].[aspnet_Membership]
DROP TABLE [dbo].[aspnet_Paths]
DROP TABLE [dbo].[aspnet_PersonalizationAllUsers]
DROP TABLE [dbo].[aspnet_PersonalizationPerUser]
DROP TABLE [dbo].[aspnet_Profile]
DROP TABLE [dbo].[aspnet_Roles]
DROP TABLE [dbo].[aspnet_SchemaVersions]
DROP TABLE [dbo].[aspnet_Users]
DROP TABLE [dbo].[aspnet_UsersInRoles]
DROP TABLE [dbo].[aspnet_WebEvent_Events]
GO