A technical blog about my projects, challenges, and discoveries in the world of data warehousing using SQL Server, Power BI Desktop, DevExpress, and more.

Saturday, November 20, 2010

Transactional Replication: A Journey into the Unknown Part 2

I worked some more on this transactional replication process this morning. I discovered that the snapshot creation problem was being caused by one or more "views" that included select * somewhere in them. I eliminated all my personal views and left only the "official" views created automatically by the Shelby v.5 system, and the snapshot creation process completed normally.

After this, things went smoothly until I tried to initiate the first replication process, at which point I started getting errors saying that the "replication subsystem failed to load" and also that the process was "rejected" because the job was "suspended." It took more digging, but I discovered that all I needed to do was restart the SQL Server Agent service. After I restarted it and tried again, the synchronization process "started successfully."

As I type this, the first synchronization is still in progress. It's been running for several minutes, and I'm not sure when it will complete. I'm going to let it keep running, and I'll post again once I have more to report.

For now, here are three things I learned about implementing Transactional Replication that were not included in the 1-2-3 steps of the SQL Server Management Studio wizard:


  1. Make sure the SQL Server Browser service is running on the Principal/Distribution server(s).
  2. Do not include any views as replication articles if they include select * anywhere in them.
  3. After setting up the Subscription on the Subscriber server, restart the SQL Server Agent.

Knowing these three points would have helped me avoid several minutes of hunting down the causes of problems and then working through the troubleshooting steps to fix them. I'm sure there will be more to learn ahead.

Transactional Replication: A Journey into the Unknown Part 1

As excited as I am about the potential that an OLAP database would offer for reporting and other business intelligence solutions, I discovered yesterday that with regard to SQL Server concepts, high availability is currently more on the minds of our development team. As a result I have started exploring concepts such as replication, mirroring, fail-over databases, and such.

After reading over all the options, I have decided to try to create a transactional replication solution as a test case. My desktop will be the primary server (where the transactional database lives) and also the Publication server (where the database is "staged" for replication). My laptop will be the Subscription server (where the remote duplicated database lives). 

After installing SQL Server Management Studio and the Replication tools onto my laptop, I ran the SSMS wizards on both my desktop and my laptop. The desktop SSMS installation ran the Publication wizard without a hitch, except that the SQL Agent didn't run. I started the SQL Agent service manually after the fact. At that point, everything appeared normal.

Then I ran the subscription wizard. I had trouble getting the SSMS on the laptop to connect to the desktop server instance until I figured out that the SQL Server Browser service was needed but was not running. I started the browser, and then the laptop connected to the desktop like a champ. I walked through the wizard without any trouble until I got to the message that the initial snapshot of the source database needed to be made before I could continue.

I checked the desktop SSMS replication properties to verify the existence of the snapshot, only to instead find out that the snapshot did not exist. I tried to run the SQL Snapshot Agent manually, but I started getting errors during the process. Something about GROUP BY and the need for LEFT OUTER JOIN. 

Well, this took me a couple more hours than I was expecting, so I will have to leave the troubleshooting of the Snapshot Agent for another day. But once I figure out what is happening with that, I'll post the next chapter in the transactional replication adventure.

Tuesday, November 16, 2010

Top 10 Things I Learned Thursday at SQL Pass Summit

Sorry for the delay in posting this last Top 10 list of things I learned at the SQL PASS Summit in Seattle. Here is the final list:

10. The Paramount Hotel is probably where I need to stay next year if I get to come back.
9. A data mart/data warehouse solution would answer a lot of the temporal reporting challenges I have been unable to solve with straight T-SQL against the OLTP database.
8. The query execution process turns out to have three layers: the visible "typed" layer, the logical layer, and the physical layer. I was only aware of the first two before.
7. Query optimization is more arcane and difficult than I would have ever thought (not that I ever thought about it).
6. The "shared data sources" and "mini-charts" in Reporting Services 2008 R2 will be a great help for self-service reporting.
5. The T-SQL function APPLY allows the query to apply a calculated table column or function to every row of the "left" table (see the sketch after this list). Believe me, it is much more useful than it sounds.
4. "ETL" means "extract, transform, and load."
3. I need to learn how to use the ETL functions in SQL Server Integration Services.
2. A "fact table" is a table comprised of measurable (i.e. usually numeric) "facts" along with a set of keys tying the facts to "dimension tables" comprised of descriptive (i.e. usually non-numeric) "dimensions." Together these two kinds of tables form a "star schema," the fundamental structure of data marts and data warehouses.
1. I need to learn how to create a data mart for the Shelby v.5 OLTP database.
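
To make item 5 concrete, here is a minimal sketch of APPLY using two tables that appear elsewhere on this blog, Shelby.NANames and Shelby.CNHst (the column names are the ones from my other giving examples; adjust them for your own data). It lists each name alongside his or her three most recent contributions, something an ordinary join cannot do with TOP. Like the windowing functions discussed in other posts, APPLY requires SQL Server 2005 or later.

select
 Names.NameCounter,
 Names.FirstMiddle,
 Names.LastName,
 Recent.CNDate,
 Recent.Amount
from
 Shelby.NANames as Names outer apply
 (select top 3 C.CNDate, C.Amount
  from Shelby.CNHst as C
  where C.NameCounter = Names.NameCounter
  order by C.CNDate desc) as Recent

Because this is OUTER APPLY, names with no giving history still appear (with NULL date and amount); CROSS APPLY would drop them instead.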

As a result of SQL PASS Summit 2010, you will be reading a lot more about PowerPivot, data marts, OLAP cubes, and more. I certainly have enough to learn to keep me busy until next year's Summit!

Thursday, November 11, 2010

Top 10 Things I Learned at SQL PASS Summit Today (Wednesday)

10. The joy of having the ability to play arcade games an unlimited number of times all night long is somewhat tarnished by two factors: a) the hundreds of other people in the same arcade with equally unlimited number of times to play and b) I am no longer 12 years old.
9. What I do is actually a hybrid of T-SQL development and BI development.
8. It is an interesting feeling being the SQL novice in every conversation during the conference.
7. I should never post proprietary code on my blog. I don't think I've done that exactly, but the references to the Shelby v.5 database might be a gray area on that one. I am going to have to reconsider my posting style, and I may change how I handle code that is specifically for Shelby v.5 data.
6. I really need to learn the DAX "language" to make PowerPivot a truly potent reporting tool.
5. PowerPivot has no innate "grouping" ability. This means that the query tables I make for PowerPivot usage need to have grouping columns (see the sketch after this list). A little planning ahead will go a long way in making that easier.
4. SharePoint is growing in importance as a collaborative platform in the Microsoft world, meaning that it will be a necessary tool for managing BI tools.
3. I can set up a SQL Azure account for just under $10 a month to have my own little database-in-the-cloud architecture to play with.
2. Reporting Services uses VB .Net for its expression language.
1. I have not even scratched the surface of the amazing things Reporting Services can do.
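
As an illustration of item 5, here is a minimal sketch of what I mean by building grouping columns into the query table itself, using the CNHst giving columns that appear elsewhere on this blog (adjust the names for your own data). Because the year and month are pre-computed in the query, PowerPivot can slice and summarize by them even though it cannot create those groupings on its own:

select
 C.NameCounter,
 C.Amount,
 GiftYear = year(C.CNDate),
 GiftMonth = datename(month, C.CNDate)
from
 Shelby.CNHst as C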

Tuesday, November 9, 2010

Top Ten Things I Learned at SQL PASS 2010 Today

10. The food served at dinner leaves much to be desired.
9. Report Designer v.3 (in SQL Server 2008 R2) has a mapping feature that can be really cool, but needs a lot of massaging to be truly useful.
8. Building a cube in Analysis Services works very easily if the data are stored in a "star schema." I don't know what that is, but I intend to find out.
7. The PowerPivot tool in MS Excel 2010 is not innate to Excel and must be downloaded and installed separately.
6. The PowerPivot tool is a basic cube technology, so it may not be necessary to learn how to make a "star schema" in order to use cube-style analysis on Shelby v.5 data.
5. Never assume that room 2 is immediately adjacent to room 3.
4. Recursive Common Table Expressions, which I have only recently discovered how to use, should be avoided because of the overhead they add to a query.
3. Tally tables are a way to avoid recursive CTEs (see the sketch after this list).
2. SQL Server "Denali" is going to be available tomorrow for public review of the beta code.
1. SQL Server "Denali" includes "Crescent," a tool that will take PowerPivot ease of use and put it into Reporting Services, along with a slew of new graphical representation report tools that are simply mind-blowing.
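
For item 3, here is a minimal sketch of the tally idea (no Shelby tables required). Instead of a recursive CTE that counts from 1 upward, a small inline tally of numbers can be generated by cross joining two lists of digits, a cousin of the derived-table trick I use elsewhere on this blog to produce the thirteen accounting periods. Adding a third digit list would extend it to a thousand rows:

select N = D1.Digit * 10 + D2.Digit + 1
from
 (select Digit = 0 union select 1 union select 2 union select 3 union select 4
  union select 5 union select 6 union select 7 union select 8 union select 9) as D1 cross join
 (select Digit = 0 union select 1 union select 2 union select 3 union select 4
  union select 5 union select 6 union select 7 union select 8 union select 9) as D2
where D1.Digit * 10 + D2.Digit + 1 <= 13
order by N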

Tuesday, November 2, 2010

MsgBox Response Values Not Returned

I ran into a very odd problem today. I was programming some VBScript into a ShelbyQUERY report design, and I wanted to give the user a Yes/No option using a MsgBox() function. Normally I just use MsgBox() to inform the user of things or to double-check my own work during development by popping up variable values during execution. So normally the only button to click is "OK," and the program resumes after the click.

But today I wanted to know whether the user clicked "Yes" or "No," so I changed the parameters and tried to capture the value of the click. But I started to get unexpected results, and I got the same result whether I clicked "Yes" or "No."

After doing some digging and double-checking the value being returned from the MsgBox() buttons, I discovered that no values were being returned. I tested this with a much simpler case that was not part of the main script, and sure enough, it verified that the MsgBox() function returns no value to the main script.

I believe the problem is not with VBScript per se, but with the implementation of it in the Report Designer (the Shelby v.5 flavor of ActiveReports). InputBox() works just fine, but MsgBox() does not, apparently.

So, when you are writing VBScript for the ShelbyQUERY Report Designer, my advice to you is to use MsgBox() only for informational pop-up messages and not for capturing responses. InputBox() is the only way to capture a response from the user, even when you just need a "yes/no" response.

Monday, November 1, 2010

Using Variables in T-SQL

Although there is no way to create a "pop-up" requester using Transact-SQL on its own, there is a way to handle frequently changing parameters that makes them easier to maintain over time. Instead of leaving them buried down in the WHERE clause, which is usually near the bottom of the query and can be sandwiched between the FROM clause and the GROUP BY clause (if there is one), it is possible to move parameters to the very top of the query, even above the SELECT clause, by using variables. Using variables in a query is a three-step process:

First, define the names of the variables and what type of data they will contain.
Second, set the values of the variables.
Third, reference the variables in the query wherever you normally would use the explicit value.

The first step is to define the names and types of the variables you are using. There is one naming convention you must follow: the name of the variable must begin with an @ symbol. After that, any combination of letters and numbers is allowed. The datatype may be any valid datatype supported by SQL Server.  There is a list of them at http://msdn.microsoft.com/en-us/library/ms187752(SQL.90).aspx. The syntax for defining the names and types is as follows:

DECLARE @variable AS datatype [, @variable2 AS datatype ... , @variableN AS datatype];

You may declare as many variables as needed in one DECLARE statement. Note the semicolon at the end of the statement. It is usually optional, but it clarifies the end of a SQL statement.

After defining the variables, you set their values. Unlike the DECLARE statement, the SET statement may only affect one variable at a time:

SET @variable = value;

Again, the semicolon at the end is usually optional. However, if you are going to use a Common Table Expression to define a "view" before the main query, you must include the semicolon after the SET statement.
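
Here is a minimal sketch of that situation, reusing the CNHst columns from the example below (the @MinAmount cutoff is only for illustration). The semicolon after the SET line is required because a WITH immediately follows it:

declare @MinAmount as money;
set @MinAmount = 100.00;

with LargeGifts as
 (select C.NameCounter, C.Amount from Shelby.CNHst as C where C.Amount >= @MinAmount)
select
 NameCounter, sum(Amount) as TotalLargeGifts
from
 LargeGifts
group by
 NameCounter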

Finally, you use the variable in the query. Put the @variable reference anyplace where you would normally type in the specific value. In the future, you can update the value by changing the variable assignment in the SET statement instead of searching through the query for the explicit value.

Here is an example of using two variables, @StartDate and @EndDate, to search for all the giving recorded in the CNHst table between two dates.

-- Step 1: declare the variables and their datatypes
declare @StartDate as datetime, @EndDate as datetime;

-- Step 2: set the values of the variables
set @StartDate = '1/1/2010';
set @EndDate = '3/31/2020';

-- Step 3: reference the variables wherever the explicit values would go
select
 C.NameCounter, sum(C.Amount) as TotalGiving
from
 Shelby.CNHst as C
where
 datediff(d, @StartDate, C.CNDate) >= 0 and datediff(d, C.CNDate, @EndDate) >= 0
group by
 C.NameCounter


Even in this simple example, you can see that it is easier to update the dates at the top of the query rather than updating them in the WHERE clause. Once you are comfortable using variables, you will find them immensely helpful in making your queries easy to update over time.

Monday, August 23, 2010

Simple BFS Query for YTD/Annual Values and No Projects

I was recently asked to help a customer with a Budgeted Financial Statement type query that could be pushed to the Shelby v.5 Dashboard. The customer was only looking for year-to-date actual values and annual budget values. There was no need to break out by Project code. Such a request greatly simplified the query writing process over what I have been doing and documenting in previous posts. Without needing to break out Project codes I could use the GLSummary table, which already aggregates the actual transactions for each period for each account. Without needing to do any specific period reporting, I could use the GLBudget table in its native form, without unpivoting it into periodic rows. For this project I also ignored budget revisions, which further simplifies the query.

I have also discovered recently that ShelbyQUERY supports the use of WITH to define inline views before the main query's SELECT clause, which is great because if you need the same view multiple times in the main query, you can use an alias instead of typing the whole thing each time. It also moves the bulk of the view syntax out of the FROM clause, meaning that the main query's FROM clause is much more readable. One thing to remember if you are using WITH to define an inline view is that any lines of SQL code that come before it must be terminated with a semicolon. The only thing that might come in front of that in ShelbyQUERY would be variable DECLARE and SET commands. Generally the semicolon is optional for DECLARE and SET, but if there is a WITH, the semicolon is not optional.

In the query below I used a WITH statement to define the subquery that adds the columns that make it possible to do Total Line aggregates to the basic GLAcct table. I have already explained how that piece works in previous posts. I also declare seven variables so that the user can define a fiscal year, a company number, a fund number range, a department number range, and whether or not Period 13 (commonly called the audit period) will be included in the calculations.

declare @FiscalYear as smallint, @CoNu as tinyint, @FundStart as smallint, @FundEnd as smallint, @DeptStart as smallint, @DeptEnd as smallint, @UsePeriod13 as varchar(1);
set @FiscalYear = 2008;
set @CoNu = 1;
set @FundStart = 0;
set @FundEnd = 999;
set @DeptStart = 0;
set @DeptEnd = 999;
set @UsePeriod13 = 'Y'; -- Change to 'N' to exclude Period 13

with GLAccounts as (select A1.CoNu, A1.BeginDate, A1.FundNu, A1.DeptNu, A1.AcctNu, LineType = A1.LineType, Descr = max(A1.Descr), StmtType = max(A1.StmtType),
T0 = sum(case when A2.LineType like '[0-9]' then 1 else 0 end) + case when A1.LineType = 'D' then 1 else 0 end,
T1 = sum(case when A2.LineType like '[1-9]' then 1 else 0 end) + case when A1.LineType = 'D' then 1 else 0 end,
T2 = sum(case when A2.LineType like '[2-9]' then 1 else 0 end) + case when A1.LineType = 'D' then 1 else 0 end,
T3 = sum(case when A2.LineType like '[3-9]' then 1 else 0 end) + case when A1.LineType = 'D' then 1 else 0 end,
T4 = sum(case when A2.LineType like '[4-9]' then 1 else 0 end) + case when A1.LineType = 'D' then 1 else 0 end,
T5 = sum(case when A2.LineType like '[5-9]' then 1 else 0 end) + case when A1.LineType = 'D' then 1 else 0 end,
T6 = sum(case when A2.LineType like '[6-9]' then 1 else 0 end) + case when A1.LineType = 'D' then 1 else 0 end,
T7 = sum(case when A2.LineType like '[7-9]' then 1 else 0 end) + case when A1.LineType = 'D' then 1 else 0 end,
T8 = sum(case when A2.LineType like '[8-9]' then 1 else 0 end) + case when A1.LineType = 'D' then 1 else 0 end,
T9 = sum(case when A2.LineType like '[9-9]' then 1 else 0 end) + case when A1.LineType = 'D' then 1 else 0 end
from
Shelby.GLAcct as A1 left join Shelby.GLAcct as A2 on A1.CoNu = A2.CoNu and A1.BeginDate = A2.BeginDate and (A1.FundNu > A2.FundNu or (A1.FundNu = A2.FundNu and A1.DeptNu > A2.DeptNu) or (A1.FundNu = A2.FundNu and A1.DeptNu = A2.DeptNu and A1.AcctNu >= A2.AcctNu)) and isnumeric(A2.LineType) = 1
where year(A1.BeginDate) = @FiscalYear and A1.CoNu = @CoNu and A1.FundNu between @FundStart and @FundEnd and A1.DeptNu between @DeptStart and @DeptEnd and A1.StmtType in ('I', 'E')
group by A1.CoNu, A1.BeginDate, A1.FundNu, A1.DeptNu, A1.AcctNu, A1.LineType)

select
FiscalYear = @FiscalYear,
FundNu = GLAccounts.FundNu,
DeptNu = GLAccounts.DeptNu,
AcctNu = GLAccounts.AcctNu,
AcctDescription = GLAccounts.Descr,
YTD_Actual = case GLAccounts.StmtType when 'I' then -1 else 1 end * case GLAccounts.LineType when 'D' then GLSummary.Amt01 + GLSummary.Amt02 + GLSummary.Amt03 + GLSummary.Amt04 + GLSummary.Amt05 + GLSummary.Amt06 + GLSummary.Amt07 + GLSummary.Amt08 + GLSummary.Amt09 + GLSummary.Amt10 + GLSummary.Amt11 + GLSummary.Amt12 + case @UsePeriod13 When 'Y' then GLSummary.Amt13 else 0 end
when '0' then sum(GLSummary.Amt01 + GLSummary.Amt02 + GLSummary.Amt03 + GLSummary.Amt04 + GLSummary.Amt05 + GLSummary.Amt06 + GLSummary.Amt07 + GLSummary.Amt08 + GLSummary.Amt09 + GLSummary.Amt10 + GLSummary.Amt11 + GLSummary.Amt12 + case @UsePeriod13 when 'Y' then GLSummary.Amt13 else 0 end) over (partition by T0)
when '1' then sum(GLSummary.Amt01 + GLSummary.Amt02 + GLSummary.Amt03 + GLSummary.Amt04 + GLSummary.Amt05 + GLSummary.Amt06 + GLSummary.Amt07 + GLSummary.Amt08 + GLSummary.Amt09 + GLSummary.Amt10 + GLSummary.Amt11 + GLSummary.Amt12 + case @UsePeriod13 when 'Y' then GLSummary.Amt13 else 0 end) over (partition by T1)
when '2' then sum(GLSummary.Amt01 + GLSummary.Amt02 + GLSummary.Amt03 + GLSummary.Amt04 + GLSummary.Amt05 + GLSummary.Amt06 + GLSummary.Amt07 + GLSummary.Amt08 + GLSummary.Amt09 + GLSummary.Amt10 + GLSummary.Amt11 + GLSummary.Amt12 + case @UsePeriod13 when 'Y' then GLSummary.Amt13 else 0 end) over (partition by T2)
when '3' then sum(GLSummary.Amt01 + GLSummary.Amt02 + GLSummary.Amt03 + GLSummary.Amt04 + GLSummary.Amt05 + GLSummary.Amt06 + GLSummary.Amt07 + GLSummary.Amt08 + GLSummary.Amt09 + GLSummary.Amt10 + GLSummary.Amt11 + GLSummary.Amt12 + case @UsePeriod13 when 'Y' then GLSummary.Amt13 else 0 end) over (partition by T3)
when '4' then sum(GLSummary.Amt01 + GLSummary.Amt02 + GLSummary.Amt03 + GLSummary.Amt04 + GLSummary.Amt05 + GLSummary.Amt06 + GLSummary.Amt07 + GLSummary.Amt08 + GLSummary.Amt09 + GLSummary.Amt10 + GLSummary.Amt11 + GLSummary.Amt12 + case @UsePeriod13 when 'Y' then GLSummary.Amt13 else 0 end) over (partition by T4)
when '5' then sum(GLSummary.Amt01 + GLSummary.Amt02 + GLSummary.Amt03 + GLSummary.Amt04 + GLSummary.Amt05 + GLSummary.Amt06 + GLSummary.Amt07 + GLSummary.Amt08 + GLSummary.Amt09 + GLSummary.Amt10 + GLSummary.Amt11 + GLSummary.Amt12 + case @UsePeriod13 when 'Y' then GLSummary.Amt13 else 0 end) over (partition by T5)
when '6' then sum(GLSummary.Amt01 + GLSummary.Amt02 + GLSummary.Amt03 + GLSummary.Amt04 + GLSummary.Amt05 + GLSummary.Amt06 + GLSummary.Amt07 + GLSummary.Amt08 + GLSummary.Amt09 + GLSummary.Amt10 + GLSummary.Amt11 + GLSummary.Amt12 + case @UsePeriod13 when 'Y' then GLSummary.Amt13 else 0 end) over (partition by T6)
when '7' then sum(GLSummary.Amt01 + GLSummary.Amt02 + GLSummary.Amt03 + GLSummary.Amt04 + GLSummary.Amt05 + GLSummary.Amt06 + GLSummary.Amt07 + GLSummary.Amt08 + GLSummary.Amt09 + GLSummary.Amt10 + GLSummary.Amt11 + GLSummary.Amt12 + case @UsePeriod13 when 'Y' then GLSummary.Amt13 else 0 end) over (partition by T7)
when '8' then sum(GLSummary.Amt01 + GLSummary.Amt02 + GLSummary.Amt03 + GLSummary.Amt04 + GLSummary.Amt05 + GLSummary.Amt06 + GLSummary.Amt07 + GLSummary.Amt08 + GLSummary.Amt09 + GLSummary.Amt10 + GLSummary.Amt11 + GLSummary.Amt12 + case @UsePeriod13 when 'Y' then GLSummary.Amt13 else 0 end) over (partition by T8)
when '9' then sum(GLSummary.Amt01 + GLSummary.Amt02 + GLSummary.Amt03 + GLSummary.Amt04 + GLSummary.Amt05 + GLSummary.Amt06 + GLSummary.Amt07 + GLSummary.Amt08 + GLSummary.Amt09 + GLSummary.Amt10 + GLSummary.Amt11 + GLSummary.Amt12 + case @UsePeriod13 when 'Y' then GLSummary.Amt13 else 0 end) over (partition by T9)
end,
Annual_Budget = case GLAccounts.StmtType when 'I' then -1 else 1 end * case GLAccounts.LineType when 'D' then GLBudget.Bud01 + GLBudget.Bud02 + GLBudget.Bud03 + GLBudget.Bud04 + GLBudget.Bud05 + GLBudget.Bud06 + GLBudget.Bud07 + GLBudget.Bud08 + GLBudget.Bud09 + GLBudget.Bud10 + GLBudget.Bud11 + GLBudget.Bud12 + case @UsePeriod13 when 'Y' then GLBudget.Bud13 else 0 end
when '0' then sum(GLBudget.Bud01 + GLBudget.Bud02 + GLBudget.Bud03 + GLBudget.Bud04 + GLBudget.Bud05 + GLBudget.Bud06 + GLBudget.Bud07 + GLBudget.Bud08 + GLBudget.Bud09 + GLBudget.Bud10 + GLBudget.Bud11 + GLBudget.Bud12 + case @UsePeriod13 when 'Y' then GLBudget.Bud13 else 0 end) over (partition by T0)
when '1' then sum(GLBudget.Bud01 + GLBudget.Bud02 + GLBudget.Bud03 + GLBudget.Bud04 + GLBudget.Bud05 + GLBudget.Bud06 + GLBudget.Bud07 + GLBudget.Bud08 + GLBudget.Bud09 + GLBudget.Bud10 + GLBudget.Bud11 + GLBudget.Bud12 + case @UsePeriod13 when 'Y' then GLBudget.Bud13 else 0 end) over (partition by T1)
when '2' then sum(GLBudget.Bud01 + GLBudget.Bud02 + GLBudget.Bud03 + GLBudget.Bud04 + GLBudget.Bud05 + GLBudget.Bud06 + GLBudget.Bud07 + GLBudget.Bud08 + GLBudget.Bud09 + GLBudget.Bud10 + GLBudget.Bud11 + GLBudget.Bud12 + case @UsePeriod13 when 'Y' then GLBudget.Bud13 else 0 end) over (partition by T2)
when '3' then sum(GLBudget.Bud01 + GLBudget.Bud02 + GLBudget.Bud03 + GLBudget.Bud04 + GLBudget.Bud05 + GLBudget.Bud06 + GLBudget.Bud07 + GLBudget.Bud08 + GLBudget.Bud09 + GLBudget.Bud10 + GLBudget.Bud11 + GLBudget.Bud12 + case @UsePeriod13 when 'Y' then GLBudget.Bud13 else 0 end) over (partition by T3)
when '4' then sum(GLBudget.Bud01 + GLBudget.Bud02 + GLBudget.Bud03 + GLBudget.Bud04 + GLBudget.Bud05 + GLBudget.Bud06 + GLBudget.Bud07 + GLBudget.Bud08 + GLBudget.Bud09 + GLBudget.Bud10 + GLBudget.Bud11 + GLBudget.Bud12 + case @UsePeriod13 when 'Y' then GLBudget.Bud13 else 0 end) over (partition by T4)
when '5' then sum(GLBudget.Bud01 + GLBudget.Bud02 + GLBudget.Bud03 + GLBudget.Bud04 + GLBudget.Bud05 + GLBudget.Bud06 + GLBudget.Bud07 + GLBudget.Bud08 + GLBudget.Bud09 + GLBudget.Bud10 + GLBudget.Bud11 + GLBudget.Bud12 + case @UsePeriod13 when 'Y' then GLBudget.Bud13 else 0 end) over (partition by T5)
when '6' then sum(GLBudget.Bud01 + GLBudget.Bud02 + GLBudget.Bud03 + GLBudget.Bud04 + GLBudget.Bud05 + GLBudget.Bud06 + GLBudget.Bud07 + GLBudget.Bud08 + GLBudget.Bud09 + GLBudget.Bud10 + GLBudget.Bud11 + GLBudget.Bud12 + case @UsePeriod13 when 'Y' then GLBudget.Bud13 else 0 end) over (partition by T6)
when '7' then sum(GLBudget.Bud01 + GLBudget.Bud02 + GLBudget.Bud03 + GLBudget.Bud04 + GLBudget.Bud05 + GLBudget.Bud06 + GLBudget.Bud07 + GLBudget.Bud08 + GLBudget.Bud09 + GLBudget.Bud10 + GLBudget.Bud11 + GLBudget.Bud12 + case @UsePeriod13 when 'Y' then GLBudget.Bud13 else 0 end) over (partition by T7)
when '8' then sum(GLBudget.Bud01 + GLBudget.Bud02 + GLBudget.Bud03 + GLBudget.Bud04 + GLBudget.Bud05 + GLBudget.Bud06 + GLBudget.Bud07 + GLBudget.Bud08 + GLBudget.Bud09 + GLBudget.Bud10 + GLBudget.Bud11 + GLBudget.Bud12 + case @UsePeriod13 when 'Y' then GLBudget.Bud13 else 0 end) over (partition by T8)
when '9' then sum(GLBudget.Bud01 + GLBudget.Bud02 + GLBudget.Bud03 + GLBudget.Bud04 + GLBudget.Bud05 + GLBudget.Bud06 + GLBudget.Bud07 + GLBudget.Bud08 + GLBudget.Bud09 + GLBudget.Bud10 + GLBudget.Bud11 + GLBudget.Bud12 + case @UsePeriod13 when 'Y' then GLBudget.Bud13 else 0 end) over (partition by T9)
end,
Annual_Budget_Remaining = (case GLAccounts.StmtType when 'I' then -1 else 1 end * case GLAccounts.LineType when 'D' then GLBudget.Bud01 + GLBudget.Bud02 + GLBudget.Bud03 + GLBudget.Bud04 + GLBudget.Bud05 + GLBudget.Bud06 + GLBudget.Bud07 + GLBudget.Bud08 + GLBudget.Bud09 + GLBudget.Bud10 + GLBudget.Bud11 + GLBudget.Bud12 + case @UsePeriod13 when 'Y' then GLBudget.Bud13 else 0 end
when '0' then sum(GLBudget.Bud01 + GLBudget.Bud02 + GLBudget.Bud03 + GLBudget.Bud04 + GLBudget.Bud05 + GLBudget.Bud06 + GLBudget.Bud07 + GLBudget.Bud08 + GLBudget.Bud09 + GLBudget.Bud10 + GLBudget.Bud11 + GLBudget.Bud12 + case @UsePeriod13 when 'Y' then GLBudget.Bud13 else 0 end) over (partition by T0)
when '1' then sum(GLBudget.Bud01 + GLBudget.Bud02 + GLBudget.Bud03 + GLBudget.Bud04 + GLBudget.Bud05 + GLBudget.Bud06 + GLBudget.Bud07 + GLBudget.Bud08 + GLBudget.Bud09 + GLBudget.Bud10 + GLBudget.Bud11 + GLBudget.Bud12 + case @UsePeriod13 when 'Y' then GLBudget.Bud13 else 0 end) over (partition by T1)
when '2' then sum(GLBudget.Bud01 + GLBudget.Bud02 + GLBudget.Bud03 + GLBudget.Bud04 + GLBudget.Bud05 + GLBudget.Bud06 + GLBudget.Bud07 + GLBudget.Bud08 + GLBudget.Bud09 + GLBudget.Bud10 + GLBudget.Bud11 + GLBudget.Bud12 + case @UsePeriod13 when 'Y' then GLBudget.Bud13 else 0 end) over (partition by T2)
when '3' then sum(GLBudget.Bud01 + GLBudget.Bud02 + GLBudget.Bud03 + GLBudget.Bud04 + GLBudget.Bud05 + GLBudget.Bud06 + GLBudget.Bud07 + GLBudget.Bud08 + GLBudget.Bud09 + GLBudget.Bud10 + GLBudget.Bud11 + GLBudget.Bud12 + case @UsePeriod13 when 'Y' then GLBudget.Bud13 else 0 end) over (partition by T3)
when '4' then sum(GLBudget.Bud01 + GLBudget.Bud02 + GLBudget.Bud03 + GLBudget.Bud04 + GLBudget.Bud05 + GLBudget.Bud06 + GLBudget.Bud07 + GLBudget.Bud08 + GLBudget.Bud09 + GLBudget.Bud10 + GLBudget.Bud11 + GLBudget.Bud12 + case @UsePeriod13 when 'Y' then GLBudget.Bud13 else 0 end) over (partition by T4)
when '5' then sum(GLBudget.Bud01 + GLBudget.Bud02 + GLBudget.Bud03 + GLBudget.Bud04 + GLBudget.Bud05 + GLBudget.Bud06 + GLBudget.Bud07 + GLBudget.Bud08 + GLBudget.Bud09 + GLBudget.Bud10 + GLBudget.Bud11 + GLBudget.Bud12 + case @UsePeriod13 when 'Y' then GLBudget.Bud13 else 0 end) over (partition by T5)
when '6' then sum(GLBudget.Bud01 + GLBudget.Bud02 + GLBudget.Bud03 + GLBudget.Bud04 + GLBudget.Bud05 + GLBudget.Bud06 + GLBudget.Bud07 + GLBudget.Bud08 + GLBudget.Bud09 + GLBudget.Bud10 + GLBudget.Bud11 + GLBudget.Bud12 + case @UsePeriod13 when 'Y' then GLBudget.Bud13 else 0 end) over (partition by T6)
when '7' then sum(GLBudget.Bud01 + GLBudget.Bud02 + GLBudget.Bud03 + GLBudget.Bud04 + GLBudget.Bud05 + GLBudget.Bud06 + GLBudget.Bud07 + GLBudget.Bud08 + GLBudget.Bud09 + GLBudget.Bud10 + GLBudget.Bud11 + GLBudget.Bud12 + case @UsePeriod13 when 'Y' then GLBudget.Bud13 else 0 end) over (partition by T7)
when '8' then sum(GLBudget.Bud01 + GLBudget.Bud02 + GLBudget.Bud03 + GLBudget.Bud04 + GLBudget.Bud05 + GLBudget.Bud06 + GLBudget.Bud07 + GLBudget.Bud08 + GLBudget.Bud09 + GLBudget.Bud10 + GLBudget.Bud11 + GLBudget.Bud12 + case @UsePeriod13 when 'Y' then GLBudget.Bud13 else 0 end) over (partition by T8)
when '9' then sum(GLBudget.Bud01 + GLBudget.Bud02 + GLBudget.Bud03 + GLBudget.Bud04 + GLBudget.Bud05 + GLBudget.Bud06 + GLBudget.Bud07 + GLBudget.Bud08 + GLBudget.Bud09 + GLBudget.Bud10 + GLBudget.Bud11 + GLBudget.Bud12 + case @UsePeriod13 when 'Y' then GLBudget.Bud13 else 0 end) over (partition by T9)
end) - (case GLAccounts.StmtType when 'I' then -1 else 1 end * case GLAccounts.LineType when 'D' then GLSummary.Amt01 + GLSummary.Amt02 + GLSummary.Amt03 + GLSummary.Amt04 + GLSummary.Amt05 + GLSummary.Amt06 + GLSummary.Amt07 + GLSummary.Amt08 + GLSummary.Amt09 + GLSummary.Amt10 + GLSummary.Amt11 + GLSummary.Amt12 + case @UsePeriod13 When 'Y' then GLSummary.Amt13 else 0 end
when '0' then sum(GLSummary.Amt01 + GLSummary.Amt02 + GLSummary.Amt03 + GLSummary.Amt04 + GLSummary.Amt05 + GLSummary.Amt06 + GLSummary.Amt07 + GLSummary.Amt08 + GLSummary.Amt09 + GLSummary.Amt10 + GLSummary.Amt11 + GLSummary.Amt12 + case @UsePeriod13 when 'Y' then GLSummary.Amt13 else 0 end) over (partition by T0)
when '1' then sum(GLSummary.Amt01 + GLSummary.Amt02 + GLSummary.Amt03 + GLSummary.Amt04 + GLSummary.Amt05 + GLSummary.Amt06 + GLSummary.Amt07 + GLSummary.Amt08 + GLSummary.Amt09 + GLSummary.Amt10 + GLSummary.Amt11 + GLSummary.Amt12 + case @UsePeriod13 when 'Y' then GLSummary.Amt13 else 0 end) over (partition by T1)
when '2' then sum(GLSummary.Amt01 + GLSummary.Amt02 + GLSummary.Amt03 + GLSummary.Amt04 + GLSummary.Amt05 + GLSummary.Amt06 + GLSummary.Amt07 + GLSummary.Amt08 + GLSummary.Amt09 + GLSummary.Amt10 + GLSummary.Amt11 + GLSummary.Amt12 + case @UsePeriod13 when 'Y' then GLSummary.Amt13 else 0 end) over (partition by T2)
when '3' then sum(GLSummary.Amt01 + GLSummary.Amt02 + GLSummary.Amt03 + GLSummary.Amt04 + GLSummary.Amt05 + GLSummary.Amt06 + GLSummary.Amt07 + GLSummary.Amt08 + GLSummary.Amt09 + GLSummary.Amt10 + GLSummary.Amt11 + GLSummary.Amt12 + case @UsePeriod13 when 'Y' then GLSummary.Amt13 else 0 end) over (partition by T3)
when '4' then sum(GLSummary.Amt01 + GLSummary.Amt02 + GLSummary.Amt03 + GLSummary.Amt04 + GLSummary.Amt05 + GLSummary.Amt06 + GLSummary.Amt07 + GLSummary.Amt08 + GLSummary.Amt09 + GLSummary.Amt10 + GLSummary.Amt11 + GLSummary.Amt12 + case @UsePeriod13 when 'Y' then GLSummary.Amt13 else 0 end) over (partition by T4)
when '5' then sum(GLSummary.Amt01 + GLSummary.Amt02 + GLSummary.Amt03 + GLSummary.Amt04 + GLSummary.Amt05 + GLSummary.Amt06 + GLSummary.Amt07 + GLSummary.Amt08 + GLSummary.Amt09 + GLSummary.Amt10 + GLSummary.Amt11 + GLSummary.Amt12 + case @UsePeriod13 when 'Y' then GLSummary.Amt13 else 0 end) over (partition by T5)
when '6' then sum(GLSummary.Amt01 + GLSummary.Amt02 + GLSummary.Amt03 + GLSummary.Amt04 + GLSummary.Amt05 + GLSummary.Amt06 + GLSummary.Amt07 + GLSummary.Amt08 + GLSummary.Amt09 + GLSummary.Amt10 + GLSummary.Amt11 + GLSummary.Amt12 + case @UsePeriod13 when 'Y' then GLSummary.Amt13 else 0 end) over (partition by T6)
when '7' then sum(GLSummary.Amt01 + GLSummary.Amt02 + GLSummary.Amt03 + GLSummary.Amt04 + GLSummary.Amt05 + GLSummary.Amt06 + GLSummary.Amt07 + GLSummary.Amt08 + GLSummary.Amt09 + GLSummary.Amt10 + GLSummary.Amt11 + GLSummary.Amt12 + case @UsePeriod13 when 'Y' then GLSummary.Amt13 else 0 end) over (partition by T7)
when '8' then sum(GLSummary.Amt01 + GLSummary.Amt02 + GLSummary.Amt03 + GLSummary.Amt04 + GLSummary.Amt05 + GLSummary.Amt06 + GLSummary.Amt07 + GLSummary.Amt08 + GLSummary.Amt09 + GLSummary.Amt10 + GLSummary.Amt11 + GLSummary.Amt12 + case @UsePeriod13 when 'Y' then GLSummary.Amt13 else 0 end) over (partition by T8)
when '9' then sum(GLSummary.Amt01 + GLSummary.Amt02 + GLSummary.Amt03 + GLSummary.Amt04 + GLSummary.Amt05 + GLSummary.Amt06 + GLSummary.Amt07 + GLSummary.Amt08 + GLSummary.Amt09 + GLSummary.Amt10 + GLSummary.Amt11 + GLSummary.Amt12 + case @UsePeriod13 when 'Y' then GLSummary.Amt13 else 0 end) over (partition by T9)
end)
from
GLAccounts left join
Shelby.GLSummary as GLSummary on GLAccounts.CoNu = GLSummary.CoNu and GLAccounts.BeginDate = GLSummary.BeginDate and GLAccounts.FundNu = GLSummary.FundNu and GLAccounts.DeptNu = GLSummary.DeptNu and GLAccounts.AcctNu = GLSummary.AcctNu left join
Shelby.GLBudget as GLBudget on GLAccounts.CoNu = GLBudget.CoNu and GLAccounts.BeginDate = GLBudget.BeginDate and GLAccounts.FundNu = GLBudget.FundNu and GLAccounts.DeptNu = GLBudget.DeptNu and GLAccounts.AcctNu = GLBudget.AcctNu

order by FundNu, DeptNu, AcctNu

Saturday, July 31, 2010

Querying the General Ledger with Total Lines - Actuals

In the earlier post on this topic I included a query that would add ten columns to the basic GLAcct table. These extra columns would allow you to summarize by Total Line right in a query. In this post I will share a query that will allow you to get the period actuals for all the accounts, including total lines. It will also provide extra lines for Project values. If you want to remove the Project level detail, just remove the Project line from the SELECT statement and from the GROUP BY statement.

Most reports need to be able to report on one or more periods, so in order to generate either a pivot table with the ability to pull any period or a Report Designer report that can prompt the user for a period, we need to repeat each account in GLAcct thirteen times, once for each possible Period (don't forget about the Audit period, which is period 13). To do this, we will add a Cartesian join between GLAcct and an inline view that is nothing but the numbers 1 through 13:

select
*
from
Shelby.GLAcct, (select Period = 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9 union select 10 union select 11 union select 12 union select 13) as Periods

We then substitute this in place of the reference to Shelby.GLAcct in the original Total Line query given in the previous post, and we have this (above query in italics below):

select
A1.CoNu,
A1.BeginDate,
A1.FundNu,
A1.DeptNu,
A1.AcctNu,
A1.LineType,
Descr = max(A1.Descr),
StmtType = max(A1.StmtType),
A1.Period,
T0 = sum(case when A2.LineType like '[0-9]' then 1 else 0 end) + case when A1.LineType = 'D' then 1 else 0 end,
T1 = sum(case when A2.LineType like '[1-9]' then 1 else 0 end) + case when A1.LineType = 'D' then 1 else 0 end,
T2 = sum(case when A2.LineType like '[2-9]' then 1 else 0 end) + case when A1.LineType = 'D' then 1 else 0 end,
T3 = sum(case when A2.LineType like '[3-9]' then 1 else 0 end) + case when A1.LineType = 'D' then 1 else 0 end,
T4 = sum(case when A2.LineType like '[4-9]' then 1 else 0 end) + case when A1.LineType = 'D' then 1 else 0 end,
T5 = sum(case when A2.LineType like '[5-9]' then 1 else 0 end) + case when A1.LineType = 'D' then 1 else 0 end,
T6 = sum(case when A2.LineType like '[6-9]' then 1 else 0 end) + case when A1.LineType = 'D' then 1 else 0 end,
T7 = sum(case when A2.LineType like '[7-9]' then 1 else 0 end) + case when A1.LineType = 'D' then 1 else 0 end,
T8 = sum(case when A2.LineType like '[8-9]' then 1 else 0 end) + case when A1.LineType = 'D' then 1 else 0 end,
T9 = sum(case when A2.LineType like '[9-9]' then 1 else 0 end) + case when A1.LineType = 'D' then 1 else 0 end
from
(select * from Shelby.GLAcct, (select Period = 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9 union select 10 union select 11 union select 12 union select 13) as Periods) as A1 left join
Shelby.GLAcct as A2 on A1.CoNu = A2.CoNu and A1.BeginDate = A2.BeginDate and (A1.FundNu > A2.FundNu or (A1.FundNu = A2.FundNu and A1.DeptNu > A2.DeptNu) or (A1.FundNu = A2.FundNu and A1.DeptNu = A2.DeptNu and A1.AcctNu >= A2.AcctNu)) and isnumeric(A2.LineType) = 1
group by
A1.CoNu,
A1.BeginDate,
A1.FundNu,
A1.DeptNu,
A1.AcctNu,
A1.LineType,
A1.Period

Now we need to take the entirety of this result and use it as an inline view, add a join to the actual transaction history table, which is GLDetail, and then construct a SELECT clause that can sum up the numbers as needed. Here is that query. (Warning: This only works on SQL Server 2005 or later. If you are running MSDE or SQL Server 2000, this will not work for you.) Also, this query will return all fiscal years of all companies. This may take a long time to run (15 minutes or more for some organizations). Add a WHERE clause to limit the results to just one fiscal year and company to shorten the time. (The above query is in italics below.)

select
A.BeginDate,
A.CoNu,
A.FundNu,
A.DeptNu,
A.AcctNu,
Descr = max(A.Descr),
A.Period,
Project = (select P.Name from Shelby.SSProject as P where P.ProjectCounter = D.ProjectCounter),
PeriodActual = case A.LineType
when 'D' then isnull(sum(D.Amt), 0)
when '0' then isnull(sum(sum(D.Amt)) over (partition by A.BeginDate, A.CoNu, A.Period, A.T0), 0)
when '1' then isnull(sum(sum(D.Amt)) over (partition by A.BeginDate, A.CoNu, A.Period, A.T1), 0)
when '2' then isnull(sum(sum(D.Amt)) over (partition by A.BeginDate, A.CoNu, A.Period, A.T2), 0)
when '3' then isnull(sum(sum(D.Amt)) over (partition by A.BeginDate, A.CoNu, A.Period, A.T3), 0)
when '4' then isnull(sum(sum(D.Amt)) over (partition by A.BeginDate, A.CoNu, A.Period, A.T4), 0)
when '5' then isnull(sum(sum(D.Amt)) over (partition by A.BeginDate, A.CoNu, A.Period, A.T5), 0)
when '6' then isnull(sum(sum(D.Amt)) over (partition by A.BeginDate, A.CoNu, A.Period, A.T6), 0)
when '7' then isnull(sum(sum(D.Amt)) over (partition by A.BeginDate, A.CoNu, A.Period, A.T7), 0)
when '8' then isnull(sum(sum(D.Amt)) over (partition by A.BeginDate, A.CoNu, A.Period, A.T8), 0)
when '9' then isnull(sum(sum(D.Amt)) over (partition by A.BeginDate, A.CoNu, A.Period, A.T9), 0)
end
from
(select A1.CoNu,A1.BeginDate,A1.FundNu,A1.DeptNu,A1.AcctNu,A1.LineType,Descr = max(A1.Descr),StmtType = max(A1.StmtType), A1.Period, T0 = sum(case when A2.LineType like '[0-9]' then 1 else 0 end) + case when A1.LineType = 'D' then 1 else 0 end,T1 = sum(case when A2.LineType like '[1-9]' then 1 else 0 end) + case when A1.LineType = 'D' then 1 else 0 end,T2 = sum(case when A2.LineType like '[2-9]' then 1 else 0 end) + case when A1.LineType = 'D' then 1 else 0 end,T3 = sum(case when A2.LineType like '[3-9]' then 1 else 0 end) + case when A1.LineType = 'D' then 1 else 0 end,T4 = sum(case when A2.LineType like '[4-9]' then 1 else 0 end) + case when A1.LineType = 'D' then 1 else 0 end,T5 = sum(case when A2.LineType like '[5-9]' then 1 else 0 end) + case when A1.LineType = 'D' then 1 else 0 end,T6 = sum(case when A2.LineType like '[6-9]' then 1 else 0 end) + case when A1.LineType = 'D' then 1 else 0 end,T7 = sum(case when A2.LineType like '[7-9]' then 1 else 0 end) + case when A1.LineType = 'D' then 1 else 0 end,T8 = sum(case when A2.LineType like '[8-9]' then 1 else 0 end) + case when A1.LineType = 'D' then 1 else 0 end,T9 = sum(case when A2.LineType like '[9-9]' then 1 else 0 end) + case when A1.LineType = 'D' then 1 else 0 end from (select * from Shelby.GLAcct, (select Period = 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9 union select 10 union select 11 union select 12 union select 13) as Periods) as A1 left join Shelby.GLAcct as A2 on A1.CoNu = A2.CoNu and A1.BeginDate = A2.BeginDate and (A1.FundNu > A2.FundNu or (A1.FundNu = A2.FundNu and A1.DeptNu > A2.DeptNu) or (A1.FundNu = A2.FundNu and A1.DeptNu = A2.DeptNu and A1.AcctNu >= A2.AcctNu)) and isnumeric(A2.LineType) = 1 group by A1.CoNu, A1.BeginDate, A1.FundNu, A1.DeptNu, A1.AcctNu, A1.LineType, A1.Period) as A left join
Shelby.GLDetail as D on D.BeginDate = A.BeginDate and D.CoNu = A.CoNu and D.FundNu = A.FundNu and D.DeptNu = A.DeptNu and D.AcctNu = A.AcctNu and D.Period = A.Period
group by
A.BeginDate,
A.CoNu,
A.FundNu,
A.DeptNu,
A.AcctNu,
A.LineType,
A.Period,
D.ProjectCounter,
A.T0, A.T1, A.T2, A.T3, A.T4, A.T5, A.T6, A.T7, A.T8, A.T9
order by
A.BeginDate,
A.CoNu,
A.FundNu,
A.DeptNu,
A.AcctNu

This approach uses the windowing function sum() over (partition by ...) to give a sum of the sums for the total lines. It uses the T0 through T9 columns as the leverage it needs to get the sum values. Windowing functions are a great way to add aggregate values to a detail-level set of results or to aggregate an aggregate, as I did here.
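
If the nested sum(sum(...)) pattern looks strange, here is a stripped-down sketch of the same idea using only the GLDetail columns already referenced above (company and fiscal year are ignored to keep it short). The inner SUM() is the ordinary GROUP BY aggregate for each account; the outer SUM() OVER (PARTITION BY ...) then adds those per-account sums together for every account in the same fund, so the fund total is repeated on each account row:

select
 D.FundNu,
 D.AcctNu,
 AccountTotal = sum(D.Amt),
 FundTotal = sum(sum(D.Amt)) over (partition by D.FundNu)
from
 Shelby.GLDetail as D
group by
 D.FundNu, D.AcctNu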

This only gets the actual values. Getting the budget values is actually a bit harder, especially with a Project breakout. I'll post more query solutions with budget values in them later.

If you have questions, feel free to post a comment. I'd love to know if this blog is meeting anyone's needs or is at least of interest to some people.

Tuesday, July 27, 2010

Another Quick Aside - On Finding Server References

A customer asked today about finding references in the Shelby v.5 system to their old server. Now that his organization has a new server, they need to update all the old server references in the system to the new server name.

The primary (though certainly not the only) places where the old server name is stored are the output file paths of reports in Selections & Listings. To find these reports and other places that reference the server path, use the following query:

select ItemType = 'Report', Title from Shelby.SSReports where Reports like '%server_name_here%'
union
select ItemType = 'Preference', ControlKey from Shelby.SSControl where Memo like '%server_name_here%'
order by ItemType

This will provide a simple list. The Report lines will tell you which reports refer to the server you specified. The Preference lines will tell you where to look for preferences that reference the old server name.

Monday, July 26, 2010

An Aside on VBA and InStrRev

I had an interesting assignment come my way today, and I want to take a moment to write about it here. I will get back to the task of describing the General Ledger budgeted financial statement queries soon.

I have a copy of VB & VBA in a Nutshell by O'Reilly Publishers. I have not had much use for it recently, because the report design tool uses only VBScript, not pure VB or VBA, as its script platform. Like most O'Reilly books of its type, it provides useful insight into the "whys" of the language, and gives "Programming Tips & Gotchas" for most of the language elements. For one function in VB6, however, the writers clearly thought it had no practical use. The function is InStrRev, and their "Programming Tip" is this:
"The usefulness of a function that looks backward through a string for the occurrence of another string isn't immediately apparent."
This seems like a courteous way of saying, "This function is useless." Ah, but I beg to differ. In fact, the task I was given today was greatly simplified (if not in fact made possible) by this function. Today I was tasked with figuring out a way to take a single column of data in Excel containing a City-State-ZIP (strCSZ) value and separate it into three columns, one for each part of the address.

The strCSZ value had no punctuation, so only spaces separated each element. Also, the state is always just the two-letter abbreviation. At first blush, one could think that breaking the string into pieces by finding the space character would work, but this fails because some city names are actually two words separated by a space. Working forward from the first character of the string, one could never know if the first space divides the city from the state or if it divides the first word of the city from the second word.

However, working backward from the last character of the string, the first space would always be the breaking point between the state and the ZIP code. Once that is a known value (calculated with the InStrRev function), it is a simple matter of using Mid three times to get each piece of the address. Here is a bit of the script I wrote:

intSpace = InStrRev(strCSZ, " ")        ' position of the last space in the string (the one just before the ZIP)
strCity = Mid(strCSZ, 1, intSpace - 5)  ' everything up to the state abbreviation
strState = Mid(strCSZ, intSpace - 3, 2) ' the two-letter state abbreviation
strZIP = Mid(strCSZ, intSpace + 1, 10)  ' the ZIP (or ZIP+4) following the last space

I want to thank the VB6 developers for including InStrRev, even though its usefulness was not "immediately apparent." It sure came in useful to me!

Tuesday, June 29, 2010

Querying the General Ledger with Total Lines

UPDATED AGAIN (7/30/2010): Corrected WHERE clause to fix problem with TotalLine designations.
UPDATED: With a shorter version of the query and added explanation of the new approach.

If you have been waiting on a final solution to my Accounts Payable / General Ledger reconciliation query, I'm afraid you'll have to wait a bit longer. It has come to my attention that the APInv table can actually have multiple rows representing the same original invoice. Ditto for the APInvDetail table. This complicates the problem significantly, and so far I have not yet overcome this challenge in finding a way to correlate the individual detail lines from the GL to the individual detail lines from AP. I promise to post something once I have a solution, whether complete or partial.

Recently, however, I was pulled in a different direction by a customer request to display a fairly simple set of columns from the GL tables on the Shelby v.5 Dashboard. The customer was looking for a basic year-to-date summary of budget information, like a simplified Budgeted Financial Statement report. This request gave me an opportunity to revisit a difficulty with General Ledger queries that I have struggled for years to overcome, only to be forced to rely on VBScript in the Report Designer to work around: the problem of total lines in the chart of accounts.

Total lines are difficult to deal with for three main reasons:
  • They are fundamentally different than detail lines, yet still need to display a value like detail lines do.
  • They are arbitrary, at least in principle, making it impossible to predict where they occur in any given chart of accounts.
  • They are hierarchical, generating a nesting effect that means overlapping group values.
These properties of total lines have defied my query abilities for years. Until I worked out the solution I'm about to share, I used VBScript to calculate the total line values on a row-by-row procedural approach. This was not only inefficient and a bit of a kluge, but it meant that the only way to include total lines was to use the Report Designer; there could be no totals in the basic set of query results.

I took the recent customer request as a fresh start, and I thought through the problem from scratch. The first observation I made was that total lines display aggregate information; therefore, some form of grouping would have to be involved. This immediately led to two conclusions: the GROUP BY clause would be used to calculate the values; and the final query would have to use subqueries for the total lines, in order to preserve the scalar detail lines while including aggregate total lines in the same set of results.

Focusing first on the GROUP BY clause, I had to tackle the question: GROUP BY what? In order to be able to group a set of rows, they have to share a common value. This common value would have to be related in some way to the total line level (0 through 9). I took the simplest case to think through: a single total line near the top of the chart of accounts; i.e., no total lines between it and the top of the chart of accounts. In order to calculate the total line, all the detail rows before the total line need to share the same value, and all the rows after the total line need to have a different value. Because I would need to trigger the grouping based on the total line itself, it would also be best for the total line itself to share the same value with the rows above it.

Looking at the aggregate functions available, it seemed that the most likely candidate for giving the value I needed was COUNT(). I could count the number of total lines with an account number less than or equal to the account number of the total line itself. All the lines before the single total line would be 0 (zero) and all the lines after it would be 1 (one). The total line itself would have a COUNT() value of 1, so I would need to subtract 1 from the COUNT() on the total line, so it can share the value of the rows above it.

I wrote a query using COUNT(), but it required a separate subquery for each Total Line rank 0 - 9. This meant having ten subqueries running in the SELECT statement. I later revised my approach to use a non-equi-join between two instances of the GLAcct table. This new query uses SUM() in place of COUNT(). Inside the SUM() function is a CASE statement to add 1 only for Total Line types. This has the same effect of counting the number of total lines. Because I want the total line to share the same value as the detail lines above it, I am also adding one to the SUM() for the detail lines. This works because the non-equi-join returns all the Total Lines from instance 2 that appear at or before each fund/dept/account number in instance 1.

In the GLAcct table, which contains the complete chart of accounts for each company and fiscal year, the total lines are easily distinguished from other types of lines by the LineType column. Total lines have a numbered value, ranging from 0 to 9, representing the total line level. So the SUM() function should only add 1 for the rows with a numbered value. Because a total line of a certain level should ignore any total lines of levels less than its own (a total level 5 should ignore total levels 0 through 4, for example), each total line level must be handled separately, resulting in a separate column of values for each total line level.

At the end of the process, here is the query. It takes the GLAcct table and adds 10 columns to it, one each for total levels 0 through 9. Each column yields a common value for the detail lines that should be summed together for each total line of its level. The counting is reset for each company-and-year chart of accounts.

select
A1.CoNu,
A1.BeginDate,
A1.FundNu,
A1.DeptNu,
A1.AcctNu,
LineType = A1.LineType,
Descr = max(A1.Descr),
StmtType = max(A1.StmtType),
T0 = sum(case when A2.LineType like '[0-9]' then 1 else 0 end) + case when A1.LineType = 'D' then 1 else 0 end,
T1 = sum(case when A2.LineType like '[1-9]' then 1 else 0 end) + case when A1.LineType = 'D' then 1 else 0 end,
T2 = sum(case when A2.LineType like '[2-9]' then 1 else 0 end) + case when A1.LineType = 'D' then 1 else 0 end,
T3 = sum(case when A2.LineType like '[3-9]' then 1 else 0 end) + case when A1.LineType = 'D' then 1 else 0 end,
T4 = sum(case when A2.LineType like '[4-9]' then 1 else 0 end) + case when A1.LineType = 'D' then 1 else 0 end,
T5 = sum(case when A2.LineType like '[5-9]' then 1 else 0 end) + case when A1.LineType = 'D' then 1 else 0 end,
T6 = sum(case when A2.LineType like '[6-9]' then 1 else 0 end) + case when A1.LineType = 'D' then 1 else 0 end,
T7 = sum(case when A2.LineType like '[7-9]' then 1 else 0 end) + case when A1.LineType = 'D' then 1 else 0 end,
T8 = sum(case when A2.LineType like '[8-9]' then 1 else 0 end) + case when A1.LineType = 'D' then 1 else 0 end,
T9 = sum(case when A2.LineType like '[9-9]' then 1 else 0 end) + case when A1.LineType = 'D' then 1 else 0 end
from
Shelby.GLAcct as A1 left join
Shelby.GLAcct as A2 on A1.CoNu = A2.CoNu and A1.BeginDate = A2.BeginDate and (A1.FundNu > A2.FundNu or (A1.FundNu = A2.FundNu and A1.DeptNu > A2.DeptNu) or (A1.FundNu = A2.FundNu and A1.DeptNu = A2.DeptNu and A1.AcctNu >= A2.AcctNu)) and isnumeric(A2.LineType) = 1
group by
A1.CoNu,
A1.BeginDate,
A1.FundNu,
A1.DeptNu,
A1.AcctNu,
A1.LineType

Once this query has been saved as a view, it can be used in place of GLAcct to give queries on the GL tables the ability to pull out the total line sums. This is only a building block for a complete solution. My next post will show how to take this building block and join it to General Ledger transaction history and budget information, taking a step closer to a full query solution.

Tuesday, June 22, 2010

Finding Your SQL Server Version

On two occasions now I have written somewhat elaborate queries only to have them fail on the client side because they were running under the Microsoft SQL Server Desktop Edition (MSDE) product that originally shipped with Shelby v.5 many years ago. That edition of SQL Server emulated SQL Server 2000, and it lacks some key support features that were released in SQL Server 2005, let alone any further enhancements in even more recent editions.

Here is a SQL script to reveal which version and edition of SQL Server you are running:

select top 1 Version = @@Version, Edition = serverproperty('edition') from NANames

If the version information is not self-evident, compare the version number against the list at http://www.sqlteam.com/article/sql-server-versions in order to determine which version you are running.

After you know your current version, check out the Hardware Requirements page from Shelby v.5 Support. If you are not meeting at least the minimum recommended requirement for SQL Server, seriously consider an upgrade.

By the way, the reference to NANames in the query is only there to "trick" ShelbyQUERY into running the query. Without some kind of FROM clause, some releases of ShelbyQUERY will refuse to run the query, since it is not always recognized as a standard "SELECT" query without one. There is nothing particular about the NANames table; any table could be used in its place, and the same results would appear.

Thursday, June 3, 2010

Relating AP tables to GL tables in Shelby v.5 data

I've been working on a query to correlate General Ledger detail lines to the corresponding Accounts Payable invoice detail lines. This sounds pretty straightforward; however, it is not. The only foreign key relationship available is at the top level: AP invoice header to GL entry header. This means that tying the next level down (detail-to-detail) involves matching up multiple detail values in addition to the header foreign key value. Unfortunately, because the customer I am working on this for has data entry practices that allow for multiple identical detail lines even on the same invoice, I have no truly unique key to work with. If I resolve this, I will post the finished join relationship.

On a related note, however, I have run across an important detail regarding the use of the GLEntriesCounter column, which appears in both the APInv table and the APInvDetail table. I have learned that APInv.GLEntriesCounter is a reference to the GLEntries.HeaderCounter for the rows created when the invoice is posted. APInvDetail.GLEntriesCounter is a reference to the GLEntries.HeaderCounter for the rows created when the invoice is paid. For reporting on accrual-based companies, this distinction is critical.

If the invoice has not been paid, the APInvDetail.GLEntriesCounter value will be zero (0). If a company is on a cash basis, I would assume that the APInv.GLEntriesCounter value would be zero, though I have not tested this assumption. (If someone who is on a cash AP basis would check this out and post a confirmation or correction, I'll edit this post to give the definitive answer.)
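
Here is a minimal sketch of those two header-level relationships. I am selecting only the counter columns named above, since I have not verified the rest of the GLEntries layout, and I am assuming GLEntries lives in the same Shelby schema as the other tables:

-- GL entry headers created when the invoice was posted
select I.GLEntriesCounter, PostingHeader = E.HeaderCounter
from Shelby.APInv as I inner join
 Shelby.GLEntries as E on I.GLEntriesCounter = E.HeaderCounter;

-- GL entry headers created when the invoice detail line was paid
-- (a GLEntriesCounter of 0 means the detail line has not been paid yet)
select D.GLEntriesCounter, PaymentHeader = E.HeaderCounter
from Shelby.APInvDetail as D inner join
 Shelby.GLEntries as E on D.GLEntriesCounter = E.HeaderCounter
where D.GLEntriesCounter <> 0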

Friday, May 21, 2010

Checking for NULL in SQL Server Reporting Services

Today I was tasked with replacing blank values on a report output with a double-dash. This was on a SQL Server Reporting Services (SSRS) report design I had made.

I knew this would involve an IIF() conditional. But I was not sure how to phrase the condition to check for the NULL value. I tried two approaches off the top of my head.

First, I tried the SQL syntax of Fields!FieldName.Value IS NULL. It was a long shot, and it didn't work. No real surprise, there, so I moved on.

Second, I tried the VBScript approach of IsNull(Fields!FieldName.Value). I was pretty sure this would work, because I have used some VBScript commands successfully with conditional formatting in SSRS before. However, this didn't work either.

What became apparent was that I needed to understand how to do this kind of condition in .NET, which is the actual programming language supported inside of SSRS for formula expressions. I did some searches on the Internet and eventually found the right syntax. It turns out that IsNothing() is the .NET equivalent to the VBScript function IsNull(). The working syntax for my report requirement is:

= IIf(Not IsNothing(Fields!FieldName.Value), Fields!FieldName.Value, "--")

Sunday, May 2, 2010

Pivoting Rows to Columns - Part 3

In this last installment of this series on pivoting rows into columns, I will show you how to use GROUP BY and CASE together to achieve the desired result. The specific example will take Profile codes from the NAProfiles table of the Shelby v.5 database and pivot them into columns.

Here is the basic query without pivoting, showing each name in the database along with Profile codes:

select
Names.NameCounter,
Names.FirstMiddle,
Names.LastName,
Profiles.Profile
from
Shelby.NANames as Names left join
Shelby.NAProfiles as Profiles on Names.NameCounter = Profiles.NameCounter

This generates a list of names and Profile codes, but a person with three codes will be listed three times. What many people want is to list the person once, with a column for each Profile code of interest. The first step in that direction with today's approach is to add a GROUP BY clause and an aggregate function for the Profile code:

select
Names.NameCounter,
Names.FirstMiddle,
Names.LastName,
Profile = max(Profiles.Profile)
from
Shelby.NANames as Names left join
Shelby.NAProfiles as Profiles on Names.NameCounter = Profiles.NameCounter
group by
Names.NameCounter,
Names.FirstMiddle,
Names.LastName

Now we are back to one line per person, but with only one Profile in the results. Because of the MAX() function, we have the Profile that comes last alphabetically for each person. To gain control over which Profile shows up, add a CASE statement that returns only the desired value inside of the MAX() function:

select
Names.NameCounter,
Names.FirstMiddle,
Names.LastName,
Profile = max(case Profiles.Profile when 'LASTCN' then Profiles.Profile end)
from
Shelby.NANames as Names left join
Shelby.NAProfiles as Profiles on Names.NameCounter = Profiles.NameCounter
group by
Names.NameCounter,
Names.FirstMiddle,
Names.LastName

This returns only the LASTCN Profile code in the Profile column of the results. Now that you have a method of getting the info from NAProfiles on just the row(s) you want, you can use a variety of NAProfiles columns and column aliases to get the desired results. You can even substitute your own marker as the output of the CASE statement, such as printing an "X" to indicate a match.

select
Names.NameCounter,
Names.FirstMiddle,
Names.LastName,
LastCNDate = max(case Profiles.Profile when 'LASTCN' then convert(varchar, Profiles.Start, 101) end),
AllergyInfo = max(case Profiles.Profile when 'ALLERG' then Profiles.Comment end),
OnReachingTeam = max(case Profiles.Profile when 'ZRREACH' then 'X' end)
from
Shelby.NANames as Names left join
Shelby.NAProfiles as Profiles on Names.NameCounter = Profiles.NameCounter
group by
Names.NameCounter,
Names.FirstMiddle,
Names.LastName

This approach works best when you have only one join that creates a one-to-many relationship, such as NANames to NAProfiles. If you have several joins of that type, then one of the first two approaches (discussed in the prior two blog posts) would probably work better. But when you do have just one such join, this is perhaps the easiest way to move rows to columns.

Tuesday, March 30, 2010

Pivoting Rows to Columns - Part 2

The last post looked at how to pivot rows to columns using multiple joins to the same table. In this post I will explain how to pivot using multiple subqueries in the SELECT clause of the query.

I will also be using a special "view" object in the example solutions. Views are essentially the results of queries that have been saved into the database in such a way that you can use the sets of results just as you would use a table. The view I'm going to use is called Shelby.VIEW_SHELBY_CUSTINFO_ALL. This is a view of the custom information collected on the Custom Tabs of the GlobaFILE module in Shelby v.5. Each row of this view represents one custom value for one name entry in the NANames module.

Here is an example of results from the Shelby.VIEW_SHELBY_CUSTINFO_ALL view:

select * from Shelby.VIEW_SHELBY_CUSTINFO_ALL


Now, let's limit this down to just the Spiritual Gifts tab and the Teacher check box option on that tab:

select * from Shelby.VIEW_SHELBY_CUSTINFO_ALL as CustomInfo where CustomInfo.TabName = 'Spiritual Gifts' and CustomInfo.FieldName = 'Teacher'


So far, so good. Bear with me just a moment as we follow this train of thought two steps further. Let's further limit this down to the value of the Teacher field for just one person. For this example, I will limit it down to the person with a NameCounter value of 8:

select * from Shelby.VIEW_SHELBY_CUSTINFO_ALL as CustomInfo where CustomInfo.TabName = 'Spiritual Gifts' and CustomInfo.FieldName = 'Teacher' and CustomInfo.NameCounter = 8


Now, one last refinement before we talk about how to use this info in a subquery. Let's just get the one column of information we really want out of this view, the actual "true/false" value of the field.

select CustomInfo.Value_c from Shelby.VIEW_SHELBY_CUSTINFO_ALL as CustomInfo where CustomInfo.TabName = 'Spiritual Gifts' and CustomInfo.FieldName = 'Teacher' and CustomInfo.NameCounter = 8



This does not look like much at all, but it is precisely what we need to know about the custom value of the "Teacher" field for one person. (The negative one represents "true," meaning that this person has the check box checked on his or her record.)

Now we can look at how to meld this information as a subquery into a main query that pulls the main name information out of the database.

In case you have not used subqueries in the SELECT clause of a query before, here is a run-down of the basic principles:
  • A subquery uses the same syntax as a regular query.
  • A subquery can have its own SELECT, FROM, WHERE, GROUP BY, and HAVING clauses, just like a regular query.
  • A subquery that is returning a value to a SELECT column must return exactly one value each time it executes: one column and one row only.
  • A subquery may be correlated to the primary query by using one or more columns from tables in the primary query as part of conditions in the WHERE clause of the subquery.
What we need to do is correlate the query we wrote above with another query that pulls name information out of the database. Note the last condition in the subquery's WHERE clause below, which is where the correlation happens:

select
n.NameCounter,
n.FirstMiddle,
n.LastName,
SpiritualGift_Teacher = (select CustomInfo.Value_c from Shelby.VIEW_SHELBY_CUSTINFO_ALL as CustomInfo where
CustomInfo.TabName = 'Spiritual Gifts' and CustomInfo.FieldName = 'Teacher' and CustomInfo.NameCounter = n.NameCounter)
from
Shelby.NANames as n



What is happening is that the subquery executes for each row of results, substituting that row's NameCounter value from NANames into the WHERE condition of the subquery. Once we have one subquery added, it is quite easy to add more to pull additional custom values into the results:

select
n.NameCounter,
n.FirstMiddle,
n.LastName,
SpiritualGift_Teacher = (select CustomInfo.Value_c from Shelby.VIEW_SHELBY_CUSTINFO_ALL as CustomInfo where
CustomInfo.TabName = 'Spiritual Gifts' and CustomInfo.FieldName = 'Teacher' and CustomInfo.NameCounter = n.NameCounter),
SpiritualGift_Encouragement = (select CustomInfo.Value_c from Shelby.VIEW_SHELBY_CUSTINFO_ALL as CustomInfo where
CustomInfo.TabName = 'Spiritual Gifts' and CustomInfo.FieldName = 'Encouragement' and CustomInfo.NameCounter = n.NameCounter),
SpiritualGift_Service = (select CustomInfo.Value_c from Shelby.VIEW_SHELBY_CUSTINFO_ALL as CustomInfo where
CustomInfo.TabName = 'Spiritual Gifts' and CustomInfo.FieldName = 'Service' and CustomInfo.NameCounter = n.NameCounter)
from
Shelby.NANames as n


In case you would like a comparison, here is the phone number example from the previous post done with this subquery technique:

select
n.NameCounter,
n.FirstMiddle,
n.LastName,
MainPhone = (select p.PhoneNu from Shelby.NAPhones as p where p.PhoneCounter = 1 and p.NameCounter = n.NameCounter),
BusinessPhone = (select p.PhoneNu from Shelby.NAPhones as p where p.PhoneCounter = 2 and p.NameCounter = n.NameCounter)
from
Shelby.NANames as n

Thursday, March 25, 2010

Pivoting Rows to Columns - Part 1

Shortly after learning how to query the tables of a database, the first major hurdle many people hit is taking rows of results and "pivoting" them into columns. In the Shelby v.5 world, some of the most common results that people want this way include reporting one column per:
  • phone number type
  • custom field value
  • Profile code
In this post and the next two, I will show three different ways to pivot rows into columns. Each of the three methods has its strengths and weaknesses, so understanding each technique will help you pick the best one for the query you are writing at the time.

The first method is joining to the same table multiple times, once for each separate column you want. This is very similar to the method of joining to the same table more than once that I described in the earlier post on combining husband and wife together on the same row of results. The only difference here is that, instead of just one additional join, there will need to be as many joins as there are columns you want.

As a starting point of explanation, let me share with you a query that returns a list of names and phone numbers. This list has one row for each person, and an additional row for every phone type after the first one. If the person has three phone numbers in the database, the person will be listed three times.

select
Names.NameCounter,
Names.FirstMiddle,
Names.LastName,
Phones.PhoneNu
from
Shelby.NANames as Names left join
Shelby.NAPhones as Phones on Names.NameCounter = Phones.NameCounter

This would yield the following snippet of results:


Notice how many people are listed multiple times.

In Transact-SQL, the flavor of SQL that runs on SQL Server, we can add an additional restriction onto the join condition between NANames and NAPhones. This additional condition can "pre-filter" the NAPhones table so that it returns only, say, the Main/Home phone type. With such a filtering condition in place, we can once again get just one row per name, since no one has more than one Main/Home phone. I will do this in the query by limiting the NAPhones table to only the rows with a PhoneCounter of 1, which I know represents the Main/Home phone type. Here is the query:

select
Names.NameCounter,
Names.FirstMiddle,
Names.LastName,
Phones.PhoneNu
from
Shelby.NANames as Names left join
Shelby.NAPhones as Phones on Names.NameCounter = Phones.NameCounter and Phones.PhoneCounter = 1

And here are the results:


Now that I can control exactly which phone type is returned by the NAPhones table in any given join to it, I can simply add more joins and filter each one to a specific type of phone number. In the next query I get both the Main/Home phone number and the Business phone number, which always has a PhoneCounter value of 2:

select
Names.NameCounter,
Names.FirstMiddle,
Names.LastName,
MainPhones.PhoneNu as MainPhone,
BusinessPhones.PhoneNu as BusinessPhone
from
Shelby.NANames as Names left join
Shelby.NAPhones as MainPhones on Names.NameCounter = MainPhones.NameCounter and MainPhones.PhoneCounter = 1 left join
Shelby.NAPhones as BusinessPhones on Names.NameCounter = BusinessPhones.NameCounter and BusinessPhones.PhoneCounter = 2

And here are the results:


Notice in the query that I changed the table alias to reflect the type of phone number I was after, and I added aliases to the phone number columns to also reflect the type of number.

At this point, it is a simple matter of repeating the join for any number of phone types, always being sure to change the table alias and the PhoneCounter value to match the phone type. By the way, if you want to know which PhoneCounter values you have in your database, run the following query to find out:

select * from Shelby.NAPhoneTypes

These examples are all about phone numbers, but this same principle can apply to NAProfiles, MBTextPicks, SGMstOrg, and any other table that has a one-to-many relationship with NANames.
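For instance, here is a sketch of the same technique applied to NAProfiles, assuming a 'LASTCN' Profile code exists in your data. As with the phone joins, the filter belongs in the join condition so the left join still returns everyone, even people without that Profile:

select
Names.NameCounter,
Names.FirstMiddle,
Names.LastName,
LastContacted.Profile as LastContactProfile, -- NULL for anyone without a LASTCN Profile
LastContacted.Start as LastContactDate
from
Shelby.NANames as Names left join
Shelby.NAProfiles as LastContacted on Names.NameCounter = LastContacted.NameCounter and LastContacted.Profile = 'LASTCN'

Just keep in mind that if someone has more than one row with the same Profile code, that person will repeat in the results, the same way a person with two phone numbers of the same type would.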

Stay tuned for other ways to take a set of rows and pivot them into columns.

Monday, March 1, 2010

GLAcct Table Gotcha

Assumptions are the bane of any query project, and one of mine bit me today. I had assumed that only Income and Expense type accounts would have any value stored for the ClosingAcctNu column in the GLAcct table. After all, only those two types of accounts use a closing account. In the software they are the only kinds of accounts that allow you to see or to enter any value for closing account information.

I was wrong. Somehow the ClosingAcctNu column in a customer's data was populated with account values for header accounts, total line accounts, and possibly others as well. This was not a conversion issue; these were brand new accounts in a brand new chart of accounts. I had to adjust my query for this customer to restrict the search for closing accounts to detail-type accounts only, so that it would not pick up erroneous closing account information from the Header and Total Line rows in the GLAcct table.
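For illustration only, here is a rough sketch of the kind of restriction I added. The account-type column name and its detail value are placeholders ('AcctType' and 'D'); verify the actual GLAcct column name and code values in your own data before relying on them:

-- Placeholder sketch: limit the closing-account lookup to detail accounts only,
-- so Header and Total Line rows can never contribute a ClosingAcctNu value.
-- 'AcctType' and 'D' are hypothetical stand-ins for the real column and code.
select *
from
Shelby.GLAcct as acct
where
acct.AcctType = 'D'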

This was particularly frustrating because it only became apparent in the "real world" environment; in my test database there are no extraneous closing account values in the GLAcct table.

Whenever you get unexpected results, looking for the culprit can be tricky. Just be prepared for the unexpected to sometimes be lurking even in a query of familiar tables.

Friday, February 26, 2010

Calculate a Future Date

Many reports I create are roll sheets or other date-specific reports that need to print a future date, usually the "next Sunday" after "today." Let's walk through the process of finding the "next Sunday" on the calendar.

First, remember the function to return today's date:

select Today = getdate()

Second, remember that we can move the date into the future by adding a value equal to the number of days we want to move. Thus, the date for tomorrow would be calculated this way:

select Tomorrow = getdate() + 1

Third, it is helpful to keep in mind that each day of the week is assigned a digit from 1 (for Sunday) through 7 (for Saturday). The SQL statement that returns the current day-of-the-week digit value is:

select DayOfTheWeekDigit = datepart(dw, getdate())

Thus, what we need to figure out is "How many days after today will be next Sunday?" Obviously, if we are asking that question on Sunday the answer is seven. If we are asking that question on Monday the answer is six, on Tuesday it is five, and so on through Saturday when it is one. Applying some basic math concepts, we can eventually derive the following formula:

"Next Sunday" = "Today" + (8 - "Today's Day of the Week Digit")

Substituting the actual SQL syntax for this formula we have:

select NextSunday = getdate() + (8 - datepart(dw, getdate()))

Simply changing the 8 to a 9 will calculate next Monday, a 10 will calculate next Tuesday, and so on. By adjusting this formula, you can calculate any future day of the week.
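One refinement worth knowing about: getdate() includes the current time as well as the date, so if you only want the date itself to print, you can wrap the formula in a CONVERT with style 101 (mm/dd/yyyy). Here is a sketch that also includes the "next Monday" variation described above:

select
NextSunday = convert(varchar, getdate() + (8 - datepart(dw, getdate())), 101), -- date only, no time portion
NextMonday = convert(varchar, getdate() + (9 - datepart(dw, getdate())), 101)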

Monday, February 1, 2010

Rollup and Grouping Functions

If you have done many queries at all, you have probably used the GROUP BY clause to generate aggregate values across a subset of rows. For instance, here is a simple query to calculate the total amount given to each Purpose Code for each year of history in the Shelby v.5 database:

select
year(hst.CNDate) as GiftYear,
pur.Purpose as GiftPurpose,
sum(det.Amount) as TotalGiving
from
Shelby.CNHst as hst inner join
Shelby.CNHstDet as det on hst.Counter = det.HstCounter inner join
Shelby.CNPur as pur on det.PurCounter = pur.Counter
group by
year(hst.CNDate),
pur.Purpose
order by
year(hst.CNDate),
pur.Purpose

This query yields one row per year/purpose combination, showing the total receipts for each purpose in each year.


This is fine as far as it goes, but a simple addition can also give us subtotals for each year across all purposes and a grand total of all years and purposes. All you have to do is add the keywords WITH ROLLUP at the end of the GROUP BY clause.

select
year(hst.CNDate) as GiftYear,
pur.Purpose as GiftPurpose,
sum(det.Amount) as TotalGiving
from
Shelby.CNHst as hst inner join
Shelby.CNHstDet as det on hst.Counter = det.HstCounter inner join
Shelby.CNPur as pur on det.PurCounter = pur.Counter
group by
year(hst.CNDate),
pur.Purpose with rollup
order by
year(hst.CNDate),
pur.Purpose

With that simple addition, the results would look like this:

The NULL values are rather unfortunate, though. It would probably be better to replace them with a descriptive phrase to show that the row is a total line. T-SQL includes a function called GROUPING() that helps with that. The GROUPING() function returns a 1 whenever the column inside the parentheses is NULL because it belongs to a ROLLUP total row. It returns a 0 if the column is not NULL, or if it is NULL for some reason other than the ROLLUP. Thus we can use GROUPING() to test for ROLLUP nulls and translate them into better values. Here is the example with the simple query we have been using.

select
case grouping(year(hst.CNDate)) when 1 then 'All Years' else cast(year(hst.CNDate) as varchar(4)) end as GiftYear,
case grouping(pur.Purpose) when 1 then 'All Purposes' else pur.Purpose end as GiftPurpose,
sum(det.Amount) as TotalGiving
from
Shelby.CNHst as hst inner join
Shelby.CNHstDet as det on hst.Counter = det.HstCounter inner join
Shelby.CNPur as pur on det.PurCounter = pur.Counter
group by
year(hst.CNDate), pur.Purpose with rollup
order by
year(hst.CNDate), pur.Purpose


And here is a sample of the results:


The final step would be to use GROUPING() in conjunction with the ORDER BY clause in order to move the total and subtotals to the bottom of each section, where most people expect to find them. I will leave that exercise for you.
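If you would like to check your work, here is one sketch of how that ordering could look. The trick is sorting on GROUPING() first, so the subtotal and grand total rows sink below their detail rows:

select
case grouping(year(hst.CNDate)) when 1 then 'All Years' else cast(year(hst.CNDate) as varchar(4)) end as GiftYear,
case grouping(pur.Purpose) when 1 then 'All Purposes' else pur.Purpose end as GiftPurpose,
sum(det.Amount) as TotalGiving
from
Shelby.CNHst as hst inner join
Shelby.CNHstDet as det on hst.Counter = det.HstCounter inner join
Shelby.CNPur as pur on det.PurCounter = pur.Counter
group by
year(hst.CNDate), pur.Purpose with rollup
order by
grouping(year(hst.CNDate)), -- moves the grand total row to the very bottom
year(hst.CNDate),
grouping(pur.Purpose), -- moves each year's subtotal below that year's detail rows
pur.Purpose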

Wednesday, January 6, 2010

SQL Saturday in Richmond, VA

I know it has been a while since I posted anything. Thanksgiving, Christmas, and New Year's Day have all preempted my usual posting schedule. Even now I just have time to post a brief message about the upcoming SQL Saturday in Richmond, VA. On Saturday, January 30th, there will be a SQL Saturday event at the ECPI College of Technology. I will be presenting a session on SELECT Query Fundamentals at this event, and there are lots of other topics on the schedule too, for beginners all the way up to experts.

If you want to learn more about SQL and if you can be in Richmond on January 30th, SQL Saturday is the place to be.

Hope to see you there!
