
Sunday, March 25, 2012

Archive data suggestion

I have a table containing a huge number of rows, and a performance issue has
been raised. I am thinking of archiving some data so that the table will not
be that big. The most convenient way is to move it to another table. The
problem is: will this solve my performance problem, or do I need to move it
to another database to reduce the database size?

Regards,
TrueNo|||Taking a chunk of your data out of a table will certainly improve
performance on queries against that table, but if users have to run queries
that span both the old and new tables, performance will suffer. Think of it
this way: if users are looking for a needle in a haystack, decreasing the
size of the haystack will decrease the length of time it takes to find the
needle.
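As a rough sketch of the archive-and-delete approach (the table and column
names here are only placeholders, not from your system):

-- Move rows older than a cutoff into an archive table, then remove them
-- from the active table. Wrap both steps in one transaction for safety.
BEGIN TRAN;
INSERT INTO OrdersArchive
SELECT * FROM Orders WHERE OrderDate < '20040101';
DELETE FROM Orders WHERE OrderDate < '20040101';
COMMIT;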

HTH

Ray Higdon MCSE, MCDBA, CCNA

|||EK9 (a@.a.com) writes:
> I have a table containing a huge number of rows, and a performance issue
> has been raised. I am thinking of archiving some data so that the table
> will not be that big. The most convenient way is to move it to another
> table. The problem is: will this solve my performance problem, or do I
> need to move it to another database to reduce the database size?

Whether you put the archive table in the same or another database
does not affect performance for queries. It could reduce time for
backup and restore though.

However, it is far from certain that archiving data is the best way.
Maybe you need to review which indexes you have on the table.
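For instance, sp_helpindex lists what is there today (the table name below
is just a placeholder):

EXEC sp_helpindex 'dbo.YourBigTable';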

--
Erland Sommarskog, SQL Server MVP, sommar@.algonet.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp

Thursday, March 22, 2012

apt select

Hi all,

Can anyone clearly distinguish between the following two SELECT statements?
Which one should I choose for better performance?

1) select * from tab1,tab2,tab3,tab4
where tab1.Id1=tab2.Id
and tab1.Id2=tab3.Id
and tab1.Id3=tab4.Id

2)select * from tab1 join tab2
on tab1.Id1=tab2.Id
join tab3
on tab1.Id2=tab3.Id
join tab4
on tab1.Id3=tab4.Id|||Well...
this is an old one, and I will give the "check it out for yourself" answer first.
Depending on record size and count (and other factors), the only true test is to check statistics (TIME and IO) against each executed (or estimated) run of the queries.
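A minimal sketch of that test, reusing the table names from the question:

SET STATISTICS TIME ON;
SET STATISTICS IO ON;

-- old-style joins
select * from tab1, tab2, tab3, tab4
where tab1.Id1 = tab2.Id
  and tab1.Id2 = tab3.Id
  and tab1.Id3 = tab4.Id;

-- ANSI joins
select * from tab1
join tab2 on tab1.Id1 = tab2.Id
join tab3 on tab1.Id2 = tab3.Id
join tab4 on tab1.Id3 = tab4.Id;

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;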

Now for my opinion:
there is no difference between the two.
I prefer the ANSI joins purely from a portability aspect; everyone who is anyone is using them (even Oracle has supported ANSI joins since 9i).

Just me, however.

A Google search for "join style" returns a consensus on this matter: "no real difference, but your code will be portable".

Sunday, March 11, 2012

Apply Your Thought On Char To Int

Hi friends,
Is there any performance impact when I use CHAR as a data type instead of INT for retrieving data or for some complex query?
Thanks
With best regards,
Dhiraj|||Yes - and the result may not be what you expect, since char comparisons are different from numeric comparisons - if, in your example, you are storing only numbers in a char field.|||Originally posted by rnealejr
Yes - and the result may not be what you expect, since char comparisons are different from numeric comparisons - if, in your example, you are storing only numbers in a char field.

Thanks for your kind reply.
I want to store only the number in the database.
If I use the CHAR datatype instead of INT and also use an index on that field,
then what happens? Is there a performance impact on the query or not?

Please understand that actually I always use INT when I want to store numeric (int) data.
One fellow told me never to use INT and always use CHAR - that it is good practice, and otherwise you will face so many problems in the future. Is that correct?

I replied to him that it is only because of performance that I use INT instead of CHAR.

Thanks
Dhiraj|||A lot of accounting systems treat numbers as text, sort of. The numeric data is stored in char fields. One of the big differences, as mentioned in a previous post, is the sorting. As int values, 1, 2, 3, 10, 100, 101, 1000 get sorted ascending in that order; if stored as char, the same numbers would be sorted ascending as 1, 10, 100, 1000, 101, 2, 3 (I may have 101 and 1000 swapped, but you can still see the difference). If you have to connect to an accounting system to update/retrieve data, that might influence decisions on which datatype to use. Personally I prefer the int value.
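A small demonstration of the difference (the table variable here is purely illustrative):

DECLARE @.t TABLE (n int, s varchar(10));
INSERT INTO @.t SELECT 1, '1' UNION ALL SELECT 2, '2' UNION ALL SELECT 3, '3'
UNION ALL SELECT 10, '10' UNION ALL SELECT 100, '100'
UNION ALL SELECT 101, '101' UNION ALL SELECT 1000, '1000';

SELECT n FROM @.t ORDER BY n;  -- 1, 2, 3, 10, 100, 101, 1000
SELECT s FROM @.t ORDER BY s;  -- 1, 10, 100, 1000, 101, 2, 3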

Saturday, February 25, 2012

Application Performance advice please?

Hello all,

I've been recruited to assist in diagnosing and fixing a performance problem
on an application we have running on SQL Server 7.
The application itself is third party software, so we can't get at the
source code. It's a Client Management system, where consultants all over
the
country track their client meetings, results, action plans, etc. , and has
apparently been problematic for a long time now. I came into this
investigation
in mid-stream, but here's the situation as I understand it:

We have users reporting it's slow, with no discernible pattern with respect
to what part of the application they're using or any particular time of day.
I am told that it doesn't appear to be a bandwidth or computer resource
problem. They apparently added two app servers a year or so ago, which
temporarily
improved the performance. We're using a nominal percentage of CPU and
memory.

There are three large tables (approx 8 million rows) that are queried often,
as users click to see their calendar of appointments or review past meetings
with a client, etc. The activity on these tables is over 90% reads
(SELECTS) with about 10% INSERTS/UPDATES. We have attempted to run the Index
Tuning Wizard twice, but so far it just seems to hang (it could be that the
workload file is too big?). So, what we're doing now is isolating the SELECT
statements that
take a long time to run and manually comparing them to the indexes that
exist on these large tables. Since we can't alter the SQL source code,
we're trying to alter the indexes to improve performance.

What I would like to know is, is there a good way to get benchmark
measurements so we can explicitly measure any performance changes? Also, do
you think
we're going about this the right way, or is there some other avenue we could
be looking at to improve performance?

I recognize that performance questions are tricky to post/answer in a
newsgroup, because usually you need more information than is provided. The
problem is
that this is a high profile investigation (they're hauling us into meetings
every two days to report our progress) and I need to be able to convincingly
state that we have either improved performance by X% , or that it is the
application itself that's the problem and we're stuck with it.

Any thoughts would be deeply appreciated.

Thanks and best regards,

Steve"Steve_CA" <steveee_ca@.yahoo.com> wrote in message
news:35vad.13104$3C6.446571@.news20.bellglobal.com. ..
> Hello all,
> I've been recruited to assist in diagnosing and fixing a performance
> problem
> on an application we have running on SQL Server 7.
> The application itself is third party software, so we can't get at the
> source code. It's a Client Management system, where consultants all over
> the
> country track their client meetings, results, action plans, etc. , and has
> apparently been problematic for a long time now. I came into this
> investigation
> in mid-stream, but here's the situation as I understand it:
> We have users reporting it's slow, with no discernable pattern with
> respect
> to what part of the application they're using or now particular time of
> day.
> I am told that it doesn't appear to be a bandwith or computer resource
> problem. They apparently added two app servers a year or so ago, which
> temporarily
> improved the performance. We're using a nominal percentage of CPU and
> memory.
> There are three large tables (approx 8 million rows) that are queried
> often,
> as users click to see their calendar of appointments or review past
> meetings
> with a client, etc. The activity on these tables is over 90% reads
> (SELECTS) with about 10% INSERTS/UPDATES. We have attempted to run the
> Index
> Analyzer Wizard twice
> but so far it just seems to hang (it could be that the workload file is
> too
> big?) . So, what we're doing now is isolating the SELECT statements that
> take a long time to run and manually comparing them to the indexes that
> exist on these large tables. Since we can't alter the SQL source code,
> we're trying to alter the indexes to improve performance.
> What I would like to know is, is there a good way to get benchmark
> measurements so we can explicitly measure any performance changes? Also,
> do
> you think
> we're going about this the right way, or is there some other avenue we
> could
> be looking at to improve performance?
> I recognize that performance questions are tricky to post/answer in a
> newsgroup, because usually you need more information than is provided.
> The
> problem is
> that this is a high profile investigation (they're hauling us into
> meetings
> every two days to report our progress) and I need to be able to
> convincingly
> state that we have either improved performance by X% , or that it is the
> application itself that's the problem and we're stuck with it.
> Any thoughts would be deeply appreciated.
> Thanks and best regards,
> Steve

Your first stop should probably be Profiler, where you can gather lots of
information about the TSQL being executed on the server, along with
durations, I/O cost, query plans etc. And of course Perfmon for checking if
MSSQL is hitting I/O or CPU limits - if you've just joined the
investigation, you should probably satisfy yourself about that, especially
if you're now the person more or less responsible for resolving the
situation.

If it's a third-party app, then it's going to be awkward to find a
resolution, as you say. Indexes are probably the only thing you can change,
and even then you might find you've invalidated your support agreement by
doing so. But certainly, gathering information and establishing where the
bottleneck is (if there is one) should be the first step.

Simon|||On Mon, 11 Oct 2004 08:34:12 -0400, Steve_CA wrote:

>We have users reporting it's slow, with no discernible pattern with respect
>to what part of the application they're using or any particular time of day.

Hi Steve,

In addition to Simon's suggestion, I'd look into possible locking problems
as well. Just add locking events to the Profiler trace already suggested
by Simon.
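For a quick snapshot outside Profiler, the standard system procedures will
also show blocking as it happens:

EXEC sp_who2;  -- check the BlkBy column for blocking chains
EXEC sp_lock;  -- current locks, listed by spid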

Best, Hugo
--

(Remove _NO_ and _SPAM_ to get my e-mail address)|||Can you capture some of the queries that run slowly? I imagine that a
number of them will consistently perform slowly. If so, try running
these through QA and look at the execution plan. It should indicate
where the performance hit is. You may find that the cause is table
scans, etc.
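In Query Analyzer you can also get the plan as text without executing the
statement; a rough sketch (the table and column are just stand-ins for one
of your slow queries):

SET SHOWPLAN_TEXT ON;
GO
SELECT * FROM dbo.Appointments WHERE ConsultantId = 42;  -- hypothetical slow query
GO
SET SHOWPLAN_TEXT OFF;
GO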

Also, you should run the index analysis against these 'common' queries
to see if it comes up with any suggestions.

If it's locking/blocking then
http://www.sommarskog.se/sqlutil/aba_lockinfo.html will likely be of
some help

Finally, a bit of a long shot as it sounds similar to a problem we
had.

Are any views used? Can these be improved by swapping to tables
populated by stored procedures? You can keep the naming the same but
swap them over. To give you an example, we had 4 views which were
referenced by a final view (all were complex). Our app needed to look
at the data, but I didn't want to change the code. I swapped the view
to a table which was generated from a copy of the original view by an
SP. This saved a lot of time (1 min from startup down to 5 seconds),
with the SP being run every 10 mins on the server. This, combined with
better indexing, is saving an hour a day for them. OK, it's a
quick overview, but I'd see if you can get away with any little tricks
like this (especially if in certain circumstances you only need to
read the data). Might be worth a try.

Ryan

Hugo Kornelis <hugo@.pe_NO_rFact.in_SPAM_fo> wrote in message
news:<gh1mm0llnfvomce0s6u684ml81q365ursh@.4ax.com>...|||Thank you all,

Ryan, your suggestion regarding views is very relevant...one of the main
culprits is a commonly used view
and I thought we were dead in the water with version 7 (I know you can
create indexes on views in 2000).

"Ryan" <ryanofford@.hotmail.com> wrote in message
news:7802b79d.0410120046.4ea7dcd@.posting.google.co m...
> Can you capture some of the queries that run slowly ? I imagine that a
> number of them will consistently perform slowly. If so, try running
> these through QA and look at the execution plan. It should indicate
> where the performance hit is. You may find that this is table scans
> etc...
> Also, you should run the index analysis against these 'common' queries
> to see if it comes up with any suggestions.
> If it's locking/blocking then
> http://www.sommarskog.se/sqlutil/aba_lockinfo.html will likely be of
> some help
> Finally, a bit of a long shot as it sounds similar to a problem we
> had.
> Are any views used ? Can these be improved by swapping to tables
> populated by stored procedures ? You can keep the naming the same but
> swap them over. To give you an example, we had 4 views which were
> referenced by a final view (all were complex). Our app needed to look
> at the data, but I didn't want to change the code. I swapped the view
> to a table which was generated by a copy of the original view from an
> SP. This saved a lot of time (1 min from startup down to 5 seconds)
> with the SP being run every 10 mins on the server. This combined with
> better indexing and it's saving an hour a day for them. OK, it's a
> quick overview, but I'd see if you can get away with any little tricks
> like this (especially if in certain circumstances you only need to
> read the data). Might be worth a try.
> Ryan
> Hugo Kornelis <hugo@.pe_NO_rFact.in_SPAM_fo> wrote in message
news:<gh1mm0llnfvomce0s6u684ml81q365ursh@.4ax.com>...
> > On Mon, 11 Oct 2004 08:34:12 -0400, Steve_CA wrote:
> > >We have users reporting it's slow, with no discernable pattern with
respect
> > >to what part of the application they're using or now particular time of
day.
> > Hi Steve,
> > In addition to Simon's suggestion, I'd look into possible locking
problems
> > as well. Just add locking events to the Profiler trace already suggested
> > by Simon.
> > Best, Hugo|||Steve,

Got your e-mail (replied directly as well). Essentially, that's what I
meant. Posting in NG in case it helps someone else later.

The scenario that we had with this was that we had a view which gathered
the data from 4 other views. Each of the 4 underlying views was a
little complex and took a while to run. Combined in the top-level
view, the performance was slow. The application ran a simple summary
query of the top-level view when opening. This took about 50 seconds to
run, which was unacceptable to the users.

Imagine (obviously change the names to something more meaningful)

myView1, myView2, myView3, myView4 all feed into myTopView. The
application looks to query the object myTopView.

I renamed the view from myTopView to myTopViewFull.

I created a table called myTopViewTable, with the same structure as the
data returned from the original view myTopView.

Then I created a view called myTopView (the same name as my original top
level view) which pointed to the table myTopViewTable instead of the 4
lower level views. A simple select * will do it.

All I had to do was write a very simple SP which truncated the table
myTopViewTable and inserted the data from myTopViewFull. This still took
a while to run, but it only takes the hit on the server, and the user
perception is changed: they think it's running quicker. Instead of a
bunch of users all doing the same thing at a 50-second cost to each of
them, we only have a 50-second cost on the server every 10 minutes.
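Roughly, the whole swap looks like this (the object names are the ones from
this example; the procedure name is made up):

EXEC sp_rename 'myTopView', 'myTopViewFull';
GO
-- Materialize the view's current output into a real table.
SELECT * INTO myTopViewTable FROM myTopViewFull;
GO
-- Re-create the original name as a thin view over the table,
-- so the application needs no changes at all.
CREATE VIEW myTopView AS SELECT * FROM myTopViewTable;
GO
-- Refresh procedure, scheduled as a job (every 10 minutes in our case).
CREATE PROCEDURE dbo.usp_RefreshMyTopView AS
BEGIN
    TRUNCATE TABLE myTopViewTable;
    INSERT INTO myTopViewTable SELECT * FROM myTopViewFull;
END
GO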

For our needs, it doesn't matter for this data if we update it every 10
minutes, but the difference to the users was quite noticeable. The time
to open the application dropped from 50 seconds plus to consistently
lower than 5 seconds. I did add some indexes to the main tables
(referenced by the view) to try and speed things up, and it helped a
little. I found more performance improvements in the application as a
result, though.

The main benefit of this is shifting the perception of work from the
users to the server. You can also try the WITH (NOLOCK) table hint on the
select statements to reduce locking / blocking issues, since you are taking
the data into a table directly.

Also, as SQL tables / views cannot share the same name, if you have a
view that runs far too slowly, you can sometimes get a bit of a
performance advantage by doing this. It's crude, but it works provided
you apply it correctly. No changes to the application should be needed.

A simple test is: drop the data from the view into a table. Try a query
that you know takes a while against the view and against the table. See
if there is a performance improvement. Oh, and you could try indexes on
the new table, provided you don't take a hit re-building them.

Indexed views are possibly another option though.

Hope that helps

Ryan


Application Performance

I'm not sure if this is the best forum to post this in (if not, if somebody
can suggest a good forum to post this question, I would appreciate it!)
I have a client with an application setup as follows:
(1) Web Server running Windows 2003 Standard Edition Server & IIS.
(1) Web Server also running Windows 2003 Standard Edition Server &
Microsoft SQL Server 2000
The database is large - the MDF file is approximately 2.5 Gig. There are
well over 100 tables and many tables have hundreds of thousands of records.
On the web server, we have two applications running that communicate with
the database. One is written in ASP - this is used by their customers and
has a good amount of activity during business hours. The other is written
in ASP.NET/VB.NET 1.1 and is used by their internal staff to manage their
business. This side is used heavily, especially during business hours.
When only a few people are on the system, it flies, i.e. first thing in the
morning before everybody has gotten into the office and started using the
system. It's very fast. But as time progresses, and more and more people
log on, the system (both the ASP system and the ASP.NET system) gets slower
and slower, to the point of timing out regularly. If we do an IISReset,
reboot the server, or do anything that essentially kills all the active
sessions, access to the system speeds up again - for a time - until
everybody has logged back on, when it starts dogging again.
I'm looking for any suggestions on how to go about optimizing the system.
At first we thought it was bandwidth to the system, but everything else on
the network seems to not have a problem - only this system. Should we
investigate a cluster with load balancing on the SQL Server side? Or maybe
it's our application itself. Maybe we are not handling those large record
sets in the most efficient manner? Maybe we need to change some settings
in the SQL Server database itself to make access to the database more
efficient. I'm not really sure where to start with this issue.
If anybody has any advice on where to start looking at this problem, or
what forum I should post this on, I would greatly appreciate it.
Thank you very much for your help!
Sincerely,
Matt Schwartz|||Matt,
First off, clustering does not give you load balancing; it is a hardware
failover solution only. I am a little confused as to what configuration you
actually have. Is it two different servers and only one has SQL Server, or
are they both the same? In any case it is usually not a good idea to run
both SQL Server and the web service on the same server, as they compete
for resources. This is especially true of memory. How much memory does the
machine with SQL Server have, and how have you configured the memory? If you
have not set the MAX memory setting to give the ASP stuff enough memory to
use, they will constantly be fighting with each other for it (see the sketch
after these links). Here are some links that should get you started in
determining where the issues are.
http://www.sql-server-performance.c...nce_audit10.asp
Performance Audit
http://www.microsoft.com/technet/pr...perfmonitor.asp Perfmon counters
http://www.sql-server-performance.c...mance_audit.asp
Hardware Performance CheckList
http://www.sql-server-performance.c...rmance_tips.asp
SQL 2000 Performance tuning tips
http://www.support.microsoft.com/?id=224587 Troubleshooting App
Performance
http://msdn.microsoft.com/library/d...on_24u1.asp Disk Monitoring
http://sqldev.net/misc/WaitTypes.htm Wait Types
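As for the MAX memory setting mentioned above, a minimal sketch (the 2048 MB
cap is purely illustrative - size it for your own box):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory', 2048;  -- in MB
RECONFIGURE;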
Andrew J. Kelly SQL MVP
"Matt Schwartz" <matt@.nelix.com> wrote in message
news:Xns97CA88A7B151Dmattnelixcom@.216.196.97.142...
> I'm not sure if this is the best forum to post this in (if not, if
> somebody
> can suggest a good forum to post this question, I would appreciate it!)
> I have a client with an application setup as follows:
> (1) Web Server running Windows 2003 Standard Edition Server & IIS.
> (1) Web Server also running Windows 2003 Standard Edition Server &
> Microsoft SQL Server 2000
> The database is large - the MDF file is approximately 2.5 Gig. There are
> well over 100 tables and many tables have hundreds of thousands of
> records.
> On the web server, we have two applications running that communicate with
> the database. One is written in ASP - this is used by their customers and
> has a good amount of activity during business hours. The other is written
> in ASP.NET/VB.NET 1.1 and is used by their internal staff to manage their
> business. This side is used heavily, especially during business hours.
> When only a few people are on the system, it flys. i.e. first thing in
> the
> morning before everybody has gotten into the office and started using the
> system. It's very fast. But as time progresses, and more and more people
> log on, the system (both the ASP system and the ASP.NET system) gets
> slower
> and slower, to the point of timing out regularly. If we do an IISReset,
> reboot the server, or do anything that essentially kills all the active
> sessions, access to the system speeds up again - for a time - until
> everybody has logged back on, when it starts dogging again.
> I'm looking for any suggestions on how to go about optimizing the system.
> At first we thought it was bandwidth to the system, but everything else on
> the network seems to not have a problem - only this system. Should we
> investigate a cluster with load balancing on the SQL Server side? Or
> maybe
> it's our application itself. Maybe we are not handling those large record
> sets in the most efficient manner? Mabye we need to change some settings
> in the SQL Server database itself to make access to the database more
> efficient. I'm not really sure where to start with this issue.
> If anybody has any advice on where to start looking at this problem, or
> what forum I should post this on, I would greatly appreciate it.
> Thank you very much for your help!
> Sincerely,
> Matt Schwartz
>|||Thank you very much for your assistance! We have two servers - one is
running IIS, the other running SQL Server. As far as memory, there is 4
GB on the SQL Server, but it's set up as the default installation.
I'll take a look at the articles you recommended. Thank you very much!
Sincerely,
Matt Schwartz
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in
news:er9IyLRfGHA.1208@.TK2MSFTNGP02.phx.gbl:

|||OK, so it is not really a web server then, just a server with SQL Server on
it. If you are running Std Edition of SQL Server 2000 you can only use 2GB
anyway and the default config should work just fine. It is hard to say what
the issues can be since there is so little to go on. But those articles
should help to narrow it down some. Feel free to post any follow-on
questions as a result.
Andrew J. Kelly SQL MVP
"Matt Schwartz" <matt@.nelix.com> wrote in message
news:Xns97CBB0EE525B6mattnelixcom@.216.196.97.142...

Friday, February 24, 2012

Application and database performance benchmarks

Hi
I need to develop and implement procedures to monitor the performance of SQL
Server 2000/2005 databases and various applications. Is there a place I can
look for creating benchmarks?
Thanks
--
ontario, canada|||Chapter 1 of Inside Microsoft SQL Server 2005: Query
Tuning and Optimization has a section on creating a baseline for your
workload, and then monitoring the workload. That would be a reasonable
place to start studying the subject.
Roy Harvey
Beacon Falls, CT
On Fri, 25 Apr 2008 10:10:01 -0700, db <db@.discussions.microsoft.com>
wrote:

Monday, February 13, 2012

App_Data vs SQLEXPRESS performance?

Hi all,

Just wondering if there are any performance differences with having your database file in the App_Data folder vs having it directly on SQLEXPRESS.

Cheers,

Simon

Yes, but not much of one in most cases. If it's in App_Data with auto-attach, the database will be attached when it is first accessed, which will delay the first request slightly. It will then stay attached for subsequent accesses. Eventually, if there are no more accesses, the system will detach it to conserve resources; I think it's between 5 and 30 minutes, but I could be wrong.

In any case, the performance difference is slight, and the auto attaching and detaching will help free memory and resources for the other applications that may be running on that server when the database isn't being used. Extremely nice for both infrequent database applications (low resources) and high-usage database applications (since the database will always stay attached).
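For reference, a minimal sketch of the two connection styles (the server,
database, and file names here are purely illustrative):

Attached directly on the instance:
Server=.\SQLEXPRESS;Database=MyDb;Integrated Security=True;

Auto-attached from App_Data on first access:
Server=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\MyDb.mdf;Integrated Security=True;User Instance=True;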

|||Great thanks for the detailed reply Motley! I think I'll migrate my db into the App_Data folder now =)

Cheers

Sunday, February 12, 2012

Anyway to emulate LIMIT Start,Count?

Does anyone know of a way to emulate the LIMIT clause that is available in Oracle and MySQL?

I could greatly improve performance in my web application if I could figure out how to do it. I am using TOP right now to pull back just the records needed, but as with any page with many results, once you get 10 pages in you're pulling way too many records across the wire.

The LIMIT clause works great in the other DBMSs, but MS SQL 2K does not have it. :(|||set rowcount 100|||That doesn't deal with the offset, right?|||It just limits the number of rows returned to the client to 100 (or whatever number you specify).

Don't forget to set it back to 0 to restore the default functionality - RETURN ALL ROWS.|||I already get that with TOP, so there is no need for this.|||If you want LIMIT 30,20 (start at offset 30, i.e. the 31st row, and return 20 rows), do it like this:

select *
from (
select top 20
foo
, bar
from (
select top 50
foo
, bar
from yourtable
order
by bar desc
)
order
by bar asc
)
order
by bar desc|||http://www.dbforums.com/t994143.html|||I like the multiple subselect idea but don't seem to be able to get it to work. The SQL errors out when trying to do an ORDER BY on a subselect.

For example - this errors out:

select top 20 AccountID,[Name] from (
select top 50 AccountID,[Name]
from CRDACCOUNT
order by AccountID desc
) order by AccountID

Give an "Incorrect syntax near the keyword 'order'." on the last line.

Ideas?

PS: Thanks for the great replies so far everyone!!|||I'm 99 5/8% certain that you are just missing an alias, something like:SELECT TOP 20 AccountID, [Name]
FROM (SELECT TOP 50 AccountID, [Name]
FROM CRDACCOUNT
ORDER BY AccountID DESC) AS zz
ORDER BY AccountID-PatP|||Wow, I just posted a question asking nearly the identical thing. So far it doesn't look good. My options are:

- Using an ORDER BY on a column and maintaining a bookmark position. This works, but I can't get decent performance out of it.
- Paging techniques. Having trouble getting these to work right as well.

Let me know if you find an acceptable solution.|||Yep - it was the alias. Here is the first query formatted with the aliases:

select *
from (
select top 20
foo, bar
from (
select top 50
foo, bar
from yourtable
order
by bar desc
) as tbl1
order
by bar asc
) as tbl2
order
by bar desc

I guess the trick now is to figure out how to have multiple fields in the ORDER BY clause.
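For what it's worth, a sketch of the same nested-TOP pattern extended to a
two-column sort (same illustrative names as above); the key is to reverse
the direction of every ORDER BY column in the middle query:

select *
from (
    select top 20 foo, bar
    from (
        select top 50 foo, bar
        from yourtable
        order by bar desc, foo desc
    ) as tbl1
    order by bar asc, foo asc
) as tbl2
order by bar desc, foo desc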

Thanks for the help that has been given!

Thursday, February 9, 2012

Anyone using Gigaspaces

Has anyone heard of Gigaspaces, and are you using it to scale out and improve
database performance?
Thanks|||Well, Gigaspaces isn't exactly a secret - the company's been around for ten years. It has many customers in high-end financial and ecommerce environments; publicized customers include Virgin Mobile, Dow Jones, Societe Generale, and others.



As far as database performance issues, Gigaspaces is designed to use a relational database (or any other data store) as a backend storage system, which means the actual database performance doesn't matter very much at all, because the ideal use is as an asynchronous backing store.



See http://www.gigaspaces.com/screencasts/developer/ScreenCasts.html#paas-video for an example of the asynchronous backing store in motion.