Tuesday, March 27, 2012

Are a blank string and a string with only space characters equal to each other? They should not be!!

I have a table called Table_1

CREATE TABLE Table_1 (
    a varchar(10) NOT NULL,
    b varchar(20),
    CONSTRAINT PK_Table_1 PRIMARY KEY (a)
)

so column a is the primary key.

INSERT INTO Table_1
(a ,b)
VALUES
('' , 'BLANK STRING')

INSERT INTO Table_1
(a ,b)
VALUES
(' ' ,'4 SPACES')

When I execute these insertions, the first record is inserted, but the second one fails with a uniqueness violation:

(1 row(s) affected)
Msg 2627, Level 14, State 1, Line 9
Violation of PRIMARY KEY constraint 'PK_Table_1'. Cannot insert duplicate key in object 'dbo.Table_1'.
The statement has been terminated.

My question is: HOW can a blank string and a string with 4 (or any number of) space characters in it be EQUAL? In my opinion they should not be regarded as the same! Is there a solution to this issue? That's not the case in Oracle Database.

Thanks for your help!

Regards,
Cem

According to ANSI standards (or at least SQL Server's implementation of them), trailing blanks are ignored for comparison and sorting purposes: the shorter string is padded with spaces before the comparison. (And primary keys are 'sorted'...)

Try these two statements; the first will provide the 'expected' answer, and the second will provide the 'real' answer.


SELECT LEN(REPLICATE(' ', 4))        -- returns 0: LEN ignores trailing spaces
SELECT DATALENGTH(REPLICATE(' ', 4)) -- returns 4: DATALENGTH counts the stored bytes
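
For illustration, a minimal sketch of the comparison itself (the literals are invented for the example): per the ANSI padding rule, the shorter operand is space-padded before comparing, so trailing spaces never distinguish two strings.

IF '' = '    '
    SELECT 'equal' AS result       -- this branch runs
ELSE
    SELECT 'not equal' AS result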


You should check the setting of SET ANSI_PADDING. If you want behavior such as you find in Oracle, you will need SET ANSI_PADDING ON; that way trailing blanks will not be trimmed from your VARCHAR inserts. (Note, though, that this setting affects how trailing blanks are stored, not how comparisons evaluate, so by itself it will not make '' and ' ' distinct keys.)

http://msdn2.microsoft.com/en-us/library/ms187403.aspx

Dan

Are Bit Flags Bad?

Is the practice of using integer data types to represent multiple values via bit flags bad? It seems to go against the rules of normalization, in that a single field can represent multiple values. On the other hand, since these values can be tested for via bitwise operations, it's not entirely bad.

Any insight would be appreciated.

Thanks

I've used integers to store multiple bit states on occasion and found it very useful.

For instance, you can have multiple boolean security states, and if you think of them as separate boolean values then separate bit fields would seem appropriate. But if you think of them as a single value describing the overall state of the security setting, then you aren't really violating any principles by storing them as a single field.

At a practical level, SQL Server will combine up to eight bit fields and store them internally as a single byte anyway.

Normalization is good practice because it leads to efficient and functional design. But if you avoid an instance where violating it would actually improve an application, then you are letting the cart lead the horse. Normalization is a principle, not dogma.

There is a multitude of examples where M$ itself used smallint as a datatype for this type of scenario. I think int would be overkill, while following the trend would save you storage space while doing the job.
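
For illustration, a minimal sketch of the bit-flag technique discussed above (the flag names and values are invented for the example):

-- Hypothetical security flags packed into one integer, one power of two per flag.
DECLARE @CAN_READ int, @CAN_WRITE int, @CAN_DELETE int, @Flags int
SELECT @CAN_READ = 1, @CAN_WRITE = 2, @CAN_DELETE = 4

SET @Flags = @CAN_READ | @CAN_DELETE      -- grant read + delete

IF @Flags & @CAN_DELETE <> 0              -- bitwise AND tests a single flag
    SELECT 'delete allowed' AS result

SET @Flags = @Flags | @CAN_WRITE          -- set a flag
SET @Flags = @Flags & ~@CAN_READ          -- clear a flag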

are backups not successful?

Hello:
A client of mine is using SQL Server 2000 and has constructed a database
maintenance job to do "Complete Backup" and "Transaction Log Backup".
In another thread, I had asked why the job was not deleting its own backup
files, as the job is configured to delete backups after one day.
Someone had replied and said that the deletions will not take effect until
the backup job completes successfully.
Since the backup files are not being deleted, does that mean that the backup job
is not succeeding?
If that's the case, then, I don't know what the client is doing wrong. It's
not hard to set up a backup job in the database maintenance plan.
I mean, does the SQL Server Service need to be stopped in order for these
backups to take place successfully?
Thanks!
childofthe1980s
Hi,
Before I suggest anything, check the maintenance plan log file first.
It is not true that backups will not be deleted until the next backup is
successful.
Check the maintenance plan again and make sure that you selected one
day (delete old files).
Check the path on the backup.
Hope this helps.
From,
Killer


Are all SQL 2000 Server wizards included in SQL 2000 Server Standard Edition?

Hi,
right now we use the SQL 2000 Server Developer Edition to
make tests, but we would like to buy SQL 2000 Server. Does
the SQL 2000 Server Standard Edition include all the
WIZARDS that are also included with the Developer or
Enterprise Edition (when I highlight a database in
Enterprise Manager, go to "Tools" and "Wizards").
Can anyone help?
Thanks for reply.
Bodo

All the wizards are included in Standard Edition as well. The differences
between Standard and Enterprise Edition are mostly in the high-end
performance and availability areas, like clustered servers, indexed views
etc.
For an overview see the topic "Features Supported by the Editions of SQL
Server 2000" in Books Online.
--
Jacco Schalkwijk MCDBA, MCSD, MCSE
Database Administrator
Eurostop Ltd.
"bodo" <bodobecker@.hotmail.com> wrote in message
news:0b6501c36d6b$b90a15c0$a601280a@.phx.gbl...
> Hi,
> right now we use the SQL 2000 Server Developer Edition to
> make tests, but we would like to buy SQL 2000 Server. Does
> the SQL 2000 Server STandard Edition include all the
> WIZARDS that are also included with the Developer or
> Enterprise Edition (when I highlight a database in
> Enterprise Manager, go to "Tools" and "Wizards").
> Can anyone help?
> Thanks for reply.

Hello Bodo!
Differences can be seen here:
http://www.microsoft.com/sql/evaluation/features/choosing.asp
HTH, Jens Süßmeyer.

Hi Jacco,
thanks for your help!
Brgds Bodo

Are all indexes created on subscriber ?

Using transactional replication, I believe that by default all the clustered and
nonclustered indexes are copied to the subscriber. Can someone confirm?
Hassan,
the settings on the snapshot properties tab of the article properties
determines which indexes are copied over. If any are created afterward
initialization, they'll not be replicated, so sp_addscriptexec can be used
for these.
Rgds,
Paul Ibison SQL Server MVP, www.replicationanswers.com
(recommended sql server 2000 replication book:
http://www.nwsu.com/0974973602p.html)
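
For reference, a sketch of the sp_addscriptexec call Paul mentions (the publication name and script path are placeholders, not from the thread):

EXEC sp_addscriptexec
    @publication = N'MyPublication',              -- assumed publication name
    @scriptfile = N'\\server\share\new_indexes.sql' -- script that creates the post-snapshot indexes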

Are all DSN's really ODBC?

I'm still trying to learn the alphabet soup of MS's data access methods, but
one thing I just noticed is that the Data Sources control panel is called
"Data Sources (ODBC)". Is this accurate? If so, should I *not* use DSN's in
my code? I am under the impression that ODBC is considerably slower than
OLEDB or other technologies (although what the differences are I don't really
know).
For instance, I have my code literally sprinkled with little ADO lookups
that use a single connection string...
"ODBC;DATABASE=OurDB;DSN=SQL Server"
Removing the "ODBC" does nothing, which I assume is because it's not a
key/value pair. But it still uses a DSN to look up the connection, so does
this mean every query I run using this connection string uses ODBC?
If so, should I really expect any sort of real-world performance boost if I
use a new string, one like...
"Provider=sqloledb;DATABASE=OurDB;DSN=SQL Server"
Is there an even better provider to use?
It's all so confusing!!
Maury
Why use DSN's at all? Go DSN-less. You just need to tack a few things onto
the connection string.
Head to www.connectionstrings.com for more info.
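
For illustration, a DSN-less OLE DB string of the kind described above (the server name is a placeholder; Integrated Security=SSPI assumes Windows authentication):

"Provider=SQLOLEDB;Data Source=MyServer;Initial Catalog=OurDB;Integrated Security=SSPI"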
Keith
"Maury Markowitz" <MauryMarkowitz@.discussions.microsoft.com> wrote in
message news:3486DEAD-166D-43C5-A2C7-7B37F66EC392@.microsoft.com...
> I'm still trying to learn the alphabet soup of MS's data access methods,
> but
> one thing I just noticed is that the Data Sources control panel is called
> "Data Sources (ODBC)". Is this accurate? If so, should I *not* use DSN's
> in
> my code? I am under the impression that ODBC is considerably slower than
> OLEDB or other technologies (although what the differences are I don't
> really
> know).
> For instance, I have my code literally sprinkled with little ADO lookups
> that use a single connection string...
> "ODBC;DATABASE=OurDB;DSN=SQL Server"
> Removing the "ODBC" does nothing, which I assume is because it's not a
> key/value pair. But it still uses a DSN to look up the connection, so does
> this mean every query I run using this connection string uses ODBC?
> If so, should I really expect any sort of real-world performance boost if
> I
> use a new string, one like...
> "Provider=sqloledb;DATABASE=OurDB;DSN=SQL Server"
> Is there an even better provider to use?
> It's all so confusing!!
> Maury
|||"Keith Kratochvil" wrote:

> Why use DSN's at all?
That's what I'm asking.
Maury

Yes.
Choose an OLEDB Provider and do not use a DSN.
OLEDB is going to be faster particularly when you choose the native provider
for your target platform (Example..."SQL Server")
Greg Jackson
PDX, Oregon

In addition to the other comments:
ODBC is a programming API; a low-level API. There exist higher-level abstractions such as the RDO
object model on top of ODBC. ODBC uses a driver to connect to the DBMS.
OLEDB is also a low-level programming API. There exist higher-level abstractions such as the ADO
object model on top of OLEDB. OLEDB uses a provider to connect to the DBMS, and there exists an
OLEDB provider which sits on top of ODBC. This is probably what you are seeing.
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Maury Markowitz" <MauryMarkowitz@.discussions.microsoft.com> wrote in message
news:3486DEAD-166D-43C5-A2C7-7B37F66EC392@.microsoft.com...
> I'm still trying to learn the alphabet soup of MS's data access methods, but
> one thing I just noticed is that the Data Sources control panel is called
> "Data Sources (ODBC)". Is this accurate? If so, should I *not* use DSN's in
> my code? I am under the impression that ODBC is considerably slower than
> OLEDB or other technologies (although what the differences are I don't really
> know).
> For instance, I have my code literally sprinkled with little ADO lookups
> that use a single connection string...
> "ODBC;DATABASE=OurDB;DSN=SQL Server"
> Removing the "ODBC" does nothing, which I assume is because it's not a
> key/value pair. But it still uses a DSN to look up the connection, so does
> this mean every query I run using this connection string uses ODBC?
> If so, should I really expect any sort of real-world performance boost if I
> use a new string, one like...
> "Provider=sqloledb;DATABASE=OurDB;DSN=SQL Server"
> Is there an even better provider to use?
> It's all so confusing!!
> Maury


are after triggers really after?

The other day, when I was preparing for Microsoft exam 70-229, I was reading up on triggers and noticed that the documentation states that the default behavior of triggers is to fire after the update\insert\delete transaction on the table that the trigger is created on. However, I have noticed that this is not totally accurate. If you set up a trace in Profiler with all of the xxxStarted and xxxCompleted events for the T-SQL and stored procedure event classes, you will notice that the originating transaction starts, then the trigger starts, then the trigger completes, and then the originating transaction completes.

So after thinking about this for a while, I have come to the conclusion that SQL Server must consider the trigger as part of the originating transaction, and must therefore complete the trigger before the originating transaction commits. With this in mind, shouldn't the default behavior of triggers be considered "during" and not "after"?

I believe they fire after the update\insert\delete, but before the commit. They certainly occur during the transaction, because an error in a poorly written trigger can roll back the whole thing.

I think both statements are accurate. What differentiates an 'after' trigger from a before (or INSTEAD OF) trigger is that it (the INSTEAD OF trigger) executes before any transaction is opened on the underlying table/view. Perhaps a blinding statement of the obvious, but as often the case in semantics, the BEFORE trigger needed an 'opposite' and that opposite became 'after'.

Regards,

hmscott
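
A minimal sketch that makes the "during" behavior visible (the table and trigger names are invented for the example): because the AFTER trigger runs inside the originating transaction, a rollback inside the trigger undoes the insert itself.

CREATE TABLE T1 (id int PRIMARY KEY)
GO
CREATE TRIGGER trg_T1 ON T1 AFTER INSERT
AS
BEGIN
    -- Runs before the INSERT's transaction commits,
    -- so this rollback cancels the INSERT itself.
    ROLLBACK TRANSACTION
END
GO
INSERT INTO T1 (id) VALUES (1)   -- batch aborts: the trigger rolled the insert back
GO
SELECT COUNT(*) FROM T1          -- returns 0
GO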

Are _WA_Sys_ statistics indexes an indicator of what should be indexed?

I know these are just statistics placeholders, but I am wondering if
they might be a good hint into what fields should be indexed. Any
comments?
Example:
_WA_Sys_DisplayName_0B27A5C0 - Should I index DisplayName?
_WA_Sys_CreatedDate_0B27A5C0 - Should I index CreatedDate?
_WA_Sys_CreatedBy_0B27A5C0 - Should I index CreatedBy?

Hi Joshua,
Auto generated statistics on a column are indeed a good indicator that the
column might have to be indexed. Keep in mind though that statistics are
created the first time the column is accessed as part of a where clause,
join, group by, order by etc, and that that may also be the last time. In
other words, check that the column is indeed used more or less regularly
before you start creating indexes everywhere.
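
For reference, a quick way to list the auto-created statistics on SQL Server 2000 (a sketch; the square brackets escape the underscores in the LIKE pattern):

SELECT OBJECT_NAME(id) AS TableName, name AS StatsName
FROM sysindexes
WHERE name LIKE '[_]WA[_]Sys[_]%'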
Jacco Schalkwijk
SQL Server MVP
"Joshua Guttman" <joshuaguttman@.excite.com> wrote in message
news:5a7a8917.0401291329.57833b2@.posting.google.com...
quote:

> I know these are just statistics placeholders, but I am wondering if
> they might be a good hint into what fields should be indexed. Any
> comments?
> Example:
> _WA_Sys_DisplayName_0B27A5C0 - Should I index DisplayName?
> _WA_Sys_CreatedDate_0B27A5C0 - Should I index CreatedDate?
> _WA_Sys_CreatedBy_0B27A5C0 - Should I index CreatedBy?


ArcServ and SQL Server 2000

Hi,

I am trying to create a backup agent for a SQL Server. I have given the information for the following: instance, authentication, username, password, confirm password.

I keep getting the error message "enter valid instance," even though I am entering the instance.

any ideas?

Is this a named instance? Do you have any special characters like dashes in the server name?

-Sue

Problem solved? If yes, post how you solved it, or say whether the hint from Sue helped you.

Jens K. Suessmeyer


http://www.sqlserver2005.de


Archiving very slow, help needed please

Hey guys,
The vendor gave us an archiving tool for our huge tables. We gave them a
production copy of our tables. They tested their tool against their own
server and told us that it takes 2 seconds to insert each record. I know
this is already bad. But today, I tested it against our own test server.
They also gave us instructions on what to do first before running the
archiving tool. Well anyway, after running the archiving tool, it took 20
seconds to insert 1 record. That's totally bad!
I would like to know if you will be able to help me identify the
problem by just looking at these links.
http://restricted.dyndns.org/executionplan.txt
http://restricted.dyndns.org/execplan1.gif
http://restricted.dyndns.org/execplan2.gif
http://restricted.dyndns.org/execplan3.gif
The INSERT statement that you will see there consumed 10 seconds of CPU and
30 seconds of Duration.
Are there any other statements that I can execute against my captured
profiler table that can help us troubleshoot?
Any help will be greatly appreciated.
Thanks.
V1rt

>> told us that it takes 2 seconds to insert each record
well....
Go back to the vendor (I assume you haven't paid them yet) and tell them
this isn't acceptable.
How many records do you have to deal with - even at 2 secs per record?
"Neil" wrote:
> Hey guys,
> The vendor gave us an archiving tool for our huge tables. We gave them a
> production copy of our tables. They tested their tool against their own
> server and told us that it takes 2 seconds to insert each record. I know
> this is already bad. But today, I tested it against our own test server.
> They also gave us instruction on what to do first before running the
> archiving tool. Well anyways, after running the archiving tool, it took 20
> seconds to insert 1 record. That's totally bad!
> I would like to know if you will be able to help me guys identify the
> problem by just looking at this links.
> http://restricted.dyndns.org/executionplan.txt
> http://restricted.dyndns.org/execplan1.gif
> http://restricted.dyndns.org/execplan2.gif
> http://restricted.dyndns.org/execplan3.gif
> The INSERT statement that you will see there consumed 10 seconds of CPU and
> 30 seconds of Duration.
> Are there any other statements that I can execute against my captured
> profiler table that can us troubleshoot?
> Any help will be greatly appreciated.
> Thanks.
> V1rt
>
>|||It looks like you are filtering the source table, Enclosure. Nothing wrong
with that. However, then you are joining this to the destination table.
Why?
An archive table usually doesn't contain the data in it yet. Are you
joining to make sure that you don't attempt to archive data that's already
been copied?
If so, there are better ways to write this. If you are INSERTing into a
table, not an update, there is rarely a need to join the destination to the
source. This will only slow you down because an INSERT will usually involve
updating the joined columns.
Try something more like this:
BEGIN TRANSACTION

INSERT INTO TableDestination
    (Column1, Column2, ..., ColumnN)
SELECT Column1, Column2, ..., ColumnN
FROM TableSource
WHERE TableSource.ColumnX = ExpressionX

IF @@ERROR <> 0 BEGIN
    ROLLBACK TRANSACTION
    RETURN
END

DELETE FROM s
FROM TableDestination AS d
INNER JOIN TableSource AS s
    ON d.Key1 = s.Key1
    AND d.Key2 = s.Key2
    AND ...
    AND d.KeyN = s.KeyN

IF @@ERROR <> 0 BEGIN
    ROLLBACK TRANSACTION
    RETURN
END

COMMIT TRANSACTION
Sincerely,
Anthony Thomas
"Nigel Rivett" <sqlnr@.hotmail.com> wrote in message
news:B085D060-8557-4C9E-B632-0C320BE1CE98@.microsoft.com...
> >> told us that it takes 2 seconds to insert each record
> well....
> Go back to the vendor (I assume you haven't paid them yet) and tell them
> this isn't acceptable.
> How many records do you have to deal with - even at 2 secs per record?
>
> "Neil" wrote:
> > Hey guys,
> >
> > The vendor gave us an archiving tool for our huge tables. We gave them a
> > production copy of our tables. They tested their tool against their own
> > server and told us that it takes 2 seconds to insert each record. I know
> > this is already bad. But today, I tested it against our own test server.
> > They also gave us instruction on what to do first before running the
> > archiving tool. Well anyways, after running the archiving tool, it took
20
> > seconds to insert 1 record. That's totally bad!
> >
> > I would like to know if you will be able to help me guys identify the
> > problem by just looking at this links.
> >
> > http://restricted.dyndns.org/executionplan.txt
> > http://restricted.dyndns.org/execplan1.gif
> > http://restricted.dyndns.org/execplan2.gif
> > http://restricted.dyndns.org/execplan3.gif
> >
> > The INSERT statement that you will see there consumed 10 seconds of CPU
and
> > 30 seconds of Duration.
> >
> > Are there any other statements that I can execute against my captured
> > profiler table that can us troubleshoot?
> >
> > Any help will be greatly appreciated.
> >
> > Thanks.
> >
> > V1rt
> >
> >
> >|||Hi Anthony,
Thanks for the awesome reply. Am I correct that the destination table that
is being joined is TABLEFIELDAUDIT? That's what I saw in the INSERT
statement.
Below is what I captured using Profiler. I saw tons of this running for 20+
seconds each. :(
INSERT INTO TABLEFIELDAUDIT
    (TABLENAME, FIELDNAME, FIELDVALUE, CHANGEDATE, KEYVALUE, USERID, SUBKEY1, SUBKEY2)
SELECT TABLENAME, FIELDNAME, FIELDVALUE, CHANGEDATE, E.RECORDID AS KEYVALUE,
    USERID, SUBKEY1, SUBKEY2
FROM TABLEFIELDAUDIT
INNER JOIN ENCLOSURE E
    ON TABLEFIELDAUDIT.SUBKEY1 = E.BARCODE
    AND TABLEFIELDAUDIT.SUBKEY2 = E.ENCLOSURENUMBER
WHERE TABLENAME = 'ENCLOSURE' AND SUBKEY1 = '00010690'
Thanks again,
Neil
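
One thing worth checking here (a guess, not something confirmed in the thread): whether TABLEFIELDAUDIT has an index covering the filter and join columns. If each of these statements scans the audit table, a sketch like this could cut the 20-second inserts considerably:

CREATE INDEX IX_TableFieldAudit_Lookup
    ON TABLEFIELDAUDIT (TABLENAME, SUBKEY1, SUBKEY2)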
"AnthonyThomas" <Anthony.Thomas@.CommerceBank.com> wrote in message
news:OLmPhvtyEHA.2568@.TK2MSFTNGP11.phx.gbl...
> It looks like you are filtering the source table, Enclosure. Nothing
> wrong
> with that. However, then you are joining this to the destination table.
> Why?
> An archive table usually doesn't contain the data in it yet. Are you
> joining to make sure that you don't attempt to archive data that's already
> been copied?
> If so, there are better ways to write this. If you are INSERTing into a
> table, not an update, there is rarely a need to join the destination to
> the
> source. This will only slow you down because an INSERT will usually
> involve
> updating the joined columns.
> Try something more like this:
> BEGIN TRANSACTION
> INSERT INTO TableDestination
> (Column1, Column2, ..., ColumnN)
> SELECT Column1, Column2, ..., ColumnN
> FROM TableSource
> WHERE TableSource.ColumnX = ExpressionX
> IF @.@.ERROR <> 0 BEGIN
> ROLLBACK TRANSACTION
> RETURN
> END
> DELETE FROM s
> FROM TableDestination AS d
> INNER JOIN TableSource AS s
> ON d.Key1 = s.Key1
> AND d.Key2 = s.Key2
> AND...
> AND d.KeyN = s.KeyN
> IF @.@.ERROR <> 0 BEGIN
> ROLLBACK TRANSACTION
> RETURN
> END
> COMMIT TRANSACTION
> Sincerely,
>
> Anthony Thomas
>
> --
> "Nigel Rivett" <sqlnr@.hotmail.com> wrote in message
> news:B085D060-8557-4C9E-B632-0C320BE1CE98@.microsoft.com...
>> >> told us that it takes 2 seconds to insert each record
>> well....
>> Go back to the vendor (I assume you haven't paid them yet) and tell them
>> this isn't acceptable.
>> How many records do you have to deal with - even at 2 secs per record?
>>
>> "Neil" wrote:
>> > Hey guys,
>> >
>> > The vendor gave us an archiving tool for our huge tables. We gave them
>> > a
>> > production copy of our tables. They tested their tool against their own
>> > server and told us that it takes 2 seconds to insert each record. I
>> > know
>> > this is already bad. But today, I tested it against our own test
>> > server.
>> > They also gave us instruction on what to do first before running the
>> > archiving tool. Well anyways, after running the archiving tool, it took
> 20
>> > seconds to insert 1 record. That's totally bad!
>> >
>> > I would like to know if you will be able to help me guys identify the
>> > problem by just looking at this links.
>> >
>> > http://restricted.dyndns.org/executionplan.txt
>> > http://restricted.dyndns.org/execplan1.gif
>> > http://restricted.dyndns.org/execplan2.gif
>> > http://restricted.dyndns.org/execplan3.gif
>> >
>> > The INSERT statement that you will see there consumed 10 seconds of CPU
> and
>> > 30 seconds of Duration.
>> >
>> > Are there any other statements that I can execute against my captured
>> > profiler table that can us troubleshoot?
>> >
>> > Any help will be greatly appreciated.
>> >
>> > Thanks.
>> >
>> > V1rt
>> >
>> >
>> >
>


archiving tables

We have some fast-growing tables in our SQL Server 2000 db
which already contain 13 million records. So, we are going to
periodically archive them based on a datetime field and allow
users to retrieve archived data via our application by
checking the corresponding checkbox. The problem is that we
do not know what to do with FK's: how do we re-create the FK's
after archiving to restore referential integrity?
Is there some standard solution?
Any comments will be appreciated.
Merry Christmas and happy new year to everybody.

Vitalik,
http://vyaskn.tripod.com/sql_archive_data.htm
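
One common pattern (a sketch of my own, not from the linked article; the table and column names are invented, and the archive tables are assumed to have the same schema): archive parents before children, and delete children before parents, so referential integrity holds on both sides throughout.

DECLARE @Cutoff datetime
SET @Cutoff = DATEADD(mm, -12, GETDATE())   -- archive rows older than a year

BEGIN TRANSACTION
-- Copy parents first, then children, so an FK on the archive tables (if any) is satisfied.
INSERT INTO ArchiveDB.dbo.Orders
    SELECT * FROM Orders WHERE OrderDate < @Cutoff
INSERT INTO ArchiveDB.dbo.OrderLines
    SELECT l.* FROM OrderLines l JOIN Orders o ON o.OrderID = l.OrderID
    WHERE o.OrderDate < @Cutoff

-- Delete in the opposite order (children first) so the live FK stays valid.
DELETE l FROM OrderLines l JOIN Orders o ON o.OrderID = l.OrderID
    WHERE o.OrderDate < @Cutoff
DELETE FROM Orders WHERE OrderDate < @Cutoff
COMMIT TRANSACTION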
"vitaliyk" <vitaliykrasner@.hotmail.com> wrote in message
news:08ee01c3c964$5faa1340$a401280a@.phx.gbl...
> We have some fast growing tables in our SQL Server 2000 db
> which content 13 million records already. So, we going to
> periodically archive them base on datetime field and allow
> user to retrieve archive data via our application by
> checking corresponding checkbox. The problem is that, we
> do not know what to do with FK's. How to re-create FK's
> after archiving to restore referal integrity.
> Is there some standard solution?
> Any comments will be appreciated.
> Merry chistmas and happy new year to everybody.

Archiving Problem

Hi!
SQL Server Express keeps several versions of an Archive file, such as
Archive #1, etc., under 'SQL Server Logs'.
What is generating these archives? Also, how can I see what has been
archived? Can I stop the server from doing this, or is it needed for it to operate?
Thank you very much!
-Bahman

Hi,
These are old versions of the ERRORLOG file. A new ERRORLOG is created when
you restart SQL Server or run sp_cycle_errorlog. The old files will be ERRORLOG.1,
ERRORLOG.2, etc., in the LOG directory. The displayed name in SSMS and EM
will have the date and time that the log goes up to. More information is in
Books Online.
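
For reference, the cycling mentioned above can also be triggered manually with the documented system procedure (requires appropriate permissions):

EXEC master.dbo.sp_cycle_errorlog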
John

John,
Thanks a million. I guess the word archive was throwing me off. I was
looking in the wrong place. Somehow I thought it was archiving data, so that I could
restore from it. But there is no info on that anywhere. Thank you very much for your
help.
-Bahman
"John Bell" wrote:
> HI
> "Bahman" wrote:
> > Hi!
> >
> > The SQL Server Express run several versions of an Archive file, such as
> > Archive #1, etc. under 'SQL Server Logs'.
> >
> > What is generating these archives? Also, How can I see what has been
> > archived? Can I stop the server from doing this or needed for to operate?
> >
> > Thank you very much!
> >
> > -Bahman
> >
> These are old versions of the ERRORLOG file. A new ERRORLOG is created when
> you restart SQL Server or run sp_cycle_errorlog. The file will be ERRORLOG.1
> ERRORLOG.2 etc.. in the LOGS directory. The Displayed name in SSMS and EM
> will have the date and time that the log goes up to. More information is in
> Books online.
> John
>sql


Archiving older data

Hi
In Oracle there is the high water mark concept, which moves the older
data into another partition and archives it to keep the running production
table short. Is there any similar concept in Oracle?
Thanks in advance
Regards,
Sunil

Sunil,
>Similarly is there any similar concept is there in Oracle.
Did you mean in SQL Server?
If you do please take a look at this very useful article
http://vyaskn.tripod.com/sql_archive_data.htm
"Sunil" <anonymous@.discussions.microsoft.com> wrote in message
news:35F449AD-92EC-42DB-AA37-340BFC261223@.microsoft.com...
> Hi,
> Like in Oracle high water mark concept is there, which takes the older
data into another partition and archives them to keep the running production
table short. Similarly is there any similar concept is there in Oracle.
>
> Thanks in advance
> Regards,
> Sunil


Archiving logging db

I'm fairly new to SQL. I've inherited a logging db that records the actions taken on a web service. One of the first questions I'm being asked to solve is to explain how I should compact the db and save older entries to an archive db.

I'm sure this is a pretty simple action: just identify the date I want to archive from and develop a query that copies all entries from before that time period to a separate db. I'm looking for some guidelines for this kind of action, but there's so much info on SQL out there, I'm having trouble parsing out the noise and finding the answers I'm looking for. Can anyone point me in the right direction, or give me advice?

Thanks

Levi

One way to do this is to create an SSIS package:

http://msdn2.microsoft.com/en-us/library/ms169917.aspx


Create an archive table (Arch_Table) in the same db or a different db with the same schema.

INSERT INTO dbname..Arch_Table
SELECT * FROM [Table]
WHERE dtColumn <= DATEADD(mm, -3, GETDATE())   -- to archive data 3 months old and older

SET ROWCOUNT 100   -- delete 100 rows at a time; if you are using SQL 2005 you can use the TOP clause instead (check BOL for details)

WHILE 1 = 1
BEGIN
    DELETE FROM [Table]
    WHERE dtColumn <= (SELECT MAX(dtColumn) FROM dbname..Arch_Table)

    IF @@ROWCOUNT = 0
        BREAK
END

SET ROWCOUNT 0   -- reset so subsequent statements delete normally


Archiving Large table

Hi,

I am having problems with archiving. Any help will be greatly appreciated.

I have a table with 12 million records, and it needs to be archived to another table.

Since there is a primary key (let's say SessionID), I tried to copy 1000 records at a time, and delete those records afterwards.

Problem is, it used to take 1 second to process 1000 records. Now, it takes anywhere from 2 minutes to 14 minutes!!!

Does anyone have a better idea of doing this? I am really stuck...

RE: I am having problems with archiving. Any help will be greatly appreciated. I have a table with 12 million records, and it needs to be archived to another table. Since there is a primary key (let's say SessionID), I tried to copy 1000 records at a time, and delete those records afterwards. Problem is, it used to take 1 second to process 1000 records. Now, it takes anywhere from 2 minutes to 14 minutes!!! Does anyone have a better idea of doing this? I am really stuck...

Q1 [It used to take 1 second to process 1000 records. Now, it takes 2 minutes to 14 minutes. Why?]

A1 There may be many different issues, (insufficient information to suggest a reasonably good guess and / or answer).

Q2 [Does anyone have a better, i.e.(FASTER?) idea of doing this?]

A2 Archiving may be accomplished efficiently. What is "Better" really depends on existing overall constraints and designs, available resources, and the details of the circumstances. An example that should be fairly quick (but not overly 'user friendly') would be Archiving a Test table in a Demo DB to an ArchiveDB database table named ArchiveTest.

Demo..Test To ArchiveDB..ArchiveTest

Use Demo
Go

INSERT INTO
[ArchiveDB].[dbo].[ArchiveTest]
([Parent], [Child])
SELECT
[Parent], [Child]
FROM
[Demo].[dbo].[Test]
GO

Alter Database Demo
Set Restricted_User
With
RollBack Immediate
Go

Alter Database Demo
Set Single_User
With
RollBack Immediate
Go

Alter Database Demo
Set Recovery Simple
With
RollBack Immediate
Go

-- drop and recreate, or truncate, delete, etc.
Drop TABLE [Test]
Go
CREATE TABLE [Test] (
[Parent] [varchar] (50) NOT NULL ,
[Child] [varchar] (50) NOT NULL)

Alter Database Demo
Set Recovery Full
With
RollBack Immediate
Go

Alter Database Demo
Set Multi_User
With
RollBack Immediate
Go|||How many indexes do you have on this table? Are any of them clustered?

I would suggest
1. copying all data to your archive table
2. script out all indexes and then drop them
3. build one non clustered index that would allow you to join to the archive table.
4. begin a transaction, delete a few thousand records, commit the transaction.
5. adjust the number of deleted records for best performance
6. restore indexes from step 2.|||Originally posted by Paul Young
how many indexs do you have on this table? Are any of them clustered?

I would suggest
1. copying all data to your archive table
2. script out all indexes and then drop them
3. build one non clustered index that would allow you to join to the archive table.
4. begin a transaction, delete a few thousand records, commit the transaction.
5. adjust the number of deleted records for best performance
6. restore indexes from step 2.

Thank you for your replies,

Actually, there is only the primary index, with identity on. That's it.
The only problem is that this table should be online all the time. I cannot restrict access to this table.

Somehow, the records don't seem to be sorted at all when I open the table. I tried to add the sort (desc) option on the table, and it seemed to be working. However, after a couple of archiving procedure runs, the performance gets worse. If I open the table again, it is again a mess. I don't see a sorted order in this table.

Once it is properly sorted, the performance is great. What can I do to keep the old records + new records sorted at all times? I cannot manually sort the table, and this process hurts the server badly.|||RE: Thank you for your replies. Actually, there is only the primary index, with identity on. That's it. The only problem is that this table should be online all the time. I cannot restrict access to this table. Somehow, the records don't seem to be sorted at all when I open the table. I tried to add the sort (desc) option on the table, and it seemed to be working. However, after a couple of archiving procedure runs, the performance gets worse. If I open the table again, it is again a mess. I don't see a sorted order in this table. Once it is properly sorted, the performance is great. What can I do to keep the old records + new records sorted at all times? I cannot manually sort the table, and this process hurts the server badly.

Q1 [I tried to add sort(desc) option on the table, and it seemed to be working. However, after a couple of archiving procedure run, the performance gets worse.]
A1 You are probably not updating your indexes at a suitable interval (to ensure optimal performance).

Q2 What can I do to keep the old record + new records sorted at all times?
A2 Cluster both TABLES on the desired column.|||My first inclination is that you have a corrupted index. The overall sorting should not change (aside from changes in data) due to inserting, updating or deleting data.

During a one-week period I rebuild all my indexes once if not twice on very dynamic tables. Are you doing this?

If your primary index is clustered, you are reordering some part of your data every time you insert, update or delete. This can lead to slow performance at times. If you must have this index then you just live with it; if you don't need it then change to non-clustered.|||Thank you, Paul Young.

Actually, I never rebuilt indexes on any of the tables. My bad...

What do I have to do to rebuild indexes? I tried DBCC DBREINDEX, and it didn't improve the performance of the archiving.

Can you guide me step-by-step what has to be done?

Thank you again.|||Generally I use maintinance plans to rebuild indexes and statistics along with other things however to answer your question DBCC DBREINDEX will do the trick.

Even if you haven't EVER rebuilt your index(es), they still should produce a result set correctly sorted. Again, my hunch is that you have a corrupt index. To fix this you will need to drop the index and re-create it. You can do this while others are using the system, but I would NOT advise it.|||One more thing: once you rebuild your index you probably want to update statistics so the correct optimization plans will be used.|||Thank you, Paul.

I tried to rebuild the index using 'DROP EXISTING'. It took about 5 minutes, and I opened the table, and it still looks messy.

But now, the index seems to be functioning faster. The problem is that I don't use the primary key as a query condition. Usually, my query condition is the 'CreationTime' column, which gets filled with the default value getdate().

Basically, I query all the data created during a time period.

Should I create another index on CreationTime?

Thank you,|||If this table is used in an OLTP environment you want to keep the number of indexes to a minimum because evryting you inset/update/delete a row you also have to update ALL indexes. In your case you only have one index so adding one more shouldn't cause you a noticable slowdown and will GREATLY improve the prformance of your select.

Before adding any indexes, drop your select statement into Query Analyzer, turn on Show Execution Plan, Show Server Trace and Show Client Statistics, and execute your select. Look at the "Execution Plan" tab and you will get a diagram representing what your select is doing.

Next click on the Index Tuning Wizard and let SQL Server suggest indexes to be built. Consider the suggestions and implement whatever you think looks good. Now rerun your select and look at the differences on the "Execution Plan" tab.

All of this is covered in Books Online, an excellent source of info once you know what to look up.|||Q1 Usually, my query condition is the 'CreationTime'. Should I create another index on CreationTime?

A1 If you are looking for good performance, Yes. Generally, one wants a (well maintained) index available for the query parser to take advantage of for any column that is frequently queried.
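
To make that concrete, a minimal sketch for the CreationTime case discussed above (the table name is hypothetical):

-- Non-clustered index to support the date-range queries described above;
-- the table name is hypothetical.
CREATE NONCLUSTERED INDEX IX_MyTable_CreationTime
ON dbo.MyTable (CreationTime)
GO
-- A typical range query that can now seek on the index:
SELECT * FROM dbo.MyTable
WHERE CreationTime >= '20060101' AND CreationTime < '20060201'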

Archiving is Not reducing DB Size

I created a few jobs that would archive the production DB and delete the archived data...
but it looks like the DB size is not reducing!!! Sometimes it looks like the size has increased!!

I think this is because the log file size has increased due to the DELETE operations... But what can I do about this?

Please Help!!

Action\All Tasks\Shrink Database in EM
or
DBCC SHRINKDATABASE in QA
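
For reference, a minimal sketch of the QA route (the database name and backup path are hypothetical; if the database is in full recovery, back up the log first, since the log cannot shrink past its active portion):

-- Back up the transaction log so the inactive portion can be reused
-- (assumes full recovery model; names and paths are hypothetical).
BACKUP LOG MyDatabase TO DISK = 'D:\Backups\MyDatabase_log.bak'
GO
-- Shrink the database, leaving 10% free space.
DBCC SHRINKDATABASE (MyDatabase, 10)
GO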

Archiving database for reporting

Hi
SSIS can be a good choice.
Also
>also keep about 1 year
> of data in archive database.
DELETE FROM tbl WHERE dt <DATEADD(Year,-1,GETDATE())

> I must keep data in production database about
> two months, b
Create a job and schedule it on day period
I must keep data in production database about
INSERT INTO arch.tbl (..) SELECT ... FROM prod WHERE dt >='19000101' AND
dt<=DATEADD(month,-2,GETDATE())

>Additionally two
> months old data in the production database can change (delete, update)
> and that also must be reflected in archive database.
Here, I'd suggest you create a trigger (or take a look at the OUTPUT clause)
and flag the modified data.
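
For illustration, a minimal sketch of the OUTPUT-clause variant on SQL 2005 (table and column names are hypothetical, and the staging table must match the source table's columns):

-- Capture the purged rows as part of the DELETE itself, so nothing is
-- lost between the copy and the purge (names are hypothetical).
DELETE FROM dbo.ProdTable
OUTPUT deleted.* INTO dbo.ArchStaging
WHERE dt < DATEADD(month, -2, GETDATE())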
"OgnjenT" <OgnjenT@.gmail.com> wrote in message
news:3f74847e-97c1-4a69-bbb8-17813ccc2cbb@.u10g2000prn.googlegroups.com...
> HI,
> I have a database on sql server 2005 ent edition with about 300 000
> new records every day. I must keep data in production database about
> two months, but there are a lot of reporting activities which are
> interrupting normal functioning of production server and there is also
> requirement to query historical data(about 1 year) with reports. I
> must add new data to the archive database every day. Additionally two
> months old data in the production database can change (delete, update)
> and that also must be reflected in archive database. So basically I
> must insert new data, update changed date and delete deleted data from
> production database into archive database but also keep about 1 year
> of data in archive database. Then I can separately optimize archive
> database for reporting. What is the best way to satisfy all that
> requirements. I am thinking about SSIS.
On Feb 7, 9:03 am, "Uri Dimant" <u...@.iscar.co.il> wrote:
> Hi
> SSIS can be a good choice.
> Also
>
> DELETE FROM tbl WHERE dt <DATEADD(Year,-1,GETDATE())
>
> Create a job and schedule it on day period
> I must keep data in production database about
> INSERT INTO arch.tbl (..) SELECT ... FROM prod WHERE dt >='19000101' AND
> dt<=DATEADD(month,-2,GETDATE())
>
> Here, I'd suggest you create a trigger (or take a look at the OUTPUT clause)
> and flag the modified data
> "OgnjenT" <Ognj...@.gmail.com> wrote in message
> news:3f74847e-97c1-4a69-bbb8-17813ccc2cbb@.u10g2000prn.googlegroups.com...
The problem with the trigger is that when I clean data older than two months it will
delete data in the archive database too. Also I must then put the trigger
on the other tables because of the referential integrity in the
archive database.

>INSERT INTO arch.tbl (..) SELECT ... FROM prod WHERE dt >='19000101' AND
>dt<=DATEADD(month,-2,GETDATE())
I can't do that because I have to synchronize the production and
archive databases every day, because every night I must have a dozen
reports.
I decided to have an archive database because the queries for normal
processing and for reporting are different, and I don't want reporting
to have so much influence on normal work. So it is not only archiving
data but creating a separate database for reporting, so I can put some
more indexes and indexed views, and at the same time my inserts and updates
will still be fast in the production database. Also, the production server
doesn't have to hold data older than two months, so it is better to
clean it so my queries would be faster.
Is it maybe OK to delete data older than two months from the archive
database every night and insert that data from the production database? It
seems to me that it will be faster than checking every row in the production
database and then updating the archive database if a row is updated, deleting if
deleted. The easiest, of course, is to insert new rows.
Maybe before that bulk delete and insert it would be smart to drop all
indexes, indexed views and constraints and recreate them after. But I
must do all that in about 15 minutes.
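
A minimal sketch of that nightly bulk re-sync idea (all object names are hypothetical; it replaces the archive's copy of the still-changing window wholesale instead of diffing row by row):

-- Replace the archive's copy of the mutable window (here, the last two
-- months) with the current production rows; all names are hypothetical.
BEGIN TRANSACTION

DELETE FROM ArchiveDB.dbo.Orders
WHERE dt >= DATEADD(month, -2, GETDATE())

INSERT INTO ArchiveDB.dbo.Orders (OrderID, dt, Amount)
SELECT OrderID, dt, Amount
FROM ProdDB.dbo.Orders
WHERE dt >= DATEADD(month, -2, GETDATE())

COMMIT TRANSACTION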

Archiving database for reporting

HI,
I have a database on sql server 2005 ent edition with about 300 000
new records every day. I must keep data in production database about
two months, but there are a lot of reporting activities which are
interrupting normal functioning of production server and there is also
requirement to query historical data(about 1 year) with reports. I
must add new data to the archive database every day. Additionally two
months old data in the production database can change (delete, update)
and that also must be reflected in archive database. So basically I
must insert new data, update changed date and delete deleted data from
production database into archive database but also keep about 1 year
of data in archive database. Then I can separately optimize archive
database for reporting. What is the best way to satisfy all that
requirements. I am thinking about SSIS.

SSIS will work. You could also consider a small trigger on the main table
that will put the key values into a holding table when
inserts/updates/deletes occur so you can very quickly make the necessary
changes to the historical records when <2mth old data is modified.
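
A minimal sketch of such a holding-table trigger (all names are hypothetical):

-- Record the keys of changed rows so a periodic job can re-sync just
-- those rows into the historical database; names are hypothetical.
CREATE TRIGGER trg_Orders_TrackChanges ON dbo.Orders
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON
    -- inserted covers INSERTs/UPDATEs, deleted covers UPDATEs/DELETEs
    INSERT INTO dbo.OrdersChangeLog (OrderID)
    SELECT OrderID FROM inserted
    UNION
    SELECT OrderID FROM deleted
END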
--
Kevin G. Boles
Indicium Resources, Inc.
SQL Server MVP
kgboles a earthlink dt net
"OgnjenT" <OgnjenT@.gmail.com> wrote in message
news:3f74847e-97c1-4a69-bbb8-17813ccc2cbb@.u10g2000prn.googlegroups.com...
> HI,
> I have a database on sql server 2005 ent edition with about 300 000
> new records every day. I must keep data in production database about
> two months, but there are a lot of reporting activities which are
> interrupting normal functioning of production server and there is also
> requirement to query historical data(about 1 year) with reports. I
> must add new data to the archive database every day. Additionally two
> months old data in the production database can change (delete, update)
> and that also must be reflected in archive database. So basically I
> must insert new data, update changed date and delete deleted data from
> production database into archive database but also keep about 1 year
> of data in archive database. Then I can separately optimize archive
> database for reporting. What is the best way to satisfy all that
> requirements. I am thinking about SSIS.

Archiving data to a separate database.

Hi all,
Would appreciate some thoughts on the best way to do this scenario.
This is not a large database in the number of tables, but has about 100,000
new rows per day. Data is inserted only, and no queries / SPs are being run
against the data.
Database 1 (transactional database)
Holds all current data, and all data up to 100 days old.
Database 2 (historic database)
Holds all data and is max 15 mins older than Database 1.
So, what would be the best way to keep the historic database current?
Replication, triggers, agent running SPs, keeping it within 15 mins sync to
the transactional database, and deleting records older than 100 days from the
transactional database...
Would appreciate anyone's thoughts on this...
-Mark

For 100,000 rows, in general, one could write a simple job & schedule it to
run every day. The job can have a script that copies the historical data to
the destination & purges it from the source.
Anith|||My choices...
Part 1: Removing from Active db.
Use a nightly job to delete all records over 100 days (You do have a Date
Added Column? Is it indexed?)
Part 2: Moving new data to 'Historic' db
Trigger that copies the INSERTED table to the 'Historic' db.
For this application, a trigger would have the same or less
operational impact, and the maintenance would be significantly
less. This assumes, of course, that security and connection
issues allow the use of a trigger. A sketch of such a trigger follows.
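
A minimal sketch, assuming an insert-only workload, a same-server historic database, and hypothetical names throughout:

-- Copy each newly inserted row straight into the historic database;
-- all names are hypothetical.
CREATE TRIGGER trg_Log_CopyToHistoric ON dbo.Log
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON
    INSERT INTO HistoricDB.dbo.Log (LogID, DateAdded, Payload)
    SELECT LogID, DateAdded, Payload
    FROM inserted
END

Note that if the historic database is unavailable, the trigger will fail and roll back the original insert - the sync concern raised below.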
--
Arnie Rowland, YACE*
"To be successful, your heart must accompany your knowledge."
*Yet Another certification Exam
"MarkieMark" <u20627@.uwe> wrote in message news:5e6ed66fd1456@.uwe...
> Hi all,
> Would appreciate some thoughts on the best way to do this scenario.
> This is not a large database in the number of tables, but has about
> 100,000
> new rows per day. Data is inserted only, and no queries / SPs are being
> run
> against the data.
> Database 1 (transactional database)
> Holds all current data, and all data up to 100 days old.
> Database 2 (historic database)
> Holds all data and is max 15 mins older than Database 1.
> So, what would be the best way to keep the historic database current?
> Replication, triggers, agent running SPs, keeping it within 15mins sync to
> the transactional database, and deleting records older than 100 days from
> the
> transactional database...
> Would appreciate anyones thoughts on this...
> -Mark|||Arnie,
My thoughts were with triggers too. The only issue I had was if the historic
database went off-line for some reason etc., that the databases would get out
of sync. Haven't been able to get my head round that little problem yet.
-Mark

archiving data

Hi all hope this is in the right forum.
I've got a SQL database which accumulates about 4 million records each year.
This amount causes my PC to slow down when searching, and my hard drive is
getting fuller.
To overcome this problem I have to delete half the records each year (2
million), which takes quite some time.
My question is - is it possible to archive some of the SQL database each
year onto another hard drive or DVD? The archived data should then be removed
from the original SQL database.
I would also need to be able to view, sort and search this archived data on
another machine in Excel or something.
Hope someone can help.
Thanks
Jon
"jsw" <jsw@.discussions.microsoft.com> wrote in message
news:E55D02B1-FD8D-468A-A716-9CF9A1717363@.microsoft.com...
> Hi all hope this is in the right forum.
> Ive got a SQL database which accumulates about 4 million records each
> year.
> This amount causes my PC to slow down when searching, and my hard drive is
> getting fuller.
>
4 million a YEAR slows you down? Do you have indexes?
Anyway...

> To overcome this problem I have to delete half the records each year (2
> million) which takes quite some time.
>
Yeah, deletions can take time.

> My question is - is it possible to archive some of the SQL database each
> year onto another hard drive or DVD. The archived data should them be
> removed
> from the original SQL data base.
I'd probably use DTS (assuming SQL 2000, SSIS for 2005) to copy out the data
in a format you can use.
Then delete it.
That should solve your problems.
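
For what it's worth, a minimal T-SQL sketch of the copy-out-then-delete idea (all names are hypothetical; the archive database's files can live on the other drive, and DTS can then export the archive table to a file Excel reads):

-- Copy last year's rows into an archive database whose files sit on
-- another drive, then delete them from the live table (names are
-- hypothetical).
SELECT *
INTO ArchiveDB.dbo.Readings_2005
FROM MyDb.dbo.Readings
WHERE DateAdded < '20060101'

DELETE FROM MyDb.dbo.Readings
WHERE DateAdded < '20060101'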
> I Would also need to be able to view, sort and search this archived data
> on
> another machine in excell or something.
> Hope someone can help.
> Thanks
> Jon
Greg Moore
SQL Server DBA Consulting
sql (at) greenms.com http://www.greenms.com
|||Hi,
Thanks for the reply.
Yes we have clustered indexes on the primary keys.
Sorry, but what is DTS?
Can the process be automated on a particular date?
Thanks
"Greg D. Moore (Strider)" wrote:

> "jsw" <jsw@.discussions.microsoft.com> wrote in message
> news:E55D02B1-FD8D-468A-A716-9CF9A1717363@.microsoft.com...
> 4 million a YEAR slows you down? Do you have indexes?
> Anyway...
>
> Yeah, deletions can take time.
>
> I'd probably use DTS (assuming SQL 2000, SSIS for 2005) to copy out the data
> in a format you can use.
> Then delete it.
> That should solve your problems.
>
> --
> Greg Moore
> SQL Server DBA Consulting
> sql (at) greenms.com http://www.greenms.com
>
>
|||"jsw" <jsw@.discussions.microsoft.com> wrote in message
news:AF4D2F53-9995-4745-973D-5E3DC04B1EC3@.microsoft.com...
> Hi,
> Thanks for the reply.
> Yes we have clustered indexes on the primary keys.
> Sorry but what is DTS.
Data Transformation Services.
Look for it under Enterprise Manager.

> Can the process be automated on a particular date?
Yes.
Create a DTS package, and then you can schedule it.

> Thanks
>
Greg Moore
SQL Server DBA Consulting
sql (at) greenms.com http://www.greenms.com

Archiving data

You could possibly use transactional replication for this.
All the tables would be in the same publication and
relationships may also be registered there. Foreign keys
would be marked as Yes (Not for Replication). You'd have
to ensure that deletes are not replicated (using None for
the replication command).
Initially I mentioned 'possibly' - this is because there
may be some deletes that you really want to be replicated -
it all depends on how your system is set up. EG on one
database I worked on, there were no deletes allowed -
updates of a bit flag would effectively remove the record
from the user's vision. If your system is like this then
transactional would be ok. If not, then DTS, or snapshot
replication to append records could be used before the
deletes take place.
HTH,
Paul Ibison
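
For reference, the "deletes are not replicated" setting is made per article; a minimal sketch with hypothetical publication and table names:

-- Publish an article whose DELETE commands are not propagated, so rows
-- purged from production survive on the archive subscriber
-- (publication and table names are hypothetical).
EXEC sp_addarticle
    @publication = 'ArchivePub',
    @article = 'Orders',
    @source_owner = 'dbo',
    @source_object = 'Orders',
    @del_cmd = 'NONE'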
I agree with Paul. I would incorporate the business logic for deletes in
custom stored procedures.
Hilary Cotter
Looking for a book on SQL Server replication?
http://www.nwsu.com/0974973602.html
"Paul Ibison" <Paul.Ibison@.Pygmalion.Com> wrote in message
news:17eb01c46f48$46da5a20$a301280a@.phx.gbl...
> You could possibly use transactional replication for this.
> All the tables would be in the same publication and
> relationships may also be registered there. Foreign keys
> would be marked as Yes (Not for Replication). You'd have
> to ensure that deletes are not replicated (using None for
> the replication command).
> Initially I mentioned 'possibly' - this is because there
> may be some deletes that you really want to be replicated -
> it all depends on how your system is set up. EG on one
> database I worked on, there were no deletes allowed -
> updates of a bit flag would effectively remove the record
> from the user's vision. If your system is like this then
> transactional would be ok. If not, then DTS, or snapshot
> replication to append records could be used before the
> deletes take place.
> HTH,
> Paul Ibison


Archiving Data

I was exploring ways of archiving old data in our production environment
whilst maintaining its availability... ...the current data takes 90% of the
queries, but grows very large and puts pressure on the disks... ...I wanted
to explore using a "sliding window" on a partitioned view across a federated
database and have two initial questions I can't find an answer to...
1. Does the data move between partitions when partitions are located on
different federated servers?
2. Assuming a good horizontal partition is chosen, is an automated sliding
window practical in a production environment?
Thanks in advance, BenUK

Personally, I'd revise your strategy to use Partitioned Tables in SQL2K5.
Awesome feature.
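
A minimal sketch of the SQL2K5 feature being recommended (boundary dates and all names are hypothetical; the SWITCH at the end is what makes a sliding window cheap, since it is a metadata-only operation):

-- Partition a table by month; RANGE RIGHT puts each boundary date into
-- the partition to its right (names and dates are hypothetical).
CREATE PARTITION FUNCTION pfByMonth (datetime)
AS RANGE RIGHT FOR VALUES ('20060101', '20060201', '20060301')

CREATE PARTITION SCHEME psByMonth
AS PARTITION pfByMonth ALL TO ([PRIMARY])

-- Slide the window by switching the oldest partition out to a staging
-- table (same schema and filegroup) for archiving:
-- ALTER TABLE dbo.Facts SWITCH PARTITION 1 TO dbo.FactsStage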
"Ben UK" <BenUK@.discussions.microsoft.com> wrote in message
news:843C5511-A7D1-4323-83A9-7D7B17BA7639@.microsoft.com...
> I was exploring ways of archiving old data in our production environment
> whilst maintaining it's availability... ...the current data takes 90% of
the
> queries, but grows very large and puts pressure on the disks... ...I
wanted
> to explore using a "sliding window" on a partitioned view across a
federated
> database and have two initial questions I can't find an answer to...
> 1. Does the data move between partitions when partitions are located on
> different federated servers?
> 2. Assuming a good horizontal partition is chosen, is an automated sliding
> window practical in a production environment?
> Thanks in advance BenUK
