
Tuesday, March 27, 2012

Archiving data to a separate database.

Hi all,
Would appreciate some thoughts on the best way to do this scenario.
This is not a large database in the number of tables, but has about 100,000
new rows per day. Data is inserted only, and no queries / SPs are being run
against the data.
Database 1 (transactional database)
Holds all current data, and all data up to 100 days old.
Database 2 (historic database)
Holds all data and is max 15 mins older than Database 1.
So, what would be the best way to keep the historic database current?
Replication, triggers, agent running SPs, keeping it within 15 mins sync to
the transactional database, and deleting records older than 100 days from the
transactional database...
Would appreciate anyone's thoughts on this...
-Mark

For 100,000 rows, in general, one could write a simple job & schedule it to
run every day. The job can have a script that copies the historical data to
the destination & purges it from the source.
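A minimal sketch of such a job step, assuming a datetime column named
created_date and identically structured tables in both databases (all object
names here are hypothetical):

DECLARE @cutoff datetime
SET @cutoff = DATEADD(dd, -100, GETDATE())

BEGIN TRAN

-- Copy rows older than 100 days into the historic database...
INSERT INTO HistoricDB.dbo.EventLog
SELECT * FROM TransDB.dbo.EventLog
WHERE created_date < @cutoff

IF @@ERROR <> 0
BEGIN
    ROLLBACK TRAN
    RETURN
END

-- ...then purge the same rows from the transactional database.
DELETE FROM TransDB.dbo.EventLog
WHERE created_date < @cutoff

IF @@ERROR <> 0
    ROLLBACK TRAN
ELSE
    COMMIT TRAN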
Anith

|||

My choices...
Part 1: Removing from Active db.
Use a nightly job to delete all records over 100 days (You do have a Date
Added Column? Is it indexed?)
Part 2: Moving new data to 'Historic' db
Trigger that copies the INSERTED table to the 'Historic' db.
For this application, a trigger would have the same or less
operational impact, and the maintenance would be significantly
less. This assumes, of course, that security and connection
issues allow the use of a Trigger.
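A minimal sketch of such a trigger, assuming both databases live on the same
instance and the historic table has a matching definition (object names
hypothetical):

CREATE TRIGGER trg_EventLog_ToHistoric ON dbo.EventLog
FOR INSERT
AS
SET NOCOUNT ON
-- Copy each newly inserted row straight into the historic database.
INSERT INTO HistoricDB.dbo.EventLog
SELECT * FROM inserted
GO

Note that this makes the historic insert part of the original transaction: if
the historic database is unavailable, the insert into the transactional table
fails too, which is the sync concern raised further down.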
--
Arnie Rowland, YACE*
"To be successful, your heart must accompany your knowledge."
*Yet Another certification Exam
"MarkieMark" <u20627@.uwe> wrote in message news:5e6ed66fd1456@.uwe...
> Hi all,
> Would appreciate some thoughts on the best way to do this scenario.
> This is not a large database in the number of tables, but has about
> 100,000
> new rows per day. Data is inserted only, and no queries / SPs are being
> run
> against the data.
> Database 1 (transactional database)
> Holds all current data, and all data up to 100 days old.
> Database 2 (historic database)
> Holds all data and is max 15 mins older than Database 1.
> So, what would be the best way to keep the historic database current?
> Replication, triggers, agent running SPs, keeping it within 15mins sync to
> the transactional database, and deleting records older than 100 days from
> the
> transactional database...
> Would appreciate anyones thoughts on this...
> -Mark

|||

Arnie,
My thoughts were with triggers too. The only issue I had was if the historic
database went off-line for some reason etc., that the databases would get out
of sync. Haven't been able to get my head round that little problem yet.
-Mark

Tuesday, March 20, 2012

Appropriate Time Out for Alter Database?

Platform: SQL Server 2000 or higher

Scenario: We have an application which is going to perform some changes to a database, and for those changes to be safe, we want to set the database to single_user mode. We'll be executing commands like:

ALTER DATABASE <DbName> SET READ_ONLY WITH ROLLBACK IMMEDIATE

ALTER DATABASE <DbName> SET MULTI_USER

etc.

Is 15 seconds a reasonable timeout for the alter commands? Does database size come into play when considering the time out for such commands? Is there a good rule of thumb for this sort of thing?

What exactly are you changing in the database, or are you talking about changing tables?

Generally, alter database and alter table are "safe" to do on a running database. You do not need to set it into single user mode.

Yes, using alter table, size of the table being altered is a factor in how long it will take to complete.

Using alter database is almost instant unless you are creating a new huge file.

|||

Ultimately, it's a large number of operations, which include adding data, removing data, altering table definitions, etc. It's a process to change a schema, and the potential quantity of items is sufficient that we want to make sure that none of our users can make any changes while the process is running, which is why we want to set it into single user mode.

It sounds like giving the alter database command 15 seconds is generous enough, then. In my tests, I've seen it take up to 2 seconds, but the database I was testing on is a little bit smaller than our average database.
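For reference, a minimal sketch of the single-user variant mentioned in the
scenario ('MyDb' is a placeholder; the 15-second limit would be a command
timeout set on the client connection, not in the T-SQL itself):

-- Kick out all other users, rolling back their open transactions at once.
ALTER DATABASE MyDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE

-- ... perform the schema changes here ...

-- Return the database to normal operation.
ALTER DATABASE MyDb SET MULTI_USER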

|||

In that case, you should just use a transaction. That way it will be all or nothing and it will lock what it needs to do.
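A sketch of that all-or-nothing pattern with SQL 2000-compatible error
handling (the table and column are hypothetical; the UPDATE runs through EXEC
so the batch compiles before the new column exists):

BEGIN TRAN

ALTER TABLE dbo.Orders ADD Status int NULL
IF @@ERROR <> 0 GOTO ErrHandler

EXEC ('UPDATE dbo.Orders SET Status = 0')
IF @@ERROR <> 0 GOTO ErrHandler

COMMIT TRAN
RETURN

ErrHandler:
IF @@TRANCOUNT > 0 ROLLBACK TRAN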

|||

Well, the scope of the project I'm doing doesn't actually allow me to explore modifying this particular aspect of the solution. One of my tasks is to determine if a 15 second time out is reasonable or not, and having no luck finding the answer to that question anywhere else, I came here to see if there were any guidelines I could follow, for making that determination.

The "just use transaction" answer implies that there are no conceivable situations where it'd be appropriate to set the database to single user mode. I think that there probably are situations where it'd be appropriate, though I can't site them offhand. I'd probably need to ask more experienced SQL experts to give me an example where it'd be appropriate. It may be that my assumption is wrong, and that I should just be using transactions, and I guarantee that the next project I have where I can make that choice, I will use transactions. For this project, I'm required to set the database in single user mode, and I suspect that the only part of the TSQL that I need to worry about taking a long time is the "ROLLBACK IMMEDIATE," because that's going to depend on the number of transactions at the time that need to be rolled back.

|||

I don't understand your question about "time" to change to single user mode. I don't know how you intend to run your alter database commands. Assuming you are running a script, if you either add the single-user-mode command to the beginning, or make a batch file which runs osql with the alter database to single user and then runs the rest of the update script, then the updates would not run until the single user mode command was done. Or just run the command manually and wait for it to finish.

Yes, the rollback is what will take the time, and it might even fail. I have had times when it was unable to roll back very large transactions and it just hung. Depending on your usage and the situation at the time, it might be 2 seconds or it may never succeed.

Monday, March 19, 2012

Applying SP to Log-shipping Standby server

Scenario: SQL2000 server that is the Standby server in a log-shipping scenario

The need: Apply SP4 to the standby server

Issue: Service pack installation wants all databases to be "writable". The databases on the standby server are "IsInStandby" state and cannot be made "writable" (non-Read-Only) by the Alter Database command.

Question: is there a way to alter the state so that I can apply the service pack without removing log-shipping, applying the sp and then reestablishing log-shipping?

ReadmeSql2k32sp4.htm:

http://download.microsoft.com/download/1/b/d/1bdf5b78-584e-4de0-b36f-c44e06b0d2a3/ReadmeSql2k32sp4.htm#_1462462_considerations_for_an_instance_i_gq59

Thursday, March 8, 2012

Application/Security Design: Stored Procedures versus SQL queries

Hello everyone,

I don't know what category would be appropriate for this question but security seems to be close enough.

I have this case scenario: I am running an automated application that extracts data from a web site and stores the data in a table on SQL Server 2005. This information is not confidential to the degree of social insurance numbers or bank account numbers, but it should not be seen by a typical employee (it has no use for them). After the data has been stored, the application retrieves it from the same table, processes it, and updates the same table. This application runs every hour, indefinitely.

Should all the insert, update, and select queries be stored under a stored procedure? I am not concerned with performance. My concern would fall under design and security.

Is it worth hiding the details of inserting/updating/selecting behind a stored procedure? Or should I just allow the program to send select/update/insert SQL queries?

No employee (other than the developer and the DB admin) or customer ever accesses this table (they do not have permission from SQL). The usernames and passwords were created with security in mind.

Any thoughts or ideas?

Thanks for your time, Adrian

It sounds as though 'a typical employee' would not have access to the table. As long as the PUBLIC role cannot access the table, and 'a typical employee' does not have permissions, you are covered.

Using Stored Procedures 'may' be excessive in this situation.

|||

Thank your for your response.

I am still left wondering whether it is worth hiding the details of inserting/updating/selecting behind a stored procedure or whether I should just allow the program to send select/update/insert SQL queries.

I consider the stored procedure the last "line of defense" when it comes to enforcing business rules. Although in theory, in a 3-tier system, the business layer takes care of this, I have seen other developers write an application and forget one aspect of the business rules, or introduce some sort of formatting error. A stored procedure would correct this because it doesn't matter which business layer accesses the SQL Server; the rules are enforced by the server either way.

Do you have an understanding of where I am coming from? Where do we draw the line between what the business layer enforces and what the SQL Server enforces?

This is where I'd like to get other opinions on it too.

Thank you for your time, Adrian

|||

Adrian,

First, let me state that in 'almost' all situations, I recommend using stored procedures for several reasons, including: reusability, abstraction and security. Reusability means that the same code can be called multiple times from the application. Abstraction allows the procedure code (and how/where the data is actually stored) to be 'tuned' without having to redeploy the application. And of course, security protects the tables from inadvertent alteration and maintains a minimum level of data protection -and may even inject some form of audit trail.

That said, I recommend against putting excessive 'business rules' into stored procedures. I prefer business rules to be in the 'middle tier' -mainly for scalability reasons, as well as my attempts to keep the data server dedicated to protecting the data. Here, however, you will find a great range of opinions, and as hardware becomes more robust and less expensive, and data clusters have become easier to configure and operate -'scalability' becomes less of an issue.

Now in your situation, you posit that the application is automatic -little chance for SQL injection, and users have no reason for direct data access. You have to balance the extra effort for creating and testing the stored procedures against the benefit. Granted, there 'may' be a slight increase in performance due to reusing compiled procedures -but you are doing the same activities over and over again so even the 'ad-hoc' queries would most likely be in the procedure cache (make sure the queries are properly parameterized). But the main question to ask is: "What is the benefit and what is the cost of mandating stored procedure use above direct table access?"

|||

May I know how to give one user execute permission on all stored procedures, instead of granting it one by one? Thanks in advance for your advice.

eg.

GRANT EXECUTE ON [dbo].[SP_xxx] TO [yyy]

GO

Instead of granting procedure by procedure, how can I do it all in one go?

With regards

Bala

|||

Create a database role. Add all users to the role. Grant permissions to the role.

|||

Hi,

Thanks for the reply. Every day we delete and recreate at least a few hundred stored procedures. Once we delete them, the permissions are lost, and the newly created stored procedures have no permissions set. If there is a command that covers all stored procedures, I will run it each time after creating them, instead of granting permissions one by one.

With regards

Bala
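A hedged sketch of the earlier role suggestion; on SQL 2005, a schema-level
grant also survives dropping and recreating procedures, which avoids
re-granting after the daily recreate ('AppExecRole' and 'yyy' are placeholder
names):

-- Create the role once and add the user(s) to it.
CREATE ROLE AppExecRole
EXEC sp_addrolemember 'AppExecRole', 'yyy'

-- SQL 2005 only: one grant covers every current and future
-- procedure in the dbo schema.
GRANT EXECUTE ON SCHEMA::dbo TO AppExecRole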

|||

I always recommend adding stored procedures to source control. Each stored procedure as a separate file. Databases can be 'refreshed' from source control in order to keep out development 'detritus'.

Each stored procedure file also contains the necessary permissions. Here is an example: (Works in SQL 2000/2005)

IF EXISTS
 ( SELECT ROUTINE_NAME
 FROM INFORMATION_SCHEMA.ROUTINES
 WHERE ROUTINE_NAME = 'MyProcedureName'
 )
 DROP PROCEDURE dbo.MyProcedureName
GO

CREATE PROCEDURE dbo.MyProcedureName
/****************************************************************
* PROCEDURE: MyProcedureName
* DATE:
* AUTHOR:
*-
* DESCRIPTION:
*
*-
* CODE REVIEW: Date/Who/Status
*-
* VSS revision history: (location in source control)
****************************************************************/
( @Parameter1 datatype
, @Parameter2 datatype
)
AS

SET NOCOUNT ON

DECLARE @Err int
 , @RowsAffected int

-- Procedure Code Here

SELECT @Err = @@ERROR
 , @RowsAffected = @@ROWCOUNT

IF ( @Err != 0 )
 RETURN @Err

IF ( @RowsAffected = 0 )
 RETURN -1

RETURN 0
GO

GRANT EXECUTE ON dbo.MyProcedureName TO MyCustomRole
GO


|||

Hi Arnie,

Thank you very much, I will do that.

With regards

Bala


Application times out while doing the updates

here's the scenario..

I am running a DTS to collect the summarized info from an Oracle database
into SQL Server. I then have an update job which updates my
transactional table from the summarized table.

The update takes a very long time (~3 minutes) even though it has only
around 1,500 rows, which causes the application to time out. I want this
job to be done in less than a minute.

Thoughts on improving performance. Is stored procedure a way to go?
(I have used Isolation,row hints etc etc..nothing seems to be working)

AJ

I suggest you review the execution plan of the UPDATE query. Perhaps
additional indexes or query changes will speed it up substantially. If you
need additional help, please post your table DDL (including constraints,
indexes and triggers) along with your query and sample data.

Simply encapsulating the query in a proc is unlikely to improve performance.
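One way to inspect the plan from Query Analyzer without executing the
statement (a minimal sketch; the UPDATE below is only a stand-in for the real
query):

SET SHOWPLAN_TEXT ON
GO
-- The statement is compiled and its estimated plan returned, but not run.
UPDATE A SET ftq = 0 WHERE a_id = 1
GO
SET SHOWPLAN_TEXT OFF
GO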

--
Hope this helps.

Dan Guzman
SQL Server MVP

"AJ" <aj70000@.hotmail.com> wrote in message
news:6097f505.0408272303.752889bc@.posting.google.com...
> here's the scenario..
> I am running a DTS to collect the summarized info from Oracle database
> into SQL server. I then have a update job which updates my
> transactional table from the summarized table.
> The update takes a very long time (~ 3 minutes)even though it has
> around 1500 rows which causes the application to timeout. I want this
> job to be done in less than a minute.
> Thoughts on improving performance. Is stored procedure a way to go?
> (I have used Isolation,row hints etc etc..nothing seems to be working)
> AJ

|||

Thanks Dan,

Here's the entire structure and query

Table A has these columns

a_id(pkey),ftq,break_offs,completes,assigned,s_id, end_date (Indexed)
and few other columns

ftq,break_offs, completes,assigned are the summarized cols. which are
updated every hour from the summary_table.

Summary table has these cols.

a_id,ftq break_offs,completes,assigned.

I am issuing this update statement

UPDATE A SET
break_offs = (select case when break_offs<0 then 0 else break_offs end
break_offs from summary_table where A.ano = summary_table.ano),
ftq=(select ftq from summary_table where A.ano=summary_table.ano),
actually_assigned=(select member_assigned from summary_table where
A.ano=summary_table.ano),
qualified_completes=( select member_completes from summary_table where
A.ano=summary_table.ano)
WHERE EXISTS (select break_offs,ftq from summary_table where A.ano =
summary_table.ano) and
end_date>=getdate().

------------

Apart from this we also have stored procs, trigger on the same
table.But the thing is the jobs runs 15 minutes past the hour and
stalls everything.

Thanks again

AJ

|||

AJ (aj70000@.hotmail.com) writes:
> Here's the entire structure and query
> Table A has these columns
> a_id(pkey),ftq,break_offs,completes,assigned,s_id, end_date (Indexed)
> and few other columns
> ftq,break_offs, completes,assigned are the summarized cols. which are
> updated every hour from the summary_table.
> Summary table has these cols.
> a_id,ftq break_offs,completes,assigned.

Note here: Dan asked for the DDL. By this he means the CREATE TABLE
statements. These are useful if you want to test a query, and they
are also easier to read than a free-form list. In this case, the
cure appears simple enough anyway:

> UPDATE A SET
> break_offs = (select case when break_offs<0 then 0 else break_offs end
> break_offs from summary_table where A.ano = summary_table.ano),
> ftq=(select ftq from summary_table where A.ano=summary_table.ano),
> actually_assigned=(select member_assigned from summary_table where
> A.ano=summary_table.ano),
> qualified_completes=( select member_completes from summary_table where
> A.ano=summary_table.ano)
> WHERE EXISTS (select break_offs,ftq from summary_table where A.ano =
> summary_table.ano) and
> end_date>=getdate().

This should be faster:

UPDATE A
SET break_offs = case when s.break_offs < 0 then 0
 else s.break_offs
 end,
 ftq = s.ftq,
 actually_assigned = s.member_assigned,
 qualified_completes = s.member_completes
FROM A
 JOIN summary_table s ON A.ano = s.ano
WHERE A.end_date >= getdate()

This uses syntax that is proprietary to MS SQL Server and Sybase,
but it is a lot more efficient, since in your version each subquery
is evaluated separately.

Permit me also to note that the last condition looks funky. getdate()
returns the current time, so if A.end_date is a date only, this condition
may not do what you expect. That is, rows where end_date = today will
not be updated.
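If the intent is to include rows dated today, one common workaround is to
truncate getdate() to midnight before comparing (a sketch; assumes end_date
holds dates at midnight):

-- DATEADD/DATEDIFF against day zero strips the time portion.
SELECT DATEADD(dd, DATEDIFF(dd, 0, GETDATE()), 0) AS today_at_midnight

-- which would make the condition:
-- A.end_date >= DATEADD(dd, DATEDIFF(dd, 0, GETDATE()), 0)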

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp

|||

Hi Dan/Erland,

Is there a way to save the execution plan? I ran trace/execution. My
update job seems to be firing at the end since I have lots of table-
level triggers. The changes that Erland is suggesting are in fact taking
longer than my query.

Thoughts/Comments

AJ

|||

AJ Jr (aj70000@.hotmail.com) writes:
> Is there a way to save the execution plan?..I ran trace/execution. My
> update job seems to be firing at the end since I have lots of table
> level triggers. The changes that Erland is suggesting is infact taking
> longer than my query.

You can run a query from Query Analyzer and press CTRL/K to get a graphical
showplan. Or you can press CTRL/L to get an estimated plan, so that the
query is not run. This is mainly useful if you run the troublesome
statement separately rather than the entire procedure.

If you have a Profiler trace with the Execution Plan event, you can find
the plan for the query, select that row, and the cut and paste from the
lower pane. Normally the plan you are looking for is the one before
StmtCompleted, but if there are triggers on the table, it may not be in
this case.

Too bad that my suggestion performed worse. That was indeed a bit of a
surprise. It would be interesting to see both plans.

It would help to have the complete CREATE TABLE and CREATE INDEX statements
for the tables.

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp

|||

To add to Erland's response, you may want to add unique constraints or
indexes on the ano columns.
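A sketch of that suggestion, using the table and column named earlier in the
thread:

-- A unique index on the join column tells the optimizer there is at most
-- one summary row per ano value, and speeds up the correlated lookups.
CREATE UNIQUE INDEX IX_summary_table_ano ON summary_table (ano)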

--
Hope this helps.

Dan Guzman
SQL Server MVP

"AJ" <aj70000@.hotmail.com> wrote in message
news:6097f505.0408281317.3e394ab2@.posting.google.com...
> Thanks Dan,
>
> Here's the entire structure and query
> Table A has these columns
> a_id(pkey),ftq,break_offs,completes,assigned,s_id, end_date (Indexed)
> and few other columns
> ftq,break_offs, completes,assigned are the summarized cols. which are
> updated every hour from the summary_table.
> Summary table has these cols.
> a_id,ftq break_offs,completes,assigned.
> I am issuing this update statement
> UPDATE A SET
> break_offs = (select case when break_offs<0 then 0 else break_offs end
> break_offs from summary_table where A.ano = summary_table.ano),
> ftq=(select ftq from summary_table where A.ano=summary_table.ano),
> actually_assigned=(select member_assigned from summary_table where
> A.ano=summary_table.ano),
> qualified_completes=( select member_completes from summary_table where
> A.ano=summary_table.ano)
> WHERE EXISTS (select break_offs,ftq from summary_table where A.ano =
> summary_table.ano) and
> end_date>=getdate().
>
> ------------
> Apart from this we also have stored procs, trigger on the same
> table.But the thing is the jobs runs 15 minutes past the hour and
> stalls everything.
> Thanks again
> AJ

|||

Hi Dan/Erland

I have identified a problem : It is the Update trigger which has a
cursor which checks all the records before allowing the update.

Comments/Thoughts on this one :

The update job is a scheduled job; it runs as user x (I can specify
sa or DBO).
Is there a way in SQL server that I can fire the trigger on a table
for a particular user only (Application user).

Thanks in advance

AJ

|||

AJ (aj70000@.hotmail.com) writes:
> I have identified a problem : It is the Update trigger which has a
> cursor which checks all the records before allowing the update.
> Comments/Thoughts on this one :
> The update job is the scheduled job..it runs as user x (I can specify
> sa or DBO).
> Is there a way in SQL server that I can fire the trigger on a table
> for a particular user only (Application user).

No, but you can use ALTER TABLE ... DISABLE TRIGGER to disable the trigger.
But this is a little risky if the job dies for some reason in the middle and
leaves the trigger disabled.

A better technique is to define a temp table, call it #triggerdisable, just
before the update, and then in the trigger add:

if object_id('tempdb..#triggerdisable') IS NOT NULL
RETURN

Note: what columns the table has or what data there is does not matter. It's
the sheer existence that matters.

But of course the best solution would be to rewrite the trigger to use
set-based operations!
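For illustration, a hedged sketch of what a set-based rewrite might look
like, combined with the temp-table guard described above (the table and
validation rule are hypothetical):

CREATE TRIGGER trg_A_Validate ON A
FOR UPDATE
AS
SET NOCOUNT ON

-- Process-global kill switch: the scheduled job creates #triggerdisable
-- before its UPDATE, so the trigger body is skipped for that connection.
IF object_id('tempdb..#triggerdisable') IS NOT NULL
    RETURN

-- One set-based check over all affected rows instead of a cursor.
IF EXISTS (SELECT * FROM inserted WHERE ftq < 0)
BEGIN
    RAISERROR ('Negative ftq values are not allowed.', 16, 1)
    ROLLBACK TRAN
END
GO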

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp

|||

Thanks Erland,

But I am kinda lost... By defining #triggerdisable, what do you mean?

Thanks again

AJ

|||

AJ (aj70000@.hotmail.com) writes:
> But I am kinda lost...By defining #triggerdisable..What do you mean?

CREATE TABLE #triggerdisable(a int NOT NULL)

The temp table serves as a process-global flag to test for.

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp

|||

Thanks Erland/Dan for all your help.

AJ