Tuesday, March 27, 2012
Archiving data to a separate database.
Would appreciate some thoughts on the best way to handle this scenario.
This is not a large database in the number of tables, but has about 100,000
new rows per day. Data is inserted only, and no queries / SPs are being run
against the data.
Database 1 (transactional database)
Holds all current data, and all data up to 100 days old.
Database 2 (historic database)
Holds all data and is max 15 mins older than Database 1.
So, what would be the best way to keep the historic database current?
Replication, triggers, an agent job running SPs... something that keeps it
within a 15-minute sync of the transactional database, and deletes records
older than 100 days from the transactional database...
Would appreciate anyone's thoughts on this...
-Mark|||For 100,000 rows, in general, one could write a simple job & schedule it to
run every day. The job can have a script that copies the historical data to
the destination & purges it from the source.
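For illustration only, a minimal sketch of such a job step - the table and
database names here are hypothetical (a dbo.Readings table with an indexed
DateAdded column, and a historic database named HistoricDB on the same
instance with an identical table definition):

DECLARE @cutoff datetime
SET @cutoff = DATEADD(day, -100, GETDATE())

BEGIN TRANSACTION

-- Copy rows past the retention window into the historic database
-- (fixing @cutoff up front keeps the copy and the delete working
-- on exactly the same set of rows)
INSERT INTO HistoricDB.dbo.Readings
SELECT * FROM dbo.Readings
WHERE DateAdded < @cutoff

-- Purge the same rows from the transactional database
DELETE FROM dbo.Readings
WHERE DateAdded < @cutoff

COMMIT TRANSACTION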
Anith|||My choices...
Part 1: Removing from Active db.
Use a nightly job to delete all records over 100 days old. (You do have a Date
Added column? Is it indexed?)
Part 2: Moving new data to 'Historic' db
Trigger that copies the INSERTED table to the 'Historic' db.
For this application, a trigger would have the same or less
operational impact, and the maintenance would be significantly
less. This assumes, of course, that security and connection
issues allow the use of a trigger.
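A rough sketch of such a trigger, using the same hypothetical names as the
job sketch above (and assuming both databases live on the same instance):

CREATE TRIGGER trg_Readings_CopyToHistoric
ON dbo.Readings
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON
    -- Copy every newly inserted row straight across; assumes identical
    -- table definitions. Note: if HistoricDB is unavailable, the trigger
    -- fails and rolls back the original insert.
    INSERT INTO HistoricDB.dbo.Readings
    SELECT * FROM inserted
END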
--
Arnie Rowland, YACE*
"To be successful, your heart must accompany your knowledge."
*Yet Another certification Exam
|||Arnie,
My thoughts were with triggers too. The only issue I had was that if the
historic database went offline for some reason, the databases would get out
of sync. Haven't been able to get my head round that little problem yet.
-Mark
Sunday, March 11, 2012
Apply permissions to user, role or group?
Hi guys,
I'd appreciate your thoughts on this. I've not done too much DB admin; usually I do development work and the admins do the admin.
The database is behind an API of stored procedures to manipulate the data, and views to select the data.
The database needs to be accessed remotely by multiple clients.
How best to keep the database secure?
Create a new user and login on the database which is made known to all client applications. Then grant execute permission on the stored procs and grant select on the views?
There is probably a better way than one login for all? Should I be looking at roles and groups etc? If so, how best to set that up?
A few pointers would be gratefully received!
What are you trying to protect and from whom? Who should have access to what? What kinds of access do you want to allow? You should start by asking yourself such questions and once you gather the answers, you can start designing your application security to enforce these access restrictions.
There is no best database security model - if you don't have anything to protect, you won't need a security model at all. Best is relative to the needs of a specific application.
If you tell us what you are trying to obtain, we'll try to help you get it.
Thanks
Laurentiu
Thanks Laurentiu,
The database contains billing information, and the server will be (in some cases) visible on the Internet. Users manage the data using client application software. It is this software that uses the stored proc API.
I created a specific login known to the software and proceeded to grant execute permission to the API on this login.
The API is quite extensive, and while doing this I wondered if there was a better way, perhaps using roles or whatever, so that I can grant permissions to the API once and then allow different logins, including SSPI logins, to be members of that role, or group, or whatever it should be.
Looking for pointers and advice on the best direction to go on this.
|||
I still need more details about your users: do they have different roles, which would require different access restrictions to your data? If they all have the same level of access, do you need to distinguish between them? Would you want to know, for example, who did what operation, and have an auditing system?
There are many ways to achieve security, and it is hard to tell what is the best solution for you. Here are some possibilities, but I can't really recommend one without knowing more about what you are trying to do.
1) You can have all users connecting to your application with the same credentials, and the software will connect to the server as some login. This won't allow you to know who does what, it will only restrict access to those that know how to connect to the application.
2) You can have users connecting to your application with distinct credentials. You would manage these credentials within your application. The connections to the server would be done using the same login, but your application can implement custom auditing because it does the user authentication. Also, any access rights will have to be controlled at the application layer.
3) You can have users connecting again with distinct credentials, but in this case the credentials correspond to SQL Server logins, and for each user, you connect to SQL Server using the corresponding login. You can do auditing in this case either at the application level or at the server level, within the stored procedures that you call. You can manage rights granted to users at SQL Server level and you can use roles for easier management.
If you don't know exactly what you will want to do, it is a good idea to keep your options open. So, use roles and grant permissions on roles; then, if you need to have those permissions accessible to more than one user, you can just add them to the role.
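For example - purely a sketch, with hypothetical role and login names, and assuming SQL Server 2005 or later for the schema-level grants:

-- One role carries every permission the API needs
CREATE ROLE api_user

-- Grant the API surface to the role once
GRANT EXECUTE ON SCHEMA::dbo TO api_user   -- the stored proc API
GRANT SELECT ON SCHEMA::dbo TO api_user    -- the views
-- (note: a schema-level SELECT also covers tables in that schema;
-- grant SELECT per view instead if the base tables must stay hidden)

-- Map existing server-level logins into the database...
CREATE USER app_svc FOR LOGIN app_svc
CREATE USER [DOMAIN\jsmith] FOR LOGIN [DOMAIN\jsmith]   -- an SSPI login

-- ...so that role membership is the only per-user step
EXEC sp_addrolemember 'api_user', 'app_svc'
EXEC sp_addrolemember 'api_user', 'DOMAIN\jsmith'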
Hope this helps.
Laurentiu
|||Laurentiu,
Been playing around with it and option 3 seems to suit us best. Keeps it flexible; for example, it allows us to split the API into different roles.
Thanks for your ideas!
Saturday, February 25, 2012
Application Performance
I'm not sure if this is the best forum to post this in (if not, if somebody
can suggest a good forum to post this question, I would appreciate it!)
I have a client with an application setup as follows:
(1) Web Server running Windows 2003 Standard Edition Server & IIS.
(1) Web Server also running Windows 2003 Standard Edition Server &
Microsoft SQL Server 2000
The database is large - the MDF file is approximately 2.5 Gig. There are
well over 100 tables and many tables have hundreds of thousands of records.
On the web server, we have two applications running that communicate with
the database. One is written in ASP - this is used by their customers and
has a good amount of activity during business hours. The other is written
in ASP.NET/VB.NET 1.1 and is used by their internal staff to manage their
business. This side is used heavily, especially during business hours.
When only a few people are on the system, it flies, i.e. first thing in the
morning before everybody has gotten into the office and started using the
system. It's very fast. But as time progresses, and more and more people
log on, the system (both the ASP system and the ASP.NET system) gets slower
and slower, to the point of timing out regularly. If we do an IISReset,
reboot the server, or do anything that essentially kills all the active
sessions, access to the system speeds up again - for a time - until
everybody has logged back on, when it starts dogging again.
I'm looking for any suggestions on how to go about optimizing the system.
At first we thought it was bandwidth to the system, but everything else on
the network seems to not have a problem - only this system. Should we
investigate a cluster with load balancing on the SQL Server side? Or maybe
it's our application itself. Maybe we are not handling those large record
sets in the most efficient manner? Maybe we need to change some settings
in the SQL Server database itself to make access to the database more
efficient. I'm not really sure where to start with this issue.
If anybody has any advice on where to start looking at this problem, or
what forum I should post this on, I would greatly appreciate it.
Thank you very much for your help!
Sincerely,
Matt Schwartz|||Matt,
First off, clustering does not give you load balancing; it is a hardware
failover solution only. I am a little confused as to what configuration you
actually have. Is it two different servers and only one has SQL Server or
are they both the same? In any case it is usually not a good idea to run
both the SQL Server and the web service on the same server as they compete
for resources. This is especially true of memory. How much memory does the
machine with SQL Server have, and how have you configured the memory? If you
have not set the MAX memory setting to give the ASP stuff enough memory to
use, they will constantly be fighting with each other for the memory (see the
sketch after the links below). Here
are some links that should get you started to determine where the issues
are.
http://www.sql-server-performance.c...nce_audit10.asp Performance Audit
http://www.microsoft.com/technet/pr...perfmonitor.asp Perfmon counters
http://www.sql-server-performance.c...mance_audit.asp Hardware Performance CheckList
http://www.sql-server-performance.c...rmance_tips.asp SQL 2000 Performance tuning tips
http://www.support.microsoft.com/?id=224587 Troubleshooting App Performance
http://msdn.microsoft.com/library/d...on_24u1.asp Disk Monitoring
http://sqldev.net/misc/WaitTypes.htm Wait Types
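The memory cap mentioned above is set with sp_configure - something like
this, where the 2048 MB figure is just a placeholder to size for your own
machine:

EXEC sp_configure 'show advanced options', 1
RECONFIGURE

-- Leave headroom for the OS and anything else on the box
EXEC sp_configure 'max server memory (MB)', 2048
RECONFIGURE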
Andrew J. Kelly SQL MVP
|||Thank you very much for your assistance! We have two servers - one is
running IIS, the other running SQL Server. As far as memory, there is 4
Gig on the SQL Server, but it's set up as the default installation.
I'll take a look at the articles you recommended. Thank you very much!
Sincerely,
Matt Schwartz
|||OK, so it is not really a web server then, just a server with SQL Server on
it. If you are running Std Edition of SQL Server 2000, you can only use 2GB
anyway, and the default config should work just fine. It is hard to say what
the issues can be since there is so little to go on. But those articles
should help to narrow it down some. Feel free to post any follow-on
questions as a result of it.
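As a quick first check while the system is actually slow, SQL Server 2000's
sysprocesses table shows what each connection is waiting on and who is
blocking whom - just a starting point, not a substitute for the audits above:

SELECT spid, status, blocked, lastwaittype, waittime, cpu, physical_io
FROM master.dbo.sysprocesses
WHERE spid > 50   -- user connections only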
Andrew J. Kelly SQL MVP