Thursday, March 22, 2012
Architecture advice
I am a newbie to SQL Server, but a long-time Access and FrontPage developer.
With these two apps, I have a "publish to the web" mentality, so I have
envisioned having my development SQL Server database on my local hard disk
and periodically "publishing" it to my production SQL Server out on the
Internet. (By development server, I mean the server where data creation and
update are accomplished.)
My idea: update the production SQL Server periodically (weekly) from the
development server, but capture data received from users on the production
server, then edit and merge it on the development server for later
republishing to the production server. Alternatively, all data updates could
be done on the production server, with backups to the local hard-disk server.
Problem: my Internet SQL Server provider does NOT allow replication or the
creation or implementation of DTS packages, but it does support the DTS
import/export functions up to the point of connecting to and selecting my
database.
Which would be the best approach to implement given the ISP's restrictions?
Or can you suggest another approach?
Many thanks for your expertise in pointing me in the right direction.
Charax|||Do all your production changes on your production server,
and don't mix your Dev server with your prod server. If
you really need to do things first on your Dev server and
then publish 'it' to the prod server, you should then
treat your dev server as a prod server unless of course
by 'publishing' you meant scheduled releases in a
controlled fashion.
Linchi
|||Thanks for the quick response, Linchi. Yes, you are quite right and my
terminology was wrong. (Remember, I'm a SQL Server newbie!)
I will now call the server on my local hard disk the 'production' server. The
production server has the most current data, and any changes to data or
database structure are made there. The ISP's SQL Server on the Internet is
simply a copy of the production server that web users can access -- let's
call it the 'slave'. Is this a reasonable architecture?
If so, how do you recommend that I update the slave server on the Internet
from the production server on my local hard disk? Please keep in mind that
the slave server does not support replication, and only supports the DTS
import/export functions up to the point of connecting to and selecting my
database. On my local production server, I have all replication and DTS
features.
Many thanks for your ideas and help to a new guy.
Charax
"Linchi Shea" <linchi_shea@.NOSPAMml.com> wrote in message
news:339801c3e1c8$08021990$a601280a@.phx.gbl...
> Do all your production changes on your production server,
> and don't mix your Dev server with your prod server. If
> you really need to do things first on your Dev server and
> then publish 'it' to the prod server, you should then
> treat your dev server as a prod server unless of course
> by 'publishing' you meant scheduled releases in a
> controlled fashion.
> Linchi
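One possible workaround, not raised in the thread, is to push data from the
local production server to the hosted copy over a linked server, since that
only needs an ordinary outbound connection and none of the replication or DTS
features the ISP blocks. The statements below are a minimal sketch under that
assumption; the linked server name (WEBHOST), the host name, the remote
login, and the Clients table are all hypothetical placeholders.

EXEC sp_addlinkedserver
    @server     = 'WEBHOST',
    @srvproduct = '',
    @provider   = 'SQLOLEDB',
    @datasrc    = 'sql.example-isp.com'    -- hypothetical ISP host name
GO
EXEC sp_addlinkedsrvlogin
    @rmtsrvname  = 'WEBHOST',
    @useself     = 'false',
    @rmtuser     = 'web_user',             -- hypothetical remote login
    @rmtpassword = 'secret'
GO
-- Weekly refresh: overwrite the hosted copy of one table from the local
-- production copy; repeat per table or wrap the lot in a stored procedure.
DELETE FROM WEBHOST.MyDatabase.dbo.Clients
INSERT INTO WEBHOST.MyDatabase.dbo.Clients
SELECT * FROM MyDatabase.dbo.Clients
GO

Data captured from web users on the hosted copy can be pulled back the same
way (a SELECT from the four-part name into a local staging table) before the
edit-and-merge step.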
Tuesday, March 20, 2012
Applying SP4 advice on a server with SQL Server and Analysis Services
Can anyone tell me if SP4 for Analysis Services and SQL Server 2000 should be
applied together, one before the other, or not at all?
|||Hi Burt
It is usually better to keep everything in step, and I would apply the
engine first.
John
"burt_king" wrote:
> Can anyone tell me if sp4 for analysis services and SQL server 2000 should be
> applied together, one before the other, not at all?
> --
>
|||You can apply them in any order. With SP1 you would need to apply them
together IIRC.
Hilary Cotter
Director of Text Mining and Database Strategy
RelevantNOISE.Com - Dedicated to mining blogs for business intelligence.
This posting is my own and doesn't necessarily represent RelevantNoise's
positions, strategies or opinions.
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
"burt_king" <burt_king@.yahoo.com> wrote in message
news:96F2A6C3-00AA-4BC3-A49D-0294B9B8CC60@.microsoft.com...
> Can anyone tell me if sp4 for analysis services and SQL server 2000 should
> be
> applied together, one before the other, not at all?
> --
>
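Whichever order the service packs are applied in, it is worth confirming
afterwards what level the database engine actually reports. A quick check (a
sketch; on SQL Server 2000 SP4 the expected values are build 8.00.2039 and
ProductLevel 'SP4'):

SELECT
    SERVERPROPERTY('ProductVersion') AS ProductVersion,
    SERVERPROPERTY('ProductLevel')   AS ProductLevel,
    SERVERPROPERTY('Edition')        AS Edition

Analysis Services does not expose its build through SERVERPROPERTY; its
service pack level is usually checked from the Analysis Manager About box or
the file version of msmdsrv.exe.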
Saturday, February 25, 2012
Application Performance advice please?
Hello all,
I've been recruited to assist in diagnosing and fixing a performance problem
on an application we have running on SQL Server 7.
The application itself is third-party software, so we can't get at the
source code. It's a client management system, where consultants all over the
country track their client meetings, results, action plans, etc., and it has
apparently been problematic for a long time now. I came into this
investigation in mid-stream, but here's the situation as I understand it:
We have users reporting it's slow, with no discernible pattern with respect
to what part of the application they're using or any particular time of day.
I am told that it doesn't appear to be a bandwidth or computer resource
problem. They apparently added two app servers a year or so ago, which
temporarily improved the performance. We're using a nominal percentage of
CPU and memory.
There are three large tables (approx. 8 million rows) that are queried often,
as users click to see their calendar of appointments or review past meetings
with a client, etc. The activity on these tables is over 90% reads (SELECTs)
with about 10% INSERTs/UPDATEs. We have attempted to run the Index Tuning
Wizard twice, but so far it just seems to hang (it could be that the workload
file is too big?). So, what we're doing now is isolating the SELECT
statements that take a long time to run and manually comparing them to the
indexes that exist on these large tables. Since we can't alter the SQL source
code, we're trying to alter the indexes to improve performance.
What I would like to know is: is there a good way to get benchmark
measurements so we can explicitly measure any performance changes? Also, do
you think we're going about this the right way, or is there some other avenue
we could be looking at to improve performance?
I recognize that performance questions are tricky to post/answer in a
newsgroup, because usually you need more information than is provided. The
problem is that this is a high-profile investigation (they're hauling us into
meetings every two days to report our progress) and I need to be able to
convincingly state that we have either improved performance by X%, or that it
is the application itself that's the problem and we're stuck with it.
Any thoughts would be deeply appreciated.
Thanks and best regards,
Steve"Steve_CA" <steveee_ca@.yahoo.com> wrote in message
news:35vad.13104$3C6.446571@.news20.bellglobal.com. ..
Your first stop should probably be Profiler, where you can gather lots of
information about the TSQL being executed on the server, along with
durations, I/O cost, query plans etc. And of course Perfmon for checking if
MSSQL is hitting I/O or CPU limits - if you've just joined the
investigation, you should probably satisfy yourself about that, especially
if you're now the person more or less responsible for resolving the
situation.
If it's a third-party app, then it's going to be awkward to find a
resolution, as you say. Indexes are probably the only thing you can change,
and even then you might find you've invalidated your support agreement by
doing so. But certainly, gathering information and establishing where the
bottleneck is (if there is one) should be the first step.
Simon
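For the benchmarking question specifically, one low-tech complement to a
Profiler trace (a sketch, not something suggested in the thread) is to
capture baseline numbers for each problem SELECT in Query Analyzer with the
session statistics switches, then rerun the identical statement after every
index change. The table and filter below are hypothetical stand-ins for one
of the captured slow statements.

SET STATISTICS IO ON    -- per-table logical/physical reads
SET STATISTICS TIME ON  -- compile and execution CPU / elapsed time
GO
SELECT *
FROM dbo.Appointments                 -- hypothetical table
WHERE ConsultantID = 42               -- hypothetical filter
  AND ApptDate >= '20041001'
GO
SET STATISTICS IO OFF
SET STATISTICS TIME OFF
GO
-- To compare query plans without executing anything, wrap the same
-- statement in SHOWPLAN instead:
SET SHOWPLAN_TEXT ON
GO
SELECT *
FROM dbo.Appointments
WHERE ConsultantID = 42
  AND ApptDate >= '20041001'
GO
SET SHOWPLAN_TEXT OFF
GO

Logical reads are usually the most stable figure to report across runs,
since elapsed time fluctuates with caching and server load.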
|||On Mon, 11 Oct 2004 08:34:12 -0400, Steve_CA wrote:
>We have users reporting it's slow, with no discernable pattern with respect
>to what part of the application they're using or now particular time of day.
Hi Steve,
In addition to Simon's suggestion, I'd look into possible locking problems
as well. Just add locking events to the Profiler trace already suggested
by Simon.
Best, Hugo
--
(Remove _NO_ and _SPAM_ to get my e-mail address)|||Can you capture some of the queries that run slowly? I imagine that a
number of them will consistently perform slowly. If so, try running
these through QA and look at the execution plan. It should indicate
where the performance hit is. You may find that this is table scans
etc...
Also, you should run the index analysis against these 'common' queries
to see if it comes up with any suggestions.
If it's locking/blocking then
http://www.sommarskog.se/sqlutil/aba_lockinfo.html will likely be of
some help
Finally, a bit of a long shot as it sounds similar to a problem we
had.
Are any views used? Can these be improved by swapping to tables
populated by stored procedures? You can keep the naming the same but
swap them over. To give you an example, we had 4 views which were
referenced by a final view (all were complex). Our app needed to look
at the data, but I didn't want to change the code. I swapped the view
to a table which was generated by a copy of the original view from an
SP. This saved a lot of time (1 min from startup down to 5 seconds)
with the SP being run every 10 mins on the server. This, combined with
better indexing, is saving them an hour a day. OK, it's a
quick overview, but I'd see if you can get away with any little tricks
like this (especially if in certain circumstances you only need to
read the data). Might be worth a try.
Ryan
Hugo Kornelis <hugo@.pe_NO_rFact.in_SPAM_fo> wrote in message news:<gh1mm0llnfvomce0s6u684ml81q365ursh@.4ax.com>...
> On Mon, 11 Oct 2004 08:34:12 -0400, Steve_CA wrote:
> >We have users reporting it's slow, with no discernable pattern with respect
> >to what part of the application they're using or now particular time of day.
> Hi Steve,
> In addition to Simon's suggestion, I'd look into possible locking problems
> as well. Just add locking events to the Profiler trace already suggested
> by Simon.
> Best, Hugo
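If blocking does turn out to be part of the problem, a quick first check (a
sketch; this is not the aba_lockinfo script linked above) is to look for
sessions whose blocked column is set in sysprocesses:

-- List sessions that are currently blocked and the spid blocking them.
-- Works on SQL Server 7.0 and 2000.
SELECT spid, blocked AS blocking_spid, waittime, lastwaittype,
       cmd, hostname, program_name
FROM master.dbo.sysprocesses
WHERE blocked <> 0

Running DBCC INPUTBUFFER against the blocking spid then shows the last
statement that session submitted.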
|||Thank you all,
Ryan, your suggestion regarding views is very relevant... one of the main
culprits is a commonly used view
and I thought we were dead in the water with version 7 (I know you can
create indexes on views in 2000).
"Ryan" <ryanofford@.hotmail.com> wrote in message
news:7802b79d.0410120046.4ea7dcd@.posting.google.com...
|||Steve,
Got your e-mail (replied directly as well). Essentially, that's what I
meant. Posting in NG in case it helps someone else later.
The scenario that we had with this was that we had a view which gathered
the data from 4 other views. Each of the 4 underlying views was a
little complex and took a while to run. Combined in the top level
view, the performance was slow. The application ran a simple summary
query of the top level view when opening. This took about 50 seconds to
run which was unacceptable to the users.
Imagine (obviously change the names to something more meaningful)
myView1, myView2, myView3, myView4 all feed into myTopView. The
application looks to query the object myTopView.
I renamed the view from myTopView to myTopViewFull.
I created a table called myTopViewTable (with the same structure as the data
returned from the original view myTopView).
Then I created a view called myTopView (same name as my original top
level view) which pointed to the table myTopViewTable instead of the 4
lower level views. Simple select * will do it.
All I had to do was a very simple SP which truncated the table
myTopViewTable and inserted the data from myTopViewFull. This still took
a while to run, but it only takes the hit on the server, and the user
perception changes: they think it's running quicker. Instead of a
bunch of users all doing the same thing at a 50-second cost to each of
them, we only have a 50-second cost on the server every 10 minutes.
For our needs, it doesn't matter for this data if we update it every 10
minutes, but the difference to the users was quite noticeable. The time
to open the application dropped from 50 seconds plus to consistently
lower than 5 seconds. I did add some indexes to the main tables
(referenced by the view) to try and speed things up and it helped a
little. I found more performance improvements in the application though
as a result.
The main benefit of this is shifting the perceived work from the users to
the server. You can also try the NOLOCK table hint (WITH (NOLOCK)) on the
select statements to reduce locking / blocking issues, as you are taking
the data into a table directly.
Also, as SQL tables / views cannot have the same name, if you have a
view that runs far too slowly, you can sometimes get a bit of a
performance advantage by doing this. It's crude, but it works provided
you apply it correctly. No changes to the application should be needed.
A simple test: drop the data from the view into a table, try a query
that you know takes a while against the view and against the table, and see
if there is a performance improvement. Oh, and you could try indexes on
the new table, provided you don't take a hit re-building them.
Indexed views are possibly another option though.
Hope that helps
Ryan
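Ryan's renaming trick boils down to a few short DDL steps. Below is a
minimal sketch using the hypothetical object names from his description
(myTopView, myTopViewFull, myTopViewTable); the column list and the refresh
schedule are placeholders that would have to match the real view.

-- 1. Keep the original (slow) view under a new name.
EXEC sp_rename 'dbo.myTopView', 'myTopViewFull'
GO
-- 2. Create a table with the same shape as the view's result set
--    (columns here are hypothetical and must match the real view).
CREATE TABLE dbo.myTopViewTable (
    ClientID   int          NOT NULL,
    Consultant varchar(50)  NOT NULL,
    ApptDate   datetime     NOT NULL,
    Summary    varchar(255) NULL
)
GO
-- 3. Recreate the original name as a thin view over the table, so the
--    application keeps querying myTopView unchanged.
CREATE VIEW dbo.myTopView
AS
SELECT ClientID, Consultant, ApptDate, Summary
FROM dbo.myTopViewTable
GO
-- 4. Refresh procedure: take the expensive hit once on the server.
--    Schedule it (e.g. every 10 minutes) as a SQL Server Agent job.
CREATE PROCEDURE dbo.usp_RefreshMyTopView
AS
BEGIN
    TRUNCATE TABLE dbo.myTopViewTable
    INSERT INTO dbo.myTopViewTable (ClientID, Consultant, ApptDate, Summary)
    SELECT ClientID, Consultant, ApptDate, Summary
    FROM dbo.myTopViewFull
END
GO

One side effect worth noting, which the thread does not mention: during the
TRUNCATE/INSERT window readers can briefly see an empty or partially loaded
table, which may or may not matter for summary data refreshed every few
minutes.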
Sunday, February 12, 2012
Any way to recover a deleted stored proc?
I screwed up big time: I deleted a very long and smart stored proc (pls
don't ask how).
Is there any way I can recover it?
Any advice appreciated.
Tada.|||KoliPoki (rayone@.gmail.com) writes:
> I screwed up big time, I deleted a very long and smart stored proc (pls
> don't ask how).
> Is there anyway I can recover it?
Do you have a backup of the database? Or do you run the database with
full or bulk-logged recovery? In that case you might be able to.
If you don't have any backup and run with simple recovery, the procedure
has left for outer space.
Generally, all source code should be under version control. See the
database as the place where you have the binary representation of
the source.
--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp
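If a recent full backup does exist, the usual route (a sketch, not part of
Erland's reply; the database, file, and procedure names are hypothetical) is
to restore the backup side by side under a different name and script the
procedure text back out:

-- Restore the backup as a separate database so the live one is untouched.
-- Logical file names and paths are hypothetical placeholders.
RESTORE DATABASE MyDb_Recover
FROM DISK = 'D:\Backup\MyDb.bak'
WITH MOVE 'MyDb_Data' TO 'D:\Data\MyDb_Recover.mdf',
     MOVE 'MyDb_Log'  TO 'D:\Data\MyDb_Recover.ldf'
GO
-- Pull the text of the lost procedure out of the restored copy, then
-- paste it into the live database and re-create it with CREATE PROCEDURE.
USE MyDb_Recover
GO
EXEC sp_helptext 'dbo.usp_TheLostProc'
GO

With full or bulk-logged recovery, the transaction log backups taken since
the full backup can also be restored with RESTORE LOG ... WITH STOPAT to
bring the copy to a point in time just before the procedure was dropped.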