
Tuesday, March 27, 2012

Archiving very slow, help needed please

Hey guys,
The vendor gave us an archiving tool for our huge tables. We gave them a
production copy of our tables. They tested their tool against their own
server and told us that it takes 2 seconds to insert each record. I know
this is already bad. But today, I tested it against our own test server.
They also gave us instructions on what to do before running the
archiving tool. Well anyway, after running the archiving tool, it took 20
seconds to insert 1 record. That's totally bad!
I would like to know if you guys will be able to help me identify the
problem by just looking at these links.
http://restricted.dyndns.org/executionplan.txt
http://restricted.dyndns.org/execplan1.gif
http://restricted.dyndns.org/execplan2.gif
http://restricted.dyndns.org/execplan3.gif
The INSERT statement that you will see there consumed 10 seconds of CPU and
30 seconds of Duration.
Are there any other statements that I can execute against my captured
Profiler table that can help us troubleshoot?
Any help will be greatly appreciated.
Thanks.
V1rt
>> told us that it takes 2 seconds to insert each record
well....
Go back to the vendor (I assume you haven't paid them yet) and tell them
this isn't acceptable.
How many records do you have to deal with - even at 2 secs per record?
"Neil" wrote:
> <snip>
|||It looks like you are filtering the source table, Enclosure. Nothing wrong
with that. However, you are then joining this to the destination table.
Why?
An archive table usually doesn't contain the data yet. Are you
joining to make sure that you don't attempt to archive data that's already
been copied?
If so, there are better ways to write this. If you are INSERTing into a
table, rather than updating it, there is rarely a need to join the destination
to the source. The join will only slow you down, because it forces every
INSERT to read the destination table as well.
Try something more like this:
BEGIN TRANSACTION

INSERT INTO TableDestination
    (Column1, Column2, ..., ColumnN)
SELECT Column1, Column2, ..., ColumnN
FROM TableSource
WHERE TableSource.ColumnX = ExpressionX

IF @@ERROR <> 0 BEGIN
    ROLLBACK TRANSACTION
    RETURN
END

DELETE FROM s
FROM TableDestination AS d
INNER JOIN TableSource AS s
    ON d.Key1 = s.Key1
    AND d.Key2 = s.Key2
    AND ...
    AND d.KeyN = s.KeyN

IF @@ERROR <> 0 BEGIN
    ROLLBACK TRANSACTION
    RETURN
END

COMMIT TRANSACTION
Sincerely,
Anthony Thomas
"Nigel Rivett" <sqlnr@.hotmail.com> wrote in message
news:B085D060-8557-4C9E-B632-0C320BE1CE98@.microsoft.com...
> <snip>
|||Hi Anthony,
Thanks for the awesome reply. Am I correct that the destination table
being joined is TABLEFIELDAUDIT? That's what I saw in the INSERT
statement.
Below is what I captured using Profiler. I saw tons of these running for 20+
seconds each. :(
INSERT INTO TABLEFIELDAUDIT
    (TABLENAME, FIELDNAME, FIELDVALUE, CHANGEDATE, KEYVALUE, USERID, SUBKEY1, SUBKEY2)
SELECT TABLENAME, FIELDNAME, FIELDVALUE, CHANGEDATE, E.RECORDID AS KEYVALUE,
    USERID, SUBKEY1, SUBKEY2
FROM TABLEFIELDAUDIT
INNER JOIN ENCLOSURE E
    ON TABLEFIELDAUDIT.SUBKEY1 = E.BARCODE
    AND TABLEFIELDAUDIT.SUBKEY2 = E.ENCLOSURENUMBER
WHERE TABLENAME = 'ENCLOSURE' AND SUBKEY1 = '00010690'
Thanks again,
Neil
"AnthonyThomas" <Anthony.Thomas@.CommerceBank.com> wrote in message
news:OLmPhvtyEHA.2568@.TK2MSFTNGP11.phx.gbl...
> <snip>
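On Neil's closing question, about other statements to run against the captured Profiler table: below is a minimal sketch of the kind of aggregate that surfaces the worst offenders. It assumes the trace was saved to a table (CapturedTrace is a placeholder name) with the standard TextData, CPU, Duration and Reads columns; TextData is ntext in a saved trace, so it has to be CAST before it can be grouped.

-- top statements by total duration from a saved Profiler trace
-- (CapturedTrace is a placeholder for whatever the trace table is named)
SELECT TOP 20
    CAST(TextData AS nvarchar(4000)) AS QueryText,
    COUNT(*) AS Executions,
    SUM(Duration) AS TotalDuration,
    SUM(CPU) AS TotalCPU,
    SUM(Reads) AS TotalReads
FROM CapturedTrace
GROUP BY CAST(TextData AS nvarchar(4000))
ORDER BY TotalDuration DESC

And since the slow INSERT joins TABLEFIELDAUDIT to ENCLOSURE on SUBKEY1/BARCODE and SUBKEY2/ENCLOSURENUMBER, it is worth checking that those columns are indexed. A hypothetical sketch only (the index names are invented, and whether these help depends on the actual plans in the linked files):

CREATE INDEX IX_TABLEFIELDAUDIT_Keys ON TABLEFIELDAUDIT (TABLENAME, SUBKEY1, SUBKEY2)
CREATE INDEX IX_ENCLOSURE_Keys ON ENCLOSURE (BARCODE, ENCLOSURENUMBER)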

Thursday, March 22, 2012

arabic characters are not saved (was "A serious problem ... plz. help")

Dear all ..
i have a serious problem & all your comments will be appreciated..

i have bought an ASP.NET publishing tool, which came with an sql script to execute on either MS SQL Server or MSDE.

i executed it on MSDE as i don't have MS SQL Server on my Windows dedicated server.

I wanted the tool for publishing Arabic-language content for a high-traffic soccer website..

After executing the sql script i tested the tool, but i found arabic characters are not saved when i add articles .. they were saved as question marks (??).

so i re-executed the sql script on a new db after modifying every piece of code containing (varchar) to (nvarchar), to support Unicode & thus arabic.

it worked & i succeeded in saving arabic articles

BUT ...

i found that only short arabic articles are saved fine, while any article that reaches around one Microsoft Word page is not saved well: its arabic characters are saved as question marks ( ? ) .. !!

=======

so i checked the db tables using ASP.NET Enterprise Manager & i found that

the article field has type (ntext) & in front of it the number 16 ..

it seems that the (ntext) has a limit to what it can save ..

so i believe there's a way, which i don't know, to make the (ntext) accept long article entries .

========

Here's the code in the original sql script i received with the tool & i hope you can guide me in detail through any modification to make, so that the ntext limit is raised and any long arabic article can be saved.

========

code:

CREATE TABLE [dbo].[xlaANMarticles] (
[articleid] [int] IDENTITY (1, 1) NOT NULL ,
[posted] [nvarchar] (50) NOT NULL ,
[lastupdate] [nvarchar] (50) NOT NULL ,
[headline] [nvarchar] (350) NOT NULL ,
[headlinedate] [nvarchar] (255) NOT NULL ,
[startdate] [nvarchar] (50) NOT NULL ,
[enddate] [nvarchar] (50) NOT NULL ,
[source] [nvarchar] (255) NOT NULL ,
[summary] [nvarchar] (3000) NOT NULL ,
[articleurl] [nvarchar] (1000) NOT NULL ,
[article] [ntext] NOT NULL ,
[status] [tinyint] NOT NULL ,
[autoformat] [nvarchar] (50) NOT NULL ,
[publisherid] [int] NOT NULL ,
[clicks] [int] NOT NULL ,
[editor] [int] NOT NULL ,
[relatedid] [nvarchar] (2000) NOT NULL ,
[isfeatured] [nvarchar] (10) NULL ,
[keywords] [nvarchar] (255) NULL ,
[description] [nvarchar] (255) NULL
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]

the specific column for the article field is [article], of type [ntext] ..

i tried making it (nvarchar) but the sql manager i use said it can't be done, because there has to be a TEXTIMAGE field since TEXTIMAGE_ON is set in the code.

waiting for your help plz. ... i am desperate .. :confused:|||Masry -

The maximum length of NTEXT is 1,073,741,823 characters. I can't imagine one page of an MS Word document coming anywhere near that limit. In addition, BOL says to prefix Unicode character strings with N. You might want to try with N. Not sure what the 16 means.|||The problem is that saving Arabic characters (like any Unicode, 16-bit characters) requires a full 16-bit data path from beginning to end. If any part of the path reverts back to 8-bit characters, then any character that isn't supported by the 16-bit-to-8-bit translation will appear as a question mark.

Apparently something in the data path that handles the translation from Word documents larger than a given (roughly one page) threshold to a database column causes the data to pass through as 8 bit characters. The problem lies in isolating whatever that weak point is!

-PatP|||Masry,
BLOB values are not saved in-row, but elsewhere in the data file, unless you use sp_tableoption. The 16 in front of the ntext column is only the 16-byte pointer to where the actual ntext value is located.
But about your problem, I suggest checking the path your data traverses on its way into the DB. There might be some variables or conversions along the way that cause the Unicode information to be lost!
Regards,
Leila
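To illustrate the N-prefix point above, here is a minimal sketch (the scratch table and values are hypothetical, not from the tool's script). Without the N prefix, the literal is first squeezed through the server's 8-bit code page, which is exactly how Arabic text turns into question marks:

-- hypothetical scratch table for the demonstration
CREATE TABLE #demo (txt nvarchar(100) NOT NULL)
-- no N prefix: the literal is converted via the default code page first,
-- so on a non-Arabic server the characters arrive as '?????'
INSERT INTO #demo (txt) VALUES ('مرحبا')
-- N prefix: the literal stays Unicode end to end and is saved intact
INSERT INTO #demo (txt) VALUES (N'مرحبا')
SELECT txt FROM #demo
DROP TABLE #demo

The same rule applies to every hop in the path PatP describes: parameters and variables need nvarchar/ntext types rather than varchar/text, or the conversion to '?' simply happens there instead.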

Sunday, February 19, 2012

Appending a custom sql where clause

Hello,
I am in the process of evaluating SSRS 2005 to replace a home-grown
reporting tool. In my reporting application all users have access to all
the tables\fields in the database. In the home-grown tool, data
security is implemented by the following mechanism: when users run
reports, a standard sql where clause is appended to the sql generated
by the reporting tool. This standard where clause has @userID as the
parameter.
Now, is there a way in SSRS 2005 to append a standard where clause to
every report just before it is run? Does it have an event model I can
hook into?
Thanks
_Gigi JK|||In most cases query parameters are mapped to report parameters, but they do
not have to be, they can be mapped to expressions. When mapping a query
parameter to an expression you can map the query parameter to a global
variable. One of the global variables available is User!UserID. This
variable has the user (and their domain). If you don't want the domain then
you would strip the domain off.
Bruce Loehle-Conger
MVP SQL Server Reporting Services
<gigijk@gmail.com> wrote in message
news:1137098143.325204.240880@g47g2000cwa.googlegroups.com...
> <snip>
|||Bruce,
Thanks for the reply. Sorry, I sent the same question to you directly from my
gmail address as well.
In my case, the reporting application will be integrated into another
application which does not use NT auth. Is there a way I could pass the userID
to the reporting tool?
On the original issue, what I want to do is automatically append a where
clause (e.g. 'AND sysem_id IN (SELECT System_id FROM Mdu_system WHERE
user_id=@userID)') to every query generated by the reporting tool without
user intervention. Would this be possible?
Once again, thank you.
_GJK
"Bruce L-C [MVP]" wrote:
> <snip>
>|||You can have a hidden parameter (i.e. it does not prompt for it but you can
include it when running the report).
Bruce Loehle-Conger
MVP SQL Server Reporting Services
"GJK" <GJK@.discussions.microsoft.com> wrote in message
news:D37B86F8-AA55-4FF1-BC01-C426D806AF02@.microsoft.com...
> <snip>
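Putting Bruce's two suggestions together: a minimal sketch of a per-user filtered dataset, with the query parameter mapped to the User!UserID global rather than to a prompting report parameter. SomeTable is a placeholder, and the subquery reuses GJK's own example text:

-- dataset query with the standard per-user WHERE clause appended
SELECT t.*
FROM SomeTable t
WHERE t.sysem_id IN (SELECT System_id FROM Mdu_system WHERE user_id = @userID)

In the dataset's Parameters tab, map @userID to the expression =User!UserID (stripping the domain in the expression if it isn't wanted). When the calling application does not use Windows authentication, map it instead to a hidden report parameter that the application supplies when requesting the report.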

Thursday, February 9, 2012

Anyone written an end user guide to using the Reportbuilder?

For those using the ReportBuilder, has anyone put together any sort of end-user guides to getting started with the tool? I'm not referring to articles and docs written from the perspective of introducing the tool and feature set to developers and DBAs but rather something we could all then give our end users when they first start using the tool to build and run reports.

I realize there's lots (and lots) of online help within the tool, but we all know that some users never read that. Again, I'm thinking of something ranging from a 1-2 page quick intro to the use and features, to perhaps even a several-page guide.

Even if someone's written one for their company and it's branded as such, if you're open to sharing it, I don't mind rebranding and modifying it (and I'm happy to then share the result with others).

Indeed, is there perhaps something from MS that I've not thought of?

Raising this again to see if there are any takers. :-)|||I am also looking for the same thing - let me know if you have any luck.


Anyone used ISql tool

Hi
Has anyone used the iSql tool that comes with the Windows installation? I'm
trying to run an SQL script with it using the command
iSql -U user -P password -S server -d Database -i <path to sql script file>
What happens is that it does start executing the script, but it takes a very
long time when the script contains around 100-200 statements. When I execute
the same script using MS SQL Server Management Studio, it executes it all
within 2-4 seconds. Is there any way to speed up the process using the iSql
tool, like an argument I'm missing or something? Or is there any alternative
method? I actually have to do this from my VC++ code, so I'm looking for any
alternative method possible.
Regards
Usman|||Usman,
I don't know why isql is that slow, but you shouldn't use it anyway.
isql is only included for backward compatibility. Better to use
sqlcmd.exe.
M
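For reference, sqlcmd accepts the same core switches as the isql invocation above, so the call is a near drop-in replacement (same placeholder values as the original command):

sqlcmd -U user -P password -S server -d Database -i <path to sql script file>

One plausible contributor to the slowdown, offered as a guess to verify rather than a diagnosis: isql is the old DB-Library client and echoes a "rows affected" message for every statement, so putting SET NOCOUNT ON at the top of the script may help regardless of which tool runs it.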


Anyone use BLAT on SQL2000/W2K3

Has anyone used the Blat application to send emails on SQL Server 2000
running on Windows 2003?
I have used this tool on Windows 2000 successfully, but am encountering
problems on Windows 2003. If anyone can confirm this tool does work OK, I'd be
very grateful...
Thanks in advance,
insure you have an up to date version of Blat.
You could try scheduling a call to Blat in the task scheduler and check
the error code it returns to the task scheduler log.
you might also try using the -server and -f options instead of the
registered defaults.
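For what it's worth, here is a sketch of the kind of call involved when driving Blat from SQL Server via xp_cmdshell, using the -server and -f overrides suggested above. The paths, addresses and host names are placeholders, and exact switch spellings vary between Blat versions:

-- send a file as the message body, overriding the registered defaults
EXEC master..xp_cmdshell 'C:\tools\blat.exe C:\reports\alert.txt -to dba@example.com -subject "SQL alert" -server smtp.example.com -f sqlagent@example.com'

If the same command works from a command prompt but fails when run under SQL Server on Windows 2003, one common difference is the service account: it needs rights to the Blat executable, the file being sent, and the network.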
