I am trying to upload a big file into a binary field. Because the file is very big, I use
AppendChunk.
But it consumes just as much memory as assigning the whole value at once.
_variant_t bigarray;
// set bigarray to 5 megabytes
while (more data left in the file) {
    read next part of the file into bigarray;
    Recordset->Fields->Item["ImageField"]->AppendChunk(bigarray);
}
Recordset->Update();
If the file is 200 MB, the memory usage of the process climbs to more than 600 MB
after Update is called.
And then, after some time, the Update fails with a timeout.
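For reference, the loop the pseudocode above describes, minus the ADO parts, can be sketched in portable C++ like this. The callback is a hypothetical stand-in for the AppendChunk call; note that the buffer is reused, so the client only ever holds one chunk at a time:

```cpp
#include <cstdio>
#include <cstddef>
#include <vector>

// Reads a file in fixed-size chunks, handing each chunk to a callback.
// The callback stands in for "copy into the SAFEARRAY and AppendChunk".
// Returns the total number of bytes read, or -1 if the file cannot be opened.
template <typename ChunkFn>
long long ReadInChunks(const char* path, std::size_t chunk_size, ChunkFn on_chunk) {
    std::FILE* f = std::fopen(path, "rb");
    if (!f) return -1;
    std::vector<unsigned char> buf(chunk_size);  // reused every iteration
    long long total = 0;
    std::size_t n;
    while ((n = std::fread(buf.data(), 1, buf.size(), f)) > 0) {
        on_chunk(buf.data(), n);  // e.g. AppendChunk on this piece
        total += static_cast<long long>(n);
    }
    std::fclose(f);
    return total;
}
```

The client-side loop itself is not the memory problem; the question in this thread is what the data access layer does with the appended chunks afterwards.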
Can anyone help? Or can you suggest another way to do this?
Thanks
Eason
eason@.hotmail.com|||Hi
At a guess you are continually looping.
Also check out:
http://support.microsoft.com/defaul...kb;en-us;153238
John
"Eason" wrote:
> I try to upload a big file to a binary field. Because it is very big, I us
e
> AppendChunk.
> But it just takes as much memory as before.
> _variant_t bigarray;
> // set bigarray to 5 Mega bytes
> while(true){
> read next part of file into bigarrary;
> Recordset->Fields->Item["ImageField"].AppendChunk(bigarrary);
> }
> Recordset->updata();
> If the file is 200 Mega bytes, after call the update, the memory usage of
> the process will go up to more than 600Mega bytes.
> And then after some time, the update failed with time-out.
> Can anyone help? Or advise me another way to do this?
> Thanks
> Eason
> eason@.hotmail.com
|||Thanks for your response. But the loop is fine; it is not an infinite loop.
The real code just generates some data and exits the loop after a number of
AppendChunk calls.
I tried a medium-sized file (10 MB) and it works fine.
Then I tried a big file (100 MB): it ran a long time and was using more than
600 MB of virtual memory after finishing all the AppendChunk calls. Then, when Update
was called, memory went higher and higher and the call failed with a timeout.
Here is my code:
======================================
// This is the main project file for VC++ application project
// generated using an Application Wizard.
#include "stdafx.h"
#import "D:\Program Files\Common Files\System\ADO\mo15.dll" \
no_namespace rename("EOF", "EndOfFile")
#define ChunkSize 1024*1024
#include <ole2.h>
#include <stdio.h>
#include "conio.h"
#include "malloc.h"
_ConnectionPtr pConnection;
///////////////////////////////////////////////////////////
// //
// PrintProviderError Function //
// //
///////////////////////////////////////////////////////////
VOID PrintProviderError(_ConnectionPtr pConnection)
{
// Print Provider Errors from Connection object.
// pErr is a record object in the Connection's Error collection.
ErrorPtr pErr = NULL;
long nCount = 0;
long i = 0;
if( (pConnection->Errors->Count) > 0)
{
nCount = pConnection->Errors->Count;
// Collection ranges from 0 to nCount -1.
for(i = 0; i < nCount; i++)
{
pErr = pConnection->Errors->GetItem(i);
printf("\t Error number: %x\t%s", pErr->Number,(LPCSTR)
pErr->Description);
}
}
}
///////////////////////////////////////////////////////////
// //
// AppendChunkX Function //
// //
///////////////////////////////////////////////////////////
int AppendChunkX(VOID)
{
// Define ADO object pointers.
// Initialize pointers on define.
// These are in the ADODB:: namespace.
_RecordsetPtr pRstPubInfo = NULL;
_ConnectionPtr pConnection = NULL;
HRESULT hr = S_OK;
_bstr_t strCnn("Provider='sqloledb';Data Source='crybaby';"
"Initial Catalog='MyTest';Integrated Security='SSPI';");
SAFEARRAY FAR *psa;
SAFEARRAYBOUND rgsabound[1];
rgsabound[0].lLbound = 0;
rgsabound[0].cElements = ChunkSize;
psa = SafeArrayCreate(VT_UI1, 1, rgsabound);
_variant_t varChunk;
_RecordsetPtr RsVersions(__uuidof(Recordset));
int len,k,i;
char* databuf=(char*)psa->pvData;
{
char buf[256]="
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aa\n";
len=strlen(buf);
k=0;
memset(databuf,' ',ChunkSize);
for(i=0;i<(ChunkSize/len);i++){
strncpy(databuf+k,buf,len);
k+=len;
}
}
try
{
//Open a Connection.
hr=pConnection.CreateInstance(__uuidof(Connection));
hr = pConnection->Open(strCnn,"","",adConnectUnspecified);
_bstr_t wQueryString="Select content from Doc where (id=1)";
try {
hr=RsVersions->Open(
variant_t(wQueryString),
_variant_t((IDispatch *)pConnection, true),
adOpenUnspecified, //adOpenForwardOnly, // adOpenUnspecified,
adLockOptimistic, // adLockUnspecified,
-1);
if (!FAILED(hr) && RsVersions->BOF){
printf("error -1\n");
RsVersions->Close();
return -1;
}
}
catch(_com_error &e) {
printf("error -2-\n");
// dump_com_error(e,LOG_DEBUG);
return -1;
}
int num=1;
char sbuf[10];
// for(i=0;i<10;i++){
for(i=0;i<300;i++){
sprintf(sbuf,"%d",i);
strncpy((char*)psa->pvData,sbuf,strlen(sbuf));
//Assign the Safe array to a variant.
varChunk.vt = VT_ARRAY|VT_UI1;
varChunk.parray = psa;
RsVersions->Fields->Item["content"]->AppendChunk(varChunk);
}
RsVersions->Update();
printf("Write %d Mega bytes\n",num);
}
catch(_com_error &e)
{
// Notify the user of errors if any.
_bstr_t bstrSource(e.Source());
_bstr_t bstrDescription(e.Description());
PrintProviderError(pConnection);
printf("Source : %s \n Description : %s\n",(LPCSTR)bstrSource,
(LPCSTR)bstrDescription);
}
// Clean up objects before exit.
if (RsVersions)
if (RsVersions->State == adStateOpen)
RsVersions->Close();
if (pConnection)
if (pConnection->State == adStateOpen)
pConnection->Close();
return 0;
}
int main()
{
HRESULT hr = S_OK;
if(FAILED(::CoInitialize(NULL)))
return 1;
AppendChunkX();
//Wait here for the user to see the output
printf("\n\nPress any key to continue..");
getch();
::CoUninitialize();
return 0;
}
========================================
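A back-of-the-envelope check against the code above: each AppendChunk call appends the entire 1 MB SAFEARRAY it is handed, and ADO keeps the whole pending value in client memory until Update() is issued, so 300 iterations accumulate roughly 300 MB before anything reaches the server. The >600 MB observed in the post would be consistent with at least one extra internal copy of the pending value; that copy count is an inference from the reported numbers, not documented behavior:

```cpp
#include <cstddef>

// Memory arithmetic for the loop above. AppendChunk appends the whole
// array passed to it, and the accumulated value stays client-side until
// Update(), so the pending value grows by kChunkSize per iteration.
constexpr std::size_t kChunkSize  = 1024 * 1024;  // size of the SAFEARRAY
constexpr std::size_t kIterations = 300;          // loop count in the code

// Bytes buffered client-side by the time Update() is called: 300 MB.
constexpr std::size_t kPendingBytes = kChunkSize * kIterations;

// One additional internal copy (e.g. while preparing the value for the
// provider) would already put the process past the reported 600 MB.
constexpr std::size_t kWithOneCopy = 2 * kPendingBytes;
```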
"John Bell" wrote:
> Hi
> At a guess you are continually looping.
> Also check out:
> http://support.microsoft.com/defaul...kb;en-us;153238
> John
> "Eason" wrote:
>|||Hi
Is this what is happening:
http://support.microsoft.com/defaul...kb;en-us;182423
You may want to look at:
http://support.microsoft.com/defaul...kb;en-us;189415
I also seem to remember that returning a second (non text) column was
the solution for some error, but can't remember or find the article
that was talking about it.
John
|||Thanks for the sample.
The sample code only calls AppendChunk once, so the data cannot be very big and it
does not save any memory.
GetChunk is done right: it fetches one part of the data at a time and does not
load everything into memory first.
If all the data is kept in memory across multiple AppendChunk calls, I do not
see any reason to use it at all.
I could just allocate memory for the whole value myself if AppendChunk is going to
allocate the same amount anyway.
It makes the AppendChunk API look like a joke.
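One way around the client-side buffering, which the KB articles from this era point toward, is to append the blob server-side in pieces: issue one statement per chunk so the client never holds more than one chunk. A sketch of the per-chunk statements, assuming a SQL Server 2000 text/image column and its TEXTPTR/UPDATETEXT syntax; the `Doc`/`content` names come from the code in this thread, and the `?` marker is whatever placeholder the data access layer uses to bind the chunk bytes:

```cpp
#include <string>
#include <cstddef>

// First write creates a non-null value so that a text pointer exists.
std::string BuildInitStatement() {
    return "UPDATE Doc SET content = 0x0 WHERE id = 1";
}

// Per-chunk statement: fetch the text pointer, then append one chunk.
// UPDATETEXT with a NULL insert offset appends at the end of the existing
// value, and delete_length 0 keeps everything already written.
std::string BuildAppendStatement(std::size_t chunk_index) {
    (void)chunk_index;  // kept for logging only; not part of the statement
    return "DECLARE @ptr varbinary(16); "
           "SELECT @ptr = TEXTPTR(content) FROM Doc WHERE id = 1; "
           "UPDATETEXT Doc.content @ptr NULL 0 ?";
}
```

Executing one such statement per chunk keeps the client's working set at one chunk, at the cost of one round trip per chunk.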
"John Bell" wrote:
> Hi
> Is this happening
> http://support.microsoft.com/defaul...kb;en-us;182423
> You may want to look at:
> http://support.microsoft.com/defaul...kb;en-us;189415
> I also seem to remember that returning a second (non text) column was
> the solution for some error, but can't remember or find the article
> that was talking about it.
> John
> Eason wrote:
> than
> update,
> aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaa\n";
> %s\n",(LPCSTR)bstrSource,
> big, I use
> usage of
>|||Hi
Storing large blobs is not really what relational databases are designed for;
once you start using the text/image datatypes you will see that there are
significant restrictions on what you can do.
John
|||It is true that there is a limitation on the size of blob data.
But that is not the problem in my case.
I am trying to move data from a SharePoint database to another SQL database (not
the SharePoint one).
The SharePoint database (SQL 2000) holds a 200 MB blob value, and I can read it
out using GetChunk without any error.
But when I try to put it into our SQL 2000 database, I cannot write it back. I
am using ADO.
Do you know which database API SharePoint uses to put blob data into the SQL
database?
Your help is greatly appreciated.
Thanks
Eason
"John Bell" wrote:
> Hi
> Storing large blobs in a relational database is not what they are
> really designed for, when you start looking at using text/image
> datatypes you will see that there are significant restrictions in what
> you can do.
> John
> Eason wrote:
> and it
> does not
> call, I
>
> was
> some
> called
> collection.
> pErr->Number,(LPCSTR)
> aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaa\n";
> pConnection->Open(strCnn,"","",adConnectUnspecified);
> very
> Recordset->Fields->Item["ImageField"].AppendChunk(bigarrary);
> memory
>|||Hi
I am not sure why you aren't using DTS, a linked server, BCP, or replication to
do this.
John
Sunday, February 19, 2012
AppendChunk uses a lot of memory