|
Thanks, I'll give that a shot on Monday
James
Sonork ID: 100.11138 - Hasaki
"Not be to confused with 'The VD Project'. Which would be a very bad pr0n flick. " - Michael P Butler Jan. 18, 2002
|
|
|
|
|
Hi,
I have this problem. I have created this class for ADO binding:
class Obce : public CADORecordBinding
{
BEGIN_ADO_BINDING(Obce)
    ADO_VARIABLE_LENGTH_ENTRY2(1, adVarWChar, m_szCITY, sizeof(m_szCITY), m_lCITYStatus, FALSE)
END_ADO_BINDING()
//Attributes
public:
    CHAR m_szCITY[46];
    ULONG m_lCITYStatus;
};
Now I have an edit control IDC_CITY and the variable m_strCity. All I need is to exchange data between these two variables.
How do I do: m_strCity -> m_szCITY and m_szCITY -> m_strCity?
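For reference, the two copy directions can be sketched portably like this. This is an editorial sketch, not from the thread: std::string stands in for MFC's CString so the example is self-contained, and the helper names are made up.

```cpp
#include <cassert>
#include <cstring>
#include <string>

// Sketch of the two copy directions, using std::string as a portable
// stand-in for MFC's CString. The fixed-size buffer mirrors m_szCITY.
const size_t CITY_LEN = 46;

// m_strCity -> m_szCITY: truncating copy that always NUL-terminates.
void StringToBuffer(const std::string& strCity, char (&szCity)[CITY_LEN])
{
    std::strncpy(szCity, strCity.c_str(), CITY_LEN - 1);
    szCity[CITY_LEN - 1] = '\0';
}

// m_szCITY -> m_strCity: the buffer is NUL-terminated, so construction works.
std::string BufferToString(const char (&szCity)[CITY_LEN])
{
    return std::string(szCity);
}
```

With actual MFC code the same idea applies: `lstrcpynA(m_szCITY, m_strCity, sizeof(m_szCITY));` in one direction, `m_strCity = m_szCITY;` in the other, with `UpdateData` moving the data to and from the IDC_CITY control. One hedged observation: the binding above declares adVarWChar (wide characters) over a CHAR buffer; for a CHAR array, adVarChar may be what is intended.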
Thank you for your help or suggestions.
David Pokluda (pokluda@mujweb.cz)
|
|
|
|
|
Hi,
I use an ADO Data Control 6.0 in my software.
I want to install my software with InstallShield, but I don't know if it is possible to modify the data source of the ADO control, because when I install my software it doesn't find the data path.
A solution would be to modify the data source of the ADO Data Control directly in my software, but I don't know if that is possible.
If someone can help me!!
Thanks a lot.
Ludovic
|
|
|
|
|
Hi there,
My VC++ code accesses SQL Server via ADO (using the helper class CADORecordBinding). I have a column of the DBTYPE_DBTIMESTAMP data type.
How can I put a value in it? Is there a corresponding data type in C++?
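For what it's worth (an editorial note, not from the thread): in OLE DB, DBTYPE_DBTIMESTAMP corresponds to the DBTIMESTAMP structure declared in oledb.h, and the matching ADO binding constant is adDBTimeStamp, bound with ADO_FIXED_LENGTH_ENTRY. A self-contained sketch of the structure's layout; on Windows, include oledb.h rather than redefining it:

```cpp
#include <cassert>

// Layout of OLE DB's DBTIMESTAMP (as declared in oledb.h), reproduced here
// only so the example is self-contained. In real Windows code, use the one
// from oledb.h.
struct DBTIMESTAMP_sketch
{
    short          year;
    unsigned short month;
    unsigned short day;
    unsigned short hour;
    unsigned short minute;
    unsigned short second;
    unsigned long  fraction;  // billionths of a second
};

// Filling one in before handing it to ADO is plain member assignment.
DBTIMESTAMP_sketch MakeTimestamp()
{
    DBTIMESTAMP_sketch ts = {};
    ts.year = 2002; ts.month = 1; ts.day = 18;
    ts.hour = 12; ts.minute = 30; ts.second = 0;
    ts.fraction = 0;
    return ts;
}
```

In the binding class the column would then be a DBTIMESTAMP member bound with something like `ADO_FIXED_LENGTH_ENTRY(ordinal, adDBTimeStamp, m_ts, m_tsStatus, TRUE)`; the exact macro arguments are from memory, so verify them against icrsint.h.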
|
|
|
|
|
Out of SQL Server Books Online:
timestamp is a data type that exposes automatically generated binary numbers, which are guaranteed to be unique within a database. timestamp is used typically as a mechanism for version-stamping table rows. The storage size is 8 bytes.
...
A nonnullable timestamp column is semantically equivalent to a binary(8) column. A nullable timestamp column is semantically equivalent to a varbinary(8) column.
So it does not look like you can set a value for it; it is auto-generated by the server...
|
|
|
|
|
Maybe my words were not clear enough: I need to find the corresponding data type in C++, not anything else. What's more, I've found it in the 'ADO Data Bound Class Wizard' article. Thanks for your help!
Best regards,
Kamp Huang
|
|
|
|
|
I'm working on a C++ program that merges and exports Excel workbook sheets to plain text files. The program runs as a service and therefore needs to be stable and have very good memory management. I've done lots of testing, and I've finally come to the conclusion that the ODBC driver (or perhaps the Excel driver) does not return all the memory that it allocates. I removed all my own memory allocation and noticed that the program grows by approx. 10 kB of memory (on average) per ODBC connection.
I do close both the recordset and the database when I'm done with the data retrieval.
Has anyone else noticed this problem, and is there a workaround? I would be very pleased to get any ideas on the issue.
cheers,
Martin Fridegren
|
|
|
|
|
System: Win2000 Professional, VC6 SP5, MDAC 2.6, SQL Server 7
When I retrieve 15,000,000 records, the memory rises slowly to 2 GB. How can I resolve this memory problem?
The main routine is below:
m_ptrRecordset->Open(
    bstrSQL,
    m_ptrConnection.GetInterfacePtr(),
    ADODB::adOpenForwardOnly,
    ADODB::adLockUnspecified,
    ADODB::adAsyncExecute );
//--------------------------------------------------
do
{
    while (!ptrRS->adoEOF)
    {
        lRow++;
        ptrFields = ptrRS->Fields;
        nCols = ptrFields->Count;
        for (long n = 0; n < nCols; n++)
        {
            vCol = n;
            hr = ptrFields->get_Item(vCol, &ptrField);
            VARIANT _result;
            VariantInit(&_result);
            hr = ptrField->get_Value(&_result);
            CString str((LPCTSTR)CHelpers::CrackStrVariant(_result));
            fwrite(str.GetBuffer(2048), sizeof(char), str.GetLength(), file);
            if (n < nCols - 1)
                fwrite(",", sizeof(char), 1, file);
        }
        fwrite("\n", sizeof(char), 1, file);
        hr = ptrRS->raw_MoveNext();
        if (FAILED(hr))
            break;
    }
    ptrRS = ptrOldRS->NextRecordset(&vRowsAffected);
} while (ptrRS != NULL);
Please pardon my weak English!
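An editorial aside on the question above: one plausible cause of the growth (an assumption, not a confirmed diagnosis) is that each VARIANT returned by get_Value owns a BSTR that is never released. On Windows the release call is VariantClear, and a scope guard ensures it runs on every iteration, even on early exits. A portable sketch of the guard pattern, with a generic cleanup callback standing in for VariantClear:

```cpp
#include <cassert>
#include <functional>

// RAII guard: runs a cleanup callback when the enclosing scope ends.
// In the ADO loop the callback would be [&]{ VariantClear(&_result); },
// placed right after VariantInit, so the BSTR held by the VARIANT is
// freed once per column, per row.
class ScopedCleanup
{
public:
    explicit ScopedCleanup(std::function<void()> cleanup)
        : m_cleanup(std::move(cleanup)) {}
    ~ScopedCleanup() { m_cleanup(); }
    ScopedCleanup(const ScopedCleanup&) = delete;
    ScopedCleanup& operator=(const ScopedCleanup&) = delete;
private:
    std::function<void()> m_cleanup;
};
```

Releasing ptrField each iteration matters for the same reason when it is obtained through a raw get_Item call.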
|
|
|
|
|
we have an app that does a whole bunch of calcs and ends up writing upwards of 200,000 records into a sql7 database via direct odbc calls (sqlexecdirect) ... the app takes maybe 15-25 minutes to run, which isn't so bad considering, but i want to know if it can be sped up ... after profiling it i find that 89% of the time is spent inside sqlexecdirect()
we don't use stored procs and the records are inserted using a simple INSERT INTO blah (xxx,xxx,xxx,xxx) VALUES (xxx,xxx,xxx,xxx) statement ... the number of 'parameters' per query is maybe 20 or 25
my understanding of stored procs is that they compile once and execute quicker, but will this make much difference given the simple nature of the statements and the number of parameters i would be working with?
thoughts would be appreciated
---
"every year we invent better idiot proof systems and every year they invent better idiots ... and the linux zealots still aren't being sterilized"
|
|
|
|
|
One thing that can speed it up a lot (in my experience) is to batch multiple insert statements together. For example, append all your insert statements to a string and, when it reaches a certain length, execute it, then start over again. The limit on the length of a query statement is pretty big, I think around 16K, so you can safely build strings up to at least that limit.
Someone told me that wrapping your inserts in transactions speeds things up, but from my personal experience I did not see any improvement.
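The batching idea can be sketched like this. Everything here is hypothetical (BatchInserter, the threshold, the callback); the callback stands in for whatever really runs the SQL, such as SQLExecDirect:

```cpp
#include <cassert>
#include <functional>
#include <string>

// Sketch of batched inserts: statements are appended to one buffer, and the
// buffer is flushed through `execute` once it passes a size threshold.
class BatchInserter
{
public:
    BatchInserter(std::function<void(const std::string&)> execute,
                  size_t flushThreshold = 16 * 1024)
        : m_execute(std::move(execute)), m_threshold(flushThreshold) {}

    void Add(const std::string& insertStatement)
    {
        m_batch += insertStatement;
        m_batch += ';';                 // separate the statements
        if (m_batch.size() >= m_threshold)
            Flush();
    }

    void Flush()                        // call once more after the last Add
    {
        if (!m_batch.empty()) {
            m_execute(m_batch);
            m_batch.clear();
        }
    }

private:
    std::function<void(const std::string&)> m_execute;
    std::string m_batch;
    size_t m_threshold;
};
```

The 16K figure above is the poster's recollection; check the driver's actual statement-length limit before picking a threshold.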
I hope this helps.
|
|
|
|
|
The fastest way of inserting records into a table is using BCP (bulk copy).
200,000 records takes about 3 or 4 seconds on my machine.
Another good option is using DTS if you need to do calculations on the data.
Crivo
Automated Credit Assessment
|
|
|
|
|
I agree with Daniel...
You have to use the BCP utility....
Regards
Carlos Antollini.
Sonork ID 100.10529 cantollini
|
|
|
|
|
Can you distribute BCP with your application?
Besides that, what I proposed is essentially the same thing that BCP does (batch inserting), only without the hassle of writing it to a file and then using an external app to import it. What do you think?
|
|
|
|
|
Yes, you can distribute the BCP utility. But I remembered something better: the BULK INSERT command.
You can write a stored procedure that uses the BULK INSERT command. With this command you only need to supply the path of the file that holds the data and the field and row terminators, and you're ready....
The good news is that BULK INSERT is executed on the server, so in that case all the resources are on the server....
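The command described above can be assembled as a plain T-SQL string and sent through the existing ODBC/ADO connection. A sketch; the table name, path, and the BuildBulkInsert helper are illustrative, not from the thread:

```cpp
#include <cassert>
#include <string>

// Builds a T-SQL BULK INSERT command. Table name, file path, and separators
// are caller-supplied; note the path must be one the *server* can see.
std::string BuildBulkInsert(const std::string& table,
                            const std::string& serverPath,
                            const std::string& fieldSep = ",",
                            const std::string& rowSep = "\\n")
{
    return "BULK INSERT " + table +
           " FROM '" + serverPath + "'" +
           " WITH (FIELDTERMINATOR = '" + fieldSep + "'," +
           " ROWTERMINATOR = '" + rowSep + "')";
}
```

The resulting string would be executed like any other statement, e.g. through Connection::Execute or SQLExecDirect.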
Regards....
Carlos Antollini.
Sonork ID 100.10529 cantollini
|
|
|
|
|
This indeed looks like a better choice, provided they can upload or access their file from the server. Good thinking.
|
|
|
|
|
Thanks... Always remember that the path you give to BULK INSERT must be a path as seen from the server...
Regards...;)
Carlos Antollini.
Sonork ID 100.10529 cantollini
|
|
|
|
|
given that the prog calculates the data to be inserted, would you suggest i write all the data to a file and then run a bcp to bulk insert it?
i think writing it all to a memory buffer might be a bit resource-gobbling, no?
---
"every year we invent better idiot proof systems and every year they invent better idiots ... and the linux zealots still aren't being sterilized"
|
|
|
|
|
Lauren: Read my previous answer, where I said that I think it is better to use the BULK INSERT command....
Regards...
Carlos Antollini.
Sonork ID 100.10529 cantollini
|
|
|
|
|
carlos, i did read your post
my problem is that the records to be inserted are created one by one from calcs in the prog ... i would have to buffer them somewhere until i'm ready to do a bulk insert, no?
my issue was where does one buffer them?
thanks
---
"every year we invent better idiot proof systems and every year they invent better idiots ... and the linux zealots still aren't being sterilized"
|
|
|
|
|
lauren wrote:
my problem is that the data to be inserted are created one by one from calcs in the prog
In this case the only thing you can do is insert record by record using the INSERT SQL statement; it is better to use a stored procedure for that...
But if you have thousands of records to insert, maybe you can build a file and send it to the server to do a BULK INSERT, because BULK INSERT is for inserting data files into the database....
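To make the buffering question concrete: one low-memory approach is to append each record to a CSV file as soon as it is calculated, then hand the finished file to BULK INSERT in one shot. A sketch; CsvSpooler and all names in it are hypothetical:

```cpp
#include <cassert>
#include <cstdio>
#include <string>
#include <vector>

// Sketch of the file-buffering idea: each record is appended to a CSV file
// as soon as it is produced, so nothing piles up in memory; when the run is
// finished, the file is handed to BULK INSERT.
class CsvSpooler
{
public:
    explicit CsvSpooler(const std::string& path)
        : m_path(path), m_fp(std::fopen(path.c_str(), "w")) {}
    ~CsvSpooler() { if (m_fp) std::fclose(m_fp); }

    void AddRecord(const std::vector<std::string>& fields)
    {
        for (size_t i = 0; i < fields.size(); ++i)
            std::fprintf(m_fp, "%s%s", fields[i].c_str(),
                         i + 1 < fields.size() ? "," : "\n");
    }

    // Closes the file and returns the command to run against the server.
    // The path must be visible from the server, e.g. on a share.
    std::string FinishAndBuildCommand(const std::string& table)
    {
        std::fclose(m_fp);
        m_fp = nullptr;
        return "BULK INSERT " + table + " FROM '" + m_path +
               "' WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\\n')";
    }

private:
    std::string m_path;
    std::FILE*  m_fp;
};
```

Real data would also need CSV escaping for fields containing commas or newlines, which is omitted here for brevity.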
Regards!!!
Carlos Antollini.
Sonork ID 100.10529 cantollini
|
|
|
|
|
If you have any indexes on the table, drop them and recreate them after the insert. Make sure you aren't logging everything (in SQL Server) if you don't need to.
Andy Gaskell, MCSD
|
|
|
|
|
hi andy
no transaction logging is used on this piece and the only index is the primary key for the table
---
"every year we invent better idiot proof systems and every year they invent better idiots ... and the linux zealots still aren't being sterilized"
|
|
|
|
|
One note:
Stored procedures are not compiled; they are simply stored directly in the database rather than as a SQL statement within your application. The great flexibility gained through stored procedures is that if something in your SQL statement needs to change, you don't have to recompile your project, as long as you don't change the name of the stored procedure. You also get access to other processing that a stored procedure can do.
Nick Parker
|
|
|
|
|
<nitpick>
Actually, stored procs are compiled. The database (in this case SQL Server 7) will create an optimised execution plan the first time the proc is run. This plan is then stored for later use (eliminating the need to recreate the plan again and again, as happens with a standard SQL statement).
There is one gotcha to this, though: the proc is optimized for the way it is run the first time. If you have some conditionals inside the proc, the optimization could be different depending on the branch that gets executed.
</nitpick>
If you use Query Analyzer (provided in the client tools) you can view the execution plan for a given query. Without a copy in front of me I can't tell you exactly how to view it, though; I remember something about a view that let you switch from grid to report.
James
Sonork ID: 100.11138 - Hasaki
"Not be to confused with 'The VD Project'. Which would be a very bad pr0n flick. " - Michael P Butler Jan. 18, 2002
|
|
|
|
|
Hello
Deleting a record with
m_pRecordset->Delete(adAffectCurrent);
fails with the message
[Microsoft][ODBC dBASE Driver] Selection too complex
What's wrong?
|
|
|
|