Sounds like a bad idea to me.
They are asking you to reveal your business information to them. Information that should have nothing to do with them.
What are you going to do if they take that client list and tell those other people that they must drop you?
kmoorevs wrote: This is fair enough, and probably should have been in practice before
If your business was based on a smile and a handshake and the other side is no longer satisfied with that then it is time to use a contract.
But did they actually use the phrase "Full Disclosure"?
Doesn't that sound like their lawyer asked them to get the list?
Think carefully about how you answer them. If your company has a lawyer I'd run it by them. The developer threatened legal action and now wants this piece of information. Perhaps you should take the threat seriously.
_____________________________
Give a man a mug, he drinks for a day. Teach a man to mug...
Unless it is spelled out differently by contract, I'm not sure how the data ever belongs to the vendor. And I wouldn't sign such a contract with a vendor, I'd tell them to pound sand.
Most of my customers' business IS their information, or is tied directly to it. If the vendor owns it, then the vendor can tell you that if you migrate away from their product, you can't take the data with you. Who in their right mind would agree to such a thing?
This would be like Microsoft insisting that they own your documents because you wrote them in Word.
I also wouldn't give these guys a list of common customers. They are considering a lawsuit, and they want 'evidence' that you've been tampering with their clients and a way to calculate damages, so they can decide whether to move forward.
_____________________________
Give a man a mug, he drinks for a day. Teach a man to mug...
Hah! Seems they have done some homework... I got an email this morning with a SQL script attached showing all the queries used against their database... they must have started a trace. The email informed me that the queries passed QA. A half-hour later, I got another email stating that our request for access to their system has been approved, pending the acceptance and signature of a formal document.

The document simply states that we acknowledge and respect their database and all objects contained therein, or created thereby, as intellectual property of the vendor. We also agree to identify them as the source of any data retrieved from their system (already in place) and to destroy any backups or databases we may have obtained from the client. (sure thing)

As a side note, the email stated that 'clients have a favorable opinion of your company and consider the connector to our system as 'integral' to their business objectives'. I have a new contact with their dev team and a promise of cooperation should I need it. We are reviewing the document now. I may have to delete this thread as it is now showing up in Google searches on the subject!
"Go forth into the source" - Neal Morse
This seems to answer the question. The vendor owns the database... structure and objects. The client owns the data in the database. Clients also own the SQL Server license and control access to the databases. In most cases, whether or not they know it, clients have accepted a EULA that prohibits them from sharing software components (including databases) with third parties without written consent. I believe that a contract is a good idea to protect all three parties involved.
"Go forth into the source" - Neal Morse
Hi,
I built an SSIS package locally and now I need to deploy it to our development instance of SQL Server... can anyone tell me how to go about doing that?
"I need build Skynet. Plz send code"
ok...I've got the thing deployed, but I'm trying to run it from a job. If my job is running in a database called "dev" and a package named "foo" is running in Integration Services on "dev", how would I go about referencing it in my job step command?
(just a stab)
DECLARE @cmd nvarchar(4000);
SET @cmd = N'/FILE "\\dev\MSDB\Maintenance Plans\foo.dtsx" /CHECKPOINTING OFF /SET \Package.Variables[User::Correlator].Properties[Value];' + @Correlator + N' /REPORTING E';
EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobID, @step_name=N'Execute SSIS package',
@step_id=1, @subsystem=N'SSIS',
@command=@cmd;
Looking at my @command assignment, is anything missing?
I have the thing deployed to MSDB\Maintenance Plans, but I really don't know if the reference is as simple as this.
..Is it? I'm getting an "XML Parsing error"(<-pseudo) that seems to be a permissions error when trying to open the package. Either the package doesn't exist at the location specified or my account doesn't have access to the file...I'm pretty confident that I have access to the file, so my other logical option is that the context that I have provided in my snippet above is incorrect.
Does anyone know how I should go about referencing it?
"I need build Skynet. Plz send code"
Thank you very much for the documentation, but I had already referenced the materials at those links. My specific question was surrounding the command-line reference that I was making to my package after it had been deployed.
...at any rate, I found out what my problem was. I was attempting to reference the package with the FILE switch, but I had not deployed the package to the file system of the database server I was trying to reference (I had deployed it directly to the SQL Server instance).
...I'm still not sure how I would reference the package from SQL Server directly, but I repeated the deployment, this time to the file system, adjusted my reference to the file-system location, and the package loaded at run time.
just fyi...anyone referencing a package with the "FILE" switch, make sure your package is deployed to the file system lol ...it'll save you a night's sleep.
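For anyone who does want to run the package straight out of MSDB instead of the file system, the job step command can use the /SQL switch rather than /FILE. A sketch only (the server name "dev" and the "Maintenance Plans" folder are taken from this thread; adjust to your store):

```sql
-- Sketch: referencing a package stored in MSDB rather than on the file system.
-- /SQL takes the path within the SSIS package store; /SERVER names the instance holding it.
-- The job step must use the SSIS subsystem for this dtexec-style command string.
EXEC msdb.dbo.sp_add_jobstep @job_id = @jobID,
    @step_name = N'Execute SSIS package',
    @subsystem = N'SSIS',
    @command = N'/SQL "\Maintenance Plans\foo" /SERVER dev /CHECKPOINTING OFF /REPORTING E';
```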
"I need build Skynet. Plz send code"
Hi
Im Ashutosh
Student In MCA
Respected Sir
I created a web application in ASP.NET. I want to store video, audio, images, and files in my SQL Server 2008, so please give me a solution immediately.
Yours
Ashutosh
91-8115736699
monulove2u wrote: I create a webApplication in ASP.Net
Then continue working on it, and when you get stuck on any specific problem, CP members are here to assist you.
I Love T-SQL
"Don't torture yourself,let the life to do it for you."
If my post helps you kindly save my time by voting my post.
www.cacttus.com
Go and Google HTML 5 and ASP.NET audio and video examples. Try them. And when you are stuck with doubts, come and ask here.
// ♫ 99 little bugs in the code,
// 99 bugs in the code
// We fix a bug, compile it again
// 101 little bugs in the code ♫
Knock yourself out. No, seriously...
Here borrow my hammer, it is available immediately!
Never underestimate the power of human stupidity
RAH
Steps
1. Learn basic .Net
2. Learn about ASP
3. Learn basic database including SQL. This has nothing to do with steps 1/2.
4. Learn how "blobs" are stored/retrieved in the database. Nothing to do with steps 1/2
5. Learn to use SQL in .Net.
6. Put the above together to create a program that does what you want.
Use a BLOB (binary large object) data type in SQL (in SQL Server 2008 that means varbinary(max)), and if you are using .NET, send the image or video as a parameter to the query.
Try it.
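A minimal sketch of what that can look like (the table and column names here are made up for illustration; varbinary(max) is the usual choice in SQL Server 2008 over the older, deprecated image type):

```sql
-- Illustrative only: a table holding file bytes in a varbinary(max) column.
CREATE TABLE MediaFiles (
    Id       INT IDENTITY(1,1) PRIMARY KEY,
    FileName NVARCHAR(260) NOT NULL,
    Content  VARBINARY(MAX) NOT NULL  -- the BLOB: image, audio, or video bytes
);

-- From .NET, pass the byte array as a parameter rather than concatenating it into the SQL:
-- INSERT INTO MediaFiles (FileName, Content) VALUES (@name, @bytes);
```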
So we've got a single Solicitations table that logs all outbound solicitations that get wrapped up into a file and dropped on our FTP site, and that also holds columns for the inbound processing when the records make their round trip back to our system.
So, for instance:
Table has columns
GUID|AccountID|AccountNumber|PhoneNumber|FirstName|LastName|Street|City|State|Zip|Code|CodeDescription|Agent|Date
GUID, AccountID, AccountNumber, PhoneNumber, City, State, & Zip are stored in the Log table on outbound processing, with the inbound columns defaulted to NULL.
When the information comes back into our system after the marketing call has been made, it returns all of the data that we sent out plus the remaining fields, all in CSV format.
So, a record might go out looking like this:
'1234-5678-ABC-BLAH',5,'666321234','800-123-4567','Chasey','Lane','976 Gloryhole Ave','Los Angeles','CA','66699'
and it will come back looking like this:
'1234-5678-ABC-BLAH',5,'666321234','800-123-4567','Chasey','Lane','976 Gloryhole Ave','Los Angeles','CA','66699',170,'Sealed the Deal','Ron Jeremy',07/06/2011
For the inbound processing, I want to create an SSIS package with a Data Flow that uses a Flat File Source to pick up the inbound file and have an OLE DB Destination execute a SQL Command that maps the inbound fields to the specific columns that are coming out of the csv file.
I'm basically to here:
UPDATE Solicitations SET Code = ? , CodeDescription = ? , Agent = ? , Date = ?
WHERE GUID = ?
AND AccountID = ?
AND AccountNumber = ?
How do I reference the Flat File Source in my command? How do I assure that the correct parameter values are being pushed into the command in the correct reference positions? Am I overthinking the problem?
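From what I can tell, the ? parameters would be mapped by ordinal (Param_0, Param_1, ...) against the flat-file columns, and since an OLE DB Destination only inserts, the UPDATE would live in an OLE DB Command transform. A sketch of my understanding (the ordinal assignments are my guess):

```sql
-- Sketch: parameterized SQL for an OLE DB Command transform.
-- Parameters are positional: the first ? is Param_0, the second Param_1, and so on;
-- each Param_n gets mapped to a flat-file column on the transform's mappings tab.
UPDATE Solicitations
SET Code = ?,              -- Param_0 -> csv Code
    CodeDescription = ?,   -- Param_1 -> csv CodeDescription
    Agent = ?,             -- Param_2 -> csv Agent
    [Date] = ?             -- Param_3 -> csv Date
WHERE GUID = ?             -- Param_4
  AND AccountID = ?        -- Param_5
  AND AccountNumber = ?;   -- Param_6
```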
"I need build Skynet. Plz send code"
I have a backup file that was made using MS SQL 2008.
The restore destination is MS SQL 2005.
Is it possible?
If so, how?
hi
My English is limited.
Anyway, nice to meet you~~
and give me your advice anytime~
No, you cannot. Your only choice (that I know of) is to script out the 2008 database in 2005 compatibility mode, create it on a 2005 server, and transfer the data over.
Scott
It doesn't work even if you set your database compatibility level to SQL Server 2005 before taking the backup; the backup format depends on the server version, not the compatibility level. So the only way is as suggested by scottgp.
Hi all,
I have a stored procedure that sometimes hits a deadlock. As per the requirements, I open a cursor in it three times and also update some tables. Can anyone tell me how I can detect the deadlock?
Regard's
Kaushik
For deadlock detection I use SQL Server Profiler to detect locks. I also run a script to find the longest-running queries and see what is actually happening in my procedure.
Also check this article; I think it will be more useful than running SQL Profiler.
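If Profiler isn't convenient, a couple of server-side options as well (a sketch only; trace flag 1222 is available in SQL Server 2005 and later):

```sql
-- Write detailed deadlock graphs to the SQL Server error log.
DBCC TRACEON (1222, -1);   -- 1204 gives a shorter, node-oriented report instead

-- Or watch for blocking while the procedure runs:
SELECT session_id, blocking_session_id, wait_type, wait_resource
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;
```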
We live in a Newtonian world of Einsteinian physics ruled by Frankenstein logic
modified on Monday, July 11, 2011 4:50 AM
I would like to know which of the following options will produce the fastest transactions (and why). Unfortunately, I don't know enough about the low-level "nuts and bolts" of database operation to really have a good intuition on this.
My PHP script resides on one server; my MySQL database resides on another. I believe that by performing operations on a regular MyISAM table, I am incurring time costs related to both 1) communicating between the web server and the database server, and 2) performing disk operations, because MyISAM is a disk-based storage engine. I do not, however, know how much of the time cost is associated with each of these factors, or whether both are really significant.
I have some temporary data that I want to manipulate on a per-session basis, and the way I see it I have two options:
I can create a local memory database and create a table there.
$wm = sqlite_open(':memory:', 0666);
sqlite_query($wm,"CREATE TABLE patterns (pattern varchar(500))");
Or, I can create a table in memory on the existing MySQL database.
mysql_connect("not.localhost.com", "admin", "example");
mysql_select_db("example");
mysql_query("CREATE TEMPORARY TABLE patterns (pattern VARCHAR(500)) ENGINE=MEMORY");
Which one of these will be faster to perform queries on? Will there be a noticeable difference? My intuition is that the second will be slower because it is a transaction on a non-local database, but I don't know this for sure. When the table is in-memory, which computer's memory is it actually in? How much volume would have to be going on for a difference to actually be noticeable?
Any thoughts would be very much appreciated.
--Greg
GregStevens wrote: Any thoughts would be very much appreciated
I think you should take measurements, and note down the version of the database that you tested.
GregStevens wrote: Which one of these will be faster to perform queries on?
I'm hoping Sqlite, since it could be a fast implementation in native code without much overhead.
GregStevens wrote: My intuition is that the second will be slower because it is a transaction on a non-local database, but I don't know this for sure.
How about putting it on the same machine?
You could also consider TimesTen[^] from Oracle.
GregStevens wrote: When the table is in-memory, which computer's memory is it actually in?
That depends on the specific implementation; most of the time it will be in memory that's managed by Windows, putting it into the virtual memory area. Others might have optimizations.
GregStevens wrote: How much volume would have to be going on for a difference to actually be noticeable?
You can review the amount of free memory using Task Manager:
your computer's physical memory - used memory = free memory
Once the system starts to page out memory, you'll notice delays. These can range from minimal delays (say, served from the 64 MB buffer on your hard disk) to large delays (Windows reshuffling a lot on disk, paging other running applications in and out, your computer nearly grinding to a halt).
Bastard Programmer from Hell