|
STUFF is what you're looking for indeed.
|
|
|
|
|
Thanks for your responses.
Instead of a comma-separated list, I would like it with line breaks.
I am separating the information with pipes, and then doing a replace after calling the function, like below.
I am doing this -- SELECT REPLACE(dbo.fn_getAssociateComments(168), '|', CHAR(13) + CHAR(10))
But it isn't putting in any line breaks. Am I missing something?
Thanks a bunch!
|
|
|
|
|
Hi,
According to this Stack Overflow thread [^], it should work.
I don't know why it doesn't work for you...
Did you try replacing CHAR(13) + CHAR(10) with "\r\n", just to see?
Edit:
Maybe you should take a look at this example too: [^]
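For what it's worth, a minimal sketch of the pattern being discussed (the string here is made up, and the function name from the question is not needed to reproduce it). One common gotcha: the REPLACE itself often works, but SSMS's "Results to Grid" view does not display CR/LF, so the output looks like it has no line breaks. Switching to "Results to Text" (Ctrl+T) usually shows them:

```sql
-- Sketch: replace pipe delimiters with CRLF line breaks.
-- Note: view this with "Results to Text" in SSMS; the grid
-- view collapses CHAR(13)+CHAR(10) and hides the line breaks.
DECLARE @s varchar(100) = 'first comment|second comment|third comment';
SELECT REPLACE(@s, '|', CHAR(13) + CHAR(10));
```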
|
|
|
|
|
I keep bumping into a need for this.
I'm tossing this into my code library.
Very, very useful. In fact it was useful today in a problem.
_____________________________
Give a man a mug, he drinks for a day. Teach a man to mug...
|
|
|
|
|
|
Hi,
I was wondering whether SQL Server somehow supports doing that.
By that, I mean having the structure on one database and the data on another.
The purpose is to be able to launch a set of integration tests that would get the data from the test database but would execute the stored procedures from the original database.
I didn't find much on Google, so I suppose this isn't possible.
Edit: I didn't say it, but I'd like to leave the stored procedures as they are; prefixing table names with a database name would be too much work. Ideally it should be a config parameter or something like that.
modified on Friday, July 22, 2011 8:06 AM
|
|
|
|
|
The stored procedure is executed on a server, not a database. You can have multiple databases on one server.
Normally you would want to run a test using code on a test machine, but if you need data from the primary database, a linked server might be the way to go.
Look into linked servers either way.
Also look at IMPORT / EXPORT of tables.
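A rough sketch of what a linked-server setup might look like (the server name, instance, and table here are all made up, and the provider depends on your version):

```sql
-- Register a linked server pointing at the test machine.
-- 'TESTSRV', the instance name, and the table are placeholders.
EXEC sp_addlinkedserver
    @server     = N'TESTSRV',
    @srvproduct = N'',
    @provider   = N'SQLNCLI',
    @datasrc    = N'testmachine\SQL2008';

-- Query it with a four-part name: server.database.schema.object
SELECT *
FROM TESTSRV.TestDb.dbo.SomeTable;
```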
Good luck,
djj
|
|
|
|
|
djj55 wrote: The stored procedure is executed on a server not a database. You can have
multiple databases on one server.
Not so sure about that, I may be wrong though.
We do indeed have multiple databases on a single server, but some databases contain some procedures and other databases contain others, so I'd say the database is the procedure's owner.
Anyway, linked servers may indeed be a solution, but it would require prefixing the table names with the server/database name. We have too many stored procedures, so this is not an option right now.
Thanks
|
|
|
|
|
_Zorro_ wrote: Not so sure about that, I may be wrong though.
If you try running a stored procedure that writes to the C:\ drive, see where the file ends up. There is most likely a way to change where a process runs, but I use the defaults. sp_who will also show you what is running on a given server.
You can run a stored procedure from one linked server on another by using the server/database name. Within a stored procedure I like to include the database with the table name, so that if it is run from another database it still works. I think this is called fully qualified, but I do not remember. Either way, database.schema.table is what I suggest, and with the new emphasis on schemas in 2005/2008, referencing the schema should be the minimum. (We still use dbo for our tables, though.)
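To illustrate the naming forms mentioned above (all object names here are invented for the example):

```sql
-- Three-part name: database.schema.object.
-- Resolves correctly no matter which database is current.
SELECT OrderID, OrderDate
FROM SalesDb.dbo.Orders;

-- Four-part name via a linked server: server.database.schema.object.
-- (Commented out; 'LinkedSrv' is a placeholder linked server.)
-- SELECT OrderID, OrderDate
-- FROM LinkedSrv.SalesDb.dbo.Orders;
```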
|
|
|
|
|
djj55 wrote: If you try running a stored procedure that stores to the c:\ drive, see where
the file ends up.
Are you talking about SQL Server, or Windows Server? Checking the location of a file written by a stored procedure makes me believe you mean the latter.
If you have a SQL Server with two databases, db1 and db2, and you create a stored procedure only on db2, trying to run that stored procedure from db1 won't work.
Check the details of EXEC sp_who; you'll see that the dbname is specified.
We are referencing the schema too, and we also still use dbo, by the way.
I think the only way would be to prefix all our table/view/etc. calls with the database name, and the name would have to be a parameter (if we're running tests, the data should be retrieved from the test database; otherwise it should come from the context's database). Using a parameter would mean converting all our procedures to dynamic SQL. Considering that we have almost 3,000 stored procedures, this is not an option, as said before.
I was wondering if SQL Server supported some sort of parameter indicating that it should read the data from one database while executing a procedure from another (without having to edit the stored procedures; this is a key factor).
|
|
|
|
|
What I was talking about with the C: drive was that if you bcp data out (export), it saves to the server you are running on.
You could try using an alias for your database, but I have only created those while studying. (Not to be confused with a table alias in a query.)
If that does not work, I do not know of a way to change where the SP looks. This rings a bell, but I cannot remember.
|
|
|
|
|
djj55 wrote: This rings a bell but I cannot remember.
Ok, thanks for your time!
|
|
|
|
|
Try CREATE SYNONYM.
I think that's your best bet.
_____________________________
Give a man a mug, he drinks for a day. Teach a man to mug...
|
|
|
|
|
That's not bad, but it still requires me to update all my stored procedures... not cool!
|
|
|
|
|
I'm not clear as to why.
If the synonym is on the test server and points to the code on the production server, then any change to the prod server should immediately be available to the test server. You run the code on the test server, but it is the code from production.
Or am I not seeing the problem?
_____________________________
Give a man a mug, he drinks for a day. Teach a man to mug...
|
|
|
|
|
Here's why:
CREATE SYNONYM [ schema_name_1. ] synonym_name FOR <object>

<object> ::=
{
    [ server_name.[database_name].[schema_name_2]. | database_name.[schema_name_2]. | schema_name_2. ] object_name
}

USE tempdb;
GO
-- Create a synonym for the Product table in AdventureWorks2008R2.
CREATE SYNONYM MyProduct
FOR AdventureWorks2008R2.Production.Product;
GO

-- Query the Product table by using the synonym.
USE tempdb;
GO
SELECT ProductID, Name
FROM MyProduct
WHERE ProductID < 5;
GO
Maybe I'm not seeing it, but how would you manage to use the same procedure and call different databases depending on who called the procedure?
Let's say we have db1 and db2 and we're doing:
SELECT A, B, C FROM dbo.Table
What I need is to call the data from db1 if the application made the call, but go read the data on db2 if the call was made by the build server (the unit/integration tests).
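One hedged idea along the synonym line (untested here, and all names are hypothetical): synonyms live inside a database, so the same one-part name can resolve to different targets depending on which database the caller runs in. If the procedures referenced a synonym rather than a base table, each environment's database could point that name wherever it needs:

```sql
-- In the test database, redirect the name to the production data.
USE TestDb;
GO
CREATE SYNONYM dbo.SomeTable FOR ProdDb.dbo.SomeTable;
GO

-- Code running with TestDb as the current database now reads ProdDb:
SELECT A, B, C FROM dbo.SomeTable;
```

The catch, as discussed above, is that this still assumes the procedures reference a name you are free to redefine, so it may not help when the base tables already own those names.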
|
|
|
|
|
Nevermind.
I'm trying it on my servers and can't seem to see anything but views and tables through my linked server. So maybe the other way around would work.
I'm curious enough to continue researching this but let us know if you decide on a solution.
_____________________________
Give a man a mug, he drinks for a day. Teach a man to mug...
|
|
|
|
|
smcnulty2000 wrote: I'm curious enough to continue researching this but let us know if you decide on
a solution.
I don't think there's one... but I will let you know if I find one, indeed.
|
|
|
|
|
Still does not make sense to me. Why not take a copy of the procedures and run them on the test server? We use SQL Compare to do this, except we do it the other way around: develop on the dev server, test on UAT, and deploy to production; SQL Compare does the syncing of the code in both steps.
When we require another environment, we take a backup of production onto yet another server. Then the only change is the server name, and we don't use that in the qualification in the code. I admit this may become complicated if you are using distributed databases, but we use Oracle for any really big data (multi-TB).
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
Mycroft Holmes wrote: but we use Oracle for any really big data (multi TB)
I'm jealous.
I've only gotten to big gig levels.
_____________________________
Give a man a mug, he drinks for a day. Teach a man to mug...
|
|
|
|
|
We recently sized an Oracle system at 6 TB; we can identify 3 TB as the initial data coming from part of the trading system.
Really big data is only useful if your structure is good and your data clean. Unfortunately our legacy systems were designed by incompetent, moronic, lazy, uneducated, idiots who only had a vague idea about the business and absolutely no idea about data structures.
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
Mycroft Holmes wrote: our legacy systems were designed by incompetent, moronic, lazy, uneducated,
idiots who only had a vague idea about the business and absolutely no idea about
data structures
This is in fact the standard methodology for database development in 99% of companies worldwide.
|
|
|
|
|
Mycroft Holmes wrote: Still does not make sense to me. Why not take a copy of the procedures and run
them on the test server.
Because we're using continuous integration with TFS 2010 and want to launch the test sets while the build server is... building. If we had to copy our stored procedures to the test database, that would require a manual action every time an SP is updated, which I think is a bad idea. Fewer humans around, fewer problems.
Apart from catching bugs, we'd also like to guarantee that the deploy succeeds regardless of the environment (dev, test, post-test, production).
I know this seems a bit strange, since we should only test on one of those, but we have numerous errors due to deployment issues (a large number of files to merge, a lot of manual actions) and we'd like to prevent that, or at least be warned as soon as possible.
Thanks
|
|
|
|
|
_Zorro_ wrote: continuous integration with tfs 2010
I have never been able to get the hang of automated testing/deployment. I guess I'm old school; I want a human driving the deployment.
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
It works fine for us, but we still have to do some stuff manually.
We start having huge problems when humans begin to interact. That's actually why we wanted to validate each deployment step: there's always something wrong. A file not merged correctly, something forgotten on the database, etc., and it means a big loss of time to go and repair everything... each time...
I suppose your humans are better than ours!
|
|
|
|