If you write
use databasename
it will throw an error:
Msg 207, Level 16, State 1, Line 1
Invalid column name 'databasename'.
But if you write
use master
it will say:
Command(s) completed successfully.
Niladri Biswas
If I use
use master
and then want to create a table in a different database, what should I do?
Dear All,
I use SQL Server 2005.
I want to create a folder; if a folder with that name already exists, I want to rename it or delete the old one. The same goes for files.
Is there a command to do this?
Thanks for the help....
VB.Net
You need to use xp_cmdshell, but you will probably run into permission problems! Why do file activities from SQL Server?
Bob
Ashfield Consultants Ltd
Proud to be a 2009 Code Project MVP
I believe this can be done via xp_cmdshell. In the example below I first try to create a directory; if a directory with that name already exists, I rename it instead:

declare @cmdpath  nvarchar(60)
      , @Location nvarchar(100)
      , @message  nvarchar(max)

set @Location = N'C:\NewFolder'
set @cmdpath  = 'MD ' + @Location

-- capture xp_cmdshell's output; MD only writes rows on error
create table #result (result nvarchar(255))
insert into #result (result) exec master.dbo.xp_cmdshell @cmdpath

select @message = ISNULL(@message + ' - ', '') + result
from #result
where result is not null

-- a non-null message means MD failed, e.g. the folder already exists
if (@message is not null)
begin
    EXEC xp_cmdshell 'RENAME C:\NewFolder RenamedFolder'
end

select @message
drop table #result
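A related sketch, under the same assumptions (xp_cmdshell enabled, a hypothetical C:\OldFolder path): deleting a folder non-interactively. cmd's RD command prompts "Are you sure (Y/N)?" when removing a tree unless you pass /Q (quiet) along with /S (remove subdirectories and files), so no Y needs to be typed:

-- /S removes the whole tree, /Q suppresses the Y/N confirmation prompt
EXEC master.dbo.xp_cmdshell 'RD /S /Q C:\OldFolder'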
For more info you can visit:
a) Using xp_cmdshell
b) xp_cmdshell
Niladri Biswas
Why are you using SQL Server to interact with the file system? This is generally not recommended, as it exposes the file system to database users. Most server environments will not allow it.
Never underestimate the power of human stupidity
RAH
Thanks.
The reason is that the data is backed up to a txt file, so some folders have to be renamed, deleted, or created.
When deleting a folder it asks "Are you sure (Y/N)?"
How can I type Y?
VB.Net
Your statement does not make a lot of sense to me. You back up your database to a text file? Are you nuts?
OK, what database are you using that does not have a backup facility to properly back up and restore your data? As someone suggested, you can use xp_cmdshell, but it sounds like a nightmare to me. Permissions will be another issue as well.
I would recommend you TOTALLY rethink your data management.
Never underestimate the power of human stupidity
RAH
Thank you so much.
VB.Net
To be honest, though I proposed a solution, I agree with you. If the person is using .NET, it is better to go with SMO.
Any other technology should offer a similar option.
Niladri Biswas
I use SQL Server 2005.
Select eTime from Table1
gives e.g. '2009-12-10 10:11:00',
but I want to display the date only, like '2009-12-10', using a Select statement.
Help please.
Try this
select replace(CONVERT(varchar(10),getdate(),111),'/','-') dt
OR
select replace(CONVERT(varchar(10),getdate(),102),'.','-') dt
Output:
dt
2009-12-18
So in your case it will be
select replace(CONVERT(varchar(10),eTime,111),'/','-') eTime from Table1
OR
select replace(CONVERT(varchar(10),eTime,102),'.','-') eTime from Table1
Niladri Biswas
Hi, try this
SELECT convert(varchar(10),CONVERT(DATETIME,'2009-12-10 10:11:00'),120)
I Love T-SQL
"Don't torture yourself,let the life to do it for you."
If my post helps you kindly save my time by voting my post.
www.aktualiteti.com
I'm creating a database that can send private messages. The system uses the MSSQL database engine and uses relations to constrain the inserted data.
The problem is that users can send mass messages, and that can add up quickly if each row in a table represents a message to one user. My question is: is it better to use one row per message, or a comma-separated list of users?
I would think that the table with the IDs of the message and users (a many-to-many relation table) can become REALLY large quickly. A comma-separated list would keep the number of rows down. What is more effective?
Assuming that the recipients are also users:

User table columns: UserID + User
Message table columns: MessageID + Message
Sent table columns: UserID + MessageID (using the sender's user id)
Receive table columns: UserID + MessageID (using the recipients' user ids)

Use CSV if the recipients are not users or you don't need to track them.
Don't use CSV if you intend to parse it later - not effective.
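The schema above could be sketched as T-SQL DDL (table and column names are illustrative, taken from the list above):

create table Users    (UserID    int primary key, UserName nvarchar(100))
create table Messages (MessageID int primary key, Message  nvarchar(max))
-- one row per sender of a message
create table Sent    (UserID    int references Users(UserID),
                      MessageID int references Messages(MessageID),
                      primary key (UserID, MessageID))
-- one row per recipient; a mass message adds one row per receiver
create table Receive (UserID    int references Users(UserID),
                      MessageID int references Messages(MessageID),
                      primary key (UserID, MessageID))

Deleting a message for one recipient is then a single-row delete against Receive rather than an edit of a CSV string.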
Your proposal is actually very close to my current database schema. As for the CSV: if users need to delete a mass message, I need to keep a list of unique receivers. This can be represented by rows or by CSV. With CSV I need to edit the list when a user deletes a message; with rows I just remove a row.
It is simply a matter of which method is most effective after a (long) period of time. I would guess that a table with >1,000,000 rows would take a long time for the database to process.
In that case I would rather spend a little more memory and CPU time on parsing a comma-separated list.
Any experience with this? Any good db schemas I can look at? (I'm thinking of forums here.)
Yes, lots of experience. I just finished a db optimization contract; the client was adding a million records every couple of days. I have been doing this kind of work for 20+ years.
The Receive table may hold a lot of records, but it has only 2 columns, and those columns make up the primary key. To query for a given record you will use the PK. With a clustered, two-column primary key you should be able to fetch any given record in about 3 milliseconds or less with 10 million records in the table. Subtract a ms or two if your id values are numeric. This type of table will not take up much space.
This will be thousands of times faster than searching for a partial string value in text, where you won't be able to find the user by using an index at all (not easily, anyway).
Searching for a string in a text-type field is a VERY time-consuming operation for a db - not a good idea.
I would create test scripts and data to validate this if you're not convinced. Let me know if you need help.
What db are you using?
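A minimal sketch of the point lookup being described, assuming the Receive table from earlier in the thread (the id values are hypothetical). With a clustered primary key on (UserID, MessageID), this query is a single index seek:

-- resolves via the clustered PK: one seek, no scan
select UserID, MessageID
from Receive
where UserID = 42 and MessageID = 1001

-- by contrast, the CSV alternative forces a full scan with a
-- non-sargable predicate, e.g.: ... where Recipients like '%,42,%'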
Sounds great. I don't have a lot of experience with the inner workings of SQL databases, so I was not sure what the best approach was. I will build the database without CSV and trust that the database is clever enough to find a single row (or range of rows) among millions without any performance problems.
The current database schema works exactly like that (2-column PK) on numeric datatypes.
And you are right, somehow I did not take the partial text matching into account. That would take ages. That alone would keep me from using CSV.
As a side note, the database is an MSSQL 2008 Express engine. Thanks for your support, I really appreciate it.
If your problem is the number of rows, one million rows is not large for a modern RDBMS. If it is indexed properly, it will cope with that with absolutely no problem in terms of simple queries or updates.
Things get a bit hairier if you're doing more complex manipulations, using cursors or anything like that, or doing wildcard queries like '%XYZ%', that sort of thing.
The database schema is designed so that data needing complex queries is placed in its own tables. The table in question is a 2-column composite-key table. From what I've gathered here, the engine should have no trouble doing simple queries against several million rows.
Thanks for the clarification though. Do any of you by chance have any good articles on database optimization and the inner workings of RDBMSs?
I have written a small medical research application in ASP that uses a SQL Server database to store patient research data. To set the scene for my question: I am not sure what the term for the table/field design I have used is, but I will try to explain.
I have a table that I call ref_FieldDefinition; it contains the definition of each data field I need to store patient research data for (e.g. FieldUID, FieldLabel, FieldType, etc.). So I could define, for example, a field with FieldLabel = "Smoker" and FieldType = "T/F".
I then have a separate table, data_PatientData, that stores the data for each defined field (PatientUID, FieldUID, StoredData). (This is a simplified description of how I am constructing the db.)
When I present a user with an ASP page for a particular patient, the page is built based on which fields are defined in the ref_FieldDefinition table and which data is applicable from the data_PatientData table. Hence, I have no hard-wired fields in my table design.
This concept works fine for classic database field definitions, where you have a field label and a field data input object (like a textbox, dropdown, checkbox, etc.).
Yesterday I was asked whether I could create a 3x3 "table" on the screen for users to enter values into. The first two rows of the table will contain numbers and the bottom row will contain the sum of each column. If I had a classic db table design I could hard-wire fields like "topleft", "topmiddle", "topright", etc.
I was hoping to do this in a smarter way and somehow define a "table"-type field that I could dynamically build on display, as I do with my other field definitions.
Am I aiming too high here? Does anyone have any thoughts on this? Some brainstorming would be much appreciated.
Cheers
Ryan
I use the same design structure, except I call them attributes and attribute types, and link them to all sorts of things via a many-to-many link table.
I use SQL 2008, so I make heavy use of the PIVOT function. The only problem is that the columns are dynamic, so you need to use dynamic SQL. This article may interest you: Pivot two or more columns in SQL Server 2005
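A rough sketch of the dynamic-SQL pivot being described, assuming an EAV layout like the ref_FieldDefinition / data_PatientData tables from the earlier post (all names are illustrative, not the poster's actual schema):

declare @cols nvarchar(max), @sql nvarchar(max)

-- build the pivot column list from the field definitions
select @cols = ISNULL(@cols + ', ', '') + QUOTENAME(FieldLabel)
from ref_FieldDefinition

-- one row per patient, one column per defined field
set @sql = N'select PatientUID, ' + @cols + N'
from (select d.PatientUID, f.FieldLabel, d.StoredData
      from data_PatientData d
      join ref_FieldDefinition f on f.FieldUID = d.FieldUID) src
pivot (max(StoredData) for FieldLabel in (' + @cols + N')) p'

exec sp_executesql @sql

QUOTENAME guards against field labels that contain brackets or other special characters.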
Never underestimate the power of human stupidity
RAH
I am using an application created by a vendor where they use XML columns to store "custom" grid-type data that we can define.
For example: given an employee record, we want to store safety qualifications and their expiration dates - something like "CPR", "QualifiedOn", "ExpiresOn".
What the vendor has done is create an additional table called "EMP_MATRIXDATA" with columns:
KEYID
MATRIX_NAME (the name we give the custom grid, like "SafetyTests")
MATRIX_DATA (datatype is an XML document)
With this type of design, you can provide a very flexible method for storing all kinds of grid data. You can even build an XSD column into the table to hold the schema of the associated grid.
You will pay a price in searchability, but do some research on the topic of storing XML data in SQL Server and you will be surprised how good it really is.
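A small sketch of querying such an XML column with SQL Server's xml data type methods. The table and column names follow the description above; the XML element names (SafetyTests/Row, Test, ExpiresOn) are assumptions about the document shape:

-- shred each qualification row out of the stored XML grid
select m.KEYID,
       r.c.value('(Test)[1]',      'varchar(50)') as Test,
       r.c.value('(ExpiresOn)[1]', 'datetime')    as ExpiresOn
from EMP_MATRIXDATA m
cross apply m.MATRIX_DATA.nodes('/SafetyTests/Row') as r(c)
where m.MATRIX_NAME = 'SafetyTests'

Adding an XML index on MATRIX_DATA can recover some of the searchability cost mentioned above.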
Good luck.
I have two columns, say Col and Col[1], in a table, say Test. Now I want to do a select from it. How do I go about it? (Since [] has a special meaning for SQL Server.) I tried
select Col, Col[1] from Test
but for obvious reasons it does not work.
[Edit]
This works (the closing bracket in the name is escaped by doubling it):
Select Col, [Col[1]]] from Test
[/Edit]
50-50-90 rule: Anytime I have a 50-50 chance of getting something right, there's a 90% probability I'll get it wrong...!!
modified on Thursday, December 17, 2009 12:59 AM
Try
select Col, [Col[1]]] from Test
(the ] inside the delimited name has to be doubled).
Who would be dumb enough to use reserved words or characters in a column name. Stoopid really.
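A sketch of letting SQL Server do the escaping instead of hand-counting brackets: QUOTENAME doubles any closing bracket inside the name (the column name here is taken from the question):

declare @sql nvarchar(200)
-- QUOTENAME(N'Col[1]') produces [Col[1]]]
set @sql = N'select Col, ' + QUOTENAME(N'Col[1]') + N' from Test'
exec sp_executesql @sql

This is especially useful when the column name comes from a variable or metadata query.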
Never underestimate the power of human stupidity
RAH