|
You could also use something like:
WHERE str_Name LIKE '[0-9]%'
Never underestimate the power of human stupidity
RAH
|
Thank you for your suggestion.
|
Try this:
SELECT str_name FROM mytable WHERE ISNUMERIC(LEFT(str_name, 1)) = 1
Bob
Ashfield Consultants Ltd
|
Thanks for your suggestion
|
In LINQ:
' Create a string containing the digit characters
' (a string literal cannot be assigned directly to Char(),
' and String.Contains works here just as well)
Dim digits As String = "0123456789"
Dim query = From x In myTable _
            Where digits.Contains(x.str_Name.Substring(0, 1)) _
            Select x
|
I am trying to upgrade Sybase from version 6 and keep getting "database created by a different version" when using version 11.
Any help?
Learning to Code
|
Assuming Sybase has released versions between 6 and 11, you are trying to jump five versions, and you wonder why it does not work.
Have you confirmed with Sybase that the 6-to-11 upgrade is possible?
I suggest you try transferring the database instead: script out the structures and then transfer the data. It probably won't work the first couple of times, so make sure you have backups and script each step so it can be repeated and repeated and....
Never underestimate the power of human stupidity
RAH
|
I recently have had the task of converting SQL statements that select directly from the database to DataTable.Select statements that select from a cache of records in memory.
The SQL statements use a single table and are along the lines of:
SELECT node.Name, parentNode.Name, rootNode.Name
FROM (nodes node INNER JOIN nodes parentNode ON node.ParentID = parentNode.ID)
INNER JOIN nodes rootNode ON node.RootID = rootNode.ID
WHERE node.format = 5;
Assuming I have a cache of the "nodes" table in memory (as a large DataTable), I can simulate this SQL statement with nested for / foreach loops, intermediate DataTables, and simple DataTable.Select statements.
However, I was wondering if there's a slicker way of using the DataTable.Select that eliminates the need of doing so much looping and creating intermediate tables for the results.
|
The Select method in DataTable is quite restricted. You cannot, for example, join tables, create hierarchical queries, etc.
I didn't quite understand why you have to loop or create intermediate results, since you didn't include your actual DataTable usage. I'm guessing it's because of those restrictions I mentioned.
If that's the case, I'd suggest that when building the cached DataTable, you use the database server's ability to execute complex queries and store the information in the DataTable in a simpler form. Since the cache is mostly read, denormalization is one way to make the data easier to use.
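For example, the three-way self-join from the question could be run once on the database server when the cache is built, and the flat result stored in the cached DataTable (the column aliases here are just for illustration):

```sql
SELECT node.ID,
       node.Name       AS NodeName,
       parentNode.Name AS ParentName,
       rootNode.Name   AS RootName,
       node.format
FROM nodes node
INNER JOIN nodes parentNode ON node.ParentID = parentNode.ID
INNER JOIN nodes rootNode   ON node.RootID   = rootNode.ID
```

With that denormalized result in memory, the original query reduces to a single call like cachedTable.Select("format = 5"), with no loops or intermediate tables.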
|
Hello all,
I have a report that deals with a large set of inventory data. Inventory items are stored in bins at various locations. At each location, different team members may be responsible for keeping track of inventory in the various bins. Team members may have more than one bin. We don't store which team members are responsible for a bin - that's pretty much determined by who is around when inventory needs to be taken. We are building a report that is basically a worksheet for listing what is in each bin.
We have a user interface that lets us mark which bins should have page breaks after them (the bins are listed in order of location and then alphabetically). What we need is a way to group the result set coming back from our stored procedure so that each set of bins between page breaks is grouped. If that makes sense...?
I've been googling trying to find an answer to this question, but I think part of my problem is in how to express the question in such a way as to get an answer. Please advise.
Thanks,
Will
|
If I understand your description correctly, I advise the following route. Remember that grouping is just condensing the data on a common element.
I'd recommend putting the stored procedure results into a result set that can be manipulated (a temp table or table variable), but with an additional column for a 'group number'. You'll need to go over the table using the page break marks to assign the group numbers to each row in the result: every row between the beginning of the result set and the first page break gets group number 1, every row between the first and second page breaks gets group number 2, and so on.
Look at the ROW_NUMBER() function (if using SQL Server 2005); its OVER clause takes PARTITION BY and ORDER BY options (separate from the normal query GROUP BY clause) that can help generate the group numbers. If using SQL Server 7.0 or 2000, you may have to hand-craft the group number assignments. Either way, once the group numbers are assigned, you simply select from the result set and group by the group number column.
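As a rough sketch of the group-number idea (the table and column names here are invented for illustration), assuming the results sit in a temp table with a sort order and a flag marking the bins that have a page break after them:

```sql
-- GroupNumber = 1 + the number of page breaks occurring before this row,
-- so each run of bins between page breaks shares one group number.
SELECT b.BinName,
       1 + (SELECT COUNT(*)
            FROM #Bins prior
            WHERE prior.SortOrder < b.SortOrder
              AND prior.PageBreakAfter = 1) AS GroupNumber
FROM #Bins b
ORDER BY b.SortOrder;
```

Once the group numbers are in place, the report query can simply GROUP BY GroupNumber.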
Keep It Simple Stupid! (KISS)
|
Retrieve Data from 3 tables
Schema
symptom(ID,name,description)
visitentry(visitentryID,datevisit,weight,height,bmi,medicalNotes,patientID)
visitentrysymptom(symptonentryID, patientID, symptomID, date)
Records
symptom table
1 || fever || feeling hot in the body
2 || flu || sneezing non stop
visit entry table
1 || 03/12/2008 || 50 || 1.68 || 17 || high fever 40 degree || 12
visit entry symptom table
1 || 12 || 1 || 03/12/2008
2 || 12 || 2 || 03/12/2008
I want to extract the data from visitentry and visitentrysymptom.
I wrote this SQL code, but I don't know how to extract both fever and flu, because my visitentrysymptom table shows that the same patient has 2 symptoms:
SELECT * FROM visitentry
WHERE patientID = @patientID
|
This is a pretty straightforward query (join through to the symptom table to get the names of all symptoms recorded for the patient's visit):
SELECT s.name, ves.*, ve.*
FROM visitentrysymptom ves
INNER JOIN visitentry ve ON ve.patientID = ves.patientID AND ve.datevisit = ves.date
INNER JOIN symptom s ON s.ID = ves.symptomID
WHERE ve.patientID = @patientID
I think you should read up on the basics of T-SQL, because this is a pretty basic thing to do in T-SQL.
Keep It Simple Stupid! (KISS)
|
Hi,
I searched Google and found various answers to my question, but none of them worked. I am able to read from a table, but when I try to insert into a table, nothing is inserted and no error message is raised. I am using SQL CE on Windows Mobile 6.1. If you could give me a working sample of an INSERT statement in SQL CE, I would really appreciate it.
The code below neither inserts the record into the table nor produces any error.
Dim myconnection As New SqlCeConnection(ConnStr)
Dim mycommand As New SqlCeCommand
Dim InsertStr As String = Nothing
mycommand.CommandType = CommandType.Text
mycommand.Connection = myconnection
Dim xdate As Date = DateTimePicker1.Value
xdate = xdate.ToShortDateString
InsertStr = "INSERT INTO Expenses ([Date],BelongTo,Status,ExpenseAmount,ExpenseCurrency) "
InsertStr += "VALUES ('" & xdate & "'," & ExpenseIndex & ",0," & CType(ExpenseAmount.Text, Decimal) & "," & "'" & Currency.SelectedItem.ToString & "');"
mycommand.CommandText = InsertStr
Try
mycommand.Connection.Open()
mycommand.ExecuteNonQuery()
Catch ex As Exception
MsgBox(ex.ToString & vbCrLf & InsertStr)
mycommand.Connection.Close()
Exit Sub
End Try
mycommand.Connection.Close()
What a curious mind needs to discover knowledge is nothing else than a pin-hole.
|
With a quick look I didn't notice anything wrong in your code (except that you don't use parameters in the insert statement; as stated many times in this forum: no, no, no and no). So what I suggest is that after executing the insert, you fetch the inserted record in your code to see if it's in the database.
I'm suggesting this because you didn't mention how you discovered that the insertion wasn't done. One possibility is that from Visual Studio you fetch data at design time from a different database than the one you use at run time. In some cases the database file is copied to the bin/Debug folder at compile time and that copy is used at runtime, while at design time the original source of the copy is used, so you wouldn't actually see any changes.
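For what it's worth, a parameterized version of that insert might look roughly like this (a sketch only; it reuses the table, column, and control names from the post above and omits the Try/Catch for brevity):

```vbnet
Dim mycommand As New SqlCeCommand( _
    "INSERT INTO Expenses ([Date], BelongTo, Status, ExpenseAmount, ExpenseCurrency) " & _
    "VALUES (@Date, @BelongTo, @Status, @Amount, @Currency)", myconnection)
' Values are passed as typed parameters, so there are no quoting or formatting issues.
mycommand.Parameters.AddWithValue("@Date", DateTimePicker1.Value.Date)
mycommand.Parameters.AddWithValue("@BelongTo", ExpenseIndex)
mycommand.Parameters.AddWithValue("@Status", 0)
mycommand.Parameters.AddWithValue("@Amount", CType(ExpenseAmount.Text, Decimal))
mycommand.Parameters.AddWithValue("@Currency", Currency.SelectedItem.ToString())
mycommand.Connection.Open()
mycommand.ExecuteNonQuery()
mycommand.Connection.Close()
```

This also sidesteps the date-to-string conversion in the original code, since the date is passed as a Date value rather than embedded in the SQL text.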
|
I created a FILESTREAM in SQL Server 2008 and inserted a Word document into it, and now I want to display this word document in asp.net 2008. How can I do that?
|
lambo wrote: display this word document in asp.net 2008
That's going to be a neat trick - displaying a document from a Windows application in an ASP page. Where have you seen or heard of this being done before?
FILESTREAM does not imply that you can universally display the contents; it is a method of storing the binary data in the file system where it is managed by SQL Server. You can retrieve it from SQL, but you can only display it in Word.
Never underestimate the power of human stupidity
RAH
|
Well, you can set the MIME type etc. for the filestream you are sending to a web page, and the browser will open it (using Word) in the browser window. I never really liked this, but you can do it.
Not sure what ASP.net 2008 is, though...
|
Hi,
I am handling a VLDB (very large database); some of its tables contain billions of records, and a few applications access the database continuously. Right now I am using a database manager library (a C# DLL) to handle the database. I am looking for suggestions and useful links to help me increase the performance of these applications.
What are efficient ways to handle such a large database? Queries are now taking longer and longer to return.
|
In (as they say) no particular order, I would look at:
> partitioned tables and/or views, as these can give a massive performance gain.
> whether all the data is required continually, or whether some of it could be archived.
> the physical location of the data and indices - ensure separate controllers etc.
> the indices themselves - can they be improved?
Those are a few starting points; I'm sure other people will be able to come up with more.
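To illustrate the first point, table partitioning in SQL Server 2005 and later looks roughly like this (the table, function, and date boundaries are invented for the example):

```sql
-- Split rows into per-year partitions so queries touch only the relevant slice.
CREATE PARTITION FUNCTION pfByYear (datetime)
    AS RANGE RIGHT FOR VALUES ('2007-01-01', '2008-01-01', '2009-01-01');

CREATE PARTITION SCHEME psByYear
    AS PARTITION pfByYear ALL TO ([PRIMARY]);

CREATE TABLE BigEvents (
    EventID   bigint   NOT NULL,
    CreatedOn datetime NOT NULL
) ON psByYear (CreatedOn);
```

In production each partition would normally map to its own filegroup rather than all to [PRIMARY].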
Bob
Ashfield Consultants Ltd
|
Thanks, I know that no one will have a ready-made answer.
Anyway, I am not sure about your 3rd and 4th points; can you shed some light on them?
|
By putting your data and indices on totally separate disks (including separate disk controllers) you will improve performance. Checking the execution plans of the most common queries will reveal whether the indices on the tables are the most efficient. There are also loads of other possibilities.
I would do as Mycroft suggests: get a DBA with strong VLDB experience in to review your system.
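As a sketch of the separate-disks point (the database, filegroup, and path names are made up for the example), indices can be placed in a filegroup that lives on a different physical disk from the data:

```sql
ALTER DATABASE MyBigDb ADD FILEGROUP IndexFG;
ALTER DATABASE MyBigDb ADD FILE
    (NAME = MyBigDb_Idx1, FILENAME = 'E:\SQLData\MyBigDb_Idx1.ndf')
    TO FILEGROUP IndexFG;

-- New or rebuilt indexes can then be created on that filegroup.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
    ON dbo.Orders (CustomerID) ON IndexFG;
```

That keeps the index I/O off the disks (and ideally the controller) that serve the table data.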
Bob
Ashfield Consultants Ltd
|
Have you engaged a DBA to optimise your database? With a VLDB it is probably worth paying for professional help. Unless a professional DBA or highly skilled developer has been involved in the DB, there are a myriad of things that can be done to improve performance.
From your lack of understanding of points 3 and 4 in Bob's suggestions, I suspect there has not been any optimisation beyond throwing some indexes at the tables. Indexes and data files can reside on multiple disks, dramatically improving response times.
Never underestimate the power of human stupidity
RAH
|
Very good answers from both Ashfield and Mycroft.
To add a few things: you said that applications are accessing the database, but you didn't mention whether they mostly only read or whether they also modify the contents. Depending on the answer, you should take somewhat different paths in optimization. For example, if the database is mostly read, indexing can be very aggressive, while if it is also updated frequently, you will need a more balanced solution. This principle applies not just to the database as a whole but to different areas within it, so different tables can have different optimization goals.
As Mycroft suggested, you would benefit from getting a good DBA to look at the problems and advise you further. The solution won't be purely technical; it will include technical modifications as well as going through the logic of the applications and possibly modifying it where necessary and possible.
One technical thing to add to the excellent list from Ashfield: you could also consider using federations.
|