From the look of it, the original
Land_use_code LIKE '%' + @Landusecode1 + '%
is the correct way. I might need to look into using combo boxes with equals/like options in them to control the results a bit more, and build the query as necessary from those.
Just a bit more than I wanted to have to do! But no doubt it will be good experience!
Thanks for your help
---
Joe Stansfield wrote: From the look of it, the original
Land_use_code LIKE '%' + @Landusecode1 + '%
is the correct way.
No way, you really need the percent signs and the known content all inside a single pair of quotes.
Where is your snippet sitting: inside a programming language or inside an SQL procedure? Maybe show a bit more of it, and somebody will fix it for you.
Luc Pattyn [Forum Guidelines] [Why QA sucks] [My Articles] Nil Volentibus Arduum
Please use <PRE> tags for code snippets, they preserve indentation, improve readability, and make me actually look at the code.
modified on Wednesday, December 22, 2010 7:41 AM
---
It's sitting inside a datasource as a query (so similar to a stored procedure?) and then executed via a button and a table adapter in the programming language.
This is the full query as it sits at the moment:
SELECT File_Name, PID, Address_1, Address_2, Locality, Municipality, Postcode, Sale_date, Sale_price, Total, Capital_value, Land_area, Land_use_code, Room_count, Building_area, Construction_year, Wall_construction_code, Roof_construction_code, Land, Rate_1, Val_Ref, Source FROM vSaleSearch
WHERE (Municipality = @Municipality1 OR Municipality = @Municipality2)
AND (Locality LIKE @Locality + '%' OR Locality = @Locality1 OR Locality = @Locality2)
AND (Land_use_code LIKE '%' + @Landusecode1 + '%')
AND (Sale_price >= @Sale_Price1 )
AND ( Sale_price <= @Sale_Price2)
AND (Sale_date >= @Sale_Date1)
ORDER BY Sale_price
I'm trying to achieve the same thing on the locality. What I'd ideally like is for a person to be able to enter a string in the locality field and have it taken as an "equals" query, or to include a wildcard at the start or end and have it treated as a LIKE, with the wildcard characters applied.
At the moment it achieves half of that, but it's not as user friendly as one might hope... I see what I want happen all the time in other applications, but of course I'm not sure how exactly it is done...
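A detail that might help here (worth verifying in your own setup): a LIKE pattern that contains no wildcard characters behaves exactly like an equality test, so a single LIKE condition can cover both cases without any branching in the query:

```sql
-- 'Hobart' matches only 'Hobart'; 'Hob%' matches anything starting with 'Hob'.
-- The user controls the behaviour purely by whether they type a wildcard.
AND (Locality LIKE @Locality1)
```

That way the query itself never needs to know which mode the user intended.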
---
When the query is a string built in a programming language (say C#), then this[^] seems to show the way. Basically, apply the wildcard characters (if you want them; this might depend on a CheckBox being checked) to the parameter value before you set up the SqlParameter, then write your query without any quotes.
---
Add the percent signs to the parameter value.
parm.Value = "%" + foo + "%"
---
Joe Stansfield wrote: If I enter R%%, I will get all records.
And if you enter R%%% do you get no records?
Via SQL Server 2005 directly I am unable to replicate what you are seeing.
Joe Stansfield wrote: I am using C# Visual Studio 2008 and the dataset query designer.
It is possible that the latter is the problem.
In Server 2005 (not the designer) I get exactly or at least close to the behaviour you describe when I use the underscore rather than the percent sign. Especially when you try the different input values.
Even if the generated SQL excluded the percent signs from your WHERE clause when you entered them manually, it should still work. Instead, with a single one you get only a single-character match, and with two you get a two-character match. That suggests the underscore (the single-character wildcard).
Presuming there isn't some other assumption that is wrong - such as that you are actually running some other query because this one didn't get applied.
Simple test. Remove the percent entirely from the expression. Then test with input values: R, R_ and R%.
Those should give different results. If not then I would suspect that there is some other assumption that is causing the problem.
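Run directly against the view from the earlier post, those probes would look something like this (the comments describe the pattern semantics; the actual counts depend on the data):

```sql
SELECT COUNT(*) FROM vSaleSearch WHERE Land_use_code LIKE 'R';   -- pattern with no wildcard
SELECT COUNT(*) FROM vSaleSearch WHERE Land_use_code LIKE 'R_';  -- R plus exactly one character
SELECT COUNT(*) FROM vSaleSearch WHERE Land_use_code LIKE 'R%';  -- R plus zero or more characters
```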
---
Sorry about the delayed reply - Christmas/New Year break over here in sunny Australia!
I tried all of the things above, and get identical results when using R% as I do R_. I have found that when using R%% I do get all records, but I have since discovered this is only because the maximum length of a record is 3 characters. I get the same result if I use R%%%.
However, when I use the wildcard part of the query on the Locality choice, it works as I would expect it to, and returns all results that start with, say, 'Hob', if I enter Hob%.
The only difference I can think of between the two is that Locality is stored as 'NVARCHAR(50)' and Land_Use_Code as 'NCHAR(5)'. I would not have thought this would cause a problem, but I may be wrong!
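One thing worth checking (an assumption on my part, not something verified here): if the query designer types @Landusecode1 as NCHAR(5) to match the column, the value is padded to five characters, and those pad spaces become literal characters in the middle of the LIKE pattern. Trimming the parameter defends against that:

```sql
-- RTRIM strips any fixed-length padding before the wildcards are appended,
-- so 'R    ' becomes 'R' and the pattern is '%R%' rather than '%R    %'.
AND (Land_use_code LIKE '%' + RTRIM(@Landusecode1) + '%')
```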
In the meantime I will work around it and add the multiple %% signs to the parameter before the query is executed. Unfortunately I have tried to get my head around writing parameterized queries from scratch, but have not been able to find a good resource that explains it along with using it to then fill a DataGridView.
Thanks for all your help though guys - has been great and I feel like I have learnt a lot!
---
If (Decrypt_TextBox.Text = "") Then
    MessageBox.Show("Need to put in the file location or select file in grid for decryption")
Else
    'Declare a string for the file name
    Dim filename As String
    'Get the file name that is selected
    filename = getFilename()
    'Open the database connection
    database.ConnectionString = "Data Source = test.db3;"
    database.Open()
    Dim command As SQLite.SQLiteCommand
    Dim sqlstatement As String
    command = database.CreateCommand()
    'First query: get the size of the stored file in bytes
    Dim filesize As Integer
    sqlstatement = "SELECT FileSize FROM Vault WHERE FileName=@file;"
    command.CommandText = sqlstatement
    command.Parameters.AddWithValue("@file", filename)
    Dim reader As SQLite.SQLiteDataReader
    reader = command.ExecuteReader()
    reader.Read() 'Must advance to the first row before reading a value
    filesize = reader.GetInt32(0)
    reader.Close()
    'VB array bounds are inclusive, so the upper bound is filesize - 1
    Dim data(filesize - 1) As Byte
    'Second query: get the data itself (the @file parameter is already on the command)
    sqlstatement = "SELECT FileData FROM Vault WHERE FileName=@file;"
    command.CommandText = sqlstatement
    reader = command.ExecuteReader()
I have encrypted files in this database. I need to extract a file and decrypt it. I've been thinking about using a memory stream, but these files will get fairly large (2 GB). This is as far as I've gotten using VB on .NET Framework 2.0. Any help is appreciated!
---
where do the files originate, how did they get encrypted, how did they enter the database?
would retrieving them not be pretty much the reverse process?
and why is it you store them in the database to begin with? I wouldn't store gigablobs, I'd use files and store the file's path in the db.
---
At first it's encrypted using SHA1 and stored into the database as a file. The problem is that the SQLite libraries don't give me a clear picture of how to extract the file. After further research and bouncing ideas off people, they have suggested a binary stream, and it makes sense, but I don't know how the end will look just yet.
---
I expect one uses a byte array to write or read a blob field; and a stream to encrypt or decrypt; so basically reading from the database and decrypting should be pretty much the same as encrypting and writing to the database. If the latter works for huge files, so should the former.
Suggestion: try and solve the problem without any encryption/decryption first; then add them in.
---
I still don't know how to extract the file either way.
---
I have no experience with SQLite.
However, I did search CodeProject articles for SQL read BLOB and found a lot, including this: Reading and Writing BLOB Data to Microsoft SQL or Oracle Database[^]. I'm sorry the examples are in C#; that does not change the philosophy, it merely changes the syntax of things.
I suggest you read it, or some of the other hits, and try what you learn. I once more stress the fact that IMO writing to a BLOB field and reading from a BLOB field are bound to be very similar.
---
I agree with you but now I'm having problems trying to convert the file back to its original form.
---
I have a database table that consists of an int primary key, several unique identifiers (most of which reference data in other databases), an XML field and an infoset. I have non-unique, non-clustered indexes that I am trying to use to speed up queries on the table. All of the indexes work well except for a single index on a uniqueidentifier field that retains a 99% fragmentation rate regardless of rebuilding/reorganizing the index. Below are the details of the table; IX_external_guid_3 is the index with the high fragmentation. Can anyone tell me what might cause the high fragmentation, or what I can do to reduce the fragmentation on this GUID field and speed up queries based on it?
Thanks,
Table
id(PK, int, not null)
guid(uniqueidentifier, null, rowguid)
external_guid_1(uniqueidentifier, not null)
external_guid_2(uniqueidentifier, null)
state(varchar(32), null)
value(varchar(max), null)
infoset(XML(.), null)
created_time(datetime, null)
updated_time(datetime, null)
external_guid_3(uniqueidentifier, not null)
FK_id(FK, int, null)
locking_guid(uniqueidentifier, null)
locked_time(datetime, null)
external_guid_4(uniqueidentifier, null)
corrected_time(datetime, null)
is_add(bit, not null)
score(int, null)
row_version(timestamp, null)
Indexes
PK_table(Clustered)
IX_created_time(Non-Unique, Non-Clustered)
IX_external_guid_1(Non-Unique, Non-Clustered)
IX_guid(Non-Unique, Non-Clustered)
IX_external_guid_3(Non-Unique, Non-Clustered)
IX_state(Non-Unique, Non-Clustered)
John
modified on Tuesday, December 21, 2010 2:57 PM
---
I've had a similar problem to this before, rebuilding or reorganizing wouldn't make a difference.
I ended up dropping the index and recreating it.
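For the record, the drop-and-recreate looks something like this (the table name is assumed; the FILLFACTOR is a guess, intended to leave room for randomly distributed GUID inserts):

```sql
DROP INDEX IX_external_guid_3 ON dbo.MyTable;
CREATE NONCLUSTERED INDEX IX_external_guid_3
    ON dbo.MyTable (external_guid_3)
    WITH (FILLFACTOR = 80); -- random GUIDs insert all over the index, so leave free space per page
```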
---
Please let me know how to call a .sql file from another .sql file.
---
That is not possible as far as I know.
Wout Louwers
---
Exec (@Filename)
"When did ignorance become a point of view" - Dilbert
---
Not sure if this is possible, as I haven't tried it, but what I would try is a bulk insert from your .sql file into a single-column temporary table.
Then you could select this into a temporary varchar variable.
You should then be able to use the EXEC() function to run the query held in the varchar variable.
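A sketch of that idea (the file path, row terminator and table names are assumptions, and note that row order coming out of a heap is not guaranteed):

```sql
-- Load the script file, one row per line, into a temp table.
CREATE TABLE #script (line NVARCHAR(MAX));
BULK INSERT #script FROM 'C:\scripts\other.sql' WITH (ROWTERMINATOR = '\n');

-- Reassemble the lines and execute them as dynamic SQL.
DECLARE @sql NVARCHAR(MAX);
SELECT @sql = COALESCE(@sql + CHAR(10), '') + line FROM #script;
EXEC (@sql);
DROP TABLE #script;
```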
---
I have a table with one record per security role a user has. So, user SMITH can have multiple records, one for DMV another for CPT and another for ICD. In this example, SMITH has three security roles but can have up to 100.
The desire is to have a second table with one record per user with columns for each security role indicating with Y or N whether the user has that role. The column names are the same as the security role name (eg. DMV, CPT, ICD,...) Smith would have 3 columns with a Y and all the rest with an N.
So, is there a way to read in the first table, use the data in the security role field (DMV, CPT, ICD, etc) to then reference the column in the second table and update the respective column to a "Y" to indicate the user has that role? For example: if table2:ColumnName(DMV)=table1:Security Role("DMV") then update table2:Column(DMV)="Y". The key to both tables is the Employee ID.
The input file we use to create table 1 is in the one record per Security Role per user design. We do not control that.
The intent is to avoid long Case statements. I've Googled this multiple ways and read lots of possibilities, but none seem to have a workable solution; most likely I just don't understand it. Below is one way I hoped would work, and a couple of forums indicated it would, but I get 'Yes' in the @colname variable.
Declare @colname varchar(200), @Eid varchar(50),@message varchar(80),@command varchar(200)
Declare my_cursor CURSOR
For Select replace(replace(replace(SecurityRole,' ',''),'/',''),'-','') as SecRole,EmployeeID
from EmergencyContact.dbo.CSEEmployeeRoles
where EmployeeID='38'
order by EmployeeID, SecRole
open my_cursor
fetch next from my_cursor into @colname,@Eid
while @@fetch_status = 0
begin
select @message = @colname+' '+@Eid
print @message
select @command= 'update EmergencyContact.dbo.CSERolesRolledUp set '+@colname+' = ''Yes''
where EmployeeID = '''+@Eid+''''
exec (@command)
fetch next from my_cursor into @colname,@Eid
end
close my_cursor
deallocate my_cursor
Any help will be much appreciated. Seems so simple.
---
That's the wrong way to go about it, so if it has been mandated, I recommend that you find a better company to work for.
What you want is a many-to-many relationship between users and roles, this is done with a table that contains the IDs of the user and role -- two IDs per record (maybe with a timestamp of when the relationship was created).
User
1 Smith
2 Jones
Role
1 DMV
2 CPT
UserRole
1 1 <-- Indicates Smith is in role DMV
1 2 <-- Indicates Smith is in role CPT
This is a much more flexible design that will expand more easily as additional roles are added.
P.S. It also allows for quicker querying to see which users are members of a given role.
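For example, spelling out the sketch above (table and column names are illustrative):

```sql
-- Which users hold the DMV role?
SELECT u.Name
FROM UserRole AS ur
JOIN [User] AS u ON u.Id = ur.UserId
JOIN Role   AS r ON r.Id = ur.RoleId
WHERE r.Name = 'DMV';
```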
JTRizos wrote: the respective column to a "Y"
If you must go that way, I strongly recommend that you use numeric codes -- and a translation table with referential integrity. You don't want invalid values showing up in there.
---
I agree. However, the file comes from the state and we don't control its format. Otherwise, it would be as you describe.
We currently take the file into a SQL table and allow access to supervisors and managers through an ASP.NET application using a GridView, which provides some functions for the user. So, it's
User1 SecRole1
User1 SecRole2
User1 SecRole10
User1 SecRole23
User2 SecROle1
User2 SecRole4
User2 SecRole20
User2 SecRole100
What they want is
SecRole1 SecRole2 SecRole3 SecRole4 ... SecRole10 ... SecRole20 SecRole21 SecRole22 SecRole23 ... SecRol100
User1 Y Y N N Y N N N Y N
User2 Y N N Y N Y N N N Y
in GridView making it easier to compare who is in a SecRole.
Thanx for your response. I appreciate any feedback provided.
---
There should be no reason to store the data in the same form you receive it.
---
Keep the table you have. Storing the same data in more than one place goes against one of the fundamental principles of relational databases.
And if you really need to have that second table, you should read up on PIVOT[^].
Then consider creating that pivot table as a view. That's one of the purposes of views.
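A hedged sketch of such a view, using the table from the earlier posts and only three of the roles (the full IN list would name every role column you need):

```sql
CREATE VIEW dbo.vRolesRolledUp AS
SELECT EmployeeID,
       ISNULL([DMV], 'N') AS DMV,  -- 'Y' when the role row exists, otherwise 'N'
       ISNULL([CPT], 'N') AS CPT,
       ISNULL([ICD], 'N') AS ICD
FROM (SELECT EmployeeID, SecurityRole, 'Y' AS flag
      FROM EmergencyContact.dbo.CSEEmployeeRoles) AS src
PIVOT (MAX(flag) FOR SecurityRole IN ([DMV], [CPT], [ICD])) AS p;
```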
"When did ignorance become a point of view" - Dilbert