|
Sorry about the delayed reply - Christmas / New Year break over here in sunny Australia!
I tried all of the things above, and get identical results using R% as I do with R_. I have found that when using R%% I do get all records, but I have since discovered this is only because the maximum length of a record is 3 characters. I get the same result if I use R%%%.
However, when I use the wildcard part of the query on the Locality choice, it works as I would expect it to, and returns all results that start with, say, 'Hob', if I enter Hob%.
The only difference I can think of between the two is that Locality is stored as 'NVARCHAR(50)' and Land_Use_Code as 'NCHAR(5)'. I would not have thought this would cause a problem, but I may be wrong!
In the meantime I will work around it and add multiple % signs to the parameter before the query is executed. I have tried to get my head around writing parameterized queries from scratch, but unfortunately I have not been able to find a good resource that explains them along with using them to fill a DataGridView.
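If the NCHAR(5) padding really is the culprit (that is just a guess on my part), another workaround might be trimming the column in the query itself, so neither the stored value's trailing spaces nor any padding of the parameter matters. A rough sketch, with the table name made up for the example:

```sql
-- NCHAR(5) stores 'R12' as 'R12  ' (padded with trailing spaces).
-- Comparing against the trimmed value sidesteps the padding entirely.
SELECT *
FROM Parcels                        -- hypothetical table name
WHERE RTRIM(Land_Use_Code) LIKE @code;
```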
Thanks for all your help though guys - has been great and I feel like I have learnt a lot!
|
|
|
|
|
If Decrypt_TextBox.Text = "" Then
    MessageBox.Show("Need to put in the file location or select a file in the grid for decryption")
Else
    'Get the file name that is selected
    Dim filename As String = getFilename()

    'Open the database connection
    database.ConnectionString = "Data Source = test.db3;"
    database.Open()

    Dim command As SQLite.SQLiteCommand = database.CreateCommand()

    'First query: get the size of the stored file
    command.CommandText = "SELECT FileSize FROM Vault WHERE FileName=@file;"
    command.Parameters.AddWithValue("@file", filename)

    Dim filesize As Integer
    Dim reader As SQLite.SQLiteDataReader = command.ExecuteReader()
    If reader.Read() Then           'Read() must be called before accessing values
        filesize = reader.GetInt32(0)
    End If
    reader.Close()

    'VB array declarations take the upper bound, so use filesize - 1
    Dim data(filesize - 1) As Byte

    'Second query: get the data; the @file parameter is already on the command
    command.CommandText = "SELECT FileData FROM Vault WHERE FileName=@file;"
    reader = command.ExecuteReader()
I have encrypted files in this database. I need to extract a file and decrypt it. I've been thinking about using a MemoryStream, but these files will get fairly large (2 GB). This is as far as I've gotten using VB .NET Framework 2.0. Any help would be appreciated.
|
|
|
|
|
Where do the files originate, how did they get encrypted, and how did they enter the database?
Would retrieving them not be pretty much the reverse process?
And why is it you store them in the database to begin with? I wouldn't store gigablobs; I'd use files and store the file's path in the db.
|
|
|
|
|
At first it's encrypted using SHA1 and stored into the database as a file. The problem is that the SQLite libraries don't give me a clear picture of how to extract the file. After further research and bouncing ideas off people, they have suggested a binary stream, and that makes sense, but I don't know how the end result will look just yet.
|
|
|
|
|
I expect one uses a byte array to write or read a blob field; and a stream to encrypt or decrypt; so basically reading from the database and decrypting should be pretty much the same as encrypting and writing to the database. If the latter works for huge files, so should the former.
Suggestion: try and solve the problem without any encryption/decryption first; then add them in.
|
|
|
|
|
I still don't know how to extract the file either way.
|
|
|
|
|
I have no experience with SQLite.
However, I did search the CodeProject articles for SQL read BLOB and found a lot, including this: Reading and Writing BLOB Data to Microsoft SQL or Oracle Database[^]. I'm sorry the examples are in C#; that does not change the philosophy, it merely changes the syntax of things.
I suggest you read it, or some of the other hits, and try what you learn. I once more stress the fact that IMO writing to a BLOB field and reading from a BLOB field are bound to be very similar.
|
|
|
|
|
I agree with you, but now I'm having problems trying to convert the file back to its original form.
|
|
|
|
|
I have a database table that consists of an int primary key, several uniqueidentifiers (most of which are used to reference data in other databases), an XML field and an infoset. I have non-unique, non-clustered indexes that I am trying to use to speed up queries on the table. All of the indexes work well except for a single index on a uniqueidentifier field, which retains a 99% fragmentation rate regardless of rebuilding/reorganizing the index. Below are the details of the table; IX_external_guid_3 is the index with the high fragmentation. Can anyone tell me what might cause the high fragmentation, or what I can do to reduce the fragmentation on this guid field and speed up queries based on it?
Thanks,
Table
id(PK, int, not null)
guid(uniqueidentifier, null, rowguid)
external_guid_1(uniqueidentifier, not null)
external_guid_2(uniqueidentifier, null)
state(varchar(32), null)
value(varchar(max), null)
infoset(XML(.), null)
created_time(datetime, null)
updated_time(datetime, null)
external_guid_3(uniqueidentifier, not null)
FK_id(FK, int, null)
locking_guid(uniqueidentifier, null)
locked_time(datetime, null)
external_guid_4(uniqueidentifier, null)
corrected_time(datetime, null)
is_add(bit, not null)
score(int, null)
row_version(timestamp, null)
Indexes
PK_table(Clustered)
IX_created_time(Non-Unique, Non-Clustered)
IX_external_guid_1(Non-Unique, Non-Clustered)
IX_guid(Non-Unique, Non-Clustered)
IX_external_guid_3(Non-Unique, Non-Clustered)
IX_state(Non-Unique, Non-Clustered)
John
modified on Tuesday, December 21, 2010 2:57 PM
|
|
|
|
|
I've had a similar problem to this before, rebuilding or reorganizing wouldn't make a difference.
I ended up dropping the index and recreating it.
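If you end up recreating it by hand, something along these lines may be worth a try (the index and table names are from your post; the fill factor is just a guess to leave free space per page for random GUID inserts):

```sql
-- Check the fragmentation first
SELECT index_id, avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('Table'), NULL, NULL, 'LIMITED');

-- Drop and recreate with a lower fill factor
DROP INDEX IX_external_guid_3 ON [Table];
CREATE NONCLUSTERED INDEX IX_external_guid_3
    ON [Table](external_guid_3)
    WITH (FILLFACTOR = 80);
```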
|
|
|
|
|
Please let me know how to call a .sql file from another .sql file.
|
|
|
|
|
That is not possible as far as I know.
Wout Louwers
|
|
|
|
|
Exec (@Filename)
"When did ignorance become a point of view" - Dilbert
|
|
|
|
|
Not sure if this is possible as I haven't tried it, but what I would try is a bulk insert from your .sql file into a single-column temporary table.
Then you could select this into a temporary varchar variable.
You should then be able to use the EXEC() function to run the query in the varchar variable.
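An untested sketch of what I mean (the file path is made up, and this assumes the script fits the VARCHAR(MAX) variable):

```sql
-- Load each line of the script file into a temp table
CREATE TABLE #script (line VARCHAR(MAX));

BULK INSERT #script
FROM 'C:\scripts\other.sql'          -- hypothetical path
WITH (ROWTERMINATOR = '\n');

-- Concatenate the lines back into one batch and execute it
DECLARE @sql VARCHAR(MAX);
SELECT @sql = COALESCE(@sql + CHAR(10), '') + line FROM #script;

EXEC (@sql);
DROP TABLE #script;
```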
|
|
|
|
|
I have a table with one record per security role a user has. So, user SMITH can have multiple records, one for DMV another for CPT and another for ICD. In this example, SMITH has three security roles but can have up to 100.
The desire is to have a second table with one record per user, with a column for each security role indicating with Y or N whether the user has that role. The column names are the same as the security role names (e.g. DMV, CPT, ICD, ...). Smith would have 3 columns with a Y and all the rest with an N.
So, is there a way to read in the first table, use the data in the security role field (DMV, CPT, ICD, etc) to then reference the column in the second table and update the respective column to a "Y" to indicate the user has that role? For example: if table2:ColumnName(DMV)=table1:Security Role("DMV") then update table2:Column(DMV)="Y". The key to both tables is the Employee ID.
The input file we use to create table 1 is in the one record per Security Role per user design. We do not control that.
The intent is to avoid long Case statements. I've Googled this multiple ways and read lots of possibilities, but none seem to have a workable solution; most likely I just don't understand it. Below is one way I hoped would work, and a couple of forums indicated it would, but I get 'Yes' in the @colname variable.
Declare @colname varchar(200), @Eid varchar(50), @message varchar(80), @command varchar(200)

Declare my_cursor CURSOR For
    Select replace(replace(replace(SecurityRole,' ',''),'/',''),'-','') as SecRole, EmployeeID
    from EmergencyContact.dbo.CSEEmployeeRoles
    where EmployeeID = '38'
    order by EmployeeID, SecRole

open my_cursor
fetch next from my_cursor into @colname, @Eid
while @@fetch_status = 0
begin
    select @message = @colname + ' ' + @Eid
    print @message
    select @command = 'update EmergencyContact.dbo.CSERolesRolledUp set ' + @colname + ' = "Yes"
        where EmployeeID = ' + @Eid
    exec (@command)
    fetch next from my_cursor into @colname, @Eid
end
close my_cursor
deallocate my_cursor
Any help will much appreciated. Seems so simple.
|
|
|
|
|
That's the wrong way to go about it, so if it has been mandated, I recommend that you find a better company to work for.
What you want is a many-to-many relationship between users and roles; this is done with a table that contains the IDs of the user and the role -- two IDs per record (maybe with a timestamp of when the relationship was created).
User
1 Smith
2 Jones
Role
1 DMV
2 CPT
UserRole
1 1 <-- Indicates Smith is in role DMV
1 2 <-- Indicates Smith is in role CPT
This is a much more flexible design that will expand more easily as additional roles are added.
P.S. It also allows for quicker querying to see which users are members of a given role.
JTRizos wrote: the respective column to a "Y"
If you must go that way, I strongly recommend that you use numeric codes -- and a translation table with referential integrity. You don't want invalid values showing up in there.
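In DDL terms, the design I describe is roughly this (table and column names are illustrative, not from your schema):

```sql
CREATE TABLE [User]  (UserID INT PRIMARY KEY, UserName VARCHAR(50) NOT NULL);
CREATE TABLE Role    (RoleID INT PRIMARY KEY, RoleName VARCHAR(50) NOT NULL);

-- Junction table: one row per (user, role) pair
CREATE TABLE UserRole (
    UserID      INT NOT NULL REFERENCES [User](UserID),
    RoleID      INT NOT NULL REFERENCES Role(RoleID),
    CreatedTime DATETIME DEFAULT GETDATE(),
    PRIMARY KEY (UserID, RoleID)
);

-- "Which users are in role DMV?"
SELECT u.UserName
FROM [User] u
JOIN UserRole ur ON ur.UserID = u.UserID
JOIN Role r     ON r.RoleID  = ur.RoleID
WHERE r.RoleName = 'DMV';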
|
|
|
|
|
I agree. However, the file comes from the state and we don't control its format. Otherwise, it would be as you describe.
We currently take the file into a SQL table and allow access to supervisors and managers through an ASP.NET application using a GridView, which provides some functions for the user. So, it's
User1 SecRole1
User1 SecRole2
User1 SecRole10
User1 SecRole23
User2 SecROle1
User2 SecRole4
User2 SecRole20
User2 SecRole100
What they want is
SecRole1 SecRole2 SecRole3 SecRole4 ... SecRole10 ... SecRole20 SecRole21 SecRole22 SecRole23 ... SecRol100
User1 Y Y N N Y N N N Y N
User2 Y N N Y N Y N N N Y
in GridView making it easier to compare who is in a SecRole.
Thanx for your response. I appreciate any feedback provided.
|
|
|
|
|
There should be no reason to store the data in the same form you receive it.
|
|
|
|
|
Keep the table you have. Storing the same data in more than one place goes against one of the fundamental principles of relational databases.
And if you really need to have that second table you should read up on pivot.[^]
Then consider creating that pivot table as a view. That's one of the purposes of views.
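An untested sketch of such a view, showing only a few of the role columns (the view name is made up; the role names are taken from your posts and would need to match your actual SecurityRole values):

```sql
CREATE VIEW CSERolesPivotView AS     -- hypothetical name
SELECT EmployeeID,
       CASE WHEN DMV IS NULL THEN 'N' ELSE 'Y' END AS DMV,
       CASE WHEN CPT IS NULL THEN 'N' ELSE 'Y' END AS CPT,
       CASE WHEN ICD IS NULL THEN 'N' ELSE 'Y' END AS ICD
FROM (SELECT EmployeeID, SecurityRole, SecurityRole AS RoleValue
      FROM EmergencyContact.dbo.CSEEmployeeRoles) AS src
PIVOT (MAX(RoleValue) FOR SecurityRole IN ([DMV], [CPT], [ICD])) AS p;
```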
"When did ignorance become a point of view" - Dilbert
|
|
|
|
|
I thought about a PIVOT, but the users have different security roles, so I don't think it is possible to show which roles a user has or does not have so that users can be compared. I've created PIVOTs in Excel and can't see how I can indicate whether a user has or does not have a role and then compare them to other users. I will, however, look into this and see if it will work for this application.
Thank you for your response. I appreciate any responses I can get.
|
|
|
|
|
Case When NULL Then 'N' Else 'Y' End
"When did ignorance become a point of view" - Dilbert
|
|
|
|
|
Not sure I understand. Also, can I use a PIVOT to create a table available through a GridView or would it be just for reporting?
Thanx again!
|
|
|
|
|
This is a simpler way to make a crosstab:
Select UserID
,Max(Case When RoleID = 'SecRole1' Then 'Y' Else 'N' End) as SecRole1
,Max(Case When RoleID = 'SecRole2' Then 'Y' Else 'N' End) as SecRole2
,Max(Case When RoleID = 'SecRole3' Then 'Y' Else 'N' End) as SecRole3
,Max(Case When RoleID = 'SecRole4' Then 'Y' Else 'N' End) as SecRole4
,...
From UserRoles
Group By UserID
SQL is for data collection; it doesn't care whether you use the data for reporting or grids...
"When did ignorance become a point of view" - Dilbert
|
|
|
|
|
Not being able to get the @colname variable to work, here's what I've now been working on; if it works, there will be 100+ case statements. Will what you recommend above work with an UPDATE? My code seems to go through the table being updated once, rather than through the CSEEmployeeRoles table which has multiple records per user. I've done things like this in SQL reports before but never had this problem.
update table1 set
    AuditView = case when table2.SecurityRole = 'Audit View' and AuditView = 'No' then 'Yes' else AuditView end,
    BasicArchive = case when table2.SecurityRole = 'Basic Archive' then 'Yes' else BasicArchive end,
    BasicModify = case when table2.SecurityRole = 'Basic Modify' then 'Yes' else BasicModify end,
    BasicView = case when table2.SecurityRole = 'Basic View' then 'Yes' else BasicView end,
    CentralScanOperations = case when table2.SecurityRole = 'Central Scan Operations' then 'Yes' else CentralScanOperations end,
    CreateorMaintainParticipant = case when table2.SecurityRole = 'Create or Maintain Participant' then 'Yes' else CreateorMaintainParticipant end,
    CreateRefundParticipant = case when table2.SecurityRole = 'Create Refund Participant' then 'Yes' else CreateRefundParticipant end
from EmergencyContact.dbo.CSEEmployeeRoles as table2
right join EmergencyContact.dbo.CSERolesRolledUp as table1
    on (table2.EmployeeID = table1.EmployeeID)
Thanx again, Jörgen, for your input.
|
|
|
|
|
Both Piebald and Jorgen have essentially answered your question.
You MUST NOT store the data again (unless you are going to normalise the data as suggested by Piebald).
You should use a pivot query to present the data in the format the user requires. This article[^] may help (shameless plug).
I would suggest that you do both: restructure the data when you receive it from the State; just because someone else has a crappy format (or you are actually receiving denormalised data designed for output) does not mean you can't change it in your loading process.
Create a view (or at least a stored proc) based on the article that supplies the data in the format the users require.
Oh, and give Piebald and Jorgen an upvote for their valid answers.
Never underestimate the power of human stupidity
RAH
|
|
|
|
|