|
I'm confused. What's a sequential GUID?
What database are you using and what datatype were you storing this value in?
|
|
|
|
|
I'm using SQL Server 2008. Sequential GUIDs in general are GUIDs that are not completely random but created in a way that makes their values ascending (remedying many of the drawbacks of conventional fully-random GUIDs when used as a key in a DB) while still providing the advantage that collisions are virtually non-existent. SQL Server has provided a function to generate these since version 2005: NEWSEQUENTIALID[^]
modified 26-Feb-15 7:32am.
|
|
|
|
|
I'm pretty sure that's been available since 2005 but you're using the database to generate the GUID, rather than in code (which was why I was querying what the sequential guid was as this isn't native behaviour). If you don't want to preallocate the key then you have no choice but to have your code react and post-allocate the IDs.
|
|
|
|
|
Pete O'Hanlon wrote: I'm pretty sure that's been available since 2005 Correct, was a typo.
Pete O'Hanlon wrote: but you're using the database to generate the GUID, rather than in code (which was why I was querying what the sequential guid was as this isn't native behaviour) I actually created the sequential GUID with a custom "algorithm" in C#. I just linked to the T-SQL-Doc because I understood your comment as if you hadn't heard of sequential GUIDs at all yet
Pete O'Hanlon wrote: If you don't want to preallocate the key then you have no choice but to have your code react and post-allocate the IDs. Well, that's the original idea of this thread: why not provide sequence values (ints) from the DB/DAL to the client? To me this seems an easy way to avoid GUIDs without having to deal with temporary keys in the client code. But I wasn't able to find any evidence that somebody has already done this, so I wanted to ask here whether you can spot some flaw in that idea.
|
|
|
|
|
manchanx wrote: I actually created the sequential GUID with a custom "algorithm" in C#. I just linked to the T-SQL-Doc because I understood your comment as if you hadn't heard of sequential GUIDs at all yet Gotcha. My confusion was that we were talking client side.
In order to preallocate values, you would need to be able to guarantee that the number you got from the DB/DAL was unique. The question is, how would you do that? You can't use a count of underlying records for a table because a single delete will break this. You can't use the current tick because this isn't fine grained enough. You will also have to guarantee that you have sufficient locking in place to ensure that you only ever get the same value for one and only one record.
|
|
|
|
|
My current solution is a custom table to manage sequences and a queued key-provider-service in the DAL. The table could probably be replaced with SQL-Server-Sequences (which the key-provider-service would then abuse without actually committing inserts) but I'm not sure what I would gain by doing that.
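That queued key-provider idea can be sketched roughly like this (Python for illustration; the class name and the in-memory backing counter are hypothetical stand-ins for the real sequence table or SQL Server SEQUENCE, which would be fetched under a transaction):

```python
import itertools
import threading

class KeyProvider:
    """Hands out unique integer keys, reserving them in blocks from a backing sequence."""

    def __init__(self, fetch_block, block_size=100):
        self._fetch_block = fetch_block   # callable returning the first id of a fresh block
        self._block_size = block_size
        self._lock = threading.Lock()
        self._ids = iter(())              # empty until the first block is fetched

    def next_id(self):
        with self._lock:                  # one and only one caller gets each value
            try:
                return next(self._ids)
            except StopIteration:
                start = self._fetch_block(self._block_size)
                self._ids = iter(range(start, start + self._block_size))
                return next(self._ids)

# Simulated backing sequence; in the DAL this would hit the sequence table.
_counter = itertools.count(1, 100)
provider = KeyProvider(lambda n: next(_counter), block_size=100)
```

Reserving blocks means one DB round trip per hundred keys instead of one per key, at the cost of gaps in the sequence when the process restarts.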
|
|
|
|
|
Pete O'Hanlon wrote: In order to preallocate values, you would need to be able to guarantee that the number you got from the DB/DAL was unique.
With GUIDs the statistics do the work - the chance of a collision is vanishingly small. Sequential GUIDs might make it lower still, since they are time-based: part of the GUID is derived from the current time, and that part will never repeat in the future.
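One common way to build such a value (a sketch of the general idea, not SQL Server's actual NEWSEQUENTIALID algorithm) is to put a timestamp in the most significant bytes and fill the rest with random data:

```python
import os
import struct
import time
import uuid

def sequential_guid():
    # 6 high bytes: milliseconds since the epoch (keeps values ascending),
    # 10 low bytes: random (keeps collisions virtually impossible).
    millis = int(time.time() * 1000)
    time_part = struct.pack('>Q', millis)[2:]  # low 6 bytes of the 8-byte integer
    return uuid.UUID(bytes=time_part + os.urandom(10))
```

One caveat: SQL Server sorts uniqueidentifier values in its own byte order, not left to right, so a real implementation targeting a SQL Server clustered index has to arrange the timestamp bytes accordingly.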
|
|
|
|
|
It doesn't look like he wants to do this.
|
|
|
|
|
Hello all first time here
I am trying to increase my cyber security knowledge by creating a small IDS. I was hoping someone could review the code, give me some feedback and maybe point me in the right direction. Currently I need intrusion signatures for filters.txt, if anyone knows of a database of some sort. I am also not too sure where to go next. My current thought is to just check for in/out bin/sh; if bin/sh were to come across the network tap then disconnect and block all future connection attempts.
Please note that this is basically pseudo code.
I am well aware it's not very Pythonic; for now I am just trying out ideas.
Any and all advice would be awesome
Thanks
import subprocess
import socket
import pcap
import dpkt

LOG_PATH = '/usr/home/mrfree/Desktop/Scripts/ipLog.txt'
FILTERS_PATH = '/usr/home/mrfree/Desktop/Scripts/filters.txt'

def capture():
    dev = pcap.lookupdev()
    for ts, pkt in pcap.pcap(name=dev, snaplen=65535, promisc=True, immediate=False):
        eth = dpkt.ethernet.Ethernet(pkt)
        if eth.type != 2048:  # not IPv4 (0x0800); try to decode as IPv6
            ip = eth.data
            try:
                dst_ip_6 = socket.inet_ntop(socket.AF_INET6, ip.dst)
            except AttributeError:
                continue
        else:
            ip = eth.data
            tcp = ip.data
            try:
                src_ip = socket.inet_ntoa(ip.src)
                dst_ip = socket.inet_ntoa(ip.dst)
                if dst_ip == '192.168.1.2':
                    with open(LOG_PATH, 'a') as log:
                        log.write('Session:%s:%s,%s\n' % (src_ip, tcp.dport, ts))
                        print('Session:%s:%s,%s\n' % (src_ip, tcp.dport, ts))
                        if tcp.dport < 1028:
                            log.write('Out of bounds connection attempt, Blocking %s\n' % src_ip)
                            print('Out of bounds connection attempt, Blocking %s\n' % src_ip)
                        # read signatures as bytes so they compare against tcp.data
                        with open(FILTERS_PATH, 'rb') as filters:
                            signatures = filters.read().splitlines()
                        # match any single signature line, not the whole file at once
                        if any(sig and sig in tcp.data for sig in signatures):
                            log.write('Attempted Shell connection, Blocking %s\n' % src_ip)
                            subprocess.call(['pfctl', '-k', src_ip])
                            print('Attempted Shell connection, Blocking %s\n' % src_ip)
            except (AttributeError, TypeError):  # the old "except A,B:" form is a syntax error in Python 3
                continue

if __name__ == "__main__":
    capture()
|
|
|
|
|
A general comment that since this is a design/architecture forum a design/architecture might be a better starting point than a hunk of code to elicit comments.
|
|
|
|
|
Sorry, this is my first time on this forum. I didn't see any other subsections that looked more appropriate for designing/building an IDS. I assure you, I was merely trying to find general help in the design and architecture of a program that would watch over the network and interior components of a FreeBSD operating system. Please understand this was just a misunderstanding; your website here is not so user friendly. Thanks
Mod:please delete thread
|
|
|
|
|
orphansec wrote: Sorry
It was a suggestion not a warning.
Myself, I might comment on what your code should do if you explained what you want it to do. But I won't comment on that block of code, mainly because I don't want to try to figure out what it is that you think you are doing with it.
|
|
|
|
|
orphansec wrote: Any and all advice would be awesome The general rule here is that people will help you to identify and fix bugs in your code when you post a detailed question. Code review is a much more time-consuming activity, so very few people have the time or inclination to do it. Having looked at your code I cannot see anything that stands out as wrong, but then I don't really understand what its purpose is. It also helps if you avoid TLAs (such as IDS) and abbreviations (such as sigs). Remember: the more information you provide, the more chance someone will be able to help you.
|
|
|
|
|
Note: I am not asking specifically about the actual implementations of Model-View paradigms in ASP.NET, or WPF.
Here's one diagram of Model-View paradigms: [^] (the source is from a JavaScript centric article).
I am particularly interested in how you conceptualize which "components" (model, controller, view, viewmodel, etc.) do the "business" of managing sources of Data (server, cloud, web, local data-stores), possibly using an ORM, and how Views get their data and have data bound to Controls in the View. And, if data must be "transformed" for use, which component is "responsible" for the transformation ... i.e., where is the transformation performed.
thanks, Bill
«I'm asked why doesn't C# implement feature X all the time. The answer's always the same: because no one ever designed, specified, implemented, tested, documented, shipped that feature. All six of those things are necessary to make a feature happen. They all cost huge amounts of time, effort and money.» Eric Lippert, Microsoft, 2009
|
|
|
|
|
I'm not quite sure if this is what you're asking for, but here I go.
I'm using MVP with WinForms (my experience with other flavors of MVxx isn't worth mentioning). My way of implementing it:
The Presenter is the only thing that knows about the other two. It subs to events from View and Model, calls methods on them and can read their properties. It contains the state of the View, the logic to manipulate it and synchronizes between Model and View - but it does no processing of the data whatsoever.
The Model can take different shapes: It can be a rather dumb data container, potentially already filled via constructor arguments. Or it can contain the code to pull data from the various sources and potentially transform it. Or it can be an adapter to a non-primitive business object (e.g. some kind of workflow). In the latter two cases it may hold the session of the ORM.
So either the Presenter "knows" that the data is already present on instantiation or it waits for some "data available"-event from the Model, which may be the result of a previous user request -> View fires event -> Presenter calls Model. The Presenter then hands the data to the View by calling methods on the View that take the data as arguments and either bind it or "just display" it (e.g. setting Label.Text = x).
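A minimal sketch of that wiring (Python, with hypothetical class names) to make the flow concrete - View fires event, Presenter calls Model, Model announces data, Presenter hands it to the View:

```python
class Model:
    """Pulls data from some source and announces when it is available."""
    def __init__(self):
        self.data_available = []           # listeners; the Presenter subscribes here
    def load(self):                        # in real life: DB / ORM / service call
        rows = ['alice', 'bob']
        for listener in self.data_available:
            listener(rows)

class View:
    """Knows nothing about the Model; raises events and displays what it is given."""
    def __init__(self):
        self.refresh_requested = []        # raised when the user asks for data
        self.displayed = None
    def click_refresh(self):               # simulates a user action
        for listener in self.refresh_requested:
            listener()
    def show(self, rows):                  # "just display it", e.g. Label.Text = x
        self.displayed = rows

class Presenter:
    """The only piece that knows both Model and View."""
    def __init__(self, model, view):
        self._model, self._view = model, view
        view.refresh_requested.append(self._on_refresh)  # View event -> Presenter
        model.data_available.append(self._on_data)       # Model event -> Presenter
    def _on_refresh(self):
        self._model.load()                 # Presenter calls Model
    def _on_data(self, rows):
        self._view.show(rows)              # Presenter hands data to View
```

Because neither Model nor View references the other, each can be tested in isolation by driving it through the same events the Presenter would use.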
Does this help?
- Sebastian
|
|
|
|
|
Thanks, Sebastian, for your very interesting response. Got my upvote
|
|
|
|
|
I suspect we have a bastardised setup of MVVM on WPF, but here is our structure.
WCF serving up ObservableCollections or ints as the transport format. The database makes extensive use of stored procedures and views, 90% of which are generated by an in-house ORM.
The WCF also has the Models (the element type of the ObservableCollections), which represent the views of each table in the database. The Models project is shared by the WCF and the client projects. Properties in the Models implement INotifyPropertyChanged. So other than the INPC the Models have no functionality.
The client has a DataServices folder where the database tables are represented by a class that gets the collections from the WCF. The ViewModel is bound to the View, gets the data from the DataServices classes and populates the Model collections. 98% of all the work is done by the ViewModel.
By sharing the Models project between the WCF and the client there is no translation required; this may be technically wrong but it works perfectly.
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
Hi Friends
I'm looking for some new and fresh ideas.
Using ASP.NET, JavaScript, web services, jQuery.
My question is this: what is the best way to configure controls and display data for a specific profile or user group?
So far I have switched options on/off in certain controls - for example hiding or showing specific data columns in a GridView, or displaying a specific button. The logic sits in the code-behind, checking the specific profile for all of this (user-profile-option).
I think this is not the best way to do it; it is complex and hard to change.
Another idea is to split everything into a few pages, each containing the specific data and controls for a specific profile, but I think that would be very difficult to edit and administer, and I would lose control of the development.
A third idea is a little extreme:
what about configuring, in a SQL table in the database, which data columns and which controls a specific profile can or cannot use?
This SQL table would contain, in its data columns, the JavaScript options to perform specific actions.
It would also contain the names of the columns to be shown or hidden in the user interface.
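The table-driven idea can be sketched like this (Python for illustration; the profile rows are hypothetical stand-ins for records of the SQL configuration table):

```python
# Each entry stands in for a row of the configuration table:
# profile -> which columns are visible and which controls are enabled.
PROFILE_CONFIG = {
    'admin':  {'columns': ['name', 'email', 'salary'], 'controls': ['edit', 'delete']},
    'viewer': {'columns': ['name', 'email'],           'controls': []},
}

def visible_columns(profile, row):
    """Project a data row down to the columns this profile may see."""
    allowed = PROFILE_CONFIG[profile]['columns']
    return {col: row[col] for col in allowed if col in row}

def control_enabled(profile, control):
    """Decide whether a named control (button, link, ...) is shown for this profile."""
    return control in PROFILE_CONFIG[profile]['controls']

row = {'name': 'Ana', 'email': 'ana@example.com', 'salary': 50000}
```

The UI code then only ever asks "what may this profile see?", so adding a new profile or hiding a column becomes a data change instead of a code change.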
Does anyone have any ideas?
Greetings
|
|
|
|
|
First, I think the Application's hardware and user context are going to constrain any solution: it's one thing if you are talking about a distributed application where each user (client) has a "rich-client" communicating as needed with a server/web/cloud(s). Another thing where you have "thin clients" and most of the "work" is done on the server/web/cloud(s).
Ideally, I think that each user should only have access to a customized UI that contains only Controls/facilities appropriate to their Group/Role/Permissions, but that "KISS" principle may have to be ignored depending on the reality of software development in the "real-world."
I wrote a long response to another thread on this Forum about user-role permissions and UI; even though that's written specifically about using C# and Windows Forms, I think you might get something out of it [^].
|
|
|
|
|
A recent exchange between _Maxxx_ and CDP1802 here: [^].
Resonates with an urge I have to "get my feet wet" in database programming in a scenario that is data-intensive, that does use a server for the database. I am familiar with writing "persistence mechanisms" in WinForms. And, not having used SQL, am eager to use C# and Linq primarily.
_Maxxx_'s statements about code-first DB's possibly returning a "glut" of data make me wonder if there isn't a way to have an "intermediary" app running on the server that takes Linq queries from clients as input, and returns highly-filtered data.
I am pretty sure this question's too broad without specifying the type of data involved: I am interested in data where there are many multiple-references/linkages across categories/objects. I have been investigating/studying DB's like Neo4j [^], but using that would take me off into Java-land where I do not want to go.
Appreciate any thoughts !
thanks, Bill
modified 11-Feb-15 7:31am.
|
|
|
|
|
You don't need an intermediary app; besides, all that would do is eat memory on the server, which I'm sure SQL Server would prefer to eat.
I think in general the issue is that because you'll have a POCO with a collection of another POCO, people will typically just use that collection to get the related details, without realising that in the background these POCOs are likely mapped to tables, and you've just asked EF to do SELECT * from two tables. But you can use LINQ queries to narrow down what you want, and EF will then bring back just what you asked for.
It's just 'harder', or at least less obvious, that you need to do that. And it's pretty easy to see the SQL EF generates, and have it logged so that you can check it.
I find the state of EF object graphs the harder thing to grasp and visualise.
|
|
|
|
|
I appreciate your thoughts, and the take-away I have from your comments is to familiarize myself with EF.
thanks, Bill
|
|
|
|
|
BillWoodruff wrote: if there isn't a way to have an "intermediary" app running on the server that takes Linq queries from clients as input, and returns highly-filtered data.
Yes, but then all you are doing is introducing yet another server that might have to deal with too much data.
Instead, one should start with a design that applies the filter in the database, and does it in an effective manner. That reduces both the load on the server and the amount of data that needs to be returned.
|
|
|
|
|
Thanks for your reply, and I think I grok the very common-sense gist of your comment, which I interpret as "memory ain't cheap on the server, either." Hope that's not too far off-the-mark. Bill
|
|
|
|
|
It isn't just memory - moving the data to another server requires work from each server and the OS as well. And that is true even on the same box.
One of the worst solutions I have seen is a design that decided to make the app database-agnostic by moving all (all) of the business logic off of the database. That works for small volumes but is absolutely useless when large volumes must be processed (in one case I saw, they moved the entire database to a client box, processed it, then moved it back). It couldn't scale at all. It probably could have scaled, but they didn't design it that way. And processing on the database, even if it didn't scale for massive volumes, would at least have worked for the real volumes that their actual solution couldn't handle.
|
|
|
|
|