|
SSETH wrote: Could you please post here your approach(Design) in C++.. I did in my previous message. The way you control access to different features is by taking decisions at run time, as to what facilities and data a particular user has access to. Doing it your way is nigh on impossible, and would need regular changes in the code.
|
|
|
|
|
First off, of course, this is a made-up problem that should never be implemented specifically in code. Parts of it, depending on what you mean, might be implemented in the system, but not in code specifically (design versus code).
But approaching it code-wise:
sseth21k wrote: 1. The third person should not be able to see the other details of the class!
The implementation is via class A. You have an API class that returns a proxy B. The caller uses B. Internally, B is implemented such that A supplies all the behavior.
Specifically, in C++ this means that A will NOT be in an include file; rather, it should be in a source file. In practical terms you would probably put it in its own package with its own include file.
Code-wise you could tighten this even further by delivering a binary rather than source code. Then all they could see would be the proxy.
sseth21k wrote: the component should be accessible to only that particular third person!
There is a credential class. It has specific data that identifies the caller.
The API (above) is modified to take an instance of the credential class. The API checks the data to 'match' to the 'third person'. If it matches it returns the Proxy. If it doesn't match it returns null.
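A minimal C++ sketch of that shape (all names here - Api, Proxy, ImplA, Credential, and the matching rule - are made up for illustration; in a real build ImplA would live only in a .cpp file or a shipped binary, never in a public header):

```cpp
#include <memory>
#include <string>

// Public interface: this is all the caller ever sees.
class Proxy {
public:
    virtual ~Proxy() = default;
    virtual std::string doWork() = 0;
};

// Data identifying the caller.
struct Credential {
    std::string userId;
};

// Hidden implementation: in a real project this class sits in the
// source file (or a delivered binary), not in any public header.
class ImplA : public Proxy {
public:
    std::string doWork() override { return "behavior supplied by A"; }
};

// The API checks the credential data to 'match' the 'third person'.
// On a match it returns the proxy; otherwise it returns null.
std::unique_ptr<Proxy> openApi(const Credential& c) {
    if (c.userId == "third-person") {
        return std::make_unique<ImplA>();
    }
    return nullptr;
}
```

The caller only ever holds a Proxy; what A does, and even that A exists, stays out of anything the third party can see.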
|
|
|
|
|
This is probably the first question I have asked on CP, so please be gentle; also, my terminology might be a bit all over the place.
I admit I am pretty much a beginner as far as Entity Framework and architecture are concerned. I am in a bit of a quagmire: I have inherited several applications that rely on one database, and that database has one data context project which all these applications plug into.
Currently we are using TFS, and each of these .NET applications has a project reference to this data context project, which is fine if you do not need to branch out and if it is a single-programmer outfit.
Now I want to implement a better release strategy after a few D-Days and issues with half-finished code (not caused by me). I want to go with the Gitflow strategy, where I have three types of branches: a release (golden) branch, a trunk development branch, and a day-to-day development branch. With several projects all tied together by a single data context project, it is painful creating new branches and merging up, as it is huge and we lose track of changes. I agree there is very tight coupling and this needs to be reduced, and I want to come up with a sane way of splitting the context and the applications into separate entities. Whatever I do will be time-consuming, but I would love a few opinions and experiences from people who have battled with this issue.
Thanks for reading,
|
|
|
|
|
A (current release) Entity Framework app is not limited to one "DbContext" per database (if that's what you're referring to).
The "entity class definitions" are what's "common" here; a given DbContext is simply a class that identifies the entities to be used in a given "context" (i.e. program / app).
I can have 3 tables: A, B and C.
One context can include tables A, B and C; another just B and C ... like a "view".
So, I don't see DbContext classes as limiting your ability to move forward.
|
|
|
|
|
Thanks for clearing that up for me. In that case I shall have multiple DbContexts referring to the same tables, exposing only the sections I need for different applications. Thanks again,
|
|
|
|
|
|
git (regardless of the process on top of it) is not well suited to handling multiple deliverables unless those deliverables are, in fact, real deliverables.
So for example if your company has a logging library that has its own sprints, its own requirements, schedule and delivered versions then that is a deliverable library. And the applications in the rest of the company would use a delivered version of that library.
With git that would then exist in its own repository.
In comparison, what often happens is that there are several large entities which compose a single delivered application/system. A change in one part of the system invariably requires a change in another, or even in several entities. All entities are managed in one sprint, and a release involves all of the entities at the same time.
I am guessing you have the second. And it is unlikely you will ever have the first. Possible, just not likely.
The only way to really ease the burden with the second is to very carefully manage dependencies. And that is not something that technology does; it requires a person. Thus if you add a new feature, then the back-end code is finished and, hopefully if possible, QA'd before the front end is even started. Obviously this is a problem when modifying existing code, but then you can add new functionality as different endpoints and deprecate the old endpoints for removal in a future sprint. Again, this is a manual process.
darkliahos wrote: I would love a few opinions and experiences on people who have battled with this issue.
A good and conscientious project manager can help a lot. There are not a lot of those, however.
|
|
|
|
|
Here's some thoughts on a potential design for a file transfer application. I welcome any input you may have:
Purpose
Provide a real-time file transfer service that allows users to upload/download files to/from a server and to provide automatic synchronization between the server and the user machines.
A user-specified folder structure can be defined on the local machine. When a file or folder in this structure is added, modified, or deleted (known as a Change), it is added, modified, or deleted on the server accordingly.
When a Change occurs, all users must be notified, and folder/file synchronization between the server and the local file structures must be automatic and transparent to the users.
Proposed Solution (Prototype)
- The local machine will run a FileSystemWatcher hosted by a Windows service.
- FTP will be used to transfer files to/from the server.
- A SignalR service will be hosted on the server and function as the mechanism for maintaining connections to clients and communicating Change Messages between them.
A Change Message contains the following data:
1. Client Id (GUID) - The ID of the client originating the change
2. Item Type - File or Folder
3. Name - Full path and name of the file or folder
4. Action - Create, Modify, Delete
5. Location - Client or Server
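The message above maps directly onto a simple struct. A sketch in C++ for illustration (the prototype itself is .NET, and all type and field names here are assumptions):

```cpp
#include <string>

enum class ItemType { File, Folder };
enum class Action   { Create, Modify, Delete };
enum class Location { Client, Server };

// One Change Message, mirroring the five fields listed above.
struct ChangeMessage {
    std::string clientId;  // GUID of the client originating the change
    ItemType    itemType;  // File or Folder
    std::string name;      // full path and name of the file or folder
    Action      action;    // Create, Modify, or Delete
    Location    location;  // where the change originated
};
```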
Use Case 1 - File Added
A user drags a file into the folder c:\TheApp\SomeFolder, creating c:\TheApp\SomeFolder\MyFile.txt. The file does not already exist in the folder. The FileSystemWatcher detects the new file and FTPs it to the server with progress reporting. Once the upload is complete, the client transmits a message to the server as such:
Client: {6FD41E1C-0057-44E4-B1AA-E0A4A263ABA3}
ItemType: File
Name: "c:\TheApp\SomeFolder\MyFile.txt"
Action: New
Location: Client
The server receives the message, verifies that the file exists on the server, then generates and sends the following message to all clients except the sender:
Client: {6FD41E1C-0057-44E4-B1AA-E0A4A263ABA3}
ItemType: File
Name: "SomeFolder\MyFile.txt"
Action: New
Location: Server
The client receives the message and then initiates an FTP download of the file "SomeFolder\MyFile.txt" from the server to "c:\TheApp\SomeFolder\MyFile.txt".
Use Case 2 - Folder Deleted
A user removes the folder c:\TheApp\SomeFolder. The FileSystemWatcher detects the change and transmits a message to the server as such:
Client: {6FD41E1C-0057-44E4-B1AA-E0A4A263ABA3}
ItemType: Folder
Name: "c:\TheApp\SomeFolder\"
Action: Delete
Location: Client
The server receives the message, verifies that the folder exists on the server, deletes the folder "SomeFolder", then generates and sends the following message to all clients except the sender:
Client: {6FD41E1C-0057-44E4-B1AA-E0A4A263ABA3}
ItemType: Folder
Name: "SomeFolder\"
Action: Delete
Location: Server
The client receives the message and then deletes the folder "c:\TheApp\SomeFolder".
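The server-side handling in both use cases boils down to two operations: verify that a created item arrived, or remove a deleted one. A sketch (C++ with std::filesystem for illustration, though the prototype is .NET; the function name and shape are assumptions):

```cpp
#include <filesystem>
#include <string>

namespace fs = std::filesystem;

enum class Action { Create, Delete };

// Apply an incoming Change Message to the server's copy of the tree.
// For Create, the item arrived out of band via FTP, so we only verify
// it exists; for Delete, we remove the file or the whole folder.
// Returns true when the change should be rebroadcast to other clients.
bool applyChange(const fs::path& serverRoot,
                 const std::string& relativeName,
                 Action action) {
    fs::path target = serverRoot / relativeName;
    if (action == Action::Create) {
        return fs::exists(target);
    }
    return fs::remove_all(target) > 0;  // number of items removed
}
```

The boolean result matches the "verify, then rebroadcast" step in both use cases: a change that cannot be verified (or removed) is simply not forwarded.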
Issue - Simultaneous Changes by Multiple Users
User A changes the contents of a file but does NOT save and leaves the file in an edited state. User B changes the contents of the same file and saves it. User A then saves changes to the file.
The local component of the application could use the date/time stamp of the file to determine its disposition, but those date/times may differ between machines. The simplest implementation could be to run the last action received on the file.
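"Run the last action received" is a last-writer-wins policy. A minimal sketch (C++ for illustration; the class and its names are assumptions, and comparing stamps from different machines is only approximate because of clock skew, which is why stamping on receipt at the server is the simpler variant):

```cpp
#include <chrono>
#include <map>
#include <string>

using Stamp = std::chrono::system_clock::time_point;

// Last-writer-wins: remember the stamp of the last applied change per
// file and apply an incoming change only if nothing newer already won.
class ConflictResolver {
    std::map<std::string, Stamp> lastApplied_;
public:
    bool tryApply(const std::string& file, Stamp received) {
        auto it = lastApplied_.find(file);
        if (it != lastApplied_.end() && received < it->second) {
            return false;  // a later action was already applied
        }
        lastApplied_[file] = received;
        return true;
    }
};
```

In the User A / User B scenario above, B's save and then A's save both arrive; A's later arrival wins, which is exactly the "last action received" behavior described.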
|
|
|
|
|
What happens if a user is not logged on? Do they get "synced" when they log on?
How "current" does the syncing have to be? Why?
Design choices are premature at this point; IMO.
|
|
|
|
|
Gerry Schmitz wrote: What happens if a user is not logged on? Do they get "synced" when they log on?
Yes
Gerry Schmitz wrote: How "current" does the syncing have to be? Why?
Real-time. This app will be applying business rules to documents placed in the folder structure, so it's important to keep the folders/files as up to date as possible. For example, there may be a requirement to destroy a doc automatically after a certain date/time. The server would send a Destroy message to the clients, and the doc would be removed.
Gerry Schmitz wrote: Design choices are premature at this point; IMO
Not sure I agree. We've been discussing and documenting the requirements for a year now. It's time to prototype, so I'm looking for technologies that will fulfil the requirements and then get started.
Thanks for your input.
If it's not broken, fix it until it is
|
|
|
|
|
How can a client be "real-time" when it is off-line (and in effect acting like a "device")?
I see nothing in your descriptions that requires a "real-time" solution. All "to do" items can be logged in a database and dispatched based on triggers, async callbacks, and/or scheduled pick-up processing.
|
|
|
|
|
Gerry Schmitz wrote: How can a client be "real-time" when it is off-line (and in effect acting like a "device")?
The app is never offline. Since SignalR can maintain a connection to the client via WebSockets, the app is able to communicate between the client and server at all times.
Gerry Schmitz wrote: I see nothing in your descriptions that requires a "real-time" solution.
You're right in that we probably could accomplish our objectives by polling the server at specified intervals. But given the abilities of SignalR[^], there's no reason not to.
If it's not broken, fix it until it is
modified 4-Feb-16 15:38pm.
|
|
|
|
|
As I said, you've already designed a solution.
|
|
|
|
|
Well time will tell. We'll know for sure once the prototyping is done. Once it's done I'll post it here as an article so I can get more feedback.
Thanks for your input.
If it's not broken, fix it until it is
|
|
|
|
|
I am thinking of writing something like DropBox. I'm trying to decide on the right technologies.
For the service I will need to upload/download files, and a way to call back to clients with notifications:
- I tested WCF, but it has been difficult to get working. There's always some config setting that is not right, which gives me strange errors.
- I tested a SignalR service, which was very simple to set up and handles callbacks easily. But from what I can see, SignalR doesn't do file upload/download. I thought of converting the file to a byte array, attaching it to a class, and sending it to the server, but that doesn't feel right.
I need to
A) Upload/Download Files
B) Call back to the client
What's the right service to do this with?
Thank you
modified 2-Feb-16 16:39pm.
|
|
|
|
|
I don't think that Dropbox works with callbacks; instead, it might be checking the progress on the document ad-hoc, to update a shell overlay icon when needed.
zephaneas wrote: What's the right service to do with this? For A there'd be FTP. For B, any socket would do as long as it can stay connected, and in a not-always-connected environment I'd recommend email (or a similar structure).
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
|
|
|
|
Eddy Vluggen wrote: I don't think that Dropbox works with callbacks
Why do you think that? Any reason in particular?
If it's not broken, fix it until it is
|
|
|
|
|
Kevin Marois wrote: Any reason in particular? Two: having written an overlay-icon shell plugin thingy, which requests the status of a file as soon as it needs to show an icon in Explorer, and the second being the Dokan plugin, which demonstrates how easy it would be to create a drive that shows whatever you want (like remote files) as if they are part of the filesystem; it also works on the "whenever Explorer shows it and needs it" principle.
Neither relies on calling back to the system to let it know that the final bytes have been written. That is already implied by closing the stream.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
|
|
|
|
zephaneas wrote: B) Call back to the client
That, as stated, is almost never the correct solution.
A server might send a message to a client, but using the same communication link that the client established in the first place.
zephaneas wrote: which gives me strange errors.
Err...welcome to network traffic?
You either roll your own API or find another and use it. And there are many, many choices. As mentioned in the other thread, File Transfer Protocol (FTP) is one specifically addressing file transfer. Other protocols tend to be message-based, not file-based.
You might want to start first with a file listing service instead of a transfer service. Thus the client will list the files on the server and nothing else. That is going to be easier, and it is something you would need to do anyway.
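The core of such a listing service is small; a sketch (C++ and std::filesystem for illustration, though the poster's stack is .NET; the function name is an assumption):

```cpp
#include <filesystem>
#include <fstream>
#include <string>
#include <vector>

namespace fs = std::filesystem;

// Walk the served root and return every regular file as a path
// relative to that root. A listing endpoint would serialize this
// result and send it to the client; actual transfer comes later.
std::vector<std::string> listFiles(const fs::path& root) {
    std::vector<std::string> names;
    for (const auto& entry : fs::recursive_directory_iterator(root)) {
        if (entry.is_regular_file()) {
            names.push_back(fs::relative(entry.path(), root).generic_string());
        }
    }
    return names;
}
```

Wrapping this result in whatever service technology is chosen (WCF, SignalR, plain HTTP) is then a separate, smaller decision.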
|
|
|
|
|
B) Call back to the client
jschell wrote: That, as stated, is almost never the correct solution.
Why not?
If it's not broken, fix it until it is
|
|
|
|
|
Kevin Marois wrote: Why not? Because of the nature of the thing: a server serves the clients' requests. If the client needs an update, it should ask the server. Having the server notify the client is a well-known anti-pattern.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
|
|
|
|
I'm not sure I agree with that. From what I can see, that's the whole point of WebSockets, and of SignalR, which is built on it: to maintain a connection to the clients for the purpose of real-time communication.
|
|
|
|
|
"Real time"?
zephaneas wrote: From what I can see that's the whole point of WebSockets Depends on which part of the Dropbox client you want to recreate. It is not my opinion, but from Explorer's view it makes sense; your file's status is not relevant to the user until he requests that file.
Before the file can be requested, its status is requested. Explorer will still show the files, just not the correct status initially. You can see this happening visually on a slow computer: when a folder is first opened, the first overlay icon is the blue refreshing arrows, and then the correct overlay icon appears once the status has been requested.
Now, "real-time" is reserved for anything that is updated within 1/24 of a second, as that is what the human eye perceives as real-time. I don't care what framework you use; if it is on Windows, it will be as real-time as the idiot that ran a marathon just to deliver a message.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
|
|
|
|
zephaneas wrote: From what I can see that's the whole point of WebSockets, and SignalR which is built on it
We either have a nomenclature issue or you are mistaken about what websockets do (I know nothing about the second.)
Communication involves two parts
1. Establishing the connection
2. Sending messages.
In normal communications, like web traffic, a client (from any computer, any application) attempts to 'connect' to the server (any computer, any server application).
WebSockets allow a client to create a connection to a server and then facilitate message handling (2 above) between the client and the server.
A real callback requires a reversal of that connection protocol, in that the server would then need to do 1 by attempting to connect to the original client. WebSockets do not do that.
Some reasons for not doing real callbacks to clients:
1. The server cannot, in fact, connect to the client. Although a client might have a route to a server, the server is not likely to have a route to the client, nor even know how to connect to it. This is much, much more likely to be true on the internet.
2. Servers are intended to be static resources. Clients are temporary. Thus even if a server attempted a callback the client might no longer be there.
3. Establishing connections can be a resource-intensive process, as is handling connections. Asking a server to do both, when the client is likely connecting to the server often in the first place, is a pointless waste of resources, not to mention added complexity in the system.
|
|
|
|
|
You should reconsider WCF. Yes; once you get the settings right, "save" them.
Other than that, we have lots of services communicating with different third parties exchanging multi-megabyte compressed payloads (shipping documents and label images) asynchronously from multiple locations.
|
|
|
|