|
led mike wrote: at least you work with people that use the word "Abstract"! That's more than I can say!
Ouch.
|
Dude, it looks like you are being stalked by a 2.0 voter? Even I don't have a stalker!
led mike
|
led mike wrote: Dude, it looks like you are being stalked by a 2.0 voter? Even I don't have a stalker!
Damn. I'm not even worthy of a Univoter.
|
I often accidentally vote 2... When I'm a little spaced out, it's far too easy to think, as you arrive at the last message, that those numbers represent pages. So I try to navigate to page 2 and get "thank you for voting"!
I must apologize to any victims (none today, though).
|
Hi,
We have a whole bunch of PCs that monitor other machines (we're talking about 30-40 monitor PCs, each tracking one machine).
I'm planning on writing a 'dashboard' app that provides a manager with a live view of the state of all the machines, but I can't seem to think of a good way to architect it.
Options:
1) PULL - The monitor app exposes a web service (or some other connection type, remoting or whatever) and the dashboard app calls it to get the current status (see the sketch below). Problem: it will need to be called around once a second to be live enough, which could be an issue if lots of managers have the app open watching the status. The number of pull requests required will be (running_dash_board_app_count * monitor_app_count).
2) PUSH - The dashboard app subscribes in some way to notifications, and the monitor app pushes them out (via web services/remoting, whatever) to the dashboard app. Same issue: notifications will need to be pushed out around once a second, so the network traffic could get really high. The number of notification messages will again be (running_dash_board_app_count * monitor_app_count).
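To make option 1 concrete, here's the sort of thing I'm imagining each monitor app exposing (just a sketch; MachineStatus and StatusCache are invented names, not real code):

// Rough sketch of option 1: an ASMX-style web service on each monitor PC.
using System.Web.Services;

public class MonitorStatusService : WebService
{
    [WebMethod]
    public MachineStatus GetCurrentStatus()
    {
        // Return the latest sample the monitor loop read from the machine.
        return StatusCache.Latest;
    }
}

public class MachineStatus
{
    public int StatusFlags;      // packed status bits
    public int ProductionSpeed;  // example integer value
    public string OperatorName;  // example short string
}

public static class StatusCache
{
    // Updated by the monitor loop each time it samples the machine.
    public static MachineStatus Latest;
}

Each dashboard would then call GetCurrentStatus() on every monitor about once a second, which is exactly where the (running_dash_board_app_count * monitor_app_count) figure comes from.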
What other options are there? I don't want to flood the network with stacks of traffic just pushing or pulling status updates. This doesn't scale at all: say we had 20 managers, each new machine would add 20 new connections.
Simon
|
Push: Monitor app is stateful, has to handle registrations, throttling and all sorts of BS. Saves a request packet on updates, perhaps 1% of the total bandwidth.
Hence, pull: use this method.
|
Mark Churchill wrote: Push: Monitor app is stateful, has to handle registrations, throttling and all sorts of BS. Saves a request packet on updates, perhaps 1% of the total bandwidth.
All good points, thanks. So yes, pull is preferable to push. But surely there's a better way.
5 monitors & 5 dashboards = 25 messages every second (1 group within the company)
50 monitors & 50 dashboards = 2,500 messages every second (expected load across the whole company with the current deployment of monitors)
500 monitors & 50 dashboards = 25,000 messages every second (expected load in a few years, when we reach the planned state of having all machines monitored)
And this is making an assumption about concurrency: there's a good chance a lot of people will want to leave the dashboard open to keep track of things, which could push the total number of dashboard apps running way up. Admittedly, there could be some reduction by allowing each dashboard to display only the particular subset of machines its user is most interested in.
Is there really no way I can reduce the network traffic? It just seems so unscalable. These monitors aren't heavy server boxes; they're piddly little SFF PCs that just about manage WinXP and .NET. (Am I worrying unnecessarily? Is 25,000 messages a second tiny compared to general network bandwidth?)
Simon
|
What data do you have per monitored machine? "Up"? It could be something that is pulled in one request for all monitored machines (or the subset the user is interested in).
If the bandwidth/total messages per second is too much to handle and you are pushing these limits, then you'll just have to go with broadcast, push, or a stateful service (GetState(monitoredSet), GetChanges(monitoredSet)). I'd be inclined to do the latter using WCF; it's probably the easiest.
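Roughly the sort of contract I have in mind - a sketch only, with the data contract invented for illustration:

// Sketch of a WCF service contract for the stateful approach.
using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;

[ServiceContract]
public interface IMachineStatusService
{
    // Full snapshot for the machines the caller cares about.
    [OperationContract]
    List<MachineStatus> GetState(List<string> monitoredSet);

    // Only the entries that changed since the caller's last request.
    [OperationContract]
    List<MachineStatus> GetChanges(List<string> monitoredSet);
}

[DataContract]
public class MachineStatus
{
    [DataMember] public string MachineId;
    [DataMember] public int StatusFlags;
    [DataMember] public double Temperature;
}

The dashboard calls GetState once on startup and GetChanges on each poll, so steady-state traffic only carries the machines that actually changed.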
|
Cheers Mark. Led Mike has come up with the idea of a server in the middle to gather the data and reduce the network load on the monitors. Is this what you were getting at with your idea of "pulled in one request for all monitored machines"?
Mark Churchill wrote: broadcast
I'm interested in this idea. Is it possible for all the monitors to simply broadcast their state, and for each dashboard to listen, without increasing the bandwidth? I've not really thought about this kind of communication. How would I go about doing this? Is it something that WCF supports?
Simon
|
Sorry, I assumed that the "monitor app" was some sort of aggregation server anyway.
I was working on the assumption that the monitor app would be pinging/receiving some sort of data from the monitored machines.
Then several dashboard apps would pull data from the monitor app. By "pulled in one request" I mean that the dashboard could ask the server for an entire snapshot of state in one request, and then (if bandwidth was going to be an issue) ask only for differences after that. I was suggesting this over dashboards subscribing to or having data pushed from the monitor server.
I wasn't too concerned about how the monitored things updated the monitor app. I could see that being done by a variety of methods, such as the monitor app pinging them, or servers pushing info onto message queues.
I don't think WCF specifically supports broadcast (I might be wrong here, I didn't check) - but I think MSMQ does. If you are on a LAN and your router supports it, I guess you could save some bandwidth with UDP broadcast, but I think we'd be in overkill territory there :P
But I agree with Mike: I think the "monitor app"/aggregation server is a given (for a start, it gives you centralised logging/stats if you need them). It also lets your dashboard clients request only the data they are interested in. TBH it could probably just dump your info into a SQL database and have the dashboards read it out, or even use dependencies...
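By dependencies I mean SQL Server 2005 query notifications via SqlDependency - a very rough sketch, with the table, columns and connection string all invented:

// Sketch of the "dump it in SQL and use dependencies" idea.
using System.Data.SqlClient;

public class DashboardFeed
{
    const string ConnStr =
        "Data Source=central;Initial Catalog=Monitoring;Integrated Security=SSPI";

    public void Subscribe()
    {
        SqlDependency.Start(ConnStr); // once per application

        SqlConnection conn = new SqlConnection(ConnStr);
        // Query notifications require explicit columns and two-part table names.
        SqlCommand cmd = new SqlCommand(
            "SELECT MachineId, StatusFlags, Temperature FROM dbo.MachineStatus", conn);

        SqlDependency dep = new SqlDependency(cmd);
        dep.OnChange += new OnChangeEventHandler(OnStatusChanged);

        conn.Open();
        SqlDataReader reader = cmd.ExecuteReader();
        // ... refresh the dashboard from the reader, then clean up.
        reader.Close();
        conn.Close();
    }

    void OnStatusChanged(object sender, SqlNotificationEventArgs e)
    {
        // A notification fires only once, so re-subscribe and re-query.
        Subscribe();
    }
}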
|
Mark Churchill wrote: Sorry, I assumed that the "monitor app" was some sort of aggregation server anyway [Wink]
I was working on the assumption that the monitor app would be pinging/receiving some sort of data from the monitored machines.
Ahh, maybe I explained it badly. Each monitor app runs on a standalone PC that is connected to one machine. It receives several inputs from the machine, such as production speed, temperature, etc. The connection from the PC to the machine is via a serial port link to the machine's PLC. (The monitor app actually serves two purposes: the first is to display the monitored data visually, but it also allows some control of the machine via a touch-screen interface.)
Mark Churchill wrote: I don't think WCF specifically supports broadcast (I might be wrong here, I didn't check) - but I think MSMQ does. If you are on a LAN and your router supports it, I guess you could save some bandwidth with UDP broadcast, but I think we'd be in overkill territory there [Poke tongue]
Yeah, although I like the idea of a broadcast-based system, I'm thinking now it's slightly over the top; the server option seems the best.
Mark Churchill wrote: But I agree with Mike: I think the "monitor app"/aggregation server is a given (for a start, it gives you centralised logging/stats if you need them). It also lets your dashboard clients request only the data they are interested in. TBH it could probably just dump your info into a SQL database and have the dashboards read it out, or even use dependencies...
Yeah, there are plenty of advantages to a server approach. It would also let me easily build in a security layer that grants access only to the appropriate people, without the monitor apps having to be aware of all the privileges; all they need to know is to give the data only to the server.
Thanks guys, you've helped me loads. Can get cracking on a decent design now.
Simon
|
Simon Stevens wrote: But surely there's a better way.
Simon Stevens wrote: Is there really no way I can reduce the network traffic?
Perhaps you shouldn't have each dashboard connect directly to a monitor. Instead, you write a server application to do this and host it on its own machine (not a monitor machine). The server app communicates directly with the monitors, and the dashboard apps communicate with the server. This might introduce a small amount of latency (I don't know if that matters to you) as the server machine becomes a bottleneck, but it would reduce network connections and the load on the monitor machines, since they would only communicate with a single remote process.
led mike
|
Genius. And yet so simple; why didn't I think of that? It solves all my issues. A bit of latency is fine, so a simple server box in the middle can aggregate all the data from the monitors and provide it easily to the dashboard apps without a gazillion messages flying around.
Thanks Led Mike. One big fat 5 is heading your way.
Simon
|
Frankly, I think you have some misconceptions about what constitutes heavy network traffic. Admittedly I don't know how much data your messages will contain, but the number of messages as such doesn't seem to me to be much of an issue.
If I am wrong, however... you could use some form of multicast/broadcast technique so that a monitored PC doesn't have to send the same packet to each of its monitors. I don't know exactly how it is done, but it certainly is possible. Apart from that, you can do simpler things like scale back the poll frequency a bit and optimize the communication - use compact bit arrays instead of boolean flags, for example, and remoting over a low-latency protocol like UDP rather than some bloated XML web service. Remoting uses binary serialization, so it's very quick as well as very compact; XML is neither (though it has its uses, of course).
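To illustrate the bit-array point (the flag names are obviously made up):

// Packing a few dozen boolean flags into one 32-bit field instead of
// serializing each one as a separate boolean.
[Flags]
public enum MachineFlags
{
    None     = 0,
    Running  = 1 << 0,
    DoorOpen = 1 << 1,
    OverTemp = 1 << 2
    // ... up to 32 flags fit in a single int
}

public static class FlagDemo
{
    public static void Main()
    {
        // 4 bytes on the wire, however many flags you have.
        int packed = (int)(MachineFlags.Running | MachineFlags.OverTemp);

        bool overTemp = (packed & (int)MachineFlags.OverTemp) != 0;
        System.Console.WriteLine(overTemp); // True
    }
}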
|
dojohansen wrote: Frankly, I think you have some misconceptions about what constitutes heavy network traffic. Admittedly I don't know how much data your messages will contain, but the number of messages as such doesn't seem to me to be much of an issue.
I did wonder if this might be the case. My messages really should be rather small: we're talking maybe a few dozen status bit flags, a few integer values (32-bit), and a few string values (only 10 or so chars long each). And they could be optimised further by only transmitting the changes between each message.
It just doesn't seem very scalable; each new addition to the system adds a whole load of extra messages.
You say that the level of messages I've described is fine. Out of interest, at what point would you say I should start being concerned about the level of traffic?
Simon
|
I don't know if it is fine or not; I'm just going by some very crude speculation. A second is a very long time in computing terms.
If I launch Task Manager, visit the Networking tab, and select columns to show the number of unicasts and non-unicasts per interval, then convert this to per-second figures (at "normal" update speed Task Manager appears to use an interval close to 2 seconds), I see that the activity varies a bit but sits between 50 and 100 "casts" per second when idle. My network utilization varies between 0.01% and 0.03%.
Another crude thought: when loading a web page, the browser requests the document and then issues separate requests for all the linked resources, such as JS files, style sheets, and above all images. It seems to me this load must be many thousands of times greater than what you're trying to do, yet I am sure that far more than one in a thousand of the PCs on our corporate network is browsing the web at any given time.
I don't know if there even is a general answer to your question (network capacity varies rather a lot around the world, and you've said little about the network on which this will operate), and if there is one, I don't have the knowledge to provide it. So I cannot say at what level you should be concerned. But very simple and crude common-sense observations seem to me to indicate that it would be a non-issue.
I may be wrong of course, but until someone presents me with a better basis than my own arrogant speculation, I *think* adding a few bytes per second to the network traffic of each PC translates into adding less than 1% overhead to the network overall. Check your own stats - I'm sure you'll see at least a kilobyte per second of network traffic when your PC is idle!
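To put some crude numbers on your worst case (the 100-byte message size here is purely a guess): 25,000 messages a second at 100 bytes each comes to about 2.5 MB/s, or 20 Mbit/s, across the entire network; and in the direct-pull setup each monitor PC only answers its own 50 requests a second - roughly 5 KB/s, or 40 kbit/s, per box. On a switched 100 Mbit LAN I would expect that to be a non-issue, but measure it rather than trust my arithmetic.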
|
1- Is it right to start refactoring code by revising the architecture, or to test the code with the old architecture and, when you are sure it's working properly, then start improving the architecture?
2- Is it right to change an architecture which is working properly because you find better ways?
|
fateme_developer wrote: or to test the code with the old architecture
"or test"? What does that mean? How can you have existing architecture that has NOT been tested?
fateme_developer wrote: 2- Is it right to change an architecture which is working properly because you find better ways?
There is no single answer to that question. All the project variables must be considered.
led mike
|
led mike wrote: All the project variables must be considered.
Would you expand on that? What are the important variables, and how do they affect the final decision?
|
fateme_developer wrote: Would you expand on that? What are the important variables, and how do they affect the final decision?
There are far too many to list, but for example, say you developed an Xbox game. Once you shipped it, there would likely be very little benefit in refactoring the code, since you won't be modifying and extending the game. Even in that example, though, there might still be a reason to refactor: if, in developing the game, you also developed a graphics engine that you intend to use in future games, then it would be a good decision to refactor the engine but not the game-level code.
led mike
|
The best way to approach any refactoring exercise really should be to make sure that you have unit tests in place (with as much code coverage as possible) and that those unit tests all pass. This allows you to refactor and objectively verify that the refactoring hasn't broken anything. That said, this isn't always possible depending on the situation, but as long as you are careful and targeted in the refactoring, you can minimize the potential risks.
As for changing the architecture because you find better ways to do something, you have to decide whether the risks of changing the architecture at that point in time outweigh the benefits.
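As a trivial illustration of the kind of test I mean (NUnit-style; StatusPacker is an invented example, not from your code):

// A test that pins down current behaviour before a refactoring; if it
// still passes afterwards, that code path survived intact.
using NUnit.Framework;

public static class StatusPacker
{
    public static int Pack(bool running, bool overTemp)
    {
        return (running ? 1 : 0) | (overTemp ? 2 : 0);
    }
    public static bool IsRunning(int packed)  { return (packed & 1) != 0; }
    public static bool IsOverTemp(int packed) { return (packed & 2) != 0; }
}

[TestFixture]
public class StatusPackerTests
{
    [Test]
    public void PackThenUnpackRoundTrips()
    {
        int packed = StatusPacker.Pack(true, false); // running, not over-temp

        Assert.IsTrue(StatusPacker.IsRunning(packed));
        Assert.IsFalse(StatusPacker.IsOverTemp(packed));
    }
}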
Scott Dorman Microsoft® MVP - Visual C# | MCPD
President - Tampa Bay IASA
Hey, hey, hey. Don't be mean. We don't have to be mean because, remember, no matter where you go, there you are. - Buckaroo Banzai
|
Scott Dorman wrote: The best way to approach any refactoring exercise really should be to make sure that you have unit tests in place
I agree you should have unit tests, but not for the purpose of refactoring. Sure, they are invaluable to the effort, but unit tests should be in place for many reasons that have nothing to do with refactoring, and those reasons come into play long before you have anything to refactor.
led mike
|
led mike wrote: you should have unit tests, but not for the purpose of refactoring
Absolutely. Ideally, unit tests should be in place long before any thoughts of refactoring like this occur. Since that isn't always the case, the next best thing is to recommend that they be in place before the refactoring.
Scott Dorman Microsoft® MVP - Visual C# | MCPD
President - Tampa Bay IASA
Hey, hey, hey. Don't be mean. We don't have to be mean because, remember, no matter where you go, there you are. - Buckaroo Banzai
|
Yes. It's hard to imagine someone thinking about design and architecture and refactoring who hasn't yet thought about the fact that they should have unit tests. But as hard as such people might be to imagine, no doubt they exist.
led mike
|
It's not so strange. There are many cases where constructing unit tests would in itself represent a huge development effort, and I for one disagree that it is necessarily a good idea to do so when about to embark on a refactoring.
The project I work on is a case in point. It is a web application, and it isn't easy to unit test, among other reasons because the intrinsic ASP.NET objects (request, response, session state, application state, HttpServerUtility) are used indiscriminately everywhere. To unit test some business class you need to make mocks of these - which you can't, because they are all sealed classes. Even if you could, it would be quite a bit of extra work. And if your refactoring involves removing all these objects from your business logic, you don't get any long-term benefit from doing all that work: once your tests are developed, your refactoring then breaks them!
And that leads me to the more general observation: refactoring isn't (usually) just about changing implementation details; it usually involves interface changes, decoupling of objects, and changes to construction logic - perhaps introducing a factory somewhere. Most of these changes are of a nature that makes them very likely to break any tests. So why spend all the development effort making a bunch of tests that you will immediately go about invalidating? In my view, it is far better to refactor and write tests for your *new* code as you go. In the end, if you had bugs that are still there after the refactoring, there is no reason to think the refactoring itself would make them harder to fix afterwards; and if your refactoring introduced bugs, that is no different from if you had written the unit tests against the "before refactoring" version of the application.
So I don't get it. What would be the great benefit of developing tests to prove that some code you're about to change works, when those tests cannot be reused to test the code after the changes?
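For what it's worth, the decoupling I'm talking about looks something like this - hide the sealed intrinsics behind an interface you own (a sketch, with all names invented):

// Business classes depend on this interface instead of HttpContext.
using System.Collections;
using System.Web;

public interface ISessionStore
{
    object Get(string key);
    void Set(string key, object value);
}

// The production implementation wraps the real ASP.NET session state.
public class AspNetSessionStore : ISessionStore
{
    public object Get(string key) { return HttpContext.Current.Session[key]; }
    public void Set(string key, object value) { HttpContext.Current.Session[key] = value; }
}

// Tests use a trivial in-memory fake instead of mocking a sealed class.
public class FakeSessionStore : ISessionStore
{
    private readonly Hashtable items = new Hashtable();
    public object Get(string key) { return items[key]; }
    public void Set(string key, object value) { items[key] = value; }
}

But notice that introducing this interface *is* the refactoring: tests written against the old HttpContext-ridden code wouldn't survive it, which is exactly my point.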
|