|
I've been trying to run a simple block of code from C# in a PowerShell runspace.
I went to run that command in the PowerShell example project on my workstation and it did not work, but that demo application runs fine on the server that has SCOM installed.
Is there a management pack (like with Exchange) that I need to install on my workstation to get access to the DLLs and commands that I have through PowerShell when on the server locally? Any ideas?
Import-Module -Name OperationsManager
New-SCOMManagementGroupConnection -ComputerName "serverName"
Get-SCOMGroup
foreach ($group in Get-SCOMGroup)
{
    Write-Host $group
}
How to run PowerShell scripts from C#[^]
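For reference, a minimal sketch of what running those cmdlets from C# might look like, using the `PowerShell` class from `System.Management.Automation`. This assumes that assembly is referenced and that the OperationsManager module is installed locally (it ships with the SCOM Operations console, which would explain why the demo only works on the server); `serverName` is a placeholder.

```csharp
using System;
using System.Management.Automation;

class ScomDemo
{
    static void Main()
    {
        // Create an in-process PowerShell pipeline and hand it the same
        // script that works interactively on the server.
        using (PowerShell ps = PowerShell.Create())
        {
            ps.AddScript(@"
                Import-Module -Name OperationsManager
                New-SCOMManagementGroupConnection -ComputerName 'serverName'
                Get-SCOMGroup");

            // Invoke() runs the pipeline and returns the emitted objects.
            foreach (PSObject group in ps.Invoke())
            {
                Console.WriteLine(group);
            }
        }
    }
}
```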
|
Explained here[^]
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
Thank you, Eddy.
I also had a hard time getting a PowerShell namespace going inside of C#, but I was able to get that going late last night locally and then modified it to work remotely. I will post those functions for reference here when I get back to work on Monday and compare what I'm doing to the PowerShell girl's code.
|
I am looking for more information on how to prevent thread starvation/blocking/racing in an application or service that could possibly be running up to 200 threads simultaneously across multiple processors/cores.
Most of the examples I have found online show how to coordinate between 3 or 4 threads at most, and I am planning an app that will have hundreds.
I've written applications that run up to 30 threads, but when going well beyond that I am worried about some threads not getting any CPU time, not to mention synchronization.
Where can I find some really good articles and blogs?
What are the best books on the subject?
There are a lot of books and articles on the subject; I am looking for the most informative and instructional ones that are not written in engineering language above my understanding.
I originally posted this on the Q&A forum and it was suggested that I also ask about it on the C# forum.
All suggestions are appreciated.
|
I suppose I have to be the typical "that" guy, but...
Why do you suppose you need 200 threads? Do you realize that once you divide the work across more threads than you have cores, you may be degrading performance instead of improving it? More threads does not mean more throughput. I don't know what you are really trying to do, but maybe a good design review is needed before starting down a rather disappointing path...
|
I am working on a modular processing model and I was thinking about scalability. My current model would be for 20-40 threads. However, if the job became very large scale, like processing millions of records every hour, how could I expand the scale while still giving each thread its own slice of CPU time and not having any threads get starved out?
|
Without knowing more about your problem space:
I'd suggest building a queue (or queues) of the different types of processing (items in the queue are the data for a given processing type) and having threads that loop around: remove item from the queue and process it. As long as there is something in the queue, the threads will do work. When there's nothing there, there's no work to do.
Determine the number of threads (by experimentation(?) on your hardware) that gives near-optimal performance.
It's possible that if things are IO bound or processed off-the-box (e.g., SQL server) the best number of threads will be more or less than the number of cores.
(Or have the application try to self-tune?)
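The queue-and-worker-loop pattern described above might be sketched like this, using `BlockingCollection<T>`. The worker count, bound, and stand-in work item are all illustrative, not part of the original suggestion.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

class WorkQueueDemo
{
    static int processed; // counts completed items, for demonstration only

    static void Main()
    {
        // Bounded queue: producers block instead of flooding memory.
        var queue = new BlockingCollection<int>(boundedCapacity: 100);
        int workerCount = Environment.ProcessorCount; // tune by experiment

        var workers = new Task[workerCount];
        for (int i = 0; i < workerCount; i++)
        {
            workers[i] = Task.Run(() =>
            {
                // GetConsumingEnumerable blocks while the queue is empty and
                // exits cleanly once CompleteAdding has been called, so idle
                // workers burn no CPU.
                foreach (int item in queue.GetConsumingEnumerable())
                {
                    Interlocked.Increment(ref processed); // stand-in for real work
                }
            });
        }

        for (int i = 0; i < 1000; i++) queue.Add(i); // producer side
        queue.CompleteAdding();
        Task.WaitAll(workers);
        Console.WriteLine(processed); // prints 1000
    }
}
```

Because every worker pulls from the same queue, no thread can be starved while work remains: a free worker always gets the next item.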
|
That's really kind of where I am going. This is more of an experiment to see if creating a modular processing scheme is worth the trouble. I've got some base classes created and was taking the idea to its endgame. That's why I was asking about thread starvation: in a system that processes information sequentially but in an assembly-line fashion, what would it take to prevent threads from being starved out if the system grew to over 200 threads?
|
As has been pointed out by others, adding threads isn't free.
At some point the management of the threads and switching between them becomes significant in the big picture.
|
Just use the .NET ThreadPool or the Task Parallel Library.
It does the queuing for you.
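A minimal sketch of that idea with the Task Parallel Library: `Parallel.ForEach` partitions the records across a capped number of pool threads, and the pool queues everything it can't run yet. The record count and per-record work are placeholders.

```csharp
using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

class TplDemo
{
    static void Main()
    {
        int processed = 0;
        var options = new ParallelOptions
        {
            // Cap the workers near the core count; the pool queues the
            // rest instead of spinning up hundreds of threads.
            MaxDegreeOfParallelism = Environment.ProcessorCount
        };

        Parallel.ForEach(Enumerable.Range(0, 100000), options, record =>
        {
            Interlocked.Increment(ref processed); // per-record work goes here
        });

        Console.WriteLine(processed); // prints 100000
    }
}
```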
|
Or, more specifically, like the servers that run the internet, you need to go to an event-driven process: one thread per core, each doing work when available.
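The event-driven style might be sketched in C# with async/await: many operations in flight, serviced by a handful of pool threads, because awaited I/O does not hold a thread while it waits. `FetchAsync` and its delay are stand-ins for real network or disk I/O.

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

class EventDrivenDemo
{
    // Stands in for network or disk I/O; while the delay is awaited,
    // no thread is blocked waiting on it.
    static async Task<int> FetchAsync(int id)
    {
        await Task.Delay(10);
        return id;
    }

    static async Task Main()
    {
        // 200 concurrent operations in flight, serviced by roughly one
        // pool thread per core rather than 200 dedicated threads.
        Task<int>[] inFlight = Enumerable.Range(0, 200).Select(FetchAsync).ToArray();
        int[] results = await Task.WhenAll(inFlight);
        Console.WriteLine(results.Length); // prints 200
    }
}
```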
|
That's why those servers don't run on Windows; it does not let you hog the machine using a few threads. The OS manages them, and under Windows, all threads are equal.
If it weren't so, some software vendors that are known to put their apps in the startup folder would try to get exclusive access to a core, just to have a "fast application".
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
With that many threads, your CPU/OS will be overloaded just trying to schedule and switch them. A very bad idea.
Veni, vidi, abiit domum
|
Foothill wrote: I am worried about some threads not getting any CPU time
Why? Open the Task Manager and see how many threads are running in the various apps on your system. My browser is already above 30, with me having opened only 3 tabs.
Windows decides when a thread gets put on a core and how much time it gets. Unless you muck with their priorities, they'll be equally important.
Foothill wrote: not to mention synchronization.
Works the same as with 30 threads.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
You have obviously never worked as a TA; thirty is a hard-codeable number : )
Eddy Vluggen wrote: Works the same as with 30 threads.
|
I am looking for documentation on how to have 50+ threads running under a single process in a high-throughput application without threads being starved. Synchronization would be handled through design. I would like to take multithreading and parallelism to their logical limits.
|
"Multi-threading and parallelism to their logical limits"
Then think (literally) outside the box. Taking something "to its limit" is a REALLY poor design...
The newest and greatest parallel processing architecture is distributed processing, using agents on multiple machines (boxes), with a master delegating work. This way you really can have the best of both worlds, throughput, millions of processed data items per second, etc. SETI@home has been doing this since early 2000's, and the concept is catching on. Either you can create a processing farm and not push a machine to its limit, or you can create some kind of public distributed processing system and let people with free idle-time (and even better, free electricity and equipment) do your processing for you.
|
First, what makes you think that the number of threads you launch is going to do the job? Without the cores to support that many, you're just wasting resources and killing throughput, not improving it.
You have to find out what the domain of the problem is first. Why are you launching threads? What causes one thread to take so long processing a single record? Is it a compute-bound problem? Or is it an I/O problem where the thread is stalled, waiting for an I/O operation to complete?
Without knowing the exact causes of the delays in processing a record, throwing threads around will get you nowhere fast. You can throw threads at a stack of records, but if there are not enough cores or enough I/O throughput to run those threads you'll get no benefit. You may have to add hardware to solve the problem, not threads.
But, this is going to take a ton of research to figure out.
|
The problem I am trying to address, to start with, is that some of our developers continue to use SQL Server and Oracle as an application platform and not for data storage as intended. I've seen the immensely complicated SQL statements they have written, which is a horrible programming practice, and I was looking for solutions using modular programming and parallelism. The problem is processing 4 million records in 45 different ways, but quickly and efficiently. There are no real problems yet, and I stress yet. So far their solution to slow processing is to throw more hardware at it rather than increase the efficiency of the actual application.
|
Well, you're screwed before you even begin.
No amount of hardware or threading or anything other than going back and redesigning and reworking that pile of crap is going to solve the problem. You cannot fix bad design with anything other than redesign.
You can throw all the threads you want at the problem, but they'll all just end up sitting idle, waiting on the SQL to process. Sure, your application will be starting hundreds of threads, but the SQL server will not be matching you. It'll spin up only what it can work with and will queue up any work it can't readily get to.
|
Foothill wrote: All suggestions are appreciated.
It isn't about the threads. It is about the work.
Having 30 threads querying remote networks is significantly different from having 30 threads working on matrix computations over in-memory data.
Both of those are impacted if you are attempting to run the 'application' on a server that is already running other 'applications'.
And all of that depends on whether you do everything correctly. Mess up a single sync and you will suddenly have an application that runs slower than a single-threaded app. Or you might settle on an 'optimal' solution that completely ignores the bandwidth limits of the network.
|
What I am working toward is more of an assembly-line approach to some complex financial transactions. Some of my other replies under this topic will help give a better idea of what I am working toward. This project is testing the feasibility of creating truly modular programming, which is not creating new modules for an existing program and recompiling, but passing code changes into an already running program so that they are applied without the program missing a beat. In a sense, it would end up as a process that never has to be restarted and processes constantly.
|
Foothill wrote: it would end up a process that would never have to be restarted and processes constantly.
That doesn't really have anything to do with threading, however. At least not in the context of what you suggested in your original post.
|
<ItemTemplate>
    <asp:Label ID="ProductPromotionLabel" runat="server"
        Text='<%# string.Concat("RM ", Eval("ProductPromotion")) %>'
    />
</ItemTemplate>
How do I get the data from ProductPromotionLabel?
|
Loop through the items in the DataList and find the Label on each row with the FindControl method.
MSDN FindControl[^]
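A hedged code-behind sketch of that loop; `ProductDataList` is an assumed ID for the DataList containing the ItemTemplate above, and this fragment belongs inside the page class.

```csharp
using System.Web.UI.WebControls;

protected void ReadPromotions()
{
    foreach (DataListItem item in ProductDataList.Items)
    {
        // FindControl resolves the Label instance created for this row
        // from the ItemTemplate.
        var promoLabel = (Label)item.FindControl("ProductPromotionLabel");
        if (promoLabel != null)
        {
            string value = promoLabel.Text; // the "RM ..." text bound above
        }
    }
}
```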
Edit: This is an ASP.NET question. Please use the appropriate forum in the future.
|