|
I am using the following code to compress data at the server:
The job of this method is to execute the method named by a string and return the result to the client as a compressed stream (the object is serialized and the serialized stream is compressed). When the value reaches the client, it is decompressed there and the decompressed stream is deserialized to obtain the actual object.
The method below takes the name of the method to execute, along with MethodArguments and ArgumentTypes to distinguish between overloads and identify the exact method without ambiguity. ExecObject is the object on which the specified method will be invoked.
//SERVER SIDE METHOD
public Stream CompressedMethodCall(string MethodName, object[] MethodArguments, Type[] ArgumentTypes, object ExecObject)
{
    const int BufferLength = 4096; // matches the client-side buffer size
    try
    {
        MethodInfo mi = ExecObject.GetType().GetMethod(MethodName, ArgumentTypes);
        object Result = mi.Invoke(ExecObject, MethodArguments); // Result now holds the object to serialize and compress
        if (Result != null)
        {
            // BinaryFormatter is used for compact serialization
            var Serializer = new System.Runtime.Serialization.Formatters.Binary.BinaryFormatter();
            Stream SerializedStream = new MemoryStream();
            Serializer.Serialize(SerializedStream, Result); // SerializedStream now holds the serialized object
            SerializedStream.Position = 0;

            Stream CompressedStream = new MemoryStream();
            // leaveOpen: true, so CompressedStream stays usable after the GZipStream is closed
            var Compressor = new System.IO.Compression.GZipStream(CompressedStream, System.IO.Compression.CompressionMode.Compress, true);

            byte[] Buffer = new byte[BufferLength];
            int BytesRead;
            // Stream.Read may return fewer bytes than requested, so loop on its return value
            // and write only the bytes that were actually read
            while ((BytesRead = SerializedStream.Read(Buffer, 0, Buffer.Length)) > 0)
                Compressor.Write(Buffer, 0, BytesRead);
            SerializedStream.Close();

            // Closing the GZipStream flushes its internal buffer and writes the GZip
            // footer; without this the client receives a truncated stream
            Compressor.Close();

            CompressedStream.Position = 0;
            return CompressedStream;
        }
    }
    catch (Exception)
    {
        return null;
    }
    return null;
}
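For reference, here is a minimal round-trip sketch of the compress/decompress step in isolation (the helper names and the 4096-byte buffer are illustrative, not from the original code). The key point is that the `GZipStream` must be disposed before the compressed bytes are read back, because disposing it flushes its internal buffer and writes the GZip footer:

```csharp
using System;
using System.Diagnostics;
using System.IO;
using System.IO.Compression;

class GZipRoundTrip
{
    public static byte[] Compress(byte[] data)
    {
        using (var output = new MemoryStream())
        {
            // leaveOpen: true keeps 'output' usable after the GZipStream is disposed
            using (var gzip = new GZipStream(output, CompressionMode.Compress, true))
            {
                gzip.Write(data, 0, data.Length);
            } // disposing here writes the remaining buffered bytes and the footer
            return output.ToArray();
        }
    }

    public static byte[] Decompress(byte[] compressed)
    {
        using (var input = new MemoryStream(compressed))
        using (var gzip = new GZipStream(input, CompressionMode.Decompress))
        using (var output = new MemoryStream())
        {
            var buffer = new byte[4096];
            int read;
            // honor the return value of Read: it may be less than buffer.Length
            while ((read = gzip.Read(buffer, 0, buffer.Length)) > 0)
                output.Write(buffer, 0, read);
            return output.ToArray();
        }
    }

    static void Main()
    {
        var original = new byte[10000];
        new Random(42).NextBytes(original);
        var restored = Decompress(Compress(original));
        Debug.Assert(restored.Length == original.Length);
        Console.WriteLine("round trip ok");
    }
}
```

If the round trip works in isolation but not across the wire, the stream is most likely being read before the compressor has been closed.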
Now the returned value (the compressed stream) is used to obtain the actual object. I am doing it this way:
//CLIENT SIDE STATIC METHOD
public class DeCompressor
{
    const int BufferLength = 4096;

    public static object GetObject(Stream CompressedStream)
    {
        BinaryFormatter DeSerializer = new BinaryFormatter();
        Stream SerializedStream = new MemoryStream();
        CompressedStream.Position = 0;
        GZipStream Decompressor = new GZipStream(CompressedStream, CompressionMode.Decompress, true);
        byte[] Buffer = new byte[BufferLength];
        try
        {
            int Read;
            // Loop on the return value of Read; it may legitimately return fewer
            // bytes than requested before the end of the stream. (An earlier
            // version called Read a second time when it returned 0 because the
            // stream sometimes appeared to hold more data; with a complete GZip
            // stream, a return of 0 reliably means end of data.)
            while ((Read = Decompressor.Read(Buffer, 0, BufferLength)) > 0)
                SerializedStream.Write(Buffer, 0, Read);
            Decompressor.Close();
            CompressedStream.Close();
            SerializedStream.Position = 0;
            object Result = DeSerializer.Deserialize(SerializedStream);
            SerializedStream.Close();
            return Result;
        }
        catch (Exception ex)
        {
            Castor.CastorMessageBox.Show(ex.Message);
            return null;
        }
    }
}
This method works most of the time, but sometimes decompression does not reproduce the exact stream that was produced at the server. When that happens, deserialization fails and I am unable to retrieve the actual object.
I am using VS2008. Any help would be appreciated.
|
|
|
|
|
Hi,
I did not read all of that as it lacks PRE tags and hence readability.
FYI: MSDN on Stream.Read says: implementations of this method read a maximum of count bytes from the current stream and store them in buffer beginning at offset. In other words, Read may return fewer bytes than you asked for, so you must always use its return value.
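To make that concrete, here is a sketch of a read loop that honors the return value (the helper name and buffer size are illustrative):

```csharp
using System;
using System.Diagnostics;
using System.IO;

class ReadLoop
{
    // Stream.Read may return fewer bytes than requested, so always loop
    // on its return value instead of assuming the buffer was filled.
    public static byte[] ReadAllBytes(Stream source)
    {
        using (var result = new MemoryStream())
        {
            var buffer = new byte[4096];
            int read;
            while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
                result.Write(buffer, 0, read); // write only what was actually read
            return result.ToArray();
        }
    }

    static void Main()
    {
        var data = new byte[12345];
        var copy = ReadAllBytes(new MemoryStream(data));
        Debug.Assert(copy.Length == data.Length);
        Console.WriteLine("copied {0} bytes", copy.Length);
    }
}
```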
|
|
|
|
|
So I have just joined a company that creates .NET web apps. We have a very large Core class library, made up of about 20 different projects, and they are stored in VSS as 20 separate projects. These projects provide various different functionality but have strict dependencies on each other.
Currently people are using websites in VS and checking these into VSS.
Now if I want to setup this project on my machine I have to
1: get the website from VSS
2: Find out which Core libraries it uses
3: find out the order of the dependency
4: Add each project (Core library) to my solution in the correct order
5: Add a reference from my website to each of the core libraries in my solution
6: Hopefully it compiles and I am good to go, although more often than not there are serious versioning issues, but that’s another post
Considering we need the core source for debugging, my questions are:
1) Is there a better way for Core team to deliver the Core libraries to us so that we can still debug?
2) Any way that we can package the libraries ourselves so that the process of creating a new website is much easier?
Thanks in advance
|
|
|
|
|
A lot of companies insist on making different projects for different parts of the same enterprise-wide solution, and all it really does is create a large headache when it comes time to track everything down and figure out why, when you are debugging, the line numbers don't seem to match up with the code!
The core library should ideally be merged into one project. You will find a lot of people disagree with me on this point, but those are usually the same people that don't like to do merges on check-in... which, I believe, is the primary reason the projects are so divided. After all, how can you add a new file to a project if someone else has the project file checked out? The thought process then follows: more projects = more productivity.
One last thing to consider is to have the core libraries strongly named. While it doesn't really help with the interdependency fiasco, it will at least help track down the errors.
|
|
|
|
|
I tend to agree. While having many files and projects might be advantageous in the early stages, it becomes a hassle when things settle down. I tend to merge those auxiliary projects into just one or a few; my rule of thumb is: if you can't draw the DLL dependency graph by heart, it has got to shrink.
|
|
|
|
|
One practice is for the core assemblies to be published to a central point so you get the released assemblies from there instead of the projects. Any updates have to be posted to that central point, including updates to the entire dependency tree.
A lot easier if you put it all into one.
You could also package the core assemblies and deploy them using Windows Installer, much as you would a third-party assembly. Again, this would lead to stricter control of the release and update cycle.
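As a sketch of the central-point approach, consuming projects could reference the published DLLs directly rather than the source projects. The drop-folder path, version scheme, and assembly name below are hypothetical:

```xml
<!-- Hypothetical .csproj fragment: reference a published core assembly
     from a central drop folder instead of adding a project reference. -->
<Reference Include="Company.Core.Data">
  <HintPath>\\buildserver\CoreLibs\1.2\Company.Core.Data.dll</HintPath>
  <SpecificVersion>False</SpecificVersion>
</Reference>
```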
|
|
|
|
|
Thanks for the replies, guys.
One suggestion: "One practice is for the core assemblies to be published to a central point so you get the released assemblies from there instead of the projects."
I think we used to do this in my old company, so on all boxes (DEV, QA, UAT) we would have the compiled assemblies stored in the same area; whenever they were updated they would be published to the common area on all boxes. This way we were always working against the latest version. I am trying to think how this could work while still needing the code to debug?
Also, with regard to versioning of core libraries, would I be correct in assuming there should always be backwards compatibility until a major new version is released, meaning no matter what changes are made to the core DLLs, your site should never break?
|
|
|
|
|
The backwards compatibility issue is always a problem. There may be a breaking change because the core library developer does not know what you are doing in your web site, but with care they should not break stuff in that manner.
Certainly as far as versioning is concerned, if you don't like the new version, revert to the old version and no problems. The dependency chain should never break in this manner.
You should be able to debug without the code. Just ensure that the .pdb files are deployed with the dlls, and that they are built in debug not release mode. You should have a separate release build set without pdbs for deployment to the production environment.
|
|
|
|
|
That's brilliant advice, U.N.C.L.E., about not needing the code, just the .pdb files; thanks for that one. I am going to try and get a process together and suggest it to management, and this will help a lot. When I have my suggested process done I will post it here and maybe you can pick it apart. Thanks again for the advice.
|
|
|
|
|
The workflow described defeats the purpose of having a "core" set? It's large at a minimum, and it sounds like a lot of work to use it..
How big is the entire thing, if the core is made up of 20 different projects?
I are Troll
|
|
|
|
|
I am not exactly sure what you're getting at? The core team develops a library that drives the company's products and services. My team uses a subset of these libraries.
How big is it? Well, I am not sure, but I'd say pretty small on a commercial scale; there are only about 3 devs on the core team.
|
|
|
|
|
cullyk wrote: I am not exactly sure what you're getting at?
Twenty projects seemed like a lot; but those are the projects using the core - not the projects that make up the core.
I are Troll
|
|
|
|
|
I am writing a few complicated classes I'd like to unit test. They depend on other components I'd like to replace when unit testing. I'm not sure how to go about it.
I don't want a constructor with a zillion parameters.
I am trying to get rid of the Prism / Unity container (wondering if that's a good idea, though), so I can't use it to load different services depending on the circumstances. One key reason is that it's not apparent which services are needed for a given object (or set of objects), and in the past the big object often loaded a zillion services from the container depending on the called method.
Not sure I can use MEF, as MEF tends to load everything in a DLL, whereas I'd like to choose individual components myself depending on the circumstances. Plus, what if the code is not instantiated through MEF? I would have empty required fields!
Any ideas?
Basically, I have a class like this:
class SomeManager
{
    HardwareInput hInput;
    ILogger logger;
    IWebService service;
    ....
}
And I'd like to replace / set hInput, logger and service depending on the circumstances.
Ideally I'm happy to have the constructor set up the values and have some dependency injection code replace them.
I'm thinking of making them public so they could be changed explicitly, but I find that dangerous... although... the setter could apply some checks...
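One container-free option is plain constructor injection with a convenience constructor for production defaults: tests pass hand-rolled fakes, and the dependencies stay private and read-only. The interfaces and names below are hypothetical stand-ins for the ones in the post:

```csharp
using System;
using System.Diagnostics;

// Hypothetical stand-ins for the dependencies described in the post.
public interface ILogger { void Log(string message); }
public interface IWebService { string Fetch(string key); }

public class ConsoleLogger : ILogger
{
    public void Log(string message) { Console.WriteLine(message); }
}

public class SomeManager
{
    private readonly ILogger logger;
    private readonly IWebService service;

    // Production code uses the parameterless constructor with real defaults;
    // tests use the full constructor to inject fakes. No container required,
    // and the constructor signature documents exactly what the class needs.
    public SomeManager() : this(new ConsoleLogger(), null) { }

    public SomeManager(ILogger logger, IWebService service)
    {
        if (logger == null) throw new ArgumentNullException("logger");
        this.logger = logger;
        this.service = service;
    }

    public string Describe(string key)
    {
        logger.Log("describing " + key);
        return service != null ? service.Fetch(key) : "(no service)";
    }
}

// A hand-rolled fake for unit tests.
public class FakeService : IWebService
{
    public string Fetch(string key) { return "fake:" + key; }
}

public class Demo
{
    public static void Main()
    {
        // Unit test wiring: replace the web service with a fake.
        SomeManager manager = new SomeManager(new ConsoleLogger(), new FakeService());
        Debug.Assert(manager.Describe("x") == "fake:x");
        Console.WriteLine("ok");
    }
}
```

This keeps the fields private (avoiding the public-setter risk mentioned above) while still letting tests substitute each dependency individually.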
A train station is where the train stops. A bus station is where the bus stops. On my desk, I have a work station....
_________________________________________________________
My programs never have bugs, they just develop random features.
|
|
|
|
|
Hi all,
I have a problem running a web service. When I run the web service, I get the error "Request format is unrecognized for URL unexpectedly ending in 'web method name'".
Even though I tried to put the code below under system.web in the web.config file, it doesn't work. Can anybody help?
<webservices>
<protocols>
<add name="HttpGet">
<add name="HttpPost">
<protocols>
<webservices>
flowerppk
|
|
|
|
|
Did you notice after posting this message that your code does not appear?
It's time for a new signature.
|
|
|
|
|
oops! so sorry.
here is the code again!
<webServices>
<protocols>
<add name = "HttpGet"/>
<add name = "HttpPost"/>
</protocols>
</webServices>
flowerppk
|
|
|
|
|
Hi,
I have a VB application that accesses a SQL database on the server (24-hour operation). Once in a while a Microsoft .NET Framework error dialog pops up and my application hangs!
1. I intend to dismiss this error and re-launch my application automatically. I have a VC++ kill-process tool to remove my application (it successfully removes all other applications)... but it fails and can't remove this pop-up dialog!
2. I noticed that if I launch the Windows Task Manager, select my application in the Processes tab and press the End Process button, a new dialog pops up and I have to press the button one more time... finally the Microsoft .NET Framework error dialog disappears!
Does anyone know an application (or link) to perform step #2 above automatically?
Thanks for any help
|
|
|
|
|
This may sound ridiculous, but wouldn't it be easier to fix the bug causing the error in the first place?? Why go through all the calisthenics and genuflecting?
|
|
|
|
|
Dagnabit. That's just crazy talk.
"WPF has many lovers. It's a veritable porn star!" - Josh Smith As Braveheart once said, "You can take our freedom but you'll never take our Hobnobs!" - Martin Hughes.
My blog | My articles | MoXAML PowerToys | Onyx
|
|
|
|
|
Dave Kreskowiak wrote: Why go through all the calisthenics and genuflecting?
To increase body strength and flexibility; that, at least, is what Wikipedia says on the subject.
|
|
|
|
|
I will try to analyze the error this time, even though the application is required to continue immediately.
Thanks for the hint
|
|
|
|
|
1) What's the error?
2) Where does it occur? Because, depending on where it is, you could handle what's happening without the need for your application to restart.
|
|
|
|
|
Since the error happens when the application is required to continue immediately, I didn't have time to debug. But as you suggested, I have no choice but to do it!
- I will wait for it to happen again and analyze it. I will ask for help again if I can't solve it myself.
Thanks for the guidance
|
|
|
|
|
I'm not sure if I'm posting this on the right board, but I'll give it a shot.
13 years ago we developed an expert system using VB6. It comprised a standalone MDI interface with a whole bunch of COM+ components.
As time advanced, our software needed an overhaul. The decision was made to move to .NET (C#) module by module. As we implemented the MDI in C# (.NET 3.5), we started to see a memory leak once it replaced the old VB6 MDI. The .NET MDI starts the existing COM+ components the same way the old MDI did. The old application did not have memory leaks, so the leak must have started with the .NET transition.
Now my question is: is there a way to hunt down that memory leak? What is the best tool to use, and can that tool work with .NET and COM+ at the same time?
Thanks in advance!
The consumer isn't a moron; she is your wife.
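One common culprit worth checking in this scenario (a guess, not a diagnosis): when .NET code drives COM+ components, each COM object is held by a runtime-callable wrapper (RCW) that keeps it alive until the garbage collector finalizes the wrapper, which the old VB6 client never did. A deterministic-release helper like the sketch below (names are illustrative) can be called at the points where the MDI is done with a component:

```csharp
using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

public static class ComCleanup
{
    // Release a COM runtime-callable wrapper deterministically instead of
    // waiting for the garbage collector to finalize it.
    public static void Release(object comObject)
    {
        if (comObject != null && Marshal.IsComObject(comObject))
        {
            // ReleaseComObject returns the remaining RCW reference count;
            // loop until the wrapper has fully released the COM object.
            while (Marshal.ReleaseComObject(comObject) > 0) { }
        }
    }
}

public class Demo
{
    public static void Main()
    {
        // A plain .NET object is not a COM object, so Release is a safe no-op.
        Debug.Assert(!Marshal.IsComObject(new object()));
        ComCleanup.Release(new object());
        Console.WriteLine("ok");
    }
}
```

For tooling, WinDbg with the SOS extension can inspect the managed heap (including RCW counts) on the .NET side, which helps confirm whether the leak is wrapper retention or something inside the COM+ components themselves.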
|
|
|
|
|