|
Maximilien wrote: They do. It's in the menu bar.
Sorry, out of curiosity I once downloaded the Tiger version of Mac OS X onto a VMware HDD. I think it was illegal, so I deleted it. I didn't have enough time to investigate all parts of the OS; I couldn't see that button, and I don't remember exactly where it was. But you seem to have good experience with it, so you're probably right. I'm sorry for such a stupid mistake.
Maximilien wrote: IMO, it's not a good idea to have an Autosave that replace a save (and "save as") command.
You're right, and I already added that earlier today; it's been a while since I posted that comment.
Anyway, thank you so much for your help.
// "Life is very short and is very fragile also." Yanni while (I'm_alive) { cout<<"I love programming."; }
|
|
|
|
|
I'm creating a new rule for the IT department: every newly deployed system will have an associated deployment diagram.
On my first attempt, using Visio 2007, I couldn't find a diagram template that fit well.
Do you have any tips on tools that could be used? I prefer Microsoft tools, but others can also be considered.
Thanks for any tips!
|
|
|
|
|
You could use Rational Rose[^].
Deja View - the feeling that you've seen this post before.
|
|
|
|
|
Try downloading some free Visio templates; make sure they are UML 2.0.
Rational Rose sucks, and it's way too expensive for most teams to justify the cost.
If you use Eclipse, there is the UMLet plugin, and you can run it standalone as well. You have to get used to its odd user interface, but if you have the true personality of a software engineer you can figure it out. It also outputs your model diagrams in many file formats, and it's free.
I personally like Artisan RT Studio, but again, it's tough to justify the cost of these tools.
Visio is virtually free if you already have Microsoft Office.
Check out the OMG SysML stuff too; it's interesting and offers other diagrams as well.
kind regards,
David
|
|
|
|
|
Hi all,
Our project has two teams: Application Production Support and the Development team. It's a reasonably big project (around 50 people). Can anyone suggest a release model and/or how configuration management should work for the project? If you could explain how it works in any of your projects, maybe we can take a hint from there.
Thanks in advance.
|
|
|
|
|
Virat Soni wrote: If you could explain how it works in any of your projects, maybe we can take a hint from there.
Just a crazy idea but maybe you could use Google to find things like this[^]
|
|
|
|
|
led mike wrote: Just a crazy idea but maybe you could use Google
Oh you crazy so and so. It's zany ideas like this that make people chuckle. You're mad you are.
Deja View - the feeling that you've seen this post before.
|
|
|
|
|
Use Subversion for your CM tool; you can use it from the command line, via TortoiseSVN (the Windows GUI), or via Subclipse (an Eclipse plugin).
Use Trac to track progress, tasks, bugfixes, user docs, etc.: http://trac.edgewall.org
Set up Subversion and Trac on an Apache web server and you should have no problem.
Make sure you identify two people to handle all the CM work, appoint them as your official "build-meisters", set up nightly cron jobs for builds, etc.
Have two main branches, a delivery branch and an integration branch, and make sure the developers create their own branches when they work on code.
Set periodic dates for builds, do code reviews, and make sure your developers unit test their stuff before checking their code into the integration branch. Test everyone's stuff together using the integration branch; make corrections, retest, and repeat until you have something good, then merge it into the delivery branch.
Reset the integration branch from that point on the delivery branch, and repeat.
Make sure developers whose stuff doesn't make the deadline or cut continually grab the latest from the integration branch; they always need to work against the current code.
kind regards,
David
|
|
|
|
|
Hi all,
I am working on a pharmacy-related project and am currently writing the Low Level Design document. I want to know clearly what comes under the below-stated section of the LLD:
Design Alternatives:
A brief description of the design alternatives considered for this module should be stated, along with the reasons for selecting a particular design from among the alternatives.
hari.k
|
|
|
|
|
This is just asking what you considered in your design, and why you chose the particular design you did. It's normally in there to show that you did consider alternatives and the design isn't just something that you threw together.
Deja View - the feeling that you've seen this post before.
|
|
|
|
|
I need some ideas or pointers on storing several hundred gigabytes of data. Currently, the files transferred range from 1 KB up to 600 MB and vary in size.
What I would like to do is break the files down into small (8K?) blocks and index them with a hash. Then in a database, create a chain so the files can be reconstructed. I like this because it will allow duplicated blocks of data to be identified and reduce the size of the storage. Bad idea? How might a directory structure look to accomplish this so it isn’t impossible to enumerate through the files?
I thought about storing the files blocks in a database as blobs but I think that would be too much strain on the database resources. SQL 2008 will have some nice features to accomplish this but it will be a year or two before we get there.
Any ideas? Thanks in advance.
|
|
|
|
|
What's wrong with the filesystem for storing your data, with a separate index if necessary? We probably need more information on what type of data you have and how you intend to index it.
When you say "I have a few hundred gig of files and I need to store them somehow", the first thing that comes to mind is NTFS.
|
|
|
|
|
Thanks for the response Mark.
The system I am working on is a pub/sub system that distributes files to multiple subscribers. It is an in-house system that transfers manufacturing data: applications, documents, collected data, test results, etc. Currently the data is uploaded to a file server in the sky and the subscribers then download it; nothing too complicated.
It is currently architected so that the subscribers connect through a load-balanced web farm to download small chunks of 64K until the transfer is complete. Each chunk is a new HTTP request, which is causing heavy I/O between the web servers and the file server: each request opens the file, reads up to the requested chunk, and then returns the data. It would be ideal to just stream the file over the same connection until the transfer is complete and resume after network hiccups. The problem is that some of the third-party sites use older proxy servers which won't allow that. In addition, there is a firewall (beyond my control) which limits the amount of time a connection is allowed to remain open.
My new plan is to store the files in smaller chunks so downloading is more efficient: the server doesn't have to navigate through a large file to the position of the chunk being downloaded. I could also leverage this to reduce duplication of the data stored on the server. Unfortunately, with this design I would then have to open a database connection to identify where the blocks of data are stored and how to piece them together. I would probably end up in a worse scenario with the I/O to the database and the work of calculating what to return. This is where I thought storing the data in the database might be better, as I would already have a connection and could query for the exact requested chunk. I am torn because of the costs associated with storing that much data in a database.
I am just curious if there are any ideas. I probably should not worry about how the data is stored and work on solutions to solve the number of http requests.
Thanks
|
|
|
|
|
rcardare wrote: I probably should not worry about how the data is stored and work on solutions to solve the number of http requests.
Yes, it sounds to me like your current solution is not using chunked encoding, or not using it correctly.
|
|
|
|
|
Hi,
It seems like the issue with some legacy clients not handling large downloads may be solvable by having the clients request the data in chunks. That doesn't mean the data needs to be stored in chunks, though.
I'd solve the issues with how the clients retrieve the files first. Then you can see how appropriate your storage mechanism is (I'd suggest NTFS with additional indexing in SQL Server would work fine).
|
|
|
|
|
Here are my two cents, for what they're worth:
Avoid moving the data around; why not use the database to tell the client where the data is located?
Wherever there is data, there should be services that provide the client with the "knowledge" about the data he is looking for.
When you are working with large chunks of data, avoid moving it around, especially across a network.
Make the local service perform as much work as possible and return an "answer".
David
|
|
|
|
|
We had a similar problem with large file distribution.
I wrote a system where a client would download the file and then send broadcast messages out so that other local hosts could pick up the file and download it from the LAN. Also, if you're using Windows clients, look at the Background Intelligent Transfer Service (BITS).
a programmer trapped in a thug's body
|
|
|
|
|
Dear friends,
I guess this must have been discussed many times before, but here is a fresh question.
What should messages in the user interface say to the user in these situations (and many more)? (The user is in the US.)
Also consider the messages before, at the moment of, and after performing the action...
File Copy
File Not found
File Delete
File Send
Database Update
Database Insert
Database Delete
Please help with the syntax and appropriate sentences.
thanks in advance,
|
|
|
|
|
The convention for a file copy is to show a file being copied animation, and to provide feedback on the file being copied. For instance, if there are multiple files being copied you would normally tell the user where they are in the copying process, e.g. Copying 10 of 24 and it is common to show a progressbar detailing performance.
Anyway, the standard way to design your application is to write the messages out in your own language (stored in a resource DLL), and then have someone translate them into the relevant language.
When it comes to standards, take a look at what the tools you use on a day to day basis tell you. What does Word say in these situations? Visual Studio? Explorer? It's generally best to follow what others have done rather than reinventing things yourself.
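To illustrate the catalog approach, here is a hedged sketch of a message table keyed by operation and phase (before, during, after), covering the operations listed in the question. The wording is only a suggestion, not a Windows or .NET standard, and all names are invented for illustration:

```python
# Illustrative message catalog: operation -> (before, during, after) wording.
# A None entry means that phase normally shows no message.
MESSAGES = {
    "file_copy":      ("Ready to copy files.", "Copying file {n} of {total}...", "Copy complete."),
    "file_delete":    ("Ready to delete files.", "Deleting {name}...", "Files deleted."),
    "file_send":      ("Ready to send files.", "Sending {name}...", "Files sent."),
    "file_not_found": (None, None, "File '{name}' could not be found."),
    "db_update":      (None, "Updating records...", "{count} record(s) updated."),
    "db_insert":      (None, "Saving new record...", "Record saved."),
    "db_delete":      (None, "Deleting records...", "{count} record(s) deleted."),
}

def message(operation, phase, **values):
    """Look up and format a status message; phase is 0=before, 1=during, 2=after."""
    template = MESSAGES[operation][phase]
    return template.format(**values) if template else ""
```

Keeping the strings in one table (or, in a Windows app, a resource DLL as suggested above) means translators and reviewers touch one place, and the {placeholders} make the progress wording ("Copying file 10 of 24...") consistent everywhere.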
Deja View - the feeling that you've seen this post before.
|
|
|
|
|
Hi Guys,
I'll be starting work on a web app that will send out mass mailings for our clients' campaigns (approx. 20k per batch). The web servers and mail server are hosted on our end, so we have complete control of the platform.
I'm just wondering:
1. What would it take to maintain a high volume of mail being handled by our mail server?
2. Would it be possible to have another mail server so we could load-balance the volume of mail?
3. Can you give a reference/URL where I could read about the design of this kind of web app?
Thanks
Dom
|
|
|
|
|
1) Cross-posting is discouraged here (that means you posted this same message in more than one CP forum).
2) People don't like SPAM, including the people at CodeProject.
However, hope is not lost: since you think this is a "Web App", with any luck you will never get it to work.
|
|
|
|
|
Hello comrades !!! I need your help. Here is what we have:
We have a JPG image that contains a gray background with a lot of white eggs on it. We need to get the number of eggs and the size of each egg (in pixels, I think).
How can I do it?
The hardest problem is detecting each whole egg (marking its edges) and intelligently dividing the image into regions so the eggs are easier to count.
Thanks for help.
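The usual approach for bright objects on a uniform background is to threshold the image and then count connected components; each component's pixel count is one egg's size. Here is a minimal pure-Python sketch on a 2D grid of grayscale values (loading the JPEG into such a grid, e.g. with an imaging library, is left out, and the threshold of 200 is an assumption you would tune for your images):

```python
from collections import deque

def count_eggs(pixels, threshold=200):
    """pixels: 2D list of grayscale values (0-255).
    Returns a list of component sizes: one entry per egg, in pixels.
    Pixels brighter than `threshold` count as egg; connected bright
    regions are grouped with a BFS flood fill."""
    h, w = len(pixels), len(pixels[0])
    seen = [[False] * w for _ in range(h)]
    sizes = []
    for y in range(h):
        for x in range(w):
            if pixels[y][x] > threshold and not seen[y][x]:
                # New egg found: flood-fill its connected region.
                size, queue = 0, deque([(y, x)])
                seen[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    size += 1
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and pixels[ny][nx] > threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                sizes.append(size)
    return sizes
```

`len(count_eggs(grid))` is the egg count and each entry is that egg's area in pixels. Touching eggs will merge into one component with this simple version; separating them needs something stronger, such as a watershed or erosion step.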
One nation - underground
|
|
|
|
|
Gosh. Cross posting homework. Excellent.
Deja View - the feeling that you've seen this post before.
|
|
|
|
|
I work for a state agency that has a multiple page, multiple question application form. Because the application centers around eligibility for a payoff, there are different paths that an applicant may take through the application. For instance, depending upon their answers, they may only need to fill out one or two pages of questions, but could conceivably need to fill out more pages depending upon whether or not further information is required.
What sort of approach/technology would be ideal for handling this sort of thing in .NET? Currently, it is handled in a very clumsy manner involving multiple .aspx pages and a lot of unnecessary, hard-coded session variables and logic checks. In this environment it's easy for things to become unnecessarily complicated, and a change to a single question might require changes in multiple places throughout the application (especially to database calls).
One method I was thinking of would be a single .aspx page with a multiview control backed by XML files for each page of questions, where the questions are given attributes that define what type of question they are (i.e. yes/no, checkbox, text) as well as the name they are given in the database (e.g. "absence_eligibility1"), but I am unsure of the best way to implement the dependencies.
Any ideas?
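To illustrate the dependency part, here is a hedged sketch of the data-driven idea: each question record carries an optional condition on an earlier answer, and a small engine computes which questions (and hence pages) are visible for the current set of answers. The question records mirror the XML attributes described above, but all field names and values here are invented for illustration:

```python
# Each question: id, page, and an optional dependency on a prior answer.
QUESTIONS = [
    {"id": "absence_eligibility1", "page": 1, "depends_on": None},
    {"id": "has_dependents",       "page": 1, "depends_on": None},
    {"id": "dependent_count",      "page": 2,
     "depends_on": ("has_dependents", "yes")},
    {"id": "dependent_details",    "page": 2,
     "depends_on": ("has_dependents", "yes")},
]

def visible_questions(answers):
    """Return the question ids the applicant must see, given answers so far."""
    shown = []
    for q in QUESTIONS:
        dep = q["depends_on"]
        if dep is None or answers.get(dep[0]) == dep[1]:
            shown.append(q["id"])
    return shown

def visible_pages(answers):
    """Pages that contain at least one visible question; drives navigation."""
    visible = set(visible_questions(answers))
    return sorted({q["page"] for q in QUESTIONS if q["id"] in visible})
```

Because the rules live in data rather than in per-page code-behind, changing a question or its condition is a one-place edit, which addresses the "changes in multiple places" complaint; the same records can also generate the database column names.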
|
|
|
|
|
Depending on how much time you are planning to invest in this, it may be more cost-effective to get an off-the-shelf solution that can handle this type of skip logic. These types of products range in price from hundreds to thousands of dollars.
*DISCLAIMER* I am the architect for a company that produces this type of software, but there are many options so you might want to check around. I can tell you from first hand experience that once you get into conditional logic and/or have to create and deploy multiple questionnaires you've got quite a bit of work to do.
|
|
|
|