|
In image processing, high-contrast colors are often used to display labeled images, that is, images that have gone through the labeling/connected-component process. To display the resulting image, we use something called a "Binary Color Table", which consists of 15 colors chosen to be clearly distinct from one another.
These are the colors:
byte bSize = 15; // Binary palette has 15 colors, cycled for labels beyond 15.
byte r[] = {255,0,0,255,255,0,255,255,127,127,0,0,255,127,127};
byte g[] = {0,255,0,255,0,255,127,0,255,0,127,255,127,255,127};
byte b[] = {0,0,255,0,255,255,0,127,0,255,255,127,127,127,255};
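For labeled images with more than 15 components, the palette is simply cycled. A minimal sketch (in Python for brevity; `label_color` is a hypothetical helper, and label 0 is assumed to be background, rendered black):

```python
# The 15-color binary palette from above.
r = [255, 0, 0, 255, 255, 0, 255, 255, 127, 127, 0, 0, 255, 127, 127]
g = [0, 255, 0, 255, 0, 255, 127, 0, 255, 0, 127, 255, 127, 255, 127]
b = [0, 0, 255, 0, 255, 255, 0, 127, 0, 255, 255, 127, 127, 127, 255]

def label_color(label):
    """Return (r, g, b) for a component label; label 0 is background."""
    if label == 0:
        return (0, 0, 0)
    i = (label - 1) % 15   # cycle the palette for labels beyond 15
    return (r[i], g[i], b[i])
```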
|
|
|
|
|
Also take a look here[^]; there are text tables for each array as well.
Russell
|
|
|
|
|
I am very worried about a man-in-the-middle attack on my public-key cryptography and want to set up a shared-key logon. I want each client to have an individual shared key to reduce the likelihood of a known-plaintext/ciphertext attack. I would also really like to set up key distribution so that each user receives a new key for each session, and when the session is closed the client receives a new key for the next logon. I want this to be like an SSL logon session, except that I want the entire website encrypted. Could someone please post any good references for a shared-key logon, or some code?
|
|
|
|
|
A general suggestion: depending on a single cipher, no matter how good, is brittle. Layering multiple ciphers with independent keys means an attacker has to break every layer, not just one.
BTW, how are you distributing keys? If the man in the middle is intercepting communications, he'd also get the new keys.
|
|
|
|
|
I've just finished writing a utility to migrate data from optical libraries to a NAS box and now I'm trying to come up with a formula to estimate the time of completion given the following data:
1) Total amount of data
2) No: of drives in library
3) Average read speed of the drives
4) Total no: of files
5) Fixed overhead for each file
6) Average write speed of the NAS box (this also takes into account the network write speed)
The formula that I am using now looks like this:
(Total data / (No: of drives * Average read speed)) + (Total files * Fixed overhead) + (Total data/ Average write speed)
I don't think this is right in all cases. The utility launches one thread for each drive in the library, so there is some parallelisation of the copy process. But I think the above formula would only work if the copying were done sequentially.
Does anyone have a better idea on how to do this by taking into account that the reads and writes happen in parallel?
Please note that in the program itself, I just use the no: of files processed so far and the time taken to process them to guesstimate the time remaining. This formula is to create an Excel file where the user can enter the data given above and get an approximate time of completion before actually starting the migration.
Any help is greatly appreciated.
The user formerly known as pkam.
|
|
|
|
|
I think the gating factor is the slowest of reading, writing, or the data-transfer rate. It doesn't matter how fast you can read the data if writing is slower; if reading can't keep up with writing, then reading is the limiting process. Of course, this analysis is based on aggregate rates, which may be hard to judge, but it seems you have some average values to work with.
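The gating-factor idea can be written directly into the estimate: the sustained copy rate is bounded by the slower of the aggregate read rate and the write rate. A rough sketch in Python (the variable names, and the assumption that per-file overhead does not overlap with the copy, are mine):

```python
def estimate_seconds(total_bytes, num_drives, read_rate, write_rate,
                     num_files, per_file_overhead):
    """Rough completion-time estimate for a parallel copy.

    Reads run on num_drives drives in parallel, so the aggregate read
    rate is num_drives * read_rate; the pipeline as a whole moves data
    at the slower of that and the NAS write rate.  Per-file fixed
    overhead is added on top (assumed not to overlap with the copy).
    """
    bottleneck_rate = min(num_drives * read_rate, write_rate)
    return total_bytes / bottleneck_rate + num_files * per_file_overhead
```

With realistic average rates this reduces to the original formula whenever one side clearly dominates, but avoids double-counting the read and write times when they overlap.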
If you don't have the data, you're just another a**hole with an opinion.
|
|
|
|
|
Thank you for taking the time to respond to my question, Tim.
The user formerly known as pkam.
|
|
|
|
|
There are complexities and interactions you can't anticipate, so a more reliable approach is to make completion-time measurements for different parameter combinations.
Looking at the graphs of times for different values of a single parameter will give you insight as to how it really affects completion time.
Multiple regression will give you formulas that estimate completion time based on the values of multiple parameters.
|
|
|
|
|
Thank you for your response Alan. I just needed a rough estimate. So for now I'm using the method suggested by Tim. Also the utility has currently been tested only on a small test configuration. When we do further testing, I'll try out your method.
The user formerly known as pkam.
|
|
|
|
|
Hi, can somebody tell me how to get the equation of a spline through n points? In my problem, I had 3D points scattered all over the space, and I projected them onto a plane. I am supposed to join these points and create a spline using a C/C++ program. Can anyone help me out with this? Please do let me know. Thanks in advance.
|
|
|
|
|
Does your homework problem specify a particular type of spline? There are lots.
If you don't have the data, you're just another a**hole with an opinion.
|
|
|
|
|
My task was actually to develop a program which would generate a curve to connect a set of points scattered all over the space, and for that I wanted to create a cubic spline. Anyway, thanks for your very humble reply.
|
|
|
|
|
The standard approach to a 3D spline is to treat the X, Y, and Z coordinates separately, and do three 2D splines. I.e. do a spline for all the X's vs. their time index (0, 1, 2, 3...), then the Y's and Z's.
To interpolate for a fractional time index, you get the X, Y, and Z coordinates independently from the three corresponding 2D splines.
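A minimal sketch of this per-coordinate approach, in Python for brevity, using a natural cubic spline (zero second derivative at the endpoints is an assumed boundary condition) over the integer time index:

```python
def natural_cubic_spline(y):
    """Return second derivatives M of a natural cubic spline through
    the values y taken at integer knots 0, 1, ..., n-1 (unit spacing)."""
    n = len(y)
    M = [0.0] * n
    if n < 3:
        return M                     # a line (or a point) has no curvature
    # Tridiagonal system: M[i-1] + 4 M[i] + M[i+1] = rhs, M[0] = M[n-1] = 0.
    rhs = [6.0 * (y[i + 1] - 2.0 * y[i] + y[i - 1]) for i in range(1, n - 1)]
    # Thomas algorithm: forward sweep, then back-substitution.
    c = [0.0] * (n - 2)              # scaled superdiagonal
    d = [0.0] * (n - 2)              # scaled right-hand side
    c[0] = 1.0 / 4.0
    d[0] = rhs[0] / 4.0
    for i in range(1, n - 2):
        beta = 4.0 - c[i - 1]
        c[i] = 1.0 / beta
        d[i] = (rhs[i] - d[i - 1]) / beta
    for i in range(n - 3, 0, -1):
        d[i - 1] -= c[i - 1] * d[i]
    M[1:n - 1] = d
    return M

def spline_eval(y, M, x):
    """Evaluate the spline at a fractional time index x in [0, n-1]."""
    i = min(int(x), len(y) - 2)
    t = x - i
    return (y[i] * (1 - t) + y[i + 1] * t
            + ((1 - t) ** 3 - (1 - t)) * M[i] / 6.0
            + (t ** 3 - t) * M[i + 1] / 6.0)

def spline3d(points, x):
    """3D interpolation: run the 2D spline independently on X, Y, Z."""
    return tuple(spline_eval(c, natural_cubic_spline(c), x)
                 for c in zip(*points))
```

The spline passes through every input point exactly at the integer indices, and `spline3d` just applies the same 1D machinery to each coordinate column, as described above.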
|
|
|
|
|
Thank you very much for the reply.
|
|
|
|
|
Hi,
I'm trying to create a program that guesses the next colour the user will select, based on previous selections. I've come across Markov chains, and this seems like the tool I need. However, after reading a lot of the theory, I'm having difficulty implementing it (I'm using C#). Basically, I have five colours (Red, Green, Blue, Yellow, Black). The user selects a colour, then another one, and so on. After N colours have been chosen, the user selects "predict". I want the program to then guess what the next colour would most likely be, based on the previous selections. How do I calculate the values of the transition matrix each time the user selects a colour? And how do I let the program 'learn' that the user likes a particular order?
Thanks for any help
|
|
|
|
|
Your assumption that there's a pattern to the user's selections may not be correct.
You can store the total number of times the user selected each color. The highest number would be the user's favorite color selection.
You could also have an N-by-N matrix representing two consecutive color selections. (The X coordinate is the ith selection, and the Y coordinate is the (i+1)th selection.) The number at each (X, Y) pair is the number of times this sequence was chosen. Thus for any selection, X, you could see what the most common following color, Y, was, and use that for your prediction.
Likewise, this could be extended beyond the 2-dimensional case to tabulate longer sequences.
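The pairwise-count idea can be sketched as follows (in Python, with illustrative names); using a dictionary of counters instead of a dense N-by-N matrix also avoids allocating storage for transitions that never occur:

```python
from collections import defaultdict

class TransitionPredictor:
    """Counts how often each colour follows each other colour, then
    predicts the most frequent successor of the last selection."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))
        self.last = None

    def select(self, colour):
        """Record a selection, updating the transition counts."""
        if self.last is not None:
            self.counts[self.last][colour] += 1
        self.last = colour

    def predict(self):
        """Most common colour seen after the last selection, or None."""
        followers = self.counts.get(self.last)
        if not followers:
            return None
        return max(followers, key=followers.get)
```

Each `select` is one increment, and `predict` is a scan over the successors of a single colour, so this stays cheap even as the history grows.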
|
|
|
|
|
Hi,
Thanks for the reply! The method you suggested was the one I originally tried, but I'm not sure how efficient it would be if, instead of a few colours, I used 5000, as the matrix would be huge. Could you suggest a more efficient method for large-scale predictions? I heard neural networks might work, but I'm not sure which method would be best.
|
|
|
|
|
It's difficult to use a neural net with non-numeric data, e.g. colors. But you could use a neural net with the colors' red, green, and blue components, which ARE numeric.
So you'd have three input neurons for each previous color, e.g. for two previous colors you'd have r0, g0, b0, r1, g1, b1. There would be either three output neurons for the RGB components of the predicted color, or you could have three separate neural nets for the three outputs, each with a single output neuron. (The inputs would be the same for all three nets.)
You should have at least one, and possibly several middle layers.
The training set would have RGB components for the previous colors, and the actual selected color's RGB values, which you'd subtract from the predicted color's RGB values to generate error terms for back-propagation.
(This is all assuming there's a pattern to the user's selections, which hasn't been established.)
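A minimal sketch of such a net in Python (the layer sizes, learning rate, and omission of bias terms are simplifications for brevity, not a tuned design): six inputs for the RGB components of the two previous colours, one hidden layer, three outputs for the predicted colour, trained by backpropagation on the predicted-minus-actual error.

```python
import math
import random

random.seed(0)
N_IN, N_HID, N_OUT = 6, 8, 3   # two previous RGB colours in, one RGB out
W1 = [[random.uniform(-0.5, 0.5) for _ in range(N_IN)] for _ in range(N_HID)]
W2 = [[random.uniform(-0.5, 0.5) for _ in range(N_HID)] for _ in range(N_OUT)]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x):
    """One forward pass; RGB values are assumed scaled to [0, 1]."""
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    o = [sigmoid(sum(w * hi for w, hi in zip(row, h))) for row in W2]
    return h, o

def train_step(x, target, lr=0.5):
    """One backpropagation step; returns the squared error before update."""
    h, o = forward(x)
    delta_o = [(o[k] - target[k]) * o[k] * (1 - o[k]) for k in range(N_OUT)]
    delta_h = [h[j] * (1 - h[j]) *
               sum(delta_o[k] * W2[k][j] for k in range(N_OUT))
               for j in range(N_HID)]
    for k in range(N_OUT):
        for j in range(N_HID):
            W2[k][j] -= lr * delta_o[k] * h[j]
    for j in range(N_HID):
        for i in range(N_IN):
            W1[j][i] -= lr * delta_h[j] * x[i]
    return sum((o[k] - target[k]) ** 2 for k in range(N_OUT))
```

Repeated calls to `train_step` on the recorded (previous colours, actual selection) pairs drive the error down, which is exactly the training loop described above.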
|
|
|
|
|
Apologies for the delay, I've been 'computerless' for some time. I've been thinking over your suggestion. I've used colours in this example to keep it simple, but in reality it's going to be images. Each image has various attributes (which I assume are inputted in the feature vector), such as "category", "size", "filetype", etc. The aim of the program is to monitor what images the user selects (there could be thousands). Although there may not necessarily be a pattern, the program should take into consideration when the user selects images (e.g. is it opened from Windows Explorer, or from Photoshop? If the latter, then chances are they want a JPG, as that's what they have been opening before). As you can probably tell, this isn't a 'real' program, it's just for experimentation. However, would neural networks still be the way forward? Would I use backpropagation so that if the user selects a GIF instead of a JPG, the error is reduced slightly, and eventually it'll always open GIF instead of JPG? How would I check that the user usually selects category 'Car' after category 'Wallpaper'?
Thanks. Sorry if this is confusing. I'll clarify any points that are ambiguous.
|
|
|
|
|
I just got back from a vacation too.
Neural nets don't work well with non-numeric data such as {GIF, JPG, ...} or {Car, Wallpaper, ...}, unless there are exactly two choices which can be modeled as 0 or 1.
In a system involving an intelligent agent (the user), one must take into account the Principle of Rationality: An intelligent agent will choose actions that will help it accomplish its goals. What are your users' goals?
It's also possible there's an order-independent clustering of attributes selected, e.g. the user has a preference for small, brightly colored images.
What are your goals for this program?
|
|
|
|
|
Hi Alan!
Would it be possible to enumerate "GIF, JPG" etc... and use those in the neural net? Or is it only good if it's binary?
The goal of this program is to adapt the interface to each user. The user logs into their profile, and the interface would then be different based on what the user usually does. So for someone who tends to select 'car' after 'wallpapers', the interface would automatically streamline to have those two near each other. There would be a huge number of permutations, and so I am wary of simply having a huge matrix that contains all possible combinations and their relative probabilities.
Thanks for the help.
|
|
|
|
|
If you have more than two items in an enumeration, you're implying an ordering that doesn't exist to the neural net. You could have separate input neurons for GIF, JPG, etc. that would each be 0 or 1.
Adapting the user interface to streamline a user's work flow is a good idea. It might help to look at logs of actual users' patterns to see what you'd get the most benefit from optimizing.
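The separate-input-neuron idea is standard one-hot encoding: each categorical attribute becomes a group of 0/1 inputs, one per possible value, so the net sees no spurious ordering. A small sketch (the attribute values are illustrative):

```python
# Hypothetical attribute vocabularies for the image-selection example.
FILETYPES = ["GIF", "JPG", "PNG"]
CATEGORIES = ["Car", "Wallpaper", "Portrait"]

def one_hot(value, choices):
    """Encode a categorical value as a 0/1 vector over its choices."""
    return [1.0 if value == c else 0.0 for c in choices]

def encode(filetype, category):
    """Concatenate the one-hot groups into one input vector."""
    return one_hot(filetype, FILETYPES) + one_hot(category, CATEGORIES)
```

Each group contributes exactly one active ("firing") input neuron, and adding a new attribute just appends another group to the vector.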
|
|
|
|
|
I think that's the best way to go about it. I should have each attribute as separate input neurons and see if they 'fire' the neuron. If I were to use neural networks, am I correct in thinking that the best way to do this is with the backpropagation method? Or is there one better suited to this scenario?
Also, could I achieve the same effect using (Hidden?) Markov models?
Thanks again for the help. Much appreciated!
|
|
|
|
|
The backpropagation method is the standard algorithm for training a neural net. There are many variants but there's no consensus yet on which is better. They generally give similar results.
With neural nets, you can take the user's last n selections into account by simply adding more input neurons. With a (first-order) Markov process, only the last state is considered, though higher-order Markov models can look further back at the cost of a much larger state space.
|
|
|
|
|
What's the algorithm for finding the shortest path for connecting points on a plane, provided that:
1- You can use unlimited branches
2- You can move vertically and horizontally only, no diagonal moving
3- The first point, which is the starting point, is predefined
Please provide me with an algorithm, flowchart, or VB/C++ code for this problem.
Thank you
|
|
|
|
|