|
If you want some help, try explaining what you need without cryptic terms you haven't defined.
|
|
|
|
|
I'm currently studying cryptography implementation in depth. To that end I'm reading various RFCs and trying to implement the algorithms so I can get a better understanding of them and hopefully make my applications more secure when cryptography is used. Right now I'm reading the TLS RFC 5246.
The first algorithm in the document is a pseudorandom function (PRF) that takes a secret, a label, and a seed and produces an output of a specified length. This is Section 5 (page 14 in the PDF) of the document. It defines a data expansion function P_hash(secret, seed) that uses a single hash function to expand a secret and seed to an arbitrary length:
Pseudo-code (Page 15):
P_hash(secret, seed) = HMAC_hash(secret, A(1) + seed) +
HMAC_hash(secret, A(2) + seed) +
HMAC_hash(secret, A(3) + seed) + ...
where + indicates concatenation.
A() is defined as:
A(0) = seed
A(i) = HMAC_hash(secret, A(i-1))
P_hash is iterated as many times as necessary to produce the required length. The example given in the RFC is that if P_SHA256 is being used to create 80 bytes, it will be iterated three times (through A(3)), creating 96 bytes of data, where the last 16 bytes of the final iteration are discarded to leave the needed 80 bytes.
The PRF is created by applying P_hash to the secret as follows:
PRF(secret, label, seed) = P_<hash>(secret, label + seed)
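For comparison, here is a minimal sketch of the same P_hash/PRF expansion in Python using the standard hmac module (the function and parameter names are my own, not from the RFC):

```python
import hmac

def p_hash(secret: bytes, seed: bytes, req_length: int, hash_name: str = "sha256") -> bytes:
    """RFC 5246 P_hash: expand secret and seed to req_length bytes."""
    out = b""
    a = seed                                         # A(0) = seed
    while len(out) < req_length:
        a = hmac.new(secret, a, hash_name).digest()  # A(i) = HMAC_hash(secret, A(i-1))
        out += hmac.new(secret, a + seed, hash_name).digest()
    return out[:req_length]                          # surplus bytes are discarded

def prf(secret: bytes, label: bytes, seed: bytes, req_length: int, hash_name: str = "sha256") -> bytes:
    """PRF(secret, label, seed) = P_hash(secret, label + seed)."""
    return p_hash(secret, label + seed, req_length, hash_name)
```

Note that because each iteration appends one whole HMAC output, P_hash is prefix-consistent: asking for fewer bytes with the same inputs yields a prefix of the longer output.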
I've defined the two functions (PRF and P_hash) but added two additional parameters: the first is reqLength, to set the output length, and the second is HmacLength, an enum that restricts the allowed hash function to SHA-256, SHA-384, or SHA-512.
The code that follows I believe implements the RFC correctly; however, I feel it is inefficient because of how I'm going back and forth between List(Of Byte) and Byte() arrays, but I can't figure out how to simplify it, possibly because I've been programming all day, or maybe because it is 1:30 AM. Either way, I was hoping someone could help simplify the code, because everything I tried (like eliminating some of the loops) resulted in code that did not compile for various reasons.
The code I have so far is this:
Imports System.Collections.Generic
Imports System.Security.Cryptography

Public Class PRF
    Public Enum P_SHA
        HMAC_256
        HMAC_384
        HMAC_512
    End Enum

    Public Function PRF(secret As Byte(), label As Byte(), seed As Byte(), reqLength As Integer, Optional HmacLength As P_SHA = P_SHA.HMAC_512) As Byte()
        ' label + seed, concatenated once up front
        Dim labelAndSeed(label.Length + seed.Length - 1) As Byte
        label.CopyTo(labelAndSeed, 0)
        seed.CopyTo(labelAndSeed, label.Length)
        Return P_hash(secret, labelAndSeed, reqLength, HmacLength)
    End Function

    Private Function P_hash(secret As Byte(), seed As Byte(), reqLength As Integer, Optional HmacLength As P_SHA = P_SHA.HMAC_512) As Byte()
        Dim HMAC_hash As HMAC
        Select Case HmacLength
            Case P_SHA.HMAC_256
                HMAC_hash = New HMACSHA256(secret)
            Case P_SHA.HMAC_384
                HMAC_hash = New HMACSHA384(secret)
            Case Else
                HMAC_hash = New HMACSHA512(secret)
        End Select

        Dim data As New List(Of Byte)
        Dim A As Byte() = seed                       ' A(0) = seed
        Do Until data.Count >= reqLength
            A = HMAC_hash.ComputeHash(A)             ' A(i) = HMAC_hash(secret, A(i-1))
            Dim block(A.Length + seed.Length - 1) As Byte
            A.CopyTo(block, 0)
            seed.CopyTo(block, A.Length)
            data.AddRange(HMAC_hash.ComputeHash(block)) ' HMAC_hash(secret, A(i) + seed)
        Loop
        Return data.GetRange(0, reqLength).ToArray() ' discard any surplus bytes
    End Function
End Class
The conversions between List(Of Byte) and Byte() are the parts of the code that I imagine there is a way to eliminate, although I can't see it.
Any suggestions or advice would be greatly appreciated. Thanks in advance.
|
|
|
|
|
I have gone through the event mechanism in the MS documentation, locking, and send/publish notification. I have not gone through concurrent collections yet.
Consider, as the definition says, events as a particular type of delegate. When a declared delegate is invoked, the corresponding method is invoked. Assume the invocation is made for anonymous methods, though the method body is usually better expressed as a lambda expression, closure, or functor. So I am considering evaluating closures with the System.Reflection namespace. Intuitively, it seems System.Reflection offers everything needed for this. Can closures somehow be evaluated differently?
modified 10-Aug-13 7:51am.
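As a language-neutral illustration of inspecting a closure's captured state by reflection (in .NET the captured variables live on a compiler-generated class whose fields System.Reflection can enumerate), here is the analogous sketch in Python, with invented names:

```python
def make_counter(start):
    """A closure: step() captures 'count' from the enclosing scope."""
    count = start
    def step():
        nonlocal count
        count += 1
        return count
    return step

counter = make_counter(10)
counter()  # count is now 11

# "Reflection" on the closure: each captured variable lives in a cell object
names = counter.__code__.co_freevars
values = [cell.cell_contents for cell in counter.__closure__]
```

The point is that the closure's state is ordinary data the runtime can expose, so reflection can read it; what reflection cannot do by itself is re-evaluate the closure's body differently.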
|
|
|
|
|
Does anyone know of a paid or free C# implementation of GARCH?
I have thumbed through a few Excel spreadsheet samples and C++ code in Boost, as well as statistics-library source, but those implementations seem too complicated because I want something for educational purposes. I may eventually move to the industrial versions found in those libraries, but it is difficult to start from there.
GARCH stands for Generalized AutoRegressive Conditional Heteroskedasticity. It is a statistical model for estimating time-varying volatility in a series.
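For educational purposes, the core of a GARCH(1,1) model is a one-line variance recursion. Here is a minimal, hypothetical sketch (parameter names are illustrative; a real implementation would also estimate omega, alpha, and beta by maximum likelihood rather than take them as given):

```python
def garch11_variances(returns, omega, alpha, beta, initial_var):
    """GARCH(1,1) conditional variances:
    sigma2[t] = omega + alpha * returns[t-1]**2 + beta * sigma2[t-1]
    Returns one variance per observation, seeded with initial_var."""
    sigma2 = [initial_var]
    for r in returns[:-1]:
        sigma2.append(omega + alpha * r * r + beta * sigma2[-1])
    return sigma2
```

Fitting the three parameters is the hard part the industrial libraries handle; the recursion itself is this simple.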
|
|
|
|
|
How do you find the shortest path between two nodes in an undirected graph, with the constraint that the path must pass through several given intermediate nodes? The graph is sparse and may not be connected.
The weight of every edge in the graph is the same. The number of nodes in the graph is about several thousand.
Do you have some ideas about it? Looking forward to your reply!
|
|
|
|
|
I think the best way of solving this is divide and conquer / dynamic programming. Find the shortest path from the first node using BFS, stopping when you reach the first of the intermediate nodes. From that node, use BFS again to look for the next intermediate node, and so on until you reach the last intermediate node. Finally, use BFS to find the path between the last visited intermediate node and the destination node.
Before all of that, in case your graph is not connected, you can check whether the first, last, and all of the intermediate nodes are in one component.
Another way of solving the problem is exhaustive search: generate all possible paths and look for the minimal path that contains the intermediate nodes.
I hope this will help you
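To make the pairwise-distance idea concrete, here is a sketch (in Python, names invented): run BFS from every terminal once, then try each possible ordering of the intermediates; with only a handful of intermediates the permutation count stays small, unlike full path enumeration:

```python
from collections import deque
from itertools import permutations

def bfs_dist(adj, src):
    """Single-source shortest-path lengths in an unweighted graph."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def shortest_via(adj, start, end, waypoints):
    """Length of the shortest walk start -> end visiting every waypoint,
    found by trying each waypoint order against precomputed BFS distances.
    Returns None if some leg is unreachable (disconnected graph)."""
    terminals = set([start, end] + list(waypoints))
    dist = {t: bfs_dist(adj, t) for t in terminals}
    best = None
    for order in permutations(waypoints):
        stops = [start] + list(order) + [end]
        total, feasible = 0, True
        for a, b in zip(stops, stops[1:]):
            if b not in dist[a]:
                feasible = False
                break
            total += dist[a][b]
        if feasible and (best is None or total < best):
            best = total
    return best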
Microsoft ... the only place where VARIANT_TRUE != true
|
|
|
|
|
Thank you for your reply. But the problem is that the order of the intermediate nodes is uncertain. We don't know which comes first and which comes next.
|
|
|
|
|
That's why, when you use BFS and start visiting the child nodes of the current node, checking whether each is one of the intermediates, you will find the closest intermediate node. And from there you will determine their order.
|
|
|
|
|
But in some cases the closest intermediate node cannot be the first node of the path. If you choose the closest intermediate node as the first node, some other intermediate node may not be reachable via a simple path. In other words, the path we want to find should not have duplicate nodes.
|
|
|
|
|
Actually, you can have duplicate nodes in your path. Consider the following possibility:
A is your first node, B the last, and P, Q, R are the intermediates, with the following edges:
(A,P) (A,Q) (Q,R) (R,B). You cannot escape duplicating A; it is impossible to make a path without it.
If the algorithm creates a path with duplication of some of the nodes, that means you have a similar situation. Normally with BFS you still track the visited nodes, so you can filter out the visited ones, or better, put them in a different queue to be used only if you don't find another path between the current two nodes.
Note: if you filter them, in some cases you won't find a path.
|
|
|
|
|
The situation you have mentioned may indeed exist. In that case, we can confirm that no path meets the condition. If we find all the possible paths between the two nodes using DFS or BFS and then choose the path which meets the condition, we will find that it is too slow to complete for a huge number of nodes.
|
|
|
|
|
You can use Dijkstra's algorithm for finding the shortest distance between any two nodes. There is also Kruskal's algorithm (though that builds a minimum spanning tree rather than shortest paths); you can try that too. If you need any help, I am ready to help.
|
|
|
|
|
Image compression using half an coding or any other algorithm?
|
|
|
|
|
snehal122 wrote: "half an coding". What's that?
|
|
|
|
|
cod.
Use the best guess
|
|
|
|
|
"an coding" => "encoding"
"half" => hm... I do not know an encoding with such (or a similar) name
|
|
|
|
|
Can you tell me the mechanism of Huffman encoding? Thanks.
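In a nutshell, the Huffman mechanism is: count symbol frequencies, repeatedly merge the two least-frequent subtrees into one node, then read each symbol's code off the resulting tree (0 for the left branch, 1 for the right). A minimal sketch (names are my own):

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a prefix-free Huffman code for the symbols in text."""
    # Heap entries are [frequency, tiebreak, subtree]; leaves are symbols,
    # internal nodes are (left, right) tuples.
    heap = [[freq, i, sym] for i, (sym, freq) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    i = len(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        heapq.heappush(heap, [lo[0] + hi[0], i, (lo[2], hi[2])])
        i += 1
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            codes[node] = prefix or "0"   # single-symbol edge case
    walk(heap[0][2], "")
    return codes
```

Frequent symbols end up near the root and so get the shortest codes, which is where the compression comes from.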
|
|
|
|
|
I am using the SplineInterpolator class in Java, and am getting some unexpected results. I think it is because of the restriction that cubic splines are required to have a continuous second derivative at each knot, but am not sure. Here is the observed peculiarity:
Let's say I have five data points, (x1, y1), (x2, y2), ... (x5, y5), and create a cubic spline interpolation between them. Then I compute another cubic spline interpolation over only the first four of the five knots.
My expectation was that the first two cubic segments (the segment between points 1 and 2, and the segment between points 2 and 3) would be identical between the two interpolations, but this is not the case. Instead, each segment has a different interpolating function.
Is there any way to compute a cubic spline interpolation that has the same interpolating function between all common knots (excluding x segments at each end, where x is some constant)? I know this is possible if I ignore the requirement that the second derivative is continuous, but then it is not technically a cubic spline. Also, if there is already a Java library that does what I am asking, please let me know! Thanks,
Sounds like somebody's got a case of the Mondays
-Jeff
modified 23-Jul-13 17:57pm.
|
|
|
|
|
Just think again: that is the expected behavior when the "real" function is not a cubic function. And the interpolation is best between the two central points of the input data.
The underlying formula is a*x^3 + b*x^2 + c*x + d = y. Enter 4 x-values with the corresponding y-values, and calculate a, b, c, d.
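Determining a, b, c, d from four points, as described above, amounts to solving a 4x4 Vandermonde linear system. A small stdlib-only sketch (names invented):

```python
def cubic_through(points):
    """Coefficients (a, b, c, d) of a*x^3 + b*x^2 + c*x + d through 4 points,
    via Gaussian elimination with partial pivoting on the Vandermonde system."""
    A = [[x**3, x**2, x, 1.0] for x, _ in points]
    b = [float(y) for _, y in points]
    n = 4
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coeffs = [0.0] * n
    for r in range(n - 1, -1, -1):
        coeffs[r] = (b[r] - sum(A[r][c] * coeffs[c] for c in range(r + 1, n))) / A[r][r]
    return coeffs
```

This fits a single cubic through exactly four points, which is a different task from the piecewise spline discussed in this thread.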
|
|
|
|
|
I understand that, but given the exact same data points, shouldn't the cubic interpolation of those points always be the same? I think the problem is the additional restriction that the second derivative be continuous between each cubic interpolation. Without such a restriction, I could compute a cubic interpolation between each pair using a slope determined from the neighboring data points (e.g., each piece of the interpolating method would be defined ONLY by the two points involved, and the two points surrounding those two points). However, such an approach would not have a continuous second derivative. In order to accomplish such a goal, I think the algorithm propagates the second derivatives through to match at each endpoint, instead of the first derivatives, which results in slightly different solutions when provided a consecutive subset of the original data points.
Hopefully this clarifies the problem (I am not fitting a single polynomial to two different data sets; I am applying a 3rd degree polynomial between each pair of consecutive data points).
|
|
|
|
|
No. You are confusing the task of determining the coefficients of a one-dimensional cubic polynomial from 4 data points with the task of interpolating an arbitrary number of 3D points with a piecewise polynomial cubic spline curve.
The latter creates one new spline segment for each additional point beyond the first. So if you have 4 points, you get three segments, not one!
|
|
|
|
|
The only way to satisfy your requirement is a polygon: by your own requirement, a spline over N points should equal the joining of the N-1 splines you get when creating splines for each pair of consecutive points in the sequence. Since with two points the resulting spline curve is always a line segment, the full spline will be a polygon.
The moment you introduce continuity conditions for the first or higher derivative, the spline needs to look ahead to a point or points beyond its scope, so it can determine what its derivatives must evaluate to at its end points.
|
|
|
|
|
For anyone else looking at this, I figured out the answer. A cubic spline over N+1 points solves for the 4 coefficients of each of the N cubic spline segments. This implies that you need exactly 4N equations to compute a unique spline over the data points (since you have 4N unknowns).
A cubic spline uses the following set of 4N-2 equations to compute the 4N cubic coefficients (the other two equations are boundary conditions, usually f''_0(x_0) = f''_{N-1}(x_N) = 0):
1) f_n(x_n) = y_n (N equations)
2) f_n(x_{n+1}) = y_{n+1} (N equations)
3) f'_n(x_{n+1}) = f'_{n+1}(x_{n+1}) (N-1 equations)
4) f''_n(x_{n+1}) = f''_{n+1}(x_{n+1}) (N-1 equations)
As you can see from the above equations, the spline is not matched to a specified slope at each point; it only guarantees that two consecutive segments will have a continuous first (and second) derivative. Because of this, I had to use cubic interpolation between each pair of points using the equation given as Equation 1 on the Wikipedia Cubic Spline page. This results in a piecewise interpolation that has a continuous first derivative but does not have a continuous second derivative. In essence, this changes equalities 3 and 4 above to:
3) f'_n(x_n) = k_n (N equations)
4) f'_n(x_{n+1}) = k_{n+1} (N equations)
Not a spline, but it consistently fits any consecutive subset of a dataset with the exact same cubic polynomial interpolations. I use the data points adjacent to those defining a segment's endpoints to estimate the slope at the segment endpoints.
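The neighbour-slope construction described above is essentially piecewise cubic Hermite interpolation. A hypothetical sketch of the Hermite form (function names and the finite-difference slope choice are my own): because each segment depends only on its two endpoints and their neighbour-derived slopes, common segments match exactly across consecutive subsets of the data.

```python
def slope(xs, ys, i):
    """Finite-difference slope k_i at knot i (one-sided at the ends)."""
    if i == 0:
        return (ys[1] - ys[0]) / (xs[1] - xs[0])
    if i == len(xs) - 1:
        return (ys[-1] - ys[-2]) / (xs[-1] - xs[-2])
    return (ys[i + 1] - ys[i - 1]) / (xs[i + 1] - xs[i - 1])

def hermite_interp(xs, ys, x):
    """Evaluate the piecewise cubic Hermite interpolant at x (x within range).
    C1-continuous at the knots, but not C2 like a natural cubic spline."""
    i = max(j for j in range(len(xs) - 1) if xs[j] <= x)
    x1, x2, y1, y2 = xs[i], xs[i + 1], ys[i], ys[i + 1]
    k1, k2 = slope(xs, ys, i), slope(xs, ys, i + 1)
    t = (x - x1) / (x2 - x1)
    a = k1 * (x2 - x1) - (y2 - y1)
    b = -k2 * (x2 - x1) + (y2 - y1)
    return (1 - t) * y1 + t * y2 + t * (1 - t) * ((1 - t) * a + t * b)
```

Segments whose endpoint slopes use only interior knots are unchanged when trailing data points are dropped, which is the consistency property asked about above.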
|
|
|
|
|
I am looking for a statistical method or algorithm to analyse change over time and to determine whether any value is outside of a range and is a false positive or an outlier.
The context is that I have webcams which monitor movement.
The movement returned is as a percentage of change between frames.
Occasionally a cloud covers the sun or a shadow falls across the webcam image which will cause a spike in percentage as change reported.
A normal reading may fluctuate by up to 10% over 0.5 seconds.
A sudden change in lighting would change the percentage suddenly, by say 60%, and I would consider this a false positive.
So what I am looking for is some way of analysing change over time and catching the false positives rather than reporting them as movement.
Any pointers or links much appreciated.
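One standard approach matching this description is a robust z-score against a rolling median: compare each reading to the median of recent history, scaled by the median absolute deviation (the 1.4826 factor makes MAD comparable to a standard deviation for normal data). A sketch with invented names and thresholds:

```python
from statistics import median

def spike_filter(values, window=20, threshold=3.5):
    """Flag readings whose robust z-score against the rolling median of the
    previous `window` samples exceeds `threshold` (likely false positives)."""
    flags = []
    for i, v in enumerate(values):
        history = values[max(0, i - window):i]
        if len(history) < 5:                  # not enough context yet
            flags.append(False)
            continue
        m = median(history)
        mad = median(abs(h - m) for h in history) or 1e-9   # avoid divide-by-zero
        flags.append(abs(v - m) / (1.4826 * mad) > threshold)
    return flags
```

The median/MAD pair is preferred over mean/standard deviation here because the spikes you want to catch would otherwise inflate the very statistics used to detect them.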
“That which can be asserted without evidence, can be dismissed without evidence.”
― Christopher Hitchens
|
|
|
|
|