|
The formatting in the pseudocode for the algorithms seems to be messed up. Variables are missing and the algorithms are basically unreadable.
An example: "For i= 1 to do"
1 to what?
"They have a consciousness, they have a life, they have a soul! Damn you! Let the rabbits wear glasses! Save our brothers! Can I get an amen?"
|
|
|
|
|
I do agree, but the original article was on a blog (at the URL the author gave in 2015) which is no longer available (the domain name has expired and has been cybersquatted).
So as is, the article is now unusable.
It is a known fact that the "shortest" path is an NP problem and has no satisfactory solution with bounded time or resources.
The best solutions (used for example for routing in geographic applications, or in electronics) are heuristics: instead of using the weights very strictly, split the graph into subgraphs using reasonable thresholds (based on simple statistics such as the average and standard deviation), so that all nodes in a subgraph are within a given distance range, then recompute the thresholds on each subgraph recursively. You get a first solution on which you can compute the actual weights between all nodes/vertices of each minimal subgraph (for which you have an algorithmically exact solution).
Then sort the subgraphs by their standard deviation and join them in pairs, starting with the two subgraphs that have the largest deviation, to recompute the estimated statistics (average and standard deviation; note that these are estimates because you have not tested all pairs, but you can optimize algorithmically by using a random half of the sets of vertices/nodes from the two subgraphs; knowing that these statistics then represent only about a quarter of the full join, you need to correct the standard deviation because it comes from a sample of the population and not the full population). On this new subgraph find the shortest paths, quick-sort the vertices/nodes, and then restart once by selecting half the population.
Once this is done, your newly joined subgraph should sort into the list of other subgraphs at a lower position, because it should now have a lower standard deviation.
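A minimal sketch of this splitting step (the function name, the mean-as-threshold choice, and the stopping rule are my own illustration, not taken from any original article):

```python
import statistics

def split_by_weight(edges, max_size=8):
    """Recursively partition (u, v, w) edges around the mean weight --
    a crude statistics-based threshold -- until each bucket is small
    enough to be handed to an exact algorithm."""
    if len(edges) <= max_size:
        return [edges]
    mean = statistics.mean(w for _, _, w in edges)
    low = [e for e in edges if e[2] < mean]
    high = [e for e in edges if e[2] >= mean]
    if not low or not high:  # all weights equal: cannot split further
        return [edges]
    return split_by_weight(low, max_size) + split_by_weight(high, max_size)

edges = [("a", "b", 1), ("b", "c", 2), ("c", "d", 50), ("d", "e", 55),
         ("e", "f", 3), ("f", "g", 60), ("g", "h", 2), ("h", "a", 58),
         ("a", "c", 4)]
buckets = split_by_weight(edges, max_size=4)
```

A real implementation would recompute per-subgraph statistics and track node membership rather than bare edge lists; this only shows the divide step.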
Anyway, the article focuses on the problem of negative costs: this is a theoretical problem, but it generally comes from an incorrect modelling of the problem, where you have not properly isolated its variables, or where you have chosen the wrong metric by evaluating the weights with a direct sum: you should still be able to change the weighting function by transforming it into a positive one. The simple arithmetic sum of weights is not necessarily the best way to evaluate the total cost of traversing two edges. Such cases exist in financial applications (e.g. purchase costs vs. sale revenues), but the simple balance is not representative, because it is often beneficial first to maximize the number of transactions to minimize the risks for the same final balance; what matters then is not really the sign but the absolute value of the individual transactions, which should be minimized. But then you have fixed costs per transaction and fixed revenues for processing them; their sum is easily predictable, and you can make the total highly profitable if you again maximize these transactions (whose processing can be largely automated).
If you can't avoid the negative costs, you can at least offset all of them by an arbitrary value, so that the sum of positive costs in your graph is at least 50% higher than the absolute sum of the negative weights: traversing your graph should then converge rather than diverge, and you can easily split the graph into subgraphs that will "break" the negative cycles into separate parts in distinct subgraphs.
So the optimization and sorting can still work on these subgraphs.
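A toy sketch of that offsetting rule (the linear search and the 1.5 ratio encoding "50% higher" are my own illustration). One caveat worth noting: a uniform offset penalizes paths with more edges, so it can change which path is shortest; the post relies on it only to make the traversal converge.

```python
def offset_weights(weights, ratio=1.5):
    """Find a single constant to add to every weight so that the sum of
    the shifted positive weights is at least `ratio` times the absolute
    sum of the shifted negative ones (a crude linear search)."""
    def satisfied(off):
        shifted = [w + off for w in weights]
        pos = sum(w for w in shifted if w > 0)
        neg = -sum(w for w in shifted if w < 0)
        return pos >= ratio * neg
    offset, step = 0.0, max(abs(w) for w in weights)
    while not satisfied(offset):
        offset += step
    return offset, [w + offset for w in weights]

offset, shifted = offset_weights([-4.0, -3.0, 2.0, 1.0])
```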
Once you have made these splits, you just have to make sure that each subgraph is fully connected and rearrange the population of vertices between pairs of subgraphs.
Now you can join the subgraphs together by trying to reconnect them in pairs, using random selection in each subgraph and testing only a few nodes (the other links to complete the traversal are already part of each resolved/optimized subgraph): instead of joining distinct nodes, you just replace the initial large graph by a small graph whose nodes have weights computed from the average and deviation, for which an algorithmic solution provides a good result.
Then in your graph traversal you can lookup into each subgraph to find a few nearby nodes that could help reduce the actual distance between subgraphs.
Basically all this works like a "quick-sort" algorithm, and uses the well-known "divide-and-conquer" strategy. You get a first satisfactory solution on which you can perform "local" optimizations to swap nodes algorithmically within a window of bounded size.
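A sketch of such a bounded-window local optimization on a node path (the window size, the swap move, and the toy distance function are my own illustration):

```python
def local_improve(path, dist, window=3):
    """One pass of local optimization: try swapping pairs of interior
    nodes at most `window` positions apart, keeping any swap that
    shortens the total path length."""
    def length(p):
        return sum(dist(p[i], p[i + 1]) for i in range(len(p) - 1))
    best = list(path)
    for i in range(1, len(best) - 1):          # endpoints stay fixed
        for j in range(i + 1, min(i + 1 + window, len(best) - 1)):
            cand = best[:]
            cand[i], cand[j] = cand[j], cand[i]
            if length(cand) < length(best):
                best = cand
    return best

# Toy metric: squared difference, so detours are heavily penalized.
improved = local_improve([0, 2, 1, 3], lambda a, b: (a - b) ** 2)
```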
You can then reuse the satisfactory graph you got to split the graph again recursively, but now you'll get weights with a lower standard deviation (if not, your heuristic has converged), and you can repeat this again and again until you reach your execution time limit: you can propose the current solution to the user, who can then decide to continue processing for another time slot.
There are interesting adjustments you can make to the algorithm to make it converge faster: instead of using the weights directly, you can add a random offset of up to about 67% of the standard deviation of the weights at the start of each loop, and then reduce the random offsets to 50%, 33%, 15%, using a decaying threshold. This is very helpful for escaping local extrema that block convergence to a much better solution: it is like adding noise or temperature to the data and then lowering the temperature. This strategy works extremely well for finding geographic routing paths, or for routing printed circuits in electronics. It also works when you want to minimize not just one type of weight but several: starting with a zero temperature does not help, because you need to cycle between each type of weight, and converging too fast on one criterion would cause the other types of weights/costs to be largely ignored.
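A sketch of that decaying-noise ("temperature") schedule applied to a weight vector (the 67/50/33/15 fractions follow the post; the uniform noise and the seed are my own illustration):

```python
import random
import statistics

def noisy_passes(weights, schedule=(0.67, 0.50, 0.33, 0.15), seed=1):
    """Yield one perturbed copy of the weights per schedule step, with
    uniform noise bounded by the given fraction of the standard
    deviation -- like lowering a temperature between passes."""
    rng = random.Random(seed)
    sd = statistics.stdev(weights)
    for frac in schedule:
        amp = frac * sd
        yield [w + rng.uniform(-amp, amp) for w in weights]

passes = list(noisy_passes([1.0, 2.0, 8.0, 9.0]))
```

Each pass would feed one optimization loop: the shrinking amplitude lets the early loops jump out of local extrema, while the later loops refine.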
These kinds of problems are also very suitable for implementation with neural networks, which are naturally massively parallelized and use the divide-and-conquer strategy: their interest is that they no longer need any "sort" (this holds as long as they are fully parallelized; if the neural network is only partially parallelized, and needs sequencing to perform each step, the sorting is still helpful to determine the cutting steps). For now it is very difficult to implement neural networks with more than about 200-500 neurons, but most practical path-finding problems have to work with graphs containing many thousands of nodes, possibly millions.
|
|
|
|
|
And your book has what, exactly, to do with ill-formatted pseudocode? Your extraneous reply reminds me of the old Microsoft/helicopter/technically correct/useless joke.
"One man's wage rise is another man's price increase." - Harold Wilson
"Fireproof doesn't mean the fire will never come. It means when the fire comes that you will be able to withstand it." - Michael Simmons
"You can easily judge the character of a man by how he treats those who can do nothing for him." - James D. Miles
|
|
|
|
|
tl;dr
also, punctuation!
|
|
|
|
|
What's wrong with my punctuation, compared to the incorrect punctuation of your nonsense reply, written in a natural language with almost no effort at all? Do you think you wrote a single sentence that is unambiguous, even in English?
Sorry, but the article above is completely unusable; it's not just a question of formatting: essential parts are completely missing. So ambiguity is everywhere (and there's no way to look at the original to determine what was meant, given that the site at the given URL has disappeared completely).
If I were studying or learning Dijkstra's algorithm, there's ample literature about it everywhere (including other CodeProject-hosted pages with working sample code). Here nothing works. It's not just some typos you could easily guess and correct.
And the article itself is incorrect in basic punctuation too. I did not criticize the author, who asked for help to fix it in 2015 (but nobody helped him at that time before he abandoned his project or found a solution elsewhere).
And it's not me who revived the topic: it was revived by being "featured" by CodeProject itself on its home page or mailing list, even though it had visibly been abandoned in its unusable state for a long time.
|
|
|
|
|
"It is a known fact that the 'shortest' path is an NP problem and has no satisfactory solution with bounded time or resource"
False.
Shortest Path is P because there are deterministic polynomial time algorithms that solve it.
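For reference, a minimal sketch of Dijkstra's algorithm over non-negative weights, the standard deterministic polynomial-time solution (O((V+E) log V) with a binary heap):

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths on non-negative edge weights.
    graph: {node: [(neighbor, weight), ...]}."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry, skip it
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2), ("d", 6)],
     "c": [("d", 3)], "d": []}
dist = dijkstra(g, "a")
```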
|
|
|
|
|
False, this is the well-known "Traveling Salesperson" problem (proven NP-class).
Only some very strictly limited subclasses of "shortest path" search (with severe restrictions) are polynomial (P-class).
But even if it is polynomial under these restrictions, it is still unusable for notable applications such as navigation: the degree of the polynomial and the base N factor are too high (geographic databases contain billions of nodes and millions of paths; it would not find any path in a reasonable time). All algorithms are in fact replaced by heuristics that don't necessarily provide the absolute shortest path (which would be pointless anyway, because the evaluation of weights is based on estimations of average conditions, and real-time conditions make them variable: we need adaptive heuristics that can adapt to these unpredictable but unavoidable variations): the required restrictions have to be dropped, and there's no other choice than using heuristics, especially when the path determination is for a unique temporary usage and will run on small local devices, or with strictly limited resources on a remote server that must serve many concurrent users.
These heuristics will use various metrics for different conditions, and need to find alternatives, not just the immediate "best" solution, which will be unusable in practice for any travel above about 1 kilometer: users don't actually need these navigation helpers for short distances, where there's no real choice or variability in the path, even if the cost/time for this single path (or a couple of paths, one of them being the most frequent and obvious, except in exceptional conditions where the other one will be used unconditionally) varies much more than what is set in the averaged metrics of the database.
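One concrete example of such a guided search (my illustration; the post names none): A* steers the exploration with a cheap distance estimate instead of expanding uniformly, which is the basis of most practical navigation routers. A minimal sketch on a 4-connected grid:

```python
import heapq

def a_star(grid_size, blocked, start, goal):
    """A* on a grid with unit moves, using Manhattan distance as the
    admissible heuristic that steers the search toward the goal."""
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    heap = [(h(start), 0, start)]
    best = {start: 0}
    while heap:
        f, g, (x, y) = heapq.heappop(heap)
        if (x, y) == goal:
            return g                      # number of moves taken
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nx < grid_size and 0 <= ny < grid_size
                    and (nx, ny) not in blocked):
                ng = g + 1
                if ng < best.get((nx, ny), float("inf")):
                    best[(nx, ny)] = ng
                    heapq.heappush(heap, (ng + h((nx, ny)), ng, (nx, ny)))
    return None                           # goal unreachable

# A wall at x=1, y=1..3 forces the search around it.
steps = a_star(5, {(1, 1), (1, 2), (1, 3)}, (0, 0), (3, 3))
```

With an admissible heuristic A* is still exact; real navigation systems trade that exactness away for speed, as the post describes, but the guided-search structure is the same.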
The article just above discusses only some of these conditions. And in practice it's unusable on a graph with more than about 200 nodes if you need a reply in a few seconds, or more than 1000 nodes if you need a reply in a few minutes. If you're designing an object to manufacture, you can use it with graphs of about 2000 nodes and let it run for a few hours. If your project must complete in a limited number of days, these hours doing nothing else will be costly, because the design must be tested and will need to be tweaked repeatedly after tests.
So you use heuristics that give a "good enough" solution, by accepting a deviation of some reasonable percentage from the optimum. When you have a deadline (which happens all the time in real working conditions), these P-algorithms will be phased out: instead you'll make a design with smaller (arbitrary) building blocks that will be optimized and assembled later; you'll test and optimize only the few building blocks that cost you the most after testing but are needed in over 80% of cases; you'll have no time (and no budget) to optimize the remaining 20% of cases further; and you won't break the initial optimization of the important blocks by making small local optimizations for their interconnections with the other blocks.
Iterative production models favor heuristics that will constantly give you good enough results; you'll stop the optimization before the completion of the full algorithm, because running it longer would offer only very small benefits compared to the huge cost of the delay.
Heuristic solutions also have a huge advantage: they are more resistant to unexpected changes of conditions, or to failures, as they more easily allow replacements with compatible components within a predefined range of acceptable characteristics, and they are easier to reproduce at lower cost. They also work because it's not possible to define a complete set of precise "weights" over a very large graph: these weights are evaluated progressively over a long time and not re-evaluated very often (geographic databases are at best refreshed completely over several decades, even if there are yearly updates, and there are always large sets of paths that were never really evaluated but only estimated by loose formulas based on small sets of sample conditions delivering only partial metrics with a significant error margin: it makes no sense to optimize toward precisions that are orders of magnitude smaller than the error margin).
A polynomial algorithm does not even work with statistically estimated metrics, where the weight for each link traversal is not a single precise value, but only an estimated range of possible values with 80% probability and a few percent of error on the range width.
|
|
|
|
|
I see your pointlessly long reply, and raise you two absurdities:
The Entity Framework is a set of technologies in ADO.NET that support the development of data-oriented software applications. Architects and developers of data-oriented applications have typically struggled with the need to achieve two very different objectives. They must model the entities, relationships, and logic of the business problems they are solving, and they must also work with the data engines used to store and retrieve the data. The data may span multiple storage systems, each with its own protocols; even applications that work with a single storage system must balance the requirements of the storage system against the requirements of writing efficient and maintainable application code.
The Entity Framework enables developers to work with data in the form of domain-specific objects and properties, such as customers and customer addresses, without having to concern themselves with the underlying database tables and columns where this data is stored. With the Entity Framework, developers can work at a higher level of abstraction when they deal with data, and can create and maintain data-oriented applications with less code than in traditional applications. [3]
History
The first version of Entity Framework (EFv1) was included with .NET Framework 3.5 Service Pack 1 and Visual Studio 2008 Service Pack 1, released on 11 August 2008. This version was widely criticized, even attracting a 'vote of no confidence' signed by approximately one thousand developers.[4]
The second version of Entity Framework, named Entity Framework 4.0 (EFv4), was released as part of .NET 4.0 on 12 April 2010 and addressed many of the criticisms made of version 1.[5]
A third version of Entity Framework, version 4.1, was released on April 12, 2011, with Code First support.
A refresh of version 4.1, named Entity Framework 4.1 Update 1, was released on July 25, 2011. It includes bug fixes and new supported types.
The version 4.3.1 was released on February 29, 2012.[6] There were a few updates, like support for migration.
Version 5.0.0 was released on August 11, 2012[7] and is targeted at .NET framework 4.5. Also, this version is available for .Net framework 4, but without any runtime advantages over version 4.
Version 6.0 was released on October 17, 2013[8] and is now an open source project licensed under Apache License v2. Like ASP.NET MVC, its source code is hosted at GitHub using Git.[9] This version has a number of improvements for code-first support.[10]
Microsoft then decided to modernize, componentize and bring .NET cross-platform to Linux, OSX and elsewhere, meaning the next version of Entity Framework would be a complete rewrite.[11] On 27 June 2016 this was released as Entity Framework Core 1.0, alongside ASP.Net Core 1.0 and .Net Core 1.0.[12] It was originally named Entity Framework 7, but was renamed to highlight that it was a complete rewrite rather than an incremental upgrade and it doesn't replace EF6.[13]
EF Core 1.0 is licensed under Apache License v2, and is being built entirely in the open on GitHub. While EF Core 1.0 shares some conceptual similarities with prior versions of Entity Framework, it is a completely new codebase designed to be more efficient, powerful, flexible, and extensible, will run on Windows, Linux and OSX, and will support a new range of relational and NOSQL data stores.[11]
EF Core 2.0 was released on 14 August 2017 along with Visual Studio 2017 15.3 and ASP.NET Core 2.0 [14]
Architecture
ADO.NET Entity Framework stack
The architecture of the ADO.NET Entity Framework, from the bottom up, consists of the following:
Data source specific providers, which abstract the ADO.NET interfaces to connect to the database when programming against the conceptual schema.
Map provider, a database-specific provider that translates the Entity SQL command tree into a query in the native SQL flavor of the database. It includes the Store-specific bridge, which is the component responsible for translating the generic command tree into a store-specific command tree.
EDM parser and view mapping, which takes the SDL specification of the data model and how it maps onto the underlying relational model and enables programming against the conceptual model. From the relational schema, it creates views of the data corresponding to the conceptual model. It aggregates information from multiple tables in order to aggregate them into an entity, and splits an update to an entity into multiple updates to whichever table(s) contributed to that entity.
Query and update pipeline, processes queries, filters and updates requests to convert them into canonical command trees which are then converted into store-specific queries by the map provider.
Metadata services, which handle all metadata related to entities, relationships and mappings.
Transactions, to integrate with transactional capabilities of the underlying store. If the underlying store does not support transactions, support for it needs to be implemented at this layer.
Conceptual layer API, the runtime that exposes the programming model for coding against the conceptual schema. It follows the ADO.NET pattern of using Connection objects to refer to the map provider, using Command objects to send the query, and returning EntityResultSets or EntitySets containing the result.
Disconnected components, which locally cache datasets and entity sets for using the ADO.NET Entity Framework in an occasionally connected environment.
Embedded database: ADO.NET Entity Framework includes a lightweight embedded database for client-side caching and querying of relational data.
Design tools, such as Mapping Designer, are also included with ADO.NET Entity Framework, which simplifies the job of mapping a conceptual schema to the relational schema and specifying which properties of an entity type correspond to which table in the database.
Programming layer, which exposes the EDM as programming constructs which can be consumed by programming languages.
Object services, automatically generate code for CLR classes that expose the same properties as an entity, thus enabling instantiation of entities as .NET objects.
Web services, which expose entities as web services.
High-level services, such as reporting services which work on entities rather than relational data.
Entity Data Model
The entity data model (EDM) specifies the conceptual model (CSDL) of the data, using a modelling technique that is itself called Entity Data Model, an extended version of the Entity-Relationship model.[15] The data model primarily describes the Entities and the Associations they participate in. The EDM schema is expressed in the Schema Definition Language (SDL), which is an application of XML (Extensible Markup Language). In addition, the mapping (MSL) of the elements of the conceptual schema (CSDL) to the storage schema (SSDL) must also be specified. The mapping specification is also expressed in XML.[16]
Visual Studio also provides the Entity Designer for visual creation of the EDM and the mapping specification. The output of the tool is the XML file (*.edmx) specifying the schema and the mapping. Edmx file contains EF metadata artifacts (CSDL/MSL/SSDL content). These three files (csdl, msl, ssdl) can also be created or edited by hand.
Mapping
Entity Data Model Wizard[17] in Visual Studio initially generates a one-to-one (1:1) mapping between the database schema and the conceptual schema in most of the cases. In the relational schema, the elements are composed of the tables, with the primary and foreign keys gluing the related tables together. In contrast, the Entity Types define the conceptual schema of the data.
The entity types are an aggregation of multiple typed fields – each field maps to a certain column in the database – and can contain information from multiple physical tables. The entity types can be related to each other, independent of the relationships in the physical schema. Related entities are also exposed similarly – via a field whose name denotes the relation they are participating in and accessing which, instead of retrieving the value from some column in the database, traverses the relationship and returns the entity (or a collection of entities) it is related with.
Entity Types form the class of objects entities conform to, with the Entities being instances of the entity types. Entities represent individual objects that form a part of the problem being solved by the application and are indexed by a key. For example, converting the physical schema described above, we will have two entity types:
CustomerEntity, which contains the customer's name from the Customers table, and the customer's address from the Contacts table.
OrderEntity, which encapsulates the orders of a certain customer, retrieving it from the Orders table.
The logical schema and its mapping with the physical schema is represented as an Entity Data Model (EDM), specified as an XML file. ADO.NET Entity Framework uses the EDM to actually perform the mapping letting the application work with the entities, while internally abstracting the use of ADO.NET constructs like DataSet and RecordSet. ADO.NET Entity Framework performs the joins necessary to have entity reference information from multiple tables, or when a relationship is traversed. When an entity is updated, it traces back which table the information came from and issues SQL update statements to update the tables in which some data has been updated. ADO.NET Entity Framework uses eSQL, a derivative of SQL, to perform queries, set-theoretic operations, and updates on entities and their relationships. Queries in eSQL, if required, are then translated to the native SQL flavor of the underlying database.
Entity types and entity sets just form the logical EDM schema, and can be exposed as anything. ADO.NET Entity Framework includes Object Service that presents these entities as Objects with the elements and relationships exposed as properties. Thus Entity objects are just front-ends to the instances of the EDM entity types, which lets Object Oriented languages access and use them. Similarly, other front-ends can be created, which expose the entities via web services (e.g., WCF Data Services) or XML that is used when entities are serialized for persistence storage or over-the-wire transfer.[18]
Entities
Entities are instances of EntityTypes; they represent the individual instances of the objects (such as customer, orders) to which the information pertains. The identity of an entity is defined by the entity type it is an instance of; in that sense an entity type defines the class an entity belongs to and also defines what properties an entity will have. Properties describe some aspect of the entity by giving it a name and a type. The properties of an entity type in ADO.NET Entity Framework are fully typed, and are fully compatible with the type system used in a DBMS system, as well as the Common Type System of the .NET Framework. A property can be SimpleType, or ComplexType, and can be multi-valued as well. All EntityTypes belong to some namespace, and have an EntityKey property that uniquely identifies each instance of the entity type. The different property types are distinguished as follows:
SimpleType, corresponds to primitive data types such as Integer, Characters and Floating Point numbers.[19]
ComplexType, is an aggregate of multiple properties of type SimpleType, or ComplexType. Unlike EntityTypes, however, ComplexTypes cannot have an EntityKey. In Entity Framework v1 ComplexTypes cannot be inherited.[20]
All entity instances are housed in EntityContainers, which are per-project containers for entities. Each project has one or more named EntityContainers, which can reference entities across multiple namespaces and entity types. Multiple instances of one entity type can be stored in collections called EntitySets. One entity type can have multiple EntitySets.
EDM primitive types (simple types):[19][21]
EDM type CLR type mapping
Edm.Binary Byte[]
Edm.Boolean Boolean
Edm.Byte Byte
Edm.DateTime DateTime
Edm.DateTimeOffset DateTimeOffset
Edm.Decimal Decimal
Edm.Double Double
Edm.Guid Guid
Edm.Int16 Int16
Edm.Int32 Int32
Edm.Int64 Int64
Edm.SByte SByte
Edm.Single Single
Edm.String String
Edm.Time TimeSpan
Relationships
Any two entity types can be related, by either an Association relation or a Containment relation. For example, "a shipment is billed to a customer" is an association, whereas "an order contains order details" is a containment relation. A containment relation can also be used to model inheritance between entities. The relation between two entity types is specified by a Relationship Type, instances of which, called Relationships, relate entity instances. In future releases, other kinds of relationship types such as Composition or Identification may be introduced. Relationship types are characterized by their degree (arity), i.e. the count of entity types they relate, and by their multiplicity. However, in the initial release of ADO.NET Entity Framework, relationships are limited to binary (degree two) bi-directional relationships. Multiplicity defines how many entity instances can be related together. Based on multiplicity, relationships can be either one-to-one, one-to-many, or many-to-many. Relationships between entities are named; the name is called a Role. It defines the purpose of the relationship.
A relationship type can also have an Operation or Action associated with it, which allows some action to be performed on an entity in the event of an action being performed on a related entity. A relationship can be specified to take an Action when some Operation is done on a related entity. For example, on deleting an entity that forms the part of a relation (the OnDelete operation) the actions that can be taken are:[22]
Cascade, which instructs to delete the relationship instance and all associated entity instances.
None.
For association relationships, which can have different semantics at either ends, different actions can be specified for either end.
Schema definition language
ADO.NET Entity Framework uses an XML based Data Definition Language called Schema Definition Language (SDL) to define the EDM Schema. The SDL defines the SimpleTypes similar to the CTS primitive types, including String, Int32, Double, Decimal, Guid, and DateTime, among others. An Enumeration, which defines a map of primitive values and names, is also considered a simple type. Enumerations are supported from framework version 5.0 onwards only. ComplexTypes are created from an aggregation of other types. A collection of properties of these types define an Entity Type. This definition can be written in EBNF grammar as:
EntityType ::= ENTITYTYPE entityTypeName [BASE entityTypeName]
    [ABSTRACT true|false] KEY propertyName [, propertyName]*
    {(propertyName PropertyType [PropertyFacet]*)+}
PropertyType ::= (PrimitiveType [PrimitiveTypeFacets]*)
    | complexTypeName | RowType
PropertyFacet ::= ([NULLABLE true|false] |
    [DEFAULT defaultVal] | [MULTIPLICITY [1|*]])
PropertyTypeFacet ::= MAXLENGTH | PRECISION | SCALE | UNICODE | FIXEDLENGTH | COLLATION
    | DATETIMEKIND | PRESERVESECONDS
PrimitiveType ::= BINARY | STRING | BOOLEAN
    | SINGLE | DOUBLE | DECIMAL | GUID
    | BYTE | SBYTE | INT16 | INT32 | INT64
    | DATETIME | DATETIMEOFFSET | TIME
Facets[23] are used to describe metadata of a property, such as whether it is nullable or has a default value, as well as the cardinality of the property, i.e., whether the property is single-valued or multi-valued. A multiplicity of “1” denotes a single-valued property; a “*” means it is a multi-valued property. As an example, an entity can be denoted in SDL as:
<complextype name="Addr">
  <property name="Street" type="String" nullable="false" />
  <property name="City" type="String" nullable="false" />
  <property name="Country" type="String" nullable="false" />
  <property name="PostalCode" type="Int32" />
</complextype>
<entitytype name="Customer">
  <key>
    <propertyref name="Email" />
  </key>
  <property name="Name" type="String" />
  <property name="Email" type="String" nullable="false" />
  <property name="Address" type="Addr" />
</entitytype>
A relationship type is defined as specifying the end points and their multiplicities. For example, a one-to-many relationship between Customer and Orders can be defined as
<association name="CustomerAndOrders">
  <end type="Customer" multiplicity="1" />
  <end type="Orders" multiplicity="*" />
  <ondelete action="Cascade" />
</association>
Querying data
Entity SQL
ADO.NET Entity Framework uses a variant of the Structured Query Language, named Entity SQL, which is aimed at writing declarative queries and updates over entities and entity relationships – at the conceptual level. It differs from SQL in that it does not have explicit constructs for joins because the EDM is designed to abstract partitioning data across tables. Querying against the conceptual model is facilitated by EntityClient classes, which accepts an Entity SQL query. The query pipeline parses the Entity SQL query into a command tree, segregating the query across multiple tables, which is handed over to the EntityClient provider. Like ADO.NET data providers, an EntityClient provider is also initialized using a Connection object, which in addition to the usual parameters of data store and authentication info, requires the SDL schema and the mapping information. The EntityClient provider in turn then turns the Entity SQL command tree into an SQL query in the native flavor of the database. The execution of the query then returns an Entity SQL ResultSet, which is not limited to a tabular structure, unlike ADO.NET ResultSets.
Entity SQL enhances SQL by adding intrinsic support for:
Types, as ADO.NET entities are fully typed.
EntitySets, which are treated as collections of entities.
Composability, which removes restrictions on where subqueries can be used.
Entity SQL canonical functions
Canonical functions are supported by all Entity Framework-compliant data providers. They can be used in an Entity SQL query. Also, most of the extension methods in LINQ to Entities are translated to canonical functions. They are independent of any specific database. When an ADO.NET data provider receives a function, it translates it to the desired SQL statement.[24]
But not all DBMSs have equivalent functionality and a set of standard embedded functions. There are also differences in the accuracy of calculations. Therefore, not all canonical functions are supported for all databases, and not all canonical functions return the same results.
Groups of canonical functions[24]
Aggregate functions: Avg, BigCount, Count, Max, Min, StDev, StDevP, Sum, Var, VarP
Math functions: Abs, Ceiling, Floor, Power, Round, Truncate
String functions: Concat, Contains, EndsWith, IndexOf, Left, Length, LTrim, Replace, Reverse, Right, RTrim, Substring, StartsWith, ToLower, ToUpper, Trim
Date and time functions: AddMicroseconds, AddMilliseconds, AddSeconds, AddMinutes, AddHours, AddNanoseconds, AddDays, AddYears, CreateDateTime, AddMonths, CreateDateTimeOffset, CreateTime, CurrentDateTime, CurrentDateTimeOffset, CurrentUtcDateTime, Day, DayOfYear, DiffNanoseconds, DiffMilliseconds, DiffMicroseconds, DiffSeconds, DiffMinutes, DiffHours, DiffDays, DiffMonths, DiffYears, GetTotalOffsetMinutes, Hour, Millisecond, Minute, Month, Second, TruncateTime, Year
Bitwise functions: BitWiseAnd, BitWiseNot, BitWiseOr, BitWiseXor
Other functions: NewGuid
LINQ to Entities
The LINQ to Entities provider allows LINQ to be used to query various RDBMS data sources. Several database server specific providers with Entity Framework support are available.
Native SQL
In Entity Framework v4, the new methods ExecuteStoreQuery() and ExecuteStoreCommand(), which issue commands directly in the data store's native SQL, were added to the ObjectContext class.
Visualizers
Visual Studio has a feature called Visualizers. A LINQ query written in Visual Studio can be viewed as native SQL using a visualizer during a debugging session. A visualizer for LINQ to Entities (ObjectQuery) targeting all RDBMSs is available via VisualStudioGallery.
See also
List of object-relational mapping software
LINQ to SQL
.NET Persistence API (NPA)
References
"Releases · aspnet/EntityFrameworkCore · GitHub".
Krill, Paul (20 July 2012). "Microsoft open-sources Entity Framework". InfoWorld. Retrieved 24 July 2012.
https://docs.microsoft.com/en-us/dotnet/framework/data/adonet/ef/overview
ADO .NET Entity Framework Vote of No Confidence
"Update on the Entity Framework in .NET 4 and Visual Studio 2010". ADO.NET team blog. May 11, 2009. Archived from the original on January 20, 2010. Retrieved November 1, 2011.
"EF4.3.1 and EF5 Beta 1 Available on NuGet". ADO.NET team blog. February 29, 2012. Archived from the original on March 25, 2012. Retrieved March 27, 2012.
"EF5 Available on CodePlex". August 11, 2012.
"EF6 RTM Available". October 17, 2013. Archived from the original on 2014-03-30.
"Entity Framework - Home". September 14, 2016.
"EF Version History".
"EF7 - New Platforms, New Data Stores". May 19, 2014. Archived from the original on 2015-09-29.
"Entity Framework Core 1.0.0 Available". 27 June 2016.
Hanselman, Scott. "ASP.NET 5 is dead - Introducing ASP.NET Core 1.0 and .NET Core 1.0 - Scott Hanselman". www.hanselman.com. Retrieved 2016-07-11.
"Announcing .NET Core 2.0". .NET Blog. 14 August 2017.
"Entity Data Model". MSDN, Microsoft. August 2, 2012. Retrieved August 15, 2013.
CSDL, SSDL, and MSL Specifications, MSDN, archived from the original on 8 November 2010, retrieved December 6, 2010
Entity Data Model Wizard, MSDN, retrieved December 6, 2010
Kogent Solutions Inc. (2009), ASP.NET 3.5 Black Book, Dreamtech Press, ISBN 81-7722-831-5
Simple Types (EDM), MSDN, retrieved December 6, 2010
ComplexType Element (CSDL), MSDN, retrieved December 6, 2010
Conceptual Model Types, MSDN, retrieved December 6, 2010
OnDelete Element (CSDL), MSDN, retrieved December 6, 2010
Facets (CSDL), MSDN, retrieved December 6, 2010
Canonical Functions (Entity SQL), MSDN, retrieved March 29, 2010
Further reading
Lee, Craig (June 14, 2010), ADO.NET Entity Framework Unleashed (1st ed.), Sams, p. 600, ISBN 0-672-33074-1, archived from the original on October 1, 2012
Lerman, Julia (August 2010), Programming Entity Framework (2nd ed.), O'Reilly Media, p. 912, ISBN 978-0-596-80726-9
Jennings, Roger (February 3, 2009), Professional ADO.NET 3.5 with LINQ and the Entity Framework (1st ed.), Wrox, p. 672, ISBN 0-470-18261-X
Mostarda, Stefano (December 2010), Entity Framework 4.0 in Action (1st ed.), Manning Publications, p. 450, ISBN 978-1-935182-18-4
External links
The ADO.NET Entity Framework (at Data Developer Center)
The source code of the Entity Framework version 6 hosted on GitHub
EntityFramework on GitHub
".45 ACP - because shooting twice is just silly" - JSOP, 2010 ----- You can never have too much ammo - unless you're swimming, or on fire. - JSOP, 2010 ----- When you pry the gun from my cold dead hands, be careful - the barrel will be very hot. - JSOP, 2013
What is "stupidity" (your word, not mine) here is your lazy copy-pasting of a text that you did not write yourself (and that you quoted abusively from a copyrighted source that you have no right to cite extensively).
You did that in a few seconds without any effort, with the clear intent to pollute this site with really off-topic and stolen content.
What you just did is SPAM and COPYVIO: you voluntarily violated two requirements of this site and abused your "admin" privileges here.
Worse: you incorrectly marked my own message as spam (which it absolutely was not: not commercial, not unsolicited, not massively crossposted, not disrespectful to anyone, and on topic): only your reply was real spam (and harmful too, because you were the first to resort to name-calling in it).
modified 1-Mar-19 14:36pm.
I'm surprised that you confined your reply to just a single paragraph. Surely, I deserved more...
What's the point? Your long copy-pasted copyvio is still abusive on this site, and illegal anyway.
And excuse me if my English is not fully "correct" to you, but I'm not a native English speaker. And the article above was posted by an Indian person in 2015 who has received no help in 4 years (not even from you).
It's called "fair use" in this country, because I'm mocking you.
Beyond that, the public can modify wiki entries, so it's essentially in the public domain.
Suck on that, cupcake.
So I now ask the other moderators on this site to block you for your anti-cooperative and insulting behavior. You have evidently abused your dominant position on this site.
Sorry if what I did did not match your own "standards", but I never solicited you, and you have only brought nuisance here.
And sorry to contradict you, but "fair use" does not apply here, as it requires being at least on topic (which your illegal, randomly selected copy-pasted post was not).
You have demonstrated that you don't want to respect anyone and that you are just here to enforce your dominant position and exclude all "newcomers" by making FALSE statements about them (such as incorrectly stating that I spammed, when you were the one spamming this site). You are reversing the responsibility.
I wonder how people can even trust you on this site:
please keep to the articles you created here, and don't intrude where you were not invited, made absolutely no effort to help anyone, and only wanted to harass people with insults.
Being a major contributor on this site gives you even more responsibility and proves that you have no excuse: you know the rules perfectly, and voluntarily chose to ignore them. Is that how you keep your "eliteness" on this site: excluding anyone who tries to do things honestly with reasonable effort, and then dismissing the time they spent trying? Did you even help me, or anyone else who posted on this page?
I strongly contest the decisions you made here. This page can now serve as proof of your misbehavior, because it is highly probable that you have massively and repeatedly abused your position on this site, and that many people now hesitate to contradict you because you have the power to destroy all their efforts with a single click and no effort at all. I fear this site is one of those where a few people maintain their privilege by force, and that you do it because you have hidden interests (most probably economic, the same way corruption works in public political affairs): you want to sell something and use this site as an advertising platform.
(Note: I could re-edit the post you illegitimately blocked and corrupted; there were some minor typos, but you gave me no chance to fix them, and this was not an attempt to abuse the site: anyone should be able to fix his own errors later, if only to avoid additional unfruitful off-topic comments like those above about minor typos, comments that merely prove a lack of friendliness. This site is meant to be open to many people who read a lot and post occasionally, without being harassed. People also have legitimate interests elsewhere and should not be forced into an exclusive role; otherwise the whole site will become anti-cooperative, closed to all but a self-selected elite, and far from its initial objectives; people will abandon this site and go elsewhere, where their voice is heard and their work is respected (e.g. GitHub, or their own blogs). Minor linguistic errors are not a problem, and they are fixable without insulting people over them. Thanks.)
modified 1-Mar-19 16:46pm.
Some images and an explanation of the different methods would be fine.
Yeah, sure. Even the algorithm section included in this article should have contained more equations, but it doesn't, because I found it hard to add them. Here is the original article (it's my blog), and you can clearly see that the algorithm has equations. With those equations, the algorithm makes much more sense!
I tried adding the equations as images, but it didn't work: the images didn't get embedded properly.
modified 7-Mar-16 8:40am.
So let's say that we have a graph in which the weight also depends on the direction traveled. Is there a standard algorithm for those situations?
The code I have attached with this article implements the Bellman-Ford algorithm, and the graph in this case is dynamic. For academic purposes there is an array-based implementation of this algorithm, but here I have used a dynamic graph. Two structures are used to form the graph: one structure's object is the "Link", while the other structure is the "Node". Each node holds an STL list of links to the connecting nodes. So if you are at node 'A', then A->LinkNext is the list of "Links" to other nodes in the graph. Let's say 'A' has a link to 'B'. The information that 'A' is linked to 'B' can only be retrieved from 'A', not from 'B'. If there is a two-way connection between 'A' and 'B', that means there is also a link from 'B' to 'A', i.e., B->LinkNext contains a link back to 'A'.
Again, in the code the "Link" structure stores the weight value, so the weight of the link from 'A' to 'B' need not equal the weight of the link from 'B' to 'A'. The program attached to this article, by default, lets us assign two different weight values for the two directions.
I see. This is very interesting, thank you for the effort and the explanation.
Great article but the source code is missing.
---------------
Sten Hjelmqvist
I fixed it. I might have deleted that file by mistake.
Keep it up. I'm interested in min-cost/max-flow methods, where shortest paths can be discovered too (less efficiently, of course).
Thanks. I'll write more articles.
The reason I am asking is that I wrote this article in Word and am now having trouble quickly converting its equations to LaTeX. I had modified this article to make it look much better, but I am not able to upload it properly. The full article is on my blog here: http://www.creativitymaximized.com/2015/10/shortest-path-algorithm-elaborated.html[^].
Is there any quick way to convert Word equations to LaTeX?
And to add to it, the algorithm in my original article contains equations. I guess I can't include them if the algorithm is formatted as "code". Is there any way to resolve this problem?
modified 7-Oct-15 9:33am.
I found out how to add equations; I'll add them soon and update this article. I have my exams this week, so I can't work on it right now. I'll improve this article this weekend. (Yes, I understand this article is missing a lot. Is it possible to revert it to a draft, make the modifications, and re-upload it?)