The study of networks goes back to Euler in the 18th century, and the current availability of massive data sets on empirical networks has intensified interest in their study. A series of statistical methods for the analysis of such data has been developed in recent decades, along with models to explain the topologies observed in these networks. Many of these methods are algorithmic, and a natural concern is whether the proposed methods are computationally efficient. Other key issues are assessing whether network models appropriately describe the phenomena studied, and whether they admit good estimators.
The abstract representation of a network, or graph, G = G(V, E) consists of a set V of vertices and a set E ⊆ C(V, 2) of edges, where C(V, 2) denotes the class of two-element subsets of V (Nasini & Castro, 2015). When {x, y} belongs to E, we say that the vertices x and y are connected, or are neighbors. For example, the graph with 3 vertices all connected to one another is called a triangle, and the graph in which all vertices except one have exactly one neighbor (the central vertex) is called a star.
Figure 1.1: Graph examples. On the left, a triangle; on the right, a 5-star. Points represent vertices and lines represent edges.
The most studied model is the Binomial Random Graph (brg), denoted G(n, p). This is a graph with n labeled vertices, where each edge is independently present with probability p = p(n). Thus G(n, 1/2) is the uniform distribution over the 2^C(n,2) graphs. In general, the probability assigned to the graph G = G(V, E) is p^|E| (1 − p)^(C(n,2) − |E|). The brg model is well studied: it is known, for example, that it presents phase transitions for graph invariants [ER59, ER60], and recently a large deviations principle was established for the model [CV11].
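Since each possible edge is included independently with probability p, sampling from G(n, p) is straightforward. A minimal sketch in Python (the function name and the representation of a graph as a set of vertex pairs are our own choices for illustration):

```python
import random
from itertools import combinations

def sample_gnp(n, p, seed=None):
    """Sample a binomial random graph G(n, p): each of the C(n, 2)
    possible edges on n labeled vertices is included independently
    with probability p. Returns the graph as a set of edges (u, v)."""
    rng = random.Random(seed)
    return {e for e in combinations(range(n), 2) if rng.random() < p}

g = sample_gnp(6, 0.5, seed=0)
# Every edge is an unordered pair of distinct labeled vertices.
assert all(u < v for u, v in g)
```

For p = 1/2 every graph on n labeled vertices is equally likely, matching the uniform distribution mentioned above.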
Although rich in properties, the brg model is not suitable for describing empirical networks. Our focus is the model called the exponential random graph (erg), which is used in the social sciences [HL81, SPRH06]. The probability of a graph G with n vertices in this model is given by
p(G) = exp( β1 T1(G) + β2 T2(G) + ... + βk Tk(G) − ψ ),

where β = (β1, . . . , βk) is a vector of real parameters, T1, T2, . . . , Tk are real functions on the space of graphs (for example, the numbers of edges, triangles, stars, circuits, . . .), and ψ is a normalization constant, chosen so that the probabilities of all graphs on n vertices sum to 1.
The expression in the exponent is sometimes called the Hamiltonian (a term from statistical mechanics, unrelated to the homonymous term of graph theory), and it shapes the probability measure on the graph space by assigning greater mass to graphs with "desirable" properties (Yaneer 2015). For example, we fix parameters h, β > 0 and, for every graph G with n labeled vertices, E(G) edges and T(G) triangles, we define the Hamiltonian of G as

H(G) = h E(G) + β T(G).
A probability measure on the graph space can then be defined as

p(G) = exp( H(G) − ψ ) = exp( h E(G) + β T(G) − ψ ),

where ψ is the normalization constant, whose exponential is sometimes called the partition function. Note that the binomial model is a particular family of erg. Consider the model

p(G) = exp( β E(G) − ψ ),

where ψ is a function of n and β. Since the probabilities of all graphs on n vertices must sum to 1,

ψ = C(n, 2) log(1 + e^β),

and each edge is then present independently with probability p = e^β / (1 + e^β), recovering G(n, p).
For p = 0 or p = 1 the random graph G(n, p) assumes a unique value, and it can be written in the form of an erg using indicator functions (of the empty graph and of the complete graph, respectively). In general, however, the probability distributions of ergs and brgs are distinct. In addition, the normalization constants of ergs are difficult to compute (since the number of graphs is 2^C(n,2) = 2^(n(n−1)/2)), which in practice makes it impossible to sample ergs directly. This has motivated the search for distributions that approximate ergs. In particular, one wishes to obtain samples from p; such samples are essential for maximum likelihood estimation and for inference methods in these models.
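For very small n the 2^C(n,2) graphs can still be enumerated, which makes the partition function and the erg probabilities above concrete. A brute-force sketch (function names are our own, and this enumeration is only feasible for n of about 5 or fewer):

```python
import math
from itertools import combinations

def count_triangles(edges, n):
    """Number of vertex triples whose three connecting edges are all present."""
    es = set(edges)
    return sum(1 for a, b, c in combinations(range(n), 3)
               if (a, b) in es and (a, c) in es and (b, c) in es)

def erg_distribution(n, h, beta):
    """Exact erg probabilities p(G) proportional to exp(h*E(G) + beta*T(G)),
    obtained by enumerating all 2^C(n,2) graphs on n labeled vertices."""
    pairs = list(combinations(range(n), 2))
    weights = {}
    for mask in range(2 ** len(pairs)):
        edges = tuple(pairs[i] for i in range(len(pairs)) if mask >> i & 1)
        weights[edges] = math.exp(h * len(edges)
                                  + beta * count_triangles(edges, n))
    z = sum(weights.values())          # partition function
    return {g: w / z for g, w in weights.items()}

dist = erg_distribution(4, h=0.2, beta=0.5)
assert abs(sum(dist.values()) - 1.0) < 1e-12   # probabilities sum to 1
```

With beta = 0 the triangle term vanishes and the edge marginal reduces to e^h / (1 + e^h), matching the binomial special case derived above.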
Random graphs are unrealistic because they have no local structure. We have already seen that the tendency in a network of acquaintances is to form triangles: if A is a friend of B and B is a friend of C, the chance of A becoming C's friend is high. In a random graph, this chance would be the same for any pair of vertices (people, in this case), so it does not match actual social networks. The clustering (agglomeration) coefficient C measures the chance of closing triangles (and, consequently, the local structure). In random graphs, C equals p, the probability of connection between two nodes (Reuven & Shlomo, 2010). This says that the probability of connection between random vertices is the same as that of vertices with several neighbors in common. In many real networks, C is much larger than p, showing a "neighborhood" effect. To better represent real networks, we need a model where the mean distances are small without losing the "neighborhood" factor, that is, keeping the clustering coefficient C high.
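The clustering coefficient mentioned above can be computed directly from a graph. A sketch of the average local clustering coefficient, assuming graphs stored as adjacency dictionaries (this representation and the function name are our own):

```python
from itertools import combinations

def clustering(adj):
    """Average local clustering coefficient. For each vertex with at
    least two neighbors, compute the fraction of neighbor pairs that
    are themselves connected (i.e., that close a triangle), then
    average over those vertices. adj maps vertex -> set of neighbors."""
    total, counted = 0.0, 0
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue  # clustering undefined for degree < 2
        links = sum(1 for a, b in combinations(sorted(nbrs), 2)
                    if b in adj[a])
        total += 2 * links / (k * (k - 1))
        counted += 1
    return total / counted if counted else 0.0

triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
assert clustering(triangle) == 1.0   # every neighbor pair is connected
```

A path on three vertices, by contrast, closes no triangles and has clustering 0, illustrating the "neighborhood" effect that C captures.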
It was in 1998 that Duncan James Watts and Steven Strogatz proposed a simple model to reproduce the characteristics mentioned in the previous paragraph. Their intention, as Watts himself says in his book Six Degrees, was to capture four elements in the model. The first was that social networks consist of many groups with many internal connections. The second was that social networks are not static, because new relationships are made and others cease to exist over time. The third was that relationships do not all have the same probability of existing, that is, context influences future relationships. The fourth and last element was that our decisions derive from our preferences and characteristics, and, when we make them, these decisions lead us to meet new people.
Knowing that there were already studies on the Small World phenomenon, they began to try to find out what caused this effect. The question at the time was: what is the simplest model capable of reproducing the Small World effect? At one extreme, we have a regular network, which is a network with symmetric connections. At the other, a random network (which does not have that name for nothing) (Clauset, Shalizi & Newman, 2009). A regular network is not an appropriate model of human relations; after all, we do not all have the same number of friends. A random network may even seem more realistic than a fully symmetric one, but to call it a good model would ignore much of what sociology has already brought to light about the social forces at work in these systems (the triangles, again!). So the model would have to lie between order and disorder.
Let L be the average distance between vertices. Here, link is synonymous with edge. The basic idea of the model is that random networks have low L and low C, while regular networks have high L and high C. Watts and Strogatz sought an intermediate regime between these two extremes. The algorithm is simple: a regular graph is created and then each edge, with probability p, is rewired, that is, one of its endpoints is redirected to a randomly chosen vertex. In doing so, for low values of p, a model with high C and low L is obtained.
The graph below illustrates the result of the work of Watts and Strogatz, published in Nature. From the compromise between a high clustering coefficient and a low average distance (order and disorder), Small World networks were discovered.
Neighboring vertices differ in few connections, and this remains true even after edges are rewired, since rewiring has only a local effect and does not influence the structure of the network as a whole. The clustering coefficient decreases with rewirings, but it decreases slowly. The average distance, in contrast, drops dramatically with each rewiring. This makes sense because a rewired edge reduces the distance not only between the two vertices it joins, but also between every pair of vertices whose shortest path passes through it, so it affects the network structure globally. An astonishing result obtained by Watts and Strogatz was that, on average, the L of the network falls by half after the first five rewirings, regardless of the size of the network.
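The claim that a few shortcuts sharply reduce the average distance L can be checked directly with breadth-first search. A self-contained sketch (the 24-vertex ring and the three added shortcuts are arbitrary choices for illustration, not taken from the original study):

```python
from collections import deque

def avg_distance(adj):
    """Mean shortest-path distance over all ordered vertex pairs,
    computed by BFS from every vertex (assumes a connected graph)."""
    n = len(adj)
    total = 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            v = q.popleft()
            for u in adj[v]:
                if u not in dist:
                    dist[u] = dist[v] + 1
                    q.append(u)
        total += sum(dist.values())
    return total / (n * (n - 1))

# A ring on 24 vertices has large L ...
ring = {v: {(v - 1) % 24, (v + 1) % 24} for v in range(24)}
# ... and just three shortcut edges shrink it noticeably.
shortcut = {v: set(nbrs) for v, nbrs in ring.items()}
for a, b in [(0, 12), (6, 18), (3, 21)]:
    shortcut[a].add(b); shortcut[b].add(a)
assert avg_distance(shortcut) < avg_distance(ring)
```

Each shortcut reduces the distance of every pair whose shortest path previously ran around the ring, which is exactly the global effect described above.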
The Watts-Strogatz model reproduces the main characteristics of many social networks. To see why, imagine several people in a stadium watching a football game in the 1980s. They can only talk to people nearby, because the noise of the crowd is too great for them to speak to someone on the other side of the stadium (Watts 257). However, some people get a radio to talk to a random fan who is also present with a radio. Now this group finds it much easier to reach people across the stadium bleachers than someone without a radio does. The rewiring of edges corresponds to the introduction of the radios.
Watts-Strogatz model. Between order and disorder lies the Small World structure. Source: http://escoladeredes.net/profiles/blogs/redes-complexas-da-internet-as

Small World in other networks
Watts and Strogatz then started searching for data from real networks to see whether their predictions matched reality. Following their work, the "petit monde" (small-world network) graph model enjoyed rapid success, largely due to its direct link with Stanley Milgram's work on the famous "six degrees of separation". These graphs are characterized by two properties: the average distance between two nodes is proportional to the logarithm of the number of nodes, and many local structures are nearly cliques (Watts 257). A small-world graph thus has a higher clustering coefficient (in fact much higher) and a smaller diameter than a random graph of the same order and size.
The figure below summarizes the type of graph proposed and its method of construction. The starting graph is k-regular (all vertices have the same degree k) and each vertex is linked to its closest neighbors. Randomly, a link is removed and a link is added to the starting graph. Gradually, the k-regular graph is transformed into a random graph by this deletion and addition of links. Between these two extremes, the graph produced presents the small-world properties.
The disadvantage is that all or almost all graphs built from empirical data, and in particular non-planar graphs, fit the definition of a small-world network. Thus this graph, although of theoretical essence, is found empirically in nature, unlike random or regular graphs. The success of this model therefore lies in part in its universality. At the same time, the properties of these graphs are not very precise, which may also partly explain that "universality".
Greedy Algorithms
Greedy algorithms are generally used to solve optimization problems (finding a maximum or a minimum). They make decisions based on the information available at each moment and, once a decision is made, they never return to reconsider it. They are usually fast and easy to implement (Watts 257), but they do not always guarantee an optimal solution. There are situations in which no greedy algorithm provides optimal solutions, and on many occasions better solutions could be obtained by reconsidering alternatives discarded by a greedy algorithm (when a local optimum cannot be extended to a global optimum). Despite this, greedy algorithms are useful because they provide a quick solution to complex problems, even if it is not optimal.
To be able to solve a problem using the greedy approach, we will have to consider 6 elements:
Set of candidates (selectable elements).
Partial solution (selected candidates).
Selection function (determines the best candidate of the set of selectable candidates).
Feasibility function (determines if it is possible to complete the partial solution to reach a solution of the problem).
Criterion that defines what a solution is (indicates whether the partial solution obtained solves the problem).
Objective function (value of the solution reached).
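As a worked example of these six elements, consider activity selection, a classic problem where the greedy approach happens to be optimal (the problem choice and function names are ours, not from the source): the candidates are the intervals, the selection function picks the compatible activity that finishes earliest, feasibility means no overlap with what was already chosen, and the objective is the number of activities selected.

```python
def select_activities(intervals):
    """Greedy activity selection. Repeatedly pick, among the remaining
    candidates, the interval that finishes earliest and does not
    overlap the partial solution. Each choice is committed and never
    reconsidered, as is characteristic of greedy algorithms."""
    chosen, last_end = [], float("-inf")
    for start, end in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_end:            # feasibility: no overlap
            chosen.append((start, end))  # commit the candidate
            last_end = end
    return chosen

picked = select_activities([(1, 4), (3, 5), (0, 6), (5, 7),
                            (3, 9), (5, 9), (6, 10), (8, 11)])
assert picked == [(1, 4), (5, 7), (8, 11)]
```

Here the greedy choice (earliest finishing time) provably yields a maximum-size set of compatible activities; for many other problems, as noted above, the same strategy gives only a quick approximation.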
Power Laws
A power-law distribution is a special kind of probability distribution.