The Sixth International Conference on Network Analysis NET 2016
May 26-28, 2016
Laboratory of Algorithms and Technologies for Networks Analysis (LATNA),
National Research University Higher School of Economics, Nizhny Novgorod, Russia
Thursday, May 26
Room 209 HSE, 136 Rodionova Str.
09:00 – 09:30 Registration
09:30-09:50 Panos M. Pardalos
*The Sixth International Conference on Network Analysis NET 2016*
09:50-10:40 René van Bevern
*Exploiting demand structure for efficiently serving arcs and edges in networks*
10:40-11:00 Coffee Break
11:00-11:50 Fuad Aleskerov
*Power in network structures*
11:50-12:30 Session 1 (2 talks)
12:30-14:00 Lunch Break
14:00-14:50 Sergey Sevastyanov
*Efficient analysis of networks with cycles*
14:50-15:10 Coffee Break
15:10-16:10 Session 2 (3 talks)
16:10-16:30 Coffee Break
16:30-17:50 Session 3 (3 talks)
Friday, May 27
Room 209 HSE, 136 Rodionova Str.
09:30-10:20 Ernesto Estrada
*Communicability functions and geometrical properties of networks*
10:20-10:40 Coffee Break
10:40-11:30 Valentina Kuskova
*Network methods applied: Newest empirical results from network studies in Russia*
11:30-12:30 Session 4 (3 talks)
12:30-14:00 Lunch Break
14:00-14:50 Sergey Nikolenko
*Buffer Management with Heterogeneous Processing*
14:50-15:10 Coffee Break
15:10-16:10 Session 5 (3 talks)
16:10-16:30 Coffee Break
16:30-17:30 Session 6 (3 talks)
Saturday, May 28
Room 209 HSE, 136 Rodionova Str.
09:30-10:20 Maxim Jukovsky
*Random graphs: models and asymptotic characteristics*
10:20-10:40 Coffee Break
10:40-12:20 Session 7 (5 talks)
12:20-14:00 Lunch Break
14:00-14:50 Oleg Prokopyev
*Sequential Network Interdiction with Incomplete Information*
14:50-15:10 Coffee Break
15:10-16:10 Session 8 (3 talks)
16:10-16:30 Coffee Break
16:30-17:30 Session 9 (3 talks)
**Power in Network Structures**
**Fuad Aleskerov, Natalia Mescheryakova, Sergey Shvydun, alesk@hse.ru**
*National Research University Higher School of Economics*
*Institute of Control Sciences of Russian Academy of Sciences, Moscow, Russian Federation*
We consider an application of power indices that take into account agents' preferences for coalition formation, originally proposed for analyzing power distribution in elected bodies, to reveal the most powerful (central) nodes in networks.
These indices take into account the parameters of the nodes, the possibility of group influence of a subset of nodes on single nodes, and the intensity of short and long interactions among the nodes.
Some properties of the indices are discussed.
Various applications are presented for the network of countries – those of religion, migration, foreign claims, conflicts, exchange of students, food export-import, etc.
**Double best response as a network stability concept**
**Nikolay Bazenkov, n.bazenkov@yandex.ru**
*Trapeznikov Institute of Control Sciences of Russian Academy of Sciences, Moscow, Russian Federation*
In network formation games the key question is which network should be considered a stable solution. We investigate a concept of network stability called double best response, by analogy with the conventional best-response dynamics used in network formation games. Here we apply double best response to the minimal-cost connectivity game, where every player wants to be connected to as many players as possible at minimal individual cost. The network formation process consists of a sequence of two-stage local adjustments. At the first stage a randomly selected ``leader'' agent chooses a subset of her neighbors and proposes to each of them to form a link. At the second stage each ``follower'' decides to accept or decline the proposal. A situation in which no leader can make a proposal that improves her payoff is called an equilibrium in double best responses (EDBR). We then investigate the properties of networks that are stable according to the EDBR concept.
First, we analyze the setting where every player has full information about the current network structure. We show that if players are located on a plane, then every EDBR network is a subgraph of the Relative Neighborhood Graph (RNG), a well-known structure in the wireless network topology control literature. Second, we analyze players who possess only local information about the network structure and show that the ``RNG property'' still holds if, during the local adjustment, the ``leader'' is better informed than the ``followers''. We obtain lower and upper bounds on the efficiency of EDBR networks and show that EDBR produces networks that are significantly more efficient than those under the conventional Nash equilibrium, pairwise stability, or strong stability concepts.
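The Relative Neighborhood Graph mentioned above has a direct definition: two points p and q are adjacent iff no third point r is strictly closer to both of them than they are to each other. A minimal brute-force sketch (the function name is an illustrative choice, and this O(n³) check is not the authors' construction):

```python
from itertools import combinations
from math import dist

def relative_neighborhood_graph(points):
    """Return RNG edges as index pairs: (i, j) is an edge iff no third
    point k is strictly closer to both points than they are to each other."""
    edges = []
    for i, j in combinations(range(len(points)), 2):
        d_ij = dist(points[i], points[j])
        blocked = any(
            max(dist(points[i], points[k]), dist(points[j], points[k])) < d_ij
            for k in range(len(points)) if k not in (i, j)
        )
        if not blocked:
            edges.append((i, j))
    return edges
```

For three points where one lies between two distant ones, the long edge is pruned, which is the sparsifying effect exploited in topology control.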
**Exploiting demand structure for efficiently serving**
**arcs and edges in networks**
**René van Bevern, rvb@nsu.ru**
*Novosibirsk State University, Novosibirsk, Russian Federation*
We study the problem of finding minimum-length routes for a fleet of vehicles of equal capacity that are initially located in a vehicle depot and have to serve the demands of all clients located in a transportation network. It is canonical to model the transportation network as a graph whose edges are weighted by distances. Yet there is some freedom in modeling the clients: in the Capacitated Vehicle Routing Problem (CVRP), the clients are a subset of the vertices of the graph, whereas in the Capacitated Arc Routing Problem (CARP), the clients are a subset of the arcs or edges of the graph. CARP models tasks where the roads of the network themselves are the clients or all clients along a road have to be served (e.g., in street sweeping, household waste collection, or meter reading).
Both CVRP and CARP are NP-hard and can easily be translated into each other. Due to this perceived equivalence, research has mostly concentrated on the older CVRP. Nevertheless, solving route planning tasks using CARP rather than CVRP, when applicable, is rewarding: based on a recent survey and recent work [1, 2] on the parameterized and approximation complexity of arc routing problems, this talk shows what makes CARP significantly easier than CVRP and presents recent theoretical and experimental results on fixed-parameter algorithms for efficiently solving NP-hard variants of CARP by exploiting the structure of the subgraph that is induced by the client arcs or edges.
References:
1. René van Bevern, Rolf Niedermeier, Manuel Sorge, and Mathias Weller. Complexity
of arc routing problems. In *Arc Routing: Problems, Methods, and Applications*,
MOS-SIAM Series on Optimization. SIAM, 2014.
2. René van Bevern, Christian Komusiewicz, and Manuel Sorge. Approximation algorithms for mixed, windy, and capacitated arc routing problems. In *Proceedings of the 15th Workshop on Algorithmic Approaches for Transportation Modeling, Optimization, and Systems (ATMOS’15)*, volume 48 of *OpenAccess Series in Informatics (OASIcs)*. Schloss Dagstuhl–Leibniz-Zentrum für Informatik, 2015.
**Overlapping community detection in social networks with partly missing node attributes**
**Vladislav Chesnokov, v.o.chesnokov@yandex.ru**
*Bauman Moscow State Technical University, Moscow, Russian Federation*
Community detection is widely used in data mining, biology, sociology, telecommunications, transport, and other fields. For example, in social networks nodes can represent people, while edges represent relationships between them: friendship, family ties, working relations, etc., so communities are groups of people having something in common. One of the main features of social graphs is that nodes have many attributes associated with them, such as age, gender, favorite movie, workplace, and hometown. Only a few community detection algorithms use this wealth of information. Moreover, the full information about a person's attributes can rarely be obtained, due to restrictions such as privacy issues or data loss.
A fast algorithm for overlapping community detection is proposed. It uses information about node attributes, requires neither training nor knowledge about their nature and is tolerant to their partial absence.
There are three assumptions behind the proposed algorithm. The first states that social graphs are formed by the affiliation model [1]: simply speaking, communities define the graph structure and not vice versa. Moreover, each community has an associated attribute, and each vertex in the community has this attribute. The second assumption is the triadic structure of social networks [2]: if there is a relation between persons A and B and between B and C, then, with high probability, there is the same relation between A and C. The last hypothesis is related to the first one and states that nodes which are close to each other have similar attribute sets [3].
The developed algorithm is based on attribute transfer and consists of five steps. Each node is initialized with an empty set of attributes, called key attributes further on. The first step is iterative: at each iteration, all nodes of the graph are visited and their key attribute sets are updated by the following rule: if a qualified majority of a vertex's neighbours share a common attribute, then it is added to the key attribute set of this vertex. The step finishes when there are no updates in the last iteration. Due to the small-world phenomenon, the number of iterations is small: in experiments it was less than 10 for most graphs. The same procedure with a weakened addition rule is performed at the second step; nodes which belong to several communities or lie on the fringe are thereby attached to existing communities. At the third step, nodes are joined into communities by attributes, and these are split into connected components. The fourth step is merging: components with the same node set are joined into a single community, so each community has a set of attributes by which it was possibly formed. The last step is a slightly modified procedure of the first and third steps applied to the subgraph of vertices with an empty key attribute set; vertices in this subgraph either belong to some community formed by an unspecified attribute or do not belong to any community. Communities from this step are added to the set obtained at step four. Summing up, the algorithm has near-linear runtime: each iteration of the first and second steps is linear in the graph size because each edge is visited at most twice; the third step is finding connected components, which can be done in linear time; the fourth step also runs in linear time using hashes. Moreover, the algorithm can easily be parallelized for large graphs.
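The first, iterative step can be sketched as follows. The function name, the 2/3 qualified-majority threshold, and the data representation are assumptions made for illustration, not the authors' implementation:

```python
from collections import Counter

def propagate_key_attributes(adj, attrs, threshold=2/3):
    """Iteratively add an attribute to a node's key set when a qualified
    majority of its neighbours carries that attribute, either as an
    original attribute or as an already-propagated key attribute.

    adj: dict node -> set of neighbour nodes
    attrs: dict node -> set of original attributes
    """
    key = {v: set() for v in adj}
    changed = True
    while changed:  # stop when a full pass makes no updates
        changed = False
        for v in adj:
            neigh = adj[v]
            if not neigh:
                continue
            # count each attribute once per neighbour (original or key)
            counts = Counter(a for u in neigh for a in (attrs[u] | key[u]))
            for a, c in counts.items():
                if c >= threshold * len(neigh) and a not in key[v]:
                    key[v].add(a)
                    changed = True
    return key
```

On a triangle where two nodes carry attribute 'x', the attribute first spreads to the third node and then back-fills every key set, mirroring the fixed-point behaviour described above.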
The developed algorithm was tested on Facebook and Twitter datasets with ground-truth hand-labeled communities from SNAP [4]. It was compared against five algorithms: modularity maximization, Infomap, AGM-fit, BigCLAM, and CESNA. Tolerance to partial node attribute absence was also tested. Experiments showed that the developed method outperforms all competitors by F1-score and Jaccard index by more than 10%. The algorithm still gives high scores when up to 80% of node attribute values are deleted from the graph.
References:
1. Yang, J., Leskovec, J.: Community-affiliation graph model for overlapping network community detection. In: 12th IEEE International Conference on Data Mining (ICDM 2012), Brussels, Belgium, December 10-13, 2012, pp. 1170-1175.
2. Granovetter, M.: The strength of weak ties. American Journal of Sociology 78(6), 1973.
3. Dougnon, R.Y., Fournier-Viger, P., Nkambou, R.: Inferring user profiles in online social networks using a partial social graph. In: Advances in Artificial Intelligence: 28th Canadian Conference on Artificial Intelligence (Canadian AI 2015), Halifax, Nova Scotia, Canada, June 2-5, 2015, pp. 84-99. Springer International Publishing, Cham, 2015.
4. Leskovec, J., Krevl, A.: SNAP Datasets: Stanford large network dataset collection. http://snap.stanford.edu/data.
**Machine learning application to human brain network studies**
**Yulia Dodonova, Leonid Zhukov**
**ya.dodonova@mail.ru, leonid.e.zhukov@gmail.com**
*National Research University Higher School of Economics, Moscow, Russian Federation*
*Institute for Information Transmission Problems of the Russian Academy of Sciences, Moscow, Russian Federation*
Network science is becoming a popular instrument for neuroscience research. At the macroscopic scale, aggregated neural pathways can be modeled by graphs called connectomes. These graphs are usually small and contain about 100 vertices labelled according to brain regions; a set of uniquely labeled vertices is the same across different connectomes. The edges are weighted and represent either structural connections (the number of fiber tracts) or functional relations (co-activation) between brain regions. The networks are spatially embedded: vertices are localized in 3D space, and edges have physical lengths. These latter characteristics are rarely incorporated in analysis, although it is sometimes acknowledged that differences in physical size pose additional requirements on normalization of individual connectomes.
Network characteristics of the brain are expected to provide insights into associations between brain structure and particular phenotypes. More practical questions are whether connectomes can be useful in discriminating between normal and pathological brain structure (a classification task) and in predicting the onset of disease or treatment outcomes (which can be considered a regression task). However, a typical study is based on a sample of only about a hundred connectomes, and the above questions are mostly addressed in terms of statistical significance of group differences in network characteristics.
In this paper, we use machine learning algorithms to predict phenotypes from brain structure. We use graph kernel approach and run SVM classifier on precomputed matrices of pairwise distances between connectomes. We also explore whether normalized Laplacian spectra of connectomes can be useful for our classification tasks. We compare our results against the performance of classifiers that use vectors of features, such as “bag-of-edges” (vectorized weighted adjacency matrices), weighted degrees, and a set of global network metrics. Finally, we discuss how information on physical properties of the networks can be incorporated in analysis, in a form of both normalization and construction of the appropriate null networks.
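One of the feature sets mentioned above, the normalized Laplacian spectrum of a weighted connectome, is easy to compute once the adjacency matrix is available. A minimal NumPy sketch (the function name and the handling of isolated vertices are assumptions, not the authors' code):

```python
import numpy as np

def normalized_laplacian_spectrum(W):
    """Sorted eigenvalues of L = I - D^{-1/2} W D^{-1/2} for a symmetric
    weighted adjacency matrix W; eigenvalues always lie in [0, 2].
    Isolated vertices are given degree 1 to avoid division by zero."""
    W = np.asarray(W, float)
    d = W.sum(axis=1)
    d[d == 0] = 1.0
    inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(W)) - inv_sqrt @ W @ inv_sqrt
    return np.sort(np.linalg.eigvalsh(L))
```

Because the spectrum is invariant to vertex permutations and has fixed length for same-size graphs, it can be fed directly to a standard classifier as a feature vector.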
**Communicability functions and geometrical properties of networks**
**Ernesto Estrada, ernesto.estrada@strath.ac.uk**
*University of Strathclyde, Glasgow, United Kingdom*
Recent interest in geometric properties of networks has triggered research on embedding graphs in certain geometric spaces. Our approach follows a different route, as it finds the geometry induced by functions of the adjacency matrix of a network, known as communicability functions. In this talk I will motivate the necessity of characterizing the spatial efficiency of networks. Then, I will show how matrix functions defining the communicability among nodes in a network induce an embedding into high-dimensional Euclidean spaces, which allows us to characterize the spatial efficiency of graphs. We will define a communicability distance function and the corresponding Euclidean angles between the position vectors of the nodes in the induced embedding. I will show examples of applications of these concepts to characterizing the spatial efficiency of networks, as well as some dynamical processes taking place on them.
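A commonly used communicability function is the matrix exponential of the adjacency matrix, G = exp(A), with the induced distance ξ_pq = sqrt(G_pp + G_qq − 2 G_pq). A minimal NumPy sketch, computing exp(A) via the eigendecomposition of the symmetric adjacency matrix (function names are illustrative assumptions):

```python
import numpy as np

def communicability_matrix(A):
    """G = exp(A) for a symmetric adjacency matrix, via eigendecomposition."""
    lam, V = np.linalg.eigh(np.asarray(A, float))
    return V @ np.diag(np.exp(lam)) @ V.T

def communicability_distance(A):
    """Pairwise communicability distances xi_pq = sqrt(G_pp + G_qq - 2 G_pq),
    i.e. Euclidean distances in the embedding induced by G = exp(A)."""
    G = communicability_matrix(A)
    g = np.diag(G)
    # clip tiny negative values caused by floating-point round-off
    return np.sqrt(np.maximum(g[:, None] + g[None, :] - 2 * G, 0.0))
```

For a single edge, G has cosh(1) on the diagonal and sinh(1) off it, so the distance between the two endpoints is sqrt(2/e).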
**One Class of Smooth Modifications of Space-Filling Curves**
**Alexey Goryachih, goryachihalexeysergeevich@gmail.com**
*Lobachevsky State University, Nizhny Novgorod, Russian Federation*
This work presents a class of smooth modifications of space-filling curves. These modifications make approximations of the Hilbert curve differentiable at all points and preserve the differentiability of the optimized function. Some results of applying the modifications, and a comparison with the unmodified curve, are presented.
**Generalized Huffman Trees and Wiener Index for Graphs with Weighted Vertices**
**Mikhail Goubko, mgoubko@mail.ru**
*Trapeznikov Institute of Control Sciences of Russian Academy of Sciences, Moscow, Russian Federation*
For a simple undirected connected graph G, the Wiener index WI(G) is calculated as half the sum of the elements of its distance matrix D(G). WI(G) is one of the most renowned graph indices, widely used in mathematical chemistry, quantitative structure-property relation (QSPR) studies, and network analysis. In 1997 Klavžar and Gutman defined the Wiener index for vertex-weighted graphs as the quadratic form VWWI(G, μ) = (1/2) μ′ D(G) μ, where μ is the vector of positive vertex weights. VWWI generalizes WI and is closely connected to spectral properties of the graph.
We minimize VWWI over the set of trees with the given sequences of vertex weights and degrees and show the optimal tree to be the so-called generalized Huffman tree, built in a bottom-up manner by sequentially connecting vertices of the least weight to vertices of the least degree. Chemical trees are constructed which minimize the Wiener index over all chemical trees with given vertex weights. It is also conjectured that the tree with given vertex weight and degree sequences that maximizes VWWI is a "reversed" Huffman tree, built by sequentially connecting the heaviest vertices to vertices with the largest degree.
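The quadratic form above is straightforward to evaluate once the distance matrix is known. A minimal sketch with illustrative function names, using a Floyd-Warshall computation of D(G) (not the authors' method):

```python
import numpy as np

def distance_matrix(n, edges):
    """All-pairs shortest-path distances of an unweighted graph
    on n vertices, via Floyd-Warshall."""
    D = np.full((n, n), np.inf)
    np.fill_diagonal(D, 0.0)
    for u, v in edges:
        D[u, v] = D[v, u] = 1.0
    for k in range(n):
        D = np.minimum(D, D[:, [k]] + D[[k], :])
    return D

def vwwi(D, mu):
    """Vertex-weighted Wiener index: 0.5 * mu' D mu."""
    mu = np.asarray(mu, float)
    return 0.5 * mu @ np.asarray(D, float) @ mu
```

With unit weights mu = (1, ..., 1), vwwi reduces to the classical Wiener index: for the path on three vertices the pairwise distances 1, 1, 2 sum to 4.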
Several applications are discussed. Firstly, we find isomers of simple alcohols with extremal normal boiling point by minimizing the linear combination of VWWI and vertex-degree-based graph indices. Secondly, we suggest a lower bound of the routing (communication) tree cost.
**Bayesian Preference Learning for Residential Demand Response in Power Grids**
**Mikhail Goubko, Dmitry Ignatov, Alexey Neznanov, mgoubko@mail.ru**
*Trapeznikov Institute of Control Sciences of Russian Academy of Sciences, Moscow, Russian Federation*
*Skoltech Center for Energy Systems, Skolkovo Institute of Science and Technology, Skolkovo, Russian Federation*
In the coming years residential consumers will face real-time electricity tariffs with energy prices varying from day to day. These are assumed to provide stimuli for demand response (consumption peak shaving through load shift or curtailment). Nevertheless, efficient electricity saving under real-time price schedules requires enormous effort and advanced automation (a recommender system or an automatic control system). To perform well, such systems have to learn the consumer's preferences from her actions. Under the assumption of rational (economic) behavior, a consumer chooses a scenario of home appliance use to balance the energy bill and the consumer's comfort level. We propose a Bayesian learning algorithm to estimate the comfort level function of a residential appliance user from the history of her actions in different situations. In numeric experiments with datasets generated from a simulation model of a rational consumer interacting with small home appliances, our algorithm outperforms popular contemporary regression analysis tools, giving more accurate predictions of the consumer's actions. This approach can be extended and used to control an air heating and conditioning system, which is responsible for up to half of a household's energy bill.
**Improving Spectral Partitions for Optimal Controlled Islanding of Power Grid**
**Mikhail Goubko, Vassily Ginz, mgoubko@mail.ru**
*Trapeznikov Institute of Control Sciences of Russian Academy of Sciences, Moscow, Russian Federation*
*Skoltech Center for Energy Systems, Skolkovo Institute of Science and Technology, Skolkovo, Russian Federation*
Controlled islanding is a part of the frequency stabilization strategy in power grids. It implies partitioning a grid into independent islands in case of massive failures to minimize the accident aftermath, prevent cascading blackouts, and keep critical infrastructure alive. To achieve these goals, slow coherence of generators must be avoided in the islands formed, and line switch-off distortion and load shedding must be minimized. Existing approaches account for just one or two of these factors. We propose a two-step islanding algorithm that balances multiple criteria: slow coherence of generators, minimal line switch-off distortion, island size, and minimum load shedding. The mathematical challenge is the high dimension (up to 10^4 nodes in a grid) and the dependence of the cost function on power flow directions (so spectral clustering is inapplicable).
At the first step, several hierarchical spectral clustering schemes are used to cut the problem dimension (caring for coherence and distortion only); at the second step, CPLEX tools for mixed-integer quadratic programming are employed to choose a balanced partition of the aggregated grid that minimizes a combination of coherence, distortion, and load shed (approximated by power imbalance). The problem dimension at the second step depends only on the desired number of islands K, not on the dimension of the initial grid. A greedy heuristic generates a starting solution for the branch-and-bound scheme, ensuring high performance of the second step. The algorithm was compared with the basic one-step hierarchical clustering scheme on example grids with 118, 2383, and 9241 nodes, showing good performance (10% average improvement of the cost function) with a modest increase in computation time (<1 second on average). A constrained AC OPF solution was used to estimate load shed for benchmarking.
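The spectral schemes used at the first step build on Laplacian eigenvectors. As a minimal illustration of the underlying idea, the following sketch bisects a graph by the sign pattern of the Fiedler vector (the function name is an assumption; the authors' hierarchical schemes and multi-criteria cost are not reproduced here):

```python
import numpy as np

def spectral_bisection(W):
    """Split a graph into two groups by the sign of the Fiedler vector:
    the eigenvector of the second-smallest eigenvalue of L = D - W."""
    W = np.asarray(W, float)
    L = np.diag(W.sum(axis=1)) - W
    lam, V = np.linalg.eigh(L)
    fiedler = V[:, 1]
    return fiedler >= 0  # boolean group membership per node
```

On two triangles joined by a single bridge edge, the sign split recovers the two triangles, i.e. it cuts the weakest connection.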
**Computing the Width of Simplices with Bounded Minors of the Constraint Matrix**
**D. V. Gribanov, A. Y. Chirkov, dimitry.gribanov@gmail.com**
*National Research University Higher School of Economics, Nizhny Novgorod, Russian Federation*
We show that the width of simplices defined by systems of linear inequalities can be computed in polynomial time if some minors of their constraint matrices are bounded. Additionally, we present quasi-polynomial-time and polynomial-time algorithms for the integer linear optimization problem defined on simplices with all their integer vertices removed, assuming that some minors of the constraint matrices of the simplices are bounded.
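For intuition, the lattice width of a polytope is the minimum, over nonzero integer directions c, of the spread of c·x over the polytope. A brute-force sketch for small dimensions (the vertex representation and the search bound are illustrative assumptions; the polynomial-time algorithms of the talk work differently):

```python
from itertools import product
import numpy as np

def lattice_width(vertices, bound=3):
    """Brute-force lattice width of a polytope given by its vertices:
    min over nonzero integer directions c with |c_i| <= bound of
    max_v c.v - min_v c.v. Exponential in dimension; illustration only."""
    V = np.asarray(vertices, float)
    best = np.inf
    for c in product(range(-bound, bound + 1), repeat=V.shape[1]):
        if not any(c):
            continue  # skip the zero direction
        proj = V @ np.array(c, float)
        best = min(best, proj.max() - proj.min())
    return best
```

For the standard simplex in the plane, every nonzero integer direction spreads the vertices by at least 1, so the width is 1.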
**Towards a lattice-based algorithm for consensus clustering**
**Dmitry I. Ignatov, dignatov@hse.ru**
*National Research University Higher School of Economics, Moscow, Russian Federation*
We propose a new lattice-based algorithm for consensus clustering. As input, the algorithm takes n partitions of a certain set of objects obtained from n different runs of the k-means algorithm. The resulting consensus partition is extracted from a (partial) antichain of the concept lattice built on the formal context objects × classes, where the classes are the set of all cluster labels from each initial k-means partition. We compare the results of the proposed algorithm in terms of the Adjusted Rand Index (ARI) with state-of-the-art algorithms on synthetic datasets; in several cases our approach demonstrates the best values of the measure.
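As a minimal illustration of the input encoding described above, the following sketch builds the objects × classes formal context from several k-means labelings. The function name and the binary-matrix representation are assumptions for illustration, not the authors' implementation, and the concept-lattice construction itself is omitted:

```python
import numpy as np

def formal_context(labelings):
    """Binary objects-by-classes context: one column per (run, cluster
    label) pair; entry is 1 iff the object received that label in that run.

    labelings: list of n label lists, one per k-means run,
               each of length = number of objects.
    """
    cols, names = [], []
    for run, labels in enumerate(labelings):
        for c in sorted(set(labels)):
            cols.append([1 if l == c else 0 for l in labels])
            names.append((run, c))
    return np.array(cols).T, names
```

Each row is an object's attribute vector over all cluster memberships; formal concepts of this context group objects that co-occur in clusters across runs, which is the raw material for the antichain extraction.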